Component Specifications

Learn about OpenFunction component specifications.

1 - Function Specifications

Learn about Function Specifications.

This document describes the specifications of the Function CRD.

Function.spec

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| image | string | Image upload path, e.g. demorepo/demofunction:v1 | true |
| build | object | Builder specification for the function | false |
| imageCredentials | object | Credentials for accessing the image repository, refer to v1.LocalObjectReference | false |
| serving | object | Serving specification for the function | false |
| version | string | Function version, e.g. v1.0.0 | false |
| workloadRuntime | string | WorkloadRuntime for the Function. Known values: OCIContainer and WasmEdge. Default: OCIContainer | false |
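Taken together, a minimal Function manifest using these top-level fields might look like the sketch below. The apiVersion and all concrete values are illustrative assumptions; check the CRD version installed in your cluster.

```yaml
apiVersion: core.openfunction.io/v1beta2   # assumed API group/version
kind: Function
metadata:
  name: demo-function
spec:
  version: "v1.0.0"
  image: demorepo/demofunction:v1          # image upload path
  imageCredentials:
    name: push-secret                      # v1.LocalObjectReference to a Secret
  workloadRuntime: OCIContainer            # default; WasmEdge is the other known value
  build: {}                                # see Function.spec.build
  serving: {}                              # see Function.spec.serving
```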

Function.spec.build

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| srcRepo | object | The configuration of the source code repository | true |
| builder | string | Name of the Builder | false |
| builderCredentials | object | Credentials for accessing the image repository, refer to v1.LocalObjectReference | false |
| builderMaxAge | string | The maximum time to retain finished builders | false |
| dockerfile | string | Path to the Dockerfile Shipwright uses when building images from a Dockerfile | false |
| env | map[string]string | Environment variables passed to the buildpacks builder | false |
| failedBuildsHistoryLimit | integer | The number of failed builders to retain. Default is 1. | false |
| shipwright | object | Specification of the Shipwright engine | false |
| successfulBuildsHistoryLimit | integer | The number of successfully finished builders to retain. Default is 0. | false |
| timeout | string | The maximum time for the builder to build the image | false |
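As a sketch, the build section could be filled in like this; the builder name, repository URL, and limits are hypothetical:

```yaml
spec:
  build:
    builder: openfunction/builder-go:latest   # hypothetical Builder name
    env:
      FUNC_NAME: HelloWorld                   # passed to the buildpacks builder
    srcRepo:
      url: https://github.com/example/functions-samples.git
      sourceSubPath: functions/function-a/
      revision: main
    timeout: 10m
    successfulBuildsHistoryLimit: 0
    failedBuildsHistoryLimit: 1
```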

Function.spec.build.srcRepo

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| bundleContainer | object | BundleContainer describes the source code bundle container to pull | false |
| credentials | object | Repository access credentials, refer to v1.LocalObjectReference | false |
| revision | string | A referenceable instance in the repository, such as a commit ID or branch name | false |
| sourceSubPath | string | The directory of the function in the repository, e.g. functions/function-a/ | false |
| url | string | Source code repository address | false |

Function.spec.build.srcRepo.bundleContainer

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| image | string | The bundle container image name | true |

Function.spec.build.shipwright

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| params | []object | Parameters for the build strategy | false |
| strategy | object | Strategy references the BuildStrategy used to build the image | false |
| timeout | string | The maximum amount of time the Shipwright Build should take to execute | false |

Function.spec.serving

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| annotations | map[string]string | Annotations added to the workload | false |
| bindings | map[string]object | Dapr bindings that the function needs to create and use | false |
| hooks | object | Hooks executed before or after the function execution | false |
| labels | map[string]string | Labels added to the workload | false |
| outputs | []object | The outputs to which the function sends data | false |
| params | map[string]string | Parameters required by the function, passed to it as environment variables | false |
| pubsub | map[string]object | Dapr pubsub that the function needs to create and use | false |
| scaleOptions | object | Configuration of autoscaling | false |
| states | map[string]object | Dapr state stores that the function needs to create and use | false |
| template | object | A pod template that allows modifying the operator-generated pod template | false |
| timeout | string | The maximum amount of time the Serving should take to execute before it is running | false |
| tracing | object | Configuration of tracing | false |
| triggers | object | Triggers used to trigger the function. Refer to Function Trigger. | true |
| workloadType | string | The type of workload used to run the function. Known values: Deployment, StatefulSet, and Job | false |
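A minimal serving section combining a few of these fields might look like the following sketch; all concrete values are illustrative:

```yaml
spec:
  serving:
    workloadType: Deployment
    params:
      GREETING: hello              # exposed to the function as an environment variable
    annotations:
      example.com/owner: team-a    # added to the generated workload
    triggers:
      http:
        port: 8080
```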

Function.spec.serving.hooks

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| policy | string | There are two kinds of hooks: global hooks, defined in the config file of the OpenFunction Controller, and private hooks, defined in the Function. Policy defines the relationship between the global hooks and the private hooks of the function. Known values: Append (all hooks are executed; the private pre hooks run after the global pre hooks, and the private post hooks run before the global post hooks; this is the default policy) and Override (only the private hooks are executed). | false |
| post | []string | Hooks executed after the function execution | false |
| pre | []string | Hooks executed before the function execution | false |
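For example, assuming hypothetical hook names plugin-a and plugin-b, an Append policy could be declared as:

```yaml
spec:
  serving:
    hooks:
      policy: Append   # default; private pre hooks run after global pre hooks
      pre:
        - plugin-a     # hypothetical hook name
      post:
        - plugin-b     # hypothetical hook name
```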

Function.spec.serving.outputs[index]

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| dapr | object | Dapr output; refers to an existing component or a component defined in bindings or pubsub | false |

Function.spec.serving.outputs[index].dapr

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string | The name of the Dapr component | true |
| metadata | map[string]string | Metadata passed to Dapr | false |
| operation | string | Tells the Dapr component which operation it should perform, refer to the Dapr docs | false |
| topic | string | When the type is pubsub, you need to set the topic | false |
| type | string | Type of the Dapr component, such as bindings.kafka or pubsub.rocketmq | false |
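A Dapr output entry might be declared as follows; the component name, type, and metadata values are illustrative:

```yaml
spec:
  serving:
    outputs:
      - dapr:
          name: kafka-out          # an existing component or one defined in bindings/pubsub
          type: bindings.kafka
          operation: create        # the operation the Dapr component should perform
          metadata:
            key: demo
```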

Function.spec.serving.scaleOptions

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| keda | object | Configuration of KEDA autoscaling | false |
| knative | map[string]string | Knative autoscaling annotations. Refer to Knative autoscaling. | false |
| maxReplicas | integer | Maximum number of replicas to scale the resource up to | false |
| minReplicas | integer | Minimum number of replicas to scale the resource down to. By default, it scales to 0. | false |
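For instance, replica bounds and a Knative autoscaling annotation could be combined like this; the annotation value is illustrative:

```yaml
spec:
  serving:
    scaleOptions:
      minReplicas: 0               # scale to zero when idle
      maxReplicas: 10
      knative:
        autoscaling.knative.dev/target: "100"   # Knative autoscaling annotation
```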

Function.spec.serving.scaleOptions.keda

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| scaledJob | object | Scale options for Job | false |
| scaledObject | object | Scale options for Deployment and StatefulSet | false |
| triggers | []object | Event sources that trigger dynamic scaling of workloads. Refer to kedav1alpha1.ScaleTriggers. | false |

Function.spec.serving.scaleOptions.keda.scaledJob

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| failedJobsHistoryLimit | integer | How many failed jobs should be kept. It defaults to 100. | false |
| pollingInterval | integer | The interval, in seconds, at which KEDA checks the triggers for the queue length or stream lag. It defaults to 30 seconds. | false |
| restartPolicy | string | Restart policy for all containers within the pod. Value options: OnFailure, Never. It defaults to Never. | false |
| scalingStrategy | object | The scaling strategy. Value options: default, custom, accurate. The default value is default. Refer to kedav1alpha1.ScalingStrategy. | false |
| successfulJobsHistoryLimit | integer | How many completed jobs should be kept. It defaults to 100. | false |

Function.spec.serving.scaleOptions.keda.scaledObject

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| advanced | object | Specifies whether the target resource (for example, a Deployment or StatefulSet) should be scaled back to the original replica count after the ScaledObject is deleted. The default behavior is to keep the replica count at the value it had at the moment of ScaledObject deletion. Refer to kedav1alpha1.AdvancedConfig. | false |
| cooldownPeriod | integer | The period, in seconds, to wait after the last trigger activated before scaling back down to 0. It defaults to 300 seconds. | false |
| pollingInterval | integer | The interval, in seconds, at which KEDA checks the triggers for the queue length or stream lag. It defaults to 30 seconds. | false |
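A KEDA-based configuration for a Deployment workload might look like the sketch below. The trigger and its metadata are illustrative; a real kafka trigger needs additional metadata, such as the broker list:

```yaml
spec:
  serving:
    scaleOptions:
      keda:
        scaledObject:
          pollingInterval: 15      # seconds between trigger checks
          cooldownPeriod: 60       # seconds to wait before scaling back to 0
        triggers:
          - type: kafka            # a kedav1alpha1.ScaleTriggers entry
            metadata:
              topic: demo
              lagThreshold: "20"
```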

Function.spec.serving.states[key]

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| spec | object | Dapr state store component spec. Refer to the Dapr docs. | false |

Function.spec.serving.tracing

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| baggage | map[string]string | Baggage is contextual information that is passed between spans. It is a key-value store that resides alongside the span context in a trace, making values available to any span created within that trace. | true |
| enabled | boolean | Whether to enable tracing | true |
| provider | object | The tracing implementation used to create and send spans | true |
| tags | map[string]string | Tags to add to the spans | false |

Function.spec.serving.tracing.provider

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string | Tracing provider name. Known values: skywalking, opentelemetry | true |
| exporter | object | The service that collects spans for opentelemetry | false |
| oapServer | string | The SkyWalking server URL | false |

Function.spec.serving.tracing.provider.exporter

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| endpoint | string | The exporter URL | true |
| name | string | The exporter name. Known values: otlp, jaeger, zipkin | true |
| compression | string | The compression type to use on OTLP trace requests. Options include gzip. By default, no compression is used. | false |
| headers | string | Key-value pairs, separated by commas, to pass as request headers on OTLP trace requests | false |
| protocol | string | The transport protocol to use on OTLP trace requests. Options include grpc and http/protobuf. Default is grpc. | false |
| timeout | string | The maximum waiting time, in milliseconds, allowed to send each OTLP trace batch. Default is 10000. | false |
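Putting the tracing fields together, an OpenTelemetry configuration could be sketched as follows; the collector endpoint is hypothetical:

```yaml
spec:
  serving:
    tracing:
      enabled: true
      provider:
        name: opentelemetry
        exporter:
          name: otlp
          endpoint: http://otel-collector:4317   # hypothetical collector address
          protocol: grpc                         # default
      tags:
        env: demo
```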

Function.spec.serving.triggers

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| dapr | []object | List of Dapr triggers; refer to Dapr bindings or pubsub components | false |
| http | object | The HTTP trigger | false |
| inputs | []object | A list of components the function can get data from | false |

Function.spec.serving.triggers.dapr[index]

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string | The Dapr component name | true |
| topic | string | When the component type is pubsub, you need to set the topic | false |
| type | string | Type of the Dapr component, such as bindings.kafka or pubsub.rocketmq | false |

Function.spec.serving.triggers.http

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| port | integer | The port the function listens on, e.g. 8080 | false |
| route | object | Route defines how traffic from the Gateway listener is routed to a function | false |

Function.spec.serving.triggers.http.route

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| gatewayRef | object | GatewayRef references the Gateway resource that a Route wants to attach to | false |
| hostnames | []string | A set of hostnames matched against the HTTP Host header to select an HTTPRoute to process the request | false |
| rules | []object | A list of HTTP matchers, filters, and actions. Refer to HTTPRouteRule. | false |

Function.spec.serving.triggers.http.route.gatewayRef

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string | The name of the gateway | true |
| namespace | string | The namespace of the gateway | true |
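An HTTP trigger routed through a Gateway might be declared like this; the gateway name, namespace, and hostname are hypothetical:

```yaml
spec:
  serving:
    triggers:
      http:
        port: 8080
        route:
          gatewayRef:
            name: openfunction          # hypothetical Gateway name
            namespace: openfunction
          hostnames:
            - demo.example.com
```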

Function.spec.serving.triggers.inputs[index]

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| dapr | object | A Dapr component the function can get data from. Currently, only Dapr state stores are supported. | true |

Function.spec.serving.triggers.inputs[index].dapr

| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| name | string | The Dapr component name; either an existing component or a component defined in states | true |
| type | string | The Dapr component type, such as state.redis | false |

2 - EventSource Specifications

Learn about EventSource Specifications.

2.1 - EventSource Specifications

EventSource Specifications.

This document describes the specifications of the EventSource CRD.

EventSource

| Field | Description |
| ----- | ----------- |
| apiVersion string | events.openfunction.io/v1alpha1 |
| kind string | EventSource |
| metadata v1.ObjectMeta | (Optional) Refer to v1.ObjectMeta |
| spec EventSourceSpec | Refer to EventSourceSpec |
| status EventSourceStatus | Status of EventSource |

EventSourceSpec

Belongs to EventSource.

| Field | Description |
| ----- | ----------- |
| eventBus string | (Optional) Name of the EventBus resource associated with the event source. |
| redis map[string]RedisSpec | (Optional) The definition of a Redis event source, with the key being the event name. Refer to RedisSpec. |
| kafka map[string]KafkaSpec | (Optional) The definition of a Kafka event source, with the key being the event name. Refer to KafkaSpec. |
| cron map[string]CronSpec | (Optional) The definition of a Cron event source, with the key being the event name. Refer to CronSpec. |
| sink SinkSpec | (Optional) Definition of the Sink (an addressable access resource, i.e. a synchronization request) associated with the event source. Refer to SinkSpec. |
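Combining these fields, a Kafka-backed EventSource might be sketched as follows; the event name, broker address, and sink target are illustrative:

```yaml
apiVersion: events.openfunction.io/v1alpha1
kind: EventSource
metadata:
  name: my-eventsource
spec:
  eventBus: default                 # name of an EventBus resource
  kafka:
    sample-one:                     # the event name is the map key
      brokers: kafka-server:9092
      topic: events-sample
      authRequired: false
  sink:
    ref:
      kind: Service
      name: function-ksvc
      apiVersion: serving.knative.dev/v1
```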

SinkSpec

Belongs to EventSourceSpec.

| Field | Description |
| ----- | ----------- |
| ref Reference | Refer to Reference. |

Reference

Belongs to SinkSpec.

| Field | Description |
| ----- | ----------- |
| kind string | The type of the referenced resource. It defaults to Service. |
| namespace string | The namespace of the referenced resource, by default the same as the namespace of the Trigger. |
| name string | Name of the referenced resource, for example, function-ksvc. |
| apiVersion string | The apiVersion of the referenced resource. It defaults to serving.knative.dev/v1. |

GenericScaleOption

Belongs to scaleOption.

| Field | Description |
| ----- | ----------- |
| pollingInterval int | The interval to check each trigger on. It defaults to 30 seconds. |
| cooldownPeriod int | The period to wait after the last trigger reported active before scaling the resource back to 0. It defaults to 300 seconds. |
| minReplicaCount int | Minimum number of replicas KEDA will scale the resource down to. It defaults to 0. |
| maxReplicaCount int | This setting is passed to the HPA definition that KEDA will create for the given resource. |
| advanced kedav1alpha1.AdvancedConfig | See the KEDA documentation. |
| metadata map[string]string | The KEDA trigger's metadata. |
| authRef kedav1alpha1.ScaledObjectAuthRef | Parameters defined in a TriggerAuthentication do not need to be repeated in the trigger metadata of your ScaledObject. To reference a TriggerAuthentication from a ScaledObject, add the authenticationRef to the trigger. Refer to the KEDA documentation. |

2.2 - Redis

Event source specifications of Redis.

RedisSpec

Belongs to EventSourceSpec.

| Field | Description |
| ----- | ----------- |
| redisHost string | Address of the Redis server, e.g. localhost:6379. |
| redisPassword string | Password for the Redis server, e.g. 123456. |
| enableTLS bool | (Optional) Whether to enable TLS access, which defaults to false. Value options: true, false. |
| failover bool | (Optional) Whether to enable the failover feature. Requires sentinelMasterName to be set. It defaults to false. Value options: true, false. |
| sentinelMasterName string | (Optional) The name of the sentinel master. Refer to the Redis Sentinel documentation. |
| redeliverInterval string | (Optional) The redelivery interval. It defaults to 60s. 0 means the redelivery mechanism is disabled. E.g. 30s |
| processingTimeout string | (Optional) Message processing timeout. It defaults to 15s. 0 means the timeout is disabled. E.g. 30s |
| redisType string | (Optional) The type of Redis. Value options: node for single-node mode, cluster for cluster mode. It defaults to node. |
| redisDB int64 | (Optional) The database index to connect to. Effective only if redisType is node. It defaults to 0. |
| redisMaxRetries int64 | (Optional) Maximum number of retries. It defaults to no retries. E.g. 5 |
| redisMinRetryInterval string | (Optional) Minimum backoff time for retries. The default value is 8ms. -1 disables the backoff time. E.g. 10ms |
| redisMaxRetryInterval string | (Optional) Maximum backoff time for retries. The default value is 512ms. -1 disables the backoff time. E.g. 5s |
| dialTimeout string | (Optional) Timeout to establish a new connection. It defaults to 5s. |
| readTimeout string | (Optional) Read timeout. A timeout causes Redis commands to fail rather than wait in a blocking fashion. It defaults to 3s. -1 means disabled. |
| writeTimeout string | (Optional) Write timeout. A timeout causes Redis commands to fail rather than wait in a blocking fashion. It defaults to the readTimeout value. |
| poolSize int64 | (Optional) Maximum number of connections. It defaults to 10 connections per runtime.NumCPU. E.g. 20 |
| poolTimeout string | (Optional) The timeout for the connection pool. The default is readTimeout + 1 second. E.g. 50s |
| maxConnAge string | (Optional) Connection aging time. By default, aging connections are not closed. E.g. 30m |
| minIdleConns int64 | (Optional) The minimum number of idle connections to maintain to avoid the performance degradation of creating new connections. It defaults to 0. E.g. 2 |
| idleCheckFrequency string | (Optional) Frequency of idle connection recycler checks. Default is 1m. -1 disables the idle connection recycler. E.g. -1 |
| idleTimeout string | (Optional) Timeout to close idle client connections, which should be less than the server's timeout. It defaults to 5m. -1 disables the idle timeout check. E.g. 10m |

2.3 - Kafka

Event source specifications of Kafka.

KafkaSpec

Belongs to EventSourceSpec.

| Field | Description |
| ----- | ----------- |
| brokers string | A comma-separated string of Kafka server addresses, for example, localhost:9092. |
| authRequired bool | Whether to enable SASL authentication for the Kafka server. Value options: true, false. |
| topic string | The topic name of the Kafka event source, for example, topicA, myTopic. |
| saslUsername string | (Optional) The SASL username to use for authentication. Only required if authRequired is true. For example, admin. |
| saslPassword string | (Optional) The SASL user password for authentication. Only required if authRequired is true. For example, 123456. |
| maxMessageBytes int64 | (Optional) The maximum number of bytes a single message is allowed to contain. Default is 1024. For example, 2048. |
| scaleOption KafkaScaleOption | (Optional) Kafka's scale configuration. |

KafkaScaleOption

Belongs to KafkaSpec.

| Field | Description |
| ----- | ----------- |
| GenericScaleOption | Generic scale configuration. |
| consumerGroup string | Kafka's consumer group name. |
| topic string | Topic under monitoring, for example, topicA, myTopic. |
| lagThreshold string | Threshold for triggering scaling, in this case Kafka's lag. |
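A KafkaSpec with scaling enabled might look like this sketch; all values are illustrative, and the generic scale option fields are inlined alongside the Kafka-specific ones:

```yaml
spec:
  kafka:
    sample-one:
      brokers: kafka-server1:9092,kafka-server2:9092
      topic: events-sample
      authRequired: true
      saslUsername: admin
      saslPassword: "123456"
      scaleOption:
        consumerGroup: group-a
        topic: events-sample
        lagThreshold: "10"
        minReplicaCount: 0        # generic scale option field
        maxReplicaCount: 5        # generic scale option field
```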

2.4 - Cron

Event source specifications of Cron.

CronSpec

Belongs to EventSourceSpec.

| Field | Description |
| ----- | ----------- |
| schedule string | Refer to Schedule format for a valid schedule format, for example, @every 15m. |
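For example, a Cron event source firing every 15 minutes could be declared as follows; the event name is illustrative:

```yaml
spec:
  cron:
    sample-cron:               # the event name is the map key
      schedule: "@every 15m"
```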

3 - EventBus Specifications

Learn about EventBus Specifications.

3.1 - EventBus Specifications

EventBus Specifications.

This document describes the specifications of the EventBus (ClusterEventBus) CRD.

EventBus (ClusterEventBus)

| Field | Description |
| ----- | ----------- |
| apiVersion string | events.openfunction.io/v1alpha1 |
| kind string | EventBus (ClusterEventBus) |
| metadata v1.ObjectMeta | (Optional) Refer to v1.ObjectMeta |
| spec EventBusSpec | Refer to EventBusSpec |
| status EventBusStatus | Status of EventBus (ClusterEventBus) |

EventBusSpec

Belongs to EventBus.

| Field | Description |
| ----- | ----------- |
| topic string | The topic name of the event bus. |
| natsStreaming NatsStreamingSpec | Definition of the NATS Streaming event bus (currently the only supported kind). See NatsStreamingSpec. |

GenericScaleOption

Belongs to scaleOption.

| Field | Description |
| ----- | ----------- |
| pollingInterval int | The interval to check each trigger on. It defaults to 30 seconds. |
| cooldownPeriod int | The period to wait after the last trigger reported active before scaling the resource back to 0. It defaults to 300 seconds. |
| minReplicaCount int | Minimum number of replicas KEDA will scale the resource down to. It defaults to 0. |
| maxReplicaCount int | This setting is passed to the HPA definition that KEDA will create for the given resource. |
| advanced kedav1alpha1.AdvancedConfig | See the KEDA documentation. |
| metadata map[string]string | The KEDA trigger's metadata. |
| authRef kedav1alpha1.ScaledObjectAuthRef | Parameters defined in a TriggerAuthentication do not need to be repeated in the trigger metadata of your ScaledObject. To reference a TriggerAuthentication from a ScaledObject, add the authenticationRef to the trigger. Refer to the KEDA documentation. |

3.2 - NATS Streaming

Event bus specifications of NATS Streaming.

NatsStreamingSpec

Belongs to EventBusSpec.

| Field | Description |
| ----- | ----------- |
| natsURL string | NATS server address, for example, nats://localhost:4222. |
| natsStreamingClusterID string | NATS cluster ID, for example, stan. |
| subscriptionType string | Subscriber type. Value options: topic, queue. |
| ackWaitTime string | (Optional) Refer to Acknowledgements, for example, 300ms. |
| maxInFlight int64 | (Optional) Refer to Max In Flight, for example, 25. |
| durableSubscriptionName string | (Optional) The name of the persistent subscriber, for example, my-durable. |
| deliverNew bool | (Optional) Subscriber option (only one can be used). Whether to send only new messages. Value options: true, false. |
| startAtSequence int64 | (Optional) Subscriber option (only one can be used). Set the starting sequence position, for example, 100000. |
| startWithLastReceived bool | (Optional) Subscriber option (only one can be used). Whether to set the start position to the most recently received message. Value options: true, false. |
| deliverAll bool | (Optional) Subscriber option (only one can be used). Whether to send all available messages. Value options: true, false. |
| startAtTimeDelta string | (Optional) Subscriber option (only one can be used). Set the desired start position as a time delta, for example, 10m, 23s. |
| startAtTime string | (Optional) Subscriber option (only one can be used). Set the desired start position as an absolute timestamp, for example, Feb 3, 2013 at 7:54pm (PST). |
| startAtTimeFormat string | (Optional) Must be used with startAtTime. Sets the format of the time, for example, Jan 2, 2006 at 3:04pm (MST). |
| scaleOption NatsStreamingScaleOption | (Optional) NATS Streaming's scale configuration. |
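A NATS Streaming EventBus might be sketched as follows; the server address, cluster ID, and subscriber name are illustrative:

```yaml
apiVersion: events.openfunction.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  topic: default-topic
  natsStreaming:
    natsURL: nats://nats:4222
    natsStreamingClusterID: stan
    subscriptionType: queue
    durableSubscriptionName: ofn-ebus   # hypothetical durable subscriber name
```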

NatsStreamingScaleOption

Belongs to NatsStreamingSpec.

| Field | Description |
| ----- | ----------- |
| GenericScaleOption | Generic scale configuration. |
| natsServerMonitoringEndpoint string | NATS Streaming's monitoring endpoint. |
| queueGroup string | NATS Streaming's queue group name. |
| durableName string | NATS Streaming's durable name. |
| subject string | Subject under monitoring, for example, topicA, myTopic. |
| lagThreshold string | Threshold for triggering scaling, in this case NATS Streaming's lag. |

4 - Trigger Specifications

Learn about Trigger Specifications.

This document describes the specifications of the Trigger CRD.

Trigger

| Field | Description |
| ----- | ----------- |
| apiVersion string | events.openfunction.io/v1alpha1 |
| kind string | Trigger |
| metadata v1.ObjectMeta | (Optional) Refer to v1.ObjectMeta |
| spec TriggerSpec | Refer to TriggerSpec |
| status TriggerStatus | Status of Trigger |

TriggerSpec

Belongs to Trigger.

| Field | Description |
| ----- | ----------- |
| eventBus string | (Optional) Name of the EventBus resource associated with the Trigger. |
| inputs map[string]Input | (Optional) The inputs of the Trigger, with the key being the input name. Refer to Input. |
| subscribers []Subscriber | (Optional) The subscribers of the Trigger. Refer to Subscriber. |

Input

Belongs to TriggerSpec.

| Field | Description |
| ----- | ----------- |
| namespace string | (Optional) The namespace of the EventSource, which by default matches the namespace of the Trigger, for example, default. |
| eventSource string | EventSource name, for example, kafka-eventsource. |
| event string | Event name, for example, eventA. |

Subscriber

Belongs to TriggerSpec.

| Field | Description |
| ----- | ----------- |
| condition string | The trigger condition. Refer to cel-spec for the expression syntax, for example, eventA && eventB or eventA \|\| eventB. |
| sink SinkSpec | (Optional) Definition of the triggered Sink (an addressable access resource, for example, a synchronization request). Refer to SinkSpec. |
| deadLetterSink SinkSpec | (Optional) Definition of the triggered dead letter Sink (an addressable access resource, for example, a synchronization request). Refer to SinkSpec. |
| topic string | (Optional) Used to send post-trigger messages to the specified topic of the event bus, for example, topicTriggered. |
| deadLetterTopic string | (Optional) Used to send post-trigger messages to the specified dead letter topic of the event bus, for example, topicDL. |
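Putting TriggerSpec, Input, and Subscriber together, a Trigger might be sketched as follows; the names and condition are illustrative:

```yaml
apiVersion: events.openfunction.io/v1alpha1
kind: Trigger
metadata:
  name: my-trigger
spec:
  eventBus: default
  inputs:
    eventA:                        # the input name is the map key
      eventSource: kafka-eventsource
      event: eventA
  subscribers:
    - condition: eventA            # a cel-spec condition over input names
      sink:
        ref:
          kind: Service
          name: function-ksvc
          apiVersion: serving.knative.dev/v1
```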

SinkSpec

Belongs to Subscriber.

| Field | Description |
| ----- | ----------- |
| ref Reference | Refer to Reference. |

Reference

Belongs to SinkSpec.

| Field | Description |
| ----- | ----------- |
| kind string | The type of the referenced resource. It defaults to Service. |
| namespace string | The namespace of the referenced resource, by default the same as the namespace of the Trigger. |
| name string | Name of the referenced resource, for example, function-ksvc. |
| apiVersion string | The apiVersion of the referenced resource. It defaults to serving.knative.dev/v1. |