Reference
- 1: Component Specifications
- 1.1: Function Specifications
- 1.2: EventSource Specifications
- 1.2.1: EventSource Specifications
- 1.2.2: Redis
- 1.2.3: Kafka
- 1.2.4: Cron
- 1.3: EventBus Specifications
- 1.3.1: EventBus Specifications
- 1.3.2: NATS Streaming
- 1.4: Trigger Specifications
- 2: FAQ
1 - Component Specifications
1.1 - Function Specifications
This document describes the specifications of the Function CRD.
Function.spec
Name | Type | Description | Required |
---|---|---|---|
image | string | Image upload path, e.g. demorepo/demofunction:v1 | true |
build | object | Builder specification for the function | false |
imageCredentials | object | Credentials for accessing the image repository, refer to v1.LocalObjectReference | false |
serving | object | Serving specification for the function | false |
version | string | Function version, e.g. v1.0.0 | false |
workloadRuntime | string | WorkloadRuntime for the Function. Known values: OCIContainer and WasmEdge. Default: OCIContainer | false |
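For orientation, below is a minimal Function sketch that exercises the fields above; the function name, image path, and Secret name are hypothetical placeholders.
# Minimal Function sketch (names and paths are hypothetical)
apiVersion: core.openfunction.io/v1beta2
kind: Function
metadata:
  name: demo-function
spec:
  version: "v1.0.0"
  image: "demorepo/demofunction:v1"      # image upload path
  workloadRuntime: OCIContainer          # or WasmEdge
  imageCredentials:
    name: push-secret                    # v1.LocalObjectReference to a Secret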
Function.spec.build
Name | Type | Description | Required |
---|---|---|---|
srcRepo | object | The configuration of the source code repository | true |
builder | string | Name of the Builder | false |
builderCredentials | object | Credentials for accessing the image repository, refer to v1.LocalObjectReference | false |
builderMaxAge | string | The maximum time to retain finished builders. | false |
dockerfile | string | Path to the Dockerfile used by Shipwright when building images from a Dockerfile | false |
env | map[string]string | Environment variables passed to the buildpacks builder | false |
failedBuildsHistoryLimit | integer | The number of failed builders to retain. Default is 1. | false |
shipwright | object | Specification of the Shipwright engine | false |
successfulBuildsHistoryLimit | integer | The number of successfully finished builders to retain. Default is 0. | false |
timeout | string | The maximum time for the builder to build the image | false |
Function.spec.build.srcRepo
Name | Type | Description | Required |
---|---|---|---|
bundleContainer | object | BundleContainer describes the source code bundle container to pull | false |
credentials | object | Repository access credentials, refer to v1.LocalObjectReference | false |
revision | string | Referenceable instances in the repository, such as a commit ID or branch name. | false |
sourceSubPath | string | The directory of the function in the repository, e.g. functions/function-a/ | false |
url | string | Source code repository address | false |
Function.spec.build.srcRepo.bundleContainer
Name | Type | Description | Required |
---|---|---|---|
image | string | The bundleContainer image name | true |
Function.spec.build.shipwright
Name | Type | Description | Required |
---|---|---|---|
params | []object | Parameters for the build strategy | false |
strategy | object | Strategy references the BuildStrategy to use to build the image | false |
timeout | string | The maximum amount of time the Shipwright Build should take to execute | false |
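To make the build fields concrete, below is a hedged sketch of a spec.build block; the builder image, repository URL, and Secret name are illustrative assumptions.
# Sketch of spec.build (illustrative values only)
build:
  builder: openfunction/builder-go:latest            # hypothetical builder image
  timeout: 30m                                       # maximum build time
  successfulBuildsHistoryLimit: 0
  failedBuildsHistoryLimit: 1
  env:
    FUNC_NAME: "HelloWorld"                          # passed to the buildpacks builder
  srcRepo:
    url: "https://github.com/example/functions.git"  # hypothetical repository
    revision: "main"
    sourceSubPath: "functions/function-a/"
    credentials:
      name: git-repo-secret                          # v1.LocalObjectReference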
Function.spec.serving
Name | Type | Description | Required |
---|---|---|---|
annotations | map[string]string | Annotations that will be added to the workload | false |
bindings | map[string]object | Dapr bindings that the function needs to create and use. | false |
hooks | object | Hooks that will be executed before or after the function execution | false |
labels | map[string]string | Labels that will be added to the workload | false |
outputs | []object | The outputs which the function will send data to | false |
params | map[string]string | Parameters required by the function, will be passed to the function as environment variables | false |
pubsub | map[string]object | Dapr pubsub that the function needs to create and use | false |
scaleOptions | object | Configuration of auto scaling. | false |
states | map[string]object | Dapr state store that the function needs to create and use | false |
template | object | A pod template that allows modifying the operator-generated pod template. | false |
timeout | string | The maximum amount of time the Serving can take to enter the running state | false |
tracing | object | Configuration of tracing | false |
triggers | object | Triggers used to trigger the function. Refer to Function Trigger. | true |
workloadType | string | The type of workload used to run the function, known values are: Deployment, StatefulSet and Job | false |
Function.spec.serving.hooks
Name | Type | Description | Required |
---|---|---|---|
policy | string | There are two kinds of hooks: global hooks, defined in the config file of the OpenFunction Controller, and private hooks, defined in the Function. Policy defines the relationship between the function's global and private hooks. Known values are: Append: all hooks are executed; the private pre hooks run after the global pre hooks, and the private post hooks run before the global post hooks (this is the default policy). Override: only the private hooks are executed. | false |
post | []string | The hooks will be executed after the function execution | false |
pre | []string | The hooks will be executed before the function execution | false |
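A short sketch of spec.serving.hooks, assuming hypothetical hook names:
# Sketch of spec.serving.hooks (hook names are hypothetical)
hooks:
  policy: Append      # or Override
  pre:
    - plugin-a        # runs before the function executes
  post:
    - plugin-b        # runs after the function executes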
Function.spec.serving.outputs[index]
Name | Type | Description | Required |
---|---|---|---|
dapr | object | Dapr output, referring to an existing component or a component defined in bindings or pubsub | false |
Function.spec.serving.outputs[index].dapr
Name | Type | Description | Required |
---|---|---|---|
name | string | The name of the dapr component | true |
metadata | map[string]string | Metadata passed to Dapr | false |
operation | string | Operation field tells the Dapr component which operation it should perform, refer to Dapr docs | false |
topic | string | When the type is pubsub, you need to set the topic | false |
type | string | Type of Dapr component, such as: bindings.kafka, pubsub.rocketmq | false |
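As an illustration, an outputs entry that writes to a Kafka binding might look like the sketch below; the component name and metadata are assumptions.
# Sketch of spec.serving.outputs (component name and metadata are assumptions)
outputs:
  - dapr:
      name: kafka-server          # an existing component or one defined in bindings
      type: bindings.kafka
      operation: create           # the operation the Dapr component performs
      metadata:
        partitionKey: "0"         # assumed metadata passed to Dapr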
Function.spec.serving.scaleOptions
Name | Type | Description | Required |
---|---|---|---|
keda | object | Configuration about keda autoscaling | false |
knative | map[string]string | Knative autoscaling annotations. Refer to Knative autoscaling. | false |
maxReplicas | integer | Maximum number of replicas which will scale the resource up to. | false |
minReplicas | integer | Minimum number of replicas which will scale the resource down to. By default, it scales to 0. | false |
Function.spec.serving.scaleOptions.keda
Name | Type | Description | Required |
---|---|---|---|
scaledJob | object | Scale options for job | false |
scaledObject | object | Scale options for deployment and statefulset | false |
triggers | []object | Event sources that trigger dynamic scaling of workloads. Refer to kedav1alpha1.ScaleTriggers. | false |
Function.spec.serving.scaleOptions.keda.scaledJob
Name | Type | Description | Required |
---|---|---|---|
failedJobsHistoryLimit | integer | How many failed jobs should be kept. It defaults to 100. | false |
pollingInterval | integer | The pollingInterval is in seconds. This is the interval in which KEDA checks the triggers for the queue length or the stream lag. It defaults to 30 seconds. | false |
restartPolicy | string | Restart policy for all containers within the pod. Value options are OnFailure or Never. It defaults to Never. | false |
scalingStrategy | object | Select a scaling strategy. Value options are default, custom, or accurate. The default value is default. Refer to kedav1alpha1.ScalingStrategy | false |
successfulJobsHistoryLimit | integer | How many completed jobs should be kept. It defaults to 100. | false |
Function.spec.serving.scaleOptions.keda.scaledObject
Name | Type | Description | Required |
---|---|---|---|
advanced | object | This property specifies whether the target resource (for example, Deployment and StatefulSet) should be scaled back to original replicas count after the ScaledObject is deleted. Default behavior is to keep the replica count at the same number as it is in the moment of ScaledObject deletion. Refer to kedav1alpha1.AdvancedConfig. | false |
cooldownPeriod | integer | The cooldownPeriod is in seconds, and it is the period of time to wait after the last trigger activated before scaling back down to 0. It defaults to 300 seconds. | false |
pollingInterval | integer | The pollingInterval is in seconds. This is the interval in which KEDA checks the triggers for the queue length or the stream lag. It defaults to 30 seconds. | false |
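Putting the scale options together, below is a sketch of spec.serving.scaleOptions for a Deployment-backed function; the KEDA Kafka trigger and its metadata are illustrative assumptions.
# Sketch of spec.serving.scaleOptions (trigger metadata is illustrative)
scaleOptions:
  minReplicas: 0
  maxReplicas: 10
  keda:
    scaledObject:
      pollingInterval: 15
      cooldownPeriod: 60
    triggers:
      - type: kafka                              # kedav1alpha1.ScaleTriggers
        metadata:
          topic: sample-topic                    # hypothetical topic
          lagThreshold: "20"
          consumerGroup: group1                  # hypothetical consumer group
          bootstrapServers: kafka-server:9092    # hypothetical address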
Function.spec.serving.states[key]
Name | Type | Description | Required |
---|---|---|---|
spec | object | Dapr state store component spec. Refer to Dapr docs. | false |
Function.spec.serving.tracing
Name | Type | Description | Required |
---|---|---|---|
baggage | map[string]string | Baggage is contextual information passed between spans. It is a key-value store that resides alongside the span context in a trace, making values available to any span created within that trace. | true |
enabled | boolean | Whether to enable tracing | true |
provider | object | The tracing implementation used to create and send spans | true |
tags | map[string]string | The tag that needs to be added to the spans | false |
Function.spec.serving.tracing.provider
Name | Type | Description | Required |
---|---|---|---|
name | string | Tracing provider name, known values are skywalking and opentelemetry | true |
exporter | object | The service used to collect spans for opentelemetry | false |
oapServer | string | The skywalking server url | false |
Function.spec.serving.tracing.provider.exporter
Name | Type | Description | Required |
---|---|---|---|
endpoint | string | The exporter url | true |
name | string | The exporter name, known values are otlp, jaeger, and zipkin | true |
compression | string | The compression type to use on OTLP trace requests. Options include gzip. By default no compression will be used. | false |
headers | string | Key-value pairs separated by commas to pass as request headers on OTLP trace requests. | false |
protocol | string | The transport protocol to use on OTLP trace requests. Options include grpc and http/protobuf. Default is grpc. | false |
timeout | string | The maximum waiting time, in milliseconds, allowed to send each OTLP trace batch. Default is 10000. | false |
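Combining the tracing fields, below is a sketch that assumes an OpenTelemetry collector endpoint; the endpoint, tags, and baggage values are placeholders.
# Sketch of spec.serving.tracing (endpoint and values are placeholders)
tracing:
  enabled: true
  provider:
    name: opentelemetry                          # or skywalking (with oapServer)
    exporter:
      name: otlp                                 # otlp, jaeger, or zipkin
      endpoint: "http://otel-collector:4317"     # hypothetical collector URL
      protocol: grpc
  tags:
    func: sample-func                            # illustrative span tag
  baggage:
    key: "value"                                 # illustrative baggage entry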
Function.spec.serving.triggers
Name | Type | Description | Required |
---|---|---|---|
dapr | []object | List of Dapr triggers, referring to Dapr bindings or pubsub components | false |
http | object | The http trigger | false |
inputs | []object | A list of components that the function can get data from | false |
Function.spec.serving.triggers.dapr[index]
Name | Type | Description | Required |
---|---|---|---|
name | string | The dapr component name | true |
topic | string | When the component type is pubsub, you need to set the topic | false |
type | string | Type of Dapr component, such as: bindings.kafka, pubsub.rocketmq | false |
Function.spec.serving.triggers.http
Name | Type | Description | Required |
---|---|---|---|
port | integer | The port the function is listening on, e.g. 8080 | false |
route | object | Route defines how traffic from the Gateway listener is routed to a function. | false |
Function.spec.serving.triggers.http.route
Name | Type | Description | Required |
---|---|---|---|
gatewayRef | object | GatewayRef references the Gateway resource that the Route wants to be attached to | false |
hostnames | []string | Hostnames defines a set of hostnames that should match against the HTTP Host header to select an HTTPRoute to process the request. | false |
rules | []object | Rules are a list of HTTP matchers, filters and actions. Refer to HTTPRouteRule. | false |
Function.spec.serving.triggers.http.route.gatewayRef
Name | Type | Description | Required |
---|---|---|---|
name | string | The name of the gateway | true |
namespace | string | The namespace of the gateway | true |
Function.spec.serving.triggers.inputs[index]
Name | Type | Description | Required |
---|---|---|---|
dapr | object | A Dapr component that the function can get data from. Currently only Dapr state stores are supported | true |
Function.spec.serving.triggers.inputs[index].dapr
Name | Type | Description | Required |
---|---|---|---|
name | string | The Dapr component name, either an existing component or a component defined in states | true |
type | string | The dapr component type, such as state.redis | false |
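The trigger fields combine as in the sketch below, which assumes a Kafka binding component and a hypothetical Gateway.
# Sketch of spec.serving.triggers (component and gateway names are hypothetical)
triggers:
  dapr:
    - name: kafka-server        # an existing component or one defined in bindings
      type: bindings.kafka
  http:
    port: 8080
    route:
      gatewayRef:
        name: openfunction      # hypothetical Gateway name
        namespace: openfunction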
1.2 - EventSource Specifications
1.2.1 - EventSource Specifications
This document describes the specifications of the EventSource CRD.
EventSource
Field | Description |
---|---|
apiVersion string | events.openfunction.io/v1alpha1 |
kind string | EventSource |
metadata v1.ObjectMeta | (Optional) Refer to v1.ObjectMeta |
spec EventSourceSpec | Refer to EventSourceSpec |
status EventSourceStatus | Status of EventSource |
EventSourceSpec
Belongs to EventSource.
Field | Description |
---|---|
eventBus string | (Optional) Name of the EventBus resource associated with the event source. |
redis map[string]RedisSpec | (Optional) The definition of a Redis event source, with key being the event name, refer to RedisSpec. |
kafka map[string]KafkaSpec | (Optional) The definition of a Kafka event source, with key being the event name, refer to KafkaSpec. |
cron map[string]CronSpec | (Optional) The definition of a Cron event source, with key being the event name, refer to CronSpec. |
sink SinkSpec | (Optional) Definition of the Sink (addressable access resource, i.e. synchronization request) associated with the event source, refer to SinkSpec. |
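As a concrete, hedged example, an EventSource with one Kafka event and a sink might look like the sketch below; all names and addresses are placeholders.
# Sketch of an EventSource (names and addresses are placeholders)
apiVersion: events.openfunction.io/v1alpha1
kind: EventSource
metadata:
  name: my-eventsource
spec:
  eventBus: default                      # optional EventBus association
  kafka:
    sample-one:                          # event name (map key)
      brokers: "kafka-server:9092"
      topic: events-sample
      authRequired: false
  sink:
    ref:
      kind: Service                      # defaults to Knative Service
      apiVersion: serving.knative.dev/v1
      name: function-ksvc
      namespace: default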
SinkSpec
Belongs to EventSourceSpec.
Field | Description |
---|---|
ref Reference | Refer to Reference. |
Reference
Belongs to SinkSpec.
Note
The resources cited are generally Knative Services.
Field | Description |
---|---|
kind string | The type of the referenced resource. It defaults to Service . |
namespace string | The namespace of the referenced resource, by default the same as the namespace of the Trigger. |
name string | Name of the referenced resource, for example, function-ksvc . |
apiVersion string | The apiVersion of the referenced resource. It defaults to serving.knative.dev/v1 . |
GenericScaleOption
Belongs to scaleOption.
Field | Description |
---|---|
pollingInterval int | This is the interval to check each trigger on. It defaults to 30 seconds. |
cooldownPeriod int | The period to wait after the last trigger reported active before scaling the resource back to 0. It defaults to 300 seconds. |
minReplicaCount int | Minimum number of replicas KEDA will scale the resource down to. It defaults to 0 . |
maxReplicaCount int | This setting is passed to the HPA definition that KEDA will create for a given resource. |
advanced kedav1alpha1.AdvancedConfig | See KEDA documentation. |
metadata map[string]string | KEDA trigger’s metadata |
authRef kedav1alpha1.ScaledObjectAuthRef | Parameters defined in a TriggerAuthentication do not need to be included in the trigger metadata of your ScaledObject definition. To reference a TriggerAuthentication from a ScaledObject, add the authenticationRef to the trigger. Refer to KEDA documentation. |
1.2.2 - Redis
RedisSpec
Belongs to EventSourceSpec.
Note
The EventSource generates Dapr Bindings Components for adapting Redis event sources according to the RedisSpec, and in principle we try to maintain the consistency of the relevant parameters. For more information, see the Redis binding spec.
Field | Description |
---|---|
redisHost string | Address of the Redis server, e.g. localhost:6379 . |
redisPassword string | Password for the Redis server, e.g. 123456 . |
enableTLS bool | (Optional) Whether to enable TLS access, which defaults to false . Value options: true , false . |
failover bool | (Optional) Whether to enable the failover feature. Requires the sentinelMasterName to be set. It defaults to false . Value options: true , false . |
sentinelMasterName string | (Optional) The name of the sentinel master. Refer to Redis Sentinel Documentation. |
redeliverInterval string | (Optional) The interval for redeliver. It defaults to 60s . 0 means the redeliver mechanism is disabled. E.g. 30s |
processingTimeout string | (Optional) Message processing timeout. It defaults to 15s . 0 means timeout is disabled. E.g. 30s |
redisType string | (Optional) The type of Redis. Value options: node for single-node mode, cluster for cluster mode. It defaults to node . |
redisDB int64 | (Optional) The database index to connect to Redis. Effective only if redisType is node . It defaults to 0 . |
redisMaxRetries int64 | (Optional) Maximum number of retries. It defaults to no retries. E.g. 5 |
redisMinRetryInterval string | (Optional) Minimum backoff time for retries. The default value is 8ms . -1 indicates that the backoff time is disabled. E.g. 10ms |
redisMaxRetryInterval string | (Optional) Maximum backoff time for retries. The default value is 512ms . -1 indicates that the backoff time is disabled. E.g. 5s |
dialTimeout string | (Optional) Timeout to establish a new connection. It defaults to 5s . |
readTimeout string | (Optional) Read timeout. A timeout causes Redis commands to fail rather than wait in a blocking fashion. It defaults to 3s . -1 means disabled. |
writeTimeout string | (Optional) Write timeout. A timeout causes Redis commands to fail rather than wait in a blocking fashion. It defaults to the value of readTimeout. |
poolSize int64 | (Optional) Maximum number of connections. It defaults to 10 connections per runtime.NumCPU. E.g. 20 |
poolTimeout string | (Optional) The timeout for the connection pool. The default is readTimeout + 1 second. E.g. 50s |
maxConnAge string | (Optional) Connection aging time. The default is not to close the aging connection. E.g. 30m |
minIdleConns int64 | (Optional) The minimum number of idle connections to maintain to avoid performance degradation from creating new connections. It defaults to 0 . E.g. 2 |
idleCheckFrequency string | (Optional) Frequency of idle connection recycler checks. Default is 1m . -1 means the idle connection recycler is disabled. E.g. -1 |
idleTimeout string | (Optional) Timeout to close idle client connections, which should be less than the server’s timeout. It defaults to 5m . -1 means disable idle timeout check. E.g. 10m |
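A hedged sketch of a Redis event definition inside an EventSource spec; the address and password are placeholders.
# Sketch of an EventSource Redis event (address and password are placeholders)
spec:
  redis:
    sample-one:                    # event name (map key)
      redisHost: redis-server:6379
      redisPassword: "123456"
      redisType: node              # single-node mode
      processingTimeout: 30s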
1.2.3 - Kafka
KafkaSpec
Belongs to EventSourceSpec.
Note
The EventSource generates Dapr Bindings Components for adapting Kafka event sources according to the KafkaSpec, and in principle we try to maintain the consistency of the relevant parameters. You can get more information by visiting the Kafka binding spec.
Field | Description |
---|---|
brokers string | A comma-separated string of Kafka server addresses, for example, localhost:9092 . |
authRequired bool | Whether to enable SASL authentication for the Kafka server. Value options: true , false . |
topic string | The topic name of the Kafka event source, for example, topicA , myTopic . |
saslUsername string | (Optional) The SASL username to use for authentication. Only required if authRequired is true . For example, admin . |
saslPassword string | (Optional) The SASL user password for authentication. Only required if authRequired is true . For example, 123456 . |
maxMessageBytes int64 | (Optional) The maximum number of bytes a single message is allowed to contain. Default is 1024 . For example, 2048 . |
scaleOption KafkaScaleOption | (Optional) Kafka’s scale configuration. |
KafkaScaleOption
Belongs to KafkaSpec.
Field | Description |
---|---|
GenericScaleOption | Generic scale configuration. |
consumerGroup string | Kafka’s consumer group name. |
topic string | Topic under monitoring, for example, topicA , myTopic . |
lagThreshold string | Threshold for triggering scaling, in this case the Kafka consumer lag. |
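Combining KafkaSpec with KafkaScaleOption, a hedged sketch follows; the broker address and consumer group are placeholders.
# Sketch of an EventSource Kafka event with scaling (illustrative values)
spec:
  kafka:
    sample-two:                     # event name (map key)
      brokers: "kafka-server:9092"
      topic: events-sample
      authRequired: true
      saslUsername: admin
      saslPassword: "123456"
      scaleOption:
        consumerGroup: kafka-group  # hypothetical consumer group
        topic: events-sample
        lagThreshold: "20"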
1.2.4 - Cron
CronSpec
Belongs to EventSourceSpec.
Note
The EventSource generates Dapr Bindings Components for adapting Cron event sources according to the CronSpec, and in principle we try to maintain the consistency of the relevant parameters. For more information, see the Cron binding spec.
Field | Description |
---|---|
schedule string | Refer to Schedule format for a valid schedule format, for example, @every 15m . |
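For example, a Cron event that fires every 15 minutes; the event name is a placeholder.
# Sketch of an EventSource Cron event
spec:
  cron:
    sample-tick:                  # event name (map key)
      schedule: "@every 15m"      # see the Schedule format reference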
1.3 - EventBus Specifications
1.3.1 - EventBus Specifications
This document describes the specifications of the EventBus (ClusterEventBus) CRD.
EventBus (ClusterEventBus)
Field | Description |
---|---|
apiVersion string | events.openfunction.io/v1alpha1 |
kind string | EventBus(ClusterEventBus) |
metadata v1.ObjectMeta | (Optional) Refer to v1.ObjectMeta |
spec EventBusSpec | Refer to EventBusSpec |
status EventBusStatus | Status of EventBus(ClusterEventBus) |
EventBusSpec
Belongs to EventBus.
Field | Description |
---|---|
topic string | The topic name of the event bus. |
natsStreaming NatsStreamingSpec | Definition of the NATS Streaming event bus (currently the only supported implementation). See NatsStreamingSpec. |
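A hedged sketch of an EventBus backed by NATS Streaming; the server address and cluster ID are placeholders.
# Sketch of an EventBus (address and cluster ID are placeholders)
apiVersion: events.openfunction.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  topic: events-topic
  natsStreaming:
    natsURL: "nats://nats:4222"
    natsStreamingClusterID: "stan"
    subscriptionType: queue
    durableSubscriptionName: "ImDurable"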
GenericScaleOption
Belongs to scaleOption.
Field | Description |
---|---|
pollingInterval int | The interval to check each trigger on. It defaults to 30 seconds. |
cooldownPeriod int | The period to wait after the last trigger reported active before scaling the resource back to 0. It defaults to 300 seconds. |
minReplicaCount int | Minimum number of replicas which KEDA will scale the resource down to. It defaults to 0 . |
maxReplicaCount int | This setting is passed to the HPA definition that KEDA will create for a given resource. |
advanced kedav1alpha1.AdvancedConfig | See KEDA documentation. |
metadata map[string]string | KEDA trigger’s metadata. |
authRef kedav1alpha1.ScaledObjectAuthRef | Parameters defined in a TriggerAuthentication do not need to be included in the trigger metadata of your ScaledObject definition. To reference a TriggerAuthentication from a ScaledObject, add the authenticationRef to the trigger. Refer to KEDA documentation. |
1.3.2 - NATS Streaming
NatsStreamingSpec
Belongs to EventBusSpec.
Note
The EventBus (ClusterEventBus) provides the configuration to the EventSource and Trigger references in order to generate the corresponding Dapr Pub/Sub Components to get messages from the event bus, and in principle we try to maintain consistency in the relevant parameters. For more information, see the NATS Streaming pubsub spec.
Field | Description |
---|---|
natsURL string | NATS server address, for example, nats://localhost:4222 . |
natsStreamingClusterID string | NATS cluster ID, for example, stan . |
subscriptionType string | Subscriber type, value options: topic , queue . |
ackWaitTime string | (Optional) Refer to Acknowledgements , for example, 300ms . |
maxInFlight int64 | (Optional) Refer to Max In Flight , for example, 25 . |
durableSubscriptionName string | (Optional) The name of the persistent subscriber. For example, my-durable . |
deliverNew bool | (Optional) Subscriber options (only one can be used). Whether to send only new messages. Value options: true , false . |
startAtSequence int64 | (Optional) Subscriber options (only one can be used). Set the starting sequence position and status. For example, 100000 . |
startWithLastReceived bool | (Optional) Subscriber options (only one can be used). Whether to set the start position to the most recently received message. Value options: true , false . |
deliverAll bool | (Optional) Subscriber options (only one can be used). Whether to send all available messages. Value options: true , false . |
startAtTimeDelta string | (Optional) Subscriber options (only one can be used). Use the difference form to set the desired start time position and state, for example, 10m , 23s . |
startAtTime string | (Optional) Subscriber options (only one can be used). Set the desired start time using an absolute timestamp. For example, Feb 3, 2013 at 7:54pm (PST) . |
startAtTimeFormat string | (Optional) Must be used with startAtTime. Sets the format of the time. For example, Jan 2, 2006 at 3:04pm (MST) . |
scaleOption NatsStreamingScaleOption | (Optional) Nats streaming’s scale configuration. |
NatsStreamingScaleOption
Belongs to NatsStreamingSpec.
Field | Description |
---|---|
GenericScaleOption | Generic scale configuration. |
natsServerMonitoringEndpoint string | Nats streaming’s monitoring endpoint. |
queueGroup string | Nats streaming’s queue group name. |
durableName string | Nats streaming’s durable name. |
subject string | Subject under monitoring, for example, topicA , myTopic . |
lagThreshold string | Threshold for triggering scaling, in this case the NATS Streaming consumer lag. |
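Combining NatsStreamingSpec with its scale option, a hedged sketch follows; the monitoring endpoint and queue group are placeholders.
# Sketch of natsStreaming with scaling (illustrative values)
natsStreaming:
  natsURL: "nats://nats:4222"
  natsStreamingClusterID: "stan"
  subscriptionType: queue
  durableSubscriptionName: "ImDurable"
  scaleOption:
    natsServerMonitoringEndpoint: "stan.stan.svc.cluster.local:8222"  # hypothetical
    queueGroup: grp1
    durableName: "ImDurable"
    subject: events-topic
    lagThreshold: "10"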
1.4 - Trigger Specifications
This document describes the specifications of the Trigger CRD.
Trigger
Field | Description |
---|---|
apiVersion string | events.openfunction.io/v1alpha1 |
kind string | Trigger |
metadata v1.ObjectMeta | (Optional) Refer to v1.ObjectMeta |
spec TriggerSpec | Refer to TriggerSpec |
status TriggerStatus | Status of Trigger |
TriggerSpec
Belongs to Trigger.
Field | Description |
---|---|
eventBus string | (Optional) Name of the EventBus resource associated with the Trigger. |
inputs map[string]Input | (Optional) The inputs of the Trigger, with key being the input name. Refer to Input. |
subscribers []Subscriber | (Optional) The subscribers of the Trigger. Refer to Subscriber. |
Input
Belongs to TriggerSpec.
Field | Description |
---|---|
namespace string | (Optional) The namespace name of the EventSource, which by default matches the namespace of the Trigger, for example, default . |
eventSource string | EventSource name, for example, kafka-eventsource . |
event string | Event name, for example, eventA . |
Subscriber
Belongs to TriggerSpec.
Field | Description |
---|---|
condition string | Trigger conditions for the subscriber; refer to cel-spec for the syntax, for example, eventA && eventB or eventA || eventB . |
sink SinkSpec | (Optional) Triggered Sink (addressable access resource, for example, synchronization request) definition, refer to SinkSpec. |
deadLetterSink SinkSpec | (Optional) Triggered dead letter Sink (addressable access resource, for example, synchronization request) definition, refer to SinkSpec. |
topic string | (Optional) Used to send post-trigger messages to the specified topic of the event bus, for example, topicTriggered . |
deadLetterTopic string | (Optional) Used to send post-trigger messages to the specified dead letter topic of the event bus, for example, topicDL . |
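Putting TriggerSpec, Input, and Subscriber together, a hedged sketch with placeholder resource names:
# Sketch of a Trigger (names are placeholders)
apiVersion: events.openfunction.io/v1alpha1
kind: Trigger
metadata:
  name: my-trigger
spec:
  eventBus: default
  inputs:
    eventA:                            # input name (map key)
      eventSource: kafka-eventsource
      event: sample-one
      namespace: default
  subscribers:
    - condition: eventA
      sink:
        ref:
          kind: Service
          apiVersion: serving.knative.dev/v1
          name: function-ksvc
          namespace: default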
SinkSpec
Belongs to Subscriber.
Field | Description |
---|---|
ref Reference | Refer to Reference. |
Reference
Belongs to SinkSpec.
Note
The resources cited are generally Knative Services.
Field | Description |
---|---|
kind string | The type of the referenced resource. It defaults to Service . |
namespace string | The namespace of the referenced resource, by default the same as the namespace of the Trigger. |
name string | Name of the referenced resource, for example, function-ksvc . |
apiVersion string | The apiVersion of the referenced resource. It defaults to serving.knative.dev/v1 . |
2 - FAQ
This document describes FAQs when using OpenFunction.
Q: How to use private image repositories in OpenFunction?
A: OpenFunction uses Shipwright (which utilizes Tekton to integrate with Cloud Native Buildpacks) in the build phase to package the user function to the application image.
Users often choose to access a private image repository in an insecure way, which is not yet supported by Cloud Native Buildpacks.
We offer the following workaround to get around this limitation for now:
- Use IP address instead of hostname as access address for private image repository.
- You should skip tag-resolution when you run the Knative-runtime function.
For references:
buildpacks/tekton-integration#31
Q: How to access the Knative-runtime function without introducing a new ingress controller?
A: OpenFunction provides a unified entry point for function accessibility, which is based on the Ingress Nginx implementation. However, for some users, this is not necessary, and instead, introducing a new ingress controller may affect the current cluster.
In general, accessible addresses are for the sync (Knative-runtime) functions. Here are two ways to solve this problem:
Magic DNS
You can follow this guide to configure the DNS.
CoreDNS
This is similar to using Magic DNS, with the difference that the configuration for DNS resolution is placed inside CoreDNS. Assume that the user has configured a domain named "openfunction.dev" in the ConfigMap config-domain under the knative-serving namespace (as shown below):
$ kubectl -n knative-serving get cm config-domain -o yaml
apiVersion: v1
data:
  openfunction.dev: ""
kind: ConfigMap
metadata:
  annotations:
    knative.dev/example-checksum: 81552d0b
  labels:
    app.kubernetes.io/part-of: knative-serving
    app.kubernetes.io/version: 1.0.1
    serving.knative.dev/release: v1.0.1
  name: config-domain
  namespace: knative-serving
Next, let’s add an A record for this domain. OpenFunction uses Kourier as the default network layer for Knative Serving, which is where the domain should flow to.
$ kubectl -n kourier-system get svc
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
kourier            LoadBalancer   10.233.7.202   <pending>     80:31655/TCP,443:30980/TCP   36m
kourier-internal   ClusterIP      10.233.47.71   <none>        80/TCP                       36m
Then the user only needs to configure this wildcard DNS resolution in CoreDNS to resolve the URL address of any Knative Service in the cluster, where "10.233.47.71" is the address of the Service kourier-internal:
$ kubectl -n kube-system get cm coredns -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        template IN A openfunction.dev {
          match .*\.openfunction\.dev
          answer "{{ .Name }} 60 IN A 10.233.47.71"
          fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
...
If the user cannot resolve the URL address for this function outside the cluster, configure the hosts file as follows, where "serving-sr5v2-ksvc-sbtgr.default.openfunction.dev" is the URL address obtained from the command kubectl get ksvc:
10.233.47.71 serving-sr5v2-ksvc-sbtgr.default.openfunction.dev
After the above configuration is done, you can get the URL address of the function with the following command. Then you can trigger the function via curl or your browser.
$ kubectl get ksvc
NAME URL
serving-sr5v2-ksvc-sbtgr http://serving-sr5v2-ksvc-sbtgr.default.openfunction.dev
Q: How to enable and configure concurrency for functions?
A: OpenFunction categorizes functions into "sync runtime" and "async runtime" based on the type of request being handled. These two types of functions are driven by Knative Serving and by Dapr + KEDA, respectively.
Therefore, to enable and configure the concurrency of functions, you need to refer to the specific implementation in the above components.
The following section describes how to enable and configure concurrency of functions in OpenFunction according to the “sync runtime” and “async runtime” sections.
Sync runtime
You can start by referring to this document in Knative Serving on enabling and configuring concurrency capabilities.
Knative Serving has Soft limit and Hard limit configurations for the concurrency feature.
Soft limit
You can refer to the Global (ConfigMap) and Global (Operator) sections of this document to configure global concurrency capabilities.
For Per Revision, you can configure it like this:
apiVersion: core.openfunction.io/v1beta2
kind: Function
metadata:
name: function-sample
spec:
serving:
scaleOptions:
knative:
autoscaling.knative.dev/target: "200"
Hard limit
OpenFunction currently doesn’t support configuring a Hard limit for Per Revision. You can refer to the Global (ConfigMap) and Global (Operator) sections of this document to configure global concurrency capabilities.
In summary
In a nutshell, you can configure Knative Serving's autoscaling-related options for each function as shown below, as long as they can be passed in as annotations; otherwise, you can only configure them globally.
# Configuration in Knative Serving
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: helloworld-go
namespace: default
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/<key>: "value"
# Configuration in OpenFunction (recommended)
apiVersion: core.openfunction.io/v1beta2
kind: Function
metadata:
name: function-sample
spec:
serving:
scaleOptions:
knative:
<key>: "value"
# Alternative approach
apiVersion: core.openfunction.io/v1beta2
kind: Function
metadata:
name: function-sample
spec:
serving:
annotations:
autoscaling.knative.dev/<key>: "value"
Async runtime
You can start by referring to the document in Dapr on enabling and configuring concurrency capabilities.
Compared to the concurrency configuration of sync runtime, the concurrency configuration of async runtime is simpler.
# Configuration in Dapr
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodesubscriber
namespace: default
spec:
template:
metadata:
annotations:
dapr.io/app-max-concurrency: "value"
# Configuration in OpenFunction (recommended)
apiVersion: core.openfunction.io/v1beta2
kind: Function
metadata:
name: function-sample
spec:
serving:
annotations:
dapr.io/app-max-concurrency: "value"
Q: How to create source repository credential for the function image build process?
A: You may be prompted with errors like Unsupported type of credentials provided, either SSH private key or username/password is supported (exit code 110) when using spec.build.srcRepo.credentials, which means you are using an incorrect Secret resource as the source repository credential.
OpenFunction currently implements the function image building framework based on Shipwright, so we need to refer to this document to set up the correct source repository credential.
Q: How to install OpenFunction in an offline environment?
A: You can install and use OpenFunction in an offline environment by following these steps:
Pull the Helm Chart
Pull the helm chart in an environment that can access GitHub:
helm repo add openfunction https://openfunction.github.io/charts/
helm repo update
helm pull openfunction/openfunction
Then use tools like scp to copy the helm package to your offline environment, e.g.:
scp openfunction-v1.0.0-v0.5.0.tgz <username>@<your-machine-ip>:/home/<username>/
Synchronize images
You need to sync these images to your private image repository:
# dapr
docker.io/daprio/dashboard:0.10.0
docker.io/daprio/dapr:1.8.3
# keda
openfunction/keda:2.8.1
openfunction/keda-metrics-apiserver:2.8.1
# contour
docker.io/bitnami/contour:1.21.1-debian-11-r5
docker.io/bitnami/envoy:1.22.2-debian-11-r6
docker.io/bitnami/nginx:1.21.6-debian-11-r10
# tekton-pipelines
openfunction/tektoncd-pipeline-cmd-controller:v0.37.2
openfunction/tektoncd-pipeline-cmd-kubeconfigwriter:v0.37.2
openfunction/tektoncd-pipeline-cmd-git-init:v0.37.2
openfunction/tektoncd-pipeline-cmd-entrypoint:v0.37.2
openfunction/tektoncd-pipeline-cmd-nop:v0.37.2
openfunction/tektoncd-pipeline-cmd-imagedigestexporter:v0.37.2
openfunction/tektoncd-pipeline-cmd-pullrequest-init:v0.37.2
openfunction/tektoncd-pipeline-cmd-workingdirinit:v0.37.2
openfunction/cloudsdktool-cloud-sdk@sha256:27b2c22bf259d9bc1a291e99c63791ba0c27a04d2db0a43241ba0f1f20f4067f
openfunction/distroless-base@sha256:b16b57be9160a122ef048333c68ba205ae4fe1a7b7cc6a5b289956292ebf45cc
openfunction/tektoncd-pipeline-cmd-webhook:v0.37.2
# knative-serving
openfunction/knative.dev-serving-cmd-activator:v1.3.2
openfunction/knative.dev-serving-cmd-autoscaler:v1.3.2
openfunction/knative.dev-serving-cmd-queue:v1.3.2
openfunction/knative.dev-serving-cmd-controller:v1.3.2
openfunction/knative.dev-serving-cmd-domain-mapping:v1.3.2
openfunction/knative.dev-serving-cmd-domain-mapping-webhook:v1.3.2
openfunction/knative.dev-net-contour-cmd-controller:v1.3.0
openfunction/knative.dev-serving-cmd-default-domain:v1.3.2
openfunction/knative.dev-serving-cmd-webhook:v1.3.2
# shipwright-build
openfunction/shipwright-shipwright-build-controller:v0.10.0
openfunction/shipwright-io-build-git:v0.10.0
openfunction/shipwright-mutate-image:v0.10.0
openfunction/shipwright-bundle:v0.10.0
openfunction/shipwright-waiter:v0.10.0
openfunction/buildah:v1.23.3
openfunction/buildah:v1.28.0
# openfunction
openfunction/openfunction:v1.0.0
openfunction/kube-rbac-proxy:v0.8.0
openfunction/eventsource-handler:v4
openfunction/trigger-handler:v4
openfunction/dapr-proxy:v0.1.1
openfunction/revision-controller:v1.0.0
Create custom values
Create custom-values.yaml in your offline environment:
touch custom-values.yaml
Edit custom-values.yaml and add the following content:
knative-serving:
activator:
activator:
image:
repository: <your-private-image-repository>/knative.dev-serving-cmd-activator
autoscaler:
autoscaler:
image:
repository: <your-private-image-repository>/knative.dev-serving-cmd-autoscaler
configDeployment:
queueSidecarImage:
repository: <your-private-image-repository>/knative.dev-serving-cmd-queue
controller:
controller:
image:
repository: <your-private-image-repository>/knative.dev-serving-cmd-controller
domainMapping:
domainMapping:
image:
repository: <your-private-image-repository>/knative.dev-serving-cmd-domain-mapping
domainmappingWebhook:
domainmappingWebhook:
image:
repository: <your-private-image-repository>/knative.dev-serving-cmd-domain-mapping-webhook
netContourController:
controller:
image:
repository: <your-private-image-repository>/knative.dev-net-contour-cmd-controller
defaultDomain:
job:
image:
repository: <your-private-image-repository>/knative.dev-serving-cmd-default-domain
webhook:
webhook:
image:
repository: <your-private-image-repository>/knative.dev-serving-cmd-webhook
shipwright-build:
shipwrightBuildController:
shipwrightBuild:
image:
repository: <your-private-image-repository>/shipwright-shipwright-build-controller
GIT_CONTAINER_IMAGE:
repository: <your-private-image-repository>/shipwright-io-build-git
MUTATE_IMAGE_CONTAINER_IMAGE:
repository: <your-private-image-repository>/shipwright-mutate-image
BUNDLE_CONTAINER_IMAGE:
repository: <your-private-image-repository>/shipwright-bundle
WAITER_CONTAINER_IMAGE:
repository: <your-private-image-repository>/shipwright-waiter
tekton-pipelines:
controller:
tektonPipelinesController:
image:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-controller
kubeconfigWriterImage:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-kubeconfigwriter
gitImage:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-git-init
entrypointImage:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-entrypoint
nopImage:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-nop
imagedigestExporterImage:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-imagedigestexporter
prImage:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-pullrequest-init
workingdirinitImage:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-workingdirinit
gsutilImage:
repository: <your-private-image-repository>/cloudsdktool-cloud-sdk
digest: sha256:27b2c22bf259d9bc1a291e99c63791ba0c27a04d2db0a43241ba0f1f20f4067f
shellImage:
repository: <your-private-image-repository>/distroless-base
digest: sha256:b16b57be9160a122ef048333c68ba205ae4fe1a7b7cc6a5b289956292ebf45cc
webhook:
webhook:
image:
repository: <your-private-image-repository>/tektoncd-pipeline-cmd-webhook
keda:
image:
keda:
repository: <your-private-image-repository>/keda
tag: 2.8.1
metricsApiServer:
repository: <your-private-image-repository>/keda-metrics-apiserver
tag: 2.8.1
dapr:
global:
registry: <your-private-image-registry>/daprio
tag: '1.8.3'
contour:
contour:
image:
registry: <your-private-image-registry>
repository: <your-private-image-repository>/contour
tag: 1.21.1-debian-11-r5
envoy:
image:
registry: <your-private-image-registry>
repository: <your-private-image-repository>/envoy
tag: 1.22.2-debian-11-r6
defaultBackend:
image:
registry: <your-private-image-registry>
repository: <your-private-image-repository>/nginx
tag: 1.21.6-debian-11-r10
Install OpenFunction
Run the following commands in your offline environment to install OpenFunction:
kubectl create namespace openfunction
helm install openfunction openfunction-v1.0.0-v0.5.0.tgz -n openfunction -f custom-values.yaml
Note
If the helm install command gets stuck, it may be caused by the job contour-contour-certgen.
Run the following command to confirm whether the job is executed successfully:
kubectl get job contour-contour-certgen -n projectcontour
If the job exists and the job status is completed, run the following command to complete the installation:
helm uninstall openfunction -n openfunction --no-hooks
helm install openfunction openfunction-v1.0.0-v0.5.0.tgz -n openfunction -f custom-values.yaml --no-hooks
Patch ClusterBuildStrategy
If you want to build wasm functions or use buildah to build functions in an offline environment, run the following commands to patch the ClusterBuildStrategy:
kubectl patch clusterbuildstrategy buildah --type='json' -p='[{"op": "replace", "path": "/spec/buildSteps/0/image", "value":"openfunction/buildah:v1.28.0"}]'
kubectl patch clusterbuildstrategy wasmedge --type='json' -p='[{"op": "replace", "path": "/spec/buildSteps/0/image", "value":"openfunction/buildah:v1.28.0"}]'
Q: How to build and run functions in an offline environment?
A: Let’s take Java functions as an example to illustrate how to build and run functions in an offline environment:
- Synchronize https://github.com/OpenFunction/samples.git to your private code repository.
- Follow this prerequisites doc to create push-secret and git-repo-secret.
- Change the public maven repository to your private maven repository:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>dev.openfunction.samples</groupId>
    <artifactId>samples</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <repositories>
        <repository>
            <id>snapshots</id>
            <name>Maven snapshots</name>
            <!--<url>https://s01.oss.sonatype.org/content/repositories/snapshots/</url>-->
            <url>your private maven repository</url>
            <releases>
                <enabled>false</enabled>
            </releases>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>dev.openfunction.functions</groupId>
            <artifactId>functions-framework-api</artifactId>
            <version>1.0.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
</project>
Make sure to commit the changes to the code repo.
- Synchronize openfunction/buildpacks-java18-run:v1 to your private image repository.
- Modify functions/knative/java/hello-world/function-sample.yaml according to your environment:
apiVersion: core.openfunction.io/v1beta2
kind: Function
metadata:
  name: function-http-java
spec:
  version: "v2.0.0"
  image: "<your private image repository>/sample-java-func:v1"
  imageCredentials:
    name: push-secret
  build:
    builder: <your private image repository>/builder-java:v2-18
    params:
      RUN_IMAGE: "<your private image repository>/buildpacks-java18-run:v1"
    env:
      FUNC_NAME: "dev.openfunction.samples.HttpFunctionImpl"
      FUNC_CLEAR_SOURCE: "true"
    srcRepo:
      url: "https://<your private code repository>/OpenFunction/samples.git"
      sourceSubPath: "functions/knative/java"
      revision: "main"
      credentials:
        name: git-repo-secret
  serving:
    template:
      containers:
        - name: function # DO NOT change this
          imagePullPolicy: IfNotPresent
    triggers:
      http:
        port: 8080
If your private image repository is insecure, please refer to Use private image repository in an insecure way.
Run the following command to build and run the function:
kubectl apply -f functions/knative/java/hello-world/function-sample.yaml