
v0.7 (latest)

Welcome to the OpenFunction documentation site!

1 - Introduction

Overview

OpenFunction is a cloud-native open source FaaS (Function as a Service) platform aiming to let you focus on your business logic without having to maintain the underlying runtime environment and infrastructure. You can generate event-driven and dynamically scaling Serverless workloads by simply submitting business-related source code in the form of functions.

Architecture and Design

Core Features

  • Cloud agnostic and decoupled from cloud providers’ BaaS
  • Pluggable architecture that allows multiple function runtimes
  • Support both sync and async functions
  • Unique async functions support that can consume events directly from event sources
  • Support generating OCI-compliant container images directly from function source code
  • Flexible autoscaling between 0 and N
  • Advanced async function autoscaling based on event sources’ specific metrics
  • Simplified BaaS integration for both sync and async functions by introducing Dapr
  • Advanced function ingress & traffic management powered by K8s Gateway API (In Progress)
  • Flexible and easy-to-use events management framework

License

OpenFunction is licensed under the Apache License, Version 2.0. For more information, see LICENSE.

2 - Getting Started

2.1 - Installation

This document describes how to install OpenFunction.

Prerequisites

  • You need to have a Kubernetes cluster.

  • You need to ensure your Kubernetes version meets the requirements described in the following compatibility matrix.

OpenFunction Version   Kubernetes 1.17   Kubernetes 1.18   Kubernetes 1.19   Kubernetes 1.20+
HEAD                   N/A               N/A               N/A               √
v0.7.0                 N/A               N/A               N/A               √
v0.6.0                 N/A               N/A               √ *               √ *

Install OpenFunction

Now you can install OpenFunction and all its dependencies with helm charts.

The ofn CLI install method is deprecated.

Requirements

  • Kubernetes version: >=v1.20.0-0
  • Helm version: >=v3.6.3

Steps to install OpenFunction helm charts

  1. Run the following command to add the OpenFunction chart repository first:

    helm repo add openfunction https://openfunction.github.io/charts/
    helm repo update
    
  2. Then you have several options to set up OpenFunction. You can choose to:

    • Install all components:

      kubectl create namespace openfunction
      helm install openfunction openfunction/openfunction -n openfunction
      
    • Install Serving only (without build):

      kubectl create namespace openfunction
      helm install openfunction --set global.ShipwrightBuild.enabled=false --set global.TektonPipelines.enabled=false openfunction/openfunction -n openfunction
      
    • Install Knative sync runtime only:

      kubectl create namespace openfunction
      helm install openfunction --set global.Keda.enabled=false openfunction/openfunction -n openfunction
      
    • Install OpenFunction async runtime only:

      kubectl create namespace openfunction
      helm install openfunction --set global.Contour.enabled=false  --set global.KnativeServing.enabled=false openfunction/openfunction -n openfunction
      
  3. Run the following command to verify OpenFunction is up and running:

    kubectl get po -n openfunction
    

Uninstall OpenFunction

Helm

If you installed OpenFunction with Helm, run the following command to uninstall OpenFunction and its dependencies.

helm uninstall openfunction -n openfunction

Upgrade OpenFunction

helm upgrade [RELEASE_NAME] openfunction/openfunction -n openfunction

With Helm v3, CRDs created by this chart are not updated by default and should be manually updated. See also the Helm Documentation on CRDs.
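
For example, to update OpenFunction’s own CRDs manually, you can apply the CRD manifest for the target version (the v0.7.0 manifest shown here is the same one used in the upgrade steps below):

kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/openfunction.yaml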

Refer to helm upgrade for command documentation.

Upgrading an existing Release to a new version

From OpenFunction v0.6.0 to OpenFunction v0.7.x

There is a breaking change when upgrading from v0.6.0 to v0.7.x, which requires additional manual operations.

Uninstall the Chart

First, you’ll need to uninstall the old openfunction release:

helm uninstall openfunction -n openfunction

Confirm that the component namespaces have been deleted; this may take a while:

kubectl get ns -o=jsonpath='{range .items[?(@.metadata.annotations.meta\.helm\.sh/release-name=="openfunction")]}{.metadata.name}: {.status.phase}{"\n"}{end}'

If the knative-serving namespace remains in the terminating state for a long time, try running the following command and removing the finalizers:

kubectl edit ingresses.networking.internal.knative.dev -n knative-serving
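
If editing each resource by hand is tedious, a non-interactive sketch like the following should also clear the finalizers (this patches every such ingress in the namespace; verify the resource list before running it):

kubectl get ingresses.networking.internal.knative.dev -n knative-serving -o name \
  | xargs -I{} kubectl patch {} -n knative-serving --type merge -p '{"metadata":{"finalizers":null}}'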

Upgrade OpenFunction CRDs

Then you’ll need to upgrade to the new OpenFunction CRDs:

kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/openfunction.yaml

Upgrade dependent components CRDs

You also need to upgrade the dependent components’ CRDs:

You only need to deal with the components included in the existing Release.

  • knative-serving CRDs
    kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/knative-serving.yaml
    
  • shipwright-build CRDs
    kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/shipwright-build.yaml
    
  • tekton-pipelines CRDs
    kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/tekton-pipelines.yaml
    

Install new chart

helm repo update
helm install openfunction openfunction/openfunction -n openfunction

2.2 - Quickstarts

2.2.1 - Prerequisites

Registry Credential

When building a function, you’ll need to push your function container image to a container registry like Docker Hub or Quay.io. To do that you’ll need to generate a secret for your container registry first.

You can create this secret by filling in the REGISTRY_SERVER, REGISTRY_USER and REGISTRY_PASSWORD fields, and then running the following command.

REGISTRY_SERVER=https://index.docker.io/v1/
REGISTRY_USER=<your_registry_user>
REGISTRY_PASSWORD=<your_registry_password>
kubectl create secret docker-registry push-secret \
 --docker-server=$REGISTRY_SERVER \
 --docker-username=$REGISTRY_USER \
 --docker-password=$REGISTRY_PASSWORD

Source repository Credential

If your source code is in a private git repository, you’ll need to create a secret containing the private git repo’s username and password:

USERNAME=<your_git_username>
PASSWORD=<your_git_password>
kubectl create secret generic git-repo-secret \
 --from-literal=username=$USERNAME \
 --from-literal=password=$PASSWORD

You can then reference this secret in the Function CR’s spec.build.srcRepo.credentials:

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  version: "v2.0.0"
  image: "openfunctiondev/sample-go-func:v1"
  imageCredentials:
    name: push-secret
  port: 8080 # default to 8080
  build:
    builder: openfunction/builder-go:latest
    env:
      FUNC_NAME: "HelloWorld"
      FUNC_CLEAR_SOURCE: "true"
    srcRepo:
      url: "https://github.com/OpenFunction/samples.git"
      sourceSubPath: "functions/knative/hello-world-go"
      revision: "main"
      credentials:
         name: git-repo-secret
  serving:
    template:
      containers:
        - name: function # DO NOT change this
          imagePullPolicy: IfNotPresent 
    runtime: "knative"

Kafka

Async functions can be triggered by events in message queues like Kafka. Here you can find steps to set up a Kafka cluster for demo purposes.

  1. Install strimzi-kafka-operator in the default namespace.

    helm repo add strimzi https://strimzi.io/charts/
    helm install kafka-operator -n default strimzi/strimzi-kafka-operator
    
  2. Run the following command to create a Kafka cluster and Kafka Topic in the default namespace. The Kafka and Zookeeper clusters created by this command have a storage type of ephemeral and are demonstrated using emptyDir.

    Here we create a 1-replica Kafka server named <kafka-server> and a 1-replica topic named <kafka-topic> with 10 partitions.

    cat <<EOF | kubectl apply -f -
    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: <kafka-server>
      namespace: default
    spec:
      kafka:
        version: 3.1.0
        replicas: 1
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
          - name: tls
            port: 9093
            type: internal
            tls: true
        config:
          offsets.topic.replication.factor: 1
          transaction.state.log.replication.factor: 1
          transaction.state.log.min.isr: 1
          default.replication.factor: 1
          min.insync.replicas: 1
          inter.broker.protocol.version: "3.1"
        storage:
          type: ephemeral
      zookeeper:
        replicas: 1
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}
    ---
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: <kafka-topic>
      namespace: default
      labels:
        strimzi.io/cluster: <kafka-server>
    spec:
      partitions: 10
      replicas: 1
      config:
        retention.ms: 7200000
        segment.bytes: 1073741824
    EOF
    
  3. Run the following command to check Pod status and wait for Kafka and Zookeeper to be up and running.

    $ kubectl get po
    NAME                                              READY   STATUS        RESTARTS   AGE
    <kafka-server>-entity-operator-568957ff84-nmtlw   3/3     Running       0          8m42s
    <kafka-server>-kafka-0                            1/1     Running       0          9m13s
    <kafka-server>-zookeeper-0                        1/1     Running       0          9m46s
    strimzi-cluster-operator-687fdd6f77-cwmgm         1/1     Running       0          11m
    

    Run the following command to view the metadata for the Kafka cluster.

    $ kafkacat -L -b <kafka-server>-kafka-brokers:9092
    

2.2.2 - Create sync functions

Before creating any functions, make sure you’ve installed all the prerequisites.

Sync functions are functions whose input is the payload of an HTTP request, and whose response is sent to the waiting client immediately after the function logic finishes processing the input payload. Below you can find some sync function examples in different languages:

Sync Functions
Go       Hello World, Multi-functions, Sync function with path parameters, log processing, Sync function with output binding
Nodejs   Hello World, Sync function with output binding
Python   Hello World
Java     Hello World
DotNet   Hello World

You can find more function samples here
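
For orientation, a minimal Go sync function looks roughly like the sketch below. It follows the HTTP signature described later in Function Signatures; the package and function names are illustrative, and FUNC_NAME in the build env must match the function name:

package hello

import (
	"fmt"
	"net/http"
)

// HelloWorld handles an HTTP request and writes the response directly.
func HelloWorld(w http.ResponseWriter, r *http.Request) error {
	fmt.Fprint(w, "hello, world!\n")
	return nil
}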

2.2.3 - Create async functions

Before creating any functions, make sure you’ve installed all the prerequisites.

Async functions are event-driven functions whose inputs are usually events from non-HTTP event sources such as message queues, cron triggers, and MQTT brokers. The client usually does not wait for an immediate response after triggering an async function by delivering an event. Below you can find some async function examples in different languages:

Async Functions
Go       Kafka input & HTTP output binding, Cron input & Kafka output binding, Cron input binding, Kafka input binding, Kafka pubsub
Nodejs   MQTT binding & pubsub
Python
Java
DotNet

You can find more function samples here
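
For orientation, an async function in Go uses the OpenFunction signature described later in Function Signatures. The sketch below assumes the context helpers used by the official samples (ctx.Send, ctx.ReturnOnSuccess, ctx.ReturnOnInternalError); exact APIs may differ between functions-framework-go versions:

package handler

import (
	"log"

	ofctx "github.com/OpenFunction/functions-framework-go/context"
)

// Handler receives the raw payload of an event from the configured input
// (e.g. a Kafka binding) and forwards it to the output named "target".
func Handler(ctx ofctx.Context, in []byte) (ofctx.Out, error) {
	log.Printf("received event: %s", string(in))
	if _, err := ctx.Send("target", in); err != nil {
		return ctx.ReturnOnInternalError(), err
	}
	return ctx.ReturnOnSuccess(), nil
}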

3 - Concepts

3.1 - Function Definition

Function

Function is the control plane of Build and Serving, and it’s also the interface through which users interact with OpenFunction. Users don’t need to create a Build or Serving separately, because Function is the only place to define a function’s Build and Serving.

Once a function is created, it will control the lifecycle of Build and Serving without user intervention:

  • If Build is defined in a function, a builder custom resource will be created to build the function’s container image once the function is deployed.

  • If Serving is defined in a function, a serving custom resource will be created to control a function’s serving and autoscaling.

  • Build and Serving can be defined together, which means the function image will be built first and then used in serving.

  • Build can be defined without Serving; in this case the function is used only to build the image.

  • Serving can be defined without Build; the function will use a previously built function image for serving.

Build

OpenFunction uses Shipwright and Cloud Native Buildpacks to build the function source code into container images.

Once a function is created with a Build spec in it, a builder custom resource will be created, which will use Shipwright to manage the build tools and strategy. Shipwright will then use Tekton to control the process of building container images, including fetching source code, generating image artifacts, and publishing images.

Serving

Once a function is created with Serving spec, a Serving custom resource will be created to control a function’s serving phase. Currently OpenFunction Serving supports two runtimes: the Knative sync runtime and the OpenFunction async runtime.

The sync runtime

For sync functions, OpenFunction currently supports using Knative Serving as the runtime. We’re also planning to add another sync function runtime powered by the KEDA HTTP Add-on.

The async runtime

OpenFunction’s async runtime is an event-driven runtime implemented based on KEDA and Dapr. Async functions can be triggered by various event types such as message queues, cron jobs, and MQTT brokers.

Reference

For more information, see Function Specifications.

3.2 - Function Build

Currently, OpenFunction builds function images with Cloud Native Buildpacks. The traditional Dockerfile build method will be supported in the future.

Build functions by adding a build section in the function definition

You can build a function image by simply adding a build section in the Function definition like below. If there is a serving section defined as well, the function will be launched as soon as the build completes.

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: logs-async-handler
spec:
  version: "v2.0.0"
  image: openfunctiondev/logs-async-handler:v1
  imageCredentials:
    name: push-secret
  build:
    builder: openfunction/builder-go:latest
    env:
      FUNC_NAME: "LogsHandler"
      FUNC_CLEAR_SOURCE: "true"
      ## Customize functions framework version, valid for functions-framework-go for now
      ## Usually you needn't to do so because the builder will ship with the latest functions-framework
      # FUNC_FRAMEWORK_VERSION: "v0.4.0"
      ## Use FUNC_GOPROXY to set the goproxy
      # FUNC_GOPROXY: "https://goproxy.cn"
    srcRepo:
      url: "https://github.com/OpenFunction/samples.git"
      sourceSubPath: "functions/async/logs-handler-function/"
      revision: "main"

To push the function image to a container registry, you have to create a secret containing the registry’s credential and add the secret to imageCredentials. You can refer to the prerequisites for more info.

Build functions with the pack CLI

Sometimes it’s necessary to build function images directly from local source code, especially for debugging or in offline environments. You can use the pack CLI for this.

Pack is a tool maintained by the Cloud Native Buildpacks project to support the use of buildpacks. It enables the following functionality:

  • Build an application using buildpacks.
  • Rebase application images created using buildpacks.
  • Creation of various components used within the ecosystem.

Follow the instructions here to install the pack CLI tool. You can find more details on how to use the pack CLI here.

To build OpenFunction function images from source code locally, you can follow the steps below:

Download function samples

git clone https://github.com/OpenFunction/samples.git
cd samples/functions/Knative/hello-world-go

Build the function image with the pack CLI

pack build func-helloworld-go --builder openfunction/builder-go:v2.4.0-1.17 --env FUNC_NAME="HelloWorld"  --env FUNC_CLEAR_SOURCE=true

Launch the function image locally

docker run --rm --env="FUNC_CONTEXT={\"name\":\"HelloWorld\",\"version\":\"v1.0.0\",\"port\":\"8080\",\"runtime\":\"Knative\"}" --env="CONTEXT_MODE=self-host" --name func-helloworld-go -p 8080:8080 func-helloworld-go

Visit the function

curl http://localhost:8080

Output example:

hello, world!

OpenFunction Builders

To build a function image with Cloud Native Buildpacks, a builder image is needed.

Here you can find builders for popular languages maintained by the OpenFunction community:

Builders
Go       openfunction/builder-go:v2.4.0 (openfunction/builder-go:latest)
Nodejs   openfunction/builder-node:v2-16.15 (openfunction/builder-node:latest)
Java     openfunction/builder-java:v2-11, openfunction/builder-java:v2-16, openfunction/builder-java:v2-17, openfunction/builder-java:v2-18
Python   openfunction/gcp-builder:v1
DotNet   openfunction/gcp-builder:v1

3.3 - Function Inputs and Outputs

Functions usually have inputs and outputs.

Function Inputs

For a sync function, the input is always the payload of the HTTP request.

For an async function, the input data comes from:

  • Dapr input binding components
  • Dapr pub/sub brokers

Function Outputs

For a sync function, the output can be sent through the HTTP response.

Both sync functions and async functions can send outputs to Dapr components including:

  • Dapr output binding components
  • Dapr pub/sub brokers

For example, here you can find an async function with a cron input binding and a Kafka output binding:

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: cron-input-kafka-output
spec:
  ...
  serving:
    ...
    runtime: "async"
    inputs:
      - name: cron
        component: cron
    outputs:
      - name: sample
        component: kafka-server
        operation: "create"
    bindings:
      cron:
        type: bindings.cron
        version: v1
        metadata:
        - name: schedule
          value: "@every 2s"
      kafka-server:
        type: bindings.kafka
        version: v1
        metadata:
        - name: brokers
          value: "kafka-server-kafka-brokers:9092"
        - name: topics
          value: "sample-topic"
        - name: consumerGroup
          value: "bindings-with-output"
        - name: publishTopic
          value: "sample-topic"
        - name: authRequired
          value: "false"

Here is another async function example that uses a Kafka Pub/sub component as input.

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: autoscaling-subscriber
spec:
  ...
  serving:
    ...
    runtime: "async"
    inputs:
      - name: producer
        component: kafka-server
        topic: "sample-topic"
    pubsub:
      kafka-server:
        type: pubsub.kafka
        version: v1
        metadata:
          - name: brokers
            value: "kafka-server-kafka-brokers:9092"
          - name: authRequired
            value: "false"
          - name: allowedTopics
            value: "sample-topic"
          - name: consumerID
            value: "autoscaling-subscriber"

Sync functions can also send output to Dapr output binding components or Pub/sub components. Here is an example:

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-front
spec:
  serving:
    ...
    runtime: knative
    outputs:
      - name: target
        component: kafka-server
        operation: "create"
    bindings:
      kafka-server:
        type: bindings.kafka
        version: v1
        metadata:
          - name: brokers
            value: "kafka-server-kafka-brokers:9092"
          - name: authRequired
            value: "false"
          - name: publishTopic
            value: "sample-topic"
          - name: topics
            value: "sample-topic"
          - name: consumerGroup
            value: "function-front"

3.4 - Function Scaling and Triggers

Scaling is one of the core features of a FaaS or Serverless platform.

OpenFunction defines function scaling in ScaleOptions and defines triggers to activate function scaling in Triggers.

ScaleOptions

You can define unified scale options for sync and async functions like below, which will be valid for both:

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  serving:
    scaleOptions:
      minReplicas: 0
      maxReplicas: 10

Usually, simply defining minReplicas and maxReplicas is not enough for async functions. You can define separate scale options for async functions like below, which will override minReplicas and maxReplicas.

You can find more details of async function scale options in KEDA ScaleObject Spec and KEDA ScaledJob Spec.

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  serving:
    scaleOptions:
      keda:
        scaledObject:
          pollingInterval: 15
          minReplicaCount: 0
          maxReplicaCount: 10
          cooldownPeriod: 60
          advanced:
            horizontalPodAutoscalerConfig:
              behavior:
                scaleDown:
                  stabilizationWindowSeconds: 45
                  policies:
                  - type: Percent
                    value: 50
                    periodSeconds: 15
                scaleUp:
                  stabilizationWindowSeconds: 0

You can also set advanced scale options for Knative sync functions, which will override minReplicas and maxReplicas.

You can find more details of the Knative sync function scale options here.

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  serving:
    scaleOptions:
      knative:
        autoscaling:
          autoscaling.knative.dev/min-scale: 0
          autoscaling.knative.dev/max-scale: 10
          autoscaling.knative.dev/initial-scale: "1"
          autoscaling.knative.dev/scale-down-delay: "0"
          autoscaling.knative.dev/window: "60s"
          autoscaling.knative.dev/panic-window-percentage: "10.0"
          autoscaling.knative.dev/metric: "concurrency"
          autoscaling.knative.dev/target: 100

Triggers

Triggers define how to activate function scaling for async functions. You can use triggers defined in all KEDA scalers as OpenFunction’s trigger spec.

Sync functions’ scaling is activated by HTTP requests, with scaling options already defined in the previous ScaleOptions section.

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  serving:
    runtime: "async"
    triggers:
      - type: kafka
        metadata:
          topic: logs
          bootstrapServers: kafka-server-kafka-brokers.default.svc.cluster.local:9092
          consumerGroup: logs-handler
          lagThreshold: "20"

3.5 - Function Signatures

Comparison of different function signatures

There are three function signatures in OpenFunction: HTTP, CloudEvent, and OpenFunction. Let’s explain this in more detail using Go functions as an example.

HTTP and CloudEvent signatures can be used to create sync functions, while the OpenFunction signature can be used to create both sync and async functions.

Furthermore, the OpenFunction signature can utilize various Dapr building blocks, including Bindings and Pub/sub, to access BaaS services, which helps create more powerful functions. (Dapr State management and Configuration will be supported soon.)

Signature
  HTTP:          func(http.ResponseWriter, *http.Request) error
  CloudEvent:    func(context.Context, cloudevents.Event) error
  OpenFunction:  func(ofctx.Context, []byte) (ofctx.Out, error)

                  HTTP            CloudEvent      OpenFunction
Sync Functions    Supported       Supported       Supported
Async Functions   Not supported   Not supported   Supported
Dapr Binding      Not supported   Not supported   Supported
Dapr Pub/sub      Not supported   Not supported   Supported
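
To make the comparison concrete, here are the three signatures as Go stubs. The function bodies and import paths are illustrative sketches; the signatures themselves are taken verbatim from the table above:

package function

import (
	"context"
	"net/http"

	ofctx "github.com/OpenFunction/functions-framework-go/context"
	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// HTTP signature: a plain HTTP handler.
func HandleHTTP(w http.ResponseWriter, r *http.Request) error {
	_, err := w.Write([]byte("ok"))
	return err
}

// CloudEvent signature: receives a parsed CloudEvent.
func HandleCloudEvent(ctx context.Context, event cloudevents.Event) error {
	// event.Data() returns the event payload.
	return nil
}

// OpenFunction signature: usable for both sync and async functions.
func HandleOpenFunction(ctx ofctx.Context, in []byte) (ofctx.Out, error) {
	return ctx.ReturnOnSuccess(), nil
}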

Samples

As you can see, the OpenFunction signature is the recommended function signature, and we’re working on supporting it in more language runtimes.

Go
  HTTP:          Hello World, Multi-functions, Sync function with path parameters, log processing
  CloudEvent:    Sync function with path parameters
  OpenFunction:  Sync function with path parameters, Sync function with output binding, Kafka input & HTTP output binding, Cron input & Kafka output binding, Cron input binding, Kafka input binding, Kafka pubsub
Nodejs
  HTTP:          Hello World
  OpenFunction:  Sync function with output binding, MQTT binding & pubsub
Python
  HTTP:          Hello World
Java
  HTTP:          Hello World
DotNet
  HTTP:          Hello World

3.6 - Networking

3.6.1 - Introduction

Overview

Starting from v0.5.0, OpenFunction used Kubernetes Ingress to provide unified entrypoints for sync functions, and an nginx ingress controller had to be installed.

With the maturity of the Kubernetes Gateway API, we decided to implement OpenFunction Gateway based on the Kubernetes Gateway API to replace the previous ingress-based domain method in OpenFunction v0.7.0.

You can find the OpenFunction Gateway proposal here.

OpenFunction Gateway provides a more powerful and more flexible function gateway, including features like:

  • Enable users to switch to any gateway implementations that support Kubernetes Gateway API such as Contour, Istio, Apache APISIX, Envoy Gateway (in the future) and more in an easier and vendor-neutral way.

  • Users can choose to install a default gateway implementation (Contour) and then define a new gateway.networking.k8s.io or use any existing gateway implementations in their environment and then reference an existing gateway.networking.k8s.io.

  • Allow users to customize their own function access pattern like hostTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}" for host-based access.

  • Allow users to customize their own function access pattern like pathTemplate: "{{.Namespace}}/{{.Name}}" for path-based access.

  • Allow users to customize each function’s route rules (host-based, path-based or both) in function definition and default route rules are provided for each function if there’re no customized route rules defined.

  • Send traffic to Knative service revisions directly without going through Knative’s own gateway. Since OpenFunction 0.7.0, you only need OpenFunction Gateway to access OpenFunction sync functions, and you can ignore Knative’s domain config errors if you do not need to access Knative services directly.

  • Traffic splitting between function revisions (in the future)

The following diagram illustrates how client traffic flows through OpenFunction Gateway and then reaches a function directly:

3.6.2 - OpenFunction Gateway

Inside OpenFunction Gateway

Backed by the Kubernetes Gateway, an OpenFunction Gateway defines how users can access sync functions.

Whenever an OpenFunction Gateway is created, the gateway controller will:

  • Add a default listener named ofn-http-internal to gatewaySpec.listeners if there isn’t one there.

  • Generate gatewaySpec.listeners.[*].hostname based on domain or clusterDomain.

  • Inject gatewaySpec.listeners into the existing Kubernetes Gateway defined by the gatewayRef of the OpenFunction Gateway.

  • Create a new Kubernetes Gateway based on the gatewaySpec.listeners field in gatewayDef of the OpenFunction Gateway.

  • Create a service named gateway.openfunction.svc.cluster.local that defines a unified entry for sync functions.

After an OpenFunction Gateway is deployed, you’ll be able to find the status of Kubernetes Gateway and its listeners in OpenFunction Gateway status:

status:
  conditions:
  - lastTransitionTime: "2022-08-04T10:20:57Z"
    message: Gateway is scheduled
    observedGeneration: 2
    reason: Scheduled
    status: "True"
    type: Scheduled
  - lastTransitionTime: "2022-08-04T10:20:57Z"
    message: Valid Gateway
    observedGeneration: 2
    reason: Valid
    status: "True"
    type: Ready
  listeners:
  - attachedRoutes: 0
    conditions:
    - lastTransitionTime: "2022-08-04T10:20:57Z"
      message: Valid listener
      observedGeneration: 2
      reason: Ready
      status: "True"
      type: Ready
    name: ofn-http-internal
    supportedKinds:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
  - attachedRoutes: 0
    conditions:
    - lastTransitionTime: "2022-08-04T10:20:57Z"
      message: Valid listener
      observedGeneration: 2
      reason: Ready
      status: "True"
      type: Ready
    name: ofn-http-external
    supportedKinds:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute

The Default OpenFunction Gateway

OpenFunction Gateway uses Contour as the default Kubernetes Gateway implementation. The following OpenFunction Gateway will be created automatically once you install OpenFunction:

apiVersion: networking.openfunction.io/v1alpha1
kind: Gateway
metadata:
  name: openfunction
  namespace: openfunction
spec:
  domain: ofn.io
  clusterDomain: cluster.local
  hostTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}"
  pathTemplate: "{{.Namespace}}/{{.Name}}"
  httpRouteLabelKey: "app.kubernetes.io/managed-by"
  gatewayRef:
    name: contour
    namespace: projectcontour
  gatewaySpec:
    listeners:
      - name: ofn-http-internal
        hostname: "*.cluster.local"
        protocol: HTTP
        port: 80
        allowedRoutes:
          namespaces:
            from: All
      - name: ofn-http-external
        hostname: "*.ofn.io"
        protocol: HTTP
        port: 80
        allowedRoutes:
          namespaces:
            from: All

You can customize the default OpenFunction Gateway like below:

kubectl edit gateway openfunction -n openfunction

Switch to a different Kubernetes Gateway

You can switch to any gateway implementations that support Kubernetes Gateway API such as Contour, Istio, Apache APISIX, Envoy Gateway (in the future) and more in an easier and vendor-neutral way.

Here you can find more details.

Multiple OpenFunction Gateways

Multiple Gateways are meaningless for OpenFunction; we currently support only one OpenFunction Gateway.

3.6.3 - Route

What is Route?

Route is part of the Function definition. Route defines how traffic from the Gateway listener is routed to a function.

A Route specifies the Gateway to which it attaches in gatewayRef, allowing it to receive traffic from that Gateway.

Once a sync Function is created, the function controller will:

  • Look for the Gateway called openfunction in the openfunction namespace, and attach to this Gateway if route.gatewayRef is not defined in the function.
  • Automatically generate route.hostnames based on Gateway.spec.hostTemplate if route.hostnames is not defined in the function.
  • Automatically generate route.rules based on Gateway.spec.pathTemplate or a path of /, if route.rules is not defined in the function.
  • Create an HTTPRoute custom resource based on the Route. BackendRefs will be automatically linked to the corresponding Knative service revision, and the label HTTPRouteLabelKey will be added to this HTTPRoute.
  • Create a service {{.Name}}.{{.Namespace}}.svc.cluster.local; this service defines an entry for the function to be accessed within the cluster.
  • Update the HTTPRoute if the Gateway referenced by route.gatewayRef changes.

After a sync Function is deployed, you’ll be able to find Function addresses and Route status in the Function’s status field, e.g.:

status:
  addresses:
    - type: External
      value: http://function-sample-serving-only.default.ofn.io/
    - type: Internal
      value: http://function-sample-serving-only.default.svc.cluster.local/
  build:
    resourceHash: "14903236521345556383"
    state: Skipped
  route:
    conditions:
      - lastTransitionTime: "2022-08-04T10:43:29Z"
        message: Valid HTTPRoute
        observedGeneration: 1
        reason: Valid
        status: "True"
        type: Accepted
    hosts:
      - function-sample-serving-only.default.ofn.io
      - function-sample-serving-only.default.svc.cluster.local
    paths:
      - type: PathPrefix
        value: /
  serving:
    lastSuccessfulResourceRef: serving-znk54
    resourceHash: "10715302888241374768"
    resourceRef: serving-znk54
    service: serving-znk54-ksvc-nbg6f
    state: Running

Host Based Routing

Host-based is the default routing mode. When route.hostnames is not defined, it will be generated based on gateway.spec.hostTemplate. If route.rules is not defined, it will be generated based on a path of /.

kubectl apply -f - <<EOF
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  version: "v1.0.0"
  image: "openfunctiondev/v1beta1-http:latest"
  port: 8080
  serving:
    runtime: knative
    template:
      containers:
        - name: function
          imagePullPolicy: Always
  route:
    gatewayRef:
      name: openfunction
      namespace: openfunction
EOF

If you are using the default OpenFunction Gateway, the function external address will be as below:

http://function-sample.default.ofn.io/

Path Based Routing

If you define route.hostnames in a function, route.rules will be generated based on gateway.spec.pathTemplate.

kubectl apply -f - <<EOF
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  version: "v1.0.0"
  image: "openfunctiondev/v1beta1-http:latest"
  port: 8080
  serving:
    runtime: knative
    template:
      containers:
        - name: function
          imagePullPolicy: Always
  route:
    gatewayRef:
      name: openfunction
      namespace: openfunction
    hostnames:
    - "sample.ofn.io"
EOF

If you are using the default OpenFunction Gateway, the function external address will be as below:

http://sample.default.ofn.io/default/function-sample/

Host and Path based routing

You can define hostname and path at the same time to customize how traffic should be routed to your function.

kubectl apply -f - <<EOF
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  version: "v1.0.0"
  image: "openfunctiondev/v1beta1-http:latest"
  port: 8080
  serving:
    runtime: knative
    template:
      containers:
        - name: function
          imagePullPolicy: Always
  route:
    gatewayRef:
      name: openfunction
      namespace: openfunction
    rules:
      - matches:
          - path:
              type: PathPrefix
              value: /v2/foo
    hostnames:
    - "sample.ofn.io"
EOF

If you are using the default OpenFunction Gateway, the function external address will be as below:

http://sample.default.ofn.io/v2/foo/

3.6.4 - Function Entrypoints

There are several methods to access a sync function. Let’s elaborate on this in the following section.

This documentation assumes you are using the default OpenFunction Gateway and have a sync function named function-sample.

Access functions from within the cluster

Access functions by the internal address

OpenFunction will create this service for every sync Function: {{.Name}}.{{.Namespace}}.svc.cluster.local. This service provides the Function’s internal address.

Get the Function’s internal address by running the following command:

export FUNC_INTERNAL_ADDRESS=$(kubectl get function function-sample -o=jsonpath='{.status.addresses[?(@.type=="Internal")].value}')

This address provides the default method for accessing functions within the cluster; it’s suitable for use as the sink.url of an EventSource.

Access the Function using curl in a pod:

kubectl run --rm ofn-test -i --tty --image=radial/busyboxplus:curl -- curl -sv $FUNC_INTERNAL_ADDRESS

Access functions from outside the cluster

Access functions by the Kubernetes Gateway’s IP address

Get the Kubernetes Gateway’s IP address:

export IP=$(kubectl get node -l "! node.kubernetes.io/exclude-from-external-load-balancers" -o=jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

Get the Function’s HOST and PATH:

export FUNC_HOST=$(kubectl get function function-sample -o=jsonpath='{.status.route.hosts[0]}')
export FUNC_PATH=$(kubectl get function function-sample -o=jsonpath='{.status.route.paths[0].value}')

Access the Function using curl directly:

curl -sv -H "Host: $FUNC_HOST" http://$IP$FUNC_PATH

Access functions by the external address

To access a sync function by the external address, you’ll need to configure DNS first. Either Magic DNS or real DNS works:

  • Magic DNS

    Get the Kubernetes Gateway’s IP address:

    export IP=$(kubectl get node -l "! node.kubernetes.io/exclude-from-external-load-balancers" -o=jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
    

    Replace the domain defined in OpenFunction Gateway with Magic DNS:

    export DOMAIN="$IP.sslip.io"
    kubectl patch gateway.networking.openfunction.io/openfunction -n openfunction --type merge --patch '{"spec": {"domain": "'$DOMAIN'"}}'
    

    Then, you can see the Function external address in the Function’s status field:

    kubectl get function function-sample -oyaml
    
    status:
      addresses:
      - type: External
        value: http://function-sample.default.172.31.73.53.sslip.io/
      - type: Internal
        value: http://function-sample.default.svc.cluster.local/
      build:
        resourceHash: "14903236521345556383"
        state: Skipped
      route:
        conditions:
        - lastTransitionTime: "2022-08-20T11:03:14Z"
          message: Valid HTTPRoute
          observedGeneration: 13
          reason: Valid
          status: "True"
          type: Accepted
        hosts:
        - function-sample.default.172.31.73.53.sslip.io
        - function-sample.default.svc.cluster.local
        paths:
        - type: PathPrefix
          value: /
      serving:
        lastSuccessfulResourceRef: serving-t56fq
        resourceHash: "2638289828407595605"
        resourceRef: serving-t56fq
        service: serving-t56fq-ksvc-bv8ng
        state: Running
    
    
  • Real DNS

    If you have an external IP address, you can configure a wildcard A record as your domain:

    # Here example.com is the domain defined in OpenFunction Gateway
    *.example.com == A <external-ip>
    

    If you have a CNAME, you can configure a CNAME record as your domain:

    # Here example.com is the domain defined in OpenFunction Gateway
    *.example.com == CNAME <external-name>
    

    Replace the domain defined in OpenFunction Gateway with the domain you configured above:

    export DOMAIN="example.com"
    kubectl patch gateway.networking.openfunction.io/openfunction -n openfunction --type merge --patch '{"spec": {"domain": "'$DOMAIN'"}}'
    

    Then, you can see the Function external address in the Function’s status field:

    kubectl get function function-sample -oyaml
    
    status:
      addresses:
      - type: External
        value: http://function-sample.default.example.com/
      - type: Internal
        value: http://function-sample.default.svc.cluster.local/
      build:
        resourceHash: "14903236521345556383"
        state: Skipped
      route:
        conditions:
        - lastTransitionTime: "2022-08-20T13:07:17Z"
          message: Valid HTTPRoute
          observedGeneration: 14
          reason: Valid
          status: "True"
          type: Accepted
        hosts:
        - function-sample.default.example.com
        - function-sample.default.svc.cluster.local
        paths:
        - type: PathPrefix
          value: /
      serving:
        lastSuccessfulResourceRef: serving-t56fq
        resourceHash: "2638289828407595605"
        resourceRef: serving-t56fq
        service: serving-t56fq-ksvc-bv8ng
        state: Running
    
    

Then, you can get the Function external address by running the following command:

export FUNC_EXTERNAL_ADDRESS=$(kubectl get function function-sample -o=jsonpath='{.status.addresses[?(@.type=="External")].value}')

Now, you can access the Function using curl directly:

curl -sv $FUNC_EXTERNAL_ADDRESS

3.7 - OpenFunction Events

3.7.1 - Introduction

Overview

OpenFunction Events is OpenFunction’s event management framework. It provides the following core features:

  • Support for triggering target functions by synchronous and asynchronous calls
  • User-defined trigger judgment logic
  • The components of OpenFunction Events can be driven by OpenFunction itself

Architecture

The following diagram illustrates the architecture of OpenFunction Events.

[Diagram: OpenFunction Events architecture]

Concepts

EventSource

EventSource defines the producer of an event, such as a Kafka service, an object storage service, and even a function. It contains descriptions of these event producers and information about where to send these events.

EventSource supports the following types of event sources:

  • Kafka
  • Cron (scheduler)
  • Redis

EventBus (ClusterEventBus)

EventBus is responsible for aggregating events and making them persistent. It contains descriptions of an event bus broker, which is usually a message queue (such as NATS Streaming or Kafka), and provides these configurations to EventSource and Trigger.

EventBus handles event bus adaptation at the namespace scope by default. For the cluster scope, ClusterEventBus is available as an event bus adapter; it takes effect when other components cannot find an EventBus in their namespace.

EventBus supports the following event bus brokers:

  • NATS Streaming

Trigger

Trigger is an abstraction of the purpose of an event, such as what needs to be done when a message is received. It contains the purpose of an event defined by you, which tells the Trigger which EventSource it should fetch events from and subsequently whether to trigger the target function according to the given conditions.

Reference

For more information, see EventSource Specifications and EventBus Specifications.

3.7.2 - Use EventSource

This document gives an example of how to use an event source to trigger a synchronous function.

In this example, an EventSource is defined for synchronous invocation to use the event source (a Kafka server) as an input binding of a function (a Knative service). When the event source generates an event, it will invoke the function and get a synchronous response through the spec.sink configuration.

Create a Function

Use the following content to create a function as the EventSource Sink. For more information about how to create a function, see Create sync functions.

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: sink
spec:
  version: "v1.0.0"
  image: "openfunction/sink-sample:latest"
  port: 8080
  serving:
    runtime: "knative"
    template:
      containers:
        - name: function
          imagePullPolicy: Always

After the function is created, run the following command to get the URL of the function.

$ kubectl get functions.core.openfunction.io
NAME   BUILDSTATE   SERVINGSTATE   BUILDER   SERVING         URL                                   AGE
sink   Skipped      Running                  serving-4x5wh   https://openfunction.io/default/sink   13s

Create a Kafka Cluster

  1. Run the following commands to install strimzi-kafka-operator in the default namespace.

    helm repo add strimzi https://strimzi.io/charts/
    helm install kafka-operator -n default strimzi/strimzi-kafka-operator
    
  2. Use the following content to create a file kafka.yaml.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: kafka-server
      namespace: default
    spec:
      kafka:
        version: 3.1.0
        replicas: 1
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
          - name: tls
            port: 9093
            type: internal
            tls: true
        config:
          offsets.topic.replication.factor: 1
          transaction.state.log.replication.factor: 1
          transaction.state.log.min.isr: 1
          default.replication.factor: 1
          min.insync.replicas: 1
          inter.broker.protocol.version: "3.1"
        storage:
          type: ephemeral
      zookeeper:
        replicas: 1
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}
    ---
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: events-sample
      namespace: default
      labels:
        strimzi.io/cluster: kafka-server
    spec:
      partitions: 10
      replicas: 1
      config:
        retention.ms: 7200000
        segment.bytes: 1073741824
    
  3. Run the following command to deploy a 1-replica Kafka server named kafka-server and 1-replica Kafka topic named events-sample in the default namespace. The Kafka and Zookeeper clusters created by this command have a storage type of ephemeral and are demonstrated using emptyDir.

    kubectl apply -f kafka.yaml
    
  4. Run the following command to check pod status and wait for Kafka and Zookeeper to be up and running.

    $ kubectl get po
    NAME                                              READY   STATUS        RESTARTS   AGE
    kafka-server-entity-operator-568957ff84-nmtlw     3/3     Running       0          8m42s
    kafka-server-kafka-0                              1/1     Running       0          9m13s
    kafka-server-zookeeper-0                          1/1     Running       0          9m46s
    strimzi-cluster-operator-687fdd6f77-cwmgm         1/1     Running       0          11m
    
  5. Run the following command to view the metadata of the Kafka cluster.

    kafkacat -L -b kafka-server-kafka-brokers:9092
    

Trigger a Synchronous Function

Create an EventSource

  1. Use the following content to create an EventSource configuration file (for example, eventsource-sink.yaml).

    apiVersion: events.openfunction.io/v1alpha1
    kind: EventSource
    metadata:
      name: my-eventsource
    spec:
      logLevel: "2"
      kafka:
        sample-one:
          brokers: "kafka-server-kafka-brokers.default.svc.cluster.local:9092"
          topic: "events-sample"
          authRequired: false
      sink:
        uri: "http://openfunction.io.svc.cluster.local/default/sink"
    
  2. Run the following command to apply the configuration file.

    kubectl apply -f eventsource-sink.yaml
    
  3. Run the following commands to check the results.

    $ kubectl get eventsources.events.openfunction.io
    NAME             EVENTBUS   SINK   STATUS
    my-eventsource                     Ready
    
    $ kubectl get components
    NAME                                                      AGE
    serving-8f6md-component-esc-kafka-sample-one-r527t        68m
    serving-8f6md-component-ts-my-eventsource-default-wz8jt   68m
    
    $ kubectl get deployments.apps
    NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
    serving-8f6md-deployment-v100-pg9sd            1/1     1            1           68m
    

Create an event producer

To start the target function, you need to create some events to trigger the function.

  1. Use the following content to create an event producer configuration file (for example, events-producer.yaml).

    apiVersion: core.openfunction.io/v1beta1
    kind: Function
    metadata:
      name: events-producer
    spec:
      version: "v1.0.0"
      image: openfunctiondev/v1beta1-bindings:latest
      serving:
        template:
          containers:
            - name: function
              imagePullPolicy: Always
        runtime: "async"
        inputs:
          - name: cron
            component: cron
        outputs:
          - name: target
            component: kafka-server
            operation: "create"
        bindings:
          cron:
            type: bindings.cron
            version: v1
            metadata:
              - name: schedule
                value: "@every 2s"
          kafka-server:
            type: bindings.kafka
            version: v1
            metadata:
              - name: brokers
                value: "kafka-server-kafka-brokers:9092"
              - name: topics
                value: "events-sample"
              - name: consumerGroup
                value: "bindings-with-output"
              - name: publishTopic
                value: "events-sample"
              - name: authRequired
                value: "false"
    
  2. Run the following command to apply the configuration file.

    kubectl apply -f events-producer.yaml
    
  3. Run the following command to check the results in real time.

    $ kubectl get po --watch
    NAME                                                           READY   STATUS              RESTARTS   AGE
    serving-k6zw8-deployment-v100-fbtdc-dc96c4589-s25dh            0/2     ContainerCreating   0          1s
    serving-8f6md-deployment-v100-pg9sd-6666c5577f-4rpdg           2/2     Running             0          23m
    serving-k6zw8-deployment-v100-fbtdc-dc96c4589-s25dh            0/2     ContainerCreating   0          1s
    serving-k6zw8-deployment-v100-fbtdc-dc96c4589-s25dh            1/2     Running             0          5s
    serving-k6zw8-deployment-v100-fbtdc-dc96c4589-s25dh            2/2     Running             0          8s
    serving-4x5wh-ksvc-wxbf2-v100-deployment-5c495c84f6-8n6mk      0/2     Pending             0          0s
    serving-4x5wh-ksvc-wxbf2-v100-deployment-5c495c84f6-8n6mk      0/2     Pending             0          0s
    serving-4x5wh-ksvc-wxbf2-v100-deployment-5c495c84f6-8n6mk      0/2     ContainerCreating   0          0s
    serving-4x5wh-ksvc-wxbf2-v100-deployment-5c495c84f6-8n6mk      0/2     ContainerCreating   0          2s
    serving-4x5wh-ksvc-wxbf2-v100-deployment-5c495c84f6-8n6mk      1/2     Running             0          4s
    serving-4x5wh-ksvc-wxbf2-v100-deployment-5c495c84f6-8n6mk      1/2     Running             0          4s
    serving-4x5wh-ksvc-wxbf2-v100-deployment-5c495c84f6-8n6mk      2/2     Running             0          4s
    

3.7.3 - Use EventBus and Trigger

This document gives an example of how to use EventBus and Trigger.

Prerequisites

  • You need to create a function as the target function to be triggered. Please refer to Create a function for more details.
  • You need to create a Kafka cluster. Please refer to Create a Kafka cluster for more details.

Deploy a NATS streaming server

Run the following commands to deploy a NATS streaming server. This document uses nats://nats.default:4222 as the access address of the NATS streaming server and stan as the cluster ID. For more information, see NATS Streaming (STAN).

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install nats nats/nats
helm install stan nats/stan --set stan.nats.url=nats://nats:4222

Create an OpenFuncAsync Runtime Function

  1. Use the following content to create a configuration file (for example, openfuncasync-function.yaml) for the target function, which is triggered by the Trigger CRD and prints the received message.

    apiVersion: core.openfunction.io/v1beta1
    kind: Function
    metadata:
      name: trigger-target
    spec:
      version: "v1.0.0"
      image: openfunctiondev/v1beta1-trigger-target:latest
      port: 8080
      serving:
        runtime: "async"
        scaleOptions:
          keda:
            scaledObject:
              pollingInterval: 15
              minReplicaCount: 0
              maxReplicaCount: 10
              cooldownPeriod: 30
        triggers:
          - type: stan
            metadata:
              natsServerMonitoringEndpoint: "stan.default.svc.cluster.local:8222"
              queueGroup: "grp1"
              durableName: "ImDurable"
              subject: "metrics"
              lagThreshold: "10"
        inputs:
          - name: autoscaling-pubsub
            component: eventbus
            topic: metrics
        pubsub:
          eventbus:
            type: pubsub.natsstreaming
            version: v1
            metadata:
              - name: natsURL
                value: "nats://nats.default:4222"
              - name: natsStreamingClusterID
                value: "stan"
              - name: subscriptionType
                value: "queue"
              - name: durableSubscriptionName
                value: "ImDurable"
              - name: consumerID
                value: "grp1"
    
  2. Run the following command to apply the configuration file.

    kubectl apply -f openfuncasync-function.yaml
    

Create an EventBus and an EventSource

  1. Use the following content to create a configuration file (for example, eventbus.yaml) for an EventBus.

    apiVersion: events.openfunction.io/v1alpha1
    kind: EventBus
    metadata:
      name: default
    spec:
      natsStreaming:
        natsURL: "nats://nats.default:4222"
        natsStreamingClusterID: "stan"
        subscriptionType: "queue"
        durableSubscriptionName: "ImDurable"
    
  2. Use the following content to create a configuration file (for example, eventsource.yaml) for an EventSource.

    apiVersion: events.openfunction.io/v1alpha1
    kind: EventSource
    metadata:
      name: my-eventsource
    spec:
      logLevel: "2"
      eventBus: "default"
      kafka:
        sample-two:
          brokers: "kafka-server-kafka-brokers.default.svc.cluster.local:9092"
          topic: "events-sample"
          authRequired: false
    
  3. Run the following commands to apply these configuration files.

    kubectl apply -f eventbus.yaml
    kubectl apply -f eventsource.yaml
    
  4. Run the following commands to check the results.

    $ kubectl get eventsources.events.openfunction.io
    NAME             EVENTBUS   SINK   STATUS
    my-eventsource   default           Ready
    
    $ kubectl get eventbus.events.openfunction.io
    NAME      AGE
    default   6m53s
    
    $ kubectl get components
    NAME                                                 AGE
    serving-6r5dl-component-eventbus-jlpqf               11m
    serving-9689d-component-ebfes-my-eventsource-cmcbw   6m57s
    serving-9689d-component-esc-kafka-sample-two-l99cg   6m57s
    serving-k6zw8-component-cron-9x8hl                   61m
    serving-k6zw8-component-kafka-server-sjrzs           61m
    
    $ kubectl get deployments.apps
    NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
    serving-6r5dl-deployment-v100-m7nq2        0/0     0            0           12m
    serving-9689d-deployment-v100-5qdvk        1/1     1            1           7m17s
    

Create a Trigger

  1. Use the following content to create a configuration file (for example, trigger.yaml) for a Trigger.

    apiVersion: events.openfunction.io/v1alpha1
    kind: Trigger
    metadata:
      name: my-trigger
    spec:
      logLevel: "2"
      eventBus: "default"
      inputs:
        inputDemo:
          eventSource: "my-eventsource"
          event: "sample-two"
      subscribers:
        - condition: inputDemo
          topic: "metrics"
    
  2. Run the following command to apply the configuration file.

    kubectl apply -f trigger.yaml
    
  3. Run the following commands to check the results.

    $ kubectl get triggers.events.openfunction.io
    NAME         EVENTBUS   STATUS
    my-trigger   default    Ready
    
    $ kubectl get eventbus.events.openfunction.io
    NAME      AGE
    default   62m
    
    $ kubectl get components
    NAME                                                 AGE
    serving-9689d-component-ebfes-my-eventsource-cmcbw   46m
    serving-9689d-component-esc-kafka-sample-two-l99cg   46m
    serving-dxrhd-component-eventbus-t65q7               13m
    serving-zwlj4-component-ebft-my-trigger-4925n        100s
    

Create an Event Producer

  1. Use the following content to create an event producer configuration file (for example, events-producer.yaml).

    apiVersion: core.openfunction.io/v1beta1
    kind: Function
    metadata:
      name: events-producer
    spec:
      version: "v1.0.0"
      image: openfunctiondev/v1beta1-bindings:latest
      serving:
        template:
          containers:
            - name: function
              imagePullPolicy: Always
        runtime: "async"
        inputs:
          - name: cron
            component: cron
        outputs:
          - name: target
            component: kafka-server
            operation: "create"
        bindings:
          cron:
            type: bindings.cron
            version: v1
            metadata:
              - name: schedule
                value: "@every 2s"
          kafka-server:
            type: bindings.kafka
            version: v1
            metadata:
              - name: brokers
                value: "kafka-server-kafka-brokers:9092"
              - name: topics
                value: "events-sample"
              - name: consumerGroup
                value: "bindings-with-output"
              - name: publishTopic
                value: "events-sample"
              - name: authRequired
                value: "false"
    
  2. Run the following command to apply the configuration file.

    kubectl apply -f events-producer.yaml
    
  3. Run the following commands to observe changes of the target asynchronous function.

    $ kubectl get functions.core.openfunction.io
    NAME                                  BUILDSTATE   SERVINGSTATE   BUILDER   SERVING         URL                                   AGE
    trigger-target                        Skipped      Running                  serving-dxrhd                                         20m
    
    $ kubectl get po --watch
    NAME                                                     READY   STATUS              RESTARTS   AGE
    serving-dxrhd-deployment-v100-xmrkq-785cb5f99-6hclm      0/2     Pending             0          0s
    serving-dxrhd-deployment-v100-xmrkq-785cb5f99-6hclm      0/2     Pending             0          0s
    serving-dxrhd-deployment-v100-xmrkq-785cb5f99-6hclm      0/2     ContainerCreating   0          0s
    serving-dxrhd-deployment-v100-xmrkq-785cb5f99-6hclm      0/2     ContainerCreating   0          2s
    serving-dxrhd-deployment-v100-xmrkq-785cb5f99-6hclm      1/2     Running             0          4s
    serving-dxrhd-deployment-v100-xmrkq-785cb5f99-6hclm      1/2     Running             0          4s
    serving-dxrhd-deployment-v100-xmrkq-785cb5f99-6hclm      2/2     Running             0          4s
    

3.7.4 - Use Multiple Sources in One EventSource

This document describes how to use multiple sources in one EventSource.

Prerequisites

  • You need to create a function as the target function to be triggered. Please refer to Create a function for more details.
  • You need to create a Kafka cluster. Please refer to Create a Kafka cluster for more details.

Use Multiple Sources in One EventSource

  1. Use the following content to create an EventSource configuration file (for example, eventsource-multi.yaml).

    apiVersion: events.openfunction.io/v1alpha1
    kind: EventSource
    metadata:
      name: my-eventsource
    spec:
      logLevel: "2"
      kafka:
        sample-three:
          brokers: "kafka-server-kafka-brokers.default.svc.cluster.local:9092"
          topic: "events-sample"
          authRequired: false
      cron:
        sample-three:
          schedule: "@every 5s" 
      sink:
        uri: "http://openfunction.io.svc.cluster.local/default/sink"
    
  2. Run the following command to apply the configuration file.

    kubectl apply -f eventsource-multi.yaml
    
  3. Run the following commands to observe changes.

    $ kubectl get eventsources.events.openfunction.io
    NAME             EVENTBUS   SINK   STATUS
    my-eventsource                     Ready
    
    $ kubectl get components
    NAME                                                      AGE
    serving-vqfk5-component-esc-cron-sample-three-dzcpv       35s
    serving-vqfk5-component-esc-kafka-sample-one-nr9pq        35s
    serving-vqfk5-component-ts-my-eventsource-default-q6g6m   35s
    
    $ kubectl get deployments.apps
    NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
    serving-4x5wh-ksvc-wxbf2-v100-deployment   1/1     1            1           3h14m
    serving-vqfk5-deployment-v100-vdmvj        1/1     1            1           48s
    

3.7.5 - Use ClusterEventBus

This document describes how to use a ClusterEventBus.

Prerequisites

You have finished the steps described in Use EventBus and Trigger.

Use a ClusterEventBus

  1. Use the following content to create a ClusterEventBus configuration file (for example, clustereventbus.yaml).

    apiVersion: events.openfunction.io/v1alpha1
    kind: ClusterEventBus
    metadata:
      name: default
    spec:
      natsStreaming:
        natsURL: "nats://nats.default:4222"
        natsStreamingClusterID: "stan"
        subscriptionType: "queue"
        durableSubscriptionName: "ImDurable"
    
  2. Run the following command to delete EventBus.

    kubectl delete eventbus.events.openfunction.io default
    
  3. Run the following command to apply the configuration file.

    kubectl apply -f clustereventbus.yaml
    
  4. Run the following commands to check the results.

    $ kubectl get eventbus.events.openfunction.io
    No resources found in default namespace.
    
    $ kubectl get clustereventbus.events.openfunction.io
    NAME      AGE
    default   21s
    

3.7.6 - Use Trigger Conditions

This document describes how to use Trigger conditions.

Prerequisites

You have finished the steps described in Use EventBus and Trigger.

Use Trigger Conditions

Create two event sources

  1. Use the following content to create an EventSource configuration file (for example, eventsource-a.yaml).

    apiVersion: events.openfunction.io/v1alpha1
    kind: EventSource
    metadata:
      name: eventsource-a
    spec:
      logLevel: "2"
      eventBus: "default"
      kafka:
        sample-five:
          brokers: "kafka-server-kafka-brokers.default.svc.cluster.local:9092"
          topic: "events-sample"
          authRequired: false
    
  2. Use the following content to create another EventSource configuration file (for example, eventsource-b.yaml).

    apiVersion: events.openfunction.io/v1alpha1
    kind: EventSource
    metadata:
      name: eventsource-b
    spec:
      logLevel: "2"
      eventBus: "default"
      cron:
        sample-five:
          schedule: "@every 5s" 
    
  3. Run the following commands to apply these two configuration files.

    kubectl apply -f eventsource-a.yaml
    kubectl apply -f eventsource-b.yaml
    

Create a trigger with condition

  1. Use the following content to create a configuration file (for example, condition-trigger.yaml) for a Trigger with condition.

    apiVersion: events.openfunction.io/v1alpha1
    kind: Trigger
    metadata:
      name: condition-trigger
    spec:
      logLevel: "2"
      eventBus: "default"
      inputs:
        eventA:
          eventSource: "eventsource-a"
          event: "sample-five"
        eventB:
          eventSource: "eventsource-b"
          event: "sample-five"
      subscribers:
      - condition: eventB
        sink:
          uri: "http://openfunction.io.svc.cluster.local/default/sink"
      - condition: eventA && eventB
        topic: "metrics"
    
  2. Run the following commands to apply the configuration files.

    kubectl apply -f condition-trigger.yaml
    
  3. Run the following commands to check the results.

    $ kubectl get eventsources.events.openfunction.io
    NAME            EVENTBUS   SINK   STATUS
    eventsource-a   default           Ready
    eventsource-b   default           Ready
    
    $ kubectl get triggers.events.openfunction.io
    NAME                EVENTBUS   STATUS
    condition-trigger   default    Ready
    
    $ kubectl get eventbus.events.openfunction.io
    NAME      AGE
    default   12s
    
  4. Run the following commands. Because the event source eventsource-b is a cron task, the eventB condition in the Trigger is matched and the Knative service is triggered, as you can see from the output.

    $ kubectl get functions.core.openfunction.io
    NAME                                  BUILDSTATE   SERVINGSTATE   BUILDER   SERVING         URL                                   AGE
    sink                                  Skipped      Running                  serving-4x5wh   https://openfunction.io/default/sink   3h25m
    
    $ kubectl get po
    NAME                                                        READY   STATUS    RESTARTS   AGE
    serving-4x5wh-ksvc-wxbf2-v100-deployment-5c495c84f6-k2jdg   2/2     Running   0          46s
    
  5. Create an event producer by referring to Create an Event Producer.

  6. Run the following commands. You can see from the output that the eventA && eventB condition in the Trigger is matched, the event is sent to the metrics topic of the event bus, and the OpenFuncAsync function is triggered.

    $ kubectl get functions.core.openfunction.io
    NAME                                  BUILDSTATE   SERVINGSTATE   BUILDER   SERVING         URL                                   AGE
    trigger-target                        Skipped      Running                  serving-7hghp                                         103s
    
    $ kubectl get po
    NAME                                                        READY   STATUS    RESTARTS   AGE
    serving-7hghp-deployment-v100-z8wrf-946b4854d-svf55         2/2     Running   0          18s
    

4 - Operations

4.1 - Networking

4.1.1 - Switch to another Kubernetes Gateway

You can switch to any gateway implementation that supports the Kubernetes Gateway API, such as Contour, Istio, Apache APISIX, Envoy Gateway (in the future), and more, in an easy and vendor-neutral way.

For example, you can choose to use Istio as the underlying Kubernetes Gateway like this:

  1. Install OpenFunction without Contour:
helm install openfunction --set global.Contour.enabled=false openfunction/openfunction -n openfunction
  2. Install Istio and then enable its Knative integration:
kubectl apply -l knative.dev/crd-install=true -f https://github.com/knative/net-istio/releases/download/knative-v1.3.0/istio.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.3.0/istio.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.3.0/net-istio.yaml
  3. Create a GatewayClass named istio:
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GatewayClass
metadata:
  name: istio
spec:
  controllerName: istio.io/gateway-controller
  description: The default Istio GatewayClass
EOF
  4. Create an OpenFunction Gateway:
kubectl apply -f - <<EOF
apiVersion: networking.openfunction.io/v1alpha1
kind: Gateway
metadata:
  name: custom-gateway
  namespace: openfunction
spec:
  domain: ofn.io
  clusterDomain: cluster.local
  hostTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}"
  pathTemplate: "{{.Namespace}}/{{.Name}}"
  gatewayDef:
    namespace: openfunction
    gatewayClassName: istio
  gatewaySpec:
    listeners:
    - name: ofn-http-external
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
EOF
  5. Reference the custom OpenFunction Gateway (Istio) in the gatewayRef field of a Function:
kubectl apply -f - <<EOF
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  version: "v1.0.0"
  image: "openfunctiondev/v1beta1-http:latest"
  port: 8080
  serving:
    runtime: knative
    template:
      containers:
        - name: function
          imagePullPolicy: Always
  route:
    gatewayRef:
      name: custom-gateway
      namespace: openfunction
EOF
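
After the Function is created, you can check that its route uses the custom gateway; the URL column should reflect the host and path templates of the Gateway defined above:

    kubectl get functions.core.openfunction.io function-sample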

4.1.2 - Configure Local Domain

Configure Local Domain

By configuring the local domain, you can access functions from within a Kubernetes cluster through the function’s external address.

Configure CoreDNS based on Gateway.spec.domain

Assuming you have a Gateway that defines the domain *.ofn.io, you need to update the CoreDNS configuration with the following commands:

  1. Edit the coredns configmap:
kubectl -n kube-system edit cm coredns -o yaml
  2. Add rewrite stop name regex .*\.ofn\.io gateway.openfunction.svc.cluster.local to the configuration file in the .:53 section, for example:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        rewrite stop name regex .*\.ofn\.io gateway.openfunction.svc.cluster.local
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }    
...

Configure nodelocaldns based on Gateway.spec.domain

If you are also using nodelocaldns (as KubeSphere does), you need to update the nodelocaldns configuration with the following commands:

  1. Edit the nodelocaldns configmap:
kubectl -n kube-system edit cm nodelocaldns -o yaml
  2. Add an ofn.io:53 section to the configuration file, for example:
apiVersion: v1
data:
  Corefile: |
    ofn.io:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
    }
    cluster.local:53 {
        errors
        cache {
            success 9984 30
            denial 9984 5
        }
        reload
        loop
        bind 169.254.25.10
        forward . 10.233.0.3 {
            force_tcp
        }
        prometheus :9253
        health 169.254.25.10:9254
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind 169.254.25.10
        forward . /etc/resolv.conf
        prometheus :9253
    }    
...
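
Once DNS is configured, you can verify in-cluster access to a function through its external address. A minimal sketch, assuming a Knative-runtime function named function-sample in the default namespace and the *.ofn.io domain with the default host template:

    kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm
    [ root@curl:/ ]$ curl http://function-sample.default.ofn.io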

5 - Best Practices

For more examples of using OpenFunction, refer to the samples repository, such as Autoscaling service based on queue depth.

5.1 - Create a Knative-based Function to Interact with Middleware

Learn how to create a Knative-based function to interact with middleware via Dapr components.

This document describes how to create a Knative-based function to interact with middleware via Dapr components.

Overview

Similar to asynchronous functions, the functions that are based on Knative runtime can interact with middleware through Dapr components. This document uses two functions, function-front and kafka-input, for demonstration.

The following diagram illustrates the relationship between these functions.

Prerequisites

Create a Kafka Server and Topic

  1. Run the following commands to install strimzi-kafka-operator in the default namespace.

    helm repo add strimzi https://strimzi.io/charts/
    helm install kafka-operator -n default strimzi/strimzi-kafka-operator
    
  2. Use the following content to create a file kafka.yaml.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: kafka-server
      namespace: default
    spec:
      kafka:
        version: 3.1.0
        replicas: 1
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
          - name: tls
            port: 9093
            type: internal
            tls: true
        config:
          offsets.topic.replication.factor: 1
          transaction.state.log.replication.factor: 1
          transaction.state.log.min.isr: 1
          default.replication.factor: 1
          min.insync.replicas: 1
          inter.broker.protocol.version: "3.1"
        storage:
          type: ephemeral
      zookeeper:
        replicas: 1
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}
    ---
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: sample-topic
      namespace: default
      labels:
        strimzi.io/cluster: kafka-server
    spec:
      partitions: 10
      replicas: 1
      config:
        retention.ms: 7200000
        segment.bytes: 1073741824
    
  3. Run the following command to deploy a 1-replica Kafka server named kafka-server and 1-replica Kafka topic named sample-topic in the default namespace.

    kubectl apply -f kafka.yaml
    
  4. Run the following command to check pod status and wait for Kafka and Zookeeper to be up and running.

    $ kubectl get po
    NAME                                              READY   STATUS        RESTARTS   AGE
    kafka-server-entity-operator-568957ff84-nmtlw     3/3     Running       0          8m42s
    kafka-server-kafka-0                              1/1     Running       0          9m13s
    kafka-server-zookeeper-0                          1/1     Running       0          9m46s
    strimzi-cluster-operator-687fdd6f77-cwmgm         1/1     Running       0          11m
    
  5. Run the following commands to view the metadata of the Kafka cluster.

    # Starts a utility pod.
    $ kubectl run utils --image=arunvelsriram/utils -i --tty --rm
    # Checks metadata of the Kafka cluster.
    $ kafkacat -L -b kafka-server-kafka-brokers:9092
    

Create Functions

  1. Use the following example YAML file to create a manifest kafka-input.yaml and modify the value of spec.image to set your own image registry address. The field spec.serving.inputs defines an input source that points to the Dapr component of the Kafka server. This means that the kafka-input function will be driven by events in the topic sample-topic of the Kafka server.

    apiVersion: core.openfunction.io/v1beta1
    kind: Function
    metadata:
      name: kafka-input
    spec:
      version: "v1.0.0"
      image: <your registry name>/kafka-input:latest
      imageCredentials:
        name: push-secret
      build:
        builder: openfunction/builder-go:latest
        env:
          FUNC_NAME: "HandleKafkaInput"
          FUNC_CLEAR_SOURCE: "true"
        srcRepo:
          url: "https://github.com/OpenFunction/samples.git"
          sourceSubPath: "functions/async/bindings/kafka-input"
          revision: "main"
      serving:
        runtime: async
        scaleOptions:
          minReplicas: 0
          maxReplicas: 10 
          keda:
            scaledObject:
              pollingInterval: 15
              minReplicaCount: 0
              maxReplicaCount: 10
              cooldownPeriod: 60
              advanced:
                horizontalPodAutoscalerConfig:
                  behavior:
                    scaleDown:
                      stabilizationWindowSeconds: 45
                      policies:
                      - type: Percent
                        value: 50
                        periodSeconds: 15
                    scaleUp:
                      stabilizationWindowSeconds: 0
        triggers:
          - type: kafka
            metadata:
              topic: sample-topic
              bootstrapServers: kafka-server-kafka-brokers.default.svc:9092
              consumerGroup: kafka-input
              lagThreshold: "20"
        inputs:
          - name: greeting
            component: target-topic
        bindings:
          target-topic:
            type: bindings.kafka
            version: v1
            metadata:
              - name: brokers
                value: "kafka-server-kafka-brokers:9092"
              - name: topics
                value: "sample-topic"
              - name: consumerGroup
                value: "kafka-input"
              - name: publishTopic
                value: "sample-topic"
              - name: authRequired
                value: "false"
        template:
          containers:
            - name: function
              imagePullPolicy: Always
    
  2. Run the following command to create the function kafka-input.

    kubectl apply -f kafka-input.yaml
    
  3. Use the following example YAML file to create a manifest function-front.yaml and modify the value of spec.image to set your own image registry address.

    apiVersion: core.openfunction.io/v1beta1
    kind: Function
    metadata:
      name: function-front
      annotations:
        plugins: |
          pre:
          - plugin-custom
          - plugin-example
          post:
          - plugin-custom
          - plugin-example      
    spec:
      version: "v1.0.0"
      image: "<your registry name>/sample-knative-dapr:latest"
      imageCredentials:
        name: push-secret
      port: 8080 # Default to 8080
      build:
        builder: openfunction/builder-go:latest
        env:
          FUNC_NAME: "ForwardToKafka"
          FUNC_CLEAR_SOURCE: "true"
        srcRepo:
          url: "https://github.com/OpenFunction/samples.git"
          sourceSubPath: "functions/knative/with-output-binding"
          revision: "main"
      serving:
        scaleOptions:
          minReplicas: 0
          maxReplicas: 5
        runtime: knative
        outputs:
          - name: target
            component: kafka-server
            operation: "create"
        bindings:
          kafka-server:
            type: bindings.kafka
            version: v1
            metadata:
              - name: brokers
                value: "kafka-server-kafka-brokers:9092"
              - name: authRequired
                value: "false"
              - name: publishTopic
                value: "sample-topic"
              - name: topics
                value: "sample-topic"
              - name: consumerGroup
                value: "function-front"
        template:
          containers:
            - name: function
              imagePullPolicy: Always
    
  4. In the manifest, spec.serving.outputs defines an output that points to the Dapr component of the Kafka server. This allows you to send custom content to the output target in the function function-front, as in the snippet below.

    func Sender(ctx ofctx.Context, in []byte) (ofctx.Out, error) {
      ...
    	_, err := ctx.Send("target", greeting)
    	...
    }
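
    For reference, the following is a minimal, self-contained sketch of such a function, assuming the functions-framework-go context API (ofctx.Context, Send, ReturnOnSuccess, ReturnOnInternalError); see the OpenFunction samples repository for the actual implementation.

    // Package name may vary depending on your build setup.
    package main

    import (
        ofctx "github.com/OpenFunction/functions-framework-go/context"
    )

    // ForwardToKafka forwards the incoming payload to the output named
    // "target", i.e. the kafka-server Dapr binding defined in the manifest.
    func ForwardToKafka(ctx ofctx.Context, in []byte) (ofctx.Out, error) {
        // Send the raw payload to the "target" output; the binding's
        // "create" operation publishes it to the sample-topic topic.
        if _, err := ctx.Send("target", in); err != nil {
            return ctx.ReturnOnInternalError(), err
        }
        return ctx.ReturnOnSuccess(), nil
    }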
    
  5. Run the following command to create the function function-front.

    kubectl apply -f function-front.yaml
    

Check Results

  1. Run the following command to view the status of the functions.

    $ kubectl get functions.core.openfunction.io
    
    NAME             BUILDSTATE   SERVINGSTATE   BUILDER         SERVING         URL                                             AGE
    function-front   Succeeded    Running        builder-bhbtk   serving-vc6jw   https://openfunction.io/default/function-front   2m41s
    kafka-input      Succeeded    Running        builder-dprfd   serving-75vrt                                                   2m21s
    
  2. Run the following command to create a pod in the cluster for accessing the function.

    kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm
    
  3. Run the following command to access the function through URL.

    [ root@curl:/ ]$ curl -d '{"message":"Awesome OpenFunction!"}' -H "Content-Type: application/json" -X POST http://openfunction.io.svc.cluster.local/default/function-front
    
  4. Run the following command to view the log of function-front.

    kubectl logs -f \
      $(kubectl get po -l \
      openfunction.io/serving=$(kubectl get functions function-front -o jsonpath='{.status.serving.resourceRef}') \
      -o jsonpath='{.items[0].metadata.name}') \
      function
    

    The output looks as follows.

    dapr client initializing for: 127.0.0.1:50001
    I0125 06:51:55.584973       1 framework.go:107] Plugins for pre-hook stage:
    I0125 06:51:55.585044       1 framework.go:110] - plugin-custom
    I0125 06:51:55.585052       1 framework.go:110] - plugin-example
    I0125 06:51:55.585057       1 framework.go:115] Plugins for post-hook stage:
    I0125 06:51:55.585062       1 framework.go:118] - plugin-custom
    I0125 06:51:55.585067       1 framework.go:118] - plugin-example
    I0125 06:51:55.585179       1 knative.go:46] Knative Function serving http: listening on port 8080
    2022/01/25 06:52:02 http - Data: {"message":"Awesome OpenFunction!"}
    I0125 06:52:02.246450       1 plugin-example.go:83] the sum is: 2
    
  5. Run the following command to view the log of kafka-input.

    kubectl logs -f \
      $(kubectl get po -l \
      openfunction.io/serving=$(kubectl get functions kafka-input -o jsonpath='{.status.serving.resourceRef}') \
      -o jsonpath='{.items[0].metadata.name}') \
      function
    

    The output looks as follows.

    dapr client initializing for: 127.0.0.1:50001
    I0125 06:35:28.332381       1 framework.go:107] Plugins for pre-hook stage:
    I0125 06:35:28.332863       1 framework.go:115] Plugins for post-hook stage:
    I0125 06:35:28.333749       1 async.go:39] Async Function serving grpc: listening on port 8080
    message from Kafka '{Awesome OpenFunction!}'
    

5.2 - Use SkyWalking for OpenFunction as an Observability Solution

Learn how to use SkyWalking for OpenFunction as an observability solution.

This document describes how to use SkyWalking for OpenFunction as an observability solution.

Overview

Although FaaS allows developers to focus on their business code without worrying about the underlying implementation, it is difficult to troubleshoot a FaaS-based service system. OpenFunction introduces observability capabilities to improve its usability and stability.

SkyWalking provides solutions for observing and monitoring distributed systems in many different scenarios. OpenFunction bundles go2sky (SkyWalking's Go agent) in the OpenFunction tracer options to provide distributed tracing, function performance statistics, and a function dependency map.

Prerequisites

Tracing Parameters

The following table describes the tracing parameters.

  • enabled: Switch for tracing, defaults to false. Example: true, false.
  • provider.name: Provider name, which can be set to “skywalking” or “opentelemetry” (pending). Example: “skywalking”.
  • provider.oapServer: The OAP server address. Example: “skywalking-oap:11800”.
  • tags: A collection of key-value pairs for Span custom tags in tracing.
  • tags.func: The name of the function. It will be automatically filled. Example: “function-a”.
  • tags.layer: Indicates the type of service being tracked. It should be set to “faas” when you use the function. Example: “faas”.
  • baggage: A collection of key-value pairs that exist in the tracing context and also need to be transferred across process boundaries.

The following is a JSON formatted configuration reference that guides the formatting structure of the tracing configuration.

{
  "enabled": true,
  "provider": {
    "name": "skywalking",
    "oapServer": "skywalking-oap:11800"
  },
  "tags": {
    "func": "function-a",
    "layer": "faas",
    "tag1": "value1",
    "tag2": "value2"
  },
  "baggage": {
    "key": "key1",
    "value": "value1"
  }
}

Enable Tracing Configuration of OpenFunction

Option 1: global configuration

This document uses skywalking-oap.default:11800 as an example of the skywalking-oap address in the cluster.

  1. Run the following command to modify the configmap openfunction-config in the openfunction namespace.

    kubectl edit configmap openfunction-config -n openfunction
    
  2. Modify the content under data.plugins.tracing by referring to the following example and save the change.

    data:
      plugins.tracing: |
        enabled: true
        provider:
          name: "skywalking"
          oapServer: "skywalking-oap:11800"
        tags:
          func: tracing-function
          layer: faas
          tag1: value1
          tag2: value2
        baggage:
          key: "key1"
          value: "value1"    
    

Option 2: function-level configuration

To enable tracing configuration at the function level, add the field plugins.tracing under metadata.annotations in the function manifest as shown in the following example.

metadata:
  name: tracing-function
  annotations:
    plugins.tracing: |
      enabled: true
      provider:
        name: "skywalking"
        oapServer: "skywalking-oap:11800"
      tags:
        func: tracing-function
        layer: faas
        tag1: value1
        tag2: value2
      baggage:
        key: "key1"
        value: "value1"      

It is recommended that you use the global tracing configuration; otherwise, you have to add a function-level tracing configuration for every function you create.

Use SkyWalking as a Distributed Tracing Solution

  1. Create functions by referring to this document. You can find more examples to create sync and async functions in OpenFunction Quickstarts.

  2. Then, you can observe the entire call chain on the SkyWalking UI.

  3. You can also compare the response time of the Knative runtime function (function-front) in the running state and during a cold start.

    In cold start:

    In running:

5.3 - Elastic Log Alerting

Learn how to create an async function to find out error logs.

This document describes how to create an async function to find out error logs.

Overview

This document uses an asynchronous function to analyze the log stream in Kafka to find out the error logs. The async function will then send alerts to Slack. The following diagram illustrates the entire workflow.

Prerequisites

Create a Kafka Server and Topic

  1. Run the following commands to install strimzi-kafka-operator in the default namespace.

    helm repo add strimzi https://strimzi.io/charts/
    helm install kafka-operator -n default strimzi/strimzi-kafka-operator
    
  2. Use the following content to create a file kafka.yaml.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: kafka-logs-receiver
      namespace: default
    spec:
      kafka:
        version: 3.1.0
        replicas: 1
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
          - name: tls
            port: 9093
            type: internal
            tls: true
        config:
          offsets.topic.replication.factor: 1
          transaction.state.log.replication.factor: 1
          transaction.state.log.min.isr: 1
          default.replication.factor: 1
          min.insync.replicas: 1
          inter.broker.protocol.version: "3.1"
        storage:
          type: ephemeral
      zookeeper:
        replicas: 1
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}
    ---
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: logs
      namespace: default
      labels:
        strimzi.io/cluster: kafka-logs-receiver
    spec:
      partitions: 10
      replicas: 1
      config:
        retention.ms: 7200000
        segment.bytes: 1073741824
    
  3. Run the following command to deploy a 1-replica Kafka server named kafka-logs-receiver and 1-replica Kafka topic named logs in the default namespace.

    kubectl apply -f kafka.yaml
    
  4. Run the following command to check pod status and wait for Kafka and Zookeeper to be up and running.

    $ kubectl get po
    NAME                                                     READY   STATUS        RESTARTS   AGE
    kafka-logs-receiver-entity-operator-57dc457ccc-tlqqs     3/3     Running       0          8m42s
    kafka-logs-receiver-kafka-0                              1/1     Running       0          9m13s
    kafka-logs-receiver-zookeeper-0                          1/1     Running       0          9m46s
    strimzi-cluster-operator-687fdd6f77-cwmgm                1/1     Running       0          11m
    
  5. Run the following commands to view the metadata of the Kafka cluster.

    # Starts a utility pod.
    $ kubectl run utils --image=arunvelsriram/utils -i --tty --rm
    # Checks metadata of the Kafka cluster.
    $ kafkacat -L -b kafka-logs-receiver-kafka-brokers:9092
    

Create a Logs Handler Function

  1. Use the following example YAML file to create a manifest logs-handler-function.yaml and modify the value of spec.image to set your own image registry address.

    apiVersion: core.openfunction.io/v1beta1
    kind: Function
    metadata:
      name: logs-async-handler
    spec:
      version: "v2.0.0"
      image: <your registry name>/logs-async-handler:latest
      imageCredentials:
        name: push-secret
      build:
        builder: openfunction/builder-go:latest
        env:
          FUNC_NAME: "LogsHandler"
          FUNC_CLEAR_SOURCE: "true"
          # Use FUNC_GOPROXY to set the goproxy
          # FUNC_GOPROXY: "https://goproxy.cn"
        srcRepo:
          url: "https://github.com/OpenFunction/samples.git"
          sourceSubPath: "functions/async/logs-handler-function/"
          revision: "main"
      serving:
        runtime: "async"
        scaleOptions:
          keda:
            scaledObject:
              pollingInterval: 15
              minReplicaCount: 0
              maxReplicaCount: 10
              cooldownPeriod: 60
              advanced:
                horizontalPodAutoscalerConfig:
                  behavior:
                    scaleDown:
                      stabilizationWindowSeconds: 45
                      policies:
                      - type: Percent
                        value: 50
                        periodSeconds: 15
                    scaleUp:
                      stabilizationWindowSeconds: 0
        triggers:
          - type: kafka
            metadata:
              topic: logs
              bootstrapServers: kafka-logs-receiver-kafka-brokers.default.svc.cluster.local:9092
              consumerGroup: logs-handler
              lagThreshold: "20"
        template:
          containers:
            - name: function
              imagePullPolicy: Always
        inputs:
          - name: kafka
            component: kafka-receiver
        outputs:
          - name: notify
            component: notification-manager
            operation: "post"
        bindings:
          kafka-receiver:
            type: bindings.kafka
            version: v1
            metadata:
              - name: brokers
                value: "kafka-server-kafka-brokers:9092"
              - name: authRequired
                value: "false"
              - name: publishTopic
                value: "logs"
              - name: topics
                value: "logs"
              - name: consumerGroup
                value: "logs-handler"
          notification-manager:
            type: bindings.http
            version: v1
            metadata:
              - name: url
                value: http://notification-manager-svc.kubesphere-monitoring-system.svc.cluster.local:19093/api/v2/alerts
    
  2. Run the following command to create the function logs-async-handler.

    kubectl apply -f logs-handler-function.yaml
    
  3. The logs handler function will be triggered by messages from the logs topic in Kafka.
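
  4. To verify the pipeline, you can publish a test message to the logs topic, for example from the utility pod created earlier (a sketch; adjust the message format to what your handler expects):

    echo '{"level":"error","msg":"a test error log"}' | kafkacat -P -b kafka-logs-receiver-kafka-brokers:9092 -t logs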

6 - Reference

6.1 - OpenFunction CLI

Learn how to use the CLI of OpenFunction.

This document describes how to use the CLI of OpenFunction.

Overview

ofn is the command-line interface for OpenFunction. It helps you install and manage OpenFunction.

The following table describes the main commands supported by ofn.

  • init: Provides management for the framework of OpenFunction.
  • install: Installs OpenFunction and its dependencies.
  • uninstall: Uninstalls OpenFunction and its dependencies.
  • create: Creates a function from a file or stdin.
  • apply: Applies a function from a file or stdin.
  • demo: Creates a kind cluster to install OpenFunction and run its demo.
  • get: Prints a table of the most important information about the specified function.
  • get builder: Prints important information about the Builder.
  • get serving: Prints important information about the Serving.
  • delete: Deletes the specified function.

Use ofn to Install and Uninstall OpenFunction

  1. Run the following command to download ofn.

    wget -c  https://github.com/OpenFunction/cli/releases/latest/download/ofn_linux_amd64.tar.gz -O - | tar -xz
    
  2. Run the following commands to make ofn executable and move it to /usr/local/bin/.

    chmod +x ofn && mv ofn /usr/local/bin/
    
  3. Run the following command to install OpenFunction.

    ofn install --all
    
  4. Run the following command to uninstall OpenFunction and its dependencies.

    ofn uninstall --all
    

ofn install Parameters

The following table describes parameters available for the ofn install command.

  • --all: For installing all dependencies.
  • --async: For installing OpenFunction Async runtime (Dapr & KEDA).
  • --cert-manager: For installing Cert Manager.
  • --dapr: For installing Dapr.
  • --dry-run: For previewing the components and their versions to be installed.
  • --ingress: For installing Ingress Nginx.
  • --keda: For installing KEDA.
  • --knative: For installing Knative Serving (with Kourier as the default gateway).
  • --region-cn: For users who have limited access to gcr.io or github.com. If you add this parameter in the ofn install command, you have to add it again in the ofn uninstall command.
  • --shipwright: For installing Shipwright.
  • --sync: For installing OpenFunction Sync runtime (to be supported).
  • --upgrade: For upgrading components to target versions during installation.
  • --yes: For automatically passing yes to prompts (-y for short).
  • --verbose: For displaying detailed information.
  • --version <version number>: For specifying any stable version or the latest version of OpenFunction for installation (default “v0.4.0”). If not specified, the latest stable version will be installed.
  • --timeout <duration>: For specifying the timeout duration, which defaults to 5 minutes.
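
For example, to preview what would be installed and then install everything in a network-restricted environment, you might combine parameters like this:

    ofn install --all --region-cn --dry-run
    ofn install --all --region-cn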

ofn uninstall Parameters

The following table describes parameters available for the ofn uninstall command.

  • --all: For uninstalling all dependencies.
  • --async: For uninstalling OpenFunction Async runtime (Dapr & KEDA).
  • --cert-manager: For uninstalling Cert Manager.
  • --dapr: For uninstalling Dapr.
  • --dry-run: For previewing the components and their versions to be uninstalled.
  • --ingress: For uninstalling Ingress Nginx.
  • --keda: For uninstalling KEDA.
  • --knative: For uninstalling Knative Serving (with Kourier as the default gateway).
  • --region-cn: For users who have limited access to gcr.io or github.com. If you add this parameter in the ofn install command, you have to add it again in the ofn uninstall command.
  • --shipwright: For uninstalling Shipwright.
  • --sync: For uninstalling OpenFunction Sync runtime (to be supported).
  • --verbose: For displaying detailed information.
  • --yes: For automatically passing yes to prompts (-y for short).
  • --version <version number>: For specifying any stable version or the latest version of OpenFunction to be uninstalled. If not specified, the currently installed version will be uninstalled.
  • --wait: For awaiting the results of the uninstallation until namespaces are cleaned up.
  • --timeout <duration>: For specifying the timeout duration, which defaults to 5 minutes.

Inventory

During installation, the OpenFunction CLI keeps the installed component details in $HOME/.ofn/<cluster name>-inventory.yaml. Therefore, during uninstallation, the OpenFunction CLI removes the installed components based on the contents of that file.

In addition, the OpenFunction CLI supports obtaining component versions and paths to the component YAML files from environment variables.

ofn demo Parameters

The following table describes parameters available for the ofn demo command.

  • --auto-prune: For removing the demo kind cluster. To keep the demo kind cluster, run ofn demo --auto-prune=false; you can then delete the demo kind cluster by running kind delete cluster --name openfunction.
  • --verbose: For displaying detailed information.
  • --region-cn: For users who have limited access to gcr.io or github.com.

6.2 - Component Specifications

Learn about OpenFunction component specifications.

6.2.1 - Function Specifications

Learn about Function Specifications.

This document describes the specifications of the Function CRD.

Function

  • apiVersion string: core.openfunction.io/v1alpha2
  • kind string: Function
  • metadata v1.ObjectMeta: (Optional) Refer to v1.ObjectMeta.
  • spec FunctionSpec: Refer to FunctionSpec.
  • status FunctionStatus: Refer to FunctionStatus.

FunctionSpec

Belong to Function.

  • version string: (Optional) Function version, e.g. v1.0.0.
  • image string: Image upload path, e.g. demorepo/demofunction:v1.
  • imageCredentials v1.LocalObjectReference: (Optional) Credentials for accessing the image repository, refer to v1.LocalObjectReference.
  • port int32: (Optional) The port the function is listening on, e.g. 8080.
  • build BuildImpl: (Optional) Builder specification for the function, see BuildImpl.
  • serving ServingImpl: (Optional) Serving specification for the function, see ServingImpl.

BuildImpl

Belong to FunctionSpec.

  • builder string: Name of the Builder.
  • builderCredentials v1.LocalObjectReference: (Optional) Credentials for accessing the image repository, refer to v1.LocalObjectReference.
  • shipwright ShipwrightEngine: (Optional) Specification of the Shipwright engine, refer to ShipwrightEngine.
  • params map[string]string: (Optional) Parameters passed to Shipwright.
  • env map[string]string: (Optional) Parameters passed to the buildpacks builder.
  • srcRepo GitRepo: The configuration of the source code repository, refer to GitRepo.
  • dockerfile string: (Optional) Path to the Dockerfile instructing Shipwright when using a Dockerfile to build images.

ShipwrightEngine

Belong to BuildImpl.

  • strategy Strategy: (Optional) Index of the image build strategy, refer to Strategy.
  • timeout v1.Duration: (Optional) Build timeout, refer to v1.Duration.

Strategy

Belong to ShipwrightEngine.

  • name string: Name of the strategy.
  • kind string: (Optional) Kind of the build strategy, which defaults to “BuildStrategy”; the alternative is “ClusterBuildStrategy”.

GitRepo

Belong to BuildImpl.

  • url string: Source code repository address.
  • revision string: (Optional) Referenceable instances in the repository, such as a commit ID or branch name.
  • sourceSubPath string: (Optional) The directory of the function in the repository, e.g. functions/function-a/.
  • credentials v1.LocalObjectReference: (Optional) Repository access credentials, refer to v1.LocalObjectReference.

ServingImpl

Belong to FunctionSpec.

  • runtime string: Type of load runtime. Value options: Knative, OpenFuncAsync.
  • params map[string]string: (Optional) Parameters passed to the workloads.
  • openFuncAsync OpenFuncAsyncRuntime: (Optional) Used to define the configuration of OpenFuncAsync when the runtime is OpenFuncAsync, see OpenFuncAsyncRuntime.
  • template v1.PodSpec: (Optional) Template for the definition of Pods in the workloads, refer to v1.PodSpec.

OpenFuncAsyncRuntime

Belong to ServingImpl.

  • dapr Dapr: (Optional) Definition of Dapr components, see Dapr.
  • keda Keda: (Optional) Definition of KEDA, see Keda.

Dapr

Belong to OpenFuncAsyncRuntime.

  • annotations map[string]string: (Optional) Annotations for Dapr components, see the Dapr documentation.
  • components map[string]componentsv1alpha1.ComponentSpec: (Optional) Dapr component spec map, with key being the component’s name and value being componentsv1alpha1.ComponentSpec.
  • inputs []DaprIO: (Optional) The definition of the inputs of the function, see DaprIO.
  • outputs []DaprIO: (Optional) The definition of the outputs of the function, see DaprIO.

DaprIO

Belong to Dapr.

  • name string: The name of the function’s input or output. If it matches the name of a Dapr component, the two are associated.
  • component string: Indicates the name of the component.
  • type string: Type of the Dapr component. Value options: bindings, pubsub.
  • topic string: (Optional) When the type is pubsub, you need to set the topic.
  • operation string: (Optional) Tells the Dapr component which operation it should perform, refer to the Dapr docs.
  • params map[string]string: (Optional) Parameters passed to Dapr.

Keda

Belong to OpenFuncAsyncRuntime.

  • scaledObject KedaScaledObject: Definition of KEDA scalable objects (Deployments), refer to KedaScaledObject.
  • scaledJob KedaScaledJob: Definition of KEDA scalable jobs, refer to KedaScaledJob.

KedaScaledObject

Belong to Keda.

  • workloadType string: How to run the function. Known values are Deployment or StatefulSet, which defaults to Deployment.
  • pollingInterval int32: (Optional) The pollingInterval is in seconds. This is the interval in which KEDA checks the triggers for the queue length or the stream lag. It defaults to 30 seconds.
  • cooldownPeriod int32: (Optional) The cooldownPeriod is in seconds, and it is the period of time to wait after the last trigger activated before scaling back down to 0. It defaults to 300 seconds.
  • minReplicaCount int32: (Optional) Minimum number of replicas which KEDA will scale the resource down to. By default, it scales to 0.
  • maxReplicaCount int32: (Optional) This setting is passed to the HPA definition that KEDA will create for a given resource.
  • advanced kedav1alpha1.AdvancedConfig: (Optional) This property specifies whether the target resource (for example, Deployment or StatefulSet) should be scaled back to the original replica count after the ScaledObject is deleted. The default behavior is to keep the replica count at the same number as at the moment of the ScaledObject’s deletion. Refer to kedav1alpha1.AdvancedConfig.
  • triggers []kedav1alpha1.ScaleTriggers: Event sources that trigger dynamic scaling of workloads. Refer to kedav1alpha1.ScaleTriggers.

KedaScaledJob

Belong to Keda.

  • restartPolicy v1.RestartPolicy: Restart policy for all containers within the pod. Value options: OnFailure, Never. It defaults to Never.
  • pollingInterval int32: (Optional) The pollingInterval is in seconds. This is the interval in which KEDA checks the triggers for the queue length or the stream lag. It defaults to 30 seconds.
  • successfulJobsHistoryLimit int32: (Optional) How many completed jobs should be kept. It defaults to 100.
  • failedJobsHistoryLimit int32: (Optional) How many failed jobs should be kept. It defaults to 100.
  • maxReplicaCount int32: (Optional) The maximum number of pods created within a single polling period.
  • scalingStrategy kedav1alpha1.ScalingStrategy: (Optional) Selects a scaling strategy. Value options: default, custom, accurate. The default value is default. Refer to kedav1alpha1.ScalingStrategy.
  • triggers []kedav1alpha1.ScaleTriggers: Event sources that trigger dynamic scaling of workloads, refer to kedav1alpha1.ScaleTriggers.
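
To illustrate how these fields fit together, the following is a minimal sketch of a v1alpha2 Function that runs an async workload (image, component, and scaling values are placeholders assembled from the tables above):

    apiVersion: core.openfunction.io/v1alpha2
    kind: Function
    metadata:
      name: sample-async
    spec:
      version: "v1.0.0"
      image: demorepo/demofunction:v1
      serving:
        runtime: OpenFuncAsync
        openFuncAsync:
          dapr:
            inputs:
              - name: greeting
                component: sample-topic
                type: bindings
          keda:
            scaledObject:
              pollingInterval: 15
              minReplicaCount: 0
              maxReplicaCount: 10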

6.2.2 - EventSource Specifications

Learn about EventSource Specifications.

6.2.2.1 - EventSource Specifications

EventSource Specifications.

This document describes the specifications of the EventSource CRD.

EventSource

  • apiVersion string: events.openfunction.io/v1alpha1
  • kind string: EventSource
  • metadata v1.ObjectMeta: (Optional) Refer to v1.ObjectMeta.
  • spec EventSourceSpec: Refer to EventSourceSpec.
  • status EventSourceStatus: Status of the EventSource.

EventSourceSpec

Belong to EventSource.

  • eventBus string: (Optional) Name of the EventBus resource associated with the event source.
  • redis map[string]RedisSpec: (Optional) The definition of a Redis event source, with key being the event name, refer to RedisSpec.
  • kafka map[string]KafkaSpec: (Optional) The definition of a Kafka event source, with key being the event name, refer to KafkaSpec.
  • cron map[string]CronSpec: (Optional) The definition of a Cron event source, with key being the event name, refer to CronSpec.
  • sink SinkSpec: (Optional) Definition of the Sink (an addressable access resource, i.e. a synchronous request) associated with the event source, refer to SinkSpec.

SinkSpec

Belong to EventSourceSpec.

  • ref Reference: Refer to Reference.

Reference

Belong to SinkSpec.

  • kind string: The type of the referenced resource. It defaults to Service.
  • namespace string: The namespace of the referenced resource, by default the same as the namespace of the Trigger.
  • name string: Name of the referenced resource, for example, function-ksvc.
  • apiVersion string: The apiVersion of the referenced resource. It defaults to serving.knative.dev/v1.

GenericScaleOption

Belong to scaleOption.

  • pollingInterval int: The interval to check each trigger on. It defaults to 30 seconds.
  • cooldownPeriod int: The period to wait after the last trigger reported active before scaling the resource back to 0. It defaults to 300 seconds.
  • minReplicaCount int: Minimum number of replicas which KEDA will scale the resource down to. It defaults to 0.
  • maxReplicaCount int: This setting is passed to the HPA definition that KEDA will create for a given resource.
  • advanced kedav1alpha1.AdvancedConfig: See the KEDA documentation.
  • metadata map[string]string: KEDA trigger’s metadata.
  • authRef kedav1alpha1.ScaledObjectAuthRef: Every parameter you define in a TriggerAuthentication definition does not need to be included in the metadata of the trigger for your ScaledObject definition. To reference a TriggerAuthentication from a ScaledObject, add the authenticationRef to the trigger. Refer to the KEDA documentation.

6.2.2.2 - Redis

Event source specifications of Redis.

RedisSpec

Belong to EventSourceSpec.

  • redisHost string: Address of the Redis server, e.g. localhost:6379.
  • redisPassword string: Password for the Redis server, e.g. 123456.
  • enableTLS bool: (Optional) Whether to enable TLS access, which defaults to false. Value options: true, false.
  • failover bool: (Optional) Whether to enable the failover feature. Requires sentinelMasterName to be set. It defaults to false. Value options: true, false.
  • sentinelMasterName string: (Optional) The name of the sentinel master. Refer to the Redis Sentinel documentation.
  • redeliverInterval string: (Optional) The interval for redelivery. It defaults to 60s. 0 means the redelivery mechanism is disabled. E.g. 30s.
  • processingTimeout string: (Optional) Message processing timeout. It defaults to 15s. 0 means the timeout is disabled. E.g. 30s.
  • redisType string: (Optional) The type of Redis. Value options: node for single-node mode, cluster for cluster mode. It defaults to node.
  • redisDB int64: (Optional) The database index to connect to in Redis. Effective only if redisType is node. It defaults to 0.
  • redisMaxRetries int64: (Optional) Maximum number of retries. It defaults to no retries. E.g. 5.
  • redisMinRetryInterval string: (Optional) Minimum backoff time for retries. The default value is 8ms. -1 indicates that the backoff time is disabled. E.g. 10ms.
  • redisMaxRetryInterval string: (Optional) Maximum backoff time for retries. The default value is 512ms. -1 indicates that the backoff time is disabled. E.g. 5s.
  • dialTimeout string: (Optional) Timeout to establish a new connection. It defaults to 5s.
  • readTimeout string: (Optional) Read timeout. A timeout causes Redis commands to fail rather than wait in a blocking fashion. It defaults to 3s. -1 means disabled.
  • writeTimeout string: (Optional) Write timeout. A timeout causes Redis commands to fail rather than wait in a blocking fashion. It defaults to the same value as readTimeout.
  • poolSize int64: (Optional) Maximum number of connections. It defaults to 10 connections per runtime.NumCPU. E.g. 20.
  • poolTimeout string: (Optional) The timeout for the connection pool. The default is readTimeout + 1 second. E.g. 50s.
  • maxConnAge string: (Optional) Connection aging time. The default is not to close aging connections. E.g. 30m.
  • minIdleConns int64: (Optional) The minimum number of idle connections to maintain to avoid performance degradation from creating new connections. It defaults to 0. E.g. 2.
  • idleCheckFrequency string: (Optional) Frequency of idle connection recycler checks. The default is 1m. -1 means the idle connection recycler is disabled. E.g. -1.
  • idleTimeout string: (Optional) Timeout to close idle client connections, which should be less than the server’s timeout. It defaults to 5m. -1 means the idle timeout check is disabled. E.g. 10m.
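
As a sketch of how RedisSpec is used inside an EventSource, the following defines a single Redis event named sample-one (the Redis address and password are placeholders):

    apiVersion: events.openfunction.io/v1alpha1
    kind: EventSource
    metadata:
      name: redis-eventsource
    spec:
      logLevel: "2"
      redis:
        sample-one:
          redisHost: redis.default.svc.cluster.local:6379
          redisPassword: "123456"
      sink:
        uri: "http://openfunction.io.svc.cluster.local/default/sink"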

6.2.2.3 - Kafka

Event source specifications of Kafka.

KafkaSpec

Belong to EventSourceSpec.

  • brokers string: A comma-separated string of Kafka server addresses, for example, localhost:9092.
  • authRequired bool: Whether to enable SASL authentication for the Kafka server. Value options: true, false.
  • topic string: The topic name of the Kafka event source, for example, topicA, myTopic.
  • saslUsername string: (Optional) The SASL username to use for authentication. Only required if authRequired is true. For example, admin.
  • saslPassword string: (Optional) The SASL user password for authentication. Only required if authRequired is true. For example, 123456.
  • maxMessageBytes int64: (Optional) The maximum number of bytes a single message is allowed to contain. The default is 1024. For example, 2048.
  • scaleOption KafkaScaleOption: (Optional) Kafka’s scale configuration.

KafkaScaleOption

Belong to KafkaSpec.

  • GenericScaleOption: Generic scale configuration.
  • consumerGroup string: Kafka’s consumer group name.
  • topic string: Topic under monitoring, for example, topicA, myTopic.
  • lagThreshold string: Threshold for triggering scaling, in this case the Kafka lag.
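
For example, a Kafka event source with a scale option might look like the following sketch (broker, group, and threshold values are placeholders assembled from the fields above):

    kafka:
      sample-scaled:
        brokers: "kafka-server-kafka-brokers:9092"
        topic: "events-sample"
        authRequired: false
        scaleOption:
          consumerGroup: "my-group"
          topic: "events-sample"
          lagThreshold: "10"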

6.2.2.4 - Cron

Event source specifications of Cron.

CronSpec

Belong to EventSourceSpec.

  • schedule string: Refer to Schedule format for a valid schedule format, for example, @every 15m.

6.2.3 - EventBus Specifications

Learn about EventBus Specifications.

6.2.3.1 - EventBus Specifications

EventBus Specifications.

This document describes the specifications of the EventBus (ClusterEventBus) CRD.

EventBus (ClusterEventBus)

  • apiVersion string: events.openfunction.io/v1alpha1
  • kind string: EventBus (ClusterEventBus)
  • metadata v1.ObjectMeta: (Optional) Refer to v1.ObjectMeta.
  • spec EventBusSpec: Refer to EventBusSpec.
  • status EventBusStatus: Status of the EventBus (ClusterEventBus).

EventBusSpec

Belong to EventBus.

  • topic string: The topic name of the event bus.
  • natsStreaming NatsStreamingSpec: Definition of the NATS Streaming event bus (currently the only supported implementation). See NatsStreamingSpec.

GenericScaleOption

Belong to scaleOption.

  • pollingInterval int: The interval to check each trigger on. It defaults to 30 seconds.
  • cooldownPeriod int: The period to wait after the last trigger reported active before scaling the resource back to 0. It defaults to 300 seconds.
  • minReplicaCount int: Minimum number of replicas which KEDA will scale the resource down to. It defaults to 0.
  • maxReplicaCount int: This setting is passed to the HPA definition that KEDA will create for a given resource.
  • advanced kedav1alpha1.AdvancedConfig: See the KEDA documentation.
  • metadata map[string]string: KEDA trigger’s metadata.
  • authRef kedav1alpha1.ScaledObjectAuthRef: Every parameter you define in a TriggerAuthentication definition does not need to be included in the metadata of the trigger for your ScaledObject definition. To reference a TriggerAuthentication from a ScaledObject, add the authenticationRef to the trigger. Refer to the KEDA documentation.

6.2.3.2 - NATS Streaming

Event bus specifications of NATS Streaming.

NatsStreamingSpec

Belong to EventBusSpec.

  • natsURL string: NATS server address, for example, nats://localhost:4222.
  • natsStreamingClusterID string: NATS cluster ID, for example, stan.
  • subscriptionType string: Subscriber type. Value options: topic, queue.
  • ackWaitTime string: (Optional) Refer to Acknowledgements, for example, 300ms.
  • maxInFlight int64: (Optional) Refer to Max In Flight, for example, 25.
  • durableSubscriptionName string: (Optional) The name of the durable subscriber, for example, my-durable.
  • deliverNew bool: (Optional) Subscriber option (only one can be used). Whether to deliver only new messages. Value options: true, false.
  • startAtSequence int64: (Optional) Subscriber option (only one can be used). Sets the starting sequence position and status, for example, 100000.
  • startWithLastReceived bool: (Optional) Subscriber option (only one can be used). Whether to set the start position to the most recent message. Value options: true, false.
  • deliverAll bool: (Optional) Subscriber option (only one can be used). Whether to deliver all available messages. Value options: true, false.
  • startAtTimeDelta string: (Optional) Subscriber option (only one can be used). Sets the desired start time position and state using a time delta, for example, 10m, 23s.
  • startAtTime string: (Optional) Subscriber option (only one can be used). Sets the desired start time position and status using a timestamp, for example, Feb 3, 2013 at 7:54pm (PST).
  • startAtTimeFormat string: (Optional) Must be used together with startAtTime. Sets the format of the time, for example, Jan 2, 2006 at 3:04pm (MST).
  • scaleOption NatsStreamingScaleOption: (Optional) NATS Streaming’s scale configuration.

NatsStreamingScaleOption

Belong to NatsStreamingSpec.

  • GenericScaleOption: Generic scale configuration.
  • natsServerMonitoringEndpoint string: NATS Streaming’s monitoring endpoint.
  • queueGroup string: NATS Streaming’s queue group name.
  • durableName string: NATS Streaming’s durable name.
  • subject string: Subject under monitoring, for example, topicA, myTopic.
  • lagThreshold string: Threshold for triggering scaling, in this case the NATS Streaming lag.
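
As a sketch, an EventBus with a NATS Streaming scale option might look like this (the monitoring endpoint and names are placeholders assembled from the fields above):

    apiVersion: events.openfunction.io/v1alpha1
    kind: EventBus
    metadata:
      name: default
    spec:
      natsStreaming:
        natsURL: "nats://nats.default:4222"
        natsStreamingClusterID: "stan"
        subscriptionType: "queue"
        durableSubscriptionName: "ImDurable"
        scaleOption:
          natsServerMonitoringEndpoint: "stan.default.svc.cluster.local:8222"
          durableName: "ImDurable"
          subject: "metrics"
          lagThreshold: "10"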

6.2.4 - Trigger Specifications

Learn about Trigger Specifications.

This document describes the specifications of the Trigger CRD.

Trigger

  • apiVersion string: events.openfunction.io/v1alpha1
  • kind string: Trigger
  • metadata v1.ObjectMeta: (Optional) Refer to v1.ObjectMeta.
  • spec TriggerSpec: Refer to TriggerSpec.
  • status TriggerStatus: Status of the Trigger.

TriggerSpec

Belong to Trigger.

  • eventBus string: (Optional) Name of the EventBus resource associated with the Trigger.
  • inputs map[string]Input: (Optional) The inputs of the trigger, with key being the input name. Refer to Input.
  • subscribers []Subscriber: (Optional) The subscribers of the trigger. Refer to Subscriber.

Input

Belong to TriggerSpec.

  • namespace string: (Optional) The namespace of the EventSource, which by default matches the namespace of the Trigger, for example, default.
  • eventSource string: EventSource name, for example, kafka-eventsource.
  • event string: Event name, for example, eventA.

Subscriber

Belong to TriggerSpec.

  • condition string: Trigger condition for the subscriber; refer to cel-spec for more writing specifications, for example, eventA && eventB or eventA || eventB.
  • sink SinkSpec: (Optional) Definition of the triggered Sink (an addressable access resource, i.e. a synchronous request), refer to SinkSpec.
  • deadLetterSink SinkSpec: (Optional) Definition of the triggered dead letter Sink (an addressable access resource, i.e. a synchronous request), refer to SinkSpec.
  • topic string: (Optional) Used to send post-trigger messages to the specified topic of the event bus, for example, topicTriggered.
  • deadLetterTopic string: (Optional) Used to send post-trigger messages to the specified dead letter topic of the event bus, for example, topicDL.

SinkSpec

Belongs to Subscriber.

| Field | Description |
| ----- | ----------- |
| ref Reference | Refer to Reference. |

Reference

Belongs to SinkSpec.

| Field | Description |
| ----- | ----------- |
| kind string | The type of the referenced resource. It defaults to Service. |
| namespace string | The namespace of the referenced resource, by default the same as the namespace of the Trigger. |
| name string | Name of the referenced resource, for example, function-ksvc. |
| apiVersion string | The apiVersion of the referenced resource. It defaults to serving.knative.dev/v1. |
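
Putting these specifications together, here is a minimal sketch of a Trigger (the EventBus, EventSource, and Service names are illustrative):

```yaml
apiVersion: events.openfunction.io/v1alpha1
kind: Trigger
metadata:
  name: trigger-sample
spec:
  eventBus: "default"                   # illustrative EventBus name
  inputs:
    inputA:                             # input name referenced by the condition below
      eventSource: "kafka-eventsource"  # illustrative EventSource name
      event: "eventA"
  subscribers:
    - condition: inputA                 # fire whenever eventA arrives
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: function-ksvc           # illustrative Knative Service name
          namespace: default
```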

6.3 - FAQ

This document describes FAQs when using OpenFunction.

Q: How to use private image repositories in OpenFunction?

A: OpenFunction uses Shipwright (which utilizes Tekton to integrate with Cloud Native Buildpacks) in the build phase to package the user function into an application image.

Users often access a private image repository in an insecure way, which is not yet supported by Cloud Native Buildpacks.

For now, we offer the following workaround to get around this limitation:

  1. Use an IP address instead of a hostname as the access address for the private image repository.
  2. Skip tag resolution when you run the Knative-runtime function (see the sketch after this list).
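
For step 2, here is a minimal sketch, assuming your registry is reachable at the hypothetical address 10.0.0.100:5000; Knative Serving can skip tag resolution for specific registries through its config-deployment ConfigMap (on older Knative releases the key is registriesSkippingTagResolving):

```shell
# Sketch: tell Knative Serving to skip tag resolution for a private registry.
# "10.0.0.100:5000" is a hypothetical registry address; replace it with your own.
kubectl -n knative-serving patch configmap config-deployment --type merge \
  -p '{"data":{"registries-skipping-tag-resolving":"10.0.0.100:5000"}}'
```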

For references:

buildpacks/lifecycle#524

buildpacks/tekton-integration#31

Q: How to access the Knative-runtime function without introducing a new ingress controller?

A: OpenFunction provides a unified entry point for function accessibility based on the Ingress Nginx implementation. However, not every user needs this, and introducing a new ingress controller may affect the existing cluster.

In general, accessible addresses are needed only for sync (Knative-runtime) functions. Here are two ways to solve this problem:

  • Magic DNS

    You can follow this guide to configure the DNS.

  • CoreDNS

    This is similar to using Magic DNS, except that the DNS resolution configuration is placed inside CoreDNS. Assume that the user has configured a domain named “openfunction.dev” in the ConfigMap config-domain under the knative-serving namespace (as shown below):

    $ kubectl -n knative-serving get cm config-domain -o yaml
    
    apiVersion: v1
    data:
      openfunction.dev: ""
    kind: ConfigMap
    metadata:
      annotations:
        knative.dev/example-checksum: 81552d0b
      labels:
        app.kubernetes.io/part-of: knative-serving
        app.kubernetes.io/version: 1.0.1
        serving.knative.dev/release: v1.0.1
      name: config-domain
      namespace: knative-serving
    

    Next, let’s add an A record for this domain. OpenFunction uses Kourier as the default network layer for Knative Serving, which is where traffic for the domain should be routed.

    $ kubectl -n kourier-system get svc
    
    NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
    kourier            LoadBalancer   10.233.7.202   <pending>     80:31655/TCP,443:30980/TCP   36m
    kourier-internal   ClusterIP      10.233.47.71   <none>        80/TCP                       36m
    

    Then the user only needs to configure this wildcard DNS resolution in CoreDNS to resolve the URL address of any Knative Service in the cluster, where “10.233.47.71” is the address of the Service kourier-internal.

    $ kubectl -n kube-system get cm coredns -o yaml
    
    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            errors
            health
            ready
            template IN A openfunction.dev {
              match .*\.openfunction\.dev
              answer "{{ .Name }} 60 IN A 10.233.47.71"
              fallthrough
            }
            kubernetes cluster.local in-addr.arpa ip6.arpa {
              pods insecure
              fallthrough in-addr.arpa ip6.arpa
            }
            hosts /etc/coredns/NodeHosts {
              ttl 60
              reload 15s
              fallthrough
            }
            prometheus :9153
            forward . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
        ...
    

    If the user cannot resolve the URL address of this function from outside the cluster, configure the hosts file as follows, where “serving-sr5v2-ksvc-sbtgr.default.openfunction.dev” is the URL address obtained from the command “kubectl get ksvc”:

    10.233.47.71 serving-sr5v2-ksvc-sbtgr.default.openfunction.dev
    

After the above configuration is done, you can get the URL address of the function with the following command. Then you can trigger the function via curl or your browser.

$ kubectl get ksvc

NAME                       URL
serving-sr5v2-ksvc-sbtgr   http://serving-sr5v2-ksvc-sbtgr.default.openfunction.dev
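
For example, with the URL shown above (your function’s URL will differ):

```shell
curl http://serving-sr5v2-ksvc-sbtgr.default.openfunction.dev
```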

Q: How to enable and configure concurrency for functions?

A: OpenFunction categorizes functions into “sync runtime” and “async runtime” based on the type of request being handled. These two types of functions are driven by Knative Serving and by Dapr + KEDA, respectively.

Therefore, to enable and configure the concurrency of functions, you need to refer to the specific implementation in the above components.

The following sections describe how to enable and configure concurrency of functions in OpenFunction, covering the “sync runtime” and “async runtime” cases separately.

Sync runtime

You can start by referring to this document in Knative Serving on enabling and configuring concurrency capabilities.

Knative Serving has Soft limit and Hard limit configurations for the concurrency feature.

Soft limit

You can refer to the Global(ConfigMap) and Global(Operator) sections of this document to configure global concurrency capabilities.

For Per Revision, you can configure it like this:

apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  serving:
    scaleOptions:
      knative:
        autoscaling:
          target: "200"

Hard limit

OpenFunction currently doesn’t support configuring the Hard limit per Revision. You can refer to the Global(ConfigMap) and Global(Operator) sections of this document to configure global concurrency capabilities.
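
As a minimal sketch of the ConfigMap approach, assuming you want a cluster-wide default hard limit (the value 50 is illustrative):

```shell
# Sketch: set a cluster-wide default hard concurrency limit (illustrative value).
kubectl -n knative-serving patch configmap config-defaults --type merge \
  -p '{"data":{"container-concurrency":"50"}}'
```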

In summary

In a nutshell, you can configure Knative Serving’s autoscaling-related options per function as follows, as long as they can be passed in as annotations; otherwise, you can only configure them globally.

# Configuration in Knative Serving
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/<key>: "value"

# Configuration in OpenFunction (recommended)
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  serving:
    scaleOptions:
      knative:
        autoscaling:
          <key>: "value"

# Alternative approach
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  serving:
    annotations:
      autoscaling.knative.dev/<key>: "value"

Async runtime

You can start by referring to the Dapr documentation on enabling and configuring concurrency capabilities.

Compared to the concurrency configuration of sync runtime, the concurrency configuration of async runtime is simpler.

# Configuration in Dapr
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodesubscriber
  namespace: default
spec:
  template:
    metadata:
      annotations:
        dapr.io/app-max-concurrency: "value"

# Configuration in OpenFunction (recommended)
apiVersion: core.openfunction.io/v1beta1
kind: Function
metadata:
  name: function-sample
spec:
  serving:
    annotations:
      dapr.io/app-max-concurrency: "value"

7 - Contributing

7.1 - Overview

This document provides guidelines for contributing to OpenFunction through issues and pull requests. Contributions can also come in additional ways, such as engaging with the community in community calls, commenting on issues or pull requests, and more.

See the OpenFunction community repository for more information on community engagement and community membership.

Issues

Issue types

In most OpenFunction repositories, there are usually the following types of issues:

  • Issue/Bug report: You’ve found a bug and want to report and track it.
  • Issue/Feature request: You want to use a feature and it’s not supported yet.
  • Issue/Proposal: Used for items that propose a new idea or functionality. This allows feedback from others before code is written.

Before submitting

Before you submit an issue, make sure you’ve checked the following:

  1. Is it the right repository?
    • The OpenFunction project is distributed across multiple repositories. Check the list of repositories if you aren’t sure which repo is the correct one.
  2. Check for existing issues
    • Before you create a new issue, please do a search in open issues to see if the issue or feature request has already been filed.
    • If you find your issue already exists, make relevant comments and add your reaction. Use a reaction:
      • 👍 up-vote
      • 👎 down-vote

Pull Requests

All contributions come through pull requests. To submit a proposed change, follow this workflow:

  1. Make sure there’s an issue raised, which sets the expectations for the contribution you are about to make.
  2. Fork the relevant repo and create a new branch
  3. Create your change
    • Code changes require tests
  4. Update relevant documentation for the change
  5. Commit with DCO sign-off and open a PR
  6. Wait for the CI process to finish and make sure all checks are green
  7. You can expect a review within a few days

Use work-in-progress PRs for early feedback

A good way to communicate before investing too much time is to create a “Work-in-progress” PR and share it with your reviewers. The standard way of doing this is to add a “[WIP]” prefix in your PR’s title and assign the do-not-merge label. This will let people looking at your PR know that it is not ready yet.

Developer Certificate of Origin: Signing your work

Every commit needs to be signed

The Developer Certificate of Origin (DCO) is a lightweight way for contributors to certify that they wrote or otherwise have the right to submit the code they are contributing to the project. Here is the full text of the DCO.

Contributors sign off on these requirements by adding a Signed-off-by line to their commit messages.

This is my commit message
Signed-off-by: Random J Developer <random@developer.example.org>

Git has a -s command line option to append this automatically to your commit message:

git commit -s -m 'This is my commit message'

Each pull request is checked to verify that all of its commits contain a valid Signed-off-by line.

I didn’t sign my commit, now what?!

No worries! You can easily replay your changes, sign them, and force push them:

git checkout <branch-name>
git commit --amend --no-edit --signoff
git push --force-with-lease <remote-name> <branch-name>

Development Guide

Here you can find a development guide that will walk you through how to get started with building OpenFunction in your local environment.

Code of Conduct

Please see the OpenFunction community code of conduct.

7.2 - Roadmap

OpenFunction encourages the community to help with prioritization. A GitHub project for OpenFunction’s roadmap is available for the community to provide feedback on proposed issues and track them across development.

Please vote by adding a 👍 on the GitHub issues for the feature capabilities you would most like to see OpenFunction support. This will help the OpenFunction maintainers to have a better understanding of your requirements.

Contributions from the community are always welcome. If there are features on the roadmap that you are interested in contributing to, please comment on the GitHub issue and include your solution proposal.