Cadence

Cadence is a distributed, scalable, durable, and highly available orchestration engine to execute asynchronous long-running business logic in a scalable and resilient way.

Important information

BanzaiCloud is no longer maintaining the Helm chart for Cadence, so if you have been using the Cadence Helm chart for some time you may need to change a few things going forward. A special thanks to their team for the great work.

TL;DR;

The new release process allows anybody to quickly publish the chart from a GitHub repository, using the chart-releaser-action GitHub Action to create a public Helm chart repository. If your repo is not hosted under the user edmondop, replace the username accordingly:

$ helm repo add cadence https://edmondop.github.io/cadence-helm-chart/
$ helm repo update
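
You can then verify that the chart is available (helm search repo is a standard Helm 3 command):

$ helm search repo cadence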

Introduction

This chart bootstraps a Cadence and a Cadence-UI deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.7+ with Beta APIs enabled
  • Cadence 0.24.0+

Installing the Chart

To install the chart with the release name my-release:

$ helm install my-release --namespace cadence cadence/cadence

Tip: List all releases using helm list
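
For example, scoped to the namespace used above:

$ helm list --namespace cadence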

Upgrading the Chart

# Helm
$ helm upgrade [RELEASE_NAME] cadence/cadence

From 0.20.x (or below) to 0.21.y (or above)

Version 0.21.0 extends the configuration interface by introducing clusterMetadata settings in values.yaml, in a way that is backward incompatible with existing Cadence clusters.

To upgrade a Cadence deployment running against an existing Cadence cluster from version 0.20.x (or below) to 0.21.y (or above), you MUST describe the existing cluster's configuration in the .server.config.clusterMetadata section of the values.yaml file used for the upgrade.

Example configuration for upgrading from version 0.16.x (or below)

server:
  # ...
  config:
    clusterMetadata:
      enableGlobalDomain: true
      maximumClusterCount: 10
      masterClusterName: "active"
      currentClusterName: "active"
      clusterInformation:
        - name: active
          enabled: true

Example configuration for upgrading from version 0.17.x

server:
  # ...
  config:
    clusterMetadata:
      enableGlobalDomain: true
      maximumClusterCount: 10
      masterClusterName: "master"
      currentClusterName: "master"
      clusterInformation:
        - name: master
          enabled: true

Example configuration for upgrading from 0.18.x (or above), single Cadence cluster

server:
  # ...
  config:
    clusterMetadata:
      enableGlobalDomain: true
      maximumClusterCount: 10
      masterClusterName: "primary"
      currentClusterName: "primary"
      clusterInformation:
        - name: primary
          enabled: true

Example configuration for upgrading from 0.18.x (or above), multiple Cadence clusters

server:
  # ...
  config:
    clusterMetadata:
      enableGlobalDomain: true
      maximumClusterCount: 10
      masterClusterName: "primary"
      currentClusterName: "primary" # or "secondary": use the name of the Cadence cluster served by the cluster/namespace/release you are upgrading
      clusterInformation:
        - name: primary
          enabled: true
        - name: secondary
          enabled: true
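
Once .server.config.clusterMetadata matches your existing cluster, the upgrade itself is the standard helm upgrade shown above; for example (the release name and values file path are placeholders):

$ helm upgrade my-release -f my-values.yaml cadence/cadence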

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Install the Chart with Cassandra

The chart comes with a single node Cassandra by default (from incubator/cassandra).

$ helm install cadence/cadence

You can increase the number of Cassandra nodes if you want:

$ helm install --set cassandra.config.cluster_size=3 cadence/cadence

Note: It takes a few minutes to start Cassandra. You can speed it up by using configuration from values.dev.yaml.
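
For example, assuming the file is at the path referenced above (adjust to its actual location in the repository):

$ helm install -f values.dev.yaml cadence/cadence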

Configure the Chart to use existing Cassandra cluster

You can configure the chart to use an existing Cassandra cluster instead of installing one.

Prerequisites:

  • Running Cassandra cluster
  • Existing keyspaces for default and visibility stores
  • Existing user(s) with access to those keyspaces (if authentication is required)

You can easily start your own Cassandra cluster using the same incubator/cassandra chart:

$ helm install -f values/cassandra.yaml --name cassandra incubator/cassandra

Wait for Cassandra to become ready:

$ kubectl wait --for=condition=Ready pod/cassandra-0 --timeout=90s

Create two Cassandra keyspaces:

$ kubectl exec -it cassandra-0 -- cqlsh -e "CREATE KEYSPACE cadence WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };"
$ kubectl exec -it cassandra-0 -- cqlsh -e "CREATE KEYSPACE cadence_visibility WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };"

Then install the chart, pointing it at the existing Cassandra cluster:

$ helm install -f values/values.cassandra.yaml cadence/cadence
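
For reference, such a values file configures the persistence stores via server.config.persistence.[store].cassandra (see the configuration table below). A minimal sketch might look like the following; the store names (default, visibility) and the sub-keys are illustrative assumptions, so check the chart's values.yaml for the authoritative structure:

server:
  config:
    persistence:
      default:
        driver: "cassandra"
        cassandra:                                      # sub-key names are assumptions; see values.yaml
          hosts: "cassandra.default.svc.cluster.local"
          keyspace: "cadence"
      visibility:
        driver: "cassandra"
        cassandra:
          hosts: "cassandra.default.svc.cluster.local"
          keyspace: "cadence_visibility"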

Alternatively, install the chart with manual migrations. Follow the steps in migrations.md.

$ helm install -f values/values.cassandra.yaml --set schema.setup.enabled=false --set schema.update.enabled=false cadence/cadence

Install the Chart with MySQL

Note: MySQL support is currently beta in Cadence.

The chart can be installed with a single node MySQL (from stable/mysql).

$ helm install --set cassandra.enabled=false --set mysql.enabled=true --set mysql.mysqlPassword=cadence cadence/cadence

Note: When installing MySQL from within the chart with automatic migrations, you must configure a password. See the Limitations section for details.

Configure the Chart to use existing MySQL instance

You can configure the chart to use an existing MySQL instance instead of installing one.

Prerequisites:

  • Running MySQL instance
  • Existing databases for default and visibility stores
  • Existing user(s) with access to those databases

You can easily start your own MySQL instance using the same stable/mysql chart:

$ helm install -f values/mysql.yaml --name mysql stable/mysql

Wait for MySQL to become ready:

$ kubectl wait --for=condition=Ready pod/$(kubectl get pods -l 'app=mysql' -o jsonpath='{..metadata.name}') --timeout=90s

Then install the chart, pointing it at the existing MySQL instance:

$ helm install -f values/values.mysql.yaml cadence/cadence
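
For reference, such a values file points the persistence stores at the existing instance via server.config.persistence.[store].sql (see the configuration table below). A minimal sketch, in which the driver value, store names, and sub-keys are illustrative assumptions (check the chart's values.yaml for the authoritative structure):

server:
  config:
    persistence:
      default:
        driver: "sql"                  # assumed driver value for MySQL; see values.yaml
        sql:
          host: "mysql"
          port: 3306
          database: "cadence"
          user: "cadence"
          password: "cadence"
      visibility:
        driver: "sql"
        sql:
          host: "mysql"
          port: 3306
          database: "cadence_visibility"
          user: "cadence"
          password: "cadence"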

Alternatively, install the chart with manual migrations. Follow the steps in migrations.md.

$ helm install -f values/values.mysql.yaml --set schema.setup.enabled=false --set schema.update.enabled=false cadence/cadence

Prometheus monitoring

As of 0.5.8 Cadence exports Prometheus metrics. The chart supports annotating Cadence components with Prometheus annotations, so that Prometheus can scrape them:

$ helm install --set server.metrics.annotations.enabled=true cadence/cadence

Alternatively, you can enable ServiceMonitor when using Prometheus Operator:

$ helm install --set server.metrics.serviceMonitor.enabled=true cadence/cadence

Note that you can enable monitoring for each service separately. See the configuration reference below.
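
For example, to enable the Prometheus Operator ServiceMonitor only for the frontend service, following the per-service key pattern from the configuration table (server.[service].metrics.serviceMonitor.enabled):

$ helm install --set server.frontend.metrics.serviceMonitor.enabled=true cadence/cadence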

Scaling Cadence

Cadence server components (frontend, history, matching, worker) run in separate Deployments, so they can be scaled independently. However, you can also apply the same replica count to every service (just as with resource limits and taint/affinity settings).

See the configuration reference below for details.
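
For example, assuming the asterisk convention in the table below means server.replicaCount can be overridden per service, you could run two replicas of every component but five history pods:

$ helm install --set server.replicaCount=2 --set server.history.replicaCount=5 cadence/cadence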

Recommended setup

The chart is self-contained, meaning it installs everything required for running the application by default. It can install Cassandra (default) or MySQL, but it is recommended that you configure every component and run migrations manually.

See values.prod.yaml for details of a production setup.

See migrations.md for running migrations manually.

Limitations

In order to use the automatic migration feature, you have to manually set credentials for the chosen storage type (if there is any). The default Cassandra store is installed without a password, but for MySQL to work you have to set mysql.mysqlPassword manually (if you install a storage engine from within the chart).

The reason behind this limitation is that migrations are executed as Helm hooks, which need the credentials before MySQL is even started.

Port forwarding to Cadence frontend

As of version 0.5.1 of this chart, the service pods (frontend, history, matching, worker) use the pod IP as the bind address.

This is a limitation of how Cadence cluster membership works and is required for scaling Cadence components.

Unfortunately, this change caused kubectl port-forward directly to Cadence pods to stop working.

If you need to port-forward to any of the components (e.g. to create a domain), you can use a socat sidecar container (for example, by installing and configuring the stable/socat-tunneller chart):

helm install stable/socat-tunneller --name cadence-frontend-tunnel --set tunnel.host=cadence-frontend --set tunnel.port=7933 --set nameOverride=cadence-frontend-tunnel

Then you can port-forward to this tunnel:

kubectl port-forward svc/cadence-frontend-tunnel 7933:7933

and create a domain:

docker run --rm ubercadence/cli:master --address host.docker.internal:7933 --domain samples-domain domain register --global_domain false
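
To check that the domain was registered, the same CLI image can be used with the standard domain describe command:

docker run --rm ubercadence/cli:master --address host.docker.internal:7933 --domain samples-domain domain describe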

Metrics

Note: from chart version 0.19.0, the metrics collection services (Prometheus, StatsD) are mutually exclusive - only one of those can be enabled at the same time based on the values configuration. The default configuration enables Prometheus.

If you want to enable StatsD (and disable Prometheus), edit the global metrics configuration values accordingly. If you want to mix them across services, disable both in the global metrics configuration values and enable the desired one in the service-specific configuration values (this is required because global configurations take precedence over service-specific ones).
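
A hedged sketch of that mixed setup, using the metrics keys from the configuration table below (the exact switching logic lives in the chart's templates, so treat this as illustrative rather than authoritative):

server:
  metrics:
    annotations:
      enabled: false       # Prometheus scrape annotations disabled globally
    serviceMonitor:
      enabled: false       # Prometheus Operator ServiceMonitor disabled globally
    statsd:
      hostPort: ""         # StatsD disabled globally
  frontend:
    metrics:
      statsd:
        hostPort: "statsd.monitoring:8125"   # example: StatsD only for the frontend service
  history:
    metrics:
      annotations:
        enabled: true                        # example: Prometheus annotations only for the history service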

Configuration

The following table lists the configurable parameters of the chart and their default values. Global options overridable per service are marked with an asterisk.

| Parameter | Description | Default |
| --- | --- | --- |
| nameOverride | Override name of the application | `` |
| fullnameOverride | Override full name of the application | `` |
| server.image.repository | Server image repository | ubercadence/server |
| server.image.tag | Server image tag | 0.24.0 |
| server.image.pullPolicy | Server image pull policy | IfNotPresent |
| server.replicaCount* | Server replica count | 1 |
| server.metrics.annotations.enabled* | Annotate pods with Prometheus annotations | false |
| server.metrics.serviceMonitor.enabled* | Enable Prometheus ServiceMonitor | false |
| server.metrics.prometheus.timerType* | Prometheus timer type | histogram |
| server.metrics.statsd.hostPort* | Statsd daemon host and port | `` |
| server.podAnnotations* | Server pod annotations | {} |
| server.podSecurityContext* | Server pod security context | {} |
| server.securityContext* | Server security context | {} |
| server.resources* | Server CPU/Memory resource requests/limits | {} |
| server.nodeSelector* | Node labels for pod assignment | {} |
| server.tolerations* | Toleration labels for pod assignment | [] |
| server.affinity* | Affinity settings for pod assignment | {} |
| server.config.logLevel | Server log level | debug,info |
| server.config.numHistoryShards | Number of history shards | 1000 |
| server.config.persistence.[store].driver | Connection driver | cassandra |
| server.config.persistence.[store].cassandra | Cassandra connection details (see values.yaml) | {} |
| server.config.persistence.[store].sql | SQL connection details (see values.yaml) | {} |
| server.[service].service.type | [service] service type | ClusterIP |
| server.[service].service.port | [service] service port | 7933/7934/7935/7939 |
| server.[service].service.annotations | [service] service annotations | {} |
| server.[service].metrics.annotations.enabled | Annotate [service] pods with Prometheus annotations | `` |
| server.[service].metrics.serviceMonitor.enabled | Enable Prometheus ServiceMonitor for [service] | `` |
| server.[service].metrics.prometheus.timerType | [service] Prometheus timer type | `` |
| server.[service].metrics.statsd.hostPort | [service] Statsd daemon host and port | `` |
| server.[service].podAnnotations | [service] pod annotations | {} |
| server.[service].podSecurityContext | [service] pod security context | {} |
| server.[service].securityContext | [service] security context | {} |
| server.[service].resources | [service] CPU/Memory resource requests/limits | {} |
| server.[service].nodeSelector | [service] Node labels for pod assignment | {} |
| server.[service].tolerations | [service] Toleration labels for pod assignment | [] |
| server.[service].affinity | [service] Affinity settings for pod assignment | {} |
| server.frontend.service.nodePort | frontend service nodePort, if service type is NodePort | `` |
| web.enabled | Enable WebUI service | true |
| web.replicaCount | Number of WebUI service replicas | 1 |
| web.image.repository | WebUI image repository | ubercadence/web |
| web.image.tag | WebUI image tag | 3.32.0 |
| web.image.pullPolicy | WebUI image pull policy | IfNotPresent |
| web.service.annotations | WebUI service annotations | {} |
| web.service.type | WebUI service type | ClusterIP |
| web.service.port | WebUI service port | 80 |
| web.service.nodePort | WebUI service nodePort, if service type is NodePort | `` |
| web.ingress.enabled | Enable WebUI Ingress | false |
| web.ingress.annotations | WebUI Ingress annotations | {} |
| web.ingress.hosts | WebUI Ingress hosts | / |
| web.ingress.tls | WebUI Ingress TLS config | [] |
| web.podSecurityContext | WebUI pod security context | {} |
| web.securityContext | WebUI security context | {} |
| web.resources | WebUI CPU/Memory resource requests/limits | {} |
| web.nodeSelector | Node labels for pod assignment | {} |
| web.tolerations | Toleration labels for pod assignment | [] |
| web.affinity | Affinity settings for pod assignment | {} |
| schema.setup.enabled | Create database or keyspace | true |
| schema.setup.backoffLimit | Create database job back-off limit | 100 |
| schema.update.enabled | Update schema | true |
| schema.update.backoffLimit | Update schema job back-off limit | 100 |
| cassandra.enabled | Install Cassandra cluster | true |
| cassandra.config.cluster_size | Cassandra cluster node number | 1 |
| mysql.enabled | Install MySQL | false |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example:

$ helm install my-release --set server.image.tag=0.7.1 cadence/cadence

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example:

$ helm install my-release --values values.yaml cadence/cadence

Contributing

Developer setup

You can download dependencies and install the pre-commit hooks using make setup, or update the dependencies by running make update-dependencies.
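
That is (targets as named above):

$ make setup                 # download dependencies and install the pre-commit hooks
$ make update-dependencies   # refresh the dependencies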

Testing

To implement test automation for the Helm chart with GitHub Actions, it was necessary to create a local fork of the database charts, so that some resources that do not support annotations could be initialized in the correct order. Once a PR like https://github.com/helm/chart-testing/pull/243/files is merged, we could use a Helm post-renderer to add annotations to resources in charts that do not support annotating them; alternatively, we could remove the database charts as dependencies and pre-install them to simplify testing, at the cost of a worse developer experience.

New release process

The new release process uses GitHub Pages as a Helm chart repository. In order to test the release, set up GitHub Pages for your fork; this will allow you to see how your changes affect the final outcome. Please create a branch named gh-pages and enable GitHub Pages on your repository from that branch. If you need more information, please consult the Chart Releaser Action documentation.

Chart upgrade

For contributions involving an upgrade to the Cadence server version, or modifying the chart and subsequently releasing new chart versions, please refer to the Cadence chart upgrade documentation.