Version: 8.6

Red Hat OpenShift

Red Hat OpenShift, a Kubernetes distribution maintained by Red Hat, provides options for both managed and on-premises hosting.

Deploying Camunda 8 on Red Hat OpenShift is supported using Helm, given the appropriate configurations.

However, it's important to note that the Security Context Constraints (SCCs) and Routes configurations might require slight deviations from the guidelines provided in the general Helm deployment guide.

Cluster Specification

When deploying Camunda 8 on an OpenShift cluster, the cluster specification should align with your specific requirements and workload characteristics. Here's a suggested configuration to begin with:

  • Instance type: 4 vCPUs (x86_64, >3.1 GHz), 16 GiB Memory (for example, m7i.xlarge on AWS)
  • Number of dedicated nodes: 4
  • Volume type: SSD volumes (with between 1000 and 3000 IOPS per volume, and a throughput of 1,000 MB/s per volume, for instance, gp3 on AWS)

If you need to set up an OpenShift cluster on a cloud provider, we recommend our guide to deploying a ROSA cluster.

Supported Versions

We conduct testing and ensure compatibility against the following OpenShift versions:

OpenShift Version    End of Support Date
4.17.x               June 27, 2025
4.16.x               December 27, 2025
4.15.x               August 27, 2025
4.14.x               May 1, 2025
Version compatibility

Camunda 8 supports OpenShift versions in the Red Hat General Availability, Full Support, and Maintenance Support life cycle phases. For more information, refer to the Red Hat OpenShift Container Platform Life Cycle Policy.

Requirements

Deploy Camunda 8 via Helm charts

Configure your deployment

Start by creating a values.yml file to store the configuration for your environment. This file will contain key-value pairs that will be substituted using envsubst. Throughout this guide, you will add and merge values into this file to configure the deployment to fit your needs.

You can find a reference example of this file here:

aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/base.yml
loading...
Merging YAML files

This guide references multiple configuration files that need to be merged into a single YAML file. Be cautious to avoid duplicate keys when merging the files. Additionally, pay close attention when copying and pasting YAML content. Ensure that the separator notation --- does not inadvertently split the configuration into multiple documents.

We strongly recommend double-checking your YAML file before applying it. You can use tools such as yamllint.com, or the yamllint CLI if you prefer not to paste your configuration into an online tool.
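
To illustrate the pitfall described above, here is a minimal sketch (the keys shown are only placeholders, not a statement about the chart's value structure): a correct merge keeps everything in a single YAML document, while a stray --- separator starts a second document that may be ignored or rejected when the file is used as a single values file.

# Correct: one YAML document, keys merged under their parents
global:
  ingress:
    enabled: true
connectors:
  enabled: true

# Incorrect: the stray '---' starts a second YAML document,
# so the keys after it may not be read as part of the same values file
global:
  ingress:
    enabled: true
---
connectors:
  enabled: true

# Optional: check the file locally for syntax problems and duplicated keys
yamllint values.yml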

Configuring the Ingress

Before exposing services outside the cluster, we need an Ingress component. Here's how you can configure it:

Routes expose services externally by linking a URL to a service within the cluster. OpenShift supports both the standard Kubernetes Ingress and routes, giving cluster users the flexibility to choose.

Routes exist because their specification predates Ingress, and their functionality differs from it; for example, unlike Ingress, a single route cannot be linked to multiple services, nor can it use paths.

Because OpenShift can create routes from standard Ingress resources, you configure the routes for the Zeebe Gateway through the chart's Ingress settings as well.

Setting up the application domain for Camunda 8

The route created by OpenShift will use a domain to provide access to the platform. By default, you can use the OpenShift applications domain, but any other domain supported by the router can also be used.

To retrieve the OpenShift applications domain (used as an example here), run the following command:

export OPENSHIFT_APPS_DOMAIN=$(oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}')

Next, define the route domain that will be used for the Camunda 8 deployment. For example:

export DOMAIN_NAME="camunda.$OPENSHIFT_APPS_DOMAIN"

echo "Camunda 8 will be reachable from $DOMAIN_NAME"

If you choose to use a custom domain instead, ensure it is supported by your router configuration and replace the example domain with your desired domain. For more details on configuring custom domains in OpenShift, refer to the official custom domain OpenShift documentation.

Checking if HTTP/2 is enabled

As the Zeebe Gateway also uses gRPC (which relies on HTTP/2), HTTP/2 Ingress Connectivity must be enabled.

To check if HTTP/2 is already enabled on your OpenShift cluster, run the following command:

oc get ingresses.config/cluster -o json | jq '.metadata.annotations."ingress.operator.openshift.io/default-enable-http2"'

Alternatively, if you use a dedicated IngressController for the deployment:

# List your IngressControllers
oc -n openshift-ingress-operator get ingresscontrollers

# Replace <ingresscontroller_name> with your IngressController name
oc -n openshift-ingress-operator get ingresscontrollers/<ingresscontroller_name> -o json | jq '.metadata.annotations."ingress.operator.openshift.io/default-enable-http2"'
  • If the output is "true", it means HTTP/2 is enabled.
  • If the output is null or empty, HTTP/2 is not enabled.
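
Later in this guide, once a route exists for your domain, you can additionally confirm HTTP/2 end to end from outside the cluster with curl. This is an optional sanity check, not part of the referenced procedure; $DOMAIN_NAME is the domain defined earlier:

# Prints the negotiated HTTP version for the route
# (expect "2" once HTTP/2 is enabled on the router and a route exists;
#  -k skips certificate verification for self-signed setups)
curl -skI --http2 "https://$DOMAIN_NAME" -o /dev/null -w '%{http_version}\n'
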
Enable HTTP/2

If HTTP/2 is not enabled, you can enable it by running one of the following commands, depending on whether you target a specific IngressController or the whole cluster:

IngressController configuration:

oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true

Global cluster configuration:

oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true

This adds the annotation required to enable HTTP/2 for Ingress, either on the selected IngressController or globally on the cluster.

Configure Route TLS

Additionally, the Zeebe Gateway should be configured to use an encrypted connection with TLS. In OpenShift, the connection from HAProxy to the Zeebe Gateway service can use HTTP/2 only for re-encryption or pass-through routes, and not for edge-terminated or insecure routes.

  1. Zeebe Gateway: two TLS secrets for the Zeebe Gateway are required, one for the service and the other one for the route:

    • The first TLS secret is issued to the Zeebe Gateway Service Name. The certificate must use PKCS #8 or PKCS #1 syntax, as these are the only formats Zeebe supports. It is referenced as camunda-platform-internal-service-certificate.

      In the example below, a TLS certificate is generated for the Zeebe Gateway service with an annotation; the generated certificate takes the form of a secret. (An illustrative sketch that combines these route-related values follows this list.)

      Another option is Cert Manager. For more details, review the OpenShift documentation.

    PKCS #8, PKCS #1 syntax

    PKCS #1 private key encoding. PKCS #1 produces a PEM block that contains the private key algorithm in the header and the private key in the body. A key that uses this encoding can be recognised by its BEGIN RSA PRIVATE KEY or BEGIN EC PRIVATE KEY header. NOTE: This encoding is not supported for Ed25519 keys. Attempts to use it with an Ed25519 key are ignored and PKCS #8 is used instead.

    PKCS #8 private key encoding. PKCS #8 produces a PEM block with a static header and both the private key algorithm and the private key in the body. A key that uses this encoding can be recognised by its BEGIN PRIVATE KEY header.

    PKCS #1, PKCS #8 syntax definition, from cert-manager

    • The second TLS secret is used on the exposed route, referenced as camunda-platform-external-certificate. For example, this would be the same TLS secret used for Ingress. We also configure the Zeebe Gateway Ingress to create a Re-encrypt Route.

    Finally, we mount the Service Certificate Secret (camunda-platform-internal-service-certificate) to the Zeebe Gateway Pod. Update your values.yml file with the following:

    aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/zeebe-gateway-route.yml
    loading...

    The domain used by the Zeebe Gateway for gRPC is zeebe-$DOMAIN_NAME, which is different from the one used by the other components ($DOMAIN_NAME), to avoid any conflicts. It is also important to note that the port used for gRPC is 443.

  2. Operate: mount the Service Certificate Secret to the Operate pod and configure the secure TLS connection. Here, only the tls.crt file is required.

Update your values.yml file with the following:

aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/operate-route.yml
loading...

The actual configuration properties can be reviewed in the Operate configuration documentation.

  3. Tasklist: mount the Service Certificate Secret to the Tasklist pod and configure the secure TLS connection. Here, only the tls.crt file is required.

    Update your values.yml file with the following:

aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/tasklist-route.yml
loading...

The actual configuration properties can be reviewed in the Tasklist configuration documentation.

  4. Connectors: update your values.yml file with the following:
aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/connectors-route.yml
loading...

The actual configuration properties can be reviewed in the Connectors configuration documentation.

  5. Configure all other applications running inside the cluster and connecting to the Zeebe Gateway to also use TLS.

  6. Set up the global configuration to enable the single Ingress definition with the host. Update your configuration file as shown below:

aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/domain.yml
loading...
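
To make the pieces from the list above concrete, the following is a minimal, illustrative values.yml sketch of how they might fit together. It is not the content of the referenced files: the exact chart keys (for example zeebeGateway.service.annotations and zeebeGateway.ingress.grpc) should be verified against the Helm chart version you use, and the route.openshift.io/termination annotation relies on the OpenShift router converting the Ingress into a route.

# Illustrative sketch only - verify the key names against your chart version
zeebeGateway:
  service:
    annotations:
      # Ask the OpenShift service CA to issue the internal service certificate
      service.beta.openshift.io/serving-cert-secret-name: camunda-platform-internal-service-certificate
  ingress:
    grpc:
      enabled: true
      host: "zeebe-$DOMAIN_NAME"
      annotations:
        # Have the router create a re-encrypt route for the gRPC endpoint
        route.openshift.io/termination: reencrypt
      tls:
        enabled: true
        secretName: camunda-platform-external-certificate

global:
  ingress:
    enabled: true
    host: "$DOMAIN_NAME"
    tls:
      enabled: true
      secretName: camunda-platform-external-certificate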

Configuring the Security Context Constraints

Depending on your OpenShift cluster's Security Context Constraints (SCCs) configuration, the deployment process may vary. By default, OpenShift employs more restrictive SCCs, so the Helm chart must set the user running all components and dependencies to null, allowing OpenShift to assign arbitrary IDs.

The global.compatibility.openshift.adaptSecurityContext variable in your values.yml can be set to one of the following values (a short illustrative sketch follows below):

  • force: The runAsUser and fsGroup values will be null in all components.
  • disabled: The runAsUser and fsGroup values will not be modified (default).
aws/rosa-hcp/camunda-versions/8.6/procedure/install/helm-values/scc.yml
loading...
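
For reference, a minimal values.yml fragment enabling this compatibility mode looks like the following; force is shown here, and disabled keeps the chart's default security context values:

global:
  compatibility:
    openshift:
      # Set runAsUser and fsGroup to null so OpenShift can assign arbitrary IDs
      adaptSecurityContext: force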

Enable Enterprise components

Some components are not enabled by default in this deployment. For more information on how to configure and enable these components, refer to configuring Enterprise components and Connectors.

Fill your deployment with actual values

Once you've prepared the values.yml file, run the following envsubst command to substitute the environment variables with their actual values:

# generate the final values
envsubst < values.yml > generated-values.yml

# print the result
cat generated-values.yml
Camunda Helm chart no longer automatically generates passwords

Starting with Camunda 8.6, automatic secret generation is deprecated in the Helm chart, and the feature has been fully removed in Camunda 8.7.

Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. You can use openssl to generate random secrets and store them in environment variables:

aws/rosa-hcp/camunda-versions/8.6/procedure/install/generate-passwords.sh
loading...

Use these environment variables in the kubectl command to create the secret.

aws/rosa-hcp/camunda-versions/8.6/procedure/install/create-identity-secret.sh
loading...
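
As an illustration of the approach (not the content of the referenced scripts), the sketch below generates random values with openssl and stores them in a generic Kubernetes secret. The secret name and key names are placeholders and must match whatever your values.yml references:

# Illustrative only - the names below are placeholders, align them with your values.yml
export KEYCLOAK_ADMIN_PASSWORD="$(openssl rand -hex 16)"
export POSTGRES_PASSWORD="$(openssl rand -hex 16)"

kubectl create secret generic camunda-credentials \
  --namespace camunda \
  --from-literal=keycloak-admin-password="$KEYCLOAK_ADMIN_PASSWORD" \
  --from-literal=postgres-password="$POSTGRES_PASSWORD"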

Install Camunda 8 using Helm

Now that the generated-values.yml is ready, you can install Camunda 8 using Helm.

The following are the required environment variables with some example values:

aws/rosa-hcp/camunda-versions/8.6/procedure/install/chart-env.sh
loading...

Then run the following command:

aws/rosa-hcp/camunda-versions/8.6/procedure/install/install-chart.sh
loading...

This command:

  • Installs (or upgrades) the Camunda platform using the Helm chart.
  • Substitutes the appropriate version using the $CAMUNDA_HELM_CHART_VERSION environment variable.
  • Applies the configuration from generated-values.yml.
note

This guide uses helm upgrade --install because it performs an installation on the first run and an upgrade on subsequent runs. This can simplify future Camunda 8 Helm upgrades or other component upgrades.
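
For orientation, a minimal sketch of such an invocation is shown below. It assumes the Camunda Helm repository has already been added (helm repo add camunda https://helm.camunda.io) and uses camunda as both the release name and the namespace; the referenced script may set additional flags.

# Minimal sketch - the referenced install script may differ
helm upgrade --install camunda camunda/camunda-platform \
  --version "$CAMUNDA_HELM_CHART_VERSION" \
  --namespace camunda --create-namespace \
  -f generated-values.yml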

You can track the progress of the installation using the following command:

watch -n 5 '
kubectl get pods -n camunda --output=wide;
if [ $(kubectl get pods -n camunda --field-selector=status.phase!=Running -o name | wc -l) -eq 0 ] &&
[ $(kubectl get pods -n camunda -o json | jq -r ".items[] | select(.status.containerStatuses[]?.ready == false)" | wc -l) -eq 0 ];
then
echo "All pods are Running and Healthy - Installation completed!";
else
echo "Some pods are not Running or Healthy";
fi
'

Verify connectivity to Camunda 8

Follow our guide to verify connectivity to Camunda 8.

Domain name for gRPC Zeebe

In this setup, the domain used for gRPC communication with Zeebe is slightly different from the one in the guide. Instead of using zeebe.$DOMAIN_NAME, you need to use zeebe-$DOMAIN_NAME.
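
If you want a quick command-line check of the gRPC endpoint, a sketch using zbctl is shown below. The client ID and secret come from the Identity/Keycloak client you configured (the variable names here are placeholders), and the token URL follows the typical Keycloak pattern, which may differ in your setup:

# Sketch of a gRPC connectivity check with zbctl (credentials and token URL are assumptions)
zbctl status \
  --address "zeebe-$DOMAIN_NAME:443" \
  --clientId "$ZEEBE_CLIENT_ID" \
  --clientSecret "$ZEEBE_CLIENT_SECRET" \
  --authzUrl "https://$DOMAIN_NAME/auth/realms/camunda-platform/protocol/openid-connect/token"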

Pitfalls to avoid

For general deployment pitfalls, visit the deployment troubleshooting guide.

Security Context Constraints (SCCs)

Security Context Constraints (SCCs) are a set of conditions that a pod must adhere to in order to be accepted into the system. They define the security conditions under which a pod operates.

Similar to how roles control user permissions, SCCs regulate the permissions of deployed applications, both at the pod and container level. It's generally recommended to deploy applications with the most restrictive SCCs possible. If you're unfamiliar with security context constraints, you can refer to the OpenShift documentation.

Restrictive SCCs

The following represents the most restrictive SCCs that can be used to deploy Camunda 8. Note that in OpenShift 4.10, these are equivalent to the built-in restricted SCCs (which are the default SCCs).

Allow Privileged: false
Default Add Capabilities: <none>
Required Drop Capabilities: KILL, MKNOD, SYS_CHROOT, SETUID, SETGID
Allowed Capabilities: <none>
Allowed Seccomp Profiles: <none>
Allowed Volume Types: configMap, downwardAPI, emptyDir, persistentVolumeClaim, projected, secret
Allow Host Network: false
Allow Host Ports: false
Allow Host PID: false
Allow Host IPC: false
Read Only Root Filesystem: false
Run As User Strategy: MustRunAsRange
SELinux Context Strategy: MustRunAs
FSGroup Strategy: MustRunAs
Supplemental Groups Strategy: RunAsAny

When using these SCCs, be sure not to specify any runAsUser or fsGroup values in either the pod or container security context. Instead, allow OpenShift to assign arbitrary IDs.
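
To verify which SCC was actually applied, you can inspect the openshift.io/scc annotation that OpenShift sets on running pods; the pod name below is a placeholder:

# Shows the SCC assigned to a running pod (replace the pod name with one of yours)
oc -n camunda get pod <pod-name> -o yaml | grep 'openshift.io/scc'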

note

If you are providing the ID ranges yourself, you can also configure the runAsUser and fsGroup values accordingly.
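
If you want to see the ID ranges OpenShift allocated to your namespace before setting explicit values, you can read the namespace annotations; this sketch assumes the camunda namespace:

# Prints the UID and supplemental group ranges allocated to the namespace
oc get namespace camunda -o yaml | grep 'sa.scc'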

The Camunda Helm chart can be deployed to OpenShift with a few modifications, primarily revolving around your desired security context constraints.