Version: 8.9 (unreleased)

Red Hat OpenShift

Red Hat OpenShift, a Kubernetes distribution maintained by Red Hat, provides options for both managed and on-premises hosting.

Deploying Camunda 8 on Red Hat OpenShift is supported using Helm, given the appropriate configurations.

However, it's important to note that the Security Context Constraints (SCCs) and Routes configurations might require slight deviations from the guidelines provided in the general Helm deployment guide.

Additional information and a high-level overview of Kubernetes as the upstream project are available in our Kubernetes deployment reference.

Requirements

  • Helm
  • kubectl to interact with the cluster.
  • jq to interact with some variables.
  • GNU envsubst to generate manifests.
  • oc (version supported by your OpenShift) to interact with OpenShift.
  • A namespace to host the Camunda Platform.
  • Permissions to install Kubernetes operators (cluster-admin or equivalent) for deploying the infrastructure services (Elasticsearch, PostgreSQL, Keycloak). These operators can also be installed via the OpenShift OperatorHub, but this guide installs them directly from source for full control over versions and configuration.

For the tool versions used, check the .tool-versions file in the repository. It contains an up-to-date list of versions that we also use for testing.

Architecture

This section installs Camunda 8 following the reference architecture, which includes the following core components:

  • Orchestration Cluster: Core process execution engine (Zeebe, Operate, Tasklist, and Identity)
  • Web Modeler and Console: Management and design tools (Web Modeler, Console, and Management Identity)

Infrastructure components are deployed using official Kubernetes operators, as described in Deploy infrastructure with Kubernetes operators.

For OpenShift deployments, the following OpenShift-specific configurations are also included:

  • OpenShift Routes: Native OpenShift way to expose services externally (alternative to standard Kubernetes Ingress)
  • Security Context Constraints (SCCs): Security framework for controlling pod and container permissions
Single namespace deployment

This guide uses a single Kubernetes namespace for simplicity, since the deployment is done with a single Helm chart. This differs from the reference architecture, which recommends separating Orchestration Cluster and Web Modeler or Console into different namespaces in production to improve isolation and enable independent scaling.

Identity Provider (IdP) setup

An OIDC-compatible identity provider (IdP) is required. This reference architecture does not include an IdP. You must configure your own before proceeding. Options include:

After deploying your IdP, merge the corresponding auth overlay into your values.yml using yq before running envsubst:

Keycloak Operator overlays:

# Merge the Keycloak Operator Helm values (use "domain" or "no-domain" variant)
yq ". *+ load(\"camunda-keycloak-domain-values.yml\")" values.yml > values-merged.yml && mv values-merged.yml values.yml

# Merge the identity secrets overlay
yq ". *+ load(\"camunda-values-identity-secrets.yml\")" values.yml > values-merged.yml && mv values-merged.yml values.yml

The overlay files are available in the Keycloak operator-based directory. The identity secrets are created automatically by the Keycloak Operator.
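The `. *+ load(...)` expression performs a deep merge in which arrays are appended rather than replaced, so overlay entries accumulate instead of overwriting the base. A sketch with hypothetical fragments:

```yaml
# values.yml (before):       zeebe.env contains only EXISTING_VAR
# overlay.yml:               zeebe.env contains only EXTRA_VAR
# After `yq '. *+ load("overlay.yml")' values.yml`, the env array holds both:
zeebe:
  env:
    - name: EXISTING_VAR
      value: "a"
    - name: EXTRA_VAR
      value: "b"
```

Scalar keys present in both files are taken from the overlay, which is why the order of merges in this guide matters.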

No-domain deployments and IdP choice

If you deploy Camunda without a domain (using kubectl port-forward), you'll generally need to use Keycloak as your IdP. Most external OIDC providers (for example, Microsoft Entra ID and Okta) don't allow localhost as a valid redirect URI for security reasons. Keycloak, when deployed locally in the cluster, can be configured to accept localhost-based redirect URIs.

Why isn't an IdP included by default?

The choice of identity provider is highly specific to each organization's security requirements, existing infrastructure, and compliance needs. Rather than bundling a default IdP that may not match your setup, the reference architecture leaves this choice to you.

Deploy Camunda 8 via Helm charts

Obtain a copy of the reference architecture

All configuration files, deployment scripts, and Helm values referenced in this guide are available in the Camunda deployment references repository.

generic/openshift/single-region/get-your-copy.sh
loading...

This places you at the repository root, from which both directories are accessible:

  • generic/kubernetes/operator-based/ — operator deployment scripts (Elasticsearch, PostgreSQL, Keycloak)
  • generic/openshift/single-region/ — OpenShift-specific Helm values and procedures

Environment setup

Source the environment variables required by the deployment scripts:

generic/kubernetes/operator-based/0-set-environment.sh
loading...
note

Ensure you source this file before running any deployment or configuration commands in the following sections.

Configure your deployment

Start by copying the base Helm values file from the cloned repository into a working values.yml at the repository root:

cp generic/openshift/single-region/helm-values/base.yml values.yml

This file contains key-value pairs that will be substituted using envsubst. Throughout this guide, you will merge additional overlays into this file to configure your deployment.
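The substitution that envsubst performs later can be sketched with sed for a single variable; `CAMUNDA_DOMAIN` and the `host` key below are illustrative only:

```shell
# Sketch of what the envsubst step does: replace ${VAR} placeholders in the
# values file with the corresponding environment variable values.
export CAMUNDA_DOMAIN="apps.example.com"
printf 'host: ${CAMUNDA_DOMAIN}\n' \
  | sed "s|\${CAMUNDA_DOMAIN}|${CAMUNDA_DOMAIN}|g"
# prints: host: apps.example.com
```

envsubst does this for every exported variable at once, which is why sourcing the environment file first is essential.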

Review the base Helm values
generic/openshift/single-region/helm-values/base.yml
loading...
Merging YAML files

This guide references multiple configuration files that need to be merged into a single YAML file. Be cautious to avoid duplicate keys when merging the files. Additionally, pay close attention when copying and pasting YAML content. Ensure that the separator notation --- does not inadvertently split the configuration into multiple documents.

We strongly recommend double-checking your YAML file before applying it. You can use tools like yamllint.com or the YAML Lint CLI if you prefer not to share your information online.
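A quick check for the accidental `---` split can be scripted; this assumes your merged file is named values.yml in the current directory:

```shell
# Warn if the merged values file was split into multiple YAML documents.
# A single-document values.yml should contain no `---` separators.
if grep -q '^---' values.yml 2>/dev/null; then
  echo "warning: values.yml contains a '---' document separator" >&2
fi
```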

Configuring the Ingress

Before exposing services outside the cluster, we need an Ingress component. Here's how you can configure it:

Routes expose services externally by linking a URL to a service within the cluster. OpenShift supports both the standard Kubernetes Ingress and routes, giving cluster users the flexibility to choose.

Routes exist because their specification predates Ingress. Their functionality also differs; for example, unlike Ingress, a route cannot link multiple services to a single route or route based on paths.

To use routes for the Zeebe Gateway, configure them through the chart's Ingress settings as well.

Setting up the application domain for Camunda 8

The route created by OpenShift will use a domain to provide access to the platform. By default, you can use the OpenShift applications domain, but any other domain supported by the router can also be used.

To retrieve the OpenShift applications domain (used as an example here), run the following command and define the route domain that will be used for the Camunda 8 deployment:

generic/openshift/single-region/procedure/setup-application-domain.sh
loading...

If you choose to use a custom domain instead, ensure it is supported by your router configuration and replace the example domain with your desired domain. For more details on configuring custom domains in OpenShift, refer to the official custom domain OpenShift documentation.

Checking if HTTP/2 is enabled

As the Zeebe Gateway also uses gRPC (which relies on HTTP/2), HTTP/2 Ingress Connectivity must be enabled.

To check if HTTP/2 is already enabled on your OpenShift cluster, run the following command:

oc get ingresses.config/cluster -o json | jq '.metadata.annotations."ingress.operator.openshift.io/default-enable-http2"'

Alternatively, if you use a dedicated IngressController for the deployment:

generic/openshift/single-region/procedure/get-ingress-http2-status.sh
loading...
  • If the output is "true", it means HTTP/2 is enabled.
  • If the output is null or empty, HTTP/2 is not enabled.
Enable HTTP/2

If HTTP/2 is not enabled, you can enable it by running the following command:

IngressController configuration:

generic/openshift/single-region/procedure/enable-ingress-http2.sh
loading...

Global cluster configuration:

oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true

This adds the annotation required to enable HTTP/2 for Ingress cluster-wide.

Configure Route TLS

Additionally, the Zeebe Gateway should be configured to use an encrypted connection with TLS. In OpenShift, the connection from HAProxy to the Zeebe Gateway service can use HTTP/2 only for re-encryption or pass-through routes, and not for edge-terminated or insecure routes.

  1. Zeebe cluster: two TLS secrets for the Zeebe Gateway are required, one for the service and the other one for the route:
  • The first TLS secret is issued to the Zeebe Gateway Service Name. It must use PKCS #8 or PKCS #1 syntax, as Zeebe supports only these formats, and is referenced as camunda-platform-internal-service-certificate. This certificate is also used by other components, such as Operate and Tasklist.

    In the example below, a TLS certificate is generated for the Zeebe Gateway service with an annotation. The generated certificate will be in the form of a secret.

    Another option is Cert Manager. For more details, review the OpenShift documentation.

PKCS #8, PKCS #1 syntax

PKCS #1 private key encoding. PKCS #1 produces a PEM block that contains the private key algorithm in the header and the private key in the body. A key that uses this can be recognised by its BEGIN RSA PRIVATE KEY or BEGIN EC PRIVATE KEY header. NOTE: This encoding is not supported for Ed25519 keys. Attempting to use this encoding with an Ed25519 key will be ignored and default to PKCS #8.

PKCS #8 private key encoding. PKCS #8 produces a PEM block with a static header and both the private key algorithm and the private key in the body. A key that uses this encoding can be recognised by its BEGIN PRIVATE KEY header.

PKCS #1, PKCS #8 syntax definition from cert-manager

  • The second TLS secret is used on the exposed route, referenced as camunda-platform-external-certificate. For example, this would be the same TLS secret used for Ingress. We also configure the Zeebe Gateway Ingress to create a Re-encrypt Route.
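To check which encoding an existing key uses, inspect its PEM header; the file created below is a stand-in for illustration, not a real key:

```shell
# Create a stand-in key file for illustration; inspect your actual key instead.
printf -- '-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n' > example.key

head -n 1 example.key
# "-----BEGIN PRIVATE KEY-----"      -> PKCS #8 (static header)
# "-----BEGIN RSA PRIVATE KEY-----"  -> PKCS #1 (algorithm named in the header)
```

An existing PKCS #1 key can be converted to PKCS #8 with `openssl pkcs8 -topk8 -nocrypt -in pkcs1.key -out pkcs8.key`.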

To configure the orchestration cluster securely, it's essential to set up a secure communication configuration between pods:

  • We enable gRPC Ingress for the Zeebe Pod, which sets up a secure proxy that we'll use to communicate with the Zeebe cluster. To avoid conflicts with other services, we use a specific domain (zeebe-$CAMUNDA_DOMAIN) for the gRPC proxy, different from the one used by other services ($CAMUNDA_DOMAIN). We also note that the port used for gRPC is 443.
  • We mount the Service Certificate Secret (camunda-platform-internal-service-certificate) to the Zeebe pod and configure a secure TLS connection.
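The internal service certificate can be issued by OpenShift's service CA through an annotation on the gateway Service. A sketch follows; the service name is an example, and the repository's overlay files set this up for you:

```yaml
# Request an OpenShift service-serving certificate; the service CA places the
# generated certificate and key into the named secret.
apiVersion: v1
kind: Service
metadata:
  name: camunda-zeebe-gateway   # example service name
  annotations:
    service.beta.openshift.io/serving-cert-secret-name: camunda-platform-internal-service-certificate
```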

Merge the orchestration route overlay into your values.yml file:

yq '. *+ load("generic/openshift/single-region/helm-values/orchestration-route.yml")' values.yml > values-merged.yml && mv values-merged.yml values.yml
Review the orchestration route configuration
generic/openshift/single-region/helm-values/orchestration-route.yml
loading...

The actual configuration properties can be reviewed in the Zeebe Gateway configuration documentation.

  2. Connectors: merge the connectors route overlay:

    yq '. *+ load("generic/openshift/single-region/helm-values/connectors-route.yml")' values.yml > values-merged.yml && mv values-merged.yml values.yml
    Review the connectors route configuration
    generic/openshift/single-region/helm-values/connectors-route.yml
    loading...

    The actual configuration properties can be reviewed in the connectors configuration documentation.

  3. TLS for internal applications: Configure all other applications running inside the cluster and connecting to the Zeebe Gateway to also use TLS.

  4. Domain configuration: Set up the global configuration to enable the single Ingress definition with the host. Merge the domain overlay:

    yq '. *+ load("generic/openshift/single-region/helm-values/domain.yml")' values.yml > values-merged.yml && mv values-merged.yml values.yml
    Review the domain configuration
    generic/openshift/single-region/helm-values/domain.yml
    loading...

Configuring the Security Context Constraints

Depending on your OpenShift cluster's Security Context Constraints (SCCs) configuration, the deployment process may vary. By default, OpenShift employs more restrictive SCCs than plain Kubernetes, so the Helm chart must set the user running each component and dependency to null, allowing OpenShift to assign arbitrary user IDs.

The global.compatibility.openshift.adaptSecurityContext variable in your values.yaml can be used to set the following possible values:

  • force: The runAsUser and fsGroup values will be null in all components.
  • disabled: The runAsUser and fsGroup values will not be modified (default).
Merge the SCC overlay into your values.yml file:

yq '. *+ load("generic/openshift/single-region/helm-values/scc.yml")' values.yml > values-merged.yml && mv values-merged.yml values.yml
Review the restrictive SCC configuration
generic/openshift/single-region/helm-values/scc.yml
loading...
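In values terms, the setting looks like the following excerpt; the repository's scc.yml is authoritative:

```yaml
global:
  compatibility:
    openshift:
      # "force": set runAsUser and fsGroup to null in all components so
      # OpenShift can assign arbitrary IDs; "disabled" leaves them unchanged.
      adaptSecurityContext: force
```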

Enable Enterprise components

Some components are not enabled by default in this deployment. For more information on how to configure and enable these components, refer to configuring Enterprise components and connectors.

Deploy prerequisite services

Before deploying Camunda, you need to deploy the infrastructure services it depends on. The core infrastructure (Elasticsearch and PostgreSQL) is deployed using Kubernetes operators as described in Deploy infrastructure with Kubernetes operators. Keycloak can optionally be deployed as your OIDC provider:

All deploy scripts are located in generic/kubernetes/operator-based/. Review each script before executing to understand the deployment steps, and adapt the operator Custom Resource configurations for your specific requirements (resource limits, storage, replicas, etc.).

Working directory

All commands in this guide assume you are at the repository root (the directory created by get-your-copy.sh). The deploy commands below use subshells (cd ... && ./deploy.sh) to preserve your working directory.
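The subshell pattern can be verified locally; a `cd` inside `( ... )` never leaks into your shell:

```shell
# A subshell gets its own working directory; the parent shell is unaffected.
BEFORE=$(pwd)
(cd /tmp && true)   # a deploy step would run here
AFTER=$(pwd)
[ "$BEFORE" = "$AFTER" ] && echo "working directory preserved"
# prints: working directory preserved
```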

Deploy Elasticsearch

To deploy Elasticsearch using the ECK operator:

(cd generic/kubernetes/operator-based/elasticsearch && ./deploy.sh)

The script installs the ECK operator, deploys an Elasticsearch cluster, and waits until it is ready.

Review the Elasticsearch cluster configuration
generic/kubernetes/operator-based/elasticsearch/elasticsearch-cluster.yml
loading...

For more details on the Elasticsearch deployment, see Elasticsearch deployment in the operator-based infrastructure guide.

Deploy PostgreSQL

Deploy PostgreSQL clusters using the CloudNativePG operator:

(cd generic/kubernetes/operator-based/postgresql && CLUSTER_FILTER="pg-identity,pg-webmodeler" ./deploy.sh)

This script installs the CNPG operator (auto-detecting OpenShift to apply SCC patches), creates secrets, deploys the specified PostgreSQL clusters, and waits for readiness.

The following PostgreSQL clusters are created:

  • pg-identity: Database for Camunda Identity component
  • pg-webmodeler: Database for Web Modeler component (remove from configuration if not needed)
Review the PostgreSQL cluster configuration
generic/kubernetes/operator-based/postgresql/postgresql-clusters.yml
loading...

For more details on the PostgreSQL deployment, see PostgreSQL deployment in the operator-based infrastructure guide.

Deploy Keycloak (optional)

If you choose Keycloak as your identity provider (IdP), deploy it using the Keycloak Operator. First, deploy its PostgreSQL database, then deploy the Keycloak operator and instance. If you use an external OIDC provider instead, skip this section.

Deploy Keycloak with OpenShift Routes:

# Deploy the PostgreSQL database for Keycloak
(cd generic/kubernetes/operator-based/postgresql && CLUSTER_FILTER=pg-keycloak ./deploy.sh)

# Deploy Keycloak
export KEYCLOAK_CONFIG_FILE="keycloak-instance-domain-openshift.yml"
(cd generic/kubernetes/operator-based/keycloak && ./deploy.sh)
Review the OpenShift Keycloak instance configuration
generic/kubernetes/operator-based/keycloak/keycloak-instance-domain-openshift.yml
loading...

For more details on the Keycloak deployment, see Keycloak deployment in the operator-based infrastructure guide.

Merge operator overlays into values

Once the operator-managed services are running, merge the corresponding Helm values overlays into your values.yml file. These overlays configure Camunda components to use the external operator-managed services instead of embedded subcharts.

Merge the Elasticsearch overlay:

yq '. *+ load("generic/kubernetes/operator-based/elasticsearch/camunda-elastic-values.yml")' values.yml > values-merged.yml && mv values-merged.yml values.yml
Review the Elasticsearch Helm overlay
generic/kubernetes/operator-based/elasticsearch/camunda-elastic-values.yml
loading...

Merge the Identity PostgreSQL overlay:

yq '. *+ load("generic/kubernetes/operator-based/postgresql/camunda-identity-values.yml")' values.yml > values-merged.yml && mv values-merged.yml values.yml
Review the Identity PostgreSQL Helm overlay
generic/kubernetes/operator-based/postgresql/camunda-identity-values.yml
loading...

If Web Modeler is enabled, also merge the Web Modeler PostgreSQL overlay:

yq '. *+ load("generic/kubernetes/operator-based/postgresql/camunda-webmodeler-values.yml")' values.yml > values-merged.yml && mv values-merged.yml values.yml
Review the Web Modeler PostgreSQL Helm overlay
generic/kubernetes/operator-based/postgresql/camunda-webmodeler-values.yml
loading...

Merge the Keycloak overlay (optional — only if Keycloak was deployed as your IdP; choose the appropriate variant for your setup):

yq '. *+ load("generic/kubernetes/operator-based/keycloak/camunda-keycloak-domain-values.yml")' values.yml > values-merged.yml && mv values-merged.yml values.yml
Review the Keycloak domain Helm overlay
generic/kubernetes/operator-based/keycloak/camunda-keycloak-domain-values.yml
loading...

Fill your deployment with actual values

If Web Modeler is enabled, create the SMTP secret:

generic/openshift/single-region/procedure/create-webmodeler-secret.sh
loading...
note

Database and authentication secrets are automatically managed by the operators:

  • PostgreSQL credentials: Created by CloudNativePG via set-secrets.sh
  • Keycloak admin credentials (optional): Created by the Keycloak Operator
  • Elasticsearch credentials: Created by ECK
  • Identity secrets: Created by the operator-based deployment scripts

Only the SMTP password for Web Modeler needs to be created manually.

Once you've prepared the values.yml file with all overlays merged, run the following envsubst command to substitute the environment variables with their actual values:

generic/openshift/single-region/procedure/assemble-envsubst-values.sh
loading...

Install Camunda 8 using Helm

Now that the generated-values.yml is ready, you can install Camunda 8 using Helm.

The following are the required environment variables with some example values:

generic/openshift/single-region/procedure/chart-env.sh
loading...
  • CAMUNDA_NAMESPACE is the Kubernetes namespace where Camunda will be installed.
  • CAMUNDA_RELEASE_NAME is the name of the Helm release associated with this Camunda installation.

Then run the following command:

generic/openshift/single-region/procedure/install-chart.sh
loading...

This command:

  • Installs (or upgrades) the Camunda platform using the Helm chart.
  • Substitutes the appropriate version using the $CAMUNDA_HELM_CHART_VERSION environment variable.
  • Applies the configuration from generated-values.yml.
note

This guide uses helm upgrade --install because it installs the chart on the first run and upgrades it on subsequent runs, simplifying future Camunda 8 Helm upgrades and other component upgrades.

You can track the progress of the installation using the following command:

generic/kubernetes/single-region/procedure/check-deployment-ready.sh
loading...

Verify connectivity to Camunda 8

First, we need an OAuth client to be able to connect to the Camunda 8 cluster.

Generate an M2M token using Identity

Generate an M2M token by following the steps outlined in the Identity getting started guide, along with the incorporating applications documentation.

Below is a summary of the necessary instructions:

  1. Open Identity in your browser at https://${CAMUNDA_DOMAIN}/managementidentity. You will be redirected to your IdP and prompted to log in.
  2. Log in with the initial user admin. This username is defined by the identity.firstUser.username value in your Helm chart configuration. Retrieve the auto-generated password from the Kubernetes secret:
kubectl get secret identity-secret-for-components \
--namespace "$CAMUNDA_NAMESPACE" \
-o jsonpath='{.data.identity-first-user-password}' | base64 -d; echo
  3. Select Add application and select M2M as the type. Assign a name like "test."
  4. Select the newly created application. Then, select Access to APIs > Assign permissions, and select the Orchestration API with "read" and "write" permission.
  5. Retrieve the client-id and client-secret values from the application details:
export ZEEBE_CLIENT_ID='client-id' # retrieve the value from the identity page of your created m2m application
export ZEEBE_CLIENT_SECRET='client-secret' # retrieve the value from the identity page of your created m2m application
  6. Open the Orchestration Cluster Identity in your browser at https://${CAMUNDA_DOMAIN}/identity and log in with the user admin (defined in identity.firstUser of the values file).
  7. In the Identity navigation menu, select Roles.
  8. Either select an existing role (for example, Admin) or create a new role with the appropriate permissions for your use case.
  9. In the selected role view, open the Clients tab and click Assign client.
  10. Enter the client ID of your application created in Management Identity (for example, test) and click Assign client to save.

This operation links the OIDC client to the role's permissions in the Orchestration Cluster, granting the application access to the cluster resources. For more information about managing roles and clients, see Roles.
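The secret-decoding step earlier works because the Kubernetes API stores secret data base64-encoded; the decode can be sketched without a cluster (the payload below is a made-up example, not a real credential):

```shell
# Decode a base64 value the way `kubectl get secret ... | base64 -d` does.
ENCODED="c3VwZXItc2VjcmV0"   # example payload only
PASSWORD=$(printf '%s' "$ENCODED" | base64 -d)
echo "$PASSWORD"
# prints: super-secret
```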

Use the token

For a detailed guide on generating and using a token, consult the relevant documentation on authenticating with the Orchestration Cluster REST API.

Export the following environment variables:

generic/kubernetes/single-region/procedure/export-verify-zeebe-domain.sh
loading...

Generate a temporary token for the Orchestration Cluster REST API, capture the value of the access_token property, and store it (referred to as TOKEN here). Then use the token to query the Orchestration Cluster REST API and display the cluster topology:

generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology.sh
loading...

...and results in the following output:

Example output
generic/kubernetes/single-region/procedure/check-zeebe-cluster-topology-output.json
loading...
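Inside the script, the token capture boils down to pulling access_token out of the token endpoint's JSON response. A sketch with a mock response follows; the repository scripts use jq (listed in the requirements), sed is shown here only for illustration:

```shell
# Extract access_token from a token response (mock JSON, not a real token).
RESPONSE='{"access_token":"eyJhbGciOi.example","expires_in":300}'
TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$TOKEN"
# prints: eyJhbGciOi.example
```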

Pitfalls to avoid

For general deployment pitfalls, visit the deployment troubleshooting guide.

Persistent volume reclaim policy

OpenShift StorageClasses often default to a Delete reclaim policy, which means persistent volume data is permanently deleted when a PVC is removed. This can lead to complete and unrecoverable data loss for Orchestration Cluster brokers.

Ensure your StorageClass uses a Retain reclaim policy for production deployments. Verify your configuration:

oc get storageclass
# RECLAIMPOLICY should show "Retain", not "Delete"

For more details, see the production install guide.
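A Retain policy is set on the StorageClass when it is created (StorageClass fields are generally immutable afterward, so you may need to create a new class). A sketch follows; the name and the AWS EBS CSI provisioner are examples to substitute for your platform:

```yaml
# Example StorageClass with a Retain reclaim policy; the provisioner shown
# is for AWS EBS CSI and must match your platform.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-retain
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```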

Security Context Constraints (SCCs)

Security Context Constraints (SCCs) are a set of conditions that a pod must adhere to in order to be accepted into the system. They define the security conditions under which a pod operates.

Similar to how roles control user permissions, SCCs regulate the permissions of deployed applications, both at the pod and container level. It's generally recommended to deploy applications with the most restrictive SCCs possible. If you're unfamiliar with security context constraints, you can refer to the OpenShift documentation.

Restrictive SCCs

The following represents the most restrictive SCCs that can be used to deploy Camunda 8. Note that in OpenShift 4.10, these are equivalent to the built-in restricted SCCs (which are the default SCCs).

Allow Privileged: false
Default Add Capabilities: <none>
Required Drop Capabilities: KILL, MKNOD, SYS_CHROOT, SETUID, SETGID
Allowed Capabilities: <none>
Allowed Seccomp Profiles: <none>
Allowed Volume Types: configMap, downwardAPI, emptyDir, persistentVolumeClaim, projected, secret
Allow Host Network: false
Allow Host Ports: false
Allow Host PID: false
Allow Host IPC: false
Read Only Root Filesystem: false
Run As User Strategy: MustRunAsRange
SELinux Context Strategy: MustRunAs
FSGroup Strategy: MustRunAs
Supplemental Groups Strategy: RunAsAny

When using these SCCs, be sure not to specify any runAsUser or fsGroup values in either the pod or container security context. Instead, allow OpenShift to assign arbitrary IDs.

note

If you are providing the ID ranges yourself, you can also configure the runAsUser and fsGroup values accordingly.

The Camunda Helm chart can be deployed to OpenShift with a few modifications, primarily revolving around your desired security context constraints.

Pod permissions for writing logs

OpenShift security policies often restrict writing to files within containers. This can cause Camunda pods to fail when they attempt to write logs to files on the filesystem.

Instead, we configure the environment to output logs to stdout and stderr only, which are supported by OpenShift logging infrastructure.

For Camunda components (except Identity), this can be done by setting the environment variable in the chart values:

# For each component (zeebe, tasklist, operate, etc.):
env:
  - name: CAMUNDA_LOG_FILE_APPENDER_ENABLED
    value: "false"

This will disable the file appender and ensure logs are visible via the container's log output.