Version: 8.9 (unreleased)

Migrate from Bitnami subcharts

This section provides guidance for migrating your Camunda 8 Self-Managed infrastructure components from Bitnami subcharts to production-grade alternatives.

Target audience

This guide is for customers running Camunda 8 with Bitnami subcharts enabled. If your installation already uses external databases, managed services, or operator-managed infrastructure, you do not need to follow this migration.

Why migrate?

Bitnami subcharts (PostgreSQL, Elasticsearch, Keycloak) provided with the Camunda Helm chart are convenient for development and testing. However, for production environments, Camunda recommends using managed services or operator-based deployments:

  • End of open-source Bitnami images: Bitnami has archived open-source container images, requiring a transition to alternatives.
  • Production readiness: Operators and managed services offer automated failover, backup, monitoring, and security patching.
  • Vendor support: Operators and managed services offer dedicated support channels from infrastructure vendors (Elastic, CloudNativePG, Keycloak, AWS, Azure, GCP).
  • Long-term maintainability: Decoupling infrastructure lifecycle from the Camunda Helm chart ensures independent upgrade paths.

What gets migrated?

The migration covers all Bitnami-managed infrastructure components deployed as part of the Camunda Helm chart:

| Source (Bitnami subchart) | Data | Migration method |
|---|---|---|
| Bitnami PostgreSQL (Identity) | User data and authorizations | pg_dump / pg_restore |
| Bitnami PostgreSQL (Keycloak) | Realms, users, and clients | pg_dump / pg_restore |
| Bitnami PostgreSQL (Web Modeler) | Projects and diagrams | pg_dump / pg_restore |
| Bitnami Elasticsearch | Zeebe, Operate, Tasklist, and Optimize indices | Reindex from remote (_reindex API) |
| Bitnami Keycloak (StatefulSet) | Realms, users, and clients (via PostgreSQL) | Keycloak Operator CR replaces StatefulSet |
Camunda core components are not affected

The Camunda application components themselves (Zeebe, Operate, Tasklist, Optimize, Connectors, Identity, and Web Modeler) are not migrated; they're reconfigured via a Helm upgrade to use the new infrastructure backends. Your process instances, decisions, and forms remain intact.
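
The pg_dump / pg_restore method listed in the table above can be sketched roughly as follows. This is a minimal sketch, not the migration scripts themselves: the pod names, database names, and user are illustrative assumptions, and the script only prints the commands so you can review them before running.

```shell
#!/bin/bash
# Sketch: print the pg_dump / pg_restore commands for each Bitnami PostgreSQL
# database so they can be reviewed before running. All pod and database names
# are illustrative assumptions -- substitute your actual release names.
set -euo pipefail

SRC_POD="camunda-postgresql-0"   # assumed Bitnami PostgreSQL pod name
DST_POD="camunda-pg-1"           # assumed target (e.g. CloudNativePG) pod name

for DB in identity keycloak web-modeler; do
  # -Fc writes a compressed custom-format archive that pg_restore understands
  echo "kubectl exec $SRC_POD -- pg_dump -U postgres -Fc $DB > $DB.dump"
  # --clean --if-exists drops pre-existing objects before restoring
  echo "kubectl exec -i $DST_POD -- pg_restore -U postgres -d $DB --clean --if-exists < $DB.dump"
done
```

The custom-format archive (-Fc) is preferable to plain SQL dumps here because pg_restore can reorder and selectively restore objects from it.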

Migration phases

The migration follows a five-phase approach designed to minimize downtime:

| Phase | Downtime | Outcome |
|---|---|---|
| 1. Deploy targets | No planned downtime | Install operators and create the target infrastructure alongside Bitnami. |
| 2. Initial backup | No planned downtime | Back up data while the application is still running. |
| 3. Cutover | Maintenance window, typically 5–60 minutes (~5 minutes with warm reindex) | Freeze traffic, take a final backup, restore data, run the Helm upgrade, and resume. |
| 4. Validate | No planned downtime | Verify that all components are healthy on the new infrastructure. |
| 5. Cleanup Bitnami | No planned downtime | Remove old Bitnami StatefulSets, PVCs, and migration artifacts. |

Downtime estimation

| Elasticsearch data volume | Standard downtime | With warm reindex (ES_WARM_REINDEX=true) |
|---|---|---|
| < 1 GB | ~5 minutes | ~5 minutes |
| 1–10 GB | ~10–40 minutes | ~5 minutes |
| 10–50 GB | ~40 minutes–2 hours | ~5 minutes |
| > 50 GB | 2+ hours | ~5 minutes |

The main downtime driver is the Elasticsearch reindex duration. With the warm reindex strategy, Elasticsearch data is pre-copied during Phase 2 (no downtime), reducing Phase 3 to a fast delta sync. See downtime estimation for benchmarked timings.
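
The reindex-from-remote mechanism can be illustrated with the sketch below, which builds the request body for a single index and prints the curl call that would submit it. The service hostnames and the index name are illustrative assumptions; the actual migration scripts handle the full index list for you.

```shell
#!/bin/bash
# Sketch: build a reindex-from-remote request for one index and print the
# curl call that would submit it to the new cluster. Hostnames and the index
# name are illustrative assumptions.
set -euo pipefail

OLD_ES="http://camunda-elasticsearch:9200"   # assumed Bitnami service URL
NEW_ES="http://elasticsearch-new:9200"       # assumed target cluster URL
INDEX="operate-list-view"                    # one index; repeat for each index

# The old host must be reachable from the new cluster and allow-listed via
# reindex.remote.whitelist in the new cluster's elasticsearch.yml.
BODY="{\"source\":{\"remote\":{\"host\":\"$OLD_ES\"},\"index\":\"$INDEX\"},\"dest\":{\"index\":\"$INDEX\"}}"

# wait_for_completion=false returns a task ID so the copy runs in the background
echo "curl -X POST '$NEW_ES/_reindex?wait_for_completion=false' -H 'Content-Type: application/json' -d '$BODY'"
```

Running the copy as a background task (wait_for_completion=false) is what makes the warm strategy possible: the bulk of the data moves while the application is still up, and only the delta is synced during the maintenance window.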

Precautions (all paths)

Regardless of the migration path you choose, review the following precautions before starting the migration.

General precautions
  • Test in staging first: Run the full migration in a non-production environment before migrating production.
  • Schedule a maintenance window: All migration paths (except zero-downtime) require a downtime window during cutover.
  • Check cluster capacity: During the migration, both old and new infrastructure run simultaneously, requiring additional CPU, memory, and storage.
  • Back up your Helm values: Take a manual backup before starting: helm get values camunda -n camunda > backup-values.yaml.
  • DNS TTL: If using a domain for Keycloak, ensure DNS TTL is low before cutover to minimize propagation delay.
  • Keycloak OIDC impact: Keycloak is the OIDC provider for all Camunda components (and possibly external applications). Migrating Keycloak changes the underlying service. If you use a DNS CNAME for Keycloak, plan to update the DNS target to the new Keycloak service after cutover. If external applications share the same Keycloak realm, coordinate the DNS switch with their teams.
  • Session impact: The database migration preserves all persistent data (realms, users, clients, signing keys, and refresh tokens). Since Keycloak 25+, user sessions are persisted in the database and survive the switch. In-flight authentication flows (login pages in progress) and pending action tokens (password reset links) are lost; users simply need to retry. This is inherent to the downtime window and has no lasting effect.
  • Dual-region Elasticsearch: There is currently no dedicated migration procedure for dual-region setups. This applies only to installations upgrading from Camunda 8.8, which was the last version to include Bitnami Elasticsearch as a default subchart. If you need to perform this migration in a dual-region environment, follow the single-region migration procedure and apply it individually to each region.

Choose your migration target

Depending on your infrastructure capabilities and organizational requirements, choose one of the following migration paths:

| Scenario | Recommended path | Guide |
|---|---|---|
| You want production-grade, self-managed infrastructure in Kubernetes with operator lifecycle management. | Kubernetes operators (CloudNativePG, ECK, Keycloak Operator) | Migrate to Kubernetes operators |
| You prefer fully managed infrastructure from your cloud provider with minimal operational overhead. | Managed services (AWS RDS, Elastic Cloud, Azure Database for PostgreSQL, etc.) | Migrate to managed services |
| You cannot use operators or managed services, or require full control (VMs, bare-metal, Docker Compose). | Manual deployment | Advanced alternatives |
| Your SLA does not allow any maintenance window. | Zero-downtime migration (logical replication, CCR) | Zero-downtime migration |

Prerequisites (all paths)

Regardless of your chosen migration target, ensure the following:

  • A running Camunda 8 installation using the Helm chart with Bitnami subcharts enabled
  • kubectl configured and pointing to your cluster
  • helm with the camunda/camunda-platform repository added
  • Sufficient cluster resources to temporarily run both old and new infrastructure side-by-side
  • A tested backup of your current installation (see Precautions)
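
A minimal pre-flight sketch for the checklist above is shown below. It only prints the commands to review and run by hand; the release name and namespace (both "camunda" here) are assumptions, so adjust them to your installation.

```shell
#!/bin/bash
# Sketch: print pre-flight commands for the prerequisites above. The release
# name and namespace ("camunda") are assumptions -- adjust to your setup.
set -euo pipefail

RELEASE="camunda"
NS="camunda"

cat <<EOF
kubectl config current-context
helm repo add camunda https://helm.camunda.io
helm repo update
helm get values $RELEASE -n $NS > backup-values.yaml
kubectl get pods -n $NS
EOF
```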

Plan authentication and service access up front

All migration paths require an explicit decision for authentication and connectivity:

  • If you keep Keycloak, plan for a Keycloak Operator deployment, and configure the hostname with the full external URL, for example https://your-domain.example.com/auth.
  • If you replace Keycloak with an external OIDC provider, complete that design before cutting over because Identity configuration changes are part of the migration.
  • If your PostgreSQL or Elasticsearch access depends on cloud-specific IAM authentication such as AWS IRSA, the provided migration jobs are not sufficient, and you need a custom migration workflow.

Migration guides

Advanced usage

Migration hooks

The migration scripts support hooks — custom shell scripts that run before or after each migration phase. Place executable scripts in the hooks/ directory of the migration repository:

| Hook | Trigger |
|---|---|
| pre-phase-1.sh | Before deploying target infrastructure |
| post-phase-1.sh | After target infrastructure is deployed |
| pre-phase-2.sh | Before initial backup |
| post-phase-2.sh | After initial backup |
| pre-phase-3.sh | Before cutover (before freeze) |
| post-phase-3.sh | After cutover is complete |
| pre-phase-4.sh | Before validation |
| post-phase-4.sh | After validation |
| pre-phase-5.sh | Before Bitnami cleanup |
| post-phase-5.sh | After Bitnami cleanup |
| pre-rollback.sh | Before rollback |
| post-rollback.sh | After rollback |

For example, send a Slack notification before cutover:

#!/bin/bash
# hooks/pre-phase-3.sh
curl -X POST "$SLACK_WEBHOOK" \
  -H 'Content-Type: application/json' \
  -d '{"text":"⚠️ Camunda migration cutover starting — downtime expected"}'
note

Hook scripts are sourced (not forked), so they have access to all library functions and variables. A failing hook aborts the migration (due to set -e). Add || true to make a hook best-effort.

Typical hook use cases:

  • Pause external consumers before Phase 3 and resume them after validation.
  • Send change-management or on-call notifications at the start and end of cutover.
  • Run smoke tests after Phase 3 or Phase 4, and fail the migration if a critical endpoint is unavailable.
  • Update DNS or Ingress records for Keycloak after the new service becomes active.
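
The smoke-test use case above could be sketched as a best-effort post-validation hook. The health endpoint is an assumption (point it at your own service or Ingress), and the if/else wrapper keeps a failed check from aborting the migration under set -e, equivalent to appending || true.

```shell
#!/bin/bash
# hooks/post-phase-4.sh -- sketch of a best-effort smoke test after validation.
# The health endpoint is an assumption; adjust it to your own Ingress/service.
HEALTH_URL="${OPERATE_URL:-http://camunda-operate:8080}/actuator/health"

# Hooks are sourced under set -e, so an unguarded failing command would abort
# the migration; wrapping the check in if/else keeps this hook best-effort.
if curl -fsS --max-time 10 "$HEALTH_URL" > /dev/null 2>&1; then
  echo "smoke test passed: $HEALTH_URL"
else
  echo "smoke test FAILED: $HEALTH_URL (continuing anyway)"
fi
```

To make a smoke test blocking instead (failing the migration on an unhealthy endpoint), drop the else branch and let the curl failure propagate.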