# Migrate from Bitnami subcharts
This section provides guidance for migrating your Camunda 8 Self-Managed infrastructure components from Bitnami subcharts to production-grade alternatives.
This guide is for customers running Camunda 8 with Bitnami subcharts enabled. If your installation already uses external databases, managed services, or operator-managed infrastructure, you do not need to follow this migration.
## Why migrate?
Bitnami subcharts (PostgreSQL, Elasticsearch, Keycloak) provided with the Camunda Helm chart are convenient for development and testing. However, for production environments, Camunda recommends using managed services or operator-based deployments:
- End of open-source Bitnami images: Bitnami has archived open-source container images, requiring a transition to alternatives.
- Production readiness: Operators and managed services offer automated failover, backup, monitoring, and security patching.
- Vendor support: Operators and managed services offer dedicated support channels from infrastructure vendors (Elastic, CloudNativePG, Keycloak, AWS, Azure, GCP).
- Long-term maintainability: Decoupling infrastructure lifecycle from the Camunda Helm chart ensures independent upgrade paths.
## What gets migrated?
The migration covers all Bitnami-managed infrastructure components deployed as part of the Camunda Helm chart:
| Source (Bitnami subchart) | Data | Migration method |
|---|---|---|
| Bitnami PostgreSQL (Identity) | User data and authorizations | `pg_dump` / `pg_restore` |
| Bitnami PostgreSQL (Keycloak) | Realms, users, and clients | `pg_dump` / `pg_restore` |
| Bitnami PostgreSQL (Web Modeler) | Projects and diagrams | `pg_dump` / `pg_restore` |
| Bitnami Elasticsearch | Zeebe, Operate, Tasklist, and Optimize indices | Reindex from remote (`_reindex` API) |
| Bitnami Keycloak (StatefulSet) | Realms, users, and clients (via PostgreSQL) | Keycloak Operator CR replaces StatefulSet |
The Camunda application components themselves (Zeebe, Operate, Tasklist, Optimize, Connectors, Identity, and Web Modeler) are not migrated; they're reconfigured via a Helm upgrade to use the new infrastructure backends. Your process instances, decisions, and forms remain intact.
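The reconfiguration step boils down to disabling the Bitnami subcharts and pointing the applications at the new backends in your Helm values. A hedged sketch of what that change can look like (the exact value keys vary between chart versions, and the service name `es-target-es-http` is a placeholder for an operator-managed Elasticsearch service; consult your chart version's values reference):

```yaml
# Illustrative only: key names depend on your Camunda Helm chart version.
# Disable the Bitnami-managed infrastructure subcharts...
elasticsearch:
  enabled: false
identityKeycloak:
  enabled: false
identityPostgresql:
  enabled: false

# ...and point the applications at the migrated infrastructure instead.
global:
  elasticsearch:
    external: true
    url:
      protocol: http
      host: es-target-es-http   # placeholder: e.g. an ECK-managed service
      port: 9200
```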
## Migration phases
The migration follows a five-phase approach designed to minimize downtime:
| Phase | Downtime | Outcome |
|---|---|---|
| 1. Deploy targets | No planned downtime | Install operators and create the target infrastructure alongside Bitnami. |
| 2. Initial backup | No planned downtime | Back up data while the application is still running. |
| 3. Cutover | Maintenance window: typically 5–60 minutes, or ~5 minutes with warm reindex | Freeze traffic, take a final backup, restore data, run the Helm upgrade, and resume. |
| 4. Validate | No planned downtime | Verify that all components are healthy on the new infrastructure. |
| 5. Cleanup Bitnami | No planned downtime | Remove old Bitnami StatefulSets, PVCs, and migration artifacts. |
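For the PostgreSQL databases, the backup (Phase 2/3) and restore (Phase 3) steps rely on standard `pg_dump` and `pg_restore`. A minimal sketch of the commands involved, with the host, user, and database names as placeholders; the commands are only echoed here so the sketch can be reviewed safely before running anything for real:

```shell
#!/bin/sh
# Placeholders: adjust to your release's service names and credentials.
SRC_HOST="camunda-postgresql"   # assumed Bitnami PostgreSQL service name
DST_HOST="pg-target-rw"         # assumed target, e.g. a CloudNativePG service
DB="identity"

# Custom format (-Fc) keeps the restore flexible (parallel, selective).
dump_cmd="pg_dump -Fc -h $SRC_HOST -U $DB -d $DB -f /backup/$DB.dump"
# --clean --if-exists drops existing objects before recreating them.
restore_cmd="pg_restore --clean --if-exists -h $DST_HOST -U $DB -d $DB /backup/$DB.dump"

# Echoed rather than executed so this sketch is safe to run as-is:
echo "would run: $dump_cmd"
echo "would run: $restore_cmd"
```

Repeat the pair once per database (Identity, Keycloak, Web Modeler), taking the final dump only after traffic is frozen in Phase 3.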
### Downtime estimation
| Elasticsearch data volume | Standard downtime | With warm reindex (`ES_WARM_REINDEX=true`) |
|---|---|---|
| < 1 GB | ~5 minutes | ~5 minutes |
| 1–10 GB | ~10–40 minutes | ~5 minutes |
| 10–50 GB | ~40 minutes–2 hours | ~5 minutes |
| > 50 GB | 2+ hours | ~5 minutes |
The main downtime driver is the Elasticsearch reindex duration. With the warm reindex strategy, Elasticsearch data is pre-copied during Phase 2 (no downtime), reducing Phase 3 to a fast delta sync. See downtime estimation for benchmarked timings.
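The reindex itself uses Elasticsearch's reindex-from-remote capability. A hedged sketch of the request (the index pattern and source host are placeholders; note that reindex from remote also requires the source host to be allow-listed in the target cluster's `reindex.remote.whitelist` setting):

```
POST _reindex?wait_for_completion=false
{
  "source": {
    "remote": { "host": "http://camunda-elasticsearch:9200" },
    "index": "operate-*"
  },
  "dest": { "index": "operate-*" }
}
```

With the warm reindex strategy, a request like this runs against live data in Phase 2, and Phase 3 only re-copies documents written since then.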
## Precautions (all paths)

Regardless of the migration path you choose, review the following precautions before starting the migration.

### General precautions
- Test in staging first: Run the full migration in a non-production environment before migrating production.
- Schedule a maintenance window: All migration paths (except zero-downtime) require a downtime window during cutover.
- Check cluster capacity: During the migration, both old and new infrastructure run simultaneously, requiring additional CPU, memory, and storage.
- Backup your Helm values: Consider a manual backup before starting: `helm get values camunda -n camunda > backup-values.yaml`.
- DNS TTL: If using a domain for Keycloak, ensure DNS TTL is low before cutover to minimize propagation delay.
- Keycloak OIDC impact: Keycloak is the OIDC provider for all Camunda components (and possibly external applications). Migrating Keycloak changes the underlying service. If you use a DNS CNAME for Keycloak, plan to update the DNS target to the new Keycloak service after cutover. If external applications share the same Keycloak realm, coordinate the DNS switch with their teams.
- Session impact: The database migration preserves all persistent data (realms, users, clients, signing keys, and refresh tokens). Since Keycloak 25+, user sessions are persisted in the database and survive the switch. In-flight authentication flows (login pages in progress) and pending action tokens (password reset links) are lost; users simply need to retry. This is inherent to the downtime window and has no lasting effect.
- Dual-region Elasticsearch: There is currently no dedicated migration procedure for dual-region setups. This applies only to installations upgrading from Camunda 8.8, which was the last version to include Bitnami Elasticsearch as a default subchart. If you need to perform this migration in a dual-region environment, follow the single-region migration procedure and apply it individually to each region.
## Choose your migration target
Depending on your infrastructure capabilities and organizational requirements, choose one of the following migration paths:
| Scenario | Recommended path | Guide |
|---|---|---|
| You want production-grade, self-managed infrastructure in Kubernetes with operator lifecycle management. | Kubernetes operators (CloudNativePG, ECK, Keycloak Operator) | Migrate to Kubernetes operators |
| You prefer fully managed infrastructure from your cloud provider with minimal operational overhead. | Managed services (AWS RDS, Elastic Cloud, Azure Database for PostgreSQL, etc.) | Migrate to managed services |
| You cannot use operators or managed services, or require full control (VMs, bare-metal, Docker Compose). | Manual deployment | Advanced alternatives |
| Your SLA does not allow any maintenance window. | Zero-downtime migration (logical replication, CCR) | Zero-downtime migration |
## Prerequisites (all paths)
Regardless of your chosen migration target, ensure the following:
- A running Camunda 8 installation using the Helm chart with Bitnami subcharts enabled
- `kubectl` configured and pointing to your cluster
- `helm` with the `camunda/camunda-platform` repository added
- Sufficient cluster resources to temporarily run both old and new infrastructure side-by-side
- A tested backup of your current installation (see Precautions)
All migration paths require an explicit decision for authentication and connectivity:
- If you keep Keycloak, plan for a Keycloak Operator deployment, and configure the hostname with the full external URL, for example `https://your-domain.example.com/auth`.
- If you replace Keycloak with an external OIDC provider, complete that design before cutting over, because Identity configuration changes are part of the migration.
- If your PostgreSQL or Elasticsearch access depends on cloud-specific IAM authentication such as AWS IRSA, the provided migration jobs are not sufficient, and you need a custom migration workflow.
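The tooling prerequisites above can be checked with a short preflight script. This is a minimal sketch, assuming only that `kubectl` and `helm` are the required CLIs; a real preflight would additionally verify cluster access and that the `camunda/camunda-platform` repository is registered (`helm repo list`):

```shell
#!/bin/sh
# Illustrative preflight: verify the CLI tools this guide assumes are present.
missing=0
for tool in kubectl helm; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "missing required tool: $tool"
    missing=1
  fi
done
echo "preflight finished (missing=$missing)"
```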
## Migration guides

### Migrate to Kubernetes operators

Step-by-step guide to migrate Camunda 8 Self-Managed infrastructure from Bitnami subcharts to CloudNativePG, ECK Elasticsearch, and Keycloak Operator.

### Migrate to managed services

Migrate Camunda 8 Self-Managed infrastructure from Bitnami subcharts to cloud-managed services such as AWS RDS, managed Elasticsearch, Azure Database for PostgreSQL, and similar.

### Advanced alternatives

Alternative migration paths for Camunda 8 Self-Managed when Kubernetes operators or managed services are not available, including VM-based, bare-metal, and Docker Compose deployments.

### Zero-downtime migration (advanced)

Advanced guide for migrating Camunda 8 Self-Managed infrastructure from Bitnami subcharts to Kubernetes operators or managed services with zero downtime using real-time data replication.
## Advanced usage

### Migration hooks

The migration scripts support hooks: custom shell scripts that run before or after each migration phase. Place executable scripts in the `hooks/` directory of the migration repository:
| Hook | Trigger |
|---|---|
| `pre-phase-1.sh` | Before deploying target infrastructure |
| `post-phase-1.sh` | After target infrastructure is deployed |
| `pre-phase-2.sh` | Before initial backup |
| `post-phase-2.sh` | After initial backup |
| `pre-phase-3.sh` | Before cutover (before freeze) |
| `post-phase-3.sh` | After cutover is complete |
| `pre-phase-4.sh` | Before validation |
| `post-phase-4.sh` | After validation |
| `pre-phase-5.sh` | Before Bitnami cleanup |
| `post-phase-5.sh` | After Bitnami cleanup |
| `pre-rollback.sh` | Before rollback |
| `post-rollback.sh` | After rollback |
For example, send a Slack notification before cutover:
```bash
#!/bin/bash
# hooks/pre-phase-3.sh
curl -X POST "$SLACK_WEBHOOK" \
  -H 'Content-Type: application/json' \
  -d '{"text":"⚠️ Camunda migration cutover starting — downtime expected"}'
```
Hook scripts are sourced (not forked), so they have access to all library functions and variables. A failing hook aborts the migration (due to `set -e`). Add `|| true` to make a hook best-effort.
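For instance, a best-effort notification hook that never aborts the migration could look like this (the `post-phase-4.sh` filename matches the table above; the webhook URL is assumed to come from the environment):

```shell
#!/bin/sh
# hooks/post-phase-4.sh (hypothetical): best-effort, never fails the migration
notify() {
  # -f makes curl fail on HTTP errors; || true swallows any failure,
  # so the hook cannot abort the migration even under set -e.
  curl -fsS -X POST "${SLACK_WEBHOOK:-}" \
    -H 'Content-Type: application/json' \
    -d "{\"text\":\"$1\"}" >/dev/null 2>&1 || true
}
notify "Camunda migration: validation phase complete"
```

Because every failure path ends in `|| true`, a missing webhook URL or an unreachable Slack endpoint leaves the migration untouched.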
Typical hook use cases:
- Pause external consumers before Phase 3 and resume them after validation.
- Send change-management or on-call notifications at the start and end of cutover.
- Run smoke tests after Phase 3 or Phase 4, and fail the migration if a critical endpoint is unavailable.
- Update DNS or Ingress records for Keycloak after the new service becomes active.