Advanced migration alternatives
This guide covers advanced migration alternatives for organizations that cannot use Kubernetes operators or managed services for their infrastructure components. These approaches require more manual effort but provide full control over the deployment.
The approaches described here are not automated via the migration scripts and require significant manual configuration and operational expertise. For most deployments, we recommend using either Kubernetes operators or managed services.
When to use this guide
Consider these alternatives if:
- Your organization doesn't allow operator installations in the cluster—for example, due to security or compliance constraints.
- You're running on bare-metal infrastructure without managed service access.
- You need to migrate to an existing database infrastructure, like a shared PostgreSQL cluster managed by a DBA team.
- You're running Camunda outside of Kubernetes—for example, using Docker Compose or VM-based deployments.
Read the topic overview to learn why you should migrate.
This page intentionally avoids prescribing full installation commands for PostgreSQL, Elasticsearch, or Keycloak on custom targets, such as standalone StatefulSets, VMs, or bare metal. Use the official documentation for the distribution you operate, and use this page only for the Camunda-specific migration flow and Helm wiring.
Prerequisites
Before starting the migration, ensure you have the following general prerequisites:
- A running Camunda 8 installation using the Helm chart with Bitnami subcharts enabled
- `kubectl` configured and pointing to your cluster
- `helm` with the `camunda/camunda-platform` repository added
- Sufficient cluster resources to temporarily run both old and new infrastructure side-by-side
- A tested backup of your current installation (see Precautions)
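A quick way to confirm these prerequisites, assuming `NAMESPACE` holds your Camunda namespace (adjust names to your environment):

```shell
# Sanity-check the prerequisites before starting (all read-only commands).
kubectl config current-context      # confirm kubectl points at the right cluster
helm repo list | grep camunda       # confirm the camunda Helm repository is added
helm list -n "${NAMESPACE}"         # confirm the existing Camunda release is present
kubectl get pvc -n "${NAMESPACE}"   # review the Bitnami PVCs that hold current data
```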
Precautions
Review the general precautions that apply to all migration paths.
Review the operational readiness checklist, including the staging rehearsal and pre-migration checklist, before starting a production migration.
Option 1: Manually-deployed PostgreSQL and Elasticsearch on Kubernetes
If you can't install the CloudNativePG (CNPG) or Elastic Cloud on Kubernetes (ECK) operators, but still run on Kubernetes, provision PostgreSQL and Elasticsearch using your platform-standard manifests or the official product documentation for the distributions you operate.
Before cutover, ensure the target platform provides the following:
- A stable PostgreSQL endpoint reachable from the Camunda namespace.
- A stable Elasticsearch endpoint reachable from the Camunda namespace.
- Persistent storage sized for the current data set and expected growth.
- Databases and users for `identity`, `keycloak`, and `webmodeler`.
- Credentials stored in Kubernetes Secrets for the migration jobs and the Helm upgrade.
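As an illustration, the databases, users, and Secrets could be prepared as follows; all names and passwords below are placeholders and must match the Helm values you apply later:

```shell
# Placeholder example: create databases and users on the target PostgreSQL.
psql -h <postgres-host> -U postgres <<'SQL'
CREATE USER identity   WITH PASSWORD 'changeme-identity';
CREATE USER keycloak   WITH PASSWORD 'changeme-keycloak';
CREATE USER webmodeler WITH PASSWORD 'changeme-webmodeler';
CREATE DATABASE identity   OWNER identity;
CREATE DATABASE keycloak   OWNER keycloak;
CREATE DATABASE webmodeler OWNER webmodeler;
SQL

# Store the passwords in Kubernetes Secrets for the Helm upgrade to reference.
kubectl create secret generic external-pg-identity \
  -n "${NAMESPACE}" --from-literal=password='changeme-identity'
kubectl create secret generic external-pg-webmodeler \
  -n "${NAMESPACE}" --from-literal=password='changeme-webmodeler'
```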
Once the targets exist, the migration flow stays the same:
- Freeze Camunda during the final cutover window.
- Migrate PostgreSQL with `pg_dump` and `pg_restore` (see PostgreSQL migration flags).
- Migrate Elasticsearch with the method that fits your target (see the Elasticsearch migration decision matrix).
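A minimal sketch of the PostgreSQL step for one database, assuming the Bitnami PostgreSQL pod name and credentials of your release (repeat per database):

```shell
# Dump one database from the Bitnami pod in custom format (-Fc)...
kubectl exec -n "${NAMESPACE}" <bitnami-postgresql-pod> -- \
  pg_dump -U identity -Fc identity > identity.dump

# ...then restore it into the new endpoint.
pg_restore --clean --if-exists --no-owner --no-privileges \
  -h <postgres-host> -U identity -d identity identity.dump
```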
Reconfigure Helm
After you migrate the data, update your Helm values to point to the external endpoints:
```yaml
# Disable Bitnami subcharts
identityPostgresql:
  enabled: false
webModelerPostgresql:
  enabled: false
elasticsearch:
  enabled: false
identityKeycloak:
  enabled: false

# Point components at the external endpoints
identity:
  externalDatabase:
    host: "<postgres-host>"
    port: 5432
    database: "identity"
    username: "identity"
    existingSecret: "external-pg-identity"
    existingSecretPasswordKey: "password"

webModeler:
  restapi:
    externalDatabase:
      host: "<postgres-host>"
      port: 5432
      database: "webmodeler"
      user: "webmodeler"
      existingSecret: "external-pg-webmodeler"
      existingSecretPasswordKey: "password"

orchestration:
  data:
    secondaryStorage:
      type: elasticsearch
      elasticsearch:
        url: https://<elasticsearch-host>:9200

optimize:
  database:
    elasticsearch:
      enabled: true
      external: true
      url:
        protocol: https
        host: <elasticsearch-host>
        port: 9200
```
Finally, run `helm upgrade` to switch Camunda to the new endpoints:

```shell
helm upgrade ${CAMUNDA_RELEASE_NAME} camunda/camunda-platform \
  -n ${NAMESPACE} \
  --version ${CAMUNDA_HELM_CHART_VERSION} \
  -f your-custom-values.yaml
```
Option 2: VM-based PostgreSQL and Elasticsearch
If your infrastructure runs on virtual machines (VMs) or bare-metal servers, treat PostgreSQL and Elasticsearch provisioning as a separate platform task, and follow the official product documentation:
- PostgreSQL documentation for installation, remote access, backup/restore tooling, and hardening.
- Elasticsearch documentation for installation, cluster topology, TLS, and operations.
Before migration, make sure you have:
- VM endpoints or DNS names reachable from Kubernetes.
- Firewall and TLS settings validated from the cluster to the target hosts.
- Databases, users, and credentials created for the Camunda components.
- A staging rehearsal showing that `pg_restore` and your chosen Elasticsearch migration method work against those endpoints.
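These checks can be scripted from inside the cluster; the container images below are examples, so substitute images approved in your environment:

```shell
# Verify PostgreSQL reachability from the Camunda namespace.
kubectl run pg-check --rm -i --restart=Never -n "${NAMESPACE}" \
  --image=postgres:16 --env=PGPASSWORD='<password>' -- \
  psql -h <postgres-host> -U identity -d identity -c 'SELECT 1;'

# Verify Elasticsearch reachability and cluster health.
kubectl run es-check --rm -i --restart=Never -n "${NAMESPACE}" \
  --image=curlimages/curl -- \
  curl -sk https://<elasticsearch-host>:9200/_cluster/health
```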
Once the services are ready, reconfigure Helm as described in Option 1, replacing the hosts with your VM or bare-metal addresses. For the data migration, use the approaches described in the data migration approaches summary.
Option 3: Docker Compose deployment
If you're targeting Docker Compose, use this guide for the migration workflow, and use the dedicated Docker Compose assets as the source of truth:
- Follow the local Docker Compose quickstart for the supported setup and runtime behavior.
- Use the maintained Compose assets in camunda-distributions/docker-compose instead of copying an embedded example from this page.
You still need to migrate PostgreSQL and Elasticsearch data separately, using the same approaches described in the data migration approaches summary.
Docker Compose deployments are suitable for development and testing only. For production environments, use Kubernetes operators or managed services.
Data migration approaches summary
Regardless of the target infrastructure, the data migration approach remains the same:
| Component | Method | Tools |
|---|---|---|
| PostgreSQL | Dump and restore | pg_dump / pg_restore (custom format) |
| Elasticsearch | Snapshot/restore, reindex, or elasticdump | Elasticsearch Snapshot API, elasticdump, Reindex API |
| Keycloak | Via PostgreSQL data migration | No separate migration needed |
PostgreSQL migration flags
When you migrate your PostgreSQL data, use these flags:
```shell
pg_restore \
  --clean           # Drop objects before recreating
  --if-exists       # Don't error if objects don't exist
  --no-owner        # Don't set ownership (avoids permission issues)
  --no-privileges   # Don't restore privilege assignments
  -d <database>     # Target database
  <dump-file>
```
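These flags assume the dump was taken in `pg_restore`-readable custom format. A matching `pg_dump` invocation looks like this (hosts and names are placeholders):

```shell
# Custom format (-Fc) is required for pg_restore; plain SQL dumps
# are replayed with psql instead.
pg_dump -h <source-host> -U <user> -Fc -f <dump-file> <database>
```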
Elasticsearch migration decision matrix
| Scenario | Recommended method |
|---|---|
| Target accessible from Kubernetes + shared storage possible | Filesystem snapshot/restore |
| Target accessible from Kubernetes + no shared storage | elasticdump or S3 snapshot repository |
| Target not accessible from Kubernetes | S3 snapshot repository |
| Large datasets (> 50 GB) | Snapshot/restore (fastest method) |
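As an example of the snapshot/restore path via an S3 repository (bucket and repository names are placeholders; S3 repository support must be available on both clusters):

```shell
# Register the repository on the source cluster and take a snapshot.
curl -X PUT "https://<source-es>:9200/_snapshot/migration" \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "<bucket-name>"}}'
curl -X PUT "https://<source-es>:9200/_snapshot/migration/snap-1?wait_for_completion=true"

# Register the same repository on the target cluster, then restore.
curl -X PUT "https://<target-es>:9200/_snapshot/migration" \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "<bucket-name>"}}'
curl -X POST "https://<target-es>:9200/_snapshot/migration/snap-1/_restore?wait_for_completion=true"
```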
Keycloak considerations
Regardless of the infrastructure target, Keycloak migration always involves migrating its PostgreSQL database. After the data migration:
- If using the Keycloak Operator (recommended): Deploy a Keycloak Custom Resource pointing to the migrated PostgreSQL database.
- If using an external OIDC provider: Configure Camunda to use the external provider via external OIDC provider. You can then decommission Keycloak entirely.
- If using a standalone Keycloak instance (VM or Docker): Point it to the migrated PostgreSQL database and update the Camunda Helm values to reference the external Keycloak URL.
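For the standalone case, the Helm wiring might look like the following sketch; the exact keys vary between chart versions, so verify them against your chart's values reference:

```yaml
# Sketch only: disable the Keycloak subchart and point Identity at an
# external Keycloak instance (host and protocol are placeholders).
identityKeycloak:
  enabled: false
global:
  identity:
    keycloak:
      url:
        protocol: https
        host: <keycloak-host>
        port: 443
```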
Operational readiness
Before running any of the alternative migration approaches in production, follow these steps to minimize risk.
Staging rehearsal
- Replicate your production environment in a staging/test cluster, including the target infrastructure (standalone StatefulSets, VMs, Docker Compose, etc.).
- Run the full migration end to end using the chosen approach (manual StatefulSets, VMs, or Docker).
- Measure actual timings: since alternative deployments vary widely, timing data from staging is critical for setting maintenance windows.
- Test the failback path: verify you can roll back by restoring the original Helm values and reconnecting to the Bitnami subcharts.
For VM-based or Docker Compose targets, include network connectivity testing (firewall rules, DNS resolution from Kubernetes to external hosts) as part of the rehearsal.
Production dry-run
Create a step-by-step runbook and walk through it without executing destructive commands. Document each command and expected output.
For inspiration, review the backup and cutover migration scripts used by the automated paths. They illustrate the sequence of operations and safety checks you should replicate in your runbook.
Pre-migration checklist
- Verify target connectivity: confirm the Kubernetes cluster can reach the target infrastructure (VMs, external databases). Test with `curl`, `psql`, or `kubectl exec` from within the cluster.
- Notify stakeholders: announce the maintenance window.
- Verify backups: ensure you have a recent backup from your existing backup strategy, independent of the migration scripts.
- Document the runbook: for manual migrations, have a written, step-by-step runbook reviewed by a second team member.
- Prepare rollback commands: pre-write the `helm upgrade` command needed to revert to Bitnami subcharts.
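For example, assuming your pre-migration values file is still available (the filename is a placeholder):

```shell
# Pre-written rollback: re-apply the values file that still enables the
# Bitnami subcharts. Keep this command in the runbook before the cutover.
helm upgrade "${CAMUNDA_RELEASE_NAME}" camunda/camunda-platform \
  -n "${NAMESPACE}" \
  --version "${CAMUNDA_HELM_CHART_VERSION}" \
  -f pre-migration-values.yaml
```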
Failback procedure
- Helm rollback: revert the Helm values to use Bitnami subcharts again. Since the Bitnami PVCs still exist (they are not deleted during migration), data is intact.
- If Bitnami PVCs are deleted: restore from your independent backup or from the `pg_dump` files created during migration.
Once you delete the old Bitnami PVCs (during post-migration cleanup), rollback is no longer trivial. Keep the old resources until your team has observed the system under production load through at least one full business cycle (for example, a complete weekday with peak traffic). Only proceed with cleanup once you are confident the new infrastructure is stable.
Data safety measures
- Always create `pg_dump` backups before any data migration, regardless of the target infrastructure.
- Store backup files outside the cluster (cloud storage bucket, NFS share) for redundancy.
- The same `pg_restore` flags (`--clean --if-exists --no-owner --no-privileges`) apply to all targets and make the restore safe to re-run.
- Keep the old Bitnami infrastructure running in read-only mode, if possible, for several days as a safety net.
Post-migration monitoring
After completing the migration, monitor for at least 48 hours:
- Pod restarts: `kubectl get pods -n ${NAMESPACE} --watch`
- Target database health: monitor connection counts, replication status (if using replicas), and storage usage.
- Camunda component logs: look for connection errors, authentication failures, or data inconsistencies.
- Process instance completion: verify that in-flight process instances continue to execute correctly.
- External connectivity stability: for VM or Docker targets, monitor network latency and connection drops between Kubernetes and the external infrastructure.
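A few illustrative commands for the checks above (the grep patterns are examples, not an exhaustive health check):

```shell
# Watch for pod restarts.
kubectl get pods -n "${NAMESPACE}" --watch

# Scan recent logs of every pod in the namespace for connection or auth errors.
for pod in $(kubectl get pods -n "${NAMESPACE}" -o name); do
  echo "== ${pod}"
  kubectl logs -n "${NAMESPACE}" "${pod}" --tail=200 \
    | grep -iE 'connection refused|timeout|authentication' || true
done
```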