Version: 8.9 (unreleased)

Configure RDBMS in Helm chart

Camunda 8 Self-Managed supports using an external relational database (RDBMS) as the Orchestration Cluster's secondary storage instead of Elasticsearch or OpenSearch.

This page explains how to configure a relational database as secondary storage in the Camunda Helm chart: prerequisites, Helm configuration parameters, an example values file, and pointers to JDBC driver management, schema management, troubleshooting, and known limitations.

Prerequisites

Provide a supported relational database that is reachable by the Camunda components.

See the RDBMS support policy for the complete list of supported databases and versions.

Ensure that:

  • Your network allows traffic from Camunda pods to the database.
  • Required JDBC parameters (SSL/TLS, authentication, failover) are configured as needed.
  • The database user has permissions to create and modify schema objects if autoDDL is enabled.
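
If autoDDL is enabled, the database user needs rights to create and modify schema objects. As a minimal sketch for PostgreSQL (database name, user name, and password are placeholders to adapt to your environment):

```sql
-- Illustrative PostgreSQL setup; names and password are placeholders.
CREATE DATABASE camunda;
CREATE USER camunda WITH PASSWORD 'change-me';
-- Owning the database (or having CREATE on the target schema) lets Liquibase
-- create and modify tables when autoDDL is enabled.
GRANT ALL PRIVILEGES ON DATABASE camunda TO camunda;
ALTER DATABASE camunda OWNER TO camunda;
```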

For a short checklist and troubleshooting steps you can run after configuring the database, see validate RDBMS connectivity (Helm).

Configuration

Connection parameters (required)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `orchestration.data.secondaryStorage.type` | string | `""` | Must be `rdbms` to use a relational database. |
| `orchestration.data.secondaryStorage.rdbms.url` | string | `""` | JDBC connection URL for the database. |
| `orchestration.data.secondaryStorage.rdbms.username` | string | `""` | Username for database authentication. |
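
The `url` value is a standard vendor JDBC URL, and driver options such as TLS are passed as URL parameters. As an illustration for PostgreSQL (host and database name are placeholders; `sslmode` is a PostgreSQL JDBC driver option shown only as an example of passing TLS settings through the URL):

```yaml
orchestration:
  data:
    secondaryStorage:
      rdbms:
        # Placeholder host/database; sslmode=require asks the PostgreSQL driver to use TLS.
        url: jdbc:postgresql://db.example.com:5432/camunda?sslmode=require
```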

Database credentials

Store the database password in a Kubernetes secret and reference it. For testing only, you can use inlineSecret.

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `orchestration.data.secondaryStorage.rdbms.secret.existingSecret` | string | `""` | Name of the Kubernetes secret containing the password. |
| `orchestration.data.secondaryStorage.rdbms.secret.existingSecretKey` | string | `""` | Key within the secret storing the password. |
| `orchestration.data.secondaryStorage.rdbms.secret.inlineSecret` | string | `""` | Password value (testing only, not production-safe). |
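
One way to create the referenced secret is with kubectl. The secret name and key below match the Helm example later on this page; the namespace and password value are placeholders:

```bash
kubectl create secret generic camunda-db-secret \
  --from-literal=password='change-me' \
  --namespace camunda
```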

Connection pool and performance tuning

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `orchestration.data.secondaryStorage.rdbms.flushInterval` | ISO-8601 duration | `""` | How frequently the exporter flushes events. |
| `orchestration.data.secondaryStorage.rdbms.queueSize` | integer | `1000` | Exporter queue size. Larger = more buffering. |
| `orchestration.data.secondaryStorage.rdbms.queueMemoryLimit` | integer | `20` | Memory limit (MB) for the exporter queue. |
| `orchestration.data.secondaryStorage.rdbms.history.connectionPool.maximumPoolSize` | integer | `""` | Maximum JDBC connections. Default: auto-tuned. |
| `orchestration.data.secondaryStorage.rdbms.history.connectionPool.minimumIdle` | integer | `""` | Minimum idle connections. Default: auto-tuned. |
| `orchestration.data.secondaryStorage.rdbms.history.connectionPool.connectionTimeout` | ISO-8601 duration | `""` | Timeout for acquiring a connection. |
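
As a sketch, these tuning options map to Helm values like the following; the values shown are illustrative, not recommendations:

```yaml
orchestration:
  data:
    secondaryStorage:
      rdbms:
        flushInterval: PT0.5S      # ISO-8601 duration
        queueSize: 2000            # larger queue = more buffering
        queueMemoryLimit: 50       # MB
        history:
          connectionPool:
            maximumPoolSize: 10
            minimumIdle: 2
            connectionTimeout: PT30S
```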

Schema and table management

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `orchestration.data.secondaryStorage.rdbms.autoDDL` | boolean | `true` | Enable Liquibase auto-schema creation. |
| `orchestration.data.secondaryStorage.rdbms.prefix` | string | `""` | Optional table name prefix for all tables. |
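
For example, to disable automatic schema creation (for instance when a DBA manages the schema manually) and prefix all tables, the values could look like this; the prefix is a placeholder:

```yaml
orchestration:
  data:
    secondaryStorage:
      rdbms:
        autoDDL: false       # schema is created and maintained manually
        prefix: camunda_     # illustrative table name prefix
```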

History and data retention

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `orchestration.data.secondaryStorage.rdbms.history.defaultHistoryTTL` | ISO-8601 duration | `""` | Default TTL for historic process data. |
| `orchestration.data.secondaryStorage.rdbms.history.minHistoryCleanupInterval` | ISO-8601 duration | `""` | Minimum interval for history cleanup. |
| `orchestration.data.secondaryStorage.rdbms.history.maxHistoryCleanupInterval` | ISO-8601 duration | `""` | Maximum interval for history cleanup. |
| `orchestration.data.secondaryStorage.rdbms.history.historyCleanupBatchSize` | integer | `1000` | Batch size when deleting historic data. |
| `orchestration.data.secondaryStorage.rdbms.history.defaultBatchOperationHistoryTTL` | ISO-8601 duration | `""` | TTL for batch operation history. |
| `orchestration.data.secondaryStorage.rdbms.history.batchOperationCancelProcessInstanceHistoryTTL` | ISO-8601 duration | `""` | TTL for cancel-process-instance history. |
| `orchestration.data.secondaryStorage.rdbms.history.batchOperationMigrateProcessInstanceHistoryTTL` | ISO-8601 duration | `""` | TTL for migrate-process-instance history. |
| `orchestration.data.secondaryStorage.rdbms.history.batchOperationModifyProcessInstanceHistoryTTL` | ISO-8601 duration | `""` | TTL for modify-process-instance history. |
| `orchestration.data.secondaryStorage.rdbms.history.batchOperationResolveIncidentHistoryTTL` | ISO-8601 duration | `""` | TTL for resolve-incident history. |
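
TTLs and cleanup intervals are ISO-8601 durations. An illustrative retention setup (values are examples, not recommendations):

```yaml
orchestration:
  data:
    secondaryStorage:
      rdbms:
        history:
          defaultHistoryTTL: P30D            # keep historic process data for 30 days
          minHistoryCleanupInterval: PT1H
          maxHistoryCleanupInterval: PT6H
          historyCleanupBatchSize: 1000
          defaultBatchOperationHistoryTTL: P7D
```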

Connection pool lifecycle

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `orchestration.data.secondaryStorage.rdbms.history.connectionPool.idleTimeout` | ISO-8601 duration | `""` | Maximum time a connection can remain idle. |
| `orchestration.data.secondaryStorage.rdbms.history.connectionPool.maxLifetime` | ISO-8601 duration | `""` | Maximum lifetime of a JDBC connection. |
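
These are also ISO-8601 durations, for example (illustrative values):

```yaml
orchestration:
  data:
    secondaryStorage:
      rdbms:
        history:
          connectionPool:
            idleTimeout: PT10M
            maxLifetime: PT30M
```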

Example usage

:::note
Operate has limited functionality when using RDBMS as secondary storage in Camunda 8.9-alpha3. See Operate limitations for details.
:::

```yaml
orchestration:
  exporters:
    camunda:
      enabled: false
    rdbms:
      enabled: true
  data:
    secondaryStorage:
      type: rdbms
      rdbms:
        url: jdbc:postgresql://hostname:5432/camunda
        username: camunda
        secret:
          existingSecret: camunda-db-secret
          existingSecretKey: password
```
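
To apply these values, pass the file to Helm. The release name, namespace, and chart reference below are assumptions; adjust them to your deployment:

```bash
helm upgrade --install camunda camunda/camunda-platform \
  --namespace camunda \
  -f values.yaml
```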

Bundled vs. custom JDBC drivers

Camunda bundles JDBC drivers for some databases (PostgreSQL, MariaDB, H2). For others (Oracle, MySQL, SQL Server), you must supply a custom driver.

See JDBC driver management for:

  • Which drivers are bundled
  • When to supply custom drivers
  • How to load drivers (init containers, custom images, volumes)
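
As a generic Kubernetes sketch of the init-container approach (not chart-specific), a driver jar can be downloaded into a shared volume that the orchestration container mounts; the image, download URL, and mount path are placeholders, and the exact Helm values keys for extra init containers and volumes should be taken from the chart:

```yaml
# Pod-spec fragment illustrating the pattern; adapt to the chart's
# extra-init-container and extra-volume options.
initContainers:
  - name: fetch-jdbc-driver
    image: curlimages/curl:8.8.0
    command: ["sh", "-c"]
    args:
      - curl -fsSL -o /drivers/driver.jar https://example.com/path/to/driver.jar
    volumeMounts:
      - name: jdbc-drivers
        mountPath: /drivers
volumes:
  - name: jdbc-drivers
    emptyDir: {}
```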

Schema creation and management

Camunda automatically creates your database schema using Liquibase (when autoDDL: true). You can also manage the schema manually if required.

See Schema management for:

  • Automatic schema creation with autoDDL
  • Database user permissions for each RDBMS type
  • Manual schema management and DBA workflows
  • Schema upgrades and verification

Troubleshooting and operations

For detailed troubleshooting of common issues and post-deployment operations, see RDBMS troubleshooting and operations, which covers:

  • Connection failures and authentication errors
  • JDBC driver loading issues
  • Schema creation failures
  • Slow data export and performance tuning
  • TLS/SSL configuration
  • Post-deployment operations (password rotation, driver updates, schema validation)

Verifying connectivity

After deployment, verify the Orchestration Cluster is writing to the database:

  1. Confirm tables were created:

     ```sql
     -- PostgreSQL example
     SELECT table_name FROM information_schema.tables
     WHERE table_schema = 'public';
     ```

  2. Deploy a process and start an instance using Web Modeler.

  3. Query the database to confirm the instance was recorded:

     ```sql
     SELECT * FROM process_instances;
     ```

  4. Review logs for successful initialization:

     ```
     INFO  io.camunda.exporter.rdbms.RdbmsExporter - RdbmsExporter created with Configuration: flushInterval=PT0.5S
     INFO  io.camunda.exporter.rdbms.RdbmsExporter - Exporter opened with last exported position
     ```
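
In a Helm deployment you can check these logs on the orchestration workload, for example (the workload name and namespace are assumptions; adjust to your release):

```bash
kubectl logs --namespace camunda statefulset/camunda-orchestration | grep RdbmsExporter
```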

For a complete post-deployment checklist, see validate RDBMS connectivity (Helm).

Using AWS Aurora PostgreSQL (optional)

If you are using AWS Aurora PostgreSQL as your relational database, you can configure it the same way as a standard PostgreSQL instance.

Optionally, Camunda also supports the AWS JDBC wrapper driver, which provides additional features such as improved failover handling and IAM-based authentication.
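
With the AWS Advanced JDBC Wrapper, the JDBC URL typically uses the `jdbc:aws-wrapper:postgresql://` scheme. A hedged sketch (the cluster endpoint and database name are placeholders, and the wrapper driver must be available on the classpath):

```yaml
orchestration:
  data:
    secondaryStorage:
      rdbms:
        # Placeholder Aurora cluster endpoint; requires the AWS Advanced JDBC Wrapper driver.
        url: jdbc:aws-wrapper:postgresql://my-cluster.cluster-xxxx.eu-central-1.rds.amazonaws.com:5432/camunda
        username: camunda
```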

For details and examples, see using AWS Aurora PostgreSQL with Camunda.

Limitations and unsupported scenarios

Component-specific RDBMS support

  • Orchestration Cluster: ✅ Full RDBMS support for secondary storage (includes Zeebe, Operate, Tasklist, Orchestration Identity).
  • Connectors: ✅ Supports RDBMS for process definitions and state.
  • Web Modeler: ✅ RDBMS support available in 8.9.
  • Optimize: ❌ Requires Elasticsearch or OpenSearch only. Optimize cannot use RDBMS.

If you deploy Optimize, you must still provision Elasticsearch or OpenSearch.

Multi-region deployments

Cross-region RDBMS deployments are not yet tested or supported in Camunda 8.9. Deploy RDBMS in the same region as your Kubernetes cluster.

Self-managed database HA

Camunda assumes your RDBMS handles its own HA (replication, failover). Use cloud-managed databases or vendor-specific HA solutions for production.

Custom JDBC driver libraries

Only JDBC drivers from official vendor sources are supported. Custom or modified drivers may cause unexpected behavior.