Version: 8.8 (unreleased)

Configuration

This page uses YAML examples to show configuration properties. Spring Boot provides alternate ways to externalize or override your configuration without rebuilding your application, such as properties files, Java system properties, or environment variables.

note

Configuration properties can be defined as environment variables using Spring Boot conventions. To define an environment variable, convert the configuration property name to uppercase, remove any dashes (-), and replace any dots (.) with underscores (_).

For example, the property camunda.client.worker.defaults.max-jobs-active is represented by the environment variable CAMUNDA_CLIENT_WORKER_DEFAULTS_MAXJOBSACTIVE.
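This conversion rule can be sketched as a small helper. Spring Boot performs this relaxed binding internally; the helper below is purely illustrative:

```java
public class EnvVarName {

  // Illustrative only: convert a configuration property name to its
  // environment variable form, as described above (uppercase, drop
  // dashes, replace dots with underscores).
  public static String toEnvVar(String property) {
    return property.toUpperCase()
        .replace("-", "")
        .replace(".", "_");
  }

  public static void main(String[] args) {
    // prints CAMUNDA_CLIENT_WORKER_DEFAULTS_MAXJOBSACTIVE
    System.out.println(toEnvVar("camunda.client.worker.defaults.max-jobs-active"));
  }
}
```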

Modes

The Camunda Spring Boot SDK provides modes with meaningful defaults aligned with each distribution's default connection details. Each mode targets a specific Camunda 8 setup, and only one mode may be used at a time.

note

The defaults applied by a mode are overridden by any other property you set, including legacy/deprecated properties. Check your configuration and logs to avoid unintended overrides.

SaaS

This mode connects to a Camunda instance in the SaaS offering; the connection URLs are templated from your cluster details.

Activate by setting:

camunda:
  client:
    mode: saas

This applies the SaaS mode connection defaults.

You then only need to configure the connection details for your Camunda SaaS cluster:

camunda:
  client:
    auth:
      client-id: <your client id>
      client-secret: <your client secret>
    cloud:
      cluster-id: <your cluster id>
      region: <your region>

No other connectivity configuration applies in SaaS mode.

Self-Managed

This mode connects to a Self-Managed instance protected with JWT authentication. The default URLs are configured to align with the Camunda distributions, using localhost addresses.

Activate by setting:

camunda:
  client:
    mode: self-managed

This applies the self-managed mode connection defaults.

Connectivity

The connection to the Camunda API is determined by camunda.client.grpc-address and camunda.client.rest-address.

Camunda API connection

gRPC address

Define the address of the gRPC API exposed by the Zeebe Gateway:

camunda:
  client:
    grpc-address: http://localhost:26500

note

You must add the http:// scheme to the URL to avoid a java.lang.NullPointerException: target error.

REST address

Define the address of the Camunda 8 REST API exposed by the Zeebe Gateway:

camunda:
  client:
    rest-address: http://localhost:8080

note

You must add the http:// scheme to the URL to avoid a java.lang.NullPointerException: target error.

Prefer REST over gRPC

If true, the Camunda Client will use REST instead of gRPC whenever possible to communicate with the Camunda APIs:

camunda:
  client:
    prefer-rest-over-grpc: true

Advanced connectivity settings

Keep alive

The time interval between keep-alive messages sent to the gateway (default is 45s):

camunda:
  client:
    keep-alive: PT60S
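Values like PT60S are ISO-8601 durations, which java.time.Duration parses directly; for example:

```java
import java.time.Duration;

public class KeepAliveDuration {
  public static void main(String[] args) {
    // PT60S is an ISO-8601 duration of 60 seconds
    Duration keepAlive = Duration.parse("PT60S");
    System.out.println(keepAlive.getSeconds()); // prints 60
  }
}
```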

Override authority

The alternative authority to use, commonly in the form host or host:port:

camunda:
  client:
    override-authority: host:port

Max message size

A custom maxMessageSize allows the client to receive larger or smaller responses from Zeebe. Technically, it specifies the maxInboundMessageSize of the gRPC channel (default 5MB):

camunda:
  client:
    max-message-size: 4194304
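The value is a size in bytes; for instance, the 4194304 shown above corresponds to 4 MiB:

```java
public class MessageSize {
  public static void main(String[] args) {
    // 4 MiB expressed in bytes, matching the example value above
    int fourMiB = 4 * 1024 * 1024;
    System.out.println(fourMiB); // prints 4194304
  }
}
```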

Max metadata size

A custom maxMetadataSize allows the client to receive larger or smaller response headers from Camunda:

camunda:
  client:
    max-metadata-size: 4194304

CA certificate

Path to a root CA certificate to be used instead of the certificate in the default store:

camunda:
  client:
    ca-certificate-path: path/to/certificate

Authentication

The authentication method is determined by camunda.client.auth.method. If omitted, the client will try to detect the authentication method based on the provided properties.

Authenticate with the cluster using the following alternative methods:

info

When using camunda.client.mode=saas, the authentication method presets are not applied; the properties contained in the SaaS preset take precedence.

No authentication

By default, no authentication will be used.

To explicitly activate this method, you can set:

camunda:
  client:
    auth:
      method: none

Alternatively, simply avoid setting any property that implies another authentication method.

This loads the no-authentication preset.

Basic authentication

You can authenticate with the cluster using Basic authentication, provided the cluster is set up for it.

To explicitly activate this method, you can set:

camunda:
  client:
    auth:
      method: basic

This authentication method will be implied if you set either camunda.client.auth.username or camunda.client.auth.password.

This loads the Basic authentication preset.

OIDC authentication

You can authenticate with the cluster using OpenID Connect (OIDC) with client ID and client secret.

To explicitly activate this method, you can set:

camunda:
  client:
    auth:
      method: oidc

This authentication method will be implied if you set either camunda.client.auth.client-id or camunda.client.auth.client-secret.

This loads the OIDC authentication preset.

Credentials cache path

You can define the credentials cache path of the client. The property contains both the directory path and the file name:

camunda:
  client:
    auth:
      credentials-cache-path: /tmp/credentials

Custom identity provider security context

Several identity providers, such as Keycloak, support client X.509 authentication as an alternative to the client credentials flow.

As a prerequisite, ensure you have a proper KeyStore and TrustStore configured, so that:

  • Both the Spring Camunda application and the identity provider share the same CA trust certificates.
  • Both the Spring Camunda application and the identity provider own certificates signed by a trusted CA.
  • Your Spring Camunda application's certificate has a proper Distinguished Name (DN), e.g. CN=My Camunda Client, OU=Camunda Users, O=Best Company, C=DE.
  • Your application's DN is registered in the identity provider's client authorization details.
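The DN in the example follows standard X.500 syntax; the JDK can parse and validate such names with javax.naming.ldap.LdapName, which can be handy when double-checking the DN you register:

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;

public class DnCheck {
  public static void main(String[] args) throws InvalidNameException {
    // Parse the example DN into its relative distinguished names (RDNs).
    // LdapName indexes RDNs right to left, so the CN is the last index.
    LdapName dn = new LdapName("CN=My Camunda Client,OU=Camunda Users,O=Best Company,C=DE");
    System.out.println(dn.size());                // prints 4
    System.out.println(dn.getRdn(dn.size() - 1)); // prints CN=My Camunda Client
  }
}
```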

Once these prerequisites are satisfied, your Spring Camunda application must be configured either via the global SSL context or with an exclusive context, as documented below.

Refer to your identity provider's documentation on how to configure X.509 authentication (for example, Keycloak).

If you require configuring SSL context exclusively for your identity provider, you can use this set of properties:

camunda:
  client:
    auth:
      keystore-path: /path/to/keystore.p12
      keystore-password: password
      keystore-key-password: password
      truststore-path: /path/to/truststore.jks
      truststore-password: password
  • keystore-path: Path to the client's KeyStore, in either JKS or PKCS12 format
  • keystore-password: KeyStore password
  • keystore-key-password: Key material password
  • truststore-path: Path to the client's TrustStore
  • truststore-password: TrustStore password

When these properties are not specified, the default SSL context is applied; for example, if you configure the application with javax.net.ssl.* or spring.ssl.* properties, those are applied. If both camunda.client.auth.* and either javax.net.ssl.* or spring.ssl.* properties are defined, camunda.client.auth.* takes precedence.

Job worker configuration options

Overriding job worker values using properties

You can override the JobWorker annotation's values using configuration properties. In the following example, the enabled property is overridden:

camunda:
  client:
    worker:
      override:
        foo:
          enabled: false

In this case, foo is the type of the worker to customize.

You can override all supported configuration options for a worker, for example:

camunda:
  client:
    worker:
      override:
        foo:
          timeout: PT10S

info

You can also provide a custom class that customizes the JobWorker configuration values by implementing the io.camunda.spring.client.annotation.customizer.JobWorkerValueCustomizer interface and registering it as a bean.

Job type

You can configure the job type via the JobWorker annotation:

@JobWorker(type = "foo")
public void handleJobFoo() {
  // handles jobs of type 'foo'
}

If you don't specify the type attribute, the method name is used by default:

@JobWorker
public void foo() {
  // handles jobs of type 'foo'
}

As a third possibility, you can set a task type as a property:

camunda:
  client:
    worker:
      override:
        foo:
          type: bar

As a fourth possibility, you can set a default task type as a property:

camunda:
  client:
    worker:
      defaults:
        type: foo

This default is used for all workers that do not set a task type via the annotation or via an individual worker override property.
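This precedence can be sketched as pseudologic. The resolveType helper below is hypothetical, not SDK API, assuming the order: per-worker override, then annotation type, then global default, then method name:

```java
import java.util.Map;

public class JobTypeResolution {

  // Hypothetical sketch of the precedence described above; the real
  // resolution happens inside the SDK.
  static String resolveType(Map<String, String> overrides, String workerKey,
                            String annotationType, String globalDefault, String methodName) {
    if (overrides.containsKey(workerKey)) return overrides.get(workerKey);        // worker override property
    if (annotationType != null && !annotationType.isEmpty()) return annotationType; // @JobWorker(type = ...)
    if (globalDefault != null) return globalDefault;                              // worker defaults property
    return methodName;                                                            // method-name fallback
  }

  public static void main(String[] args) {
    // The override property wins over everything else
    System.out.println(resolveType(Map.of("foo", "bar"), "foo", "foo", null, "foo")); // prints bar
  }
}
```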

Define job worker function parameters

The method signature of your job worker function also influences how variables are fetched.

Unless explicitly mentioned otherwise, all of the listed methods for fetching variables contribute to a joint list of variables to fetch.

Explicit ways to control the variable fetching

Provide a list of variables to fetch

You can specify that you only want to fetch some variables (instead of all) when executing a job, which can decrease load and improve performance:

@JobWorker(type = "foo", fetchVariables = {"variable1", "variable2"})
public void handleJobFoo(final JobClient client, final ActivatedJob job) {
  String variable1 = (String) job.getVariablesAsMap().get("variable1");
  System.out.println(variable1);
  // ...
}

You can also override the variables to fetch in your properties:

camunda:
  client:
    worker:
      override:
        foo:
          fetch-variables:
            - variable1
            - variable2

caution

Defining the variables to fetch via properties overrides all other detection strategies.

Prevent the variable filtering

You can force all variables to be loaded anyway:

@JobWorker(type = "foo", fetchAllVariables = true)
public void handleJobFoo(final JobClient client, final ActivatedJob job, @Variable String variable1) {
}

Implicit ways to control the variable fetching

ActivatedJob parameter

Accepting an ActivatedJob parameter prevents the implicit variable fetching detection, as you can retrieve variables programmatically:

@JobWorker(type = "foo")
public void handleJobFoo(final ActivatedJob job) {
  String variable1 = (String) job.getVariablesAsMap().get("variable1");
  System.out.println(variable1);
  // ...
}

Using @Variable

The @Variable annotation is a shortcut to simplify variable retrieval and to fetch only certain variables, making them available as method parameters:

@JobWorker(type = "foo")
public void handleJobFoo(@Variable(name = "variable1") String variable1) {
  System.out.println(variable1);
  // ...
}

If you don't specify the name attribute on the annotation, the method parameter name is used as the variable name, provided you enabled the -parameters compiler flag as described in the getting started section:

@JobWorker(type = "foo")
public void handleJobFoo(final JobClient client, final ActivatedJob job, @Variable String variable1) {
  System.out.println(variable1);
  // ...
}

Using @VariablesAsType

You can also map the process variables to your own class (comparable to getVariablesAsType() in the Java client API). To do so, use the @VariablesAsType annotation. In the example below, MyProcessVariables refers to your own class:

@JobWorker(type = "foo")
public MyProcessVariables handleFoo(@VariablesAsType MyProcessVariables variables) {
  // do whatever you need to do
  variables.getMyAttributeX();
  variables.setMyAttributeY(42);

  // return the variables object if something has changed, so the changes are submitted to Zeebe
  return variables;
}

Here, the variables to fetch will be limited to the names of the fields of the used type. The @JsonProperty annotation is respected.
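The docs don't show MyProcessVariables itself; a minimal sketch of such a class might look as follows. The field names determine which variables are fetched; Jackson's @JsonProperty could rename them, omitted here to keep the sketch dependency-free:

```java
public class MyProcessVariables {

  // Each field name corresponds to a process variable to fetch
  private String myAttributeX;
  private int myAttributeY;

  public String getMyAttributeX() { return myAttributeX; }
  public void setMyAttributeX(String myAttributeX) { this.myAttributeX = myAttributeX; }

  public int getMyAttributeY() { return myAttributeY; }
  public void setMyAttributeY(int myAttributeY) { this.myAttributeY = myAttributeY; }
}
```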

Using @CustomHeaders

You can use the @CustomHeaders annotation for a parameter to retrieve custom headers for a job:

@JobWorker(type = "foo")
public void handleFoo(@CustomHeaders Map<String, String> headers) {
  // do whatever you need to do
}

Completing jobs

Auto-completing jobs

By default, the autoComplete attribute is set to true for any job worker.

In this case, the Spring integration will handle job completion for you:

@JobWorker(type = "foo")
public void handleJobFoo(final ActivatedJob job) {
  // do whatever you need to do
  // no need to call client.newCompleteCommand()...
}

This is the same as:

@JobWorker(type = "foo", autoComplete = true)
public void handleJobFoo(final ActivatedJob job) {
  // ...
}

note

The code within the handler method must execute synchronously, as completion is triggered right after the method returns.

When using autoComplete you can:

  • Return a Map, String, InputStream, or Object, which is then added to the process variables.
  • Throw a BpmnError, which results in a BPMN error being sent to Zeebe.
  • Throw any other Exception, which results in a failure being handed over to Zeebe.

@JobWorker(type = "foo")
public Map<String, Object> handleJobFoo(final ActivatedJob job) {
  // some work
  if (successful) {
    // some data is returned to be stored as process variables
    return variablesMap;
  } else {
    // problem shall be indicated to the process:
    throw new BpmnError("DOESNT_WORK", "This does not work because...");
  }
}

Programmatically completing jobs

Your job worker code can also complete the job itself. This gives you more control over when exactly you want to complete the job (for example, allowing the completion to be moved to reactive callbacks):

@JobWorker(type = "foo", autoComplete = false)
public void handleJobFoo(final JobClient client, final ActivatedJob job) {
  // do whatever you need to do
  client.newCompleteCommand(job.getKey())
      .send()
      .exceptionally(throwable -> {
        throw new RuntimeException("Could not complete job " + job, throwable);
      });
}

You can also control auto-completion in your configuration.

Globally:

camunda:
  client:
    worker:
      defaults:
        auto-complete: false

Per worker:

camunda:
  client:
    worker:
      override:
        foo:
          auto-complete: false

Ideally, you avoid blocking behavior like send().join(), which waits for the issued command to be executed on the workflow engine. While this is straightforward and produces easy-to-read code, blocking code is limited in terms of scalability.

This is why the worker sample above shows a different pattern (using exceptionally). Often, you might want to use the whenComplete callback:

send().whenComplete((result, exception) -> {})

This registers a callback to be executed once the command has been executed on the workflow engine or has resulted in an exception, which allows for parallelism. This is discussed in more detail in the blog post about writing good workers for Camunda 8.
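The same callback pattern can be demonstrated with a plain CompletableFuture, independent of any Camunda API:

```java
import java.util.concurrent.CompletableFuture;

public class WhenCompleteDemo {
  public static void main(String[] args) {
    // Simulates an async command: the callback runs once the result
    // (or an exception) arrives, without blocking the calling thread.
    CompletableFuture<String> send = CompletableFuture.supplyAsync(() -> "ok");
    send.whenComplete((result, exception) -> {
      if (exception != null) {
        System.err.println("Command failed: " + exception);
      } else {
        System.out.println("Command result: " + result);
      }
    }).join(); // joined here only so the demo waits; a worker would not block
  }
}
```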

note

When completing jobs programmatically, you must specify autoComplete = false. Otherwise, there is a race condition between your programmatic job completion and the Spring integration job completion, and this can lead to unpredictable results.

Reacting on problems

Throwing BpmnErrors

Whenever your code hits a problem that should lead to a BPMN error being raised, throw a BpmnError and provide the error code used in the BPMN model:

@JobWorker(type = "foo")
public void handleJobFoo() {
  // some work
  if (businessError) {
    // problem shall be indicated to the process:
    throw CamundaError.bpmnError("ERROR_CODE", "Some explanation why this does not work");
    // bpmnError is a static factory method that returns an instance of BpmnError
  }
}

Failing jobs in a controlled way

Whenever you want a job to fail in a controlled way, throw a JobError and provide parameters such as variables, retries, and retryBackoff:

@JobWorker(type = "foo")
public void handleJobFoo() {
  try {
    // some work
  } catch (Exception e) {
    // problem shall be indicated to the process:
    throw CamundaError.jobError("Error message", new ErrorVariables(), null, Duration.ofSeconds(10), e);
    // jobError is a static factory method that returns an instance of JobError
  }
}

The JobError takes 5 parameters:

  • errorMessage: String
  • variables: Object (optional), default null
  • retries: Integer (optional), defaults to job.getRetries() - 1
  • retryBackoff: Duration (optional), defaults to PT0S
  • cause: Exception (optional), defaults to null
note

The job error is sent to the engine by the SDK calling the Fail Job API. The stacktrace of the job error will become the actual error message. The provided cause will be visible in Operate.
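As a small illustration of the retries default, a job activated with 3 retries left would, after a JobError with the default retries, be retried with 2. The defaultRetries helper below is illustrative, not SDK API:

```java
public class JobErrorRetries {

  // Illustrative only: the documented default for retries is the job's
  // current retry count decremented by one (job.getRetries() - 1).
  static int defaultRetries(int currentRetries) {
    return currentRetries - 1;
  }

  public static void main(String[] args) {
    System.out.println(defaultRetries(3)); // prints 2
  }
}
```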

Implicitly failing jobs

If your handler method throws any exception other than those listed above, the default Camunda Client error handling applies: retries are decremented, with a retryBackoff of 0.

Advanced job worker configuration options

Execution threads

The number of threads for invocation of job workers (default 1):

camunda:
  client:
    execution-threads: 2

note

We generally advise against using a thread pool for workers; instead, implement asynchronous code. See writing good workers for additional details.

Disable a job worker

You can disable workers via the enabled parameter of the @JobWorker annotation:

@JobWorker(enabled = false)
public void foo() {
  // worker's code - now disabled
}

You can also override this setting via your application.yaml file:

camunda:
  client:
    worker:
      override:
        foo:
          enabled: false

This is especially useful if you have a bigger code base including many workers, but want to start only some of them. Typical use cases are:

  • Testing: You only want one specific worker to run at a time.
  • Load balancing: You want to control which workers run on which instance of cluster nodes.
  • Migration: There are two applications, and you want to migrate a worker from one to the other. With this switch, you can disable workers via configuration in the old application once they are available in the new one.

To disable all workers, but still have the Camunda client available, you can use:

camunda:
  client:
    worker:
      defaults:
        enabled: false

Configure jobs in flight

Configure the number of jobs polled from the broker to be worked on in this client, along with the thread pool size to handle these jobs:

@JobWorker(maxJobsActive = 64)
public void foo() {
  // worker's code
}

This can also be configured as property:

camunda:
  client:
    worker:
      override:
        foo:
          max-jobs-active: 64

To configure a global default, you can set:

camunda:
  client:
    worker:
      defaults:
        max-jobs-active: 64

Enable job streaming

Read more about this feature in the job streaming documentation.

Job streaming is disabled by default for job workers. To enable job streaming on the Camunda client, configure it as follows:

@JobWorker(streamEnabled = true)
public void foo() {
  // worker's code
}

This can also be configured as property:

camunda:
  client:
    worker:
      override:
        foo:
          stream-enabled: true

To configure a global default, you can set:

camunda:
  client:
    worker:
      defaults:
        stream-enabled: true

Control tenant usage

Generally, the client's default tenant-ids are used for all job worker activations.

Configure global worker defaults for additional tenant-ids to be used by all workers:

camunda:
  client:
    worker:
      defaults:
        tenant-ids:
          - <default>
          - foo

Additionally, you can set tenantIds on the job worker level by using the annotation:

@JobWorker(tenantIds = "myOtherTenant")
public void foo() {
  // worker's code
}

You can also override the tenant-ids for each worker:

camunda:
  client:
    worker:
      override:
        foo:
          tenant-ids:
            - <default>
            - foo

Additional configuration options

For a full set of configuration options, see CamundaClientProperties.java.

Message time to live

The time-to-live used for a message when none is provided (default is 1 hour):

camunda:
  client:
    message-time-to-live: PT2H

Request timeout

The request timeout used if not overridden by the command (default is 10s):

camunda:
  client:
    request-timeout: PT20S

Tenant usage

When using multi-tenancy, the Zeebe client will connect to the <default> tenant. To control this, you can configure:

camunda:
  client:
    tenant-id: foo

Observing metrics

The Camunda Spring Boot SDK provides some out-of-the-box metrics that can be leveraged via Spring Actuator. Whenever Actuator is on the classpath, you can access the following metrics:

  • camunda.job.invocations: Number of invocations of job workers (tagging the job type)

For this metric, the following actions are recorded:

  • activated: The job was activated and started to process an item.
  • completed: The processing was completed successfully.
  • failed: The processing failed with some exception.
  • bpmn-error: The processing completed by throwing a BPMN error (which means there was no technical problem).

In a default setup, you can enable metrics to be served via HTTP:

management:
  endpoints:
    web:
      exposure:
        include: metrics

Access them via http://localhost:8080/actuator/metrics/.