Deploy a ROSA HCP Cluster with Terraform
This guide provides a detailed tutorial for deploying a Red Hat OpenShift Service on AWS (ROSA) cluster with Hosted Control Plane (HCP) capabilities. It is specifically tailored for deploying Camunda 8 using Terraform, a widely used Infrastructure as Code (IaC) tool.
We recommend this guide for building a robust and sustainable infrastructure. However, if you are looking for a quicker trial or proof of concept, or if your needs aren't fully met by our module, consider following the official ROSA Quickstart Guide.
This guide aims to help you leverage IaC to streamline and reproduce your cloud infrastructure setup. While it covers the essentials for deploying a ROSA HCP cluster, for more advanced use cases, please refer to the official Red Hat OpenShift Service on AWS documentation.
If you are completely new to Terraform and the idea of IaC, read through the Terraform IaC documentation and give their interactive quick start a try for a basic understanding.
Requirements
- A Red Hat Account to create the Red Hat OpenShift cluster.
- An AWS account to create any resources within AWS.
- AWS CLI (2.17+), a CLI tool for creating AWS resources.
- Terraform (1.9+)
- kubectl (1.30+) to interact with the cluster.
- ROSA CLI to interact with the cluster.
- jq (1.7+) to interact with some Terraform variables.
- This guide uses GNU/Bash for all the shell commands listed.
Considerations
This setup provides a foundational starting point for working with Camunda 8, though it is not optimized for peak performance. It serves as a solid initial step in preparing a production environment by leveraging Infrastructure as Code (IaC) tools.
Terraform can seem complex at first. If you're interested in understanding what each component does, consider trying out the Red Hat OpenShift on AWS UI-based tutorial. It walks through the resources that are created and how they interact with each other.
If you require managed services for PostgreSQL Aurora or OpenSearch, you can refer to the definitions provided in the EKS setup with Terraform guide. However, please note that these configurations may need adjustments to fit your specific requirements and have not been tested. By default, this guide assumes that the database services (PostgreSQL and Elasticsearch) integrated into the default chart will be used.
For testing Camunda 8 or developing against it, you might consider signing up for our SaaS offering. If you already have a Red Hat OpenShift cluster on AWS, you can skip ahead to the Helm setup guide.
To keep this guide concise, we provide links to additional documentation covering best practices, allowing you to explore each topic in greater depth.
Following this guide will incur costs on your cloud provider account and your Red Hat account, specifically for the managed OpenShift service, OpenShift worker nodes running in EC2, the hosted control plane, Elastic Block Storage (EBS), and Route 53. For more details, refer to ROSA AWS pricing and the AWS Pricing Calculator as total costs vary by region.
Variants
Unlike the EKS Terraform setup, we currently support only one main variant of this setup:
The standard installation uses a username and password connection for Camunda components (or relies solely on network isolation for certain components). This option is straightforward and easier to implement, making it ideal for environments where simplicity and rapid deployment are priorities, or where network isolation provides adequate security.
The second variant, IRSA (IAM Roles for Service Accounts), may work but has not been tested. If you’re interested in setting it up, please refer to the EKS guide as a foundational resource.
Outcome
Infrastructure diagram for a single-region ROSA setup
Following this tutorial and steps will result in:
- A Red Hat OpenShift cluster with a Hosted Control Plane, running the latest ROSA version, with six nodes ready for the Camunda 8 installation.
- The EBS CSI driver installed and configured, used by the Camunda 8 Helm chart to create persistent volumes.
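After completing the guide, you can sanity-check both outcomes with commands along these lines (an illustrative quick check; the exact storage class names depend on your ROSA version):

```bash
# List the worker nodes (six are expected with the default setup)
oc get nodes

# Confirm the EBS CSI storage classes are available for persistent volumes
oc get storageclass
```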
1. Configure AWS and initialize Terraform
Terraform prerequisites
To manage the infrastructure for Camunda 8 on AWS using Terraform, we need to set up Terraform's backend to store the state file remotely in an S3 bucket. This ensures secure and persistent storage of the state file.
Advanced users may want to handle this part differently and use a different backend. The backend setup provided is an example for new users.
Set up AWS authentication
The AWS Terraform provider is required to create resources in AWS. Before you can use the provider, you must authenticate it using your AWS credentials.
A user who creates resources in AWS will always retain administrative access to those resources, including any Kubernetes clusters created. It is recommended to create a dedicated AWS IAM user for Terraform purposes, ensuring that the resources are managed and owned by that user.
You can further change the region and other preferences and explore different authentication methods:
For development or testing purposes you can use the AWS CLI. If you have configured your AWS CLI, Terraform will automatically detect and use those credentials. To configure the AWS CLI:
```bash
aws configure
```

Enter your `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, region, and output format. These can be retrieved from the AWS Console.

For production environments, we recommend the use of a dedicated IAM user. Create access keys for the new IAM user via the console, and export them as `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
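If you prefer the CLI over the console, a dedicated IAM user and access key can also be created along these lines (a sketch; the user name and attached policy are placeholders that you should scope to your needs):

```bash
# Illustrative only: create a dedicated IAM user for Terraform.
aws iam create-user --user-name terraform-rosa

# Attach a policy (AdministratorAccess shown for simplicity; scope it down for production).
aws iam attach-user-policy --user-name terraform-rosa \
  --policy-arn "arn:aws:iam::aws:policy/AdministratorAccess"

# Create the access key and note the returned AccessKeyId / SecretAccessKey.
aws iam create-access-key --user-name terraform-rosa
```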
Create an S3 bucket for Terraform state management
Before setting up Terraform, you need to create an S3 bucket that will store the state file. This is important for collaboration and to prevent issues like state file corruption.
To start, set the region as an environment variable upfront to avoid repeating it in each command:
```bash
export AWS_REGION=<your-region>
```

Replace `<your-region>` with your chosen AWS region (for example, `eu-central-1`).
Now, follow these steps to create the S3 bucket with versioning enabled:
1. Open your terminal and ensure the AWS CLI is installed and configured.

2. Run the following command to create an S3 bucket for storing your Terraform state. Make sure to use a unique bucket name and set the `AWS_REGION` environment variable beforehand:

   ```bash
   # Replace "my-rosa-tf-state" with your unique bucket name
   export S3_TF_BUCKET_NAME="my-rosa-tf-state"

   aws s3api create-bucket --bucket "$S3_TF_BUCKET_NAME" --region "$AWS_REGION" \
     --create-bucket-configuration LocationConstraint="$AWS_REGION"
   ```

3. Enable versioning on the S3 bucket to track changes and protect the state file from accidental deletions or overwrites:

   ```bash
   aws s3api put-bucket-versioning --bucket "$S3_TF_BUCKET_NAME" \
     --versioning-configuration Status=Enabled --region "$AWS_REGION"
   ```

4. Secure the bucket by blocking public access:

   ```bash
   aws s3api put-public-access-block --bucket "$S3_TF_BUCKET_NAME" --public-access-block-configuration \
     "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true" \
     --region "$AWS_REGION"
   ```

5. Verify versioning is enabled on the bucket:

   ```bash
   aws s3api get-bucket-versioning --bucket "$S3_TF_BUCKET_NAME" --region "$AWS_REGION"
   ```
This S3 bucket will now securely store your Terraform state files with versioning enabled.
Create a `config.tf` with the following setup

Once the S3 bucket is created, configure your `config.tf` file to use the S3 backend for managing the Terraform state:
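The full reference file is linked in the reference files section below; a minimal sketch of such a `config.tf`, assuming the HashiCorp AWS provider v5.x, looks like this:

```hcl
terraform {
  required_version = ">= 1.9"

  # The bucket and key are supplied at init time:
  #   terraform init -backend-config="bucket=..." -backend-config="key=..."
  # The backend reads the region from the AWS_REGION environment variable.
  backend "s3" {
    encrypt = true
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # assumption: any recent 5.x provider
    }
  }
}

# Credentials and region come from the AWS CLI configuration
# or the AWS_* environment variables set earlier.
provider "aws" {}
```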
Initialize Terraform
Once your `config.tf` and authentication are set up, you can initialize your Terraform project. The previous steps configured a dedicated S3 bucket (`S3_TF_BUCKET_NAME`) to store your state, and the following creates a bucket key that will be used by your configuration.
Configure the backend and download the necessary provider plugins:
```bash
export S3_TF_BUCKET_KEY="camunda-terraform/terraform.tfstate"

echo "Storing terraform state in s3://$S3_TF_BUCKET_NAME/$S3_TF_BUCKET_KEY"

terraform init -backend-config="bucket=$S3_TF_BUCKET_NAME" -backend-config="key=$S3_TF_BUCKET_KEY"
```
Terraform will connect to the S3 bucket to manage the state file, ensuring remote and persistent storage.
OpenShift cluster module setup
This module sets up the foundational configuration for ROSA HCP and Terraform usage.
We will leverage Terraform modules, which allow us to abstract resources into reusable components, simplifying infrastructure management.
The Camunda-provided module is publicly available and serves as a robust starting point for deploying a Red Hat OpenShift cluster on AWS using a Hosted Control Plane. It is highly recommended to review this module before implementation to understand its structure and capabilities.
Please note that this module is based on the official ROSA HCP Terraform module documentation. It is presented as an example for running Camunda 8 in ROSA. For advanced use cases or custom setups, we encourage you to use the official module, which includes vendor-supported features.
Set up ROSA authentication
To set up a ROSA cluster, certain prerequisites must be configured on your AWS account. Below is an excerpt from the official ROSA planning prerequisites checklist:
Verify that your AWS account is correctly configured:

```bash
aws sts get-caller-identity
```

Check whether the service role for Elastic Load Balancing (ELB) exists. If you have never created a load balancer in this AWS account, the role might not exist yet:

```bash
aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing"
```

If it doesn't exist, create it:

```bash
aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
```
Create a Red Hat Hybrid Cloud Console account if you don’t already have one: Red Hat Hybrid Cloud Console.
Enable ROSA on your AWS account via the AWS Console.
Enable ROSA with HCP on the AWS Marketplace:
- Navigate to the ROSA console: AWS ROSA Console.
- Choose Get started.
- On the Verify ROSA prerequisites page, select I agree to share my contact information with Red Hat.
- Choose Enable ROSA.
Note: Only a single AWS account can be associated with a Red Hat account for service billing.
Install the ROSA CLI from the OpenShift AWS Console.
Get an API token: go to the OpenShift Cluster Management API Token page, click Load token, and save the token. Use it to log in with the ROSA CLI:

```bash
export RHCS_TOKEN="<yourToken>"
rosa login --token="$RHCS_TOKEN"

# Verify the login
rosa whoami
```

Verify your AWS quotas:

```bash
rosa verify quota --region="$AWS_REGION"
```
Note: This may fail due to organizational policies.
Create the required account roles:

```bash
rosa create account-roles --mode auto
```
If the quota verification reports insufficient quotas, request an increase through the AWS Service Quotas console.
Ensure the `oc` CLI is installed. If it's not already installed, follow the official ROSA `oc` installation guide:

```bash
rosa verify openshift-client
```
Set up the ROSA cluster module
Create a `cluster.tf` file in the same directory as your `config.tf` file, and add the following content to it to utilize the provided module.

Configure your cluster: customize the cluster name and availability zones with the values you previously retrieved from the Red Hat Console. Additionally, provide a secure username and password for the cluster administrator.

Ensure that the `RHCS_TOKEN` environment variable is set with your OpenShift Cluster Management API token.

By default, this cluster will be accessible from the internet. If you prefer to restrict access, please refer to the official documentation of the module.
The module definition is provided in the reference file `aws/rosa-hcp/camunda-versions/8.7/cluster.tf`.
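As an illustrative, untested sketch of that file's shape (the module source and input names below are assumptions; consult the Camunda ROSA module documentation for the exact interface — the `local` values are the ones read back later in this guide):

```hcl
locals {
  # These locals are read back later in the guide via `terraform console`.
  rosa_cluster_name   = "my-rosa" # customize the cluster name
  rosa_admin_username = "kubeadmin"
  rosa_admin_password = "CHANGE-me-to-a-secure-password-1234!"
}

module "rosa_hcp" {
  # Placeholder: look up the exact repository URL and version ref in the
  # Camunda ROSA module documentation.
  source = "<camunda-rosa-hcp-module-source>"

  # Input names below are hypothetical, shown only to illustrate the shape.
  cluster_name   = local.rosa_cluster_name
  admin_username = local.rosa_admin_username
  admin_password = local.rosa_admin_password
}
```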
Note: This ROSA module is based on the official Red Hat Terraform module for ROSA HCP. Please be aware of potential differences and choices in implementation between this module and the official one.
We invite you to consult the Camunda ROSA module documentation for more information.
Initialize Terraform for this module using the following Terraform command:

```bash
terraform init -backend-config="bucket=$S3_TF_BUCKET_NAME" -backend-config="key=$S3_TF_BUCKET_KEY"
```
Configure user access to the cluster. By default, the user who creates the OpenShift cluster has administrative access. If you want to grant access to other users, follow the Red Hat documentation for granting admin rights to users when the cluster is created.
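For example, once the cluster is created, an additional user can be granted admin rights with the ROSA CLI (the names here are placeholders; the same command appears later in this guide):

```bash
rosa grant user cluster-admin --cluster="<cluster-name>" --user="<additional-user>"
```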
Customize the cluster setup. The module offers various input options that allow you to further customize the cluster configuration. For a comprehensive list of available options and detailed usage instructions, refer to the ROSA module documentation.
Define outputs
Terraform allows you to define outputs, which make it easier to retrieve important values generated during execution, such as cluster endpoints and other necessary configurations for Helm setup.
Each module that you have previously set up contains an output definition at the end of the file. You can adjust them to your needs.
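For example, the API URL consumed later in this guide (`terraform output -raw openshift_api_url`) would be exposed along these lines (a sketch under the same naming assumptions as above):

```hcl
# Sketch: expose the cluster API URL so later steps can read it with
# `terraform output -raw openshift_api_url`. The module reference name
# is hypothetical.
output "openshift_api_url" {
  description = "API endpoint of the OpenShift cluster"
  value       = module.rosa_hcp.openshift_api_url
}
```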
Execution
We strongly recommend managing sensitive information (for example, the OpenSearch or Aurora username and password) using a secure secrets management solution like HashiCorp Vault. For details on how to inject secrets directly into Terraform via Vault, see the Terraform Vault Secrets Injection Guide.
Open a terminal in the created Terraform folder where `config.tf` and the other `.tf` files are located.

Plan the configuration files:

```bash
terraform plan -out cluster.plan # describe what will be created
```

After reviewing the plan, you can confirm and apply the changes:

```bash
terraform apply cluster.plan # apply the creation
```
Terraform will now create the OpenShift cluster with all the necessary configurations. This process may take approximately 20-30 minutes for each component.
Reference files
Depending on the installation path you have chosen, you can find the reference files used on this page:
- Standard installation: Reference Files
2. Preparation for Camunda 8 installation
Access the created OpenShift cluster
You can access the created OpenShift cluster using the following steps:
Set up the required environment variables:
```bash
export CLUSTER_NAME="$(terraform console <<<local.rosa_cluster_name | jq -r)"
export CLUSTER_API_URL="$(terraform output -raw openshift_api_url)"
export CLUSTER_ADMIN_USERNAME="$(terraform console <<<local.rosa_admin_username | jq -r)"
export CLUSTER_ADMIN_PASSWORD="$(terraform console <<<local.rosa_admin_password | jq -r)"
```
Optionally, grant cluster administrator access to the created user. This is not required for a standard installation, but it can be useful for debugging:

```bash
rosa grant user cluster-admin --cluster="$CLUSTER_NAME" --user="$CLUSTER_ADMIN_USERNAME"
```
Log in to the OpenShift cluster:
```bash
oc login -u "$CLUSTER_ADMIN_USERNAME" "$CLUSTER_API_URL" -p "$CLUSTER_ADMIN_PASSWORD"
```
Clean up and configure the kubeconfig context:
```bash
oc config rename-context "$(oc config current-context)" "$CLUSTER_NAME"
oc config use-context "$CLUSTER_NAME"
```
Verify your connection to the cluster with `oc`:

```bash
oc get nodes
```
Create a project for Camunda using `oc`:

```bash
oc new-project camunda
```
Throughout the rest of this guide, the `camunda` namespace, which is part of the `camunda` project, will be used to create the required resources in the Kubernetes cluster, such as secrets or one-time setup jobs.
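For example, a later setup step might create a secret scoped to this namespace along these lines (the secret name and value are placeholders):

```bash
oc create secret generic <secret-name> --from-literal=password='<value>' --namespace camunda
```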
3. Install Camunda 8 using the Helm chart
Now that you've exported the necessary values, you can proceed with installing Camunda 8 using Helm charts. Follow the guide Camunda 8 on OpenShift for detailed instructions on deploying the platform to your OpenShift cluster.