Deploying Hazelcast Cluster on Cloud using Terraform


This guide will get you started with using Terraform to deploy Hazelcast clusters in the cloud.

You can see the whole project here.

What You’ll Learn

You will learn how to deploy a Hazelcast cluster and Hazelcast Management Center on AWS, Azure, and GCP using Terraform. The Terraform files define the necessary resources; all you need to do is set your credentials so that Terraform has permission to create resources on your behalf. After you run Terraform and create a cluster in the cloud, you will be able to monitor it using Hazelcast Management Center. You can modify the Terraform files to create new resources or destroy the whole cluster.

Prerequisites

  • Terraform v0.13+

  • Access to one of AWS, Azure or GCP. The account must have permissions to create resources.

Giving Access to Terraform

Cloud providers offer different ways of authenticating Terraform to create resources. Below you can see some of them.

  • AWS

  • Azure

  • GCP

You can set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; Terraform will use them to create resources. Run the following commands, substituting your own credentials:

$ export AWS_ACCESS_KEY_ID="YOUR-ACCESS-KEY-ID"
$ export AWS_SECRET_ACCESS_KEY="YOUR-SECRET-ACCESS-KEY"

You can find other ways of providing credentials at Terraform AWS authentication.

The account you use in Terraform should have the following permissions assigned to it:


The iam permissions are necessary to create and delete a role using Terraform in aws/. The created role, aws_iam_role.discovery_role, will be used by Hazelcast instances to find each other.

If you are using a user account, you can log in with the Azure CLI. Run the following command to authenticate; Terraform will then be able to detect your account.

$ az login

If you have multiple subscriptions or tenants, you can choose one by adding the following lines to azure/

provider "azurerm" {
  version = "=2.23.0"
  features {}

  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "11111111-1111-1111-1111-111111111111"
}

If you want to authenticate through service principals, please refer to Authenticating to Azure.

The account or service principal you use should have the role Owner assigned to it.

You can use service accounts to authenticate Terraform. Get a service account key file; you can create one in the Google Cloud Console. After you have created a key file, put its path in gcp/ as follows:

provider "google" {
  version = "3.5.0"

  credentials = file("KEY-FILE-PATH/YOUR-KEY-FILE.json")
  batching {
    enable_batching = "false"
  }
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

The service account you use should have the following roles assigned to it:

  • Compute Admin

  • Role Administrator

  • Security Admin

  • Service Account Admin

  • Service Account User

Configuring Terraform for Connection

Now that Terraform has access to your credentials, we need to supply some variables to configure Terraform so that resources can be created correctly.

Terraform will need a public-private key pair to be able to provision files and execute commands on the remote machines. For this purpose, you can use one of your existing key pairs or create a new one with the following command:

$ ssh-keygen -f ~/.ssh/YOUR-KEY-NAME -t rsa

This command will create two key files: YOUR-KEY-NAME.pub and YOUR-KEY-NAME, the public and private keys respectively. Terraform will use them to access the VMs.
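To see what ssh-keygen produces, here is a small sketch that generates a pair in a temporary directory and lists both halves (the paths below are illustrative; for this guide use ~/.ssh/YOUR-KEY-NAME as in the command above):

```shell
# Generate a throwaway RSA key pair with an empty passphrase (-N "")
# in a temporary directory, then confirm both files exist.
KEY_DIR=$(mktemp -d)
ssh-keygen -q -f "$KEY_DIR/demo-key" -t rsa -N ""
ls "$KEY_DIR"
```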

  • AWS

  • Azure

  • GCP

In aws/terraform.tfvars, you need to provide values for two variables.

  • aws_key_name: This is the name of the public-private key pair we created earlier.

  • local_key_path: This is the path we created the key pair at.
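For example, aws/terraform.tfvars might look like the following sketch; both values are placeholders for the key pair you created, and local_key_path should point to the directory where the key files live:

```hcl
aws_key_name   = "YOUR-KEY-NAME"
local_key_path = "~/.ssh"
```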

In this guide we use an Ubuntu image, and AWS creates a user named ubuntu by default, so if you want to connect to your VMs via SSH you will have to use the ubuntu user. If you want to use another Linux distribution, please refer to AWS EC2 Managing users and change the variable aws_ssh_user accordingly.
The configuration defined in aws/ assumes you have a default VPC in the region defined in var.region. AWS creates a default VPC for each activated region, so if you didn’t delete it you can skip this note. Otherwise, please refer to AWS Creating Default VPC.

In azure/terraform.tfvars, you need to provide values for two variables.

  • azure_key_name: This is the name of the public-private key pair we created earlier.

  • local_key_path: This is the path we created the key pair at.

In gcp/terraform.tfvars, you need to provide values for three variables.

  • gcp_key_name: This is the name of the public-private key pair we created earlier.

  • local_key_path: This is the path we created the key pair at.

  • project_id: This is the id of the project you will use.
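For instance, gcp/terraform.tfvars could look like this sketch; all three values are placeholders to replace with your own key name, key directory, and GCP project id:

```hcl
gcp_key_name   = "YOUR-KEY-NAME"
local_key_path = "~/.ssh"
project_id     = "your-gcp-project-id"
```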

Deploying the Cluster

After you have authenticated your preferred cloud provider and provided necessary variables, cd into the directory of that provider.

If you are using a paid subscription, you may be charged for the resources created in this guide. However, you can complete the guide using the free tiers offered by AWS, Azure, and GCP.

Initialize Terraform.

$ terraform init

Run the following to create an execution plan. This command will not create any resources but only show what actions Terraform will perform to reach the desired state defined in Terraform files.

$ terraform plan

Apply your Terraform configuration. It should take a couple of minutes.

$ terraform apply

After the resources are created, the output should be similar to the following (the addresses shown are placeholders for the IPs of your own deployment):

mancenter_public_ip = <MANCENTER-IP>
members_public_ip = [
  <MEMBER-1-IP>,
  <MEMBER-2-IP>,
]

Now you have deployed two Hazelcast cluster members and a Hazelcast Management Center. You can monitor the state of your cluster from the Management Center address shown in the output.
You can change the input variables by updating terraform.tfvars. After your changes, run terraform apply to reach the new desired state. You can use SSH to examine the VMs, using the IPs provided in the output of terraform apply. If you cannot find the outputs, run terraform show to see the current state of your configuration.

When you are done with the guide, run the following to delete all the resources created.

$ terraform destroy


In this guide, we used Terraform to create a Hazelcast cluster in the cloud. We defined the state we wanted in Terraform configuration files, and Terraform applied that desired state on our cloud provider. Then we used Hazelcast Management Center to monitor the state of our cluster. We changed the desired state by updating the terraform.tfvars file, and Terraform applied our changes when we ran terraform apply.