Terraform modules to deploy a HashiCorp Nomad cluster on AWS using an Auto Scaling Group (ASG). The modules provision Nomad servers and clients in ASGs, making it easy to manage the infrastructure for a Nomad cluster. Additionally, the repository includes Packer scripts to build a custom Amazon Machine Image (AMI) with Nomad pre-installed.
The repository includes a Packer file to build a custom Amazon Machine Image (AMI) with Nomad and Docker pre-installed. This AMI is used by the Terraform modules when creating the ASG instances.
To build the AMI, run:
```shell
cd packer
make build
```
NOTE: `dry_run` mode is set to `true` by default. To build the AMI, set the `dry_run` variable in the `Makefile` to `false`.
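For orientation, a Packer template along these lines produces such an AMI. This is only a minimal sketch, not the template shipped in `packer/`; the base image, region, and install steps here are assumptions.

```hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 1.2.0"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

# Base image and build instance; all values here are illustrative.
source "amazon-ebs" "nomad" {
  ami_name      = "nomad-docker-{{timestamp}}"
  instance_type = "t3.small"
  region        = "ap-south-1"
  ssh_username  = "ubuntu"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] # Canonical
  }
}

build {
  sources = ["source.amazon-ebs.nomad"]

  # Install Docker and Nomad (via the HashiCorp apt repository).
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y docker.io curl gpg",
      "curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp.gpg",
      "echo \"deb [signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main\" | sudo tee /etc/apt/sources.list.d/hashicorp.list",
      "sudo apt-get update",
      "sudo apt-get install -y nomad",
    ]
  }
}
```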
The key resources provisioned by this module are:
- Auto Scaling Group (ASG)
- Security Group
- IAM Role
- Application Load Balancer (ALB) (optional)
The module deploys Nomad on top of an Auto Scaling Group (ASG). For optimal performance and fault tolerance, it is recommended to run the Nomad server ASG with 3 or 5 EC2 instances distributed across multiple Availability Zones. Each EC2 instance should use an AMI built with the provided Packer script.
NOTE: The Nomad client Terraform module also allows setting up standalone EC2 instances instead of ASGs. Check out the `nomad_clients` Terraform module reference for more information.
Each EC2 instance within the ASG is assigned a Security Group that permits:
- All outbound requests
- All inbound ports specified in the Nomad documentation
The common Security Group is attached to both client and server nodes, enabling the Nomad agent to communicate and discover other agents within the cluster. The Security Group ID is exposed as an output variable for adding additional rules as needed. Furthermore, you can provide your own list of security groups as a variable to the module.
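For example, an extra rule can be attached to the exposed Security Group ID like this. The output name below is a placeholder; check the module's outputs for the exact name.

```hcl
# Allow the Nomad HTTP API (4646) from an additional internal CIDR.
# "nomad_server_sg" is an illustrative output name, not the module's actual output.
resource "aws_security_group_rule" "nomad_http_extra" {
  type              = "ingress"
  from_port         = 4646
  to_port           = 4646
  protocol          = "tcp"
  cidr_blocks       = ["10.10.0.0/16"]
  security_group_id = module.nomad_servers.nomad_server_sg
}
```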
An IAM Role is attached to each EC2 instance within the ASG. This role is granted a minimal set of IAM permissions, allowing each instance to automatically discover other instances in the same ASG and form a cluster with them.
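The exact policy lives in the module, but Nomad's AWS cloud auto-join broadly relies on permissions along these lines (a sketch, not the module's actual policy):

```hcl
# Roughly the discovery permissions needed to find peer instances by tag.
data "aws_iam_policy_document" "nomad_discovery" {
  statement {
    effect = "Allow"
    actions = [
      "ec2:DescribeInstances",
      "autoscaling:DescribeAutoScalingGroups",
    ]
    resources = ["*"]
  }
}
```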
An internal Application Load Balancer (ALB) is optionally created for the Nomad servers. The ALB is configured to listen on port 80/443 and forward requests to the Nomad servers on port 4646. The ALB is exposed as an output variable for adding additional rules as needed.
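If you set `nomad_alb_hostname`, you will typically also want a DNS record pointing at the ALB. A sketch, assuming the module exports the ALB's DNS name and zone ID (the output names below are illustrative):

```hcl
# Alias the internal hostname to the Nomad ALB.
resource "aws_route53_record" "nomad" {
  zone_id = aws_route53_zone.internal.zone_id # your private hosted zone
  name    = "nomad.example.internal"
  type    = "A"

  alias {
    name                   = module.nomad_servers.alb_dns_name # placeholder output name
    zone_id                = module.nomad_servers.alb_zone_id  # placeholder output name
    evaluate_target_health = true
  }
}
```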
The `setup_server` script included in this project configures and bootstraps Nomad server nodes in an AWS Auto Scaling Group. The script performs the following steps:
- Configures the Nomad agent as a server on the EC2 instances and uses the `nomad_join_tag_value` tag to auto-join the cluster, as sketched below. Once all the server instances discover each other, they elect a leader.
- Bootstraps the Nomad ACL system with a pre-configured token on the first server.
  - It waits for the cluster leader to be elected before bootstrapping the ACL system.
  - The token must be passed as the `nomad_acl_bootstrap_token` variable.
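The resulting server agent configuration looks roughly like this (a simplified sketch; the actual tag key and file layout are determined by the script):

```hcl
# Simplified view of what setup_server renders; the tag key/value are illustrative.
server {
  enabled          = true
  bootstrap_expect = 3

  server_join {
    # Cloud auto-join: discover peers by EC2 tag.
    retry_join = ["provider=aws tag_key=nomad_ec2_join tag_value=demo"]
  }
}

acl {
  enabled = true
}
```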
Check out the `nomad_servers` documentation for the module reference.
The `setup_client` script included in this project configures Nomad client nodes in an AWS Auto Scaling Group. The script performs the following steps:
- Configures the Nomad agent as a client on the EC2 instances and uses the `nomad_join_tag_value` tag to auto-join the cluster, as sketched below.
- Configures DNS resolution for the Nomad cluster inside the `exec` driver.
- Prepares configurations for different task drivers.
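The client-side agent configuration is along these lines (a simplified sketch; tag key/value and plugin settings are illustrative):

```hcl
# Simplified view of what setup_client renders; the tag key/value are illustrative.
client {
  enabled = true

  server_join {
    # Join the same cluster by looking up the server instances' EC2 tag.
    retry_join = ["provider=aws tag_key=nomad_ec2_join tag_value=demo"]
  }
}

# Rendered when enable_docker_plugin is set; the config shown is an example.
plugin "docker" {
  config {
    allow_privileged = false
  }
}
```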
Check out the `nomad_clients` documentation for the module reference.
module "nomad_servers" {
source = "git::https://github.com/zerodha/nomad-cluster-setup//modules/nomad-servers?ref=main"
cluster_name = "demo-nomad"
nomad_join_tag_value = "demo"
instance_count = 3
ami = "ami-xyz"
vpc = "vpc-xyz"
subnets = "subnet-xyz"
create_alb = true
nomad_alb_hostname = "nomad.example.internal"
nomad_gossip_encrypt_key = var.nomad_gossip_encrypt_key
nomad_acl_bootstrap_token = var.nomad_acl_bootstrap_token
}
module "nomad_client_demo" {
source = "git::https://github.com/zerodha/nomad-cluster-setup//modules/nomad-clients?ref=main"
cluster_name = "demo-nomad"
nomad_join_tag_value = "demo"
client_name = "example-app"
enable_docker_plugin = true
ami = "ami-abc"
instance_type = "c6a.xlarge"
instance_desired_count = 10
vpc = "vpc-xyz"
subnets = "subnet-xyz"
route_53_resolver_address = "10.0.0.2"
}
NOTE: This module does not set up an ALB for accessing applications running on the Nomad clients; this is left to the user to configure. Check out terraform-aws-alb or Other Examples for more information. You may also need to set `target_group_arns` if Auto Scaling Groups are used, as sketched below.
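For instance, if you front the clients with your own ALB, its target group could be attached via the module like this (a sketch; assumes a target group you manage elsewhere):

```hcl
# Register the client ASG with an externally managed ALB target group.
module "nomad_client_demo" {
  # ... arguments as in the example above ...
  target_group_arns = [aws_lb_target_group.demo_app.arn]
}
```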
Contributions to this repository are welcome. Please submit a pull request or open an issue to suggest improvements or report bugs.