Terraform module to deploy Kubernetes with RKE2 on OpenStack.
Unlike its RKE counterpart, this module is not opinionated and lets you configure everything via the RKE2 configuration file.
Prerequisites:

- Terraform 0.13+
- An OpenStack environment properly sourced
- An OpenStack image fulfilling the RKE2 requirements and featuring curl
- At least one OpenStack floating IP

Features:

- HA control plane
- Multiple agent node pools
- Upgrade mechanism
See the examples directory.
See USAGE.md for all available options.
You can either specify an SSH key file to generate a new keypair via `ssh_key_file` (default), or use an already existing keypair via `ssh_keypair_name`.
> **Warning**
> The default config will try to use the SSH agent for SSH connections to the nodes. Set `use_ssh_agent = false` if you don't use one.
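A minimal sketch of these options (the key path and keypair name below are hypothetical placeholders):

```hcl
ssh_key_file     = "~/.ssh/id_rsa"          # hypothetical path: generates a new keypair from this key
# ssh_keypair_name = "my-existing-keypair"  # alternative: hypothetical name of an existing keypair
use_ssh_agent    = false                    # set when no SSH agent is running
```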
You can define your own security group rules (e.g. limiting ports 22 and 6443 to an admin box):
```hcl
secgroup_rules = [
  { "source" = "x.x.x.x", "protocol" = "tcp", "port" = 22 },
  { "source" = "x.x.x.x", "protocol" = "tcp", "port" = 6443 },
  { "source" = "0.0.0.0/0", "protocol" = "tcp", "port" = 80 },
  { "source" = "0.0.0.0/0", "protocol" = "tcp", "port" = 443 }
]
```
You can set the affinity policy for the control plane and each node pool via `server_group_affinity`. Default is `soft-anti-affinity`.
> **Warning**
> `soft-anti-affinity` and `soft-affinity` need Compute service API 2.15 or above.
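For instance, to enforce a strict policy instead of the default (a sketch; `anti-affinity` is one of the standard OpenStack server group policies):

```hcl
server_group_affinity = "anti-affinity"
```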
Some providers require booting the instances from an attached boot volume instead of the Nova ephemeral volume. To enable this feature, add the following variables to the config file. You can use different values for server and agent nodes.
```hcl
boot_from_volume = true
boot_volume_size = 20
boot_volume_type = "rbd-1"
```
You can specify the RKE2 version with the `rke2_version` variable. Refer to the RKE2 supported versions.
Upgrade by setting the target version via `rke2_version` and setting `do_upgrade = true`. The nodes will be upgraded one by one, server nodes first.
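For example (a sketch; the version string below is a hypothetical placeholder, pick a real one from the RKE2 releases):

```hcl
rke2_version = "v1.27.10+rke2r1" # hypothetical version, check the RKE2 releases page
do_upgrade   = true
```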
> **Warning**
> The in-place upgrade mechanism is not battle-tested and relies on Terraform provisioners.
Set the `manifests_path` variable to point to the directory containing your manifests and HelmChart definitions (see the JupyterHub example).
If you need a templating step for your manifests, you can use `manifests_gzb64` (see the cinder-csi-plugin example).
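A minimal sketch, assuming `manifests_gzb64` takes a map of file names to base64-encoded gzipped content (as the name suggests); the file name, template path, and variables below are hypothetical:

```hcl
manifests_gzb64 = {
  # Hypothetical manifest rendered from a template, then gzipped and base64-encoded
  "cinder-csi-plugin.yaml" = base64gzip(templatefile("${path.module}/templates/cinder-csi-plugin.yaml.tpl", {
    auth_url = "https://keystone.example.org:5000/v3" # hypothetical Keystone endpoint
  }))
}
```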
> **Warning**
> Modifications made to manifests after cluster deployment won't have any effect.
Set the `additional_configs_path` variable to the directory containing your additional RKE2 server configs (see the Audit Policy example).
If you need a templating step for your config files, you can use `additional_configs_gzb64`.
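Likewise, a sketch assuming `additional_configs_gzb64` mirrors `manifests_gzb64` (a map of file names to base64-gzipped content); the file name and path are hypothetical:

```hcl
additional_configs_gzb64 = {
  "audit-policy.yaml" = base64gzip(file("${path.module}/configs/audit-policy.yaml")) # hypothetical path
}
```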
> **Warning**
> Modifications made to additional configs after cluster deployment won't have any effect.
You need to manually drain and remove nodes before downscaling a node pool.
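For instance, with standard kubectl commands (the node name below is a hypothetical placeholder):

```sh
kubectl drain my-pool-node-1 --ignore-daemonsets --delete-emptydir-data  # hypothetical node name
kubectl delete node my-pool-node-1
```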
Usage with Terraform Kubernetes Provider and Helm Provider
You can tell the module to output the kubernetes config by setting `output_kubernetes_config = true`.
> **Warning**
> Interpolating provider variables from module outputs is not the recommended way to achieve integration. See here and here. Use of a data source is recommended.
(Not recommended) You can use this module's outputs to populate the Terraform Kubernetes provider:
provider "kubernetes" {
host = module.controlplane.kubernetes_config.host
client_certificate = module.controlplane.kubernetes_config.client_certificate
client_key = module.controlplane.kubernetes_config.client_key
cluster_ca_certificate = module.controlplane.kubernetes_config.cluster_ca_certificate
}
The recommended way needs two `apply` operations, and setting the proper `terraform_remote_state` data source:
provider "kubernetes" {
host = data.terraform_remote_state.rke2.outputs.kubernetes_config.host
client_certificate = data.terraform_remote_state.rke2.outputs.kubernetes_config.client_certificate
client_key = data.terraform_remote_state.rke2.outputs.kubernetes_config.client_key
cluster_ca_certificate = data.terraform_remote_state.rke2.outputs.kubernetes_config.cluster_ca_certificate
}
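The matching data source might look like this (a sketch assuming a local state backend; the path is a hypothetical placeholder):

```hcl
data "terraform_remote_state" "rke2" {
  backend = "local"
  config = {
    path = "../rke2/terraform.tfstate" # hypothetical path to the cluster's state file
  }
}
```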
> **Note**
> Changes to certain module arguments will intentionally not cause the recreation of instances. To provide users a better and more manageable experience, several arguments have been included in the instance's `ignore_changes` lifecycle. You must manually `taint` the instance to force the recreation of the resource:
```sh
terraform taint 'module.controlplane.module.server.openstack_compute_instance_v2.instance'
```
You can specify a proxy via the `proxy_url` variable. Private address ranges are automatically excluded; you can add more addresses via the `no_proxy` variable. You might want to add your organization's DNS domain (that of the Keystone OpenStack API endpoint).
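A sketch, assuming `no_proxy` accepts a list of additional entries (the URL and domain below are hypothetical placeholders):

```hcl
proxy_url = "http://proxy.example.org:3128" # hypothetical proxy endpoint
no_proxy  = ["example.org"]                 # hypothetical domain, e.g. that of your Keystone endpoint
```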