
Add support for GCP GKE DNS-based endpoint to connect to GKE cluster #2637

Open
SehiiRohoza opened this issue Nov 27, 2024 · 11 comments

@SehiiRohoza

Description

Every GKE cluster has a control plane that handles Kubernetes API requests. It would be handy if the Terraform Kubernetes provider could use the GKE DNS-based endpoint to connect to the cluster's control plane and deploy k8s resources.

According to the official documentation, the DNS-based endpoint:

The DNS-based endpoint gives a unique DNS or fully qualified domain name (FQDN) for each cluster control plane. This DNS name can be used to access your control plane. The DNS name resolves to an endpoint that is accessible from any network reachable by Google Cloud APIs, including on-premises or other cloud networks. Enabling the DNS-based endpoint eliminates the need for a bastion host or proxy nodes to access the control plane from other VPC networks or external locations.

You can find more benefits here.

This feature will soon be added to the Terraform Google provider (see references).

Potential Terraform Configuration

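A rough sketch of the desired setup, assuming the google provider ends up exposing the DNS-based endpoint on the cluster resource (the attribute path below is a guess at the eventual schema, not a confirmed interface):

data "google_client_config" "default" {}

resource "google_container_cluster" "example" {
  name     = "example-cluster" # hypothetical cluster name
  location = "us-central1"
  # ... remaining cluster configuration, including enabling the DNS-based endpoint ...
}

provider "kubernetes" {
  # Assumed attribute path for the cluster's DNS-based endpoint.
  host  = "https://${google_container_cluster.example.control_plane_endpoints_config[0].dns_endpoint_config[0].endpoint}"
  token = data.google_client_config.default.access_token
}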

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@fralvarop

I knew this was going to be in here already. We have a VPN, so we can use kubectl to access the production cluster. I recently set up the DNS-based endpoint and found out the provider won't work without the VPN; my workaround right now is the following.

provider "kubernetes" {
  config_path = "~/.kube/config"
}

@appilon appilon removed their assignment Nov 29, 2024
@alexsomesan
Member

I'm not sure what exactly is the ask here. The host attribute in the kubeconfig file already accepts URLs and thus DNS FQDNs of the API server endpoints. How is this case different?

@SehiiRohoza
Author

SehiiRohoza commented Dec 3, 2024

@alexsomesan Please be so kind as to provide an example of Terraform code that shows how to get data from an existing GKE cluster and how to use the DNS endpoint (from the obtained data) to deploy k8s resources. I was not able to find any documentation that clarifies this in detail. As a result, I assume that this feature still needs to be implemented.

@alexsomesan
Member

There's a bit too little information to go on here. Before I can put together an example we need to clarify some aspects.

  • Do you have both external and internal endpoints enabled or just internal?
  • Have you tried resolving the DNS name of the endpoint from the same machine where you are running Terraform? Is it resolving to a public or private IP address?

@SehiiRohoza
Author

There's a bit too little information to go on here. Before I can put together an example we need to clarify some aspects.

  • Do you have both external and internal endpoints enabled or just internal?
  • Have you tried resolving the DNS name of the endpoint from the same machine where you are running Terraform? Is it resolving to a public or private IP address?

I'd like to disable both internal and external IP endpoints and fully switch to the DNS-based endpoint.
The DNS endpoint resolves to a completely different external IP from my laptop (so the IP endpoint and the DNS endpoint are not connected).

@macninjaface

macninjaface commented Dec 4, 2024

Example of the error when attempting to create a new namespace in a GKE cluster with the DNS endpoint enabled...

Provider config:

data "google_client_config" "provider" {}

data "google_container_cluster" "my_cluster" {
  name     = "my-test-cluster"
  location = "us-east1"
  project  = "my-gcp-project-id"
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(
    data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate,
  )

  exec {
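    # Requires the gke-gcloud-auth-plugin binary to be installed and on the PATH.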
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "gke-gcloud-auth-plugin"
  }
}

Error:
Error: Post "https://redacted-gke-dns-endpoint-hostname.us-east1.gke.goog/api/v1/namespaces": tls: failed to verify certificate: x509: certificate signed by unknown authority

When using ~/.kube/config this error does not occur.

Versions:
Terraform: 1.5.7
Kubernetes provider: 2.34.0
Google provider: 4.85.0
GKE version: 1.30.5-gke.1014003
Internal and external IP endpoints are disabled.

GKE DNS Endpoint is a new feature released in Nov. 2024 that allows access to the GKE control plane using a DNS hostname instead of an IP address.

@lsiqueira

@macninjaface Can you try without the cluster_ca_certificate?

@danistrebel

danistrebel commented Dec 11, 2024

FWIW this worked perfectly fine using the DNS endpoint. No need for token or CA cert.

provider "kubernetes" {
  host = "https://${google_container_cluster.default.control_plane_endpoints_config[0].dns_endpoint_config[0].endpoint}"
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = []
    command     = "gke-gcloud-auth-plugin"
  }
}

@macninjaface

@macninjaface Can you try without the cluster_ca_certificate?

Confirming this works! Thanks!

@echaouchna

echaouchna commented Dec 13, 2024

FWIW this worked perfectly fine using the DNS endpoint. No need for token or CA cert.

provider "kubernetes" {
  host = "https://${google_container_cluster.default.control_plane_endpoints_config[0].dns_endpoint_config[0].endpoint}"
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = []
    command     = "gke-gcloud-auth-plugin"
  }
}

Yes, this works; the only problem with this solution is that gke-gcloud-auth-plugin has to be available on the PATH.

In my case the following also seems to work, which is more convenient for me, as it doesn't need any additional binary, especially in the context of CI/CD where I'm using the official Terraform Docker image:

data "google_client_config" "default" {}

provider "kubernetes" {
  host  = "https://${module.gke.dns_endpoint}"
  token = data.google_client_config.default.access_token
}
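
For an existing cluster that is not managed in the same configuration, the same pattern should presumably work via the data source; the attribute path below mirrors the resource schema shown above and assumes a google provider version that already exposes control_plane_endpoints_config (names, location, and project are placeholders):

data "google_client_config" "default" {}

data "google_container_cluster" "existing" {
  name     = "my-test-cluster"
  location = "us-east1"
  project  = "my-gcp-project-id"
}

provider "kubernetes" {
  host  = "https://${data.google_container_cluster.existing.control_plane_endpoints_config[0].dns_endpoint_config[0].endpoint}"
  token = data.google_client_config.default.access_token
}

Note that the access token returned by google_client_config is short-lived (typically about an hour), so it is fetched fresh on each plan/apply rather than persisted.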

@TheKangaroo

One additional note I would like to add, since I stumbled upon this myself: the provider configuration works fine without the protocol if you specify the cluster_ca_certificate like this:

provider "kubernetes" {
  host  = module.gke.dns_endpoint
  cluster_ca_certificate = ...
  token = data.google_client_config.current.access_token
}

However, if you omit cluster_ca_certificate, you need to add the protocol to the host:

provider "kubernetes" {
  host  = "https://${module.gke.dns_endpoint}"
  token = data.google_client_config.current.access_token
}
