
Cannot Create Valid ConfigMapList #2400

Closed
squat opened this issue Jan 12, 2024 · 6 comments

@squat

squat commented Jan 12, 2024

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.5.7
Kubernetes provider version: v2.25.2
Kubernetes version: v1.27.3

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

terraform {
  required_providers {
    http = {
      source  = "hashicorp/http"
      version = "3.4.1"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.25.2"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

data "http" "configmap_list" {
  url = "https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/5fcbcc9198075da5a60b09c553e709c45f1a8c09/manifests/grafana-dashboardDefinitions.yaml"
}

resource "kubernetes_manifest" "configmap_list" {
  manifest = yamldecode(data.http.configmap_list.response_body)
}

Debug Output

https://gist.github.com/squat/f7159c4591bd1e5d04b2957109718b2b

Steps to Reproduce

  1. kind create cluster
  2. kubectl create namespace monitoring
  3. terraform init
  4. terraform plan

Expected Behavior

What should have happened?

I'm attempting to deploy a resource from the Kube-Prometheus project. The plan should have succeeded; in fact, when vetting this manifest with Kubeconform, everything is reported as OK. Indeed, applying the manifest manually with kubectl apply -f ... works as expected.

Actual Behavior

What actually happened?

The plan fails with the error:

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Attribute key missing from "manifest" value
│
│   with kubernetes_manifest.configmap_list,
│   on main.tf line 22, in resource "kubernetes_manifest" "configmap_list":
│   22: resource "kubernetes_manifest" "configmap_list" {
│
│ 'metadata' attribute key is missing from manifest configuration

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@squat added the bug label Jan 12, 2024
@arybolovlev added the question label and removed the bug label Jan 17, 2024
@arybolovlev
Contributor

Hi @squat,

*List is a collection, and the Kubernetes API does not have a corresponding kind. We do not support *List resources and have no plans to add support for them at this moment.

You can read more here.

A possible workaround here could be to use built-in Terraform functions to split a multi-document YAML file into separate documents and apply them individually.
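
For example, a rough sketch along these lines (untested; the local and resource names are just illustrative, and the naive split on "---" assumes no document embeds that token inside a string):

locals {
  # Split the raw body on document separators and decode each non-empty document.
  documents = [
    for doc in split("---", data.http.configmap_list.response_body) :
    yamldecode(doc) if trimspace(doc) != ""
  ]
}

resource "kubernetes_manifest" "documents" {
  count    = length(local.documents)
  manifest = local.documents[count.index]
}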

I hope that helps.

@squat
Author

squat commented Jan 18, 2024

Hmm that's too bad. This limitation in the provider makes it difficult to interoperate with tooling that is standard and canonical in the ecosystem. After all, *List is a Kubernetes-native concept that all Kubernetes tooling understands, e.g. kubectl.

A possible workaround here could be to use built-in Terraform functions to split a multi-document YAML file into separate documents and apply them individually.

A problem with that is that this document is not a multi-document YAML file; it's a single YAML document.

@alexsomesan
Member

alexsomesan commented Mar 5, 2024

Hey @squat! Long time no see, man!

Sorry for not pitching in earlier, somehow this issue evaded me until now.

Supporting *List isn't a priority because they really aren't first-class resources of the API, but rather output containers (others think so too). What breaks the deal for us supporting them is that they are actually missing the ObjectMeta part of their OpenAPI schema, which is also what you're seeing reported in the error (missing attribute "metadata").

Tools that handle them (like kubectl) do so with bespoke client-side logic. The bespoke part is what we're trying really hard to avoid in kubernetes_manifest since literally every other Kubernetes kind is handled through the same code path.

The fact that they reinforce an anti-pattern by not allowing a 1:1 mapping of Kubernetes resources to Terraform ones isn't helping either.

Fortunately, there is a trivial way around them since Terraform can already decode YAML.
Here's an example that worked for me, which is only 3 extra lines of code, mostly for legibility:

data "http" "configmap_list" {
  url = "https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/5fcbcc9198075da5a60b09c553e709c45f1a8c09/manifests/grafana-dashboardDefinitions.yaml"
}

locals {
  configmaps = yamldecode(data.http.configmap_list.response_body).items
}

resource "kubernetes_manifest" "configmap_list" {
  count    = length(local.configmaps)
  manifest = local.configmaps[count.index]
}
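
If the ordering of items in that file ever changes, count will shuffle the resource addresses; a for_each variant keyed on the ConfigMap name (an untested sketch along the same lines) avoids that:

resource "kubernetes_manifest" "configmap" {
  # Keying on metadata.name means reordering items in the source file
  # doesn't force resources to be destroyed and recreated.
  for_each = { for cm in local.configmaps : cm.metadata.name => cm }
  manifest = each.value
}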

Let us know whether that's a good way forward for you.

Nice to hear from you again!

@squat
Author

squat commented Mar 5, 2024

Hi @alexsomesan ❤️
I read that comment a couple of times and it seems to me that it's actually based on a misreading of the Kubernetes documentation. Using kind: List is discouraged, not kind: *List. Indeed, there is no List kind defined in the Kubernetes API, but there are plenty of *List kinds. Specifically, this note caught my attention:

Keep in mind that the Kubernetes API does not have a kind named List.
kind: List is a client-side, internal implementation detail for processing collections that might be of different kinds of object. Avoid depending on kind: List in automation or other code.

This makes me think that depending on kind: *List in automation or code should be fair game.

Thanks for the code sample, Alex. It's definitely solvable with yamldecode. I had wanted to avoid making a special case for this in my Terraform, since in reality this file is just one of many files in a directory on disk and the Terraform config iterates over all of the files. I can definitely add an optional check to see if the file has an items field; it's just a bit more annoying.
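
For what it's worth, the directory-scanning version with that optional items check could look roughly like this (untested; the manifests/ path and names are placeholders):

locals {
  manifest_files = fileset(path.module, "manifests/*.yaml")

  decoded = [
    for f in local.manifest_files : yamldecode(file("${path.module}/${f}"))
  ]

  # Unpack any *List container into its items; plain documents fall through as single-item lists.
  manifests = flatten([
    for doc in local.decoded : try(doc.items, [doc])
  ])
}

resource "kubernetes_manifest" "all" {
  count    = length(local.manifests)
  manifest = local.manifests[count.index]
}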

@alexsomesan
Member

@squat I actually did a little experiment which I hope will demonstrate that *List kinds are not real API resources.

What I did is write a quick Go snippet that takes that exact YAML file you shared, passes it through API machinery to decode it, and then uses the client-go dynamic API client (same as kubernetes_manifest) to make a server-side apply call with the payload from the YAML file. The API returns an error saying it can't find the resource (404). The only change I made to the YAML payload was to add the missing ObjectMeta attribute to give it a better chance of being accepted by the API.

So even if we wanted to support them, the API will not accept the *List payloads directly. This means that clients need to do the unpacking work before actually passing the individual resources from items one by one to the API. At that point, it's the equivalent of the example TF code I shared yesterday.

To drive that point home, here's a log of kubectl handling the same file. It does exactly the same thing, breaking it down into individual resources on the client side and POSTing them to the API one by one.

If I may be honest here, I think using these *List containers in automation is more trouble than it's worth. I can't see any benefit in terms of UX, as they obfuscate the actual resources and make it harder for editors to comprehend them and for tools to process them, even in batch (extraction of the individual resources is always eventually required).

My personal plea is, let's just not (ab)use them. YAML is bad enough as it is.

@iBrandyJackson closed this as not planned Mar 28, 2024
@klaus993

The workaround I found for this is to use the kubectl provider instead of this one.
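
For anyone landing here later, that would look roughly like this (a sketch only; it assumes the commonly used gavinbunney/kubectl provider, and the resource name here is just illustrative):

terraform {
  required_providers {
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}

resource "kubectl_manifest" "configmap_list" {
  # Pass the raw YAML body through to the provider instead of decoding it in Terraform.
  yaml_body = data.http.configmap_list.response_body
}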
