Metrics-Server is unable to scrape the nodes after passing TLS certificates for client and server #1613
Labels: kind/support, needs-triage
What happened:
I configured metrics-server with the following flags for scraping the kubelet:
--kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
--kubelet-client-certificate=/etc/kubernetes/pki/metrics-server/metrics-client.crt
--kubelet-client-key=/etc/kubernetes/pki/metrics-server/metrics-client.key
I also generated certificates for the server, signed them with /etc/kubernetes/pki/front-proxy-ca.crt, and specified them with the following flags:
--client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-allowed-names=""
--tls-cert-file=/etc/kubernetes/pki/metrics-server/metrics-server.crt
--tls-private-key-file=/etc/kubernetes/pki/metrics-server/metrics-server.key
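As a side note on how I sanity-check these certificates: the issuer of a certificate tells you which CA the verifier's trust bundle must contain. A self-contained sketch (the CNs and file names here are stand-ins, not my actual files):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# Stand-in CA and leaf certificate; in the real setup these correspond to
# front-proxy-ca.crt and metrics-server.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=front-proxy-ca" -days 1
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=metrics-server"
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out leaf.crt -days 1
# The issuer printed here must match a CA in the verifier's trust bundle.
openssl x509 -in leaf.crt -noout -issuer -subject
```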
I got the following logs:
Caches populated for *v1.Node from k8s.io/[email protected]/tools/cache/reflector.go:229
I1231 03:10:40.545053 1 round_trippers.go:466] curl -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, /" -H "User-Agent: metrics-server/v0.7.2 (linux/amd64) kubernetes/0969601" -H "Authorization: Bearer " 'https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=221623&timeout=6m20s&timeoutSeconds=380&watch=true'
I1231 03:10:40.545769 1 round_trippers.go:553] GET https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=221623&timeout=6m20s&timeoutSeconds=380&watch=true 200 OK in 0 milliseconds
I1231 03:10:40.545818 1 round_trippers.go:570] HTTP Statistics: GetConnection 0 ms ServerProcessing 0 ms Duration 0 ms
I1231 03:10:40.545826 1 round_trippers.go:577] Response Headers:
I1231 03:10:40.545841 1 round_trippers.go:580] Date: Tue, 31 Dec 2024 03:10:40 GMT
I1231 03:10:40.545851 1 round_trippers.go:580] Audit-Id: b98deac8-2cc2-4edf-98ad-b1e80fcc42db
I1231 03:10:40.545855 1 round_trippers.go:580] Cache-Control: no-cache, private
I1231 03:10:40.545857 1 round_trippers.go:580] Content-Type: application/vnd.kubernetes.protobuf;stream=watch
I1231 03:10:40.545860 1 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 3f40142c-1c78-4e47-a433-5571928e4727
I1231 03:10:40.545874 1 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: eaf373d0-66ca-45d4-8a4b-16a7163527c1
I1231 03:10:40.632564 1 server.go:136] "Scraping metrics"
I1231 03:10:40.632977 1 scraper.go:121] "Scraping metrics from nodes" nodes=["controlplane01","node01"] nodeCount=2 nodeSelector=""
I1231 03:10:40.639830 1 scraper.go:143] "Scraping node" node="controlplane01"
I1231 03:10:40.640400 1 round_trippers.go:466] curl -v -XGET -H "User-Agent: metrics-server/v0.7.2 (linux/amd64) kubernetes/0969601" -H "Authorization: Bearer " 'https://controlplane01:10250/metrics/resource'
I1231 03:10:40.641464 1 round_trippers.go:495] HTTP Trace: DNS Lookup for controlplane01 resolved to [{192.168.1.32 }]
I1231 03:10:40.641691 1 round_trippers.go:510] HTTP Trace: Dial to tcp:192.168.1.32:10250 succeed
I1231 03:10:40.642450 1 healthz.go:176] Installing health checkers for (/healthz): "ping","log","poststarthook/max-in-flight-filter","poststarthook/storage-object-count-tracker-hook","metadata-informer-sync"
I1231 03:10:40.643183 1 healthz.go:176] Installing health checkers for (/livez): "ping","log","poststarthook/max-in-flight-filter","poststarthook/storage-object-count-tracker-hook","metric-collection-timely","metadata-informer-sync"
I1231 03:10:40.643751 1 healthz.go:176] Installing health checkers for (/readyz): "ping","log","poststarthook/max-in-flight-filter","poststarthook/storage-object-count-tracker-hook","metric-storage-ready","metric-informer-sync","metadata-informer-sync","shutdown"
I1231 03:10:40.644391 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
I1231 03:10:40.644728 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/pki/front-proxy-ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail=""front-proxy-ca" [] validServingFor=[front-proxy-ca] issuer="" (2024-12-26 08:17:54 +0000 UTC to 2034-12-24 08:22:54 +0000 UTC (now=2024-12-31 03:10:40.64470991 +0000 UTC))"
I1231 03:10:40.644755 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/front-proxy-ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail=""front-proxy-ca" [] validServingFor=[front-proxy-ca] issuer="" (2024-12-26 08:17:54 +0000 UTC to 2034-12-24 08:22:54 +0000 UTC (now=2024-12-31 03:10:40.644746173 +0000 UTC))"
I1231 03:10:40.644849 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/metrics-server/metrics-server.crt::/etc/kubernetes/pki/metrics-server/metrics-server.key"
I1231 03:10:40.645085 1 scraper.go:143] "Scraping node" node="node01"
I1231 03:10:40.646204 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I1231 03:10:40.646861 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/front-proxy-ca.crt"
I1231 03:10:40.647047 1 round_trippers.go:553] GET https://controlplane01:10250/metrics/resource in 6 milliseconds
I1231 03:10:40.647056 1 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 5 ms Duration 6 ms
I1231 03:10:40.647057 1 round_trippers.go:466] curl -v -XGET -H "User-Agent: metrics-server/v0.7.2 (linux/amd64) kubernetes/0969601" -H "Authorization: Bearer " 'https://node01:10250/metrics/resource'
I1231 03:10:40.647061 1 round_trippers.go:577] Response Headers:
E1231 03:10:40.647091 1 scraper.go:149] "Failed to scrape node" err="Get "https://controlplane01:10250/metrics/resource\": tls: failed to verify certificate: x509: certificate signed by unknown authority" node="controlplane01"
I found that the metrics-server was using the serving certificates when scraping the nodes, which is not correct, so I changed the flags to the following:
--client-ca-file=/etc/kubernetes/pki/ca.crt
--requestheader-client-ca-file=/etc/kubernetes/pki/ca.crt
--requestheader-allowed-names=""
--tls-cert-file=/etc/kubernetes/pki/metrics-server/metrics-client.crt
--tls-private-key-file=/etc/kubernetes/pki/metrics-server/metrics-client.key
and I got these logs:
"Scraping metrics"
I1231 03:41:52.077166 1 scraper.go:121] "Scraping metrics from nodes" nodes=["controlplane01","node01"] nodeCount=2 nodeSelector=""
I1231 03:41:52.079977 1 healthz.go:176] Installing health checkers for (/healthz): "ping","log","poststarthook/max-in-flight-filter","poststarthook/storage-object-count-tracker-hook","metadata-informer-sync"
I1231 03:41:52.080545 1 healthz.go:176] Installing health checkers for (/livez): "ping","log","poststarthook/max-in-flight-filter","poststarthook/storage-object-count-tracker-hook","metric-collection-timely","metadata-informer-sync"
I1231 03:41:52.081124 1 healthz.go:176] Installing health checkers for (/readyz): "ping","log","poststarthook/max-in-flight-filter","poststarthook/storage-object-count-tracker-hook","metric-storage-ready","metric-informer-sync","metadata-informer-sync","shutdown"
I1231 03:41:52.082086 1 genericapiserver.go:523] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete
I1231 03:41:52.082463 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/ca.crt"
I1231 03:41:52.082723 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I1231 03:41:52.082855 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/metrics-server/metrics-client.crt::/etc/kubernetes/pki/metrics-server/metrics-client.key"
I1231 03:41:52.082836 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/ca.crt" certDetail=""kubernetes" [] validServingFor=[kubernetes] issuer="" (2024-12-26 08:17:54 +0000 UTC to 2034-12-24 08:22:54 +0000 UTC (now=2024-12-31 03:41:52.082732754 +0000 UTC))"
I1231 03:41:52.083113 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/ca.crt" certDetail=""kubernetes" [] validServingFor=[kubernetes] issuer="" (2024-12-26 08:17:54 +0000 UTC to 2034-12-24 08:22:54 +0000 UTC (now=2024-12-31 03:41:52.083080826 +0000 UTC))"
I1231 03:41:52.083252 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/etc/kubernetes/pki/metrics-server/metrics-client.crt::/etc/kubernetes/pki/metrics-server/metrics-client.key" certDetail=""system:serviceaccount:kube-system:metrics-server" [client] issuer="kubernetes" (2024-12-30 16:02:42 +0000 UTC to 2025-12-30 16:02:42 +0000 UTC (now=2024-12-31 03:41:52.083240668 +0000 UTC))"
I1231 03:41:52.083472 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail=""apiserver-loopback-client@1735616511" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1735616511" (2024-12-31 02:41:51 +0000 UTC to 2025-12-31 02:41:51 +0000 UTC (now=2024-12-31 03:41:52.083459541 +0000 UTC))"
I1231 03:41:52.083595 1 secure_serving.go:213] Serving securely on [::]:10250
I1231 03:41:52.083632 1 genericapiserver.go:671] [graceful-termination] waiting for shutdown to be initiated
I1231 03:41:52.083713 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1231 03:41:52.094363 1 scraper.go:143] "Scraping node" node="controlplane01"
I1231 03:41:52.095660 1 scraper.go:143] "Scraping node" node="node01"
I1231 03:41:52.095882 1 round_trippers.go:466] curl -v -XGET -H "User-Agent: metrics-server/v0.7.2 (linux/amd64) kubernetes/0969601" -H "Authorization: Bearer " 'https://node01:10250/metrics/resource'
I1231 03:41:52.096205 1 round_trippers.go:466] curl -v -XGET -H "User-Agent: metrics-server/v0.7.2 (linux/amd64) kubernetes/0969601" -H "Authorization: Bearer " 'https://controlplane01:10250/metrics/resource'
I1231 03:41:52.096934 1 round_trippers.go:495] HTTP Trace: DNS Lookup for controlplane01 resolved to [{192.168.1.32 }]
I1231 03:41:52.097218 1 round_trippers.go:495] HTTP Trace: DNS Lookup for node01 resolved to [{192.168.1.42 }]
I1231 03:41:52.097320 1 round_trippers.go:510] HTTP Trace: Dial to tcp:192.168.1.32:10250 succeed
I1231 03:41:52.098719 1 round_trippers.go:510] HTTP Trace: Dial to tcp:192.168.1.42:10250 succeed
I1231 03:41:52.102543 1 round_trippers.go:553] GET https://controlplane01:10250/metrics/resource in 6 milliseconds
I1231 03:41:52.103400 1 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 5 ms Duration 6 ms
I1231 03:41:52.103407 1 round_trippers.go:577] Response Headers:
E1231 03:41:52.103463 1 scraper.go:149] "Failed to scrape node" err="Get "https://controlplane01:10250/metrics/resource\": tls: failed to verify certificate: x509: certificate signed by unknown authority" node="controlplane01"
I noticed that the metrics-server was still not able to verify the certificates against the root CA, even though I tried forcing the args to make it work.
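The error itself is reproducible outside the cluster: a serving certificate signed by one CA will never verify against a different CA's bundle, which is exactly the `x509: certificate signed by unknown authority` message above. A self-contained demonstration (all names are stand-ins):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# Two unrelated CAs, standing in for ca.crt and front-proxy-ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca1.key -out ca1.crt \
  -subj "/CN=kubernetes" -days 1
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca2.key -out ca2.crt \
  -subj "/CN=front-proxy-ca" -days 1
# A serving cert signed by the first CA, like a kubelet serving cert.
openssl req -newkey rsa:2048 -nodes -keyout kubelet.key -out kubelet.csr \
  -subj "/CN=controlplane01"
openssl x509 -req -in kubelet.csr -CA ca1.crt -CAkey ca1.key -CAcreateserial \
  -out kubelet.crt -days 1
# Verifies against the CA that signed it...
openssl verify -CAfile ca1.crt kubelet.crt
# ...but fails against the other CA, mirroring the scraper error.
openssl verify -CAfile ca2.crt kubelet.crt || true
```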
Scraping the controlplane01 node manually with curl was successful:
vagrant@controlplane01:/etc/kubernetes/pki/metrics-server$ sudo curl --cacert /etc/kubernetes/pki/ca.crt -k --cert /etc/kubernetes/pki/metrics-server/metrics-client.crt --key /etc/kubernetes/pki/metrics-server/metrics-client.key https://controlplane01:10250/metrics/resource
# HELP container_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-668d6bf9bc-d7nx2"} 21.954851 1735616289623
container_cpu_usage_seconds_total{container="coredns",namespace="kube-system",pod="coredns-668d6bf9bc-tgqp8"} 21.387374 1735616281897
container_cpu_usage_seconds_total{container="haproxy",namespace="kube-system",pod="haproxy-controlplane01"} 11.025043 1735616289692
container_cpu_usage_seconds_total{container="keepalived",namespace="kube-system",pod="keepalived-controlplane01"} 5.294422 1735616295051
container_cpu_usage_seconds_total{container="kube-apiserver",namespace="kube-system",pod="kube-apiserver-controlplane01"} 645.14839 1735616281158
container_cpu_usage_seconds_total{container="kube-controller-manager",namespace="kube-system",pod="kube-controller-manager-controlplane01"} 255.985614 1735616295097
container_cpu_usage_seconds_total{container="kube-proxy",namespace="kube-system",pod="kube-proxy-dggzb"} 7.476272 1735616288976
container_cpu_usage_seconds_total{container="kube-scheduler",namespace="kube-system",pod="kube-scheduler-controlplane01"} 85.313789 1735616291744
container_cpu_usage_seconds_total{container="metrics-server",namespace="kube-system",pod="metrics-server-b6c794f59-z2jvv"} 10.685636 1735616283919
container_cpu_usage_seconds_total{container="weave",namespace="kube-system",pod="weave-net-7ln6x"} 13.188094 1735616288337
container_cpu_usage_seconds_total{container="weave-npc",namespace="kube-system",pod="weave-net-7ln6x"} 4.177897 1735616295019
# HELP container_memory_working_set_bytes [STABLE] Current working set of the container in bytes
# TYPE container_memory_working_set_bytes gauge
container_memory_working_set_bytes{container="coredns",namespace="kube-system",pod="coredns-668d6bf9bc-d7nx2"} 1.4925824e+07 1735616289623
container_memory_working_set_bytes{container="coredns",namespace="kube-system",pod="coredns-668d6bf9bc-tgqp8"} 6.0858368e+07 1735616281897
container_memory_working_set_bytes{container="haproxy",namespace="kube-system",pod="haproxy-controlplane01"} 4.1508864e+07 1735616289692
container_memory_working_set_bytes{container="keepalived",namespace="kube-system",pod="keepalived-controlplane01"} 1.6830464e+07 1735616295051
container_memory_working_set_bytes{container="kube-apiserver",namespace="kube-system",pod="kube-apiserver-controlplane01"} 2.78708224e+08 1735616281158
container_memory_working_set_bytes{container="kube-controller-manager",namespace="kube-system",pod="kube-controller-manager-controlplane01"} 1.3023232e+08 1735616295097
container_memory_working_set_bytes{container="kube-proxy",namespace="kube-system",pod="kube-proxy-dggzb"} 7.22944e+07 1735616288976
container_memory_working_set_bytes{container="kube-scheduler",namespace="kube-system",pod="kube-scheduler-controlplane01"} 8.2034688e+07 1735616291744
container_memory_working_set_bytes{container="metrics-server",namespace="kube-system",pod="metrics-server-b6c794f59-z2jvv"} 1.6449536e+07 1735616283919
container_memory_working_set_bytes{container="weave",namespace="kube-system",pod="weave-net-7ln6x"} 5.361664e+07 1735616288337
container_memory_working_set_bytes{container="weave-npc",namespace="kube-system",pod="weave-net-7ln6x"} 5.1142656e+07 1735616295019
# HELP container_start_time_seconds [STABLE] Start time of the container since unix epoch in seconds
# TYPE container_start_time_seconds gauge
container_start_time_seconds{container="coredns",namespace="kube-system",pod="coredns-668d6bf9bc-d7nx2"} 1.7356099029630275e+09
container_start_time_seconds{container="coredns",namespace="kube-system",pod="coredns-668d6bf9bc-tgqp8"} 1.7356098970272162e+09
container_start_time_seconds{container="haproxy",namespace="kube-system",pod="haproxy-controlplane01"} 1.7356098287100894e+09
container_start_time_seconds{container="keepalived",namespace="kube-system",pod="keepalived-controlplane01"} 1.7356097475229456e+09
container_start_time_seconds{container="kube-apiserver",namespace="kube-system",pod="kube-apiserver-controlplane01"} 1.7356098335315273e+09
container_start_time_seconds{container="kube-controller-manager",namespace="kube-system",pod="kube-controller-manager-controlplane01"} 1.735609747470418e+09
container_start_time_seconds{container="kube-proxy",namespace="kube-system",pod="kube-proxy-dggzb"} 1.7356098606049485e+09
container_start_time_seconds{container="kube-scheduler",namespace="kube-system",pod="kube-scheduler-controlplane01"} 1.7356097474939694e+09
container_start_time_seconds{container="metrics-server",namespace="kube-system",pod="metrics-server-b6c794f59-z2jvv"} 1.7356146398680563e+09
container_start_time_seconds{container="weave",namespace="kube-system",pod="weave-net-7ln6x"} 1.735609893556628e+09
container_start_time_seconds{container="weave-npc",namespace="kube-system",pod="weave-net-7ln6x"} 1.7356098516320243e+09
# HELP node_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the node in core-seconds
# TYPE node_cpu_usage_seconds_total counter
node_cpu_usage_seconds_total 1135.72 1735616290756
# HELP node_memory_working_set_bytes [STABLE] Current working set of the node in bytes
# TYPE node_memory_working_set_bytes gauge
node_memory_working_set_bytes 1.582764032e+09 1735616290756
# HELP pod_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the pod in core-seconds
# TYPE pod_cpu_usage_seconds_total counter
pod_cpu_usage_seconds_total{namespace="kube-system",pod="coredns-668d6bf9bc-d7nx2"} 21.998036 1735616288521
pod_cpu_usage_seconds_total{namespace="kube-system",pod="coredns-668d6bf9bc-tgqp8"} 21.476306 1735616289650
pod_cpu_usage_seconds_total{namespace="kube-system",pod="haproxy-controlplane01"} 11.834439 1735616285933
pod_cpu_usage_seconds_total{namespace="kube-system",pod="keepalived-controlplane01"} 5.332261 1735616287233
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-apiserver-controlplane01"} 647.919233 1735616286993
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-controller-manager-controlplane01"} 255.962488 1735616293312
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-proxy-dggzb"} 7.520156 1735616293967
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-scheduler-controlplane01"} 85.288485 1735616285727
pod_cpu_usage_seconds_total{namespace="kube-system",pod="metrics-server-b6c794f59-z2jvv"} 10.797169 1735616289827
pod_cpu_usage_seconds_total{namespace="kube-system",pod="weave-net-7ln6x"} 17.846659 1735616291969
# HELP pod_memory_working_set_bytes [STABLE] Current working set of the pod in bytes
# TYPE pod_memory_working_set_bytes gauge
pod_memory_working_set_bytes{namespace="kube-system",pod="coredns-668d6bf9bc-d7nx2"} 1.5159296e+07 1735616288521
pod_memory_working_set_bytes{namespace="kube-system",pod="coredns-668d6bf9bc-tgqp8"} 6.1104128e+07 1735616289650
pod_memory_working_set_bytes{namespace="kube-system",pod="haproxy-controlplane01"} 5.4439936e+07 1735616285933
pod_memory_working_set_bytes{namespace="kube-system",pod="keepalived-controlplane01"} 1.7072128e+07 1735616287233
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-apiserver-controlplane01"} 3.56995072e+08 1735616286993
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-controller-manager-controlplane01"} 1.31162112e+08 1735616293312
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-proxy-dggzb"} 7.251968e+07 1735616293967
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-scheduler-controlplane01"} 8.226816e+07 1735616285727
pod_memory_working_set_bytes{namespace="kube-system",pod="metrics-server-b6c794f59-z2jvv"} 1.6707584e+07 1735616289827
pod_memory_working_set_bytes{namespace="kube-system",pod="weave-net-7ln6x"} 1.43220736e+08 1735616291969
# HELP resource_scrape_error [STABLE] 1 if there was an error while getting container metrics, 0 otherwise
# TYPE resource_scrape_error gauge
resource_scrape_error 0
Scraping node01 was also successful:
sudo curl --cacert /etc/kubernetes/pki/ca.crt -k --cert /etc/kubernetes/pki/metrics-server/metrics-client.crt --key /etc/kubernetes/pki/metrics-server/metrics-client.key https://node01:10250/metrics/resource
# HELP container_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="kube-proxy",namespace="kube-system",pod="kube-proxy-54cr2"} 9.639458 1735616267825
container_cpu_usage_seconds_total{container="kubernetes-dashboard-api",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-api-5b76d9c66d-w98xc"} 4.585459 1735616261889
container_cpu_usage_seconds_total{container="kubernetes-dashboard-auth",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-auth-57c8f974bb-vl9kr"} 2.965941 1735616267060
container_cpu_usage_seconds_total{container="kubernetes-dashboard-metrics-scraper",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-metrics-scraper-85d56b4fcd-h2v2f"} 2.240477 1735616253709
container_cpu_usage_seconds_total{container="kubernetes-dashboard-web",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-web-8c8677847-tgkcq"} 2.951347 1735616259899
container_cpu_usage_seconds_total{container="nginx",namespace="default",pod="nginx"} 0.129866 1735616263970
container_cpu_usage_seconds_total{container="proxy",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-kong-6556f7cc45-25dqm"} 18.164751 1735616252552
container_cpu_usage_seconds_total{container="weave",namespace="kube-system",pod="weave-net-nwknb"} 16.858115 1735616258442
container_cpu_usage_seconds_total{container="weave-npc",namespace="kube-system",pod="weave-net-nwknb"} 5.038365 1735616259655
# HELP container_memory_working_set_bytes [STABLE] Current working set of the container in bytes
# TYPE container_memory_working_set_bytes gauge
container_memory_working_set_bytes{container="kube-proxy",namespace="kube-system",pod="kube-proxy-54cr2"} 7.3527296e+07 1735616267825
container_memory_working_set_bytes{container="kubernetes-dashboard-api",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-api-5b76d9c66d-w98xc"} 5.5005184e+07 1735616261889
container_memory_working_set_bytes{container="kubernetes-dashboard-auth",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-auth-57c8f974bb-vl9kr"} 4.2389504e+07 1735616267060
container_memory_working_set_bytes{container="kubernetes-dashboard-metrics-scraper",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-metrics-scraper-85d56b4fcd-h2v2f"} 3.7433344e+07 1735616253709
container_memory_working_set_bytes{container="kubernetes-dashboard-web",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-web-8c8677847-tgkcq"} 4.100096e+07 1735616259899
container_memory_working_set_bytes{container="nginx",namespace="default",pod="nginx"} 6.078464e+06 1735616263970
container_memory_working_set_bytes{container="proxy",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-kong-6556f7cc45-25dqm"} 1.15126272e+08 1735616252552
container_memory_working_set_bytes{container="weave",namespace="kube-system",pod="weave-net-nwknb"} 9.2102656e+07 1735616258442
container_memory_working_set_bytes{container="weave-npc",namespace="kube-system",pod="weave-net-nwknb"} 5.3006336e+07 1735616259655
# HELP container_start_time_seconds [STABLE] Start time of the container since unix epoch in seconds
# TYPE container_start_time_seconds gauge
container_start_time_seconds{container="kube-proxy",namespace="kube-system",pod="kube-proxy-54cr2"} 1.7356098711668828e+09
container_start_time_seconds{container="kubernetes-dashboard-api",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-api-5b76d9c66d-w98xc"} 1.7356098829024224e+09
container_start_time_seconds{container="kubernetes-dashboard-auth",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-auth-57c8f974bb-vl9kr"} 1.735609880124374e+09
container_start_time_seconds{container="kubernetes-dashboard-metrics-scraper",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-metrics-scraper-85d56b4fcd-h2v2f"} 1.735609885740109e+09
container_start_time_seconds{container="kubernetes-dashboard-web",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-web-8c8677847-tgkcq"} 1.7356098788678856e+09
container_start_time_seconds{container="nginx",namespace="default",pod="nginx"} 1.7356098909882374e+09
container_start_time_seconds{container="proxy",namespace="kubernetes-dashboard",pod="kubernetes-dashboard-kong-6556f7cc45-25dqm"} 1.7356098846059828e+09
container_start_time_seconds{container="weave",namespace="kube-system",pod="weave-net-nwknb"} 1.735609874908378e+09
container_start_time_seconds{container="weave-npc",namespace="kube-system",pod="weave-net-nwknb"} 1.73560987521816e+09
# HELP node_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the node in core-seconds
# TYPE node_cpu_usage_seconds_total counter
node_cpu_usage_seconds_total 370.988 1735616261759
# HELP node_memory_working_set_bytes [STABLE] Current working set of the node in bytes
# TYPE node_memory_working_set_bytes gauge
node_memory_working_set_bytes 1.043800064e+09 1735616261759
# HELP pod_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the pod in core-seconds
# TYPE pod_cpu_usage_seconds_total counter
pod_cpu_usage_seconds_total{namespace="default",pod="nginx"} 0.200262 1735616265970
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-proxy-54cr2"} 9.680489 1735616264420
pod_cpu_usage_seconds_total{namespace="kube-system",pod="weave-net-nwknb"} 22.097047 1735616263123
pod_cpu_usage_seconds_total{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-api-5b76d9c66d-w98xc"} 4.690021 1735616259895
pod_cpu_usage_seconds_total{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-auth-57c8f974bb-vl9kr"} 3.038463 1735616256249
pod_cpu_usage_seconds_total{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-kong-6556f7cc45-25dqm"} 18.460118 1735616262406
pod_cpu_usage_seconds_total{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-metrics-scraper-85d56b4fcd-h2v2f"} 2.342227 1735616265619
pod_cpu_usage_seconds_total{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-web-8c8677847-tgkcq"} 3.042861 1735616265683
# HELP pod_memory_working_set_bytes [STABLE] Current working set of the pod in bytes
# TYPE pod_memory_working_set_bytes gauge
pod_memory_working_set_bytes{namespace="default",pod="nginx"} 6.336512e+06 1735616265970
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-proxy-54cr2"} 7.4285056e+07 1735616264420
pod_memory_working_set_bytes{namespace="kube-system",pod="weave-net-nwknb"} 1.47615744e+08 1735616263123
pod_memory_working_set_bytes{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-api-5b76d9c66d-w98xc"} 5.5259136e+07 1735616259895
pod_memory_working_set_bytes{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-auth-57c8f974bb-vl9kr"} 4.2651648e+07 1735616256249
pod_memory_working_set_bytes{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-kong-6556f7cc45-25dqm"} 1.17559296e+08 1735616262406
pod_memory_working_set_bytes{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-metrics-scraper-85d56b4fcd-h2v2f"} 3.7691392e+07 1735616265619
pod_memory_working_set_bytes{namespace="kubernetes-dashboard",pod="kubernetes-dashboard-web-8c8677847-tgkcq"} 4.1259008e+07 1735616265683
# HELP resource_scrape_error [STABLE] 1 if there was an error while getting container metrics, 0 otherwise
# TYPE resource_scrape_error gauge
resource_scrape_error 0
What you expected to happen:
When the correct client and server certificates, signed by the correct CA, are provided, the metrics-server should run without any issue. Also, the documentation does not contain a single example of the right configuration for the case where the kubelet uses client and serving certificates.
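For reference, my understanding of a consistent flag layout (file paths as in my kubeadm-style setup above; I am not certain this is the officially recommended configuration) is that the scraping path and the serving path should use separate certificates and CAs:

```
# Scraping the kubelets: verify their serving certs against the cluster CA,
# and present a client cert that the kubelets accept.
--kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
--kubelet-client-certificate=/etc/kubernetes/pki/metrics-server/metrics-client.crt
--kubelet-client-key=/etc/kubernetes/pki/metrics-server/metrics-client.key
# Serving metrics-server's own API to the kube-apiserver (front proxy):
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--tls-cert-file=/etc/kubernetes/pki/metrics-server/metrics-server.crt
--tls-private-key-file=/etc/kubernetes/pki/metrics-server/metrics-server.key
```

If the scrape still fails with this layout, one thing worth checking is whether the kubelet's serving certificate on :10250 is actually issued by ca.crt: by default the kubelet self-signs its serving certificate unless serving-certificate bootstrap (`serverTLSBootstrap: true`) is configured, in which case no CA file will verify it and `--kubelet-insecure-tls` is the usual workaround.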
Anything else we need to know?:
Is this a bug, or am I just misconfiguring things?
Environment:
Kubernetes version (use `kubectl version`): 1.32

/kind bug