-
@ppatierno @sknot-rh Any ideas? I do not really understand Prometheus. But just to make it clear: these are just examples. You can modify or adjust them in any way you want. The idea really is that you plug the monitoring into your own Prometheus instance.
-
@F-Plesa I think maybe a better approach for your case might be to scrape from the openshift-monitoring Prometheus via its /federate endpoint. More info on federation can be found here: https://prometheus.io/docs/prometheus/latest/federation/

Here's an attempt at an example that might help. As you can see, it only scrapes a limited set of defined metrics by using the match[] params:

    - job_name: openshift-monitoring-federation
      honor_labels: true
      honor_timestamps: true
      params:
        match[]:
          - kube_persistentvolume_capacity_bytes
          - kubelet_volume_stats_used_bytes{endpoint="https-metrics",namespace="my-namespace"}
          - kubelet_volume_stats_available_bytes{endpoint="https-metrics",namespace="my-namespace"}
      scrape_interval: 2m
      scrape_timeout: 1m
      metrics_path: /federate
      scheme: https
      bearer_token_file: "/var/run/secrets/kubernetes.io/serviceaccount/token"
      tls_config:
        insecure_skip_verify: true
      relabel_configs:
        - source_labels: [__meta_kubernetes_service_name]
          separator: ;
          regex: prometheus-k8s
          replacement: $1
          action: keep
        - source_labels: [__meta_kubernetes_service_port_name]
          separator: ;
          regex: web
          replacement: $1
          action: keep
      kubernetes_sd_configs:
        - role: service
          namespaces:
            names:
              - openshift-monitoring

If you're using the Prometheus operator to deploy Prometheus, you might be able to do this in a ServiceMonitor CR. You'll also need to create a RoleBinding between the ServiceAccount that Prometheus is running as (probably …) and a role that allows it to read metrics from openshift-monitoring.
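For illustration, here is a rough sketch of how that could look as a ServiceMonitor plus the RBAC binding. Every name and label in it, as well as the cluster-monitoring-view ClusterRole, is an assumption to verify against your cluster rather than a tested configuration:

```yaml
# Hypothetical ServiceMonitor version of the federation job above -- a sketch only.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: openshift-monitoring-federation
  namespace: my-namespace            # a namespace your own Prometheus operator watches
  labels:
    app: my-prometheus               # must match your Prometheus CR's serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - openshift-monitoring         # where the prometheus-k8s Service lives
  selector:
    matchLabels:
      prometheus: k8s                # assumed label; check the labels on the prometheus-k8s Service
  endpoints:
    - port: web
      path: /federate
      scheme: https
      interval: 2m
      scrapeTimeout: 1m
      honorLabels: true
      bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      tlsConfig:
        insecureSkipVerify: true
      params:
        'match[]':
          - 'kube_persistentvolume_capacity_bytes'
          - 'kubelet_volume_stats_used_bytes{endpoint="https-metrics",namespace="my-namespace"}'
          - 'kubelet_volume_stats_available_bytes{endpoint="https-metrics",namespace="my-namespace"}'
---
# The RBAC piece mentioned above. Whether a RoleBinding or ClusterRoleBinding is
# appropriate, and whether cluster-monitoring-view is the right role on your
# OpenShift version, are assumptions -- use whatever grants read access to the
# openshift-monitoring Prometheus.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-prometheus-cluster-monitoring-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-monitoring-view
subjects:
  - kind: ServiceAccount
    name: my-prometheus-server       # the ServiceAccount your Prometheus runs as
    namespace: my-namespace
```

Note that if your Prometheus CR uses a serviceMonitorSelector, the labels on the ServiceMonitor have to match it, otherwise the operator will ignore the resource.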
-
Hello, we are deploying Strimzi on OpenShift (with a separate Prometheus instance, not using openshift-monitoring) and have a problem with the additionalScrapeConfigs file provided in examples/metrics. As it is, it uses role: node. There is a namespaces: names: [] field, but since nodes are not namespaced objects, Prometheus (in my experience) disregards that field entirely.
The result is that Prometheus scrapes the nodes for additional metrics such as CPU usage of the Kafka brokers, which is nicely shown in Grafana. However, Prometheus is also scraping CPU usage for all other deployments in all other namespaces. This is a big issue for two reasons: performance and security. In our case we have separate namespaces, "kafka" and "monitoring", and we want to monitor the Kafka deployment in the "kafka" namespace. But when exploring metrics in Prometheus, I see metrics from all other namespaces as well.
Since role: node cannot be namespaced (if it can, please correct me), is it possible to implement the resource usage metrics with role: pod instead, so they can be namespaced? The "scrape everything and display what you need" approach is a problem, as described, especially in big clusters. Realistically, what should be metrics for 6 pods (3 brokers, 3 ZooKeepers) ends up being thousands of metrics from all over the place.
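Roughly, the kind of namespaced, pod-based job I have in mind is sketched below. The job name, namespace and the prometheus.io/scrape annotation convention are just placeholders, and I realise the cAdvisor/node metrics the current job collects are exposed per node rather than by the pods themselves, so this is only meant to illustrate scoping discovery to one namespace:

```yaml
# Sketch only: pod discovery is namespaced, unlike role: node.
- job_name: kafka-pods
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - kafka                  # only discover pods in the "kafka" namespace
  relabel_configs:
    # keep only pods that opt in via the usual prometheus.io/scrape annotation
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # carry the namespace and pod name over as labels
    - source_labels: [__meta_kubernetes_namespace]
      target_label: namespace
    - source_labels: [__meta_kubernetes_pod_name]
      target_label: pod
```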
Not sure if this should be filed under Discussions or Issues, so I am posting it here for now.
Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config
Strimzi additional scrape config: https://github.com/strimzi/strimzi-kafka-operator/blob/main/examples/metrics/prometheus-additional-properties/prometheus-additional.yaml