Replies: 2 comments 1 reply
-
You have quite a few dimensions defined in your config, so my guess is that you are generating a very large number of series. I would check the value of the metric that reports the generator's active series count.
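As a rough illustration (the numbers are invented, not taken from your config): every extra dimension multiplies the series count. With 20 services × 30 routes × 6 status codes you already have around 3,600 label combinations per span metric; add a dimension whose value is a request or user ID and the series count grows with traffic instead of staying bounded.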
-
The number of dimensions matters less than their cardinality. You want to avoid high-cardinality dimensions such as IDs and stick to low-cardinality ones such as HTTP status code and HTTP route.
The reason to add dimensions at all is so that the span metrics can be broken down by them in Prometheus.
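If it helps, here is a minimal sketch of what that looks like in the Tempo config. The key names are from memory and may differ slightly between Tempo versions, so please check them against the docs for your release:

```yaml
metrics_generator:
  processor:
    span_metrics:
      # Low-cardinality span attributes to add as labels on the generated
      # span metrics. Avoid IDs, user names, full URLs, and similar values.
      dimensions:
        - http.method
        - http.status_code
        - http.route
  registry:
    external_labels:
      source: tempo
```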
Are you saying you still see a memory impact from the metrics seven hours after you stopped sending traces? The Go runtime will hold on to memory it is no longer using, as seen from the OS, as long as there is no memory pressure; this is common for garbage-collected languages. Perhaps you are seeing the effect of that behavior?
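If you want the runtime to return memory to the OS more aggressively, one thing you could try is setting Go's soft memory limit (GOMEMLIMIT, available since Go 1.19) on the Tempo container. This is only a sketch; the 2GiB value is an arbitrary example, not a recommendation:

```yaml
services:
  tempo:
    image: grafana/tempo:latest
    environment:
      # Soft memory limit for the Go runtime: as the limit is approached,
      # the GC runs more often and memory is returned to the OS sooner.
      - GOMEMLIMIT=2GiB
```

Note that this only changes how memory is held and released; if the underlying problem is the number of series the generator produces, reducing dimension cardinality is the real fix.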
-
Hi everyone,
I have an observability stack running on Docker and I am using Tempo for tracing, with the monolithic distribution.
My problem is that when one of my services is active, it generates traces correctly and sends them to Tempo, but I see a massive increase in RAM and CPU usage. While troubleshooting I confirmed that when I disable the metrics-generator, Tempo still runs correctly and the resource spike does not recur. However, I still need the metrics-generator for span metrics and the service graph.
I would appreciate any help solving this issue, please.
yaml files.zip