
Missing nss metrics for channels and servers in Jetstream #216

Open

OfirYemini opened this issue Mar 29, 2023 · 24 comments

@OfirYemini

My goal is to extract the pending messages per subject, like the following CLI command does:

[screenshot: CLI output showing per-subject pending message counts]

Following the guidelines here, I've run the exporter with the following command:

./prometheus-nats-exporter -channelz -serverz http://localhost:8222

But I'm not getting any of the nss.* metrics. These are the only metrics I get:

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0
go_gc_duration_seconds_sum 0
go_gc_duration_seconds_count 0
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 16
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.17.13"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 4.233328e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 4.233328e+06
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.44722e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 0
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 3.647192e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 4.233328e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 3.432448e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 4.268032e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 24728
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 3.432448e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 7.70048e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 0
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 24728
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 19200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 32768
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 63920
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 65536
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 5.275648e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.411324e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 655360
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 655360
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.495988e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 11
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 13
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.9111936e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.68012116363e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.41858816e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 2
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0

@OfirYemini OfirYemini changed the title Missing nss metrics for channels and servers Missing nss metrics for channels and servers in Jetstream Apr 7, 2023
@arkh-consensys

arkh-consensys commented Jun 20, 2023

Same here, I also don't get any channel-related metrics exported.
Running with this Helm config:

exporter:
  enabled: true
  image: natsio/prometheus-nats-exporter:0.11.0
  serviceMonitor:
    enabled: true
  args:
    - -varz
    - -jsz=all
    - -channelz
    - -subz
    - -serverz
    - -connz
    - -routez
    - -prefix=nats
    - -use_internal_server_id
    - -use_internal_server_name

nats:
  image: nats:2.9.18-alpine3.18

@lethargosapatheia

lethargosapatheia commented Dec 13, 2023

I have the exact same issue using nats:2.9.16-linux. It's crucial for us to be able to get messages per stream. Unfortunately, we can only see consumers per channel (stream) and pending consumers per channel. We are also using JetStream. Are there any plans to address this?

These are the arguments I'm currently using:

  - args:
    - -varz
    - -connz
    - -subz
    - -routez
    - -healthz
    - -jsz=all
    - -channelz
    - -serverz
    - http://nats:8222

@wallyqs
Member

wallyqs commented Dec 13, 2023

-channelz is a NATS Streaming feature, not a JetStream one, so that would be why the metrics are not being reported.
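
To collect JetStream metrics from the exporter instead, enable its -jsz collector; a minimal sketch, assuming the server's monitoring endpoint is at localhost:8222:

# drop -channelz (Streaming only) and enable the JetStream collector
./prometheus-nats-exporter -varz -jsz=all http://localhost:8222

This exposes the jetstream_* metric families rather than the nss_* (Streaming) ones.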

@lethargosapatheia

Thanks for the clarification. Do you know if there are any plans to add JetStream metrics that expose messages per stream?

@arkh-consensys

@wallyqs Thanks. Is there any way we could extract messages per stream/subject in a JetStream context?

@0xterminator

Any progress here? We really need these metrics. How are they activated? @wallyqs

@Jarema
Member

Jarema commented Sep 11, 2024

The Prometheus exporter uses the HTTP monitoring endpoints of NATS, and those do not expose per-subject metrics.
To get that data, you need to call stream info with added parameters:

09:30:19 >>> $JS.API.STREAM.INFO.events
{"offset":0,"subjects_filter":"\u003e"}

It will provide a paginated response.

Keep in mind that exposing that metric (for example in nats-surveyor, which has access to that API) is a big risk: many systems have millions of subjects in a stream, and calculating that information frequently can be really taxing.

We're thinking about how we can make this better, but it is out of scope for the Prometheus exporter at this point.
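
For reference, the same request can be made from the nats CLI; a sketch, assuming a stream named events and a configured nats context:

# request per-subject info for the "events" stream; ">" matches every subject
nats req '$JS.API.STREAM.INFO.events' '{"offset":0,"subjects_filter":">"}'

On streams with many subjects the response is large and paginated, which is exactly the cost described above.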

@0xterminator

> The Prometheus exporter uses the HTTP monitoring endpoints of NATS, and those do not expose per-subject metrics. […]

@Jarema I have two questions related to your response:

  1. Where do you call stream info? Is this a CLI command? I presume this concerns metrics like nss_chan_msgs_total in the grafana-nss-dash: https://github.com/nats-io/prometheus-nats-exporter/blob/main/walkthrough/grafana-nss-dash.json#L272. Am I correct? And are you saying there is no way for us to get them using prometheus-nats-exporter at the moment?

  2. I see you have added this dashboard: https://github.com/nats-io/prometheus-nats-exporter/blob/main/walkthrough/grafana-jetstream-dash.json#L859. The lower part (consumer metrics) doesn't have any data and I don't see it; these are no longer the Streaming metrics but JetStream ones. Any idea why I have none at all, even though I have streams and consumers running?

@Jarema
Member

Jarema commented Sep 11, 2024

  1. The stream info is a CLI call that uses the $JS.API.STREAM.INFO NATS server API under the hood (you can check what any CLI command does by running it with the --trace flag; see the example below).
  2. I'm a bit confused. Streaming (STAN) has been deprecated; JetStream is its successor. Are you saying that you have JetStream streams and consumers, and the metrics do not show their data?
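
For example (a sketch; events stands in for your stream name):

# print the raw $JS.API requests and responses behind the CLI call
nats stream info events --trace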

@0xterminator

> 1. The stream info is a CLI call that uses the $JS.API.STREAM.INFO NATS server API under the hood […]

@Jarema Thanks for the answers.

  1. OK, understood. Just to double-check: why do you have a dashboard with metrics like nss_chan_msgs_total if you don't collect them?

  2. I have KV stores, which use JetStream under the hood, and none of the metrics in https://github.com/nats-io/prometheus-nats-exporter/blob/main/walkthrough/grafana-jetstream-dash.json#L859 are being scraped. Any idea?

@ripienaar

nss_chan_msgs_total is a NATS Streaming metric. NATS Streaming is not JetStream, so that metric does not apply to current software; the same goes for all nss_* metrics.

@0xterminator

> nss_chan_msgs_total is a NATS Streaming metric. […]

@ripienaar OK, but then comes my question: can JetStream metrics be exposed and collected?

@ripienaar

Did you pass the CLI flag to enable JetStream monitoring?

@0xterminator

> Did you pass the CLI flag to enable JetStream monitoring?

Yep. This is my command in docker-compose:

command:
  - -m
  - '8222'
  - --name=my-system
  - --js
  - --config=/etc/nats/nats.conf
  - -D

It should be exposing port 8222 and passing --js, right?

@ripienaar

Is that even running? There are several problems with this: the Prometheus exporter doesn't accept --config as an option, and --js takes an argument. It probably fails immediately with an error.

$ docker run --rm -p 7777:7777 natsio/prometheus-nats-exporter:latest --jsz all http://n1:8222/

And elsewhere, it's fetching all the stats:

[rip@n1-lon]% curl -s 0:7777/metrics|awk -F"{" '/^jetstream/ {print $1}'|sort
jetstream_consumer_ack_floor_consumer_seq
jetstream_consumer_ack_floor_stream_seq
jetstream_consumer_delivered_consumer_seq
jetstream_consumer_delivered_stream_seq
jetstream_consumer_num_ack_pending
jetstream_consumer_num_pending
jetstream_consumer_num_redelivered
jetstream_consumer_num_waiting
jetstream_server_jetstream_disabled
jetstream_server_max_memory
jetstream_server_max_storage
jetstream_server_total_consumers
jetstream_server_total_message_bytes
jetstream_server_total_messages
jetstream_server_total_streams
jetstream_stream_consumer_count
jetstream_stream_first_seq
jetstream_stream_last_seq
jetstream_stream_total_bytes
jetstream_stream_total_messages

@0xterminator

> Is that even running? There are several problems with this […]

Those are the flags I pass when I start nats, not prometheus-nats-exporter. Here are both once again for clarity:

services:
  nats:
    profiles:
      - all
      - nats
    image: nats:latest
    container_name: nats
    restart: always
    ports:
      - 4222:4222
      - 8222:8222
    volumes:
      - ./nats.conf:/etc/nats/nats.conf
    command:
      - -m
      - '8222'
      - --name=xxxxx
      - --js
      - --config=/etc/nats/nats.conf
      - -D

  prometheus-nats-exporter:
    profiles:
      - all
      - monitoring
    image: natsio/prometheus-nats-exporter:latest
    container_name: prometheus-nats-exporter
    restart: always
    command:
      - "-D"
      - "-varz"
      - "-jsz"
      - "all"
      - "-accstatz"
      - "-channelz"
      - "-connz"
      - "-leafz"
      - "-connz_detailed"
      - "-serverz"
      - "-healthz"
      - "-routez"
      - "-subz"
      - "-port"
      - "7777"
      - "http://nats:8222"
    ports:
      - 7777:7777
    env_file:
      - ./../.env
    depends_on:
      - nats

And there are no JetStream metrics, as in your case above. Is it because of the way I run nats, or because we are using a KV store (which internally uses a JetStream stream)?

@ripienaar

My case above shows that lots of JetStream metrics were gathered: the output from the curl command is a unique list of the JetStream metrics collected.

Since KV does not tend to have consumers (unless you use watch), consumer-related metrics won't really be there. Here's a KV, though:

[rip@n1-lon]% curl -s 0:7777/metrics|grep KV_
jetstream_stream_consumer_count{account="USERS",account_id="ADM6CMOXUMFKRJTPGLFY5DGYUJNLQV5SGFZMXCMTWV3CKB6Z43GQ3L6C",cluster="lon",domain="hub",is_meta_leader="false",is_stream_leader="true",meta_leader="n2-nyc",server_id="http://n1-lon.example.net:8222",server_name="n1-lon",stream_leader="n1-lon",stream_name="KV_X"} 0
jetstream_stream_first_seq{account="USERS",account_id="ADM6CMOXUMFKRJTPGLFY5DGYUJNLQV5SGFZMXCMTWV3CKB6Z43GQ3L6C",cluster="lon",domain="hub",is_meta_leader="false",is_stream_leader="true",meta_leader="n2-nyc",server_id="http://n1-lon.example.net:8222",server_name="n1-lon",stream_leader="n1-lon",stream_name="KV_X"} 1
jetstream_stream_last_seq{account="USERS",account_id="ADM6CMOXUMFKRJTPGLFY5DGYUJNLQV5SGFZMXCMTWV3CKB6Z43GQ3L6C",cluster="lon",domain="hub",is_meta_leader="false",is_stream_leader="true",meta_leader="n2-nyc",server_id="http://n1-lon.example.net:8222",server_name="n1-lon",stream_leader="n1-lon",stream_name="KV_X"} 1
jetstream_stream_total_bytes{account="USERS",account_id="ADM6CMOXUMFKRJTPGLFY5DGYUJNLQV5SGFZMXCMTWV3CKB6Z43GQ3L6C",cluster="lon",domain="hub",is_meta_leader="false",is_stream_leader="true",meta_leader="n2-nyc",server_id="http://n1-lon.example.net:8222",server_name="n1-lon",stream_leader="n1-lon",stream_name="KV_X"} 38
jetstream_stream_total_messages{account="USERS",account_id="ADM6CMOXUMFKRJTPGLFY5DGYUJNLQV5SGFZMXCMTWV3CKB6Z43GQ3L6C",cluster="lon",domain="hub",is_meta_leader="false",is_stream_leader="true",meta_leader="n2-nyc",server_id="http://n1-lon.example.net:8222",server_name="n1-lon",stream_leader="n1-lon",stream_name="KV_X"} 1

@ripienaar

Your docker-compose file works fine, assuming you're adding streams or KV buckets; here I added one:

% nats kv add X
% nats kv put X Y Z
% curl -s 0:7777/metrics|grep ^jetstream 
jetstream_server_jetstream_disabled{cluster="",domain="",is_meta_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx"} 0
jetstream_server_max_memory{cluster="",domain="",is_meta_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx"} 1.2373779456e+10
jetstream_server_max_storage{cluster="",domain="",is_meta_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx"} 3.447327744e+10
jetstream_server_total_consumers{cluster="",domain="",is_meta_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx"} 0
jetstream_server_total_message_bytes{cluster="",domain="",is_meta_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx"} 38
jetstream_server_total_messages{cluster="",domain="",is_meta_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx"} 1
jetstream_server_total_streams{cluster="",domain="",is_meta_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx"} 1
jetstream_stream_consumer_count{account="$G",account_id="$G",cluster="",domain="",is_meta_leader="true",is_stream_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx",stream_leader="xxxxx",stream_name="KV_X"} 0
jetstream_stream_first_seq{account="$G",account_id="$G",cluster="",domain="",is_meta_leader="true",is_stream_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx",stream_leader="xxxxx",stream_name="KV_X"} 1
jetstream_stream_last_seq{account="$G",account_id="$G",cluster="",domain="",is_meta_leader="true",is_stream_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx",stream_leader="xxxxx",stream_name="KV_X"} 1
jetstream_stream_total_bytes{account="$G",account_id="$G",cluster="",domain="",is_meta_leader="true",is_stream_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx",stream_leader="xxxxx",stream_name="KV_X"} 38
jetstream_stream_total_messages{account="$G",account_id="$G",cluster="",domain="",is_meta_leader="true",is_stream_leader="true",meta_leader="",server_id="http://nats:8222",server_name="xxxxx",stream_leader="xxxxx",stream_name="KV_X"} 1

@0xterminator

> Your docker-compose file works fine, assuming you're adding streams or KV buckets […]

I am seeing them too. Just wondering why I don't see the metrics defined here: https://github.com/nats-io/prometheus-nats-exporter/blob/main/walkthrough/grafana-jetstream-dash.json#L859

Also, are there any examples of a docker-compose setup with nats-surveyor and JetStream?

@ripienaar

It's common for Prometheus exporters to only export data that has values.

So if you have no consumers, you get no consumer metrics, for example.

@0xterminator

> It's common for Prometheus exporters to only export data that has values. […]

That's the thing: I have both producers and consumers, using put and watch on the KV stores.

[screenshot: Grafana JetStream dashboard with empty consumer-metrics panels]

Looking at the dashboard, as you can see in the picture above, I have nothing on the consumer side.

@0xterminator

Could the metric names maybe be outdated, because the dashboard is old?

@ripienaar
Copy link

I've shown you how to look for what metrics are returned; you can easily find out with curl and a grep for consumer.

If you have consumers, then show them with consumer info so we can confirm they exist.

Please try to provide more information than you have been. We're donating our free time here, so please try to help us help you.
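
For example, reusing the earlier commands (a sketch; STREAM and CONSUMER are placeholders):

# list which consumer metrics the exporter currently returns
curl -s 0:7777/metrics | awk -F'{' '/^jetstream_consumer/ {print $1}' | sort

# confirm a consumer actually exists on the server
nats consumer info STREAM CONSUMER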

@0xterminator

> I've shown you how to look for what metrics are returned […]

All good now. Thanks guys, I checked and it was fine. I had to modify the dashboard a bit, but overall the jetstream_* metrics showed up for consumers. Ty!
