
data received by kibana is unreadable #1

Closed
anubisg1 opened this issue Mar 4, 2019 · 7 comments

Comments

anubisg1 commented Mar 4, 2019

Hello.

We are trying to get telemetry data into Kibana, but what we receive is unreadable, as it appears to contain binary data. This is what Kibana sees:


�leaf101-N93180YC-EX��126/Cisco-NX-OS-device:System/procsys-items/sysload-items@(P���ɔ-Z��zx��keyszp�6/Cisco-NX-OS-device:System/procsys-items/sysload-items*6/Cisco-NX-OS-device:System/procsys-items/sysload-itemsz����contentz��z��� sysload-itemsz��z���loadAverage15m*�0.330000z�� loadAverage1m*�0.570000z�� loadAverage5m*�0.450000z���name*�sysloadz���runProc@�z��	totalProc@��
--

Some logs from grpc2kafka:

[xxxxx@localhost]$ docker-compose logs -f
Attaching to grpc2kafka_grpc2afka_1
grpc2afka_1  | 2019-03-04 14:44:26,763 INFO  [main] Grpc2Kafka - Connecting to Kafka cluster into localhost :9092...
grpc2afka_1  | 2019-03-04 14:44:26,794 INFO  [main] ProducerConfig - ProducerConfig values: 
grpc2afka_1  |  acks = 1
grpc2afka_1  |  batch.size = 16384
grpc2afka_1  |  bootstrap.servers = [localhost:9092]
grpc2afka_1  |  buffer.memory = 33554432
grpc2afka_1  |  client.dns.lookup = default
grpc2afka_1  |  client.id = 
grpc2afka_1  |  compression.type = none
grpc2afka_1  |  connections.max.idle.ms = 540000
grpc2afka_1  |  delivery.timeout.ms = 120000
grpc2afka_1  |  enable.idempotence = false
grpc2afka_1  |  interceptor.classes = []
grpc2afka_1  |  key.serializer = class org.apache.kafka.common.serialization.StringSerializer
grpc2afka_1  |  linger.ms = 0
grpc2afka_1  |  max.block.ms = 60000
grpc2afka_1  |  max.in.flight.requests.per.connection = 5
grpc2afka_1  |  max.request.size = 1048576
grpc2afka_1  |  metadata.max.age.ms = 300000
grpc2afka_1  |  metric.reporters = []
grpc2afka_1  |  metrics.num.samples = 2
grpc2afka_1  |  metrics.recording.level = INFO
grpc2afka_1  |  metrics.sample.window.ms = 30000
grpc2afka_1  |  partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
grpc2afka_1  |  receive.buffer.bytes = 32768
grpc2afka_1  |  reconnect.backoff.max.ms = 1000
grpc2afka_1  |  reconnect.backoff.ms = 50
grpc2afka_1  |  request.timeout.ms = 30000
grpc2afka_1  |  retries = 3
grpc2afka_1  |  retry.backoff.ms = 100
grpc2afka_1  |  sasl.client.callback.handler.class = null
grpc2afka_1  |  sasl.jaas.config = null
grpc2afka_1  |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
grpc2afka_1  |  sasl.kerberos.min.time.before.relogin = 60000
grpc2afka_1  |  sasl.kerberos.service.name = null
grpc2afka_1  |  sasl.kerberos.ticket.renew.jitter = 0.05
grpc2afka_1  |  sasl.kerberos.ticket.renew.window.factor = 0.8
grpc2afka_1  |  sasl.login.callback.handler.class = null
grpc2afka_1  |  sasl.login.class = null
grpc2afka_1  |  sasl.login.refresh.buffer.seconds = 300
grpc2afka_1  |  sasl.login.refresh.min.period.seconds = 60
grpc2afka_1  |  sasl.login.refresh.window.factor = 0.8
grpc2afka_1  |  sasl.login.refresh.window.jitter = 0.05
grpc2afka_1  |  sasl.mechanism = GSSAPI
grpc2afka_1  |  security.protocol = PLAINTEXT
grpc2afka_1  |  send.buffer.bytes = 131072
grpc2afka_1  |  ssl.cipher.suites = null
grpc2afka_1  |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
grpc2afka_1  |  ssl.endpoint.identification.algorithm = https
grpc2afka_1  |  ssl.key.password = null
grpc2afka_1  |  ssl.keymanager.algorithm = SunX509
grpc2afka_1  |  ssl.keystore.location = null
grpc2afka_1  |  ssl.keystore.password = null
grpc2afka_1  |  ssl.keystore.type = JKS
grpc2afka_1  |  ssl.protocol = TLS
grpc2afka_1  |  ssl.provider = null
grpc2afka_1  |  ssl.secure.random.implementation = null
grpc2afka_1  |  ssl.trustmanager.algorithm = PKIX
grpc2afka_1  |  ssl.truststore.location = null
grpc2afka_1  |  ssl.truststore.password = null
grpc2afka_1  |  ssl.truststore.type = JKS
grpc2afka_1  |  transaction.timeout.ms = 60000
grpc2afka_1  |  transactional.id = null
grpc2afka_1  |  value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
grpc2afka_1  | 
grpc2afka_1  | 2019-03-04 14:44:26,934 INFO  [main] AppInfoParser - Kafka version : 2.1.0
grpc2afka_1  | 2019-03-04 14:44:26,934 INFO  [main] AppInfoParser - Kafka commitId : eec43959745f444f
grpc2afka_1  | 2019-03-04 14:44:26,935 INFO  [main] Grpc2Kafka - Starting NX-OS gRPC server without TLS on port 50051...
grpc2afka_1  | 2019-03-04 14:45:42,003 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Receiving request ID 0 with 150 bytes of data
grpc2afka_1  | 2019-03-04 14:45:42,005 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Sending message to Kafka topic telemetry
grpc2afka_1  | 2019-03-04 14:45:42,156 INFO  [kafka-producer-network-thread | producer-1] Metadata - Cluster ID: aFF4pMgZRmmhgdyEdJ7sxg
grpc2afka_1  | 2019-03-04 14:45:42,180 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Message has been sent to Kafka topic telemetry
grpc2afka_1  | 2019-03-04 14:45:42,180 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Terminating communication...
grpc2afka_1  | 2019-03-04 14:45:42,309 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Receiving request ID 0 with 378 bytes of data
grpc2afka_1  | 2019-03-04 14:45:42,309 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Sending message to Kafka topic telemetry
grpc2afka_1  | 2019-03-04 14:45:42,312 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Message has been sent to Kafka topic telemetry
grpc2afka_1  | 2019-03-04 14:45:42,312 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Terminating communication...
grpc2afka_1  | 2019-03-04 14:45:42,470 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Receiving request ID 0 with 378 bytes of data
grpc2afka_1  | 2019-03-04 14:45:42,470 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Sending message to Kafka topic telemetry
grpc2afka_1  | 2019-03-04 14:45:42,472 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Message has been sent to Kafka topic telemetry
grpc2afka_1  | 2019-03-04 14:45:42,472 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Terminating communication...
grpc2afka_1  | 2019-03-04 14:45:47,464 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Receiving request ID 0 with 378 bytes of data
grpc2afka_1  | 2019-03-04 14:45:47,464 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Sending message to Kafka topic telemetry
grpc2afka_1  | 2019-03-04 14:45:47,466 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Message has been sent to Kafka topic telemetry
grpc2afka_1  | 2019-03-04 14:45:47,466 INFO  [grpc-default-executor-0] NxosMdtDialoutService - Terminating communication...
agalue (Owner) commented Mar 4, 2019

gRPC relies on Protobuf (GPB) for modeling data, and GPB is binary.

Something we can do is add an option to convert the data into JSON before sending it to Kafka. That way, there won't be issues with Elasticsearch/Kibana. I've done that for other projects, so it is something we can have without too much effort.

That said, the way Cisco decided to model the telemetry data is not the best, in my personal opinion, so traversing the data can be difficult regardless of the payload format (GPB or JSON).

For this reason, an additional app, maybe based on Kafka Streams, would be useful to pre-process the data and reformat it in a way that can be easily digested by applications like Elasticsearch.

anubisg1 (Author) commented Mar 4, 2019

Alejandro,

Thank you for your reply. Converting to JSON is definitely something that would help; it is what pipeline does, for example.

Our goal is to collect telemetry and then be able to visualize it from a time series DB.

Today, other than using pipeline's export to InfluxDB (which doesn't work for NX-OS, see cisco-ie/pipeline-gnmi#1), or manually parsing data from Kafka before sending it to a time series DB, we have no other way (AFAIK) to do this.

Do you have any suggestions on what we should do, end to end, from the telemetry receiver to data visualization in a tool like Grafana?

agalue (Owner) commented Mar 4, 2019

At OpenNMS, we have a post-processing module that runs before storing the data provided by the Nexus device in a TSDB. This module basically executes a user-provided Groovy script to transform the GPB data at runtime and passes the resulting content to the persistence layer.

For Elasticsearch, the approach might be similar, but to keep this application simple, I would do that externally: for example, by writing a Kafka Streams application that takes the data from the Kafka topic this application populates, applies some customization, and puts the results on another topic. That resulting topic is the one that would feed Elasticsearch.
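
As a rough illustration, here is a minimal sketch of such a Kafka Streams pre-processor, assuming JSON string payloads and hypothetical topic names ("telemetry" in, "telemetry-es" out); the reshape step is just a placeholder for whatever restructuring Elasticsearch needs:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class TelemetryPreprocessor {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "telemetry-preprocessor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();

        // Read the telemetry records this application writes to Kafka
        KStream<String, String> source =
                builder.stream("telemetry", Consumed.with(Serdes.String(), Serdes.String()));

        // Reshape each record, then publish it to the topic that feeds Elasticsearch
        source.mapValues(TelemetryPreprocessor::reshape)
              .to("telemetry-es", Produced.with(Serdes.String(), Serdes.String()));

        new KafkaStreams(builder.build(), props).start();
    }

    // Placeholder transform: flatten or rename fields as needed (identity here)
    private static String reshape(String json) {
        return json;
    }
}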

Certainly, I could add support to convert the GPB data to JSON, but that might not be enough. I'll create a branch with this change soon.

Converting GPB to JSON is not that hard.

With Java, besides compiling the .proto file that Cisco provides (similar to how this project does it), a dependency on com.google.protobuf:protobuf-java-util is required to get access to the converter.
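
To show the shape of that conversion, here is a minimal sketch using protobuf-java-util's JsonFormat; the Telemetry class name is an assumption, standing in for whatever message class the compiled Cisco .proto generates:

import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.util.JsonFormat;

public class GpbToJson {

    // Converts a raw GPB payload (as received over gRPC) into a JSON string.
    public static String toJson(byte[] gpbPayload) throws InvalidProtocolBufferException {
        // "Telemetry" is assumed to be the message class generated from
        // Cisco's telemetry .proto file; the real name depends on that file.
        Telemetry telemetry = Telemetry.parseFrom(gpbPayload);

        // JsonFormat (from protobuf-java-util) renders the message as JSON text
        return JsonFormat.printer()
                .includingDefaultValueFields()
                .print(telemetry);
    }
}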

The idea is to see how the data looks in JSON format, but please keep an open mind here, because the NX-OS data is not nicely formatted, so the content might be hard to traverse in order to get what's actually needed. This is why I believe having a pre-processor is not optional.

agalue (Owner) commented Mar 8, 2019

I've created a PR with the required changes to optionally convert the GPB payload to JSON.

To give it a try, please check out the branch called feature/json-output, compile it, and run it. If that works for you, I'll merge the PR.

anubisg1 (Author) commented

Thank you, looking into it today.

anubisg1 (Author) commented Mar 14, 2019

It is working nicely. Thank you!

agalue (Owner) commented Mar 14, 2019

You're very welcome!
The PR with the changes has been merged.

agalue closed this as completed Mar 14, 2019