I suspect two issues.
Configuration:
In Kafka, the ingress rate is ~21k whereas the egress is around 17k, so I suspect the bottleneck is not at the transforms but at Vector's consumption from Kafka. Looking at the utilisation metrics, it also looks like there is some lag at the lua transform.
Any suggestions on how to improve throughput, and on how to run multiple instances of Vector as consumers, would help.
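For context, here is a minimal sketch of the kind of topology described above; it is not the actual configuration (which is not shown here), and all names, addresses, and the Lua body are placeholders. One relevant detail: the kafka source joins a Kafka consumer group, so additional Vector instances started with the same `group_id` will split the topic's partitions between them.

```toml
# Hypothetical sketch of the described pipeline: Kafka -> lua -> Prometheus.
[sources.in_kafka]
type = "kafka"
bootstrap_servers = "kafka:9092"   # assumption
group_id = "vector-metrics"        # instances sharing this group split partitions
topics = ["metrics-json"]          # assumption

[transforms.json_to_metric]
type = "lua"
version = "2"
inputs = ["in_kafka"]
hooks.process = """
function (event, emit)
  -- placeholder for the JSON-to-metric conversion done in the real config
  emit(event)
end
"""

[sinks.prom_metric]
type = "prometheus_exporter"
inputs = ["json_to_metric"]
address = "0.0.0.0:9598"
```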
Hi @pkirubak!

We are rolling out a `native` (and `native_json`) codec that I think will allow you to avoid using the `lua` transform to convert JSON to metrics here, which, as you note, appears to be the bottleneck. We expect this to be complete by v0.22.0.

For now, you could try splitting up the processing by partitioning the data and running multiple identical `lua` transforms, each of which would run in its own task. This would involve using the `route` transform with one route per parallel `lua` transform, partitioning across some field of the metric value that distributes fairly (maybe the microsecond portion of the timestamp?), and then fanning back into the `prom_metric` sink. Let me know…
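A rough illustration of that fan-out/fan-in wiring follows. The source name `in_kafka`, the two-way split, and the nanosecond-parity partition key are assumptions for the sketch; the Lua bodies would be copies of the existing transform, and the split could be widened to more routes if two parallel transforms are not enough.

```toml
# Split the stream into two roughly even partitions so two lua transforms
# can run in parallel, each in its own task.
[transforms.split]
type = "route"
inputs = ["in_kafka"]
route.even = 'to_unix_timestamp!(.timestamp, unit: "nanoseconds") % 2 == 0'
route.odd  = 'to_unix_timestamp!(.timestamp, unit: "nanoseconds") % 2 == 1'

[transforms.lua_even]
type = "lua"
version = "2"
inputs = ["split.even"]
hooks.process = """
function (event, emit)
  -- same JSON-to-metric conversion as the original single lua transform
  emit(event)
end
"""

[transforms.lua_odd]
type = "lua"
version = "2"
inputs = ["split.odd"]
hooks.process = """
function (event, emit)
  -- identical copy of the transform above
  emit(event)
end
"""

# Fan both branches back into the one Prometheus sink.
[sinks.prom_metric]
type = "prometheus_exporter"
inputs = ["lua_even", "lua_odd"]
address = "0.0.0.0:9598"
```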