
Search very slow when logline has only one token (e.g. from masking) #37

Open
shiosai opened this issue Jun 14, 2021 · 6 comments
Labels: help wanted (Extra attention is needed), performance (A performance issue)

shiosai commented Jun 14, 2021

I have a lot of lines that are masked completely because they contain a lot of rubbish, e.g. resulting in a single token (the mask), which then becomes the template.
For some reason, searching these lines is extremely slow (once we already have a bigger search tree), even though it should actually be very fast since they have only one token.

I cannot give a good example of the log due to confidentiality, but perhaps this issue/limitation is already generally known?

shiosai commented Jun 15, 2021

Looking at fast_match, I don't see any point in doing the complicated self.get_seq_distance calls for all clusters when len(tokens) == 1.
Would something like this not be enough?

if cluster.log_template_tokens[0] == tokens[0]:
    return cluster

It seems to speed up the whole thing considerably. The effect is less visible with max_clusters activated, but that option seems to have a very strong negative effect on performance in any case.
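
For illustration, here is a small self-contained sketch of where such a shortcut could sit in a fast_match-style lookup. Cluster and get_seq_distance below are simplified stand-ins, not Drain3's actual implementation; only the single-token early exit is the point.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cluster:
    log_template_tokens: List[str]

def get_seq_distance(seq1: List[str], seq2: List[str]) -> float:
    # Simplified similarity: fraction of positions with identical tokens.
    if not seq1:
        return 1.0
    equal = sum(1 for t1, t2 in zip(seq1, seq2) if t1 == t2)
    return equal / len(seq1)

def fast_match(clusters: List[Cluster], tokens: List[str], sim_th: float) -> Optional[Cluster]:
    # Proposed shortcut: a single-token line can only match a cluster whose
    # template is exactly that one token, so skip the similarity computation.
    if len(tokens) == 1:
        for cluster in clusters:
            if len(cluster.log_template_tokens) == 1 and cluster.log_template_tokens[0] == tokens[0]:
                return cluster
        return None
    # Regular path: pick the most similar cluster above the threshold.
    best_cluster, best_sim = None, -1.0
    for cluster in clusters:
        if len(cluster.log_template_tokens) != len(tokens):
            continue
        sim = get_seq_distance(cluster.log_template_tokens, tokens)
        if sim >= sim_th and sim > best_sim:
            best_cluster, best_sim = cluster, sim
    return best_cluster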

davidohana (Collaborator) commented

Hello,
It seems like the change you suggest keeps the algorithm correct, but you should prove it in a regression test.
Please also demonstrate that it improves speed.
A PR is welcome!
BTW, please check your masking - if you get many single-token templates, you might need to improve it.
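
As a rough illustration of the kind of masking tweak meant here (assuming the drain3 MaskingInstruction class and the masking_instructions list on TemplateMinerConfig; check the names against your drain3 version), a mask that collapses file-path tokens before they reach Drain could look like this:

from drain3 import TemplateMiner
from drain3.masking import MaskingInstruction
from drain3.template_miner_config import TemplateMinerConfig

config = TemplateMinerConfig()
# Collapse absolute file paths into a single placeholder token so that
# recursive directory listings do not each become their own cluster.
config.masking_instructions.append(MaskingInstruction(r"/\S+", "PATH"))
template_miner = TemplateMiner(config=config)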

davidohana added the performance label Jun 16, 2021
shiosai commented Jun 16, 2021

Hi,
thanks for your reply. Perhaps my log is not very typical. In general, about 50% of it is rubbish (whole lines) that I want to mask. Mostly it's a recursive "LS" call that basically just lists thousands of file paths (each one a token), and each would become its own cluster. I could remove all these rubbish cases before feeding the log to Drain, but I thought it's better to have everything in one place. Additionally, there will always be new weird one-token lines. My logs are around 500,000 lines long, as users can basically do whatever they want, so it will not be feasible to always adapt the masks.
As for a PR, I will try my best, but for the moment I have added more cache-related workarounds that significantly improve the whole processing (e.g. I made another "cache" for the last few tokens/masks, since we have a lot of repetition - the assumption is that the same log lines tend to repeat).
Such an improvement makes the change above obsolete - at least for my logs.
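
As a rough sketch of what such a cache could look like (hypothetical, not part of Drain3): an LRU-style map from the masked line to the cluster it last matched, consulted before walking the parse tree.

from collections import OrderedDict

class RecentLineCache:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self._cache = OrderedDict()  # masked line -> cluster id

    def get(self, masked_line: str):
        cluster_id = self._cache.get(masked_line)
        if cluster_id is not None:
            self._cache.move_to_end(masked_line)  # mark as most recently used
        return cluster_id

    def put(self, masked_line: str, cluster_id: int) -> None:
        self._cache[masked_line] = cluster_id
        self._cache.move_to_end(masked_line)
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # evict least recently used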

davidohana (Collaborator) commented

Typical usage of Drain is for extracting up to a few thousand templates. Perhaps with the new max_clusters feature plus some optimizations, you can avoid some masking and just "forget" rare clusters. If you end up with generic improvements, please do a PR.
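
For reference, enabling the cluster cap looks roughly like this (assuming the drain_max_clusters option on TemplateMinerConfig; verify the attribute name against your drain3 version):

from drain3 import TemplateMiner
from drain3.template_miner_config import TemplateMinerConfig

config = TemplateMinerConfig()
config.drain_max_clusters = 1024  # rarely used clusters are evicted beyond this cap
template_miner = TemplateMiner(config=config)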

davidohana added the help wanted label Jan 9, 2022
shiosai commented Feb 5, 2022

I had to work on some other topics but want to get back to Drain. I plan to test it with billions of streamed log lines, and it seems that the patch shown here, for example, still adds some huge improvements. When I use e.g. drain_bigfile_demo.py, it doesn't really make a difference though - probably not too surprising, as it contains only about 50 clusters and most of the processing time goes into regex rather than Drain. So at the moment I cannot prove that it helps here.

Perhaps it makes sense to add a drain_hugefile_demo.py for benchmarking and such. It would also be interesting for improving the max_clusters feature, which, as you said, is fundamental for this use case. Does anyone have an idea where to get a huge (~2 GB) and very diverse log?
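
A minimal timing harness for such a hypothetical drain_hugefile_demo.py could look like this (huge.log is a placeholder path; add_log_message is the standard drain3 entry point, and drain.clusters is assumed to be available as in drain_bigfile_demo.py):

import time
from drain3 import TemplateMiner

template_miner = TemplateMiner()
line_count = 0
start = time.time()

with open("huge.log", "r", errors="ignore") as f:  # placeholder path
    for line in f:
        template_miner.add_log_message(line.rstrip())
        line_count += 1
        if line_count % 100_000 == 0:
            elapsed = time.time() - start
            print(f"{line_count} lines, {line_count / elapsed:.0f} lines/sec")

elapsed = time.time() - start
print(f"done: {line_count} lines in {elapsed:.1f}s, "
      f"{len(list(template_miner.drain.clusters))} clusters")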

davidohana (Collaborator) commented

Another demo with a significantly bigger log file would be a great addition.
Perhaps you can use one of the datasets here: https://github.com/logpai/loghub ?
