10-fold manysearch performance slowdown between v0.9.5 and v0.9.6 #463
Comments
I used git bisect to confirm that it's #302 that causes the big bump. Still working to figure out why 😅
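For anyone following along, a git bisect run between the two release tags looks roughly like this; the per-commit benchmark step is illustrative, not the exact command used here, and it assumes the v0.9.5/v0.9.6 tags exist in the local clone:

```bash
git bisect start
git bisect bad v0.9.6      # release with the ~10x slowdown
git bisect good v0.9.5     # last known fast release
# at each commit git checks out: rebuild the plugin, time a small manysearch run,
# then mark the commit until git reports the first bad one (here, the #302 merge)
git bisect good            # or `git bisect bad`, depending on the timing
git bisect reset           # return to the original branch when done
```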
Ah-hah! Figured it out! I'll write it up in more detail. (Fixed in #464)
Updated benchmarks as of v0.9.8 are here: #479. Loading via .sig.zip files and manifest CSVs seems to resolve the slowdown present since v0.8.6, too. 🎉
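For anyone hitting this on an affected release, the difference between the two loading paths looks something like the following; the file names are placeholders, and the exact flags of the plugin's manysearch command should be checked against its --help output:

```bash
# pathlist input: every .sig file is read once to build a manifest,
# then read again when the sketch is actually loaded for search
sourmash scripts manysearch query-paths.txt against-paths.txt -o results.csv

# zipped collection (or manifest CSV) input: the embedded manifest is used directly,
# so sketches are only parsed at search time
sourmash scripts manysearch queries.sig.zip against.sig.zip -o results.csv
```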
When benchmarking #430, I noticed a massive slowdown in manysearch. I took a few days to track it down using our conda-forge releases and https://github.com/dib-lab/2022-branchwater-benchmarking.
See results in a_vs_d.txt under benchmarks/ for the various branches of the benchmark repo - summarized below.

[summary table omitted; one configuration ran with --ignore-abundance]
I think the max memory is variable because it depends on loading order and on how long the sketches are held in memory.
The bigger concern, of course, is the 10x increase in time that happened with release v0.9.6.
The culprit is #302, which added abundance-weighted information to manysearch and introduced a performance regression around sketch downsampling. #464 fixes the regression and also makes the abundance calculations user-toggleable.
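As a usage note, with the toggle from #464 the abundance-weighted columns can be skipped when they are not needed; --ignore-abundance is the flag referenced in the benchmarks above, and the other arguments are placeholders:

```bash
# skip abundance-weighted calculations; only the flat similarity/containment columns are reported
sourmash scripts manysearch queries.sig.zip against.sig.zip -o results.csv --ignore-abundance
```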
On a separate note, there has been a steady increase in execution time across releases - from v0.8.6 to v0.9.0, and again between v0.9.1 and v0.9.5.
I think the first of these two slowdowns might have to do with the switch to core manifest loading, and in particular the need to read .sig files twice when using pathlists: once to build a manifest, and again when actually loading the sketches for search. This might be something we can fix when dealing with #445.
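One possible interim workaround, sketched under the assumption that the input is a plain pathlist of .sig files: bundle the sketches into a zip with an embedded manifest once, up front, so later searches do not have to rebuild the manifest.

```bash
# build the collection (and its manifest) a single time
sourmash sig cat --from-file against-paths.txt -o against.sig.zip

# later searches load sketches via the zip's manifest instead of
# re-reading every .sig file just to construct one
sourmash scripts manysearch queries.sig.zip against.sig.zip -o results.csv
```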
Not sure what changed between v0.9.1 and v0.9.5, though. Maybe I should run some more benchmarks to nail down the exact release where the time increase occurred. Or maybe we should just profile the heck out of the plugin. Or both.
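If we go the profiling route, one option (a suggestion only, nothing has been run yet) is to sample both the Python and native stacks during a representative run, e.g. with py-spy:

```bash
# samples Python and native (Rust) frames while manysearch runs;
# the resulting flamegraph should show where the extra time goes
py-spy record --native -o manysearch-profile.svg -- \
    sourmash scripts manysearch queries.sig.zip against.sig.zip -o results.csv
```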