Improve continuous benchmarking with Bencher #769
Thanks for reaching out, @epompeii! Yeah, we don't run the benchmarks on all PRs because it takes a long time to run them. But better reporting and comparison over time sounds very interesting. I'm curious to hear what your experience is running benchmarks online as part of CI? For us, performance varies quite a lot, making it a bit difficult to detect small changes in performance.
Yeah, this can definitely be a blocker. I think the most common thing I've seen is only running a subset of benchmarks on PRs to at least cover the critical path.
There are a few ways to handle this. In order of most to least effective:
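One common way to cover the critical path without running everything, as mentioned above, is to run only a named subset of benchmarks on pull requests. A minimal sketch as a GitHub Actions job, assuming a Rust project where `cargo bench -- <filter>` selects benchmarks by name; the workflow name and the `hash` filter string are illustrative placeholders, not from this thread:

```yaml
# Hypothetical PR workflow: run only a critical-path subset of benchmarks.
name: pr-benchmarks
on: pull_request
jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run critical-path benchmarks only
        # `cargo bench -- <filter>` runs only benchmarks whose names
        # match the filter string (here, benchmarks containing "hash").
        run: cargo bench -- hash
```

The full benchmark suite can then stay on a release or nightly schedule, where longer runtimes are acceptable.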
Thanks! That's great advice.
Hey fastcrypto team! I came across your white paper, and I think you all have built a pretty nice continuous benchmarking site.
I just wanted to reach out because I'm the maintainer of an open source continuous benchmarking tool called Bencher: https://github.com/bencherdev/bencher
It looks like you all currently only benchmark releases. Though, I may be missing something.
Bencher would allow you to track your benchmarks over time, compare the performance of pull requests, and catch performance regressions before they get merged.
I would be more than happy to answer any questions that you all may have!
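For context, a hedged sketch of what wiring Bencher into a CI step could look like, based on Bencher's `bencher run` CLI; the project slug, token secret name, and adapter choice below are placeholders for illustration, not values from this thread:

```yaml
# Hypothetical CI step: report benchmark results to Bencher so runs
# are tracked over time and PRs can be compared against a baseline.
- name: Track benchmarks with Bencher
  run: |
    bencher run \
      --project fastcrypto \
      --token "${{ secrets.BENCHER_API_TOKEN }}" \
      --adapter rust_criterion \
      "cargo bench"
```

The adapter tells Bencher how to parse the benchmark harness's output (here assumed to be Criterion); results are then stored per branch so regressions can be flagged before merge.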