
Make nnbench ready for concurrency/parallelism #189

Open
nicholasjng opened this issue Dec 5, 2024 · 1 comment
Labels
enhancement New feature or request
Milestone
v0.5.0

Comments

@nicholasjng
Collaborator

From a conversation with @janfb.

Unlike microbenchmarks, whose results become less accurate when run in parallel due to resource contention, an ML benchmark suite can be run in parallel without losing accuracy. We should give an example of how to do that in the docs, or even make it the default (think something like make -j $N).

Maybe we can get started with a single multiprocessing backend, e.g. the standard library's multiprocessing module or joblib.

Bonus: It could be wise to restructure (read: drop) the BenchmarkRunner before this, and instead expose the collection and run-loop APIs as stateless functions, which are then fairly easy to parallelize over.
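
For illustration, here is a minimal sketch of that shape: a stateless collection function plus a stateless per-benchmark run function, fanned out with a multiprocessing pool. `collect_benchmarks` and `run_benchmark` are placeholder names, not existing nnbench APIs.

```python
# Sketch only: placeholder stateless collection/run functions, parallelized
# with multiprocessing. Names and signatures are illustrative, not nnbench's.
import multiprocessing as mp


def collect_benchmarks(path: str) -> list[dict]:
    """Discover benchmarks under `path` and return them as plain records."""
    return []  # placeholder: real discovery would walk `path` for benchmarks


def run_benchmark(benchmark: dict) -> dict:
    """Execute a single benchmark record and return its result record."""
    return {"name": benchmark.get("name"), "value": None}  # placeholder


if __name__ == "__main__":
    benchmarks = collect_benchmarks("benchmarks/")
    # Each benchmark is independent, so a plain pool.map is enough.
    with mp.Pool(processes=4) as pool:
        results = pool.map(run_benchmark, benchmarks)
```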

@nicholasjng nicholasjng added the enhancement New feature or request label Dec 5, 2024
@nicholasjng nicholasjng added this to the v0.5.0 milestone Dec 5, 2024
@nicholasjng
Collaborator Author

This also has implications for our benchmark structuring recommendations. In the case of a multiprocessing approach, it might be best to structure benchmarks per-algorithm (model) and send one model's entire benchmark suite to one Python process to improve memory efficiency.
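
As a rough sketch of what that could look like (all names here are hypothetical, not current nnbench APIs): one worker process per model, each loading its model once and running the full suite against it.

```python
# Sketch only: one process per model, so each model's weights are loaded
# exactly once and its whole suite runs inside that process.
from concurrent.futures import ProcessPoolExecutor


def run_suite_for_model(model_name: str) -> list[dict]:
    """Load one model, then run that model's entire benchmark suite in-process."""
    # model = load(model_name)            # hypothetical: load weights once
    # return [run(bm, model) for bm in collect(model_name)]
    return []  # placeholder result records


if __name__ == "__main__":
    models = ["model-a", "model-b"]  # hypothetical model identifiers
    with ProcessPoolExecutor(max_workers=len(models)) as pool:
        results = list(pool.map(run_suite_for_model, models))
```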

On the other hand, this clashes with our approach of parametrizing over a model input value within a single benchmark, which we advertise as a best practice for avoiding code duplication.
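
For context, the parametrized pattern in question looks roughly like this (the decorator usage is sketched from memory and may not match nnbench's actual signature): one benchmark parametrized over several models, which collection flattens into individual cases, so naively distributing cases across workers would reload the same model in multiple processes.

```python
# Illustrative only: the decorator call is an assumption about nnbench's
# parametrize API and may differ from the real signature.
import nnbench


@nnbench.parametrize(({"model": "model-a"}, {"model": "model-b"}))
def accuracy(model: str) -> float:
    # Placeholder: a real benchmark would load `model` and evaluate it,
    # which is exactly the per-case model loading that clashes with the
    # one-process-per-model layout above.
    return 0.0
```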
