
Plotting program perf_plotter added under test/src #78

Open · wants to merge 3 commits into master

Conversation

@John-Bonnin

This adds perf_plotter.py, which generates rich plots of performance data: it plots the contents of ../benchmarks/benchmarks.txt as a number of small graphs. You can call it with an integer argument to plot just one of the graphs.
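For readers who have not run the script, here is a minimal sketch of the approach described above. The format of benchmarks.txt (one whitespace-separated "solver time" pair per line) is an assumption for illustration; the real file layout is not shown in this PR.

```python
import sys
from collections import defaultdict

import matplotlib.pyplot as plt


def load_results(path="../benchmarks/benchmarks.txt"):
    # Assumed format: one whitespace-separated "solver time" pair per line.
    results = defaultdict(list)
    with open(path) as f:
        for line in f:
            solver, time = line.split()
            results[solver].append(float(time))
    return results


def plot(results, only=None):
    names = sorted(results)
    if only is not None:
        names = [names[only]]  # integer argument: show just one graph
    fig, axes = plt.subplots(1, len(names), squeeze=False)
    for ax, name in zip(axes[0], names):
        ax.plot(results[name], marker="o")
        ax.set_title(name)
        ax.set_xlabel("trial")
        ax.set_ylabel("planning time (s)")
    plt.show()


if __name__ == "__main__":
    index = int(sys.argv[1]) if len(sys.argv) > 1 else None
    plot(load_results(), only=index)
```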

@marip8
Collaborator

marip8 commented Sep 3, 2021

@John-Bonnin can you post a picture of the generated plot(s) for reference?

@John-Bonnin
Author

[Screenshot: Sample Graphs Window]

Is there a better place to create documentation? I'm not familiar enough with GitHub yet.

@marip8
Collaborator

marip8 commented Sep 7, 2021

> Is there a better place to create documentation? I'm not familiar enough with GitHub yet.

Posting images on this PR is fine. I just wanted to document what these graphs looked like (and see for myself) without having to run the code.

To me, the value of this plot would be to see the performance of the various solvers next to one another for the same trial, rather than seeing the results for each solver individually over all trials. In theory I would like to look up the parameters of my particular problem (# of waypoints, # of samples per waypoint, DOF), see how all available solvers perform, and then be able to choose the best one. I think this might look like a 2D plot of planning time vs. graph size (# of waypoints * # of samples per waypoint) with all of the solvers (of the same template type, i.e. float or double) on the plot. We could also look at other ways of parameterizing this plot.
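A rough sketch of what this comparison plot could look like; the record fields solver, n_waypoints, n_samples, and time are assumed names for illustration, not taken from the repository.

```python
import matplotlib.pyplot as plt


def plot_solver_comparison(results):
    """results: list of dicts with assumed keys solver, n_waypoints, n_samples, time."""
    fig, ax = plt.subplots()
    for solver in sorted({r["solver"] for r in results}):
        rows = [r for r in results if r["solver"] == solver]
        # Graph size = # of waypoints * # of samples per waypoint
        sizes = [r["n_waypoints"] * r["n_samples"] for r in rows]
        times = [r["time"] for r in rows]
        ax.plot(sizes, times, marker="o", label=solver)
    ax.set_xlabel("graph size (# waypoints * # samples per waypoint)")
    ax.set_ylabel("planning time (s)")
    ax.legend()
    plt.show()
```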

@John-Bonnin can you prototype a change like this and post an image of the resulting plot(s)? No need to formally commit any changes right now until we settle on the final structure of the plots

@Levi-Armstrong @colin-lewis-19 any thoughts about the purpose/appearance of the proposed plot?

@colin-lewis-19
Contributor

I agree with the usefulness of direct comparisons. Being able to down-select which benchmarks are run or plotted may also be useful if we are iterating on an implementation.
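As an illustration of the down-selection idea, a small hypothetical filter on benchmark names; the argument name and the placeholder data below are made up, not part of this PR.

```python
import argparse


def select(results, pattern):
    """Keep only the benchmarks whose name contains `pattern`."""
    return {name: data for name, data in results.items() if pattern in name}


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Plot a subset of the benchmarks")
    parser.add_argument("--filter", default="",
                        help="only plot benchmarks whose name contains this string")
    args = parser.parse_args()

    # Illustrative placeholder data; real results would come from benchmarks.txt.
    results = {"solver_a_float": [0.10, 0.12], "solver_b_double": [0.31, 0.29]}
    print(select(results, args.filter))
```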

(Six review comments on descartes_light/test/src/perf_plotter.py, now outdated and resolved.)
…ts; restructured .plot() to only display requested graphs, permitting easy 1:1 comparison.
@Levi-Armstrong
Collaborator

Most likely, if you switch to leveraging Google Benchmark, you will have to update the plotter to parse the generated output. Also, with Google Benchmark, the plotting could already be handled by a GitHub Action that publishes the results to the repo's GitHub Pages.
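For reference, a sketch of what parsing Google Benchmark output could look like if the benchmarks are switched over. This assumes the standard Google Benchmark JSON report (e.g. produced with --benchmark_out=results.json --benchmark_out_format=json) rather than the console table.

```python
import json


def load_google_benchmark(path="results.json"):
    with open(path) as f:
        report = json.load(f)
    # Each entry in "benchmarks" carries at least "name", "real_time",
    # "cpu_time", and "time_unit" in the standard JSON report.
    return [(b["name"], b["real_time"], b["time_unit"])
            for b in report["benchmarks"]]


if __name__ == "__main__":
    for name, real_time, unit in load_google_benchmark():
        print(f"{name}: {real_time} {unit}")
```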

Renamed class to correct convention.

Co-authored-by: Michael Ripperger <[email protected]>