As a user, it would be good to have some examples of how things look after a long run (long here meaning, say, 10 minutes of MCMC with a toy model that is fast). These examples should include their output so that users don't have to repeat the computation themselves.
However, the problem with having these in this repository is that you end up adding a slow step to CI, which is really annoying for development. You also end up with output committed to the repository, which quickly bloats it (even notebooks can get really big if they contain many plots).
My proposed solution would be to put any long-running examples in a separate repository. That gives a clear signal that these examples aren't run every time we change the code base, while still giving us somewhere to keep long-run examples with their output stored. Perhaps we would just update that examples repository with each release. This solution also avoids filling our dev repository with notebook output and avoids adding a slow, long-running step to the main repository's development workflow.
Other context
Whatever we do, there are some things to consider about how to test these long-run examples.
One option would be to just write them in Python and test them using the existing doctest functionality. There is an example of something like this here: https://towardsdatascience.com/python-documentation-testing-with-doctest-the-easy-way-c024556313ca. Those examples could then be formatted with https://github.com/adamchainz/blacken-docs.
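As a rough sketch of what that could look like (the file name and the `posterior_mean` helper below are placeholders, not existing code), a long-run example would just be an ordinary module whose docstring doubles as a doctest:

```python
# examples/long_run_example.py (hypothetical file name)
# A long-run example written as a plain Python module; the docstring
# doubles as a doctest so it can be checked with standard tooling.

def posterior_mean(samples):
    """Return the mean of a list of MCMC samples.

    >>> posterior_mean([1.0, 2.0, 3.0])
    2.0
    """
    return sum(samples) / len(samples)


if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

Running `python -m doctest examples/long_run_example.py` (or pytest with `--doctest-modules`) would then verify that the documented output still matches what the code produces.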
If we use notebooks, they could be tested with pytest and nbmake, as described at https://semaphoreci.com/blog/test-jupyter-notebooks-with-pytest-and-nbmake.
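For example (assuming the notebooks live under an `examples/` directory, which is just a guess at the layout), the check could be as simple as:

```
pip install pytest nbmake
pytest --nbmake examples/
```

nbmake re-executes each notebook and fails the test if any cell raises an error, so it would catch examples that no longer run, although it would not by itself check that the stored output is still up to date.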