Expand the "Maintaining an MDAKit" documentation #223

Open - wants to merge 8 commits into ``main``.
Binary file added docs/source/img/finding-ci-error.gif
95 changes: 95 additions & 0 deletions docs/source/maintaining-a-kit/improving.rst
@@ -0,0 +1,95 @@
#####################
Improving your MDAKit
#####################

Adding new features
===================
Developing your MDAKit can involve, for example, adding new features, or
altering existing ones to improve performance or user-friendliness. This may
come at your own initiative, or at the request of a user.

How you develop your kit is up to you! Just remember it's a good idea to
develop the tests and documentation alongside the new code (and to run the
existing tests to ensure you aren't causing any unintended changes).
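For instance, a new feature and its test can grow together. A minimal sketch (the function and test below are purely illustrative, not part of any real kit):

```python
# test_newfeature.py -- written alongside the (hypothetical) feature it covers.

def center_of_geometry(positions):
    """Hypothetical new feature: the mean of a list of (x, y, z) tuples."""
    n = len(positions)
    return tuple(sum(p[i] for p in positions) / n for i in range(3))

def test_center_of_geometry():
    # Two points symmetric about (1, 1, 1) should average to exactly that.
    positions = [(0.0, 0.0, 0.0), (2.0, 2.0, 2.0)]
    assert center_of_geometry(positions) == (1.0, 1.0, 1.0)
```

Running ``pytest`` over your whole suite after adding the feature then also confirms the existing behaviour is unchanged.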


Go beyond the minimal MDAKit Registry requirements
==================================================
The requirements for registering an MDAKit are intentionally minimal to keep
the entry barrier low. **Going beyond these minimal requirements is highly
recommended**.

This may include (but is not limited to):

- **More tests**: can you get to 100% coverage?!
- **More documentation**: Explain each part of your code. Add examples. Make
  your documentation visually appealing and easy to navigate!
- :ref:`Add a logo <logo>`: Spice up your MDAKit with your very own logo!
- **Use tooling and workflows**: don't rely solely on the MDAKit Registry's CI
  run - set up your own automated testing and alerts!
- **Community resources**: make it easy for users to report issues, ask
  questions - or even contribute to your MDAKit themselves!
- **Release on PyPI/conda-forge**: make it easier for users to install your kit!
- :ref:`Make a journal publication <publishing>`: get recognition for your code!
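As an illustration of the tooling and workflows point, a scheduled GitHub Actions workflow can run your tests automatically. This is a sketch only - the file name, schedule, and install/test commands are assumptions to adapt to your own kit:

```yaml
# .github/workflows/periodic-tests.yaml -- illustrative only.
name: periodic-tests
on:
  schedule:
    - cron: "0 3 * * 1"   # every Monday at 03:00 UTC
  workflow_dispatch: {}    # also allow manual runs
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -e ".[test]"
      - run: pytest
```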

.. note::
   If applicable, remember to :ref:`update your kit's metadata
   <update-metadata>` so new features are reflected on the Registry.


.. _update-metadata:

Updating the MDAKit's metadata
==============================
As your kit evolves, you may wish to update the information displayed on the
MDAKit's registry - for example, to expand the description, add a publication,
or update the installation instructions.

If this is the case, simply edit your kit's ``metadata.yaml`` in your fork of
the ``MDAKits`` repository, then make a new PR. The MDAKits team will then
review and merge your PR to apply the changes. Keeping this information current
ensures that users are not given outdated instructions for your kit.
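An update might look like the following excerpt. The field names follow the registry's ``metadata.yaml`` format, but all values here are invented for illustration:

```yaml
# Excerpt of a kit's metadata.yaml -- values are illustrative only.
project_name: examplekit
description:
    Tools for example analyses, now including the new clustering feature.
install:
    - pip install examplekit
publications: []   # add DOIs/URLs for any papers about your kit
```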


.. _publishing:

Publishing your MDAKit!
=======================
Publishing an article in a journal such as `JOSS <https://joss.readthedocs.io/>`_
is a good way to get recognition for your work and provide a citable source for
users.

`JOSS papers`_ are short, relatively simple, and in meeting the requirements
for registering an MDAKit, you will have already met many of the JOSS
submission requirements. Once you've built up your kit's documentation and
testing, consider publication with JOSS or a similar journal! At the very
least, consider minting a DOI for your repository (e.g. through
`Zenodo <https://zenodo.org/>`_) so that your work is citable.


.. _logo:

Adding a logo for your MDAKit
=============================
A custom logo can add some pizazz to your MDAKit. You are welcome to create an
entirely custom logo, use the default `'empty gear' template`_,
or modify the template - feel free to place your logo within the gears!

If you used the MDAKits cookiecutter, there is already a placeholder logo in
your documentation. The `MDAKits cookiecutter documentation`_ has information
on updating this.

.. note::
   MDAnalysis recommends that kit authors carefully check that any material
   used in their kit - including logos - is used under appropriate licenses,
   and does not infringe on any copyrights.

   MDAnalysis cannot give any legal advice on these matters; if you are
   unsure, please consult a lawyer.


.. _`MDAKits cookiecutter documentation`:
   https://cookiecutter-mdakit.readthedocs.io/en/latest/usage.html#documentation-configuration

.. _`JOSS papers`:
   https://joss.readthedocs.io/en/latest/submitting.html#submission-process

.. _`'empty gear' template`:
   https://github.com/MDAnalysis/branding/tree/main/templates/MDAKits

125 changes: 125 additions & 0 deletions docs/source/maintaining-a-kit/keep-healthy.rst
@@ -0,0 +1,125 @@
*************************
Keeping an MDAKit healthy
*************************

.. _failingci:

If an MDAKit fails the registry CI
==================================

In the event that a kit no longer passes its tests, an issue is automatically
raised on the `MDAKits GitHub`_, the maintainers (as identified in
``metadata.yaml``) are notified, and the kit's CI badges appearing on the
:ref:`Registry <mdakits>` will be updated to show that it is failing.

Note that two tests are run:

- **develop**, using the 'source' (development) version of your MDAKit
  (installed when running the commands under ``src_install`` in
  ``metadata.yaml``), and the ``develop`` (active development) branch of
  MDAnalysis.


- **latest** (if applicable), using the latest release of your MDAKit
  (installed when running the commands under ``install`` in
  ``metadata.yaml``), and the most recent release of MDAnalysis.

Depending on the nature of the failure, one or both of **develop** and
**latest** may be failing; since the **develop** run tracks in-development
code, it is the more likely of the two to break first.


Why did CI fail?
----------------
There are a number of reasons that the CI tests may fail - it could be an
internal issue arising as you develop your kit, or it may indicate that updates
are needed to keep in line with changes within MDAnalysis or other dependencies.
It may reflect a single test that is no longer passing, or that a larger error
is preventing your kit from being installed/any tests from being run.

If you don't already have an idea of the source of your kit's failure, you can
read the CI log file to find the exact point of failure and accompanying error
messages:

#. Click on the *'Actions'* tab on the
   `MDAKits GitHub page <https://github.com/MDAnalysis/MDAKits/>`_.

#. Click on the most recent *'GH Actions Cron CI'* job.

#. Under *'Annotations'*, find and click the failing job(s) with your kit's
   name. Failing jobs should show a red cross and be grouped at the top.

#. You should be directed to the place in the CI log where the failure occurs.
   Some scrolling may be required to find the origin of the error.

.. image:: ../img/finding-ci-error.gif
   :alt: Finding the error message after being notified of a failing CI run


Fixing a failure
----------------
Once the point of failure has been identified, you can set about trying to fix
it. The exact fix required will of course depend on exactly what went wrong, but
hopefully the error message(s) in the log will be enough to get you started.

Any fixes will be applied in your kit's home repository - no direct interaction
with the Registry is required. If you suspect the failure stems from a breaking
change in MDAnalysis itself, consider also raising an issue on the
`MDAnalysis issue tracker <https://github.com/MDAnalysis/mdanalysis/issues>`_.

If you're still not sure what's gone wrong or how to fix it, you can comment on
the issue that was raised on the `MDAKits GitHub`_. The MDAKits team, or
other members of the community, may be able to help - but remember, ultimate
responsibility remains with **you**.


After applying a fix
--------------------
Once you have applied a fix to your MDAKit (and, if applicable, pushed a new
release with these changes applied), no further action is required from you.

Assuming that the fix does indeed solve the issue, the tests will pass the next
time the automated CI is run. After the successful run, the CI badges on the
:ref:`Registry <mdakits>` will be restored to 'passing' and the issue raised on
the `MDAKits GitHub`_ will be automatically closed.


Keeping an eye out for upstream changes
=======================================
Avoid failing tests before they happen! (If you used the MDAKits cookiecutter,
your kit already includes a workflow for testing against the development
version of MDAnalysis.)

Just as you are likely to keep improving your kit, the upstream packages on
which it relies - including MDAnalysis - will also continue to evolve.
Sometimes, this means that functionality your kit relies on will no longer
work.
Keeping an eye out for such changes will allow you to modify your kit
appropriately *before* the upstream change is fully applied and your code
starts to fail.

Usually, a package will warn users of any upcoming changes that may affect
downstream usage (e.g. changing how a function is to be used) by raising a
`DeprecationWarning
<https://docs.python.org/3/library/exceptions.html#DeprecationWarning>`_ when
the affected function is used.
If your kit relies on any such to-be-changed features, then (assuming the
relevant code is covered by your kit's tests) these warnings will be triggered
when running the tests - whether locally or via the automated CI runs - so it
pays to keep an eye on them!
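As a sketch of the mechanism (the deprecated function here is invented for illustration, not a real MDAnalysis API):

```python
import warnings

def old_distance(a, b):
    """Stand-in for an upstream function scheduled for replacement."""
    warnings.warn(
        "old_distance() is deprecated and will be removed in a future "
        "release; use new_distance() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return abs(a - b)

# Recording warnings during a test run surfaces upcoming upstream
# changes before they actually break your kit.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_distance(3, 7)

assert result == 4
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

Pytest can also promote such warnings to errors (``-W error::DeprecationWarning``) so that they cannot slip by unnoticed.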

It is also a good idea to check the release notes of new releases of packages
your kit uses, and to watch for any announcements of major upcoming changes.


Keeping support windows in mind
===============================
Your kit should specify which versions of the software it relies on (including
Python) it works with. Ideally, as new versions of these dependencies are
released, your kit will be updated to work with these.

It is *not* expected that your kit remains compatible with *all* historic
releases - indeed, many old versions of these packages will not work with each
other. These packages also have **support windows**: the length of time after
a given release for which the developers will continue to ensure it works as
intended.

`SPEC0 <https://scientific-python.org/specs/spec-0000/>`_ is a standard outlining
a timeline of which versions of Python and common dependencies in the Scientific
Python ecosystem should be supported and compatible with each other. You can
follow SPEC0 to determine which Python/dependency versions you should aim to
support, and which old versions you can drop.
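In practice this translates into version floors in your packaging metadata. An illustrative ``pyproject.toml`` excerpt (the project name and the specific minimum versions are assumptions - check SPEC0 for the current recommendations):

```toml
# pyproject.toml excerpt -- version floors are illustrative only.
[project]
name = "examplekit"
requires-python = ">=3.10"
dependencies = [
    "numpy>=1.24",
    "MDAnalysis>=2.7",
]
```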


.. _`MDAKits GitHub`:
   https://github.com/MDAnalysis/MDAKits/issues
41 changes: 30 additions & 11 deletions docs/source/maintainingakit.rst
@@ -9,19 +9,38 @@ Maintaining an MDAKit
like to see covered here, please get in touch via
`MDAnalysis GitHub Discussions`_.

Successfully registering an MDAKit is not the end of the journey! A software
package still requires ongoing attention to keep it healthy and thriving.
This includes both expanding and adding new features and ensuring the MDAKit
continues to run as intended. **While not required for an MDAKit to remain in
the registry, such activities are highly encouraged**.

There are a variety of reasons a kit may behave unexpectedly after being
submitted to the registry. Apart from actively developing the kit, changes in
kit dependencies, or even Python itself, can introduce/deprecate new/old
functionality.

As part of the MDAKit Registry, your kit's tests (as specified in
``metadata.yaml``) are automatically rerun twice a week, so that you (and
potential users) have assurance that your code still works as intended, or are
notified when it does not.

**However, the ultimate responsibility for maintaining your MDAKit remains with
you.**

The sections below provide some information to keep in mind for maintaining your
MDAKit after registration.

.. toctree::
   :maxdepth: 2

   maintaining-a-kit/keep-healthy
   maintaining-a-kit/improving


.. _`MDAnalysis GitHub Discussions`:
   https://github.com/MDAnalysis/mdanalysis/discussions


Want to go even further beyond your MDAKit? You can also help us out by
:ref:`reviewing other MDAKits <reviewers-guide>`!