
General guidelines for third party model submission #18

Open
WeiliangChenOIST opened this issue Feb 6, 2019 · 7 comments
Labels: discussion (open a discussion for non-bug/implementation/modelling topics), urgent (this issue needs to be investigated immediately)
@WeiliangChenOIST (Contributor)

@iahepburn @risingape @tristan0x @smelchio

Now that we have example PRs that are not from the developers, I think it is a good time to discuss general guidelines for these submissions.

In practice, I think it will be difficult for us to validate the submitted models, so the person who submits the PR will likely have to take that responsibility. In that case, we may need to add a small notice in the subfolder stating that we as the developers cannot guarantee the correctness of these models (and probably rename the subdirectory to third_party_examples).

Now, on to the reviewer's duties. In general, I think the reviewer should check whether the submission is understandable and whether the materials are properly referenced. But beyond that, do you think the reviewers should also rerun the scripts to see whether they produce matching results? Is there any other task you think the reviewer needs to complete before approval?

@WeiliangChenOIST added the discussion and urgent labels Mar 5, 2019
@WeiliangChenOIST (Contributor, Author)

Set it to urgent, as there are three pull requests waiting.

@iahepburn (Contributor)

My view on this is that it's fairly harmless to allow these models, since they are just examples and have nothing to do with validation per se. I agree that it's basically a question of how we label them; Tristan has already suggested 'notebooks' in a comment on a PR, and now they are all under 'notebooks' - does anyone prefer a different subdirectory to 'notebooks'?
Looking at this, I feel the directory name 'python_scripts' is too vague - all STEPS models are Python scripts. I would actually prefer to go back to the original label of 'manual_tutorial'.

@iahepburn (Contributor)

No replies from the Geneva side today, but how about this: we have two directories, "python_scripts" and "jupyter_notebooks", and then subdirectories in each of them, such as "user_manual" (which could probably appear in both), "publication_scripts", etc.?

@tristan0x (Collaborator)

> Now, on to the reviewer's duties. In general, I think the reviewer should check whether the submission is understandable and whether the materials are properly referenced. But beyond that, do you think the reviewers should also rerun the scripts to see whether they produce matching results? Is there any other task you think the reviewer needs to complete before approval?

I agree with @WeiliangChenOIST that the reviewer should not have to read the referenced papers and check the exact correctness of the values.
That being said, IMHO the reviewer has the responsibility to keep the repository consistent and understandable for non-contributors. This means the reviewer should ensure that:

  1. the code is understandable, i.e. the reviewer may ask the contributor to refactor parts of the code and/or add comments/explanations;
  2. the contribution follows Python good practices;
  3. the notebooks are consistent - for instance, the cells have to execute in order, from the beginning, with an empty kernel;
  4. the Python code is properly formatted.

The last three points can be enforced with a script I will work on later today. It should be added to this repository and mentioned in the CONTRIBUTING section we will add to the top-level README.md.
We can also set up a Travis CI configuration on this repository to run these checks automatically whenever someone creates a pull request.

@tristan0x
Copy link
Collaborator

Here is my proposal:

  1. First, @WeiliangChenOIST and @iahepburn refactor the file layout of this repository.
  2. Then @tristan0x sets up continuous integration to control the quality of the contributions (a rough sketch follows this list). This might lead to substantial changes in the source code of this repository.
  3. Finally, rebase the pull requests on the latest master and review them.

@tristan0x
Copy link
Collaborator

I have already done part of task (2) in pull request #19.

@WeiliangChenOIST (Contributor, Author) commented Mar 7, 2019

@iahepburn I think it is still easy to change the directory name for these PRs, and we should do so now.
Here are my suggested changes:

  1. Rename python_scripts back to examples; both Python scripts and notebooks can be put inside, but if an example consists of multiple files, it gets its own subfolder.
  2. [still under consideration] Provide subcategories such as well_mixed_serial, spatial_serial and spatial_parallel (sketched below).
  3. Provide an instruction file under examples to clarify the responsibilities of the example provider and the reviewer.

@WeiliangChenOIST self-assigned this Mar 11, 2019