General guidelines for third party model submission #18
Comments
Set it to urgent as there are three pull requests waiting.
My view on this is that it's fairly harmless to allow these models, since they are just examples and have nothing to do with validation per se. I agree that it's basically a question of how we label them. Tristan has already suggested 'notebooks' in a comment on a PR, and now they are all under 'notebooks' - does anyone prefer a different subdirectory to 'notebooks'?
No replies from the Geneva side today, but how about this - we have two directories, "python_scripts" and "jupyter_notebooks", and then subdirectories in each of those, like "user_manual" (which could probably appear in both), "publication_scripts", etc.?
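A rough sketch of what that layout could look like (the directory and subdirectory names below are only illustrative, taken from the suggestion above, and not an agreed structure):

```
python_scripts/
    user_manual/
    publication_scripts/
jupyter_notebooks/
    user_manual/
    publication_scripts/
```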
I agree with @WeiliangChenOIST that the reviewer should not have to read the referenced papers and check the exact correctness of the values.
The last 3 points can be enforced with a script I will work on later today (a sketch follows after this comment). It should be added to this repository and mentioned in the CONTRIBUTING section we will add to the top-level README.md.
Here is my proposal:
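Since the details of the proposal are not quoted above, the following is only a hypothetical sketch of the kind of check script mentioned in the previous comment, assuming each third-party example lives in its own subdirectory under "notebooks" and must ship a README.md that cites its source; the directory name, file names, and rules are assumptions, not the actual proposal:

```python
#!/usr/bin/env python3
"""Hypothetical submission check: the directory name, file names, and rules
below are assumptions for illustration, not the actual proposal from this thread."""

import sys
from pathlib import Path

# Assumed location of third-party example submissions.
EXAMPLES_DIR = Path("notebooks")


def check_submission(subdir):
    """Return a list of problems found in one submission directory."""
    problems = []
    readme = subdir / "README.md"
    if not readme.is_file():
        problems.append(f"{subdir}: missing README.md")
    elif "reference" not in readme.read_text(encoding="utf-8").lower():
        problems.append(f"{subdir}: README.md does not appear to cite a reference")
    return problems


def main():
    problems = []
    for subdir in sorted(p for p in EXAMPLES_DIR.iterdir() if p.is_dir()):
        problems.extend(check_submission(subdir))
    for problem in problems:
        print(problem, file=sys.stderr)
    return 1 if problems else 0


if __name__ == "__main__":
    sys.exit(main())
```

A script along these lines could run in CI on each pull request, so reviewers would only need to judge clarity and referencing by hand.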
I have already done part of task (2) in pull request #19.
@iahepburn I think it is still easy to change the directory name for these PRs, and we should do so now.
@iahepburn @risingape @tristan0x @smelchio
Now that we have example PRs that are not from the developers, I think it is a good time to discuss general guidelines for these submissions.
In practice, I think it will be difficult for us to validate the submitted models, so the person who submits the PR will likely take that responsibility. In that case, we may need to add a small notice in the subfolder (an example wording follows below) stating that we as the developers cannot guarantee the correctness of these models (and probably rename the subdirectory to third_party_examples). Now, on to the reviewer's duty: in general, I think the reviewer should check whether the submission is understandable and whether the materials are properly referenced. But beyond that, do you think the reviewers should also rerun the scripts to see whether they produce matching results? Or is there any other task you think the reviewer needs to complete before approval?
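One possible wording for such a notice (only a suggestion, not agreed text) could be:

```markdown
# Third-party examples

The models in this directory were contributed by users of the project.
Submissions are reviewed for clarity and proper referencing, but the
developers cannot guarantee the scientific correctness of these models
or of the results they produce.
```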