fix: parse and clean archive badges and markdown links to URL #243
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #243      +/-   ##
==========================================
+ Coverage   74.06%   75.67%   +1.60%
==========================================
  Files          10       10
  Lines         671      703      +32
  Branches       82       90       +8
==========================================
+ Hits          497      532      +35
+ Misses        166      161       -5
- Partials        8       10       +2
```

☔ View full report in Codecov by Sentry.
hey @banesullivan i'll leave some more specific feedback here but it looks like …
I'm also noticing some inconsistency in JOSS: for joss / sleplet, there is a link to the paper. For nbcompare, there is a link to the DOI, which resolves to the paper! Both will work! My question is: should we be consistent in how we save the DOI, and always link to the DOI rather than the paper, in terms of the data we store in our "database," aka YML file?

the archive value is inconsistent, too, but I think it's OK as is. Especially because sometimes it's a GitHub link and others it's Zenodo, and sometimes the Zenodo link is to the "latest" rather than the actual archive VERSION that we approved. So let's not worry about archive and focus on the JOSS DOI, as it hopefully isn't a huge amount of work. if it is, we are ok as is and we can open an issue about a future iteration that cleans it up just a bit more.

So, the takeaways here are:
Thank you so much. This looks really great!! 🚀 If you'd like, we could merge this as is today, and you could work on the comments above in another pr. that would allow us to update the website tomorrow. if you'd like to complete all of the work in this pr, say the word, and we can hold off until January to get things merged.
oops. it looks like i left blocks of feedback but not the line-by-line comments. i'm just making sure those are visible now.
```diff
@@ -7,6 +7,8 @@
 from datetime import datetime
 from typing import Any

+import doi
```
Since we've added a new dep, we should make sure that it is noted in the changelog and also document why we added it.
Further clarified this in e1246e0
This utility will attempt to parse the DOI link from the various formats that are commonly present in review metadata. This utility will handle:

* Markdown links in the format `[label](URL)`, e.g., `[my archive](https://doi.org/10.1234/zenodo.12345678)`
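Hypothetically, the link-target extraction could be as small as the following sketch; the regex and function name here are illustrative, not the PR's exact implementation:

```python
import re

# Any "(https://...)" target in the string. For a badge like
# [![alt](badge.svg)](https://doi.org/...), the last match is the outer
# link target, which is the URL we want to keep.
URL_TARGET = re.compile(r"\((https?://[^)\s]+)\)")

def extract_url(value: str) -> str:
    """Return the link target from a markdown link/badge, else the value as-is."""
    matches = URL_TARGET.findall(value)
    return matches[-1] if matches else value.strip()

print(extract_url("[my archive](https://doi.org/10.1234/zenodo.12345678)"))
# https://doi.org/10.1234/zenodo.12345678
```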
it looks like the issue that failed when i ran it locally was one using a zenodo badge:

pyOpenSci/software-submission#83

honestly, I may have updated and added that (it is possible). but it would be good to parse a markdown badge URL too. let me know if you don't see that error, but i saw it running locally.
All good, it was the JOSS DOI field causing the failure, as it was left blank for pyOpenSci/software-submission#83. The Zenodo badge in this example is handled and is a recurring pattern on a lot of reviews.
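Loosely, the guard for a blank field might look like this (reusing the `extract_url` sketch above); `clean_doi` is a made-up name and the real fix in b6b9f27 may differ:

```python
def clean_doi(value: str | None) -> str | None:
    """Illustrative: treat a missing/blank DOI field as None rather than erroring."""
    if value is None or not value.strip():
        return None
    # Otherwise fall through to the normal markdown-link/badge parsing path.
    return extract_url(value)
```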
src/pyosmeta/utils_clean.py (Outdated)

```python
archive = archive.replace("http://", "https://")
# Validate that the URL resolves
if not check_url(archive):
    raise ValueError(f"Invalid archive URL: {archive}")
```
I think we want to use `pytest.raises(ValueError, match=...)` to hit these missing lines in our coverage.
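For example, a test along these lines would cover the raise; `clean_archive` is an assumed entry point, so substitute the real function in `pyosmeta.utils_clean` that performs this validation:

```python
import pytest

from pyosmeta.utils_clean import clean_archive  # assumed name; adjust to the real one

def test_invalid_archive_url_raises():
    # The URL is a placeholder that should fail the check_url() resolution step.
    with pytest.raises(ValueError, match="Invalid archive URL"):
        clean_archive("https://example.com/not-a-real-archive")
```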
Added more validation testing in a4e9477
hey @banesullivan i'm checking in on this pr. we are running into the issue that every pr on the website fails CI because of the dois! i think we have two options.
The above approach is great if you are busy during this first full week back!! Please let me know what you prefer!
Co-authored-by: Leah Wasser <[email protected]>
Whoops! Not sure how I missed this on the first go around. I tracked this down and it was because the JOSS DOI field was left blank. I have updated the parser/validator to handle missing data like this in b6b9f27
Ooof, I didn't realize how much slower it was until now. Indeed it is the DOI validation step that is causing this slowdown. I can add a print statement, perhaps in this for-loop, to indicate each package it is currently processing. With this dramatic of a slowdown, I'm considering not doing the full validation of the DOI URLs to ensure they resolve correctly. Perhaps these could be gathered, and at the end of critical build steps (like for the documentation) we check these DOI URLs to validate that they resolve. For now, I'm adding a simple print statement.
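To make that concrete, here is a sketch of the progress print plus deferred check being described; `reviews`, the `joss` field name, and the stubbed `check_url` are all assumptions rather than the PR's actual code:

```python
import requests

def check_url(url: str) -> bool:
    """Stub resolution check; the PR's check_url is assumed to be similar."""
    try:
        return requests.head(url, allow_redirects=True, timeout=10).ok
    except requests.RequestException:
        return False

def validate_dois(reviews: dict[str, dict]) -> list[str]:
    """Print per-package progress and collect DOI URLs that fail to resolve,
    so failures can be reported at the end (e.g., in a docs build step)."""
    failed: list[str] = []
    for package_name, review in reviews.items():
        print(f"Processing {package_name} ...")
        doi_url = review.get("joss")
        if doi_url and not check_url(doi_url):
            failed.append(doi_url)
    return failed
```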
I'm not sure! This feels weird to me as the DOIs themselves resolve to the paper, and if I use the `doi` package:

```python
>>> import doi
>>> doi.validate_doi('10.21105/joss.05221')
'https://joss.theoj.org/papers/10.21105/joss.05221'
```

I'm not sure what's best here. This may be something we engage JOSS directly on and follow up.
agh! Let's push to merge as is at this point and document any lingering issues, like archive URL format inconsistencies, for future work.
thank you @banesullivan this is running much better now and isn't erroring. let's merge. I've opened a few issues (#248, #247) that we can work on independently. i'm going to merge this and push out a new patch release to see if we can get ci to be happy on the website side of things!! 🤞🏻
This will parse markdown links/badges to consistently capture DOI URLs from the archive and JOSS DOI fields.

Note that this adds https://github.com/papis/python-doi as a dependency.

The solution I landed on here is to always coerce the DOI to a single URL, since this is what the review templates expect here:

https://github.com/pyOpenSci/pyopensci.github.io/blob/d0b561cc493f6e4691f171b4460b0f7d4793267a/_includes/package-grid.html#L42
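As a rough illustration of that coercion (the helper is made up; `doi.validate_doi` is the real python-doi call shown earlier in this thread, though its behavior on an invalid DOI is assumed here):

```python
import doi  # python-doi: https://github.com/papis/python-doi

def coerce_doi_to_url(value: str) -> str:
    """Illustrative: normalize a bare DOI or doi.org URL to one canonical URL."""
    # Strip any doi.org prefix so '10.xxxx/yyyy' and its URL form converge.
    bare = value.replace("https://doi.org/", "").replace("http://doi.org/", "")
    doi.validate_doi(bare)  # resolve the DOI; assumed to error if invalid
    return f"https://doi.org/{bare}"
```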
The parser here now also validates the DOI links using the python-doi dependency. For testing/validation, I needed to use a real/valid DOI for the test data, so I used PyVista's DOI for these 😄

I also noticed that some review issues had the JOSS archive key as `JOSS DOI` and others have it as `JOSS`. The changes here account for that and ensure data are normalized to `JOSS`.
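A sketch of that key normalization; the dict-based rename is my guess at the shape of the fix, not the PR's exact code:

```python
# Map archive-key variants seen in review issues onto one canonical key.
KEY_ALIASES = {"JOSS DOI": "JOSS"}

def normalize_keys(metadata: dict) -> dict:
    """Illustrative: rename 'JOSS DOI' to the canonical 'JOSS' key."""
    return {KEY_ALIASES.get(key, key): value for key, value in metadata.items()}
```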