
Peer Review Guidelines #11

Open · augustfly opened this issue Feb 22, 2018 · 3 comments

Comments

@augustfly (Contributor)

If the suggestion is that we summarize our backend editorial / peer review guidelines, then I'm going to balk at posting anything that isn't already public/official. We don't need to add noise to the problem by talking about anecdotal rather than documented processes.

We would do well to include a summary of, or link to, the excellent JOSS editorial checklist or the rOpenSci onboarding process.

@dbouquin (Contributor)

The goal was just to give examples (public only) of how different publishers and journals are talking about software with authors. I think "Guidelines" is a problematic term that is causing some confusion. @womullan just summarized what he could share from ASCOM. Any recommendations for better phrasing?
I agree those links should be included!

@augustfly (Contributor, Author)

Here is another public checklist we could reference (along with its associated policy and guidelines), though it spills over from software into replication/reproducibility:
https://ajps.org/ajps-replication-policy/
https://ajpsblogging.files.wordpress.com/2016/05/ajps-replic-guidelines-ver-2-1.pdf
https://ajpsblogging.files.wordpress.com/2016/05/quant-data-checklist-ver-1-2.pdf

@augustfly (Contributor, Author)

Yeah, I think that the dissonance between standard peer review and these formulaic approaches is really high. I'm unconvinced that we would be doing authors a service by logging soft, undocumented, closed efforts that fall in between. I might give talks on this aspect of our workflow, but I don't feel comfortable including them here unless we (the AAS) have docs to back up the work we claim to do.

Looking at where this is going, let's imagine that the community writes a set of hard recommendations for software review. What are a journal's choices? Reject them, because you know you aren't going to be able to implement them uniformly in the current peer review system? Accept them provisionally, hoping for a pat on the back from the community by delivering best effort? Accept them, claim to do them, and not actually do them, because you know you aren't going to be able to implement them uniformly in the current system?

Seems to me that the only truly successful outcome will be for journals to adopt them and change their systems to make transparent how software review has been implemented.

Perhaps I'm a few steps ahead of, or behind, the conversation.
