
Fixing peer review

Today Academic Karma comes out of beta with the launch of a new website, and we thought it was the perfect opportunity to explain what we think is wrong with peer review, why this is bad for science, and how Academic Karma aims to fix it.

Peer review is – at its best – a cornerstone of science. Good peer review identifies potential weaknesses in scientific work, encourages authors to do further work to provide convincing evidence where necessary, and helps to ensure that the details required for others to understand and replicate experiments are presented. Good peer review should lead to greater reproducibility and fewer retractions.

While publication is heavily incentivized – and publication rates continue to grow dramatically (e.g. doi:10.1371/journal.pone.0068397) – there are meagre rewards for good peer review. Editors recount stories of prolific scientists who barely peer-review at all. The reviewing academics do is done largely out of a sense of duty or in response to friendly coercion from editors. As a result, peer review is rarely prioritised, which leads to long delays in publishing.

As explained in our animation and FAQ, Academic Karma is a universal peer review platform. This means that, as a reviewer, you can now do all your reviewing, for whatever journal, through Academic Karma. You then have a complete copy of all your peer review in a single place, and a public profile listing the number of times per calendar year you reviewed for each journal. Unfortunately there is currently one caveat – the journal has to accept your review via email. Academic Karma will customise the review form to match the format required by any journal, which in our experience makes it possible for most journals to accept the review. And if a journal is not willing to process a review containing all the relevant information simply because you sent it via email, do you really want to review for that journal anyway?

But the point of this blog is not to sell you on why you should use Academic Karma to improve your experience of peer review (although we think it will!) but to show how a universal peer review platform like Academic Karma can help fix peer review.

The first way is that it changes the incentive structure around reviewing. Academic Karma does this using a reviewing currency (which we call karma). When you review you earn karma, and when your manuscript is reviewed you pay karma. By keeping a positive balance, you are reviewing enough to support the rate at which you are publishing. As an extra incentive, if you have a positive balance you can see reviews of your paper as soon as each reviewer completes them (i.e. without waiting for all of the other reviewers and editors to finish). Karma can be awarded on the basis of the timeliness of a review, its quality, or both. We currently transfer karma for reviews completed within 10 days (50 karma) or 20 days (25 karma) to incentivise timeliness, but it would be possible to incorporate quality measures into this transfer.
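As a concrete sketch, the timeliness rule above can be written as a simple function (the function name is ours; the thresholds are the ones currently in use):

```python
def karma_for_review(days_taken: int) -> int:
    """Karma transferred for a completed review, based on turnaround time.

    Current policy: 50 karma if completed within 10 days, 25 karma within
    20 days, nothing after that. A quality score could later scale this.
    """
    if days_taken <= 10:
        return 50
    if days_taken <= 20:
        return 25
    return 0
```

A reviewer's balance would then rise by this amount per review, and fall by the corresponding cost each time their own manuscript is reviewed.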

The second way is that it keeps a publicly visible profile of an academic’s contribution as a reviewer. This provides extra visibility for reviewing activity, which we hope over time will be incorporated into the ways in which academics are evaluated.

The third way is that it decouples co-ordination of peer review from publishing. We sometimes mistakenly believe that we are peer reviewing ‘for’ a particular journal, when in fact we are peer reviewing for our peers, and for the scientific literature and science in general. However, peer review as a process has become something which is tightly tied to a journal. As a result, authors often have to re-start the peer review process with completely new reviewers when they submit a revised manuscript, even when many of the revisions have been done in response to reviewers reviewing ‘for’ the first journal. A universal peer review platform allows authors to keep the same reviewers even when resubmitting to a different journal. More broadly, however, it is worth noting that journals have not had to innovate much in terms of peer review – there has been substantially more innovation in other areas of publishing. By decoupling peer review co-ordination from publishing (while still allowing complete editorial control), it suddenly becomes possible to innovate purely in the area of peer review. This is exactly what Academic Karma is going to have to do if it is going to succeed in building a critical mass of enthusiastic peer-reviewers.

You can now sign in and register at Academic Karma using your ORCID credentials, so there is no need to remember yet another username and password. We hope this platform is a big step towards fixing peer review, but at the end of the day it is just a platform, and we really need your support to turn it into a movement for reforming peer review.

Open science agreements between reviewers and authors

For #OAWeek16 we are launching open science agreements. These are agreements between author and reviewer which are settled before the reviewer accepts a review invitation from a journal. We describe why we think open science agreements are needed here. The rest of this post shows how open science agreements work in practice.
An invited reviewer visits http://academickarma.org/reviewagreement, where they will find the following form.

[Screenshot: the review agreement form]

Why do we need open-science agreements between authors and reviewers?

If you are committed to open science, there are fairly limited options for reviewing papers: essentially, your only option is to review for journals which support and enforce open science principles. You are then trusting the journal to make sure the authors make their data and code available, and also publish the article open access. Moreover, you are forced to decline to review interesting papers because they were not submitted to one of the limited number of journals supporting open science, or because the authors couldn’t afford the article processing charge and submitted to a closed-access journal. This approach gives the authors of those papers no feedback that you would have been happy to review their paper had they committed to making their article, code and data openly accessible to the community. It provides no incentive for authors to be more open.

We wanted to create an alternative, and this is where the idea of open-science agreements came from. When you are invited to review a paper, you can visit http://academickarma.org/reviewagreement to specify (anonymously) what you expect from the authors in terms of openness before you agree to review their paper. You also specify how open you plan to make your review, and how long you expect the review will take you to complete. Rather than asking the authors to pay to publish open access, the reviewer can ask that both the submitted and revised manuscripts be uploaded to a preprint server. The authors can suggest modifications and explain if they are unable to satisfy all of the reviewer’s expectations. If an agreement is reached, the reviewer can review the paper in the knowledge that their free labour is supporting resources (code, data, manuscript) which can be re-used by everyone.

More information on how open science agreements work is provided here.

Reviewer-author contracts as a way to encourage openness in scientific publishing

In preparation for open access week, we are asking for feedback on our new initiative: Reviewer-Author contracts.

Background:

Academics review manuscripts for free, allowing publishing companies to make billions by charging readers to access the work. We think that a viable alternative is for scientists to only agree to review manuscripts which are first deposited in preprint servers, and to make the content of their review openly available alongside the preprint. Reviewer agreements are a way to give the reviewer a little more influence over the openness (open access, open code, open data) of papers they choose to review, or at least a way for them to choose to review only papers which follow open practices.

How does it work:

1. Reviewer receives an invitation from a journal to review a paper.
2. Invited reviewer fills in the form at http://academickarma.org/reviewagreement, specifying their conditions for agreeing to review the paper.
3. Academic Karma sends an email to the paper author informing them of the conditions requested by the anonymous invited reviewer.
4. The author can either agree, decline, or modify the agreement.
5. If the agreement is modified, an email is sent to the invited reviewer, who can either agree, decline, or modify the agreement.
6. Steps 3-5 are repeated until an agreement is reached, or either the invited reviewer or the author declines.
7. Once an agreement is reached, the reviewer agrees with the journal to review the paper.
8. The author posts the preprint.
9. The reviewer posts the review as a comment on the bioRxiv preprint page or, if they want to remain anonymous, posts it on the Academic Karma review page and we post the comment for them (possibly after a specified ’embargo’ to give the authors a chance to respond first).
10. The review is sent to the journal editor.
11. The author modifies the paper, re-uploads it to a preprint server, and posts their response to the review.
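The negotiation loop in steps 3-6 can be sketched as a simple alternating exchange. This is an illustrative model only, not Academic Karma's actual implementation; all names here are ours:

```python
def negotiate(author_respond, reviewer_respond, proposal, max_rounds=10):
    """Sketch of the agreement loop in steps 3-6 above (illustrative only).

    `proposal` holds the invited reviewer's initial conditions. Each
    respond callback returns ("accept", None), ("decline", None), or
    ("modify", counter_proposal) with revised terms.
    """
    responders = [author_respond, reviewer_respond]  # author replies first (step 4)
    for turn in range(max_rounds):
        action, counter = responders[turn % 2](proposal)
        if action == "accept":
            return proposal   # agreement reached; reviewing can begin (step 7)
        if action == "decline":
            return None       # negotiation ends without an agreement
        proposal = counter    # modified terms go back to the other party (step 5)
    return None
```

Modelling each party as a callback keeps the sketch independent of how the emails in steps 3 and 5 are actually delivered.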

What is currently included in the review agreement:

1. Option to ask author to post preprint.
2. Option to ask author to agree to post a revised preprint.
3. Option to ask author to make data openly available.
4. Option to ask author to make source code openly available.

So that the expectations are not just on the author’s side, the reviewer can also choose to commit to:

1. The maximum time they will take for the review.
2. Whether they will agree to review a revised manuscript.
3. Whether they would be willing for their review to be transferred to another journal.
4. The length of the embargo period before the content of their review is posted.
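Taken together, the conditions and commitments above form a small structured record. A minimal sketch of what such an agreement might hold (field names are ours, not Academic Karma's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewAgreement:
    """One reviewer-author agreement (hypothetical field names)."""
    # Conditions the reviewer can ask of the author
    post_preprint: bool = False
    post_revised_preprint: bool = False
    open_data: bool = False
    open_code: bool = False
    # Commitments the reviewer can make in return
    max_review_days: Optional[int] = None   # e.g. 14; None if not committed
    will_review_revision: bool = False
    review_transferable: bool = False       # review may be passed to another journal
    embargo_days: int = 0                   # delay before the review content is posted
```

A permanent record of a structure like this is what lets both parties refer back to what was agreed.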

These ‘contracts’ are not binding in any legal sense, of course. However, a permanent record is kept so that both reviewer and author can refer back to what they agreed.
We welcome any feedback on what extra conditions we might want to include, as well as what extra optional commitments reviewers might like to make.

Scientists who open-review preprints

Peer review is controlled by publishers who generate billions in profits from free labour. What are your options if you do not want to provide free labour for publishers to exploit, but still want to contribute to peer review?

  1. Demand payment for peer review, either for yourself, for your department, or perhaps as a contribution to a worthy cause
  2. Refuse to review for certain publishers
  3. Open review preprints

The problem with option 1 is that publishers will simply pass that cost on to authors. A good example of option 2 is The Cost of Knowledge boycott of Elsevier.

What about option 3?

The idea is that you only agree to review manuscripts if they are also posted as preprints, and that you publish your reviews openly online and send the link to the journal editor. The authors can respond to your reviews openly, independently of the review process at any particular journal. The journal can still use your reviews, of course, but then again so can anyone else. Instead of doing free work for a publisher, you have contributed a common good from which everyone can benefit.

There is already a group of pioneers in this space posting open preprint peer review. These scientists have between them written 61 open reviews of 51 preprints. Undoubtedly there are many more examples (e.g. in blog posts) which have not been collected here – please point us to these so that we can index them.

Not everyone is comfortable posting non-anonymous open peer review. We have created a platform where you can post content-open preprint peer review anonymously or non-anonymously. You can review any arXiv, bioRxiv, PeerJ or SSRN preprint, or even papers deposited in several institutional repositories. If you are worried your review is overly critical and might be damaging to the authors, you can also set an ’embargo’ period to give the authors a chance to respond before the review is made open.

So here’s to those scientists who are showing us that there is a way to contribute to peer review without providing free labour to be exploited by publishers.

 

Prize for most endorsed review of #SMBE16 preprint

One of the goals of Academic Karma is to separate peer-review from publication in a particular journal.  One of the most promising ways of doing this is by the community shifting to content-open review of preprints.

In order to try to encourage this, we are putting up a US$200 prize for the most endorsed review of a preprint presented at SMBE16.

Here is a primer on how to use Academic Karma to do preprint peer review, and how to endorse preprint reviews. Here is a list of preprints presented at SMBE16.

The competition will run until the end of July.

Reviewing conference preprints

Recently we tweeted about our reasons for peer-reviewing, but not ‘for’ journals.

It’s important to clarify what we mean by ‘for’ here: we use it to mean the primary purpose of the review. So by all means share a content-open review with a journal considering publishing the article. Also, by content-open we mean making the content of the review open, while potentially choosing to remain anonymous. Junior researchers justifiably worry about the implications of non-anonymously criticising senior researchers in their field.

One way to peer-review, but not primarily ‘for’ journals is to review a preprint which was presented at a conference you attend.  The advantages are that you get the chance to hear the author explain the work in person, and you can also ask them to clarify anything which was unclear in the preprint.

In order to make this easier, we have implemented a feature which lists preprints presented at selected conferences. If you would like this feature enabled for an upcoming conference, you can register it at http://academickarma.org/conferences, and let us know so we can help you curate the list of preprints.

One great example of this is the Society for Molecular Biology and Evolution conference (SMBE16):  http://academickarma.org/conference/SMBE16.

In order to try to encourage more ‘conference preprint’ peer review, we are going to award US$200 to the most endorsed review of a preprint presented at SMBE16. The rest of this post is a primer on how to use Academic Karma to review an SMBE16 preprint. To review a preprint you don’t need to register specifically for Academic Karma, but you do need an ORCID identifier (http://orcid.org).

Step 1:  Choose a preprint to review here:  http://academickarma.org/conference/SMBE16.

Step 2: Click on the ‘Review’ link, which takes you to the review page for that particular paper. For illustrative purposes, we are going to review: A profile-based method for identifying functional divergence of orthologous genes in bacterial genomes.

Step 3:  On the review page, click on the ‘Review this paper’ link


Step 4: This takes you to a login page for signing in to Academic Karma via your ORCID. Once you have logged in you will be automatically redirected to the review page.

Step 5: Write and submit the review. You can specify who contributed to the review, whether you would like the comments sent via email directly to the author, whether you would like the review cc’d (e.g. to an editor), whether you would like to sign the review, and an ’embargo period’ before the comments become publicly visible (allowing the authors to respond to the review before it goes live).


Step 6: That’s it! Tweet the review if you like, and ask people to endorse it.


To endorse a review, just click on the endorse button (you need to be logged in, but it’s easy to log in using your ORCID or Twitter credentials).

 

All completed reviews are listed alongside the preprints here:  http://academickarma.org/conference/SMBE16.


Making open-access publishing cheaper and faster

One of the major goals of Academic Karma is to make open-access publishing both cheaper and faster.

One of the reasons open-access publishing remains expensive is the cost of coordinating peer review, which is estimated to make up at least half the cost of publishing in a pure open-access journal. Peer review is also one of the main reasons it takes so long between submitting and publishing a manuscript (https://quantixed.wordpress.com/2015/03/16/waiting-to-happen-ii-publication-lag-times/). Peer review is slow because it’s difficult to find reviewers (we are all too busy working on our own manuscripts, which makes sense, as that is what we are rewarded for) and difficult to incentivise reviewers to spend time promptly on the review rather than on their own work.

Scientific Reports, a Nature Publishing Group open-access journal, has also recognised that slow peer review is a problem, but proposed to solve it by providing fast-track peer review through a commercial partner for a fee of $750, of which $100 goes to each reviewer (presumably either two or three reviewers) and the remainder is taken as an extra processing charge by the journal and its commercial partner.

In contrast, the whole point of Academic Karma is that diligent, timely reviewers should be able to access fast-track peer review on their own papers. So in a sense we also want to introduce a two-speed review process, except that ours rewards academics who are contributing to speeding up peer review. The logic is that this creates an incentive for every scientist to be a regular and diligent peer reviewer, which speeds up the system for everybody. In the long run this creates a self-sustaining system in which it is much easier to find academics eager to do ‘enough’ peer review so that their own papers will be reviewed in a reasonable timeframe.

Of course, timeliness is not the only factor in peer review. The quality of the review is also very important, and for that reason we are working on ways in which reviewers, editors and authors can evaluate review quality. Ultimately we are trying to build a system which speeds up peer review, improves its quality, and lowers its cost.