Many researchers agree that for all its faults, Peer Review is still the best mechanism available for the evaluation of research papers.
However, there are growing doubts that Pre-Publication Peer Review, single or double blinded, is the best way to get the job done. Fascinating background reading on this topic includes the Effect of Blinding and Unmasking on the Quality of Peer Review from the Journal of General Internal Medicine.
In a 2002 survey performed by the Association of Learned and Professional Society Publishers (ALPSP), 45% of respondents expected to see some change in Peer Review within the next five years – for example, journals moving to Open Post-Publication Peer Review. Although the timing of that prediction was off, interest in this field is now growing and a few practitioners have emerged.
As more and more scholarly publications are launched every year, Peer Review has been criticized for consuming too much valuable time. Moreover, Pre-Publication Peer Review and selection do not protect against fraud or misconduct. Other questions that have been raised about Peer Review include:
- What does it do for Science?
- What does the scientific community want it to do?
- Does it illuminate good ideas or shut them down?
- Should reviewers remain anonymous?
In this post, I want to explore in more detail what motivates researchers to evaluate the previously submitted work of their peers. If we can better understand the reasons why researchers review, we can also discuss scenarios which may improve both the transparency and quality of that process.
Let’s first consider what could boost a researcher’s motivation to review an article. At present there are myriad excuses that most of us use to put off this extra work, which usually claims several hours of an already tight time budget. The scientific community neither knows nor records how many hours scientists spend on Peer Review, and their institutions do not acknowledge this huge time commitment when assigning new funding. In a closed Peer Review system the effort goes uncredited: reviewers receive nothing comparable to a citable publication, and the work carries no weight when applying for a new position. This is completely different from the rewards which flow from publishing new research, in particular if that research has been published in a high-profile traditional journal.
Therefore we should ask ourselves:
Is peer review broken as a system?
Yes, but many believe it is required to maintain a certain level of quality control in academia. At the very least, Pre-Publication Peer Review is a concept recognized by the scientific community as supporting rigorous communication. More coverage of the flaws within the Peer Review system is provided in this post by The Scientist.
Why do we review?
A systematic survey by Sense About Science on Peer Review in 2009 represented the views of more than 4,000 researchers across all disciplines. It found that the majority of researchers agreed that reviewing meant playing their part within the academic community. Review requests were hard to decline given their sense of commitment to their peers, even though they did not believe they would gain personal recognition for the work. The second most common motivation was the enjoyment of helping others improve their article. And finally, more than two thirds of the participating researchers liked seeing new work from their peers ahead of publication. I will keep this latter point in mind for discussion later.
What sort of reward would researchers like?
Having understood the main reasons why researchers agree to review, the survey asked what would further incentivize them to undertake this task, possibly in a timelier manner! Interestingly, around half of the researchers said that they would be more likely to review for a journal if they received a payment (41%) or a complimentary subscription (51%, in the days before the spread of Open Access). Despite this result, only a vanishingly small minority of journals provide any kind of payment to their reviewers. This seems even more remarkable in light of the 35-40% profit margins which are commonplace in for-profit scholarly journal publishing.
Given that these publishers can afford to pay, why don’t they?
One acceptable answer could be that they do not want to introduce new bias into the process. Another is cost: roughly 1.5-2 million articles are published every year in STM disciplines, as reported by Bjork et al., and with an average rejection rate of 50% (a factor of two on the number of submitted manuscripts to be reviewed) and at least two reviewers per paper – some 6-8 million reviews annually – paying each reviewer a reasonable amount for their considerable time would cost publishers a tidy sum.
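The arithmetic above can be sketched as a back-of-envelope calculation. The article counts, rejection rate and reviewer count come from the figures cited in the text; the fee per review is a purely hypothetical assumption for illustration, not a figure from any source:

```python
def reviews_per_year(published_articles, rejection_rate=0.5, reviewers_per_paper=2):
    """Total reviews needed per year, counting rejected submissions too.

    A 50% rejection rate means roughly twice as many manuscripts are
    reviewed as are eventually published.
    """
    submissions = published_articles / (1 - rejection_rate)
    return submissions * reviewers_per_paper

# Figures from the text: 1.5-2 million STM articles published per year.
low = reviews_per_year(1_500_000)   # 6,000,000 reviews
high = reviews_per_year(2_000_000)  # 8,000,000 reviews

hypothetical_fee = 100  # USD per review -- an assumption for illustration only
print(f"{low:,.0f}-{high:,.0f} reviews per year")
print(f"${low * hypothetical_fee / 1e9:.1f}-"
      f"{high * hypothetical_fee / 1e9:.1f} billion USD per year "
      f"at ${hypothetical_fee}/review")
```

Even at a modest assumed fee, the total lands in the hundreds of millions to billions of dollars per year, which makes publishers’ reluctance easy to understand in purely financial terms.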
Are there other ways to provide reviewers with credit?
A still significant percentage of researchers said that acknowledgement in the journal, or formal accreditation such as CME/CPD points, could improve their motivation. However, only a minority would feel motivated by the idea that their report would be published with the paper.
Half of all scientists felt they would be rather discouraged if their identity were disclosed to all readers of the article. The other half did not feel discouraged and expected higher quality from a more open evaluation process. These findings have been reported in a study by Van Rooyen et al., who found that 55% of the responding scientists were in favor of reviewers being identified, while only 25% were against it. More importantly, the authors concluded that Open Peer Review leads to better quality reviews. One reason for this conclusion is quite obvious: if both the name and the comments are disclosed to the public, it seems only natural that a reviewer will spend at least as much, if not more, effort making sure that the report is as good as a scientific paper. Another reason is that the reviewer is aware that a useful report can contribute to scientific discourse far more efficiently than a short statement with a few ticks in a standard reviewers’ form which only two people can access: the journal’s editor and, most likely, the author. Comments in an open report can in principle be read by all researchers in that field and may help them improve their own work.
In another study, which analyzed the effects of blinding on the quality of peer review, McNutt et al. reported that reviewers who chose to sign their reviews were more constructive in their comments. In principle, any new concept that motivates reviewers to submit a review, and that is not simply based on a cash incentive, will require disclosure of both the reviewer’s identity and the report.
Should we continue to review?
15,000 researchers asked this question and subsequently withdrew their services. One reason not to participate in the review process is to protest the monopoly power within the international publishing industry, a protest which led to the Elsevier boycott. Coverage of this issue can be found in the New York Times and at the Cost of Knowledge.
Having asked a range of different questions above, I’d like to move on and describe the different types of Peer Review.
Disclosure of a reviewer’s identity to the public is called Open Peer Review. This simply means that either the names or the full report for a paper will be published with the paper itself, after the peer review process has been completed. Open, non-mandatory peer review has been established, for example, by PLOS Medicine and PeerJ.
Let us now imagine a more open evaluation system for research which has been introduced as Post-Publication Peer Review (PPPR). I have previously discussed the ethics of this topic on my blog Science Publishing Laboratory. Like the current system of Pre-Publication evaluation, the new system relies on reviews and ratings. However, Post-Publication Peer Review differs in two crucial respects:
- Journal editors and reviewers do not decide whether or not a work will be published, as the articles are already published.
- Reviews take the form of public communications to the community at large, not secret missives to editors and authors.

Post-Publication Peer Review is used, for example, by F1000Research. In addition, Public Post-Publication Peer Review:
- Invites the scientific community to comment on, review and rate a paper. The journal editor does not select the reviewers; instead, reviewing becomes a voluntary activity for those who feel interested and qualified to do so.
- Has no limitation as to the number of reviewers, unlike other Peer Review methodologies.
- Imposes no artificial time limit after which reviewing is “over”. Even after years have gone by, researchers can still evaluate the paper and write a review.
The advantages of the open evaluation of research are readily observable. As Kriegeskorte, a member of the Editorial Board of ScienceOpen, summarized in his article entitled Open Evaluation: A Vision for Entirely Transparent Post-Publication Peer Review and Rating for Science:
Public Post-Publication Peer Review makes peer review more similar to getting up to comment on a talk presented at a conference. Because the reviews are transparent and do not decide the fate of a publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where weak arguments can prevent publication because editors largely rely on reviewers’ judgments and the reviewers are not acting publicly before the eyes of the community. 4PR is a real discourse in science, and the general research community benefits from it.
What incentives does an individual reviewer have when submitting a comment or review in a new open evaluation system?
- Reviewers could be credited for their work – for example, for how frequently they participated and the extent to which they played their part as members of the academic community. As mentioned above, this has been a key motivation for the vast majority of researchers when writing a review.
- Authors and peers could comment on reviews to highlight those which were more useful to them than others. This establishes a rating not only for the paper itself but also for the comments and reviews, which is a completely new concept in science. As a result, reviewers gain credit simply because peers acknowledge their work through (positive) feedback.
- Reviewers who contribute more frequent and more constructive reviews than their colleagues within a certain area of expertise could be highlighted by a ranking. Such a ranking is a direct measure of a researcher’s individual performance and would be far more useful for evaluating researchers than the Impact Factors of the journals in which they have published.
- If a reviewer received direct feedback about the review, an open discussion could ensue, which may lead to a more concentrated level of discourse – as, for example, during a conversation at a conference or a poster presentation.
- And finally, a reviewer who decides to write a review does so because they are interested in the new research of one of their peers – and, more importantly, they are free to decide when to submit it. This straightforward situation is completely different from the present Pre-Publication Peer Review, where a researcher is asked by an editor to 1) read and 2) review a new submission which they have never seen before.
Despite reports such as The Peer Review Process by the JISC Scholarly Communication Group, which indicated that the Peer Review process would evolve, new concepts have so far been introduced only rarely. Open Access and a transparent, open peer-to-peer evaluation are the prerequisites for a new peer review concept which provides more benefits for reviewers than the present review system in scholarly publishing.
With growing awareness of the damage done to the public perception of research by high-profile retractions, such as the one reported in Nature, and an interesting recently observed correlation between higher retraction rates and prestigious journals, it seems only logical that the momentum towards more transparency in research communication will grow.
Therefore we should support new ventures and publishing initiatives which have introduced the principles of open evaluation and transparent reviewing. These new projects could help open our eyes to “an ideal world”, as Timothy Gowers, Royal Society Mathematician and Fields Medalist, summarized in his terrific vision to revolutionize scholarly communication and publishing. It will be interesting to see how this also improves the motivation of reviewers to do their important work of quality maintenance in academic publishing.
Alexander Grossmann is a physicist and Professor of Publishing Management at HTWK University of Applied Sciences in Leipzig, Germany. After his PhD in Aachen and a PostDoc at the Max Planck Institute for Quantum Optics in Munich, he worked as Associate Professor of Physics in Tuebingen. In 2001 he accepted a management position in the academic publishing industry. After 12 years as Publishing Director and Vice President Publishing at Wiley, Springer and De Gruyter, Alexander founded the open science network ScienceOpen in 2013 with a partner from the U.S.
Follow me on Twitter: @SciPubLab
Visit my LinkedIn profile: http://goo.gl/Hyy8jn
This blog post has been made public under a Creative Commons CC-BY 4.0 licence, which allows free usage, reuse and open sharing for any purpose, provided the author is credited.
Image Credit: (c) Alexander Grossmann CC-BY 4.0
- (ALPSP) The Association of Learned and Professional Society Publishers (2002): Authors and Electronic Publishing. The ALPSP research study on authors’ and readers’ views on electronic research communication (ISBN 978-0-907341-23-9)
- (McNutt) McNutt RA et al.: The effects of blinding on the quality of peer review. A randomized trial. JAMA. 1990 Mar 9;263(10):1371-6