In my recent posts I tried to summarize where we are coming from when talking about academic publishing today. Peer review is an important, if not the most important, feature of publishing in science. We should therefore ask ourselves whether the present process of evaluating a new scientific work is still adequate today in terms of functionality, interactivity and speed.
How has peer review been shaped by workflow issues in the past?
For decades the majority of scholarly journals have relied on a process of “peer review” to ensure a certain scientific standard for their publications. During this process the journal’s editor or an external editor-in-chief (EiC) assigns usually two, sometimes three, independent peers to anonymously review a new manuscript which has been submitted to a specific journal. Depending on the reviewers’ feedback and the authors’ constructive reply to their comments, the journal editor or EiC decides whether to accept the manuscript for publication or to reject it. If their manuscript is rejected, authors are free to resubmit the same work to another journal, and so on. Evidently the main goal of peer review is quality control, and its method has been anonymous reports by scientists working in the same field. Until the mid-nineties, communication between scientists, journal editors and external reviewers was almost exclusively limited to postal mail. From a practical point of view there were no alternatives for either researchers or publishers to manage the review process. The complete peer review workflow was thus determined by the communication channels available at the time when most scholarly journals were launched and operated, not by an analysis of the optimum result for a given problem. Consequently, commercial and widely used online manuscript processing systems such as ScholarOne™ or Editorial Manager™ have essentially recreated this workflow in a digital environment.
I want to emphasize this observation clearly, because prior discussions of alternatives to peer review in the scholarly literature have been influenced by claims that the present dominant process is the only way to ensure a certain scientific standard. Nevertheless, the present process seems to deliver sufficient certainty for both journal editors and researchers, as we have seen in the past: there have been comparatively few examples of misconduct or significant mistakes discovered in works published in peer-reviewed journals. This suggests that peer review is an important if not the most important feature of publishing in science. However, one may ask whether the new modes of communication and networking which have followed the tremendous rise of the internet since the end of the nineties should also have enabled publishers and researchers to modify this workflow for evaluating scientific work.
“Peer review is an important if not the most important feature of publishing in science.”
Are there any peer review alternatives today?
Of course there are, and it would have been surprising if there had been no initiatives to modify the incumbent system of peer review. In the last ten years several publishers have experimented with new concepts in this process. A first step was to question whether anonymity is a necessary feature of peer review. We live in a much more open world today, where people regularly share information with huge communities via Twitter, Facebook and other networks that would once have been limited to the single recipient of a letter. An attempt to open up the review process by publishing the names of reviewers together with the accepted article was made by the Journal of Medical Internet Research beginning in 1999. This process is called open peer review. The British Medical Journal began a similar process, also in 1999, but decided to disclose the identity of the reviewers to the author only, and not to the public. Founded in 2007, the Frontiers journals introduced an interactive peer review process in which all reviewers decide together in an online forum whether a paper is accepted or rejected, and the reviewers’ names are published along with a statement. The recently launched F1000 Research publishes research papers with the disclosed identity of its reviewers (still nominated in the traditional way) as well as the reviewer reports. Other journals began to mix traditional and public (not necessarily open) reviewing to different degrees, such as the Journal of Interactive Media in Education, launched in 1996, Atmospheric Chemistry and Physics since 2001, or Nature in 2006. However, Nature decided to stop that experiment soon after its launch because only 5% of authors actively agreed to support it.
Nevertheless, the Nature experiment showed that more than half of all those articles which were made open for public comment received feedback from readers, which is an important finding when asking ourselves whether a public, open peer review would work. All these alternatives have been mostly limited to a few individual journals or publishing platforms without generating a significant impact on the scholarly community, although that may be about to change as the number of experiments increases.
“Alternatives have been limited to a few, individual journals or publishing platforms without generating a significant impact.”
Looking forward: the 4PR process
We have seen examples representing a wide range of slightly different approaches to the concept of open peer review, which aim at a higher level of transparency in the current system but are limited to a specific scientific (sub-)discipline, or which combine traditional and open peer review, either publicly or partially. Nearly all of these efforts address the question of anonymity, but take as a given the premise that quality control depends on a workflow developed decades ago, in which an editor chooses “peers” for a given paper and then decides whether it should be published. In this area there have been fewer experiments. One straightforward idea, and one that I am currently experimenting with on the platform ScienceOpen, would be to openly post an article which has been submitted to a journal and wait for comments from potential readers, instead of having the journal’s editors assign reviewers. Readers’ feedback on a particular article would be openly accessible, and the authors or any other reader could reply to these comments openly. This idea combines aspects which have been raised by authors and readers and are becoming more and more important in scholarly publishing: immediate and open access to a new scientific work; unique citability of the current and past versions of an article by a DOI (digital object identifier); the freedom of any researcher in the field to submit comments on the article; and rigorous openness of all information provided in this process, such as the names of commenters and reviewers. Of course such a new process, which I call public post-publication peer review (4PR), requires some elements to maintain the same high scientific standard as traditionally peer-reviewed journals. Only researchers who work in the same area of research and have a certain level of expertise, demonstrated for example by a defined number of peer-reviewed publications in that field, should be able to comment on an article.
Those researchers could be authenticated, for example, by an ORCID registration. In addition, there must be an alert option for any reader to flag comments which seem to be inappropriate. This is necessary because in this process an editor will no longer nominate individual reviewers and occasionally filter their feedback before providing it to the author or making it public.
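To make the eligibility rule above concrete, here is a minimal sketch of how a platform might check it. All names here (`Researcher`, `may_comment`, the threshold of five publications) are illustrative assumptions for this post, not part of any real ScienceOpen or ORCID API:

```python
from dataclasses import dataclass, field
from typing import Optional

MIN_PUBLICATIONS = 5  # assumed expertise threshold; a real platform would tune this


@dataclass
class Researcher:
    # ORCID iD as a string, e.g. "0000-0002-1825-0097"; None means unauthenticated
    orcid: Optional[str]
    fields: set = field(default_factory=set)
    # number of peer-reviewed publications per research field
    peer_reviewed_papers: dict = field(default_factory=dict)


def may_comment(researcher: Researcher, article_field: str) -> bool:
    """A researcher may comment only if authenticated via ORCID, working in the
    article's field, and above the publication threshold in that field."""
    if researcher.orcid is None:
        return False
    if article_field not in researcher.fields:
        return False
    return researcher.peer_reviewed_papers.get(article_field, 0) >= MIN_PUBLICATIONS
```

The point of the sketch is that the gate-keeping role of the editor is replaced by a simple, transparent rule that any reader can verify.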
“Readers’ feedback on a particular article would be openly accessible and the authors or any other reader could reply to these comments openly.”
The result of such a 4PR process would be more transparency, because both authors and readers would see the identity of the reviewers. This would reduce to a minimum the rare but obvious opportunity for misuse by competing reviewers. Moreover, not every expert in the field will know everything which may be necessary to competently evaluate the author’s work. The ability to identify the reviewer would help the reader to judge the specific feedback he or she has given, or not given. Each reader should also be able to invite further reviewers if he or she feels the paper has been inadequately reviewed but is not an expert on the subject. In the current system, publishing editors and researchers are under high pressure from repeated requests to review. For some journals, more than 70 percent of potential reviewers decline or do not answer such requests, which initiates another loop to find peer reviewers. It is obvious, however, that an expert in a specific field will eagerly track those new submissions by his or her peers which are of relevance for his or her current research. This already happens on a regular basis in those areas of research where most recent articles are openly available as preprints before submission to a journal, for example in physics or computer science (arXiv). It would then be an easy task for that individual researcher to submit short, general feedback, for example that he or she finds the article useful (or not), or to write a summary of his or her evaluation immediately after reading, supported by an easy-to-handle web form which captures all relevant aspects of a publication. An option for private communication between reader and reviewer would make a dialogue possible in those cases where a reviewer does not want to post comments publicly.
In every case, the authors would benefit from immediate replies from peers to their publications, and also from potentially more specific feedback concerning particular aspects of their work, with the idea that opening up the peer review process to more than two reviewers will provide more feedback in the long run. This constructive feedback could then result in a slightly modified or updated version of the same article which was first submitted to the public. All versions of an article should be archived and identified with a DOI, so that any citation links unambiguously to the specific version of that article.
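The versioning scheme described above can be sketched in a few lines. The class name, the `.v1`/`.v2` suffix convention, and the example DOI prefix are my own illustrative assumptions; in practice, version-specific DOIs would be minted through a registration agency such as Crossref or DataCite:

```python
class VersionedArticle:
    """Sketch of an article whose every revision is archived and citable."""

    def __init__(self, base_doi: str):
        self.base_doi = base_doi
        self.versions = []  # archived revisions; never overwritten or deleted

    def publish_revision(self, content: str) -> str:
        """Archive a new version and return its citable, version-specific DOI."""
        self.versions.append(content)
        return f"{self.base_doi}.v{len(self.versions)}"
```

Because earlier versions stay archived, a citation of `.v1` remains valid and unambiguous even after the author posts an updated `.v2` in response to reviewers’ comments.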
“The result of such a 4PR process would be more transparency.”
These are examples of some of the benefits for authors and reviewers, but there will also be improvements for the reader. More and more researchers complain that it is almost impossible to track all relevant new publications or preprints on a regular basis. Short of limiting themselves to a small selection of specific journals, they have almost no way to predefine or select those papers which may contain relevant information for them. With a 4PR process established, the relevance of existing and new contributions would be perpetually influenced by readers’ feedback: usage and full-text downloads, but now also reviewers’ ratings and the number of comments on a specific publication. Papers which have been heavily discussed and positively evaluated will rise to the top of the list of works in a certain scientific area; those which receive negative comments or no feedback at all will sink to the bottom. Interestingly, this process is strongly dynamic: a publication which was considered a rising star immediately after publication could lose this attribute after a while, when readers discover new aspects which influence its relevance and therefore its rating, and vice versa. It is, in effect, a living list of documents which is frequently updated by new comments or new versions of an article. In this way it would be possible to address the quality-control aspect of peer review with a completely new structure, independent of gate-keeping editors and journals.
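The “living list” above is essentially a re-ranking that runs whenever new feedback arrives. The following sketch shows one way this could work; the specific weights and square-root damping are arbitrary assumptions for demonstration, not a proposal for the actual formula:

```python
from dataclasses import dataclass


@dataclass
class ArticleStats:
    title: str
    downloads: int     # full-text downloads
    avg_rating: float  # mean reviewer rating, assumed on a 0.0-5.0 scale
    comments: int      # number of public comments


def relevance_score(a: ArticleStats) -> float:
    """Heavily discussed, positively rated papers score high; unnoticed ones low.
    Square roots damp raw counts so a few early downloads don't dominate."""
    return 0.4 * a.avg_rating + 0.3 * a.comments ** 0.5 + 0.03 * a.downloads ** 0.5


def living_list(articles):
    """Re-rank the whole list; called after every new comment, rating, or download."""
    return sorted(articles, key=relevance_score, reverse=True)
```

Since the score is recomputed from current feedback each time, yesterday’s “rising star” automatically drops when later comments revise its rating, which is exactly the dynamic behaviour described above.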
“The relevance of existing or new contributions will be influenced by the readers’ feedback.”
The 4PR principle which I have described is simple and rigorous, and makes no compromises in terms of scientific standards, while promoting transparency in the peer review process. Since it is based on networking tools which are commonly used in other forums, there are no longer technical limitations to establishing such a process. I expect this principle to successively displace, in the coming years, the traditional mode of peer review developed from the paper model. This is not at all because the traditional process was bad, but because it absolutely makes sense to make use of current modes of communication to exchange information between researchers more efficiently and transparently. As a first live experiment adopting this concept, I am involved in the new scholarly communication platform ScienceOpen and will track the development of this idea closely.
“It absolutely makes sense to make use of current modes of communication to exchange information between researchers more efficiently and transparently.”
Alexander Grossmann is a physicist and Professor of Publishing Management at HTWK University of Applied Sciences in Leipzig, Germany. After his PhD in Aachen and his postdoc at the Max Planck Institute for Quantum Optics in Munich, he worked as an associate professor of physics in Tuebingen. In 2001 he accepted a management position in the academic publishing industry. After 12 years as Publishing Director and Vice President Publishing at Wiley, Springer and De Gruyter, Alexander founded the open science network ScienceOpen in 2013 with a partner from the U.S.
Follow me on Twitter: @SciPubLab
Visit my LinkedIn profile: http://goo.gl/Hyy8jn
This blog post has been made public under a Creative Commons CC-BY 4.0 licence, which allows free usage, reuse and open sharing for any purpose provided the author is credited.
Image Credit: (c) Alexander Grossmann CC-BY 4.0