In my last blog I argued that carrying out a completely transparent public evaluation of research results – Public Post-Publication Peer Review (4PR) – is the best way to ensure scientific quality. I strongly believe that, and in some ways started the publishing platform ScienceOpen as an experiment to test this hypothesis. But what happens in the extreme case when a manuscript is submitted that can be perceived as outside the scientific discourse? “Crackpot” or “pseudoscience” theories, from perpetual motion to parapsychology, abound; a whole list of pseudoscience topics can be found here. It is easy to reject a paper on the taxonomy of unicorns, but in some fields, alternative medicine for example, the lines are not so clearly drawn. Politically charged fields such as climate change or genetically modified foods can also present a grey zone where legitimate research and industry-sponsored propaganda are difficult to distinguish. In principle, one could imagine two editorial workflows to cope with such submissions.
Quality Check in Public Post-Publication Peer Review: Option 1
In our Public Post-Publication Peer Review model we still require a first editorial check to filter out clearly non-scientific work written by non-scientists outside any norms of scientific communication. Papers that pass this check are then evaluated by the scientific community in a transparent and public way. But in some cases the editors performing the first check may be unsure whether a paper is sound science and may call on one or more experts for their opinion – a pre-publication peer review. If these experts give the paper a very poor review, what should happen next? ScienceOpen is committed to providing an open platform for discussion – is it then against our principles to begin rejecting some manuscripts based on a pre-publication peer review process? One editorial policy could simply be for ScienceOpen to reserve the right to reject all papers which do not meet some minimum standards of scientific communication. This sounds straightforward, but defining those standards in a formal and transparent way seems almost impossible. Without such formal and transparent reasoning, a rejection could be perceived as arbitrary, and so violate the very principle of public post-publication peer review. On the other hand, rejection would protect the reputation of the platform and of the scientific community, and there are also public health considerations – the danger of papers making false claims about the health benefits of certain products should not be underestimated. So rejection is an attractive option.
Quality Check in Public Post-Publication Peer Review: Option 2
Another editorial policy could be to inform authors about the poor reviews and, should they choose to publish their work anyway, to publish the paper with the poor reviews attached. This would be in line with the Public Post-Publication Peer Review model and a platform model of publishing in which curation of content is kept to a minimum. Because the internet has made it so easy to “publish” ideas in the sense of “make public”, we can assume that there is an open access journal somewhere with standards so low that it will happily publish findings on alien DNA without any peer review at all. The “peer reviewed” Journal of Cryptozoology would probably be happy to publish that unicorn taxonomy paper. And of course an editorial check will not catch every case of pseudoscience or fraudulent science, so the post-publication evaluation is absolutely necessary. This is a principled but risky approach.
Open to Suggestions…
The ScienceOpen platform will begin publishing in January 2014 and is in the process of fine-tuning its editorial policy. While the problem of “crackpot” submissions will not come up often, it is important to have transparent policies in place before being confronted with, say, a climate change denial paper. Because it is so easy to make ideas public in our digital, networked world, this is a difficult problem that will require a concerted effort by the entire scientific community.
We welcome any feedback on the best way to move forward!
Alexander Grossmann is a physicist and Professor of Publishing Management at HTWK University of Applied Sciences in Leipzig, Germany. After his PhD in Aachen and a postdoc at the Max Planck Institute for Quantum Optics in Munich, he worked as Associate Professor of Physics in Tuebingen. In 2001 he accepted a management position in the academic publishing industry. After 12 years as Publishing Director and Vice President Publishing at Wiley, Springer and De Gruyter, in 2013 Alexander founded the open science network ScienceOpen with a partner from the U.S.
Follow me on Twitter: @SciPubLab
Visit my LinkedIn profile: http://goo.gl/Hyy8jn
ORCID: http://orcid.org/0000-0001-9169-5685
This blog post has been made public under a Creative Commons CC-BY 4.0 licence, which allows free use, reuse and open sharing for any purpose provided the author is credited.
Image Credit: CC 0
In an ideal system you’d have several layers of filtering, not just one. For instance, one ‘tag’ on a paper tells the reader (machine or human) that the paper is freshly submitted. Another says it has received positive reviews/scores from (one, two, three, etc.) peers of a certain peer-review ‘karma’. Yet another could indicate that the paper is under scrutiny for fraud/misconduct/pseudoscience, together with the (potentially anonymous) reviewers who raised this flag. Authors who do not wish to receive this tag may withdraw their submission, or appeal the decision. Another tag could say that the paper has been highly downloaded, highly recommended, highly cited, etc.
And so on. For each incoming stream of information, the human reader can decide which hurdles a paper must pass in order for the reader to see it. For example: for any paper in my field (defined by keywords) or by authors whose work I follow closely, I want to see everything that is submitted. For my broader field of interest (defined by keywords, a ‘topic’ tag, or a combination thereof), I’d only like to see peer-reviewed work with at least three high scores. From a certain circle of colleagues, I want to see the papers they bookmark/recommend, no matter what the status of the paper. For a general topic (in my case ‘biology’), I’d only like to see peer-reviewed papers that are highly recommended and downloaded and that have received media coverage, i.e., the papers people talk about – maybe I even only want to see the coverage and not the paper.
For laughs, once a week, I want to have a look at a randomly chosen paper from the fraud/misconduct/pseudoscience pile – perhaps the next Nobel prize slumbers there…
It’s technically easy to do…
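It is indeed technically easy. As a rough illustration of the layered-filtering idea above, here is a minimal sketch in Python; the `Paper` structure, tag names, and score threshold are all hypothetical choices, not anything ScienceOpen has specified.

```python
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    tags: set = field(default_factory=set)        # e.g. {"submitted", "peer-reviewed", "flagged"}
    review_scores: list = field(default_factory=list)

def passes(paper, required_tags=frozenset(), min_high_scores=0, high=4):
    """Check whether a paper clears the hurdles a reader has chosen."""
    if not required_tags <= paper.tags:           # all required tags must be present
        return False
    high_scores = sum(1 for s in paper.review_scores if s >= high)
    return high_scores >= min_high_scores

papers = [
    Paper("Unicorn taxonomy", {"submitted", "flagged"}),
    Paper("Solid result", {"submitted", "peer-reviewed"}, [5, 4, 4]),
]

# Reader rule: "only peer-reviewed work with at least three high scores"
visible = [p.title for p in papers
           if passes(p, {"peer-reviewed"}, min_high_scores=3)]
# → ["Solid result"]
```

Each reader (or community portal) would simply run the same public metadata through a different set of hurdles.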
Tricky. Option 2 may well poison the journal content, making it a less attractive venue for serious authors. You could have an option 3, where you publish material reckoned to be dubious in a separate category that is clearly demarcated as such. If enough peers think it worthwhile, promote it out of that category?
Stephen – Thanks for this very helpful feedback. What would you think of the solution of putting articles which appear to be dubious, or which at least require feedback from an independent referee (for example a member of the Editorial Board), into a specific area or category? Until the referee has approved such a paper it does not receive a DOI and is explicitly displayed with a label saying “under approval” or similar. I imagine this workflow would support your perspective. I like this idea very much because, on the one hand, we do not block suspect papers in general but cope with them in a transparent way which is visible to all other users; on the other hand, the authors are not credited with a publication that has a DOI. Both aspects seem very important in this discussion.
the more principled option 2 will eventually prevail. publish then filter. once all evaluative meta-information about papers is publicly available, each community can build its own filter in which reviewer judgments are weighted differently (e.g. by each reviewer’s status within that community). we will have a plurality of evaluative perspectives on the literature. some communities will explore what others call crackpot theories. but we need not frequent the web portals to the literature provided by communities whose judgment we do not trust.
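The community-specific weighting described above can be sketched in a few lines; the reviewer names, the status weights, and the 0.5 default for unknown reviewers are purely illustrative assumptions.

```python
# Two public reviews of the same paper, shared by all communities.
reviews = [
    {"reviewer": "alice", "score": 5},
    {"reviewer": "bob",   "score": 2},
]

# One community's (assumed) status weights for each reviewer;
# reviewers unknown to the community get a default weight of 0.5.
community_weights = {"alice": 1.0, "bob": 0.2}

def weighted_score(reviews, weights, default=0.5):
    """Weighted mean of review scores under one community's weighting."""
    total = sum(weights.get(r["reviewer"], default) * r["score"] for r in reviews)
    norm = sum(weights.get(r["reviewer"], default) for r in reviews)
    return total / norm if norm else 0.0

score = weighted_score(reviews, community_weights)
# (1.0*5 + 0.2*2) / 1.2 = 4.5
```

A different community, with different weights, would compute a different score from the very same public reviews – which is exactly the plurality of evaluative perspectives the comment describes.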
I very much like the concept of transparent peer review and of publishing peer comments alongside the articles. One concern is whether peer pressure will prevent the kind of dissection of (high-profile) articles that can be found at PubPeer (https://pubpeer.com/featured). I don’t know of any other commenting platform that routinely has this kind of commenting, and it seems to be critically facilitated by anonymity.
The possibility of anonymous peer review/flagging must at least be retained for cases that could be described as “whistle-blowing”, e.g. the identification of possibly unlawful manipulation of data. I therefore second Brembs above: “Yet another could indicate that the paper is under scrutiny for fraud/misconduct/pseudoscience, together with the (potentially anonymous) reviewers who raised this flag.” The last thing we want is interesting and important comments not being submitted out of fear (legitimate or perceived) of potential retribution.