Along with over 10,000 others, I signed the San Francisco Declaration on Research Assessment (DORA). Why? I believe that the impact factor was a useful tool for the paper age, but that we now have the capability to develop much more powerful tools to evaluate research.
In my last blog I argued that carrying out a completely transparent public evaluation of research results – Public Post-Publication Peer Review (4PR) – is the best way to ensure scientific quality. I strongly believe that, and in some ways I started the publishing platform ScienceOpen as an experiment to test this hypothesis. But what happens in the extreme case when a manuscript is submitted that can be perceived as outside of the scientific discourse? "Crackpot" or "pseudoscience" theories, from perpetual motion to parapsychology, abound. A whole list of pseudoscience topics can be found here. It is easy to reject papers on the taxonomy of unicorns, but there are some fields – alternative medicine, for example – where the lines are not so clearly drawn. Politically charged fields such as climate change or genetically modified foods can also present a grey zone where legitimate research and industry-sponsored propaganda are difficult to distinguish. In principle, one could imagine two editorial workflow options to cope with such submissions.
In my recent posts I tried to summarize where we are coming from when talking about academic publishing today. Peer review is an important, if not the most important, feature of publishing in science. We should therefore ask ourselves whether the present process of evaluating a new scientific work is still adequate today in terms of functionality, interactivity, and speed.