Among the three, the latter is perhaps the most underdeveloped, but this will change soon, as JMIR is currently undertaking a "prepublication Open Peer review" trial. On submission, authors are asked whether they are interested in us posting the paper on the web before formal publication to solicit further comments from peers and the public.
Before delving into the challenges of running an open peer review experiment, I should briefly discuss the semantic confusion around the term “open peer review” - a term used by so many - which has bothered me for a while. People may discuss "open peer review" without realizing that everybody has something different in mind. The term has many (I would argue at least four) different meanings, and “open peer review” comes in vastly different flavours. In brief, my proposed little taxonomy distinguishes between:
- “named (or: signed) peer-reviews”,
- “openly accessible peer reviews”,
- “prepublication open peer review”,
- and “post publication open peer review”.
The first variant is to be open about the identity of the reviewers. This form of “open peer review” is better (and more precisely) called “named (signed) peer review” (“named” being the antonym of “anonymous”). JMIR has been doing this since its inception in 1999, and was probably a pioneer in adopting a policy of complete transparency, revealing the identity of the reviewers alongside each published article, which we think is an adequate form of acknowledgment given the (sometimes substantial) contributions the reviewers make. We keep the names of the reviewers confidential if we reject a manuscript, to avoid retribution from authors.
Openness is never free from tensions, and even JMIR's mini-step towards openness has its problems. While most peer-reviewers are happy to see their name published alongside the articles they reviewed, we also had one or two cases where reviewers thought the manuscript should not have been published, and were not happy about their name showing up alongside the paper. As reviewers frequently disagree on the merits of a paper (and on what they would like to see changed to make it “publishable”), it is not always possible or reasonable to bring a paper into a form that makes all reviewers happy (not to mention the fact that even reviewers can be wrong).
By and large, however, we did not observe any “adverse effects”.
Another subflavour of named reviews (which I call signed reviews) has been introduced by the BMJ, which in 1999 announced that it would reveal the name(s) of the reviewer(s) to the author (regardless of whether or not the manuscript is accepted), together with the review. Contrary to JMIR’s policy, however, the names of the reviewers are not published.
The Br J Psych did a randomized trial on signed peer-reviews and concluded that signing increases the quality of the reviews, the time taken to complete them, and the probability of recommending acceptance.
Although “named / signed peer review” is a small step towards transparency, I would suggest not calling it “open peer review” unless the entire process is visible on the web.
A second variant of open peer review, which I call openly accessible peer-reviews, is publishing the actual peer-reviews and pre-publication history alongside the article (i.e. after the manuscript has been published), as for example BMC does.
The most radical form of openness in peer-review is, however, a variant where the peer-review process (or, more accurately, an assessment process) can be monitored in real time and influenced by everybody: “pre-publication open peer review” or “post-publication open peer review”, i.e. publishing the entire manuscript online and inviting everybody to comment on it, either before or after the manuscript has been formally published. Neither of these variants is a radically new approach. In fact, pre-publication peer-review has been done almost routinely in some fields (e.g. physics), by authors sharing or self-publishing a preprint (or e-print), and the BMJ experimented with a preprint server in the late 1990s (http://clinmed.netprints.org/) - with dismal results. The Netprints server was hardly used by anybody and the service has been discontinued.
Post-publication open peer-review (e.g. the model of PLOS One, which has abandoned pre-publication peer-review completely) is also not really a new model, as many journals already give readers the possibility to post rapid responses alongside articles.
However, journal-initiated pre-publication open peer-review (as opposed to author-initiated peer-review, i.e. preprints) is still a rarity. I know of the Economic eJournal (WebCite), the new Scholarly Exchange (WebCite), and most recently also Medical Education Online (WebCite).
Traditional subscription-based journals have always been vehemently opposed to considering anything for publication if it has previously been published on the web (the “Ingelfinger rule”), citing safety concerns for patients, but perhaps - as I argued previously - primarily to preserve their newsworthiness and in an effort not to lose subscribers (who would perhaps not pay for papers that can also be found on the web). Hence, Open Access journals are particularly suited to experiment with open peer review and open assessment models, because for them, losing subscribers and the newsworthiness of publishing something that has already appeared online are less of a concern.
Yet, as I argued already a decade ago, even for traditional journals the role is changing -- the future role of journals is that they “are no longer in the business of revealing new information, but in making existing information understandable and useful for a broader audience”. Journals will be read not because they “reveal” new information, but because they help readers of a certain scientific community cope with the onslaught of publicly available information on the web.
With the role of the web in scholarly communication still evolving, and seeing all these new developments and the buzz around “Science 2.0”, I am more convinced than ever that my (and others') early predictions along these lines were not far off. My impression is that even subscription journals are now more relaxed about publishing material even if it has appeared on the web before.
JMIR has always been running a wiki, conceived as an open space to experiment with new ideas. We are now using the wiki for what we call a “Peeki” – a peer review wiki (also alluding to the fact that it enables readers to “peek” behind the scenes of peer review) – posting selected submitted articles and their peer-review comments, as well as anything readers may want to add or correct (since 2007 we have been asking submitting authors for permission to do this). In particular, we plan to post all preliminarily accepted articles on the Peeki.
However, there are also huge barriers and challenges associated with pre-publication peer-review.
The first major concern is that most early experiences (including our own) point to a lack of participation from readers [see the Nature experiment (WebCite)]. Why is this? Perhaps because there is a lack of incentives to comment on / assess unpublished articles, perhaps because of a lack of citability (of the comments) and of “credit points” for tenure? The quality and quantity of comments from the community remains a concern.
Many journal editors (including myself) who have been contemplating a similar approach have been discouraged by the Nature experiment, which showed that very few readers actually participated, and that the comments were not very helpful.
One unresolved research question is which factors determine active reader input. Another issue is whether concerns that authors might log in with a fake identity and comment positively on their own paper are justified.
Another challenge is that (as preliminary results from the JMIR experiment confirm) not many authors are excited about these possibilities, perhaps due to concerns that somebody may “steal” their ideas.
Ex-BMJ editor Richard Smith (and the world's best editor - now unfortunately doing other things which nobody understands) wrote a couple of years ago that “openness [disclosing reviewer names to authors] should eliminate some of the worst abuses of peer review, where reviewers under the cloak of anonymity steal ideas or procrastinate”. In fact, the main argument against open peer-review is perhaps that it reintroduces the quasi-anonymity of the peer-reviewer (even if login is required, we all know how quickly a hotmail email address and a false identity are created).
Associated with plagiarism concerns are authors’ concerns regarding the (perceived) lack of citability of their unpublished draft versions, and the perceived lack of an archival (“historic”) record to prove priority of ideas – both problems which my group actively tries to address with the WebCite project (see Eysenbach, Gunther. The paradox of the current state of scholarly communication in the age of Web 2.0. Posted at http://gunther-eysenbach.blogspot.com/ Jan 18, 2008. Archived at http://www.webcitation.org/5Ux2jEb70).
Authors may also be concerned that the paper will not be “publishable” somewhere else if the comments are negative. Even if the comments or the paper are removed, there may still be traces somewhere, e.g. in the Internet Archive or other archiving projects such as WebCite.
There are also many unresolved questions regarding what policies journals that experiment with open pre-publication peer-review should implement: Should comments/papers be deleted after rejection, after acceptance, both, or never? How should all the different versions and the comments be indexed and cited?
Given all these questions, it is surprising that funding agencies do not show more interest in funding research in these areas. Last year, I sent out various grant proposals to various foundations that are traditionally interested in scholarly communication issues and/or openness.
One proposal, to develop a generic “Peeki” platform enabling open peer-review (as a plugin for OJS), was submitted to the Mellon Foundation. It was turned down with the explanation that “support for biomedical publications does not fit within the current funding priorities of the Foundation's Scholarly Communications program”. My explanation that this was not for a specific publication, but a generic platform to enable open peer-review which could be used by other journals (or even non-journal venues), was countered with the argument that “there is no credible evidence in what you submitted to indicate that any other journal than the Journal of Internet Medical Research would actually use software that is proposed.”
The same proposal, submitted to the Open Society Institute, was turned down with an argument implying that the project does not differ from other open peer review projects. The project officer wrote: “PLoS One is the best example of open peer review which I know of, but others have also experimented with it, including Nature, which unfortunately found that it didn't work within their community.“ Well... in my humble opinion, PLOS One is great, but unfortunately PLOS has morphed from an Open Access advocacy group into a competitive business which currently doesn’t make any software available under an open source license, and there is little evidence or research that the PLOS One model is the one we should all adopt - after all, it means abandoning any kind of pre-publication peer-review.
For the time being I’ve put the Peeki proposal in the drawer, but given the many unresolved research questions (including under which circumstances open peer review works) and the many different variants of open peer review models, I believe that more research and development in this area is urgently required. An open source platform for open peer review - one that also engineers and experiments with incentives for peer-reviewers (e.g. "karma" points that they could use for promotion & tenure purposes) - is urgently needed, so I do not think this is the end of the story.
Leave a comment if you agree or differ.
Please cite as: Eysenbach, Gunther. Open Peer Review - Trials and Tribulations. Gunther Eysenbach Random Research Rants Blog. 2008-03-12. URL:http://gunther-eysenbach.blogspot.com/2008/03/open-peer-review-trials-and.html. Accessed: 2008-03-12. (Archived by WebCite® at http://www.webcitation.org/5WGnhpf6d)