Last year, the UK government put the peer review process for scientific publication under scrutiny in the shape of a House of Commons (HoC) Science and Technology Committee report entitled Peer review in scientific publications (HoC 2011a). The report was produced on the basis of written and oral evidence to the committee, and those with sufficient interest can watch video footage of the proceedings on the internet (http://www.parliamentlive.tv/Main/Player.aspx?meetingId=8301; retrieved 11 December 2011). This is perhaps not the greatest draw for the majority of the public but, as someone deeply engaged with the academic publishing industry, I found the live proceedings fascinating and the ensuing report very informative. It is always of great interest to see how others view one of the fundamental pillars of our industry. The evidence was given by publishers, editors and organisations with an interest in the process of scientific publishing, but the questions were asked by Members of Parliament with little or no knowledge of the scientific publishing industry.
The UK government has previously considered peer review (HoC 2011b) and several organisations have also published on the process in recent years (Council of Science Editors 2006, The British Academy 2007, Thomson Reuters 2011, Ware 2008, Ware & Monkman 2008). The immediate reason for some of the recent interest was doubt about the process arising from peer reviewed information about climate science that was later questioned in terms of its accuracy (HoC 2011b), together with the somewhat naive assumption by the press, the public and politicians that the peer review process protected against this kind of thing; in the words of the 'Climategate' report, that peer review was a 'firewall' between truth and falsehood (http://www.scribd.com/doc/34003747/Muir-Russell-Final; retrieved 13 December 2011), which, clearly, it was not.
The value of the UK HoC report is that it is, surely, one of the most comprehensive and authoritative documents on peer review that exists. As such, it should be compulsory reading—at least the first 100 pages that contain the main body of the report—for editors, publishers, reviewers and, indeed, scientific authors. The report is wide-ranging, taking in definitions of the peer review process and what it aims to achieve, a review of current practices, and discussion of their efficacy. The roles of editors, authors and reviewers are discussed, as are new approaches to reviewing, such as post-publication review, and publishing ethics. In addition, the review of data in the reviewing process, not something I have encountered in nursing, is covered.
The peer review process
It is worth reiterating the terms of reference of the HoC committee in full to give a clear idea of what the purpose of the investigation was and what the report addresses. These were:
1. the strengths and weaknesses of peer review as a quality control mechanism for
scientists, publishers and the public;
2. measures to strengthen peer review;
3. the value and use of peer-reviewed science on advancing and testing scientific
knowledge;
4. the value and use of peer-reviewed science in informing public debate;
5. the extent to which peer review varies between scientific disciplines and between
countries across the world;
6. the processes by which reviewers with the requisite skills and knowledge are
identified, in particular as the volume of multi-disciplinary research increases;
7. the impact of IT and greater use of online resources on the peer-review process;
8. possible alternatives to peer review.
The key features of the peer review process were summarised and included acknowledgement of the fact that many journals are under extreme pressure for the limited space they offer due to the very high level of submissions they receive. This means that there is often a 'triage' stage where some kind of 'in-house' evaluation of manuscripts takes place, whereby editors make a decision about whether items go forward for review. A great many manuscripts are rejected at this stage. The different types of review (single-blind, double-blind and open) were described but, whatever the process, the variation in the quality of reviews was acknowledged. New models of publication, including open access, were included and it was indicated that these newer, less traditional, forms of publishing, which are gaining popularity, are not devoid of a peer review process.
With increasing submissions to scientific journals, approximately three million manuscripts annually, and the increasing complexity of research (inter- and multidisciplinary), the need for more reviewers overall, and in some cases more reviewers per paper, and the increasing frequency with which they were asked to review were acknowledged as a burden. Nevertheless, even without concrete evidence of its efficacy, the peer review process is widely considered to be essential to the scientific publishing industry. Moreover, participating in peer review is also considered 'integral' to the career of a researcher.
Editors, authors and reviewers participate in the peer review process and, of course, many people play all three roles as only active and successful academics are likely to be asked to review manuscripts and be appointed as editors. However, the activities related to the publishing industry remain largely the domain of the gifted amateur and, while there is training available for editors and peer reviewers, this is far from the norm. Of course, the issue of who would fund such training was raised and there was some indication that this might be in the domain of the research councils. Surprisingly, the publishing industry—which benefits financially from the peer review process—was not identified as a source of funding.
Returning to the issue of burden, the report covered the difficulty of finding reviewers and the processes for doing so, as well as mechanisms for reducing the burden, such as cutting out re-review by placing responsibility for ensuring the validity of a paper on its authors, following an initial review and comprehensive instructions from an editor on what needs to be done to secure publication. However, and not acknowledged in the report, depending in this way on authors who are ambitious and reliant on their publications for promotion and research grants could lead to some malpractice and, of course, increases the burden on editors to provide more detailed feedback and to oversee the process.
The general lack of recognition received by academics who carry out peer review was acknowledged and, speaking as a reviewer and an editor, it is clear that those who willingly and efficiently review manuscripts are those who are repeatedly asked to do it, and the burden can be significant. This is highly legitimate academic work but, apart from a general expectation that it is part of the duties of an academic, it is rarely if ever precisely measured and taken into account in staff review and promotion rounds.
Impact and integrity
The perceived importance of well-published research, with its implication of the quality and importance of the published findings, is the main reason why the peer review process was being considered. It was acknowledged that publication in a peer reviewed journal, especially one of high impact, tended to signal something about the standing of the published work. While publication metrics were discounted in the UK Research Assessment Exercises and would be in the forthcoming Research Excellence Framework, it is clear that the vast majority of 'outputs', which include books, reports and patents, are papers in refereed high impact factor journals.
In addition to impact, the peer review process both conveys and requires integrity. Without question, reviewers are supposed to hold the highest professional standards and, for instance, to declare any conflicts of interest with manuscripts they may be reviewing. In addition, the highest standards of integrity are expected of authors; unfortunately, these are not always adhered to, and reviewers, alongside editors, play their part in detecting fraud, duplication and plagiarism. All editors will report an increase in the incidence of malpractice in publications, and the reasons must include the sheer increase in the number of submissions, an increased awareness of the issues involved—thanks to organisations like the Committee on Publication Ethics (http://publicationethics.org/; retrieved 13 December 2011)—and the inception and more widespread use of new technology to detect malpractice.
The report concluded that peer review was crucial and that, while there were different models available, they did not all suit all types of publication. The role of editors was acknowledged and, while they often lacked formal training, a great deal was offered 'on the job'. Integrity and transparency were essential aspects of the process. However, none of the above guaranteed the worth of a piece of research; its worth remains in the eyes of the person reading it, for which, as the HoC report (2011a, p. 94) stated: 'there is no substitute'.
Council of Science Editors (2006) CSE's white paper on promoting integrity in scientific journal publishing. CSE
House of Commons Science and Technology Committee (2011a) Peer review in scientific publications. The Stationery Office Ltd, London
House of Commons Science and Technology Committee (2011b) The reviews into the University of East Anglia's Climatic Research Unit's E-mails. The Stationery Office Ltd, London
The British Academy (2007) Peer review: the challenges for the humanities and sciences. The British Academy, London
Thomson Reuters (2011) Increasing the quality and timeliness of scholarly peer review. Thomson Reuters, New York NY
Ware (2008) Peer review: benefits, perceptions and alternatives. Publishing Research Consortium
Ware & Monkman (2008) Peer review in scholarly journals: perspectives of the scholarly community – an international study. Publishing Research Consortium