© Yould Publications Ltd 2018

The academic publishing landscape has changed beyond recognition since it began as a ‘gentlemanly’ exchange of ideas in journals like Proceedings of the Royal Academy. Yet the essential model of writing lengthy manuscripts, submission, peer review, editing and publication persists. But for how much longer?
The current situation
Some aspects of academic publishing have changed greatly in the past twenty years, but these changes have been used to improve and perpetuate the traditional model. The model is based on the submission of lengthy manuscripts which are peer reviewed and then either published – after amendment – or rejected. Rejected manuscripts are then submitted elsewhere and subjected to the same process, often repeatedly, until acceptance.
Problems with the current situation
The process is lengthy and, while it is designed to produce high quality published work, it is no guarantee of quality or honesty, as the number of retracted manuscripts suggests. Producing lengthy manuscripts is time-consuming, and the peer review process can then last from a few weeks to over a year at some journals. The final product, usually a lengthy article, is then published and in many cases never read. Even when an article is read, few people read every section – especially the introduction and discussion – focusing instead on the methods and the results. Introductions and discussions probably serve a useful function for editors and reviewers, but they are often a hard read for other academics, students and the public, who simply want to know what was done and what was discovered.
The development of the Internet and the World Wide Web has been a significant influence on the academic publishing industry. However, these technologies have mainly been used to accelerate and automate the publishing process; they have not been used to change the process and its outcomes. Online platforms are now almost universally used by academic publishers for submitting manuscripts, organising the peer review process and then publishing articles. On the back of this has arisen the open access movement, which has put pressure on publishers to initiate processes whereby their content can be made available to read without restriction. Two main models operate, known as the ‘gold’ and ‘green’ routes: respectively, the author either pays to make the final published article available, or makes the final accepted version of the manuscript available in a repository following an embargo period imposed by the publisher. More recently still, the ‘diamond’ route to open access has developed, as evidenced by, for example, the WikiJournal of Medicine. Here, no money changes hands either to publish or to read articles.
Following publication, and as another result of the Internet, the rise of social media – Twitter® in particular – has had a profound influence on the way publications are disseminated. Publishers, long used to disseminating journals’ contents lists and occasionally other information about their journals, now have journal websites, and these are used to disseminate content and to highlight specific articles. A journal Twitter® site often spearheads a range of other social media such as blogs, podcasts or YouTube®. Again, this is not a variation on the publishing process, merely an alternative way of advertising content. However, it is quicker, it can be automated and it is easier to access from a range of platforms and in almost any location.
An unintended consequence of online publishing and open access is the rapid growth of predatory publishers. According to the eponymous – and now withdrawn – Beall’s list, there were approximately 100 in 2013; now there are over 1000. Predatory publishers cast suspicion over the legitimate aspects of the business run by open access publishers such as PLOS and BioMed Central, and by publishers offering hybrid journals – where articles may be either pay-to-view or open access.
While the above factors have changed the process in terms of speed and efficiency and have also made the process virtually paperless, other changes resulting from the Internet have the potential to alter the process more substantively. These changes are described below.
From the days when data were jealously guarded, and researchers considered it a virtue – often enforced by research ethics committees – to destroy data within a few years of completing a project, it is now almost de rigueur both to maintain data in perpetuity and to make them publicly available. Large databases are now available for secondary analysis, for combination with other databases and, increasingly, for big data analysis and data-mining. A range of repositories is available for storing and sharing data, including those hosted by universities and academic publishers and specialised project databases such as SHARE and BioBank.
Related to the issue of making data available is journals’ use of supplementary material, as this is typically how they make data available. However, it is possible to make other information available too, such as additional tables, figures, appendices and even reference lists where these are restricted by journal guidelines. Supplementary material can be made available online and accessed via hyperlinks embedded in the online version of the published article or via hyperlinks on the journal’s webpages. In fact, some publishers provide hyperlinks for many cited articles, taking readers directly to the online versions where these are available.
Largely due to the AllTrials campaign for transparency in the conduct and reporting of clinical trials, there has been a significant move towards registering the protocols of studies in advance of conducting them. A range of registries exists across the world – ClinicalTrials.gov being a typical example – and these are, essentially, databases where researchers intending to conduct a clinical trial can publish the protocol. These databases permit the public to see what trials have been registered, and there is provision for publishing deviations from protocols, an indication of completion of the trial and then a summary of the results. These registries are not confined to trials: it is increasingly common to see other designs being registered and, indeed, it is now common for journals to publish protocol articles. In addition, it is becoming more common to register systematic reviews, and the PROSPERO website exists specifically for this purpose – although it is confined to systematic reviews of clinical studies. The Cochrane Library also publishes the protocols of reviews being conducted under its auspices. The situation regarding study registration is being leveraged further by academic journals and publishers as they strive to be at the forefront of setting and maintaining rigour and transparency in academic publishing.
Another phenomenon, which began in the 1990s but has accelerated recently, is the publication of preprints. Wikipedia describes a preprint as ‘…a version of a scholarly or scientific paper that precedes publication in a peer-reviewed scholarly or scientific journal. The preprint may be available, often as a non-typeset version available free, before and/or after a paper is published in a journal’. Preprints arose from frustration with the length of time taken to review manuscripts and from the need to share scientific results early, and they have gathered credibility with research funders and most publishers. Academic publishers have largely accommodated the existence of preprints and the need to accelerate the publishing process; the increasingly common practice of publishers posting final accepted copies of manuscripts, in addition to the early view of corrected proofs, acknowledges this. The Times Higher Education recently described preprints as being ‘largely indistinguishable’ from the final published articles.
Post-publication review, as opposed to the usual pre-publication system of review, is comment on publications once they have been published. This has always been possible, either by correspondence with authors or in the correspondence pages of journals, but the advent of the Internet and social media has facilitated it and prompted consideration of whether it is a serious contender to pre-publication review. A process could be imagined whereby manuscripts are posted online – possibly as preprints – to receive comment and then be altered accordingly, thereby changing and evolving as comments are made. Alternatively, published articles – whether peer reviewed or not – could receive comments linked to the article for readers to take into consideration. Several models potentially exist, and platforms such as PubPeer have arisen for this purpose, alongside facilities for comment within other platforms such as ResearchGate. Recently Publons, a site for recording reviewing and editing activity which is run by Clarivate, has championed post-publication review.
Wikipedia is an online open access encyclopaedia which may be freely edited. There are strict guidelines about what may be entered on Wikipedia, and edits are closely monitored to ensure that all entries are useful and demonstrably linked to reliable sources. One extension of Wikipedia is Wikiversity, a Wikimedia project which aims to support learning through the provision of courses and tutorials. Unlike Wikipedia – where entries are not peer reviewed – Wikiversity allows the publication of original research through the WikiJournal project, which started with the WikiJournal of Medicine. This journal aims to publish peer reviewed articles which are then available open access. No article processing charge is made to authors; this is therefore the ‘diamond’ route to open access and, because articles are published as editable wiki pages, they may be edited post-publication.
The Conversation is an independent source of news and views, sourced from the academic and research community and delivered directly to the public. The Conversation ‘motto’ is ‘Academic rigour, journalistic flair’. Only academics or research students with official university email addresses may contribute to The Conversation, and potential articles are ‘pitched’ using a structured format to a specialist editor, who can decide to accept or reject the article. If accepted, the article must be written to a strict format – usually 700 words – and in language of very high general readability: that of an educated 16-year-old. Jargon and complex sentences are discouraged, and authors may check and adjust the readability of their pieces during the process of submission. While articles are not peer reviewed, the editors scrutinise them and check the provenance of the evidence being cited. Not every piece in The Conversation reports directly on research – some verge on opinion pieces, albeit backed by evidence. However, many are very effective abstracts linked to original reports and data.
How should the academic publishing industry respond?
The academic publishing industry, which has proved itself adaptable over the past twenty years, will continue to adapt. It must: resistance to the changes already taking place would be futile. These changes should be seen both as a sign that the traditional model of academic publishing is under further threat and as an opportunity for publishers to increase their utility to the academic community.
While all the above developments have been challenges for the academic publishing industry, some have been assimilated by the industry and are now considered good practice. Indeed, some have enhanced academic publishing, including:
- data repositories
- supplementary material
- study registration
Other issues, currently tolerated by the academic publishing industry, are threats to the ‘traditional’ model because they either pre-empt or by-pass the established systems of peer review. More recent changes could be viewed as threats which have not yet been addressed and which, if they take hold, may erode the position of the academic publishing industry. This is unlikely to happen rapidly, given the extent of the established academic publishing industry in most fields, and there is time for the industry to adapt; these threats are:
- post-publication review
- The Conversation
The issues to be overcome are:
- quality control
- metrics
Publishers may have to reinvent their role but, of course, none of this is going to happen immediately. Therefore, publishers may be reluctant to change significantly the way they work. All will monitor competitors to see how far they are willing to go and whether any new approaches, pioneered by others, are working.
Publishers could and, indeed, should carve out a role for themselves in this new situation. They already have vast experience, resources and staff and – of prime importance – reputation. While individual initiatives such as the WikiJournal of Medicine may multiply, they are unlikely to become – and probably do not intend to become – major competitors to any of the established academic publishers. Academic publishers operate as competitors, but there is also considerable cooperation, cross-fertilisation of ideas and movement of staff between companies. If a major threat were perceived, there would probably be some coordinated and collaborative action.
Academic publishers should be capable of providing gateways into academic publishing which offer a ‘one-stop shop’ for features such as online registration of studies, data repositories, quality assurance through peer review (at one or several stages of the process) and allocation of Digital Object Identifiers. Publication of streamlined, The Conversation-type summaries of research and scholarship, providing all the necessary links to data, registration and features such as tables and figures, could be one model. They could also provide Wikipedia-like platforms for updating and amending articles.
In terms of peer review, the established publishers can offer platforms that individual diamond-route journals such as the WikiJournal of Medicine are unlikely to be able to afford. Reviewing and manuscript processing platforms such as ScholarOne® and the Evise® system are very expensive and require regular maintenance, customisation and upgrading. Therefore, operating such platforms is likely to remain solely in the domain of the established academic publishing companies. Thus, while any journal of whatever size may operate a system of peer review, the established publishers will do it better.
By metrics is meant, specifically, bibliometrics related to journal and author performance in terms of citations. Currently, several systems – such as Clarivate’s Web of Science, Scopus and Google Scholar – record citations, and there are several ways of calculating bibliometrics based on them. For journal performance, the impact factor remains the most commonly understood and accepted method, and the impact factor published by Clarivate is considered the gold standard. For author performance – where, again, Clarivate provides the gold-standard figure – the h-index is a widely accepted measure.
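For readers unfamiliar with how these two figures are derived, both are simple to compute once citation counts are known. The sketch below (with purely illustrative numbers, not real data) shows the standard definitions: the two-year impact factor divides citations received in a given year to a journal’s content from the previous two years by the number of citable items published in those two years, and the h-index is the largest h such that an author has h papers each cited at least h times.

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year journal impact factor: citations in year Y to items published
    in years Y-1 and Y-2, divided by the citable items published in those years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Illustrative figures only:
print(h_index([10, 8, 5, 4, 3]))   # an author with these citation counts has h = 4
print(impact_factor(200, 80))      # 200 citations to 80 citable items gives 2.5
```

Note that the real Clarivate calculation depends on which items count as ‘citable’, a definitional detail that has itself been a source of debate.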
If the academic publishing industry is driven purely by these bibliometrics, then many aspects of the model proposed above will not happen. However, there is some pressure from within academia to stop using bibliometrics to measure performance, best articulated in DORA (the Declaration on Research Assessment). A new model of publication may give impetus to the end of citation-based metrics. Nevertheless, an alternative more suited to the potential changes in academic publishing does exist in the form of alternative metrics – the Altmetric score – based on online mentions of articles across a range of platforms such as online newspapers and social media. Some academic publishers are already providing Altmetric scores on the landing pages of articles.
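In essence, an Altmetric-style score is a weighted count of online mentions, with different sources weighted differently (a news story counting for more than a tweet, for example). The real weightings are proprietary, so the sketch below uses purely hypothetical weights for illustration.

```python
# Hypothetical source weights - the real Altmetric weightings are proprietary.
WEIGHTS = {"news": 8.0, "blog": 5.0, "twitter": 1.0, "facebook": 0.25}

def attention_score(mentions):
    """Weighted sum of mentions, e.g. mentions = {"news": 2, "twitter": 40}.
    Sources without a listed weight contribute nothing."""
    return sum(WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())
```

Under these assumed weights, two news stories and forty tweets would score 2 × 8 + 40 × 1 = 56, illustrating how such a score rewards breadth of attention rather than scholarly citation.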
Is there a role for editors?
Currently, with support from publishers, editors manage and operate the systems involved in submitting and publishing an academic article. Models and titles change across the industry but, currently, it is hard to envisage the system working without editors, who interact with authors and reviewers. Moreover, it is editors who make decisions at every step from submission through to acceptance. Unless significant changes in academic publishing lead to a ‘free for all’ in terms of what can be submitted and published, it is hard to envisage any model working without editors. Even if there is a significant move towards post-publication review, someone must still decide whether post-publication comments are acceptable and whether changes ought to be made to the article on their basis. This is, for example, how Wikipedia works: editors check entries and updates for utility and sourcing, having already decided about the worth of the entry in general. This is one possible model for academic editors who, in addition to scrutinising suggested amendments for utility and sourcing, could also judge the scientific aspects of reviews and, if necessary, draw on further experts to assist. They would also have to investigate claims of scientific misconduct. Therefore, it is unlikely that the role of editors can be dispensed with; but editors would have to learn to work differently.
The academic publishing industry continues to change, and we can only assume further change will come. In parallel, the established processes of publishing are being eroded and alternative models of publication are developing. In particular, the traditional academic article, where all the aspects of a study are gathered in one long piece of writing, is surely on the verge of extinction. The purpose of such articles was to gather together information, with references, that was otherwise inaccessible, couched within a structured argument arranged into relatively standard sections. Often, the essential information contained in such articles tells the reader little more than the abstract. In the age of social media and information overload, and with hundreds of journals available in most fields, publishers are going to have to find new ways of packaging their products; the product itself, and the processes of production, must also change.
The issues of quality and metrics are not insurmountable, and the role of editors is unlikely to become redundant. The traditional reliance on citations alone, and on calculations derived from them, may be threatened; this will be welcomed by many. Alternative metrics, more attuned to the way information is now obtained, already exist and could be used in conjunction with, or even as an alternative to, metrics based on citations.
This is a position paper reflecting the views of the author and aims to stimulate discussion amongst academics, editors and publishers. Roger Watson declares the following interests: Editor-in-Chief, Journal of Advanced Nursing; Editor, Nursing Open; Editorial Board member, WikiJournal of Medicine.