Have you been asked to peer review for a journal recently? Chances are that at least one invitation has landed in your inbox as the new academic year approaches. Did you quietly decline, or did you add the request to your growing to-do list? This week is Peer Review Week, and Karen Rowlett reflects on what new technologies and AI may mean for best practice and the ethics of the review process.
The number of manuscripts submitted to academic journals is soaring. Between 2016 and 2022, the number of papers received per year is estimated to have grown from 1.92 million to 2.82 million. Peer reviewers are usually busy academics with their own research, teaching and administration commitments, and they are neither paid nor formally recognised for their contributions; how, then, can journal editors ever find enough of them to appraise submissions? New technologies such as artificial intelligence (AI) offer opportunities to streamline the peer review process, but they may also introduce more complex ethical problems.
For instance, how many times have you been asked to review a paper that is not related to your research area? One immediate benefit of AI may be more accurate targeting of requests, so that the papers you are asked to review are a closer match to your expertise. This should increase reviewer uptake, speed up the review process and improve the quality of the reviews.
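To make the idea concrete, here is a minimal sketch of expertise matching, assuming a very simple setup in which each reviewer is represented by the text of their own abstracts and candidates are ranked by textual similarity to the submission. All names and data below are hypothetical, and real editorial systems rely on much richer signals (semantic embeddings, citation networks, conflict-of-interest checks):

```python
# Minimal sketch: rank candidate reviewers by the textual similarity
# between a submitted abstract and each reviewer's publication record.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical data: one string of concatenated abstracts per reviewer.
reviewers = {
    "Reviewer A": "protein folding molecular dynamics simulations",
    "Reviewer B": "machine learning peer review natural language processing",
    "Reviewer C": "crop genetics drought tolerance field trials",
}

submission = "using large language models to screen peer review reports"

# Vectorise the submission together with the reviewer profiles so they
# share one vocabulary, then score each reviewer against the submission.
corpus = [submission] + list(reviewers.values())
matrix = TfidfVectorizer().fit_transform(corpus)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Invite the closest matches first.
for name, score in sorted(zip(reviewers, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

Even this crude ranking would avoid asking a crop geneticist to referee a machine-learning paper, and it is exactly that kind of mismatch that depresses reviewer uptake.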
AI tools may also be useful to pre-screen manuscripts for potential issues with research integrity such as image manipulation, plagiarism, reporting of non-existent cell lines and probes, inappropriate or missing citations, missing consent forms or non-adherence to reporting guidelines for human clinical trials. These are all issues that can be difficult for reviewers to spot but can have a significant impact on the integrity of the scholarly record. AI may also be better than humans at detecting submissions from paper mills.
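As a simple illustration of the pre-screening idea, the sketch below flags unusually high text overlap between a submitted sentence and previously published passages, in the spirit of a basic plagiarism check. It is a toy example using only Python's standard library; the published passages and threshold are invented, and real integrity tools (image forensics, paper-mill detectors) are far more sophisticated:

```python
# Minimal sketch of one pre-screening check: flagging high text overlap
# between a new submission and previously published passages.
from difflib import SequenceMatcher

# Hypothetical corpus of already-published sentences.
PUBLISHED = [
    "We observed a dose-dependent decrease in cell viability after treatment.",
    "Soil moisture was the dominant driver of yield variation across sites.",
]

def overlap_ratio(a: str, b: str) -> float:
    """Similarity between two passages: 0.0 (distinct) to 1.0 (identical)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(submission_sentences: list[str], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return (submitted, published) pairs whose overlap exceeds the threshold."""
    flags = []
    for sent in submission_sentences:
        for pub in PUBLISHED:
            if overlap_ratio(sent, pub) >= threshold:
                flags.append((sent, pub))
    return flags

# A lightly reworded copy of a published sentence is still flagged.
print(screen(["We observed a dose dependent decrease in cell viability after treatment."]))
```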
Could AI eventually replace human peer reviewers? There's some evidence that large language models are already being used in peer review. While using tools such as ChatGPT to refine your prose might be acceptable, particularly for non-native English speakers, relying on them to make decisions or write review reports could be much more problematic. A recent study of peer review reports for conference abstracts found that AI influence was much more likely to be detected in reports submitted close to the reviewer's deadline, suggesting that hard-pressed academics are already using it as a shortcut.
What kind of peer review reports might AI generate? Tools such as ChatGPT can be useful for summarising a submitted manuscript, but the whole point of peer review is that the quality of the research is assessed by an experienced researcher in the field. In the study of conference abstract peer reviews, the text of the reviews appeared blander and more convergent when AI was used. The AI-generated reviews also seemed more likely than expert peer reviewers to recommend further experiments, a trend that might fuel more 'reviewer 2' angst, where overly critical comments leave authors feeling bruised or discouraged. AI tools that produce low-quality, bland and non-committal peer reviews will surely only make the situation worse.
There are potential ethical issues when peer reviewers use AI tools to assess manuscripts without the authors' permission. Uploading a manuscript to such a tool would violate the confidentiality of peer review, go against journals' reviewer guidelines, and could breach copyright by effectively placing the author's unpublished work in the public domain. To deter such activity, publishers have already begun adding guidance to their instructions for reviewers and editors on the appropriate use of AI in peer review and how to detect it in review reports.
Could AI tools reduce bias and prejudice in peer review decisions? Bias in peer review and editorial decisions is well documented and is known to disadvantage authors on the basis of their gender, ethnicity, age, experience, race and geographical location. Switching to AI-based decisions would not necessarily produce fairer outcomes. Unfortunately, AI tools can carry inherent biases of their own, depending on the data they are trained on (particularly if it is historical), who developed the tool and how the software is designed.
Despite all these reservations, it seems likely that publishers will integrate AI-based tools into the peer review process in the near future. The key considerations will be transparency and accountability. Authors, reviewers and readers will need to know how AI was used in the submission process, and it should never be a direct replacement for human judgement. It must be clear to everyone involved which parts of the journal workflow are powered by AI decisions, and authors and reviewers should be able to challenge decisions made on the basis of AI in the same way that they can currently dispute other editorial decisions. Well-developed AI tools, used responsibly at the right points in the journal submission process, do have the potential to streamline peer review and make the whole ordeal easier for reviewers and authors alike.
Karen Rowlett is Research Publications Adviser at the University of Reading.