Partner Blogs

The best peer review reports are at least 947 words

From the LSE Impact Blog

Reading Time: 5 minutes

Based on an analysis of the relationship between peer review reports and subsequent citations, Abdelghani Maddi argues that longer, and hence more constructive and engaged, peer review reports are closely associated with papers that are more cited. This post draws on the author's article, On the peer review reports: does size matter?, published in Scientometrics.

This article is shared from the LSE Impact Blog. It gives the views and opinions of the author and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science or Dementia Researcher. Shared under the Creative Commons Attribution 3.0 Unported (CC BY 3.0) licence, the original publication can be found at https://blogs.lse.ac.uk/impactofsocialsciences/2024/05/16/the-best-peer-review-reports-are-at-least-947-words/


Peer review continues to be the focus of considerable debate across academic fields as diverse as the sociology of science, economics and biology. This focus reflects a divide within the scientific community regarding its efficacy in identifying errors and instances of scientific misconduct. In recent years it has been fueled by a proliferation of notable cases of errors and misconduct detected in articles published in prestigious international journals, including those published by Elsevier and Springer Nature. The rise of generative AI has only heightened these concerns, due to its overt and covert use in peer review.

Nonetheless, there exists a consensus within the scientific community regarding the pivotal role of peer review in knowledge generation and dissemination. Despite uncertainties surrounding its ability to detect fraud and questionable practices, reviewers’ comments typically provide constructive feedback intended to enhance the quality of papers, encompassing both structural and substantive aspects, including methodological validation and alignment with recognised publication standards.

However, as we found in a recent paper, quantifying the added value and tangible impact conferred by peer review on scientific publications remains challenging. Our approach was to focus on the length of peer review reports, on the theory that there is a direct correlation between the length of reports and the extent of revisions and modifications requested from authors. Longer reports would therefore, generally, be associated with a higher likelihood of authors making revisions and enhancements to their papers, thereby bolstering their quality, visibility, and citation metrics.

We used data from Publons to extract information regarding the length of reviewers’ reports for a corpus of 57,482 publications. To ensure generalisability, the structure of the Publons database was aligned with that of the Web of Science database using the Raking Ratio method, employing a control group comprising 12.3 million articles. Consequently, the weighted sample faithfully mirrors the overall database structure across disciplinary distribution, collaborative patterns, open access practices, and other variables.
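To make the weighting step concrete, here is a minimal sketch of the raking-ratio idea (iterative proportional fitting): a sample cross-tabulation is reweighted until its margins match those of a reference population. This is not the paper's actual pipeline; the two dimensions (discipline, open-access status) and all numbers are invented for illustration.

```python
# Raking-ratio sketch: alternately rescale rows and columns of a
# weighted sample table until its margins match population targets.
import numpy as np

def rake(counts, row_targets, col_targets, iters=100, tol=1e-10):
    """Return cell weights w so that (counts * w) matches both margins."""
    w = np.ones_like(counts, dtype=float)
    for _ in range(iters):
        # Scale rows to match the row margins (e.g. discipline shares).
        row_sums = (counts * w).sum(axis=1, keepdims=True)
        w *= row_targets[:, None] / row_sums
        # Scale columns to match the column margins (e.g. OA shares).
        col_sums = (counts * w).sum(axis=0, keepdims=True)
        w *= col_targets[None, :] / col_sums
        if np.allclose((counts * w).sum(axis=1), row_targets, atol=tol):
            break
    return w

# Hypothetical sample cross-tab (rows: 2 disciplines, cols: OA yes/no)
# versus made-up population margins.
sample = np.array([[30.0, 10.0],
                   [20.0, 40.0]])
weights = rake(sample,
               row_targets=np.array([50.0, 50.0]),
               col_targets=np.array([60.0, 40.0]))
weighted = sample * weights  # margins now match the targets
```

In the paper this alignment is done over many more dimensions (discipline, collaboration, open access, etc.), but the mechanics are the same alternating rescaling shown here.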

papers garnering the highest citation counts tended to be associated with longer reviewer reports, exceeding the average length.

Our analyses revealed a statistically significant impact of reviewers’ report length on citations received, with reports surpassing approximately one and a half pages (947 words) marking a critical threshold. Notably, papers garnering the highest citation counts tended to be associated with longer reviewer reports, exceeding the average length. Beyond this threshold, citation counts exhibited an increasing trend with longer report lengths, corroborating the initial hypothesis that the length of referees’ reports is a proxy for the extent of revisions solicited, and thereby for enhanced manuscript “quality”.
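The kind of model behind this finding can be sketched as a regression of log(1 + citations) on review-length categories plus controls. The following is a hypothetical illustration on synthetic data, not the paper's analysis: the length categories mirror those reported in the post, but the data, the single control (impact factor), and the coefficients are invented.

```python
# Illustrative OLS on synthetic data: log(1 + citations) regressed on
# review-length dummies (reference: 1-231 words) plus log impact factor.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
review_words = rng.integers(1, 2892, size=n)
impact_factor = rng.lognormal(mean=1.0, sigma=0.5, size=n)

# Length categories from the post: 232-535, 536-946, 947-1612, 1613-2891.
bins = [232, 536, 947, 1613]
cat = np.digitize(review_words, bins)   # 0..4, where 0 is the reference
dummies = np.eye(5)[cat][:, 1:]         # drop the reference category

# Simulate citations whose expectation rises past the 947-word threshold.
true_effect = np.array([0.05, 0.0, 0.25, 0.40])  # made-up effects
log_cites = (0.5 + dummies @ true_effect
             + 0.3 * np.log(impact_factor)
             + rng.normal(0.0, 0.3, size=n))

# Fit: [intercept, 4 length dummies, log impact factor].
X = np.column_stack([np.ones(n), dummies, np.log(impact_factor)])
beta, *_ = np.linalg.lstsq(X, log_cites, rcond=None)
```

On data simulated this way, the estimated coefficients for the 947-1612 and 1613-2891 word categories come out clearly positive while the 536-946 category stays near zero, mimicking the threshold pattern described above.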

[Figure: Robust regression analysis, estimated coefficients and significance for predictors of Log(1 + citation count). Relative to the reference review length of 1 to 231 words, reports of 232 to 535, 947 to 1612, and 1613 to 2891 words have significant positive coefficients, with the effect increasing beyond the 947-word threshold; 536 to 946 words is not significant. Number of countries per publication (log), number of funders per publication (log), open access status, impact factor (log), and publication year (estimate close to 0.97) all have significant positive coefficients. Relative to the Humanities & Art reference category, most research fields show significant positive coefficients; Psychology, Physics, Materials Science, Mathematics, and Earth & Space Sciences are not significant. Adjusted R-squared of the model: 0.47.]

This finding highlights the role of reviewers in enhancing the quality of scientific publications. Their contribution extends far beyond mere minor revisions including spelling or grammatical corrections. Reviewers, who can be aptly termed the “unsung heroes” of scientific research, significantly contribute to improving publication quality by providing detailed and constructive reports, even within shrinking deadlines. Yet, their contribution remains largely unrecognised and underestimated.

Another salient aspect raised by the study is the importance of time for conducting thorough peer review. Journal editors need to acknowledge that furnishing useful, comprehensive reports entails a considerable investment of time and effort from reviewers. Soliciting evaluations within extremely short time frames, such as a week for reading the publication and drafting the report, as practiced by certain “gray” publishers, can compromise the quality of assessments and consequently diminish the value added by peer review to manuscripts. This points to the need to reconsider the emphasis on speed in response times for peer review.

Journal editors need to acknowledge that furnishing useful, comprehensive reports entails a considerable investment of time and effort from reviewers.

A third implication relates to the current saturation of the scientific publishing system, where the increasing number of submitted articles strains journals, reviewers, and the scientific community as a whole. This workload risks compromising the focus of reviewers, who must assess a large number of articles within short deadlines, and could lead to the dissemination of “bad science” if certain important aspects of an article are not properly evaluated due to time constraints, especially in a context where the speed of peer review has become a sort of “advertising” argument wielded by some publishers to attract authors. Even though post-publication peer review allows for partial “correction” of errors and questionable practices, the volume of annual publications far exceeds the capacity of the active community to scrutinise all publications. The scientific community requires a pre-publication evaluation system that functions properly.

Finally, open peer review offers promising prospects for enhancing transparency and vigilance. According to one recent study, the peer review model adopted by a journal directly influences the behaviour of both researchers and reviewers. The study highlights that journals implementing open peer review protocols incentivise heightened engagement from researchers and reviewers, as their actions directly impact their reputations. In addition, by allowing access to reviewer reports and promoting the reuse of evaluation data, this approach could facilitate deeper and more quantitative analyses of the impact of peer review on the quality of scientific publications and the advancement of science in general. Open peer review thus represents an opportunity to rethink and improve peer review practices in a context where the increasing quantity of publications necessitates a more efficient and transparent approach.



The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Authors

Abdelghani Maddi is a research engineer at GEMASS (CNRS/Sorbonne University). An economist by training specialising in scientometrics, he is passionate about understanding scientific knowledge production, promoting open science, and improving research evaluation.
