A research paper’s citation count may skew its influence

February 3, 2022 (last updated on July 13, 2023)

New research suggests that lower citation counts lead readers to assume a paper is of lower quality.

By Sarah Steimer

There’s a presumption that highly cited journal articles are the most influential papers: the more times a paper has been cited, the higher its perceived quality. But this puts less-cited papers at a clear disadvantage, leading readers to assume that a lower citation count means lower quality.

A new study published in Research Policy shows a causal link between citation counts and perceived quality, marking the first large-scale, systematic examination of the practice. The authors were able to underscore the polarizing effects of citation counts: citations to already highly cited papers are two to three times more likely to reflect substantial intellectual influence, while papers perceived to be of lower quality (those with fewer citations) are read more superficially and discovered later in a project.

The relationship between the number of citations a paper receives and the influence of the work was first articulated by sociologist Robert K. Merton. His normative theory of citation holds that researchers cite others to acknowledge the influence those works have had on their own research.
 

“We had this sneaking suspicion that that’s not what we do,” says Eamon Duede, one of the study’s co-authors and a joint PhD candidate in the Committee on Conceptual and Historical Studies of Science and the Department of Philosophy. “If we actually look at the process of constructing a study and writing it up, you find that you do an awful lot of post-hoc citation searching: You jump on Google Scholar and look around for papers that support your claims.”

It turns out there’s literature to support this more rhetorical concept of citation, which holds that people cite other works not because those works influenced them, but because their very existence supports the claims the citing authors are trying to make.

“We started to wonder: If we look at any given paper, which of those citations are the ones that denote influence? And which of those are citations that denote rhetorical efficiency?” Duede says.

The team turned its attention to research paper search engines, which rank results by relevance and prominence: a search returns papers relevant to the query, often ranked by the number of times they’ve been cited. For their study, the researchers hid the citation counts from one group of participants while showing them to another.
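
To make that ranking mechanism concrete, here is a minimal sketch of how a results page might order papers when citation counts feed into the score. The scoring formula, weights, and field names below are assumptions for illustration only; real engines such as Google Scholar do not publish their ranking functions.

```python
# Hypothetical sketch: rank search results by relevance, weighted by
# citation count. Illustrative only, not any real engine's algorithm.
import math
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    relevance: float  # assumed query-match score in [0, 1]; hypothetical
    citations: int

def rank(papers: list[Paper]) -> list[Paper]:
    # Weight relevance by log-damped citations so a citation "hit"
    # boosts a paper without completely swamping topical relevance.
    return sorted(
        papers,
        key=lambda p: p.relevance * (1.0 + math.log1p(p.citations)),
        reverse=True,
    )

results = rank([
    Paper("Obscure but on-topic", relevance=0.9, citations=4),
    Paper("Famous and on-topic", relevance=0.8, citations=900),
])
for p in results:
    print(p.title)  # the highly cited paper ranks first
```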

They found that individuals’ evaluations of the quality of the most-cited papers were unchanged by learning how many times those papers had been cited. More famous, highly cited papers were also cited less superficially than obscure ones. The team was able to show that high citation counts make papers appear to be of relatively higher quality, leading researchers to read those papers more closely.

However, readers’ evaluations of quality dropped for every paper below the 75th percentile of citation counts. When readers didn’t know how many times a paper had been cited, they rated it significantly higher; learning the citation count hurt the bottom 75% of all papers.

“This means that people don't read them carefully,” Duede says of the less cited papers. “They just kind of skimmed them. What that ultimately results in is that those papers don't have a chance to influence their readers. Because the readers have already taken on board this negative evaluation of the quality of that paper, they don't devote much of their own intellectual resources and time to them.”

Co-author Misha Teplitskiy of the University of Michigan explains that the research showed the effects of citation counts on the ability of research papers to influence readers.
 

“I think one explanation our study provides is that while the distribution of paper quality may be relatively normal, the distribution of influence, which is a function of quality as well as search, reading, and citing practices, is more skewed,” he says. “Specifically, the highest-status works, in our case citation hits, really do exert a disproportionate, skewed amount of influence. So, from the perspective of influence, it does make sense to place a lot of weight on ‘hits.’ From the perspective of quality, it makes a lot less sense.”

Duede suggests there are two major implications to the findings. First, revealing a paper’s citation count significantly biases the would-be reader. Second, there are significant trade-offs inherent in the metrics currently used to evaluate the influence and quality of scientific researchers. Duede is referring to the h-index, a metric for evaluating the cumulative impact of an author’s scholarly output and performance: an author has an h-index of h when h of their papers have each been cited at least h times.

“The higher the h-index, the more influential that person is thought to be,” he says. “The possible problem with this is that the h-index might not be actually capturing how influential that person's work is, because we found that more than half of all citations denote little to no influence on their readers.”
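
As a concrete illustration of the metric, here is a minimal sketch of the h-index computation; the citation counts in the example are hypothetical:

```python
# Minimal sketch of the h-index: an author has index h if h of their
# papers have each been cited at least h times. The citation counts
# below are hypothetical, for illustration only.

def h_index(citations: list[int]) -> int:
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper has at least rank citations
        else:
            break
    return h

# Five papers: three have been cited at least 3 times each, but there
# are not four papers with 4 or more citations, so the h-index is 3.
print(h_index([25, 8, 5, 3, 0]))  # prints 3
```

By construction, every citation counts equally toward the index; if, as the study finds, more than half of citations denote little real influence, the h-index tracks visibility more than influence.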

How these findings can be used to change the fate of research papers depends on either individual or collective action. On a case-by-case basis, authors could do more to gain readers up front, before citation counts set in as a proxy for quality: before or soon after their papers are published, they can draw eyes to their work by promoting it on social media, giving media interviews, or sending out press releases.

The collective alternative is to move away from citation counts, or relative citation counts, as a metric for the influence and quality of the work or the researcher. “For the vast majority of papers, those papers are actually much better than people think they are,” Duede says. “A lot of scholarly work is being unevenly evaluated solely on this proxy, which is citation.”