How Junk Citations Have Discredited the Academy: Part 2

Editor’s Note: This piece is part of an ongoing series of articles by Professor Bruce Gilley. To read the other articles in the series, click here.


In 1980, two doctors from the Boston University Medical Center published a five-sentence letter in the New England Journal of Medicine noting that only four of their 11,882 patients prescribed opioids developed an addiction. They concluded that “the development of addiction is rare” when opioids are prescribed. They provided no evidence and did not caution that their letter referred only to in-hospital use rather than outpatient prescriptions.

From then until 2017, the letter was cited 439 times in scientific literature to support the claim that addiction was rare in patients given opioids, according to a 2017 analysis by University of Toronto scholar Pamela Leung and colleagues. This one junk citation, they believe, was a contributing factor to the opioid epidemic in the United States. “The crisis arose in part because physicians were told that the risk of addiction was low when opioids were prescribed for chronic pain,” they wrote. The letter “was widely invoked in support of this claim.”

While not usually so deadly, junk citations are widespread in academic research. The term refers to a citation that is wrong, irrelevant, misleading, corrupt, uninformative, useless, or purely rhetorical. At root, junk citations have become a way to signal the scientific basis of a claim without actually explaining what that basis is, a lazy shortcut that has turned millions of academic minds to mush. In the words of Ole Bjørn Rekdal, a Norwegian anthropologist who has written extensively on junk citations, “At times, I get the feeling that references have been placed in quantities and with a degree of precision reminiscent of last minute oregano flakes being sprinkled over a pizza on the way to the oven.”

Such straightforward citational errors are the core of the problem, which I discussed last year in a talk that can be viewed online. The problem existed long before the Internet and citation software, but those tools have made it exponentially worse. Today, much credentialed research is built upon a cannonade of junk citations, which forces the reader to submit in the face of an overwhelming bombardment. As the scientific foundations of published research have been weakened by junk citations, the undisciplined ideologies and agendas of researchers have been given freer rein.

In the first part of this series, I discussed a citation in a grant application that the applicant had not read, had not understood, or had wanted to pass off as scientific despite its obvious flaws. Such a failure to tell the reader anything about a cited work is the cardinal sin on which many junk citations rely. It violates the fundamental rule, laid down by Wayne Booth and colleagues in their widely used The Craft of Research, that any time we cite the work of others, we must tell the reader what research they did, what their findings were, and what, if anything, limits these findings’ relevance to the current argument. “Don’t accept a claim just because an authority asserts it,” they warned (on page 87 of the third edition, published in 2008).

[Related: “How Junk Citations Have Discredited the Academy: Part I”]

There is nothing wrong with providing the reader with a set of “further readings” or “related research.” But if citations are deployed as part of an argument, including an argument about what other research has been done on a topic, then they cannot be on a “Just trust me” basis. Reader beware.

Dozens of studies have shown that many citations contained in academic research are, to varying degrees, flawed. This could be as simple as getting the page number wrong. But more often, it involves misrepresenting or misunderstanding the source’s claims. In a 2017 exposé of bad citations in 472 articles in three peer-reviewed library science journals, Wilfrid Laurier University librarians Peter Genzinger and Deborah Wills found that 30% of the citations misrepresented the cited work, either wholly or in part.

Junk citations weaken not only evidence but also objectivity. They provide a bully pulpit from which scholars can assert that their claims are “well-documented” or “the overwhelming consensus” of the field. Critics of my 2017 article “The Case for Colonialism,” such as Ohio University’s Brandon Kendhammer, used fusillades of junk citations that, I showed elsewhere, were almost all incorrect. Kendhammer’s junk citation–based article is now cited by other ideologues. In their 2022 article demanding reparations from the West, Elise Klein and Elizaveta Fouksman cite it as “a good overview” of the “extensive” evidence of “the brutal legacy of colonialism.” Junk food leads to obesity, and junk citations lead to academic propaganda.

The truth is, we are all junkies. But there is hope. I recently published my first academic article that does not contain a single junk citation. The experience was liberating. I did not feel the need to take a shower afterward because my co-author and I knew that we had read and explained every source we cited and had used them in a way consistent with the findings and duly respectful of the authors. Rather than treating the work of others as doormats that we trampled upon while rushing to assert our own “contribution,” we carefully read and described it, putting our own contribution in a more modest light.

In future installments, I will show how junk citations have taken on darker purposes.



Author

  • Bruce Gilley

    Bruce Gilley is a professor of political science at Portland State University and a member of the board of the National Association of Scholars. In addition to his work on academic freedom and the revival of intellectual pluralism on campus, Dr. Gilley’s research centers on comparative development and politics as well as contemporary public policy issues.

4 thoughts on “How Junk Citations Have Discredited the Academy: Part 2”

  1. Because no study – even a randomized controlled trial – can ever truly prove something, discussing the state of the field is valuable: it sets the context of the study and helps the reader understand how the results contribute to an *inductive* evidence base built on a body of work. Ideally, the author SHOULD include a discussion of evidence consistent with their hypothesis as well as the evidence inconsistent with it, so that the reader can judge the results in context. Unfortunately, limitations on the size (word count) of submitted articles often make it difficult or impossible to do this.

    What should NOT be done is to misrepresent the context of the cited literature. I have had this occur with my own work, particularly a paper I wrote almost a decade and a half ago on the adverse impact of political ideology on scientific ethics. Because it made reference to efforts, revealed in the Climategate e-mail releases, to block publication of findings that disagreed with the e-mail authors, it was seized upon by AGW activists as a political attack on the global warming theory, and a number of subsequent papers have cited and portrayed it as such, casting me as an industry-funded mercenary seeking to do a hack job. Ironically, that example was not at the core of the paper, which was focused on health policy and public health research, and was included ONLY because my editors wanted examples from outside the health arena – my paper made NO statement one way or the other on AGW research other than on the specific behavior of the researchers in question.

    As for the mercenary accusation, the project was unfunded, and the initial article was written in two days in my campus office after I read of a provision in an early draft of the Obamacare bill that would deny federal funding to any institution that published findings that the Agency for Healthcare Research and Quality disagreed with. That struck a chord with me, as I had TWO conference papers embargoed by a state health department that funded the project because the agency head found the results “embarrassing.”

  2. Like opioids, the internet provides an easy solution to one problem, while creating deeper ones. Undergrad students are given paper after paper to complete on the clock, with a pressing need for citations. They barely understand the material, but they know how to get quick fixes. Their school provides them with access to massive resources that they barely know how to use, so they enter some keywords into whatever database, skim for the minimum amount of information in a paper they likely don’t understand, plop the cite down, and scurry along to the next source they probably don’t understand. Lather, rinse, repeat for how many years?

    The habit is thus formed.

    Grad school doesn’t improve things much. On the professor’s end, at most stages, most don’t have time to scour every cite on every paper, and most undergrads – and especially grads – understand this on at least a subconscious level. At most, and even then rarely, a claim will be evaluated at face value, along with whether the student correctly cited the source and effectively synthesized the information into their paper. Who has the time to evaluate each source in a student paper?

    The habit is enabled.

    Peer review is supposed to mitigate this issue. But how many peer reviewers scour every citation? How many are ideologues? How many examiners truly scour the citations in theses or dissertations? What passed quality peer review in 2017 might have been called into question or even refuted by 2021.

    Modern academic culture led to this. Couple it with PhD candidates driven farther afield in search of new areas of research (or academics desperately seeking grant money), and it’s no wonder “interdisciplinary” is such a hot buzzword, all too often suffused with ideological claptrap. And here we’ve arrived at the incentives that have led to the perversion of the marketplace of ideas.

    1. And most faculty advisors demand that students cite a large number of papers in the literature to ensure they have “fully reviewed what is known.”

      Then, of course, you have faculty who make their displeasure known when students cite dissenting literature – as happened to me on a medical sociology prelim answer. I was asked to discuss the relationship between SES and health status, and in my answer I discussed not only the neo-Marxist view of a direct causal relationship but also research findings indicating that much of the observed correlation was due to confounding: social capital and socialization have a causal relationship to SES AND to health status. I passed, but the professor responsible for the question let me know she did not approve. As that sort of ideological pressure increasingly socializes students to the idea that only one perspective is acceptable, students become more likely to try to fit evidence to the acceptable model – a hazard of the pressures to support a paradigm that Kuhn describes in “The Structure of Scientific Revolutions.”
