“Publish or perish” leads to sketchy research

“Publish or perish,” or the pressure on academics to perpetually put out publishable research, has led to a number of phenomena that weaken the credibility of academic research, particularly scientific research.

One result of “publish or perish” is the predatory open-access journal. These are non-peer-reviewed journals with distinguished-sounding names that will publish anything for a fee, sometimes in exchange for exclusive rights to the article.

And when I say “anything,” I mean that the “International Journal of Advanced Computer Technology” once accepted ten pages of the sentence, “Get me off your fucking mailing list,” repeated over and over again.

It is unclear whether academics who choose to publish with such journals are actually fooled by their prestigious-sounding titles or simply desperate to stick another article on their curriculum vitae.

Academics can also cave in to the “publish or perish” system by writing their papers to maximize publication output per unit of research. While outright self-plagiarism is seldom a problem, academics can learn to slice their work into “least publishable increments” or to tack extra results onto a previously published paper.

The most frightening aspect of “publish or perish,” however, is the way in which it distorts the scientific method. Peter Higgs, the Nobel Prize-winning physicist, famously said that he did not think he could have devoted proper attention to his instrumental work in subatomic physics under the kind of pressure to publish that most modern academics face. Had he been constantly expected to turn out papers for publication, he would not have had the time to pursue a focused research agenda.

In fact, Higgs’ university was on the verge of firing him for a lack of publication output until his 1980 Nobel Prize nomination convinced them he might be worth keeping around.

Additionally, studies that show positive results (i.e., results that affirm the author’s hypothesis) are more likely to be published than studies that do not. What happens, then, when the desire to be published biases scientists towards positive results in their experiments?

When he was a fellow at the University of Edinburgh, Daniele Fanelli was happy to publish the positive result that the number of papers published in a US state correlated positively with the percentage of positive results reported by that state’s professors. More competitive states, Fanelli reasoned, placed more pressure on their professors to publish and thus caused a bias towards positive results. (Admittedly, it perplexes me in what sense professors are somehow more in competition with professors in their own state than with academia in general.)

It should go without saying that science in which the experimenter is biased towards a specific outcome is unacceptable. And negative results are valuable to future researchers, in that they mark out unfruitful avenues of investigation.

I won’t claim to have a clear solution to the problems enumerated above. In many ways, the number and profile of an academic’s publications may be the most reasonable way to quickly appraise their productivity. But these are issues we need to be aware of and work gradually to combat if the credibility of the academic enterprise is not to be gravely damaged.
