The Science We Do Is Becoming More Unreliable, Study Warns


The institutional and commercial pressure on scientists and researchers to publish more and more papers in top journals has been eroding the credibility of science, according to concerned members of the scientific community. A new study has now demonstrated that deterioration directly.

Researchers in the US wanted to simulate what happens when scientists compete for professional prestige and funding, so they built a computer model of scientists under immense pressure to publish sensational results.

Developed by researchers at the University of California, the model was populated with simulated lab groups that were honest: they did not cheat or intentionally manipulate their results. Like real labs, however, they received bigger rewards for publishing "novel" findings. The simulated groups could also choose to spend more effort on conducting their procedures rigorously, which improved the quality of their studies but reduced the quantity of their output.

The outcome, described by lead researcher Paul Smaldino on the website The Conversation, was that over time effort dropped to its minimum while false discoveries increased exponentially.

Additionally, the model suggested that scientists who take shortcuts to win these incentives are likely to pass the same working habits on to the next generation of scientists trained in their labs. This creates the effect the study is named after: "the natural selection of bad science."
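The dynamic described above can be illustrated with a toy simulation. This is not Smaldino and colleagues' actual model; every parameter and mechanism below is an illustrative assumption. Each lab has only an "effort" level: high effort means fewer hypotheses tested per cycle but fewer false positives. Labs that publish the most "hits" survive and pass their effort level (with small mutation) to the next generation.

```python
import random

random.seed(42)

# Illustrative assumptions, not the published model's parameters.
N_LABS = 100
GENERATIONS = 200
BASE_HYPOTHESES = 10      # hypotheses a maximally hurried lab tests per cycle
TRUE_EFFECT_RATE = 0.1    # fraction of hypotheses that are actually true

def run_cycle(effort):
    """Return (publishable_hits, false_hits) for one lab in one cycle."""
    n_tested = max(1, round(BASE_HYPOTHESES * (1.5 - effort)))  # rigor costs output
    false_positive_rate = 0.05 + 0.4 * (1 - effort)             # sloppiness inflates FPR
    hits = n_false = 0
    for _ in range(n_tested):
        if random.random() < TRUE_EFFECT_RATE:       # a real effect, detected
            hits += 1
        elif random.random() < false_positive_rate:  # a spurious "novel" finding
            hits += 1
            n_false += 1
    return hits, n_false

labs = [random.uniform(0.1, 1.0) for _ in range(N_LABS)]
for gen in range(GENERATIONS):
    scores = []
    total_hits = total_false = 0
    for effort in labs:
        hits, n_false = run_cycle(effort)
        scores.append((hits, effort))
        total_hits += hits
        total_false += n_false
    # Selection: the most-published labs "reproduce" into the next generation
    scores.sort(reverse=True)
    survivors = [e for _, e in scores[: N_LABS // 2]]
    # Offspring inherit the parent lab's effort level with small mutation
    labs = survivors + [
        min(1.0, max(0.1, e + random.gauss(0, 0.02))) for e in survivors
    ]

mean_effort = sum(labs) / len(labs)
fdr = total_false / max(1, total_hits)
print(f"mean effort after selection: {mean_effort:.2f}")
print(f"false-discovery share in final generation: {fdr:.2f}")
```

No individual lab ever cheats here; selection on publication count alone is enough to drive effort down and the false-discovery share up, which is the paper's central point.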

Speaking to The Guardian, Smaldino pointed out that as long as publishing surprising results in high-profile journals is rewarded more highly than the subtle, nuanced findings most studies actually produce, dodgy methods for maximizing such results will remain widespread.

This is not the first time the problem has been raised, but it is probably the first time the numbers have been run through a computer simulation in this way.

A related and worrying phenomenon is the reproducibility crisis: a growing body of weak findings that are hard to reproduce but gain visibility because of their shocking or novel nature. Such studies pollute the scientific record, yet they grab the attention of reporters and the media.
Journals and the mainstream media prefer such studies for their shock factor and novelty, but they carry a real risk of damaging the integrity of science, particularly given the pressure scientists feel to spin their papers to make that kind of impression. The cycle is self-reinforcing: when attention-grabbing studies are published, they attract more grants and funding from institutions, which lets the same researchers conduct more of the same research.

Crucially, this drift toward substandard science requires no deliberate planning, dishonesty, or deception by individual scientists, which is exactly what makes it so easy to take hold. That is not to say all scientists and researchers have abandoned rigorous methods and scientific validity, but if institutions keep rewarding shock value over in-depth results, "bad science" will proliferate unchecked.

The problem is compounded by other quantitative metrics used to gauge the importance of papers and their authors. One such measure is the p-value, which can be exploited and misinterpreted, creating all kinds of false impressions and harming science.
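One well-known way the p-value can mislead is multiple testing: a study that measures many outcomes and reports only the "significant" one will find something at p < 0.05 most of the time, even when no real effect exists. The sketch below illustrates this with a standard statistical fact: under a true null hypothesis, a well-calibrated p-value is uniformly distributed on [0, 1]. The study counts and test counts are illustrative assumptions.

```python
import random

random.seed(0)

ALPHA = 0.05
TESTS_PER_STUDY = 20   # e.g. 20 outcome measures, subgroups, or model variants
N_STUDIES = 10_000

# Under a true null, a p-value is uniform on [0, 1], so we can simulate
# each test by drawing a uniform random number.
significant_somewhere = 0
for _ in range(N_STUDIES):
    p_values = [random.random() for _ in range(TESTS_PER_STUDY)]
    if min(p_values) < ALPHA:   # report only the "best" result
        significant_somewhere += 1

observed = significant_somewhere / N_STUDIES
expected = 1 - (1 - ALPHA) ** TESTS_PER_STUDY
print(f"studies with at least one p < {ALPHA}: {observed:.2f} "
      f"(theory: {expected:.2f})")
```

With 20 tests per study, roughly 64% of purely null studies still yield at least one "significant" finding, which is why unreported multiple testing so reliably produces eye-catching but irreproducible results.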

Rating scientists on their output figures, as if they were salespeople, is the wrong approach if we do not want them chasing targets the way salespeople do. The hard but necessary solution must come at the institutional level: a shift away from judging the quantity of a scientist's work and toward practices that reward its quality.

In their paper, the researchers emphasized that the long-term cost of such simplistic quantitative metrics will be hefty. To ensure that the science we do today is both reproducible and meaningful, institutions must encourage and reward that kind of work.