May 20, 2019
Why Junk Science Continues To Proliferate
By Michael D. Shaw
Last week’s article showed what can happen when the mantle of science is hijacked by fanatics determined to enforce their agenda, one way or the other. Despite favorable regulatory assessments from every relevant government agency around the world, once the highly criticized IARC Group 2A classification came out in 2015, the most suable of glyphosate’s makers found a huge target on its back. And we keep hearing that “scientific consensus” is so important.
Last year, Alex Berezow of the American Council of Science and Health wrote a fine piece on this matter that includes this wonderful pull quote:
“The reason that legal arguments consistently trump scientific evidence is because we live in a thoroughly postmodern world. Logic and data have been replaced by emotion and virtue signaling. When a culture believes that truth is simply a matter of opinion, science is among the first casualties. America will not remain #1 in the world for scientific research if we continue to allow lawyers to bleed companies dry over crimes they never committed.”
While the above statement is surely true, the legal system is only part of the problem. There is also the matter of way too much “science” being in reality nothing more than garbage—junk science, if you will.
Berezow’s article came out six months before this paper, which claimed a significant association between glyphosate and non-Hodgkin’s lymphoma. As noted in last week’s column, this work has been attacked on many fronts for its misuse of statistical techniques and its cherry-picking of data. Yet it was published in a peer-reviewed journal. The editorial board was certainly smart enough to perceive these massive flaws, but somehow the paper got through.
So, what is behind the continuing proliferation of junk science?
1. Peer review isn’t what it used to be. Back in the day, peer review was regarded as the gold standard of scientific publication, but that was before bias in science became rampant. Nowadays, your peers are likely to have an agenda, and if it agrees with yours, your paper, with all its flaws, will be published. Yes, bias has always existed in science, but it seldom affected editorial review to the extent seen today.
2. Journals love to publish sexy, breakthrough, headline-grabbing results. No doubt, that’s why the Zhang paper cited above (glyphosate/non-Hodgkin’s lymphoma) was released. That, and the likelihood that the publication’s editors were on board with Dr. Zhang’s biases. Similarly, the popular media will promote and sometimes exaggerate such findings, in its quest for sensationalism.
3. Lack of reproducible results. A fundamental precept of all experimental science is the importance of another researcher reproducing initially published results. These days, though, that principle has largely disappeared. After all, what’s “sexy” or “breakthrough” about merely duplicating the work of someone else?
4. Conflating correlation and causation. How often have you seen studies in which some chemical agent or dietary item is “linked” to a dire outcome? But this can be very misleading. Certainly, underwear is linked to auto accidents, since virtually all accident victims are wearing this garment during the incident. However, no one has yet produced data on a control group that was “going commando.” Moreover, real science requires a hypothesis to accompany your observation. What mechanism is proposed whereby donning underwear while driving would cause a car accident?
In addition, there is no formal definition of “linked.” Statistical significance is used to reject, or fail to reject, the null hypothesis, which holds that there is no relationship between the measured variables. A result is conventionally deemed statistically significant if the p-value, which is the probability of obtaining data at least this extreme if the null hypothesis were true, is less than 1/20, that is, p < 0.05. When the p-value is at or above this threshold, the null hypothesis is not rejected (it is never “accepted,” strictly speaking); when the p-value falls below it, the null hypothesis is rejected.
As such, virtually all studies that propose a “link” tout results in which p is less than 0.05. Nonetheless, statisticians have been saying for years that this parameter is improperly used. A better response to p < 0.05 would be to treat it as encouragement to repeat the experiment, not as proof of a link.
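To see why a lone p < 0.05 result is weak evidence, consider a quick simulation (my own illustrative sketch, not from any of the studies discussed here): run many “studies” comparing two groups drawn from the *same* population, so there is no real effect at all. Roughly 5% of them will still clear the significance bar by chance alone.

```python
import random
import statistics

random.seed(1)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Simulate 2,000 null studies: both groups come from the same
# normal distribution, so any "significant" result is a false positive.
n_studies, n = 2000, 30
t_crit = 2.0  # |t| > ~2 approximates two-sided p < 0.05 at this sample size

false_positives = 0
for _ in range(n_studies):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]  # same population: no real effect
    if abs(welch_t(a, b)) > t_crit:
        false_positives += 1

print(f"{false_positives / n_studies:.1%} of null studies looked 'significant'")
```

The point is the one the statisticians keep making: if enough groups go fishing for links, some will find “significant” ones in pure noise, which is exactly why replication, not a single small p-value, should be the standard of evidence.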
5. The ever-increasing number of journals. Usually, the best studies get into the best journals, so that leaves plenty of room for other outlets, which may have to lower their standards.
As always, there is a market for cheap imitations, and that’s why we have junk science. If anything, the demand is growing.