August 15, 2016
When Science Gets Political—Part 1
By Michael D. Shaw
Science matters to any health-oriented website, because the largest component of government and privately financed scientific research is—or at least purports to be—involved with health. Figures from a few years ago peg annual US spending (private and public sectors) on research and development at nearly $500 billion. As such, any time a service provider (scientist in this case) and lots of money are present, politics is sure to also be in the mix.
While scientists have struggled to get funding since the Middle Ages, the explosion of technology in the past 50 years, along with the proliferation of large “research universities,” has changed science from a search for truth into a search for money. But, it gets worse. Consider that you have in place an enterprise worth half a trillion dollars that is almost completely devoid of meaningful outcomes evaluation. In fact, in many cases, the most positive outcome—and an endpoint in itself—is that a paper is accepted and published in a scientific journal.
Back in 2005, John Ioannidis, a professor of medicine at Stanford, published a paper entitled “Why Most Published Research Findings Are False.” Subsequent efforts to reproduce previously published results of hundreds of experiments, including a widely publicized paper entitled “Estimating the Reproducibility of Psychological Science,” cast huge doubts on a large number of studies. No doubt, the implications go far beyond the field of psychology. Care to guess how much money was spent on these failed research efforts?
Ironically, most attacks on junk science are directed at the experimental design, the transparent data dredging in search of some result, or the use of sketchy statistics to show a “link” between some cause and supposed effect. Hardly ever will you see arguments challenging other researchers in a particular field to duplicate such results—which has always been the gold standard for scientific truth.
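To see why data dredging so reliably produces a “link,” consider a minimal sketch (purely illustrative, not drawn from any actual study): generate one outcome variable and twenty candidate “exposures,” all pure random noise, and then report only the strongest correlation found. With enough comparisons, something will always look suggestive.

```python
import random

random.seed(0)

N = 50       # hypothetical number of subjects
TESTS = 20   # number of unrelated "exposures" dredged through


def corr(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


# One "outcome" and TESTS "exposures", all independent Gaussian noise.
outcome = [random.gauss(0, 1) for _ in range(N)]
best = max(
    abs(corr([random.gauss(0, 1) for _ in range(N)], outcome))
    for _ in range(TESTS)
)

# Report only the winner -- exactly what a dredged study does.
print(f"strongest 'link' found in pure noise: r = {best:.2f}")
```

The point is not the specific numbers, which vary with the random seed, but the procedure: if you test enough hypotheses and report only the best one, a nonzero correlation is guaranteed, and no replication by an independent group would ever confirm it.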
In one memorable case, I was able to personally confront (via telephone) the lead researcher on an especially ridiculous and politically motivated “study” that attempted to link levels of phthalates (eeee-ville chemicals) in pregnant women to certain neonatal intelligence performance tests. This work was published in a respected journal, and received wide press coverage. Here’s a short list of what was wrong…
1. Even if you were willing to accept the absurd premise, the results were wildly inconsistent, and in some cases showed better performance in babies from mothers with higher phthalate levels. As such, no meaningful trend was demonstrated at all. Moreover, there was precious little difference between “high” and “low” phthalate levels.
2. The researcher admitted to classic “dry-labbing.” No original work was done whatsoever. She was able to obtain urine test data on a few dozen women, along with neonatal records—which would only have been possible since this research was done at a large teaching hospital. Is it “science” to compare two columns of data which already exist, and then get crummy results?
3. The demographics of the women involved, and a bit of reading between the lines in the published paper, implied that there were plenty of confounding factors—including illegal drug use during pregnancy by many of the participants. I mean, if you’re looking at neonatal cognitive function, why not account for such drug use? Far better to jump on the chemophobia bandwagon and assault phthalates.
4. There was nothing at all remarkable about the phthalate levels in these women. Indeed, virtually all American women show levels in the range measured. Thus, there was nothing special about this cohort, other than the fact that free and easy data was readily available! Never mind the tone-deaf anti-scientific bias: examining how illegal drug use during pregnancy affected these same outcomes would have made for a much more interesting study, but that question was ignored.
I challenged the lead author on these and other points, and she had no argument against them. But, then she played her trump card: “It got published, didn’t it?”
And so it did. She got a grant to tout a preconceived, politically motivated chemophobic finding, and did it in a most slothful manner. Admittedly, she was enterprising in determining the political bent of the granting agency. Sadly, this is but one terrible study among thousands.