Mark Hyman deceives about “science research deception”

One of the fallacious arguments most favored by pseudoscientists and denialists of science is the ever-infamous “science was wrong before” gambit, wherein it is argued that, because science is not perfect or because scientists are not perfect, science is not to be trusted. We’ve seen it many times before. Indeed, we saw it just yesterday, when promoters of quackery and anti-vaccine cranks leapt all over the revelation that American scientists had intentionally infected Guatemalan prisoners with syphilis without their consent as part of an experiment in the 1940s. They didn’t attack the story because it was an inexcusable and horrific violation of human rights; rather, they attacked it because they thought that they could use the story to discredit science-based medicine (SBM) and that, if they could discredit SBM, it would somehow constitute an argument that their pseudoscience and quackery are valid. Another example is Robert F. Kennedy, Jr.’s salivating over the Poul Thorsen scandal, even though Thorsen was a relatively minor figure in the Danish studies that demonstrated that thimerosal in vaccines is not associated with autism.

Over the years, I’ve vacillated between two views of this sort of behavior: either it is outright deceptive and the pseudoscientists using it know exactly what they’re doing, or they genuinely believe what they’re saying. In the latter case, the pseudoscience supporter or denialist seems to subscribe to a serious case of black-and-white thinking, to the point where, if something is not perfect, then, as the sketch goes, it’s crap. There’s even a name for this fallacy: the Nirvana fallacy, or the fallacy of the perfect solution. Of course, there is inevitably a huge amount of selectivity in the application of the Nirvana fallacy. If it’s science-based medicine and imperfect, it’s crap, but if it’s “alternative medicine,” seemingly any flaw is excusable.

A perfect example of this is a post on that wretched hive of pseudoscience and quackery, The Huffington Post, by the father of that other wretched hive of pseudoscience and quackery, so-called “functional medicine.” Yes, I’m referring to Dr. Mark Hyman of Ultrawellness, who graced both HuffPo and his own blog with Science for Sale: Protect Yourself From Medical Research Deception (the version on Dr. Hyman’s own blog is here). You’ll see the Nirvana fallacy combined with a heapin’ helpin’ of pseudoscience and logical fallacies on display. Like a typical Mike Adams screed, it begins with a study that finds fault with evidence-based medicine (EBM):

A recent study in the Journal of the American Medical Association found over 40 percent of the best designed, peer-reviewed scientific papers published in the world’s top medical journals misrepresented the actual findings of the research.(i) The “spin doctors” writing the papers found a way to show treatments worked, when in fact, they didn’t.

Doctors and health care consumers rely on published scientific studies to guide their decisions about which treatments work and which don’t. We expect academic medical researchers to determine what needs to be studied, and to objectively report their data. We rely on government regulators to prevent harmful medications from being approved, or to quickly remove harmful medications or treatments from the market.

The study to which Hyman refers did appear in JAMA in May. Written by Isabelle Boutron, MD, PhD of the Centre d’Épidémiologie Clinique, Hôpital Hôtel Dieu in Paris and her team, and entitled Reporting and Interpretation of Randomized Controlled Trials With Statistically Nonsignificant Results for Primary Outcomes, the study analyzed randomized clinical trials reported in December 2006 that failed to find a statistically significant difference in their primary outcome, looking for what the authors defined as “spin.” The primary outcome is the main outcome for which the treatment is being tested for an effect, such as death from cancer, lowering of blood pressure, etc.

They looked at different strategies for spinning results when no significant difference was found in the primary outcome of a study, which they divided into the following three main categories: “(1) a focus on statistically significant results (within-group comparison, secondary outcomes, subgroup analyses, modified population of analyses); (2) interpreting statistically nonsignificant results for the primary outcomes as showing treatment equivalence or comparable effectiveness; and (3) claiming or emphasizing the beneficial effect of the treatment despite statistically nonsignificant results.” They also attempted to quantify the extent of spin. As Hyman points out, Boutron et al did indeed find that approximately 40% of the 72 articles examined contained spin of one of these types in two or more sections of their text.

Let me make one thing clear: I am not condoning or defending spin. However, much of it is simple human nature. Think about it this way. You’ve just spent years doing a study (and most studies do take at least a couple of years; some as many as ten). The results didn’t turn out resoundingly positive, or maybe they didn’t turn out positive at all. It’s human nature to want to salvage something out of all that work. Personally, I view “spin strategy #1” as, in essence, an attempt by scientists to salvage something useful out of a trial. Within-group comparisons, secondary outcome analyses, and various other “data-mining” techniques are not per se bad science, and although I will agree that presenting secondary results as though they mean the trial was positive (or at least more positive than it was) can constitute deceptive spin, I won’t necessarily accept that such presentations are always intentional or deceptive. When I see this sort of thing, though, I do wonder where the reviewers were. These are exactly the sorts of techniques that, whether the spin is intentional or not, reviewers are supposed to slap down.
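To see how easily that kind of salvage operation can mislead, here’s a toy simulation (mine, not anything from Boutron et al): if a treatment does nothing at all, testing a single pre-specified primary outcome at the usual threshold produces a false positive about 5% of the time, but also testing ten secondary outcomes turns up at least one “significant” result in well over 40% of trials, purely by chance.

```python
# Toy simulation: a completely ineffective treatment, one primary outcome,
# and ten secondary outcomes per trial. How often does *something* come up
# "statistically significant" at p < 0.05? (All numbers here are invented
# for illustration; this is not a reanalysis of any real trial.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 2000      # simulated trials
n_secondary = 10     # secondary outcomes measured per trial
n_patients = 100     # patients per arm

primary_hits = 0
any_hit = 0
for _ in range(n_trials):
    p_values = []
    for _ in range(1 + n_secondary):                 # primary + secondaries
        treated = rng.normal(0.0, 1.0, n_patients)   # no true effect
        control = rng.normal(0.0, 1.0, n_patients)
        p_values.append(stats.ttest_ind(treated, control).pvalue)
    primary_hits += p_values[0] < 0.05
    any_hit += min(p_values) < 0.05

print(f"False-positive primary outcomes: {primary_hits / n_trials:.0%}")  # ~5%
print(f"Trials with *some* p < 0.05:     {any_hit / n_trials:.0%}")       # ~40-45%
```

That’s the statistical reason reviewers are supposed to insist that secondary “hits” be labeled as hypothesis-generating rather than dressed up as the main result.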

You’ll see where I’m going with this after you read what Hyman writes next:

What most physicians and consumers don’t recognize is that science is now for sale; published data often misrepresents the truth, academic medical research has become corrupted by pharmaceutical money and special interests, and government regulators more often protect industry than the public. Increasingly, academic medical researchers are for hire, and research, once a pure activity of inquiry, is now a tool for promoting products.

While it’s hard to deny that there is undue pharmaceutical company influence in the medical literature (I’ve written about the issue on numerous occasions on this very blog), there’s just one problem.

Boutron et al is not evidence of undue pharmaceutical company influence. The article doesn’t even examine the issue. In fact, I find it rather ironic that Boutron et al write:

Our results are consistent with those of other related studies showing a positive relation between financial ties and favorable conclusions stated in trial reports. Other studies assessed discrepancies between results and their interpretation in the Conclusions sections.10, 26 Yank and colleagues10 found that for-profit funding of meta-analyses was associated with favorable conclusions but not favorable results. Other studies have shown that the Discussion sections of articles often lacked a discussion of limitations.27

Clearly, Boutron et al want you to believe that their results indicate undue pharma influence resulting in more spinning of negative studies. They don’t come right out and say so directly, though. They’re too clever for that. In fact, they didn’t even report an analysis correlating funding source with spin or with claiming results that the data do not support. Indeed, in a letter to the editor, David B. Allison and Mark Cope call Boutron et al out for making this statement thusly:

Although this implies an analysis of the association between source of funding and reporting, in particular on the use of spin, such an analysis was not included in the article. The authors noted in the “Methods” section that they assessed source of funding. It would therefore be helpful if the authors could examine this relationship.

Not surprisingly, Boutron et al’s response indicates that there was no statistically significant relationship between the reported funding source and the amount of “spin” in the articles, which would tend to support my argument above that simple human nature is a major contributor to attempts to “spin” data. I would also argue that the pressure to publish “positive” results plays a role. Certainly, industry influence could also be a factor in how scientific results are reported or misrepresented, but the paper presented by Mark Hyman as slam-dunk evidence of the perfidy of big pharma is nothing of the sort. Indeed, it is not evidence to support his contention at all. In fact, I find it rather odd that Boutron et al conveniently left out of their manuscript the analysis that found no relationship between industry funding and level of spin in the articles they examined. I daresay they were very disappointed by that result. I might even speculate that their disappointment might have led them to leave it out.

What a very human thing to do.

This wouldn’t have been a big deal to me except for one thing. Boutron et al tried to have their cake and eat it too by not reporting their analysis showing a lack of correlation between industry funding and level of spin in the articles they studied while still rather cleverly implying that their results supported a relationship between industry funding and spin. In fact, one might say that Boutron et al are guilty of the very sort of spin they claim to have found in other articles. Is that evidence that Boutron et al have been influenced by their funding source? In any case, Boutron et al were forced to admit, “The statement in our “Comment” section that was noted by Allison and Cope was too strong. Because of small numbers and missing data, we cannot draw any clear conclusion on the relation between funding source and the presence of spin.”

Ouch. That’s going to leave a mark. It’s also going to leave a mark on Mark Hyman’s reliance on this study to support his argument. He even misrepresented the study by claiming that “the authors of this report did not just read the abstracts and conclusions of the studies they reviewed, but had independently analyzed the raw data.” I don’t know if we read the same paper, but I couldn’t find anywhere in the methods a description of Boutron et al obtaining and analyzing the raw data. Then Hyman goes on to claim that Boutron et al supports his contention that this “spinning” of negative research results is a direct result of malign pharma influence when this particular study clearly doesn’t support such an assertion.
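As an aside, the analysis Allison and Cope asked for is straightforward to run. The sketch below uses invented counts (the real numbers are Boutron et al’s to report) purely to illustrate why, with only a few dozen trials and missing funding information, a two-by-two test of funding source versus spin has little power to show anything.

```python
# Hypothetical illustration of a funding-source-versus-spin analysis.
# The counts below are invented for demonstration; they are NOT the
# data from Boutron et al.
from scipy.stats import fisher_exact

#                      spin   no spin
industry_funded     = [14,    6]      # 70% with spin
non_industry_funded = [16,    16]     # 50% with spin

odds_ratio, p_value = fisher_exact([industry_funded, non_industry_funded])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# Even with a 20-percentage-point difference in spin rates, the p-value
# comes out well above 0.05 at this sample size, so no conclusion can be
# drawn either way.
```

Fisher’s exact test is the natural choice here precisely because the cell counts are so small, which is the whole point of the authors’ concession about “small numbers and missing data.”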

Of course, Hyman doesn’t just rely on this study. He trots out the same old tropes, basically napalming burning-man-sized straw men into ash by claiming that science is an “objective endeavor that removes bias and is inherently true and reliable.” No, science is not “inherently true and reliable,” nor is it necessarily always objective. (Look for quacks to quote mine that sentence.) Rather, science is a method that seeks to minimize bias and the effects of normal human cognitive oddities that lead to incorrect conclusions. Indeed, much of what is published in the scientific literature turns out to be incorrect; that is not a flaw in science but a consequence of scientists publishing their observations, which don’t always end up standing up to scrutiny. Ultimately, science is messy as hell, with conflicting results that may take years or even decades to resolve. Resolve they ultimately do, however. As messy as it is, science works, although its very messiness can be confusing to lay people and provides an opening for ideologues like Hyman to take advantage of how confusing scientific results can look.

It’s particularly amusing to me to see Hyman harping on authors of scientific papers misrepresenting their results, because Hyman’s own history of misrepresenting and twisting science to support his pseudoscience is truly prodigious. Amusingly, the Hyman article that immediately preceded his attack on evidence-based medicine was an outrageously pseudoscience-laden piece entitled 5 Steps to Kill Hidden Bugs in Your Gut That Make You Sick, which completely misrepresented research on gut flora and disease. In it, Hyman buys into the idea that “toxic byproducts” of gut flora can make you sick, and he marshals and tortures a variety of studies to try to prove his point. Indeed, my only regret is that I didn’t devote an entire heapin’ helpin’ of not-so-Respectful Insolence to that article.

It’s on HuffPo, of course.

Perhaps the most amusing part of Hyman’s article is how he concludes with recommendations. Coming from him, they are truly howlers. Of the seven, a couple stand out as particularly amusing, so much so that they fried yet another one of my irony meters. For example:

2. Do your homework: Be suspicious of media reports of scientific findings. Does the finding make sense in the context of other studies and is it the best possible approach. Educate yourself by learning to use PUBMED (the National Library of Medicine) and reviewing different perspectives.

The wag in me can’t resist pointing out that Dr. Hyman should take his own advice. His pathetically inept analysis of Boutron et al is evidence that he has no clue how to analyze the scientific literature. Rather, he tortures it until it supports his pseudoscience.
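That said, the advice to learn PubMed is actually sound. For readers who want to follow it, here’s a minimal sketch (mine, not Hyman’s) of searching PubMed programmatically through NCBI’s public E-utilities interface; the query term is just an example.

```python
# Minimal PubMed search via NCBI E-utilities (esearch). The query term is
# an arbitrary example chosen for this post, not anything Hyman suggested.
import json
import urllib.parse
import urllib.request

term = '"randomized controlled trial"[pt] AND "reporting bias"'
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                 "retmode": "json", "retmax": 5}))

with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print("Matching articles:", result["count"])
print("First few PubMed IDs:", result["idlist"])
```

Of course, finding papers is the easy part; reading them competently, as Hyman’s handling of Boutron et al demonstrates, is quite another.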

Does it pass the “sniff test”: Is the treatment suggested just a “me too” drug that has not been proven to be any better than existing treatments? Does it make sense to you or does something smell rotten? Trust your intuition.

This is particularly hilarious because “intuition” matters little in science. The “intuition” that scientists develop to detect studies that don’t seem convincing comes not from any sort of “common sense” but from having a deep knowledge of the scientific literature. This is where the weakness in “Google University” knowledge is most frequently laid bare.

The bottom line is that Dr. Hyman is taking advantage of known shortcomings in how science is conducted and the messiness of its process in order to sow fear and doubt in a classic denialist fashion. He’s building huge straw men about science and then blasting them with flamethrowers of burning stupid in the form of the Nirvana fallacy. (Nirvana flames of burning stupid? I like it.) Hyman may start out with a legitimate criticism of how medical science is done in 2010, but, like Mike Adams, he can’t resist going far beyond that into the stratosphere of crankery, all in the name of supporting the quackery he happens to like.