Dichotomous thinking, uncertainty, and science denial

If there’s one thing that I’ve learned about communicating science and medicine, in particular countering pseudoscience and quackery, it’s that belief in pseudoscience is very difficult to shake. Anyone who tries to counter, for example, antivaccine misinformation will soon find that simply countering misinformation with good information doesn’t work, at least not with the hard-core antivaxers. The same is true when trying to counter cancer quackery and many other forms of medical pseudoscience. One thing that certainly contributes to this resistance is the very human tendency to crave certainty and to become anxious in the face of uncertainty. A while back, I came across an article by a psychotherapist named Jeremy Shapiro—he’s at my old stomping grounds of Case Western Reserve University, too!—that delved into this question and led me to decide to write about it this week. He attributes not just pseudoscience and quackery in medicine, but also denial of climate science, evolution, and many other forms of science denial, to the same basic thinking error: dichotomous thinking.

Before I get into the article itself, I’ll note that I like to sum up dichotomous thinking this way: If we don’t know everything, we know nothing! In fact, I gave an example of this sort of thinking just last week but didn’t really discuss it. Basically, Bill Maher, when interviewing Dr. Jay Gordon, kept harping on the uncertainty in medicine as a reason to doubt the safety and efficacy of vaccines and the conclusions of science in general, listing examples of dietary recommendations that changed, of Accutane being withdrawn from the market, and various other instances in which new findings led to significant changes in medical practice. His entire line of “reasoning,” if you can call it that, was exactly the sort of dichotomous thinking that I described above, but with an additional twist: If we don’t know everything about everything in medicine, then anything is possible, no matter how much evidence against it exists; e.g., a link between vaccines and autism. At the end of the interview segment, he even explicitly said that, unless a doctor can tell him exactly what causes cancer and exactly how to cure it, he won’t shut up about asking questions about medical issues. Basically, Maher kept ranting about what we don’t know about medicine, completely ignoring how much we do know.

Shapiro characterizes this sort of dichotomous thinking a bit differently. After first noting that science deniers do cite science and empirical evidence but cite it in invalid and misleading ways, he notes that dichotomous thinking, also referred to as black-and-white or all-or-none thinking, is a characteristic factor in a number of mental conditions, including depression, anxiety, aggression, and borderline personality disorder. This type of thinking involves taking a spectrum of possibilities and dividing it into just two categories, eliminating all shades of gray. Everything is either black or white.

Then:

Spectrums are sometimes split in very asymmetric ways, with one-half of the binary much larger than the other. For example, perfectionists categorize their work as either perfect or unsatisfactory; good and very good outcomes are lumped together with poor ones in the unsatisfactory category. In borderline personality disorder, relationship partners are perceived as either all good or all bad, so one hurtful behavior catapults the partner from the good to the bad category. It’s like a pass/fail grading system in which 100 percent correct earns a P and everything else gets an F.

In my observations, I see science deniers engage in dichotomous thinking about truth claims. In evaluating the evidence for a hypothesis or theory, they divide the spectrum of possibilities into two unequal parts: perfect certainty and inconclusive controversy. Any bit of data that does not support a theory is misunderstood to mean that the formulation is fundamentally in doubt, regardless of the amount of supportive evidence.

Similarly, deniers perceive the spectrum of scientific agreement as divided into two unequal parts: perfect consensus and no consensus at all. Any departure from 100 percent agreement is categorized as a lack of agreement, which is misinterpreted as indicating fundamental controversy in the field.

This is exactly the way antivaxers appear to think. Those of you who regularly encounter antivaccine misinformation will recognize this pattern in the arguments made. If a vaccine isn’t absolutely 100% safe, it’s dangerous, toxin-laden crap. If a vaccine is not 100% effective at preventing the disease it’s designed to prevent, it’s utterly useless. Any vaccine failure at all is taken as evidence that they are correct, which is why they constantly harp on outbreaks in which vaccinated children fall ill as “evidence” that vaccines are useless, often crowing about how more vaccinated than unvaccinated children became ill. When they do this, of course, they completely ignore the inconvenient fact that there are many more vaccinated children than unvaccinated children. Compare the percentage of unvaccinated children who fall ill to the percentage of vaccinated children who fall ill, and you’ll find that unvaccinated children are far more likely to become sick. It’s a gambit that persuades because many people aren’t that great at math and, without prompting and someone leading them through the calculation, won’t think in fractions, percentages, and probabilities of becoming ill.
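To see why raw case counts mislead here, a short worked example helps. All numbers below are invented for illustration (a hypothetical school of 10,000 children, 98% vaccinated, with assumed attack rates), not figures from any real outbreak:

```python
# Hypothetical outbreak: 10,000 children, 98% vaccinated.
# Assumptions (invented for illustration): 50% of exposed unvaccinated
# children fall ill; the vaccine is 97% effective, so only
# 50% * (1 - 0.97) = 1.5% of vaccinated children fall ill.
vaccinated, unvaccinated = 9_800, 200

ill_vaccinated = round(vaccinated * 0.5 * (1 - 0.97))
ill_unvaccinated = round(unvaccinated * 0.5)

# Raw counts are what the antivax gambit quotes: more vaccinated
# children got sick (147) than unvaccinated ones (100).
print(ill_vaccinated, ill_unvaccinated)  # 147 100

# The per-child risk tells the real story: 1.5% vs. 50%.
risk_vaccinated = ill_vaccinated / vaccinated
risk_unvaccinated = ill_unvaccinated / unvaccinated
print(risk_unvaccinated / risk_vaccinated)  # unvaccinated ~33x likelier to fall ill
```

The raw counts favor the misleading conclusion only because the vaccinated group is 49 times larger; dividing by the size of each group dissolves the “more vaccinated kids got sick” talking point immediately.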

Dichotomous thinking also has a consequence in how one evaluates existing scientific evidence:

Proof exists in mathematics and logic but not in science. Research builds knowledge in progressive increments. As empirical evidence accumulates, there are more and more accurate approximations of ultimate truth but no final end point to the process. Deniers exploit the distinction between proof and compelling evidence by categorizing empirically well-supported ideas as “unproven.” Such statements are technically correct but extremely misleading, because there are no proven ideas in science, and evidence-based ideas are the best guides for action we have.

I have observed deniers use a three-step strategy to mislead the scientifically unsophisticated. First, they cite areas of uncertainty or controversy, no matter how minor, within the body of research that invalidates their desired course of action. Second, they categorize the overall scientific status of that body of research as uncertain and controversial. Finally, deniers advocate proceeding as if the research did not exist.

This is one reason why I almost never use the word “proof” when discussing science, even when discussing homeopathy, which is about as close to proven impossible as anything can be in science. I discuss evidence, not proof. In any event, let’s apply this observation to antivaxers. Just as, in the example above, a vaccine that is not 100% safe and effective is (to them) dangerous and ineffective, any controversy in the science surrounding a vaccine, no matter how minor, means that the “science isn’t settled.” (How many times have you heard that line from antivaxers—or, come to think of it, from quacks and other science deniers?) The “controversy” doesn’t even have to be a legitimate one. Take the example of whether vaccines cause autism. All the large, well-designed, well-executed epidemiological studies have failed to find a whisper of a whiff of a hint of a signal of a correlation between vaccination and autism. There do exist studies that found a correlation, but they’re all studies by antivaxers, such as Andrew Wakefield and Mark and David Geier, and they’re all terrible studies with huge methodological flaws. As Shapiro describes, though, antivaxers give equal (or even greater) weight to the studies by antivaxers finding that vaccines cause autism as they do to the vast compendium of studies by legitimate scientists finding that vaccines don’t, and conclude that there’s still a scientific controversy over whether vaccines cause autism. There isn’t.

There’s a similar pattern with cancer quackery. Because I specialize in the surgical treatment of breast cancer, I understand that it’s incredibly scary to be diagnosed with cancer, even a cancer like breast cancer, which most of the time can be treated with a high likelihood of long-term survival or “cure,” if you will. Most patients, even those who believe in cancer quackery, will accept that surgery “works,” because it’s fairly intuitive that removing a cancer will treat it, but they are far more likely to have a problem with chemotherapy and radiation. That’s why there are so many women with alternative medicine cancer cure testimonials who, when you examine their stories more closely, turn out to have accepted surgery but refused chemotherapy and/or radiation and were, in essence, lucky enough to have been “cured” by the surgery. In any event, the same sorts of arguments are common. If there isn’t a 100% cure rate, then conventional cancer treatment is useless and a dangerous mix of “cut, burn, poison.” Meanwhile, cancer quacks promise 90% or even 100% cure rates for cancers that conventional medicine can’t cure but only manage, and any study finding that a treatment is more toxic or less effective than previously suspected is touted as evidence that chemotherapy doesn’t work. (It does.)

Shapiro points out the same phenomenon in another area of science. See if you hear echoes of arguments we’ve dissected on this blog before:

This same type of thinking can be seen among creationists. They seem to misinterpret any limitation or flux in evolutionary theory to mean that the validity of this body of research is fundamentally in doubt. For example, the biologist James Shapiro (no relation) discovered a cellular mechanism of genomic change that Darwin did not know about. Shapiro views his research as adding to evolutionary theory, not upending it. Nonetheless, his discovery and others like it, refracted through the lens of dichotomous thinking, result in articles with titles like, “Scientists Confirm: Darwinism Is Broken” by Paul Nelson and David Klinghoffer of the Discovery Institute, which promotes the theory of “intelligent design.” Shapiro insists that his research provides no support for intelligent design, but proponents of this pseudoscience repeatedly cite his work as if it does.

Again, this is part of how this sort of thinking works. The core tenets of the theory of evolution are supported by an enormous body of mutually reinforcing evidence from a number of different disciplines built up over many decades. The controversies in evolution, such as they are, tend to be at the bleeding edge of the science, and the bleeding edge is always far more uncertain than the core. (Otherwise it wouldn’t be the bleeding edge.) Yet creationists use those scientific controversies to cast doubt on the very core of evolution. It’s how science denial works. Similarly, climate science deniers use controversies at the very edge of climate science to cast doubt on its core conclusion: that the earth’s climate is warming catastrophically, largely due to human activity. As with antivaxers, the deniers often produce dubious scientific studies of their own to give the illusion of scientific controversy.

As Shapiro concludes:

There is a vast gulf between perfect knowledge and total ignorance, and we live most of our lives in this gulf. Informed decision-making in the real world can never be perfectly informed, but responding to the inevitable uncertainties by ignoring the best available evidence is no substitute for the imperfect approach to knowledge called science.

Indeed.

My one quibble with Shapiro is that I’m not so sure that a pathological (or near-pathological) level of dichotomous thinking is necessary for science denial to take hold; a normal level coupled with perhaps an above-average need for certainty will do. When discussing this aspect of science denial, I like to quote a song by David Bowie, “Law (Earthlings on Fire)”: “I don’t want knowledge. I want certainty!” That pretty much sums it up. If there’s a trait among humans that strikes me as universal, it’s an unquenchable thirst for certainty. It’s a major force that drives people into the arms of religion, even radical religions with clearly irrational views, and it isn’t expressed only through extreme religiosity. As anyone who accepts science as the basis of medical therapy knows, a lot of the same psychology is at work in medicine as well. This should come as no surprise to those committed to science-based medicine, because there is a profound conflict between our human desire for certainty and the uncertainty that is always inherent in so much of our medical knowledge. The reason is that the conclusions of science are always provisional, and those of science-based medicine are arguably even more so than those of many other branches of science. Why? Because medicine involves applying imperfect science to the treatment of disease. Often that application produces clear-cut cures. Arguably more often, though, the results are more mixed and less satisfyingly clear-cut (e.g., the treatment of chronic diseases). To go back to the David Bowie quote, evidence is knowledge, not certainty, nor, to echo Shapiro, “proof.” Take that craving for certainty, mix in some dichotomous thinking, and any conclusion of medicine that isn’t 100% certain becomes very uncertain.

As I’ve said before, one of the hardest things for the average person who is not medically or scientifically trained to accept about science-based medicine is that the conclusions of science are always subject to change based on new evidence, sometimes so much so that even those of us “in the biz” can become a bit disconcerted at the rate at which knowledge we had thought to be fairly settled changes. One example that I frequently like to cite is how duodenal peptic ulcer disease (PUD) was treated 35 years ago compared to how it is treated now. Between 1984 and 1994, a revolution occurred based on the discovery of H. pylori as the cause of most of the gastric and duodenal ulcer disease we see. Whereas in 1985 we treated PUD with H2-blockers and other drugs designed to block stomach acid secretion, now antibiotics are the mainstay of treatment, curing the disease at a much higher rate than any treatment other than surgery, and without surgery’s complications. I’m sure any other physician here could come up with multiple other examples. In my own field of breast cancer surgery, from time to time I look back at how we treated breast cancer nearly 30 years ago, when I first started residency, compare it to how we treat it now, and marvel at the changes, many of which I had to learn after having completed my training. If such changes can be disconcerting even to physicians dedicated to science-based medicine, imagine how much more disconcerting they are to lay people, particularly when they hear news reports of one study that produces one result, followed just months later by a report of a different study that gives a completely different result. That’s definitely not certainty!

The problem is that quacks offer what humans crave: certainty. They also offer it in a manner that invites dichotomous thinking: My quackery is good and effective; conventional medicine is useless and toxic (and exists only for big pharma profits). Unfortunately, as I’ve discussed before, scientists often fall prey to what has been called the “truth wins” assumption. This assumption, stated simply, is that when the truth is correctly stated it will be universally recognized. Those of us who make it one of our major activities to combat pseudoscience know, of course, that the truth doesn’t always win. Quite the contrary, actually; I’m not even sure the “truth” wins a majority of the time, or even close to a majority of the time. Moreover, most recommendations of science-based medicine are not “truth” per se; they are simply the best recommendations physicians can currently make based on current scientific evidence. They have changed. They’re changing now. They will continue to change. The examples are endless: mammography recommendations, treatment for hypercholesterolemia, adjuvant chemotherapy recommendations for breast cancer and other cancers. Unfortunately, there are quite a few doctors who are just as uncomfortable with change as the average person and still use out-of-date treatments and techniques.

The challenge, then, for a physician and science communicator, is twofold. First, we have to be comfortable dealing with uncertainty and change ourselves. If we can’t, there’s no way we’ll be able to communicate uncertainty. Second, we have to be careful to acknowledge and explain the uncertainty in the findings of science, noting where there is little to no uncertainty and where there is more. Of course, by the time we are adults, it’s often too late for that message to be truly internalized. We really need to teach our children not just critical thinking, but also that in science there is no such thing as absolute proof and that medical and scientific conclusions are supported by evidence and subject to change and revision in the face of new evidence. Then quacks would have less to work with when they try to persuade people that science is unreliable and that their treatments provide certainty.