Does thinking make it so? The placebo myth rears its ugly head again.

Blogging is a funny thing. Sometimes the coincidence involved is epic. For instance, as I do on many Mondays, yesterday I crossposted a modified and updated version of a post from a week ago from my not-so-super-secret other blog. This time around, it just so happened to be a post about what I like to refer to as the placebo narrative. As is my wont, I described in the usual ridiculous level of detail why that narrative is so popular among promoters of pseudoscientific medical treatments and, more importantly, why that narrative approaches black hole-density bullshit. It’s something that various studies and publications that I encounter every so often require me to revisit, re-explain, and elaborate upon further based on new information. Common threads include having to point out that thinking doesn’t make it so, and that the placebo narrative as recounted by “integrative medicine” partisans has an uncomfortable resemblance to The Secret and its Law of Attraction. Integrative medicine advocates even twist epigenetics to imply that The Secret is real in medicine and that thinking makes it so.

Indeed, there’s a reason why I’ve referred to The Secret as, in essence, the central dogma of alternative medicine. Basically, as more and more rigorous clinical trials fail to find specific effects of the various alternative medicine quack modalities (but I repeat myself) that “integrative medicine” mavens want to “integrate” into real medicine, rather than abandon those methods as ineffective, as we do for drugs and other treatments that fail to produce specific effects greater than placebo, they move the goalposts and switch their rationale. Now, it doesn’t matter if an alternative medicine quack modality “works” or not because, like Rudie, it can’t fail because—voilà!—it always works through placebo effects.

But there’s a problem. Placebo effects depend upon the patient’s having an expectation of what a treatment will do, and there’s no way to convince a patient that an inert sugar pill (or whatever else is being used as a placebo) will do anything useful to relieve his symptoms without lying to him. That’s why, lately, it’s been very, very important to add to the placebo myth the myth of “placebo without deception.” Here’s where the coincidence comes in. Yesterday, on the very same day I reposted my article about placebo effects from last week, what to my wondering eyes should appear but the Grand Poobah of “placebos without deception,” Ted Kaptchuk, publishing an op-ed in the Los Angeles Times entitled ‘Honest placebos’ show medicine can work without any actual medicine.

Groan.

At first, I was tempted just to Tweet a “compare and contrast” between Kaptchuk’s latest spew and my post from yesterday, but then I realized that learning requires repetition. The cliché goes that you tell them what you’re going to tell them, tell them, and then tell them what you told them. I doubt I need three posts this week on the placebo narrative, but Kaptchuk’s article suggested to me that delving a bit more into the “placebo without deception” myth couldn’t hurt and might help. So let’s dig in.

Kaptchuk, predictably, starts with the “placebo without deception” myth:

Placebo effects have a bad reputation in the medical world. Physicians are trained to dismiss them as misleading — as in, “it’s only a placebo effect,” or “it’s no different from a placebo effect.” Placebo is a label that marks a drug as ineffective and disqualifies research subjects who respond to “bogus” treatments.

But what if patients who take “honest placebos” — meaning they are told explicitly that they are swallowing sugar pills — can still experience relief from discomfort and disability? That’s been the result of a number of studies by my research group at Harvard Medical School and other teams around the world over the last few years. While these trials were relatively small and short in duration, they collectively challenge our greatest assumption about placebos: that they require deception in order to be effective.

No. They. Do. Not.

No. No. No. No.

Kaptchuk even contradicts himself in describing the experiments. Basically, his description of the experiments hints at the reason why they in fact demonstrate exactly the opposite of what he claims; namely, that placebo effects do require deception:

In our research group’s experiments, patients with illnesses such as irritable bowel syndrome, chronic low back pain, and episodic migraine attacks were randomly assigned to one of two groups: one got an honest placebo while the other was given no treatment. Participants were generally told that placebo effects are powerful in double-blind clinical trials (in which neither patients nor researchers know what the patient is getting), but that this study would examine whether placebos still work when patients know what they are getting. We also told them they didn’t have to believe it would work.

Many laughed and suggested we were nuts. But they agreed to try it out; most had been ill for years and were desperate for relief. The results upended the conventional wisdom. Many patients treated with an honest placebo felt significantly better. On average, irritable bowel patients reported 60% adequate relief, chronic low back sufferers had 30% improvement in both pain and disability, and migraine pain was 30% lower in two hours.

Notice how Kaptchuk characterizes what the patients were told: that placebo effects are “powerful” in double-blind randomized clinical trials. Not exactly. I’ve discussed all of those studies at one time or another. Here’s what he really told patients, first in the “open label placebo study” in patients with irritable bowel syndrome:

…patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.”

With recruitment fliers saying:

Participants were recruited from advertisements for “a novel mind-body management study of IBS” in newspapers and fliers and from referrals from healthcare professionals. During the telephone screening, potential enrollees were told that participants would receive “either placebo (inert) pills, which were like sugar pills which had been shown to have self-healing properties” or no-treatment.

Telling patients that placebos have powerful or significant “mind-body self-healing” properties is a bit different from saying vaguely that placebos are “powerful” in randomized clinical trials. Then, in the study looking at placebos versus the drug Maxalt for migraines, this is what subjects were told:

Our first goal is to understand why Maxalt makes you pain-free in one attack but not in another. Our second goal is to understand why placebo pills can also make you pain-free. Our third goal is to understand why Maxalt works differently when given in double-blind study vs. real-life experience when you take it at home.

I repeat for emphasis: “Our second goal is to understand why placebo pills can also make you pain-free.” Not to see if placebo pills can make you pain-free, or to understand why placebo pills might be able to make you pain-free or could possibly make you pain-free. “Can make you pain-free.” To be fair, this isn’t quite as blatant as the IBS study, in which subjects were told that placebos could produce “powerful mind-body effects.” It’s still “priming the pump,” though, rather blatantly.

Not as blatantly as Kaptchuk’s most recent study, though, looking at placebos for low back pain:

After informed consent, all participants were asked if they had heard of the “placebo effect” and explained in an approximately 15-minute a priori script, adopted from an earlier OLP study,18 the following “4 discussion points”: (1) the placebo effect can be powerful, (2) the body automatically can respond to taking placebo pills like Pavlov dogs who salivated when they heard a bell, (3) a positive attitude can be helpful but is not necessary, and (4) taking the pills faithfully for the 21 days is critical. All participants were also shown a video clip (1 minute 25 seconds) of a television news report, in which participants in an OLP trial of irritable bowel syndrome were interviewed (excerpted from: http://www.nbcnews.com/video/nightly-news/40787382#40787382).

I know. I know. I just used this one yesterday, but it’s worth repeating. Compare and contrast. Compare and contrast, my friends. And repetition, but hopefully not too much repetition.

Kaptchuk also exaggerates the level of symptom relief experienced. For example, here is how the results were described in the actual IBS paper:

Open-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2±1.0 vs. 4.0±1.1, p< .001) and at 21-day endpoint (5.0±1.5 vs. 3.9±1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).

I find it rather interesting to compare the way Kaptchuk chose to frame his results in the actual manuscript with how he describes them in this op-ed (and in pretty much every interview in the lay press that I’ve seen in which he mentions this study). One wonders whether saying that 60% of subjects taking placebos felt better, compared to 35% of those receiving regular care, sounds more convincing than citing improvement scores as unimpressive as the ones listed above. I also very much wonder whether the improvements reported are even clinically significant. For instance, in the main result reported, those in the no-treatment arm reported an average IBS-GIS of 4 (no change), while those in the open placebo arm reported an average of 5 (slightly improved). How clinically relevant is that? I don’t know, but I suspect that such a small change skirts the borders of clinical relevance and might not even achieve it.

As for the migraine study comparing Maxalt and placebo, let’s go to that paper as well, to look at something Kaptchuk tends not to mention, namely the secondary endpoint examined: specifically, whether or not the subject was pain-free after 2.5 hours:

Unlike the primary endpoint, the proportion of participants who were pain-free during the no-treatment condition (0.7%) was not statistically different from when participants took open-label placebo (5.7%). As with the primary endpoint, the proportion of participants pain-free after treatment was not statistically different between Maxalt treatment mislabeled as placebo (14.6%) and placebo treatment mislabeled as Maxalt (7.7%). The resulting therapeutic gain (that is, drug-placebo difference) was 8.8 percentage points under “placebo” labeling [odds ratio (OR), 2.80], 26.6 percentage points under “Maxalt or placebo” labeling (OR, 7.19), and 24.6 percentage points under “Maxalt” labeling (OR, 5.70).

As I noted at the time, the critical finding here is that Maxalt beat any sort of placebo effect, and not by a little bit, either. For all the Maxalt groups, the percentage of subjects who were pain-free was 25.5%, compared to 6.7% for all the placebo groups. That’s nearly a four-fold difference. Also note that the no-treatment condition was not statistically different from the open-label placebo condition. The error bars were quite large, as well. Another problem with the study was that the authors made no effort to assess expectancy because they were afraid of causing patients to question the accuracy of the information provided on the envelopes. The lack of assessment of expectancy greatly decreases the utility of this study and the ability to generalize from it. Worse, no assessment of blinding was performed because the investigators were worried that this, too, would provoke suspicions given the study design. Quite frankly, I did not find this a convincing excuse.

As for the third study, I didn’t discuss the magnitude of pain relief from “open label placebo” as much as I should have in my original discussion of the study. I more or less took Kaptchuk’s description at face value, which was a failing on my part that I’m happy to remedy today. First, Kaptchuk used a composite scale that assessed pain intensity by asking participants to rate their pain using three standard Numeric Rating Scales, ranging from 0 (“no pain”) to 10 (“worst pain imaginable”), scoring maximum pain, minimum pain, and usual pain. The mean of the three measures was the primary pain outcome. If you dig into the actual tables, the results are less impressive. The changes in pain in each of the three measures used to construct the composite score ranged from 0.54 to 2.15 on a scale of 10. Even the authors concede that these changes are likely to be barely clinically significant, noting that a 30% reduction has been recommended as an indication of clinical significance and that open-label placebo only just achieved that. And, of course, there’s still the sticky issue of having to lie to the patient.

Kaptchuk concludes with a distillation of the placebo narrative that contains the seeds of its own refutation:

Patients are open to safe self-healing methods such as honest placebos, according to survey research. But are doctors? Even if the evidence for honest placebos continues to grow, physicians may resist despite the obvious advantages: lower cost, lower risk, no side effects. Placebo treatment just goes against their years of training and reliance on medications. Patients likely will have to ask for placebo treatment and get their doctors on board.

That said, prescribing sugar pills is not the only way physicians can harness the power of self-healing. Placebo effects are most pronounced when patients interact with caring and empathetic doctors and nurses; when they feel skilled hands touch them; when they perform time-honored medical rituals and observe tools and symbols of healing; and when they are comforted with reassurance, support and hope.

See? It’s not the patients who are resistant to using placebos! It’s those hidebound, dogmatic doctors who believe that treatments should be science-based and who have the temerity to consider it unethical to lie to patients (or to grossly exaggerate or mischaracterize placebo effects). Now here’s the refutation. Yes! We know placebo effects can be enhanced by empathy and the “human touch.” In other words, good bedside manner matters. That means we don’t have to lie to patients or exaggerate by calling placebo effects “powerful mind-body self-healing” or other such woo babble (again, like technobabble in Star Trek, only with woo). All we have to do is use empathy and the human touch in concert with real, honest-to-goodness treatments shown through science to be effective against whatever ails the patient. There’s no need even for a little shading of the truth or lying to patients about sugar pills (or alternative medicine treatments).

Of course, Ted Kaptchuk and his acolytes will never, ever accept that solution, because doing so would require them to admit that the quackery they so badly want to “integrate” into science-based medicine is ineffective, which would basically eliminate the specialty of integrative medicine.