More dubious statements about placebo effects

In discussing “alternative” medicine it’s impossible not to discuss, at least briefly, placebo effects. Indeed, one of the most common complaints I (and others) voice about clinical trials of alternative medicine is a lack of adequate placebo controls. Just type “acupuncture” in the search box in the upper left-hand corner of the blog, and you’ll pull up a number of discussions of acupuncture clinical trials that I’ve done over the years. If you check some of these posts, you’ll find that in nearly every case I discuss whether the placebo or sham control is adequate, noting that the better the sham control, the less likely an acupuncture study is to have a positive result.

Some of the less clueless advocates of “complementary and alternative medicine” (CAM) seem to sense that much of what they do is nothing more than placebo effects and will as a result simply argue that what they do is OK because it’s “harnessing the placebo effect.” One problem that advocates of science-based medicine (like me) have with this argument is that it has always been assumed that a good placebo requires deceiving the patient by either saying or implying that they’re receiving an active treatment or medicine of some sort. Then, as I was watching the news last night (amazingly, I was actually home in time for the news–things are slowing down right before Christmas, I guess), I saw this story:

Placebos can help patients feel better, even if they are fully aware they are taking a sugar pill, researchers reported on Wednesday on an unusual experiment aimed to better understand the “placebo effect.”

Nearly 60 percent of patients with irritable bowel syndrome reported they felt better after knowingly taking placebos twice a day, compared to 35 percent of patients who did not get any new treatment, they report in the Public Library of Science journal PLoS ONE.

“Not only did we make it absolutely clear that these pills had no active ingredient and were made from inert substances, but we actually had ‘placebo’ printed on the bottle,” Ted Kaptchuk of Harvard Medical School and Beth Israel Deaconess Medical Center in Boston, who led the study, said in a statement.

This study is quite intriguing in that it directly compared a placebo treatment in patients who knew they were getting a placebo treatment with no treatment at all. So, of course, I had to head right on over to PLoS ONE to find the article Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome. The investigators were led by Dr. Ted J. Kaptchuk of Harvard’s Osher Research Center. The Osher Center, for those of you not familiar with it, is Harvard’s center of quackademic medicine; only this time its researchers seem to be trying to do some real research into placebo effects. They also describe concisely the ethical problems with using placebos, at least as they are currently understood:

Directly harnessing placebo effects in a clinical setting has been problematic because of a widespread belief that beneficial responses to placebo treatment require concealment or deception. [3] This belief creates an ethical conundrum: to be beneficial in clinical practice placebos require deception but this violates the ethical principles of respect for patient autonomy and informed consent. In the clinical setting, prevalent ethical norms emphasize that “the use of a placebo without the patient’s knowledge may undermine trust, compromise the patient-physician relationship, and result in medical harm to the patient.” [4]

For purposes of this study, Kaptchuk et al wanted to determine if an open-label placebo pill with a persuasive rationale was more effective than no treatment at all in a completely unblinded study. In this study, it was made completely clear that the pills the patients in the experimental group were getting were placebos. In fact, they were even named “placebo.” I’ll discuss the problems I found in the study in a moment, but first I’ll just summarize the results. 92 patients with irritable bowel syndrome were screened and 80 patients randomized either to no treatment (43 subjects) or placebo (37 subjects). The primary outcome measurements were assessed using questionnaires, such as the IBS Global Improvement Scale (IBS-GIS) which asks participants: “Compared to the way you felt before you entered the study, have your IBS symptoms over the past 7 days been: 1) Substantially Worse, 2) Moderately Worse, 3) Slightly Worse, 4) No Change, 5) Slightly Improved, 6) Moderately Improved or 7) Substantially Improved.” Other scales were used as well. The trial lasted three weeks, and the results were as follows:

Open-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2±1.0 vs. 4.0±1.1, p<.001) and at 21-day endpoint (5.0±1.5 vs. 3.9±1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).

I find it rather interesting to compare the way the authors chose to frame their results in the actual manuscript with how they described their results to the media. One wonders whether saying that 60% of subjects taking placebos felt better, compared to 35% of those receiving regular care, sounded more convincing than citing improvement scores like the ones listed above.
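For what it’s worth, the headline numbers do clear the conventional significance bar on their own. Here’s a quick two-proportion z-test sketch; note that the responder counts (22 of 37 and 15 of 43) are my back-calculation from the reported percentages, not figures taken directly from the paper:

```python
import math

# Back-calculated responder counts (my assumption from the reported
# ~59% vs. 35% improvement rates, not numbers from the paper itself).
improved_placebo, n_placebo = 22, 37   # open-label placebo arm
improved_control, n_control = 15, 43   # no-treatment control arm

p1 = improved_placebo / n_placebo
p2 = improved_control / n_control

# Pooled two-proportion z-test
p_pool = (improved_placebo + improved_control) / (n_placebo + n_control)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_placebo + 1 / n_control))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from normal tail

print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

With these assumed counts the difference comes out around p ≈ 0.03, in the same ballpark as the adequate-relief (IBS-AR) p-values the authors report. Statistical significance, of course, says nothing about the reporting and recruiting biases discussed below.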

Be that as it may, I think I know why this study is in PLoS ONE, which is not known for publishing high-quality clinical trials, rather than in a better journal. New England Journal of Medicine material, this is most definitely not. (Of course, given the NEJM’s propensity for a bit of credulity toward woo, perhaps being in the NEJM doesn’t mean what it once did.) The first thing one notices, of course, is that there isn’t a single objective measure in the entire clinical trial. It’s all completely subjective. This is fine, as far as it goes, given that placebo effects affect primarily subjective outcomes, such as pain, anxiety, etc. Still, it would have been quite interesting if the investigators had included along with their subjective measurements some objective measurements, such as number of bowel movements a day, time lost from work, or medication requirements. Anything. The authors even acknowledge this problem, pointing out that there are few objective measures for IBS. This may be true, but it doesn’t mean it wouldn’t have been worth trying measures that are at least related to IBS.

Then there’s the potential issue of reporting bias. Because this wasn’t a double-blinded trial, or even a single-blinded trial, it was impossible to hide from the subjects which group they were assigned to. Combine this with the lack of objective measures, and all that’s left are subjective measures prone to considerable bias, all for a condition whose natural history is one of waxing and waning symptoms.

This paper, as you might imagine, is being touted all over the media and blogosphere as evidence that the placebo effect can be induced without deceiving patients. Indeed, the authors say as much themselves:

We found that patients given open-label placebo in the context of a supportive patient-practitioner relationship and a persuasive rationale had clinically meaningful symptom improvement that was significantly better than a no-treatment control group with matched patient-provider interaction. To our knowledge, this is the first RCT comparing open-label placebo to a no-treatment control. Previous studies of the effects of open-label placebo treatment either failed to include no-treatment controls [27] or combined it with active drug treatment. [28] Our study suggests that openly described inert interventions when delivered with a plausible rationale can produce placebo responses reflecting symptomatic improvements without deception or concealment.

Uh, no. The reason I say this is that, all their claims to the contrary notwithstanding, the investigators deceived their subjects to induce placebo effects. Here’s how they describe what they told their patients:

Patients who gave informed consent and fulfilled the inclusion and exclusion criteria were randomized into two groups: 1) placebo pill twice daily or 2) no-treatment. Before randomization and during the screening, the placebo pills were truthfully described as inert or inactive pills, like sugar pills, without any medication in it. Additionally, patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.” The patient-provider relationship and contact time was similar in both groups. Study visits occurred at baseline (Day 1), midpoint (Day 11) and completion (Day 21). Assessment questionnaires were completed by patients with the assistance of a blinded assessor at study visits.

Moreover, the investigators recruited subjects thusly:

Participants were recruited from advertisements for “a novel mind-body management study of IBS” in newspapers and fliers and from referrals from healthcare professionals. During the telephone screening, potential enrollees were told that participants would receive “either placebo (inert) pills, which were like sugar pills which had been shown to have self-healing properties” or no-treatment.

Even the authors had to acknowledge that this was a problem:

A further possible limitation is that our results are not generalizable because our trial may have selectively attracted IBS patients who were attracted by an advertisement for “a novel mind-body” intervention. Obviously, we cannot rule out this possibility. However, selective attraction to the advertised treatment is a possibility in virtually all clinical trials.

In other words, not only did Kaptchuk et al deceive their subjects to trigger placebo effects, whether they realize or will admit it or not, but they might very well have specifically attracted patients more prone to believing in the power of “mind-body” interactions. Yes, patients were informed that they were receiving a placebo, but that knowledge was tainted by what the investigators told them about what the placebo pills could do. After all, the investigators told subjects in the placebo group that science says the placebo pills they would take were capable of activating some sort of woo-ful “mind-body” healing process. In fact, I would say that what Kaptchuk et al did was arguably worse from an ethical standpoint than what investigators do in the usual clinical trial.

Consider: In most clinical trials, investigators tell subjects that they will be randomized to receive either the medicine being tested or a sugar pill (i.e., placebo). This, patients are told, means that they have a 50-50 chance of getting the real medicine and a 50-50 chance of receiving the placebo. In explaining this, investigators in general make no claim that the placebo pill has any effect whatsoever; in fact, subjects are explicitly told that it does not. In contrast, Kaptchuk et al explicitly deceived their subjects for purposes of the study by telling them that the sugar pill activated some sort of mind-body woo that would make them feel better. Yes, they did tell the subjects that they didn’t have to believe in mind-body interactions. But did it matter? I doubt it, because people with authority whom patients tend to believe (namely doctors) also told subjects that evidence showed these placebo pills activated some sort of “mind-body” mechanism described as “powerful.” This alone makes proclamations that the investigators triggered placebo effects without deception–shall we say?–not exactly in line with the reality of the situation.
At least, I don’t buy the investigators’ explanation, even though Ed Yong states that “no one I spoke to criticised the design of this trial,” and Edzard Ernst described it as “elegant.”

Geez, I never thought I’d be disagreeing with Ed or Edzard, but there you go.

Actually, the overall design wasn’t that bad. It’s the execution I’m more than quibbling with, in particular the matching of subjects. Even so, I don’t have a huge problem with the study. After all, it’s a pilot study. The biggest problem I have is with how the study is being sold to the press, as though it were evidence that placebo effects can really be triggered without at least some degree of deception. It shows nothing of the sort.

One last note. Take a guess who funded this study. Go on. Where did the investigators get the money for this study? That’s right.

NCCAM funded the study. Why am I not surprised? Actually, come to think of it, this is one of the better studies that NCCAM has funded. Even so, it’s still only an OK study. It has a somewhat intriguing finding that could well be due to differences between the experimental groups, reporting bias, and/or recruiting bias. But ground-breaking, or somehow a demonstration that the placebo effect can be activated without deceiving patients, it is not.

Not quite, but nice try.