In discussing “alternative” medicine it’s impossible not to discuss, at least briefly, placebo effects. Indeed, one of the most common complaints I (and others) voice about clinical trials of alternative medicine is lack of adequate placebo controls. Just type “acupuncture” in the search box in the upper left hand corner of the blog, and you’ll pull up a number of discussions of acupuncture clinical trials that I’ve done over the years. If you check some of these posts, you’ll find that in nearly every case I discuss whether the placebo or sham control is adequate, noting that, the better the sham controls, the less likely acupuncture studies are to have a positive result.
Some of the less clueless advocates of “complementary and alternative medicine” (CAM) seem to sense that much of what they do is nothing more than placebo effects and will as a result simply argue that what they do is OK because it’s “harnessing the placebo effect.” One problem that advocates of science-based medicine (like me) have with this argument is that it has always been assumed that a good placebo requires deceiving the patient by either saying or implying that they’re receiving an active treatment or medicine of some type. Then, as I was watching the news last night (amazingly, I was actually home in time for the news–things are slowing down right before Christmas, I guess), I saw this story:
Placebos can help patients feel better, even if they are fully aware they are taking a sugar pill, researchers reported on Wednesday on an unusual experiment aimed to better understand the “placebo effect.”
Nearly 60 percent of patients with irritable bowel syndrome reported they felt better after knowingly taking placebos twice a day, compared to 35 percent of patients who did not get any new treatment, they report in the Public Library of Science journal PLoS ONE.
“Not only did we make it absolutely clear that these pills had no active ingredient and were made from inert substances, but we actually had ‘placebo’ printed on the bottle,” Ted Kaptchuk of Harvard Medical School and Beth Israel Deaconess Medical Center in Boston, who led the study, said in a statement.
This study is quite intriguing in that it directly compared a placebo treatment in patients who knew they were getting a placebo treatment with no treatment at all. So, of course, I had to head right on over to PLoS ONE to find the article Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome. The investigators were led by Dr. Ted J. Kaptchuk of Harvard’s Osher Research Center. The Osher Center, for those of you not familiar with it, is Harvard’s center of quackademic medicine; only this time they seem to be trying to do some real research into placebo effects. They also describe concisely the ethical problems with using placebos, at least as they are currently understood:
Directly harnessing placebo effects in a clinical setting has been problematic because of a widespread belief that beneficial responses to placebo treatment require concealment or deception. [3] This belief creates an ethical conundrum: to be beneficial in clinical practice placebos require deception but this violates the ethical principles of respect for patient autonomy and informed consent. In the clinical setting, prevalent ethical norms emphasize that “the use of a placebo without the patient’s knowledge may undermine trust, compromise the patient-physician relationship, and result in medical harm to the patient.” [4]
For purposes of this study, Kaptchuk et al wanted to determine if an open-label placebo pill with a persuasive rationale was more effective than no treatment at all in a completely unblinded study. In this study, it was made completely clear that the pills the patients in the experimental group were getting were placebos. In fact, they were even named “placebo.” I’ll discuss the problems I found in the study in a moment, but first I’ll just summarize the results. Ninety-two patients with irritable bowel syndrome were screened, and 80 were randomized either to no treatment (43 subjects) or placebo (37 subjects). The primary outcome measurements were assessed using questionnaires, such as the IBS Global Improvement Scale (IBS-GIS), which asks participants: “Compared to the way you felt before you entered the study, have your IBS symptoms over the past 7 days been: 1) Substantially Worse, 2) Moderately Worse, 3) Slightly Worse, 4) No Change, 5) Slightly Improved, 6) Moderately Improved or 7) Substantially Improved.” Other scales were used as well. The trial lasted three weeks, and the results were as follows:
Open-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2±1.0 vs. 4.0±1.1, p<.001) and at 21-day endpoint (5.0±1.5 vs. 3.9±1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).
I find it rather interesting to compare the way the authors chose to frame their results in the actual manuscript with how they described them to the media. One wonders whether saying that 60% of subjects taking placebos felt better, compared to 35% of those receiving regular care, sounded more convincing than citing improvement scores like the ones listed above.
Be that as it may, I think I know why this study is in PLoS ONE, which is not known for publishing high quality clinical trials, rather than in a better journal. New England Journal of Medicine material, this is most definitely not. (Of course, given the NEJM’s propensity for a bit of credulity toward woo, perhaps being in the NEJM doesn’t mean what it once did.) The first thing one notices, of course, is that there isn’t a single objective measure in the entire clinical trial. It’s all completely subjective. This is fine, as far as it goes, given that placebo effects primarily affect subjective outcomes, such as pain, anxiety, etc. It would have been quite interesting if the investigators had included along with their subjective measurements some objective measures, such as number of bowel movements a day, time lost from work, or medication requirements. Anything. The authors even acknowledge this problem, pointing out that there are few objective measures for IBS. This may be true, but it doesn’t mean it wouldn’t have been worth trying measures that are at least related to IBS. Then there’s the potential issue of reporting bias. Because this wasn’t a double-blinded trial, or even a single-blinded trial, it was impossible to hide from the subjects which group they were assigned to. Combine this with the lack of objective measures, and all that’s left are subjective measures prone to considerable bias, all for a condition whose natural history is one of waxing and waning symptoms.
This paper, as you might imagine, is being touted all over the media and blogosphere as evidence that the placebo effect can be induced without deceiving patients. Indeed, the authors say as much themselves:
We found that patients given open-label placebo in the context of a supportive patient-practitioner relationship and a persuasive rationale had clinically meaningful symptom improvement that was significantly better than a no-treatment control group with matched patient-provider interaction. To our knowledge, this is the first RCT comparing open-label placebo to a no-treatment control. Previous studies of the effects of open-label placebo treatment either failed to include no-treatment controls [27] or combined it with active drug treatment. [28] Our study suggests that openly described inert interventions when delivered with a plausible rationale can produce placebo responses reflecting symptomatic improvements without deception or concealment.
Uh, no. The reason I say this is because, all their claims otherwise notwithstanding, the investigators deceived their subjects to induce placebo effects. Here’s how they describe what they told their patients:
Patients who gave informed consent and fulfilled the inclusion and exclusion criteria were randomized into two groups: 1) placebo pill twice daily or 2) no-treatment. Before randomization and during the screening, the placebo pills were truthfully described as inert or inactive pills, like sugar pills, without any medication in it. Additionally, patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.” The patient-provider relationship and contact time was similar in both groups. Study visits occurred at baseline (Day 1), midpoint (Day 11) and completion (Day 21). Assessment questionnaires were completed by patients with the assistance of a blinded assessor at study visits.
Moreover, the investigators recruited subjects thusly:
Participants were recruited from advertisements for “a novel mind-body management study of IBS” in newspapers and fliers and from referrals from healthcare professionals. During the telephone screening, potential enrollees were told that participants would receive “either placebo (inert) pills, which were like sugar pills which had been shown to have self-healing properties” or no-treatment.
Even the authors had to acknowledge that this was a problem:
A further possible limitation is that our results are not generalizable because our trial may have selectively attracted IBS patients who were attracted by an advertisement for “a novel mind-body” intervention. Obviously, we cannot rule out this possibility. However, selective attraction to the advertised treatment is a possibility in virtually all clinical trials.
In other words, not only did Kaptchuk et al deceive their subjects to trigger placebo effects, whether they realize or will admit that that’s what they did or not, but they might very well have specifically attracted patients more prone to believing in the power of “mind-body” interactions. Yes, patients were informed that they were receiving a placebo, but that knowledge was tainted by what the investigators told them about what the placebo pills could do. After all, investigators told subjects in the placebo group that science says the placebo pills they would take were capable of activating some sort of woo-ful “mind-body” healing process. In fact, I would say that what Kaptchuk et al did was arguably worse from an ethical standpoint than what investigators do in the usual clinical trial. Consider: In most clinical trials, investigators tell subjects that they will be randomized to receive either the medicine being tested or a sugar pill (i.e., placebo). This, patients are told, means that they have a 50-50 chance of getting the real medicine and a 50-50 chance of receiving the placebo. In explaining this, investigators make no claim that the placebo pill has any effect whatsoever; in fact, subjects are explicitly told that it does not. In contrast, Kaptchuk et al explicitly deceived their subjects for purposes of the study by telling them that the sugar pill activated some sort of mind-body woo that would make them feel better. Yes, they did tell the subjects that they didn’t have to believe in mind-body interactions. But did it matter? I doubt it, because people with authority, whom patients tend to believe (namely doctors), also told subjects that evidence showed that these placebo pills activated some sort of “mind-body” mechanism described as “powerful.” This alone makes proclamations about how the investigators triggered placebo effects without deception, shall we say, not exactly in line with the reality of the situation.
At least, I don’t buy the investigators’ explanation, even though Ed Yong states that “no one I spoke to criticised the design of this trial,” and Edzard Ernst described it as “elegant.”
Geez, I never thought I’d be disagreeing with Ed or Edzard, but there you go.
Actually, the overall design wasn’t that bad. It’s the execution I’m more than quibbling with, in particular the matching of subjects. Even so, I don’t have a huge problem with the study. After all, it’s a pilot study. The biggest problem I have is with how the study is being sold to the press, as though it were evidence that placebo effects can really be triggered without at least some degree of deception. It shows nothing of the sort.
One last note. Take a guess who funded this study. Go on. Where did the investigators get the money for this study? That’s right.
NCCAM funded the study. Why am I not surprised? Actually, come to think of it, this is one of the better studies that NCCAM has funded. Even so, it’s only an OK study. It has a somewhat intriguing finding that could well be due to differences between the experimental groups, reporting bias, and/or recruiting bias. But is it ground-breaking, or does it somehow demonstrate that the placebo effect can be activated without deceiving patients?
Not quite, but nice try.
47 replies on “More dubious statements about placebo effects”
CAM = Clueless Advocates’ Mess?
That bit jumped out at me, too. While there may be some clinical testing that has shown “mind-body self-healing processes” due to placebos, that is already setting up expectations in the mind of the subject. Already you’re adding a difference between the two groups that goes beyond simply the sugar pill.
And, yeah, not doing age-, gender- or symptom-matching with controls…no chance of skewed results there, nosirreebob.
Excuse me, but if one group gets “nothing” and the other group gets a “sugar pill–i.e., nothing”, then there is no difference at all between the two groups. The ONLY factor that stands out here is the blatant contamination of advertising for the study with the “mind-body” crap. The only thing triggered was BELIEF. They simply recruited people who were already convinced of a “mind-body” element of “healing” and then gave them very subjective questionnaires to very subjectively confirm their preconceived bias.
My only concern is that you give this study any credence whatsoever and I am shocked that Ernst has anything good to say about it.
It occurs to me that a doctor might prescribe antibiotics and truthfully reassure a patient by saying something like “you don’t need to believe in anything; tests show that these pills work no matter what you believe.”
That’s the same kind of impression that may be given by what these researchers said about “rigorous clinical studies.”
As a side note, I read years ago that people studying the placebo effect had found that certain colors and shapes of pills were more effective. That’s one that could, I think, be investigated more thoroughly without ethical problems. Do (for example) 200 mg of ibuprofen in a small red pill really work better than the same medicine in a middle-sized white or green one? No patient would be deprived of real treatment in that study.
There are a couple of other issues here:
“Placebo” means something to scientists, but for lay people it has almost a magical quality that ties into the description in the solicitation. It seems like that might get a result like those seen in any other kind of wooish treatments.
I’ve read ethical discussions on the need for putting false side effects in placebo treatments, especially in “heavy-hitting” trials where patients expect non-disease-related side effects; otherwise, the controls realize they’re in the control group (and the treatment folks become SURE they’ve lucked out and are actually getting treatment, strengthening that effect), a realization that might actually worsen their condition with a sort of “anti-placebo effect.” Could something similar happen to the controls here, who show up for some sort of treatment and are told they’re getting none? If the subjects were also restricted from using other treatments, the effects could be stronger.
I don’t find the “mind-body self-healing processes” bit quite that much of a problem. It would be perfectly ethical, as I understand it, to give a placebo while saying “this is a placebo, it doesn’t actually do anything but sometimes simply feeling like you’re taking action helps people feel better.” And in particular, giving a placebo in practice would necessarily be accompanied by some such statement, since you do have to tell the patient WHY you want them to take this.
I’ll agree that these researchers went far beyond that, to an unjustifiable extent that probably skewed the results. But setting up an expectation in the placebo group is not per se a reason to reject a study of the placebo effect in my judgement; the setting of the expectation is part of what needs to be studied. They just didn’t do it properly – an issue of degree, not kind.
Well, I can tell you that Kaptchuk and his fellow Osherians in general are becoming up front about the fact that what they are doing is studying placebos. And NCCAM is to a considerable extent finding itself backed into that corner, as its funded trials consistently fail to detect anything else.
This is, on the one hand, evidence that The Truth is Out There and if you try to follow basic rules of evidence, it’s going to lead you where it will in the end. On the other hand, the existence of NCCAM means a) we’re spending what most people would consider a disproportionate amount of money to study placebo and b) we have to study placebos that are relatively expensive and impractical, such as courses of acupuncture, since we have to look at official “CAM” modalities and c) we still have to use a lot of abracadabra and hocus pocus such as “mind-body interventions.”
At least we can be reassured that, waste of money or no, NCCAM is not, in the end, foisting nonsense on the world, but rather, slowly, reluctantly, maybe not very effectively, but nevertheless ineluctably, debunking it. Whether they like it or not.
“have been shown in rigorous clinical testing” – are they referencing this very paper, prior to its publication? how meta of them…
For proper blinding, they should have compared placebos to a sugar pill….
I wonder if doing a crossover design would have been more appropriate, with subjects serving as their own controls.
It’s hard, though. One would expect, no matter the design, that there would be some greater effect in the group taking the sugar pill than in the controls. It just might not be as great an effect as if they thought they were taking the real deal.
Hmm…maybe a group that was told they were given a real drug, a group that was told they were given an inert sugar pill (without any mention of mind-body stuff) and a third group that gets no additional treatment, except that both of the first two groups receive the same sugar pills.
Would you also describe as “mind-body woo” the belief of many if not most doctors that mental and emotional factors, such as stress, can cause or exacerbate subjective digestive complaints? Surely you cannot argue with a straight face that it is perfectly reasonable for doctors to tell patients that their discomfort may be related to their mental state, but Woo if they then suggest that changing that mental state might reduce the discomfort.
” Harnessing the placebo effect” – I think that woo-meisters utilize this concept** because they believe in vitalism and have a rather primitive idea that there is some sort of “life energy” ( Chi, Prana, *elan vital*, libido, mana, bio-energy, whatever) flowing though living beings that can be blocked, directed, diverted, harnessed, increased, or drained. In this somewhat hydraulic model, damming the flow results in illness- be it physical or psychological. Placebos “get it moving”.
** actually, I think that Mike Adams has revealed his limitations this way.
@ Vicki:
“As a side note, I read years ago that people studying the placebo effect had found that certain colors and shapes of pills were more effective. ”
Soooooorta. It’s a question pondered more by Marketing than by R&D really. Although, Khan et al. (DOI 10.1007/s00213-010-1874-z) investigated this in a roundabout way, I think they are asking the question backwards–the question should be more, “How can we make pills for better patient compliance” rather than “are pills aesthetically designed for marketing purposes”.
The problem is, making a pill that is aesthetically pleasing is at odds with the goal of selling pills all over the world. In the US, we like our pills to be white or blue or greenish or perhaps a friendly shade of pink for oral contraceptives, but we’re not huge fans of red or yellow or black–those indicate blood, war, danger and death in the West. However, in East Asian countries, white denotes death and green means infidelity, while red and yellow are associated with good luck, happiness and nourishment. I would hazard a guess that any result one might find from studying the effects of aesthetics on placebo response would vary accordingly by cultural background.
Oh good. I’m glad someone wrote a more critical response to this. Orac, I’m not sure that we’re “disagreeing” on this. I tried to get across a lot of the limitations to the study in my piece and that it’s an interesting pilot study that needs more work. I was a little surprised that no one I spoke to had anything particularly critical to say about it although I ran the post past at least one medical blogger who was happy with the balance of it.
“Additionally, patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.”
“…potential enrollees were told that participants would receive “either placebo (inert) pills, which were like sugar pills which had been shown to have self-healing properties””
Try to tell me that this isn’t stacking the deck. /snark
Given that the investigators were willing to deceive their subjects (even if they don’t consider what they did deceptive), it would have been interesting to have a third arm where subjects were given a “traditional placebo” where they were told they would be randomized into a group that might get placebo or a new experimental medication. I’d also be interested to see a fourth arm where the patients were given sugar pill and told they were in a control group receiving sugar pill and that they should expect no improvement, as the pill was inert and they were being used as a baseline reference group.
I have to wonder why they even made that mind body statement at all given the goal of this study.
A lot of the disagreement here hinges on what the definition of a “placebo” actually is. If your definition is “a treatment that does nothing”, then if a treatment does something, by definition it can’t be a “placebo”. This is the definition that CAM uses. If a treatment does “something”, then by their definition, “it works too!” This is how they come up with the mistaken notion that acupuncture works and acupuncture with toothpicks works too, because both of them work better than “doing nothing”.
If you want to divide treatment modalities into placebos and non-placebos, then because those two divisions are collectively exhaustive (i.e. cover all possible treatments), your definitions of “placebo” and “non-placebo” have to be collectively exhaustive too.
The definition of “placebo” that I like is “a treatment that improves health, the effects of which are not mediated through pharmacological, nutritive, or surgical mechanisms”.
Similarly, my definition of a “nocebo” is “a treatment that degrades health, the effects of which are not mediated through pharmacological, nutritive, or surgical mechanisms”.
Note in both cases there has to be an effect on health. If there is no effect, then a treatment is neither a placebo nor a nocebo, nor any other type of treatment, other than a failed treatment.
If you can’t characterize a treatment as either a placebo or a non-placebo, then your definitions need to be modified until you can.
I think this is a good and valuable study. It shows that people can be given placebo treatments (in this case a pill with no active ingredients), be told it is a placebo, and they still exhibit improvement in symptoms. This shows how difficult it is to do studies of treatments without active components.
This study shows that people getting better is not evidence that a treatment is not a placebo. That people in a treatment leg of homeopathy, acupuncture, reiki, prayer, etc. get better compared to a non-treatment leg is not evidence that the treatment leg was not a placebo.
I think that this is a really good approach to administering placebos for use in clinical trials. Give everyone in the trial the same mind-body spiel about placebos. It probably will increase the fraction of people in the trial that get better, on both legs. That is a good thing. If there is coupling between a placebo effect of giving a pill, and the therapeutic effect of the actual treatment (and there probably is), then you want to activate both of them as much as possible. This approach does not compromise the effectiveness of the trial. If it is no better than placebo, that will still show up with a great placebo response and with a crappy placebo response.
Where it is most important to have a good and robust placebo response is for ineffective treatments, or for treatments that are a placebo. If a CAM treatment is no better than a placebo, then it is a placebo and can only be used as a placebo, not for actual clinical treatment.
In no way does this study justify giving people placebos as treatment in a clinical setting if there are other treatments known to be more effective than placebo. Maybe if there are no known effective treatments, and the practitioner and patient discuss that, and the patient still wants “something”, then maybe the practitioner could give an open label placebo after discussing it with the patient. I don’t see this as that different than giving psychotherapy (which under my definition is a placebo because it does not involve pharmacology, surgery or dietary interventions). Psychotherapy works, people do get better. How and why they get better is complex and doesn’t involve woo-woo magic; it involves physiology, physiology which we don’t yet fully understand.
Daedalus2u:
I think you need to revise your definitions: at least, I assume you don’t mean to include all forms of exercise and physical therapy as placebos.
RE PREVIOUS COMMENT “have been shown in rigorous clinical testing” – are they referencing this very paper, prior to its publication? how meta of them…”
NO … WRONG — they are referencing their prior paper in BMJ showing that a placebo given in a warm clinical context elicited a large placebo response in IBS ..
Components of placebo effect: randomised controlled trial in patients with irritable bowel syndrome.
BMJ. 2008 May 3;336(7651):999-1003. Epub 2008 Apr 3.
This study is the basis for the statement: “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.”
Many of Orac’s criticisms are reasonable critiques of a small proof-of-principle study that needs to be replicated in a larger sample ….Placebo studies, like all science, are incremental, and progress through replication.
Penn & Teller covered this succinctly in their book “How to Play With Your Food”:
If you are suffering from some condition (headache, upset stomach, etc…) and do nothing….
one of 3 things is going to happen; the condition is going to get better, no change, or it’s going to get worse.
Now if one is suffering and takes something for it (aspirin, tree bark, astrally projected acupuncture needles up your ass, etc…)
one of 3 things is going to happen; the condition is going to get better, no change, or it’s going to get worse.
The only use of placebos is to eliminate those circumstances that might have changed on their own; there ‘ain’t’ no such thing as the “placebo effect”.
the ONLY thing one can say if your placebo group shows some response is that whatever you were testing for had no effect, not that the placebo did; a placebo that had a significant effect would be worthless as a placebo.
Orac, can you elaborate on your point re: deception? I am just a layman, but it seems like I’ve heard that there is a pretty big pile of evidence that placebo effects do occur and that the study’s description of them is basically accurate.
Is it just the “mind-body…self-healing processes” phrase that you don’t like? Surely the placebo effect is the result of something going on in the mind affecting the body? How would you otherwise describe the mechanism of the placebo effect? Thanks and great post
A few statistical comments.
1. I don’t know why people don’t use randomly permuted blocks in RCTs when they know before hand that the trial is going to be relatively small. This would ensure that the number of subjects in each arm were more similar.
2. Using the “last observation carried forward” as a means of handling missing data has been the subject of some criticism in the imputation literature. While it’s been assumed that any bias this method induces is conservative, this is not always the case. What it does do is artificially reduce the variance in the data, since the imputed values are treated as real data. So the calculated p-values are smaller than they should be.
3. I’m also wary of sample size calculations that are based on effect sizes. A clinical trial should be powered to detect effects that are clinically significant. If you power a trial to detect “a large effect (d=.8)”, what does that mean?
4. If you look at the main result, those in the no-treatment arm reported an average IBS Global Improvement Score of 4 (No Change). In the Open Placebo arm, the average was 5 (Slightly Improved). Is this of any clinical relevance?
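Point 1 above can be sketched concretely. A minimal Python illustration of randomly permuted blocks (the block size and arm labels here are my own assumptions for illustration, not anything taken from the trial):

```python
import random

def block_randomize(n_subjects, block_size=4, arms=("placebo", "no-treatment")):
    """Randomly permuted blocks: within each block, each arm appears
    equally often, so the arm sizes can never drift far apart."""
    assert block_size % len(arms) == 0
    allocation = []
    while len(allocation) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)  # random order within the block
        allocation.extend(block)
    return allocation[:n_subjects]

alloc = block_randomize(80)
print(alloc.count("placebo"), alloc.count("no-treatment"))
```

With 80 subjects and a block size of 4 the split comes out exactly 40/40, whereas simple randomization can easily produce an imbalance like the 43/37 split seen in the paper.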
I’ve just had a look at the protocol for this trial which is also available from the PLOS site.
The authors state that for the IBS Global Improvement Scale (IBS-GIS) they were going to define “responders” as patients who answered that their symptoms were either “moderately improved” or “substantially improved”.
In other words, if they scored a 6 or 7 on the IBS-GIS they were deemed to have responded to treatment.
In the section entitled “Planned Statistical Analysis”, they state that for the IBS-GIS they were going to use chi square tests to determine whether the proportion of responders in the open-label placebo condition was greater than the proportion of responders in the wait-list control.
This is not what they have done. Why haven’t the reviewers asked them for the results that were prespecified in the protocol?
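For what it’s worth, the prespecified responder analysis would have been straightforward to run. Here is a sketch of the chi-square test on a 2x2 responder table; the counts below are hypothetical, back-calculated from the roughly 59% vs. 35% rates quoted to the press (22/37 vs. 15/43), since the actual responder counts were not reported:

```python
from math import sqrt, erfc

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for a 2x2 table
    laid out as [responders, non-responders] x [placebo, control]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2))  # upper tail of chi-square with 1 df
    return chi2, p

# Hypothetical: 22/37 responders on open-label placebo, 15/43 on no-treatment
chi2, p = chi2_2x2(22, 15, 15, 28)
print(round(chi2, 2), round(p, 3))
```

On these made-up counts the test happens to come out nominally significant, but the real counts could of course differ, which is exactly why the prespecified analysis should have been reported.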
vicki, yes I didn’t mean to exclude things like exercise, but exercise is usually not considered to be a “therapy”, any more than breathing clean air is considered to be a therapy. It is something that everyone needs to stay healthy.
As you know, I’ve long argued it was the NewAge that mattered most (why have scientists wasted money studying whether water has a memory again?) glad to see something – anything – that finally got you to focus on it.
Mind, Body, Spirit, Baby! – and Merry Christmas!
Also, if (as usual) you think I’m off my rocker, I challenge you – since you get grants for this kind of thing – to do a study of the people who go for CAM (or into the anti-vaccine hooey) to discover what their “spiritual” beliefs are.
I betcha, they’ll be overwhelmingly NewAgers.
@r_nebblesworth–“there is a pretty big pile of evidence that placebo effects do occur…” Since treatments are commonly tested vs. placebo, and the tested treatments must have a greater effect than placebo to be considered effective, isn’t that in itself evidence that placebo effects do occur? IOW, a placebo effect is assumed for these kinds of trials.
I think the reason that they haven’t tested the difference in responders as they stated in the study protocol is that the study is under-powered.
The IBS-GIS has been used in a previous study (Lembo et al. (2001). Alosetron controls bowel urgency and provides global symptom improvement in women with diarrhea-predominant irritable bowel syndrome. Am J Gastroenterol 96(9): 2662-2670). In this study, at 4 weeks about 40% of those in the placebo group were classified as responders.
So in the Kaptchuk study, at 3 weeks we might expect that overall 40% of the participants would be classified as responders.
So if we were doing a sample size calculation, we might want 90% power to detect a difference of 35% in one arm versus 45% in the other arm. For this you would need 523 participants in each arm.
What if we said, 35% in one arm versus 50% in the other? 240 in each arm. OK, how about 35% in one arm versus 55% in the other: 138 in each arm.
With 40 participants in each arm (close to what they actually have), you would have 80% power to detect a difference of 35% in one arm versus 70% in the other.
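The figures in this comment can be roughly reproduced with the standard normal-approximation sample-size formula for comparing two proportions. This is a sketch under that assumption, not necessarily the commenter’s exact method (a continuity correction would push the numbers somewhat higher, e.g. toward the quoted 523):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.90):
    """Sample size per arm for a two-sided two-proportion z-test,
    normal approximation, no continuity correction."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.35, 0.45))              # ~503 per arm (523 with continuity correction)
print(n_per_arm(0.35, 0.55))              # ~128 per arm
print(n_per_arm(0.35, 0.70, power=0.80))  # ~31 per arm, so 40 per arm suffices
```

The point survives the choice of correction: with roughly 40 participants per arm, only a very large difference in responder proportions (on the order of 35% vs. 70%) is detectable with conventional power.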
When I saw this reported in the Guardian I couldn’t (and still can’t) see what the big deal is anyway. Whether what the participants were told about “significant mind-body healing processes” should be regarded as a deception or not, it sounds like a “positive consultation” to me:
http://www.bmj.com/content/294/6581/1200 (I read about this in the Placebo chapter of Skrabanek and McCormick’s Follies and Fallacies in Medicine.)
Furthermore, even if they’d been told nothing other than the first bit (about inert pills), why would anyone be surprised if such a non-deceptive placebo pill-taking regime elicited a detectable placebo response compared to no treatment? Has it really always been assumed that some deception is necessary? If so, why?
It doesn’t strike me as a terrible study, but I also saw the “mind-body interaction” bit in the description and thought “Oh hell, really? You just told them that it wouldn’t be inert?”
The more interesting procedure would have been to give them a sugar pill, tell them it was a sugar pill and that it wouldn’t do jack, and leave it at that. At least that way, we’d get some measure out of seeing what sort of subjective benefits can be gained simply from the ritualistic process of taking a pill.
So, in order for an honest doctor to prescribe a placebo treatment, she/he still has to appeal to a sense of woo with the “mind-body” stuff. My question: Does using such a meaningless phrase equate with dishonesty? I guess it still does.
TBM, when a doctor prescribes psychotherapy, what non-placebo treatment is being prescribed? What about psychotherapy makes it a non-placebo?
Orac… I have a question for you. Do you make a distinction between mind-body and brain-body?
purenoiz, there is no mind, there is only body. What people consider to be the “mind”, is an emergent property of the brain and its physiology.
There is no woo-woo magic that can be called the mind, there is only the brain and physiology, all (and only) mediated through chemistry and physics.
Or rather there is no datum (note singular) that there is any non-material mind caused by or associated with woo-woo magic.
There is plenty we don’t know, but nothing so far seems to need woo-woo magic to explain, just nitric oxide ;).
Daedalus, nice of you to read my question; however, your ego forgot you are not Orac. Also, that wasn’t really what I wanted to know. I already know there isn’t an organ called the mind – rather, there is the distinction of the mind versus the brain: one psychological, one physical. It was a personal question, not a factual question. In fact, it was a question to open up and explore some concepts in neuroscience around how the brain maps the body and perceives its environment, and then how we psychologically interpret the data our sensory organs present to us. I’m not the woo-magic believer you think I am, so quit being an assumptive …, and ask questions to find out the inquiry before you answer a question that isn’t being asked.
@purenoiz Or you could try phrasing your questions in a clearer, less ambiguous manner. Like, you know, someone interested in having a discussion.
I agree with daedalus2u “vicki, yes I didn’t mean to exclude things like exercise, but exercise is usually not considered to be a “therapy”, any more than breathing clean air is considered to be a therapy. It is something that everyone needs to stay healthy.”
Yes, exercise is good for all of us; no argument there. But when my doctor, or the physical therapist she refers me to, says that I, specifically, need to do these exercises to treat this known problem, that is therapy.
“Get X amount of exercise that raises your heart rate” is general advice: “do 15-20 reps of each of these four things for your shoulder every day” crosses into therapy, I think.
Alternatively, you would need to classify “nutritive” interventions under placebo rather than therapy, because “eat a healthy, varied diet” is something everyone should do.
I don’t think those changes count as placebos either. Sometimes people need to make up for a specific deficiency, and other times they need to stop eating things that are fine for most of us, because of allergies or sensitivities.
A nutritious diet to promote general good health is not a placebo, but eating specific foods to treat specific disorders can be. Eating tiger reproductive organs to correct erectile dysfunction is a placebo. Eating organic food (as opposed to equally nutritious non-organic food) is a placebo. Eating something containing specific vitamins to prevent/cure a deficiency is not.
Eating food that is aesthetically prepared, and not just pre-formed pellets or bars probably has placebo effects. But the exercise one gets from needing to chew one food over another is not a placebo effect.
My conceptualization is that placebo effects are mediated through communication, either external or internal and not through direct pharmacological or physical effects. Pavlov’s bell ringing was a placebo that caused his dogs to salivate.
If people are conditioned to believe that acupuncture works, it may “work” for them, but it is still a placebo.
In the referenced study, the patients were (previously) conditioned to believe that getting pills from a health care provider was going to help them. It did help them, but it is still a placebo.
@Youngskeptic, and nobody else please?
What is unclear about me asking a direct question to a SPECIFIC somebody? Is that not how one begins to engage in a conversation with a specific somebody?
There is an issue of degree involved. No sensible practitioner would deny that a patient’s changing of their mental state can reduce their discomfort. However, many CAM practitioners go far beyond this sensible claim (which amounts to “mental states can ameliorate what mental states caused”) and talk about ‘harnessing the placebo effect’ and suchlike. They are suggesting that mental states can cure what things other than mental states caused, and that is going beyond the evidence.
Many CAM advocates are sure that if the evidence was collected it would actually support this idea, and so they do studies like this one which purport to show the ‘power’ of the placebo effect. However, it is important to examine closely the results they claim to achieve. In this case they claimed that patients who knew they were receiving placebo and presumably understood the implications of that still showed improvement. When we look closer we find that the truth is closer to “patients who knew they were receiving placebo and were specifically misled by the experimenters as to the implications of that.”
“The objectives of this study were to assess the feasibility of recruiting IBS patients to participate in a trial of open-label placebo and to assess whether an open-label placebo pill with a persuasive rationale was more effective than no-treatment in relieving symptoms of IBS in the setting of matched patient-provider interactions.”
“Persuasive rationale”. I can imagine George Orwell wincing a little bit at that one.
You will be given “placebo pills made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes” – or nothing.
Non-deceptive?
Compare: you will be given inert placebo pills – sugar pills – which are reported to provide some help with IBS symptoms in clinical trials by patients who do not know whether they are receiving an active or an inert pill.
Non-persuasive, perhaps? Deceptive, definitely not.
@ceekay 19
The study which you mention does not justify the authors’ claim for the given statement. The study relates to a clinical trial of acupuncture against sham and no treatment for IBS. No sugar pills involved.
Re:42 Clumsy.
The study which you mention does not justify the authors’ statement.
That’s better.
Looking at the protocol for the trial:
‘In this study, we will test whether placebo pills that are truthfully described “as inert substances (placebos), like sugar pills, that have been shown in rigorous clinical tests to somehow produce significant self-healing processes in IBS patients” given in a context of positive expectation and reassurance can produce clinical improvement in patients with IBS.’
The actual trial said rigorous clinical testing showed placebo pills “…produce significant mind-body self-healing processes.” “Somehow” turns into “mind-body” between plan and study. Since “mind-body” means “we don’t have a clue how”, it would have been more truthful to say so, rather than to waffle. In fact, for IBS, the evidence shows that placebos are frequently associated with self-reported symptomatic benefits.
More neutral (truthful?) would have been to describe placebo pills as “inert sugar pills that sometimes seem to help patients in clinical tests who are unaware that the pills have no active substance.” Most neutral would be simply to have informed patients that placebo pills contain no active substance – full stop.
Also, since the stated intention was to find out whether placebo pills work in a context of positive expectation and reassurance, a control group ought to have received placebo pills in the absence of that context. Thus, for example, some patients could have been given strictly neutral information about the pills, without mention of any benefit and instructions to self-administer and then left to get on with it. Self-healer, heal thyself, indeed.
Also, the initial telephone screening in which potential participants were told they would receive “pills which… had been shown to have self-healing properties” was a wide open self-selection invitation to woofers.
You’ve probably heard that part of researching the effectiveness of a new drug is separating the influence of patients’ emotions and beliefs – the so-called placebo effect – from what the candidate medication actually does in the body.