
Still more oversold placebo research from our old friend Ted Kaptchuk

Here we go again. In the wake of study after study that fails to find activity of various “complementary and alternative medicine” (CAM) modalities beyond that of placebo, the campaign to “rebrand” CAM as “working” through the “power of placebo” continues apace, much as the earlier campaign to “rebrand” various needle-based medical modalities as “acupuncture” did. Personally, I’ve argued that this new focus on placebo effects as the “mechanism” through which CAM “works” is in reality more a manifestation of the common fantasy that wishing makes it so. I’ve also argued that this emphasis on placebo medicine, which CAM advocates are trying to sell as the mechanism by which CAM “works,” is nothing more than the old medical paternalism rearing its ugly head in a new form, disguised as “empowering” patients.

None of this, of course, can stop everybody’s favorite apologist for “complementary and alternative medicine” (CAM) and, in particular, for using placebo effects therapeutically, from continuing to do what he does with a study that’s been widely reported in the news and even featured on Science Friday last week. It’s a study in Science Translational Medicine in which our old friend Ted Kaptchuk teamed up with Rami Burstein, an investigator interested in migraines, and found that what patients believe about a pill can have a strong effect on how well it relieves their migraines. As is the case with most studies in which Kaptchuk is involved, it’s mildly interesting from a scientific standpoint. Unlike with most of his studies, this time Kaptchuk seems a bit more able to tone down the hyperbole, which is a good thing. Unfortunately, as much as it’s being touted by the press as providing new information on placebo effects, the study really doesn’t tell us much that is new.

But first, in case you don’t remember who Ted Kaptchuk is, let’s take a moment to remind you, given that it’s been a while since he’s appeared as a topic on this blog. He’s the Director of the Program in Placebo Studies at Beth Israel Deaconess Medical Center and a professor of medicine at Harvard Medical School. His work on placebo effects has been a frequent topic right here on this very blog and has been a mixed bag. On the one hand, Kaptchuk sometimes does interesting work, but on the other hand he can’t seem to help himself when it comes to overselling it. For example, two years ago, Kaptchuk’s group published a study in which they evaluated subjective placebo effects and objective physiologic effects of “sham acupuncture” in asthma patients. The observations were actually intriguing, as I pointed out. Basically, Kaptchuk compared asthma patients receiving “placebo acupuncture” with patients receiving a real albuterol inhaler. What he found was that placebo effects from the sham acupuncture could make patients feel as though they were less short of breath, even though pulmonary function tests revealed that their lung function had not improved, a result that was not unexpected. It was also, as Peter Lipson described, a finding that showed how dangerous it could be to rely on placebo effects to treat asthma: doing so could easily kill patients by lulling them into a false sense of security, leaving them not feeling short of breath when, from a physiologic standpoint, they are on the knife’s edge of respiratory failure. Meanwhile, advocates of using placebo effects intentionally in medicine spun this study as great evidence that placeboes could be useful in medicine, when in fact it suggested that relying on placebo effects to alter physiology could be very dangerous.

The other message that Kaptchuk has been promoting is that it is possible to have “placeboes without deception.” One of the greatest difficulties a physician faces in intentionally using placebo effects, assuming they are useful at all, is that, under our current understanding of how placeboes work, deception is necessary. The patient has to be convinced that the placebo they are getting will help them, which requires the physician or health care provider, in essence, to lie. Indeed, Kaptchuk did a study a few years back testing placeboes in irritable bowel syndrome (IBS) in which he concluded that one could have placebo effects without deception. As I and others pointed out, his study showed nothing of the sort, given that the power of suggestion was clearly used: subjects were told that the placebo pills were capable of producing “powerful mind-body effects.” Yet Kaptchuk’s old spin persists, even in an NPR story about Kaptchuk and Burstein’s “hot off the presses” migraine study:

The group has shown: that placebos rival the effect of active medication in patients with asthma; that even when patients know they’re taking a placebo, they can get relief from the cramps, bloating and diarrhea of irritable bowel syndrome; and that those subliminal suggestions can activate patients’ placebo response.

Placeboes had no physiologic effect in the asthma study, and, I forgot to mention, in the IBS study the effects observed were actually very small and were not evidence of “placeboes without deception.”

So what about the study itself? Actually, like the asthma study, it’s a pretty well-designed study. Unlike the asthma study, whose results were often exaggerated and misrepresented to mean that placebo effects were as effective as real asthma medicine, Kaptchuk appears not to be willfully misinterpreting it, although he can’t always resist letting some of his old exaggerations slip into his discussion and interviews, as you will see. First, however, let’s look at the study.

Basically, it examined the effect of placebo or an active anti-migraine drug, Maxalt (rizatriptan), on the migraine headaches of 66 subjects. Each participant was asked to document seven migraine attacks: one untreated attack at the beginning of the study and six subsequent attacks. These six attacks were randomly assigned to be treated with 10 mg of Maxalt or placebo, each pill labeled as “placebo,” “Maxalt,” or “Maxalt or placebo.” Patients were asked to record one pain score 30 minutes after onset of the headache as a baseline and then to take the study pill, after which they were to record a second pain score 2.5 hours after onset. They were also provided with “rescue medications” that they could use as needed at the 2.5 hour time point; basically, if the headache hadn’t been adequately relieved by then, they could take the rescue medication.

One of the strengths of this study is that there really wasn’t much interaction with the physicians running it, thus minimizing the effects of personal interaction with health care providers after the first visit, when subjects were recruited. This design yielded six conditions, three types of labeling crossed with two treatments, as shown in the image below:

[Figure 1 from the paper: the six labeling/treatment conditions.]

The idea is that there were three information conditions: “negative information” (placebo labeling), “neutral” or uncertain information (the label says the pill could be Maxalt or placebo), and “positive information” (Maxalt labeling). Each of these labels could be true or false; i.e., the “placebo” envelope could contain either placebo or actual Maxalt, and likewise the “Maxalt” envelope could contain either placebo or actual Maxalt. Two main outcomes were measured: (1) the decrease in pain score from 30 minutes after onset to the 2.5 hour mark (i.e., two hours after taking the drug or placebo); and (2) whether or not the subject was pain-free at the 2.5 hour mark.
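To make the factorial structure concrete, here is a minimal sketch of the labeling-by-content scheme described above. It is purely illustrative; the variable names and the print loop are my own, not anything from the paper’s code or data.

```python
from itertools import product

# Illustrative sketch of the study's 2 x 3 structure: the label on the envelope
# crossed with what the envelope actually contains (names are mine, not the paper's).
labels = ["placebo", "Maxalt or placebo", "Maxalt"]   # information printed on the envelope
contents = ["placebo", "Maxalt (rizatriptan 10 mg)"]  # what the envelope actually contains

for label, content in product(labels, contents):
    print(f"labeled '{label}' -> actually contains {content}")

# Two endpoints were then recorded for each treated attack:
#   1) change in pain score from 30 minutes after onset (baseline) to 2.5 hours after onset
#   2) whether the subject was pain-free at 2.5 hours
```

The point of the enumeration is simply that each label can sit on top of either content, which is what lets the authors try to separate “what the pill is” from “what the patient is told.”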

However, contrary to much of the later discussion about how “powerful” placebo effects alone were even under “truthful” conditions, there was a bit of priming going on in the study materials, as published in the supplemental data:

Scripted Information Read to Participants. “You are invited to take part in a research study for the purpose of understanding the effects of repeated administration of Maxalt for the treatment of acute migraine attacks, and why placebo rates are so high in migraine therapy. Our first goal is to understand why Maxalt makes you pain-free in one attack but not in another. Our second goal is to understand why placebo pills can also make you pain-free. Our third goal is to understand why Maxalt works differently when given in double-blind study vs. real-life experience when you take it at home. These goals are scientifically important for developing new therapies for migraine.

I repeat for emphasis: “Our second goal is to understand why placebo pills can also make you pain-free.” Not to understand why placebo pills might be able to make you pain-free or could possibly make you pain-free. “Can make you pain-free.” To be fair, this isn’t nearly as blatant as the IBS study, in which subjects were told in the study information that placeboes could produce “powerful mind-body effects.” Also, mentioning that “placebo rates are so high in migraine therapy” primes the subjects to expect placeboes to work.

With this suggestion, it’s not entirely surprising that placebo effects were fairly robust. The results, expressed as changes in pain scores, can basically be summarized as follows:

  • No treatment: 15 percent increase in pain.
  • Known placebo: 26 percent decrease.
  • Placebo labeled Maxalt: 25 percent decrease.*
  • Maxalt labeled as placebo: 36 percent decrease.*
  • Mystery pill (Maxalt or placebo): 40 percent decrease.
  • Known Maxalt: 40 percent decrease.

Note that there was no statistically significant difference between the placebo labeled as Maxalt and the Maxalt labeled as placebo (hence the asterisks above). This is perhaps the most interesting finding, and it suggests that positive labeling can boost placebo effects (again, not a new finding) and that negative labeling can decrease whatever contribution placebo effects make to the action of the real drug. As for the other groups, the reported differences are a lot less impressive if you look at the graph, complete with error bars:

[Figure from the paper: changes in pain scores by labeling and treatment, with error bars.]

Note the huge overlap between decreases in pain scores in the placebo group regardless of whether the label was “placebo,” “unspecified,” or “positive.” The same is true for decreases in pain scores in the Maxalt group regardless of label. This raises the question of whether the reported differences, albeit statistically significant, are in any way clinically significant.

The second endpoint examined was whether or not the subject was pain-free after 2.5 hours. It’s here that the real differences dwell:

Unlike the primary endpoint, the proportion of participants who were pain-free during the no-treatment condition (0.7%) was not statistically different from when participants took open-label placebo (5.7%). As with the primary endpoint, the proportion of participants pain-free after treatment was not statistically different between Maxalt treatment mislabeled as placebo (14.6%) and placebo treatment mislabeled as Maxalt (7.7%). The resulting therapeutic gain (that is, drug-placebo difference) was 8.8 percentage points under “placebo” labeling [odds ratio (OR), 2.80], 26.6 percentage points under “Maxalt or placebo” labeling (OR, 7.19), and 24.6 percentage points under “Maxalt” labeling (OR, 5.70).

One critical finding here is that Maxalt beat any sort of placebo effect, and not by a little bit, either. For all the Maxalt groups, the percentage of subjects who were pain-free was 25.5%, compared to 6.7% for all the placebo groups. That’s nearly a four-fold difference. Also note that the no-treatment condition was not statistically different from the open-label placebo condition.
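For those who want to check the arithmetic, here is a minimal sketch of how a therapeutic gain and an odds ratio are computed from two proportions. The function names are mine, the proportions are the ones quoted above, and the paper’s own odds ratios come from its statistical model, so a crude hand calculation will only approximately reproduce them.

```python
def therapeutic_gain(p_drug: float, p_placebo: float) -> float:
    """Drug-placebo difference in the proportion of pain-free subjects, in percentage points."""
    return 100 * (p_drug - p_placebo)

def odds_ratio(p_drug: float, p_placebo: float) -> float:
    """Odds of being pain-free on drug divided by odds of being pain-free on placebo."""
    return (p_drug / (1 - p_drug)) / (p_placebo / (1 - p_placebo))

# Proportions pain-free at 2.5 hours under "placebo" labeling, as quoted above:
# Maxalt mislabeled as placebo: 14.6%; placebo truthfully labeled as placebo: 5.7%
print(round(therapeutic_gain(0.146, 0.057), 1))  # ~8.9 percentage points (paper reports 8.8)
print(round(odds_ratio(0.146, 0.057), 2))        # ~2.83 (paper's model-based OR: 2.80)
```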

The error bars, however, remain wide:

[Figure from the paper (Kam-Hansen et al.): proportion of subjects pain-free at 2.5 hours, with error bars.]

So what does this all mean? In the discussion, Burstein and Kaptchuk try to sell the reader on some old ideas about placebo effects, along with some CAM-friendly ones:

By manipulating the information provided to patients, our primary analysis showed that the magnitude of headache relief induced by Maxalt (10-mg rizatriptan), as well as that of placebo, was lowest when pills were labeled as placebo, and higher when pills had uncertain labeling or were labeled as active medication. Two other findings were that (i) placebo treatment mislabeled as 10-mg Maxalt reduced headache severity as effectively as did Maxalt mislabeled as placebo, and (ii) open-label placebo treatment was superior to no treatment. We conclude that raising the likelihood of receiving active treatment for pain relief significantly contributed to increased success rate of triptan therapy for migraine, that open-label placebo treatment may have an important therapeutic benefit, and that placebo and medication effects can be modulated by expectancies.

Although Maxalt was superior to placebo under each type of information, we were surprised that the efficacy of Maxalt mislabeled as placebo was not significantly better than the efficacy of placebo mislabeled as Maxalt. We were also surprised to find that open-label placebo treatment induced pain relief as compared with the worsening of pain during the untreated attack. A therapeutic benefit of open-label placebo versus no treatment was also recently reported for patients with irritable bowel syndrome in a randomized controlled study (8) and in a pilot study in depression (9).

Methinks the authors doth protest too much surprise at seeing placebo effects in the open-label placebo group, given that the study materials suggested that subjects would experience pain relief, or even become pain-free, from placebo effects, and given that subjects were told that placebo response rates in migraine therapy are high. Mentioning Kaptchuk’s previous IBS study, in which he tried to argue that placeboes without deception were possible, is just another way of insinuating into this study the idea that placebo effects without deception are possible, even though superficially the authors appear to be much more straightforward in discussing their results. On the other hand, when it comes to the endpoint that people with migraines really care about, being pain-free, open-label placebo was no different from no treatment at all, a point that gets lost in all the discussions of “the power of positive thinking.”

It’s also rather frustrating that the authors acknowledge that many placebo researchers would consider the information provided (whether patients were told they were receiving placebo, real drug, or a 50:50 chance of real drug) to be a manipulation of expectancy, i.e., the expectation of benefit. However, they made no effort to assess expectancy because they were afraid of causing patients to question the accuracy of the information on the envelopes. The lack of an assessment of expectancy greatly decreases the utility of this study and the ability to generalize from it a potential mechanism to explain the results. Worse, no assessment of blinding was performed, because the investigators were worried that this would provoke suspicion in an in-study design. Quite frankly, this is not a convincing excuse. Assessment of blinding is such a routine and key part of randomized clinical trials that not including it in the trial design is bound to leave an opening for suspicion that the subjects might have been able to guess whether the envelope containing the pill they used for each migraine attack contained real drug or placebo.

In the end, however, this study really doesn’t tell us much that is new or that we didn’t already know. Its results are nearly completely predictable if you know the experimental design, namely that telling a patient he is getting medicine enhances placebo effects and that telling a patient he is getting a placebo can decrease the perceived effectiveness of a real medicine. For a subjective finding, a significant part of the drug effect appears to be placebo. We knew that, which is why I found it rather odd that Kaptchuk exulted over how “half the effect” of Maxalt is due to placebo effects. I also find it rather odd how, in his interview on Science Friday, Kaptchuk emphasized how subjects were given different expectations; yet he didn’t actually assess expectancy. He also goes on about how most studies don’t include a no-treatment control, as if that were a major observation. Of course, the reason that most studies examining subjective outcomes don’t include a no-treatment control anymore is because scientists know from previous studies that placebo effects can be significant confounders.

So the observation that there was a difference between the no-treatment control and the placebo arm was not unexpected for the main endpoint, decrease in pain, because that’s a subjective endpoint. Even less unexpected is the observation that there was no statistically significant difference in the chance of being pain-free at the 2.5 hour time point between the no-treatment control and the open-label placebo group. This is entirely consistent with what I’ve been arguing for a long time: the more “objective” or “hard” the endpoint (and, although there is still a subjective component, being pain-free is a harder endpoint than stating a pain score), the weaker any placebo effects observed are, to the point that the very “hardest” endpoints, such as tumor regression or survival, are not affected by placebo at all. None of this stops Kaptchuk from emphasizing the decrease in pain scores in the open-label placebo group in his Science Friday interview while not mentioning that no more people were pain-free in that group than in the no-treatment group.

I think we get a glimpse into Kaptchuk’s mind in the part of the Science Friday interview when he says:

I think that in the same way a physician has to calculate what pharmaceutical I have to give, how many milligrams he has to give of that drug, he or she might have to calculate the exact right words to accompany the pharmaceutical. In this study the words actually double the effect or cuts the effect of the drug in half. What exactly those words are, I think that’s more research. Our experiment was a proof of concept to see if words work. Then we now have to figure out what’s an ethical way to provide a positive message that’s true, accurate, and is not an exaggeration.

Basically, Kaptchuk appears to be saying that we have to find the right magical words to invoke the mystical placebo effect. Seriously. This experiment had only three sets of words, and the two that mattered either affirmed that drug was present or stated that placebo was present. It’s not fancy.

Certainly, Kaptchuk does nothing to discourage the sort of credulous headlines this study has been generating, all of which miss the point. It’s not “positive thinking.” It might be expectancy, but we don’t know that, because expectancy wasn’t assessed in this study. The two are different.

In the end, this isn’t really a bad study. It’s just that, even though he’s doing a lot better than he used to, Kaptchuk still can’t seem to resist reading a bit too much into it. He’s not as blatant about it as he used to be, but he’s still implying that placebo effects without deception are possible and desirable. The study really doesn’t tell us much that we don’t already know: placebo effects can enhance the subjective effects of pharmaceuticals, and expectancy can affect the perception of how well drugs work on subjective outcomes. Understanding placebo effects is important. Overselling them does not help our understanding.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35 year old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

42 replies on “Still more oversold placebo research from our old friend Ted Kaptchuk”

Ah yes, I remember Kaptchuk’s IBS study, and raising my eyebrows at the time at the “powerful mind-body effects” bit. I seem to remember that the paper promised to publish research subsequently looking in more detail at what the patients understood by that.

Don’t suppose you know if they ever did that? Would be fascinated to read it.

“Note that there was no statistically significant difference between the placebo labeled as Maxalt and the Maxalt labeled as placebo.” Yet the table gives:
Placebo labeled Maxalt: 25 percent decrease.*
Maxalt labeled as placebo: 36 percent decrease.*

Is the 4th word “no” incorrect?

I know one thing for sure from this study – I don’t want no stinking placebo

“Basically, Kaptchuk appears to be saying that we have to find the right magical words to invoke the mystical placebo effect.”
Agreed.

bobh@2: The study involved only 66 subjects total over all categories. Poisson statistical errors give you a standard error of sqrt(N) with N subjects, and the 95% confidence level is about twice that. With N = 66, you would need a difference of about 16, or roughly 24%, to reach the 95% significance threshold. And since those 66 subjects were split across the conditions, the N behind any single comparison is smaller, so the fractional error is even larger.

IOW, this study was underpowered if the purpose is to make any claims about whether placebo-labeled-as-real-pill gives a significantly different result from real-pill-labeled-as-placebo.
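To make that back-of-the-envelope reasoning concrete, here is a minimal sketch of the arithmetic being described. It is a rough Poisson-style estimate, not a formal power calculation, and it is my own illustration rather than anything from the commenter or the paper.

```python
import math

# Rough Poisson-style estimate along the lines described above (illustration only).
N = 66                            # total subjects enrolled in the study
std_err = math.sqrt(N)            # ~8.1 "subjects" worth of statistical noise
threshold = 2 * std_err           # ~16.2 subjects, roughly the 95% level
print(round(threshold, 1), round(threshold / N, 2))  # ~16.2 subjects, ~0.25 of the sample

# Because the 66 subjects are split across the labeling/treatment conditions,
# the N behind any single comparison is smaller than 66, so the fractional
# uncertainty on that comparison is correspondingly larger.
```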

IIRC, the IRB that oversaw Kaptchuk’s asthma-acupuncture study had some questions regarding deception and debriefing of study subjects after conclusion. The published study did not mention anything about debriefing subjects, which raised some flags about how subjects were handled.

Basically, the issue was this: subjects were told that they would receive either “active” or “placebo” acupuncture, but the study protocol did not include any “active” acupuncture arm. The published study and supplemental materials did not make any mention of whether or not subjects were informed of the deception. Ethically, they should have been informed either at the beginning that there was no active acupuncture arm, or debriefed at the end of the study to let them know that fact.

Perhaps the fallout around that study, combined with criticisms of the “no deception” placebo study, has tempered his tone somewhat.

As I have observed here before, a decade or two ago I was very excited about the potential of placebos, but I have found the harder I look at “the placebo effect”, the less there is to it. Even the meager effects in this study could be attributed to effects on reporting rather than actual effects on the subjects’ subjective experiences. By this I mean that two subjects experiencing the same amount of pain might report that pain differently depending on whether they though they had, or might have had, an active treatment or not.

I’m also skeptical about the clinical use of the power of suggestion in this way, which is about all that remains of “the placebo effect” of any possible use after close scrutiny. It is by no means reliable (not everyone is suggestible, and even those who are, are not always so), and as Orac observed it can mask symptoms.

In circumstances where it is beneficial for a patient to ignore symptoms that have been thoroughly investigated, I think there are better ways of teaching them to do that than giving them a pill (or whatever) with no active ingredients.

I meant: “depending on whether they thought they had, or might have had, an active treatment or not”.

@Krebiozen

Ah, so you’re becoming more of the Mark Crislip school of thought regarding placebo effects; i.e., they don’t exist. It’s all just flaws in observation, reporting, data collection, and study design, with maybe a bit of very minor physiological effect due to contaminants mixed in.

@ Todd W.:

I am probably also of that school.

Woo-meisters who trumpet the placebo effect as a type of treatment usually subscribe to vitalistic beliefs, perhaps thinking that we can harness this liberation of the healing/life essence within, or allow the chi to flow unimpeded due to relaxation or confidence.
But they also believe in magic, distance healing and chakras.

Todd and Denice,
Yes, I’m thinking more and more like Dr. Crislip these days. I think most of what is described as “the placebo effect” in most placebo-controlled RCTs is indeed down to regression to the mean, comparing endpoints to baseline instead of to no treatment and other non-specific effects. Much of the rest of it is due to suggestion – having experimented with hypnosis quite extensively when I was younger, I am well aware of how easily our subjective experience can be affected. Of course our subjective experience is important, and it can affect the level of stress we experience, and the way we behave, which can both affect our health, but conflating this with medicine seems to me to be a mistake.

I do find it interesting how a sort of mythology has grown up around “the placebo effect”, with even some luminaries such as Drs. Offit and Goldacre apparently subscribing to some of it (Goldacre’s spiel on placebos is very entertaining). I have mentioned before this trial of real and sham knee joint debridement, which is often cited as evidence that sham knee surgery (making an incision and then simply sewing it back up) is as effective as lavage (washing out the joint) and debridement (basically scraping out the joint), thus demonstrating the awesome power of the placebo. However:

The authors found that all three treatment groups fared equally: each reported subjective symptomatic relief, but no objective improvement in function was noted in any of the groups.

(My emphasis)

The same appears to be true of angioplasty for angina, with sham surgery affording the same long-term symptomatic relief as real surgery (PMID: 15950570 for example), but neither offering long term improvements in hard endpoints such as survival, which are perhaps more important.

Yet another example is the quack cancer drug Krebiozen (hence my ’nym), which allegedly caused a cancer patient’s complete remission, twice, with the patient relapsing each time he heard that the drug had failed in other patients. I find it telling that those who use this as an example of how powerful “the placebo effect” is are still dependent on this anecdote from the 1950s. Surely some other patients would have experienced complete remission from the placebo effect since then, if this were possible. My sad conclusion is that it isn’t possible, though there is a host of people who would berate me for having such a negative attitude in saying so.

The apparently mangled links do work; there are two, one at the beginning and one at the end, with a non-functioning pseudolink between them.

Another thought on those knee and angioplasty studies: imagine two patients, one who has had sham surgery and another who hasn’t. They both have the same physiological symptoms, but the one who has had surgery reports lessened symptoms as compared to the one who has not. There may be no difference at all in their subjective experience of the symptoms, but as I touched on above, the “treated” patient may feel that his/her symptoms must have been lessened by the treatment, and compares them to an exaggerated baseline of what s/he misremembers of his/her symptoms before treatment.

In this respect I wonder if the term “placebo” is more insightful than I had previously realized. My Latin is rusty, but I believe it literally means “I shall please”, usually taken to refer to an inactive substance given to please the patient, but perhaps the symptomatic relief reported is more about pleasing the medical practitioner than the patient – “yes, that’s much better, thanks, Doctor Munchausen”.

Often, when my 5 year old says he’s not feeling well (sore tummy, trouble sleeping, cough and sore throat) I give him a tic tac and tell him that it is a really strong medicine. (Is this wrong of me?) It’s amazing how fast the symptoms seem to disappear. Of course the tic tac is not doing anything. His feeling better is part placebo effect and part just getting what he wanted, which is my attention, concern and a tiny sweet morsel. Knowingly treating patients with a placebo is treating them like a 5 year old.

@Bend I shall have to remember the ‘tic tac treatment’ when my own small child gets slightly bigger. I wonder if the same thing would work on hypochondriac husbands? Probably not; he is somewhat more observant than the two-year-old.
I too find it unethical to give patients sugar pills and tell them they will get better. The lack of objective benefit is crucial: even if the patient ‘feels better,’ they really are not when you measure their physiology. I personally would feel deeply betrayed by a medical practitioner who insulted my intelligence in this way. I am perfectly happy with them telling me honestly that nothing they can do will make me feel better and that I should take what comfort measures I can. Almost all of the treatments for a cold, for example, are elaborate comfort measures. (Not that I don’t enjoy copious amounts of hot tea with honey while ill.) I have terminated my relationship with several medical providers for either lying to me or treating me as if I did not have the intelligence to grasp what they were saying.

I am surprised at how angry this study makes me. Exploiting the placebo effect and exploring its possibilities as treatment, to me, seems to dismiss the patient and their real discomfort and physical symptoms as something that they should be able to just “think away.”

It might be my own experience with chronic pain that makes this so upsetting to me. I have read so many articles that discuss educating the patient to believe their pain isn’t a problem once it is determined any other treatment options beyond long-term pain medication have been exhausted.

It just seems to me that there is an interest in convincing the patient that they aren’t really sick. Like Bend observes – it treats the patient like they’re a child with a skinned knee.

@Mrs Woo #15,
That is my concern when the alt med/CAM crowd talk up the placebo effect as they are doing now. Not so much that the patients aren’t sick, but in effect it gets tied into patient-blaming, IMO – you just haven’t thought hard enough/believed/wanted to heal yourself.
I think that is the bottom line in placebo and CAM – it ties nicely into the various mind/body/energy nonsenses and gives the ‘practitioner’ another out for their particular treatment modality not working. Again.

Eesh, the BBC World Service, which is overnight filler for a great many U.S. public-radio outlets, is leading with Kaptchuk on “Health Check.”

I’ll need to read the paper to see if he addressed this, but it is probably important to the study design: triptans often have unpleasant side effects. For any patient who had this experience, it would be completely obvious whether a pill was Maxalt or a placebo.

The study did not address this. I discussed this, in fact, in the part where I pointed out that they did not assess blinding.

I give him a tic tac and tell him that it is a really strong medicine. (Is this wrong of me?) It’s amazing how fast the symptoms seem to disappear.

There is a category of falsehoods which Terry Pratchett calls “lies-to-children”, and I think what you’re doing is arguably in that category. It would be wrong to treat an adult, or a normal adolescent, like that. But for a five-year-old, you have to simplify things a bit.

Because of the placebo effect, it is often necessary in clinical trials to give some of the subjects a placebo. The people enrolling subjects in the study are supposed to (at least in the US) inform the subjects that they may be getting the new experimental drug, or they may be getting a placebo, and the subjects are supposed to sign a form acknowledging this, as well as other foreseeable risks from the treatment. In that case, you’re not treating the patient like a five-year-old. But if you’re prescribing homeopathic remedies, for instance, you are.

One problem, which has not been addressed, is that placebo, like hypnosis, has an effect only in some individuals, who are frequently fond of alternative medicine.
No, I didn’t say morons.

Kiiri: Almost all of the treatments for a cold, for example, are elaborate comfort measures.

I think that’s true for the stomach flu and mild cases of influenza. I’ve been sucking down copious amounts of hot water, lemon and honey in an attempt to deal with a particularly annoying head cold. I don’t think there’s anything wrong with placebos if you’re dealing with a mild illness.
But, frankly, if I think something’s bad enough that I have to go to the doctor, I want them to give me something that’ll work and for them to explain to me how it’ll work.

Daniel:

Perhaps a more diplomatic name for them would be “individuals predisposed to see success in alternative medicine”?

MO’B @24: Nothing, as long as it’s an appropriate time and place. If you’re looking for a hookup and the barkeep is making the last call for alcohol, beer goggles might be a good thing for you. If your wife/SO is with you, not so much. Likewise, a placebo might be appropriate for a cold or a skinned knee, but if there is a serious underlying medical condition, I’d prefer a real treatment.

Eric Lund, Not that it’s needed, since my SO has no perceivable faults or defects, but beer goggles still make her the most beautiful woman in the room. They also make me devilishly suave and handsome, which is a trick considering the materials they have to work with.

So, falsely telling migraine sufferers that they were receiving a placebo resulted in a statistically significant benefit compared with no treatment. However, when told the truth about receiving a placebo there was no significant difference from receiving no treatment.

How anyone can claim that this result shows that open-label placebo treatment may have important therapeutic benefits is beyond comprehension.

@ M O’B

If your SO “has no perceivable faults or defects” (which I am sure is absolutely true) then, by definition, she would be the “most beautiful woman in the room” (unless my SO were also in the room), so beer goggles are unnecessary.

By an amazing coincidence, beer also makes me devilishly suave and handsome, as well as making me brilliantly witty.

(unless my SO were also in the room)

At any moment now, I’m expecting MO’B to have his friend talk to DrBollocks’ friend to arrange terms (location, time, etc.). I hear there’s a nice secluded grove with a tranquil stream running by. Always a pleasant scene for such doings.

In this culture, “take this pill, you will get better” is very ingrained. One question is whether being told that a pill is a placebo is enough to counteract the ingrained sense that the mere act of being given a pill “will make you better.” That is, being given a disclosed placebo in a clinical context is still a deception.

@ davep 32
Being given a disclosed placebo is certainly a deception if the patient is told that placebos can give clinical benefits. Placebos are gratefully received in clinical trial contexts because the patient knows that the pills may not be placebos; there exists a reasonable hope of receiving a treatment that may be of clinical benefit. It is not too surprising that in clinical trials placebo groups report benefits that no-treatment control groups do not. If there is any evidence to suggest that those benefits are clinically significant, I would love to see it.

As for this trial the evidence is unequivocal: disclosed placebos fare no better than no treatment.

Selling the placebo effect is like selling paperweights. Except the paperweight in this case is a smartphone they smash right in front of you after purchase to enhance its paperweight properties.

The high cost is part of the user-experience and will enhance satisfaction of the paperweight qualities of the smashed smartphone through user expectations.

I worry about the “best evidence” at present being the RCT: double-blind, long-term, placebo-controlled, multicenter, intention-to-treat studies. I know it is not perfect, but it is the best we have, so if we question one of those components, i.e., “placebo-controlled,” what does this say about the evidence-based results used to promote new pharmaceuticals?

Hello, here I am back again to show you this article from the BBC (http://www.bbc.com/news/health-26954482), which demonstrates why some people are sceptical about the research carried out by drug companies, who want to sell their products even if they are not efficacious and, in fact, can do more harm than good. Putting profit before health can mean that the results of research are not always published. Isn’t it time that independent research was carried out before new drugs are accepted, as it seems we can’t trust drug companies to be totally honest, can we?

@ sablonneuse

*sigh*
The article you are posting under dates from January. You are flirting with necromancy there.

Now, your premise – that there is a strong temptation for people who want to sell a product to hide or underplay the negatives of this product – is indeed grounded in reality.

However, two points:
1 – It’s not just drug companies. It’s not just corporations, for that matter. Just about any human activity whose success is measured by its outcome, be it money or fame or whatever, will have cheaters cutting corners.
2 – This thread is about products sold by proponents of alt-med, a bunch of people who are selling products with accountability on par with, or even worse than, that of the typical drug company. Research on efficacy or toxicity? You will be lucky if a study was even started. Unless you include alt-med purveyors among “drug companies” (which they actually are), you may come out sounding both off topic and as if you were trying to give them a lame excuse along the lines of “yeah, they are cheating, but so are those ones over there too”.

OK, three points:
3 – There is a form of “independent” research, for a limited value of independence*: it’s called academic research. Please contact your local député and remind him/her how important universities and research agencies are for our technological future. French politics tends to be really fuzzy about this.

*No-one is really independent. In order to know anything about a specific topic, you have to learn about that topic. To have a deep understanding of it, you have to work in the field and become part of the community. You cannot do either and come out without biases. And money is not the only currency with which to buy loyalty.
How do you pay “independent” researchers and keep them independent? If they depend on someone for their wages, they are not truly independent.
Remember the “batailles d’experts” (battles of dueling experts) in many court trials. It always ends up with people arguing about which of the two experts really knows what he is talking about.

4th and last point, I promise
4 – Re: non-published research, the US government recently decided that drug companies should register all of the trials they start in a public database, and anyone can ask about the results (or at least check whether those results were published).
That should help a bit. I actually don’t understand why it took so long to get this, or why we Europeans don’t do the same.
(maybe we do. I’m not up-to-date)

Um, my point #2 may be wrong; worrying about bad research from drug companies is on topic, so carry on…

