Although I’m a translational researcher, I’m also a surgeon. That was my first and primary training, and only later did I decide to get my PhD during my residency, when the opportunity to do so with a decent stipend presented itself. From my perspective, clinical research in surgery is difficult, arguably more difficult than it is for other medical modalities, at least in some ways. For instance, in surgery it is usually very difficult to do a double-blind placebo-controlled trial. For one thing, doing “sham” surgery on patients in the placebo arm is ethically dicey, and it’s very hard to maintain that essential prerequisite for an ethical clinical trial, clinical equipoise. After all, even “sham” surgery involves subjecting the patient to anesthesia, cutting the skin, and exposing the subject to at least some of the risks of the “real” surgery.
In addition, it’s usually impossible to blind the surgeon, because the surgeon will usually know what he is doing. There are exceptions based on the specific operation being done, but they are uncommon. This problem, which is not entirely unique to surgical clinical trials but is far more common in them, sometimes necessitates designs in which the surgeons who do the surgery do not do the post-op care, an arrangement that most surgeons generally do not like, trained as they are (and correctly so) to believe that a surgeon should take care of his patients postoperatively. Nor do patients, who quite reasonably expect that their surgeon will follow them postoperatively. Then there’s the issue of standardization and the known issue that surgical skill does matter; better technical surgeons tend to produce better results. All of these factors have conspired to make truly double-blind, randomized, sham/placebo-controlled clinical trials in surgery uncommon. But how uncommon are they?
I learned the other day of a recent publication in BMJ that looked at the use of placebo controls in the evaluation of surgery. It’s a systematic review, performed by investigators at the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS) at Oxford, of randomized clinical trials of surgical procedures that actually used a placebo (i.e., sham surgery) control group, and its results are just one more indication of how hard it is to demonstrate that a treatment, in this case a surgical intervention, works.
The authors discuss the issue right in the beginning:
Modern surgery is changing rapidly. Surgical interventions can now be offered to improve function and quality of life not just to save life. The improvement in the safety of surgical procedures and anaesthesia has facilitated this change. The mortality associated with anaesthesia has decreased from between 64 and 100 in 100,000 in the 1940s to between 0.4 and 1 in 100,000 at present. The prevalence of serious adverse events related to surgical interventions has remained relatively constant over the past 10 years, despite an increase in the number of surgical procedures performed each year. The postoperative death rate is between 1.9% and 4% and in most cases is due to the primary disease.
The increase in the applications for surgical procedures has been driven by a greater involvement of technology in surgical procedures. Such technological advances have made many interventions less invasive, more likely to be endoscopic, and less resembling typical open surgery, such as laparotomy. However, these new procedures are often introduced into surgical practice without any formal evaluation of safety and efficacy, such as using randomised clinical trials. This is because, unlike drug products, such verification is currently not mandated by regulatory authorities. Furthermore, there is generally a scarcity of information reported on the surgical learning curves or the iterative development of a new technique. Both existing and innovative surgical practice clearly needs to be evaluated, and any evaluative method should take account of the unique idiosyncrasies and challenges presented by surgical interventions.
And it’s true. I’ve discussed how, for example, the popularity of laparoscopic cholecystectomy, both among surgeons and patients, far outpaced the evidence base that it was safe. Indeed, early on, back in the early 1990s, the rate of a particularly serious complication of cholecystectomy, injury to the common bile duct, was much higher in laparoscopic procedures than in open procedures. Ultimately, that rate came down, and, given the marked advantages of laparoscopic procedures in terms of decreased hospital stay, less pain, and faster return to work, the laparoscopic approach became standard. Moreover, when it comes to surgical procedures, the line between “tinkering” (as in incrementally improving a procedure) and research can often be quite difficult to identify. The authors quite appropriately point out that the outcome of a surgical treatment is the cumulative effect of the three main elements of any treatment, namely the critical surgical element (or, as we surgeons like to put it, “fixing the problem”), placebo effects, and nonspecific effects, and that, as with any medical treatment, surgery is expected to be associated with placebo effects and nonspecific effects.
So the authors examined the literature, adhering to the guidelines for the Cochrane Collaboration. Studies were eligible if they were randomized clinical trials examining the efficacy of a surgical procedure compared to placebo, with surgery defined as any interventional procedure that changes the anatomy and requires a skin incision or use of endoscopy (or, as I like to put it, rearranging a patient’s anatomy with therapeutic intent). Here were the results:
In 39 out of 53 (74%) trials there was improvement in the placebo arm and in 27 (51%) trials the effect of placebo did not differ from that of surgery. In 26 (49%) trials, surgery was superior to placebo but the magnitude of the effect of the surgical intervention over that of the placebo was generally small. Serious adverse events were reported in the placebo arm in 18 trials (34%) and in the surgical arm in 22 trials (41.5%); in four trials authors did not specify in which arm the events occurred. However, in many studies adverse events were unrelated to the intervention or associated with the severity of the condition. The existing placebo controlled trials investigated only less invasive procedures that did not involve laparotomy, thoracotomy, craniotomy, or extensive tissue dissection.
I was actually surprised at how few randomized clinical trials of surgical interventions the authors found. On the one hand, 39 of the 53 studies were published after 2000, suggesting that the rigor of surgical clinical trials has improved over the last 14 years. On the other hand, fewer than half the trials (n=22; 42%) reported an objective primary outcome; that is, an outcome that did not depend upon the self-reporting of the subject. Moreover, the number of patients in each clinical trial tended to be small, ranging from 10 to 298, with a median of 60. Also, no placebo-controlled surgical trials investigating more invasive surgical procedures such as laparotomy, thoracotomy, craniotomy, or extensive tissue dissection were identified. The authors also noted that interventions in the placebo arm tended to be associated with fewer serious adverse events than those in the treatment arm, as investigators tried to minimize risks by withholding part of the intervention; for example, offering only partial burr holes rather than full burr holes, or not administering heparin. Serious adverse events in the placebo group were more likely if the procedure involved exogenous materials or tissue.
The authors point out that randomized, placebo-controlled clinical trials of surgery are rare, but that many of those that exist provide clear evidence against the surgical procedure being studied and clear evidence of a significant placebo effect due to surgery. In many trials, the risk of adverse events was actually lower in the placebo group, and the majority of trials reported improvements in both the placebo group and the experimental group.
As a researcher myself, I wonder how it is possible to “sell” a clinical trial of a surgical intervention versus a placebo/sham intervention. I’ve participated in surgical clinical trials before and found it incredibly difficult to “sell” a clinical trial, even one that I really believe in, to patients when the difference is surgery versus no surgery. I doubt it would be any different with a clinical trial proposing a sham surgery versus a “real” surgery. A BMJ blogger quite nicely describes the problem in the context of a made-up example of a patient with serious gastroesophageal reflux:
Your gastroenterologist, Dr Barrett, tells you about a new procedure that has shown some promise in initial studies. Using a minimally invasive endoscopic procedure, they can put a stitch in your stomach to stop acid from travelling back up the gullet. The hospital is taking part in a trial of the procedure and, lucky you, they’d like to put you forward. You’re anxious about having an endoscopy, but think it will be worth it if it will alleviate your symptoms.
But there’s more. Dr Barrett explains that the study is trying to assess just how effective this new procedure is and that the best way to do that is to compare the results from the procedure to a control. The people who take part in the trial will therefore be randomly allocated into two groups: those that have an endoscopy plus the stitching procedure and those that have the endoscopy but don’t have the stitch. They won’t find out which group they’re in until the trial has been completed.
You’re confused. On the one hand, you understand scientific process and the need for a robust evaluation of the procedure. On the other, you’re not sure if you want to subject yourself to an invasive technique if you might not get the potential benefit of the stitching procedure at the end of it. To complicate things further, Dr Barrett says there may be some benefit attributable to just having the endoscopy alone and that there are risks associated with the stitching procedure.
What should you do?
Indeed. What should you do? What would you do, if you had, for example, severe reflux and were offered this trial? Would you agree to be a subject?
To me, it’s not surprising that there is a placebo effect in surgery. This has been known at least since the 1950s, when one of the earliest placebo-controlled surgical trials was performed for a procedure known as internal mammary artery ligation. This was a commonly performed surgical procedure for angina pectoris whose biologic rationale was that ligating the internal mammary artery would shunt more blood flow towards the coronary arteries. Patients also reported less chest pain after the operation. Unfortunately, the randomized clinical trial, published in 1959, did not bear this out, and its publication led to the rapid extinction of the procedure. Other procedures, unfortunately, were not so easily abandoned. (I’m talking about you, vertebroplasty.) It’s also worth noting that none of the studies in the review included a true untreated control group, making a direct estimate of placebo effects impossible, although the authors explained why they chose not to include such trials.
The authors conclude:
Placebo controlled trials in surgery are as important as they are in medicine, and they are justified in the same way. They are a powerful, feasible way of showing the efficacy of surgical procedures. They are necessary to protect the welfare of present and future patients as well as to conduct proper cost effectiveness analyses. Only then may publicly funded surgical interventions be distributed fairly and justly. Without such studies ineffective treatment may continue unchallenged.
While I tend to agree with this, I really am not quite convinced by the authors’ rather airy dismissal of ethical concerns over such trials. After all, in most medical trials the placebo itself usually doesn’t produce an active risk of injury; sham surgery, on the other hand, produces not only the passive risk of injury from leaving the underlying condition untreated but also an active risk of injury from the part of the surgical procedure that is performed. Still, I do wish that there were more such trials. Blinding isn’t so necessary for procedures with a hard, objective endpoint to measure, such as death or tumor recurrence, but for the large number of surgical procedures designed to alleviate symptoms subject to placebo effects, it seems likely that there are a number of surgical procedures whose efficacy is primarily placebo.
13 replies on “Placebo effects in surgery”
I wish I hadn’t read this piece as I am going in for sinus surgery for persistent headaches. This doesn’t have any hard end point results and will have been very open to placebo effect in the past.
Most journal articles I read and attempt to understand are about drug trials, so, for me, the science (art?) of investigating the efficacy of surgical procedures prompts all kinds of questions:
• How long might placebo effects last after surgery?
• Do patients who receive sham surgery ever find out that they didn’t get the real thing? If so, do their symptoms return?
• How could surgical interventions for something more nebulous, like mood disorders (e.g., deep-brain stimulation), be adequately tested?
They could always send one arm of the trial to John of God. Just sayin’.
Hmmm… is this one of those arguments in favour of animal testing? There, at least, *some* of the ethical concerns are reduced – though of course new ones are introduced.
And yes, I realise my brethren (and sisteren – don’t want to find we’re ignoring sex-related effects in the surgical outcomes, do we?) and I don’t have some (many?) of the conditions where surgical intervention is required in you primates. But surely there are conditions where there is a sufficient degree of similarity that firm conclusions can be drawn.
Regarding the GERD example, I’m not sure it’s the best choice to illustrate the ethical problems. People with severe GERD undergo endoscopies routinely, both to evaluate the condition of the esophagus (since severe GERD is a major risk factor for esophageal cancer) and to perform dilations; I’ve had the latter done twice, as scarring at the bottom of my esophagus has formed a ring that sometimes causes food to get stuck. Now that I’m on omeprazole, I haven’t had to have it done again, but it was a real blessing that first time.
So if you wanted to do the hypothetical trial, you’d probably aim for the many GERD sufferers lining up for an endoscopy anyway, and make it worth their while by paying for the procedure. I’d probably sign up for that, personally, since it would offer the possibility of not being dependent on a PPI.
@Fergus Glencross – Best of luck with your surgery! Recurring sinus headaches were what initially led me to science-based medicine in the first place, as I had a headache that had seemed to last 18 months & had tried numerous woo interventions, none of which worked. (“Try it – it worked for me!” Yeah, well, I guess I just wasn’t credulous enough.) I finally went to an ENT who suggested sinus surgery, even while cautioning me the evidence base wasn’t overwhelming. To present my anecdata, though, it worked great for me; it didn’t eliminate the headaches entirely, but it reduced the frequency from every other day to once every other month.
It seems placebos are being discussed all over at the moment:
http://www.theonion.com/articles/american-medical-association-introduces-new-highly,36154/
The only trial I remember where sham surgery was used was for knee arthroscopy (I hope I spelled that right). The procedure was basically a ‘cleaning out’ of the knee joint and was performed with, I think, two small incisions and one of those camera tools (endoscopy thingies; I’m an epi, not a doctor), where they took out bits of cartilage, and it was supposed to alleviate pain. I remember reading it because my mom had recurrent knee problems (leading up to a double knee replacement years ago) and I used to read everything about knee problems (and Crohn’s, because of a sister-in-law). The trial randomized everyone; then those getting the sham got the two small incisions and one stitch each. They even kept them ‘out’ the same amount of time. It was blinded well, as they asked everyone whether they got real or sham treatment and they really couldn’t tell. It also found that those getting no treatment fared just as well as those getting the surgery. I think for cases like this, where the outcome is more quality of life than anything, more trials should be done on outcomes in a placebo-controlled fashion. Though I do acknowledge the point about ethics and also selling it to patients.
Orac, have you seen the case of SYMPLICITY HTN-3, a randomized controlled trial of renal denervation for resistant arterial hypertension?
Btw, link to a Forbes article discussing the trial, and (unfortunately) the heavy spin used by the promoter of that technique to minimize the results of the trial:
http://www.forbes.com/sites/larryhusten/2014/05/21/the-walking-dead-renal-denervation-in-europe-just-cant-be-stopped/
In my case, it was (described/sold as) to minimize the likelihood of worsening the tear(s) and winding up with a more radical meniscectomy in the future. I do sometimes wonder if the same recommendation would be made today, about seven years down the line.
What I find shocking is the fact that people talk about the placebo (and nocebo) effect, as if it’s normal that ‘nothing’ works at least partly as good (33% and upwards) as ‘the real thing’. It’s because conventional medicine actually acts as if: 1. the placebo effect is not as real as the effect of ‘real drugs or surgery’ and 2. it’s just a disturbing thing, that interferes with the testing of drugs, etc. To put the placebo effect in perspective: it should actually not exist! And one more thing: the ‘nocebo’ effect plays out whenever negative expectations are triggered, which means that media announcements for a coming flu, or the importance of ‘preventive screening’, vaccinations (even for new born babies), etc., install negative expectations in people’s mind. They already are told about the ‘dangers’ of not getting flu shots, preventive screenings, etc., and when the media puts extra emphasis on it, people ‘know’ what to expect if they don’t do what they’re told to do (by doctors, in the first place). This keeps the sickness industry going, of course. What a world we live in 😉
Ronald,
You seem to be laboring under a number of misconceptions about placebos.
It depends what you mean by “works”. Placebos don’t have physical effects, by definition. They don’t kill bacteria, accelerate wound healing or do anything else objective. The knowledge (or belief) that a patient has taken a placebo may have subjective effects: they may feel calmer, and less bothered by their symptoms. A good example is this recent study of asthma patients, who were given either a real asthma inhaler, a fake inhaler, or sham acupuncture. All the patients reported similar relief from asthma symptoms, but only those given the real inhaler showed an improvement in their lung function. You may think that relieving symptoms is good, but in the case of asthma it can be very dangerous to mask symptoms; it isn’t a long way from slight wheezing to a full-blown asthma attack that could be fatal.
That’s because it isn’t as real as the effect of drugs and surgery.
What is often described as the placebo effect includes things like, for example, natural improvement in conditions that would have occurred without any treatment and without a placebo (regression to the mean). Few clinical trials have a ‘no treatment’ arm, but patients often seek medical help, and are enrolled on clinical trials, when their symptoms are at their worst. It would be a mistake to attribute any improvements to the effects of a placebo.
I see no evidence for any placebo effects that are unexplainable or even unexpected. Do you have some evidence that placebos do something unexplainable? Please share if you do, as I have been hunting in vain for a long time.
Are you suggesting that influenza, breast, prostate and cervical cancers and infectious diseases like hepatitis B are the results of ‘nocebo’ effects? Perhaps such media announcements might make people think a cold is flu, or that a lump might be something nasty, but it isn’t going to cause the cold or the lump, and it might prompt the person to get checked out and/or vaccinated, which is a good thing, isn’t it?
But we know all too well what happens if people have untreated cancer, or if their cancer is detected too late, or if they (or their children) don’t get vaccinated. Just read a history book, or a personal account from a century or more ago. It has nothing to do with what people believe, or what ‘nocebo’ they have taken, that kind of deluded belief is a luxury allowed by the efficacy of modern scientific medicine.
What a world we live in where modern medicine is so successful some people have completely forgotten how horrible life would be without it, and have come to believe they don’t need it. Some, like you apparently, think that modern medicine is a con, perpetuating and even causing the illnesses it claims to treat or prevent. Some seem to believe they could prevent disease simply by thinking pink and fluffy thoughts. Those people should spend a month or two traveling in a developing country, or work in a general hospital for a while. That might give them a more realistic perspective on the value of modern scientific medicine.