Placebo effects in surgery

Although I’m a translational researcher, I’m also a surgeon. That was my first and primary training, and only later did I decide to get my PhD during my residency, when the opportunity to do so with a decent stipend presented itself. From my perspective, clinical research in surgery is difficult, arguably more difficult than it is for other medical modalities, at least in some ways. For instance, in surgery it is usually very difficult to do a double-blind placebo-controlled trial. For one thing, doing “sham” surgery on patients in the placebo arm is ethically dicey, and it’s very hard to maintain that essential prerequisite for an ethical clinical trial, clinical equipoise. After all, even “sham” surgery involves subjecting the patient to anesthesia, cutting the skin, and exposing the patient to at least some of the risks of the “real” surgery.

In addition, it’s usually impossible to blind the surgeon, because the surgeon will usually know what he is doing. There are exceptions, depending on the specific operation being done, but they are uncommon. This problem, which is not entirely unique to surgical clinical trials but is far more common in them, sometimes necessitates designs in which the surgeons who do the surgery do not do the postoperative care, an arrangement that most surgeons, trained (and correctly so) to take care of their own patients postoperatively, generally do not like. Nor do patients, who quite reasonably expect that their surgeon will follow them postoperatively. Then there’s the issue of standardization and the known fact that surgical skill matters; technically better surgeons tend to produce better results. All of these factors have conspired to make truly double-blind, randomized, sham/placebo-controlled clinical trials in surgery uncommon. But how uncommon are they?

I learned the other day of a recent publication in BMJ that looked at the use of placebo controls in the evaluation of surgery. It’s a systematic review, performed by investigators at the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS) at Oxford, of randomized clinical trials of surgical procedures that actually used a placebo (i.e., sham surgery) control group, and its results are just one more indication of how hard it is to demonstrate that a treatment, in this case a surgical intervention, works.

The authors discuss the issue right in the beginning:

Modern surgery is changing rapidly. Surgical interventions can now be offered to improve function and quality of life not just to save life. The improvement in the safety of surgical procedures and anaesthesia has facilitated this change.1 The mortality associated with anaesthesia has decreased from between 64 and 100 in 100,000 in the 1940s to between 0.4 and 1 in 100,000 at present. The prevalence of serious adverse events related to surgical interventions has remained relatively constant over the past 10 years, despite an increase in the number of surgical procedures performed each year. The postoperative death rate is between 1.9% and 4% and in most cases is due to the primary disease.3 5

The increase in the applications for surgical procedures has been driven by a greater involvement of technology in surgical procedures. Such technological advances have made many interventions less invasive, more likely to be endoscopic, and less resembling typical open surgery, such as laparotomy. However, these new procedures are often introduced into surgical practice without any formal evaluation of safety and efficacy, such as using randomised clinical trials. This is because, unlike drug products, such verification is currently not mandated by regulatory authorities. Furthermore, there is generally a scarcity of information reported on the surgical learning curves or the iterative development of a new technique. Both existing and innovative surgical practice clearly needs to be evaluated, and any evaluative method should take account of the unique idiosyncrasies and challenges presented by surgical interventions.

And it’s true. I’ve discussed how, for example, the popularity of laparoscopic cholecystectomy, both among surgeons and patients, far outpaced the evidence base that it was safe. Indeed, early on, back in the early 1990s, the rate of a particularly serious complication of cholecystectomy, injury to the common bile duct, was much higher in laparoscopic procedures than in open procedures. Still, ultimately that rate came down, and, given the marked advantages of laparoscopic procedures in terms of decreased hospital stay, less pain, and faster return to work, the laparoscopic approach became the standard. Moreover, when it comes to surgical procedures, the line between “tinkering” (as in incrementally improving a procedure) and research can often be quite difficult to identify. The authors quite appropriately point out that the outcome of a surgical treatment is the cumulative effect of the three main elements of any treatment, namely the critical surgical element (or, as we surgeons like to put it, “fixing the problem”), placebo effects, and nonspecific effects, and that, as with any medical treatment, surgery is expected to be associated with the latter two.

So the authors examined the literature, adhering to the guidelines of the Cochrane Collaboration. Studies were eligible if they were randomized clinical trials examining the efficacy of a surgical procedure compared with placebo, with surgery defined as any interventional procedure that changes the anatomy and requires a skin incision or use of endoscopy (or, as I like to put it, rearranging a patient’s anatomy with therapeutic intent). Here were the results:

In 39 out of 53 (74%) trials there was improvement in the placebo arm and in 27 (51%) trials the effect of placebo did not differ from that of surgery. In 26 (49%) trials, surgery was superior to placebo but the magnitude of the effect of the surgical intervention over that of the placebo was generally small. Serious adverse events were reported in the placebo arm in 18 trials (34%) and in the surgical arm in 22 trials (41.5%); in four trials authors did not specify in which arm the events occurred. However, in many studies adverse events were unrelated to the intervention or associated with the severity of the condition. The existing placebo controlled trials investigated only less invasive procedures that did not involve laparotomy, thoracotomy, craniotomy, or extensive tissue dissection.

I was actually surprised at how few randomized placebo-controlled clinical trials of surgical interventions the authors found. On the other hand, 39 out of 53 of these studies were published after 2000, suggesting that the rigor of surgical clinical trials has improved over the last 14 years. Even so, less than half of the trials (n=22; 42%) reported an objective primary outcome; that is, an outcome that did not depend upon the self-reporting of the subject. Moreover, the number of patients in each clinical trial tended to be small, ranging from 10 to 298, with a median of 60. Also, no placebo-controlled surgical trials investigating more invasive surgical procedures such as laparotomy, thoracotomy, craniotomy, or extensive tissue dissection were identified. The authors also noted that interventions in the placebo arm tended to be associated with fewer serious adverse events than those in the treatment arm, as investigators tried to minimize risks by withholding part of the intervention; for example, only offering partial burr holes rather than full burr holes or not administering heparin. Serious adverse events in the placebo group were more likely if the procedure involved exogenous materials or tissue.

In any case, the authors point out that randomized, placebo-controlled clinical trials of surgery are rare, but that many of them provide clear evidence against the surgical procedure being studied and clear evidence for a significant placebo effect due to surgery. The risk of adverse events was actually lower in the placebo group in many trials, and the majority of trials reported improvements in both the placebo group and the experimental group.

As a researcher myself, I wonder how it is possible to “sell” a clinical trial of a surgical intervention versus a placebo/sham intervention. I’ve participated in surgical clinical trials before and found it incredibly difficult to “sell” a clinical trial, even one that I really believe in, to patients when the difference is surgery versus no surgery. I doubt it would be any different with a clinical trial proposing a sham surgery versus a “real” surgery. A BMJ blogger quite nicely describes the problem in the context of a made-up example of a patient with serious gastroesophageal reflux:

Your gastroenterologist, Dr Barrett, tells you about a new procedure that has shown some promise in initial studies. Using a minimally invasive endoscopic procedure, they can put a stitch in your stomach to stop acid from travelling back up the gullet. The hospital is taking part in a trial of the procedure and, lucky you, they’d like to put you forward. You’re anxious about having an endoscopy, but think it will be worth it if it will alleviate your symptoms.

But there’s more. Dr Barrett explains that the study is trying to assess just how effective this new procedure is and that the best way to do that is to compare the results from the procedure to a control. The people who take part in the trial will therefore be randomly allocated into two groups: those that have an endoscopy plus the stitching procedure and those that have the endoscopy but don’t have the stitch. They won’t find out which group they’re in until the trial has been completed.

You’re confused. On the one hand, you understand scientific process and the need for a robust evaluation of the procedure. On the other, you’re not sure if you want to subject yourself to an invasive technique if you might not get the potential benefit of the stitching procedure at the end of it. To complicate things further, Dr Barrett says there may be some benefit attributable to just having the endoscopy alone and that there are risks associated with the stitching procedure.

What should you do?

Indeed. What should you do? What would you do, if you had, for example, severe reflux and were offered this trial? Would you agree to be a subject?

To me, it’s not surprising that there is a placebo effect in surgery. This has been known at least since the 1950s, when one of the earliest placebo-controlled clinical trials in surgery was performed for a procedure known as internal mammary artery ligation. This was a commonly performed surgical procedure for angina pectoris whose biologic rationale was that more blood flow would be shunted away from the internal mammary artery and towards the coronary arteries. Patients also reported less chest pain after the operation. Unfortunately, the randomized clinical trial, published in 1959, did not bear this out and led to the rapid extinction of the procedure. Other procedures, unfortunately, were not so easily abandoned. (I’m talking about you, vertebroplasty.) One limitation worth noting is that none of the studies in the review included a true untreated control group, making a direct estimate of placebo effects impossible, although the authors explained why they chose not to include such trials.

The authors conclude:

Placebo controlled trials in surgery are as important as they are in medicine, and they are justified in the same way. They are a powerful, feasible way of showing the efficacy of surgical procedures. They are necessary to protect the welfare of present and future patients as well as to conduct proper cost effectiveness analyses. Only then may publicly funded surgical interventions be distributed fairly and justly. Without such studies ineffective treatment may continue unchallenged.

While I tend to agree with this, I really am not quite convinced by the authors’ rather airy dismissal of ethical concerns over such trials. After all, in most medical trials the placebo itself usually doesn’t produce an active risk of injury; sham surgery, on the other hand, produces not only the passive risk of harm from not treating the underlying condition but also an active risk of injury from the parts of the surgical procedure that are actually performed. Still, I do wish that there were more such trials. Blinding isn’t so necessary for procedures with a hard, objective endpoint to measure, such as death or tumor recurrence, but given the large number of surgical procedures designed to alleviate symptoms subject to placebo effects, it is likely that more than a few of them owe their apparent efficacy primarily to placebo effects.