
Avoiding scientific delusions

I happen to be in Phoenix today, attending the Academic Surgical Congress, where I actually have to present one of my abstracts. That means, between flying to Phoenix last night and preparing for my talk, I didn’t have time to serve up a heapin’ helping of that Respectful Insolence™ you know and (hopefully) love. Fortunately, there’s still a lot of stuff in the vaults of the old blog begging to be moved over to the new blog; so that’s what I’ll do today. I’ll probably be back tomorrow with new material, given that the conference will likely produce blog fodder. (Conferences usually do.) And, don’t worry. Barring my being so utterly seduced by the warm weather here that I totally forget my blog responsibilities after having traveled from bitter cold to weather in the 70’s, I’ll still probably produce Your Friday Dose of Woo, maybe surreptitiously during the boring bits of the conference. Probably. In the meantime, enjoy this blast from the past, particularly those of you who have been regulars for less than 18 months or so, who probably have never seen it before. (And, of course, if that’s the case, it’s new to you.)

This article originally appeared on August 31, 2005, and it produced a lot of heated discussion the first time around. It also seems to me to be a very appropriate followup to yesterday’s post and will fit in nicely with the “Just Science” week theme some bloggers are trying to adhere to. After all, it is about the scientific method.

In all my blogging about altie claims and the alleged (and most likely nonexistent) link between mercury exposure from thimerosal in childhood vaccines and autism, I’ve been consistent in one thing: I try as much as possible to champion evidence-based medicine. I insist on evidence-based medicine because, with good reason, I believe it to be superior to the testimonial-based medicine practiced by the likes of Dr. Rashid Buttar, who cannot demonstrate that his “transdermal cream” even gets his chelating agent into the bloodstream at concentrations that will chelate anything, much less mercury, much less that it can improve the functional level of autistics in randomized controlled trials. (On a side note, I’ve also learned that Dr. Buttar doesn’t like me very much for saying so, much to my amusement. If this, this, or this inspired him to mention me in the same sentence as “stupidity” or a “brick wall,” he really should check out Kev’s fantastically sarcastic letter to him or this broadside against chelation therapy for autism. Or, better yet, he should read Peter Bowditch’s take on the matter. That ought to raise his blood pressure. Then he could chelate himself to get it back down.) I also insist on evidence-based medicine because a lack of evidence is exactly what quacks like Hulda Clark prey upon in selling their worthless “cures” for diseases like cancer. However, contrary to what some alties will claim, I do not limit my insistence on evidence-based medicine to various alternative treatments. I insist on uniform scientific standards for evaluating the biological mechanisms of disease and potential treatments, whether a treatment is “alternative” or “conventional.”

Alties are frequently unhappy about medicine’s growing insistence on well-designed clinical trials to test their claims, considering it evidence of the “elitism” they despise in “conventional” medicine. What they don’t understand is that the scientific method and clinical trials matter not because scientists and “conventional” doctors are any wiser than “alternative” practitioners or the general population at large. They most certainly are not; they are merely more highly educated and trained. The reason the scientific method and clinical trials are so important in developing and evaluating new therapies is that doctors are human and therefore just as prone to bias and wishful thinking as the worst pseudoscientist or quack. They are just as prone to wanting so badly to believe that an experimental result is valid or that a treatment is effective that they fool themselves into believing it, and just as prone to resisting change because “we’ve always done it this way.” (Altie practitioners tend to be prone to a different kind of self-deception, namely the Galileo gambit, in which they believe themselves akin to Galileo, persecuted because they are so far ahead of their time.)

Last Sunday’s New York Times had a very good example of a “conventional” treatment that demonstrates why clinical trials are so important. The treatment is vertebroplasty, in which bone cement is injected into the spine to treat vertebral fractures due to osteoporosis:

No one is sure why it helps, or even if it does. The hot cement may be shoring up the spine or merely destroying the nerve endings that transmit pain. Or the procedure may simply have a placebo effect.

And some research hints that the procedure may be harmful in the long run, because when one vertebra is shored up, adjacent ones may be more likely to break.

But vertebroplasty and a similar procedure, kyphoplasty, are fast becoming the treatments of choice for patients with bones so weak their vertebrae break.

The two procedures are so common, said Dr. Ethel Siris, an osteoporosis researcher at Columbia University, that “if you have osteoporosis and come into an emergency room with back pain from a fractured vertebra, you are unlikely to leave without it.” She said she was concerned about the procedures’ widespread and largely uncritical acceptance.

Sound familiar? If not, consider this quote:

“I struggle with this,” said Dr. Joshua A. Hirsch, director of interventional neuroradiology at Massachusetts General Hospital in Boston. He believes in clinical trials, he said, but when it comes to vertebroplasty and kyphoplasty, “I truly believe these procedures work.”

“I adore my patients,” Dr. Hirsch added, “and it hurts me that they suffer, to the point that I come in on my days off to do these procedures.”

Dr. Hirsch apparently started with the noblest of motives, wanting to relieve his patients’ unremitting pain from spinal metastases due to cancer or fractures due to osteoporosis. He still believes he is helping; otherwise he would probably abandon vertebroplasty. Many altie practitioners no doubt start out similarly. They come up with a method or a treatment, see what appears to be a good result, become convinced that it works, and thus become true believers. The difference is that, unlike Dr. Buttar, as an academician Dr. Hirsch at least still feels uneasy about advocating this therapy without adequate research or strong objective evidence showing that it really works better than a sham procedure, because doing so goes against his academic training. (Dangerous “alternative” practitioners like Dr. Kerry have no such qualms, even though he was educated at the University of Pittsburgh.) Nonetheless, Dr. Hirsch appears to have convinced himself by personal observation and small pilot studies that the procedure works. That the Director of Interventional Neuroradiology at Massachusetts General Hospital can convince himself that an unproven treatment works on the basis of personal observation and small pilot studies simply shows how easy it is to believe what one wants to believe. He may or may not be correct in his belief, but we have no way of knowing.

Unfortunately, personal observation is prone to far too many biases, the worst of which is selective thinking, or confirmation bias. In short, we remember successes (or seeming successes) and observations that confirm our expectations, and we tend to forget or discount failures and observations that do not. Small pilot studies are also prone to bias and confounding factors, which is why they are generally good only as a means of determining whether a treatment shows an inkling of effectiveness worth following up with a larger trial. As a claim spreads, it can then become accepted through communal reinforcement, regardless of the poor quality of the initial data. Apparently this is happening now with vertebroplasty.
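It’s worth pausing to appreciate just how powerful lopsided memory alone can be. Here is a minimal sketch in Python (my own toy model, with invented numbers, not anything from the Times article): the “treatment” is completely inert, pain improves on its own half the time, and the practitioner remembers successes more reliably than failures.

```python
import random

random.seed(42)

N_PATIENTS = 200
P_SPONTANEOUS_IMPROVEMENT = 0.5  # pain waxes and wanes; half improve on their own
P_REMEMBER_SUCCESS = 0.9         # vivid successes stick in the mind
P_REMEMBER_FAILURE = 0.3         # failures are quietly forgotten

remembered_successes = 0
remembered_failures = 0

for _ in range(N_PATIENTS):
    # The "treatment" is inert: improvement is pure chance.
    improved = random.random() < P_SPONTANEOUS_IMPROVEMENT
    if improved:
        if random.random() < P_REMEMBER_SUCCESS:
            remembered_successes += 1
    elif random.random() < P_REMEMBER_FAILURE:
        remembered_failures += 1

recalled_total = remembered_successes + remembered_failures
print(f"True success rate: {P_SPONTANEOUS_IMPROVEMENT:.0%}")
print(f"Success rate as the practitioner remembers it: "
      f"{remembered_successes / recalled_total:.0%}")
```

Run it, and a therapy with a true 50% “success” rate is remembered as working roughly three-quarters of the time. No dishonesty is required, just ordinary human memory.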

In studies of pain relief treatments or procedures, one particularly nasty confounder that cannot be eliminated without good placebo controls is regression to the mean:

For example, he said, patients come in crying for relief when their pain is at its apogee. By chance, it is likely to regress whether or not they are treated. That phenomenon, regression to the mean, has foiled researchers time and time again.
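To make the phenomenon concrete, here is a minimal simulation in Python (again my own illustration, with made-up pain scores, not data from any trial): each patient’s pain fluctuates around a personal baseline, patients are “enrolled” only on days when their pain is at its worst, and everyone is re-measured later with no treatment at all.

```python
import random

random.seed(0)

def make_patient():
    """A chronic pain patient: a personal baseline plus day-to-day noise."""
    baseline = random.gauss(6.0, 1.0)  # average pain on a 0-10 scale
    return lambda: min(10.0, max(0.0, baseline + random.gauss(0.0, 2.0)))

ENROLLMENT_THRESHOLD = 8.0  # patients seek treatment when pain peaks

pain_at_enrollment, pain_at_followup = [], []
while len(pain_at_enrollment) < 1_000:
    daily_pain = make_patient()
    today = daily_pain()
    if today >= ENROLLMENT_THRESHOLD:          # enrolled at the apogee of pain
        pain_at_enrollment.append(today)
        pain_at_followup.append(daily_pain())  # measured again, untreated

print(f"Mean pain at enrollment: {sum(pain_at_enrollment) / 1_000:.1f}")
print(f"Mean pain later, with NO treatment: {sum(pain_at_followup) / 1_000:.1f}")
```

The untreated “improvement” of a point or two falls out of the arithmetic alone; any inert procedure performed at enrollment would happily take credit for it.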

One uncontrolled pilot study reported pain relief in 26 of 29 patients after the procedure, but a followup study with a placebo control, although quite small, cast doubt on the procedure’s effectiveness:

But Dr. David F. Kallmes, one of her partners, wanted a rigorous test. He began a pilot study, randomly assigning participants to vertebroplasty or placebo. To make it more appealing, he told patients that 10 days later they could get whichever treatment they had failed to get the first time.

It was hard to find subjects, and Dr. Kallmes ended up with only five. For the sham procedure, he pressed on the patient’s back as if injecting cement, injected a local anesthetic, opened a container of polymethylmethacrylate so the distinctive nail-polish-remover smell would waft through the air and banged on a bowl so it sounded like he was mixing cement.

In 2002, he reported his results: three patients initially had vertebroplasty and two had the sham. But there was no difference in pain relief. All the patients thought they had gotten the placebo, and all wanted the other treatment after 10 days. One patient who had vertebroplasty followed in 10 days by the sham said the second procedure – the sham – relieved his pain.

In other words, none of the patients got enough relief to believe he had received the real procedure; all thought they had gotten the placebo the first time around and wanted the “real thing.” This implies that the pain relief attributed to the treatment may well be a placebo effect. Remember, placebo effects are often more potent the more elaborate or invasive the treatment is, and thus harder to control for. This is one of many reasons that trials of surgical or invasive procedures to relieve chronic pain are often so hard to do. Sadly, Dr. Kallmes’ trial was so small that we cannot draw any definitive conclusions from it, although there is also an Australian trial that found that pain relief at six weeks in the vertebroplasty group was no better than in the control group, bringing the long-term effectiveness of the technique into doubt.
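Just how small is too small? One way to see it is to simulate trials of that size under generous assumptions. Here is a sketch in Python (my own back-of-the-envelope exercise; the 70% and 30% response rates are invented for illustration): suppose vertebroplasty truly works dramatically well, and ask how often a trial with three treated patients and two sham patients would detect it.

```python
import random
from scipy.stats import fisher_exact

random.seed(1)

# Generous assumption: vertebroplasty truly works, with 70% of treated
# patients improving versus 30% on the sham.
P_TREATED, P_SHAM = 0.7, 0.3
N_TREATED, N_SHAM = 3, 2       # the split in the five-patient pilot
N_SIMULATED_TRIALS = 10_000

significant = 0
for _ in range(N_SIMULATED_TRIALS):
    t_improved = sum(random.random() < P_TREATED for _ in range(N_TREATED))
    s_improved = sum(random.random() < P_SHAM for _ in range(N_SHAM))
    table = [[t_improved, N_TREATED - t_improved],
             [s_improved, N_SHAM - s_improved]]
    _, p = fisher_exact(table)  # exact test suited to tiny 2x2 tables
    if p < 0.05:
        significant += 1

print(f"Trials reaching p < 0.05: {significant / N_SIMULATED_TRIALS:.1%}")
```

The answer is exactly zero: with three patients versus two, even the most lopsided possible result (every treated patient improving, no sham patient improving) yields p = 0.1 by Fisher’s exact test. A trial that small literally cannot reach statistical significance, no matter how well the treatment works.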

It turns out that the bulk of the evidence being used to argue that vertebroplasty is effective consists, in essence, of testimonials, rather uncomfortably like the “evidence” being used to promote Dr. Buttar’s “transdermal chelation” therapy and other altie treatments. We have no idea whether vertebroplasty actually works, for which patients it does and doesn’t work, what the long-term results are in terms of durable pain relief, whether it increases the risk of additional fractures, or what the potential complications are. Finding that out would require clinical trials, and, barring such trials, we can never be certain whether vertebroplasty and kyphoplasty are anything more than elaborate placebos. The difference, of course, is that vertebroplasty at least has a biologically and anatomically plausible rationale to lead us to think that it might work. The same most definitely does not apply to Dr. Buttar’s treatment. Read this and tell me that this story of a doctor giving a talk about vertebroplasty to a skeptical audience of doctors doesn’t sound familiar:

“I could tell by looking at the audience that no one believed me,” she said. When she finished, no one even asked questions.

Finally, a woman in back raised her hand. Her father, she told the group, had severe osteoporosis and had fractured a vertebra. The pain was so severe he needed morphine; that made him demented, landing him in a nursing home.

Then he had vertebroplasty. It had a real Lazarus effect, the woman said: the pain disappeared, the narcotics stopped, and her father could go home.

“That was all it took,” Dr. Jensen said. “Suddenly, people were asking questions. ‘How do we get started?’”

Can you picture this sort of scene in an infomercial for an herbal remedy? I can.

So what’s wrong with testimonials? Well, as I like to say, the plural of “anecdote” is not “data,” and testimonials usually don’t even rise to the level of anecdotes. Testimonials are often highly subjective, and, of course, practitioners can and do pick which testimonials they present. Even in the case of cancer “cures,” testimonials often mean little because they are frequently given for diseases that surgery alone had already cured. (Also, dead patients don’t provide good testimonials.) Worse, testimonial-based practice tends to preclude the detailed observation and long-term followup necessary to identify which patients benefit from a treatment and which do not, the types and rates of complications, and the long-term results of the treatment. Anecdotes are really good for only one thing: generating hypotheses to test, first with basic scientific experimentation and then with clinical trials. Vertebroplasty may indeed be very effective at pain relief with a low risk of complications. Or it may not. We simply don’t have the data to know one way or the other, and now we may never have it. What is odd is that Medicare and insurance companies are usually pretty firm about not paying for an experimental procedure (which is what vertebroplasty should be considered), yet somehow third-party payers have been persuaded to cover this one.

Science itself and randomized clinical trials are designed to combat such biases. In preclinical studies, the scientific method requires the careful formulation of hypotheses and the testing of those hypotheses with experiments that can either confirm or falsify them, experiments that include appropriate control groups to rule out results due to factors other than the one the researcher is studying. The scientific method, rigidly adhered to, helps investigators protect themselves from their own tendency to see what they want to see, correct mistaken results, and recover from their mistakes faster. Randomized clinical trials accomplish much the same thing through four factors: strict inclusion criteria, so that only patients with the disease being studied are admitted; close measurement of endpoints that are as objective and reproducible as possible; careful, statistically valid randomization, so that the control and experimental groups resemble each other as closely as possible; and a placebo control (or a comparison against the standard of care for diseases in which a placebo control would be unethical, as in cancer trials). Whenever possible, double blinding is advisable, so that neither the patients nor the doctors know which patient is getting which treatment. That way, doctors don’t treat patients in either group differently or look more closely for (and therefore find) treatment effects in the experimental group, and patients don’t pick up cues from their doctors’ interactions with them. This maximizes objectivity and minimizes bias.
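For the curious, here is what “careful, statistically valid randomization” can look like in practice. This is a minimal sketch in Python of permuted-block randomization, one common scheme among several (the arm labels and block size here are illustrative, not a description of any particular trial), with the treatment identities hidden behind codes so that blinding can be maintained:

```python
import random

def permuted_block_randomization(n_patients, block_size=4, seed=2024):
    """Assign patients to arm 'A' or 'B' in shuffled blocks so the arms
    stay balanced throughout accrual, not just at the end."""
    assert block_size % 2 == 0, "block size must split evenly between two arms"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)  # order within each block is unpredictable
        allocation.extend(block)
    return allocation[:n_patients]

# The blinding step: only the trial statistician holds this key. Patients,
# treating doctors, and outcome assessors see nothing but "A" and "B".
UNBLINDING_KEY = {"A": "vertebroplasty", "B": "sham procedure"}

schedule = permuted_block_randomization(12)
print(schedule)  # e.g. ['A', 'B', 'B', 'A', ...]
```

Blocking is one design choice among many; the point is simply that assignment comes from a prespecified, auditable procedure rather than from anyone’s judgment about which patient “needs” the real treatment.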

It should also be remembered that a single study is not enough, either. Single studies can be wrong one-third or even one-half of the time. I’ve often joked that, if you look hard enough, you can almost always find a study that supports whatever conclusion about a clinical question you want to reach. Alties don’t understand this and will cite one or two carefully selected reports that seem to support their claims, ignoring the many that do not. A good illustration is chelation therapy for another disease, namely atherosclerotic vascular disease, for which chelationists will cite old papers with inadequate controls that seemed to show a benefit. For example, one randomized study in 1990 appeared to show a benefit for chelation therapy over placebo, but it looked at only 10 patients. Multiple much larger randomized studies have been done since then, such as this one, and none of them has shown a benefit. Guess which studies alties like to cite? (Hint: it isn’t any study newer than 1991 or so.) Hopefully, an ongoing NCCAM study will resolve the question once and for all, although there is little doubt in my mind that chelationists will not believe the study if, as is likely, it fails to find a beneficial treatment effect.
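It is easy to demonstrate why cherry-picked single studies prove nothing. In this little Python simulation (once more, a toy example of my own, not real trial data), the treatment has no effect whatsoever, yet a handful of “positive” studies still appear, ready to be cited:

```python
import random
from scipy.stats import ttest_ind

random.seed(7)

N_STUDIES = 100
N_PER_ARM = 30

positive_studies = 0
for _ in range(N_STUDIES):
    # Both arms are drawn from identical distributions: the treatment is useless.
    treated = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    _, p = ttest_ind(treated, control)
    if p < 0.05:
        positive_studies += 1

print(f"{positive_studies} of {N_STUDIES} studies of a useless "
      f"treatment came out 'positive' anyway")
```

By construction, roughly five of every hundred null studies will cross p < 0.05 by chance alone. Cite only those and ignore the rest, and a worthless therapy looks well supported.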

What really needs to be considered in clinical decision-making is the totality of data from well-designed clinical studies, something the Cochrane Collaboration tries to facilitate by evaluating the literature on important clinical questions and synthesizing it into recommendations, along with a summary of the quality of the evidence supporting those recommendations (or the lack thereof). The bottom line is that evidence-based medicine, far from being a way for “conventional” doctors to assert their superiority over “alternative medicine,” is in actuality a means for doctors to try to avoid medical and scientific self-delusion about the effectiveness of a favorite treatment. That the medical profession all too often doesn’t do a good job of practicing evidence-based medicine is not a reason to throw these scientific standards out in favor of fluffy, feel-good, testimonial-based treatments like Dr. Buttar’s, or to give advocates of such treatments a pass when it comes to supporting their claims. Rather, it is a strong reason to strive to improve the science behind our treatments and the scientific rigor of our clinical trials. Evidence-based medicine may not be without problems of its own (perhaps I shall try to address some of its shortcomings in future posts), but it is far better than the alternative.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights, which he originally encountered when he became a fan of a 35-year-old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low-budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]
