NCCAM and “pragmatic trials.” Again.

I’m not alone in pointing this out, but if there’s one thing about research and clinical trials into “complementary and alternative medicine” (CAM) that has become very apparent to me over the years, it’s that the more rigorous the study, the less likely it is to show an effect. The usual progression in clinical research goes from small pilot and observational studies to small to medium-sized randomized studies, ultimately culminating in large randomized, double-blind studies that are tightly controlled to eliminate as many potential biases and confounders as possible, as well as to minimize placebo effects. Of course, when you do that for studies of most CAM interventions, what you will almost always find is that they have no effect. In science-based treatments with real effects, the effect will persist as the trials become more rigorous. True, the effect size might decrease as biases are eliminated with increasingly rigorous trials, but the effect will persist. Not so for an ineffective treatment like homeopathy. You might see effects in preliminary studies, which tend to be uncontrolled, unblinded, and prone to bias and placebo effects, but a decreasing effect size as the rigor of clinical trials increases is pretty much the sine qua non of CAM therapies.

All of which is why CAMsters like what are known as “pragmatic trials.” Pragmatic trials are a type of clinical trial meant to mimic the “real world” in order to study how treatments work outside of the cozy, controlled confines of randomized clinical trials. Such trials are said to measure “effectiveness,” while randomized clinical trials measure “efficacy.” Now, pragmatic trials have their place. For a treatment that has been shown to be efficacious in randomized clinical trials, it can be useful to test it under less-controlled situations to see if it’s still effective. The reason is that, once a drug or treatment is approved and reaches wider clinical usage, it is suddenly used on a much wider variety of patients, patients who don’t fit the strict inclusion and exclusion criteria of a good clinical trial. Often that means that the effectiveness is lower than the efficacy. However, there is an exception, and that’s when a treatment has little or no efficacy. In that case, often the effectiveness will appear to be greater than the efficacy, thanks to placebo effects, confirmation bias, expectancy effects, and the like.

So which sorts of studies do you think that the National Center for Complementary and Alternative Medicine (NCCAM) is interested in? Just check out this recent post on the NCCAM blog by NCCAM director Josephine Briggs entitled Let’s Get “Pragmatic” About Pain:

The “real world” question is an active topic across the National Institutes of Health (NIH), reflected in growing interest in what are being called effectiveness studies or pragmatic trials. Of course, NIH clinical studies happen in the real world, but usually under conditions that are tightly specified and controlled. In the typical interventional study—whether it is of a new drug, a procedure, or a new behavioral approach—simultaneous use of other therapies is limited, patient eligibility criteria are closely specified, only highly experienced practitioners are engaged, nurse coordinators closely monitor compliance, and so on. Careful control of all aspects helps ensure study results can be replicated. It helps to create “internal validity.” And, by reducing the sources of variability, we reduce the need for large numbers of participants. But, there is a tradeoff in this approach to trial design. At least sometimes, when the results are implemented in real-world conditions, the intervention does not work as expected. Hence, the concept is gaining acceptance that we need both explanatory studies to test the efficacy of therapies and pragmatic studies to examine real-world effectiveness.

Yes, pragmatic studies are becoming more accepted these days, but only after the treatment being subjected to the pragmatic study has first been shown to have efficacy in well-controlled randomized trials. At least, that’s how it should be. CAM practitioners and quackademics publishing dubious CAM studies tend to short-circuit the whole efficacy part and head straight for the apparent effectiveness part. NCCAM, it would appear, is no different, as Briggs goes on to write:

There are many practical questions emerging from our pain portfolio that seem ready for a more pragmatic approach. We have growing evidence, reflected in systematic reviews and practice guidelines, that a number of the mind-body therapies can have real benefit in pain management. But these results raise many new questions. Do these approaches improve patient well-being when they are integrated into primary care settings? What works well in pain clinics that see referral patients? Do the mind-body approaches reduce opioid abuse? What patient populations are best targeted? How can providers effectively encourage models of self-management?

In other words, let’s do pragmatic studies on “mind-body” therapies (whatever that means), so that we are more likely to find the appearance of effectiveness. Then let’s use that appearance of effectiveness as an “evidence base” to justify “integrating” quackery with science-based medicine. No, I don’t think that Dr. Briggs is consciously advocating this. I think she’s let herself be sucked into the culture of NCCAM and the mindset that rules there. She started out as a respectable scientist with a sincere desire to promote and maintain scientific rigor at NCCAM. Unfortunately, that is an impossible task. When you are placed in a situation where your mission is to study treatments that have no preclinical scientific basis, such as acupuncture, reiki, homeopathy, and the like, the only way you can do it is to bypass the normal process of translational science, which progresses from observation to basic science to clinical trials, and go straight to clinical trials.

NCCAM was the product of a woo-loving Senator (Tom Harkin) throwing his influence around. The NIH didn’t want it. There was no scientific need for it, and scientists didn’t ask for it. Science-based physicians certainly didn’t want it either. Yet we got it. It was forced upon us by a politician who thought that bee pollen had cured his allergies. Unfortunately, it also appears to be here to stay, and to continue to promote pseudoscience through the use of “pragmatic” trials.