
Maybe we should use therapeutic touch instead of growth factors to culture cells

In complaining about the infiltration of pseudoscience in the form of “complementary and alternative medicine” (CAM) into academic medicine, as I have many times, I’ve made the observation that three common modalities appear to function as “gateway woo,” if you will, in that they are the tip of the wedge (not unlike the wedge strategy for “intelligent design” creationism, actually) that slips into any defect or crack it can find and widens it, allowing the entrance of harder-core woo like homeopathy behind them.

All of these modalities fall under the rubric of “energy healing,” in that the rationale given for how they “work” is that they somehow alter, correct, or “unblock” the flow of qi, that mystical and highly implausible “life energy” that scientists can’t seem to measure but that energy healers assure us really, truly does exist. One of these is acupuncture, which has proliferated throughout many areas of medicine despite a lack of evidence that it does anything more than a placebo. At least acupuncture actually does something, in that it involves introducing needles underneath the skin; it might conceivably have some physiological effect, although it’s virtually certain that, whatever it might do, it isn’t doing it by “unblocking” the flow of anything, much less qi. The next is reiki, which to me is nothing more than faith healing with Eastern mysticism rather than Christian religion as its basis. In reiki, the healer claims to be able to manipulate a patient’s qi for therapeutic effect, in essence by holding his hands out and willing it. The last of this trio of wedgy woo is a distinctly American form of woo known as therapeutic touch (TT), which tends to be promoted and practiced primarily by nurses. Indeed, I view TT as, in essence, an Americanized form of reiki whose name is a misnomer, in that its practitioners hold their hands very close to the patient without actually touching and will the patient to be healed by manipulating his or her “life energy.”

As I said, these forms of woo are “gateway woo” that lead the way to the introduction of the harder-core stuff, like homeopathy, applied kinesiology, or even reflexology. However, we skeptics are seemingly supposed to accept it when we are told that these are really and truly science, maaaan. Sometimes advocates of these modalities are stung by such criticism to the point where they want to try to prove that there’s science behind their mysticism, and when they do, the results are sometimes truly hilarious. For instance, not too long ago I discussed a published series of experiments in which reiki was tested for its ability to alleviate the increase in heart rate observed in rats placed under stress. I couldn’t help but giggle when I pictured reiki masters directing their mystical hand gestures and concentration at laboratory rats. I wondered what could possibly top that experiment for sheer ridiculousness.

Now I know. Now they’re doing therapeutic touch on cell culture and writing glowing press releases about it:

Steeped in white-coat science since she earned her Ph.D. in cell biology at Columbia University 20 years ago, Gloria Gronowicz is about the last person you’d expect to put stock in the touchy-feely discipline of energy medicine. But then the University of Connecticut researcher saw it with her own eyes, under a high-power microscope in her own laboratory, where, once, only well-accepted biological building blocks — proteins, mitochondria, DNA and the like — got respect.

Therapeutic Touch performed by trained energy healers significantly stimulated the growth of bone and tendon cells in lab dishes.

Her results, recently published in two scientific journals, provide novel evidence that there may be a powerful energy field that, when channeled through human hands, can influence the course of events at a cellular level.

“What she’s showing is an association that defies explanation with what we currently know,” said Margaret A. Chesney, a professor of medicine at the University of Maryland and former deputy director of the National Center for Complementary and Alternative Medicine at the National Institutes of Health. “She’s Daniel Boone.”

I truly hate it when someone says that a result “defies explanation with what we currently know.” The reason is that, in experiments investigating the paranormal (and, make no mistake, the claim of TT practitioners that they can manipulate a human energy field with therapeutic intent definitely merits the adjective “paranormal”), it is in fact very rare for a result actually to deserve such a description. There are virtually always alternative hypotheses or explanations for such results that do not require invoking ideas that defy the currently understood laws of physics. For any single study, the alternative hypotheses always include the possibility that the observed result was spurious or a statistical fluke (at the 95% confidence level, at the very minimum 5% of studies will appear “positive” by random chance alone) or that there is some unacknowledged or unnoticed systematic bias in the experiment that isn’t apparent from its description and may not even be apparent to the investigators. Indeed, when such systematic bias exists, study authors are usually blissfully unaware of it; they think they’ve eliminated all sources of bias, but closer inspection shows that they have not. Less commonly, the bias is intentional. Furthermore, when the results of a set of experiments supposedly defy as many currently understood tenets of physics and chemistry as the results reported in the articles mentioned in the above press release do, no single study can eliminate those two possibilities. To make a serious argument that current science is incorrect on this issue and to force a paradigm shift would require an amount of evidence nearly as compelling as all the scientific evidence arguing that the result is impossible.
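To put a rough number on the “statistical fluke” possibility, here is a minimal simulation in Python (my own illustration, not anything from either paper): two groups are repeatedly drawn from the same distribution, so there is no real effect at all, yet roughly 5% of the simulated experiments still come out “significant” at p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Both groups come from the *same* distribution, i.e., no real effect exists
    control = rng.normal(loc=100.0, scale=15.0, size=8)
    treated = rng.normal(loc=100.0, scale=15.0, size=8)
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_experiments:.3f}")  # ~0.05
```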

It’s also a bit of a stretch to call the journals where these results were published “two scientific journals.” The reason is that one of the journals is the Journal of Alternative and Complementary Medicine (JACM), a major bastion of pseudoscience and a convenient wastebasket journal in which CAM mavens publish their pseudoscience. It’s also a little dubious that the two papers, one in JACM and one in the Journal of Orthopaedic Research (JOR), appear in fact to be one study. To me this looks like a case of publishing what we in the biz call the MPU, or “minimal publishable unit,” in order to squeeze the most publications out of one set of experiments. Because these two articles clearly describe what is in essence one study, I’m going to take them both together and treat them more or less as one study. However, I’m going to concentrate mainly on the JOR paper, because at least it was subjected to something resembling peer review. (Based on the low quality of the articles published there, I consider JACM’s peer review to be so shockingly substandard scientifically that I don’t consider it peer review at all.)

Overall, I’m underwhelmed by Gronowicz’s study.

One thing I noticed right away in the JOR paper is that the manuscript was first submitted on August 1, 2006 but was not accepted for publication until March 27, 2008. That’s a really long time, even for a medical journal like JOR. (Medical journals tend to take a long time to publish manuscripts, sometimes as long as a year.) No doubt TT advocates will say it’s because those evil reductionist scientists are trying to suppress Real And True Evidence That TT Works, but more likely it implies a lot of criticism and requested revisions, perhaps with a bit of fighting with the editor to get this published. The fact that the JACM article was published a few months ago makes me wonder whether Gronowicz, frustrated with a real scientific journal’s skepticism over her work, decided to publish quickly in a woo-friendly journal, which gladly ate up what she was dishing out. Perhaps she did it because she needed a publication or two for a grant application or progress report. That’s just my guess, although I consider it an educated one. Otherwise, why would she have published in such a trashy journal as JACM, when she had a paper in the pipeline for JOR?

Basically, in the two papers, Gronowicz studies whether the application of TT can stimulate the growth of human cells and the mineralization of human osteoblasts (HOBs). Her findings from the two studies, if you believe them, suggest not only that TT increases the proliferation and mineralization of osteoblasts but that its effects are specific, in that it has no such effect on the growth of the osteosarcoma cell line SaOS. Put in its simplest terms, the implication, again if you believe the study, is that TT practitioners can somehow, through force of will used to manipulate life energy, cause “good” cells to grow without also making the “bad” cells grow too. Unaddressed is why on earth anyone would think that manipulating the life energy of an intelligent organism could be “scaled down” to manipulating the life energy of a plate of cells.

There are a number of problems with both studies, but first let’s look at the positives. The researchers did actually do a fairly reasonable job of trying to blind those performing the assays to the experimental groups, as described in the JOR paper:

Control (untreated) and “treatment” tissue cultures plates were clamped in one of two ring stands on a bench top, and were approximately 15 inches from the benchtop so that the practitioner hands could reach all sides without touching. Control and treated plates were positioned at either end of an L-shaped laboratory. Treatment was alternately performed on either end of the room with the treated plates receiving treatment twice a week and the untreated plates remaining clamped for the same time period while treatment was being performed on the other end of the room. Then the tissue culture plates were returned to the same incubator. Positioning of the plates in the incubator was random, and a technician with no knowledge of TT, set up the plates and returned the plates to the incubator.

Unfortunately, it was not described how cells were allocated to each group, and it was also somewhat disturbing that in later experiments the investigators added a “placebo” group, in which a random lab tech or student not trained in therapeutic touch mimicked the motions of TT practitioners, holding their hands a few inches from the plate while distracting themselves from “focusing” or showing “intent” toward the cells by counting backwards from 100 to 1. (I kid you not.) Worse, the authors, as far as I can tell, appear to have compared this placebo group not only to the new experiments but to the old experiments as well; in other words, the placebo group was analyzed post hoc alongside the other groups. Also, aside from the performance of the final assays, the handling of the plates does not appear to have been blinded, which could potentially be problematic.

A second problem that I noted was a puzzling inconsistency between data from identical experiments presented in both of the papers. For example, this is the graph of human osteoblast proliferation as measured by 3H-thymidine uptake from the JACM paper (C=control; TT= therapeutic touch; P=placebo):

[Figure: HOB proliferation by ³H-thymidine uptake, JACM paper]

Note one very important observation. Proliferation in the TT group was only slightly higher than in the control group, but proliferation in the placebo group was also elevated over control, albeit not as much and not by a statistically significant amount. Now let’s look at the JOR figure, which seems to show the same comparisons for human osteoblasts:

[Figure: HOB proliferation, JOR paper]

The pattern is the same in that the results of the proliferation measurements were: TT > P > C. However, the differences observed are much larger. Whenever I see sets of data like this, I have to wonder which data from which experiments were included in each graph and why the graphs show such a striking difference in the magnitude of the alleged effect. (That’s aside from the ethics of publishing in essence identical experiments in two different journals.)

There’s also a graph in the JACM article, conveniently not included in the JOR article, that suggests even more strongly that the observed results could very well be due to random chance. At the very least, it suggests that something strange is going on here, and it’s not strangeness due to a new and amazing discovery. I’m referring to a time course of the alleged effect at one week and two weeks as a function of the number of TT treatments:

[Figure: Time course of HOB proliferation at one and two weeks versus number of TT treatments, JACM paper]

Note that at one week there is no difference between control and TT regardless of the number of TT treatments. This result was reported in both articles; for whatever reason, one week of TT was not enough in these cells. However, at two weeks, there was a difference with four treatments and eight treatments but not with six or ten treatments, a result of three different experiments. Such a result brings into serious question whether the TT results were, in fact, real. The reason is that, if TT were real, we’d expect a dose-response curve of some sort, most likely with the effect increasing with the number of treatments, the “dose” of TT, if you will. There is no good scientific reason to expect that only four or eight TT treatments would have an effect while six or ten would not. Such a result desperately needs to be explained. Maybe it’s quantum effects. Or maybe it’s magic, where only multiples of four treatments have an effect. Or maybe it’s some sort of mathematically perfect woo, where only treatments numbering a power of two have an effect. Too bad the investigators didn’t do only two treatments or carry their analysis out to twelve or sixteen TT treatments.

Finally, in the JOR article, much is made of how the statistics were handled. For example, here is an excerpt of the methods section:

Data analysis focused on comparing the distribution of levels of proliferation and mineralization across study conditions, for example, “therapeutic touch versus control” or “therapeutic touch versus control versus placebo.” All comparisons used “exact” nonparametric statistical tests. Nonparametric tests were selected because study measures typically did not follow normal distributions and sometimes exhibited clear evidence of heterogeneity of variance between groups. “Exact” versions of the tests were performed to avoid reliance on “large-sample” approximations in the calculation of p-values.

This, of course, begs the question of why there was so much heterogeneity of variance between groups. The variables under study generally take on a normal distribution, and there is no a priori reason to suspect that the variance between the groups would differ so much as to require special statistical handling.
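For readers unfamiliar with the terminology in that excerpt, here is a minimal sketch of what an “exact” nonparametric test looks like in practice, using scipy’s Mann-Whitney U test on made-up numbers (the values are hypothetical and not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical measurements for illustration only; not the paper's data
control = rng.normal(loc=100.0, scale=15.0, size=6)
treated = rng.normal(loc=110.0, scale=30.0, size=6)  # deliberately unequal variance

# method="exact" enumerates the permutation distribution of the U statistic
# instead of relying on the large-sample normal approximation
u_stat, p_exact = stats.mannwhitneyu(control, treated, alternative="two-sided",
                                     method="exact")
print(f"Mann-Whitney U = {u_stat:.1f}, exact p = {p_exact:.3f}")
```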

I’ve mentioned before how important it is to control for multiple comparisons when there are more than two experimental groups. Failure to do so can easily lead to “statistically significant” differences that in reality are not. The reason is that, at a significance level of 5%, each pairwise comparison carries a 5% chance of a “false positive,” and the more comparisons there are, the greater the chance of finding a “false positive” becomes. There is a statistical correction for this tendency known as the Bonferroni method. It is actually to Gronowicz’s credit that she did indeed use the Bonferroni method. However, applying the Bonferroni method rendered her results statistically nonsignificant, as this excerpt from the JOR article admits:

Figure 1C demonstrated that TT stimulated HOB DNA synthesis after 2 weeks ( p = 0.04) but the placebo individual did not stimulate DNA synthesis. In the post hoc pairwise comparisons that followed this statistically significant finding, the Bonferroni adjustment required application of a significance level equal to 0.0167. None of the comparisons fell below this more rigorous threshold (control vs. TT, p = 0.095; control vs. placebo, p = 0.017; TT vs. placebo, p = 0.43), suggesting that the experiment was underpowered to support the conservative Bonferroni approach. However, the results are suggestive of the possibility that the training and intention of an experienced practitioner may be required to elicit an effect.
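To make the arithmetic in that excerpt concrete, here is a minimal sketch of the Bonferroni adjustment being described; the p-values are the ones quoted above, while the code itself is merely my illustration, not the authors’:

```python
alpha = 0.05
n_comparisons = 3  # control vs. TT, control vs. placebo, TT vs. placebo

# Rough chance of at least one false positive if each comparison is run at
# alpha = 0.05 (treating the comparisons as independent)
familywise_error = 1 - (1 - alpha) ** n_comparisons
print(f"Uncorrected familywise false-positive rate: {familywise_error:.3f}")  # ~0.143

# Bonferroni: divide alpha by the number of comparisons
threshold = alpha / n_comparisons
print(f"Bonferroni-adjusted threshold: {threshold:.4f}")  # 0.0167

# The pairwise p-values reported in the JOR excerpt, checked against that threshold
reported_p = {"control vs. TT": 0.095, "control vs. placebo": 0.017, "TT vs. placebo": 0.43}
for name, p in reported_p.items():
    verdict = "significant" if p < threshold else "not significant"
    print(f"{name}: p = {p} -> {verdict}")
```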

In other words, when done with the proper statistics, there was no statistically significant effect of TT on human osteoblast (HOB) proliferation. Indeed, the comparison that came closest to the Bonferroni-adjusted threshold was control versus placebo (p = 0.017), not either comparison involving TT. The same was true for mineralization of HOBs:

TT was able to increase mineralization compared to untreated even at 4 and 6 weeks of TT treatment. However, once again the study may have been unpowered to support use of the conservative, Bonferroni approach to performance of multiple, pairwise statistical tests. The p-values at 4 and 6 weeks were both equal to 0.029. Although these p-values fell below the nominal 0.05 cutoff for significance, they did not reach the more extreme threshold of 0.0167 required by use of the Bonferroni method.

In other words, the mineralization results for HOBs were not statistically significant, either. True, there were a bunch of other statistically significant differences between the TT group and controls, but they were not independent results; they were measurements that are tightly correlated with proliferation and mineralization. I have a guess as to why so much is made of the statistics in the JOR paper, where five paragraphs of the Methods section are devoted to statistics and the justification of the statistical tests chosen, but not in the JACM paper, where only one brief paragraph was used and the statistical test chosen was not nearly as rigorous. It has to do with that nasty peer review. My guess is that the reviewers for JOR forced Gronowicz to use the less permissive test, which, when she applied it, resulted in the effect of TT going from highly statistically significant to nonsignificant (although for a couple of values it was close). In contrast, the JACM “reviewers” were perfectly happy with the use of an inappropriate statistical test. Indeed, in the JACM paper, Gronowicz appears to have used pairwise comparisons with Student’s t test, rather than the appropriate approach for more than two groups: ANOVA followed by a post hoc correction for multiple comparisons.
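For what it’s worth, here is a minimal sketch, using made-up data rather than anything from either paper, of the difference between those two approaches: naive pairwise Student’s t tests versus an omnibus ANOVA followed by Bonferroni-corrected post hoc comparisons.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical replicate measurements for three groups; illustrative values only
groups = {
    "control": rng.normal(100.0, 15.0, size=6),
    "TT":      rng.normal(115.0, 15.0, size=6),
    "placebo": rng.normal(105.0, 15.0, size=6),
}

# Step 1: omnibus one-way ANOVA asks whether *any* group differs from the others
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Step 2: post hoc pairwise tests, comparing the naive threshold to the
# Bonferroni-corrected one
pairs = list(combinations(groups, 2))
alpha = 0.05
threshold = alpha / len(pairs)
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs. {b}: p = {p:.3f} "
          f"(uncorrected: {'sig' if p < alpha else 'n.s.'}, "
          f"Bonferroni: {'sig' if p < threshold else 'n.s.'})")
```

The point is simply that the uncorrected pairwise tests use a more permissive threshold than the Bonferroni-corrected ones, which is exactly the difference between the JACM and JOR analyses.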

This strikes me as a good time to point out a general rule in biological studies of this type. Whenever you read a paper in which the authors spend this much time justifying their choice of statistics and pointing out that their results are statistically significant under a less rigorous test but not under the correct (and more rigorous) one, it is almost certain that what you’re looking at is an effect that is a product of random chance, of a mild undetected systematic bias in the experiments, or of both. This is different from clinical trials, where the variability is such that it’s not uncommon to see such discussion, especially since such trials cannot be repeated as easily as cell culture experiments like these.

Finally, whenever you come across experiments such as these that claim to provide hard evidence for a highly implausible modality, it is important to consider three things. First, what is the real hypothesis being tested, and is it reasonable? In this case, it is not. After all, if TT actually did work the way its advocates claim, it would be by redirecting the flow of “life energy” in a living organism, in essence by “focusing” and thinking good thoughts about the patient to help him “heal.” Even if that were possible in a human, with trillions of cells, how could a “healer” possibly know how to modulate such “energy” in a plate of a few million cells, which presumably contains far less of this “energy”? It would be like killing an ant with a bazooka (one of my favorite activities when it comes to woo, actually). Next, it is important to look at the actual methodology in detail rather than just the abstract. The abstract is the concentrated “spin” that the authors want placed on their results; it will not contain the caveats or shortcomings that might cast doubt on those results. Finally, above all, remember that no phenomenon can be established by a single experiment or even a single lab. After all, spurious results and unrecognized systematic bias can bedevil even the best researcher. To be accepted, such a result has to be repeatable, both by the original investigator and by other laboratories. The more a result conflicts with established science, the more critical this reproducibility is. Indeed, for a result that conflicts with well-established science as dramatically as this one does to be accepted, the level of evidence supporting the phenomenon should start to approach the amount of evidence suggesting that the phenomenon cannot exist.

Sadly, Dr. Gronowicz seems to have forgotten this. In the discussions of both papers, she argues that her results definitely show that TT can “elicit biological effects in vitro” and opines that the type of energy emanating from TT practitioners’ hands is unknown. Her discussion in the JACM paper is even worse, as she cites a number of incredibly dubious studies on reiki and qi gong from (you guessed it!) JACM. Meanwhile, in the press release, she says:

“Should somebody with osteoporosis or a broken leg go to their Reiki practitioner?” Gronowicz said. “We don’t know.”

Actually, we do know.

Perhaps you wonder: who would fund a study like this? Who would fund a study in which TT practitioners try to ply their craft on dishes of cultured cells? Do you even have to ask? That’s right: this study was funded by NCCAM. It’s just another example of your tax dollars hard at work.

REFERENCES:

1. Jhaveri, A., Walsh, S.J., Wang, Y., McCarthy, M., Gronowicz, G. (2008). Therapeutic touch affects DNA synthesis and mineralization of human osteoblasts in culture. Journal of Orthopaedic Research. DOI: 10.1002/jor.20688

2. Gronowicz, G.A., Jhaveri, A., Clarke, L.W., Aronow, M.S., Smith, T.H. (2008). Therapeutic Touch Stimulates the Proliferation of Human Cells in Culture. The Journal of Alternative and Complementary Medicine, 14(3), 233-239. DOI: 10.1089/acm.2007.7163

By Orac

