Note: Grant writing ruled again this weekend, so I took this post, which first appeared elsewhere, and decided to revise and repost it. It seems appropriate, given what I’ve been discussing lately. Enjoy, and hopefully there’ll be something new tomorrow.
I’ve been complaining a lot about a certain journalist lately, specifically one named David Freedman. Before his most recent paean to unscientific medicine, he wrote another article. That article, which was trumpeted by Tara Parker-Pope, appeared under the heading of “Brave Thinkers” and is entitled Lies, Damned Lies, and Medical Science. It is being promoted in news stories like this, where the story is spun as indicating that medical science is so flawed that even the cell-phone cancer data can’t be trusted:
[Embedded msnbc.com video clip]
Let me mention two things before I delve into the meat of the article. First, these days I’m not nearly as enamored of The Atlantic as I used to be. I was a long-time subscriber (at least 20 years) until last fall, when The Atlantic published an article so egregiously bad on the H1N1 vaccine that Mark Crislip decided to annotate it in his own inimitable fashion. Fortunately, this article isn’t as bad (it’s a mixed bag, actually, making some good points and then undermining some of them by overreaching), although it does lay on the praise for Ioannidis and the attacks on SBM a bit thick. Be that as it may, clearly The Atlantic has developed a penchant for “brave maverick doctors” and using them to cast doubt on science-based medicine. Second, I actually happen to love John Ioannidis’ work, so much so that I’ve written about it at least twice over the last three years, including The life cycle of translational research and Does popularity lead to unreliability in scientific research?, where I introduced the topic using Ioannidis’ work. Indeed, I find nothing at all threatening to me as an advocate of science-based medicine in Ioannidis’ two most famous papers, Contradicted and Initially Stronger Effects in Highly Cited Clinical Research and Why Most Published Research Findings Are False. The conclusions of these papers to me are akin to concluding that water is wet and everybody dies. It is, however, quite good that Ioannidis is there to spell out these difficulties with SBM, because he tries to keep us honest.
Unfortunately, both papers are frequently wielded like a cudgel by advocates of alternative medicine against science-based medicine (SBM) as “evidence” that it is corrupt and defective to the very core and that therefore their woo is at least on equal footing with science-based medicine. Ioannidis has formalized the study of problems with the application of science to medicine that most physicians intuitively sense but have not ever really thought about in a rigorous, systematic fashion. Contrast this to so-called “complementary and alternative medicine” (i.e., CAM), where you will never see such questioning of the methodology and evidence base behind it (mainly because its methodology is primarily anecdotal and its evidence base nonexistent or fatally flawed) and where most practitioners never change their practice as a result of any research, and you’ll see my point.
Right from the beginning, the perspective of the author, David H. Freedman, is clear. I first note that the title of the article (Lies, Damned Lies, and Medical Science) is intentionally and unnecessarily inflammatory. On the other hand, I suppose that entitling it something like “Why science-based medicine is really complicated and most medical studies ultimately turn out to be wrong” wouldn’t have been as eye-catching. Even Ioannidis showed only a little more restraint when he gave his PLoS review the almost-as-exaggerated title Why Most Published Research Findings Are False, which has made it laughably easy for cranks to misuse and abuse his article. My annoyance at the title and general tone of Freedman’s article, and at the sort of news coverage it’s getting, notwithstanding, there are still important messages in Freedman’s article worth considering, if you get past the spin, which begins very early in describing Ioannidis and his team thusly:
Last spring, I sat in on one of the team’s weekly meetings on the medical school’s campus, which is plunked crazily across a series of sharp hills. The building in which we met, like most at the school, had the look of a barracks and was festooned with political graffiti. But the group convened in a spacious conference room that would have been at home at a Silicon Valley start-up. Sprawled around a large table were Tatsioni and eight other youngish Greek researchers and physicians who, in contrast to the pasty younger staff frequently seen in U.S. hospitals, looked like the casually glamorous cast of a television medical drama. The professor, a dapper and soft-spoken man named John Ioannidis, loosely presided.
I’m guessing the only reason Freedman didn’t liken this team to Dr. Greg House and his minions is that, unlike Dr. House, Ioannidis is dapper and soft-spoken, although, like Dr. House’s team, Ioannidis’ team is apparently full of good-looking young doctors. After describing how Ioannidis delved into the medical literature and was shocked by the number of seemingly important and significant published findings that were later reversed in subsequent studies, Freedman boils down what I consider to be the two most important messages that derive from Ioannidis’ work:
This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results–and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises–after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.
Of course, I’ve discussed the problems of publication bias multiple times before right here on this very blog. Contrary to the pharma conspiracy-mongering of many CAM advocates, the more common reason for bias in the medical literature is what is described above: simply confirming previously published results is not nearly as interesting as publishing something new and provocative. Scientists know it; journal editors know it. In fact, this is a far more likely problem than the fear of undermining the work of respected colleagues, although I have little doubt that that fear is sometimes operative. The reason, again, is that novel and controversial findings are more interesting and therefore more attractive to publish. A young investigator doesn’t make a name for himself by simply agreeing with respected colleagues. He makes a name for himself by carving out a niche, and even more so by showing that commonly accepted science has been wrong. Indeed, I would argue that this is the very reason that comparative effectiveness research (CER) is given such short shrift in the medical literature, so much so that the government has decided to encourage it in the Patient Protection and Affordable Care Act. CER is nothing more than comparing already existing and validated therapies head-to-head to see which is more effective. To most scientists, nothing could be more boring, no matter how important CER is. Until recently, doing CER was a good way to bury a medical academic career in the backwaters. Hopefully that will change, but to my mind the very problems Ioannidis points out are part of the reason why CER has had such rough sledding in achieving respectability.
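To see how quickly a publish-only-the-positive bias can fill the literature with false findings, here is a minimal simulation sketch of the “five teams” scenario Freedman describes above. It is my own illustration, not a calculation from Freedman’s article or Ioannidis’ papers, and the numbers of hypotheses and teams are arbitrary assumptions chosen purely for demonstration:

```python
# Minimal sketch (illustrative assumptions, not from the article): many truly
# false hypotheses are each tested by several independent teams, and a
# hypothesis gets "published" if any one team lands a p < 0.05 result.
import random

random.seed(42)

N_HYPOTHESES = 10_000   # hypothetical false hypotheses "making the rounds"
TEAMS = 5               # independent teams testing each one
ALPHA = 0.05            # conventional significance threshold

published_false_positives = 0
for _ in range(N_HYPOTHESES):
    # When a hypothesis is false, each team's chance of a false positive
    # is roughly ALPHA; one lucky team is enough to generate a publication.
    if any(random.random() < ALPHA for _ in range(TEAMS)):
        published_false_positives += 1

print(f"False hypotheses yielding at least one publishable 'positive' result: "
      f"{published_false_positives / N_HYPOTHESES:.1%}")
# Expected value: 1 - 0.95**5, or about 23%.
```

Even though, on average, four out of five teams correctly find nothing, roughly a quarter of these entirely false hypotheses still generate at least one publishable “positive” result, and that is the result your doctor reads about in the journal and you hear about on the evening news.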
More importantly, what Freedman appears (at least to me) to portray as a serious, nigh unfixable problem in the medical research that undergirds SBM is actually its greatest strength: it changes with the evidence. Yes, there is a bias towards publishing striking new findings and not publishing (or at least not publishing in highly prestigious journals) less striking or negative findings. This has been a well-known bias that’s been bemoaned for decades; indeed, I remember learning about it in medical school, and you don’t want to know how long ago I went to medical school.
Even so, Freedman inadvertently echoes a message that we at SBM have discussed many times, namely that high quality evidence is essential. In the article, Freedman points out that 80% of nonrandomized trials turn out to be wrong, as are “25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Big surprise, right? Less rigorous designs produce false positives more often! Also remember, in an absolutely ideal world with a perfectly designed randomized clinical trial (RCT), by choosing p<0.05 as the cutoff for statistical significance, we would expect that at least 5% of RCTs will be wrong by random chance alone. Add type II errors to that and the number is expected to be even higher, again, just by random chance alone. When you consider these facts, then having only 10% of large randomized trials turn out to be incorrect is actually not too bad at all. Even if only 25% of all randomized trials turn out to be wrong, that isn’t all that bad either; these include smaller trials. After all, the real world is messy; trials are never perfect, nor is their analysis. The real messages should be that lesser quality trials that are unrandomized are highly unreliable and that even randomized trials should be replicated if at all possible. Unfortunately, resources are such that such trials can’t always be replicated or expanded upon, which means that we as scientists need to do our damnedest to work on improving the quality of such trials. Also, don’t forget that the probability of a trial being wrong increases as the implausibility of the hypothesis being tested increases, as Steve Novella and Alex Tabarrok have pointed out in discussing Ioannidis’ results. Unfortunately, with the rise of CAM, more and more studies are being done on highly implausible hypotheses, which will make the problem of false-positive studies even worse. Is this contributing to the problem overall? I don’t know, but that would be a really interesting hypothesis for Ioannidis and his group to study, don’t you think?
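To make that last point about plausibility concrete, here is a back-of-the-envelope sketch, again my own illustration rather than a calculation from Ioannidis’ paper; the assumed power of 0.8 and the prior-plausibility values are arbitrary choices for demonstration only:

```python
# Sketch (illustrative assumptions): the fraction of "positive" (p < 0.05)
# results that reflect a true effect depends on the prior probability that
# the hypothesis is true, the study's power, and the significance threshold.
def positive_predictive_value(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """Probability that a statistically significant result is a true positive."""
    true_positives = power * prior            # true effects correctly detected
    false_positives = alpha * (1.0 - prior)   # null effects that fluke to p < 0.05
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01, 0.001):
    print(f"prior plausibility {prior:>6}: "
          f"PPV of a p<0.05 result = {positive_predictive_value(prior):.2f}")
```

Under these assumptions, a hypothesis with 50/50 prior plausibility yields a positive result that is right about 94% of the time, while a wildly implausible hypothesis (prior on the order of 0.001, which is arguably generous for things like homeopathy) yields a “positive” trial that is a false positive roughly 98% of the time, which is exactly the problem with doing ever more studies of highly implausible CAM hypotheses.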
Another important lesson from Ioannidis’ work cited by Freedman is that hard outcomes are much more important than soft outcomes in medical studies. For example, death is the hardest outcome of all. If a treatment for a chronic condition is going to claim benefit, it behooves researchers to demonstrate that it has a measurable effect on mortality. I discussed this issue a bit in the context of the controversy over Avastin and breast cancer, where the RCTs used to justify approving Avastin for use against stage IV breast cancer found an effect on progression-free survival but not overall survival. However, this issue is not important just in cancer trials, but in any trial for an intervention that is being used to reduce mortality. “Softer” outcomes, be they progression-free survival, reductions in blood lipid levels, reductions in blood pressure, or whatever, are always easier to demonstrate than decreased mortality.
Unfortunately, one thing that comes through in Freedman’s article is something similar to other work I’ve seen from him. For instance, when Freedman wrote about Andrew Wakefield back in May, he got it so wrong that he was not even wrong when he described The Real Lesson of the Vaccines-Cause-Autism Debacle. To him, the discovery of Andrew Wakefield’s malfeasance is as nothing compared to what he sees as the corruption and level of error present in the current medical literature. In other words, Freedman presented Wakefield not as a pseudoscience maven, an aberration, someone outside the system who somehow managed to get his pseudoscience published in a respectable medical journal and thereby caused enormous damage to vaccination programs in the U.K. and beyond. Oh, no. To Freedman, Wakefield is representative of the system. One wonders, given how much he distrusts the medical literature, how Freedman actually knew Wakefield was wrong. After all, all the studies that refute Wakefield presumably suffer from the same intractable problems that Freedman sees in all medical literature. In any case, perhaps this apparent view explains why, while Freedman gets some things right in his profile of Ioannidis, he gets one thing enormously wrong:
Ioannidis initially thought the community might come out fighting. Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle, and eager to hear more. David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’s paper on highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.” Ioannidis offers a theory for the relatively calm reception. “I think that people didn’t feel I was only trying to provoke them, because I showed that it was a community problem, instead of pointing fingers at individual examples of bad research,” he says. In a sense, he gave scientists an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it–it was something everyone else did.
To say that Ioannidis’s work has been embraced would be an understatement. His PLoS Medicine paper is the most downloaded in the journal’s history, and it’s not even Ioannidis’s most-cited work–that would be a paper he published in Nature Genetics on the problems with gene-link studies. Other researchers are eager to work with him: he has published papers with 1,328 different co-authors at 538 institutions in 43 countries, he says. Last year he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world, and he was accepting an average of about five invitations a month until a case last year of excessive-travel-induced vertigo led him to cut back.
Freedman includes an anecdote about how medical practitioners are unsurprised by some of Ioannidis’ results. Unfortunately, instead of the intended interpretation, namely that physicians are aware of the problems in the medical literature described by Ioannidis and take such information into account when interpreting studies (i.e., that Ioannidis’ work simply reinforces what they know or suspect anyway), Freedman interprets physicians’ reaction to Ioannidis as “an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it–it was something everyone else did.” I suppose it’s possible that there is a grain of truth in that — but only a small grain. In reality, at least from my observations, scientists and skeptics have not only refrained from attacking Ioannidis but have actually embraced him and his findings of deficiencies in how we do clinical trials, and for the right reasons. We want to be better, and we are not afraid of criticism. Try, for instance, to imagine an Ioannidis in the world of CAM. Pretty hard, isn’t it? Then picture how a CAM-Ioannidis would be received by CAM practitioners. I bet you can’t imagine that they would shower him with praise, publications in their best journals, and far more invitations to speak at prestigious medical conferences than one person could ever possibly accept.
Yet that’s how science-based practitioners have received John Ioannidis.
In the end, Ioannidis has a message that is more about how little the general public understands the nature of science than it is about the flaws in SBM:
We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary–as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.
“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”
We should indeed. On the other hand, those of us in the trenches with individual patients don’t have the luxury of ignoring many studies that conflict (as Ioannidis suggests elsewhere in the article). Moreover, it is science that gives us our authority with patients. If patients lose trust in science, then there is little reason not to go to a homeopath. Consequently, we need to do the best we can with what exists. Nor does Ioannidis’ work mean that SBM is so hopelessly flawed that we might as well all throw up our hands and become reiki masters, which is what Freedman seems to be implying. SBM is our tool to bring the best existing care to our patients, and it is important that we know the limitations of this tool. Contrary to what CAM advocates claim, there currently is no better tool. If there were, and it could be demonstrated conclusively to be superior, I’d happily switch to using it.
To paraphrase Winston Churchill’s famous speech, many forms of medicine have been tried and will be tried in this world of sin and woe. No one, certainly not those of us at SBM, pretends that SBM is perfect or all-wise. Indeed, it has been said (mainly by me) that SBM is the worst form of medicine except all those other forms that have been tried from time to time. I add to this my own little challenge: Got a better system than SBM? Show me! Prove that it’s better! In the meantime, we should be grateful to John Ioannidis for exposing defects and problems with our system while at the same time expressing irritation at people like Freedman for overhyping them.
29 replies on “Stepping back: Lies, damned lies, and…science-based medicine?”
Don’t we have sort of a parallel experiment for Ioannidis in the world of CAM? The man in question is Professor Edzard Ernst, the UK’s first Professor of Complementary Medicine. Ernst was appointed with a mission to bring systematic research to CAM. He has consistently pointed out the holes in complementary medicine – notably, where data is poor, or absent, and also the manifest shortcomings of anecdotal experimental designs favoured by the CAM fraternity. He has also highlighted those rare herbal and other complementary remedies/treatments that are backed by evidence of acceptable quality.
So what has Ernst’s reception been within the CAM community? Has it been Ioannidis-like?
Well… err… no. Rather the opposite.
In fact, Ernst has been relentlessly attacked by the CAM advocates, including attacks in distinctly personal terms in the British national press. He also attracted the enmity of Prince Charles, aka the ‘Quacktitioner Royal’, whose acolytes tried to get Ernst sacked by his University, the University of Exeter [The University Vice-Chancellor was less than supportive, though he eventually let Ernst keep his job. The VC has, BTW, just been knighted]. Funding for Ernst’s unit, despite their excellent publication record, has been scarce. The upshot of all this is that Ernst has just taken early retirement, clearly somewhat tired of all the battles.
Just posted a comment – held in Spam filter presumably as it had a link in it – pointing out that we HAVE a parallel (or perhaps ‘contrast’) to Ioannidis in the CAM world. The man is Professor Edzard Ernst, the UK’s first Prof of Complementary Medicine.
If you wonder whether CAM folk have ’embraced’ Ernst’s critique of their methodologies, just have a quick Google and the answer will soon become apparent.
George Lundberg is the former Editor-in-Chief at the AMA. His most recent article in the April issue of The Scientist speaks to the need and the purposefulness of building a data set using “n’s” of one. That’s right. It is clear to him that “the proper study of my genome (sic) is my genome”. His comment reflects the understanding of the uniqueness of each individual’s physiology such that the notion of RDBC trials is not at the foundation of the database he is building.
It is difficult to boil down the premise of the article written by the author. First, he would like us to believe that those who practice medicine do so with “science” at the foundation of their activity. Nothing could be further from the truth. About 60% of the most common procedures that physicians practice don’t have ANY evidence behind them.
Secondly, his notion that there is no evidence behind CAM is nothing short of ignorance. Dr. Ernst in England has claimed that there is little evidence that acupuncture works. Believers in the holy grail of biochemical based medicine love such comments because they ASSUME that Ernst has done the work to make the assessment and it is accurate.
Those assumptions would be wrong. If you would like to defend Ernst and those who degrade CAM applications then someone explain why the US Military uses auricular acupuncture in the battlefield. If you know anything about the military you know that the Generals don’t let ANYTHING into the war theatre unless they are confident it works.
The technique, known as battlefield acupuncture, has been verified clinically by practitioners throughout the world. fMRI studies were used by the USAF Colonel (MD, PhD, and MPH) who created it to validate his hypothesis, and that is what led to the technique being deployed.
This is but one example in which Dr. Ernst was dead wrong and there are others. A massive list of others. The list is indeed so massive that one wonders how Ernst and others can claim that there is no evidence. He is more of a poster child for those who believe that clinicians make poor scientists.
Another example; type in PEMF (pulsed electromagnetic fields) into Pubmed and one will find hundreds of studies that have concluded efficacy in healing bones. There are FDA approved devices in the market for exactly that purpose.
The best integrative medical docs use these kinds of tools and the reason why integrative medicine continues to grow in its acceptance is because (1) they achieve results with patients that those who practice conventional medicine cannot achieve and (2) they know something about nutritional strategies that conventionally trained physicians don’t know.
So where are we?
First, and I do appreciate that this is tough to take, until we train physicians in the use of a larger toolbox with more tools in it, we will not solve the challenge of health care in this country. Remember the NYT article in March of 2009 in which the medical students at Harvard gave the Harvard medical school faculty an “F” in integrity. Why? Because they weren’t being trained to heal patients. They were being trained in how to dispense drugs.
Physicians are trained to practice what someone trained in finance might call “statistical based medicine”. In finance, the equivalent would be considered fraud because it attempts to “fit” the customer to the curve generated by the research. Sometimes it works but mostly it doesn’t. At the very least, it is inefficient and thus the corollary works well. Our health care system is inefficient – not many dispute that – at least not on the facts they don’t.
Medicine fits patients to the curve derived by drug research studies – at least the ones that are published.
Instead of defending drug companies, it seems to me that physicians ought to be up in arms over how the drug companies have painted them into a corner and degraded the art of medicine into something that is less focused on patient outcomes.
Let’s not forget that drugs work (at all) only 30%-50% of the time. Then there is a range of how well they work when they do work. That range spans everything from net negative impact to high positive impact.
Just so we are clear about pharmaceutical companies. One of the largest was given the opportunity to develop a simple test, based on the prospective patient’s genome that would help assess, with a high degree of certainty, whether the drug that the patient was about to be prescribed would fall into the 30%-50% effective category – rather than the 50%-70% ineffective category. Realizing that the drug company would only sell about 50% of the drugs that they are currently selling – they turned down the opportunity to develop the simple test and fired the chief scientist whose team had made the proposal.
True story.
As a finance person what is clear is that the only reason why conventional medicine continues to be used is because market forces are not operating in health care. We have institutionalized and baked into our system the principle that we (the taxpayers) pay for the clinician’s time. Let’s start paying only for the quality of the clinicians work product and we will “fix” a lot that is wrong.
The only problem with such a proposal of course is that physicians have grown so accustomed to being paid – regardless of their results – that they think they have a right to be paid regardless of their results.
Sorry, folks that gig is up.
Then you wouldn’t mind providing a citation about this story, so that readers can judge for themselves if your version is an accurate reflection of what happened, now, would you?
By the way, it’s a canard that just because all of us have unique genomes that “individualization” of the sort touted by alt-med is the way to go.
[Citation needed]
ROTFLOL. Do a quick search for acupuncture in this blog and you’ll find that there’s no particular reliance on Ernst. Rather, the actual scientific literature is carefully assessed.
Because there’s a particular well-placed quack who has bamboozled his superiors into letting him victimize fellow soldiers with useless treatments. Again, covered previously in great detail.
Argument from authority. Meaningless authority at that, since generals have no particular standing to speak on scientific or medical issues.
Then you should have no trouble providing the references.
More tools are always good. But they have to work for something other than defrauding the patients out of their money.
Which is a whole lot better than net negative to no impact, as with sCAM.
[Citation needed]
Ah, the irony.
@wayne miller
You suggest it is wrong to think that, “Ernst has done the work to make the assessment and it is accurate”. Is this the same Professor Ernst who has published over 100 studies on acupuncture? Here’s a good one that addresses pain, that you claim acupuncture is so effective at treating. http://www.ncbi.nlm.nih.gov/pubmed/21440191
“Numerous reviews have produced little convincing evidence that acupuncture is effective in reducing pain. Serious adverse events, including deaths, continue to be reported.”
About treating the *individual* by alt med: where to begin? ( BTW- I shall blithely step around any and all references to the “free market”- because I can. Impulse control, you know)
Last I heard, genetic testing doesn’t appear to show up in alt med’s bag of tr… I mean, *diagnostic and treatment modalities*: it remains entirely within the realm of SBM.
Alt med has perfected *creating the appearance* of unique treatments, tailored to individual needs, which goes hand-in-glove with its carefully designed self-portrayal of being compassionate- like a concerned family member or friend. Style over substance reigns.
However, if you look closely at their ideas about the causation of illness you’ll come up against a wall of mind-numbingly generic statements:
“Your chi is ‘off’”; “Your chakra’s ‘off’”; “Your nutrition is ‘off’”; “The acid/alkaline balance is ‘off’”; “Those toxins are ‘off the charts’”… boiling down complexities to simple problems easily fix-able by similarly one-trick protocols.
To boil it down even further: there is little recognition of the impact of “germs” (bacteria, viruses) and less stress upon genetic influences (see anti-vax gospel especially): illness is caused by lifestyle. There is hardly any differentiation pertaining to *which* illness- it’s *all* of them!
Those plagued by illness have brought it on themselves: they ate fast foods, non-organic produce, meat, sugar, drank coffee and (Cover the children’s ears!) alcohol, didn’t exercise two hours daily, didn’t meditate, or manage to exude spirituality enough. They lived in toxic environments, were vaccinated, used toxin-emitting products, and experienced toxic emotions in toxic relationships.
The plan involves de-toxifying and re-building via perfected nutrition, possibly supplemented by energy-re-balancing. If the complaint is HIV/AIDS, cancer, SMI, ASD/LD, arthritis, or diabetes, the course of action will probably be the same. Toe the mark, sinner!
If you were to take “wayne miller”‘s post without prior knowledge of the topics, he makes a very good case for why one should ignore Ernst and other skeptics and support CAM. He’s slick, he presented well and he came off as authoritative without being too condescending. In essence, wayne miller is a salesman.
However, if you even have a minor knowledge of how academia and scientific progress works, you’d immediately take a big salt shaker full of [citation needed] notes and just sprinkle liberally over his entire well-practiced monologue.
Unfortunately I’m betting that wayne miller is just another drive by troll that practically cut and pasted that diatribe from other continual drive-bys on skeptical sites, so the opportunity to get those citations will be entirely non-existent.
@wayne miller
Ironic that a guy in finance is chastising SBM.
You’re probably a drive-by troll, so I am not expecting you to back up your utter screed, but if not, citations for your assertions, please.
Sorry, I’m not familiar with medical statistics (old physics major). Can someone explain the difference between “30%-50% effective” and “50%-70% ineffective”, and why these are two different categories?
Thanks,
He’s just trying to be pompous in saying “whether it will work or not for a particular patient.”
I swear I come to this blog just to read the comments. The trolls keep me entertained. LOL.
wayne miller,
I’ll leave most of your claims alone and let others deal with them (although you should search here and at Science-Based Medicine about your “60% of treatments have no evidence” claim; that misunderstanding of an old study has been debunked numerous times). I will point out that not only has Dr. Ernst done reviews of the acupuncture literature, he has also done primary research himself, and he actually used to use acupuncture in his practice before seeing there wasn’t evidence for it.
The point I do want to address is your wild misinformation about “battlefield” acupuncture. I’m a military medic and know many, many other medics. I have not heard of even one medic being trained to do battlefield acupuncture, much less any of us using it in the field. There was a crank Air Force Colonel, now retired (who is a radiation oncologist and not involved in trauma care), who invented this procedure and was using his influence to push for training in battlefield acupuncture. Supposedly a couple of doctors have been trained to use it (at military hospitals, not on the battlefield), and supposedly some Army Rangers have been trained too, but acupuncture needles were not available when I deployed, nor when anyone else I know did. My wife is also a medic and works with a bunch of ER doctors who have deployed, and none of them were trained in acupuncture before deploying to work in trauma wards in Iraq and Afghanistan.
The Colonel’s idea has met with stiff resistance and absolute embarrassment from just about every military physician I have read on the subject. I have kept up with this; I have talked to several doctors and even went out of my way to visit the information page that the Colonel set up on the subject on our medical portal, just to see what his evidence was (it was laughable). Just about the most inhumane thing I can imagine doing as a medic would be to take time out from treating a severely injured military member to poke him in the ear with a gold needle.
Also, I do understand the military, and your statement that “If you know anything about the military you know that the Generals don’t let ANYTHING into the war theatre unless they are confident it works” is absolutely wrong; it just shows you aren’t familiar with how the military works: generals make ill-informed decisions all the time. Besides, this technique is not being regularly used on the battlefield. We use regular painkillers as necessary, and transport patients to the next echelon of care as soon as possible. We medics never take time to perform acupuncture in the field, and have no supplies or training even if we thought it appropriate.
Regarding individualization, I’m not saying that it works, but non-OTC homeopathy is incredibly individualized. I’m curious as to what kinds of individualizations other forms of alt-med use.
@wayne miller:
How in the world do you do a genomic study of a single person? I’m no geneticist, but I think that if it were a study of epigenetics you might be able to learn something from a single person by subjecting him/her to different environments. Is that what he means?
“Biochemical based medicine”? Do you mean pharmaceuticals? Or that, for example, acupuncture works based on qi/chi/ki? Or is just another way of saying “conventional medicine”?
What in the world does that have to do with CAM?
So there’s a widespread trend among CAM practitioner to only accept payment if the patient is satisfied with the treatment? Since otherwise CAM practitioners are in the same boat as everyone else.
You mean the same generals that are/were using dowsing rods to “detect” IEDs?
That’s not the only worthless crap to be sent to the front lines. If you think that the US military is somehow safe from political manipulation, you’re extremely naive. Lots of politicians will try to get spending for their districts, even if it means the product is worthless.
Neither you, nor the US military are immune from self-delusion.
@ArtK:
To nitpick, it was an Iraqi general doing that, not an American one. Not saying that American generals are uniquely immune to self-delusion, though.
Here it is, folks, everything you ever wanted to know about treating battlefield injuries with acupuncture…complete with real photographs to direct medics in the proper placement of acupuncture needles while tending to combat injuries:
Novel Medical Acupuncture Treatment for Active Combatants in the Battlefield
(No need to thank me, Wayne Miller…just delighted to oblige our resident “finance person”. Um, I don’t think I’ll be consulting you on my finances.)
@ lilady : you never know: he might be one of those fellows predicting the approaching fall of western currencies ( due to China’s actions) and telling you to stock up on old silver coins so you’ll be able to buy groceries and *survive* when hyper-inflation hits. Take your money out of banks, stocks, bonds, real estate and place it in _solid assets_ to protect yourself.**
Hey, it sells newsletters….
** in. your. dreams.
@ Denice Walter: Hmm, there are a number of Wayne Miller(s) on the internet, including one who deals in gold and silver coins…could be that you have ID’ed “our” Wayne Miller.
So the autism scare was all the fault of the evil medical journals and had nothing to do with newspapers and other media jumping on a juicy story and promoting it to sell their papers without making any effort to see how it fit with the overall evidence?
The paper above seems a nice balance to the one Ben Goldacre and others published recently documenting how poor the evidence for health claims is in newspapers.
@ lilady: while it was just a guess, because I encounter more fraudulent nonsense than you could possibly “shake a stick at”**- and not limited to woo : psych and ec are involved as well- certain phrases set off alarm bells of suspicion. I believe it is a gift- I speculate it has to do with the interaction of verbal ability plus (visual) field independence- runs in families.
** Old School expression -origins unknown ( to me)
G. Shelley, are you describing the article (not paper) that Orac called “not even wrong”?
No. I am not the wayne miller that sells gold on the internet. As for the “crank” colonel I referred to – your use of name calling speaks volumes.
As for the citations regarding the pharmaceutical company that was unwilling to engage in product development that would hurt sales – if you need a reference for that to validate my statement nothing I give you will satisfy your appetite anyway. Besides, I will not betray the confidence of how I know that story.
The battlefield acupuncture studies were verified by fMRI studies – which is why it was adopted.
As for Generals, I stand corrected. They don’t always do things that make sense. What else is new. That would make them like everyone else.
Thank you for the entertainment – all.
Citation?
(you should really put the term “battlefield acupuncture” in the handy dandy search box on the upper left hand side of this page)
I note that you decline to address the actual evidence demonstrating him to be a crank.
Don’t be ridiculous. You made a very precise claim about a very precise story. That’s not something which anybody could credibly expect us to simply take your word for. If you said that pharmaceutical companies are primarily profit-driven, nobody would disagree. But you made claims far beyond that, which demand evidence.
Most likely translation: I made the whole thing up.
fMRI doesn’t meaningfully verify that acupuncture works. That there are detectable effects in the brain, sure. You’re sticking needles into people, they will feel it, that will be reflected in brain activity. In no way does that demonstrate that it is doing anything USEFUL. To do that, you have to measure said useful effect.
—Johnny Tarr, Gaelic Storm
I believe the Goldacre article G. Shelley is referring to is this one. I agree that newspaper reporting of medical stories is often dreadful, and misinforms the public who either don’t know how to check on a story, or can’t be bothered.
By the way, I know a little about diagnostic testing. Wayne’s story about the diagnostic test being scrapped because it would cut drug sales seems a little unlikely to me.
Most pharmaceutical companies have a diagnostics division. They could have developed the diagnostic test commercially and easily recouped the drug sales losses through sales of the diagnostic test. There’s big money in diagnostics.
Krebiozen:
So he or she is not referring to any article/paper that is linked to or mentioned in the above article. No wonder the comment is confusing.