
Do medical errors really kill a quarter of a million people a year in the US?

It is an article of faith among believers in alternative medicine, and among many people who simply distrust conventional medicine, that conventional medicine kills. Exaggerating the number of people who die of medical complications or errors not only fits the worldview of people like Mike Adams and Joe Mercola, but is also good for business. After all, if conventional medicine is as dangerous as claimed, alternative medicine starts looking better by comparison.

In contrast, real physicians and real medical scientists are very much interested in making medicine safer and more efficacious. One way we work toward that end is by using science to learn more about disease and to develop new treatments that are at least as efficacious as existing ones but with fewer adverse reactions. Another is to use what we know to develop quality metrics against which we measure our practice and to decrease the use of low value and unnecessary tests. Indeed, I am heavily involved in just such an effort for breast cancer patients. Then, of course, we try to estimate how frequent medical errors are and how often they cause harm or even death. All of these efforts are difficult, but the last is perhaps the most difficult of all. Estimates of medical errors depend very much on how medical errors are defined, and attributing a given death to a medical error requires determining both whether the death was preventable and whether a given medical error actually led to it.

Some cases are easy. For example, if I as a surgeon operating in the abdomen were to slip and put a hole in the aorta, leading to the rapid exsanguination of the patient, it’s obvious that the error caused the patient’s death. But what about giving the wrong antibiotic to a septic patient who is critically ill with multisystem organ dysfunction? When should a given medical error be blamed for the death of a critically ill patient who had a high probability of dying even with perfect care? It’s not a straightforward question.

Perhaps that’s why estimates of the number of deaths due to medical error tend to be all over the map. Consider this: according to the CDC, there are approximately 2.6 million deaths from all causes in the US every year. Now consider the headlines from a week or two ago trumpeting medical error as the third leading cause of death in the US.

These stories all refer to an article published last week in BMJ by Martin A. Makary and Michael Daniel entitled “Medical error—the third leading cause of death in the US,” which claims that over 251,000 people die in hospitals every year as a result of medical errors. Given that, according to the CDC, only 715,000 deaths per year occur in hospitals, if Makary and Daniel’s numbers are to be believed, some 35% of inpatient deaths are due to medical errors. That is just one of many problems with this article, and there are even more problems with how the results have been reported in the press and with the recommendations made by the authors.

Error inflation?

The first thing that surprised me about this BMJ article is that it isn’t a fresh study at all. Rather, it’s a regurgitation of already existing data. It is not a “second study,” as, for example, the USA TODAY headline calls it. It’s a pooling of existing data: a point estimate of the death rate among hospitalized patients, derived from four major studies published since the Institute of Medicine (IOM) report “To Err Is Human” in 1999, extrapolated to the reported number of patients hospitalized in 2013. Basically, it’s the same sort of pooling and extrapolation performed by John James in 2013. Yet in every article I’ve seen about it, it’s described as a study. In reality, it’s more an op-ed calling for better reporting of deaths from medical errors, with extrapolations based on studies with small numbers. That’s not to denigrate the article just for that; such analyses are often useful. Rather, it’s to point out how poorly this article has been reported and how few seemed to notice that it adds exactly nothing new to the primary scientific literature.
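
Since the entire estimate rests on that pooling and extrapolation, it’s worth seeing just how little arithmetic is involved. Here is a minimal sketch in Python, assuming a simple admission-weighted pooling and a round figure for annual US hospitalizations; the per-study numbers come from the studies discussed below, and none of this is Makary and Daniel’s exact calculation:

```python
# Minimal sketch of a pooled-rate extrapolation (illustrative assumptions,
# not Makary and Daniel's exact method). Each study contributes the number
# of admissions reviewed and the deaths deemed preventable.

studies = [
    # (admissions reviewed, preventable deaths) -- figures discussed later in this post
    (795, 9.0),    # Classen et al: all 9 lethal adverse events counted as preventable
    (2341, 8.8),   # Landrigan et al: 14 lethal events x 63.1% preventability
]

US_ADMISSIONS = 35_400_000  # approximate annual US hospitalizations (assumption)

pooled_rate = sum(d for _, d in studies) / sum(n for n, _ in studies)
print(f"pooled preventable-death rate: {pooled_rate:.3%}")                    # ~0.57%
print(f"extrapolated deaths per year:  {pooled_rate * US_ADMISSIONS:,.0f}")  # ~200,000
```

Notice that a numerator of fewer than 20 actual deaths, observed across 13 hospitals’ worth of chart reviews, drives the entire six-figure national estimate.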

Here’s a passage from Dr. Makary’s article that disturbed me right off the bat:

The role of error can be complex. While many errors are non-consequential, an error can end the life of someone with a long life expectancy or accelerate an imminent death. The case in the box shows how error can contribute to death. Moving away from a requirement that only reasons for death with an ICD code can be used on death certificates could better inform healthcare research and awareness priorities.

Case history: role of medical error in patient death

A young woman recovered well after a successful transplant operation. However, she was readmitted for non-specific complaints that were evaluated with extensive tests, some of which were unnecessary, including a pericardiocentesis. She was discharged but came back to the hospital days later with intra-abdominal hemorrhage and cardiopulmonary arrest. An autopsy revealed that the needle inserted during the pericardiocentesis grazed the liver causing a pseudoaneurysm that resulted in subsequent rupture and death. The death certificate listed the cause of death as cardiovascular.

This example is nowhere near as straightforward as the authors appear to think it is. In fact, it’s an utterly horrible example. For one thing, notice the weasel wording. We’re told that the patient was evaluated with “extensive tests, some of which were unnecessary, including a pericardiocentesis.” This implies that the pericardiocentesis was one of the unnecessary tests, but an equally valid reading is that it was merely one of the extensive tests. Besides, without a lot more detail, it’s impossible to tell whether the pericardiocentesis was necessary or not. While the point that the percutaneous procedure contributed to this patient’s death is valid, how do we classify this? Delayed bleeding is a known complication of percutaneous procedures, as is damage to adjacent organs that are potentially in the path of the needle. Even in expert hands, such procedures will cause significant bleeding in some patients and even death in a handful. When such bleeding occurs, that does not necessarily mean there was a medical “error.” It might, but even in this case I can’t help but point out that most injuries to the liver from a percutaneous needle heal on their own. Indeed, ultrasound- or CT-guided liver biopsies are performed using much larger needles than any needle used for a pericardiocentesis, and bleeding is uncommon. (One study pegs it at 0.7%.) It was unfortunate indeed that this patient developed a pseudoaneurysm. Of course, this death might have been due to medical error. Perhaps the physician doing the procedure didn’t take adequate care to avoid lacerating the liver. We just don’t know. Given that the vast majority of bleeding after percutaneous procedures can’t be attributed to medical error, if you define any such bleeding complication as a medical error, you will vastly overestimate the true rate of medical error.

So let’s take a look at some of the most cited studies that make up the data used by Makary and Daniel in their commentary.

The IOM Report: To Err Is Human

This was an issue with the IOM report, To Err Is Human, which dates way, way back to 1999 and estimated that between 44,000 and 98,000 deaths per year could be attributed to medical error. The IOM came by its estimate by examining two large studies, one from Colorado and Utah and the other conducted in New York, which found that adverse events occurred in 2.9% and 3.7% of hospitalizations, respectively, and that 6.6% of those adverse events led to death in Colorado and Utah, compared with 13.6% in New York hospitals. In both studies, it was estimated that over half of these adverse events resulted from medical errors and therefore could have been prevented. Extrapolated to the 33.6 million admissions to US hospitals in 1997, these results implied that between 44,000 and 98,000 Americans die each year because of medical errors.
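
For those who like to check the arithmetic, the 44,000-98,000 range can be roughly reconstructed in a few lines of Python. The adverse event and lethality rates are the ones quoted above; the preventability fractions are my assumptions (the report said “over half”), which is why the low end here lands near 34,000 rather than the IOM’s published 44,000. The point is the structure of the calculation, not the exact figures:

```python
# Back-of-the-envelope reconstruction of the IOM's extrapolation.
# The preventability fractions below are assumptions; the IOM's exact
# per-study inputs differed somewhat.

ADMISSIONS_1997 = 33_600_000  # US hospital admissions in 1997

def preventable_deaths(ae_rate, death_given_ae, preventable_fraction):
    """Annual deaths from preventable adverse events, by simple multiplication."""
    return ADMISSIONS_1997 * ae_rate * death_given_ae * preventable_fraction

low = preventable_deaths(0.029, 0.066, 0.53)   # Colorado/Utah study
high = preventable_deaths(0.037, 0.136, 0.58)  # New York study
print(f"low:  {low:,.0f}")   # ~34,000 (the IOM's published low bound was 44,000)
print(f"high: {high:,.0f}")  # ~98,000
```

Three small rates multiplied together and scaled to 33.6 million admissions: that is the entire machinery behind one of the most quoted statistics in health policy.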

To be honest, I didn’t have that big of a problem with the IOM study. I thought it was a bit too broad in defining what constituted a medical error. Indeed, one of the authors of one of the studies used by the IOM related in a New England Journal of Medicine article in 2000:

In both studies, two investigators subsequently reviewed the data and reclassified the events as preventable or not preventable. Preventability is difficult to determine because it is often influenced by decisions about expenditures. For example, if every patient were tested for drug allergies before being given a prescription for antibiotics, many drug reactions would be prevented. From this perspective, all allergic reactions to antibiotics, which are adverse events according to the studies’ definitions, are preventable. But such preventive testing would not be cost effective, so we did not classify all drug reactions as preventable adverse events.

In both studies, we agreed among ourselves about whether events should be classified as preventable or not preventable, but these decisions do not necessarily reflect the views of the average physician and certainly do not mean that all preventable adverse events were blunders. For instance, surgeons know that postoperative hemorrhage occurs in a certain number of cases, but with proper surgical technique, the rate decreases. Even with the best surgical technique and proper precautions, however, a hemorrhage can occur. We classified most postoperative hemorrhages resulting in the transfer of patients back to the operating room after simple procedures (such as hysterectomy or appendectomy) as preventable, even though in most cases there was no apparent blunder or slip-up by the surgeon. The IOM report refers to these cases as medical errors, which to some observers may seem inappropriate.

Certainly, most surgeons consider it inappropriate. Postoperative hemorrhage is a known complication of any surgery. Many times, when a surgeon takes a patient back to the operating room for postoperative hemorrhage, no specific cause is found: no obvious blood vessel whose tie fell off, for example. The very definition of medical error used in many of these studies will therefore inflate the apparent rate. Physicians know that not every adverse event is preventable or due to medical error. However, choices about how to define medical errors had to be made, and, given the difficulty of determining which adverse events (like postoperative bleeding) are due to physician error, system error, or just the plain bad luck of being the patient in whom an accepted potential complication happens, it’s not surprising that many investigators prefer a simpler, broader definition of “medical error.”

The IOM report provoked a lot of awful “doctors are killing lots of patients” reporting, reporting that frustrated many of the investigators who carried out the underlying studies because it distracted from the true message of the report: to encourage further investigation and a “culture of safety” in hospitals to improve the safety of patient care. For all its flaws, though, the IOM report deserves a lot of the credit for sparking the movement to improve quality and decrease medical errors over the last 17 years. At the time, I tended to agree with IOM panel member Lucian Leape, MD, who pointed out that, even if the IOM report did greatly overestimate the number of deaths due to medical errors, “Is it somehow better if the number is only 20,000 deaths? No, that’s still horrible, and we need to fix it.” Exactly.

Classen et al: Quadrupling the IOM number

In 2011, another study was published by David Classen et al involving three tertiary care hospitals and using, among other measures, the Institute for Healthcare Improvement’s Global Trigger Tool. This study found four to ten times as many deaths attributable to medical error as the IOM did; i.e., approximately 400,000 per year. If this were true, medical errors would be approaching the number two cause of death in the US, cancer, which claims 585,000 people per year.

Classen et al noted that the adverse event tracking methods in common use at the time of the IOM report missed a lot of adverse events; their trigger tool found up to ten times as many. This is not surprising: regardless of industry or topic, any voluntary reporting system for bad things is going to underreport those bad things. People don’t like admitting that something bad happened; it’s human nature. As a result, after the IOM report, investigators tried to develop automated tools to mine administrative data (data reported to insurance companies for purposes of reimbursement) for discharge codes that correlate with adverse events:

The Global Trigger Tool uses specific methods for reviewing medical charts. Closed patient charts are reviewed by two or three employees—usually nurses and pharmacists, who are trained to review the charts in a systematic manner by looking at discharge codes, discharge summaries, medications, lab results, operation records, nursing notes, physician progress notes, and other notes or comments to determine whether there is a “trigger” in the chart. A trigger could be a notation indicating, for example, a medication stop order, an abnormal lab result, or use of an antidote medication. Any notation of a trigger leads to further investigation into whether an adverse event occurred and how severe the event was. A physician ultimately has to examine and sign off on this chart review.

Also, Classen et al, like previous investigators, did not really try to distinguish preventable from unpreventable adverse events:

We used the following definition for harm: “unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment, or hospitalization, or that results in death.” Because of prior work with Trigger Tools and the belief that ultimately all adverse events may be preventable, we did not attempt to evaluate the preventability or ameliorability (whether harm could have been reduced if a different approach had been taken) of these adverse events. All events found were classified using an adaptation of the National Coordinating Council for Medication Error Reporting and Prevention’s Index for Categorizing Errors.

There’s the problem right there. Not all adverse events are preventable. We can argue day and night about what percentage of adverse events are potentially preventable; there are sincere, evidence-based disagreements about how to determine that number. The problem comes when adverse events are automatically equated with medical errors. The two are not the same. To be fair, Classen et al do not hide this; the authors are very up front that they deem 100% of the adverse events they detected to be potentially preventable. In any case, Classen et al found, in 795 hospital admissions at three hospitals, an adverse event rate of 33.2% and a lethal adverse event rate of 1.1%, or 9 deaths.

Again, in fairness, I note that Classen et al never extrapolated their numbers to all hospital admissions. Nor did their study classify inpatient adverse events or deaths as preventable (i.e., due to medical error) or unpreventable. Rather, the purpose of their study was to demonstrate how traditional reporting methods underestimate adverse events and how the Global Trigger Tool is far more sensitive at detecting such events than voluntary reporting. They showed that. None of that stopped Makary and Daniel from taking this one study of fewer than 1,000 hospital admissions and extrapolating it to 400,000 preventable deaths in hospitals per year. That is the peril of extrapolating from such small numbers.
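
To put a number on that peril: with only 9 deaths in the numerator, the statistical uncertainty alone is enormous, before you even ask whether those deaths were truly preventable. Here is a quick sketch using a standard Wilson score interval and a round admissions figure, both my choices for illustration, not anything in Classen et al or the BMJ commentary:

```python
import math

def wilson_ci(deaths, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = deaths / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

US_ADMISSIONS = 35_400_000   # approximate annual US hospitalizations (assumption)
lo, hi = wilson_ci(9, 795)   # 9 lethal adverse events in 795 admissions
print(f"lethal AE rate, 95% CI: {lo:.2%} to {hi:.2%}")
print(f"extrapolated deaths:    {lo * US_ADMISSIONS:,.0f} to {hi * US_ADMISSIONS:,.0f}")
```

The sampling error in a 9-death numerator, all by itself, spans a range of roughly 200,000 to 750,000 extrapolated deaths per year, which is a good reason not to build national mortality statistics on 795 charts.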

Landrigan et al: Not as high as Classen, but still too high and not improving

Another study examining the use of the Global Trigger Tool was carried out by Landrigan et al in 10 North Carolina hospitals and published in the NEJM in 2010. I mention it precisely because it uses methods similar to those of Classen et al and comes up with dramatically lower numbers of preventable deaths. It is also a useful study because it examines temporal trends in estimates of harm, asking the question, “Have statewide rates of harm been decreasing over time in North Carolina?” In brief, examining 2,341 hospital admissions across the ten North Carolina hospitals chosen, and using internal and external reviewers to judge whether the adverse events detected were preventable, Landrigan et al found a reduction in preventable harms identified by external reviewers that did not quite reach statistical significance (P=0.06), with no significant change in the overall rate of harms. This is a depressing finding, although one wonders whether it might have reached statistical significance if more hospitals had been included.

Overall, despite the lower percentages, the findings of Landrigan et al are not dissimilar to those of Classen et al once one takes into account that Landrigan et al deemed 63.1% of the adverse events they identified to be preventable, as opposed to the 100% that Classen et al chose. In their 2,341 hospital admissions, Landrigan et al found an adverse event rate of 18.1% and a lethal adverse event rate of 0.6%, and deemed 14 deaths to have been preventable, with the data summarized in the graph below:

[Figure: bar charts summarizing adverse event rates, lethal adverse event rates, and preventable deaths across the studies discussed]

I also note that, like Classen et al, Landrigan et al made no effort to extrapolate their findings to the whole of the United States. That was not the purpose of their study. Rather, the purpose of their study was to ask whether rates of adverse events were declining in North Carolina hospitals from 2002 to 2007. None of that stopped Makary and Daniel from extrapolating from Landrigan’s data to close to 135,000 preventable deaths.
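
For what it’s worth, that ~135,000 figure can be approximately reproduced from the numbers above, if one treats the 14 deaths as lethal adverse events and applies the 63.1% preventability fraction to them. This is my reconstruction of the arithmetic, not a calculation Makary and Daniel show:

```python
# Rough reconstruction of the ~135,000 extrapolation from Landrigan et al
# (my arithmetic; Makary and Daniel's exact inputs may differ slightly).

admissions_reviewed = 2_341
lethal_events = 14
preventable_fraction = 0.631   # fraction of harms deemed preventable
US_ADMISSIONS = 35_400_000     # approximate annual US hospitalizations (assumption)

rate = lethal_events * preventable_fraction / admissions_reviewed
print(f"preventable lethal-event rate: {rate:.3%}")                    # ~0.377%
print(f"extrapolated deaths per year:  {rate * US_ADMISSIONS:,.0f}")   # ~134,000
```

Again, the national estimate hinges on roughly nine preventable deaths actually observed.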

HealthGrades’ failing grade

By far the largest study cited by Makary and Daniel is the HealthGrades Quality Study, which has the advantage of having analyzed patient outcome data for nearly every hospital in the US using data from the Centers for Medicare & Medicaid Services. Indeed, 37 million Medicare discharges from 2000-2002 were examined using AHRQ’s Patient Safety Indicator (PSI) software application (Version 2.1, Revision 1, March 2004). The authors identified the rates of 16 patient safety incidents relevant to the Medicare population. The four key findings:

  1. Approximately 1.14 million total patient safety incidents occurred among the 37 million hospitalizations in the Medicare population from 2000 through 2002.
  2. The PSIs with the highest incident rates per 1,000 hospitalizations at risk were Failure to Rescue, Decubitus Ulcer, and Post-operative Sepsis. These three patient safety incidents accounted for almost 60% of all patient safety incidents among Medicare patients hospitalized from 2000 through 2002.
  3. Of the total of 323,993 deaths among patients who experienced one or more PSIs from 2000 through 2002, 263,864, or 81%, of these deaths were potentially attributable to the patient safety incident(s).
  4. Failure to Rescue (i.e., failure to diagnose and treat in time) and Death in Low Mortality Diagnostic Related Groups (i.e., unexpected death in a low risk hospitalization) accounted for almost 75% of all mortality attributable to patient safety incidents.

One notes that these data were all derived from Medicare recipients, which means that the vast majority of these patients were over 65. Indeed, the authors of the report note as much, pointing out that Medicare patients have much higher patient safety incident rates, “particularly for Post-operative Respiratory Failure and Death in Low Mortality DRG where the relative incident rate differences were 85% and 55% higher, respectively, in the Medicare population as compared to all patients.” So from the fact alone that this is a study of Medicare recipients, who are much older than non-Medicare recipients, you know that it is going to skew towards sicker patients and a higher rate of adverse events, even if the care they received was completely free of error. Still, that didn’t stop Makary and Daniel from including this study and estimating 251,000 potentially preventable hospital deaths per year.

Sloppy language, sloppy thinking

No one, least of all I, denies that medical errors and potentially substandard care (again, the two are not the same thing, although there is overlap) are a major problem. If I didn’t believe that, I wouldn’t have devoted so much of my time over the last three years to quality improvement in breast cancer care, and, as I’ve noted before, I’ve frequently been surprised at the variability in utilization of various treatments among hospitals just in my state.

I’ll paraphrase what the IOM said in its 1999 report: you cannot improve what you cannot measure. The problem is that everybody seems to be using different language and terms for what we are measuring. For example, Makary and Daniel argue:

Human error is inevitable. Although we cannot eliminate human error, we can better measure the problem to design safer systems mitigating its frequency, visibility, and consequences. Strategies to reduce death from medical care should include three steps: making errors more visible when they occur so their effects can be intercepted; having remedies at hand to rescue patients; and making errors less frequent by following principles that take human limitations into account (fig 2⇓). This multitier approach necessitates guidance from reliable data.

It’s hard to disagree with this. Who can argue with the need for reliable data upon which to base our recommendations and efforts to improve quality of care? However, the devil, as they say, is always in the details, and language matters. Adverse events happen even in the absence of medical errors. Many adverse events are not preventable and do not imply medical error or substandard medical care. Moreover, determining whether a given medical error directly caused or contributed to a given death in the hospital is far from straightforward in most cases. That’s why I don’t like the term “medical errors” in the context of this discussion, except in egregious cases: particularly in the lay press, it is often used to imply that any potentially preventable death must have been due to an error. Makary and Daniel fall into that trap in perhaps the most quoted part of their BMJ article:

There are several possible strategies to estimate accurate national statistics for death due to medical error. Instead of simply requiring cause of death, death certificates could contain an extra field asking whether a preventable complication stemming from the patient’s medical care contributed to the death. An early experience asking physicians to comment on the potential preventability of inpatient deaths immediately after they occurred resulted in an 89% response rate. Another strategy would be for hospitals to carry out a rapid and efficient independent investigation into deaths to determine the potential contribution of error. A root cause analysis approach would enable local learning while using medicolegal protections to maintain anonymity. Standardized data collection and reporting processes are needed to build up an accurate national picture of the problem. Measuring the consequences of medical care on patient outcomes is an important prerequisite to creating a culture of learning from our mistakes, thereby advancing the science of safety and moving us closer towards the Institute of Medicine’s goal of creating learning health systems.

Note that in the first sentence they refer to “death due to medical error,” while in the second they propose asking whether a “preventable complication stemming from the patient’s medical care contributed to the death.” This conflates potentially preventable adverse events and deaths with medical errors, when the two are not the same. Rather, I (and many other investigators) prefer to divide such deaths into preventable and unpreventable. Unpreventable deaths are those that no intervention could have prevented, such as death from terminal cancer. Preventable deaths include, yes, deaths from medical error, but they also include deaths that might have been prevented if, for example, a patient’s deterioration had been picked up sooner. Whether failing to pick up such deterioration is an “error” or the result of a problem in the system might or might not be clear. Unfortunately, conflating deaths due to medical error with potentially preventable deaths only provides ammunition to quacks like the one currently engaged in a campaign against me.

How much death is due to medical error, anyway?

I’ll conclude by giving my answer to the question all of these studies ask, starting with the IOM report: how many deaths in the US are due to medical errors? The answer is: I don’t know! And neither do Makary and Daniel, or anyone else, for sure. I do know that there might be a couple of hundred thousand possibly preventable deaths in hospitals every year, but that number might be much lower or higher depending on how you define “preventable.” I’m also pretty sure that medical errors, in and of themselves, are not the number three cause of death. That’s because medical errors rarely occur in isolation from serious medical conditions, which makes it very easy to attribute a death primarily to a medical error, whether or not such attribution is appropriate or justified. That figure of 250,000 almost certainly includes a lot of deaths that were not primarily due to medical error, given that it represents some 9% of all deaths every year.

But it’s even worse than that. Whenever you see an estimate of how many deaths are “deaths by medicine,” it’s very helpful to compare that estimate with what we know, to assess its plausibility. As I mentioned above, according to the CDC, of the 2.6 million deaths that occur every year in the US, 715,000 occur in hospitals, which means that, if Makary’s estimates are correct, 35% of all hospital deaths are due to medical errors. And the plausibility of the upper estimate is worse still. Remember that the upper estimate used by Makary and Daniel is 400,000 inpatient deaths due to medical error. That’s 56% (yes, 56%) of all inpatient deaths. Seriously? It’s just not anywhere near plausible that one-third to over one-half of all inpatient deaths in the US are due to medical error. It just isn’t.
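
That plausibility check is simple division, and worth doing explicitly. A sketch, using the CDC totals quoted above and the two BMJ-derived estimates:

```python
# Sanity check: what fraction of deaths would the estimates imply?
# Death totals are the CDC figures quoted in this post.

TOTAL_US_DEATHS = 2_600_000   # all-cause US deaths per year
HOSPITAL_DEATHS = 715_000     # in-hospital deaths per year

for label, estimate in [("Makary/Daniel point estimate", 251_000),
                        ("Classen-derived upper bound", 400_000)]:
    print(f"{label}: {estimate / HOSPITAL_DEATHS:.1%} of hospital deaths, "
          f"{estimate / TOTAL_US_DEATHS:.1%} of all US deaths")
```

Any estimate implying that a third to over half of everyone who dies in a US hospital was killed by a medical error should set off alarm bells on its face.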

On its face, such a claim is very hard to believe, especially if you consider that, of those who died in a hospital, 75% were age 65 and over, and 27% were age 85 and over. That’s a lot of people prone to dying because they are old and ill, regardless of how good their care was. Add to that the fact that between 2000 and 2010, hospital deaths decreased 8% even though the number of hospitalizations increased 11%, and Makary’s numbers become less and less credible.

Here are some other things I know. I know that the risk of death and complications is a fairly meaningless number unless weighed against the benefits of medical care, a point that Harriet Hall made long ago, noting for example that “an insulin reaction counts as an adverse drug reaction, but if the patient weren’t taking insulin he probably wouldn’t be alive to have a reaction.” I also know that Makary’s suggestion that there should be a field on death certificates asking whether a problem or error related to patient care contributed to a patient’s death will be a non-starter in the litigious United States of America, promises of anonymity notwithstanding.

Over the last three years, I’ve learned from firsthand experience just how difficult it is to improve the quality of patient care. I’ve also learned from firsthand experience that nowhere near all adverse outcomes are due to negligence or error on the part of physicians and nurses. None of this is to say that every effort shouldn’t be made to improve patient safety; absolutely, that should be a top health care policy priority. It’s an effort that will require the rigorous application of science-based medicine, expenditures to make changes in the health care system, and agreement on exactly how to define and measure medical errors. After all, one death due to medical error is one too many, and even if the number is “only” 20,000 a year, that is still too high and demands urgent attention. Unfortunately, I also know that, human systems being what they are, the rate will never be reduced to zero. That shouldn’t stop us from trying to make that number as close to zero as we can.

ADDENDUM: Here’s a nice video explanation of why the “deadly doctor” gambit is so dubious.

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35 year old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

66 replies on “Do medical errors really kill a quarter of a million people a year in the US?”

This paper has been published in a peer reviewed journal with an impact factor of more than 17. In order to be credible, you should send your own analysis to a peer reviewed journal with at least an equivalent IF. 😉
I fear that your analysis is as nonsensical as the paper itself. A good three-line comment has been posted at Science Based Medicine by James Edwards, which may save the reader some time.

The core fallacy of SCAM is to count all detriments of medicine while ignoring all benefits. In the world of “death by medicine” a patient who presents with an aneurysm and dies on the table was killed by medicine – although the actual trade-off was perhaps a 25% chance of death versus 100% if left untreated.

Medicine does make mistakes – and is honest about it, unlike SCAM (just look at the chiropractors in outright denial over chiropractic induced stroke). That doesn’t mean that it is valid to count up all harms which occur during medical treatment and present that as if it were the whole story.

Daniel — Y’know, if you’re going to refer to a comment on another blog, a link might be helpful. Seeing as how I have a class to prepare, I only skimmed Orac’s article, but it seemed pretty decent to me.

Orac concludes,

Unfortunately, I also know that, human systems being what they are, the rate will never be reduced to zero.

MJD says,

Medical errors can only reach zero through the intervention of a moon god or in the absence of evolution.

Medical benefits keep the trillion, trillion, trillion atoms in the human system together and functioning – Thx

DC, I cannot locate the comment, but gave up after several hundred entries. Can you just quote it, since it’s so short?

And in case anyone was wondering, the comment was something most people would agree with: “you also need to consider lives saved.”
But this is not a good comment; it isn’t even really relevant. Even if you save 90 lives, losing 10 because of incompetence or systemic errors that could easily be avoided (by introducing checklists or whatever, rather than relying on the memory of the surgeon) shouldn’t be shrugged off with a “meh.”
It also doesn’t address anything in Orac’s article, or the similar one by Dr Gorski (so similar that I am inclined to suspect collusion), which is to question how valid the stats are. To suggest that once something has passed peer review, that is sufficient and any criticism should go through the same process or can be ignored, is a little bizarre.
A more reasonable objection might be that it doesn’t take into account how many would have died anyway. If someone has a couple of hours to live, but surgery would save them, and the surgeon makes an error that kills them, how fair is it to say they were killed by medical error? What if their expected life is a couple of months?

#6

… James Edwards said: “We could reduce medical error deaths to zero by simply discontinuing all medical care. (Obviously not realistic) But this leads to the point that such statistics should include a number of lives saved by medical treatment as compared to deaths due to errors.”

DC, for the benefit of the rest of us dullards, perhaps you could explain why you “fear” that Orac’s analysis is “nonsensical”?

(I think the verb you want is “think”, but “fear” does tend to soften it a bit.)

The main flaw in the BMJ paper is making a separate category of medical errors; the second flaw is claiming that we can say errors lead to death in the same way that, for instance, heart disease does. There is a clear confusion with iatrogenesis, which is a concept totally distinct from medical error. If a person dies from anti-cancer therapy, that is not medical error but iatrogenesis. And I gave the example of a woman who was not treated despite a mammogram showing a cancer. In this case, there was no iatrogenesis. In case of death, it would have been a death by cancer, and it would have been controversial whether lack of treatment was a cause of death.
CAM practitioners can easily argue on the question of medical error, because error is specific to science. On the question of iatrogenesis, they do better too, because sugar pills and small needles have fewer side effects than effective drugs and scalpels.

CAM practitioners can easily argue on the question of medical error, because error is specific to science. On the question of iatrogenesis, they do better too, because sugar pills and small needles have fewer side effects than effective drugs and scalpels.

Also, medicine is actually prepared to believe it can be wrong, so takes steps to track and correct errors and their causes, whereas no form of quackery has any mechanism for self-correction.

Indeed, it is impossible for quackery to have any mechanism for self-correction. As soon as you allow objective tests of the kind that might show error, you have to accept the outcomes of tests showing that your nonsense doesn’t work.

I call this the “Deadly Doctor Gambit”. I made a video giving only a very high-level view of this statistic.

OMG Concordance! Hi! Long-time fan of your YouTube channel here!

…it is not too lively these days though… :C

@ Guy Chapman
“Also, medicine is actually prepared to believe it can be wrong”
Unfortunately there are still doctors who are reluctant to acknowledge their errors.

@Daniel: Yes, this is true, but the fundamental difference between modern medicine and the bad old days of bloodletting and purging is deference, at a fundamental level, to objective testing even when it contradicts your beliefs. There are some doctors who are arrogant (including many of the more egregious cranks such as Wakefield) but most doctors these days are not.

They are well-informed and prepared to state the facts, though, and some people with beliefs inconsistent with reality see this as arrogance. In fact, quacks, charlatans and cranks perceive any dissent as arrogance.

I would add that it is medical error to administer SSRIs to depressed persons due to the subsequent propensity of them to brush up on their riflery at the nearest Catholic girl’s school.

@ Guy Chapman:

I’m in total agreement with you.

Alt med, on the other hand, doesn’t monitor itself for error; perhaps its only self-criticism occurs when a practitioner critiques other practitioners,
but usually they find a way to blame the patient (who didn’t follow the entire protocol, who cheated, who wasn’t spiritual enough), SBM (earlier treatments ruined the cure), or both.

Unfortunately, the topic of death caused by SBM has been the subject of articles, books and films by alt med advocates: most notoriously ‘Death by Medicine’ (( shudder)) by Gary Null and assorted cranks ( Carolyn Dean amongst them) in book or article form or as a FULL LENGTH FILM – available for free over the internet.

IIRC it says 600K or 700K deaths are caused by medicine a year (which includes error). Would that be about one in three?
If they were going to pull a number out of the ‘air’, at least they chose one that stands out. Confabulists aren’t limited by realistic estimates.

IIRC it says 600K or 700K deaths are caused by medicine a year ( which includes error).

That’s suspiciously close to the total number of in-hospital deaths, which leads me to suspect that they are attributing all in-hospital deaths to “medicine”.

I did have a question about what is included in the numerator and denominator above. The 2.6 million total deaths seemed low to me, so I clicked over to the CDC site (just a quick perusal, as I have neither the time nor the expertise to drill down too deeply today), and while it’s not explicitly stated I suspect infant mortality cases are not included in that total (rates are about 8 per thousand for the all-cause number and 6 per thousand for infant mortality–the combined numbers are plausible for a life expectancy of 75-80 years, while if the total number included infant mortality, the death rate from all other causes would be much too low). However, there is some infant mortality in hospitals–some never leave the neonatal intensive care unit alive, while others are brought in for severe illnesses. What is not clear is whether the hospital deaths number includes infant mortality cases, and if so how much the overall picture would change.

Eric – Are these only in-hospital deaths? A lot of people die elsewhere. Both my parents died at home.

@ Eric Lund:

Well, some of these people believe that doctors are deadly so perhaps they DO attribute all in-hospital deaths to them.

Even if human medical personnel made zero errors, hospitals would still be dangerous places.

That’s an argument for reducing nosocomial infections, not for forgoing a needed trip to the hospital. Do you propose the solution is to stay away from hospitals?

palindrom@21: The OP quotes CDC statistics that 715K people died in US hospitals during the most recent year for which data are available. I don’t know for sure that this is the number Denice recalls the alt-med types claiming is the total annual deaths from medicine, and coincidences do happen. However, especially when dealing with the alt-med crowd, that’s not the way to bet.

Here’s an approach. You compare different hospital death rates for the same condition – pick a common one – and risk adjust the patients on arrival.

Differences in survival rates are down to poor care if you get the numbers. Remember common conditions.

Next you test to see if the difference is down to poor care. You put in place a scheme where the poorly performing hospital gets staff who know what they are doing to run it for a few years. If the performance picks up, you know that it was down to poor care.

So it’s perfectly testable to determine, for the major causes, whether poor care contributes to or causes the deaths.

@Nick: You may recall that this was exactly what happened with cardiac care in the UK, leading to a recommendation to concentrate paediatric heart surgery in specialist centres. That was, to put it mildly, politically complex.

If I have a burst appendix, I’ll take the .1% chance of dying in a hospital due to medical error over the 100% chance I’ll die without going to the hospital.

Even if human medical personnel made zero errors, hospitals would still be dangerous places.

So what was the alternative when I was bleeding out after giving birth? I’ll wait.

Thanks for that, Orac! Honored to contribute to your much more in-depth analysis and clinical perspective.

I’m so fed up with these unrealistic views of what constitutes a patient death from an “error.” I recently participated in a quality review of a case where a very elderly patient had a bowel infarct and had emergency surgery. He was never able to come off the vent and eventually died. He did develop a ventilator-associated pneumonia…so his death would be counted as “medical error,” even though this guy would have definitely died without surgery, and even with the most perfect care the mortality rate approaches 80%. But no, we’ve got to blame the doctors!!!!

Delphine: Oh don’t mind SN, he’s just sore that doctors don’t do lobotomies, tongue extractions and eye extractions anymore, so he’ll never find that perfect woman.

In my family, we figure one major or attempted error per day in the hospital.

We keep a hawk eye out for them. We still plan to get out ASAP, even though some of our alternative nutrition practices can prevent damage, reduce sepsis risks, and speed recovery enough to do things ordinary patients simply can’t do.

sCAM artists routinely use this type of data in their arguments against ‘real’ medicine and for ‘pretend’ medicine.

The obvious fallacy in this argument is that we do not know how much morbidity and mortality occurs as a result of ‘pretend’ medicine, both directly and indirectly.

For example:

A 19-month-old child with bacterial meningitis is treated with various herbs and spices by well-intentioned but delusional parents without improvement, and then by a naturopath who does not bother with the tedium of history taking and examination before prescribing an ineffective remedy.

The child then deteriorates to the point of respiratory arrest; an ambulance is called and the child subsequently dies in the ICU of multiple organ failure caused by bacterial sepsis.

This case then goes to the ICU’s morbidity and mortality meeting, where the case is dissected and discussed, and measures to prevent such an event occurring in the future are implemented.

The naturopath responds by taking their website offline. No regret or remorse is expressed. No review of practice is implemented. Just move on, blame God and keep selling those useless remedies.

‘Real’ medicine is far from perfect, but at least there is a mechanism where errors can be detected, reviewed, published and acted upon. No such mechanism exists in sCAM world; just denial and moaning about how dangerous “allopathic medicine” is.

@ DrRJM

This case then goes to the ICU’s morbidity and mortality meeting,

I would assume the case also goes on the list of “medical errors / death while in the hospital’s staff care”, the sort of list later used to try to make studies like the one which started this thread’s topic.

IOW, CAM practitioners are involved in a few of these reported medical errors. Certainly not all of them, not even a major part of them; but still enough to get their share of the burns in their hurry to immolate mainstream medicine.

That CAM supporters take advantage of this study is not surprising, but what we must acknowledge is that this type of paper is a perfect example of the failure of peer review in Impact Factor driven research.

Daniel @ #38:

I agree that peer-review has significant limitations.

IF matters to academics, but most clinicians I work with judge a paper on its scientific merit rather than the IF of the journal it is published in.

We run a weekly Journal Club, and any paper of significance to us gets forensically dissected, much as Orac has done with this and other papers.

The main problem we run into these days is the sheer number of journals out there.

@ DrRJM
Many clinicians I know pay more attention to the journal where the study is published than to arguments evidencing fallacies in these papers.
From Orac:
“Not all studies are created equal, nor are all journals. There are lots of journals out there that publish weak science because they don’t have the reputation of top tier journals like, say, Cell, Science, or the New England Journal of Medicine.”

@Daniel Corcos: Your quote indicates that Orac pays attention to both. Who pays attention only to the journal?

This quote shows that he pays a lot of attention to the journal. I conclude that he does not pay as much attention to demonstration of the fallacies of the papers from discussions I had with him in this forum and in SBM, but I think that he is not ready to have another discussion about it.

That’s amusing, accusing him of using fallacies and relying on proof by assertion as your “evidence”.

If I remember correctly, back in the 1990s they were marking ODs of prescription drugs as medical error. Weird.

The graphic has some nice granite-like texturing going on with the bars. It kinda reminds me of tombstones.

DrRJM:

What makes that particular case especially relevant is that the parents have been overtly blaming the child’s death on medicine — saying that he only died because the ambulance lacked particular pediatric life support facilities. After all, they didn’t even call 911 until the child was in respiratory arrest, and it is true that ambulances do not generally carry all sizes of endotracheal tubes.

Which brings us to another category of “deaths by medical error” claims — waiting until the patient is too ill for a good chance of success, and then blaming medicine for failing to save them. Compare people who are advised to get chemo, refuse, wait a year, then come back when the cancer is much more advanced, whereupon the quack who treated them in the meantime can point at the subsequent failure to save them as showing how hollow medical promises were to begin with. It becomes a self-fulfilling prophecy.

@Daniel Corcos, I think the only thing Orac is probably tired of is your monomania, I know I am.

@ John Philips
I did not say he was not tired of what you call my monomania. Have you evidence that it is the only thing he is tired of?

@ Guy Chapman #43
Exactly what I was saying. You are relying on authority. I was talking about fallacies in papers published in high impact journals. Proof by assertion would mean that I repeat the same proposition in the face of evidence showing the contrary. Where is the evidence showing that there is no sustainable increase of breast cancer incidence after mammography screening? Where is the evidence showing that radiation induced cancer cannot occur before 10 years after irradiation? And how do you post your answer to a specific comment?

@Daniel Corcos, I think the only thing Orac is probably tired of is your monomania, I know I am.

Bingo. Daniel’s perseveration has become tiresome in the extreme. Seemingly no matter what the topic is, he brings it around to his personal obsession, and I’m tired of it. When warned, Michael Dochniak managed to (mostly) control his monomania about latex in vaccines as a cause of autism, but Daniel seems unable or unwilling to control his.

On this blog and my not-so-super-secret other blog, you are a monomaniac (or maybe a duomaniac) who perseverates about one topic, two at most. I’ve had complaints.

Long ago there was a cartoon in one of the “popular” medical journals, showing a physician tied up in a chair and a wild-eyed patient saying “Tell me, doctor – when did you first notice this paranoia of mine?”

Substitute “monomania” for “paranoia” and it’s been brought up to date nicely. 🙂

@ DB
I don’t completely agree with Horgan, but I understand what he means by tribalism.

@Daniel Corcos. Pointing out your monomania, or perhaps Orac’s more apt duomania phrasing, is not tribalism, just sheer boredom at nothing new from you; or, if there is anything new, it is hidden among your not-so-subtle posts about your duomania.

Exactly. Annoyance and sheer boredom at Daniel’s perseveration over his hobby horses motivated me. Basically, I had had enough of it.

Orac says (#50),

Michael Dochniak managed to (mostly) control his monomania about latex in vaccines as a cause of autism…

MJD says,

Orac’s contortion below is what brought me here 5-years ago:

https://www.respectfulinsolence.com/2011/05/10/anti-vaccine-contortions-they-never-end/

I’ve enjoyed the banter immensely but am saddened when Orac occasionally deletes a comment. 🙁

My kindergarten teacher was the only person who had the right to keep me quiet.

MJD, wrong again, for on this blog Orac is the kindergarten teacher. You should consider it a privilege that he still tolerates you, for if it was my blog, you, DC and PGP would be long gone. So luckily for the three of you, it is not mine.

John Phillips says,

… if it was my blog, you, DC and PGP would be long gone.

MJD say,

Without DC and PGP this blog may become an uncomfortable group hug.

On topic,

I agree with Orac, medical errors do not kill a quarter of a million people a year in the U.S.

In the bar chart, there is no category for foul play or intentional homicide as a cause of death in the United States. Why is this the case?

@ MJD, I doubt it, as we would still get the usual rash of idiots coming through here with their anti-vax or woosterish unevidenced gish gallops. We would just have three fewer boring and/or annoying and/or ignorant dolts wasting electrons and screen space.

[…] As for us believing that the pharmaceutical industry “wouldn’t be able to produce unsafe products,” all I want to ask Mr. Kuntz is: What the heck are you smoking? I don’t know of a single pro-vaccine advocate who doesn’t realize that pharmaceutical companies are for-profit businesses or who thinks that the pharmaceutical industry can’t produce unsafe products. What we do know is that vaccines are heavily regulated and rigorously tested, claims of antivaccinationists like Mr. Kuntz notwithstanding. Indeed, you can tell where he’s coming from when he uncritically repeats the claim that the medical industry kills around 250,000 people in the US every year and is the third leading cause of death. No, it isn’t. […]

Comments are closed.
