Categories: Clinical trials, Medicine

Don’t get sick in July? (Revisited)

June is almost over. If you work in an academic medical center, as I do, that can mean only one thing.

The new interns are coming, and existing residents will soon be advancing to the next level. The joy! The excitement! The trepidation! And it’s not all just the senior residents and the faculty feeling these emotions. It’s the patients too. At least, it’s the patients feeling the trepidation. The reason is the longstanding belief in academic medical centers, a belief that has diffused out of them and into “common wisdom,” that you really, really don’t want to get sick in July.

But is there any truth to this common wisdom, passed down from hoary emeritus faculty to professor to assistant professor to resident to medical student every year? Is there any truth to the belief commonly held by the public that care deteriorates in July? After all, this is something I’ve been taught as though it were fact ever since I first set trembling foot on the wards way back in 1986. So it must be true, right? Well, maybe. It turns out that a recent study published in the Journal of General Internal Medicine has tried once again to answer this question and come to a rather disturbing answer.

Imagine, if you will, that you want to determine whether there really is a “July effect,” that quality of care really does plummet precipitously as common wisdom claims. How would you approach it? Mortality rates? That’s actually fairly hard, because mortality rates fluctuate according to the time of year. For example, trauma admissions tend to spike in the summer. Well do I remember during my residency the fear of the Fourth of July weekend, because it was usually the busiest trauma weekend of the year–and we had brand-new residents dealing with it all. It was an attending’s and senior resident’s worst nightmare. In any case, if a hospital has an active trauma program, it would naturally be expected to have more deaths during the summer regardless of resident status, quite simply because there is more trauma. Complication rates? That might also be a useful thing to look at, but it’s not as easy as it seems either. How about comparing morbidity and mortality rates between teaching hospitals and community hospitals throughout the year and testing whether mortality rates increase in academic hospitals relative to community hospitals? That won’t work very well, either, mainly because there tends to be a huge difference in case mix and severity between academic institutions and community hospitals. Community hospitals tend to see more routine cases of lower severity than teaching hospitals do.

Yes, the problem in doing such studies is that it’s not as straightforward as it seems. Choosing appropriate surrogate endpoints that indicate quality of care attributable to resident care is not easy. It’s been tried in multiple studies, and the results have been conflicting. One reason is that existing quality metrics in medicine have not been sufficiently standardized and risk-adjusted to allow for reliable month-to-month comparisons on a large scale. In surgery, we are trying to develop such metrics in the form of the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP), but these measures don’t always apply to nonsurgical specialties, and there are multiple competing measures of quality. It’s true that we’re getting much better at assessing quality than we used to be, but it’s also true that we have a long way to go before we have a reliable, standardized, validated set of quality measures that can be applied over a large range of specialties.

That leaves investigators to pick and choose surrogates that suit their purposes, and that’s exactly what the investigators of this most recent study, hailing from the University of Southern California and UCLA, have done. The surrogate that they chose is medication error-related deaths:

Inexperienced medical staff are often considered a possible source of medical errors.1-6 One way to examine the relationship between inexperience and medical error is to study changes in the number of medical errors in July, when thousands begin medical residencies and fellowships.1,7-11 This approach allows one to test the hypothesis that inexperienced residents are associated with increased medical errors1,8,9,11-15–the so-called “July Effect.”

Previous attempts to detect the July Effect have mostly failed,1,8-17 perhaps because these studies examined small,8,10-13,15-17 non-geographically representative samples,8-17 spanning a limited period,11-16 although a study of anaesthesia trainees at one Australian hospital over a 5-year period did demonstrate an increase in the rate of undesirable events in February–the first month of their academic year.1 In contrast, our study examines a large, nationwide mortality dataset spanning 28 years. Unlike many other studies,18 we focus on fatal medication errors–an indicator of important medical mistakes. We use these errors to test the “New Resident Hypothesis”–the arrival of new medical residents in July is associated with increased fatal medication errors.

To test this hypothesis of the “July effect,” the investigators examined the database of computerized United States death certificates from 1979 to 2006 containing the records of 62,338,584 deaths. The authors then looked for deaths for which a medication was listed as the primary cause of death. Their results are summarized below:

[Figure 1 from the paper: the ratio of observed to expected deaths due to medication errors, plotted by month.]

One thing that irritates me about this graph is that it does something I really, really hate in a graph. It cuts off the bottom, which, because the graph doesn’t go to zero, makes the differences between the values seem a whole lot larger than they really are. That “July spike” plotted on this graph is an increase in the number of deaths due to medications of maybe 7% over the expected number, but it looks like a whole lot more. In fairness, though, the investigators analyzed: (1) only preventable adverse effects; (2) only medication errors (rather than combining several types of medical errors, like medicinal and surgical); (3) only fatal medication errors; and (4) only those medication errors coded as the primary cause of death (rather than medication errors coded as primary, secondary, and/or tertiary). Still, one always has to wonder how the denominator is calculated; i.e., how the “expected” number of deaths for each month is estimated. Basically, the investigators used a simple least-squares regression analysis to estimate the “expected” number of deaths.
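For readers who want to see what that sort of calculation amounts to, here is a minimal sketch in Python with made-up numbers. It is not the authors’ code, and their actual regression model may well include more terms; the point is simply to show the observed-versus-expected logic behind the ratio being plotted.

```python
# Minimal sketch (synthetic data, NOT the authors' dataset or code) of the
# observed-vs-expected approach: fit a least-squares trend to monthly counts,
# then express each month's observed count as a ratio to the fitted value.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(1, 28 * 12 + 1)                  # 28 years of monthly data, 1979-2006
observed = 120 + 0.05 * months + rng.poisson(10, size=months.size)

# A simple least-squares regression of counts on time gives the "expected" counts
slope, intercept = np.polyfit(months, observed, deg=1)
expected = intercept + slope * months

# The quantity plotted by month is essentially this observed/expected ratio
ratio = observed / expected
july = (months - 1) % 12 == 6                       # month 7 of each year is July
print("mean July ratio:    ", ratio[july].mean())   # with these synthetic numbers
print("mean non-July ratio:", ratio[~july].mean())  # there is no built-in July excess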

If this were where the investigators had stopped, I might not have been as annoyed by this study. Sure, it’s questionable to assume that deaths due to medication errors are strongly correlated with new, inexperienced residents. After all, if there’s one thing we’re starting to appreciate more and more, it’s that medication errors tend to be a system problem, rather than a problem of any single practitioner or group of practitioners. But the above graph does appear to show an anomaly in July.

Unfortunately, the investigators did something that always disturbs me when I see it in a paper. They faced a problem: death certificates didn’t show whether the death occurred in a teaching hospital or not. So, in order to get at whether there was a correlation between a greater “July effect” and teaching hospitals, as would be expected, they looked at county-level data for hospital deaths due to medication errors. Then they determined whether each of these counties had at least one teaching hospital and estimated the percentage of the hospitals in each county that are teaching hospitals, the rationale being that the higher the proportion of teaching hospitals in a county, the larger the July effect should be. This is the graph they came up with:

[Figure 4 from the paper: the July observed-to-expected ratio for fatal medication errors, with counties grouped by the percentage of their hospitals that are teaching hospitals.]
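Concretely, the comparison behind that graph amounts to something like the following rough sketch, written in Python with entirely hypothetical data (this is not the authors’ dataset or code): counties are binned by their concentration of teaching hospitals, and the pooled July ratio is compared across bins.

```python
# Rough sketch with hypothetical data (NOT the authors' dataset or code):
# bin counties by the share of their hospitals that are teaching hospitals,
# then compute the pooled July observed/expected ratio within each bin.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
counties = pd.DataFrame({
    "teaching_share": rng.uniform(0, 1, size=n),           # fraction of hospitals that teach
    "expected_july_deaths": rng.uniform(5, 50, size=n),    # from each county's own trend line
})
# Toy assumption baked in purely for illustration: the July excess grows with teaching share
counties["observed_july_deaths"] = counties["expected_july_deaths"] * (
    1 + 0.10 * counties["teaching_share"] + rng.normal(0, 0.05, size=n)
)

counties["bin"] = pd.cut(counties["teaching_share"], bins=[0, 0.25, 0.5, 0.75, 1.0])
summary = counties.groupby("bin", observed=True).agg(
    observed=("observed_july_deaths", "sum"),
    expected=("expected_july_deaths", "sum"),
)
summary["july_ratio"] = summary["observed"] / summary["expected"]
print(summary)  # group-level ratios only; they say nothing about any individual hospital
```

Note that the last line is exactly where the trouble starts: the ratios describe groups of counties, not any particular hospital.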

Holy ecological fallacy, Batman! The investigators appear to be implying that a relationship found in group-level data applies at the individual level; i.e., to individual hospitals. It almost reminds me of a Geier study. In any case, why didn’t surgical errors increase if the “July effect” exists? Wouldn’t this be expected? I mean, we surgeons are totally awesome and all, but we’re only human, too. If the July effect exists, I have no reason to believe that we would be immune to it.

The existence of a “July effect” is not implausible. After all, in late June and early July every year, we flood teaching hospitals with a new crop of young, eager, freshly minted doctors. I can feel the anticipation at my own institution right now. It’s a veritable yearly rite that we go through in academia. Countering the likelihood of a “July effect” are the seasonally tightened anal sphincters of attendings and senior residents, which lead them to keep a tight rein on these new residents–which is as it should be. In any case, this particular study is mildly suggestive, but hardly strong evidence for the existence of the “July effect.” Personally, I find the previous study on this issue that I blogged about three years ago to be far more convincing; its results suggested a much more complex interplay of factors.

In the end, I have some serious problems with this study, not the least of which is the assumption that medication errors are correlated so strongly with inexperienced residents when we now know that they are far more a systems issue than the fault of any individual physician or group of physicians. There are many steps in the chain from a medication order all the way down to actually administering the medication to the patient where something can go wrong. In fact, these days the vast majority of the effort that goes into preventing medication errors is expended on putting systems in place that catch these errors before the medication ever makes it to the patient: computerized ordering systems that question orders with incorrect doses or medications, systems where pharmacists and then nurses check and double-check the order, and systems where the actual medication order is checked against the medication to be given using computerized bar code scanning. It’s really a huge stretch to conclude that fatal medication errors are a good surrogate marker for quality of care attributable to the resident staff, the pontifications and bloviations of the authors to justify their choice in the Introduction and Discussion sections of this study notwithstanding. The other problem is the pooling of county-level data into a heapin’ helpin’ of the ecological fallacy. Is there a July effect? I don’t know. It wouldn’t surprise me if there were. If the July effect does exist, however, this study is pretty thin gruel to support its existence and estimate its severity.
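As a purely illustrative aside on that “systems” point, here is a toy sketch of the kind of automated dose check that sits between a written order and the patient. It is not any real computerized order-entry product, and the drug names and ranges are made up for illustration only.

```python
# Toy illustration of a dose-range check in an order-entry system.
# The drugs and ranges below are invented for illustration and are NOT
# clinical reference values.
DOSE_LIMITS_MG = {
    "warfarin": (1, 10),
    "metoprolol": (25, 200),
}

def check_order(drug: str, dose_mg: float) -> list[str]:
    """Return warnings for an order; an empty list means nothing was flagged."""
    warnings = []
    limits = DOSE_LIMITS_MG.get(drug.lower())
    if limits is None:
        warnings.append(f"{drug}: no reference range on file; route to pharmacist review")
    else:
        low, high = limits
        if not (low <= dose_mg <= high):
            warnings.append(f"{drug}: dose {dose_mg} mg outside reference range {low}-{high} mg")
    return warnings

print(check_order("warfarin", 50))    # flagged: far above the toy range
print(check_order("metoprolol", 50))  # passes the toy check
```

The point of such checks, along with the pharmacist, nurse, and bar code layers described above, is precisely that several independent safeguards stand between any one resident’s mistake and the patient, which is why pinning fatal medication errors on the newest residents is such a stretch.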

REFERENCE:

Phillips, D., & Barker, G. (2010). A July Spike in Fatal Medication Errors: A Possible Effect of New Medical Residents. Journal of General Internal Medicine. DOI: 10.1007/s11606-010-1356-3

By Orac

Orac is the nom de blog of a humble surgeon/scientist who has an ego just big enough to delude himself that someone, somewhere might actually give a rodent's posterior about his copious verbal meanderings, but just barely small enough to admit to himself that few probably will. That surgeon is otherwise known as David Gorski.

That this particular surgeon has chosen his nom de blog based on a rather cranky and arrogant computer shaped like a clear box of blinking lights that he originally encountered when he became a fan of a 35 year old British SF television show whose special effects were renowned for their BBC/Doctor Who-style low budget look, but whose stories nonetheless resulted in some of the best, most innovative science fiction ever televised, should tell you nearly all that you need to know about Orac. (That, and the length of the preceding sentence.)

DISCLAIMER: The various written meanderings here are the opinions of Orac and Orac alone, written on his own time. They should never be construed as representing the opinions of any other person or entity, especially Orac's cancer center, department of surgery, medical school, or university. Also note that Orac is nonpartisan; he is more than willing to criticize the statements of anyone, regardless of political leanings, if that anyone advocates pseudoscience or quackery. Finally, medical commentary is not to be construed in any way as medical advice.

To contact Orac: [email protected]

24 replies on “Don’t get sick in July? (Revisited)”

Are any of the differences noted by the authors statistically significant?

Send that chart in to Edward Tufte. He’d have a few choice words to say. The “suppressed origin” is one of many practices he deplores. It is definitely misleading and any time I see one I lower my opinion of the research accordingly.

‘existing quality metrics in medicine have not been sufficiently standardised’

Oh Really? I wish us luck.

Please… let us all get on with the science we can do…
and in the meantime let us simultaneously practice the art/science of health care.
We must remain, as Orac leads us, wary of woo, but not suppressed by the fear of statisticians or litigants.
By the way… thank you… this remains the site I most recommend.

I start my intern year next week; I’ll let you know if there’s a July effect. After all, as we all know, personal anecdotes will always trump the literature.

“medication errors tend to be a system problem, rather than a problem of any single practitioner or group of practitioners….” Yes. Medication errors can occur by 1a. an error by the manufacturer, 1b. the physician or practitioner prescribing the wrong medication, 2. the pharmacy issuing the wrong medication or dose, 3. the issuer providing the wrong medication, dose, or timing, or 4. the patient taking the wrong medication or dose (self-medicating or faking etc.) or 5. adversely reacting to the medication (right or wrong prescription/dose). I’m sure there are more. I was surprised the research you’re pointing to focuses on doctors and residents. In my own hospital and trauma room work experience, a majority of medications are administered to patients by nurses and nursing assistants. Is this not the case at most hospitals? The drug errors happen at any point in the chain. Where I saw it was in hasty dispensing and charting errors (doses given not charted).

As a health worker, I found teaching hospitals had more checks and balances because there were always more hands on deck, no matter how inexperienced. As a nursing student, every med issued had to have a witness double-check it. We were slow, careful, and always washed our hands. Non-teaching hospitals seemed like understaffed ghost towns.

[On a side note, when I was a nursing student, we had to fully research every drug before administering them. It took hours of work (pre-internet) the night before our shifts. Inevitably we’d find contraindications and warnings about the mix of drugs patients were taking (either because they could cause ill-effects together or cancel each other out) but were always ordered to go ahead and give them all as prescribed by the physician anyways. I never felt ok about signing the charts.]

Also, there can be a whole lot of other factors behind a death rate spiking at any particular time of year (depression during holidays, and so on). The medication might just tip it over the edge.

Big topic. I enjoy facts. I wish we had a lot more of them. Carry on.

In regards to the lack of effect on surgical errors, my guess would be that new residents don’t get to perform surgeries with the risk of fatal errors in their first month unsupervised (while being allowed to do heparin injections).

The charts are fine. If you aren’t comfortable with those numbers they could be changed to log2 and centered on 0 instead of 1.

Since they represent a ratio I think that if you started the graph at 0 you’d be misrepresenting the data.

I pretty much agree with JohnV. In both cases we are interested in how far the ratios deviate from the expected ratio of 1.0, and the graphs are symmetrical about 1.0. Perhaps they should have used a different style of plot, rather than a bar graph, but the choice to run the axis from 0.90 to 1.10 (or 0.85 to 1.15 in the 2nd example) is sensible.

Really, a forest plot would be better.

I also agree – the plots should not be going to zero, since it is a ratio. The second figure is cutting off error bars at the bottom; now that is bad plotting.

The error bars overlap quite a bit anyway – it is also interesting that the best four months are the four months before July. Isn’t this (better performance with experience) also what you would expect under the July hypothesis? It’s not like everyone is magically trained and competent as of August 1st…

Re: 7-9

The problem is that a bar chart implies an origin at 0. If what they are interested in showing is the deviation around 1, they simply should have plotted the center point with the error bars and an axis across at 1.0. No bars.

It appears they got lazy and went with the Excel default and made a bar chart with superfluous bars. That’s “chartjunk,” to use Tufte’s word. Still poor presentation design. It instills no confidence in the result.

Maybe I’m missing something, but it seems that there are (broadly) two kinds of medication errors, only one of which would be caught by this study. A patient can get either too much or too little medication: if someone is accidentally, or for systemic reasons, not given a medication for a serious condition, the death certificate may say “stroke” or “infection” but not “lack of appropriate antibiotics.” Similarly, if they get relatively harmless medicine A instead of needed medicine B, the lack of B is unlikely to be listed there.

And thanks for pointing out the chart-junk–as others have said, if they want to center around a norm, a bar graph is not the way to go.

Vicki: “If someone is accidentally, or for systemic reasons, not given a medication for a serious condition, the death certificate may say “stroke” or “infection” but not “lack of appropriate antibiotics.” Similarly, if they get relatively harmless medicine A instead of needed medicine B, the lack of B is unlikely to be listed there.”

With the modern philosophy of CYA first, I’m sure more patients are killed by too much medicine and too many procedures than by the mistake of not prescribing. American doctors are very aggressive.

We have a MRSA problem because of too many prescriptions, not because of an epidemic of “lack of appropriate antibiotics.”

Looking at the absolute rates of errors or complications by case mix would definitely not be useful, but what about the size of change in teaching hospitals, using non-teaching hospitals as controls? The hypothesis is that they change in July, and we can test that. Surely there are enough Julys on record to test for a substantive change. I don’t understand why that wouldn’t work, and I’d be happy to be informed why.

Does this include homeopathic medication errors? After all, there are a lot more drownings in July.

I’d have used a different sort of chart – or rather, I would have used a bar chart, but I’d have plotted [ratio] – 1 instead of just [ratio]. That would have made more sense, given that what is interesting here is the deviation from the expected value (the fact that it is a ratio is a red herring – that is just due to normalisation).

As an aside, is it possible to bug Seed about the volume on their flash ads? I don’t mind the music, but it’s *loud* compared to every other flash applet I normally run.

– Jake

JakeS:

What you are suggesting (ratio-1) sounds good at first blush, but really, when you are plotting ratios, you want to do so on a log scale. After all, 0.5 is the same difference as 2.0, and 0.33 is the same difference as 3.0. In order to make the graph convey the appropriate relationships between the ratios, one needs a plot that makes these differences look the same. A log plot will do this, since log x = -log(1/x). I mentioned a forest plot (above), which is one way to show a bunch of points with error bars and how they relate to a norm, and is often used with odds ratios (and thus applies nicely to other ratios).
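For what it’s worth, here’s a minimal sketch of the kind of plot being described: points with error bars around a reference line at 1.0, on a log scale so that a ratio and its reciprocal sit symmetrically. All of the numbers are made up; only the roughly 7% July excess is loosely taken from the post.

```python
# Minimal sketch of a forest-style, log-scale plot of monthly ratios.
# The ratios and error bars below are hypothetical, for illustration only.
import matplotlib.pyplot as plt
import numpy as np

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
ratios = np.array([0.99, 0.97, 0.96, 0.97, 0.98, 1.00,
                   1.07, 1.01, 1.00, 1.01, 1.02, 1.02])   # hypothetical values
err = np.full(12, 0.03)                                    # hypothetical CI half-widths

fig, ax = plt.subplots()
ax.errorbar(range(12), ratios, yerr=err, fmt="o", capsize=3)
ax.axhline(1.0, linestyle="--", linewidth=1)   # reference line: no anomaly
ax.set_yscale("log")                           # so x and 1/x plot symmetrically
ax.set_xticks(range(12))
ax.set_xticklabels(months)
ax.set_ylabel("Observed / expected fatal medication errors")
plt.show()
```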

In any case, why didn’t surgical errors increase if the “July effect” exists? Wouldn’t this be expected? I mean, we surgeons are totally awesome and all, but we’re only human, too. If the July effect exists, I have no reason to believe that we would be immune to it.

Because July interns, at least at our institution, aren’t operating? I would, however, suspect that medical (dosing, management, etc) errors in patients admitted to surgical services would be at least as affected by the July effect as any…

It cuts off the bottom, which, because the graph doesn’t go to zero, makes the differences between the values seem a whole lot larger than they really are.

The graph is measuring a ratio. It can’t cut off at zero – or if it did, then the top should be plotted up to infinity (the reciprocal). What would be better is if the ratio was plotted logarithmically, so that “twice as much” showed up as the negative of “half as much”.

@Epinephrine:

But they aren’t plotting ratios here. They’re plotting anomalies – that is, deviations from a target value. It just looks like they’re plotting ratios because they normalise to the target value. But you can remove the normalisation without loss of generality, and then you’re looking at a difference, not a ratio.

And plotting anomalies doesn’t call for an exponential scale, unless you have a good theoretical reason to believe that the measured anomaly scales exponentially to some underlying cause.

– Jake

A UK study found a similar effect for when junior doctors start – the first Wednesday in August:
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0007103

They found differences for emergency admissions but not for things like malignancy.

One of the things is that the rotation system means that the new doctors are likely in new hospitals, and it could be that the cause is not lack of competence but lack of familiarity with equipment and routines, which leads to mistakes. I don’t know how interns rotate in the USA and whether this could be a factor.

Here’s another possibility: It takes time for new residents to learn how to interact with supervising doctors and with nurses – who to listen to, who tends to possibly make mistakes and bears careful watching, how to suggest a different course without bruising egos.

Anecdote: My 91-year-old dad’s pacemaker replacement site became infected. The head of infectious diseases at the hospital prescribed Vancomycin. Now that’s a great thing to try to knock out a serious infection, except in someone on a heavy dose of warfarin (Coumadin), as my father was (which of course is not at all unusual in folks with cardio problems). The resulting intestinal bleeding very nearly killed my father within days (he passed away a couple of months later, after a total of 5 discharges and readmissions following on the pacemaker infection). The ER doc told me several times that if my dad had arrived at the ER 5 minutes later he would certainly have died, and in fact the ER and ICU docs were quite surprised that he survived.

The ICU doc told me and my sister quite directly the Vancomycin/warfarin thing was a mistake, accompanying that with an expression that kind of implied an eye-roll.

Now this happened in January, so what does it possibly have to do with the “July effect”? Well, during my enforced familiarity with hospitals, I’ve gained the impression that those who see patients least often (the supervising docs) can sometimes make decisions that are problematic, especially in the environment of polypharmacy that affects many people with chronic conditions, or suites of them, and that those who see patients most often (residents, nurses) can catch the potential problems and intercede. Perhaps freshly minted residents are less able/willing to do that?

Good post.

My July rotation as an intern was NICU. When I walked in that morning, the nurses came to me and the other intern and said, “Don’t fucking kill my babies.” I responded, “I wasn’t planning on it. Nice to meet you.” But I got the message. NICU nurses are not known to be warm and fuzzy, but they are very good at what they do. There wasn’t one time I ordered a medication that it wasn’t checked by the nurses first before it went to pharmacy or they went to get the meds (no Pyxis in those days!).

Funny, the other intern lasted a week, then quit the pediatric residency program.

There is a link in this post to “Epiwonk.” My work computer will not allow me to link to this site; it is identified as a “malicious” site. Apparently this means it is “untested” by security. Does anyone know anything about this site? It looks interesting. I can call information services to override if I can tell them a good reason to. Please advise if you happen to have an answer.

Comments are closed.
