Don’t get sick in July?

Blogging on Peer-Reviewed Research

Dave Munger and others have been spearheading an effort to promote the acceptance of a specific logo that science bloggers (ScienceBloggers included) can use to let the reader know that the topic of a blog post is a discussion of real, peer-reviewed research. Use of the logo, which I’ve used for this post, means a blogger is not just commenting on research that’s been reported in the media, but rather has gone, so to speak, straight to the horse’s mouth to look up the original peer-reviewed journal article. It’s a worthy effort, and I plan on going back through the last few months of blogging and tagging appropriate posts, such as this one, where I discussed a recent article showing that having a positive mental attitude probably does not impact cancer survival.

There’s another peer-reviewed paper that I’ve been meaning to discuss for about a month and a half now, but somehow it’s gotten buried or pushed aside. Just as I was going to mention it last week, for instance, other topics came up that interested me more, at least at the time. Yesterday’s inauguration of the BPR3 effort prodded me to finally dig this paper out of the stack of Things That I Should Really Blog About and actually, you know, blog about it.

There’s a common saying in academic medical centers that you may have heard before: “Never get sick in July.” The reason, of course, is that sometime between June 24 and July 1 is when most residency programs start. This means that every July freshly minted interns who less than a month ago were in medical school are set loose on an unsuspecting patient population, while last year’s interns and junior residents suddenly find themselves in charge for the first time. Actually, it shouldn’t be as bad as that if the supervision is adequate, but the question is whether there really is an increase in complications in July and August, the earliest months of the academic year. It turns out that a group at my alma mater, the University of Michigan, looked at just this question for surgical patients. The article got a fair amount of publicity in September, when it first came out. It appeared in the Annals of Surgery and was entitled “Seasonal Variation in Surgical Outcomes as Measured by the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP)”; its abstract follows:

Objective: We hypothesize that the systems of care within academic medical centers are sufficiently disrupted with the beginning of a new academic year to affect patient outcomes.

Methods: This observational multi-institutional cohort study was conducted by analysis of the National Surgical Quality Improvement Program-Patient Safety in Surgery Study database. The 30-day morbidity and mortality rates were compared between 2 periods of care: the early group (July 1 to August 30) and the late group (April 15 to June 15). Patient baseline characteristics were first compared between the early and late periods. A prediction model was then constructed via a stepwise logistic regression model, with a significance level of 0.05 for both entry and selection.

Results: There was an 18% higher risk of postoperative morbidity in the early group (n = 9941) versus the late group (n = 10,313) (OR = 1.18, 95% CI 1.07-1.29, P = 0.0005, c-index 0.794). There was a 41% higher risk for mortality in the early group compared with the late group (OR = 1.41, CI 1.11-1.80, P < 0.005, c-index 0.938). No significant trends in patient risk over time were noted.

Conclusion: Our data suggests higher rates of postsurgical morbidity and mortality related to the time of the year. Further study is needed to fully describe the etiologies of the seasonal variation in outcomes.
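For readers wondering what numbers like “OR 1.41, CI 1.11-1.80” actually mean, here’s a minimal sketch of how an odds ratio and its Wald 95% confidence interval are computed from a simple 2×2 table. The counts below are made up for illustration; they are not the study’s numbers:

```python
import math

# Hypothetical 2x2 table -- these counts are invented, NOT the study's data.
#                          died   survived
early_dead, early_alive = 200, 9741    # "early" group (July-August)
late_dead, late_alive = 160, 11153     # "late" group (mid-April to mid-June)

# Odds ratio: odds of death in the early group over odds in the late group
odds_ratio = (early_dead / early_alive) / (late_dead / late_alive)

# Wald 95% CI, computed on the log-odds-ratio scale
se = math.sqrt(1/early_dead + 1/early_alive + 1/late_dead + 1/late_alive)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

With these made-up counts, the odds ratio comes out to about 1.43 with a CI of roughly 1.16-1.77; because the interval excludes 1, such a difference would be called statistically significant, which is the same logic behind the mortality result in the abstract.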

A reasonable question to ask is whether any data before this demonstrated seasonal variation in surgical complications that might be related to the new crop of interns that shows up every year. The authors speculate that the lack of such data is probably because existing quality metrics have not until recently been sufficiently standardized and adjusted for risk based on preexisting conditions to allow reliable month-to-month comparisons on a large scale. Recently, however, the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP) has changed that. This system uses a set of defined comorbidities and endpoints, along with a much more rigorous risk adjustment system, to allow a valid comparison of results among hospitals, and it accounts for some seasonal variables that might confound an analysis and either mask or accentuate seasonal variations in outcomes. In this study, the authors analyzed data from over 60,000 patients in 14 academic medical centers and 4 large, private, community-based hospitals over three years. They conclude that there is indeed an 18% higher rate of complications “early” in the academic year (July and August) than “late” in the year (April 15 to June 15, most likely chosen because chief residents tend to start disappearing to fellowships during the last two weeks of the academic year), along with a 41% higher chance of mortality. They also found statistically significant differences in mean OR time, mean time between getting the patient in the room and making the incision, and time under general anesthesia, all worse in the early group.
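To illustrate what risk adjustment buys you here, in spirit only and not as the authors’ actual model, here’s a toy simulation. Sicker patients are made to cluster in the early period, so the crude odds ratio overstates the period effect, while a logistic regression that includes the comorbidity score recovers something close to the effect that was built in. Every number and variable name below is invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic patients -- NOT the study's data. Sicker patients deliberately
# cluster in the early period, confounding the crude comparison.
early = rng.integers(0, 2, n)                      # 1 = July-August admission
sickness = rng.normal(0.0, 1.0, n) + 0.3 * early   # comorbidity score
true_log_or = 0.17                                 # built-in period effect (~OR 1.19)
p = 1 / (1 + np.exp(-(-4.0 + true_log_or * early + 1.0 * sickness)))
died = rng.binomial(1, p)

# Crude odds ratio: ignores that the early group is sicker.
def odds(x):
    return x.mean() / (1 - x.mean())
crude_or = odds(died[early == 1]) / odds(died[early == 0])

# Risk-adjusted odds ratio: logistic regression including the comorbidity score.
X = np.column_stack([early, sickness])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, died)  # large C ~ no penalty
adjusted_or = np.exp(model.coef_[0][0])

print(f"crude OR ~ {crude_or:.2f}, risk-adjusted OR ~ {adjusted_or:.2f}")
```

The crude odds ratio comes out larger than the adjusted one, which is exactly the kind of distortion the ACS-NSQIP risk adjustment is designed to remove before comparing the early and late periods.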

Although the effort taken to do this study was impressive and this represents the first study that I’m aware of that seems to support a “July effect,” I’m not sure that it is as strong an indicator as the authors would lead you to believe. Certainly it’s not as strong as some news reports played it, some of which in essence repeated the classic “don’t get sick in July” warning. One reason for my skepticism is shown in Figure 2, which plots the mortality rate versus the month of the year:

[Figure 2: Mortality rate by month of the academic year.]

Note that there are two large spikes, one in July and one in December (the latter even higher than July’s), and a lesser spike in March. Given this variation, I’m not sure why they tried to perform a linear regression on the data; there’s no reason to think that, even if there is a decrease in mortality as the year goes on, the relationship would necessarily be linear. Indeed, if I were to guess, I’d think it would probably approach a lower boundary asymptotically. The authors did the same regression in Figure 1, which graphed the morbidity rate over the course of the academic year; the results there were even less convincing, given that the apparent trend line was much flatter.
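To see why the choice of model matters, here’s a toy example with synthetic monthly rates (not the paper’s data) that spike in July and December and otherwise settle toward a floor. A straight line reports a steady month-over-month decline; an asymptotic model captures the leveling off:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

# Synthetic monthly mortality rates (percent) -- NOT the study's data.
# Month 1 = July ... month 12 = June; spikes in July and December,
# otherwise settling toward a floor.
months = np.arange(1, 13)
rates = np.array([2.9, 2.4, 2.1, 1.9, 1.8, 3.0,
                  1.8, 1.7, 2.0, 1.7, 1.7, 1.6])

# A straight-line fit forces a constant month-over-month decline.
lin = linregress(months, rates)

# An asymptotic model instead lets the rate approach a lower bound c.
def asymptote(m, a, b, c):
    return c + a * np.exp(-b * m)

(a, b, c), _ = curve_fit(asymptote, months, rates, p0=(1.5, 0.5, 1.7))

print(f"linear slope: {lin.slope:.3f} per month (r^2 = {lin.rvalue**2:.2f})")
print(f"asymptotic floor: c = {c:.2f}")
```

The linear fit dutifully produces a negative slope even though most of the decline happens in the first few months, which is the sense in which a regression line can overstate a steady “improvement over the year.”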

All this means is that, as the authors acknowledge, the relationship, if one exists, is either (1) more complex than simply being due to seasonal variations in the experience of the residents or (2) not adequately documented by the present data, superior as it is to prior data. They are correct, however, that this data could be an indication that disruptions in hospital routine are the major cause of seasonal variations in morbidity and mortality rates. Lots of attending staff are on vacation in December for the Christmas and New Year holidays, and the most senior residents also tend to go on vacation during those times. Another factor is that patients tend not to want to have surgery around the holidays if it can be safely delayed. The same may be true for the summer months. Obviously, big cancer operations and the like aren’t going to be delayed, but it’s usually fairly safe to delay having an inguinal hernia repaired or an elective cholecystectomy, for example. Consequently, it’s not unreasonable to speculate that a higher proportion of urgent cases during these times of the year might lead to more complications, although one would hope that the robust risk adjustment in ACS-NSQIP would allow that relationship to be teased out. The problem is that the system has a very specific definition of what “urgent” means and doesn’t capture “semiurgent” cases, in which the operation doesn’t necessarily occur within 12 hours of the patient’s admission.

Finally, the obvious control group, again as acknowledged by the authors, is missing from this study: a group of hospitals without residency programs. The most difficult aspect of such a comparison is that community hospitals tend to do far fewer big cases and less high-risk surgery, and a lot more of the common, uncomplicated “bread and butter” surgical cases. Indeed, they usually refer the complex cases to the big academic medical centers, mainly because most community hospitals, aside from the really big ones (most of which, if big enough, are affiliated with a medical school and have residents), are simply not equipped to handle them. Even so, with enough cases entered into the database, it should become possible to do such a comparison. It will, however, be difficult and complex.

Fortunately, ACS-NSQIP is an ongoing project that continues to collect outcomes data. As the database grows, it should be possible to isolate single variables, such as resident experience, that are associated with differences in outcomes. One thing I can say for sure, though: My anal sphincter tone is definitely much tighter in July, when the new interns start, than it is in May and June.