Well, well, well, what have we here about the Avandia study?

I’ve been meaning to go through the recent meta-analysis of Avandia published by the New England Journal of Medicine that purported to show a major increase in the risk for cardiac events (myocardial infarctions and cardiac death) in patients who use Avandia, but somehow never got around to it. I’m not sure I need to now, given how, via Kevin, MD, I’ve found this rant by The Angry Pharmacist, who has looked over the meta-analysis and found that there is considerably less there than meets the eye and that the value of the study is considerably different from what has been reported in the press. Key observations cherry-picked from The Angry Pharmacist’s rant, which you really should read in its entirety, include:

  • Look who funded this “Study” (from http://content.nejm.org/cgi/content/full/NEJMoa072761) Dr. Nissen reports receiving research support to perform clinical trials through the Cleveland Clinic Cardiovascular Coordinating Center from Pfizer, AstraZeneca, Daiichi Sankyo, Roche, Takeda, Sanofi-Aventis, and Eli Lilly. Dr. Nissen consults for many pharmaceutical companies but requires them to donate all honoraria or consulting fees directly to charity so that he receives neither income nor a tax deduction. No other potential conflict of interest relevant to this article was reported. Yeah, does anyone else see something wrong with believing a study that was funded by the COMPETITORS of GSK? I’m sure Dr. Nissen doesnt get ANYTHING from these companies. No sir. He probably doesnt get to use their houses in the Bahamas on their corporate jets and eat with the offical Pfizer credit card. Charity my ass.
  • Amazing how they didnt include trials that showed no deaths: Six of the 48 trials did not report any myocardial infarctions or deaths from cardiovascular causes and therefore were not included in the analysis because the effect measure could not be calculated. You think maybe that skewed the data a bit? Lets not include the studies that showed no risk, just the ones that did. Way to go douches.

A couple of points: First, alternative medicine advocates love to point out “conflicts of interest” and are quick to dismiss a study funded by big pharma that produces results that they don’t like just because of the funding source. We see this all the time when antivaccination loons immediately dismiss any studies failing to find a link between mercury in the thimerosal preservative in vaccines and autism if any funding came from a pharmaceutical company that makes vaccines. Of course, I don’t mean to say that such potential conflicts of interest shouldn’t be considered when interpreting the results of these studies, but alt-med advocates often seem to focus on such conflicts above all else, without actually examining whether the science in the paper is sound. Given The Angry Pharmacist’s observation (which, I confess, I probably wouldn’t have noticed if I had done my own deconstruction of the paper), I wonder if all the alt-med advocates who are crying “I told you so!” or castigating big pharma will even care or consider that there may well be a significant conflict of interest in this study. What’s good for the goose is good for the gander, you know.

My guess is that no one will mention this.

The far more telling point seems to be The Angry Pharmacist’s observation about the selection of studies to be included in the meta-analysis. Meta-analyses are, as you might expect, highly dependent on which studies are included in the analysis. I’m not a statistician, but the reasoning given by the authors for excluding these studies seems specious to me based on fairly basic principles of the design of meta-analyses and basic biostatistics. In reality, you can estimate a risk from these studies: the observed risk is zero. Of course, we know the true risk is not really zero. What it means is that these trials were too small for any deaths to be observed, which in turn means that you can still bound the risk of death within the statistical power of the study by saying that there is a 95% probability that it is no greater than some value X. Unless these studies were poorly designed or had other deficiencies that made excluding them valid, the mere fact that no deaths were observed is probably not in and of itself a valid reason to exclude them. The very danger with meta-analyses of this sort was well expressed over at Musings of a Distractable Mind:

We had an endocrinologist in our office a few days ago (not representing GSK) and we discussed this issue, and his comment was that Dr. Nissen is “the Michael Moore of the medical industry.” Strong words. Mr. Moore is a crusader against the big and rich for the protection of the little guy (in his opinion). The problem is (in my opinion) that Mr. Moore does not always come to conclusions based on evidence, but starts with a conclusion and finds evidence to support this. This is precisely the danger of a meta-analysis of the sort that was done in this case.

In all fairness, let me cite a bit of the discussion of the meta-analysis, in order to point out that the authors did report many of the limitations of their study:

Our study has important limitations. We pooled the results of a group of trials that were not originally intended to explore cardiovascular outcomes. Most trials did not centrally adjudicate cardiovascular outcomes, and the definitions of myocardial infarction were not available. Many of these trials were small and short-term, resulting in few adverse cardiovascular events or deaths. Accordingly, the confidence intervals for the odds ratios for myocardial infarction and death from cardiovascular causes are wide, resulting in considerable uncertainty about the magnitude of the observed hazard. Furthermore, we did not have access to original source data for any of these trials. Thus, we based the analysis on available data from publicly disclosed summaries of events. The lack of availability of source data did not allow the use of more statistically powerful time-to-event analysis. A meta-analysis is always considered less convincing than a large prospective trial designed to assess the outcome of interest. Although such a dedicated trial has not been completed for rosiglitazone, the ongoing Rosiglitazone Evaluated for Cardiac Outcomes and Regulation of Glycaemia in Diabetes (RECORD) trial may provide useful insights.

In other words, there were few events, and the authors didn’t have access to the primary data, leading to very wide error bars (i.e., a high level of uncertainty) around the results. This may explain why the authors left out the six studies that reported no cardiovascular events, although leaving those studies out may still have skewed the results. (At the very least, they should have attempted the analysis both with and without those studies.) These are some of the reasons that I’m not sure I can dismiss this meta-analysis as vociferously or with as much contempt as the two bloggers I cited did (although perhaps I will read it in detail over the weekend and decide whether it’s worthy of a little special helping of Respectful Insolence™). Both of them, however, raise a number of excellent points suggesting that there may be less to this study than has been reported in the press, and the press coverage, most disgustingly of all in my opinion, has inspired many an opportunistic malpractice attorney to put up advertisements shilling for business, just as they did for Vioxx.
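To make the statistical point about the excluded trials concrete, here’s a quick sketch in Python. The trial numbers are entirely made up for illustration (they are not the actual data from the NEJM paper), and the pooling method shown is a simple Mantel-Haenszel odds ratio with the usual 0.5 continuity correction, which is one common way of handling zero cells, not necessarily what the authors would have done. It shows two things: the bound you can put on the event rate of a zero-event trial (the classic “rule of three”), and how including or excluding double-zero trials shifts a pooled odds ratio.

```python
def zero_event_upper_bound(n_patients):
    """One-sided 95% upper bound on the true event rate when a trial
    of n_patients observes zero events: the exact value, plus the
    classic 'rule of three' approximation (3/n)."""
    exact = 1 - 0.05 ** (1 / n_patients)
    return exact, 3 / n_patients

def mh_pooled_or(trials, cc=0.5):
    """Mantel-Haenszel pooled odds ratio over 2x2 tables given as
    (events_trt, n_trt, events_ctl, n_ctl), applying a continuity
    correction to any table containing a zero cell."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c          # non-events in each arm
        if 0 in (a, b, c, d):
            a, b, c, d = a + cc, b + cc, c + cc, d + cc
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical trials, purely for illustration:
with_events = [(3, 300, 1, 300), (2, 250, 1, 250), (4, 400, 2, 400)]
no_events = [(0, 150, 0, 150), (0, 120, 0, 120)]

exact, rule3 = zero_event_upper_bound(150)
print(f"zero-event trial, n=150: risk <= {exact:.3f} (~{rule3:.3f})")
print(f"pooled OR, excluding zero-event trials: {mh_pooled_or(with_events):.2f}")
print(f"pooled OR, including them: {mh_pooled_or(with_events + no_events):.2f}")
```

With these made-up numbers, adding the double-zero trials pulls the pooled odds ratio toward 1, which is exactly why the decision to exclude them (or not) has to be justified rather than waved away, and why the honest move is to report the analysis both ways. Either way, with event counts this small the confidence intervals would remain very wide.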