Is there publication bias in animal studies?

Last month, in response to some truly despicable activities by animal rights zealots, I wrote a series of posts about how animal rights activists target even researchers’ children and appear to fetishize violence. This simply continued a string of posts that I’ve done over the years, the longest of which (and, in my not-so-humble opinion, the best) deconstructs a lot of the bad scientific arguments used by animal rights activists to claim that animal research is useless, or nearly so, as well as other arguments made by extremists. One of the key points emphasized in these responses is that, regardless of their shortcomings, animal models for many conditions provide useful data, have led to medical breakthroughs, and are better than any of the alternatives currently touted by animal rights activists. Someday, for example, cell culture and computer models may allow us to replace the use of animals for a lot of studies, but that day is not today, nor is it likely to arrive anytime soon.

Not surprisingly, then, I’ve had a few readers make me aware of a recently released study published in PLoS Biology, entitled Publication Bias in Reports of Animal Stroke Studies Leads to Major Overstatement of Efficacy. I actually knew about this study last week, because I’m on the PLoS press list. But the study was embargoed until Monday night, and for some reason I let Mike Adams distract me from taking on a real scientific study. On the other hand, it’s always a good time to have some fun with our favorite woo-meister of all. It’s just fortunate that my readers didn’t let me forget about this study.

Science-based medicine depends upon preclinical studies in cell culture and animal models in order to determine disease mechanisms and, just as importantly, to test new therapies before testing them in humans. I’ve pointed out before that animal studies don’t always correlate as cleanly as we would like with human studies. However, for all their imperfections, animal studies allow us to study phenomena that require three-dimensional structure with all the different types of cells normally present in the organ in question. One example I like to use is the study of tumor angiogenesis, which requires complex interactions between the tumor cells, vascular cells, and the stroma. I’m aware of models that examine endothelial cells, fibroblasts, and tumor cells in three-dimensional coculture and that can produce some pretty cool results, but they are still just cells in dishes. They’re cells in dishes using sophisticated culture systems, but cells in dishes nonetheless.

It’s thus of great interest to know what the predictive capability of animal models is. In the case of this study, the authors performed a meta-analysis of animal models of acute ischemic stroke to try to estimate the effect of publication bias on the reported results. As you may be aware, publication bias is an insidious, generalized form of bias that creeps into the medical literature because studies showing a positive result are more likely to be published than studies showing a negative result. Also known as “the file drawer effect” (because negative studies tend to be left in the “file drawer” rather than published), publication bias is a problem in the clinical trial literature, so much so that clinical trial registries such as ClinicalTrials.gov have been set up to make sure that the results of all human clinical trials see the light of day. The authors lay out this problem right in the introduction:

It’s not surprising that positive clinical trials are more likely to be published–and published in more prestigious journals–than negative studies, because positive studies are scientifically and clinically much more interesting. They produce results that change clinical practice and, presumably, improve our medical practice. On the other hand, although it isn’t always appreciated, negative trials can be very useful, too. They can lead to physicians abandoning therapies that they thought to be effective, and that can advance medical therapy as well.
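To make the file drawer effect concrete, here’s a toy simulation in Python (entirely my own illustration with made-up numbers, not anything from the paper): a marginally effective treatment is tested in many studies, but only the ones reaching statistical significance in the “right” direction get written up. The published literature then wildly overstates the true effect.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # the treatment helps, but only a little
SE = 0.5            # every study's standard error (identical, for simplicity)
N_STUDIES = 1000    # hypothetical studies run worldwide

# Each study observes the true effect plus sampling noise.
observed = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Only studies reaching "significance" in the positive direction get
# published; the rest go into the file drawer.
published = [e for e in observed if e / SE > 1.96]

print(f"True effect:                    {TRUE_EFFECT:+.2f}")
print(f"Mean effect, all studies:       {statistics.mean(observed):+.2f}")
print(f"Mean effect, published studies: {statistics.mean(published):+.2f}")
print(f"Left in the file drawer:        {N_STUDIES - len(published)} studies")
```

Run it, and the mean effect in the “published” subset comes out several times larger than the true effect, which is exactly why registries that force all results into the open matter so much.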

What’s not as well characterized is whether publication bias is a major problem in animal studies. This study presents evidence suggesting that it might be, at least in models of ischemic stroke and interventions designed to limit and ameliorate the damage done by the cessation of blood flow to a segment of the brain. Basically, the investigators examined a database of animal models of stroke and interventions and identified 16 unique systematic reviews of the literature on this topic that encompassed 525 publications. Only ten publications reported no significant effect of their interventions on the volume of dead brain tissue after a stroke, and only six were completely negative, reporting no significant findings at all. I will admit right here that I don’t fully understand all the mathematics and analyses involved, but it is possible to do statistical analyses of the studies to look for patterns suggestive of publication bias, specifically excesses of imprecise studies with large effect sizes. The authors describe their findings in the abstract:

The consolidation of scientific knowledge proceeds through the interpretation and then distillation of data presented in research reports, first in review articles and then in textbooks and undergraduate courses, until truths become accepted as such both amongst “experts” and in the public understanding. Where data are collected but remain unpublished, they cannot contribute to this distillation of knowledge. If these unpublished data differ substantially from published work, conclusions may not reflect adequately the underlying biological effects being described. The existence and any impact of such “publication bias” in the laboratory sciences have not been described. Using the CAMARADES (Collaborative Approach to Meta-analysis and Review of Animal Data in Experimental Studies) database we identified 16 systematic reviews of interventions tested in animal studies of acute ischaemic stroke involving 525 unique publications. Only ten publications (2%) reported no significant effects on infarct volume and only six (1.2%) did not report at least one significant finding. Egger regression and trim-and-fill analysis suggested that publication bias was highly prevalent (present in the literature for 16 and ten interventions, respectively) in animal studies modelling stroke. Trim-and-fill analysis suggested that publication bias might account for around one-third of the efficacy reported in systematic reviews, with reported efficacy falling from 31.3% to 23.8% after adjustment for publication bias. We estimate that a further 214 experiments (in addition to the 1,359 identified through rigorous systematic review; non publication rate 14%) have been conducted but not reported. It is probable that publication bias has an important impact in other animal disease models, and more broadly in the life sciences.

The authors used two analyses. The first, called “trim and fill,” looks at the asymmetry in the data set used for a meta-analysis in order to impute the number and most probable results of unpublished experiments, and thereby to estimate what the meta-analytic treatment effect would be in the absence of publication bias. Given that this is an estimate based on a “fill-in” method, at best it can only be rough, and it’s unclear how accurate its estimates are, because there is a lot of variability between different trim-and-fill estimators and models in various meta-analyses. It also assumes that asymmetries in the funnel plot are all due to publication bias (i.e., that all or most of the missing studies are negative), when that is not necessarily true. There can be other reasons why studies are not published (they couldn’t pass peer review, for instance), and the unpublished studies may not all be negative. The second analysis, Egger regression, is subject to a different set of potential biases.
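For readers who want a concrete sense of what Egger regression actually does, here is a minimal sketch in Python (my own illustration on made-up data, not the authors’ code). Each study’s standardized effect (the effect estimate divided by its standard error) is regressed on its precision (one over the standard error); if there is no small-study bias, the intercept should sit near zero, while an intercept far from zero signals the funnel-plot asymmetry characteristic of publication bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical meta-analysis: 40 studies with effect sizes and standard
# errors. The bias term (0.8 * se) makes small, imprecise studies skew
# positive, mimicking a literature distorted by publication bias.
se = rng.uniform(0.1, 1.0, size=40)
effects = rng.normal(0.3 + 0.8 * se, se)

# Egger regression: standardized effect vs. precision.
precision = 1.0 / se
z = effects / se
X = np.column_stack([np.ones_like(precision), precision])
(intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)

print(f"Egger intercept: {intercept:+.2f}  (near 0: symmetric funnel; "
      f"far from 0: asymmetry suggestive of publication bias)")
```

Trim and fill takes the complementary approach: it “trims” the most extreme small positive studies, re-estimates the pooled effect, and then “fills” the funnel plot with mirror-image counterparts of the trimmed studies to stand in for the presumed unpublished negative results.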

Still, the numbers estimated for publication bias here are not that surprising, given that they are in line with estimates for clinical trials. A lot of animal experiments are, in essence, clinical trials that could never be done on humans for ethical reasons, so it stands to reason that they would be subject to a similar bias. The authors produced this graph as an estimate of how much the efficacy of each intervention for ischemic stroke is overestimated:

[Figure: estimated overstatement of efficacy by intervention after adjustment for publication bias]

What is interesting about the graph above is how much the calculated overestimate of effect size differs depending on the specific intervention. This implies that the animal models used for this study are better for estimating some outcome measures than others. The authors acknowledge the confounding factors but argue that the analyses remain informative:

For meta-analyses of individual interventions, we do not believe that these techniques are sufficiently robust to allow the reliable reporting of a true effect size adjusted for publication bias. This is partly because most meta-analyses are too small to allow reliable reporting, but also because the true effect size may be confounded by many factors, known and unknown, and the empirical usefulness of a precise estimate of efficacy in animals is limited. However, these techniques do allow some estimation both of the presence and of the likely magnitude of publication bias, and reports of meta-analysis of animal studies should include some assessment of the likelihood that publication bias confounds their conclusions, and the possible magnitude of the bias.

So, basically, all we can conclude from this study is that, for one disease (acute ischemic stroke) and the animal models used to study it, there appears to be publication bias, the effect of which can only be very roughly estimated and which varies depending upon which intervention is studied. It is unknown whether publication bias exists for other animal disease models and, if so, how much, but it would be shocking indeed if it did not exist for at least some animal models of disease and treatment.

Animal studies are very important in science-based medicine because they provide the first test of an intervention in something other than a test tube or tissue culture plate. Positive results in animal studies often, depending upon a number of factors, lead to clinical trials. That is the entire point of studies of human disease in which a treatment is tested in an animal, as opposed to purely basic science studies, such as the creation of transgenic mice to test the effect of knocking out or overexpressing a gene product. Consequently, to minimize the chances of animal models misleading us, it is as important to reduce publication bias in animal studies as it is in human studies. However, because far more animal studies are performed than human studies, this will be difficult.

One thing that studies like this demonstrate is that nothing is sacred in science. Animal rights activists claim that scientists unquestioningly and mindlessly support the contention that animal studies represent the best models for disease that we have. Studies like this demonstrate that such is not the case. Not only that, but they demonstrate that scientists continue to seek to minimize the use of animals and to make sure that animals that are used in research are not wasted. Critical studies like this one point out the flaws in how animal research is done and suggest ways to correct those flaws and maximize the chances that the results of animal research will inform rather than mislead.

REFERENCE:

Sena, E.S., van der Worp, H.B., Bath, P.M.W., Howells, D.W., & Macleod, M.R. (2010). Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biology, 8(3), e1000344. DOI: 10.1371/journal.pbio.1000344