For antivaxers, aluminum is the new mercury.
Let me explain, for the benefit of those not familiar with the antivaccine movement. For antivaxers, it is, first and foremost, always about the vaccines. Always. Whatever the chronic health issue in children, vaccines must have done it. Autism? It’s the vaccines. Sudden infant death syndrome? Vaccines, of course. Autoimmune diseases? Obviously it must be the vaccines causing it. Obesity, diabetes, ADHD? Come on, you know the answer!
Because antivaxers will never let go of their obsession with vaccines as The One True Cause Of All Childhood Health Problems, the explanations for how vaccines supposedly cause all this harm are ever morphing in response to disconfirming evidence. Here’s an example. Back in the late 1990s and early 2000s, antivaxers in the US (as opposed to in the UK, where the MMR vaccine was the bogeyman) focused on mercury in vaccines as the cause of autism. That’s because many childhood vaccines contained thimerosal, a preservative that contains mercury. In an overly cautious bit of worshiping at the altar of the precautionary principle, in 1999 the CDC recommended removing the thimerosal from childhood vaccines, and as a result it was removed from most vaccines by the end of 2001. (Some flu vaccines continued to contain thimerosal for years after that, but no other childhood vaccine did, and these days thimerosal-containing vaccines of any kind are uncommon.)
More importantly, the removal of thimerosal from childhood vaccines provided a natural experiment to test the hypothesis that mercury causes or predisposes to autism. After all, if mercury in vaccines caused autism, the near-complete removal of that mercury from childhood vaccines in a short period of time should have resulted in a decline in autism prevalence beginning a few years after the removal. Guess what happened? Autism prevalence didn’t decline. It continued to rise. To scientists, this observation was a highly convincing falsification of the hypothesis through a convenient natural experiment, although those who belong to the strain of the antivaccine movement sometimes referred to as the mercury militia still flog mercury as a cause of autism even now. Robert F. Kennedy, Jr. is perhaps the most famous mercury militia member, although of late he’s been sounding more and more like a run-of-the-mill antivaxer.
Which brings us to aluminum.
With mercury in vaccines pretty definitively eliminated as The One True Cause Of Autism, antivaxers started looking for other ingredients to blame for autism because, as I said before, it’s first, foremost, and always all about the vaccines. So naturally they shifted their attention to the aluminum adjuvants in many vaccines. Adjuvants are compounds added to vaccines in order to boost the immune response to the antigen used, and aluminum salts have been used as effective adjuvants for many years now and have an excellent safety record. None of that has stopped antivaxers from trying to make aluminum the new mercury by blaming aluminum-containing vaccines for autism. I was reminded of this earlier this week when my e-mail was flooded with messages about a new study being flogged by antivaxers in spectacularly ignorant ways, including three—yes, three—identical messages from a certain antivaxer with a severe case of Dunning-Kruger and delusions of grandeur basically challenging me to review this study and assuring me that antivaxers would be citing it for a long time. Well, whenever I receive messages like that, particularly with such annoying repetition, my answer is: Be very careful what you wish for.
Also: Challenge accepted.
Which brings us to the study itself. It’s by antivaccine “researchers” whose previous studies and review articles I’ve discussed before. Yes, I’m referring to Christopher Shaw and Lucija Tomljenovic in the Department of Ophthalmology at the University of British Columbia. Both have a long history of publishing antivaccine “research,” mainly falsely blaming the aluminum adjuvants in vaccines for autism and, well, just about any health problem children have, as well as blaming Gardasil for premature ovarian failure and all manner of woes up to and including death. Shaw was even prominently featured in the rabidly antivaccine movie The Greater Good. Not surprisingly, they’ve had a paper retracted, as well.
This time around, they’ve gone back to their old stomping grounds, the Journal of Inorganic Biochemistry, and, along with two other co-authors, published “Subcutaneous injections of aluminum at vaccine adjuvant levels activate innate immune genes in mouse brain that are homologous with biomarkers of autism.” It’s the same journal where they published a review article in 2011 full of antivaccine misinformation and distortions. So, given Shaw and Tomljenovic’s history, it is not unreasonable to be suspicious of this study as well. But, hey, you never know. Maybe it’s a good study that sheds light on an important aspect of the pathogenesis of autism…Ah, who’m I kidding? It’s nothing of the sort. It’s yet another study designed to imply that aluminum adjuvants cause autism.
Before we look at the study itself, specifically the experiments included in it, let’s consider the hypothesis being tested, because experiments in any study should be directed at falsifying the hypothesis. Unfortunately, there is no clear statement of hypothesis where it belongs, namely in the introduction. Instead, what we get is this:
Given that infants worldwide are regularly exposed to Al adjuvants through routine pediatric vaccinations, it seemed warranted to reassess the neurotoxicity of Al in order to determine whether Al may be considered as one of the potential environmental triggers involved in ASD.
In order to unveil the possible causal relationship between behavioral abnormalities associated with autism and Al exposure, we initially injected the Al adjuvant in multiple doses (mimicking the routine pediatric vaccine schedule) to neonatal CD-1 mice of both sexes.
This is basically a fishing expedition in which the only real hypothesis is that “aluminum in vaccines is bad and causes bad immune system things to happen in the brain.” “Fishing expeditions” in science are studies in which the hypothesis is not clear and the investigators are looking for some sort of effect that they suspect they will find. In fairness, fishing expeditions are not a bad thing in and of themselves—indeed, they are often a necessary first step in many areas of research—but they are hypothesis-generating, not hypothesis-confirming. After all, there isn’t a clear hypothesis to test; otherwise it wouldn’t be a fishing expedition. The point is that this study does not confirm or refute any hypothesis, much less provide any sort of slam-dunk evidence that aluminum adjuvants cause autism.
Moving along, I note that this is a mouse experiment, and somehow antivaxers are selling this as compelling evidence that vaccines cause autism through their aluminum adjuvants causing an inflammatory reaction in the brain. Now, seriously. Mouse models can be useful for a lot of things, but, viewed critically, for the most part autism is not really one of them. After all, autism is a human neurodevelopmental disorder diagnosed entirely by behavioral changes, and correlating mouse behavior with human behavior is very problematic. Indeed, correlating the behavior of any animal, even a primate, with human behavior is fraught with problems. Basically, there is no well-accepted single animal model of autism, and autism research has been littered with mouse models of autism that were found to be very much wanting. (“Rain mouse,” anyone?) Basically, despite the existence of many mouse strains touted to be relevant to autism, almost none of them are truly relevant because:
A good animal model satisfies three fundamental criteria. The first, called face validity, requires sufficient similarities between the phenotype of the mice and symptoms of the human disorder. The second, called construct validity, is achieved if the biological cause of the human disease is replicated in the mouse — for example, when an autism-associated gene is mutated in mice. Finally, a mouse model has predictive validity if treatments improve both the human symptoms of the disorder and the mouse phenotype.
Diagnosis of autism is purely behavioral and requires clearly defined symptoms in each of three core categories: abnormal social interactions, impaired communication and repetitive behavior. One of the challenges in studying mouse models is determining which behaviors from the mouse repertoire could be considered analogous to these symptoms.
And:
So far, very few of these mouse models display behavioral phenotypes relevant to all three core domains of autism. What’s more, in some cases, physical problems such as poor general health following seizures, or low exploratory activity, produce false positives that prevent the interpretation of more complex, autism-relevant phenotypes.
Pay particular attention to the part about construct validity. The assumption behind this study is that immune changes in the brain of mice will be relevant to immune activation in the brains of autistic humans. That is an assumption that hasn’t yet been confirmed with sufficient rigor to view this study’s results as any sort of compelling evidence that aluminum adjuvants cause autism. Yes, the authors include in the paper an important-looking diagram describing how they think immune system activation causes autism.
In the end, though, as impressive as it is, the relevance of this chart to autism is questionable at best, as is the relevance of this study. So let’s look at the mouse strain chosen by the investigators, CD-1 mice. Basically, there’s nothing particularly “autistic” (even in terms of existing mouse models purported to be relevant to autism) about these mice, which are described in most catalogues of companies selling them as “general purpose.” Basically, the authors used them because they had used them before, in previous studies in which they reported that aluminum injections caused motor neuron degeneration (nope, no autism) and in another crappy paper in the same journal from 2013 purporting to link aluminum with adverse neurological outcomes. That’s it.
As for the experiment itself, neonatal mice were divided into two groups, a control group that received saline injections and an experimental group that received injections of aluminum hydroxide in doses timed such that they purportedly mimicked the pediatric vaccine schedule. Looking over the schedule used, I can’t help but note that there’s a huge difference between human infant development and mouse development. Basically, the mice received aluminum doses claimed to be the same by weight as what human babies get, six times in the first 17 days of life. By comparison, in human babies these doses are separated by months. In addition, in human babies, vaccines are injected intramuscularly (into a muscle). In this study, the mice were injected subcutaneously (under the skin). This difference immediately calls into question applicability and construct validity. The authors stated that they did it because they wanted to follow previously utilized protocols in their laboratory. In some cases, that can be a reasonable rationale for an experimental choice, but in this case the original choice was questionable in the first place. Blindly sticking with the same bad choice is just dumb.
So what were the endpoints examined in the mice injected with aluminum hydroxide compared to saline controls? After 16 weeks, the mice were euthanized and their brains harvested to measure gene expression and the levels of the proteins of interest. Five males and five females from each group were “randomly paired” for “gene expression profiling.” Now, when I think of gene expression profiling, I usually think of either cDNA microarray experiments, in which the levels of thousands of genes are measured at the same time, or next generation sequencing, in which the level of every RNA transcript in the cell can be measured simultaneously. That doesn’t appear to be what the authors did. Instead, they used a technique known as PCR to measure the messenger RNA levels of a series of cytokines. Basically, they examined the amount of RNA coding for various immune proteins in the brain, chosen by the authors as relevant to inflammation. The authors also did Western blots for many of those proteins, a test in which proteins are separated on a gel, blotted to a filter, and then probed with specific antibodies, resulting in bands that can be measured by a number of techniques, including autoradiography or chemiluminescence, both of which can be recorded on film on which the relevant bands can be visualized. Basically, what the authors did wasn’t really gene expression profiling. It was measuring a bunch of genes and proteins and hoping to find a difference.
There’s an even weirder thing. The authors didn’t use quantitative real-time reverse transcriptase PCR, which has been the state of the art for measuring RNA message levels for quite some time. Rather, they used a very old, very clunky form of PCR that can only produce—at best—semiquantitative results. (That’s why we used to call it semiquantitative PCR.) Quite frankly, in this day and age, there is absolutely zero excuse for choosing this method for quantifying gene transcripts. If I were a reviewer for this article, I would have recommended not publishing it based on this deficiency alone. Real-time PCR machines, once very expensive and uncommon, are widely available. (Hell, I managed to afford a very simple one in my lab nearly 15 years ago.) Any basic or translational science department worth its salt has at least one available to its researchers.
The reason that this semiquantitative technique is considered inadequate is that the amount of PCR product grows exponentially, roughly doubling with every cycle of PCR, asymptotically approaching a maximum as the primers are used up.
It usually takes around 30-35 cycles before everything saturates and the differences observed in the intensity of the DNA bands when they are separated on a gel become indistinguishable. That’s why PCR was originally considered primarily a “yes/no” test: either the RNA being measured was there and produced a PCR band, or it wasn’t. In this case, the authors used 30 cycles, which is more than enough to result in saturation. (Usually semiquantitative PCR stops around 20-25 cycles or even less.) And I haven’t even (yet) mentioned how the authors didn’t use DNase to eliminate the small amounts of DNA that contaminate nearly all RNA isolations. Basically, the primers used for PCR pick up DNA as well as any RNA, and DNA for the genes of interest is guaranteed to contaminate the specimens without DNase treatment. Yes, you molecular biologists out there, I know that’s simplistic, but my audience doesn’t consist of molecular biologists.
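To illustrate why end-point PCR run to 30 cycles can’t reliably quantify fold differences, here’s a minimal back-of-the-envelope sketch. It’s my own toy model with made-up numbers, not anything from the paper: two samples whose starting transcript levels genuinely differ three-fold, amplified until the reaction plateaus.

```python
# Toy model of end-point PCR (illustrative only, not the authors' method or data):
# product roughly doubles each cycle, then plateaus as primers/nucleotides run out.

def pcr_product(start_copies, cycles, efficiency=0.95, plateau=1e12):
    """Return the approximate number of amplicons after a given number of cycles."""
    copies = start_copies
    for _ in range(cycles):
        # Growth slows as the reaction approaches the plateau (logistic-style cap).
        copies += copies * efficiency * (1 - copies / plateau)
    return copies

control = 1e4   # hypothetical starting template in a control sample
treated = 3e4   # a genuine three-fold difference in starting template

for cycles in (20, 25, 30, 35):
    ratio = pcr_product(treated, cycles) / pcr_product(control, cycles)
    print(f"{cycles} cycles: apparent fold difference ~ {ratio:.2f}")

# At ~20 cycles the three-fold difference is still visible on a gel;
# by 30-35 cycles both reactions have hit the plateau and the bands look
# essentially identical.
```

The exact numbers depend on the assumed efficiency and plateau, but the qualitative point is the one that matters here: once the reaction saturates, band intensity stops tracking the starting amount of template.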
Now, take a look at Figures 1A and 1B as well as Figures 2A and 2B. (You can do it if you want. The article is open access.) Look at the raw bands in the A panels of the figures. Do you see much difference, except for IFNG (interferon gamma) in Figure 1A? I don’t. What I see are bands of roughly the same intensity, even the ones that are claimed to vary by three-fold. In other words, I am very skeptical that the investigators saw much of a difference in gene expression between controls and the aluminum-treated mice. In fairness, for the most part, the protein levels as measured by Western blot did correlate with what was found on PCR, but there’s another odd thing. The investigators didn’t do Western blots for all the same proteins whose gene expression they measured by PCR. They present primers for 27 genes, but only show blots for 18 (17 inflammatory genes plus beta actin, which was used as a standard to normalize the values for the other 17 genes).
I also question the statistical tests chosen by the authors. Basically, they examined each gene separately and used Student’s t-test to assess statistical significance. However, in reality they did many comparisons, at least 17, and there’s no evidence that the authors controlled for multiple comparisons (see the quick calculation below). If one sets statistical significance at p < 0.05 and makes 20 comparisons, by random chance alone roughly one would be expected to come up “significant” even if no real differences exist. Add to that the fact that there is no mention of whether the people performing the assays were blinded to experimental group, and there’s a big problem.

Basic science researchers often think that blinding isn’t necessary in their work, but there is a potential for unconscious bias that they all too often don’t appreciate. For example, the authors used ImageJ, free image processing software developed by the NIH. I’ve used ImageJ before. It’s commonly used to quantify the density of bands on gels, even though it’s old software and hasn’t been updated in years. Basically, it involves manually drawing outlines of the bands, setting the background, and then letting the software calculate the density of the bands. The potential for bias shows up in how you draw the lines around the bands and set the backgrounds. As oblivious as they seem to be to this basic fact, basic scientists are just as prone to unconscious bias as the rest of us, and, absent blinding, in a study like this there is definitely the potential for unconscious bias to affect the results. In fairness, few basic science researchers bother to blind whoever is quantifying Western blots or ethidium bromide-stained DNA gels of PCR products, but that’s just a systemic problem in biomedical research that I not infrequently invoke when I review papers. Shaw and Tomljenovic are merely making the same mistake that at least 90% of basic scientists make.
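To put a rough number on the multiple-comparisons point above, here’s a quick back-of-the-envelope calculation. It’s my own illustration, using 17 tests to match the number of blotted genes, and it assumes the tests are independent, which is a simplification:

```python
# Illustration of the multiple-comparisons problem (hypothetical numbers, not the
# paper's data): run many significance tests at p < 0.05 and false positives
# become likely even when nothing real is going on.

n_tests = 17   # roughly the number of genes compared individually
alpha = 0.05   # the per-test significance threshold

# Probability of at least one false positive if every null hypothesis is true
# and the tests are independent.
family_wise_error = 1 - (1 - alpha) ** n_tests
print(f"Chance of >=1 spurious 'significant' result: {family_wise_error:.0%}")  # ~58%

# A simple Bonferroni correction would demand p < alpha / n_tests for each gene.
print(f"Bonferroni-corrected threshold: {alpha / n_tests:.4f}")  # ~0.0029
```

Whether Bonferroni or something less conservative is the right fix is debatable, but the point stands: without any correction at all, a few “statistically significant” differences in a panel this size are expected from chance alone.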
But let’s step back and take the authors’ results at face value for a moment. Let’s assume that what is reported is a real effect. In the rest of the paper, the authors present evidence of changes in gene expression that suggest the activation of a molecular signaling pathway controlled by a molecule called NF-κB and that male mice were more susceptible to this effect than females. (Just like autism!) Funny, but I know NF-κB. I’ve published on NF-κB. I had an NIH R01 grant to study how my favorite protein affected NF-κB. True, I ended up abandoning that line of research because I hit some dead ends. True, I’m not as familiar with NF-κB as I used to be. But I do know enough to know that NF-κB is easy to activate and very nonspecific. I used to joke that just looking at my cells funny would activate NF-κB signaling. Also, NF-κB activation is indeed associated with inflammation, but so what? What we have is an artificial model in which the mice are dosed much more frequently with aluminum than human infants. Does this have any relevance to the human brain or to human autism? Who knows? Probably not. No, almost certainly not.
Also, the mouse immune system is different from the human immune system. None of this stops the authors from concluding:
Based on the data we have obtained to date, we propose a tentative working hypothesis of a molecular cascade that may serve to explain a causal link between Al and the innate immune response in the brain. In this proposed scheme, Al may be carried by the macrophages via a Trojan horse mechanism similar to that described for the human immunodeficiency virus (HIV) and hepatitis C viruses, travelling across the blood-brain-barrier to invade the CNS. Once inside the CNS, Al activates various proinflammatory factors and inhibits NF-κB inhibitors, the latter leading to activation of the NF-κB signaling pathway and the release of additional immune factors. Alternatively, the activation of the brain’s immune system by Al may also occur without Al traversing the blood-brain barrier, via neuroimmuno-endocrine signaling. Either way, it appears evident that the innate immune response in the brain can be activated as a result of peripheral immune stimuli. The ultimate consequence of innate immune over-stimulation in the CNS is the disruption of normal neurodevelopmental pathways resulting in autistic behavior.
That’s what we in the business call conclusions not supported by the findings of a study. On a more “meta” level, it’s not even clear whether the markers of inflammation observed in autistic brains are causative or an epiphenomenon. As Skeptical Raptor noted, it could be that the reported inflammation is caused by whatever primary changes in the brain result in autism. Cause and effect are nowhere near clear. One can’t help but note that many of the infections vaccinated against cause far more activation of the immune system and cytokines than vaccination does.
So what are we left with?
Basically, what we have is yet another mouse study of autism. The study purports to show that aluminum adjuvants cause some sort of “neuroinflammation,” which, it is assumed, equals autism. By even the most charitable interpretation, the best that can be said for this study is that it might show increased levels of proteins associated with inflammation in the brains of mice that had been injected with aluminum adjuvant far more frequently than human babies ever would be. Whether this has anything to do with autism is highly questionable. At best, what we have here are researchers with little or no expertise in very basic molecular biology techniques using old methodology that isn’t very accurate and overinterpreting the differences in gene and protein levels that they found. At worst, what we have are antivaccine “researchers” who are not out for scientific accuracy but who actually want to promote the idea that vaccines cause autism. (I know, I know, it’s hard not to ask: Why not both?) If this were a first offense, I’d give Shaw and Tomljenovic the benefit of the doubt, but this is far from their first offense. Basically, this study adds little or nothing to our understanding of autism or even the potential effects of aluminum adjuvants. It was, like so many studies before it, the torture of mice in the name of antivax pseudoscience. The mice used in this study died in vain, in a study supported by the profoundly antivaccine Dwoskin Foundation.
Also, I’ll tell my antivax admirer the same thing I once told J.B. Handley when he taunted me to examine a study that he viewed as “slam dunk” evidence for a vaccine-autism link: You don’t tug on Superman’s cape. And, no, your name isn’t Slim. You’re not an exception.
ADDENDUM 9/27/2017: Apparently I wasn’t…Insolent…enough with this paper. On PubPeer there is a big discussion about whether the images in this paper were manipulated and whether the authors self-plagiarized Figure 1 from another paper. It looks bad.