Is the misrepresentation of scientific findings by antivaxers leading to self-censorship by scientists?

It’s rare that I’m mentioned in The New York Times (or any other national media outlet). It’s happened, but only a handful of times (e.g., this NYT article on “right to try” and this article about the Food Babe). I must admit that I do have a Google Alert set for my name and variants of it, so I usually know when I’m mentioned in the media. This time around, I was blissfully unaware because, although a NYT article published a week ago linked to a post of mine, it didn’t mention my name; that is, I was unaware until readers burst that bubble of blissful ignorance. It turns out that it’s an article about a topic very relevant to my writings and that it also (somewhat) misrepresents (or at least incompletely represents) my position on whether certain studies should be funded or not. It’s by Melinda Wenner Moyer and entitled “Anti-Vaccine Activists Have Taken Vaccine Science Hostage”. It’s basically about how the manner in which antivaccine activists leap on any study that questions the efficacy or safety of vaccines could be causing some scientists to indulge in self-censorship. It’s a worthwhile discussion, and certainly there is a germ of truth in the article, but I think Moyer overstates her case and conflates two different things. This article reminded me of a PLoS blog post by Moyer in which I thought she was a bit too eager to cite preclinical studies to argue for the biological plausibility of a cell phone-cancer link that really isn’t there, at least not in the way she appeared to think. I agreed with her that those who claim that, because radio waves can’t break chemical bonds, cell phone radiation can’t possibly cause cancer are being overly simplistic, but I thought the form her argument took went too far the other way. These are the most niggling kinds of disagreements, where I agree with the gist of the post but have problems with it that keep me from wholeheartedly agreeing.

To show you what I mean, let’s look at the passage that irked me a little, and after that I’ll discuss the rest of her article, as well as a couple of the studies mentioned in it. Part way through her article, Moyer states:

Last September, researchers with the Vaccine Safety Datalink, a collaborative project between the Centers for Disease Control and Prevention and various health care organizations, published a study in the journal Vaccine that found an association — not a causal link, the authors were careful to note — between a flu vaccine and miscarriage. Soon after, Paul Offit, the director of the Vaccine Education Center at the Children’s Hospital of Philadelphia and co-inventor of a lifesaving rotavirus vaccine, said in The Daily Beast that the paper shouldn’t have been published, in part because the study was small and conflicted with earlier research. He also suggested that the authors had cherry-picked their data — a charge they vehemently deny. One physician questioned in the popular blog Science-Based Medicine why the research had been funded in the first place.

Let’s begin. It is true that I did question why the study had been funded in the first place, but the paragraph above doesn’t tell the whole story. At the risk of being self-indulgent, let me cite the relevant passage in my previous post from last September about the study, specifically my analysis of the introduction to the study in which the authors discussed the background and tried to justify their research:

Notice the eight studies cited (references 5-12) that failed to find significant safety issues with the vaccine in pregnancy, and a study (reference 14) using VSD data that failed to find an association between flu vaccination and spontaneous abortion. That’s actually a lot of data for the safety of the flu vaccine during pregnancy, which makes me wonder what the justification was for yet another study looking for an association between influenza vaccination and miscarriages. If I were a funding agency and received a grant application to do a study like this with the text above in the “Background and Significance” or the “Impact” section, my first reaction would be: Why on earth would we fund this? It’s all been done before, many, many times. Yet the CDC funded this study. So much for antivax claims about the CDC not being concerned about vaccine safety and not being willing to look for adverse reactions due to vaccines.

I also find it rather odd that the authors would say that few studies have been done looking for a correlation between influenza vaccination, even in early pregnancy, and miscarriage, when in fact there have been a lot, many of them well-designed, and they’ve pretty much all been negative. Whenever you see a study that finds something a lot different from the bulk of the studies that have been done before, the first question to be asked is: Are the results of the current study so robust that they indicate a hole in the existing data addressing the question asked, such that we should begin to question the cumulative results of all the studies that have gone before? Keep that question in mind as I continue.

So, yes, I’m guilty as charged. I did question why the CDC funded this study. However, in my defense I argue that the reason that I questioned why the CDC funded this study is very, very important (and valid) indeed, so much so that the paragraph in Moyer’s NYT article misrepresents what I said by omission. I hate to say this, but that’s the conclusion I came to about it. Indeed, one commenter named Mike noticed this as well:

As a matter of curiosity, I looked at the post in Science-Based that the author describes as “question[ing] *** why the research had been funded in the first place.” From context, you’d think that the blog author, David Gorski, was suggesting that such research should be avoided altogether. Gorski’s point, like Paul Offit’s point, is statistical and quite different than anything Moyer is suggesting. Gorski points out that the study he was critiquing cited eight prior studies of flu vaccines and miscarriages, none of which suggested an association. Because of the prior studies, a small study would not be powerful enough to shift the views of anyone who has a sense of how scientific evidence accumulates by replication. This was Offit’s point, too. Once a fair bit of research has been done, useful follow up can come only from much larger studies–that’s a bit of an oversimplification, but all that fits in a comment. Smaller studies won’t and shouldn’t change the clinical consensus, but they can be used to spread “fear, uncertainty, and doubt.”

And that’s basically it. Unfortunately, in 2018, science funding is a zero-sum game. Funding granted to an unnecessary study like the small study of the flu vaccine and miscarriages that I wrote about is funding that didn’t go to a study of vaccine safety that might actually have told us something new, rather than retreading where many investigators have trod before and muddying the waters in such a way that even more funding will have to be devoted to a larger followup study on what is almost certainly a false positive. I get it. A NYT article has strict word limits, and something’s got to give. A reporter or op-ed columnist can’t always explain everything, and it’s good that this reporter linked to my post, so that interested readers could check out what I said for themselves. But, really, how many readers will do that? I know from a long history of blogging that no more than around 10% ever click on a prominent link; for links buried in the middle of the text it’s more like 1-5%. And what about print readers? There’s no link for them to click.

Now let’s go back to the beginning. After reiterating the dangers caused by the antivaccine movement and citing her bona fides as a pro-vaccine science journalist, Moyer then goes on to warn:

As a science journalist, I’ve written several articles to quell vaccine angst and encourage immunization. But lately, I’ve noticed that the cloud of fear surrounding vaccines is having another nefarious effect: It is eroding the integrity of vaccine science.

Later, she argues:

But concerns over what these groups might do are starting to take precedence over scientific progress.

So there’s Moyer’s central thesis: Fear of the antivaccine movement is eroding the integrity of science. Whenever I see a statement this bold, I have to ask: Is the evidence presented adequate to support the “hypothesis,” if you will, strongly enough that I buy it? After reading Moyer’s op-ed, I must say that the answer is: No, but more evidence could change my mind.

First, what does she mean by “eroding the integrity of science”? This:

In February I was awarded a fellowship by the nonpartisan Alicia Patterson Foundation to report on vaccines. Soon after, I found myself hitting a wall. When I tried to report on unexpected or controversial aspects of vaccine efficacy or safety, scientists often didn’t want to talk with me. When I did get them on the phone, a worrying theme emerged: Scientists are so terrified of the public’s vaccine hesitancy that they are censoring themselves, playing down undesirable findings and perhaps even avoiding undertaking studies that could show unwanted effects. Those who break these unwritten rules are criticized.

The evidence presented by Moyer is basically all anecdotal, the typical “both sides” construct so beloved of journalists, with examples of scientists who feel apprehensive about talking to reporters about studies of vaccines that show less benefit or more risk from vaccines and report having experienced pressure not to publish. On the one side, we have Dr. Paul Offit arguing that “small, inconclusive, worrying studies should not be published because they could do more harm than good.” He has a point, but I would have phrased it differently.

Let’s take a look at the examples of “science stifled” that Moyer chooses to use. There are three, and I’ve already discussed one, namely the study of whether influenza vaccination during pregnancy is associated with miscarriage. It was actually the third example cited by Moyer. Let’s go back to the first:

Here’s a case that typifies this problem and illustrates how beneficial it can be when critical findings get published. In 2005, Lone Simonsen, who was then with the National Institute of Allergy and Infectious Diseases, and her colleagues published a study in JAMA Internal Medicine showing that the flu vaccine prevented fewer deaths than expected in people over 65.

“I had interesting conversations with vaccine people. They said, ‘What are you doing, Lone? You are ruining everything,’” recalls Dr. Simonsen, who is now a global public health researcher at George Washington University. Her work helped lead to the development of a more effective flu vaccine for older people, yet she felt ostracized. “I felt it personally, because I wasn’t really invited to meetings,” she says. “It took a good decade before it was no longer controversial.”

Interestingly, Moyer cites an article, which she quite correctly characterizes as “ridiculous,” that Simonsen’s study inspired among antivaxers, entitled “Flu Vaccines Are Killing Senior Citizens, Study Warns“. Ironically, her example demonstrates just what scientists could be afraid of. The article is dated November 17, 2017, and Simonsen’s study is from 2005. Basically, well over 12 years later, antivaxers are still writing articles citing Simonsen’s study as “evidence” that the flu vaccine is dangerous to seniors.

I can’t help but note that, even in that “ridiculous” article, there is something that Moyer missed. It cites an article by Peter Doshi, who’s been discussed on SBM a few times before. Basically, he’s now an associate editor for The BMJ, and he frequently uses his position to parrot antivaccine tropes. In other words, the wall of pro-vaccine scientists is not nearly as strong as Moyer would have us believe. There are antivaccine-sympathetic, if not outright antivaccine, scientists at the highest levels of biomedical publishing. Heck, there is even one at the highest levels of evidence-based medicine. Yes, I’m referring to Tom Jefferson, who’s even head of the Vaccine Field Group at the Cochrane Collaboration, a man who’s consistently tried to argue that the flu vaccine doesn’t really work and has even appeared on über-quack Gary Null’s radio show, which is hardly what a respectable scientist does. They’re in the minority by far, to be sure, but they’re not insignificant, and they belie the portrait of an utterly united pro-vaccine front that shuts down any study failing to show the utter perfection of vaccines.

So what’s the other example? It’s this:

In 2009, Danuta Skowronski, the lead epidemiologist in the division of Influenza and Emerging Respiratory Pathogens at the British Columbia Center for Disease Control, and her colleagues stumbled across unexpected data that suggested a link between seasonal flu shots and an increased risk for pandemic flu. The findings could not prove a causal link — perhaps people who get seasonal flu shots differ from those who don’t in ways that make them more susceptible to pandemic strains. But one possible interpretation is that seasonal flu shots inhibit immunity to those strains. Dr. Skowronski’s team replicated the findings in five different studies and then shared the data with trusted colleagues. “There was tremendous pushback,” Dr. Skowronski recalls, and some questioned whether “the findings were appropriate for publication.”

“I believed I had no right to not publish those findings,” Dr. Skowronski says. “They were too important.” The findings were submitted to three journals and underwent at least eight lengthy reviews before the final study was published in PLoS Medicine.

I can’t help but note that this study was very careful to point out, “The occurrence of bias (selection, information) or confounding cannot be ruled out.” In these cases, it’s always hard to ascertain whether a journal’s refusal to publish a study is due to flaws in the study or bias among the peer reviewers. Three journals? I’ve had papers that took submission to four different journals to get them published, and they were not anywhere near as controversial as this paper. Bottom line: Science is tough, and/or my science wasn’t good enough. Either way, it doesn’t show that there’s some sort of systemic bias to exclude my scientific findings from the peer-reviewed medical literature.

So what are we left with?

Well, I can’t help but note a particular Tweetstorm, and especially one part of it:

That’s how antivaxers will spin it: that scientists are engaging in massive self-censorship because they know antivaxers are right and are afraid. The question is: What should vaccine scientists do?

Although I can conceive of cases where self-censorship could be appropriate (albeit very few, and then only involving findings that could easily be weaponized by, for example, terrorists), I’m nonetheless sympathetic with Moyer in that I like to think that self-censorship of scientific findings is generally a bad thing. Scientific findings should, in general, be reported, and let the chips fall where they may. On the other hand, I understand that it’s naïve not to recognize how scientific findings will be spun by the antivaccine movement and that scientists need to be careful about how science is reported. It’s not just vaccine science, either. Those of us who have tried to serve as science advocates understand that it’s also climate science, evolution, and other areas of science where ideology all too frequently trumps evidence. Are vaccine scientists “suppressing” science unfavorable to vaccines? In the end, even Moyer doesn’t seem to think so:

This is not to say that anyone is covering up major safety problems, by the way; critical studies generally concern minor issues in specific contexts. But scientists could one day miss more important problems if they embrace a culture that suppresses research.

While Moyer is correct that “good science needs to be heard even if some people will twist its meaning”, I’m not sure that she understands how little of what is “being heard” is actually good science and just how relentlessly antivaxers will “twist its meaning”. Scientists in fields under attack by ideologues understand that and are struggling with how to be true to science without making it too easy for these ideologues to distort and attack science. In the end, I left this story with a concern that what Moyer describes might be starting to happen, but the entirely anecdotal case she built for this phenomenon, coupled with a mere three examples, none of which were particularly compelling to me, didn’t convince me that vaccine safety research is suffering because of fear of how antivaxers might use it. At least not yet.