Are Russian bots being used to sow division over vaccines? Maybe.

I’ve been involved in what I sometimes call the vaccine wars for a long time, dating back nearly 20 years. It was in the late 1990s and early 2000s that I discovered that there were actually people who thought that vaccines were not only unsafe, but that they caused autism, autoimmune diseases, and basically every chronic disease under the sun. However, I didn’t really get involved in actively refuting online antivaccine misinformation in a big way until early 2005, not long after I started version one of this blog. Back then, social media consisted primarily of Usenet (which by the time I started this blog in late 2004 was dying from a disease we’re all familiar with in 2018, trolls drowning out any actual conversation) and blogs like this one. Twitter had not yet been founded, and, while Facebook existed, it was in its infancy and access had not been granted to the general public yet. (To give you an idea of what I’m talking about, I didn’t join Facebook until 2008, and I didn’t sign up for a Twitter account until 2009. Then I hardly used Twitter for the first couple of years I had an account.) Russian bots and trolls did not yet exist, either. So blogs were pretty much it, the main source of antivaccine pseudoscience and misinformation that needed to be countered at the time.

With the rise of Facebook and Twitter, the social media landscape changed markedly. Blogs, while still important, are no longer the primary vehicles of social media, and the antivaccine movement, albeit slow to do so, did ultimately dive into Twitter and Facebook in a big way. I first took note of its bumbling, fumbling forays onto Twitter in 2015 over the whole “CDC whistleblower” conspiracy theory, whose birth I observed in 2014. By 2017, fake news and Twitter bots spouting antivaccine misinformation were rampant, and the antivaccine movement had made major inroads into Facebook. Now—surprise! surprise!—it turns out that Russian bots and troll farms are Tweeting about vaccines as well, according to a new study released yesterday. The study, Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate, was led by David A. Broniatowski of the Department of Engineering Management and Systems Engineering at The George Washington University.

Basically, the investigators, who came from GWU’s Department of Engineering Management and Systems Engineering and the University of Maryland, started examining Russian troll accounts as part of their study after NBC News published its database of more than 200,000 Tweets emanating from Russian-linked accounts. These known Russian troll accounts were linked to the Internet Research Agency, a company backed by the Russian government that specializes in online influence operations and churns out memes, YouTube videos, Facebook posts, and Tweets pretending to be from activists and activist groups in an attempt to sway political conversations. In other words, it’s the same Russian propaganda machine that interfered in the 2016 US election.

Broniatowski describes what caught his attention:

“One of the things about them that was weird was that they tried to — or they seemed to try to — relate vaccines to issues in American discourse, like racial disparities or class disparities that are not traditionally associated with vaccination,” Broniatowski said.

For instance, “one of the tweets we saw said something like ‘Only the elite get clean vaccines,’ which on its own seemed strange,” he said. After all, anti-vaccine messages tend to characterize vaccines as risky for all people, regardless of class or socioeconomic status.

I can’t help but agree with this observation—mostly. It’s true that antivaxers tend to view vaccines as risky for all children. However, ever since the “CDC whistleblower” manufacturoversy turned into a full-blown conspiracy theory, antivaxers have been bringing race and class into their messaging more. For instance, key to the whole “CDC whistleblower” conspiracy theory is a reanalysis of an MMR study by an antivaxer named Brian Hooker, who misinterpreted the data to claim that MMR vaccination was associated with autism in African-American boys. That reanalysis was so bad that it was ultimately retracted. More recently, antivaxers have been palling around with the Nation of Islam to spread their message and targeting vulnerable minority populations, such as the Somali immigrant community in Minnesota, in the name of “helping” to protect them from “government-mandated” vaccines. The result in Minnesota has been a massive measles outbreak. So Broniatowski shouldn’t have been that surprised. This sort of message is not as far-fetched as he apparently thought it was.

The investigators reacted thusly to their observation:

The researchers were stunned to find Russian troll accounts tweeting about vaccines, but unraveling why they would stoke the vaccine debate was mind-boggling, too.

So, the authors analyzed the Tweets and discovered Russian bots Tweeting about vaccines. Before I discuss that in a bit more detail, let’s look at the paper itself more closely. The authors note in the introduction:

Proliferation of this content has consequences: exposure to negative information about vaccines is associated with increased vaccine hesitancy and delay.8–10 Vaccine hesitant parents are more likely to turn to the Internet for information and less likely to trust health care providers and public health experts on the subject.9,11 Exposure to the vaccine debate may suggest that there is no scientific consensus, shaking confidence in vaccination.12,13 Additionally, recent resurgences of measles, mumps, and pertussis and increased mortality from vaccine preventable diseases such as influenza and viral pneumonia14 underscore the importance of combating online misinformation about vaccines.

Much health misinformation may be promulgated by “bots”15—accounts that automate content promotion—and “trolls”16— individuals who misrepresent their identities with the intention of promoting discord. One commonly used online disinformation strategy, amplification,17 seeks to create impressions of false equivalence or consensus through the use of bots and trolls. We seek to understand what role, if any, they play in the promotion of content related to vaccination.

So the authors used a set of 1,793,690 Tweets collected from July 14, 2014 to September 26, 2017, and also carried out a qualitative study of the hashtag #VaccinateUS, described by the authors as a “Twitter hashtag designed to promote discord using vaccination as a political wedge issue” whose Tweets were “uniquely identified with Russian troll accounts linked to the Internet Research Agency.” In their first analysis, the authors examined whether Twitter bots and trolls Tweet about vaccines more often than the average Twitter user, while in their second analysis they examined the relative rates at which each type of account Tweeted pro-vaccine, antivaccine, or neutral messages about vaccines. Finally, they did their qualitative study of the #VaccinateUS hashtag.

Data for the first analysis came from two datasets derived from the Twitter streaming application programming interface (API): a random sample of 1% of all Tweets and a sample of Tweets containing vaccine-related keywords. For each dataset, the authors extracted Tweets from accounts known to be bots or trolls identified in seven publicly available lists of Twitter user IDs and compared these to an equal number of randomly selected Tweets posted in the same time frame. The relative frequency with which each account type posted about vaccines was estimated by counting the total number of Tweets that contained at least one word containing “vax” or “vacc.” In the second analysis, the authors collected a random subset of all Tweets from users in the random stream (the random sample of 1% of all Tweets) and used the Botometer API to estimate each Tweet’s “Bot Score,” a score between 0% and 100% reflecting the likelihood that the account doing the Tweeting is a bot. The accounts were segmented into three categories: those with scores less than 20% (very likely to be human); scores greater than 80% (very likely to be bots); and scores between 20% and 80% (can’t tell for sure). The same analysis was then carried out on the “vaccine stream” (Tweets related to vaccines).
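To make that pipeline concrete, here is a minimal sketch (in Python, and emphatically not the authors’ actual code) of the two pieces of the method described above: the keyword test for whether a Tweet is vaccine-related and the segmentation of accounts by bot score. The “vax”/“vacc” substrings and the 20%/80% thresholds come from the paper as summarized above; the sample Tweets, the scores, and the helper names are hypothetical stand-ins (real scores would come from the Botometer API).

```python
# Minimal sketch of the two analysis steps described above (not the authors' code).
# Bot scores would in reality come from the Botometer API; here they are made up.

VACCINE_SUBSTRINGS = ("vax", "vacc")  # a Tweet counts if any word contains one of these

def is_vaccine_related(text: str) -> bool:
    """Analysis 1: does the Tweet contain at least one word with 'vax' or 'vacc'?"""
    return any(sub in word.lower() for word in text.split() for sub in VACCINE_SUBSTRINGS)

def bucket_by_bot_score(score: float) -> str:
    """Analysis 2: segment an account by its bot score (0.0-1.0)."""
    if score < 0.20:
        return "likely human"
    if score > 0.80:
        return "likely bot"
    return "unknown/intermediate"

# Hypothetical usage on (tweet_text, bot_score) pairs:
sample = [
    ("Only the elite get clean vaccines #VaccinateUS", 0.55),
    ("Nice weather in DC today", 0.10),
    ("Vaxxed kids are healthier, says no study ever", 0.92),
]
for text, score in sample:
    if is_vaccine_related(text):
        print(f"{bucket_by_bot_score(score):>22} | {text}")
```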

Overall, as you can see, the higher the bot score, the more likely the account is to be Tweeting about vaccines:

I can’t help but notice, though, from this graph, that accounts with unknown and intermediate bot scores seem to be the ones Tweeting the most about vaccines, and mostly antivaccine messages. It kind of harkens back to the study I discussed way back in 2015. I noted that some antivaccine “influencers” on Twitter Tweeted so often that it makes me wonder how good the Botometer algorithm is at correctly classifying them as having a low likelihood of being bots. Similarly, this graph suggests that very low and very high bot scores are associated with basically no Tweeting of pro-vaccine messages. This puzzled me. I wondered how big an effect this really was and whether it justified all the fevered news coverage of this study. After all, the authors themselves noted in the introduction that “a full 93% of tweets about vaccines are generated by accounts whose provenance can be verified as neither bots nor human users yet who exhibit malicious behaviors” and that these unidentified accounts “preferentially tweet antivaccine misinformation.”

Let’s look at graph 2:

This clearly shows that Russian bots and trolls were far more likely to Tweet about vaccines than humans. It also indicates that content polluters like to use vaccine-related Tweets to draw in clicks to distribute their malware.

Finally, a thematic analysis showed these to be the sorts of pro- and antivaccine messages spread by Russian bots during that three year period:

[Table: examples of the antivaccine and pro-vaccine messages identified in the thematic analysis]

The authors concluded:

Malicious online behavior varies by account type. Russian trolls and sophisticated bots promote both pro- and antivaccination narratives. This behavior is consistent with a strategy of promoting political discord. Bots and trolls frequently retweet or modify content from human users. Thus, well-intentioned posts containing provaccine content may have the unintended effect of “feeding the trolls,” giving the false impression of legitimacy to both sides, especially if this content directly engages with the antivaccination discourse. Presuming bot and troll accounts seek to generate roughly equal numbers of tweets for both sides, limiting access to provaccine content could potentially also reduce the incentive to post antivaccine content.

By contrast, accounts that are known to distribute malware and commercial content are more likely to promote antivaccination messages, suggesting that antivaccine advocates may use preexisting infrastructures of bot networks to promote their agenda. These accounts may also use the compelling nature of antivaccine content as clickbait to drive up advertising revenue and expose users to malware. When faced with such content, public health communications officials may consider emphasizing that the credibility of the source is dubious and that users exposed to such content may be more likely to encounter malware. Antivaccine content may increase the risks of infection by both computer and biological viruses.

All of this seems like a set of reasonable conclusions. However, I was left with a niggling feeling. How big an effect was this really? After all, the authors themselves note that the highest proportion of antivaccine content is generated by accounts that are not clearly bots, accounts with unknown or intermediate bot scores. This led me to ask: How accurate is Botometer? Also, as I mentioned before, there are a lot of antivaccine “influencers” on Twitter who are so prolific in their Tweeting that I can see how they might, if not be mistaken for a bot by an algorithm, at least be scored as intermediate by a program like Botometer.

Also, when you come down to it, the numbers examined in this paper are not that large, something that is not noted in most of the news reports I’ve read about this study. If you look carefully at the results, you’ll find that the activity of known bots and trolls examined consisted of 899 Tweets and that the vaccine stream only consisted of 9,985 Tweets. For a three-year period, these are not large numbers at all, even if you take into account that the vaccine stream is only a 1% random sample of vaccine-related Tweets, meaning roughly a million Tweets over three years. I can’t help but wonder how representative the sample is, and, in particular, I have a hard time getting excited over such a small number of bot-generated Tweets.
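For what it’s worth, here is the back-of-the-envelope arithmetic behind that “roughly a million” figure; the 9,985 and 899 counts and the 1% sampling rate come from the study as described above, and the calculation is mine, not the authors’:

```python
# Back-of-the-envelope check of the sample sizes discussed above (my arithmetic, not the paper's).
vaccine_stream_sample = 9_985   # Tweets in the 1% vaccine stream sample
sampling_rate = 0.01            # the stream is a 1% random sample
implied_total = vaccine_stream_sample / sampling_rate
print(f"Implied vaccine-related Tweets over ~3 years: ~{implied_total:,.0f}")  # ~998,500

known_bot_troll_tweets = 899    # Tweets from the known bot/troll accounts examined
print(f"Known bot/troll Tweets examined: {known_bot_troll_tweets:,}")
```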

Renee DiResta noted just this on Twitter several days ago:

More recently:

I tend to agree. The bottom line is that antivaccine misinformation is a serious matter. Indeed, Twitter and Facebook have become cesspools of antivaccine pseudoscience and were well on their way to that state before the rise of Russian bots. There are also known to be quite a few antivaccine bots out there, as a 2017 study showed. In other words, knowing a bit more of the history of the antivaccine movement’s activities on social media and the Internet would have served the investigators well. Yes, I do believe that Russian bots were trying to sow some discord about vaccines, but this study doesn’t suggest that it’s anywhere near as big a problem as the breathless headlines about the study suggest.

At least not yet.