Two years ago, I wrote about a study that demonstrated how the antivaccine movement had learned to use Twitter to amplify their antiscience message. At the time, I noted how in 2014, when the whole “CDC whistleblower” conspiracy theory was first hatched, antivaxers were so bad at Twitter, so obvious, so naive. They Tweeted inane claims at government officials, scientists, legislators, and whoever else might have influence on vaccine policy, using hashtags like #CDCwhistleblower and #hearmewell. (These hashtags are still in use, but much less active.) However, they did get better, to the point where the study that I discussed pointed out how antivax Twitter accounts formed large networks Tweeting opposition to California SB 277, the bill (now law) that eliminated personal belief exemptions to school vaccine mandates.
All of this was before the 2016 election, even before Donald Trump came gliding down the escalator at Trump Tower to announce his candidacy for the Republican nomination for President. It was right around the time that fake news was beginning to be appreciated as the huge problem that it ultimately became. More importantly, it was long before it became appreciated how Twitter bots and hordes of Twitter trolls were engaged in an active effort to influence U.S. politics and the 2016 election, and how Facebook was weaponized for the same purpose.
An inestimably important tool in the armamentarium of tools used by those seeking to influence election politics were bots. A bot is an automated program that posts to social media according to an algorithm. Twitter bots are the ones that most people are familiar with; chances are very good that if you’re on Twitter for any length of time you’ll come into contact with bots, which are used to distribute Tweets en masse, sometimes in an attempt to influence Twitter’s trending topics, sometimes just to give the appearance of way more support for people or policies than there actually is. If you’re on Twitter long enough, you’ll start to learn the telltale signs that an account might be run by a bot, although accounts that combine a mixture of human-generated and automated Tweets are also common.
It turns out that bots are everywhere, and there is evidence that antivaxers are using them too. I shouldn’t be surprised, and I wasn’t really that surprised, but I was disturbed. Earlier this week I came across an article, SocialBots are Pouring the Pseudo into Science. #Vaccines. It was the product of Mentionmapp Analytics, a company that runs a website called Mentionmapp, a tool that looks at connections between accounts and advertises itself as making “finding Twitter’s great stuff easier.” It begins:
There’s no immunity. Computational propaganda is infecting every significant online socio-political conversation. Algorithms are directly influencing the content populating social feeds, and people with ill-intentions are using software automation tools to spread digital pathogens. There’s no escaping that the number of likes, re-tweets, shares, and views are the foundations of our “filter bubbles.”
These key social indicators are easily manipulated. They’re like micro-events and discerning human from non-human engagement is nearly impossible to detect. Detecting SocialBots at work and seeing concentrated efforts to influence public opinion and perceptions leaves us wondering how civil discourse will survive this spreading digital black death.
OK, so the article starts out a bit apocalyptic and overdramatic. It’s a company that exists to sell its services analyzing Twitter networks. Still, that doesn’t mean that the computer-automated manipulation of social networks isn’t a massive problem. In any case, the company notes that the hashtags #vaccines and #antivax came to its attention recently, which made it curious to see how SocialBots are involved in online conversations about science and public health. Not surprisingly these days, the answer is: Heavily.
Mentionmapp notes that it’s hard to do an analysis of what it calls SocialBots without getting pulled into the misinformation being spread by those bots, which, as it turns out, is a lot. Here’s the story:
For this case-study we observed 23 different daily Twitter maps. Each map captures the last 200 tweets and the profiles that tweeted using the hashtag #Vaccines. Before separating real profiles from the fakes, the first map we reviewed (above) seemingly highlights the divisiveness of this issue. We also noted the volume of tweets from those profiles staking an anti-vaccine position subsequently flow to high profile and politically partisan secondary profiles.
After reviewing the 23 separate maps of the hashtag #Vaccines, we documented 284 profiles as SocialBots with one dominant participant emerging above the rest. Day in and day out @LotusOak is at the center of this conversation.
A short video is included to visually illustrate this phenomenon:
Regular Twitter users might also recognize prominent antivaccine activists and the pro-vaccine activists who make prodigious efforts to counter their misinformation. What some of those pro-vaccine advocates might be unhappy to learn is that, if they’ve been countering Twitter users @LotusOak (name: Vera Burnayev, who, as far as Mentionmapp can tell, doesn’t exist as a real, identifiable person), @eTweeetz, or @draintheswamp55, they’ve almost certainly been arguing with bots Tweeting antivaccine misinformation.
Mentionmapp also noted:
Out of the 23 maps we also noted the presence of these four profiles re-tweeting @LotusOak on multiple days —
@SNCCLA = 11 days
@8greatyears = 5 days
@Marmy2c = 5 days
@theruralists = 5 days

We classify them as SocialBots. As well as noting 284 SocialBot profiles tweeting the hashtag #Vaccine, we also documented every hashtag used in conjunction with it. A total of 609 secondary hashtags were used. Here are the top 30 hashtags.
The authors also documented every hashtag used in conjunction with the #vaccine hashtag. Those of you out there on Twitter will recognize a lot of them that came up in the top thirty: #LearnTheRisk, #CDCTruth, #homeoprophylaxis, #aluminum, #mercury, #GMO, and more. There are also some pro-vaccine hashtags in there, but those are often used by pro-vaccine Twitter users along with #vaccines. I did find one thing about this list very puzzling, though. Anyone who’s on Twitter and deals with antivaccine misinformation will know that, over the last few months, among the favorite hashtags used by antivaxers are those related to the antivaccine propaganda movie VAXXED, like #wearevaxxed (of course) and #praybig (I have no idea why antivaxers adopted this hashtag).
Why didn’t these hashtags get flagged? I can think of a few possible reasons. One is that maybe the “VAXXED” contingent of the antivaccine movement is not as prominent as Andrew Wakefield would like everyone to believe. I’d like to think that, but there are other possibilities. One is that, although #vaccines might be heavily influenced by bots, discussions using VAXXED-related hashtags are not. After all, why would they be? There are so many Wakefield groupies willing to use #wearevaxxed and #praybig to try to influence Twitter conversations. Alternatively, whoever is behind accounts like @LotusOak is not interested in promoting VAXXED and affiliated antivaccine viewpoints.
From my perspective, one of the weaknesses in the Mentionmapp analysis flows from a lack of knowledge about the antivaccine movement. That’s not surprising, as Mentionmapp is not noted for its expertise regarding pseudoscientific arguments about vaccines or, more importantly, about the main players in the antivaccine movement. As a result, what I see as a key flaw is that Mentionmapp’s analysis focused on #vaccines as the main hashtag to study. As a first pass, that probably sounds reasonable, but there are many other major hashtags used by antivaxers. Arguably, #vaccines isn’t even the most important. No, I don’t have quantitative data to support that conclusion and thus could be wrong, but my impression from the trenches on Twitter is that most antivaxers rarely use the #vaccines hashtag. In other words, real humans who are antivaccine probably don’t use #vaccines that much, but it makes sense that bots would. That makes me wonder if this analysis overestimates the influence of bots in social media interactions on Twitter. That’s not to say that bots are unimportant. Even if this analysis does overestimate their influence, it wouldn’t surprise me if antivaxers are using Twitter bots to influence discussions about vaccines and to give the impression that antivaccine viewpoints are more prevalent than they in fact are.
I realize that my readers include a number of people who are active in combatting antivaccine misinformation on social media, particularly Twitter. It’s a hard and thankless job that subjects one to potential online abuse and stalking, particularly for women. I know that I hadn’t really considered the possibility that antivaxers might be adopting the same tactics as political activists, namely using bots to try to influence the conversation on Twitter. At least, I didn’t think it was likely to be happening on a large scale. The current article doesn’t really answer the question of how prevalent these bots are, but it does suggest that it behooves science advocates to be aware of bots and to have an idea how to identify them. We should also realize that not all bots are malicious. Some just post poetry, photography, or news, with no distorting effects on social media conversations.
There are several characteristics of Twitter accounts that should make you suspect you’re dealing with a bot. One of the most glaring traits of Twitter bots is the frequency with which they Tweet. Benchmarks vary, but one commonly accepted benchmark is more than 50 Tweets per day. Some bots produce hundreds of Tweets a day, something real humans cannot do, at least not on a sustained basis. Another characteristic of a bot is that it frequently produces far more retweets than original Tweets. Remember, one of the main purposes of bots is amplification, to boost the signal from others by retweeting, liking, or quoting others. Another amplification technique is to program a bot to share news stories from selected sites without comment. This is particularly true if the content is always very similar, because bots are often programmed to post similar content.
There are, of course, other characteristics suggestive of a bot, such as not having an avatar or having an avatar that is a stolen or shared photo, having a random string of numbers at the end of its handle, and choice of URL shortener. Basically, after a while on Twitter, one starts to be able to “smell” a bot. Personally, I block any account I suspect of being a bot. I’m willing to accept the “collateral damage” of potentially blocking legitimate Twitter users.
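The heuristics above can be sketched as a simple scoring function. This is a minimal illustration, not any tool Mentionmapp actually uses; the `AccountStats` structure and the threshold values (50 Tweets per day, an 80% retweet ratio, four trailing digits in a handle) are my own illustrative assumptions.

```python
# A rough sketch of the bot heuristics described above. The thresholds
# are illustrative assumptions, not established or validated cutoffs.

from dataclasses import dataclass

@dataclass
class AccountStats:
    tweets_per_day: float   # average Tweets per day
    retweet_ratio: float    # fraction of Tweets that are retweets (0 to 1)
    has_avatar: bool        # does the profile have a real photo?
    trailing_digits: int    # digits at the end of the handle, e.g. "user48213"

def bot_score(a: AccountStats) -> int:
    """Return a crude 0-4 score; higher means more bot-like."""
    score = 0
    if a.tweets_per_day > 50:    # sustained high volume no human can match
        score += 1
    if a.retweet_ratio > 0.8:    # mostly amplification, little original content
        score += 1
    if not a.has_avatar:         # missing or default profile photo
        score += 1
    if a.trailing_digits >= 4:   # random-number suffix on the handle
        score += 1
    return score

# Example: a high-volume, retweet-heavy account with no avatar
suspect = AccountStats(tweets_per_day=300, retweet_ratio=0.95,
                       has_avatar=False, trailing_digits=5)
print(bot_score(suspect))  # → 4
```

No single signal is decisive — a prolific human can exceed 50 Tweets a day during an event — which is why a score that combines several signals beats any one threshold on its own.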
Thanks to bots, social media has been weaponized. I might have some quibbles with Mentionmapp’s analysis, but I also have to admit that part 2 hasn’t been released yet. Maybe the deficiencies I’ve noted in this discussion will be considered and discussed in part 2. Maybe not. Even if they aren’t, it’s hard not to conclude that antivaxers are using bots to promote their point of view. It’s the new reality. I can’t help but wonder if we shouldn’t have bots of our own.