Twitter Bots and Trolls Fuel Online Discord About Vaccines


Twitter bots and trolls appear to be skewing online discussions about vaccinations, spreading misinformation on the topic and fueling online discord, according to a new study.

"The vast majority of Americans believe vaccines are safe and effective, but looking at Twitter gives the impression that there is a lot of debate," lead study author David Broniatowski, an assistant professor at George Washington University's School of Engineering and Applied Science in Washington, D.C., said in a statement. "It turns out that many anti-vaccine tweets come from accounts whose provenance is unclear," including bots or hacked accounts, Broniatowski said.

"Although it's impossible to know exactly how many tweets were generated by bots and trolls, our findings suggest that a significant portion of the online discourse about vaccines may be generated by malicious actors with a range of hidden agendas," he added. [5 Dangerous Vaccination Myths]

The study, which was published online today (Aug. 23) in the American Journal of Public Health, analyzed thousands of tweets posted to Twitter between July 2014 and September 2017. The researchers included a random sample of tweets, as well as tweets that specifically mentioned vaccines. They then used publicly available data to identify accounts known to belong to bots or trolls, including "Russian troll" accounts that were identified by the U.S. Congress. ("Bots" are accounts that automate content, while "trolls" are people who misrepresent their identity and deliberately promote online arguments.)

The researchers found that so-called "content polluters" — bot accounts that distribute malware and unsolicited commercial content — shared anti-vaccine messages 75 percent more often than average Twitter users did.

These bot accounts seemed to use anti-vaccine messages as "bait" to get followers to click on ads and links to malicious websites, the researchers said. "Ironically, content that promotes exposure to biological viruses may also promote exposure to computer viruses," study co-author Sandra Crouse Quinn, a professor at the University of Maryland's School of Public Health, said in the statement.

Russian trolls and more sophisticated bot accounts were also more likely to tweet about vaccination than average Twitter users, the study found. But these troll accounts posted both pro- and anti-vaccine messages — a tactic that promotes discord.

These tweets often used polarizing language and tied the messages to political themes or concepts such as "freedom," "democracy" and "constitutional rights," the researchers said.

For example, one anti-vaccine tweet using #VaccinateUS, a hashtag linked to Russian troll accounts, read: "#VaccinateUS mandatory #vaccines infringe on constitutionally protected religious freedoms." A pro-vaccine tweet under the same hashtag read: "#VaccinateUS My freedom ends where another person's begins. Then children should be #vaccinated if disease is dangerous for OTHER children."

"These trolls seem to be using vaccination as a wedge issue, promoting discord in American society," study co-author Mark Dredze, a professor of computer science at Johns Hopkins University in Baltimore, said in the statement. "However, by playing both sides, they erode public trust in vaccination, exposing us all to the risk of infectious diseases. Viruses don't respect national boundaries." [Why Vaccine Myths Persist]

More research is needed on how to counter these anti-vaccine messages without unintentionally "feeding" troll and bot accounts content to exploit. One such strategy is "emphasizing that a significant proportion of antivaccination messages are organized 'astroturf' (i.e., not grassroots)," the researchers wrote in their paper. "Astroturfing" refers to masking the sponsors of a message so that it appears to have grassroots support when it does not.

Regarding the anti-vaccine messages spread by content polluters, "public health communications officials may consider emphasizing that the credibility of the source is dubious, and that users exposed to such content may be more likely to encounter malware," the researchers wrote.

Original article on Live Science.

Rachael Rettner
Contributor

Rachael is a Live Science contributor and was a channel editor and senior writer for Live Science from 2010 to 2022. She has a master's degree in journalism from New York University's Science, Health and Environmental Reporting Program. She also holds a B.S. in molecular biology and an M.S. in biology from the University of California, San Diego. Her work has appeared in Scienceline, The Washington Post and Scientific American.