Words got us into the anti-vaccine crisis. Here’s how they’ll get us out.

Researchers are studying ways to keep misinformation from spreading, from exposing ‘data voids’ to minding sentence structure.

By Schuyler Velasco

In December 2020, the first COVID-19 vaccines were administered in the United States and Europe. It was a flickering light at the end of the pandemic tunnel — the first inkling that our world had a path back to normal. Elderly patients and high-risk medical workers received their shots to joyous applause. For the first time in months, millions of people breathed a collective, cautious sigh of relief.

But others — a minority, but a critical one — had questions. That same month, user searches for “vaccine” and “ingredients” spiked on Google, according to a sample analysis of 100 million searches by researchers at Northeastern University and several partner universities. And the results were teeming with bad information: articles from domains like express.co.uk and childrenshealthdefense.org with headlines reading, “Covid vaccine ingredients: What is in Pfizer, Moderna mRNA vaccines?” and “Leading COVID Vaccine Candidates Plagued by Safety Concerns.”

The stories themselves don’t look all that different from the ones that appear in mainstream news outlets. But express.co.uk is the online home of a notorious British tabloid; childrenshealthdefense.org has been a prolific source of vaccine misinformation since long before COVID. In the weeks following those first inoculations, around 7% of the results for searches that included the words “vaccine” and “ingredient” yielded articles from sites like those — a much higher proportion than either “vaccine” or “ingredient” would yield on its own. Seven percent may sound small, but in terms of the vast volume of internet searches, it can translate to reaching thousands of people. It was the highest rate of search results that could be classified as “misinformation” among more than 90 pandemic-related queries the researchers tracked.

Nearly a year on, the top search results for that query have shifted to the CDC, the FDA, and reputable news outlets like Reuters. But in those early weeks, there was just enough of a dearth of good information for some bad to squeeze in — a phenomenon known in computer science as a “data void.” Those two specific words, “vaccine” and “ingredient,” provided one gateway.

“There emerged a bit of a rabbit hole in the Google world that those terms provided the entry point for,” says David Lazer, a political science and computer science professor at Northeastern University who headed up the project. “The way search works is if you come up with certain phrases that haven’t been put together before, they’re like magical keys that take you to crazy places.”
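The measurement behind that 7% figure lends itself to a simple illustration. The sketch below is hypothetical — it is not the researchers’ actual pipeline, and the flagged-domain list and URLs are invented for the example — but it shows, in Python, how one might compute the share of a query’s search results that come from domains classified as misinformation sources.

    # Hypothetical sketch: what share of a query's search results come from
    # domains flagged as misinformation sources? Not the study's actual code.
    from urllib.parse import urlparse

    # Invented flag list; a real study would use a large, curated set.
    FLAGGED_DOMAINS = {"express.co.uk", "childrenshealthdefense.org"}

    def misinformation_share(result_urls):
        """Return the fraction of result URLs whose domain is flagged."""
        if not result_urls:
            return 0.0

        def domain(url):
            # Normalize the hostname and strip a leading "www." if present.
            host = urlparse(url).netloc.lower()
            return host[4:] if host.startswith("www.") else host

        flagged = sum(1 for url in result_urls if domain(url) in FLAGGED_DOMAINS)
        return flagged / len(result_urls)

    # Invented result URLs for a query like "vaccine ingredients":
    results = [
        "https://www.cdc.gov/example-page",
        "https://childrenshealthdefense.org/example-article",  # flagged
        "https://www.reuters.com/example-fact-check",
    ]
    print(f"{misinformation_share(results):.0%} of results flagged")

On this toy input the script reports that one of three results — about 33% — comes from a flagged domain; the study applied the same basic idea across more than 90 pandemic-related queries.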

As the U.S. continues to struggle with vaccine hesitancy — fueled by a misinformation machine so powerful that the Surgeon General’s Office declared it a risk to public health — researchers have increasingly focused on the role language has played in the spread of panic and mistrust. It’s an area of study that drills down into the power of individual words, and draws expertise from many fields, from linguistics to psychology to computer engineering to media studies.

Taken together, the research suggests a compelling framework for looking at a problem that has proven so intractable. It could be that an original sin of the pandemic response — perpetrated by well-meaning government officials, public health agencies, and media giants — was a failure to understand the power of words.

But if words helped get us into this mess, they might be able to get us out of it, too.


Words have always been precision instruments for spreading ideas — from urban legends to public health campaigns, outright propaganda to catchy political turns of phrase. The sticky nature of a term like the “Green New Deal” highlights the evocative power of words, as does the success of the term “death tax,” which drives voter interest far more than the anodyne name “estate tax.” The internet, of course, made words more powerful than ever — an entire field of work now focuses on the art of using specific words to drive search engine algorithms. Keywords power the ads delivered on Google and Facebook. They signal what’s trending on Twitter. In a perpetual news cycle, they serve as shorthand for what’s important.


“If you come up with certain phrases that haven’t been put together before, they’re like magical keys that take you to crazy places.”

David Lazer, political science and computer science professor at Northeastern University

The pandemic represented a high-stakes test of the power of words, and, for researchers, a window on how language could be exploited and abused. “Words are a way for us of tracking what’s happening,” says Lazer, a leader of the COVID States Project, a multi-university effort by academics from various disciplines — computer science, public health, communications, and neuroscience, among others — to follow and report links between social behaviors and virus transmission in the United States.

Lazer can name some of the early word-based mistakes that might have helped misinformation about the virus and vaccines take hold. For instance, he thinks naming the COVID vaccine development effort “Operation Warp Speed” was an unforced error. “I probably could have come up with a worse name, like ‘Operation Death Drug’ or something, but [it implied] they were hurrying up and throwing caution to the wind,” he says. “I understand the sentiment, trying to do this as quickly as possible, but it was terrible.”

Over the summer, the term “breakthrough infections” came under scrutiny, as the medical community worked to dispel the false notion that, because vaccinated people could contract COVID, the vaccine must not have been working. “Maybe the word ‘breakthrough’ is part of the problem,” CNN host Brian Stelter posited to a panel of medical experts on the show Reliable Sources in August. “‘Breakthrough’ implies rare; it implies that something went wrong, and that’s not really the premise of the vaccine.”

The challenge is that once a problematic term is coined, it can quickly take hold. In the 2002 book Predicting New Words, linguist Allan Metcalf developed FUDGE, a mnemonic for the elements that help novel terms become a permanent part of the language. It’s short for Frequency, Unobtrusiveness, Diversity, Generating new forms and meanings, and Endurance, and many of its concepts translate easily to internet culture. “Frequency” takes the shape of trending topics and hashtags. “Generating new forms and meanings” is a good encapsulation of the life cycle of a meme — like an out-of-context screen grab from a TV show or movie that becomes a shorthand commentary on current events.

And the more commonplace a term or idea becomes, the more plausible it seems. Researchers say that when information is repeated over and over, people are more likely to take it at face value. “It’s called the ‘illusory truth effect,’” says Briony Swire-Thompson, a cognitive psychologist at Northeastern University who studies why people believe misinformation.

Other qualities — “whether the information is attention-grabbing, whether it’s because it aligns really well with someone’s preexisting views, whether it’s particularly memorable” — also play a role in making pervasive health myths resonate, Swire-Thompson says. She and other researchers are working on pinpointing which of those elements is doing the heavy lifting, or whether it’s a combination of the three. “We’re not exactly sure where the stickiness is,” she says. 


For more than a year, the COVID States Project has studied the general public’s opinions on a wide range of topics related to the pandemic — and along the way, uncovered ways that words have been deployed to spread misinformation. In one study, researchers tracked social media news consumption and attitudes toward COVID-19 vaccines between December 2020 and March 2021, and compiled a list of the most-clicked articles from that time frame that contained misinformation. The top one, it found, was a false story posted on the vaccine-skeptical website childrenshealthdefense.org linking baseball legend Hank Aaron’s death in January (from natural causes) to his COVID vaccination weeks before. The headline was packed with words destined to trigger search engine algorithms: “Home Run King Hank Aaron Dies of ‘Undisclosed Causes’ 18 Days After Receiving Moderna Vaccine.”

It was a combination of a timely news event and a group of keywords that hadn’t been put together before, and it briefly spread like wildfire. The misconception was widespread enough that Anthony Fauci, the president’s chief medical advisor, addressed and debunked it in media appearances.

Researchers found that another pervasive myth — that the COVID-19 vaccine causes infertility — was primed to take off because it connected to a persistent falsehood about vaccines more generally. “There was a well-written narrative for that argument that gave it particular potency,” Lazer says.

And in the case of the keyword searches for “vaccine” and “ingredient,” Lazer says, purveyors of misinformation took advantage of an information vacuum. Because there was no serious concern within the scientific community about vaccine ingredients, little legitimate ink had been spilled on the subject. So people searching for those terms were more likely to stumble across the crazy stuff.

Well before the pandemic, false health information spread across the internet this way, exploiting a lack of valid news about a subject that was either fresh or completely obscure. “There are phrases and ideas that just may not have been explored very much because nobody thinks they’re credible,” Lazer explains. “So when you go into Google, and you search something like ‘apricot seeds and cancer,’ for example, no one’s studying that who’s legit, because nobody really thinks apricot seeds will cure cancer. So there is content out there; it’s just bad content.”

Elizabeth Glowacki, a Northeastern University professor who studies public health and communication, says the overall uncertainty of the COVID era, combined with the internet’s reach, created an environment ripe for turning messages on their heads, misinterpreting innocent turns of phrase, and sowing conspiracy theories.

“In general, humans have a really tough time with uncertainty. Anything that’s new can be scary,” she says. “So even if it’s a new vaccine, a new treatment, there’s a lot of fear. One of the ways that people cope with fear is creating their own theories, or latching onto theories that might already be out there.”

That means information that took hold online spread quickly to the real world, with dire consequences. When it comes to reasons not to get vaccinated, “we’ve heard everything,” says Roseann Bongiovanni, the executive director of GreenRoots, an environmental justice and community health organization in Chelsea, Massachusetts — a city with one of the highest COVID infection rates in the state. Conspiracy theories have spread through Chelsea, she says, primarily on the messaging platform WhatsApp, shared Instagram videos, and fringe-right Spanish-language media outlets.

A common concern was, “I’m afraid it was created so quickly. How do we know that the science is truly behind it?” Bongiovanni says. But she also heard “the crazy controversies. There’s concern around [vaccines negatively impacting] virility. There are very religious folks who say, ‘I’m not going to get the shot because it’s made with tissue from aborted fetuses.’ It’s, ‘they’re injecting me with a microchip and they’re going to follow me.’”

That last myth, she says, has particular potency in Chelsea, which has a high number of undocumented residents. Terms like “tracking,” “government surveillance,” and “database” are scary in a tangible sense. And latching onto a terrifying word can mean the difference between getting a vaccine and rejecting it.


The prime way to combat bad words, it seems, is with good ones. Scientists across disciplines increasingly agree that the more frequently we deploy language to correct misinformation, the better off we’ll be.

“Social media companies, other kinds of gatekeepers, mainstream media, shouldn’t hesitate to do very assertive fact-checking when it’s justified,” says John Wihbey, a professor of journalism at Northeastern University who has studied the evolution of increasingly assertive content labeling, from gentle warnings that an article is “disputed by third parties” to graying out problematic content before it can even be read.


“[Social media consumers] are far more willing to update their beliefs than a lot of people realize.”

Briony Swire-Thompson, cognitive psychologist at Northeastern University

Researchers once worried that flooding social media sites with corrective feedback — labels, warnings, alerts, suggestions — would create a “backfire effect” and make the corrections less effective, or prompt people to double down on incorrect beliefs. But Swire-Thompson and others have conducted studies that largely upend this notion. One, done in collaboration with Lazer, Wihbey, and researchers from the Boston Attention and Learning Lab, found that backfire effects all but disappeared when the corrective information was reliable enough.

Indeed, Swire-Thompson believes social media companies should be more forceful about their correctives — not just erasing problematic language, but acknowledging why it’s wrong and pointing people directly to what’s true.

“People are far more willing to update their beliefs than a lot of people realize,” she says.

Glowacki believes it’s also important to improve public health messaging from the outset, down to the level of sentence structure and active verbs. Personifying the danger — and making it the subject of a sentence rather than a passive object — makes it more real, she says.

“How do you assign agency to threats? It’s really tricky when you have something like a virus, which you can’t really see,” she says. “People pay more attention when you talk about the non-human threat and what it’s doing to us — COVID is preying on people, or COVID is targeting people.”

Casting any looming crisis as an immediate and direct threat also helps to capture people’s attention, Glowacki says. It’s why acute natural disasters like hurricanes generate so much interest, but it’s harder to drive collective action on a long-term problem. “One of the issues with climate change is it’s harder to care about threats down the road,” she says. “That probably comes into play here a little bit. COVID could get you, but it’s not like it’s bearing down on your house.”

And sometimes, using words well might mean not using many words at all. “Visual communication needs to be explored further,” Glowacki says. “So much of health messaging traditionally has been text-based. That’s not always helpful for people with low literacy levels, who might not be as verbally alert. [We should be] thinking about more creative ways to communicate.”

Creativity is especially important in a place where traditional messaging can get muddled by language barriers and cultural gaps. In Chelsea, Bongiovanni acknowledges the power of using the real names of mutual friends and acquaintances to make the threat feel immediate — “‘Joan got the vaccine’ or ‘Remember when Joan died of COVID,’” she says — and counteract the faceless sources online.

“I have trust with my neighbors in Chelsea. And so I’m much more likely to buy into it than I would if some outsider came into the city,” she says. Getting through to people, she says, involves “making a personal connection and saying, ‘Trust me, I got the vaccine, and look, nothing happened to me. There’s no microchip in my body; nobody’s tracking me.’”

By the end of the summer, Chelsea’s vaccination rate hovered around 72%, outpacing Massachusetts’ already high rate. Public health officials have credited a multipronged effort that included mobile clinics, aggressive outreach programs, and a conscious effort to use language wisely: recruiting Spanish-speaking clinic workers, offering verbal as well as written information, and building trust through personal connections. 

That success is real-world evidence of what Swire-Thompson, Lazer, and others have found in their research: Fighting the spread of misinformation isn’t necessarily a losing battle. It’s a matter of how you deploy the words.

Schuyler Velasco is Experience’s senior editor.
