How misinformation spreads online
On April 23, 2013, the Associated Press Twitter account posted: "Breaking: Two Explosions in the White House and Barack Obama is injured." The Dow Jones Industrial Average dropped about 150 points in two minutes. Roughly $136 billion in stock market value vanished. The tweet was fake. The account had been hacked by the Syrian Electronic Army. Markets recovered within minutes of the correction.
That was 2013. A single tweet, clearly attributable to a hacked account, with an obvious correction available. It still caused real financial damage in the time it took to spread.
Now imagine that scenario playing out in 2026, where AI can generate convincing fake news articles in seconds, deepfake video is nearly indistinguishable from real footage, and social media algorithms are optimized to amplify content that triggers strong emotional reactions. The infrastructure for misinformation has gotten orders of magnitude more powerful. Our defenses have not kept pace.
The speed gap
The most important study on misinformation spread is still the one published by Vosoughi, Roy, and Aral at MIT in 2018, using data from Twitter spanning 2006 to 2017. They analyzed roughly 126,000 story cascades shared by about 3 million people over that period.
Their headline finding: false news reached 1,500 people about six times faster than true stories did. False stories were 70% more likely to be retweeted than true ones. And this wasn't because of bots. The researchers controlled for bot activity and found that humans were the primary drivers of false news spread. Bots shared true and false news at roughly equal rates. It was people who preferentially amplified the fakes.
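To see how a 70% reshare edge compounds, here's a minimal branching-process toy in Python. To be clear, this is my illustration, not the MIT team's methodology: the follower count, the 5% baseline reshare rate, and the 1.7x multiplier are invented round numbers chosen only to show how small per-share advantages compound across generations.

```python
import random

def cascade_reach(reshare_prob, followers_per_share=20, target=1500, max_steps=50):
    """Toy branching process: each share exposes followers_per_share
    people, each of whom reshares independently with reshare_prob.
    Returns the generation at which total exposure passes `target`,
    or None if the cascade dies out first."""
    exposed = 0
    sharers = 1  # the original poster
    for generation in range(1, max_steps + 1):
        newly_exposed = sharers * followers_per_share
        exposed += newly_exposed
        if exposed >= target:
            return generation
        # Each newly exposed person reshares with probability reshare_prob.
        sharers = sum(1 for _ in range(newly_exposed)
                      if random.random() < reshare_prob)
        if sharers == 0:
            return None  # the cascade died out
    return None

random.seed(42)
# Invented numbers: a 5% baseline reshare rate, with false content
# getting a 70% relative boost (mirroring the 70%-higher retweet odds).
true_runs = [cascade_reach(0.05) for _ in range(1000)]
false_runs = [cascade_reach(0.05 * 1.7) for _ in range(1000)]

for label, runs in (("true-like ", true_runs), ("false-like", false_runs)):
    reached = sorted(r for r in runs if r is not None)
    median = reached[len(reached) // 2] if reached else None
    print(f"{label}: reached 1,500 in {len(reached)}/1000 runs"
          f" (median generations: {median})")
```

With these numbers the "true-like" cascade sits right at the critical threshold (each share produces about one more share) and usually fizzles before reaching 1,500 people, while the "false-like" cascade is supercritical and snowballs there in a handful of generations. That compounding is what the six-times-faster statistic is measuring.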
This finding has been replicated and extended multiple times since then. A 2023 study by Guess and Lyons in Nature found similar patterns on Facebook. A 2025 analysis by the Oxford Internet Institute tracked misinformation spread across TikTok and found that false claims in video format spread even faster than text-based false claims, partly because video feels more trustworthy to viewers and is harder to fact-check quickly.
The speed gap matters because corrections almost never catch up. By the time a fact-check is published, the original false claim has already reached most of the audience it's going to reach. Research by Brendan Nyhan at Dartmouth has shown that corrections reach only about 20-30% of the people who saw the original false claim. And even among those who do see the correction, belief change is partial at best.
Why we believe things that aren't true
The psychological machinery behind misinformation vulnerability is well-documented, and most of it has nothing to do with intelligence.
The illusory truth effect. Repeated exposure to a claim increases our belief in it, regardless of whether the claim is true. This was first demonstrated by Hasher, Goldstein, and Toppino in 1977 and has been replicated dozens of times since. The mechanism is simple: our brains use processing fluency (how easily we can understand something) as a shorthand for truth. Statements we've encountered before are easier to process, so they feel more true. Social media, where the same claims circulate repeatedly through shares and reposts, is an ideal environment for this effect.
Confirmation bias. We evaluate information that aligns with our existing beliefs less critically than information that challenges them. This isn't laziness. It's a deeply embedded cognitive pattern. A 2019 study by Pennycook and Rand found that people spent an average of 8.2 seconds evaluating a news headline that confirmed their political views before sharing it, versus 11.1 seconds for headlines that challenged their views. A three-second gap doesn't sound like much, but it represents measurably less verification work on the headline we already agree with.
Emotional arousal. False claims that trigger strong emotions (anger, fear, disgust, moral outrage) are shared more than neutral or positive ones. The MIT study found this directly: false stories that inspired surprise and disgust were the most viral. There's an evolutionary logic to this. Threats demand fast communication. If someone in your tribe spots a predator, you don't stop to fact-check. You alert everyone immediately. Social media hijacks this same circuit, but the "predators" are often fabricated.
Source confusion. After encountering a piece of information, we tend to forget where we learned it faster than we forget the information itself. A false claim seen in a tweet from an anonymous account gets stored in memory alongside facts from trusted sources. When we later recall the claim, we may not remember that it came from an unreliable source. We just remember that we "know" it.
The continued influence effect. Even after we learn that something is false, the original misinformation continues to influence our reasoning. Corrections weaken the effect but don't eliminate it. This was demonstrated by Johnson and Seifert in 1994 and confirmed repeatedly since. Our brains seem to treat retractions as "updates" rather than "deletions," leaving traces of the original false belief in place.
How platforms amplify the problem
Social media platforms didn't create misinformation. People have been spreading false claims since language existed. But platforms created an amplification system that operates at a scale and speed that no previous communication technology could match.
The chart above illustrates a pattern that plays out almost identically across platforms. The false claim spreads rapidly in the first few hours, often amplified by algorithmic recommendation. By the time a fact-check appears (typically 8-12 hours later for claims that get checked at all), the original has already reached the majority of its eventual audience. The correction spreads slowly, reaching only a fraction of the people who saw the original.
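You can reproduce the shape of that curve with a few lines of logistic growth. Everything here is an assumption chosen to sit inside the ranges above: the growth rates, the audience ceilings, a 10-hour fact-check delay, and a correction ceiling of 250,000 (a quarter of the original's reach, matching the 20-30% correction reach from Nyhan's research).

```python
def cumulative_reach(hours, start_hour, hourly_growth, ceiling, seed=100):
    """Logistic-style cumulative exposure: fast early growth that
    saturates as the post exhausts its potential audience."""
    reach = 0.0
    for h in range(hours):
        if h == start_hour:
            reach = seed  # the post goes live with a handful of views
        if reach == 0.0:
            continue      # not published yet
        # New exposures slow down as reach approaches the ceiling.
        reach += hourly_growth * reach * (1 - reach / ceiling)
    return reach

for h in (6, 12, 24, 48):
    false_claim = cumulative_reach(h, start_hour=0, hourly_growth=0.9,
                                   ceiling=1_000_000)
    correction = cumulative_reach(h, start_hour=10, hourly_growth=0.4,
                                  ceiling=250_000)
    print(f"hour {h:>2}: false claim ~{false_claim:>9,.0f}"
          f"   correction ~{correction:>9,.0f}")
```

By hour 24 the false claim has nearly finished its run while the correction has barely started. Even fully saturated, the correction never reaches more than a quarter of the people the original did.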
Engagement-based algorithms. Every major social platform uses algorithms that prioritize content likely to generate engagement (likes, comments, shares, time spent viewing). False claims tend to be more novel, more surprising, and more emotionally provocative than accurate information. This means the algorithm systematically boosts false content, not because it's designed to spread lies, but because it's designed to maximize engagement, and lies happen to be engaging. A toy scoring function below makes the dynamic concrete.
Network effects and cascading shares. When someone with a large following shares a false claim, it reaches thousands or millions of people instantly. Each of those people can reshare it to their own networks. The speed of this cascading process means that a false claim can reach millions of people before any correction mechanism can respond. Traditional media had gatekeepers (editors, producers) who slowed the spread of unverified claims. Social media removed those gatekeepers.
Context collapse. On social media, a claim that was originally made in a specific context (joking, hypothetical, exaggerated for effect) gets stripped of that context as it's shared and reshared. A screenshot of a headline travels without the article. A quote travels without the surrounding paragraph. A joke gets screenshotted and shared by people who take it literally. This context collapse turns nuanced or qualified statements into absolute claims.
Platform incentives. Social media companies profit from user engagement. Removing viral false content reduces engagement metrics. This creates a structural incentive to respond slowly to misinformation, even when the platform has the ability to identify and flag it. Meta's internal research (leaked in 2021 by Frances Haugen) showed that the company was aware that its algorithms amplified divisive and misleading content and chose not to fully address the problem because doing so would reduce engagement.
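Here's the toy scoring function promised above. Real ranking systems are learned models, not hand-written weighted sums, and every feature and weight below is invented for illustration. The structural point survives the simplification, though: nothing in the objective asks whether a post is true.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float       # model estimate, per impression
    predicted_reshares: float    # model estimate, per impression
    predicted_dwell_secs: float  # expected time spent viewing
    novelty: float               # 0-1: how unlike recently seen content
    outrage_score: float         # 0-1: from an emotion classifier

def engagement_score(p: Post) -> float:
    # Reshares weigh heavily because they drive network spread.
    # Note what is absent: no feature measures accuracy.
    return (1.0 * p.predicted_likes
            + 5.0 * p.predicted_reshares
            + 0.1 * p.predicted_dwell_secs
            + 2.0 * p.novelty
            + 3.0 * p.outrage_score)

feed = [
    Post("Careful, accurate report", 0.04, 0.01, 12.0, 0.2, 0.1),
    Post("Shocking fabricated claim", 0.06, 0.05, 8.0, 0.9, 0.8),
]
for p in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):5.2f}  {p.text}")
```

The fabricated post ranks first not because anything in the system prefers lies, but because novelty and outrage are engagement features, and fabrication can max them out at will.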
Specific examples that show the pattern
The abstract version of this problem is easy to dismiss as someone else's concern. Specific cases make it harder to look away.
The Wayfair trafficking conspiracy (2020). In July 2020, a Reddit user noticed that some storage cabinets on Wayfair.com had unusually high prices and names that coincidentally matched those of missing children. The user posted a theory that Wayfair was using furniture listings as a front for child trafficking. Within 48 hours, the theory had spread to TikTok, Twitter, Facebook, and Instagram. The hashtag #WayfairIsOverParty trended globally. Millions of people believed it.
It wasn't true. Wayfair issued a denial. Fact-checkers at Snopes, Reuters, and the Associated Press debunked it. The National Center for Missing and Exploited Children confirmed that several of the "missing" children had already been found safe. But the debunking reached only a fraction of the audience the original claim had. A YouGov poll in August 2020 found that 25% of Americans thought the Wayfair trafficking claims were "probably" or "definitely" true.
5G and COVID-19 (2020-2021). The claim that 5G towers caused or spread COVID-19 originated in a January 2020 Belgian newspaper article that vaguely linked 5G radiation to health concerns. It was picked up and amplified through conspiracy communities on YouTube and Facebook. By April 2020, at least 77 cell towers had been set on fire in the UK alone. Engineers maintaining critical communications infrastructure received death threats.
The claim had no scientific basis. Radio waves cannot create or spread viruses. But the speed of spread outpaced the ability of scientists and health authorities to respond. By the time authoritative debunkings were widely available, the belief was entrenched in millions of people.
Election misinformation (ongoing). Every major election since 2016 has been accompanied by waves of false claims about voting processes, results, and fraud. The specific claims change, but the pattern is consistent: a false claim about voter fraud or rigged machines gets amplified by partisan accounts, reaches millions of people within hours, and proves resistant to correction even after election officials provide evidence disproving it.
The AI multiplier
Everything I've described so far has been happening with human-generated misinformation. AI changes the equation in ways that make the problem significantly worse.
Before AI, creating convincing fake content required effort. Writing a plausible fake news article took time and skill. Creating a fake image required Photoshop expertise. Producing fake video was essentially impossible without a studio. The effort barrier limited the volume of misinformation any single actor could produce.
AI removed that barrier. A person with no writing skill can now generate a convincing fake news article in seconds using ChatGPT or similar tools. Deepfake video that would have taken a visual effects team weeks to produce can now be generated in minutes with consumer hardware. Voice cloning allows anyone to create audio of public figures saying things they never said.
The volume problem is the real threat. It's not that any individual AI-generated fake is more convincing than a well-crafted human fake. It's that AI allows the production of fakes at industrial scale, for negligible cost. A single person can now generate hundreds of unique fake news articles per day, each targeting different audiences and platforms. Human fact-checkers cannot keep up with this volume.
A report by NewsGuard in January 2026 identified over 1,200 websites publishing primarily AI-generated content disguised as news, up from approximately 50 in early 2023. Most of these sites exist to generate ad revenue from traffic, but the misinformation they produce as a byproduct still enters social media circulation and gets treated as legitimate news by readers who encounter it in their feeds.
What actually works
I'm going to skip the part where I tell you to "think critically" and "check your sources." You've heard that. It's good advice. It's also insufficient. Here are some more specific strategies that research has shown to be effective.
Practical steps that reduce your vulnerability to misinformation
Prebunking over debunking. A 2022 study published in Science Advances by Roozenbeek, van der Linden, and colleagues found that short (90-second) videos explaining common misinformation techniques reduced sharing of false claims by 20-25% for at least one month. The key insight: it's easier to inoculate people against misinformation before they encounter it than to correct beliefs after they've formed. Look for "prebunking" resources from organizations like First Draft and the Inoculation Science Project.
The SIFT method. Mike Caulfield at the University of Washington developed a four-step verification process: Stop (don't share immediately), Investigate the source (who published this?), Find better coverage (what do other sources say?), Trace claims to their origin (where did this claim first appear?). The method works because it interrupts the automatic share impulse and forces a minimum level of verification. A literal rendering as a checklist appears after these strategies.
Lateral reading. Professional fact-checkers don't read a suspicious source deeply. They immediately open new tabs and search for information about the source and the claim elsewhere. This "lateral reading" approach is more effective than "vertical reading" (carefully analyzing the source itself) because it quickly reveals whether a source is credible. A 2019 study by Wineburg and McGrew at Stanford found that lateral readers correctly evaluated sources 95% of the time, compared to 60% for vertical readers.
Emotional check. If a piece of content makes you feel strong emotions (especially anger or outrage), that's precisely when you should slow down. The strongest predictor of sharing false content isn't ignorance or stupidity. It's emotional arousal combined with inattention. Pausing when you feel a strong reaction is the single most effective habit for reducing your own misinformation spread.
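And here, as promised, is SIFT rendered literally as a pre-share checklist. There's nothing algorithmic about it: the questions are Caulfield's four steps, and the only thing the code adds is the forced pause.

```python
SIFT_PROMPTS = [
    ("Stop", "Have you paused instead of sharing on impulse?"),
    ("Investigate the source", "Do you know who published this and their track record?"),
    ("Find better coverage", "Do other independent sources report the same claim?"),
    ("Trace to the original", "Have you found where the claim, quote, or image first appeared?"),
]

def sift_check() -> bool:
    """Walk through the four SIFT prompts; share only if all pass."""
    for step, question in SIFT_PROMPTS:
        if input(f"[{step}] {question} (y/n) ").strip().lower() != "y":
            print(f"Stopped at '{step}'. Verify before sharing.")
            return False
    print("All four checks passed. Share if you still think it's worth it.")
    return True

if __name__ == "__main__":
    sift_check()
```

If a real share button did this, most of us would share less. That's the point.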
Building the muscle
Knowing about misinformation techniques is different from being able to spot them in practice. The gap between knowledge and skill is significant.
There's some evidence that games and interactive exercises help close this gap. The Cambridge University game "Go Viral!" reduced perceived reliability of misinformation by 21% in a randomized trial (Roozenbeek & van der Linden, 2020). "Bad News," another Cambridge game where players create misinformation, improved detection ability by 24% over a control group. These interventions work because they give people repeated, low-stakes practice with real-world misinformation patterns.
The same principle applies to AI-specific literacy. Games like Bluffpedia, where players practice distinguishing real content from AI-generated fakes, build the same kind of pattern recognition. You won't learn to spot every fake by playing a game. But you'll develop a baseline suspicion of polished, plausible-sounding text that serves you well outside the game too.
The scale of the problem
Let me end with a number that has stuck with me since I first saw it.
In 2024, the Reuters Institute Digital News Report found that 56% of people across 46 countries said they were concerned about distinguishing real news from fake news online. That's a majority of the global news-consuming population acknowledging they can't reliably tell what's true.
That number isn't going down. The tools for creating convincing fakes are getting better and cheaper. The platforms that amplify false claims are getting more powerful. The volume of content we're all exposed to is increasing.
What can go down is our individual vulnerability. Every person who understands how misinformation spreads, who pauses before sharing, who checks a claim before repeating it, is reducing the reach of the next false story. That's not a solution to the systemic problem. But it's not nothing.
The immune system for misinformation is distributed. It lives in the decisions of individual humans encountering individual claims and choosing to verify before they amplify. The better each of us gets at that, the harder it becomes for false claims to achieve the reach they need to cause real harm.
It's worth getting better at.