Concept
Illusory Truth Effect
A cognitive phenomenon in which repeated exposure to a statement increases its perceived truthfulness, independent of its actual accuracy. First documented by Hasher, Goldstein, and Toppino in 1977, the effect operates because repetition increases processing fluency — the ease with which the brain handles familiar information — and fluency is misread as a signal of validity. On digital platforms, algorithmic amplification functions as an industrial-scale repetition engine: reshares, reposts, and recommendation systems expose users to the same claims across multiple encounters and sources, systematically inflating perceived credibility. The effect is not limited to the credulous; it operates on people who have been told a statement is false, and on people who initially knew it was false.
The illusory truth effect is the finding that repeated exposure to a statement increases its perceived truthfulness. It was first documented in a 1977 study by Lynn Hasher, David Goldstein, and Thomas Toppino, who showed that participants rated statements as more likely to be true if they had encountered them in a previous session — regardless of whether the statements were actually true. The effect has since been replicated extensively across different populations, statement types, and conditions.
The mechanism is processing fluency. When the brain encounters familiar information, it handles it more easily — the cognitive effort required is lower. This ease of processing is experienced as a vague sense of correctness or plausibility. The brain, in effect, uses familiarity as a proxy for truth, because in most environments, repeated information has survived some form of social validation. The heuristic is often useful. In an environment saturated with algorithmically amplified misinformation, it is systematically exploited.
Digital platforms do not create the illusory truth effect, but they dramatically change the conditions under which it operates. A claim shared on social media does not reach each person once; it is encountered across feeds, reshares, recommended content, news aggregators, and forwarded messages. The platform's recommendation logic optimises for engagement, and emotionally charged or surprising content — which tends to include false or misleading claims — generates more engagement than accurate, mundane information. The result is a system that selects for repetition of content that is likely to be wrong.
What makes the illusory truth effect particularly significant is its resistance to correction. Research by Gordon Pennycook and colleagues has shown that the effect persists even when participants are informed that a statement is false before repeated exposure — subsequent repetition still increases rated credibility. It also affects people who demonstrably know the correct information. A person who knows a fact can still rate a contradicting statement as more plausible after repeated exposure. This is not a failure of education or intelligence; it is an automatic cognitive process that runs beneath deliberate reasoning.
The social dimension compounds the effect. When a person sees a claim shared by multiple people in their network — even if those shares are downstream of a single original post — they receive the social signal that many people have endorsed the content. This is a second, distinct route to inflated credibility: social proof. Algorithmic platforms routinely produce this pattern without any underlying consensus existing.
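To make the distinction concrete, here is a minimal sketch (all share IDs and post names are hypothetical) separating the number of exposures a user sees from the number of independent origins behind them. Five observed reshares that all descend from one root post deliver five repetitions but only one genuine source:

```python
def independent_sources(exposures, origin_of):
    """Count the distinct root posts behind a list of observed shares.

    exposures: share IDs the user encountered in their feed.
    origin_of: maps each share ID to the root post it descends from.
    """
    return len({origin_of[s] for s in exposures})

# Hypothetical reshare chain: five shares, all tracing back to one root post.
origin_of = {
    "share_a": "post_1",
    "share_b": "post_1",
    "share_c": "post_1",
    "share_d": "post_1",
    "share_e": "post_1",
}
exposures = list(origin_of)

print(len(exposures))                             # exposures seen: 5
print(independent_sources(exposures, origin_of))  # independent origins: 1
```

The gap between those two numbers is the pattern described above: the repetition signal (five encounters) and the social-proof signal (five apparent endorsers) are both inflated, while the underlying consensus is a single post.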
The implication for one's information diet is structural. Diversifying news outlets is insufficient if they all draw from the same virally amplified pool. The effective intervention is to encounter new claims in contexts with lower repetition rates — newsletters, books, long-form journalism — and to cultivate the habit of noticing when a strong intuition of truthfulness is explained entirely by prior exposure rather than by evidence.
Key Figures
Lynn Hasher
Cognitive psychologist, co-author of the original 1977 illusory truth study
Gordon Pennycook
Decision researcher, work on misinformation and cognitive engagement
Daniel Kahneman
Psychologist, processing fluency and System 1 reasoning
Further Reading