
Human Downgrading

Tristan Harris's term for the systemic erosion of human capacities — sustained attention, mental health, the quality of relationships, critical thinking, shared epistemic reality, and democratic function — caused by engagement-maximising technology. The argument is not that technology is incidentally harmful but that optimising for engagement necessarily selects against the qualities that make people capable of self-governance and flourishing. A recommendation algorithm that maximises time-on-platform will reliably discover that outrage, anxiety, and tribalism outperform nuance, and will amplify the former at scale. Human downgrading is what happens to the population exposed to that dynamic for a decade.

Human downgrading is the term developed by Tristan Harris — former Google design ethicist and co-founder of the Center for Humane Technology — to describe the aggregate effect of engagement-maximising technology on human capacities. The term reached its widest audience through the 2020 Netflix documentary The Social Dilemma, in which Harris featured as the central interviewee, and through the Center for Humane Technology's accompanying educational materials. It represents an attempt to name not a specific harm but a directional pressure: systems optimised for engagement systematically select against the qualities that define human flourishing.

Harris's background was in persuasive technology — the academic field, pioneered at Stanford by B.J. Fogg, that studies how digital systems can be designed to change behaviour. After delivering an internal presentation at Google in 2013 warning that the industry was not taking seriously its responsibility for what it was doing to human minds, he became one of the most prominent critics of the systems he had been trained to build. His argument is grounded in mechanism rather than moral accusation: the problem is not that technology companies are malicious but that the metric they optimise for — engagement, measured in time on platform and actions taken — is misaligned with human wellbeing in specific, predictable ways.

The downgrading Harris describes operates across several dimensions. At the individual level, persistent exposure to engagement-optimised feeds degrades sustained attention, increasing preference for shorter, more stimulating content and reducing tolerance for the kind of effortful, slow cognition that produces complex understanding. Research on adolescent mental health — particularly the work of Jean Twenge and Jonathan Haidt on the correlation between smartphone adoption and rising rates of anxiety, depression, and self-harm among teenage girls — suggests that the social comparison and social exclusion dynamics amplified by social media produce measurable psychological harm.

At the social level, recommendation systems trained on engagement data consistently discover that outrage, moral indignation, and in-group signalling generate more clicks and shares than accurate, nuanced reporting. Amplifying this content at scale does not merely reflect existing polarisation — it accelerates and deepens it. The epistemic commons — a shared set of facts, institutions, and interpretive frameworks within which political disagreement can take place — erodes when the information environment is shaped by what inflames rather than what informs.
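The selection dynamic described above can be sketched as a toy simulation. Everything in it is a labelled assumption for illustration — the outrage scores, the linear engagement model, and the noise level are invented, not drawn from any real platform's data. The point is structural: if predicted engagement correlates with outrage at all, a ranker that sorts purely by engagement will concentrate outrage at the top of the feed.

```python
import random

random.seed(0)

# Hypothetical content pool: each post gets an "outrage" score in [0, 1].
posts = [{"id": i, "outrage": random.random()} for i in range(1000)]

# Assumed engagement model: engagement rises with outrage, plus noise.
# The coefficients are illustrative, not empirical.
for p in posts:
    p["predicted_engagement"] = 0.3 + 0.6 * p["outrage"] + random.gauss(0, 0.1)

# An engagement-maximising ranker simply sorts by predicted engagement.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

top = feed[:50]  # the slice of the pool users actually see
mean_top = sum(p["outrage"] for p in top) / len(top)
mean_all = sum(p["outrage"] for p in posts) / len(posts)

print(f"mean outrage, whole pool:  {mean_all:.2f}")
print(f"mean outrage, top of feed: {mean_top:.2f}")
```

No post was made more outrageous and no editor chose anything; ranking alone shifts the visible distribution sharply toward the inflammatory end of the pool, which is the amplification-without-intent mechanism the paragraph describes.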

At the democratic level, Harris argues that self-governance requires citizens capable of sustained attention, tolerance for complexity, and a minimum of shared reality. Human downgrading systematically undermines all three. The concern is not hypothetical: the same engagement dynamics that make social media compelling also make it a highly efficient vector for coordinated disinformation, because false outrage-generating content spreads faster than true moderate content by the platform's own algorithmic logic.

The practical implication of the human downgrading frame is that the problem cannot be solved by individual resilience or digital literacy alone. The scale of the pressure — billions of people, billions of dollars of optimisation, thousands of the world's most talented engineers, working continuously — cannot be matched by individual effort. Structural responses — algorithmic transparency, engagement metric reform, design standards, liability — are the appropriate level of intervention, alongside the environmental changes individuals can make to reduce their own exposure.

Key Figures

Tristan Harris

Former Google design ethicist, co-founder of the Center for Humane Technology

Jonathan Haidt

Social psychologist, research on social media and adolescent mental health

B.J. Fogg

Founder of the Stanford Persuasive Technology Lab, whose methods Harris studied and later turned to critique
