Critical Media Skills: The Dead Internet Theory
- Soni Albright

A media literacy case study in evaluating ideas that spread faster than evidence

You probably know the feeling. You scroll through social media, and something feels off. Comment sections that seem to exist for no one. Viral trends that evaporate without explanation. A Facebook post about a cat named Pound Cake on a weight-loss journey, with weeks of genuine emotional investment from real people, that turned out to be entirely AI-generated.
There is a theory that tries to name this feeling. It is called the Dead Internet Theory. Before we decide what to make of it, let us do something the internet rarely encourages: slow down and ask how we would actually evaluate it.
🛑 Media Literacy Moment #1: Notice the feeling.
You just read an opening designed to produce recognition and unease. Ask yourself: am I being asked to feel something, or to evaluate something? Both can happen in the same piece of writing. Noticing which one is happening is the first skill.
What Is the Dead Internet Theory, and Where Did It Come From?
The Dead Internet Theory hypothesizes that the internet has become dominated by bots, AI-generated content, and algorithmic manipulation, and that authentic human activity is increasingly buried under artificial noise.
The idea, however, did not start in a research lab or a journalism investigation; it originated in fringe online forums, specifically 4chan and similar spaces, in the late 2010s. One widely cited post, by a user named “IlluminatiPirate,” was viewed over 362,000 times and described the internet as “empty and devoid of people.” The theory then migrated to Reddit paranormal communities, Joe Rogan fan forums, and tech enthusiast spaces before eventually landing in mainstream outlets like Forbes, Time Magazine, and The Atlantic in the last five years.
🛑 Media Literacy Moment #2: Trace the origin.
A media-literate reader asks: where did this idea start, and who started it? The spaces DIT originated in, including 4chan and paranormal subreddits, are built around irony, provocation, and conspiratorial thinking. That does not make the idea wrong, but the origin story is context. An idea can travel far enough that most people sharing it have no idea where it came from. That laundering is worth noticing.
Who Is Amplifying It, and Why Does That Matter?
By late 2025, the theory had reached Sam Altman, CEO of OpenAI, and Alexis Ohanian, co-founder of Reddit, both of whom posted warnings about it on X. Ohanian said he has “long subscribed to the dead internet theory,” and Altman wrote that he “never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now.”
Here is where a careful, media-literate reader pauses.
Altman runs the company whose technology makes it easier to build LLM-run accounts. Ohanian co-founded a platform with a documented bot problem. Neither is a disinterested observer. The Forbes article that quoted both of them at length did not interrogate these conflicts of interest.
High-profile people repeating an idea tells you something about the idea’s reach. It tells you nothing about its accuracy.
🛑 Media Literacy Moment #3: Check the amplifiers.
Ask: who is saying this, and what do they stand to gain or lose from it being believed? Credibility by association is not the same as credibility by evidence. This applies equally to tech CEOs and academic papers.
What Does the Evidence Actually Show?
This is where the investigation gets interesting and uncomfortable.
The bot traffic numbers are real, but read the fine print. Imperva’s 2025 Bad Bot Report found that automated traffic surpassed human activity for the first time in a decade in 2024, accounting for 51% of all web traffic. This is a genuine data point. It is also produced by a company that sells bot-detection software. Their business depends on bot traffic being perceived as a serious and growing threat. Their methodology is not independently peer-reviewed. Separately, Distil Networks’ 2019 Bad Bot Report found that bots drove about 38% of internet traffic in 2018, and that human traffic was actually increasing that year. The numbers fluctuate depending on who is measuring and how. That context rarely makes it into the viral posts citing these figures.
FYI: I am intentionally not linking to sources that circulate these figures without a clear methodology or independent verification.
The academic papers circulating on this topic have significant problems. One frequently cited paper, published in the Asian Journal of Research in Computer Science, contains no original data and appears in a journal flagged on predatory-publishing watchlists. A second paper, from researchers at NED University of Engineering and Technology, comes from a more credible institutional context. However, its own conclusion states plainly that “the Dead Internet Theory lacks substantial evidence to support its claim.” That sentence is rarely included when the paper is cited online.
The most instructive example comes from a widely shared Forbes article. The article refers to “a team of researchers from Swiss universities” who published a paper suggesting that the Dead Internet Theory can now be “observed first-hand.” That phrasing signals to readers that this is a research-based finding.
I located the Swiss paper. It is two pages long and not a research study. It is an opinion column published in the “Curmudgeon Corner” section of the journal AI & Society, which the journal itself describes as “a short opinionated column.”
It contains no data, no methodology, and no original research.
The description “researchers from Swiss universities” is technically accurate—the author holds those affiliations. But the article omits the critical context that this is an opinion piece, not an empirical study.
Most readers would reasonably interpret the original description as research-backed evidence. Without checking the source, a reader has no way to tell the difference.
🛑 Media Literacy Moment #4: Distinguish claim from evidence, and description from reality.
The word “researchers” does not mean “research.” An institutional affiliation does not make an opinion column a study. Publication in a Springer Nature journal does not mean every piece in it carries the same evidentiary weight. Ask: what kind of document is this, actually? Read the fine print before you trust the headline.
Can This Theory Even Be Proven Wrong?
In its strong form, the Dead Internet Theory has a structural problem: it is very difficult to falsify.
If most internet activity appears human, that can be interpreted as evidence of increasingly sophisticated bots. If it appears artificial, that directly confirms the theory. There is no clear outcome that disproves it.
In media literacy and critical thinking, this is called an unfalsifiable or self-sealing claim: one that can absorb any evidence without being meaningfully tested.
This is a feature of conspiratorial thinking generally, not of DIT specifically. And unfalsifiability does not mean a claim is wrong. It means we should be especially careful about how much confidence we assign to it.
Caroline Busta, who studies the impact of technology on culture, described elements of DIT as “paranoid fantasy” in 2021, while also acknowledging that the broader feeling that the internet feels emptier or less human is real. That distinction is an important one as we examine this theory. A claim can feel true while remaining empirically weak.
🛑 Media Literacy Moment #5: Ask what would have to be true for this to be wrong.
If there is no possible answer to that question, treat the claim with extra caution. This is not cynicism. It is a basic standard of reasoning. Apply it to things you agree with as much as to things you don’t.
The Irony Worth Sitting With
The Dead Internet Theory describes how misinformation spreads: through bot amplification, weak sourcing, and credibility laundering.
Ironically, it has also spread through those same mechanisms.
A fringe forum post became a widely circulated idea. Researchers who were unnamed or only loosely described were cited in mainstream coverage. An opinion column was presented in ways that suggested research-based findings. High-profile tech figures with their own incentives amplified the idea to large audiences. In many cases, the original sources were not closely examined.
This is not, by itself, a reason to dismiss the underlying concern.
But it is a reminder that the way a claim spreads should be evaluated alongside the claim itself.
🛑 Media Literacy Moment #6: Pay attention to how a claim spreads.
When a theory about how bad ideas spread travels through those same mechanisms, that is worth paying attention to.
It does not mean the theory is wrong. It means that even ideas we find intuitively compelling should be evaluated with the same level of scrutiny as ideas we find suspicious.
A Practical Toolkit
The goal of media literacy is not to become paralyzed by skepticism or cynicism. It is to develop calibrated confidence and the ability to say, “this is likely partially true, poorly documented, and worth monitoring,” rather than forcing a binary conclusion.
A few tools that consistently work:
Use lateral reading. Do not evaluate a source by reading it more carefully. Open new tabs and see what independent sources say about it. Research from the Stanford History Education Group shows that professional fact-checkers rely on this approach and outperform academics and students who stay on a single page. Even brief instruction (under six hours) significantly improves students’ ability to identify unreliable sources.
Trace the amplification chain. Find the earliest available version of a claim and evaluate that source directly. Claims often become more credible as they are repeated, even when the underlying evidence has not changed.
Name conflicts of interest explicitly. Identify who produced the information and what incentives may shape it. This is not a reason to dismiss a source, but it is necessary context for interpreting its claims.
Sit with uncertainty. “I do not know enough to conclude” is a valid position. Research on misinformation shows that slowing down and withholding judgment reduces susceptibility to false or misleading claims.
🛑 Media Literacy Moment #7: Apply these tools here, right now.
Go find the Swiss university paper that Forbes referenced in its Dead Internet Theory coverage. See if you can locate it. Then ask yourself what it means if you cannot.
What This Means for Young People
Gen Z and Gen Alpha have never known an internet without this odd authenticity problem. They are also the primary users of systems built by sophisticated teams optimizing for engagement. They are active experimenters, users, and creators in the digital space - digital natives, even if that term no longer means what it was originally intended to mean.
These young people also know the game. They know bots are real, that fabrication happens constantly, and that they themselves participate in it at times. That awareness should be an asset, but too often, it becomes the opposite - a slide into mistrust not just of bad actors, but of institutions, media, and the possibility of having a shared reality.
The bigger risk isn't any one false claim. It's epistemic resignation - concluding that nothing online can be verified and therefore nothing is worth trying to verify. That outcome is more consequential than any individual piece of misinformation.
The good news is that media literacy education offers a solution, and these skills are teachable. Research from the Stanford History Education Group shows that even a few hours of targeted media literacy instruction can significantly improve students’ ability to evaluate sources. The tools exist; the challenge is making media literacy education a consistent part of overall instruction.
🛑 Media Literacy Moment #8: What are you going to do differently?
“Do you believe the Dead Internet Theory?” is the wrong question. The right question is: what is one thing you will check next time before you share or believe something that feels true?
How Are Your Media Literacy Skills?
Something feels off about what we are experiencing on the internet right now, and that instinct is valid. Emerging AI technology has made fabrication cheaper and faster than ever, and it has us second-guessing what we see online, rightfully so. When we encounter claims like DIT, that sense of “that could actually be true” reflects real changes in how information is produced, distributed, and consumed. After all, conspiracy theories often spread because they contain a kernel of truth.
The question isn't whether that instinct is right or wrong. The better question is: what do we do with it?
One option is to resolve the discomfort quickly by accepting a theory that explains everything. Another is to treat that discomfort as a starting point for investigation — asking where the evidence comes from, how claims are being framed, how the claim is spreading, and what would count as meaningful proof.
The internet may or may not be dead. Either way, the tools for evaluating it are far more useful than that answer ever will be.

Soni Albright is a teacher, parent educator, curriculum specialist, researcher, and writer for Cyber Civics with nearly 24 years of experience in education. She has taught the Cyber Civics curriculum for 14 years and currently works directly with students while also supporting families and educators. Her experience spans a wide range of school settings—including Waldorf, Montessori, public, charter, and homeschool co-ops. Soni regularly leads professional development workshops and is passionate about helping schools build thoughtful, age-appropriate digital literacy programs. Please visit: https://www.cybercivics.com/parent-presentations
