Independent Web Almanac: Publishing for people, not algorithms

The Dead Internet Theory Is Less Interesting Than the Dead Human Web

The Dead Internet Theory is a conspiracy theory. The boring version, anyway. It goes like this: most of what you encounter online is bot-generated, algorithmically faked, and propped up by government or corporate actors to manufacture consensus. Real human activity has been quietly replaced. You're scrolling through a ghost town dressed up as a city.

It's a good story. It's also mostly not the point.

The more unsettling truth doesn't require shadowy coordination. It doesn't need a conspiracy. It just needs the ordinary incentives of platform capitalism, scaled up over twenty years, applied to billions of people who were never asked whether this is what they wanted.

The internet isn't dead. But it's starting to feel like it.


The Bots Are Real, But They're Not the Problem

There are a lot of bots. This much is not disputed. Twitter, before and after Elon Musk made it his, had bot estimates ranging from 5% to nearly 20% of accounts, depending on who was counting and what they were counting for. Facebook routinely removes hundreds of millions of fake accounts per quarter. LinkedIn is, at this point, largely a machine for AI-generated career inspiration firing into a void of other AI-generated career inspiration.

But here's the thing: even if you removed every bot tomorrow, you'd still have the problem. Because the real issue isn't bots pretending to be humans. It's humans behaving like bots.

Not as an insult. As an observation.

The platforms trained us. Engagement metrics (likes, shares, follower counts, view numbers) became the feedback loop that shaped what people posted. Over time, posting stopped being self-expression and started being performance optimization. You learn what works. You do more of that. The content gets cleaner, more predictable, more targeted at response.

This is how you get thousands of accounts that aren't bots, aren't fake, but are functionally indistinguishable from content mills. Real people, real labor, real time. Producing content that serves the algorithm rather than any human impulse to say something.


What Platforms Actually Reward

Every major platform has the same basic problem: the things that drive engagement aren't the things that make engagement worth having.

Outrage travels faster than nuance. Broad appeal beats specificity. Novelty, even shallow novelty, edges out depth. Controversy that fits neatly into an existing culture war framing gets more traction than something genuinely complicated, because complicated things require effort, and effort is the enemy of the scroll.

So that's what gets made.

This isn't cynicism about human nature. It's an observation about systems. If you build a system that rewards a certain behavior, you get more of that behavior. The people producing content aren't stupid or lazy. They're rational. They're responding to the incentives in front of them.

The problem is that the incentives were never designed to produce a healthy information environment. They were designed to keep people on the platform. These are not the same thing, and in many cases they're directly opposed.


Synthetic Engagement Is the Norm Now

There's a version of social media that was supposed to work like word of mouth. You post something real, people who find it useful or funny or interesting share it, it reaches more people. Organic.

The actual version has purchase buttons. You can pay to amplify content. You can buy followers. You can join engagement pods, groups of accounts that agree to like and comment on each other's posts reflexively, tricking the algorithm into thinking content is performing well. You can hire agencies that do this at scale.

The result is that visible metrics have become almost meaningless as signals of genuine interest. A post with 80,000 likes might represent 80,000 people who found it resonant. It might also represent $400 spent on a service and a lot of offshore accounts doing their jobs.

Most people scrolling can't tell the difference. The platforms could tell the difference, and often choose not to, because the inflated numbers keep everyone invested in the numbers.

This is synthetic engagement. Not bots writing articles from scratch, not AI generating fake conversations, just a systematic inflation of the signals we use to decide what's worth paying attention to. It quietly poisons the idea that popular means good, that spread means true, that engagement means connection.


The Content Dilution You Can't Unsee

Search for almost anything practical right now. A recipe. A product review. How to fix something in your house. Medical symptoms. Travel recommendations.

The first several pages of results will, in most cases, be search-engine-optimized content designed to rank, not to inform. It follows templates. It hits keyword densities. It answers the question you typed in a way that technically qualifies as answering the question, while providing as little actual signal as possible, because actual signal requires expertise, and expertise doesn't scale.

This has been getting worse for years, but AI writing tools have sped it up in ways that are hard to overstate. The cost of producing mediocre content at volume has dropped to almost nothing. So the volume has exploded. And because search algorithms are still largely built around signals that mediocre-at-volume content can game, it surfaces.

The people who used to write that content for $15 an article are now competing with software that does it for fractions of a cent. Some of them have switched to prompting the software themselves. The quality floor hasn't moved. The quantity has gone vertical.

What this means practically is that finding something real, a person who actually knows a thing explaining it honestly, has gotten harder. Not impossible. But harder. You need to know where to look. You need to already have enough context to evaluate what you're finding. The internet that was supposed to democratize expertise has produced a situation where navigating it well has itself become a kind of expertise.


The Dead Human Web

Here's the thing the Dead Internet Theory gets backwards: it imagines a world that was once vibrant and human, now replaced by machines. But the replacement didn't happen that way. The human parts didn't disappear. They adapted to conditions that punished being human.

The forums where people argued about obscure things because they cared about those things: many are still technically there, but the people have moved or the culture changed or the SEO spam swamped the signal. The personal blogs where people wrote for nobody in particular because they had something to say: most are gone, orphaned, or replaced by Substacks optimized for subscriber counts. The comment sections where conversation happened: taken over by the same outrage cycle as everything else, or killed off by publishers who found them more trouble than they were worth.

What we're left with is an internet that is technically very much alive. Billions of posts. Infinite scroll. More content than any person could consume in a thousand lifetimes. And somehow, less and less of it feels like it came from a person who wanted to tell you something.

That's not a conspiracy. Nobody planned it. It's what happens when you take human communication, wire it to attention metrics, hand the distribution to a handful of companies whose revenue depends on keeping you looking, and run the whole thing for two decades at planetary scale.

The theory is that we're browsing through a ghost town. The reality is more complicated and less comforting: we built the ghost town ourselves, one engagement-optimized post at a time, and we're still building it, and most of us keep showing up, because where else are you going to go?

That's the part worth thinking about.