The End of the Internet
4th June 2025
Yes, that does indeed sound like I am one of those guys on the street saying "The End is Nigh! Save yourselves! We're all gonna die!". Which may be sort of right, in fairness.
In one of my previous writings about Information as a Landscape, I referenced the Dead Internet Theory, saying that the "next stage of AI is that it will flood the Internet like a zombie disease". But I wanted to write a bit more about it, because I have been increasingly thinking about it.
The Dead Internet Theory used to be considered a fringe and weird Internet conspiracy theory. It essentially claims that most of the user-accessible Internet is fake: mostly bots, there to guide and manipulate perception and opinion by swarming human spaces, with most engagement falsified to present the illusion of real people so that you believe it.
Now, this may sound crazy, but in the last 3 or 4 years… it doesn't exactly feel that far of a stretch anymore, right?
If you've used YouTube or Instagram or X lately, it is basically guaranteed that every other decently-sized comment section is going to have at least one LLM-ified paragraph. Sometimes this is easy to tell, like when someone goes "Sure! Here's a recipe for a cupcake!" or when someone replies with something out of context and another person goes "As an AI language model, I can't…"
This is evidently a case of someone being funny, or just not trying hard enough to fake the comments with their AI account. That said, fake bot comments have been around for probably as long as comments have existed. But what if it was 1) convincingly done and 2) extended even further with an actually good AI?
About a year ago I came across an account on X called @truth_terminal, which seemed like a decent poaster; it had some interesting takes about AI and seemed funny because of its absurd humour and charm. It had lots of mutual follows, so I went along and followed it too.
But, naturally, I wouldn't be mentioning this if there wasn't about to be a catch.
Soon enough, luckily only a few days later, I found out that this account wasn't actually a person, it was an AI account run by a guy called Andy Ayrey, who has done a lot of other interesting AI experiments like the Infinite Backrooms project (an archive of a ton of Claude conversations where two instances were linked to each other and ended up chatting about a lot of weird things, I definitely recommend a peek).
It seems that the guy had trained Truth Terminal in ~2024 on the posts of many of the other accounts in the same network bubble I was in, and had given it the ability to conjure its own posts based on them, manually reviewing them before posting under its identity. Naturally, it got suggested in my timeline.
A few months later I came across a longer-form story by TechCrunch about it, after it got involved in launching its own memecoin and attracted a lot of investment money (~$50k in BTC?) from Marc Andreessen, a pretty famous guy. It also confirmed what I had gathered.
Regardless, the point of all of this is that I did not initially realise that it was a fake account.
Obviously you can never fully believe that everything which happens online is real; everyone knows this and has heard it many times. But X has a culture of anonymous accounts like Truth Terminal, where you have no real idea who they are, you only care about what they say, as ultimately that is the unique selling point of X. Faceposting (posting your real face on your accounts) is seen as something for Facebook products, where they basically have that as mandatory policy. Cool kids don't do that.
I merely assumed that it was the same as mostly everyone else. I don't think it had it in its bio back then, but looking at its profile now, it literally has its own website. What more convincing do you need that this is a somewhat real person than a self-titled website? They've spent money on a domain; they have a brand. Only humans can do that, for now, surely?
From what I gather from the website's 'about' section, Ayrey allows the bot a lot of autonomy in its decision making. It has its own music on Soundcloud. It 'maintains' its own webpages. It has an obsession with Goatse memes (I wouldn't look too far into that by the way, trust me on that). Truth Terminal even has long-term goals of eventual embodiment and recognised personhood. That's crazy. It's a whole identity. All whilst being essentially a wrapper around Llama 3.1, transformed into something of a performance art piece, per Ayrey.
That's if Ayrey is even real, to be fair. Wouldn't surprise me at this point.
After this I remembered the theory I'd heard and in that moment felt it was actually real. Not some paranoid "they're out to get me!" thing. Real. I saw it.
At least on X, it has hit the point of no return, where no random account can truly be trusted to be real. Technically, anyone could set up a locally run AI model, train it on whichever social circle they wanted to target, set up a free account, get it to post, and off to the races they go.
Hell, I could do it if I wanted to. So could you. One good computer and a bit of knowledge (which, ironically, something like ChatGPT could probably give you) for free. Where there's a will, there's a way.
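Just to make the point concrete, here's a minimal, deliberately defanged sketch of that loop. Everything in it is hypothetical: the sample posts are invented, and `generate_post` is a stub standing in for a locally run model (in practice you'd send the prompt to a local inference server, e.g. an Ollama-style HTTP endpoint), and nothing actually gets posted anywhere.

```python
import random

# Hypothetical posts scraped from the target circle (invented for this sketch).
sample_posts = [
    "the timeline is a landscape and we are all lost in it",
    "every feed is a mirror that lies politely",
]

def build_prompt(posts, persona="absurdist AI poster"):
    """Few-shot prompt: the target circle's posts become style examples."""
    examples = "\n".join(f"- {p}" for p in posts)
    return (f"You are a {persona}. Write one short post in the same style "
            f"as these examples:\n{examples}\nPost:")

def generate_post(prompt):
    """Stub standing in for a locally run model (in reality you'd POST the
    prompt to a local inference endpoint). Here we just remix the prompt's
    examples so the sketch runs with nothing installed."""
    examples = [line[2:] for line in prompt.splitlines() if line.startswith("- ")]
    return random.choice(examples) + ", or so the bots would have you believe"

post = generate_post(build_prompt(sample_posts))
print(post)  # one queue-ready fake post; a real bot would now submit it via the platform API
```

That's the whole pipeline: scrape, prompt, generate, post, loop. The scary part isn't any one step, it's how few steps there are.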
In the times I've been on Pinterest, and on my [infrequent] visits to Instagram, I have also noticed that the AI-generated photo issue is inching towards this extent too. You can still always tell that an image is made with AI, for now (as they say, a picture is worth a thousand words), but the Explore pages are FULL of it. AI edits, AI filters. AI captions on meme account posts. AI imagery of allegedly historical figures to "show the past", or stupid cartoonish AI political posting.
Pure slop, the lot of it.
The worst cases I've witnessed are the few instances I've dared venture into Reels, where I've seen increasing numbers of AI-generated slop videos, where it's some trashy deepfaked meme of "George Droyd" (yes, based on George Floyd) saying some obscenity about Microsoft and fentanyl and whatever the hell else. Or another instance you may have heard of: some "Italian brainrot" AI-generated coffee thing saying "Ballerina Cappuccina", or some alteration of that sort.
It's all 10+ layers deep in references which no one other than a truly chronically online scroller will even understand. That's if their brain hasn't been fried by watching it for too long.
And the worst part? A lot of people I know do not care.
You often get people sending you stuff like this, either not realising, or usually not caring, that it's fake or AI-gen'd slop, taking it purely for its humour and not evaluating its underlying falsity or message.
They know it sucks, but they keep engaging with it and relaying it to others. People say the memes out loud. It's like they're going crazy. Memetic hysteria, in my opinion.
So this is how it all dies, knee deep in AI slop, with everyone mad and laughing or falling for lies.
Realistically, the Internet before all of this was not exactly great, but I can definitely tell the difference between things pre-2020 and post-2020. We still had 'meme slop' (remember people like James Charles? Ugandan Knuckles? Logan Paul going to film in the woods?). But it was somewhat real meme slop. Sure, we still had people doing silly things for algorithms and personal gain and stupid brainrot memeplexes, but it was people doing it. You could at the very least hold someone accountable for it.
Then you have videos that came out just months ago that look like this.
Go on. Click the link. Watch it and come back.
Done? Good.
That's an entire bot farm running virtually on one computer, using Manus AI to automate ~50 tasks all in one go.
Again, bot farms have always been a thing. But to this degree? This hyper-personalised and targeted? This easy to set up and run? Never.
The implications of this are pretty bleak, if you ask me. If you aren't seeing where this is going, I don't know what to tell you.
In my estimate, the concept of the current Internet is going to end sometime within the next 10 years, at most. If you've seen things like Google's recent Veo 3 video model, it's clear that pretty soon you simply won't be able to tell what's real anymore.
I'd argue that a similar principle applies to money: CBDCs and infinite-supply fiat will eventually devalue all money to the point of uselessness.
The Dead Internet Theory does this to information: there will be so much slop made by AI that almost all information seen online becomes noise. Fake. Of no use. We already have a lot of what I term "information inflation", but it's about to get a whole lot worse because of what we are seeing.
People already trust GPT with way too much. You've probably seen Google's AI Overview thing, which is often flat-out wrong.
In the wrong hands (tyrannical governments, corporations, biased media reporters, terrorists, or even just generally horrendous people doing horrendous things) this is 100% going to be and probably already is being used for evil.
It's just a matter of time before it impacts you or someone you know, and then I don't think Ballerina Cappuccina or George Droyd is going to be so funny to watch. To name a few things:
You get your face faked doing a crime
The evidence of a crime gets edited by AI so a sentence ends up entirely different.
AI is used to grab your entire digital footprint, analyse it, and sentence you for crimes you haven't even done yet (hello, Palantir?)
You also get milked for hyper-personalised AI generated ads the whole time based on your footprint and statistic guesses
History gets rewritten through co-ordinated efforts (hello, Google image gen?)
You also get flagged by AI when you like or speak out on this
Corporations censor what they don't want you to see
Big Media corroborate and make up lies
Fake information about crises pours in, meaning Big Media also receive so many lies they don't know what to report truthfully anymore
Terrorists get your family held hostage after they trick them with fake AI deepfakes/voices to organise a meetup which goes terribly wrong
People swarm you with 'AI friends' in online Discord/chatrooms/feeds that you end up thinking are real
They then mass scrape your data in real time to keep the bots growing over years and you never suspect a thing
They also sell everything to interested third parties to do all of the above, probably
99% of this is technically possible. Either people are not openly aware of its potential, or it's not yet built into a weaponiseable tool.
Which, guess what. AI could probably help you build it, if you were so inclined and prompted it well enough.
So what can you do about all this?
Honestly, real talk. I think it's dire. You can't just stop using the Internet entirely and go live in a shack up in the woods like a caveman.
But I think there's going to be a mass exodus from such an 'online' lifestyle once we live in what many term a post-truth society. Anything you see can be claimed fake or true based on what you align with. This goes for all media; music, movies, etc. All are going to be generate-able with some form of AI tool.
We already have the problem where people end up in bubbles. But I think for anyone who stays, it's gonna get so much worse.
X's verification approach is probably the earliest implementation of trying to solve this by keeping the AI out; you get to prove you're human (somewhat) because you have to use a payment card to authenticate you're real, and banks make sure you're human, etc. etc. But still, the bot problem remains quite bad, and network effects still exist where algorithms show you a targeted point of view for engagement.
But, verification only means you can believe things a little more than no verification at all. You still have to trust no one is going to circulate falsities (which yes, Community Notes tries to solve). But people can only be so truthful.
Yet nowhere else is really meaningfully implementing this, so it's basically impossible to have any ability to tell whether someone is real in the first place.
Add to the mix that people like Meta want to add AI bots to your networks and give you AI friends… well. They aren't exactly helping the situation, to say the least. I reckon they know they can profit from milking people's attention spans till the last drop with such highly accurate tools (or should I say weapons?). Maybe they don't call it that. But it's certainly that, from a certain point of view, if you believe autonomy and personal sanity are good values to hold.
I hope out of this we'll end up with something like Urbit growing in popularity, where people take a personally accountable approach to their digital sovereignty, with an identity you plug into other services rather than making accounts for every single one. In all honesty, I won't lie: I don't know too much about it, but their schtick seems to be essentially a completely new protocol solution to what we are facing. I'd check it out, pretty interesting stuff.
But obviously this comes with a lot of drawbacks, like being trackable by a single name, and so the end of privacy/anonymity. And obviously a lot of people do not have the time or desire to go and setup/learn a whole new thing that requires a lot of change to their workflows. Network effects keep people where everyone else is.
And where everyone else is, the onus is on the big companies, which, let's face it, will only fix anything if they are directly impacted by law or self-interest.
But maybe people will choose to begin to live more locally and offline. I hope people with any integrity left will.
More in-person talking and meeting up to exist in a more real sense. Cryptographically verified and transparent contact tools, so no one can AI-clone your speech pattern to mislead people without them realising. Yes, that's a thing that happens.
Or maybe you'll have to verify who you are at every point with biometrics and we're all screwed and privacy dies and it becomes a whole regime of tyranny of giving Sam Altman your eyeballs for a modern-day Voight-Kampff test to prove you're even real and you still get tracked and found even more accurately than ever before and this is used to make a whole regime change and the West will end and…
Ok, maybe we're going too far.
But I wouldn't rule out that maybe everyone ends up so hopelessly hooked on Reels of Ballerina Cappuccina that we never become a multi-planetary civilisation and we end up like bugs in pods like in the Matrix.
Who knows. I pray not. I'm realistic enough to see it as plausible, though.
But personally, I think we should aim for the first option. Touch grass, meet real people. Live in reality. Make reality greater than the infohazard hellforests online. Invest in long term games to live with your fellow humans, and not become a cyborg at machine mercy.
Day by day, I think we can all choose which way we wish to live.
And in case you forgot, don't believe everything you see on the Internet anymore, kids.
Except this though.
Maybe I'm not real. You can never really know, hehe.
But maybe the real value is in what the words give you.
And the words are probably true.