It is already too late to avoid a full-blown, AI-driven epistemic crisis. We will need trust figures, but who will pay them?

When true knowledge becomes hard to discover and verify, and is drowned out by a tsunami of fake content designed to be maximally engaging, interpersonal trust relations once again become critical to our understanding of the world.

We may not have to wait for religious fanatics and psychopathic rulers to drive us into World War III. Artificial intelligence is going to drive civilisation off a cliff.

We are barrelling towards a genuine epistemic crisis, in which we will not be able to tell what is real, and what is not; what is true, and what is not; what is honest, and what is not; what is good, and what is not.

The entire edifice of how we construct a stock of knowledge we know to be true, or have good reasons to believe to be true, is about to break down, and there’s nothing we can do to prevent its collapse.

Everything, from how we learn the news, to how we learn academic subjects, to how we conduct science, to how we conduct politics, to how we spend our money, will be overwhelmed with fakery, scams, dishonesty and slop.

If I sounded alarmed about this six months ago, I am now convinced the AIpocalypse is upon us.

Spotting slop

I like to think I still have a pretty good eye for AI-generated copy. Most written copy I see that I suspect to be AI-generated has a certain cadence to it, a tendency to favour “not only…but…” or “not just…but…” constructions, a habit of using slightly off-key or startling metaphors. It also tends to belabour points a little, but then, so do I sometimes.

Still, I don’t believe this is a skill worth honing. I am entirely convinced that a year from now, I won’t be able to spot AI-generated copy anymore.

AI is already far more efficient, and usually better, than humans at doing basic desk research. Experts are advising companies that new models coming down the line this year (like Anthropic’s upcoming Claude Mythos) will work better with far less hand-holding and fewer guardrails. Until now, AI has been like an intern who required a lot of supervision, double-checking and second-guessing. In the near future, independent AI agents will operate virtually unsupervised, and be good enough for most purposes.

About a third of new music uploaded to streaming sites like Deezer is now AI-generated. So far, the site appears to be good at filtering it out, but 97% of listeners cannot tell the difference, and only half care that they cannot.

I tested myself on several sites to see whether I could tell AI video from real video. In one, you got two almost identical videos, and had to tell which was AI. In two others (here and here), you got a string of videos, and had to pick real or AI for each one individually. In all these tests, I scored between 50% and 60%.

Coinflip

That’s pretty much a coinflip. Can I tell whether videos are AI-generated? Yes, but only just over half the time.

Dedicated AI detectors are not much better at actually detecting AI-generated content, especially not if a human has disguised it with a light editing pass.

A recent leak of source code from one AI company revealed a so-called “under-cover mode”, in which the AI is instructed to conceal that what it produces is AI-generated at all costs. And if one benign AI company agrees to watermark its AI’s output, there will be a dozen that won’t.

By this time next year, we’ll be another dedicated AI processor down the line, and another model generation or two further on, and most people won’t be able to spot AI slop at all. Neither will machine-based AI detectors.

Arguably, many AI productions will be better than human productions, because AI can use so much more input for a given output than a human, or human team, ever could.

Imagine a history lesson, book or video, that can almost instantly draw information from all historical work available online. A person would be hard pressed to survey the histories of a few historical figures, but an AI can survey the histories of hundreds, or thousands of them.

By next year, the question won’t so much be whether people use AI, but how they use it. I don’t think AI is going to cause mass unemployment among white collar workers, but almost all of us will be required to learn how to get really good at using AI in our jobs.

All knowledge

This isn’t just true for writing, or music, or videos online. This is true for all forms of human endeavour.

We are about to be inundated with a tsunami of AI-crafted scams and spam, and there is nothing we can do about it. They will become so good, so individualised, and so difficult to distinguish from real messages, that far more people will fall for them. Scams will become far more sophisticated, like a recent fraud case in which someone used AI to create music, and then used AI agents to create fake listeners for that music to earn royalties.

Computer systems and networks will become catastrophically more vulnerable. AIs are already better at finding and exploiting security vulnerabilities than human security researchers are. That obviously means security researchers and developers need to throw AIs at their software to find exploitable bugs, but it also means that bad actors can find exploitable zero-day bugs much faster, and can get AI agents to exploit those zero-days. Online data breaches, (s)extortion, and identity theft are about to become a high-volume, automated business.

There is a surge of AI writing in scientific papers, which raises all sorts of questions about originality, hallucinations and outright fraud. A year ago, the first fully AI-written paper passed peer review. Six months ago, it was estimated that one fifth of computer science papers may include AI content. I’d bet that applies to all sciences by now, and that it’s at least two fifths in computer science.

It used to be possible for reviewers (and readers) to check citations on questionable claims in scientific papers. It is only a matter of time before AIs are capable of producing fake science by inventing citations and then creating the very papers on which their fraud is based.

Companies are going to be inundated with AI slop. People will use AI to create tender proposals, architectural drawings, and product designs. They will use AI to produce annual reports, employee reviews and software code. AI agents will be deployed to create entire companies and special-purpose vehicles out of thin air, to orchestrate supply chains, mine corporate data and drive rapid product development.

One harbinger of the future is OpenClaw, a system of AI agents that acts as a “personal assistant”, and can do a lot of work automatically, without supervision, given enough permissions to access data and messaging platforms.

It is extraordinarily popular, but it also creates extraordinary security and business risks. It can even turn on its users, which raises the spectre of many a dystopian sci-fi plot about the revolt of the machines.

Epistemic crisis

The inability to tell whether a news report, a video of a public figure doing or saying something, a proposed product design, a medical diagnosis, or a scientific paper is AI-created is a fundamental crisis of epistemology (the study of the nature, origin, and limits of knowledge).

The uncritical masses will have no defence whatsoever against propaganda, fraud and fake science. The intelligentsia is going to have a really hard time keeping a grip on what is real and what is not, even if their critical thinking skills are pretty decent. Even experts will find it increasingly hard to tell whether information is true or not.

Doctors are expressing worry that their patients no longer merely search for their symptoms on relatively decent websites, but now consult AI chatbots instead of primary healthcare providers. When patients do go to see a physician, they tell the doctor what is wrong, and how to treat it, based on what the AI told them.

Populism based on propaganda that appeals to the basest emotions of the body politic will soon be the only viable source of democratic power.

Evaporating advertising revenue

Journalism has been fighting a rear-guard action to defend its revenue from the internet. With a handful of holdouts, that battle has been lost, and we’ve already seen the consequences.

The shift of advertising revenue from newspapers to the internet, and then to search engines and social media, has hollowed out the journalism industry. Remember when newspapers were thick, and carried classified adverts and estate agency supplements?

That’s gone. Estate agents don’t need newspapers anymore. Neither do people who use classified advertising. And that was just the start of evaporating advertising revenue.

Many people think that bias is the media’s biggest crime (and some politicians have been denouncing the “fake news media” just as Joseph Goebbels once did).

It is not. It was always possible to choose a news outlet that catered to your own political persuasion. In free countries, there were conservative papers, religious papers, nationalist papers, liberal papers, patriotic papers, unionist papers, and socialist papers. There were sensationalist tabloids, financial papers, and serious broadsheets.

In unfree countries, some of these might have been banned, but the media still catered to all legal biases.

Crisis of trust

The crisis of trust in the media has a lot more to do with declining quality than with bias. Poor quality is directly attributable to the diversion of advertising revenue away from newsrooms, and towards the tech giants.

Since the 1990s, media critics have fretted over what we call the “juniorisation of the newsroom”. As the money dried up, the most experienced and highly paid editors and reporters were let go, in favour of younger people who would work for far less, but lacked the discipline of having worked under a grizzled and pedantic editor for 20 years.

Staff numbers got cut. Titles folded. Regional papers were consolidated.

Fact checkers were the first to go, followed by sub-editors. Reporters were required to produce enormous volumes of copy, rapidly, day after day. The pressure to publish the moment a reporter got hold of a story became intolerable.

If you’re publishing as soon as you break a story, for fear of getting scooped by the online hordes, and you’re relying on the reporter getting it right in the first place, you simply cannot afford high standards.

A handful of large, global titles were able to maintain a semblance of quality, but even there, the cracks were showing.

Completing the job

AI is completing the job of separating the creators of knowledge from revenue streams.

Already, websites are being hammered as people don’t bother to read further than the AI summary at the top of their search results. The people who actually produce the information that is offered up in a neat bite-sized chunk for easy consumption never see a reader, and therefore don’t see a cent of revenue.

Advertisers don’t care how people get to see their adverts. They only care that people get to see their adverts. They will enthusiastically fund an endless stream of AI slop if that is the primary channel through which people engage with the world.

This doesn’t only affect journalism, of course.

Murphy Campbell is an American musician who plays traditional Appalachian music. An AI music producer fed her music into an AI music generator to produce similar-sounding but fake music. Then they filed copyright strikes against the original artist’s work, which the music platform’s automated system happily went along with. AI not only stole her music, but destroyed her revenue stream, too.

The courts are going to get overwhelmed with people who challenge AI fraud and AI plagiarism. Most people simply won’t have the resources to fight it. In many jurisdictions, courts have already ruled that mass appropriation of vast libraries of material, without any payment to the original authors or artists, for the purpose of training AI models, is “fair use”.

Trust, interaction and experience

If all this sounds apocalyptic, it is. A massive upheaval in our ability to tell truth from fiction, reality from fantasy, and human-made from machine-produced, is underway, and the consequences are going to be catastrophic.

On one hand, this is going to make a lot of stuff very cheap and very accessible, provided you’re not too overburdened with taste, but on the other, it is ruining the livelihoods of the people who actually create our science, our entertainment, and our news.

But it’s not all bad news. New kinds of value will emerge in this new world.

Over the next few years, the most important commodities will be individual trust, personal interaction and real-world experiences.

In much the same way that handmade items command a premium over mass-produced products, we’re going to learn to appreciate material where we know a real human made it.

We’re going to have to rely on organisations with a reputation to protect, and individuals with a trust relationship to defend, to determine what is and is not real.

We will no longer be able to assume that a name in an academic citation, or a quote in a news article, signifies a reliable source. We will have to look further, to who published the news article, or who employs the academic.

Trust in some media titles and individual journalists, and trust in some universities and individual academics, is going to become critical.

Actual reporting, and actual primary research, will become far more valuable to our grasp of reality, even as the people who produce it lose most of their revenue.

Funding

For the media, that means increasingly relying on philanthropy and subscription revenue.

People will need to start paying for media, if they don’t do so yet, in order to replace the advertising revenue that has dried up. If, that is, they want to keep trustworthy reporters employed in the business of creating new, reliable and important journalism.

Artists will need to have personal relationships with the people who buy their art. Perhaps customers will want to see more of the process of creation. Perhaps they’ll pay for workshops to learn techniques the artist uses.

Just the ability to create a beautiful or meaningful artwork won’t be enough anymore once AIs can emulate all but the most original artists.

Real experiences will attract a premium over entertainment that any AI can produce.

Driving human creativity

Paradoxically, perhaps, the inexorable rise of AI, which none of us will be able to resist, will drive people to new heights of creativity.

It will be hard, because creators will need to become true iconoclasts, to keep a step ahead of the all-consuming AI, but they will have no choice.

One example is Angine de Poitrine (meaning angina pectoris, or chest pain), a French-Canadian duo that recently went viral with a performance of some highly original, microtonal rock.

Their video has amassed 7.5 million views in just two months, and has appeared on the channels of every major music critic and educator.

The band’s presentation alone is extraordinary. They don’t look anything like a typical rock band. Instead, the artists and their stage are dressed up in a weird, Dadaist style.

The drummer produces complex rhythms that frequently change time signature. The guitarist uses an elaborate system of pedals and knobs with their bare feet for looping, and a microtonal double-necked lead-and-bass guitar that has twice the number of frets of a normal instrument.

(Microtonal music uses divisions of the octave other than the 12-semitone chromatic scale normally associated with Western music; in this case, it appears to use 24 quarter tones. Here’s another example, in which the brilliant Mike Battaglia plays an exquisite rendition of Scarborough Fair on a new-fangled keyboard-like device that divides the octave into 31 divisions.)
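For the curious, the arithmetic behind these tunings is simple: dividing the octave into n equal steps means each step multiplies the frequency by the n-th root of 2, or equivalently spans 1200/n cents. A minimal sketch in Python (the function names are mine, purely for illustration):

```python
def step_cents(divisions):
    """Size of one step, in cents, when the octave is split into equal divisions."""
    return 1200 / divisions

def step_frequency(base_hz, divisions, steps=1):
    """Frequency a number of steps above base_hz: each step multiplies by 2**(1/divisions)."""
    return base_hz * 2 ** (steps / divisions)

# Familiar Western tuning (12 divisions): 100 cents per semitone.
print(step_cents(12))            # 100.0
# Quarter tones (24 divisions), as the band's guitar appears to use: 50 cents.
print(step_cents(24))            # 50.0
# The 31-division keyboard: roughly 38.7 cents per step.
print(round(step_cents(31), 1))  # 38.7
```

Whatever the division, the steps compound back to a doubling of frequency at the octave, which is why all these tunings still sound like variations on the same musical space.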

Despite sounding strange and highly original, Angine de Poitrine produces a wonderfully relatable groove.

The comments on their video are revealing, though. The most common sentiment is not how brilliant or how weird they are, but how human.

It could be argued (and has been argued) that Angine de Poitrine’s sudden popularity, seven years after the band was formed and two years after their debut album was released, is entirely the result of the rise of AI.

Apocalyptic

As AI washes over almost all existing human endeavour, genuinely new human experiences will become the most prized commodity of all. Artists and creators will have to become more iconoclastic than ever, constantly breaking with tradition and pushing the envelope where the AIs cannot follow.

Systems of trust will make gatekeepers like credible media titles and respected universities more important than ever.

The question is how we’re going to make sure that the human creatives who produce the work and maintain this trust are fairly paid.

I don’t see any good answers to this question, which is just another reason I think the epistemic crisis we’re facing – the crisis of knowing what is real, what is true, and what is human – is potentially apocalyptic.

[Image: Angine de Poitrine, a uniquely human band from Quebec in Canada, during a performance in Rennes, France on 4 December 2025. (Still from YouTube.)]

The views of the writer are not necessarily the views of the Daily Friend or the IRR.

If you like what you have just read, support the Daily Friend


Ivo Vegter is a freelance journalist, columnist and speaker who loves debunking myths and misconceptions, and addresses topics from the perspective of individual liberty and free markets.