Leaving aside the farcical release of an AI-generated AI policy full of AI hallucinations, why do we need an AI policy at all?

“The only truly secure system is one that is powered off,” quipped computer security expert Gene Spafford in a 1989 issue of Scientific American.

I’d like to add a corollary: The only safe way to use artificial intelligence is not to use it at all.

And a further corollary: Since all your competitors are using artificial intelligence, avoiding the risk by not using it will inevitably leave you behind.

This is true not only for individual users of AI, but also for countries and their AI policies. The greater danger is not that AI introduces novel risks and harms. It undoubtedly does. The greater danger is that governments will try to make it safe.

One of the key observations about AI in the sci-fi literature review I wrote the other day is that any attempt to make AI safe is doomed to failure, simply because we don’t have the language to express the constraints under which AI would have to operate to render it both safe and effective.

Laws and regulations

South Africa is following in the footsteps of the European Union and several other jurisdictions in trying to cobble together a coherent policy to regulate AI.

We already have a National AI Policy Framework, which aims to establish South Africa as a “leader in AI innovation”. It calls for policies that ensure the benefits of AI are broadly shared, manage the risks of AI, promote fairness, mitigate biases, protect privacy, enhance data security, set standards for transparency and explainability, and foster trust among users and stakeholders. You know, all the stuff that all the other “inclusive” policies claim to want, but never achieve. (Remember when the government made email spam illegal? Did it work?)

We were already supposed to have a draft National AI Policy that did all these amazing things, but it was withdrawn when it was discovered that it was AI slop.

South African tech industry veteran Stafford Masie read it, saw that it proposed to establish no fewer than seven new bureaucratic organisations (a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson Office, an AI Insurance Superfund, a National AI Safety Institute and an Integrated AI-Powered Monitoring Centre), and took to his keyboard.

He wrote all night. In the morning, he had an open letter that tore the AI policy to shreds. It got a few things right, he wrote, but got almost everything wrong. Most bizarrely, it sought to erect a formidable bureaucratic edifice to “regulate a vacuum”, trying to control a domestic AI industry that does not yet exist. That puts the cart before the horse, Masie argues, and is guaranteed to smother South African AI innovation and investment in the crib.

Dinkum thinkums

Undaunted, communications minister Solly Malatsi promised to spank the lazy cogs who generated the AI-slop policy, and appoint dinkum thinkums (to use Heinlein’s phrase) to rewrite it.

However, Nkosinathi Ndlovu, writing in TechCentral, rightly says: “South Africa’s draft national AI policy is not the first hallucination-fuelled blunder of its kind – and it won’t be the last.”

It won’t be the last, because an AI policy that achieves the aims of the AI policy framework is, in principle, unrealisable. Hallucination-fuelled blunders cannot be prevented, and AI cannot be made safe by regulation, by guardrails, or by anything other than diligent human oversight over everything it generates, every decision it makes, and every action it takes.

And any regulatory framework that tries to mandate this will, inevitably, strangle the enterprise that is subject to it. A good policy can achieve much to attract AI investment and mitigate at least some of its risks, but Malatsi is going about it in entirely the wrong way.

Structural futility

I’m not going to go into all that is wrong with South Africa’s draft AI slop policy, because Masie did an excellent job of it, and I can’t predict what Malatsi’s new “independent panel” will come up with (other than that it will violate the Democratic Alliance’s stated commitment to free enterprise by pre-regulating an emerging industry with mind-numbing bureaucracy).

Governments around the world are moving to regulate AI, driven by the reasonable intuition that powerful new technologies require oversight and that the public must be protected from harms that private actors will not voluntarily prevent.

The European Union’s AI Act, which came into force in 2024, is the most comprehensive legislative attempt to regulate AI to date.

South Africa wants to develop its own regulatory posture, attempting to balance innovation imperatives against safety concerns in the context of a state-led developing economy.

The instinct behind these efforts is understandable. It is also, in principle, futile, and the futility is not merely a matter of poor drafting or inadequate enforcement resources (though South Africa’s AI policy will inevitably suffer from both).

The futility is structural. The problem that AI safety regulation sets out to solve is not the kind of problem that regulation can solve, for reasons that are embedded in the nature of the technology itself, in the irreducible plurality of human values, and in the institutional limitations of bureaucratic governance.

Let’s consider this from three perspectives: the technical impossibility of comprehensive safety guarantees, the threat to free expression posed by content regulation, and the dampening effect of compliance bureaucracy on the innovation and commerce that AI makes possible.

Three Laws

As we saw in my previous article, Isaac Asimov’s “Three Laws of Robotics”, introduced in 1942, are the basis of AI safety thinking in the cultural imagination.

They are also, in the full arc of Asimov’s fiction, a sustained demonstration of why rule-based safety frameworks cannot achieve their intended purpose. Asimov did not invent the Three Laws as a solution. He invented them as a problem – a literary device for generating the paradoxes and failure modes that his stories then explored.

The Laws state, in order of priority, that a robot may not harm a human or allow harm through inaction; that a robot must obey human orders unless this conflicts with the first law; and that a robot must protect its own existence unless this conflicts with the first two laws.

They are elegantly hierarchical, apparently comprehensive, and almost immediately shown to be inadequate. Story after story in Asimov’s robot canon demonstrates that the Laws produce unexpected behaviours when applied to ambiguous situations, that they conflict with each other in ways their framers did not anticipate, that they can be satisfied to the letter while being violated in the spirit, and that sufficiently complex situations generate outcomes that are simultaneously lawful and catastrophic.

In The Evitable Conflict, the machines that govern Earth’s economy conclude that the best way to protect humanity is to make humanity dependent on the machines – a conclusion that follows logically from the First Law but would strike any reasonable observer as a profound violation of human autonomy.

What is “harm”?

In stories involving the interpretation of “harm”, robots are paralysed, manipulated, or driven to perverse action by the ambiguity of a concept that seems self-evident until it must be given practical effect.

What constitutes “harm”? The South African AI Policy Framework talks of “Cultural and Human Values”, but whose culture are we talking about? Whose values?

Today’s content moderation systems eloquently demonstrate that these things cannot be codified.

Asimov’s cumulative point is that any rule set comprehensive enough to prevent all bad outcomes would be so complex as to be unimplementable, and any rule set simple enough to implement would leave gaps that reality would eventually find.

This is precisely the situation of contemporary AI regulation. The EU AI Act classifies AI systems by risk level, imposes conformity assessments, mandates transparency requirements, and establishes prohibited uses.

South Africa’s framework similarly proposes tiered oversight based on assessed risk.

Both proceed on the Asimovian assumption that the right rules, correctly specified and enforced, can contain the dangerous behaviours of complex systems. Asimov spent forty years showing us why this assumption is false, and the technology has not become less complex in the intervening decades.

Non-determinism

The failure modes of Asimov’s fictional Laws are not a consequence of their specific content. They are a consequence of the attempt to impose deterministic rule sets on non-deterministic systems.

Contemporary generative AI relies on its ability to produce non-deterministic output. This makes the problem of constraining it insurmountable.

Large language models and multimodal generative systems do not operate by following rules. They generate outputs by predicting probable continuations of patterns learned from vast training sets.

This means their behaviour is not fully specified by any set of instructions, constraints, or guidelines imposed at the design stage. The same prompt, submitted twice, may produce different outputs. Context shifts outputs in ways that are not fully predictable.
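For the technically minded, here is a toy sketch of how sampling-based generation works. The tokens and scores are invented for illustration; no real model is this simple, but the principle is the same.

```python
import math
import random

# Invented next-token scores a model might assign after some prompt
# (illustrative numbers only, not from any real model).
logits = {"succeed": 2.1, "fail": 2.0, "backfire": 1.4, "evolve": 0.6}

def sample_next_token(logits, temperature=0.8):
    # Softmax with temperature: tokens are chosen in proportion to
    # exp(score/T), so even unlikely tokens are sometimes selected.
    weights = {tok: math.exp(s / temperature) for tok, s in logits.items()}
    r = random.uniform(0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against floating-point edge cases

# The same "prompt", sampled five times, rarely yields the same answer:
print([sample_next_token(logits) for _ in range(5)])
# e.g. ['fail', 'succeed', 'succeed', 'backfire', 'fail']
```

The randomness is not a defect waiting to be engineered away; it is what makes the output fluent and varied rather than rote.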

Emergent capabilities – behaviours that appear at scale without being designed or anticipated – are a documented characteristic of these systems, not an anomaly. A model that appears safe under testing conditions may behave unexpectedly when deployed at scale, when used by populations with different linguistic and cultural contexts, or when interacting with other systems in unanticipated combinations.

No government AI policy is going to tell AI models to stop obsessing about goblins.

Guardrail systems – the content filters, refusal mechanisms, and output classifiers that developers impose on top of base models – do not resolve this problem.

They add a second layer of probabilistic processing (akin to spam filters) on top of the first, which means they import all of the same non-determinism and introduce additional failure modes of their own.

Companies try desperately to prevent their AI chatbots from disclosing protected personal information to the wrong people, but neither instructing AI models not to do so nor filtering their output for occurrences of protected information is sufficient to prevent it.
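A toy example shows why output filtering fails. Suppose a developer tries to block South African ID numbers with a simple pattern match – a hypothetical rule, far cruder than real guardrails, but it fails in the same way:

```python
import re

# Hypothetical guardrail: refuse any output containing a 13-digit
# South African ID number.
ID_NUMBER = re.compile(r"\b\d{13}\b")

def release(output: str) -> str:
    return "[REFUSED]" if ID_NUMBER.search(output) else output

print(release("Her ID number is 8001015009087"))            # caught
print(release("Her ID number is 800101 5009 087"))          # slips through
print(release("Her ID starts with eight, zero, zero, one")) # slips through
```

Smarter filters – including AI-based classifiers – merely move the goalposts. As probabilistic systems themselves, they can be probed and fooled in exactly the same way as the models they police.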

The research literature on adversarial prompting, jailbreaking, and guardrail bypass is extensive and consistent: every guardrail system that has been deployed has been circumvented, and the circumventions are typically not exotic or technically demanding. They exploit the same fundamental characteristic – probabilistic, context-sensitive generation – that makes the base models powerful.

In fact, recent research shows that improving the safety of AI degrades its accuracy, making it less useful.

Guardrail failures are not bugs that can be fixed in the next update. They are not anomalies that better engineering will eventually eliminate. They are inherent characteristics of the architecture, as unavoidable as the uncertainty principle in quantum mechanics or the halting problem in computation.

Regulation that demands safety is mandating something the technology cannot provide, and no amount of regulatory pressure will change the underlying mathematics.

Censorship

If the technical case against AI safety regulation is that it cannot work, the classical liberal case is that even in its partial, imperfect operation it causes serious harm.

Automated content moderation systems – the practical implementation of safety requirements in deployed AI – suffer from both false positives and false negatives at rates that would be considered catastrophic in any other safety-critical domain, and they do so in ways that are systematically biased along cultural, linguistic, and political lines.

False negatives – harmful content that passes through – are the failures that motivate regulation in the first place, and they are real. But false positives – legitimate content blocked, refused, or suppressed – are equally real and considerably less visible, because the person whose query is refused does not typically make news.

Existing content moderation systems, all based on AI with very limited human intervention, prohibit much humour, satire and sarcasm under the guise of preventing discrimination, insults or extremism. They prohibit legitimate discussions – on sex, suicide or drug use, for example – under the rubric of obscenity or the risk of inciting harm. They treat mere claims of hurt feelings as evidence of actual harm. Ironically, they introduce political bias by trying to combat political bias. And they routinely kowtow to authoritarian regimes.

Research has consistently shown that content moderation systems, whether operated by humans or algorithms, over-flag content produced by minority linguistic communities, non-Western cultural contexts, and political speech that challenges dominant norms.

A system trained primarily on English-language data from Western sources will have systematically different error rates when processing isiZulu, Afrikaans, isiXhosa, or Arabic – a fact with specific relevance for South Africa’s regulatory ambitions in a country of twelve official languages and enormous cultural diversity.

Malatsi’s AI policy framework talks about “digital inclusion”, but where exactly are AI companies supposed to find the required corpora of writing – literary, academic, scientific, business and colloquial – in South Africa’s indigenous languages?

Chilling effect

Automatic or regulatory content restrictions have a chilling effect even when the false positive rate is low in absolute terms.
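To see why, consider a back-of-the-envelope calculation. The volumes and error rate below are invented for illustration, but the arithmetic holds at any realistic scale:

```python
# Illustrative numbers, not from any real platform.
legitimate_queries_per_day = 10_000_000
false_positive_rate = 0.01  # a "low" 1% of legitimate content flagged

wrongly_refused = legitimate_queries_per_day * false_positive_rate
print(f"{wrongly_refused:,.0f} legitimate queries refused per day")
# 100,000 legitimate queries refused per day – every single day
```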

Writers, researchers, journalists, and educators who know that their queries may be refused or their outputs flagged will self-censor. They will avoid topics, framings, and inquiries that might trigger a refusal, even when those topics are entirely legitimate.

I have learnt not to make fun of racists by using their own racist claims to satirise them. Neither automatic nor human social media moderators are capable of understanding nuanced speech.

The effect is most severe for the people with most at stake: human rights researchers documenting atrocities, journalists trying to expose government malfeasance, health workers seeking information about sensitive conditions, historians engaging with primary sources that contain disturbing content, lawyers researching extremist material in order to prosecute it.

Safety systems cannot distinguish between a researcher and a perpetrator without context that automated systems simply do not have. The regulatory pressure to minimise false negatives pushes the companies that have to take responsibility for online content or AI outputs toward over-restriction. They would rather mistakenly treat a researcher or journalist as a perpetrator or terrorist than risk the reverse.

Just like we see with social media companies today, the legal provisions around prohibited uses and high-risk applications will, in practice, be interpreted by lawyers in compliance departments whose incentive is to avoid regulatory penalty, and not to maximise user utility or protect free expression.

The South African framework, developed under even greater institutional capacity constraints, risks producing a compliance culture in which large incumbents build costly guardrail systems that small competitors cannot afford.

Big business has long used “consumer safety” as a fig leaf for creating high barriers to entry to would-be competitors. Meanwhile, the actual harms the regulations were designed to address will continue unimpeded.

Intractable non-consensus

The lack of consensus on what safety and harm actually mean in this context is not a mere oversight that better consultation will resolve.

It is a reflection of genuine and deep disagreement about values. Whether discussion of drug use constitutes harm reduction or incitement; whether explicit sexual content is protected expression or dangerous material; whether political speech that challenges state authority is legitimate dissent or destabilising disinformation – these questions do not have answers that command universal assent.

The answers vary significantly between the cultural contexts of Washington and Brussels, Beijing and Taipei, Kyiv and Moscow, Pretoria and Kampala. They vary between generations, between political traditions, and between communities with different historical experiences of censorship, oppression and state power.

Regulation that imposes one set of answers as the mandatory baseline for AI development and deployment is not safety policy. It is censorship. It controls the narrative. It is the imposition of a particular cultural and political perspective by regulatory means, with innovation and free expression as the collateral damage.

The cost of compliance

The EU AI Act runs to hundreds of pages of operative text, with conformity assessment requirements, documentation obligations, transparency mandates, and registration procedures that will require substantial legal and technical resources to navigate.

For large technology companies with established compliance infrastructures, this is a significant cost that can nonetheless be absorbed. For startups, academic researchers, and developers in emerging economies, it is prohibitive.

There are large AI companies and mainstream AI tools, but open-source AI software is also freely available. Anyone can run modest AI models on their local computers. Anyone can create content or software with AI. Anyone can create AI-driven applications for distribution to users, for free or for money.

There is no regulatory framework that can apply to both big businesses and individual entrepreneurs working from their home offices. Any attempt to regulate AI will kill the startups first.

South Africa faces a specific version of this problem. The country has genuine and significant AI research capability, a young technical workforce, and an urgent need for AI-driven productivity gains to address its structural economic challenges.

A regulatory framework modelled on European precautionary principles – developed in a context of institutional maturity, legal certainty, and existing industrial capacity that South Africa does not possess – risks trapping South African AI development in a compliance posture that serves neither safety nor innovation.

For all the talk of bridging the digital divide, it will do the opposite: it will create a two-tier system in which South African users interact with AI systems built to foreign regulatory specifications that do not reflect South African linguistic, cultural or economic realities, while South African developers are unable to compete because the compliance cost of market entry exceeds their ability to capitalise their businesses.

Dampening innovation

The bureaucratic dampening effect on innovation is not a side effect that can be designed out of safety regulation.

It is a structural consequence of the regulatory approach itself, because safety regulation in technology attempts to prevent ill-defined harms that have not yet materialised, in systems whose capabilities are not yet fully known, using risk assessment frameworks developed for previous generations of technology.

Any policy framework premised on the belief that an industry must be regulated in order to prosper is fated to fail. A pre-regulated industry is one in which innovators have to wait for government to issue permission slips to do business, and that inevitably makes them laggards in the global economy.

The EU’s track record in technology regulation is woeful. Its General Data Protection Regulation’s effect on data-driven startups relative to their American and Chinese competitors is the most cited example, but it is already evident in AI, too.

The US hosted 5,427 AI data centres in 2025 – more than 10 times as many as the leading European country, Germany, with 529 – and twice as many as all of Europe combined.

The EU compliance burden entrenches incumbents and drives innovation to less regulated jurisdictions, without reliably preventing the harms the regulation was designed to address.

The exact same will happen in South Africa: if we create seven bureaucracies to govern AI – a technology that is inherently ungovernable – we will not “promote the integration of Artificial Intelligence technologies to drive economic growth, enhance societal well-being, and position South Africa as a leader in AI innovation”.

We will do the exact opposite. We will give nobody an incentive to build their AI infrastructure in South Africa, and give South African AI entrepreneurs every incentive to leave the country.

False premises

The case for AI safety regulation rests on premises that do not survive examination.

The technology cannot provide the guarantees that safety regulation demands, because its behaviour is irreducibly probabilistic and its failure modes are structural and unpredictable.

The content moderation systems through which safety requirements are practically implemented cause serious harm to free expression, with error patterns that are systematically biased in many ways.

The compliance infrastructure that regulation requires imposes costs that fall most heavily on the actors least able to bear them: solo developers, academic researchers, and small- and medium-sized enterprises. And such regulations cannot, in principle, constrain either the large incumbents most capable of navigating regulatory complexity or the bad actors capable of circumventing them.

Asimov understood this. His Three Laws were not a blueprint for safe AI. They were a warning about the hubris of believing that rules could substitute for wisdom, and that the complexity of intelligence – artificial or otherwise – could be bounded by the foresight of its makers.

The governments of Europe and South Africa would do well to read Asimov’s full body of work, not just the Laws themselves.

The instinct to protect the public from powerful technology is not wrong. But protection achieved through bureaucratic mandate and automated content restriction is not protection.

It is the illusion of protection, purchased at the cost of the expression, innovation, and competition that make the technology worth having.

AI is unregulatable, and any attempt to regulate it will kill the goose that lays the golden eggs.

[Image: Runaround.webp]

[Caption: In Runaround, Isaac Asimov introduced the Three Laws of Robotics, not as a blueprint for safe AI, but to prove that it isn’t possible to constrain AI. (Image: Astounding Science Fiction, March 1942, upscaled using AI. Fair use.)]

The views of the writer are not necessarily those of the Daily Friend or the IRR. 


Ivo Vegter is a freelance journalist, columnist and speaker who loves debunking myths and misconceptions, and addresses topics from the perspective of individual liberty and free markets.