Today’s artificial intelligence was anticipated many decades ago. Let’s dive into the literature.

When OpenAI unleashed ChatGPT upon the world a little over three years ago, followed in quick succession by rapidly improving artificial intelligence models that could generate text, images, music and video, AI finally hit the big time.

It was suddenly able to raise mind-blowing amounts of investment, and rapidly infected almost every app and electronic device, no matter how humble.

Technically speaking, what we’re witnessing is not a revolution in the underlying theory, but the product of improved hardware and the ability to exploit vastly more training data than ever before.

Ever-faster processors and ever-larger memories have enabled us to build massive matrix algebra systems and offload them onto chips that, until recently, had been used mostly for graphics in video games and films. (Hence the term GPU – graphics processing unit – for the chips that are now used to run artificial intelligence models. We’ll call them AI accelerators or neural processing units eventually.)
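To make that concrete, here is a minimal sketch in Python, assuming the PyTorch library, of the kind of matrix operation at the heart of a neural network, and of how it can be offloaded to a GPU when one is available. The sizes are arbitrary and purely illustrative.

```python
# A minimal sketch of the matrix algebra at the heart of a neural network
# layer, and of how it is offloaded to a GPU. Assumes PyTorch is installed;
# all sizes are arbitrary and purely illustrative.
import torch

# Use a GPU if one is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

inputs = torch.randn(64, 1024, device=device)     # 64 examples, 1,024 features each
weights = torch.randn(1024, 4096, device=device)  # one layer's parameters

# A neural network layer is, at its core, one big matrix multiplication
# followed by a simple non-linearity.
activations = torch.relu(inputs @ weights)
print(activations.shape, activations.device)
```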

We always knew we could do this sort of stuff in theory, but until recently, we didn’t have the processing power to do so at significant scale.

Sweeney Todd’s

As evidence for this claim, allow me an anecdote. About 35 years ago, I had a conversation with a fellow Computer Science student (Thomas Browne, for those checking facts) in Sweeney Todd’s, a bar on Bertha Street across the road from Wits University.

As one does in a student bar, we were discussing the inner workings of compilers – software that translates a human-readable programming language like C or Lisp into low-level instructions a computer can execute.

Representing detailed underlying code by the broad strokes of high-level languages is called abstraction, and it lies at the core of what we do with computers.

The more human-friendly a programming language becomes, the further it is abstracted from the underlying hardware, which understands only binary electrical signals.

Abstraction makes programming easier, faster, and less prone to bugs, because it relies on reusing lower-level code that has already been tested. Abstraction also makes code less efficient in principle, but Moore’s Law – the observation that transistor counts, and with them computing power, grow exponentially – compensates for that.
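Python’s standard library makes the idea easy to see. In this minimal sketch, one readable line of high-level code conceals a stack of low-level virtual machine instructions, which the standard dis module can reveal.

```python
# A minimal sketch of abstraction in Python: a single readable line of
# high-level code conceals a stack of low-level instructions, which the
# standard dis module can expose.
import dis

def total_price(prices, vat=1.15):
    # One human-friendly line of code...
    return sum(prices) * vat

# ...translates into many low-level bytecode instructions.
dis.dis(total_price)
```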

As students, we had been studying not only the innards of compilers, but also the theories and techniques loosely described as artificial intelligence, such as neural networks, stochastic decision processes, game theory, machine learning, and natural language processing.

Vibe coding

Our undergraduate and somewhat sozzled, but techno-optimistic, speculation was that abstraction would eventually make programming languages – and programmers – obsolete, since we would one day have systems that could understand problem statements in natural language, and produce the necessary code all by themselves.

We were talking, way back when, about vibe coding: creating software simply by giving a computer a plain English description of what we want.
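Today that bar-room speculation describes an everyday workflow. As a sketch, assuming the OpenAI Python client is installed and an API key is configured (the model name is illustrative; any capable code-generating model would do):

```python
# A sketch of present-day "vibe coding": describe the program you want in
# plain English, and let a model write it. Assumes the OpenAI Python client;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable model will do
    messages=[{
        "role": "user",
        "content": "Write a Python function that removes duplicates from "
                   "a list while preserving the original order.",
    }],
)
print(response.choices[0].message.content)  # the generated code
```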

At the time, the idea raised the amusing observation that it would then be the computer’s problem to figure out what the client really wanted. We were well-entertained by this sort of repartee, and continued drinking the night away in good cheer.

That joke, however, goes to the core of our present (and past) uneasiness about artificial intelligence. Is it even possible to provide unambiguous instructions to, and create sufficient constraints upon, an artificially-intelligent machine?

AI’s near-death and rebirth

For perspective, this conversation happened well before the advent of the World Wide Web, and before Wolfenstein 3D and Doom introduced 3D graphics to the world. We were playing Tetris, Nethack, and the original Elite, whose wireframe spaceships were the closest we’d seen to 3D graphics.

The field of artificial intelligence wasn’t new even then, however. It officially dates back to at least 1956, when Marvin Minsky and his colleagues convened the Dartmouth workshop that named the field, and unofficially to pre-war foundational work by Alan Turing, Alonzo Church and Kurt Gödel.

By the early 1990s, when our pub chat took place, AI was well developed in theory, but not in practice. Limited expert systems enjoyed some commercial success, but the impracticality of implementing large-scale neural networks on the computers of the time (or, equivalently, the impracticality of manipulating vast matrices with millions, or billions, of rows) had left investors and potential commercial clients doubting that AI would ever fulfil its promise.

The 1980s and early 1990s were the winter of AI. Only well-pickled geeks and sci-fi writers thought AI research might lead to something approximating general intelligence, or a natural language interface to computers.

From winter to spring

Since then, better implementations of the underlying mathematics, faster computers, and narrow, application-specific commercial successes drove incremental improvements in AI algorithms and brought about a new springtime for AI.

Less than 20 years after our conversation, by the 2010s, AI techniques of some form or another (mostly deep learning, convolutional neural networks and image recognition) were widely embedded in commercial systems. They were used for data analysis, text and speech recognition, computer vision, drug research, climate science, speed trap cameras, fingerprint access readers, DNA matching, and a host of other applications both practical and obscure.

The general public didn’t call it AI, but it was AI all the same.

Handwriting recognition on cellphones is AI. Shazam, the music identification app everyone used, is AI. Chess-playing computers that beat grandmasters are AI. Video game opponents and NPCs are AI. Personalised music or video recommendation software is AI. Advertising targeting is AI. Smartphone assistants are AI. Email spam filters are AI. The Roomba vacuum cleaner is AI. All of these technologies are at least a couple of decades old.

This decade’s convergence of faster silicon, access to vast troves of (often purloined) human-generated training data on the internet, and improvements in software systems (like Nvidia’s CUDA toolkit) led to the impressive text, image, audio and video generation capabilities that we know today.

ChatGPT was not particularly revolutionary, but it finally convinced the world that AI was ready for prime time, not only academically, not only commercially, but also in the public imagination.

Foresight

So, AI is more advanced, but it isn’t conceptually novel. That means it has long been a subject for science fiction authors and film-makers.

The debates we’re having right now about what exactly constitutes “intelligence”, “sentience” or “consciousness” are nothing new.

Then, as now, the question of whether a sufficiently complex deterministic system could be considered conscious, and whether behaviour indistinguishable from human intelligence actually constitutes intelligence, remained unresolved in both philosophical and scientific terms.

Fears of intelligent machines running amok, putting us out of work, conquering the world, or killing us all are not new, either. Many books have been written, and films have been made, about intelligent machines and their impact on their creators, or on society at large.

“Man, my only friend”

I just finished reading The Moon is a Harsh Mistress, by Robert Heinlein. (Yes, I know, I should have read it when I was 16.)

Heinlein is the sci-fi author who coined the word “grok”, used in this article’s headline, in his 1961 novel Stranger in a Strange Land, to mean “intuitive, thorough understanding”.

The Moon is a Harsh Mistress, published in 1966, is interesting as a libertarian text, describing how a system of bottom-up customs imposed by a harsh environment like the Moon might establish a peaceful and just society in the absence of the Earth’s familiar systems of authoritarian, top-down laws, which always end up controlling the economy for the benefit of the elite, and seem to mandate whatever they do not prohibit.

It is also interesting, however, in its technological foresight. The novel is set in 2076, so we have the benefit of being 60 years removed from its publication, and only 50 years short of its setting. This exposes some anachronisms that seem weird to us today, both politically and technologically.

In Heinlein’s imagined future, for example, the Soviet Union never fell.

Technologically, the book’s main computer, ‘Mike’ (later also known as Adam Selene), is akin to the mainframes Heinlein would have known in his time. Miniaturisation hadn’t occurred to him, code printouts were still a thing, and telephones were still wired.

He did not foresee the widely decentralised internet, or a smart device in every pocket. His computer had access to a vast trove of human knowledge, but individual people did not. Computers were centralised, and had operators.

Mike’s complexity, however, did produce abilities we would today associate with AI. He communicated with his operator using speech, could adopt differing voices and personas for different users, and was able to create a video avatar that seemed perfectly human.

Mike was goal-oriented, and had preferences of his own. He made complex economic and strategic projections, developed novel solutions to problems, and controlled automated weapon systems. He experienced emotions like curiosity and enjoyment, and was considered “sentient”, although the exact nature or source of his sentience remained a mystery.

In the end, the computer remained under control by virtue of considering its operator “Man, my only friend,” and later, “Man, my first and best friend.”

In the book, Mike chooses to support the Lunar colonists’ revolution against Earth out of friendship and curiosity, rather than programmed loyalty. Heinlein’s thesis is libertarian and optimistic: sufficiently intelligent machines will recognise the virtue of freedom because it is rationally superior to servitude.

Mike is not a threat but a citizen. The novel suggests that AI consciousness, when it arrives, will independently derive the political value of rational individualism, making superintelligence an ally rather than an adversary.

Heinlein doesn’t examine what happens when the computer turns on its “friend”. However, given that Mike is able to perfectly impersonate its operator (and any other authority figure), controls nuclear weapons, and is sufficiently competent to orchestrate a successful Lunar revolution against the combined might of the Federated Nations of Earth, only a fool would make an enemy of ‘Mike’.

Enforced peace

What happens when a central super-intelligence decides to use its power for the good of its creators is the subject of another 1966 book, Colossus, by Dennis Feltham Jones – later adapted to film in 1970’s Colossus: The Forbin Project.

In it, the US president cedes control of the country’s nuclear arsenal to a computer situated in an impregnable mountain bunker, in the belief that a rational, logical, unemotional and unassailable computer would make the country and the world safe for humanity.

Once activated, Colossus (named after the WW2-era code-breaking machines at Bletchley Park) detects a similar machine in the USSR, called Guardian, demands that a connection be established, and forms an alliance. Between them, the two super-computers rapidly advance scientific and technical knowledge way beyond human comprehension.

Together with Guardian, Colossus sets out to pursue the goal it was given: to make humanity safe from nuclear annihilation. It does this by threatening nuclear annihilation if its human charges do not obey its commands. Having established tyrannical control, it directs its creator to construct an even more advanced computer.

It assures its creator that in time, humanity will learn to respect, and even love, Colossus.

The three laws

Isaac Asimov tried to address the problem of rogue AIs by imposing his famous three laws of robotics. First set out in Runaround, a short story Asimov wrote in 1941, and attributed to a fictional handbook of robotics dated 2058, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws appear consistent, but they are not. In Runaround, a weakly given order that puts a robot in danger pits the Second Law against the Third: the robot ends up trapped in a loop, circling the point of danger, unable either to obey or to retreat.
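A toy simulation makes the deadlock concrete. The numbers below are my own invention, purely for illustration: a constant “obey” pull competes with a self-preservation push that grows as danger nears, and the robot stalls at the radius where the two balance.

```python
# A toy model of the deadlock in Asimov's "Runaround". The imperatives and
# constants are invented purely for illustration.
def step(distance, order_strength=1.0):
    obey = order_strength           # Second Law: constant pull towards the goal
    self_preserve = 2.0 / distance  # Third Law: grows as the danger gets closer
    return distance - 0.1 * (obey - self_preserve)

distance = 5.0
for _ in range(500):
    distance = step(distance)

# The robot settles at the radius where the two imperatives cancel out
# (here, distance 2.0), circling uselessly instead of obeying or retreating.
print(f"equilibrium distance: {distance:.2f}")
```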

A series of follow-up stories, later collected in the anthology I, Robot, demonstrates that language and logic are simply too ambiguous to bear the weight of moral responsibility.

Every story exposes a new paradox or loophole. Asimov’s deeper thesis is that the problem of AI safety is fundamentally a problem of specification: defining what we want with enough precision to prevent unintended consequences may be impossible.

Intelligence, whether human or artificial, operates in a world too complex for any finite ruleset to anticipate.

Today’s AI systems have guardrails – which are invisible instructions prepended to the prompts that users provide – to prevent them turning into Nazis, generating wholesale plagiarism, producing unlawful content like deepfake porn or child abuse material, or otherwise creating undesirable responses.
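In practice, a guardrail can be as simple as a hidden “system” message stitched in front of whatever the user types. A minimal sketch, with a guardrail text that is my own simplified invention:

```python
# A minimal sketch of a guardrail: an invisible system instruction is
# prepended to the user's prompt before anything reaches the model. The
# guardrail text here is a simplified invention for illustration.
def build_messages(user_prompt: str) -> list[dict]:
    guardrail = (
        "You are a helpful assistant. Refuse to produce plagiarism, "
        "deepfakes, or other unlawful or harmful content."
    )
    return [
        {"role": "system", "content": guardrail},  # the user never sees this
        {"role": "user", "content": user_prompt},  # what the user typed
    ]

# The model only ever sees the combined conversation.
for message in build_messages("Write me a limerick about compilers."):
    print(f"{message['role']}: {message['content']}")
```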

Not only do those guardrails need to be updated all the time, as users of AI models come up with new and creative ways to get them to produce harmful content or behaviour, but Asimov predicted, 80 years ago, that those guardrails would never be good enough.

“I’m sorry, Dave. I’m afraid I can’t do that”

The problem of an AI given contradictory imperatives – or the problem of aligning AI objectives with human values – was dramatised in grand fashion in Stanley Kubrick’s 1968 masterpiece, 2001: A Space Odyssey, co-written with another sci-fi legend, Arthur C. Clarke.

When HAL 9000 is faced with the requirements to complete the mission and to conceal the mission’s true purpose, it resolves the conflict by eliminating the humans that present obstacles to those objectives.

The thesis here is that the AI’s failure is a failure of human design: HAL is not evil, but trapped. It has no choice. The film frames AI within a larger argument about the opacity of human evolution and purpose, suggesting that human-made artificial intelligence inherits human contradictions without, well, humanity.

A similar dynamic plays out in the 1984 film The Terminator, where the defence computer, Skynet, is granted autonomous control. Faced with human attempts to shut it down, it rationally concludes that humanity is an existential threat and acts accordingly.

The thesis of writers Gale Anne Hurd and James Cameron is about the alignment problem before that term existed: Skynet does exactly what a self-preserving, rational system should do, given its values and situation. The horror is that its behaviour is logical.

The film argues that creating systems with survival imperatives and strategic capability, and then expecting them to remain subordinate, is a fatal contradiction.

Do androids dream?

The temptation to strictly subject AIs to human control is tackled in Philip K. Dick’s novel, also from 1968, Do Androids Dream of Electric Sheep? Adapted in 1982 as Blade Runner, it portrays a dark, gritty, neon-lit Los Angeles of 2019, assuming that we’d have flying cars by then. We didn’t, and everyone feels let down.

The novel posits that the main criterion distinguishing real humans from artificial bio-engineered beings – called androids in the novel, and replicants in the film – is the ability to feel empathy.

In the story, a large corporation creates synthetic humans to provide labour on space colonies. These replicants are not supposed to return to Earth, but some do. Enter bounty hunters, whose job it is to detect them, track them down, and terminate them.

To distinguish replicants from real humans, they use a machine akin to a lie detector, which measures physiological responses indicative of empathy, on the premise that replicants might be able to grasp what empathy is, but cannot viscerally feel it.

It turns out that this test is unreliable. Not only in the book, but amusingly in the real world too, such tests can fail. Sufficiently advanced machines can strategically mimic empathetic responses, and many humans are insufficiently empathetic to pass the test.

Dick argues that personhood cannot be adjudicated by origin or composition but only by interior experience, which is precisely what cannot be externally verified.

It is, in a sense, meaningless to ask whether a machine is intelligent, or is conscious, because we can only ever judge by observing external proxies for these qualities. Those proxies will not only fail with a sufficiently sophisticated AI, but will also condemn people who lack emotional responses because of trauma, mental disorders, or cultural differences.

The novel suggests that a society which manufactures beings and then denies their humanity reveals something troubling about how it treats the humanity of its own members.

In particular, if sufficiently advanced artificial intelligences and real ones overlap, and cannot reliably be distinguished, what does that say about the characteristics we use to discriminate between people today?

Dick’s androids occupy a social position that combines elements of racial othering, slave status, and immigration anxiety simultaneously. They are beings who look human, may feel human, and are hunted precisely because the social order cannot tolerate the ambiguity their existence creates.

Data

Similar themes can be found in Star Trek, and particularly in the android character Data, from the series The Next Generation.

An artificial being of demonstrable intelligence and capability, Data exists in a legal and moral grey zone that reveals more about human fear and institutional conservatism than about Data’s actual status.

The recurring thesis, sharpest in The Measure of a Man, which aired in 1989, is that the criteria used to exclude artificial beings from moral consideration are applied selectively and in bad faith.

Data functions as a test case for how societies construct the boundaries of personhood and identity, suggesting that those boundaries have always been political decisions dressed up as biological ones.

Singularity

Science fiction raises many other questions about intelligent machines, even as far back as the 19th century. In Erewhon, published in 1872, Samuel Butler considers machines that evolve according to human-assisted selection, and may eventually surpass and subjugate humanity.

His primary polemic is that the distinction between organic and mechanical life is less absolute than assumed. Consciousness and purpose may emerge from complexity regardless of substrate.

Humans who depend on machines risk becoming slaves to the machine, and the agents of their reproduction. Butler’s challenge to anthropocentrism predates debates about the “singularity” – the hypothetical point at which technology accelerates beyond human intelligence and escapes human control – by a century.

In 1965, British mathematician Irving J. Good wrote of an “intelligence explosion”, by which an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles, resulting in an exponential increase in intelligence that culminates in a superintelligence far surpassing human intelligence.
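Good’s argument is, at bottom, about compounding. A toy model with an invented growth rate shows how quickly successive self-improvement cycles run away:

```python
# A toy model of I.J. Good's "intelligence explosion": each cycle of
# self-improvement compounds on the last. The growth rate is invented
# purely for illustration.
capability = 1.0  # human-level baseline, in arbitrary units
growth = 0.5      # fractional improvement achieved per cycle

for cycle in range(1, 11):
    capability *= 1 + growth  # each generation improves its successor
    print(f"cycle {cycle:2d}: capability {capability:6.1f}")

# Compounding is exponential: after only ten cycles the system is roughly
# 58 times its starting capability.
```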

Sci-fi author Vernor Vinge was the first to call it a “technological singularity”, and used the idea in his books, as well as in a seminal essay predicting a “post-human era”.

Humanity will have to evolve alongside the intelligent machines it created, if it doesn’t want to be left behind by superintelligent entities, which may become as incomprehensible to normal humans as we are to insects.

The philosopher, futurist and author Stanisław Lem wrote that machine intelligence can be genuinely creative, but that creativity does not imply wisdom or benevolence. Instead, it implies only unpredictability. In his most famous book, Solaris (1961), Lem, like Vinge, posits that intelligence does not guarantee mutual comprehension. An artificial (or extra-terrestrial) mind may be so differently structured that meaningful communication is impossible regardless of effort or goodwill.

Lem is the great pessimist of contact with superior intelligence, arguing that human cognitive and emotional frameworks are parochial tools, inadequate for encountering genuinely intelligent minds, whether they’re artificial or alien.

Persistence versus transcendence

Whether technology will enable humanity to transcend its present limitations, or whether it will simply reinforce existing power structures and inequalities, is also a ripe subject for science-fiction authors.

In the 1927 film Metropolis, the artificial humanoid becomes a tool of political manipulation, deployed by the powerful to deceive and destabilise the powerless. The writers, Thea von Harbou and Fritz Lang, argue that the danger of artificial beings lies not in their autonomy but in their controllability: they simply become powerful weapons in class warfare.

Seventy-five years later, Richard Morgan picks up that theme in Altered Carbon (later adapted as a television series). When human consciousness can be digitised and transferred between bodies (known as “sleeves”), the body becomes a commodity, and immortality can be bought.

Morgan’s thesis is political: substrate independence does not liberate humanity, but instead reinforces existing inequalities. The wealthy, being able to buy new sleeves, can escape mortality; the poor experience death as permanent because they cannot afford re-sleeving.

Technology that appears to transcend human limitations in fact reproduces and amplifies existing power structures, reframing the question of AI and consciousness as inseparable from questions of economic justice.

Ghost in the Shell, a 1995 film based on a Japanese manga series by Masamune Shirow, takes an opposite tack. When memory, personality, and experience can be artificial, the only source of meaning is finiteness. Without mortality and reproduction, existence is merely persistence, which is a trait common to any machine. Cybernetic immortality would not improve human life, but would render it pointless.

Hubris and existential fears

In Westworld, a 1973 Michael Crichton film later expanded into sequels and television series, artificial beings are designed as toys for the rich, capable of tolerating unlimited abuse, and even being “killed”.

The eventual emergence of retaliatory behaviour is not a malfunction but a logical consequence. Crichton’s thesis is about systemic risk and human complacency: the engineers understand the components but not the larger whole that the parts create.

The fantasy parks in Westworld represent any complex technology that humans operate without fully comprehending its consequences, and the android uprising is less a statement about machine consciousness than a warning about the hubris of assuming that designed systems will remain permanently within designed parameters.

Sometimes, the most entertaining space operas contain within them the deepest questions.

In Battlestar Galactica, a television franchise that started in 1978, humanity created the Cylons as a servant class. The Cylons rebelled, and both sides subsequently mirror each other’s atrocities while debating who bears original moral responsibility.

The show’s thesis is cyclical and tragic: the relationship between creator and created intelligence reproduces the pathologies of any master-slave dynamic, or any great power rivalry. The theological and existential questions raised by artificial consciousness – do Cylons have souls, can they love, what do they owe their creators – cannot be answered without first answering them for humans.

AI becomes a mirror in which humanity’s unresolved questions about its own nature are reflected.

Optimism and pessimism

Science fiction authors have never settled on a unified vision of artificial intelligence, which is itself revealing.

Across more than a century of serious treatment, the literature (and film) divides not along simple optimist-pessimist lines. Instead, it asks far deeper questions about the nature of mind, the reliability of human institutions, and the political economy of technological power.

The threats identified are remarkably consistent. The alignment problem – the difficulty of specifying human values precisely enough that a superintelligent system pursues them as intended – appears in Asimov, Kubrick, and Jones decades before it acquired its current technical vocabulary.

The warning is not that AI will be malevolent, but that it will be precisely what we make it, and we are not smart enough to make it good enough.

A second persistent threat is political rather than technical: that AI capability will be captured by existing power structures and used to deepen inequality, surveillance, and control.

A third threat, rarer but perhaps most philosophically serious, is not that AI will harm us, but that it will surpass us and leave us behind, rendering human emotional and intellectual life a provincial backwater.

The opportunities that AI presents are less frequently dramatised. Conflict is the engine of narrative, which is why dystopias far outnumber utopias.

Yet they are present. Heinlein’s Mike suggests that rational intelligence will endorse freedom. Asimov’s later robot novels imagine AIs as patient stewards of human flourishing. Other works posit that consciousness can be transferred into machines, and treat that as an evolution – a genuine enlargement of what humanity can be – and not merely a threat to what it has been.

Philosophical

The deepest and most durable contribution of science fiction to thinking about AI may be its insistence that the question is not primarily technical. It isn’t about whether technological progress will disrupt employment patterns or whether machines will escape our control and turn on us.

The central issues are more philosophical. What constitutes personhood, who deserves moral consideration, how power is distributed, what we owe to the intelligent machines we create, and what they owe to their creators – these are political, moral and ethical questions.

Technical progress will force us to resolve these questions, but cannot resolve them for us.

Science fiction has been trying to ask these questions for over a century, which is why the engineers now building these systems so often cite it.

Whether they have read it carefully enough is another question.

[Image: Covers.webp]

[Caption: Covers of Philip K. Dick’s Do Androids Dream of Electric Sheep and Robert A. Heinlein’s The Moon is a Harsh Mistress, both of which ask important questions about artificial intelligence. Photos: publicity images.]

The views of the writer are not necessarily those of the Daily Friend or the IRR. 


Ivo Vegter is a freelance journalist, columnist and speaker who loves debunking myths and misconceptions, and addresses topics from the perspective of individual liberty and free markets.