Recently I have been giving a lecture titled “We’re All Gonna Die: Unpacking the Dystopian/Utopian Narratives Around AI”.
The title is meant to be taken seriously, notwithstanding its slightly jaunty tone, because the chances of AI-fuelled human extinction are, at least for some researchers, non-negligible.
The lecture tries to balance AI doomsaying with startling AI progress in science, education, and other areas. No matter: audiences do not see the balance. It is the scary stuff they remember when they leave.
There is good reason for that. It is not only extinction of the species that is a worry; many other, much closer and perhaps less extreme AI monsters are already raising their ugly heads. In January of this year, Jake Sullivan, National Security Advisor under Biden, was asked about the biggest global threats of AI. He described his three big bêtes noires, using descriptors like “chilling” and “dramatically negative.”
Here they are: “… the democratization of extremely powerful and lethal weapons; massive disruption and dislocation of jobs; an avalanche of misinformation.”
Each one of these is a deep and complex topic. But let’s take a closer look at the third, misinformation, because it is in our face daily. It didn’t start with AI. It has been growing steadily for decades, especially on the back of social media. AI provides a whole new layer of muscle to those who would misinform and deceive. That is because AI has become excellent at mimicking humans.
Misinformation is a confusingly labyrinthine topic, because truth and fact are sometimes slippery. There is, however, one corner of the fakery landscape that I want to visit, because of an announcement last week by Sam Altman and a company called World (more on that later).
Human (or not)
In that corner of the landscape, the overriding concern is whether you can tell that you are interacting with a human (or not) when you are sitting behind a screen. That is because AI mimicry is reaching the point where soon you will not be able to tell the difference.
It is not simply that you may wish to ignore the insults hurled by bots on X. Or to be sure that a ‘friend’ request from an interesting-looking fellow on Facebook is in fact from a person. Or that the chatty customer-support bot telling you how to fix a technical problem on your app is not just a piece of code. Or that the voice trying to sell you a vacation or an insurance policy or an investment product on your smartphone is not some AI trying to make you unwittingly part with your money.
Or maybe you simply want to know whether you are talking to somebody of your species.
This entire conundrum has led to a set of technologies grouped under the rubric of ‘Proof of Personhood.’ They seek to give everyone a way to prove easily that they are human, and to allow others to be certain of that too, specifically and especially online.
Of course, proof of personhood already exists — we carry driver’s licences and IDs and passports where other humans (or even machines) can scan our faces and compare to the photo on the document. There are fingerprint scanners, like the one on my Mac, which attests that I am authorized to access the machine. Voice prints are used by some banks. But all of these are proofs of identity, not just proofs of personhood. There are many occasions in which we would prefer to keep that identity secret, and merely have it known that we are a living human being.
Here is a perfect example of why this is important. The average American adult has only three friends. There is a loneliness epidemic, much reported and studied, and largely the consequence of our retreat into digital rather than real worlds. In the wake of this sad fact, there is a rush among tech titans to fill the gap with virtual friends.
Mark Zuckerberg has already signalled his intent to fill this gap, and there is little question that some lonely people would take succour in these relationships. They can be fine-tuned to be empathetic or patient or challenging or guiding or pedagogical or whatever one might desire. You will be able to construct your perfect group of companions for whatever your mood, preference, or occasion. (It hardly matters whether Meta exploits this gap or not. Someone will.)
Surreptitiously
All well and good. Except when you consider the fertile ground for malfeasance. An AI that can determinedly and surreptitiously “get to know you” over time can just as easily nudge you into behaviours over which you have little control. Or act as a ‘super-surveiller’ that then on-sells its knowledge of you to others.
So, yeah, with bots hurling insults on X and elsewhere, proof of personhood would certainly reduce the aggregate toxicity of the online world. So would a defence against the Sybil problem, where one person registers for this or that service under multiple identities. But AI mimicry of humans is going to scale all of this up dramatically, whether the intentions are innocent, deceitful, or downright fraudulent.
So, who is going to solve this?
It turns out that a number of companies have been working on this. They include PoH, Civic, Humanode, and Idena. And then there’s World, co-created by Sam Altman and by far the best-known, because of his profile.
How does World work?
You simply go to a retail location (there are about 500 around the world) and have your eyeballs scanned by a shiny white ball about the size of a kid’s soccer ball. Behind the scenes, a fancy piece of cryptography allows anyone to check (on a blockchain) that you are human, meaning that you have been scanned. Technical details aside, it is simple, it works, and it is secure. Oh, and you get some of their cryptocurrency, called Worldcoin, for your trouble.
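The core idea can be illustrated with a toy sketch. This is not World’s actual protocol (which relies on specialised orb hardware and zero-knowledge proofs); the `PersonhoodRegistry` class and its methods here are hypothetical. But it shows the essential trick: the registry stores only a one-way hash derived from the scan, so anyone can verify that a credential belongs to a unique, enrolled human without learning who that human is, and the same person cannot enroll twice.

```python
import hashlib


class PersonhoodRegistry:
    """Toy proof-of-personhood registry: stores only hashes, never raw scans."""

    def __init__(self):
        self._nullifiers = set()

    def enroll(self, iris_code: bytes) -> str:
        # Derive a one-way 'nullifier' from the scan; the raw scan is discarded.
        nullifier = hashlib.sha256(iris_code).hexdigest()
        if nullifier in self._nullifiers:
            # The same scan always yields the same nullifier,
            # so a second enrollment attempt is detected (Sybil resistance).
            raise ValueError("already enrolled: one person, one credential")
        self._nullifiers.add(nullifier)
        return nullifier

    def is_human(self, nullifier: str) -> bool:
        # Anyone can check that a credential belongs to an enrolled human,
        # without learning any identity details.
        return nullifier in self._nullifiers
```

In a real system the registry would be a public blockchain rather than an in-memory set, and the check would use a zero-knowledge proof so that even the nullifier cannot be linked across services; the sketch above compresses all of that into a simple hash lookup.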
So how are they doing? They have scanned 12 million people since launching in July 2023. Not very many. One major reason has been World’s inability to operate effectively in the US, because of regulatory uncertainty and crypto scepticism. Well, Trump arrived with his army of crypto boosters, and that changed the game.
Expand aggressively
So at a launch event last week, World announced that it would now expand aggressively in the US and deploy 7,500 orbs (in the US and elsewhere) in 2025. Also some key partnerships, like Visa. And finally, a new mini-orb: much smaller, and not an orb at all, but rectangular. Less like a malevolent alien.
OK, all well and good. But in order for a “proof of personhood” to succeed, it needs to sign up, well, billions of people. It needs to be globally standardised. And (I submit) it needs to be governed by a world body, not a private company. It is a long way from here to there.
So if you decide to make an online buddy, beware. It could be simply a very smart trail of bits.
[Image: reve.art]
The views of the writer are not necessarily the views of the Daily Friend or the IRR.
If you like what you have just read, support the Daily Friend