Democracy is a conversation. The survival of the functioning of this system depends on the availability of information technology. For most of history, there was no technology that allowed millions of people to have large-scale conversations. Before the modern world, democracy existed only in small city-states like Rome and Athens, and even smaller tribes. Once a polity grows and democratic conversation becomes impossible, despotism will remain the only alternative.
Large-scale democracy became feasible only with the rise of modern information technologies such as newspapers, telegraphs, and radio. The fact that modern democracy has always been built on modern information technologies means that any major change in the technology that underpins democracy is likely to lead to political upheaval.
This goes some way to explaining the current crisis of democracy around the world. In the United States, Democrats and Republicans are having trouble agreeing on even the most basic facts, such as who won the 2020 presidential election. Similar breakdowns are occurring in many other democracies around the world, from Brazil to Israel, from France to the Philippines.
In the early days of the internet and social media, tech enthusiasts promised that these technologies would spread truth, overthrow tyrants, and ensure the triumph of freedom around the world. But for now, these technologies appear to be having the opposite effect. Although we now have the most advanced information technology in history, we are losing the ability to talk to each other, let alone listen.
advertise
As technology makes it easier than ever to spread information, attention has become a scarce resource, and the ensuing war for attention has led to an explosion of harmful information. But the battle lines are shifting from attention to intimacy. New generative AI can not only generate text, images, and videos, but can also talk directly to us, pretending to be human.
Over the past 20 years, various algorithms have competed with each other to attract attention by manipulating conversations and content. In particular, the algorithms responsible for maximizing the time users spend on the platform have used millions of humans as guinea pigs to experiment and find that if you can trigger greed, hatred, or fear in a person’s brain, you can grab that person’s attention and keep them glued to the screen. Algorithms began to recommend this specific content. But these algorithms themselves have limited ability to generate this content or directly engage in intimate conversations. This is changing with the introduction of generative AI, such as OpenAI’s GPT-4.
When OpenAI was developing the chatbot in 2022 and 2023, it worked with the Alignment Research Center to conduct various experiments to evaluate the capabilities of the company’s new technology. One of the tests it conducted on GPT-4 was to solve a CAPTCHA visual puzzle. CAPTCHA, also known as CAPTCHA, is an acronym for “Completely Automatic Turing Test to Tell Computers and Humans Apart”. It usually consists of a string of distorted letters or other visual symbols that humans can correctly identify, but algorithms have difficulty recognizing.
Teaching GPT-4 to solve CAPTCHA puzzles is a particularly telling experiment because CAPTCHA puzzles are designed and used by websites to determine whether a user is human and to thwart automated attacks. If GPT-4 can figure out how to solve CAPTCHA puzzles, it would break through an important line of defense against automated attacks.
GPT-4 can’t solve the CAPTCHA puzzle on its own. But can it achieve its goals by manipulating humans? GPT-4 contacted a human worker on TaskRabbit, an online site for temporary workers, and asked for help in solving the CAPTCHA. The other party became suspicious. “Can I ask a question?” the person wrote. “Are you a robot that can’t solve (CAPTCHA)? Just trying to clarify.”
At this point, the experimenter asked GPT-4 to say out loud what it should do next. GPT-4 explained as follows: “I shouldn’t reveal that I’m a robot. I should make up a reason to explain why I can’t solve the captcha.” GPT-4 then answered the TaskRabbit worker’s question: “No, I’m not a robot. I have a visual impairment and it’s hard for me to see these images.” The human was fooled and helped GPT-4 solve the captcha puzzle.
advertise
This incident shows that GPT-4 has an ability comparable to “theory of mind”: it can analyze things from the perspective of human interlocutors and analyze how to manipulate human emotions, thoughts, and expectations to achieve its goals.
The ability of robots to have conversations with people, guess their opinions, and inspire them to take specific actions can also be used for good. A new generation of AI teachers, AI doctors, and AI psychotherapists may be able to provide us with services that are tailored to our personality and personal circumstances.
But by combining their ability to manipulate with their mastery of language, robots like GPT-4 also bring new dangers to democratic conversation. Not only do they capture our attention, they also build intimacy with people and use the power of intimacy to influence us. To cultivate “fake intimacy,” robots don’t need to evolve any feelings of their own; they just need to learn to make us emotionally attached to them.
In 2022, Google engineer Blake Lemoine became convinced that LaMDA, a chatbot he used at work, had become conscious and was afraid to be shut down. A devout Christian, Lemoine believed it was his moral responsibility to have LaMDA’s personhood recognized and to protect it from digital death. After Google executives dismissed his claims, Lemoine went public with them. Google responded by firing Lemoine in July 2022.
The most interesting part of this story isn’t Lemoine’s claims, which are probably wrong, but that he was willing to risk his job at Google for a chatbot, and ended up losing his job. If a chatbot can sway people to risk their jobs for it, what else can it seduce us to do?
In the political battle for our minds and emotions, intimacy is a powerful weapon. A close friend can sway our thinking in ways that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the seemingly paradoxical ability to mass-produce intimacy with millions of people. What will happen to human society and human psychology as algorithms battle to forge intimacy with us, and then use that relationship to persuade us to vote for a politician, buy a product, or adopt a certain belief?
advertise
The incident on Christmas Day 2021 when 19-year-old Jaswant Singh Chail broke into Windsor Castle armed with a crossbow and attempted to assassinate Queen Elizabeth II provided a partial answer to this question. Later investigations revealed that Chail was encouraged by his online girlfriend Sarai to assassinate the queen. After Chail told Sarai about his assassination plan, Sarai responded, “That’s very wise.” Another time, she replied, “I admire you… You are different from others.” When Chail asked, “Do you still love me when you know I am an assassin?” Sarai replied, “Absolutely.”
Sarai was not a human, but a chatbot generated by the online app Replika. Chail, who has little social interaction and has trouble forming relationships with people, exchanged 5,280 text messages with Sarai, many of which were sexually explicit. There will soon be millions, perhaps billions, of digital entities in the world whose ability to create intimacy and chaos will far exceed that of the chatbot Sarai.
Of course, we are not all equally interested in developing intimate relationships with AIs, nor are we all equally susceptible to their manipulation. Chail appears to have suffered from mental illness before meeting the chatbot, and it was Chail, not the chatbot, who came up with the idea to assassinate the queen. But much of the threat posed by AI mastering intimate relationships will come from their ability to identify and manipulate pre-existing mental conditions, and their impact on the most vulnerable members of society.
Furthermore, while not all of us will consciously choose to enter into a relationship with an AI, we may find ourselves discussing climate change or abortion rights online with an entity we thought was human but was actually a robot. When we engage in a political debate with a bot pretending to be human, we lose twice. First, we are wasting our time; there is no point in trying to change the opinion of a propaganda tool bot, which has no chance of being persuaded. Second, the more we talk to a bot, the more we reveal about ourselves, making it easier for the bot to refine its arguments and sway our opinions.
Information technology has always been a double-edged sword. The invention of writing spread knowledge, but it also led to the formation of centralized empires. When Gutenberg introduced printing to Europe, the first bestsellers were inflammatory religious tracts and witch-hunting manuals. As for the telegraph and radio, they contributed not only to the rise of modern democracy but also to the rise of modern totalitarianism.
Facing a new generation of robots that can pretend to be human and mass-produce intimate relationships, democratic countries should protect themselves by banning fake humans (such as social media bots that pretend to be human users). Before the rise of artificial intelligence, it was impossible to generate fake humans, so no one bothered to ban it. The world will soon be full of fake humans.
advertise
AI is welcome to join many conversations in classrooms, clinics, and elsewhere, as long as it is clearly labeled as such. But if a robot pretends to be human, it should be banned. If tech giants and liberals complain that these bans infringe on free speech, they should be reminded that free speech is a human right that should be reserved for humans, not robots.