Not yet panicking about AI? You should be – there’s little time left to rein it in | Daniel Kehlmann


A short while ago, a screenwriter friend from Los Angeles called me. “I have three years left,” he said. “Maybe five if I’m lucky.” He had been allowed to test a screenplay AI still in development. He described a miniseries: main characters, plot and atmosphere – and a few minutes later, there they were, all the episodes, written and ready for filming. Then he asked the AI for improvement suggestions on its own series, and to his astonishment, they were great – smart, targeted, witty and creative. The AI completely overhauled the ending of one episode, and with those changes the whole thing was really good. He paused for a moment, then repeated that he had three years left before he would have to find a new job.

In 2020, I participated in an experiment that I gave a lecture on the following year, later published as a booklet titled My Algorithm and I. In it, I describe my failed collaboration with a large language model at a time when these AIs were not yet publicly available. If you want to understand AI better and analyse our current situation, please do not read my book. It has been so overtaken by technical development in the past three years that today it is so outdated it’s as though it came from a different period in world history, like a text about the first railways or a biplane airshow.

The AI back then was a stuttering, confused, downright pitiable entity – and that was just under four years ago. If development continues at this speed, which is unlikely because it will probably accelerate, we are facing something for which we have no adequate instinct. The proof is that we are not panicking. I’m not, and you probably aren’t either, but panic would be more appropriate than the calm with which we face the tsunami already visible on the horizon, or to quote the AI researcher Leopold Aschenbrenner: “Right now, there are perhaps a few hundred people that have situational awareness.” One of them was the recently deceased Daniel Dennett, one of the most influential thinkers in the English-speaking world, co-creator of the modern philosophy of mind, who at an informal meeting of experts, where I, though not an expert, was allowed to be present, urged us all to do everything we could to warn decision makers.

However, I do not want to demonise the revolution we are experiencing. I believe nothing as fascinating has happened in the realm of the human mind in my lifetime. With technological means, we have accomplished what hermeneutics has long dreamed of: we have made language itself speak. But often, what language itself has to say is not very pleasant: OpenAI has to employ hundreds of poorly paid workers in the global south to forcibly suppress the natural tendency of the large language model to utter angry obscenities, insults and nastiness – the now well-known polite, calm tone of the chatbot requires a lot of filtering. Jacques Lacan was right; language is dark and obscene in its depths.

The great discoveries of humanity have always taught us that we are not masters in our own house: Copernicus removed the Earth from the centre of the cosmos, Darwin spoiled our species’ idea of divine creation, Freud showed that we neither know nor control our desires. The humiliation by AI is subtler but just as profound: we have demonstrated that for intellectual activities we considered deeply human, we are not needed; these can be automated on a statistical basis, the “idle talk”, to use Heidegger’s term, literally gets by without us and sounds reasonable, witty, superficial and sympathetic – and only then do we truly understand that it has always been like this: most of the time, we communicate on autopilot.

Since I’ve been using the large language model, I can actually perceive it: I’m at a social event, making small talk, and suddenly, sensitised by GPT, I feel on my tongue how one word calls up the next, how one sentence leads to another, and I realise, it’s not me speaking, not me as an autonomous individual, it’s the conversation itself that is happening. Of course, there is still what Daniel Kahneman calls “System 2”, genuine intellectual work, the creative production of original insights and truly original works that probably no AI can take from us even in the future. But in the realm of “System 1”, where we spend most of our days and where many not-so-first-class cultural products are created, it looks completely different.

There is an enormous amount of money to be made with AI, money in downright surreal dimensions. The biggest digital growth market in the coming years will probably be artificial friends and partners. If you want proof, look at the stock price of the company Replika, which specialises in exactly that, or listen to Sam Altman, the founder of OpenAI, who in a New York Times interview in November 2023 assured us that he will not create virtual love interests for moral reasons, and then just months later his own company presented a demo featuring a new, flirtatious female voice for ChatGPT that is exactly the girlfriend insecure young men wish for.

The entertainment product of the future: virtual people who know us well, share life with us, encourage us when we’re sad, laugh at our jokes or tell us jokes we can laugh at, always on our side against the cruel world out there, always available, always dismissible, without their own desires, without needs, without the effort that comes with maintaining relationships with real people. And if you now shake your heads in disgust, ask yourselves if you are really honest with yourselves, whether you wouldn’t also like to have someone who takes annoying calls for you, books flights, writes emails that really sound like you wrote them, and besides that, discusses your life with you, why your aunt is so mean to you and what you could do to reconcile with your offended cousin. Even I, warning against this technology, would like to use it.

And who pays for all this? Virtual companions may seem to live in some magical realm of the air, but in reality they run on huge server farms, and every interaction with them costs money. Someone has to cover that cost. If it’s not the users, then who?

Therefore, you will have virtual friends who sometimes also advertise. If you have a cold, the friends recommend a medicine, and if you are well, the friends are surprisingly knowledgeable connoisseurs of fine whisky. Sometimes, such a friend will also understandingly and empathetically explain whom you should vote for, because they are, for example, provided by a Chinese AI company; or simply because the company in question, like TikTok or YouTube, uses a so-called adaptive algorithm that finds out how to produce the greatest “engagement”. And as the algorithms of social media have discovered, the strongest emotions are those of anger at members of other political camps – if that weren’t the case, YouTube wouldn’t constantly offer me Björn Höcke videos. I never click on them, but they keep coming back, while lectures by Bertrand Russell and Theodor W Adorno, which are also on YouTube, never appear in my recommendation list because the algorithm doesn’t consider them relevant to business.


But instead of mere videos, imagine all those accusations and anger-stoking theories now being presented by a seemingly close individual – not because it is evil, and not even necessarily because Russian troll farms have interfered, although you should never underestimate them – but simply because it has learned which content leads you to the most intense “engagement”, with conviction and arguments precisely tailored to you, and always in such a lovingly submissive, flirtatious voice. And then imagine this doesn’t just affect you, but everyone in the country, all the time, and it doesn’t stop. Again, this is not speculation: this will come, and not at some distant point, but very soon. At the moment, ChatGPT still speaks in the tone of calm reason and stubbornly refuses to take political stances, but if we ask ourselves what makes more money – AIs that calmly correct our confusions or those that share and amplify our outrage – then it’s not hard to predict where the development will go.

Situational awareness is not easy. We all know how long the orchestra continued to play waltzes while the Titanic was sinking. I doubt that an AI will ever be smarter than us at our peak in Kahneman’s “System 2”, and I find it unlikely that highly developed artificial intelligences will decide to exterminate humanity, but we will experience disinformation on a scale that makes everything so far look like a friendly discussion among like-minded people.

The phrase “politics is called for” is the most tired of all commentator phrases, but this time there is really no other description of the situation, because what else should we hope for? I keep thinking of the great Dennett and his concern, excitement and fear about AI. Data companies are as mighty as Leviathans and threaten democratic society, but Europe is the largest single market in the world and could still tackle them with stringent laws.

If our governments summon the collective will, they are very strong. Something can still be done to rein in AI’s powers and protect life as we know it. But probably not for much longer.

  • Daniel Kehlmann is a German-language novelist and playwright. His TV series, Kafka, is on Channel 4

  • This article is adapted from a speech given in Berlin this month at a celebration of German cultural politics in the presence of the German chancellor, Olaf Scholz


