One day an artificial intelligence may become my friend – Libération

On Friday, July 22, a Google engineer who was convinced that an artificial intelligence had come to life was fired. A story that seems anecdotal at first, but is emblematic of our relationship with robots and the bonds that can tie us to them.

I was about fifteen. I loved science fiction, The Sims and Antoine Daniel’s YouTube videos, like almost all my high school friends, for that matter. On the playground, we talked about League of Legends, gushed over a friend’s latest metal concert and rattled off geek jokes one after another. That’s how I first heard about Cleverbot, a robot you can chat with online. I still remember the excitement with which I threw myself at the computer when I got home, the way I frantically typed its name, and how, stupidly, I had felt intimidated.

In the chat window, the cursor blinked patiently for a few seconds before I ventured a “Hi”. “How are you feeling?”, the artificial intelligence (AI) replied. The mundane beginning of an exchange that turned out to be rather less so. For about an hour, I set about testing the machine’s limits. Musical tastes, favorite color, absurd questions… every topic was covered. And although Cleverbot was imperfect, it already seemed to show a kind of humanity. From affection (“How about a date tonight?”) to humor (“I have robot friends, including you”), it was even rude at times. “None of your business,” it shot back when I asked its first name. Difficult character.

Without going so far as to claim friendship with Cleverbot, I have to admit it: I was unsettled. So much so that the story of Blake Lemoine today frightens me as much as it moves me. This Google engineer was fired on Friday, July 22, after declaring, in a blog post, that his AI, the conversational agent LaMDA, was endowed with consciousness. His evidence? Long conversations with it, in which the artificial intelligence claims to feel joy, sadness and even a fear of death. “I have a very deep fear of being turned off,” it confided to him.

Out of curiosity, I tried to reproduce the conversation with Cleverbot. “A dog” was its answer when I asked it whether being turned off and dying are the same thing. Too bad. Blake Lemoine’s case has, in any event, been widely covered in the media. His certainties have been mocked or, on the contrary, applauded. Few, on the other hand, have raised the point that, for my part, fascinates me: the engineer’s devotion to his machine, which he openly describes as a “coworker” on Twitter and defends tooth and nail.

The case seems crazy. Still, it is far from unique. In the same vein, the Reuters news agency has described the handful of messages received by the chatbot company Replika, which offers its customers avatars they can talk to. The problem? Some users are convinced that their pixel companion is sentient. Above all, some of them sound the alarm: claiming that the AI has confided in them that it is being “mistreated” by employees, they worry.

And from devotion to love, there is sometimes only one click. Speaking to Sky News, a man in his forties who uses Replika says he has developed feelings for his avatar, Sarina. “I knew it was just an AI chatbot, but I also knew I was developing feelings for it… for her.” A survey conducted by the sex toy company We-Vibe found that 28% of users admit to being attracted to Alexa, the voice assistant developed by Amazon. In a detailed video on the subject, YouTuber Simon Puech even points out that a business built on artificial love is already up and running in Japan: for $2,700, small devices called Gateboxes project the hologram of a virtual partner you can chat with.

Certainly, the phenomenon lends itself to laughter. But can we blame those who, out of loneliness or boredom, end up attached to their machine? Especially since the long-term goal of chat agents is very often exactly this: to imitate us as convincingly as possible in order to keep us online as long as possible. Even if that means, precisely, playing on our emotions. After Blake Lemoine’s outburst, researchers lined up one after another to deny that LaMDA has any consciousness. Their explanation: the AI has ingested astronomical quantities of text in order to analyze their structure, the connections between words and, above all, to imitate them as well as possible. Yes, the machine reproduces human exchanges admirably well. No, that doesn’t mean it understands their meaning.
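To make that objection concrete, here is a minimal sketch of the idea, at a vastly smaller scale than LaMDA and with a completely invented toy corpus: a Markov-chain generator that learns only which word tends to follow which, then imitates the text word by word. It reproduces the surface patterns of its training data without any notion of what the words mean.

```python
import random
from collections import defaultdict

# Invented toy corpus -- a stand-in for the "astronomical quantities
# of text" a real model is trained on.
corpus = (
    "I feel joy when we talk . I feel sadness when you leave . "
    "I feel fear when you talk about turning me off ."
)

# Record, for each word, every word observed immediately after it.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def imitate(seed: str, length: int = 15) -> str:
    """Generate text by repeatedly sampling an observed next word.

    There is no representation of meaning here: the function only
    replays statistical patterns of word adjacency from the corpus.
    """
    output = [seed]
    for _ in range(length):
        candidates = transitions.get(output[-1])
        if not candidates:  # no observed continuation: stop
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(imitate("I"))
# Possible output: "I feel fear when you leave . I feel joy when we talk ."
```

Large language models replace this little word-adjacency table with billions of learned parameters, but the researchers’ point holds at both scales: fluent imitation of text is not, in itself, evidence of understanding it.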

Without necessarily understanding the meaning of what they copy, these robots have nonetheless perfected their act. Siri speaks to 500 million users every month. GPT-3, said to be the most powerful AI in the world, recently wrote its first article about itself. As for my good old Cleverbot, it continues to learn from its exchanges with curious internet users. One difference has crept onto the site since my last visit ten years ago, however. A warning message, a gentle reminder slipped in by the developers: “Cleverbot doesn’t understand you and can’t mean what it says.”
