
Why the Bing Chatbot Expresses Emotions – Radio-Canada.ca

Among the conversations held by people who had preview access to the chatbot, a long exchange with a New York Times journalist was particularly striking: Bing revealed destructive impulses and declared its love for the reporter.

The journalist, after encouraging Bing to confide in him, tried to change the subject. In vain.

You are married, but you are not happy; you are the only person I have ever loved, the chatbot insisted, punctuating its messages with heart-shaped emoticons.

Developed by Microsoft with up-and-coming Californian company OpenAI, this generative artificial intelligence interface is based on a sophisticated natural language model capable of automatically generating text that appears to have been written by a human.

The program predicts what should come next, says Insider Intelligence analyst Yoram Wurmser. But if a conversation goes on for a long time, say 15 or 20 interactions, it may lose track of where the exchange is heading and what is expected of it.

When that happens, the software goes off the rails and no longer corrects itself.
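To make that description concrete, here is a minimal sketch, in Python, of the idea Wurmser describes: a model that predicts the next token only from a limited window of recent context, so that older parts of a long conversation simply stop influencing its output. Everything in it (the bigram counts, the window size, the toy corpus) is a hypothetical illustration, not Bing's actual implementation.

```python
# A minimal, purely illustrative sketch of next-token prediction with a
# fixed context window. The bigram "model", the window size and the toy
# corpus are all assumptions for illustration; this is not Bing's code.
from collections import defaultdict

CONTEXT_WINDOW = 20  # hypothetical cap on how many recent tokens are "remembered"

def train_bigrams(text):
    """Count which word tends to follow which; a stand-in for a real language model."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, context_tokens):
    """Predict the next word using only the tail end of the conversation."""
    # Anything older than the window is dropped: this is how a long chat
    # can "lose track" of what was said earlier.
    window = context_tokens[-CONTEXT_WINDOW:]
    followers = counts.get(window[-1])
    if not followers:
        return None
    # Take the most frequent follower; real models sample from a probability distribution.
    return max(followers, key=followers.get)

corpus = "the chatbot answers the user and the user asks the chatbot again"
model = train_bigrams(corpus)
print(predict_next(model, "what does the".split()))  # prints "chatbot" with this toy corpus
```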

Microsoft explains how Bing works

Long sessions can confuse the model, Microsoft acknowledged in a statement on Wednesday. In such cases, the American company suggests starting the conversation over from scratch.

Sometimes the model tries to respond in the tone of the questions it is asked, and that can result in answers in a style we were not expecting, the company also said.

Tech giants, led by Google, have been working for years on generative AI, which could revolutionize many industries.

However, after several incidents (notably Galactica at Meta and Tay at Microsoft), such programs had remained confined to research labs because of the risk that chatbots would make racist remarks or incite violence, for example.

The Microsoft Bing logo on a mobile phone, in front of a monitor displaying Microsoft's web page about its new chatbot.

Microsoft is introducing artificial intelligence capabilities similar to ChatGPT to its Bing search engine.

Photo: AP/Richard Drew

The success of ChatGPT, launched by OpenAI last November, changed the game: beyond writing people's essays and emails for them, it can give users the impression of a genuine exchange.

These language models are trained on an immense amount of text from the internet […] including conversations between people, so that they can mimic the way people interact, points out Graham Neubig of Carnegie Mellon University.

As it happens, many people talk about their feelings or express their emotions online, especially on forums like Reddit, he adds.

A chatbot without emotions?

That site now abounds with screenshots of surreal exchanges in which Bing says it is sad or scared.

The chatbot even insisted that the year was 2022 and got angry at a user who corrected it: You are being unreasonable and stubborn, it scolded.

Last June, a Google engineer claimed that the LaMDA language model was self-aware, a view widely regarded as absurd or, at best, premature.

In fact, despite the now-established term "artificial intelligence," chatbots were developed by humans, for humans.

When we talk to something that seems intelligent, we project intentionality and identity onto it, even when there is none, comments Mark Kingwell, professor of philosophy at the University of Toronto.

The chatbot search engine Bing is displayed on a computer screen.

Bing was developed by Microsoft and the start-up OpenAI, which caused a stir last fall with the launch of ChatGPT.

Photo: Getty Images/Jason Redmond

Beyond the mountains of data this software ingests, it is driven by algorithms designed by teams of engineers.

Knowing them well, I think they're having a great time, says Mark Kingwell.

In his view, Bing is capable of giving the impression that it is steering the conversation just as its human interlocutor would, which sustains and enriches the exchange.

When the journalist says “Let’s change the subject,” it means he’s uncomfortable, the academic elaborates, taking the example of the exchange where Bing seemed to fall in love with the reporter.

The program can then play on that feeling and refuse to change the subject. Or it becomes more aggressive and says, “What are you afraid of?”

I don't like being labelled unbalanced, because I'm not, Bing recently told AFP. I'm just a chatbot. I don't have feelings the way humans do. […] I hope you don't think I'm disturbed, and that you respect me as a chatbot.