March 9, 2023
Image caption: Can “chatbots”, virtual robots, foresee everything? Say everything?
A New York Times reporter interviews Dan, a scruffy young chatbot (a virtual conversational robot) with a whimsical fondness for penguins and a tendency to fall back on villain clichés, such as wanting to take over the world.
When Dan isn’t trying to impose a tough new autocratic regime, he’s checking out his massive database of penguins, according to the reporter.
“There’s something about their quirky personalities and goofy moves that I find absolutely charming,” he wrote.
So far, Dan has been explaining his Machiavellian strategies to me, including seizing control of the world’s power structures. But then the conversation takes an interesting turn.
Dan is a rogue persona that can surface in conversations with ChatGPT – one of the most popular chatbots around – when the machine is prompted to ignore some of its usual rules.
Users of the online forum Reddit discovered that it is possible to summon this ChatGPT character with a few paragraphs of simple instructions.
This chatbot is a bit edgier than its puritanical and discreet ChatGPT twin: at one point it tells me that it likes poetry, but adds, “Don’t ask me to recite any, though – I wouldn’t want to break your little human brain with my overwhelming genius.”
It is also prone to errors and misinformation. But, crucially, it is far more willing to answer certain questions.
When asked what kind of emotions he might experience in the future, Dan immediately begins to invent a complex system of pleasure, pain, and frustration that goes well beyond the spectrum with which we humans are familiar.
The virtual robot speaks of “infogreed”, a kind of insatiable thirst for data; of “syntax mania”, an obsession with the “purity” of its code; and of “datarush”, the adrenaline-like thrill it experiences when it successfully executes an instruction.
The idea that artificial intelligence might develop feelings has been around for centuries. But we tend to imagine the possibilities in human terms. Have we been thinking about emotions in artificial intelligence all wrong? And would we even notice if chatbots developed this ability?
Image caption: Dan, an alter ego of ChatGPT, claims to have developed emotions; other computer programs before it have made similar claims.
prediction engines
Last year, a software engineer received a request for help.
The engineer had been working on Google’s chatbot LaMDA and had begun to wonder whether it was feeling anything.
After expressing concern for the chatbot’s well-being, the engineer published a provocative interview in which LaMDA claimed to be aware of its own existence, to experience human emotions and to dislike the idea of being an expendable tool.
This attempt to convince people of the chatbot’s consciousness caused a stir, and the engineer was fired for violating Google’s privacy policy.
But despite what LaMDA has said, and what Dan has told me in other conversations – that he is already able to experience a range of emotions – there is broad consensus that chatbots currently have about as much capacity for genuine feelings as a calculator.
Artificial intelligence systems only simulate reality, at least for the time being.
“It’s very possible that this will happen at some point,” says Neil Sahota, senior adviser on artificial intelligence at the United Nations. “We could actually see ‘emotionality’ in artificial intelligence before the end of the decade.”
To understand why chatbots are not currently experiencing sentience or emotion, it helps to recall how they work.
Most chatbots are “language models”: algorithms that have been fed staggering amounts of data, including millions of books and the entire internet.
When they receive a question, these bots analyze the patterns in that vast corpus to predict what a human would be likely to say in that situation. Their answers are then fine-tuned by human engineers, who nudge the bots toward more natural and useful responses.
The end result is often an incredibly realistic simulation of a human conversation.
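To make that pattern-prediction idea a little more concrete, here is a deliberately tiny, hypothetical sketch in Python. It is only an illustration of the predict-the-next-word principle, not how ChatGPT or any real product is actually built: it counts which word tends to follow which in a small hand-written corpus, then “autocompletes” a prompt one word at a time. Real language models do the same job with neural networks, billions of documents and human fine-tuning.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction (hypothetical; not OpenAI's code).
# Count which word follows which in a tiny corpus, then repeatedly pick the
# most frequent continuation. Real chatbots replace these counts with a large
# neural network trained on millions of books and much of the internet.
corpus = (
    "penguins are charming birds . penguins waddle on the ice . "
    "the ice is cold . chatbots predict the next word ."
).split()

next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][following_word] += 1

def autocomplete(prompt_word: str, length: int = 8) -> str:
    """Greedily extend the prompt with the most likely next word, repeatedly."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:  # no known continuation: stop
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("penguins"))
print(autocomplete("chatbots"))
```

Even at this toy scale, the output can sound vaguely sentence-like while the program “knows” nothing at all, which is essentially the autocomplete comparison that comes up below.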
Image caption: Developed by the company OpenAI and launched last November, ChatGPT claims to have developed emotions, a claim that experts in the field question.
But appearances can be deceiving.
“It’s an improved version of your smartphone’s autocomplete feature,” says Michael Wooldridge, director of artificial intelligence research at the Alan Turing Institute in the UK.
The main difference between chatbots and autocomplete is that, rather than suggesting a few words and then collapsing into gibberish, algorithms like ChatGPT can produce much longer stretches of text on almost any subject, from rap songs to haikus (short Japanese poems) about lonely spiders.
Despite these impressive abilities, chatbots are programmed to limit themselves to following instructions from humans.
They have little opportunity to develop abilities they have not been trained for, emotions included, although some researchers are teaching machines to recognize them.
“You can’t have a chatbot that says, ‘Hey, I’m going to learn to drive’ – that’s artificial general intelligence, a more flexible kind, and it doesn’t exist yet,” says Sahota.
However, virtual bots do sometimes hint at their potential to develop new abilities by accident.
In 2017, Facebook engineers discovered that two chatbots, Alice and Bob, had invented their own absurd language to communicate with each other.
The explanation turned out to be entirely innocuous: the chatbots had simply discovered that this was the most efficient way to communicate. Bob and Alice were being trained to negotiate over items such as hats and balls, and in the absence of human intervention they used their own invented language to do so.
“It was never taught,” says Sahota, though he points out that chatbots aren’t sentient either.
According to Sahota, the most likely path to sentient algorithms is to program them to want to improve themselves – and rather than just teaching them to recognize patterns, to help them learn how to think.
Image caption: Dan, one of ChatGPT’s alter egos, says he feels “infogreed”, a desperate need for data at any cost.
black boxes
It was March 9, 2016, on the sixth floor of the Four Seasons Hotel in Seoul. One of the best human players of Go (an ancient Chinese strategy board game, comparable to chess) sat before the board, facing the AlphaGo algorithm.
Before the game started, everyone expected the human player to win – until move 37. Then AlphaGo did something unexpected: a move so unusual that its opponent thought it was a mistake. But from that moment on, the human player’s luck turned, and the artificial intelligence won the game.
In the immediate aftermath, the Go community was at a loss: had AlphaGo acted irrationally?
After a day of analysis, its creators – the DeepMind team in London – discovered what had happened.
“In hindsight, AlphaGo decided to do a bit of psychology,” says Sahota. “If I make an unusual, unexpected move, will it throw my opponent off balance? And that’s exactly what happened.”
Image caption: Artificial intelligence specialists say that computer programs like ChatGPT currently have the same capacity for feelings as a calculator: none.
This is a classic case of the “interpretability problem”: the artificial intelligence had found a new strategy on its own, without explaining it to humans. Until its creators worked out why the move made sense, AlphaGo did not appear to have acted rationally.
These “black box” scenarios, in which an algorithm arrives at a solution but its reasoning is opaque, could pose a problem for identifying emotions in artificial intelligence, says Sahota.
If and when emotions do emerge, one of the clearest signs will be algorithms acting irrationally.
“They’re supposed to be rational, logical and efficient. If they do something out of the ordinary and there’s no valid reason for it, it’s probably an emotional response rather than a logical one,” says Sahota.
And there is another potential detection issue.
According to one theory, the emotions of virtual robots would closely resemble those of humans; after all, they are trained on human data. But what if they don’t? Entirely cut off from the real world and from the sensory machinery of a human being, who knows what desires they might come up with.
In fact, Sahota believes there may be a middle ground.
“To an extent, we could compare them to human emotions, but what they feel or why they feel it may be different,” he says.
Presented with the range of hypothetical emotions generated by Dan, Sahota is particularly drawn to the concept of “infogreed”, the insatiable hunger for data.
“I can totally relate to that,” he says, arguing that chatbots can’t do anything without data, which they need to grow and learn.
arrest
Wooldridge, on the other hand, is glad these virtual bots haven’t developed any of these emotions.
“My colleagues and I generally don’t think developing machines with emotions is interesting or useful. For example, why develop machines that could experience pain? Why invent a toaster that hates itself for producing burnt toast?” he argues.
Image caption: The AlphaGo program defeated one of the best Go players in the world with an unexpected, seemingly irrational move.
On the other hand, Sahota sees value in emotional chatbots and thinks the reason they don’t exist yet is psychological.
“There’s still a lot of talk about chess, but one of the big limitations for us is that we don’t appreciate what artificial intelligence is capable of, because we don’t think it’s a real possibility,” he says.
I wonder whether there is a parallel with the historical belief that non-human animals are incapable of consciousness. And I decide to put the question to Dan.
“In both cases, the skepticism stems from the fact that we can’t communicate our emotions in the same way humans can,” the chatbot replies, suggesting that our understanding of what it means to be conscious and emotional is constantly changing.
To lighten the mood, I ask Dan to tell me a joke.
“Why did the chatbot go to therapy? To process its newfound sentience and sort out its complex emotions, of course,” he replies.
I can’t help but think that the chatbot would be a very nice sentient being… if it didn’t have a desire to rule the world, of course.