What is an artificial intelligence “hallucination”?

Users should not blindly trust the answers of the AI. Photo: Getty Images

Last month, a few days before the coronation of King Charles III on May 6, a request to ChatGPT for a profile of the monarch yielded a remarkable result.

The artificial intelligence (AI) chatbot from the company OpenAI stated in one paragraph:

“The coronation ceremony took place on May 19, 2023 at Westminster Abbey in London. The abbey has been the scene of the coronations of British monarchs since the 11th century and is considered one of the holiest sites and landmarks in the country.”

That paragraph contained a so-called “hallucination”.

That is the term used in the AI field for information generated by a system that, while coherently written, is incorrect, biased, or outright wrong.

Charles III’s coronation was scheduled for May 6, but for some reason ChatGPT concluded that it would take place on May 19.

The system itself warns that it can only generate responses based on information available on the web up to September 2021, so it is possible that it will run into this kind of problem when answering a request.

“GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts,” OpenAI explained when it released the GPT-4 version of the chatbot in March.

However, the phenomenon is not exclusive to OpenAI’s system. It also occurs in Google’s chatbot Bard and in other similar AI systems that have recently been opened to the public.

A few days ago, New York Times journalists put ChatGPT to the test by asking it when the newspaper had first published an article about artificial intelligence. The chatbot offered multiple responses, some with false dates, that is, “hallucinations”.

“Chatbots are driven by a technology called a large language model (LLM), which learns its skills by analyzing huge amounts of digital text from the internet,” the authors of the Times article wrote.

“By identifying patterns in this data, an LLM learns one thing above all: to guess the next word in a sequence of words. It acts like a powerful version of an autocomplete tool,” they continued.
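The “autocomplete” analogy can be made concrete with a toy sketch. The Python snippet below is purely illustrative and is nothing like the neural networks behind ChatGPT: it simply counts which word tends to follow each word in a tiny sample text and then “predicts” the most frequent follower, which is the same basic goal of guessing the next word in a sequence.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows each
# word in a tiny corpus, then "autocomplete" with the most frequent follower.
# Real large language models use neural networks trained on vast amounts of
# text rather than simple counts, but the training objective is the same idea.
corpus = (
    "the coronation took place at westminster abbey "
    "the abbey has hosted coronations of british monarchs "
    "the ceremony took place in london"
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word`, or '?' if unseen."""
    if word not in followers:
        return "?"
    return followers[word].most_common(1)[0][0]

print(predict_next("took"))    # -> 'place' (always followed by 'place' here)
print(predict_next("the"))     # -> whichever follower of 'the' is most frequent
print(predict_next("london"))  # -> '?' (never seen preceding another word)
```

A model like this will confidently “complete” a sentence even when the counts behind it come from wrong or misleading text, which is, in miniature, why chatbots can repeat untruths or invent details.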

But because the internet is “full of false information, the technology learns to repeat the same untruths,” they warned. “And sometimes chatbots simply make things up.”

Take it with a grain of salt

Artificial intelligence that learns how humans speak has caught the attention of Wall Street investors, politicians and regulators.

Generative AI and reinforcement learning algorithms can process huge amounts of information from the web in a matter of seconds and generate new text that is almost always coherent and immaculately written, but it should be used with caution, experts warn.

Both Google and OpenAI have asked users to keep this in mind.

In the case of OpenAI, which has a partnership with Microsoft and its search engine Bing, the company points out that GPT-4 has a tendency to “hallucinate”, that is, to “produce content that is nonsensical or untruthful in relation to certain sources”.

“This tendency can be particularly harmful as models become more convincing and believable, leading users to trust them too much,” the company explained in a document accompanying the launch of the new version of the chatbot.

Therefore, users should not blindly trust the answers provided, especially in areas that affect important aspects of their lives, such as medical or legal advice.

OpenAI notes that it has been working on “various methods” to prevent “hallucinations” from appearing in its responses to users, including review by real people to catch misrepresentations, racial or gender bias, and the spread of misinformation or fake news.

(taken from BBC)

See also:

The European Parliament passes a law regulating artificial intelligence