Should Google care about ChatGPT?


In late November, OpenAI, a non-profit artificial intelligence research organization, unveiled ChatGPT, software designed to chat with users. This type of tool, commonly referred to as a “chatbot”, has been around for a while, but ChatGPT has gained a reputation for its remarkable ability to generate textual content of all kinds and to provide surprisingly credible answers.

Since experimentation with ChatGPT began, many users have noticed that the chatbot’s expressive possibilities go far beyond generating more or less realistic answers: thanks to the variety of content the model has been trained on, it often succeeds in giving helpful answers to questions of very different kinds. A recent article published in The Atlantic warns of the consequences that mass use of similar services could have in the classroom, since ChatGPT already seems able to produce, in a few seconds, a short essay good enough, with some modifications by the user, to earn a passing grade.

– Also read: A “chatbot” like no other is here

The fact that it is enough to write a short command and press a button to get an answer on almost any topic has led some experts to hypothesize that artificial intelligences like ChatGPT could become a major threat to Google and the other search engines that began organizing and distributing content on the Internet in the 1990s. In the future, asking a chatbot for advice and information could become quicker and more convenient than typing a series of search terms into the Google bar and hoping to find the answer you are looking for among the many results.

The new possibilities offered by AI come at a delicate moment for the search engine, which has been criticized in recent years for the growing prominence of ads among search results, with a resulting degradation of the user experience. Beyond the increase in advertising, what significantly worsens the quality of Google searches is the homogenization of a large part of the content on the web, which is often produced according to the style rules of SEO (search engine optimization) in the hope of being selected and rewarded by the algorithm. As a result, Google searches are much less relevant and accurate than they used to be, and it is becoming increasingly difficult for users to find what they are looking for. For this reason, too, some believe that products like ChatGPT could become an important alternative for searching the web.

– Also read: Google search isn’t what it used to be

Just last week, an analyst at the US investment bank Morgan Stanley downplayed the threat these AIs appear to pose to Google. Not all industry experts are so optimistic, however: according to the New York Times, within Alphabet, the company that owns Google (as well as YouTube and others), some executives are worried enough to have declared a “code red”, a phrase that indicates an internal alert to prioritize a problem or potential threat. According to the newspaper, “some fear the company is approaching a moment feared by key groups in Silicon Valley: the arrival of a huge technological shift that can disrupt business.”

However, it would be wrong to argue that Google was caught off guard by the emergence of OpenAI or of other services widely used for image generation, such as Midjourney and Stable Diffusion. The company invested in the sector early on and has already built a chatbot that it believes can compete with ChatGPT. Google also has a privileged relationship with the GPT language model itself, having developed some of its underlying technology: specifically, the “T” in the GPT acronym, the “Transformer” neural network architecture unveiled in 2017.

Google’s own chatbot has been at the center of a controversy in recent months, one that arose precisely from the technology’s great communicative and expressive abilities. Last June, programmer Blake Lemoine, who worked for a division of Google dedicated to AI, told the press he had evidence that LaMDA, the conversational artificial intelligence the company was developing, was advanced enough to actually be “sentient”.

Lemoine’s statements attracted attention, but were soon contradicted by an analysis of the conversation between the programmer and the chatbot, which showed that LaMDA was simply following the user’s prompts and only appeared to be giving answers that stemmed from genuine awareness. Lemoine was fired from Google, and the case was interpreted as a testament to LaMDA’s great capabilities. Despite this, the company is still reluctant to open the chatbot to the public, as OpenAI did with ChatGPT, keeping it available to only a few researchers and scientists.

It is precisely this caution that separates Google from OpenAI and guarantees the latter, at least in the media, a clear lead. Google has also long argued that the future of search is “conversational”, meaning that the simple search we know today will gradually be replaced by a continuous dialogue between user and service. The transition to this new way of searching is held back by two main obstacles: the first has to do with Google’s size and relevance in the digital world; the second concerns the advertising sector, which is fundamental to the group’s finances.

– Also read: How artificial intelligence shapes the world

To date, the texts produced by ChatGPT, while very persuasive, often contain errors, and the speed with which the service can produce texts of all kinds, from poetry to anti-scientific hoaxes, already worries many commentators. According to them, the popularity of these AIs will have serious consequences for the spread of disinformation and fake news, so much so that some have compared them to the “uncontrolled release of a virus into the environment”. OpenAI co-founder and CEO Sam Altman himself has described the service as “incredibly limited, but good enough in certain respects to give a false impression of grandeur”. He had done something similar two years ago, a few months after GPT-3 was unveiled, when he called expectations about the technology exaggerated, pointing out its shortcomings and the many mistakes it made.

ChatGPT’s ultimate goal, at least for now, is to generate meaningful written answers, not to be an oracle capable of providing the right answer to every question. This is a limitation common to all of these products, including LaMDA, and it makes large-scale implementation problematic for Google, which cannot afford to offer an unreliable, however surprising, service.

But even if Google could solve such a complex problem, the advertising question would remain open. The search engine’s business model is to show ads near search results, and the company has not figured out how to do the same in a conversational model, where ads would break the back-and-forth between user and machine. According to journalist Alex Kantrowitz, “Google therefore has little incentive to go beyond traditional search, at least not in a paradigm-shifting way, until it figures out how the economics work. In the meantime, it will continue to focus on the Google Assistant”, its voice assistant.

Speaking of money, ChatGPT is today a very expensive experimental product. When OpenAI reached the one-million-user mark, Altman said the cost of maintaining and running the system “brings tears to the eyes.” The company has received investment from many corporations and investors, including Elon Musk and Netflix’s Reed Hastings, but relies heavily on Azure, Microsoft’s cloud computing and digital infrastructure division.

Having a service like ChatGPT used by such a large, if still limited, audience comes at a staggering cost, estimated at $3 million per day. For this reason, too, some observers believe that the economics will soon put an end to this season of free experimentation, forcing OpenAI and other companies in the sector to put a price on their services.