
Meta’s AI boss doesn’t think AI superintelligence is coming soon and is skeptical about quantum computing

  • Facebook parent Meta hosted a media event in San Francisco this week highlighting the 10th anniversary of its Fundamental AI Research team.
  • Society is likely to get “cat”- or “dog”-level AI years before human-level AI, said Meta chief scientist Yann LeCun.
  • Unlike Google, Microsoft and other tech giants, Meta isn’t betting heavily on quantum computing.

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris on June 13, 2023.

Chesnot | Getty Images News | Getty Images

Meta’s chief scientist and deep learning pioneer Yann LeCun said he believes current AI systems are decades away from reaching any semblance of sentience, equipped with the common sense that could push their abilities beyond merely summarizing mountains of text in creative ways.

His point of view contrasts with that of Nvidia CEO Jensen Huang, who recently said that AI will be “quite competitive” with humans in less than five years and will outperform humans in a variety of mentally intensive tasks.

“I know Jensen,” LeCun said at a recent event marking the 10th anniversary of the Fundamental AI Research team at Facebook parent company Meta. LeCun said Nvidia’s CEO has much to gain from the AI craze. “There is an AI war, and he’s supplying the weapons.”

“[If] you think AGI is in, the more GPUs you have to buy,” LeCun said of technologists pursuing artificial general intelligence, the kind of AI that rivals human-level intelligence. As long as researchers at firms such as OpenAI continue their pursuit of AGI, they will need more of Nvidia’s computer chips.

According to LeCun, society is more likely to get “cat”-level or “dog”-level AI years before human-level AI. And the tech industry’s current focus on language models and text data will not be enough to create the kinds of advanced, humanlike AI systems that researchers have dreamed about for decades.

“Text is a very poor source of information,” LeCun said, explaining that it would likely take 20,000 years for a human to read the amount of text used to train modern language models. “Train a system on the equivalent of 20,000 years of reading material, and they still don’t understand that if A is the same as B, then B is the same as A.”

“There are a lot of really fundamental things in the world that they just don’t understand through this type of training,” LeCun said.

That’s why LeCun and other Meta AI executives have been heavily researching how the so-called transformer models used to build apps like ChatGPT could be tailored to work with a variety of data, including audio, image and video information. The more these AI systems can discover the likely billions of hidden correlations among these different kinds of data, the more fantastic feats they could potentially perform, the thinking goes.

Part of Meta’s research includes software that could help people play tennis better while wearing the company’s Project Aria augmented reality glasses, which blend digital graphics into the real world. Executives showed a demo in which a person playing tennis while wearing the AR glasses could see visual cues teaching them how to properly grip the racket and swing with perfect form. The kinds of AI models needed for this sort of digital tennis assistant require a blend of three-dimensional visual data in addition to text and audio, in case the assistant needs to speak.

These so-called multimodal AI systems represent the next frontier, but they won’t be cheap to develop. And as more companies like Meta and Google parent Alphabet research more advanced AI models, Nvidia could gain an even bigger lead, especially if no other competition emerges.

Nvidia has been the biggest beneficiary of generative AI, with its expensive graphics processors becoming the standard tool for training massive language models. Meta relied on 16,000 Nvidia A100 GPUs to train its Llama AI software.

CNBC asked LeCun whether the tech industry will need more hardware providers as Meta and other researchers continue developing these kinds of sophisticated AI models.

“It’s not required, but it would be nice,” LeCun said, adding that GPU technology is still the gold standard when it comes to AI.

Still, the computer chips of the future may not be called GPUs, he said.

“What you’re hopefully going to see are new chips that are not graphical processing units, but just neural deep learning accelerators,” LeCun said.

LeCun is also somewhat skeptical of quantum computing, into which tech giants such as Microsoft, IBM and Google have all poured resources. Many researchers outside Meta believe quantum computers could supercharge progress in data-intensive fields such as drug discovery, because the machines can perform multiple calculations using so-called quantum bits, as opposed to the conventional binary bits of modern computing.

But LeCun has his doubts.

“The number of problems you can solve with quantum computing can be solved much more efficiently with classical computers,” LeCun said.

“Quantum computing is a fascinating scientific topic,” said LeCun. What is less clear is the “practical relevance and the possibility of actually making quantum computers that are actually useful.”

Mike Schroepfer, a senior fellow at Meta and its former technology chief, agreed, saying that he evaluates quantum technology every few years and believes that useful quantum machines “may come at some point, but the time horizon is so long that it’s irrelevant to what we’re doing.”

“The reason we started an AI lab a decade ago was because it was very obvious that this technology would be commercially viable within the next few years,” Schroepfer said.

WATCH: Meta on the defensive amid reports of Instagram harms