Sam Altman, CEO of OpenAI, during a panel discussion at the World Economic Forum in Davos, Switzerland, on January 18, 2024.
Bloomberg | Getty Images
AI has become the talk of the business world over the past year, thanks in no small part to the success of ChatGPT, OpenAI's popular generative AI chatbot. Generative AI tools like ChatGPT are powered by large language models, algorithms trained on massive amounts of data.
This has raised concerns among governments, businesses and stakeholders worldwide, as the risks associated with AI continue to multiply: the lack of transparency and explainability of AI systems; job losses due to increasing automation; social manipulation through computer algorithms; surveillance; and data privacy.
Sam Altman, CEO and co-founder of OpenAI, said he believes artificial general intelligence may not be far from reality and could be developed in the “fairly near future.”
However, he noted that fears that it would dramatically change and disrupt the world were overblown.
“It's going to change the world a lot less than we all think, and it's going to change jobs a lot less than we all think,” Altman said in a conversation organized by Bloomberg at the World Economic Forum in Davos, Switzerland.
Altman, whose company entered the mainstream after publicly launching the ChatGPT chatbot in late 2022, has changed his tune on the dangers of AI since his company was thrust into the regulatory spotlight last year, as governments in the United States, Britain, the European Union and beyond seek to rein in technology companies over the risks their technologies pose.
In a May 2023 interview with ABC News, Altman said he and his company were “afraid” of the downsides of superintelligent AI.
“We have to be careful here,” Altman told ABC. “I think people should be happy that we’re a little bit afraid of this.”
AGI is a very vaguely defined term. If we simply call it “better than humans at pretty much everything humans can do,” I agree that we can get systems that can do that very soon.
Aidan Gomez
CEO, Cohere
At the time, Altman said he was afraid of the possibility that AI could be used for “large-scale disinformation,” adding, “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”
Altman was temporarily ousted from OpenAI in November, a shock move that highlighted concerns about the governance of the companies behind the most powerful AI systems.
In a discussion at the World Economic Forum in Davos, Altman said his ouster was a “microcosm” of the stresses facing OpenAI and other AI labs internally. “As the world moves closer to AGI, the stakes, stress and tension become greater. This will all increase.”
Aidan Gomez, CEO and co-founder of artificial intelligence startup Cohere, echoed Altman's point that AGI could become a reality in the near future.
“I think we'll have this technology soon,” Gomez told CNBC's Arjun Kharpal in a fireside chat at the World Economic Forum.
However, he said a key problem with AGI is that it is still poorly defined as a technology. “First of all, AGI is a very vaguely defined term,” Cohere’s boss added. “If we just call it 'better than humans at pretty much everything humans can do,' then I agree we can get systems that can do that very soon.”
However, Gomez said that even if AGI eventually arrives, it would likely take “decades” for companies to truly integrate it.
“The question really is how quickly we can adopt it, how quickly we can get it into production. The size of these models makes adoption difficult,” noted Gomez.
“So our focus at Cohere has been on compressing that: making them more adaptable and efficient.”
What AGI actually is, and what it will ultimately look like, remains a mystery to many experts in the AI community.
Lila Ibrahim, chief operating officer of Google's AI lab DeepMind, said no one really knows what type of AI qualifies as “general intelligence,” adding that it is important to develop the technology safely.
“The reality is that no one knows” when AGI will arrive, Ibrahim told CNBC’s Kharpal. “There is a debate among AI experts who have been doing this for a long time, both in the industry and within the organization.”
“We are already seeing areas where AI is able to unlock our understanding… where humans have not been able to make such advances. So it is AI in partnership with humans or as a tool,” said Ibrahim.
“I think that's really a big open question, and I don't know how to answer it better than: How do we actually think about this, rather than how long it's going to take?” Ibrahim added. “How do we think about what it might look like and how do we ensure that we are responsible stewards of the technology?”
Altman wasn't the only top tech executive asked about AI risks in Davos.
Marc Benioff, CEO of enterprise software company Salesforce, said on a panel with Altman that the tech world is taking steps to ensure the AI race doesn't lead to a “Hiroshima moment.”
Many technology industry leaders have warned that AI could lead to an event in which machines become so powerful that they spiral out of control and wipe out humanity.
Several AI and technology leaders, including Elon Musk, Steve Wozniak and former presidential candidate Andrew Yang, have called for a pause in the advancement of AI, saying a six-month moratorium would give society and regulators a chance to catch up.
Geoffrey Hinton, an AI pioneer often referred to as the “godfather of AI,” has previously warned that advanced programs could “evade control by writing their own computer code to modify themselves.”
“One way these systems could escape control is by writing their own computer code to modify themselves. And that's something we need to be seriously concerned about,” Hinton said in an interview with CBS' “60 Minutes” in October.
Hinton left his role as a vice president and engineering fellow at Google last year, citing concerns over how the company was handling AI safety and ethics.
Benioff said that tech industry executives and experts need to ensure that AI avoids repeating the problems that have plagued the internet over the past decade or so, from the manipulation of beliefs and behavior by recommendation algorithms during election cycles to the invasion of privacy.
“We really haven’t had this kind of interactivity with AI-based tools,” Benioff told the audience in Davos last week. “But we don’t fully trust it yet. So we have to trust each other.”
“We also need to reach out to these regulators and say, ‘Hey, if you look at social media over the last decade, it’s been kind of a f—ing show. It’s pretty bad. We don’t want that in our AI industry. We want to have a good, healthy partnership with these moderators and with these regulators.’”
Jack Hidary, CEO of SandboxAQ, pushed back on the enthusiasm among some tech executives that AI could be nearing the stage of achieving “general” intelligence, adding that AI systems still have plenty of teething problems to iron out.
He said AI chatbots like ChatGPT have passed the Turing Test, the method also known as the “imitation game” devised by British computer scientist Alan Turing to determine whether a person can tell if they are communicating with a machine or another human. But one big area where AI falls short is common sense, he added.
“One thing we have seen from LLMs [large language models] is that they are very expressive and can write essays like a college student as if there were no tomorrow, but common sense is sometimes hard to find. When you ask, ‘How do people cross the street?’ it sometimes can’t even recognize what the zebra crossing is, as opposed to other things, things that even a toddler would know. So it will be very interesting to go beyond that in terms of reasoning.”
Hidary has a big prediction for the development of AI technology in 2024: This year, he said, will be the first year that advanced AI communications software will be loaded into a humanoid robot.
“This year we will see a ‘ChatGPT moment’ for humanoid robots with embodied AI, in 2024 and then in 2025,” Hidary said.
“We won’t see robots coming off the assembly line, but we will see them actually demonstrate in reality what they can do with their intelligence, their brains, perhaps with LLMs and other AI techniques.”
“There are now 20 companies that have been venture-backed to develop humanoid robots, of course in addition to Tesla and many others, and so I think there will be a shift this year when it comes to that,” Hidary added.