OpenAI boss seeks new Microsoft funding to build ‘superintelligence’

OpenAI plans to secure further financial backing from its largest investor, Microsoft, as the ChatGPT maker’s CEO Sam Altman pushes forward with his vision of creating artificial general intelligence (AGI) – computer software that is as intelligent as humans.

In an interview with the Financial Times, Altman said his company’s partnership with Microsoft CEO Satya Nadella is working “really well” and that he expects to raise “a lot more money over time” from the tech giant and other investors to keep up with the high costs of developing more sophisticated AI models.

Microsoft invested $10 billion in OpenAI earlier this year in a “multi-year” deal that valued the San Francisco-based company at $29 billion, according to people familiar with the discussions.

Asked whether Microsoft would invest further, Altman said: “I would hope so.” He added: “There is still a long way to go and a lot of computing power still needs to be built between here and AGI . . . The training costs are just huge.”

Altman said that “sales growth has been good this year,” without providing financial details, and that the company remains unprofitable due to training costs. But he said the Microsoft partnership would ensure “we both make money off each other’s success and everyone is happy.”

In the latest sign of how OpenAI plans to build a business model on ChatGPT, the company announced a series of new tools and upgrades to its existing GPT-4 model for developers and enterprises at a November 6 event attended by Nadella.

The tools include custom versions of ChatGPT that can be adapted and tailored for specific applications, as well as a GPT Store, a marketplace for the best apps. The ultimate goal is to share revenue with the most popular GPT creators, in a business model similar to Apple’s App Store.

“Right now, people [say] ‘You have this research lab, you have this API [software], you’ve got the partnership with Microsoft, you’ve got this ChatGPT thing, now there’s a GPT store.’ But those aren’t really our products,” Altman said. “These are channels to our only product, which is intelligence, magical intelligence in the sky. I think that’s what we’re about.”

To grow the company’s business, Altman said he has hired executives such as chief operating officer Brad Lightcap, who previously worked at Dropbox and the startup accelerator Y Combinator.

Altman, meanwhile, divides his time between two areas: researching “how to build superintelligence” and ways to build the computing power for it. “The vision is to develop AGI, figure out how to make it safe . . . and figure out the benefits,” he said.

He pointed to the introduction of GPTs and said OpenAI is working to develop more autonomous agents that can perform tasks and actions such as running code, making payments, sending emails or submitting claims.

“We will make these agents more and more powerful . . . and the actions will get more complex from here,” he said. “The business value that comes from being able to do that in every category is, I think, pretty good.”

The company is also working on GPT-5, the next generation of its AI model, Altman said, although he did not commit to a release schedule.

Training it will require more data, which Altman said would come from a combination of publicly available datasets on the internet and proprietary data from companies.

OpenAI recently issued a call for large datasets from organizations that are “not yet readily available to the public online today,” particularly for long texts or conversations in any format.

While GPT-5 is likely to be more sophisticated than its predecessors, Altman said it is technically difficult to predict exactly what new capabilities the model might have.

“Until we train this model, it’s like a fun guessing game for us,” he said. “We’re trying to get better at it because I think it’s important to predict skills for safety reasons. But I can’t tell you exactly what it will do that GPT-4 didn’t do.”

To train its models, OpenAI, like most other major AI companies, uses Nvidia’s advanced H100 chips, which became Silicon Valley’s hottest commodity last year as rival tech companies raced to secure the crucial semiconductors needed to build AI systems.

Altman said there had been “a brutal crunch” throughout the year, with supply shortages of Nvidia’s chips, which cost $40,000 apiece. He said his company had received some H100 chips and expected more soon, adding that “next year already looks like it will be better.”

However, with other players such as Google, Microsoft, AMD and Intel preparing to release competing AI chips, this dependence on Nvidia is unlikely to last much longer. “I think the magic of capitalism is at work here. And a lot of people would like to be Nvidia now,” Altman said.

With the release of ChatGPT almost a year ago, OpenAI took an early lead in the race to develop generative AI – systems that can create text, images, code and other multimedia content in seconds.

Despite its success with consumers, OpenAI aims to make progress toward building artificial general intelligence, Altman said. The large language models (LLMs) underlying ChatGPT are “one of the core pieces . . . for building AGI, but there will be many other parts.”

While OpenAI has primarily focused on LLMs, its competitors are pursuing alternative research strategies to advance AI.

Altman said his team believes language is a “great way to compress information” and thereby to develop intelligence, a factor he said companies like Google DeepMind have overlooked.

“[Other companies] have a lot of smart people. But they didn’t do it. They didn’t do it, even after I thought we had somehow proven it with GPT-3,” he said.


Ultimately, Altman said, “the biggest missing piece” in the race to develop AGI is what is needed for such systems to make fundamental leaps in understanding.

“There was a long period of time where the right thing for [Isaac] Newton to do was to read more math textbooks, talk to professors, and practice problems . . . That’s what our current models do,” Altman said, pointing to an example a colleague had previously used.

But he added that Newton would never have invented calculus simply by reading about geometry or algebra. “And neither can our models,” Altman said.

“So the question is, what is the missing idea for generating net-new knowledge for humanity? I think that’s the biggest thing we need to work on.”
