The fear and tension that led to Sam Altman’s ouster at OpenAI

Over the past year, Sam Altman led OpenAI to the forefront of the technology industry. Thanks to its wildly popular chatbot ChatGPT, the San Francisco startup was at the center of an artificial intelligence boom, and Mr. Altman, OpenAI’s chief executive, had become one of the most recognizable people in the tech industry.

But this success led to tensions within the company. Ilya Sutskever, a respected AI researcher who co-founded OpenAI with Mr. Altman and nine other people, became increasingly concerned that OpenAI’s technology could be dangerous and that Mr. Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Mr. Sutskever, a member of the company’s board, also objected to what he saw as his diminished role within the company, according to two of the people.

This conflict between rapid growth and AI safety came into focus on Friday afternoon, when Mr. Altman was forced out of his job by four of OpenAI’s six board members, led by Mr. Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders said the split was as significant as when Steve Jobs was forced out of Apple in 1985.

The fall of the 38-year-old Mr. Altman highlighted a long-standing divide in the AI community between people who believe AI is the biggest business opportunity of a generation and others who fear that moving too quickly could be dangerous. His ouster also showed how a philosophical movement built around fear of AI had become an inescapable part of tech culture.

Since ChatGPT was released nearly a year ago, artificial intelligence has captured the public’s imagination, stirring hopes that it could be used for important tasks such as drug discovery or teaching children. But some AI scientists and political leaders worry about its risks, such as the automation of jobs or autonomous warfare beyond human control.

Fears that AI researchers might build something dangerous have long been an integral part of OpenAI’s culture. Its founders believed that because they understood these risks, they were the right people to build the technology.

OpenAI’s board has not given a specific reason for ousting Mr. Altman, other than to say in a blog post that it did not believe he was communicating honestly with it. OpenAI employees were told Saturday morning that his removal had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practices,” according to a message viewed by The New York Times.

Greg Brockman, another co-founder and the company’s president, resigned in protest Friday evening. So did OpenAI’s director of research. By Saturday morning, according to a half-dozen current and former employees, the company was in chaos, with its roughly 700 employees struggling to understand why the board had made its move.

“I’m sure you are all confused, sad, and perhaps a little afraid,” Brad Lightcap, OpenAI’s chief operating officer, said in a memo to OpenAI employees. “We are fully focused on getting the issue under control, pushing for resolution and clarity and getting back to work.”

Mr. Altman was asked to join a video meeting of the board at noon on Friday in San Francisco. There, Mr. Sutskever, 37, read from a script that closely resembled the blog post the company published minutes later, according to a person familiar with the matter. The post said that Mr. Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

But in the hours that followed, OpenAI employees and others focused not only on what Mr. Altman may have done, but also on the way the San Francisco startup is structured and on the extreme views about the dangers of AI that have been embedded in the company’s work since it was created in 2015.

Mr. Sutskever and Mr. Altman could not be reached for comment on Saturday.

In recent weeks, Jakub Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to the company’s director of research. After previously holding a position below Mr. Sutskever, he was elevated to a position alongside him, according to two people familiar with the matter.

Mr. Pachocki left the company late Friday, the people said, shortly after Mr. Brockman. Earlier in the day, OpenAI said Mr. Brockman had been removed as chairman of the board and would report to the new interim chief executive, Mira Murati. Other allies of Mr. Altman, including two senior researchers, Szymon Sidor and Aleksander Madry, also left the company.

Mr. Brockman said in a post on X, formerly known as Twitter, that despite being the board’s chairman, he did not attend the board meeting at which Mr. Altman was ousted. That left Mr. Sutskever and three other board members: Adam D’Angelo, chief executive of the question-and-answer site Quora; Tasha McCauley, an adjunct senior management scientist at the RAND Corporation; and Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.

They could not be reached for comment on Saturday.

Ms. McCauley and Ms. Toner have ties to the Rationalist and Effective Altruist movements, a community deeply concerned that AI could one day destroy humanity. Today’s AI technology cannot do that, but members of this community believe such dangers will emerge as the technology grows more powerful.

In 2021, a researcher named Dario Amodei, who also has ties to this community, and about 15 other OpenAI employees left the company to start a new AI company called Anthropic.

Mr. Sutskever increasingly embraced these beliefs. He was born in the Soviet Union, spent his formative years in Israel and immigrated to Canada as a teenager. As a student at the University of Toronto, he helped pioneer an AI technology called neural networks.

In 2015, Mr. Sutskever left his job at Google and helped found OpenAI along with Mr. Altman, Mr. Brockman and Tesla CEO Elon Musk. They set up the lab as a nonprofit organization and said that, unlike Google and other companies, it would not be driven by commercial incentives. They vowed to build so-called artificial general intelligence (AGI), a machine that can do anything the brain can do.

Mr. Altman turned OpenAI into a for-profit company in 2018 and negotiated a $1 billion investment from Microsoft. Such huge sums of money are essential to the development of technologies like GPT-4, which was released earlier this year. Since its initial investment, Microsoft has poured an additional $12 billion into the company.

The company continued to be governed by the nonprofit’s board. Although investors like Microsoft receive a share of OpenAI’s profits, those returns are capped; any money above the cap flows back to the nonprofit.

Recognizing the power of GPT-4, Mr. Sutskever helped create a new Superalignment team within the company to look for ways to ensure that future versions of the technology would do no harm.

Mr. Altman was open to these concerns, but he also wanted OpenAI to stay ahead of its much larger competitors. In late September, he flew to the Middle East to meet with investors, according to two people familiar with the matter. He sought as much as $1 billion in funding from SoftBank, the Japanese technology investor led by Masayoshi Son, for a potential OpenAI venture that would build a hardware device for running AI technologies like ChatGPT.

OpenAI is also in discussions over “tender offer” financing that would allow employees to cash out their shares in the company. The deal would value OpenAI at more than $80 billion, nearly triple its valuation of about six months ago.

But the company’s success appears to have only heightened concerns that something could go wrong with AI.

“It doesn’t seem at all unlikely that we will have computers — data centers — that are much smarter than humans,” Mr. Sutskever said in a Nov. 2 podcast. “What would such AIs do? I don’t know.”

Kevin Roose and Tripp Mickle contributed reporting.