OpenAI’s board may have been right to fire Sam Altman – and also to rehire him


The seismic shakeup at OpenAI—with the firing and eventual reinstatement of CEO Sam Altman—came as a shock to almost everyone. But the truth is that the company was always likely to reach a breaking point. It was built on a fault line so deep and unstable that stability would eventually give way to chaos.

This fault line has been OpenAI’s dual mission: to develop AI smarter than humanity while ensuring that AI is safe and useful to humanity. There is an inherent tension between these goals, as advanced AI could harm people in a variety of ways, from perpetuating prejudices to enabling bioterrorism. Now the tension in OpenAI’s mandate appears to have helped trigger the tech industry’s biggest earthquake in decades.

On Friday, the board fired Altman, citing an alleged lack of candor, and company president Greg Brockman quit in protest. On Saturday, the two were in talks with the board about a possible return, but the negotiations fell through. By Sunday, both had accepted jobs at major OpenAI investor Microsoft, where they would continue their work on cutting-edge AI. By Monday, 95 percent of OpenAI employees were threatening to follow them there.

Late Tuesday evening, OpenAI announced: “We have agreed in principle that Sam Altman will return to OpenAI as CEO with a new board.”

As chaotic as it all was, the aftershocks for the AI ecosystem might have been scarier if the saga had ended in a mass exodus of OpenAI employees, as it seemed it would just a few days ago. A flow of talent from OpenAI to Microsoft would have meant a shift from a company founded on concerns about AI safety to one that barely bothers to pay lip service to the concept.

So at the end of the day, did the OpenAI board make the right decision in firing Altman? Or was it the right decision to rehire him?

The answer to both could well be “yes.”

OpenAI’s board did exactly what it was supposed to do: protect the integrity of the company

OpenAI is not a typical technology company. It has a unique structure, and this structure is key to understanding the current upheavals.

The company was originally founded in 2015 as a nonprofit focused on AI research. But in 2019, hungry for the resources it would need to create AGI — artificial general intelligence, a hypothetical system that can match or exceed human capabilities — OpenAI created a for-profit entity. That allowed investors to put money into OpenAI and potentially earn a return, though their profits would be capped under the new entity’s rules and anything over the cap would revert to the nonprofit. Crucially, the nonprofit board retained the power to govern the for-profit entity, including the power to hire and fire its leadership.

The board’s job was to ensure that OpenAI stayed true to its mission, as reflected in its charter, which clearly states: “Our primary fiduciary duty is to humanity.” Not to investors. Not to employees. To humanity.

The charter also states: “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.” But, paradoxically, it also states: “To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities.”

It reads something like this: we’re worried about a race where everyone is pushing to be at the front of the pack. But we have to be at the front of the pack.

Each of these two impulses found an avatar in one of OpenAI’s leaders. Ilya Sutskever, OpenAI co-founder and chief scientist, reportedly worried that the company was moving too fast, trying to make a splash and a profit at the expense of safety. Since July, he has co-led OpenAI’s “Superalignment” team, which aims to figure out how to manage the risks of superintelligent AI.

Meanwhile, Altman was charging full steam ahead. Under his leadership, OpenAI has done more than any other company to set off an arms-race dynamic, most notably with the launch of ChatGPT last November. More recently, Altman reportedly sought funding from autocratic Middle Eastern regimes such as Saudi Arabia to start a new AI chip-making company. That alone raises safety concerns, as such regimes could use AI to supercharge digital surveillance or human rights abuses.

We still don’t know exactly why the OpenAI board fired Altman. The board said he “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Sutskever, who led Altman’s ouster, initially defended the move in similar terms: “This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity,” he told employees at an all-hands meeting hours after the firing. (Sutskever later switched sides, however, saying he regretted his participation in the ouster.)

“Sam Altman and Greg Brockman seem to believe that accelerating AI can do the most good for humanity. The plurality of the [old] board, however, appears to disagree, believing that the pace of progress is too fast and could threaten safety and trust,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University.

“I think that by firing Altman, the board made the only decision they felt they could make,” AI expert Gary Marcus told me. “I think they saw something in Sam that they thought they couldn’t live with and stay true to their mission. So in their eyes they made the right choice.”

Before OpenAI agreed to reinstate Altman, Kreps worried that “the board may have won the battle but lost the war.”

In other words, if the board fired Altman in part out of concern that his accelerationist approach jeopardized the safety part of OpenAI’s mission, it won the battle by doing everything in its power to keep the company true to that mission.

But if the saga had ended with the coup that pushed OpenAI’s top talent directly into the arms of Microsoft, the board would have lost the larger war – the attempt to ensure the safety of AI for humanity. Which brings us to…

The AI risk landscape would likely be worse if Altman had remained fired

Altman’s firing caused incredible chaos. According to futurist Amy Webb, CEO of the Future Today Institute, OpenAI’s board failed to practice “strategic foresight”: to understand how abruptly firing Altman could cause the company to implode and send shockwaves through the entire AI ecosystem. “You have to think about the consequences of your actions,” she told me.

It’s entirely possible that Sutskever didn’t foresee the threat of a mass exodus that could have killed OpenAI entirely. But another board member behind the ouster, Helen Toner — whom Altman had reportedly castigated over a paper she co-authored that appeared to criticize OpenAI’s approach to safety — knew it was a possibility. And she was willing to accept that possibility if it best protected the interests of humanity, which, remember, was the board’s job. If the company were destroyed by Altman’s firing, she said, “that could be consistent with its mission,” The New York Times reported.

But when Altman and Brockman announced that they would be joining Microsoft and OpenAI’s employees threatened a mass exodus as well, the board’s calculus may have changed: keeping them at the company was probably better than the new alternative. Sending them straight into Microsoft’s arms wouldn’t bode well for AI safety.

After all, Microsoft laid off its entire AI ethics team earlier this year. When Microsoft CEO Satya Nadella teamed up with OpenAI in February to embed GPT-4 into Bing search, he taunted competitor Google: “We made them dance.” And upon hiring Altman, Nadella tweeted that he was excited for the ousted leader to set “a new pace of innovation.”

Pushing out Altman and OpenAI’s top talent would have meant that “OpenAI can absolve itself of responsibility for possible future missteps in AI development, but cannot prevent them,” Kreps said. “The developments show how dynamic and demanding the AI field has become and that it is impossible to stop or contain progress.”

Perhaps impossible is too strong a word. But curbing progress would require changing the underlying incentive structure of the AI industry, and that has proven extremely difficult in hyper-capitalist, hyper-competitive Silicon Valley. Being at the forefront of technological development is what brings profit and prestige, and that leaves little incentive to slow down, even when slowing down is urgently needed.

Under Altman, OpenAI tried to square this circle by arguing that researchers need to experiment with advanced AI to figure out how to make advanced AI safe – so accelerating development is actually helpful. That was tenuous logic a decade ago, and it doesn’t hold water today, when we have AI systems so advanced and so opaque (think GPT-4) that many experts say we first need to figure out how they work before we build even more inscrutable black boxes.

OpenAI had also run into a more prosaic problem that pushed it down the for-profit path: it needed money. Running large-scale AI experiments today requires enormous computing power – more than 300,000 times what was needed a decade ago – and that is incredibly expensive. To stay at the cutting edge, the company had to create a for-profit arm and partner with Microsoft. OpenAI wasn’t alone in this: rival company Anthropic, founded by former OpenAI employees who wanted to focus more on safety, initially argued that the industry’s underlying incentive structure needed to change, but it eventually partnered with Amazon.

Given all of this, is it even possible to build an AI company that advances the state of the art while truly prioritizing ethics and safety?

“It’s looking like it may not be,” Marcus said.

Webb was even more direct, saying, “I don’t think that’s possible.” Instead, she emphasized that the government needs to change the underlying incentive structure within which all of these companies operate. This includes a mix of carrots and sticks: positive incentives, such as tax breaks, for companies that demonstrate that they adhere to the highest safety standards; and negative incentives such as regulation.

The AI industry is now a Wild West where every company plays by its own rules. OpenAI lives to play another day.

Update, November 22, 11:30 a.m. ET: This story was originally published on November 21 and has been updated to reflect Altman’s reinstatement at OpenAI.