Elon Musk played a big role in convincing Ilya Sutskever to join OpenAI as chief scientist in 2015. Now the Tesla CEO wants to know what he saw there that scared him so much.
Sutskever, whom Musk recently described as a “good person” with a “good heart” – and a “lynchpin to OpenAI’s success” – was a member of the OpenAI board that fired CEO Sam Altman two Fridays ago; in fact, Sutskever informed Altman of his dismissal. However, the board has since been reshuffled and Altman reinstated, with investors led by Microsoft pushing for the changes.
Sutskever himself backtracked on Monday, writing on X: “I deeply regret that I took part in the board’s actions. I never intended to harm OpenAI.”
But Musk and other tech elite — including those who mocked the board for firing Altman — are still curious about what Sutskever saw.
Late Thursday, venture capitalist Marc Andreessen, mocking the “doomers” who fear AI’s threat to humanity, posted to X: “But seriously – what did Ilya see?” Musk replied a few hours later: “Yes! Something scared Ilya so much that he wanted to fire Sam. What was it?”
That remains a mystery. The board gave only vague reasons for firing Altman, and little has been revealed since.
“Such a drastic action”
OpenAI’s mission is to develop artificial general intelligence (AGI) and ensure that it “benefits all of humanity.” AGI refers to a system that can keep up with humans when faced with an unfamiliar task.
Because of OpenAI’s unusual corporate structure, the nonprofit’s board sits above the capped-profit company, giving the board the power to fire the CEO if, for example, it feels the company is commercializing potentially dangerous AI capabilities at an unsafe pace.
Reuters reported early Thursday that several OpenAI researchers had written a letter to the board warning of a new AI that could threaten humanity. After being contacted by Reuters, OpenAI acknowledged in an internal email a project called Q* (pronounced Q-Star) that some employees believe could represent a breakthrough in the company’s pursuit of AGI. Q* can reportedly pass basic math tests, suggesting an ability to reason logically, in contrast to ChatGPT’s more predictive behavior.
Musk has long warned about the potential dangers of artificial intelligence to humanity, but also sees its benefits and is now offering a ChatGPT rival called Grok through his startup xAI. He co-founded OpenAI in 2015 and helped attract key talent like Sutskever, but left the company on poor terms a few years later. He later complained that the former nonprofit – which he had hoped would serve as a counterweight to Google’s AI dominance – had instead become a “closed, profit-maximizing company effectively controlled by Microsoft.”
Last weekend, he weighed in on the OpenAI board’s decision to fire Altman, writing: “Given the risk and power of advanced AI, the public should be informed why the board felt it needed to take such drastic action.”
When an X user suggested that there might be a “bombshell variable” unknown to the public, Musk replied: “Exactly.”
After his Monday mea culpa, Sutskever reacted to Altman’s reinstatement by writing on Wednesday: “There is no sentence in any language that expresses how happy I am.”