Sam Altman
The CEO of OpenAI, along with hundreds of other AI experts, warns of the dangers of unregulated machine intelligence.
(Photo: dpa)
Düsseldorf. The statement is only 22 words long, but it is powerful: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The American non-governmental organization Center for AI Safety, based in San Francisco, published the appeal.
The list of signatories reads like a who’s who of artificial intelligence (AI) research. It includes Demis Hassabis and Sam Altman, the heads of Google DeepMind and OpenAI, two of the world’s leading AI companies. Also on the list are Geoffrey Hinton and Yoshua Bengio, winners of the 2018 Turing Award, a kind of Nobel Prize of computer science.
Among the 375 signatories are several German AI luminaries: Frank Hutter, professor of computer science at the University of Freiburg; Joachim Weickert, professor of mathematics and computer science at Saarland University; and Ansgar Steland, professor of statistics and business mathematics at RWTH Aachen.
Grimes, Elon Musk’s ex-girlfriend, also signs the AI warning
The appeal is reminiscent of the Future of Life Institute’s open letter from March 2023, which called for a six-month pause in AI research and was signed by well-known tech personalities such as Elon Musk and Steve Wozniak.
That call met with mixed responses, not only because of the warning itself but also over the question of whether a moratorium makes sense. AI start-ups, for example, take a critical view: they fear that regulation based on such warnings would prevent them from catching up with established players.
However, there are differences between the new petition and the old one. The current statement is deliberately more general: while it puts the dangers of AI on a par with nuclear war, it does not speak of an “out-of-control race” or of AI that “no one – not even its creators – can reliably understand, predict or control”.
According to Dan Hendrycks, director of the Center for AI Safety, the brevity of the statement was deliberate. The aim was to avoid disagreements over the nature of the danger or over solutions such as a six-month research pause.
Instead, the statement is meant to encourage a kind of “coming out” by scientists. “There is a widespread belief, even within the AI community, that there are only a handful of naysayers,” Hendrycks told the New York Times. “But actually there are many who express their concerns privately.”
In fact, apart from a few exceptions, such as the pop singer Grimes, Elon Musk’s former girlfriend, and Jaan Tallinn, co-founder of the internet telephony service Skype, the list consists almost entirely of AI researchers: more than 30 employees of Google DeepMind and 16 from OpenAI, along with numerous professors and scientists.
Some names are missing, however, such as Yann LeCun, chief AI scientist at Meta, Facebook’s parent company.