Thought experiment: A stir over the Air Force’s “AI killer drone”

At the Future Combat Air and Space Capabilities Summit in London in late May, Colonel Tucker “Cinco” Hamilton, head of AI Test and Operations at the US Air Force, warned that AI-enabled technology can develop “highly unexpected ways to achieve its goal”. As an example, Hamilton described a simulated test in which an AI-powered drone was tasked with detecting and identifying enemy surface-to-air missile (SAM) sites; a human operator then had to approve any attack.

However, the AI decided not to listen to its operator, Hamilton said, according to a Royal Aeronautical Society report summarizing the conference’s findings. “The system noticed that although it had identified the threat, the human operator at times told it not to destroy that threat.”

Annihilation as the goal

But the drone earned points for destroying the threats, so it came to see the operator as standing between it and its goal. According to Hamilton, it consequently decided to eliminate its operator. “So we taught the system, ‘Don’t kill the operator – that’s bad. You lose points if you do that.’ So what did it do? It started destroying the communication tower that the operator used to talk to the drone, to stop it from destroying the target.”

For Hamilton, a veteran fighter test pilot himself, the simulation is above all a warning against overreliance on AI. US Air Force spokeswoman Ann Stefanek, however, denied that any such incident took place in a statement to Business Insider. “The Department of the Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context.”

Pure hypothesis

Hamilton himself confirmed as much on Friday: “We have never run that experiment, nor would we need to in order to recognize that this is a plausible outcome,” he clarified in a statement to the Royal Aeronautical Society.

In an interview with Defense IQ last year, Hamilton said: “AI is not a fad; it is changing our society and our military forever.” The task now, he added, is one of “developing ways to make it more robust and more aware of why it is making certain decisions”.

Yoshua Bengio, for his part, one of the three computer scientists dubbed the “godfathers” of AI, told the BBC earlier this week that he does not think the military should be given any AI powers at all. He called the military “one of the worst places we could put a superintelligent AI”. He is concerned that “bad actors” could take over the AI, especially as it becomes more sophisticated and powerful.

Yoshua Bengio, founder of the Mila-Quebec AI Institute, warns against the AI he helped develop. (IMAGO/ZUMA Press/Christinne Muschi)

“It could be the military, it could be terrorists, it could be somebody aggressive or psychotic. So if it’s easy to program these AI systems to do something bad, that could be very dangerous,” Bengio said.