At the British country estate Bletchley Park, delegates from 28 countries discussed how artificial intelligence can be controlled. Is it the greatest global threat, or will it usher in a land of milk and honey? The British media weighed in as well.
The location was well chosen. Conservative British Prime Minister Rishi Sunak invited an international delegation to Bletchley Park for the summit on artificial intelligence (AI) and its dangers. During World War II, the country estate northwest of London was a military codebreaking centre, where cryptanalysts cracked Enigma, the German Wehrmacht’s cipher. The brilliant logician and computing pioneer Alan Turing was part of the team.
Eighty-three years later, another complicated problem was discussed there: how can AI be tamed? Politicians, executives and experts from around the world solemnly promised that they would at least try to regulate a technology that now mimics human thought with uncanny fidelity and has long surpassed it in some respects. The goal is a global set of rules. The industrialized countries see the need for action; what still has to be settled, however, is who will monitor the monitors.
Turing Test
AI can inspire fear. That was already foreseeable when Turing devised the “Imitation Game”: a machine and a human each communicate with a test subject indirectly, via keyboard and screen. If the subject can no longer tell whether they are dealing with their own kind or with a computing machine, the program has won. What seemed fantastical then now appears to be reality. In 2014, a computer program in England was reported to have passed this Turing test for the first time. The artificial intelligence ChatGPT (Generative Pre-trained Transformer), which exchanges text and images with people, now reportedly passes the test routinely.
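The mechanics of the game are simple enough to sketch in a few lines of code. The following Python snippet is a purely illustrative toy, not anything presented at the summit or taken from Turing’s paper: the two respondent stubs and the coin-flipping judge are invented placeholders. It runs many rounds in which a judge reads answers from two hidden players, one human and one machine, and tries to name the machine; if the judge is right no more often than chance, the program has won.

import random

def human_reply(prompt: str) -> str:
    # Stand-in for a human typing at a keyboard.
    return f"Hmm, '{prompt}' is something I'd need to sleep on."

def machine_reply(prompt: str) -> str:
    # Stand-in for a chat program; here it simply mimics the human's style.
    return f"Hmm, '{prompt}' is something I'd need to sleep on."

def judge_guess(answer_a: str, answer_b: str) -> str:
    # A real judge would probe and compare the answers; this stub can only flip a coin.
    return random.choice(["A", "B"])

def run_trial(prompt: str) -> bool:
    """One round of the game; True if the judge fails to spot the machine."""
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)  # hide who sits behind screen A and screen B
    (name_a, reply_a), (name_b, reply_b) = players
    guess = judge_guess(reply_a(prompt), reply_b(prompt))  # judge names the suspected machine
    machine_label = "A" if name_a == "machine" else "B"
    return guess != machine_label

if __name__ == "__main__":
    trials = 1000
    fooled = sum(run_trial("What did you dream about last night?") for _ in range(trials))
    # A rate near 50% means the judge cannot tell the human and the machine apart.
    print(f"Judge failed to identify the machine in {fooled / trials:.0%} of trials")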
If even experts can no longer distinguish true from false news, or manipulated from real images, the danger is obvious: is it really the head of state delivering that inflammatory speech on television, or is it his bot? In the worst case, malicious software can unleash hate campaigns against outspoken opponents. The concern among the powerful and the inventive at Bletchley Park seems justified.
A special guest, the innovative and globally active entrepreneur Elon Musk, filled the headlines of the British newspapers with his statements. The Chron quoted him as saying that AI represents “one of the biggest threats to humanity”, and set this against the opposing view of a top executive at tech rival Meta (formerly Facebook): the risks are being exaggerated, said the former Liberal Democrat politician Sir Nick Clegg. “The Times” approached the subject with a slightly different emphasis. On Friday its headline carried Musk’s claim that artificial intelligence means no one will need to work anymore, while the subheading called it the “most disruptive force in human history”. Will conditions resemble the land of milk and honey, or does mass unemployment loom? Questioned by Sunak, the billionaire said, according to the Daily Telegraph, that people would hold jobs only for “personal satisfaction”.
National security
The Financial Times captured the key point: leading AI companies agreed that governments may scrutinize the next generation of their products for national security risks.
“The Economist” had already advised governments not to rush into controlling AI: “Think, then act.” A technology that can profoundly change the world should not be ignored, and any threat to humanity must be taken seriously; many wish regulators had reacted more quickly to social media. “But it is also dangerous to act hastily.” Hastily created global rules and institutions may target the wrong problems, prove ineffective against the real ones, and stifle innovation. Time and reflection are needed. That seems wise.