The White House on Monday unveiled a sweeping set of rules and principles designed to ensure America remains a “leader” in regulating artificial intelligence (AI), which is the subject of fierce international competition.
US President Joe Biden will issue an executive order that, among other things, will require artificial intelligence developers to submit the results of their safety tests to the federal government if their projects "pose a serious risk to national security, the national economy, or public health."
The initiative is scheduled to be unveiled at an official ceremony at the White House on Monday.
The 80-year-old Democrat is invoking a Cold War-era law, the Defense Production Act of 1950, which grants the federal government certain coercive powers over companies in matters of national security.
The criteria for these safety tests will be determined and published at the federal level, the administration said.
Testing
Last July, several big names in the digital industry, including Microsoft and Google, committed to subjecting their artificial intelligence systems to external testing.
The White House intends to pay particular attention to the risks that AI development poses in the areas of biotechnology and infrastructure.
The American government will also issue recommendations for the detection and identification of content generated by artificial intelligence, a technology that makes it possible to produce lifelike images, sounds or even videos at very high speeds.
The administration also promises to publish recommendations on discrimination, given the biases that artificial intelligence systems can introduce, and commits to monitoring the impact of this technological revolution on employment.
Although the White House touts the ambition of the executive order presented on Monday, Joe Biden in reality has limited room for maneuver.
Any truly binding and ambitious artificial intelligence legislation would have to be passed by the US Congress, which is currently divided between Democrats and Republicans, making the passage of a large-scale law very unlikely.
Congress
Still, the American president called on lawmakers on Monday to pass laws to "protect the private lives" of Americans at a time when artificial intelligence "not only makes it easier to extract, identify and use personal data, but also encourages them to do so," as companies use this data to train algorithms.
In response to the publication of the executive order, the software industry association BSA also called for "a new legal framework to create concrete protective measures for artificial intelligence."
The regulation of artificial intelligence is subject to fierce international competition.
The European Union, already a prolific rule-maker in the digital sector, aims to adopt its own regulatory framework for artificial intelligence this year, hoping to set the tone at the global level.
The United Kingdom is organizing a summit on the issue this week, which will be attended by American Vice President Kamala Harris.
From smartphones to airports: artificial intelligence is already omnipresent in everyday life.
Its progress has accelerated in recent years with the development of generative AI, such as the ChatGPT chatbot.
While AI raises hopes of major advances in medicine, this technological revolution also fuels fears of massive job losses, repeated privacy violations, and an explosion of disinformation.
Many experts and NGOs also warn against the use of AI by authoritarian regimes or criminal organizations.