The head of the SPÖ delegation, Andreas Schieder, was asked on Tuesday in Strasbourg whether apps like ChatGPT would fall into the high-risk regulatory category. “In the last six months, everyone has probably tried it, so you can safely say: this is by no means a high risk,” said the MEP. The situation with disinformation campaigns is far more critical: everything is happening faster and in more technically misleading ways than only a short time ago. And that is exactly where the challenge lies: what is still ridiculed today may bring about major social or political changes tomorrow.
On Wednesday, the EU Parliament will vote on the world’s first set of rules to reduce the risks of artificial intelligence (AI) and promote its ethical use – the parliamentary position is likely to be adopted. The rules follow a risk-based approach and set obligations for providers and users based on the level of risk AI can create. AI systems with unacceptable levels of risk may be banned as a result. This includes remote biometric identification systems, categorization systems using sensitive features, predictive policing, and emotion recognition systems.
China provides negative examples of this, as Green MEP Thomas Waitz puts it: “These are social profiles, where people’s social behavior is tracked – and if you don’t get enough points there, you can’t buy a ticket for China’s high-speed trains.” Or AI assesses human behavior at border crossings, for example to expose smugglers.
For generative AI such as ChatGPT, lawmakers propose bespoke rules, including labeling AI-generated content and publishing summaries of the copyrighted data used to train the software.
At the same time, however, the development of such programs must not be hampered. To drive AI innovation, there should be exemptions for research activities and open-source AI components, as well as for so-called real-world AI labs (“regulatory sandboxes”), controlled environments created by public authorities to test AI before it is deployed. Civil rights are also to be strengthened: a right to complain and a right to information should allow everyone to report misuse or illegal use of AI and to receive information if it affects them personally. An example from everyday life: if AI is to be used in the workplace, employee representatives must be involved.
Numerous prohibitions included
Amnesty International urged MEPs to ban mass surveillance technologies in the law entirely: invasive facial recognition technologies would increase racist and discriminatory law enforcement measures against minorities. Indeed, the current proposal contains numerous prohibitions, covering real-time biometric recognition systems in public spaces; emotion recognition systems in law enforcement, border control, workplaces and educational institutions; biometric categorization systems using sensitive characteristics (e.g. gender, ethnicity, nationality, religion, political orientation); and the indiscriminate scraping of biometric data from social media or video surveillance footage to create facial recognition databases, a violation of human rights and the right to privacy.
The “AI Act” is a real opportunity to introduce regulations before a technology that potentially brings about far-reaching changes permeates everything; one can “get ahead of the wave here”, Matthias Kettemann of the University of Innsbruck told journalists a few days ago. There is no reason for “fear-mongering” in the sense of a gradual quasi-abolition of humanity, but regulations are definitely needed.
For the Austrian technology and regulation researcher Sandra Wachter, who works at the University of Oxford (UK), Europe is making history with this regulation. The problem, however, is that under the principle of “conformity assessment”, manufacturers of systems in the high-risk area would have to assess for themselves whether their product conforms to the rules and standards; a third party is only required if biometric data is used. So those who are supposed to follow the law are also the ones who decide whether or not they do, according to the scientist. The “AI Act” itself regards this as problematic, which is why this interim solution must be followed by a new one. In any case, for MEP Claudia Gamon (Neos) the regulation is “a great success”: “legal security without excessive regulation” is being created.
The EU, as Lukas Mandl (ÖVP) says, has often made the mistake of merely warding off risks without seizing opportunities at the same time. Not so this time. Mandl noted that Austria wants to implement essential aspects of the package as early as 2024, even though it will probably be adopted today by a large majority and only become reality by 2025; there is no time to lose here. And indeed, the trilogue negotiations with the Council and the Commission are due to start on Wednesday night.