Science
After tough negotiations between member states, the European Union has agreed on stricter requirements for “high-risk” AI applications, such as those in critical infrastructure or used by security authorities. The move has also been welcomed by AI researchers in Styria.
16/12/2023 09:37
Texts written as if by magic, strikingly realistic images generated in seconds from a few keywords, but also manipulated videos used for fraudulent purposes: the possibilities of artificial intelligence seem almost limitless. With the rules it has now adopted, the EU wants to set those possibilities some limits.
Weighing where AI can take on significant responsibility
Roman Kern, AI researcher at the Know-Center at TU Graz, says the EU asked itself the important question: “What regulations are needed to restrict where AI may be used? The approach is risk-based: you say, for example, that artificial intelligence must not be used to control nuclear power plants. In other areas, where it hardly matters, AI may do whatever it wants. And in between there is an area where you are not sure. That is where trustworthy AI comes into play. It means using artificial intelligence that has to meet certain requirements.”
Certificates as guidelines
To this end, certificates are to be developed and awarded to trustworthy solution and service providers in the field of artificial intelligence, after an in-depth examination of how their systems operate: “It will go exactly in this direction. These certificates then attest that privacy is protected, that legal and social norms are respected, and so on.”
For Kern, this is a positive development: “Yes, this is an examination of the possibilities and capabilities of artificial intelligence, but at the same time the question arises: where are the limits to be set? How can I tame artificial intelligence?” The corresponding EU regulations are expected to be worked out by the end of the year.