As artificial intelligence tools like ChatGPT grow increasingly popular, Europe is considering strengthening the GDPR to better regulate the use of personal data. These AIs, deployed in connected devices such as smartwatches and on online platforms, can access sensitive information, often without users being aware of it. Given these risks, the European Union is proposing regulatory measures aimed at protecting personal data and informing users about how AI systems use their information.
While European Data Protection Day aims to encourage people to take control of their private data online, the future of that data may be heading for a revolution: the dramatic arrival of powerful artificial intelligence (AI) tools such as ChatGPT. While these AIs impress with their rhetorical skill, they are increasingly used to process the vast amounts of personal data we often hand over to platforms without realizing it. From this perspective, AI is not without risks.
For this reason, Europe wants to supplement its General Data Protection Regulation (GDPR) with a set of harmonized rules governing the use of AI. And AI is now everywhere: we wear it on our wrists day and night in connected watches and bracelets that collect health data and can even detect certain pathologies. Yet consumers do not always realize that asking a conversational tool personal questions, medical ones for instance, means handing the companies behind that artificial intelligence confidential information that could be exploited for commercial purposes. And that is not the only concern: many actors are involved in an AI system, whether the developer, the provider, the importer, the distributor or the end user. This chain remains largely opaque to consumers, making it difficult to know who actually has access to their personal data and who would be liable if problems arise.
Better information about AI algorithms
As the use of these AIs increases, so does the risk of data leaks or loss of control over personal data. To protect themselves, consumers should therefore find out which company collects their data and what its policies are for processing it. This is not always easy, even though some players in the industry are more virtuous than others. This is particularly true of Apple, which promotes data confidentiality by, for example, requiring application developers to obtain users' consent before collecting their data.
To better protect users, the European Union has therefore proposed three texts: a regulatory framework for artificial intelligence, an AI liability directive and a product liability directive. With these additional rules, the EU wants to force the digital giants, along with other platforms and social networks, to better inform users about their algorithms. And to compel compliance, the texts provide for significant sanctions: fines for failing to meet the new obligations could range from 10 to 30 million euros, or 2% to 4% of turnover. It now remains for the institution to adopt these texts quickly, before AIs take even more liberties.