Is Italy leading the fight against artificial intelligence? The country announced Friday that it was blocking the chatbot ChatGPT over data-use concerns, two months after banning Replika, another program marketed as a “virtual friend.”
In a press release, the Italian data protection authority warns that its decision takes “immediate effect” and accuses ChatGPT of failing to comply with European regulations and of lacking any system to verify the age of its users.
The decision entails “the temporary restriction of the processing of Italian users’ data with regard to OpenAI”, the company behind the application, it adds.
ChatGPT appeared in November and was quickly adopted by users impressed by its ability to answer difficult questions clearly, write sonnets or produce computer code.
Funded by tech giant Microsoft, which has added it to several of its services, it is sometimes portrayed as a potential rival to Google’s search engine.
The Italian authority emphasizes in its press release that ChatGPT “suffered a loss of data on March 20 concerning users’ conversations and the payment information of subscribers to the paid service”.
It also criticizes the company for “the lack of an information notice for users whose data is collected by OpenAI, but above all the absence of a legal basis justifying the massive collection and storage of personal data for the purpose of training the algorithms that run the platform”.
Although the chatbot is intended for people aged 13 and over, the agency stresses that the absence of any filter to verify users’ ages exposes minors to answers that are entirely unsuited to their level of development.
In early February, the same authority blocked the Replika application, which lets users chat with a customized avatar. Some users had complained of receiving overly risqué messages and images verging on sexual harassment.
This time, the authority asks OpenAI to “communicate within 20 days the measures taken” to remedy the situation, “under penalty of a fine of up to 20 million euros or up to 4% of annual worldwide turnover”, the maximum provided for by the European regulation on personal data (GDPR).
This case shows that the GDPR, which has already cost tech giants billions of dollars in fines, could also become the enemy of the new content-generating AIs.
The European Union is also preparing a draft law regulating artificial intelligence, which could be finalized in late 2023 or early 2024 and could come into force a few years later.
AI, after all, fuels fears that run much deeper than the mere exploitation of personal data.
The European police agency Europol warned on Monday that criminals are poised to use artificial intelligence tools such as the chatbot ChatGPT to commit fraud and other cybercrimes.
From phishing to disinformation and malware, the rapidly evolving capabilities of chatbots are likely to be swiftly exploited by malicious actors, Europol said in a report.
Chatbots can also be used to cheat on exams, and ChatGPT was banned in several schools and universities around the world shortly after its release. Large companies have also advised their employees not to use the application for fear of leaking sensitive data.
Finally, billionaire Elon Musk, a co-founder of OpenAI who later left its board, and hundreds of global experts signed a call on Wednesday for a six-month pause in research on AI systems more powerful than GPT-4, the latest version of the software behind ChatGPT released in mid-March, citing “great risks to humanity”.