Europol, the European criminal police agency, has just published a report dedicated to the emergence of language models such as GPT, GPT-J and LLaMA. According to the agency, the rise of generative AI is likely to disrupt “all industries, including the criminal industries.”
Why Europol fears the rise of AI
Tools like ChatGPT have “huge potential” for criminals, Europol notes with concern. The report adds that criminals have already begun exploiting the possibilities of AI to commit crimes.
“ChatGPT is already capable of facilitating a significant number of criminal activities, […] such as terrorism and the sexual exploitation of children,” explains Europol, stressing that criminals are “generally quick to exploit new technologies”.
For example, scammers can rely on ChatGPT to write convincing phishing messages or ransom demands. Europol officials are also concerned that generative AI could be used to spread disinformation and propaganda on social media. With images designed by Midjourney or Stable Diffusion and messages written by ChatGPT, it is possible to manipulate internet users and “encourage them to trust criminal actors”. Some criminals are already using AI-generated videos to trick YouTube users. These fake videos direct internet users to corrupted download links riddled with viruses.
The agency also fears that AI will be used by hackers to orchestrate brute-force attacks or generate fake documents such as invoices and contracts. With the help of ChatGPT, some hackers have even started designing malware, such as ransomware or data stealers, without much technical knowledge.
“The possible exploitation of such AI systems by criminals offers bleak prospects,” warns Europol.
A source of information
More generally, a criminal can rely on ChatGPT, or any other intelligent chatbot, to learn how to commit specific crimes. To prevent such abuses, OpenAI has implemented filters and safeguards.
Unfortunately, it is possible to circumvent these limitations using a “prompt injection” attack. This type of attack, known to OpenAI, consists of crafting prompts that convince ChatGPT to ignore its programming and change its behavior. This is how some users got the model to explain how to build a bomb or produce cocaine. This information was already available on the web, but the chatbot makes it far easier to access.
“ChatGPT can significantly speed up the search process by providing key insights,” the agency believes.
The counterattack takes shape
In this context, Europol encourages law enforcement agencies to look into the matter and prepare for “what’s to come”. At the same time, the agency, which is primarily dedicated to the exchange of information between national police forces, recommends “raising public awareness” and building security mechanisms with the help of industry.
Finally, Europol believes that police forces should themselves explore the possibilities that AI offers. The report discusses building a language model trained solely on data provided by law enforcement agencies. This conversational agent would assist the police in their work, just as ChatGPT facilitates the task of criminals…