Cybersecurity experts have developed an artificial intelligence system that answers scammers’ calls and wastes their time by keeping the conversation going as long as possible.
At a time when artificial intelligence (AI) is being used for phone scams, cybersecurity experts are looking to thwart scammers with the same technology. A team from Macquarie University in Australia has developed an AI system that creates convincing fake victims in the form of multilingual chatbots. The objective: waste scammers’ time to reduce the $55 billion that victims lose to them every year.
Wasting scammers’ time
The system, called Apate after the Greek goddess of deception, aims to turn the tables on scammers. Dali Kaafar, executive director of the university’s cybersecurity center, came up with the idea after receiving a fraudulent phone call while having lunch with his family. He managed to keep the scammer on the line for 40 minutes, much to his children’s amusement. “I realized that while I was wasting the scammer’s time so he couldn’t reach vulnerable people (and that was the point), it was also 40 minutes of my own life that I would never get back,” he explained.
Dali Kaafar then started thinking about a way to automate the process and “use natural language processing to develop a computerized chatbot that could have a credible conversation with the scammer.” The team first analyzed fraudulent phone calls, using machine learning and natural language processing to identify the social engineering techniques scammers use on their victims.
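To give a flavor of what identifying social engineering techniques might look like, here is a minimal, purely illustrative sketch (not the team’s actual pipeline): it tags scam-call utterances with common manipulation tactics using a simple keyword heuristic. The technique labels and keyword lists are assumptions for the example.

```python
# Illustrative sketch only, not Apate's real classifier: tag a scam-call
# utterance with the social engineering tactics it appears to rely on.
# The labels and keyword lists below are assumptions for demonstration.

TECHNIQUE_KEYWORDS = {
    "urgency": ["immediately", "right now", "last chance", "within 24 hours"],
    "authority": ["tax office", "police", "your bank", "government"],
    "fear": ["arrest", "suspended", "blocked", "fine"],
}

def tag_techniques(utterance: str) -> list[str]:
    """Return the social engineering techniques hinted at in an utterance."""
    text = utterance.lower()
    found = []
    for technique, keywords in TECHNIQUE_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            found.append(technique)
    return found

print(tag_techniques("This is your bank. Your card will be blocked immediately."))
# → ['urgency', 'authority', 'fear']
```

A real system would replace the keyword lists with a trained text classifier, but the input/output shape (utterance in, tactic labels out) is the same idea.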
Forwarding fraudulent calls to chatbots
The chatbots were then trained on a dataset of real fraudulent conversations (recordings of scam calls, transcripts of scam emails, etc.) so they could generate scam-like conversations of their own. “The conversational AI bots we’ve developed can fool scammers into thinking they’re talking to potential fraud victims, so they spend time trying to scam the bots,” said Dali Kaafar.
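The core behavior described here, a “fake victim” that keeps the scammer talking, can be sketched with a toy rule-based responder. This is an assumption-laden illustration, not the trained conversational model the team built; the reply texts and the card-related rule are invented for the example.

```python
import random

# Illustrative sketch, not the Apate system itself: a rule-based "fake
# victim" that stalls scammers with clarifying questions and delays.
# All reply texts and rules here are invented for demonstration.

STALLING_REPLIES = [
    "Sorry, could you repeat that? The line is bad.",
    "Let me find my glasses before I read out any numbers.",
    "Which department did you say you were calling from?",
    "Hold on, someone is at the door.",
]

def stalling_reply(scammer_utterance: str, rng: random.Random) -> str:
    """Pick a plausible time-wasting response to keep the scammer on the line."""
    if "card" in scammer_utterance.lower():
        # Feign confusion about specifics instead of refusing outright.
        return "I have several cards, which one do you mean?"
    return rng.choice(STALLING_REPLIES)

rng = random.Random(0)
print(stalling_reply("Please confirm your card number.", rng))
# → I have several cards, which one do you mean?
```

In the real system, a model trained on scam transcripts generates these replies dynamically; the design goal is the same: never end the call, always ask for one more clarification.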
The chatbots are currently being tested on live scam calls, which are redirected away from victims to the system developed by the experts. To increase the chances of receiving fraudulent calls, the team exposed the chatbots’ phone numbers by seeding them into spam apps and posting them on websites. The chatbots currently keep fraudsters on the line for an average of five minutes; the goal is 40 minutes.
To deploy the technology globally, the team is in discussions with several telecommunications providers. “Working with communications providers will be key to making this really effective,” said the executive director of the university’s cybersecurity center.