Cybercriminals also use ChatGPT to make their jobs easier

  • The cybersecurity industry is already seeing evidence of criminals using ChatGPT.
  • ChatGPT can quickly generate targeted phishing emails or malicious code for malware attacks.
  • AI companies could be held liable for chatbots advising criminals as Section 230 may not apply.

Whether it’s writing essays or analyzing data, ChatGPT can be used to lighten a person’s workload. This also applies to cyber criminals.

Sergey Shykevich, a senior researcher studying ChatGPT at the cybersecurity company Check Point, has already seen cybercriminals use the power of AI to create code that can be used in a ransomware attack.

Shykevich’s team began investigating AI’s potential for cybercrime in December 2021. Using the AI’s large language model, they created phishing emails and malicious code. Once it became clear that ChatGPT could be used for illegal purposes, Shykevich told Insider, the team wanted to see whether their findings were “theoretical” or whether they could “find the bad guys using it in the wild.”

Because it’s hard to tell if a malicious email sent to someone’s inbox was written using ChatGPT, his team turned to the dark web to see how the application was being used.

On December 21, they found their first evidence: cybercriminals were using the chatbot to create a Python script that could be used in a malware attack. The code had some errors, Shykevich said, but much of it was correct.

“What’s interesting is that these guys who posted it have never developed anything before,” he said.

Shykevich said ChatGPT and Codex, an OpenAI service that can write code for developers, will “allow less experienced people to be ostensible developers.”

The abuse of ChatGPT – which is now powering Bing’s new, already troubling, chatbot – is worrying cybersecurity professionals who see the potential of chatbots to help with phishing, malware and hacking attacks.

Justin Fier, director of cyber intelligence & analytics at the cybersecurity firm Darktrace, told Insider that when it comes to phishing attacks, the barrier to entry is already low, but ChatGPT could make it straightforward for people to efficiently compose dozens of targeted scam emails – as long as they write good prompts.

“Phishing is all about volume – imagine 10,000 emails, highly targeted. And now instead of 100 positive clicks, I have 3,000 or 4,000,” Fier said, referring to a hypothetical number of people who might click on a phishing email designed to trick users into revealing personal information, such as bank passwords. “It’s huge, and it’s all about that targeting.”

A “Sci-Fi Movie”

In early February, the cybersecurity company BlackBerry released a survey of 1,500 IT professionals, 74% of whom said they were concerned about ChatGPT’s potential to aid cybercrime.

The survey also revealed that 71% believe ChatGPT may already be used by nation states to target other countries through hacking and phishing attempts.

“It’s well documented that people with malicious intent are testing the waters, but over the course of this year we expect hackers to have a much better grasp of how to successfully use ChatGPT for nefarious purposes,” Shishir Singh, Chief Technology Officer of Cybersecurity at BlackBerry, wrote in a press release.

Singh told Insider these fears stem from the rapid advancement of AI over the past year. Experts have said advances in large language models – which are now better at mimicking human speech – have been faster than expected.

Singh described the rapid innovation as something out of a “science fiction movie”.

“Whatever we’ve seen in the last 9 to 10 months, we’ve only seen it in Hollywood,” Singh said.

Cybercrime use could put a strain on OpenAI

As cybercriminals begin adding things like ChatGPT to their toolkits, experts like former federal prosecutor Edward McAndrew wonder if companies would bear any responsibility for these crimes.

For example, McAndrew, who worked with the Justice Department on cybercrime investigations, pointed out that if ChatGPT or a similar chatbot advises someone to commit a cybercrime, the companies behind those chatbots could be exposed to liability.

When dealing with unlawful or criminal content posted by third-party users on their websites, most technology companies cite Section 230 of the Communications Decency Act of 1996. The act states that providers of websites that let people post content – like Facebook or Twitter – are not responsible for the speech on their platforms.

However, because the speech would come from the chatbot itself, McAndrew said the law may not protect OpenAI from civil lawsuits or criminal prosecution – although open-source versions could make it harder to link cybercrime to OpenAI.

The scope of Section 230’s legal protections for tech companies is also being challenged in the Supreme Court this week by the family of a woman killed by ISIS terrorists in 2015. The family argues that Google should be held accountable for its algorithm promoting extremist videos.

McAndrew also said ChatGPT could provide a “treasure trove of information” for those tasked with collecting evidence of such crimes, if they were able to subpoena companies like OpenAI.

“These are really interesting questions that have been around for years,” McAndrew said, “but as we can see, it has been true since the dawn of the internet that criminals were among the early adopters. And we’re seeing that again with a lot of the AI tools.”

Faced with these questions, McAndrew said he foresees a political debate about how the US – and the world at large – will set parameters for AI and technology companies.

In the BlackBerry survey, 95% of IT respondents said governments should be responsible for creating and enforcing regulations.

McAndrew said the task of regulation could be challenging, as no single agency or level of government is charged solely with creating mandates for the AI industry, and the issue of AI technology extends beyond US borders.

“We’re going to need international coalitions and international norms on cyber behavior, and I expect that will take decades to develop, if we can ever do it,” he said.

The technology is still not perfect for cybercriminals

One thing about ChatGPT that could make cybercrime harder is that it’s known for being confidently inaccurate – which could pose a problem for a cybercriminal trying to compose an email meant to mimic someone else, experts told Insider. In the code that Shykevich and his colleagues discovered on the dark web, the errors needed to be fixed before it could help with a scam.

Additionally, OpenAI continues to add guardrails to ChatGPT to prevent illegal activity, although these guardrails can often be circumvented with the right prompts. Shykevich pointed out that some cybercriminals are now leaning toward ChatGPT’s API models – versions of the application that don’t have the same content restrictions as the web interface.

Shykevich also said that ChatGPT cannot currently help create sophisticated malware or fake websites that look like a well-known bank’s website, for example.

However, that could one day become a reality, as the AI arms race created by tech giants could accelerate the development of better chatbots, Shykevich told Insider.

“I’m more worried about the future and it seems now that the future isn’t in 4-5 years, it’s more like a year or two,” Shykevich said.

OpenAI did not immediately respond to Insider’s request for comment.
