28 March 2023

Europol warns about potential criminal abuse of ChatGPT

The European Union's law enforcement agency Europol has warned about the potential misuse of OpenAI's AI-powered chatbot ChatGPT in phishing attempts, disinformation and cybercrime. In fact, miscreants are already using the chatbot to carry out malicious activities and are looking for ways to circumvent OpenAI's API restrictions.

“The impact these types of models might have on the work of law enforcement can already be anticipated,” Europol stated in its report. “Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT.”

Europol pointed out that as the capabilities of large language models (LLMs) such as ChatGPT continue to improve, so does the risk that criminals will exploit these types of AI systems.

“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse,” Europol says.

While all of the information ChatGPT provides is already available on the internet, the model makes it easier to find and understand how to carry out specific crimes. The agency also highlighted that ChatGPT could be exploited to impersonate targets, facilitate fraud and phishing, or produce propaganda and disinformation to support terrorism.

“ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes. Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language,” Europol notes.

“Critically, the context of the phishing email can be adapted easily depending on the needs of the threat actor, ranging from fraudulent investment opportunities to business e-mail compromise and CEO fraud. ChatGPT may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style. Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance to promote a fraudulent investment offer.”

In January, cybersecurity firm Check Point warned that bad actors are already taking advantage of the AI-based chatbot to develop malicious tools.

