A suspected North Korean hacking group known as Kimsuky has used artificial intelligence tools, including ChatGPT, to generate a deepfake South Korean military ID as part of a sophisticated phishing operation, according to new research from South Korean cybersecurity firm Genians.
The cyberattack, uncovered in July, involved a fake draft of a South Korean military identification card created using generative AI. The image was designed to lend legitimacy to a phishing email sent to a targeted group of recipients, including North Korea analysts, journalists, human rights activists, and defense experts. Rather than attaching the image directly, the attackers embedded a malicious link that downloaded malware onto victims' devices.
Once opened, the attached ZIP file triggered a chain of commands, including obfuscated PowerShell scripts and communication with attacker-controlled servers in South Korea and France. The malware downloaded a fake ID image and executed a batch script to install additional spyware.
Researchers confirmed that the fake ID was AI-generated with help from ChatGPT. Kimsuky, also tracked as Emerald Sleet or Velvet Chollima, has previously been linked to intelligence-gathering missions ordered by the North Korean regime.
In June, OpenAI said it had taken down ChatGPT accounts linked to Russian, Chinese, Iranian, and North Korean state-sponsored hacking groups. The malicious uses of ChatGPT mainly fell into three categories: creating fake social media comments; improving malware and assisting with cyberattacks; and running job scams targeting people in other countries.
The attack comes just weeks after AI firm Anthropic revealed that North Korean hackers had used its Claude Code model to impersonate developers and secure remote jobs at US tech companies.