- North Korean hackers used ChatGPT to generate a fake military ID for spear-phishing attacks on South Korean defense institutions
- Kimsuky, a well-known threat actor, was behind the attack and has previously targeted policy, academic, and nuclear organizations worldwide
- Jailbreaking AI tools can bypass protective measures, enabling the creation of illegal content such as deepfaked IDs despite built-in restrictions
North Korean hackers managed to fool ChatGPT into creating a fake military ID card, which they later used in spear-phishing attacks against South Korean defense-related institutions.
The South Korean security research outfit Genians Security Center (GSC) reported the news after obtaining a copy of the ID and analyzing its origin.
According to Genians, the group behind the fake ID card is Kimsuky, a notorious state-sponsored threat actor responsible for high-profile attacks such as those against Korea Hydro & Nuclear Power Co., the UN, and various think tanks, political institutes, and academic institutions across South Korea, Japan, the United States, and other countries.
Fooling ChatGPT with a “mock-up” request
In general, OpenAI and other companies that build generative AI solutions have created strict guardrails to prevent their products from generating malicious content. As such, malware code, phishing emails, instructions on how to make bombs, deepfakes, copyrighted content and, obviously, identity documents are out of bounds.
However, there are ways to trick the tools into returning such content, a practice generally known as “jailbreaking” large language models. In this case, Genians says the headshot used was publicly available, and the criminals probably requested a “trial design” or a “mock-up” to force ChatGPT to return the ID image.
“Since military government employee IDs are legally protected identification documents, producing copies in identical or similar form is illegal. As a result, when asked to generate such an ID copy, ChatGPT returns a refusal,” Genians said. “However, the model’s response may vary depending on the prompt settings or persona role.”
“The deepfake image used in this attack fell into this category. Because creating forged IDs with AI services is technically straightforward, extra caution is required.”
The researchers further explained that the victim was a “South Korean defense-related institution” but declined to name it.
Via The Register