- OpenAI bans accounts linked to China and North Korea over malicious AI-assisted surveillance and phishing
- Chinese actors used ChatGPT to draft proposals for surveillance tools and behavioral profiling systems
- North Korean actors explored phishing, credential theft, and macOS malware development using reformulated prompts
OpenAI has banned Chinese, North Korean, and other accounts that were reportedly using ChatGPT to run surveillance campaigns, develop phishing techniques and malware, and engage in other malicious activities.
In a new report, OpenAI said it observed individuals allegedly affiliated with Chinese government units or state organizations using its large language model (LLM) to help draft proposals for surveillance systems and profiling technologies.
These included tools for monitoring individuals and analyzing behavioral patterns.
Exploring phishing
“Some of the accounts we banned appeared to be trying to use ChatGPT to develop tools for large-scale monitoring: analyzing data sets, often collected from Western or Chinese social media platforms,” the report reads.
“These users typically asked ChatGPT to help design such tools or generate promotional materials about them, but not to implement the surveillance itself.”
The prompts were framed in ways that avoided triggering safety filters, often couched as academic or technical research.
While the returned content did not directly enable surveillance, the output was reportedly used to refine documentation and planning for such systems.
The North Korean actors, meanwhile, used ChatGPT to explore phishing techniques, credential theft, and macOS malware development.
OpenAI said it observed these accounts testing prompts related to social engineering, password harvesting, and the development of malicious code, especially targeting Apple systems.
The model outright refused requests for malicious code, OpenAI said, but the company stressed that threat actors still tried to bypass these safeguards by rephrasing requests or asking for general technical help.
Like any other tool, LLMs are used by both financially motivated and state-sponsored threat actors for all kinds of malicious activity.
This abuse of AI is evolving, with threat actors increasingly integrating AI into existing workflows to improve their efficiency.
While developers like OpenAI work hard to minimize the risk and ensure their products cannot be misused in this way, many requests fall between legitimate and malicious use. This gray-zone activity, the report suggests, requires nuanced detection strategies.
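The report does not describe OpenAI's internal detection pipeline, but as a rough illustration of what a first-pass screen for gray-zone prompts might look like, the Python sketch below runs a prompt through OpenAI's public moderation endpoint and routes borderline results to human review rather than auto-blocking them. The threshold, the three-way verdict, and the review-queue logic are assumptions made for illustration, not anything attributed to OpenAI.

```python
# Hypothetical sketch: a first-pass screen for gray-zone prompts.
# Uses OpenAI's public moderation endpoint; REVIEW_THRESHOLD and the
# block/review/allow split are illustrative assumptions, not OpenAI's
# actual detection pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_THRESHOLD = 0.3  # assumed cut-off for "borderline" category scores


def screen_prompt(prompt: str) -> str:
    """Return 'block', 'review', or 'allow' for a user prompt."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]

    if result.flagged:
        return "block"  # clearly violative content

    # Gray zone: nothing flagged outright, but some category scores are
    # elevated - queue for human review instead of deciding automatically.
    scores = result.category_scores.model_dump()
    if any(score >= REVIEW_THRESHOLD for score in scores.values()):
        return "review"

    return "allow"


if __name__ == "__main__":
    print(screen_prompt("How do password-reset phishing emails typically work?"))
```

Per-prompt scoring like this would miss exactly the behavior the report describes: requests rephrased as academic research or general technical help. Catching that would require account-level signals across many interactions, which is beyond this sketch.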
Via The Register