- OpenAI has banned accounts using ChatGPT for malicious purposes
- Disinformation and surveillance campaigns were uncovered
- Threat actors are increasingly using AI to harmful effect
OpenAI has confirmed that it recently identified a set of accounts involved in malicious campaigns, and has banned the users responsible.
The banned accounts involved in the 'Peer Review' and 'Sponsored Discontent' campaigns likely originate from China, OpenAI said in its February 2025 update on malicious uses of its models, and 'appear to have used, or attempted to use, models built by OpenAI and another US AI lab in connection with an apparent surveillance operation and to generate anti-American, Spanish-language articles'.
AI has facilitated a rise in disinformation and is a useful tool for threat actors looking to interfere with elections and undermine democracy in unstable or politically divided nations, and state-backed campaigns have used the technology to their advantage.
Surveillance and disinformation
The 'Peer Review' campaign used ChatGPT to generate 'detailed descriptions, consistent with sales pitches, of a social media listening tool that they claimed to have used to feed real-time reports about protests in the West to the Chinese security services', OpenAI confirmed.
As part of this surveillance campaign, the threat actors used the model to 'edit and debug code, and to generate promotional materials' for the suspected AI-powered social media listening tool, though OpenAI was unable to identify any social media posts resulting from the campaign.
ChatGPT accounts participating in the 'Sponsored Discontent' campaign were used to generate English-language comments and Spanish-language news articles, consistent with 'spamouflage' behavior, primarily featuring anti-American rhetoric, probably to stir discontent in Latin America, namely in Peru, Mexico, and Ecuador.
This is not the first time Chinese state-sponsored actors have been identified using 'spamouflage' tactics to spread disinformation. In late 2024, a Chinese influence campaign was discovered targeting American voters with thousands of AI-generated images and videos, most of them low quality and containing false information.