- OpenAI warns that future LLMs could help with zero-day development or advanced cyber espionage
- The company invests in defensive tools, access control and a tiered cybersecurity program
- A new Frontier Risk Council will guide safeguards and responsible capability across frontier models
Future OpenAI large language models (LLMs) could pose higher cybersecurity risks, as they may in theory be capable of developing working zero-day remote exploits against well-defended systems, or of meaningfully assisting complex, stealthy cyberespionage campaigns.
This is according to OpenAI itself, which said in a recent blog post that the cyber capabilities of its AI models are “advancing rapidly”.
While this may sound ominous, OpenAI frames it positively, saying the advances also bring “meaningful benefits for cyber defense”.
Strengthening cyber defense
To prepare for future models that could be misused in this way, OpenAI said it is “investing in strengthening models for defensive cybersecurity tasks and creating tools that make it easier for defenders to perform workflows such as auditing code and patching vulnerabilities”.
The best way to do that, according to the blog, is a combination of access control, infrastructure hardening, egress control, and monitoring.
In addition, OpenAI announced that it will soon introduce a program giving users and customers working on cybersecurity tasks tiered access to enhanced capabilities.
Finally, the Microsoft-backed AI giant said it plans to establish an advisory group called the Frontier Risk Council. The group will be made up of experienced cybersecurity experts and practitioners and, after an initial focus on cybersecurity, will expand its scope to other areas.
“Members will advise on the line between useful, responsible capability and potential abuse, and these insights will directly inform our evaluations and safeguards. We will share more on the council soon,” the blog reads.
OpenAI also noted that cyber abuse could emerge “from any frontier model in the industry”, which is why it participates in the Frontier Model Forum, where it shares knowledge and best practices with industry partners.
“In this context, threat modeling helps mitigate risk by identifying how AI capabilities can be weaponized, where critical bottlenecks exist for various threat actors, and how frontier models can provide meaningful benefits.”
Via Pakinomist