OpenAI admits that new models are likely to pose a ‘high’ cybersecurity risk


  • OpenAI warns that future LLMs could help with zero-day development or advanced cyber espionage
  • The company invests in defensive tools, access control and a tiered cybersecurity program
  • The new Frontier Risk Council will guide safeguards and responsible capability development across frontier models

Future OpenAI large language models (LLMs) could pose higher cybersecurity risks: they could, in theory, develop working zero-day remote exploits against well-defended systems, or meaningfully assist in complex and stealthy cyberespionage campaigns.

This is according to OpenAI itself, which said in a recent blog post that the cyber capabilities of its AI models are “advancing rapidly”.
