- OpenAI is increasing its bug bounty payouts
- Spotting high-impact vulnerabilities could net researchers $100,000
- The move comes as more AI agents and systems are developed
OpenAI hopes to encourage security researchers to identify security vulnerabilities by increasing its rewards for discovering bugs.
The AI giant has revealed it is raising the top payout of its security bug bounty program from $20,000 to $100,000, expanding the scope of its cybersecurity grant program, and developing new tools to protect AI agents from malicious threats.
This follows recent warnings that AI agents can be hijacked to write and send phishing attacks, and the company is keen to underline its “commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems.”
Disrupting threats
Since the Cybersecurity Grant Program launched in 2023, OpenAI has received thousands of applications and funded 28 research initiatives, helping the company gain valuable insight into security topics such as autonomous cybersecurity defense, prompt injection, and secure code generation.
OpenAI says it continuously monitors malicious actors looking to exploit its systems, and identifies and disrupts targeted campaigns.
“We’re not just defending ourselves,” said the company. “We share tradecraft with other AI labs to strengthen our collective defenses. By sharing these emerging risks and collaborating across industry and government, we help ensure AI technologies are developed and deployed securely.”
OpenAI is not the only company boosting its rewards program; in 2024, Google announced a five-fold increase in its bug bounty rewards, arguing that more secure products make bugs harder to find, which should be reflected in higher payouts.
With more advanced models and agents, and more users and developers, there are inevitably more points of vulnerability to exploit, so the relationship between security researchers and software developers is more important than ever.
“We are engaging researchers and practitioners throughout the cybersecurity community,” OpenAI confirmed.
“This allows us to tap into the latest thinking and share our findings with those working towards a more secure digital world. To train our models, we work with experts across academic, government, and commercial labs to benchmark skills and obtain structured examples of advanced reasoning across cybersecurity domains.”
Via Cybernews