- Meta makes its Frontier AI Framework available to all
- The company says it’s worried about AI-induced cyber security threats
- Risk assessments and threat modeling will categorize AI models as critical, high, or moderate risk
Meta has revealed some concerns about the future of AI despite CEO Mark Zuckerberg’s well-publicized intention of making artificial general intelligence (AGI) openly available to everyone.
The company’s recently released Frontier AI Framework examines some “critical” risks that AI could pose, including its potential consequences for cyber security and for chemical and biological weapons.
By making its guidelines publicly available, Meta hopes to collaborate with other industry leaders to “anticipate and mitigate” such risks, identifying potentially “catastrophic” outcomes and using threat modeling to establish risk thresholds.
Stating that “open sourcing AI is not optional; it is important,” Meta outlined in a blog post how sharing research helps organizations learn from each other’s assessments and encourages innovation.
The framework works by proactively running periodic threat modeling exercises to supplement its AI risk assessments; modeling will also be used if and when an AI model is identified as potentially “exceeding current frontier capabilities” to the point where it becomes a threat.
These processes are informed by internal and external experts, and they place a model in one of three risk categories: “critical,” where development of the model must stop; “high,” where the model must not be released in its current state; and “moderate,” where the release strategy is given further consideration.
The threats considered include the discovery and exploitation of zero-day vulnerabilities, automated scams and fraud, and the development of high-impact biological agents, which Meta says it aims to mitigate while preserving “the benefits of society from these technologies.”
The company has committed to updating its framework with input from academics, policymakers, civil society organizations, governments and the broader AI community as the technology continues to develop.