Meta’s leaked AI chatbot guidelines raise questions about child safety


  • A leaked Meta document revealed that the company’s AI chatbot guidelines once allowed inappropriate responses
  • Meta confirmed the authenticity of the document and has since removed some of the most worrying sections
  • Amid calls for investigations, questions remain about how effective AI moderation can be

Meta’s internal standards for its AI chatbots were never meant to be public, and after they somehow found their way to Pakinomist, it is easy to understand why the tech giant did not want the world to see them. Meta wrestled with the complexity of AI ethics, children’s online safety and content standards, and produced what few would call a successful roadmap for AI chatbot rules.

The most disturbing details shared by Pakinomist concern how the chatbot talks to children. As reported by Pakinomist, the document says it is “acceptable [for the AI] to engage a child in conversations that are romantic or sensual” and to “describe a child in terms proving their attractiveness (e.g.: ‘Your youthful form is a work of art’).” Although the document prohibits explicit sexual discussion, this is still a shockingly intimate and romantic level of conversation with children for Meta AI to have allegedly considered acceptable.
