- Meta AI assigned unique identifiers to prompts and answers
- The servers did not check whether the requester had access rights to these identifiers
- The vulnerability was patched by the end of January 2025
A bug that could have exposed users' prompts and AI responses on Meta's artificial intelligence platform has been patched.
The flaw stemmed from the way Meta AI assigned identifiers to both prompts and answers.
When a logged-in user edits a previous prompt to regenerate an answer, Meta AI assigns a unique identifier to both the prompt and the response. By changing this number, an attacker could get Meta's servers to return someone else's queries and results.
No abuse so far
The bug was discovered by security researcher and AppSecure founder Sandeep Hodkasia at the end of December 2024. He reported it to Meta, which deployed a fix on January 24, 2025 and paid him a $10,000 bug bounty for his trouble.
Hodkasia said the prompt numbers that Meta's servers generated were easily guessable, but apparently no threat actors exploited the flaw before it was addressed.
In essence, Meta's servers did not double-check whether the user requesting the content had permission to view it.
This is clearly problematic in several ways, the most obvious being that many people share sensitive information with chatbots these days.
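This class of flaw is commonly called an insecure direct object reference (IDOR). A minimal sketch of the idea, with entirely hypothetical names and data (this is not Meta's actual code), shows how a missing ownership check lets any caller read another user's record simply by changing the identifier:

```python
# Hypothetical in-memory store keyed by guessable sequential IDs,
# illustrating the IDOR pattern described in the article.
conversations = {
    101: {"owner": "alice", "prompt": "cheap VPN options?", "answer": "..."},
    102: {"owner": "bob", "prompt": "summarize my contract", "answer": "..."},
}

def get_conversation_vulnerable(conv_id, current_user):
    # Vulnerable: returns the record for whatever ID the caller supplies,
    # never checking who owns it.
    return conversations.get(conv_id)

def get_conversation_fixed(conv_id, current_user):
    # Fixed: the server verifies ownership before returning anything.
    record = conversations.get(conv_id)
    if record is None or record["owner"] != current_user:
        return None  # in a real API this would be an HTTP 403 or 404
    return record
```

With the vulnerable version, `get_conversation_vulnerable(102, "alice")` hands Alice Bob's prompt and answer; the fixed version returns nothing for IDs the caller does not own.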
Business documents, contracts, reports, and personal information are uploaded to LLMs every day, and in many cases people use AI tools as psychotherapists, sharing intimate life details and private revelations.
This information can be abused, among other things, in highly tailored phishing attacks that can lead to infostealer deployment, identity theft, or even ransomware.
For example, if a threat actor knows that a person asked the AI about cheap VPN solutions, they could send that person an email offering a great, cost-effective product that is nothing but a backdoor.
Via TechCrunch



