- Security researchers found a way to abuse Meta's Llama LLM to perform remote code execution
- Meta patched the problem in early October 2024
- The problem was the use of pickle as a serialization format for socket communication
Meta’s Llama Large Language Model (LLM) had a vulnerability that could have enabled threat actors to execute arbitrary code on the affected server, experts have warned.
Cybersecurity researchers from Oligo Security published an in-depth analysis of a bug tracked as CVE-2024-50050, which according to the National Vulnerability Database (NVD) has a severity score of 6.3 (medium).
The flaw was discovered in a component called Llama Stack, designed to streamline the deployment, scaling, and integration of large language models.
Oligo described the affected version as “vulnerable to deserialization of untrusted data, meaning an attacker can execute arbitrary code by sending malicious data that is deserialized.”
NVD describes the flaw as follows: “Llama Stack before revision 7a8aa775e5a267cf8660d83140011a0b7f91e005 used pickle as a serialization format for socket communication, potentially allowing remote code execution.”
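To see why unpickling data received over a socket is so dangerous, consider this minimal sketch (an illustration of the general pickle risk, not Llama Stack's actual code): pickle lets any object specify, via `__reduce__`, an arbitrary callable to invoke during deserialization.

```python
import pickle

class Exploit:
    # __reduce__ tells pickle how to rebuild the object on load;
    # an attacker controls which callable gets invoked.
    def __reduce__(self):
        # Benign stand-in for demonstration; a real attacker would
        # call os.system, subprocess.run, or similar here.
        return (eval, ("6 * 7",))

payload = pickle.dumps(Exploit())  # bytes an attacker could send over the socket
result = pickle.loads(payload)     # deserializing runs eval("6 * 7")
print(result)                      # attacker-chosen code has already executed
```

By the time `pickle.loads` returns, the attacker's callable has already run, which is why the pickle documentation warns against unpickling untrusted input.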
“Socket communication has been changed to use JSON instead,” it added.
The researchers tipped off Meta about the flaw on September 24, and the company responded on October 10 by pushing version 0.0.41. The Hacker News notes that the flaw has also been remedied in pyzmq, a Python library that provides access to the ZeroMQ messaging library.
Alongside the patch, Meta released a security advisory telling the community it had addressed a remote code execution risk associated with using pickle as a serialization format for socket communication. The solution was to switch to the JSON format.
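The safer approach can be sketched as follows (a generic illustration of the fix's principle, not Meta's actual implementation): JSON can only represent plain data such as dicts, lists, strings, and numbers, so decoding it never instantiates objects or invokes callables.

```python
import json

# A message serialized for socket transport (field names are
# hypothetical, chosen only for illustration).
message = {"op": "completion", "prompt": "hello"}

wire = json.dumps(message).encode("utf-8")    # bytes sent over the socket
received = json.loads(wire.decode("utf-8"))   # safe: yields plain data only
```

Unlike pickle, a malicious peer can at worst send malformed JSON, which raises a parse error rather than executing code.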
Llama, or Large Language Model Meta AI, is a family of large language models developed by the social media giant Meta. These models are designed for Natural Language Processing (NLP) tasks such as text generation, summarization, translation, and more.