A pickle in Meta’s LLM code could allow RCE attacks

Meta’s large language model (LLM) framework, Llama, contains a common open-source coding oversight that could allow attackers to execute arbitrary code on inference servers, leading to resource theft, data breaches, and AI model takeover.

The flaw, tracked as CVE-2024-50050, is a critical deserialization bug belonging to a class of vulnerabilities that arises from the improper use of the open-source pyzmq messaging library in AI frameworks: pyzmq’s convenience methods deserialize incoming data with Python’s pickle module, which can execute arbitrary code when fed untrusted input.

“The Oligo research team has discovered a critical vulnerability in meta-llama, an open-source framework from Meta for building and deploying Gen AI applications,” Oligo’s security researchers said in a blog post. “The vulnerability, CVE-2024-50050, enables attackers to execute arbitrary code on the llama-stack inference server from the network.”
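To see why this class of bug is so dangerous, consider the following minimal sketch. It is not Meta’s code, just an illustration of the underlying mechanism: Python’s pickle format lets an object dictate, via `__reduce__`, a callable that the deserializer will invoke. A server that unpickles network input (as pyzmq’s `recv_pyobj` does internally) therefore hands attackers code execution for free. The `Malicious` class and the `eval` payload below are hypothetical stand-ins for an attacker’s payload.

```python
import pickle


class Malicious:
    """Illustrative attacker-controlled object (not from Meta's code)."""

    def __reduce__(self):
        # On unpickling, Python calls eval("7 * 6") -- a harmless stand-in
        # for a real payload such as os.system("<attacker command>").
        return (eval, ("7 * 6",))


# The attacker serializes the object and sends the bytes over the network.
payload = pickle.dumps(Malicious())

# A server that blindly unpickles the bytes -- the behavior of pyzmq's
# recv_pyobj convenience method -- executes the attacker's callable here.
result = pickle.loads(payload)
print(result)  # prints 42: the attacker-controlled expression ran
```

The usual mitigation is to never unpickle untrusted data: receive raw bytes and parse them with a format that carries no code, such as JSON (`recv_json` in pyzmq’s API).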
