Security

Is your cloud security strategy ready for LLMs?

Brian Levine, an Ernst & Young managing director for cybersecurity and data privacy, points to end users, whether employees, contractors, or third parties with privileges, leveraging shadow LLMs as a massive security problem and one that can be difficult to control. “If employees are using their work devices, existing tools can identify when employees visit known unauthorized LLM sites or apps and even block access to such sites,” he says. “But if employees use unauthorized AI on their own devices, companies have a bigger challenge because it is currently harder to reliably differentiate content generated by AI from user-generated content.”
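
How that blocking works varies by product, but the core mechanism is typically a curated denylist of known GenAI endpoints enforced at the proxy or DNS layer. The following is a minimal sketch of that idea, not any specific vendor's control; the domain list and the `should_block` function are illustrative assumptions.

```python
# Illustrative sketch: flag or block outbound requests to known GenAI domains.
# The denylist below is a hypothetical example; real tools ship curated,
# frequently updated category lists for generative AI sites.

from urllib.parse import urlparse

SHADOW_LLM_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def should_block(url: str) -> bool:
    """Return True if the request targets a known unauthorized LLM site."""
    host = (urlparse(url).hostname or "").lower()
    # Match the domain itself or any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in SHADOW_LLM_DOMAINS)

if __name__ == "__main__":
    for url in ("https://chat.openai.com/", "https://example.com/"):
        print(url, "-> blocked" if should_block(url) else "-> allowed")
```
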

For the moment, enterprises are dependent on the security controls within the LLMs they license, assuming they are not deploying homegrown LLMs built by their own people. “It is important that the company do appropriate third-party risk management on the AI vendor and product. As the threats to AI evolve, the methods for compensating for those threats will evolve as well,” Levine says. “Currently, much of the compensating controls must exist within the AI/LLM algorithms themselves or rely on the users and their corporate policies to detect threats.”

Security testing and decision making must now take AI into account

Ideally, security teams should ensure that AI awareness is baked into every security decision, especially in environments where zero trust is being considered. “Traditional EDR, XDR, and MDR tools are primarily designed to detect and respond to security threats on conventional IT infrastructure and endpoints,” says Chedzhemov. This makes them ill-equipped to handle the security challenges posed by cloud-based or on-premises AI applications, including LLMs.

“Security testing now must focus on AI-specific vulnerabilities, ensuring data security, and compliance with data protection regulations,” Chedzhemov adds. “For example, there are additional risks and concerns around prompt hijacking, intentional breaking of alignment, and data leakage. Continuous re-evaluation of AI models is necessary to address drifts or biases.”
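
What testing for issues such as prompt hijacking and data leakage looks like depends heavily on the application, but many teams start with simple guardrails around the model call. The sketch below is a hypothetical example rather than a specific vendor's control: it screens incoming prompts for common injection phrasing and scans model output for patterns that suggest sensitive data is leaking. The pattern lists and function names are assumptions made for illustration.

```python
import re

# Hypothetical guardrail sketch: screen prompts for common injection phrasing
# and scan model output for patterns that look like leaked sensitive data.

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
]

LEAK_PATTERNS = {
    "api_key": r"\b(sk|key)-[A-Za-z0-9]{16,}\b",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
}

def looks_like_injection(prompt: str) -> bool:
    """Flag prompts that match known prompt-hijacking phrasings."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def find_leaks(output: str) -> list[str]:
    """Return the names of sensitive patterns found in model output."""
    return [name for name, p in LEAK_PATTERNS.items() if re.search(p, output)]

def guarded_call(model, prompt: str) -> str:
    """Gate a model call with both checks; `model` is any callable returning text."""
    if looks_like_injection(prompt):
        raise ValueError("Prompt rejected: possible hijacking attempt")
    output = model(prompt)
    if find_leaks(output):
        return "[response withheld: potential data leakage detected]"
    return output
```
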

Chedzhemov recommends embedding AI security considerations throughout the development lifecycle to foster closer collaboration between AI developers and security teams. “Risk assessments should factor in unique AI-related challenges, such as data leaks and biased outputs,” he says.

Hasty LLM integration into cloud services creates attack opportunities

Itamar Golan, the CEO of Prompt Security, points to the intense urgency inside businesses today as a critical concern. That urgency in many firms developing these models is encouraging all manner of security shortcuts in coding. “This urgency is pushing aside many security validations, allowing engineers and data scientists to build their GenAI apps sometimes without any limitations. To deliver impressive features as quickly as possible, we see more and more occasions when these LLMs are integrated into internal cloud services like databases, computing resources and more,” Golan says.
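
The risk Golan describes is easiest to see when an LLM is wired directly into a database. The sketch below contrasts a hasty integration with a more constrained one; it is a hypothetical illustration under assumed names (the `llm` callable, the `app.db` file, and the read-only check are not from any specific product).

```python
import sqlite3

# Hypothetical sketch: an LLM asked to answer questions by generating SQL.
# `llm` stands in for any callable that returns text for a given prompt.

def risky_integration(llm, question: str) -> list:
    """Hasty pattern: execute whatever SQL the model produces, read-write."""
    conn = sqlite3.connect("app.db")
    sql = llm(f"Write SQL to answer: {question}")
    return conn.execute(sql).fetchall()  # injection and data-exfiltration risk

def constrained_integration(llm, question: str) -> list:
    """Safer pattern: read-only connection plus a minimal statement check."""
    conn = sqlite3.connect("file:app.db?mode=ro", uri=True)  # read-only
    sql = llm(f"Write a single SELECT statement to answer: {question}")
    statement = sql.strip().rstrip(";")
    if not statement.lower().startswith("select") or ";" in statement:
        raise ValueError("Only a single SELECT statement is allowed")
    return conn.execute(statement).fetchall()
```
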
