Security

Top 4 LLM threats to the enterprise

AI versus AI

The US Government, arguably the largest network in the world, certainly understands the value of AI security policy as it seeks to leverage the promise of AI across government and military applications. In October 2023, the Whitehouse issued an executive order (EO) for safe AI development and use.

The Cybersecurity and Infrastructure Security Agency (CISA), part of the Department of Homeland Security (DHS), plays a critical role in executing the executive order and has generated an AI roadmap that incorporates key CISA-led actions as directed by the EO—along with additional actions CISA is leading to support critical infrastructure owners and operators as they navigate the adoption of AI. 

As a result of the executive order, several key government agencies have already identified, nurtured, and appointed new chief AI officers responsible for coordinating their agency’s use of AI, promoting AI innovation while managing risks from their agency’s use of AI, according to Lisa Einstein, CISA’s senior advisor for AI.

“With AI embedded into more of our everyday applications, having a person who understands AI—and who understands the positive and negative implications of integrating AI—is critical,” Einstein explains. “Risks related to LLM use is highly contextual and use-case specific based on industry, whether it be healthcare, schools, energy, or IT. So, AI champions need to be able to work with industry experts to identify risks specific to the context of their industries.”

Within government agencies, Einstein points to the Department of Homeland Security’s Chief AI Officer Eric Hysen, who is also DHS’s CIO. Hysen coordinates AI efforts across DHS components, she explains, including the Transportation Security  Administration, which uses IBM’s computer vision to detect prohibited items in carry-on luggage. DHS, in fact, leverages AI in many instances to secure the homeland at ports of entry and along the border, as well as in cyberspace to protect children, defend against cyberthreats, and even to combat the malicious use of AI.

As LLM threats evolve, it will take equally innovative AI-enabled tools and techniques to combat them. AI-enhanced penetration testing and red teaming, threat intelligence, anomaly detection, incident response are but some of the tool types that are quickly adapting to fight these new threats.

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button