Surviving the cyber arms race in the age of generative AI

The swift emergence of generative AI has already tipped the scales in cybersecurity, prompting action from governments, including a sweeping executive order (EO) issued by US President Joe Biden in October 2023.

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence offers guidance on how to ensure the safety of this emerging technology, something previous orders lacked. It also outlines the challenges associated with AI’s rapid acceleration. While the EO seeks to make domestic use of AI safe, secure, and trustworthy, perhaps the tallest order is the race to harness the potential of AI for the good guys while preventing its use by the bad guys. This raises the question: over the next five years, who will benefit more, defenders or attackers? The answer remains unclear.

The one certainty is that both defenders and attackers want to reap the advantages of generative AI. What we cannot predict at this point is whether one side will gain the upper hand. It’s a race that will require an investment of time, effort, and expense from both groups, and each side will see bursts of success.

It doesn’t have to be entirely chaotic. Organizations, security practitioners, and government agencies can take steps now to keep pace with attackers, and perhaps even take the lead, through greater collaboration, ongoing legislative frameworks, and a secure space in which innovation can thrive.

AI supercharges both threat actors and security teams

For attackers, AI adds unprecedented speed and power to social engineering and impersonation attacks, particularly at scale. Without AI, a phishing attack targeting a CFO’s email is time-consuming: attackers must first sift through old emails to get a sense of the executive’s communication style before mimicking it in their messages. Generative AI models, which have demonstrated proficient writing abilities, do this very quickly, enabling far more threat campaigns. Where attackers can currently launch, say, ten phishing, pig butchering, or business email compromise attacks at a time, AI will allow them to execute a thousand in seconds at the click of a button.

These types of attacks succeed because an attacker can target a large number of potential victims at once, a number that multiplies with AI’s firepower. When used for evil, generative AI has already been shown to increase both the intensity of attacks and the severity of their outcomes.
