AI Threats, Cybersecurity Uses Outlined By Gartner Analyst

AI is a long way from maturity, but there are still offensive and defensive uses of AI technology that cybersecurity professionals should be watching, according to a presentation today at the Gartner Security & Risk Management Summit in National Harbor, Maryland.

Jeremy D’Hoinne, Gartner Research VP for AI & Cybersecurity, told conference attendees that the large language models (LLMs) that have been getting so much attention are “not intelligent.” He cited one example in which ChatGPT was recently asked to name the most severe CVE (Common Vulnerabilities and Exposures entry) of 2023 – and the chatbot’s response was essentially nonsense (screenshot below).

ChatGPT security prompt – and response (source: Gartner)

Deepfakes Top AI Threats

Despite the lack of sophistication in LLM tools thus far, D’Hoinne noted one area where AI threats should be taken seriously: deepfakes.

“Security leaders should treat deepfakes as an area of immediate focus because the attacks are real, and there is no reliable detection technology yet,” D’Hoinne said.

Deepfakes aren’t as easy to defend against as more traditional phishing attacks, which can be addressed through user training. Stronger business controls are essential, he said, such as approval requirements for spending and financial transactions.

He recommended stronger business workflows, a security behavior and culture program, biometrics controls, and updated IT processes.

AI Speeding Up Security Patching

One potential AI security use case D’Hoinne noted is patch management. He cited data suggesting that AI assistance could cut patching time in half by prioritizing patches based on threat level and exploit probability, and by checking and updating code, among other tasks.
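The prioritization step D’Hoinne described can be illustrated with a minimal sketch: ranking pending patches by combining a severity score with an exploit-probability estimate (in the spirit of CVSS and EPSS). The CVE labels, scores, and weighting formula below are illustrative assumptions, not Gartner’s or any vendor’s actual method.

```python
# Illustrative sketch: rank pending patches by combined risk.
# CVE IDs, CVSS scores, and exploit probabilities are hypothetical.

def priority_score(cvss: float, exploit_probability: float) -> float:
    """Combine a CVSS base score (0-10) with an EPSS-style exploit
    probability (0-1) into a single ranking score."""
    return (cvss / 10.0) * exploit_probability

patches = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02},
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.90},
    {"cve": "CVE-C", "cvss": 5.3, "epss": 0.01},
]

# Highest combined risk first: a medium-severity flaw that is actively
# exploited can outrank a critical one that is not.
ranked = sorted(
    patches,
    key=lambda p: priority_score(p["cvss"], p["epss"]),
    reverse=True,
)
print([p["cve"] for p in ranked])  # → ['CVE-B', 'CVE-A', 'CVE-C']
```

In this toy example, the actively exploited CVE-B jumps ahead of the more “severe” but rarely exploited CVE-A, which is exactly the kind of triage an AI assistant could automate at scale.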

Other areas where GenAI security tools could help include: alert enrichment and summarization; interactive threat intelligence; attack surface and risk overview; security engineering automation; and mitigation assistance and documentation.

AI code fixes (source: Gartner)

AI Security Recommendations

“Generative AI will not save or ruin cybersecurity,” D’Hoinne concluded. “How cybersecurity programs adapt to it will shape its impact.”

Among his recommendations to attendees was to “focus on deepfakes and social engineering as urgent problems to solve,” and to “experiment with AI assistants to augment, not replace staff.” And outcomes should be measured based on predefined metrics for the use case, “not ad hoc AI or productivity ones.”

Stay tuned to The Cyber Express for more coverage this week from the Gartner Security Summit.
