Gartner Warns Agentic AI Will Accelerate Account Takeovers
Within two years, AI agents will cut the time it takes threat actors to hijack exposed accounts by 50%, Gartner has warned.
The analyst firm said the technology will help automate more of the steps required to accomplish account takeovers (ATOs), such as deepfake-driven social engineering and credential compromise.
Agentic AI is widely hailed as AI's next major leap after generative AI (GenAI) made its mark over the past two years. It will be defined by autonomous agents capable of making decisions and adapting dynamically to changing environments without human intervention.
To counter this emerging threat, vendors will likely introduce new products to detect and monitor AI agent interactions, Gartner added.
“In the face of this evolving threat, security leaders should expedite the move toward passwordless phishing-resistant MFA,” said Akif Khan, VP analyst at Gartner. “For customer use cases in which users may have a choice of authentication options, educate and incentivize users to migrate from passwords to multi-device passkeys where appropriate.”
Driven by a surge in malicious bot and infostealer activity, ATOs have become a major headache for corporate security teams and end customers – enabling large-scale fraud and enterprise breaches.
An Abnormal Security report last year claimed that ATOs have now outpaced ransomware as the top enterprise security concern, with 83% of organizations experiencing at least one incident over the previous 12 months.
Agentic AI can also benefit security teams. Michael McPherson, SVP of technical operations at ReliaQuest, recently claimed that the technology can process security alerts “20 times faster than traditional methods, with 30% greater accuracy at identifying true threats to the business.”
Deepfakes to the Fore
Gartner has also predicted that 40% of social engineering attacks will target both executives and the broader workforce by 2028, using deepfake audio and video to deceive employees on voice and video calls.
“Organizations will have to stay abreast of the market, and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques,” said Manuel Acosta, senior director analyst at Gartner.
“Educating employees about the evolving threat landscape by using training specific to social engineering with deepfakes is a key step.”