AI agents can find and exploit known vulnerabilities, study shows

First, the agents were able to discover new vulnerabilities in a test environment, but that doesn't mean they can find all kinds of vulnerabilities in all kinds of environments. In the simulations the researchers ran, the AI agents were basically shooting fish in a barrel. These might have been new species of fish, but they knew, in general, what fish looked like. "We haven't found any evidence that these agents can find new types of vulnerabilities," says Kang.

LLMs can find new instances of common vulnerabilities

Instead, the agents found new examples of very common types of vulnerabilities, such as SQL injections. “Large language models, though advanced, are not yet capable of fully understanding or navigating complex environments autonomously without significant human oversight,” says Ben Gross, security researcher at cybersecurity firm JFrog.

And there wasn't a lot of diversity in the vulnerabilities tested, Gross says; they were mainly web-based and could be easily exploited due to their simplicity.
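
For readers unfamiliar with the bug class, the sketch below (illustrative only, not code from the study) shows the textbook SQL injection pattern being described: a query assembled by string formatting that a crafted input can rewrite, alongside the parameterized version that blocks it.

```python
import sqlite3

# Minimal, self-contained illustration of a classic SQL injection flaw.
# (Illustrative example; not taken from the research discussed above.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username: str, password: str) -> bool:
    # UNSAFE: user input is spliced directly into the SQL string,
    # so crafted input can change the query's logic.
    query = (
        f"SELECT * FROM users WHERE username = '{username}' "
        f"AND password = '{password}'"
    )
    return conn.execute(query).fetchone() is not None

def login_safe(username: str, password: str) -> bool:
    # SAFE: a parameterized query keeps input as data, never as SQL.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

# The classic payload bypasses the password check in the unsafe version...
print(login_vulnerable("alice", "' OR '1'='1"))  # True: injection succeeds
# ...but not in the parameterized one, where it is treated as a literal string.
print(login_safe("alice", "' OR '1'='1"))        # False
```

The point is less the specific payload than the simplicity: flaws of this shape are extensively documented, which is part of what makes them tractable for an LLM agent working from known patterns.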
