
Criminals, too, see productivity gains from AI

However, not all claims of AI use may be accurate, noted Jeremy Kirk, an analyst at Intel 471. “We use the word ‘purportedly’ to represent that it is a claim being made by a threat actor and that it is frequently unclear exactly to what extent AI has been incorporated into a product, what LLM model is being used, and so forth,” he said in an email. “As far as whether developers of cybercriminal tools are jumping on the bandwagon for a commercial benefit, there seem to be genuine efforts to see how AI can help in cybercriminal activity. Underground markets are competitive, and there is often more than one vendor for a particular service or product. It is to their commercial advantage to have their product work better than another, and AI might help.”

Intel 471 has observed many claims that are in doubt, including one by four University of Illinois Urbana-Champaign (UIUC) computer scientists who claimed to have used OpenAI’s GPT-4 LLM to autonomously exploit vulnerabilities in real-world systems by feeding the model common vulnerabilities and exposures (CVE) advisories describing the flaws. However, Intel 471 pointed out, “Because many of the key elements of the study were not published — such as the agent code, prompts or the output of the model — it can’t be accurately reproduced by other researchers, again inviting skepticism.”

Automation

Other threat actors offered tools that scrape and summarize CVE data, as well as a multipurpose hacking tool that purportedly integrates what Intel 471 called a well-known AI model and allegedly handles everything from scanning networks and hunting for vulnerabilities in content management systems to coding malicious scripts.
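
The scrape-and-summarize half of such a tool is ordinary data plumbing that defenders build too. As a minimal sketch, assuming Python with the requests library and NIST’s public NVD CVE API 2.0, the code below fetches a single advisory and flattens its description for downstream summarization; the LLM call itself is hypothetical and left as a comment.

```python
import requests

# Public NVD CVE API 2.0 endpoint (no API key required at low request rates)
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Fetch a single CVE record from the NVD API."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        raise ValueError(f"no NVD record found for {cve_id}")
    return vulns[0]["cve"]

def advisory_text(cve: dict) -> str:
    """Flatten a CVE record's English descriptions into one plain-text blob."""
    descs = [d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"]
    return f"{cve['id']}: " + " ".join(descs)

if __name__ == "__main__":
    record = fetch_cve("CVE-2021-44228")  # Log4Shell, a well-known example
    text = advisory_text(record)
    # A scrape-and-summarize tool would now hand `text` to an LLM with a
    # prompt such as "Summarize the affected software and impact in two
    # sentences" -- that chat-completion call is omitted here.
    print(text)
```

The fetching and flattening above is the unremarkable part; a vendor’s advertised “AI integration” amounts to bolting a chat-completion call onto the end of this pipeline.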
