DeepSeek and Alibaba’s high-performing large language models (LLMs) have not only captivated AI enthusiasts but also caught the attention of cybercriminals.
Many cybercriminals “are rushing” to test the newest Chinese-made LLMs to help them develop or improve malware, Sergey Shykevich, Threat Intelligence Group Manager at Check Point, told Infosecurity during the firm’s CPX 2025 conference in Vienna.
“While Alibaba’s Qwen LLM receives less media attention than DeepSeek’s models, this is the model cybercriminals seem to be experimenting with the most,” he added.
Until now, most security researchers have said that cybercriminals primarily toyed with LLMs for phishing and scamming purposes rather than malware development.
“Currently, most LLMs are not great for malware development, but providers like OpenAI are investing a lot to improve their software development capabilities because they know there is a huge market for that,” Shykevich continued. “The moment the development capabilities of the state-of-the-art LLMs are better, they will be used by enterprises and cybercriminals alike.”
The providers of commercial LLMs are likely to implement guardrails to restrict programming for malicious purposes, Shykevich said. However, he cautioned that new versions of open-weight models like DeepSeek’s R1, Alibaba’s Qwen and Meta’s Llama will offer efficient alternatives for malware developers.
Recently, Donato Capitella, an AI Security Researcher at WithSecure Consulting, found that DeepSeek’s R1 “reasoning” model lacks fundamental security features to protect against prompt injection.
For this reason, the model performs poorly in WithSecure’s Simple Prompt Injection Kit for Evaluation and Exploitation (Spikee), a new AI security benchmark.
“We are currently doing similar work with Alibaba’s Qwen model,” Capitella told Infosecurity.
Low-Skill Cybercriminals Use AI to Create Malware Capabilities
Funksec is one of the first active ransomware operations to use AI capabilities for malware development.
“Funksec’s ransomware is not very sophisticated, and the actor behind it is not very technical. He recycled code from other ransomware and took a chance with AI,” Shykevich said. “However, we tested the ransomware and it works: it disrupts services on the machines it targets and encrypts data.”
Check Point released a report on Funksec in January, and the firm’s analysts communicated directly with its developer, Shykevich told Infosecurity.
“More recently, we also saw a cybercriminal use Alibaba’s Qwen to develop an infostealer, a type of malware that is very efficient at stealing credentials and personal data but does not require advanced development skills,” the analyst added.
He believes lower-skilled malicious actors will be the first to leverage the capabilities of LLMs to build functional malware, and that many effective AI-powered malware strains will appear in 2025.