LLMs fueling a “genAI criminal revolution” according to Netcraft report
However, there can be clues in the email or on the site. Netcraft said that threat actors sometimes accidentally include large language model (LLM) outputs in their fraudulent emails. For example, a phishing email it encountered, claiming to contain a link to a file transfer of family photos, also included the telltale phrase, “Certainly! Here are 50 more phrases for a family photo.”
“We might theorize that threat actors, using ChatGPT to generate the email body text, mistakenly included the introduction line in their randomizer,” Netcraft said. “This case suggests a combination of both genAI and traditional techniques.”
Telltale evidence still shows which phishing emails are fake
Another phishing email Netcraft viewed would have been credible had it not opened with the LLM introduction line, “Certainly, here’s your message translated into professional English.” And a fake investment website touting the phoney company’s advantages looked convincing, except for the headline: “Certainly! Here are six key strengths of Cleveland Invest Company.”
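The telltale preamble lines described above lend themselves to simple pattern matching. The sketch below is a hypothetical illustration of such a filter, not Netcraft’s actual method: the phrase list and function name are assumptions chosen for demonstration.

```python
import re

# Hypothetical phrase list: stock LLM "assistant preamble" openers of the
# kind Netcraft reported finding left behind in phishing emails.
LLM_PREAMBLES = [
    r"certainly[,!]?\s+here[’']?s?\b",           # "Certainly! Here are…" / "Certainly, here's…"
    r"here (are|is) \d+ (more )?phrases",        # "Here are 50 more phrases…"
    r"translated into professional english",     # "…translated into professional English"
]

def has_llm_preamble(text: str) -> bool:
    """Return True if the text contains a known LLM boilerplate phrase."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in LLM_PREAMBLES)
```

A real mail filter would treat a match as one weak signal among many, since legitimate senders can also paste LLM output verbatim.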