by Tim Pappa, a certified former profiler with the FBI’s Behavioral Analysis Unit (BAU), specializing in cyber deception and online influence.
Many cyber threat intelligence and policy communities are increasingly concerned with the threat posed by generative artificial intelligence (GenAI), as it is being operationalized to attempt to influence the attitudes and beliefs of target populations.
Yet we also keep reading that these documented influence methods have barely moved audiences or narratives at all.
The Discrepancy in GenAI’s Influence
Imagine an athlete who can perform at exceptional levels in running or other displays of strength. You hear he’s joining a rival cricket team.
When you see him play, however, he can’t seem to hit anything. For all his natural athletic ability, he does not know how to hit or throw the correct way.
Yes, he appears fearsome because of his general athleticism, but he will not be an effective player on that team until he learns how to deliver and hit a cricket ball.
This is one way to conceptualize how malign influence cyber actors are applying or thinking of applying scalable generative artificial intelligence to attempt to influence target audiences.
That includes pro-Western cyber actors targeting overseas audiences.
Effectiveness of ‘Pro-Western’ Narrative Accounts
A summer 2022 report highlighted how ineffective many of these ‘pro-Western’ narrative accounts appeared to be at generating engagement and building influence.
Social media platforms have documented how these accounts created artful, foreign-language content and calls to action to encourage engagement, yet most of the accounts drew no more than a handful of likes or retweets on Twitter, and fewer than a quarter of them had more than a thousand followers.
Nearly half of these accounts posing as media organizations included batches of hashtags with their posted content, likely trying to reach broader audiences. But again, there was limited audience response.
In my experience as a certified former profiler with the FBI’s Behavioral Analysis Unit (BAU), broad appeals to broad audiences even in the right language on the right platform do not work.
Content, including narratives, must be crafted specifically for targeted individuals, with some understanding of the platforms they use and trust, the relationships they have, and how those relationships influence the decisions of that targeted individual or group.
There are established theoretical frameworks for understanding, in general terms, how people process and respond to content like this, even outside of these behavioral and relational contexts.
Communication researchers throughout the past forty years have established relatively similar conditions for cognitive and attitudinal processing of content.
These dual processing models or conditions generally find that people spend more or less time thinking about and consuming or sharing content based on how relevant it is to them and how motivated they are to process that content.
This is important, especially in this growing environment of “coordinated inauthentic behavior”, where creators may be scaling and applying more generative artificial intelligence content with the same methods for attempting to influence audiences.
These models suggest that influence attempts may be unsuccessful regardless of the quality of a creator’s content, largely because those attempts depend on unfamiliar audiences and unknown users to respond to it.
If there are any obstacles to accessing that content, people may not be motivated to process it, and even less motivated to share it with others. Audiences are also more likely to scrutinize content they struggle to process or understand.
If there are cultural or religious sensitivities around engaging with certain kinds of content, audiences will likely avoid that content or may even react aggressively to it.
Audiences may also decline to engage with content or follow its creators because of the possible consequences of being associated in some manner with that kind of content or those creators. These are general considerations, but they are serious ones.
These considerations may explain some of the limited success of the controlled accounts in these recent reports, but these dual processing models apply to individuals and audiences even when the content is uniquely generated by AI.
While much of my experience observing or reviewing other failed attempts to influence broad audiences with scalable programs is anecdotal, the psychology of how individuals, and groups of individuals as audiences, process content and narratives provides an integrated theoretical framework that consistently explains why this approach is not working.
Future Challenges in Malign Influence Cyber Operations
The above research literature is the beginning of understanding this framework.
The pivotal step, however, in effectively operationalizing content or narratives behaviorally, whether or not that includes the use of GenAI, is having a defined target, with content or narratives crafted for that defined target.
Malign influence cyber actors will continue to struggle to behaviorally operationalize scalable GenAI throughout this new year, even as GenAI programs become more dynamic.
This will likely result in more of the same kind of reports described above, which highlight the growing use of GenAI programs in malign influence attempts but offer little measure of whether anyone was actually influenced behaviorally.
Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.