
If you don’t already have a generative AI security policy, there’s no time to lose

Some companies have already acted: Samsung banned the use of generative AI after employees accidentally disclosed sensitive company information while using it. However, this type of strict, blanket prohibition can be problematic, stifling safe, innovative use and creating the kinds of policy workaround risks that have been so prevalent with shadow IT. A more nuanced, use-case-based risk management approach may be far more beneficial.

“A development team, for example, may be dealing with sensitive proprietary code that should not be uploaded to a generative AI service, while a marketing department could use such services to get the day-to-day work done in a relatively safe way,” says Andy Syrewicze, a security evangelist at Hornetsecurity. Armed with this type of knowledge, CISOs can make more informed decisions regarding policy, balancing use cases with security readiness and risks.

Learn all you can about generative AI’s capabilities

As well as learning about different business use cases, CISOs also need to educate themselves about generative AI’s capabilities, which are still evolving. “That’s going to take some skills, and security practitioners are going to have to learn the basics of what generative AI is and what it isn’t,” France says.

CISOs are already struggling to keep up with the pace of change in existing security capabilities, so getting on top of providing advanced expertise around generative AI will be challenging, says Jason Revill, head of Avanade’s Global Cybersecurity Center of Excellence. “They’re generally a few steps behind the curve, which I think is due to the skill shortage and the pace of regulation, but also that the pace of security has grown exponentially.” CISOs are probably going to need to consider bringing in external, expert help early to get ahead of generative AI, rather than just letting projects roll on, he adds.

Data control is integral to generative AI security policies

“At the very least, businesses should produce internal policies that dictate what type of information is allowed to be used with generative AI tools,” Syrewicze says. The risks associated with sharing sensitive business information with advanced self-learning AI algorithms are well documented, so appropriate guidelines and controls around what data can go into generative AI systems, and how it can be used, are certainly key. “There are intellectual property concerns about what you’re putting into a model, and whether that will be used to train so that someone else can use it,” says France.

Strong policies around data encryption, anonymization, and other data security measures can prevent unauthorized access to, use of, or transfer of the large volumes of data that AI systems often handle, keeping both the technology and the data more secure, says Brian Sathianathan, Iterate.ai co-founder and CTO.

Data classification, data loss prevention, and detection capabilities are emerging areas of insider risk management that become key to controlling generative AI usage, Revill says. “How do you mitigate or protect, test, and sandbox data? It shouldn’t come as a surprise that test and development environments [for example] are often easily targeted, and data can be exported from them because they tend not to have as rigorous controls as production.”
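To make the point concrete, the following minimal sketch (in Python, with purely illustrative patterns and names that are assumptions rather than any particular vendor’s API) shows how a pre-submission control could classify and redact a prompt before it ever leaves the organization for an external generative AI service.

```python
import re

# Illustrative patterns only; a real deployment would draw on the organization's
# own data classification scheme and a proper DLP engine.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "classification_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (clean, redacted_prompt) for text bound for an external AI service."""
    redacted = prompt
    clean = True
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            # Policy decision: block outright, or redact, log, and let the request through.
            redacted = pattern.sub(f"[REDACTED:{label}]", redacted)
            clean = False
    return clean, redacted

clean, safe_prompt = screen_prompt("Summarize this INTERNAL ONLY product roadmap for me")
if not clean:
    print("Sensitive content detected; only the redacted prompt may be sent:", safe_prompt)
```

Whether such a gate blocks, redacts, or merely logs is itself a policy choice, and it should map back to the organization’s data classification tiers described above.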

Generative AI-produced content must be checked for accuracy

Along with controls around what data goes into generative AI, security policies should also cover the content that generative AI produces. A chief concern here relates to “hallucinations,” whereby the large language models (LLMs) behind generative AI chatbots such as ChatGPT produce output that appears credible but is inaccurate. This becomes a significant risk if that output is relied upon for key decision-making without further analysis of its accuracy, particularly in relation to business-critical matters.

For example, if a company relies on an LLM to generate security reports and analysis, and the LLM produces a report containing incorrect data that the company then uses to make critical security decisions, the repercussions could be significant. Any generative AI security policy worth its salt should include clear processes for manually reviewing generated content for accuracy, and never taking it as gospel, Thacker says.

Unauthorized code execution should also be considered here: it occurs when an attacker exploits an LLM to execute malicious code, commands, or actions on the underlying system through natural language prompts.
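As a rough illustration of the kind of guardrail a policy might require (the allow-list and function name below are hypothetical, not taken from any specific tool), any command an LLM proposes could be forced through an explicit policy check and human sign-off before execution:

```python
import shlex
import subprocess

# Purely illustrative allow-list: the only commands an LLM assistant may ever
# propose for automatic execution. Anything else requires manual security review.
ALLOWED_COMMANDS = {"ls", "cat", "grep"}

def run_llm_suggested_command(command: str) -> None:
    """Never pipe model output straight into a shell; gate it behind policy first."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"'{command}' is not pre-approved and needs human review")
    # shell=False plus a tokenized argument list avoids shell-injection side effects.
    subprocess.run(parts, shell=False, check=True, timeout=30)
```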

Include generative AI-enhanced attacks within your security policy

Generative AI-enhanced attacks should also come into the purview of security policies, particularly with regard to how a business responds to them, says Carl Froggett, CIO of Deep Instinct and former head of global infrastructure defense and CISO at Citi. For example, how organizations approach impersonation and social engineering is going to need a rethink because generative AI can make fake content indistinguishable from reality, he adds. “This is more worrying for me from a CISO perspective — the use of generative AI against your company.”

Froggett cites a hypothetical scenario in which malicious actors use generative AI to create a realistic audio recording of him, complete with his unique expressions and slang, that is then used to trick an employee. This makes traditional social engineering controls, such as spotting spelling mistakes or malicious links in emails, redundant, he says. Employees will believe they have actually spoken to you, have heard your voice, and feel that it is genuine, Froggett adds. From both a technical and an awareness standpoint, security policy needs to be updated in line with the enhanced social engineering threats that generative AI introduces.

Communication and training key to generative AI security policy success

For any security policy to be successful, it needs to be well-communicated and accessible. “This is a technology challenge, but it’s also about how we communicate it,” Thacker says. The communication of security policy is something that needs to be improved, as does stakeholder management, and CISOs must adapt how security policy is presented from a business perspective, particularly in relation to popular new technology innovations, he adds.

This also encompasses new policies for training staff on the novel business risks that generative AI exposes. “Teach employees how to use generative AI responsibly, articulate some of the risks, but also let them know that the business is approaching this in a verified, responsible way that is going to enable them to be secure,” Revill says.

Supply chain management still important for generative AI control

Generative AI security policies should not omit supply chain and third-party management: the same level of due diligence should be applied to gauging third parties’ generative AI usage, risk levels, and policies, and to assessing whether they pose a threat to the organization. “Supply chain risk hasn’t gone away with generative AI – there are a number of third-party integrations to consider,” Revill says.

Cloud service providers come into the equation too, adds Thacker. “We know that organizations have hundreds, if not thousands, of cloud services, and they are all third-party suppliers. So that same due diligence needs to be performed on most parties, and it’s not just a sign-up when you first log in or use the service, it must be a constant review.”

Extensive supplier questionnaires detailing as much information as possible about any third party’s generative AI usage are the way to go for now, Thacker says. Good questions to include are: What data are you inputting? How is that protected? How are sessions limited? How do you ensure that data is not shared across other organizations and model training? Many companies may not be able to answer such questions right away, especially regarding their usage of generic services, but it’s important to start these conversations as soon as possible to gain maximum insight, Thacker says.

Make your generative AI security policy exciting

A final thing to consider is the benefit of making generative AI security policy as exciting and interactive as possible, says Revill. “I feel like this is such a big turning point that any organization that doesn’t showcase to its employees that they are thinking of ways they can leverage generative AI to boost productivity and make their employees’ lives easier, could find themselves in a sticky situation down the line.”

The next generation of digital natives are going to be using the technology on their own devices anyway, so you might as well teach them to be responsible with it in their work lives so that you’re protecting the business as a whole, he adds. “We want to be the security facilitator in business – to make businesses flow more securely, and not hold innovation back.”
