Generative AI: A Boon for Organizations Despite the Risks

Generative AI is too beneficial to abandon despite the threats it poses to organizations, according to experts speaking at the ISC2 Security Congress 2023.

During a session at the event, Kyle Hinterburg, Manager at LBMC, and Brian Willis, Senior Manager at LBMC, pointed out that while criminals will use generative AI tools and the tools carry data and privacy risks, the same is true of technologies we rely on every day, such as email and ATMs.

Hinterburg emphasized that these tools are not sentient beings but are simply tools trained and used by humans.

This was a message shared by Jon France, CISO at ISC2, speaking to Infosecurity Magazine. “Is AI good or bad? It’s actually yes to both, and it’s not AI’s fault, it’s how we as humans use it,” he noted.

How Generative AI Can Enhance Security

Hinterburg and Willis set out the various ways generative AI can be utilized by cybersecurity teams:

1. Documentation. Willis noted that documentation is the “foundational element of establishing a good security program,” but it is a task that security professionals typically dread. Generative AI can help create policies and procedures in areas like incident response faster and more accurately, ensuring no compliance requirements or best practices are missed.

2. System configuration guidance. Organizations often fail to configure their systems correctly, and misconfigurations are consequently a major threat. Generative AI can mitigate this issue by providing the prompts and commands needed to configure areas like logging, password settings and encryption correctly. Willis outlined: “By leveraging AI, you can ensure you are using good configuration standards that are appropriate for your organization.”

3. Scripts and coding. There are many different coding languages, such as PowerShell, Python and HTML. For security professionals who are not proficient in a particular one, tools like ChatGPT can rapidly produce the code or script they need, stated Hinterburg, sparing them a laborious search online.

4. Process facilitation. Generative AI can also boost the performance of security teams by helping manage tasks across an entire conversation flow, beyond a single prompt. Hinterburg gave the example of an incident response tabletop exercise, which generative AI tools can facilitate by presenting scenarios and options to choose from, then continuing the exercise from each choice (a minimal sketch of such a flow follows this list).

5. Developing private generative AI tools. Willis said that many organizations are now creating their own private generative AI tools, built on publicly available technologies and trained specifically on internal data. These can be used to quickly access and summarize documents such as meeting notes, contracts and internal policies. They can also be more secure than public tools because they are hosted in the organization’s own environment.
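The conversation-flow facilitation described in point 4 is simple to prototype. Below is a minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY environment variable; the model name, prompts and scenario are illustrative assumptions, not anything the speakers specified.

```python
# Minimal sketch of a multi-turn tabletop-exercise facilitator.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
# environment variable; model, prompts and scenario are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "system",
        "content": (
            "You are facilitating an incident response tabletop exercise. "
            "Present one stage of the scenario at a time, offer two or "
            "three response options, then continue based on my choice."
        ),
    },
    {"role": "user", "content": "Start a ransomware scenario for a mid-size firm."},
]

while True:
    # Each call sends the whole history, which is what lets the model
    # carry the exercise across turns rather than answer one-off prompts.
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    print(reply)

    choice = input("\nYour response (or 'quit' to end): ")
    if choice.strip().lower() == "quit":
        break

    # Append both sides of the exchange so the next call sees the full flow.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": choice})
```

The growing messages list is the key design point here: appending each exchange is what turns a one-shot prompt into the sustained, branching exercise Hinterburg described.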

How to Mitigate AI Risks

Hinterburg and Willis also set out three major risks arising from generative AI tools, and how to mitigate them:

1. Unreliable results. Tools like ChatGPT are trained on data from the internet and are therefore prone to errors, such as ‘hallucinations.’ To overcome such issues, Willis advised submitting the same query to multiple AI tools and comparing and contrasting the results. Additionally, humans should avoid overreliance on these tools, recognizing their weaknesses in areas such as bias and errors. “We should still want to use our own minds to do things,” he outlined.

2. Disclosure of sensitive material. There have been cases of organizations’ sensitive data being exposed accidentally in generative AI tools. OpenAI also revealed that ChatGPT itself suffered a data breach in March 2023, which may have exposed payment-related information of some customers. Given these breach risks, Hinterburg advised organizations not to input sensitive data into these tools, including email conversations. He noted there are tools available that can undertake pre-processing tasks, helping organizations determine what data can safely be input into generative AI (a minimal pre-processing sketch follows this list).

3. Copyright issues. Willis warned that using AI-generated content for commercial purposes can lead to copyright issues and plagiarism. He said it is vital that organizations understand the legalities around generating content in this way, such as rights over AI-generated content, and keep records of the AI-based data used for such purposes.
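The speakers did not name specific pre-processing tools, but the idea behind point 2 can be illustrated with a simple redaction pass run before any text leaves the organization. This sketch uses only Python’s standard library; the patterns and the redact helper are hypothetical and far from exhaustive.

```python
# Minimal sketch of a pre-processing redaction pass, run before text is
# sent to an external generative AI tool. Patterns are hypothetical; a
# real tool would use richer detection (DLP classifiers, entity
# recognition, allow-lists), not three regexes.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely sensitive values with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = ("Summarize: jane.doe@example.com reported that host 10.0.0.5 "
          "logged card number 4111 1111 1111 1111.")
print(redact(prompt))
# Summarize: [REDACTED EMAIL] reported that host [REDACTED IP]
# logged card number [REDACTED CARD].
```

Pairing a pass like this with a clear policy on what may be submitted addresses the disclosure risk Hinterburg described without banning the tools outright.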

Concluding, Hinterburg said the risks of generative AI are “things we have to be cognizant about,” but the benefits are too great to simply stop using these tools.
