Maximizing ROI And Security With GenAI In Cybersecurity
By Maurice Uenuma, Vice President and General Manager of the Americas, Blancco
Artificial intelligence (AI) is increasingly becoming a tool employed by enterprises to improve data-driven decision-making, automate processes, generate new content and enhance customer experiences. The emergence of Generative AI (GenAI) applications like ChatGPT was the catalyst for widespread excitement about the technology, with AI at almost everyone’s fingertips for the first time.
However, the emergence of these applications has raised concerns about how to mitigate risks while enjoying those benefits.
Indeed, GenAI models are helpful in areas such as improving productivity, but they also have flaws. Malicious chatbots such as WormGPT and FraudGPT, and deepfake-enabled phishing, are just a few of the AI-generated threats that have emerged recently. Without appropriate security measures, enterprises are at risk of exposure to these new attack vectors.
Addressing GenAI Red Flags
AI is a C-suite consideration, and there are legitimate reasons for both optimism and concern. While the benefits and potential use cases are extensive, many remain unexplored, conceptual, and unproven. With most people having limited experience working with AI, it is critical that the C-suite establishes policies that clearly articulate and outline the usage of GenAI as it becomes more integrated across all lines of business.
Without proper guardrails, GenAI tools that interact with external parties, including customers, partners, or vendors, can expose the enterprise to significant risks. These risks are akin to those associated with employees unknowingly engaging with infected files, accessing malicious websites, or inadvertently sharing sensitive data with malicious actors.
GenAI used in IT also has the potential to erode an organization's existing security posture by altering existing controls and safeguards, including security settings on enterprise applications, access rights to data storage, or procedures in security operations. GenAI applications could pull in sensitive enterprise data, or even create new sensitive data that must be protected (such as new employee or customer data derived from other, existing data sets).
The Impact of AI on Data Lifecycle Management
One of the key ways for organizations to maximize their return on investment in AI while protecting sensitive data is through careful data governance and management. AI models place a new premium on data quality—they need clean, high-quality datasets to produce valuable results.
This makes it even more critical that businesses understand the value of their data and regularly reduce quantities of low-quality data that neither enhance AI outputs nor support informed business decisions. Gathering excessive or irrelevant data weakens ROI and creates security issues by widening the attack surface.
It’s worth noting that GenAI may become a major contributor of exposed sensitive data as well as redundant, obsolete or trivial (ROT) data. For example, GenAI may piece together clues to generate factually accurate personally identifiable information (which must be protected under existing regulations and standards) and make it available without appropriate security controls, thus exposing the enterprise, and its customers, to new cyber risk.
Maximizing ROI on AI must therefore include clearly defined governance frameworks and investment in tools that specialize in data discovery and classification. Data loss prevention solutions can restrict unauthorized data spread, providing an extra layer of security. Removing unnecessary data through data sanitization also minimizes storage costs, which is important as data volumes rise.
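To make the data discovery and classification idea above concrete, here is a minimal sketch in Python. The category names and regular expressions are hypothetical placeholders, not the method of any particular product; real data-discovery and DLP tools rely on much richer detection (ML classifiers, context rules, validation checksums) than simple patterns like these.

```python
import re

# Hypothetical PII patterns for illustration only -- production tools use
# far broader detection logic than a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify_record(text: str) -> set:
    """Return the set of PII categories detected in a text record."""
    return {label for label, pattern in PII_PATTERNS.items()
            if pattern.search(text)}

def redact(text: str) -> str:
    """Mask detected PII before a record reaches a GenAI prompt or log."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

A classifier like this could gate what flows into a GenAI application: records flagged with sensitive categories are redacted or blocked, while unflagged ROT data becomes a candidate for sanitization rather than continued storage.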
With cybersecurity threats evolving alongside AI, a disciplined approach to data collection and management is key to maximizing financial returns while safeguarding sensitive information from new risks. In essence, the hype surrounding generative AI in data lifecycle management should be approached with caution and tempered by reality.
As AI grows more prevalent and new regulations emerge to protect public interests, enterprises will need to ensure compliance is maintained throughout complex new data workflows and value chains. Effective data governance will be key to optimizing these processes.
Embracing the Future of Generative AI
If we assume that GenAI will elevate the sophistication and speed of cyberattacks while also enhancing cyber defenses, then enterprises should embrace it as a potentially powerful security tool. Waiting for government regulations to protect against AI cybersecurity threats is not a viable strategy. Instead, organizations should establish company policies that provide guardrails for the safe and secure use of generative AI.
Furthermore, companies should leverage AI for competitive differentiation while remaining realistic about its ability to help achieve business goals while mitigating associated security risks.
The future is now, and enterprises must adapt their security strategies to accommodate the data revolution driven by AI. GenAI offers immense potential for increased productivity but must be approached with caution due to security risks. By establishing comprehensive policies, reducing the data attack surface, and leveraging specialized tools, organizations can maximize the return on investment from AI while safeguarding their operations.
Enterprises must take proactive steps to ensure the responsible and secure integration of GenAI into their systems. Successful adoption depends on their ability to avoid getting swept up in the hype and instead adapt and evolve alongside the data revolution AI has set in motion.
Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.