
AI and cybersecurity risks: How is your organization strengthening its defenses?

August 14, 2024 / 5 min read

AI moves us into a new realm of cyberthreats and data vulnerabilities. Organizations can’t have effective risk management without addressing the interplay of AI and cybersecurity. How is your organization strengthening its cybersecurity policies, framework, and measures?

Generative AI and other AI-driven technologies, including chatbots, augmented and virtual reality, and robotic process automation, are transforming organizational operations across many sectors. Within C-suites and boardrooms, as well as internal IT teams, there's a strong push for AI adoption due to its vast capabilities. However, jumping on the AI bandwagon without thorough planning can significantly increase cybersecurity and data privacy risks. To navigate this landscape effectively, organizations must proactively strengthen their cybersecurity defenses. How prepared is your organization to address the cybersecurity challenges posed by AI integration?

What are the cybersecurity risks of AI?

You can't effectively manage risk in your organization without addressing the interplay of AI and cybersecurity. Often, users, including those in leadership, lack an understanding of how AI can expose their organization and its data to risk. Many organizations have yet to define their use cases for AI, much less the guardrails, policies, and training needed for its use. Without these critical components, cybersecurity risks increase, including unintentional data sharing and unintended data proliferation.

Data proliferation can be a serious issue with AI use, often occurring incrementally. As just one example, consider how a staff member might grab their mobile phone or open a browser window and use a chatbot like ChatGPT on their own, without considering what company, intellectual property, customer, or patient information they’re disclosing in their prompts.

Once the information is input into a chatbot, which another organization owns and controls, how is it stored, used, or disseminated by third parties — intentionally or otherwise?

Some common office applications, ERP systems, and search engines now include AI, and staff may already be using it to help do their work, potentially in ways your organization isn't aware of or didn't intend. Many of these AI tools store user inputs and use them to refine their models. Unless precautions are taken, your organization's data, including proprietary data, can be stored and used for that purpose. The risks compound when team members access AI chatbots and other tools with personal, rather than organizational, devices and accounts.

Similarly, few users consider their data profile and how it can be used against them. Deepfakes, persona authentication, sophisticated phishing attempts: AI-assisted scams are becoming more refined and realistic, and therefore more difficult to detect. Users increasingly find fraudulent communications harder to distinguish from legitimate requests. High-profile cybercrimes using AI have already occurred and will only become more common as it gets harder to tell fake likenesses of people, even those we know well, from real ones.
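One practical safeguard against the kind of prompt-level data leakage described above is to screen text before it leaves your network for an external chatbot. The sketch below is illustrative only: the `redact_sensitive` helper and its patterns are hypothetical, and a real deployment would rely on your organization's own data classification rules and a proper data loss prevention (DLP) tool rather than a handful of regexes.

```python
import re

# Hypothetical patterns for illustration; a real deployment would use your
# organization's own data classification rules and a dedicated DLP product.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact_sensitive(prompt: str) -> str:
    """Replace likely-sensitive substrings before a prompt leaves the network."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt


print(redact_sensitive("Patient John's SSN is 123-45-6789, email j.doe@example.com"))
```

A gateway or browser plug-in applying a filter like this can't catch everything, which is why it belongs alongside, not in place of, the policies and training discussed below.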

Potential legal and ethical issues around how staff use generated output also create cybersecurity risks, particularly in light of biased algorithms and hallucinations. How are your staff members using the results generated by AI in their work for your organization? Has your leadership had serious strategic discussions about the potential ethical and legal considerations of AI-assisted or AI-generated material?

Mitigating AI risk with cybersecurity controls

AI moves us into a new realm of cyberthreats, and companies developing AI tools aren’t inherently structuring their products with the sole purpose of protecting your data, IT systems, or users. Your organization must take the lead to protect itself. A few pragmatic steps can help enable a proactive and holistic approach that spans people, process, and technology:

1. Determine your use cases for AI

Treat AI as you would any other tool in your IT environment that you aim to use securely to advance your business goals and mission. In other words, think before you jump in. What problems do you hope to solve with AI, and how do you plan to use it to improve efficiency?

Carefully consider whether AI will in fact enable the outcomes and output you seek; doing so early will help you rule out unwanted results and anticipate and manage risk down the road. Well-defined use cases also make it easier to catch AI hallucinations and keep your data clean and accurate.

This can be a surprisingly difficult challenge, and it will only get harder as AI capabilities make their way into more ERP systems and other applications. How will you define use cases when the tools are already at your staff's disposal?

2. Develop a comprehensive data management policy for strong data life cycle management

Data governance encompasses the policies, procedures, standards, and organizational responsibilities designed to maintain the accuracy, security, and compliance of your data. In simple terms, you need to know what data you have and have a systematic way to prevent it from being shared or proliferating externally.

This applies to all the technologies you use, and your data management policy should be updated and extended to address data issues with AI. To be candid, without a defined data strategy and policy in place, secure AI use won’t be possible.
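A data management policy like this can also be enforced in code at the point where data might flow to an AI tool. The sketch below assumes a hypothetical three-tier classification scheme (public, internal, confidential); the tiers and the threshold are placeholders your data governance program would define.

```python
from enum import Enum


class Classification(Enum):
    """Hypothetical data classification tiers, lowest sensitivity first."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3


# Hypothetical policy: only data classified at or below this level
# may be sent to external AI services.
MAX_LEVEL_FOR_EXTERNAL_AI = Classification.PUBLIC


def may_share_with_ai(label: Classification) -> bool:
    """Gate data by its classification before it reaches an external AI tool."""
    return label.value <= MAX_LEVEL_FOR_EXTERNAL_AI.value


print(may_share_with_ai(Classification.CONFIDENTIAL))  # False
```

The value of a check like this is less the code than the discipline it forces: data can only be gated if it has been classified in the first place.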

3. Develop well-defined guardrails — policies, procedures, and training — for secure AI use

You should already have policies, procedures, and training in place for secure web and social media use, and you'll want to develop a similar framework to address AI. As you do, consider questions such as: Which AI tools are approved for use, and by whom? For what purposes may staff use them? What company, customer, or patient data, if any, may be entered into them? May staff access them from personal devices and accounts?

These are just a few of the questions your AI and cybersecurity policies, procedures, and training should address.

4. Layer on behavior-based cybersecurity policies and technical controls

Behavior-based policies, such as requiring staff to check AI results for inherent bias, hold people accountable for following your rules for technology use. Technical controls, such as safeguards that restrict which tools can be used and what data can reach them, help ensure compliance. Without guardrails and guidelines, people tend to find shortcuts that can create data privacy and cybersecurity risks.
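One common technical control of this kind is a deny-by-default allowlist of vetted AI services, enforced at a web proxy or gateway. The sketch below is a minimal illustration; the domain names are hypothetical placeholders, and a production deployment would live in your proxy or firewall configuration rather than application code.

```python
# Hypothetical egress-control sketch: only AI services your organization has
# vetted are reachable; everything else is denied by default.
APPROVED_AI_DOMAINS = {
    "copilot.example-vendor.com",   # placeholder for a vetted commercial tool
    "internal-llm.example.org",     # placeholder for an internally hosted model
}


def allow_request(host: str) -> bool:
    """Deny-by-default check a web proxy or gateway could apply per request."""
    return host.lower() in APPROVED_AI_DOMAINS


print(allow_request("chat.unvetted-tool.io"))  # False
```

Pairing a control like this with the behavior-based policies above covers both the staff who follow the rules and the shortcuts taken by those who don't.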

5. Drive AI and cybersecurity initiatives from the top

Effective cybersecurity initiatives must be driven by top-level leadership, not delegated solely to the IT department. It's crucial for executives and board members to prioritize organizational resilience by integrating data management and cybersecurity into every aspect of the business. When governance and strategic direction come from the top, the result is a comprehensive approach that addresses potential vulnerabilities from the outset. Without strong executive leadership, your organization runs the risk of embedding outdated security practices into AI implementations, which can lead to significant vulnerabilities.

Plan for new cyber risks now — and keep planning

When it comes to cybersecurity, AI requires carefully planned and thoughtful deployment, including identifying meaningful use cases and appropriate controls. Each time you perform a technology assessment, be sure it includes current and planned AI use.

Even if you aren't ready to adopt AI yet, begin elevating the maturity of your cybersecurity and data management programs now to prepare, because you can't afford to ignore it.
