Introduction:

If you are like me, you spend a lot of time thinking about how to solve problems, and that thinking often leads down various rabbit holes. As a governance, risk, and compliance (GRC) professional, I’m always intrigued by how companies view information security and how they address compliance. Some of these views are relatively straightforward; others, however, are creative uses of risk exceptions or fancy documentation that satisfies the letter of the controls but completely misses their spirit. Enter Artificial Intelligence Information Systems.

While we can view these systems as the “crown jewels” of an organization’s enterprise applications, some of the principles that make them “learn” go beyond what we typically think about in GRC. Absent a mature framework that addresses AI systems, this article is the culmination of research I wanted to put forward for discussion with other professionals, so that together we can develop a way to address the challenges of securing these systems at a new and exciting time for technology professionals.

Goals and Objectives

  1. What Is an AI Risk Management Framework?
  2. How Can We Score This?
  3. How Does AI Impact the GRC Professional?

Objective 1: What Is an AI Risk Management Framework?

The National Institute of Standards and Technology (NIST) defines a risk management framework as “…a process that integrates security, privacy, and cyber supply chain risk management activities into the system development life cycle. The risk-based approach to control selection and specification considers effectiveness, efficiency, and constraints due to applicable laws, directives, executive orders, policies, standards, or regulations.” For those of us who already prepare our organizations and/or clients for internal audits, we know there are several standards we can follow, depending on factors such as industry and organizational requirements.

Examples:

  • ISO/IEC 27001 certification, from the International Organization for Standardization, is widely considered the international standard for validating a cybersecurity program, both internally and across third parties.
  • The Federal Information Security Management Act (FISMA) establishes a comprehensive cybersecurity framework for protecting federal government information and systems against cyber threats. FISMA also extends to third parties and vendors who work on behalf of federal agencies.
  • North American Electric Reliability Corporation – Critical Infrastructure Protection (NERC CIP) is a set of cybersecurity standards designed to help those in the utility and power sector reduce cyber risk and ensure the reliability of bulk electric systems.

However, when it comes to AI systems, only a handful of documents suggest what would be necessary to address information security requirements. To better appreciate the complexity that sets an AI system apart from other systems, we should first understand what an AI system is.

From a layperson’s standpoint, an AI system is no different from any other application on an enterprise network except in how it is used. It needs infrastructure, system components, and a business purpose. What makes AI systems more difficult to understand is how that business purpose is realized.

Elements of an ML system, from ISO/IEC 23053:2022, Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)

Task (Problem Definition)

A machine learning task is the type of prediction or inference being made, based on the problem or question being asked and the available data. For example, a classification task assigns data to categories, and a clustering task groups data according to similarity.

Machine learning tasks rely on patterns in the data rather than being explicitly programmed.
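
To make the distinction between the two example tasks concrete, here is a minimal sketch that runs both on the same synthetic numeric data. It assumes the scikit-learn library purely for illustration and is not tied to any particular AI system under assessment.

```python
# Illustrative only: the same data can drive different ML tasks.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Synthetic numeric data with three natural groupings.
X, y = make_blobs(n_samples=150, centers=3, random_state=0)

# Classification task: assign data to known categories using labels.
classifier = LogisticRegression(max_iter=1000).fit(X, y)
print("Predicted category for one record:", classifier.predict(X[:1]))

# Clustering task: group data by similarity, with no labels provided.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster assignments (first five records):", clusters[:5])
```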

Model

A machine learning model is a program that can find patterns or make decisions from a previously unseen dataset.
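
As a minimal sketch (again assuming scikit-learn solely for illustration), the model is the artifact produced by training, and it is later asked to make decisions about records it never saw:

```python
# Illustrative only: a trained model generalizing to previously unseen data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic labeled data, with a portion held back as "unseen".
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_unseen, y_train, _ = train_test_split(X, y, test_size=0.25, random_state=0)

# Training produces the model: the program that encodes the learned patterns.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# At inference time the model makes decisions about records it has not seen.
print("Decisions on unseen records:", model.predict(X_unseen[:5]))
```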

Datasets

A machine learning dataset is a collection of data that is used to train the model. A dataset acts as an example to teach the machine learning algorithm how to make predictions. The common types of data include:

  • Text data
  • Image data
  • Audio data
  • Video data
  • Numeric data

The data is usually first labeled or annotated in order for the algorithm to understand what the outcome needs to be.
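
The sketch below shows what a tiny labeled text dataset might look like; the example records, the labels, and the scikit-learn tooling are hypothetical and for illustration only.

```python
# Illustrative only: a tiny labeled (annotated) text dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Each example is annotated with the outcome the algorithm should learn.
texts = [
    "password reset request approved",
    "unauthorized login attempt detected",
    "quarterly budget report attached",
    "malware signature found in attachment",
]
labels = ["benign", "suspicious", "benign", "suspicious"]

# The annotations teach the algorithm what outcome is expected for similar inputs.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(features, labels)
print(model.predict(vectorizer.transform(["unauthorized password reset attempt"])))
```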

Tools

ML model creation uses tools in four categories: data preparation, ML algorithms, optimization methods, and evaluation metrics. Model performance is assessed through tools that generate those evaluation metrics.
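
A minimal sketch of the four tool categories in one small workflow, again assuming scikit-learn purely as an example toolchain:

```python
# Illustrative only: data preparation, an ML algorithm, an optimization
# method, and evaluation metrics working together.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression            # ML algorithm
from sklearn.metrics import accuracy_score, confusion_matrix   # evaluation metrics
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler               # data preparation

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data preparation: scale features so the algorithm treats them comparably.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Optimization method: search the hyperparameter space for the best setting.
search = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X_train, y_train)

# Evaluation metrics: quantify how well the resulting model performs.
predictions = search.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))
print("Confusion matrix:\n", confusion_matrix(y_test, predictions))
```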

New Considerations

The components of a machine learning system that support AI capabilities can be secured and risk assessed through traditional means. However, with a new technology that is meant to augment human decision-making, there are additional considerations we need to address, covering both emerging security concerns and the ethical implications of the system’s outputs.

ENISA’s Multilayer Framework for Good Cybersecurity Practices for AI publication looks at securing these systems in three tiers.

  • Layer I – Cybersecurity Foundations: The basic cybersecurity knowledge and practices that need to be applied to all information and communications technology (ICT) environments that host/operate/develop/integrate/maintain/supply/provide AI systems. Existing cybersecurity good practices presented in this layer can be used to ensure the security of the ICT environment that hosts the AI systems.
  • Layer II – AI Fundamentals and Cybersecurity: The cybersecurity practices needed to address the specifics of AI components across their life cycle, properties, threats, and security controls, applicable regardless of the industry sector.
  • Layer III – Sector-Specific Cybersecurity Good Practices: Various good practices that sectoral stakeholders can use to secure their AI systems. High-risk AI systems (e.g., those that process personal data) have been identified in the AI Act and are listed in this layer to raise operators’ awareness and encourage them to adopt good cybersecurity practices.

From an initial review, this seems to be an acceptable way to begin a framework for assessing the risk of a single AI system on an organization’s network. Setting aside the documentation that would be required (see the draft ISO/IEC 42001, Information technology — Artificial intelligence — Management system), there are several factors we should consider in creating a framework that reduces the risk of an organization’s use of an AI system.

Layer 1 Requirements

  • The organization’s information systems infrastructure must provide the minimum services and capabilities, and operate at an acceptable level of risk, to support the system as defined by the organization and/or regulation. An example would be an organization that follows the Information Technology Infrastructure Library (ITIL) framework.
  • The organization’s information security infrastructure must provide the minimum services and capabilities, and operate at an acceptable level of risk, to protect the system as defined by the organization and/or regulation. An example would be an organization that uses the NIST Risk Management Framework to protect its information technology assets.

Layer 2 Requirements

  • Establish the NIST AI Risk Management Framework (https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf) or ISO/IEC 42001, Information technology — Artificial intelligence — Management system.
  • The organization’s AI system stakeholders must be familiar with the MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems). See https://atlas.mitre.org/
  • The organization’s capabilities must be mature enough to address the cybersecurity threats facing AI systems.

Layer 3 Requirements

  • Know and understand the requirements of your industry (a minimal sketch pulling all three layers together follows below).
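
Pulling the three layers of requirements above together, here is a minimal sketch of how an organization might encode them as a simple checklist and report the gaps. The data structure and the pass/fail values are hypothetical conveniences for illustration, not part of ENISA’s framework.

```python
# Illustrative only: encoding the layered requirements as a simple checklist
# so an assessment can flag unmet items per layer.
from dataclasses import dataclass

@dataclass
class Requirement:
    layer: str
    description: str
    satisfied: bool

# Hypothetical assessment results for a single AI system.
checklist = [
    Requirement("Layer I", "IT infrastructure operates at an acceptable level of risk (e.g., ITIL)", True),
    Requirement("Layer I", "Security program protects IT assets (e.g., NIST RMF)", True),
    Requirement("Layer II", "NIST AI RMF or ISO/IEC 42001 established", False),
    Requirement("Layer II", "Stakeholders familiar with MITRE ATLAS", True),
    Requirement("Layer II", "Capabilities mature enough for AI-specific threats", False),
    Requirement("Layer III", "Sector-specific requirements identified and addressed", True),
]

# Report gaps by layer so remediation can be prioritized.
for item in checklist:
    if not item.satisfied:
        print(f"Gap in {item.layer}: {item.description}")
```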

Conclusion

Securing artificial intelligence information systems is a critical challenge that organizations face today, transcending conventional GRC concepts. The above is just the beginning, and we haven’t even explored ethics! As AI systems continue to evolve, their complexities and inherent risks multiply, necessitating comprehensive frameworks that address these nuanced threats. Relying solely on conventional cybersecurity practices can leave organizations vulnerable, as AI presents unique risks not typically seen in traditional systems. ENISA’s Multilayer Framework offers an innovative approach, providing a clear path for organizations to navigate the intricacies of AI security across different layers of operation. Layer I emphasizes the foundational aspects of information security, Layer II highlights the distinctiveness of AI components, and Layer III underscores sector-specific good practices. By integrating these frameworks with existing GRC protocols, organizations can create a robust, well-rounded strategy to safeguard their AI endeavors. The journey toward comprehensive AI security is ongoing, and collaboration among professionals is crucial to refine and evolve these frameworks to meet the dynamic challenges of tomorrow. Embracing a proactive stance now will not only safeguard AI systems but also foster trust among stakeholders, ensuring the responsible and secure adoption of AI technologies in the future.

Join the conversation on AI security!

We’re at the forefront of defining and refining the best practices for securing AI Information Systems. But this journey is collaborative. Your insights, experiences, and feedback are invaluable. Share your thoughts and be part of the proactive community shaping the future of AI security. Let’s ensure a safer digital landscape together. Comment below to dive deeper into the discussion!
