This series contains general information and is neither formal nor legal advice. Readers and organisations must make their own assessment as to the suitability of the referenced standards and this material for their specific business needs.
Introduction
The new ISO 42001:2023 standard defines an Artificial Intelligence (AI) Management System for your organisation, similar to, for instance, ISO 9001 (Quality Management) and ISO 27001 (Information Security).
In this three-part series we cover background information regarding AI (part 1), matters to consider when developing, using or offering AI in your organisation (part 2), and how ISO 42001 can help you do this responsibly (part 3).
We do this from the perspective of Governance, Risk and Compliance (GRC), as that is our specific area of expertise in this matter, and it is also what ISO 42001:2023 is all about.
All you need to know about ISO 42001:2023
With all that in mind, we can now look at the ISO 42001 AI Management System (AIMS). How does it help an organisation navigate the fast-changing AI landscape responsibly? The following are some key aspects of what the framework covers. Unlike some other ISO management systems, this one is designed to produce an AIMS tailored to your organisation’s specific needs.
Most of all, ISO 42001 provides structure to your AI management efforts, and globally recognised proof of this to third parties.
The Basics
In defining the scope for the organisation’s AIMS, consider the intended purpose and use of the AI systems developed or provided. The requirements of interested parties must also be incorporated.
Management, all the way up to the top executives, must be committed to demonstrating positive leadership regarding the AI policy and objectives, communicating this effectively, and ensuring adequate resources are made available on an ongoing basis for the AI systems as well as the AIMS.
The process of planning an AI system should consider risk assessments, treatments and impacts. The context for this process is defined in the AIMS and can be applied to multiple AI systems. Risk treatments must be measurable so their effectiveness can be verified. A system’s impact assessment reviews the potential consequences of its deployment, intended use and any foreseeable misuse.
Processes must be created, documented and implemented for both the development and the operation of the AI system. Changes must be planned, with appropriate safeguards in place. The AI system, as well as all these surrounding processes, must be monitored and measured against relevant criteria, and fully tracked over time. The criteria are periodically evaluated through a standardised management process that results in actionable and measurable corrective actions for identified problems. And just like with ISO 27001, there are internal and external audits that can yield opportunities for improvement, as well as minor and major nonconformities.
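As a minimal, hypothetical sketch of what such measurable monitoring can look like in practice, the check below compares a model metric against an organisation-defined threshold. The metric (accuracy), the threshold and the response are assumptions an organisation would set in its own AIMS; ISO 42001 does not prescribe them.

```python
# Hypothetical periodic performance check. The metric, threshold and
# corrective-action response are all organisation-defined in the AIMS.
ACCURACY_THRESHOLD = 0.90  # criterion set by the organisation

def accuracy(predictions: list[str], expected: list[str]) -> float:
    # Fraction of predictions that match the expected outputs.
    correct = sum(p == e for p, e in zip(predictions, expected))
    return correct / len(expected)

def periodic_check(predictions: list[str], expected: list[str]) -> None:
    score = accuracy(predictions, expected)
    print(f"Measured accuracy: {score:.2%}")
    if score < ACCURACY_THRESHOLD:
        # In a real AIMS this would open a tracked corrective action,
        # not just print a warning.
        print("Below threshold: raise a corrective action for review.")
```

The point is not the specific metric, but that the criterion is explicit, measured over time, and tied to a corrective-action process.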
Finally, and no surprise there, all related documentation must be kept and maintained over time, for internal purposes as well as for external audits.
No organisation will be perfect the first time round or even the next; the objective is continual improvement.
Roles: From Developers to Subjects
Not everyone dealing with AI plays the same role. Here are some of the roles to consider:
- AI developer: currently predominantly the domain of organisations such as OpenAI and Meta. However, bear in mind that the required technology – essentially large numbers of powerful GPUs (graphics cards with lots of processing power and memory, such as the high-end models from NVIDIA) – is readily available and dropping in price at a good rate, so building and training a model from scratch will come within reach of more organisations. Such computing power can also be rented on demand in the cloud, and it has been shown that much smaller models based on pre-trained larger models still perform really well. Progress is indeed fast.
- Application developers: building programs leveraging an LLM or other AI model.
- Dataset builder: creating additional datasets for use with an existing model.
- User: someone using an AI-enabled application.
An organisation that uses an AI-enabled application may have access to all the components associated with these roles, or may be using them hosted elsewhere through an API, rather than building them in-house. That is fine – however, are there arrangements, processes and safeguards in place for support, including upgrades? This is important because, more than a regular computer program, an AI-enabled system may behave fundamentally differently when new data is added or its use changes even in a minor way (see our paragraph on Hallucinations and Idempotency).
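As one minimal sketch of such a safeguard, the code below pins a hosted model to an explicit version and verifies that the provider actually served that version, so an unannounced upgrade is detected rather than silently changing behaviour. The endpoint, model name and response fields here are hypothetical assumptions; substitute your provider's real API.

```python
import requests

# Hypothetical hosted-LLM endpoint and pinned model version.
# Real providers expose similar version identifiers; check their docs.
API_URL = "https://api.example-llm.com/v1/generate"
PINNED_MODEL = "example-model-2024-01-15"  # explicit version, never "latest"

def generate(prompt: str, api_key: str) -> str:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": PINNED_MODEL, "prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # Verify the provider actually served the pinned version; an
    # unexpected model is a change that should trigger re-evaluation.
    served = data.get("model", "")
    if served != PINNED_MODEL:
        raise RuntimeError(f"Model changed: expected {PINNED_MODEL}, got {served}")
    return data["text"]
```

The same idea applies to pinned dataset and library versions: any change to a component of the AI system should be a deliberate, reviewed event rather than a surprise.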
Controls of Particular Interest
Annex A contains a standard set of controls. Aside from the controls supporting the basics of the AIMS, there are some additional ones that deal with:
- Data acquisition, its quality and provenance (where does the data come from, can it be trusted, and is it free of detrimental information) – see the sketch after this list;
- Information for interested parties, in particular users – enabling them to assess the impact (both positive and negative) of using the system;
- Other third-party relationships, ensuring that the organisation understands its responsibilities and remains accountable to its customers’ needs and expectations, both during development as well as use of the AI system.
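As a minimal illustration of recording provenance for an acquired dataset (the record fields are illustrative assumptions; the standard does not prescribe a format), a checksum plus source metadata makes any later change to the data detectable:

```python
import hashlib
import json
from pathlib import Path

def record_provenance(dataset: Path, source_url: str, licence: str) -> dict:
    # A SHA-256 checksum pins the exact dataset contents, so any later
    # change to the data is detectable; source and licence document
    # where it came from and on what terms it may be used.
    digest = hashlib.sha256(dataset.read_bytes()).hexdigest()
    record = {
        "file": dataset.name,
        "sha256": digest,
        "source": source_url,
        "licence": licence,
    }
    Path(dataset.name + ".provenance.json").write_text(json.dumps(record, indent=2))
    return record
```

Re-computing the checksum before training or fine-tuning confirms that the data being used is exactly the data that was assessed.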
Implementation Guidance
A really great addition to ISO 42001 is the implementation guidance in Annex B. It does not tell an organisation exactly what to do, as every organisation and AI-enabled system is different, but it gives you an excellent starting point that can be extended or modified to suit your needs.
- Objective (business requirements): management direction and support for AI systems.
- AI policy and its periodic review.
- Alignment with other policies within the organisation.
- Roles and responsibilities, including safety, privacy, performance, human oversight, supplier relationships, fulfilling legal requirements, and data quality management.
- An effective mechanism for reporting concerns, stipulating appropriate investigation and resolution powers for the person(s) or group referred to.
- Resource documentation: any type of documentation relevant to the implementation and use of the system. This covers a broad range of topics, such as the data, the AI algorithms/models, hardware to develop and run the model, human resources required for the entire AI lifecycle, etc.
- Data resources: a particularly useful list with references for working with AI datasets, including testing aspects.
- Impact assessments: covering not only the legal position of the organisation, but also that of its users, their physical and psychological well-being, and universal human rights; this is then broadened to groups and overall society. The organisation must also consider environmental aspects and potential relevance to climate change!
- Life cycle: this covers the entire path from idea through to decommissioning, with a strong emphasis on the training process of the AI and its maintenance over time. The performance of the AI-enabled system must remain reliable and predictable according to standards set by the organisation. A lot of examples and references are provided.
- Technical documentation: more detailed information about the system’s architecture, design choices, assumptions and technical limitations. There should be a plan for managing failures, including rolling back an update.
- Event logs: an AI-enabled system should automatically produce logs (which the organisation then retains, of course), enabling tracing of functionality and identification of possible problems; a minimal logging sketch follows this list.
- Data: management, acquisition, quality, provenance (where does the data come from and can it be relied on), and preparation.
- Interested parties: documentation and information for users, external reporting, communication of incidents, and other information for interested parties.
- Responsible use: this covers fairness, accountability, transparency, explainability, reliability, safety, robustness and redundancy, privacy and security, and accessibility. Again, it involves monitoring, human oversight including the authority to override decisions made by the system, and ultimately, whether automated decision-making is actually appropriate for a responsible approach to the particular AI-enabled system and its specific use case.
- Third-party relationships: reviewing and allocating responsibilities appropriately, both with suppliers as well as with customers.
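As a minimal sketch of what such event logging might look like (the field names and format are illustrative assumptions; ISO 42001 does not prescribe a log schema), structured per-inference records tie observed behaviour to specific model and data versions:

```python
import json
import logging
import time
import uuid

# Structured JSON logs: one record per inference, so behaviour can be
# traced over time and across model/data versions. Field names are
# illustrative, not prescribed by the standard.
logger = logging.getLogger("aims.events")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_inference(model_version: str, dataset_version: str,
                  prompt: str, output: str, latency_ms: float) -> None:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,      # ties behaviour to a release
        "dataset_version": dataset_version,  # provenance of training data
        "prompt_chars": len(prompt),         # log lengths, not raw content
        "output_chars": len(output),
        "latency_ms": latency_ms,
    }
    logger.info(json.dumps(record))
```

Logging lengths rather than raw prompt and output text is one way to keep traceability without retaining potentially sensitive content; what to log is itself a privacy decision for the AIMS.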
For each of these topics, a lot of detail is provided that can act as a guide for outlining and implementing the AIMS. While the standard at no point says: “don’t do this”, it is made abundantly clear that an organisation should not simply jump into a deployment of an AI-enabled system, but rather explicitly take all noted aspects into consideration and put the appropriate documentation and processes in place. Local legislation aside, this is a very prudent approach.
If certain controls are deemed not applicable, this must be explicitly justified, and auditors will review that justification. An organisation cannot simply pick and choose.
Annex C provides additional guidance on potential AI-related organisational objectives and risk sources, while Annex D discusses use of the AIMS across domains or sectors of government and private enterprise, and integration with other ISO management systems. The main ones, which we have already mentioned, are ISO 27001 (Information Security Management) and ISO 9001 (Quality Management), but ISO 27701 (Privacy Information Management) is also relevant for any Personally Identifiable Information (PII) in the AI-enabled system.
Conclusion
As we have discussed in this three-part series, AI is a broad and fast-moving field. While it is very interesting and offers great potential for productive uses, organisations wishing to use it responsibly are wise to consider and plan their steps carefully. Since the hype is already abundant, we took a specific look at some practical risks regarding the use of AI, as well as practical mitigations, in the context of GRC.
The new ISO 42001:2023 framework can provide a solid foundation for this, ensuring no aspects or checks are missed along the way, while also offering structure for the ongoing management of AI use in an organisation.
Certification in, or alignment with, ISO 42001 can show potential clients, partners and other stakeholders how you are managing AI-enabled systems in your organisation, using an up-to-date global standard.
Learn more about Sekuro’s GRC services, and how we can assist you with certification and alignment with ISO 42001.
Arjen Lentz
Senior Consultant (Governance, Risk, Compliance), Sekuro