- ISO/IEC 42001:2023 is a management system standard governing how organisations manage AI activities. It does not specify technical requirements for individual AI systems, and certification against it does not substitute for EU AI Act conformity assessment.
- The standard's seven main clauses (4 to 10) follow the ISO High Level Structure used in ISO 9001 and ISO 27001, making it familiar to organisations with existing management system programmes.
- Annex A adds 38 AI-specific controls across nine categories. These are the controls most directly relevant to EU AI Act compliance preparation, particularly the AI risk management (A.4) and AI system lifecycle (A.6) categories.
- The clause-to-AI-Act mapping is strong at the governance layer (Articles 9, 17, 72) and weaker at the system layer (Articles 13, 14, 15). Agent-specific frameworks such as Agent Certified are needed to fill the system-level gap.
- For organisations with ISO 27001 in place, the incremental effort for ISO 42001 is substantially lower than starting from scratch, because much of the documentation infrastructure already exists.
ISO/IEC 42001:2023 was published on 18 December 2023 by the International Organization for Standardization and the International Electrotechnical Commission. It is the first ISO management system standard dedicated to artificial intelligence. Where its predecessors, ISO 9001 for quality and ISO 27001 for information security, established governance frameworks for well-understood operational disciplines, ISO 42001 addresses an area where the underlying technology is still evolving, the regulatory landscape is still forming, and the risk landscape is imperfectly understood.
This creates a specific challenge for implementation. A management system standard works by specifying what an organisation must govern and how. It assumes the organisation knows what it is governing. For AI, particularly for autonomous agents, the scope of what is being governed is often contested internally before the standard is even opened. This guide addresses that challenge directly.
Structure of the standard
ISO/IEC 42001 follows the ISO High Level Structure (HLS), which is a common clause structure used across management system standards to make integration and auditing consistent. The structure runs from Clause 4 to Clause 10. Clauses 1 to 3, covering scope, normative references, and terms and definitions, are prefatory.
Clause 4 covers Context of the organisation. It requires the organisation to understand its internal and external context, identify interested parties and their requirements, and define the scope of the AI management system (AIMS). For AI, this requires a documented inventory of AI systems in use, a mapping of their deployment contexts, and a clear statement of which systems are inside and outside the AIMS boundary. The scope definition is the document that determines what the certification covers, and it is the first document an auditor will examine.
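The Clause 4 inventory and scope boundary are easier to keep current as structured data than as prose. A minimal sketch in Python, where every system name, field, and rationale is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in the Clause 4 AI system inventory (illustrative fields)."""
    name: str
    deployment_context: str   # where and how the system is used
    in_aims_scope: bool       # inside or outside the AIMS boundary
    scope_rationale: str      # why it is in or out

inventory = [
    AISystem("support-triage-agent", "customer support routing", True,
             "autonomous actions affect customers"),
    AISystem("spellcheck-model", "internal document editing", False,
             "no decision-making impact; excluded with recorded rationale"),
]

# The scope statement an auditor examines is the in-scope subset
# plus the documented rationale for every exclusion.
in_scope = [s.name for s in inventory if s.in_aims_scope]
print(in_scope)
```

The point of the structure is that every exclusion carries a recorded rationale, which is what an auditor checks the scope statement against.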
Clause 5 covers Leadership. It requires top management to demonstrate commitment to the AIMS, establish an AI policy, and assign responsibility for the system's performance. The AI policy must address the organisation's position on responsible AI development and use, its commitments on transparency and accountability, and its requirements for oversight. This clause maps to Article 17 of the EU AI Act on quality management systems for providers of high-risk AI.
Clause 6 covers Planning. It requires the organisation to identify risks and opportunities affecting the AIMS, establish AI risk assessment processes, and set objectives and plans to achieve them. This is the clause where the connection to EU AI Act Article 9 on risk management is most direct. A well-implemented Clause 6 produces a documented risk assessment process, a living risk register, and a set of measurable objectives that can be referenced in a conformity assessment or audit.
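The "living risk register" Clause 6 calls for can be as simple as a typed record per risk, re-scored and re-sorted each review cycle. A sketch with hypothetical fields and a simple likelihood-times-impact scoring scheme (real schemes vary):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    """One row of the Clause 6 AI risk register (illustrative schema)."""
    risk_id: str
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    treatment: str
    owner: str
    next_review: date

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; real schemes vary.
        return self.likelihood * self.impact

register = [
    RiskEntry("AI-R-001", "Agent acts outside approved autonomy envelope",
              3, 4, "Hard gate on high-impact actions; escalate to a human",
              "Head of AI Governance", date(2026, 3, 1)),
]

# A "living" register implies re-sorting by score and reviewing each cycle.
register.sort(key=lambda r: r.score, reverse=True)
print(register[0].score)  # 12
```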
Clause 7 covers Support. It addresses the competence, awareness, communication, and documentation needed to run the AIMS. In practical terms, it requires documented training records for staff involved in AI-related activities, a communication plan, and a document control procedure. The document control requirement under Clause 7.5 is the governance layer that underpins all other documented information in the AIMS.
Clause 8 covers Operation. It requires the organisation to plan, implement, and control the processes needed to meet the AIMS requirements and carry out the actions identified in Clause 6. For AI, this is the clause where system-level activities sit: development, testing, deployment, and operational management of AI systems. The connection to EU AI Act Articles 10, 11, and 12 on data governance, technical documentation, and record-keeping is strongest here.
Clause 9 covers Performance evaluation. It requires the organisation to monitor, measure, analyse, and evaluate the AIMS, conduct internal audits, and carry out management review. This clause maps to EU AI Act Article 72 on post-market monitoring. A well-implemented Clause 9 produces a monitoring programme with defined metrics, an internal audit schedule, and a management review cadence with documented outputs.
Clause 10 covers Improvement. It requires the organisation to act on nonconformities, take corrective action, and continually improve the AIMS. In the AI context, this clause provides the mechanism for learning from incidents, integrating findings from post-market monitoring, and updating the risk assessment in response to new evidence about system behaviour.
Annex A: the AI-specific controls
Annex A is where ISO 42001 departs from the generic management system template and addresses AI directly. It contains 38 controls across nine categories. Unlike ISO 27001's Annex A, which lists controls in one-to-two sentence form, ISO 42001's Annex A controls are supplemented by Annex B implementation guidance, which provides substantive direction on each control. Annex A is normative and Annex B is informative: organisations are not obliged to implement every control, but they must record which Annex A controls they have selected, and justify any exclusions, in the Statement of Applicability, which is a required document.
The nine control categories and their relationship to EU AI Act obligations are set out below.
A.2: Policies for AI
Three controls governing the organisation's formal positions on AI use, development, and third-party AI engagement. The AI use policy required here maps to the AI Act's requirement for documented instructions for use that deployers must follow under Article 26(1). An organisation with a documented AI use policy is already part of the way to the instructions-for-use mapping that Article 26 requires.
A.3: Internal organisation
Four controls on roles, responsibilities, and cross-functional governance. The AI governance role and the assignment of human oversight responsibilities required here connect directly to Article 26(2) of the AI Act, which requires deployers to assign human oversight to natural persons with competence, training, authority, and support. A documented role definition that meets A.3's requirements is a substantial part of the oversight register the Article 26 compliance file demands.
A.4: AI risk management
Five controls on the AI risk assessment process, the risk treatment process, and the documentation of both. This is the most directly relevant category for EU AI Act Article 9 compliance. A.4 requires the organisation to identify AI-related risks, assess their likelihood and impact, determine treatments, and document the results in a form that is maintained and updated throughout the system's lifecycle. The risk register produced under A.4 is the same risk record that forms the first element of the minimum Article 26 operator file.
A.5: AI system impact assessment
Five controls on assessing the impact of AI systems on individuals and society before and during deployment. This category overlaps substantially with the fundamental rights impact assessment required under Article 27 of the AI Act for certain categories of high-risk deployers. An organisation that has implemented A.5 rigorously has likely produced most of the content that an Article 27 assessment requires, although the formal scope of the two instruments differs.
A.6: AI system lifecycle
Eight controls covering the full lifecycle from requirements and design through testing, deployment, operation, and retirement. This is the category where the technical documentation obligations of EU AI Act Article 11 and Annex IV are most relevant. A.6 requires documented evidence of design decisions, testing results, acceptance criteria, and operational monitoring arrangements. The technical file produced under A.6 is the starting point for the Annex IV documentation the AI Act requires, though the two have different levels of specificity in different areas.
A.7: Data for AI systems
Four controls on data quality, data governance, and data sourcing for AI training and operation. These map to EU AI Act Article 10 on data and data governance. A.7 requires documentation of data sources, data quality criteria, and the processes used to validate training and operational data. For agentic systems that retrieve data at runtime, A.7's controls on operational data governance are particularly relevant.
A.8: Third party and customer relationships
Three controls on due diligence for AI systems sourced from third parties and on the obligations owed to customers and end users. This category addresses the supply chain aspects of the EU AI Act that apply to importers, distributors, and deployers using third-party AI components. An organisation that has implemented A.8 has a documented process for verifying that third-party AI systems meet the organisation's standards and for communicating AI system characteristics to users.
A.9: AI system use
Three controls on the responsible use of AI systems within the organisation, including controls on permitted use cases and monitoring of system outputs. This category supports EU AI Act Article 26's operational requirements for deployers.
A.10: Documentation and evidence
Three controls on the creation, maintenance, and retention of documentation and evidence related to AI systems. This is the explicit documentation governance layer, reinforcing Clause 7.5's document control requirements with AI-specific provisions. A.10 is relevant to the logging and record retention obligations under Article 26(6) of the AI Act.
Where ISO 42001 leaves gaps for agent-specific systems
ISO/IEC 42001 was designed as a horizontal standard applicable to any organisation using or developing AI, from a company using a simple recommendation engine to one running fully autonomous multi-agent pipelines. This breadth is a design choice, not a flaw. But it means the standard does not address several characteristics of autonomous agents that create specific compliance and risk challenges under the EU AI Act.
The first gap is the autonomy envelope. An AI agent's autonomy envelope is the set of conditions under which it may take actions without human confirmation. ISO 42001 requires human oversight to be considered, but does not provide a framework for specifying and monitoring the boundaries of autonomous action. The Agent Certified framework's Autonomy dimension addresses this directly, requiring applicants to document and test the boundaries of autonomous operation and the conditions that trigger escalation.
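One way to make an autonomy envelope concrete is a guard that every proposed action must pass before execution, with any failure routing to a human. The thresholds, tool names, and fields below are hypothetical:

```python
# Hypothetical autonomy-envelope check: an action proceeds autonomously
# only if every envelope condition holds; otherwise it escalates.

ENVELOPE = {
    "max_transaction_value": 500.0,            # illustrative threshold
    "allowed_tools": {"search", "draft_email"},
    "requires_confirmed_user": True,
}

def within_envelope(action: dict) -> bool:
    if action["tool"] not in ENVELOPE["allowed_tools"]:
        return False
    if action.get("value", 0.0) > ENVELOPE["max_transaction_value"]:
        return False
    if ENVELOPE["requires_confirmed_user"] and not action.get("user_confirmed_identity"):
        return False
    return True

def dispatch(action: dict) -> str:
    # The escalation branch is the documented boundary condition
    # that the envelope specification must name explicitly.
    return "execute" if within_envelope(action) else "escalate_to_human"

print(dispatch({"tool": "draft_email", "value": 0.0,
                "user_confirmed_identity": True}))
```

Writing the envelope as an executable predicate also makes it testable, which is what "document and test the boundaries of autonomous operation" requires in practice.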
The second gap is multi-step reasoning transparency. An agent that operates through a chain of intermediate reasoning steps, tool calls, and sub-agent invocations produces a process trace that is significantly different from a classical model's input-output pair. ISO 42001's documentation and evidence controls do not specify what a process trace for an agentic system looks like or what must be retained. EU AI Act Article 12's record-keeping obligation for high-risk systems requires logging that enables post-incident investigation, but the standard does not translate that into agent-specific logging requirements.
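A process trace that supports post-incident investigation needs, at minimum, one correlatable record per step. A sketch of one possible record shape, with hypothetical step types, serialised as append-only JSON lines:

```python
import json
import time
import uuid

def trace_step(run_id: str, step_type: str, detail: dict) -> dict:
    """One record in an agent process trace (illustrative schema): every
    reasoning step, tool call, and sub-agent invocation gets an entry so a
    post-incident investigation can reconstruct the run."""
    return {
        "run_id": run_id,          # correlates all steps of one agent run
        "step_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "step_type": step_type,    # e.g. "tool_call", "sub_agent", "final_answer"
        "detail": detail,
    }

run_id = str(uuid.uuid4())
steps = [
    trace_step(run_id, "tool_call", {"tool": "search", "query": "order status"}),
    trace_step(run_id, "final_answer", {"summary": "order shipped"}),
]

# Append-only JSON lines is one simple, retention-friendly storage format.
log_lines = [json.dumps(s) for s in steps]
print(len(log_lines))
```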
The third gap is dynamic context integrity. An agent that retrieves context from external sources at runtime, whether from vector databases, web search, or API calls, operates in a way that ISO 42001's data governance controls do not fully address. The data the agent acts on is not fixed training data but dynamically assembled context that may be manipulated, outdated, or inconsistent. The Agent Certified framework's Context Integrity dimension evaluates whether the agent's dynamic context retrieval is governed appropriately.
For organisations deploying autonomous agents, the right implementation posture is ISO 42001 at the management layer, supplemented by agent-specific technical evaluation at the system layer. The Agent Certified methodology is designed to fill the system-layer gap that ISO 42001 leaves open for agentic deployments.
The EU AI Act mapping in practice
The practical exercise most useful before a conformity assessment or certification audit is an evidence mapping exercise. For each EU AI Act obligation that applies to the organisation's in-scope systems, identify the ISO 42001 clause or Annex A control that generates evidence for that obligation, locate the evidence in the AIMS documentation, and note the gaps where evidence does not yet exist.
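The mapping exercise itself can be held as a small table that makes gaps queryable rather than buried in prose. A sketch in which the obligations, sources, and file names are illustrative, not a compliance checklist:

```python
# A minimal evidence map for the pre-audit exercise described above.
# Obligations, ISO sources, and evidence locations are illustrative.
evidence_map = [
    {"obligation": "AI Act Art. 9 (risk management)",
     "iso_source": "Clause 6 / A.4",
     "evidence": "risk-register.xlsx",
     "gap": False},
    {"obligation": "AI Act Art. 26(2) (human oversight assignment)",
     "iso_source": "A.3",
     "evidence": None,          # role defined, training records missing
     "gap": True},
]

# The output of the exercise is the gap list, which becomes the work plan.
gaps = [row["obligation"] for row in evidence_map if row["gap"]]
print(gaps)
```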
The most commonly found gaps are:
- The system-level risk assessment: A.4 generates a framework, but the actual assessment per system often does not exist.
- The human oversight assignment with documented competence: A.3 requires role definition, but the training records and escalation path are often missing.
- The logging schedule with defined retention periods: A.10 requires records but often does not produce a specific schedule with retention periods tied to each log type.
- The incident response procedure at the system level: Clause 10 handles improvement, but a specific incident response procedure for AI outputs is often absent.
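The logging schedule gap in particular is cheap to close once retention periods are made explicit per log type. A sketch with hypothetical log types and periods:

```python
# Illustrative logging schedule: each log type carries an explicit
# retention period, tied to the record rather than left implicit.
RETENTION_DAYS = {
    "agent_process_trace": 365,
    "human_override_event": 730,
    "model_output_sample": 180,
}

def expired(log_type: str, age_days: int) -> bool:
    """True once a record of this type has outlived its retention period."""
    return age_days > RETENTION_DAYS[log_type]

print(expired("model_output_sample", 200))  # True
```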
Closing these gaps is the work of implementation. None of them is technically difficult. All of them require organisational decision-making, documentation discipline, and periodic review that is harder to sustain than to initiate.
Integration with ISO 27001
Many organisations approaching ISO 42001 already hold ISO 27001 certification or are in the process of implementing it. The two standards share the ISO High Level Structure, which means their clauses align and their documentation requirements are compatible. In practice, an organisation with a functioning ISO 27001 programme can typically use its existing document control procedures, risk management process structure, internal audit programme, and management review cadence as the foundation for ISO 42001 implementation, and focus the additional work on the AI-specific Annex A controls.
The information security risk register maintained under ISO 27001 is a natural starting point for the AI risk register required under A.4, with AI-specific risk categories added. The asset inventory used in ISO 27001 can be extended to include AI systems and their associated data sets. The training and awareness programme can be extended to cover AI governance topics. The main additional work is the AI-specific content: the AI policy, the AI system lifecycle documentation, the impact assessment process, and the agent-specific technical controls where relevant.
For the comparison between ISO 42001, NIST AI RMF, and the EU AI Act in a single table, see the framework comparison article. For the seven dimensions of technical agent certification that supplement the ISO 42001 management layer, see the seven dimensions article.
Preparing for the certification audit
A certification audit against ISO 42001 follows a two-stage process common to management system standards. Stage 1 is a documentation review, typically conducted remotely, in which the auditor reviews the AIMS scope, the AI policy, the risk assessment process documentation, and the Statement of Applicability. The auditor assesses whether the management system is sufficiently developed to proceed to Stage 2 and identifies any significant gaps that must be closed first.
Stage 2 is an implementation review, typically conducted on-site or by remote observation, in which the auditor interviews personnel, reviews records, and assesses whether the management system is functioning as documented. For AI systems, the auditor will typically review the AI system inventory, a sample of risk assessments, the incident records, the management review minutes, and the internal audit reports. Evidence of active use of the system, not just documented intent, is the standard that Stage 2 applies.
Organisations planning their first ISO 42001 certification audit should expect the interval between Stage 1 and Stage 2 to be spent closing the specific documentation gaps identified in Stage 1. This is normal and expected. Surveillance audits are annual, with recertification every three years. The ongoing work of maintaining the AIMS is heavier than the initial implementation for most organisations, because the AI landscape, the regulatory requirements, and the organisation's own AI use are all changing simultaneously.
For organisations ready to assess their agent deployments against a technical standard designed for autonomous systems, the next step after ISO 42001 is a formal assessment under the Agent Certified framework. The assessment covers the system-level dimensions that ISO 42001 does not address and produces a certification artefact that maps cleanly to the EU AI Act technical documentation requirements for high-risk systems.
For insight on the relationship between the EU AI Act's product liability exposure and the certification posture organisations should build before December 2026, see Agent Liability EU's briefing on the revised Product Liability Directive.