Conformity assessment is the formal verification step that sits between development and placing a high-risk AI system on the market. Understanding it is essential for any AI provider preparing for the enforcement deadline, and for any deployer who needs to verify that the systems they procure have been properly assessed before they accept the associated compliance obligations.

Key takeaways

  • The conformity assessment is a mandatory provider obligation for all high-risk AI systems under Annex III and Annex I of Regulation (EU) 2024/1689. It verifies compliance with the requirements in Chapter III, Section 2 (Articles 9 to 15): risk management, data and data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity.
  • Most Annex III high-risk AI systems are subject to self-assessment by the provider. Systems embedded in products covered by Annex I Union harmonisation legislation require notified-body involvement in the relevant regulated product conformity procedure.
  • A successfully completed conformity assessment produces a Declaration of Conformity, a CE marking affixed to the system or its documentation, and a registration entry in the EU database that the Commission maintains under Article 71.
  • Voluntary certification under ISO/IEC 42001:2023 or the FP Certified framework is separate from the mandatory conformity assessment. It strengthens the evidence base for self-assessment and is increasingly used by AI insurers as a governance proxy for underwriting purposes.
  • A substantial modification to a system requires the conformity assessment to be repeated. Deployers who substantially modify a system become providers and must conduct their own assessment.

The legal basis: Articles 43 to 47

Conformity assessment procedures are set out in Articles 43 to 47 of Regulation (EU) 2024/1689, with the Declaration of Conformity in Article 47 and the EU database registration in Articles 49 and 71. The CE marking procedure is in Article 48. Read together, these provisions create a compliance architecture that is familiar to manufacturers of regulated products in the EU but is new territory for AI software providers and deployers.

Article 43(1) sets the entry point: before placing a high-risk AI system on the market or putting it into service, the provider shall carry out a conformity assessment of the system in accordance with the applicable procedure. The applicable procedure depends on which Annex the system falls under.

For high-risk AI systems listed in Annex III that are not safety components of Annex I-regulated products, the default route is Annex VI, the self-assessment procedure based on internal control. Article 43(2) prescribes it for the systems in Annex III, points 2 to 8, and Article 43(1) permits it for biometric systems under Annex III, point 1, where harmonised standards have been applied in full; otherwise those biometric systems follow the Annex VII procedure involving a notified body. Under Annex VI, the provider reviews the system against the Chapter III Section 2 requirements using their internal quality management process, documents the outcome, issues a Declaration of Conformity, and affixes the CE marking.

For systems that are safety components of products subject to Union harmonisation legislation listed in Annex I, the notified body that is already involved in the regulated product conformity process also assesses the AI component. The AI component assessment follows the applicable Annex from the parent product legislation. This is the scenario that applies to AI embedded in medical devices under Regulation (EU) 2017/745, in vitro diagnostic devices under Regulation (EU) 2017/746, and other Annex I-listed product categories.

The requirements that conformity assessment verifies

The conformity assessment checks the provider's system against the requirements in Chapter III, Section 2 (Articles 9 to 15). Each requirement generates specific documentation that the provider must hold and update throughout the system's lifecycle.

Risk management system (Article 9). The provider must have an established, documented, iterative risk management process covering identification, evaluation, mitigation, and residual risk. The assessment verifies that the process exists, is documented, and has been applied to the specific system.

Data and data governance (Article 10). Training, validation, and testing data must meet quality criteria: relevance, representativeness, freedom from errors to the extent possible, and completeness relative to the intended purpose. The assessment examines the data governance process and the documentation of data quality decisions.

Technical documentation (Article 11 and Annex IV). The full technical documentation set, covering the system's general description, design specifications, training methodology, validation results, risk management records, and post-market monitoring plan, must be complete and accurate. This is the central document of the conformity assessment.

Record-keeping (Article 12). The system must technically allow for the automatic recording of events (logs) over its lifetime. Logs must enable traceability of the system's functioning and post-event investigation, and must be retained so they can be made available to market surveillance authorities on request.

Transparency and instructions for use (Article 13). The system must include the information necessary for deployers to use it effectively, including its capabilities, limitations, intended use, and the human oversight measures. The instructions for use must accompany the system when it is placed on the market.

Human oversight (Article 14). The system must be designed with oversight mechanisms enabling persons to understand outputs, detect malfunctions, and intervene or halt the system. The conformity assessment verifies that these mechanisms are designed into the product, not merely described in documentation.

Accuracy, robustness, and cybersecurity (Article 15). The system must achieve the accuracy levels claimed in the technical documentation, be robust to errors and inconsistencies in the input data, and be resilient to unauthorised attempts to alter its functioning. The assessment examines the testing evidence for each of these properties.
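The Article 9 to 15 requirements above lend themselves to a simple evidence checklist. As a minimal sketch, the requirement-to-artifact mapping below is illustrative only; the artifact names are hypothetical, not an official EU AI Act taxonomy:

```python
# Illustrative mapping from each Chapter III, Section 2 requirement to
# example evidence artifacts a self-assessing provider might assemble.
REQUIREMENTS = {
    "Art. 9 risk management": ["risk register", "mitigation log", "residual-risk sign-off"],
    "Art. 10 data governance": ["dataset datasheets", "data quality review"],
    "Art. 11 technical documentation": ["Annex IV technical file"],
    "Art. 12 record-keeping": ["logging design spec", "log retention policy"],
    "Art. 13 transparency": ["instructions for use"],
    "Art. 14 human oversight": ["oversight design document", "intervention test report"],
    "Art. 15 accuracy/robustness/cybersecurity": ["test reports", "adversarial evaluation"],
}

def missing_evidence(collected):
    """Return, per requirement, the artifacts not yet collected."""
    gaps = {}
    for requirement, artifacts in REQUIREMENTS.items():
        have = set(collected.get(requirement, []))
        outstanding = [a for a in artifacts if a not in have]
        if outstanding:
            gaps[requirement] = outstanding
    return gaps

# A provider holding only the technical file still has gaps under the
# other six requirements.
gaps = missing_evidence({"Art. 11 technical documentation": ["Annex IV technical file"]})
print(sorted(gaps))
```

A gap report of this kind is only an internal planning aid; the assessment itself turns on the substance of the documents, not on their presence in a tracker.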

The self-assessment procedure in practice

Most AI providers subject to the EU AI Act will conduct a self-assessment under Annex VI. This is an internal quality procedure: the provider reviews their system against each requirement, documents the evidence, concludes that the system meets the requirements, and issues a Declaration of Conformity. No third party is mandatorily involved in the standard Annex III self-assessment.

The self-assessment is only as robust as the internal quality management system that supports it. A provider without an established AI governance process will struggle to produce the documentation the self-assessment requires, because the documentation must reflect real governance decisions that were made during development, not compliance statements drafted after the fact.

The EU AI Act does not specify a format for the internal quality management system, but Article 17 requires providers to establish a quality management system covering their AI risk processes. ISO/IEC 42001:2023, the international standard for AI management systems published in December 2023, provides a framework that is directly applicable to the Article 17 obligation. A provider that implements ISO 42001 in substance will find that its management system produces most of the documentation required for the Annex VI self-assessment automatically, because the two frameworks address the same governance questions.

For a detailed mapping of ISO 42001 requirements to EU AI Act obligations, see the NIST, ISO 42001, and EU AI Act comparison. For the full FP Certified methodology and how it maps to conformity assessment requirements, see the certification methodology page.

Where notified bodies are required

Notified bodies are independent conformity assessment organisations that have been formally designated by a member state to carry out third-party assessments under Union harmonisation legislation. In the AI context, existing notified bodies from regulated product sectors are extending their scope to cover AI components under the EU AI Act.

The AI Act requires notified-body involvement for AI systems that are safety components of products subject to the Union harmonisation legislation listed in Annex I. This includes: machinery under Directive 2006/42/EC, toys, lifts, radio equipment, pressure equipment, personal protective equipment, gas appliances, cableway installations, rail systems, medical devices under Regulation (EU) 2017/745, in vitro diagnostic medical devices under Regulation (EU) 2017/746, and certain other Annex I-listed categories.

For AI developers who supply embedded AI to manufacturers of regulated products, the practical effect is that the notified body already in the product certification relationship will also assess the AI component. The developer must provide the notified body with the same documentation package that a self-assessing provider would produce: technical documentation, risk management records, data governance evidence, and test results.

Notified bodies for AI are not yet widely available. The EU AI Act created a new regime of AI system notified bodies that is still being stood up across member states. BSI, TÜV SÜD, and Bureau Veritas are among the organisations actively developing AI conformity assessment capacity in Europe, but the market for notified-body AI assessment is not yet mature. Providers who require notified-body involvement should begin the engagement process early, because the pipeline of organisations seeking notified-body assessments is growing faster than capacity.

The Declaration of Conformity and CE marking

The Declaration of Conformity is the legal document produced at the end of the conformity assessment. Under Article 47, it confirms that the provider has assessed the system and concluded that it meets the Chapter III Section 2 requirements. The Declaration must contain the provider's name and contact details, the system's description and identification, any Union harmonisation legislation under which conformity is declared, a statement of conformity with the AI Act requirements, references to harmonised standards applied where relevant, the notified body's name and number where applicable, and the date and signature of the responsible person.
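The Article 47 content list can be held as a structured record with a completeness check before signature. A hypothetical sketch, with field names of our choosing rather than Annex V wording:

```python
from dataclasses import dataclass, field

@dataclass
class DeclarationOfConformity:
    # Fields mirror the Article 47 content list; names are illustrative.
    provider_name: str = ""
    provider_contact: str = ""
    system_identification: str = ""
    conformity_statement: str = ""
    responsible_person: str = ""
    signature_date: str = ""
    harmonisation_legislation: list = field(default_factory=list)  # other Union law, if any
    harmonised_standards: list = field(default_factory=list)       # standards applied, if any
    notified_body: str = ""  # name and number, only where a notified body was involved

MANDATORY = ["provider_name", "provider_contact", "system_identification",
             "conformity_statement", "responsible_person", "signature_date"]

def missing_mandatory_fields(doc):
    """List the always-required fields that are still empty."""
    return [name for name in MANDATORY if not getattr(doc, name).strip()]

draft = DeclarationOfConformity(provider_name="Example AI GmbH")
print(missing_mandatory_fields(draft))
```

The optional fields are conditional in the Regulation itself: harmonised standards and notified-body details appear only where they apply to the system in question.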

The Declaration must be retained for ten years from the date the system is placed on the market. This retention period matches standard practice under other Union harmonisation legislation and reflects the potential longevity of AI systems in commercial deployment and the possibility that enforcement investigations may be initiated well after first deployment.

CE marking for AI systems embedded in regulated products follows the CE marking procedure of the parent product legislation. For standalone AI systems in Annex III categories, the CE marking is affixed to the system, its documentation, or its packaging. AI software is typically marked by including the CE symbol in the user interface or the product documentation.

How voluntary certification relates to the mandatory assessment

Voluntary certification under frameworks like ISO/IEC 42001, AIUC-1, or the FP Certified methodology is legally distinct from the mandatory conformity assessment. The conformity assessment is required by law for providers of high-risk AI systems. Voluntary certification is an additional step that a provider may choose to undertake to demonstrate the robustness of their governance processes to external audiences: clients, partners, investors, and insurance underwriters.

The practical overlap between the two is substantial. AIUC-1 certification, which underpins the first AI agent insurance policy issued to ElevenLabs in February 2026, requires evidence of risk management, governance documentation, and behavioural testing that maps closely to Articles 9, 10, 11, 14, and 15 of the EU AI Act. ISO 42001 certification produces a quality management record that supports the Article 17 quality management system requirement and strengthens the self-assessment documentation. FP Certified's seven-dimension framework covers the same governance, transparency, oversight, and operational domains that the conformity assessment evaluates.

For insurance underwriters, the relationship between conformity assessment and voluntary certification is a practical one. A provider who holds a Declaration of Conformity has satisfied a legal requirement. A provider who additionally holds ISO 42001 certification or an AIUC-1 assessment has demonstrated that their governance processes have been independently verified against a published standard. The latter is what insurers treat as underwriting evidence. The former is a necessary baseline, but not in itself sufficient to price a policy with confidence.

For the coverage implications of certification, see the overview on preparing an AI agent underwriting submission for European insurers. For the EU AI Act regulatory context that conformity assessment sits within, see the Article 9 risk management system guide on the EU regulatory desk.

Substantial modification and the repeat assessment obligation

Article 3(23) defines a substantial modification as a change to an AI system, after it has been placed on the market or put into service, that was not foreseen or planned in the provider's initial conformity assessment and that either affects the system's compliance with the Chapter III Section 2 requirements or results in a modification of the intended purpose for which the system was assessed. When a substantial modification occurs, the provider must carry out a new conformity assessment, update the technical documentation, issue a new Declaration of Conformity, and re-register in the EU database if the registration information has changed.

The practical challenge is identifying what counts as a substantial modification in an AI context where models are regularly updated, fine-tuned, and deployed in new configurations. A change in a model's weights through fine-tuning on new data may or may not constitute a substantial modification depending on its effect on the system's behaviour relative to the intended purpose that was assessed. Providers should maintain a change log that records each modification and includes a documented assessment of whether the change constitutes a substantial modification requiring a new conformity assessment.
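The change-log practice described above can be sketched as a structured record in which every modification carries an explicit, documented substantial-modification determination. The fields and the two-prong test below are a simplified illustration of the Article 3(23) definition, not regulatory criteria:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModificationRecord:
    # Illustrative change-log entry; field names are assumptions.
    when: date
    description: str                # e.g. "fine-tuned on new support transcripts"
    affects_compliance: bool        # does the change touch a Section 2 requirement?
    changes_intended_purpose: bool  # does it alter the assessed intended purpose?
    rationale: str                  # documented reasoning behind the determination

    @property
    def substantial(self):
        # Either prong of the (simplified) Article 3(23) test triggers
        # a repeat conformity assessment.
        return self.affects_compliance or self.changes_intended_purpose

def reassessment_queue(change_log):
    """Return the logged changes that require a new conformity assessment."""
    return [record for record in change_log if record.substantial]

log = [
    ModificationRecord(date(2026, 3, 1), "UI copy update", False, False,
                       "no behavioural change"),
    ModificationRecord(date(2026, 4, 2), "fine-tune on new data", True, False,
                       "accuracy profile shifted against assessed purpose"),
]
print(len(reassessment_queue(log)))  # 1
```

The value of the record is the rationale field: it preserves the contemporaneous reasoning that a market surveillance authority would ask for when questioning why a given update did or did not trigger reassessment.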

Frequently asked questions

What is the conformity assessment under the EU AI Act and who must conduct it?

The conformity assessment is the procedure by which a provider of a high-risk AI system verifies compliance with Chapter III Section 2 requirements. For most Annex III systems, it is a self-assessment by the provider. Systems embedded in Annex I-regulated products require notified-body involvement through the parent product conformity process.

Which high-risk AI systems require a third-party notified-body assessment?

Systems that are safety components of products covered by Annex I Union harmonisation legislation, including medical devices under Regulation (EU) 2017/745, in vitro diagnostic devices under Regulation (EU) 2017/746, and machinery under Directive 2006/42/EC. Standalone Annex III systems not embedded in Annex I products are subject to self-assessment.

What does the EU AI Act Declaration of Conformity contain?

Under Article 47, the Declaration must include the provider's name and address, the system's name and type, applicable Union harmonisation legislation, a conformity statement, references to harmonised standards, notified body details where applicable, and the date and signature. It must be retained for ten years from when the system was placed on the market.

How does voluntary certification under ISO 42001 or AIUC-1 relate to the mandatory conformity assessment?

The mandatory conformity assessment is a legal obligation for providers of high-risk AI. Voluntary certification under ISO 42001 or AIUC-1 is separate. It demonstrates governance quality to external audiences, strengthens the self-assessment evidence base, and is increasingly used by AI insurers as a governance proxy for underwriting purposes. The two are complementary rather than substitutes.

What happens if a provider substantially modifies a high-risk AI system after initial conformity assessment?

Article 3(23) defines a substantial modification as a change affecting compliance with Chapter III Section 2 requirements or the system's intended purpose. A substantial modification requires a new conformity assessment, updated technical documentation, a new Declaration of Conformity, and updated EU database registration. A deployer who substantially modifies a system becomes a provider and must conduct the assessment themselves.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence Act), OJ L, 12.7.2024.
  2. Article 43, Regulation (EU) 2024/1689, conformity assessment procedures.
  3. Article 47, Regulation (EU) 2024/1689, EU declaration of conformity.
  4. Article 48, Regulation (EU) 2024/1689, CE marking of conformity.
  5. Articles 49 and 71, Regulation (EU) 2024/1689, registration in the EU database.
  6. Annex IV, Regulation (EU) 2024/1689, technical documentation requirements.
  7. Annex VI, Regulation (EU) 2024/1689, internal control conformity assessment procedure.
  8. Article 3(23), Regulation (EU) 2024/1689, definition of substantial modification.
  9. Article 17, Regulation (EU) 2024/1689, quality management system requirements.
  10. Regulation (EU) 2017/745, medical devices regulation.
  11. Regulation (EU) 2017/746, in vitro diagnostic medical devices regulation.
  12. Directive 2006/42/EC, Machinery Directive.
  13. ISO/IEC 42001:2023, Artificial intelligence management system.
  14. AIUC-1 AI Agent Certification Standard. Artificial Intelligence Underwriting Company, 2025.
  15. ElevenLabs. First AIUC-1-backed AI agent insurance policy. February 2026.