High-risk AI systems

A limited number of the AI systems defined in the AI Act are classified as high-risk AI systems. These systems generally offer considerable potential for added value. However, because they are used in regulated product sectors or in sensitive areas, it must be ensured that they do not adversely affect the safety, health or fundamental rights of persons. The intended purpose of an AI system is a decisive factor in classifying it as high-risk.

The key points of information about high-risk AI systems from the AI Act are summarised below.

High-risk AI systems under Annex I

An AI system is part of a regulated product or is itself a regulated product (see Article 6(1) points (a) and (b) of the AI Act).

The AI system is considered to be a high-risk AI system if conditions (a) and (b) are both fulfilled:

(a) it is itself a product covered by the EU legislation listed in Annex I,

OR

it is used as a safety component of a product (for example an AI safety component in machinery) covered by the EU legislation listed in Annex I,

AND

(b) the EU legislation requires a third-party conformity assessment before the product is placed on the EU market.
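
To make the two-step test easier to follow, the sketch below expresses it as a boolean check in Python. It is a minimal illustration only: the names and the data structure are assumptions made here, and the actual classification requires a legal assessment against Annex I, not a flag.

    # Illustrative sketch of the Article 6(1) test (Annex I route).
    # All names are assumptions for illustration; classification itself
    # requires a legal assessment, not a boolean check.
    from dataclasses import dataclass

    @dataclass
    class AnnexIAssessment:
        is_annex_i_product: bool               # condition (a), first alternative
        is_safety_component: bool              # condition (a), second alternative
        third_party_assessment_required: bool  # condition (b)

    def is_high_risk_under_annex_i(a: AnnexIAssessment) -> bool:
        condition_a = a.is_annex_i_product or a.is_safety_component
        return condition_a and a.third_party_assessment_required

For example, an AI safety component in machinery that must undergo a third-party conformity assessment would satisfy both conditions.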

Examples

Annex I lists 20 pieces of product legislation; examples include the Radio Equipment Directive, the Toy Safety Directive, the Machinery Directive and the Medical Devices Regulation.

High-risk AI systems under Annex III

An AI system is intended to be used in a sensitive area (see Article 6(2) AI Act).
The AI system is considered to be a high-risk AI system if it is used in any of the areas listed in Annex III for any of the purposes specified.

Annex III lists eight areas:

  1. biometrics,
  2. critical infrastructure,
  3. education and vocational training,
  4. employment, workers’ management and access to self-employment,
  5. access to and enjoyment of essential private services and essential public services and benefits,
  6. law enforcement,
  7. migration, asylum and border control management,
  8. administration of justice and democratic processes.

Exceptions

An AI system covered by one of the categories in Annex III is not considered to be a high-risk AI system if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. This is the case if the AI system:

(a) is intended to perform a narrow procedural task;

(b) only improves the result of a previously completed human activity;

(c) detects decision-making patterns or deviations from prior decision-making patterns and does not replace or influence the previously completed human assessment, without proper human review; or

(d) performs a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

Notwithstanding these exceptions, an AI system is always considered to be high-risk if the system performs profiling of natural persons.
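
How the Annex III test, the four exceptions and the profiling override interact can likewise be sketched as a decision rule. The following Python fragment is a simplified illustration with hypothetical names; whether a given exception applies is a substantive legal assessment, not a boolean input.

    # Illustrative sketch of the Article 6(2)/(3) logic (Annex III route).
    # All parameter names are hypothetical; whether an exception applies
    # is a substantive legal assessment, not a flag.
    def is_high_risk_under_annex_iii(used_for_annex_iii_purpose: bool,
                                     performs_profiling: bool,
                                     exceptions_a_to_d: list[bool]) -> bool:
        if not used_for_annex_iii_purpose:
            return False
        if performs_profiling:
            # Profiling of natural persons overrides every exception.
            return True
        # If any of the exceptions (a) to (d) applies, the system is not
        # considered high-risk.
        return not any(exceptions_a_to_d)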

High-risk AI systems must comply with the requirements laid down in Articles 8 to 15 of the AI Act. These requirements cover establishing a risk management system, quality criteria for data, drawing up technical documentation, logging capabilities, providing information for the transparent use of an AI system, implementing oversight measures, and achieving an appropriate level of accuracy, robustness and cybersecurity.

Requirements

Summary of requirements

Risk management system

The risk management system is understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating (Article 9(2) first sentence, AI Act).

See Article 9 AI Act for further details.

Data quality

High-risk AI systems that make use of techniques involving the training of AI models with data must be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in Article 10 AI Act.

See Article 10 AI Act for further details.
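
As one hypothetical illustration of what operationalising such quality criteria can look like in practice, the following Python sketch runs elementary completeness checks on a data set before training. The checks shown are assumptions for illustration; the criteria in Article 10 go considerably further.

    # Hypothetical sketch of elementary data-quality checks (cf. Article 10).
    # The checks shown are illustrative and far narrower than the Act's criteria.
    def check_dataset(records: list[dict], required_fields: list[str]) -> list[str]:
        """Return findings for documentation and review."""
        if not records:
            return ["data set is empty"]
        findings = []
        for field in required_fields:
            missing = sum(1 for record in records if record.get(field) is None)
            if missing:
                findings.append(f"{field}: {missing} missing values")
        return findings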

Technical documentation

The technical documentation is drawn up before a high-risk AI system is placed on the market or put into service. It serves to demonstrate that the high-risk AI system complies with the requirements.

See Article 11 AI Act for further details.

Logging capability

Events must be recorded automatically throughout the lifecycle of a high-risk AI system to ensure traceability.

See Article 12 AI Act for further details.
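
One common way to implement such automatic recording is structured, timestamped event logging. The sketch below uses Python's standard logging module; the event fields shown are assumptions made here, not fields prescribed by Article 12.

    # Minimal sketch of automatic, timestamped event recording for traceability.
    # The event fields are assumptions, not prescribed by the AI Act.
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_system_events.log",
                        level=logging.INFO, format="%(message)s")

    def record_event(event_type: str, details: dict) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,  # e.g. "inference" or "input_anomaly"
            "details": details,
        }
        logging.info(json.dumps(entry))

    record_event("inference", {"model_version": "1.0", "output_score": 0.87})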

Provision of information

The operation of a high-risk AI system must be transparent and deployers must be able to interpret the system’s output appropriately. Each high-risk AI system must be accompanied by instructions for use.

See Article 13 AI Act for further details.

Oversight measures

High-risk AI systems must be designed so that natural persons can effectively oversee them, for example through an appropriate human-machine interface, in order to prevent or minimise risks.

See Article 14 AI Act for further details.

Accuracy, robustness and cybersecurity

High-risk AI systems must perform reliably. They must therefore achieve an appropriate level of accuracy and be resilient, for example against errors and manipulation by unauthorised third parties.

See Article 15 AI Act for further details.

Providers of high-risk AI systems must ensure that their systems are compliant with these requirements (Article 16, point (a) AI Act). An overview of the necessary steps is given below.

Overview: requirements for providers

Providers of high-risk AI systems must establish a post-market monitoring system that actively and systematically collects, documents and analyses relevant data from a high-risk AI system throughout the entire lifetime of the system. The monitoring system must be suitable for the technology and the risks (Article 72 AI Act).
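
What such monitoring can look like in code is sketched below: operational metrics are aggregated periodically and degradations are flagged for human review. The metric and the threshold are assumptions chosen for illustration, not values taken from the AI Act.

    # Simplified sketch of post-market data analysis (cf. Article 72).
    # The metric name and threshold are illustrative assumptions.
    from statistics import mean

    ACCURACY_ALERT_THRESHOLD = 0.90  # hypothetical acceptable level

    def analyse_weekly_accuracy(samples: list[float]) -> None:
        observed = mean(samples)
        print(f"documented mean accuracy: {observed:.3f}")
        if observed < ACCURACY_ALERT_THRESHOLD:
            print("flag for review: accuracy below the defined threshold")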

Providers of high-risk AI systems must report any serious incident to the market surveillance authorities of the Member States where the incident occurred in accordance with Article 73 of the AI Act.
