Risk levels

The AI Act is underpinned by a risk-based regulatory approach: the higher the risk posed by an AI application, the stricter the regulation.

The AI Act defines “risk” as the combination of two factors:

  • the probability of harm occurring, and
  • the severity of the harm.
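
For illustration only: the Act defines risk qualitatively, not numerically, but a small Python sketch can make the two-factor definition concrete. The ordinal scales and the use of multiplication below are assumptions of this sketch, not taken from the Act.

```python
# Illustrative sketch only: the AI Act defines risk as the combination of
# the probability of harm occurring and the severity of that harm. The
# numeric scales and the multiplication below are hypothetical assumptions
# used purely to make the definition concrete.
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}  # assumed ordinal scale
SEVERITY = {"minor": 1, "serious": 2, "severe": 3}     # assumed ordinal scale

def risk_score(probability: str, severity: str) -> int:
    """Combine the two factors named in the Act into a single ordinal score."""
    return PROBABILITY[probability] * SEVERITY[severity]

# A harm that is likely and severe ranks far above one that is rare and minor.
print(risk_score("likely", "severe"))  # 9
print(risk_score("rare", "minor"))     # 1
```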

The AI Act contains various rules for classifying the level of risk of AI systems; the different levels of risk are described below. The requirements applicable to an AI system depend on the system’s level of risk.

The vast majority of AI applications do not present particular risks. These include email spam filters, personalised product suggestions, customer service chatbots and computer games using AI. Such applications are subject to no rules at all or only to minimal ones, such as transparency obligations.

However, some AI applications pose potential risks to life and health or to European fundamental rights. These applications are subject to strict regulation. They include AI systems for the health and energy sectors, road traffic and air transport, and decision-making for welfare benefit assessments or credit scoring.

These levels of risk only apply to AI systems. Special requirements apply to general-purpose AI models. More information can be found here.

[Figure: presentation of the four risk levels described in the text]

Four levels of risk
Unacceptable risk

Prohibited

Some AI systems are classified as posing an unacceptable risk as they are incompatible with EU fundamental rights or pose a clear threat to the safety and health of people. These AI systems have therefore been prohibited completely in the EU since 2 February 2025.

They include:

  • social scoring: AI systems that analyse the behaviour of individuals to make social or political evaluations;
  • real-time surveillance: AI-based facial recognition systems in publicly accessible spaces for law enforcement purposes without a justified cause;
  • manipulative systems: AI systems that deliberately manipulate people’s behaviour or influence it to their detriment, for example during elections or in advertising.

More information can be found here.

High risk

Only allowed with conformity assessment

High-risk AI systems can cause serious harm if they malfunction or are misused. They are therefore subject to strict rules to ensure that their deployment does not adversely affect the health and safety or fundamental rights of persons, or the environment. A distinction is made between two types of high-risk AI systems:

  • AI systems in products regulated by European legislation requiring third-party conformity assessment, such as

    • medical devices,
    • toys,
    • radio equipment;
  • AI systems used in sensitive areas, such as

    • AI systems for decision-making on access to services (for example granting credit) or for the assessment of criminal offences.

More information is available here.

Limited risk

Transparency obligations

Certain AI systems are subject to less stringent regulation. These include the majority of commercial AI applications, such as chatbots, recommendation algorithms and AI systems that generate text, images or audio content. Most of these AI systems pose only a low risk. If such AI systems are embedded in high-risk AI systems, the transparency obligations apply in addition to the requirements for high-risk AI systems.

Companies must nevertheless ensure that these systems are transparent and user-friendly because of the risks of manipulation and deepfakes. Users must, for example, be informed that they are interacting with an AI system and not with a human.

More information is available here.

Minimal or no risk

No particular obligations, voluntary codes of conduct possible

AI systems with a minimal risk, such as spam filters, are largely unregulated as they do not pose any significant risks to society or individuals. Companies are nevertheless encouraged to ensure, on a voluntary basis, that these systems comply with certain principles for trustworthy AI, for example fairness and transparency.
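
As a reading aid only, and not an official classification tool, the following Python sketch restates the four risk levels and a few of the example systems mentioned above; the names are this sketch's own shorthand, and any real classification must follow the Act's detailed rules.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk levels described above, with the obligations named in this text."""
    UNACCEPTABLE = "prohibited in the EU since 2 February 2025"
    HIGH = "only allowed with conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no particular obligations; voluntary codes of conduct possible"

# Example systems taken from the text above; this mapping is a summary,
# not a legal determination.
EXAMPLES = {
    "social scoring system": RiskLevel.UNACCEPTABLE,
    "AI in a medical device": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "email spam filter": RiskLevel.MINIMAL,
}

for system, level in EXAMPLES.items():
    print(f"{system}: {level.value}")
```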
