Objectives, target group and application timeline

The objectives of the AI Act are to promote the development and use of trustworthy AI in the EU, enable innovation and minimise risks, while safeguarding health, safety and fundamental rights. The AI Act is addressed to companies, public authorities and organisations that deploy or develop AI.

Objectives

The AI Act has several key objectives:

  • Safety and protection: the AI Act aims to protect people from the possible risks of AI applications, in particular where health, safety, fundamental rights, democracy, the rule of law or environmental protection could be affected.
  • Boosting innovation: the AI Act creates a favourable environment for the development and use of AI in Europe.
  • Uniform rules: the AI Act lays down a uniform legal framework for the whole of the EU internal market, enabling companies across Europe to operate under the same conditions.
  • Trust: the AI Act provides for clear rules, transparency and responsibility aimed at strengthening trust in AI technologies.

The AI Act creates clear rules for operators along the AI value chain. It defines binding requirements for certain AI applications and increases transparency and safety in the use of AI technology. At the same time, the AI Act ensures that the administrative and financial burden on companies, especially SMEs, is kept to a minimum.

Target group

The rules apply to all companies, public authorities, organisations and other operators that deploy AI systems or place them on the market. The decisive factor is not where a company is located but whether an AI system is placed on the market or put into service in the EU and whether the AI system’s outputs have an impact on people in the EU (Article 2 AI Act).

The AI Act applies to these operators

More information on operators within the meaning of the AI Act can be found here.

These operators are subject to requirements including:

  • the risk-based categorisation of AI systems,
  • documentation obligations and proof of conformity,
  • transparency obligations to show users that they are interacting with an AI system,
  • safety and risk management requirements, in particular for high-risk AI applications,
  • oversight by the national supervisory authorities.

The AI Act does not apply to deployers using AI systems for private purposes only. In addition, the rules do not apply to AI systems developed for the sole purpose of research and development or designed and used for military or defence purposes.

Application timeline

2 February 2025

AI literacy: providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.
Prohibited practices: eight AI practices defined in the AI Act are prohibited from this date.

2 August 2025

  • Obligations for providers of general-purpose AI models become applicable.
  • The governance rules, including the designation of national competent authorities, become applicable.

2 August 2026

  • Rules for high-risk AI systems under Annex III to the AI Act become applicable.
  • Transparency requirements apply to providers and deployers of certain AI systems.
  • At least one operational AI regulatory sandbox must be established at national level.
  • Member States must have implemented rules on penalties.
  • All other requirements of the AI Act not explicitly listed above become applicable.

2 August 2027

  • Rules for high-risk AI systems under Annex I become applicable.
  • Rules for general-purpose AI models placed on the market before 2 August 2025 become applicable.

Transitional arrangements

  • AI systems that are components of the large-scale EU IT systems established by EU legislation in the area of freedom, security and justice (such as the Schengen Information System) and that were placed on the market or put into service before 2 August 2027 must comply with the requirements of the AI Act by 31 December 2030.
  • Deployers of high-risk AI systems that were placed on the market or put into service before 2 August 2026 must comply with the AI Act only if those systems undergo significant changes in their design.
  • Providers and deployers of high-risk AI systems that are intended to be used by public authorities must ensure compliance with all the requirements and obligations laid down in the AI Act by 2 August 2030.