Objectives, target group and application timeline
The objectives of the AI Act are to promote the development and use of trustworthy AI in the EU, enable innovation and minimise risks, while safeguarding health, safety and fundamental rights. The AI Act is addressed to companies, public authorities and organisations that deploy or develop AI.
Objectives
The AI Act has several key objectives:
- Safety and protection: the AI Act aims to protect people from the possible risks of AI applications, in particular where health, safety or fundamental rights could be affected, as well as democracy, the rule of law and the environment.
- Boosting innovation: the AI Act creates a favourable environment for the development and use of AI in Europe.
- Uniform rules: the AI Act lays down a uniform legal framework for the whole of the EU internal market, enabling companies across Europe to operate under the same conditions.
- Trust: the AI Act provides for clear rules, transparency and responsibility aimed at strengthening trust in AI technologies.
The AI Act creates clear rules for operators along the AI value chain. It defines binding requirements for certain AI applications and increases transparency and safety in the use of AI technology. At the same time, the AI Act ensures that the administrative and financial burden on companies, especially SMEs, is kept to a minimum.
Target group
The rules apply to all companies, public authorities, organisations and other operators that deploy AI systems or place them on the market. The decisive factor is not where a company is located, but whether an AI system is placed on the market or put into service in the EU and whether its outputs have an impact on people in the EU (Article 2 AI Act).
The AI Act applies to operators along the entire AI value chain, such as providers and deployers of AI systems.
These operators are subject to requirements including:
- the risk-based categorisation of AI systems,
- documentation obligations and proof of conformity,
- transparency obligations to inform users that they are interacting with an AI system,
- safety and risk management requirements, in particular for high-risk AI applications,
- market surveillance by the national supervisory authorities.
Application timeline
2 February 2025
- AI literacy: providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.
- Prohibited practices: the AI Act defines eight AI practices that may no longer be used.
2 August 2025
- The rules for general-purpose AI models become applicable.
- Each Member State must designate a national authority responsible for enforcing the AI Act. This authority monitors compliance with the rules and can impose penalties for infringements.
- Each Member State must designate at least one notifying authority responsible for the assessment, designation and notification of conformity assessment bodies and for their monitoring.
- A single point of contact for the public and for other market surveillance authorities must be set up.
2 August 2026
- Rules for high-risk AI systems under Annex III to the AI Act become applicable.
- Each Member State must have at least one operational AI regulatory sandbox in place at national level.
- Member States must have implemented rules on penalties.
- Transparency requirements apply to providers and deployers of certain AI systems.
- All other requirements of the AI Act not explicitly listed here become applicable.
2 August 2027
- Rules for high-risk AI systems under Annex I become applicable.
- Rules for general-purpose AI models placed on the market before 2 August 2025 become applicable.
Transitional arrangements
- AI systems that are components of the large-scale EU IT systems established by EU legislation in the area of freedom, security and justice, such as the Schengen Information System, and that have been placed on the market or put into service before 2 August 2027 must comply with the requirements of the AI Act by 31 December 2030.
- Operators of high-risk AI systems that have been placed on the market or put into service before 2 August 2026 need only comply with the AI Act if those systems undergo significant changes in their design.
- Providers and deployers of high-risk AI systems that are intended to be used by public authorities must ensure compliance with all the requirements and obligations laid down in the AI Act by 2 August 2030.
Service
FAQ
FAQs: all you need to know about AI
Artificial Intelligence: Questions and Answers (European Commission)
Contact details
Use our online form to contact us if you haven't found the answer to your question (in German)
Events
AI-Café (in German)
Links and Downloads
Bundesnetzagentur's AI compliance compass (in German)
Hinweispapier: KI-Kompetenzen nach Artikel 4 KI-Verordnung (guidance paper on AI literacy under Article 4 of the AI Act; pdf / 357 KB) (in German)
EU guidelines on the definition of an artificial intelligence system
General-Purpose AI Code of Practice
Digital transformation among SMEs (in German)