Definitions

This page provides details of selected terms relating to the AI Act.

AI system

Article 3 of the AI Act provides the following definition:

| Article | Term | Definition |
| --- | --- | --- |
| 3(1) | AI system | a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments |

The European Commission published its Guidelines on the definition of an artificial intelligence system on 6 February 2025. The Guidelines are not legally binding and will be updated as and when necessary.

According to the Guidelines, the definition comprises the following seven main elements:

  1. machine-based
  2. varying levels of autonomy
  3. adaptiveness
  4. explicit or implicit objectives
  5. inference
  6. outputs
  7. interaction.

Operators within the meaning of the AI Act

Article 3 of the AI Act includes various key definitions identifying the roles of individual operators along the AI value chain:

| Article | Term | Definition |
| --- | --- | --- |
| 3(3) | provider | a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge |
| 3(4) | deployer | a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity |
| 3(5) | authorised representative | a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation |
| 3(6) | importer | a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country |
| 3(7) | distributor | a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market |
| 3(8) | operator | a provider, product manufacturer, deployer, authorised representative, importer or distributor |
| 3(68) | downstream provider | a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations |

Terms relating to authorities and institutions

The AI Act assigns certain responsibilities to various authorities and institutions. An overview is provided below:

| Article | Term | Definition |
| --- | --- | --- |
| 3(19) | notifying authority | the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring |
| 3(21) | conformity assessment body | a body that performs third-party conformity assessment activities, including testing, certification and inspection |
| 3(22) | notified body | a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation |
| 3(26) | market surveillance authority | the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020 |
| 3(45) | law enforcement authority | (a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or (b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security |
| 3(47) | AI Office | the Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024; references in this Regulation to the AI Office shall be construed as references to the Commission |
| 3(48) | national competent authority | a notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor |
| 3(55) | AI regulatory sandbox | a controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision |

The following terms are relevant in addition to those defined in Article 3:

| Reference | Term | Definition |
| --- | --- | --- |
| Recital 20, Article 65, Article 66 | Artificial Intelligence Board (“Board”) | The European Artificial Intelligence Board should support the Commission to promote AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. The Board is composed of one representative per Member State. The European Data Protection Supervisor participates as observer. The AI Office also attends the Board’s meetings, without taking part in the votes. Other national and Union authorities, bodies or experts may be invited to the meetings by the Board on a case-by-case basis, where the issues discussed are of relevance for them. |
| Article 67(1) and (2) | advisory forum | An advisory forum must be established to provide technical expertise and advise the Board and the Commission, and to contribute to their tasks under this Regulation. The membership of the advisory forum must represent a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia. The membership of the advisory forum must be balanced with regard to commercial and non-commercial interests and, within the category of commercial interests, with regard to SMEs and other undertakings. |
| Article 68 | scientific panel of independent experts | The scientific panel consists of experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of AI. The scientific panel advises and supports the AI Office. |
| Article 70(2) | single point of contact | Member States must designate a market surveillance authority to act as a single point of contact for the Regulation and must notify the Commission of the identity of the single point of contact. The Commission must make a list of the single points of contact publicly available. |

Terms relating to AI systems and their deployment

The AI Act also includes various terms relating to AI systems and their deployment. Some of these terms are key to defining and classifying AI systems. Other terms are explained in the Guidelines on prohibited artificial intelligence practices established by the AI Act, which the European Commission published on 4 February 2025.

| Article | Term | Definition |
| --- | --- | --- |
| 3(2) | risk | the combination of the probability of an occurrence of harm and the severity of that harm |
| 3(9) | placing on the market | the first making available of an AI system or a general-purpose AI model on the Union market |
| 3(10) | making available on the market | the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge |
| 3(11) | putting into service | the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose |
| 3(12) | intended purpose | the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation |
| 3(13) | reasonably foreseeable misuse | the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems |
| 3(14) | safety component | a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property |

Recital 55 also describes safety components in connection with the management and operation of critical infrastructure as systems protecting “the physical integrity of critical infrastructure or the health and safety of persons and property”. By contrast, components intended to be used solely for cybersecurity purposes should not qualify as safety components.

| Article | Term | Definition |
| --- | --- | --- |
| 3(15) | instructions for use | the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use |
| 3(18) | performance of an AI system | the ability of an AI system to achieve its intended purpose |
| 3(23) | substantial modification | a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed |
| 3(29) | training data | data used for training an AI system through fitting its learnable parameters |
| 3(30) | validation data | data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting |
| 3(31) | validation data set | a separate data set or part of the training data set, either as a fixed or variable split |
| 3(32) | testing data | data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service |
| 3(33) | input data | data provided to or directly acquired by an AI system on the basis of which the system produces an output |
| 3(60) | deep fake | AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful |

Terms relating to notification

Notification under the AI Act means the official communication to the European Commission and the other EU Member States that a conformity assessment body has been verified and designated. Third-party conformity assessment under the AI Act may only be carried out by these notified bodies. Where an AI system is classified as high-risk under Article 6 of the AI Act, a notified body may need to be involved, depending on the conformity assessment procedure applicable under Article 43, to assess whether the high-risk AI system conforms to the requirements of the AI Act.

The following definitions from the AI Act are relevant in this context:

| Article | Term | Definition |
| --- | --- | --- |
| 3(19) | notifying authority | the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring |
| 3(20) | conformity assessment | the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled |
| 3(21) | conformity assessment body | a body that performs third-party conformity assessment activities, including testing, certification and inspection |
| 3(22) | notified body | a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation |
| 3(24) | CE marking | a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing |

Terms relating to biometrics

The deployment of biometric AI systems is considered to be particularly sensitive and is therefore subject to strict rules to protect fundamental rights and privacy.

The following definitions from the AI Act are relevant in this context:

| Article | Term | Definition |
| --- | --- | --- |
| 3(34) | biometric data | personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data |
| 3(35) | biometric identification | the automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database |
| 3(36) | biometric verification | the automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data |
| 3(37) | special categories of personal data | the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725 |
| 3(38) | sensitive operational data | operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings |
| 3(39) | emotion recognition system | an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data |
| 3(40) | biometric categorisation system | an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons |
| 3(41) | remote biometric identification system | an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database |
| 3(42) | real-time remote biometric identification system | a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention |
| 3(43) | post-remote biometric identification system | a remote biometric identification system other than a real-time remote biometric identification system |