The European Union's Artificial Intelligence Act (the "EU AI Act") was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024.
The EU AI Act provides a comprehensive framework of rules governing the development, deployment and use of AI. Various Codes of Practice will be published (with a deadline of 1 June 2025), which will supplement the "on the face" interpretation of the EU AI Act.
In this article, we provide a high-level, snapshot summary of the 10 key things you need to know about the EU AI Act, with boxes linking to more detailed analysis of these topics.
Date published
07 February 2025
Key Point One: the EU AI Act distinguishes between AI Systems and AI Models
The EU AI Act contains a set of rules which apply to AI Systems, and a separate set of rules which apply to AI Models (more specifically General-Purpose AI Models).
The EU AI Act distinguishes between AI Systems and General-Purpose AI Models, laying down compliance obligations in respect of each of these.
The definition of AI Systems in the EU AI Act aligns with the OECD definition with the EU AI Act recitals stating that this has been done in order "to ensure legal certainty, facilitate international convergence and wide acceptance". Under the EU AI Act, an AI System is:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments"
As flagged in the Recitals to the EU AI Act, a key characteristic of an AI System is its ability to "infer", which the EU AI Act states "transcends basic data processing by enabling learning, reasoning or modelling" [Recital 12].
In order to qualify as an AI System under the EU AI Act, all elements of the definition (noted above) will need to be met. It is therefore possible that software which contains AI may not qualify as an AI System under the EU AI Act, and conversely that AI which might not commonly be thought of as such could be caught. For example, the Recitals indicate that traditional software operating solely on rules defined by natural persons to automatically execute operations falls outside the definition, while a statistical model that infers how to generate outputs from its inputs could be caught.
Intriguingly, the EU AI Act does not define "AI Models", but instead only defines General-Purpose AI Models (such as Large Language Models, or LLMs).
General-Purpose AI Models are defined in the EU AI Act as being:
"an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks"
The EU AI Act states, in the Recitals, that "models with at least a billion of parameters and trained with a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks".
Parameters (noted above) are, in essence, the internal variables that an AI model learns during training to improve its ability to make accurate predictions.
For context against the 1 billion parameter reference, OpenAI's GPT-4 is thought to have around 1.76 trillion parameters and Anthropic's Claude 2 is estimated to have over 130 billion parameters. As such, many of the more powerful AI models on the market will clearly exceed the threshold that the EU AI Act envisages in order to qualify as a General-Purpose AI Model.
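For illustration only, the following minimal Python sketch (assuming PyTorch and a hypothetical toy two-layer network) shows what "counting parameters" amounts to in practice; production models simply repeat this structure at vastly larger scale:

```python
# A minimal sketch of how "parameters" are counted: each weight and bias the
# model learns during training contributes to the total.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 4096),  # weights: 1024*4096, biases: 4096
    nn.ReLU(),
    nn.Linear(4096, 1024),  # weights: 4096*1024, biases: 1024
)

total_params = sum(p.numel() for p in model.parameters())
print(f"{total_params:,} parameters")  # ~8.4 million -- far below 1 billion
```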
General-Purpose AI Models require the addition of other components, such as a user interface, in order to become an AI System. Large Language Models, or LLMs, are a common example of General-Purpose AI Models.
Key Point Two: the EU AI Act adopts a tiered approach based on perceived risk level
AI Systems
For AI Systems, the EU AI Act adopts a tiered approach based on the perceived risk level:
- Prohibited AI Systems (unacceptable risk);
- High-risk AI Systems;
- AI Systems subject to specific transparency obligations (limited risk); and
- all other AI Systems (minimal risk).
The compliance obligations that apply to each of the above correspond to the relevant risk level.
General-Purpose AI Models
The EU AI Act creates a compliance regime which applies to General-Purpose AI Models, with additional rules applying to General-Purpose AI Models which are considered to have "systemic risk".
As discussed earlier, the EU AI Act distinguishes between AI Systems and General-Purpose AI Models, laying down compliance obligations in respect of each of these.
We summarise each of these in turn below.
For AI Systems, the EU AI Act adopts a tiered approach based on the perceived risk level:
Prohibited AI Systems
The EU AI Act prohibits the use of certain AI Systems - these include AI Systems which:
- deploy subliminal, manipulative or deceptive techniques that materially distort behaviour and cause (or are reasonably likely to cause) significant harm;
- exploit vulnerabilities related to age, disability or a specific social or economic situation;
- carry out social scoring leading to detrimental or unfavourable treatment;
- predict the risk of a person committing a criminal offence based solely on profiling or personality traits;
- create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage;
- infer emotions in the workplace or in educational institutions (except for medical or safety reasons);
- carry out biometric categorisation to infer sensitive characteristics such as race, political opinions or sexual orientation; or
- conduct "real-time" remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
Prohibited AI: EU AI Act Timeline
As per the timetable for when the EU AI Act comes into effect (please see a summary of this timetable in Key Point Five), the obligations around prohibited AI came into effect on 2 February 2025. On 4 February 2025, the European Commission published Guidelines providing an overview of AI practices that are deemed unacceptable due to their potential risks to European values and fundamental rights.
High-Risk AI Systems
The majority of the compliance obligations in the EU AI Act fall on providers of high-risk AI Systems. There are two categories of high-risk AI Systems under the EU AI Act:
1. AI Systems intended to be used as a safety component of a product, or which are themselves a product, covered by the EU laws listed in Annex I of the EU AI Act and which are required to undergo a third-party conformity assessment under those Annex I laws.
2. AI Systems which fall under a use case in Annex III of the EU AI Act. Annex III covers use cases such as:
- biometrics (including remote biometric identification and emotion recognition);
- critical infrastructure;
- education and vocational training;
- employment, workers' management and access to self-employment;
- access to essential private and public services (including credit scoring and life and health insurance pricing);
- law enforcement;
- migration, asylum and border control management; and
- administration of justice and democratic processes.
Article 6(3) of the EU AI Act contains an exemption to the above for use cases that would otherwise fall within the scope of Annex III (and therefore be deemed high-risk). Under Article 6(3), a provider can conduct a risk assessment in order to demonstrate that the AI System does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. This exemption applies where the AI System is intended to:
- perform a narrow procedural task;
- improve the result of a previously completed human activity;
- detect decision-making patterns, or deviations from prior decision-making patterns, without replacing or influencing a previously completed human assessment without proper human review; or
- perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
This risk assessment must have been conducted and documented before the AI system is placed on the market or put into service. The provider is also required to register the AI system in an EU database maintained by the Commission.
AI Systems that profile individuals will always be considered high-risk and will not be able to rely on the exemption in Article 6(3).
High-Risk AI: EU AI Act Timeline
As per the timetable for when the EU AI Act comes into effect (please see a summary of this timetable in Key Point Five), the obligations:
- around the high-risk AI Systems listed in Annex III come into effect on 2 August 2026; and
- around high-risk AI Systems covered by the product safety legislation listed in Annex I come into effect on 2 August 2027.
The EU AI Act effectively creates two categories of General-Purpose AI Models: (1) those with systemic risk; and (2) those without systemic risk.
In contrast to the rules around AI Systems, the regime in the EU AI Act governing General-Purpose AI Models is use case agnostic.
General-Purpose AI Models without systemic risk
For a look at the definition of General-Purpose AI Models under the EU AI Act, please see Key Point One.
General-Purpose AI Models with systemic risk
The EU AI Act seeks to place additional obligations on those General-Purpose AI Models that are considered to have systemic risk. A General-Purpose AI Model is considered to have "systemic risk" where one of two conditions is met:
- it has "high impact capabilities", evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; or
- the European Commission decides (either on its own initiative or following a qualified alert from the scientific panel) that the model has capabilities or an impact equivalent to high impact capabilities, having regard to the criteria set out in Annex XIII.
A General-Purpose AI Model is presumed to have “high impact capabilities” if the cumulative amount of computation used for its training is more than 10^25 floating point operations.
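By way of illustration only (the EU AI Act does not prescribe any particular estimation method), training compute is commonly approximated with a heuristic of roughly 6 floating point operations per parameter per training token. The Python sketch below, using hypothetical model sizes, shows how such an estimate would be compared against the 10^25 FLOP presumption threshold:

```python
# A rough sketch of the commonly used training-compute heuristic
# (FLOPs ~= 6 * parameters * training tokens). The model sizes and token
# counts below are hypothetical figures for illustration only.
THRESHOLD_FLOPS = 1e25  # EU AI Act presumption of "high impact capabilities"

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute in floating point operations."""
    return 6 * n_parameters * n_training_tokens

for params, tokens in [(7e9, 2e12), (70e9, 15e12), (1.8e12, 13e12)]:
    flops = estimated_training_flops(params, tokens)
    status = "presumed systemic risk" if flops > THRESHOLD_FLOPS else "below threshold"
    print(f"{params:.0e} params, {tokens:.0e} tokens -> {flops:.1e} FLOPs ({status})")
```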
General-Purpose AI: EU AI Act Timeline
As per the timetable for when the EU AI Act comes into effect (please see a summary of this timetable in Key Point Five), the obligations around General-Purpose AI Models will come into effect on 2 August 2025.
Key Point Three: The EU AI Act has broad territorial scope
The EU AI Act has broad territorial scope and will apply to operators outside the EU. For example, the EU AI Act will apply if:
- a provider located outside the EU places an AI System on the market, or puts it into service, in the EU; or
- the output produced by an AI System is used in the EU, even where the provider or deployer is located outside the EU.
The EU AI Act distinguishes between AI Systems and General-Purpose AI Models, laying down compliance obligations in respect of each of these. As such, the territorial scope of the EU AI Act needs to be considered in respect of each of these categories of AI. We consider these in turn below.
In this context, it is also worth noting a few key definitions in the EU AI Act:
- "placing on the market" means the first making available of an AI System or a General-Purpose AI Model on the EU market;
- "making available on the market" means the supply of an AI System or a General-Purpose AI Model for distribution or use on the EU market in the course of a commercial activity, whether in return for payment or free of charge; and
- "putting into service" means the supply of an AI System for first use directly to the deployer, or for own use, in the EU for its intended purpose.
AI Systems
The EU AI Act is engaged where an AI System is:
- placed on the market in the EU;
- put into service in the EU; or
- used by a deployer that has its place of establishment, or is located, within the EU.
The EU AI Act is also engaged where the output produced by an AI System is used in the EU, even if that output is generated outside the EU.
General-Purpose AI Models
The EU AI Act is engaged where a provider of a General-Purpose AI Model places it on the market in the EU or puts it into service in the EU – irrespective of where the provider is located or established.
Authorised Representative
The EU AI Act requires providers of high-risk AI Systems, and providers of General-Purpose AI Models, that are established outside the EU to appoint an authorised representative.
An authorised representative is defined as a natural or legal person located or established in the EU who has received and accepted a written mandate from a provider of an AI System or a General-Purpose AI Model to, respectively, perform and carry out on its behalf the obligations and procedures established by the EU AI Act.
The role of an authorised representative includes ensuring that the documentation required by the EU AI Act is available to the competent authorities and co-operating with those authorities.
Key Point Four: the EU AI Act will not apply in certain, limited, instances
The EU AI Act does not apply in certain limited instances.
As outlined above, the EU AI Act has a broad personal and territorial scope. However, there are a few areas that are explicitly carved out and stakeholders in these areas are not required to comply with the rules of the EU AI Act.
Military, defence or national security purposes
The EU AI Act does not apply to AI Systems where and in so far as they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
Research and development
The EU AI Act does not affect AI Systems or AI Models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
Additionally, the EU AI Act does not apply to any research, testing or development activity regarding AI Systems or Models prior to their being placed on the market or put into service. However, testing in real-world conditions is caught by the EU AI Act.
Public authorities in third countries and international organisations
The EU AI Act does not apply to public authorities in a third country, nor to international organisations, that would otherwise fall within the scope of the EU AI Act.
This exemption only applies where those authorities or organisations use AI Systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the EU or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.
Open source
The EU AI Act does not apply to AI Systems released under free and open-source licences. For a more detailed discussion around open source AI and the EU AI Act, please see Key Point 9.
Key Point Five: the EU AI Act comes into effect on a staggered basis
The EU AI Act comes into effect on a staggered basis, with the various requirements applying gradually over time.
The following is a high-level summary of when the various elements of the EU AI Act come into effect.
Implementation Timetable
Key dates to be aware of are:
- 2 February 2025: the Prohibited AI regime and the rules around AI literacy come into effect.
- 2 August 2025: the rules governing General-Purpose AI Models come into effect.
- 2 August 2026: the rules governing the high-risk AI Systems listed in Annex III of the EU AI Act, together with the transparency requirements, come into effect.
- 2 August 2027: the rules governing AI that is integrated into products subject to the product safety legislation listed in Annex I (and therefore categorised as high-risk) come into effect.
Transitional Arrangements under the EU AI Act
The EU AI Act also contains some transitional arrangements. These are as follows:
- AI Systems that are components of the large-scale IT systems listed in Annex X and that were placed on the market or put into service before 2 August 2027 must be brought into compliance by 31 December 2030;
- high-risk AI Systems placed on the market or put into service before 2 August 2026 are only caught by the EU AI Act if they are subsequently subject to significant changes in their design (although such systems intended to be used by public authorities must be brought into compliance by 2 August 2030); and
- providers of General-Purpose AI Models placed on the market before 2 August 2025 must comply by 2 August 2027.
Key Point Six: various rules apply across the AI Value Chain
The EU AI Act creates a "value chain", which maps compliance obligations across different "operators". Determining your role within the AI value chain is critical to determining your obligations under the EU AI Act.
The EU AI Act largely focuses on providers and deployers.
The EU AI Act creates a "value chain", which maps compliance obligations across different "operators". These "operators" are:
- providers;
- deployers;
- authorised representatives;
- importers;
- distributors; and
- product manufacturers.
The following is a high-level summary of the various operators within the value chain under the EU AI Act. Determining your role within the AI value chain will be critical to determining your obligations under the EU AI Act.
The EU AI Act mainly focuses on providers and deployers, and so the following will in turn also focus on those operator roles.
Provider
A provider is a person or body that develops an AI System or a General-Purpose AI Model (or has one developed) and places it on the market, or puts the AI System into service, under its own name or trademark, whether for payment or free of charge. Providers are subject to the majority of the obligations under the EU AI Act, and so determining whether an entity qualifies as a provider will be a crucial step in determining the applicable compliance obligations under the EU AI Act.
Deployer
A deployer is a person or body using an AI System under its authority, except where the AI System is used in the course of a personal, non-professional activity.
Downstream Provider
A downstream provider is a provider of an AI System which integrates an AI model (including a General-Purpose AI Model), whether that model is its own or provided by another entity. Downstream providers are granted certain rights under the EU AI Act, including:
- the right to receive the information and documentation necessary to understand the capabilities and limitations of a General-Purpose AI Model (please see Key Point Seven); and
- the right to lodge a complaint with the AI Office alleging an infringement of the EU AI Act by a provider of a General-Purpose AI Model.
Transfer of Provider Role (High-Risk AI Systems)
In certain scenarios, the EU AI Act provides that "operators" (other than the "original" provider) can be considered a provider of a high-risk AI System.
Any distributor, importer, deployer or other third party will be considered to be a provider of a high-risk AI System in any of the following circumstances:
- they put their name or trademark on a high-risk AI System already placed on the market or put into service (without prejudice to contractual arrangements stipulating otherwise);
- they make a substantial modification to a high-risk AI System that has already been placed on the market or put into service, in such a way that it remains high-risk; or
- they modify the intended purpose of an AI System (including a General-Purpose AI System) which has not been classified as high-risk and has already been placed on the market or put into service, in such a way that it becomes high-risk.
Fine-tuning a General-Purpose AI Model
The EU AI Act indicates that where a party modifies or fine-tunes a third-party General-Purpose AI Model and integrates that modified or fine-tuned model into their own AI System (or otherwise places a fine-tuned General-Purpose AI Model on the market or puts it into service), then they will be considered the provider with respect to that modified or fine-tuned model only.
Importer
An importer is a person or body located or established in the EU that places on the market an AI System bearing the name or trademark of a person established outside the EU.
Distributor
A distributor is a person or body in the supply chain, other than the provider or the importer, that makes an AI System available on the EU market.
Key Point Seven: whilst the majority of the EU AI Act is focused on high-risk AI Systems, certain transparency obligations apply on a broader basis
Providers, and in some cases deployers, will have certain obligations regarding transparency regardless of whether the relevant AI System is classified as being high-risk.
The following provides a breakdown of the transparency requirements under the EU AI Act for AI Systems and AI Models.
The EU AI Act distinguishes between AI Systems and General-Purpose AI Models, laying down certain transparency requirements in respect of AI Systems.
We provide a high-level summary of the transparency requirements that apply to providers and deployers of AI Systems below.
Provider Obligations

AI Systems intended to interact directly with natural persons
Requirement: Where an AI System is intended to interact directly with natural persons, providers need to ensure that the AI System is designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI System.
Exemptions: This requirement does not apply where either:
- this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use; or
- the AI System is authorised by law to detect, prevent, investigate or prosecute criminal offences (subject to appropriate safeguards), unless the system is available for the public to report a criminal offence.

Marking of AI-generated content
Requirement: Providers of AI Systems that generate synthetic content (such as audio, images, videos or text) must ensure that the output is marked in a machine-readable format and is detectable as artificially generated or manipulated. Any such markings should be effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. (A purely illustrative sketch of a machine-readable marker follows this table.)
Exemptions: This obligation does not apply where:
- the AI System performs an assistive function for standard editing, or does not substantially alter the input data provided by the deployer or the semantics thereof; or
- the AI System is authorised by law to detect, prevent, investigate or prosecute criminal offences.
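By way of a purely illustrative sketch (this is not a technique endorsed by the EU AI Act, and real-world compliance will likely rely on robust watermarking or provenance standards such as C2PA rather than plain metadata, which is easily stripped), the following Python example uses the Pillow library to attach a simple machine-readable "AI-generated" flag to a PNG image; the key and value names are hypothetical:

```python
# Illustrative only: embeds a machine-readable marker into PNG text metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str) -> None:
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical key name
    metadata.add_text("generator", "example-model-v1")  # hypothetical value
    image.save(path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"

save_with_ai_marker(Image.new("RGB", (64, 64)), "output.png")
print(is_marked_ai_generated("output.png"))  # True
```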
Deployer Obligations

Emotion recognition AI Systems and biometric categorisation AI Systems
Requirement: The relevant deployer must inform the natural persons exposed to the system about its use. Any personal data must be processed in accordance with the GDPR.
Exemptions: This obligation does not apply to AI Systems used for biometric categorisation or emotion recognition which are permitted by law to detect, prevent or investigate criminal offences (subject to appropriate safeguards).

Deepfakes
Requirement: Deployers of an AI System that generates or manipulates image, audio or video content constituting a deepfake must disclose that the content has been artificially generated or manipulated. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Deployers of an AI System that generates or manipulates text which is published with the purpose of informing the public on matters of public interest must likewise disclose that the text has been artificially generated or manipulated.
Exemptions: The obligation to disclose that the content has been artificially generated or manipulated does not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences. The requirement relating to text published with the purpose of informing the public on matters of public interest does not apply where:
- the use is authorised by law to detect, prevent, investigate or prosecute criminal offences; or
- the AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication of the content.
Timing for provision of transparency information
The transparency information, summarised above, needs to be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of their first interaction with, or exposure to, the relevant AI System, and must conform with applicable accessibility requirements.
High-risk AI Systems
The transparency requirements summarised above are explicitly stated in the EU AI Act to operate alongside the requirements for high-risk AI Systems (as set out in the EU AI Act) and other applicable transparency requirements under EU or national law.
Codes of Practice
The AI Office is responsible for drawing up codes of practice to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content.
As discussed above, the EU AI Act distinguishes between AI Systems and General-Purpose AI Models, laying down certain transparency requirements in respect of General-Purpose AI Models.
We provide a high-level summary of these transparency requirements below.
Transparency requirements for General-Purpose AI Models
Under the EU AI Act, providers of General-Purpose AI Models are required to:
(a) draw up and keep up-to-date certain technical documentation which contains a description of the model development process, including around its training and testing (with such technical documentation containing, at a minimum, the information set out in Annex XI of the EU AI Act) for the purpose of providing it, upon request, to the AI Office and the national competent authorities;
(b) prepare, maintain and provide certain information to downstream AI System providers (i.e. those who wish to integrate the General-Purpose AI Model into their AI System) so that such downstream providers have a good understanding of the capabilities and limitations of the General-Purpose AI Model (this information should contain, at a minimum, the elements set out in Annex XII of the EU AI Act);
(c) establish a policy to comply with EU law on copyright and related rights, in particular the right to opt-out of text and data mining under Article 4(3) of Directive (EU) 2019/790; and
(d) prepare and make publicly available a sufficiently detailed summary about the content used for training of the General-Purpose AI Model (the AI Office is tasked under the EU AI Act with providing a template for this purpose).
The EU AI Act recognises that the information required to be provided under (b), above, can be balanced against the need to protect confidential information and trade secrets of the provider.
The obligations under (a) and (b) above do not apply to General-Purpose AI Models made available on an open source basis. Please see the wider article for a discussion about open source and the EU AI Act.
Codes of Practice
The EU AI Act states that providers of General-Purpose AI Models may rely on Codes of Practice published by the AI Office in order to demonstrate compliance with the transparency obligations outlined above, until a harmonised standard is published.
The European Commission has published initial drafts of its 'General-Purpose AI Code of Practice' which it plans to finalise by April 2025.
Key Point Eight: the EU AI Act requires a level of AI Literacy for all deployers and providers of AI Systems
The EU AI Act contains a general requirement for all deployers and providers of AI systems to ensure that their personnel have a sufficient level of AI literacy.
What the appropriate level of AI literacy is will depend on the education, expertise and technical knowledge of staff, as well as the context in which the relevant AI systems are to be used.
As outlined earlier, the EU AI Act requires all deployers and providers of AI systems to ensure that their personnel have a sufficient level of AI literacy.
What is AI Literacy under the EU AI Act?
The EU AI Act defines AI literacy as being skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed use of AI Systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
What does the EU AI Act Require?
Under the EU AI Act, providers and deployers of AI Systems are required to take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.
The level of AI literacy required should take into account the:
- technical knowledge, experience, education and training of the relevant staff;
- context in which the relevant AI Systems are to be used; and
- persons, or groups of persons, on whom the AI Systems are to be used.
It is worth noting that:
When do the AI literacy requirements under the EU AI Act apply from?
The AI literacy requirements under the EU AI Act came into effect on 2 February 2025. For further information around the EU AI Act implementation timeline, please see Key Point 5.
How we can help: a tailored approach
The EU AI Act makes clear that providers and deployers will need to tailor their AI literacy programme depending on a range of factors, such as the nature of the AI System at hand and the relevant knowledge level within its organisation.
Ensuring AI literacy in your business is not just a requirement of the EU AI Act; it is prudent governance. Nor is it a "set and forget" exercise: it is ongoing, and should evolve as AI technology develops. Your AI literacy programme should stay up-to-date with the latest AI trends, technologies, and best practices.
We are helping many of our clients to design and implement bespoke AI literacy training and governance programmes across their business.
In our experience any AI literacy programme should have three core pillars:
Key Point Nine: in certain circumstances AI released on an open source basis will not be caught by the EU AI Act
The EU AI Act states that the regulation does not apply to AI systems released under free and open-source licences. However, there are a number of crucial elements to be considered under the EU AI Act in determining whether AI is made available on an open source basis.
This following is a high-level summary of how the EU AI Act approaches open source AI. As discussed in our overview article, the EU AI Act distinguishes between AI Systems and General-Purpose AI Models, laying down compliance obligations in respect of each of these.
The approach regarding open source AI Systems and General-Purpose AI Models is considered in turn below.
AI Systems
The EU AI Act states that the regulation does not apply to AI Systems released under free and open-source licences, unless they are placed on the market or put into service as:
- prohibited AI Systems;
- high-risk AI Systems; or
- AI Systems subject to the transparency obligations discussed in Key Point Seven.
As such, the open source exceptions for AI Systems under the EU AI Act are fairly limited.
General-Purpose AI Models
General-Purpose AI Models are exempt from the requirements of the EU AI Act where these are released under free and open-source licences that allow for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available.
Even where this exemption does apply, the provider will still need to:
- put in place a policy to comply with EU law on copyright and related rights; and
- prepare and make publicly available a sufficiently detailed summary of the content used to train the General-Purpose AI Model.
It is also worth noting that, per the Recitals, AI made available against a price or otherwise monetised (for example, through paid technical support, or through the use of personal data other than exclusively for improving security, compatibility or interoperability) is not intended to benefit from the open source exceptions.
General-Purpose AI Models with Systemic Risk
The open source exception in the EU AI Act regarding General-Purpose AI Models does not apply to General-Purpose AI Models with systemic risk.
Key Point Ten: fundamental rights impact assessments (FRIA) will be required in certain circumstances
Under the EU AI Act, certain deployers of high-risk AI systems must carry out a FRIA before putting the high-risk AI system into use.
The following is a high-level summary of when a fundamental rights impact assessment ("FRIA") might be required under the EU AI Act.
When is a FRIA required?
The EU AI Act requires a FRIA to be completed by deployers of high-risk AI Systems in the following scenarios:
- where the deployer is a body governed by public law, or is a private entity providing public services; or
- where the deployer deploys high-risk AI Systems used to evaluate creditworthiness or establish credit scores, or used for risk assessment and pricing in relation to life or health insurance (points 5(b) and (c) of Annex III).
A FRIA will not be required where the relevant high-risk AI System is intended to be used as a safety component in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
When does the FRIA need to be conducted?
The FRIA must be performed before the first use of the high-risk AI System and must be updated when the deployer considers that any of the relevant factors have changed or are not up-to-date anymore.
In similar cases, the deployer can rely on previously conducted FRIAs or existing impact assessments carried out by the provider.
Notification requirement
Except in limited circumstances, once the FRIA has been completed, the deployer is required to notify the market surveillance authority of its results by submitting the completed template to be developed by the AI Office.
What is required by a FRIA?
For a FRIA, deployers are required to perform an assessment consisting of the elements below (a simple illustrative way of capturing these is sketched after the list):
a. a description of the deployer’s processes in which the high-risk AI System will be used in line with its intended purpose;
b. a description of the period of time within which, and the frequency with which, each high-risk AI System is intended to be used;
c. the categories of natural persons and groups likely to be affected by its use in the specific context;
d. the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c);
e. a description of the implementation of human oversight measures, according to the instructions for use;
f. the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.
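By way of a purely illustrative sketch (the AI Office template, once published, will be the authoritative format), the Article 27 elements above could be captured internally in a simple structured record such as the following Python dataclass; the field names and example values are our own shorthand, not terms taken from the EU AI Act:

```python
# Illustrative internal record of the FRIA elements (a)-(f) listed above.
from dataclasses import dataclass

@dataclass
class FRIARecord:
    deployer_processes: str          # (a) processes in which the system is used
    usage_period_and_frequency: str  # (b) intended period and frequency of use
    affected_groups: list[str]       # (c) categories of persons likely affected
    risks_of_harm: list[str]         # (d) specific risks to those groups
    human_oversight_measures: str    # (e) oversight per the instructions for use
    mitigation_and_governance: str   # (f) measures if risks materialise
    last_reviewed: str = ""          # FRIAs must be updated when factors change

fria = FRIARecord(
    deployer_processes="Credit scoring within retail loan origination",
    usage_period_and_frequency="Continuous; every loan application",
    affected_groups=["loan applicants"],
    risks_of_harm=["discriminatory outcomes", "incorrect refusals"],
    human_oversight_measures="Adverse decisions reviewed by a credit officer",
    mitigation_and_governance="Internal escalation and complaints procedure",
    last_reviewed="2025-01-15",
)
```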
As noted above, the EU AI Act requires the AI Office to provide a template FRIA which deployers can use to complete such an assessment.
Overlap with Data Protection Impact Assessments (DPIA)
The EU AI Act provides that where any of the FRIA obligations are already met through a data protection impact assessment (DPIA) carried out under the GDPR, the FRIA should complement that DPIA.
If you’d like to learn more about how we can support you on your AI journey, visit our AI In Focus page, or get in touch with one of our experts below.
This publication is intended for general guidance and represents our understanding of the relevant law and practice as at January 2025. Specific advice should be sought for specific cases. For more information see our terms and conditions.