The European Union's Artificial Intelligence Act (the “EU AI Act”) entered into force on 1 August 2024 and is being implemented in phases. The first elements applied from 2 February 2025: the prohibitions on certain AI practices and the requirements around AI literacy.
In this article, we provide a quick overview of what constitutes prohibited AI under the EU AI Act and then set out our practical tips on how to approach compliance.
If you would like further detail on the EU AI Act, see our article here, which sets out the ten key things you need to know.
Under Article 5 of the EU AI Act, certain AI practices are prohibited. These are practices considered harmful or abusive and which contradict Union values, the rule of law and fundamental rights. On 4 February 2025, the European Commission published Guidelines providing further detail on Article 5 and prohibited AI.
In summary, the following AI practices are generally prohibited under the EU AI Act:

- the use of subliminal, manipulative or deceptive techniques that materially distort a person's behaviour and cause, or are reasonably likely to cause, significant harm;
- the exploitation of vulnerabilities arising from a person's age, disability or social or economic situation, causing significant harm;
- social scoring that leads to detrimental or unfavourable treatment of individuals which is unjustified or disproportionate;
- assessing or predicting the risk of a person committing a criminal offence based solely on profiling or an assessment of their personality traits;
- the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases;
- inferring the emotions of individuals in the workplace or in educational institutions;
- biometric categorisation to deduce or infer sensitive characteristics, such as race, political opinions or sexual orientation; and
- the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes.
There are limited exceptions available for certain prohibited practices (for example, the prohibition on inferring emotions does not apply to AI systems placed on the market for medical or safety reasons).
As with many aspects of AI governance and compliance, knowing where to start can be difficult. As such, we have set out below our tips for ensuring compliance with Article 5 of the EU AI Act, including a look at alternative, more scalable approaches to AI inventories, to avoid perfection becoming the thief of compliance.
1. Rethinking your approach to an AI inventory: prioritising prohibited AI use cases
To establish how current uses of AI might be caught and regulated by the EU AI Act, many organisations are creating AI inventories. However, for prolific AI users, building a detailed and complete AI inventory can be complex and could delay compliance with the provisions of the EU AI Act that have already come into effect.
Creating an AI inventory has become commonly recommended practice as part of AI compliance programmes, but is not expressly required by the EU AI Act. What the EU AI Act does require is for businesses to understand and manage their AI systems, especially prohibited and high-risk ones.
Given that these elements of the EU AI Act are already in force, the immediate focus should be on establishing which current and proposed uses of AI are prohibited.
As such, our tip here is to run a risk-based scoping exercise, applying a pragmatic lens. Start by focusing on where AI is being used in a way that might be prohibited by the EU AI Act, then move on to mapping wider AI use cases and broader EU AI Act compliance.
Here are some suggestions on how to approach this task:
a. Look for the quick wins - practices that are clearly prohibited on their face, such as emotion recognition tools used in the workplace or the untargeted scraping of facial images to build facial recognition databases, may be easier to identify than other prohibited AI.
b. Start with systems that interact with customers (particularly where these are consumers) and staff, focusing on areas where AI could directly impact individuals.
c. Leverage existing compliance frameworks, such as GDPR or DORA (including records of processing, DPIAs or ICT risk registers).
d. Prioritise high-impact or high-access IT vendors.
e. Ask targeted questions across the wider business (not just the technology team). HR, security, marketing and customer service teams can help establish whether prohibited AI is being used within your business.
You can then collate your findings into a table that sets out: (a) the system/tool name and vendor details; (b) the potentially prohibited feature; (c) whether further confirmation or checks are needed; and (d) next steps.
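By way of illustration only (the entries below are hypothetical), a first row of this table might read: system/tool - video interviewing platform supplied by Vendor X; potentially prohibited feature - emotion or sentiment analysis of candidates; further checks - confirm with the vendor whether emotion inference is enabled and whether any medical or safety exception could apply; next steps - suspend the feature pending that confirmation.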
2. Screen for prohibited AI early
As noted above, the EU AI Act bans specific uses of AI. As well as establishing current uses of AI, we recommend ensuring that any proposed future use cases which are potentially prohibited under the EU AI Act are caught as early as possible.
To do this, we recommend building “buzzwords” into your use case assessment or development process, which act as triggers for escalation. Any use case involving, for example, biometric analysis or emotion or sentiment analysis should be escalated for review.
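For example, a hypothetical trigger list might include terms such as “biometric”, “facial recognition”, “emotion”, “sentiment”, “social scoring”, “profiling” and “subliminal”; any intake form, design document or vendor proposal containing these terms would then be routed to legal or compliance for review before the use case proceeds.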
3. Get the balance right
Even if your business opts for a leaner approach to AI inventories, a wider AI governance framework is still essential. The goal is not heavy bureaucracy; it is clarity, accountability and traceability.
A successful AI governance programme will seek to balance the assessment of AI risks against the potential benefits that AI can bring to your business. If processes and procedures (for example, risk assessments of particular AI use cases) become too burdensome, bureaucratic or lengthy, this may dissuade the business from implementing AI that could have a positive impact.
For a further look at how to approach AI governance, please see our article here, which sets out our “Top 5 Tips”.
This article was drafted with assistance from Joe Battle, Associate.
Authors – Tom Sharpe, Michelle Sally and Emma Erskine-Fox
This publication is intended for general guidance and represents our understanding of the relevant law and practice as at May 2025. Specific advice should be sought for specific cases. For more information see our terms & conditions.
Date published
16 May 2025