The advent of ChatGPT in late 2022 propelled AI onto the front pages of newspapers and brought with it an explosion of interest in generative AI's disruptive potential.

As we moved into 2023 and 2024, many "copilot"-style generative AI products arrived on the market, and companies looked beyond the initial hype to find use cases for the wave of generative AI tools. In this sense, whilst 2023 was the year of the hype, 2024 certainly seemed to be the year of the AI use case.

The AI hype of 2025 seems to be centred around one key topic: AI agents (or agentic AI). Deloitte predicts that in 2025, 25% of companies that use generative AI will launch agentic AI pilots or proofs of concept, growing to 50% in 2027.

In this article, we take a look at what AI agents are and note some of the key legal challenges that agentic AI presents.

What are AI Agents?

In broad terms, AI agents are systems which are capable of autonomously performing tasks by designing a workflow and calling on external tools. The goal is set by a human, but the AI agent determines how to fulfil it.

At the foundation of AI agents are large language models (LLMs): AI agents use the advanced natural language processing capabilities of LLMs to comprehend and respond to user inputs step by step, and to determine when to call on external tools.

AI Agents vs Monolithic LLMs

Given that LLMs are at the core of AI agent systems, it is useful to consider how AI agents compare with monolithic LLMs.

Monolithic LLMs predict responses based on data that they were trained on. This data is static. Monolithic LLMs don't interact with the world beyond their training data, meaning that, in isolation, the monolithic LLMs can't fetch new events or data.

AI Agents, on the other hand, commonly have the following features:

1. Planning: after receiving a user prompt, the AI agent can define the actions needed - creating a step-by-step plan to achieve the relevant goal. The advanced natural language processing capabilities of LLMs are harnessed here to break the goal down into discrete, actionable steps, and underpin the power of agentic systems.

2. Use of Tools: unlike monolithic LLMs, AI agents can interact with a variety of external tools (e.g. via APIs) to gather data from a variety of sources and perform tasks effectively. Access to these tools and data gives AI agents capabilities beyond those of a static training dataset (the constraint of monolithic LLMs).

3. Action Tasks: as the name suggests, agentic AI has agency - once the AI agent has defined the actions needed, and accessed relevant external data sources, it can execute these tasks autonomously.

4. Memory and Reflection: AI agents can have the ability to remember past interactions and behaviours, and even to perform self-reflection to inform future actions (improving the AI agent's performance over time).
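The four features above can be illustrated with a minimal, hypothetical agent loop. This is a purely illustrative sketch: the `ToyAgent` class, the `lookup_weather` tool and the hard-coded plan are all our own assumptions, not any specific product's API, and a real agent would use an LLM to generate the plan.

```python
# A minimal, illustrative sketch of an agentic loop: plan, use tools,
# act, and remember. All names here are hypothetical.

def lookup_weather(city: str) -> str:
    """Stand-in for an external tool (e.g. a weather API call)."""
    return f"Sunny in {city}"

TOOLS = {"lookup_weather": lookup_weather}

class ToyAgent:
    def __init__(self):
        self.memory = []  # past interactions inform future actions

    def plan(self, goal: str) -> list:
        # A real agent would ask an LLM to decompose the goal;
        # here a one-step plan is hard-coded for illustration.
        return [("lookup_weather", {"city": goal})]

    def run(self, goal: str) -> str:
        results = []
        for tool_name, args in self.plan(goal):
            output = TOOLS[tool_name](**args)  # use of tools
            results.append(output)             # action task
        self.memory.append((goal, results))    # memory
        return "; ".join(results)

agent = ToyAgent()
print(agent.run("Bristol"))  # prints "Sunny in Bristol"
```

The legal significance of each step is different: the tool-use step is where third-party data (and intellectual property or personal data) enters the system, and the memory step is where it may persist.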

Multi-AI Agent Systems

More advanced AI agents are likely to be multi-agent systems, where AI agents collaborate to address more complex challenges.

Here, the architecture of the relevant AI system will be more complex, but could - for example - include a "lead" AI agent which "orchestrates" other AI agents in order to perform a certain task.
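The orchestration pattern described above can be sketched, again purely illustratively, as a lead agent delegating sub-tasks to specialist worker agents. The class names, specialities and hard-coded task split below are all our own assumptions; in a real system the lead agent would use an LLM to decompose and route the task.

```python
# Illustrative sketch of a "lead" agent orchestrating worker agents.
# All names and the task decomposition are hypothetical.

class WorkerAgent:
    def __init__(self, speciality: str):
        self.speciality = speciality

    def handle(self, subtask: str) -> str:
        return f"[{self.speciality}] completed: {subtask}"

class LeadAgent:
    def __init__(self, workers: dict):
        self.workers = workers  # speciality -> WorkerAgent

    def orchestrate(self, task: str) -> list:
        # A real lead agent would use an LLM to split and route the
        # task; here the decomposition is hard-coded for illustration.
        subtasks = [("research", f"gather sources for '{task}'"),
                    ("drafting", f"draft a summary of '{task}'")]
        return [self.workers[s].handle(t) for s, t in subtasks]

lead = LeadAgent({"research": WorkerAgent("research"),
                  "drafting": WorkerAgent("drafting")})
for line in lead.orchestrate("AI agents"):
    print(line)
```

One design consequence worth noting: each delegation hop adds a point at which errors, data flows and liability can become harder to trace, which feeds directly into the legal challenges discussed below.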

What additional legal challenges do AI Agents present?

The autonomy of AI agents, whilst bringing huge potential benefits, also presents inherent legal challenges, especially as the level of human involvement decreases (and the corresponding level of autonomy of the AI agent increases).

At a high level, these legal challenges include the following:

  • The EU AI Act: as discussed in our article, the EU AI Act distinguishes between AI Systems and General-Purpose AI Models. The specific types of AI that might fall within these definitions are not mentioned within the EU AI Act (a deliberate choice, so as to future-proof the text of the Act as much as possible). As such, AI agents are not specifically mentioned in the EU AI Act. However, the nature of an agentic AI system may increase the risk profile under the EU AI Act when compared with a monolithic LLM. For example, depending on the architecture of the agentic AI system - and the breadth of tasks that the agentic AI system can execute - there may be an increased risk that "prohibited" or "high risk" practices are conducted as part of the AI agent executing a task. For an overview of prohibited and high-risk AI under the EU AI Act, please see our article ‘The EU AI Act: Ten key things to know’.
  • Intellectual Property: where AI agents gather data from external sources in order to complete tasks, this may pose a risk that such information is obtained in a way that infringes the intellectual property rights of a third party.
  • Data Protection: compliance with data protection laws (including the EU GDPR and UK GDPR) may become more complex in the context of agentic AI. For example, whilst the GDPR requires processing of personal data to be transparent, it may be difficult to track the personal data processed by the AI agent where it consults a broad range of sources to execute a task, and where the output does not indicate what personal data has been used/processed (or reviewed/processed and not used).
  • Liability: in cases where an AI agent makes an error or causes harm, determining liability becomes complex. Is it the developer, the user, or the AI agent itself who is responsible for that harm? Here, analogies are likely to be drawn from relevant cases involving AI chatbots (such as the Air Canada case, where Air Canada was held liable for incorrect information provided to a passenger).
  • Security: as noted above, AI agents have access to multiple data sources, in addition to the data forming the training corpus of a monolithic LLM. In certain scenarios, the agency of the agentic AI system - and its autonomy in solving a task - complicates efforts to assure system security and increases the potential for failure when compared with a monolithic LLM.

TLT Comment

Whilst agentic AI presents huge opportunities and is another exciting development in the AI space, it also gives rise to heightened legal challenges.

We are helping many of our clients to navigate these challenges and to harness the power of AI agents in an ethical, safe and compliant way.

For more information on how we can help you with your AI journey, please get in touch.

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at February 2025. Specific advice should be sought for specific cases. For more information see our terms and conditions.

Date published

26 February 2025
