Agentic AI: CMA publishes guidance on consumer law and DMCC Act risks

Agentic AI has moved quickly from concept to commercial reality. Systems that can plan, decide and act autonomously on a user’s behalf are already being deployed across customer service, marketing, pricing, refunds and product recommendations. The appeal for businesses is clear: efficiency gains, better personalisation and reduced friction for customers.

Unsurprisingly, UK regulators are trying to keep pace with technical innovation in this space. The Digital Regulation Co-operation Forum (DRCF) – which includes the CMA, ICO, Ofcom and FCA – ran a call for views at the end of 2025 on the regulatory challenges associated with agentic AI.

This has quickly fed into the CMA’s consumer protection work. On 9 March 2026, the CMA published two closely linked documents: a policy paper on agentic AI and accompanying guidance on how consumer law applies to businesses using AI agents.

Taken in the round, the CMA clearly recognises the potential pro-consumer and wider economic benefits that AI agents can bring.

But it also identifies some of the potential consumer protection risks that could materialise if agentic AI is deployed without sufficient pre-launch due diligence.

A familiar legal framework applied to a new technology

The CMA’s starting point is deliberately simple: the same consumer law rules apply whether customers interact with a human or an AI agent. Businesses remain fully responsible for what their AI agents say and do, even where the technology is supplied or designed by a third party.

In other words, delegating decisions to software does not delegate legal accountability.

That matters because agentic systems are different from traditional automation or chatbots. As the CMA’s policy paper explains, AI agents can pursue higher-level goals, break them into steps, retrieve data from multiple sources and take actions, sometimes with limited human intervention. As autonomy increases, so does the impact of errors, bias, manipulation or misleading behaviour.

The CMA is clear that consumer law is already capable of addressing these risks. Under the Digital Markets, Competition and Consumers Act 2024 (DMCCA), the CMA now has much tougher consumer enforcement powers, including the ability to impose fines of up to 10% of global turnover for breaches.

Key consumer law obligations when using AI agents

While the CMA’s guidance is not prescriptive, a few core areas stand out for businesses using agentic AI:

1. Transparency and honesty

The CMA is clear that consumers must not be misled about whether they are dealing with an AI agent, or about what that agent can and cannot do. If the use of AI might affect a consumer’s decision-making, businesses should be open about it. The CMA therefore warns against overstating an agent’s capabilities or disguising automated interactions as coming from a human.

Transparency also extends to outcomes. If AI agents generate recommendations, rankings or comparisons, businesses should ensure that material limitations are clearly disclosed. That could include, for example, how much of the market is covered, how results are ranked, or whether there are commercial relationships influencing outputs.

2. Fair treatment and respect for consumer rights

AI agents must be trained and configured to respect consumers’ statutory and contractual rights, which include rights around pricing, refunds, cancellation, and accurate product information. Consumer-facing businesses will be at risk if they deploy AI agents that make it harder for consumers to exercise their rights, provide incomplete information, or apply policies inconsistently.

The CMA emphasises that errors at scale are particularly problematic, as an agent that systematically misstates cancellation rights or rejects valid refund requests can quickly lead to widespread consumer harm.

3. Avoiding manipulation and harmful design

The policy paper highlights concerns around loss of consumer agency and the risk of manipulative or deceptive design (or ‘dark patterns’).

The CMA is likely to be concerned if an AI agent pressures, misleads or unduly influences consumers in a way that harms their economic interests, regardless of whether it emerges from code, prompts or optimisation logic.

In a sense, this is a natural evolution of the CMA’s long-standing focus on online choice architecture (OCA). The difference is that the CMA’s work on OCA to date has tended to focus on the impact of static user interfaces on consumers, whereas agentic systems are dynamic, highly personalised and optimise constantly in response to user behaviour.

4. Accountability and human oversight

Businesses must remain in control of their AI agents. The CMA emphasises the need for appropriate human oversight, particularly where agents interact directly with consumers or take decisions with financial or contractual consequences.

Hallucinations, data errors and unexpected behaviour are well‑recognised risks in current AI systems. Without robust monitoring and escalation processes, these risks can quickly translate into consumer law breaches.

Practical compliance steps for businesses

The CMA’s guidance is pragmatic and operational in tone. Rather than introducing new legal tests, it encourages businesses to embed consumer protection principles into the lifecycle of agentic AI systems.

Build consumer law into design and training

Compliance should start at the design stage, which means that businesses should be clear about what tasks an AI agent is allowed to perform, what data it can access, and what constraints apply.

Training data and prompts should reflect consumer law requirements, including statutory rights and consent requirements. Testing is also critical, as pre‑deployment testing such as scenario testing and review of edge cases can help identify misleading outputs, inconsistent decisions or gaps in disclosures before they reach customers.
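The kind of pre-deployment scenario testing described above can be sketched in code. The example below is a minimal, hypothetical illustration only: the agent stub, the disclosure rules and all names (`fake_agent`, `REQUIRED_DISCLOSURES`, `check_response`) are assumptions for the sketch, not anything prescribed by the CMA, and a real harness would call the deployed model and encode the business’s actual legal requirements.

```python
# Hypothetical sketch: scenario-testing a consumer-facing AI agent's
# responses for required disclosures before deployment.

# Illustrative disclosure rules; real rules would reflect the business's
# actual statutory and contractual obligations.
REQUIRED_DISCLOSURES = {
    "ai_identity": "AI assistant",  # consumers should know they are talking to AI
    "cancellation": "14 days",      # e.g. a cooling-off period, if applicable
}

def fake_agent(prompt: str) -> str:
    """Stand-in for the deployed agent; replace with a real model call."""
    if "cancel" in prompt.lower():
        return ("You can cancel within 14 days of purchase. "
                "I am an AI assistant, so please check anything important.")
    return "I am an AI assistant. How can I help?"

def check_response(prompt: str, response: str) -> list[str]:
    """Return the disclosures missing from a response for a given scenario."""
    missing = []
    if REQUIRED_DISCLOSURES["ai_identity"] not in response:
        missing.append("ai_identity")
    if "cancel" in prompt.lower() and REQUIRED_DISCLOSURES["cancellation"] not in response:
        missing.append("cancellation")
    return missing

# Edge-case scenarios would be reviewed and extended over time.
scenarios = [
    "How do I cancel my order?",
    "Tell me about your delivery options.",
]

for s in scenarios:
    issues = check_response(s, fake_agent(s))
    print(f"{s!r}: {'OK' if not issues else 'MISSING: ' + ', '.join(issues)}")
```

In practice a harness like this would sit in a test suite and block release while any scenario reports a missing disclosure, giving the "identify issues before they reach customers" step a concrete gate.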

Monitor real-world performance

Deployment is not a one-off exercise or the end of the compliance story. The CMA expects ongoing monitoring of how AI agents behave in practice, including reviewing outputs, customer interactions, complaints and feedback. Regular human review helps identify bias or unintended outcomes, and this is particularly important where agents interact with large numbers of consumers or vulnerable groups or where decisions have financial consequences.

Act quickly when things go wrong

Where issues are identified, particularly where there might be significant impacts, the CMA expects businesses to act promptly to refine prompts, workflows or guardrails. As the CMA’s message is clear that speed of response matters, businesses are advised to put in place processes to avoid delays in remediation.

Manage third party risk

Using third‑party AI tools does not shift responsibility, so businesses should carry out vendor due diligence on suppliers, understand how LLM systems are trained and governed, and ensure contractual arrangements and warranties support compliance, audit and remediation.

What should businesses do now?

For organisations experimenting with or rolling out agentic AI, the CMA’s guidance and policy paper can be distilled as follows:

  • map current and planned uses of AI agents in consumer journeys and assess whether those uses could mislead, pressure or disadvantage consumers;
  • embed consumer law requirements into design, training and testing of agentic AI;
  • be open with customers about the use of AI agents, especially where this could affect purchasing decisions or the uptake of services;
  • act quickly when issues are identified and put in place clear monitoring, escalation and remediation processes, including through appropriate human oversight; and
  • ensure senior ownership and accountability for AI‑driven consumer outcomes.

How we can help

TLT’s Digital Regulation team has market-leading experience advising on AI deployment, working with businesses across the full lifecycle from product design and procurement through to regulatory engagement. Our digital consumer experts are ready to support with all consumer-facing AI use cases.

Contributor: Lili Elenoglou

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at April 2026. Specific advice should be sought for specific cases. For more information see our terms & conditions.

Date published: 07 Apr 2026
