
When AI shops for you

Redefining the payments journey

Visa and Mastercard have announced the launch of AI-driven payments agents: autonomous tools that can make purchases on behalf of consumers. These tools are designed to analyse user preferences, compare options and execute transactions. The launch of these AI agents represents a major evolution in the way consumers interact with the payments ecosystem.

While the commercial and convenience benefits are clear, these AI agents raise novel legal and regulatory challenges. As the boundaries between human instruction and machine execution continue to blur, firms deploying these tools will need to reassess how they manage risk, demonstrate accountability and transparency, and protect consumers.

AI in the payment flow: beyond automation

The launch of autonomous AI-driven payments agents goes further than existing “smart” technologies embedded in digital wallets or recurring payment tools. AI payments agents are not passive tools facilitating consumer-initiated payments; they operate as active agents capable of initiating transactions within parameters set by the consumer.

In doing so, they challenge many traditional legal constructs, some of which we explore further below.

Some legal considerations

Traditional agency is predicated on clear instructions from a principal and the accountability of the agent. Where the agent is a generative AI model responding dynamically to data inputs, establishing clear lines of responsibility becomes more complex.

For UK-regulated firms, this raises a number of questions:

  • Does the execution of a payment by an AI tool meet the definition of “authorisation” under the Payment Services Regulations 2017? Consideration will need to be given to whether a broad instruction (e.g. to automatically re-order household products) qualifies as a valid authorisation, how far downstream decisions made by the AI agent still fall within that authorisation and what controls exist to prevent unauthorised or disputed transactions.
  • How is liability for disputed transactions managed, especially where the AI agent makes a purchase the consumer disagrees with, misinterprets input data or relies on erroneous information from a third party (e.g. a merchant or data provider) that influences its decision?
  • How is data being processed, and is it compliant with the UK GDPR? For example, what special category data is being relied upon, how are transparency obligations managed (particularly around profiling and automated decision-making), and how is the consumer informed of any automated decision and given a right to contest it?
  • Are the decisions made by the AI agent explainable and auditable?
  • Are customer outcomes being actively checked under the Consumer Duty? Are firms testing the AI system for unintended bias or discriminatory impact and ensuring consumers are not financially disadvantaged by poorly tuned or oversensitive AI logic?
  • Are terms and disclosures relating to the use of AI agents sufficiently clear and fair? Terms must be transparent and accessible and should disclose the nature, scope and risks of AI-driven functionality and ensure consumers have and maintain meaningful control.

To manage these risks, firms will need to ensure robust contractual frameworks are in place, clarifying the scope of the AI agent’s role and limiting liability where appropriate. Firms deploying AI-driven payments agents will also need to give consumers clear information on how decisions are made, allow them to set and adjust parameters easily, and ensure they can revoke or override agent activity at any time. These controls and principles align closely with the Financial Conduct Authority’s expectations around transparency, explainability and accountability.

The Financial Conduct Authority, Competition and Markets Authority and Information Commissioner’s Office will inevitably keep a close eye on these tools through the lens of existing consumer protection, data and AI governance regimes. We also anticipate alignment efforts between the UK and other international regulatory approaches, particularly as the EU AI Act begins to influence market participants.

Governance and oversight

As with any AI deployment, strong governance will be critical. The launch of AI-driven payments agents is likely to prompt a broader conversation about how AI systems are developed, tested and monitored in financial services.

In practice, for firms launching AI tools in the payments space, this means:

  • defining ownership of AI decisions across the payments journey;
  • building in auditability and traceability of agent actions;
  • conducting regular testing and scenario analysis;
  • ensuring effective human oversight; and
  • embedding ethics, legal review and consumer analysis into product design.

For firms seeking a practical starting point, our insight “Five top tips for AI governance” outlines how to build proportionate frameworks tailored to different levels of AI sophistication.

TLT’s insight

As the payments sector embraces AI, firms must strike the right balance between innovation and accountability. At TLT, we support clients in navigating this emerging space – helping them design compliant frameworks, manage operational and contractual risk, and engage constructively with regulators.


Written by Matthew Atkinson (Managing Partner), with contributions by Alex Williamson, Tom Sharpe and Michelle Sally. Published 24 Jun 2025.

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at June 2025. Specific advice should be sought for specific cases. For more information see our terms & conditions.
