Background

AI is already a prominent force in the UK Financial Services (FS) sector: a recent Bank of England / FCA survey indicates that 75% of UK FS firms use AI, with a further 10% planning to do so within the next three years.

The payments sector has been an early adopter of AI, with payment service providers (PSPs) already using the technology in their internal processes and customer-facing offerings. Use cases range from payment authorisation via facial recognition technology to voice-activated shopping.

As PSPs continue to strive for a leading edge, the potential benefits of AI adoption are clear. However, there are of course potential risks (both practical and legal) including an emerging regulatory framework, which market players embracing AI will need to monitor closely.

Benefits

AI can analyse large-scale data sets to an increasing level of detail. This ability to conduct rapid, in-depth analysis of consumer behaviour (across both spending and browsing habits) alongside demographics and wider data sets provides PSPs with an opportunity to better predict customer behaviour and trends, which in turn can sharpen their targeted marketing, research and development, investment and wider strategies. For example, predictive AI analysis can support PSPs in anticipating where customers (and potential customers) are likely to spend and the payment methods they are most likely to use.

Another key benefit of AI technology is its automation capability, driven by the breakneck speed at which complex data sets can be analysed. Fraud is one of the main underlying risks in payments (as well as a key regulatory focus) and fraud analysis, detection and mitigation are already mature AI use cases for PSPs - with AI tools reinforcing established processes to mitigate fraud risk while minimising customer disruption.

Customer onboarding (another key area of regulatory focus) is also being accelerated by AI, with specific applications ranging from facial recognition and liveness detection to anomaly detection in ID documents, all becoming increasingly central to the smooth onboarding process that payment customers have come to expect.

At a business-to-business level, AI is being increasingly harnessed for large volume invoice payment processing, more sophisticated transaction routing options, the auto-population of frequent payment details and automated reconciliation processes.

Risks

Alongside its undoubted benefits, AI does bring with it a significant number of risks, which look set to evolve as the underlying technology and use cases continue to develop.

On the flip side of the fraud mitigation use cases above, sophisticated perpetrators are using AI maliciously to optimise their own fraudulent and illegal activities.

The use of AI where personal data is involved gives rise to complex considerations around the compatibility of using AI analysis / outputs with the rigorous data protection legal and regulatory framework.

Regulatory compliance more broadly is also a key issue – for example where AI is used to support regulatory processes such as customer onboarding or to deliver frontline chatbot customer support (where Consumer Duty compliance is of course a key consideration in the UK).

The emerging regulatory framework around AI in financial services will continue to develop, with the FCA and other UK regulators actively monitoring the scene. The Bank of England has also recently announced plans to establish an AI Consortium, whose remit will include considering the key opportunities, risks and challenges presented by increasing AI adoption. The Consortium will no doubt play an important role in shaping the UK regulatory framework around AI, particularly where FS and payments are concerned.

In addition to complying with the EU AI Act (which came into force in 2024 and will continue to be implemented in stages over the next 18 months) where conducting AI activity in the European Union, UK PSPs will need to keep a close eye on the UK Government’s upcoming Data (Use and Access) Bill in respect of digital identity verification. The Bill proposes to introduce a set of requirements aimed at ensuring the reliability of digital identity verification services, including establishing a trust framework alongside supplementary codes for specific digital ID use cases, and setting up a register of certified providers of digital ID services. With the EU AI Act, UK regulatory requirements (from the ICO, FCA, CMA and more) and the Data (Use and Access) Bill (expected to become law by the summer) to navigate, not to mention existing legal frameworks like the (UK) GDPR, AI use cases in digital ID verification will require careful consideration to ensure PSPs are operating within all relevant guardrails.

How TLT can help

As PSPs continue to embrace and invest in AI, there is an imperative to balance the potential commercial gains against successful navigation of the key risks, particularly when designing and implementing any sustainable long-term strategies.

TLT’s multi-disciplinary AI team brings together experts across payments, data protection, technology and IP, competition and regulatory law. Alongside our specialist tools such as AI Navigator, we can help PSPs and other businesses successfully navigate the legal and regulatory considerations relating to AI. If you need any support with the next phase of your AI journey, please do get in touch.

Contributor: Ed Jeffery

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at February 2025. Specific advice should be sought for specific cases. For more information see our terms and conditions.

Written by

Alex Williamson

Date published

17 February 2025
