As artificial intelligence becomes more powerful and more autonomous, the risks and responsibilities around data are rapidly escalating. Agentic AI (AI systems capable of making decisions and taking actions with minimal human oversight) is now entering the mainstream, raising critical questions for financial services businesses as they begin to experiment with AI agents and explore their potential.

Following our article on the rise of AI agents in February 2025, we spoke to Emma Erskine-Fox, Partner in our Tech, IP and Data team, about five big questions our financial services clients are asking as they navigate the data, privacy and ethical challenges of this new development.

Key takeaways include:

1. Agentic AI raises the stakes for data and AI governance: Unlike traditional generative AI, agentic systems operate with unprecedented autonomy and make independent decisions. That means organisations need to rethink how they govern and audit those systems and explain how decisions are made, especially where personal or high-risk data is involved.

2. Cross-functional collaboration breeds success: The smartest firms are moving beyond 'compliance-only' thinking to holistic AI governance, creating multi-disciplinary teams combining data, legal, compliance, and procurement expertise (among others). This helps to ensure that all relevant considerations are embedded into every decision, all the way from initial selection through to implementation and beyond.

3. Ethics remain central to responsible AI: Firms need to ensure systems are fair, accountable, and able to stand up to scrutiny from regulators and customers alike. Agentic AI makes this uniquely challenging, but placing ethics at the heart of AI governance and implementation will help firms avoid legal and reputational consequences.

1. What’s changing — and why does it matter now?

We’ve entered a new phase of AI capability. The generative AI we’ve all become used to over the last couple of years creates content based on static data and user prompts. Agentic AI (which builds on generative AI) takes system autonomy to the next level: agentic models are dynamic systems that can design workflows, call on multiple data sources independently, make decisions and take action without human input.
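To make that distinction concrete, the sketch below shows the kind of loop an agentic system runs: the model, not the user, decides what to query and when to act. It is a deliberately simplified, illustrative toy; every name in it is a hypothetical placeholder rather than any vendor’s actual API.

```python
# Illustrative only: a toy agent loop contrasting agentic AI with
# prompt-in, content-out generative AI. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Step:
    kind: str          # "query_data", "take_action" or "finished"
    payload: str = ""

def plan_next_step(goal: str, context: list[str]) -> Step:
    # Toy stand-in for the planning step: in a real agentic system this
    # would be a model call deciding what to do next based on the goal
    # and everything gathered so far.
    if not context:
        return Step("query_data", "transaction_history")
    if len(context) == 1:
        return Step("take_action", "flag_account_for_review")
    return Step("finished", "review raised")

def fetch(source: str) -> str:
    return f"data from {source}"      # stand-in for a real data source call

def execute(action: str) -> None:
    print(f"agent action: {action}")  # note: no human approval in this loop

def run_agent(goal: str, max_steps: int = 10) -> str:
    context: list[str] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, context)  # the system picks its own next step
        if step.kind == "finished":
            return step.payload
        if step.kind == "query_data":
            context.append(fetch(step.payload))  # the agent chooses its own sources
        elif step.kind == "take_action":
            execute(step.payload)
            context.append(step.payload)
    return "stopped: step budget exhausted"

print(run_agent("investigate unusual account activity"))
```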

Agentic AI is still in its early days, but that’s arguably the best time to start getting ahead of the game. The increased autonomy of agentic AI comes with significant implications for how financial services firms handle data. Traditional data governance models aren’t always designed for systems that constantly learn, adapt, and make decisions on the fly, with such limited human input. There’s a risk that legal and regulatory frameworks (like GDPR or Consumer Duty) struggle to keep up with the practical realities of how AI operates.

Financial services firms are right to be excited about the potential of agentic AI, but they must make sure that their data practices, procurement processes, and governance frameworks are robust and flexible enough to allow for the responsible use of agentic AI. Otherwise, the regulatory, reputational, and ethical risks could outweigh the benefits. And the Senior Managers and Certification Regime (SM&CR) means that senior managers hold direct responsibility for AI deployment within their remit, so clear accountability for agentic AI-related activities is vital.

2. What’s the opportunity — and what should clients be cautious about?

Agentic AI has real power and potential within financial services. It can identify fraud, increase personalisation of customer journeys, and accelerate decision-making at scale. However, as these systems operate independently, they also introduce new risks around control, transparency, and bias.

A good example is lending decisions. Agentic AI could be used to automate credit decisioning, speeding up the time it takes for decisions to be made and improving customer experience. But even a small error somewhere in the system could lead to biased decisions that unfairly deny services to customers. Under Consumer Duty, that’s not just an operational problem; it’s a potential breach of regulatory obligations.
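To make ‘bias’ less abstract: a first-pass screen that fairness testing often starts with is comparing approval rates across groups (the so-called ‘four-fifths’ rule of thumb). The sketch below is purely illustrative; the sample data, group labels and the 0.8 threshold are assumptions, not a compliance standard.

```python
# Illustrative only: a toy "four-fifths rule" disparate-impact screen on
# automated lending outcomes. Sample data and threshold are assumptions.
from collections import defaultdict

decisions = [
    # (applicant_group, approved) - hypothetical outputs of a credit model
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved        # True counts as 1, False as 0

rates = {g: approvals[g] / totals[g] for g in totals}
benchmark = max(rates.values())         # best-treated group as the baseline
for group, rate in sorted(rates.items()):
    ratio = rate / benchmark
    status = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: approval rate {rate:.2f}, ratio vs best {ratio:.2f} -> {status}")
```

Real bias testing goes far beyond a screen like this, but the point stands: bias is measurable, and firms deploying agentic decisioning will be expected to measure it.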

When AI operations cause or risk harm to customers or third parties, organisations must be able to investigate what happened, both to respond to complaints and to identify regulatory failures. But as noted above, pinpointing where an issue has occurred in a complex agentic AI model is easier said than done. Transparency and explainability are also key concerns: organisations will need to be able to explain the outcome of decisions and investigations to customers, third parties and regulators (including the FCA, the ICO and the Financial Ombudsman Service). Explaining decisions when there is limited human input into how those decisions were reached is no easy task.
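One practical implication is that agentic systems should capture enough structured detail at every step to support later investigation and explanation. As a hypothetical illustration (the fields below are assumptions, not a prescribed standard), a per-step audit record might look something like this:

```python
# Hypothetical sketch of the kind of structured record an agentic system
# could write at every step so that a decision can later be investigated
# and explained. The fields are illustrative, not a prescribed standard.
import json
from datetime import datetime, timezone

def log_agent_step(step_id: int, data_sources: list,
                   decision: str, rationale: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step_id": step_id,
        "data_sources": data_sources,  # which sources the agent chose to call
        "decision": decision,
        "rationale": rationale,        # captured for complaints and regulators
    }
    return json.dumps(record)

print(log_agent_step(
    step_id=3,
    data_sources=["credit_bureau", "transaction_history"],
    decision="decline",
    rationale="affordability threshold not met",
))
```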

We’re also seeing challenges where firms don’t fully understand how their vendors’ systems work: what data was used in training, how models are tested, or whether outputs can be explained. That lack of visibility creates potential legal and ethical exposure.

The due diligence process requires early engagement with vendors and detailed scrutiny of their practices around training, testing, and data quality. Organisations should pay particular attention to vendors’ data cleansing processes and accuracy guarantees, as data quality is intrinsically linked to privacy obligations.

The message is simple: move fast but build strong foundations. That means embedding AI governance into procurement, legal and data teams, not bolting it on at the end.

3. How are financial services firms responding?

We're seeing a real variety of approaches across the sector. Some firms are already at fairly advanced experimentation stages with agentic AI, or even starting to roll out some agentic AI solutions. Others are still at the ‘wait and see’ stage: understandably cautious, but keeping a close eye on developments. One thing that’s universal, though, is that AI, including agentic AI, is high on agendas, and many firms are thinking carefully about their AI policies, data flows, risk assessments and vendor contracts.

One of the smartest shifts we’re seeing is a move from ‘compliance-only’ thinking to more holistic AI governance. That includes creating multi-disciplinary steering groups in which data, legal, compliance and procurement functions are all represented. These teams aren’t just asking “Is this legal?” but also “Is this ethical? Explainable? Aligned with our values?” This also enables firms to factor wider issues, such as any potential impact on jobs, into conversations around the implementation of agentic AI.

4. What are the most pressing legal and ethical issues?

Data protection is a key issue. The UK GDPR requires firms to ensure that personal data is processed lawfully, fairly, and transparently (among other obligations). With agentic AI, that’s harder, but even more important.

Take Article 22, for example: individuals have rights around decisions made solely by automated means that significantly affect them. Recent rulings, like the Court of Justice of the EU’s SCHUFA case, have confirmed that credit scoring tools may trigger this provision, meaning firms must ensure meaningful human involvement and explainability. If credit scoring forms part of agentic AI solutions for lending decision-making, balancing the right level of human involvement with the increased autonomy of these models will be challenging.
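By way of illustration, one way to preserve meaningful human involvement in an agentic lending pipeline is to route adverse or low-confidence outcomes to a human reviewer before they are actioned. The sketch below is a hypothetical pattern under assumed names and thresholds, not a prescribed compliance approach.

```python
# Hypothetical human-in-the-loop gate for automated credit decisions.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    approve: bool
    confidence: float   # the model's confidence in its own output
    rationale: str      # explanation captured for audit and transparency

def route_decision(decision: CreditDecision) -> str:
    # Adverse or low-confidence outcomes go to a human reviewer rather
    # than being actioned solely by the agent.
    if not decision.approve or decision.confidence < 0.9:
        return "human_review"
    return "auto_action"

d = CreditDecision("app-123", approve=False, confidence=0.95,
                   rationale="debt-to-income ratio above policy limit")
print(route_decision(d))  # -> human_review
```

The value of the pattern is that the agent’s autonomy is bounded: adverse outcomes are escalated rather than actioned automatically, which also creates a natural point at which an explanation can be recorded.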

Other priorities include:

  • Vendor due diligence: ensuring agentic AI partners meet high standards around data quality, bias testing, and transparency.
  • Data minimisation and accuracy: avoiding excessive or outdated data use, especially where agentic solutions can themselves determine which data sources they will call on to make their decisions.
  • Risk assessments: ensuring that robust assessments are completed for any agentic AI models, assessing not just privacy risks, but also fairness, bias and accountability, among others – and taking into account the unique risks of agentic AI.

Beyond compliance, there’s an ethical imperative too. Agentic AI increases the risk of unintentional bias and discrimination, creating long-term reputational and social risks.

5. What should financial services firms do next?

Firstly, put ethics at the heart of agentic AI governance. As with any AI solution, an ethical approach is vital to ensure agentic AI is implemented and used responsibly, helping organisations avoid legal consequences and reputational damage.

Secondly, make sure all the right voices are in the room early on, ideally before selecting or deploying any agentic AI system. Successful AI governance isn’t just about ‘monitoring’ after launch; it’s about shaping decisions from day one.

Third, make sure your governance and risk assessment processes account for the additional risks of agentic AI. These solutions will likely need extra scrutiny — not just for GDPR compliance, but for fairness, accountability, and brand trust.

Fourth, review your vendor management processes. AI vendor agreements may lack the clauses needed for meaningful oversight – which brings increased risks for highly independent agentic AI solutions. Push for transparency around training data, testing processes, update cycles and model performance. It won’t always be easy, but it’s essential.

Finally, invest in education. AI literacy is now a legal obligation under the EU AI Act, but more than this, it’s good business sense. Boards and senior leaders need to understand the risks of agentic AI, not just the tech opportunities. When legal, compliance and business leaders are aligned, the governance is stronger, and the innovation is safer.

At TLT, we help you stay ahead of complex regulatory shifts and build practical, future-ready solutions for responsible AI adoption.

Download our AI Legal Playbook for practical guidance on balancing innovation with governance and responsible AI implementation.

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at June 2025. Specific advice should be sought for specific cases. For more information see our terms and conditions.

Written by

Emma Erskine-Fox


Date published

20 June 2025
