
AI and financial crime
Five big questions with Ben Cooper
AI is shaking up the financial crime landscape. While banks and financial institutions are embracing AI to spot fraud faster and protect customers more effectively, criminals are using the same technology to stay one step ahead. Deepfakes, voice cloning and synthetic identities aren't just future threats; they're already here.
We spoke to Ben Cooper, Partner in TLT's Financial Services Regulatory team, about five key questions clients are asking as they adopt AI to fight financial crime, protect customer trust and stay compliant with evolving regulatory expectations.
Key takeaways include:
- AI is critical but criminals are using it too: Financial institutions must respond to AI-driven threats with AI-powered solutions, such as fraud detection and behavioural monitoring.
- Regulation limits how fast firms can move: Banks must comply with strict data privacy rules and regulatory requirements, adding complexity to AI implementation.
- Agility is key to staying ahead: It's not just about deploying tools; it's about continuously adapting AI systems to address new threats while maintaining service quality and compliance.
1. What’s changing and why does it matter now?
Financial crime is becoming more sophisticated, and AI is a seismic shift for both sides. Criminals are using advanced tools like voice cloning and deepfake technology to pull off scams that would have been unimaginable just a few years ago. They're impersonating customers and managers, tricking systems and creating synthetic identities, all at an alarming pace and scale.
Financial institutions are responding by deploying their own AI tools to fight financial crime, using them to spot fraud patterns faster, reduce false positives and make investigations quicker and more efficient. But there's a real divide here: criminals don't have to worry about regulations and governance. Banks do. They're operating under strict data privacy laws, regulatory expectations and ethical responsibilities. So the challenge is to fight AI with AI, but responsibly and within the rules.
2. What’s the opportunity and what should clients be cautious about?
The upside is clear: AI can help firms detect suspicious behaviour in real time, identify patterns faster than a human analyst could, and reduce losses from fraud. And we're seeing it make a big difference in areas like fraud detection and transaction monitoring.
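To make that concrete, here is a deliberately simple, hypothetical sketch of the kind of check a transaction-monitoring system might run: it flags payments that deviate sharply from a customer's own spending history. The function name, thresholds and data are illustrative assumptions, not any firm's actual detection logic.

```python
# Illustrative only: a toy real-time check that flags transactions which
# deviate sharply from a customer's own spending history. Real systems
# combine many richer signals with trained models; every name and
# threshold here is hypothetical.
from statistics import mean, stdev

def flag_transaction(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    """Return True if `amount` is unusually large for this customer."""
    if len(history) < 10:           # too little history: fall back to a static rule
        return amount > 5_000       # hypothetical limit pending manual review
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:                  # completely flat spending history
        return amount > 3 * mu
    z = (amount - mu) / sigma       # distance from this customer's normal spend
    return z > z_threshold

# A customer who normally spends around £40 suddenly sends £2,000.
history = [32.0, 45.5, 38.2, 51.0, 29.9, 44.1, 36.7, 48.3, 41.0, 39.5]
print(flag_transaction(history, 2_000.0))   # True: route to review
```

Even this simplest version has a property regulators care about: the decision can be explained and audited after the fact.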
But it's not without risk. AI can get it wrong: flagging legitimate customers, missing new fraud techniques, or introducing bias into decision-making models. That's not just a financial risk; it's a reputational one, and it opens the door to regulatory consequences, especially under the Consumer Duty.
Firms must demonstrate adherence to the FCA's Principles for Businesses, emphasising due skill, care and diligence in their AI implementations. This includes maintaining effective risk management systems and ensuring operational resilience, particularly regarding oversight of critical third parties.
Likewise, the Consumer Duty requires firms to deliver good outcomes for consumers, avoid foreseeable harm, and support customers in achieving their financial objectives. AI applications must not lead to unfair discrimination or produce adverse market outcomes.
Firms need to be cautious. If AI is being used for onboarding or sanctions screening, for example, it’s vital that a human still has oversight. The regulators expect it, and frankly, customers do too.
3. How are financial services firms responding?
We're seeing two responses: early trailblazers are embedding AI across fraud teams, risk models and KYC systems, with cross-functional teams spanning compliance, data and legal. Others are waiting, wary of reputational risk and regulatory uncertainty.
The most successful firms are focusing on agility: not just implementing AI, but continuously refining it. That means adapting systems as fraud threats and governance expectations evolve, monitoring model performance, and reviewing AI use cases with compliance involved from the start.
A growing number are also taking hybrid approaches, combining rule-based systems with AI models and keeping a human in the loop for final decisions.
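As a sketch of what that hybrid, human-in-the-loop pattern can look like in practice (with hypothetical names, thresholds and a stubbed model standing in for a real one), consider:

```python
# Illustrative sketch of a hybrid decision pipeline: deterministic rules
# run first, a model score runs second, and anything uncertain is routed
# to a human reviewer rather than auto-decided. All names, thresholds
# and the stubbed model are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    amount: float
    country: str
    sanctions_hit: bool

SANCTIONED = {"XX"}   # placeholder jurisdiction codes

def rule_check(tx: Transaction) -> Optional[str]:
    """Deterministic rules run first: auditable and easy to explain."""
    if tx.sanctions_hit or tx.country in SANCTIONED:
        return "block"              # hard stop, no model involved
    return None                     # no rule fired; defer to the model

def model_score(tx: Transaction) -> float:
    """Stub standing in for a trained fraud model scoring risk in [0, 1]."""
    return min(tx.amount / 10_000, 1.0)

def decide(tx: Transaction) -> str:
    outcome = rule_check(tx)
    if outcome:
        return outcome
    score = model_score(tx)
    if score < 0.2:
        return "approve"            # low risk: straight-through processing
    if score > 0.9:
        return "block"
    return "human_review"           # uncertain cases go to a person

print(decide(Transaction(amount=6_500, country="GB", sanctions_hit=False)))  # human_review
```

The design choice that matters is the middle band: rather than forcing every score into approve or block, uncertain cases are deliberately routed to a reviewer.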
4. What are the most pressing legal and ethical issues?
The main issue is balance — using AI to protect customers and reduce crime, without creating new risks around fairness or accountability.
Data protection is a big one. Even when you’re trying to prevent fraud, you still need to comply with GDPR — things like data minimisation, lawful processing, and fairness can’t be ignored.
Transparency is another pressure point. Firms need to be able to explain why someone was flagged. That’s hard to do if you’re relying on black-box models that even your own teams don’t fully understand. Regulators want more visibility — and so do customers.
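One common mitigation is to attach human-readable reason codes to every flag, so that an analyst, and ultimately the customer, can be told why a payment was stopped. The sketch below uses hypothetical feature names and rules purely to illustrate the pattern:

```python
# Illustrative only: pairing each flag with explicit reason codes so the
# decision can be explained to regulators and customers. Feature names
# and rules are hypothetical.
def explain_flag(tx: dict) -> list[str]:
    """Return the reason codes that would justify flagging this payment."""
    reasons = []
    if tx["amount"] > 10 * tx["avg_amount"]:
        reasons.append("AMOUNT_FAR_ABOVE_CUSTOMER_NORM")
    if tx["new_payee"]:
        reasons.append("FIRST_PAYMENT_TO_THIS_PAYEE")
    if tx["hour"] < 6:
        reasons.append("UNUSUAL_TIME_OF_DAY")
    return reasons   # an empty list means nothing to flag, nothing to explain

tx = {"amount": 4_800, "avg_amount": 120, "new_payee": True, "hour": 3}
print(explain_flag(tx))
# ['AMOUNT_FAR_ABOVE_CUSTOMER_NORM', 'FIRST_PAYMENT_TO_THIS_PAYEE', 'UNUSUAL_TIME_OF_DAY']
```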
There’s also the question of human oversight. Under UK GDPR, automated decisions that affect people in significant ways need meaningful human involvement. That needs to be built in from the ground up.
And if you’re using third-party AI vendors? You need clear accountability, proper audit rights, and insight into how their models are trained. That’s where a lot of the legal and ethical complexity really shows up — especially when something goes wrong.
5. What should financial services firms do next?
First, deploy AI solutions with agility — but not at the expense of compliance. Institutions need robust governance, clearly defined control structures, and legal oversight from day one. This isn’t just about technology implementation — it’s about embedding AI within a responsible, auditable framework.
Second, think about customer experience. AI might reduce false positives, but it can’t be left to operate without checks. Human review needs to stay part of the process, particularly in high-risk areas like fraud and transaction monitoring.
And finally, stay flexible. Criminal tactics are evolving fast, and AI models need to evolve with them. That means regular updates, testing, and a culture that treats agility as part of your compliance strategy — not something separate from it.
Done right, AI can be a game changer — not just for stopping fraud, but for building customer trust and resilience. But it has to be done responsibly.
At TLT, we help financial institutions navigate the legal and ethical complexities of AI in financial crime prevention — building future-ready, compliant strategies that protect both your customers and your reputation.
Download our AI Legal Playbook for guidance on managing risk, building agile compliance frameworks, and using AI responsibly in financial services.
This publication is intended for general guidance and represents our understanding of the relevant law and practice as at June 2025. Specific advice should be sought for specific cases. For more information see our terms & conditions.