
FATF Horizon Scan: AI & Deepfakes — Impacts on AML/CFT/CPF
TLT picks out the key points you shouldn't miss...
What’s this about?
The FATF has published its “Horizon Scan: AI and Deepfakes” report, highlighting how artificial intelligence and deepfake technologies are rapidly reshaping the financial crime landscape. The report explores both the risks and the opportunities for financial institutions, with a focus on anti-money laundering, countering the financing of terrorism and countering proliferation financing (AML/CFT/CPF) compliance.
Our Head of Risk and Financial Crime, Ben Cooper, says...
“Criminals now have access to easy-to-use deepfake kits that can generate convincing video and voice clones - enough to fool facial recognition or biometric onboarding systems. Even unsophisticated actors can launch high-impact scams, while organised networks automate multi-stage laundering. Firms need to ramp up AI-detection, scale human review, and invest in cross-industry collaboration to stay ahead.”
The points not to miss...
Synthetic audio, video, or images allow criminals to convincingly impersonate individuals—defeating KYC, remote onboarding, biometric verification, and liveness checks. The FATF highlights real-time scams in which senior staff are impersonated to pressure employees into authorising payments, often with devastating consequences.
Criminals now use AI to execute complex, high-volume laundering schemes—structuring transactions in patterns that traditional rules-based systems cannot detect. Autonomous AI agents may orchestrate these flows without human supervision.
Fraud detection tech hasn’t kept pace with generative AI. Deepfakes can pass through liveness and biometric checks and only trigger alarms later—creating a dangerous window for fund diversion.
Deepfake onboarding and cross-border transactions often route through jurisdictions with inconsistent AML regulations or weaker digital ID frameworks, making tracing and enforcement more difficult.
The FATF also emphasises harnessing AI for compliance: deploying anomaly detection systems, biometric verification, deepfake detection, and automated screening turns AI into part of the solution (a minimal sketch of the anomaly-detection idea follows these points).
The FATF signals that supervisors will intensify scrutiny of AI-specific AML/CFT controls. Firms should establish structured AI risk governance frameworks—defining ownership, risk appetite, and monitoring and assurance processes.
Effective defence requires collaboration: the FATF recommends mechanisms for sharing intelligence on emerging AI-enabled threats so that institutions and authorities can respond quickly.
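For illustration only, the snippet below is a minimal Python sketch of the kind of unsupervised anomaly detection the FATF points to, using scikit-learn's IsolationForest. The transaction features, values, and thresholds are hypothetical assumptions for the sketch, not FATF guidance or a production control.

```python
# Minimal sketch: unsupervised anomaly scoring over transaction features.
# All feature names and values below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical training data: one row per historical transaction with
# features [amount, hour_of_day, payments_in_last_24h, new_beneficiary].
history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=1_000),  # typical amounts
    rng.integers(8, 20, size=1_000),                 # business hours
    rng.poisson(lam=2, size=1_000),                  # low velocity
    rng.integers(0, 2, size=1_000),                  # occasional new payees
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A pattern fixed rules might miss: large amount, 3am, very high velocity.
suspect = np.array([[50_000.0, 3, 40, 1]])
print(model.decision_function(suspect))  # lower score = more anomalous
print(model.predict(suspect))            # -1 means "flag for human review"
```

The design choice this illustrates is the one the FATF describes: rather than relying on fixed rules, the model learns what normal activity looks like and surfaces outliers for human review.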
What firms should do now
Map out where AI and deepfake risks intersect with your operations—onboarding, authentication, payment authorisation—and evaluate control effectiveness under worst-case scenarios.
Invest in or pilot advanced tools that can detect synthesised media in real time and integrate AI-driven transaction monitoring.
Enhance multi-factor onboarding protocols, including human validation steps and robust challenge-response checks to back up automated systems (a minimal sketch follows this list).
Establish clear internal AI risk ownership structures, with board-level oversight and regular independent assurance.
Educate staff - especially front-line teams - on deepfake and AI-facilitated fraud typologies, with regular simulations to maintain vigilance.
Engage with regulators, law enforcement, peers, fintechs, and vendors to share insights and collaborate on AI threat intelligence and mitigation.
Anticipate deep-dive scrutiny during AML/CFT assessments; document risk assessments, AI control architecture, and the supporting technology audit trail to demonstrate preparedness.
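To make the challenge-response point above concrete, here is a minimal Python sketch of a single-use, short-lived challenge that a live applicant can answer but a pre-recorded deepfake cannot. The phrase list, time limit, and function names are hypothetical; a real deployment would sit behind speech-to-text and case management systems.

```python
# Minimal sketch of a single-use challenge-response step backing up
# automated liveness checks. Names and parameters are illustrative.
import secrets
import time

CHALLENGE_TTL_SECONDS = 60  # hypothetical validity window
_WORDS = ["amber", "falcon", "harbour", "granite", "willow", "copper"]
_pending: dict[str, tuple[str, float]] = {}  # session_id -> (phrase, issued_at)

def issue_challenge(session_id: str) -> str:
    """Issue a random phrase the applicant must repeat on live video."""
    phrase = " ".join(secrets.choice(_WORDS) for _ in range(3))
    _pending[session_id] = (phrase, time.monotonic())
    return phrase

def verify_challenge(session_id: str, transcribed: str) -> bool:
    """Accept only a matching, unexpired, single-use response."""
    record = _pending.pop(session_id, None)  # pop enforces one-time use
    if record is None:
        return False
    phrase, issued_at = record
    if time.monotonic() - issued_at > CHALLENGE_TTL_SECONDS:
        return False  # expired: pre-recorded media cannot know a fresh phrase
    return transcribed.strip().lower() == phrase

# Usage: the transcription would come from a reviewer or speech-to-text.
session = "onboarding-123"
phrase = issue_challenge(session)
print(phrase)                              # e.g. "falcon amber willow"
print(verify_challenge(session, phrase))   # True only for a live match
```

The freshness and single-use properties are the point: a cloned voice or face can replay what it has seen before, but it cannot anticipate a phrase generated seconds earlier.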
At a glance...
Authors: Ben Cooper and Tamara Raoufi
This publication is intended for general guidance and represents our understanding of the relevant law and practice as at January 2026. Specific advice should be sought for specific cases. For more information see our terms & conditions.
Get in touch