
FATF Horizon Scan: AI & Deepfakes — Impacts on AML/CFT/CPF

TLT picks out the key points you shouldn't miss...

What’s this about?

The FATF has published its “Horizon Scan: AI and Deepfakes” report, highlighting how artificial intelligence and deepfake technologies are rapidly changing the landscape of financial crime. The report explores both the risks and opportunities for financial institutions, with a focus on AML/CFT/CPF compliance.

Our Head of Risk and Financial Crime, Ben Cooper, says...

“Criminals now have access to easy-to-use deepfake kits that can generate convincing video and voice clones - enough to fool facial recognition or biometric onboarding systems. Even unsophisticated actors can launch high-impact scams, while organised networks automate multi-stage laundering. Firms need to ramp up AI-detection, scale human review, and invest in cross-industry collaboration to stay ahead.”

The points not to miss...

Deepfake-enabled identity fraud is surging

Synthetic audio, video, or images allow criminals to convincingly impersonate individuals—defeating KYC, remote onboarding, biometric verification, and liveness checks. The FATF highlights real-time scams in which senior executives are impersonated to pressure employees into authorising payments, often with devastating consequences.

AI automates transaction laundering

Criminals now use AI to execute complex, high-volume laundering schemes—structuring transaction patterns in ways that traditional rules-based systems can't detect. Autonomous AI agents may orchestrate these flows without human supervision.

Detection lags behind creation

Fraud detection tech hasn’t kept pace with generative AI. Deepfakes can pass through liveness and biometric checks and only trigger alarms later—creating a dangerous window for fund diversion.

Cross-border vulnerabilities magnified

Deepfake onboarding and cross-border transactions often route through jurisdictions with inconsistent AML regulations or weaker digital ID frameworks, making tracing and enforcement more difficult.

Regtech: a crucial defence mechanism

The FATF emphasises that AI can also be harnessed for compliance. This includes deploying anomaly detection systems, biometric verification, deepfake detection, and automated screening—turning AI into part of the solution.

Strengthening governance & controls

FATF signals that supervisors will intensify scrutiny of AI-specific AML/CFT controls. Firms should create structured AI risk governance frameworks—defining ownership, risk appetite, monitoring and assurance processes.

Public–private information sharing

Effective defence requires collaboration: FATF recommends mechanisms for sharing intelligence on emerging AI-enabled threats, enabling institutions and authorities to respond quickly.

What firms should do now

Conduct an AI deepfake risk assessment

Map out where AI-deepfake risk intersects with your operations—onboarding, authentication, payment authorisation—and evaluate control effectiveness under worst-case scenarios.

Upgrade your tech stack

Invest in or pilot advanced tools that can detect synthesised media in real time and integrate AI-driven transaction monitoring.

Fortify onboarding & verification

Enhance multi-factor onboarding protocols, including human validation steps and robust challenge-response checks to back up automated systems.

Build AI risk governance

Establish clear internal AI risk ownership structures, with board-level oversight and regular independent assurance.

Enhance training & awareness

Educate staff - especially front-line teams - on deepfake and AI-facilitated fraud typologies, with regular simulations to maintain vigilance.

Form public-private partnerships

Engage with regulators, law enforcement, peers, fintechs, and vendors to share insights and seek synergies around AI threat intelligence and mitigation.

Prepare for regulatory exams

Anticipate deep-dive scrutiny during AML/CFT assessments; document your risk assessments, AI control architecture, and technology audit trail to demonstrate preparedness.

At a glance...

Publication link: "Horizon Scan: AI and Deepfakes"
Published date: 22 December 2025
Who has published it? Financial Action Task Force (FATF)
Publication type: Horizon scan on emerging AML/CFT/CPF risks and recommendations
Any key dates? N/A
What's it relevant to? FS sector. AML, CFT, CPF, AI, deepfakes, KYC, regtech, payment fraud, digital ID

Authors: Ben Cooper and Tamara Raoufi

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at January 2026. Specific advice should be sought for specific cases. For more information see our terms & conditions.

Date published: 05 Jan 2026
