
The Bank of England and PRA set out plans for safe AI innovation: What firms need to know
TLT picks out the key points you shouldn't miss...
What's this about?
On 1 April 2026, Sarah Breeden, Deputy Governor for Financial Stability at the Bank of England (the BoE), and Sam Woods, Deputy Governor and CEO of the PRA, wrote to the Chancellor of the Exchequer and the Secretaries of State for Science, Innovation and Technology, and for Business and Trade, setting out the BoE and PRA's plans to enable safe AI innovation in financial services. The letter responds to a Government request dated 28 January 2026 that the BoE and PRA publish a plan for enabling safe AI innovation and commit to annual reporting on how their regulatory approach enables AI-driven innovation and growth.
Our Legal Director, Tamara Raoufi, says...
"The message from the BoE and PRA is straightforward: existing rules apply to AI, and firms should be ready to demonstrate compliance. With AI now a named supervisory priority, that conversation with regulators is coming. Firms should be reviewing their AI governance frameworks now - and ensuring they can demonstrate robust oversight of AI-driven decision-making across their operations."
In summary…
The BoE and PRA want to create an environment in which responsible AI adoption can contribute to financial sector innovation, competition, competitiveness, and growth, whilst safeguarding the integrity of the financial system. They are keeping under review their technology-agnostic approach to regulation and whether further action or guardrails might be needed, including in light of how rapidly the technology, its use cases, and adoption levels are advancing.
This builds on DP5/22, the BoE and PRA's 2022 joint Discussion Paper with the FCA, in which respondents broadly found no regulatory barriers to safe AI adoption under existing rules. In 2023, the PRA issued its Model Risk Management Principles for banks (SS1/23), which are technology-agnostic in approach but deliberately designed to capture factors relevant to AI models.
The letter sits within a broader policy context in which the Government - informed by the 2025 AI Opportunities Action Plan led by Matt Clifford CBE - has signalled its ambition for the UK to become a global leader in safe and innovative AI adoption, with financial services identified as one of the sectors with the greatest potential to benefit.
The points not to miss...
The PRA has highlighted AI adoption in its 2026 supervisory priorities, meaning it will be a key topic in supervisory dialogues with firms. Firms should expect direct questions on governance, model risk management, and oversight frameworks.
The BoE and PRA will continue to apply their technology-agnostic, outcomes-focused regulatory framework to AI. Most participants in the PRA's recent industry roundtables did not yet see the need for detailed AI-specific regulatory guidance or rules, and most saw no case for a BoE or PRA AI sandbox at this time - the FCA's Supercharged Sandbox and AI Live Testing initiatives were seen as providing sufficient testing infrastructure for firms looking to innovate with AI in a regulatory context. The position remains under active review, however. Firms seeking to test AI innovations should engage with these existing FCA mechanisms.
The AI Consortium, established in May 2025, is examining concentration risks including from third-party model providers as a primary workstream. This mirrors questions raised by the FCA in its February 2026 Mills Review about the UK retail financial services market’s dependency on a small number of dominant AI infrastructure providers. Firms with significant reliance on third-party AI models should review their outsourcing and third-party risk management frameworks accordingly.
The AI Consortium is specifically examining explainability and transparency in generative AI, the evolution of AI "edge cases" as adoption moves into areas more relevant to financial stability such as credit risk assessment and trading, and AI-accelerated contagion in financial markets. Firms using generative AI in regulated activities should ensure they can articulate how outputs are generated and validated and should monitor the AI Consortium's forthcoming report for emerging expectations in these areas.
The rise of agentic AI is already a topic of discussion within the AI Consortium. No expectations have been set yet, but supervisory attention is likely to follow.
The Financial Stability Board (FSB), which reports to the G20 and is chaired by BoE Governor Andrew Bailey, is prioritising work with international standard-setters on sound practices for AI adoption, use and innovation by financial institutions. The PRA co-chairs the International Association of Insurance Supervisors' (IAIS) AI workstreams, and the BoE is working with the G7 on managing AI-related cybersecurity risks.
The BoE's work will focus on developing a better understanding of the potential financial stability risks involving frontier AI agents. It will also continue to support the AI taskforce of the Cross Market Operational Resilience Group (CMORG), whose AI Baseline Guidance Review provides guidance to firms on government and regulatory approaches, risk management principles and frameworks, technical implementation, third party and legal considerations, and education and awareness.
The PRA intends to deliver the committed annual reporting through its annual Business Plan and Annual Report. This creates a new accountability mechanism that firms and stakeholders can use to track regulatory developments and assess the PRA's progress in supporting responsible AI adoption over time.
Whilst this letter sets out the BoE and PRA's plans, dual-regulated firms should remain aware of the FCA's own AI workstreams, including the joint AI Consortium, the Supercharged Sandbox, and its broader data and technology strategy. Such firms should ensure they are tracking developments from both regulators and consider how their AI governance frameworks address the expectations of each.
What should firms do next?
In light of the BoE and PRA's letter, firms should consider taking the following steps now:
- review and, where necessary, strengthen AI governance frameworks to ensure they clearly articulate how AI is being used, overseen, and validated across operations;
- assess concentration risk arising from reliance on third-party AI model providers and ensure that outsourcing and third-party risk management frameworks adequately address this exposure - firms should be able to articulate their reliance on third-party models and the controls around them; and
- monitor the AI Consortium's forthcoming report and the PRA's 2026 supervisory engagement programme - together, these may provide one of the clearest indicators yet of where regulatory thinking on AI risk is heading.
Authors: Tamara Raoufi, Hannah Stanley and Ailbhe Redding
This publication is intended for general guidance and represents our understanding of the relevant law and practice as at April 2026. Specific advice should be sought for specific cases. For more information see our terms & conditions.