Following the Government’s February 2024 response to its AI Regulation White Paper: A Pro-innovation Approach, the FCA published its AI Update (Update) on 22 April 2024.

There has been some speculation over recent weeks about whether the UK Government may be planning to adjust its current principles-based approach to the regulation of AI, particularly now that the EU, following an alternative approach, has passed the world’s first comprehensive AI Act. While the Government acknowledges that legislation will be necessary at some point, sector regulators are currently tasked with regulating AI, so that bespoke measures can be tailored to the risks and requirements of each sector.

In its Update, the FCA states that it welcomes the Government’s approach as it is a “technology-agnostic, principles-based and outcomes-focused regulator”. The FCA is focused on how firms can safely and responsibly adopt AI technology as well as understanding what impact AI innovations are having on consumers and markets.

The Government’s AI White Paper outlined five cross-sectoral principles for the UK’s existing regulators to interpret and apply within their remits (with additional Initial Guidance for Regulators published in February 2024). In this article, we look at how the FCA maps its existing regulatory framework to each of these principles, and what this means for firms.

Safety, security and robustness

AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, addressed and managed.

The Update points out that there is already a range of rules and guidance relevant to a firm’s safe, secure and robust use of AI systems in the delivery of financial services. For example, the FCA’s Principles for Businesses require firms to conduct their business with due skill, care and diligence (Principle 2) and take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems (Principle 3). The FCA’s Threshold Conditions are also relevant, such as the requirement that a firm’s business model must be suitable.

There are also more specific rules and guidance relating to systems and controls under the Senior Management Arrangements, Systems and Controls (SYSC) sourcebook, including requirements for relevant firms to have sound security mechanisms in place relating to data and business continuity.

The FCA also notes that its work on operational resilience, outsourcing and critical third parties (CTPs) is of particular relevance to this principle. The requirements under SYSC 15A (Operational Resilience) aim to ensure relevant firms are able to respond to, recover and learn from, and prevent future operational disruptions; this would include a firm’s use of AI where it supports an Important Business Service (IBS). In relation to CTPs, the Bank of England, PRA and FCA are currently assessing their approach to operational resilience; while not specific to AI, the concept of the services a CTP provides is broad enough to encompass considerations around the systemic use of a common AI model (e.g. data bias, model robustness).

In short, there is already an abundance of regulation and guidance that the FCA will apply when considering the use of AI by regulated entities. Firms would be well advised to keep this at the front of their minds when deciding whether to implement a new AI solution.

Fairness

AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system’s use, outcomes and the application of relevant law.

The FCA points out that its regulatory approach to consumer protection is particularly relevant to the fair use of AI systems by firms. That approach is based on a combination of the FCA’s Principles for Businesses and other rules and guidance, including the Consumer Duty.

The Consumer Duty requires firms to play a greater and more proactive role in delivering good outcomes for retail customers (including, in some circumstances, those who are not direct clients of the firm). Firms are required to act in good faith, avoid causing foreseeable harm, and enable and support retail customers to pursue their financial objectives. Products and services must be designed to meet the needs of their target customers and provide fair value.

Although AI can provide opportunities (such as chatbots that help customers understand products or services), the FCA acknowledges the risks and highlights the need for firms to consider their obligations under the Consumer Duty. It gives the example of using AI in risk assessments, where greater accuracy may benefit some customers but exclude others from the market.

Firms should also take account of the Guidance on the fair treatment of vulnerable customers, the consumer protection requirements in the FCA Handbook and the obligations under data protection legislation. The FCA points to the ICO’s Guidance on AI and Data Protection for clarification on how the law should be applied in the context of AI systems, in particular in relation to the requirement that all processing of personal data must be fair and not lead to unfair outcomes.

Appropriate transparency and explainability

AI systems should be appropriately transparent and explainable.

The FCA acknowledges that its regulatory framework does not specifically address the transparency or explainability of AI systems; however, it points again to its approach to consumer protection and the cross-cutting obligation under the Consumer Duty to act in good faith. Where the Consumer Duty does not apply, Principle 7 of the Principles for Businesses requires firms to pay due regard to the information needs of clients and communicate with them in a way that is clear, fair and not misleading. The UK GDPR also requires data controllers to provide data subjects with certain information about their processing activities, including the existence of automated decision-making and profiling.

Accountability and governance

Governance measures should be put in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle.

Alongside a range of high-level rules, the FCA states in its Update that the SYSC sourcebook contains a number of specific provisions on systems and controls and firms’ governance processes and accountability arrangements. In particular, SYSC 4.1.1R requires firms to have robust governance arrangements, which include a clear organisational structure, effective processes to identify and manage risks, and effective control and safeguard arrangements for information processing systems.

Additionally, the Senior Managers and Certification Regime (SM&CR) is relevant to the safe and responsible use of AI. Although governance structures can vary, all firms must ensure that one or more of their Senior Management Function (SMF) managers have overall responsibility for each of the activities, business areas and management functions of the firm. This means that any use of AI in relation to an activity, business area or management function of a firm would fall within the scope of an SMF manager’s responsibilities.

This is a particularly important point to bear in mind. SMF managers should take active steps to familiarise themselves with both the FCA’s stated approach to regulating the use of AI and the wider legal and technological framework within which AI tools are being developed and deployed. This is no small undertaking: firms should make sufficient time and resources available to enable SMF managers to discharge these obligations.

The FCA reminds firms that the first annual report on the implementation of the Consumer Duty is due on 31 July 2024. It suggests that a firm’s board might include consideration of current or future uses of AI technologies where they might affect retail customer outcomes or assist in monitoring and evaluating those outcomes.

Contestability and redress

Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.

Where a firm’s use of AI results in a breach of the FCA’s rules (e.g. because an AI system produces decisions or outcomes that cause consumer harm), the FCA states that there is a range of mechanisms through which firms can be held accountable. If consumers are dissatisfied with the results of a firm’s internal investigation, they can refer the matter free of charge to the Financial Ombudsman Service for an independent review, which can award redress in appropriate cases.

FCA’s plans for the next 12 months

In the final section of the Update, the FCA sets out its action plan for the next 12 months. This includes continuing to deepen its understanding of AI deployments in UK financial markets and actively considering whether future regulatory adaptations are needed.

The FCA will continue to collaborate closely with the Bank of England, the PSR and other regulators through its membership of the Digital Regulation Cooperation Forum (DRCF). It will also prioritise international engagement on AI, given recent developments such as the AI Safety Summit and the G7 Ministerial Declaration.

Recognising that greater use of AI may create benefits as well as risks, the FCA will continue its work with DRCF member regulators to deliver the pilot AI and Digital Hub. It is also assessing opportunities to trial new types of regulatory engagement and exploring changes to its innovation services that could support the testing of the design, governance and impact of AI technologies in UK financial markets, for example within an AI sandbox.

As for its own use of AI, the FCA plans to invest more in AI technologies to proactively monitor markets, including for market surveillance purposes. Looking ahead, the FCA is also working to understand emerging technologies and their potential impact: in 2024–25, the DRCF’s horizon scanning and emerging technologies workstream will conduct research on deepfakes and simulated content.

What does this mean for firms?

As the adoption of AI increases at an unprecedented rate, the FCA’s Update provides some welcome guidance on how existing rules will apply to the safe and responsible use of AI technology. Firms should familiarise themselves with strategic guidance issued by the FCA and by other regulators relevant to financial institutions, such as the Bank of England and the PRA (see letter dated 22 April 2024), the ICO and the CMA (see strategic update dated 29 April 2024).

In view of international developments and the upcoming general election, it is unclear whether the UK will continue to task sector regulators with regulating AI going forward, or whether we will see legislation that aligns more closely with the EU’s approach.

For the time being at least, firms should focus on identifying all of the AI systems used within their business and consider which of the rules highlighted in the FCA’s Update are triggered by the use of those systems. They should then assess what resources, strategic changes or processes are required to ensure ongoing compliance with these obligations.

The Bank of England recently published an interesting speech by Jonathan Hall, a member of its Financial Policy Committee, reflecting on how developments in AI could affect financial stability. Although the speech focuses on deep trading algorithms, many of the emerging risks it discusses, and the actions to be taken, are relevant to the wider use of AI. In particular, it emphasises the importance of ongoing monitoring and stress testing after the initial training and testing stages: managers will need to monitor outputs constantly for signs of harmful behaviour and for divergence from regulatory requirements, to ensure that the rules are not forgotten.

The pace of legislative and regulatory change regarding the use of AI in the UK (and worldwide) shows no sign of slowing as we head into the second half of 2024. Those firms that familiarise themselves with the current legal landscape now will be best placed to harness the benefits of AI, in a legally compliant and ethically sound way, as the UK’s approach starts to crystallise.

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at May 2024. Specific advice should be sought for specific cases. For more information, see our terms & conditions.

Date published: 20 May 2024
