TLT's AI Brief: May 2026

Catch up with the latest in our monthly newsletter

Welcome to May's edition of TLT's AI Brief, bringing you updates on all things AI over the last month. As ever, it has been a busy period across law, policy and the wider AI landscape.

This edition covers:

  1. Developments in Law and Policy – The CMA continues to sharpen its focus on AI, with new investigations into algorithmic pricing and fresh guidance on agentic AI and consumer law. In Europe, negotiations to amend the EU AI Act have stalled, meaning the August 2026 deadlines for high-risk AI systems remain in force. In the US, the Trump Administration has set out a National Policy Framework for AI, pushing for a unified federal approach — a significant shift in the regulatory landscape.
  2. AI in the News – Microsoft and OpenAI have restructured their partnership, moving to a non-exclusive arrangement that opens up greater commercial competition in the cloud AI market. In the UK, Lloyds Banking Group has become the first FTSE-listed blue-chip company to deploy an AI tool in its boardroom, a sign of how quickly AI is moving into senior decision-making.
  3. AI for Good – A London-based team has won the £1 million Longitude Prize on Dementia, with an AI-powered smart glasses assistant designed to help people living with dementia navigate daily life independently.
  4. Key Dates and Events – A roundup of upcoming AI conferences and summits to have on your radar.

1. Developments in law and policy

United Kingdom

CMA Signals Increased Scrutiny of Algorithmic Pricing and AI-Driven Collusion

The CMA has sent a clear signal that it is actively monitoring whether algorithmic pricing tools and agentic AI are being used to facilitate anti-competitive behaviour. On 2 March 2026, it announced an investigation into Hilton, IHG and Marriott over allegations that the hotel chains shared competitively sensitive pricing data through a third-party platform operated by CoStar. CoStar is also now under investigation as the platform operator, confirming that tool providers, not just their users, can face liability.

On 4 March 2026, the CMA published a blog on 'AI and collusion: frontiers, opportunities and challenges', acknowledging the benefits agentic AI can bring to competition, whilst also warning that the same capabilities can reduce the uncertainty rivals would ordinarily have about each other's pricing and strategy. The CMA's specific concern is "hub and spoke" collusion, where a shared platform acts as a channel for the exchange of competitively sensitive information between competitors.

The CMA expects businesses to train staff on competition law as it applies to AI, ensure that competitors' data is not pooled within shared platforms, and stress-test AI tools for collusion risks. It also flagged its own growing AI capabilities – a reminder that the regulator is deploying the same technology to identify infringements.

Read more here.

CMA Publishes Guidance on Consumer Law and Agentic AI

The CMA has published practical guidance and an accompanying policy paper on the consumer protection obligations that apply when businesses deploy AI agents. The core message is straightforward: the same consumer law rules apply whether a customer interacts with a human or an AI agent, and businesses remain fully legally responsible for what their AI agents say and do, even where the technology is supplied by a third party. Delegating decisions to software does not delegate legal accountability.

The CMA identifies key areas of focus, including transparency about AI use, fair treatment of consumers, avoiding manipulative design, and maintaining meaningful human oversight.

The guidance encourages businesses to embed consumer protection principles into the full lifecycle of agentic AI, from design and training through to monitoring and remediation. In practical terms, businesses should map their current and planned AI agent deployments against consumer journey risks, be transparent with customers about AI use, and ensure clear escalation and oversight processes are in place.

Read more here.

FCA Raises Concerns Over Unregulated AI-Driven Financial Guidance

The FCA's latest perimeter report, published on 26 March 2026, has drawn attention to the rapid rise of general-purpose AI tools, such as AI-powered personal finance chatbots, that are increasingly offering financial advice or recommendations to consumers without sitting squarely within the existing regulatory framework.

The FCA has flagged that current perimeter boundaries may no longer be fit for purpose if these unregulated services begin to cause consumer harm, and has called on the government to consider updating those boundaries accordingly.

Read more here.

House of Commons Business and Trade Committee Launches Inquiry into AI in the Workplace

The House of Commons Business and Trade Committee has launched an inquiry to examine the opportunities and risks of AI adoption in UK workplaces and to assess whether existing worker protections remain fit for purpose. The inquiry follows rapid acceleration in the deployment of generative and agentic AI across recruitment, performance management, and day-to-day decision-making.

While the timeline for any legislative or regulatory response is not yet defined, the inquiry reflects a broader shift in the UK's approach: the government's light-touch stance on AI regulation is coming under challenge. Regulators including the ICO and the Equality and Human Rights Commission are increasingly concerned about AI systems that discriminate or operate without sufficient transparency, and there is growing public unease about the use of AI in employment decisions.

In a closely related development, the ICO has published a report on automated decision-making in recruitment, recommending that employers ensure more meaningful human involvement in AI-assisted processes. The ICO's accompanying consultation runs until 29 May 2026.

Read more here and here.

UK Government Sets Out AI Sovereignty and Hardware Ambitions

On 28 April 2026, Technology Secretary Liz Kendall gave a keynote speech at the Royal United Services Institute, setting out the Government's view that AI is now central to the UK's economic prosperity and national security. The Government also announced plans to develop a UK AI hardware plan to support domestic capability in chips and semiconductor technologies underpinning the AI hardware stack.

This reflects a wider push for the UK to take greater control of its own AI future, not by cutting itself off from global partners, but by reducing its reliance on others in areas that matter most. The Government's aim is for the UK to become a "keystone" in the global AI architecture, focusing on areas where the UK can build real leverage.

This sits alongside reports that UK technology ministers are concerned that closer alignment with EU AI regulation could limit the UK's flexibility and potentially affect future investment.

Read more here.

DRCF Publishes Paper on the Future of Agentic AI

The Digital Regulation Cooperation Forum, which brings together the ICO, CMA, FCA and Ofcom, has published a paper on 'The Future of Agentic AI'. The paper is not a statement of regulatory policy, but is intended to encourage discussion about how UK regulators should approach the opportunities and risks presented by agentic AI.

The paper sets out early thinking across four broad categories: governance, data protection and cybersecurity, consumer rights and interests, and market dynamics and competition. The key message is that agentic AI does not sit outside existing legal frameworks. Existing obligations around transparency, fairness, accountability, safety, consumer protection and competition law will continue to apply.

The DRCF also indicates that, during 2026/27, the regulators will carry out further horizon-scanning work on the future of user interfaces, consumer robotics and physical AI, and the likely impact of emerging technologies on everyday consumer experiences.

Read more here.

House of Commons Launches Inquiry into Low-Energy Computing

The House of Commons Science, Innovation and Technology Committee has launched an inquiry into whether low-energy computing could help address the rising energy demands associated with AI. The inquiry will consider the scale of the challenge, the role of emerging technologies, and what more the Government could do to support research and development in this area.

A particular focus is neuromorphic photonics, an emerging field that combines silicon photonics with neuromorphic computing principles. This has been identified as a possible route to reducing the energy intensity of AI workloads. Written evidence can be submitted until 14 May 2026.

Read more here.

Europe

EU AI Act – Digital Omnibus: From Negotiation to Stalemate

In November 2025, the European Commission published its Digital Omnibus Package – a broad set of proposed reforms to EU data and cybersecurity laws. As certain deadlines under the EU AI Act are due to take effect in August 2026, the AI-related amendments were fast-tracked and separated out from the rest of the package. Both the European Council and the European Parliament adopted their respective negotiating positions, and formal three-way negotiations between the Council, Parliament and Commission got under way.

However, in April 2026, those negotiations stalled without agreement, primarily over whether AI systems embedded in products already regulated under EU sector-specific legislation (such as medical devices) should remain subject to the AI Act.

As no agreement has yet been reached, the proposed amendments have not been adopted and the existing AI Act timelines remain in force. In particular, obligations relating to high-risk AI systems are still due to begin applying from August 2026 unless and until legislative amendments are formally agreed and adopted.

Read more here and here.

European Commission Consults on Measuring AI Energy Use

The European Commission has launched a consultation as part of a wider study on measuring and promoting energy-efficient, low-emission AI in the EU. The responses will help inform a framework for meeting the energy-related objectives of the EU AI Act.

The EU AI Act already requires providers of general-purpose AI models to record known or estimated energy consumption as part of their technical documentation. The consultation reflects growing regulatory attention on the environmental impact of AI, particularly the compute and energy demands associated with training and running advanced models.

Read more here.

United States

White House Sets Out Vision for a Unified Federal Approach to AI Regulation

On 20 March 2026, the Trump Administration released a comprehensive National Policy Framework for AI, calling on Congress to establish a unified federal approach to AI legislation that would override the current patchwork of conflicting state laws. The Administration framed the Framework around six key objectives:

  • Protecting Children and Empowering Parents
  • Safeguarding and Strengthening American Communities
  • Respecting Intellectual Property Rights and Supporting Creators
  • Preventing Censorship and Protecting Free Speech
  • Enabling Innovation and Ensuring American AI Dominance
  • Educating Americans and Developing an AI-Ready Workforce

The Administration has stated it will work with Congress in the coming months to translate the Framework into legislation.

Read more here.

US Treasury Summons Bank Chiefs Over Cybersecurity Risks from Anthropic's Claude Mythos

US Treasury Secretary Scott Bessent called a meeting of major American bank chiefs (including the CEOs of Goldman Sachs, Bank of America, Citigroup, Morgan Stanley and Wells Fargo) to discuss cybersecurity risks posed by Anthropic's latest AI model, Claude Mythos.

The meeting followed Anthropic's disclosure that Mythos had identified thousands of previously unknown software vulnerabilities, some up to 27 years old, prompting the company to restrict access to the model to a small group of businesses including Amazon, Apple and Microsoft. Anthropic has warned that AI models have now surpassed "all but the most skilled humans at finding and exploiting software vulnerabilities", with potentially severe consequences for economies, public safety and national security.

Read more here.

Pentagon Signs AI Agreements for Classified Military Networks

The Pentagon has entered into agreements with several major AI and technology companies, including SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft and Amazon Web Services, to deploy AI capabilities on classified US defence networks.

The development is significant because it shows frontier AI moving further into defence and national security. It also raises familiar questions around procurement controls, guardrails, cybersecurity, accountability and the role of private technology providers in sensitive state functions.

Read more here.

Global

China – AI Regulation Update

On 10 April 2026, the Cyberspace Administration of China (CAC), alongside four other central government authorities, jointly issued the "Interim Measures for the Administration of Artificial Intelligence Anthropomorphic Interaction Services", establishing a dedicated compliance regime for AI-powered virtual companions, chatbots and emotionally interactive digital services, with effect from 15 July 2026.

The measures require service providers to implement robust governance controls over AI-generated content, data security, algorithm transparency, and user safety, including clear disclosure of the artificial nature of AI interactions and mechanisms to prevent psychological overdependence.

Read more here.

2. AI in the news

Meta Releases Llama 4

Meta released Llama 4 on 5 April. Meta describes Llama 4 as a multimodal AI system capable of processing and integrating various types of data, including text, video, images and audio, and of converting content across these formats.

Meta plans to spend as much as $65 billion this year to expand its AI infrastructure, amid investor pressure on big tech firms to show returns on their investments.

Read more here.

DeepSeek – Continued Scrutiny

Following the high-profile emergence of DeepSeek's R1 model in late January 2026, scrutiny of the model's data practices and potential national security implications has continued. Several jurisdictions, including Italy and the UK, have taken steps to review or restrict use of the model on government and public sector devices. Additionally, a US Commission has warned that China's use of open-source AI threatens the US lead in AI development.

Read more here.

Lloyds Bank to Use AI Tool in Board Meetings

Lloyds Banking Group has become the first UK-listed blue-chip company to deploy a specialist AI "board bot" in its boardroom. The tool, supplied by advisory firm Board Intelligence, is being used by senior executives and directors to review confidential material, prepare for meetings and check for bias in decision-making.

The system has been trained across areas including cybersecurity, sustainability, financial analysis and M&A. Lloyds is currently using it primarily for meeting preparation, though its developer envisages a future in which directors interact with it in real time during discussions.

Read more here.

Microsoft and OpenAI Amend Exclusivity Arrangements

Microsoft and OpenAI have announced amended partnership terms under which Microsoft's licence to OpenAI's IP is now non-exclusive and OpenAI can run its products on any cloud provider (although Azure remains its primary cloud partner).

This restructuring is significant because, until now, the Microsoft/OpenAI partnership has been a defining feature of the generative AI landscape, and one that gave Microsoft a privileged position in how OpenAI's technology was accessed and deployed. By moving to a non-exclusive arrangement, OpenAI gains greater commercial freedom and the market opens up to broader cloud competition.

Read more here.

Google Cloud Growth Highlights Enterprise AI Demand

Alphabet reported a significant acceleration in Google Cloud growth in the first quarter of 2026, with year-on-year revenue increasing by approximately 63%, driven in large part by rising enterprise demand for AI-enabled cloud services.

For businesses procuring AI tools, the key takeaway is that AI adoption is increasingly intertwined with broader decisions about cloud infrastructure and data governance – decisions that can be difficult and costly to reverse.

Read more here.

OpenAI Launches Life Sciences Model

OpenAI has launched GPT-Rosalind, a model designed to support life sciences research, including biochemistry, drug discovery and medicine development. The model is intended to assist with tasks such as reviewing evidence, hypothesis generation and planning experiments.

The launch is part of a broader trend of AI tools being developed for specific industries where accuracy and specialist knowledge matter most. It also highlights the growing role AI is playing in pharmaceutical and scientific research.

Read more here.

3. AI for good

AI-Powered Smart Glasses Win £1 Million Prize in Dementia Care Breakthrough

A London-based team has won the £1 million Longitude Prize on Dementia for CrossSense, an AI-powered assistant built into smart glasses that helps people living with dementia navigate daily tasks independently. At its core is Wispy, a conversational AI companion that recognises objects in real time and adapts to individual habits, guiding users through everything from making tea to hosting guests. The system has already shown promising results in improving memory, object recognition and confidence in daily routines, and the team plans to bring the technology to market by 2027.

Read more here.

AI Shows Promise in Emergency Diagnosis

A Harvard-led study has found that an AI reasoning model performed strongly in text-based emergency room diagnostic scenarios, outperforming physicians in identifying diagnoses across a set of complex cases. The Guardian reported that the model identified correct diagnoses in 67% of cases, compared with 50–55% for doctors, with the model's accuracy rising to 82% when it was given more detailed information.

The results are promising, but the study also highlights the need for caution. The tests were text-based and did not assess the ability to interpret visual symptoms or other human cues. For now, the more realistic use case appears to be AI as clinical decision support rather than a replacement for clinicians.

Read more here.

4. Key dates and events

3 June 2026 - AI in Motion: Addressing bias in AI, at TLT's London office

8–12 June 2026 - London Tech Week

10–11 June 2026 - The AI Summit Series: The AI Summit London

29–30 June 2026 - Reuters: Momentum AI London

7–10 July 2026 - UN: AI for Good Global Summit (Geneva)

23–24 September 2026 - Big Data London

We hope you find this edition useful, and do get in touch if you want to hear more about any of the updates we’ve covered.

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at May 2026. Specific advice should be sought for specific cases. For more information see our terms & conditions.

Date published: 08 May 2026
