As generative AI shifts from experimental use to full-scale deployment, financial services firms are at a crossroads. The choices they make now will define their future — from managing risk to unlocking new opportunities. We spoke to Gareth Oldale, Partner and Head of Data, Privacy & Cybersecurity at TLT, to uncover the five strategic moves that will shape success in the rapidly evolving generative AI space.
The biggest shift is the speed at which generative AI has moved from concept to execution. In just one year, financial services firms have gone from experimenting with gen AI to integrating it across multiple operations — from customer service and chatbots to internal reporting and compliance. This rapid adoption means AI is now embedded in deeply regulated, data-intensive environments, making governance and risk management more complex than ever.
Firms must act now because the window for mitigating risks is rapidly closing. Regulatory frameworks are catching up, but they’re not yet fully in place — and firms must get ahead of potential compliance issues before they become blockers. Clients are increasingly curious about AI’s potential, but also more cautious. Those who establish strong governance, address risks proactively, and deploy AI in a controlled, strategic way will secure a competitive advantage and be better positioned for the future.
Generative AI offers significant upsides, but it comes with real risks that must be carefully managed. Data security and privacy remain top concerns. Firms must ask: Are we protecting sensitive client data when feeding it into AI models? Intellectual property (IP) and liability are also critical — who owns the outputs of generative AI, and what happens when an AI model makes a mistake, particularly in areas like financial advice?
Yet the opportunities are just as compelling. AI can drive major productivity gains by automating repetitive tasks and improving operational efficiency. It can also enable hyper-personalised services, helping firms reach more clients with tailored offerings. Additionally, AI can enhance compliance and risk mitigation by automating monitoring and reporting processes, making it easier to navigate complex regulations.
The firms that succeed will be those that embrace AI’s potential while managing its risks through strong governance, ethical oversight, and a clear, strategic roadmap for scaling AI across their operations.
Responses vary widely. A minority of firms are already well advanced in their AI journey, with dedicated taskforces, cross-functional governance teams, and live deployments across departments. Others are still in the early stages, exploring where to begin.
A common theme is the need for clarity. Firms are asking: Who owns AI policy? What role should the board play in overseeing AI governance? Which part of the legal function should advise on AI matters?
The most proactive firms are taking a cross-functional approach. Legal, IT, risk, operations, and innovation teams are collaborating to design governance frameworks that are flexible and adaptable. This is essential — AI doesn’t fit neatly into one department. It requires a holistic, integrated strategy that aligns with the firm’s broader business goals.
Firms that can build clear, transparent, and adaptable governance frameworks will be best positioned to scale AI responsibly and effectively.
As generative AI becomes more deeply embedded in financial services, firms must take a strategic approach to navigating a fast-evolving legal and regulatory landscape. This isn’t just about compliance — it’s about enabling innovation while managing risk.
Key areas of focus include data privacy (such as GDPR and global equivalents), intellectual property rights, and financial regulation. Firms must ensure that AI models are trained and deployed in ways that respect client confidentiality, protect proprietary data, and align with contractual obligations. At the same time, ethical considerations — including fairness, transparency, and explainability — must be built into AI systems from the outset to maintain trust with clients, regulators, and the wider market.
As laws like the EU AI Act come into force, firms that stay ahead of these developments will not only mitigate risk but also position themselves as leaders in responsible AI. Strategic foresight and ethical alignment will be key to long-term success.
Generative AI is not a one-off project — it’s a long-term business transformation. To future-proof, firms must build scalable, adaptable governance frameworks and continuously evolve with the technology.
This means having flexible contracts that can adapt as AI capabilities grow, training teams to identify and manage emerging risks, and ensuring leadership alignment across the business. AI is not just an IT issue — it’s a strategic priority that must be embedded into the firm’s core operations.
At TLT, we work with clients as strategic partners, helping them navigate the legal, regulatory, and ethical complexities of AI. The firms that will lead in this space are those that move quickly, think strategically, and stay one step ahead of change.
Want to lead the AI conversation?
Download our AI in Financial Services Legal Playbook
This publication is intended for general guidance and represents our understanding of the relevant law and practice as at May 2025. Specific advice should be sought for specific cases. For more information see our terms & conditions.
Date published
30 May 2025