Generative artificial intelligence (GenAI) is the "new black" and has been a hot topic on the market since ChatGPT rocketed into the public consciousness in November 2022.

Businesses across all sectors, from financial services, marketing and computer programming to the public sector, education and healthcare, want to get their hands on it in one way or another.

GenAI has come a long way since ChatGPT led the way, and now almost every few weeks a new tool, regulation or technological enhancement emerges. A survey carried out by McKinsey in 2022 on the adoption of AI by organisations found that AI adoption has more than doubled since 2017, and nowadays in some industries the use of GenAI is almost expected.

Acting as a co-pilot for hyper-automation and hyper-creation, GenAI allows businesses to boost their productivity and efficiency, and unsurprisingly many businesses are considering ways of introducing GenAI into their customer-facing functionalities (such as AI chatbots). The outputs produced by GenAI can look very convincing; however, businesses should be mindful of its shortfalls, especially in a consumer-facing context. Outputs can cite inaccurate or invented sources (so-called "hallucinations"), can be completely wrong, or can be biased. As AI disputes begin to emerge, it is important for businesses to be mindful of the risks to consumers and of their own potential areas of liability where GenAI interacts with individuals.

Below are some of the key risks businesses face when looking to deploy GenAI in consumer-facing products and services:

Consumer law

There is a risk of liability under consumer law and regulation. Although a Canadian decision, Moffatt v Air Canada upheld the well-established principle that businesses are responsible for the acts and omissions of the computer systems they choose to use, and for misrepresentations they make to the public, whether made by a human or by a GenAI chatbot. Although this is yet to be tested under UK law, the UK courts may well take a similar approach and find the business deploying GenAI responsible if anything goes wrong. There are also regulatory considerations. The UK's national consumer regulator, the Competition and Markets Authority, has made it clear that it is monitoring the impact of AI tools (including chatbots) on consumers and is prepared to use its new powers under the Digital Markets, Competition and Consumers (DMCC) Act to impose penalties of up to 10% of global turnover to drive home this message.

Civil claims

There is potential for claims of negligent misstatement or defamation. Businesses should be cognisant of their potential liability if a GenAI output purports to make claims on behalf of the business – as Moffatt v Air Canada demonstrated, businesses could be held to those claims if they do not approach GenAI implementation with caution.

Data Protection

Despite efforts to ensure that personal data is not processed by GenAI, there is often no way to reduce the risk of personal data being impacted to zero, particularly when customers are responsible for data input. This could lead to liability under data protection legislation, for example, if inaccurate or incomplete personal data is inadvertently processed, or if personal data is accidentally sent to or accessed by the developer without the right protections in place.

Cybersecurity risks

There are various security risks to consider, and in particular the risks of the GenAI being hacked via any of the different layers of the tool, which could lead to personal or confidential data being compromised and result in significant reputational risk for the business.

Discrimination and bias

Businesses often do not have control over the data used to train GenAI tools, or over how the tools are tested. If the training data introduces bias, or testing is not sufficiently robust, the resulting discriminatory outputs could give rise to claims under the Equality Act 2010, not to mention reputational damage.

The existence of risk does not need to stifle innovation or hinder GenAI implementation, but it is important to address and mitigate these risks, including by considering the following:

  • A robust and clearly visible disclaimer will be a very important part of the mitigation strategy for consumer-facing GenAI. Disclaimer wording will need to be tightly drafted, but at the same time it must be fair and reasonable to consumers. In fact, it is a specific requirement under the EU AI Act, and consistent with the UK's AI Regulatory Principles, that certain content created by GenAI be marked as such.
  • The outputs created by GenAI should display the original source material/weblinks, and customers should be directed to read these before relying on the GenAI outputs.
  • Businesses should have in place appropriate training, testing and monitoring; in particular, any tools used to filter out personal data must be rigorously trained and tested. Businesses should also ensure that an "Acceptable Use of AI" policy is in place which highlights the unique issues and risks raised by GenAI, helps personnel understand the guidelines for acceptable use cases, and safeguards confidential and sensitive information.
  • Businesses should carry out thorough and detailed due diligence on the developer’s security and processes for compliance, including training and testing.
  • In order to avoid biased outputs, businesses should carefully consider and, to the extent possible, select the data that is used to train GenAI models and perhaps consider using smaller, more specialised GenAI models instead of opting for the off-the-shelf likes of ChatGPT.
  • Use cases should be carefully considered, taking into account the impact of hallucinations and incorrect outputs. The use of GenAI in connection with critical decisions should be treated with caution and appropriately risk assessed.  

The above steps will help businesses navigate some of the unique issues that arise when deploying consumer-facing GenAI. This is an emerging field, and the landscape of risks will continue to change and evolve rapidly. For businesses looking for a practical solution to inform their next steps in AI compliance, TLT's AI Navigator is designed to guide organisations through the complexities of AI adoption, governance and compliance, and to assess where they are on a maturity curve. Please get in touch with us if you have any questions or require further assistance with your AI journey.

Authors: Emma Erskine-Fox and Liza Vernygorova

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at August 2024. Specific advice should be sought for specific cases. For more information see our terms & conditions.

Date published

16 August 2024
