On 17 December 2024, the UK Government published its consultation on Copyright and Artificial Intelligence (the "Consultation").
The Consultation touches on a number of challenges to copyright law in the UK presented by the dawn of generative AI, along with a number of suggestions for potential reforms to deal with such challenges.
The Consultation makes clear that the Government is seeking to balance the interests of both the creative and the AI sectors, with three key objectives being stated as informing the Government's approach:
1. Supporting rights holders’ control of their content and ability to be remunerated for its use.
2. Supporting the development of AI models in the UK by ensuring wide and lawful access to high-quality data.
3. Promoting greater trust and transparency between the creative and AI sectors.
The Consultation will run for a period of ten weeks, closing on 25 February 2025.
In this article we summarise the key points covered in the Consultation.
The current position under UK law regarding intellectual property and AI is uncertain and is an area much in need of clarification. As such, the Consultation is welcome and timely. Uncertainty and the risk of disputes will hamper the development, investment in and adoption of AI, and the Government is looking to create a framework that provides greater certainty while protecting the interests of the creators of original content and encouraging co-operation.
In contrast to the previous Government's position, the Consultation notes a number of areas where regulation may be required, in line with the Government's general approach of putting in place AI-specific laws governing the "most powerful AI models".
It is noteworthy that in a number of places (for example, regarding: (a) a text and data mining exception which allows right holders to reserve their rights; (b) transparency around AI model training data; and (c) labelling of outputs as AI generated) the Consultation suggests that the UK position may become broadly aligned with that under the EU AI Act.
The Consultation highlights the tensions under current copyright law presented by AI, dealing with the training of AI models and outputs of AI models in turn.
In many cases, AI models are trained using works made available to the public on the internet. Whilst these works are made publicly available, they are often not expressly licensed for AI model training.
The use of such works to train AI models has given rise to debate on the extent to which copyright law does, or should, restrict access to such media for the purpose of AI training.
Some AI developers argue that existing legal exceptions in UK copyright law allow them to use copyright works when conducting training activity in the UK, but right holders generally reject these arguments.
Litigation to resolve disputes in this area is ongoing, including the Getty Images v Stability AI case before the UK High Court. However, the Consultation acknowledges that it will likely take several years for these issues to be definitively resolved in case law.
As such, the Government is considering a "more direct intervention" through legislation to "clarify the rules in this area and establish a fair balance in law".
The Consultation also considers current uncertainties in terms of the outputs of generative AI models, in particular regarding:
the position around IP ownership in AI outputs; and
the circumstances in which AI outputs will infringe copyright.
We summarise the contents of the Consultation on each of the above below.
The Consultation outlines a number of options for clarifying copyright law in light of the above. These are:
Option 0: do nothing and leave copyright and related laws as they are.
Option 1: strengthen copyright requiring licensing in all cases.
Option 2: introduce a broad data mining exception.
Option 3: a data mining exception which allows right holders to reserve their rights, underpinned by supporting measures on transparency.
The Consultation notes that the UK Government's preferred approach is "Option 3", noted above: namely, a data mining exception with the following features. It would:
apply to data mining for any purpose, including commercial purposes;
apply only where the user has lawful access to the relevant works (including works on the internet, or made available under a contract);
apply only where the right holder has not reserved their rights in relation to the work; and
be underpinned by greater transparency around the sources of an AI model's training material.
The Consultation notes that this approach would be similar to that in the EU under Article 4 of the Digital Single Market Copyright Directive (Directive (EU) 2019/790).
The Consultation notes that there are tools available to rights holders to facilitate reservation of rights (such as the robots.txt standard), but flags that:
There is a lack of standardisation in the area of rights reservation, with rights holders having to navigate several systems.
There are no current requirements for AI firms to have any such rights reservation recognition systems in place.
The most widely used standard (the robots.txt standard) has certain limitations.
As such, the Consultation calls for greater standardisation in the area of rights reservation, noting that regulation might be needed to support the adoption of such standards.
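To illustrate how rights reservation currently works in practice, the sketch below shows a hypothetical robots.txt file in which a rights holder disallows crawlers associated with AI training while continuing to permit general crawling. The user-agent tokens shown (GPTBot for OpenAI's training crawler; CCBot for Common Crawl) are those published by the respective crawler operators; this is an illustrative example only, and, as the Consultation notes, compliance with such directives is voluntary and the file itself is not legally binding.

```text
# Illustrative robots.txt (assumption: a site wishing to reserve
# rights against AI training while allowing other crawling).
# Compliance by crawlers is voluntary, not legally enforced.

User-agent: GPTBot      # OpenAI's AI-training crawler
Disallow: /

User-agent: CCBot       # Common Crawl's crawler
Disallow: /

User-agent: *           # all other crawlers
Allow: /
```

This domain-level, all-or-nothing mechanism also illustrates the lack of granularity and standardisation the Consultation identifies: each rights holder must track the user-agent tokens of every relevant crawler separately.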
Often creators and performers license their rights to collective management organisations (CMOs), who are given a mandate to license their members’ works on a blanket basis.
The Consultation notes that new structures may be needed to support the aggregation and licensing of data for AI training purposes.
The Consultation:
Notes that it is difficult for right holders to determine whether their works are being used to train AI models, which can make it challenging for right holders to enforce their rights.
Suggests that "increased transparency by AI developers will be crucial to ensuring copyright law is complied with and can be enforced".
The transparency measures suggested by the Consultation include:
A requirement for AI firms and others conducting text and data mining to disclose the use of specific works and datasets along with details of web crawlers used to train the AI model.
A requirement for AI firms to keep records, to provide certain information on request, or to evidence compliance with rights reservations.
The above approach suggested by the Consultation is worth noting in light of:
The approach under the EU AI Act, whereby AI providers are required to make publicly available a “sufficiently detailed summary” of training content.
The approach under California’s Assembly Bill 2013 (AB 2013), which will require generative AI developers to disclose information about the datasets used to train, test, and validate their models.
Along with introducing a new exception in this area (Option 3, noted above) the Government aims to remove wider ambiguity under copyright law at the same time.
The Consultation states the above aim in relation to two areas:
Treatment of AI models trained in other jurisdictions: the Government states that it wants to encourage AI developers operating in the UK to comply with UK law on AI model training, even if their models are trained in other countries. To encourage this, the Government aims to ensure that the UK’s copyright provisions are internationally interoperable and do not lead to unreasonable burdens for AI providers, which often operate across multiple jurisdictions.
The "temporary copies" exception: this exception permits temporary copies to be made during technological processes – for example, copies held in browser caches or displayed on computer screens. It has been argued by some that this exception applies to the training of generative AI models, but it is not clear if this would be the case.
Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA) provides specific protection for “computer-generated works” ("CGWs").
This protection applies to a literary, dramatic, musical, or artistic work which is “generated by computer in circumstances such that there is no human author” of the work. The author of such a work is the person “by whom the arrangements necessary for the creation of the work are undertaken”.
In the case of a general-purpose AI which generates output in response to a user prompt, the “author” will usually be the person who inputted the prompt.
The section 9(3) computer-generated work provision has been criticised on a number of grounds relating to uncertainty around its application, both generally and in relation to generative AI.
The Consultation outlines a number of options around CGWs:
Option 0: no legal change, maintaining the current provisions.
Option 1: reform the current CGW protection to clarify its scope.
Option 2: remove specific protection for CGWs.
Whilst the Consultation indicates that the Government is minded to reject Option 0, unlike the position regarding input/training data (noted above), no Government preference is indicated; views are being sought before the Government shapes its position further in this space.
Content generated by an AI model will infringe copyright in the UK if it reproduces a “substantial part” of a protected work.
There is evidence that, on occasion, the outputs of AI models can and do contain elements of copyright works, with it being a question of fact whether this amounts to reproducing a "substantial part".
The Consultation notes that the Government's view is that the copyright framework in relation to infringing outputs is "reasonably clear" and "appears to be adequate".
However, the Consultation welcomes views:
as to where respondents consider copyright law to be deficient; and
on the value of practical measures that may be put in place by generative AI providers to prevent reproduction of copyright protected works, such as keyword filtering.
As the technical capability of generative AI advances, it is becoming increasingly difficult to tell when a work such as an image is generated by AI.
The Consultation notes that the Government believes that labelling content as AI generated is an area where regulation may be needed.
It is worth noting that this position is broadly analogous to that under the EU AI Act which contains certain transparency rules relating to generative AI, including a requirement that AI outputs are detectable as being AI generated or manipulated.
An area of concern to the creative sector is the use of AI to create "digital replicas" or deepfakes. The Consultation defines deepfakes as images, videos and audio recordings created by digital technology to realistically replicate an individual’s voice or appearance.
Where deepfakes are made without the relevant individual's consent, the position under UK law regarding legal protection is uncertain. Whilst the Consultation notes that consulting on proposals around the introduction of "personality rights" (or similar) in the UK is beyond its scope, the Government welcomes views on whether the legal framework is fit for purpose in this space.
On this point, it is worth noting that the EU AI Act contains specific transparency requirements around deepfakes.
This publication is intended for general guidance and represents our understanding of the relevant law and practice as at January 2025. Specific advice should be sought for specific cases. For more information see our terms and conditions.
Date published
23 January 2025