April’s Digital Future events looked at the topic of artificial intelligence (AI) and asked whether we should be embracing, regulating or banning it. Guest speaker Norman Lewis, Director of Futures-Diagnosis Ltd and Visiting Research Fellow at MCC Brussels, joined TLT’s Juliet Mason, Daniel Lloyd and Emma Erskine-Fox to debate the issue. 

What is AI?

AI is an umbrella term for the technology, and should be differentiated from machine learning, which is the ability of a system to learn by example and carry out tasks it has not been specifically programmed to do.
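To make that distinction concrete, here is a deliberately toy sketch (not taken from the event, and far simpler than any real machine-learning system) of "learning by example": the program is never told the rule separating the two classes of message; it infers a threshold from labelled examples instead.

```python
# Toy "learning by example": infer a length threshold separating
# short "ham" messages from long "spam" messages, using only
# labelled training examples -- no rule is hand-coded.

def learn_threshold(examples):
    """Pick the midpoint between the longest 'ham' and the
    shortest 'spam' seen in the training examples."""
    ham = [length for length, label in examples if label == "ham"]
    spam = [length for length, label in examples if label == "spam"]
    return (max(ham) + min(spam)) / 2

def classify(length, threshold):
    return "spam" if length > threshold else "ham"

# Training examples: (message length, label) pairs.
training = [(120, "ham"), (90, "ham"), (400, "spam"), (350, "spam")]
threshold = learn_threshold(training)

print(classify(100, threshold))  # -> ham
print(classify(500, threshold))  # -> spam
```

A traditionally programmed system would have the threshold written in by a developer; the learned system derives it from data, which is why it can adapt to new examples without being reprogrammed.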

There is an ongoing debate about how to define intelligence in the context of AI. That is, can we class AI as intelligent? When we look at this question, we often talk about the difference between Narrow AI and General AI, the key differentiator being whether the AI can make a judgement call. 

Take, for example, the AI products used in the legal world: they are helpful tools, but ones which don’t have the intelligence to advise with any degree of accuracy without human intervention. AI might be able to support with contract drafting by assessing which clauses are present or missing, and providing standard advice, but it can’t provide specific advice on what amendments might need to be made to reflect a particular product or service, or the needs of a customer or supplier. In order to do so, it needs information from outside the digital world (such as knowledge of the project, the bargaining power between the parties, the losses which could be incurred and so on) – and that is where human intervention is needed. Only a human has such knowledge and the ability to make these judgements. To put it another way, today’s AI has become more and more proficient at accurately capturing data, deciphering it and providing a useful response, but it still does not understand why it is giving that response. That is, while Narrow AI might mimic “intelligence”, the market is a long way from being able to develop the General AI you might see in films.  
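The "narrow" clause-checking task described above can be caricatured in a few lines. This is a hypothetical sketch, not a real legal tool: the program can report which standard clause headings appear to be present or missing, but it cannot judge whether the wording suits a particular deal – that judgement stays with a human.

```python
# Narrow, mechanical contract review: flag which expected clause
# headings are absent from the text. No judgement is involved --
# the program has no idea what any clause means.

EXPECTED_CLAUSES = ["limitation of liability", "termination", "confidentiality"]

def missing_clauses(contract_text):
    """Return the expected clause headings not found in the contract."""
    text = contract_text.lower()
    return [clause for clause in EXPECTED_CLAUSES if clause not in text]

contract = """
1. Termination. Either party may terminate on 30 days' notice.
2. Confidentiality. Each party shall keep the other's information secret.
"""
print(missing_clauses(contract))  # -> ['limitation of liability']
```

Notice what the sketch cannot do: it cannot say whether a 30-day notice period is appropriate for this supplier relationship, because that depends on facts outside the text – exactly the human-judgement gap described above.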

Why is AI such a hot topic? 

In one form or another, AI has been around for a long time, but it is only in recent years that certain enablers have come together to allow consumable and practical products to come to market – hence the incredible increase in investment in this area. There are a number of enablers which have helped AI to become the technology of the moment. 

The first of these is big data. Big data is AI’s fuel – in order for an algorithm to learn, it needs millions and millions of examples. The more complex the model, the more examples it requires.

The second is cheap storage. The cost of storage has fallen dramatically in recent years, alongside a reduction in the size of the associated equipment, removing the practical and financial barriers to storing and processing this big data. 

The third is superfast processing. To train an algorithm, it is not enough to have big data; the system must also be able to process and learn from that data, ideally in real time. Without superfast processing, AI products would never learn fast enough to actually come to market. 

The fourth and final enabler is connectivity. While the internet has existed for some time, it is only with the introduction of 4G and superfast broadband that we have been able to move large datasets between servers and user devices in real time, meaning that the bulk of processing can be done at the datacentre while the device just provides the front-end mechanism. 

Can regulation hamper innovation and the development of AI?

AI is, in some ways, a relatively unknown quantity, and the potential negative impact it could have is influencing the regulation being introduced in the UK and EU as they look to mitigate those risks. 

However, for many the question is not just about the level of regulation, but whether regulation is being brought in too early. The argument against regulation looks at other tech breakthroughs such as DARPA’s ARPANET, the forerunner of the internet. If its development had been paused or stringent regulation introduced, it is unlikely that technological platforms such as Google, Facebook or Amazon would exist. The same could be applied to AI: if regulation is too rigorous or brought in too early, it could hamper innovation and future development. 

It is important to note that AI systems don’t deploy themselves. A company will deploy AI as part of (for example) an operating system for a car, and that company needs to be held accountable for the AI’s purpose, the data it is collecting, how it is constructed and so on. Therefore, regulation should encourage transparency and accountability while enabling innovation, as allowing AI to evolve will create a better understanding of potential future risks. 

Regulation also needs to take into account the fact that businesses will operate across different territories and hence need to comply with differing regulations. To really harness the development of AI there needs to be collaboration and interoperability between the different regulatory regimes – something which is not being seen at present.  

What approach is the EU taking to developing AI?

In order for the EU to develop a digital economy in which AI can thrive, it needs access to big datasets, investment and infrastructure, and then regulation that supports those three key elements. One of the concerns with the proposed EU AI Act is that it is a complicated piece of legislation which is potentially being introduced too early into a market which is not fully developed or understood in terms of its legislative requirements. 

The EU regulation takes a risk-based approach with three main categories. The first is “low risk”, where the application needs to fulfil certain transparency obligations regarding its use. The second is “prohibited AI”, such as subliminal AI applications, social credit scoring or attributing values to citizens based on their behaviours. The third is “high risk”. High-risk applications have to satisfy a number of criteria to ensure a high level of robustness, security and accuracy, including human oversight to minimise risk. 

While this is a valid approach to regulating AI, the issue is which applications fall under the “high risk” category, as this is widely defined: it includes everything from critical infrastructure that could put the life and health of citizens at risk, to educational or vocational training that may determine access to education, safety components of products, employment and the management of workers, and essential private and public services.  

For a start-up, navigating the “high risk” category could be expensive and time-consuming. Businesses may question the return on investment – is it worth developing this product in this country under this level of regulatory scrutiny (before the product has even got to market and been allowed to develop)? This is particularly true when one of the unexpected benefits of AI can be its application in other areas as the product is developed and refined, as has been demonstrated when using AI in the context of medtech. Therefore, overcomplicated and premature regulation may set a precedent in the EU that prevents the development of a digital economy. 

How does the UK’s approach to regulation differ? 

The UK has taken a very different approach to its proposed regulation of AI. The government released its AI white paper on 29 March 2023, which outlines this approach and aims to put the UK at the forefront of the AI race. The paper is open for consultation until 21 June 2023. Rather than proposing a prescriptive legislative framework, the government has set out a principles-based approach which acknowledges that businesses need to be accountable for their decisions on AI development and deployment: in the words of the white paper, “a pro-innovation approach to regulation involves tolerating a certain degree of risk.”

There are five key principles detailed in the white paper:

  • Safety, security and robustness – AI solutions should be technically secure and the players in the AI ecosystem should regularly assess and manage the risks and make sure that the AI tools function as intended. There is still a question as to where that responsibility sits, but the white paper is clear that accountability should be driven throughout the supply chain. 
  • Appropriate transparency and explainability – this is seen as key to building public trust in AI and thereby increasing AI uptake. If businesses and consumers don’t understand broadly how an AI tool works and processes data, they are less likely to use it. By referring to “appropriate” transparency, the white paper acknowledges that the level of information required will depend on the context, risk and particular use case for the tool, as well as the audience for the information; regulators will need more detailed and technical information, for example, than consumers. 
  • Fairness – AI can have a significant impact on individuals and businesses, and those impacts should be justifiable. The AI shouldn't undermine legal rights, discriminate unfairly against individuals or businesses, or lead to any sort of unfair market outcomes.
  • Accountability and governance – this requires ensuring there is oversight and clear lines of accountability throughout the supply chain. This is a widely acknowledged challenge with AI – where does that accountability sit when there is a complex supply chain and the technology is by its nature adaptive and autonomous? The white paper suggests that regulators will need to ensure that there are expectations on appropriate actors in the supply chain, but it is not always clear who those appropriate actors are. The government also suggests that the existing legal frameworks that allocate those responsibilities may not do so fairly in the context of AI – for example, the existing concepts of “controller” and “processor” in data protection law might not work on a practical level when it comes to AI. There is a suggestion that the government may intervene at legislative level to review this where required. 
  • Contestability and redress – individuals and businesses affected by AI outcomes should be able to challenge those decisions and obtain redress where appropriate. The government is not currently intending to introduce new mechanisms of redress or contestability; regulators will need to clarify the existing routes. However, it is acknowledged that this is another area where the government may need to address gaps in current mechanisms to enable real accountability.

This is a non-statutory framework and existing regulators, such as the ICO, the FCA and the CMA, will be expected to interpret and apply the principles within their existing regulatory remits. At the end of an initial implementation period, the government intends to introduce a statutory duty on regulators to have “due regard” to the principles. However, the white paper leaves it open for the government to decide that the framework should remain non-statutory if it is working well. Whether or not this statutory duty is introduced, the white paper does not envisage any additional legal duties on those operating within the AI ecosystem. 

To support regulators in implementing the principles, the government intends to create a number of central functions. These include: a monitoring and evaluation function to assess the impact of the framework and how it is working; and a risk assessment function to identify, assess and manage risks. The government also proposes to provide guidance and support to regulators, as well as support for innovators through sandboxes and test beds, and education and awareness for AI actors to embed the principles. There is also a big focus on regulators collaborating and producing guidance for businesses, to help them navigate the new regulatory landscape, particularly when several regulators may be involved in regulating the same AI use case. 

Can we keep up with the pace of change?

One of the main challenges of trying to legislate for new tech is that it is impossible for the legislation to keep up with the pace of change, particularly when that legislation is very prescriptive or rigid. Take, for example, the EU AI Act, which was drafted in 2021: the technology has moved on so much in the intervening time that it could be argued the Act is already out of date. 

The more agile and flexible the framework – potentially like the one proposed by the UK – the easier it is for that regulation, the regulators and the players in the sector to keep up with the pace of change. On the other hand, those looking to develop and deploy AI need certainty as to what their obligations are, and the risks for them and their users if they fail to meet them. AI has the potential to make a range of positive impacts, and if we are going to fully embrace this technology, we need to be careful not to regulate out of existence something that isn’t yet fully understood, whilst striking a balance with the need to protect businesses and users from the risks of the technology. 

One of the things needed to drive innovation and successfully regulate AI is more education about its capabilities. As it stands, we are a long way off AI progressing (if it ever does) from its current ability to solve certain problems with deep learning algorithms to submitting papers on how to unify the laws of physics to “Physics Today” for peer review! A successful regulatory regime for AI would take into account both the opportunities and the limitations of the technology, whilst allowing space for the regulation to shift alongside technological developments, to truly enable responsible innovation in the AI space. 

Date published

11 May 2023


