Bristol played host to techSpark’s week-long Bristol Technology Festival in October.
Each day of the festival had a theme, and together the themes spelt out the word “SPARK”. The end of the week saw a focus on “Knowledge”, presenting the perfect setting in which to launch TLT’s AI Forum: a series of events that will consider how AI is changing the way we live and work, and what organisations should be thinking about when designing, deploying and exploiting AI technology.
The first AI Forum focused on the AI regulatory landscape and saw chair Emma Erskine-Fox joined by panellists Nigel Winship, AI investment specialist at the Department of Business and Trade; Sian Ashton, TLT Client Service Transformation Partner; and Karin Rudolph, founder of Collective Intelligence and ethical technology champion.
With more businesses implementing AI to create efficiencies, it’s important to create a dialogue to explore this ever-changing landscape. And as AI develops at such a rapid pace, we also need to think about how AI may be regulated.
There are different approaches to regulating the development and use of AI solutions.
A number of existing laws (such as data protection law, competition law and consumer protection law) already encompass some elements of AI use. However, there is still a need for further regulation to ensure the safety of users. The challenge in regulating this technology is balancing the protection of individuals and businesses against the need to leave space for innovation.
The current lack of AI-specific regulation has led many tech companies to police themselves to ensure that they handle AI with care. For example, GPT-4 is billed as the “safest” version of OpenAI’s technology, arguably because humans are manually checking its work. Given the rapid development of AI, it is reasonable to assume that effective self-regulation is not sustainable. Specific regulation would mitigate the risk of self-regulation becoming the equivalent of greenwashing.
In addition, without the correct frameworks in place, there are significant data risk implications. Considered regulation is essential, as is support for companies in understanding what the regulations mean within their own contexts.
An essential part of regulating AI in a way that does not stifle innovation is ensuring that the regulation is built with input from many viewpoints, including those who are actually working with these technologies.
Whilst there is competition between the major global players in the AI regulation space (namely the EU, US and China), there is an argument that these “digital empires” need each other to protect the interests of their own tech companies abroad. Therefore, we may not see any major extremes in regulation and could instead see some form of alliance, with all markets striving towards a better technological ecosystem. Regulators certainly need to consider global approaches to ensure that businesses operating on a multi-national level have certainty as to their obligations and how potentially different regulatory frameworks affect them.
We have seen a drive towards this from the UK with the AI Safety Summit, which aimed to put the UK at the forefront of global AI safety.
Alongside considering the risks of AI, it is important to consider the huge potential benefits that the technology brings, such as advances in healthcare and increased productivity. At the same time, organisations need to take ethics into account when implementing AI solutions. While companies are generally keen to ensure that they are ‘doing the right thing’ and taking an ethical approach, without any regulation or official frameworks it will be difficult to hold AI companies accountable.
Existing human rights frameworks could be a good starting point for developing a suitable ethical framework. In addition, education and guidance for employees, alongside clear company policies, are essential to make sure that AI is used within the right ethical and regulatory parameters.
AI is not a new fad which is suddenly going to disappear. It is here to stay. If anything, its use and its impact on business and our personal lives is only going to increase. Therefore, developing a regulatory and associated ethical framework which protects all parties involved in the development and use of AI without stifling innovation is essential. And to achieve this there needs to be real collaboration between the public and the private sector – not just in the UK but on an international basis.
This publication is intended for general guidance and represents our understanding of the relevant law and practice as at November 2023. Specific advice should be sought for specific cases. For more information see our terms & conditions.
Date published
10 November 2023