This is despite both having the same underlying policy goals and a shared commitment to the principles for responsible AI innovation (as articulated by the UK Government) of safety, transparency, explainability, fairness, accountability, and fundamental human rights.
This difference in approach creates the potential for significant regulatory divergence between the UK and the EU. The UK appears to be adopting a context-specific approach without recourse (for the time being) to the statute book. In contrast, the EU has already passed the comprehensive AI Act into law.
The EU AI Act is clearly aimed at preventing the abuse of AI once it is operational, whether through recklessness or deliberate action. However, it remains to be seen what impact it will have on innovation, and whether the regulatory hurdles erected by the EU will hinder the development of AI in a European context.
Under the Act, economic actors who place high-risk AI applications on the market must first carry out a conformity assessment to ensure the AI being deployed is safe, transparent, subject to human oversight, and non-discriminatory in its operation.
High-risk AI applications are very widely defined as either AI products already covered by the EU legislation listed in the Annexes to the AI Act, or AI systems that may be used in biometrics, critical infrastructure, education, employment, access to essential services (both public and private), law enforcement, immigration, and the administration of justice.
The EU also has discretion to designate further AI applications as high-risk. The model adopted in the AI Act builds on the approach taken in the GDPR, which requires companies to carry out data protection impact assessments before undertaking any processing of personal data that is likely to pose a high risk to individuals.
Conversely, the UK Government has adopted more of a wait-and-see approach. In the interim, it has tasked the regulators for each sector with developing guidance on the use of AI in their sectors according to five key principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
This approach has the advantage of allowing best practice to develop outside a statutory framework, with a view to any future statutory framework reflecting those practices. It reflects the UK Government's underlying policy position of promoting innovation in AI.
However, the absence of a statutory framework does create the potential for significant divergence from the regime operating within the EU.
Critics within the UK may also argue that the lack of any specific statutory protection for individuals leaves open the potential for abuse of AI by unscrupulous actors who deliberately choose not to follow regulatory guidance.
Striking the balance between fostering innovation in AI and protecting fundamental human rights will remain a challenge from a legislative perspective, and will no doubt be a major task for the next administration.
Will there be a change in direction and greater alignment with the EU, or will the UK continue to task regulators with the job of policing AI? Only time will tell, but as the UK has the world's third-largest AI economy, whichever direction is followed will have consequences domestically and internationally.
Date published
21 March 2024