
Foreword
Two days ago the UK Government released its Policy Report (Command Paper 728) setting out how it intends to pursue its AI policy. This is an initial report ahead of a much more detailed and in-depth White Paper. It has been published to make people aware of the Government's intended direction and to gather feedback on whether anything needs to be adjusted, amended, or changed - in effect, a litmus test of public and stakeholder opinion.

The UK is the highest-ranked country in Europe for private investment in AI companies and for newly funded AI companies, and ranks third globally (2021). It also ranks first in Europe and fourth globally for AI publications (2021). As a result, what the UK intends to do in terms of policy and regulation has significant impact on the global AI ecosystem, and for those interested in the future of AI.

Here I will summarise the contents of the report, adding some opinions and thoughts in line with my EA-related beliefs, as well as noting what some sections mean for wider EA cause areas. I will end this post with what we as AI policymakers and as the wider EA community should do in response to the report. The purpose of this post is to summarise the policy paper for those who don't want to read the full report; I would still recommend reading the original in full.


Introduction

If this report were to be summarised briefly, you could do so by stating that the UK intends to head in a very different direction to the EU in terms of AI policy and regulation. Whereas the EU has approached AI policy from a Single Market perspective and (as civil law nations often do) has opted for a legislature-first approach, the UK is choosing the adaptive route instead.

The UK is opting for looser, more context-focused policy which relies on the context-heavy input of regulatory bodies rather than a centralised legislature or statutory system of overarching regulation. In essence, the UK will set out the core of what it considers ‘AI’ and let industry regulators handle the rest, deciding what needs to be regulated and how. The idea behind this is that regulatory bodies know their own industries best, and can handle the issues that arise in a more flexible manner.

The report therefore primarily describes what the government takes into account when defining AI, what powers the regulators will have, what challenges the regulation will address, and what the next steps are.

The core characteristics the UK Government is focusing on are two traits: the ‘adaptiveness’ of the technology (the ability of an AI to change over time) and the ‘autonomy’ of the technology (the ability of an AI to act on its own). Their reasoning appears to be that these traits are the most closely linked to risk, and to the difficulty of regulating against those risks. When deciding whether or not a technology counts as ‘AI’, the Government will therefore view it through this dual lens.

The report acknowledges that a major failure point of this approach could be that a distributed rather than uniform approach makes the rules harder to understand and follow, especially as new areas develop. The Government intends to mitigate this by adopting a set of overarching ‘principles’ to ensure that problems are at least approached in a consistent way across the board. The report appears to intend these principles to centre on transparency, explainability, and fairness. This is a values-focused approach, but the report explicitly states that the principles are not intended as a new framework of rights for individuals. Instead, they ensure that AI is used in a way that preserves and respects existing rights.

Unusually, the report makes specific mention of using AI for good, e.g. tackling fuel poverty and improving animal welfare on dairy farms. This is good news for those interested in the long-term future and in major EA cause areas. It also mentions risks, referring specifically to facial recognition and large language models. It mentions customers buying ‘off the shelf’ AI systems that are ‘evidenced, tested, and robust’, which hints that the regulation will impact developers as well, despite its focus on use over form.

It is well worth reading in full for those interested in AI policy, as it also offers a very good summary of the current regulatory landscape in the UK. The UK has always taken a very context-heavy approach, and this patchwork method (though difficult to research) has significant boons in flexibility. Having it summarised in one place is quite helpful.


Key Challenges

The report summarises the key challenges the UK Government currently faces in AI regulation, i.e. what is not working as intended or at all. These include the lack of a clear regulatory landscape for stakeholders, with much of it open to the interpretation of individual bodies or judges. Many of these bodies have overlapping remits, which can result in confusing, mixed information. Regulators also have differing powers and levels of control, resulting in a patchwork enforcement landscape. The current legal system was not designed with AI in mind, so the nation's legal systems need to be made more ready for ‘future risks’. These challenges are only briefly summarised in the report, and I hope to see much more detail in the White Paper.


Cross-Sectoral Principles

As mentioned previously, the UK Government's stated aim in this paper is to be contextual and non-statutory in its regulation while having overarching principles to draw from. These principles are split into six categories, summarised as follows:


AI Safety

The government is concerned about unforeseen safety implications in a range of areas. The principle of AI safety means that regulators should act to reduce the risks AI poses to humans in their own areas. It specifies, however, that precautions should be commensurate with ‘actual risk’. This suggests that the Government's AI safety concerns are grounded securely in the present rather than in future risks. This could be an odd one for EA's attitude to AI safety: it could be interpreted as meaning that the Government is not willing to listen to futurist or long-term AI safety concerns, but it is also a potentially good sign that the UK Government is beginning to take such matters into consideration in its policy decisions. Ultimately, it depends whether you're a ‘glass half full’ or a ‘glass half empty’ EA. The ‘glass is a simulation’ folk are unlikely to be affected.



AI technical security and function

This principle aims to ensure that consumers and the public have confidence in AI working properly, and that this trust translates to an economically healthier AI ecosystem. Essentially, AI systems should be ‘technically secure’ and ‘do what they intend and claim to do’. 

The functioning, resilience, and security of a system should be tested and proven, and the data used in training and in deployment should be relevant, high quality, representative, and contextualised.

Again, this doesn’t directly address many of the AI safety concerns EA folk have, but it once again opens a useful door. Though I personally don’t do much work on AGI/ASI, those types of worries could mesh well with this area.


AI Explainability

Perhaps most importantly, the Government makes specific mention of the legal process here, using tribunals as a case study. This is a major source of AI risk, and the fact that the government (claims to) take it seriously is a rare gem of good news. In essence, decisions made by AI have to be transparent and explainable; more importantly, they have to be challengeable. I’m excited about this category from a legal standpoint, as the lack of explainability and transparency from AI developers is a source of extreme risk in my area.

AI Fairness

The policy paper states that the current strategy is to let regulators continue to define what ‘fairness’ means. I actually did a whole section of my PhD literature review on what I meant by ‘fair’ in my thesis on AI law and policy; this is a monumentally complex and highly debated area, and I have no idea how a regulatory body would even begin to do this. The government's guidance, however, includes articulating ‘fairness’ as relevant to their sector or domain, deciding in which contexts and instances fairness is important and relevant (I’m unsure what this means), and designing and enforcing appropriate governance requirements for ‘fairness’ as applicable to the entities that they regulate.

I think that in this area of the policy paper the UK Government has significantly underestimated how complex the topic is, and this is one area of the White Paper which requires significant investment in detail. It is also likely to be my area of professional input (see Conclusion).



AI Legal Person

The paper mentions that a legal person, corporate or natural, must have responsibility for the behaviour of an AI. This is a mere regulatory aid, but it does inch the door open slightly for discussions of AI legal personhood. I wrote a fairly long post on my blog about this area a few weeks ago.


Clarification of routes to redress or contestability

The policy expects regulators to implement proportionate measures to ensure that the outcomes of AI use can be contested in relevant regulated situations. This is helpful for giving regulators ‘bite’ via the public. The regulatory bodies will still require the ‘teeth’ to get the job done, but a clear route to redress allows the public to approach regulators with adequate information to begin a realistic process of complaint.


Concrete Plans for the Future

The policy paper states that the UK Government intends to begin on a non-statutory footing so that it can monitor, evaluate, and update its approach. It makes clear that this policy is open to change - perhaps significantly so.

The Government intends to later release guidance for the individual areas that AI impacts, so that regulators have more to follow in each.

It is considering whether there is a need to update the powers and remits of some regulators. Many regulators will have the flexibility within their existing powers to translate and implement the proposed principles, but not all. There are also differences in the types of rules regulators can make when translating these principles, and in the enforcement action they can take when the underlying legal rules are broken.

The Government is also paying attention to regulators’ access to the correct skills and knowledge. While some have been able to make significant and rapid investment in their AI capabilities in recent years, not all regulators have access to the necessary skills and expertise. The Government seeks to address these disparities in a proportionate and innovative way, which could include consideration of the role that ‘pooled capabilities’ can play, as well as the effectiveness of secondments from industry and academia.

The report also states that although the Government is currently taking a non-legislative approach, this could change in the future.



Conclusion

The UK Government’s AI policy paper (Command Paper 728) clearly shows its future plans in the AI policy arena. Unlike the EU, the UK intends to pursue context-heavy, non-legislative, non-statutory regulation via regulatory and enforcement bodies that take overarching principles and apply them appropriately within each area. This has strengths and weaknesses: a highly flexible and agile model is much better placed to tackle the rapid pace of change in AI and its impact on society, but it risks being more difficult to decipher and enforce.

The Government has made clear that it is very much focused on credible, present threats and does not intend to worry overly about future threats. This comes from a standpoint of being worried about stifling innovation. Personally, I think it’s a generally good strategy (better than the EU’s), but I am very worried about this policy resulting in a case of “shutting the stable door after the horse has bolted”. Some AI risks leave very little time to respond, and this kind of policy leaves one open to being unable to recover from significant mistakes.

The policy paper contains plenty of good news in that the UK Government is taking seriously many areas of AI safety that were previously overlooked, such as AI’s impact on fairness, on the legal system, and on human rights. However, the Council of Europe receives only passing mention, and the current, major, authoritarian shifts in wider UK politics (including attempts by the Government to remove the Human Rights Act) cast doubt both on the seriousness of this engagement and on the Government’s true intentions regarding AI harms.

The UK Government, in this policy paper, has asked for feedback from industry and academia, as well as from independent experts. I’m really excited by this and would love to put forward an EA-focused paper as feedback, focusing on long-term risk and national strategies for mitigation. However, I feel that Gov.AI or the Legal Priorities Project would be better suited to this, having better resources and more EA knowledge than I do. Instead, I will be submitting feedback in coordination with several other experts on the legal elements of the paper. I am, of course, willing to assist any EA org that wants to make a formal response with input, proofreading, etc. In this instance I just feel there are better-suited candidates than me, and I should sacrifice my ego on the altar of impact.

Below are the major questions on which the UK Government seeks feedback, though people can submit responses outside of these - there may be AI working groups, small orgs, etc. who want to do so. The questions are:


  1. What are the most important challenges with our existing approach to regulating AI? Do you have views on the most important gaps, overlaps or contradictions?
  2. Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper? What do you see as the benefits of this approach? What are the disadvantages?
  3. Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach? Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?
  4. Do you have any early views on how we best implement our approach? In your view, what are some of the key practical considerations? What will the regulatory system need to deliver on our approach? How can we best streamline and coordinate guidance on AI from regulators?
  5. Do you anticipate any challenges for businesses operating across multiple jurisdictions? Do you have any early views on how our approach could help support cross-border trade and international cooperation in the most effective way?
  6. Are you aware of any robust data sources to support monitoring the effectiveness of our approach, both at an individual regulator and system level?


Feedback closes on 26/09/22. The Government can be contacted via:

Office for Artificial Intelligence

DCMS

100 Parliament Street

London

SW1A 2BQ

Or via email at evidence@officeforai.gov.uk


Hopefully this has been of some use to those of you interested enough in AI policy to care, but not enough to sit and read the full thing.



Comments

Thank you kindly for the summary! I was just thinking today, as the paper was making the rounds, that I'd really like a summary of this while I wait to make the time to read it in full. So this is really helpful for me.

I work in this area, and can attest to the difficulty of getting resources towards capability building for detecting trends towards future risks, as opposed to simply firefighting the ones we've been neglecting. However, I think the near- vs long-term distinction is often unhelpful and limited, and I prefer to think about things in the medium term (the next 2-10 years). There's a good paper on this by FHI and CSER.

I agree with you that the approach outlined in the paper is generally good, and with your caveats/risks too. I also think it's nice that there is variation amongst nations' approaches; hopefully they'll be complementary and borrow pieces from each other's work.
