
CAISID

AI Legislation
265 karma · Joined · Working (6-15 years)

Bio

I am a computer scientist (to degree level) and legal scholar (to PhD level) working at the intersection between technology and law. I currently work in a legislation role at a major technology company, and as a consultant to government and industry on AI Law, Policy, Governance, and Regulation.

How others can help me

I am looking for opportunities to network with others. I have some scope to take on new projects in 2024, and I am willing to hear from potential collaborators or funders.

How I can help others

Reach out to me to spitball AI legislation ideas or for simple feedback. I am also interested in meeting early-career researchers in AI governance or policy.

Comments (64)

Thank you for this post, Matthew; it is just as thoughtful and detailed as your last one. I am excited to see more posts from you in future!

I have some thoughts and comments as someone with experience in this area. Apologies in advance if this comment ends up being long - I prefer to mirror the effort of the original post creator in my replies, and you have set a very high bar!

1. Risk Assessments

How should frontier AI organisations design their risk assessment procedures in order to sufficiently acknowledge – and prepare for – the breadth, severity and complexity of risks associated with developing frontier AI models?

This is a really great first area of focus, and if I may arrogantly share a self-plug, I recently posted something along this specific theme here. Clearly it has been field-changing, achieving a whopping 3 karma in the month since posting. I am truly a beacon of our field!

Jest aside, I agree this is an important area and one that is hugely neglected. A major issue is that academia is not good at understanding how this actually works in practice. Much more industry-academia partnership is needed, but that can be difficult to arrange where it really counts, which is something you allude to well in your post.



Senior leadership of firms operate with limited information. Members of senior management of large companies themselves cannot know of everything that goes on in the firm. Therefore, strong communication channels and systems of oversight are needed to effectively manage risks.



This is a fantastic point, and one that is frequently a problem. Not long ago I was having a chat with the head of a major government organisation who quite confidently stated that his department did not use a specific type of AI system. I had the uncomfortable moral duty of informing him that it did, because I had helped advise on risk mitigation for that very system only weeks earlier. It's a fun story, but the higher up the chain you are in a large organisation, the harder it is to know what is actually going on. Another good, recent example is Nottinghamshire Police publicly claiming in response to an FOI request that they do not use and do not plan to use automated facial recognition (AFR), seemingly unaware that their force had revealed a new AFR tool to the media earlier that week.



Although much can be learned from practices in other industries, there are a number of unique challenges in implementing good corporate governance in AI firms. One such challenge is the immaturity of the field and technology. This makes it difficult currently to define standardised risk frameworks for the development and deployment of systems. It also means that many of the firms conducting cutting edge research are still relatively small; even the largest still in many ways operate with “start-up” cultures. These are cultures that are fantastic for innovation, but terrible for safety and careful action. 


This is such a fantastic point, and to back it up, I reckon it is the source of about 75% of the risk scenarios I've advised on in the past year. I don't think 'AI firms' is the best focus term, because many major corporations are building AI as part of their wider offering without being "AI firms" themselves, but your point still stands well in the face of the evidence: a major problem right now is AI startups selling immature, untested, ungoverned tools to major organisations who don't know better and don't know how to question what they're buying. This isn't just a problem for corporations but for government, too. It's a huge risk vector.

For Sections 2 and 3, engineering and energy are fantastic industries to draw from in terms of their processes for risk and incident reporting. They're certainly amongst the strictest I've had experience of working alongside.

 

Ethics committees take a key role in decision making that may have particularly large negative impacts on society. For frontier AI labs, such committees will have their work cut out for them. Work should be done to consider the full list of processes ethics committees should have input in, but it will likely include decisions around:

  • Model training, including
    • Appropriate data usage
    • The dangers of expected capabilities
  • Model deployments
  • Research approval

 

This is an area that's seen a lot of really good outcomes for AI in high-risk industries. I would advise reading this research, which covers a fantastic use-case in detail. There are also some really good examples still in the process of getting the correct approvals, which I'm not entirely sure I can post here yet - but if you want to be kept updated, drop me a message and I'll keep you informed.



The challenge for frontier AI firms by comparison is that many of the severe risks posed by AI are of a more esoteric nature, with much current uncertainty about how failure modes may present themselves. One potential area of study is the development of more general forms of risk awareness training, e.g. training for developing a “scout mindset” or to improve awareness of black swan events.



This is actually one of the few sections I disagree with you on. Of all the high-risk AI systems I've worked with in a governance capacity, exceptionally few have had esoteric risks. Much of the time, AI systems interact with the world via existing processes which are themselves fairly well risk-scoped. The exception is if you meant far-future AI systems, which obviously would be unpredictable today. For contemporary and near-future AI systems, though, the risk landscape is quite well explored.
 


7 – Open Research Questions

These are fantastic questions, and I'm glad to see that some of them are covered by a recent grant application I made. Hopefully the grant decision-makers read these forums! I actually have something of a research group forming in this precise area, so feel free to drop me a message if there's likely to be any overlap, and I'm happy to share research directions etc. :)

 

There are huge technical research questions that must be answered to avoid tragedy, including important advancements in technical AI safety, evaluations and regulation. It is the author’s opinion that corporate governance should sit alongside these fields, with a few questions requiring particular priority and focus:

 

One final point that may be valuable: in most of my experience of hiring people for risk management, compliance, and governance roles around high-risk AI systems, the best candidates in the long run seem to be people with an interdisciplinary STEM and social studies background. It is tremendously hard to find these people. There needs to be much, much more effort put towards sharing skills and knowledge between the socio-legal and STEM spheres, though a glance at my profile might show a bit of bias in this statement! Still, for these types of roles that kind of balance is important. I understand that many European universities now offer such interdisciplinary courses, but no degrees yet. Perhaps the winds will change.

Apologies if this comment was overly long! This is a very important area of AI governance, and it was worth taking the time to put together some thoughts on your fantastic post. Looking forward to seeing your future posts - particularly in this area!

Hm. The closest things I can think of would be things like inciting racial hatred or hate speech (i.e. not physical, no intent for a crime, but still illegal). In terms of research, most research isn't illegal, but it is usually tightly regulated by participating stakeholders, ethics panels, and industry regulations. A lot of it is stakeholder management, too. I removed some information from my PhD thesis at the request of a government stakeholder, even though I didn't have to; it was a good idea to ensure future participation, and I could see the value in the reasoning. I'm not sure there was anything they could have done legally if I had refused, as it wasn't illegal per se.

The closest thing I can think of to your example is perhaps weapons research. There's nothing specifically making weapons research illegal, but it would be an absolute quagmire in terms of not breaking the law. For example, sharing the research could well fall under anti-terrorism legislation, and creating a prototype would obviously be illegal without the right permits. So realistically you could come up with a fantastic new idea for a weapon, but you'd need to partner with a licensing authority very, very early on or risk doing all of your research by post at His Majesty's pleasure for the next few decades.

I have in the past worked in some quite heavily regulated areas with AI, but always working with a stakeholder who had all the licenses etc so I'm not terribly sure how all that works behind the scenes.

 

Answer by CAISID

You have some interesting questions here. I am a computer scientist and a legal scholar, and I work a lot with organisations on AI policy as well as helping to create policy. I can sympathise with a lot of the struggles here from experience. I'll focus on some of the more concrete answers I can give in the hope that they are the most useful. Note that this explanation isn't from your jurisdiction (which I assume from the FBI comment is the USA) but instead from England & Wales; as they're both common law systems there's a lot of overlap and many key themes are the same.


For example, one problem is: How do you even define what "AGI" or "trying to write an AGI" is?

This is actually a really big problem. There have been a few times we've trialled new policies with a range of organisations and found that how those organisations interpret the term 'AI' makes a massive difference to how they interpret, understand, and adhere to the policy itself. This isn't even a case of bad faith; it's more that people try to attach meaning to a vague term and then do their best, but end up pulling in different directions. A real struggle is that when you try to get more specific, it can actually end up being less clear, because the further you zoom in, the more you accidentally exclude. It's a really difficult balancing act - so yes, you're right. That's a big problem.
 


I'm wondering how much this is actually a problem, though. As a layman, as far as I know there could be existing government policies that are somewhat comparably difficult to evaluate.


Oh, tons. In different industries, in a variety of forms. Law and policy can be famously hard to interpret. Words like 'autonomous', 'harm', and 'intend' are regular prickly customers.


Many judicial decisions related to crimes, as I vaguely understand it, depend on intentionality and belief, e.g. for a killing to be a murder, the killer must have intended to kill and must not have believed on reasonable grounds that zer life was imminently unjustifiedly threatened by the victim.
 


This is true to an extent. In law you often have the actus reus (what actually happened) and the mens rea (what the person intended to happen), and the law tends to weigh the mens rea quite heavily. Yes, intent is very important - but more so provable intent. Lots of murder cases get downgraded to manslaughter for a better chance at a conviction. To answer your question: yes, at a basic level criminal law often turns on intention and belief. Most of the time this is measured against the objective belief of the average person, but there are some cases (such as self-defence in your example) where intent is measured against the subjective belief of that individual in those particular circumstances.

 

What are some crimes that are defined by mental states that are even more difficult to evaluate? Insider trading? (The problem is still very hairy, because e.g. you have to define "AGI" broadly enough that it includes "generalist scientist tool-AI", even though that phrase gives some plausible deniability like "we're trying to make a thing which is bad at agentic stuff, and only good at thinky stuff". Can you ban "unbounded algorithmic search"?)

 

Theft and assault of the everyday variety are actually some of the most difficult to evaluate, since both require intent to be criminal, and intent can be super difficult to prove. In the context of what you're asking, 'plausible deniability' is often a strategy chosen when accused of a crime (i.e. making the prosecution prove something non-provable, which is an uphill battle), but ultimately it would come down to a court to decide. You can ban whatever you want, but the actual interpretation can only really be tested in that way. With broad language, the definition of words is often a core point of contention in court cases, so it would likely be resolved there - but honestly, from experience, the overwhelming majority of issues never reach court. Neither side wants to take the risk, so usually the company or organisation backs off and negotiates a settlement. The only times things really go 'to the hilt' are for criminal breaches, which require a very severe stepping over the mark.

 

  • Bans on computer programs. E.g. bans on hacking private computer systems. How much do these bans work? Presumably fewer people hack their school's grades database than would without whatever laws there are; on the other hand, there's tons of piracy.

 

In the UK, the Computer Misuse Act 1990 is actually one of the oldest bits of computer-specific legislation and is still effective today after a few amendments. That's mostly due to the broadness of the law, the fact that evidence is fairly easy to come by, and the fact that intent in these offences is fairly easy to prove. It's beginning to struggle in the new digital era, though, thanks to totally unforeseen technologies like generative AI and blockchain.

Some bits of legislation have been really good at maintaining bans, though. England and Wales have a few laws against CSAM which include the term 'pseudo-photography', which actually covers generative AI, so someone who launched an AI for that purpose would still be guilty of an offence. It depends what you mean by 'ban', as a ban in legislation can often function very differently from a ban by, for example, a regulator.
 


Bans on conspiracies with illegal long-term goals. E.g. hopefully-presumably you can't in real life create the Let's Build A Nuclear Bomb, Inc. company and hire a bunch of nuclear scientists and engineers with the express goal of blowing up a city. And hopefully-presumably your nuke company gets shut down well before you actually try to smuggle some uranium, even though "you were just doing theoretical math research on a whiteboard". How specifically is this regulated? Could the same mechanism apply to AGI research?

 

Nuclear regulation is made up of a whole load of different laws and policy types, too broad to really go into here, but essentially what you're describing is less about the technology and more about the goal. That's terrorism and conspiracy to commit murder just to start with, no matter whether you use a nuke or an AGI or a spatula. If your question centres more on 'how do we dictate who is allowed access to dangerous knowledge and materials', that's usually a licensing issue. In theory you could have a licensing system around AGIs, but that would probably only work for a little while and would be really hard to implement without international buy-in.

If you're specifically interested in how this example is regulated, I can't help you much in terms of US law beyond this actually quite funny example of a guy who attempted a home-built nuclear reactor and narrowly escaped criminal charges - however, some relevant UK laws include the Nuclear Installations Act 1965 and much of the policy from the Office for Nuclear Regulation (ONR).

Hopefully some of this response is useful!

Yeah, that's fixed for me :)

This is a useful list, thank you for writing it.

In terms of:

UK specific questions

  • Could the UK establish a new regulator for AI (similar to the Financial Conduct Authority or Environment Agency)? What structure should such an institution have? This question may be especially important because the UK civil service tends to hire generalists, in a way which could plausibly make UK AI policy substantially worse.  

I wrote some coverage here of a bill which seeks to do this, which may be useful for people exploring the above. Also well worth watching, and not particularly well covered right now, is how AUKUS will affect AI governance internationally. I'm currently preparing a deeper dive on this as a post, but for people researching UK-specific governance these are good areas to look at first, as not a lot of people are directing effort towards them.

This is really interesting, thank you. As an aside, am I the only one getting an unsecured network warning for nonlinear.org/network?

[This comment is no longer endorsed by its author]

I wouldn't be disheartened. I have considerable experience in AI safety, and my current role has me advising decision-makers on the topic at major tech organisations. My work was cited by politicians in parliament twice last year.

I've also been rejected for every single AI Safety fellowship or scholarship that I've ever applied for. That's every advertised one, every single year, for at least 5 years. My last rejection, actually, was on March 4th (so a week ago!). A 0% success rate, baby!

Being rejected doesn't mean you're bad. It's just that there are maybe a dozen places for well over a thousand people, and remember that these programmes have a certain goal in mind, so you could be the perfect candidate but at the wrong career stage, or in the wrong location, or suchlike.

I'd say keep applying, but also apply outside the EA sphere. Don't pigeonhole yourself. As others mentioned, keep developing skills, but I'd also add that you may never get accepted, and that's okay. It's not a linear progression where you have to get one of these opportunities before you can make an impact. Check out other branches.

Inbox me if you feel you need more personal direction, happy to help :)
 

Answer by CAISID

This won't be the answer you're looking for, but honestly, time permitting, I just take a day or three off. I find that when I'm relaxing and giving myself space to breathe and think without forcing it, that's when creativity starts to flow again and ideas come in. Obviously this isn't deadline-friendly!

This is a really interesting podcast - particularly the section with the discussion on foundation models and cost analysis. You mention a difficulty in exploring this; if you ever want to explore it, I'm happy to give some insight via inbox, because I've done a bit of work in industry in this area that I can share.
