[Caveat lector: I know roughly nothing about policy!]

Suppose that there were political support to really halt research that might lead to an unstoppable, unsteerable transfer of control over the lightcone from humans to AGIs. What government policy could exert that political value?

[That does sound relaxing.]

Banning AGI research specifically

This question is NOT ASKING ABOUT GENERALLY SLOWING DOWN AI-RELATED ACTIVITY. The question is specifically about what it could look like to ban (or rather, impose an indefinite moratorium on) research that is aimed at creating artifacts that are more capable in general than humanity.

So "restrict chip exports to China" or "require large vector processing clusters to submit to inspections" or "require evals for commercialized systems" don't answer the question.

The question is NOT LIMITED to policies that would be actually practically enforceable by their letter. Making AGI research illegal would slow it down, even if the ban is physically evadable; researchers generally want to think publishable thoughts, and generally want to plausibly be doing something good or neutral by their society's judgement. If the FBI felt they had a mandate to investigate AGI attempts, even if they would have to figure out some only-sorta-related crime to actually charge, maybe that would also chill AGI research. The question is about making the societal value of "let's not build this for now" be exerted in the most forceful and explicit form that's feasible.

Some sorts of things that would better address the question (in the following, replace "AGI" with "computer programs that learn, perform tasks, or answer questions in full generality", or something else that could go in a government policy; see the sketch after this list for how broad that placeholder is):

  1. Make it illegal to write AGIs.
  2. Make it illegal to pay someone if the job description explicitly talks about making AGIs.
  3. Make it illegal to conspire to write AGIs.
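
As a toy illustration of how broad that placeholder definition is (a hedged sketch; the definition is this post's placeholder wording, not text from any actual bill), here is a perfectly mundane Python program that "learns" from data and "answers a question", and so arguably falls under a naive reading. Any statutory definition would have to carve out things like this:

```python
# A deliberately mundane "program that learns and answers questions":
# an ordinary least-squares line fit. Under a naive reading of the
# placeholder definition above, even this would arguably qualify;
# that's the definitional problem in miniature.

def fit_line(xs, ys):
    # "Learning": estimate slope and intercept from data.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

# "Answering a question": roughly what will y be when x = 10?
slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1])
print(slope * 10 + intercept)  # about 20.3
```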

Why ask this?

I've asked this question of several (5-10) people, some of whom know something about policy and have thought about policies that would decrease AGI X-risk. All of them said they had not thought about this question. I think they mostly viewed it as not a very salient question because there isn't political support for such a ban. Maybe the possibility has been analyzed somewhere that I haven't seen; links?

But I'm still curious because:

  1. I just am. Curious, I mean.
  2. Maybe there will be support later, at which point it would be good to have already mostly figured out a policy that would actually delay AGI for decades.
  3. Maybe having a clearer proposal would crystallize more political support, for example by having something more concrete to rally around, and by having something for AGI researchers "locked in races" to coordinate on as an escape from the race.
  4. Maybe having a clearer proposal would allow people who want to do non-AGI AI research to build social niches for non-AGI AI research, and thereby be less bluntly opposed to regulation on AGI specifically.
  5. [other benefits of clarity]

Has anyone really been far even as decided to use?

There are a lot of problems with an "AGI ban" policy like this. I'm wondering, though, which problems, if any, are really dealbreakers.

For example, one problem is: How do you even define what "AGI" or "trying to write an AGI" is? I'm wondering how much this is actually a problem, though. As a layman, as far as I know there could be existing government policies that are somewhat comparably difficult to evaluate. Many judicial decisions related to crimes, as I vaguely understand it, depend on intentionality and belief: e.g. for a killing to be a murder, the killer must have intended to kill and must not have believed on reasonable grounds that zer life was imminently unjustifiedly threatened by the victim. So it's not like not-directly-observable mental states are out of bounds. What are some crimes that are defined by mental states that are even more difficult to evaluate? Insider trading? (The problem is still very hairy, because e.g. you have to define "AGI" broadly enough that it includes "generalist scientist tool-AI", even though that phrase gives some plausible deniability like "we're trying to make a thing which is bad at agentic stuff, and only good at thinky stuff". Can you ban "unbounded algorithmic search"?)
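
And on "unbounded algorithmic search" specifically, a minimal sketch of why that phrase is hard to pin down: the Python below is literally an unbounded algorithmic search (enumerate candidates with no preset limit until a check passes), yet the same skeleton describes a brute-force puzzle solver as readily as a search over programs. The predicate `is_solution` is a made-up toy, not anything from a real policy proposal:

```python
# A minimal sketch of "unbounded algorithmic search": enumerate candidates
# with no a-priori bound until some check passes. Nothing here is
# AGI-specific; the same skeleton covers brute-forcing a puzzle, a SAT
# search, or a search over programs.
from itertools import count

def is_solution(n: int) -> bool:
    # Hypothetical toy predicate: n is a perfect square greater than 1000.
    return n > 1000 and int(n**0.5) ** 2 == n

def unbounded_search() -> int:
    for n in count(1):  # unbounded enumeration of candidates
        if is_solution(n):
            return n

print(unbounded_search())  # prints 1024

# Any law against "unbounded algorithmic search" would need language that
# somehow excludes code like this while still catching AGI-grade search.
```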

Some other comparisons:

  • Bans on computer programs. E.g. bans on hacking private computer systems. How much do these bans work? Presumably fewer people hack their school's grades database than would without whatever laws there are; on the other hand, there's tons of piracy.
  • Bans on research. E.g. recombinant DNA, cloning, gain-of-function.
  • Bans on conspiracies with illegal long-term goals. E.g. hopefully-presumably you can't in real life create the Let's Build A Nuclear Bomb, Inc. company and hire a bunch of nuclear scientists and engineers with the express goal of blowing up a city. And hopefully-presumably your nuke company gets shut down well before you actually try to smuggle some uranium, even though "you were just doing theoretical math research on a whiteboard". How specifically is this regulated? Could the same mechanism apply to AGI research?

Is that good to do?

Yeah, probably, though we couldn't know whether a policy would be good without knowing what the policy would look like. There are some world-destroying things that we have to ban, for now; for everything else, there's ~~Mastercard~~ libertarian techno-optimism.

Answers

You have some interesting questions here. I am a computer scientist and a legal scholar, and I work a lot with organisations on AI policy, as well as helping to create policy too. I can sympathise with a lot of the struggles here from experience. I'll focus on some of the more concrete answers I can give, in the hopes that they are the most useful. Note that this explanation isn't from your jurisdiction (which I assume from the FBI comment is the USA) but instead from England & Wales; as they're both common law systems there's a lot of overlap, and many key themes are the same.


For example, one problem is: How do you even define what "AGI" or "trying to write an AGI" is?

This is actually a really big problem. There have been a few times we've trialled new policies with a range of organisations and found that how those organisations interpret the term 'AI' makes a massive difference to how they understand and adhere to the policy itself. This isn't even a case of bad faith; it's more that people try to attach meaning to a vague term and then do their best, but ultimately end up going in different directions. A real struggle is that when you try to get more specific, it can actually end up being less clear, because the further you zoom in, the more you accidentally exclude. It's a really difficult balancing act. So yes, you're right: that's a big problem.

I'm wondering how much this is actually a problem, though. As a layman, as far as I know there could be existing government policies that are somewhat comparably difficult to evaluate.

Oh, tons. In different industries, in a variety of forms. Law and policy can be famously hard to interpret. Words like 'autonomous', 'harm', and 'intend' are regular prickly customers.

Many judicial decisions related to crimes, as I vaguely understand it, depend on intentionality and belief: e.g. for a killing to be a murder, the killer must have intended to kill and must not have believed on reasonable grounds that zer life was imminently unjustifiedly threatened by the victim.

This is true to an extent. In law you often have the actus reus (what actually happened) and the mens rea (what the person intended to happen), and the law tends to weigh the mens rea quite heavily. Yes, intent is very important, but more so provable intent: lots of murder cases get downgraded to manslaughter for a better chance at a conviction. So to answer your question: yes, at a basic level criminal law often turns on intention and belief. Most of the time this is measured against the objective belief of the average person, but there are some cases (such as self-defence in your example) where it is measured against the subjective belief of that individual in those particular circumstances.

What are some crimes that are defined by mental states that are even more difficult to evaluate? Insider trading? (The problem is still very hairy, because e.g. you have to define "AGI" broadly enough that it includes "generalist scientist tool-AI", even though that phrase gives some plausible deniability like "we're trying to make a thing which is bad at agentic stuff, and only good at thinky stuff". Can you ban "unbounded algorithmic search"?)

Theft and assault of the everyday variety are actually some of the most difficult to evaluate, since both require intent to be criminal, and yet intent can be super difficult to prove. In the context of what you're asking, 'plausible deniability' is often a strategy chosen when accused of a crime (i.e. making the prosecution prove something non-provable, which is an uphill battle), but ultimately it would come down to a court to decide. You can ban whatever you want, but the actual interpretation could only really be tested in that manner. In terms of broad language, the definitions of words are often a core point of contention in court cases, so likely it would be resolved there; but honestly, from experience, the overwhelming majority of issues never reach court. Neither side wants to take the risk, so usually the company or organisation backs off and negotiates a settlement. The only times things really go 'to the hilt' are for criminal breaches, which require a very severe stepping over the mark.

  • Bans on computer programs. E.g. bans on hacking private computer systems. How much do these bans work? Presumably fewer people hack their school's grades database than would without whatever laws there are; on the other hand, there's tons of piracy.

In the UK the Computer Misuse Act 1990 is actually one of the oldest bits of computer-specific legislation, and it's still effective today after a few amendments. That's mostly due to the broadness of the law, and the fact that evidence is fairly easy to come by and that intent is fairly easy to prove. It's beginning to struggle in the new digital era though, thanks to totally unforeseen technologies like generative AI and blockchain.

Some bits of legislation have been really good at maintaining bans, though. England and Wales have a few laws against CSAM which include the term 'pseudo-photograph', which actually applies to generative AI, so someone who launched an AI for that purpose would still be guilty of an offence. It depends what you mean by 'ban', as a ban in legislation can often function very differently from a ban by, for example, a regulator.

Bans on conspiracies with illegal long-term goals. E.g. hopefully-presumably you can't in real life create the Let's Build A Nuclear Bomb, Inc. company and hire a bunch of nuclear scientists and engineers with the express goal of blowing up a city. And hopefully-presumably your nuke company gets shut down well before you actually try to smuggle some uranium, even though "you were just doing theoretical math research on a whiteboard". How specifically is this regulated? Could the same mechanism apply to AGI research?

Nuclear regulation is made up of a whole load of different laws and policy types, too broad to really go into here, but essentially what you're describing there is less about the technology and more about the goal. That's terrorism and conspiracy to commit murder just to start off with, no matter whether you use a nuke or an AGI or a spatula. If your question centres more on 'how do we dictate who is allowed access to dangerous knowledge and materials', that's usually a licensing issue. In theory you could have a licensing system around AGIs, but it would probably only work for a little while, and it would be really hard to implement without international buy-in.

If you're specifically interested in how this example is regulated, I can't help you in terms of US law beyond this actually quite funny example of a guy who attempted a home-built nuclear reactor and narrowly escaped criminal charges; on the UK side, relevant law includes the Nuclear Installations Act 1965 and much of the policy from the Office for Nuclear Regulation (ONR).

Hopefully some of this response is useful!

TsviBT:

Thanks for the thoughtful responses!

a guy who attempted a home-built nuclear reactor

Ha!

a licensing system around AGIs,

Well, I have in mind something more like banning the pursuit of a certain class of research goals.

the fact that evidence is fairly easy to come by and that intent is fairly easy to prove.

Hm. This provokes a further question:

Are there successful regulations that can apply to activity that is both purely mental (I mean, including speech, but not including anything more kinetic), and also is not an intention to commit a ...

CAISID:
Hm. The closest things I can think of would be things like inciting racial hatred or hate speech (i.e. not physical, no intent to commit a further crime, but illegal).

In terms of research, most research isn't illegal but is usually tightly regulated by participating stakeholders, ethics panels, and industry regulations. Lots of it is stakeholder management too. I removed some information from my PhD thesis at the request of a government stakeholder, even though I didn't have to; it was a good idea to ensure future participation, and I could see the value in the reasoning. I'm not sure there was anything they could do legally if I had refused, as it wasn't illegal per se.

The closest thing I can think of to your example is perhaps weapons research. There's nothing specifically making weapons research illegal, but it would be an absolute quagmire in terms of not breaking the law. For example, sharing the research could well fall under anti-terrorism legislation, and creating a prototype would obviously be illegal without the right permits. So realistically you could come up with a fantastic new idea for a weapon, but you'd need to partner with a licensing authority very, very early on, or risk doing all of your research by post at His Majesty's pleasure for the next few decades.

I have in the past worked in some quite heavily regulated areas with AI, but always working with a stakeholder who had all the licenses etc., so I'm not terribly sure how all that works behind the scenes.
TsviBT:
Ah, yeah, that sounds close to what I'm imagining, thank you.