
In March of this year, 30,000 people, including leading AI figures like Yoshua Bengio and Stuart Russell, signed a letter calling on AI labs to pause the training of AI systems. While it seems unlikely that this letter will succeed in pausing the development of AI, it did draw substantial attention to slowing AI as a strategy for reducing existential risk.

While initial work has been done on this topic (this sequence links to some relevant work), many areas of uncertainty remain. I’ve asked a group of participants to discuss and debate various aspects of the value of advocating for a pause on the development of AI on the EA Forum, in a format loosely inspired by Cato Unbound.

  • On September 16, we will launch with three posts: 
    • David Manheim will share a post giving an overview of what a pause would include, how a pause would work, and some possible concrete steps forward 
    • Nora Belrose will post outlining some of the risks of a pause
    • Thomas Larsen will post a concrete policy proposal
  • After this, we will release one post per day, each from a different author
  • Many of the participants will also be commenting on each other’s work

Responses from Forum users are encouraged; you can share your own posts on this topic or comment on the posts from participants. You’ll be able to find the posts by looking at this tag (remember that you can subscribe to tags to be notified of new posts). 

I think it is unlikely that this debate will result in a consensus agreement, but I hope that it will clarify the space of policy options, why those options may be beneficial or harmful, and what future work is needed.

People who have agreed to participate

These are in random order, and they’re participating as individuals, not representing any institution:

  1. David Manheim (ALTER)
  2. Matthew Barnett (Epoch AI)
  3. Zach Stein-Perlman (AI Impacts)
  4. Holly Elmore (AI pause advocate)
  5. Buck Shlegeris (Redwood Research)
  6. Anonymous researcher (Major AI lab)
  7. Anonymous professor (Anonymous University)
  8. Rob Bensinger (Machine Intelligence Research Institute)
  9. Nora Belrose (EleutherAI)
  10. Thomas Larsen (Center for AI Policy)
  11. Quintin Pope (Oregon State University)

Scott Alexander will be writing a summary/conclusion of the debate at the end.

Thanks to Lizka Vaintrob, JP Addison, and Jessica McCurdy for help organizing this, and Lizka (+ Midjourney) for the picture.

Comments

Will someone write about the symbolic importance of the ask for a pause?

Right now, most of what has been written here on this topic seems focused on the techno-economics, as if a pause were only about slowing down a technical process.

Asking for a pause is also a signal that one is extremely serious about the risk (and by the same token, not asking for a pause but speaking about existential risks seems very hard to communicate to a broader public).

I completely agree. These posts focus only on whether a pause itself would be good or not, and not on whether a campaign for a pause (or a similar campaign with a different goal) could be positive EV once all outcomes of the campaign are considered.

I think Holly Elmore will probably address this, at least she did in her Filan podcast appearance: https://sites.libsyn.com/438081/12-holly-elmore-on-ai-pause.

Yes, my piece will address this and why Pause advocacy can work.

It's definitely good to think about whether a pause is a good idea. Together with Joep from PauseAI, I wrote down my thoughts on the topic here.

Since then, I have been thinking a bit about the pause and comparing it to a more frequently mentioned option, namely applying model evaluations (evals) to see how dangerous a model is after training.

I think the difference between the supposedly more reasonable approach of evals and the supposedly more radical approach of a pause is actually smaller than it seems. Evals aim to detect dangerous capabilities. What will need to happen when those evals find that, indeed, a model has developed such capabilities? Then we'll need to implement a pause. Evals or a pause is mostly a choice about timing, not a fundamentally different approach.

With evals, however, we'll move precisely to the brink, look straight into the abyss, and then we plan to halt at the last possible moment. Unfortunately, though, we're in thick mist and we can't see the abyss (this is true even when we apply evals, since we don't know which capabilities will prove existentially dangerous, and since an existential event may already occur before running the evals).

And even if we did know where to halt: we'll need to make sure that the leading labs practically succeed in pausing themselves (there may be thousands of people working there), that models don't get leaked, that we implement the policy that's needed, that we sign international agreements, and that we gain support from the general public. This is all difficult work that will realistically take time.

Pausing isn't as simple as pressing a button; it's a social process. No one knows how long that process of getting everyone on the same page will take, but it could be quite a while. Is it wise to start that process at the last possible moment, namely when the evals turn red? I don't think so. The sooner we start, the higher our chance of survival.

Also, there's a separate point that I think is not sufficiently addressed yet: we don't know how to implement a pause beyond a few years' duration. If hardware and algorithms improve, frontier models could democratize. While I believe this problem can be solved by international (peaceful) regulation, I also think this will be hard and we will need good plans (hardware or data regulation proposals) for how to do this in advance. We currently don't have these, so I think working on them should be a much higher priority.

My gut reaction is that the eval path is strongly inferior because it relies on a lot more conjunction: people need to still care about it when models get dangerous, it needs to still be relevant by then, and the evaluations need to work at all. Compared to that, a pause seems like a more straightforward good thing, even if it doesn't solve the problem.

I agree that immediate pause or at least a slowdown ("moving bright line of a training compute cap") is better/safer than a strategy that says "continue until evals find something dangerous, then hit the brakes everywhere."

I also have some reservations about evals, in the sense that I think they can easily make things worse if they're implemented poorly (see my note here).

That said, evals could complement the pause strategy. For instance:

(1) The threshold for evals to trigger further slowing could be low. If the evals have to unearth even just rudimentary deception attempts rather than something already fairly dangerous, it may not be too late when they trigger. (2) Evals could be used in combination with a pause (or slowdown) to greenlight new research. For instance, maybe select labs are allowed to go over the training compute cap if they fulfill a bunch of strict safety and safety-culture requirements and if they use the training budget increase for alignment experiments and have evals set up to show that previous models of the same kind are behaving well in all relevant respects.
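(For illustration only, the combination in (2) could be written down as a simple gating rule; all field names, thresholds, and conditions below are hypothetical, not a description of any real or proposed policy mechanism.)

```python
# Hypothetical gating rule combining a training-compute cap with eval results.
# All field names and conditions are invented for illustration.

def may_exceed_cap(lab: dict, requested_flop: float, cap_flop: float) -> bool:
    """Allow a lab to go above the compute cap only under strict conditions."""
    if requested_flop <= cap_flop:
        return True  # under the cap: no exception needed
    return (
        lab["meets_safety_requirements"]                      # strict safety / safety-culture bar
        and lab["budget_use"] == "alignment_experiments"      # extra compute earmarked for alignment work
        and all(r["passed"] for r in lab["evals_on_prior_models"])  # earlier models behaved well on evals
    )

example_lab = {
    "meets_safety_requirements": True,
    "budget_use": "alignment_experiments",
    "evals_on_prior_models": [{"passed": True}, {"passed": True}],
}
print(may_exceed_cap(example_lab, requested_flop=2e26, cap_flop=1e26))  # True under these assumptions
```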

So, my point is we shouldn't look at this as "evals as an idea are inherently in tension with pausing ASAP."

There's an important difference between pausing and evals: evals get you loads of additional information. We can look at the results of the evals, discuss them, and determine in what ways a model might have misuse potential (and thus try to mitigate it) or whether the model is simply undeployable. If we're still unsure, we can gather more data and further refine our ability to perform and interpret evals.

If we (i.e. the ML community) repeatedly do this we build up a better picture of where our current capabilities lie, how evals relate to real-world impact and so on. I think this makes evals much better, and the effect will compound over time. Evals also produce concrete data that can convince skeptics (such as me - I am currently pretty skeptical of much regulation but can easily imagine eval results that would convince me). To stick with your analogy, each time we do evals we thin out the fog a bit, with the intention of clearing it before we reach the edge, as well as improving our ability to stop.

To stick with your analogy, each time we do evals we thin out the fog a bit, with the intention of clearing it before we reach the edge, as well as improving our ability to stop.

How does doing evals improve your ability to stop? What concrete actions will you take when an eval shows a dangerous result? Do none of them overlap with pausing?

Evals showing dangerous capabilities (such as how to build a nuclear weapon) can be used to convince lawmakers that this stuff is real and imminent.

Of course, you don't need that if lawmakers already agree with you – in that case, it's strictly best to not tinker with anything dangerous.

But assuming that many lawmakers will remain skeptical, one function of evals could be "drawing out an AI warning shot, making it happen in a contained and controlled environment where there's no damage."

Of course, we wouldn't want evals teams to come up with AI capability improvements, so evals shouldn't become dangerous AI gain-of-function research. Still, it's a spectrum because even just clever prompting or small tricks can sometimes unearth hidden capabilities that the model had to begin with, and that's the sort of thing that evals should warn us about.

I'm really happy to see this happening.

In fact, I'd like to see more things along these kinds of lines.

While there's a lot of good discussion on this forum, we aren't always going to end up discussing the most important topics organically. So I think it's often helpful for CEA/the mods to occasionally direct the attention of the forum towards the discussion topics that will move us forward as a community.

If we wanted to go beyond this, then I think it would be quite valuable to find two people with opposite views to work together in order to produce a high-quality distillation of any such debates.

Thanks for noticing something you thought should happen (or having it flagged to you) and making it happen!

Love this idea, thanks for organizing this.

PSA: the term "compute overhang" or "hardware overhang" has been used in many ways. Today it seems to most often (but far from always) mean the amount by which labs could quickly scale up the size of the largest training run (especially after a ban on large training runs ends). When you see it or use it, make sure everyone knows what it means.

(It will come up often in this debate.)

PSA: if "pause" is not defined but seems to refer to a specific kind of government policy, it most likely means a policy regime that stops training runs that use compute beyond a certain threshold.

Relatedly, there's something like a soft pause or slowdown, where you stop training runs that use compute beyond a certain threshold, but the threshold moves every year. This could be a pragmatic tweak because compute will likely get cheaper, so it becomes easier for rogue actors to circumvent the compute cap if it never moves. This soft pause idea has been referred to as a "moving bright line (of a compute cap)."
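(For concreteness, a minimal sketch of such a moving cap might look like the following; the base cap and growth factor are placeholders, not anyone's actual proposal.)

```python
# Illustrative sketch of a "moving bright line" training-compute cap.
# The base cap and annual growth factor are placeholders, not a real proposal.

BASE_YEAR = 2024
BASE_CAP_FLOP = 1e26     # hypothetical cap on total training compute in the base year
ANNUAL_GROWTH = 1.5      # hypothetical factor by which the cap rises each year

def compute_cap(year: int) -> float:
    """Training-compute cap (in FLOP) in force for a given year."""
    return BASE_CAP_FLOP * ANNUAL_GROWTH ** (year - BASE_YEAR)

def run_is_allowed(training_flop: float, year: int) -> bool:
    """A training run is allowed only if it stays under that year's cap."""
    return training_flop <= compute_cap(year)

print(run_is_allowed(5e25, 2024))  # True: below the base-year cap
print(run_is_allowed(5e26, 2024))  # False: above the base-year cap
print(run_is_allowed(5e26, 2028))  # True: by 2028 the line has moved up past 5e26
```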

(Adding to this: "FLOP" is the plural of "FLOP".)

I’m trying to make “FLOPstacles” happen for the things that mean we can’t just take max FLOP per GPU and multiply by the number of GPUs, e.g. memory or interconnect bandwidth.
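(A toy calculation of why those FLOPstacles matter; all figures are illustrative, not measurements of any real cluster.)

```python
# Toy illustration: naive peak compute vs. utilization-adjusted compute.
# All figures are illustrative, not measurements of any particular cluster.

peak_flop_per_gpu = 1e15        # hypothetical peak FLOP/s for one accelerator
num_gpus = 10_000
training_days = 90
seconds = training_days * 24 * 3600

naive_total_flop = peak_flop_per_gpu * num_gpus * seconds

# Memory and interconnect bottlenecks keep real utilization well below 100%.
model_flop_utilization = 0.4    # hypothetical utilization fraction

effective_total_flop = naive_total_flop * model_flop_utilization

print(f"Naive estimate:    {naive_total_flop:.2e} FLOP")
print(f"Adjusted estimate: {effective_total_flop:.2e} FLOP")
```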

Any thoughts on using https://www.kialo.com/ or a similar tool specialized for debate?

I used such tools for a while and didn't feel much connection to them. I guess it often felt like there was no way to quantify arguments. It's not about the number of arguments but the strength of the arguments for and against a point.

I wish! I’ve been recommending this for a while but nobody bites, and usually (always?) without explanation. I often don’t take seriously many of these attempts at “debate series” if they’re not going to address some of the basic failure modes that competitive debate addresses, e.g., recording notes in a legible/explorable way to avoid the problem of arguments getting lost under layers of argument branches.

What do such tools offer?

Great initiative! 

Perhaps it'd be good to ask people who are doing some public-facing campaigning to contribute? For example, the team at the Existential Risk Observatory or those behind PauseAI. I might be wrong, but I don't think anyone on the list of agreed contributors represents that specific theory of change. 

I think a public-facing campaign is important to think about if we want to reduce the likelihood of articles such as 'How Silicon Valley doomers are shaping Rishi Sunak’s AI plans' being written.

Thanks for the suggestion! I expect that @Holly_Elmore will represent that viewpoint. See e.g. this podcast.

Yes! My piece is about advocacy and Pause.

Oh I wasn't aware, thanks for correcting me! 

Great idea, thanks for organizing!

I'm a long-time lurker but I registered to make this comment: While these posts are incredibly high quality, they are legally naive. There is no mention of the First Amendment or Bernstein v. Department of Justice, which is a significant gap. Yes, let's have the discussion about pausing AI and part of that should include its legality. 

I would be excited for you to write a post about that!

Writing a post about it now. I'll probably crosspost to my Substack as well. 

This is a great idea, and I look forward to reading the diverse views on the wisdom of an AI pause.

I do hope that the authors contributing to this discussion take seriously the idea that an 'AI pause' doesn't need to be fully formalizable at a political, legal, or regulatory level. Rather, its main power can come from promoting an informal social consensus about the serious risks of AGI development, among the general public, journalists, politicians, and the more responsible people in the AI industry. 

In other words, the 'Pause AI' campaign might get most of its actual power and influence from helping to morally stigmatize reckless AI development, as I argued here.  

Thus, the people who argue that pausing AI isn't feasible, or realistic, or legal, or practical, may be missing the point. 'Pause AI' can function as a Schelling point, or focal point, or coordination mechanism, or whatever you want to call it, with respect to public discourse about the ethics of AI development.

I'll include links to my two old posts arguing for continuing AI development:

First, I argued that AI is a necessary (almost irreplaceable) tool for dealing with the other existential risks (mainly nuclear war):

https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of

Then, I argued that current AI risk is simply "too low to be measured", and that we need to be closer to AGI to do realistic alignment work:

https://forum.effectivealtruism.org/posts/uHeeE5d96TKowTzjA/world-and-mind-in-artificial-intelligence-arguments-against

I think the idea of debates like this is great, good work team!

Is there any plan to expand this to non-AI cause areas?

Thanks! I don't have concrete plans, and it seems like I might get replaced soon. But I would be interested in seeing a list of other topics you would be excited for people to organize content around.

As always with these Forum events, though, I would like to reiterate that it really isn't that hard to email a few people and ask if they're willing to write something about a given topic. I would be excited for others to organize similar things, since I'm skeptical that I will cover all the useful topics. People should feel free to DM me if they're interested in doing so and have questions about how to go about it!

I would be excited to see a debate series on the meat eater problem. It is weird to me that there's not more discussion around this in EA, since it (a) seems far from settled, and (b) could plausibly imply that one of the core strands of EA -- global health and development -- is ultimately net negative.

So the final post will be released on September 27th?

If everything goes according to schedule, the final debate post would be the 24th, and then it will take some undetermined amount of time after that for Scott to write his summary/conclusion. I wouldn't be that surprised if things fell behind schedule though.

Regardless of whether a pause has benefits, can someone succinctly explain how a pause could even be possible? Nuclear arms pauses never meaningfully happened - the SALT treaties came decades later and didn't meaningfully reduce the "nuclear apocalypse" doom risk, at least for citizens living in targeted countries. (Maybe they reduced worldwide risks.)

A nuclear bombardment is a quantifiable, object-level thing. You can use a special tool to draw circles on a paper map of your own cities and wargame out where the enemy will allocate their missiles if their goal is maximum destruction. Multiply out a series of probability estimates of how likely each warhead is to reach its target and give full yield, then add up the estimated damage.
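(For concreteness, that last step is just a small expected-value calculation; every target and number below is invented purely for illustration.)

```python
# Minimal expected-damage sketch for the kind of wargaming described above.
# Every number here is invented purely for illustration.

targets = [
    # (estimated damage if hit, P(warhead reaches target), P(full yield))
    (100, 0.9, 0.95),
    (60,  0.8, 0.95),
    (30,  0.7, 0.90),
]

expected_damage = sum(
    damage * p_reach * p_yield for damage, p_reach, p_yield in targets
)
print(expected_damage)  # total damage weighted by the chained probabilities
```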

The "we must pause" threat of AI doom I have seen so far doesn't have any data of any threat. And a frequent threat model is the "treacherous turn" - you wouldn't be able to quantify how risky the current AI systems you have at any point or their capacity for damage in numbers because the AI systems are hiding their motives and capabilities until they see a route to victory.

With no data on a threat, how could any policymakers agree to a pause?

The other examples of human technologies we did agree not to pursue all exist, and their threats are concrete. We have data on the damage from nerve gas, CFCs, biological warfare, and human genetic engineering.

As a side note, the treacherous turn threat model has an obvious technical mitigation measure: use the least compute on any task by selecting the most efficient model from a pool. This discourages models with the cognitive capacity for sedition or hidden capabilities, as they will need more memory and compute to run.
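(For concreteness, that selection rule could be sketched like this; the model pool, scores, and costs are made up for illustration.)

```python
# Sketch of "use the least compute that does the job": pick the cheapest
# model from a pool that still meets the task's quality threshold.
# Model names, scores, and costs are made up for illustration.

models = [
    {"name": "small",  "task_score": 0.72, "compute_cost": 1},
    {"name": "medium", "task_score": 0.88, "compute_cost": 5},
    {"name": "large",  "task_score": 0.91, "compute_cost": 40},
]

def pick_model(pool, min_score):
    """Return the lowest-compute model that meets the quality bar, if any."""
    qualified = [m for m in pool if m["task_score"] >= min_score]
    return min(qualified, key=lambda m: m["compute_cost"]) if qualified else None

print(pick_model(models, min_score=0.85))  # -> the "medium" model
```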

Yeah, you should wait for the posts - it's definitely something that is worth discussion.

Here are some other technologies that got slowed down, cf @Katja_Grace 

It doesn’t look like it to me. Here are a few technologies which I’d guess have substantial economic value, where research progress or uptake appears to be drastically slower than it could be, for reasons of concern about safety or ethics:

  • Huge amounts of medical research, including really important medical research, e.g. the FDA banned human trials of strep A vaccines from the 70s to the 2000s, in spite of 500,000 global deaths every year. A lot of people also died while covid vaccines went through all the proper trials.
  • Nuclear energy
  • Fracking
  • Various genetics things: genetic modification of foods, gene drives, early recombinant DNA researchers famously organized a moratorium and then ongoing research guidelines including prohibition of certain experiments (see the Asilomar Conference)
  • Nuclear, biological, and maybe chemical weapons (or maybe these just aren’t useful)
  • Various human reproductive innovation: cloning of humans, genetic manipulation of humans (a notable example of an economically valuable technology that is to my knowledge barely pursued across different countries, without explicit coordination between those countries, even though it would make those countries more competitive. Someone used CRISPR on babies in China, but was imprisoned for it.)
  • Recreational drug development
  • Geoengineering

Longer/ongoing list here.

So do any of these not exist in some form proving the tech is real?

Do any of these not have real-world data on the drawbacks?

Take human genetic engineering, for example: we never tried it, but when we edit mammals, errors are common, and we know the incredible cost of birth defects. Also, since we do edit other mammals, we know with certainty this isn't just a possibility down the road; it's real, and the same tooling that works on rats will work on humans.

Do any of these, if you researched the tech and developed it to its full potential, allow you to invade every country on earth and/or threaten to kill every living person within that country's borders?

Note that last paragraph. This is not me being edgy: if exactly one nation had a large nuclear arsenal, it could do exactly that. Once other powers started to get nukes in the 1950s, every nation had to get its own or have trusted friends with nukes and a treaty protecting it.

AGI technology, if developed to its full potential and kept exclusive to one superpower, would allow the same.

That means any multilateral agreement to stop AGI research, over dangers that haven't yet been shown to exist, requires each party to throw away all the benefits: becoming incredibly wealthy, immortal, and having almost limitless automated military hardware.

There is so much incentive to cheat that it seems like a non-starter.

For an agreement to be even possible I think there would need to be evidence that

(1) it's too dangerous to even experiment with AGI systems because it could "break out and kill everyone". Break out to where? What computers can host it? Where are they? How does the AGI stop humans cutting the power or bombing the racks of interconnected H100s?

(2) There's no point in building models, benchmarking them, finding the ones that are both safe to use and AGI, and using them, because they will all betray you.

It's possible that (1) and (2) are true facts about reality, but there is no actual evidence.

Re human genetic engineering, I don't think it's data on errors that is preventing it from happening; it's moral disgust at eugenics. We could similarly have a taboo against AGI if enough people are scared enough of, and disgusted by, the idea of a digital alien species that doesn't share our values taking over and destroying all biological life.

Perhaps. I can't really engage on that, because "moral disgust" doesn't explain multiple distinct nations with slightly different views on morality all refusing to practice it. My main comment is that I think it's helpful to look at the potential gain vs. the potential risks.

Potential gain: yes, you could identify alleles with promoters associated with the nervous system and statistically correlated with higher IQ. 20-30 years later, if this were done at large scale, people might be marginally smarter.

But how much gain is this? How long has it been since humans had the tools to even attempt genetic engineering? Can you project any gain whatsoever 20-30 years from now?

I would argue the answers are: minimal, less than 10 years since reliable tools have existed, and almost no gain, because in 20-30 years AI tools will be able to complete in seconds any task that "average or below" IQ individuals struggle with.

Potential risks: each editing error that goes undetected in early fetal development saddles the individual with lifelong birth defects that may require a permanent caretaker or hospitalization. This can cost many millions, and is essentially so much liability that only a government could afford to practice genetic engineering.

Governments are slow; remember, it's only been about 10 years since this has even been feasible.

Conclusion: the risk-to-benefit ratio for human genetic engineering offers minimal gain, and even in the best case it is slow, which means little annual ROI. The decision to, say, industrialize China generated more wealth than all of humanity had prior to that point, and took only slightly longer than one iteration of human genetic engineering.

The potential benefits of AI could double the wealth of humanity in a few years, i.e. 10-100 percent annual ROI.

There are universal human psychological adaptations associated with moral disgust, so it's not that hard for 'moral disgust' to explain broad moral consensus across very different cultures. For example, murder and rape within societies are almost always considered morally disgusting, across cultures, according to the anthropological research.

It's not that big a stretch to imagine that a global consensus could be developed that leverages these moral disgust instincts to stigmatize reckless AI development. As I argued here.

OK, so some societies have much higher murder rates than others. In some places, the local police de facto make murder between gang members legal by accepting small bribes and putting minimal effort into investigations.

The issue is runaway differential utility. The few examples of human technologies left unexploited do not have runaway utility. They have small payoffs delayed far into the future and large costs, and making even a small mistake makes the payoff negative.

Examples: genetic engineering, human medicine, nuclear power. Small payoffs, and the payoff turns negative on the smallest error.

AI is different. It appears to offer an immediate annual payoff of more than 100 percent. OpenAI's revenue on a model they state cost 68 million dollars to train is about 1 billion USD a month. Assuming a 10 percent profit margin (the rest pays for compute), that's over 100 percent annual ROI.
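(For concreteness, taking those figures at face value - they are rough estimates, not audited numbers - the arithmetic looks like this.)

```python
# Back-of-the-envelope ROI using the figures cited above
# (the commenter's rough estimates, not verified financials).

training_cost = 68e6      # stated training cost, USD
monthly_revenue = 1e9     # stated revenue, USD per month
profit_margin = 0.10      # assumed margin after paying for compute

annual_profit = monthly_revenue * 12 * profit_margin  # about 1.2 billion USD
annual_roi = annual_profit / training_cost            # profit as a multiple of training cost

print(f"Annual profit: ${annual_profit:.2e}")
print(f"ROI on training cost: {annual_roi:.0%} per year")  # well over 100 percent
```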

So a society that has less moral disgust towards AI would get richer. It spends its profits on more AI hardware and more research. Over time it owns a larger and larger fraction of all assets and revenue on earth. This is how the EMH forces companies towards optimal strategies: over time, the ones that fail to adopt them fail financially. (They fail when their cost of production becomes greater than the market price for a product. Example: Sears, which failed to modernize its logistics chain, so eventually its cost to deliver retail goods exceeded the market price for those goods.)

Moreover, other societies, forced to compete, have to drop some of their moral disgust, and I suspect this scenario ends up like a ratchet, where some society inevitably sheds all of its disgust in order to compete.

Pauses, multilateral agreements, etc. can slow this down, but how much time they buy depends on how fast the gains are. Unilateral agreements just free TSMC up to manufacture AI chips for the parties not signing the agreement.

OK, that sounds somewhat plausible, in the abstract.

But what would be your proposal to slow down and reduce extinction risk from AI development? Or do you think that risk is so low that it's not worth trying to manage it?

My proposal is to engineer powerful and reliable AI immediately, as fast as feasible. If this is the true endgame - whoever wins the race owns the planet, if not the accessible universe - then spending and effort should be proportional. It's the only way.

You deal with dangerous, out-of-control AIs by tasking your reliable models with destroying them.

The core of the approach is to subdivide and validate all the subtasks. No single model manufactures the drones used to do this by itself; it's thousands of temporary instances. You filter the information that reaches the combat solvers deciding how to task each drone to destroy the enemy, so any begging from the enemy is never processed. You design the drones with lots of low-level interlocks to prevent obvious misuse, and their controllers might use conventional software, so they cannot be convinced not to carry out the mission because they can't understand language.

The general concept is that if 99 percent of the drones are "safe" like this, then even if escaped models are smart, they just can't win.

Or in more concrete terms, I am saying that a simple, reliable combat solver is not going to be much worse than a more complex one - that superintelligence saturates. Simple, reliable hypersonic stealth drones are still almost as good as whatever a superintelligence cooks up, etc. It's an assumption about available utility relative to compute.

This really isn’t the right post for most of those issues/questions, and most of what you mentioned are things you should be able to find via searches on the forum, searches via Google, or maybe even just asking ChatGPT to explain it to you (maybe!). TBH your comment also just comes across quite abrasive and arrogant (especially the last paragraph), without actually appearing to be that insightful/thoughtful. But I’m not going to get into an argument on these issues.

[This comment is no longer endorsed by its author]

At risk of compounding the hypocrisy here, criticizing a comment for being abrasive and arrogant while also saying "your ideas are neither insightful nor thoughtful, just Google/ChatGPT it" might be showing a lack of self-awareness...

But agreed that this post is probably not the best place for an argument on the feasibility of a pause, especially as the post mentions that David M will make the case for how a pause would work as part of the debate. If your concerns aren't addressed there, Gerald, that post will probably be a better place to discuss.

[This comment is no longer endorsed by its author]

Strange, unless the original comment from Gerald has been edited since I responded I think I must have misread most of the comment, as I thought it was making a different point (i.e., "could someone explain how misalignment could happen"). I was tired and distracted when I read it, so it wouldn't be surprising. However, the final paragraph in the comment (which I originally thought was reflected in the rest of the comment) still seems out of place and arrogant.

The 'final paragraph' was simply noting that when you try to make AI risks concrete - not an abstract machine that is overwhelmingly smarter than human intelligence and randomly aligned, but a real machine that humans have to train and run on their computers - numerous technical mitigation methods become obvious. The one I was alluding to was (1):

(1) https://www.lesswrong.com/posts/C8XTFtiA5xtje6957/deception-i-ain-t-got-time-for-that

Sparsity and myopia are general alignment strategies, and as it happens, they are also general software engineering practices. Many of the alignment enthusiasts on LessWrong have rediscovered software architectures that already exist - and not just exist, but are core to software systems ranging from avionics to web hyperscalers.

Sparsity happens to be TDD.

Myopia happens to be stateless microservices.  
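(One way to read that analogy, as a toy sketch: a stateless handler, like a myopic system, acts only on its current input and carries nothing over from previous calls. This is just an illustration of statelessness, not a claim about how any actual alignment scheme is implemented.)

```python
# Toy illustration of "myopia as statelessness": a stateful handler remembers
# previous calls, while a stateless (myopic) handler depends only on its input.

def stateful_handler_factory():
    history = []                               # persistent state across calls ("non-myopic")
    def handle(request: str) -> str:
        history.append(request)
        return f"requests seen so far: {len(history)}"
    return handle

def stateless_handler(request: str) -> str:
    # No memory of prior requests: output depends only on this input ("myopic").
    return f"processed: {request}"

stateful = stateful_handler_factory()
print(stateful("task A"))            # behavior depends on call history
print(stateful("task B"))
print(stateless_handler("task A"))   # behavior depends only on the input
print(stateless_handler("task B"))
```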
