Joseph Miller

524 karma

Comments (22)

Yup.

the small movement that PauseAI builds now will be the foundation which bootstraps this larger movement in the future

This is one of the main points of my post. If you support PauseAI today, you may unleash a force which you cannot control tomorrow.

I agree this is slightly hyperbolic. If you include the disappearance of Ilya Sutskever, there are three. And I know of two more that were less widely reported. Depending on how narrow your definition of a "safety-focused researcher" is, five people leaving in less than six months is fairly significant.

Thanks, Rudolf. I think this is a very important point, and probably the best argument against PauseAI. It's true in general that The Ends Do Not Justify the Means (Among Humans).

My primary response is that you are falling for status-quo bias. Yes, this path might be risky, but the default path is more risky. My perception is that the current governance of AI is on track to let us run some terrible gambles with the fate of humanity.

Consider environmentalism. It seems quite uncertain whether the environmentalist movement has been net positive (!).

We can play reference class tennis all day, but I can counter with the examples of the Abolitionists, the Suffragettes, the Civil Rights movement, Gay Pride, or the American XL Bully.

It seems to me that people overstate the track record of populist activism at solving complicated problems
...
the science is fairly straightforward, environmentalism is clearly necessary, and the movement has had huge wins

As I argue in the post, I think this is an easier problem than climate change. Just as most people don't need a detailed understanding of the greenhouse effect, most people don't need a detailed understanding of the alignment problem ("creating something smarter than yourself is dangerous").

The advantage with AI is that there is a simple solution that doesn't require anyone to make big sacrifices, unlike with climate change. With PauseAI, the policy proposal is right there in the name, so it is harder for the message to become distorted than with vaguer goals like "environmental justice".

fighting Moloch rather than sacrificing our epistemics to him for +30% social clout

I think to a significant extent it is possible for PauseAI leadership to remain honest while still having broad appeal. Most people are fine if you say, "I in particular care mostly about x-risk, but I would like to form a coalition with artists who have lost work to AI."

There is a spirit here, of truth-seeking and liberalism and building things, of fighting Moloch rather than sacrificing our epistemics to him for +30% social clout. I admit that this is partly an aesthetic preference on my part. But I do believe in it strongly.

I'm less certain about this, but I think the evidence is much weaker than rationalists would like to believe. Consider: why has no successful political campaign ever run on actually good, nuanced policy arguments? Why do advertising campaigns not make rational arguments for why you should prefer their product, instead of appealing to your emotions? Why did it take until 2010 for people to have the idea of actually trying to figure out which charities are effective? The evidence is overwhelming that emotional appeals are the only way to persuade large numbers of people.

If we make the conversation about AIS more thoughtful, reasonable, and rational, it increases the chances that the right thing (whatever that ends up being - I think we should have a lot of intellectual humility here!) ends up winning.

Again, this seems like it would be good, but the evidence is mixed. People were making thoughtful arguments for why pandemics are a big risk long before Covid, but the world's institutions were sufficiently irrational that they failed to actually do anything. If there had been an emotional, epistemically questionable mass movement calling for pandemic preparedness, that would have probably been very helpful.

Most economists seem to agree that European monetary policy is pretty bad and significantly harms Europe, but our civilization is too inadequate to fix the problem. Many people make great arguments about why aging sucks and why fixing it should really be a top priority, but it's left to Silicon Valley to actually do something. Similarly for shipping policy, human challenge trials, and starting school later. There is a long list of preventable, disastrous policies which society has failed to fix due to lack of political will, not lack of sensible arguments.

The main message of this post is that the primary purpose of current PauseAI protests is to build momentum for a later point in time.

This post is just my view. As with Effective Altruism, PauseAI does not have a homogeneous point of view or a specific set of beliefs required to participate. I expect that the main organizers of PauseAI agree that GPT-5 is very unlikely to end the world. Whether they think it poses an acceptable risk, I'm not sure.

Notably, I doubt we'll discover the difference between GPT4 and superhuman to be small and I doubt GPT5 will be extremely good at interpretability.

I also doubt it, but I am not 1 in 10,000 confident.

This paragraph seems too weak for how important it is in the argument. Notably, I doubt we'll discover the difference between GPT4 and superhuman to be small and I doubt GPT5 will be extremely good at interpretability.

The important question for the argument is whether GPT-6 will pose an unacceptable risk.

There's a very important crux here. If you only want to attend protests where the protesters are reasonable, well informed, and agree with you, then you implicitly only want to attend small protests.

It seems pretty clear to me that most people are much less concerned about x-risk than about job loss and other concerns. So we have to make a decision: do we stick to our guns and have the most epistemically virtuous protest movement in history, making it 10x harder to recruit new people and grow the movement? Or do we compromise, welcome people with many different concerns, and form alliances with groups we don't agree with, in order to have a large and impactful movement?

It would be a failure of instrumental rationality to demand the former. This is just a basic reality about solving coordination problems.

[To provide a counterargument: having a big movement that doesn't understand the problem is not useful. At some point, the misalignment between the movement and the true objective will be catastrophic.

I don't really buy this because I think that pausing is a big and stable enough target and it is a good solution for most concerns.]

This is something I am actually quite uncertain about, so I would like to hear your opinion.

  • What is the risk level below which you'd be OK with unpausing AI?

I think approximately a 1 in 10,000 chance of extinction for each new GPT would be acceptable given the benefits of AI. This is approximately my guess for GPT-5, so if we could release that model and then pause, I'd be okay with that.

A major consideration here is the use of AI to mitigate other x-risks. Some of Toby Ord's x-risk estimates:

  • AI - 1 in 10
  • Engineered pandemic - 1 in 30
  • Unforeseen anthropogenic risks (e.g. dystopian regime, nanotech) - 1 in 30
  • Other anthropogenic risks - 1 in 50
  • Nuclear war - 1 in 1000
  • Climate change - 1 in 1000
  • Other environmental damage - 1 in 1000
  • Supervolcano - 1 in 10,000

If there were a concrete plan under which AI could be used to mitigate pandemics and anthropogenic risks, then I would be okay with a higher probability of AI extinction, but it seems more likely that AI progress would increase these risks before it decreased them.

AI could be helpful for climate change and, eventually, nuclear war, so maybe I should be willing to go a little higher on the risk. But we might need a few more GPTs to fix these problems, and if each new GPT adds a 1 in 10,000 risk, then it starts to even out.
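To make the arithmetic behind "starts to even out" explicit, here is a rough back-of-the-envelope sketch (the per-model risk and the number of further releases are just the assumptions from this comment, not established figures):

```python
# Back-of-the-envelope comparison, assuming (as above) roughly a 1-in-10,000
# extinction risk per new GPT release and a few further releases before AI
# could meaningfully reduce other risks. All figures are illustrative.

per_model_risk = 1 / 10_000   # assumed risk added by each new GPT
n_future_models = 3           # hypothetical number of further releases needed

# Cumulative risk if each release is an independent 1-in-10,000 gamble
cumulative_risk = 1 - (1 - per_model_risk) ** n_future_models

# Toby Ord's estimates for two risks AI might eventually help mitigate
nuclear_war_risk = 1 / 1_000
climate_change_risk = 1 / 1_000

print(f"Accumulated AI risk over {n_future_models} releases: {cumulative_risk:.3%}")
print(f"Nuclear war estimate: {nuclear_war_risk:.3%}")
print(f"Climate change estimate: {climate_change_risk:.3%}")
# Roughly 0.03% accumulated vs 0.1% for each of nuclear war and climate
# change: the accumulated risk is within the same order of magnitude, which
# is the sense in which the trade-off "starts to even out".
```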

  • What do you think about the potential benefits from AI?

I'm very bullish about the benefits of an aligned AGI. Besides mitigating x-risk, I think curing aging should be a top priority and is worth taking some risks to obtain.

I've read the post quickly, but I don't have a background in economics, so it would take me a while to fully absorb. My first impression is that it is interesting but not that useful for making decisions right now. The simplifications required by the model offset the gains in rigor. What do you think? Is it something I should take the time to understand?

My guess would be that the discount rate is pretty cruxy. Intuitively, I would expect almost any gains over the next 1,000 years to be outweighed by reductions in x-risk, since we could have zillions of years to reap the benefits. (On a meta-level, I believe moral questions are not "truthy", so this is just according to my vaguely total utilitarian preferences, not some deeper truth.)
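To illustrate why the discount rate is so cruxy, here is a toy calculation (all of the numbers, including the hypothetical 0.1% x-risk reduction and the time horizons, are made up purely for illustration):

```python
import math

# Toy model: a constant annual benefit stream, discounted continuously.
# Parameter values are arbitrary and only meant to show how the conclusion
# flips with the discount rate.

def present_value(annual_value, years, discount_rate):
    """Present value of a constant annual benefit stream over `years`."""
    if discount_rate == 0:
        return annual_value * years
    return annual_value * (1 - math.exp(-discount_rate * years)) / discount_rate

NEAR_TERM_YEARS = 1_000            # "gains over the next 1,000 years"
LONG_FUTURE_YEARS = 1_000_000_000  # stand-in for "zillions of years"
XRISK_REDUCTION = 0.001            # hypothetical 0.1% cut in extinction risk

for r in (0.0, 0.01):
    near_term = present_value(1.0, NEAR_TERM_YEARS, r)
    xrisk_benefit = XRISK_REDUCTION * present_value(1.0, LONG_FUTURE_YEARS, r)
    print(f"discount rate {r:.0%}: next-1,000-years gains ~ {near_term:,.1f}, "
          f"0.1% x-risk reduction worth ~ {xrisk_benefit:,.1f}")

# With a 0% discount rate the x-risk reduction dominates by orders of
# magnitude; with even a 1% rate the near-term gains dominate instead.
```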

EA promoted earning to give. When the movement largely moved away from it, not enough work was done to make that distance

Why would we want to do that? Earning to give is a good way to help the world. Maybe not the best, but still good.

It's also worth remembering that this is advertising. Claiming to be a little bit better on some cherry picked metrics a year after GPT-4 was released is hardly a major accelerant in the overall AI race.

Fair point. On the other hand, the perception is in many ways more important than the actual capability in terms of incentivizing competitors to race faster.

Also, based on early user reports, it seems to actually be noticeably better than GPT-4.
