This post was published for draft amnesty day, so it’s less polished than the typical EA forum post.
Epistemic status: I think there's probably something wrong in my reasoning, posting here to figure out what it is, in the spirit of Cunningham's Law [1].

 

GiveWell estimates that $300 million in marginal funding would result in ~30,000 additional lives saved. That's $10,000 per life; at roughly 50 years (~20,000 days) gained per life saved, it works out to very roughly $0.50 per day of life.

If you believe that there’s a higher than 10% chance of extinction via AGI[2], that means that delaying AGI by one day gains you 10% · 10¹⁰[3] ≈ 10⁹ expected life-days, equivalent to ~$0.5B in marginal GiveWell dollars (as a rough order of magnitude).
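For concreteness, here is the whole back-of-the-envelope calculation as a minimal Python sketch. The per-day cost, population, and extinction probabilities are just the assumptions stated above and in the footnotes; the loop applies footnote 2's linear scaling, and none of this is meant to be more than an order-of-magnitude check.

```python
# Rough BOTEC using the post's stated assumptions; not a model, just arithmetic.
GIVEWELL_COST_PER_LIFE_DAY = 0.50  # $300M / 30,000 lives / ~20,000 days per life
POPULATION = 1e10                  # footnote 3: roughly 10^10 humans

def value_of_one_day_of_delay(p_doom: float) -> float:
    """Expected GiveWell-equivalent dollars from delaying AGI-driven extinction by one day."""
    expected_life_days_gained = p_doom * POPULATION  # one extra day per person, in expectation
    return expected_life_days_gained * GIVEWELL_COST_PER_LIFE_DAY

# Footnote 2: the estimate is linear in the extinction probability.
for p in (0.01, 0.10, 1.00):
    print(f"P(extinction via AGI) = {p:.0%}: ~${value_of_one_day_of_delay(p):,.0f} per day of delay")
# 1%   -> ~$50,000,000
# 10%  -> ~$500,000,000
# 100% -> ~$5,000,000,000
```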

Potential disagreements and uncertainties:

  • Delaying AGI is, in expectation, going to make lives in the pre-AGI world worse.
    To me, this seems negligible compared to the risk of dying, unless you put the 0-point of a “life worth living” very high (e.g. you think ~half the current global population would be better off dead). If the current average value of a life is X, for an AGI transformation to make it go to 2X it would need to be extremely powerful and extremely aligned.
  • Under longtermism, the value of current lives saved is negligible compared to the value of future lives that are more likely to exist. So the only thing that matters is whether the particular method by which you delay AGI reduces x-risks.[4]
    I would guess that, probably, delaying AGI by default reduces the probability of x-risks by giving more time for a “short reflection”, and for the field of AI Alignment to develop.
  • Delaying AGI is not tractable, e.g. regulation doesn’t work.
    It seems to me that lots of people believe excessive regulation raises prices and slows down industries and processes. I don’t understand how that wouldn’t apply to AI in particular, or why the same argument isn’t made about nuclear power, healthcare, or other safety-sensitive, highly technical areas. And there are areas where differential technological development has happened in practice (e.g. human cloning and embryo DNA editing).
  • There's significantly less than a 1% risk from AGI for lives that morally matter.
    It's possible, and it's probably my main uncertainty, but I think it would require both narrow person-affecting views and a lot of certainty about AI timelines or consequences.

Proposals:


Curious to hear your thoughts!
 

  1. ^

    The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer. (Wikipedia)

  2. ^

    If you believe it’s ~100% just multiply by 10, if you believe it’s ~1% just divide by 10

  3. ^

    Human population is roughly 10^10 humans

  4. ^

    Extinction, unrecoverable collapse/stagnation, or flawed realization

Comments

I liked this post and would like to see more of people thinking for themselves about cause prioritization and doing BOTECs.

Some scattered thoughts below, also in the spirit of draft amnesty.

I had a little trouble understanding your calculations/logic, so I'm going to write them out in sentence form: GiveWell's current giving recommendations correspond to spending about $0.50 to give an additional person an additional day of life. A 10% chance of extinction from misaligned AI means that postponing misaligned AI by a day gets us 10% × the current population in person-days, or about 1 billion. If we take GiveWell's willingness to spend and extrapolate it to the scenario of postponing misaligned AI, we get that GiveWell might be willing to spend $500 million to postpone misaligned AI by a day.

I think it's important that these are different domains, and the number of people who would be just as happy to see their donation buy a bednet as lobby for tech regulation (assuming similar EV) is unfortunately small. Many donors care about much more than some cause-neutral measure of how much good their donation does. For instance, I seem to care (I'm confused about this) that some of my donations help extremely poor people.

You point out that maybe regulation doesn't work, but there's a broader problem: we don't have shovel-ready projects that can turn $500 million into postponing misaligned AI by a day. I suspect there are many interventions which could do this for much less, but they are not interventions which can just absorb money and save lives like many global health charities can (perhaps they need projects to be founded and take years to develop).

The above problems point to another important idea: the GiveWell bar is able to be where it is because of what projects to improve the world can actually be funded by GiveWell dollars — not because of some fact about the value of extending a life by a day. You might think about the GiveWell bar as: the cheapest scalable ways to save lives in the global health and development space can provide an additional day of life for $0.50. If you ask individuals in the US how much they would pay to extend their own life or that of a loved one by a day, you will get numbers much higher than this; if you look at spending on healthcare in the developed world, my guess is that it is very normal to spend thousands of dollars to extend a life by a day. GiveWell's bar for funding would be higher if there were other great opportunities for saving lives scalably for cheap (at least in global health and development).

An abstraction I notice I'm using is thinking about $0.50/person/day as the current market price. However, this is not an efficient market, for a number of reasons. This post draws the parallel of "hey look, at that price we should be willing to spend $0.5b on postponing misaligned AI by a day". However, if we actually had many opportunities to spend $0.5b on postponing misaligned AI by a day, the funding bar would increase, because there isn't enough money in the cause-neutral altruism bucket. 

Some implications: cause-neutral donors who put above negligible probability on existential risks from AI will probably get much more bang for their buck trying to reduce existential risks or buy time, at least contingent on there being projects that can absorb money in that space. More importantly, those working on reducing AI x-risk have a lot of work to do in terms of closing the cost-effectiveness gap between themselves and global health. By closing the gap I mean getting enough projects to exist in the space such that we can consistently take in more money and turn it into x-risk reduction or buying time. 

If you haven't read Astronomical Waste, you might like it. 

Hoo boy, I think we can get a paradox out of this:

GiveWell is looking at the loss of potential life for the people that currently exist. Save 30,000 lives that live for an average of 50 years each, and that's 30,000 × 50 = 1.5 million years' worth of potential life saved.

Suppose the population stayed constant at 10^10 people forever, with average 50 years left. 

In the event of a human extinction, the number of potential life years lost if the apocalypse happened today would be 50*10^10. 

Now what happens if we survive another hundred years, and the extinction happens then? 

Well, there are still 10^10 people averaging 50 years left, so the number of potential life years lost as a result of the apocalypse is… 50*10^10.  

So it seems like if you go by the GiveWell calculations, delaying the apocalypse makes no difference at all! In fact, if the population were to increase, then the amount of potential life lost in the extinction would be greater, so arguably delaying would make it worse.
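To make the tension explicit, here is a minimal sketch (my framing, reusing the constant-population and 50-years-left assumptions from the comment above) of the two accounting methods: life-years lost at the moment of extinction, which is unchanged by delay, versus life-days actually lived before extinction, which is what the original post's BOTEC implicitly counts.

```python
# Stylized assumptions from the comment above: constant population, ~50 years left each.
POPULATION = 1e10
YEARS_LEFT = 50

# Metric 1: potential life-years lost at the moment of extinction.
loss_if_today = POPULATION * YEARS_LEFT        # 5e11 life-years
loss_in_100_years = POPULATION * YEARS_LEFT    # still 5e11 life-years: delay changes nothing

# Metric 2: life-days actually lived before extinction (what the post's BOTEC counts).
extra_life_days_per_day_of_delay = POPULATION  # every day of delay adds one day per person alive

print(loss_if_today == loss_in_100_years)      # True: metric 1 ignores the delay entirely
print(f"{extra_life_days_per_day_of_delay:.0e} extra person-days per day of delay")  # 1e+10
```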

Ride the current wave of AI skepticism from people worried about AI being racist or about being replaced and left unemployed, and lobby for significantly more government involvement to slow down progress (as the FDA does in medicine).

 

I agree! In recent days, I've been soundboarding an idea of mine: 


Idea: AI Generated Content (AIGC) Policy Consultancy

Current Gaps:
1. Policy around services provided by AIGC is probably not gonna be good within the next decade, despite the speed with which AI will begin automating tasks and industries. See: social media, crypto policy.

2. AI Safety community currently struggles with presenting strong, compelling value propositions or near-term inroads into policymaking circles. This is consistent with other x-risk topics. See: climate and pandemic risk.


Proposition: EA community gathers law and tech people together to formulate an AIGC policy framework. Will require ~10 tech/law people, which is quite feasible as an EA project.


Benefits:

1. Formulating AIGC policy will establish credibility and political capital to tackle alignment problems

2. AIGC is the most understandable way to present AI risk to the public, allowing AIS to reach mainstream appeal

3. Playing into EA’s core competencies of overanalysing problems

4. Likely high first-mover advantage: if EA can set the tone for AI policy discourse, it will mitigate misconceptions about AI as a new technology, which of course benefits AIS in the long run

Further Thoughts

Coming from a climate advocate background, I think this is the least low-probability way for EA to engage the public and policymakers on AIS. It seeks to answer “How do we get politicians to take EA’s AIS stances seriously?”

I find that some AIS people I've talked to don't immediately see the value of this idea. However, my context is that having been a climate advocate, I learned of an incredibly long history of scientists' input being ignored simply because the public and policymakers did not prioritise the value of climate risk work.

It was ultimately advocacy, predominantly by youth, that mobilised institutional resources and demand to the level required. I highly suspect this will hold true for AI Safety, and I hope this time the x-risk community doesn't make the same mistake of undervaluing external support. So this plan is meant to provide a value proposition for AI Safety that non-AIS people understand better.

So far, I haven't been able to make much progress on this idea. Problem being that I am neither in the law field nor technical AIS field (something I hope to work on next year), so if it happens, I essentially need to find someone else to spearhead it.

Anyway, I posted this idea publicly because I've been procrastinating on developing it for ~1 week, so I figured it was better to send it out into the ether and see if anyone feels inspired, rather than just let it sit in my Drafts. Do reach out if you or anyone you know might be interested!

[anonymous]

[deleted]

This assumes that there is a 10% chance of extinction via AGI per day. I don't think anyone believes it to be that high; frankly if it ever gets that high we've already lost.


I don't think so; I think it assumes a 10% chance of extinction, once, after we get AGI.

E.g. "we get AGI in 2050 -> 10B people die (on average) in 2060 with a 10% chance" vs "we get AGI in 2050 + 1 day -> 10B people die (on average) in 2060 + 1 day with a 10% chance"

[anonymous]

[Edit: this post has been updated, and this comment applies substantially less now. See this thread for details. ]

As a longtermist, I think this post is bad and harmful. I strongly dislike this framing, and I think it's very unhealthy for an altruistic community. 

First, I think the Fermi estimate here is not good, principally for a lack of any discounting and for failing to incorporate the objections raised in the post into the actual estimate. But I'll leave the specifics of the back-of-the-envelope estimate aside in favor of putting emphasis on what I think is the most harmful thing.

Pitting X-risks against other ways of making the world better (1) is extremely unlikely to convince anyone to work on x-risk who isn't already doing so, (2) hedges on very unlikely risky scenarios without incorporating principles involving discounting, epistemic humility, or moral uncertainty, (3) is certain to alienate people and is the kind of thing that makes enemies--not friends--which reduces the credibility and sociopolitical capital of longtermism, and (4) is very disrespectful toward real people in the real world who suffer from real, large problems that GiveWell charities try to address.

I would encourage deleting this post. 

I think it's good to make object-level criticisms of posts, but I think it's important that we encourage rather than discourage posts that make a genuine attempt to explore unusual ideas about what we should prioritise, even if they seem badly wrong to you. That's because people can make up their own minds about the ideas in a post, and because some of these posts that you're suggesting be deleted might be importantly right.

In other words, having a community that encourages debate about the important questions seems more important to me than one that shuts down posts that seem "harmful" to the cause.

[anonymous]

I generally agree, but not in this specific case for two reasons. First, I think there are more thorough, less provocative, strictly better discussions of this kind of thing already. See writing from Beckstead, Bostrom, etc. Second, I think there are specific direct harms this post could have. See my latest reply to the OP on the other branch of this thread. 

Oh no, I'm sorry if that's the case!

I'm unsure if deletion is the right response to bad posts (which this one likely is!), instead of explaining why the post is bad so that others can understand that it's wrong (and that the forum thinks it's wrong, which I guess could be as important!).
For context, I'm not a longtermist. I'm just worried about global catastrophic risks, since a billion people is a lot of people, and the marginal cost per life saved according to GiveWell seems relatively high (~$10k/life).

 

1) is extremely unlikely to convince anyone to work on x-risk who isn't already doing so

My personal current career trajectory hinges a bit on this :/
Like, is it more likely for me to (help) influence AI timelines or billions of capital?

2) hedges on very unlikely risky scenarios without incorporating principles involving discounting, epistemic humility, or moral uncertainty

Is that the same as There's significantly less than a 1% risk from AGI for lives that morally matter (which I agree is my main uncertainty), or is it a different consideration?

3)  is certain to alienate people and  is the kind of thing that makes enemies--not friends--which reduces the credibility and sociopolitical capital of longtermism

What would make friends and not enemies? In a conflict between e.g. workers/artists and AI companies that want to stay unregulated, can you avoid making enemies while helping one side?

4) is very disrespectful toward real people in the real world who suffer from real, large problems that GiveWell charities try to address. 

I am mostly worried about real people in the real world that (maybe) suffer from a real large risk. I think a marginal GiveWell dollar might help us real people less than lowering those risks.

[anonymous]

Is that the same as There's significantly less than a 1% risk from AGI for lives that morally matter (which I agree is my main uncertainty), or is it a different consideration?

I believe so. This post is about one day of delayed extinction. Not about preventing it. Not tryna split hairs tho.

What would make friends and not enemies?

Not using x-risks to imply that donating to GiveWell charities is of trivial relative importance. It's easy to talk about the importance of x-risks without making poverty and health charities the direct comparison. 

I am mostly worried about real people in the real world that (maybe) suffer from a real large risk.

I still presume you care about people who suffer from systemic issues in the world. This kind of post would not be the kind of thing that would make anyone like this feel respected. 


A case for deletion. Consider a highly concrete and pretty likely scenario. Émile Torres finds out about this post, tweets about it along with a comment about moral rot in EA, and gets dozens of retweets and a hundred likes. Then Timnit Gebru retweets it along with another highly negative comment and gets hundreds of retweets and a thousand likes. This post contributes to hundreds or more people more actively disliking EA--especially because it's on the actual EA Forum and not a more ignorable comment from someone in a lower-profile space.

I would recommend weighing the possible harms of this post getting tons of bad press against how likely you think it is to positively change anyone's mind or lead to high-quality discussion. My belief here is that deleting it might be very positive in EV.

Do you think it would be possible to edit this post to make it less harmful/bad/wrong, and still allow me to get feedback on what's wrong with my thinking? (I think something's wrong, and posted asking for feedback/thoughts).

E.g. keeping feedback like this

 

 

It's easy to talk about the importance of x-risks without making poverty and health charities the direct comparison. 

For me it is the direct comparison that matters, though; I need to choose between those two.

I believe so.

I don't understand, you believe which one?

I still presume you care about people who suffer from systemic issues in the world. This kind of post would not be the kind of thing that would make anyone like this feel respected. 

Does that also  apply to any post about e.g. animal welfare and climate change?

As for damage: maybe I can write more clearly that I'm probably wrong and that I'm a random anonymous account? Would be happy to edit this post!

[anonymous]

Does that also  apply to any post about e.g. animal welfare and climate change?

This would apply to a post titled "Reducing carbon emissions by X may be equivalent to $500M in donations to GiveWell charities."

On the question of deleting

  • I don't think this post will be particularly good at sparking good conversations. 
  • I think it would be better to have a different post that makes more effort in the estimation proposed and clearly asks a question in the title.
  • Relatedly, I think the large majority of the potential downside of this post comes from the title. Someone like Torres may have no interest in reading the actual post or taking any nuance into account when commenting on it. They likely wouldn't even read anything beyond the title; they'd just do their punditry-troll thing, and the title gives exactly the kind of ammunition they want.
     

Edited the title, do you think this is good enough?

Could you please share your own estimate? Since at the end of the day we do need to decide what to work on.

[anonymous]

I believe this is a big improvement. 

I work on AI safety tools. I believe this might be the most important thing for someone like me to do FWIW. I think AI doom is not likely but likely enough to be my personal top priority.  But when I give money away I do it to GiveWell charities for reasons involving epistemic humility, moral uncertainty, and my belief in the importance of a balanced set of EA priorities. 

I'm interested in why you don't think AI doom is likely, given that a lot of people in the AI safety space seem to suggest it's reasonably likely (>10% likelihood in the next 10 or 20 years).

[anonymous]

My guess is like 5-10%

Thank you for the pushback on the title!

I wonder what your thoughts are on delaying timelines instead of working on tooling, but I guess it might hinge on being more longtermist and on personal fit.

[anonymous]

I very badly want to delay timelines, especially because doing so gives us more time to develop responses, governance strategies, and tools to handle rapid changes. I think this is underemphasized. And lately, I have been thinking that the most likely thing that could make me shift my focus is the appeal of work that makes it harder to build risky AI or that improves our ability to respond to or endure threats. This contrasts with my current work which is mostly about making alignment easier.  

