Comment author: pmelchor (EA Profile) 15 August 2018 02:46:04PM 0 points

Thanks, Carl. I fully agree: if we are convinced it is essential that we act now to counter existential risks, we must definitely do that.

My question is more theoretical (feel free to not continue the exchange if you find this less interesting). Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there. An argument in favour of weighting the long-term future heavily could still be valid (there could be many more people alive in the future and therefore a great potential for either flourishing or suffering). But how should we weight that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

Comment author: Carl_Shulman 15 August 2018 06:03:42PM 3 points

Imagine we lived in a world just like ours but where the development of AI, global pandemics, etc. are just not possible: for whatever reason, those huge risks are just not there.

If that were the only change, our century would still look special with regard to the possibility of lasting changes short of extinction, e.g. as discussed in this post by Nick Beckstead. There is also the astronomical waste argument: a 1-year delay in interstellar colonization means losing all the galaxies that are reachable (before separation by the expansion of the universe) by colonization begun in year n-1 but not in year n. The population of our century is vanishingly small compared to future centuries, so the ability of people today to affect the colonized volume is accordingly vastly greater on a per capita basis, and the loss of reachable galaxies to delayed colonization is irreplaceable as such.
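
For concreteness, here is a minimal sketch of the arithmetic behind that argument. Every number in it (galaxy count, loss rate, population figures) is a placeholder assumption chosen only to show the structure of the claim, not a figure from the comment or the underlying papers.

```python
# Schematic illustration of the astronomical waste / per-capita leverage point.
# Every number below is a placeholder assumption, used only to show the arithmetic.

reachable_galaxies = 4e9        # assumed galaxies reachable if colonization starts now
galaxies_lost_per_year = 1.0    # assumed rate at which galaxies pass beyond reach
current_population = 7.6e9      # rough population of our century
future_population = 1e13        # assumed population of a much larger future century

# Irreversible loss from delaying the start of colonization by one year:
delay_years = 1
galaxies_forfeited = galaxies_lost_per_year * delay_years
print(f"Galaxies forfeited by a {delay_years}-year delay: {galaxies_forfeited:g}")

# Per-capita influence over the colonized volume, now vs. in a populous future:
print(f"Galaxies per person today: {reachable_galaxies / current_population:.2e}")
print(f"Galaxies per person later: {reachable_galaxies / future_population:.2e}")
# The ratio of the last two numbers is future_population / current_population,
# which is why the per-capita leverage of people today is so much greater.
```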

So we would still be in a very special and irreplaceable position, but less so.

For our low-population generation to really not be in a special position, especially per capita, it would have to be the case that none of our actions have effects on much more populous futures as a whole. That would be very strange, but if it were true then there wouldn't be any large expected impacts of actions on the welfare of future people.

But how should we weight that against the responsibility to help people alive today, since we are the only ones who can do it (future generations will not be able to replace us in that role)?

I'm not sure I understand the scenario. This sounds like a case where an action to do X makes no difference because future people will do X (and are more numerous and richer). In terms of Singer's drowning child analogy, that would be like a case where many people are trying to save the child and extras don't make the child more likely to be saved, i.e. extra attempts at helping have no counterfactual impact. In that case there's no point in helping (although it may be worth trying if there is enough of a chance that extra help will turn out to be necessary after all).

So we could consider a case where there are many children in the pond, say 20, and other people are gathered around the pond and will save 10 without your help, but 12 with your help. There are also bystanders who won't help regardless. However, there is also a child on land who needs CPR, and you are the only one who knows how to provide it. If you provide the CPR instead of pulling children from the pond, then 10+1=11 children will be saved instead of 12. I think in that case you should save the two additional children from drowning instead of the one child needing CPR: even though your ability to help with CPR is more unique, it is less effective.

Likewise, it seems to me that if we have special reason to help current people at the expense of much greater losses to future generations, it would be because of flow-through effects, or some kind of partiality (like favoring family over strangers), or some other reason to think the result is good (at least by our lights), rather than just that future generations cannot act now (by the same token, billions of people could but don't intervene to save those dying of malaria or suffering in factory farms today).

Comment author: pmelchor (EA Profile) 11 August 2018 10:45:31PM 5 points

I think there is an 11th reason why someone may want to work on near-term causes: while we may be replaceable by the next generations when it comes to working on the long-term future, we are irreplaceable when it comes to helping people / sentient beings who are alive today. In other words: influencing what may happen 100 years from now can be done by us, our children, our grandchildren and so on; however, only we can help, say, the 700 million people living in extreme poverty today.

I have not come across the counter-arguments for this one: has it been discussed on previous posts or related material? Or maybe it is a basic question in moral philosophy 101 and I am just not knowledgeable enough :-)

Comment author: Carl_Shulman 12 August 2018 10:12:06PM 3 points

The argument is that some things in the relatively near term have lasting effects that cannot be reversed by later generations. For example, if humanity goes extinct as a result of war with weapons of mass destruction this century, before it can become more robust (e.g. by being present on multiple planets, creating lasting peace, etc.), then there won't be any future generations to act in our stead (at least not for the many millions of years it would take another species to follow in our footsteps, if that happens at all before the end of the Earth's habitability).

Likewise, if our civilization were replaced this century by unsafe AI with stable, less morally valuable ends, then future generations over millions of years would be controlled by AIs pursuing those same ends.

This period appears exceptional over the course of all history so far in that we might be able to destroy or permanently worsen the prospects of civilization as a result of new technologies, but before we have reached a stable technological equilibrium or dispersed through space.

Comment author: RandomEA 09 August 2018 01:52:58PM 1 point

Do you know if it's just a fund for other large donors? It seems unusual to require small donors to send an email in order to donate.

If the fund is open to small donors, I hope CEA will consider mentioning it on the EA Funds website and the GWWC website.

Comment author: Carl_Shulman 09 August 2018 06:10:22PM 4 points

I don't know; you could email and ask. If Chloe wanted to take only large donations, one could use a donor lottery to turn a small donation into a chance of a large one.

Comment author: RandomEA 09 August 2018 04:55:09AM 1 point

Would it be a good idea to create an EA Fund for U.S. criminal justice? It could potentially be run by the Open Phil program officer for U.S. criminal justice since it seems like a cause area where Open Phil is unlikely to fund everything the program officer thinks should be funded, which makes it more likely that extra funding can be spent effectively.

This could help attract more people into effective altruism. However, that could be bad if you think those people are less likely to fully embrace the ideas of effective altruism and thus would dilute the community.

Comment author: Carl_Shulman 09 August 2018 07:21:56AM 3 points

Would it be a good idea to create an EA Fund for U.S. criminal justice?

Open Phil's Chloe Cockburn has a fund for external donors. See Open Phil's recent blog post:

Chloe Cockburn leads our work in this area, and as such has led our outreach to other donors. To date, we estimate that her advice to other donors (i.e., other than Dustin and Cari) has resulted in donations moved (in the same sense as the metric GiveWell tracks) that amount to a reasonable fraction (>25%) of the giving she has managed for Open Philanthropy.

It appears that interest in her recommendations has been growing, and we have recently decided to support the creation of a separate vehicle - the Accountable Justice Action Fund - to make it easier for donors interested in criminal justice reform to make donations to a pool of funds overseen by Chloe. The Fund is organized as a 501(c)(4) organization; those interested in contributing to AJAF should contact us.

Comment author: Peter_Hurford (EA Profile) 07 August 2018 11:11:17PM 1 point

This is really cool, Carl. Thanks for sharing. Do superforecasters ever make judgments about other x-risks?

Comment author: Carl_Shulman 08 August 2018 01:47:27AM 3 points

Not by default, but I hope to get more useful, EA-action-relevant forecasts performed and published in the future.

In response to When causes multiply
Comment author: Carl_Shulman 07 August 2018 09:31:59PM 7 points

"Note also that while we’re looking at such large pools of funding, the EA community will hardly be able to affect the funding ratio substantially. Therefore, this type of exercise will often just show us which single cause should be prioritised by the EA community and thereby act additive after all. This is different if we look at questions with multiplicative factors in which the decisions by the EA community can affect the input ratios like whether we should add more talent to the EA community or focus on improving existing talent."

I agree that multiplicative factors are a big deal for areas where we collectively have strong control over key variables, rather than trying to move big global aggregates. But I think it's the latter that we have in mind when talking about 'causes', rather than interventions or inputs within particular causes (e.g. investment in hiring vs. the activities of current employees). For example:

"Should the EA community focus to add its resources on the efforts to reduce GCRs or to add them to efforts to help humanity flourish?"

If you're looking at global variables like world poverty rates or the total risk of extinction, it requires quite a lot of absolute impact to make much of a proportional change.

E.g. if you reduce the prospective risk of existential catastrophe from 10% to 9%, you might increase the benefits of saving lives through AMF by a fraction of a percent, since it would be more likely that civilization survives to see the benefits of the AMF donations. But a 1 percentage point change would be unlikely to drastically alter allocations between catastrophic risks and AMF. And a 1 percentage point change in existential risk is an enormous impact: even in terms of current humans (relevant for comparison to AMF), that could represent tens of millions of expected current lives (depending on the timeline of catastrophe), and the impact is immense when considering other kinds of beings and future generations. If one were having such amazing impact in a scalable fashion, it would seem worth going further at that point.
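
As a rough illustration of that scale, here is a minimal back-of-the-envelope sketch. The world population figure and the share of current people who would die in such a catastrophe are assumptions added for illustration, not numbers from the comment.

```python
# Back-of-the-envelope: expected current lives saved by a 1 percentage point
# reduction in the probability of existential catastrophe.
# All inputs are illustrative assumptions, not estimates from this thread.

current_population = 7.6e9           # assumed world population
risk_reduction = 0.01                # 10% -> 9%, i.e. 1 percentage point
share_killed_within_lifetimes = 0.5  # assumed chance the catastrophe would come
                                     # soon enough to kill the people alive today

expected_current_lives = current_population * risk_reduction * share_killed_within_lifetimes
print(f"Expected current lives saved: {expected_current_lives:,.0f}")
# About 38,000,000 under these assumptions, i.e. "tens of millions" as in the comment above.
```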

Diminishing returns of our interventions on each of these variables seem a much more important consideration than multiplicative effects between them: the cost per percentage point of existential risk reduced is likely to grow many times as one moves along the diminishing returns curve.

"We could also think of the technical ideas to improve institutional decision making like improving forecasting abilities as multiplying with those institution’s willingness to implement those ideas."

If we're thinking about institutions like national governments, changing their willingness to implement the ideas seems much less elastic than improving the methods. If we look at a much narrower space, e.g. the EA community or a few actors in some core areas, the multiplicative factors matter more for key fields and questions.

If I were going to look for cross-cause multiplicative effects, it would likely be for their effects on the EA community (e.g. people working on cause A generate some knowledge or reputation that helps improve the efficiency of work on cause B, which has more impact if cause B efforts are larger).

Comment author: Denise_Melchin 07 August 2018 11:32:35AM 3 points

Do you have private access to the Good Judgment data? I've thought before about how it would be good to get superforecasters to answer such questions, but didn't know of a way to access the results of previous questions.

(Though there is the question of how much superforecasters' previous track record on short-term questions translates to success on longer-term questions.)

Comment author: Carl_Shulman 07 August 2018 07:38:27PM 5 points

GJ results (as opposed to Good Judgment Open results) aren't public, but Open Phil has an account with them. This is from a batch of nuclear war probability questions I suggested that Open Phil commission to help assess nuclear risk interventions.

Comment author: kbog (EA Profile) 06 August 2018 11:29:26PM 0 points

Don't forget the doomsday argument.

https://arxiv.org/abs/1705.08807 has a question about the probability that the outcome of AI will be "extremely bad."

Where in the Stern report are you looking?

Comment author: Carl_Shulman 07 August 2018 01:25:14AM 7 points

The fixed 0.1% extinction risk is used as a discount rate in the Stern report. That closes the model to give finite values (instead of infinite benefits) after they exclude pure temporal preference discounting on ethical grounds. Unfortunately, the assumption of infinite confidence in a fixed extinction rate gives very different (lower) expected values than a distribution that accounts for the possibility of extinction risks eventually becoming stably low for long periods (the Stern version gives a probability of less than 1 in 20,000 of civilization surviving another 10,000 years, even though agriculture is already 10,000 years old).
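
A quick check of that survival figure, assuming the constant 0.1% annual extinction rate implied by the Stern discount rate:

```python
# Implied survival probability under a constant 0.1% annual extinction rate,
# as assumed in the Stern Review's discount rate.

annual_extinction_risk = 0.001
years = 10_000

survival_probability = (1 - annual_extinction_risk) ** years
print(f"P(civilization survives {years} more years) = {survival_probability:.2e}")
print(f"That is roughly 1 in {1 / survival_probability:,.0f}")
# About 4.5e-05, i.e. roughly 1 in 22,000, consistent with "less than 1 in 20,000".
```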

Comment author: Carl_Shulman 07 August 2018 01:14:51AM 14 points

Earlier this year, Good Judgment superforecasters (in nonpublic data) gave a median probability of 2% that a state actor would make a nuclear weapon attack killing at least 1 person before January 1, 2021. Conditional on that happening, they gave an 84% probability of 1-9 weapons detonating, 13% to 10-99, 2% to 100-999, and 1% to 1,000 or more.
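
For reference, a small sketch combining the 2% headline figure with that conditional distribution to get unconditional probabilities, using only the numbers quoted above:

```python
# Convert the superforecasters' conditional distribution over detonations
# into unconditional probabilities of each outcome before January 1, 2021.

p_attack = 0.02  # P(state nuclear attack killing at least 1 person)
conditional_on_attack = {"1-9": 0.84, "10-99": 0.13, "100-999": 0.02, "1,000+": 0.01}

for detonations, p_cond in conditional_on_attack.items():
    print(f"P(attack with {detonations} detonations) = {p_attack * p_cond:.3%}")
# e.g. P(attack with 100-999 detonations) = 0.040%
```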

Here's a survey of national security experts which gave a median 5% chance of a nuclear great power conflict killing at least 80 million people over 20 years, although some of the figures in the tables look questionable (mean less than half of median).

It's not clear how much one should trust these groups in this area. Over a longer time scale I would expect the numbers to be higher, since the information that we are currently not in a Cold War (or hot war!) bears mainly on the near term, and various technological and geopolitical factors (e.g. the shift to multipolar military power and the rise of China) may drive the risk up.

Comment author: Gregory_Lewis 06 August 2018 08:46:04PM 5 points

Thanks for posting this.

I don't think there are any other sources you're missing - at least, if you're missing them, I'm missing them too (and I work at FHI). I guess my overall feeling is that these estimates are hard to make and necessarily imprecise: long-run, large-scale estimates (e.g. what was the likelihood of a nuclear exchange between the US and Russia between 1960 and 1970?) are still very hard to make ex post, let alone ex ante.

One question might be how important further VoI is for particular questions. I guess the overall 'x-risk chance' may have surprisingly small action relevance. The considerations about the relative importance of x-risk reduction seem to be fairly insensitive to whether the risk is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, pop ethics, etc.

Risk share seems more important (e.g. how much more worrying is AI than nuclear war?), yet these comparative judgements can generally be made in relative terms, without having to cash out the absolute values.

Comment author: Carl_Shulman 06 August 2018 09:55:19PM 14 points

The considerations about the relative importance of x-risk reduction seem to be fairly insensitive to whether the risk is 10^-1 or 10^-5 (at more extreme values, you might start having Pascalian worries), and instead the discussion hinges on issues like tractability, pop ethics, etc.

I think differences over that range matter a lot, both within a long-termist perspective and over a pluralist distribution across perspectives.

At the high end of that range, the low-hanging fruit of x-risk reduction will also be very effective at saving the lives of already existing humans, making the case for it less dependent on concern for future generations.

At the low end, non-existential trajectory changes, or capacity building for later challenges, look more important within a long-termist frame.

The magnitude of risk also feeds importantly into processes for allocating effort under moral uncertainty and moral pluralism.
