E.g., what is the expected effect on existential risk of donating to one of GiveWell's top charities?

I've asked myself this question several times over the last few years, but I've never put a lot of thought into it. I've always just assumed that at the very least it would not increase existential risk.

Have any analyses been done on this?


Say you have two interventions, A and B, and two outcome metrics, X and Y. You expect A will improve X by 100 units per dollar and B will improve Y by 100 units per dollar. However, each intervention will have some smaller effect of uncertain sign on the other outcome metric: A will cause +1 or -1 units of Y, and B will cause +1 or -1 units of X.

It would be silly to decide for or against one of these interventions based on its second-order effect on the other outcome metric:

  • If you think either X or Y is much more important than the other metric, then you just pick based on the more important metric and neglect the other
  • If you think X and Y are of similar importance, again you focus on the primary effect of each intervention rather than the secondary one
  • If you are worried about A harming metric Y because you want to ensure you have expected positive impact on both X and Y, you can purchase offsets by putting 1% of your resources into B, or vice versa for B harming X

Cash transfers significantly relieve the poverty of humans who are alive today, and are fairly efficient at doing so. They are far less efficient at helping or harming non-human animals today, or at increasing or reducing existential risk. Even if they have some negative effect here or there (more meat-eating, or habitat destruction, or carbon emissions), the cost of producing a comparable benefit to offset it in that dimension will be small compared to the cash transfer. E.g., an allocation of 90% to GiveDirectly and 10% to offset charities (carbon reduction, meat reduction, nuclear arms control, whatever) will wind up positive on multiple metrics.
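
To make the offset arithmetic concrete, here is a minimal sketch in Python. The per-dollar effect sizes and the specific numbers are purely hypothetical assumptions of mine; only the 90/10 split comes from the argument above:

```python
# Hypothetical per-dollar effects on (poverty relief, existential safety).
# Signs and magnitudes are illustrative assumptions, not estimates.
cash_transfers = (100.0, -1.0)   # strong on poverty, tiny negative on x-risk
offset_charity = (1.0, 100.0)    # weak on poverty, strong on x-risk

budget = 1000.0  # dollars
split = {"cash_transfers": 0.9, "offset_charity": 0.1}

poverty = budget * (split["cash_transfers"] * cash_transfers[0]
                    + split["offset_charity"] * offset_charity[0])
safety = budget * (split["cash_transfers"] * cash_transfers[1]
                   + split["offset_charity"] * offset_charity[1])

print(f"poverty relief: {poverty:+.0f}, existential safety: {safety:+.0f}")
# -> both metrics come out positive even with a small offset share
```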

If you have good reasons to give to poverty alleviation rather than existential risk reduction in the first place, then minor impacts on existential risk from your poverty charities are unlikely to reverse that conclusion (although you could make some smaller offsetting donations if you wanted to have a positive balance on as many moral theories as possible). It makes sense to ask how good those reasons really are and whether to switch, but not to worry too much about small second-order cross-cause effects.

ETA: As I discuss in a comment below, moral trade gives us good reasons to be reciprocally supportive of efforts to very efficiently serve different conceptions of the good at only comparatively small cost according to other conceptions.

True, but for other reasons it's important to be able to tell whether the net effect of certain interventions is positive. If I'm spreading the message of EA to other people, should I put a lot of effort into getting people to send money to GiveDirectly and other charities? I have no doubt that poverty alleviation is a suboptimal intervention. But if I believe that poverty alleviation is still better than nothing, I'll be happy to promote it, spread it, and engage in debates about the best way to reduce poverty. But if I decide that the effects on existential risks and the rise in meat consumption in the developing world (1.66 kg per capita per year per $1000 increase in per capita GDP) are significant enough that poverty alleviation is worse than nothing, then I don't know what I'll do.
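
For a rough sense of scale, here is a back-of-the-envelope sketch using the 1.66 kg figure. It is heavily simplified on my part (it treats a one-off transfer as a one-year income increase and applies the GDP elasticity directly, which is at best approximate), and the transfer size is a hypothetical:

```python
kg_meat_per_capita_per_1000_gdp = 1.66  # figure quoted above

transfer_per_person = 250.0  # dollars; hypothetical transfer size
extra_meat_kg = kg_meat_per_capita_per_1000_gdp * transfer_per_person / 1000
print(f"~{extra_meat_kg:.2f} kg of additional meat consumption per year")
# -> ~0.42 kg/year, the kind of quantity one would weigh against offsets
```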

If you are even somewhat of a moral pluralist, or have some normative uncertainty between views that would favor a focus on current people versus future generations, then if you were spending a trillion-dollar budget it would include some highly effective poverty reduction, along with interventions that would do very well on different ethical views (with smaller side effects ranked poorly on other views).

I think that both pluralism and uncertainty are relevant, so I favor interventions that most efficiently relieve poverty even if they much less efficiently harm current humans or future generations, and likewise for things that very efficiently reduce factory farming at little cost to poverty or future generations, etc. One can think of this as a sort of moral trade with oneself.

And at the interpersonal level, there is a clear and overwhelming case for moral trade (both links are to Toby Ord's paper, now published in Ethics). People with different ethical views about the importance of current human welfare, current non-human welfare, and the welfare of future generations have various low-cost high-benefit ways to help each other attain their goals (such as the ones you mention, but also many others like promoting the use of evidence-based charity evaluators). If these are all taken, then the world will be much better by all the metrics, i.e. there will be big gains from moral trade and cooperation.

You shouldn't hold those benefits of cooperation (in an iterated game, no less), and the cooperate-cooperate equilibrium, hostage to the questionable possibility of some comparatively small drawbacks.

Eh, good points, but I don't see what normative uncertainty can accomplish. I have no particular reason to err on one side or the other: the chance that I'm giving too much weight to any given moral issue is no greater than the chance that I'm giving too little. Poverty alleviation could be better than I thought, or it could be worse. I can imagine moral reasons that would cut either way.

Thank you! This just dramatically changed where I intend to donate.

Specifically, I intend to give (100% or nearly 100%) to existential risk rather than (mostly) poverty alleviation, due to how much I value future lives (a lot) relative to the quality of currently existing lives.

Upon trying to think of counter-arguments that would change my view back in favor of donating to poverty alleviation charities, the best I can come up with right now is this:

Maybe the best "poverty alleviation" charities are also the best "existential risk" charities. That is, maybe they are more effective at reducing existential risk than are the charities typically thought of as (the best) existential risk charities. How likely is this to be true? Less than 1%?

Maybe the best "poverty alleviation" charities are also the best "existential risk" charities. That is, maybe they are more effective at reducing existential risk than are the charities typically thought of as (the best) existential risk charities. How likely is this to be true? Less than 1%?

More than 1%. For example, investing in GiveWell (and to a lesser extent, donating to its top charities to increase its money moved and influence) to expedite its development has been a fantastic buy for global poverty historically, and it looks like it will also turn out to have great effects in reducing global catastrophic risks and factory farming.

It could end up best if you think improving general human empowerment or doing "common sense good" (or something like that) is the best way to reduce existential risk, though personally I think it's unclear, because many existential risks are man-made and there seem to be more specific things we can do about them.

GiveWell also selects charities on the basis of room for more funding, team quality and transparency - things you'd want in any charity no matter your outcome metric - and that might raise the probability above 1%.

room for more funding, team quality and transparency - things you'd want in any charity no matter your outcome metric

Indeed. Valuation of outcomes is one of several multiplicative factors.

There might be a strong argument to be made about the existential risk coming from people in poverty contributing to social instability, and the resulting potential for various forms of terrorism, sabotage, and other Black Swan scenarios.

Or, looking at it the other way around: perhaps the most effective way of reducing global catastrophic risk is also the most effective way of helping the people in poverty in the present generation, as I argue here.

Based on, for example, this post, would it be reasonable to say that most of the expected total impact of donating to or working on global health and development is linked to its long-term effects? If so, as suggested here (see "Response five: 'Go longtermist'"), it seems more reasonable to focus on longtermism.

I believe:

  • The prior for the short-term (1st order) expected impact of (e.g.) GiveWell top charities has low variance.
  • The estimate for the total expected impact of GiveWell top charities has high variance.
  • The higher the variance of the estimate, the smaller the update to the prior.

However, I do not think one should conclude from the above that the posterior for the total expected impact of GiveWell top charities is similar to the prior for the short-term expected impact of GiveWell top charities. If I am not mistaken, that update would only be valid if the low-variance prior concerned the total expected impact of GiveWell top charities, whereas it in fact concerns only the short-term expected impact.
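
For readers who want the mechanics, here is a minimal sketch of the normal-normal update behind the third bullet; the numbers are my own and purely illustrative:

```python
def posterior(prior_mean, prior_var, est_mean, est_var):
    """Precision-weighted combination of a prior and a noisy estimate."""
    w = prior_var / (prior_var + est_var)  # weight given to the estimate
    mean = prior_mean + w * (est_mean - prior_mean)
    var = (prior_var * est_var) / (prior_var + est_var)
    return mean, var

# A low-variance prior barely moves when the estimate is very noisy:
print(posterior(prior_mean=1.0, prior_var=0.1, est_mean=5.0, est_var=100.0))
# -> mean ~1.004: almost no update
```

Note that, per the caveat above, this update is only licensed when the prior and the estimate concern the same quantity; here the low-variance prior is about short-term impact while the noisy estimate is about total impact.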

Interesting. Even more specifically, which particular x-risk charity do you plan to donate to? And why do you think it does a lot of good (i.e. that when you donate a few thousand dollars to it, this will do more good than saving a life, deworming hundreds of children, or lifting several families out of poverty)?

  1. I don't have a specific charity in mind yet.
  2. I'm not very confident in my answer.

I should also mention that I probably won't be donating much more for at least a couple years, so it probably shouldn't be my highest priority to try to answer all of these questions. They are good questions though, so thanks.

This is useful but doesn't entirely answer William's question. To put it another way: suppose GiveDirectly reduced extreme poverty in East Africa by 50%. What would your best estimate of the effect of that on x-risk be? I'd expect it to be quite positive, but haven't thought about how to estimate the magnitude.

I believe there's an important case where this does actually matter.

Suppose there's a fundraising charity F which raises money for charities X and G. Charity X is an x-risk charity, and F raises money for it at a 2:1 ratio. Charity G is a global poverty charity, and F raises money for it at a 10:1 ratio. If you care more about x-risk than global poverty and believe charity G decreases x-risk, or only increases x-risk by a tiny amount, then you should give to F instead of X. But if G increases x-risk by more than 20% as much as X decreases it, then giving to F is actually net negative and you should give to X instead.
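
The break-even point follows directly from the stated ratios ($1 to F yields $2 to X and $10 to G). A quick sketch, normalizing X's x-risk reduction to 1 unit per dollar (this normalization is my own; the 2:1 and 10:1 ratios are from the setup above):

```python
x_reduction_per_dollar = 1.0  # normalize X's effect on x-risk

for g_harm_fraction in [0.0, 0.1, 0.2, 0.3]:
    # Net x-risk effect of $1 to F: X's reduction minus G's increase.
    net = (2 * x_reduction_per_dollar
           - 10 * g_harm_fraction * x_reduction_per_dollar)
    print(f"G harms at {g_harm_fraction:.0%} of X's effect -> net {net:+.1f}")
# The net flips negative exactly when the fraction exceeds 2/10 = 20%.
```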

I don't believe 20% is implausibly high. This only requires that ending global poverty increases x-risk by about 0.01% and charity G is reasonably effective. (I did some Fermi calculations to justify this but they're pretty complicated so I'll leave them out.)


I'm not aware of careful analysis having been done on the topic.

One thing speaking in favour of it increasing existential risk is that it may lead to faster technological progress, which in turn could leave less time for research on things that specifically benefit safety, of the kind that MIRI and FHI are doing. I'm thinking that more rich people in previously poor countries would make it more profitable for western countries to invest in R&D, and that these previously poor countries would fund proportionally less x-risk research than what takes place in the West (this is not an obvious assumption, but it is my suspicion).

But as mentioned by others here there are risks pointing in the other direction also.

I don't have an opinion myself as to which direction the effect on x-risk goes, but I suspect the effect on x-risk of donating to GiveWell is of negligible importance compared to the effect of whether or not you donate to x-risk-related work (assuming, as I do, that x-risk research and work directed specifically at x-risk can have a significant impact on x-risk). Your donation to aid projects seems unlikely to have a significant effect on the speed of global development as a fraction of its current speed, but the number of people working on x-risk is small, so it's easier to change its size by a significant fraction.

Following Brian Tomasik's thinking, I believe that one of the big issues for existential risk is international stability and cooperation to deal with AI arms races and similar issues, so to answer this question I asked something along those lines on Reddit, and got an interesting (and not particularly optimistic) answer.

https://www.reddit.com/r/IRstudies/comments/3jk0ks/is_the_economic_development_of_the_global_south/

Maybe one can think that getting through the unsteady period of economic development quickly will hasten the progress of the international community, whereas delaying it would simply postpone the same problems of instability and competition. I don't know. I wish we had more international relations experts in the EA community.

I've been thinking lately that nuclear non-proliferation is probably a more pressing x-risk than AI at the moment and for the near term. We have nuclear weapons and the American/Russian situation has been slowly deteriorating for years. We are (likely) decades away from needing to solve AI race global coordination problems.

I am not asserting that AI coordination isn't critically important. I am asserting that if we nuke ourselves first, it probably won't matter.

You really don't need to give so many disclaimers for the view that nuclear war is an important global catastrophic risk, and that the instantaneous risk is much higher for existing nuclear arsenals than for future technologies (which have ~0 instantaneous risk); everyone should agree with that. Nor for thinking that nuclear interventions might have better returns today.

You might be interested in reading OpenPhil's shallow investigation of nuclear weapons policy. And their preliminary prioritization spreadsheet of GCRs.

OpenPhil doesn't provide recommendations for individual donors, but you could get started on picking a nuclear charity from their investigation (among other things). If you do look into it, it would be great to post about your research process and findings.

That is a great question you posted on Reddit!

There are so many important unanswered questions relevant to EA charitable giving. Maybe an effective meta-EA charity would be a place where EAs could pose research questions they want answered, offering money based on how much they would be willing to give to have their question answered to a certain standard of quality.

I feel inclined to say that we should crowdsource our research for those answers and save our money for important causes. For instance, LessWrong did a whole series of interviews with computer scientists, asking them about AI risks just by emailing them (http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI). Experts are pretty damn expensive.

That being said, I do think we have a bit of a problem with major conclusions regarding artificial intelligence, economics, etc. being drawn by people who lack graduate education and recognition in those fields. Being an amateur interdisciplinary thinker is nice, but we have a surplus of those people in EA. Maybe we can do more movement building in targeted intellectual communities.

I wasn't thinking that the money would go towards hiring experts. Rather, something like: "I'll donate $X to GiveDirectly if someone changes my view on this important question that will decide whether I want to donate my money to Org 1 or Org 2."

Your question provokes a methodological question for me about existential risk vs. helping people who are alive today. Has anyone incorporated a measure of risk -- in the sense of uncertainty -- into comparing current and future good?

In the language of investment, investors are typically willing to receive lower returns in exchange for less risk. As an investor, I'd rather have a very high probability of a low return than a chancier shot at a high return. You pay for certainty.

It seems to me that the more speculative our causes, the higher a benefit-cost ratio we should demand. Put another way, it's hard to believe that my actions, unless I'm very lucky, will really have an impact on humans 200 years from now, but it's a virtual certainty that my actions can have an impact on someone now.
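
As one possible formalization (my own sketch, not a standard method from the x-risk literature), a mean-minus-penalty-times-standard-deviation score shows how a risk-averse evaluator would demand a much higher raw benefit-cost ratio from speculative causes:

```python
import math

def risk_adjusted_value(p_success, value_if_success, risk_penalty=1.0):
    """Score a Bernoulli-style intervention by mean minus penalized std."""
    mean = p_success * value_if_success
    std = value_if_success * math.sqrt(p_success * (1 - p_success))
    return mean - risk_penalty * std

# Near-certain intervention: 95% chance of 100 units of good.
print(risk_adjusted_value(0.95, 100))   # -> ~73

# Speculative intervention with the SAME raw expected value (100 units):
print(risk_adjusted_value(1e-6, 1e8))   # -> hugely negative (~ -99,900)
```

A risk-neutral total utilitarian would compare only the means and call the two interventions equal, which is exactly where this line of thinking diverges from standard x-risk arguments.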

I'm interested in whether this thinking has been incorporated into analyses of existential risk.