I have this idea which I haven't fully fleshed out yet, but I'm looking to get some feedback. To keep things simple, I'll embody the idea in a single hypothetical Effective Altruist called Alex, and I'll assume away complications like inflation. I also use 'lives saved' as a proxy for 'good done'; although this is grossly oversimplified, it doesn't affect the argument.

Alex is earning to give, and estimates that they will be able to give $1 million over their lifetime. They have thought a lot about existential risk, and agree both that reducing existential risk would be a good thing and that the problem is at least partially tractable. Alex also accepts that future lives are just as valuable as lives today. However, Alex is somewhat risk averse.

After careful modelling, Alex estimates that they could save a life for $4,000, and thus could save 250 lives over their own lifetime. Alex also thinks that their $1 million might slightly reduce the risk of some catastrophic event, but it probably won't. In expected value terms, they estimate that donating to an X-risk organisation is about ten times as good as donating to a poverty charity (they estimate 'saving' 2,500 lives on average).
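To make the arithmetic explicit, here's a rough sketch of the kind of calculation I have in mind (the catastrophe probability and number of lives at stake are purely illustrative numbers I've made up, not anything Alex has actually modelled):

```python
# Illustrative only: made-up numbers showing how a tiny probability of a huge
# payoff can dominate a near-certain payoff in expectation.
lifetime_donations = 1_000_000          # dollars
cost_per_life_saved = 4_000             # dollars, poverty charity estimate

poverty_lives_saved = lifetime_donations / cost_per_life_saved   # 250 lives

# Hypothetical x-risk numbers: a 1-in-400,000 chance that the donation is the
# difference in averting a catastrophe costing a billion lives.
p_averting_catastrophe = 2.5e-6
lives_at_stake = 1_000_000_000
xrisk_expected_lives = p_averting_catastrophe * lives_at_stake   # 2,500 lives

print(poverty_lives_saved, xrisk_expected_lives)   # 250.0 vs 2500.0
```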

However, all things considered, Alex still decides to donate to the poverty organisation, because they are risk averse, and the chances of them making a difference by donating to the X-risk organisation are very low indeed.

This seems to embody the attitude of many EAs I know. However, the question I'd like to pose is: is this selfish?

It seems like some kind of moral narcissism to prefer increasing the chance that one's personal actions make a difference at the expense of overall wellbeing in expectation. If a world where everyone gave to X-risk meant a meaningful reduction in the probability of a catastrophe, shouldn't we all be working towards that instead of trying to maximise the chances that our personal dollars make a difference?

As I said, I'm still thinking this through, and don't mean to imply that anyone donating to a poverty charity instead of an X-risk organisation is selfish. I'm very keen on criticism and feedback here.

Things that would imply I'm wrong include existential risk reduction not being tractable or not being good, some argument for risk aversion that I'm overlooking, an argument for discounting future life, or something that doesn't assume a hardline classical hedonistic utilitarian take on ethics (or anything else I've overlooked).

For what it's worth, my donations to date have been overwhelmingly to poverty charities, so to date at least, I am Alex.

Comments

A personal comment (apologies that this is neither feedback nor criticism): I switched from a career plan that was pointing me towards neglected tropical disease genomics and related topics to x-risk/gcr after reading Martin Rees' Our Final Century (fun fact: I showed up at FHI without any real idea who Nick Bostrom was or why a philosopher was relevant to x-risk research).

6 years later, there's still a nagging voice in the back of my mind that worries about issues related to what you describe. (Admittedly, the voice worries more that we won't end up doing the right work - if we do the right work but it's not the x-risk that emerges [but was a sensible and plausible bet], or we are doing the right thing but someone else plays the critical role in averting it due to timing and other factors, I can live with that even if it technically means a wasted career and a waste of funds.) I'm hoping that the portfolio of research activities I'm involved in setting up here in Cambridge is broad enough to give us a good shot, directly or indirectly, of making some difference in the long run. But it's not totally clear to me I'll ever know for certain (i.e. a clean demonstration of a catastrophe that was clearly averted because of work I was involved with at CSER/CFI/FHI seems unlikely). I try to placate that voice by donating to global disease charities (e.g. SCI) despite working in x-risk.

So basically, I'm saying I empathise with these feelings. While it perhaps conflicts with some aspects of really dedicated cause prioritisation, I think a donor ecosystem in which some people are taking the long, high payoff bets, and don't mind a higher probability that their funds don't directly end up saving lives in the long run, while others are more conservative and are supporting more direct and measurable do-the-most-gooding, makes for a good overall EA 'portfolio' (and one in which the different constituents help to keep each other both open-minded and focused).

While I can't comment on whether this is selfish or narcissistic, if the end result is an ecosystem with a level of diversity in the causes it supports, that seems to be good given the level of uncertainty we have to have about the long-run importance of many of these things - provided, of course, we have high confidence that the causes within this diversity remain orders of magnitude more important than other causes we are choosing not to support (i.e. the majority of causes in the world).

It's possible that preventing human extinction is net negative. A classical utilitarian discusses whether preventing human extinction would be net negative or positive here: http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/. Negative-leaning utilitarians and other suffering-focused people think the value of the far future is negative.

This article contains an argument for time-discounted utilitarianism: http://effective-altruism.com/ea/d6/problems_and_solutions_in_infinite_ethics/. I'm sure there's a lot more literature on this; that's about as far as I've looked into it.

You could also reject maximizing expected utility as the proper method of practical reasoning. Weird things happen with subjective expected utility theory, after all - St. Petersburg paradox, Pascal's Mugging, anything with infinity, dependence on possibly meaningless subjective probabilities, etc. Of course, giving to poverty charities might still be suboptimal under your preferred decision theory.

FWIW, strict utilitarianism isn't concerned with "selfishness" or "moral narcissism", just maximizing utility.

[anonymous]

"It's possible that preventing human extinction is net negative"

For something so important, it seems this question is hardly ever discussed. The only literature on the issue is a blog post? It seems like it's often taken for granted that x-risk reduction is net positive. I'd like to see more analysis on whether non-negative utilitarians should support x-risk reduction.

I totally agree. I've had several in-person discussions about the expected sign of x-risk reduction, but hardly anybody writes about it publicly in a useful way. The people I've spoken to in person all had similar perspectives and I expect that we're still missing a lot of important considerations.

I believe we don't see much discussion of this sort because you have to accept a few uncommon (but true) beliefs before this question becomes interesting. If you don't seriously care about non-human animals (which is a pretty intellectually crappy position but still popular even among EAs) then reducing x-risk is pretty clearly net positive, and if you think x-risk is silly or doesn't matter (which is another obviously wrong but still popular position) then you don't care about this question. Not that many people accept both that animals matter and that x-risk matters, and even among people who do accept those, some believe that work on x-risk is futile or that we should focus on other things. So you end up with a fairly small pool of people who care at all about the question of whether x-risk reduction is net positive.

It's also possible that people don't even want to consider the notion that preventing human extinction is bad, or they may conflate it with negative utilitarianism when it could also be a consequence of classical utilitarianism.

For the record, I've thought about writing something about it, but I basically came to the same conclusions that you did in your blog post (I also subscribe to total hedonistic utilitarianism and its implications, i.e. anti-speciesism, concern for wild animals, etc.).

If everyone has similar perspectives, it could be a sign that we're on the right track, but it could be that we're missing some important considerations as you say, which is why I also think more discussion of this would be useful.

I wrote an essay partially looking at this for the Sentient Politics essay competition. If it doesn't win (and probably even if it does) I'll share it here.

I think it's a very real and troubling concern. Bostrom seems to assume that, if we populated the galaxy with minds (digital or biological) that would be a good thing, but even if we only consider humans I'm not sure that's totally obvious. When you throw wild animals and digital systems into the mix, things get scary.

I wouldn't be surprised if Bostrom's basic thinking is that suffering animals just aren't a very good fuel source. To a first approximation, animals suffer because they evolved to escape being eaten (or killed by rivals, by accidents, etc.). If humans can extract more resources from animals by editing out their suffering, then given enough technological progress, experimentation, and competition for limited resources, they'll do so. This is without factoring in moral compunctions of any kind; if moral thought is more likely to reduce meat consumption than increase it, this further tilts the scales in that direction.

We can also keep going past this point, since this is still pretty inefficient. Meat is stored energy from the Sun, at several levels of remove. If you can extract solar energy more efficiently, you can outcompete anyone who doesn't. On astronomical timescales, running a body made of meat subsisting on other bodies made of meat subsisting on resources assembled from clumsily evolved biological solar panels probably is a pretty unlikely equilibrium.

(Minor side-comment: 'humans survive and eat lots of suffering animals forever' is itself an existential risk. An existential risk is anything that permanently makes things drastically worse. Human extinction is commonly believed to be an existential risk, but this is a substantive assertion one might dispute, not part of the definition.)

[anonymous]

Good points about fuel efficiency. I don't think it's likely that (post)humans will rely on factory farmed animals as a food source. However, there are other ways that space colonization or AI could cause a lot of suffering, such as spreading wild animals (which quite possibly have negative lives) via terraforming or running a lot of computer simulations containing suffering (see also: mindcrime). Since most people value nature and don't see wildlife suffering as a problem, I'm not very optimistic that future humans, or for that matter an AI based on human values, will care about it. See this analysis by Michael Dickens.

(It seems like "existential risk" used to be a broader term, but now I always see it used as a synonym for human extinction risks.)

I agree with the "throwaway" comment. I'm not aware of anyone who expects factory farming of animals for meat to continue in a post-human future (except in ancestor simulations). The concerns are with other possible sources of suffering.

Thanks Jesse, I definitely should also have said that I'm assuming preventing extinction is good. My broad position on this is that the future could be good, or it could be bad, and I'm not sure how likely each scenario is, or what the 'expected value' of the future is.

Also agreed that utilitarianism isn't concerned with selfishness, but from an individual's perspective, I'm wondering if what Alex is doing in this case might be classed that way.

I don't think that risk aversion does what you think it does. Let's say that Alex only wants to perform interventions which are certain to be helpful.

Whether Alex is selfish depends on their reasons for acting that way. If they think it's morally better to support x-risk but donate elsewhere because it gives them more personal satisfaction, then they are selfish. But if they believe that there are moral or other reasons to support more robust interventions, then it sounds like they aren't selfish.

If someone did make their allocations based on risk aversion rather than utility maximization, they would be operating according to a fairly reasonable decision model, so probably not selfish there either. (Unless they really believed that utility maximization was correct but derived personal satisfaction from being risk averse, which I don't think describes many people.)

Thanks, there are some good points here.

I still have this feeling, though, that some people support some causes over others simply for the reason that 'my personal impact probably won't make a difference', which seems hard to justify to me.

In normative decision theory, risk aversion means a very specific thing. It means using a different aggregating function from expected utility maximisation to combine the value of disjunctive states.

Rather than multiplying the realised utility in each state by the probability of that state occurring, these models apply a non-linear weighting to each of the states which depends on the global properties of the lottery, not just what happens in that state.
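To make that concrete, here's a toy sketch (my own made-up numbers and risk function, loosely in the style of Buchak's risk-weighted expected utility) showing how such a weighting can reverse the ranking given by straight expectation:

```python
# Sketch of risk-weighted expected utility (REU) vs. plain expected utility.
# Outcomes are sorted worst-to-best; each increment of utility above the worst
# outcome is weighted by r(probability of doing at least that well), so the
# weight on a state depends on the whole lottery, not just that state.

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

def risk_weighted_eu(lottery, r=lambda p: p ** 2):
    # lottery: list of (probability, utility); r(p) = p recovers expected utility
    states = sorted(lottery, key=lambda pu: pu[1])           # worst to best
    probs = [p for p, _ in states]
    utils = [u for _, u in states]
    value = utils[0]
    for i in range(1, len(states)):
        prob_at_least_this_good = sum(probs[i:])
        value += r(prob_at_least_this_good) * (utils[i] - utils[i - 1])
    return value

safe_bet = [(1.0, 250)]                        # e.g. 250 lives saved for sure
long_shot = [(0.999, 0), (0.001, 2_500_000)]   # tiny chance of a huge payoff

print(expected_utility(safe_bet), expected_utility(long_shot))    # 250 vs 2500
print(risk_weighted_eu(safe_bet), risk_weighted_eu(long_shot))    # 250 vs 2.5
```

With r(p) = p this collapses back to ordinary expected utility; with the made-up r(p) = p^2, improbable good outcomes are down-weighted, so the long shot loses to the sure thing despite having ten times the expectation.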

Most philosophers and economists agree risk aversion over utilities is irrational because it violates the independence axiom / sure-thing principle which is one of the foundations of objective / subjective expected utility theory.

One way a person could rationally have seemingly risk averse preferences is by placing a higher value on the first bit of good they do than on the second bit of good they do, perhaps because doing some good makes you feel better. This would technically be selfish.

But I'm pretty sure this isn't what most people who justify donating to global poverty out of risk aversion actually mean. They generally mean something like "we should place a lot of weight on evidence because we aren't actually very good at abstract reasoning". This would mean their subjective probability that an x-risk intervention is effective is very low. So it's not technically risk aversion. It's just having a different subjective probability. This may be an epistemic failure. But there's nothing selfish about it.

I wrote a paper on this a while back in the context of risk aversion justifying donating to multiple charities. This is a shameless plug. https://docs.google.com/document/d/1CHAjFzTRJZ054KanYj5thWuYPdp8b3WJJb8Z4fIaaR0/edit#heading=h.gjdgxs

I just want to push back against your statement that "economists believe that risk aversion is irrational". In development economics in particular, risk aversion is often seen as a perfectly rational approach to life, especially in cases where the risk is irreversible.

To explain this, I just want to quickly point out that, from an economic standpoint, there's no correct formal way of measuring risk aversion among utils. Utility is an ordinal, not cardinal, measure. Risk aversion is something that is applied to real measures, like crop yields, in order to better estimate people's revealed preferences - in essence, risk aversion is a way of taking utility into account when measuring non-utility values.

So, to put this in context, let's say you are a subsistence farmer, and have an expected yield of X from growing sorghum or a tuber, and you know that you'll always roughly get a yield of X (since sorghum and many tubers are crazily resilient). But now someone offers you an 'improved maize' growth package that will get you an expected yield of 2X, but there's a 10% chance that your crops will fail completely. A rational person at the poverty line should always choose the sorghum/tuber. This is because that 10% chance of a failed crop is much, much worse than could be revealed by expected yield - you could starve, have to sell productive assets, etc. Risk aversion is a way of formalizing the thought process behind this perfectly rational decision. If we could measure expected utility in a cardinal way, we would just do that, and get the correct answer without using risk aversion - but because we can't measure it cardinally, we have to use risk aversion to account for things like this.
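To sketch this with made-up numbers (the particular utility function and the assumption that a failed harvest still leaves a small salvage yield are mine, purely for illustration):

```python
# Subsistence-farmer example: the risky crop wins on expected yield but loses
# on expected utility once we use a sharply concave utility (CRRA with relative
# risk aversion of 2, i.e. u(y) = -1/y), reflecting how catastrophic a failed
# harvest is at the poverty line.

def u(yield_amount):
    return -1.0 / yield_amount          # illustrative concave utility

# Yields measured in multiples of X; assume a failed maize crop still leaves
# a small salvage yield of 0.1X rather than literally zero.
sorghum = [(1.0, 1.0)]                  # (probability, yield): X for sure
maize = [(0.9, 2.0), (0.1, 0.1)]        # 2X usually, near-total failure 10% of the time

expected_yield = lambda crop: sum(p * y for p, y in crop)
expected_util = lambda crop: sum(p * u(y) for p, y in crop)

print(expected_yield(sorghum), expected_yield(maize))   # 1.0 vs 1.81
print(expected_util(sorghum), expected_util(maize))     # -1.0 vs -1.45
```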

As a last fun point, risk aversion can also be used to formalize the idea of diminishing marginal utility without using cardinal utility functions, which is one of the many ways that we're able to 'prove' that diminishing marginal utility exists, even if we can't measure it directly.

I agree that diminishing marginal utility (dmu) over crop yields is perfectly rational. I mean a slightly different thing: risk aversion over utilities, which is why people fail the Allais paradox. Rational choice theory is dominated by expected utility theory (exceptions: Buchak, McClennen), which suggests risk aversion over utilities is irrational. Risk aversion over utilities seems pertinent here because most moral views don't have dmu over people's lives.
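For anyone who hasn't seen the Allais paradox, here's a small sketch of the standard set-up (payoffs in millions; the exhaustive scan is just my own illustration of why the common choice pattern can't be rationalised by any expected-utility maximiser):

```python
from fractions import Fraction as F

# Standard Allais lotteries (payoffs in millions of dollars):
#   1A: $1M for sure              1B: 10% $5M, 89% $1M, 1% nothing
#   2A: 11% $1M, 89% nothing      2B: 10% $5M, 90% nothing
# Most people choose 1A and 2B, but no expected-utility maximiser can:
# normalising u(0) = 0 and u($1M) = 1,
#   1A over 1B requires  1 > 0.10*u($5M) + 0.89,  i.e. u($5M) < 1.1
#   2B over 2A requires  0.10*u($5M) > 0.11,      i.e. u($5M) > 1.1
# Scan candidate values of u($5M) with exact arithmetic to confirm.

for u5 in (F(i, 100) for i in range(0, 501)):       # u($5M) from 0.00 to 5.00
    prefers_1a = 1 > F(10, 100) * u5 + F(89, 100)
    prefers_2b = F(10, 100) * u5 > F(11, 100)
    assert not (prefers_1a and prefers_2b)          # the common pattern never appears

print("No assignment of u($5M) rationalises choosing both 1A and 2B.")
```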

I think that this discussion really comes from the larger discussion about the degree to which we should consider rational choice theory (RCT) to be a normative, as opposed to a positive, theory (for a good overview of the history of this debate, I would highly suggest this article by Wade Hands, especially the example on page 9). As someone with an economics background, I very heavily skew toward seeing it as a positive theory (which is why I pushed back against your statement about economists' view of risk aversion). In my original reply I wasn't very specific about what I was saying, so hopefully this will help clarify where I'm coming from!

I just want to say that I agree that rational choice theory (RCT) is dominated by expected utility (EU) theory. However, I disagree with your portrayal of risk aversion. In particular, I agree that risk aversion over expected utility is irrational - but my reasoning for saying this is very different. From an economic standpoint, risk aversion over utils is, by its very definition, irrational. When you define 'rational' to mean 'that which maximizes expected utility' (as it is defined in EU and RCT models), then of course being risk averse over utils is irrational - under this framework, risk neutrality over utils is a necessary pre-requisite for the model to work at all. This is why, in cases where risk aversion is important (such as the yield example), expected utility calculations take risk aversion into account when calculating the utils associated with each situation - thus making risk aversion over the utils themselves redundant.

Put in a slightly different way, we need to remember that utils do not exist - they are an artifact of our modeling efforts. Risk neutrality over utils is a necessary assumption of RCT in order to develop models that accurately describe decision-making (since RCT was developed as a positive theory). Because of this, the phrase 'risk aversion over utility' has no real-world interpretation.

With that in mind, people don't fail the Allais paradox because of risk aversion over utils, since there is no such thing as being risk averse over utils. Instead, the Allais paradox is a case showing that older RCT models are insufficient for describing the actions of humans - the empirical results appear to show, in a way, something akin to risk aversion over utils, which in turn breaks the model. This is an important point - put differently, risk neutrality over utils is a necessary assumption of the model, and empirical results that disprove this assumption do not mean that humans are wrong (even though that may be true); they mean that the model fails to capture reality. It was because the model broke (in this case, and in others) that economics developed newer positive theories of choice, such as behavioral economics and bounded rationality models, that better describe decision-making.

At most, you can say that the Allais paradox is a case showing that people's heuristics associated with risk aversion are systematically biased toward decisions that they would not choose if they thought the problem through a bit more. This is definitely a case showing that people are irrational sometimes, and that maybe they should think through these decisions a little more thoroughly, but it does not have anything to do with risk aversion over utility.

Anyways, to bring this back to the main discussion - from this perspective, risk aversion is a completely fine thing to put into models, and it would not be irrational for Alex to factor in risk aversion. This would especially be fine if Alex is worried about the validity of their model itself (which Alex, not being an expert on modeling or AI risk, should consider to be a real concern). As a last point, I do personally think that we should be more averse to the risks associated with supporting work on far-future stuff and X-risks (which I've discussed partially here), but that's a whole other issue entirely.

Hope that helps clarify my position!

By this argument, someone who is risk-averse should buy insurance, even though you lose money in expectation. Most of the time, this money is wasted. Interestingly, X risk research is like buying insurance for humanity as a whole. It might very well be wasted, but the downside of not having such insurance is so much worse than the cost of insurance that it makes sense (if you are risk neutral and especially if you are risk-averse).

Edit: And actually, some forms of global catastrophic risk are surprisingly likely; for instance, a 10% global agricultural shortfall has about an 80% probability this century. So preparation for this would most likely not be wasted.

I agree, although some forms of personal insurance are also rational - e.g. health insurance in the US, because the downside of not having it is so bad. But don't insure your toaster.

[anonymous]

Eliezer Yudkowsky's piece Purchase Fuzzies and Utilons Separately is very relevant here. I highly recommend checking it out if you haven't already.

I agree with a lot of the other folk here that risk aversion should not be seen as a selfish drive (even though, as Gleb mentioned, it can serve that drive in some cases), but rather is an important part of rational thinking. In terms of directly answering your question, though, regarding 'discounting future life', I've been wondering about this a bit too. So, I think it's fair to say that there are some risks involved with pursuing X-risks: there's a decent chance you'll be wrong, you may divert resources from other causes, your donation now may be insignificant compared to future donations when the risk is more well-known and better understood, and you'll never really know whether or not you're making any progress. Many of these risks are accurately represented in EA's cost/benefit models about X-risks (I'm sure yours involved some version of these, even if just the uncertainty one).

My recent worry is the possibility that, when a given X-risk becomes associated with the EA community, these risks become magnified, which in turn needs to be considered in our analyses. I think that this can happen for three reasons:

First, the EA community could create an echo chamber for incorrect X-risks, which increases bias in support of those X-risks. In this case, rational people who would have otherwise dismissed the risk as conspiratorial now would be more likely to agree with it. We’d like to think that large support of various X-risks in the EA communities is because EAs have more accurate information about these risks, but that’s not necessarily the case. Being in the EA community changes who you see as ‘experts’ on a topic – there isn't a vocal majority of experts working on AI globally who see the threat as legitimate, which to a normal rational person may make the risk seem a little overblown. However, the vast majority of experts working on AI who associate with EA do see it as a threat, and are very vocal about it. This is a very dangerous situation to be in.

Second, if an 'incorrect' X-risk is grasped by the community, there’s a lot of resource diversion at stake – EA has the power to move a lot of resources in a positive way, and if certain X-risks are way off base then their popularity in EA has an outsized opportunity cost.

Lastly, many X-risks turn a lot of reasonable people away from EA, even when they’re correct - if we believe that EA is a great boon to humanity, then the reputational risk has very real implications for the analysis.

Those are my rough initial thoughts, which I've elaborated on a bit here. It's a tricky question though, so I'd love to hear people's critiques of this line of thinking - is this magnified risk something we should take into account? How would we account for it in models?

It seems to me that risk aversion and selfishness are orthogonal to each other - i.e., they are different axes. Based on the case study of Alex, it seems that Alex does not truly - with their System 1 - believe that a far-future cause is 10X better than a current cause. Their System 1 has a lower expected utility on donating to a far future cause than poverty relief, and the "risk aversion" is a post-factum rationalization of a System 1, subconscious mental calculus.

I'd suggest for Alex to sit down and see if they have any emotional doubts about the 10X figure for the far-future cause. Then, figure out any emotional doubts they have, and place accurate weights on far-future donations versus poverty relief. Once Alex has their System 1 and System 2 aligned, then proceed.