
Abstract

Two key questions of normative decision theory are: 1) whether the probabilities relevant to decision theory are evidential or causal; and 2) whether agents should be risk-neutral, and so maximise the expected value of the outcome, or instead risk-averse (or otherwise sensitive to risk). These questions are typically thought to be independent - that our answer to one bears little on our answer to the other. But there is a surprising argument that they are not. In this paper, I show that evidential decision theory implies risk neutrality, at least in moral decision-making and at least on plausible empirical assumptions. Take any risk-aversion-accommodating decision theory, apply it using the probabilities prescribed by evidential decision theory, and every verdict of moral betterness you reach will match those of expected value theory.

Introduction

When making moral decisions about aiding others, you might think it appropriate to be risk-averse. For instance, suppose you face a decision between: rescuing one person from drowning for sure; and spinning a roulette wheel—if the roulette wheel lands on 0 (one of 37 possibilities), you thereby rescue 37 people (with similarly valuable lives) from drowning and, if it lands on any of the other 36 numbers, you rescue no one.[1] On the face of it, it seems plausible that it is better (instrumentally) to rescue the one person for sure, rather than to risk saving no one at all.
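To make the numbers concrete, here is a minimal sketch (my own illustration, not the paper's) comparing the two options under plain expected value and under one representative risk-averse rule: Buchak-style risk-weighted expected utility with the convex risk function r(p) = p², a choice made purely for illustration.

```python
# Illustrative sketch only: the risk function r(p) = p**2 is an assumption
# made for this example, not anything argued for in the paper.

def expected_value(lottery):
    """lottery: list of (probability, lives saved) pairs."""
    return sum(p * v for p, v in lottery)

def risk_weighted_value(lottery, r=lambda p: p ** 2):
    """Buchak-style risk-weighted expected utility, treating lives saved as utility.
    Order outcomes from worst to best; weight each marginal improvement by
    r(probability of doing at least that well)."""
    outcomes = sorted(lottery, key=lambda pv: pv[1])
    value = outcomes[0][1]
    prob_at_least = 1.0
    for i in range(1, len(outcomes)):
        prob_at_least -= outcomes[i - 1][0]
        value += r(prob_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return value

sure_rescue = [(1.0, 1)]                    # save one person for certain
roulette = [(1 / 37, 37), (36 / 37, 0)]     # save 37 people if the wheel lands on 0

print(expected_value(sure_rescue), expected_value(roulette))            # both ~1.0
print(risk_weighted_value(sure_rescue), risk_weighted_value(roulette))  # 1 vs ~0.027
```

Expected value theory is indifferent between the two options (each saves one life in expectation), while this risk-averse rule clearly prefers the sure rescue.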

In this paper, I will present a novel argument against risk aversion[2] in moral cases, and in favour of risk neutrality: that one risky option is instrumentally better than another if and only if it results in a greater expected sum of moral value. The argument starts from a surprising place, usually thought to have no bearing on issues of risk aversion or risk neutrality. It starts from the claim that the probabilities used to compare options are those given by your evidence, including the evidence provided by that option being chosen; that we should accept evidential decision theory (EDT).

To illustrate what EDT asks of us, consider the much-discussed Newcomb’s Problem.

 

Newcomb’s Problem

Before you are two boxes, one opaque and one transparent. You can see that the transparent box contains $1,000. You cannot see into the opaque box, but you know that it contains either $0 or $1,000,000. You can either take the opaque box, or take both boxes. But the contents of the opaque box have been decided by a highly reliable predictor (perhaps with a long record of predicting the choices of others who have faced the same problem). If she predicted that you would take both boxes, it contains $0. If she predicted that you would take just the opaque box, it contains $1,000,000.

 

Which is better: to take one or to take both? EDT tells us that taking the one is better. Why? You know that the predictor is highly reliable. So, if you take just the opaque box, you thereby obtain strong evidence that the $1,000,000 is contained within—we can suppose the probability that it is, conditional on taking just one box, is very close to 1. But, if you take both boxes, you thereby obtain strong evidence that the opaque box is empty—the probability that it contains $0, conditional on taking both boxes, is again close to 1. Using these probabilities, taking both boxes will almost certainly win you only $1,000, while taking just the opaque box will almost certainly win you $1,000,000. The latter then seems far better.
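For concreteness, suppose the predictor's reliability is 0.99 (a figure chosen purely for illustration). Evaluating each option with probabilities conditional on its being chosen, EDT's comparison is:

$$
\begin{aligned}
V_{\mathrm{EDT}}(\text{one-box}) &= 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000,\\
V_{\mathrm{EDT}}(\text{two-box}) &= 0.99 \times \$1{,}000 + 0.01 \times \$1{,}001{,}000 = \$11{,}000.
\end{aligned}
$$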

Alternatively, you might endorse causal decision theory (CDT): that the probabilities used to compare options are how probable it is that choosing that option will cause each outcome; evidence provided by the choice itself is ignored (see Joyce, 1999, p. 4). In Newcomb’s Problem, to the causal decision theorist, the probability of the opaque box containing $1,000,000 is the same for both options—making either choice has no causal influence on what the predictor puts in the box, so the probability cannot change between options. Using these probabilities, taking both boxes is guaranteed to turn out better, by exactly $1,000, than taking just the one. So, the option of taking both must be better than that of taking one.
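In the same illustrative terms, writing p for the (choice-independent) probability that the opaque box contains the $1,000,000, CDT's comparison is:

$$
\begin{aligned}
V_{\mathrm{CDT}}(\text{one-box}) &= p \times \$1{,}000{,}000,\\
V_{\mathrm{CDT}}(\text{two-box}) &= p \times \$1{,}000{,}000 + \$1{,}000,
\end{aligned}
$$

so two-boxing comes out exactly $1,000 ahead whatever value p takes.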

On the face of it, whether we endorse EDT’s or CDT’s core claims seems to be independent of whether we should endorse risk aversion or risk neutrality.[3] At its core, the question of EDT or CDT is a question about what notion of probability we take as normatively relevant. And this doesn’t seem to bear on how we should respond to said probabilities, and so whether it is appropriate to be risk-averse. Any theory of risk aversion could perhaps be applied to either notion of probability.[4]

But this turns out not to be true. As I will argue, if EDT is true then in practice so too is risk neutrality, at least for moral decision-making. And so we have a novel argument for risk neutrality; or, if you think risk neutrality deeply implausible, a novel argument against EDT.

Read the rest of the paper

 

  1. ^

    Assume that, if not rescued, each of those people is guaranteed to drown. So, the possibility of saving no one (in the second option) does not arise because no rescue attempt is necessary; it arises because your rescue attempt would be unsuccessful. Without this assumption, it turns out that what risk aversion recommends is under-determined—see Greaves et al. (n.d.).

  2. ^

    More generally, the argument has force against any form of risk sensitivity (any deviation from risk neutrality). But, in the moral case, risk aversion seems more plausible than risk seeking (cf. Buchak, 2019), so I will focus here on risk aversion.

  3. ^

One technical, and not very compelling, reason to think otherwise is this: CDT is typically axiomatised in the framework of Savage (1954), while EDT is typically axiomatised in the framework of Jeffrey (1965); and, where risk aversion is accommodated in normative decision theory, this is often done in the basic framework of Savage (see, e.g., Buchak, 2013, p. 88 & p. 91). But there is no necessary connection between EDT and the Jeffrey framework—EDT can be expressed in Savage’s framework (e.g., Spencer and Wells, 2019, pp. 28-9), and CDT can be expressed in Jeffrey’s framework (e.g., Edgington, 2011). Nor is there a necessary connection between Jeffrey’s framework and risk neutrality—theories accommodating risk aversion can be formulated in that framework too (see Stefánsson and Bradley, 2019).

  4. ^

    Risk neutrality is often assumed without argument in existing discussions of EDT and CDT. But I take it that this is typically not for any principled reason, but instead in the interests of brevity (as indicated by, e.g., Williamson, 2021: Footnote 27), or simply due to a lack of imagination.


Comments

This argument is basically my biggest source of doubt for risk aversion, but I don't think the response to dependent outcomes is adequate here.

You'd have to cherry pick a subsequence so that the correlations can be arranged to tend to 0, but if you're picking a subsequence this way, you're ignoring the infinitely many outcomes with correlations bounded away from 0, and the argument doesn't pass to the whole sequence.

And we should expect correlations bounded away from 0 in an infinite universe. One reason is just because there should be infinitely many (nearly) identical agents in (nearly) identical situations. Another reason is that we have uncertainty about features of our world that's very plausibly correlated across agents, like how hard it is to align AI, how prone individuals with power are to catastrophic conflict/destruction, whether or not the agent is in a relatively short-lived simulation, the density of aliens in our universe, the maximum possible density of suffering, whether or not P=NP, or what's necessary for consciousness. You can try to condition on those first and then use the LLN or CLT (or generalizations), but I'm not sure risk neutrality will definitely come out ahead when you combine the results from each condition, because the different conditions could have different maximizers and rank options very differently. In some cases, your highest EV options could backfire and become among the worst under the right conditions. Still, I'd guess this gives some reason to be somewhat less risk averse, but I don't know how much, and it could depend on the specific decision.

Plus, identical distributions can't capture all correlations that matter.

In the extreme, for sequences of independent trials with payoffs increasing without bound but probability of positive payoff decreasing quickly (and unbounded variance), risk neutrality leads to almost surely worse outcomes:

  1. https://alexanderpruss.blogspot.com/2022/10/expected-utility-maximization.html
  2. https://alexanderpruss.blogspot.com/2022/10/the-law-of-large-numbers-and-infinite.html
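As a minimal sketch of the kind of construction those posts discuss (my own illustration, not necessarily the exact example used there): suppose that at each round n you can take a sure payoff of 1 or a gamble

$$
X_n = \begin{cases} n^3 & \text{with probability } 1/n^2,\\ 0 & \text{otherwise,}\end{cases}
\qquad \mathbb{E}[X_n] = n, \qquad \sum_n \Pr(X_n > 0) = \sum_n \tfrac{1}{n^2} < \infty.
$$

A risk-neutral agent takes the gamble every round, but by Borel-Cantelli only finitely many gambles ever pay off almost surely, so her total winnings stay bounded while the sure-thing agent's total grows without bound.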

Also, as you mention, it's possible there just aren't enough roughly uncorrelated outcomes, in case the universe is finite (although my best guess is that the universe is infinite in spatial extent).

Maybe you could try to group outcome distributions in such a way that the different groups' sums of outcomes have vanishing "covariance" with the other groups' outcome sums, and hope you get enough groups left and they satisfy some condition to let you apply something like the LLN or CLT.

On the other hand, I'd guess it very often won't be the case that a risky option is very probably better than each low risk option*, but that standard seems higher than necessary. We're not usually going to get (near) certainty in either direction, so it could be suspicious to always choose low risk options anyway. If it's usually the case that a risky option is probably better than the low risk option*, and, when it is worse, not worse by more than it is better when it is better, that seems like about enough reason to reject risk aversion in practice.

I'm not sure this exact statement is enough to avoid counterexamples, but something in this direction seems right.

*separately for each low risk option (not better than the statewise max of the low risk options), but the same risky option. We can also compare quantiles rather than be sensitive to the specific way options are related statewise. It seems like stochastic dominance specifically is too much to expect, though, including using background value as in Tarsney's paper, if too much of the background is correlated, e.g. uncertainty about the requirements for consciousness and how much value different minds can generate can make basically all background value highly correlated with the local causal value of options.

Also, the correct footnote statement of the conditions for the LLN result you use with decreasing correlations imposes pretty strong conditions, and your informal statement of it in the main text has trivial counterexamples, e.g. with just one outcome independent from the rest and the rest all identical as random variables.

For any positive epsilon, you need all but finitely many of the covariances to be less than epsilon in absolute value. This means that it can't be the case that infinitely many of the outcomes have non-negligible (bounded below in absolute value by epsilon) covariance with any other outcome. But if we expect non-negligible correlations at all between causally separated outcomes in an infinite universe, I think we should expect non-negligible correlations between infinitely many pairs of them.
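For reference, one textbook-style weak law of large numbers for correlated sequences along these lines (my statement, which may differ in detail from the condition in the paper's footnote) is:

$$
\sup_i \operatorname{Var}(X_i) < \infty
\quad\text{and}\quad
\sup_{|i-j|\ge k} \lvert\operatorname{Cov}(X_i, X_j)\rvert \to 0 \text{ as } k \to \infty
\;\Longrightarrow\;
\frac{1}{n}\sum_{i=1}^{n}\bigl(X_i - \mathbb{E}[X_i]\bigr) \to 0 \text{ in probability.}
$$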
