New GPI paper: Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term'. Abstract:

The Asymmetry is the view in population ethics that, while we ought to avoid creating additional bad lives, there is no requirement to create additional good ones. The question is how to embed this view in a complete normative theory, and in particular one that treats uncertainty in a plausible way. After reviewing the many difficulties that arise in this area, I present general ‘supervenience principles’ that reduce arbitrary choices to uncertainty-free ones. In that sense they provide a method for aggregating across states of nature. But they also reduce arbitrary choices to one-person cases, and in that sense provide a method for aggregating across people. The principles are general in that they are compatible with total utilitarianism and ex post prioritarianism in fixed-population cases, and with a wide range of ways of extending these views to variable-population cases. I then illustrate these principles by writing down a complete theory of the Asymmetry, or rather several such theories to reflect some of the main substantive choice-points. In doing so I suggest a new way to deal with the intransitivity of the relation ‘ought to choose A over B’. Finally, I consider what these views have to say about the importance of extinction risk and the long-run future.


There's also a video in which the author presents the work. Here's the direct link.

I think the beatpath method to avoid intransitivity still results in a sadistic repugnant conclusion. Consider three situations. In situation 1, one person exists with high welfare 100. In situation 2, that person gets welfare 400, and 1000 additional people are added with welfare 0. In situation 3, those thousand people have welfare 1, i.e. small but positive (lives barely worth living), and the first person now gets a negative welfare of -100. Total utilitarianism says that situation 3 is best, with total welfare 900. But comparing situations 1 and 3, I would strongly prefer situation 1, with one happy person. Choosing situation 3 is both sadistic (the one person gets a negative welfare) and repugnant (this welfare loss is compensated by a huge number of lives barely worth living).

Looking at harms: in situation 1, the one person has 300 units of harm (welfare 400 in situation 2 compared to 100 in situation 1). In situation 2, the 1000 additional people each have one unit of harm, which totals 1000 units. In situation 3, the first person has 200 units of harm (-100 in situation 3 compared to +100 in situation 1). According to person-affecting views, we have an intransitivity. But the Schulze beatpath method, Tideman's ranked pairs method, the minimax Condorcet method, and other selection methods to avoid intransitivity select situation 3 if situation 2 is an option (and would select situation 1 if situation 2 were not an available option, violating the independence of irrelevant alternatives).
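The arithmetic of the three situations can be checked directly; the welfare vectors below are taken from the description above, and the variable names are mine.

```python
# Welfare vectors for the three situations described above.
situation_1 = [100]                 # one person, welfare 100
situation_2 = [400] + [0] * 1000    # same person at 400, plus 1000 people at 0
situation_3 = [-100] + [1] * 1000   # same person at -100, plus 1000 people at 1

# Total utilitarian evaluation: situation 3 comes out best.
print(sum(situation_1), sum(situation_2), sum(situation_3))  # 100 400 900

# Pairwise harms, as computed in the comment:
harm_1_vs_2 = 400 - 100        # 300: first person's harm in situation 1
harm_2_vs_3 = 1000 * (1 - 0)   # 1000: contingent people's total harm in situation 2
harm_3_vs_1 = 100 - (-100)     # 200: first person's harm in situation 3
print(harm_1_vs_2, harm_2_vs_3, harm_3_vs_1)  # 300 1000 200
```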

Perhaps we can solve this issue by considering complaints instead of harms. In each situation X, a person can complain against choosing that situation X over another situation Y. That complaint is a value between zero and the harm that the person has in situation X compared to situation Y. A person can choose how much to complain. For example, if the first person were to fully complain in situation 1, then situation 3 would be selected, and in that situation the first person is worse off. Hence, learning about this sadistic repugnant conclusion, the first person can decide not to complain in situation 1, as if that person is not harmed in that situation. Without the complaint, situation 1 will be selected. We have to let people freely choose how much they want to complain in the different situations.

Hmm, ya, this seems right. At least for beatpath and the way I imagine it's used (I haven't read the paper in a while, and I'm just checking the Schulze method on Wikipedia), there is a path from 1 to 3, with strength equal to the minimum of the net betterness of 2 over 1 (300 = 400 - 100) and the net betterness of 3 over 2 (500 = (1000 - 0) + (-100 - 400), or maybe just 1000, counting only the positive votes here), so 300.

The direct path from 3 to 1 has only strength 200 = 100 - (-100) (we ignore the contingent people here, since they have nonnegative welfare). Since 200 < 300, 3 is beatpath-better than 1.
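As a sanity check on this beatpath reasoning, here's a minimal sketch of the Schulze widest-path computation, using the net-betterness margins from the comment (300 for 2 over 1, 500 for 3 over 2, 200 for 1 over 3). The numbers come from the discussion above; the code itself is my own illustration.

```python
# Positive net-betterness margins between the three situations.
margins = {
    (2, 1): 300,  # situation 2 beats 1 by 300
    (3, 2): 500,  # situation 3 beats 2 by 500
    (1, 3): 200,  # situation 1 beats 3 by 200
}

situations = [1, 2, 3]

# Beatpath strengths via widest-path (Floyd-Warshall), as in the Schulze method:
# the strength of a path is the minimum margin along it, and p[(i, j)] is the
# maximum strength over all paths from i to j.
p = {(i, j): margins.get((i, j), 0)
     for i in situations for j in situations if i != j}
for k in situations:
    for i in situations:
        for j in situations:
            if len({i, j, k}) == 3:
                p[(i, j)] = max(p[(i, j)], min(p[(i, k)], p[(k, j)]))

print(p[(3, 1)], p[(1, 3)])  # 300 200: situation 3 beatpath-beats situation 1
```

The indirect path 1 → 2 → 3 has strength min(300, 500) = 300, which exceeds the direct 3 → 1 margin of 200, reproducing the conclusion above.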

For what it's worth, an option like 2 would have to be practically available to result in 3 being required this way. We can imagine creating many humans, nonhuman animals or artificial sentiences to improve the welfare of existing beings by exploiting these extra moral patients, although their lives would still need to be at least net neutral.

The Supervenience Theorem is quite strong and interesting, but perhaps too strong for many with egalitarian or prioritarian intuitions. Indeed, this is discussed with respect to the conditions for the theorem. In its proof, it's shown that we should treat any problem like the original position behind the veil of ignorance (the one-person scenario: we treat ourselves as having some probability of being each of the individuals involved, and we consider only our own interests in that case), so that every interpersonal tradeoff is the same as a personal tradeoff. This is something that I'm personally quite skeptical of. In fact, if each individual ought to maximize their own expected utility in a way that is transitive and independent of irrelevant alternatives when only their own interests are at stake, then fixed-population Expected Totalism follows (for a fixed population, we should maximize the unweighted total expected utility). The Supervenience Theorem is something like a generalization of Harsanyi's Utilitarian Theorem this way. EDIT: Ah, it seems this link is made indirectly through this paper, which is cited.

That being said, the theorem could also be seen as an argument for Expected Totalism, if each of its conditions can be defended, or at least an argument aimed at whoever leans towards accepting those conditions.

If we've already given up the independence of irrelevant alternatives (whether A or B is better should not depend on what other outcomes are available), it doesn't seem like much of an extra step to give up separability (whether A or B is better should only depend on what's not common to A and B) or Scale Invariance, which is implied by separability. There are different ways to care about the distribution of welfares, and prioritarians and egalitarians might be happy to reject Scale Invariance this way.

Prioritarians and egalitarians can also care about ex ante priority/equality, e.g. everyone deserves a fair chance ahead of time, and this would be at odds with Statewise Supervenience. For example, given H = heads and T = tails, each with probability 0.5, they might prefer the second of two statewise-equivalent options, since it looks fairer to Adam ahead of time, as he actually gets a chance at a better life, while Statewise Supervenience says the two should be equivalent.
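A pair of options of the kind described might look like the following; this is my own illustration with made-up welfare numbers, not the paper's figure.

```python
# Hypothetical options over a fair coin (H/T, each probability 0.5).
# In option 1, Adam is badly off no matter what; in option 2, he gets
# a 50% chance at the better life.
option_1 = {"H": {"Adam": 0, "Eve": 1}, "T": {"Adam": 0, "Eve": 1}}
option_2 = {"H": {"Adam": 1, "Eve": 0}, "T": {"Adam": 0, "Eve": 1}}

# Statewise Supervenience looks only at the multiset of welfares in each
# state, ignoring who gets what, so the two options come out equivalent:
for state in ("H", "T"):
    assert sorted(option_1[state].values()) == sorted(option_2[state].values())

# Yet ex ante, Adam's expected welfare differs between the options:
adam_ev_1 = 0.5 * option_1["H"]["Adam"] + 0.5 * option_1["T"]["Adam"]
adam_ev_2 = 0.5 * option_2["H"]["Adam"] + 0.5 * option_2["T"]["Adam"]
print(adam_ev_1, adam_ev_2)  # 0.0 0.5: option 2 looks fairer to Adam ex ante
```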


If someone cares about ex post equality, e.g. the final outcome should be fair to everyone in it, they might reject Personwise Supervenience, because personwise-equivalent scenarios can be unfair in their final outcomes. The first option here looks unfair to Adam if H happens (ex post), and unfair to Eve if T happens (ex post), but there's no such unfairness in the second option. Personwise Supervenience says we should be indifferent, because from Adam's point of view, ignoring Eve, there's no difference between these two choices, and similarly from Eve's point of view. Note that maximin, which is a limit of prioritarian views, is ruled out.
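A concrete pair of personwise-equivalent options of this kind might look as follows; again, this is my own illustration with made-up numbers, not the paper's figure.

```python
# Hypothetical options over a fair coin (H/T, each probability 0.5).
# In option 1, exactly one of Adam and Eve is badly off in each state;
# in option 2, they rise and fall together.
option_1 = {"H": {"Adam": 0, "Eve": 1}, "T": {"Adam": 1, "Eve": 0}}
option_2 = {"H": {"Adam": 0, "Eve": 0}, "T": {"Adam": 1, "Eve": 1}}

# Personwise Supervenience looks only at each person's own lottery over
# their welfare, which is the same 50/50 lottery in both options:
for person in ("Adam", "Eve"):
    lottery_1 = sorted(option_1[s][person] for s in ("H", "T"))
    lottery_2 = sorted(option_2[s][person] for s in ("H", "T"))
    assert lottery_1 == lottery_2

# But ex post, option 1 is unequal in every state, while option 2 never is:
gap_1 = [abs(option_1[s]["Adam"] - option_1[s]["Eve"]) for s in ("H", "T")]
gap_2 = [abs(option_2[s]["Adam"] - option_2[s]["Eve"]) for s in ("H", "T")]
print(gap_1, gap_2)  # [1, 1] [0, 0]
```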

There are, of course, objections to giving these up. Giving up Personwise Supervenience seems paternalistic, or to override individual interests if we think individuals ought to maximize their own expected utilities. Giving up Statewise Supervenience also has its problems, as discussed in the paper. See also "Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey and Alex Voorhoeve, as well as one of my posts which fleshes out ex ante prioritarianism (ignoring the problem of personal identity) and the discussion there.

Regarding the definition of the Asymmetry,

> 2. If the additional people would certainly have good lives, it is permissible but not required to create them

is this second part usually stated so strongly, even in a straight choice between two options? Normally I only see "not required", not also "permissible", but then again, I don't normally see it stated as a comparison of only two choices. This rules out average utilitarianism, critical-level utilitarianism, negative utilitarianism, maximin and many other theories which may say that it's sometimes bad to create people with overall good lives, all else equal. Actually, basically any value-monistic consequentialist theory which is complete, transitive, satisfies the independence of irrelevant alternatives and non-antiegalitarianism, and avoids the repugnant conclusion is ruled out.

Interesting!

What if we redefine rationality to be relative to choice sets? We might not have to depart too far from vNM-rationality this way.

The axioms of vNM-rationality are justified by Dutch books/money pumps and stochastic dominance, but the latter can be weakened, too, since many outcomes are indeed irrelevant, so there's no need to compare to them all. For example, there's no Dutch book or money pump that only involves changing the probabilities for the size of the universe, and there isn't one that only involves changing the probabilities for logical statements in standard mathematics (ZFC); it doesn't make sense to ask me to pay you to change the probability that the universe is finite. We don't need to consider such lotteries. So, if we can generalize stochastic dominance to be relative to a set of possible choices, then we just need to make sure we never choose an option which is stochastically dominated by another, relative to that choice set. That would be our new definition of rationality.

Here's a first attempt:

Let $C$ be a set of choices or probabilistic lotteries over outcomes (random variables), and let $X$ be the set of all possible outcomes which have nonzero probability in some choice from $C$ (or something more general to accommodate general probability measures). Then for $A, B \in C$, we say $A$ stochastically dominates $B$ with respect to $C$ if:

$$\Pr(A \succeq x) \ge \Pr(B \succeq x)$$

for all $x \in X$, and the inequality is strict for some $x \in X$. This can lift comparisons using $\succeq$, a relation between elements of $X$, to random variables over the elements of $X$. $\succeq$ need not even be complete over $X$ or transitive, but stochastic dominance thus defined will be transitive (perhaps at the cost of losing some comparisons). $\succeq$ could also actually be specific to $C$, not just to $X$.

We could play around with the definition of $\succeq$ here.
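The dominance check can be sketched in a few lines, assuming lotteries are represented as finite dictionaries from outcomes to probabilities and the relation $\succeq$ is given as a two-argument predicate; all names here are mine.

```python
def dominates(A, B, choices, geq):
    """Does lottery A stochastically dominate B with respect to the choice set?

    A, B: dicts mapping outcomes to probabilities.
    choices: the set of available lotteries (determines the outcome set X).
    geq(a, b): the (possibly partial, possibly intransitive) relation on outcomes.
    """
    outcomes = {x for lottery in choices for x in lottery}  # the set X
    strict = False
    for x in outcomes:
        pa = sum(p for y, p in A.items() if geq(y, x))  # Pr(A >= x)
        pb = sum(p for y, p in B.items() if geq(y, x))  # Pr(B >= x)
        if pa < pb:
            return False
        if pa > pb:
            strict = True
    return strict

# Example with numeric outcomes and the usual >= relation:
A = {2: 0.5, 3: 0.5}
B = {1: 0.5, 3: 0.5}
print(dominates(A, B, [A, B], lambda a, b: a >= b))  # True: A dominates B
```

Because `geq` is only ever evaluated pointwise, it need not be complete or transitive, matching the definition above; restricting `outcomes` to those arising from `choices` is what makes the notion relative to the choice set.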

When we consider choices to make now, we need to model the future and consider what new choices we will have to make, and this is how we would avoid Dutch books and money pumps. Perhaps this would be better done in terms of decision policies rather than a single decision at a time, though.

(This approach is based in part on "Exceeding Expectations: Stochastic Dominance as a General Decision Theory" by Christian Tarsney, which also helps to deal with Pascal's wager and Pascal's mugging.)
