I was doing a bit of musing and thought of an ethical concept I haven't heard discussed before, though I'm sure some ethicist has written about it.

It concerns average utilitarianism, a not-very-popular philosophy that I nonetheless find a bit plausible; it has a small place in my moral uncertainty. Most discussions of average utilitarianism (averagism) and total utilitarianism (totalism) begin and end with the Repugnant and Sadistic Conclusions. For me, such discussions leave averagism seeming worse than totalism, but not entirely forgettable.

There is more intricacy to average utilitarianism, however, that I think is overlooked. (Hedonic) total utilitarianism is easily defined: assuming that each sentient being s at point in time t has a "utility" value u(s,t) representing (amount of pleasure - amount of pain) in the moment, total utilitarianism is just:
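
$$U_{\text{total}} \;=\; \sum_{t} \sum_{s \in S_t} u(s,t),$$

writing $S_t$ for the set of sentient beings alive at time $t$ (swap the sums for integrals if you prefer continuous time).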

Average utilitarianism requires specification of an additional value, the moral weight of an individual at a point in time w(s,t), corresponding to a sentient being's capacity for pleasure and pain, or their "degree of consciousness". Averagism is then (I think?) ordinarily defined as follows, where at any given time you divide the total utility by the total moral weight of beings alive:
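
$$U_{\text{avg}} \;=\; \sum_{t} \frac{\sum_{s \in S_t} u(s,t)}{\sum_{s \in S_t} w(s,t)},$$

i.e., at each moment divide the total utility by the total moral weight of the beings alive at that moment, and then aggregate (say, sum) these per-moment averages over time.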

Laying out the view like this makes clear another flaw, one that is in my view worse than anything in the Repugnant vs. Sadistic Conclusion arguments: the moral weight of a given amount of utility isn't time-independent. That is, if a population grows over time to (e.g.) 10x its size, each being's pain and pleasure counts 10x less than that of the beings who came earlier.

This leads to some really bad conclusions. Say this population has a task to accomplish that will require immense suffering by one person. Instead of trying to reduce the suffering, this view says you can dampen its moral cost simply by having this person born far in the future. The raw suffering the being experiences is the same, but because there happen to be more people alive, it just doesn't matter as much. In a growing population, offloading suffering onto future generations becomes an easy get-out-of-jail-free card, in a way that only makes sense to someone who treats ethics as a big game.
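
To make the dilution concrete with some toy numbers (made up purely for illustration): suppose the task inflicts $-100$ utility on one person in a single moment. Under the per-moment definition, the hit to that moment's average is

$$\frac{-100}{\sum_{s \in S_t} w(s,t)} \;=\; \begin{cases} -10 & \text{if the total moral weight alive at } t \text{ is } 10, \\ -1 & \text{if the total moral weight alive at } t \text{ is } 100, \end{cases}$$

so exactly the same raw suffering counts for a tenth as much once the population has grown tenfold.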

After some thinking, I realized that the above expression is not the only way you can define averagism. You can instead divide the total amount of utility that will ever exist by the total amount of moral weight that will ever exist:
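
$$U_{\text{avg}}' \;=\; \frac{\sum_{t} \sum_{s \in S_t} u(s,t)}{\sum_{t} \sum_{s \in S_t} w(s,t)},$$

with both sums running over all of time, past and future.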

This expression destroys the time dependency discussed above. Instead of downweighting an individual's utility by the number of other beings that currently exist, we downweight it by the number of other beings that will ever exist. We still avoid the Repugnant Conclusion on a global scale (which satisfies the "choose one of these two worlds" phrasing ordinarily used), though on local timescales a lot of repugnant behavior remains that the previous definition avoids.
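
With the same toy numbers as before: under the atemporal definition, the $-100$ lands in the global numerator and the sufferer's moral weight in the global denominator no matter when they live, so

$$U_{\text{avg}}' \;=\; \frac{(\text{all other utility that will ever exist}) - 100}{\text{all moral weight that will ever exist}}$$

comes out the same whether the suffering happens now or in a far more populous future.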

The time-invariant expression also puts a bit of a different spin on average utilitarianism. By the end of the last sentient life, we want to be able to claim that the average sentient being was as happy as possible. If we ever get the all-time average utility to a level we can never match again, the best option is just to have no more sentient life: to "turn off the tap" of sentience before the water gets too diluted with below-average (even if positive) utility. We are also obligated to learn about our history and determine whether ancient beings were miserable or ecstatic, to see at what level of utility it is still worth having new life.

...Or at least, in theory. In practice, of course, it's really hard to figure out what the actual implications of different forms of averagism are, given how little we know about wild animal welfare and given the correlation between per-capita prosperity and population size. That being said, I think this form of averagism is at least interesting and merits a bit of discussion. I certainly don't give it too much credence, but it has found a bit of weight in my moral uncertainty space.

Comments

Thanks for this. I just had a similar idea, and of course I'm glad to see another EA had a similar insight before me. I am no expert in the field, but I agree that this "atemporal avg utilitarianism" seems to be underrated; I wonder why. The greatest problem I see with this view, at first, is that it makes the moral goodness of future actions depend on the population and the goodness of the past. I suspect this would also make it impossible (or intractable) to model goodness as a social welfare function. But then... if the moral POV is the "POV of the universe", or the POV of nowhere, or that of the impartial observer... maybe that's justified? And it'd explain the Asymmetry and the use of thresholds for adding people.

I suspect this view is immune to the repugnant conclusion / mere addition paradox. The most piercing objection from total-view advocates against avg utilitarianism is that it implies a sadistic conclusion: adding a life worth living makes the world worse if that life is below the average utility, and adding a life with negative value is good if it is above the world average. But if the overall avg utility is positive, or if you add a constraint forbidding the addition of negative lives... it becomes harder to find examples where this view implies a "sadistic" conclusion.

As an aside, if both averagism and totalism lead to results that seem discordant with our moral intuitions, why do we need to choose between them? Wouldn't it make sense to look for a function combining some elements of each?

There's a proof showing that any utilitarian ideology implies either the Repugnant Conclusion, the Sadistic Conclusion, or anti-egalitarianism (preferring an unequal society), so you can't cleverly avoid these two conclusions with some fancy math. On top of that, any fancy view you create will be in some sense unmotivated - you just came up with a formula that you like, but why would such a formula be true? Totalism and averagism seem to be the two most interpretable utilitarian ideologies: totalism cares only about pain/pleasure (and not about who experiences it), and averagism is the same except population-neutral, not incentivizing a larger population unless it has higher average net pleasure. Anything else is kind of an arbitrary view invented by someone who is too into math.

The anti-egalitarianism one seems to me to be the least obviously necessary of the three [1]. It doesn't seem obviously wrong that for this abstract concept of 'utility' (in the hedonic sense), there may be cases and regions in which it's better to have one person with a bit more and another with a bit less.

But more importantly, I think: why is it so bad that it is 'unmotivated'? In many domains we think that 'a balance of concerns' or 'a balance of inputs' yields the best outcome under the constraints.

So why shouldn't a reasonable moral valuation ('axiology') involve some balance of interest in total welfare and interest in average welfare? It's hard to know where that balancing point should lie (although maybe some principles could be derived). But that still doesn't seem to invalidate it... any more than my liking some combination of work and relaxation, or believing that beauty lies in a balance between predictability and surprise, etc.

I wouldn't call it 'invented by someone too into math' (if that's possible :) ). If anything, I think the opposite: I am accepting that a valuation of what is moral could be valid and defensible even if it can't be stated in as stark axiomatic terms as the more extreme value systems.


  1. Although many EAs seem to be ok with the repugnant conclusion also. ↩︎

In other domains, when we combine different metrics to yield one Frankenstein metric, it is because these different metrics are all partial indicators of some underlying measure we cannot directly observe. The whole point of ethics is that we are trying to directly describe this underlying measure of "good", and thus it doesn't make sense to me to create some Frankenstein view.

The only instance I would see this being ok is in the context of moral uncertainty, where we're saying "I believe there is some underlying view but I don't know what it is, so I will give some weight to a bunch of these plausible theories". Which maybe is what you're getting at? But in that case, I think it's necessary to believe that each of the views you are averaging over could be approximately true on its own, which IMO really isn't the case with a complicated utilitarianism formula, especially since we know there is no formula out there that will give us all we desire. Though this is another long philosophical rabbit hole, I'm sure.
