
aspencer comments on S-risk FAQ - Effective Altruism Forum

Comment author: aspencer 26 September 2017 03:00:33PM 1 point

This sentence in your post caught my attention: "Even if the fraction of suffering decreases, it's not clear whether the absolute amount will be higher or lower."

To me, it seems like suffering should be measured by suffering / population, rather than by the total amount of suffering. The total amount of suffering will grow naturally with the population, and suffering / population seems to give a better indication of the severity of the suffering (a small group suffering a large amount is weighted higher than a large group suffering a small amount, as I intuitively think is correct).

My primary concern with this (simplistic) method of measuring the severity of suffering is that it ignores the distribution of suffering within a population (i.e. there could be a subpopulation with a large amount of suffering). However, I don't think that's a compelling enough reason to discount working to minimize the fraction of suffering rather than absolute suffering.

Are there compelling arguments for why we should seek to minimize total suffering?

Comment author: Gregory_Lewis 28 September 2017 06:08:11AM 1 point

If I understand right, the view you're proposing is sort of like the 'average view' of utilitarianism. The objective is to minimize the average level of suffering across a population.

A common challenge to this view (shared with average utilitarianism) is that it implies you can make a world better by adding lives which suffer, so long as they suffer less than the average. In some hypothetical hellscape where everyone is being tortured, adding further lives in which people are tortured slightly less severely should intuitively make the world even worse, not better; yet the average view counts it as an improvement.
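The objection is at bottom arithmetic, so it can be made concrete with toy numbers (the figures below are purely illustrative inventions, not anything from the FAQ or this thread):

```python
# Hypothetical per-person suffering scores; higher = worse.
world = [10, 10, 10]                    # everyone tortured at intensity 10
total = sum(world)                      # total suffering: 30
average = total / len(world)            # average suffering: 10.0

# Add one more life, tortured slightly less severely (intensity 9).
world_plus = world + [9]
total_plus = sum(world_plus)            # 39: strictly more suffering overall
average_plus = total_plus / len(world_plus)  # 9.75: the average falls

# The average view ranks world_plus as an improvement (9.75 < 10.0),
# even though it contains strictly more suffering in total.
```

Any added life below the current average produces the same pattern: the average falls while the total rises.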

Pace the formidable challenges of infinitarian ethics, I generally lean towards total views. I think the intuition you point to (which I think is widely shared), that larger degrees of suffering should 'matter more', is perhaps better accommodated in something like prioritarianism, whereby improving the well-being of the least well off is given extra moral weight beyond its utilitarian 'face value'. (FWIW, I generally lean towards pretty flat-footed utilitarianism, as there are some technical challenges with prioritarianism, and it seems hard to distinguish the empirical from the moral matters: there are evolutionary motivations (H/T Carl Shulman) for why there should be extremely severe pain, so maybe a proper utilitarian accounting makes relieving these extremes worth very large amounts of more minor suffering.)

Aside: in population ethics there's a well-worn problem of aggregation, as suggested by the repugnant conclusion: lots and lots of tiny numbers, when put together, can outweigh a big number. So total views face challenges such as: "Imagine A, where 7 billion people live lives of perfect bliss, versus B, where those same people suffer horrendous torture but there are additionally TREE(4) people with lives that are only just barely worth living." On a total view B is far better than A, yet that seems repulsive. (The usual total-view move is to appeal to scope insensitivity: our intuitions here are ill-suited to tracking vast numbers. I don't think the perhaps-more-natural replies (e.g. 'discount positive wellbeing that is above zero but below some threshold close to it') come out in the wash.)

Unfortunately, the 'suffering only' view suggested as a potential candidate in the FAQ (i.e. discount 'positive experiences' and work only to reduce suffering) seems to compound these problems: in essence, one can concatenate the problems of population ethics with the counter-intuitiveness of discounting positive experience (virtually everyone's expressed and implied preferences indicate that positive experiences have free-standing value, as people are willing to trade off between negative and positive).

The aggregation challenge akin to the repugnant conclusion (which I think I owe to Carl Shulman) goes like this. Consider A: 7 billion people suffering horrendous torture. Now consider B: TREE(4) people enjoying lifelong eudaimonic bliss, with the exception that each suffers a single pinprick. On a total suffering view A >>> B, yet this seems common-sensically crazy.

The view seems to violate two intuitions: first the aggregation issue (i.e. that TREE(4) pinpricks count as morally more important than 7 billion cases of torture), but also the discounting of positive experience: the 'suffering only counts' view is indifferent to the TREE(4) instances of lifelong eudaimonic bliss that separate the scenarios. If we imagine a world C where no one exists, a total utilitarian view gets the intuitively 'right' answer (i.e. B > C > A), whilst the suffering view gets most of the pairwise comparisons intuitively wrong (i.e. C > A > B).
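The two orderings can be checked mechanically with stand-in numbers. This is only a sketch: TREE(4) is far too large to represent, so a merely large constant stands in for it, and the per-person wellbeing magnitudes are invented for illustration; the structure of the comparison is all that matters.

```python
# Stand-in magnitudes (hypothetical): TREE(4) replaced by a large constant.
N_SMALL, N_HUGE = 7_000_000_000, 10**20
TORTURE, PINPRICK, BLISS = -1_000_000, -1, 1_000_000

# A: 7 billion tortured; B: huge population in lifelong bliss minus one
# pinprick each; C: no one exists.
wellbeing = {                       # total view: sum all experiences
    "A": N_SMALL * TORTURE,
    "B": N_HUGE * (BLISS + PINPRICK),
    "C": 0,
}
suffering = {                       # suffering-only view: count negatives only
    "A": N_SMALL * -TORTURE,
    "B": N_HUGE * -PINPRICK,
    "C": 0,
}

# Total view ranks worlds by wellbeing, best first: B > C > A.
total_order = sorted(wellbeing, key=wellbeing.get, reverse=True)
# Suffering-only view ranks by least suffering first: C > A > B.
suffering_order = sorted(suffering, key=suffering.get)
```

With any choice of magnitudes where the huge population's pinpricks sum to more suffering than the torture, the same two orderings fall out.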

Comment author: WillPearson 27 September 2017 08:44:26PM 0 points

How do you feel about the mere addition paradox? These questions are not simple.