MichaelStJules
We should separate whether the view is well-motivated from whether it's compatible with "ethics being about affecting persons". It's based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with "ethics being about affecting persons".

We should also separate plausibility from whether it would follow on stricter interpretations of "ethics being about affecting persons". An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same arguments you make for A+ over Z, so I think your arguments prove too much. For example,

  1. Alice with welfare level 10 and 1 million people with welfare level 1 each
  2. Alice with welfare level 4 and 1 million different people with welfare level 4 each

You said "Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+." The same argument would support 1 over 2.

Then you said "Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?)." Similarly, I could say "Picking 2 is only motivated by an arbitrary decision to compare contingent people, merely because there's a minimum number of contingent people across outcomes (... so what?)"

So, similar arguments support narrow person-affecting views over wide ones.

The fact that non-existence (a comparison with A) is not involved here is just a result of that decision, not of there really being only two options.
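To make the parallel concrete, here's a minimal Python sketch using the welfare numbers from the example above. The "narrow" tally counts only Alice (the one person who exists in both outcomes); the "impartial" tally, which a wide view matching the contingent people as counterparts would also endorse here, counts everyone. The specific scoring is my own simplifying assumption, not something from the post.

```python
# Outcome 1: Alice at welfare 10, 1 million people at welfare 1 each.
# Outcome 2: Alice at welfare 4, 1 million *different* people at welfare 4 each.
alice_1, contingent_1 = 10, [1] * 1_000_000
alice_2, contingent_2 = 4, [4] * 1_000_000

# Narrow/necessitarian tally: only Alice counts.
narrow_1, narrow_2 = alice_1, alice_2                  # 10 vs 4 -> prefers outcome 1

# Impartial tally (or a wide tally with the contingent people matched as counterparts).
impartial_1 = alice_1 + sum(contingent_1)              # 1,000,010
impartial_2 = alice_2 + sum(contingent_2)              # 4,000,004 -> prefers outcome 2

print(narrow_1 > narrow_2, impartial_2 > impartial_1)  # True True
```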

I think ignoring irrelevant alternatives has some independent appeal. Dasgupta's view does that at step 1, but not at step 2. So, it doesn't always ignore them, but it ignores them more than necessitarianism does.

 

I can further motivate Dasgupta's view, or something similar:

  1. There are some "more objective" facts about axiology or what we should do that don't depend on who presently, actually or across all outcomes necessarily exists (or even wide versions of this). What we should do is first constrained by these "more objective" facts. Hence something like step 1. But these facts can leave a lot of options incomparable or undominated/permissible. I think all complete, transitive and independent of irrelevant alternatives (IIA) views are kind of implausible (e.g. the impossibility theorems of Arrhenius). Still, there are some things the most plausible of these views can agree on, including that Z>A+.
    1. Z>A+ follows from Harsanyi's theorem, extensions to variable population cases and other utilitarian theorems, e.g. McCarthy et al., 2020, Theorem 3.5; Thomas, 2022, sections 4.3 and 5; Gustafsson et al., 2023; Blackorby et al., 2002, Theorem 3.
    2. Z>A+ follows from anonymous versions of total utilitarianism, average utilitarianism, prioritarianism, egalitarianism, rank-discounted utilitarianism, maximin/leximin, variable value theories and critical-level utilitarianism (for a concrete illustration, see the sketch after this list). Of anonymous, monotonic (Pareto-respecting), transitive, complete and IIA views, it's only really (partially) ~anti-egalitarian views (e.g. increasing marginal returns to additional welfare, maximax/leximax, geometrism, views with positive lexical thresholds), which sometimes ~prioritize the better off more than ~proportionately, that reject Z>A+, as far as I know. That's nearly a consensus in favour of Z>A+, and the dissidents have more plausible counterparts that support Z>A+.
    3. On the other hand, there's more disagreement on A vs A+, and on A vs Z.
    4. Whether or not this step is person-affecting could depend on what kinds of views we use or the facts we're constrained by, but I'm less worried about that than what I think are plausible (to me) requirements for axiology.
  2. After being constrained by the "more objective" facts in step 1, we should (or are at least allowed to) pick between remaining permissible options in favour of necessary people (or minimizing harm or some other person-affecting principle). Other people wouldn't have reasonable impartial grounds for complaint with our decisions, because we already addressed the "more objective" impartial facts in 1.
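To illustrate 1.2 with concrete numbers (these welfare levels are my own, purely for illustration, not from the post): suppose in A+ there are 1 million people at welfare 10 and 99 million extra people at welfare 1, while in Z all 100 million are at welfare 2. Then several anonymous, monotonic views of the kind listed above agree that Z>A+:

```python
import math

# Populations as (welfare level, head count) pairs; the numbers are illustrative.
a_plus = [(10, 1_000_000), (1, 99_000_000)]
z = [(2, 100_000_000)]

def total(pop):        return sum(w * n for w, n in pop)
def average(pop):      return total(pop) / sum(n for _, n in pop)
def prioritarian(pop): return sum(math.sqrt(w) * n for w, n in pop)  # concave priority weighting
def maximin(pop):      return min(w for w, _ in pop)

for view in (total, average, prioritarian, maximin):
    print(view.__name__, view(z) > view(a_plus))  # True for each: Z > A+
```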

If you were going to defend utilitarian necessitarianism, i.e. maximizing the total utility of necessary people, you'd need to justify the utilitarian bit. But the most plausible justifications for the utilitarian bit would end up being justifications for Z>A+, unless you restrict them apparently arbitrarily. So then, you ask: am I a necessitarian first, or a utilitarian first? If you're a utilitarian first, you end up with something like Dasgupta's view. If you're a necessitarian first, you end up with utilitarian necessitarianism.

Similarly if you substitute a different wide, anonymous, monotonic, non-anti-egalitarian view for the utilitarian bit.

Then, I think there are ways to interpret Dasgupta's view as compatible with "ethics being about affecting persons", step by step:

  1. Step 1 rules out options based on pairwise comparisons within the same populations, or populations of the same size. Because we never compare existence to nonexistence at this step (we only compare the same people, or the same number of people as in nonidentity cases), this step is arguably about affecting persons.
  2. Step 2 is just necessitarianism on the remaining options. Definitely about affecting persons.
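Here's a rough sketch of that two-step structure, under my own simplifying assumptions: step 1 compares only options with the same population size and uses an impartial total for those comparisons (the actual step could use a broader set of impartial standards), step 2 maximizes the total welfare of necessary people among the survivors, and the welfare numbers are illustrative.

```python
# Each option: {"necessary": (welfare, count), "extra": (welfare, count)} ("extra" optional).
A      = {"necessary": (10, 1_000_000)}
A_plus = {"necessary": (10, 1_000_000), "extra": (1, 99_000_000)}
Z      = {"necessary": (2, 1_000_000),  "extra": (2, 99_000_000)}

def size(opt):  return sum(n for _, n in opt.values())
def total(opt): return sum(w * n for w, n in opt.values())
def necessary_total(opt): return opt["necessary"][0] * opt["necessary"][1]

def dasgupta_choice(options):
    # Step 1: rule out any option beaten on the impartial total by another option
    # with the same population size.
    survivors = {name: o for name, o in options.items()
                 if not any(size(p) == size(o) and total(p) > total(o)
                            for p in options.values())}
    # Step 2: necessitarianism among the surviving options.
    return max(survivors, key=lambda name: necessary_total(survivors[name]))

print(dasgupta_choice({"A": A, "A+": A_plus, "Z": Z}))
# "A": step 1 drops A+ (Z has the same population size and a higher total),
# and step 2 then prefers A to Z, since its necessary people are better off.
```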

These other views also seem compatible with "ethics being about affecting persons":

  1. The view that makes (wide or narrow) necessitarian utilitarian comparisons pairwise while ignoring alternatives, so it gives A<A+, A+<Z, Z<A, a cycle.
  2. Actualism
  3. The procreation asymmetry

Anyway, I feel like we're nitpicking here about what deserves the label "person-affecting" or "being about affecting persons".

Do you intend for the population to recover in B, or extinction with no future people? In the post, you write that the second virus "will kill everybody on earth". I'd assume that means extinction.

If B (killing 8 billion necessary people) does mean extinction and you think B is better than A, then you prefer extinction to extra future deaths. And your argument seems general, e.g. we should just go extinct now to prevent the deaths of future people. If they're never born, they can't die. You'd be assigning negative value to additional deaths, but no positive value to additional lives. The view would be antinatalist.

Or, if you think B is just no worse than A (equivalent or incomparable), then extinction is permissible, in order to prevent the deaths of future people.

 

If you allow population recovery in B, then (symmetric) wide person-affecting views can say B is better than A, although it could depend on how many future/contingent people will exist in each scenario. If the number is the same or larger in B and dying earlier is worse than dying later, then B would be better. If it's lower in B, then you may need to discount some of the extra early deaths in A.

If additional human lives have no value in themselves, that implies that the government would have more reason to take precautionary measures against a virus that would kill most of us than one that would kill all of us, even if the probabilities were equal.

Maybe I'm misunderstanding, but if

  • we totally discounted what happens to future/additional people (even stronger than no reason to create them), and only cared about present/necessary people, and
  • killing everyone/extinction means killing all present/necessary people (extinction now, not extinction in the future) and no one else ever existing,

then, conditional on the given virus mutating

  1. the first virus kills 7 billion + possibly several million more people who presently/necessarily exist, but less than 8 billion present/necessary people
  2. the second virus kills everyone, 8 billion present/necessary people

2 kills more present/necessary people, so we'd want to prevent it.

EDIT: It looks like you pointed out something similar here.

I don't think 6 follows. Preventing the early deaths of future people does not imply creating new lives or making happy people. In each version of the intuition of neutrality here, the two statements separated by the "but" are not actually exhaustive of how we should treat future people.

Your argument would only establish that we shouldn't be indifferent to (or discount or substantially discount) future lives, not that we have reason to ensure future people are born in the first place or to create people. Multiple views that don't find extinction much worse than almost everyone dying + population recovery could still recommend avoiding the extra deaths of future people. Especially "wide" person-affecting views.[1]

On "wide" person-affecting views, if you have one extra person Alice in outcome A, but a different extra person Bob in outcome B, and otherwise the same people in both, then you treat Alice and Bob like the same person across the two outcomes. They're "counterparts". For more on this, and how to extend to different numbers of non-overlapping people between A and B, see Meacham, 2012, section 4 (or short summary in Koehler, 2021) and Thomas, 2019, section 5.3. I also discuss some different person-affecting views here.

Under wide views, with the virus that kills more people, the necessary people + matched counterparts are worse off than with the virus that kills fewer people.
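Here's a minimal sketch of how such a wide, counterpart-matching comparison could be run, under my own simplifying assumptions: contingent people are matched as counterparts just by list position, the tally is a plain utilitarian sum of per-person differences, and the welfare numbers are illustrative.

```python
def wide_utilitarian_diff(outcome_a, outcome_b):
    """Each outcome is (necessary_welfares, contingent_welfares).
    Sums per-person welfare differences (A minus B) over necessary people and over
    contingent people matched as counterparts by position.
    Positive means A is better on this simple wide tally."""
    nec_a, cont_a = outcome_a
    nec_b, cont_b = outcome_b
    diff = sum(a - b for a, b in zip(nec_a, nec_b))     # same people in both outcomes
    diff += sum(a - b for a, b in zip(cont_a, cont_b))  # matched counterparts
    return diff

# Illustrative numbers: the deadlier virus leaves more necessary people dead early
# (welfare 0 here); both outcomes have a recovered future population of the same size.
deadlier_virus = ([0, 0, 5], [6, 6])
milder_virus   = ([5, 5, 5], [6, 6])
print(wide_utilitarian_diff(deadlier_virus, milder_virus))  # -10: the milder virus is better
```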

(I'd guess there are different ways to specify the intuition of neutrality; your argument might succeed against some but not others.)

  1. ^

    Some versions of negative preference utilitarianism or views that minimize aggregate DALYs might, too, but if the extra early deaths prevent additional births, then in fact killing more people with the viruses could prevent more deaths overall, and that could be better on these views. These are pretty antinatalist views. That being said, I am fairly sympathetic to antinatalism about future people, but more so because I don't think good lives can make up for bad ones.

Dasgupta's view makes ethics about what seems unambiguously best first, and then about affecting persons second. It's still person-affecting, but less so than necessitarianism and presentism.

It could be wrong about what's unambiguously best, though, e.g. if we should reject full aggregation and prioritize larger individual differences in welfare between outcomes, so that A+' (and maybe A+) looks better than Z.

Do you think we should be indifferent in the nonidentity problem if we're person-affecting? I.e., between creating a person with a great life and creating a different person with a marginally good life (and no other options).

For example, we shouldn't care about the effects of climate change on future generations (beyond the next few generations, say), because future people's identities will be different if we act differently.

But then also see the last section of the post.

I largely agree with this, but

  1. If we were only concerned with what's best for the original people when in S, then the probability that, if we pick A+, we can and should later switch to something like Z matters. For the original people, it may be worth the risk. It would depend on the details.
  2. I also suspect we should first rule out A+ with Z available from S, even if we were sure we couldn't later switch to something like Z. A+ does seem unfair with Z available from S. Whether or not we could switch to something like Z later, if we had chosen A+, we'd have realized it was a mistake not to choose Z over A+, for the people who will then exist. But I also want to say it won't have been a mistake to pick A, despite A+ having been available.

2 motivates applying impartial norms first, like fixed-population comparisons insensitive to who currently or necessarily exists, to rule out options (in this case A+, because it's worse than Z). After that, we pick among the remaining options using person-affecting principles, like necessitarianism, which gives us A over Z. That's Dasgupta's view.

You could also do brain imaging to check for pain responses.

You might not even need to know what normal pain responses in the species look like, because you could just compare responses to normally painful stimuli against responses to control stimuli.

However, knowing what normal pain responses in the species look like would help. Also, across mammals, including humans and raccoons, the substructures responsible for pain (especially the anterior cingulate cortex) seem roughly the same, so I think we'd have a good idea of where to check.

Maybe one risk is that the brain would just adapt and recruit a different subsystem to generate pain, or use the same one in a different way. But control stimuli could help you detect that.

Another behavioural indicator would be (learned) avoidance of painful stimuli.

The arguments for the unfairness of X relative to Y that I gave in my previous comment (with the modified welfare levels, X = (3, 6) vs Y = (5, 5)) aren't sensitive to the availability of other options: Y is more equal (ignoring other people), Y is better according to some impartial standards, and Y is better if we give greater priority to the worse off or to larger gains/losses.
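As a quick arithmetic check of those comparisons (a minimal sketch; the "impartial standards" here are just total welfare, a square-root prioritarian transform, maximin, and the welfare gap, which are my choices for illustration):

```python
import math

X, Y = (3, 6), (5, 5)

print(sum(Y) > sum(X))                                              # True: 10 vs 9 on total welfare
print(sum(math.sqrt(w) for w in Y) > sum(math.sqrt(w) for w in X))  # True: priority to the worse off
print(min(Y) > min(X))                                              # True: maximin
print(max(Y) - min(Y) < max(X) - min(X))                            # True: Y is more equal
```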

All of these except priority for larger gains/losses also apply if we substitute A+ for X and Z for Y, telling us that Z is fairer than A+ regardless of the availability of other options like A. The exception arises because, between A+ and Z, each of the 1 million people has more to lose than each of the extra 99 million people has to gain.

Fairness is harder to judge between populations of different sizes (the number of people who will ever exist), and so may often be indeterminate. Different impartial standards, like total, average and critical-level views, will disagree about A vs A+ as well as about A vs Z. But A+ and Z have the same population size, so there's much more consensus in favour of Z>A+ (although necessitarianism, presentism and views that give special priority to those with more to lose can disagree, finding A+>Z).
