MichaelStJules

I don't think it's true that other things are equal on the intuition of neutrality, after saying there are more deaths in A than B. The lives and deaths of the contingent/future people in A wouldn't count at all on symmetric person-affecting views (narrow or wide). On some asymmetric person-affecting views, they might count, but the bad lives count fully, while the additional good lives only offset (possibly fully offset) but never outweigh the additional bad lives, so the extra lives and deaths need not count on net.

On the intuition of neutrality, there are more deaths that count in B, except, basically, if you're an antinatalist (about this case).

What person-affecting views satisfying neutrality do you imagine would recommend B/extinction/taking precautions against A here?

For an argument against neutrality that isn't just against antinatalism, I think you want to define B so that it's better than or as good as A for necessary people. For example, the virus in B makes everyone infertile without killing them (but the virus in A kills people). Or, fewer people are killed early on in B, and the rest decide not to have children. Or, the deaths in A (for the necessary people) are painful and extended, but painless in B.

Granted, but this example presents just a binary choice, with none of the added complexity of choosing between three options, so we can't infer much from it.

I can add any number of other options, as long as they respect the premises of your argument and are "unfair" to the necessary number of contingent people. What specific added complexity matters here and why?

I think you'd want to adjust your argument, replacing "present" with something like "the minimum number of contingent people" (and decide how to match counterparts if there are different numbers of contingent people). But this is moving to a less strict interpretation of "ethics being about affecting persons". And then I could turn your original complaint here against Dasgupta's approach into a complaint against this less strict wide interpretation.

Well, there is a necessary number of "contingent people", which seems similar to having necessary (identical) people.

But it's not the same, and we can argue against it on a stricter interpretation. The difference seems significant, too: no specific contingent person is or would be made worse off. They'd have no grounds for complaint. If you can't tell me for whom the outcome is worse, why should I care? (And then I can just deny each reason you give as not in line with my intuitions, e.g. "... so what?")

Stepping back, I'm not saying that wide views are wrong. I'm sympathetic to them. I also have some sympathy for (asymmetric) narrow views for roughly the reasons I just gave. My point is that your argument or the way you argued could prove too much if taken to be a very strong argument. You criticize Dasgupta's view from a stricter interpretation, but we can also criticize wide views from a stricter interpretation.

I could also criticize presentism, necessitarianism and wide necessitarianism for being insensitive to the differences between A+ and Z for persons affected. The choice between A, A+ and Z is not just a choice between A and A+ or between A and Z. Between A+ and Z, the "extra" persons exist in both and are affected, even if A is available.

 

I think there is a quite straightforward argument why IIA is false. (...)

I think these are okay arguments, but IIA still has independent appeal, and here you need a specific argument for why Z vs A+ depends on the availability of A. If the argument is that we should do what's best for necessary people (or necessary people + necessary number of contingents and resolving how to match counterparts), where the latter is defined relative to the set of available options, including "irrelevant options", then you're close to assuming IIA is false, rather than defending it. Why should we define that relative to the option set?

And there are also other resolutions compatible with IIA. We can revise our intuitions about some of the binary choices, possibly to incomparability, which is what Dasgupta's view does in the first step.

Or we can just accept cycles.[1]

 

I don't see why this would be better than doing other comparisons first.

It is constrained by "more objective" impartial facts. Going straight for necessitarianism first seems too partial, and unfair in other ways (by prioritarian, egalitarian and most other plausible impartial standards). If you totally ignore the differences in welfare for the extra people between A+ and Z (not just outweighed, but taken to be irrelevant) when A is available, it seems you're being infinitely partial to the necessary people.[2] Impartiality is somewhat more important to me than my person-affecting intuitions here.

I'm not saying this is a decisive argument or that there is any, but it's one that appeals to my intuitions. If your person-affecting intuitions are more important or you don't find necessitarianism or whatever objectionably partial, then you could be more inclined to compare another way.

  1. ^

    We'd still have to make choices in practice, though, and a systematic procedure would violate a choice-based version of IIA (whichever option we choose in the 3-option case of A, A+, Z would not be chosen in a binary choice against one of the other available options).

  2. ^

    Or rejecting full aggregation, or aggregating in different ways, but we can consider other thought experiments for those possibilities.

Still, I think your argument is in fact an argument for antinatalism, or can be turned into one, based on the features of the problem to which you've been sensitive here so far. If you rejected antinatalism, then your argument proves too much and you should discount it, or you should be more sympathetic to antinatalism (or both).

You say B prevents more deaths, because it will prevent deaths of future people from the virus. But it prevents those future deaths by also preventing those people from existing.

So, for B to be better than A, you're saying it's worse for extra people to exist than not exist, and the reason it's worse is that they will die. Or that they will die early, but early relative to what? There's no counterfactual in which they live longer, the way you've set the problem up. They die early relative to other people around them, or perhaps without achieving major life goals they would have achieved if they didn't die early, I guess.

Similarly, going extinct now prevents more deaths from all causes, including age-related ones, but also everything that causes people to die early, like car accidents, war, diseases in young people, etc. The effects are essentially the same.

What's special about the virus in this hypothetical vs all other causes of (early) death in humans?

So, we should prevent (early) deaths by going extinct now, or collectively refusing to have children, if the alternative is the status quo with many (early) deaths for a long time. That looks like an in-principle antinatalist position.

Thanks for providing these external benchmarks and making it easier to compare! Do you mind if I updated the text to include a reference to your comments?

Feel free to!

Oh, I didn't mean for you to define the period explicitly as a fixed interval. I assume this can vary by catastrophe. Like maybe population declines over 5 years with massive crop failures. Or, an engineered pathogen causes massive population decline in a few months.

I just wasn't sure what exactly you meant. Another interpretation would be that P_f is the total post-catastrophe population, summing over all future generations, and I just wanted to check that you meant the population at a given time, not aggregating over time.

Expected value density of the benefits and cost-effectiveness of saving a life

You're modelling the cost-effectiveness of saving a life conditional on catastrophe here, right? If so, I think it would be best to be more explicit about that; it would also make the post easier to follow. Typically, x-risk interventions aim at reducing the risk of catastrophe, not at increasing the benefits conditional on catastrophe.

Denoting the pre- and post-catastrophe populations by P_i and P_f, I assume

Also, to be clear, this is supposed to be ~immediately pre-catastrophe and ~immediately post-catastrophe, right? (Catastrophes can probably take time, but presumably we can still define pre- and post-catastrophe periods.)

Another benchmark is GiveWell-recommended charities, which save a life for around $5,000. Assuming that's 70 years of life saved (mostly children), that would be 70 years of human life/$5000 = 0.014 years of human life/$. People spend about 1/3rd of their time sleeping, so it's around 0.0093 years of waking human life/$.

Then, taking ratios of cost-effectiveness, that's about 7 years of disabling chicken pain prevented per year of waking human life saved.
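To make the arithmetic explicit, here's a minimal sketch (the ~0.067 years of disabling chicken pain averted per dollar is the feed-access estimate discussed elsewhere in this thread; all numbers are rough):

```python
# Rough sanity check of the benchmark arithmetic above (illustrative only).
cost_per_life = 5_000    # $ per life saved, GiveWell-recommended charities (approx.)
years_per_life = 70      # assumed years of life saved, mostly children
waking_fraction = 2 / 3  # ~1/3 of time is spent asleep

human_years_per_dollar = years_per_life / cost_per_life             # ~0.014
waking_years_per_dollar = human_years_per_dollar * waking_fraction   # ~0.0093

# Feed-access (feeders) estimate from the table further down in this thread:
chicken_pain_years_per_dollar = 0.067  # years of disabling pain averted per $

ratio = chicken_pain_years_per_dollar / waking_years_per_dollar
print(round(ratio, 1))  # ~7.2 years of disabling chicken pain averted per year of waking human life
```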

Then, we could consider:

  1. How bad disabling pain is in a human vs a chicken
  2. How bad human disabling pain is vs how valuable additional waking human life is
  3. Indirect effects (of the additional years of human life, influences on attitudes towards nonhuman animals, etc.)

Measures aimed at addressing thermal stress, and improving hen access to feed and water show promise in reducing significant amounts of hours spent in pain cost-effectively. Example initial estimates:

| Welfare issue | Total impact [hours of disabling pain averted/farm] | Cost efficacy [$/hen] | Cost efficacy [$ cents/hour of disabling pain] |
|---|---|---|---|
| Thermal stress | 87.5k (46.25k-150k) | 0.77 | 1.11 (0.65-2.09) |
| Limited access to water | 23.75k (12.5k-35k) | 0.17 | 0.9 (0.61-1.71) |
| Limited access to feed (feeders) | 162.5k (103.75k-212.5k) | 0.22 | 0.17 (0.13-0.27) |
| Limited access to feed (feeders + feed) | 362.5k (250k-475k) | 1.43 | 0.49 (0.38-0.72) |
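For reference, a quick sketch of how the table's point estimates in cents per hour translate into years of disabling pain averted per dollar (the unit used in the comparisons below):

```python
# Convert cost efficacy in US cents per hour of disabling pain averted
# into years of disabling pain averted per dollar (point estimates only).
HOURS_PER_YEAR = 24 * 365

cents_per_hour = {
    "Thermal stress": 1.11,
    "Limited access to water": 0.9,
    "Limited access to feed (feeders)": 0.17,
    "Limited access to feed (feeders + feed)": 0.49,
}

for issue, cents in cents_per_hour.items():
    years_per_dollar = (1 / (cents / 100)) / HOURS_PER_YEAR
    print(f"{issue}: {years_per_dollar:.3f} years of disabling pain averted per $")
# The feeders row comes out to ~0.067 years of disabling pain averted per $.
```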

 

For the most promising, limited access to feed (feeders), at 0.17 cents/hour of disabling pain, this is around 0.067 years of disabling pain/$. It's worth benchmarking against corporate campaigns for comparison. From Duffy, 2023, using disabling pain-equivalent:

1.7 years of suffering avoided per dollar that was spent on cage-free campaigns, with a range between 0.23 and 5.0 years per dollar. 

At first, this looks much less cost-effective: 1.7/0.067 ≈ 25, so about 25x. However, Emily Oehlsen from Open Phil said:

We think that the marginal FAW funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis.

And Duffy's estimate is based on the same analysis by Saulius. So, more like 5x less cost-effective. However, Duffy's estimate also included milder pains:

Table 25: Cage-free corporate campaign cost-effectiveness by pain type

| Pain type | Lower Bound (yrs. pain avoided/$/yr.) | Average Estimate (yrs. pain avoided/$/yr.) | Upper Bound (yrs. pain avoided/$/yr.) | Weight |
|---|---|---|---|---|
| Excruciating | -0.000002 | -0.000002 | -0.000002 | 5 |
| Disabling | 0.019 | 0.052 | 0.107 | 1 |
| Hurtful | 0.10 | 0.39 | 0.88 | 0.15 |
| Annoying | 0.35 | 0.91 | 1.7 | 0.01 |
| Suffering-equivalent | 0.05 | 0.12 | 0.23 | 1 |

 

More than half of the disabling pain-equivalent hours actually come not from disabling pain at all, but from hurtful pain. So a fairer comparison would either omit the hurtful pain for corporate campaigns or also include hurtful pain for this other intervention. This could bring us closer to around 2.5x, as a first guess, which seems near enough to the funding bar.
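Putting these adjustments together, a minimal sketch of the comparison, using only the point estimates quoted above (the combined chain is my own rough reconstruction, not from the original sources):

```python
# Rough reconstruction of the comparison chain above (point estimates only).
feeders = 0.067          # years of disabling pain averted per $, feed-access (feeders) estimate
campaigns_avg = 1.7      # years of disabling-equivalent suffering averted per $, Duffy/Saulius average
marginal_factor = 1 / 5  # Open Phil: marginal FAW funding ~1/5 as cost-effective as the average

naive_ratio = campaigns_avg / feeders                        # ~25x
marginal_ratio = campaigns_avg * marginal_factor / feeders   # ~5x

# Count only disabling pain for campaigns, using Duffy's Table 25:
# disabling contributes ~0.052 of the ~0.12 suffering-equivalent total.
disabling_share = 0.052 / 0.12                               # ~0.43
adjusted_ratio = marginal_ratio * disabling_share            # ~2.2, i.e. roughly the "around 2.5x" above

print(round(naive_ratio), round(marginal_ratio, 1), round(adjusted_ratio, 1))
```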

On the other hand, I picked the most promising of the interventions, and it's less well-studied and tested than corporate campaigns, so we might expect some optimizer's curse or regression towards being less cost-effective.

We should separate whether the view is well-motivated from whether it's compatible with "ethics being about affecting persons". It's based only on comparisons between counterparts, never between existence and nonexistence. That seems compatible with "ethics being about affecting persons".

We should also separate plausibility from whether it would follow on stricter interpretations of "ethics being about affecting persons". An even stricter interpretation would also tell us to give less weight to or ignore nonidentity differences using essentially the same arguments you make for A+ over Z, so I think your arguments prove too much. For example,

  1. Alice with welfare level 10 and 1 million people with welfare level 1 each
  2. Alice with welfare level 4 and 1 million different people with welfare level 4 each

You said "Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+." The same argument would support 1 over 2.

Then you said "Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?)." Similarly, I could say "Picking 2 is only motivated by an arbitrary decision to compare contingent people, merely because there's a minimum number of contingent people across outcomes (... so what?)"

So, similar arguments support narrow person-affecting views over wide ones.

The fact that non-existence is not involved here (a comparison to A) is just a result of that decision, not of there really existing just two options.

I think ignoring irrelevant alternatives has some independent appeal. Dasgupta's view does that at step 1, but not at step 2. So, it doesn't always ignore them, but it ignores them more than necessitarianism does.

 

I can further motivate Dasgupta's view, or something similar:

  1. There are some "more objective" facts about axiology or what we should do that don't depend on who presently, actually or across all outcomes necessarily exists (or even wide versions of this). What we should do is first constrained by these "more objective" facts. Hence something like step 1. But these facts can leave a lot of options incomparable or undominated/permissible. I think all views that are complete, transitive and independent of irrelevant alternatives (IIA) are kind of implausible (e.g. given the impossibility theorems of Arrhenius). Still, there are some things the most plausible of these views can agree on, including that Z>A+.
    1. Z>A+ follows from Harsanyi's theorem, extensions to variable population cases and other utilitarian theorems, e.g. McCarthy et al., 2020, Theorem 3.5; Thomas, 2022, sections 4.3 and 5; Gustafsson et al., 2023; Blackorby et al., 2002, Theorem 3.
    2. Z>A+ follows from anonymous versions of total utilitarianism, average utilitarianism, prioritarianism, egalitarianism, rank-discounted utilitarianism, maximin/leximin, variable value theories and critical-level utilitarianism. Of anonymous, monotonic (Pareto-respecting), transitive, complete and IIA views, it's only really (partially) ~anti-egalitarian views (e.g. increasing marginal returns to additional welfare, maximax/leximax, geometrism, views with positive lexical thresholds), which sometimes ~prioritize the better off more than ~proportionately, that reject Z>A+, as far as I know. That's nearly a consensus in favour of Z>A+, and the dissidents have more plausible counterparts that support Z>A+. (A toy numerical check follows after this list.)
    3. On the other hand, there's more disagreement on A vs A+, and on A vs Z.
    4. Whether or not this step is person-affecting could depend on what kinds of views we use or the facts we're constrained by, but I'm less worried about that than what I think are plausible (to me) requirements for axiology.
  2. After being constrained by the "more objective" facts in step 1, we should (or are at least allowed to) pick between remaining permissible options in favour of necessary people (or minimizing harm or some other person-affecting principle). Other people wouldn't have reasonable impartial grounds for complaint with our decisions, because we already addressed the "more objective" impartial facts in 1.
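To illustrate 1.2 concretely, here's a quick sketch with made-up welfare numbers (mine, not from the original post) showing several of these anonymous views agreeing on Z>A+:

```python
# Toy mere-addition-style populations (illustrative numbers only).
import math

A_plus = [100] * 1_000_000 + [1] * 1_000_000  # necessary people well off, extras barely above zero
Z = [55] * 2_000_000                           # everyone at a modest positive level

def total(pop): return sum(pop)
def average(pop): return sum(pop) / len(pop)
def prioritarian(pop): return sum(math.sqrt(w) for w in pop)  # concave: priority to the worse off
def maximin(pop): return min(pop)

for name, value in [("total", total), ("average", average),
                    ("prioritarian", prioritarian), ("maximin", maximin)]:
    print(name, value(Z) > value(A_plus))  # True for all four: each ranks Z above A+
```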

If you were going to defend utilitarian necessitarianism, i.e. maximize the total utility of necessary people, you'd need to justify the utilitarian bit. But the most plausible justifications for the utilitarian bit would end up being justifications for Z>A+, unless you restrict them apparently arbitrarily. So then, you ask: am I a necessitarian first, or a utilitarian first? If you're utilitarian first, you end up with something like Dasgupta's view. If you're a necessitarian first, then you end up with utilitarian necessitarianism.

Similarly if you substitute a different wide, anonymous, monotonic, non-anti-egalitarian view for the utilitarian bit.

Then, I think there are ways to interpret Dasgupta's view as compatible with "ethics being about affecting persons", step by step (a toy sketch follows the list):

  1. Step 1 rules out options based on pairwise comparisons within the same populations, or between populations of the same size. Because at this step we never compare existence to nonexistence (we only compare the same people, or the same number of people as in nonidentity cases), it is arguably about affecting persons.
  2. Step 2 is just necessitarianism on the remaining options. Definitely about affecting persons.
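As a toy formalization of these two steps, applied to A, A+ and Z with illustrative welfare numbers (mine, not from the original post):

```python
# Dasgupta-style two-step choice, sketched with one necessary person for simplicity.
A      = {"p1": 100}               # necessary person only
A_plus = {"p1": 100, "p2": 1}      # same person plus an extra, barely-positive person
Z      = {"p1": 55,  "p2": 55}     # both exist, both at a modest level
options = {"A": A, "A+": A_plus, "Z": Z}

def total(pop): return sum(pop.values())

# Step 1: rule out any option beaten (here, by total utilitarianism) in a
# pairwise comparison with another option of the same population size.
remaining = {
    name: pop for name, pop in options.items()
    if not any(len(other) == len(pop) and total(other) > total(pop)
               for other in options.values())
}
# Z beats A+ in the same-number comparison, so A+ is ruled out; A and Z remain.

# Step 2: necessitarianism on what's left: maximize the welfare of people
# who exist in every available option.
necessary = set.intersection(*(set(pop) for pop in options.values()))
best = max(remaining, key=lambda name: sum(remaining[name][p] for p in necessary))
print(sorted(remaining), best)  # ['A', 'Z'] A
```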

These other views also seem compatible with "ethics being about affecting persons":

  1. The view that makes (wide or narrow) necessitarian utilitarian comparisons pairwise while ignoring alternatives, so it gives A<A+, A+<Z, Z<A, a cycle.
  2. Actualism
  3. The procreation asymmetry

Anyway, I feel like we're nitpicking here about what deserves the label "person-affecting" or "being about affecting persons".
