Comment author: Joey 29 August 2017 05:14:30AM 1 point

At the end of the day the number is based on what I think would not impair my productivity. It's nice having it anchored to something concrete, but I am not as sold on the anchoring as I am on the good done by spending less. I do think that if I based it on the animal average (aside from it being way harder to calculate), it would not be enough to live on without major time sacrifices.

Comment author: Carl_Shulman 29 August 2017 06:45:13AM * 2 points

"At the end of the day the number is based on what I think would not impair my productivity."

If this comes first (which makes sense to me), it might be better to frame the description in those terms, offering yourself as a data point or example of the figure being low in your own case, rather than in terms of the global mean (which will often not track your primary criterion)?

Julia Wise's posts on her and Jeff Kaufman's frugal budgets take this form (an existing data point for this approach).

Comment author: Carl_Shulman 28 August 2017 05:54:31PM * 13 points

As with Eric, I'd like to express praise for your altruism and respect for your choice, but raise some cautions about the idea of the global human mean income as a global norm.

I think it makes sense to think about this in terms of market compensation (including wages and nonpecuniary benefits) and the explicit and implicit donation thereof. Depending on people's opportunity costs, that salary could represent a large boost in income relative to their outside prospects, a 10% donation rate, 50%, or 99%+. I'd also consider to what extent the change in donations affects your impact (positively and negatively). The degree of sacrifice and the magnitude (and sign) of the impacts will be enormously different across cases.

"This approximate world average has a very strong intuitive appeal to us, because it's what somebody would get paid if there was complete equality."

Some thoughts on this:

  • If you include nonhuman animals, then mean income is orders of magnitude lower; even adjusting for cost of living, that would imply bare subsistence wages, i.e. absolute poverty and under $200 per year for humans (which would clearly be highly counterproductive)
  • Equal allocations of income would leave no extra for those with expensive needs, e.g. health conditions that require hundreds of thousands of dollars per year to survive, or the differing needs of young vs old
  • If equality meant bringing up productivity and conditions for poor people, then total and per capita output could rise severalfold; if it meant everyone allocating their efforts in an altruistically optimal way, then per capita wealth could explode (or collapse alongside skyrocketing total wealth)
  • Conversely, if equality were achieved through taxation at 100% rates and transfers, then incomes would collapse, ceteris paribus
  • Median income is dramatically lower than the mean, but plausibly has a better claim with respect to 'living high while others die'
Comment author: Carl_Shulman 28 July 2017 05:52:08PM * 18 points

Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1/10,000,000 risk of death. That level of risk is in line with walking a few kilometers, and is a small fraction of the risk associated with a day of skiing: see the Wikipedia entry on micromorts. We all make such tradeoffs every day, taking on small risks of large harm in exchange for high probabilities of smaller benefits with better expected value.
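
A quick back-of-the-envelope check of that tradeoff (this sketch is my addition; the ~$10M reference figure for the value of a statistical life is a commonly cited benchmark, not from the comment):

```python
# Preferring $100 to removing a 1-in-10,000,000 death risk is consistent
# with any value of a statistical life (VSL) below $100 / (1/10,000,000).
benefit_dollars = 100
risk_removed = 1 / 10_000_000

breakeven_vsl = benefit_dollars / risk_removed
print(f"${breakeven_vsl:,.0f}")  # $1,000,000,000

# Commonly cited VSL benchmarks (~$10M) are two orders of magnitude below
# this breakeven, so standard risk-money tradeoffs endorse taking the $100.
```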

So behind the veil of ignorance, for a fixed population size, the 'altruistic repugnant conclusion' is actually just what beneficiaries would want for themselves. 'Repugnance' would involve the donor prioritizing their scope-insensitive response over the interests of the beneficiaries.

An article by Barbara Fried makes a very strong case against this sort of anti-aggregationism based on the ubiquity of such tradeoffs.

Comment author: Carl_Shulman 30 July 2017 01:37:08AM 2 points

Separately, in the linked Holden blog post the comparison seems to be made between 100 large impacts and 10,000 small impacts that are well under 1% as large. I.e. the hypothetical pits larger total and per-beneficiary impacts against a smaller total benefit distributed over more beneficiaries.

That's not a good illustration of anti-aggregationism.

"(2) Provide consistent, full nutrition and health care to 100 people, such that instead of growing up malnourished (leading to lower height, lower weight, lower intelligence, and other symptoms) they spend their lives relatively healthy. (For simplicity, though not accuracy, assume this doesn’t affect their actual lifespan – they still live about 40 years.)"

This sounds like improving health significantly, e.g. by 10% or more, over 14,600 days each, or 1.46 million days in total; 10% of that comes to 146,000 disability-adjusted life-days.

"(3) Prevent one case of relatively mild non-fatal malaria (say, a fever that lasts a few days) for each of 10,000 people, without having a significant impact on the rest of their lives."

Let's say mild non-fatal malaria costs half of a life-day per day, and 'a few days' is 6 days. Then the stakes for these 10,000 people are 30,000 disability-adjusted life-days.

146,000 disability-adjusted life-days is a lot more than 30,000 disability-adjusted life-days.
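
Spelling that arithmetic out (a sketch; the 10% improvement, 6-day duration, and half-a-life-day weighting are the assumptions stated above):

```python
# Option 2: 100 people, ~40-year lives (per the hypothetical),
# with health improved by an assumed ~10% throughout.
days_per_year = 365
option2 = 100 * 40 * days_per_year * 0.10   # 146,000 life-days

# Option 3: 10,000 people, an assumed 6 days of mild malaria each,
# at an assumed half a life-day lost per sick day.
option3 = 10_000 * 6 * 0.5                  # 30,000 life-days

print(option2, option3)   # 146000.0 30000.0
print(option2 / option3)  # ~4.9x in favor of option 2
```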

Comment author: Carl_Shulman 17 July 2017 12:51:38AM 10 points

Ian David Moss has a post on this forum arguing for things along the lines of 'EA for the rich-country fine arts' and other such restricted-scope versions of EA.

My biggest objection to this is that, to stay in line with people's habitual activities, the rationales for the restricted scope have to be very gerrymandered (perhaps too much so to be credible if stated explicitly). Moreover, optimizing within that restricted objective function may pick out things that are overall bad: e.g. the recent media discussion comparing interventions purely in terms of their carbon emissions, without taking anything else into account, suggests that the existence of a member of a society with GDP per capita of $56,000 is bad if it involves carbon emissions with a social cost of $2,000 per person.
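
A minimal sketch of how the restricted metric flips the sign, using the numbers above (treating GDP per capita as a crude stand-in for the omitted benefits is my simplification):

```python
gdp_per_capita = 56_000  # annual output associated with one person
carbon_cost = 2_000      # social cost of that person's emissions

carbon_only_score = -carbon_cost                 # restricted objective: -2,000
crude_full_score = gdp_per_capita - carbon_cost  # crude full accounting: +54,000

# The carbon-only objective rates the person's existence as bad, while even
# a crude full accounting rates it strongly positive.
print(carbon_only_score, crude_full_score)
```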

Comment author: MichaelPlant 03 April 2017 10:22:03AM 2 points

Agree with the above, but wanted to ask: what do you mean by a 'strong presentist' view? I've not heard/seen the term and am unsure what it is contrasted with.

Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?

Comment author: Carl_Shulman 05 April 2017 05:12:07PM 1 point

"Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?"

In my comment, yes.

Comment author: William_MacAskill 31 March 2017 05:13:07PM 1 point

That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:

"We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent." "One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved]." http://www.openphilanthropy.org/blog/worldview-diversification

And to clarify my first comment, by "unlikely to be optimal" I mean: I think it's a contender, but the base rate for "X is an optimal intervention" is really low.
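
A rough check of the quoted numbers (a sketch; the ~35 QALYs per human life saved is my assumption, not Open Philanthropy's):

```python
# "200 hens spared per dollar, each gaining 2 years of 25%-improved life"
hen_life_years_per_dollar = 200 * 2 * 0.25  # = 100, i.e. $0.01 each

# If humans are valued 10-100x as much as hens:
for weight in (10, 100):
    dollars_per_human_qaly = weight * 0.01          # $0.10 to $1.00
    dollars_per_life = dollars_per_human_qaly * 35  # assumed QALYs per life saved
    print(weight, round(dollars_per_life, 2))       # ~$3.50 to ~$35 per life
# Consistent with the "$30-ish per equivalent life saved" figure.
```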

Comment author: Carl_Shulman 31 March 2017 07:23:54PM * 15 points

"if you are only considering the impact on beings alive today...factory farming"

The interventions you are discussing don't help any beings alive at the time, but only affect the conditions (or existence) of future ones. In particular, cage-free campaigns, and campaigns for slower growth genetics and lower crowding among chickens raised for meat, are all about changing the conditions into which future chickens will be born; they don't involve moving any particular chickens from the old systems to the new ones.

I.e. the case for those interventions already involves rejecting a strong presentist view.

"That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:"

Suppose there's an intelligence explosion in 30 years (not wildly unlikely in expert surveys), followed by an expansion of population by 3-12 orders of magnitude over the following 10 years (with AI life of various kinds outnumbering both the humans and the non-human animals alive today, and with vastly more total computation). Then almost all the well-being of the next 50 years lies in that period.
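
A toy illustration of why such a scenario would dominate the 50-year total (all numbers are hypothetical, drawn from the ranges above):

```python
# Normalize the current population's aggregate well-being to 1 unit/year.
years_before, years_after = 30, 20
expansion_factor = 10**3  # low end of the 3-12 orders of magnitude

wellbeing_before = 1 * years_before               # 30 units
wellbeing_after = expansion_factor * years_after  # 20,000 units

share_after = wellbeing_after / (wellbeing_before + wellbeing_after)
print(f"{share_after:.1%}")  # ~99.9% of the 50-year total
```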

Also, in that scenario existing beings could enjoy accelerated subjective speed of thought and greatly enhanced well-being, so most of the QALY-equivalents for long-lived existing beings could lie in that period as well.

Comment author: Cornelius 26 March 2017 02:04:44AM 0 points

Perhaps "systemic change bias" needs to be coined, or something to that effect, to be used in further debates.

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Comment author: Carl_Shulman 26 March 2017 04:07:38AM * 2 points

"Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes."

Those causes get criticized because of how hard they are to quantify. The relatively neglected thing is recognizing both strands and arguing for Goldilocks positions between 'linear, clear, evidence-backed, non-systemic charity' and 'far too radical for most people interested in systemic change.'

Comment author: Ben_Todd 25 March 2017 04:44:40AM 7 points

I read him as saying that the EA community would not have supported e.g. the abolitionist movement had it been around then, precisely because of the difficulties in EV calculations, and I agree with him on that.

Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.

Turning to current issues, ending factory farming is also a cause that likely requires large scale social change through advocacy, and lots of EAs work on that.

Comment author: Carl_Shulman 25 March 2017 08:15:34AM 9 points

"Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times."

And Bentham was ahead of the curve on:

  • Abolition of slavery
  • Calling for legal equality of the sexes
  • The first known argument for legalization of homosexuality in England
  • Animal rights
  • Abolishing the death penalty and corporal punishment (including of children)
  • Separation of church and state
  • Freedom of speech

"precisely because of the difficulties in EV calculations"

The extensive work on factory farming is certainly one counterexample, but the interest in artificial intelligence may be a more powerful one on this point.

In response to comment by Carl_Shulman on Why I left EA
Comment author: Cornelius 06 March 2017 05:11:44AM 1 point

Yes, precisely. Although, there are so many variants of negative utilitarianism that "precisely" is probably a misnomer.

In response to comment by Cornelius on Why I left EA
Comment author: Carl_Shulman 06 March 2017 05:09:54PM * 4 points

OK. Then, since most EAs (and philosophers, and the world at large) think that other things, like overall well-being, matter, it's misleading to suggest that by valuing the saving of overall good lives they are failing to achieve a shared goal of negative utilitarianism (a goal they reject).
