Comment author: Carl_Shulman 28 July 2017 05:52:08PM *  16 points [-]

Thinking from the perspective of a beneficiary, I would rather get $100 than remove a 1/10,000,000 risk of death. That level of risk is in line with traveling a few kilometers by walking, and a small fraction of the risk associated with a day skiing: see the Wikipedia entry on micromorts. We all make such tradeoffs every day, taking on small risks of large harm for high probabilities of smaller benefits that have better expected value.
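(A minimal back-of-the-envelope sketch of that tradeoff; the $10M value-of-statistical-life figure below is an illustrative assumption of mine, roughly in the range used by some government agencies, and is not stated in the comment itself.)

```python
# Illustrative only: compare a $100 benefit against removing a 1-in-10,000,000 death risk.
risk_removed = 1 / 10_000_000            # 0.1 micromort
value_of_statistical_life = 10_000_000   # assumed, in dollars (not from the comment)
cash_benefit = 100                       # dollars

expected_value_of_risk_removal = risk_removed * value_of_statistical_life
print(expected_value_of_risk_removal)                  # 1.0 dollars
print(cash_benefit > expected_value_of_risk_removal)   # True: the cash is the better deal
```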

So behind the veil of ignorance, for a fixed population size, the 'altruistic repugnant conclusion' is actually just what beneficiaries would want for themselves. 'Repugnance' would involve the donor prioritizing their scope-insensitive response over the interests of the beneficiaries.

An article by Barbara Fried makes a very strong case against this sort of anti-aggregationism based on the ubiquity of such tradeoffs.

Comment author: Carl_Shulman 30 July 2017 01:37:08AM 2 points [-]

Separately, in the linked Holden blog post, the comparison seems to be made between 100 large impacts and 10,000 small impacts that are each well under 1% as large. That is, the hypothetical compares larger total and per-beneficiary impacts against a smaller total benefit distributed over more beneficiaries.

That's not a good illustration for anti-aggregationism.

(2) Provide consistent, full nutrition and health care to 100 people, such that instead of growing up malnourished (leading to lower height, lower weight, lower intelligence, and other symptoms) they spend their lives relatively healthy. (For simplicity, though not accuracy, assume this doesn’t affect their actual lifespan – they still live about 40 years.)

This sounds like improving health significantly, e.g. by 10% or more, over 14,600 days (about 40 years) for each of the 100 people, or 1.46 million person-days total. At a 10% improvement, call it 146,000 disability-adjusted life-days.

(3) Prevent one case of relatively mild non-fatal malaria (say, a fever that lasts a few days) for each of 10,000 people, without having a significant impact on the rest of their lives.

Let's say mild non-fatal malaria costs half of a life-day per day, and 'a few days' is 6 days. Then the stakes for these 10,000 people are 30,000 disability-adjusted life-days.

146,000 disability-adjusted life-days is a lot more than 30,000 disability-adjusted life-days.
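(For clarity, a short sketch reproducing the arithmetic above; the 10% health improvement, ~40-year lifespan, 6-day fever, and half-a-life-day-per-sick-day figures are the assumptions already stated in this comparison.)

```python
# Option (2): consistent nutrition and health care for 100 people over ~40-year lives.
people_2 = 100
life_days_each = 40 * 365          # ~14,600 days per person
health_improvement = 0.10          # "10% or more"; use 10% as the conservative figure
benefit_2 = people_2 * life_days_each * health_improvement
# -> 146,000 disability-adjusted life-days

# Option (3): prevent one mild malaria episode for each of 10,000 people.
people_3 = 10_000
episode_length_days = 6            # "a few days"
disability_per_sick_day = 0.5      # half a life-day lost per day of fever
benefit_3 = people_3 * episode_length_days * disability_per_sick_day
# -> 30,000 disability-adjusted life-days

print(benefit_2, benefit_3)        # 146000.0 30000.0
```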

Comment author: Carl_Shulman 17 July 2017 12:51:38AM 10 points [-]

Ian David Moss has a post on this forum arguing for things along the lines of 'EA for the rich-country fine arts' and other such restricted-scope versions of EA.

My biggest objection to this is that, to stay in line with people's habitual activities, the rationales for the restricted scope have to be very gerrymandered (perhaps too much to be credible if stated explicitly), and optimizing within that restricted objective function may pick out things that are overall bad. For example, the recent media discussion comparing interventions purely in terms of their carbon emissions, without taking anything else into account, suggests that the existence of a member of a society with GDP per capita of $56,000 is bad if it includes carbon emissions with a social cost of $2,000 per person.

Comment author: MichaelPlant 03 April 2017 10:22:03AM 2 points [-]

Agree with the above, but wanted to ask: what do you mean by a 'strong presentist' view? I've not heard/seen the term and am unsure what it is contrasted with.

Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?

Comment author: Carl_Shulman 05 April 2017 05:12:07PM 1 point [-]

"Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?"

In my comment, yes.

Comment author: William_MacAskill 31 March 2017 05:13:07PM 1 point [-]

That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:

"We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent." "One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved]." http://www.openphilanthropy.org/blog/worldview-diversification

And to clarify my first comment, "unlikely to be optimal" = I think it's a contender, but the base rate for "X is an optimal intervention" is really low.

Comment author: Carl_Shulman 31 March 2017 07:23:54PM *  14 points [-]

"if you are only considering the impact on beings alive today...factory farming"

The interventions you are discussing don't help any beings alive at the time, but only affect the conditions (or existence) of future ones. In particular, cage-free campaigns, and campaigns for slower-growth genetics and lower crowding among chickens raised for meat, are all about changing the conditions into which future chickens will be born, and don't involve moving any particular chickens from the old systems to the new ones.

I.e. the case for those interventions already involves rejecting a strong presentist view.

"That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:"

Suppose there's an intelligence explosion in 30 years (not wildly unlikely in expert surveys), and an expansion of the population by 3-12 orders of magnitude over the following 10 years (with AI life of various kinds outnumbering both the humans and non-human animals alive today, and with vastly more total computation). Then almost all the well-being of the next 50 years lies in that period.

Also in that scenario existing beings could enjoy accelerated subjective speed of thought and greatly enhanced well-being, so most of the QALY-equivalents for long-lived existing beings could lie there.

Comment author: Cornelius  (EA Profile) 26 March 2017 02:04:44AM 0 points [-]

Perhaps "systemic change bias" needs to be coined, or something to that effect, to be used in further debates.

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Comment author: Carl_Shulman 26 March 2017 04:07:38AM *  2 points [-]

"Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes."

Those causes get criticized because of how hard to quantify they are. The relatively neglected thing is recognizing both strands and arguing for Goldilocks positions between 'linear, clear, evidence-backed, non-systemic charity' and 'far too radical for most of those interested in systemic change.'

Comment author: Ben_Todd 25 March 2017 04:44:40AM 7 points [-]

"I read him as saying that the EA community would not support e.g. the abolitionist movement were it around then, precisely because of the difficulties in EV calculations, and I agree with him on that."

Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.

Turning to current issues, ending factory farming is also a cause that likely requires large scale social change through advocacy, and lots of EAs work on that.

Comment author: Carl_Shulman 25 March 2017 08:15:34AM 9 points [-]

"Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times."

And Bentham was ahead of the curve on:

  • Abolition of slavery
  • Calling for legal equality of the sexes
  • The first known argument for legalization of homosexuality in England
  • Animal rights
  • Abolishing the death penalty and corporal punishment (including of children)
  • Separation of church and state
  • Freedom of speech

"precisely because of the difficulties in EV calculation"

The extensive work on factory farming is certainly one counterexample, but the interest in artificial intelligence may be a more powerful one on this point.

In response to comment by Carl_Shulman on Why I left EA
Comment author: Cornelius  (EA Profile) 06 March 2017 05:11:44AM 1 point [-]

Yes, precisely. Although - there are so many variants of negative utilitarianism that "precisely" is probably a misnomer.

In response to comment by Cornelius  (EA Profile) on Why I left EA
Comment author: Carl_Shulman 06 March 2017 05:09:54PM *  4 points [-]

OK, then: since most EAs (and philosophers, and the world) think that other things, like overall well-being, matter, it's misleading to suggest that by valuing saving overall good lives they are failing to achieve a shared goal of negative utilitarianism (which they reject).

In response to Why I left EA
Comment author: Cornelius  (EA Profile) 03 March 2017 10:56:38PM 0 points [-]

Yeah, as a two-level consequentialist moral anti-realist, I actually am pretty tired of EA's insistence on "how many lives we can save" instead of emphasizing how much "life fulfillment and happiness" you can spread. I always thought this was not only a PR mistake but also a utilitarian mistake. We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant utils-wise than preventing a death with less suffering.

Nonetheless, this is the first I've heard that violence and exploitation are under-valued by EAs. It always seemed to me that EAs generally weep and feel angsty feelings in their gut when they read about the violence and exploitation of their fellow man. But what can we do? Regions of violence are notoriously difficult places for setting up interventions that are tractable. As such, it always seemed to me that we should focus on what we know works, since lifting people out of disease and poverty empowers them to address issues of violence and exploitation themselves. And giving someone their own agency back in this way is, in my view, something worth putting a lot of moral weight on, due to its long-term (albeit hard-to-measure) consequences.

And now I'm going to say something that I feel some people probably won't like.

I consistently feel that a lot of the critique of EA has to do with how others perceive EAs rather than what they are really like, i.e. prejudice. I mentioned above that I generally feel EAs are legit moved to tears (or whatever is a significant feeling for them) regarding issues of violence. But I find that as soon as such a person spends most of his/her time in the public space talking about math and weird utilitarian expected value calculations, this person is suddenly viewed as no longer having a heart, or not "the right heart." The amount of compassion and empathy a person has is not tied to what weird mathematical arguments they push out but to what they do and feel inside (this is how I operationalize "compassion" at any rate: an internal state leading to external consequences. Yes, I know, that's a pretty virtue ethics way to look at it, so sue me).

Anyway, maybe part of this is because I know what it feels like to be the high school nerd who secretly cries when he sees someone getting bullied at break time, but who then talks to people about and develops extensively researched weird ideas like transhumanism as a means of optimizing human flourishing (instead of, say, caring to go to an anti-bullying event that everyone instead thinks I should be going to if I really cared about bullying). It makes sense to me that many people think I have my priorities wrong. But it certainly isn't due to a lack of compassion and concern for my fellow man. It's not too hard to go from this analogy and argue that the same applies to EAs more broadly.

This is perhaps what I absolutely love about the EA community. I've finally found a community of nerds where I can be myself and go in depth with uber-weird (any and all) ideas without being looked at as any less compassionate <3.

When people talk about ending violence and exploitation by doing something that will change the system that keeps these problems in place, I get upset. This "system" is often invisible and amorphous, and a product of ideology rather than, say, cost-effectiveness calculations. Why this gets me upset is that it often means people are willing to forgo giving someone their agency back (when it is clear you can do so by donating to proven disease and poverty alleviation interventions) in order to donate to or support a cause against violence and exploitation because it aligns with their ideology. This essentially seems to me a way of making donation about yourself - trying to make sure you feel content in your own ethical worldview, because specifically not doing anything about that violence and exploitation makes you feel bad - rather than making it about the individuals on the receiving end of the donation.

Yeah, I know, my past virtue ethics predilections are showing again. Even if someone like the person I've described above supports an anti-violence cause that, though difficult to get an effectiveness measure from, is nonetheless doing a lot of good in the world that we can't measure, I still don't like it. I'm caring what people think and arguing that certain self-serving thoughts appear morally problematic independent of the end result they cause. So let me show I'm also strongly opposed to forms of anti-realist virtue ethics. It's not enough to merely be aligned with the right way of thinking/ideology etc. and then have good things come from that. The end result - the actual people on the receiving end - is what actually matters. And this is why I find a "mostly" utilitarian perspective so much more humanizing than that of the many people who get uncomfortable with its extreme conclusions and then reject the whole thing. A more utilitarian perspective forces you to make it about the receiver.

Whatever the case, writing this has made me sad. I'm sad to see you go, you seem highly intelligent and a likely asset to the movement, and as someone who is on the front-line of EA and PR I take this as a personal failure but wish you the best. Does anyone know of any EA-vetted charities working on violence and exploitation prevention? Even ones that are a stretch tractability-wise would be good. I'd like to donate - always makes me feel better.

In response to comment by Cornelius  (EA Profile) on Why I left EA
Comment author: Carl_Shulman 04 March 2017 12:24:06AM *  4 points [-]

"We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant utils-wise than preventing a death with less suffering."

What do you mean by 'we'? Negative utilitarians?

Comment author: Peter_Hurford  (EA Profile) 15 February 2017 09:39:46PM 1 point [-]

Would it make sense to donate to the LJAF for promoting open science?

Comment author: Carl_Shulman 15 February 2017 10:06:19PM 2 points [-]

If you were trying to mimic them, I'd give more to some of their grantees, like METRICS or COS.
