Comment author: Telofy 11 February 2016 09:34:48AM 3 points

I don’t understand how “I want there to be the most minds having the time of their lives” is different from “aggregative consequentialist utilitarian[ism].” Isn’t it the same, just phrased a bit more informally? Or do you mean it’s not the same as “this is what is moral” because it leaves no room for the 5% deontology/virtue ethics? But you seem to be arguing in the other direction. Could you elucidate that for me? Thanks!

Also, you used “moral uncertainty,” so am I right to infer that you’re arguing from a moral realist perspective, or are you referring to uncertainty about your moral preferences?

(To me, acting to optimally satisfy my moral preferences is ipso facto the same as “doing what is moral,” though I would avoid that phrasing lest someone think I wanted to imply that there’s some objective morality.)

Comment author: nino 11 February 2016 11:29:16AM 0 points

“am I right to infer that you’re arguing from a moral realist perspective”

If you're not arguing from a moral realist perspective, wouldn't {move the universe into a state I prefer} and {act morally} necessarily be the same, because you could define your own moral values to match your preferences?

If morality is subjective, the whole distinction between morals and preferences breaks down.

Comment author: ScottA 13 January 2016 02:23:02PM 3 points

"Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that's to be expected, and great that they try! But an activity one knows doesn't do the most good (directly or indirectly) should not be called EA."

I think "do as much good as possible" is not the best framing, since it means (for example) that an EA who eats at a restaurant is a bad EA, since they could have eaten ramen instead and donated the difference to charity. I think it's counterproductive to define this in terms of "well, I guess they failed at EA, but everyone fails at things, so that's fine"; a philosophy that says every human being is a failure and you should feel like a failure every time you fail to be superhuman doesn't seem very friendly (see also my response to Squark above).

My interpretation of EA is "devote a substantial fraction of your resources to doing good, and try to use them as effectively as possible". This interpretation is agnostic about what you do with the rest of your resources.

Consider the decision to become vegetarian. I don't think anybody would think of this as "anti-EA". However, it's not very efficient: if the calculations I've seen are correct, then despite being a major life choice that seriously limits your food options, it's worth no more than a $5–$50 donation to an animal charity. This isn't "the most effective thing" by any stretch of the imagination, so are EAs still allowed to do it? My argument would be yes: it's part of their personal morality that's not necessarily subsumed by EA, and it's not hurting EA, so why not?

I feel the same way about offsetting nonvegetarianism. It may not be the most effective thing, any more than vegetarianism itself is, but it's part of some people's personal morality, and it's not hurting EA. Suppose people do in fact spend $5 offsetting nonvegetarianism. If that $5 wasn't going to an EA charity anyway, it doesn't hurt EA for the person to spend it on offsets instead of, say, a new bike. If you criticize people for giving $5 in offsets, but not for any other non-charitable use of their money, then you're committing the fallacy in this comic: https://xkcd.com/871/

Let me put this another way. Suppose that somebody who feels bad about animal suffering is currently offsetting their meat intake, using money that they would not otherwise give to charity. What would you recommend to that person?

Recommending "stop offsetting and become vegetarian" results in a very significant decrease in their quality of life for the sake of gaining them an extra $5, which they spend on ice cream. Assuming they value not-being-vegetarian more than they value ice cream, this seems strictly worse.

Recommending "stop offsetting but don't become vegetarian" results in them donating $5 less to animal charities, buying an ice cream instead, and feeling a bit guilty. They feel worse (they prefer not feeling guilty to getting an ice cream), and animals suffer more. Again, this seems strictly worse.

The only recommendation that doesn't seem strictly worse is "stop offsetting and donate the $5 to a charity more effective than the animal charity you're giving it to now". But why should we be more concerned with redirecting money they're already spending semi-efficiently to a still more effective charity than with the money they spend on clothes or games? The money they're already spending fairly efficiently should be the last thing we worry about redirecting.

Comment author: nino 13 January 2016 06:53:03PM 1 point

Aren't you kind of not disagreeing at all here?

The way I understand it, Scott claims that using your non-EA money for ethical offsetting is orthogonal to EA, because you wouldn't have used that money for EA anyway, while Claire claims that EAs suggesting ethical offsetting as an EA thing to do is antithetical to EA, because it's not the most effective thing to do with your EA money.

The two claims don't seem incompatible with each other, unless I'm missing something.

Comment author: nino 22 November 2015 08:39:34AM 6 points

I'm so excited that this finally exists. Huge thanks to Anne and Malcolm!

Comment author: nino 25 August 2015 08:32:50PM 3 points

There is now a German translation of effectivealtruism.org at www.effektiver-altruismus.de.