Comment author: Jamie_Harris 04 April 2018 03:25:57PM 3 points

Thanks for the question - I have wondered the same, as I also studied History at undergraduate level.

A slight detour from your question, but maybe of interest. There is currently no community / FB group for people with backgrounds or research interests in History within EA that I know of. There have been quite a few times when discussions around the usefulness of historical studies have come up, and it might be good to share ideas and collaborate.

I don't have time to try to coordinate this at the moment, but it seems like it could be valuable to establish some sort of discussion forum (or organisation?) for using the study of history to advance our understanding of (and strategy towards) cause areas which are often prioritised within EA.

If this is something you (or anyone else seeing this) would be interested in me developing, it's something to bear in mind. Feel free to contact me at jamesaharris [at] hotmail.co.uk if you want to talk about it further.

Examples: I'm thinking primarily within Effective Animal Advocacy (Sentience Institute's study of the British antislavery movement; ACE discontinuing their social studies project; technology adoption being considered as a precedent for clean meat, e.g. by Sentience Institute and Paul Shapiro), but this would also apply to other fields. The systematic approach described in the post linked at [1] seems to align more closely with the approach Holden and others took at OPP than with the studies done in the Effective Animal Advocacy sphere.

[1] http://effective-altruism.com/ea/1lz/why_we_should_be_doing_more_systematic_research/

Comment author: MichaelPlant 06 April 2018 11:39:35PM 1 point

Just emailed you.

Comment author: Peter_Hurford (EA Profile) 02 April 2018 01:42:57PM * 2 points

As last time, I upvoted this comment and downvoted the post to show I agree with a "no job postings on the EA Forum unless they have other content of general interest" norm.

Comment author: MichaelPlant 05 April 2018 12:53:06AM 0 points

I did the same.

Comment author: lukeprog 28 March 2018 03:51:12PM 5 points

(I work for Open Phil.)

Comparing to academia, I'd say that research at Open Phil is (1) consistently targeted at what will help us do as much good as possible rather than what is most intellectually interesting or prestigious, (2) aimed at informing our actions as cheaply as possible, meaning that we cut corners when we don't think doing so will change our bottom-line conclusions, rather than trying to live up to the standards for thoroughness etc. expected in academia, and (3) only aimed at what academia would consider "novel research" when that's what is required to help us do the most good.

Comment author: MichaelPlant 29 March 2018 10:45:02AM 5 points

Wow. Working at Open Phil sounds like a dream compared to academia. You've identified three things I spend huge amounts of time doing as part of my research and find intensely irritating.

Comment author: MichaelPlant 25 March 2018 03:21:33PM 9 points

Just commenting to express my agreement with this.

I've been thinking about this in my own life recently. I realised I was spending a lot of time reflecting on the effectiveness of my current projects, and this was getting in the way of actually doing them. I also came to the conclusion that I should stifle my doubts, get to the end of what I'm doing, and only then reflect on changing direction.

I'm not convinced by the 1:4 rule, but the general idea seems good.

Comment author: Halstead 22 March 2018 02:20:04PM 0 points

On your account, as you say, bringing people into a life of suffering doesn't harm them and preventing someone from dying doesn't benefit them. So, you could also have said "lots of EA activities are devoted to preventing people from dying and preventing lives of suffering, but neither activity benefits anyone, so the definition is wrong". This is a harder sell, and it seems like you're just criticising the definition of EA on the basis of a weird account of the meaning of 'benefitting others'.

I would guess that the vast majority of people think that preventing a future life of suffering and saving lives both benefit somebody. If so, the vast majority of people would be committed to something which denies your criticism of the definition of EA.

Comment author: MichaelPlant 22 March 2018 05:12:13PM 0 points

weird account of the meaning of 'benefitting others'.

The account might be uncommon in ordinary language, but most philosophers accept that creating lives doesn't benefit the created person. I'm at least being consistent, and I don't think that consistency is objectionable. Calling the view weird is unhelpful.

But suppose people typically think it's odd to claim you're benefiting someone by creating them. Then the stated definition of what EA is about will be at least somewhat misleading to them when you explain EA in greater detail. Consistent with other things I've written on this forum, I think EA should take avoiding being misleading very seriously.

I'm not claiming this is a massive point, it just stuck out to me.

Comment author: Halstead 22 March 2018 10:09:38AM 0 points

On your view, is it good for someone to prevent them from dying? Doesn't the same argument apply? If the person doesn't exist (is dead), there's no object to attach any predicates to.

Comment author: MichaelPlant 22 March 2018 11:34:23AM 0 points

No, I also don't think it makes sense to say death is good or bad for people. Hence it's not true to say you benefit someone by keeping them alive. Given most people do want to say there's something good about keeping people alive, it makes sense to adopt an impersonal locution.

I'm not making an argument about what the correct account of ethics is here, I'm just making a point about the correct use of language. Will's definition can't be capturing what he means and is thus misleading, so 'do the most good' is better than 'benefit others'.

Comment author: Sanjay 21 March 2018 12:49:11PM 0 points

Really? "doing as much good as possible" is confusing people? I tend to use that language, and I haven't noticed people getting confused (maybe I haven't been observant enough!)

Comment author: MichaelPlant 21 March 2018 11:39:26PM 0 points

maybe I haven't been observant enough

I've often observed your lack of observance :)

Comment author: Halstead 21 March 2018 10:27:39AM 4 points

Does it harm someone to bring them into existence with a life of intense suffering?

Comment author: MichaelPlant 21 March 2018 11:38:44PM 0 points

No. It might be impersonally bad though.

Comment author: MichaelPlant 20 March 2018 09:50:23PM * 1 point

The thing I find confusing about what Will says is

effective altruism is the project of using evidence and reason to figure out how to benefit others

I draw attention to 'benefit others'. Two of EA's main causes are farm animal welfare and reducing risks of human extinction. The former is about causing happy animals to exist rather than miserable ones, and the latter is about ensuring future humans exist (and trying to improve their welfare). But it doesn't really make sense to say that you can benefit someone by causing them to exist. It's certainly bizarre to say it's better for someone to exist than not to exist, because if the person doesn't exist there's no object to attach any predicates to. There's been a recent move by some philosophers, such as McMahan and Parfit, to say it can be good (without being better) for someone to exist, but that just seems like philosophical sleight of hand.

A great many EA philosophers, including (I think) Singer, MacAskill, Greaves, and Ord, are either totalists or very sympathetic to totalism. Totalism is the view that the best outcome is the one with the largest sum of lifetime well-being of all people - past, present, and future - and it's known as an impersonal view in population ethics. Outcomes are not deemed good, on impersonal views, because they are good for anyone, or because they benefit anyone; they are good because they contain more of the thing which is valuable, namely welfare.

So there's something fishy about saying EA is trying to benefit others when many EA activities, as mentioned, don't benefit anyone, and many EAs think we shouldn't, strictly speaking, be trying to benefit people so much as to realise more impersonal value. It would make more sense to replace 'benefit others as much as possible' with 'do as much good as possible'.

Comment author: MichaelPlant 09 March 2018 12:02:31PM 0 points

This sounds promising. Question: what sort of engagement do you want from the EA world? It's not clear to me what you're after. Are you after headline policy suggestions, detailed proposals, people to discuss ideas with, or something else?
