Comment author: Michelle_Hutchinson 19 July 2017 01:47:04PM 2 points

I broadly agree with you on the importance of inclusivity, but I’m not convinced by your way of cashing it out or the implications you draw from it.

Inclusivity/exclusivity strikes me as importantly being a spectrum, rather than a binary choice. I doubt when you said EA should be about ‘making things better or worse for humans and animals but being neutral on what makes things better or worse’, you meant the extreme end of the inclusivity scale. One thing I assume we wouldn’t want EA to include, for example, is the view that human wellbeing is increased by coming only into contact with people of the same race as yourself.

More plausibly, the reasons you outline in favour of inclusivity point towards a view such as 'EA is about making things better or worse for sentient beings but being neutral between reasonable theories of what makes things better or worse'. Of course, that brings up the question of what it takes to count as a reasonable theory. One thing it could mean is that some substantial number of people hold / have held it. Presumably we would want to circumscribe which people are included here: not all moral theories which have at any time in the past been held by a large group of people are reasonable. At the other end of the spectrum, you could include only views currently held by many people who have made it their life's work to determine the correct moral theory. My guess is that in fact we should take into account which views are and aren't held by both the general public and by philosophers.

I think given this more plausible cashing out of inclusivity, we might want to be both more and less inclusive than you suggest. Here are a few specific ways it might cash out:

  • We should be thinking about and discussing theories which put constraints on the actions you're allowed to take to increase welfare. Most people think there are some limits on what we're allowed to do to some people in order to benefit others. Most philosophers believe there are some deontological principles / agent-centred constraints or prerogatives.

  • We should be considering how prioritarian to be. Many people think we should give priority to those who are worst off, even if we can benefit them less than we could others. Many philosophers think that there’s (some degree of) diminishing moral value to welfare.

  • Perhaps we ought to be inclusive of views to the effect that (at least some) non-human sentient beings have little or no moral value. Many people’s actions imply they believe that a large number of animals have little or no moral value, and that robots never could have moral value. Fewer philosophers seem to hold this view.

  • I’m less convinced about being inclusive towards views which place no value on the future. It seems widely accepted that climate change is very bad, despite the fact that most of the harms will accrue to those in the future. It’s controversial what the discount rate should be, but not that the pure time discount rate should be small. Very few philosophers defend purely person-affecting views.

Comment author: Michael_PJ 12 July 2017 09:43:49AM 1 point

Hm, I'm a little sad about this. I always thought that it was nice to have GWWC presenting a more "conservative" face of EA, which is a lot easier for people to get on board with.

But I guess this is less true with the changes to the pledge - GWWC is more about the pledge than about global poverty.

That does make me think that there might be space for an EA org explicitly focussed on global poverty. Perhaps GiveWell already fills this role adequately.

Comment author: Michelle_Hutchinson 12 July 2017 11:13:17AM 7 points

You might think The Life You Can Save plays this role.

I've generally been surprised over the years by the extent to which the more general framing of 'helping others as much as we can, using evidence and reason' has been easy for people to get on board with. I had initially expected it to be less appealing, due to its abstractness and its potential to lead to weird conclusions. But I'm not actually convinced that's the case anymore. And if it's not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a narrower example.

Comment author: Michelle_Hutchinson 31 March 2017 10:27:37AM 11 points

I'm not totally sure I understand what you mean by IJ. It sounds like what you're getting at is telling someone they can't possibly have the fundamental intuition that they claim they have (either that they don't really hold that intuition, or that they are wrong to do so). Eg: 'I simply feel fundamentally that what matters most is positive conscious experiences' 'That seems like a crazy thing to think!'. But then your example is

"But hold on: you think X, so your view entails Y and that’s ridiculous! You can’t possibly think that.".

That seems like a different structure of argument, more akin to: 'I feel that what matters most is having positive conscious experiences (X)' 'But that implies you think people ought to choose to enter the experience machine (Y), which is a crazy thing to think!' The difference is significant: if the person is coming up with a novel Y, or even one that hasn't previously been made salient in this context, pointing it out actually seems really useful. Since that's the case, I assume you meant IJ to refer to arguments more like the former kind.

I'm strongly in favour of people framing their arguments considerately, politely and charitably. But I do think there might be something in the ball-park of IJ which is useful, and which should be used more by EAs than it is by philosophers. Philosophers have strong incentives to hold views that no other philosophers hold, because to publish you have to be presenting a novel argument, and it's easier to describe and explore a novel theory you feel invested in. It's also more interesting for other philosophers to explore novel theories, so in a sense they don't have an incentive to convince other philosophers to agree with them. All reasoning should be sound, but differing in fundamental intuitions just makes for a greater array of interesting arguments.

Whereas the project of effective altruism is fundamentally different: for those who think there is moral truth to be had, it's absolutely crucial not just that an individual works out what that is, but that everyone converges on it. That means it's important to thoroughly question our own fundamental moral intuitions, and to challenge those of others which we think are wrong. One way to do this is to point out when someone holds an intuition that is shared by hardly anyone else who has thought about this deeply. 'No other serious philosophers hold that view' might be a bonus in academic philosophy, but it's a serious worry in EA. So when people say 'Your intuition that A is ludicrous', they might mean something which is actually useful: they might be highlighting just how unusual your intuition is, and thereby indicating that you should be strongly questioning it.

Comment author: Julia_Wise 07 December 2016 04:22:40PM 10 points

There are lots of cases of correct models failing to take off for lack of good strategy. The doctor who realized that handwashing prevented infection (Ignaz Semmelweis) let his students write up the idea instead of doing it himself, with the result that his colleagues didn't understand the idea properly and didn't take it seriously (even in the face of much lower mortality in his hospital ward). He was laid off, took to writing vitriolic letters to people who hadn't believed him, and died in disgrace in an insane asylum.

Comment author: Michelle_Hutchinson 08 December 2016 03:57:23PM 4 points

That's a horrible story!

Comment author: Michelle_Hutchinson 24 August 2016 10:41:36AM 4 points

Recognizing the scale of animal suffering starts with appreciating the sentience of individual animals — something surprisingly difficult to do given society’s bias against them (this bias is sometimes referred to as speciesism). For me, this appreciation has come from getting to know the three animals in my home: Apollo, a six-year-old labrador/border collie mix from an animal shelter in Texas, and Snow and Dualla, two chickens rescued from a battery cage farm in California.

I wonder if we might do ourselves a disservice by making it sound really controversial / surprising that animals are thoroughly sentient? It makes it seem more ok not to believe it, but I think it can also come across as patronising / strange to interlocutors. I've in the past had people tell me they're 'pleasantly surprised' that I care about animals, and ask when I began caring about animal suffering. (I have no idea how to answer that - I don't remember a time when I didn't.) This feels to me somewhat similar to telling someone who doesn't donate to developing countries that you're surprised they care about extreme poverty, and asking when they started thinking it was bad for people to be dying of malaria. On the one hand, it feels like a reasonable inference from their behaviour. On the other hand, for almost everyone we're likely to be talking to, it will be the case that they do in fact care about the plight of others, and that their reasons for not donating aren't lack of belief in the suffering, or lack of caring about it. I would guess the same is true of most of the people we talk to about animal suffering: they already know and care about animal suffering, and would be offended to have it implied otherwise. This makes the case easier to make, because it means we're already approximately on the same page, and we can start talking immediately about the scale and tractability of the problem.

Comment author: Michelle_Hutchinson 29 July 2016 10:05:12PM 1 point

Thanks Julia, this is an awesome resource! I'm really grateful for these kinds of super specific suggestions.

Comment author: weeatquince 25 July 2016 12:00:41PM 2 points

It would be useful to know what the plan is for the GWWC Trust if GWWC is not producing its own recommendations. Will money going into the Trust just be donated to GiveWell's top charities, whatever they may be? And will it be donated evenly to those charities, or following GiveWell's current advice about proportions? Thanks

Comment author: Michelle_Hutchinson 26 July 2016 06:01:09PM 2 points

Hey Sam, for people who choose to let us decide where the money goes, the next payout (Oct) will be allocated as before (1/4 each to SCI, AMF, DWI, PHC), and the one after that (Jan) will follow the allocation GW recommends in its Dec update. I expect we will continue allowing donations to the charities the Trust has given to in the past (eg PHC, IPA), but that the default charities suggested for donations will be the ones GW lists as top charities.

Comment author: richardcanal 25 April 2016 01:17:13AM -1 points

Hi Jonathan, I agree that if your goal is to "do the most good", the majority of EAs (myself included) believe that reducing extreme poverty is the most tractable/efficient way to do that at the current moment.

I think the main issue is that when people are learning about EA, if they find major discrepancies between GWWC's currently stated mission (helping reduce poverty) and materials like the blog post above (which describes the mission as doing the most good), it becomes difficult to figure out what's going on.

One recommendation I have is that if a major rebranding effort is happening within GWWC, an email to Pledge members/chapter leads etc., a blog post on GWWC's blog, and updates to the various mission statements would be a good start. I was extremely surprised reading the post, since I follow many effective altruism forums/websites/materials and have never once seen GWWC even hint at being cause neutral, with the exception of the Pledge.

I find a good analogy for this situation is climate scientists: they are "cause neutral" when it comes to global warming; it just happens that all the science/facts point towards global warming being a real, man-made thing that should be addressed.

I'm very happy about the new direction, with GWWC being primarily focused on making the world a better place via donations to effective charities.

Richard

Comment author: Michelle_Hutchinson 27 April 2016 03:51:51PM 1 point

Hi Richard, Thanks for your comments. Sorry to have been unclear - there isn't a major rebranding planned. The changed vision should be thought of more as clarifying what lies at the heart of GWWC and what makes it unique. In large part, the reason for doing it is to further focus the team, rather than to change anything for others. It doesn't mean that we plan to move away from working mostly on extreme poverty (for the reasons outlined in my more recent blog post). Ending extreme poverty is still a major focus for us (as it is for many EAs), but we wanted a vision that articulated why we work on that, and encapsulated the other things we care about. I am planning to write a blog post about our vision on the GWWC blog in May; I'm glad that seems like a helpful thing to do. Michelle

Comment author: davidc 25 April 2016 11:45:33AM 0 points

Yep, we're just using different definitions. I find your definition a bit confusing, but I admit that it seems fairly common in EA.

For what it's worth, I think some of the confusion might be caused by my definition creeping into your writing sometimes. For example, in your next post (http://effective-altruism.com/ea/wp/why_poverty/):

"Given that Giving What We Can is cause neutral, why do we recommend exclusively poverty eradication charities, and focus on our website and materials on poverty? There are three main reasons ..."

If we're really using your definition, then that's a pretty silly question. It's like saying "If David is really cause neutral, then why is he focused on animals?" or "If Jeff is cause neutral, why does he donate to AMF?" Using your definition, there's (as we've both pointed out) absolutely no tension between focusing on a cause and being cause neutral.

Comment author: Michelle_Hutchinson 25 April 2016 12:49:48PM 1 point

I think even if there's no tension, there could still be an open question about how you think your actions generate value. For example, cause-neutral-Jeff could be donating to AMF because he thinks it's the charity with the highest expected value per $, or because he's risk averse and thinks it's the best charity if you're going for a trade-off between expected value and low variance in value per $, or because he wants to encourage other charities to be as transparent and impact-focused as AMF. So although it's not surprising that cause-neutral-Jeff focuses his donations on just one charity, and that it's AMF, it's still interesting to hear the answer to 'why does he donate to AMF?'.

But I agree, it's difficult not to slide between definitions on a concept like cause neutrality, and I'm sorry I'm not as clear as I'd like to be.

Comment author: richardcanal 24 April 2016 06:16:58AM 0 points

Hi Michelle,

It is hard to comprehend why this post was made, when it is in direct disagreement with the history and current mission statement of GivingWhatWeCan. Here are the best descriptions of GivingWhatWeCan's mission that I could find.

"What do you do, and hope to achieve? Our goal is to play our part in eliminating poverty in the developing world."

"OUR HISTORY Giving What We Can is the brainchild of Toby Ord, a philosopher at Balliol College, Oxford. Inspired by the ideas of ethicists Peter Singer and Thomas Pogge, Toby decided in 2009 to commit a large proportion of his income to charities that effectively alleviate poverty in the developing world."

I started a GivingWhatWeCan chapter in my home town and have been very active in the community, reading books/blogs/courses etc., and it's still incredibly difficult to figure out what the various organisations are, what their stated goals are, and how they differ. A recent problem I've encountered is why there are both GivingWhatWeCan chapters and LEAN/local EA chapters. Our current meetup.com group is called GivingWhatWeCan, but our website is eacalgary.org.

This makes it extremely difficult for new members who are learning about the movement to navigate the EA landscape, and when the communications coming directly from the organisation conflict with its stated mission, it becomes even more difficult to piece everything together. Perhaps this is the start of a rebranding effort that I wasn't aware of.

Looking forward to hearing back from you, appreciate all the good work the organisation does!

Richard

Comment author: Michelle_Hutchinson 24 April 2016 09:39:27PM 1 point

Hi Richard, I'm sorry it's rather confusing at the moment, and thank you so much for all the work you do with the GWWC/EA Calgary chapter. I'm hoping my more recent post on the Forum might help bring some clarity. I think part of the reason it's particularly confusing at the moment is that our website has been undergoing some changes, so the page with our mission/vision/values is currently not up. We've also, as Jon mentioned, been clarifying what GWWC is fundamentally about, including whether we are necessarily an organisation which focuses primarily on poverty or only contingently so (it's the latter).

These are our vision/mission/values:

Our Vision

A world in which giving 10% of our income to the most effective organisations is the norm

Our Mission

Inspire donations to the world’s most effective charities

Our Values

We are a welcoming community, sharing our passion and energy to improve the lives of others.

We care. We have a deep commitment to helping others, and we are dedicated to helping other members of our community give more and give better.

We take action based on evidence. We apply rigorous academic processes to develop trustworthy research to guide our actions. We are open-minded towards new approaches to altruism that may show greater effectiveness. We are honest when it comes to what we don't know or mistakes we have made.

We are optimistic. We are ambitious in terms of the change we believe we can create. We apply energy and enthusiasm to support and build our community.

All the best, Michelle
