MichaelPlant comments on The marketing gap and a plea for moral inclusivity - Effective Altruism Forum


Comment author: Julia_Wise 10 July 2017 06:24:58PM *  2 points [-]

For as long as it's the case that most of our members [edited to clarify: GWWC members, not members of the EA community in general] are primarily concerned with global health and development, content on our blog and social media is likely to reflect that to some degree.

But we also aim to be straightforward about our cause-neutrality as a project. For example, our top recommendation for donors is the EA Funds, which are designed to get people thinking about how they want to allocate between different causes rather than defaulting to one.

Comment author: MichaelPlant 10 July 2017 06:32:44PM *  1 point [-]

Thanks for the update. That's helpful.

However, it does seem a bit hard to reconcile GWWC's and 80k's positions on this topic. GWWC (i.e. you) seems to be saying "most EAs care about poverty, so that's what we'll emphasise" whereas 80k (i.e. Ben Todd above) seems to be saying "most EAs do (/should?) care about X-risk, so that's what we'll emphasise".

These conclusions seem to be in substantial tension, which may itself confuse new and old EAs.

Comment author: Julia_Wise 13 July 2017 03:00:39PM 0 points [-]

I edited to clarify that I meant members of GWWC, not EAs in general.

Comment author: casebash 11 July 2017 12:25:15AM 0 points [-]

80k is now separate from CEA, or is in the process of being separated from CEA. They are allowed to come to different conclusions.

Comment author: Ben_Todd 11 July 2017 05:56:39AM *  4 points [-]

We're fiscally sponsored by CEA (so legally within the same entity) and have the same board of trustees, but we operate like a separate organisation.

Our career guide also doesn't mention EA until the final article, so we're not claiming that our views represent those of the EA movement. GWWC also doesn't claim on the website to represent the EA movement.

The place where moral exclusivity would be most problematic is EA.org. But it mentions a range of causes without prioritising them, and links to this tool, which also does exactly what the original post recommends (and has been there for a year). https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/#which-cause https://www.effectivealtruism.org/cause-prioritization-tool/

Comment author: RandomEA 11 July 2017 08:29:54PM 0 points [-]

I think it's actually mentioned briefly at the end of Part 5: https://80000hours.org/career-guide/world-problems/

(In fact, the mention is so brief that you could easily remove it if your goal is to wait until the end to mention effective altruism.)

Comment author: Ben_Todd 12 July 2017 04:44:03AM 0 points [-]

That's right - we mention it as a cause to work on. That slipped my mind since that article was added only recently. Though I think it's still true we don't give the impression of representing the EA movement.

Comment author: DavidNash 10 July 2017 09:04:47PM 0 points [-]

Might it be that 80k recommends X-risk because it's neglected (even within EA), and that if more than 50% of EAs had X-risk as their highest priority it would no longer be as neglected?

Comment author: Evan_Gaensbauer 10 July 2017 09:47:05PM 1 point [-]

I don't think that'd be the case. From inside the perspective of someone already prioritizing x-risk reduction, that cause can appear at least thousands of times more important than literally anything else. This is based on an idea formulated by philosopher Nick Bostrom: astronomical stakes (it's Niel Bowerman in the linked video, not Nick Bostrom). The ratio of resources x-risk reducers think should be dedicated to x-risk relative to other causes is arbitrarily high. Lots of people think the argument is missing important details or ignoring major questions, but from their own inside view, x-risk reducers probably won't be convinced by that. More effective altruists could try playing the double crux game to find the source of disagreement about typical arguments for far-future causes. Otherwise, x-risk reducers would probably maintain that, in the ideal, as many resources as possible ought to be dedicated to x-risk reduction, though in practice they may endorse other viewpoints receiving support as well.

Comment author: Linch 13 July 2017 05:56:57AM 0 points [-]

This seems like a perfectly reasonable comment to me. Not sure why it was heavily downvoted.

Comment author: Evan_Gaensbauer 14 July 2017 08:09:15AM 0 points [-]

Talking about people in the abstract, or in a tone that treats them as some kind of "other", is to generalize and stereotype. Or maybe generalizing and stereotyping people others them, and makes them too abstract to empathize with. Whatever the direction of causality, there are good reasons people might take my comment poorly. There are lots of skirmishes online in effective altruism between causes, and I expect most of us don't like all being lumped together in a big bundle, because under those circumstances at least a bunch of people in your ingroup will feel strawmanned. That's what my comment reads like. That wasn't my intention.

I'm just trying to be frank. On the Effective Altruism Forum, I try to follow Grice's maxims, because I think writing in that style optimizes the fidelity of our words to the sort of epistemic communication standards the EA community aspires to, especially as inspired by the rationality community. I could do better on the maxims of quantity and manner/clarity sometimes, but I think I do a decent job here. I know this isn't the only thing people value in discourse. However, there are lots of competing standards for what the most appropriate discourse norms are, and nobody has shown others that their preferred norms would maximize not just the satisfaction of their own preferences, but the total or average satisfaction of what everyone values in discourse. That seems the utilitarian thing to do.

The effects of ingroup favouritism in terms of competing cause selections in the community don't seem healthy to the EA ecosystem. If we want to get very specific, here's how finely the EA community can be sliced up by cause-selection-as-group-identity.

  • vegan, vegetarian, reducetarian, omnivore/carnist
  • animal welfarist, animal liberationist, anti-speciesist, speciesist
  • AI safety, x-risk reducer (in general), s-risk reducer
  • classical utilitarian, negative utilitarian, hedonic utilitarian, preference utilitarian, virtue ethicist, deontologist, moral intuitionist/none-of-the-above
  • global poverty EAs; climate change EAs?; social justice EAs...?

The list could go on forever. Everyone feels like they're representing not only their own preferences in discourse, but sometimes even those of future generations, all life on Earth, tortured animals, or fellow humans living in agony. Unless as a community we make a conscientious effort to reach some shared discourse norms that are mutually satisfactory to multiple parties or individual effective altruists, however they see themselves, communication failure modes will keep happening. There's strawmanning and steelmanning, and then there are representations of concepts in EA which fall in between.

I think if we as a community expect everyone to impeccably steelman everyone all the time, we're being unrealistic. Rapid growth of the EA movement is what organizations from various causes seem to be rooting for. That means lots of newcomers who aren't going to read all the LessWrong Sequences or Doing Good Better before they start asking questions and contributing to the conversation. When they get downvoted for not knowing the unwritten codex of evolved EA discourse norms, they're going to exit fast. I'm not going anywhere, but if we aren't willing to be more charitable to people we at first disagree with than they are to us, this movement won't grow. People might be belligerent, or alarmed, at the challenges EA presents to their moral worldview, but they're still curious. Spurning doesn't lead to learning.

All of the above refers only to specialized discourse norms within effective altruism. This is on top of the complications of effective altruists' private lives, all the usual identity politics, and otherwise the common decency and common sense we expect of posters on the forum. All of that can already be difficult for diverse groups of people as is. But to go around assuming the illusion of transparency makes things fine and dandy with regard to how a cause is represented, without openly discussing it, is to expect too much of each and every effective altruist.

Also, as of this comment, my parent comment above has net positive 1 upvote, so it's all good.

Comment author: MichaelPlant 10 July 2017 09:43:16PM 0 points [-]

Sure. But in that case GWWC should presumably take the same sort of line. I'm unsure how/why the two orgs would reach different conclusions.

Comment author: HowieL 14 July 2017 06:24:04PM 1 point [-]

I obviously can't speak for GWWC but I can imagine some reasons it could reach different conclusions. For example, GWWC is a membership organization and might see itself as, in part, representing its members or having a duty to be responsive to their views. At times, listeners might understand statements by GWWC as reflecting the views of its membership.

80k's mission seems to be research/advising so its users might have more of an expectation that statements by 80k reflect the current views of its staff.