In this post, I make three points. First, I note there seems to be a gap between what EA markets itself as being about (effective poverty reduction) and what many EAs really believe is important (poverty isn’t the top priority), and that this marketing gap is potentially problematic. Second, I propose a two-part solution. One part is that EA outreach-y orgs should be upfront about what they think the most important problems are. The other is that EA outreach-y orgs, and, in fact, the EA movement as a whole, should embrace ‘moral inclusivity’: we should state what the most important problems are for a range of moral outlooks but not endorse a particular moral outlook. I anticipate some will think we should adopt ‘moral exclusivity’ instead, and just endorse or advocate the one view. My third point is a plea for moral inclusivity. I suggest even those who strongly consider one moral position to be true should still be in favour of EA being morally inclusive, as a morally inclusive movement is likely to generate better outcomes by the standards of everyone’s individual moral theory. Hence moral inclusivity is the dominant option.

Part 1

One thing that's been bothering me for a while is the gap between how EA tends to market itself and what lots of EAs really believe. I think the existence of this gap (or even the perception of it) is probably bad and also probably avoidable. I don't think I've seen this discussed elsewhere, so I thought I'd bring it up here.

To explain, EA often markets itself as being about helping those in poverty (e.g. see GWWC's website) and exhorts the general public to give their money to effective charities in that area. When people learn a bit more about EA, they discover that only some EAs believe poverty is the most important problem. They realise many EAs think we should really be focusing on the far future, and AI safety in particular, or on helping animals, or on finding ways to improve the lives of presently existing humans that aren’t to do with alleviating poverty, and that’s where those EAs put their money and time.

There seem to be two possible explanations for the gap between EA marketing and EA reality. The first is historical. Many EAs were inspired by Singer's Famine, Affluence and Morality, which centres on saving a drowning child and preventing those in poverty dying from hunger. Poverty was the original focus. Now, on further reflection, many EAs have decided the far future is the most important area but, given its anti-poverty genesis, the marketing/rhetoric is still about poverty.

The second is that EAs believe, rightly or wrongly, that talking about poverty is a more effective marketing strategy than talking about comparatively weird stuff like AI and animal suffering. People understand poverty, and it’s easier to start with it before moving on to the other things.

I think the gap is problematic. If EA wants to be effective over the long run, one thing that's important is that people see it as a movement of smart people with high integrity. I think it's damaging to EA if there's the perception, even if this perception is false, that effective altruists are the kind of people who say you should do one thing (give money to anti-poverty charities) but themselves believe in and do something else (e.g. think AI safety is the most important cause).

I think this is bad for the outside perception of EA: we don't want to give critics of the movement any more ammo than necessary. I think it potentially disrupts within-community cohesion too. Suppose person X joins EA because they were sold on the anti-poverty line by outreach officer Y. X then becomes heavily involved in the community and subsequently discovers Y really believes something different from what X was originally sold on. In this case, the new EA X would be likely to distrust outreach officer Y, and maybe others in the community too.

Part 2

It seems clear to me this gap should go. But what should we do instead? I suggest a solution in two parts.

First, EA marketing should tally with the sort of things EAs believe are important. If we really think animals, AI, etc. are what matters, we should lead with those, rather than suggesting EA is about poverty and then mentioning other cause areas.

This doesn’t quite settle the matter. Should the marketing represent what current EAs believe is important? This is problematically circular: it’s not clear how to identify who counts as an ‘EA’ except by what they believe. In light of that, maybe the marketing should just represent what the heads or members of EA organisations believe is important. This is also problematic: what if EA orgs’ beliefs substantially differ from those of the rest of the EA community (however that’s construed)?

Here, we seem to face a choice between what I’ll call ‘moral inclusivism’, stating what the most important problems are for a range of moral outlooks but not endorsing a particular moral outlook, and ‘moral exclusivism’, picking a single moral view and endorsing that.

With this choice in mind, I suggest inclusivism. I’ll explain how I think this works in this section and defend it in the final one.

Roughly, I think the EA pitch should be "EA is about doing more good, whatever your views". If that seems too concessive, it could be welfarist – "we care about making things better or worse for humans and animals" – but neutral on what makes things better or worse – "we don't all think happiness is the only thing that matters" – and neutral on population ethics – "EAs disagree about how much the future matters. Some focus on helping current people, others are worried about the survival of humanity, but we work together wherever we can. Personally, I think cause X is the most important because I believe theory Y...".

I don't think all EA organisations need to be inclusive. What the Future of Humanity Institute works on is clearly stated in its name, and it would be weird if it started claiming the future of humanity was unimportant. I don't think individual EAs need to pretend to endorse multiple views either. But I think the central, outreach-y ones should adopt inclusivism.

The advantage of this sort of approach is it allows EA to be entirely straightforward about what effective altruists stand for and avoids even the perception of saying one thing and doing another. Caesar’s wife should be above suspicion, and all that.

An immediate objection is that this sort of approach - front-loading all the 'weirdness' of EA views when we do outreach - would be off-putting. I think this worry, in so far as it actually exists, is overblown and also avoidable. Here's how I think the EA pitch goes:

-Talk about the drowning child story and/or the comparative wealth of those in the developed world.

-Talk about ineffective and effective charities.

-Say that many people became EAs because they were persuaded of the idea we should help others when it's only a trivial cost to ourselves.

-Point out people understand this in different ways because of their philosophical beliefs about what matters: some focus on helping humans alive today, others on animals, others on trying to make sure humanity doesn't accidentally wipe itself out, etc.

-For those worried about how to ‘sell’ AI in particular, I recently heard Peter Singer give a talk in which he said something like (I can't remember it exactly): "some people are very worried about the risks from artificial intelligence. As Nick Bostrom, a philosopher at the University of Oxford, pointed out to me, it's probably not a very good idea, from an evolutionary point of view, to build something smarter than ourselves." At which point the audience chuckled. I thought it was a nice, very disarming way to make the point.

In conclusion, I think the apparent gap between rhetoric and reality is problematic and also avoidable. Organisations like GWWC should make it clearer that EAs support causes other than global poverty.

Part 3

One might think EA organisations, faced with the inclusivist-exclusivist dilemma, should opt for the latter. You might think most EAs, at least within certain organisations, do agree on a single moral theory, so endorsing moral inclusivity would be dishonest. Instead, you could conclude we should be moral exclusivists, fly the flag for our favourite moral theory, lead with it and not try to accommodate everyone.

From my outsider’s perspective, I think this is the sort of direction 80,000 Hours has started to move in more recently. They are now much more open and straightforward about saying the far future in general, and AI safety in particular, is what really matters. Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don’t worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do or pure time discounting).

An obvious worry about being a moral exclusivist and picking one moral theory is that you might be wrong; if you’re endorsing the wrong view, that’s really going to set back your ability to do good. But given you have to make some choices, let’s put this worry aside. I’m now going to make a plea for making/keeping EA morally inclusive, whatever your preferred moral views are. I offer three reasons.

1.

Inclusivity reduces group think. If EA is known as a movement where people believe view X, people who don’t like view X will exit the movement (typically without saying anything). This deprives those who remain of really useful criticism that would help identify intellectual blind spots and force the remainers to keep improving their thinking. This also creates a false sense of confidence in the remainers because all their peers now agree with them.

Another part of this is that, if you want people to seek the truth, you shouldn’t give them incentives to be yes-humans. There are lots of people who like EA, want to work in EA orgs and want to be liked by other (influential) EAs. If people think they will be rewarded (e.g. with jobs) for adopting the ‘right’ views and signalling them to others, they will probably slide towards what they think people want to hear, rather than what they think is correct. Responding to incentives is a natural human thing to do, and I very much doubt EAs are immune to it. Similar to what I said in part 1, even a perception that there are ‘right’ answers can be damaging to truth seeking. Like a good university seminar leader, EA should create an environment where people feel inspired to seek the truth, rather than just agree with the received wisdom, as honest truth seeking and disagreement seem most likely to reveal the truth.

2.

Inclusivity increases movement size. If we only appeal to a section of the 'moral market' then there won't be so many people in the EA world. Even if people have different views, they can still work together, engage in moral trade, personally support each other, share ideas, etc.

I think organisations working on particular, object-level problems need to be value-aligned to aid co-ordination (if I want to stop global warming and you don't care, you shouldn't join my global warming org), but this doesn't seem relevant at the level of a community. Where people meet at EA hubs, EA conferences, etc. they’re not working together anyway. Hence this isn’t an objection to EA outreach-y orgs being morally inclusive.

3.

Inclusivity minimises in-fighting. If people perceive there’s only one accepted and acceptable view, then they will spend their time fighting the battle of hearts and minds to ensure that their view wins, and they will do this rather than working on solving real-world problems themselves. Or they'll split, stop talking to each other and fail to co-ordinate. Witness, for instance, the endless schisms within churches over doctrinal matters, like gay marriage, and the seemingly limited interest they have in helping other people. If people instead believe there's a broad range of views within a community, that this is okay, and that there’s no point fighting for ideological supremacy, they can instead engage in dialogue, get along and help each other. More generally, I think I’d rather be in a community where people thought different things and this was accepted, rather than one where there were no disagreements and none allowed.

On the basis of these three reasons, I don’t think even those who believe they’ve found the moral truth should want EA as a whole to be morally exclusive. Moral inclusivity seems to increase the ability of effective altruists to collectively seek the truth and work together, which looks like it leads to more good being done from the perspective of each moral theory.

What follows from parts 1 and 2 is that, for instance, GWWC should close the marketing gap and be more upfront about what EAs really believe. People should not feel surprised about what EAs value when they get more involved in the movement.

What follows from part 3 is that, for instance, 80,000 Hours should be much more morally inclusive than they presently are. Instead of “these are the most important things”, it should say “these are the most important things if you believe A, but not everyone believes A. If you believe B, you should think these are the important things [new list pops up]. As an organisation, we don’t take a stand on A or B, but here are some arguments you might find relevant to help you decide”.

Here end my pleas for moral inclusivity.

There may be arguments for keeping the marketing gap and adopting moral exclusivism I’ve not considered and I’d welcome discussion. 

Edit (10/07/2017): Ben Todd points out in the comment below that 1) 80k have stated their preferred view since 2014 in order to be transparent and that 2) they provide a decision tool for those who disagree with 80k's preferred view. I'm pleased to learn the former and admit my mistake. On the latter, Ben and I seem to disagree whether adding the decision tool makes 80k morally inclusive or not (I don't think it does).

Comments

It's worth noting that the 2017 EA Survey (data collected but not yet published), the 2015 EA Survey, and the 2014 EA Survey all have global poverty as the most popular cause (by plurality) among EAs in these samples, and by a good sized margin. So it may not be the case that the EA movement is misrepresenting itself as a movement by focusing on global poverty (even if movement leaders may think differently than the movement as a whole).

Still, it is indeed the case that the causes other than global poverty, taken together, are more popular than global poverty, which would potentially argue for a more diverse presentation of what EA is about, as you suggest.

I was aware of this and I think this is interesting. This was the sort of thing I had in mind when considering that EA orgs/leaders may have a different concept of what matters from non-institutional EAs.

I think this is also nicely in tension with Ben Todd's comments above. There's something strange about 80k leaning on the far future as mattering and most EAs wanting to help living people.

And, as Michael says, even the perception that EA is misrepresenting itself could potentially be harmful.

Hi Michael,

I agree the issue of people presenting EA as about global poverty when they actually support other causes is a big problem.

80k stopped doing this in 2014 (not a couple of months ago like you mention), with this post: https://80000hours.org/2014/01/which-cause-is-most-effective-300/ The page you link to listed other causes at least as early as 2015: https://web.archive.org/web/20150911083217/https://80000hours.org/articles/cause-selection/

My understanding is that the GWWC website is in the process of being updated, and the recommendations on where to give are now via the EA Funds, which include 4 cause areas.

These issues take a long time to fix though. First, it takes a long time to rewrite all your materials. Second, it takes people at least several years to catch up with your views. So, we're going to be stuck with this problem for a while.

In terms of how 80,000 Hours handles it:

Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don’t worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do or pure time discounting).

This is a huge topic, but I disagree. Here are some quick reasons.

First, you should value the far future even if you only put some credence on theories like total utilitarianism.

e.g. Someone who had 50% credence in the person affecting view and 50% credence in total utilitarianism, should still place significant value on the far future.

This is a better approximation of our approach - we're not confident in total utilitarianism, but we put some weight on it due to moral uncertainty.

Second, even if you don't put any value on the far future, it wouldn't completely change our list.

First, the causes are assessed on scale, neglectedness and solvability. Only scale is affected by these value judgements.

Second, scale is (to simplify) assessed on three factors: GDP, QALYs and % xrisk reduction, as here: https://80000hours.org/articles/problem-framework/#how-to-assess-it

Even if you ignore the xrisk reduction column (which I think would be unreasonable due to moral uncertainty), you often find the rankings don't change that much.

E.g. Pandemic risk gets a scale score of 15 because it might pose an xrisk, but if you ignored that, I think the expected annual death toll from pandemics could easily be 1 million per year right now, so it would still get a score of 12. If you think engineered pandemics are likely, you could argue for a higher figure. So, this would move pandemics from being a little more promising than regular global health, to about the same, but it wouldn't dramatically shift the rankings.

I think AI could be similar. It seems like there's a 10%+ chance that AI is developed within the lifetimes of the present generation. Conditional on that, if there's a 10% chance of a disaster, then the expected death toll is 75 million, or 1-2 million per year, which would also give it a score of 12 rather than 15. But it would remain one of the top ranked causes.
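As a rough illustration of the arithmetic in the last two paragraphs, here's a minimal sketch in Python. All the inputs are the illustrative guesses above, not established estimates, and the 50-year horizon is just an assumed span for "the lifetimes of the present generation".

```python
# Back-of-envelope for the AI figures above. All inputs are illustrative
# guesses, not established estimates.
p_ai_this_generation = 0.10   # chance AI is developed within current lifetimes
p_disaster_given_ai = 0.10    # chance of a (non-existential) disaster if it is
world_population = 7.5e9
horizon_years = 50            # assumed span of "the present generation"

expected_deaths = p_ai_this_generation * p_disaster_given_ai * world_population
print(f"expected deaths: {expected_deaths:,.0f}")           # 75,000,000
print(f"per year: {expected_deaths / horizon_years:,.0f}")  # 1,500,000
```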

I think the choice of promoting EA and global priorities research are even more robust to different value judgements.

We actively point out that the list depends on value judgements, and we provide this quiz to highlight some of the main ones: https://80000hours.org/problem-quiz/

Ben's right that we're in the process of updating the GWWC website to better reflect our cause-neutrality.

Hm, I'm a little sad about this. I always thought that it was nice to have GWWC presenting a more "conservative" face of EA, which is a lot easier for people to get on board with.

But I guess this is less true with the changes to the pledge - GWWC is more about the pledge than about global poverty.

That does make me think that there might be space for an EA org that explicitly focussed on global poverty. Perhaps GiveWell already fills this role adequately.

You might think The Life You Can Save plays this role.

I've generally been surprised over the years by the extent to which the more general 'helping others as much as we can, using evidence and reason' has been easy for people to get on board with. I had initially expected that to be less appealing, due to its abstractness/potentially leading to weird conclusions. But I'm not actually convinced that's the case anymore. And if it's not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a more narrow example.

I hadn't thought of TLYCS as an/the anti-poverty org. I guess I didn't think about it as they're not so present in my part of the EA blogosphere. Maybe it's less of a problem if there are at least charities/orgs to represent different world views (although this would require quite a lot of duplication of work, so it's less than ideal).

And what are your/GWWC's thoughts on moral inclusivity?

For as long as it's the case that most of our members [edited to clarify: GWWC members, not members of the EA community in general] are primarily concerned with global health and development, content on our blog and social media is likely to reflect that to some degree.

But we also aim to be straightforward about our cause-neutrality as a project. For example, our top recommendation for donors is the EA Funds, which are designed to get people thinking about how they want to allocate between different causes rather than defaulting to one.

Thanks for the update. That's helpful.

However, it does seem a bit hard to reconcile GWWC's and 80k's positions on this topic. GWWC (i.e. you) seem to be saying "most EAs care about poverty, so that's what we'll emphasise" whereas 80k (i.e. Ben Todd above) seems to be saying "most EAs do (/should?) care about X-risk, so that's what we'll emphasise".

These conclusions seem to be in substantial tension, which itself may confuse new and old EAs.

I edited to clarify that I meant members of GWWC, not EAs in general.

80k is now separate from CEA, or is in the process of being separated from CEA. They are allowed to come to different conclusions.

We're fiscally sponsored by CEA (so legally within the same entity) and have the same board of trustees, but we operate like a separate organisation.

Our career guide also doesn't mention EA until the final article, so we're not claiming that our views represent those of the EA movement. GWWC also doesn't claim on the website to represent the EA movement.

The place where moral exclusivity would be most problematic is EA.org. But it mentions a range of causes without prioritising them, and links to this tool, which also does exactly what the original post recommends (and has been there for a year). https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/#which-cause https://www.effectivealtruism.org/cause-prioritization-tool/

I think it's actually mentioned briefly at the end of Part 5: https://80000hours.org/career-guide/world-problems/

(In fact, the mention is so brief that you could easily remove it if your goal is to wait until the end to mention effective altruism.)

That's right - we mention it as a cause to work on. That slipped my mind since that article was added only recently. Though I think it's still true we don't give the impression of representing the EA movement.

Might it be that 80k recommend X-risk because it's neglected (even within EA) and that if more than 50% of EAs had X-risk as their highest priority it would no longer be as neglected?

I don't think that'd be the case, because from the inside view of someone already prioritizing x-risk reduction, that cause can appear at least thousands of times more important than literally anything else. This is based on an idea formulated by philosopher Nick Bostrom: astronomical stakes (this is Niel Bowerman in the linked video, not Nick Bostrom). The ratio x-risk reducers think is appropriate for resources dedicated to x-risk relative to other causes is arbitrarily high. Lots of people think the argument is missing some important details, or ignoring major questions, but I think from their own inside view x-risk reducers probably won't be convinced by that. More effective altruists could try playing the double crux game to find the source of disagreement about typical arguments for far-future causes. Otherwise, x-risk reducers would probably maintain that in the ideal as many resources as possible ought to be dedicated to x-risk reduction, but in practice they may endorse other viewpoints receiving support as well.

This seems like a perfectly reasonable comment to me. Not sure why it was heavily downvoted.

Talking about people in the abstract, or in a tone that treats them as some kind of "other", is to generalize and stereotype. Or maybe generalizing and stereotyping people others them, and makes them too abstract to empathize with. Whatever the direction of causality, there are good reasons people might take my comment poorly. There are lots of skirmishes online in effective altruism between causes, and I expect most of us don't like all being lumped together in a big bundle, because it feels like under those circumstances at least a bunch of people in your in-group or whatnot will feel strawmanned. That's what my comment reads like. That's not my intention.

I'm just trying to be frank. On the Effective Altruism Forum, I try to follow Grice's Maxims because I think writing in that style heuristically optimizes the fidelity of our words to the sort of epistemic communication standards the EA community would aspire to, especially as inspired by the rationality community to do so. I could do better on the maxims of quantity and manner/clarity sometimes, but I think I do a decent job on here. I know this isn't the only thing people will value in discourse. However, there are lots of competing standards for what the most appropriate discourse norms are, and nobody is establishing to others how the norms will not just maximize the satisfaction of their own preferences, but maximize the total or average satisfaction for what everyone values out of discourse. That seems the utilitarian thing to do.

The effects of ingroup favouritism in terms of competing cause selections in the community don't seem healthy to the EA ecosystem. If we want to get very specific, here's how finely the EA community can be sliced up by cause-selection-as-group-identity.

  • vegan, vegetarian, reducetarian, omnivore/carnist
  • animal welfarist, animal liberationist, anti-speciesist, speciesist
  • AI safety, x-risk reducer (in general), s-risk reducer
  • classical utilitarian, negative utilitarian, hedonic utilitarian, preference utilitarian, virtue ethicist, deontologist, moral intuitionist/none-of-the-above
  • global poverty EAs; climate change EAs?; social justice EAs...?

The list could go on forever. Everyone feels like they're representing not only their own preferences in discourse, but sometimes even those of future generations, all life on Earth, tortured animals, or fellow humans living in agony. Unless as a community we make a conscientious effort to reach towards some shared discourse norms which are mutually satisfactory to multiple parties or individual effective altruists, however they see themselves, communication failure modes will keep happening. There's strawmanning and steelmanning, and then there's representations of concepts in EA which fall in between.

I think if we as a community expect everyone to impeccably steelman everyone all the time, we're being unrealistic. Rapid growth of the EA movement is what organizations from various causes seem to be rooting for. That means lots of newcomers who aren't going to read all the LessWrong Sequences or Doing Good Better before they start asking questions and contributing to the conversation. When they get downvoted for not knowing the archaic codex that is evolved EA discourse norms, which aren't written down anywhere, they're going to exit fast. I'm not going anywhere, but if we aren't more willing to be more charitable to people we at first disagree with than they are to us, this movement won't grow. That's because people might be belligerent, or alarmed, by the challenges EA presents to their moral worldview, but they're still curious. Spurning doesn't lead to learning.

All of the above refers only to specialized discourse norms within just effective altruism. This would be on top of the complicatedness of effective altruists' private lives, all the usual identity politics, and otherwise the common decency and common sense we would expect of posters on the forum. All of that can already be difficult for diverse groups of people as is. But for all of us to go around assuming the illusion of transparency makes things fine and dandy with regards to how a cause is represented, without openly discussing it, is to expect too much of each and every effective altruist.

Also, as of this comment, my parent comment above has net positive 1 upvote, so it's all good.

Sure. But in that case GWWC should take the same sort of line, presumably. I'm unsure how/why the two orgs should reach different conclusions.

I obviously can't speak for GWWC but I can imagine some reasons it could reach different conclusions. For example, GWWC is a membership organization and might see itself as, in part, representing its members or having a duty to be responsive to their views. At times, listeners might understand statements by GWWC as reflecting the views of its membership.

80k's mission seems to be research/advising so its users might have more of an expectation that statements by 80k reflect the current views of its staff.

Hello again Ben and thanks for the reply.

Thanks for the correction on 80k. I'm pleased to hear 80k stopped doing this ages ago: I saw the new, totalist-y update and assumed it was more of a switch in 80k's position than it actually was. I'll add a note.

I agree moral uncertainty is potentially important, but there are two issues.

  1. I'm not sure EMV (expected moral value) is the best approach to moral uncertainty. I've been doing some stuff on meta-moral uncertainty and think I've found some new problems I hope to write up at some point.

  2. I'm also not sure, even if you adopt an EMV approach, the result is that totalism becomes your effective axiology as Hilary and Toby suggest in their paper (http://users.ox.ac.uk/~mert2255/papers/mu-about-pe.pdf). I'm also working on a paper on this.

Those are basically holding responses which aren't that helpful for the present discussion. Moving on then.

I disagree with your analysis that person-affecting views are committed to being very concerned about X-risks. Even supposing you're taking a person-affecting view, there's still a choice to be made about your view of the badness of death. If you're an Epicurean about death (it's bad for no one to die) you wouldn't be concerned about something suddenly killing everyone (you'd still be concerned about the suffering as everyone died, though). I find both person-affecting views and Epicureanism pretty plausible: Epicureanism is basically just taking the person-affecting view of creating lives and applying it to ending lives, so if you like one, you should like both. On my (heretical and obviously deeply implausible) axiology, X-risk doesn't turn out to be important.

FWIW, I'm (emotionally) glad people are working on X-risk because I'm not sure what to do about moral uncertainty either, but I don't think I'm making a mistake in not valuing it. Hence I focus on trying to find the best ways to 'improve lives' - increasing the happiness of currently living people whilst they are alive.

You're right that if you combine person-affecting-ness and a deprivationist view of death (i.e. badness of death = years of happiness lost) you should still be concerned about X-risk to some extent. I won't get into the implications of deprivationism here.

What I would say, regarding transparency, is that if you think everyone should be concerned about the far future because you endorse EMV as the right answer to moral uncertainty, you should probably state that somewhere too, because that belief is doing most of the prioritisation work. It's not totally uncontentious, hence doesn't meet the 'moral inclusivity' test.

Hi Michael,

I agree that if you accept both Epicureanism and the person-affecting view, then you don't care about an xrisk that suddenly kills everyone, perhaps like AI.

However, you might still care a lot about pandemics or nuclear war due to their potential to inflict huge suffering on the present generation, and you'd still care about promoting EA and global priorities research. So even then, I think the main effect on our rankings would be to demote AI. And even then, AI might still rank due to the potential for non-xrisk AI disasters.

Moreover, this combination of views seems pretty rare, at least among our readers. I can't think of anyone else who explicitly endorses it.

I think it's far more common for people to put at least some value on future generations and/or to think it's bad if people die. In our informal polls of people who attend our workshops, over 90% value future generations. So, I think it's reasonable to take this as our starting point (like we say we do in the guide: https://80000hours.org/career-guide/how-much-difference-can-one-person-make/#what-does-it-mean-to-make-a-difference).

And this is all before taking account of moral uncertainty, which is an additional reason to put some value on future generations that most people haven't already considered.

In terms of transparency, we describe our shift to focusing on future generations here: https://80000hours.org/career-guide/world-problems/#how-to-preserve-future-generations-8211-find-the-more-neglected-risks If someone doesn't follow that shift, then it's pretty obvious that they shouldn't (necessarily) follow the recommendations in that section.

I agree it would be better if we could make all of this even more explicit, and we plan to, but I don't think these questions are on the minds of many of our readers, and we rarely get asked about them in workshops and so on. In general, there's a huge amount we could write about, and we try to address people's most pressing questions first.

Hello Ben,

Main comments:

There are two things going on here.

On transparency, if you want to be really transparent about what you value and why, I don't think you can assume people agree with you on topics they've never considered, that you don't mention, and that do basically all the work of cause prioritisation. The number of people worldwide who understand moral uncertainty well enough to explain it could fill one seminar room. If moral uncertainty is your "this is why everyone should agree with us" fallback, then that should presumably feature somewhere. Readers should know that's why you put forward your cause areas so they're not surprised later on to realise that's the reason.

On exclusivity, your response seems to amount to "most people want to focus on the far future and, what's more, even if they don't, they should because of moral uncertainty, so we're just going to say it's what really matters". It's not true that most EAs want to focus on the far future - see Peter Hurford's comment below. Given that it's not true, saying people should focus on it is, in fact, quite exclusive.

The third part of my original post argued we should want EA to be morally inclusive even if we endorse a particular moral theory. Do you disagree with that? Unless you disagree, it doesn't matter whether people are or should be totalists: it's worse from a totalist perspective for 80k to only endorse totalist-y causes.

Less important comments:

FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and 'ordinary human unhappiness' (that is, the sub-maximal happiness many people have even if they are entirely healthy and economically secure). Say a nuclear war kills everyone, then that's just a few moments of suffering. Say it kills most people, but leaves 10m left who eke out a miserable existence in a post-apocalyptic world, then you're just concerned with 10m people, which is 50 times fewer than just the 500m who have either anxiety or depression worldwide.

I know some people who implicitly or explicitly endorse this, but I wouldn't expect you to, and that's one of my worries: if you come out in favour of theory X, you disproportionately attract those who agree with you, and that's bad for truth seeking. By analogy, I don't imagine many people at a Jeremy Corbyn rally vote Tory. But I'm not sure Jeremy should take that as further evidence that a) the Tories are wrong or b) no one votes for them.

I'm curious where you get your 90% figure from. Is this from asking people if they would:

"Prevent one person from suffering next year. Prevent 100 people from suffering (the same amount) 100 years from now."?

I assume it is, because that's how you put it in the advanced workshop at EAGxOX last year. If it is, it's a pretty misleading question to ask, for a bunch of reasons that would take too long to type out fully. Briefly, one problem is that I think we should help the 100 people in 100 years if those people already exist today (both necessitarians and presentists get this result). So I 'agree' with your intuition pump but don't buy your conclusions, which suggests the pump is faulty. Another problem is the Hawthorne effect. Another is that population ethics is a mess and you've cherry-picked a scenario that suits your conclusion. If I asked a room of undergraduate philosophers "would you rather relieve 100 living people of suffering or create 200 happy people" I doubt many would pick the latter.

I feel like I'm being interpreted uncharitably, so this is making me feel a bit defensive.

Let's zoom out a bit. The key point is that we're already morally inclusive in the way you suggest we should be, as I've shown.

You say:

for instance, 80,000 Hours should be much more morally inclusive than they presently are. Instead of “these are the most important things”, it should say “these are the most important things if you believe A, but not everyone believes A. If you believe B, you should think these are the important things [new list pops up].

In the current materials, we describe the main judgement calls behind the selection in this article: https://80000hours.org/career-guide/world-problems/ and within the individual profiles.

Then on the page with the ranking, we say:

Comparing global problems involves difficult judgement calls, so different people come to different conclusions. We made a tool that asks you some key questions, then re-ranks the lists based on your answers.

And provide this: https://80000hours.org/problem-quiz/ Which produces alternative rankings given some key value judgements i.e. it does exactly what you say we should do.

Moreover, we've been doing this since 2014, as you can see in the final section of this article: https://80000hours.org/2014/01/which-cause-is-most-effective-300/

In general, 80k has a range of options, from most exclusive to least:

1) State our personal views about which causes are best.
2) Also state the main judgement calls required to accept these views, so people can see whether to update or not.
3) Give alternative lists of causes for nearby moral views.
4) Give alternative lists of causes for all major moral views.

We currently do (1)-(3). I think (4) would be a lot of extra work, so not worth it, and it seems like you agree.

It seemed like your objection is more that within (3), we should put more emphasis on the person-affecting view. So, the other part of my response was to argue that I don't think the rankings depend as much on that as it first seems. Moral uncertainty was only one reason - the bigger factor is that the scale scores don't actually change that much if you stop valuing xrisk.

Your response was that you're also epicurean, but then that's such an unusual combination of views that it falls within (4) rather than (3).

But, finally, let's accept epicureanism too. You claim:

FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and 'ordinary human unhappiness'

For mental health, you give the figure of 500m. Suppose those lives have a disability weighting of 0.3, then that's 150m QALYs per year, so would get 12 on our scale.

What about for pandemics? The Spanish Flu infected 500m people, so let's call that 250m QALYs of suffering (ignoring the QALYs lost by people who died since we're being Epicurean, or the suffering inflicted on non-infected people). If there's a 50% chance that happens within 50 years, then that's 2.5m expected QALYs lost per year, so it comes out at 9 on our scale. So, it's a factor of about 60 less, but not insignificant. (And this is ignoring engineered pandemics.)
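To make the arithmetic above easier to check, here's a rough sketch in Python. The mapping from annual QALYs to a scale score is just back-fitted from the figures in this thread (roughly 12 at ~100 million QALYs per year, with two points per factor of ten) rather than our official rubric, and the other inputs are the illustrative guesses above.

```python
import math

def scale_score(annual_qalys):
    # Assumed log mapping, back-fitted from the figures in this thread:
    # ~12 at 1e8 QALYs/year, plus 2 points per factor of 10.
    return 12 + 2 * math.log10(annual_qalys / 1e8)

# Mental health (illustrative figures from the comment above):
mental_health_qalys = 500e6 * 0.3                    # 150m QALYs lost per year

# A Spanish-Flu-scale pandemic, ignoring deaths (the Epicurean assumption):
pandemic_event_qalys = 250e6                         # suffering per event
expected_pandemic_qalys = pandemic_event_qalys * 0.5 / 50  # 50% chance in 50 years

print(round(scale_score(mental_health_qalys)))        # ~12
print(round(scale_score(expected_pandemic_qalys)))    # ~9
print(mental_health_qalys / expected_pandemic_qalys)  # ~60x smaller in scale
```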

But, the bigger issue is that the cause ranking also depends on neglectedness and solvability.

We think pandemics only get $1-$10bn of spending per year, giving them a score of ~4 for neglectedness.

I'm not sure how much gets spent on mental health, but I'd guess it's much larger. Just for starters, it seems like annual sales of antidepressants are well over $10bn, and that seems like fairly small fraction of the overall effort that goes into it. The 500m people who have a mental health problem are probably already trying pretty hard to do something about it, whereas pandemics are a global coordination problem.

All the above is highly, highly approximate - it's just meant to illustrate that, on your views, it's not out of the question that the neglectedness of pandemics could make up for their lower scale, so pandemics might still be an urgent cause.

I think you could make a similar case for nuclear war (a nuclear war could easily leave 20% of people alive in a dystopia) and perhaps even AI. In general, our ranking is driven more by neglectedness than scale.

Hey.

So, I don't mean to be attacking you on these things. I'm responding to what you said in the comments above and maybe more of a general impression, and perhaps not keeping in mind how 80k do things on their website; you write a bunch of (cool) stuff, I've probably forgotten the details and I don't think it would be useful to go back and engage in a 'you wrote this here' exercise to check.

A few quick things as this has already been a long exchange.

Given I accept I'm basically a moral hipster, I'd understand if you put my views in the (4) rather than (3) category.

If it's of any interest, I'm happy to suggest how you might update your problem quiz to capture my views and views in the area.

I wouldn't think the same way about Spanish flu vs mental health. I'm assuming happiness is duration x intensity (#Bentham). What I think you're discounting is the duration of mental illnesses - they are 'full-time' in that they take up your conscious space for lots of the day. They often last a long time. I don't know what the distribution of duration is, but if you have chronic depression (anhedonia) that will make you less happy constantly. In contrast, the experience of having flu might be bad (although it's not clear it's worse, moment per moment, than, say, depression), but it doesn't last for very long. A couple of weeks? So we need to account for the fact that a case of Spanish flu has 1/26th the duration of anhedonia, before we even factor in intensity. More generally, I think we suffer from something like scope insensitivity when we do affective forecasting: we tend to consider the intensity of events rather than their duration. Studies into the 'peak-end' effect show this is exactly how we remember things: our brains only really remember the intensity of events.
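As a minimal illustration of the duration point, under the simple intensity-times-duration model (the numbers are just placeholders, not estimates):

```python
# Happiness lost = intensity x duration, per the simple Benthamite model above.
# Numbers are illustrative placeholders, not estimates.
flu_duration_weeks = 2          # a bout of flu
depression_duration_weeks = 52  # chronic anhedonia over a year

# Holding per-week intensity equal, duration alone gives a factor of ~26:
print(depression_duration_weeks / flu_duration_weeks)   # 26.0
```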

One conclusion I reach (on my axiology) is that the things which cause daily misery/happiness are the biggest in terms of scale. This is why I don't think x-risks are the most important thing. I think a totalist should accept this sort of reasoning and bump up the scale of things like mental health, pain and ordinary human unhappiness, even though x-risk will be much bigger in scale on totalism. I accept I haven't offered anything to do with solvability or neglectedness yet.

Thanks. Would you consider adding a note to the original post pointing out that 80k already does what you suggest re moral inclusivity? I find that people often don't read the comment threads.

I'll add a note saying you provide a decision tool, but I don't think you do what I suggest (obviously, you don't have to do what I suggest and can think I'm wrong!).

I don't think it's correct to call 80k morally inclusive because you substantially pick a preferred outcome/theory and then provide the decision tool as a sort of afterthought. By my lights, being morally inclusive is incompatible with picking a preferred theory. You might think moral exclusivity is, all things considered, the right move, but we should at least be clear that's the choice you've made. In the OP I suggest there were advantages to inclusivity over exclusivity and I'd be interested to hear if/why you disagree.

I'm also not sure if you disagree with me that the scale of suffering on the living from an X-risk disaster is probably quite small, and that the happiness lost to long-term conditions (mental health, chronic pains, ordinary human unhappiness) is of much larger scale than you've allowed. I'm very happy to discuss this with you in person to hear what, if anything, would cause you to change your views on this. It would be a bit of a surprise if every moral view agreed X-risks were the most important thing, and it's also a bit odd if you've left some of the biggest problems (by scale) off the list. I accept I haven't made substantial arguments for all of these in writing, but I'm not sure what evidence you'd consider relevant.

I've also offered to help rejig the decision tool (perhaps subsequent to discussing it with you) and that offer still stands. On a personal level, I'd like the decision tool to tell me what I think the most important problems are and to better reflect the philosophical decision process! You may decide this isn't worth your time.

Finally, I think my point about moral uncertainty still stands. If you think it is really important, it should probably feature somewhere. I can't see a mention of it here: https://80000hours.org/career-guide/world-problems/

Effective Altruism is quite difficult to explain if you want to capture all of its complexity. I think that it is a completely valid choice for an introductory talk to focus on one aspect of Effective Altruism, as otherwise many people will have trouble following.

I would suggest letting people know that you are only covering one aspect of Effective Altruism, ie. "Effective Altruism is about doing the most good that you can with the resources available to you. This talk will cover how Effective Altruism has been applied to charity, but it is worth noting that Effective Altruism has also been applied to other issues like animal welfare or ensuring the long-term survival of humanity".

This reduces the confusion when they hear about these issues later and reduces the chance that they will feel misled. At the same time, it avoids throwing too many new ideas at a person at once, which may reduce their comprehension, and it explains how EA applies to an issue which they may already care about.

I think this is a good point, but these days we often do this, and people still get the impression that it's all about global poverty. People remember the specific examples far more than your disclaimers. Doing Good Better is a good example.

I broadly agree with you on the importance of inclusivity, but I’m not convinced by your way of cashing it out or the implications you draw from it.

Inclusivity/exclusivity strikes me as importantly being a spectrum, rather than a binary choice. I doubt when you said EA should be about ‘making things better or worse for humans and animals but being neutral on what makes things better or worse’, you meant the extreme end of the inclusivity scale. One thing I assume we wouldn’t want EA to include, for example, is the view that human wellbeing is increased by coming only into contact with people of the same race as yourself.

More plausibly, the reasons you outline in favour of inclusivity point towards a view such as ‘EA is about making things better or worse for sentient beings but being neutral between reasonable theories of what makes things better or worse’. Of course, that brings up the question of what it takes to count as a reasonable theory. One thing it could mean is that some substantial number of people hold / have held it. Presumably we would want to circumscribe which people are included here: not all moral theories which have at any time in the past been held by a large group of people are reasonable. At the other end of the spectrum, you could include only views currently held by many people who have made it their life’s work to determine the correct moral theory. My guess is that in fact we should take into account which views are and aren’t held by both the general public and by philosophers.

I think given this more plausible cashing out of inclusivity, we might want to be both more and less inclusive than you suggest. Here are a few specific ways it might cash out:

  • We should be thinking about and discussing theories which put constraints on the actions you’re allowed to take to increase welfare. Most people think there are some limits on what we’re allowed to do to others in order to benefit them. Most philosophers believe there are some deontological principles / agent-centred constraints or prerogatives.

  • We should be considering how prioritarian to be. Many people think we should give priority to those who are worst off, even if we can benefit them less than we could others. Many philosophers think that there’s (some degree of) diminishing moral value to welfare.

  • Perhaps we ought to be inclusive of views to the effect that (at least some) non-human sentient beings have little or no moral value. Many people’s actions imply they believe that a large number of animals have little or no moral value, and that robots never could have moral value. Fewer philosophers seem to hold this view.

  • I’m less convinced about being inclusive towards views which place no value on the future. It seems widely accepted that climate change is very bad, despite the fact that most of the harms will accrue to those in the future. It’s controversial what the discount rate should be, but not that the pure time discount rate should be small. Very few philosophers defend purely person-affecting views.

Thanks Michelle.

I agree there's a difficulty in finding a theoretical justification for how inclusive you are. I think this overcooks the problem somewhat as an easier practical principle would be "be so inclusive no one feels their initially preferred theory isn't represented". You could swap "no one" for "few people" with "few" to be further defined. There doesn't seem much point saying "this is what a white supremacist would think" as there aren't that many floating around EA, for whatever reason.

On your suggestions for being inclusive, I'm not sure the first two are so necessary, simply because it's not clear what types of EA actions prioritarians and deontologists will disagree about in practice. For which charities will utilitarians and prioritarians diverge, for instance?

On the third, I think we already do that, don't we? We already have lots of human-focused causes people can pick if they aren't concerned about non-human animals.

On the last, the only view I can think of which puts no value on the future would be one with a very high pure time discount. I'm inclined towards person-affecting views and I think climate change (and X-risk) would be bad and are worth worrying about: they could impact the lives of those alive today. As I said to B. Todd earlier, I just don't think they swamp the analysis.

Interesting read!

Just a thought: does anyone have any thoughts on religion and EA? I don't mean it in a "saving souls is cost effective" way, more in the moral philosophy way.

My personal take is that unless someone is really hardcore/radical/orthodox, then most of what EA says would be positive/ethical for most religious persons. That is certainly my experience talking to religious folks; no one has ever gotten mad at me unless I get too consequentialist. Religious people might even be more open to the Giving What We Can pledge, and to EA in some ways, because of the common practice of tithing. Though they might decide that "my faith is the most cost effective", but that only sometimes happens; they seem to donate on top of it usually.

PS: Michael, was it my question on the facebook mental health EA post that prompted you to write this? Just curious.

Thank you for the interesting post, and you provide some strong arguments for moral inclusivity.

I'm less confident that the marketing gap, if it exists, is a problem, but there may be ways to sell the more 'weird' cause areas, as you suggest. However, even when they are mentioned, people may still get the impression that EA is mostly about poverty. The other causes would have to be explained in the same depth as poverty (looking at specific charities in these cause areas as well as cost-effectiveness estimates where they exist, for instance) for the impression to fade, it seems to me.

While I do agree that it's likely that a marketing gap is perceived by a good number of newcomers (based solely on my intuition), do we have any solid evidence that such a marketing gap is perceived by newcomers in particular?

Or is it mainly perceived by more 'experienced' EAs (many of whom may prioritise causes other than global poverty) who feel as if sufficient weight isn't being given to other causes, or who feel guilty for giving a misleading impression relative to their own impressions (which are formed from being around others who think like them)? If the latter, then the marketing gap may be less problematic, and will be less likely to blow up in our faces.

Hey Michael, sorry I am slightly late with my comment.

To start I broadly agree that we should not be misleading about EA in conversation, however my impression is that this is not a large problem (although we might have very different samples).

I am unsure where I stand on moral inclusivity/exclusivity, although as I discuss later I think this is not actually a particularly major problem, as most people do not have a set moral theory.

I am wondering what your ideal inclusive effective altruism outreach looks like?

I am finding it hard to build up a cohesive picture from your post and comments, and I think some of your different points don't quite gel together in my head (or at least not in an inconvenient possible world).

You give an example of this: beginning a conversation with global poverty before transitioning to explaining the diversity of EA views by:

Point out people understand this in different ways because of their philosophical beliefs about what matters: some focus on helping humans alive today, others on animals, others on trying to make sure humanity doesn't accidentally wipe itself out, etc.

For those worried about how to ‘sell’ AI in particular, I recently heard Peter Singer give a talk in which he said something like (I can't remember it exactly): "some people are very worried about the risks from artificial intelligence. As Nick Bostrom, a philosopher at the University of Oxford, pointed out to me, it's probably not a very good idea, from an evolutionary point of view, to build something smarter than ourselves." At which point the audience chuckled. I thought it was a nice, very disarming way to make the point.

However, trying to make this match the style of an event a student group could actually run, it seems like the closest match (other than a straightforward intro to EA event) would be a talk on effective global poverty charity, followed by an addendum on EA being more broad at the end. (I think this is due to a variety of practical concerns, such as there being far more good speakers and big names in global poverty, and it providing many concrete examples of how to apply EA concepts, etc.)

I am however skeptical that an addendum at the end of a talk would create nearly as strong an impression as the subject matter of the talk itself, and people would still leave with a much stronger impression of EA as being about global poverty than e.g. x-risk.

You might say a more diverse approach would be to have talks etc. roughly in proportion to what EAs actually believe is important, so that, to make things simple, if a third of EAs thought global poverty was most important, a third x-risk and a third animal suffering, then a third of the talks should be on global poverty, a third on x-risk, etc. Each of these could then end with this explanation of EA being more broad.

However, if people's current perception that global poverty events are the best way to get new people into EA is in fact right (at least in the short term), either by having better attendance or better conversion ratios, this approach could still lead to the majority of new EAs' first introduction to EA being through a global poverty talk.

Due to the previous problem of the addendum not really changing people's impressions enough, we could still end up in the situation you say we should want to avoid, where:

People should not feel surprised about what EAs value when they get more involved in the movement.

I am approaching all this more from the student group perspective, and so don't have strong views on the website stuff, although I will note that my impression was that 80k does a good job of being inclusive, and that GWWC's issue is more a lack of updates than anything else.

One thing you don't particularly seem to be considering is that almost all people don't actually have strongly formed moral views that conform to one of the common families (utilitarianism, virtue ethics, etc.), so I doubt (but could be wrong, as there would probably be a lot of survivorship bias in this) that a high percentage of newcomers to EA feel excluded by the implicit assumptions that might often be made, e.g. that future people matter.

Hello Alex,

Thanks for the comments. FWIW, when I was thinking about inclusivity I had in mind 1) the websites of EA orgs and 2) introductory pitches at (student) events, rather than the talks involved in running a student group. I have no views on student groups being inclusive in their full roster of talks, not least because I doubt the groups would cohere enough to push a particular moral theory.

I agree that lots of people don't have strong moral views, and I think EA should be a place where they figure out what they think, rather than a place where various orgs push them substantially in one direction or another. As I stress, I think even the perception of a 'right' answer is bad for truth-seeking. Ben Todd doesn't seem to have responded to my comments on this, so I'm not really sure what he thinks.

And, again FWIW, survivorship bias is a concern. Anecdotally, I know a bunch of people who decided EA's weirdness, particularly with reference to the far future, was what made them decide not to come back.

(Distinct comment on survivorship bias as it seems like a pretty separate topic)

I currently think good knowledge about what drives people away from EA would be valuable, although obviously fairly hard to collect, and I can't remember ever seeing a particularly large collection of reasons given.

I am unsure how much we should try to respond to some kinds of complaints, though. For things such as people being driven away by weirdness, it is not clear to me that there is much we can do to make EA more inclusive to them without losing a lot of the value of EA (pursuing arguments even if they lead to strange conclusions, etc.).

In particular, do you know of anyone who left because they only cared about e.g. global poverty and did not want to engage with the far future stuff, who you think would have stayed if EA had been presented to them as including far future stuff from the start? It seems like it might just bring forward the point at which they are put off.

Ah ok, I think I generally agree with your points then (that intro events and websites should be morally inclusive and explain, to some degree, the diversity of EA). My current impression is that this is not much of a problem at the moment. From talking to people working at EA orgs and reading the advice given to students running intro events, I think people do advocate for honesty and moral inclusiveness, and when/if it is lacking, this is more due to a lack of time or honest mistakes than to conscious planning. (Although possibly we should dedicate much more time to it to ensure it is never neglected?)

In particular, I associate the whole 'moral uncertainty' thing pretty strongly with EA, and especially with CEA and GWWC (though this might just be due to Toby and Will's work on it), which cuts fairly strongly against part 3 of your main post.

How much of a problem do you think this currently is? The title and tone (use of 'plea', etc.) of your post make me think you feel we are currently in pretty dire straits.

I also think that, generally, student-run talks (and not specific intro to EA events) are the way most people initially hear about EA (although I could be very wrong about this), and so the majority of the confusion about what EA is really about would not get addressed even if people fully embraced the recommendations in your post. (Although I may just be heavily biased towards how the EA societies I have been involved with have worked.)

I favour the idea of inclusivity, and of being upfront about the different areas that are prioritised. Within these areas, I think there are certain ideas that could be held back because people might find them fairly unusual considerations. However, this also means that an idea which is put forward shouldn't then become a priority in itself. For example, where factory farming is put forward, ending factory farming isn't necessarily the most effective way to reduce harm; instead, harm can be shifted onto non-factory farming, which is another form of animal exploitation (often with a different set of harms) that would then need to be addressed. However, at that point it might fall off the EA radar, or become a lower-priority cause, because a perceived decline in suffering makes another general cause seem more important.

However, the impacts of animal consumption on the environment and human health are also strong reasons for prioritising the animal issue, and these impacts can even be exacerbated by non-factory farming, particularly where the environment is concerned. So it might appear that reducing animal consumption overall ought to be the priority. It might then be more realistic to say that EA is interested in reducing harm to other animals, but framing the issue in terms of the harms that take place can also further normalise those harms by trading one off against another, particularly when some people in the animal movement are ideologically opposed to the systems responsible.

However, I think it is ok to introduce the idea of factory farming, as long as the variety of approaches within the animal movement that address this issue are articulated later on (within EAA). Other issues, such as wild animal suffering, are also given time here. However, this brings up the issue of inclusivity of ideas, and I think it is fairly well recognised that EAA is dominated by utilitarian thinking on the issue of animal exploitation. I think this is a cause of some concern, because it tends to lead to a certain interpretation of ideas being prioritised over others, seemingly because they fit with the idea the group has of itself rather than because they have been thoroughly evaluated.

It seems that, to begin with, EAA drew its expertise from a not particularly diverse strand of 'mainstream' thinking within the animal movement, and has since struggled (or, uncharitably, has been disinclined) to include different perspectives. This could largely be the result of traditional leaders in the animal movement having little incentive to include areas they know little about, or areas that would suggest their work may need to be adjusted to be more amenable to a more inclusive EAA. There is also the concern that other perspectives may become increasingly favoured and their own position diminished, or that their favoured groups would receive less funding overall.

In this sense an ingroup/outgroup situation can be created and perpetuated. So I would say being more inclusive is a good thing, but whether inclusion is something people actually want to engage in is another matter. It could highlight a previous (and ongoing) less than optimal approach, and this is quite a difficult situation to deal with, both for people within EAA and for those who are interested in EAA but don't see their ideas included in counterfactual considerations, and instead find them generally ignored or dismissed because they don't fit with the view the organisation has constructed.

I think that, going forward, there need to be some changes in the way EAA works in order for it to grow, and for it to claim that it is indeed doing the most effective thing. As it stands, there are too many ideologically similar groups whose ideas are prioritised and perpetuated, and that have resisted either scrutiny or the consequences of scrutiny. So it seems to me there is quite a challenge ahead if inclusivity is going to be the path people wish to take.

I totally understand your concern that the EA movement is misrepresenting itself by not promoting issues in proportion to their representation among people in the group. However, I think that the primary consideration in promoting EA should be what will hook people. Very few people in the world care about AI as a social issue, but extreme poverty and injustice are very popular causes that can attract people. I don't actually think it should matter for outreach what the most popular causes are among community members. Outreach should be based on what is likely to attract the masses to practice EA (without watering it down by promoting low-impact causes, of course). Also, I believe it's possible to be too inclusive of moral theories. Dangerous theories that incite terrorism, such as Islamic or negative utilitarian extremism, should be condemned.

Also, I'm not sure to what extent people in the community even represent people who practice EA. Those are two very different things. You can practice EA, for example by donating a chunk of your income to Oxfam every year, without having anything to do with others who identify with EA, and you can be a regular at EA meetups and discuss related topics often (i.e. be a member of the EA community) without donating or doing anything high impact. Perhaps the most popular issues acted on by those who practice EA are different from those discussed by those who like to talk about EA. Being part of the EA community doesn't give one any moral authority in itself.

It's exactly this line of thinking that I expect to blow up in our faces and do less good over the long run. How would you feel if you thought I considered you one of the stupid "masses" (your word) and was trying to manipulate you into doing something I didn't personally believe in?

You'd dislike me, distrust me, not want to do what I told you and probably tell other people EA was full of suspicious people. No one wants to be taken for an idiot.

I didn't follow your 2nd point I'm afraid.

This post may add grist to the mill of the view that any such gap is a problem: https://srconstantin.wordpress.com/2017/01/11/ea-has-a-lying-problem/

(The post doesn't quite cover the same issues that Michael talks about here, but there's a parallel)