
Preliminary note: for most readers of this forum, this post will be preaching to the choir. However, I decided to write it for two reasons:

  1. To create common knowledge in EA around the concept I will discuss. (This excellent post from Scott Aaronson provided some motivation.)
  2. I have noticed a recent stream of forum posts and movement-growth efforts that do not seem to take the concept I will discuss into account.

On prioritization

While anyone can contribute to the EA movement, it is important to remember that one of EA's most important concepts is prioritization: it is possible to save and improve many more lives if you prioritize where you direct your money and efforts.

There is a skewed distribution of the effectiveness of interventions, such that by prioritizing the most effective ones, you can have many times the impact. Given limited resources, if you care about massively improving the world, you should focus most of your attention on that small percentage of highly effective interventions. GiveWell promotes only a small percentage of charities based on this principle.

If the distribution of the effectiveness of people is similarly skewed, then EA should take seriously the idea of prioritizing outreach toward the most effective people. Is that distribution similarly skewed?

Yes. We live in a world with a skewed distribution of the amount of good various people can do with their resources. The richest person in America, Bill Gates, has roughly $79,000,000,000 in assets. The median net worth of an American is $44,900. You would need to recruit roughly 1.75 million Americans of median net worth to match the potential impact of recruiting Bill Gates. If your goal is to have as much money as possible donated in the best ways possible, then you should seriously consider whether the expected value of recruiting Bill Gates or other billionaires is higher than the expected value of recruiting as many people as possible. For example, it is likely that GiveWell's recruitment of Cari Tuna and Dustin Moskovitz was higher impact than all other EA donor-recruitment efforts combined.
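As a rough back-of-envelope sketch of that ratio, using only the figures cited above (both numbers are dated estimates and shift from year to year, so treat the output as illustrative):

```python
# Back-of-envelope comparison using the figures cited above.
# Both values are rough, dated estimates rather than authoritative data.
gates_net_worth = 79_000_000_000   # ~$79B in assets
median_net_worth = 44_900          # median American net worth

ratio = gates_net_worth / median_net_worth
print(f"~{ratio:,.0f} median-net-worth Americans")  # ~1,759,465
```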

Likewise, the influence of Hillary Clinton and the influence of the average American likely differ by at least an order of magnitude. Similarly, the productive output of an Elon Musk and that of the average American likely differ by at least an order of magnitude.

In a world where everyone's ability to save and improve lives is equal, you might prefer mass-movement strategies and not worry much about who your outreach is directed toward. If, however, we live in a world in which there is a skewed distribution (likely even a power law distribution) of wealth, talent, and influence, you might prioritize strategies which try to recruit people for whom there is evidence of outsized effectiveness. We live in the latter world.

On the implications of prioritization

This can be a difficult conclusion for an effective altruist to come to. Our lives are based on compassion for others. So to prioritize some people over others based on their effectiveness can be an emotionally difficult idea. However, it is worth noting that every self-identifying EA already engages in this behavior. Behind every intervention or charity that we choose to deprioritize in favor of others which do more good, there are people. Similarly, behind every EA organization are decisions to hire the most effective people. In doing so, EA organizations are also choosing to prioritize certain people over others. Many or most of these people - for both cases above - have praiseworthy intentions and identities strongly associated with doing lots of good. Does deprioritizing certain people make EA inhumane?

Clearly, the answer is no! An EA chooses to prioritize for the most humane reasons possible, almost by definition.

So far I have described a fact about the world (the skewed distribution of personal effectiveness) and the consequences for an EA (prioritizing recruitment of the most effective people). What are more specific implications?

  • Organizations like Giving What We Can, The Life You Can Save, and the Centre for Effective Altruism might focus less on the number of people recruited and more on the effectiveness of the people recruited. For instance, recruiting one Mark Zuckerberg could move more money than the cumulative money moved by all GWWC and TLYCS pledges to date. Likewise, 100 hours spent recruiting one Angela Merkel would likely be higher impact than 100 hours spent recruiting 100 of the usual types of people who are attracted to EA. (I deliberately chose examples that I believe could be within the EA movement's grasp, given the current set of connections I am aware of.)
  • Welcomingness should continue to be promoted, but not at the cost of lowering community standards. For instance, you would not want to learn that your nation's medical schools promote low barriers to entry at all costs. If they prioritized welcomingness over effectiveness when you or someone you know is on the operating table, you would probably be upset. You would also not want the system for producing qualified scientists and engineers to drop its many-tiered filters - unless you want bridges and buildings to fall. In our case, the stakes for finding well-qualified people are much, much higher. (It's important to note here, however, that welcomingness is a very different concept from diversity. EA will need highly effective people from many different types of backgrounds to tackle problems of extreme complexity. Strategies to increase the diversity of qualified candidates will help satisfy this need; strategies which lower effectiveness in favor of welcomingness will not necessarily help it, and will occasionally harm it.)
  • Researchers in the EA community might investigate evidence from psychology, from the business literature, and from interviews with top hiring managers and recruiters on which attributes predict effectiveness. After this evidence is synthesized, EA movement-builders might try to figure out the most cost-effective ways to find people with these attributes.
  • Chapters and movement-builders might prefer one-on-one outreach and niche marketing to mass-marketing strategies.
  • If it is possible for current EAs to dramatically self-improve, then they should figure out how to do so. While there may be some genetic component to personal effectiveness, there is growing evidence that personal ability may be much less fixed than previously assumed. (Indeed, to some extent this seems to be the hypothesis that CFAR is testing, and possibly Leverage as well.)

In general, one implication could be that EA should not try to be a mass movement, like Occupy Wall Street. Instead, it might look more like the scientific revolution, or the process that went into founding America, where a relatively small set of people were able to have a gigantic impact.

This all said, the accusation of elitism, even if it's accurate, can feel hurtful. Nevertheless, there is an important thought experiment to run: in the hypothetical world where elitism is in fact the best strategy for saving and improving the most lives (even after accounting for reputational risk), how many happy lives am I willing to sacrifice in order to not be accused of elitism? Thankfully, for most of us - and for those whose fulfilling lives depend on our successful efforts - the answer is clear: zero.

That said, I'd be very interested to hear alternative arguments and change my thoughts on this topic. (Especially since my motivational system would be quite satisfied to hear that everything written above is false!)

----

[Important note: much of this content is not original - it is based on a series of conversations with several members of the EA movement who have asked to stay anonymous. Parts of it have even been copy-and-pasted from those conversations, with permission.]

Comments

This is an important question, but I think there are many considerations that are neglected in this post. The only argument seems to be that some people are much richer and more powerful than others, which is true but not very informative. I certainly don't think that it's warranted to conclude that the EA movement shouldn't be a mass movement on the basis of the arguments in this post.

One important consideration that isn't discussed is what the long-term consequences of a strategy focused on high net worth individuals would be. Also, the "feel hurtful" argument seems to me to be a straw man (did anyone ever argue that we should not be elitist simply because it feels hurtful to be accused of elitism?).

Also, the top 1% - a far larger segment of the population than the likes of Gates, Musk, etc. - actually earn only a bit more than 10% of post-tax income, even in the US (which is far more unequal than most other rich countries). Even if the ultra-rich each earn orders of magnitude more than any one of the rest of us, the rest of us together still earn much more than they do, which prima facie seems to be a reason to try to reach non-elite people as well.

Wealth is much more concentrated than income, and wealth might be more important. The top 1% of Americans own 43% of total wealth, and the top 0.1% own 22%.

Also, the higher your income, the larger the fraction you can comfortably donate. So the top 1% likely accounts for much more than 10% of total potential donations.

This underestimates the potential donations from people who don't have high net worth. Wealth is largely a function of what percentage of income you put into savings, which is much higher for wealthier people. But you can donate out of your income, not just out of your savings. At the same time, if you're not saving much money then you might not have much wiggle room in your budget to donate more, so this maybe isn't a huge consideration.

Thanks for the post! I mostly agree with your key points: some people are (unfortunately) a lot more powerful than others, and this seems like a reason to focus on recruiting them. I also agree that, for this reason and others, it's not obvious that EA should try to be a mass movement.

However, I think that you're missing some benefits of having a more diverse, non-elite movement, and so reaching a conclusion which is too strong. In short, my argument is that the accusation of elitism, and elitism itself, can BE hurtful to EA, not just FEEL hurtful. I'll focus on three arguments about the consequences of elitism, then make a couple of other points.

First, I think that appearing like an 'elite' movement has ambiguous effects on how EA is presented in the media. Whilst it might increase how prestigious EA is, and so make it more attractive, it is also something that I could imagine negative articles about (in fact, I think that there may already be such articles, but I can't place them right now). Something along the lines of 'Look at these rich, white, Ivy-league educated men. What do they know about poverty? Why should we listen to them?'. I'm not saying that these arguments are necessarily particularly good ones, just that they could be damaging to EA's image, which might limit our ability to get more people involved, and retain people.

Second, we sadly currently live in a world where power (in the forms of wealth and political capital that you discussed) correlates with a lot of other characteristics - being white, being male, being cis, being straight, having privileged parents, etc. EA probably over-represents those characteristics already, and this can cause a variety of problems. Less privileged people might feel excluded from the community, which is not nice for them. It may also reduce their participation, and so EA may exclude perspectives or skillsets that are more common in underprivileged groups, and make worse decisions as a result.

Third, it is possible that diversity is correlated with avoiding movement collapse (I'm not sure of this though - perhaps others have done more research). I've hinted above at some ways in which this could be brought about: causing negative media attention, and causing individuals to feel excluded, and leave the movement. This might be a really important consideration.

So far I've been talking only about the consequences of making EA more elite, but I think it's important not to dismiss non-consequentialist considerations. It may be that it is just good to promote diversity and fairness whenever you have the chance. There may also be non-consequence based moral reasons to include less powerful people in important decisions that could affect them. (Again, I'm not committing to this position, but it seems worth considering seriously, if we admit some uncertainty about whether utilitarianism is the right moral theory.)

I think that given these considerations, it's no longer so obvious that EA should be an elite movement. You point out some good reasons that EA should be elite, but there are reasons pointing in the other direction.

But as you point out, the question is not 'Should EA be elite?', but 'Should EA try to be more or less elite, given where we are at the moment?'. Where are we? EA already seems to be a pretty elite movement: I mentioned the lack of diversity above, and I think we probably have an abnormally high number of billionaires engaged with EA.

So when we account for how elite EA already is, and the risks of being elite, it seems quite possible that EA should be trying to be less elite.

Edit: see http://www.effective-altruism.com/ea/sl/celebrating_all_who_are_in_effective_altruism/ and the comments for even more reasons why this is a tricky question!

For an example of a media piece about the problems of EA elitism, see here: http://ssir.org/articles/entry/the_elitist_philanthropy_of_so_called_effective_altruism

To be fair, that post is probably positive publicity for EA. Like, it's a REALLY bad critique.

Let's not fail at other minds. SSIR is a prominent venue, and if its editors saw this as fit to print, we should assume plenty of other people agreed with it.

If you look at the article's comments, there were far more people who disagreed with the authors than agreed. Also, EA is so small at this stage that even negative publicity means more people hear about us and are thus potentially encouraged to consider effective giving as an option.

Agreed on the potentially positive value of negative publicity, at this stage in the movement's growth at least. We should be careful about how we expend our weirdness points, however.

Seeking donations from high net worth individuals / financial 'elites' is a crowded market. The Giving Pledge is just one campaign targeting these people, and it is already connected to networks of very wealthy persons. Do we have good reasons to think that EA would have a comparative advantage in such a crowded market?

Another significant disadvantage I see to becoming another group that concentrates on targeting high net worth individuals is that we would be perpetuating the myth that only very wealthy people can make a difference, which more moderately wealthy people often cite as their reason for not taking charity more seriously.

I would need to think about this more, but one argument for thinking we have a comparative advantage is that we've already made a surprising amount of headway in getting HNW people, particularly in Silicon Valley, on board. Plus there are some notable people in that group who weren't recruited in any meaningful sense but who have strikingly similar goals, e.g. Bill Gates. Prima facie, I think it's plausible that very large donors tend to give more time to the question of where they should donate, and decide on less personal grounds.

Neither of these is a knockdown argument, but the 'crowded market' claim has its own nuances. For instance, presumably the reason the market is so crowded is that charities find it relatively easier to raise money from HNW individuals despite the crowdedness (or at least not significantly harder).

The groups that are most concerned with becoming mass movements are political ones - because, in democracies, everyone gets one vote - or organisations that want almost everyone to change something in their lives, for example, to stop behaving in a racist way.

For problems where political campaigning, voting or lifestyle change isn't the best way to solve them, the case for becoming a mass movement is much weaker.

Some causes we care about look one way, some the other way.

Even if we grant the conclusions of this post as a premise, you and Gleb haven't necessarily reached the point where you disagree with one another yet. Supposing that the primary purpose of the EA movement is to foster the very most effective EAs possible:

  • it could be that the very most effective EAs will not turn out to be the hardcore EAs, and most of the hardcore EAs are just spinning their wheels

  • it could be that Gleb's post is a valuable indicator that the "EA talent pipeline" is leaky and we're doing a bad job of inspiring softcore EAs to become hardcore EAs (perhaps we could e.g. emphasize self-sacrifice less and emphasize specialization of labor more, with everyone getting a distinct purpose & area of focus--what do successful groups do?)

  • if one assumes that moral behavior has its evolutionary origin in sending a signal to others about how virtuous a person you are, it's possible that any given population of "softcore" EAs can only support a population of "hardcore" EAs of a certain size before signaling opportunities are used up (the 1% of Clearview High School's students who are best at math identify as "the school math nerds"--if you take them away and put them in a magnet school, the top 1% of the remaining students will take up the "school math nerd" identity--you can see this phenomenon at work when a bright student who cruises through high school goes to an elite university and realizes they aren't that bright by elite university standards and they have to reform their identity)

Overall, the idea that improving the experience of "softcore" EAs will significantly trade off against efforts to foster "hardcore" EAs hasn't yet been supported. (Personally, the best argument I can think of would probably be something like: the "softcore" EAs will be more superficial in their judgements of which cause areas to pursue and will overwhelm the more carefully thought through judgement of the "hardcore" EAs--not sure how much to worry about this.)

It's also worth noting that some of the world's "most effective people" (e.g. Angela Merkel, or any successful politician or activist really) got that way by becoming popular with the masses, to the point where it's not exactly obvious whether the masses elected Angela Merkel as their representative or Angela Merkel inspired the masses to elect her. See https://en.wikipedia.org/wiki/Great_Man_theory#Criticism

Nice post, and really quick response to my post - good to be read together!

Now, I quite agree that the effectiveness of people is quite diverse. However, those people who are potentially the most effective - Hillary Clinton, Angela Merkel, etc. - would take far more resources to recruit than a typical person would. So the return on investment, accounting for the cost of recruitment, may well be quite a bit better for ordinary people than for elites.

Moreover, our capacity to recruit elites is limited by connections, while it's much easier for any EA to spread EA-themed effective giving to broad audiences, or Effective Altruism itself to people who they judge to be likely value-aligned.

Furthermore, the image of an elitist movement is very likely to scare off people like Hillary Clinton and Angela Merkel. It would be close to political suicide for them to engage with something perceived as above the masses.

Finally, there's a reason that mass consumer spending drives the global economy, not elite spending. Sure, when Warren Buffett buys a million-dollar yacht, that's great for the economy. But when a thousand people each buy a $20K car, that's 20 times better.

Furthermore, the image of an elitist movement is very likely to scare off people like Hillary Clinton and Angela Merkel. It would be close to political suicide for them to engage with something perceived as above the masses.

Perhaps, but they already take advice from groups of bankers and Christians, solicit advice from think tanks, and so on. I think it'd be more accurate to say that politicians generally avoid affiliating with grassroots activist groups outside of their party's narrow ideology, and instead mostly take advice from elite individuals or (occasionally) focused lobby groups.

Now, I quite agree that the effectiveness of people is quite diverse. However, those people who are potentially the most effective - Hillary Clinton, Angela Merkel, etc. - would take far more resources to recruit than a typical person would. So the return on investment, accounting for the cost of recruitment, may well be quite a bit better for ordinary people than for elites.

Given that the EA movement has divided its limited resources between personal and collective outreach, I'd imagine the best way to bring this debate down to the realm of facts would be to look at the return on each of these kinds of outreach so far. Consider: GWWC has attracted $0.6b of pledges. Dustin Moskovitz and Cari Tuna have together pledged over $3b, or half their net worth. In 2015, GiveWell raised a record $2.5m from <$5k donors. In contrast, Elon Musk donated $10 million as a lump sum for grants on impactful AI research. And yet the amount of time dedicated to recruiting these people is small compared to the amount of broad-based outreach done, even accepting that private outreach can piggy-back somewhat on public branding and outreach efforts.
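A minimal sketch of that comparison, using only the figures just cited (note that pledged and donated amounts are not strictly commensurable, so the ratios are illustrative at best):

```python
# Figures cited above; pledged vs donated sums are not strictly comparable.
figures_usd = {
    "GWWC pledges to date":         600_000_000,   # ~$0.6B pledged
    "Moskovitz & Tuna pledge":    3_000_000_000,   # >$3B pledged
    "GiveWell small donors, 2015":    2_500_000,   # ~$2.5M from <$5k donors
    "Musk AI research grants":       10_000_000,   # $10M lump sum
}

baseline = figures_usd["GWWC pledges to date"]
for name, usd in sorted(figures_usd.items(), key=lambda kv: -kv[1]):
    print(f"{name:30s} ${usd:>13,}  ({usd / baseline:.3f}x GWWC pledges)")
```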

An advocate of public outreach can argue that the recruitment of the likes of Moskovitz required an enormous amount of political and social capital from other high-profile scientists or business leaders. But this would just concede the point being argued in the first place - that recruiting such leaders is both a critical and achievable project (since of course it has already been achieved).

On an admittedly cursory look at the evidence, the angle that some influential people are much more cost-effective to recruit seems supported.

Reading this discussion a month or two ago prompted me, a couple of times, to consider whether there were very wealthy people I know and could talk to about EA (not really) or people I'm connected with in some way (quite certainly). Some weeks later, one of those people - let's say a 3rd- or 4th-degree connection, but with a publicly known penchant for meritocracy and a preference for intelligent discourse on all things - made a plan to give away a huge sum of wealth (arguably too big for GiveWell's existing processes). Because of this discussion I prioritized attempting some modest action, drafted an email with former, related colleagues and friends, and sent it yesterday. The goal was just to nudge her in the right direction. I got a reply back about 10 hours later - I'm too excited to read it right now, but wanted to share the seeming success, at least as one step towards making a better impact. Insofar as there is any impact - which is surely still highly uncertain - it's in part due to this thread. http://youtu.be/Wcz_kDCBTBk

Thanks, Ryan - numbers are helpful. I think, though, that the value of the collective outreach is considerably larger than the value of the GWWC pledges, via various indirect effects.

An advocate of public outreach can argue that the recruitment of the likes of Moskovitz required an enormous amount of political and social capital from other high-profile scientists or business leaders. But this would just concede the point being argued in the first place - that recruiting such leaders is both a critical and achievable project (since of course it has already been achieved).

Does it really concede the point? The question is how valuable "collective outreach" to broader groups is relative to "personal outreach" to rich individuals, and how much of these two kinds of outreach we should do. If collective outreach indirectly makes personal outreach more effective, that would seem to be an argument for putting more resources into collective outreach than we otherwise would, ceteris paribus.

The original question wasn't just about the rich vs non-rich, but whether to focus on elites. The high-profile scientists and business leaders surely count as elite, even if they're not hyper-wealthy.

Perhaps, but they already take advice from groups of bankers and Christians, solicit advice from think tanks, and so on. I think it'd be more accurate to say that politicians generally avoid affiliating with grassroots activist groups outside of their party's narrow ideology, and instead mostly take advice from elite individuals or (occasionally) focused lobby groups.

What I gathered the OP to be saying here is that Hillary Clinton and Angela Merkel would not merely take advice but publicly identify with the movement, in the way that Dustin Moskovitz did. Clinton and Merkel, it seems to me, do identify with various activist groups, for instance by attending their gatherings, etc., and through their presence "bless" the movement.

Let's take EA Global as an example. In one world, the EA movement is broadly perceived as a movement of elitists dedicated to advancing human and other species' flourishing in the most effective ways. In another world, the EA movement is broadly perceived as a broad movement dedicated to advancing human and other species' flourishing in the most effective ways. In which scenario is Clinton more likely to come to EA Global, everything else being equal? I posit that the second scenario is more likely to advance Clinton's political career, the first scenario would harm her political career, and the same goes for any other politician of her stature.

There's a fine line here between being perceived as a movement of elites and a movement of elitists. I think the first would generally be seen as positive and more likely to bring people in, whereas the second is generally negative.

I don't think it's even that fine a line. Don't exclude people actively. If you want to talk to your rich friends about EA first, that makes sense, but there should be virtually no reason to keep someone out.

(If it does make sense to exclude people actively it might make sense to use a vehicle that society has decided is acceptable for this--for example, it's considered relatively acceptable for a university to reject a student applying to it, a business or organization to reject someone applying for a job, etc.)

Yes, this is a good way of putting it :-)


Good post, I have nothing else to add.

Thank you - I revised my statement.

Good feedback. I changed the intro.

Side thought:

If we simplify our language it might help. I think I read that second grade reading level is ideal for business communication. It's what the New York Times is at, too, right? And this forum is some post-grad level stuff, sometimes!

Y'all are way smart. And that's who we should have figuring out complicated problems, for sure. But, like, even the words "Effective Altruism" just have so many syllables it blows my mind!

:D

The New York Times has a 10th grade reading level, as does the New Yorker. http://www.impact-information.com/impactinfo/newsletter/plwork15.htm

I was chatting with Julia Wise about this post. It seems plausible that which types of people we prioritize recruiting isn't such a black-and-white issue. For instance, it seems likely that EA can better take advantage of network effects with some mass-movement-style tactics.

That said, it seems likely that there might be a lot of neglected low-hanging fruit in terms of outreach to people with extreme influence, talent or net worth.

I'm not claiming this is optimal, but I might be claiming that what I'm about to say is closer to optimal than anything else that 98% of EAs are actually doing.

There are a couple thousand billionaires on the planet. There are also about as many EAs.

Let's say 500 billionaires are EA-friendly under some set of conditions. Then it may well be that the best use of the top 500 EAs is to study single billionaires in minute detail. Understand their values, where they come from, what makes them tick. Draw their CT-chart, find out their attachment style, personality disorder, and childhood nostalgia. Then approach them to help them, and while solving many of their problems faster than they can even see it, also show them the great chance they have of helping the world.

Ready set go: http://www.forbes.com/billionaires/list/

"Likewise, the difference in influence ability between Hilary Clinton and the average American is likely to be at least an order of magnitude difference"

HAHAHAHAHAHAHAHA I'll think of a more detailed response later, but I just have to say that's the most hilarious understatement I've heard in quite a while.

The fact of the matter is that people in the EA community are prejudiced against anyone different from them and look for any justification to keep them out. Since there are no genuine justifications, it generally takes the form of instilling fear of the unknown into others, no different from any other type of bigotry: “But if we let blacks in this school, who knows what will happen! We must keep them out because, well, you just never know what horrible things may happen!”

The whole premise of the debate is prejudiced. EAs being more accepting of people different from them (including others' levels of commitment to EA) is “lowering the bar”? That is clearly bigoted – viewing people different from you as necessarily inferior. Of course, prejudiced people measure inferiority by how much others differ from them. I'm sure many people who are different from EAs view EAs as inferior.

I want to make sure I understand all of your points. Do I understand that you argue against EA being a mass movement and call for having higher standards for who qualifies as an EA at the expense of being welcoming?