
This is my contribution to the December blogging carnival on "blind spots."

If I’d heard about effective altruism in different circumstances, I think I could have reacted quite negatively to the whole concept. Had a friend introduced me to the idea with a slightly negative spin - saying it was cold or overly demanding, for instance - I can totally imagine myself forming a negative impression. I’d be surprised if I were the only person for whom this is true.

Perhaps we should be thinking more about how the ideas of effective altruism might be - and are being - perceived by different people, to ensure that people like me don’t end up being put off. It’s not that we don’t spend any time thinking about this - there’s certainly a fair amount of talk about how to “pitch” or “frame” effective altruism and giving. But most of this discussion is internal - just a bunch of people who are already convinced by EA talking about how it might be perceived. However objective we try to be, there’s obviously a massive selection bias here.

The opinions we don’t hear

My friend Uri Bram recently wrote a great little article for Quartz called “The most important person at your company doesn’t work for you.” He makes the following point: the people who stay working at your company are, obviously, a very non-random sample of all the people who could possibly work for you - they’re likely to be those who feel most positively about the company. This means that if you only seek feedback from your current employees, you’ll likely get the impression that things are going pretty much fine. It’s the people who left the company, or who never applied in the first place, who are most likely to have useful information about the company’s shortcomings.

I think the same point applies to the EA movement. If we want to really get an accurate picture of the shortcomings and possible failure modes of the movement, we need to do more than talk about it ourselves (although this is certainly useful). We also need to actively seek out feedback from people who aren’t already convinced by EA, people who have a negative impression of EA, or perhaps people who were initially interested but were put off by something, or just didn’t get more involved. I can think of plenty of people like this myself, and yet I’m not spending much time asking for their opinions and considering them as valuable, relevant feedback.

Maybe you’re already doing this much more than me - I’m sure some people are, and that’s great. But I also think it’s not something we’ve prioritised or talked much about explicitly as a group. This is surprising, in a way - seeking negative feedback is such an obviously useful thing. So why aren’t we doing this more?

Valuing feedback isn’t enough

One possible explanation is just that seeking criticism is a psychologically difficult thing to do. I think this is definitely at least part of what’s going on. Most effective altruists I know really value feedback and criticism - much more than the average person - and are much better at seeking feedback than most as a result. But we’re all still human, and so for most of us, having our beliefs challenged still feels unpleasant sometimes. It feels especially unpleasant when it comes to issues that are close to your identity or social group - which is certainly the case for many with effective altruism.

Another, related problem is that it’s easy to immediately write off people who react negatively to EA as “just not like us.” This isn’t an explicit thing, but it can happen implicitly - I’ve noticed myself thinking something like this. Someone criticises some element of effective altruism and I’ll find myself writing their comment off with a thought like “they just don’t really get it”, or “they’re biased because x”, or “they’re not really the kind of person we’d expect to be interested.” I’ve had some fairly heated discussions with my parents about the GWWC pledge, for example - they don’t particularly like the idea of me donating a substantial proportion of my earnings right now for various reasons. When we initially had these conversations, I mostly wrote off their comments as “them just not getting it” at best, and “them being crazy unreasonable” at worst. It’s only recently that I’ve started to see these conversations as a useful piece of feedback - both a different perspective on how I should prioritise my spending, and on how certain ideas might be off-putting to others. This doesn’t mean I have to agree with them, but I do now see that their perspective is useful for me to understand.

Rejecting something someone else says with “they don’t get it” or “they’re being unreasonable” is essentially a defensive reaction. What this misses is that every piece of feedback - no matter how it’s framed or where it comes from - contains useful information.

Finally, actively seeking out feedback from others - especially those who aren’t immediately within our social circle - can just be effortful. Even if you know that seeking feedback is a good idea and you’re not actively avoiding it, it takes time and effort to actually go and ask questions, think about different perspectives, and to extract useful information from it. These conversations can also be awkward or lead to conflict. A big part of the reason I don’t always talk to my family and friends about effective altruism is that I don’t want to seem like I’m preaching, or get into a heated discussion.

How can we obtain more varied perspectives on effective altruism?

So what can we do, practically speaking, to reduce this selection effect and get more useful information about where the EA movement might be going wrong? A few ideas:

  1. See casual conversations with friends and family as an opportunity to learn about their perspective. If someone you’re talking to isn’t immediately taken with an idea you’re telling them about, it’s natural to think you can either switch topic or try harder to persuade them. But there’s a third option: you could try to better understand their perspective on what you’re saying, and why it might not appeal to them. You might not agree with their perspective, but that doesn’t mean it’s not useful. It’s still their reaction, and others might have a similar one - and it’s valuable for the effective altruism movement to see where these perspectives are coming from.
  2. Make more effort to understand the perspectives of those who are critical of effective altruism - people who write critical pieces online or argue in Facebook comments, for example. It’s so natural to get defensive in these situations, but it’s also so valuable to take a step back and ask: what is making them think or feel this way? Why does what I’m saying seem wrong or triggering to them? How might their beliefs and values differ from mine to make this the case?
  3. Actively seek out feedback from people who seem to have a negative or not-entirely-positive view of effective altruism. This could include a range of people: friends and family, those who have spoken out publicly in criticism, or perhaps people who we might expect to be aligned with effective altruism but don’t appear that interested - there are a number of academics and public figures who fit this description, and it could be really valuable to talk to them.
  4. Run online studies and surveys to get more data on how people react to different framings of effective altruism. As I said at the beginning of this post, the way that messages are framed can make a massive difference to how people react to them. I think there’s still a lot that we could learn about how people respond to different ways of talking about effective altruism, and the kinds of people who find the ideas most appealing. Running studies online is one way we could get a lot of information on this at relatively low cost.

I'd be interested in what others think - whether we should be focusing on getting more diverse perspectives, and, if so, what the best strategies for doing that are.
Comments

I've been talking a lot to an EA outsider, and she offers the following opinions (which I'm expressing in my own words; she hopes to flesh this out into a blog post at some point).

1) EA is arrogant

  • The name "effective altruism"
  • The name "effective altruist"
  • The attitude towards most charities and their supporters
  • The attitude of the Friendly AI people

2) EA is individualistic. It values individual wellbeing, not (for their own sake):

  • Art and culture
  • The environment
  • Communities

3) EA is top-down

  • Orgs like GiveWell call the shots
  • Charities aren't based in the countries they're trying to help
  • Donors are armchair-based, don't visit the communities they're trying to help

4) EA promotes incremental change, not systemic change

  • Charity rather than activism
  • Part of the capitalist system; money-focussed

5) EA is somewhat one-size-fits-all

  • Donors have particular causes that are important to them
  • Art patrons favour particular artists; they aren't trying to "maximize the total amount of art in existence"

6) Many consequences are hidden

  • If you're a teacher, how do you know what effect you ultimately have on your students?

7) How do you assess the actual needs of the communities you're trying to help?

  • Have you asked them?

8) The whole Friendly AI thing is just creepy.

  • If it is real, it means a tiny elite is making huge decisions for everyone without consulting people about whether that's what they want

My suggestion: if you can't think of an existing resource that answers one of these criticisms, then write it yourself.

Has anyone got good answers to 1), 8), and something that seems to be an element of both 3) and 7) - something like: "there seems to be a real lack of actual experience living with or befriending your beneficiaries - surely that would help you learn what is important, and check the validity of your data"?

I'm not an effective altruist and don't think I ever will be one. I'm here only out of curiosity and intellectual entertainment. Perhaps this allows me to give you an honest "outside" perspective. My main reasons for not donating 10% of my income or making other similar commitments:

  1. I am instinctively too egoistic and I don't like the cognitive dissonance from being a little altruistic, but not as much as I reasonably could be. I feel best when I "play for my own side" in life, am productive to get only what I want and don't think about the suffering of others. I feel better when I don't care, and I prefer feeling better over caring. They say giving makes you happy, but I find it brings me no equivalent pleasure.

  2. Society and the law already demand a great deal of altruism and (what they think of as) morality from me. Some of it comes in the form of taxes, some in the form of restrictions on what I can do, some in the form of implicit status attacks. Of course, I get a lot in return, but subjectively it doesn't feel balanced. Perhaps if I were richer or had higher life satisfaction, I might be more generous in addition to what is already demanded.

  3. In many morally relevant domains, there is a discrepancy between what I feel is important and what people in general feel is important. In addition, I have given up on convincing people through value talk. Most people will never value what I value, and vice versa. There are no cost-effective ways to change these discrepancies, and even though EA is a multi-domain endeavor, it is ultimately about empowering humanity to fulfill its preferences, half of which are more or less opposed to mine.

  4. Psychologically, uncertainty cripples my motivation. I am not an "expected utility maximizer". But in EA, certainty of impact and scope of impact are somewhat negatively correlated. And where positive effects are really certain, I expect the most cost-effective ground will eventually be covered by the EA movement without me, and I'd rather other people pay than me (donor's dilemma).

These are the core reasons why I have decided against EA in my personal life. It does not preclude small donations or ethical consumption, both of which I make, but it makes me recoil from real commitments, unless I have an unexpected windfall or something.

Thanks for being so honest, Nicholas, really useful to hear your perspective - especially as it sounds like you've been paying a fair amount of attention to what's going on in the EA movement. I can empathise with your point 4 quite a bit and I think a fair number of others do too - it's so hard to be motivated if you're unsure whether you're actually making a difference, and it's so hard to be sure you're making a difference, especially when we start questioning everything. For me it doesn't stop me wanting to try, but it does affect my motivation sometimes, and I'd love to know of better ways to deal with uncertainty.

This is a thoughtful post, so thanks for making it. On the other hand, from an EA perspective I hope we don't waste too much time worrying about people like you.

Put simply, you are weird. (That is not an insult! EA is weird, and the founding communities of philosophers, students and AI/LW people who primarily make us up are even weirder.)

I suspect that those of us who want to expand and grow the reach of EA should worry about how to bring our ideas to people who cannot use "expected utility maximiser" in a sentence and would never dream of admitting that they are "too egoistic" to prefer helping others over their own feelings. There is much more potential in talking to people who have never thought hard about an issue before than in talking to those who have thought hard about it and decided to go another way.

The language is certainly atypical, but I don't actually think these reasons are weird at all; they're what I would consider pretty common/core objections to EA and I've heard versions of all of these before. I think they're worth answering, or at least having answers available.

I wouldn't write it off. These reasons may apply to a lot of people, even if they wouldn't express them in those words. I found point 2 particularly interesting.

Nicholas, can you please elaborate on point 3 - your thoughts and what you care about might genuinely be things that I +others collected here should be caring about or taking into account. I'm interested. Please just email me at tomstocker88@gmail.com if they're so different from humanity's preferences that they're problematic to share publicly. Thank you for posting this, even if you are egoistic (I don't know many people that aren't to some extent), you have courage / honesty, which is awesome.

Some of these negative beliefs I hold myself; others I don't hold, but they appeal to the intellectual identities I circulate among, and thus at least register as plausible negative beliefs:

(i) EA is elitist: in being largely constituted, particularly in staff, by well-to-do Oxbridge or Ivy League graduates; in being premised on, and thereby implicitly valuing people according to their capacity to, earn to give; and in being conducted, within the movement, at a relatively technical level of discourse. There's also a potential anti-egalitarianism in the respect in which charitable giving is normally evaluated, namely, as a percentage of net income rather than, say, relative to a generalised baseline of minimally or moderately decent living.

(ii) EA is highly individualistic, rendering everything instrumental to the aggregate utility one can discharge through impersonal donations. Structural political and social change are mostly irrelevant, and insofar as they are relevant, it is typically as new sites for impersonal donations.

(iii) EA is overwhelmingly populated by utilitarians and utilitarian thinking, despite external pretensions of being an ecumenical movement unified by concern for charitable giving. This is self-limiting as a movement, in that it discourages those who do not subscribe to what is, for most people, a highly controversial ethical theory.

(iv) From my experience, most people simply don't accept - intellectually and/or psychologically - the demanding moralism implicit in charitably donating 10% of one's income; they don't see any impersonal and objective reason for doing so, and thus are not moved (the second most common response, in my experience, is for them to rationalise that charities are uniformly money-grubbing and ineffective).

Obviously these are not unrelated.

(i) really gets me: I think these ideas would really take off among different groups of people if presented differently. (ii) I think the remedy to this is just to estimate the benefits of societal change. The structural political thing is different and quite an interesting question for an EA - but the lack of analysis of power structures is certainly something to address (This isn't Raymond Geuss is it??) (iv) - don't see why this is important - it's a constraint or bias rather than a morally salient criticism?


At our Dutch EA meetup we got some feedback from a complete newcomer which we found quite useful. To summarize:

  • EA lacks concrete actions that a newcomer can take to do something that is in some sense effective - in particular, to make a tangible impact. Everything seems to pay off (if at all) in the very long run. This is not appealing to a broad audience.

  • Because EA lacks concrete actions and a concrete common purpose, the message remains unclear. Only a certain type of person can handle this ambiguity long-term. We've got the 'Why' and the 'How', but the 'What' remains blurry (she was not familiar with Will's post about EA, which uses exactly these terms to describe EA btw).

Yeah we got similar feedback in Melbourne and agreed that it was not clear that we had great short-term things for people to do. There's donating, for one, and in Oxford, historically, there's been scope for volunteering, but apart from that it is trickier. There's .impact, and there's helping with arranging events and talks, but it's not clear that those are the right kind of activity. Obviously, protests aren't either. But some kinds of non-costly in-group activities that have at least some impact would be great.

Great post, Jess! Here's another thing people can do to expose themselves to more perspectives:

When I ran HCEA, we pretty frequently had group dinners with other student groups. Mostly they would ask us questions about effective altruism and we would ask them questions about what they thought about it. I think these dinners were great for exposing HCEA to a broad range of perspectives, and they weren't too much trouble to set up.

(They were also great for recruiting and publicity--we were amused to note this year that during the annual atheist-Christian debate, one of the few things the two sides could agree on was that HCEA was awesome!)

If you're a non-student EA, you can do a similar thing by having your local EA group meet up with other local organizations (e.g. religious organizations, social justice groups, global poverty groups, etc.--though finding them might be a bit more challenging). If you want to get super advanced you could even offer to give a talk on effective altruism to the group, although that requires more prep and you need to be careful not to proselytize.

Do you remember any of the questions/reactions you got from the non-EA students at those dinners?

The dinner I learned the most from was the one we had with Harvard College Faith and Action, a Christian student group. I could identify three main differences in perspective (there are likely a bunch of others too):

(1) Many of the Christian students subscribed to non-consequentialist ethical systems. Their goal was to "act as Jesus would." While this would have consequences they considered good, they were maximizing for acting like he would, not for the outcomes acting like him would cause.

(2) A corollary of "act as Jesus would" is "help thy neighbor"; many of the students we talked to felt a need to prioritize local aid, or at least do it alongside non-local aid.

(3) When cornered into least-convenient-possible-world thought experiments, most of the Christian students said that it was better to "save" one life (in the sense of salvation, i.e. ensuring one more soul went to heaven) than to save any number of physical lives. To be fair, they were quite resistant to this question, mostly saying that they supported the idea of local or international aid through Christian organizations who would also encourage the spread of Christian principles.

That's interesting to hear about their beliefs for #2. By contrast, the leaders at my church specifically say that everyone is our neighbor.

Can you argue on Christian grounds? For example:

1) Jesus was very strong on giving to the poor - not to convince people of God, but because they were poor and in need. Ananias was struck down (the only person in the New Testament, I think) because he failed to tithe to the poor. People came to Jesus asking to follow him (meaning a high probability of salvation) - he told them to give ALL of their money away to the poor first.

2) Loving thy neighbour is all about loving people you aren't familiar with or are even afraid of.

3) You just kind of have to accept that, if they believe in eternal damnation, each person saved is worth infinite QALYs.

Thanks, Ben! This is a great idea, especially for student groups.

"Maybe you’re already doing this much more than me - I’m sure some people are, and that’s great. But I also think it’s not something we’ve prioritised or talked much about explicitly as a group. This is surprising, in a way - seeking negative feedback is such an obviously useful thing. So why aren’t we doing this more?"

First off, I like this post and up-voted it, because the concrete suggestions on how to go about this are good and I would like to see them done.

But my response to this post up to this point was to be confused because I think talking and listening to outsiders is already prioritised very highly. To take an obvious example, at London EA meetups I try to spend the bulk of my time talking to people who I don't recognise and asking what their impressions are. Within EA, one of the main things I do is try to feed back those outside impressions and use that to shape how we approach things. Am I really one of few people deliberately doing this?

I guess I might fall into the 'maybe you're already doing this more than me' category and then be committing a typical mind fallacy. With that said, I do see at least some evidence that a lot of other people are thinking about this hard and implementing at least your first three suggestions:

  1. In much of the internal discussion about how to present EA that you reference, people imply or explicitly state that their basis for their opinion is exactly such conversations with outside friends or family.

  2. There have been multiple long Facebook threads seemingly doing exactly this; it seems quite popular and most of the commentary has been thoughtful. In fact, I'm not aware of a public critique of EA that hasn't been widely shared and reasonably fairly discussed within the community. I think this is good! But beyond perhaps drawing together common strands of criticism, I'm not sure what more we actually could do here.

  3. Ben Kuhn stands out here, with both his description of HCEA meetings with other groups below and blog posts like http://www.benkuhn.net/etg-objections. There's also some overlap with 2, which I think is already happening a lot. Anecdotally I also think CEA is trying to do this, especially on the academics/public figures front.

  4. Ok, AFAIK this is new. No idea how to implement it sensibly, but quite probably worth someone's time. I'm curious as to whether CEA/EAO have considered anything like this, and if so what they make of it.

I don't have the impression that most people talk and listen to outsiders as much as I do. I think the Bay Area is a lot worse about this than Boston (for instance, AFAIK the Bay Area has no introductory EA meetups), so maybe it's an issue with how numerous local EAs are? In Boston we were forced to talk and listen to outsiders a lot because there weren't enough insiders, but in the Bay Area you can easily forge an entire social circle out of people in the EA movement and never interact with anyone else.

This might explain why Jess has observations closer to mine than yours, since hers are based partly on Oxford, which has a similar critical mass to the Bay Area.

at London EA meetups I try to spend the bulk of my time talking to people who I don't recognise and asking what their impressions are. Within EA, one of the main things I do is try to feed back those outside impressions and use that to shape how we approach things. Am I really one of few people deliberately doing this?

Kudos for organizing meetups! Not too many people do that, actually, so I think you might be rarer than you think.

Also, do you ever get a chance to talk to people who don't go to the meetup groups?

Sam Hilton does most of the organising, I mostly just turn up :)

Less so. I have my extended network of friends and acquaintances, obviously, but I'm cautious about bothering people where it doesn't come up naturally or they haven't indicated interest. One of the points of the donation match Denise and I are currently running is to flush out people who might have interest.

One of the points of the donation match Denise and I are currently running is to flush out people who might have interest.

That's a smart idea! I hadn't thought of that.

What are your plans on following up with them?

Mostly I just want to talk to those in my extended (non-EA) network who donate, thank them for donating, and work out where they are w.r.t. EA ideas. I don't currently have a more detailed plan than that.

What successful responses to these objections have people used in in-person discussions?

Reading the other comments below has been eye-opening exactly because I wouldn't have anticipated many of the objections to EA. Therefore, I'm not confident enough in my mental model of a non- or anti-EA person to know what responses would convince them, under their value system: the responses on, e.g., Ben Kuhn's page and at The Life You Can Save are very convincing to me, and there are plenty of other responses that would score points in a debate, but I'm not sure that's enough. What's worked empirically to change minds?

If people don't bring a basic "caring for other people" vibe, you're either down to meta-ethics, which I haven't had any luck persuading people to care about, or their own moral or psychological framework - which entails a level of familiarity?

I would like to note (although I don't quite know what to do with this information) that the proposed method of gathering feedback leaves out at least 3 billion people who don't have internet access. In practice, it's probably also limited to gathering information from countries/in languages with at least some EA presence already (and mainly English-speaking). Now, from an "optimize spread of EA ideas" perspective, it might be reasonable to focus on wealthier countries to reach people with more leverage (i.e. expected earnings), but there are reasons to pay attention to this:

1) It could be very useful to have a population of EAs with background/lived experience in developing countries, to aid in idea generation for new international development programs.

2) EA might end up not spreading very much to people living in countries like China and India, which will become more economically important in the future.

3) We might end up making a mistake on some philosophically important issue due to biases in the backgrounds of most people in the EA movement. (I don't have a good example of what this looks like, but there might be, say, system 1 factors arising from the culture where you grow up that influence your position on issues of population ethics or something.)

I also don't know how to go about this on the object level, or whether it's the best place for marginal EA investment right now. (I also think that EA orgs involved in international development will have access to more of these diverse perspectives, but my point is that those perspectives aren't present in the meta-level discussions.)

Object-level suggestion for collecting diverse opinions (for a specific person to look through, to make it easier to see trends): have something like a Google form where people can report the characteristics of an attempt to bring up EA ideas with a person or audience, along with comments on how the ideas were received. (This thread is a Schelling point now, but won't remain so in the future.)

Good post. Maybe it'd be valuable to start discussions on other online forums and try to gather external perspectives on EA, instead of just discussing what hypothetical external perspectives might look like here amongst ourselves :) For example, most of Less Wrong knows what EA is, but over half don't identify as EA. This thread has lots of non-EA LWers explaining themselves. Maybe we can come up with a list of questions and try to administer an informal survey to further our understanding?

(Yes, we have many people from the LW demographic already... but smart people concerned with cognitive biases seem like good EA recruits, since they're less likely to see their good intentions go awry. So I think more people from that demographic are still highly valuable on the margin. The EA movement is currently full of smart, thoughtful people, but it's growing fast, and there are only so many people who are smart and thoughtful... what will happen when less intelligent, less thoughtful people dominate?)

There are also intelligent, thoughtful people outside of LW.

Also, while I agree that it's important to keep the tone among EAs thoughtful, I would be extremely sad if we discouraged particular people or groups from being interested in EA because they aren't "intelligent enough."

I would be extremely sad if we discouraged particular people or groups from being interested in EA because they aren't "intelligent enough."

This is also how the movement gets an elitist reputation that seems quite harmful.

Fair points.