Cognitive and emotional barriers to EA's growth

This morning I gave a colloquium to the Psychology Department here at the University of New Mexico. Most of the 30+ audience members had never heard of EA, although a few had a vague idea about it.

I analyzed 10 cognitive and emotional barriers that people face in accepting EA approaches to moral activism, from confirmation bias and speciesism to scope-insensitivity and Theory of Mind failures in understanding likely AGI systems.

I also made a pitch for more psychology grad students and faculty to get involved in EA, to share our expertise on human nature, statistics, research design, public outreach, program evaluation, mental health welfare issues, etc. 

The powerpoint is here if anyone's interested: https://geoffrey-miller-y5jr.squarespace.com/s/EA-talk-march09-public-shorter-tcdh.pptx

I've proposed to give a similar but shorter talk at the Human Behavior and Evolution Society (HBES) conference this June in Amsterdam, which is the main evolutionary psychology research meeting -- so I'd appreciate any feedback on this version.

Comments (16)

Comment author: adamaero (EA Profile) 10 March 2018 04:12:50PM 5 points

Thanks. This will be useful for a future presentation, although I'm going to modify challenges 3-6. Using the word "utilitarian" seems... limiting. EA has utilitarian/consequentialist underpinnings, but not a full-blown subscription to only that moral system (i.e., it's not exclusive). But I'm sure you knew that already. (See MacAskill's comment on 'Effective Altruism' as utilitarian equivocation.)

Off the top of my head, I'm thinking something more along the lines of maximizing impact and the empathy-altruism hypothesis, relating to meaning well (benevolence) versus actually doing good (beneficence). (Additionally, I'm going to add an outline =)

Also, regarding the slide that says Effective Altruism as a movement was founded in 2011: I'm guessing that refers to 80k Hours, because GWWC has been around since 2009, and the main idea has been around since at least 1972.

Comment author: hollymorgan 15 March 2018 11:55:13PM 2 points

When people ask when EA "started" I'm never sure what to say. But I imagine Geoffrey is referring to when we chose the name with "2011" (see http://effective-altruism.com/ea/5w/the_history_of_the_term_effective_altruism/), plus a quick nod to the longer history in Singer's work with "+ Peter Singer".

Comment author: DustinWehr 12 March 2018 06:13:32PM 1 point

Good points. I don't think "(benevolence)"/"(beneficence)" adds anything, either. Beneficence is effectively EA lingo, and you're not going to draw people in by teaching them lingo. Save that for a little further into onboarding.

Comment author: adamaero (EA Profile) 12 March 2018 07:06:59PM 0 points

I'm glad you said so. From now on I'll use "well-meaning"/"good intentions" and "evidence-based good" instead.

Comment author: Jeffhe (EA Profile) 11 March 2018 10:45:52PM 4 points

On slide 10 (EA challenge 1), I think you meant “that” rather than “than”.

Good luck! Also, I'm new to this forum and would appreciate it if I could get some likes so that I could make a post! Thanks.

Comment author: [deleted] 11 March 2018 02:16:39PM 3 points

In slide 8 you cite "‘The Giving Pledge’ raised over $350bn from donors such as Bill Gates, Mark Zuckerberg, & Elon Musk" as an EA impact. As far as I know, EA had little to do with the giving pledge.

Comment author: selfactualizer 09 March 2018 07:04:30PM 3 points

Great content. I just pored through looking for feedback to give, but the content is really great. My only note: if this is going to be done as a presentation in June, I think it could be a lot more engaging with less written text on the slides.

Comment author: geoffreymiller (EA Profile) 09 March 2018 09:49:02PM 2 points

Yes, I always put too much text on slides the first few times I present on a new topic, and then gradually strip it away as I remember better what my points are. Thanks!

Comment author: Ervin 09 March 2018 10:49:33PM 0 points

Seconded, this is worth sharing more broadly via the facebook groups!

Comment author: Arepo 12 March 2018 01:35:31AM 2 points

Great stuff! A few quibbles:

  • It feels odd to specify an exact year EA (or any movement) was 'founded'. GiveWell (surprisingly not mentioned other than as a logo on slide 6) has been around since 2007; MIRI since 2000; FHI since 2005; Giving What We Can since 2009. Some or all of these (eg GWWC) didn't exactly have a clear founding date, though, instead becoming more like their modern organisations over years. One might not consider some of them more strictly 'EA orgs' than others - but that's kind of the point.

  • I'd be wary of including 'moral offsetting' as an EA idea. It's fairly controversial, and sounds like the sort of thing that could turn people off the other ideas.

  • Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree).

  • Slide 12 talks about suffering exclusively; without getting into whether happiness can counterweigh it, it seems like it could mention positive experiences as well

  • I'd be wary of criticising intuitive morality for not updating on moral uncertainty. The latter seems like a fringe idea that's received a lot of publicity in the EA community, but that's far from universally accepted even by eg utilitarians and EAs

  • On slide 18 it seems odd to have an 'other' category on the right, but omit it on the left with a tiny 'clothing' category. Presumably animals are used and killed in other contexts than those four, so why not just replace clothing with 'other' - which I think would make the graph clearer

  • I also find the colours on the same graph a bit too similar - my brain keeps telling me that 'farm' is the second biggest categorical recipient when I glance at it, for eg.

  • I haven't read the Marino paper and now want to, 'cause it looks like it might update me against this, but provisionally: it still seems quite defensible to believe that chickens experience substantially less total valence per individual than larger animals, esp mammals, even if it's becoming rapidly less defensible to believe that they don't experience something qualitatively similar to our own phenomenal experiences. [ETA] Having now read-skimmed it, I didn't update much on the quantitative issue (though it seems fairly clear chickens have some phenomenal experience, or at least there's no defensible reason to assume they don't)

  • Slide 20 'human' should be pluralised

  • Slide 22 'important' and 'unimportant' seem like loaded terms. I would replace with something more factual like (ideally a much less clunkily phrased) 'causes large magnitude of suffering', 'causes comparatively small magnitude of suffering'

  • I don't understand the phrase 'aestivatable future light-cone'. What's aestivation got to do with the scale of the future? (I know there are proposals to shepherd matter and energy to the later stages of the universe for more efficient computing, but that seems way beyond the scope of this presentation, and presumably not what you're getting at)

  • I would change 'the species would survive' on slide 25 to 'would probably survive', and maybe caveat it further, since the relevant question for expected utility is whether we could reach interstellar technology after being set back by a global catastrophe, not whether it would immediately kill us (cf eg https://www.openphilanthropy.org/blog/long-term-significance-reducing-global-catastrophic-risks) - similarly I'd be less emphatic on slide 27 about the comparative magnitude of climate change vs the other events as an 'X-risk', esp where X-risk is defined as here: https://nickbostrom.com/existential/risks.html

  • Where did the 10^35 number for future sentient lives come from for slide 26? These numbers seem to vary wildly among futurists, but that one actually seems quite small to me. Bostrom estimates 10^38 lost just for a century's delayed colonization. Getting more wildly speculative, Isaac Arthur, my favourite futurist, estimates a galaxy of Matrioshka brains could emulate 10^44 minds - it's slightly unclear, but I think he means running them at normal human subjective speed, which would give them about 10^12 times the length of a human life between now and the end of the stelliferous era. The number of galaxies in the Laniakea supercluster is approx 10^5, so that would be 10^61 total, which we can shade by a few orders of magnitude to account for inefficiencies etc and still end up with a vastly higher number than yours. And if Arthur's claims about farming Hawking radiation and gravitational energy in the post-stellar eras are remotely plausible, then the number of sentient beings in the Black Hole era would dwarf that number again! (ok, this maybe turned into an excuse to talk about my favourite v/podcast)
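  For what it's worth, the order-of-magnitude arithmetic above can be sketched in a few lines. All three inputs are the speculative figures named in the bullet (Arthur's Matrioshka-brain estimate, the subjective-lifetimes multiplier, and the Laniakea galaxy count), not established values:

```python
# Back-of-envelope sketch of the 10^61 estimate above, working in
# base-10 exponents so the arithmetic stays exact. Every input is a
# speculative assumption quoted from the comment, not a known quantity.

MINDS_PER_GALAXY_EXP = 44        # emulated minds in a galaxy of Matrioshka brains
LIFETIMES_PER_MIND_EXP = 12      # human-length subjective lives until the stelliferous era ends
GALAXIES_IN_LANIAKEA_EXP = 5     # approximate galaxy count in the Laniakea supercluster

# Multiplying powers of ten means adding exponents.
total_exp = MINDS_PER_GALAXY_EXP + LIFETIMES_PER_MIND_EXP + GALAXIES_IN_LANIAKEA_EXP
print(f"~10^{total_exp} total sentient lives")  # ~10^61
```

  Shading this "by a few orders of magnitude" for inefficiencies, as the bullet suggests, still leaves an estimate far above 10^35.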

  • Re slide 29, I think EA has long stopped being 'mostly moral philosophers & computer scientists' if it ever strictly was, although they're obviously (very) overrepresented. To what end do you note this, though? It maybe makes more sense in the talk, but in the context of the slide, it's not clear whether it's a boast of a great status quo or a call to arms of a need for change

  • I would say EA needs more money and talent - there are still tonnes of underfunded projects!

Comment author: Jeffhe (EA Profile) 12 March 2018 06:06:04AM 1 point

You write, "Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree)."

One thing I am sure about effective altruism is that it endorses helping the greater number, all other things being equal (by which I am here only concerned with the quality of pain being equal, for simplicity’s sake). So, for example, if $10 can be used to either save persons A and B each from some pain or C from a qualitatively identical pain, EA would say that it is morally better to save the two over the one.

Now, this in itself does not mean that effective altruism believes that it makes sense to

  1. sum together certain people’s pain and to compare said sum to the sum of other people’s pain in such a way as to be able to say that one sum of pain is in some sense greater/equal to/lesser than the other, and

  2. say that the morally best action is the one that results in the least sum of pain and the greatest sum of pleasure (which is more-or-less utilitarianism)

(Note that 2. assumes the intelligibility of 1.; see below)

The reason is that there are also non-aggregative ways to justify why it is better to save the greater number, at least when all other things are equal. For a survey of such ways, see "Saving Lives, Moral Theory, and the Claims of Individuals" (Otsuka, 2006). However, I'm not aware that effective altruism justifies why it's better to save the greater number, all else equal, via these non-aggregative ways. Likely, it is purposely silent on this issue. Ben Todd (in private correspondence) informed me that "effective altruism starts from the position that it's better to help the greater number, all else equal. Justifying that premise in the first place is in the realm of moral philosophy." If that's indeed the case, we might say that all effective altruism says is that the morally better course of action is the one that helps more people, everything else being equal (e.g. when the suffering to each person involved in the choice situation is qualitatively the same), and (presumably) also sometimes even when everything isn't equal (e.g. when the suffering to each person in the bigger group might be somewhat less painful than the suffering to each person in the smaller group).

Insofar as effective altruism isn't in the business of justification, perhaps moral theories shouldn't be mentioned at all in a presentation about effective altruism. But inevitably, people considering joining the movement are going to ask why it is better to save the greater number, all else equal (e.g. A and B instead of C), or even sometimes when all else isn't equal (e.g. one million people each from a relatively minor pain instead of one other person from a relatively greater pain). And I think effective altruists ask themselves that question too. The OP might well have, and thought that utilitarianism offers the natural justification: it is better to save A and B instead of C (and the million instead of the one) because doing so results in the least sum of pain. So utilitarianism clearly offers a justification (though one might question whether it is an adequate one). On the other hand, it is not clear to me at all how other moral theories propose to justify saving the greater number in these two kinds of choice situations. So it is not surprising that the OP has associated utilitarianism with effective altruism. I am sympathetic.

A bit more on utilitarianism: Roughly speaking, according to utilitarianism (or the principle of utility), among all the actions we can undertake at any given moment, the right action (i.e. the action we ought to take) is the one that results in the least sum of pain and the greatest sum of pleasure.

To figure out which action is the right action among a range of possible actions, we are to, for each possible action, add up all its resulting pleasures and pains. We are then to compare the resulting state of affairs corresponding to each action to see which resulting state of affairs contains the least sum of pain and greatest sum of pleasure. For example, suppose you can either save one million people each from a relatively minor pain or one other person from a relatively greater pain, but not both. Then you are to add up all the minor pains that would result from saving the single person, and then add up all the major pains (in this case, just 1) that would result from saving the million people, and then compare the two states of affairs to see which contains the least sum of pain.

From this we can clearly see that utilitarianism assumes that it makes sense to aggregate distinct people's pains and to compare these sums in such a way as to be able to say, for example, that the sum of pain involved in a million people's minor pains is greater (in some sense) than one other person’s major pain. Of course, many philosophers have seriously questioned the intelligibility of that.
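As a toy illustration of the aggregation step described above: the pain magnitudes below are arbitrary placeholder numbers, and whether summing pains across distinct persons is even intelligible is precisely what is in question, so this only shows the mechanics of the utilitarian calculation, not its justification.

```python
# Toy model of utilitarian aggregation across persons.
# Magnitudes are arbitrary placeholders; the intelligibility of
# interpersonal sums is exactly what the comment above questions.

minor_pain = 1.0        # assumed magnitude of one person's minor pain
major_pain = 1000.0     # assumed magnitude of one person's major pain
n_minor = 1_000_000     # a million people each facing the minor pain

# Option A: save the single person, so the million minor pains occur.
pain_if_save_one = n_minor * minor_pain        # 1,000,000 units

# Option B: save the million, so the one major pain occurs.
pain_if_save_million = 1 * major_pain          # 1,000 units

# The utilitarian verdict picks whichever action leaves the smaller
# summed pain in the resulting state of affairs.
if pain_if_save_million < pain_if_save_one:
    verdict = "save the million"
else:
    verdict = "save the one"
print(verdict)  # save the million
```

On these numbers the verdict follows mechanically; the philosophical dispute is over whether the two sums being compared are meaningful quantities at all.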

Comment author: cassidynelson 15 March 2018 02:43:43AM 1 point

Great work and I really enjoyed reading this presentation.

On slide 27, where did you get the estimate that "Human-caused X-risks are thousands of times more likely per year than natural X-risks"?

I agree with this generally, but was wondering if you have a source for the "thousands of times" figure.

Comment author: Risto_Uuk 12 March 2018 11:00:58AM 1 point

Do you offer any recommendations for communicating utilitarian ideas based on Everett's research or someone else's?

For example, in Everett's 2016 paper the following is said:

"When communicating that a consequentialist judgment was made with difficulty, negativity toward agents who made these judgments was reduced. And when a harmful action either did not blatantly violate implicit social contracts, or actually served to honor them, there was no preference for a deontologist over a consequentialist."

Comment author: DavidMoss (EA Profile) 12 March 2018 09:13:30PM 1 point

I imagine more or less anything which expresses conflictedness about taking the 'utilitarian' decision and/or expresses feeling the pull of the contrary deontological norm would fit the bill for what Everett is saying here. That said, I'm not convinced that Everett (2016) is really getting at reactions to "consequentialism" (see 1, 2).

I think that this paper by Uhlmann et al. does show that people judge negatively those who take utilitarian decisions, though, even when they judge that the utilitarian act was the right one to take. Expressing conflictedness about the utilitarian decision may therefore be a double-edged sword. I think it may well offset negative character evaluations of the person taking the utilitarian decision, but plausibly it may also reduce any credence people attach to the utilitarian act being the right one to take.

My collaborators and I did some work relevant to this, on the negative evaluation of people who make their donation decisions in a deliberative rather than explicitly empathic way. The most relevant of our experiments for this looked at the evaluation of people who both deliberated about the cost effectiveness of the donation and expressed empathy towards the recipient of the donation simultaneously. The empathy+deliberation condition was close to the empathy condition in moral evaluation (see figure 2 https://osf.io/d9t4n/) and closer to the deliberation condition in evaluation of reasonableness.

Comment author: purpleskates 10 March 2018 04:23:56PM 1 point

This is well done! Acknowledging and talking about what makes hyper-rationalism repulsive to many people - mostly very unfairly! - is constructive and interesting.

Maybe out of scope, but in the introduction section describing EA, I'd probably also include a slide or two of some of the more reasonable criticisms of typical EA beliefs and behaviors as well, and separate those from the list of 10 barriers of bias and irrational intuition.

Doing that would better set aside the question of the merits of the EA approach, and make it easier to focus on these other blockers to wider adoption. It also would make the presentation come off more even-handed rather than "here are the bad reasons people don't support what I support". That might get you more buy-in from the more skeptical members of the audience, along with inducing some questioning about how to improve EA from people who do find the answers intuitive.