This morning I gave a colloquium to my Psychology Department here at the University of New Mexico. Most of the 30+ audience members had never heard of EA, although a few had a vague idea about it.
I analyzed 10 cognitive and emotional barriers that people face in accepting EA approaches to moral activism, from confirmation bias and speciesism to scope-insensitivity and Theory of Mind failures in understanding likely AGI systems.
I also made a pitch for more psychology grad students and faculty to get involved in EA, to share our expertise on human nature, statistics, research design, public outreach, program evaluation, mental health welfare issues, etc.
The PowerPoint is here if anyone's interested: https://geoffrey-miller-y5jr.squarespace.com/s/EA-talk-march09-public-shorter-tcdh.pptx
I've proposed to give a similar but shorter talk at the Human Behavior and Evolution Society (HBES) conference this June in Amsterdam, which is the main evolutionary psychology research meeting -- so I'd appreciate any feedback on this version.
Great stuff! A few quibbles:
It feels odd to specify an exact year EA (or any movement) was 'founded'. GiveWell (surprisingly not mentioned other than a logo on slide 6) has been around since 2007; MIRI since 2000; FHI since 2005; Giving What We Can since 2009. Some or all of these (eg GWWC) didn't exactly have a clear founding date, though, instead becoming more like their modern organisations over several years. One might not consider some of them more strictly 'EA orgs' than others - but that's kind of the point.
I'd be wary of including 'moral offsetting' as an EA idea. It's fairly controversial, and sounds like the sort of thing that could turn people off the other ideas.
Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that includes an idea of aggregation is probably sufficient, which is probably all of them to some degree).
Slide 12 talks about suffering exclusively; without getting into whether happiness can counterweigh it, it seems like it could mention positive experiences as well.
I'd be wary of criticising intuitive morality for not updating on moral uncertainty. The latter seems like a fringe idea that's received a lot of publicity in the EA community, but that's far from universally accepted even by eg utilitarians and EAs.
On slide 18 it seems odd to have an 'other' category on the right, but omit it on the left with a tiny 'clothing' category. Presumably animals are used and killed in other contexts than those four, so why not just replace clothing with 'other' - which I think would make the graph clearer
I also find the colours on the same graph a bit too similar - my brain keeps telling me that 'farm' is the second-biggest categorical recipient when I glance at it, for example.
I haven't read the Marino paper and now want to, 'cause it looks like it might update me against this, but provisionally: it still seems quite defensible to believe that chickens experience substantially less total valence per individual than larger animals, esp mammals, even if it's becoming rapidly less defensible to believe that they don't experience something qualitatively similar to our own phenomenal experiences. [ETA] Having now read-skimmed it, I didn't update much on the quantitative issue (though it seems fairly clear chickens have some phenomenal experience, or at least there's no defensible reason to assume they don't)
On slide 20, 'human' should be pluralised.
On slide 22, 'important' and 'unimportant' seem like loaded terms. I would replace them with something more factual (and ideally much less clunkily phrased) like 'causes a large magnitude of suffering' / 'causes a comparatively small magnitude of suffering'.
I don't understand the phrase 'aestivatable future light-cone'. What's aestivation got to do with the scale of the future? (I know there are proposals to shepherd matter and energy to the later stages of the universe for more efficient computing, but that seems way beyond the scope of this presentation, and presumably not what you're getting at)
I would change 'the species would survive' on slide 25 to 'would probably survive', and maybe caveat it further, since the relevant question for expected utility is whether we could reach interstellar technology after being set back by a global catastrophe, not whether it would immediately kill us (cf eg https://www.openphilanthropy.org/blog/long-term-significance-reducing-global-catastrophic-risks). Similarly, I'd be less emphatic on slide 27 about the comparative magnitude of climate change vs the other events as an 'X-risk', esp where X-risk is defined as it is here: https://nickbostrom.com/existential/risks.html
Where did the 10^35 number for future sentient lives come from for slide 26? These numbers seem to vary wildly among futurists, but that one actually seems quite small to me. Bostrom estimates 10^38 lost just for a century's delayed colonization. Getting more wildly speculative, Isaac Arthur, my favourite futurist, estimates a galaxy of Matrioshka brains could emulate 10^44 minds - it's slightly unclear, but I think he means running them at normal human subjective speed, which would give them about 10^12 times the length of a human life between now and the end of the stelliferous era. The number of galaxies in the Laniakea supercluster is approx 10^5, so that would be 10^61 total, which we can shade by a few orders of magnitude to account for inefficiencies etc and still end up with a vastly higher number than yours. And if Arthur's claims about farming Hawking radiation and gravitational energy in the post-stellar eras are remotely plausible, then the number of sentient beings in the Black Hole era would dwarf that number again! (ok, this maybe turned into an excuse to talk about my favourite v/podcast)
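For what it's worth, the order-of-magnitude arithmetic behind that 10^61 figure is just adding exponents - here's a minimal sketch, where every input is one of the speculative estimates quoted above (Arthur's Matrioshka-brain figure, the stelliferous-era lifespan, the Laniakea galaxy count), not an established fact:

```python
# Speculative order-of-magnitude inputs (exponents, base 10), quoted from above:
minds_per_galaxy_exp = 44  # emulated minds in one galaxy of Matrioshka brains
lifetimes_exp = 12         # ~10^12 human lifetimes until the end of the stelliferous era
galaxies_exp = 5           # ~10^5 galaxies in the Laniakea supercluster

# Multiplying powers of ten means adding exponents:
total_exp = minds_per_galaxy_exp + lifetimes_exp + galaxies_exp
print(f"~10^{total_exp} total emulated lives")  # prints "~10^61 total emulated lives"
```

Shading that by a few orders of magnitude for inefficiencies still leaves it enormously larger than 10^35.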
Re slide 29, I think EA has long stopped being 'mostly moral philosophers & computer scientists' if it ever strictly was, although they're obviously (very) overrepresented. To what end do you note this, though? It maybe makes more sense in the talk, but in the context of the slide, it's not clear whether it's a boast of a great status quo or a call to arms of a need for change
I would say EA needs more money and talent - there are still tonnes of underfunded projects!
You write, "Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree)."
One thing I am sure about effective altruism is that it endorses helping the greater number, all other things being equal (by which I am here only concerned with the quality of pain being equal, for simplicity’s sake). So, for example, if $10 can be used to either save persons A and B each from some pain or C fr...