This is my submission to February's FGO topic, which was about explaining effective altruism, especially in person. (We aren't searching for new explanations so much as for meta-advice.)

I've hardly ever tried to explain EA-ish ideas to anybody in person. The few times I have didn't go very well. I didn't receive any counterarguments; it was just clear that the people I was talking to were generally unconvinced. The main reason I don't tell anyone around me about effective altruism is that I expect these reactions.

The closest I'll come is telling some people I plan to donate a lot of money in the future and that I want to be smart about how I do it. I think this substitute explanation of EA (one that doesn't actually explain EA at all but instead offers a socially acceptable proxy) is likely to get far better reactions. In my personal experience, people are supportive of philanthropy so long as you don't come off as weird or as some sort of fanatic.

This is similar to the motte-and-bailey fallacy: effective altruists tell each other that they're about, say, maximizing total expected utility according to a hedonistic utilitarian framework, but tell "the public" only that they want to donate a lot and make sure their donations count. The "true" explanation of EA - that is, the definition that most of us more or less actually agree with - is swapped for a more easily defensible, socially acceptable one.

Maybe this is just routine, common sense marketing.

Comments

It's not really a motte and bailey if the bailey and the motte are not different. I don't think most EA people believe, or think they are about, maximizing total expected utility according to a hedonistic utilitarian framework. Even assuming that were true, it's hard to see how the end result would necessarily differ markedly from "want to donate a lot and make sure their donations count," which is a conclusion perfectly consistent with any number of ethical frameworks, including the one used by 99% of us 99% of the time (utilitarians included): thinkingaboutitaswegoalongism.

What you are not quite saying, but is implicit in your "utilitarian framework" point, is that there is an element within EA, mostly via a certain website, who see it as an explicit motte-and-bailey tool. Some of the users of that other site have been explicit in the past that they are using GiveWell and EA as a soft "recruiting tool," believing that once they have gotten people to sign up to "donate a lot and make sure their donations count," they can modify individuals' preferences about what "donations count" means (perhaps by presenting them with certain utilitarian arguments) and get them to switch from extreme poverty causes to their preferred esoteric AI and future-risk stuff.

But they are not "most EA's" - In monetary and numerical terms they are small compared to the numbers groups like GW, TLYCS and GWWC have. Its not even most Future risk or AGI people, most of whom wear their allegiances and weirdness points quite openly.

I find another motte-and-bailey situation more striking: the motte of "make your donations count by going to the most effective place" and the bailey of "also give all your money!"

I personally know a lot of people who have been turned off effective altruism by the bailey here, and while some seem to disagree with the motte, they are far fewer. In discussions about how to present EA to people we know, I'd recommend in many circumstances sticking with the motte, at least until you know they are fully on board with it; perhaps they'll come up with the bailey on their own.

FGO = Figuring Good Out, the monthly EA blogging carnival started by Michael Bitton (the author of this post). Anyone can write a blog post on the month's topic, and Michael links to them all at the end.