EA should not have any reputational issues. It is just people trying to figure out the best way to improve the world. What could be controversial about that?

Even before the whole FTX thing, EAs were being vilified on social media and even in academia. Is there some kind of psychological angle I am missing? Like cognitive dissonance among critics over not doing more themselves, or some other kind of resentment?

Should we even care, or just try to ignore it and go about our business?

I think it is more important than ever that EA causes attract new mega donors, and it is going to be tougher to do that if EA has a negative public image, justified or not.

I am even embarrassed to use the words effective altruism anymore in conversation with friends and family. I would rather avoid the controversy unless it’s really necessary.

If these questions have already been addressed somewhere, I would appreciate any references.


As with any social movement, people disagree about the best ways to take action. There are many critiques of EA which you should read to get a better idea of where others are coming from, for example, this post about effective altruism being an ideology, this post about someone leaving EA, this post about EA being inaccessible, or this post about blindspots in EA/rationalism communities. 

Even before SBF, many people had legitimate issues with EA from a variety of standpoints. Some people find the culture unwelcoming (e.g. too elitist / not enough diversity), some take issue with longtermism (e.g. too much uncertainty), others disagree with consequentialism/utilitarianism, and still others are generally on board but find more specific issues with the way EA approaches things.

Post-SBF it's difficult to say what the full effects will be, but I think it's fair to say that SBF represents what many people fear/dislike about EA (e.g. elitism, inexperience, ends-justify-the-means reasoning, tech-bro vibes, etc.). I'm not saying these things are necessarily true, but most people won't spend hundreds of hours engaging with EA to find out for themselves. Instead, they'll read an article in The New York Times about how SBF committed fraud and is heavily linked to EA, and walk away with a somewhat negative impression. That isn't always fair, but it also happens to other social movements like feminism, Black Lives Matter, veganism, environmentalism, etc. EA is no exception, and FTX/SBF was a big enough deal that a lot of people will choose not to engage with EA going forward.

Should you care? I think to an extent, yes - you should engage with criticisms, think through your own perspective, decide where you agree/disagree, and work on improving things where you think they should be improved going forward. We should all do this. Ignoring criticisms is akin to putting your fingers in your ears and refusing to listen, which isn't a particularly rational approach. Many critics of EA will have meaningful things to say about it and if we truly want to figure out the best ways to improve the world, we need to be willing to change (see: scout mindset). That being said, not all criticisms will be useful or meaningful, and we shouldn't get so caught up in the criticism that we stop standing for something. 

Sabs

Well, the elitism charge is just true, and it should be true! Of course EA is an elitist movement: the whole point is trying to get elites to spend their wealth better, via complicated moral reasoning that you have to be smart to understand (this is IMO a good thing, not a criticism!).

I actually think it would be a disaster if EA became anti-elitist, not just for EA but for the world. The civic foundation of the West is made up of Susan from South Nottingham, who volunteers to run the local mother & baby group: if she stops doing that to earn to give or whatever, then everything falls apart and the economic surplus that EA relies on will disappear within a few generations. For everyone's sake, the EA message needs to stay very specifically targeted; it would be an extremely dangerous meme if it leaked out to the wider world. Thankfully I think on some level EAs sort of know this, which would probably explain the focus on evangelizing specifically to smart university students but not to anyone else.

I rather liked this comment, and think it really hits the nail on the head. As someone who has only recently come into contact with EA and developed an interest, and who therefore has a mostly 'outsider' perspective, I would add that there's a big difference between the perception of 'effective altruism', which almost anybody would find reasonable and morally unobjectionable, and 'Effective Altruism' / Rationalism as a movement with beliefs and practices that many people will find weird and off-putting (basically, all those mentioned by S.E. Montgomery: elitism, longtermism, utilitarianism, a generally hubristic and nerdy belief that complex issues and affairs are reducible to numbers and optimization models, etc.).

Controversial take: while I agree that EA has big problems, I actually think the elitism is correct, for one reason:

  1. Things are usually heavy-tailed, like power laws; indeed, heavy-tailed distributions may be the most common kind, and this supports elitism (see the sketch below).
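To make that concrete, here's a minimal numeric sketch (my own illustration, not the commenter's; the Pareto shape of 1.16 is an assumed value chosen to roughly match the classic "80/20" rule):

```python
# Hedged sketch under assumed parameters: if per-person impact follows
# a power law, a small minority accounts for most of the total.
import numpy as np

rng = np.random.default_rng(0)
# numpy's pareto() draws Lomax samples; adding 1 gives a classical Pareto
# with minimum value 1. Shape a=1.16 roughly corresponds to "80/20".
impact = rng.pareto(a=1.16, size=1_000_000) + 1

impact.sort()  # ascending, so the largest draws sit at the end
total = impact.sum()
print(f"share of total from top 1%:  {impact[-10_000:].sum() / total:.0%}")
print(f"share of total from top 20%: {impact[-200_000:].sum() / total:.0%}")
```

Under these assumptions the top fifth contributes roughly four fifths of the total, which is the kind of concentration the comment is gesturing at; the exact shares depend entirely on the assumed shape parameter.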

One of my largest criticisms of EA is that they don't realize there might be a crucial consideration around moral realism. Now, this criticism is not special to EA, but moral realism is basically the view that morality is real and mind-independent.

Yet there is probably overconfidence on this front, and this matters, because if moral realism isn't true, then EA will have to change drastically. In general, this assumption goes far too unquestioned.

It's possible I've flipped the sign on what you're saying, but if I haven't, I'm pretty sure most EAs are not moral realists, so I don't know where you got the impression that it's an underlying assumption of any serious EA efforts.

If I did flip the sign, then I don't think it's true that moral realism is "too unquestioned".  At this point it might be more fair to say that too much time & ink has been spilled on what's frankly a pretty trivial question that only sees as much engagement as it does because people get caught up in arguing about definitions of words (and, of course, because some other people are deeply confused).

I think this might be a crux here: I do think that this question matters a lot, and I will point to implications if moral realism is false. Thankfully more EAs are moral anti-realists than I thought.

  1. EA would need to recognize that its values aren't superior or inferior to other people's values, just different. In other words, it would need to stop believing it is objectively right to hold (X values) at all.

  2. The moral progress thesis would not be correct; that is, the changes of the 19th and 20th centuries would mostly not constitute objective moral progress. At the very least, moral progress would be very much subjective.

  3. Values are points of view, not fundamental truths that humans have to abide by.

Now this isn't the most important question, but it is a question that I think matters, especially for moral movements like EA.

These are largely anecdotal, and are NOT endorsements of all listed critiques, just an acknowledgement that they exist and may contribute to negative shifts in EA's public image. This skews towards left-leaning views, and isn't representative of all critiques out there, just a selection from people I've talked to and commentary I've seen / what's front of mind due to recent conversations.

The FTX events are clearly a net negative for EA's reputation from the outside. This was probably a larger reputational hit to longtermism than to animal welfare or GHW (though not necessarily a larger harm:benefit ratio). But even before this, a lot of left-leaning folks viewed EA's ties to crypto with skepticism (usually over whether crypto is net positive for humanity, not over whether crypto is a sound investment).

EA generally is subject to critiques around measuring impact through a utilitarian lens by those who deem the value of lives "above" measurement, as well as by those who think EA undervalues non-utilitarian moral views or person-affecting views. There are also general criticisms of it being an insufficiently diverse place (usually something like: too white / too western / too male / too elitist) for a movement that cares about doing the most good it can.

EA global health + wellbeing / development is subject to critiques around top-down Western aid (e.g., Easterly, Deaton), and general critiques around the merits of randomista development. Some conclusions are seen as unintuitive, e.g. by those who think donating to local causes or your local community is preferable because of some moral obligation to those closer to you, or to those responsible for your success in some way.

Within the animal welfare space, there's discussion around whether continued involvement with EA and its principles is a good thing, for similar reasons (lacking diversity, too top-down, too utilitarian) - these voices largely come from the more left-leaning / social-justice-inclined (e.g. those placing value on intersectionality). Accordingly, some within the farmed animal advocacy space also think involvement with EA is contributing to a splintering within the FAAM movement. I'm not sure why this seems more relevant in FAAM than in GHW, but some possibilities might be that EA funders are a more important player in the animal space than in the GHW space, and that FAAM members are generally more left-leaning and see a larger divide between social justice approaches and EA's utilitarian, EV-maximising approaches. Some conclusions are seen as unintuitive, e.g. shrimp welfare ("wait, you guys actually care about shrimp?") or wild animal suffering.

Longtermism is subject to critiques from those uncomfortable with valuing the future at the cost of people today, valuing artificial sentience more than humans, a perceived reliance on EV-maximising views, "tech bros" valuing science-fiction ideas over real suffering and justifying such spending as "saving the world", and questions about the extent to which the future longtermists want to preserve actually involves all of humanity, or just a version of humanity that a limited subculture cares about. Unintuitive conclusions may involve anything from thinking about the future more than 1,000 years from now, to expansion outside the solar system, to artificial sentience. The general critiques around diversity, and the lean towards tech fixes instead of systemic approaches, are perhaps most pronounced in longtermism, perhaps in part because AI safety is a large focus of longtermism, and in part because of associations with the Bay Area. The general critiques around utilitarianism are perhaps also most pronounced in longtermism, and the media attention around WWOTF probably made more people engage with longtermism and its critiques. On recent EA involvement in politics (a pushback re: favouring tech fixes over systemic approaches), the Flynn campaign was seen by some left-leaning outsiders as a negative update on EA's ability to engage in this space.

Outside of cause-area considerations, some people get the impression that EA leans young, unprofessional, too skeptical to defer to existing expertise / too eager to defer to a smart generalist to first-principles their way through a complicated problem, or that EA is a community that is too closely knit, subject to nepotism, and unfairly favours "EA insiders" or "EA alignment". Other people think EA is too fervent with outreach, and consider university and high-school messaging, or even 80,000 Hours, akin to cult recruitment. In a similar vein, some think the EA movement is too morally demanding, which may lead to burnout, or that it insufficiently values individuals' flourishing. Still others think EA lacks direction, isn't well steered, or has an inconsistent theory of change.

It is just people trying to figure out the best way to improve the world. What could be controversial about that?

How best to improve the world is far from straightforward, and EA hardly has a bulletproof or incontrovertible position on what needs to be improved or how. Even if you can get broad agreement on something like "we should fix poverty", there are dozens of questions about why poverty happens, how the root causes should or shouldn't be addressed, what the most effective and most ethical ways of addressing it are (which are not necessarily the same), etc. I think the comments from S.E. Montgomery and bruce do an excellent job summing up a lot of the points where EA is open to (often well-deserved) critiques.

Even before the whole FTX thing, EAs were being vilified on social media and even in academia.

I think there's some value in separating the "public" image and criticisms of EA from academic ones, and I wanted to comment on the academic aspect in particular. While I'm sure that, like all movements, EA has been subject to vilification by academics, the academically published critiques of EA I've read tend to be fair, reasoned critiques primarily stemming from different approaches to doing good:

  • Institutional critiques arguing that EA is too focused on "band-aid" solutions that don't address the root causes of the problems it tackles, which would require institutional or radical change.
  • Methodological critiques, including critiques of utilitarianism as a method for cause and intervention prioritization in general, and specific criticisms of calculations regarding biases or models.
  • More general critiques of what should be prioritized in doing good: utilitarian vs. deontological views.
  • Critiques weighing the value of local (read: community-based, bottom-up) vs. global (read: top-down foreign aid and intervention) approaches.
  • Critiques of charity in general along the lines of the nonprofit-industrial complex, extended to EA.
  • Critiques of charity in general as a means of redistributing wealth and deciding who gets access to which services, as opposed to democratic methods.
  • Anti-capitalist critiques arguing that EA is too tied in with capitalist ideals and methods to effect real change when it comes to poverty, which is considered a symptom of capitalism.

I don't think all of these critiques are excellent or as well backed up as they could be, but I think it's important to recognize that there are good reasons to find EA objectionable, controversial, or even, on net, harmful on the basis of any of these - without cognitive dissonance, resentment, or other psychological effects being involved.

None of those seem like critiques of the broad idea “I will attempt to think critically about how to reduce the most suffering.” Rather, they take issue with tactics.

So, they aren’t criticisms of EA as a philosophy, they are criticisms of EA tactics.

Also, as Peter Singer recently pointed out, no one ever said EA work was “either / or.” If someone has a systemic solution to global poverty, by all means, pursue it. In the meantime, EAs will donate to AMF and Helen Keller, etc.

So, the question then is whether we should sit on our hands and leave our money in savings accounts while we wait for a solution to systemic poverty, or should we use that money to de-worm a child while we are waiting.

It's not like effective altruists are the first or only people to think critically about how to reduce suffering. EA philosophy doesn't have a monopoly on trying to do good or trying to do the most good; it doesn't even have a monopoly on approaching doing good from an empirical standpoint. These aren't original or new ideas at all, and they aren't the ideas that are being critiqued.

I don't think it makes sense to separate what you're calling EA tactics and EA philosophy. EA is its tactics; those can change, but for the moment they are what they are. On a more concrete philosophical note, aspects of EA philosophy like cause prioritization, cause neutrality, utilitarian approaches, using metrics like cost-effectiveness, DALYs, and QALYs to measure outcomes, tendencies towards technocratic solutions, and top-down approaches are among the things being critiqued. I don't think these are all necessarily valid or representative of the whole of EA, but the first few are certainly very closely related to what I consider to be the constitutive and distinctive features of EA. That being said, I'm curious how you define EA philosophy beyond the broad idea of thinking critically about how to reduce the most suffering, as that definition is broad enough that most people working in pretty much any field related to wellbeing, social policy, or charity would be included.

I understand your frustration, but I don't think anybody making the arguments above is arguing that "we should sit on our hands and leave our money in savings accounts while we wait for a solution to systemic poverty". They're arguing that current EA approaches, both philosophical and tactical, are not satisfactory and don't do the most good, and that more effort and resources should be put into solutions that they think do.

I agree EAs aren’t original and weren’t the first.

I do think it makes sense to separate tactics and philosophy.

There are people who say it would be better to leave money in a savings account than to donate it to a de-worming charity or AMF. One of the most prominent is Alice Crary. Here is a recent sample of her argument: https://blog.oup.com/2022/12/the-predictably-grievous-harms-of-effective-altruism/

I've read the text you linked twice, to make sure I'm not missing something, and I don't see where the authors argue that "it would be better to leave money in a savings account than to donate it to a de-worming charity or AMF" - which, incidentally, is very different from "we should sit on our hands and leave our money in savings accounts while we wait for a solution to systemic poverty". I guess the closest thing is "how funding an "effective" organization's expansion into another country encourages colonialist interventions that impose elite institutional structures and sideline community groups whose local histories and situated knowledges are invaluable guides to meaningful action", which is not arguing that we should keep money in our bank accounts waiting for systemic change, but that funding these charities has adverse effects and so they shouldn't be funded.

Sabs

I think it's mostly just FTX tbh. Most people, even smart well-informed people, had never heard of EA beforehand. If they've been following the FTX saga closely they have, and their first impression is obviously a very bad one. There's not much anyone can do about this, it is what it is. Obviously the effects will partially dissipate over time. 

An additional problem is all of SBF's donations to the Dems, so there's probably the additional perception that EA is some weird and wacky scam the Dems are running on everyone, i.e. EA has become associated with the insane American partisan culture wars, and for some people is probably now in the same bracket as Epstein and child-abusing pizza restaurants. Again, there's not much you can do about this, although perhaps EA could try to highlight its links to the political right a bit more to help redress the balance? (Thiel, Musk)

I don’t think EA attracts more criticism than others. It’s just that it’s my movement, so I feel it more.

I think it’s badly named!

The name “effective altruism” has a holier-than-thou, “we’ve solved ethics and are perfectly ethical” vibe, which means that people are extra critical of EA when they disagree with the EA community or feel that it isn’t doing the most good.

Why wouldn't it be controversial? It suggests something other than people acting according to their personal pet projects, ideologies, and social affiliations, and proposes a way by which those can be compared and found wanting. The fact that it also comes with significantly more demandingness than anything else just makes it a stronger implicit attack.

Most people will read EA as a claim to the moral high ground, regardless of how nicely it's presented to them - largely because it basically is one. Implicit in all claims to the moral high ground - even if it's never stated, and even if it's directly denied - is the corollary claim that others' claims to the moral high ground are lesser or even invalid. Which is a claim of superiority.

That will produce defensiveness and hostility by default.


Many people's livelihoods depend on ineffective charity, of course, and Sinclair's Rule is also a factor. But it's a minor one. The main factor is that the premise of EA is that charity should be purchasing utilons. And  even starting to consider that premise makes a lot of people tacitly realize that their political and charitable work may have been purchasing warm fuzzies, which is an unpleasant feeling that they are motivated to push back against to protect their self-image as a do-gooder/good person/etc.

Of course, there is no need for contradiction. You can purchase both utilons and warm fuzzies, so long as you do it separately. But in my estimation, no more than 5% of the world, at the absolute most, is amenable to buying warm fuzzies and utilons separately. (More likely it's less than 0.5%.) The other 95% will either halt, catch fire, and reorient their internal moral compass, or, much more commonly, get outraged that you dared to pressure them to do that. (Whether you actually applied any pressure is basically immaterial.) 

I like this comment and think it answers the question at the right level of analysis.

To try and summarize it back: EA's big assumption is that you should purchase utilons, rather than fuzzies, with charity. This is very different from how many people think about the world and their relationship to charity. To claim that somebody's way of "doing good" is not as good as they think is often interpreted as an attack on their character and identity, and is thus met with emotional defensiveness and counterattack.

EA ideas aim to change how people act and think (and, for some, core parts of their identity); such pressure is by default met with resistance.

I'm sure each individual critic of EA has their own reasons. That said (this is intuition - I don't have data to back it up), I suspect two main things, pre-FTX.

Firstly, longtermism is very criticisable. It's much more abstract, focuses less on doing good in the moment, and can step on causes like malaria prevention that people can more easily get behind emotionally. There is a general implication of longtermism that if you accept its principles, other causes become essentially irrelevant.

Secondly, everything I just said about longtermism -> neartermism applies to EA -> regular charity - just replace "doing good in the moment" with "doing good close to home". When I first signed up for an EA virtual program, my immediate takeaway was that most of the things I had previously cared about didn't matter. Nobody said this out loud - they were scrupulously polite about it - they were 100% correct, and it was a message that needed to be shared to get people like me on board. This is a feature, not a bug, of EA messaging. But it is not a message that people enjoy hearing. The things people care about are generally optimised for having people care about them - as examples, see everything trending on Twitter. As a result, people don't react well to being told, whether explicitly or implicitly, that they should stop caring about (my personal example here) the amount of money Australian welfare recipients get, and care about malaria prevention halfway across the world instead.

One difference between criticising EA as a whole and criticising longtermism is that neartermism is much harder to attack: you can just point out the hundreds of thousands of lives that neartermism has already saved, and the critic looks like an asshole. Longtermism has no such defense, and a lot of people equate longtermism with the EA movement - sometimes out of intellectual dishonesty, and sometimes because longtermism genuinely is a large and growing part of EA.

I'll list some criticisms of EA that I heard, prior to FTX, from friends/acquaintances who I respect (which doesn't mean that I think all of these critiques are good). I am paraphrasing a lot so might be misrepresenting some of them.

Some folks in EA are a bit too pushy about getting new people to engage more. This came from a person who thought of doing good primarily in terms of their contracts with other people, supporting people in their local community, and increasing cooperation and coordination in their social groups. They also cared about helping people globally (they donated some of their income to global health charities and were vegetarian) but felt like it wasn't the only thing they cared about. They felt that, in their interactions with EAs, the other person would often bring up the same thought experiments they had already heard in order to rid them of their "bias towards helping people close to them in space-time", which they found annoying. They also came from a background in law and found the emphasis on AI safety off-putting, because they didn't have the technical knowledge to form an opinion on it, and the arguments were often presented by EA students who failed to convince them - and who, they thought, didn't have good reason to believe the arguments themselves.

Another person mentioned that it looked weird to them that EA spent a lot of resources on helping itself. Without looking too closely, the ratio of resources spent on meta EA stuff to directly impactful stuff seemed suspiciously high to them. Their general views on communities with access to billionaire money, influence, and young people wanting to find a purpose made them assume negative things about the EA community as well. This made it harder for them to take some EA ideas seriously. I feel sympathetic to this: if I weren't already part of the effective altruism community and didn't understand the value in a lot of the EA meta stuff, I would perhaps feel similarly suspicious.

Someone else mentioned that lots of EA people they met came across as young, not very wise, and quite arrogant for their level of experience and knowledge. This put them off. As one example, they had negative experiences with EAs who didn't have any experience with ML trying to persuade others that AI x-risk was the biggest problem.  

Then there was suspicion that EAs, because of their emphasis on utilitarianism, might be willing to do things like lie, break rules, or push the big guy in front of the trolley if it were for the "greater good". This made them hard to trust.

Some people I have briefly talked to mainly thought EA was about earning to give by working for Wall Street, and they thought it was harmful because of that.

I didn't hear the "EA is too elitist" or "EA isn't diverse enough" criticisms much (I can't think of a specific time someone brought that up as a reason they chose not to engage more with EA).

I have talked to some non-EA friends about EA stuff after the FTX crisis (including one who himself lost a lot of money that was on the platform), mostly because they sent me memes about SBF's effective altruism. My impression was that their opinion of EA (generally mildly positive, though not personally enthusiastic) did not change much as a result of FTX. This is unfortunately probably not the case for people who heard about EA for the first time because of FTX - they are more likely to assume bad things about EAs if they don't know any in real life (and I think this is, to some extent, a justified response).

Why would it be harmful to work for Wall Street earning to give? Sincere question.

Finance is like anything else. You can have an ethically upstanding career, or you can have an ethically dubious career. Seems crazy to generalize.

I haven't thought about this much. I am just reporting that some people I briefly talked to thought EA was mainly that and had a negative opinion of it. 

Because finance people are bad people and therefore anything associated with them is bad. Or for a slightly larger chain, because money is bad, people who spend their lives seeking money are therefore bad, and anything associated with those people is bad.

Don't overthink this. It doesn't have to make sense, there just have to be a lot of people who think it does.

This seems counterproductively uncharitable. Wall Street in particular, and finance in general, is perceived by many to be an industry that is overall harmful and has negative value, and participating in it is seen as contributing to harm while producing very little added value for anyone outside of high-earning elite groups.

It makes a lot of sense to me that someone who thinks the finance industry is, on net, harmful will see ETG in finance as a form of ends-justify-the-means reasoning, without having to resort to a caricature like "money bad = Wall Street bad = ETG bad, it doesn't have to make sense".

That's literally just the same thing I said with more words. They don't have reasons to think finance is net negative, it just is polluted with money and therefore bad.
