
ParthThaya


Comments

I'm the author of a (reasonably highly upvoted) post that called out some problems I see with housing all of EA's different cause areas under the single umbrella of effective altruism. I'm guessing this is one of the schism posts being referred to here, so I'd be interested in reading more fleshed-out rebuttals.

The comments section contained some good discussion with a variety of perspectives - some supporting my arguments, some opposing, some mixed - so it seems to have struck a chord with at least some readers. I do plan to continue making my case for why I think these problems should be taken seriously, though I'm still unsure what the right solution is.

I agree it'd be good to do rigorous analyses/estimates of the costs versus benefits to global poverty and animal welfare causes from being under the same movement as longtermism. If anyone wants to do this, I'd be happy to help brainstorm ideas on how it can be done.

I responded to the point about longtermism benefiting from its association with effective giving in another comment.

I don't believe the EA -> existential risk pipeline is the best pipeline for bringing people in to work on existential risks. I actually think it's a very suboptimal one, and that absent how EA history played out, no one would ever have answered the question "What's the best way to get people to work on existential risks?" with anything resembling "Let's start them with the ideas of Peter Singer and then convince them to include future people in their circle of concern and do the math." Obviously this argument has worked well for longtermist EAs, but it's hard for me to believe it's a more effective approach than appealing to people's basic intuitions about why the world ending would be bad.

That said, I also think closing this pipeline entirely would be quite bad. Sam Bankman-Fried, after all, seems to have come through that pipeline. But the EA <-> rationality pipeline is quite strong despite the two being different movements, and I think the same would hold for a separate existential risk prevention movement as well.

Thanks for the kind words! Your observation that "people who are emphatically in one camp but not the other are very different people" matches my beliefs here as well. It seems intuitively evident to me that most of the people who want to help the less fortunate aren't going to be attracted to, and often will be repelled by, a movement that focuses heavily on longtermism. And most of the people who want to solve big existential problems aren't going to be interested in EA ideas or concepts (I'll use Elon Musk and Dominic Cummings as my examples here again).

There's a sampling bias problem here. The EAs who are in the movement, and the people EAs are likely to encounter, are the people who weren't filtered out of the movement. One could sample EAs, find a whole bunch of people who aren't into longtermism but weren't filtered out, and declare that the filter effect isn't a problem. But that wouldn't take into account all the people who were filtered out, because counting them is much harder. 

In the absence of being able to do that, here's how I explained my reasoning:

I have met various effective altruists who care about fighting global poverty, and maybe care about improving animal welfare, but who are not sold on longtermism (and are sometimes hostile to portions of it, usually to concerns about AI). In their cases, their appreciation for what they consider to be the good parts of EA outweighed their skepticism of longtermism, and they became part of the movement. It would be very surprising if there weren't others in a similar boat for whom, being somewhat more averse to longtermism and somewhat less appreciative of the rest of EA, the balance swings the other way and they avoid the movement altogether.

You make a strong case that trying to convince people to work on existential risks for their own sakes alone doesn't make much sense. But promoting a cause area isn't just about getting people to work on it; it's also about getting the public, governments, and institutions to take it seriously.

For instance, Will MacAskill talks about ideas like scanning wastewater for new pathogens and using UVC light to sterilize airborne pathogens. But he does this only after trying to sell the reader/listener on caring about the potential trillions of future people. I believe this is a very suboptimal approach: most people will support governments and institutions pursuing these measures not for the benefit of future people, but because they're afraid of pathogens and pandemics themselves.

And even when it comes to people who want to work on existential risks, there's a more natural drive to try to save humanity that doesn't require buying the philosophical ideas of longtermism first. That is the drive we should leverage to get more people working on these cause areas. It seems to be working well for the fight against climate change, after all.

Do you have any evidence that this is happening?

Anecdotally, yes. My partner who proofread my piece left this comment around what I wrote here: "Hit the nail on the head. This is literally how I experienced coming in via effective giving/global poverty calls to action. It wasn't long till I got bait-and-switched and told that this improvement I just made is actually pointless in the grand scheme of things. You might not get me on board with extinction prevention initiatives, but I'm happy about my charity contributions."

The comment I linked to explains well why many can come away with the impression that EA is just about longtermism these days. 

The claim isn't that framing all these cause areas as effective altruism makes no sense, but that it's confusing and suboptimal. According to Matt Yglesias, there are already "relevant people" who agree strongly enough with this that they're trying to drop the full name and just use the acronym EA - but I think that's a poor solution, and I hadn't seen those concerns explained in full anywhere.

As multiple recent posts have said, EAs today try to sell the obviously important idea of preventing existential risk using counterintuitive ideas about caring about the far future, which most people won't buy. This is an example of how viewing these cause areas solely through the lens of altruism can damage those causes.

And it damages the global poverty and animal welfare cause areas too, because many who might be interested in EA's ideas about doing good better in those areas get turned off by the movement's intense focus on longtermism.

Hi Sindy, thanks for the kind words! Really cool to hear you’ve been looking into doing that, and I’d be interested in hearing more. And of course you’re more than welcome to reach out if you have any questions.

I can’t speak for everyone involved, but off the top of my head, my rough strategy is something like:

  1. Get more people to hear about EA. Last year, we only managed to get invites out to ~10% of the company, so there’s lots more to do here;
  2. As there is more interest and awareness among employees, work with the company to incorporate EA principles/charities into the official Give campaign.

Our main metrics today are simply site visits, attendance at our talks, and the feedback we receive. Donations to/through GiveWell from Microsoft are something we could maybe track if GiveWell is willing to share that information, but that's not a conversation we've had yet.

Bill Gates has stayed away from mixing his Foundation work with Microsoft, as far as I can tell. Our team’s never talked about reaching out to him for support, but maybe we should ...