If you think a typical EA cause has very high impact, it seems quite plausible that you can have even higher impact by working one level of “meta” up -- working not on that cause directly, but instead working on getting more people to work on that cause.
For example, while the impact of a donation to the Against Malaria Foundation seems quite large, it should be even more impactful to donate to Charity Science, Giving What We Can, The Life You Can Save, or Raising for Effective Giving, all of which claim to be able to move many dollars to AMF for every dollar donated to them. Likewise, if you think an individual EA is quite valuable because of the impact they’ll have, you may want to invest your time and money not in having a direct impact, but in producing more EAs!
However, while I agree with this logic, I’m nervous about going too far. As Dylan Matthews says, “if you take meta-charity too far, you get a movement that's really good at expanding itself but not necessarily good at actually helping people”. This is what leads to what Matthews called “[d]oing good through aggressive self-promotion” or what I’m calling “the meta trap”. While some meta-projects may have the highest impact in expectation, there are higher-order reasons to want to avoid giving all your resources to meta orgs.
Meta Trap #1: Meta Orgs Risk Not Actually Having an Impact
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable? (1) Linda is a bank teller or (2) Linda is a bank teller and is active in the feminist movement.
When asked by Tversky and Kahneman, the majority of people picked #2. However, #2 cannot be more probable than #1, since the probability of two events both occurring can never be greater than the probability of either event occurring on its own.
This is called the conjunction fallacy, and it is a classic bias of human rationality. However, it’s also a classic bias of meta-charity.
-
When you chain different probabilities together, every additional step will, in almost every case, weaken the chain. The same is true when chaining together steps of meta-charity -- while you’re getting higher returns in expected value, you’re also reducing the chance that the impact will actually occur.
Consider someone who is considering donating to fund the salary of a staff member to work full-time finding volunteer mentors to advise new EA chapters at various universities. These EA chapters will in turn bring more college students into EA, and these new EAs will then graduate and will all earn-to-give for GiveWell top charities. (While a bit silly-sounding, this plan is so realistic in EA, I’ve actually funded a form of it.)
This plan could have quite a high impact. While donating to AMF all our lives is great, if we can spend our effort to get two people to donate to AMF instead of us, we’ve doubled our impact. If we can spend our effort creating an entire college group to get dozens of people to donate to AMF, so much more impact! And then we can expand an entire network of college groups! And then we can become even more efficient in expanding this network. So much impact!
However, we’ve also now constructed a meta-chain that is five steps removed from the actual impact. There’s a lot that can now go wrong in this chain -- the chapters could get set up successfully but fail to get enough people to donate, the chapters could fail to get set up at all due to problems unrelated to the mentoring, the mentors themselves could fail to be better than if the full-time staff member just advised full-time instead, and the staff member could end up being really bad at recruiting volunteer mentors.
This doesn’t mean the chapter chain doesn’t have high expected value or that it’s not worth doing. It just means that it’s risky, and I’m nervous that as the levels of meta scale up, the additional risk taken on by introducing ways to break the chain might be much greater than the additional leverage taken by introducing another meta step.
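To make the trade-off concrete, here is a toy calculation of the chapter-mentoring plan above. Every leverage multiplier and success probability below is invented purely for illustration, not an estimate of any real project; the point is only that expected value can keep growing while the probability that any impact occurs at all shrinks with each added meta step.

```python
# Toy model: each meta step multiplies the payoff if everything works,
# but also adds an independent chance of failure. All numbers are
# invented for illustration.
steps = [
    # (leverage multiplier if the step works, probability it works)
    (3.0, 0.7),  # staff member successfully recruits volunteer mentors
    (2.0, 0.7),  # mentors actually help chapters get set up
    (2.0, 0.6),  # chapters recruit students into EA
    (2.0, 0.5),  # students go on to earn-to-give as planned
]

payoff, p_success = 1.0, 1.0
for leverage, p in steps:
    payoff *= leverage      # leverage compounds with each meta step...
    p_success *= p          # ...but so does the chance of a broken link
    print(f"leverage so far: {payoff:>5.1f}x, "
          f"chance the chain holds: {p_success:.2f}")

expected_value = payoff * p_success
print(f"expected multiplier vs. direct donation: {expected_value:.2f}x")
```

With these made-up numbers the plan still looks like roughly a 3.5x multiplier in expectation, but there is only about a 15% chance the full chain holds, which is exactly the risk-versus-leverage trade-off described above.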
I do think meta-charities are worth pursuing and I fund them myself. But for every time I think about how good of an opportunity the connections facilitated at EA Global are, I also worry about whether the new EAs brought into the movement really are going to create more counterfactual impact than the considerable cost of the conference.
Meta Trap #2: Meta Orgs Risk Curling In On Themselves
When I was in college, I once joked about a fictitious club called “Club Club” whose only purpose was perpetuating the club. Every Club Club meeting would be about how to advertise Club Club, how to recruit more Club Club members, and how to better retain the members Club Club already had. Club Club wouldn’t actually do anything. On days when I’m especially grumpy, I worry that EA may become that.
The problem is that if we spread the idea that meta-orgs are the highest impact opportunity too well, we risk the creation of a meta-movement to spread the meta-movement and nothing else. Once meta-orgs get to the point where it’s all about EAs helping other EAs to help EAs, we’ve gotten to the point where there’s serious risk that actual impact won’t occur.
Consider that plan again where we get someone to full-time find chapter advisors for setting up lots of EA chapters. Now imagine that instead of advocating to the college students that they earn to give for GiveWell charities we suggest that this chapter building project is really the best possible thing to be doing, so they should get involved in, donate to, and volunteer for it. Now we’ve got chapters developing chapters to perpetuate developing more chapters. But what does this actually accomplish? We might as well have them working to set up Club Club.
Meta Trap #3: You Can’t Use Meta as an Excuse for Cause Indecisiveness
EA is made up of quite a few different object-level causes and it can be hard to figure out which one is the best. Is it global poverty? Existential risk reduction? Animal welfare? Or something else?
Somehow, meta-work became its own cause in this list, but I think that’s a mistake. Meta-work isn’t a cause, it’s a meta-cause, and it’s supposed to make the actual causes above go better. To understand the meta-impact that meta-work has, it’s important to understand the object-level causes and have opinions on which one is best.
However, I feel like far too often people (including myself) hide behind donations to meta-charity as a feel-good way to support the EA movement as a whole without doing the hard work of figuring out which object-level causes are the best. Unless you’re funding cause prioritization research or hoping to bring in EAs who will shed more light on the question of which cause is the best, this seems like a big risk. It avoids learning opportunities and discussions we could be having about what the best causes actually are, which also pushes the entire movement forward.
Meta Trap #4: At Some Point, You Have to Stop Being Meta
Abraham Lincoln is purported to have said “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” I think this is generally a good philosophy to follow. But at some point you have to swing the axe and actually chop down the tree. If you have six hours to chop down the tree and you spend all six hours sharpening the axe, you’re obviously doing it wrong.
The problem we face with meta tasks is that we don’t really know how much time we have, and we have to allocate this unknown amount of time to axe sharpening (meta-work) and tree chopping (actual impact). At what point should we start chopping? I’m nervous that we may get so carried away with meta-work we forget to actually chop at some point.
Meta Trap #5: Sometimes, Well Executed Object-Level Action is What Best Grows the Movement
GiveWell is considered a meta-org, but they focus on direct research about which cause is best. Historically, they have not devoted many resources to outreach or marketing. Instead, they just focused on doing a very good job on their research and delivering high-quality recommendations. In turn, they attracted many donors, including a big foundation. As GiveWell says, “Much of our most valuable publicity and promotion has come from enthusiastic people who actively sought us out” and they “have generally felt that improving the quality of [their] research, and [the] existing audience’s understanding of it, has been the most important factor in [their] growth”.
Perhaps counter-intuitively, doing really well on object-level work could also be one of the best things we can do to grow a quality movement. People aren’t attracted to marketing, they’re attracted to people doing a good job. Marketing is only useful insofar as it draws attention to good work.
How Can We Defuse The Meta Trap?
To be clear, I don’t think the EA movement is in a meta trap yet. I think we’re doing good work and making a lot of progress on important, object-level issues. But I want to be careful. I think the meta trap is a real problem.
Here are two steps I think would work to defuse it:
1.) The more steps away from impact an EA plan is, the more additional scrutiny it should get. The idea of a meta-meta-org may sound unusual, but many EA plans are actually that. This doesn’t mean they’re wrongheaded -- I just think they warrant extra skepticism. Are we really getting extra impact from each step? Or are we just introducing a lot of risk by adding another link that might collapse the whole chain?
2.) More EAs should have a substantial stake in object-level impact. Right now I’m aiming at donating 50% of my pool to the best meta-projects I know and spending the other 50% on direct impact through GiveWell’s top charities. I don’t know if 50% is the correct number, but I hope this will set an example of what I want the movement as a whole to do.
I think Jeff Kaufman at EA Global put it best:
So yes, we should do this, we should put substantial effort into growing the movement. But this isn't the only thing we should do. We can't have an entirely meta movement that goes grow, grow, grow, build growth capacity, bring in people to bring in people, bigger and bigger, and then shift focus? Turn your giant optimized-for-growth movement into an optimized-for-helping one? Not going to work.
We need to do things that help people alongside growing the movement, and personally I try to divide my efforts 50-50. As I argued above, for the doing-good-now portion I think global poverty is our best shot. This isn't settled—EA is all about being open to the best options for helping others, whatever those causes happen to be—but today I think the best you can do to help people now is donate to GiveWell's top charities.
Edit: this essay is great, and I was so excited to build on what Peter wrote that I started commenting at length on each individual point before I'd even finished reading the whole thing. I believe I went over something Peter already covered in the OP before I realized it. I'll edit that out for brevity, but forgive me if I miss something and end up needlessly repeating Peter.
[Epistemic Status: heady, giddy and rapid hypothesis generation]
I perceive two pitfalls here. First, logistics and administration may become more difficult as a meta-project grows. For however many levels n a project goes meta, where n is the number of steps its management is removed from the object-level goals of effective altruism, there will be more people, information, and organizations to keep track of. As a project gets more meta, it will also become more difficult to convince effective altruists in general to increase its funding so it can scale, or go even more meta. So a project that seeks to go more meta in an unrestrained way will face constant talent and financial constraints: it will be harder to find fitting employees, and the project will become so abstract that it is difficult to explain how it would work in the first place, which will only make funders more skeptical. And if a project receives all its funding from one major donor, or a single consortium of donors, its managers risk losing the independence to run the project as they see fit, facing constant questions or donors/directors steering the project in a new direction. This is why, e.g., GiveWell doesn't want to receive all its funding from Good Ventures.
Running a meta-project in a nimble way that reacts quickly to changing circumstances and steep learning curves seems necessary, as meta-projects are almost always breaking new ground in the non-profit sector and/or effective altruism when they're founded. So they can't risk losing their independence by courting only one donor who may go on seeking to steer the ship themselves. And if a meta-project's fundraising were constrained to effective altruism while it faced financial constraints, I'd be skeptical of any claim that it could find sufficient funding by going outside the effective altruism community.
Second, there is a temptation for meta-projects to go to the nth meta-level indefinitely. If they do, I figure they'd eventually reach the point where the network they've built for expanding effective altruism becomes unmanageable, and the members of that network neither coordinate nor gain the self-awareness to know what to do with themselves. So the whole thing would unwind. While not all the value of the meta-project would be undone in such a case, I think there would be enough collapse or loss that the initial and ongoing costs of the project would be unjustified, and the resources would counterfactually have done more good at some lower level of organization. Whether that level is the object level (e.g., just donating to AMF) or only one meta-level up (merely fundraising for AMF), there would have been a point at which the managers should have known to stop the constant abstraction of the project.
I think the solution to both these problems is, and will need to be, greater accountability and oversight. Major donors to EA meta-projects might want to see a laid-out plan of operational goals for the next year, a budgetary breakdown of the funding those goals will require, and a detailed account of past performance to demonstrate a track record of reliability. Charity Science does all that, right? This is a cross between the proposal to fix science by preregistering hypotheses before studies are conducted and a company transparently providing information to assure investors that its executives are making the best choices they can. If so, I think every other EA meta-project or meta-charity should be expected to do the same. I'd be happy to help normalize this trend. The best way to do that would be to explain how I donated to, e.g., Charity Science instead of CEA or GiveWell based on Charity Science making abundantly clear its operational goals and what it expected to achieve relative to other organizations. I don't have enough money to donate right now to justify that, and likely won't in the next year, so I can't do that. I encourage others, such as yourself, Peter, to do it more often. In the meantime, I'd lend my moral, vocal, or other support.
Also, I figure meta-projects or meta-charities should be incentivized, in addition to the above, to preregister their low-ball, average, and stretch goals for the year, with as quantified a level of confidence as they can muster. Incentivizing this could be facilitated by an EA prediction market or other mechanisms of moral economics that have recently been discussed on this forum. A prediction market could help calibrate a project's expectations: the best forecasters in the market would make their own predictions of the meta-project's likely success, and if those forecasters, with their proven track records, independently converged on the conclusion that the project's scope was overconfident, the project managers would be induced to temper their overconfidence and, e.g., ask for less funding than they claim they can optimally use. To get an organization to change its behavior in the face of such a scenario, it might need to be rewarded for updating in the right direction; I can't think of any reward right now besides assurance that it would receive the appropriate level of funding (corresponding to its most realistic goals).
Finally, it seems to me that how internally well-connected the existing effective altruism network is matters just as much for facilitating valuable object-level work as growing the movement does. I call the former, increasing the value of internal networking, "growing stronger," and the latter, expanding the network as a whole, "growing bigger." This distinguishes the different ways effective altruists use the phrase "movement growth," a distinction first made clear to me at the 2013 EA Summit. "Growing stronger" seemed to be the approach to movement development favored by Anna Salamon and CFAR, "growing bigger" the approach favored by CEA, and a combination of both the strategy seemingly favored by Geoff Anders and Leverage Research. I think managing and improving the internal strength of the community as it is, including how we connect and collaborate, is just as or more important than increasing the absolute size of effective altruism. Another way of thinking of this: increasing absolute impact vs. increasing impact per unit of effort expended. My recent spate of proposals to and engagement with .impact has been motivated by facilitating movement development via increasing the utility of the current network.
Also, thanks for all your feedback, Evan. I'm glad you liked it.