
Cross-posted to my blog.

The Open Philanthropy Project has made some grants that look substantially less impactful than some of its others, and some people have questioned the choice. I want to discuss some reasons why these sorts of grants might plausibly be a good idea, and explain why I ultimately find them unconvincing.

I believe Open Phil’s grants on criminal justice and land use reform are much less effective in expectation[1] than its grants on animal advocacy and global catastrophic risks. This would naively suggest that Open Phil should spend all its resources on these more effective causes, and none on the less effective ones. (Alternatively, if you believe that the grants on US policy do much more good than the grants on global catastrophic risk, then perhaps Open Phil should focus exclusively on the former.) There are some reasons to question this, but I believe that the naive approach is correct in the end.

Why give grants in cause areas that look much less effective than others? Why give grants in lots of cause areas rather than just a few? Let’s look at some possible answers to these questions.

Disclaimer: I don’t know anything about Open Phil’s reasoning except what’s been published online and what I’ve learned from a few conversations with employees, so I don’t claim that anything in this post correctly reflects Open Phil’s motivations. I don’t have as much information as Open Phil has, and haven’t spent nearly as much time investigating these cause areas. Open Phil may be making prioritization decisions based on information that I don’t have, or there may be important subtleties to the grantmaking process that I don’t understand. This post isn’t necessarily about what Open Phil is doing or should be doing, but more about what a very large value-aligned foundation should do. I mostly talk about Open Phil because it’s currently the largest such foundation[2], and there’s a decent chance that it actually cares what I say.

Thanks to Peter Hurford and Kelsey Piper for reading drafts of this.


Reasons why these grants might be the right choice

Diminishing utility over large amounts of money

Money has diminishing marginal utility–as you spend more, it usually becomes harder to find good giving opportunities[3]. Perhaps someone like Open Phil has so much money that its utility-maximizing strategy is to make grants in lots of cause areas. Is that reasonable? Let’s look at how much room for more funding there might be in AI safety and in farm animal advocacy, and when we should expect marginal grants in US policy to do more good than these[4].

The US government spends about $6 billion annually on biosecurity[5]. According to a Future of Humanity Institute survey, the median respondent believed that superintelligent AI was more than twice as likely to cause complete extinction as pandemics, which suggests that, assuming AI safety isn’t a much simpler problem than biosecurity, it would be appropriate for both fields to receive a similar amount of funding. (Sam Altman, head of Y Combinator, said in a Business Insider interview, “If I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.”) Currently, less than $10 million a year goes into AI safety research.

Open Phil can afford to spend something like $200 million/year. Biosecurity and AI safety, Open Phil’s top two cause areas within global catastrophic risk, could likely absorb this much funding without experiencing much diminishing marginal utility of money. (AI safety might see diminishing marginal utility since it’s such a small field right now, but if it were receiving something like $1 billion/year, that would presumably make marginal dollars in AI safety “only” as useful as marginal dollars in biosecurity.)
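Here’s a crude back-of-the-envelope sketch of that comparison in Python, using only the figures quoted above. The parity assumption (that funding should scale with judged extinction risk, if tractability is comparable) is mine, purely for illustration:

```python
# Rough comparison of biosecurity vs. AI safety funding, per the figures above.
biosecurity_funding = 6e9     # annual US government biosecurity spending
ai_risk_ratio = 2             # FHI survey median: AI ~2x as likely as pandemics
                              # to cause complete extinction
ai_safety_funding = 10e6      # current annual AI safety funding (upper bound)

# Illustrative assumption: funding should scale with judged extinction risk.
parity_target = biosecurity_funding * ai_risk_ratio
print(f"Naive parity target: ${parity_target / 1e9:.0f} billion/year")
print(f"Shortfall factor: ~{parity_target / ai_safety_funding:,.0f}x")
# -> Naive parity target: $12 billion/year
# -> Shortfall factor: ~1,200x
```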

To take another approach, let’s look at animal advocacy. Extrapolating from Open Phil’s estimates, its grants on cage-free campaigns are probably about ten thousand times more cost-effective than GiveDirectly (if you don’t heavily discount non-human animals, which you shouldn’t; more on this later), and perhaps a hundred times better after adjusting for robustness. Since grants on criminal justice reform are not significantly more robust than grants on cage-free campaigns, the robustness adjustments look similar for each, so it’s fair to compare their cost-effectiveness estimates rather than their posteriors.

Open Phil’s estimate for PSPP suggests that cage-free campaigns are a thousand times more effective. If we poured way more money into animal advocacy, we’d see diminishing returns as the top interventions became more crowded, and then as weaker interventions became more crowded. But for animal advocacy grants to look worse than grants in criminal justice, marginal utility would have to diminish by a factor of 1000. I don’t know what the marginal utility curve looks like, but it’s implausible that we would hit that level of diminished returns before increasing funding for the entire field of farm animal advocacy by at least a factor of 10. If I’m right, we should be putting $100 million a year into animal advocacy before we start making grants on criminal justice reform.

(Of course, in practice we wouldn’t just throw $100 million at the problem; we’d learn as much as we could about what the marginal utility curve looks like, and try to determine at what point marginal funding for animal advocacy starts to look worse than criminal justice reform. But right now we’re nowhere close to that point.)
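To get a feel for how steep the curve would have to be, here’s an illustrative sketch. The power-law form and the elasticity values are my assumptions, chosen only to make the orders of magnitude vivid:

```python
# Model marginal cost-effectiveness as a power law in cumulative funding:
# m(F) = m0 * (F / F0) ** -k, where k controls how fast returns diminish.
advantage = 1000  # cage-free campaigns vs. PSPP, per the estimates above

for k in (0.5, 1, 2, 3):
    funding_multiple = advantage ** (1 / k)
    print(f"k = {k}: 1000x advantage gone after ~{funding_multiple:,.0f}x more funding")
# k = 0.5 -> ~1,000,000x more funding
# k = 1   -> ~1,000x more funding
# k = 2   -> ~32x more funding
# k = 3   -> ~10x more funding (only a curve this implausibly steep erodes
#            the advantage within a mere 10x increase in field funding)
```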

If you’ll allow me to make a somewhat contentious[6] point, I’ll add that there are other important cause areas (like wild animal suffering) that Open Phil is currently not focusing on and that have tremendous scope (and probably tremendous room for funding); if it were up to me, I’d prioritize this far above US policy. Wild-animal suffering could probably absorb about $1 quadrillion per year in funding before marginal dollars there would be less effective than marginal dollars in US policy[7].

Fundamental risk aversion

If you’re making lots of grants, you may want to diversify them for some reason other than diminishing marginal utility. You might do this if you are inherently risk-averse (although Open Phil has explicitly said it’s not).

People are generally risk averse because most things have diminishing utility. If you make $10,000 a year, getting another $10,000 will improve your life a lot. But if you already make $100,000, an extra $10,000 won’t matter as much. Hence, we say that money has diminishing marginal utility.
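A standard way to formalize this is logarithmic utility of income; the log form is a common modeling choice, not something the argument depends on:

```python
import math

# With u(income) = log(income), the same $10,000 raise is worth much less
# at a higher income level.
print(math.log(20_000) - math.log(10_000))    # ~0.69 utils: $10k -> $20k
print(math.log(110_000) - math.log(100_000))  # ~0.10 utils: $100k -> $110k
```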

Fundamental risk aversion means that you value each unit of utility less than the previous unit–that is, you have diminishing marginal utility of utility. This makes no sense–utility can’t diminish relative to itself. For more detail, Brian Tomasik has an essay about why we should maximize expected value.

Fundamentally confused risk aversion

Suppose you value lives in the far future, and you believe that reducing extinction risk is important. But you’re unsure whether it’s even possible to affect extinction risk, because it’s so hard to measure. Might it then make sense to allocate some money to x-risk and some to more established causes like criminal justice?

Unless we’re fundamentally risk-averse, diversifying this way makes no sense. We are not trying to maximize the odds that we accomplish at least something good; we’re trying to maximize expected value. If criminal justice has higher expected value than x-risk, we should make all our grants there (up to the point of diminishing marginal returns)–not make any lower-expected-value bets.[8]

Unequal consideration of interests

In Practical Ethics, Peter Singer describes the concept of equal consideration of interests:

[W]hen we make ethical judgments, we must go beyond a personal or sectional point of view and take into account the interests of all those affected. […] This means that we weigh interests, considered simply as interests and not as my interests, or the interests of people of European descent, or of people with IQs higher than 100. This provides us with a basic principle of equality: the principle of equal consideration of interests.

The essence of the principle of equal consideration of interests is that we give equal weight in our moral deliberations to the like interests of all those affected by our actions. This means that if only X and Y would be affected by a possible act, and if X stands to lose more than Y stands to gain, it is better not to do the act. We cannot, if we accept the principle of equal consideration of interests, say that doing the act is better, despite the facts described, because we are more concerned about Y than we are about X. What the principle really amounts to is: an interest is an interest, whoever’s interest it may be.[9]

According to the principle of equal consideration of interests, we cannot give special weight to humans purely by virtue of the fact that they are humans; nor can we give special weight to the current generation over future generations[10]. But if this principle is wrong, then animal advocacy and existential risks may be a lot less important than they look.

Is it reasonable to reject the principle of equal consideration of interests? I do not believe so. Some people may want to give special consideration to certain groups, but there’s no place for that when we’re adopting, in the words of Henry Sidgwick, “the point of view (if I may say so) of the Universe.” If we want to do the most good, this necessarily requires us to give equal consideration to all beings.

It straightforwardly follows from the principle of equal consideration of interests that future generations should be valued equally to the present generation[10]. It’s less obvious how we should value non-human animals. Valuing different types of animals suffers from the two envelopes problem. But however you resolve this, it’s implausible that criminal justice reform is more effective than cage-free campaigns. To see why, let’s compare the cost-effectiveness estimates for cage-free campaigns and one of Open Phil’s grants in criminal justice reform.

Open Phil made a grant to the Pew Public Safety Performance Project (PSPP), a criminal justice intervention that looked particularly strong. Open Phil believes that “PSPP’s past work is plausibly competitive with donations to our top charities”, and its back-of-the-envelope calculation suggests that PSPP’s activities avert one prison year per $29. Compare this to the estimate that cage-free campaigns avert 38 to 250 years of cage confinement per dollar. Leaving aside the fact that the latter estimate looks somewhat more robust than the former[11], you would have to say that averting one prison year is 1100 to 7200 times better than averting one cage year for the two grants to be equally cost-effective. This is only plausible if you discount chicken suffering relative to human suffering by a factor of at least 500. If we give equal consideration to all interests, I don’t believe a discount factor this large is justifiable.
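The implied trade-off falls straight out of the two estimates; here’s the arithmetic, using only the numbers quoted above:

```python
# How much better must a prison year be than a cage year for the two
# grants to be equally cost-effective?
cost_per_prison_year = 29          # dollars per prison year averted (PSPP)
cage_years_per_dollar = (38, 250)  # cage years averted per dollar (cage-free)

for rate in cage_years_per_dollar:
    implied_ratio = rate * cost_per_prison_year
    print(f"{rate} cage-years/$ -> 1 prison year must equal ~{implied_ratio:,} cage years")
# -> 38 cage-years/$  -> 1 prison year must equal ~1,102 cage years
# -> 250 cage-years/$ -> 1 prison year must equal ~7,250 cage years
# (rounded in the text to 1100 and 7200)
```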

But even if we reject equal consideration of interests, it’s extremely unlikely that the PSPP grant and the cage-free grants would look similarly good in expectation, so this alone is insufficient to justify making lots of grants in different cause areas. It would explain why you might prefer criminal justice reform to animal welfare; but if you do, why give to animal welfare at all?

Value pluralism

Even if we can’t justify pure unequal consideration of interests, perhaps we could adopt its watered-down alternative–value pluralism.

Suppose we’re 90% confident that all generations have equal moral value, and believe there’s a 10% chance that only the current generation matters. (There are other possibilities, but let’s narrow it down to these two for simplicity’s sake.) There’s no agreed-upon method for handling this kind of moral uncertainty, but let’s say we handle it by assigning resources to each moral system in proportion to our confidence in it and then letting each system “buy” what it wants. So if we have $1 million, we give $900,000 to our first moral system, which says that all generations have equal moral value, and $100,000 to our second system, which only values the current generation. Then each system can spend the money on whatever intervention it believes is best, or the systems can trade with each other[12].
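Here’s a minimal sketch of that allocation rule, using the example’s numbers; the representation is mine, and it glosses over how the systems would actually trade:

```python
budget = 1_000_000
credences = {
    "all generations count equally": 0.90,
    "only the current generation counts": 0.10,
}

# Each moral system gets resources proportional to our credence in it.
for view, p in credences.items():
    print(f"{view}: ${budget * p:,.0f}")
# -> all generations count equally: $900,000
# -> only the current generation counts: $100,000
# Each system then funds its preferred intervention, or trades its
# budget with the other system.
```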

If we agree that this system makes sense (and I do have some doubts about it), you could justify giving some money to causes that aren’t optimal according to your dominant value system. However, if we’re pretty confident that future generations matter (as I believe we should be), then this alone cannot justify allocating lots of resources to interventions that only help the present generation.

Combining multiple reasons

Maybe you believe that none of the above arguments fully work, but you put weak credence in some of them, and that’s enough to justify diversifying grants across many cause areas. Say you think there’s a non-trivial chance that we should apply time discounting to the value of beings in the future (so you believe the principle of equal consideration of interests might be wrong), and you also think the best cause areas have less than $100 million/year in room for more funding, but maybe not a lot less. Then it might make sense to devote significant capacity to cause areas like criminal justice, even if no single one of these factors would have been sufficient.

This is the most compelling reason we’ve looked at, but even this doesn’t have sufficient persuasive force.

  1. The best cause areas look sufficiently better than the alternatives that they would have to see extraordinarily rapidly diminishing marginal utility for wide diversification to do more good than a narrower focus.
  2. Fundamental risk aversion just makes no sense.
  3. “Fundamentally confused” risk aversion is fundamentally confused (hence the name).
  4. There is no compelling argument that non-human animals’ interests matter less than humans’, or that we should discount the far future[13]; and it’s implausible that farm animals are massively less sentient than humans.
  5. Similarly, under a value-pluralistic approach, we should only assign small probabilities to moral theories that heavily discount non-human animals or the far future.

Even if some combination of these reasons does look sufficiently compelling, that doesn’t mean we should start funding lots of cause areas right away. We have limited research capacity, and actions now matter more than actions in the future, so it makes sense to prioritize the most important cause areas. I can think of some reasons why you might start with less important areas; I don’t find these reasons compelling, but this essay is already way longer than I had planned, so I’ll leave that discussion for another time.

If you’re a big funder with lots of money, you should prioritize a small set of cause areas (or perhaps even a single cause area) that you believe is most effective.

Notes

  1. Any particular grant may have a high probability of failing; a grant that looks really good right now might turn out to be useless. What we want to measure is the expected value of an intervention, not its realized value.

    For example, if intervention A is guaranteed to save one life and intervention B has a 10% chance of saving a thousand lives, then intervention A has a 90% probability of producing a better result, but intervention B is definitely better in expectation, so we should prefer intervention B.
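    Spelling out the arithmetic of the example:

```python
# Expected value of each intervention in the example.
ev_a = 1.0 * 1      # intervention A: certain to save 1 life -> EV = 1
ev_b = 0.1 * 1000   # intervention B: 10% chance of 1000 lives -> EV = 100

# A produces the better outcome 90% of the time (whenever B fails),
# but B is 100x better in expectation.
print(ev_a, ev_b)  # -> 1.0 100.0
```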

  2. The Greenbaum Foundation appears to be value-aligned, but it’s much smaller, so the same considerations do not apply.

  3. There are exceptions to this. On a traditional economic cost curve, average cost falls as output increases and then, past some point, begins to rise. A firm with low output can realize increasing returns by producing more. Most charitable activities probably follow a similar pattern: initially, each dollar does more good than the previous dollar, and this trend eventually reverses. But just as few firms find themselves on the declining part of the cost curve, I would expect few interventions to have increasing marginal utility of money.

  4. If we’re going by Open Phil’s classification system then animal advocacy counts as US policy, but I think of it separately because its goal is to help non-human animals–this arguably makes it more different from any of Open Phil’s other focus areas than they are from each other.

  5. Sell, T. K., & Watson, M. (2013). Federal agency biodefense funding, FY2013-FY2014. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, 11(3), 196-216.

  6. This is a comedic understatement. I’m so funny, right, guys? I may be a lunatic who wants humanity to spend world GDP on reducing wild animal suffering, but at least I’m self-aware about it.

    Actually, in all seriousness, I think marginal spending on AI safety is probably better than marginal spending on wild animal suffering. That’s considered a reasonable position in some circles, right?

  7. Justification: let’s say that if we are willing to spend $30 per prisoner per year, then we should be willing to spend at least $1 per wild vertebrate per year. There are about a quadrillion wild vertebrates, which gives about $1 quadrillion per year. This is a crude estimate (you can’t just throw money at a problem in proportion to its size and expect every dollar to be equally effective), but it gives a rough idea of how much it might be reasonable to spend on alleviating wild animal suffering. Note that $1 quadrillion is much higher than world GDP. This is a feature, not a bug: if we used all of humanity’s economic production on preventing wild animal suffering, we probably wouldn’t come anywhere close to solving the problem.
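    Spelled out, with both inputs being the rough figures above:

```python
spend_per_animal = 1      # dollars per wild vertebrate per year
wild_vertebrates = 1e15   # ~one quadrillion wild vertebrates

print(f"${spend_per_animal * wild_vertebrates:,.0f} per year")
# -> $1,000,000,000,000,000 per year: ~$1 quadrillion, far above
# world GDP, as the footnote notes.
```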

  8. Kelsey wrote this paragraph almost in its entirety.

  9. Singer, P. (2011). Practical Ethics (p. 20). Cambridge University Press. Kindle Edition.

  10. Although we could perhaps justify devaluing future generations’ happiness by adopting the person-affecting view. I believe this view is wrong, but arguing that here would take us too far afield. See Beckstead, N. (2013). On the Overwhelming Importance of Shaping the Far Future. PhD thesis, Department of Philosophy, Rutgers University.

  11. This is sort of an irrelevant side point, but I’m going to briefly justify it anyway.

    The estimate for cage-free campaigns uses the inputs (1) spending on campaigns, (2) number of hens affected, (3) time until reforms would have happened anyway, (4) badness of battery cages. The first two numbers are based on hard data; the last two are harder to estimate.

    The estimate for PSPP relies on (1) spending by PSPP (which Open Phil does not report, but for which it uses a conservative figure), (2) number of affected prisoners (easy to measure), (3) decrease in prison population (non-robust estimate based on impact forecasts), (4) time that reforms last (non-robust assumption), (5) badness of prison life. The individual inputs for the two estimates are comparable in quality, but the PSPP estimate has one more non-robust input, which introduces more variability into the final result.

  12. Hat tip to Carl Shulman for introducing me to this argument.

  13. Based on my understanding and comments like this, I believe most Open Phil employees do in fact discount non-human animals far too much, which partially explains Open Phil’s behavior and makes me more confident that Open Phil is in fact behaving incorrectly. On the question of the value of animals, we have good reason to expect people to be heavily biased, so the fact that people disagree with me is only extremely weak evidence that they might be right. If a well-informed person disagreed with me about, say, the effectiveness of deworming treatments, that would be a complex empirical issue where I don’t particularly expect other people to be less knowledgeable or more biased than I am. But if a large part of my disagreement with Open Phil arises simply because they value animals less, we can be particularly confident that their position is wrong.

    Adding to this, sometimes disagreements about the value of animal interventions ostensibly stem from empirical disagreements when in fact they are caused by simple run-of-the-mill speciesism. As an example, Zach Groff writes about how it’s considered reasonable (even among some people who supposedly appropriately value animals) to give farm animals to poor people as an investment, but it wouldn’t be considered reasonable to give them, say, a child slave–even if doing so helped them economically.


Comments

My guess (which, like Michael's, is based on speculation and not on actual information from relevant decision-makers) is that the founders of Open Phil thought about institutional philosophy before they looked in-depth at particular cause areas. They asked themselves questions like:

How can we create a Cause Agnostic Foundation, dedicated to directing money wherever it will do the most good, without having it collapse into a Foundation For Cause X as soon as its investigations conclude that currently the highest-EV projects are in cause area X?

Do we want to create a Cause Agnostic Foundation? Would it be a bad thing if a Cause Agnostic Foundation quickly picked the best cause and then transformed into the Foundation For Cause X?

Apparently they concluded that it was worth creating a (stable) Cause Agnostic Foundation, and that this would work better if they directed significant amounts of resources towards several different cause areas. I can think of several arguments for this conclusion:

  1. Spreading EA Ideas. It's easier to spread the ideas behind effective altruism (and to create a world where more resources are devoted to attempts at effective altruism) if there is a prominent foundation which is known for the methodology that it uses to choose causes rather than for its support of particular causes. And that works best if the foundation gives to several different cause areas.

  2. Diminishing Returns to Prestige. Donations can provide value by conferring prestige, not just by transferring money, and prestige can have sharply diminishing returns to the amount donated. E.g., giving to your alma mater, whether it's $10 or $10,000, lets them say that a higher percentage of alumni are donors. One might hope that this prestige benefit (with its diminishing returns) would apply to many of the grants from a Cause Agnostic Foundation, and that it will be well-regarded enough to bring other people's attention to the causes & organizations that it supports.

  3. Ability to Pivot. If a foundation focuses on just one or two cause areas (and hires people to work on those cause areas, publicizes its reasons for supporting those cause areas, builds connections with other organizations in those cause areas, etc.) that can make it hard for it to keep an open mind about cause areas and potentially pivot to a different cause area which starts looking more promising a few years later.

  4. Learning. We can learn more if we pursue several different cause areas than if we just focus on one or two. This can include things like: getting better at cause prioritization by doing it a lot, getting better at evaluating organizations by dealing with some organizations that are in cause areas where progress is relatively easy to track, and learning how to interact with governments in the context of criminal justice reform and then being better able to pursue projects involving government in other cause areas.

  5. Hits. A foundation which practices hits-based giving can tolerate a lot of risk, but it may need to have at least some visible hits over the years in order to remain institutionally strong. Diversifying across cause areas can help that happen.

My sense is that this is an incomplete list; there are other arguments like these.

It's worth noting that many of these lines of reasoning are specific to a foundation like Open Phil, and would not apply to a single wealthy donor looking to donate his or her own money.

A few of these reasons do suggest that it might be useful to make grants in a cause area to stay open to it, keep actively researching it, and keep potential grantees aware that you're funding it. This would suggest that it's worthwhile to spend relatively small amounts of money on less promising cause areas, maintaining enough spending to keep momentum.

This does have downsides:

  1. It costs money. If you can afford to spend $200 million/year and you want to spend $5 million/year on each suboptimal cause area, then with ten to twenty such areas you'd eat up a quarter to a half of your budget (see the sketch after this list).
  2. It costs staff time. You have limited capacity to do research and talk to grantees, so any time spent doing this in a suboptimal cause area is time spent not doing it in an optimal cause area. Maybe you could resolve this by putting only passing investment into less important areas and making grants without investigating them much.
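A quick sketch of the budget arithmetic from point 1; the ten-to-twenty-areas range is my illustrative assumption:

```python
budget = 200e6    # annual grantmaking capacity
per_area = 5e6    # annual spending per suboptimal cause area

for n_areas in (10, 20):
    print(f"{n_areas} areas -> {n_areas * per_area / budget:.0%} of budget")
# -> 10 areas -> 25% of budget
# -> 20 areas -> 50% of budget
```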

Making grants in secondary cause areas has benefits, but the question is, does it have sufficient benefits to make it better than spending those grants on the strongest cause area(s)?

It's easier to spread the ideas behind effective altruism [...] if there is a prominent foundation which is known for the methodology that it uses to choose causes rather than for its support of particular causes.

Aside from the fact that I'm skeptical of this claim, Open Phil is fairly opaque about how it makes grant decisions. It produces writeups about the pros and cons of cause areas/grants, which is nice, but that doesn't tell us why a given grant was chosen rather than some other grant, or why Open Phil has chosen to prioritize one cause area over another.

And like I said, I'm skeptical of this claim. Perhaps making grants to lots of cause areas promotes EA ideas. But since the standard EA claim is that individual donors should give to the single best cause, maybe a foundation would better promote EA ideas by focusing on the single best area until it has enough funding that it's no longer best on the margin. I don't really know either way, and I don't know how one would know.

I'm also not convinced that promoting EA ideas is a good thing.

My intuition is that you might be overestimating how much information is available to donors. There is also uncertainty over the value of purchasing additional information: it seems you need to buy at least a little information, in the best way you know how, in order to start calibrating how valuable that information is, and thus how valuable your future information purchases will be.

Getting information is definitely important in a lot of cases. I believe it's more important for narrow decisions (e.g., which interventions to support within a cause) than for broad decisions (such as whether to prioritize short-term or far-future interventions). I don't believe there's much you could learn from making grants about how to prioritize short-term versus far-future interventions, since this depends mostly on theoretical questions and extremely long-term effects that you can't really measure.

since this depends mostly on theoretical questions and extremely long-term effects that you can't really measure.

This itself is the sort of hypothesis that we would want to test with additional research. What sorts of actions, if any, have ever had predictable long-term consequences? What is the actual time horizon of, e.g., qualitative predictions (unknown) versus quantitative predictions (around 400 days, according to superforecasting work so far)?

Thanks, all, for the very thoughtful post and comments!

At some point this year, I hope to make a post about our general reasons for wanting to put some resources into the causes that look best according to different plausible background worldviews and epistemology. Dan Keys and Telofy touched on a lot of these reasons (especially Dan's #3 and #4).

I think our biggest disagreement with Michael is that he seems to see a couple of particular categories of giving (those relating to farm animal suffering and direct existential risk) as massively and clearly better than others, with high certainty. If we agreed, our approach would be much more similar to what Michael suggests than it is now. We have big uncertainty about our cost-effectiveness estimates, especially as they pertain to issues like flow-through effects. I'll note that I've followed some of Michael's links but haven't ended up updating in the direction of more certainty about things he seems to be certain of (such as how we should weigh helping animals compared to helping humans).

We do think we've learned a lot about how to compare causes by exploring specific grants, and we think that in the long run, our current approach will yield important option value if we end up buying into worldview/background epistemology that doesn't match our current best guess. It's also worth noting that our approach requires commitments to causes, so our choice of focus areas will change less frequently than our views (and with a lag).

I think our other big disagreement with Michael is about room for more funding. We are still ramping up knowledge and capacity and have certainly not maxed out what we can do in certain causes, including farm animal welfare, but I expect this to be pretty temporary. I expect that we will hit real bottlenecks to giving more pretty soon. In particular, I am highly skeptical that we could recommend $50 million with even reasonable effectiveness on potential risks from advanced artificial intelligence in the next year (though recommending smaller amounts will hopefully, over time, increase field capacity and make it possible to recommend much more later). We're not sure yet whether we want to prioritize wild animal suffering, but I think there is even more of a bottleneck there to effective spending in the reasonably near term.

Thanks for the response, Holden. I appreciate it when you engage with public comments on GiveWell/Open Phil.

I think our biggest disagreement with Michael is that he seems to see a couple of particular categories of giving [...] as massively and clearly better than others, with high certainty.

I'm probably more confident than you are about cause prioritization, but I don't believe that's necessary for my arguments. You just have to be weakly confident that one area is better, and that it has more room for funding than you can fill in the long term. But if you're only weakly confident that one cause area is better than another, then that makes Dan's #3 look more compelling, so diversifying may be the right call in that case.

I'll add that I agree with you that there's almost certainly not $50 million worth of "shovel-ready" grants in AI safety, and definitely not in wild-animal suffering, but the problems are big enough that they could easily absorb this much funding if more people were working on the problems. Committing money to the problems is probably one of the best ways to incentivize people to work on them—Open Phil already seems to be doing this a bit with AI safety. I don't know as much about grantmaking as you do but my understanding is that you can create giving opportunities by committing to cause areas, which was part of Open Phil's motivation for making such commitments.

I think the more uncertain you are, the more learning and option value matter, as well as some other factors I will probably discuss more in the future. I agree that committing to a cause, and helping support a field in its early stages, can increase room for more funding, but I think it's a pretty slow and unpredictable process. In the future we may see enough room for more funding in our top causes to transition to more concentrated funding, but I think the tradeoffs implied in the OP have very limited relevance to the choices we're making today.

That’s an interesting observation. Here are a few more reasons, and possible reasons, for Open Phil’s priorities. I don’t know whether they are sufficient to explain the phenomena you have observed.

Outside constraints:

  1. Scalability of recipients. Open Phil may have limited confidence in an organization’s ability to scale, e.g., because there is little talent on the market, so hiring will be slow no matter how much money Open Phil throws at the organization. If we define the funding gap in terms of the highest “execution level” that Open Phil recognizes, plus a reasonable safety margin, then the marginal utility of further grants drops very steeply beyond that gap, because the organization would have no choice but to save the money, something Good Ventures could have done at least as well.
  2. Pacing of scaling. Spaces that have previously received smaller grants (e.g., farmed and wild animal advocacy) will have to scale up before they can absorb the grants that spaces like prison reform can absorb already, so the really comparable quantity is the integral of all grants to a space over the coming years or decades, or however long the recipients need to scale up.
  3. Stability of recipients. The funding gaps that Open Phil can fill are further restricted because in many cases it can’t simply fill an organization’s whole funding gap, or other impact-minded donors would redirect their donations to an organization that still has one. An organization whose gap Open Phil filled would become highly dependent on Open Phil, a precarious situation that also reduces Open Phil’s flexibility. Hence Open Phil has to either fill only part of the funding gap or restrict funding to one particular project of an organization.

Organizational constraints:

  1. Value of information. Open Phil has repeatedly pointed out how grants can open doors to greater insight, e.g., because people in the spaces learn that they can prioritize conversations with Open Phil with lower risk of wasting their time. (This overlaps with Dan’s comment.)
  2. Scalability of Open Phil. To operate at the necessary scale, Open Phil needs to split up the prioritization and grantmaking tasks in order to parallelize them. Cause prioritization is probably costly because it requires that a team have an overview of all the causes that need to be compared, which is only possible at a much shallower level. At that level it may seem plausible that there are highly effective interventions in all the prioritized cause areas, but to investigate further, part of the team has to specialize, or hire a specialist, someone who will be much less able to assume the generalist’s perspective necessary for comparing areas. Since some people will certainly remain generalists to oversee the operation, it will eventually become more or less clear which areas have turned out to be more suitable than others.

That said, I think there are more funding gaps at least in farmed animal advocacy (and there may soon be more in wild animal advocacy as well), and I think they would be great enough to enable more grants without driving away other donors. Especially grants restricted to marketing and onboarding (and other important items that charities rarely try to advertise to donors) should have the opposite effect and, in effect, actually attract donors. GiveDirectly’s case is again a good precedent.

I’m about to publish a blog post on a coordination problem that I think is highly important to the farmed animals space. Open Phil could greatly alleviate this problem by making the same commitment to ACE’s top charities that it has made to GiveWell’s top charities: grants whose size depends on the charities’ total funding gaps, minimizing fungibility concerns. (More on that on Thursday probably.)

Edit: My abovementioned post.