This post attempts to clear up some confusions and discuss why the Open Philanthropy Project isn't currently funding organizations focused on promoting effective altruism. "We" refers to the Open Philanthropy Project.

We're excited about effective altruism, and we think of GiveWell as an effective altruist organization (while knowing that this term is subject to multiple interpretations, not all of which apply to us).

Over the last few years, multiple organizations have sprung up that focus on promoting and supporting the general ideas associated with effective altruism, and there's a case to be made that the Open Philanthropy Project should be funding such organizations. We may do so in the future, but we haven't to date and don't plan to do so imminently.

We are particularly interested in clarifying our thinking, and pointing out some of the constraints and limitations we face, in order to make clear that our lack of funding in this area does not mean that we have negatively evaluated the giving opportunities. We encourage individual donors to support organizations in this area if and when they feel they have ample context and a strong case for doing so.

For brevity, we abbreviate "grants to organizations focused on promoting and supporting the general ideas associated with effective altruism" as "EA organization grants" for the remainder of this post. Despite the abbreviation, this term isn't meant to include all organizations that consider themselves "EA organizations," and in particular doesn't include organizations that focus on global catastrophic risk reduction.

Summary:

  • We generally (though with some exceptions) avoid "one-off" giving opportunities. Instead, we focus on choosing focus areas, building capacity appropriate to our focus areas, and making grants within focus areas. Therefore, the question of why we don't make EA organization grants mostly comes down to the question of why we haven't made EA organization grants one of our focus areas.
  • We feel that EA organization grants would be a relatively intensive focus area, in terms of the staff capacity it would require. There are several reasons for this, including the number of people and organizations seeking funding, the profile we would need for a staff member to focus in this area, and some risks of working in this space. We have tried thinking of ways to enter this area without using major staff capacity on it, but by default, we would have to sacrifice major parts of our current agenda to work in this area.
  • We think that other work is a better fit for our comparative advantage and long-term strategy. We feel that there are other donors well-positioned to fund EA organizations, while we are uniquely well-suited to apply effective altruist values in areas that are less explicitly "about" effective altruism.

There are several circumstances under which we might make exceptions (one-off grants) or change our minds about EA organization grants as a focus area:

  • We would make any EA organization grant that seemed clearly outstanding, enough to offset all of the concerns listed here.
  • We expect changes in the future on a couple of important fronts: our capacity and stage of development, and the availability of giving opportunities for EA organization grants. At some point, we expect to put more time into considering an EA organization grants program area; this would include engaging more deeply with questions we haven't yet thought hard about, such as how this area scores on our criteria of importance and tractability.
  • We've also discussed some possible ways of funding EA organizations that would be less staff-intensive than our default "focus area" approach, and may find a way to make one of these work for us.

The bottom line is that at this point in time, we should be seen as generally agnostic on EA organization grants. It is true that we would have put more time into this area if we had seen giving opportunities that seemed overwhelmingly compelling and unlikely to be funded by other donors, but for the most part we have not put enough time into assessing potential EA organization grants to have strong views. We don't think working on such grants is the right decision for our organization today, but we think there are other people for whom EA organization grants may be an excellent choice. While we sometimes share informal thoughts on effective altruism organizations and alert interested donors to giving opportunities, we feel that other donors, particularly those who are highly impact-focused and have a good deal of context on the effective altruism community, are generally well-positioned to make their own judgments about whether to fund effective altruism organizations.

Avoiding "one-off" grants

We have limited capacity, and we generally feel that one way to make the most of it is to concentrate on focus areas. "Focus areas" refers to causes that we've made a deliberate, strategic decision to prioritize.

For any given focus area, we can:

  • Make sure there's one staffer responsible for being highly engaged in the area, meaning they know the relevant community, know the relevant literature, know some of the pitfalls to watch out for, know who the other funders are and how to avoid being fungible with them, and have a generally strong sense of context.
  • Make sure that multiple people - particularly those who need to weigh in on grants - are reasonably up to speed on the area.
  • Clearly lay out our priorities in the area, so that people can have a sense of what sorts of giving opportunities we are and aren't interested in.

When a giving opportunity sits within a focus area, we're generally able to evaluate it efficiently. It's clear whose role it is to provide most of the context; there are multiple people on staff who are familiar enough with the area to weigh in; potential risks are relatively easy to identify. We are well-positioned to explain our thinking to staff, to others in our audience, and to people who work within the cause. By contrast, one-off grants incur many of the costs of analyzing and discussing a new cause, without commensurate benefits.

In the specific case of EA organization grants, some of the costs of one-off grants are muted, while others are heightened. On one hand, we have strong connections in the effective altruist community, and we're broadly familiar with the goals of EA organizations, though we don't have the level of context we'd want for a focus area. So some of the costs associated with investigating focus areas are lower. On the other hand, we're particularly sensitive about problems we could cause if we funded some groups but not others, without being systematic and thoughtful about the reasons for doing so. Doing this could:

  • Send confusing signals. Grants might be interpreted as endorsements, and the lack of a grant might be interpreted as a negative evaluation - more than is warranted if we were making grants on a one-off basis without an overall strategy for the cause. Donors in this area also seem particularly inclined to put weight on our recommendations. Trying to signal accurately to other funders in the community would likely become a major undertaking, much more so than it is under the status quo.
  • Distort our relationships with people in the effective altruism community. If our choices were perceived as largely arbitrary, opaque, and instinctive (as they would likely appear if we made one-off grants rather than systematically thinking through all our options in the cause), people seeking funding might cease to be honest peers and move in the direction of trying to figure out how to relate to us in order to get funding. This risk exists to some degree in any case, but we think it would be greatly magnified if people had the sense they could get funding from us while not having much idea of how.

People in the effective altruism community are among those best positioned both to promote our work and to critique it at a big-picture level. Our relationships in this community are important to us, and that means we'd want to be able to situate any grants within this community in a well-thought-out overall strategy.

We aren't categorically opposed to one-off grants. We can make them when they appear sufficiently (a) outstanding and/or (b) unlikely to distract significantly from our focus-area-oriented work. EA organization grants generally seem particularly unlikely to pass the second criterion.

We have made one small grant in this category as a one-off, and may do so again in the future, but don't plan on much of it.

EA organization grants would be a relatively intensive focus area

There are some cases in which a focus area takes only a small fraction of a staff member's time - for example, land use reform and immigration policy. These causes are characterized by: (a) a thin field and ecosystem around our goals, with only a small number of giving opportunities and little in the way of established players; (b) sitting within a broader category (US policy) that one of our staff members focuses on.

By default, we don't think it would work well to approach EA organization grants this way, for several reasons.

First, there are enough organizations and people seeking funding that it would take serious time investment to keep up with and consider giving opportunities.

Second, there are some risks associated with this area:

  • Potential conflicts of interest. To some people, "promoting effective altruism" has a lot of overlap with "encouraging people to use recommendations put out by GiveWell and the Open Philanthropy Project." To others, it has much less. By default, we naturally have more in common with people who want to heavily promote GiveWell (and the Open Philanthropy Project) than with people who do not. If we made EA organization grants, this fact would probably mean we were more likely (all else equal) to support effective altruism organizations that saw promoting our work as an important part of their mission. At the same time, we would also have a direct incentive pushing us in the same direction: regardless of whether supporting highly pro-GiveWell organizations was good for the world, it would be good for us as an organization. This would be a tricky dynamic: we'd have to work hard to distinguish "doing what's best for the world" from "doing what's most likely to get GiveWell promoted," even though we'd naturally perceive a high correlation between the two. We would risk distorting the effective altruism space in a pro-GiveWell direction, or being perceived as doing so. At the same time, our interest in the space as a whole might appear to people outside the effective altruism community as being primarily about promoting ourselves.
  • Managing high-stakes relationships. We see the effective altruist community as a significant source of useful feedback as well as potential staff, and many of our current staff have strong social ties to the community. We would feel the need to engage carefully with any conflict and debate our funding strategy might cause, which raises the importance of being thoughtful and systematic about our choices.
  • Risks of amplifying harmful messages. A fair amount of work has gone into defining and promoting the label "effective altruism," and our organization is partly associated with this label. At the same time, effective altruism is still in a very nascent phase, there are several messages associated with it that we're not comfortable with, and amplification of problematic messages at this stage could affect general perceptions around the label, which could be a problem for both effective altruism and us.

It's important to note that the above factors aren't arguments against working on EA organization grants; they're simply reasons that doing so would have to be fairly intensive for us. Many focus areas bring substantial challenges and would require significant effort to work in, and we are happy to take these challenges on for the small number of focus areas we prioritize most highly.

A final note on this topic is that there is a fairly small set of people whom we could picture leading our work on EA organization grants, and all of them could work on many other areas for us as well. We have done several cause-specific job searches, looking for people we wouldn't find through our generalist hiring process, but we don't think this would work well for EA organization grants.

The bottom line of this section is that the opportunity cost of working on EA organization grants would be fairly high for us. It would likely require substantial time from generalist senior staff, which - at this stage in our development - would substantially slow our work on another broad category, such as scientific research funding.

We are actively thinking about ways we could approach EA organization grants in a less intensive way. We haven't yet settled on an approach we're comfortable with.

Our comparative advantage

There are several reasons we prefer our current research agenda to one with more focus on EA organization grants. One is our informal assessment of the importance, neglectedness and tractability of the "EA organization grants" area. While we think greater interest in effective altruism could lead to a large amount of positive impact, the giving opportunities we've seen (after accounting for existing sources of funding) don't seem outstanding enough to make working in this area preferable to exploring other areas. However, this assessment is highly speculative and informal - especially given how far we are from having a good understanding of some of the areas we're working on, such as scientific research.

A different line of reasoning (that overlaps to some degree with questions around importance, neglectedness and tractability) has to do with our comparative advantage and long-term strategy:

  • We feel that there are other donors well-positioned to fund EA organizations.
  • We feel that we are uniquely well-suited to apply effective altruist values in areas that are less explicitly "about" effective altruism, and doing so could have significant benefits for our long-term strategy and for the effective altruism community.

Other donors are well-positioned to fund EA organizations

It currently seems to us that:

  • Most effective altruist organizations have raised reasonable amounts of money relative to their stages of development. They have also generally hit their funding targets, though this data point should be interpreted very cautiously (funding targets are often set based on what's achievable).
  • There are a number of people who (a) donate significant amounts; (b) are interested in EA organization grants; (c) are sufficiently connected in the effective altruism community to have strong context for evaluating potential EA organization grants; (d) do support much of the work in this space. In many cases, these people spend substantial time with the organizations they support and get to know their plans and leadership very well.

This isn't to say that we've seen no funding gaps. But the field as a whole seems both relatively young (hence limited room for more funding) and capable of raising money in line with its current stage of development. And we're unsure of how much value we'd add over existing donors. The most distinctive aspects of the Open Philanthropy Project's approach to funding pertain to cause selection; our model for working within a cause is not (so far) very different from the approach many others take, which is to assign most of the decision-making to a person who knows the space (and its organizations, people and literature) well and can make informed judgment calls.

As a more minor point, if we made a large commitment to the space of EA organization grants, we'd be somewhat worried about causing others to give less to EA organizations. The donors we're discussing tend to be highly attentive to questions like "If I didn't fund this project, would someone else?" - but we're not confident in all cases that we would agree with the details of how they handle these questions. If we made EA organization grants a focus area, and others gave less to EA organizations in hope that we would fill the gap, effective altruism organizations could end up with less robust donor bases, i.e. relying more heavily on fewer donors, and therefore in a weaker position.

It does seem worth noting that today's donors, by supporting EA organizations at their current level, may be helping build capacity that will lead to much larger giving opportunities down the line, and thus making our future entry into the space more likely.

Our comparative advantage and long-term strategy

We feel that there are multiple individual donors who have similar values to ours and are well-positioned to evaluate EA organizations. By contrast, we feel that it is very challenging for individuals to apply effective altruist values to causes that aren't "about" effective altruism, and this is an area where we feel uniquely well-positioned to be helpful.

We've long believed that one of the best things we can do for effective altruism is to give it more substance and definition. There are other groups who focus on getting more people to become interested in the broad ideas and values behind effective altruism; by contrast, we feel particularly well-positioned to help people identify specific donations they can make, issues they can engage with, etc. once they have bought in. Doing this can, itself, help get more people interested in effective altruism. For example, we believe that GiveWell's work on top charities has improved engagement from people who wouldn't have been drawn in purely by the abstract idea of effective giving. Much as one might develop and refine a scientific theory by using it to make predictions, we're trying to develop and refine an effective altruist framework by using it to arrive at concrete recommendations. We believe that doing so can help improve and make the case for the framework, and that this is a distinct goal from supporting promotion of the framework.

There are a few other reasons that working on causes other than EA organization grants fits well with our long-term strategy:

  • In general, we aim to focus our efforts toward a long-term goal of moving very large amounts of money. In some cases, we work in causes where we see imminent potential to make tens of millions of dollars per year in grants (for example, criminal justice reform and biosecurity). In other cases, we make small grants in a small field in the hopes that the field will grow and eventually have this sort of room for more funding. When it comes to EA organization grants, we feel that the field doesn't yet have room for a lot more funding, and that a reasonable amount of field-building is already going on without our funding.
  • When the time comes that we have capacity for working on EA organization grants, we feel we will be able to do so relatively quickly, without needing the extended period of refining frameworks and building staff capacity that we need for other areas. By contrast, working on our current agenda means building capacity and knowledge for working in areas we couldn't otherwise explore. By prioritizing our current agenda, we believe we'll later be able to do more than if we were currently prioritizing EA organization grants.
  • Over the long run, we hope to gain a reputation as a valuable resource for philanthropists, and thus to influence many future philanthropists. We think the areas we're exploring - which are highly likely to be of interest to future philanthropists, and which we currently have limited understanding of - will do more for this goal than making EA organization grants.
  • We have a general preference for avoiding cross-cutting, "meta" causes that we feel we'll be better positioned to work on later. For example, we may be more interested in "improving democracy" work after we've gained a better sense of the major issues and challenges in US policy, which can help us ensure that any "improving democracy" work we do focuses on the issues we see as most crucial. There is an analogous preference regarding effective altruism: we feel that we will be better positioned to see the potential value-added of effective altruism once we can say more about what causes we think are most in need of more attention, and thus how additional effective altruists can add the most value. This is a relatively minor factor, but seems worth noting.

Bottom line

We're not imminently planning to make EA organization grants, either as one-offs or via a focus area, but we're continually reassessing this stance. Our staff capacity is in flux (generally growing), as is the state of the effective altruism community and the associated giving opportunities. As room for more funding in the EA organization grants space grows, and as our capacity grows, the case for working on EA organization grants gets stronger. We do want to hear about giving opportunities in this space; the more pressing, unfilled gaps we hear about, the more likely we are to make the space a focus area, and we haven't ruled out one-off grants if the right opportunities (both in terms of promise and in terms of the time required from us and the potential risks) arise.

For the time being, however, we wish to make clear that we see no conflict between (a) choosing not to make EA organization grants ourselves and (b) being glad that there are other donors interested in doing so (both of which are the case).

Comments

Thanks for the comments, all.

Telofy and kgallas: I'm not planning to write up an exhaustive list of the messages associated with EA that we're not comfortable with. We don't have full internal agreement on which messages are good vs. problematic, and writing up a list would be a bit of a project in itself. But I will give a couple of examples, speaking only for myself:

  1. I'm generally uncomfortable with (and disagree with) the "obligation" frame of EA. I'm particularly uncomfortable with messages along the lines of "The arts are a waste when there are people suffering," "You should feel bad about (or feel the need to defend) every dollar you spend on yourself beyond necessities," etc. I think messages along these lines make EA sound overly demanding/costly to affiliate with as well as intellectually misguided.

  2. I think there are a variety of messages associated with EA that communicate unwarranted confidence on a number of dimensions, implying that we know more than we do about what the best causes are and about the extent to which EAs are "outperforming" the rest of the world in terms of accomplishing good. "Effective altruism could be the last social movement we ever need" and "Global poverty is a rounding error compared to other causes" are both examples of this; both messages have been expressed prominently enough to make it into this article, and both are problematic in my view.

Telofy: my general answer on a given grant idea is going to be to ask whether it fits into any of our focus areas, and if not, to have a very high bar for it as a "one-off" grant. In this case, supporting ACE fits into the Farm Animal Welfare focus area, where we've recently made a new hire; it's too early to say where this sort of thing will rank in our priorities after Lewis has put some work into considering all the options.

I’m looking forward to news from Lewis then!

Agreed on point 2.

About point 1: “I think messages along these lines make EA sound overly demanding/costly to affiliate with”: This strategic issue is one that I have no informed opinion on. Intuitively I would also think that people work that way, but the practice of hazing (e.g., the initiation rites of fraternities) suggests that such costliness might counter recidivism, and that's an important factor. Moral frameworks that have this obligation aspect also seem simpler and more consistent to me, which might make it easier to defend them convincingly in outreach.

“As well as intellectually misguided”: From a moral antirealist’s perspective, this depends on the person’s moral framework. Taking Brian’s critique of the demandingness critique into account, this does apply to mine, so whether to demand the same from others again boils down only to the strategic question above. Do you have an ethical or epistemic reason why it would be misguided even from a broadly utilitarian viewpoint?

I really appreciate when you or other GiveWell employees take the time to write up your positions like this.

Thank you! That makes a lot of sense and increases my estimate of the marginal value of ETG for me.

One on-topic question: You say that “there are several messages associated with it that we're not comfortable with.” I have a bit of a history of updating in response to explanations from GiveWell, so I’m worried that I’m also running the risk of perpetuating (myself or through my donations) EA messaging that I will, in retrospect, regret. At the same time I’m puzzled as to which messages you might be referring to. Can you clarify this?

One explicitly off-topic question: Since ACE’s top charities are not charities that have grown out of the EA movement and ACE’s raison d'être is close to GiveWell’s, is there a chance you could provide a fixed yearly grant to ACE for regranting so as to incentivize charities to cooperate with it? (Small and fixed enough so not to make individual donations fungible to any worrisome degree.)

“At the same time, effective altruism is still in a very nascent phase, there are several messages associated with it that we're not comfortable with, and amplification of problematic messages at this stage could affect general perceptions around the label, which could be a problem for both effective altruism and us.”

Could you please clarify?