
Edit 2024/01/02: People who like this new vision may want to consider donating to EAIF

Now until the end of January is an unusually good time to donate, as it allows you to take advantage of 2:1 Open Phil matching ($2 from them for every $1 from you)[1]. My best guess is that (unlike with LTFF) the EAIF match will not be filled by default unless significantly more donors chip in. We currently have ~$1.5M of the $3.5M match filled (see live dashboard here).

Summary

  • EA Infrastructure Fund (EAIF)[1] has historically had a somewhat scattershot focus within “EA meta.” This makes it difficult for us to know what to optimize for or for donors to evaluate our performance. More
  • We propose that we switch towards focusing our grantmaking on Principles-First EA.[2] More
  • This includes supporting:
    • research that aids prioritization across different cause areas
    • projects that build communities focused on impartial, scope-sensitive, and ambitious altruism
    • infrastructure, especially epistemic infrastructure, to support these aims
  • We hope that the tighter focus area will make it easier for donors and community members to evaluate the EA Infrastructure Fund, and decide for themselves whether EAIF is a good fit to donate to or otherwise support.
  • Our tentative plan is to collect feedback from the community, donors, and other stakeholders until the end of this year. Early 2024 will focus on refining our approach and helping ease the transition for grantees. We'll begin piloting our new vision in Q2 2024. More

Note: The document was originally an internal memo written by Caleb Parikh, which Linch Zhang adapted into an EA Forum post. Below, we outline a tentative plan. We are interested in gathering feedback from community members, particularly donors and EAIF grantees, to see how excited they’d be about the new vision.

Introduction and background context

I (Caleb)[3] think the EA Infrastructure Fund needs a more coherent and transparent vision than the one it is currently operating under.

EA Funds' EA Infrastructure Fund was started about 7 years ago under CEA. The EA Infrastructure Fund (formerly known as the EA Community Fund or EA Meta Fund) has given out 499 grants worth about $18.9 million since the start of 2020. Throughout its various iterations, the fund has had a large impact on the community, and I am proud of a number of the grants we've given out. However, the terminal goal of the fund has been somewhat conceptually confused, which has likely led to a focus and allocation of resources that often seemed scattered and inconsistent.

For example, EAIF has funded various projects associated with meta EA. Sometimes these are expansive, community-oriented endeavors, like local EA groups and podcasts on effective altruism topics. However, we've also funded more specialized projects for EA-adjacent communities, including rationality meetups, fundraisers for effective giving in global health, and AI Safety retreats.

Furthermore, in recent years, EAIF has also functioned as a catch-all grantmaker for EA or EA-adjacent projects that aren't clearly under the purview of other funds. For example, it has backed early-stage global health and development projects.

I think EAIF has historically served a valuable function. However, I currently think it would be better for EAIF to have a narrower focus. As the lead for EA Funds, I have found EAIF's bottom line quite unclear, which has made it challenging to assess the fund's performance and grantmaking quality. This lack of clarity has also posed challenges for fund managers in evaluating grant proposals, as they frequently face thorny philosophical questions, such as determining the comparative value of a neartermist career versus a longtermist career.

Furthermore, the lack of conceptual clarity makes it difficult for donors to assess our effectiveness or how well we match their donation objectives. This problem is exacerbated by our switch back to a more community-funded model, in contrast to our previous reliance on significant institutional donors like Open Phil[4]. I expect most small and medium-sized individual donors to lack the time or resources to carefully evaluate EAIF's grantmaking quality given this conceptual confusion. Likewise, grantees, applicants, and potential applicants may be uncertain about what the fund is looking for.

Finally, having a narrower purpose clarifies the areas EAIF does not cover, allowing other funders to step in where needed. Internally, having a narrower purpose means our fund managers can specialize further, increasing the efficiency and systematization of grant evaluations.

Here is my proposal for the new EAIF vision, largely focused on principles-first EA (or at least my interpretation of it). I believe this gives the fund a clearer bottom line, complements the work done by other orgs nicely, and represents a perspective currently neglected by other EA funders.

Proposal

The EA Infrastructure Fund will fund and support projects that build and empower the community of people trying to identify actions that do the greatest good from a scope-sensitive and impartial welfarist view. In general, it will fund a mixture of activities that:

  1. Increase the number of people trying to do an ambitious amount of good using evidence and reason, with a focus on scope sensitivity, impartiality and radical empathy. This is mainly achieved by expanding the Effective Altruism community, though not exclusively.
  2. Help these individuals allocate their resources, or otherwise do good, in the most altruistically impactful way.

Examples of projects under the new EAIF’s purview

  • Research that aids prioritization across different cause areas
    • Initiatives such as 80,000 Hours and Probably Good
    • A retreat for researchers in nascent fields (like ethics concerning digital minds) where the outputs may be helpful in determining whether the EA community should devote more resources towards expanding those fields.
  • Projects that help grow the EA community.
    • Local and university-based EA groups with high epistemic integrity 
    • An op-ed discussing the world's most pressing issues from an Effective Altruism viewpoint.
  • Infrastructure, especially epistemic infrastructure, to support these aims
    • Guesstimate
    • Manifold Markets

Examples of projects that are outside of the updated scope

Here are some examples of “meta” projects that may have been within the current EAIF’s purview, but I think will fall outside of the new scope. (Note that many of them might still otherwise be exciting or high-impact).

  1. An introductory website to AI Safety, such as AIsafety.com.
  2. Travel reimbursements for students to attend an alternative protein conference.
  3. A community-based AI safety or animal welfare group.
  4. A university society or club dedicated to global health and wellbeing.
  5. An organization promoting effective giving, aimed at supporting Global Health and Wellbeing charities.

Why focus on Principles-First EA?

H/t to Joe Carlsmith for crystallizing some of these takes for me.

  • I[3] think EA is doing something special.
    • It has attracted many people towards problems that I think are very pressing, including many people that I think have a lot of impact potential.
    • Historically, EA has acted as a beacon for thoughtful, sincere, and selfless individuals to connect with each other.
      • I think at some point we shifted towards more of a recruitment-focused approach, rather than nurturing a community ethos. In my view, this shift moved us away from the core of EA that I think of as special and important.
        • Though perhaps a more recruitment-oriented version of EA could be better for the world.
  • I think that fighting for EA right now could make it meaningfully more likely to thrive long term.
    • I suspect that the reputation of EA has (justifiably) taken a hit due to the FTX situation and other negative media attention - but the impact on the brand isn't as severe as it might seem.
      • I think there are many updates to be made from FTX, but we shouldn’t abandon the brand or movement-building altogether just because of FTX.
    • Many of the organizations that I see as most central to keeping EA afloat might decide to prioritize direct work. I think this is true even if, collectively, they’d endorse more resources going towards Principles-First EA than the current allocation.
  • I think that we could make EA much better than it currently is - particularly on the “beacon for thoughtful, sincere, and selfless” front.
    • I don’t think EA has done much “noticing what is important to work on” recently.
      • Historically, EA has had many thoughtful people who discovered, or were early adopters of, novel and important ideas.
      • I don’t think many people have recently tried to bring those people together with the express goal of identifying and working on pressing causes.

Potential Metrics

Note that I'm not looking to directly optimize for these metrics. Rather, “If the fund is operating well, I predict we'll see improvements along these dimensions.”

Below are some potential metrics we could consider:

  • The number of people explicitly using EA principles to guide large decisions in their lives.
  • The number of people who can explain the various cruxes between longtermism and neartermism, and between helping humans and helping non-human animals.
  • The number of people working in jobs that generate a substantial amount of altruistic impact, spanning a range of moral views that we find credible.
  • The number of people meaningfully engaging with the EA community.
  • The quality of discussions on the EA forum and other EA platforms, specifically focusing on their epistemic rigor, originality, and usefulness for making a positive impact in the world.
  • The caliber of attendees at EA Global events - evaluated based on their alignment with EA values and their fit for impactful roles.

I will also be interested in quality-weighting these metrics, though this is controversial and may be hard to do in a worldview-agnostic manner. (One possibility is a relatively neutral assessment of some combination of competency and dedication.)

Potential Alternatives for Donors and Grantees

I (Linch) might add more details to this section pending future comments and suggestions.

Unfortunately, some people and projects who are a good fit for EAIF’s current goals might not be a good fit for the new goals. Likewise, donors may wish to re-evaluate their willingness to contribute to EAIF in light of the new strategy. 

For people doing meta-work that is closely associated with a specific cause area, we encourage you to apply for funds that specialize in that cause area (e.g. LTFF for work on longtermism or mitigating global catastrophic risks, Animal Welfare Fund for animals-focused meta projects). 

I will also try to keep an updated list of alternative funding options below. Readers are also welcome to suggest other options in the comments.

People may also be interested in Vilhelm and Jona's observations on the funding landscape of EA and AI Safety.

Tentative Timeline

Until EOY 2023:

  • Get feedback from community members and other stakeholders
  • Gauge donor interest and get soft commitments from donors, to understand what scale EAIF should be operating on next year

Q1 2024

  • Scope out vision more and define metrics more clearly
  • Hire for a new fund chair for EAIF (determine part-time or full-time status based on applicant interest and scale expectations)
  • Hire EAIF fund managers and assistant fund managers
  • Phase out the current version of EAIF (e.g. by giving out exit grants)

Q2 2024

  • Onboard EAIF fund chair
  • 3-month trial period for the new vision

Q3 2024 onwards

  • Continue grantmaking under the new vision (if trial period worked out well)

Appendices

 (no need to read, but feel free to if you want to)

Examples of projects that I (Caleb) would be excited for this fund to support 

  • A program that puts particularly thoughtful researchers who want to investigate speculative but potentially important considerations (like acausal trade and ethics of digital minds) in the same physical space and gives them stipends - ideally with mentorship and potentially an emphasis on collaboration.
  • EA groups at top universities, particularly ones that aren't just funnelling people into longtermism or AIS.
  • A book or podcast talking about underappreciated moral principles
  • Foundational research into "big if true" areas that aren't currently receiving much attention (e.g. post-AGI governance, ECL, wild animal suffering, suffering of current AI systems).
  • Research that challenges common assumptions or explores rarely discussed considerations, like Growth and the case against randomista development.

Note that I[3] don’t plan on being the chair of this fund indefinitely, and probably won’t try and make these kinds of grants whilst I chair the fund.

Scope Assessment of Hypothetical EAIF Applications

These fictional grants are taken from this post; all of them fall within EAIF's current scope. Below, I assess how they would fare under the proposed new scope.

In scope

  • Continued funding for a well-executed podcast featuring innovative thinking from a range of cause areas in effective altruism ($25,000)
  • A program run by a former career counsellor at an elite college introducing intellectually- and ethically-minded college freshmen to EA and future-oriented thinking ($35,000)
  • A six-month stipend and expenses for a dedicated national coordinator of EA Colombia[5] to aid community expansion and project coordination ($12,000)
  • Expenses for a student magazine covering issues like biosecurity and factory farming for non-EA audiences ($9,000)
  • 12 months' living stipend, rent, and operational expenses for 2 co-organizers to develop and test out a program for specialised skill development within the Indonesian EA community and to deliver high-quality localized content ($35,000)
  • Rerunning a large-scale study on perceptions of the EA brand to see if the results changed post-November 2022 ($11,000)
  • Stipend for 4 full-time equivalent (FTE) employees and operational expenses for an independent research organisation that conducts EA cause prioritisation research and assists a few medium-sized donors ($500,000)

Unclear

  • A nine-month stipend for a community builder to run an EA group for professionals in a US tech hub ($45,000)
    • If this ended up mostly focussed on AI safety (plausible, given that it's in a tech hub), it should instead be funded by the LTFF. If it discusses EA more broadly, then EAIF should fund it.
  • Capital and a part-time stipend for an organiser to obtain rental accommodation for 15 students visiting EA hubs for internships during the summer ($40,000)
    • If this ended up mostly focussed on longtermist causes (which is plausible given the kinds of orgs that offer internships in EA hubs) it instead should be funded by the LTFF.

Out of scope

  • Funding a very promising biology PhD student to attend a one-month program run by a prestigious US think tank to understand better how the intelligence community monitors various kinds of risk, such as biological threats ($6,000)
  • Stipend and one year of expenses for someone with local experience in high-net-worth fundraising to launch an Effective Giving Singapore[5] website and start fundraising initiatives in Singapore for highly impactful global health charities ($170,000)
  • A 12-month stipend and budget for an EA to conduct programs to increase the positive impact of biomedical engineers and scientists ($75,000)

Key Considerations

I encourage commenters to share their own cruxes as comments.

  1. Is this vision philosophically coherent?
  2. Will this lead to a specific and narrow worldview dominating EAIF?
  3. How viable is the “EA beacon” in light of FTX?
  4. Are donors excited about this vision?
  5. Do others in the EA community think furthering this vision is a priority (relative to progress in core cause areas)?
  1. ^

    The EA Infrastructure Fund is part of EA Funds, which is a fiscally sponsored project of Effective Ventures Foundation (UK) (“EV UK”) and Effective Ventures Foundation USA Inc. (“EV US”). Donations to EAIF are donations to EV US or EV UK. Effective Ventures Foundation (UK) (EV UK) is a charity in England and Wales (with registered charity number 1149828, registered company number 07962181, and is also a Netherlands registered tax-deductible entity ANBI 825776867). Effective Ventures Foundation USA Inc. (EV US) is a section 501(c)(3) organization in the USA (EIN 47-1988398). Please see important state disclosures here.

  2. ^

    Also known as “EA qua EA” or “community-first EA.” Basically, focusing on this odd community of people who are willing to impartially improve the world as much as possible, without presupposing specific empirical beliefs about the world (like AGI timelines or shrimp sentience). 

  3. ^

    In here and the rest of the document, “I”, “me”, “my” etc refers to Caleb Parikh, unless explicitly stated otherwise.  In practice, many of the actual words in the post were written by Linch Zhang, who likes the vision and tried to convey it faithfully, but is genuinely uncertain about how it compares to other plausible visions. 

  4. ^

    Before our distancing and independence from Open Phil, Open Phil accounted for >80% of EAIF's funding in 2022. For comparison, institutional funders have historically accounted for <50% of LTFF's funding.

  5. ^

    As with the parent post, any reference to proper nouns in hypothetical grants, including country and regional names, should be assumed to be fictional.

Comments

This makes a lot of sense generally, but I see one issue that seems potentially significant.

I have a fairly good understanding of what will happen to more cause-area-specific yet "meta" grants in the x-risk/longtermism and animal-welfare domains. The view that the LTFF and AWF are better suited to funding these opportunities seems fairly compelling. The issue I see is that the EA Funds' Global Health and Development Fund (GHDF) seems to have focused on larger grants to more established organizations; this makes sense given its strong connection to GiveWell's work. That doesn't feel like a good fit for opportunities like the ones described by (4) and (5) of your examples of out-of-scope projects. According to its website, GHDF isn't even accepting applications. Thus, while these sorts of projects are not formally outside of GHDF's scope -- e.g., it has granted to One for the World -- it seems that they may be inaccessible as a practical matter.[1]

Perhaps the ideal solution would be for GHDF to start taking applications that would previously have been within EAIF's scope, so that there is a relatively seamless transition for potential and established grantees. I'm not sure if that is practicable for GHDF, though?

A second possibility would be for EAIF to retain the global-health/development scope for a stated time period, but (for donations received after a specified date in 2024) only out of donor funds that have been designated for that specific scope. That would allow more clarity of scope for EAIF donors while providing a conduit for donors who feel strongly about global-health/development meta work.

Finally, the exit strategy could be slowed down for global health/development specifically, in recognition of the lack of an obvious alternative fund for these sorts of grants. Although exit grants would soften existing grantees' landing for projects receiving ongoing support, it seems plausible that potential grantees may have done significant groundwork for new projects or expansions based on the funding universe as it existed prior to this plan being made public. Moreover, even if one expects other grantmakers would eventually step in to fill the void, this would likely take time. Thus, Q1 2024 may be too soon for phasing out the current version of EAIF, at least where global health/development meta activity is concerned.

 

  1. ^

    I'm aware of Open Phil's work in global health/wellbeing community building, but as you note one of the objectives here is to move toward "a more community-funded model, in contrast to . . . previous reliance on significant institutional donors like Open Phil." A plan in which Open Phil picks up responsibility for funding these sorts of grants in global health/wellbeing seems like a step backward from this objective.

(own views only) Thank you Jason; I think you've nailed the most important (short-term) issue with the changed scope.

I think there are two huge uncertainties with trying to do grants in global health and development meta. The first is that I'm not sure this is what donors want. The second is that I'm not sure there are good grantmakers who are willing to work in this area.

For the first confusion, I don't have survey results or anything, but I think many GHDF donors would feel betrayed if they learned that a significant fraction of their money went to funding ambiguously meta activities.[1]

I do think GHDF donors with high risk tolerance are currently poorly served by the current ecosystem (and may have to either handpick projects themselves to support, or donate to a meta fund with a large cause split). I don't have a good sense of how large this population of donors actually is.

For the second confusion, as an empirical matter I believe it's been difficult to find grantmakers excited about evaluating GHD meta. Even if donors are on board, I don't think the current EAIF is set up well to do this, nor is the current GHDF. 

(In the medium- to long- term, I don't necessarily expect grantmakers to be a significant bottleneck in itself. Having enough assured funding + us focusing more time on hiring might be enough to solve that problem.)

Longer term, I think it probably makes sense for some fund to do global health and development meta[2] (it might even be under EA Funds!). I just don't think it's a good choice right now for either EAIF or GHDF.

I like your exit strategy suggestion and will probably bring it up with the team (note that I don't have any direct decision-making power for EAIF).

Again, these are just my own views. Caleb and other fund managers might disagree, and provide their own input.

  1. ^

    I think many people give to GHDF because they want something that's maybe 10-20% more risky than GiveWell's All Grants Fund. Whereas I expect many meta activities, particularly projects with a longer chain of impact than, say, paying for a fundraiser, to be much more risky.  

  2. ^

    I do think having a non-OP source of funding is good here. In addition to greater independence as you've noted, I think OP GHD community building is just quite conservative, e.g. more inclined to fund things with "one step of meta" and clear metrics. For example, fundraisers that counterfactually raise more money than they cost, or incubate GH charities that are on track to become future GiveWell top charities. Whereas I think people should be excited about the types of programs that originally got people like AGB to donate to global health, or fund neglected interventions research, even when the payoffs are not immediate.

I feel pretty good about surveying donors and allocating some proportion of funding based on that. Ultimately, I don't think it's low integrity or misleading for us to change directions towards meta work on the GHDF if we are still appealing to the values on our website - though I think the specifics of the arrangement matter a lot.

The main issue (imo) is that it's unclear that meta GHDF work is competitive with just donating to GiveWell charities. Conversations with Open Phil GHW have made me a bit less enthusiastic about this direction.

What is the ToC for meta Global Health work?

**Find excellent people who can work at existing direct orgs?** GHD doesn't seem particularly leveraged career-wise right now. Most career opportunities for people in high-income countries (where EA is most prevalent) seem fairly unexciting, particularly junior roles. I could imagine mid/late-career meta work being pretty exciting, but I haven't seen many fundable projects in this area. If you are excited about working on mid/late-career field building in any cause area, please apply to the EAIF!

**Find people who can start new fundraising orgs?** Open Phil is currently funding projects in this area; EAIF also funds projects in this area (and will continue to do so if they work in multiple cause areas).

**Find people who can start new direct charities?** I am most compelled by meta work for Animal Welfare, where it seems like new initiatives could beat the best animal interventions we know. To the best of my knowledge, I don't think that new GHW charities have had much luck beating the best GiveWell charities (by a GiveWell-type view's lights). Of course, you could disagree with GiveWell's worldview; I have some disagreements, though I haven't seen well-reasoned improvements.

(Epistemic status: speculative)

 

ETA a TL;DR -- it may lie in using relatively small amounts of EA funding to counterfactually multiply the positive effect of non-EA resources, or to counterfactually move substantial non-EA funding toward much more effective charities (even if not GiveWell's best).

What is the ToC for meta Global Health work?

It could lie in a few places. As an example, one could provide very low operational funding to student volunteer-led organizations. Having even a small external budget can be a real force multiplier for a student organization, making existing resources (e.g., student volunteer time, access to campus resources, access to a population reflecting on its values with time to hear a good speaker) significantly more effective. 

Drawing on my own life, I went to something like an Oxfam Hunger Banquet as an option toward fulfilling requirements for the freshman seminar class in college. I think that event had a meaningful effect on my own views about effectiveness and global priorities. If one could counterfactually give a similar, even mildly-EA flavored experience to college freshmen for a few dollars each, I speculate that the ROI would be quite good (e.g., in promoting effective giving). That only works if the funding acts as a force multiplier -- you'd need many of the inputs to be provided for "free" by non-EA sources. But as in my Hunger Banquet example, I don't think that is necessarily implausible.

** Find people who can start new direct charities?**  . . . . To the best of my knowledge, I don't think that new GHW charities have had much luck beating the best GiveWell charities (by a GiveWell-type view's lights). 

I don't think we should assume that the new charities will only receive donations from EA sources. If a GHW meta grantmaker provides startup funding to a new charity, and as a result that charity ends up diverting $1MM a year from ~ineffective charities to ~0.5X GiveWell work, the value is equivalent to donating ~$500K/year to a GiveWell top charity. Many potential donors are pre-committed to a specific subfield (e.g., mental health), or find diffuse interventions like bednets unappealing for whatever reasons. So their dollars were never in play for GiveWell top charities anyway.

In addition to providing startup funds, one could argue for funding a meta organization that, for example, helps carefully selected 98th-percentile-effectiveness organizations write convincing grant pitches to governments and non-EA foundations. I guess that comes back to force multipliers too -- it's not very effective to fund these organizations' operating expenses on a long-term basis, but the right strategic investments might help them leverage enough non-EA monies to create a really good ROI.

I haven't come across any good non-EA GHD student groups. Remember that they need to beat the bar of current uni EA groups (that can get funding from Open Phil) from a GHD perspective - which I think is somewhat of a high bar.

If a GHW meta grantmaker provides startup funding to a new charity, and as a result that charity ends up diverting $1MM a year from ~ineffective charities to ~0.5X GiveWell work, the value is equivalent to donating ~$500K/year to a GiveWell top charity.

I don't think this reasoning checks out. GiveWell interventions also get lots of money from non-EA sources (e.g. AMF). It might be the case that top GiveWell charities are unusually hard to fundraise for from non-EA sources relative to 98th-percentile charities, though I'm not sure why that would be the case, and a 98th-percentile intervention could end up being much less cost-effective in real terms.

I’m a grant writer and fundraiser by trade, but in the past I haven’t provided services to any charities that were affiliated with EA or met GiveWell’s effectiveness standards. They’re mostly the typical single-cause, single-location organizations run by people who really mean well but are running on emotion or “faith” alone. These are good people who just aren’t used to using an effective lens, even using much more conventional program evaluation methods.

There’s only so much I can do as an independent worker in this field, but I do like the idea of selecting those 98th percentile orgs you mentioned and am intrigued by the approach of applying a small amount of EA money to them (epistemic status: uncertain, ~40%).

My concern would be that such organizations would only be tangentially aligned with EA values, and so essentially EA Infrastructure would be funding organizations with very different values, which I don’t think matches EA’s core vision.

Of course, I’m still new to the movement, so I don’t really feel all that comfortable speaking definitively about this.

Nice points on GHDF, Jason! I will publish a related post in the next few days, following up on this comment I made recently. Update: published!

I wrote the following on a draft of this post. For context, I currently do (very) part-time work at EAIF.

Overall, I‘m pretty excited to see EAIF orient to a principles-first EA. Despite recent challenges, I continue to believe that the EA community is doing something special and important, and is fundamentally worth fighting for. With this reorientation of EAIF, I hope we can get the EA community back to a strong position. I share many of the uncertainties listed - about whether this is a viable project, how EAIF will practically evaluate grants under this worldview, or if it’s even philosophically coherent. Nonetheless, I’m excited to see what can be done.

Scattered first impressions:

  • I feel generally very positively about this update and have personally felt confused about the scope of EAIF when referring other people to it.
  • There are wide grey areas when attempting to delineate principles-first EA from cause-specific EA, and the effective giving examples in this post stand out to me as one thorny area. I think it may make sense not to fund an AI-specific or an animal-specific effective giving project through EAIF (the LTFF and AWF are more appropriate), but an effective giving project that e.g. takes a longtermist approach or is focused on near-term human and nonhuman welfare seems different to me. Put differently: How do you think about projects that don't cover all of EA, but also aren't limited to one cause area?
  • For this out-of-scope example in particular, I'm not sure where I would route someone to pursue alternative funding in a timely fashion:

Funding a very promising biology PhD student to attend a one-month program run by a prestigious US think tank to understand better how the intelligence community monitors various kinds of risk, such as biological threats ($6,000)

Maybe Lightspeed? But I worry there isn't currently other coverage for funding needs of this sort.

  • I'm worried about people couching cause-specific projects as principles-first, but there is already a heavy tide pushing people to couch principles-first projects as x-risk-specific, so this might not be a concern.
  • I'm really happy to see you thinking about digital minds and (seemingly) how to grow s-risk projects.

Thanks for your comment. I’m not able to respond to the whole comment right now but I think the bio career grant is squarely in the scope of the LTFF.

Makes sense, thank you! Maybe my follow-up questions would be: How confident would they need to be that they'd use the experience to work on biorisk vs. global health before applying to the LTFF? And if they were, say, 75:25 between the two, would EAIF become the right choice -- or what ratio would bring this grant into EAIF territory?

I think this is pretty unclear; we'd mostly be looking for people who are using EA principles to guide their career decision-making (scope sensitivity, impartiality, etc.) as opposed to thinking primarily about future cause areas. I agree it's fuzzy, though I don't want to share the concrete criteria I'm excited about here, out of worries about goodharting.

Ultimately, we can transfer apps between funds, so it's not a huge deal. I think at 75:25 they should probably apply to EAIF (my very off-the-cuff view).

(A few more responses to your comment)

There are wide grey areas when attempting to delineate principles-first EA from cause-specific EA, and the effective giving examples in this post stand out to me as one thorny area. I think it may make sense not to fund an AI-specific or an animal-specific effective giving project through EAIF (the LTFF and AWF are more appropriate), but an effective giving project that e.g. takes a longtermist approach or is focused on near-term human and nonhuman welfare seems different to me. Put differently: How do you think about projects that don't cover all of EA, but also aren't limited to one cause area?

I think it's fine for us to evaluate projects that don't cover all of EA. I think the thing we want to avoid is funding things that are clearly focused on a specific cause area. We can always transfer grants to other funds in EA Funds if it's a bit confusing for the applicant. In the examples that you gave, the LTFF would evaluate the AI-specific thing, but the EAIF is probably a better fit for the neartermist cross-cause fundraising.

Maybe Lightspeed? But I worry there isn't currently other coverage for funding needs of this sort.

I don't think this is open right now, and it's not clear when it will be open again.

I'm worried about people couching cause-specific projects as principles-first, but there is already a heavy tide pushing people to couch principles-first projects as x-risk-specific, so this might not be a concern.

Yes, I'm worried about this too.

People who like this new vision may want to consider donating to EAIF

Now until the end of January is an unusually good time to donate, as it allows you to take advantage of 2:1 Open Phil matching ($2 from them for every $1 from you)[1]. My best guess is that (unlike with LTFF) the EAIF match will not be filled by default unless significantly more donors chip in (see dashboard here).

Let me know if you have any questions! We'll also likely post an AMA up on the forum in early January. 

  1. ^

    Note that this is less good than it sounds to the extent that you think OP's marginal dollar is almost as good as, or better than, EAIF's marginal dollar.

Very excited about this, both about the clarification of scope and the scope itself.

I strongly agree there is currently a gap in terms of principles-first EA funders, and also largely agree with the way you've outlined "principles-first EA" here. I think this new scope will make me seriously consider becoming a donor to the EAIF in the new year.

I echo this view and think it's really exciting. I expect many people in the meta-funding space will be positive about this idea. However, I also anticipate that many of the donors will need to see a round or two of this idea executed and observe the resulting grants before donating to the fund.

Agreed (though personally I might be willing to make a bet if e.g. fund manager selection is done well)

Makes sense to me, thanks for sharing! It seems pretty plausible that the tighter remit is a good choice, both operationally and in terms of communication with donors. And I appreciate the clear examples of what falls inside and out.

One question, not intended as a criticism: you point out that EAIF would no longer function as a catch-all donor of last resort for random projects, which makes sense. But I do come across people working on such projects, and it does feel like there should be somewhere I can refer them, where they will be evaluated by cause-agnostic generalist EA evaluators (even if their prior probability of beating AI/GHW/Animals was low). Are you aware of such a venue?

Could you give me some examples of these kinds of projects? I think, as Linch said, Manifund is probably their best bet, or posting on the EA forum asking for funding from individual donors.

Manifund comes to mind, though they're more of a platform than a grantmaking agency. 

I was wondering the same thing, especially for projects that are managed by EAs or intended to meet EA principles. Are there any EA sources out there that fund such projects?

I think this is a great idea, and it will protect against the worry Will MacAskill raised of AI Safety "eating" EA.

I’m pretty new to the EA movement and community, so please take what I have to say with a grain of salt. With that said, I really think this is the right direction. Effective Altruism is about common moral values and ethical vision, and at the end of the day this has to be our main focus. Recruitment or “winning converts” isn’t the point—a “big tent” movement without any substance behind it is of low value to the well-being of sentient life on the planet, in the short term and especially in the long. I don’t think we can afford “mission creep” right now, especially not after the FTX events.

Thanks Hayven! I'm glad you like this direction. The remaining challenge, from my perspective, is how we can practically build a robust community, particularly one that's not directly tied to singular short-term object-level metrics[1] like lives saved, money donated, or people working in impactful jobs, without being overly inward-facing and losing track of why we're here in the first place.

We want the community to be neither a factory nor a social club.

  1. ^

    Because judging a community too closely on specific object-level metrics risks biasing a specific worldview, plus might be long-term unhealthy for a community. 

"We want the community to be neither a factory nor a social club."

It's not immediately needed, but I would really appreciate some further elaboration of your thoughts on this topic, as I reckon many people (including me) are grappling with the same problem in their work outside of EAIF.

Really late to respond to this! Just wanted to quickly say that I've been mulling over this question for a while and don't have clear/coherent answers; hope other people (at EAIF and elsewhere) can comment with either more well-thought-out responses or their initial thoughts!

Agreed, and I'm not exactly sure what this looks like. I'm not comparing EA to religious ideologies, but in the past religion has been the main institution that has tried to fulfill the purpose of "robust moral community." Maybe taking a page from some of the ways these organizations have done community building (e.g., focused talks and meetings, social events, big gatherings with talks and exhibits, personal meditation / focus on values) would be a good idea? (epistemic status -- low, ~30%)

[This comment is no longer endorsed by its author]

I suppose one of the main factors to take into consideration is what percent of donors want to fund cause agnostic EA projects vs. what percent want to fund any kind of EA-adjacent community building and want the fund managers to figure out what is most impactful with that.

I'm hugely in favour of principles-first, as I think it builds a healthier community. However, my concern is that if you try too hard to be cause neutral, you end up artificially constrained. For example, Global Health and Wellbeing is often a good introduction point to the concept of effectiveness. Then once people are focused on maximisation, it's easier to introduce Animal Welfare and X-Risk.

I agree that GHW is an excellent introduction to effectiveness and we should watch out for the practical limitations of going too meta, but I want to flag that seeing GHW as a pipeline to animal welfare and longtermism is problematic, both from a common-sense / moral uncertainty view (it feels deceitful and that’s something to avoid for its own sake) and a long-run strategic consequentialist view (I think the EA community would last longer and look better if it focused on being transparent, honest, and upfront about what most members care about, and it’s really important for the long term future of society that the core EA principles don’t die).

I agree with the overall point, though I'm not sure I've seen much empirical evidence for the claim that GHD is a good starting point (or at least I think it's often overstated). I got into EA through GHD, but this may have just been because there were a lot more GHD/EA intro materials at the time. I think the ecosystem is now a lot more developed, and I wouldn't be surprised if GHD didn't have much of an edge over cause-first outreach (for AW or x-risk).

Maybe our analysis should be focussed on EA principles, but the interventions themselves can be branded however they like? E.g. We're happy to fund GHD giving games because we believe that they contribute to promoting caring about impartiality and cost-effectiveness in doing good - but they don't get much of a boost or penalty from being GHD giving games (as opposed to some other suitable cause area).

I'm excited to see the EAIF share more about their reasoning and priorities. Thank you for doing this!

I'm going to give a few quick takes– happy to chat further about any of these. TLDR: I recommend (1) getting rid of the "principles-first" phrase & (2) issuing more calls for proposals focused on the specific projects you want to see (regardless of whether or not they fit neatly into an umbrella term like "principles-first")

  • After skimming the post for 5 minutes, I couldn't find a clear/succinct definition of what "principles-first" actually means. I think it means something like "focus more on epistemics and core reasoning" and "focus less on specific cause areas". But then some of the examples of the projects that Caleb is excited about are basically just like "get people together to think about a specific cause area– but not one of the mainstream ones, like one of the more neglected ones."
  • I find the "principles-first" frame a bit icky at first glance. Something about it feels... idk... just weird and preachy or something. Ok, what's actually going on there?
    • Maybe part of it is that it seems to imply that people who end up focusing on specific cause areas are not "principles-first" people, or like in the extreme case they're not "good EAs". And then it paints a picture for me where a "good EA" is one who spends a bunch of time doing "deep reasoning", instead of doing cause-area-specific work. Logically, it's pretty clear to me that this isn't what the posters are trying to say, but I feel like that's part of where the system 1 "ick" feeling is coming from.
  • I worry that the term "principles-first EA" might lead to a bunch of weird status things and a bunch of unhelpful debates. For me, the frame naturally invokes questions like "what principles?" and "who gets to decide what those principles are?" and all sorts of "what does it truly mean to be an EA?" kinds of questions. Maybe the posters think that, on the margin, more people should be asking these questions. But I think that should be argued for separately– if EAIF adopts this phrase as their guiding phrase, I suspect a lot of people will end up thinking "I need to understand what EAIF thinks the principles of EA are and then do those things".
  • Personally, I don't think the EAIF needs to have some sort of "overarching term" that summarizes what it is prioritizing. I think it's quite common for grantmaking organizations to just say "hey, here's a call for proposals with some examples of things we're excited about."
  • Personally, I'm very excited about the projects that Caleb listed in the appendix. Some of these don't really seem to me to fall neatly under the "principles-first" label (a bunch of them just seem like "let's do deconfusion work or make progress in specific areas that are important and highly neglected").
  • Historically, my impression is that EAIF hasn't really done many calls for proposals relating to specific topics. It has been more like "hey anyone with any sort of meta idea can apply." I'm getting the sense from this post that Caleb wants EAIF to have a clearer focus. Personally, I would encourage EAIF to do more "calls for proposals" focused on specific projects that they want to see happen in the world. As an example, EAIF could say something like "we are interested in seeing proposals about acausal trade and ethics of digital minds. Here are some examples of things you could do."
    • I think there are a lot of "generally smart and agentic people" around who don't really know what to do, and some guidance from grantmakers along the lines of "here are some projects that we want to see people apply to" could considerably lower the amount of agency/activation energy/confidence/inside-viewness that such people need.
    • On the flip side, we'd want to avoid a world in which people basically just blindly defer to grantmakers. I don't suspect calls for proposals to contribute to that too much, and I also suspect there's a longer conversation that could be had about how to avoid these negative cultural externalities.

I'm excited to see the LTFF share more about their reasoning and priorities. Thank you for doing this!

Just noting that this is EAIF, not LTFF.

(Oops, fixed!)

I also think an emphasis on principles-first EA will help protect us against the failure modes of:

  1. not reprioritising between cause areas as interventions by EAs make certain cause areas smaller in scale or less neglected

  2. not reprioritising between cause areas based on new and better cause prioritisation research

"The EA Infrastructure Fund will fund and support projects that build and empower the  community of people trying to identify actions that do the greatest good from a scope-sensitive and impartial welfarist view."

I'm curious how EA Funds incorporates moral uncertainty into its decision making given its mandate is 100% welfarist. To be clear, I don't think running one project that is 100% welfarist necessarily contradicts with plausible views on moral uncertainty. I think welfarism is massively underrepresented in most people's decision making and to compensate for that one might run a 100% welfarist project despite having credence in multiple theories.

I know this is not within the scope EAIF but I think this example from animal welfare illustrates a trade-off well. Some countries have passed legislation to ban the culling of male chicks in the egg industry. Male chicks won't be born in those countries. Working on these bans is a moral priority if you think acts of killing are intrinsically bad. If you think welfare is all that matters then working on this issue is far lower in priority since male chicks live for three days at most and their life experiences are dwarfed by the life experiences of other animals. Would EA Funds prefer people coming into EA to be 100% welfarist with respect to projects they choose to work on?

I had similar conundrums when drafting a vision and mission for my organisation, i.e. how to keep our edge while being clear about taking moral uncertainty seriously. So I'm curious about how EA Funds thinks about this issue.

I'm not too worried about this kind of moral uncertainty. I think that moral uncertainty is mostly action-relevant when one moral view is particularly 'grabby' or the methodology you use to analyse an intervention seems to favour one view over another unfairly.

In both cases, I think the actual reason for concern is quite slippery and difficult for me to articulate well (which normally means that I don't understand it well). I tend to think that the best policy is to maximise the expected outcomes of the overall decision-making policy (which involves paying attention to decision theory, common sense morality, deontological constraints etc. ).

In any case, most of my moral uncertainty worry comes from maximising very hard on a narrow worldview (or set of metrics) - but I think that "welfarism" is sufficiently broad and the mandate and track record of the EAIF is sufficiently varied that I am not particularly worried about this class of concerns.

Not to detract from the general point, but there are welfarist views that can accommodate chick culling being very bad, like critical level utilitarianism. I don't think they're very popular, though.

This is very exciting. A key point in our draft strategy for 2024 was the apparent lack of principles-first EA funding (beyond CEA’s CBG programme). This is quite the update, I’m glad you posted it when you did!

Thanks! To be clear, this is a 'plan' rather than something we are 100% committed to delivering on as it's presented here. I think there are some updates to be made, but I would feel bad if you made large irreversible decisions based on this post. We will almost certainly have a more official announcement if we do decide to commit to this plan.

Thanks for making that clear! 
