The Effective Altruism Funds publicly launched in February 2017. During their first year of operation, Henry Stanley noted multiple times on the Effective Altruism Forum that the EA Funds granted money, and posted updates to the EA Funds website, very infrequently. Since then, the Global Health & Development and Animal Welfare Funds have made large grants that should inspire confidence they are being attentively managed: Elie Hassenfeld paid out $1.5 million to the Schistosomiasis Control Initiative in April 2018, and over the course of March and April of this year, Lewis Bollard paid out $750k to various animal advocacy and welfare organizations.

Having been following the EA Funds myself, and checking in with the Centre for Effective Altruism (CEA) on their status, I sent an email to which JP Addison responded, as I noted in a comment on Henry's post a few months ago. My email was as follows:

Hello. In the last few months I've noticed some EA community members have expressed concern about the relative lack of activity since the EA Funds started. Ben West posted about it here in the 'Effective Altruism' Facebook group:
https://www.facebook.com/groups/effective.altruists/permalink/1606722932717391/

At the time, Peter Hurford estimated the amount which had been allocated from each of the EA Funds in this comment (https://www.facebook.com/groups/effective.altruists/permalink/1606722932717391/?comment_id=1607047222684962&comment_tracking=%7B%22tn%22%3A%22R9%22%7D). Marek Duda and Sam Deere then let everyone know the EA Funds website had been updated with the amounts donated to each fund, and how they had been allocated up to the most recent date.

But later, at the end of January, Henry Stanley posted about how much money remained unallocated from the EA Funds. Henry pointed out the Animal Welfare Fund frequently makes many small grants (http://effective-altruism.com/ea/1k9/ea_funds_hands_out_money_very_infrequently_should/). This is corroborated by a comment from Lewis Bollard on Ben's post from September.
Elie Hassenfeld, grant manager for the Global Health & Development Fund, also provided a public update on his current plans for allocation, and requested feedback from donors, in a comment (http://effective-altruism.com/ea/1k9/ea_funds_hands_out_money_very_infrequently_should/d8o) on Henry's EA Forum post.

Neither of these posts includes any update from Nick Beckstead, grant manager for both the Long-Term Future and EA Community Funds. From the EA Funds website, the latest update from each of these funds as of December 15, 2017 is that each has made only one grant.
https://app.effectivealtruism.org/funds/far-future 
https://app.effectivealtruism.org/funds/ea-community

While I'm not at present a donor to these or any of the EA Funds, as an EA community member I'm curious: 
1) Have there been grants from either the Long-Term Future or EA Community Funds since December? 
2) Can the EA community expect a public update from Nick Beckstead, in the near future, about allocations from or future plans for either of these Funds?

Thanks for your time, 
Evan

At the time, JP responded that no grants from either of these funds had been made since the Community Fund grant was completed in January. He also stated that the CEA was working on plans for increased grants, updates, and transparency, which he expected to be out sometime before July. That was in April. As of this writing, it is 22 July 2018.

On Henry's most recent post from April, regarding how to improve the EA Funds, Marek Duda responded with the following comment:

Hello, speaking in my capacity as the person responsible for EA Funds at CEA:

Many of the things Henry points out seem valid, and we are working on addressing these and improving the Funds in a number of ways. We are building a Funds ‘dashboard’ to show balances in near real time, looking into the best ways of not holding the balances in cash, and thinking about other ways to get more value out of the platform.

We expect to publish a post with more detail on our approach in the next couple of weeks. Feel free to reach out to me personally if you wish to discuss or provide input on the process.

No such update has been published. Whether given privately or on the Effective Altruism Forum, and whether the stated timeframe is a couple of weeks or a few months, the CEA's estimates of when updates on the EA Funds will arrive cannot be relied upon. According to data publicly available on the EA Funds website, the Long-Term Future and EA Community Funds have each made a single grant: ~$14k to the Berkeley Existential Risk Initiative and ~$83k to EA Sweden, respectively. As of April 2018, over $1 million in total is available for grants from the Long-Term Future Fund, and almost $600k from the EA Community Fund. Here is the most recent update on both funds from Nick Beckstead, their grant manager and a Program Officer for Global Catastrophic Risks at the Open Philanthropy Project (Open Phil).

I am giving an update on my grantmaking with the EA Funds. I'm giving it now because the size of funds available has increased and some people have expressed concern about infrequent grantmaking and updates.

I am currently a few months behind the (not publicly stated) schedule that I originally intended for disbursing most of the funds. However, I am planning to recommend grants that use up all available funds at my disposal under EA Funds by July 1st 2018 at the latest, and probably in the next month. The grants I plan to make are currently under consideration, but I will wait to announce them publicly. I plan for these grants to go to organizations with which I am familiar and have supported in the past.

Here are some updates regarding my plans for making grants with these funds and giving updates in the future:

  • I am not reviewing unsolicited proposals to fund new, small projects (e.g. with 1-2 people working for 1-2 years) with these funds because, regrettably, I lack the time necessary to adequately vet them due to other responsibilities I have. I believe a reasonable degree of vetting is important for the EA ecosystem. The concern is less that the funds will be wasted without such vetting, and more that not-properly-vetted new projects could undermine success by (i) being suboptimal representatives of important ideas, (ii) after being unsuccessful, sticking around too long and preventing better projects from taking their places, and/or (iii) causing other forms of harm. Given these constraints and the fact that I think grants to existing organizations have good expected returns, I rarely give substantial consideration to unsolicited proposals that come without a strong recommendation from a trusted source.
  • I am encouraging those seeking funding for small projects to seek support via EA Grants, rather than through Open Phil or EA Funds. EA Grants will have more time to vet such projects. I believe this is the most natural division of labor given the funding, relationships, and other priorities that each of us has.
  • I largely expect to use these funds to support organizations with which I am familiar and have supported in the past, though I may also fund new initiatives that I have determined to be valuable but are not a good fit for Open Phil.
  • I expect that I will use the EA Funds about once per year to make grants and update supporters, and that I will either (a) use a substantial majority of the available funds when I do so, or (b) write an update saying that I have explicitly decided to hold the funds at the time. I plan to do this, rather than on a more frequent schedule, because Open Phil funds most other types of grants that I recommend, Open Phil grantmaking constitutes the vast majority of my overall grantmaking, and this grantmaking via EA Funds also requires significant attention each time grants are made. I am not yet sure at what time I expect to do this in 2019 (after granting out funds this year).  

Visiting the EA Grants website, one finds that applications are currently closed, with no indication of when another application or funding round will begin. Thus it is not clear how those seeking funding for small projects should seek support from EA Grants or the CEA. Much of the rationale on the EA Funds webpage is based on Nick's long track record of making grants to EA organizations from before the EA Funds existed; this is not itself a track record for either the Long-Term Future or EA Community Fund. On the webpage for the Long-Term Future Fund, under the heading "Why might you choose not to donate to this fund?", it states:

First, donors who prefer to support established organizations. The fund manager has a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.

Second, donors who are pessimistic about the room for more funding available in the area. The Open Philanthropy Project has made global catastrophic risk reduction a major focus area and may fund many of the opportunities that the fund manager would find promising. The fund manager has noted that “with Open Phil as a funder in this space it has been harder to find opportunities that are as promising and neglected as we were able to find previously.”

From the webpage for the EA Community Fund, under the heading "Why donate to this fund?", it states:

This fund supports building and strengthening the capabilities of people and organizations trying to do good in a cause-neutral, outcome-oriented way — that is, the effective altruism community (broadly construed). When successful, such investments yield a flexible multiplier on the next most high-priority cause and allow us to build up resources which will hopefully be flexible enough to support the causes and opportunities that are later found to be the most promising. Donors should keep in mind that the multiplier can be somewhat delayed and that funding successes in object-level areas can also yield multipliers of their own.

Building the community of people working on effective causes is especially important for those who expect their opinions about the highest-priority areas to change a great deal. If the effective altruism community grows, that growth builds up general capabilities to take the best opportunities as they are discovered. It seems uncrowded, because it’s a new cause and there appear to be good opportunities available. It seems tractable because there are definite advocacy opportunities which have worked in the past and whose success can be measured. Examples include: encouraging people to join Giving What We Can or take the Founders Pledge. More direct evidence for effectiveness comes from the strong success to date of many of the projects in the area, like GiveWell.

Moreover, in the recent past, investing in promoting effective altruism has resulted in significantly more resources being invested in the highest-priority areas than would have occurred through direct donations. For instance, for every US $1 invested in Giving What We Can, at least $6 have been moved to high-priority interventions. Donors should note that the marginal return on funds is less clear for many of these opportunities, with some factors pointing to a higher number and some factors pointing to a lower number. Additionally, the area may be important because it’s a brand new area where there is much to learn, and we expect further work to have high value of information.


Further down the page, under the heading "Why might you choose not to donate to this fund?", it states the same two reasons given for why someone might opt not to donate to the Long-Term Future Fund:

First, if donors prefer donations to established organizations over donations to emerging organizations, then this fund might not meet these preferences. The fund manager has a track record of funding newer organizations, and this trend is likely to continue as long as promising opportunities continue to exist.


Second, supporters of movement building might choose not to donate through the EA Community Building Fund if they are pessimistic about the room for more funding available in the movement building space. The Open Philanthropy Project recently announced that it would start considering grants in effective altruism and that this effort would be led by Nick Beckstead, the EA Community Building Fund manager. Nick has noted that following Open Philanthropy's involvement in this area, there are now fewer promising but neglected donation opportunities here. (Source)

So the considerations the CEA gives for why one might donate to the Long-Term Future and EA Community Funds (to support emerging organizations over established ones, and because there is ample room for more funding in these focus areas) are contradicted by the fund manager. Nick seeks primarily to make grants to established organizations rather than emerging ones, and isn't devoting much time to researching grant opportunities in these focus areas through the EA Funds. Because Nick's grant recommendations are usually filled by the Open Philanthropy Project, and investigating further grants would take significant time away from that work, grantmaking through the EA Funds is not currently a priority for him. No reason is given for the difference between Nick Beckstead's management of the Long-Term Future and EA Community Funds and the management of the Global Health & Development and Animal Welfare Funds by Elie Hassenfeld and Lewis Bollard, who are also program officers at Open Phil. Both Elie and Lewis have been directly responsive to the community here on the EA Forum regarding the transparency of the funds they manage, and both have followed through on making grants and updating their donors at least every few months. With the funds having launched in 2017, and Nick ambiguously stating further grants may be made in 2019, the vast majority of the Long-Term Future and EA Community Funds will have been held for almost two years without clear reasons why this should be the case.


When making suggestions for how to improve the EA Funds, Henry Stanley listed the following concerns:

  • The funds hand out money very infrequently, and hold onto money for long periods of time. This erodes the value of the fund through time discounting. EA orgs have stated that they value donations in a year's time at a 12% discount to receiving them now[1], so this represents a substantial cost.
  • The funds hold their money as cash, forgoing any potential interest the money could earn.
  • There is no schedule as to when the money will be handed out. This lack of transparency is troubling, and prevents donors making informed choices (e.g. to give directly to charities instead of waiting).
  • (a weaker objection) As the funds hold onto donations for so long, the chances of the fund manager's and donors’ intentions drifting apart is high.


The email I received from the CEA regarding grants from and transparency of the Long-Term Future and EA Community Funds is accurate only insofar as the update turned out to be that there will be no schedule for grantmaking, and that for the indefinite future there will be no increase in transparency. Nick's updates on these funds amount to saying they will continue to hand out money infrequently, with no schedule for when grants will be made. It appears that, through no fault of their own, other staff at the CEA cannot give concerned community members updates on these two funds, because no updates have been provided internally within the organization. Nonetheless, whether it comes from the CEA or from Nick Beckstead, addressing concerns about the transparency and accountability of fiscal management in effective altruism by indicating these funds will not be managed with transparency or accountability is unacceptable. That is no transparency or accountability at all. All the while, the EA Funds have been taking in more donations from effective altruists who would very likely have otherwise donated to nascent EA projects in need of funding to get off the ground. At the time of Henry's post in April, these were the amounts in the two funds remaining unallocated:

  • Long-Term Future: $348,167 [95%]
  • EA Community: $206,271 [71%]

As stated above, the Long-Term Future Fund now has over $1 million unallocated, and the EA Community Fund almost $600k. Whether it's viewed as hundreds of thousands of dollars more, or as an increase of nearly 200% in the size of the funds, over the last several months these funds have ballooned while quietly departing from the impression effective altruists were given of what the funds would be used for. On Henry's first post concerning the infrequent grants from (at the time) all the EA Funds, I commented the following:

I've received feedback from multiple points in the community that the EA Funds haven't been as responsive, in as timely or as professional a manner, as some would prefer. A factor in this appears to be that the fund managers are all program officers at the Open Philanthropy Project, a job which, from the fund managers' perspective, is most of the time more crucial than anything that can be done with the EA Funds. Thus, doing more than a full-time work-equivalent (I don't know how much Open Phil staff work each week) may mean management of the EA Funds gets overlooked. Ben West also made a recent post in the 'Effective Altruism' Facebook group asking about the EA Funds, and the response from the Centre for Effective Altruism (CEA) was that they hadn't had a chance to update the EA Funds webpage with data on what grants had been made in recent months.

Given that at the current level of funding the EA Funds aren't being mismanaged, but rather are more neglected than donors and effective altruists would like, I'd say it might already be time to assign more managers to the funds. Picking Open Phil program officers to run the funds was the best bet for the community to begin with, as they had the best reputation for acumen going in, but if in practice it turns out Nick, Elie, and Lewis only have enough time to manage grants at Open Phil (most of the time), it's only fair to donors that the CEA assign more fund managers. What's more, I wouldn't want the attention of Open Phil program officers to be any more divided than it need be, as I consider their work more important than the management of the EA Funds as is.

If the apparent lack of community engagement regarding the EA Funds is on the part of the CEA team responsible for keeping the webpage updated, as their time may also be divided and dedicated to more important CEA projects than the EA Funds at any given time, that needs to be addressed. I understand the pressure of affording project management enough money that it gets done effectively while, as an effective non-profit, not letting overhead expand so much that it results in inefficient uses of donor money. If CEA staff are indeed dividing their time between the EA Funds and more active projects, I think it'd be appropriate for the CEA to hire a dedicated communications manager for the EA Funds overall, and/or someone who will update the webpage with greater frequency. This could probably be done with one additional full-time-equivalent staff hire or less. If it's not a single new position at the CEA, a part-time-equivalent CEA staffer could have their responsibilities extended to ensuring there's a direct channel between the EA Funds and the EA community.

In the scope of things, such as the money moved through EA overall, EA Funds management may seem a minor issue. But given its impact on values integral to EA, like transparency and accountability, as well as on ensuring high-trust engagement between EA donors and EA organizations, options like those I've listed above seem important to implement. If not, I'd think there's a greater need overall for adding external oversight to ensure anything is being done with the EA Funds.

It's clear that, compared to how the Global Health & Development and Animal Welfare Funds are being managed, the Long-Term Future and EA Community Funds are being seriously neglected. On top of this, the CEA has long made clear its cause selection within EA is movement-building, the long-term future, and existential risk reduction. So it should be especially concerning to effective altruists who share this cause selection and have donated to these funds that the funds are apparently undermining the integrity of the CEA's own goals. The juxtaposition of the stagnancy of the Long-Term Future and EA Community Funds with the CEA's stated goals creates the impression, in and around the movement, that the CEA cannot be trusted to effectively identify promising projects from within the community in these focus areas. Thus, to rectify this problem, I make the following suggestions:

  • Nick Beckstead or a member of the CEA's executive team should post to the Effective Altruism Forum a clear new plan and statement of intentions for how the Long-Term Future and EA Community Funds will be managed more effectively in the future.
  • The CEA should consider hiring additional managers for each of these funds, given it's clear Nick alone doesn't have the bandwidth to manage both of them, or even one alone.
  • Past and current donors to these funds should inform the CEA how they would prefer the funds be allocated, and what kinds of projects they'd like to see funded, so the CEA, Nick, and any other fund managers know what kinds of new movement-building and long-term-future projects in the community to search for.
  • Until one or more of the above suggestions comes about, effective altruists should refrain from donating more to the Long-Term Future and EA Community Funds.

Comments

Thanks for sharing your concerns, Evan. It sounds like your core concerns relate to (i) delay between receipt and use of funds, (ii) focus on established grantees over new and emerging grantees, and (iii) limited attention to these funds. Some thoughts and comments on these points:

  • I recently recommended a series of grants that will use up all EA Funds under my discretion. This became a larger priority in the last few months due to an influx of cryptocurrency donations. I expect a public announcement of the details after all grant logistics have been completed.

  • A major reason I haven’t made many grants is that most of the grants that I wanted to make could be made through Open Phil, and I’ve focused my attention on my Open Phil grantmaking because the amount of funding available is larger.

  • I am hopeful that EA Grants and BERI will provide funding to new projects in these areas. CEA and BERI strike me as likely to make good choices about funding new projects in these areas, and I think this makes sense as a division of labor. EA Grants isn’t immediately available for public applications, but I’m hopeful they’ll have a public funding round soon. BERI issued a request for proposals last month. As these programs mature, I expect that most of what is seen as funding gaps in these areas will be driven by taste/disagreement with these grantmakers rather than lack of funding.

For now, I don’t have any plans to change the focus or frequency of my grantmaking with these funds from what was indicated in my April 2018 update.

I think it’s probably true that a fund manager who has more time to manage these funds would be preferable, provided we found someone with suitable qualifications. This is a possibility that’s under consideration right now, but progress toward it will depend on the availability of a suitable manager and further thinking about how to allocate attention to this issue relative to other priorities.

Hi Nick. Thanks for your response. I also appreciate the recent and prompt granting of the EA Funds. One thing I don't understand: if most of the grants you wanted to make could have been made through the Open Philanthropy Project, why is it that:

  • the CEA didn't anticipate this;
  • the CEA gave public descriptions to the contrary of how the funds you managed would work;
  • and, if the CEA learned your intentions ran contrary to what it first told the EA community, it didn't issue an update?

I'm not aware of a public update of that kind. If there was a private email list for donors to the EA Community and Long-Term Future Funds, and those donors were issued a correction to how they were previously informed the money in the funds would be granted, I'd like to know. (I'm not demanding that update/correction be published, if it exists, as I respect the privacy inherent in that relationship. If any donor to these funds, or someone from the CEA, could tell me whether such an update/correction exists, please let me know.)

Regarding my concerns as you outlined them:

(i) delay between receipt and use of funds, (ii) focus on established grantees over new and emerging grantees, and (iii) limited attention to these funds.

That's an accurate breakdown.

Based on how the other two EA Funds have provided more frequent updates and made more frequent grants in the last year, I expect a lot of donors and community members would find it unusual for the EA Community and Long-Term Future Funds to grant all their money at once. But in April you did give an update to that effect.

However, donors to the EA Community and Long-Term Future Funds were initially given the impression that new and emerging grantees, rather than established ones, would be the target. This impression was given by the CEA, not by you as fund manager, but the CEA itself never corrected it. While donors could have surmised from the updates that the plan had changed, I would have expected a clearer correction. Again, if one was privately provided to donors to these funds in some form, that would be good to know. Also, given the redundancy between the EA Funds as you intended to manage them and your other role as a program officer at Open Phil, it seems clear you didn't expect to have to pay much attention to either of these funds.

However, it appears donors were again given a different impression by the CEA, namely that more attention would be afforded to the EA Funds. Had donors been given the rationale for the less frequent updates from the two funds you manage earlier on, that would have been better. Receiving updates on how much attention the EA Funds would get was one of the suggestions for improving the EA Funds in Henry Stanley's last EA Forum post on the subject.

That's great news about BERI. I haven't had a chance to look over everything BERI has done to date, but based on the early work I've seen and the people involved, it sounds promising. Unfortunately, information on the EA Grants has been scarce. Others have asked me about the EA Grants, and I've seen others share concerns regarding the uncertainty of when public applications will open again.

It appears there was at least a communication breakdown between what the CEA initially and publicly told the EA community (which I imagine would include most of those who became donors to the funds) and, at a later stage, how you intended to manage the funds. Regarding this, and:

  • further questions regarding the EA Grants;
  • the possibility of one or more additional fund managers;

I will try following up with the Centre for Effective Altruism more directly. I can't think of anything else to ask you at this time, so thanks for taking the time to respond and provide updates regarding the EA Funds.

Hi Evan, let me address some of the topics you’ve raised in turn.

Regarding original intentions and new information obtained:

  • At the time that the funds were formed, it was an open question in my mind how much of the funding would support established organizations vs. emerging organizations.
  • Since then, the things that changed were that EA Grants got started, I encountered fewer emerging organizations that I wanted to prioritize funding than expected, and Open Phil funding to established organizations grew more than I expected.
  • The three factors contributed to having fewer grants to make that couldn’t be made in other ways than was expected.
  • The former two factors contributed to a desire to focus primarily on established organizations.
  • The third opposes this, but I still see the balance of considerations favoring me focusing on established organizations.

Regarding my/CEA’s communications about the purposes of the funds: It seems you and some others have gotten the impression that the EA Funds I manage were originally intended to focus on emerging organizations over established organizations. I don’t think this is communicated in the main places I would expect it to be communicated if the fund were definitely focused on emerging organizations. For example, the description of the Long-Term Future Fund reads:

“This fund will support organizations that work on improving long-term outcomes for humanity. Grants will likely go to organizations that seek to reduce global catastrophic risks, especially those relating to advanced artificial intelligence.”

And “What sorts of interventions or organizations might this fund support?” reads:

"In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence."

The new grants also strike me as a natural continuation of the “grant history” section. Based on the above, I'd have thought the more natural interpretation was, "You are giving money for Nick Beckstead to regrant at his discretion to organizations in the EA/GCR space."

The main piece of evidence that these funds were billed as focused on emerging organizations that I see in your write-up is this statement under “Why might you choose not to donate to this fund?”:

“First, donors who prefer to support established organizations. The fund manager has a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.”

I understand how this is confusing, and I regret the way that we worded it. I can see that this could give someone the impression that the fund would focus primarily on emerging organizations, and that isn’t what I intended to communicate.

What I wanted to communicate was that I might fund many emerging organizations, if that seemed like the best idea, and I wanted to warn donors about the risks involved with funding emerging organizations. Indeed, two early grants from these funds were to emerging orgs: BERI and EA Sweden, so I think it's good that some warning was here. That said, even at the time this was written, I think “likely” was too strong a word, and “may” would have been more appropriate. It’s just an error that I failed to catch. In a panel discussion at EA Global in 2017, my answer to a related question about funding new vs. established orgs was more tentative, and better reflects what I think the page should have said.

I also think there are a couple of other statements like this on the page that I think could have been misinterpreted in similar ways, and I have regrets about them as well.

[Part I of II]

Thank you for your thoughtful response.

  • At the time that the funds were formed, it was an open question in my mind how much of the funding would support established organizations vs. emerging organizations.
  • Since then, the things that changed were that EA Grants got started, I encountered fewer emerging organizations that I wanted to prioritize funding than expected, and Open Phil funding to established organizations grew more than I expected.
  • The three factors contributed to having fewer grants to make that couldn’t be made in other ways than was expected.
  • The former two factors contributed to a desire to focus primarily on established organizations.
  • The third opposes this, but I still see the balance of considerations favoring me focusing on established organizations.

As far as I'm concerned, these factors combined more than exonerate you from aspersions that you were acting in bad faith in the management of either of these funds. For what it's worth, I apologize that you've had to face such accusations in the comments below as a result of my post. I hoped for the contrary, as I consider such aspersions at best counterproductive. I expect I'll do a follow-up as a top-level post to the EA Forum, in which case I'll make abundantly clear I don't believe you were acting in bad faith, and that, if anything, it's as I expected: what's happened is a result of the CEA failing to ensure that you as a fund manager and the EA Funds were in sufficiently transparent and regular communication with the EA community and/or donors to these funds.

Personally, I disagree with the perspective that the Long-Term Future and EA Community Funds should be operated differently from the other two funds, i.e., funding well-established rather than nascent EA projects/organizations. I say this while also agreeing it is a much better use of your personal time to focus on making grants to established organizations, and to follow the cause prioritization/evaluation model you've helped develop and implement at Open Phil.

I think one answer is for the CEA to hire or appoint new or additional fund managers for one or both of the Long-Term Future and EA Community Funds, relieving the pressure on you to do everything, reducing how much you must divide your time between the Funds and your important work at Open Phil, and fostering more regular communication with the community regarding these Funds. While I know you and Benito commented that it's difficult to identify someone to manage the funds whom both the CEA and the EA community at large would consider qualified, I explained in this comment why I think it's both important and tractable for us as a community to pursue improving the EA Funds by seeking more qualified fund managers.

What I've learned from the responses to my original post in the last week, more than I expected, is that many effective altruists, not as a superficial preference but out of earnest conviction, think it would be more effective for the EA Funds to focus on funding smaller, newer EA projects and organizations at a stage of development prior to when Open Phil might fund them. This appears true among EAs regardless of cause; it happens to be the differing management of the Long-Term Future and EA Community Funds that brought it to the fore.

At first glance, among both existing and potential donors to the Long-Term Future and EA Community Funds, the grantees (MIRI, CFAR, 80k, the CEA, and the Founders Pledge) are leaving the community nonplussed (example), because those are exactly the charities EA donors could and would have guessed are the default targets for movement-building and long-term-future donations. The premise of the EA Funds was that the fund managers, based on their track records, could and would identify donation targets within these focus areas with time and analysis the donors themselves could not afford. This was an attempt to increase the efficiency of donation in EA and reduce potentially redundant cause prioritization efforts.

But in the wake of my post, beyond any intention I had, and combined with other dissatisfaction with the EA Funds over the last year, it's become apparent to many effective altruists that this didn't happen. Given that donors to the Long-Term Future and EA Community Funds would likely not have identified donation targets like EA Sweden and BERI, which you mentioned, I consider it unlikely the money from the two funds you manage would have ended up this year at charities much different from the ones you're disbursing the EA Funds to as of August 8th.

So I don't think the Long-Term Future and EA Community Funds were a waste of money. What they did quantifiably waste was a lot of time: (i) donors to the EA Funds could have donated to one of the Funds' new grantees earlier, presumably benefiting the organization in question more; or (ii) they could have taken a bit of time to do their own analysis which, however inadequate compared to what they at one point expected from the EA Funds, would leave them more satisfied than the current outcome.

Although it's qualitative and symbolic, I maintain that the most consequential outcome of the differences the EA community at large has had with how the EA Funds are administered as a project of the CEA is the shock to the well of trust and goodwill between EA organizations and effective altruists, as individuals and as a whole.

I understand how this is confusing, and I regret the way that we worded it. I can see that this could give someone the impression that the fund would focus primarily on emerging organizations, and that isn’t what I intended to communicate.

What I wanted to communicate was that I might fund many emerging organizations, if that seemed like the best idea, and I wanted to warn donors about the risks involved with funding emerging organizations.

I no longer in any sense hold you personally responsible for the mismatch between how you thought you would manage the Long-Term Future and EA Community Funds and how much of the EA community, including donors to the EA Funds, thought you would. Unfortunately, that does not to my mind excuse the failure to ensure fidelity of communication. I believe the fidelity model of spreading EA is one of the best ideas to come out of EA movement-building in years. But as with how miscommunication on the CEA's part has apparently undermined its ability, as a representative agency of the EA movement, to pursue its own mission and goals, it's very concerning when the CEA can't adhere to the movement-building model it prescribes for itself and would hope the rest of EA might follow.

I don't even hold Sam Deere, Marek Duda, or JP Addison particularly responsible for the failure to relay, or check the fidelity of, your updates and thinking on how you would manage the EA Funds to donors and the EA community. While that was their responsibility, given the delays in email responses and their preoccupation with the important tasks of updating the EA Forum 2.0 and all the other tech projects under the CEA umbrella, it would appear the CEA tech team wasn't given the bandwidth, or led to believe, that they should prioritize clear and regular maintenance of the EA Funds' online communications and updates relative to their other tasks. Obviously, this stands in even starker contrast than I expected when I made this post to how much of a priority many effective altruists think the CEA should have made of the EA Funds.

What separates these outcomes from other mistakes the CEA has made in the past, and the EA Funds from other big funds in EA, is that these Funds were built from the donations of individual effective altruists, whether modest gifts or a major, conscious shift among those earning to give, and faced skepticism from the beginning that they would be more effective than how those donors would counterfactually have donated their own money. The CEA assured EA community members that wouldn't be the case. Those community members who went on to donate to the EA Funds are now learning the pessimistic forecasts about the potential greater effectiveness of the Long-Term Future and EA Community Funds were correct. And this by the lights of the CEA, an organization lacking the self-awareness to know it was failing the expectations it had set for itself, and on whose grounds it asked for the trust of the whole EA movement.

[Part II of II]

In the week since I made my original post, Joey Savoie of Charity Science made this post, itself rapidly upvoted, showing that how EA is represented, and how and whom EA as a whole community ought to trust to represent us, are the subject of significant misgivings about how things have been going within EA. Whether it's part of a genuine pattern or not, the perception of the CEA (or any other organization representing EA) as failing to represent EA in accord with what the EA movement, as its supporters, thinks tears at the fabric of EA as a movement.

Indeed, two early grants from these funds were to emerging orgs: BERI and EA Sweden, so I think it's good that some warning was here. That said, even at the time this was written, I think “likely” was too strong a word, and “may” would have been more appropriate. It’s just an error that I failed to catch. In a panel discussion at EA Global in 2017, my answer to a related question about funding new vs. established orgs was more tentative, and better reflects what I think the page should have said.

I also think there are a couple of other statements like this on the page that I think could have been misinterpreted in similar ways, and I have regrets about them as well.

In my follow-up, I'll clarify that the misunderstanding among both donors to the Funds and other effective altruists about how the Long-Term Future and EA Community Funds would be allocated is the result of misinterpretation of ambiguous communications that, in hindsight, should have been handled differently. To summarize my feelings here: if this much confusion ultimately resulted from some minor errors in diction, one would hope an EA organization would have enough oversight to ensure its own accountability, such that minor errors in word choice could not lead to such confusion in the first place.

Ultimately, it was the responsibility of the CEA's Tech Team to help you ensure these regretted communications never led to this, and, looking at the organization online, nobody other than the CEA as a whole organization is responsible for ensuring the Tech Team prioritizes that well. And if the CEA got its own priorities so wrong, relative to which of its activities the rest of the EA community considered most important to building and leading the movement, that leads me to conclude the CEA as a whole needs to be more in touch with the EA movement as a whole. I don't know if there is more to ask about what's happened with not only the two EA Funds you manage, but also the CEA's continued lagging behind the community's and donors' realistic expectations of updates, even as the fund managers themselves had answers to provide. But one theme of my follow-up will be asking how the CEA, including its leadership, and the EA movement can work together to ensure outcomes like this don't happen again.

provided we found someone with suitable qualifications.

Could you sketch out what "suitable qualifications" for the fund manager role look like, roughly?

I don't know what others think about the qualifications needed/desired for this, but as a donor to these EA Funds, some of the reasons I'm enthusiastic to give to Nick's funds are:

  • His full-time day job is working out which organisations will do the most good over the long run (especially among those seeking to grow the EA movement), and how much funding they need.

  • He does that alongside extremely smart, well-informed colleagues with the same aims, giving him lots of opportunities to test and improve his views.

  • He has worked formally and informally in this area for coming up on ten years.

  • He's consistently shown himself to be smart, well-informed and to have excellent judgement.

I've been very grateful to be able to offload decisions about where, when, and how to donate optimally onto him, and I hope that if/when a new fund manager is found, they share at least some of the above qualities.

[Disclaimer: I used to work for CEA]

Hey Nick,

I'm excited to hear you've made a bunch of grants. Do you know when they'll be publicly announced?

The grant payout reports are now up on the EA Funds site:

Note that the Grant Rationale text is basically the same for both, as Nick has summarised his thinking in one document, but the payout totals reflect the amount disbursed from each fund.

It seems that Nick has not been able to leverage his position as EA Funds manager to outperform his Open Phil grants (or at least to meaningfully distinguish his EA Funds grants from his Open Phil grants). This means we can think of donating to the far-future and community funds as having similar cost-effectiveness to individual donations to Open Phil earmarked for those causes. That seems like a problem, since the best individual donations should be able to outperform Open Phil, at least when you account for the benefits of not centralizing donations on too few decision-makers. I don't see anyone calling for Open Phil to accept or solicit money from small donors.

The case for finding another manager seems pretty strong. EA Funds is a fundamentally sound idea: we should be trying to consolidate donation decisions somewhat, to take advantage of different levels of expertise and save small donors' time and mental energy. But this doesn't seem like the best way to do it.

Below is a comment an EA community member anonymously asked me to post on their behalf regarding these grant payout reports.

When I read the perfunctory grant rationale for the Long-Term Future Fund and Community Fund grants, I wondered whether this was a joke or a calculated insult to the EA community.

The one paragraph and 4 bullet points used to justify the disbursement of over $1,000,000 to 5 organisations across the two funds seem like they could have been written up in 3 minutes, with nothing more than a passing knowledge of some of the most well-known EA(ish) orgs. This, coming after months of speculation about what the Grant Evaluator entrusted with EA funds for the long-term future and the EA community might actually be doing, gives the impression that they weren't actually doing anything.

Perhaps what is most disappointing is the desultory explanation that all these funds are disbursed in the vague hope that the >$1 million might "subsidiz[e] electronics upgrades or childcare" for the charities' staff, or pay them higher salaries and "increase staff satisfaction", and that this might boost productivity. This seems a clear signal, among other things, that the funding space in this area is totally full and the grant manager can't even come up with plausible-sounding explanations for how the funds they are disbursing to EA insiders might increase impact.

Here is my own response to these comments.

  • All the organizations involved definitely are self-identified effective altruism organizations. Both MIRI and CFAR have origins in the rationality community and LessWrong, with a focus on AI safety/alignment predating their significant involvement with EA. But AI safety/alignment has been a priority focus area within EA since its inception, and as flagship organizations that have both benefited from an association with EA and stuck with it through trials, MIRI and CFAR should for the purposes of this discussion be thought of as EA organizations, unless a representative steps forward to clarify those organizations' relationships to the EA movement.

  • That the $1 million might subsidize electronics upgrades or childcare, or pay higher salaries, doesn't necessarily strike me as a bad thing. If someone prioritizes anti-malarial bednets, the number of children who could have received treatment with that $1 million looks really bad when it's being casually used to make office jobs comfier. But there are already outside critics of the EA movement who malign how global poverty interventions can look bad, and most of those criticisms fall flat. So if we're going to make mountains out of molehills on every issue, no progress will ever be made in EA. Additionally, it's unrealistic to think that what 'effectiveness' looks like at every organization and in every area of EA will or should be the same as at the Against Malaria Foundation, with its extremely low overhead. The AMF is essentially a supply-and-distribution-chain management organization run full-time by a founder who doesn't need to take a salary. Most effective altruists come from more typical backgrounds where we can't do something like that. We're also human. Staff at EA organizations are usually talented people who, by working in a small non-profit sector, are forgoing career and personal opportunities they otherwise would have had, working on what to the outside world are niche causes that don't attract much attention. So it makes sense that salaries and benefits for staff at EA organizations go up over time, as they do at other NPOs and in other economic sectors. Additionally, because the work builds on a cumulative research base, keeping talent whose value to the important work EA organizations do only grows as they advance the organization's mission is also important. Whether, relative to the many other projects in this space that might have received seed funding, it was best to expand benefits and salaries for existing staff at well-established EA organizations that could have fundraised more themselves is another question entirely. However, I would stress that in asking it, we remember we're talking about real people with real needs, and that acting outraged at effective altruists for not living up to some image we had of them as ascetics would be unfair and damaging.

  • Regarding Nick's response, it appears there was at least some miscommunication within the CEA regarding how the EA Community and Long-Term Future Funds would be disbursed. While there was a public update from Nick in April regarding the funds, the discrepancy between how the funds were initially presented to the community and donors and Nick's way of running the funds has not been addressed. I intend to follow up to find out more.

  • What several others have mentioned by now, and I agree with, is that it's not necessarily apparent there is no room for more funding in these areas. Having worked on EA projects myself, I'm aware of a few organizations, working on EA project development and on the long-term future respectively, that are seeking to expand. As Nick Beckstead mentioned, he hopes the EA Grants will fill this role in the near future. It's unclear when the EA Grants might open for applications again.

Couldn't agree more. What is worse, as I mention in another comment, university grants were disqualified for no clear reason. I don't know which university projects were considered at all, but the underlying assumption seems to be that irrespective of how good they would be, the other projects will perform more effectively and more efficiently even if they are already funded, i.e. by giving them some more cash.

I think this is a symptom of anti-academic tendencies that I've noticed on this forum and in this particular domain of research, which I think would be healthy to discuss. The importance of the issue is easy to understand if we think of any other domain of research: just imagine that we started arguing that non-academic climate research centers should be financed instead of academic ones, or that research in medicine should be redirected from academic institutions towards non-academic ones. I'd be surprised if anyone here would defend such a policy. There are good reasons why academic institutions (with all their tedious procedures, peer-review processes, etc.) are important sources of reliable scientific knowledge production. Perhaps we are dealing here with an in-group bias, which needs an open and detailed discussion.

I'm Head of Operations for the Global Priorities Institute (GPI) at Oxford University. OpenPhil is GPI's largest donor, and Nick Beckstead was the program officer who made that grant decision.

I can't speak for other universities, but I agree with his assessment that Oxford's regulations make it much more difficult to use donations for productivity enhancements than it would be at other non-profits. For example, we would not be able to pay for the child care of our employees directly, nor raise their salaries so they could pay for more child care (since there is a standard pay scale). I therefore believe that the reason he gave for ruling out university-based grantees is the true reason, and one which is justified in at least some cases.

But what about paying for teaching duties (i.e. using the funding to cover the teaching load of a given researcher)? Teaching is one of the main constraints on time spent on research, and this would mean that Oxford can't accept the funding framework of quite common ERC grants, which have this issue covered. This was my point all along.

Second, what about paying for better equipment? That was another issue mentioned in Nick's post.

Finally, the underlying assumption of Nick's explanation is that the output of non-academic workers will be better within the given projects than the output of academic workers, which is a bold claim and insufficiently explicated in the text he provided. Again, I don't know which projects we are assessing here, and without that knowledge we cannot make an adequate assessment; anything else would be mere speculation. I am just making a plea for greater transparency given the complexity of these issues.

Given that Nick has a PhD in Philosophy, and that OpenPhil has funded a large amount of academic research, this explanation seems unlikely.

Disclosure: I am working at OpenPhil over the summer. (I don't have any particular private information, both of the above facts are publicly available.)

EDIT: I don't intend to make any statement about whether EA as a whole has an anti-academic bias, just that this particular situation seems unlikely to reflect that.

Thanks for the input! But I didn't claim that Nick is biased against academia; I just find the lack of clarity on this point, and his explanation of why university grants were disqualified, unsatisfactory.

As for your point that it is unlikely for people with PhDs to be biased, I think ex-academics can easily hold negative attitudes towards academia, especially after exiting the system.

Nevertheless, I am not concluding from this that Nick is biased (nor that he isn't); we just don't have evidence for either claim, and at the end of the day, this shouldn't matter. The procedure for awarding grants should be robust enough to prevent such biases from kicking in. I am not sure whether any such measures were undertaken in this case, though, which is why I am raising this point.

  • My guess would be that it makes sense to prioritize funding non-academic projects because EA is still a niche community favouring unpopular causes, existing effective altruists outside academia will be more willing to pursue effective ideas in the uncommon areas EAs favour, and university projects typically have more opportunity for funding outside EA. Of course, that's only heuristic reasoning; these aren't the most solid assumptions for EA as a movement to make. I agree this should be addressed with more open and detailed discussion on this forum.

  • Arguably, life extension and anti-ageing research institutions are doing medical research outside academia. Indeed, most organizations in this space I've heard effective altruists tout, and donate to, are either for-profit companies or NPOs, such as SENS and the newly opened Longevity Research Institute. So while I don't know about climate research centres, there are in fact a lot of people in EA who might defend a policy of redirecting resources for medical research towards non-academic institutions.

  • Nick Beckstead stated that the reason he didn't pay as much attention to the EA Community and Long-Term Future Funds is that they were redundant with grants he would already have made through the Open Philanthropy Project. Of course, that still raises the question of why the EA Funds were presented differently to donors and the community, and why this wasn't better addressed, which I intend to follow up on with the CEA. Regarding the long-term future, Nick is correct that the Open Philanthropy Project has been making many smaller grants to small academic projects in AI safety/alignment, biosecurity, and other areas. I expect this trend will only increase in the near future. Having looked into it myself, and talked to academics in EA who know the area from the inside better than I do, there are indeed fewer opportunities for academic research on effective 'EA community-building' than there will be for other areas EAs focus on. But projects run by effective altruists working at universities have received EA Grants to build bridges into academia, such as the Effective Thesis Project, jointly run by the Czech EA Foundation and the Effective Altruists of Berkeley at the University of California, Berkeley.

Hi Evan, here's my response to your comments (including another post of yours from above). By the way, that's a nice example of industry-compatible research; I agree that such and similar cases can indeed fall under what EAs wish to fund, as long as they are assessed as effective and efficient. I think this is an important debate, so let me challenge some of your points.

Your arguments seem to be based on the assumption that EAs can do EA-related topics more effectively and efficiently than academics not explicitly affiliated with EA (but please correct me if I've misunderstood you!), and I think this is a prevalent assumption across this forum (at least when it comes to the topic of AI risks & safety). While I agree that being an EA can contribute to one's motivation for the given research topic, I don't see any rationale for the claim that EAs are more qualified to do scientific research relevant to EA than non-explicit-EAs. That would mean that, say, Christians are a priori more qualified to do research that goes towards some Christian values. I think this is a non sequitur.

Whether a certain group of people can conduct a given project in an effective and efficient way shouldn't primarily depend on their ethical and political mindset (though this may play a motivating role, as I've mentioned above), but on the methodological prospects of the given project, on its programmatic character, and on the capacity of the given scientific group to make an impact. I don't see why EAs, as such, would qualify on these criteria any more than an expert in the given domain would, when placed within the framework of the given project. It is important to keep in mind that we are not talking here about the political activity of spreading EA ideas, but about scientific research, which has to be conducted with the necessary rigor in order to make an impact in the scientific community and wider (otherwise nobody will care about the output of the given researchers). This is the kind of criteria I wish were present in the assessment of the given grants, rather than who is an EA and who is not.

Second, by prioritizing a certain type of group in the given domain of research, the danger of confirmation bias increases. This is why feminist epistemologists have been arguing for diversity across the scientific community (rather than for the claim that only feminists should do feminist-compatible scientific research).

Finally, if there is a worry that academic projects focus too much on other issues, the call for funding can always be formulated in such a way that it specifies the desired topics. In this way, academic project proposals can be formulated having EA goals in mind.

"Your arguments seem to be based on the assumption that EAs can do EA-related topics more effectively and efficiently than academics not explicitly affiliated with EA (but please correct me if I've misunderstood you!), and I think this is a prevalent assumption across this forum (at least when it comes to the topic of AI risks & safety). While I agree that being an EA can contribute to one's motivation for the given research topic, I don't see any rationale for the claim that EAs are more qualified to do scientific research relevant to EA than non-explicit-EAs. That would mean that, say, Christians are a priori more qualified to do research that goes towards some Christian values. I think this is a non sequitur."

I think it's a common perception in EA that effective altruists can often do work as efficiently and effectively as academics not explicitly affiliated with EA. Often EAs also think academics can do some if not most EA work better than a random non-academic EA could. AI safety is more populated by, and stems from, the rationality community, which on average is more ambivalent towards academia than EA is. It's my personal opinion that EA may often have a comparative advantage in doing the research in-house, for a number of reasons.

One is practical. Academics would often have to divide their time between EA-relevant research and teaching duties. EA tends to focus on unsexy research topics, so academics may be likelier to get grants for focusing on research irrelevant to EA. Depending on the field, the politics of research can distort the epistemology of academia so that it won't work for EA's purposes. These are constraints effective altruists working full-time at NPOs funded by other effective altruists don't face, allowing them to dedicate all their attention to their organization's mission.

Personally, my confidence in EA to make progress on research and other projects for a wide variety of goals is bolstered by some original research in multiple causes being lauded by academics as some of the best work on the subject they've seen. Of course, these are NPOs focused on addressing neglected problems in global poverty, animal advocacy campaigns, and other niche areas. Some of the biggest successes in EA come from close collaborations with academia. I think most EAs would encourage more cooperation between academia and EA. I've pushed in the past for EA making more grants to academics doing sympathetic research. Attracting talent with an academic research background to EA can be difficult. I agree with you that, overall, EA's current approach doesn't make sense.

I think you've got a lot of good points. I'd encourage you to make a post out of some of the comments made here. I think one reason your posts might be poorly received is that some causes in EA, especially AI safety/alignment, have received a lot of poor criticism in the past merely for trying to do formal research outside of academia. I could review a post before you post it to the EA Forum and suggest edits so it would be better received. Either way, I think EA integrating more with academia is a great idea.

Hey Evan, thanks for the detailed reply and the encouragement! :) I'd love to write a longer post on this and I'll try to do so as soon as I catch some more time! Let me just briefly reply to some of your worries concerning academia, which may be shared by others across the board.

  1. Efficiency in terms of time - the idea that academics can't do as much research as non-academics due to teaching duties is not necessarily the case. I am speaking here for the EU, where in many cases both pre-docs and post-docs don't have many (or any) teaching duties (e.g. I did my PhD in Belgium, where the agreement was that PhDs focus only on research). Moreover, even if you do have teaching duties, they may often inform your research, and as such are usually not "wasted time" (when it comes to research results). As for professors, this largely depends on the country, but there are many examples of academics with a prof. title whose productivity is super high in spite of teaching duties.

  2. Focusing on sexy topics - there is this misconception that sexy topics won't pass through academia, while actually the opposite is the case: the sexier your topic is, the more likely it is that your project gets funded. The primary issue with any topic whatsoever is that the project proposal shows how the topic will be investigated, i.e. the basic methodology. I don't know where exactly this myth comes from, to be honest. I work in philosophy of science, and the more relevant your topic is for real-world problems, the more attractive your project proposal will be (at least in the current funding atmosphere). One reason this myth is so entrenched among EAs could be the experience of EAs within research projects which already had pre-determined goals, so that each researcher had to focus on whatever their boss asked them to. However, there are numerous possibilities across the EU to apply for one's own project proposals, in which case you will do precisely what you propose. Another reason could be that EAs don't have much experience with applications for funding, and have submitted project proposals that don't seem convincing in terms of methodology (writing proposals is a skill which needs to be learned like any other), leading them to conclude that academics don't care about the given topics.

  3. Using public funding for EA purposes - this point relates to what you mention above, and I think it would be really great if this direction could be improved. For instance, academics within EA could form a sort of counseling body, helping EAs with their project proposals, choice of a PhD supervisor, etc. This would be a win-win situation for all kinds of reasons: from integrating EA-relevant research goals into academia, to using public funding sources (rather than EA donations) for research. This could proceed e.g. via real-life workshops, online discussions, etc. I'd be happy to participate in such a body, so maybe we should seriously consider this option.

Nick says these latest grants "disburse all the EA Funds under my management." However, the grant amounts are ~10-15% less than the available cash the funds were reported as holding at the end of March, and the funds have presumably raised more money since then. Can Nick or someone from CEA please clarify?

Hi Jon, yes, this is due to the numbers reported in March including accounts payable - money not yet held in cash but expected to come in. We later realised that some of the transactions we were expecting were not real donations, but rather several people making large 'testing' donations which then did not get paid. We have resolved these issues and will be reporting Fund balances in cash terms going forward; however, it did mean that the March numbers ended up being inflated.

We will be publishing a post in the coming weeks going into detail on the work we have been doing on the back end of the Funds, and releasing an update to the site which automatically pulls the Fund balances from our accounting system.

Thanks for clarifying!

Looking forward to seeing the upcoming post, it would be great if it could include a chart/table of donations (in cash terms) to each fund over time.

There are links missing from the EA Community Fund post to the OpenPhil writeups on 80k and CEA.

Fixed. Thanks, Markus!

Hi Peter, should be in the next few days, we're just finalising the details on the CEA side.

Perfect, thanks!


Upvoted because I think it's a good community norm for people to call each other out on things like this.

However, with the rapid upvoting, and human attention span being what it is, I'm a bit worried that for many readers the main takeaway of this post will be something not far from "Nick Beckstead = bad". So in an effort to balance things out a bit in our lizard brains...

Ode to Nick Beckstead

  • I personally can't think of anyone I'd attribute more credit to for directing funding towards AI Safety work (and in the likely case that I'm wrong, I'd still be surprised if Nick wasn't in the top handful of contributors)

  • Nick was an EA before it was cool, a founding trustee of CEA, helped launch the first Giving What We Can student groups, helped launch The Life You Can Save, etc.

  • Rob Wiblin calls him "one of the smartest people I know" with "exceptional judgement"

  • On top of all the public information, in private I've found him to be impressively even-handed in his thinking and dealings with people, and one of the most emotionally supportive people I've known in EA. [Edit, h/t Michelle_Hutchinson: Disclaimer: I work for an EA community-building organisation that was offered an EA Community Grant last month by CEA.]

Indeed, nothing in my post should be taken as a reflection on Nick's character. It's just that Nick was the fund manager of both the funds central to my post, and some of the material I directly cited from the EA Funds webpages was written by him. I wasn't sure how to write it without mentioning Nick a lot, as I thought writing my post with "Prof. Beckstead," "the fund manager," or pronouns everywhere would have been more awkward.

So, if it didn't come across in the tone of my post: my intention is that the CEA as an organization is responsible for addressing these concerns. Not only Nick but multiple staff from the CEA were involved in providing communications which, as I laid out in my post, paint a contradictory picture of what different people within the CEA thought the Funds would be used for. This is on top of the multiple concerns Henry Stanley, myself and others have raised in the last several months regarding the EA Funds, and those concerns not (until now) being addressed for the EA Community and Long-Term Future Funds.

I found Nick's response adequate, and I thanked him for updating the EA Funds now. However, as I also responded to Nick, and as other comments note, that alone doesn't address why things have gotten to this point in the first place. Given the expectation that the EA Community and Long-Term Future Funds would have been as transparent and accountable as the other two funds have been, there are concerns regarding effectiveness still to be addressed. I intend to follow up with the CEA to address these concerns.

Re the community fund: I find the decision to not review applications for new, small, projects both surprising and troubling.

  1. It seems very unlikely that established organisations which, by the grant-maker's own assessment, are not significantly funding-constrained would make better marginal use of funds than a new organisation might.

  2. It is also unlikely that donating to established organisations will do more to grow the movement than helping new organisations start up would.

  3. Echoing what has already been noted, the rationale given does not stand in a reasonable relation to the disbursement of more than half a million pounds. This is even more so when one of the recipient organisations is so closely aligned with the grant maker. This is not at all an expression of distrust in the grant maker's integrity; this is just obvious good governance.

  4. The rationale of lacking time to make the judgements does not stack up. First, a person who lacks the time to make obviously pertinent evaluations should not be in charge of the fund. Second, there are solutions, such as contracting people to invest the time required.

  5. Not unrelatedly (and apologies for not being able to articulate this super well) there appears to be a somewhat pervasive belief among some of those that are already well established within EA organisations that they're a lot better at making important decisions than outsiders would be. It's reflected in the grant maker's apparent reluctance to hand over management of the fund to someone else, in some comments in the discussion thread here (that put a lot of weight on how long someone's been involved, for example), and most explicitly in the belief that a new, small EA organisation would actually do net harm, by being "suboptimal representatives", or by not disappearing quickly enough for the grant maker's liking. This smacks of hubris to me.

(I feel like I've seen this a lot recently, and I think it's really worrying for the future health of the EA movement. One place is in this post, where the message can be glossed as: hey, for many important decisions, it's just not worth our while to explain to you all why we make them.)

I have been donating to the Long-Term Future Fund on a recurring monthly basis for over a year now. I figured it was the best way for me to save time and not have to do due diligence for every giving opportunity. From reading this, it seems that I was wrong, and I will be cancelling my donations. This has definitely put a dent in my confidence towards donating to EA orgs of any sort. I may stop donating entirely and instead invest the money now, with a plan to donate it when promising opportunities present themselves in the future.

While I personally have trust that Nick Beckstead has been acting in good faith, I also completely understand why donors might choose to stop donating because of this extreme lack of regular communication.

It's important for EAs to realize that even when you have good intentions and are making good choices about what to do, if you aren't effectively communicating your thinking to stakeholders, then you aren't doing all that you should be doing. Communications are vitally important, and I hope that comments like this one really help to drive this point home to not just EA Funds distributors, but also others in the EA community.

For what it's worth, I've not been convinced one way or the other whether donating now vs. later is optimal. So if one has been donating in the present based on the views of many other effective altruists, but on reflection, based on incidents like this among other factors, chooses instead to donate when more promising opportunities arise in the future, I condone that choice. This is in spite of the fact that I expect most community members will continue to donate in the present, and would encourage others to do the same. Ultimately, I think it comes down to judgement calls about when, or how far into the future, one should wait to donate. In the end, I've always found the arguments for any hard conclusion - donate now; invest; or put money into a donor-advised fund - to be loose. I expect it comes down to personal preferences that are hard to make explicit. If you like, I can try finding some resources, or other community members who've made a similar choice you can talk to about it.

I should say this is an unusual lack of transparency and accountability from an EA project or organization. I would discourage donors from generalizing to many or all EA orgs from this incident alone. A more grassroots, peer-to-peer fundraising platform has been tabled before from within the EA community. Following up in light of this post, I expect I will suggest the EA community pursue that end. While this will entail more time and due diligence for giving opportunities than we would've hoped for through the EA Funds, ultimately I think it'd be easier on donors and funders than a process of cold donation requests in the community. I haven't looked closely at all the alternatives suggested, but I'm guessing the platform could be up within several months. In the grand scheme of things, that isn't too long. So if that sounds promising, I can follow up with you, and you might hold onto your money intended for donations a bit longer before ultimately deciding to invest.

A review of the OpenPhil grants database shows that since the EA funds were founded, Nick Beckstead was the grant investigator for OpenPhil grants made in both these areas, larger than either of these funds; for example $2.7M to CEA and $3.75M to MIRI. These are good grants, and there are more good ones in the grants database.

When the EA Funds were first announced, I wrote:

My concern is that the marginal effect of donating to one of these funds on the amount of money actually reaching charities might be zero. Given that OpenPhil spent below its budget, and these funds are managed by OpenPhil staff, it appears as though these funds put money on the wrong side of a bottleneck.

Basically, it looks like Nick Beckstead is in charge of two sources of funding designated for the same cause areas, EA Funds are the smaller of the two sources, and they aren't really needed.

Lewis Bollard is also in this position, however, so this isn't the whole story.

I think the grants just announced confirm your view that Nick Beckstead can typically convince Open Phil to fund the grantees that he thinks are good (though I also agree with Jeff Kaufman that this may not be true for other Open Phil program officers). To the extent that EA Funds are premised on deferring to the judgment of someone who works full time on identifying giving opportunities, the best alternative to an Open Phil employee may be someone who works on EA Grants.

Here's one way EA Funds could be used to support EA Grants. CEA could choose multiple grant evaluators for each cause area (AI safety, biosecurity, community building, cause prioritization) and give each evaluator for a cause area the same amount of money. The evaluators could then choose which applicants to support; applicants supported by multiple evaluators would receive money from each of them (perhaps equally or perhaps weighted by the amount each one recommended). Donors would be able to see who evaluators had funded in the past and donate directly to the fund of a specific evaluator. If CEA commits to giving each evaluator for a cause area the same amount of money, then donors can be confident that their donations cause evaluators they trust more to have more money (although it'd be harder for them to be confident that they are increasing the overall amount of money spent on a cause area).
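To make that mechanism concrete, here is a minimal sketch in Python of how the equal-budget evaluator scheme described above could pool recommendations. The evaluator names, weights and dollar amounts are purely illustrative assumptions on my part, not anything CEA has specified:

    from collections import defaultdict

    def allocate(budget_per_evaluator, recommendations):
        # Each evaluator for a cause area receives the same budget and
        # splits it across their chosen applicants in proportion to the
        # weights they assigned. Applicants supported by multiple
        # evaluators receive money from each of them.
        totals = defaultdict(float)
        for evaluator, picks in recommendations.items():
            total_weight = sum(picks.values())
            for applicant, weight in picks.items():
                totals[applicant] += budget_per_evaluator * weight / total_weight
        return dict(totals)

    # Illustrative example: two evaluators for one cause area, $100k each.
    recommendations = {
        "evaluator_a": {"project_x": 2, "project_y": 1},
        "evaluator_b": {"project_y": 1, "project_z": 1},
    }
    print(allocate(100_000, recommendations))
    # project_y is backed by both evaluators, so it receives money from each.

The design choice doing the work here is that each evaluator's budget is a separate pot, so a donation earmarked for a given evaluator's fund reliably flows through that evaluator's judgement, which is the property donors would care about.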

Right, but if they weren't needed in the first place, and that's evident now, why wasn't this noticed earlier? More so than how the money has now been allocated from the EA Funds, it's a matter of trust in and accountability of the CEA, since as an organization over the last several months they've given a self-contradictory impression of how the EA Funds would be managed, without clarification.

This post highlighted an important problem that would have taken much longer to address otherwise. I would point to this post as an example of how to hold powerful people accountable in a way that is fair and reasonable.

(Disclosure: I worked for CEA when this post was published)

I'd be curious to hear some explanation of

"University-based grantees were not considered for these grants because I believe they are not well-positioned to use funds for time-saving and productivity-enhancement due to university regulations."

since I have no clue what that means. In the text previous to this claim it is only stated that "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare)" - but university staff can indeed use funding to cover teaching duties, as well as to buy better equipment.

Moreover, if it were any other domain of research (say, medicine or physics), I'd be rather worried if university-based grants were disqualified for this kind of reason.

I'm in no way associated with EA Funds (although I do contract with CEA), but I can take a guess. Several EA orgs pay for assistants and certain other kinds of help for academics directly, which makes me think that the straightforward interpretation of the statement is true: Nick wanted to fund time savings for high impact people, and academics can't accept money to do that, although they can accept donated labor.

But that's just not necessarily true: as I said, academics can accept money to cover e.g. teaching duties and hence do more research. If you look at ERC grants, that's part of their format in the case of Consolidator and Advanced grants. So it really depends on who applied for which funds, which is why Nick's explanation isn't satisfactory.

What do people think of the idea of having multiple funds (each run by a different manager) for those two areas (with donors allowed to choose a specific manager)?

Benefits would include:

  • a greater incentive for managers to spend money promptly and transparently

  • greater choice for donors, if managers have different worldviews (e.g. focused on the long-term future and thinking AI safety should be a priority; focused on the long-term future but thinking AI safety should be less of a priority; focused on the community with an emphasis on community building; focused on the community with an emphasis on cause prioritization)

  • an increase in the chance that good projects are funded

Costs could include:

  • creating tension between the fund managers (and perhaps in the community at large)

  • no fund manager having enough money for bigger grants (though perhaps large grants could be left to Open Phil)

  • an increase in the chance that harmful projects are funded

Note: The idea of multiple fund managers has been proposed before.

I think that one of the constraints that is faced here is a lack of experienced grantmakers who have a good knowledge of x-risk and the EA community.

I'm not sure I agree that this constraint is real, I think I probably know a lot of good people who I'd trust to be competent EA / x-risk grantmakers, but I certainly haven't spent 10 hours thinking about what key qualities for the role are, and it's plausible that I'd find there are far fewer people competent enough than I currently think.

But if there are more grant managers, I think I disagree with your costs. Two or more grantmakers acting on their own, different, first-principles models seems great to me, and seems to increase the likelihood of good grantmaking occurring, not to increase tension or anything. Competition is really rare and valuable in domains like this.

Thanks for this comment. This succinctly addresses the main considerations regarding the question of additional grant managers. I agree the benefits outweigh the costs, and given that, based on the reception to my post, many aren't satisfied with how having only a single fund manager for two funds has turned out, it's definitely worth considering.

To clarify, these two areas already have two separate funds designated to each of them, and thus far both have happened to share a single manager. While I'm not too concerned that Nick made grants to the CEA and 80,000 Hours while a trustee of the CEA himself, as that has been transparently disclosed since the funds' inception, I find it concerning that both CEA and 80k received grants from both funds. I don't think it's a conflict of interest, since I have faith Nick earnestly believes that's best. Yet since CEA and 80k just happen to share priorities of both EA movement-building and x-risk reduction, for a single grant manager to make grants to both organizations from two different funds seems to blur the criteria for the two funds. In theory, grants from each fund should be optimized for different criteria based on the area it pertains to.

At best, as it stands, mixed signals are sent. For example, EA organizations may be unclear whether the CEA is indifferent to EA movement-building activities which don't give a prominent place to building up a long-term future focus in EA. That could distort things by attracting projects which aren't as effective but receive undue favour for signaling a long-term future focus, while projects which aren't as explicitly about the long-term future in EA, and don't obviously or immediately benefit long-term future-focused projects in EA, but would nonetheless effectively build the movement, get overlooked.

Nick himself has responded above that he thinks it would ultimately probably be better if someone with more time, and less redundancy with a higher-priority role, could manage one or both of these funds (in addition to or instead of him?). As Benito also responded, the problem is there appear to be few people who are qualified. On the other hand, 'nascent high-context intellectual movement non-profit grant manager' is an abstract role that doesn't exactly exist in the broader world. And nobody has addressed the possibility that Nick isn't literally the only person in the entire EA movement qualified to manage either of these funds. So I'm optimistic there is a discussion to be had among effective altruists on what or how someone would qualify for that role, though I don't know how that conversation should be had.

I agree with the benefits you cited. Regarding potential costs:

  • While it's certainly possible there could be tension between one or more fund managers for each of these funds, it seems mechanisms could be designed to avoid that. Even setting that aside, I don't have a reference class for the rate at which tensions between managers of a single fund develop, so I don't see a strong reason to expect that would be the default outcome. Indeed, we haven't considered the possibility the fund managers would be cooperative rather than competitive, complement each other's perspectives, largely agree, and merely cover each other's blindspots. Regarding the possibility of tensions within the community based on the EA Funds, it would appear there is a lot of tension right now with only one fund manager for two EA Funds. There are potential downsides to having multiple fund managers for multiple EA Funds, but that's not been tried. We know what's been tried so far has been inadequate, so that's not evidence against trying something new.

  • Regarding no single fund manager having enough money for bigger grants, this precludes the possibility that multiple fund managers will have significant overlaps in their worldview, or that they would have some other way to adjudicate bigger grants. So it's possible that, for the biggest grants, the likeliest outcome is that multiple fund managers would converge on the same conclusion. In any case, it's not my impression that any of the EA Funds except the Global Health & Development Fund has made a grant to date over $500k. Since the EA Funds haven't been used thus far for the especially big grants in question, and may not be in the future, this may be a moot point.

  • I'm confused as to why this would increase the chance harmful projects are funded. I guess if someone was much more confident in one fund manager's judgement over another's, grants from the other fund manager might by comparison be net negative in expectation. Or their judgement could be directly negative, in that they fund projects that do nothing for the movement or the long-term future, or that harm their trajectories. But I've not seen anyone argue that the judgement of some hypothetical, single grant manager would be irreproachable relative to literally anyone else. Problems with finding qualified fund managers, and what that means, have been brought up. But that doesn't mean they can't be addressed. And everyone so far agrees it would be a good thing if the EA community can work together to figure that out. Nick Beckstead himself already said that if someone with more time to look at these funds could be found, that would ultimately be better. So the idea that multiple fund managers would increase the chance harmful projects are funded strikes me as strange. This is especially in light of the fact that the grantees of the EA Community and Long-Term Future Funds are projects the donors counterfactually would have either donated to themselves, or passed over in favour of emerging projects/organizations in EA, which is what donors to these funds were initially led to believe would be funded. It's not clear that in the last year the EA Community and Long-Term Future Funds improved at all upon how those donations would otherwise have been allocated. So unless the expected value of having multiple fund managers is literally less than zero, I'm not sure how it's worse in expectation than the EA Funds for the last year being, counterfactually, a wash.

In summary, based on what we've observed so far, and without knowing whether those costs would come to fruition with multiple fund managers, I don't understand how the costs could compare to the potential benefits. I think this should no longer be an idle consideration. I don't see why this shouldn't be one of the primary ways, if not the primary way, to improve the EA Community and Long-Term Future Funds going forward.

My model (a guess) is that through his work for OpenPhil, Bollard often has additional grants he wants to make, while Beckstead can more often convince OpenPhil to make his intended grants and so is rarely in this position. Hence Bollard has more use for supplementary funding.

I would agree there's more scope beyond how the Open Philanthropy Welfare Fund presently operates, so EA Funds has more potential utility there. But my own view is that the full range of possibilities isn't presently explored or considered, because of time constraints, the low value of some disbursements, and potentially having to spend more time justifying fairly unconventional grants.

In some ways I think it is the unconventional / marginal organisations which need more consideration, as they bring potential value to the table over what is generally considered. In particular, a narrow funding focus could develop associations with particular organisations / ideas, and so there could be issues of gravitating toward type.

I'm not sure what the solution is; perhaps another project worker at the Open Philanthropy Welfare Fund, or maybe a small set of volunteers could be managed and empowered to work on building cases. It's difficult to know, but I do sympathise with the time constraints.

Open Phil hired a Senior Associate, Farm Animal Welfare in March 2018.

https://www.openphilanthropy.org/about/team/amanda-hungerford

Yeah, this was a good step, but I think probably not enough, particularly in relation to having two former HSUS staff members, which is useful for implementing the current programme but less so for considering or assessing the value of different areas of the animal movement.

I don't know about others, but since Nick's response, the issue isn't with how Nick has run the two EA Funds, but that for several months donors and the community were given the impression the funds would be run a different way, and there wasn't much communication on this subject. Donors gave to the EA Community and Long-Term Future Funds under the impression the fund manager would be looking for those additional granting opportunities, as was the case with Lewis' management of the Animal Welfare Fund.

Lewis announced another round of grants for the Animal Welfare Fund on Facebook on June 26, though it's not clear when exactly the grants were paid out or will be paid out. The Animal Welfare Fund page has not been updated with this information. This seems surprising since Lewis has already written up an explanation of the grant; it just isn't on the website yet.

Update: two months later, CEA has now updated the management teams for these funds, bringing on new managers and committing to a regular schedule of grant giving. link

Thanks for writing this up.

Small typo:

are contradicted by the fund manager.Nick is

There should be a space between "manager." and "Nick"

Also the formatting of that whole paragraph is inconsistent with the style of other body paragraphs.