
The Centre for Effective Altruism has begun its 2016 fundraising round. We’ve put together a document that summarises our activities, impact and mistakes in 2016, and plans for 2017. You can read it here (there’s an option to download it as a PDF file); we’ve also reproduced the executive summary below. 


There are a couple of differences between this fundraising round and some previous fundraising rounds. First, we’ve not tried to aggregate all our activities into a single “dollars spent : dollars raised for charity” ratio. Doing so made sense for Giving What We Can when it was an independent organisation, because its primary metrics were money moved and total money pledged. (Though even then it’s still plausible that Giving What We Can’s main impact lay beyond moving money, such as by helping foster the effective altruism community.) CEA currently has several distinct aims: raising money for extremely effective charities is one, but it also aims to grow the effective altruism community, to increase the value of the community to each member, to reduce risks of collapse or fragmentation of the community, to produce and encourage research that advances our understanding of how to do the most good, and to try out new projects that might be extremely high-value. We therefore think it would be misleading to put heavy emphasis on money moved.

Second, our assessment of our room for more funding is much higher than in previous years. This reflects a change of attitude. First, we’ve taken more seriously the implication that, if donating to a meta-charity like CEA is more effective than donating to a first-order charity, then we should be trying to grow CEA’s activities so that there is greater room for funding among meta-charities. For this reason, we’ve spent more time than in previous years thinking about how we could do a lot more than we’re currently doing without hiring more staff, for example through Facebook ad campaigns and regranting to the EA community, including to local groups, EAGx conferences, and other non-CEA EA projects. Second, many more people have recently been emphasising talent gaps rather than funding gaps in EA, to the extent that many people who are earning to give are considering moving to do direct work. This meant that we spent more time figuring out how much we could reasonably grow, even beyond the amount we expect we could raise, so that people could make career decisions based on a good understanding of the room for more funding at CEA (I wrote a little about the risk of overcorrecting on talent/funding here).

I don’t think it’s currently meaningful to say that CEA is more talent-constrained than funding-constrained. We’re always on the lookout for exceptional people to hire, and would be able to do a better job if we had even more exceptional people working for us, so in that sense we are talent-constrained. But unless a very large donor steps in, I think it’s unlikely that we’ll reach our aggressive growth target, so it’s likely we could still spend additional money well, and in that sense CEA will also be funding-constrained.

 

 Read the full prospectus here: CEA Winter Fundraising Prospectus

 

Executive Summary

 

The Centre for Effective Altruism (CEA) helps to grow and maintain the effective altruism movement. Our mission is to:

- create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible; and

- make the advancement of the wellbeing of all a worldwide intellectual project, doing for the pursuit of good what the Scientific Revolution did for the pursuit of truth.

We have two divisions. The community and outreach division focuses on growing, strengthening, and serving the effective altruism community. The special projects division focuses on improving our understanding of how to do the most good, which includes exploring new applications for effective altruism.

2016 was a year of significant change for CEA. We went from being a collection of largely autonomous teams to a single team under one management structure. Although the internal reorganization is complete, the process of integrating the various projects continues. Our overarching goal for 2017 is to build a strong, focused CEA, which we believe is essential to achieving our long-term mission.

Key projects we intend to pursue in 2017 include the following:

- Turn effectivealtruism.org into the landing page for the effective altruism community

- Host three Effective Altruism Global conferences

- Establish a scalable model for facilitating student and local groups

- Launch a multidisciplinary institute at the University of Oxford for the study of effective altruism

- Develop advanced quantitative cause prioritization models

Below we provide a detailed review of our activities over the past year and our plans for next year. In summary, we believe the case for supporting CEA based on marginal cost-effectiveness alone is strong. But we also maintain that marginal cost-effectiveness is not how CEA should be evaluated.

We believe that effective altruism has the potential to have a transformative impact on how people think about doing good in the world, and that CEA is currently best positioned to help effective altruism realize its potential. We estimate that most of CEA’s value in expectation comes from the chance, small as it might be, that it realizes that mission. In other words, funding CEA is a gamble, albeit (because of its marginal cost-effectiveness) a gamble in which the “bad” outcome still looks pretty good.

For 2017, the minimum we’re looking to raise is £2.5 million. We believe we could spend much more than that before hitting strongly diminishing returns: we could spend £5.1 million in our growth scenario and £7.3 million in our stretch growth scenario. In both of these latter two scenarios, we would regrant a significant amount of money to smaller projects in the effective altruism community.

 

Comments

I find it difficult to evaluate CEA, especially after the reorganization, though I found it difficult beforehand as well.

The most significant reason is that I feel CEA has been exceedingly slow to embrace metrics regarding many of its activities. As an example, I'll speak to outreach.

Big picture metrics: I would have expected one of CEA's very first activities, years ago when EA Outreach was established, to be measuring subscription to the EA community: gathering statistics on the number of people donating, the sizes of donations, the number who self-identify as EAs, the percentage who become EAs after exposure to different organizations/media, the number of chapters, the size of chapters, the number who leave EA, etc.

Obviously, some of these are difficult to collect, and others would involve making assumptions, gaining access to properties other organizations run, or gathering data yourselves, but I would expect to see a concerted effort to do so, at least in part. The community has embraced Fermi estimates where appropriate, and these metrics could be estimated with much more information than such estimates often require.

So a few years in, I find it a bit mind-blowing that I'm unaware of any attempt to do this by the only organization that has had teams dedicated specifically to the improvement and growth of the movement. Were these statistics gathered, we'd be much better able to evaluate CEA's outreach activities, which are now central to its purpose as an organization.

With regard to metrics on specific CEA activities, I've also been disappointed by the seeming lack of measurement (though this may be a transparency issue; more on this later). For example, there have been repeated instances where outreach has actively turned people off in ways that I've been told have been expressed to CEA. Multiple friends who applied to the Pareto Fellowship felt it was quite unprofessionally run, and potential speakers at EA Global mentioned they'd found some of the movement's actions immature. In each instance, I'm aware of them becoming significantly less engaged as a result.

At times concerns such as these have been acknowledged, but given the level of my (admittedly highly anecdotal) exposure to them, it feels like they have mostly not been examined to see whether they were at a magnitude that should give pause. It would be nice to see them fully acknowledged through quantification, so we could understand whether these experiences were a small minority (which of course still matters) or actually of great concern. Quantification could involve, for example, getting feedback on the process from all of those who applied to the Pareto Fellowship or EA Global, or from all of those who considered them. I do believe that some satisfaction measurements for EAGx and EA Global did in fact come out recently; I was glad to see those and hope that they are seen as starting points rather than as representing the majority of CEA’s growth in measurement.

Another example of where quantification could be helpful is the relative prominence of various communication vehicles. The cause prioritization tool, for example, is quite prominently shared, but has its success been measured? Have any alternatives been considered? Measuring and sharing this could be beneficial both for CEA’s decision making and for helping the community understand what works best for its own outreach activities.

The second most significant reason I find CEA tough to evaluate, which is interconnected with much of what I said regarding the first, is that I feel transparency, especially around decision making, is lacking. I feel that other EA organizations better document why they are pursuing much of what they do, but CEA too often feels like a collection of projects without central filtering or direction. I do believe the reorganization may have been intended to address a similar feeling, but new projects launched since the reorganization, such as EA Concepts, have similarly seemed to come out of nowhere and without justification of their resourcing. It'd be quite helpful to better understand the set of projects CEA considers and how its decision making leads to what we observe. So many of us have been exposed to the book giveaway… what was the decision making behind it? Should taking such a proactive action make us update toward CEA having found a quite effective promotion vehicle, or was it a trial to determine the effects of distribution?

CEA has taken initial steps toward improvement with the monthly updates, and I'd like to see these expanded considerably, with specific attention to decision making.

Could CEA speak to its planned approach to growing measurement and transparency moving forward?

I have many additional strong feelings and beliefs in favor of CEA as a donation target, had many strong anecdotal experiences, and have a few beliefs that give me great pause as well. But I think measurement and transparency could do a great deal toward putting those in proper context.

Hey Josh,

As a preliminary matter, I assume you read the fundraising document linked in this post, but for those reading this comment who haven’t, I think it’s a good indication of the level of transparency and self-evaluation we intend to have going forward. I also think it addresses some of the concerns you raise.

I agree with much of what you say, but as you note, I think we’ve already taken steps toward correcting many of these problems. Regarding metrics on the effective altruism community, you are correct that we need to do more here, and we intend to. Before the reorganization, this responsibility didn’t fall squarely within any team’s jurisdiction, which was part of the problem. (For example, Giving What We Can collected a lot of this data for a subset of the effective altruism community.) This is a priority for us.

Regarding measuring CEA activities: internally, we test and measure everything (particularly with respect to community and outreach activities). We measure user engagement with our content (including the cause prioritization tool), the newsletter, Doing Good Better, Facebook marketing, etc., trying to identify where we can most cost-effectively get people deeply engaged. As we recently did with EAG and EAGx, we’ll then periodically share our findings with the effective altruism community. We will soon share our review of the Pareto Fellowship, for example.

Regarding transparency, our monthly updates, project evaluations (e.g., for EAG and EAGx, and the forthcoming evaluation of the Pareto Fellowship), and the fundraising document linked in this post are indicative of the approach we intend to take going forward. Creating all of this content is costly, and so while I agree that transparency is important, it’s not trivially true that more is always better. We’re trying to strike the right balance and will be very interested in others’ views about whether we’ve succeeded.

Lastly, regarding centralized decision-making, that was the primary purpose of the reorganization. As we note in the fundraising document, we’re still in the process of evaluating current projects. I don’t think the EA Concepts project is to the contrary: that was simply an output of the research team, which it put together in a few weeks, rather than a new project like Giving What We Can or the Pareto Fellowship (the confusion might be the result of using "project" in different ways). Whether we invest much more in that project going forward will depend on the reception and use of this minimum version.

Regards, Michael

> Creating all of this content is costly, and so while I agree that transparency is important, it’s not trivially true that more is always better. We’re trying to strike the right balance and will be very interested in others’ views about whether we’ve succeeded.

Would CEA be open to taking extra funding to specifically cover the cost of hiring someone new whose role would be to collect the data and generate the content in question?

The way your fundraising page represents how much money CEA is trying to raise confused me. First of all, you switched between representing amounts in dollars and in pounds. This isn't a big deal, but I thought I'd just let you know it's momentarily jarring when the currency of the amount being requested switches so often. I think readers can convert currencies well enough by themselves if need be.

Anyway, it says the CEA is seeking $3.1 million as its 'Minimum Target' for how much it's seeking to raise. But that's the minimum target CEA is seeking in order to expand beyond its current scope of activity. The budget summary says the amount CEA needs to raise to cover the continuation of its regular suite of activities in 2017 is £1,860,517. As of this writing, that comes out to $2,277,905. It took me a while to figure out that the ~$2.3 million figure was to continue ongoing operations, and I guessed the ~$900k USD remaining would be for the ambitious expansion of more speculative but successful projects, like marketing, EAGx grants, and EA chapter grants. But I noticed that's already accounted for in the budget summary as well.

So, pardon me for saying so, but I'm confused as to what CEA's intentions are with the 'Minimum Target' and the 'Growth Target'(?). I think I'm missing something, or the document doesn't make clear which items in CEA's 2017 budget the funding from these targets, if reached, would be used for. Could you please clarify?

Second this. I'm guessing part of what's going on in the $3.1M versus £1.8M has to do with reserves, but it would be useful to get confirmation. Also, the linked Google sheet doesn't have numbers that I can line up with anything else in the blog post, I think because it has numbers for CEA UK only and ignores CEA US (but that's speculation)?

Hi AGB, you are correct on both counts: the linked budget is for CEA UK only, and the $3.1M figure is enough to allow us to end 2017 with at least 12 months of reserves.

The reason that we’re raising more than the total projected spend for 2017 is that we are hoping to build up our reserves to ensure we do not need to fundraise mid-year. We aim to maintain a minimum of 12 months of reserves, in line with recommended best practices for non-profits. Prior to the start of this fundraiser, we had planned to let our reserves fall far below this limit towards the end of 2016, as we identified some particularly promising opportunities late in the year, including the Doing Good Better giveaway campaign and marketing the EA Newsletter and the Giving What We Can Pledge. Having experimented with these new approaches, we want to further test and expand upon these activities in 2017 while rebuilding our reserves to a more sustainable level. This means that we need to raise about 18 months of reserves to fully fund our current mainline plans and avoid the need to fundraise mid-year.

Our plans following the fundraiser are as follows:

  • If we raise the full $3.1M then we will not run another fundraiser until late 2017. We will plan to end 2017 with around 12-16 months of reserves.

  • If we raise less than $2.1M, we will reevaluate our 2017 plans. In this scenario, we would likely reduce our planned spending on marketing activities during Y Combinator, reduce the amount we plan to spend on EAGx and student group grants and delay or cancel some planned hires.

  • If we raise an amount between $2.1M and $3.1M, we will proceed with our mainline plans for 2017, but we will likely not pursue any additional activities, and we will be more cautious with some of our more flexible spending, such as EAGx grants and the marketing spend we have planned during Y Combinator. We will then reevaluate our financial position mid-year and may decide to run a smaller fundraiser then to cover any gaps.

I don't believe organizations should post fundraising documents to the EA Forum. As a quick heuristic, if all EA orgs did this, the forum would be flooded with posts like this one and it would pretty much kill the value of the forum.

It's already the case that a significant fraction of recent content is CEA or CEA-associated organizations talking about their own activities, which I don't particularly want to see on the EA Forum. I'm sure some other people will disagree but I wanted to contribute my opinion so you're aware that some people dislike these sorts of posts.

I feel that 1-2 such posts per organization per year is appropriate and useful, especially since organizations often have year-end reviews or other orienting documents timed near their annual fundraiser, and reading these allows me to get oriented about what the organizations are up to.

1-2 posts per year seems arguably reasonable; one post per month (as CEA has been doing) is excessive.

I guess the key is that every update post must either convey substantial info that will change people's actions, or genuinely solicit strategic input.

Perhaps the top-level comment is more intended to convey the belief that monthly update posts ought to live on CEA's website, rather than the forum, and it is not specific to this (different type of) post.

I like that this is being debated. Personally, I think that organization-related posts are great because organization-related material is action-related material, and the whole point of the forum is to get people to combine big EA ideas with action.

I agree that if the front page was half-covered by CEA content (including research and updates) at all times of year, then this would be bad, but I would guess that if you amortize it, they make up like 5% of content.

While I disagree with Michael and don't think we should discourage EA orgs from posting fundraising documents,* I'm disappointed that his comment has so far received 100% downvotes. This seems to be part of a disturbing larger phenomenon whereby criticism of prominent EA orgs or people tends to attract significantly more downvotes than other posts or comments of comparable quality, especially relative to posts or comments that praise such orgs or people.

__

(*) I work for CEA, so there's a potential conflict of interest that may bias my thinking about this issue.

I downvoted because I found the tone negative and hyperbolic. It won't kill the forum. I think a good norm for the community is to always steelman before criticising. This would make us more welcoming and constructive.

Also, there are now 14 comments debating this issue and 0 comments debating how much funding CEA should get, which is a vastly more important issue. So this comment derailed the thread.

If the CEA is seriously seeking feedback from unaligned EA community members on how much funding the CEA should receive, I have all kinds of thoughts on this I'll write up as a top-level post when I get the chance.

I agree that, other things equal, we want to encourage critics to be constructive. All things considered, however, I'm not sure we should hold criticism to a higher standard, as we seem to be doing. This would result in higher quality criticism, but also in less total criticism.

In addition, the standard to which criticism is held is often influenced by irrelevant considerations, like the status of the person or organization being criticized. So in practice I would expect such a norm to stifle certain types of criticism more than others, over and above reducing criticism in general.

I think we should hold criticism to a higher standard, because criticism has more costs. Negative things are much more memorable than positive things. People often remember criticism, perhaps just on a gut level, even if it's shown to be wrong later in the thread.

Yep, it's helpful to emphasize that upvotes and downvotes should be allocated according to whether (as indicated when you hover over the button) you "found this useful" or "didn't find this useful", not based on agreement!

I know this is the stated meaning, and I usually think it's correct to act on. In some cases when usage deviates from this, though, I'm not actually sure that people are making a mistake.

I think that happens most often on short statements of opinion. In such cases, there's not much ambiguity about how useful the comment was (opinions are always somewhat useful but don't contain amazing new insights). It's more useful to get a cheap instant poll of how widespread that opinion is in the community.

Notes:

  • I'm not confident in this, but to the extent that it seems wrong, it would be because posting short opinions is generally unhelpful. (I'd find that claim more plausible for LW, but still dubious there.) Otherwise, to convey information about the distribution of opinions, lots of people need to post.
  • Separate buttons, as Benito suggests below, might well be preferable. In particular, they'd avoid the ambiguity of cases like the one at hand, which will mostly be read as an expression of opinion but also gives some supporting considerations.
  • This "instant poll" effect is to my mind the strongest reason for having voting scores on posts be public anyway. Maybe if there were separate buttons only the "agree/disagree" one would get displayed, and the "useful/not-useful" would be used to determine display-order for posts.
  • I was going to downvote Ryan's comment to express that I disagree ;) But then I noticed that it was unusually helpful that he'd raised the point explicitly, as it made it easier to have this conversation, so I didn't know what to do.

The primary role of the vote buttons is to create the incentive gradient that determines which comments (and commenters) we get to have. This is perhaps the most powerful tool we have for incentivising some types of commentary. So I think we should practically always vote according to what comments we want to exist. On the margin, I think everyone (including you, based on your last bullet!) should vote more on usefulness.

Yes: perhaps the text (particularly of the downvote button) should be changed to something that clarifies that.

I definitely use the downvote button a lot to express negative affect, which is strongly influenced by 'disagree'. Having separate buttons could be pretty awesome.

It would also be terrible UI to have two 'up' and two 'down' buttons with different meanings.

The nearest thing that would be feasible would be to have, as does LessWrong, a polling feature.

You could give a little (3x3? 5x5?) voting grid: usefulness is one dimension, agreement is another. Users have the option of hiding one of the dimensions, and maybe this is the default.

This document is effectively CEA's year-end review and plans for next year (which I would expect to be relevant to people who visit this forum). We could literally delete a few sentences, and it would cease to be a fundraising document at all.

December is giving season due to the US tax year. This is an appropriate place for orgs that are asking for funding to post their updates, and pitches for future funding, and receive comments on them.

As a representative of an org (CSER, and previously FHI) who has periodically posted updates on these orgs on the EA Forum and previously on LW, I find it very helpful to hear opinions (both positive and negative) on the desirability and frequency of updates. I would be grateful for more opinions while it's under discussion.

Thank you Michael for raising the question.

Given all the interest in this topic (which is fairly unrelated to the top post), I wonder if it makes sense to do a separate post/survey on what the ideal posting frequency for EA orgs on the EA Forum would be. I know CS would be very responsive to information on this, and I suspect all the other EA orgs would be as well.

It also seems a bit hard to deal with criticism that falls along somewhat contradictory lines: a) you're not being transparent enough, I want more things like the monthly update, and b) you're too spammy, I want to see fewer things like the monthly update. (I do know there is a difference between the number of posts and the information given, but limiting the number of posts does make it harder.)

Well, more transparency and EA Forum posts don't have to be correlated. For example, I have read many of the updates posted on Charity Science web properties, and I think that's a fine place for many of them to continue to live.

Yes. I agree with those who have pointed out that this derailed an important CEA conversation (and regret, in hindsight, contributing to this; my apologies), but the questions Joey raises are ones that it would be very useful to have more info on, in the context of a separate discussion.

Who would you suggest run such a survey? Usually, these sorts of things would be run by EA orgs, but in this case I'd be wary of almost any EA org running it since they've got such strong institutional motivations/incentives to interpret or present the data in a biased way.

If enough people feel the same as Michael, is there a case for having a forum subsection where e.g. updates/fundraising/recruitment calls for EA orgs could live?

Disadvantages I could see

  • 'branching' the forum complicates and clutters it at a point where there still isn't a huge amount of volume to support/justify such structures.

  • these updates might get less visibility.

Advantages (in addition to the forum being kept more free for discussion of EA ideas)

  • These updates would then all be clustered in one place, making it easier for a reader to do an overview of orgs without digging through the forum's history.

I could make a links post of all EA orgs' (semi-)annual reviews (if they have one up), and make it its own top-level post.

I disagree.