Comment author: Robert_Wiblin 06 September 2017 07:27:09AM *  3 points [-]

I strongly recommend using the impact-adjusted plan change metric rather than the unadjusted one for 80,000 Hours. Those figures:

Sep 2014 to Aug 2015 - 184.8

Sep 2015 to Aug 2016 - 631.3

Sep 2016 to Aug 2017 - 1202

There's also our newsletter growth. New subscribers each year:

2014 - 262

2015 - 23,000

2016 - 76,000

2017 so far - 57,000.

Comment author: Peter_Hurford  (EA Profile) 06 September 2017 11:28:30AM 1 point [-]

Ok, done. Thanks. Impact-adjusted numbers are fairer, since that is what you are actually targeting, though there is some subjectivity in the impact-adjustment process.


Is EA Growing? Some EA Growth Metrics for 2017

This post was co-authored by Peter Hurford and Joey Savoie. The EA Survey team at Rethink Charity (including myself) recently released initial data from the 2017 EA Survey and will have more to follow. KBog made a comment on the EA Forum noticing that the 2015 EA...
Comment author: CalebWithers  (EA Profile) 04 September 2017 02:31:12AM 3 points [-]

It seems that the numbers in the top-priority paragraph don't match up with the chart.

Comment author: Peter_Hurford  (EA Profile) 05 September 2017 02:36:06AM 0 points [-]

This is true and will be fixed. Sorry.

Comment author: Michelle_Hutchinson 04 September 2017 02:40:19PM *  6 points [-]

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause. Yet while 600 people said it was the top cause, according to the graph around 800 people said that the long-run future was the top cause (AI + non-AI far future). It seems plausible to disaggregate AI and non-AI long-run future, but at least as plausible to aggregate them (given the aggregation of health / education / economic interventions in poverty), and conclude that most EAs think the top cause is improving the long-run future. Then again, you might have been allowing people to pick multiple answers, and found that most people who picked poverty picked only that, while most who picked AI / non-AI far future picked both?

The following statement appears to me rather loaded: "For years, the San Francisco Bay area has been known anecdotally as a hotbed of support for artificial intelligence as a cause area. Interesting to note would be the concentration of EA-aligned organizations in the area, and the potential ramifications of these organizations being located in a locale heavily favoring a cause area outlier." The term 'outlier' seems false according to the stats you cite (over 40% of respondents outside the Bay thinking AI is a top or near-top cause), and particularly misleading given the differences made here by choices of aggregation. (I.e., you could frame it as 'most EAs in general think that long-run future causes are most important; this effect is a bit stronger in the Bay'.)

Writing on my own behalf, not my employer's.

Comment author: Peter_Hurford  (EA Profile) 05 September 2017 02:35:42AM 0 points [-]

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

I can understand why you're having trouble interpreting the first graph, because it is wrong. It looks like in my haste to correct the truncated margin problem, I accidentally put a graph for "near top priority" instead of "top priority". I will get this fixed as soon as possible. Sorry. :(

We will have to re-explore the aggregation and disaggregation with an updated graph. With 237 people saying AI is the top priority and 150 people saying non-AI far future is the top priority (387 combined), versus 601 saying global poverty is the top priority, global poverty still wins. Sorry again for the confusion.

-

The term 'outlier' seems false according to the stats you cite

The term "outlier" here is meant in the sense of a statistically significant outlier, as in it is statistically significantly more in favor of AI than all other areas. 62% of people in the Bay think AI is the top priority or near the top priorities compared to 44% of people elsewhere (p < 0.00001), so it is a difference of a majority versus non-majority as well. I think this framing makes more sense when the above graph issue is corrected -- sorry.

Looking at it another way, the Bay contains 3.7% of all EAs in this survey, but 9.6% of all EAs in the survey who think AI is the top priority.
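
As an illustration of the kind of calculation behind these figures, here is a minimal sketch in Python using a chi-squared test on a 2x2 contingency table. The counts in the sketch are hypothetical placeholders, since the comment does not report the underlying sample sizes; only the 62%/44% proportions and the 3.7%/9.6% shares come from the comment itself, and the printed p-value is therefore illustrative rather than a reproduction of the quoted result.

```python
# Sketch of the kind of test behind "62% in the Bay vs 44% elsewhere (p < 0.00001)".
# The counts below are HYPOTHETICAL placeholders -- the comment does not report
# the underlying sample sizes, so the resulting p-value is illustrative only.
from scipy.stats import chi2_contingency

bay_yes, bay_no = 62, 38          # hypothetical Bay respondents: AI top/near-top vs not
other_yes, other_no = 440, 560    # hypothetical non-Bay respondents

table = [[bay_yes, bay_no],
         [other_yes, other_no]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.5f}")

# Over-representation ratio from the shares quoted in the comment:
# the Bay holds 3.7% of all respondents but 9.6% of those ranking AI as the top priority.
print(f"over-representation: {0.096 / 0.037:.1f}x")
```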

Comment author: Buck 01 September 2017 11:48:09PM 7 points [-]

I wish that you hadn't truncated the y-axis in the "Cause Identified as Near-Top Priority" graph. Truncating the y-axis makes the graph much more misleading at first glance.

Comment author: Peter_Hurford  (EA Profile) 02 September 2017 01:36:31AM 4 points [-]

Huh, I didn't even notice that. Thanks for pointing it out. I agree that it's misleading and we can fix it.

Comment author: Robert_Wiblin 01 September 2017 07:31:20PM 1 point [-]

In that case I think they should start thinking about it in those terms. :)

Comment author: Peter_Hurford  (EA Profile) 01 September 2017 07:34:40PM 3 points [-]

I'd personally disagree, but it's a good discussion to have either way.

Comment author: Robert_Wiblin 01 September 2017 06:47:50PM *  7 points [-]

For next year's survey it would be good if you could change 'far future' to 'long-term future' which is quickly becoming the preferred terminology.

'Far future' makes the perspective sound weirder than it actually is, and creates the impression you're saying you only care about events very far into the future, and not all the intervening times as well.

Comment author: Peter_Hurford  (EA Profile) 01 September 2017 07:33:27PM 3 points [-]

I've added it to our list of things to consider for the 2018 survey.

Comment author: Robert_Wiblin 01 September 2017 07:00:21PM 2 points [-]

There's a discussion about the most informative way to slice and dice the cause categories in next year's survey here: https://www.facebook.com/robert.wiblin/posts/796476424745?comment_id=796476838915

Comment author: Peter_Hurford  (EA Profile) 01 September 2017 07:33:12PM 1 point [-]

Thanks, I replied there.

Comment author: Robert_Wiblin 01 September 2017 07:05:27PM *  1 point [-]

To reduce future confusion I think that ACE's charity evaluation criteria page should be edited to acknowledge the fact that ACE is increasingly open to 'hits based' charity recommendations, and rightly so: http://www.openphilanthropy.org/blog/hits-based-giving

Comment author: Peter_Hurford  (EA Profile) 01 September 2017 07:28:18PM 1 point [-]

I'm personally less sure whether ACE is thinking about it or framing it in those terms.

Comment author: KevinWatkinson  (EA Profile) 01 September 2017 12:46:40PM 1 point [-]

Hi Peter, thanks for those comments.

-

I believe that one issue with thinking of the seven criteria as fairly rules-based is that people can have an expectation that the criteria will be met in relation to consistency and impartiality. I am not in favour of maintaining strict rules, though I think there are some potential negative consequences of not doing so that need to be taken into account. So in which circumstances would they be overlooked or minimised? I think it is fine to be open that it could happen, but it raises issues in relation to groups that perform well yet wouldn't get top status for less certain reasons. There are further problems with this in relation to how the process is viewed by potential groups taking part in the evaluation process, and by people who look upon recommendations as sufficient consideration. In this way, I think we need to take into account that evaluation isn't a particularly competitive area, and there aren't many groups that do it.

-

I reasonably believe the funding gap at GFI is presently fairly negligible (for example, EA Funds are not very concerned about it, and already look for alternatives to GFI in that area), and I don't think EAs generally ought to be funding groups in preparation for 2018. Once a group has had its funding requirements met, I think we probably ought to move on to other areas of interest. People can of course choose to do what they like, and if they believe donating now for next year is a good thing, then that is their choice, but I think there are other projects that are neglected today and need further consideration and resources. Also, if GFI receive more money today, that could be a factor against them receiving top status next year, because their funding requirements would be met over and above their needs. So if people think GFI benefits, or should benefit, more than others, it may be more helpful to GFI not to receive more money now.

-

I think it could be possible to second-guess Open Phil. I like the considerations they put into different areas, but I have spoken to Holden Karnofsky and don't feel there is any tangible process that ensures that checks and balances are applied. I think there are issues now with the funding that Open Phil engages in; Holden doesn't. It essentially seems to come down to the idea that he thinks things are fine, rather than there being some form of system in place that can be pointed to that would take care of this process. In some ways it reminds me of issues with too much red tape: there can be onerous criteria that start to limit efficiency and effectiveness, but at some point we find that red tape exists for a reason. At the moment I think there aren't enough checks and balances; others will be less inclined to think this is an issue where they are reasonably content with the overall pattern of how resources are distributed, and how that is encouraged by ACE and Open Phil.

In terms of the donor of last resort, Open Phil don't announce who they are going to give to at the beginning of the year, but I would second-guess at least some of their donations based on their ideological leanings (it is more explicit with EA Funds, in their section about why people may choose not to donate to EA Funds). It could also just be better if they didn't tell anyone who they are donating to at all. As a general matter there are some updates posted by ACE, but I don't think these sufficiently take into account what other groups and people are likely to do in relation to those top charities, or really consider diminishing returns.

So, taking a couple of points: "THL has already received more funding than we predicted they would be able to use this year (including their forthcoming grant money), Coman-Hidy hopes that THL can raise an additional $2.2 million–$2.7 million this year."

MfA had raised $5m in five months, so I don't think there is much reason to believe they wouldn't hit $8.3m in twelve (including a budget increase of $1m over the previous year). So in relation to GFI, MfA and THL, I think many EAs ought to be looking at other areas. Meanwhile AE have continued to grow, though likely at a lower rate than if they had top charity status.

And as stated in one of the links you posted: "For the highest-value giving opportunities, we want to recommend that Good Ventures funds 100%. It is more important to us to ensure these opportunities are funded than to set incentives appropriately."

I think there are few grounds to believe any of the top groups aren't going to easily hit their targets, so I am most interested in what follows from that. My main point here is that donor agency can be quite different depending on where people stand in the organisational donor structure. The idea that Open Phil are building knowledge, or funding groups to build knowledge, is a good one, like many of their ideas, but there isn't much evidence they do this, at least not in the areas in which I am most interested.

Comment author: Peter_Hurford  (EA Profile) 01 September 2017 03:49:43PM 0 points [-]

I believe that one issue with thinking of the seven criteria as fairly rules based is that people can have an expectation the criteria will be met in relation to consistency and impartiality. I am not in favour of maintaining strict rules, though I think there are some potential negative consequences of not doing so that need to be taken into account. So in which circumstances would they be overlooked or minimised?

So far, all seven criteria are followed for every top charity. But it's not a binary. How much track record is enough track record to have a "good" track record? GFI does have enough of a track record that we felt comfortable evaluating it, but it does have less of a track record than our other recommended organizations.

-

I reasonably believe the funding gap is presently fairly negligible at GFI (for example EA Funds are not very concerned about it, and already look for alternatives to GFI in that area)

I'm not sure I'd read that much into the EA Funds donations, personally.

-

I don’t think EAs generally ought to be funding groups in preparation for 2018.

Speaking about room for more funding generally: I agree it has been harder to find room for more funding lately (this is definitely a good problem to have), and it is something ACE has been monitoring closely. The next charity update will be in just a few months and will include fresh re-estimations of room for more funding. You may consider waiting until then.

Either way, I'm confident that GFI could continue to productively use money given now. I don't see any particular reason to give in January but not September, as you suggest, unless you're worried that ACE's recommendation will change or that they will be out of RFMF for a good portion of 2018 as well.

Additionally, organizations that get more money now might be encouraged to take on more, to scale, and to build a bigger budget in the future. More money now would help give them more confidence.

-

In relation to Better Eating International, I'm thinking in terms of the criteria of needing x amount more money. I haven't heard anything from them about further fundraising after the Kickstarter project. Though I haven't asked either.

You could consider asking. I think they could make use of another $20-40K to boost their analytics capabilities.

-

I also think it would be a good thing if ACE look at the organisations I mentioned in some depth, I think that would be useful and I would encourage all groups to be open to this process.

I can suggest those organizations if they are not already on ACE's radar.
