Comment author: Peter_Hurford (EA Profile) 13 October 2017 05:03:43AM 1 point

I found the case studies really valuable in their own right and as ways of explaining your points. Awesome!

Comment author: RyanCarey 10 October 2017 02:18:07AM 5 points

Hey Zack,

I agree that we lose a bunch by moving our movement's centre of gravity away from poverty and development econ. But if we make the move properly, we gain a lot from the new areas we settle in. What rigor we lost, we should be able to patch up with Bayesian rationalist thinking. What institutional capital we might have lost from the World Bank / Gates, we might be able to pick up with RAND/IARPA/Google/etc, a more diverse yet still impressive group of possible contributors. On organization, yes, a lot of experience, like that of Evidence Action, will be lost, but much will also be gained, for example by working instead at technology think tanks and elsewhere.

I don't think your conclusion that people should start in the arena of poverty is well-supported either, if you're not comparing it to other arenas people might start out in. Do you think you might be privileging the hypothesis that people should start in the management of poverty just because it's salient to you, possibly because it's the status quo?

Comment author: Peter_Hurford (EA Profile) 10 October 2017 05:02:03PM 6 points

"What rigor we lost, we should be able to patch up with Bayesian rationalist thinking"

Can you elaborate more on this?

Comment author: Peter_Hurford (EA Profile) 27 September 2017 09:06:04PM 2 points

"Quantitative social science, such as economics or analysis of survey data"

Can you elaborate more on this?

Comment author: Peter_Hurford (EA Profile) 26 September 2017 09:52:59PM 3 points

For some of the research prior to starting Charity Science Health, I recall looking at two HIV interventions and not being that impressed. We summarized some of that research in this grid.

Antiretroviral therapy ended up noticeably less cost-effective than our other interventions. That might favor the interpretation that DAH spenders are wrong.

On the other hand, prevention of mother-to-child transmission of HIV seemed pretty cost-effective, but the field was already quite crowded, with many pre-existing organizations working in the area and seeming to do quite well. This might favor a "both right" interpretation, if we assume that DAH funders have already used up all the room for more funding that GiveWell / OpenPhil / EA would have used.

Comment author: Peter_Hurford (EA Profile) 26 September 2017 03:27:45AM 2 points

Keep in mind that there are some differences between DALYs and QALYs, for example see the discussion in

Comment author: Denkenberger 24 September 2017 01:33:07AM 1 point

Was there any discussion about effective volunteering?

Comment author: Peter_Hurford (EA Profile) 26 September 2017 02:48:44AM 2 points

Not any qualitative data, but we did ask people whether they volunteer. 390 people said yes, 1025 people said no, 187 people did not answer, and 237 people were not asked this question.
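For context, these counts can be tallied as follows (a quick sketch; the derived total and percentage are mine, not from the survey report):

```python
# Volunteer-question tallies reported in the comment above
yes, no, skipped, not_asked = 390, 1025, 187, 237

total = yes + no + skipped + not_asked   # everyone who took some version of the survey
answered = yes + no                      # respondents who actually answered the question

print(total)                           # 1839 survey takers in all
print(round(yes / answered * 100, 1))  # 27.6% volunteer rate among those who answered
```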

Comment author: Bernadette_Young 20 September 2017 01:11:29PM 4 points

That's still a very important point that doesn't seem to have been made in the analysis here: the demographic questions were not put to all respondents, and there are good reasons to think that people taking the "full" and "donations only" surveys will differ systematically (e.g. in how long they have been involved with EA). If the non-responses are not random, that's an important caveat on all these findings and very much limits any comparisons that can be done over time. I can't see it discussed in the post?

Comment author: Peter_Hurford (EA Profile) 20 September 2017 07:50:10PM 0 points

Yeah. I personally think that offering the donations-only survey was a bad idea, for the reason you said and a few others.

Even if everyone took the full survey, the non-response would still be pretty non-random: you still have to have the tenacity to persist to page seven, which I imagine correlates with being more involved in EA, and you also have to have taken the survey in the first place, which we also know is not random. It would have been nice not to make this worse, though.

Comment author: Rick 19 September 2017 06:56:23PM 0 points

Sorry to fixate on this, but I've just never seen non-response rates this high before: 10% is high for most surveys, and 40% is absurd. Yes, you always have groups who feel the answers don't accurately capture their reality, but given that you did allow multiracial answers (and given the homogeneity of EA from a racial standpoint), this would usually be only a very small fraction of respondents. There's also the population that, for lack of a better term, "doesn't believe in race" and never answers this question, but given how small this population is in general, unless an absurdly high number of them are EAs, this should also be only a very small fraction.

I really, really hope this isn't the explanation, but I could see at least some of these answers coming from the perspective of "I don't think race is a problem in EA, and people should stop asking about it, so I'll just not answer at all as a protest or something." As someone who sees data collection as sacred, I would be appalled by this, so please, someone, for the sake of my sanity, explain what could possibly drive a 40% non-response rate that is not this.

Comment author: Peter_Hurford (EA Profile) 19 September 2017 11:22:23PM 1 point

The answer looks to be pretty simple and unimportant, as I explain in this comment.

Comment author: Rick 19 September 2017 06:17:31PM 1 point

Are there any theories about what is driving the really high non-response rate for race? Or any cross-tabs showing which groups or locations are more likely not to respond on race? Racial demographics in EA is an important topic, and it's a shame that we can't get better data on it.

Comment author: Peter_Hurford (EA Profile) 19 September 2017 11:21:34PM 4 points

I can see how the non-response rate looks alarming and I definitely owe some context for that.

One thing we tried this year was a separate donations-only survey, where people reported only their donations and answered a few other questions. Race was not on this slimmer survey, so 554 people did not answer the race question because they were never asked it.

Another source of apparent non-response is that we asked people Yes or No separately for four racial categories (White, Black, Asian, and Hispanic). It looks like some people checked "Yes" for one category but did not explicitly check "No" for the others. This accounts for another 120 people.

After accounting for these first two reasons, only 67 people genuinely skipped the race question. You then have to account for survey fatigue: people answer some questions at the beginning of the survey, but then get bored, busy, or distracted and quit without answering the rest. Given that race was at the bottom of the seventh page of the survey, this effect could be acute. I couldn't find anyone who skipped the race question but answered a later question, so these three factors may fully account for all the non-response.
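As a sanity check, the three sources above can be tallied against the roughly 40% figure mentioned earlier in the thread (a sketch; the 1839 denominator is my assumption, taken from the volunteer-question counts elsewhere in these comments):

```python
# Decomposition of the apparent race non-response described above
never_asked = 554        # took the donations-only survey, which omitted the race question
partial_checkbox = 120   # checked "Yes" for one race but left the others blank
survey_fatigue = 67      # quit the survey before reaching race on page seven

blank = never_asked + partial_checkbox + survey_fatigue
total_takers = 1839      # assumed total, summed from the volunteer-question tallies

print(blank)                                 # 741 apparently blank race answers
print(round(blank / total_takers * 100, 1))  # 40.3% apparent non-response rate
```

The 40.3% figure lines up with the "40%" that prompted the question, which supports the claim that the three factors account for essentially all of it.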

Comment author: concerned_ 18 September 2017 09:14:52PM 4 points

I'd be curious to see how "year joined" correlates with cause area preference.

Comment author: Peter_Hurford (EA Profile) 18 September 2017 11:01:47PM 5 points

We actually have a post on that coming up soon, looking at how cause area preferences change over time!
