Comment author: Bernadette_Young 20 September 2017 01:11:29PM 4 points

That's still a very important point that doesn't seem to have been made in the analysis here: the demographic questions were not put to all respondents. There are good reasons to think that people taking the "full" and "donations only" surveys will differ systematically (e.g. in how long they have been involved with EA). If the non-responses are not random, that's an important caveat on all these findings, and it very much limits any comparisons over time. I can't see it discussed in the post?

Comment author: Peter_Hurford 20 September 2017 07:50:10PM 0 points

Yeah. I personally think that offering the donations-only survey was a bad idea, for the reason you said and a few others.

Even if everyone took the full survey, the non-response would still be pretty non-random -- you have to have the tenacity to persist to page seven, which I imagine correlates with being more involved in EA, and you have to have taken the survey in the first place, which we also know is not random. It would have been nice not to make this worse, though.

Comment author: Rick 19 September 2017 06:56:23PM 0 points

Sorry to fixate on this, but I've just never seen non-response rates this high before -- 10% is high for most surveys; 40% is absurd. Yes, you always have groups who feel the answer choices don't accurately capture their reality, but given that you allowed multiracial answers (and given the racial homogeneity of EA), this would usually be only a very small fraction of respondents. There's also the population that, for lack of a better term, "doesn't believe in race" and never answers this question, but given how small that population is in general, unless an absurdly high number of them are EAs, they should also be only a very small fraction.

I really, really hope this isn't the explanation, but I could see at least some of these answers coming from the perspective of "I don't think race is a problem in EA, and people should stop asking about it, so I'll just not answer at all as a protest." As someone who sees data collection as sacred, I would be appalled by this -- so please, someone, for the sake of my sanity, explain what else could possibly drive a 40% non-response rate.

Comment author: Peter_Hurford 19 September 2017 11:22:23PM 1 point

The answer looks to be pretty simple and unimportant, as I explain in this comment.

Comment author: Rick 19 September 2017 06:17:31PM 1 point

Are there any theories about what is driving the really high non-response rate for race? Or any cross-tabs showing which groups or locations are more likely to leave race blank? Racial demographics in EA are an important topic, and it's a shame that we can't get better data on it.

Comment author: Peter_Hurford 19 September 2017 11:21:34PM 4 points

I can see how the non-response rate looks alarming and I definitely owe some context for that.

One thing we tried this year was a separate donations-only survey, where people reported only their donations and answered a few other questions. Race was not on this slimmer survey: 554 respondents did not answer the race question because they were never asked it.

Another source of apparent non-response is that we asked people to answer Yes or No for each of four races (White, Black, Asian, and Hispanic). It looks like some people checked "Yes" for one race but did not explicitly check "No" for the others. This accounts for another 120 people.

Combining these first two reasons, only 67 people genuinely left the race question blank. You then have to account for survey fatigue, where people answer questions at the beginning of the survey but then get bored, busy, or distracted and quit without answering the rest. Given that race was at the bottom of the seventh page of the survey, this effect could be acute. I couldn't find anyone who skipped the race question but answered a question after it, so these three factors appear to fully account for the non-response.
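
For concreteness, here is that decomposition as a short Python sketch; the counts are the ones reported above, and nothing else is assumed:

```python
# Decomposition of the apparent race non-response, using only the
# counts reported in this comment.

never_asked = 554       # took the donations-only survey, which omitted race
partial_checkbox = 120  # checked "Yes" for one race but left the other boxes blank
dropped_out = 67        # consistent with quitting before the end of page seven

apparent_nonresponse = never_asked + partial_checkbox + dropped_out
print(apparent_nonresponse)                # 741 apparent non-responses in total
print(dropped_out / apparent_nonresponse)  # ~0.09 -- the only answers genuinely missing
```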

Comment author: concerned_ 18 September 2017 09:14:52PM 3 points

I'd be curious to see how "year joined" correlates with cause area preference.

Comment author: Peter_Hurford 18 September 2017 11:01:47PM 4 points

We actually have a post on that coming up soon, looking at how cause area preferences change over time!

Comment author: Michael_S 14 September 2017 10:52:08PM 5 points

Hey -- I made some comments on the doc, but I thought it was worth bringing them to the main thread and expanding on them.

First of all, I'm really happy to see other EAs looking at ballot measures. They're a potentially very high-EV method of passing policy and raising funding. They're particularly high value per dollar when advertising spending is limited or zero, since getting a relatively popular measure onto the ballot raises its probability of passage far more than spending the same amount advertising for it would.

Also, am I correct in interpreting that your model assumes a 100% chance of passage conditional on good polling? Polling can help, but ballot-measure polling has a lot of error (in both directions), so even a measure that polls well is hardly a guarantee of passage (http://themonkeycage.org/2011/10/when-can-you-trust-polling-about-ballot-measures/).

Finally, in your EV estimates you seem to focus on the individual treatment cost of the intervention, which overwhelms the cost of the ballot measure. I don't think this gets at the right question when it comes to running a ballot measure. I believe the gains from the ballot measure should be the estimated sum of the utility gains from people being able to purchase the drugs, multiplied by the probability of passage; the costs should be how much it would cost to run the campaign. On the doc, you made the point that GiveWell doesn't include leverage on other funding in its estimates, but when it comes to ballot measures, leverage is exactly what you're trying to produce, so I think an estimate is important.
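
To make that structure concrete, here is a minimal sketch in Python; every figure is hypothetical and only the shape of the calculation matters:

```python
# Hypothetical illustration of the EV framing above: expected benefit is
# the unlocked utility gain times the probability of passage, set against
# the cost of running the campaign. All figures are invented.

utility_gain_if_passed = 50_000_000  # total welfare gain if the measure passes (hypothetical)
p_passage = 0.6                      # probability of passage, discounted for polling error
campaign_cost = 2_000_000            # cost of qualifying and running the campaign (hypothetical)

expected_benefit = utility_gain_if_passed * p_passage
print(expected_benefit / campaign_cost)  # expected benefit per campaign dollar: 15.0
```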

Comment author: Peter_Hurford 14 September 2017 11:45:00PM 1 point

I believe the gains from the ballot measure should be the estimated sum of the utility gains from people being able to purchase the drugs, multiplied by the probability of passage; the costs should be how much it would cost to run the campaign. On the doc, you made the point that GiveWell doesn't include leverage on other funding in its estimates, but when it comes to ballot measures, leverage is exactly what you're trying to produce, so I think an estimate is important.

One potential way of thinking about this is that the ballot measure in itself does not accomplish much; it just "unlocks" the ability for people to help themselves more cheaply. This could be modeled as the cost of the ballot measure plus the costs of people helping themselves over a stream of X years, set against the benefits of people helping themselves over those X years. I would use 5 for X, on the assumption that a lot can change in five years and drug legalization might happen anyway, but a higher value of X could also be justified.

This structure -- (costs of unlocking + costs of what is unlocked over time) vs. the benefits of what is unlocked over time -- is also how I model the cost-benefit of developing a new medicine (like a vaccine), since the medicine is useless unless it is actually given to people, which costs additional money.
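
Here is a minimal sketch of that "unlocking" model with X = 5; all dollar figures are hypothetical:

```python
# (costs of unlocking + costs of what is unlocked over time) vs.
# (benefits of what is unlocked over time), over an X-year window.
# All dollar figures are hypothetical.

X = 5                                 # years before legalization might happen anyway
unlock_cost = 2_000_000               # one-off cost of the ballot measure (hypothetical)
annual_unlocked_cost = 10_000_000     # what people spend helping themselves per year (hypothetical)
annual_unlocked_benefit = 30_000_000  # benefit they derive per year (hypothetical)

total_costs = unlock_cost + X * annual_unlocked_cost
total_benefits = X * annual_unlocked_benefit
print(total_benefits / total_costs)   # benefit-cost ratio over the window: ~2.88
```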

Comment author: nonzerosum 14 September 2017 10:12:05PM 0 points

I don't think it's appropriate to include donations to ACE or GiveWell as 'cause prioritization.' I think ACE should be classed as animal welfare and GiveWell as global poverty.

My understanding is that cause prioritization is broad comparison research.

Cause prioritization looks at broad causes (e.g. migration, global warming, global health, life extension) in order to compare them, instead of examining individual charities within each cause (as has been traditional).

https://causeprioritization.org/Cause%20prioritization

Comment author: Peter_Hurford 14 September 2017 11:41:08PM 1 point

It does seem to me that GiveWell and ACE are qualitatively different from the organizations they evaluate. I do agree it is a judgement call, though. If you feel differently, all the raw data is there for you to create a new analysis that categorizes the organizations differently.

Comment author: Dan_Keys 12 September 2017 04:03:08PM 0 points

I agree that asking about 2016 donations in early 2017 is an improvement on this front. If future surveys are going to ask about just one year of donations, then that's pretty much all you can do with the timing of the survey.

In the meantime, it is pretty easy to filter the data accordingly -- if you look only at donations made by EAs who stated that they joined in 2014 or earlier, the median donation is $1280.20 for 2015 and $1500 for 2016.

This seems like a better way to do the analyses. I think that the post would be more informative & easier to interpret if all of the analyses used this kind of filter. (For 2016 donations you could also include people who became involved in EA in 2015.)

For example, someone who hears a number for the median non-student donation in 2016 will by default assume that it refers to people who were non-student EAs throughout 2016. If possible, it's better to give the number that matches the scenario people are imagining than to add caveats about how 35% of the respondents weren't EAs yet at the start of 2016. When people hear a non-intuitive analysis with a caveat, they're fairly likely to either a) forget the caveat and mistakenly think the number means what they initially assumed, or b) not know what to make of the caveated analysis and therefore not learn anything.

Comment author: Peter_Hurford 12 September 2017 04:20:53PM 2 points

The median 2016 reported donation total of people who joined in 2015 or earlier was $655.

We'll talk amongst the team about whether we want to update the post. Thanks!

Comment author: Dan_Keys 12 September 2017 03:20:13AM 5 points

It is also worth noting that the survey asked people who identify as EAs in 2017 how much they donated in 2015 and 2016. These people weren't necessarily EAs in 2015 or 2016.

Looking at the raw data of when respondents said that they first became involved in EA, I'm getting that:

7% became EAs in 2017
28% became EAs in 2016
24% became EAs in 2015
41% became EAs in 2014 or earlier

(assuming that everyone who took the "Donations Only" survey became an EA before 2015, and leaving out everyone else who didn't answer the question about when they became an EA.)

So if we're looking at donations made in 2015, 35% of the respondents weren't EAs at the time and another 24% had only just become EAs that year. For 2016, 35% weren't EAs at the start of the year and 7% weren't EAs at the end of it.
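
(These percentages follow directly from the join-year distribution above:)

```python
# The shares above, recomputed from the join-year distribution (in percent).
joined = {2017: 7, 2016: 28, 2015: 24, "2014 or earlier": 41}

print(joined[2017] + joined[2016])  # 35 -- % not yet EAs at the start of 2016 (or during 2015)
print(joined[2015])                 # 24 -- % who became EAs during 2015
print(joined[2017])                 # 7  -- % not yet EAs at the end of 2016
```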

(There were similar issues with the 2015 survey.)

These not-yet-EAs can have a large influence on the median, and to a lesser extent on the percentiles and the mean. They would also tend to create an upward trend in the longitudinal analysis (e.g., if many of the 184 individuals became EAs in 2015).

Comment author: Peter_Hurford 12 September 2017 04:44:55AM 2 points

You're right that there's a long lag between the donations and the time we ask about them... for the most part this is unavoidable, though we're hoping to time the survey much better in the future (asking about only one year of donations, just a month or two after the year is over). This will come with better organization on our team.

In the meantime, it is pretty easy to filter the data accordingly -- if you look only at donations made by EAs who stated that they joined in 2014 or earlier, the median donation is $1280.20 for 2015 and $1500 for 2016.
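
For anyone working with the raw data, a sketch of that filter in Python -- the file name and column names are hypothetical and would need to match the actual export:

```python
# Sketch of the join-year filter described above, assuming a pandas
# DataFrame; the path and column names are hypothetical.
import pandas as pd

df = pd.read_csv("ea_survey_2017.csv")     # hypothetical path to the raw data

veterans = df[df["year_joined"] <= 2014]   # joined in 2014 or earlier
print(veterans["donation_2015"].median())  # $1280.20 in our data
print(veterans["donation_2016"].median())  # $1500 in our data
```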

Comment author: joshjacobson 11 September 2017 03:03:53PM -1 points

I think it's quite misleading to present p-values and claim that results are or aren't 'statistically significant' without also presenting the disclaimer that this is very far from a random sample, and that these statistical results should therefore be interpreted with significant skepticism.

Comment author: Peter_Hurford 12 September 2017 03:02:56AM 2 points

This is covered in detail in the methodology section. We try not to talk about statistical significance much, we try to belabor that these are EAs "in our sample" and not necessarily EAs overall, and we try to meticulously benchmark how representative our sample is to the best of our abilities.

I agree some skepticism is warranted, but I'm not sure it rises to the level of making the post "quite misleading"... I think you'd have to back up that claim. Could be a good conversation to take to the methodology section.

Comment author: WillPearson 09 September 2017 07:20:34PM 0 points

I'm not sure many EAs are interested in it, perhaps because of its potentially low tractability, but I am interested in "systemic change" as a cause area.

Comment author: Peter_Hurford 09 September 2017 09:35:14PM 1 point

What does "systemic change" actually refer to? I don't think I ever understood the term.
