Pablo_Stafforini comments on The 2014 Survey of Effective Altruists: Results and Analysis - Effective Altruism Forum

Comment author: Pablo_Stafforini, 19 March 2015 12:52:43AM, 1 point

What makes it particularly problematic is that it is very hard to estimate the ‘size’ of this bias

One approach would be to identify a representative sample of the EA population and circulate among folks in that sample a short survey with a few questions randomly sampled from the original survey. By measuring response discrepancies between surveys (beyond what one would expect if both surveys were representative), one could estimate the size of the sampling bias in the original survey.
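One way to quantify "beyond what one would expect if both surveys were representative" is a two-proportion z-test on each shared question. A minimal sketch of that comparison, using entirely hypothetical counts (the function name and all figures here are illustrative, not from the actual survey data):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic: is the gap between the
    convenience-sample proportion and the random-sample proportion
    larger than sampling noise alone would explain?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical numbers: 40% of 800 original respondents give some
# answer, vs. 30% of 100 respondents in the random follow-up sample.
z = two_proportion_z(0.40, 800, 0.30, 100)
# |z| > 1.96 would indicate a discrepancy at the conventional 5% level.
```

A discrepancy consistently larger than this threshold across questions would suggest the original convenience sample is biased on those dimensions.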

ETA: I now see that a proposal along these lines is discussed in the subsection 'Comparison of the EA Facebook Group to a Random Sample' of the Appendix. In a follow-up study, the authors of the survey randomly sampled members of the EA Facebook group and compared their responses to those of members of that group in the original survey. However, if one regards the EA Facebook group as a representative sample of the EA population (which seems reasonable to me), one could also compare the responses in the follow-up survey to all responses in the original survey. Although the authors of the survey don't make this comparison, it could be made easily using the data already collected (though given the small sample size, practically significant differences may not turn out to be statistically significant).

Comment author: DavidMoss, 19 March 2015 09:27:06AM, 5 points

I think it's right to say that the survey was premised on the idea that there is no way to know the true nature of the EA population and no known-to-be-representative sampling frame. If there were such a sampling frame or a known-to-be-representative population, we'd definitely have used it. Beforehand, and a little less so now, I would have strongly expected the EA Facebook group not to be representative. For that reason I think randomly sampling the EA FB group is largely uninformative, and I believe this is now Greg's view too, though I could be wrong.

Comment author: Gregory_Lewis, 22 March 2015 01:50:51AM, 0 points

I agree that could work, although doing it is not straightforward: for technical reasons, there are few instances where you gain added precision by running a convenience survey 'on top' of a random sample, though they do exist.

(Unfortunately, the random FB sample was small, with something like 80% non-response, making it not very helpful for estimating deviation from the 'true' population. In some sense the subgroup comparisons do provide some of this information by pointing to different sub-populations; what they cannot provide is a measure of whether these subgroups are represented proportionally. A priori, though, that seems pretty unlikely.)
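To illustrate why heavy non-response makes the random sample so unhelpful here: with roughly 80% non-response, 100 invitations yield only about 20 usable answers, and the margin of error on any estimated proportion becomes very wide. A rough sketch using the normal approximation (the sample sizes are hypothetical):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion,
    via the normal approximation z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: ~80% non-response turns 100 invitations into ~20 responses.
# At p = 0.5 (the worst case), the margin of error is about +/- 22 points.
moe = margin_of_error(0.5, 20)
```

With uncertainty that wide, even a sizeable deviation between the random sample and the convenience sample would be indistinguishable from noise.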

As David notes, the 'EA FB group' is highly unlikely to be a representative sample. But I think it is more plausibly representative along the axes we'd be likely to be interested in for the survey. I'd guess EAs who are into animal rights are not hugely more likely to be on Facebook than those who are into global poverty, for example. (Could there be some effects? Absolutely: I'd guess the FB audience skews young and computer-savvy, so folks interested in AI and the like might be more likely to be found there, etc.)

The problem with going to each 'cluster' of EAs is that you are effectively sampling parallel to, rather than orthogonal to, your substructure: if you over-sample the young and computer-literate, that may not throw off the relative proportions of who lives where or who cares more about poverty than the far future; you'd be much more fearful of this if you oversampled a particular EA subculture like LW.

I'd be more inclined to 'trust' the proportion data (% male, % x-risk, etc.) if the survey had 'just' been of the EA Facebook group, whether probabilistically or convenience sampled. Naturally, that would still be very far from perfect, and not for all areas (age, for example). (Unfortunately, you cannot simply filter the survey down to those who clicked through via the FB link to construct this data: there are plausibly many people who clicked through via LW but would have clicked through via FB had there been no LW link, so ignoring all those responses likely inverts the anticipated bias.)