From the EA Survey team

It's been great to get so many suggestions for questions for this year's annual EA Survey, in the Facebook thread, the Forum post, and throughout the year. As we said, we'd also like the community's suggestions on the survey's planning and execution:

- What additional purposes can the survey serve?
- How can we reach interesting groups about which we know little or which are hard to reach, such as people on the fringes of effective altruism, or people who favour effective poverty charities but haven't heard of EA?
- Where should we share the survey, and to whom should we send it?
- Is it worth selecting and targeting an initial sample before trying to reach as many people as possible? (Though we've already asked Greg Lewis for suggestions on this and are planning to follow them after internal and external discussion.)
- How long or short should the survey be, and is there any harm in adding a long 'extra credit' section?
- What projects or services could getting certain information from the survey enable? What strategic decisions could it inform?

For reference, you may be interested in the results and analysis from last year's survey, or the raw data.

Comment here, and remember that the ultimate place to discuss anything about the survey is, as always, a .impact meeting - in particular the survey deep dive on Sunday 24 May at 9pm UTC (2pm Pacific, 5pm Eastern, 10pm London). A Google Hangouts link to join will be posted in the Facebook event at that time. It'll be a chance to talk directly with the survey team and help work things out.


I'm going to reproduce a comment I wrote when the 2014 results were released, so that these points are on the agenda for the call later on. I remain convinced that each of these three practical suggestions is relatively low effort and would make the survey process easier, the data more reliable, and any resulting conclusions more credible:

Firstly, we should use commercial software to run the survey rather than trying to build something ourselves: commercial tools are both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I'm happy to pay that myself next year to avoid some of the data quality issues.

Secondly, we should use live data validation to improve data collection, data integrity, and ease of analysis. SurveyMonkey or other tools can help a respondent fill in their age in the right box: they can refuse to believe the seven-year-old and suggest that they have another go at entering their age. It could also be valuable to do some respondent validation by asking people to answer a question with a given answer, removing any random clickers or poor-quality respondents who are speeding through (e.g. "Please enter the number '2' in letters into the textbox to prove you are not a robot. For example, the number '1' in letters is 'one'").
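To make this concrete, here is a minimal sketch of how the same checks could be run after the fact on exported responses. This is an illustration, not part of the original suggestion: the CSV filename and the `age` and `attention_check` column names are hypothetical, and live validation in the survey tool itself remains preferable.

```python
import pandas as pd

# Load the exported responses; the filename and column names are
# hypothetical, chosen only to illustrate the idea.
responses = pd.read_csv("survey_responses.csv")

# Age validation: flag implausible ages. Live validation in the survey
# tool is better, but this catches anything that slips through.
plausible_age = responses["age"].between(10, 100)

# Attention check: keep only respondents who typed "two" as instructed.
passed_check = (
    responses["attention_check"].astype(str).str.strip().str.lower() == "two"
)

clean = responses[plausible_age & passed_check]
print(f"Kept {len(clean)} of {len(responses)} responses")
```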

Thirdly, we should do more testing by trying out draft versions with respondents who have not written the survey. It is very, very hard to anticipate how people will read a particular question, or which options should be included in multiple-choice questions. Within my firm, it is typical for an entire project team to run through a survey several times before sending it out to the public. Part of the value here is that most team members were not closely involved in writing the survey, and so won't necessarily read it in the way the author expected. I would suggest trying any version of the survey out with a large group (at least twenty) of the different people who might answer it, to catch the different interpretations that different groups might have of the questions. Does the EA affiliation filter work as hoped? Are there important charities which we should include in the prompt list? It does not seem unreasonable to pilot and redraft a few times with a diverse group of willing volunteers before releasing generally.

> Firstly, we should use commercial software to run the survey rather than trying to build something ourselves: commercial tools are both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I'm happy to pay that myself next year to avoid some of the data quality issues.

It does seem clearly worth this expense. I'm concerned that .impact/the community team behind the survey are too reluctant to spend money and undervalue their time relative to it. I suppose that's the cost of not being a funded organization.

> asking people to answer a question with a given answer, removing any random clickers or poor-quality respondents who are speeding through (e.g. "Please enter the number '2' in letters into the textbox to prove you are not a robot. For example, the number '1' in letters is 'one'")

Seconded - I'd urge the team to do this, even if it means ignoring some genuine answers (I would expect Effective Altruists to generally put enough effort into the survey to spot and complete this question, though I might be naïve).

> Thirdly, we should do more testing by trying out draft versions with respondents who have not written the survey.

Also an excellent suggestion. I'd be willing to do this - I imagine anyone else who'd volunteer can comment below, and hopefully someone from the team will spot this and send messages.

Great suggestion, Stens!

I'm happy to trial draft versions of the survey.

My main additional comment to the points below is that we should be relatively unconcerned about people failing to finish a long survey - we are talking to individuals who are committed to doing a significant amount of good in the world. The relative cost of a few extra questions is low compared with the cost of omitting a question that would allow us to better understand the movement and therefore change the world.

What would make you get more involved in the EA movement? What approaches have you found particularly (in)effective in communicating EA ideas? Which best describes your EA behavior: a big initial commitment or an incremental progression over time?

Would The Life You Can Save use this data, Jon? Are there particular things you might consider doing, or general ways you might tweak your strategy?

We'd use the first two questions I listed as a source of ideas. The third would help inform our general strategy. Our working assumption is that it's better to make small, incremental asks rather than one big ask (e.g. a significant pledge) at the outset. It would be nice to see whether that's consistent with the experiences of the EA community.

Hope this went well.

I got into a conversation with 'Telofy' about his post on dissociation being a necessary or helpful approach for some Effective Altruists, and he suggested that it might be useful to use the survey to find out how many people have the problem he described there or find his solution useful.
