From the EA Survey team
It's been great to get so many suggestions for questions for this year's annual EA Survey, in the Facebook thread, the Forum post, and throughout the year. As we said, we'd also like the community's suggestions on the survey's planning and execution:

- What additional purposes can the survey serve?
- How can we reach interesting groups that we know little about or that are hard to find, such as people on the fringes of effective altruism, or people who favour effective poverty charities but haven't heard of EA?
- Where should we share the survey, and to whom should we send it?
- Is it worth selecting and targeting an initial sample before trying to reach as many people as possible? (We've already asked Greg Lewis for suggestions on this and, after lengthy internal and external discussion, plan to follow them.)
- How long or short should the survey be, and is there any harm in adding a long 'extra credit' section?
- What projects or services could certain information from the survey enable? What strategic decisions could it inform?
For reference, you may be interested in the results and analysis from last year's survey, or the raw data.
Comment here, and remember that the ultimate place to discuss anything about the survey is, as always, a .impact meeting - in particular the survey deep dive that will be held on Sunday 24 May at 9pm UTC (2pm Pacific, 5pm Eastern, 10pm London). A Google Hangouts link to join will be posted in the Facebook event at that time. It'll be a chance to talk directly with the survey team and help work things out.
I'm going to reproduce a comment I wrote when the 2014 results were released, in order to have these points on the agenda for the call later on. I remain convinced that each of these three practical suggestions is relatively low-effort and will make the survey process easier, the data more reliable, and any resulting conclusions more credible:
Firstly, we should use commercial software to run the survey rather than trying to build something ourselves. Commercial tools are both less effort and more reliable. For example, SurveyMonkey could have done everything this survey does for about £300. I'm happy to pay that myself next year to avoid some of the data quality issues.
Secondly, we should use live data validation to improve data collection, data integrity and ease of analysis. SurveyMonkey and similar tools can help John fill in his age in the right box: they can refuse to believe the seven-year-old and suggest that he have another go at entering his age. It could also be valuable to do some respondent validation by asking people to answer a question with a given answer, removing any random clickers or poor-quality respondents who are speeding through (e.g. "Please enter the number '2' in letters into the textbox to prove you are not a robot. For example, the number '1' in letters is 'one'").
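To make the idea concrete, here is a minimal Python sketch of the kind of checks a survey tool applies at entry time. The function names, the 10-110 age range, and the exact attention-check wording are my own illustrative assumptions, not any real SurveyMonkey feature or API:

```python
def validate_age(raw):
    """Accept an age only if it parses as an integer in a plausible range.

    Returns the age as an int, or None if the respondent should be asked
    to have another go (e.g. the implausible seven-year-old).
    """
    try:
        age = int(raw)
    except (TypeError, ValueError):
        return None
    # 10-110 is an assumed plausibility window for this survey's audience.
    return age if 10 <= age <= 110 else None


def passes_attention_check(answer):
    """Check the 'enter the number 2 in letters' question from the example.

    Random clickers and speeders will usually fail this, so their
    responses can be flagged or dropped.
    """
    return answer.strip().lower() == "two"
```

Applied live, the first check never lets a bad age into the dataset at all; applied after the fact, the second gives a simple filter for low-quality responses before analysis.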
Thirdly, we should do more testing, by trying out draft versions with respondents who did not write the survey. It is very, very hard to predict how people will read a particular question, or which options a multiple-choice question should include. Within my firm, it is typical for an entire project team to run through a survey several times before sending it out to the public. Part of the value here is that most team members were not closely involved in writing the survey, and so won't necessarily read it the way the author expected. I would suggest trying out any version of the survey with a large group (at least twenty) of the different kinds of people who might answer it, to catch the varying interpretations that different groups might have. Does the EA affiliation filter work as hoped? Are there important charities we should include in the prompt list? It does not seem unreasonable to pilot and redraft a few times with a diverse group of willing volunteers before releasing generally.
It does seem clearly worth this expense. I'm concerned that .impact and the community team behind the survey are too reluctant to spend money, and undervalue their time relative to it. I suppose that's the cost of not being a funded o...