Comment author: Peter_Hurford  (EA Profile) 17 November 2017 03:59:58PM 2 points [-]

Yep, that is an issue. One idea might be to look at the data for each referral source (e.g., how everyone who heard about the survey through Facebook heard about EA, then how everyone who heard about the survey through SlateStarCodex heard about EA, etc.).
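The per-referral-source breakdown suggested above is essentially a cross-tabulation. A minimal sketch, using made-up respondent records (the field names and values are illustrative, not the actual survey schema):

```python
from collections import Counter, defaultdict

# Hypothetical respondent records; field names are illustrative only.
respondents = [
    {"referral": "Facebook", "heard_of_ea_via": "Friend"},
    {"referral": "Facebook", "heard_of_ea_via": "80,000 Hours"},
    {"referral": "SlateStarCodex", "heard_of_ea_via": "SlateStarCodex"},
    {"referral": "SlateStarCodex", "heard_of_ea_via": "LessWrong"},
]

# Group respondents by how they found the survey, then count how each
# group originally heard about EA.
by_referral = defaultdict(Counter)
for r in respondents:
    by_referral[r["referral"]][r["heard_of_ea_via"]] += 1

for source, counts in sorted(by_referral.items()):
    print(source, dict(counts))
```

Comparing the distributions across referral sources would show how strongly each recruitment channel skews the "how did you hear about EA" answers.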

Comment author: Tee 17 November 2017 05:13:09PM 1 point [-]

I agree; this is something we acknowledge multiple times in the post, and many times throughout the series. The level of rigor it would take to bypass this issue is difficult to reach.

This is also why the section where we see some overlap with Julia's survey is helpful.

Comment author: Tee 10 October 2017 05:26:09PM 3 points [-]

Additional data on EA shifts in cause area preference: http://effective-altruism.com/ea/1fi/have_ea_priorities_changed_over_time/

Comment author: Michelle_Hutchinson 05 September 2017 09:00:44AM *  3 points [-]

Thanks for clarifying.

The claim you're defending is that the Bay is an outlier in terms of the percentage of people who think AI is the top priority. But what the paragraph I quoted says is 'favoring a cause area outlier' - so 'outlier' is picking out AI amongst causes people think are important. Saying that the Bay favours AI which is an outlier amongst causes people favour is a stronger claim than saying that the Bay is an outlier in how much it favours AI. The data seems to support the latter but not the former.

Comment author: Tee 05 September 2017 12:54:51PM 1 point [-]

I've also updated the relevant passage to reflect that the Bay Area is an outlier in terms of support for AI, not that AI is an outlier as a cause area.

Comment author: Tee 05 September 2017 12:40:16PM 1 point [-]

Hey Michelle, I authored that particular part and I think what you've said is a fair point. As you said, the point was to identify the Bay as an outlier in terms of the amount of support for AI, not declare AI as an outlier as a cause area.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause.

I don't know that this is necessarily true beyond reporting what is actually there. When poverty is favored by more than double the number of people who favor the next most popular cause area (graph #1), favored by more people than a handful of other causes combined, and disliked the least, those facts need to be put into perspective.

If anything, I'd say we put a fair amount of emphasis on how EAs are coming around on AI, and how resistance toward putting resources toward AI has dropped significantly.

We could speculate about how future-oriented certain cause areas may be, and how to aggregate or disaggregate them in future surveys. We've made a note to consider that for 2018.

Comment author: CalebWithers  (EA Profile) 04 September 2017 02:31:12AM 3 points [-]

It seems that the numbers in the top-priority paragraph don't match up with the chart.

Comment author: Tee 05 September 2017 12:19:17PM 1 point [-]

09/05/17 Update: Graph 1 (top priority) has been updated again

Comment author: Michelle_Hutchinson 04 September 2017 02:40:19PM *  6 points [-]

I'm having trouble interpreting the first graph. It looks like 600 people put poverty as the top cause, which you state is 41% of respondents, and that 500 people put cause prioritisation, which you state is 19% of respondents.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause. Yet while 600 people said it was the top cause, according to the graph around 800 people said that long run future was the top cause (AI + non-AI far future). It seems plausible to disaggregate AI and non-AI long run future, but at least as plausible to aggregate them (given the aggregation of health / education / economic interventions in poverty), and conclude that most EAs think the top cause is improving the long-run future. Although you might have been allowing people to pick multiple answers, and found that most people who picked poverty picked only that, and most who picked AI / non-AI FF picked both?
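The aggregation point above can be made concrete with a toy calculation. The counts below loosely follow the rough numbers read off graph 1 in this discussion; the split between AI and non-AI far future is made up for illustration:

```python
# Illustrative "top cause" counts; the AI vs non-AI split is assumed.
top_cause = {
    "Poverty": 600,
    "AI": 500,
    "Non-AI far future": 300,
}

# Disaggregated: poverty is the single most-chosen top cause.
winner = max(top_cause, key=top_cause.get)

# Aggregated: pooling AI with non-AI far future (mirroring how health,
# education and economic interventions are pooled under "poverty")
# flips the headline result.
pooled = {
    "Poverty": top_cause["Poverty"],
    "Long-run future": top_cause["AI"] + top_cause["Non-AI far future"],
}
pooled_winner = max(pooled, key=pooled.get)
print(winner, pooled_winner)
```

The same underlying responses yield a different "most favoured cause" headline depending purely on the choice of category boundaries.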

The following statement appears to me rather loaded: "For years, the San Francisco Bay area has been known anecdotally as a hotbed of support for artificial intelligence as a cause area. Interesting to note would be the concentration of EA-aligned organizations in the area, and the potential ramifications of these organizations being located in a locale heavily favoring a cause area outlier." The term 'outlier' seems false according to the stats you cite (over 40% of respondents outside the Bay thinking AI is a top or near top cause), and particularly misleading given the differences made here by choices of aggregation. (Ie. that you could frame it as 'most EAs in general think that long-run future causes are most important; this effect is a bit stronger in the Bay)

Writing on my own behalf, not my employer's.

Comment author: Tee 05 September 2017 12:17:50PM 1 point [-]

09/05/17 Update: Graph 1 (top priority) has been updated again

Comment author: Tee 02 September 2017 08:20:41PM 4 points [-]

09/02/17 Update: We've updated the truncated graphs

Comment author: Tee 05 September 2017 12:17:29PM 1 point [-]

09/05/17 Update: Graph 1 (top priority) has been updated again

Comment author: Tee 02 September 2017 08:23:10PM 2 points [-]

09/02/17 Post Update: The previously truncated graphs "This cause is the top priority" and "This cause is the top or near top priority" have been adjusted in order to better present the data

Comment author: Buck 01 September 2017 11:48:09PM 7 points [-]

I wish that you hadn't truncated the y axis in the "Cause Identified as Near-Top Priority" graph. Truncating the y-axis makes the graph much more misleading at first glance.
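The distortion from a truncated y-axis is easy to quantify. A sketch with illustrative counts (not the survey's actual figures): a bar's drawn height is proportional to its value minus the axis minimum, so raising the axis floor inflates the apparent gap between bars.

```python
def drawn_height(value, axis_min):
    """Height a bar is drawn at when the y-axis starts at axis_min."""
    return value - axis_min

# Illustrative counts for two causes, not the survey's actual figures.
poverty, ai = 600, 500

# Axis starting at 0: the visual ratio matches the data ratio.
full = drawn_height(poverty, 0) / drawn_height(ai, 0)            # 600:500

# Axis truncated to start at 400: the same data looks like a 2x gap.
truncated = drawn_height(poverty, 400) / drawn_height(ai, 400)   # 200:100
print(full, truncated)
```

A 1.2x difference in the data reads as a 2x difference at first glance, which is exactly the misleading effect described above.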

Comment author: kbog  (EA Profile) 30 August 2017 04:18:46AM *  4 points [-]

I don't think there is a difference between a moral duty and an obligation.

In 2015, there were more than 2000 respondents, right? Does this mean EA is getting smaller??

Comment author: Tee 30 August 2017 03:54:52PM 0 points [-]

I don't think there is a difference between a moral duty and an obligation.

I'm not entirely sure that I would agree with this. I'm supposed to be publishing more survey content on the Forum at the moment, so parsing this out may have to wait, but 'obligation' to me feels relatively more guilt-driven, while being duty-bound seems to invoke a more diverse set of internal and external pressures.

At any rate, if it's not clear here, it's certainly not good as a survey question.

In 2015, there were more than 2000 respondents, right? Does this mean EA is getting smaller??

Could be! It may also be indicative of year-on-year survey fatigue, though. We'll be revamping the survey for 2018 to make it a better experience in general.
