Gregory_Lewis comments on Announcing the Effective Altruism Handbook, 2nd edition - Effective Altruism Forum

Comment author: Peter_Hurford  (EA Profile) 04 May 2018 01:40:32AM *  19 points [-]

I find it so interesting that people on the EA Facebook page have been a lot more generally critical about the content than people here on the EA Forum -- here it's all just typos and formatting issues.

I'll admit that I was one of the people who saw this here on the EA Forum first and was disappointed, but chose not to say anything out of a desire to not rock the boat. But now that I see others are concerned, I will echo my concerns too and magnify them here -- I don't feel like this handbook represents EA as I understand it.

By page count, AI is 45.7% of the entire causes section. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward that page count), more than half of each article was dedicated to why we might not choose this cause area, with much of that space also focused on the far future of humanity. I'd find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.

I feel like I get it. I recognize that CEA and 80K have a right to have strong opinions about cause prioritization. I also recognize that they've worked hard to become such a strong central pillar of EA as they have. I also recognize that a lot of people that CEA and 80K are familiar with agree with them. But now I can't personally help but feel like CEA is using their position of relative strength to essentially dominate the conversation and claim it is the community consensus.

I agree the definition of "EA" here is itself the area of concern. It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of EA Survey respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority for 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare. But naturally I'd be biased toward using these results, and I'm definitely sympathetic to the idea that EA should be considered more narrowly, or that we should weight the opinions of people working on it full-time more heavily. So I'm unsure. Even my opinions here are circular, by my own admission.

But I think if we're going to be claiming in a community space to talk about the community, we should be more thoughtful about whose opinions we're including and excluding. It seems pretty inexpensive to re-weight the handbook to emphasize AI risk just as much without being as clearly jarring about it (e.g., dedicating three chapters instead of one, or slanting so clearly toward AI risk throughout the "reasons not to prioritize this cause" sections).

Based on this, and the general sentiment, I'd echo Scott Weather's comment on the Facebook group that it’s pretty disingenuous to represent CEA’s views as the views of the entire community writ large, however you want to define that. I agree I would have preferred it called “CEA’s Guide to Effective Altruism” or something similar.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM *  7 points [-]

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of EA Survey respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority for 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter is an EA org (I looked at the last half-dozen or so newsletters, so any group which didn't have an update is excluded, but this is likely minor). Totting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns - all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • Givewell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get ~ two thirds of people work at orgs which focus on the far future (66%), 22% global poverty, and 12% animals. Although it is hard to work out the AI | far future proportion, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full time' attention.
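
For anyone who wants to check the arithmetic, here is a minimal Python sketch that reproduces the tally above. The staff counts and focus-area labels are copied straight from the list (with org names lightly normalised), not independently verified:

```python
# Reproduce the headcount tally: staff counts and focus-area classifications
# are taken directly from the list above.
headcounts = {
    "80,000 Hours": (7, "Far future"),
    "ACE": (17, "Animals"),
    "CEA": (15, "Far future"),
    "CSER": (11, "Far future"),
    "CFI": (10, "Far future"),
    "FHI": (17, "Far future"),
    "FRI": (5, "Far future"),
    "GiveWell": (20, "Global poverty"),
    "Open Phil": (21, "Far future"),
    "SI": (3, "Animals"),
    "CFAR": (11, "Far future"),
    "Rethink Charity": (11, "Global poverty"),
    "WASR": (3, "Animals"),
    "REG": (4, "Far future"),
    "FLI": (6, "Far future"),
    "MIRI": (17, "Far future"),
    "TYLCS": (11, "Global poverty"),
}

# Sum staff per cause area and report each area's share of the total.
totals = {}
for staff, cause in headcounts.values():
    totals[cause] = totals.get(cause, 0) + staff

grand_total = sum(totals.values())
for cause, staff in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {staff} staff ({staff / grand_total:.0%})")

# Output:
# Far future: 124 staff (66%)
# Global poverty: 42 staff (22%)
# Animals: 23 staff (12%)
```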

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear how considerations of representation should play into selecting content, or, if they should, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that, among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that those most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: MichaelPlant 05 May 2018 01:01:23PM *  6 points [-]

'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

I think this methodology is pretty suspicious. There are more ways to be a full-time EA (FTEA) than working at an EA org, or even E2Ging. Suppose someone spends their time working on, say, poverty out of a desire to do the most good, and thus works at a development NGO or for a government. Neither development NGOs nor governments will count as an 'EA org' on your definition because they won't be posting updates to the EA newsletter. Why would they? The EA community has very little comparative advantage in solving poverty, so what would be the point in, say, Oxfam or DFID sending update reports to the EA newsletter? It would frankly be bizarre for a government department to update the EA community. We might say "ah, but people who work on poverty aren't really EAs", but that would just beg the question.

Comment author: ThomasSittler 06 May 2018 05:12:13PM *  5 points [-]

+1 for doing a quick empirical check and providing your method.

But the EA newsletter is curated by CEA, no? So it also partly reflects CEA's own priorities. You and others have noted in the discussion below that a number of plausibly full-time EA organisations are not included in your list (e.g. BERI, Charity Science, GFI, Sentience Politics).

I'd also question the view that OpenPhil is mostly focused on the long-term future. When I look at OpenPhil's grant database and count

  • Biosecurity and Pandemic Preparedness
  • Global Catastrophic Risks
  • Potential Risks from Advanced Artificial Intelligence
  • Scientific Research

as long-term future focused, I get that 30% of grant money was given to the long-term future.
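
A minimal sketch of this kind of calculation, assuming the grant database can be exported as a CSV with "Focus Area" and "Amount" columns (those column names, the file name, and the dollar formatting are assumptions about the export; adjust to the actual schema):

```python
# Share of Open Phil grant money falling under focus areas counted as
# long-term-future-focused. Column names and formatting are assumptions.
import csv

LONG_TERM_AREAS = {
    "Biosecurity and Pandemic Preparedness",
    "Global Catastrophic Risks",
    "Potential Risks from Advanced Artificial Intelligence",
    "Scientific Research",
}

def parse_amount(raw: str) -> float:
    """Turn a string like '$1,500,000' into a float; blank entries count as zero."""
    cleaned = raw.replace("$", "").replace(",", "").strip()
    return float(cleaned) if cleaned else 0.0

long_term = 0.0
total = 0.0
with open("openphil_grants.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        amount = parse_amount(row.get("Amount", ""))
        total += amount
        if row.get("Focus Area", "") in LONG_TERM_AREAS:
            long_term += amount

print(f"Long-term-future share of grant money: {long_term / total:.0%}")
```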

Comment author: Jan_Kulveit 05 May 2018 07:04:30AM 3 points [-]

I think that while this headcount is not a good metric for how to allocate space in the EA handbook, it is quite a valuable overview in itself!

Just as a caveat, the numbers should not be directly compared to numbers from the EA Survey, as the latter also included cause prioritisation, rationality, meta, politics and more.

(Using such categories, some organizations would end up classified in different boxes.)

Comment author: RandomEA 05 May 2018 12:15:07PM 2 points [-]

I think your list undercounts the number of animal-focused EAs. For example, it excludes Sentience Politics, which provided updates through the EA newsletter in September 2016, January 2017, and July 2017. It also excludes the Good Food Institute, an organization which describes itself as "founded to apply the principles of effective altruism (EA) to change our food system." While GFI does not provide updates through the EA newsletter, its job openings are mentioned in the December 2017, January 2018, and March 2018 newsletters. Additionally, it excludes organizations like the Humane League, which, while not explicitly EA, have been described as having a "largely utilitarian worldview." Though the Humane League does not provide updates through the EA newsletter, its job openings are mentioned in the April 2017, February 2018, and March 2018 newsletters.

Perhaps the argument for excluding GFI and the Humane League (while including direct work organizations in the long term future space) is that relatively few people in direct work animal organizations identify as EAs (while most people in direct work long term future organizations identify as EA). If this is the reason, I think it'd be good for someone to provide evidence for it. Also, if the idea behind this method of counting is to look at the revealed preference of EAs, then I think people earning to give have to be included, especially since earning to give appears to be more useful for farm animal welfare than for long term future causes.

(Most of the above also applies to global health organizations.)

Comment author: Gregory_Lewis 06 May 2018 12:43:37PM 1 point [-]

I picked the 'updates' purely in the interests of time (easier to skim), because it gives some sense of which orgs are considered 'EA orgs' rather than 'orgs doing EA work' (a distinction which I accept is imprecise: would a GW top charity 'count'?), and because I (forlornly) hoped that pointing to a method, however brief, would forestall suspicion about cherry-picking.

I meant the quick-and-dirty data gathering to be more an indicative sample than a census. I'd therefore expect a significant margin of error (but not so significant as to change the bottom line). Other relevant candidate groups are also left out: BERI, Charity Science, Founder's Pledge, possibly ALLFED. I'd expect there are more.