
Announcing the Effective Altruism Handbook, 2nd edition

Today CEA is releasing the second edition of our Effective Altruism Handbook.


You can get the pdf version here, and we also have epub/mobi versions for people who prefer e-readers.


What is CEA's EA Handbook?


It’s an introduction to some of the core concepts in effective altruism.


If you’re new to effective altruism, it should give you an overview of the key ideas and problems that we’ve found so far. But if you have already engaged with effective altruism, there should be some new insights amongst the familiar ideas.


The pieces are a mix of essays and adaptations of conference talks. We’ve tried to put them in an order that makes sense. Together we think they cover some of the key ideas in the effective altruism community.


Why a new edition?


The first edition of CEA's EA Handbook is now 3 years old. As a community, we’ve changed a lot in those three years, and learnt a lot. In fact, comparing the new handbook with the old is a good way to get a sense of just how much intellectual progress we’ve made.


After consulting with Ryan Carey, the editor of the old handbook, we agreed it was time for something new, and for a slightly more polished design. Stefan Schubert and I compiled a list of talks and articles, and the authors were gracious enough to give us permission. With the help of a small army of transcribers and copy-editors, and Laura Pomarius’ design skills, we brought it together.


What next?

We hope that this becomes a reference point for people who are new to effective altruism, and a summary that local groups can share with their members.


We are not currently planning to make a physical version of CEA's EA Handbook. We think the articles and design are of high enough quality for an online pdf, but we worry that distributing or selling physical copies of a resource which remains only a collection of articles and talks, rather than a polished book, might damage the brand of effective altruism.


We’d welcome feedback on any aspect of the new edition.


Here are the links again if you want to get reading or sharing:

[Edited the body of the post to reflect changes made to the contents on 23 May 2018, and change links.]

Comments (56)

Comment author: [deleted] 04 May 2018 06:55:37PM *  16 points

Now that the formatting has been fixed, a few points on the substance. First of all, this is obviously a huge improvement on the old EA handbook. So much high-quality intellectual content has been produced in the last three years. I think it's very important, as a community-building tool, to have a curated selection of this content, and the new handbook is a welcome step towards that goal.

I think the first seven parts are good choices, and appear in a good order:

  • Introduction to Effective Altruism
  • Efficient Charity — Do Unto Others
  • Prospecting for Gold
  • Crucial Considerations and Wise Philanthropy
  • The Moral Value of Information
  • The Long-Term Future
  • A Proposed Adjustment to the Astronomical Waste Argument

Next in "promising causes", we first get three articles in a row about AI:

  • Three Impacts of Machine Intelligence
  • Potential Risks from Advanced AI
  • What Does (and Doesn’t) AI Mean for Effective Altruism?

I understand that these are good pieces, and you want to showcase the good thinking the EA community has done. But three out of the eight in "promising causes" seems like a heavy focus on AI. If we condition on long-termism, which plausibly excludes the Global Health and Animal Welfare articles as written, it's three out of six on AI. Additionally, the fact that AI comes first lends it still more importance.

Another asymmetry is that while

  • The long-term future
  • Animal Welfare
  • Global health and development

have a discussion of pros and cons, the three AI articles talk about what to do if you've already decided to focus on AI. This creates the implicit impression that if you accept long-termism, it's a foregone conclusion that AI is a priority, or even the priority.

Next we have:

  • Biosecurity as an EA Cause Area
  • Animal Welfare
  • Effective Altruism in Government
  • Global Health and Development
  • How valuable is movement growth?

I'm especially worried that, among these, animal welfare and global health and development are the only two that follow the pro-con structure. My worry is not that objections are discussed; it's that these pieces feel much more like mass-outreach pieces than the others in the same section. They are

  • more introductory in content
  • in tone, more like a dry memo**, or a broad survey for outsiders

The second point is somewhat nebulous, but, I think, actually very important. While the other cause area articles feel like excerpts from a lively intellectual conversation among EAs, the global health and animal welfare articles do not. This seems unnecessary since EAs have produced a lot of excellent, advanced content in these areas. A non-exhaustive list:

  • The entire GiveWell blog
  • Large parts of Brian Tomasik's website
  • Lewis Bollard's talks at EA Global
  • Rachel Glennerster's post "Where economists can learn from philosophers and effective altruism"
  • Tobias Leenaert's talks on cognitive dissonance and behaviour change driving value change
  • Sentience Institute's "Summary of Evidence for Foundational Questions in Effective Animal Advocacy"
  • Toby Ord's "The Moral Imperative towards Cost-Effectiveness"

So it should hopefully be possible to significantly improve the handbook for an edition 2.1, to come out soon :)

** (I think Jess did a great job writing these, but they were originally meant for a different context)

Comment author: Maxdalton 07 May 2018 01:27:56PM 5 points

Thanks Tom. I've discussed the reasoning for including three articles on AI a bit on Facebook. To quote from that:

"I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare [or, I think e.g. biosecurity and other long-term focused causes]. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less."

Thanks for the suggestions of alternatives to the global poverty and animal welfare articles. I think you may well be right that we should change those. This is another mistake that I made. The content for the EA Handbook grew out of a sequence of content on effectivealtruism.org. As a consequence, it included only content that we had produced (or that had been produced by others at our events). At the point when we shifted to a pdf/ebook format, I should have reconsidered the selection of articles, which would have given us the possibility of including the excellent content that you mention. I hope that changing those articles will also reduce the impression that AI follows obviously from a long-term future focus. I'm sorry for making this mistake.

Comment author: Darius_Meissner 03 May 2018 09:41:42AM *  9 points

Now that a new version of the handbook is out, could you update the 'More on Effective Altruism' link? It is quite prominent in the 'Getting Started' navigation panel on the right-hand side of the EA Forum.

Comment author: Maxdalton 07 May 2018 08:35:06AM 1 point

Good idea. I'll do this when and if there is more consensus that people want to promote this content over the old.

Comment author: JoshP 02 May 2018 01:21:50PM *  9 points

A few quick comments; I've skimmed through rather than read in depth (though I have read a number of the articles in the past):

  1. There's a format error on p.167: under "1. Learn more", there is a spacing error in the paragraph, which bizarrely cuts the paragraph in two for no reason. [Edit: this is not a lone error; I've found another on p.142, and there may well be others, I haven't gone through exhaustively]
  2. I'd be interested to know how the relevant cause areas were agreed upon. There's a heavy emphasis on Artificial Intelligence and the Long-Run Future (three articles on AI), and some areas of interest to EAers get very little or no mention at all (e.g. Mental Health and Happiness, re Michael Plant [EDIT: there is some mention of this, but there's still at least a good question here about how cause areas are decided]). Perhaps there's also a lack of clarity on cause areas versus career paths (and the Gov. article is great, but extremely America-heavy, which is not helpful for non-Americans). I suspect there is more to be said as to how this was decided, but it would be useful to understand it.
  3. The recommendation of different books at the end is interesting: given the heavy emphasis on the long-term future throughout, it seems to me that the recommended books diverge from that (Doing Good Better and The Most Good You Can Do aren't hugely about that, from my memory). Would it have been better to recommend Superintelligence, or the Global Catastrophic Risks volume which came out a while back, or something else, if the long-term future dominates the attention elsewhere?
  4. One worry I have is that it does a lot to suggest possible directions for EAers, and little to deal with objections EAers might face. The original handbook seems to have slightly more on that (e.g. the Estimation article from Katja, or Holden's "White in Shining Armor" article); and there are others which regularly arise, e.g. the collectivist/coordination ideas that Hilary Greaves has been talking about (no individual can ever make a difference), or objections which focus on the overriding significance of virtue/the total cluelessness of us all. It might be that these are dealt with in some other way (I'm not clear how), or that this is simply not that important (which I question, but am uncertain), so I would be appreciative of your thoughts.

Mainly, though, I liked it, so my critical points aren't to be understood as a total rejection of the piece! It was great in so many ways, and I'm sure required a fair amount of work!

Comment author: Maxdalton 03 May 2018 08:22:05AM *  2 points
  1. Thanks for pointing that out, we'll fix that.
  2. Cause selection:
    • Cause areas: Unfortunately we couldn't include everything. One of the core tenets of effective altruism is making difficult calls about cause prioritization, and these will always be contentious. We had to make those calls as we decided what to include. Our current best guess is that we should be focusing our efforts on a variety of different attempts to improve the long-term future, and this explains the calls that we made in the handbook.
    • Career paths: You're right, I'll make some changes to make clear that this is cause focused, and point people to 80,000 Hours for career focused advice.
    • Sorry that the article is not so helpful for non-Americans. Unfortunately this varies quite a bit between countries, and we couldn't cover them all.
  3. That's a good point, I'll consider changing/adding those books.
  4. That's another good point. I might include another section at the end on criticisms.
Comment author: konrad 02 May 2018 08:22:39AM 7 points

Awesome, thanks a lot for this work!

From what I understood when talking to CEA staff, this is also thought to replace handing out copies of Doing Good Better, yes? If so, I would emphasise this more explicitly, too.

Comment author: Maxdalton 03 May 2018 02:17:17PM 1 point

Doing Good Better is still a useful introduction to EA, and it's possible to distribute physical copies of the book, so that will sometimes be more useful. The EA Handbook might work better as a more advanced introduction, or in online circumstances (see also some of the other comment threads).

Comment author: impala 03 May 2018 09:00:18PM 6 points

There's a valuable discussion of this on Facebook at https://www.facebook.com/groups/effective.altruists/permalink/1750780338311649/

Comment author: Yannick_Muehlhaeuser 02 May 2018 12:33:27PM 6 points

If I could only recommend one book to someone, should I recommend this or Doing Good Better? I'm not really sure. What do you think?

Comment author: Maxdalton 03 May 2018 08:44:21AM 6 points

As Josh says, they're slightly different resources, and I think it will depend on the person.

The EA Handbook was designed for people who have already shown some interest in and inclination towards EA's core principles - maybe they've been pitched by a friend, or listened to a podcast. I think Doing Good Better is likely to be better as an introduction to those core principles, whilst the Handbook is an exploration of where the principles might lead. So in terms of level, Doing Good Better feels more introductory.

In my view, the content of the EA Handbook better reflects our best current understanding of which causes to prioritize, and so I would prefer it in terms of content.

Overall, my guess is that if you've had a chance to briefly explain some of EA's core principles and intuitions, it would be best to recommend the EA Handbook.

Comment author: JoshYou 03 May 2018 04:33:37AM 2 points

Doing Good Better is more accessible and spends a lot more time introducing and defending the basic idea of EA instead of branching out into more advanced ideas. It is also much more focused on global poverty.

Comment author: adamaero 03 May 2018 12:20:24AM 2 points

Doing Good Better

Comment author: DavidMoss 03 May 2018 01:53:50AM 0 points

Or Singer's The Most Good You Can Do.

Comment author: Peter_Hurford 04 May 2018 01:40:32AM *  21 points

I find it so interesting that people on the EA Facebook page have been a lot more generally critical about the content than people here on the EA Forum -- here it's all just typos and formatting issues.

I'll admit that I was one of the people who saw this here on the EA Forum first and was disappointed, but chose not to say anything out of a desire to not rock the boat. But now that I see others are concerned, I will echo my concerns too and magnify them here -- I don't feel like this handbook represents EA as I understand it.

By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward that page count), more than half of each article is dedicated to why we might not choose this cause area, with much of that space also focused on the far future of humanity. I find it hard to imagine anyone reading this and not taking away that the community consensus is that AI risk is clearly the most important thing to focus on.

I feel like I get it. I recognize that CEA and 80K have a right to have strong opinions about cause prioritization. I also recognize that they've worked hard to become such a strong central pillar of EA as they have. I also recognize that a lot of people that CEA and 80K are familiar with agree with them. But now I can't personally help but feel like CEA is using their position of relative strength to essentially dominate the conversation and claim it is the community consensus.

I agree the definition of "EA" here is itself the area of concern. It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey, where AI was the top cause for only 16% of respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare. But naturally I'd be biased toward using these results, and I'm definitely sympathetic to the idea that EA should be considered more narrowly, or that we should weight the opinions of people working on it full-time more heavily. So I'm unsure. Even my opinions here are circular, by my own admission.

But I think if we're going to be claiming in a community space to talk about the community, we should be more thoughtful about whose opinions we're including and excluding. It seems pretty inexpensive to re-weight the handbook so that it emphasizes AI risk just as much without being as clearly jarring about it (e.g., by not dedicating three chapters to AI where other causes get one, or slanting so clearly toward AI risk throughout the "reasons not to prioritize this cause" sections).

Based on this, and the general sentiment, I'd echo Scott Weathers' comment on the Facebook group that it’s pretty disingenuous to represent CEA’s views as the views of the entire community writ large, however you want to define that. I agree I would have preferred it called “CEA’s Guide to Effective Altruism” or something similar.

Comment author: Gregory_Lewis 05 May 2018 01:06:42AM *  7 points

It's very easy for any of us to call "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey, where AI was the top cause for only 16% of respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare.

As noted in the fb discussion, it seems unlikely full-time non-profit employment is a good proxy for 'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

For this group, one could stipulate that every group which posts updates to the EA newsletter (I looked at the last half-dozen or so, so any group which didn't have an update is excluded, but this is likely minor) is an EA group; totting up a headcount of staff (I didn't correct for FTE, and excluded advisors/founders/volunteers/freelancers/interns; all of these decisions could be challenged) and recording the prevailing focus of each org gives something like this:

  • 80000 hours (7 people) - Far future
  • ACE (17 people) - Animals
  • CEA (15 people) - Far future
  • CSER (11 people) - Far future
  • CFI (10 people) - Far future (I only included their researchers)
  • FHI (17 people) - Far future
  • FRI (5 people) - Far future
  • Givewell (20 people) - Global poverty
  • Open Phil (21 people) - Far future (mostly)
  • SI (3 people) - Animals
  • CFAR (11 people) - Far future
  • Rethink Charity (11 people) - Global poverty
  • WASR (3 people) - Animals
  • REG (4 people) - Far future [Edited after Jonas Vollmer kindly corrected me]
  • FLI (6 people) - Far future
  • MIRI (17 people) - Far future
  • TYLCS (11 people) - Global poverty

Totting this up, I get roughly two thirds of people working at orgs which focus on the far future (66%), versus 22% global poverty and 12% animals. Although it is hard to work out the proportion of AI within the far future category, I'm pretty sure it is the majority, so 45% AI wouldn't be wildly off-kilter if we thought the EA handbook should represent the balance of 'full-time' attention.
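
(For transparency, here is a minimal Python sketch reproducing the tally, assuming exactly the staff counts and focus labels listed above, with no FTE correction:)

    # Reproduce the headcount tally (staff counts as listed, no FTE correction).
    from collections import Counter

    orgs = [
        ("80000 Hours", 7, "far future"),  ("ACE", 17, "animals"),
        ("CEA", 15, "far future"),         ("CSER", 11, "far future"),
        ("CFI", 10, "far future"),         ("FHI", 17, "far future"),
        ("FRI", 5, "far future"),          ("GiveWell", 20, "global poverty"),
        ("Open Phil", 21, "far future"),   ("SI", 3, "animals"),
        ("CFAR", 11, "far future"),        ("Rethink Charity", 11, "global poverty"),
        ("WASR", 3, "animals"),            ("REG", 4, "far future"),
        ("FLI", 6, "far future"),          ("MIRI", 17, "far future"),
        ("TYLCS", 11, "global poverty"),
    ]

    totals = Counter()
    for _, headcount, focus in orgs:
        totals[focus] += headcount

    grand_total = sum(totals.values())  # 189 people in total
    for focus, n in totals.most_common():
        print(f"{focus}: {n} people ({n / grand_total:.0%})")
    # -> far future: 124 people (66%)
    #    global poverty: 42 people (22%)
    #    animals: 23 people (12%)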

I doubt this should be the relevant metric for how to divvy up space in the EA handbook. It also seems unclear whether considerations of representation should play a role in selecting content, and if so, which community is the key one to proportionately represent.

Yet I think I'd be surprised if it wasn't the case that among those working 'in' EA, the majority work on the far future, and a plurality work on AI. It also agrees with my impression that the most involved in the EA community strongly skew towards the far future cause area in general and AI in particular. I think they do so, bluntly, because these people have better access to the balance of reason, which in fact favours these being the most important things to work on.

Comment author: [deleted] 06 May 2018 05:12:13PM *  6 points

+1 for doing a quick empirical check and providing your method.

But the EA newsletter is curated by CEA, no? So it also partly reflects CEA's own priorities. You and others have noted in the discussion below that a number of plausibly full-time EA organisations are not included in your list (e.g. BERI, Charity Science, GFI, Sentience Politics).

I'd also question the view that OpenPhil is mostly focused on the long-term future. Looking at OpenPhil's grant database and counting

  • Biosecurity and Pandemic Preparedness
  • Global Catastrophic Risks
  • Potential Risks from Advanced Artificial Intelligence
  • Scientific Research

as long-term future focused, I get that 30% of grant money was given to the long-term future.

Comment author: MichaelPlant 05 May 2018 01:01:23PM *  7 points

'full-time EAs' (i.e. those working full time at an EA organisation - E2Gers would be one of a few groups who should also be considered 'full-time EAs' in the broader sense of the term).

I think this methodology is pretty suspicious. There are more ways to be a full-time EA (FTEA) than working at an EA org, or even E2Ging. Suppose someone spends their time working on, say, poverty out of a desire to do the most good, and thus works at a development NGO or for a government. Neither development NGOs nor governments will count as an 'EA org' on your definition, because they won't be posting updates to the EA newsletter. Why would they? The EA community has very little comparative advantage in solving poverty, so what would be the point in, say, Oxfam or DFID sending update reports to the EA newsletter? It would frankly be bizarre for a government department to update the EA community. We might say "ah, but people who work on poverty aren't really EAs", but that would just beg the question.

Comment author: Jan_Kulveit 05 May 2018 07:04:30AM 3 points

I think that while this headcount is not a good metric for how to allocate space in the EA handbook, it is quite a valuable overview in itself!

Just as a caveat, the numbers should not be directly compared to the numbers from the EA survey, as the latter also included cause-prioritization, rationality, meta, politics & more.

(Using such categories, some organizations would end up classified in different boxes.)

Comment author: RandomEA 05 May 2018 12:15:07PM 2 points

I think your list undercounts the number of animal-focused EAs. For example, it excludes Sentience Politics, which provided updates through the EA newsletter in September 2016, January 2017, and July 2017. It also excludes the Good Food Institute, an organization which describes itself as "founded to apply the principles of effective altruism (EA) to change our food system." While GFI does not provide updates through the EA newsletter, its job openings are mentioned in the December 2017, January 2018, and March 2018 newsletters. Additionally, it excludes organizations like the Humane League, which, while not explicitly EA, has been described as having a "largely utilitarian worldview." Though the Humane League does not provide updates through the EA newsletter, its job openings are mentioned in the April 2017, February 2018, and March 2018 newsletters.

Perhaps the argument for excluding GFI and the Humane League (while including direct work organizations in the long term future space) is that relatively few people in direct work animal organizations identify as EAs (while most people in direct work long term future organizations identify as EA). If this is the reason, I think it'd be good for someone to provide evidence for it. Also, if the idea behind this method of counting is to look at the revealed preference of EAs, then I think people earning to give have to be included, especially since earning to give appears to be more useful for farm animal welfare than for long term future causes.

(Most of the above also applies to global health organizations.)

Comment author: Gregory_Lewis 06 May 2018 12:43:37PM 1 point

I picked the 'updates' purely in the interests of time (easier to skim), and because it gives some sense of which orgs are considered 'EA orgs' rather than 'orgs doing EA work' (a distinction which I accept is imprecise: would a GW top charity 'count'?). I (forlornly) hoped that pointing to a method, however brief, would forestall suspicion about cherry-picking.

I meant the quick-and-dirty data gathering to be more an indicative sample than a census. I'd therefore expect a significant margin of error (but not so significant as to change the bottom line). Other relevant candidate groups are also left out: BERI, Charity Science, Founder's Pledge, possibly ALLFED. I'd expect there are more.

Comment author: Maxdalton 04 May 2018 05:02:53PM 6 points

(Copying across some comments I made on Facebook which are relevant to this.)

Thanks for the passionate feedback, everyone. Whilst I don’t agree with all of the comments, I’m sorry for the mistakes I made. Since several of the comments above make similar points, I’ll try to give general replies in some main-thread comments. I’ll also be reaching out to some of the people in the thread above to try to work out the best way forward.

My understanding is that the main worry that people have is about calling it the Effective Altruism Handbook vs. CEA’s Guide to Effective Altruism or similar. For the reasons given in my reply to Scott above, I think that calling it the EA Handbook is not a significant change from before: unless we ask Ryan to take down the old handbook, there will, whatever happens, be a CEA-selected resource called the EA Handbook. For reasons given above and below, I think that the new version of the Handbook is better than the old. I think that there is some value in explicitly replacing the old version for this reason, and since “EA Handbook” is a cleaner name. However, I do also get people’s worries about this being taken to represent the EA community as a whole. For that reason, I will make sure that the title page and introduction make clear that this is a project of CEA, and I will make clear in the introduction that others in the community would have selected different essays.

My preferred approach would then be to engage with people who have expressed concern, and see if there are changes we can make that alleviate their concerns (such as those we already plan to make based on Scott’s comment). If it appears that we can alleviate most of those concerns whilst retaining the value of the Handbook from CEA’s perspective, it might be best to call it the Centre for Effective Altruism’s EA Handbook. Otherwise, we would rebrand. I’d be interested to hear in comments whether there are specific changes (articles to add/take away/design things) that would reassure you about this being called the EA Handbook.

In this comment I’ll reply to some of the more object-level criticisms. I want to apologize for how this seemed to others, but also give a clearer sense of our intentions. I think that it might seem that CEA has tried merely to push AI safety as the only thing to work on. We don’t think that, and that wasn’t our intention. Obviously, poorly realized intentions are still a problem, but I want to reassure people about CEA’s approach to these issues.

First, re there not being enough discussion of portfolios/comparative advantage, this is mentioned in two of the articles (“Prospecting for Gold” and “What Does (and Doesn't) AI Mean for Effective Altruism?”). However, I think that we could have emphasised this more, and I will see if it’s possible to include a full article on coordination and comparative advantage.

Second, I’d like to apologise for the way the animal and global health articles came across. Those articles were commissioned at the same time as the long-term future article, and they share a common structure: What’s the case for this cause? What are some common concerns about that cause? Why might you choose not to support this cause? The intention was to show how many assumptions underlie a decision to focus on any cause, and to map out some of the debate between the different cause areas, rather than to illicitly push the long-term future. It looks like this didn’t come across, sorry. We didn’t initially commission sub-cause profiles on government, AI and biosecurity, which explains why those more specific articles follow a different structure (mostly talks given at EA Global).

Third, I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less.

Comment author: Evan_Gaensbauer 04 May 2018 07:51:37PM 4 points

What about the possibility that the Centre for Effective Altruism represents the community by editing the EA Handbook to reflect what the community values in spite of what the CEA concludes, excluding evaluations where the CEA currently diverges from the community, while still calling it the 'EA Handbook' instead of 'CEA's Guide to EA'? Obviously this wouldn't carry EA forward with what the CEA thinks is maximum fidelity, but it's clear many think the CEA is trying to spread the EA message with infidelity, while acting as though they're the only actor in the movement others can trust to carry that message. That not only looks hypocritical but undermines faith in the CEA.

Altering the handbook so it's more of a compromise between multiple actors in EA would redeem the reputation of the CEA. Without that, the CEA can't carry EA forward with fidelity at all, because the rest of the movement wouldn't cooperate with them. In the meantime, the CEA and everyone else can hammer out what we think is the most good here on the EA Forum. If broader conclusions are drawn which line up with the CEA's evaluation, based on a consensus that the CEA has the best arguments behind their perspective, those can be included in the next edition of the EA Handbook. Again, from the CEA's perspective, that might seem like deliberately compromising the fidelity of EA in the short term to appease others. But again, from the perspective of the CEA's current critics, they're criticizing the 2nd edition of the EA Handbook because they perceive themselves as protecting the fidelity of EA from the Centre for Effective Altruism. This could solve other contentious issues in EA, such as consideration of both s-risks and x-risks from AI. The EA Handbook could be published as close to identically as possible in multiple languages, which would prevent the CEA from selling EA one way in English and the EAF selling it another way in German, and from creating trust issues which down the road would just become sources of conflict, not unlike the criticism the EA Handbook, 2nd edition, is receiving now. Ultimately, this would be the CEA making a relatively short-term compromise to ensure the long-term fidelity of EA by demonstrating themselves as a delegate and representative agency the EA community can still have confidence in.

Comment author: Maxdalton 07 May 2018 01:39:06PM 7 points

Thanks for the comments Evan. First, I want to apologize for not seeking broader consultation earlier. This was clearly a mistake.

My plan now is to do as you suggest: talk to other actors in EA and get their feedback on what to include etc. Obviously any compromise is going to leave some unhappy - different groups do just favour different presentations of EA, so it seems unlikely to me that we will get a fully independent presentation that will please everyone. I also worry that democracy is not well suited to editorial decisions, and that the "electorate" of EA is ill-defined. If the full compromise approach fails, I think it would be best to release a CEA-branded resource which incorporates most of the feedback above. This option also seems to me to be cooperative, and to avoid harm to the fidelity of EA's message, but I might be missing something.

Comment author: Evan_Gaensbauer 07 May 2018 01:45:27PM 5 points

Thanks for responding, Max. I agree that consulting some key actors without going through a democratic process makes sense. I appreciate you being able to respond to and incorporate all the feedback you're receiving so quickly.

Comment author: Jiri_Nadvornik 02 May 2018 09:15:11AM *  5 points

Thanks. How is it licensed? Under what conditions is it possible to share copies or translate (parts of) it?

Comment author: Maxdalton 03 May 2018 08:26:12AM 1 point

I encourage you to share copies online. For reasons similar to those discussed above, we didn't get consent to make physical copies from all of the authors. I have not yet asked for permission on translation, but I will do so and reply in this thread.

(This post may be interesting for you if you haven't already read it: http://effective-altruism.com/ea/1lh/why_not_to_rush_to_translate_effective_altruism/).

Comment author: Maxdalton 13 August 2018 08:39:30AM 0 points

If you would like to translate the Handbook, please email content@effectivealtruism.org for permission.

Comment author: Alex_Barry 02 May 2018 03:59:23PM *  4 points

As far as I can tell, none of the links that look like this instead of http://effective-altruism.com work in the pdf version.

Comment author: JoshP 02 May 2018 05:17:53PM 1 point

I think there's a mix of working and non-working, having just checked myself. Some don't go through to anything when you click on them; some go through to a 404 error; and some go through to the correct website.

Bizarrely, this depends on which copy I downloaded. I downloaded it more than once (in different tabs), and the copies behave differently: the first works for every link I check, while the second doesn't, and this remains true when comparing the same links like for like. I'm not really sure why. Bit bizarre.

Comment author: Maxdalton 02 May 2018 06:21:05PM *  3 points

Edit: this should be fixed now; let me know if there are still problems.

(Sorry, I don't have time to reply to all of the comments here today.) Sorry about this! Not sure what's going on here, but does this version work better for you?

[There was a link here]

I'll try to get a full fix tomorrow.

Comment author: JoshP 03 May 2018 09:59:02AM 10 points

I just spent a very exciting hour going through every link (yes, I clicked all of them) in the handbook, and I think I have a definitive list of mistakes in the links (if there are others, may they remain mistakes ever more :P ):

  • p.47, Engines of Creation
  • p.77, expected value link
  • p.77, Risk aversion and rationality; use http://fitelson.org/seminar/buchak2.pdf instead
  • p.80, scope insensitivity
  • p.80, Luke Muehlhauser has commented
  • p.127, "our profile on the long-run future"
  • Footnote 43, p.137, link 2, related to diarrhoea
  • p.142, "animal welfare profile"
  • p.144, systemic change profile link works, but looks slightly unprofessional
  • p.166, you could do with a link to the 80,000 Hours Podcast and the Doing Good Better podcast

Comment author: Maxdalton 03 May 2018 12:41:30PM 1 point

Thanks so much for this!

Comment author: [deleted] 03 May 2018 07:57:54PM 3 points

This looks really good now that some of the formatting errors have been removed. Thanks! :)

Comment author: RandomEA 03 May 2018 06:30:24AM *  10 points

The shift from Doing Good Better to this handbook reinforces my sense that there are two types of EA:

Type 1:

  1. Causes: global health, farm animal welfare

  2. Moral patienthood is hard to seriously dispute

  3. Evidence is more direct (RCTs, corporate pledges)

  4. Charity evaluators exist (because evidence is more direct)

  5. Earning to give is a way to contribute

  6. Direct work can be done by people with general competence

  7. Economic reasoning is more important (partly due to donations being more important)

  8. More emotionally appealing (partly due to being more able to feel your impact)

  9. Some public knowledge about the problem

  10. More private funding and a larger preexisting community

Type 2:

  1. Causes: AI alignment, biosecurity

  2. Moral patienthood can be plausibly disputed (if you're relying on the benefits to the long term future; however, these causes are arguably important even without considering the long term future)

  3. Evidence is more speculative (making prediction more important)

  4. Charity evaluation is more difficult (because impact is harder to measure)

  5. Direct work is the way to contribute

  6. Direct work seems to benefit greatly from specific skills/graduate education

  7. Game theory reasoning is more important (of course, game theory is technically part of economics)

  8. Less emotionally appealing (partly due to being less able to feel your impact)

  9. Little public knowledge about the problem

  10. Less private funding and a smaller preexisting community

Comment author: Buck 04 May 2018 06:00:51PM 3 points

I don't think my experience matches this split. For example, I don't think that it's obvious that the causes you specify match the attributes in points 2, 5, 6.

Comment author: Alex_Barry 03 May 2018 03:52:01PM *  2 points

I am somewhat confused by the framing of this comment: you start by saying "there are two types of EA", but the points seem to all be about the properties of different causes.

I don't think there are 'two kinds' of EAs in the sense that you could easily tell in advance which group people were going to fall into; rather, all of your characteristics just follow as practical considerations from how important people find the longtermist view. (But I do think "a longtermist viewpoint leads to a very different approach" is correct.)

I'm also not sure how similar the global poverty and farm animal welfare groups actually are. There seem to be significant differences in terms of the quality of evidence used and how established they are as areas. Points 3, 4, 7, 9 and 10 seem to have pretty noticeable differences between global poverty and farm animal welfare.

Comment author: RandomEA 04 May 2018 04:31:38AM 2 points

Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.

I agree that there are substantial differences between global poverty and farm animal welfare (with global poverty being more clearly Type 1). But it seems to me that those differences are more differences of degree, while the differences between global poverty/farm animal welfare and biosecurity/AI alignment are more differences of kind.

Comment author: Alex_Barry 04 May 2018 04:45:42PM *  0 points

Just to clarify, when I say that my sense is that there are two types of EA, I mean that I sense that there are two types of effective altruism, not that I sense that there are two types of effective altruists.

Ah, I see. For some reason I got the other sense from reading your comment, but looking back at it I think that was just a failure of reading comprehension on my part.

I agree that the differences between global poverty and animal welfare are more matters of degree, but I also think they are larger than people seem to expect.

Comment author: kbog 14 May 2018 01:04:17PM 0 points

What on Earth do you mean by "disputing moral patienthood"? If there are no moral patients then there is basically no reason for altruism whatsoever.

Comment author: adamaero 03 May 2018 06:34:33PM *  0 points

I also believe there are two broad types of EAs today, so this is interesting. I am a little confused about some of your meaning, though. Can you make some of those into complete sentences?

2) How are these different between Type 1 and Type 2?

4) "Evidence is more direct" in what regard or context??

Lastly, the list seems skewed, favoring Type 2.

Comment author: RandomEA 04 May 2018 04:44:18AM *  0 points

2) How are these different between Type 1 and Type 2?

To me, it cannot be seriously disputed that improving the lives of currently alive humans is good, that improving the welfare of current and future animals is good, and that preventing the existence of farm animals who would live overall negative lives is good.

By contrast, I think that you can make a plausible argument that there is no moral value to ensuring that a person who would live a happy life comes into existence (though as noted above, you can make the case for reducing global catastrophic risks without relying on that benefit).

4) "Evidence is more direct" in what regard or context??

It's easier to measure the effectiveness of the program being implemented by a global health charity, the effectiveness of that charity at implementing the program, and the effectiveness of an animal charity at securing corporate pledges than it is to measure the impact of biosecurity and AI alignment organizations.

Comment author: Maxdalton 23 May 2018 01:20:28PM *  2 points

Thanks so much for all of the feedback, everyone; this was very helpful for working out the problems with the old version. I've been working on getting a version which everyone can agree is an improvement.

All of the more minor changes to the Handbook have now been completed, and are available at the links in the OP.

In addition to the minor changes, I plan to add the following articles:

I may also make some edits to the three "cause profile" pieces, some for length, and to add some details to the long-term future piece. The more major edits might take a couple of months (for the 80,000 Hours piece to be ready, and for redesign).

I've reached out to some of the original commenters, and some of the main research/community building orgs in the space, asking for further comments. Thanks again to everyone who took the time to try to make this a better product. I for one am more excited about the version-to-come than the version-as-was.

Comment author: adamaero 02 May 2018 08:09:14PM *  2 points

As mentioned by others, the formatting is poor. I most like page 166, where a choice between a short and a long video is given, though the spacing for the descriptions is odd. The book list is exactly what I want to see, but there is a floating period for 80k Hours, and no period for The Most Good You Can Do. Awesome content, but not something I would share with others; the formatting is just too inconsistent.


  • I wish there were some profile descriptions of real-world effective altruists, even one or two people who are only partially on board. I consider myself all for lessening global poverty and animal suffering, though I'm generally against taking action against wild animal suffering or supporting CS grad students.

  • I wish there were something about how absolute/extreme poverty is getting better fast, but that there are still a lot of children and families suffering from its causes. Additionally, some mention of the significant decline: 40% in 1990, 20% in 2010.

https://en.wikipedia.org/wiki/Extreme_poverty#/media/File:USAID_Projections.png

Comment author: Maxdalton 03 May 2018 08:48:54AM *  0 points

Thanks for your feedback.

Thanks for catching that mistake, we'll fix that floating period, and the other errors that others have spotted. When you say "as mentioned by others", are you only referring to the comments above, or is there some discussion of this that I'm missing? It would be good to catch all of the mistakes!

Thanks for the suggestions on content. I'll have a think about whether it would be useful to include profiles somewhere.

Comment author: adamaero 03 May 2018 02:35:28PM 0 points

JoshP's comment ~ which you took care of

Comment author: adamaero 02 May 2018 07:14:08PM *  2 points

Minor Critique

On page 140 of the handbook, under "Does foreign aid really work?", Moyo's Dead Aid is mentioned. However, she is strictly speaking about gov't aid: "But this book is not concerned with emergency and charity-based aid" (end of page 7, Dead Aid). She distinguishes three types of aid:

(1) humanitarian or emergency ~ mobilized and dispensed in response to catastrophes and calamities

(2) charity-based ~ disbursed by NGOs to institutions or people

(3) systematic: "aid payments made directly to governments either though government-to-government transfers [bilateral aid] or transferred via institutions such as the World Bank (known as multilateral aid)."

Therefore, since EA is about charity-based aid, and Moyo is strictly discussing gov't aid, I do not think it is relevant to mention Dead Aid.


Aside: total US gov't foreign aid is about $4,150B × 0.7% = almost $30 billion. By comparison, $373.25 billion was given by foundations & individuals (in the US), and of that, $265 billion by individuals alone!

http://www.pbs.org/development/2016/06/14/giving-usa-2016-released-today

Comment author: Maxdalton 03 May 2018 08:55:24AM 2 points

Thank you for pointing this out! I'll remove that reference.

Comment author: kbog 14 May 2018 01:06:14PM *  0 points

I haven't read the book, but a lot of government aid goes to very similar programs as private aid. So it's not clear to me that none of the conclusions remain true.

Charity is such a touchy moralistic subject in the US, and foreign aid such a juicy political target, that I wouldn't be surprised if the author walled off the topic in such a manner for editorial rather than rational reasons.

Comment author: RyanCarey 02 May 2018 11:32:16PM 1 point

I think this has turned out really well, Max. I like that this project looks set to aid movement growth while improving the movement's intellectual quality, because the content is high-quality and representative of current EA priorities. Maybe the latter is the larger benefit; it will probably help everyone to feel more confident in accelerating movement growth over time, and so I hope we can find more ways to have a similar effect!

Comment author: Evan_Gaensbauer 04 May 2018 07:57:48PM *  2 points

What the CEA says current EA priorities are is at odds with what many others think EA's current priorities are. It appears the CEA, by putting more emphasis on x-risk reduction, is smuggling what it thinks EA's current (proportional) distribution of priorities should be into a message about what EA's current priorities actually are, in a way that undermines the perspective of thousands of effective altruists. So the idea that this handbook will increase the movement's intellectual quality is based on definitions of quality and representation for EA that many effective altruists don't share. I think this and future editions of the EA Handbook should be regarded by the community as drafts from the CEA until they carry out a project of getting as broad and substantial a swathe of feedback from important community actors as they can. This doesn't have to be a program of populist democracy where each self-identified effective altruist gets a vote. But the CEA could run the EA Handbook by EA organizations which are just as crucial to effective altruism as the CEA but don't have the phrase 'effective altruism' in their name, like ACE, GiveWell, or any other organizations which are cause-specific but are community pillars nonetheless.

Comment author: Yannick_Muehlhaeuser 04 May 2018 09:19:57PM *  1 point

Reading the book as an epub in iBooks, in enumerations there are often certain sentences that have a bigger font size than the normal text (for instance in the section "A Proposed Adjustment to the Astronomical Waste Argument"). I can't post a picture here, but I don't think it was intended to be that way. Hope that helps.

Comment author: Maxdalton 08 May 2018 11:22:43AM 0 points

Thanks, we'll look into that.

Comment author: [deleted] 03 May 2018 08:01:02PM 1 point

I'm a bit confused by "There are many other promising areas to work on, and it is difficult to work out which is most promising" as the section heading for "promising causes". It sounds like you're contrasting affecting the long-term future with other promising causes, but the section "promising causes" contains many examples of affecting the long-term future.

Comment author: Maxdalton 04 May 2018 04:52:25PM 0 points

Thanks, that's a mistake which we'll fix.

Comment author: antonio 30 May 2018 12:16:26PM 0 points

I thank both editors and everybody who worked on this; I will learn a lot from it. (The Kindle version has weird layout and sectioning, but it seems readable.)

By the way, someone kindly added this new edition on Goodreads.