Comment author: Maxdalton 23 May 2018 01:20:28PM *  2 points [-]

Thanks so much for all of the feedback everyone, this was very helpful for me to work out the problems with the old version. I've been working on getting a version which everyone can agree will be an improvement.

All of the more minor changes to the Handbook have now been completed, and are available at the links in the OP.

In addition to the minor changes, I plan to add the following articles:

I may also make some edits to the three "cause profile" pieces, cutting some for length and adding some details to the long-term future piece. The more major edits might take a couple of months (waiting for the 80,000 Hours piece to be ready, and for redesign).

I've reached out to some of the original commenters, and some of the main research/community building orgs in the space, asking for further comments. Thanks again to everyone who took the time to try to make this a better product. I for one am more excited about the version-to-come than the version-as-was.

Comment author: Yannick_Muehlhaeuser 04 May 2018 09:19:57PM *  1 point [-]

Reading the book as an EPUB in iBooks, certain sentences in enumerations often have a bigger font size than the normal text (for instance in the section "A Proposed Adjustment to the Astronomical Waste Argument"). I can't post a picture here, but I don't think it was intended to be that way. Hope that helps.

Comment author: Maxdalton 08 May 2018 11:22:43AM 0 points [-]

Thanks, we'll look into that.

Comment author: Evan_Gaensbauer 04 May 2018 07:51:37PM 4 points [-]

What about the possibility that the Centre for Effective Altruism represents the community by editing the EA Handbook to reflect what the community values in spite of what the CEA concludes, excludes from the EA Handbook evaluations where it currently diverges from the community, and still calls it the 'EA Handbook' instead of 'CEA's Guide to EA'? Obviously this wouldn't carry EA forward with what the CEA thinks is maximum fidelity, but it's clear many think the CEA is spreading the EA message without fidelity, while acting as though they're the only actor in the movement others can trust to carry that message. This not only looks hypocritical but undermines faith in the CEA.

Altering the handbook so it's more of a compromise between multiple actors in EA would redeem the reputation of the CEA. Without that, the CEA can't carry EA forward with fidelity at all, because the rest of the movement wouldn't cooperate with them. In the meantime, the CEA and everyone else can hammer out what we think is the most good here on the EA Forum. If broader conclusions are drawn which line up with the CEA's evaluation, based on a consensus that the CEA had the best arguments behind their perspective, those can be included in the next edition of the EA Handbook. Again, from the CEA's perspective, that might seem like deliberately compromising the fidelity of EA in the short term to appease others. But again, from the perspective of the CEA's current critics, the reason they're criticizing the 2nd edition of the EA Handbook is that they perceive themselves as protecting the fidelity of EA from the Centre for Effective Altruism. This could solve other contentious issues in EA, such as consideration of both s-risks and x-risks from AI. The EA Handbook could be published as close to identically as possible in multiple languages, which would prevent the CEA from selling EA one way in English and the EAF selling it another way in German, creating trust issues which down the road would just become sources of conflict, not unlike the criticism the EA Handbook, 2nd edition, is receiving now. Ultimately, this would be the CEA making a relatively short-term compromise to ensure the long-term fidelity of EA by demonstrating themselves to be a delegate and representative agency the EA community can still have confidence in.

Comment author: Maxdalton 07 May 2018 01:39:06PM 6 points [-]

Thanks for the comments Evan. First, I want to apologize for not seeking broader consultation earlier. This was clearly a mistake.

My plan now is to do as you suggest: talk to other actors in EA and get their feedback on what to include etc. Obviously any compromise is going to leave some unhappy - different groups do just favour different presentations of EA, so it seems unlikely to me that we will get a fully independent presentation that will please everyone. I also worry that democracy is not well suited to editorial decisions, and that the "electorate" of EA is ill-defined. If the full compromise approach fails, I think it would be best to release a CEA-branded resource which incorporates most of the feedback above. This option also seems to me to be cooperative, and to avoid harm to the fidelity of EA's message, but I might be missing something.

Comment author: ThomasSittler 04 May 2018 06:55:37PM *  14 points [-]

Now that the formatting has been fixed, a few points on the substance. First of all, this is obviously a huge improvement on the old EA handbook. So much high-quality intellectual content has been produced in the last three years. I think it's very important, as a community-building tool, to have a curated selection of this content, and the new handbook is a welcome step towards that goal.

I think the first seven parts are good choices, and appear in a good order:

  • Introduction to Effective Altruism
  • Efficient Charity — Do Unto Others
  • Prospecting for Gold
  • Crucial Considerations and Wise Philanthropy
  • The Moral Value of Information
  • The Long-Term Future
  • A Proposed Adjustment to the Astronomical Waste Argument

Next in "promising causes", we first get three articles in a row about AI:

  • Three Impacts of Machine Intelligence
  • Potential Risks from Advanced AI
  • What Does (and Doesn’t) AI Mean for Effective Altruism?

I understand that these are good pieces, and you want to showcase the good thinking the EA community has done. But three out of the eight in "promising causes" seems like a heavy focus on AI. If we condition on long-termism, which plausibly excludes the Global Health and Animal Welfare articles as written, then it's three out of six on AI. Additionally, the fact that AI comes first lends it still more importance.

Another asymmetry is that while

  • The long-term future
  • Animal Welfare
  • Global health and development

have a discussion of pros and cons, the three AI articles talk about what to do if you've already decided to focus on AI. This creates the implicit impression that if you accept long-termism, it's a foregone conclusion that AI is a priority, or even the priority.

Next we have:

  • Biosecurity as an EA Cause Area
  • Animal Welfare
  • Effective Altruism in Government
  • Global Health and Development
  • How valuable is movement growth?

I'm especially worried that among these, animal welfare and global health and development are the only two that follow the pro-con structure. My worry is not that objections are discussed; it's that these pieces feel much more like mass-outreach pieces than the others in the same section. They are

  • more introductory in content
  • in tone, more like a dry memo**, or a broad survey for outsiders

The second point is somewhat nebulous, but, I think, actually very important. While the other cause area articles feel like excerpts from a lively intellectual conversation among EAs, the global health and animal welfare articles do not. This seems unnecessary since EAs have produced a lot of excellent, advanced content in these areas. A non-exhaustive list:

  • The entire GiveWell blog
  • Large parts of Brian Tomasik's website
  • Lewis Bollard's talks at EA global
  • Rachel Glennerster's post "Where economists can learn from philosophers and effective altruism"
  • Tobias Leenaert's talks on cognitive dissonance and behaviour change driving value change
  • Sentience Institute's "Summary of Evidence for Foundational Questions in Effective Animal Advocacy"
  • Toby Ord's "the moral imperative towards cost-effectiveness"

So it should hopefully be possible to significantly improve the handbook for an edition 2.1, to come out soon :)

** (I think Jess did a great job writing these, but they were originally meant for a different context)

Comment author: Maxdalton 07 May 2018 01:27:56PM 4 points [-]

Thanks Tom. I've discussed the reasoning for including three articles on AI a bit on Facebook. To quote from that:

"I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare [or, I think e.g. biosecurity and other long-term focused causes]. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less."

Thanks for the suggestions of alternatives to the global poverty and animal welfare articles. I think you may well be right that we should change those. This is another mistake that I made. The content for the EA Handbook grew out of a sequence of content on effectivealtruism.org. As a consequence, it included only content that we had produced (or that had been produced by others at our events). At the point when we shifted to a pdf/ebook format, I should have reconsidered the selection of articles, which would have given us the possibility of including the excellent content that you mention. I hope that changing those articles will also reduce the impression that AI follows obviously from a long-term future focus. I'm sorry for making this mistake.

Comment author: Darius_Meissner 03 May 2018 09:41:42AM *  8 points [-]

Now that a new version of the handbook is out, could you update the 'More on Effective Altruism' link? It is quite prominent in the 'Getting Started' navigation panel on the right-hand side of the EA Forum.

Comment author: Maxdalton 07 May 2018 08:35:06AM 0 points [-]

Good idea. I'll do this when and if there is more consensus that people want to promote this content over the old.

Comment author: Peter_Hurford  (EA Profile) 04 May 2018 01:40:32AM *  20 points [-]

I find it so interesting that people on the EA Facebook page have been a lot more generally critical about the content than people here on the EA Forum -- here it's all just typos and formatting issues.

I'll admit that I was one of the people who saw this here on the EA Forum first and was disappointed, but chose not to say anything out of a desire to not rock the boat. But now that I see others are concerned, I will echo my concerns too and magnify them here -- I don't feel like this handbook represents EA as I understand it.

By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity. I'd find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.

I feel like I get it. I recognize that CEA and 80K have a right to have strong opinions about cause prioritization. I also recognize that they've worked hard to become such a strong central pillar of EA as they have. I also recognize that a lot of people that CEA and 80K are familiar with agree with them. But now I can't personally help but feel like CEA is using their position of relative strength to essentially dominate the conversation and claim it is the community consensus.

I agree the definition of "EA" here is itself the area of concern. It's very easy for any of us to define "EA" as we see it and naturally make claims about the preferences of the community. But this would be very clearly circular. I'd be tempted to defer to the EA Survey. AI was the top cause for only 16% of EA Survey respondents. Even among those employed full-time in a non-profit (maybe a proxy for full-time EAs), it was the top priority of 11.26%, compared to 44.22% for poverty and 6.46% for animal welfare. But naturally I'd be biased toward using these results, and I'm definitely sympathetic to the idea that EA should be considered more narrowly, or that we should weight the opinions of people working on it full-time more heavily. So I'm unsure. Even my opinions here are circular, by my own admission.

But I think if we're going to be claiming in a community space to talk about the community, we should be more thoughtful about whose opinions we're including and excluding. It seems pretty inexpensive to re-weight the handbook to emphasize AI risk just as much without being as clearly jarring about it (e.g., dedicating three chapters instead of one, or slanting so clearly toward AI risk throughout the "reasons not to prioritize this cause" sections).

Based on this, and the general sentiment, I'd echo Scott Weather's comment on the Facebook group that it’s pretty disingenuous to represent CEA’s views as the views of the entire community writ large, however you want to define that. I agree I would have preferred it called “CEA’s Guide to Effective Altruism” or something similar.

Comment author: Maxdalton 04 May 2018 05:02:53PM 6 points [-]

(Copying across some comments I made on Facebook which are relevant to this.)

Thanks for the passionate feedback everyone. Whilst I don’t agree with all of the comments, I’m sorry for the mistakes I made. Since some of the comments above make similar comments, I’ll try to give general replies in some main-thread comments. I’ll also be reaching out to some of the people in the thread above to try to work out the best way forward.

My understanding is that the main worry that people have is about calling it the Effective Altruism Handbook vs. CEA’s Guide to Effective Altruism or similar. For the reasons given in my reply to Scott above, I think that calling it the EA Handbook is not a significant change from before: unless we ask Ryan to take down the old handbook, then whatever happens, there will be a CEA-selected resource called the EA Handbook. For reasons given above and below, I think that the new version of the Handbook is better than the old. I think that there is some value in explicitly replacing the old version for this reason, and since “EA Handbook” is a cleaner name. However, I do also get people’s worries about this being taken to represent the EA community as a whole. For that reason, I will make sure that the title page and introduction make clear that this is a project of CEA, and I will make clear in the introduction that others in the community would have selected different essays.

My preferred approach would then be to engage with people who have expressed concern, and see if there are changes we can make that alleviate their concerns (such as those we already plan to make based on Scott’s comment). If it appears that we can alleviate most of those concerns whilst retaining the value of the Handbook from CEA’s perspective, it might be best to call it the Centre for Effective Altruism’s EA Handbook. Otherwise, we would rebrand. I’d be interested to hear in comments whether there are specific changes (articles to add/take away/design things) that would reassure you about this being called the EA Handbook.

In this comment I’ll reply to some of the more object-level criticisms. I want to apologize for how this seemed to others, but also give a clearer sense of our intentions. I think that it might seem that CEA has tried merely to push AI safety as the only thing to work on. We don’t think that, and that wasn’t our intention. Obviously, poorly realized intentions are still a problem, but I want to reassure people about CEA’s approach to these issues.

First, regarding there not being enough discussion of portfolios/comparative advantage: this is mentioned in two of the articles (“Prospecting for Gold” and “What Does (and Doesn't) AI Mean for Effective Altruism?”). However, I think that we could have emphasised this more, and I will see if it’s possible to include a full article on coordination and comparative advantage.

Second, I’d like to apologise for the way the animal and global health articles came across. Those articles were commissioned at the same time as the long-term future article, and they share a common structure: What’s the case for this cause? What are some common concerns about that cause? Why might you choose not to support this cause? The intention was to show how many assumptions underlie a decision to focus on any cause, and to map out some of the debate between the different cause areas, rather than to illicitly push the long-term future. It looks like this didn’t come across, sorry. We didn’t initially commission sub-cause profiles on government, AI and biosecurity, which explains why those more specific articles follow a different structure (mostly talks given at EA Global).

Third, I want to explain some of the reasoning behind including several articles on AI. AI risk is a more unusual area, which is more susceptible to misinterpretation than global health or animal welfare. Partly for this reason, we thought that it was sensible to include several articles on this topic, with the intention that this would provide more needed background and convey more of the nuance of the idea. I will talk with some of the commenters above to discuss if it makes sense to do some sort of merge so that AI dominates the contents page less.

Comment author: ThomasSittler 03 May 2018 08:01:02PM 1 point [-]

I'm a bit confused by "There are many other promising areas to work on, and it is difficult to work out which is most promising" as the section heading for "promising causes". It sounds like you're contrasting affecting the long-term future with other promising causes, but the section "promising causes" contains many examples of affecting the long-term future.

Comment author: Maxdalton 04 May 2018 04:52:25PM 0 points [-]

Thanks, that's a mistake which we'll fix.

Comment author: konrad 02 May 2018 08:22:39AM 7 points [-]

Awesome, thanks a lot for this work!

From what I understood when talking to CEA staff, this is also thought to replace handing out copies of Doing Good Better, yes? If so, I would emphasise this more explicitly, too.

Comment author: Maxdalton 03 May 2018 02:17:17PM 1 point [-]

Doing Good Better is still a useful introduction to EA, and it's possible to distribute physical copies of the book, so that will sometimes be more useful. The EA Handbook might work better as a more advanced introduction, or in online circumstances (see also some of the other comment threads).

Comment author: JoshP 03 May 2018 09:59:02AM 10 points [-]

I just spent a very exciting hour going through every link (yes, I clicked all of them) in the handbook, and I think I have a definitive list of mistakes in the links (if there are others, may they remain mistakes ever more :P ):

  • p.47, Engines of Creation
  • p.77, expected value link
  • p.77, Risk aversion and rationality (use http://fitelson.org/seminar/buchak2.pdf instead)
  • p.80, scope insensitivity
  • p.80, Luke Muehlhauser has commented
  • p.127, “our profile on the long-run future”
  • Footnote 43, p.137, link 2, related to diarrhoea
  • p.142, “animal welfare profile”
  • p.144, systemic change profile link works, but looks slightly unprofessional
  • You could do with a link to the 80,000 Hours Podcast and the Doing Good Better podcast on p.166

Comment author: Maxdalton 03 May 2018 12:41:30PM 1 point [-]

Thanks so much for this!

Comment author: adamaero  (EA Profile) 02 May 2018 07:14:08PM *  2 points [-]

Minor Critique

On page 140 of the handbook, under "Does foreign aid really work?", Moyo's Dead Aid is mentioned. However, she is strictly speaking about government aid: "But this book is not concerned with emergency and charity based aid." (End of page 7, Dead Aid.) She distinguishes three kinds of aid:

(1) humanitarian or emergency ~ mobilized and dispensed in response to catastrophes and calamities

(2) charity-based ~ disbursed by NGOs to institutions or people

(3) systematic: "aid payments made directly to governments either though government-to-government transfers [bilateral aid] or transferred via institutions such as the World Bank (known as multilateral aid)."

Therefore, since EA is about charity-based aid, and Moyo is strictly discussing government aid, I do not think it is relevant to mention Dead Aid.


Aside: total US government foreign aid is roughly $4,150 billion × 0.7% ≈ $30 billion, whereas foundations and individuals in the US gave $373.25 billion, of which $265 billion came from individuals alone.

http://www.pbs.org/development/2016/06/14/giving-usa-2016-released-today
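A quick sanity check of those figures (the $4,150B base and the 0.7% share are taken from the comment above and not independently verified; all amounts are in billions of US dollars):

```python
# Back-of-the-envelope check of the aid figures quoted above.
# Assumptions (from the comment, not independently verified):
#   - a government spending base of $4,150B with a 0.7% foreign-aid share
#   - $373.25B given by US foundations & individuals, $265B by individuals alone

government_base = 4150.0                      # billions of USD (assumed base)
gov_foreign_aid = government_base * 0.007     # 0.7% share
print(f"US government foreign aid: ~${gov_foreign_aid:.2f}B")  # ~$29.05B, "almost 30 billion"

private_total = 373.25                        # foundations & individuals
individuals = 265.0                           # individuals alone
print(f"Individuals' share of private giving: {individuals / private_total:.0%}")
```

So private giving dwarfs the government figure by more than an order of magnitude, which is the point of the aside.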

Comment author: Maxdalton 03 May 2018 08:55:24AM 2 points [-]

Thank you for pointing this out! I'll remove that reference.
