
CEA just released the first section of the third edition of the EA Handbook

We’d love to get your feedback on the initial version of this material, since we plan to make edits and updates over time. This will also help us decide whether and how to publish additional Handbook material in the future.

If you want to help, click the link above to start reading and use this form to provide feedback.


The project

The EA Handbook is meant as an introduction to effective altruism for people who are newer to the movement. It can’t be comprehensive, but we aim to describe the core ideas of EA accurately and fairly.

This is the third edition of the Handbook, but it looks quite a bit different from the previous edition:

  • It's being hosted as a series of EA Forum posts, rather than as a PDF or as a series of articles that don't allow comments.
  • It’s likely to end up being longer, covering more ground, and including a wider range of authors.

So far, I’ve produced one of what I hope will be several sections of the Handbook. The topic is “Motivation”: What are the major ideas and principles of effective altruism, and how do they inspire people to take action? (You could also think of this as a general introduction to EA.)

If this material is received well enough, I’ll keep releasing additional material on a variety of topics, following a similar format. If people aren’t satisfied with the content, style, or format, I may switch things up in the future.

Potential issues

Producing a single “introduction to effective altruism” is difficult for many reasons. These include:

  • The sizable collection of excellent writing about EA that already exists. Any reasonably sized introduction will be forced to exclude much of this.
  • The wide range of people who encounter EA and want to learn more. No single introduction will appeal equally to each person who reads it.
  • The wide range of viewpoints that exist within EA. Any introduction at all is likely to lean closer to some views than others, even if the author(s) don’t intend for this to happen.
  • The passage of time. EA is always growing and changing, and any static introduction will become outdated with distressing speed. (Though growth and intellectual progress are good things, on the whole!)

As the coordinator of the project, here’s how I’ve tried to account for these issues:

  • I asked the community to suggest material for inclusion, in this Forum post and in lots of individual conversations and messages. I chose a mix of the most popular suggestions and those that have stood the test of time in other ways (e.g. having been cited in many newer articles).
  • I’m testing the Handbook with a wide range of people (everyone reading this, plus many people and EA groups that I’ve reached out to individually).
  • I’ve planned for the Handbook to be a living document that grows and changes as the movement does. I intend to regularly check in on the material and make sure it still fits the spirit of the movement (and that any important numbers aren’t too outdated). And because the Handbook is hosted on the Forum, I expect to get lots of questions, corrections, and suggestions as people read it over the years.

The simplest way to help

Even if you don’t plan to suggest improvements, I’d love to get a basic sense of how well the Handbook is working. 

If you want to be both helpful and efficient, you can just answer the first question in the "Detailed feedback" section of the feedback form (the 0-10 scale). That will help me judge how to move forward with the project. If you read at least a few of the articles, please try to fill it out!

How to improve the Handbook

I don’t think I’ve produced anywhere near the optimal introduction to EA; what I have now can be substantially improved. 

And I hope that you — whether you’re new to EA or very experienced — can help. 

Right now, my plan is that the Forum will host the “EA Handbook” permanently. However, the collection of material that makes up the Handbook will always be subject to change. Posts might be added or removed. Explanatory footnotes might be added (e.g. to provide recent data or link to later material from the same author). As people ask questions and point out weaknesses in my own contributions, I will edit and improve them.

What this means: If you suggest a change to the Handbook, it could stick around for a long time and be seen by hundreds or thousands of people.

Examples of changes you could suggest:

  • Material to excerpt from (or include in full) for a given topic
  • Improvements to the structure, prose, or framing of the pieces I wrote
  • Changes to the overarching structure of the Handbook (e.g. the order in which topics are presented)
  • Anything else that comes to mind! Please don’t be shy; I’m grateful for your bad suggestions as well as your good ones.

Ways to provide feedback

I hope that you enjoy the Handbook, whether you’re reading every article for the first time or just skimming through some old favorites.

But I hope you don’t enjoy it too much, because I know it can be better and I want to hear your constructive criticism. Thanks for reading!

Comments

I thought a bit about essays that were key to my becoming more competent and more able to take action to improve the world, essays that connected to what I cared about. I'll list some of them and the ways they helped me. (I filled out the rest of the feedback form too.)

---

Feeling Moral by Eliezer Yudkowsky. Showed me an example where my deontological intuitions were untrustworthy and where simple math was actually effective.

Purchase Fuzzies and Utilons Separately by Eliezer. Showed me where attempts to do good can get very confused, and how simply looking at outcomes can avoid a lot of the problems that come from reasoning by association or by what's 'considered a good idea'.

Ends Don’t Justify Means (Among Humans) by Eliezer. Helped me understand a very clear constraint on naive utilitarian reasoning, which helped me avoid worlds where I would naively trust the math in all situations.

Dive In by Nate Soares. Helped point my flailing attempts to improve and do better in a direction where I would actually get feedback. Only by repeatedly delivering a product, even if you change your mind ten times a day about what you should be doing and whether it's valuable, can you build up real empirical data about what you can accomplish and what's valuable. Encouraged me to follow through on projects a whole lot more.

Beyond the Reach of God by Eliezer. This helped ground me: it helped me point at what it's like to have false hope and false trust, and to recognise them more clearly in myself. I think it's accurate to say that looking directly and with precision at the current state of the world involves trusting the world a lot less than most people do, and a lot less than establishment narratives would suggest (Steven Pinker's "everything is getting better and will continue to get better" isn't the right way to conceptualise our position in history; there's much more risk involved than that). A lot of important improvements in my ability to improve the world have involved realising I had unfounded trust in people or institutions, and that unless I took responsibility for things myself, I couldn't trust that they would work out well by default. This essay was one of the first places where I clearly conceptualised what false hope feels like.

Money: The Unit of Caring by Eliezer. Does similar things to the Fuzzies and Utilons post, but is a bit more practical. And Kelsey named her whole Tumblr after it, which I guess is a fair endorsement.

Desperation by Nate. This does similar things to Beyond the Reach of God, but in a more hopeful way (although it's called 'Desperation', so how hopeful can it be?). It helped me conceptualise what it looks like to actually try to do something difficult that people don't understand or think looks funny, and to notice whether or not it was something I had been doing. It also helped me notice (more cynically) that a lot of people weren't doing things that looked like this, and to stop trying to emulate those kinds of people so much.

Scope Insensitivity by Eliezer. Does similar things to Feeling Moral, but is a bit simpler and more concrete, and it tries to be actionable.

---

Some that I came up with that you already included:

  • On Caring
  • 500 Million, But Not A Single One More

It's odd that you didn't include Scott Alexander's classic on Efficient Charity or Eliezer's Scope Insensitivity, although Nate's "On Caring" may be sufficient to get the point about scope and triage across.

Thanks for posting these! I hadn't read a few of them before, and I especially liked "Desperation".

That makes me happy to hear :)

Thanks for this feedback! 

I considered Fuzzies/Utilons and The Unit of Caring, but it was hard to find excerpts that didn't use obfuscating jargon or dive off into tangents; trying to work around those bits hurt the flow of the bits I most wanted to use. But both were important for my own EA journey as well, and I'll keep thinking about ways to fit in some of those concepts (maybe through other pieces that reference them with fewer tangents).

As for "Beyond the Reach of God," I'd prefer to avoid pieces with a heavy atheist slant, given that one goal is for the series to feel welcoming to people from a lot of different backgrounds.

Scott's piece was part of the second edition of the Handbook, and I agree that it's a classic; I'd like to try working it into future material (right now, my best guess is that the next set of articles will focus on cause prioritization, and Scott's piece fits in well there). As an addition to this section, I think it makes a lot of the same points as the Singer and Soares pieces, though it might be better than one or the other of those.

As for "Beyond the Reach of God," I'd prefer to avoid pieces with a heavy atheist slant, given that one goal is for the series to feel welcoming to people from a lot of different backgrounds.

I think that if the essay said things like "Religious people are stupid, isn't it obvious" and attempted to socially shame religious people, then I'd be pretty open to suggesting edits to such parts.

But, as in my other comment, I would like to respect religious people enough to trust that they can deal with reading about a godless universe and understand the points well, even if they would use other examples themselves.

I also think many religious people agree that God will not stop the world from becoming sufficiently evil, in which case they'll be perfectly able to appreciate the finer points of the post even though it's written in a way that misunderstands their relationship to their religion.

I think either way, if they're going to engage seriously with intellectual thought in the modern world, they need to take responsibility and learn to engage with writing about the world that doesn't assume there's an interventionist aligned superintelligence (my terminology; I don't mean anything by it). I don't think it's right to walk on eggshells around religious people, and I don't think it makes sense to throw out powerful ideas and pieces of strongly emotional/artistic work just to make sure such people don't need to learn to engage with art and ideas that don't share their specific assumptions about the world.

Scott's piece was part of the second edition of the Handbook, and I agree that it's a classic; I'd like to try working it into future material (right now, my best guess is that the next set of articles will focus on cause prioritization, and Scott's piece fits in well there).

Checks out, that makes sense.

I think either way, if they're going to engage seriously with intellectual thought in the modern world, they need to take responsibility and learn to engage with writing about the world that doesn't assume there's an interventionist aligned superintelligence.

If there were no great essays with similar themes aside from Eliezer's, I'd be much more inclined to include it in a series (probably a series explicitly focused on X-risk, as the current material really doesn't get into that, though perhaps it should). But I think that between Ord, Bostrom, and others, I'm likely to find a piece that makes similar compelling points about extinction risk without the surrounding Eliezerisms.

Sometimes, Eliezerisms are great; I enjoy almost everything he's ever written. But I think we'd both agree that his writing style is a miss for a good number of people, including many who have made great contributions to the EA movement. Perhaps the chance of catching people especially well makes his essays the highest-EV option, but there are a lot of other great writers who have tackled these topics.

(There's also the trickiness of having CEA's name attached to this, which means that, however many disclaimers we may attach, there will be readers who assume it's an important part of EA to be atheist, or to support cryonics, etc.)

To clarify, I wouldn't expect an essay like this to turn off most religious readers, or even to completely alienate any one person; it's just got a few slings and arrows that I think can be avoided without compromising on quality.

Of course, there are many bits of Eliezer that I'd be glad to excerpt, including from this essay; if the excerpt sections in this series get more material added to them, I might be interested in something like this:

What can a twelfth-century peasant do to save themselves from annihilation?  Nothing.  Nature's little challenges aren't always fair.  When you run into a challenge that's too difficult, you suffer the penalty; when you run into a lethal penalty, you die.  That's how it is for people, and it isn't any different for planets.  Someone who wants to dance the deadly dance with Nature, does need to understand what they're up against:  Absolute, utter, exceptionless neutrality.

If there were no great essays with similar themes aside from Eliezer's, I'd be much more inclined to include it in a series (probably a series explicitly focused on X-risk, as the current material really doesn't get into that, though perhaps it should). But I think that between Ord, Bostrom, and others, I'm likely to find a piece that makes similar compelling points about extinction risk without the surrounding Eliezerisms.

I see. As I hear you, it's not that we must go overboard on avoiding atheism, but that it's a small-to-medium-sized feather on the scales, one that is ultimately decision-relevant because there is no appropriately strong feather arguing that this essay deserves the space in this list.

From my vantage point, there aren't essays in this series that deal with giving up hope as directly as this one. I think Singer's piece and the Max Roser piece both try to look at awful parts of the world and argue that you should do more to make progress happen faster. Many essays, like the quote from Holly about being in triage, talk about the current rate of deaths and how to reduce that number. But I think none engage so directly with the possibility of failure, of progress stopping and never starting again. I think existential risk is about this, but you don't even need to get to a discussion of things like maxipok and astronomical waste to bring failure onto the table in a visceral and direct way.

*nods* I'll respond to the specific things you said about the different essays. I split this into two comments for length.

I considered Fuzzies/Utilons and The Unit of Caring, but it was hard to find excerpts that didn't use obfuscating jargon or dive off into tangents

I think there are a few pieces of jargon that you could change (e.g. Unit of Caring talks about 'akrasia', which isn't relevant). I imagine it'd be okay to request a few small edits to the essay.

But I think that overall the posts talk the way experts would talk in an interview: directly and substantively. I don't think you should be afraid to show people a high-level discussion just because they don't already know all of the details being discussed. It's okay for there to be details that a reader has only a vague grasp of, if the overall points are simple and clear. I think this is good; it helps readers see that there are levels above to reach.

It's like how EA student group events would always be "Intro to EA". Instead, I think it's really valuable and exciting to hear how Daniel Kahneman thinks about the human mind, or how Richard Feynman thinks about physics, or how Peter Thiel thinks about startups, even if you don't fully understand all the terms they use, like "System 1 / System 2" or "conservation law" or "derivatives market". I would give the Feynman lectures to a young teenager who doesn't know all of physics, because Feynman speaks in a way that gets to the essential life of physics so brilliantly, and I think that giving them to a kid who is destined to become a physicist would leave the kid in wonder and wanting to learn more.

Overall, I think the desire to remove challenging or nuanced discussion is a push in the direction of saying boring things, or of not saying anything substantive at all because it might be a turn-off to some people. I agree that Paul Graham's essays are always written in simple language, but I don't think that scientists and intellectuals should aim for that all the time when talking to non-specialists. Many of the greatest pieces of writing I know use very technical examples or analogies, and that's necessary to make their points.

See the graph about dating strategies here. The goal is to get strong hits that make a person say "This is one of the most important things I've ever read", not to make sure that there are no difficult sentences that might be confusing. People will get through the hard bits if there are true gems there, and I think the above essays are quite exciting and deeply change the way a lot of people think.

I also think the essays are exciting and have a good track record of convincing people. And my goal with the Handbook isn't to avoid jargon altogether. To some extent, though, I'm trying to pack a lot of points into a smallish space, which isn't how Eliezer's style typically works. If the essay made the same points at half the length, I think it would be a better candidate.

Maybe I'll try to produce an edited version at some point (with fewer digressions, and e.g. a footnote in "Fuzzies and Utilons" noting that ego depletion failed to replicate). But the more edits a piece needs, the longer I expect approval to take, especially from someone who doesn't have much time to spare. That was another trade-off I had to consider when selecting pieces. (I don't think anything in the current series had more than a paragraph removed, unless it was printed as an excerpt.)

I don't want to push you to spend a lot of time on this, but if you're game, would you want to suggest an excerpt from either piece (say 400 words at most) that you think gets the central point across without forcing the reader to read the whole essay? This won't be necessary for all readers, but it's something I've been aiming for.


I do expect that further material for this project will contain a lot more jargon and complexity, because it won't be explicitly pitched as an introduction to the basic concepts of EA (and you really can't get far in e.g. global development without digging into economics, or X-risk without getting into topics like "corrigibility").


A note on the Thiel point: As far as I recall, his thinking on startups became a popular phenomenon only after Blake Masters published notes on his class, though I don't know whether the notes did much to make Thiel's thinking more clear (maybe they were just the first widely-available source of that thinking).

suggest an excerpt from either piece (say 400 words at most) that you think gets the central point across without forcing the reader to read the whole essay?

Sure thing. The M:UoC post is more like a meditation on a theme, very well written but less of a key insight than an impression of a harsh truth, so it's hard to extract a core argument. I'd suggest the following from the Fuzzies/Utilons post instead. (It has about a paragraph cut in the middle, symbolised by the ellipsis.)

---

If I had to give advice to some new-minted billionaire entering the realm of charity, my advice would go something like this:

  • To purchase warm fuzzies, find some hard-working but poverty-stricken woman who's about to drop out of state college after her husband's hours were cut back, and personally, but anonymously, give her a cashier's check for $10,000.  Repeat as desired.
  • To purchase status among your friends, donate $100,000 to the current sexiest X-Prize, or whatever other charity seems to offer the most stylishness for the least price.  Make a big deal out of it, show up for their press events, and brag about it for the next five years.
  • Then—with absolute cold-blooded calculation—without scope insensitivity or ambiguity aversion—without concern for status or warm fuzzies—figuring out some common scheme for converting outcomes to utilons, and trying to express uncertainty in percentage probabilities—find the charity that offers the greatest expected utilons per dollar.  Donate up to however much money you wanted to give to charity, until their marginal efficiency drops below that of the next charity on the list.

But the main lesson is that all three of these things—warm fuzzies, status, and expected utilons—can be bought far more efficiently when you buy separately, optimizing for only one thing at a time... Of course, if you're not a millionaire or even a billionaire—then you can't be quite as efficient about things, can't so easily purchase in bulk.  But I would still say—for warm fuzzies, find a relatively cheap charity with bright, vivid, ideally in-person and direct beneficiaries.  Volunteer at a soup kitchen.  Or just get your warm fuzzies from holding open doors for little old ladies.  Let that be validated by your other efforts to purchase utilons, but don't confuse it with purchasing utilons.  Status is probably cheaper to purchase by buying nice clothes.

And when it comes to purchasing expected utilons—then, of course, shut up and multiply.

Lightly edited from an email chain I had with Aaron

The content and the high-level format of it being "hosted" on the Forum look super promising!

It also made me wonder whether the scope of the project should increase:

Does it make sense to translate everything in the 3rd Edition of the EA Handbook? What sorts of considerations arise here?

According to your post, this could be a "living document that grows and changes as the movement does", which means that most, if not all, of the material in there is by nature high-fidelity. I believe that with this new edition, a lot of the traditional fears about irreversible lock-in, translation difficulties, and out-of-dateness don't apply, and most countries could benefit a lot from an ongoing translation effort.

The way I intend to do this for EA Spain is to let relatively new-but-engaged members take on a "miniproject" of translating some of these texts for EA Spain's website, so it serves as both an engagement tool and an educational tool.
