Comment author: JanBrauner 13 March 2018 09:02:02AM 4 points [-]

You think aggregating welfare between individuals is a flawed approach, such that you are indifferent between alleviating an equal amount of suffering for 1 person or for each of a million people.

You conclude that these values recommend giving to charities that directly address the sources of the most intense individual suffering, and that between them, one should not choose by cost-effectiveness but randomly. One should not give to, say, GiveDirectly, which does not directly tackle the most intense suffering.

This conclusion seems correct only for clear-cut textbook examples. In the real world, I think, your values fail to recommend anything. You can never know for certain how many people you are going to help. Everything is probabilities and expected value:

Say, for the sake of the argument, you think that severe depression is the cause of the most intense individual suffering. You could give your $10,000 to a mental health charity, and they will in expectation prevent 100 people (a made-up number) from getting severe depression.

However, if you give $10,000 to GiveDirectly, that will certainly affect the recipients strongly, and maybe in expectation prevent 0.1 cases of severe depression.

Actually, if you take your $10,000 and buy that sweet, sweet Rolex with it, there is a tiny chance that this will prevent the jewelry store owner from going bankrupt, being dumped by their partner and, well, developing severe depression. $10,000 to the jeweller prevents an expected 0.0001 cases of severe depression.

So, given your values, you should be indifferent between those three options.

Even worse, all three actions also harbour tiny chances of causing severe depression. Even the mental health charity, for every 100 patients they prevent from developing depression, will maybe cause depression in 1 patient (because interventions sometimes have adverse effects, ...). So if you decide between burning the money or giving it to the mental health charity, you decide between preventing 100 episodes of depression (by funding the charity) or 1 (by sparing the patient who would suffer an adverse effect). That is a decision that, given your stated values, you are indifferent between.
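To make this concrete, here is a minimal sketch in Python (using the made-up numbers from above) of how an expected-value view ranks the three options, and why a strictly non-aggregative view ends up indifferent between them:

    # Made-up expected numbers of severe-depression cases prevented
    # per $10,000, taken from the example above.
    options = {
        "mental health charity": 100,
        "GiveDirectly": 0.1,
        "Rolex purchase": 0.0001,
    }

    # Expected-value view: rank the options by expected cases prevented.
    for name, cases in sorted(options.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {cases} expected cases prevented")

    # Strictly non-aggregative view: each option offers *some* nonzero
    # chance of preventing one person's most intense suffering, so none
    # can be ranked above the others.
    print("Indifferent between all three:", all(c > 0 for c in options.values()))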

Further arguments why approaches that try to avoid interpersonal welfare aggregation fail in the real world can be found here:

Comment author: Jan_Kulveit 13 March 2018 08:15:29AM *  4 points [-]

I just published a short history of creating the effective altruism movement in the Czech Republic, and I think it is highly relevant to this discussion.

Compared to Ben's conclusions, I would use it as a data point showing

  • it can be done

  • it may not be worth delaying

  • there are intermediate forms of communication in between "mass outreach" and "person-to-person outreach"

  • you should consider a more complex model of communication than just personal vs. mass media: specifically, a viable model in a new country could be something like "a very short message in mass media, a few articles translated into the national language to lower the barrier and point in the right direction, and a much larger amount transmitted via conferences & similar"

Putting too much weight on "person to person" interaction runs into the problem that you are less likely to find the right persons (consider how such connections may be created).

Btw, it seems to me the ways e.g. 80,000 Hours and CEA work are inadequate for creating the required personal connections in new countries, so it's questionable whether it makes sense to focus on it.

(I completely agree China is extremely difficult, but I don't think China should be considered a typical example - considering mentality, it's possibly one of the most remote countries from a European POV)

Comment author: RandomEA 13 March 2018 03:52:45AM *  2 points [-]

I used to think that a large benefit to a single person was always more important than a smaller benefit to multiple people (no matter how many people experienced the smaller benefit). That's why I wrote this post asking others for counterarguments. After reading the comments on that post (one of which linked to this article), I became persuaded that I was wrong.

Here's an additional counterargument. Let's say that I have two choices:

A. I can save 1 person from a disease that decreases her quality of life by 95%; or

B. I can save 5 people from a disease that decreases their quality of life by 90%.

My intuition is that it is better to save the 5. Now let's say I get presented with a second dilemma:

B. I can save 5 people from a disease that decreases their quality of life by 90%; or

C. I can save 25 people from a disease that decreases their quality of life by 85%.

My intuition is that it is better to save the 25. Now let's say I get presented with a third dilemma.

C. I can save 25 people from a disease that decreases their quality of life by 85%; or

D. I can save 125 people from a disease that decreases their quality of life by 80%.

My intuition is that it is better to save the 125. This cycle continues until the seventeenth dilemma:

Q. I can save 152,587,890,625 people from a disease that decreases their quality of life by 15%; or

R. I can save 762,939,453,125 people from a disease that decreases their quality of life by 10%.

My intuition is that it is better to save the 762,939,453,125.

Since I prefer R over Q and Q over P and P over O and so on and so forth all the way through preferring C over B and B over A, it follows by transitivity that I should prefer R over A.
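For what it's worth, the chain follows a simple pattern: each dilemma multiplies the number of people by 5 and shaves 5 percentage points off the severity. A few lines of Python (a sketch, reproducing only the numbers stated above) generate the whole sequence:

    import string

    # Option k saves 5**k people from a disease that decreases their
    # quality of life by (95 - 5k) percent; k = 0 is A, k = 17 is R.
    for k in range(18):
        label = string.ascii_uppercase[k]
        print(f"{label}: save {5 ** k:,} people from a {95 - 5 * k}% quality-of-life decrease")

    # The endpoints match the figures quoted above.
    assert 5 ** 16 == 152_587_890_625  # option Q
    assert 5 ** 17 == 762_939_453_125  # option R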

In other words, our intuition that providing a large benefit to one person is less important than providing a slightly smaller benefit to several people conflicts with our intuition that providing a very large benefit to one person is more important than providing a very small benefit to an extremely large number of people. Given scope insensitivity, I think the former intuition is probably more reliable.

One last point. I think that EA has a role even under your worldview. It can help identify the worst possible forms of suffering (such as being boiled alive at a slaughterhouse) and the most effective ways to prevent that suffering.

Comment author: Michael_S 13 March 2018 03:30:33AM 7 points [-]

Choice situation 3: We can either save Al, and four others each from a minor headache or Emma from one major headache. Here, I assume you would say that we should save Emma from the major headache

I think you're making a mistaken assumption here about your readers. Conditional on agreeing that 5 minor headaches in one person are worse than 1 major headache in one person, I would feel exactly the same if the headaches were spread out over 5 people. I expect the majority of EAs would as well.

Comment author: DavidMoss 12 March 2018 09:13:30PM *  1 point [-]

I imagine more or less anything which expresses conflictedness about taking the 'utilitarian' decision and/or expresses feeling the pull of the contrary deontological norm would fit the bill for what Everett is saying here. That said, I'm not convinced that Everett (2016) is really getting at reactions to "consequentialism" (see here: 1, 2).

I think that this paper by Uhlmann et al. does show that people judge negatively those who take utilitarian decisions, though, even when they judge that the utilitarian act was the right one to take. Expressing conflictedness about the utilitarian decision may therefore be a double-edged sword. I think it may well offset negative character evaluations of the person taking the utilitarian decision, but plausibly it may also reduce any credence people attach to the utilitarian act being the right one to take.

My collaborators and I did some work relevant to this, on the negative evaluation of people who make their donation decisions in a deliberative rather than explicitly empathic way. The most relevant of our experiments for this looked at the evaluation of people who both deliberated about the cost-effectiveness of the donation and expressed empathy towards the recipient of the donation simultaneously. The empathy+deliberation condition was close to the empathy condition in moral evaluation (see figure 2) and closer to the deliberation condition in evaluation of reasonableness.

Comment author: Tee 12 March 2018 07:54:59PM *  4 points [-]

Ben West asked this question in the EA Facebook group late last year, and I believe EA Funds has updated since then:

It's not clear what the optimal amount of funding for resurrecting LW should be, but according to the EA survey (run by Rethink), LW had been a top source for introducing people to EA until recently:

Qualifying this by clarifying that I'm the ED of Development for Rethink Charity – I would say the lineup of projects offered by Rethink (SHIC, LEAN, RC Forward, Rethink Priorities, and the EA Survey) should be among the most competitive funding options for community building, especially considering our reach and impact on a comparatively low budget:

Comment author: Jeffhe  (EA Profile) 12 March 2018 07:41:34PM *  4 points [-]

Hi Risto,

You've done such a thorough job, well done!

One tip I would add under "How to read philosophy" is to read on when something in the book isn't making sense, instead of spending a lot of time trying to make sense of it on the spot. The reason is that later passages often help to clarify what the writer meant by earlier passages, which can be hopelessly hard to understand or make precise without having read those later passages.

P.S. I'm new to this forum and would appreciate it if I could get some likes so that I could make a post! Thanks.

Comment author: adamaero  (EA Profile) 12 March 2018 07:06:59PM 0 points [-]

I'm glad you said so. From now on I'll use "well-meaning"/"good intentions" and "evidence-based good" instead.

Comment author: DustinWehr 12 March 2018 06:13:32PM 1 point [-]

Good points. I don't think "(benevolence)"/"(beneficence)" adds anything, either. Beneficence is effectively EA lingo. You're not going to draw people in by teaching them lingo. Do that a little further into on-boarding.

Comment author: Nekoinentr 12 March 2018 05:31:45PM 0 points [-]

It's no big deal, but your formatting is a little different from the normal forum formatting - it might be worth requesting that .impact provide a button to clear extraneous formatting, via the issues link at

Comment author: ThomasSittler 12 March 2018 02:34:50PM 2 points [-]

The article might benefit from being more accurately titled "heuristics for individual donors in AI safety".

Comment author: Risto_Uuk 12 March 2018 11:00:58AM 1 point [-]

Do you offer any recommendations for communicating utilitarian ideas based on Everett's research or someone else's?

For example, in Everett's 2016 paper the following is said:

"When communicating that a consequentialist judgment was made with difficulty, negativity toward agents who made these judgments was reduced. And when a harmful action either did not blatantly violate implicit social contracts, or actually served to honor them, there was no preference for a deontologist over a consequentialist."

Comment author: Jeffhe  (EA Profile) 12 March 2018 06:06:04AM *  2 points [-]

You write, "Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree)."

One thing I am sure of about effective altruism is that it endorses helping the greater number, all other things being equal (by which I here mean only that the quality of the pain is equal, for simplicity's sake). So, for example, if $10 can be used to either save persons A and B each from some pain or save C from a qualitatively identical pain, EA would say that it is morally better to save the two over the one.

Now, this in itself does not mean that effective altruism believes that it makes sense to

  1. sum together certain people’s pain and to compare said sum to the sum of other people’s pain in such a way as to be able to say that one sum of pain is in some sense greater/equal to/lesser than the other, and

  2. say that the morally best action is the one that results in the least sum of pain and the greatest sum of pleasure (which is more-or-less utilitarianism)

(Note that 2. assumes the intelligibility of 1.; see below)

The reason is that there are also non-aggregative ways to justify why it is better to save the greater number, at least when all other things are equal. For a survey of such ways, see "Saving Lives, Moral Theory, and the Claims of Individuals" (Otsuka, 2006). However, I'm not aware that effective altruism justifies why it's better to save the greater number, all else equal, via these non-aggregative ways. Likely, it is purposely silent on this issue. Ben Todd (in private correspondence) informed me that "effective altruism starts from the position that it's better to help the greater number, all else equal. Justifying that premise in the first place is in the realm of moral philosophy." If that's indeed the case, we might say that all effective altruism says is that the morally better course of action is the one that helps more people, everything else being equal (e.g. when the suffering to each person involved in the choice situation is qualitatively the same), and (presumably) also sometimes even when everything isn't equal (e.g. when the suffering to each person in the bigger group might be somewhat less painful than the suffering to each person in the smaller group).

Insofar as effective altruism isn't in the business of justification, then perhaps moral theories shouldn't be mentioned at all in a presentation about effective altruism. But inevitably, people considering joining the movement are going to ask why it is better to save the greater number, all else equal (e.g. A and B instead of C), or even sometimes when all else isn't equal (e.g. one million people each from a relatively minor pain instead of one other person from a relatively greater pain). And I think effective altruists ask themselves that question too. The OP might have asked it and thought utilitarianism offers the natural justification: it is better to save A and B instead of C (and the million instead of the one) because doing so results in the least sum of pain. So, utilitarianism clearly offers a justification (though one might question whether it is an adequate justification). On the other hand, it is not clear to me at all how other moral theories propose to justify saving the greater number in these two kinds of choice situations. So it is not surprising that the OP has associated utilitarianism with effective altruism. I am sympathetic.

A bit more on utilitarianism: Roughly speaking, according to utilitarianism (or the principle of utility), among all the actions we can undertake at any given moment, the right action (i.e. the action we ought to take) is the one that results in the least sum of pain and the greatest sum of pleasure.

To figure out which action is the right action among a range of possible actions, we are to, for each possible action, add up all its resulting pleasures and pains. We are then to compare the resulting state of affairs corresponding to each action to see which resulting state of affairs contains the least sum of pain and greatest sum of pleasure. For example, suppose you can either save one million people each from a relatively minor pain or one other person from a relatively greater pain, but not both. Then you are to add up all the minor pains that would result from saving the single person, and then add up all the major pains (in this case, just 1) that would result from saving the million people, and then compare the two states of affairs to see which contains the least sum of pain.
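To illustrate the summation with a toy example (a sketch with made-up intensity numbers, since utilitarianism itself does not fix them):

    # Assign made-up intensity units to each pain.
    minor_pain = 1          # one person's relatively minor pain
    major_pain = 1_000      # the single person's relatively greater pain
    num_minor = 1_000_000

    # Total pain in each resulting state of affairs:
    pain_if_save_the_one = num_minor * minor_pain  # the million still suffer
    pain_if_save_the_million = major_pain          # the one still suffers

    # Utilitarianism picks whichever action leaves the least sum of pain;
    # here 1,000 < 1,000,000, so it says to save the million.
    assert pain_if_save_the_million < pain_if_save_the_one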

From this we can clearly see that utilitarianism assumes that it makes sense to aggregate distinct people's pains and to compare these sums in such a way as to be able to say, for example, that the sum of pain involved in a million people's minor pains is greater (in some sense) than one other person’s major pain. Of course, many philosophers have seriously questioned the intelligibility of that.

Comment author: brianwang712 12 March 2018 04:34:30AM *  2 points [-]

To add onto the "platforms matter" point, you could tell a story similar to Bostrom's (build up credibility first, then have impact later) with Max Tegmark's career. He explicitly advocates this strategy to EAs from 25:48 to 29:00 of this video:

Comment author: Arepo 12 March 2018 01:35:31AM *  2 points [-]

Great stuff! A few quibbles:

  • It feels odd to specify an exact year EA (or any movement) was 'founded'. GiveWell (surprisingly not mentioned other than as a logo on slide 6) has been around since 2007; MIRI since 2000; FHI since 2005; Giving What We Can since 2009. Some or all of these (eg GWWC) didn't exactly have a clear founding date, though, rather becoming more like their modern organisations over years. One might not consider some of them more strictly 'EA orgs' than others - but that's kind of the point.

  • I'd be wary of including 'moral offsetting' as an EA idea. It's fairly controversial, and sounds like the sort of thing that could turn people off the other ideas

  • Agree with others that overusing the word 'utilitarianism' seems unnecessary and not strictly accurate (any moral view that included an idea of aggregation is probably sufficient, which is probably all of them to some degree).

  • Slide 12 talks about suffering exclusively; without getting into whether happiness can counterweigh it, it seems like it could mention positive experiences as well

  • I'd be wary of criticising intuitive morality for not updating on moral uncertainty. The latter seems like a fringe idea that's received a lot of publicity in the EA community, but that's far from universally accepted even by eg utilitarians and EAs

  • On slide 18 it seems odd to have an 'other' category on the right, but omit it on the left with a tiny 'clothing' category. Presumably animals are used and killed in other contexts than those four, so why not just replace clothing with 'other' - which I think would make the graph clearer

  • I also find the colours on the same graph a bit too similar - my brain keeps telling me that 'farm' is the second biggest categorical recipient when I glance at it, for eg

  • I haven't read the Marino paper and now want to, 'cause it looks like it might update me against this, but provisionally: it still seems quite defensible to believe that chickens experience substantially less total valence per individual than larger animals, esp mammals, even if it's becoming rapidly less defensible to believe that they don't experience something qualitatively similar to our own phenomenal experiences. [ETA] Having now read-skimmed it, I didn't update much on the quantitative issue (though it seems fairly clear chickens have some phenomenal experience, or at least there's no defensible reason to assume they don't)

  • Slide 20 'human' should be pluralised

  • Slide 22 'important' and 'unimportant' seem like loaded terms. I would replace them with something more factual like (ideally a much less clunkily phrased) 'causes a large magnitude of suffering' and 'causes a comparatively small magnitude of suffering'

  • I don't understand the phrase 'aestivatable future light-cone'. What's aestivation got to do with the scale of the future? (I know there are proposals to shepherd matter and energy to the later stages of the universe for more efficient computing, but that seems way beyond the scope of this presentation, and presumably not what you're getting at)

  • I would change 'the species would survive' on slide 25 to 'would probably survive', and maybe caveat it further, since the relevant question for expected utility is whether we could reach interstellar technology after being set back by a global catastrophe, not whether it would immediately kill us (cf eg) - similarly I'd be less emphatic on slide 27 about the comparative magnitude of climate change vs the other events as an 'X-risk', esp where X-risk is defined as here:

  • Where did the 10^35 number for future sentient lives come from for slide 26? These numbers seem to vary wildly among futurists, but that one actually seems quite small to me. Bostrom estimates 10^38 lost just for a century's delayed colonization. Getting more wildly speculative, Isaac Arthur, my favourite futurist, estimates a galaxy of Matrioshka brains could emulate 10^44 minds - it's slightly unclear, but I think he means running them at normal human subjective speed, which would give them about 10^12 times the length of a human life between now and the end of the stelliferous era. The number of galaxies in the Laniakea supercluster is approx 10^5, so that would be 10^61 total, which we can shade by a few orders of magnitude to account for inefficiencies etc and still end up with a vastly higher number than yours. And if Arthur's claims about farming Hawking radiation and gravitational energy in the post-stellar eras are remotely plausible, then the number of sentient beings in the Black Hole era would dwarf that number again! (ok, this maybe turned into an excuse to talk about my favourite v/podcast; I sketch the arithmetic just after this list)

  • Re slide 29, I think EA has long stopped being 'mostly moral philosophers & computer scientists' if it ever strictly was, although they're obviously (very) overrepresented. To what end do you note this, though? It maybe makes more sense in the talk, but in the context of the slide, it's not clear whether it's a boast of a great status quo or a call to arms of a need for change

  • I would say EA needs more money and talent - there are still tonnes of underfunded projects!
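As promised in the future-lives bullet above, here is its back-of-the-envelope arithmetic as a quick sketch (all inputs are the speculative estimates quoted there, not established figures):

    minds_per_galaxy = 10 ** 44    # emulated minds on a galaxy of Matrioshka brains (Arthur)
    lives_per_mind = 10 ** 12      # human-length lives per mind until the stelliferous era ends
    galaxies = 10 ** 5             # approx. galaxies in the Laniakea supercluster

    total_lives = minds_per_galaxy * lives_per_mind * galaxies
    print(f"~10^{len(str(total_lives)) - 1} sentient lives")  # ~10^61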

Comment author: letianw 11 March 2018 11:28:20PM 4 points [-]

Hi Ben,

As a Chinese national currently living in the west, I think I broadly agree with your argument that "efforts to expand effective altruism into other languages should initially focus on person-to-person outreach to a small number of people with key expertise." I also appreciate your grasp of the complexity of cultural and linguistic barriers in promoting EA ideas in the Chinese context, which can often be lost on EAs who are less familiar with other cultures.

One potential objection to this is that not rushing into a massive translation effort is not the same as not attempting any translation at all. A set of core materials can still be useful if it is carefully curated by professional translators (not merely bilingual volunteers like me). Without written material, it can be difficult to make ideas stick, even among a small group of personal contacts. A counter-argument to this, however, is that the initial promising groups are very likely elite college students and urban professionals who would have no problem reading English materials. I don't have a strong opinion on this.

Another potential problem I can foresee regarding the 'personal contact' approach is that, to my knowledge, the Chinese government keeps close tabs on any recruitment activities by foreign social movements. Anecdotes from missionary friends 10 years ago suggest that their religious activities, especially when involving locals, were closely monitored by the police, kept under 20 people, and sometimes harassed. I cannot speak with any confidence that this is still the case, or whether it would be applied to EAs equally. But this is something to keep in mind when evaluating personal outreach versus media efforts.

Comment author: weeatquince  (EA Profile) 11 March 2018 11:06:31PM *  1 point [-]

This sounds like a really good project. You clearly have a decent understanding of the local political issues, a clear idea of how this project can map to other countries and prove beneficial globally, and a good understanding of how this plays a role in the wider EA community (I think it is good that this project is not branded as 'EA').

Here are a number of hopefully constructive thoughts to help you fine-tune this work. These may be things you thought about that did not make it into the post. I hope they help.

1.

As far as I can tell, the CCC seems not to care much about scenarios with a small chance of a very high impact. On the whole, the EA community does care about these scenarios. My evidence for this comes from the EA community's concern for the extreme risks of climate change and x-risks, whereas the CCC work on climate change that I have seen seems to have ignored these extreme risks. I am unsure why the discrepancy. (Many EA researchers do not use a future discount rate for utility; does the CCC?)

This could be problematic in terms of the cause prioritisation research being useful for EAs, for building a relationship with this project, and for EA advocacy work, EA funding, etc.

2.

Sometimes the most important priorities will not be the ones that the public will latch onto. It is unclear from the post:

2.1 how you intend to find a balance between delivering the messages that are most likely to create change versus saying the things you most believe to be true. And

2.2 how the advocacy part of this work might differ from work that CCC has done in the past. My understanding is that to date the CCC has mostly tried to deliver true messages to an international policy-maker audience. Your post, however, points to public sentiment as a key driving factor for change. The advocacy methods and expertise used in CCC's international work are not obviously the best methods for this work.

3.

For a prioritisation research piece like this, I could imagine the researcher might dive straight into looking at the existing issues on the political agenda and prioritising between those based on some form of social rate of return. However, I think there are a lot of very high-level questions that could be asked first, like:

  • Is it more important to prevent the government making really bad decisions in some areas, or to improve the quality of the good decisions?

  • Is it more important to improve policy, or to prevent a shift to harmful authoritarianism?

  • How important is it to set policy that future political trends will not undo?

  • How important is the acceptability of the suggested policy among policy makers and the public?

Are these covered in the research?

Also, to what extent will the research look at improving institutional decision making? To be honest, I would genuinely be surprised if the conclusion of this project was not that the most high-impact policies were those designed to improve the functioning / decision making / checks and balances of the government. If you can cut corruption and change how government works for the better, then the government will get more policies correct across the board in future. Is this your intuition too?


Finally, I would be interested to be kept up to date with this project as it progresses. Is there a good way to do this? Looking forward to hearing more.

Comment author: Jeffhe  (EA Profile) 11 March 2018 10:45:52PM *  3 points [-]

On slide 10 (EA challenge 1), I think you meant “that” rather than “than”.

Good luck! Also, I'm new to this forum and would appreciate it if I could get some likes so that I could make a post! Thanks.
