Comment author: Ben_Todd 17 July 2017 04:47:34AM 17 points [-]

Hey Kaj,

I agree with a lot of these points. I just want to throw some counter-points out there for consideration. I'm not necessarily endorsing them, and don't intend them as a direct response, but thought they might be interesting. It's all very rough and quickly written.

1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they're already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.

It becomes particularly difficult because the leaders, who do the broad outreach, want to focus on high-level EA. It's more transparent and open to pitch high-level EA directly.

There are probably ways you could implement a division without incurring these problems, but it would need some careful thought.

2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It's what makes us different from simply "competent" do-gooding, and often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in scale of influence to offset this.

3) Often the best way to promote general ideas is to live them. With your example of promoting science, people often seem to think the Royal Society was important in building the scientific culture in the UK. It was an elite group of scientists who just got about the business of doing science. Early members included Newton and Boyle. The society brought likeminded people together, and helped them to be more successful, ultimately spreading the scientific mindset.

Another example is Y Combinator, which has helped to spread norms about how to run startups, encourage younger people to do them, reduce the power of VCs, and have other significant effects on the ecosystem. The partners often say they became famous and influential due to Reddit -> Dropbox -> Airbnb, so much of their general impact was due to having a couple of concrete successes.

Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on doing is just having a huge impact - finding the "Airbnb of EA" or the "Newton of EA".

Comment author: JanBrauner 13 July 2017 06:20:44PM *  2 points [-]

With regards to "following your comparative advantage":

Key statement: While "following your comparative advantage" is beneficial as a community norm, it might be less relevant as individual advice.

Imagine two people, Ann and Ben. Ann has very good career capital to work on cause X: she studied a relevant subject, has relevant skills, maybe some promising work experience and a network. Ben has very good career capital to contribute to cause Y. Both have the aptitude to become good at the other cause as well, but it would take some time, involve some cost, and maybe not be as safe.

Now Ann thinks that cause Y is 1000 times as urgent as cause X, and for Ben it is the other way around. Both consider retraining for the cause they think is more urgent.

From a community perspective, it is reasonable to promote the norm that everyone should follow their comparative advantage. This avoids prisoner's dilemma situations and increases the total impact of the community. After all, the solution that would best satisfy both Ann's and Ben's goals would be for each to continue in their respective areas of expertise. (Let's assume they could be motivated to do so.)

However, from a personal perspective, let's look at Ann's situation. In reality, of course, there will rarely be a Ben to mirror Ann, who is also considering retraining at exactly the same time as Ann. And if there were, they would likely not know each other. So Ann is not in a position to offer anyone the specific trade that she could offer Ben, namely: "I keep contributing to cause X if you continue contributing to cause Y."

So these might be Ann's thoughts: "I really think that cause Y is much more urgent than anything I could contribute to cause X. And yes, I have already considered moral uncertainty. If I went on to work on cause X, this would not directly cause someone else to work on cause Y. I realize that it is beneficial for EA to have a norm that people should follow their comparative advantage, and the creation of such a norm would be very valuable. However, I do not see how my decision could possibly have any effect on the establishment of such a norm."

So for Ann it seems to be a prisoner’s dilemma without iteration, and she ought to defect.
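The Ann/Ben situation can be put into a toy numeric model. All the numbers below (yearly output, retraining penalty, the 1000x urgency ratio) are illustrative assumptions, not anything from the comment beyond its qualitative setup:

```python
# Toy model of Ann and Ben's situation. Assumed numbers: each person
# produces 1.0 "units of good" per year in the cause they trained for,
# and only 0.5 after retraining (retraining cost, worse initial fit).
TRAINED, RETRAINED = 1.0, 0.5

# Ann thinks cause Y is 1000x as urgent as X; Ben thinks the reverse.
ANN_WEIGHTS = {"X": 1, "Y": 1000}
BEN_WEIGHTS = {"X": 1000, "Y": 1}

def total_value(weights, ann_cause, ben_cause):
    """Value of both careers combined, judged by one person's cause weights."""
    ann_output = TRAINED if ann_cause == "X" else RETRAINED
    ben_output = TRAINED if ben_cause == "Y" else RETRAINED
    return weights[ann_cause] * ann_output + weights[ben_cause] * ben_output

# Mutual "cooperation": each keeps working in their area of expertise.
coop = total_value(ANN_WEIGHTS, "X", "Y")        # 1*1.0 + 1000*1.0 = 1001.0
# Mutual "defection": both retrain into the cause they personally favour.
defect = total_value(ANN_WEIGHTS, "Y", "X")      # 1000*0.5 + 1*0.5 = 500.5
# But with Ben's choice fixed (no actual trade on offer), Ann does better
# by switching unilaterally:
unilateral = total_value(ANN_WEIGHTS, "Y", "Y")  # 1000*0.5 + 1000*1.0 = 1500.0

assert coop > defect        # cooperation beats mutual defection, by Ann's own values...
assert unilateral > coop    # ...yet switching is still Ann's dominant move
```

By symmetry the same holds under Ben's weights, which is the prisoner's dilemma structure: mutual cooperation beats mutual defection by both value systems, but without a trading partner, each individual's dominant move is to defect.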

I see one consideration for why Ann should continue working towards cause X: if Ann believed that EA was going to grow a lot, EA would reach many people with a better comparative advantage for cause Y. And if EA successfully promoted said norm, those people would all work on cause Y, until Y was no longer neglected enough to be much more urgent than cause X. Whether Ann believes this is likely to happen depends strongly on her predictions of the future of EA and on the specific characteristics of causes X and Y. If she believed this would happen (soon), she might think it best for her to continue contributing to X. However, I think this consideration is fairly uncertain and I would not give it much weight in my decision process.

So it seems that:

1) it clearly makes sense (for CEA, 80,000 Hours, ...) to promote such a norm

2) it makes much less sense for an individual to follow the norm, especially if said individual is not cause-agnostic, or does not think that all causes are within the same 1-2 orders of magnitude of urgency.

All in all, the situation seems pretty weird, and there does not seem to be a consensus amongst EAs on how to deal with it. A real-world example: I have met several trained physicians who thought that AI safety was the most urgent cause. Some retrained to do AI safety research; others continued working in health-related fields. (Of course, for each individual there were probably many other factors that played a role in their decision apart from impact, e.g. risk aversion, personal fit for AI safety work, fit with the rest of their lives, ...)

PS: I would be really glad if you could point me to errors in my reasoning or aspects I missed, as I, too, am a physician currently considering retraining for AI safety research :D

PPS: I am new to this forum and need 5 karma to be able to post threads. So feel free to upvote.

Comment author: Ben_Todd 14 July 2017 01:03:01AM 1 point [-]

Hi there,

I think basically you're right, in that people should care about comparative advantage to the degree that the community is responsive to their choices, and they're value-aligned with typical people in the community. If no one is going to change their career in response to your choice, then you default back to whatever looks highest-impact in general.

I have a more detailed post about this, but I conclude that people should consider all of role impact, personal fit and comparative advantage, where you put more or less emphasis on comparative advantage compared to personal fit given certain conditions.

Comment author: RandomEA 11 July 2017 08:29:54PM 0 points [-]

I think it's actually mentioned briefly at the end of Part 5:

(In fact, the mention is so brief that you could easily remove it if your goal is to wait until the end to mention effective altruism.)

Comment author: Ben_Todd 12 July 2017 04:44:03AM 0 points [-]

That's right - we mention it as a cause to work on. That slipped my mind since that article was added only recently. Though I think it's still true we don't give the impression of representing the EA movement.

Comment author: casebash 11 July 2017 12:25:15AM 0 points [-]

80k is now separate from CEA, or is in the process of being separated from CEA. They are allowed to come to different conclusions.

Comment author: Ben_Todd 11 July 2017 05:56:39AM *  4 points [-]

We're fiscally sponsored by CEA (so legally within the same entity) and have the same board of trustees, but we operate like a separate organisation.

Our career guide also doesn't mention EA until the final article, so we're not claiming that our views represent those of the EA movement. GWWC also doesn't claim on the website to represent the EA movement.

The place where moral exclusivity would be most problematic is the cause ranking page. But it mentions a range of causes without prioritising them, and links to a tool which does exactly what the original post recommends (and has been there for a year).

Comment author: MichaelPlant 10 July 2017 09:23:18AM *  1 point [-]


So, I don't mean to be attacking you on these things. I'm responding to what you said in the comments above, and maybe more to a general impression, and perhaps not keeping in mind how 80k do things on their website; you write a bunch of (cool) stuff, I've probably forgotten the details, and I don't think it would be useful to go back and engage in a 'you wrote this here' exercise to check.

A few quick things as this has already been a long exchange.

Given I accept I'm basically a moral hipster, I'd understand if you put my views in the (3) rather than (4) category.

If it's of any interest, I'm happy to suggest how you might update your problem quiz to capture my views and other views in the area.

I wouldn't think the same way about Spanish flu vs mental health. I'm assuming happiness is duration x intensity (#Bentham). What I think you're discounting is the duration of mental illnesses - they are 'full-time' in that they take up your conscious space for much of the day, and they often last a long time. I don't know what the distribution of durations is, but if you have chronic depression (anhedonia), that will make you less happy constantly. In contrast, the experience of having flu might be bad (although it's not clear it's worse, moment per moment, than, say, depression), but it doesn't last very long - a couple of weeks? So we need to account for the fact that a case of Spanish flu has 1/26th the duration of a year of anhedonia, before we even factor in intensity. More generally, I think we suffer from something like scope insensitivity when we do affective forecasting: we tend to consider the intensity of events rather than their duration. Studies into the 'peak-end' effect show this is exactly how we remember things: our brains only really remember the intensity of events.

One conclusion I reach (on my axiology) is that the things which cause daily misery/happiness are the biggest in terms of scale. This is why I don't think x-risks are the most important thing. I think a totalist should accept this sort of reasoning and bump up the scale of things like mental health, pain and ordinary human unhappiness, even though x-risk will be much bigger in scale on totalism. I accept I haven't offered anything to do with solvability or neglectedness yet.

Comment author: Ben_Todd 10 July 2017 10:28:18PM 1 point [-]

Thanks. Would you consider adding a note to the original post pointing out that 80k already does what you suggest re moral inclusivity? I find that people often don't read the comment threads.

Comment author: MichaelPlant 09 July 2017 12:00:32PM 0 points [-]

Hello Ben,

Main comments:

There are two things going on here.

On transparency, if you want to be really transparent about what you value and why, I don't think you can assume people agree with you on topics they've never considered, that you don't mention, and that do basically all the work of cause prioritisation. The number of people worldwide who understand moral uncertainty well enough to explain it could fill one seminar room. If moral uncertainty is your "this is why everyone should agree with us" fallback, then that should presumably feature somewhere. Readers should know that's why you put forward your cause areas so they're not surprised later on to realise that's the reason.

On exclusivity, your response seems to amount to "most people want to focus on the far future and, what's more, even if they don't, they should because of moral uncertainty, so we're just going to say it's what really matters". It's not true that most EAs want to focus on the far future - see Peter Hurford's post below. Given that it's not true, saying people should focus on it is, in fact, quite exclusive.

The third part of my original post argued we should want EA to be morally inclusive even if we endorse a particular moral theory. Do you disagree with that? Unless you do, it doesn't matter whether people are or should be totalists: it's worse from a totalist perspective for 80k to endorse only totalist-y causes.

Less important comments:

FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and 'ordinary human unhappiness' (that is, the sub-maximal happiness many people have even if they are entirely healthy and economically secure). Say a nuclear war kills everyone; then that's just a few moments of suffering. Say it kills most people but leaves 10m who eke out a miserable existence in a post-apocalyptic world; then you're just concerned with those 10m people, which is 50 times fewer than the 500m who have either anxiety or depression worldwide.

I know some people who implicitly or explicitly endorse this, but I wouldn't expect you to, and that's one of my worries: if you come out in favour of theory X, you disproportionately attract those who agree with you, and that's bad for truth-seeking. By analogy, I don't imagine many people at a Jeremy Corbyn rally vote Tory, but I'm pretty sure Jeremy shouldn't take that as further evidence that a) the Tories are wrong or b) no one votes for them.

I'm curious where you get your 90% figure from. Is this from asking people if they would:

"Prevent one person from suffering next year. Prevent 100 people from suffering (the same amount) 100 years from now."?

I assume it is, because that's how you put it in the advanced workshop at EAGxOX last year. If it is, it's a pretty misleading question to ask for a bunch of reasons that would take too long to type out fully. Briefly, one problem is that I think we should help the 100 people in 100 years if those people already exist today (both necessitarians and presentists get this result). So I 'agree' with your intuition pump but don't buy your conclusions, which suggests the pump is faulty. Another problem is the Hawthorne effect. Another is that population ethics is a mess and you've cherry-picked a scenario that suits your conclusion. If I asked a room of undergraduate philosophers "would you rather relieve 100 living people of suffering or create 200 happy people", I doubt many would pick the latter.

Comment author: Ben_Todd 10 July 2017 05:51:00AM 8 points [-]

I feel like I'm being interpreted uncharitably, so this is making me feel a bit defensive.

Let's zoom out a bit. The key point is that we're already morally inclusive in the way you suggest we should be, as I've shown.

You say:

for instance, 80,000 Hours should be much more morally inclusive than they presently are. Instead of "these are the most important things", it should say "these are the most important things if you believe A, but not everyone believes A. If you believe B, you should think these are the important things [new list pops up]".

In the current materials, we describe the main judgement calls behind the selection in this article and within the individual profiles.

Then on the page with the ranking, we say:

Comparing global problems involves difficult judgement calls, so different people come to different conclusions. We made a tool that asks you some key questions, then re-ranks the lists based on your answers.

And provide a tool which produces alternative rankings given some key value judgements, i.e. it does exactly what you say we should do.

Moreover, we've been doing this since 2014, as you can see in the final section of this article.

In general, 80k has a range of options, from most exclusive to least:

1) State our personal views about which causes are best.

2) Also state the main judgement calls required to accept these views, so people can see whether to update or not.

3) Give alternative lists of causes for nearby moral views.

4) Give alternative lists of causes for all major moral views.

We currently do (1)-(3). I think (4) would be a lot of extra work, so not worth it, and it seems like you agree.

It seemed like your objection is more that within (3), we should put more emphasis on the person-affecting view. So, the other part of my response was to argue that I don't think the rankings depend as much on that as it first seems. Moral uncertainty was only one reason - the bigger factor is that the scale scores don't actually change that much if you stop valuing xrisk.

Your response was that you're also Epicurean, but then that's such an unusual combination of views that it falls within (4) rather than (3).

But, finally, let's accept epicureanism too. You claim:

FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and 'ordinary human unhappiness'

For mental health, you give the figure of 500m. Suppose those lives have a disability weighting of 0.3, then that's 150m QALYs per year, so would get 12 on our scale.

What about for pandemics? The Spanish Flu infected 500m people, so let's call that 250m QALYs of suffering (ignoring the QALYs lost by people who died, since we're being Epicurean, and the suffering inflicted on non-infected people). If there's a 50% chance that happens within 50 years, then that's 2.5m expected QALYs lost per year, so it comes out at 9 on our scale. So, it's a factor of 60 less, but not insignificant. (And this is ignoring engineered pandemics.)
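This back-of-the-envelope comparison can be restated with the comment's own assumed numbers (the disability weight, infection count and probability are all taken from the paragraphs above; the mapping from QALYs per year to 80k's logarithmic scale score is theirs and isn't reproduced here):

```python
# Mental health vs Spanish-flu-scale pandemic, in expected QALYs per year.
mental_health_cases = 500e6                 # people with anxiety or depression
disability_weight = 0.3                     # assumed weighting from the comment
mental_qalys_per_year = mental_health_cases * disability_weight       # ~150m

flu_infections = 500e6                      # Spanish Flu infection count
flu_qalys_lost = 0.5 * flu_infections       # "call that 250m QALYs of suffering"
p_within_50_years = 0.5                     # chance it happens within 50 years
pandemic_qalys_per_year = flu_qalys_lost * p_within_50_years / 50     # ~2.5m

ratio = mental_qalys_per_year / pandemic_qalys_per_year               # ~60x
```

So on these assumptions mental health is roughly 60 times larger in annual scale, before neglectedness and solvability enter the picture.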

But, the bigger issue is that the cause ranking also depends on neglectedness and solvability.

We think pandemics only get $1-$10bn of spending per year, giving them a score of ~4 for neglectedness.

I'm not sure how much gets spent on mental health, but I'd guess it's much larger. Just for starters, it seems like annual sales of antidepressants are well over $10bn, and that seems like fairly small fraction of the overall effort that goes into it. The 500m people who have a mental health problem are probably already trying pretty hard to do something about it, whereas pandemics are a global coordination problem.

All the above is highly, highly approximate - it's just meant to illustrate that, on your views, it's not out of the question that the neglectedness of pandemics could make up for their lower scale, so pandemics might still be an urgent cause.

I think you could make a similar case for nuclear war (a nuclear war could easily leave 20% of people alive in a dystopia) and perhaps even AI. In general, our ranking is driven more by neglectedness than scale.

Comment author: casebash 09 July 2017 02:07:59AM *  10 points [-]

Effective Altruism is quite difficult to explain if you want to capture all of its complexity. I think it is a completely valid choice for an introductory talk to focus on one aspect of Effective Altruism, as otherwise many people will have trouble following.

I would suggest letting people know that you are only covering one aspect of Effective Altruism, i.e. "Effective Altruism is about doing the most good that you can with the resources available to you. This talk will cover how Effective Altruism has been applied to charity, but it is worth noting that Effective Altruism has also been applied to other issues like animal welfare or ensuring the long-term survival of humanity".

This reduces the confusion when they hear about these issues later and reduces the chance that they will feel misled. At the same time, it avoids throwing too many new ideas at a person at once, which may reduce their comprehension, and it explains how EA applies to an issue which they may already care about.

Comment author: Ben_Todd 09 July 2017 03:45:45AM 13 points [-]

I think this is a good point, but these days we often do this, and people still get the impression that it's all about global poverty. People remember the specific examples far more than your disclaimers. Doing Good Better is a good example.

Comment author: MichaelPlant 08 July 2017 09:02:57PM 0 points [-]

Very much enjoyed this. Good to see the thinking developing.

My only comment is on simple replaceability. I think you're right to say this is too simple in an EA context, where displacing someone could cause a cascade, or the work wouldn't have got done anyway.

Do you think simple replaceability doesn't apply outside the EA world? For example, person X wants to be a doctor because they think they'll do good. If they take a place at med school, should we expect that this 'frees up' the person who doesn't get the place to go and do something else instead? My assumption is that the borderline medical candidate is probably not that committed to doing the most good anyway.

To push the point in a familiar case, assume I'm offered a place at an investment bank and I was going to E2G, but I decide to do something more impactful, like work at an EA org. It's unlikely that the person who gets my job and salary instead would be donating to good causes.

If you think replaceability is sometimes true and other times not, it would be really helpful to specify that. My guess is that motivation and ability to be an EA play the big role.

Comment author: Ben_Todd 09 July 2017 03:42:20AM 0 points [-]

Hi Michael,

I'm writing a much more detailed piece on replaceability.

But in short, simple replaceability could still be wrong in that the doctor wouldn't be fully replaced. In general, a greater supply of doctors should mean that more doctors get hired, even if the increase is less than one-for-one.

But yes you're right that if the person you'd replace isn't value-aligned with you, then the displacement effects seem much less significant, and can probably often be ignored.
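The partial-replacement reasoning above can be sketched numerically. The linear model and the example numbers are illustrative assumptions, not figures from 80k:

```python
# Sketch of the displacement reasoning: your counterfactual impact is your
# direct impact minus whatever your would-be replacement achieves.
def counterfactual_impact(direct_impact, replacement_rate, replacement_value):
    """
    direct_impact: good done per year if you take the role.
    replacement_rate: chance the role gets filled anyway if you decline
        (simple replaceability assumes 1.0; a supply response that is
        "less than one-for-one" means something below that).
    replacement_value: how much good your replacement would do relative
        to you (low if they aren't value-aligned).
    """
    displaced = replacement_rate * replacement_value * direct_impact
    return direct_impact - displaced

# Simple replaceability: fully replaced by someone just as good -> zero impact.
assert counterfactual_impact(100, 1.0, 1.0) == 0.0
# Partial replacement by a non-value-aligned candidate: most impact survives,
# which is why displacement effects can often be ignored in that case.
assert counterfactual_impact(100, 0.5, 0.25) == 87.5
```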

If you think replaceability is sometimes true and other times not, it would be really helpful to specify that. My guess is motivation and ability to be an EA play the big role.

We did state this in our most recent writing about it, from 2015. It's pretty complex to specify the exact conditions under which it does and doesn't matter, and I'm still working on that.

Comment author: MichaelPlant 08 July 2017 09:43:38PM *  2 points [-]

Hello again Ben and thanks for the reply.

Thanks for the correction on 80k. I'm pleased to hear 80k stopped doing this ages ago: I saw the new, totalist-y update and assumed that was more of a switch in 80k's position than I thought. I'll add a note.

I agree moral uncertainty is potentially important, but there are two issues.

  1. I'm not sure EMV is the best approach to moral uncertainty. I've been doing some stuff on meta-moral uncertainty and think I've found some new problems I hope to write up at some point.

  2. I'm also not sure that, even if you adopt an EMV approach, the result is that totalism becomes your effective axiology, as Hilary and Toby suggest in their paper. I'm also working on a paper on this.

Those are basically holding responses which aren't that helpful for the present discussion. Moving on then.

I disagree with your analysis that person-affecting views are committed to being very concerned about X-risks. Even supposing you're taking a person-affecting view, there's still a choice to be made about your view of the badness of death. If you're an Epicurean about death (it's bad for no one to die), you wouldn't be concerned about something suddenly killing everyone (though you'd still be concerned about the suffering as everyone died). I find both person-affecting views and Epicureanism pretty plausible: Epicureanism is basically just taking the person-affecting approach to creating lives and applying it to ending lives, so if you like one, you should like both. On my (heretical and obviously deeply implausible) axiology, X-risk doesn't turn out to be important.

FWIW, I'm (emotionally) glad people are working on X-risk because I'm not sure what to do about moral uncertainty either, but I don't think I'm making a mistake in not valuing it. Hence I focus on trying to find the best ways to 'improve lives' - increasing the happiness of currently living people while they are alive.

You're right that if you combine person-affecting-ness and a deprivationist view of death (i.e. badness of death = years of happiness lost) you should still be concerned about X-risk to some extent. I won't get into the implications of deprivationism here.

What I would say, regarding transparency, is that if you think everyone should be concerned about the far future because you endorse EMV as the right answer to moral uncertainty, you should probably state that somewhere too, because that belief is doing most of the prioritisation work. It's not totally uncontentious, hence doesn't meet the 'moral inclusivity' test.

Comment author: Ben_Todd 09 July 2017 03:33:13AM *  12 points [-]

Hi Michael,

I agree that if you accept both Epicureanism and the person-affecting view, then you don't care about an xrisk that suddenly kills everyone, perhaps like AI.

However, you might still care a lot about pandemics or nuclear war due to their potential to inflict huge suffering on the present generation, and you'd still care about promoting EA and global priorities research. So even then, I think the main effect on our rankings would be to demote AI. And even then, AI might still rank due to the potential for non-xrisk AI disasters.

Moreover, this combination of views seems pretty rare, at least among our readers. I can't think of anyone else who explicitly endorses it.

I think it's far more common for people to put at least some value on future generations and/or to think it's bad if people die. In our informal polls of people who attend our workshops, over 90% value future generations. So, I think it's reasonable to take this as our starting point (like we say we do in the guide).

And this is all before taking account of moral uncertainty, which is an additional reason to put some value on future generations that most people haven't already considered.

In terms of transparency, we describe our shift to focusing on future generations here. If someone doesn't follow that shift, then it's pretty obvious that they shouldn't (necessarily) follow the recommendations in that section.

I agree it would be better if we could make all of this even more explicit, and we plan to, but I don't think these questions are on the minds of many of our readers, and we rarely get asked about them in workshops and so on. In general, there's a huge amount we could write about, and we try to address people's most pressing questions first.

Comment author: Ben_Todd 08 July 2017 09:00:17PM 16 points [-]

Hi Michael,

I agree the issue of people presenting EA as about global poverty when they actually support other causes is a big problem.

80k stopped doing this in 2014 (not a couple of months ago, like you mention), with this post. The page you link to listed other causes at least as early as 2015.

My understanding is that the GWWC website is in the process of being updated, and the recommendations on where to give are now via the EA Funds, which include 4 cause areas.

These issues take a long time to fix though. First, it takes a long time to rewrite all your materials. Second, it takes people at least several years to catch up with your views. So, we're going to be stuck with this problem for a while.

In terms of how 80,000 Hours handles it:

Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don't worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do or pure time discounting).

This is a huge topic, but I disagree. Here are some quick reasons.

First, you should value the far future even if you only put some credence on theories like total utilitarianism.

e.g. Someone who had 50% credence in the person-affecting view and 50% credence in total utilitarianism should still place significant value on the far future.

This is a better approximation of our approach - we're not confident in total utilitarianism, but put some weight on it due to moral uncertainty.

Second, even if you don't put any value on the far future, it wouldn't completely change our list.

First, the causes are assessed on scale, neglectedness and solvability. Only scale is affected by these value judgements.

Second, scale is (to simplify) assessed on three factors: GDP, QALYs and % xrisk reduction, as set out here.

Even if you ignore the xrisk reduction column (which I think would be unreasonable due to moral uncertainty), you often find the rankings don't change that much.

E.g. Pandemic risk gets a scale score of 15 because it might pose an xrisk, but if you ignored that, I think the expected annual death toll from pandemics could easily be 1 million per year right now, so it would still get a score of 12. If you think engineered pandemics are likely, you could argue for a higher figure. So, this would move pandemics from being a little more promising than regular global health, to about the same, but it wouldn't dramatically shift the rankings.

I think AI could be similar. It seems like there's a 10%+ chance that AI is developed within the lifetimes of the present generation. Conditional on that, if there's a 10% chance of a disaster, then the expected death toll is 75 million, or 1-2 million per year, which would also give it a score of 12 rather than 15. But it would remain one of the top ranked causes.
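The AI arithmetic above can be reproduced as follows. The probabilities come from the paragraph itself; the 50-year window for "lifetimes of the present generation" is an assumption chosen to match the 1-2 million per year figure:

```python
# Back-of-the-envelope expected death toll from AI, per the comment.
world_population = 7.5e9
p_ai_this_generation = 0.10   # "10%+ chance" AI is developed in present lifetimes
p_disaster_given_ai = 0.10    # conditional chance of a disaster

expected_deaths = world_population * p_ai_this_generation * p_disaster_given_ai
# 75 million in total; spread over an assumed ~50-year generation:
expected_deaths_per_year = expected_deaths / 50   # 1.5 million per year
```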

I think the choice of promoting EA and global priorities research are even more robust to different value judgements.

We actively point out that the list depends on value judgements, and we provide this quiz to highlight some of the main ones.
