Comment author: rohinmshah (EA Profile) 19 March 2017 04:31:55AM 2 points

Robin Shah

*Rohin Shah (don't worry about it, it's a ridiculously common mistake)

I also find Ben Todd's post on focusing on growth rather than upfront provable marginal impact to be promising and convincing

While I generally agree with the argument that you should focus on growth rather than upfront provable marginal impact, I think you should take the specific argument comparing EA with vegetarianism with many grains of salt. That's speculative enough that there are lots of similarly plausible arguments in both directions, and I don't see strong reasons to prefer any specific one.

For example: Perhaps high growth is bad because people don't have deep engagement and it waters down the EA movement. Perhaps vegetarianism is about as demanding as GWWC, but vegetarianism fits more people's values than GWWC (environmentalism, animal suffering, health vs. caring about everyone equally). Perhaps GWWC is as demanding and as broadly applicable as vegetarianism, but actually it took hundreds of years to get 1% of the developed world to be vegetarian and it will take similar amounts of effort here. And so on.

I think looking at specifically how a metacharity plans to grow, and how well their plans to grow have worked in the past, is a much better indicator than these sorts of speculative arguments. (The speculative arguments are a good way to argue against "we have reached capacity", which I think was how Ben intended them, but they're not a great argument for the meta charity.)

Comment author: MichaelDello 14 March 2017 11:25:59AM 2 points

For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?

Comment author: rohinmshah (EA Profile) 14 March 2017 05:14:55PM 1 point

^ Yeah, I can certainly come up with examples where you need to react quickly, it's just that I couldn't come up with any where you had to make decisions based on ethics quickly. I think I misunderstood the post as "You should practice thinking about ethics and ethical conundrums so that when these come up in real life you'll be able to solve them quickly", whereas it sounds like the post is actually "You should consider optimizing around the ability to generally react faster as this leads to good outcomes overall, including for anything altruistic that you do". Am I understanding this correctly?

Comment author: rohinmshah (EA Profile) 12 March 2017 04:05:33AM 4 points

My main question when I read the title of this post was "Why do I expect that there are ethical issues that require a fast reaction time?" Having read the body, I still have the same question. The bystander effect counts, but are there any other cases? What should I learn from this besides "Try to eliminate bystander effect?"

But other times you will find out about a threat or an opportunity just in time to do something about it: you can prevent some moral dilemmas if you act fast.


Sometimes it’s only possible to do the right thing if you do it quickly; at other times the sooner you act, the better the consequences.


Any time doing good takes place in an adversarial environment, this concept is likely to apply.

Examples? One example I came up with was negative publicity for advocacy of any sort, but you don't make any decisions about ethics in that scenario.

Comment author: RobBensinger 07 February 2017 11:00:02PM 7 points

Anonymous #32(e):

I'm generally worried about how little most people actually seem to change their minds, despite being in a community that nominally holds the pursuit of truth in such high esteem.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

There are also really strong founder effects in regional EA groups. That is, locals of one area generally seem to converge on one or two causes or approaches being best. Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

Excepting a handful of people who have switched cause areas, it seems like EA as a brand serves more to justify what one is already doing and optimize within one's comfort zone in it, as opposed to actually changing minds.

To fix this, I'd want to lower the barriers to changing one's mind by, e.g., translating the arguments for one cause to the culture of a group often associated with another cause, and encouraging thought leaders and community leaders to be more open about the ways in which they are uncertain about their views so that others are comfortable following suit.

Comment author: rohinmshah (EA Profile) 08 February 2017 06:22:29PM 2 points

I agree that this is a problem, but I don't agree with the causal model and so I don't agree with the solution.

Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.

I'd guess that the majority of the people who take the EA Survey are fairly new to EA and haven't encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality "tips and tricks" to become better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that's pretty typical.

TL;DR I don't think there's good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.

Moreover, they often converge not because they moved there to be with those people, but because they 'became' EAs there.

I'd propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept that. (This would happen even if the founders try to expose new EAs to all of the arguments -- we would expect the founders to be able to best explain the arguments for their own cause area, leading to a bias.)

In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you'll have students who have heard a lot about global poverty and perhaps meta-charity who then help organize speaker events and discussion groups, even though they've barely heard of other areas.

Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.

Comment author: Linch 04 January 2017 05:36:59AM 6 points

It might be worth doing a pre-registered experiment of people who have never directly talked to non-GWWC members about GWWC before, or at least not in the last 6 months.

Say we get 10 of such people, and they each message 5-20 of their friends. If our initial models are correct (and I agree with you that from the outside view this looks unusually high), we would expect 5-20 new pledges to come out of this.
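Spelling out the implied arithmetic: 10 participants each messaging 5-20 friends is 50-200 messages total, so the predicted 5-20 pledges corresponds to roughly a 10% conversion rate per message. A minimal sketch of that model (the 10% per-message rate is inferred from the figures above, not something stated directly):

```python
def expected_pledges(n_participants: int, messages_each: int,
                     conversion_rate: float) -> float:
    """Expected new pledges if each message independently converts at a fixed rate."""
    return n_participants * messages_each * conversion_rate

# Assumed ~10% per-message conversion, read off from the 5-20 pledge prediction.
low = expected_pledges(10, 5, 0.10)
high = expected_pledges(10, 20, 0.10)
print(f"expected pledges: {low:.0f} to {high:.0f}")
```

Even a simple pre-registered target like this would make the result easy to compare against the outside-view skepticism mentioned above.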

Do you happen to be somebody in this category? If so, would you be interested in participating in such an experiment?

Comment author: rohinmshah (EA Profile) 05 January 2017 12:02:02AM 0 points

from the outside view this looks unusually high

I would have said this a little over a year ago, but I'm less surprised by it now and I do expect it would replicate. I also expect that it becomes less effective as it scales (I expect the people who currently do it are above average at this, due to selection effects), but not by that much.

This is based on running a local EA group for a year and constantly being surprised by how much easier it is to get a pledge than I thought it would be.

Comment author: Ben_Todd 24 December 2016 09:37:37PM * 20 points

Hi Michael,

Thanks for the comments.

Our last chain got very long, and it didn't feel like we made much progress, so I'm going to limit myself to one reply.

My fundamental concern with 80K is that the evidence in its favor is very weak. My favorite meta-charity is REG because it has a straightforward causal chain of impact, and it raises a lot of money for charities that I believe do much more good in expectation than GiveWell top charities. 80K can claim the latter to some extent but cannot claim the former.

I agree that the counterfactuals + chain of impact are clearer with REG, however, strength of evidence is only one criterion I'd use when assessing a charity. To start with, I'm fairly risk-neutral, so I'm open to accepting weak evidence of high upside. But there are a lot of other considerations to use when comparing charities, including some of these:

  1. A larger multiplier - REG's multiplier is about 4x once you include opportunity costs (see figures), whereas I've argued our multiplier at the margin is at least 15x.

  2. 80k's main purpose is to solve talent gaps rather than funding gaps, and about 70% of the plan changes don't donate. So probably most of the benefits of 80k aren't captured by the donation multiplier. We've also argued that talent gaps are more pressing than funding gaps. The nature of solving talent gaps means that your evidence will always be weak, so if donors only accept strong evidence, the EA community would never systematically deal with talent gaps.

  3. Better growth prospects - REG hasn't grown the last few years, whereas we've grown 20-fold; the remaining upside is also much better with 80k (due to the reasons in the post).

  4. Better movement building benefits. 80k is getting hundreds of people into the EA movement, who are taking key positions at EA orgs, have valuable expertise etc. REG has only got a handful of poker players into EA.

  5. 80k produces valuable research for the movement; REG doesn't.

  6. Donations to REG might funge with the rest of EAS, which mostly works on stuff with similar or even weaker evidence than 80k (EA movement building in German-speaking countries, Foundational Research Institute, animal-focused policy advocacy).

  7. Larger room for more funding. REG has been struggling to find someone to lead the project, so has limited room for funding. I'm keen to see that REG gets enough to cover their existing staff and pay for an Executive Director, but beyond that, 80k is better able to use additional funds.

On your specific criticisms of the donation multiplier calculation via getting people to pledge:

These people would not have signed the pledge without 80K.

We ask them whether they would have taken it otherwise, and don't count people who said yes. More discussion here: And here:

These people would not have done something similarly or more valuable otherwise.

It seems like people take the GWWC pledge independently from their choice of job, so there isn't an opportunity cost problem here.

The GWWC pledge is as valuable as GWWC claims it is.

I'm persuaded by GWWC's estimates: In fact, I think they undercount the value, because they ignore the chance of GWWC landing a super-large donor, which seems likely to be in the next 10 years. GWWC/CEA has already advised multiple billionaires.

On the opportunity cost point:

When someone switches from (e.g.) earning to give to direct work, 80K adds this to its impact stats. When someone else switches from direct work to earning to give, 80K also adds this to its impact stats.

Most of the plan changes are people moving from "regular careers" to "effective altruist style careers", rather than people who are already EAs switching between great options.

First, if someone switched from a good option to another good option, we may not count it as a plan change at all. Second, we wouldn't give it an equally high "impact rating".

In specific cases, we also say what the counterfactuals were e.g. see the end of the section on earning to give here:

I would like to see more effort on 80K's part to figure out whether its plan changes are actually causing people to do more good.

In terms of whether people are going into the right options, our core purpose is research into which careers are highest impact, so that's where most of our effort already goes.

In terms of the specific cases, there's more detail here:

Questionable marketing tactics.

We're highly concerned by this, and think about it a lot. We haven't seen convincing evidence that we've been turning lots of people off (very unclear counterfactuals!). Our bounce rates etc. are if anything better than standard websites and I expect most people leave simply because they're not interested. Note as well that our intro advice isn't EA branded, which makes it much less damaging for the rest of the movement if we do make a bad impression.

We've also thought a lot about the analogy with GiveWell, and even discussed it with them. I think there's some important differences. Our growth has also been faster than GiveWell the last few years.

Do we have any reason to believe that 80K will cause more organizations to be created, and that they will be as effective as the ones it contributed to in the past?

4 new organisations seems like a pretty good track record - it's about one per year.

I expect more because we actively encourage people to set up new EA organisations (or work for the best existing ones) as part of our career advice.

Also note that our impact doesn't rest on this - we have lots of other pathways to impact.

but writing new articles has diminishing utility as you start to cover the most important ideas.

You can also get increasing returns. The more articles you have, the more impressive the site seems, so the more convincing it is. More articles can also bring the domain more traffic, which benefits all the other articles. More articles mean you cover more situations, which makes a larger fraction of users happy, increasing your viral coefficient.

You need to convince me that it has sufficiently high leverage that it does more good than the single best direct-work org, and it has higher leverage than any other meta org.

Yes. Though where it gets tricky is making the assessment at the margin.

Comment author: rohinmshah (EA Profile) 25 December 2016 06:36:06PM 3 points

Yes. Though where it gets tricky is making the assessment at the margin.

I was wondering about this too. Is your calculation of the marginal cost per plan change just the costs for 2016 divided by the plan changes in 2016? That doesn't seem to be an assessment at the margin.
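To make the average-versus-marginal distinction concrete, here is a toy calculation with entirely invented cost and plan-change figures (nothing here is 80k's real data):

```python
# Illustrative only: these cost and plan-change figures are made up.
costs = {2015: 300_000, 2016: 400_000}   # total spending per year ($)
plan_changes = {2015: 600, 2016: 1000}   # recorded plan changes per year

# "Average" cost per plan change for 2016: total cost / total changes.
average_2016 = costs[2016] / plan_changes[2016]

# A marginal estimate compares the *change* in cost to the *change* in output.
marginal = (costs[2016] - costs[2015]) / (plan_changes[2016] - plan_changes[2015])

print(average_2016)  # 400.0
print(marginal)      # 250.0
```

Dividing one year's totals gives the first number; an estimate at the margin looks more like the second, and the two can differ substantially.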

Comment author: MichaelPlant 23 December 2016 02:08:18PM 0 points

Just a small comment. Shouldn't we really be calling this worry about 'movement building' rather than 'meta'? Meta to me means things like cause prioritisation.

Comment author: rohinmshah (EA Profile) 23 December 2016 07:08:17PM 0 points

Yeah I didn't have a great term for it so I just went with the term that was used previously and made sure to define what I meant by it. I think this is a little broader than movement building -- I like the suggestion of "promotion traps" above.

Comment author: John_Maxwell_IV 23 December 2016 12:10:25PM 0 points

I agree that 80k's research product is not meta the way I've defined it. However, 80k does a lot of publicity and outreach that GiveWell for the most part does not do. For example: the career workshops, the 80K newsletter, the recent 80K book, the TedX talks, the online ads, the flashy website that has popups for the mailing list. To my knowledge, of that list GiveWell only has online ads.

Maybe instead of talking about "meta traps" we should talk about "promotion traps" or something?

Comment author: rohinmshah (EA Profile) 23 December 2016 07:02:36PM 0 points

Yeah, that does seem to capture the idea better.

Comment author: Ben_Todd 22 December 2016 09:14:54PM 1 point

I agree the double-counting issue is pretty complex. (I think maybe the "fraction of value added" approach I mention in the value of coordination post is along the right lines)

I think the key point is that it seems unlikely that (given how orgs currently measure impact) they're claiming significantly more than 100% in aggregate. This is partly because there's already lots of adjustments that pick up some of this (e.g. asking people if they would have done X due to another org) and because there are various types of undercounting.

Given this, adding a further correction for double counting doesn't seem like a particularly big consideration - there are more pressing sources of uncertainty.

Comment author: rohinmshah (EA Profile) 23 December 2016 07:02:00PM 0 points

Yes, I agree with this. (See also my reply to Rob above.)

Comment author: Robert_Wiblin 23 December 2016 04:36:22AM 2 points

Hey Rohin, without getting into the details, I'm pretty unsure whether correcting for impacts from multiple orgs makes 80,000 Hours look better or worse, so I'm not sure how we should act. We win out in some cases (we get bragging rights from someone who found out about EA from another source then changes their career) and lose in others (someone who finds out about GiveWell through 80k but doesn't then attribute their donations to us).

There's double counting, yes, but the orgs are also legitimately complementary to one another - I'm not sure whether the double counting exceeds the real complementarity.

We could try to measure the benefit/cost of the movement as a whole - this gets rid of the attribution and complementarity problem, though loses the ability to tell what is best within the movement.

Comment author: rohinmshah (EA Profile) 23 December 2016 07:00:09PM * 0 points

I'm pretty unsure whether correcting for impacts from multiple orgs makes 80,000 Hours look better or worse

I'm a little unclear on what you mean here. I see three different factors:

  1. Various orgs are undercounting their impact because they don't count small changes that are part of a larger effort, even though, in theory, from a single-player perspective they should count that impact.

  2. In some cases, two (or more) organizations both reach out to an individual, but either one of the organizations would have been sufficient, so neither of them gets any counterfactual impact (more generally, the sum of the individually recorded impacts is less than the impact of the system as a whole).

  3. Multiple orgs have claimed the same object-level impact (e.g. an additional $100,000 to AMF from a GWWC pledge) because they were all counterfactually responsible for it (more generally, the sum of the individually recorded impacts is more than the impact of the system as a whole).

Let's suppose:

X is the impact of an org from a single player perspective

Y is the impact of an org taking a system-level view (so that the sum of Y values for all orgs is equal to the impact of the system as a whole)

Point 1 doesn't change X or Y, but it does change the estimate we make of X and Y, and tends to increase it.

Point 2 can only tend to make Y > X.

Point 3 can only tend to make Y < X.

Is your claim that the combination of points 1 and 2 may outweigh point 3, or just that point 2 may outweigh point 3? I can believe the former, but the latter seems unlikely -- it doesn't seem very common for many separate orgs to all be capable of making the same change; it seems more likely to me that in such cases all of the orgs are necessary, which would be an instance of point 3.
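A toy numeric version of points 2 and 3 may help; the figures are invented purely for illustration. Take a single $100,000 donation that two orgs, A and B, both touched:

```python
# Toy illustration of points 2 and 3 above, with invented numbers.
SYSTEM_IMPACT = 100_000  # total real-world impact ($) of one donation

# Point 3: both orgs were *necessary*, so each is counterfactually
# responsible for the full amount. Summing single-player impacts (X)
# then exceeds the system-level impact.
x_both_necessary = {"A": 100_000, "B": 100_000}
assert sum(x_both_necessary.values()) > SYSTEM_IMPACT

# Point 2: either org alone would have *sufficed*, so neither is
# counterfactually responsible for anything. Summing X now falls
# short of the system-level impact.
x_either_sufficient = {"A": 0, "B": 0}
assert sum(x_either_sufficient.values()) < SYSTEM_IMPACT

# A system-level allocation (Y) must split the $100k so the parts
# sum to the whole, e.g. an equal "fraction of value added" split.
y = {org: SYSTEM_IMPACT / 2 for org in ("A", "B")}
assert sum(y.values()) == SYSTEM_IMPACT
```

The equal split in the last step is just one possible allocation rule; the point is only that Y-values are constrained to sum to the system total while X-values are not.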

We could try to measure the benefit/cost of the movement as a whole

Yeah, this is the best idea I've come up with so far, but I don't really like it much. (Do you include local groups? Do you include the time that EAs spend talking to their friends? If not, how do you determine how much of the impact to attribute to meta orgs vs. normal network effects?) It would be a good start though.

Another possibility is to cross-reference data between all meta orgs, and try to figure out whether for each person, the sum of the impacts recorded by all meta orgs is a reasonable number. Not sure how feasible this actually is (in particular, it's hard to know what a "reasonable number" would be, and coordinating among so many organizations seems quite hard).
