Comment author: Halstead 05 June 2018 10:43:07AM 0 points [-]

You argued that counterfactual impact may be smaller than it appears. But it may also be larger than it first appears, due to leveraging other orgs away from ineffective activities. E.g. an NGO successfully advocates for a policy change P1 - the benefits of P1 are their counterfactual impact. But as a result of the proven success of this type of project, 100 other NGOs start working on similar projects where before they worked on ineffective ones. This latter effect should also be counted as the first org's counterfactual impact. This could be understood as leveraging additional money into an effective space.

Comment author: Ben_Todd 05 June 2018 06:03:55PM 1 point [-]

Makes sense. I don't think Joey would object if orgs were counting this though.

Comment author: Halstead 04 June 2018 02:06:04PM *  1 point [-]

Good points. This can also go the other way though - an org could leverage money from otherwise very ineffective orgs. Especially with policy changes, it can sometimes be the case that a good org comes up with a campaign that steers the entire advocacy ecosystem onto a more effective path. A good example of this is campaigns for ordinary air pollution regulations on coal plants, which were started in the 1990s by the Clean Air Task Force among others and now have hundreds of millions in funding from Bloomberg. If these campaigns hadn't been started, environmental NGOs in the US and Europe would plausibly be working on something much worse.

I don't think the notion of 'credit' is a useful one. At FP, when we were looking at orgs working on policy change, we initially asked them how much credit they should take for a particular policy change. They ended up saying things like "40%". I don't really understand what this means. It turned out to be best to ask them when the campaign and policy change would have happened had they not acted (obviously a very difficult question). It's best to couch things in terms of counterfactual impact throughout and not to convert into 'credit'.

Similarly with voting, if an election is decided by one vote and there are one million voters for the winning party, I think it is inevitably misleading to ask how much of the credit each voter should get. One naturally answers that they get one millionth of the credit, but this is wrong as a proposition about their counterfactual impact, which is what we really care about.

Indeed, focusing on credit can lead you to attribute impact in cases of redundant causation where an org actually has zero counterfactual impact. Imagine 100 orgs are working for a big policy change, and only 50 of them were necessary for the outcome (though this could be any combination of them, and they were all equally important). In this case, funding any one of the orgs had zero counterfactual impact, because the change would have happened without them. But on the 'credit approach', you'd end up attributing one hundredth of the impact to each of the orgs.
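
To make the contrast concrete, here's a toy sketch (made-up numbers) of the gap between the 'credit approach' and counterfactual impact in that redundant-causation case:

```python
# Toy model of the redundant-causation case above (hypothetical numbers).
# 100 orgs campaign for a policy change worth 1,000 units of value;
# any 50 of them would have been enough for it to pass.

TOTAL_VALUE = 1000
NUM_ORGS = 100
ORGS_NEEDED = 50

# 'Credit approach': split the total value evenly across all the orgs.
credit_per_org = TOTAL_VALUE / NUM_ORGS  # 10.0 units each

# Counterfactual impact of a single org: remove it and ask whether the
# outcome changes. With 99 orgs left and only 50 needed, it doesn't.
def counterfactual_impact(orgs_remaining: int) -> float:
    return TOTAL_VALUE if orgs_remaining < ORGS_NEEDED else 0.0

impact_of_one_org = counterfactual_impact(NUM_ORGS - 1)

print(credit_per_org)      # 10.0 - what the credit approach attributes
print(impact_of_one_org)   # 0.0  - the org's actual counterfactual impact
```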

Comment author: Ben_Todd 05 June 2018 05:00:49AM -1 points [-]

I agree - I was talking a bit too loosely. When I said "assign credit of 30% of X" I meant "assign counterfactual impact of 30% of X". My point was just that even if you do add up all the counterfactual impacts (ignoring that this is a conceptual mistake, as you point out), they rarely sum to more than 100%, so it's still not a big issue.

I'm not sure I follow the first paragraph about leveraging other groups.

Comment author: Ben_Todd 03 June 2018 10:41:58AM 3 points [-]

On the practical point, one thing that helps is that I think cases like these are fairly uncommon:

The previous example used donations because it’s easy and clear cut to make the case that this is the wrong move without getting into more difficult issues, but it generalizes to talent as well. For example, recently, Fortify Health was founded. Clearly the founders deserve 100% of the impact - without them, the project certainly would not have happened. But wait a second: both of them think that without Charity Science’s support, the project would definitely not have happened. So, technically, Charity Science could also take 100% credit. (Since from our perspective, if we did not help Fortify Health it would not have happened, so it is a 100% counterfactually caused by Charity Science project). But wait a second, what about the donors who funded the project early on (because of Charity Science’s recommendation)? Surely they deserve some credit for impact as well! What about the fact that without the EA movement, it would have been much less likely for Charity Science and Fortify Health to connect? With multiple organizations and individuals, you can very easily attribute a lot more impact than actually happens.

In our impact evaluations, and in my experience talking to others in the community, we would never give 100% of the impact to each group. For instance, if Charity Science didn't exist, the founders of Fortify might well have ended up pursuing a similar idea anyway - it's not as if Charity Science is the only group promoting evidence-based global health charities, and if Charity Science didn't exist, another group like it probably would have sprung up eventually. What's more, even if the founders didn't do Fortify, they would probably have done something else high-impact instead. So, the impact of Charity Science should probably be much less than 100% of Fortify's. And the same is true for the other groups involved.

At 80,000 Hours, we rarely claim more than 30% of the impact of an event or plan change, and we most often model our impact as a speed-up (e.g. we assume the career changer would have eventually made the same shift, but we made it come 0.5-4 years earlier). We also sometimes factor in costs incurred by other groups. All this makes it hard for credit to add up to more than 100% in practice.
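
As a rough illustration of the speed-up framing (the numbers here are purely illustrative, not our actual model):

```python
# Rough sketch of the speed-up framing (illustrative numbers only).
# If someone would have made the same career change anyway, the org's
# counterfactual impact is the extra impact during the years it brought
# the change forward, not the value of the whole career.

def speed_up_impact(extra_impact_per_year: float, years_brought_forward: float) -> float:
    """Counterfactual impact credited to the org under the speed-up model."""
    return extra_impact_per_year * years_brought_forward

# E.g. the change adds 10 units of impact per year and we bring it
# forward by 2 years -> 20 units, rather than the full career's worth.
print(speed_up_impact(extra_impact_per_year=10, years_brought_forward=2))  # 20
```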

Comment author: Halstead 31 May 2018 02:32:24PM 2 points [-]

I think this is a fair comment. I probably misinterpreted the main emphasis of the piece. I thought his main point was that each of the organisations is misstating its impact. I do think this was part of the argument, and I think a few others read it that way as well, given that several people started talking about dividing up credit according to the Shapley value. But I think the main part is about coordination, and I agree wholeheartedly with his points and yours on that front.

Comment author: Ben_Todd 03 June 2018 10:24:44AM 2 points [-]

I'm interested in what norms we can use to better deal with the practical case.

e.g. Suppose:

1) GiveWell does research for a cost of $6
2) TLYCS does outreach using the research for a cost of $6
3) $10 is raised as a result.

Assume that if GiveWell didn't do the research, TLYCS wouldn't have raised the $10, and vice versa.

If you're a donor working out where to give, how should you approach the situation?

If you consider funding TLYCS with GiveWell held fixed, then you can spend $6 to raise $10, which is worth doing. But if you consider funding GiveWell+TLYCS together, then you can spend $12 to raise $10, which is not worth doing.
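
To make the two margins concrete, here's a minimal sketch using just the figures from the example:

```python
# The two margins in the example above, using just the figures given.
GIVEWELL_COST = 6   # $ spent on the research
TLYCS_COST = 6      # $ spent on the outreach
MONEY_RAISED = 10   # $ raised only if both happen

# Margin 1: GiveWell's research is held fixed (treated as sunk).
# Funding TLYCS turns $6 into $10, so it looks worth doing.
tlycs_only_ratio = MONEY_RAISED / TLYCS_COST                   # ~1.67

# Margin 2: GiveWell and TLYCS considered as a single package.
# Funding both turns $12 into $10, so it doesn't look worth doing.
package_ratio = MONEY_RAISED / (GIVEWELL_COST + TLYCS_COST)    # ~0.83

print(tlycs_only_ratio > 1)  # True  - worth it on the narrow margin
print(package_ratio > 1)     # False - not worth it for the package
```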

It seems like the solution is that the donor needs to think very carefully about which margin they're operating at. Here are a few options:

A) If GiveWell will definitely do the research whatever happens, then you ought to give.
B) Maybe GiveWell won't do the research if they don't think anyone will promote it, so the two orgs are coupled, and that means you shouldn't fund either. (Funding TLYCS causes GiveWell to raise more, which is bad in this case.)
C) If you're a large donor who is able to cover both funding gaps, then you should consider the value of funding the two together, rather than each org individually.

It seems true that donors don't often consider situations like (B), which might be a mistake. Though sometimes they do - e.g. GiveWell considers the costs of malaria net distribution incurred by other actors.

Likewise, it seems like donors often don't consider situations like (C). E.g. if there are enough interactions, maybe the EA Funds should calculate the cost-effectiveness of a portfolio of EA orgs, rather than estimate the ratios for each individual org.

On the other hand, I don't think these cases where two orgs are both 100% necessary for 100% of the impact are actually that common. In practice, if GiveWell didn't exist, TLYCS would do something else with the $6, which would mean they raise somewhat less than $10; and vice versa. So, the two impacts are fairly unlikely to add up to much more than $12.

Comment author: Ben_Todd 22 May 2018 03:28:59AM 6 points [-]

Yes, each cause has different relative needs.

It's also more precise and often clearer to talk about particular types of talent, rather than "talent" as a whole - e.g. the AI safety space is highly constrained by the supply of people with deep expertise in machine learning, while global poverty isn't.

However, when we say "the landscape seems more talent constrained than funding constrained", what we typically mean is that, given our view of cause priorities, EA-aligned people can generally have a greater impact through direct work than earning to give, and I still think that's the case.

Comment author: Ben_Todd 08 May 2018 06:42:34AM *  4 points [-]

More reasons why sharing the mission of EA (which includes dedication as a component) is important in most roles at EA non-profits:

https://80000hours.org/articles/operations-management/#why-is-it-important-for-operations-staff-to-share-the-mission-of-effective-altruism

Comment author: MichaelPlant 13 April 2018 11:18:07PM 0 points [-]

I agree it's really complicated, but it merits some thinking. The one practical implication I take from it is: "if 80k says I should be doing X, there's almost no chance X will be the best thing I could do by the time I'm in a position to do it".

Comment author: Ben_Todd 20 April 2018 06:24:03AM 4 points [-]

That seems very strong - you're saying all our recommendations are wrong, even though we're already trying to take account of this effect.

Comment author: MichaelPlant 12 April 2018 10:16:26AM 16 points [-]

However, we can also err by thinking about a too narrow reference class

Just to pick up on this, a worry I've had for a while - which I don't think I'm going to do a very good job of explaining here - is that the reference class people use is "current EAs", not "current and future EAs". To explain: when I started to get involved in EA back in 2015, 80k's advice, in caricature, was that EAs should become software developers or management consultants and earn to give, whereas research roles, such as becoming a philosopher or historian, were low priority. Now the advice has, again in caricature, swung the other way: management consultancy looks very unpromising, and people are being recommended to do research. There's even occasional discussion (see MacAskill's 80k podcast) that, on the margin, philosophers might be useful. If you'd taken 80k's advice seriously and gone into consultancy, it seems you would have done the wrong thing. (Objection, imagining Wiblin's voice: but what about personal fit? We talked about that. Reply: if personal fit does all the work - i.e. "just do the thing that has the greatest personal fit" - then there's no point making more substantive recommendations.)

I'm concerned that people will funnel themselves into jobs that are high-priority now, in which they have only a small comparative advantage over other EAs, rather than jobs in which they will later have a much bigger comparative advantage over other EAs. At the present time, the conversation is about EA needing more operations roles. Suppose two EAs, C and D, are thinking about what to do. C realises he's 50% better than D at ops and 75% better at research, so C goes into ops because that's higher priority. D goes into research. Time passes and the movement grows. E now joins. E is better than C at ops. The problem is that C has taken an ops role, and it's much harder for C to transition to research. C only has a comparative advantage at ops in the first time period; thereafter he doesn't. Overall, it looks like C should just have gone into research, not ops.

In short, our comparative advantage is not fixed, but will change over time simply based on who else shows up. Hence we should think about comparative advantage over our lifetimes rather than the shorter term. This likely changes things.

Comment author: Ben_Todd 20 April 2018 06:22:00AM 2 points [-]

I agree with the "in short" section. I'm less sure about exactly how it changes things. It seems reasonable to think more about your comparative advantage relative to the world as a whole (taking that as a proxy for the future composition of the community), or maybe just to think more about which types of talent will be hardest to attract in the long term. I don't think much of the change in advice about etg and consulting was due to this exact mistake.

One small thing we'll do to help with this is ask people to project the biggest talent shortages at longer time horizons in our next talent survey.

Comment author: Alex_Barry 13 April 2018 11:09:17AM *  15 points [-]

Thanks for writing this up! This does seem to be an important argument not made often enough.

To my knowledge this has been covered a couple of times before, although not as thoroughly.

Once by the Oxford Prioritization Project; however, they approached it from the other end, instead asking "what absolute percentage x-risk reduction would you need to get for £10,000 for it to be as cost-effective as AMF?" and finding an answer of 4 x 10^-8%. I think your model gives £10,000 as reducing x-risk by 10^-9%, which fits with your conclusion of close to, but not quite as good as, global poverty.

Note they use 5% before 2100 as their risk, and do not consider QALYs, instead only looking at 'lives saved', which likely biases them against AMF, since it mostly saves children.
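
(I don't know exactly what inputs they used, but as a sanity check, a number of that order falls out of a simple break-even sketch under assumed figures - AMF at roughly £3,400 per life saved, and an existential catastrophe costed at the current ~7.5 billion lives, ignoring future generations:)

```python
# Break-even sketch with assumed inputs (not necessarily what the OPP used):
# - AMF saves a life for roughly £3,400
# - an existential catastrophe is costed at the current world population,
#   ~7.5 billion lives (ignoring future generations and QALY weighting)

AMF_COST_PER_LIFE = 3_400     # £, assumed
LIVES_AT_STAKE = 7.5e9        # assumed
DONATION = 10_000             # £

lives_matched = DONATION / AMF_COST_PER_LIFE            # ~2.9 lives via AMF
break_even_reduction = lives_matched / LIVES_AT_STAKE   # absolute probability

print(f"{break_even_reduction:.1e}")          # ~3.9e-10
print(f"{break_even_reduction * 100:.1e}%")   # ~3.9e-08%, the same order as 4 x 10^-8%
```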

We also calculated this as part of the Causal Networks Model I worked on with Denise Melchin at CEA over the summer. The conclusion is mentioned briefly here under 'existential effectiveness'.

I think our model was basically the same as yours, although we were explicitly interested in the chance of existential risk before 2050, and did not include probabilistic elements. We also tried to work in QALYs, although most of our figures were more bullish than yours. We used by default:

  • 7% chance of existential risk by 2050, which in retrospect seems extremely high, but I think it was based on a survey from a conference.
  • The world population in 2050 will be 9.8 billion, and each death will be worth -25 QALYs (so 245 billion QALYs at stake, very similar to yours).
  • For the effectiveness of research, we assumed that 10,000 researchers working for 10 years would reduce x-risk by 1 percentage point (i.e. from 7% to 6%). We also (unreasonably) assumed each researcher-year cost £50,000 (where I think the true number should be at least double that, if not much more).
  • Our model then had various other complicated effects, modelling both 'theoretical' and 'practical' x-risk based on government/industry willingness to use the advances, but these were second-order and can mostly be ignored.

Ignoring these second-order effects, then, our model suggested it would cost £5 billion to reduce x-risk by 1 percentage point, which corresponds to a cost of about £2 per QALY. In retrospect this should be at least 1 or 2 orders of magnitude higher (increasing the researcher cost and decreasing the x-risk probability by an order of magnitude each).
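
For reference, a minimal sketch reproducing that arithmetic from the inputs listed above:

```python
# Reproducing the headline figures above from the inputs we used.
RESEARCHERS = 10_000
YEARS = 10
COST_PER_RESEARCHER_YEAR = 50_000          # £ (the too-low default mentioned above)

POPULATION_2050 = 9.8e9
QALYS_PER_DEATH = 25
QALYS_AT_STAKE = POPULATION_2050 * QALYS_PER_DEATH    # 245 billion QALYs

total_cost = RESEARCHERS * YEARS * COST_PER_RESEARCHER_YEAR   # £5 billion
expected_qalys = 0.01 * QALYS_AT_STAKE     # a 1 percentage point cut in x-risk
cost_per_qaly = total_cost / expected_qalys

print(f"£{total_cost:,.0f}")               # £5,000,000,000
print(f"£{cost_per_qaly:.2f} per QALY")    # £2.04 per QALY
```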

I find your x-risk chance somewhat low; I think 5% before 2100 seems more likely. Your cost-per-percent to reduce x-risk also works out much higher than the one we used, but seems more justified (ours was just pulled out of the air as 'reasonable sounding').

Comment author: Ben_Todd 20 April 2018 06:07:35AM *  5 points [-]

I also made a very rough estimate in this article: https://80000hours.org/articles/extinction-risk/#in-total-how-effective-is-it-to-reduce-these-risks Though this estimate is much better and I've added a link to it.

I also think x-risk over the century is over 1%, and we can reduce it much more cheaply than your guess, though it's nice to show it's plausible even with conservative figures.

Comment author: Nick_Beckstead 26 March 2018 06:33:12PM *  2 points [-]

I am a Program Officer at Open Philanthropy who joined as a Research Analyst about 3 years ago.

The prior two places I lived were New Brunswick, NJ and Oxford, UK. I live in a house with a few friends. It is a 25-30 minute commute door-to-door via BART. My rent and monthly expenses are comparable to what I had in Oxford but noticeably larger than what I had in New Brunswick. I got pay increases when I moved to Open Phil, and additional raises over time. I’m comfortable on my current salary and could afford to get a one-bedroom apartment if I wanted, but I’m happy where I am.

Overall, I would say that it was an easy adjustment.

Comment author: Ben_Todd 27 March 2018 04:35:31AM 2 points [-]

Surely rent is much higher than in Oxford on average? It's possible to get a great place in Oxford for under £700 per month, while a comparable place in SF would be $1300+. Food also seems about 30% more expensive, and in Oxford you don't have to pay for a commute. My overall guess is that $80k p.a. in SF is equivalent to about £40k p.a. in Oxford.
