Comment author: DardanBastiaan 27 May 2017 12:24:47PM *  1 point [-]

For banks and big corporations to want to join, there probably needs to be a greater sense of assurance that their signing up will actually lead to the publicity you suggest there would be. With that in mind, it's plausible that 1. cancer charities would do better than an investment in something Westerners aren't personally affected by, such as schistosomiasis, and 2. one big check to one big organization will garner more attention than many checks to a myriad of organizations. To hammer home that latter point: you could refer to past examples where big donations to some charity led to a big press event where the donor is thanked extensively (a Google search should turn up plenty of results). Lastly, it's plausible that big organizations are more likely to listen and cooperate if they are asked by a big, known charity, which you will have contacted and gotten to initiate that process, rather than by some obscure, new, small organization without any track record whatsoever.

Have you considered implementing this in already existing charity structures of organizations? Quite a few organizations already have partnerships, e.g.:, and this could fit neatly into that.

Comment author: Ben_Todd 27 May 2017 03:19:31AM 1 point [-]

Does suggest that a cost-effectiveness estimate should just try to quantify those outliers directly instead of going through a translation.

Yes, that's the main way I think about our impact. But I think you can also justify it on the basis of getting lots of people to make moderate changes, so I think it's useful to consider both approaches.

Comment author: Jon_Behar 26 May 2017 08:58:13PM 0 points [-]

Fair point about outliers driving the mean. Does suggest that a cost-effectiveness estimate should just try to quantify those outliers directly instead of going through a translation.
E.g. if "some of the 10s are likely to donate millions to charity within the next few years", just estimate the value of that rather than assuming that giving will on average equal 10x GWWC's estimate for the value of a pledge.

Comment author: casebash 26 May 2017 10:38:11AM 1 point [-]

Very happy to see such an ambitious plan! I will be surprised if you manage to pull it off, but then again the vast majority of good that is achieved will come from a few projects that work amazingly well.

In response to EA Forum FAQ
Comment author: 1amdmode 26 May 2017 09:42:01AM 0 points [-]
Comment author: Ben_Todd 26 May 2017 04:44:46AM 0 points [-]

My hunch is also that 80,000 Hours and most organisations have diminishing marginal cost-effectiveness. As far as I know from our conversations, on balance this is Sindy's view too.

You need to be very careful about what margin and output you're talking about.

As I discuss in my long comment above, I think it's unclear whether our annual ratio of cost per plan change will go up or down, and I think there's a good chance it continues to drop, as it has the last 4 years.

On the other hand, if you're talking about total value created per dollar (including all forms of value), then that seems like it's more likely to be going down. It seems intuitive that our earliest supporters who made 80k possible had more impact than supporters today.

Though even that's not clear. You could get increasing returns due to economies of scale or tipping point effects and so on.

Comment author: Ben_Todd 26 May 2017 04:40:55AM 0 points [-]

Wait, what? The costs are also increasing, it's definitely possible for marginal cost effectiveness to be lower than the current average.

Yes, agree with this. Like I say in the long comment above, I think that giving money to us right now probably has diminishing returns because we already made our funding targets for this year.

Comment author: Ben_Todd 26 May 2017 04:39:27AM 0 points [-]

Lastly, I think your model right now assumes 80K has 100% responsibility for all their career changes. Maybe this is completely fine because 80K already weights their reported career change numbers for counterfactuality? Or maybe there's some other good reason to not take this into account? I admit there's a good chance I'm missing something here, but it would be nice to see it addressed more specifically.

I don't think that's true, because the GWWC pledge value figures have been counterfactually adjusted, and because we don't count all of the people we've influenced to take the GWWC pledge.

More discussion here:

While 1 impact-adjusted change is approximately the value of a GWWC pledge, that doesn't mean it is equal in both mean and standard deviation as your model suggests, since the plan changes involve a wide variety of different possibilities.

Agree with that - the standard deviation should be larger.

Comment author: Ben_Todd 26 May 2017 04:36:39AM 1 point [-]

Hi Jon,

I would have liked to have seen a discussion of sensitivity to assumptions.

I agree - I think, however, you can justify the cost-effectiveness of 80k in multiple, semi-independent ways, which help to make the argument more robust:

FWIW, I’m pretty dubious about the treatment of plan changes scored 10. The model implies each of those plan changes is worth >$500k...If a university student tells me they're going to "become a major advocate of effective causes" (sufficient for a score of 10), I wouldn't think that has the same expected value as a half million dollars given to AMF today.

Yes, we only weight them at 10, rather than 40. However, here are some reasons the 500k figure might not be out of the question.

First, we care about the mean value, not the median or threshold. Although some of the 10s will probably have less impact than 500k to AMF now, some of them could have far more. For instance, there's reason to think GPP might have had impact equivalent to over $100m given to AMF.

You only need a small number of outliers to pull up the mean a great deal.
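A toy numerical illustration of this point (all figures are made up for the sake of the example, not 80k's actual data): if nine plan changes are each worth a modest $50k-equivalent but one is a $10m outlier, the mean clears the $500k bar even though the typical (median) case is far below it.

```python
# Hypothetical impact values (in $-to-AMF equivalents) for ten
# plan changes scored 10: nine modest cases plus one large outlier.
impacts = [50_000] * 9 + [10_000_000]

mean_impact = sum(impacts) / len(impacts)
median_impact = sorted(impacts)[len(impacts) // 2]

print(f"mean:   ${mean_impact:,.0f}")    # mean:   $1,045,000
print(f"median: ${median_impact:,.0f}")  # median: $50,000
```

The mean ($1.045m) is pulled above the $500k threshold entirely by the single outlier, while the median stays at $50k.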

Less extremely, some of the 10s are likely to donate millions to charity within the next few years.

Second, most of the 10s are focused on x-risk and meta-charity. Personally, I think efforts in these causes are likely at least 5-fold more cost-effective than AMF, so they'd only need to donate 100k to have as much impact as 500k to AMF.

Comment author: Ben_Todd 26 May 2017 04:29:17AM *  0 points [-]

Hi there,

Thanks for writing this. A couple of quick comments (these are not thoroughly checked - our annual reviews are the more reliable source of information):

How should we think about all these things they could do with additional funding now?

Given that we made the higher end of our funding targets, I'd guess that giving us money right now has diminishing returns compared to those we received earlier in the year. However, they are not super diminishing. First, they give us the option to grow faster. Second, if we don't take that option, then the worst case scenario is that we raise less money next funding round. This means you funge with our marginal donor in early 2018 (which might well be Open Phil), while also saving us time, and giving us greater financial strength in the meantime, which helps to attract staff.

Will our returns diminish from 2016 to 2017? That's less clear.

If you're looking at the ratio of plan changes to costs each year, as you do in your model, then there's a good chance the ratio goes down in 2017. Past investments will pay off, we learn how to be more efficient, and we get economies of scale. More discussion here:

On the other hand, if we invest a lot in long-term growth, then the short-term ratio will go up.

This shows some of the limitations of looking at the ratio of costs to plan changes each year, which we discuss more here:

If you're reading this and trying to evaluate 80,000 Hours, then I'd encourage you to consider other questions, which are glossed over in this analysis but are similarly or more important, such as:

1) Is the EA community more talent constrained than funding constrained?

2) Will 80k continue to grow rapidly?

3) How pressing are the problems of poor career choice and promoting EA?

4) How effective is AMF vs other EA causes? (80k isn't especially focused on global poverty)

5) Is 80k a well-run organisation with a good team?

You can see more of our thoughts on how to analyse a charity here:

Comment author: Jon_Behar 26 May 2017 12:07:32AM 2 points [-]

Thanks for sharing this analysis (and the broader project)!

Given the lengthy section on model limitations, I would have liked to have seen a discussion of sensitivity to assumptions. The one that stood out to me was the estimate for the value of a GWWC Pledge, which serves as a basis for all your calcs. While it certainly seems reasonable to use their estimate as a baseline, there’s inherently a lot of uncertainty in estimating a multi-decade donation stream and adjusting for counter-factuals, time discounting, and attrition.

FWIW, I’m pretty dubious about the treatment of plan changes scored 10. The model implies each of those plan changes is worth >$500k (again, adjusted for counterfactuals, time discounting, and attrition), which is an extremely high hurdle to meet. If a university student tells me they're going to "become a major advocate of effective causes" (sufficient for a score of 10), I wouldn't think that has the same expected value as a half million dollars given to AMF today.

Comment author: Sindy_Li 25 May 2017 11:09:55PM *  0 points [-]

On increasing and decreasing (marginal) returns:

I see that you said "claiming that expected returns are normally diminishing is compatible with expecting that true returns increase over some intervals. I think that true returns often do increase over some intervals, but that returns generally decrease in expectation."

I wasn't sure why this would be true in a model that describes the organization's behavior, so I spent some time thinking it through. Here is a way to reconcile increasing returns and decreasing expected returns, with a graph. Note that when talking about "funding" here (and the x-axis of the graph) I mean "funding the organization will receive over the next planning period, i.e. calendar year", and assume there's no uncertainty over funding received, same as in Max's model.

I think it's reasonable to assume that "increasing returns" in organization's impact often come from cases of "lumpy investments", i.e. things with high impact and high fixed costs. In this case nothing would happen until a certain level of funding is reached, and at that point there is a discrete jump in impact. For the sake of the argument let's assume that everything the organization does has this nature (we'll relax this later). So you'd expect the true returns function to be a step function (see the black curve on graph).

How does the organization make decisions? First, let's assume that these "lumpy investments" (call them "projects") aren't actually 0 or 1; rather, the closer the level of funding is to the "required" level, the more likely the project is to happen (e.g. maybe AMF is trying to place an order for bed nets and the minimum requirement is 1000 nets, but it's possible that they can convince the supplier to make an order of 900 nets with probability less than 1). For simplicity let's assume the probability grows linearly (we'll relax this later). Then the expected returns function is actually the red piecewise linear function in the graph. Note that overall the marginal returns are still weakly diminishing (though constant within each project), because given the red expected returns function the organization would choose to first do the project with the highest marginal return (i.e. slope), then the second highest, etc.
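A minimal sketch of this greedy-ordering argument, with made-up project numbers: each lumpy project is linearized (partial funding succeeds with probability proportional to funding up to the required cost), so each project contributes a linear segment with slope impact/cost, and the organization funds segments in order of decreasing slope.

```python
# Hypothetical projects as (impact, required_cost) pairs -- illustrative
# numbers only, not drawn from any real organization.
projects = [(90, 30), (40, 40), (100, 20)]

# The organization does the highest-marginal-return project first:
# sort by slope = impact / required_cost, descending.
ordered = sorted(projects, key=lambda p: p[0] / p[1], reverse=True)
slopes = [impact / cost for impact, cost in ordered]

print(slopes)  # [5.0, 3.0, 1.0]

# The resulting piecewise linear expected returns function has
# non-increasing slopes, i.e. weakly diminishing marginal returns,
# even though the underlying "true" returns are step functions.
assert all(a >= b for a, b in zip(slopes, slopes[1:]))
```

The check at the end holds by construction: sorting by slope guarantees the piecewise linear expected returns function is concave, which is the "organizations do the highest-return things first" conclusion.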

Note: We assume the probability grows linearly. If we relax this assumption, things get more complicated. I illustrate the case where probabilities grow in a convex way within each project with the ugly green curves (note that this also covers the case with no uncertainty in the project happening or not, but rather the project has a "continuous" nature and increasing marginal returns). It's true that you cannot call the whole thing concave (and I don't know if mathematicians have a word to describe something like this). But from the perspective of a donor who, IN ADDITION to the model here that assumes certainty in funding levels, has uncertainty over how much funding the organization has, the "expected-expected" returns function they face (with expectation over funding level and impact) would probably be closer to the earlier piecewise linear thing, or concave. If the probabilities grow in some weird ways that are not completely convex (note that this also covers the case with no uncertainty in the project happening or not, but rather the project has a "continuous" nature and weirdly shaped, non-convex marginal returns), things may get more complicated (e.g. switching projects half way may happen if the organization always spends the next dollar on the next thing with highest marginal return) -- maybe we should abandon such possibilities since they are unintuitive.

Note: If the organization does some projects that look more like linear in the relationship between impact and funding, 1) we can still use the red piecewise linear graph, and organizations will still start with projects with the highest slopes; 2) at a fine level things are still discrete so we'll be back to (mini) step functions.

Note: We also assumed the only uncertainty here is whether a project would happen at a funding level less than "required". There could also be uncertainty over impact, conditional on the project happening -- this is not in our model, but my guess is it shouldn't change the main results much (of course it might depend on the shape of the new layer of uncertainty, and I haven't thought about it carefully).

All of the above is essentially based on the old idea that organizations do highest returns things first. The main addition is to look at a model where there are discrete projects (with elements of increasing returns) and still arrive at the same general conclusion.

I don't know how many people find this useful, but I was very confused by this issue (and said some incoherent things in my earlier comments, which I've deleted to avoid confusing people), and found that I had to think through what the organization actually does in the case of lumpy investments.

Other important issues that are related but out of the scope of this discussion include how organizations and donors act under uncertainty over donation to be received by the organization.

Comment author: MichaelPlant 25 May 2017 06:02:03PM 0 points [-]

I agree on your second point that you'd want to adjust all the models, I was just hoping you could give me a reference. My thought is that depression removing 0.65 of someone's happiness for a year (i.e. going from 8/10 to a 2.5/10) seems about right on the life satisfaction scores. This means that everything else should have a much lower comparative weight, rather than making depression worse than death. For instance, maybe blindness really has a weight of 0.1 rather than 0.5 as I believe it does at present.

Comment author: adom_hartell 25 May 2017 05:05:28PM *  0 points [-]

Ah, yes. Agreed. Thanks for the clarification.

Comment author: MaxCarpendale 25 May 2017 04:33:07PM 0 points [-]

Another reason it might make sense to ignore flow-through effects is when you don't know whether they would be positive or negative. If you were absolutely unsure about the flow-through effects, and figuring them out seemed impossible, then it seems right that they would balance out and that you can expect zero value from them. Insofar as this is the case, you should ignore them.

Comment author: Maxdalton 25 May 2017 01:49:44PM 1 point [-]

In response to your first paragraph, I think it's true that GiveWell will have more information about any changes in the returns function. For the reasons given in the second post, I think it's unlikely that GiveWell charities do have inflection points in their returns functions. I'm not sure from GiveWell's writing whether they think that there are inflection points or not (in particular, I don't think they take a clear stance on this in the linked post).

I think your second paragraph is answered by footnote 1 of the first post. I don't fully understand how your third and fourth paragraphs relate to the posts. Are you simply arguing that a fuller analysis would incorporate the size of individual donations, not the total level of funding? This seems like a plausible extension.

Comment author: Maxdalton 25 May 2017 01:38:48PM *  1 point [-]

To disagree slightly with my co-author here... As I understand you, you are conditioning on A being able to expand capacity.

I think what is going on is that you are asking "Should we give to Organization A or Organization B?". I think your analysis is roughly right as a response to this question. We are not claiming that Organization A is more effective than Organization B.

Instead, what we're asking at this stage in the paper is more like "Is the total counterfactual impact of giving to Organization A steeply declining at any point?". We think that the answer is probably "No" for the reasons given. But note that this doesn't imply that one should always give to Organization A: if A starts off more effective, but returns gradually diminish, then there will still be some point at which it makes sense to start donating to organization B.

Overall, I think there isn't a disagreement here (although I may have misunderstood), but this is a sign that we should have been clearer in this section - I'll think about a rewrite.

Comment author: Owen_Cotton-Barratt 25 May 2017 10:24:17AM 0 points [-]

Fair question. This argument is all conditioned on A not actually having good ways to expand capacity -- the case is that even then the funds are comparably good given to A as elsewhere. The possibility of A in fact having useful expansion might make it noticeably better than the alternative, which is what (to my mind) drives the asymmetry.

Comment author: arunbharatula 25 May 2017 07:08:54AM 0 points [-]

Good idea

Comment author: Peter_Hurford  (EA Profile) 25 May 2017 02:49:13AM 0 points [-]

Found it, thanks!
