In this short post I outline a very simple (new?) way of estimating the expected value of systemic changes when you are very uncertain of how much they'd cost. It seems to have the result of making systemic changes look much more viable than they otherwise would.

Suppose we want to do something about poverty, and we're torn between Give Directly, an 'atomic' intervention (pejoratively, a 'sticking plaster'), and international legal reform, a 'systemic' intervention. I'm not going to get tied down to any specifics of the latter, because it's only for illustration, but the sort of things I have in mind are those mentioned by Thomas Pogge - such as the resource privilege (p11) and the borrowing privilege (p13) - and Leif Wenar in Blood Oil; we could also think about trying to reduce subsidies to rich-world farmers, such as the EU's Common Agricultural Policy, which make it much harder for poor-world farmers to compete.

Let's say we know how cost-effective Give Directly are. Assume Give Directly spends $730 to take one person out of poverty for a year ($2 × 365 days). Suppose this increases someone's happiness by 0.5 units on a -1 to +1 happiness scale, where 0 = death/neutral/unconsciousness and 1 = 'max' happiness (maximum average sustainable happiness, or something like that). So the cost per 'happiness-adjusted life year' (HALY) is $1,460.
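(As a quick check, here's that arithmetic as a tiny Python sketch; the numbers are just the illustrative assumptions above, not real estimates.)

```python
# Illustrative Give Directly assumptions from the text (not real data)
cost_per_person_year = 2 * 365   # $730 to keep one person out of poverty for a year
happiness_gain = 0.5             # assumed gain on the -1 to +1 scale

cost_per_haly = cost_per_person_year / happiness_gain
print(f"Cost per HALY: ${cost_per_haly:,.0f}")  # Cost per HALY: $1,460
```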

Now, suppose we’re wondering how cost-effective it would be to run a campaign that's trying to bring about some of the systemic changes I just mentioned. This is tricky to do, because we really don't know what comparisons to draw on (which other campaigns are similar? how similar are they to ours?). It feels like we're engaged in whimsical speculation when we create the numbers.

However, we can do things the other way around and create a ceiling-cost estimate. We know how cost-effective Give Directly are, so we can ask: what's the maximum we could expect to spend on the systemic-change campaign and have it still turn out to be as cost-effective?

Some numbers: suppose there are 1bn people in poverty, and changing the international system somehow (to be specified) increases their happiness by 0.1 units a year, i.e. we're assuming this has only 1/5th of the impact on each individual that Give Directly does. Therefore, if we pulled it off, it would do 100 million HALYs' worth of good (0.1 × 1bn). Give Directly produces 1 HALY for $1,460 (from the assumption above). Therefore, if we spent anything less than $146 billion on a successful campaign/lobbying group, it would be more cost-effective to do that than to give money to Give Directly directly (100m × $1,460; see the maths in Figure 1).
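Here's a minimal sketch of that ceiling-cost calculation in Python (the `ceiling_cost` helper is just my illustrative framing of the reasoning above, not a standard tool):

```python
def ceiling_cost(people_affected, happiness_gain_per_year, years, benchmark_cost_per_haly):
    """Maximum spend at which a campaign is still as cost-effective as the benchmark."""
    halys = people_affected * happiness_gain_per_year * years
    return halys * benchmark_cost_per_haly

GIVE_DIRECTLY_COST_PER_HALY = 1_460  # from the assumptions above

print(ceiling_cost(1_000_000_000, 0.1, 1, GIVE_DIRECTLY_COST_PER_HALY))
# 146000000000.0, i.e. a $146bn ceiling
```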

 

| Intervention | Cost of intervention ($) | People affected | Years of happiness | Happiness increase/yr | HALYs | Cost per HALY |
| --- | --- | --- | --- | --- | --- | --- |
| Give Directly | 730 | 1 | 1 | 0.50 | 0.50 | $1,460 |
| International reform | 146,000,000,000 | 1,000,000,000 | 1 | 0.10 | 100,000,000 | $1,460 |
| Int reform (more realistic) | 146,000,000,000 | 1,000,000,000 | 10 | 0.01 | 100,000,000 | $1,460 |

Figure 1.

What do we do with this $146bn figure we've just made up? We ask ourselves whether we believe the expected cost for the systemic change to be successful would be higher or lower than that number. This seems like a much easier, back-of-the-mental-envelope subjective judgement to make.

Suppose we fund this campaign and it ends up costing $160bn before it's successful, rather than $146bn. Well, then our campaign turned out to be less cost-effective than Give Directly, but not by a lot. Suppose instead we reflect and conclude it would, in expectation, cost $100m to achieve the systemic change. Then the systemic change would be 1,460 times more cost-effective than Give Directly.

The above calculation was unrealistic in a couple of ways. First, to keep things simple, I assumed the international legal reform would last for one year. Actually, that's unlikely: the reform would have an ongoing, annual effect. However, we might think the reform would have happened anyway, so the correct counterfactual is how many years earlier we make it occur. Let's guess it happens 10 years before it otherwise would. Second, I also assumed it would have an effect of 0.1 HALYs per person per year. This might be high, so let's assume it has 1/10th of that impact. This scenario is represented in the third row of Figure 1. Notice the ceiling cost is still the same, because that's how I rigged it. The upshot: even very expensive, long-term systemic changes with only a minor impact per person can believably look more cost-effective than atomic interventions.
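Plugging the 'more realistic' assumptions into the same illustrative `ceiling_cost` sketch from above shows why the ceiling doesn't move:

```python
# 1bn people, 0.01 HALYs/person/year, reform brought forward by 10 years
print(ceiling_cost(1_000_000_000, 0.01, 10, GIVE_DIRECTLY_COST_PER_HALY))
# 146000000000.0 again: the smaller annual effect and the longer duration cancel out
```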

Does this mean EAs should abandon all atomic interventions in favour of systemic ones? Not so fast.

First, you'd need a plausible story of how you could spend the money to bring about the systemic change. I haven't provided more than a skeleton story here. But when we see how promising systemic changes could be, that should cause us to look a bit harder for potential systemic interventions. (For those interested, I discuss what seems to be a more plausible candidate for a systemic change campaign, drug policy reform, in this post. I use poverty here because it requires less explanation.)

Second, you might think it would cost $146bn, in expectation, for success, but that a much smaller campaign would be proportionally much less likely to succeed; a token effort would be a waste. For instance, the expected value of a campaign with $1.46bn of funding could be less than 1/100th that of one with $146bn, i.e. 100 times its size. The expected value of additional resources may well not be constant; there might be increasing marginal returns. So donors would need to think about what the effect of their extra contribution would be. Donors might realise they could do much more good together, targeting the same project, than they do targeting different projects. This would be a reason to co-ordinate. Curiously, this seems to work the other way around for atomic interventions: it's hard to believe that, if you tried to give Give Directly $146bn, they wouldn't run into diminishing marginal returns.

That said, these very simple calculations suggest EAs should think much harder about systemic interventions; at least, if you want to fund an atomic intervention, it would be an oversight to assume the systemic alternative won't be more cost-effective if you haven't even drawn up some simple cost-effectiveness guesses.

EAs already tend to use expected value reasoning, and accept that small chances of very high value events are worth taking seriously, e.g. when it comes to extinction risk. There doesn't seem to be anything particularly suspicious about systemic changes per se; they're just a small chance of affecting loads of people. Everyone should accept that the abolition of slavery, which affected millions, was a substantial systemic change that had a large positive impact. It's at least hypothetically possible that funding it, at the time, would have been the best thing to do.

One result of this way of thinking is that systemic changes now look more effective, in general, than atomic ones. This could well be true. Certainly, one criticism of EA is that it's ignored systemic changes. The next step would be comparing various types of systemic changes against each other to see which look the most plausible.

(I'm grateful to Will Rooney for reading this post in advance and providing some comments)

Comments

Quick comment - I broadly agree. I think if you want to maximise impact within global poverty, then you should first look for potential large-scale solutions, such as policy change, even if they have weak evidence behind them. We might not find any, but we should try hard first. It's basically hits based giving. https://www.openphilanthropy.org/blog/hits-based-giving

In practice, however, the community members who agree with this reasoning have moved on to other problem areas. This leaves an odd gap for "high-risk global poverty" interventions. GiveWell has looked into some options here, though, and I hope they'll do more.

"the community members who agree with this reasoning, have moved on to other problem areas"

I've seen this problem come up with other areas as well. For instance, funding research to combat ageing (e.g. the SENS Foundation) gets little support, because basically anyone who will "shut up and multiply" - concluding that SENS is higher EV than GiveWell charities - will use the same logic to conclude that AI safety is higher EV than GiveWell charities or SENS.

It seems there has been a build-up of a handful of people who would be willing to support organizations like SENS, and who can donate to anti-ageing when they think it's the more impactful intervention on the margin. Another factor is that, with the Open Philanthropy Project and others granting to EA organizations more than ever, fewer organizations have room for more funding, meaning money can be donated to a charity like SENS instead. I know SENS received a lot of funding in 2017, and I'm wondering whether that might be why.

Thanks for writing this up – the ceiling-cost estimate seems like a valuable tool for comparing interventions across different cause areas.

Second, I also assumed it would have an effect of 0.1 HALYs per person per year. This might be high, so let's assume it has 1/10th of that impact.

0.01 HALY/person/year on average still seems quite high. We're estimating the average impact across a billion people, and any sort of systemic reform is going to have an enormous number of impacts (of varying magnitudes, in both directions). Attributing 0.01 HALY on average sorta assumes there aren't any really big negative impacts (more precisely, that any negative impacts are insubstantial compared to the positive impacts).

It also seems difficult to separate out the impacts that are appropriate to attribute to the systemic reform from all the other effects that are going on in the background.

All this is to say that I think arriving at believable average-impact estimates for systemic interventions is tricky. It's probably one of the harder parts of making good ceiling-cost estimates.

These numbers are just illustrative and to get people thinking, rather than to be taken literally.

Nevertheless, in some sense, it's not the 0.01 that's so important, it's the ratio between that and the Give Directly score. I'm assuming the intervention, whatever it is, has 1/50th of the effect Give Directly does. That seems pretty believable: a massive campaign to restructure the trade and subsidy system could do quite a bit to shift people out of poverty.

We could make the average effect 1/500th of the GD average effect and the mystery campaign would still be cost-effective up to $14.6bn. That's still a lot of sauce.

But yes, if you don't think the intervention would do good, that would be a substantial reason to dodge it (presumably in favour of another systemic intervention).

You seem to be assuming that the "bad case" for systematic reform is that it has, say, 1/500th of the benefit of the GD average effect. But I don't think that's the bad case for most systematic reforms: the bad case is that they're actively harmful.

For me, at least, the core of my problem with "systematic reform" is that we're "clueless" about its effects - it could have good effects, but could also have quite bad effects, and it's extremely hard for us to tell which.

I think the ceiling cost estimate is a nice way of framing the comparison, but I agree with Milan that the hard bit is working out the expected effect.

There are some systemic reforms that seem easier to reason about than others. Getting governments to agree on a tax scheme such that the Googles and Facebooks of the world can't hide their profits seems like a pretty good idea. Their money piles suggest that they aren't hurting for cash to invest in innovation. It is hard to see the downside.

The upside is going to be less in the developing world than the developed (due to more profits occurring in the developed world), so it may not be ideal. The Tax Justice Network is something I want to follow more. They had a conversation with GiveWell.

There's a sliding scale of what people consider "systematic reform". Often people mean things like "replace capitalism". I probably wouldn't even have classed drug policy reform or tax reform as "systematic reform", but it's a vague category. Of course the simpler ones will be easier to analyze.

the core of my problem with "systematic reform" is that we're "clueless" about its effects - it could have good effects, but could also have quite bad effects, and it's extremely hard for us to tell which.

I think this can also apply to the atomic interventions EAs tend to like, namely those from GiveWell. You can tell a story about how Give Directly increases meat consumption, so that's bad. For life-saving charities, there's the same worry about meat, in addition to concerns about overpopulation. I'm not claiming we can't sensibly work through these and conclude they all do more good than bad, only that cluelessness isn't just a systemic-intervention worry.

Frame it as a matter of degree if you like: I think we're drastically more clueless about systematic reform than we are about atomic interventions.

FWIW, I think this is way too broad. Even if, a priori, systemic interventions are more cluelessness-y (?) than atomic ones, it's not that useful to talk about them as a category. It would be more useful to argue the toss on particular cases.

Sure - I don't think "systematic change" is a well-defined category. The relevant distinction is "easy to analyze" vs "hard to analyze". But in the post you've basically just stipulated that your example is easy to analyze, and I think that's doing most of the work.

So I don't think we should conclude that "systematic changes look much more effective" - as you say, we should look at them case by case.

How does this differ from the more general practice of looking at the upper and lower bounds of the costs and impact of interventions and seeing how the interventions compare? This seems like something EAs do a lot and something I've personally begged systemic change advocates to do repeatedly (no takers so far).

I think I'm making a different point. The context to the post was that it's quite hard to estimate the EV of systemic interventions without a context, but it's much easier to guess whether you think they'd cost more or less than a scaled-up atomic intervention.

I'm all for sensitivity analysis too. This was more just to make the initial comparisons easier. Many EAs seem to wave away systemic interventions, often without even trying to run these sorts of numbers.

In many cases a big concern with systemic change is that, especially when political, it involves playing zero-sum, or negative-sum games. For example, if I think that some international legal reform X is useful, but you think it would be detrimental, we might both donate money to campaigns fighting for our side of the issue and cancel each other out, meaning the money is wasted. It would have been better for us to realise this before donating to the political campaigns and give our money elsewhere.

Note this is not the same as just saying that people might disagree on which cause is the most effective. If I think that funding a vaccine program is most effective, and you think that funding a malaria-net program is more effective we can both donate without stopping the other.

Not all systemic change is of this adversarial type, involving campaigning against other people who disagree and will spend money in the other direction. But I think this is a problem which overwhelmingly affects attempts at systemic change rather than atomic change. Systemic change usually involves changing some rules or reforming some institution - which doesn't inherently need to consume lots of resources - unless we need to spend money campaigning against some people on the other side. Conversely, atomic change generally involves interventions which require resources even if everybody agrees it is a good idea (such as buying malaria nets or creating vaccines).

The conclusion here is that when calculating the value of spending money on systemic change, you need to account for other people reacting by spending their money fighting against you - money they might otherwise have spent on something useful.

I don't think this is quite right. The distinction you seem to be drawing on is 'people counteract your action' vs 'people don't', rather than 'systemic' vs 'atomic'. An example of two atomic interventions counteracting each other would be saving lives and family planning to reduce population size; the latter wants there to be generally fewer people, while the former keeps more people alive. Hence there's a natural tension there (although both could be good under certain circumstances and views).

It's true we need to consider whether people will counteract us. However, the scenario you suggest - where it would be better for us, who are for legal reform X, to engage in a moral trade with those who are against it, with both sides agreeing to do something else - actually requires that we could get the other side to agree. If we can't get the other side to agree to the moral trade, we need to think "what is my counterfactual impact given they'll fight me?" vs "what is the counterfactual impact of the other stuff I could do?"

You're right to point out that it could be the case that if you do X, people will try to make not-X happen, whereas if you hadn't tried to do X, they would have done Y instead, where Y is a positive outcome. But that could apply to both systemic and atomic interventions. If I spend money saving lives, someone concerned about overpopulation could marginally step up their donations to thwart me.

I agree that the 'people counteract your action' vs 'people don't' axis and the 'systemic' vs 'atomic' axis are different - but I think that there's a strong correlation between the two. Of course any intervention could have people working to counteract it, but I think these counter-actions are much more likely for systemic-type interventions.

This is because many systemic interventions have the property that, if a large majority of people agreed the intervention was a good idea, it would be easy to accomplish, or would have already been accomplished. This is at least true of systemic interventions that take the form of advocacy for social/political change in democracies - there might be other significant classes of systemic change to which this argument does not apply though - perhaps those where we think that many of those who disagree with us can be easily persuaded, or forced to comply.

This means that the systemic interventions that still need doing are likely to be those with a significant group of people who disagree with us, and it is these people who are likely to counteract. It is hard to think of ways to campaign for removing rich-world agricultural subsidies in a way that is both effective and does not provoke counteraction. Drug policy reform would also likely provoke counteraction (and by people who will genuinely believe that they, not we, are the altruists).

But non-systemic interventions seem like they would generally be easier to do in ways that avoid counteraction, because they tend to take the form of atomic improvements rather than sweeping change. I don't think there actually are many, if any, people who will really spend their money attempting to give more people malaria or schistosomiasis as a response to us spending ours the other way.

Having said all this, I think this ceiling-cost approach is a very useful one, and systemic changes can be extremely effective. Rather, I just think these are the sorts of reasons that might make one 'suspicious about systemic changes per se', as you put it.

I really like this type of reasoning - I think it allows for easier comparisons than the standard expected value assessments people have occasionally tried to do for systemic changes. A couple points, though.

1) I think very few systemic changes will affect 1B people. Typically I assume a campaign will be focussed on a particular country, and likely only a portion of the population of that country would be positively affected by change - meaning 10M or 100M people is probably much more typical. This shifts the cutoff cost to around $1B to $10B, which seems plausibly in the same ballpark as GD.

2) Instead of asking "how much would this campaign cost to definitely succeed", you could ask "how much would it cost to run a campaign that had at least a 50% chance of succeeding" and then divide the HALYS by 2. I'd imagine this is a much easier question to answer, as you'd never be certain that an effort at systemic change would be successful, but you could become confident that the chances were high.

Thanks.

I think very few systemic changes will affect 1B people.

I'm not making a serious claim about EV in any particular case, just suggesting an easier way to do the maths. However, if systemic change X affected A people for £B, and systemic change Y affected half as many people for half the cost, then they would be equally cost-effective.

how much would this campaign cost to definitely succeed

I actually don't do this (although now I wonder if I should have been clearer about it). I deliberately use the expected cost for it to succeed. The 'definite success' cost would nearly always be ludicrously high, because you can't really guarantee success. If you thought you could estimate the expected cost for a 100% chance of success, you could obviously use the same reasoning to estimate the expected cost for a 50% chance of success.
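(To illustrate with the post's numbers - a sketch under the same illustrative assumptions, not a serious estimate - the adjustment just scales the expected HALYs, and therefore the ceiling, by the success probability:)

```python
# Probability-adjusted ceiling: a 50% chance of success halves the expected HALYs,
# so it halves the spend the campaign can justify (illustrative numbers only)
success_probability = 0.5
expected_halys = 1_000_000_000 * 0.1 * 1 * success_probability  # 50m HALYs in expectation
print(expected_halys * 1_460)  # 73000000000.0, i.e. a $73bn ceiling
```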

'I think very few systemic changes will affect 1B people.' I agree entirely. We should not in any way close the door on the possibility of systemic interventions with wide-scale impact, and it made me think:

'Imagine a systemic change intervention could create a HALY/person of 1.00 for all sentient beings all the time.'

Noting:

1) all time less an implementation period accounting for ramp-up to maximum impact

2) is not constrained by arbitrary boundaries e.g. rich/poor, planetary, species, only by the limits of our consciousness

3) I'm considering a HALY of 1.00 to equate to the elimination of all avoidable suffering

This statement is logically consistent with the original sentence, only scaled up beyond the 1B people (so both numerically, within our species, and also beyond our species).

So what does this mean exactly? Effectively we are imagining, expressed in another way, transitioning from a world of:

x-risk, nuclear catastrophe, environmental degradation, compromised well-being, family-oriented suffering, animal suffering, selfishness, displacement, conflict, unfair economies, discrimination, compromised physical or mental health, poor nutrition, tobacco, pollution, corruption, poorly treated children, unsafe and irresponsible transportation (e.g. unsafe roads), abuse, inadequate education, slavery, poverty, corporate or political irresponsibility, gridlock, inequality, the ills of globalization, technology risk (e.g. AI), genocide, terrorism, materialism, suicide etc.

to living experiences characterized by:

happiness, sensation, creativity, caring, love, understanding, dynamism, responsibility, progress, equality, fun, good health, truth, trust, consensus, sharing etc.

Even if there were very few such interventions, any demonstrating:

  • a compelling, well-reasoned and evidence-based story
  • scalable to achieve the desired outcome
  • testable from a modest initial scale pilot
  • as such not massive $'s to test

This would have to be worth investing in. It would have to be as much of a no-brainer as buying malaria nets, frankly.

In most cases, I expect interventions to impact policy also to have diminishing marginal returns. E.g. an experiment on legislative contacts found little increased effect from more calls (https://link.springer.com/article/10.1007/s11109-014-9277-1).

As many others have expressed, the concern with systemic changes is that you are often dealing with complex, poorly understood systems.

Let's take the example of the EU's Common Agricultural Policy. It is most likely evil toward the world's poor, but it's not clear, for example, whether it works toward or against EU unity. It is plausible that it is somehow important for EU unity, either because it's a form of fiscal transfer or because it's a way to corrupt an important voter bloc... So we should include possible political consequences in the utility calculation, and the problem becomes really tricky.

On the other hand, I agree systemic interventions are worth considering, and e.g. a change in drug policy seems to be an excellent candidate, as the action has been tested and we understand most of the consequences.

Hi and many thanks for this wonderful post, I have a few thoughts and observations to share:

I strongly agree there is a need for a greater focus in general (i.e. within and outside of the EA community) on systemic versus symptomatic interventions.

I think this should be achieved via proportionally funding systemic versus symptomatic interventions. This appears to me to be morally correct in the sense that it considers both current and future suffering reduction. My off-the-bat split … 90% symptomatic / 10% systemic given the scale of the current state of suffering

This however is based on the assumption that compelling, well-reasoned and evidence-based systemic intervention opportunities exist for funding allocation

This is problematic: I personally am not aware of such intervention opportunities today per se, perhaps in part because of the central issue you raise, i.e. quantifying the estimated impact

Conversely we have several compelling, well-reasoned and evidence-based symptomatic interventions today. Front-of-mind is AMF: $2 a net, 3,750 nets installed = 1 life saved @ $7,500 total. Kudos to this community for shining a light on this suffering-reduction no-brainer

Coming back to the example given, I don't regard international legal reform as a systemic intervention: the need for it stems from its absence in the first place, which leads to the symptom (in this case poverty); as such, the reasoning is that international legal reform addresses a symptom of a symptom and is not the cause of the problem in itself

As such the first challenge is to formulate interventions that are truly systemic, and with a tremendous sense of urgency: there is what seems like an infinite number of manifestations of suffering in our world today, and a real threat to our existence estimated by this community at 1 in 5 by 2100

Reasoning and logic always leads me personally (i.e. whether thinking about any specific case of suffering, anytime, anywhere) to the root cause being embedded within our consciousness (individually/collectively, naturally/nurturally)

One issue that your example highlights is the investment required for large-scale systemic interventions: using your example, funding need will exceed funding availability by many multiples

As such, feasible and compelling systemic interventions should be scalable, i.e. testable on a smaller scale via (non-throwaway) pilot projects, and/or have extremely high expected return profiles

As such, it does not have to be the case that smaller campaigns are proportionally less effective in a material sense, i.e. in the sense that they can be judged for their effectiveness; considering the benefits of efficiency from scale-up, they require more modest investment and are therefore able to access a broader base of potential funding

Many thanks again for the post,

Allister Clark.