Aug 6 2018 · 5 min read

Preface: Searching for ‘the one true cause area’, or perhaps several of them, strikes me as a poor model of how cause prioritisation should be thought about. Much better is an emphasis on what the best allocation of resources to various cause areas looks like over time and what we need to do to achieve this allocation. In this post I’ll lay out one argument for this view.

When the EA community discusses cause prioritisation, we usually model different causes as additive. In the real world, however, many causes aren’t additive; they interact with each other in much more complicated ways.

In this post I’ll show how we can model causes whose impacts multiply. If causes simply add together, it makes sense to focus on the cause with the highest impact per dollar. However, if causes multiply, we should locally distribute resources between them according to the ratio of their output elasticities, i.e. how much a 1% increase in investment in one cause changes the outcome as a whole, in percentage terms. The output elasticity can be interpreted as a measure of how tractable a cause is. This means that if causes are multiplicative, local funding decisions (monetary ‘crowdedness’) should be proportional to how tractable the causes are.

Sam Bankman-Fried has written about multiplicative causes with the same output elasticity before, in which case it is optimal to distribute resources between the multiplying causes equally. You might want to read his post before reading this one.

To illustrate this, think about economic progress vs. cultural progress. If we imagine a world that is both economically and culturally destitute, we could pour all our resources into improving it economically - and end up with a 1984 or Brave New World dystopia. Or we could pour all our resources into improving it culturally. In that case we might have wonderful relationships with our fellow humans and beautiful art to enjoy, but would still starve and die early of illness. Most appealing is the world in which we put an equal amount of resources into economic and cultural progress - much more appealing than either alternative. In this world people live a bit longer and don’t starve quite as much, while also having acceptable relationships with their peers.

But this assumes economic and cultural progress are similarly tractable. What happens if putting extra resources into cultural progress doesn’t affect cultural progress as much as putting extra resources into economic progress affects economic progress?

To find an answer to this question, we can look at the Cobb-Douglas production function and expand its use. Historically, it describes how much output can be produced from the inputs of capital and labour.

Goods = A * L^alpha * C^beta

In this historical version, L is the labour input, while alpha is the output elasticity of labour, i.e. how many percent more goods will be produced if the amount of labour is increased by 1%. C is the capital input, while beta is the output elasticity of capital. Goods are the output that is being produced. A is the ‘total factor productivity’, which captures the effects on output that aren’t explained by the input variables.

We can now ask how resources should be allocated between labour and capital to maximize the number of goods produced. The mathematically optimal allocation to maximize the output is determined by the ratio between the output elasticities of labour and capital, i.e. alpha:beta. Note that in this case the output elasticities are constants.
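As a quick aside (this derivation is mine, not part of the original post), here is a minimal sketch of why the ratio rule holds, assuming a single fixed budget R that is split between the two inputs and that both inputs are measured in the same resource units:

```latex
\begin{align*}
&\text{Maximise } A\,L^{\alpha} C^{\beta} \text{ subject to } L + C = R.\\
&\text{Substitute } C = R - L \text{ and maximise the (monotone) log:}\\
&f(L) = \alpha \ln L + \beta \ln (R - L),\qquad
 f'(L) = \frac{\alpha}{L} - \frac{\beta}{R - L} = 0\\
&\Longrightarrow\ \alpha (R - L) = \beta L
 \ \Longrightarrow\ \frac{L}{C} = \frac{\alpha}{\beta}.
\end{align*}
```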

If labour’s output elasticity were 1, but capital’s output elasticity were 0.1 (though note that these aren’t the true historical values), then to produce the most goods with a given set of resources it’s optimal to put ten times as many resources into labour as into capital.

If we now adapt the historical Cobb-Douglas production function to our hypothetical society, we can answer our question:

Thriving of humanity = A * (Resources for economic progress)^gamma * (Resources for cultural progress)^delta

A is again the factor that captures everything contributing to the thriving of humanity that isn’t explained by the resources put into economic and cultural progress.

If it is ten times easier to make progress on the thriving of humanity by investing resources in economic progress than by investing resources in cultural progress (i.e. gamma is ten times as big as delta), then, as in our previous example, the optimal allocation is to spend ten times as many resources on economic progress as on cultural progress.
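To make this concrete, here is a small Python sketch (mine, not part of the original post; the function thriving and the values gamma = 1 and delta = 0.1 are illustrative assumptions, not empirical estimates) that brute-forces the best split of a fixed budget and recovers the gamma:delta proportion:

```python
# Sketch: numerically check that splitting a fixed budget in proportion to the
# output elasticities maximises a Cobb-Douglas-style production function.

def thriving(econ, culture, A=1.0, gamma=1.0, delta=0.1):
    """Thriving of humanity = A * econ^gamma * culture^delta."""
    return A * econ**gamma * culture**delta

budget = 110.0   # arbitrary total amount of resources
steps = 100_000  # granularity of the brute-force search

# Try every split of the budget in small steps and keep the best one.
best_split, best_output = None, -1.0
for i in range(1, steps):
    econ = budget * i / steps
    culture = budget - econ
    output = thriving(econ, culture)
    if output > best_output:
        best_split, best_output = (econ, culture), output

print(best_split)  # roughly (100.0, 10.0): a 10:1 split, matching gamma:delta = 10:1
```

The size of the budget doesn’t matter here; only the proportion of the split does.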

Thus, the important question to ask for optimal resource allocation in multiplicative scenarios is what the ratio of the output elasticities is. However, output elasticities might change as the inputs change. Most often, the output elasticities won’t be constants. This means in many cases the production function will only tell us locally which cause is in most need of more resources to improve the overall outcome.

To calculate an output elasticity, we need to find out by how many percent the outcome changes when the respective input is increased by 1%. In our example, we look at how many resources are being spent on cultural progress already, i.e. how much talent or funding is currently allocated to it. We then assess how much an increase of 1% affects the output. If the thriving of humanity improved by 10% due to the 1% increase in resources for cultural progress, we have an output elasticity of 10%/1% = 10. If the thriving of humanity improved by only 0.1%, the output elasticity would be 0.1%/1% = 0.1.
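As a toy illustration of that calculation (again my own sketch; thriving_of_humanity is a made-up placeholder, not a real model of humanity’s thriving), an output elasticity can be estimated with a simple finite difference:

```python
# Sketch: estimate an output elasticity by nudging one input by 1% and
# measuring the resulting percentage change in the outcome.

def thriving_of_humanity(econ_resources, culture_resources):
    # Placeholder Cobb-Douglas-style outcome with made-up exponents.
    return 2.0 * econ_resources**0.6 * culture_resources**0.2

def output_elasticity(outcome_fn, inputs, which, bump=0.01):
    """Percentage change in the outcome divided by the percentage change in one input."""
    base = outcome_fn(**inputs)
    bumped = dict(inputs)
    bumped[which] = inputs[which] * (1 + bump)  # increase that one input by 1%
    relative_change = (outcome_fn(**bumped) - base) / base
    return relative_change / bump

inputs = {"econ_resources": 100.0, "culture_resources": 50.0}
print(output_elasticity(thriving_of_humanity, inputs, "culture_resources"))
# ~0.2: a 1% increase in cultural resources improves the outcome by about 0.2%.
```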

What does all of this mean in practice for cause prioritisation? While many causes aren’t actually additive and modelling them as additive might give subpar results, modelling them simply as multiplicative isn’t close to perfect either. However, it might still be an improvement. 

Questions for which modelling the problem as multiplicative might be helpful include: Should we add new talent to the EA community or improve the talent we already have? Should the EA community add its resources to efforts to reduce GCRs or to efforts to help humanity flourish? We could also think of technical ideas to improve institutional decision making, like improving forecasting abilities, as multiplying with those institutions’ willingness to implement them.

When we use global funding as the inputs of our production functions, we fundamentally care about what the EA community can do. So it’s important to compare like with like by thinking about the most effective uses of resources, which determine the output elasticities. Note also that when we’re looking at such large pools of funding, the EA community will hardly be able to affect the funding ratio substantially. Therefore, this type of exercise will often just show us which single cause the EA community should prioritise, and the causes thereby act as additive after all. This is different if we look at questions with multiplicative factors in which the EA community’s decisions can affect the input ratios, like whether we should add more talent to the EA community or focus on improving existing talent.

Which other cause prioritisation questions can you think of that are better modelled as multiplicative instead of additive? 

TL;DR: When we try to prioritise between different additive causes, it makes sense to focus on the one with the highest impact per dollar. But if we try to prioritise between different multiplicative causes, we should locally spread our resources between them according to their output elasticity ratios. That means the proportions of funding (‘crowdedness’) should locally be equal to the proportions of the output elasticities (a measure of the ‘tractability’ of the respective causes).

 

Thanks to Jacob Hilton who reviewed a draft of this post.

Comments

"Note also that while we’re looking at such large pools of funding, the EA community will hardly be able to affect the funding ratio substantially. Therefore, this type of exercise will often just show us which single cause should be prioritised by the EA community and thereby act additive after all. This is different if we look at questions with multiplicative factors in which the decisions by the EA community can affect the input ratios like whether we should add more talent to the EA community or focus on improving existing talent."

I agree that multiplicative factors are a big deal for areas where we collectively have strong control over key variables, rather than trying to move big global aggregates. But I think it's the latter that we have in mind when talking about 'causes' rather than interventions or inputs working in particular causes (e.g. investment in hiring vs activities of current employees). For example:

"Should the EA community focus to add its resources on the efforts to reduce GCRs or to add them to efforts to help humanity flourish?"

If you're looking at global variables like world poverty rates or total risk of extinction, it requires quite a lot of absolute impact before you make much of a proportional change.

E.g. if you reduce the prospective risk of existential catastrophe from 10% to 9%, you might increase the benefits of saving lives through AMF by a fraction of a percent, as it would be more likely that civilization would survive to see benefits of the AMF donations. But a 1% change would be unlikely to drastically alter allocations between catastrophic risks and AMF. And a 1% change in existential risk is an enormous impact: even in terms of current humans (relevant for comparison to AMF) that could represent tens of millions of expected current lives (depending on the timeline of catastrophe), and immense considering other kinds of beings and generations. If one were having such amazing impact in a scalable fashion it would seem worth going further at that point.

Diminishing returns of our interventions on each of these variables seem a much more important consideration than multiplicative effects between these variables: the cost per percentage point of existential risk reduced is likely to grow many times as one moves along the diminishing returns curve.

"We could also think of the technical ideas to improve institutional decision making like improving forecasting abilities as multiplying with those institution’s willingness to implement those ideas."

If we're thinking about institutions like national governments, changing their willingness to implement the ideas seems much less elastic than improving the methods. If we look at a much narrower space, e.g. the EA community or a few actors in some core areas, the multiplicative factors become more relevant across key fields and questions.

If I was going to look for cross-cause multiplicative effects it would likely be for their effects on the EA community (e.g. people working on cause A generate some knowledge or reputation that helps improve the efficiency of work on cause B, which has more impact if cause B efforts are larger).

Great comment, thank you. I actually agree with you. Perhaps I should have focussed less on discussing the cause level and more on the intervention level, but I think it is still good to encourage more careful thinking on a cause-wide level even if it won't affect the actual outcome of the decision-making. I think people rarely think about e.g. reducing extinction risk benefiting AMF donations in the way you describe.

Let's hope people will be careful to consider multiplicative effects if we can affect the distribution between key variables.

While it's hard to disagree with the math, would it not be fairly unlikely for the current allocation of resources to be close enough to the optimal allocation that this would realistically lead to allocating an agent's resources to more than one cause area? Like you mention, the allocation within the community-building cause area itself is one of the more likely candidates, as we have a large piece of the pie in our hands (if not all of it). However, the community is not one agent, so we would need to funnel the money through e.g. EA Funds, correct?

Alternatively, there could be a top-level analysis of what the distribution ought to be and what it currently is, and we could suggest people donate to close that gap. But is this really different from arguments in terms of marginal impact and neglectedness? I agree your line of thinking ought to be followed in such an analysis, but I am not convinced that this isn't incorporated already.

It also doesn't solve issues like the one Sam Bankman-Fried mentioned, where according to some argument one cause area is 44 orders of magnitude more impactful: even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the large cause area. I think that even in less extreme cases, we should actually be far more "egalitarian" in our distribution of resources than multiplicative causes (and especially additive causes) suggest, since, statistically speaking, the higher the expected value of a cause area is, the more likely it is to be overestimated.

I do think this is a useful framework on a smaller scale. E.g. your example of focusing on new talent or improving existing talent within the EA community. For local communities where a small group of agents plays a determining role on where the focus lies, this can be applied much more easily than in global cause area resource allocations.

I address the points you mention in my response to Carl.

"It also doesn't solve issues like the one Sam Bankman-Fried mentioned, where according to some argument one cause area is 44 orders of magnitude more impactful: even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the large cause area."

I don't think this is understanding the issue correctly, but it's hard to say since I am a bit confused what you mean by 'more impactful' in the context of multiplying variables. Could you give an example?

I guess when I say "more impactful" I mean "higher output elasticity".

We can go with the example of x-risk vs poverty reduction (as mentioned by Carl as well). If we were to think that allocating resources to reduce x-risk has an output elasticity 100,000 times higher than poverty reduction, but reducing poverty improves the future, and reducing x-risk makes reducing poverty more valuable, then you ought to handle them multiplicatively instead of additively, like you said.

If you had 100,001 units of resources to spend, that'd mean 100,000 units against x-risk and 1 unit for poverty reduction, as opposed to 100,001 for x-risk and 0 for poverty reduction when looking at them independently (i.e. additively). Sam implies the additive reasoning in such situations is erroneous, after mentioning an example with such a massive discrepancy in elasticity. I'm pointing out that this does not seem to really make a difference in such cases, because even with proportional allocation it is effectively the same as going all in on (in this example) x-risk.

Anyway, not claiming that this makes the multiplicative approach incorrect (or rather, less correct than additive), just saying that in this case which is mentioned as one of the motivations for this, it really doesn't make much of a difference (though things like diminishing returns would). Maybe this would have been more fitting as a reply to Sam than you, though!

What you're saying is correct if you're assuming that so far zero resources have been spent on x-risk reduction and global poverty. (Though that isn't quite right either: You can't compute an output elasticity if you have to divide by 0.)

But you are supposed to compare the output elasticity ratio with the ratio in which resources are currently being spent; locally, those two ratios are supposed to be equal. So using your example, if there were currently more than a million times as many resources spent on x-risk as on global poverty, global poverty should be prioritised.

When I was running the numbers, my impression was that global wellbeing increases had a much bigger output elasticity than x-risk reduction. I found it a bit tricky to find numbers for global (not just EA) x-risk reduction efforts, so I'm not confident about that, nor about how large the gap in resource spending is. 80k quotes $500 billion per year for resources spent on global wellbeing increases.