
Milan_Griffes comments on Open Thread #40 - Effective Altruism Forum


Comment author: Milan_Griffes, 10 July 2018 03:15:37PM, 5 points

Why I'm skeptical of cost-effectiveness analysis

Reposting as a comment because mods told me this wasn't thorough enough to be a post.

Briefly:

  • The entire course of the future matters (more)
  • Present-day interventions will bear on the entire course of the future, out to the far future
  • The effects of present-day interventions on far-future outcomes are very hard to predict
  • Any model of an intervention's effectiveness that doesn't include far-future effects isn't taking into account the bulk of the effects of the intervention
  • Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately (a toy sketch of this tension follows below)
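
A toy sketch of the tension in the last two bullets, with every number invented for illustration: when the far-future term dwarfs the near-term one but its sign is close to a coin flip, the expected value is driven almost entirely by a parameter nobody can estimate.

```python
# Toy illustration: every number here is invented, not from any real analysis.
# An intervention's expected value splits into a near-term part we can
# estimate and a far-future part we mostly can't.

near_term_effect = 40.0       # e.g. QALYs per $10k, reasonably well measured
far_future_magnitude = 500.0  # plausible size of far-future effects
p_positive = 0.5              # P(far-future effect is positive): ~a coin flip

# Expected far-future contribution if the effect is +magnitude with
# probability p_positive and -magnitude otherwise:
expected_far_future = far_future_magnitude * (2 * p_positive - 1)

total = near_term_effect + expected_far_future
print(total)  # 40.0: the far-future term cancels in expectation, yet it
              # dominates the variance, so the point estimate tells us little
```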
Comment author: Peter_Hurford, 10 July 2018 07:14:20PM, 5 points

I'm glad you reposted this.

> Any model of an intervention's effectiveness that doesn't include far-future effects isn't taking into account the bulk of the effects of the intervention

I'd argue we don't necessarily know yet whether this is true. It may well be true, but it may well be false.

> Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately

This doesn't account for the fact that there are still gradients of relative believability here, even if the absolute believability is low. There's also an interesting meta-question of what to do under various levels and kinds of uncertainty (and of getting a better handle on just how bad the uncertainty is).

Comment author: Milan_Griffes, 11 July 2018 12:16:23AM, 1 point

> ...absolute believability is low. There's also an interesting meta-question...

I think the crux here is that absolute believability is low, such that you can't really trust the output of your analysis.

Agree the meta-question is interesting :-)

Comment author: Milan_Griffes, 11 July 2018 12:15:00AM, 0 points

> I'd argue we don't necessarily know yet whether this is true. It may well be true, but it may well be false.

I think it's almost certainly true (confidence ~90%) that far-future effects account for the bulk of impact for at least a substantial minority of interventions (at least 20%, say? But that's very difficult to quantify believably).

Also seems almost certainly true that we don't know for which interventions far-future effects account for the bulk of impact.

Comment author: Peter_Hurford, 11 July 2018 01:54:55AM, 1 point

Separately, I'd wager that, even taking into account all the possible long-term effects I can think of (population ethics, meat eating, economic development, differential technological development), the effect of AMF is still net positive. I wonder if you really can model all of these things. I previously wrote about five ways to handle flow-through effects in analysis and like this kind of weighted quantitative modeling.
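
A minimal sketch of what that kind of weighted quantitative model might look like, assuming invented effect sizes and credence weights (not GiveWell's, Peter's, or anyone's actual numbers):

```python
# Sketch of a weighted-factor model. Each long-term consideration gets a
# signed effect estimate and a credence weight reflecting how seriously to
# take that factor. All values below are invented for illustration.

factors = {
    # factor: (signed effect estimate, credence weight in [0, 1])
    "direct health benefit": (+10.0, 0.9),
    "population ethics": (-2.0, 0.3),
    "meat eating": (-1.5, 0.4),
    "economic development": (+4.0, 0.6),
    "differential technological development": (-1.0, 0.2),
}

net = sum(effect * weight for effect, weight in factors.values())
print(f"weighted net effect: {net:+.2f}")  # +10.00 here, by construction
```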

Comment author: Milan_Griffes, 11 July 2018 04:17:51AM, 0 points

I suspect it's basically impossible to model all the relevant far-future considerations in a way that feels believable (i.e. high confidence that the sign of all considerations is correct, plus high confidence that you're not missing anything crucial).

> ...the effect of AMF is still net positive.

I share this intuition, but "still net positive" is a long way off from "most cost-effective."

AMF has received so much scrutiny because it's a contender for the most cost-effective way to give money – I'm skeptical we can make believable claims about cost-effectiveness when we take the far future into account.

I'm more bullish about assessing the sign of interventions while taking the far future into account, though that still feels fraught.

Comment author: Peter_Hurford, 11 July 2018 01:51:52AM, 1 point

I recently played two different video games with heavy time-travel elements. One of the games heavily implied that choosing differently made small differences for a little while but ultimately didn't matter in the grand scheme of things. The other game heavily implied that even the smallest of changes could butterfly-effect into dramatically different outcomes. I kind of find both intuitions plausible, so I'm just pretty confused about how confused I should be.

I wish there were a way to empirically test this, other than with time travel.

Comment author: Milan_Griffes, 11 July 2018 04:12:44AM, 1 point

A lot of big events in my life have had pretty in-the-moment-trivial-seeming things in the causal chains leading up to them. (And the big events appear contingent on the trivial-seeming parts of the chain.)

I think this is the case for a lot of stuff in my friends' lives as well, and appears to happen a lot in history too.

It's not the far future, but the experience of regularly having trivial-seeming things turn out to be important later on has built my intuition here.

Comment author: John_Maxwell_IV, 16 July 2018 12:47:29AM, 0 points

It's surely true that trivial-seeming events sometimes end up being pivotal. But it sounds like you are making a much stronger claim: That there's no signal whatsoever and it's all noise. I think this is pretty unlikely. Humans evolved intelligence because the world has predictable aspects to it. Using science, we've managed to document regularities in how the world works. It's true that as you move "up the stack", say from physics to macroeconomics, you see the signal decrease and the noise increase. But the claim that there are no regularities whatsoever seems like a really strong claim that needs a lot more justification.

Anyway, insofar as this is relevant to EA, I tend to agree with Dwight Eisenhower: Plans are useless, but planning is indispensable.

Comment author: Milan_Griffes, 16 July 2018 03:20:02PM, 0 points

> ...you are making a much stronger claim: That there's no signal whatsoever and it's all noise. I think this is pretty unlikely.

I'm making the claim that with regard to the far future, it's mostly noise and very little signal.

I think there's some signal re: the far future. E.g. it's probably true that fewer nuclear weapons on the planet today is better for very distant outcomes.

But I don't think most things are like this re: the far future.

I think the signal:noise ratio is much better in other domains.


> Humans evolved intelligence because the world has predictable aspects to it.

I don't know very much about evolution, but I suspect that humans evolved the ability to make accurate predictions on short time horizons (i.e. 40 years or less).

Comment author: John_Maxwell_IV, 13 July 2018 07:40:21PM, 3 points

> Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately

"Anything you need to quantify can be measured in some way that is superior to not measuring it at all."

Comment author: Milan_Griffes, 15 July 2018 04:39:56PM, 1 point

My post is basically contesting the claim that, in all domains, any measurement is superior to no measurement.

Comment author: WillPearson, 15 July 2018 06:55:10PM, 1 point

It might be worth looking at the domains where measurement might be less worthwhile (formal chaotic systems, or systems with many sign-flipping crucial considerations). If you can show that trying to make cost-effectiveness-based decisions in such environments is not worth it, that might strengthen your case.
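
For the formal-chaotic-systems case, a minimal sketch of the problem, using the textbook logistic map rather than anything EA-specific:

```python
# The logistic map at r = 4 is a standard example of deterministic chaos:
# two starting points differing by one part in a billion become unrelated
# within a few dozen iterations, so long-horizon point forecasts of such a
# system carry essentially no information.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

a, b = 0.300000000, 0.300000001  # nearly identical initial conditions
for _ in range(60):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # no longer tiny: the 1e-9 perturbation has been
                   # amplified until the trajectories fully decorrelate
```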

Comment author: Milan_Griffes, 15 July 2018 07:14:45PM, 1 point

> ...systems with many sign-flipping crucial considerations

Yeah, I'm continuing to think about this, and would like to get more specific about which domains are most amenable to cost-effectiveness analysis (some related thinking here).

I think it's very hard to identify which domains have the most crucial considerations, because such considerations are unveiled over long time frames.


A hypothesis that seems plausible: cost-effectiveness analysis is good for deciding which interventions to focus on within a given domain (e.g. "want to best reduce worldwide poverty in the next 20 years? These interventions should yield the biggest bang for buck...")

But not so good for deciding which domain to focus on, if you're trying to select the domain that most helps the world over the entire course of the future. For that, comparing theories of change probably works better.

Comment author: saulius, 15 July 2018 10:44:53AM, 0 points

Another way of saying it: "Sometimes pulling numbers out of your arse and using them to make a decision is better than pulling a decision out of your arse." It's taken from http://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/, which is relevant here.

Comment author: Milan_Griffes, 15 July 2018 04:38:24PM, 1 point

Sure, but I don't think those are the only options.

Possible alternative option: come up with a granular theory of change; use that theory to inform decision-making.

I think this is basically what MIRI does. As far as I know, MIRI didn't use cost-effectiveness analysis to decide on its research agenda (apart from very zoomed-out astronomical waste considerations).

Instead, it used a chain of theoretical reasoning to arrive at the intervention it's focusing on.

Comment author: John_Maxwell_IV, 16 July 2018 12:30:48AM, 0 points

I'm not sure I understand the distinction you're making. In what sense is this compatible with your contention that "Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately"? Is this "chain of theoretical reasoning" a "model that includes far-future effects"?

We do have a fair amount of documentation regarding successful forecasters, see e.g. the book Superforecasting. The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models (hedgehogs vs foxes, to use Phil Tetlock's terminology). Ensembles of models are also essential for winning machine learning competitions. (A big part of the reason I am studying machine learning, aside from AI safety, is its relevance to forecasting. Several of the top forecasters on Metaculus seem to be stats/ML folks, which makes sense because stats/ML is the closest thing we have to "the math of forecasting".)
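
A toy sketch of the ensemble intuition, with invented forecast probabilities:

```python
# Averaging several imperfect forecasts usually beats relying on any single
# model. The three "models" and their probabilities below are invented.

forecasts = {
    "trend extrapolation": 0.70,
    "base-rate model": 0.55,
    "domain-expert prior": 0.80,
}

# Simple unweighted pool; real aggregation schemes (e.g. extremized means)
# are more involved, but averaging is the core of the ensemble idea.
ensemble = sum(forecasts.values()) / len(forecasts)
print(f"ensemble forecast: {ensemble:.2f}")  # 0.68
```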

Comment author: Milan_Griffes, 16 July 2018 03:12:24PM, 1 point

> I'm not sure I understand the distinction you're making...

I'm trying to distinguish between cost-effectiveness analyses (quantitative work that takes a bunch of inputs and arrives at an output, usually in the form of a best-guess cost-per-outcome) and theoretical reasoning (often qualitative; it doesn't arrive at a numerical cost-per-outcome, but instead at something like "...and so this thing is probably best").
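
The first kind in miniature (every input invented for illustration):

```python
# A cost-effectiveness analysis collapses a pile of estimated inputs into
# a single best-guess cost per outcome. All numbers are made up.

total_cost = 100_000.0     # dollars
people_reached = 20_000
effect_per_person = 0.05   # outcome units per person reached

cost_per_outcome = total_cost / (people_reached * effect_per_person)
print(f"${cost_per_outcome:,.0f} per outcome")  # $100 per outcome
```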

Perhaps all theoretical reasoning is just a kind of imprecise cost-effectiveness analysis, but I think the two actually involve pretty different mental processes.

> The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models...

Sure, but forecasters are working with pretty tight time horizons. I've never heard of a forecaster making predictions about what will happen 1000 years from now. (And even if one did, what could we make of such a prediction?)

My argument is that what we care about (the entire course of the future) extends far beyond what we can predict (the next few years, perhaps the next few decades).

Comment author: saulius, 15 July 2018 11:15:37AM, 0 points

I wanted to ask what kind of conclusions this line of reasoning leads you to make. But am I right to think that this is a very short summary of your series of posts exploring consequentialist cluelessness (http://effective-altruism.com/ea/1hh/what_consequences/)? In that case, the answer is in the last post of the series, right?

Comment author: Milan_Griffes, 15 July 2018 04:33:10PM, 2 points

Yeah, my conclusions here definitely overlap with the cluelessness stuff. Here I'm thinking specifically about cost-effectiveness.

My main takeaway so far: cost-effectiveness estimates should be weighted less & theoretical models of change should be weighted more when deciding what interventions have the most impact.

Comment author: John_Maxwell_IV, 16 July 2018 01:02:38AM, 0 points

Do you think you're in significant disagreement with this GiveWell blog post?

Comment author: Milan_Griffes, 16 July 2018 05:39:54AM, 0 points

I basically agree with that post, though GiveWell's cost-effectiveness analysis is about comparing different interventions within the domain of improving global health & development over the next 20-50 years.

As far as I know, GiveWell hasn't used cost-effectiveness analysis to determine that global health & development is a domain worth focusing on (perhaps they did some of this early on, before far-future considerations were salient).

The complication I'm pointing at arises when cost-effectiveness is used to compare across very different domains.