Comment author: Raltune 16 July 2018 03:21:31PM 2 points

Thanks, Milan. I think the economics are such that the return does not necessarily go to the person/org that donated the money. The $24 return per $1 invested shows up in sustainable fisheries and the taxes they generate, and in tourism for the region, with all the attendant jobs, auxiliary benefits, taxes, decreased welfare spending, etc. So it's a great return, but it does not accrue to the donor, per se. It is, however, a great investment for governments and for charities looking to maximize well-being.

Other examples from the book include family planning/sex education at a $120 return per $1 invested, and campaigns against malaria at $36:$1. And these ideas are vetted, calculated by teams of economists trying to decide where the trillions of dollars that will be spent on aid over the next 15 years should go.

Does that make sense?

If anyone found this useful I could use a couple karma points to start threads in the regular forum. Thanks. :) -Tom

In response to comment by Raltune on Open Thread #40
Comment author: Milan_Griffes 16 July 2018 03:24:30PM *  1 point

Hm, could you link to the place where you're getting these figures? I'm curious :-)

(Or give page numbers if it's a book.)

Comment author: John_Maxwell_IV 16 July 2018 12:47:29AM 0 points

It's surely true that trivial-seeming events sometimes end up being pivotal. But it sounds like you are making a much stronger claim: That there's no signal whatsoever and it's all noise. I think this is pretty unlikely. Humans evolved intelligence because the world has predictable aspects to it. Using science, we've managed to document regularities in how the world works. It's true that as you move "up the stack", say from physics to macroeconomics, you see the signal decrease and the noise increase. But the claim that there are no regularities whatsoever seems like a really strong claim that needs a lot more justification.

Anyway, insofar as this is relevant to EA, I tend to agree with Dwight Eisenhower: Plans are useless, but planning is indispensable.

Comment author: Milan_Griffes 16 July 2018 03:20:02PM *  0 points

...you are making a much stronger claim: That there's no signal whatsoever and it's all noise. I think this is pretty unlikely.

I'm making the claim that with regard to the far future, it's mostly noise and very little signal.

I think there's some signal re: the far future. E.g. it's probably true that having fewer nuclear weapons on the planet today is better for very distant outcomes.

But I don't think most things are like this re: the far future.

I think the signal:noise ratio is much better in other domains.


Humans evolved intelligence because the world has predictable aspects to it.

I don't know very much about evolution, but I suspect that humans evolved the ability to make accurate predictions on short time horizons (i.e. 40 years or less).

Comment author: John_Maxwell_IV 16 July 2018 12:30:48AM *  0 points

I'm not sure I understand the distinction you're making. In what sense is this compatible with your contention that "Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately"? Is this "chain of theoretical reasoning" a "model that includes far-future effects"?

We do have a fair amount of documentation regarding successful forecasters, see e.g. the book Superforecasting. The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models (hedgehogs vs foxes, to use Phil Tetlock's terminology). Ensembles of models are also essential for winning machine learning competitions. (A big part of the reason I am studying machine learning, aside from AI safety, is its relevance to forecasting. Several of the top forecasters on Metaculus seem to be stats/ML folks, which makes sense because stats/ML is the closest thing we have to "the math of forecasting".)
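The ensemble point can be sketched with a toy example (all numbers here are made up for illustration, not drawn from any real forecasting data): averaging several imperfect forecasts often beats each individual forecast, because individual biases partially cancel.

```python
# Toy illustration: a simple-average ensemble of imperfect forecasts
# can have lower error than any single forecast. Hypothetical numbers.

def mae(preds, truth):
    """Mean absolute error of a list of predictions."""
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

truth = [10, 20, 30, 40]          # outcomes being forecast
forecasts = {
    "biased_high": [12, 22, 31, 43],
    "biased_low":  [8, 18, 28, 38],
    "noisy":       [11, 17, 33, 39],
}

# Average the three forecasts event-by-event.
ensemble = [sum(f[i] for f in forecasts.values()) / len(forecasts)
            for i in range(len(truth))]

individual_maes = {name: mae(f, truth) for name, f in forecasts.items()}
ensemble_mae = mae(ensemble, truth)

print(individual_maes)   # each forecaster: MAE = 2.0
print(ensemble_mae)      # ensemble: MAE ~ 0.5
```

Here each forecaster has the same error on its own, yet the average is markedly better; real forecasting ensembles are more sophisticated, but this is the core mechanism.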

Comment author: Milan_Griffes 16 July 2018 03:12:24PM *  1 point

I'm not sure I understand the distinction you're making...

I'm trying to distinguish between cost-effectiveness analyses (quantitative work that takes a bunch of inputs and arrives at an output, usually in the form of a best-guess cost-per-outcome) and theoretical reasoning (often qualitative; doesn't arrive at a numerical cost-per-outcome, but instead at something like "...and so this thing is probably best").

Perhaps all theoretical reasoning is just a kind of imprecise cost-effectiveness analysis, but I think the two actually use pretty different mental processes.

The most successful forecasters tend to rely less on a single theoretical model and more on an ensemble of models...

Sure, but forecasters are working with pretty tight time horizons. I've never heard of a forecaster making predictions about what will happen 1000 years from now. (And even if one did, what could we make of such a prediction?)

My argument is that what we care about (the entire course of the future) extends far beyond what we can predict (the next few years, perhaps the next few decades).

Comment author: John_Maxwell_IV 16 July 2018 01:02:38AM 0 points

Do you think you're in significant disagreement with this GiveWell blog post?

Comment author: Milan_Griffes 16 July 2018 05:39:54AM 0 points

I basically agree with that post, though GiveWell's cost-effectiveness analysis is about comparing different interventions within the domain of improving global health & development over the next 20-50 years.

As far as I know, GiveWell hasn't used cost-effectiveness analysis to determine that global health & development is a domain worth focusing on (perhaps they did some of this early on, before far-future considerations were salient).

The complication I'm pointing at arises when cost-effectiveness is used to compare across very different domains.

Comment author: WillPearson 15 July 2018 06:55:10PM 1 point

It might be worth looking at the domains where it might be less worthwhile (formal chaotic systems, or systems with many sign-flipping crucial considerations). If you can show that trying to make cost-effectiveness-based decisions in such environments is not worth it, that might strengthen your case.

Comment author: Milan_Griffes 15 July 2018 07:14:45PM *  1 point

...systems with many sign-flipping crucial considerations

Yeah, I'm continuing to think about this, and would like to get more specific about which domains are most amenable to cost-effectiveness analysis (some related thinking here).

I think it's very hard to identify which domains have the most crucial considerations, because such considerations are unveiled over long time frames.


A hypothesis that seems plausible: cost-effectiveness analysis is good for deciding which interventions to focus on within a given domain (e.g. "want to best reduce worldwide poverty in the next 20 years? These interventions should yield the biggest bang for the buck...").

But it's not so good for deciding which domain to focus on, if you're trying to select the domain that most helps the world over the entire course of the future. For that, comparing theories of change probably works better.

In response to Open Thread #40
Comment author: Raltune 14 July 2018 10:36:07PM 4 points

New here. Hoping to get some karma points so that I can ask specific questions for the local community development project I have planned.

I just finished reading "The Nobel Laureates' Guide To The Smartest Targets For The World" and cannot find the specific methods that can be employed to achieve the proposed targets. For example: with regard to coral reef loss, if the research is accurate and there is a $24 economic return for every $1 spent, through what organizations or processes can this be achieved? The specific dollar figure must imply that the process is known. Is there a separate resource of footnotes that describes how to achieve those returns? The short book was very interesting as a navigation tool towards the initiatives that may have the greatest economic return and resultant prosperity for humankind.

Thanks for any insights if you get the chance. -Tom

In response to comment by Raltune on Open Thread #40
Comment author: Milan_Griffes 15 July 2018 04:43:42PM 1 point

New here.

Welcome!

For example: with regard to coral reef loss, if the research is accurate and there is a $24 economic return for every $1 spent

If there were a $24 total return for every dollar spent, and the actor could capture even a small fraction of this return, I'd expect that a for-profit enterprise would already be doing this.

But I'm not familiar with the domain, maybe there's no way for a for-profit to capture the return, or maybe the 24:1 ratio is incorrect.
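To make the capture point concrete, here's a back-of-the-envelope sketch using the ratios quoted in the thread (the calculation is illustrative only, not a claim about the actual economics of any of these interventions): an investor breaks even as long as it can capture at least 1/ratio of the total return.

```python
# Back-of-the-envelope: what fraction of a diffuse return would a
# for-profit need to capture to recoup its $1? Ratios are the ones
# quoted in this thread; the arithmetic is purely illustrative.

def breakeven_capture_fraction(return_per_dollar):
    """Fraction of the total return an investor must capture to recoup $1."""
    return 1.0 / return_per_dollar

for ratio in (24, 36, 120):   # coral reefs, malaria, family planning
    frac = breakeven_capture_fraction(ratio)
    print(f"{ratio}:1 return -> break even at {frac:.1%} capture")
```

So at 24:1, capturing just over 4% of the societal return would already make the investment pay for itself, which is why the absence of for-profit activity is some evidence that the return is hard to capture (or that the ratio is off).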

In response to Open Thread #40
Comment author: RandomEA 15 July 2018 10:09:52AM 3 points

Frequency of Open Threads

What do people think would be the optimal frequency for open threads? Monthly? Quarterly? Semi-annually?

In response to comment by RandomEA on Open Thread #40
Comment author: Milan_Griffes 15 July 2018 04:40:59PM 1 point

Every 2-3 months seems good (weakly held).

Comment author: John_Maxwell_IV 13 July 2018 07:40:21PM 3 points

Any model that includes far-future effects isn't believable because these effects are very difficult to predict accurately

"Anything you need to quantify can be measured in some way that is superior to not measuring it at all."

Comment author: Milan_Griffes 15 July 2018 04:39:56PM 1 point

My post is basically contesting the claim that, in every domain, any measurement is superior to no measurement.

Comment author: saulius 15 July 2018 10:44:53AM 0 points

Another way of saying it is “Sometimes pulling numbers out of your arse and using them to make a decision is better than pulling a decision out of your arse.” It's taken from http://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/ which is relevant here.

In response to comment by saulius on Open Thread #40
Comment author: Milan_Griffes 15 July 2018 04:38:24PM *  1 point

Sure, but I don't think those are the only options.

Possible alternative: come up with a granular theory of change, then use that theory to inform decision-making.

I think this is basically what MIRI does. As far as I know, MIRI didn't use cost-effectiveness analysis to decide on its research agenda (apart from very zoomed-out astronomical waste considerations).

Instead, it used a chain of theoretical reasoning to arrive at the intervention it's focusing on.

Comment author: saulius 15 July 2018 11:15:37AM 0 points

I wanted to ask what kind of conclusions this line of reasoning leads you to make. But am I right to think that this is a very short summary of your series of posts exploring consequentialist cluelessness (http://effective-altruism.com/ea/1hh/what_consequences/)? In that case the answer is in the last post of the series, right?

In response to comment by saulius on Open Thread #40
Comment author: Milan_Griffes 15 July 2018 04:33:10PM *  2 points

Yeah, my conclusions here definitely overlap with the cluelessness stuff. Here I'm thinking specifically about cost-effectiveness.

My main takeaway so far: cost-effectiveness estimates should be weighted less, and theoretical models of change weighted more, when deciding which interventions have the most impact.
