Comment author: AGB 14 February 2017 08:28:10PM 2 points [-]

For a third perspective, I think most EAs who donate to AMF do so neither because of an EV calculation they've done themselves, nor because of risk aversion, but rather because they've largely or entirely outsourced their donation decision to GiveWell. GiveWell has also written about this in some depth, back in 2011 and probably more recently as well.

http://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/

Key quote:

"This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably)."

Comment author: Linch 17 February 2017 10:53:27AM 0 points [-]

An added reason not to take expected value estimates literally (which applies to some/many casual donors, but probably not to AGB or GiveWell) is if you believe that you are not capable of making reasonable expected value estimates under high uncertainty yourself, and you're leery of long causal chains because you've developed a defense mechanism against your values being Eulered or Dutch-Booked.

Apologies for the weird terminology, see: http://slatestarcodex.com/2014/08/10/getting-eulered/ and: https://en.wikipedia.org/wiki/Dutch_book

Comment author: Linch 15 February 2017 12:39:21PM *  3 points [-]

The GiveWell Top Charities are part of the Open Philanthropy Project's optimal philanthropic portfolio, when only direct impact is considered.

There's not enough money to cover the whole thing.

These are highly unlikely to both be true. Global poverty cannot plausibly be an unfillable money pit at GiveWell's current cost-per-life-saved numbers. At least one of these three things must be true:

GiveWell’s cost per life saved numbers are wrong and should be changed.

The top charities’ interventions will reach substantially diminishing returns long before they’ve managed to massively scale up.

A few billion dollars can totally wipe out major categories of disease in the developing world.

I don't think I understand the trilemma you presented here.

As a sanity check, under-5 mortality is about 6 million deaths per year worldwide. Assuming that more than 2/3 of these are preventable (which I think is a reasonable assumption if you compare with developed-world under-5 mortality rates), this means there are 4 million+ preventable deaths (and corresponding suffering) per year. At $10,000 to prevent a death, averting just a few months' worth of these deaths already costs more money than Open Phil has. At $3,500 to prevent a death, even a single year's worth still costs more money than Open Phil has.

We would also expect the numbers to be much larger if we're counting not just deaths averted but prevention of suffering as well.
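To make the arithmetic concrete, here is a quick sketch. The ~$10B figure for Open Phil's available capital is an illustrative assumption of mine, not an official number; the other inputs are the ones above.

```python
# Rough sanity check: annual cost to avert all preventable under-5 deaths,
# at two cost-per-life-saved figures, vs. an assumed ~$10B of Open Phil capital.
# The $10B capital figure is a placeholder assumption, not an official number.

preventable_deaths_per_year = 4_000_000   # ~2/3 of ~6M annual under-5 deaths
assumed_open_phil_capital = 10e9          # illustrative assumption only

for cost_per_life in (3_500, 10_000):
    annual_cost = preventable_deaths_per_year * cost_per_life
    months_to_exhaust = 12 * assumed_open_phil_capital / annual_cost
    print(f"${cost_per_life:,}/life: ~${annual_cost / 1e9:.0f}B per year; "
          f"exhausts the assumed capital in ~{months_to_exhaust:.0f} months")
```

Under these assumptions the money runs out after roughly 3 months at $10,000 per life and roughly 9 months at $3,500 per life.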

Comment author: RyanCarey 09 February 2017 07:02:04PM 0 points [-]

I trust that you can explain Bayes' theorem; I'm just adding that we can now be fairly confident that the intervention has less than 10% effectiveness.

Comment author: Linch 10 February 2017 12:08:02AM 0 points [-]

Yeah that makes sense!

Comment author: RyanCarey 09 February 2017 03:12:55AM 0 points [-]

You should not update significantly towards “casual outreach about EA is ineffective”, or “outreach has a very low probability of success” since the study is FAR too underpowered to detect even large effects. For example, if talking about GWWC to likely candidates has a 10% chance of making them take the pledge in the next 15-20 days, and the 14 people who were contacted are exactly representative of the pool of “likely candidates”, then we have a .9^14=23% chance of getting 0 pledges.

Given that it was already unlikely that being put in contact with a GWWC member would have a 10% chance of making them take the pledge, we can now call it very unlikely.

Comment author: Linch 09 February 2017 07:21:00AM *  2 points [-]

I'm not sure how you're operationalizing the difference between unlikely and very unlikely, but I think we should not be able to make sizable updates from this data unless the prior is REALLY big.

(You probably already understand this, but other people might read your comment as suggesting something stronger than you actually intend, and this is a point I really wanted to clarify anyway because I expect it to be a fairly common mistake.)

Roughly: Unsurprising conclusions from experiments with low sample sizes should not change your mind significantly, regardless of what your prior beliefs are.

This is true (mostly) regardless of the size of your prior. If a null result when you have a high prior wouldn't cause a large update downwards, then a null result on something when you have a low prior shouldn't cause a large shift downwards either.

[Math with made-up numbers below]

As mentioned earlier:

If the hypothesized effect is 10%: 23% probability of observing zero pledges.

If the hypothesized effect is 1%: 87% probability of observing zero pledges.

5%: 49%.

20%: 4.4%.
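These likelihoods are just the probability of zero pledges among 14 contacts if each contact independently takes the pledge at the given rate; a minimal sketch:

```python
# Probability of observing zero pledges among 14 contacts, if each contact
# independently takes the pledge with the given probability.
n_contacts = 14
for rate in (0.10, 0.01, 0.05, 0.20):
    p_zero_pledges = (1 - rate) ** n_contacts
    print(f"true effect {rate:.0%}: P(0 pledges) = {p_zero_pledges:.1%}")
```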

Say your prior belief is that there's a 70% chance of talking to new people having no effect (or an effect close enough to zero that it doesn't meaningfully matter), a 25% chance that it has a 1% effect, and a 5% chance that it has a 10% effect.

Then by Bayes' Theorem, your posterior probability should be:

75.3% chance it has no effect

23.4% chance it has a 1% effect

1.24% chance it has a 10% effect.

If, on the other hand, you originally believed that there's a 50% chance of it having no effect, and a 50% chance of it having a 10% effect, then your posterior should be:

81.3% chance it has no effect

18.7% chance it has a 10% effect.

Finally, if your prior is that it already has a relatively small effect, this study is far too underpowered to support basically any conclusion at all. For example, if you originally believed that there's a 70% chance of it having no effect, and a 30% chance of it having a 0.1% effect, then your posterior should be:

70.3% chance of no effect

29.7% chance of a 0.1% effect.
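The three updates above can be reproduced (up to rounding) with the same likelihoods and a one-line application of Bayes' theorem; a minimal sketch:

```python
# Posterior over hypothesized per-contact effect sizes after observing
# 0 pledges among 14 contacts, for the three priors above.
n = 14

def posterior(prior):
    # prior: dict mapping hypothesized effect size -> prior probability
    likelihood = {h: (1 - h) ** n for h in prior}   # P(0 pledges | effect size h)
    evidence = sum(prior[h] * likelihood[h] for h in prior)
    return {h: prior[h] * likelihood[h] / evidence for h in prior}

priors = [
    {0.0: 0.70, 0.01: 0.25, 0.10: 0.05},   # first scenario
    {0.0: 0.50, 0.10: 0.50},               # second scenario
    {0.0: 0.70, 0.001: 0.30},              # third scenario
]
for prior in priors:
    print({h: f"{p:.1%}" for h, p in posterior(prior).items()})
```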

This is all assuming ideal conditions. Model uncertainty and uncertainty about the quality of my experiment should only decrease the size of your update, not increase it.

Do you agree here? If so, do you think I should rephrase the original post to make this clearer?

Comment author: the_jaded_one 01 February 2017 05:13:26PM 0 points [-]

I don't think this is the right way to model marginal probability, to put it lightly. :)

Well, really you're trying to look at d/dx P(Hillary Win|spend x), and one way to do that is to model P(Hillary Win|spend x) as a linear function of spend. More realistically it is something like a sigmoid.

For some numbers, see this

So if we assume:

P(Hillary Win|total spend $300M) = 25%

P(Hillary Win|total spend $3Bn) = 75%

Then the average value of d/dx P(Hillary Win|spend x) over that range is 0.5/$2,700M, or equivalently about $5.4Bn of spending per unit of probability. Most likely the value of the derivative at the actual spending level isn't too far off the average.

This isn't too far from $1000/vote x 3 million votes = $3Bn.
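A sketch of that back-of-the-envelope calculation, using the two assumed anchor points and the linear simplification being discussed (the anchor probabilities are the assumptions above, not measured quantities):

```python
# Two assumed anchor points for P(Hillary Win) as a function of total spend.
low_spend, p_low = 300e6, 0.25    # $300M -> 25%
high_spend, p_high = 3e9, 0.75    # $3Bn  -> 75%

# Average slope of P(win) with respect to spend over that range, and its
# reciprocal: the average cost of one full unit of win probability.
avg_slope = (p_high - p_low) / (high_spend - low_spend)
cost_per_unit_probability = 1 / avg_slope

print(f"average dP/dx ~ {avg_slope:.2e} per dollar")
print(f"~${cost_per_unit_probability / 1e9:.1f}Bn per unit of win probability")
# Roughly $5.4Bn, i.e. each marginal dollar buys ~1/5,400,000,000 of win
# probability -- the same ballpark as $1,000/vote x 3 million votes = $3Bn.
```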

Comment author: Linch 02 February 2017 01:04:37AM 1 point [-]

Thanks for the edit! :) I appreciate it.

I think your model has MUCH more plausible numbers after the edit, but on a more technical level, I still think a linear model that far out is not ideal here. We would expect diminishing marginal returns well before we hit an increase in spending by a factor of 10.

Probably much better to estimate based on "cost per vote" (like you did below), and then use something like Silver's estimates for marginal probability of a vote changing an election.
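A sketch of what that alternative estimate might look like; both numbers below are illustrative assumptions for the sake of the example, not Silver's actual figures or sourced GOTV statistics:

```python
# Dollars per unit of win probability, estimated from cost per marginal vote
# and the chance that a single marginal vote decides the election.
# Both inputs are illustrative assumptions, not sourced estimates.
cost_per_vote = 1_000      # assumed GOTV cost per additional vote, in dollars
p_vote_decisive = 1e-7     # assumed probability one extra vote swings the outcome

cost_per_unit_probability = cost_per_vote / p_vote_decisive
print(f"~${cost_per_unit_probability / 1e9:.0f}Bn per unit of win probability")
```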

To be clear, I have nothing against linear models and use them regularly.

Comment author: the_jaded_one 01 February 2017 07:59:08AM *  -2 points [-]

Hillary outspent Trump by a factor of 2 and lost by a large margin, so donating to her campaign looks like something of a questionable decision.

EDIT: I think a more realistic model might go something like this; you can tweak the figures to shift the answer by a factor of 2-3, but not much more:

P(Hillary Win|total spend $300M) = 25%

P(Hillary Win|total spend $3Bn) = 75%

Then the average value of d/dx P(Hillary Win|spend x) over that range is 0.5/$2,700M, or equivalently about $5.4Bn of spending per unit of probability. Most likely the value of the derivative at the actual spending level isn't too far off the average.

This isn't too far from $1000/vote x 3 million votes = $3Bn.

So we could look at something like $5Bn/unit probability at the margin, or each $1 increasing the probability of Hillary winning by 1/5,000,000,000.

You could probably do a very similar analysis for any political election at roughly this level of existing funding.

We can take a first approximation to the expected disutility of a very bad Trump presidency at $4Tn, or one full year's GDP. This implies a very confident belief in an extremely negative outcome from a Trump presidency.

Is it competitive with global poverty? Well, it seems like it is on a fairly similar level: for $5,000 you can save a life, which is typically valued at something like $5M-$25M, and that is a similar "rate of return" to paying $5Bn for $4Tn via the Clinton campaign.
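Making that comparison explicit, using only the figures quoted above (none of these are independent estimates):

```python
# Crude "rate of return" comparison: value produced per dollar spent.
# All figures are the ones quoted above, not independent estimates.

poverty_return_low = 5e6 / 5_000    # life valued at $5M, saved for $5,000  -> ~1,000x
poverty_return_high = 25e6 / 5_000  # life valued at $25M, saved for $5,000 -> ~5,000x
campaign_return = 4e12 / 5e9        # $4Tn assumed disutility averted per $5Bn -> ~800x

print(f"global poverty: ~{poverty_return_low:,.0f}x to ~{poverty_return_high:,.0f}x")
print(f"Clinton campaign: ~{campaign_return:,.0f}x")
```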

Is this competitive with MIRI or the other AI risk orgs? Probably not, but your beliefs about AI risk factor into this quite a lot.

Comment author: Linch 01 February 2017 08:38:13AM *  2 points [-]

I'm really confused by both your conclusion and how you arrived at the conclusion.

I. Your analysis suggests that if Clinton doubles her spending, her chances of winning will increase by less than 2% (!)

This seems unlikely.

II. "Hillary outspent Trump by a factor of 2 and lost by a large margin." I think this is exaggerating things. Clinton had a 2.1% higher popular vote. 538 suggests (http://fivethirtyeight.com/features/under-a-new-system-clinton-could-have-won-the-popular-vote-by-5-points-and-still-lost/) that Clinton would probably have won if she had a 3% popular vote advantage.

First of all, I dispute that losing by less than 1-in-100 of the electoral body is a "large margin." Secondly, I don't think it's very plausible that shifting on the order of 1 million votes with $1 billion in additional funding would move her chance of winning by less than 2%. ($1,000 per vote is well within the statistics I've seen on GOTV efforts, and is actually on the seriously high end.)

III. "I mean presumably even with 10x more money or $6bn, Hillary would still have stood a reasonable chance of losing, implying that the cost of a marginal 1% change in the outcome is something like $500,000,000 - $1,000,000,000 under a reasonable pre-election probability distribution."

I don't think this is the right way to model marginal probability, to put it lightly. :)

Comment author: Linch 27 January 2017 12:26:05AM 0 points [-]

I wouldn't worry too much about detecting plagiarism. There isn't THAT much content in the EA space, and someone among a group of us would likely recognize content that repeats things we've seen before.

Comment author: Linch 24 January 2017 08:47:40AM 2 points [-]

Once this idea is more developed, Students for High-Impact Charity would be happy to help advertise/promote it.

Comment author: Telofy  (EA Profile) 13 January 2017 07:25:55PM 1 point [-]

Oh, thank you! <3 I’m trying my best.

Oh yeah, the Berkeley community must be huge, I imagine. (Just judging by how often I hear about it and from DxE’s interest in the place.) I hope the mourning over Derek Parfit has also reminded people in your circles of the hitchhiker analogy and two-level utilitarianism. (Actually, I’m having a hard time finding out whether Parfit came up with it or whether Eliezer just named it for him on a whim. ^^)

Comment author: Linch 24 January 2017 08:22:11AM 1 point [-]

The hitchhiker is mentioned in Chapter One of Reasons and Persons. Interestingly, Parfit was more interested in the moral implications than the decision-theory ones.

Comment author: Linch 16 January 2017 06:22:15AM 1 point [-]

UPDATE: I now have my needed number of volunteers, and intend to launch the experiment tomorrow evening. Please email, PM, or otherwise contact me in the next 12 hours if you're interested in participating.
