Comment author: Flodorner 16 June 2018 06:48:16AM 3 points [-]

I am not sure whether your usage of economies of scale already covers this, but it seems worth highlighting that what matters is the marginal difference the money makes for you and for your adversary. If doing evil is a lot more efficient at small scales (think of distributing highly addictive drugs among vulnerable populations vs. distributing malaria nets), your adversary could already be hitting diminishing returns while your marginal returns are still increasing, and the lottery might still not be worth it.
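A minimal numeric sketch of that marginal-returns comparison (the impact curves, curvatures, and dollar amounts below are invented purely for illustration, and different curvature assumptions flip which option looks better):

```python
# Toy expected-value sketch of the marginal-returns point above.
# All functions and numbers are illustrative assumptions, not from the post.

import math

def good_impact(x):
    """Hypothetical impact of x dollars donated well; assumed increasing marginal returns."""
    return 0.001 * x ** 1.2

def harm_impact(x):
    """Hypothetical harm from x dollars donated badly; assumed diminishing marginal returns."""
    return 2.0 * math.sqrt(x)

def net_without_lottery(c, d):
    # You give c to good, the adversary gives d to harm.
    return good_impact(c) - harm_impact(d)

def net_with_lottery(c, d):
    # Pool of c + d; you direct the whole pool with probability c / (c + d).
    pool, p_win = c + d, c / (c + d)
    return p_win * good_impact(pool) - (1 - p_win) * harm_impact(pool)

c, d = 5_000, 5_000
print(net_without_lottery(c, d))   # baseline: each side donates separately
print(net_with_lottery(c, d))      # expected net impact of the pooled lottery
```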

Comment author: cole_haus 10 July 2018 08:54:17AM 0 points [-]

Yup, I hope the examples make that clear, but the other descriptions could do more to highlight that we're interested in the margin.

Comment author: Peter_Hurford  (EA Profile) 15 June 2018 09:48:21PM 0 points [-]

I think you have a typo in your post title.

Comment author: cole_haus 16 June 2018 04:16:01AM 2 points [-]

It was meant as mediocre word play on the idiom 'dining with the devil' and 'donating'.

Doning with the devil

[Cross-posted from https://www.col-ex.org/posts/devil-donor-lottery/] Donor lotteries: Effective altruists have proposed and promoted donor lotteries. Briefly, in a donor lottery, donors pool money for charitable contribution. They're given lottery tickets in proportion to their contributions. The winner of the lottery gets to decide where the pool of charitable funds should...
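For concreteness, a minimal sketch of the mechanism just described: each donor's chance of winning is proportional to their contribution, and the winner directs the whole pool. The donor names and amounts are illustrative, not taken from the post.

```python
import random

def run_donor_lottery(contributions, rng=random):
    """contributions: dict mapping donor -> amount. Returns (winner, pool size)."""
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    pool = sum(weights)
    # Probability of winning is proportional to amount contributed.
    winner = rng.choices(donors, weights=weights, k=1)[0]
    return winner, pool

contributions = {"alice": 500, "bob": 1_500, "carol": 3_000}
winner, pool = run_donor_lottery(contributions)
print(f"{winner} directs the ${pool:,} pool "
      f"(win probability was {contributions[winner] / pool:.0%})")
```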
Comment author: cole_haus 14 June 2018 04:33:36PM *  4 points [-]

Anecdotally, we’ve found that our matching campaigns have brought in a disproportionately large number of new donors—the majority of whom were not previously involved with effective giving. [...] we were able to teach them about effective animal advocacy and to support them in effective giving elsewhere in the EA movement. The amount that these donors will give to effective charities during their lifetime is significantly higher than the donation-matching campaign that attracted them; we continue to build relationships with these new donors.

I think this might be a key part that merits more explication. I can think of two major objections that evidence here would help answer:

1) The consequentialist benefit of 'standard' marketing techniques isn't worth the deontological cost.

2) 'Standard' marketing techniques are self-defeating for EA. This relies on the belief that those who are put off by the utilon approach and attracted by the fuzzy approach are unlikely to 'assimilate' into EA.

Can you share more information on the number of new donors and particularly their subsequent engagement with EA? Or, if you can't or aren't ready to share that data, can you at least attest that you're tracking it and working on it?

Comment author: cole_haus 14 June 2018 03:42:39PM 2 points [-]

This seems very related to the unilateralist's curse: https://nickbostrom.com/papers/unilateralist.pdf. There, they suggest that if you're about to reveal information you're surprised others aren't talking about, take a moment and consider whether their silence is evidence you should remain silent.
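A toy simulation of the mechanism the paper describes (the numbers here are my own assumptions, not from Bostrom et al.): the true value of acting is slightly negative, each agent sees it with independent noise, and the action happens if any single agent's private estimate is positive. The chance that someone acts grows with the number of agents.

```python
import random

def prob_someone_acts(n_agents, true_value=-0.5, noise_sd=1.0, trials=20_000):
    """Estimate the probability that at least one of n_agents acts unilaterally."""
    releases = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise_sd) for _ in range(n_agents))
        if any(e > 0 for e in estimates):
            releases += 1
    return releases / trials

for n in (1, 5, 20):
    print(f"{n:>2} agents -> release probability ~ {prob_someone_acts(n):.2f}")
```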

Comment author: cole_haus 30 May 2018 07:17:41PM 0 points [-]

Regarding section 1, is there a reliable way to determine who these market-beating superforecasters are? What about in new domains? Do we have to have a long series of forecasts in any new domain before we can pick out the superforecasters?

Somewhat relatedly, what guarantees do we have that the superforecasters aren't just getting lucky? Surely, some portion of them would revert to the mean if we continued to follow their forecasts.
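A quick sketch of that worry with made-up numbers: if every forecaster has identical skill, the ones who look best over one batch of questions (selected on luck alone) tend to revert to average accuracy on the next batch.

```python
import random

def accuracy(n_questions, skill=0.6):
    """Fraction correct on n_questions binary forecasts at a fixed hit rate."""
    return sum(random.random() < skill for _ in range(n_questions)) / n_questions

random.seed(0)
n_forecasters, n_questions = 200, 50
first_round = [accuracy(n_questions) for _ in range(n_forecasters)]

# "Superforecasters" = top 10% in the first round (here, purely lucky).
cutoff = sorted(first_round, reverse=True)[n_forecasters // 10 - 1]
top_ids = [i for i, a in enumerate(first_round) if a >= cutoff]

second_round = [accuracy(n_questions) for _ in range(n_forecasters)]
print("top decile, round 1:", sum(first_round[i] for i in top_ids) / len(top_ids))
print("top decile, round 2:", sum(second_round[i] for i in top_ids) / len(top_ids))
print("everyone,   round 2:", sum(second_round) / n_forecasters)
```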

Altogether, this seems somewhat analogous to the arguments around active vs passive investing where I think passive investing comes out on top.

Comment author: cole_haus 30 May 2018 06:51:47PM *  0 points [-]

I think Evidence-Based Policy: A Practical Guide To Doing It Better is also a good source here. The blurb:

Over the last twenty or so years, it has become standard to require policy makers to base their recommendations on evidence. That is now uncontroversial to the point of triviality--of course, policy should be based on the facts. But are the methods that policy makers rely on to gather and analyze evidence the right ones? In Evidence-Based Policy, Nancy Cartwright, an eminent scholar, and Jeremy Hardie, who has had a long and successful career in both business and the economy, explain that the dominant methods which are in use now--broadly speaking, methods that imitate standard practices in medicine like randomized control trials--do not work. They fail, Cartwright and Hardie contend, because they do not enhance our ability to predict if policies will be effective.

The prevailing methods fall short not just because social science, which operates within the domain of real-world politics and deals with people, differs so much from the natural science milieu of the lab. Rather, there are principled reasons why the advice for crafting and implementing policy now on offer will lead to bad results. Current guides in use tend to rank scientific methods according to the degree of trustworthiness of the evidence they produce. That is valuable in certain respects, but such approaches offer little advice about how to think about putting such evidence to use. Evidence-Based Policy focuses on showing policymakers how to effectively use evidence, explaining what types of information are most necessary for making reliable policy, and offers lessons on how to organize that information.