Comment author: cole_haus 15 August 2018 09:13:59PM *  3 points [-]

I think there's a certain prima facie plausibility to the traditional tripartite division. If you just think about the world in general, each of actors, actions, and states seems salient. It wouldn't take much to convince me that--appropriately defined--actors, actions, and states are mutually exclusive and collectively exhaustive in some metaphysical sense.

Once you accept the actors, actions, and states division, it makes sense to have ethical theories revolving around each. These correspond to virtue ethics, deontology, and consequentialism respectively.

Comment author: cole_haus 15 August 2018 09:06:45PM *  2 points [-]

I think you could fairly convincingly bucket virtue ethics in 'Ends' if you wanted to adopt this schema. A virtue ethicist could be someone who chooses the action that produces the best outcome in terms of personal virtue. They are (sort of) a utilitarian who optimizes for virtue rather than utility and restricts their attention to themselves alone rather than to the whole world.

Comment author: Benito 22 July 2018 07:18:00PM *  7 points [-]

I actually have made detailed notes on the first 65% of the book, and hope to write up some summaries of the chapters.

It’s a great work. Doing the relevant literature reviews myself would likely have taken hundreds of hours, rather than the tens it took to study the book. As with all social science, the conclusions from most of the individual studies are suspect, but I think it sets out some great and concrete models to start from and test against other data we have.

Added: I’m Ben Pace, from LessWrong.

Added2: I finished the book. Not sure when my priorities will allow me to turn it into blogposts, alas.

Comment author: cole_haus 24 July 2018 11:47:07AM 1 point [-]

That's great to hear!

Comment author: cole_haus 22 July 2018 06:08:09PM 7 points [-]

I've not yet read it myself, but I'm curious if anyone involved in this project has read "Building Successful Online Communities: Evidence-Based Social Design" (https://mitpress.mit.edu/books/building-successful-online-communities). Seems quite relevant.

Comment author: Flodorner 16 June 2018 06:48:16AM 3 points [-]

I am not sure whether your usage of economies of scale already covers this, but it seems worth highlighting that what matters is the difference in the marginal value of the money for you and for your adversary. If doing evil is a lot more efficient at small scales (think of distributing highly addictive drugs among vulnerable populations vs. distributing malaria nets), your adversary could already be hitting diminishing returns while your marginal returns are still increasing, and the lottery might still not be worth it.

Comment author: cole_haus 10 July 2018 08:54:17AM 0 points [-]

Yup, I hope the examples make that clear, but the other descriptions could do more to highlight that we're interested in the margin.

Comment author: Peter_Hurford  (EA Profile) 15 June 2018 09:48:21PM 0 points [-]

I think you have a typo in your post title.

Comment author: cole_haus 16 June 2018 04:16:01AM 2 points [-]

It was meant as mediocre wordplay on the idiom 'dining with the devil' and 'donating'.


Doning with the devil

[Cross-posted from https://www.col-ex.org/posts/devil-donor-lottery/] Effective altruists have proposed and promoted donor lotteries. Briefly, in a donor lottery, donors pool money for charitable contribution. They're given lottery tickets in proportion to their contributions. The winner of the lottery gets to decide where the pool of charitable funds should... Read More
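A minimal sketch of the mechanism described in the excerpt above (hypothetical donor names and amounts, not from the post): tickets are allocated in proportion to contributions, so each donor's chance of directing the pooled funds is proportional to what they put in.

```python
import random

# Hypothetical contributions to the pooled fund, in dollars.
contributions = {"alice": 500, "bob": 1500, "carol": 3000}
pool = sum(contributions.values())

# Drawing the winner with probability proportional to contribution is
# equivalent to handing out tickets in proportion to what each donor gave.
donors = list(contributions)
weights = [contributions[d] for d in donors]
winner = random.choices(donors, weights=weights, k=1)[0]

print(f"{winner} decides where the full ${pool} pool goes")
```

In expectation each donor still directs exactly the amount they contributed; the appeal is that the winner allocates a much larger sum and so can justify more research per dollar moved.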
Comment author: cole_haus 14 June 2018 04:33:36PM *  4 points [-]

Anecdotally, we’ve found that our matching campaigns have brought in a disproportionately large number of new donors—the majority of whom were not previously involved with effective giving. [...] we were able to teach them about effective animal advocacy and to support them in effective giving elsewhere in the EA movement. The amount that these donors will give to effective charities during their lifetime is significantly higher than the donation-matching campaign that attracted them; we continue to build relationships with these new donors.

I think this might be a key part that merits more explication. I can think of two major objections that evidence here would help answer:

1) The consequentialist benefit of 'standard' marketing techniques isn't worth the deontological cost.

2) 'Standard' marketing techniques are self-defeating for EA. This relies upon a belief that those who are put off by the utilon approach and attracted by the fuzzy approach are unlikely to 'assimilate' into EA.

Can you share more information on the number of new donors and particularly their subsequent engagement with EA? Or, if you can't or aren't ready to share that data, can you at least attest that you're tracking it and working on it?

Comment author: cole_haus 14 June 2018 03:42:39PM 2 points [-]

This seems very related to the unilateralist's curse: https://nickbostrom.com/papers/unilateralist.pdf. There, the authors suggest that if you're about to reveal information you're surprised others aren't talking about, you should take a moment to consider whether their silence is evidence that you should remain silent too.

Comment author: cole_haus 30 May 2018 07:17:41PM 0 points [-]

Regarding section 1, is there a reliable way to determine who these market-beating superforecasters are? What about in new domains? Do we have to have a long series of forecasts in any new domain before we can pick out the superforecasters?

Somewhat relatedly, what guarantees do we have that the superforecasters aren't just getting lucky? Surely, some portion of them would revert to the mean if we continued to follow their forecasts.

Altogether, this seems somewhat analogous to the arguments around active vs. passive investing, where I think passive investing comes out on top.
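To make the luck worry concrete, here is a toy simulation (my own hypothetical numbers, not from the post): every forecaster below has identical skill, yet the ones who happen to score best on a first batch of questions look like 'superforecasters' and then regress to the mean on the next batch.

```python
import random

random.seed(0)
N_FORECASTERS, N_QUESTIONS, TOP_K = 1000, 50, 20
HIT_RATE = 0.6  # every forecaster has the same chance of a correct call

def score():
    """Number of correct calls on one batch of questions for one forecaster."""
    return sum(random.random() < HIT_RATE for _ in range(N_QUESTIONS))

period1 = [score() for _ in range(N_FORECASTERS)]
period2 = [score() for _ in range(N_FORECASTERS)]

# Select the "superforecasters" on period-1 performance alone.
top = sorted(range(N_FORECASTERS), key=lambda i: period1[i], reverse=True)[:TOP_K]
avg = lambda xs: sum(xs) / len(xs)

print(f"top {TOP_K}, period 1:  {avg([period1[i] for i in top]):.1f}/{N_QUESTIONS}")
print(f"same people, period 2: {avg([period2[i] for i in top]):.1f}/{N_QUESTIONS}")
print(f"everyone, period 2:    {avg(period2):.1f}/{N_QUESTIONS}")
```

Separating skill from luck requires enough resolved forecasts from the same people, which is exactly the difficulty raised above for new domains.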
