
tomstocker

183 karma · Joined Jan 2015

Comments (200)

I think the reason I like this so much is that it isn't another idea fiddling at the margins of a problem with a complicated theory of impact - it just provides a project vehicle to solve one of the more tractable key problems head-on.

Certainly there's a risk that it turns into a community-wide equivalent of procrastination if the spreads are low. Would love someone to tackle that rigorously and empirically!

This is helpful. It might be worth defining EA as a movement that recognises premises 1, 2, and 3 are partially true, and that even if there are only small differences on each, it is worth being really careful and deliberate about what we do and how much.

There was also something attractive to me as a young person many moons ago about Toby Ord & Will MacAskill's other early message - which is perhaps a bit more general / not specific to EA - that there are some really good opportunities to promote the common good out there, and they are worth pursuing (perhaps this is the moral element that you're trying to abstract from?).

I like the way you introduced the calculus; it was artful. I think going one step further would be useful, i.e. looking at the income distributions of recipients of different interventions and charities.

I'd be interested in the long-run future and things focused more directly on human wellbeing than generic health and income. I'd also be more interested if these groups not only updated on orgs we all know about but also did / collated exploratory work on speculative opportunities.

Except that on point 3, the policies advocated and the strategies being tried aren't as if people are trying to reduce x-risk; they're as if they're trying to enable AI to work rather than backfire.

See the recent pain control brief by Lee Sharkey as an example, or Auren Forrester's stuff on suicide.

What's wrong with low-hanging fruit? Not entertaining enough?
