Comment author: Peter_Hurford  (EA Profile) 16 February 2017 11:56:36PM 4 points

I looked a bit at the expected value of campaigning for Hillary in "How Should I Spend My Time" and came away thinking the cost-effectiveness and career capital value looked pretty low. As such, I've mostly deprioritized political work.

However, I'm open to reconsidering. The estimate I made definitely has very wide error bars and could easily be anywhere from very positive to quite net negative. I also could see that targeted legislative lobbying might improve cost-effectiveness, confidence in the impact being positive, and career capital. Definitely worth re-investigating and I'm glad some EAs are looking at this.

Comment author: casebash 16 February 2017 11:24:07PM 0 points

"A 10% chance of donating $100K should be roughly as motivating to a risk-neutral EA as a 100% chance of donating $10K (not taking into account arguments that the risk-neutral utility of money may be nonlinear)." - that's not how human psychology works.

Comment author: Peter_Hurford  (EA Profile) 16 February 2017 11:39:20PM 0 points

How easy is it for an EA to overcome that?

Also, if there's a motivation-impact trade-off, how can we navigate that?

Comment author: ThomasSittler 15 February 2017 09:52:32PM 0 points

There are some pragmatic obstacles to that. For instance, team members have only signed up to work on the Oxford Prioritisation Project over a particular time period. If we ended up winning the donor lottery later on, we'd have to do lots of research then.

Comment author: Peter_Hurford  (EA Profile) 16 February 2017 04:40:45AM 0 points

What's the time period?

One additional reason I ask is that a lot of OxPri research to date (do you have a preferred abbreviation? OPP is naughty and conflicts with OpenPhil) seems to have been of the form "field X looks interesting and may have some promising opportunities, but it doesn't look like we could do anything for 10K GBP", which makes it feel like the donation size may be limiting.

Comment author: Peter_Hurford  (EA Profile) 15 February 2017 09:39:46PM 0 points

Would it make sense to donate to the LJAF for promoting open science?

Comment author: ThomasSittler 15 February 2017 03:43:20PM *  1 point

I am aware that the formatting is poor, but the EA forum text editor is hard to deal with. Tips on how to improve would be appreciated.

Right now the best way to read Daniel's post is probably the Google Doc:

https://docs.google.com/document/d/13wsMAugRacu52EPZo6-7NJh4QuYayKyIbjChwU0KsVU/edit#

Comment author: Peter_Hurford  (EA Profile) 15 February 2017 09:30:17PM 3 points

Formatting seems readable to me. It would be nice to not include the entire article in a blockquote, though.

Comment author: Peter_Hurford  (EA Profile) 14 February 2017 04:28:35AM *  2 points

While it likely is true of some EAs, it's a simplistic strawman to assume that those of us who favor donating to AMF (though in practice I prefer donating to research and meta-charity more) do so due to risk aversion. Saying that would require knowing, with confidence, the expected value of a donation to MIRI.

I certainly would prefer to donate to a 0.01% chance of saving 11K lives than a 100% chance of saving a life. But I don't actually know that MIRI actually represents a superior expected value bet.

(See some discussion about MIRI's chance of success here and here).
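For what it's worth, the expected-value comparison in this comment can be made explicit with a quick back-of-the-envelope check (a sketch using only the hypothetical 0.01% / 11K figures from the comment, not actual estimates of any charity's impact):

```python
# Hypothetical numbers from the comment above, not real estimates.
p_success = 0.0001         # 0.01% chance the speculative donation pays off
lives_if_success = 11_000  # lives saved if it does
certain_lives = 1          # lives saved by the "safe" donation

# Risk-neutral comparison: expected lives saved.
expected_lives = p_success * lives_if_success  # 1.1 lives in expectation
print(expected_lives > certain_lives)  # True
```

In expectation the long shot saves 1.1 lives versus 1, so a risk-neutral donor prefers it; the comment's point is that the hard part is knowing whether that 0.01% figure is actually right.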

Comment author: Benito 14 February 2017 03:24:59AM *  3 points

I'll note that the team should still allocate its time to answering the object level question, so that if they win the lottery they know where they'll give the money.

Comment author: Peter_Hurford  (EA Profile) 14 February 2017 04:23:40AM 1 point

I definitely agree. But it might be good to see research around whether a donor lottery is worthwhile and how to donate $100K.

Comment author: casebash 14 February 2017 03:50:13AM 0 points

That would defeat the purpose of the project. I think that the purpose is to spur research and the money is there for extra encouragement.

Comment author: Peter_Hurford  (EA Profile) 14 February 2017 04:23:02AM 2 points

I don't think that's true for two reasons:

(1) A 10% chance of donating $100K should be roughly as motivating to a risk-neutral EA as a 100% chance of donating $10K (not taking into account arguments that the risk-neutral utility of money may be nonlinear).

(2) Research around whether to donate $100K or $10K (or how to donate $100K conditional on winning the lottery) would be useful.
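Point (1) can be illustrated with a toy calculation. This is only a sketch of the risk-neutral argument; the log-utility line is my own illustration of the nonlinearity caveat the comment brackets off, not something claimed in the thread:

```python
import math

p_win = 0.10            # chance of winning the donor lottery
pot = 100_000           # amount donated if you win
sure_donation = 10_000  # guaranteed alternative

# Risk-neutral: only the expected amount donated matters,
# and 10% of $100K equals a sure $10K.
ev_lottery = p_win * pot
print(math.isclose(ev_lottery, sure_donation))  # True

# Caveat: if the utility of donated money is nonlinear (illustrated
# here with concave log utility), the equivalence breaks down.
u_lottery = p_win * math.log(pot)
u_sure = math.log(sure_donation)
print(u_lottery < u_sure)  # True: diminishing returns favor the sure donation
```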

Comment author: Peter_Hurford  (EA Profile) 14 February 2017 12:07:50AM 9 points

Creating a well-defined mathematical underpinning for the neglectedness-tractability-importance framework is a really cool non-trivial accomplishment. Thanks for helping further arm all of us cause prioritizers. :)

Comment author: Peter_Hurford  (EA Profile) 14 February 2017 12:05:47AM 1 point

"In addition to Will, this team consists of Emma Gray, Executive Assistant, and Roxanne Heston, Press Officer. This month they have been mainly focused on setting up the processes for the new team and helping to devise strategies to help build Will’s influence and connections."

I don't have any problem with this, but stating it so clearly and bluntly comes off to me as kind of cultish.
