Comment author: Benito 05 September 2017 09:19:44PM 6 points [-]

I don't think the idea Anna suggests is to pick books you think young people should read, but to actually ask the best people what books they read that influenced them a lot.

Things that come to my mind include GEB, HPMOR, The Phantom Tollbooth, Feynman. Also, and this surprises me but is empirically true for many people, Sam Harris's "The Moral Landscape" seems to have been the first book a number of top people I know read on their journey to doing useful things.

But either way I'd want more empirical data.

Comment author: AABoyles  (EA Profile) 06 September 2017 05:25:33PM 0 points [-]

Totally agree about data collection. Seems like a good candidate for an approval vote. After a five-minute search I couldn't find a good approval-voting platform, but then I realized that basically all polls on DEAM work this way (i.e. Facebook supports this). Maybe this is something we could post in the EA Facebook group? @Peter_Hurford?

Comment author: AABoyles  (EA Profile) 20 July 2015 08:25:49PM 4 points [-]

Mr. Musk has personally donated $10 million via the Future of Life Institute towards a variety of AI safety projects. Additionally, MIRI is currently engaged in its annual fundraising drive with ambitious stretch goals, which include the hiring of several (and potentially many) additional researchers.

With this in mind, is the bottleneck to progress in AI Safety research the availability of funding or of researchers? Stated differently, if a technically competent person assesses AI Safety to be the most effective cause, which approach is more effective: earning to give to MIRI or FLI, or becoming an AI Safety researcher?

Comment author: AABoyles  (EA Profile) 21 July 2015 03:08:16PM 2 points [-]

Related: What is your estimate of the field's room-for-funding for the next few years?

Comment author: AABoyles  (EA Profile) 21 July 2015 02:41:59PM 2 points [-]

GiveWell's Holden Karnofsky assessed the Singularity Institute in 2012 and provided a thoughtful, extensive critique of its mission and approach, which remains tied for the top post on LessWrong. It seems the EA meta-charity evaluators are still hesitant to name AI Safety (and, more broadly, Existential Risk Reduction) as a potentially effective target for donations. What are you doing to change that?

Comment author: AABoyles  (EA Profile) 08 July 2015 06:42:53PM 4 points [-]

The Boston Review held a Forum on Effective Altruism with some excellent criticism by academic non-EAs.

Comment author: RyanCarey 27 April 2015 02:52:48PM 0 points [-]

Thanks very much!

Comment author: AABoyles  (EA Profile) 29 April 2015 01:19:16PM 0 points [-]

Also, props for compiling it in LaTeX. The typesetting is beautiful. :)

Comment author: RyanCarey 26 April 2015 08:39:55PM *  0 points [-]

Thanks, that's helpful! Have you any suggestions for improving the book?

Comment author: AABoyles  (EA Profile) 27 April 2015 02:24:58PM 1 point [-]

Honestly, no. It covers the high points of the movement with excellent pacing. The essays are concise, readable, and interesting. There's no superfluous content. It's great all around.

Comment author: AABoyles  (EA Profile) 26 April 2015 08:23:25PM 3 points [-]

Excellent work! I've just finished it and posted a review on Goodreads.

Comment author: AABoyles  (EA Profile) 16 March 2015 02:49:47PM 2 points [-]

The bottom link is broken. I believe it should point to http://globalprioritiesproject.org/2015/03/ylds-and-ylls/
