Comment author: JoshP 08 May 2018 09:56:43PM 0 points [-]

Haven't read all of it, but I believe there's an error in the first line, which says this is the "second of three parts"; I think it means third. Sorry my engagement isn't more interesting :P

Comment author: JoshP 07 May 2018 01:49:16PM 0 points [-]

Good article in lots of ways. I'm perhaps slightly put off by the sheer amount of info here: I don't feel like I can take all of this in easily, given my own laziness and the number of goals I feel like I prioritise. Not sure there's an easy solution to that (maybe some sort of list of the top two or three suggestions?), but this feels like a bit of an information overload. Thanks for writing it though Darius, I enjoyed it :)

Comment author: Maxdalton 02 May 2018 06:21:05PM *  3 points [-]

Edit, this should be fixed now, let me know if there are still problems.

(Sorry, I don't have time to reply to all of the comments here today.) Apologies about this! Not sure what's going on here, but does this version work better for you?

[There was a link here]

I'll try to get a full fix tomorrow.

Comment author: JoshP 03 May 2018 09:59:02AM 10 points [-]

I just spent a very exciting hour going through every link (yes, I clicked all of them) in the handbook, and I think I have a definitive list of mistakes in the links (if there are others, may they remain mistakes ever more :P ):

  1. p.47, Engines of Creation
  2. p.77, expected value link
  3. p.77, Risk aversion and rationality: use http://fitelson.org/seminar/buchak2.pdf instead
  4. p.80, scope insensitivity
  5. p.80, "Luke Muehlhauser has commented"
  6. p.127, "our profile on the long-run future"
  7. Footnote 43, p.137, link 2, related to diarrhoea
  8. p.142, "animal welfare profile"
  9. p.144, systemic change profile link works, but looks slightly unprofessional
  10. You could do with a link to the 80,000 Hours Podcast and the Doing Good Better podcast on p.166

Comment author: Alex_Barry 02 May 2018 03:59:23PM *  4 points [-]

As far as I can tell, none of the links that look like this (instead of like http://effective-altruism.com) work in the pdf version.

Comment author: JoshP 02 May 2018 05:17:53PM 1 point [-]

I think there's a mix of working and non-working links, having just checked myself. Some don't go through to anything when you click on them; some go through to a 404 error; and some go through to the correct website.

Bizarrely, this depends on which copy I have downloaded. I downloaded it more than once (in different tabs), and each copy behaves differently. The first one I downloaded works for every link I check; the second one doesn't, and this remains true when comparing the same links like for like. I'm not really sure why. Bit bizarre.

Comment author: JoshP 02 May 2018 01:21:50PM *  9 points [-]

A few quick comments, have skimmed through rather than a read in depth (have read a number of the articles in the past):

  1. There's a formatting error on p.167: under "1. Learn more", there is a spacing error in the paragraph, which bizarrely cuts the paragraph in two for no reason. [Edit: this is not a lone error; I've found another on p.142, and there may well be others, as I haven't gone through exhaustively.]
  2. I'd be interested in how the relevant cause areas were agreed upon. There's a heavy emphasis on Artificial Intelligence and the Long-Run Future (three articles on AI), and some areas of interest to EAers get very little or no mention at all (e.g. Mental Health and Happiness, re Michael Plant [EDIT: there is some mention of this, but there's still at least a good question here about how cause areas are decided]). Perhaps there's also a lack of attention to cause areas versus career paths (and the Gov. article is great but extremely America-heavy, which is not helpful for non-Americans). I suspect there is more to be said about how this was decided, and it would be useful to understand it.
  3. The recommendation of different books at the end is interesting. Given the heavy emphasis on the long-term future throughout, it seems to me that the recommended books diverge from that (Doing Good Better and The Most Good You Can Do aren't hugely about it, from my memory). Would it have been better to recommend Superintelligence, or the Global Catastrophic Risks volume which came out a while back, or something else, if the long-term future dominates the attention elsewhere?
  4. One worry I have is that it does a lot to suggest possible directions for EAers, and little to deal with objections EAers might face. The original handbook seems to have slightly more on that (e.g. the Estimation article from Katja, and Holden's article on whites in shining armour), and there are other objections which regularly arise, e.g. the collectivist/coordination ideas that Hilary Greaves has been talking about (no individual can ever make a difference), or objections which focus on the overriding significance of virtue, or on the total cluelessness of us all. It might be that these are dealt with in some other way (I'm not clear how), or that this is simply not that important (which I question, but am uncertain about), so I would be appreciative of your thoughts.

Mainly, though, I liked it, so my critical points aren't to be understood as a total rejection of the piece! It was great in so many ways, and I'm sure it required a fair amount of work!

Comment author: JoshP 25 April 2018 01:48:14PM 5 points [-]

Interesting stuff, thanks guys. I wanted to discuss one point:

  1. From conversations with James, I believe Cambridge has a pretty different model of how they run it: in particular, a much more hands-on approach, which calls for formal commitment from more people, e.g. giving everyone specific roles, which is the "excessive formalist" approach. Are there reasons you guys have access to which favour your model of outreach over theirs? Or, to put it another way: what's the best argument in favour of the Cambridge model of giving everyone an explicit role, and why does that not succeed (if it doesn't)?

For example, is it possible that Cambridge gets a significantly higher number of people involved, which then cancels out the effects of immediately high-fidelity models in due course (e.g. suppose lots of people are low-fidelity while at Cambridge, but then a section becomes more high-fidelity later, and it ends up not making that much difference in the long run)? Or does the Cambridge model use roles as an effective commitment device? Or does one model ensure less movement drift, or less lost value from movement drift? (See here: http://effective-altruism.com/ea/1ne/empirical_data_on_value_drift/?refresh=true.) There's a comment from David Moss there suggesting there's an "open question" about the value of focussing on more engaged individuals, given the risks of attrition in large movements (assuming the value of the piece, which is subject to lots of methodological caveats).

The questions above might be contradictory; I'm not advocating any of them, but instead clarifying whether there's anything missed by your suggestions.