
Risto_Uuk comments on Enlightened Concerns of Tomorrow - Effective Altruism Forum


Comments (8)


Comment author: cassidynelson 16 March 2018 01:05:55AM 0 points

I agree; I found it surprising as well that he has taken this view. It seems he has read a portion of Bostrom's Global Catastrophic Risks and Superintelligence and become familiar with the general arguments and prominent examples, but has then gone on to dismiss existential threats for reasons specifically addressed in both books.

He is a bit more concerned about nuclear threats than about other existential threats, but I wonder whether this is the availability heuristic at work, given the historical precedent, rather than a well-reasoned line of argument.

Great suggestion about Sam Harris. I think he and Steven Pinker had a live chat just the other day (March 14), so we may have missed this opportunity. I'm still waiting for the audio to be uploaded to Sam's podcast, but given Sam's positions, I wonder whether he questioned Pinker on this as well.

Comment author: Risto_Uuk 16 March 2018 11:03:43PM 2 points

Sam Harris did ask Steven Pinker about AI safety. If anybody gets around to listening to it, the segment runs from 1:34:30 to 2:04:00, so about 30 minutes on risks from AI. Harris wasn't at his best in that discussion, and Pinker came off as much more nuanced and grounded in evidence and reason.

Comment author: RyanCarey 29 May 2018 05:32:25PM 0 points

I agree with the characterization of the discussion, but regardless, you can find it here: https://www.youtube.com/watch?v=H_5N0N-61Tg&t=86m12s