cassidynelson comments on Enlightened Concerns of Tomorrow - Effective Altruism Forum

Comment author: JasperGeh 15 March 2018 09:24:36AM 6 points [-]

Oh, I didn't expect Pinker to hold that position; it's quite disappointing. But it's hopefully a topic we will see addressed in a future conversation with Sam Harris who should push back on the "AI cannot be a threat"-narrative. Have you tweeted/mailed/whatnot him this response?

Comment author: cassidynelson 16 March 2018 01:05:55AM 0 points [-]

I agree; I found it surprising as well that he has taken this view. It seems he has read portions of Bostrom's Global Catastrophic Risks and Superintelligence and become familiar with the general arguments and prominent examples, but has then gone on to dismiss existential threats for reasons specifically addressed in both books.

He is a bit more concerned about nuclear threats than other existential threats, but I wonder if this is the availability heuristic at work given the historical precedent instead of a well-reasoned line of argument.

Great suggestion about Sam Harris - I think Steven Pinker and he had a live chat just the other day (March 14), so this opportunity may have been missed. I'm still waiting for the audio to be uploaded to Sam's podcast, but given Sam's positions, I wonder if he questions Pinker on this as well.

Comment author: DavidMoss 16 March 2018 01:57:08AM 3 points [-]

I think part of the problem is that he expressed a very dismissive stance towards AI/x-risk positions publicly, seemingly before he'd read anything about them. Now people have pushed back and pointed out his obvious errors and he's had to at least somewhat read about what the positions are, but he doesn't want to backtrack at all from his previous statement of extreme dismissiveness.

Comment author: cassidynelson 16 March 2018 10:08:47PM 1 point [-]

I agree, and that appears to be the likely sequence of events. I find it a bit disappointing that he went into this topic with his view already formed, and used the prominent contentious points and counterarguments to reinforce his preconceptions without becoming familiar with the detailed refutations already out there. It's great to have good debate and opposing views presented, but his broad-stroke dismissal makes that really difficult.

Comment author: Risto_Uuk 16 March 2018 11:03:43PM 2 points [-]

Sam Harris did ask Steven Pinker about AI safety. If anybody gets around to listening to it, the segment starts at 1:34:30 and ends at 2:04:00, so that's about 30 minutes on risks from AI. Harris wasn't at his best in that discussion, and Pinker came off as much more nuanced and evidence- and reason-based.

Comment author: RyanCarey 29 May 2018 05:32:25PM 0 points [-]

I agree with that characterization of the discussion, but regardless, you can find it here: https://www.youtube.com/watch?v=H_5N0N-61Tg&t=86m12s