rjshade

> The median AI researcher estimates even odds of human-level AI between 2035 and 2050, so the prospect that AI is possible and achievable within decades is large enough to worry about.

Predictions about when we will achieve human-level AI have been wildly inaccurate in the past[1]. I don't think the predictions of current AI researchers are a particularly useful data point.

Assuming that we do in fact achieve human-level AI at some point, then to avoid a Pascal's Mugging you need to present compelling evidence that the path from human-level AI to superintelligent/singularity/end-of-humanity-level AI is (a) likely (i.e. p >> 10^-50), (b) likely to be bad for humanity, and (c) something we have a credible chance of altering in a way that benefits us.
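
To make the structure of that demand concrete, here is a minimal expected-value sketch (all numbers are hypothetical placeholders, not estimates from anywhere): the case for acting scales with the product of the probabilities of (a), (b), and (c), which is why each link needs to be shown non-negligible rather than rescued by an enormous claimed payoff.

```python
# Minimal expected-value sketch of the Pascal's Mugging concern.
# All numbers are hypothetical placeholders, not estimates.

p_a = 0.1    # (a) P(human-level AI -> superintelligence)
p_b = 0.5    # (b) P(that transition is bad for humanity)
p_c = 0.01   # (c) P(safety work alters the outcome in our favour)

value_at_stake = 1e12  # payoff of a good outcome, in arbitrary units

# The case for acting scales with the *product* of the three links:
expected_value = p_a * p_b * p_c * value_at_stake
print(expected_value)  # 5e8 with these placeholder numbers

# The mugging pattern: with a link probability near 1e-50, the argument
# only survives by inflating value_at_stake to astronomical magnitudes;
# hence the demand that each link be shown to have p >> 10^-50.
```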

I've seen some good arguments for (b), but much less so for (a) and (c). Are there good arguments for these, or am I missing some other line of reasoning (very possible) that makes a non-Pascal's-Mugging case for AI safety research?

[1] https://intelligence.org/files/PredictingAI.pdf