Comment author: kbog (EA Profile) 28 February 2017 08:51:49PM 0 points

It depends on the context. In many places there are people who really don't know what they're talking about and hold easily corrected, false beliefs. Plus, most places on the Internet protect anonymity. In my experience, if you are careful it is very easy to avoid having a net-negative effect on the whole.

Comment author: sdspikes 01 March 2017 01:50:13AM 1 point

As a Stanford CS grad (BS/MS '10) who took AI/machine learning courses in college from Andrew Ng, worked at Udacity with Sebastian Thrun, etc., I have mostly been unimpressed by non-technical folks trying to convince me that AI safety (apart from risks caused by explicit human malfeasance) is a credible issue.

Maybe I have "easily corrected, false beliefs," but the people I've talked to at MIRI and CFAR have been pretty unconvincing to me, as was the book Superintelligence.

My perception is that MIRI has focused on an extremely specific kind of AI, one that seems unlikely to do much harm unless someone is recklessly playing with fire (or intentionally trying to set one). I'll grant that that's possible, but that's a human problem, not an AI problem, and it requires a human solution.

You don't try to prevent nuclear disaster by making friendly nuclear missiles; you try to keep them out of the hands of nefarious or careless agents, or provide disincentives for building them in the first place.

But maybe you do make friendly nuclear power plants? Not sure if this analogy worked out for me or not.

Comment author: sdspikes 13 August 2015 05:29:35PM 1 point

There's already a Coursera course, but I don't know how good it is: https://www.coursera.org/learn/altruism