
In particular, for someone who is very hard to convince — one that addresses all of the objections a well-educated, rational person may have.

2 Answers

I like the AI Alignment Wikipedia page because it provides an overview of the field that's well-written, informative, and comprehensive.

I don't know if it addresses all the objections one may have, but the two part Wait But Why series (Part 1, Part 2) was what finally did it for me and I think is wonderfully written. 

I think it's a very good explainer of the "orthodox" AI safety position.

I think it would be unlikely to change the mind of a skeptic, however. It relies far too heavily on simply relaying the opinions of Ray Kurzweil and Nick Bostrom, and Kurzweil in particular is very easy to dismiss based on his wildly overconfident predictions (in the article, they state that we are on the "verge" of Drexler-style nanofactories, which should arrive "by the 2020s", a claim that has not aged well).

There is almost no engagement with many of the obvious objections, and because it w...

Tristan Williams
Fair enough, maybe I was less skeptical than I thought at first and having a really good explainer was enough to dispel the little skepticism I did have. You mention Human Compatible, but also don't really seem convinced by it, is there any convincing work you've found, or have you remained unconvinced through all you've read?
titotal
I've been skeptically minded about the subject from the start, and I've failed to find anything that convinced me. I've written a lot about my reasons for remaining unconvinced. I've read Human Compatible, Superintelligence, the Sequences, the aforementioned Wait But Why intro, and the writings of Holden Karnofsky, and I've been regularly reading the arguments on this forum for the last year or two. Of those, I found Yudkowsky the least convincing, because he tends to assume a level of AGI omnipotence that I find ludicrous, and he has a habit of overconfidently mangling the science whenever he ventures into my field of expertise (I'm a computational physicist). I find Karnofsky and Russell to have the best arguments, because they don't rely on omnipotent AI to make their case. I think they have raised my estimates of the catastrophic risk from AI, even if my views on extinction risk remain largely unchanged.
Tristan Williams
Ah okay cool, a skeptic who has really engaged with the material. I won't ask for your reasons because I'm sure I can find them on your Substack, but I would love to know: do you have rough percentages for the chance of catastrophic risk and x-risk from AI? You can restrict the estimate to the next century if that would help.
titotal
If you forced me to give numbers, I'd put the odds of catastrophe (~1 billion dead) at 1 in 1,000, and the odds of extinction at 1 in 500,000. Essentially, there are several plausible paths for a catastrophe to occur, but almost none for extinction. I don't put too much stock in the actual numbers, though, as I don't think forecasting is actually useful for unbounded, long-term predictions.