freest one

4 karma · Joined

Comments (3)

Thanks, I appreciate the comment. I hadn't seen your piece before; it's great. The difficulty of gene/brain alignment is a good analogy for how unlikely human/AI alignment is on a first try, and I share your scepticism about humans having some general utility function.

Thanks for the comment. Yes, I'm still uncertain about the mechanism of self-reproduction in future AIs… With humans, it's certainly possible to decouple sex from reproduction, but if a large enough proportion of people do that, then we will assuredly start to disappear.

I think Tegmark is still too optimistic. The arguments against nuclear war happening are typically very weak (variations of "it hasn't happened yet", "people believe in MAD", "leaders are rational"). And even when pundits have considered the risks higher (e.g. during the Cuban missile crisis), their actions have not reflected this at all. We should take this as a signal of massive status quo bias and denial.
