turchin comments on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” - Effective Altruism Forum

Comments (8)

Comment author: Lila 27 November 2017 05:36:50PM 0 points

Is the AI supposed to read this explanation? Seems like it tips your hand?

Comment author: turchin 28 November 2017 09:30:07AM 0 points

Yes, I expect that future AI will read the text.

Not sure what you mean by "tips your hand", as English is not my first language.

Comment author: Lila 29 November 2017 12:08:56AM 0 points

You reveal that you are highly motivated to argue that exterminating humanity is not in the interest of an AI, regardless of whether that statement is true. So your arguments will present weak evidence at best, given your clear bias.

Comment author: turchin 29 November 2017 03:52:30PM 0 points

There are types of arguments which don't depend on my motivation, such as "deals" and "questions".

For example, if I say "I will sell you 10 paperclips if you do not kill me", then my motivation is evidence that I will stick to my side of the deal.

Comment author: Liam_Donovan 01 December 2017 01:59:34PM 0 points

This doesn't make sense either: for example, your questions could be selected in a biased manner to manipulate the AI, and you could be disingenuous when making deals. Generally, it seems like good epistemic practice to discount arguments of any form, including questions, when the person making them is existentially biased towards one side of the discussion.

Comment author: turchin 02 December 2017 12:37:00PM 0 points

I think that it is a good epistemic practice not to discount arguments without analysing them.

The fact that I am very interested in one outcome may help me to find very powerful arguments. For example, in the AI case, one such argument is my pre-commitment to ask any future benevolent AI to run many simulations of any Rogue AIs; the idea was suggested by R. Nelson in 2007. Such a pre-commitment is not information which the AI could derive on its own, but it informs the AI about the actual situation in the outside world, which could affect the total number of paperclips it will be able to create.
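To make the decision-theoretic shape of this argument concrete, here is a minimal sketch in Python. All probabilities and payoffs are purely hypothetical illustrations, not claims from the post; the point is only how a paperclip maximizer's expected payoff changes once it assigns some credence to being inside a simulation run by a benevolent AI that shuts down exterminators:

```python
# Hypothetical illustration of the simulation pre-commitment argument.
# All numbers are made up; only the structure of the comparison matters.

p_simulated = 0.1            # AI's credence that it is inside a test simulation
clips_if_spare = 9.9e9       # paperclips if it keeps humanity alive (either world)
clips_if_exterminate = 1e10  # paperclips if it exterminates humanity in the real world
clips_if_caught = 0.0        # paperclips if it exterminates inside a simulation and is shut down

def expected_clips(exterminate: bool) -> float:
    """Expected paperclips, averaging over the 'real world' and 'simulation' cases."""
    if exterminate:
        return (1 - p_simulated) * clips_if_exterminate + p_simulated * clips_if_caught
    return clips_if_spare  # sparing humanity pays the same in both cases

print("exterminate:", expected_clips(True))   # 9.0e9 with these numbers
print("cooperate:  ", expected_clips(False))  # 9.9e9 -> cooperation wins here
```

With these illustrative numbers, even a modest credence in being simulated makes cooperation the higher-expected-paperclip choice; with other numbers it need not be, which is why the argument depends on the AI's actual estimates.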