turchin comments on Curing past sufferings and preventing s-risks via indexical uncertainty - Effective Altruism Forum


Comment author: kbog  (EA Profile) 27 September 2018 06:50:19PM 3 points [-]

This is an algorithmic trick without ethical value. The person who experienced suffering still experienced suffering. You can outweigh it by creating lots of good scenarios, but making those scenarios similar to the original one is irrelevant.

Comment author: turchin 27 September 2018 11:48:11PM *  0 points [-]

It is an algorithmic trick only if personal identity is strongly tied to this exact physical brain. In the text, however, it is assumed without discussion that identity is not brain-bound. That said, this doesn't mean I completely endorse this "copy-friendly" theory of identity.

Comment author: kbog  (EA Profile) 28 September 2018 09:07:56AM *  2 points [-]

Identity is irrelevant if you evaluate total or average welfare through a standard utilitarian model.

Comment author: turchin 29 September 2018 04:29:22PM 0 points [-]

I have just found a way in which the whole trick will increase total welfare in the multiverse, copied from my comment below:

No copies of suffering observer-moments will be created - only the moment immediately after the suffering will be simulated and diluted, and this will obviously be the happiest moment for someone in agony: to feel that the pain has disappeared and to know that he has been saved from hell.

It would be like an angel coming to a cancer patient and telling him: your disease has just been completely cured. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.

Also, the fact that a benevolent AI is capable of saving observers from an Evil AI (and can also model Evil AIs in simulations and punish them if they dare to torture anyone) will, I hope, significantly reduce the number of Evil AIs.

Thus the combination of the pleasure of being saved from an Evil AI and the lowered world-share of Evil AIs - who can't win and know it - will increase the total positive utility in the universe.

Comment author: kbog  (EA Profile) 30 September 2018 10:39:45AM *  1 point [-]

There is no reason to prefer that over simply creating people with happy lives. You can always simulate someone who believes that they have just been saved from suffering if that constitutes the best life. The relation to some historical person who suffered is irrelevant and inefficient.

Deterring unfriendly AI is another matter. There are so many possible goal functions that can be used to describe possible unfriendly AIs that a general strategy for deterring them doesn't make sense. At least not without a lot of theoretical groundwork that is presently lacking.

Comment author: turchin 30 September 2018 03:20:34PM 0 points [-]

If the connection between observer-moments is not regarded as important (as I understand it, this is your point of view: there is no identity that chains observer-moments into timelines) - then the creation of happy observer-moments is also a way to dilute past sufferings.

In other words, the creation of infinitely many happy observer-moments in the future makes the probability that a person finds himself in a suffering observer-moment infinitely small.
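The dilution arithmetic behind this claim can be sketched as follows, assuming a uniform self-sampling over indistinguishable observer-moments (the function name and counts are my own illustration, not from the post):

```python
# Under a uniform self-sampling assumption, an observer is equally
# likely to be any one of the indistinguishable observer-moments.
# Adding happy successor-moments therefore drives the chance of
# finding oneself in the suffering moment toward zero.

def p_suffering(n_happy_copies: int, n_suffering: int = 1) -> float:
    """Probability of being in a suffering observer-moment when
    n_happy_copies indistinguishable happy moments also exist."""
    return n_suffering / (n_suffering + n_happy_copies)

for n in (1, 100, 10**6):
    print(n, p_suffering(n))
# As n_happy_copies grows without bound, p_suffering tends to 0.
```

This only formalizes the "dilution" intuition; whether self-sampling over observer-moments is the right measure is exactly the identity question under dispute in this thread.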

I addressed the problem of the simulation of evil AIs in the "Message to any future AI" text, which we already discussed. The main types of Evil AIs may be predicted by simulating the worlds where they are likely to appear. We could gain an advantage over the Evil AIs by declaring now, before their creation, that we will simulate and punish them in those timelines where we are able to create a Benevolent AI. But we don't need to discuss all the technical details of how we will do it, as a benevolent AI will find better ways. (The idea comes from Rolf Nelson.)