
Lukas_Finnveden comments on Curing past sufferings and preventing s-risks via indexical uncertainty - Effective Altruism Forum




Comment author: turchin 27 September 2018 11:30:33PM 0 points

Reading your comment, I arrive at the following patch to my argument: the benevolent AI starts not from S(t), but immediately from many copies of S(t+1), which involve much less intense suffering but are still similar enough to S(t) to count as its next moment of experience. It is not S(t) that gets diluted, but the next moments of S(t). This removes the need to create many copies of the suffering moment S(t) itself, which seemed both morally wrong and computationally expensive.

Under my plan, the FAI cannot decrease the number of suffering moments, but it can create an immediate way out of each such moment. A total utilitarian will not see the difference, but total utilitarianism is just a theory, and one that was not designed to account for the duration of suffering; for any particular observer, this will be salvation.
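To make the arithmetic of this patch concrete, here is a minimal sketch in Python, assuming the simple counting measure over successor moments that the argument relies on (the function names are illustrative, not from the post):

```python
# Sketch of the dilution arithmetic: a suffering moment S(t) has one
# "natural" successor that continues the agony, and the FAI adds
# n_copies low-suffering copies of S(t+1). Under indexical uncertainty
# with a counting measure, each successor is equally likely to be the
# observer's next experienced moment.

def escape_probability(n_copies: int) -> float:
    """Subjective probability that the next experienced moment is a
    low-suffering copy rather than the natural suffering successor."""
    return n_copies / (n_copies + 1)

def expected_suffering_steps(n_copies: int) -> float:
    """Expected number of further suffering moments before escape:
    the mean of a geometric distribution with success probability
    escape_probability(n_copies); this simplifies to 1 / n_copies."""
    p = escape_probability(n_copies)
    return (1 - p) / p

if __name__ == "__main__":
    for n in (1, 10, 1000):
        print(n, escape_probability(n), expected_suffering_steps(n))
```

With n = 1000 copies, the subjective chance of escape at each step is about 99.9%, and the expected number of remaining suffering moments is 1/1000, even though the total count of suffering moments has not been reduced, which is exactly the point where the total utilitarian sees no difference but the particular observer does.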

Comment author: Lukas_Finnveden 28 September 2018 09:23:08AM 1 point

I remain unconvinced, probably because I mostly care about observer-moments, and don't really care what happens to individuals independently of this. You could plausibly construct some ethical theory that cares about identity in a particular way such that this works, but I can't quite see what it would look like yet. You might want to make those ethical intuitions as concrete as you can and put them under 'Assumptions'.

Comment author: turchin 29 September 2018 04:32:46PM 0 points

It will also increase the number of happy observer-moments globally: there is the happiness of being saved from agony, and it lowers the number of Evil AIs, as they will know that they will lose and be punished.