Larks comments on Curing past sufferings and preventing s-risks via indexical uncertainty - Effective Altruism Forum


Comment author: Larks 27 September 2018 10:11:16PM 0 points

The point, presumably, is that people would feel better because of the expectation that things would improve.

Of course, the criticism is that rather than simulating someone who starts in pain and then improves gradually, you could simply simulate someone with high welfare all along. But if you could achieve identity-continuity without welfare-level-continuity this cost wouldn't apply.

Comment author: kbog (EA Profile) 28 September 2018 09:04:46AM 0 points

> The point, presumably, is that people would feel better because of the expectation that things would improve.

1/1000 people supposedly feel better, but then 999/1000 people will feel slightly worse, because they are given a scenario where they think things may get worse, when we have the power to give them a guaranteed good scenario instead. It's just shifting expectations around, trying to create a free lunch.

It also requires that people in bad situations actually believe someone is going to build an AI that does this. As ways of making people feel more optimistic about life go, this is perhaps the most convoluted one I have seen. There are easier ways of doing that: for instance, make them believe that someone is going to build an AI which actually solves their problem.

Comment author: turchin 27 September 2018 11:50:16PM 0 points

See my patch to the argument in my comment to Lukas: we can simulate those moments which are not in intense pain but are still very close to the initial suffering observer-moment, so they could be regarded as its continuation.