Brian_Tomasik comments on An Argument for Why the Future May Be Good - Effective Altruism Forum

Comment author: Brian_Tomasik, 23 July 2017 12:26:54AM, 8 points

Thanks for the post! If lazy solutions reduce suffering by reducing consciousness, they also reduce happiness. So, for example, a future civilization optimizing for very alien values relative to what humans care about might not have much suffering or happiness (if you don't think consciousness is useful for many things; I think it is), and the net balance of welfare would be unclear (even relative to a typical classical-utilitarian evaluation of net welfare).

Personally I find it very likely that the long-run future of Earth-originating intelligence will optimize for values relatively alien to human values. This has been the historical trend whenever one dominant life form replaces another. (Human values are relatively alien to those of our fish ancestors, for example.) The main way out of this conclusion is if humans' abilities for self-understanding and cooperation make our own future evolution an exception to the general trend.

Comment author: Ben_West, 20 August 2017 05:25:42PM, 0 points

Thanks Brian!

I think you are describing two scenarios:

  1. Post-humans will become something completely alien to us (e.g. mindless outsourcers). In this case, arguments that these post-humans will not have negative states equally imply that these post-humans won't have positive states. Therefore, we might expect some (perhaps very strong) regression towards neutral moral value.
  2. Post-humans will have capacities and values that are influenced by current humans’ values. In this case, it seems likely that these post-humans will have good lives (at least as measured by our current values).

This still seems asymmetric to me: as long as you assign some positive probability to scenario (2), isn't the expected value of the future greater than zero?
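
To spell out the implicit expected-value step (a minimal sketch; the symbols p_1, p_2, V_1, and V_2 are illustrative placeholders, not from the original comments): suppose scenario (1) contributes roughly neutral value, V_1 ≈ 0, and scenario (2) contributes strictly positive value, V_2 > 0. Then

  E[V] = p_1 · V_1 + p_2 · V_2 ≈ p_2 · V_2 > 0   whenever p_2 > 0.

Any positive credence in scenario (2) therefore pushes the expectation above zero, provided scenario (1) really does regress to roughly neutral rather than net-negative value.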