
turchin comments on Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk - Effective Altruism Forum



Comment author: kbog (EA Profile) 10 December 2016 04:21:00AM 0 points
Comment author: turchin 08 July 2018 02:09:55PM 0 points

What if AI exploring moral uncertainty finds that there is provably no correct moral theory or right moral facts? In that case, there is no moral uncertainty between moral theories, as they are all false. Could it escape this obstacle simply by aggregating humans' opinions about possible situations?

Comment author: kbog (EA Profile) 11 July 2018 12:09:16PM 1 point

What if AI exploring moral uncertainty finds that there is provably no correct moral theory or right moral facts?

In that case it would be exploring traditional metaethics, not moral uncertainty.

But if moral uncertainty is used as a solution, then we just bake in some high-level criteria for the appropriateness of a moral theory, and the credences will necessarily sum to 1. This is little different from baking in coherent extrapolated volition. In either case the agent is directly motivated to do whatever satisfies our designated criteria, and it will still want to do that regardless of what it thinks about moral realism.
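The mechanism described above, credences over candidate theories normalized to sum to 1, with the agent acting on the aggregate, is sometimes formalized as "maximizing expected choiceworthiness." A minimal sketch, with hypothetical theory names and toy scores chosen purely for illustration:

```python
# Illustrative sketch (not from the thread): an agent under moral
# uncertainty that maximizes expected choiceworthiness across theories.
# Theory names, actions, and scores below are invented toy values.

def normalize(credences):
    """Rescale credences so they sum to 1, as the comment describes."""
    total = sum(credences.values())
    return {t: c / total for t, c in credences.items()}

def expected_choiceworthiness(action, credences, theories):
    """Credence-weighted average of each theory's score for the action."""
    norm = normalize(credences)
    return sum(norm[t] * theories[t](action) for t in theories)

def best_action(actions, credences, theories):
    """Pick the action with highest expected choiceworthiness."""
    return max(actions,
               key=lambda a: expected_choiceworthiness(a, credences, theories))

# Toy example: two hypothetical theories scoring actions on a shared scale.
theories = {
    "utilitarian":   lambda a: {"donate": 10, "lie": -5}[a],
    "deontological": lambda a: {"donate": 3,  "lie": -10}[a],
}
credences = {"utilitarian": 0.6, "deontological": 0.4}

print(best_action(["donate", "lie"], credences, theories))  # donate
```

Note that the agent's motivation here lives entirely in the designated aggregation rule, not in any stance on moral realism, which is the point of the comment: whatever it concludes about metaethics, it still optimizes the baked-in criteria.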

Those criteria might be very vague and philosophical, or very specific and physical (like "would a simulation of Bertrand Russell say 'a-ha, that's a good theory'?"), but either way they will be specified.