Alexander comments on Some considerations for different ways to reduce x-risk - Effective Altruism Forum


Comment author: Alexander 04 February 2016 10:54:10PM 2 points

Do you see any specific examples where reducing other types of existential risks could increase quality risks?

Comment author: Owen_Cotton-Barratt 05 February 2016 10:54:17AM 8 points

Moving towards a political singleton, and increasing surveillance technology, both look like they should help to reduce risks of human extinction. But they may well increase the risk of locking in a suboptimal value system, whereas a more varied society could do better in expectation (particularly after different parts of the society trade with each other).

Comment author: MichaelDickens 05 February 2016 12:06:30AM 5 points

If you expect the far future to be net negative in expectation, then reducing existential risk necessarily increases quality risk. In this essay I list some reasons why the far future might be net negative:

  • We sustain or worsen wild animal suffering on earth.
  • We colonize other planets and fill them with wild animals whose lives are not worth living.
  • We create lots of computer simulations of extremely unhappy beings.
  • We create an AI with evil values that creates lots of suffering on purpose. (But this seems highly unlikely.)

In the essay I discuss how likely I think these scenarios are.

Comment author: Alexander 05 February 2016 01:24:25AM 0 points

In your essay you place a lot of weight on other people's opinions. I wonder: if for some reason you decided to disregard everyone else's opinions, do you know whether you would reach a different conclusion?

Comment author: MichaelDickens 05 February 2016 05:07:04PM 1 point

My probabilities would be somewhat different, yes. I originally wrote "I’d give about a 60% probability that the far future is net positive, and I’m about 70% confident that the expected value of the far future is net positive." If I didn't care about other people's opinions, I'd probably revise this to something like 50%/60%.

It seems to me that the most plausible future scenario is that we continue doing what we've been doing, the dominant effect of which is that we sustain wild animal populations whose lives are probably net negative. I've heard people give arguments for why we shouldn't expect this, but I'm generally wary of arguments of the form "the world will look like this 1000 years from now, even though it has never looked like this before and hardly anybody expects it to happen," which is the type of argument used to justify the claim that wild animal suffering won't be a problem in the far future.

I believe most people are overconfident in their predictions about what the far future will look like (and, in particular, about how much the far future will be dominated by wild animal suffering and/or suffering simulations). But the fact that pretty much everyone I've talked to expects the far future to be net positive does push me in that direction, especially people like Carl Shulman and Brian Tomasik* who seem to think exceptionally clearly and level-headedly.

*This isn't exactly what Brian believes; see here.

Comment author: Alexander 05 February 2016 10:16:04PM 1 point

Okay. Do you see any proxies (besides other people's views) that, if they changed within our lifetime, might shift your estimates one way or the other?

Comment author: MichaelDickens 05 February 2016 11:09:37PM 3 points

Off the top of my head:

  • We develop strong AI.
  • There are strong signals that we would/wouldn't be able to encode good values in an AI.
  • Powerful people's values shift more toward/away from caring about non-human animals (including wild animals) or sentient simulations of non-human minds.
  • I hear a good argument that I hadn't already heard or thought of. (I consider this pretty likely, given how little total thought has gone into these questions.)