Alexander comments on Some considerations for different ways to reduce x-risk - Effective Altruism Forum


Comment author: Alexander 05 February 2016 01:24:25AM 0 points

In your essay you place a lot of weight on other people's opinions. I wonder, if for some reason you decided to disregard everyone else's opinion, do you know if you would reach a different conclusion?

Comment author: MichaelDickens (EA Profile) 05 February 2016 05:07:04PM 1 point

My probabilities would be somewhat different, yes. I originally wrote "I’d give about a 60% probability that the far future is net positive, and I’m about 70% confident that the expected value of the far future is net positive." If I didn't care about other people's opinions, I'd probably revise this to something like 50%/60%.

It seems to me that the most plausible future scenario is that we continue doing what we've been doing, the dominant effect of which is that we sustain wild animal populations whose welfare is probably net negative. I've heard people give arguments for why we shouldn't expect this, but I'm generally wary of arguments of the form "the world will look like this 1000 years from now, even though it has never looked like this before and hardly anybody expects it to happen" — and that is the type of argument used to justify the claim that wild animal suffering won't be a problem in the far future.

I believe most people are overconfident in their predictions about what the far future will look like (and, in particular, about how much the far future will be dominated by wild animal suffering and/or suffering simulations). But the fact that pretty much everyone I've talked to expects the far future to be net positive does push me in that direction, especially people like Carl Shulman and Brian Tomasik* who seem to think exceptionally clearly and level-headedly.

*This isn't exactly what Brian believes; see here.

Comment author: Alexander 05 February 2016 10:16:04PM 1 point

Okay. Do you see any proxies (besides other people's views) that, if they changed during our lifetime, might shift your estimates one way or the other?

Comment author: MichaelDickens (EA Profile) 05 February 2016 11:09:37PM 3 points

Off the top of my head:

  • We develop strong AI.
  • There are strong signals that we would/wouldn't be able to encode good values in an AI.
  • Powerful people's values shift more toward/away from caring about non-human animals (including wild animals) or sentient simulations of non-human minds.
  • I hear a good argument that I hadn't already heard or thought of. (I consider this pretty likely, given how little total thought has gone into these questions.)