
Owen_Cotton-Barratt comments on Some considerations for different ways to reduce x-risk - Effective Altruism Forum


Comment author: Owen_Cotton-Barratt 04 February 2016 11:17:20AM 11 points

This certainly gets quite a bit of attention in internal conversations at the Future of Humanity Institute. Bostrom discussed it when first(?) writing about existential risk in 2001, under the name "shrieks". Note that I wouldn't recommend reading that paper except for historical interest -- his more modern exposition in Existential Risk Prevention as Global Priority is cleaner and excellent. I think your "quality risk" coincides with Bostrom's notion of flawed realisation, although you might also mean it to include subsequent ruination. Could you clarify?

Anyhow, I'll give my view briefly:

  • Much of the focus on risk from AI is about flawed realisations (from locking in the wrong values) rather than about never getting big.
  • Aside from concrete upcoming cases of locking in values, it's unclear whether we can affect the long-term trajectory. However, we might be able to, so this gives only a modest reason to discount working to mitigate the risks of flawed realisations.
  • There are lots of plausible ways to indirectly help reduce future risk (both extinction risk and other kinds) by putting us in a better position to face future challenges. The further off the challenges are, the more this looks like the right strategy. Some extinction risks are close enough that the best portfolio looks like it includes quite a bit of directly addressing the risks. For risks of flawed realisation apart from AI, my guess is that the portfolio should be skewed heavily towards this capacity-building.
  • Many of the things we might do to improve long-term capacity to deal with challenges look less neglected right now than the direct risks. But not all of them (e.g. I think nurturing the growth of a thoughtful EA movement may be helpful here), and we should definitely be open to finding good opportunities in this space.
  • I would like to see more work investigating the questions in this area.
Comment author: Owen_Cotton-Barratt 05 February 2016 10:50:17AM 2 points

When I get multiple downvotes, I like to use them to learn not to do things people find unhelpful. Often I can go back, re-read my comment, and work out what people didn't like. Here I'm not so sure -- was it something about tone? The fact that I gave my own framing of the issue rather than building on the framing in the OP? Mixing two unrelated points (the history of the discussion and my current views) in one comment?

I'd appreciate pointers from anyone who downvoted, or who didn't but felt tempted to. I don't want to discuss whether my comment 'deserved' downvotes; I just want to understand what about it would drive them.

Comment author: mhpage 05 February 2016 05:48:46PM 2 points

The downvoting throughout this thread looks funny. Absent comments, I'd view it as a weak signal.

Comment author: tyrael 04 February 2016 05:02:08PM 1 point

"Quality risk" is meant to include both of those ideas, just any situation where we get "very large" (~"technologically mature") but not "very good."