
RyanCarey comments on Some considerations for different ways to reduce x-risk - Effective Altruism Forum


Comment author: RyanCarey 04 February 2016 03:51:08AM *  4 points

Thanks for investing your thoughts in this area.

This has been a prominent part of existential risk reduction discussion since at least 2003 (edit: 2013), when Nick Beckstead wrote his article about "Trajectory Changes", which are a slightly cleaner version of your "quality risks". (1) Trajectory changes are events whose impact persists in the long term, though not by preventing extinction.

This article was shown to Nick Bostrom, who replied to it at the time, so you have a ready-made reply to your article from the leader and originator of the existential risk idea:


One can arrive at a more probably correct principle by weakening, eventually arriving at something like 'do what is best' or 'maximize expected good'. There the well-trained analytic philosopher could rest, having achieved perfect sterility. Of course, to get something fruitful, one has to look at the world not just at our concepts.

Many trajectory changes are already encompassed within the notion of an existential catastrophe. Becoming permanently locked into some radically suboptimal state is an xrisk. The notion is more useful to the extent that likely scenarios fall relatively sharply into two distinct categories---very good ones and very bad ones. To the extent that there is a wide range of scenarios that are roughly equally plausible and that vary continuously in the degree to which the trajectory is good, the existential risk concept will be a less useful tool for thinking about our choices. One would then have to resort to a more complicated calculation. However, extinction is quite dichotomous, and there is also a thought that many sufficiently good future civilizations would over time asymptote to the optimal track.

In a more extended and careful analysis there are good reasons to consider second-order effects that are not captured by the simple concept of existential risk. Reducing the probability of negative-value outcomes is obviously important, and some parameters such as global values and coordination may admit of more-or-less continuous variation in a certain class of scenarios and might affect the value of the long-term outcome in correspondingly continuous ways. (The degree to which these complications loom large also depends on some unsettled issues in axiology; so in an all-things-considered assessment, the proper handling of normative uncertainty becomes important. In fact, creating a future civilization that can be entrusted to resolve normative uncertainty well wherever an epistemic resolution is possible, and to find widely acceptable and mutually beneficial compromises to the extent such resolution is not possible---this seems to me like a promising convergence point for action.)

It is not part of the xrisk concept or the maxipok principle that we ought to adopt some maximally direct and concrete method of reducing existential risk (such as asteroid defense): whether one best reduces xrisk through direct or indirect means is an altogether separate question.


The reason people don't usually think about trajectory changes (and quality risks) is not that they've just overlooked that possibility. It's that absent some device for fixing them in society, the (expected) impact of most societal changes decays over time. Changing a political system or introducing and spreading new political and moral ideologies is one of the main kinds of trajectory changes proposed. However, it is not straightforward to argue that such an ideology would be expected to thrive for millennia when almost all other political and ethical ideologies have not. In contrast, a whole-Earth extinction event could easily end life in our universe for eternity.

So trajectory changes (or quality risks) are important in theory, to be sure. The challenge that the existential risk community has not yet met is to think of ones that are probable and worth moving altruistic resources towards, when those resources could as easily be used to reduce extinction risk.

1. http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/

Comment author: Sean_o_h 05 February 2016 09:36:08AM 3 points

(On a lighter note) On re-reading Nick Beckstead's post, I spent a while thinking "dear lord, he was an impossibly careful-thinking/well-informed teenager"*. Then I realised you'd meant 2013, not 2003 ;)

(*This is not to say he wasn't a very smart, well-informed teenager, of course. And with the EA community in particular, this would not be so unlikely - the remarkable quality and depth of analysis in posts being published on the EA forum by people in their late teens and early twenties is one of the things that makes me most excited about the future!)

Comment author: tyrael 04 February 2016 04:03:36AM *  0 points

Thanks for sharing. I think my post covers some different ground (e.g. the specific considerations) than that discussion, and it's valuable to share an independent perspective.

I do agree it touches on many of the same points.

I might not agree with your claim that it's been a "prominent" part of discussion. I rarely see it brought up. I also might not agree that "Trajectory Changes" are a slightly cleaner version of "quality risks," but those points probably aren't very important.

As to your own comments at the end:

The reason people don't usually think about trajectory changes (and quality risks) is not that they've just overlooked that possibility.

Maybe. Most of the people I've spoken with did just overlook (i.e. didn't give more than an hour or two of thought - probably not more than 5 minutes) the possibility, but your experience may be different.

It's that absent some device for fixing them in society, the (expected) impact of most societal changes decays over time.

I'm not sure I agree, although this claim is a bit vague. If society's value (say, moral circles) is rated on a scale of 1 to 100 at every point in time and is currently at, say, 20, then even if there's noise that moves it up and down, a shift of 1 will increase the expected value at every future time period.

You might mean something different.
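This toy model can be sketched in a quick Monte Carlo simulation. Everything below is hypothetical and just for illustration: societal value is modeled as a random walk clipped to the 1-100 scale, with an arbitrary noise level and horizon, and the same random shocks are applied with and without a one-time +1 shift so the two worlds can be compared path by path.

```python
import numpy as np

def simulate(start, steps, noise, lo=1.0, hi=100.0):
    """Run a bounded random walk from `start`, returning the mean
    (expected) value across all simulated paths at each time step."""
    x = np.full(noise.shape[0], float(start))
    means = []
    for t in range(steps):
        x = np.clip(x + noise[:, t], lo, hi)  # noise moves value up/down
        means.append(x.mean())
    return np.array(means)

rng = np.random.default_rng(0)
# Shared shocks: 10,000 hypothetical futures, 200 time periods each.
noise = rng.normal(0.0, 3.0, size=(10_000, 200))

baseline = simulate(20, 200, noise)  # society currently at 20
shifted = simulate(21, 200, noise)   # identical world after a +1 shift

# The shifted world has a higher expected value at every future period,
# though the gap narrows as paths pile up against the boundaries.
print(shifted - baseline)
```

Under this (assumed) model, both positions hold something: the +1 shift raises the expected value at every future time period, but the size of that advantage decays as more trajectories hit the floor or ceiling and merge.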

However, it is not straightforward to argue that such an ideology would be expected to thrive for millennia when almost all other political and ethical ideologies have not.

I don't think it's about having the entire "ideology" survive, just about having it affect future ideologies. If you widen moral circles now, then the next ideology that comes along might have slightly wider circles than it would otherwise.

The challenge that the existential risk community has not yet met is to think of ones that are probable and worth moving altruistic resources towards, when those resources could as easily be used to reduce extinction risk.

As a community, I agree. And I'm saying that might be because we haven't put enough effort into considering them. Although personally, I see at least one of those (widening moral circles) as more promising than any of the extinction risks currently on our radar. But I'm always open to arguments against that.

Comment author: Brian_Tomasik 05 February 2016 01:27:51PM *  1 point

It's that absent some device for fixing them in society, the (expected) impact of most societal changes decays over time.

AGI is plausibly such a device. MIRI and Bostrom seem to place reasonable probability on a goal-preserving superintelligence (since goal preservation is a basic AI drive). AGI could preserve values more thoroughly than any human institutions possibly can, since worker robots can be programmed not to have goal drift, and in a singleton scenario without competition, evolutionary pressures won't select for new values.

So it seems the values that people have in the next centuries could matter a lot for the quality of the future from now until the stars die out, at least in scenarios where human values are loaded to a nontrivial degree into the dominant AGI(s).

Comment author: Alexander 04 February 2016 05:44:31AM *  1 point

The effect of most disasters decays over time, but this does not mean that a disaster so big it ends humanity is impossible. So I don't see why the fact that most societal changes decay over time bears on whether large trajectory changes could happen. Maybe someday there will be a uniquely huge change.

Also, I don't understand why Bostrom mentions a "thought" that many sufficiently good civilizations would converge toward an optimal track. This seems like speculation.

Here is a concern I have: it may be that reducing many types of existential risk, such as the risk of nuclear war, could lower economic growth.

http://www.nytimes.com/2014/06/14/upshot/the-lack-of-major-wars-may-be-hurting-economic-growth.html

How do we know that by avoiding war we are not increasing another sort of existential risk, or the risk of permanent economic stagnation? Depending on how much we want to risk a total nuclear war, human development on Earth might have many permanent equilibria.