Comment author: Peter_Hurford  (EA Profile) 18 February 2016 03:20:14PM 4 points [-]

I agree with Stefan that it's more persuasive to write one-sidedly, and I'd point to the fact that the most popular articles out there (both here on the EA Forum and definitely elsewhere) are presented one-sidedly. I think by "persuasive" you meant "best for helping readers form accurate beliefs", which are different things ;)

I write one-sidedly from the perspective of "offering additional considerations people haven't thought of to the considerations everyone already knows", and I don't spend much time talking about the considerations everyone already knows. This is mainly to save time, as you said: blogging here is definitely a very side project for me, and for nearly all my pieces I don't have much longer than 3-4 hours to write them.

Comment author: tyrael 18 February 2016 04:17:37PM 3 points [-]

This seems like a good opportunity for collaboration! Perhaps one-sided posts could include a disclaimer at the end along the lines of: "In this post, I've covered the most compelling arguments for X because I think the considerations on the other side are things most people already know. However, I invite someone with more time/interest to compile those in a comment, or message me and I can add them to the post itself."

Of course, this assumes we want to be "persuasive" in the way Rob means rather than the common definition of, "most likely to get people to agree with you."

Comment author: RyanCarey 05 February 2016 02:09:55AM 2 points [-]

A couple of remarks:

GCR prevention only matters if they will happen soon enough

The very same, from a future perspective, applies to values-spreading.

Most people are incentivized to prevent extinction but not many people care about my/our values

This is a suspiciously antisocial approach that only works if you share Brian's view that not only are there no moral truths for future people to (inevitably) discover, but that it is nonetheless very important to promote one's current point of view on moral questions over whatever moral views are taken in the future.

Comment author: tyrael 05 February 2016 07:22:04AM *  0 points [-]

The very same, from a future perspective, applies to values-spreading.

Why do you think that? There are different values we can change that seem somewhat independent.

This is a suspiciously antisocial approach

That seems mean and unfair. Having different values than the average person doesn't make you antisocial or suspicious; it just makes you different. In fact, I'd say most EAs have different values than average :)

Comment author: Owen_Cotton-Barratt 04 February 2016 11:17:20AM *  11 points [-]

This certainly gets quite a bit of attention in internal conversations at the Future of Humanity Institute. Bostrom discussed it when first(?) writing about existential risk in 2001, under the name "shrieks". Note that I wouldn't recommend reading that paper except for historical interest; his more modern exposition in Existential Risk Prevention as Global Priority is cleaner and excellent. I think your "quality risk" coincides with Bostrom's notion of flawed realisation, although you might also mean to include subsequent ruination. Could you clarify?

Anyhow I'll give my view briefly:

  • Much of the focus on risk from AI is about flawed realisations (from locking in the wrong values) more than about never getting big.
  • Aside from concrete upcoming cases to lock in values, it's unclear whether we can affect the long-term trajectory. However, we might be able to, so this gives only a modest reason to discount working to mitigate the risks of flawed realisations.
  • There are lots of plausible ways to indirectly help reduce future risk (both extinction risk and other kinds), by putting us in a better position to face future challenges. The further off challenges are, the more this looks like the right strategy. For extinction risks, some of them are close enough that the best portfolio looks like it includes quite a bit of directly addressing the risks. For risks of flawed realisation apart from AI, my guess is that the portfolio should be skewed heavily towards this capacity-building.
  • Many of the things we think of to do to improve long-term capacity to deal with challenges look less neglected right now than the direct risks. But not all (e.g. I think nurturing the growth of a thoughtful EA movement may be helpful here), and we should definitely be open to finding good opportunities in this space.
  • I would like to see more work investigating the questions in this area.
Comment author: tyrael 04 February 2016 05:02:08PM 1 point [-]

"Quality risk" is meant to include both of those ideas, just any situation where we get "very large" (~"technologically mature") but not "very good."

Comment author: Buck 04 February 2016 05:31:00AM 3 points [-]

Michael Dickens wrote about quality risks vs existential risks here and here.

Comment author: tyrael 04 February 2016 06:08:21AM *  2 points [-]

Thanks for noting those. I should have included links to them and other existing materials in the post. I was just trying to go quickly and show an independent perspective.

Comment author: RyanCarey 04 February 2016 03:51:08AM *  4 points [-]

Thanks for investing your thoughts in this area.

This has been a prominent part of existential risk reduction discussion since at least 2003 (edit: 2013), when Nick Beckstead wrote his article about "Trajectory Changes", which are a slightly cleaner version of your "quality risks". (1) Trajectory changes are events whose impact persists in the long term, though not by preventing extinction.

That article was given to Nick Bostrom, who replied to it at the time, which gives you a ready-made reply to your article from the leader and originator of the existential risk idea:


One can arrive at a more probably correct principle by weakening, eventually arriving at something like 'do what is best' or 'maximize expected good'. There the well-trained analytic philosopher could rest, having achieved perfect sterility. Of course, to get something fruitful, one has to look at the world not just at our concepts.

Many trajectory changes are already encompassed within the notion of an existential catastrophe. Becoming permanently locked into some radically suboptimal state is an xrisk. The notion is more useful to the extent that likely scenarios fall relatively sharply into two distinct categories---very good ones and very bad ones. To the extent that there is a wide range of scenarios that are roughly equally plausible and that vary continuously in the degree to which the trajectory is good, the existential risk concept will be a less useful tool for thinking about our choices. One would then have to resort to a more complicated calculation. However, extinction is quite dichotomous, and there is also a thought that many sufficiently good future civilizations would over time asymptote to the optimal track.

In a more extended and careful analysis there are good reasons to consider second-order effects that are not captured by the simple concept of existential risk. Reducing the probability of negative-value outcomes is obviously important, and some parameters such as global values and coordination may admit of more-or-less continuous variation in a certain class of scenarios and might affect the value of the long-term outcome in correspondingly continuous ways. (The degree to which these complications loom large also depends on some unsettled issues in axiology; so in an all-things-considered assessment, the proper handling of normative uncertainty becomes important. In fact, creating a future civilization that can be entrusted to resolve normative uncertainty well wherever an epistemic resolution is possible, and to find widely acceptable and mutually beneficial compromises to the extent such resolution is not possible---this seems to me like a promising convergence point for action.)

It is not part of the xrisk concept or the maxipok principle that we ought to adopt some maximally direct and concrete method of reducing existential risk (such as asteroid defense): whether one best reduces xrisk through direct or indirect means is an altogether separate question.


The reason people don't usually think about trajectory changes (and quality risks) is not that they've just overlooked that possibility. It's that, absent some device for fixing them in society, the (expected) impact of most societal changes decays over time. Changing a political system or introducing and spreading new political and moral ideologies is one of the main kinds of trajectory changes proposed. However, it is not straightforward to argue that such an ideology would be expected to thrive for millennia when almost all other political and ethical ideologies have not. In contrast, a whole-Earth extinction event could easily end life in our universe for eternity.

So trajectory changes (or quality risks) are important in theory, to be sure. The challenge that the existential risk community has not yet met is to think of ones that are probable and worth moving altruistic resources towards, given that those resources could as easily be used to reduce extinction risk.

1. http://lesswrong.com/lw/hjb/a_proposed_adjustment_to_the_astronomical_waste/

Comment author: tyrael 04 February 2016 04:03:36AM *  0 points [-]

Thanks for sharing. I think my post covers some different ground (e.g. the specific considerations) from that discussion, and it's valuable to share an independent perspective.

I do agree it touches on many of the same points.

I might not agree with your claim that it's been a "prominent" part of discussion. I rarely see it brought up. I also might not agree that "Trajectory Changes" are a slightly cleaner version of "quality risks," but those points probably aren't very important.

As to your own comments at the end:

The reason people don't usually think about trajectory changes (and quality risks) is not that they've just overlooked that possibility.

Maybe. Most of the people I've spoken with did just overlook the possibility (i.e. gave it no more than an hour or two of thought, probably not more than 5 minutes), but your experience may be different.

It's that absent some device for fixing them in society, the (expected) impact of most societal changes decays over time.

I'm not sure I agree, although this claim is a bit vague. If society's value (say, the width of its moral circles) is rated on a scale of 1 to 100 at every point in time and is currently at, say, 20, then even if there's noise that moves it up and down, a shift of 1 will increase the expected value at every future time period (see the sketch below).

You might mean something different.
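To make that concrete, here's a minimal toy sketch of the kind of model I have in mind (my own illustration; the 1-100 scale, the starting point of 20, the noise distribution, and the horizon are all just assumptions for the example):

```python
import numpy as np

# Toy model: society's "value" is a number on a 1-100 scale that drifts
# randomly over time. Compare trajectories starting at 20 with trajectories
# starting at 21 (a one-point shift) and look at the average value at each
# future time step.

rng = np.random.default_rng(0)
n_runs, n_steps = 10_000, 200

# Use the same noise for both starting points so the comparison is clean.
noise = rng.normal(loc=0.0, scale=1.0, size=(n_runs, n_steps))

def expected_path(start):
    """Mean trajectory over n_runs random walks, clipped to the 1-100 scale."""
    paths = np.clip(start + np.cumsum(noise, axis=1), 1, 100)
    return paths.mean(axis=0)

gap = expected_path(21.0) - expected_path(20.0)

# In this toy model the expected gap stays close to +1 at every simulated
# horizon, which is the intuition behind "a shift of 1 will increase the
# expected value at every future time period". (With hard bounds the gap does
# shrink over very long horizons, which is closer to the decay Ryan describes.)
print(gap[0], gap[-1])
```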

However, it is not straightforward to argue that such an ideology would be expected to thrive for millennia when almost all other political and ethical ideologies have not.

I don't think it's about having the entire "ideology" survive, just about having it affect future ideologies. If you widen moral circles now, then the next ideology that comes along might have slightly wider circles than it would otherwise.

The challenge that the existential risk community has not yet met is to think of ones that are probable and worth moving altruistic resources towards, given that those resources could as easily be used to reduce extinction risk.

As a community, I agree. And I'm saying that might be because we haven't put enough effort into considering them. Although personally, I see at least one of those (widening moral circles) as more promising than any of the extinction risks currently on our radar. But I'm always open to arguments against that.


Some considerations for different ways to reduce x-risk

I believe the far future is a very important consideration in doing the most good, but I don’t focus on reducing extinction risks like unfriendly artificial intelligence. This post introduces and outlines some of the key considerations that went into that decision, and leaves discussion of the best answers...
Comment author: Elizabeth 15 November 2015 04:59:24PM 2 points [-]

So there’s not an analogous situation to help other people understand this from an animal advocate’s perspective, but to put it mildly, when other people eat animals at EA events, it feels as if some people at that event gathered in a circle and began writing hate articles against the Centre for Effective Altruism or cutting up malaria nets that the Against Malaria Foundation was planning to distribute. It feels like a slap in the face to our work, and worse, like a dismissal of the plight of the billions of suffering farmed animals.

I agree with your conclusion, and co-lead an EA group that puts it into practice. But I'm incredibly uncomfortable with framing this as "because it upsets people" rather than "because it's the right thing to do." My group lost at least one member because the concept of QALYs was profoundly upsetting to them; should we stop using QALYs to prevent that? Where is the line?

Comment author: tyrael 17 November 2015 07:22:57AM *  1 point [-]

It's definitely a trade-off. I think many more EAs are bothered by other people eating animals than by the use of QALYs, that eating animals is far less useful than QALYs, and (I'm less certain here) that they are bothered for more EA reasons. If a large number of EAs opposed the use of QALYs because, for example, they felt QALYs painted a very misleading picture of what makes for the worst health issues, then I do think the EA community should seriously reconsider their use.

I do worry, for example, that people could start acting upset by something in order to make changes in the EA community. Although that abuse is possible, I think accepting some risk of it is worth it to make people more comfortable in cases of genuine discomfort. If I started seeing more abuse, I could change my views, but right now I think there's basically none. So given the lack of these issues, I'm okay with a norm of, "When something is upsetting a lot of community members and doesn't have a clear, substantial benefit for other community members, we should strongly reconsider including it in the community."

Comment author: Larks 17 November 2015 01:28:18AM 1 point [-]

How does the high effectiveness of the recommended ACE charities make a harm more trivial?

Because people generally care about animals in an aggregative sense - they care about the total amount of suffering.

AMF can save a human life very cheaply; does that make taking a human life a trivial harm?

No, firstly because it costs AMF over $3,000 per life, which is 600,000x more than the figure I was discussing. Multiplying a trivial number by 600,000 can yield non-trivial numbers!

Secondly because we generally think of human lives as being less interchangeable. Killing one human to save one other is not acceptable.

We should be thinking about what does the most good here, not just what satisfies people's personal preferences.

It's unreasonable to expect people to dedicate 100% of their resources to altruism. But what people are willing to dedicate, we should dedicate in the most efficient manner. It's better for both the individual and animals in aggregate for someone to eat meat for lunch and donate $1 than to abstain from meat.

Comment author: tyrael 17 November 2015 07:13:37AM *  0 points [-]

Because people generally care about animals in an aggregative sense - they care about the total amount of suffering.

That doesn't seem to do the work you imply it does. Being able to spare 100 lives is a huge feat of good, even if the total amount of suffering is much greater.

No, firstly because it costs AMF over $3,000 per life, which is 600,000x more than the figure I was discussing. Multiplying a trivial number by 600,000 can yield non-trivial numbers!

But it's still very cheap, even if it's much larger than other very cheap figures.

Secondly because we generally think of human lives as being less interchangeable. Killing one human to save one other is not acceptable.

It seems speciesist to apply some moral standards to humans but not nonhumans.

It's unreasonable to expect people to dedicate 100% of their resources to altruism. But what people are willing to dedicate, we should dedicate in the most efficient manner. It's better for both the individual and animals in aggregate for someone to eat meat for lunch and donate $1 than to abstain from meat.

As argued elsewhere on this page, it seems dietary change has many more benefits than a small donation with roughly the same (or even better) direct impact. And your original argument, that "many people are willing to pay much more than 2 cents to eat meat," doesn't do any work in addressing those additional benefits and simply appeals to personal preference.

Comment author: Gregory_Lewis 14 November 2015 07:28:30PM 1 point [-]

It is pretty hard to offset a human life, as estimates from GiveWell suggest a cost per marginal life saved in the thousands of dollars.

Larks's point (I imagine) is something like this. If you think ACE's figures are about right, the direct harm of eating meat can be offset fairly cheaply, so better a carnivore giving a few dollars a year to THL than a vegetarian giving nothing. You might say this is a false dilemma, but in reality people are imperfect, and often try to allocate their limited altruistic resources as effectively as possible. If they find refraining from meat much more difficult than giving a few dollars (or earning a few more dollars to give away), it seems better all things considered that they keep eating meat and give money.

So the harm of EA venues serving meat is primarily symbolic, as I don't think animal advocates would be happy if EA venues kept serving meat but gave $100 or whatever to THL, despite this being enough to offset the direct harm. In public-facing events, fair enough (although I'm tempted to suggest that offsets etc. might be a good 'EA message'), yet this seems less clear in non-front-facing events.
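For a rough sense of the scale involved (taking the ~2 cents per meal offset figure quoted elsewhere in this thread purely as an assumption for the arithmetic):

$$\frac{\$100}{\$0.02\ \text{per meal}} = 5{,}000\ \text{meals} \approx 13\ \text{years of one meat meal per day}$$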

Comment author: tyrael 17 November 2015 07:07:18AM 1 point [-]

That seems basically right, but different from what Larks actually said.

Comment author: Larks 14 November 2015 05:06:07PM 0 points [-]

we shouldn’t take actions that obviously and severely harm the work many other EAs are doing

But ACE's best estimate is that the cost of offsetting one meat meal is around 2 cents. That's not a severe harm; it's a very trivial one. The costs of avoiding such small harms almost definitely outweigh the harms themselves. Considering that many people are willing to pay much more than 2 cents to eat meat, it seems very ineffective to require that they not do so.

Comment author: tyrael 14 November 2015 07:11:20PM 4 points [-]

How does the high effectiveness of the recommended ACE charities make a harm more trivial? AMF can save a human life very cheaply; does that make taking a human life a trivial harm?

Same for the willingness of many people to pay much more than 2 cents to eat meat. Why does their strong preference for eating meat make it very ineffective to avoid eating meat? We should be thinking about what does the most good here, not just what satisfies people's personal preferences.
