
MichaelDickens comments on Some considerations for different ways to reduce x-risk - Effective Altruism Forum


Comment author: MichaelDickens  04 February 2016 10:39:29PM  21 points

This is a critically important and neglected topic, and I'm glad you wrote about it. I've written about this distinction before but I think you did a much better job of explaining why it matters.

Here are some more writings on the subject, along with a summary of my favorite points from each article:

Michael Bitton: Why I Don't Prioritize GCRs

  • GCR prevention only matters if GCRs will happen soon enough
  • If one GCR happens first, the others don't matter, but we don't know which will come first
  • Efforts to direct humanity have a poor track record

Brian Tomasik: Values Spreading Is Often More Important than Extinction Risk

  • Most people are incentivized to prevent extinction but not many people care about my/our values
  • (Mathematical argument that I can't really simplify but is worth reading in full)

Paul Christiano: Against Moral Advocacy

  • If we try to change values now, they will tend to drift
  • I don't want to lock in my current values because they could be wrong
  • Values tend to be similar, so it is possible to pursue competing objectives with only modest losses in efficiency

Paul Christiano: Why Might the Future Be Good?

  • The far future might be good either because (1) rational self-interested agents will make gains from trade or (2) future altruists will share my values and have power
  • Whether we expect (1) or (2) changes what we should do now
  • Natural selection might make people more selfish, but everyone is incentivized to survive no matter their values so selfish people won't have an adaptive advantage
  • People who care more about the (far) future will have more influence on it, so natural selection favors them

Comment author: Owen_Cotton-Barratt  05 February 2016 10:57:44AM  10 points

I'd double-upvote this if I could. Providing (high-quality) summaries along with links is a great pro-social norm!

Comment author: RyanCarey  05 February 2016 02:09:55AM  2 points

A couple of remarks:

> GCR prevention only matters if GCRs will happen soon enough

The very same, from a future perspective, applies to values-spreading.

> Most people are incentivized to prevent extinction but not many people care about my/our values

This is a suspiciously antisocial approach. It only works if you share Brian's view that not only are there no moral truths for future people to (inevitably) discover, but that it is nonetheless very important to promote one's current point of view on moral questions over whatever moral views are taken in the future.

Comment author: tyrael  05 February 2016 07:22:04AM  0 points

> The very same, from a future perspective, applies to values-spreading.

Why do you think that? There are different values we can change that seem somewhat independent.

> This is a suspiciously antisocial approach

That seems mean and unfair. Having different values than the average person doesn't make you antisocial or suspicious; it just makes you different. In fact, I'd say most EAs have different values than average :)

Comment author: RyanCarey  06 February 2016 03:55:49AM  1 point

>> The very same, from a future perspective, applies to values-spreading.

> Why do you think that? There are different values we can change that seem somewhat independent.

If you spread some value and then extinction eventuates, then your work does not matter in the long run. So this doesn't separate the two courses of action, on the long-run view.

> That seems mean and unfair. Having different values than the average person doesn't make you antisocial or suspicious; it just makes you different. In fact, I'd say most EAs have different values than average :)

That's not how it's antisocial. It's antisocial in the literal sense that it is antagonistic to social practises. It's more than believing in uncommon values: it's acting in a way that violates what we recognise to be at least valid heuristics, like not engaging in zero-sum competition, and especially not moralising efforts to do so. If EAs and humanity can't cooperate while disagreeing, that's bad news. Calling it mean and unfair is a premature judgement.

Comment author: AGB  06 February 2016 07:19:11PM  1 point

If you mean antisocial in the literal sense, you could and should probably have clarified that originally.

If you mean it in the usual sense, where it's approximately synonymous with labelling Brian as 'offensive' or 'uncaring', then the charge of 'mean and unfair' seems reasonable.

Either way, you shouldn't be surprised that someone would interpret it in the usual sense and consider it unfair.

Comment author: RyanCarey  06 February 2016 09:47:19PM  1 point

That's not really an accurate representation; I'm trying to say that it's anti-cooperative, which it mostly is, more so than offensive or uncaring.