RobBensinger comments on Alice and Bob on big-picture worldviews (Oxford Prioritisation Project) - Effective Altruism Forum

Comment author: RobBensinger, 17 March 2017 10:43:57PM, 3 points

I think wild animal suffering isn't a long-term issue, except in scenarios where we go extinct for non-AGI-related reasons. The three likeliest scenarios are:

  1. Humans leverage AGI-related technologies in a way that promotes human welfare as well as (non-human) animal welfare.

  2. Humans leverage AGI-related technologies in a way that promotes human welfare and is effectively indifferent to animal welfare.

  3. Humans accidentally use AGI-related technologies in a way that is indifferent to human and animal welfare.

In all three scenarios, the decision-makers are likely to have "ambitious" goals that favor seizing more and more resources. In scenario 2, efficient resource use almost certainly implies that biological human bodies and brains get switched out for computing hardware running humans, and that wild animals are replaced with more computing hardware, energy/cooling infrastructure, etc. Even if biological humans who need food stick around for some reason, it's unlikely that the optimal way to efficiently grow food in the long run will be "grow entire animals, wasting lots of energy on processes that don't directly increase the quantity or quality of the food transmitted to humans".

In scenario 1, wild animals might be euthanized, or uploaded to a substrate where they can live whatever number of high-quality lives seems best. This is by far the best scenario, especially for people who think (actual or potential) non-human animals might have at least some experiences that are of positive value, or at least some positive preferences that are worth fulfilling. I would consider this extremely likely if non-human animals are moral patients at all, though scenario 1 is also strongly preferable if we're uncertain about this question and want to hedge our bets.

Scenario 3 has the same impact on wild animals as scenario 2, and for analogous reasons: resource limitations make it costly to keep wild animals around. 3 is much worse than 2 because human welfare matters so much; even if the average present-day human life turned out to be net-negative, this would be a contingent fact that could be addressed by improving global welfare.

I consider scenario 2 much less likely than scenarios 1 and 3; my point in highlighting it is to note that scenario 2 is similarly good for the purpose of preventing wild animal suffering. I also consider scenario 2 vastly more likely than "sadistic" scenarios where some agent is exerting deliberate effort to produce more suffering in the world, for non-instrumental reasons.

Comment author: Brian_Tomasik, 22 March 2017 05:29:30PM, 0 points

What's your probability that wild-animal suffering will be created in (instrumentally useful or intrinsically valued) simulations?