JanBrauner comments on An Argument for Why the Future May Be Good - Effective Altruism Forum

Comment author: JanBrauner 21 July 2017 04:54:38PM * 3 points

Here is another argument for why the future with humanity is likely better than the future without it. Possibly, many things of moral weight are independent of humanity's survival. And if you think that humanity would care about moral outcomes more than zero, then it might be better to have humanity around.

For example, in many scenarios of human extinction, wild animals would continue existing. In your post you assigned farmed animals enough moral weight to determine the moral value of the future, and wild animals should probably carry even more: there are 10x more wild birds than farmed birds, 100-1000x more wild mammals than farmed mammals (and of course many, many more fish, let alone invertebrates). I am not convinced that wild animals' lives are on average not worth living (i.e. that they contain more suffering than happiness), but even setting that aside, there is surely a huge amount of suffering. If you believe that humanity will at some point have the potential to prevent or alleviate that suffering, that seems pretty important.

The same goes for unknown unknowns. I think we know extremely little about what is morally good or bad, and our views may fundamentally change in the (far) future. Maybe there are suffering non-intelligent extraterrestrials, maybe bacteria suffer, maybe there is moral weight in places where we would not have expected it (http://reducing-suffering.org/is-there-suffering-in-fundamental-physics/), maybe something completely different.

Let's see what the future brings, but it might be better to have an intelligent and at least slightly utility-concerned species around than no intelligent species at all.

Comment author: Brian_Tomasik 23 July 2017 05:50:20AM * 2 points

For those with a strong suffering focus, there are reasons to worry about an intelligent future even if you think suffering in fundamental physics dominates. Intelligent agents seem to me more likely to want to increase the size or vivacity of physics than to decrease it, given generally pro-life, pro-sentience sentiments (or, if paperclip maximizers control the future, to increase the number of quasi-paperclips that exist).