
In late 2014, I ate lunch with an EA who prefers to remain anonymous. I had originally been of the opinion that, should humans survive, the future would likely be bad. He convinced me to change my mind about this.

I haven’t seen this argument written up anywhere and so, with his permission, I'm attempting to put it online for discussion.

A sketch of the argument is:

  1. Humans are generally not evil, just lazy

  2. Therefore, we should expect there to only be suffering in the future if that suffering enables people to be lazier

  3. The most efficient solutions to problems don’t seem like they involve suffering

  4. Therefore, as technology progresses, we will move more towards solutions which don’t involve suffering

  5. Furthermore, people are generally willing to exert some (small) amount of effort to reduce suffering

  6. As technology progresses, the amount of effort required to reduce suffering will go down

  7. Therefore, the future will contain less net suffering

  8. Therefore, the future will be good

My Original Theory for Why the Future Might Be Bad

There are about ten billion farmed land animals killed for food every year in the US, which has a population of ~320 million humans.

These farmed animals overwhelmingly live in factory-farm conditions, which inflict enormous cruelties, and they probably have lives that are not worth living. Since (a) farmed animals so vastly outnumber humans, (b) humans are the cause of their cruelty, and (c) humans haven't caused an equal or greater number of beings to lead happy lives, human existence is plausibly bad on net.
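
As a rough back-of-the-envelope, using the figures above (both numbers are approximate):

$$\frac{10^{10}\ \text{farmed land animals killed per year}}{3.2 \times 10^{8}\ \text{humans}} \approx 31 \text{ animals killed per person per year}$$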

Furthermore, technology seems to have instigated this problem. Animal agriculture has never been great for the animals being slaughtered, but historically there was some modicum of welfare. For example, chickens had to be let outside at least some of the time, because otherwise they would develop vitamin D deficiencies. With the discovery of vitamins and methods for synthesizing them, chickens could be kept indoors for their entire lives. Other scientific advancements, like antibiotics, enabled them to be packed densely, so that the average chicken now has about 67 square inches of space (roughly two thirds the size of a sheet of paper).

It's very hard to predict the future, but one reasonable thing you can do is guess that current trends will continue. Even if you don't believe society is currently net negative, it seems fairly clear that the trend has been getting worse (e.g. the number of suffering farmed animals grew much more rapidly than the [presumably happy] human population over the last century), and therefore we should predict that the future will be bad.

His Response

Technology is neither good nor bad; it's merely a tool which enables the people who use it to do good or bad things. In the case of factory farming, it seemed to me (Ben) that people overwhelmingly wanted to do bad things, and therefore technological progress was bad. Technological progress will presumably continue, so we might expect this ethical trend to continue and the future to be even worse than today.

He pointed out that this wasn't an entirely accurate way of viewing things: people don't actively want to cause suffering; they are just lazy, and it happens that the lazy solution in this case causes more suffering.

So the key question is: when we look at problems that the future will have, will the lazy solution be the morally worse one?

It seems like the answer is plausibly “no”. To give some examples:

  1. Factory farming exists because the easiest way to get food which tastes good and meets people's various social goals happens to cause cruelty. Once we become more scientifically advanced, though, it will presumably be even more efficient to produce food that involves no conscious experience at all (i.e. clean meat); at that point, the lazy solution will be the more ethical one.

    1. (This is arguably what happened with domestic work animals on farms: we now have cars and trucks, which replaced horses and mules, making even the phrase "beat like a rented mule" seem appalling.)

  2. Slavery exists because there is currently no way to get certain kinds of labor without conscious beings performing it. Again, though, this is due to a lack of scientific knowledge: there is no obvious reason why conscious experience is required for plowing a field or harvesting cocoa, and therefore the more efficient solution is simply to have nonconscious robots do these tasks.

    1. (This is arguably what happened with human slavery in the US: industrialization meant that slavery wasn't required to create wealth in a large chunk of the country, and therefore slavery was outlawed.)

Of course, this is not a definitive proof that the future will be good. One can imagine the anti-GMO lobby morphing into an anti-clean meat lobby as part of some misguided appeal to nature, for example.

But this does give us hope that the lazy – and therefore default – position on issues will generally be the more ethical one, and therefore people would need to actively work against the grain in order to make the world less ethical.

If anything, we have some reason to hope for the opposite: a small but nontrivial fraction of people are currently vegan, and a larger number spend extra money to buy animal products which (they believe) are less inhumane. I am not aware of any large group which does the reverse (goes out of its way to cause more cruelty to farmed animals). Therefore, we might guess that the average person is slightly ethical: people would not only go vegan if that were the cheaper option, but would also pay a small amount of money to live more ethically.

The same goes for slavery: a small fraction of consumers go out of their way to buy slave-free chocolate, with no corresponding group who go out of their way to buy chocolate produced with slavery. Once machines come close to matching human cocoa-growing abilities, we would expect slavery in the chocolate industry to die off.

Summary

If the default course of humanity is to be ethical, our prior should be that the future will be good, and the burden of proof shifts to those who believe that the future will be bad.

I do not believe this argument provides a knockdown counterargument to concerns about s-risks, but I hope its publication encourages more discussion of the topic and offers a viewpoint some readers have not considered before.

 

This post represents a combination of my and the anonymous EA’s views. Any errors are mine. I would like to thank Gina Stuessy and this EA for proofreading a draft of this post, and for talking about this and many other important ideas about the far future with me.

Comments

What lazy solutions will look like seems unpredictable to me. Suppose someone in the future wants to realistically roleplay a historical or fantasy character. The lazy solution might be to simulate a game world with conscious NPCs. The universe contains so much potential for computing power (which presumably can be turned into conscious experiences), that even if a very small fraction of people do this (or other things whose lazy solutions happen to involve suffering), that could create an astronomical amount of suffering.

Lazy solutions to problems of motivating, punishing, and experimenting on digital sentiences could also involve astronomical suffering.

Yes, I agree. More generally: the more things consciousness (and particularly suffering) is useful for, the less reasonable point (3) above is.

One concern might be not malevolence, but misguided benevolence. For just one example, spreading wild animals to other planets could potentially involve at least some otherwise avoidable suffering (within at least some of the species), but might be done anyway out of misguided versions of "conservationist" or "nature-favoring" views.

I'm curious if you think that the "reflective equilibrium" position of the average person is net negative?

E.g. many people who would describe themselves as "conservationists" probably also think that suffering is bad. If they moved into reflective equilibrium, would they give up the conservation or the anti-suffering principles (where these conflict)?

I don't know, but I would guess that people would give up conservation under reflective equilibrium (assuming and insofar as conservation is, in fact, net negative).

This is what I am most concerned about. It is likely that there will be less suffering in areas where humans are the direct cause or recipient of suffering (e.g. farmed animals, global poverty). I think it is less likely that there will be a reduction in suffering in areas where we are not the clear cause of it.

Because of the above, I don't think wild-animal suffering will be solved somewhere along the line of our technological progress. That said, I do think the continued existence of humans is a good thing, because without humans I'm fairly confident that the world's existence would be net negative.

Yeah, I think the point I'm trying to make is that it would require effort for things to go badly. This is, of course, importantly different from saying that things can't go badly.

Thanks for writing this up! I agree that this is a relevant argument, even though many steps of the argument are (as you say yourself) not airtight. For example, consciousness or suffering may be related to learning, in which case point 3) is much less clear.

Also, the future may contain vastly larger populations (e.g. because of space colonization), which, all else being equal, may imply (vastly) more suffering. Even if your argument is valid and the fraction of suffering decreases, it's not clear whether the absolute amount will be higher or lower (as you claim in point 7).

Finally, I would argue we should focus on the bad scenarios anyway – given sufficient uncertainty – because there's not much to do if the future will "automatically" be good. If s-risks are likely, my actions matter much more.

(This is from a suffering-focused perspective. Other value systems may arrive at different conclusions.)

Thanks for the response!

  1. It would be surprising to me if learning required suffering, but I agree that if it does then point (3) is less clear.
  2. Good point! I rewrote it to clarify that there is less net suffering.
  3. Where I disagree with you the most is your statement "there's not much to do if the future will 'automatically' be good." Most obviously, we have the difficult (and perhaps impossible) task of ensuring the future exists at all (maxipok).

The Foundational Research Institute site in the links above seems to have a wealth of writing about the far future!

Thanks for the post! If lazy solutions reduce suffering by reducing consciousness, they also reduce happiness. So, for example, a future civilization optimizing for very alien values relative to what humans care about might not have much suffering or happiness (if you don't think consciousness is useful for many things; I think it is), and the net balance of welfare would be unclear (even relative to a typical classical-utilitarian evaluation of net welfare).

Personally I find it very likely that the long-run future of Earth-originating intelligence will optimize for values relatively alien to human values. This has been the historical trend whenever one dominant life form replaces another. (Human values are relatively alien to those of our fish ancestors, for example.) The main way out of this conclusion is if humans' abilities for self-understanding and cooperation make our own future evolution an exception to the general trend.

Thanks Brian!

I think you are describing two scenarios:

  1. Post-humans will become something completely alien to us (e.g. mindless outsourcers). In this case, arguments that these post-humans will not have negative states equally imply that these post-humans won't have positive states. Therefore, we might expect some (perhaps very strong) regression towards neutral moral value.
  2. Post-humans will have some sort of abilities which are influenced by current humans’ values. In this case, it seems like these post-humans will have good lives (at least as measured by our current values).

This still seems to me to be asymmetric – as long as you have some positive probability on scenario (2), isn't the expected value greater than zero?
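
To spell out the arithmetic behind that asymmetry (just a sketch; $p_1, p_2$ and $v_1, v_2$ are stand-in labels for the probabilities and values of the two scenarios):

$$E[V] = p_1 v_1 + p_2 v_2 \approx p_2 v_2 > 0 \quad \text{whenever } v_1 \approx 0,\ v_2 > 0,\ p_2 > 0.$$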

I think maybe what I had in mind with my original comment was something like: "There's a high probability (maybe >80%?) that the future will be very alien relative to our values, and it's pretty unclear whether alien futures will be net positive or negative (say 50% for each), so there's a moderate probability that the future will be net negative: namely, at least 80% * 50%." This is a statement about P(future is positive), but probably what you had in mind was the expected value of the future, counting the IMO unlikely scenarios where human-like values persist. Relative to values of many people on this forum, that expected value does seem plausibly positive, though there are many scenarios where the future could be strongly and not just weakly negative. (Relative to my values, almost any scenario where space is colonized is likely negative.)
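
Written out, using those rough 80% and 50% figures:

$$P(\text{future is net negative}) \geq 0.8 \times 0.5 = 0.4.$$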

I think one important reason for optimism that you didn't explicitly mention is the expanding circle of moral concern, a la Peter Singer. Sure, people's behaviors are strongly influenced by laziness/convenience/self-interest, but they are also influenced by their own ethical principles, which in a society-wide sense have generally grown better and more sophisticated over time. For the two examples that you give, factory farming and slavery, your view seems to be that (and correct me if I'm wrong) in the future, people will look for more efficient ways to extract food/labor, and those more efficient ways will happen to involve less suffering; therefore, suffering will decrease in the future. In my head it's the other way around: people are first motivated by their moral concerns, which may then spur them to find efficient technological solutions to these problems. For example, I don't think the cultured meat movement has its roots in trying to find a more cost-effective way to make meat; I think it started off with people genuinely concerned about the suffering of factory-farmed animals. Same with the abolitionist movement to abolish slavery in the US; I don't think industrialization had as much to do with it as people's changing views on ethics.

We reach the same conclusion – that the future is likely to be good – but I think for slightly different reasons.

The change in ethical views seems very slow and patchy, though - there are something like 30 million slaves in the world today, compared to 3 million in the US at its peak (I don't know how worldwide numbers have changed over time.)

Humans are generally not evil, just lazy

?

Human history has many examples of systematic unnecessary sadism, such as torture for religious reasons. Modern Western moral values are an anomaly.

Thanks for the response! But is that true? The examples I can think of seem better explained by a desire for power etc. than by suffering as an end goal in itself.

There is now a more detailed analysis of a similar topic: The expected value of extinction risk reduction is positive

Here is another argument for why the future with humanity is likely better than the future without it. Possibly, there are many things of moral weight that are independent of humanity's survival. And if you think humanity cares about moral outcomes at least a little, then it might be better to have humanity around.

For example, in many scenarios of human extinction, wild animals would continue existing. In your post you assigned farmed animals enough moral weight to determine the moral value of the future, and wild animals should probably carry even more moral weight: there are about 10x more wild birds than farmed birds and 100-1000x more wild mammals than farmed animals (and of course many, many more fish, let alone invertebrates). I am not convinced that wild animals' lives are on average not worth living (i.e. that they contain more suffering than happiness), but even setting that aside, there is surely a huge amount of suffering. If you believe that humanity will have the potential to prevent or alleviate that suffering at some time in the future, that seems pretty important.

The same goes for unknown unknowns. I think we know extremely little about what is morally good or bad, and maybe our views will fundamentally change in the (far) future. Maybe there are suffering non-intelligent extraterrestrials, maybe bacteria suffer, maybe there is moral weight in places where we would not have expected it (http://reducing-suffering.org/is-there-suffering-in-fundamental-physics/), maybe something completely different.

Let's see what the future brings, but it might be better to have an intelligent and at least slightly utility-concerned species around, as compared to no intelligent species.

For those with a strong suffering focus, there are reasons to worry about an intelligent future even if you think suffering in fundamental physics dominates, because intelligent agents seem to me more likely to want to increase the size or vivacity of physics rather than decrease it, given generally pro-life, pro-sentience sentiments (or, if paperclip maximizers control the future, to increase the number of quasi-paperclips that exist).

Not sure if "lazy" is quite the right word. For example, it took work to rebuild chicken housing so that each chicken got even less space. I think "greedy" is a more accurate word.

By the way, does the vegan movement talk about running non-factory farms that sell animal products which are subsidized so they are priced competitively with factory farm products? If farming animals ethically costs a premium, then from a purely consequentialist perspective it doesn't seem like it should matter whether the premium is paid by the customer or by some random person who wants to convert dollars into reduced suffering.

BTW I think this is pretty relevant to the Moloch line of thinking.

does the vegan movement talk about running non-factory farms that sell animal products which are subsidized so they are priced competitively with factory farm products?

I would guess it'd be much less cost-effective than lobbying for welfare reforms and such.

it doesn't seem like it should matter whether the premium is paid by the customer or by some random person who wants to convert dollars into reduced suffering.

If the altruist spends her money on this, she has less left over to spend on other things. In contrast, most consumers won't spend their savings on highly altruistic causes.

I would guess it'd be much less cost-effective than lobbying for welfare reforms and such.

I suppose this cost-effectiveness difference could be seen as a crude way to measure how close we are to the pure Moloch type scenario?

I agree my proposal would probably not make sense for anyone reading this forum. It was more of theoretical interest. It's not clear whether equivalent actions exist for other Moloch type scenarios.

A complication: whole-brain emulation seeks to instantiate human minds, which are conscious by default, in virtual worlds. Going by what Robin Hanson wrote in Age of Em, any suffering involved in that can presumably be edited away. Hanson also thinks this might be a more likely first route to HLAI, which suggests it may be the "lazy solution", compared to mathematically-based AGI. However, in the s-risks talk at EAG Boston, an example of an s-risk was something like this.

Analogizing like this isn't my idea of a first-principles argument, and therefore what I'm saying is not airtight either, considering the levels of uncertainty around paths to AGI.

7. Therefore, the future will contain less net suffering

8. Therefore, the future will be good

Could this be rewritten as "8. Therefore, the future will be better than the present" or would that change its meaning?

If it would change the meaning, then what do you mean by "good"? (If you're confused about why I'm confused about this, note that it seems to me that 8 does not follow from 7 under the meaning of "good" I usually hear from EAs, something like "net positive utility".)

Yeah, it would change the meaning.

My assumption was that, if things monotonically improve, then in the long run (perhaps the very, very long run) we will get to net positive. You are proposing that we might instead asymptote at some negative value, even though we are still always improving?

I wasn't proposing that (I in fact think the present is already good), but rather was just trying to better understand what you meant.

Your comment clarified my understanding.

On premise 1, a related but stronger claim is that humans tend to shape the universe to their values much more strongly than do blind natural forces. This allows for a simpler but weaker argument than yours: it follows that, should humans survive, the universe is likely to be better (according to those values) than it otherwise would be.

I think a good definition of suffering is also required for this. Are we talking only about human suffering? And if so, in what sense? Momentary suffering, chronic suffering, extremity of suffering?