
BrianK

332 karma · Joined

Comments (22)

Unfortunately it is not worth the risk of us spreading wild animals throughout the galaxies. Then there’s the fact we might torture digital beings.

Thanks, you too!

Perhaps you are right re: wild animal suffering.

I’ll add that insect farming is relevant too:

https://www.deepspacefoodchallenge.org/phase1winners

An optimistic tone and utopian scenes fuel the idea that space colonization, the expansion of humanity, is a good thing.

I agree that’s good news!

It’s hard for me to make sense of whether AGI will be good or bad. I like the idea of it accelerating cellular agriculture; I hate the idea of it fueling space colonization. I could make a long list going back and forth.

Here’s an example. I don’t think this tone is helpful (though well intentioned and beautifully written): https://whatweowethefuture.com/afterwards/

I’m a negative-leaning utilitarian but not a negative utilitarian: I think happiness matters and that a utopia is at least marginally better than the absence of life. But I also recognize there are many outcomes worse than the absence of life, and that we are in such a state right now. Despite our best efforts, which we should continue to deploy, I expect suffering will continue to rise as humans colonize other planets and torture more animals and, eventually, digital minds. I’ll let you determine where that might lead philosophically if one could press a button, but in practice I’m more concerned with what to do about it. My vote is that the EA community focus on making humanity less immoral, slow space colonization, focus much less on x-risks and more on s-risks, stop fueling utopianism, and so on. Hope that clarifies!

Thanks for your comment.

  1. I don’t think this is a compelling argument. Being less immoral than the worst doesn’t lead me to conclude we should increase the immorality further. I do think it should lead us to have compassion, insofar as our humanity makes it very difficult not to be immoral; it’s an evolutionary problem.

  2. That’s true! But still very bad for many. And of course, I’m concerned about all sentient beings, not just humans; the math looks truly horrible when non-humans are included. I do credit humans for unintentionally reducing wild animal suffering by being so drawn to destroying the planet, but I expect the opposite will happen in space colonization scenarios (i.e., we will seed wildlife, create more digital minds, etc.).

  3. I’m a longtermist in this sense. I’m concerned about us torturing non-humans not just in the next several decades, but eons after. This could look like factory farming animals, seeding wild animals, creating digital minds, bringing pets with us, and so on.

Is that transhumanism to the max? I need to learn more about those who endorse this philosophy—I imagine there is some diversity. Would the immorality in us be eradicated under the ideal circumstances, in their minds (s-risks and x-risks aside from AI acceleration)? Sounds like they are a different kind of utopian.

Thanks very much for the comment. As you can imagine, given my work, most of my friends and family know a lot about factory farming, and many continue to eat farmed animals, some on a daily basis. That includes plenty of my peers who identify as EAs. I don’t see a compelling reason to think colonists won’t salivate at a rib-eye or chicken wing too and act on that desire, if they can. Knowing about a problem isn’t usually enough to override our humanity. That isn’t to say some people don’t need to be educated, but this isn’t just a knowing problem; it’s a doing one.

Thanks for your engagement.

That’s an interesting point with respect to poverty. Intuitively, I don’t see any reason why there won’t be famine, war, and poverty in the galaxies, as there are, and presumably will continue to be, on Earth, but I’ll think on it more. I really doubt folks out there will live in peace, provided they remain human. One could articulate all sorts of hellscapes by looking at what life is like for many on Earth.

Humans by nature are immoral. For example, most people want to eat animals, and even if they know it is wrong to eat animals raised in cruel conditions, they will continue to do so. Efforts to meet this demand are already underway:

https://en.ifremer.fr/Latest-news/Fish-on-the-menu-at-the-future-moon-base

https://impact.canada.ca/en/challenges/deep-space-food-challenge/finalists

https://www.deepspacefoodchallenge.org/phase1winners

https://www.nature.com/articles/srep14172

https://www.gre.ac.uk/articles/public-relations/growing-vegetables-on-mars-using-fish-water-and-waste

Then there is the issue of bringing pets with us — most seem to be unhappy and bored, even though most “guardians” love them very much, and wouldn’t want to go live on another planet without them.

https://www.vox.com/future-perfect/2023/4/11/23673393/pets-dogs-cats-animal-welfare-boredom

In fact, this is one of the reasons some investors in space-tech—including the one I cited in the piece—are also investors in cell-cultivated meat. They understand Martians will want to eat what they already eat. The problem is that it’s unclear whether cellular agriculture is viable, or whether some colonists will insist on eating meat from animals even if cell-cultivated meat is available.

Then there is the issue of wild animal suffering.

https://reducing-suffering.org/will-space-colonization-multiply-wild-animal-suffering/

Granted, I think torturing digital beings on Mars might be more likely, but there’s room for suffering all around.

https://time.com/6296234/ai-should-be-terrified-of-humans/

There are many worse outcomes than the absence of life. Provided humanity remains highly immoral, as it is today, I suggest we stick to only one planet, at least for as long as it’ll have us. If humanity becomes more moral in the future, I’m happy to consider colonization then.

Fair point. It’s more nuanced later on: “Almost all of the conversations about risk have to do with the potential consequences of AI systems pursuing goals that diverge from what they were programmed to do and that are not in the interests of humans. Everyone can get behind this notion of AI alignment and safety, but this is only one side of the danger. Imagine what could unfold if AI does do what humans want.”
