Dec 19 2016 · 1 min read

Should the establishment of a permanent lunar colony (possibly even terraforming) as a form of life insurance be a priority for humanity from a consequentialist perspective? 

 

Comments (26)

kbog · 7y

As far as I can tell there is zero serious basis for going to other planets in order to save humanity and it's an idea which stays alive merely because of science fiction fantasies and publicity statements from Elon Musk and the like. I've yet to see a likely catastrophic scenario where having a human space colony would be useful that would not be much more easily protected against with infrastructure on Earth.

-Can it help prevent x-risk events? Nope, there's nothing it can do for us except tourism and moon rocks.

-Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there's an asteroid, go somewhere safe on Earth. If there's cascading global warming, move to the Yukon. If there's a nuclear war, go to a fallout shelter. If there's a pandemic, build a biosphere.

-Can it bring people back to Earth after an extended period of isolation? Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water. Whatever resources you have will degrade with inefficiencies and damage. Your only hope is to just wait for however many years or millennia it takes for Earth to become habitable again and then jump back in a prepackaged spacecraft. But, as noted above, it's vastly easier to just do this in a shelter on Earth.

-It's physically impossible to terraform the Moon with conceivable technology, as it has month-long days, and far too little gravity to sustain an atmosphere.

-"But don't we need to leave the planet EVENTUALLY?" Maybe, but if we have multiple centuries or millennia then you should wait for better general technology and AI to be developed to make space travel easy, instead of funneling piles of money into it now.
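The atmosphere point above can be checked with a quick Jeans-escape estimate. This is my own back-of-envelope sketch using standard constants; the factor-of-six rule of thumb is a rough approximation, not a precise criterion:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23        # Boltzmann constant, J/K
M_MOON = 7.342e22      # lunar mass, kg
R_MOON = 1.737e6       # lunar radius, m

def escape_velocity(mass, radius):
    return math.sqrt(2 * G * mass / radius)

def rms_speed(temp_k, molecular_mass_kg):
    return math.sqrt(3 * K_B * temp_k / molecular_mass_kg)

v_esc = escape_velocity(M_MOON, R_MOON)     # ~2.4 km/s
v_n2 = rms_speed(390, 28 * 1.661e-27)       # N2 at lunar daytime ~390 K, ~590 m/s

# Rule of thumb: a gas leaks away over geological time unless the escape
# velocity is at least ~6x the rms thermal speed. For N2 on the Moon it isn't.
print(v_esc, v_n2, v_esc > 6 * v_n2)
```

The same check passes easily for Earth (11.2 km/s escape velocity), which is why we keep our nitrogen and the Moon cannot.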

I really fail to see the logic behind "Earth might become slightly less habitable in the future, so we need to go to an extremely isolated, totally barren wasteland that is absolutely inhospitable to all carbon-based life in order to survive." Whatever happens to Earth, it's still not going to have 200 degree temperature swings, a totally sterile geology, cancerous space radiation, unhealthy minimal gravity and a multibillion dollar week-long commute.

I'm in favor of questioning the logic of people like Musk, because I think the mindset needed to be a successful entrepreneur is significantly different than the mindset needed to improve the far future in a way that minimizes the chance of backfire. I'm also not that optimistic about colonizing Mars as a cause area. But I think you are being overly pessimistic here:

  • The Great Filter is arguably the central fact of our existence. Either we represent an absurd stroke of luck, perhaps the only chance the universe will ever have to know itself, or we face virtually certain doom in the future. (Disregarding the simulation hypothesis and similar. Maybe dark matter is computronium and we are in a nature preserve. Does anyone know of other ways to break the Great Filter's assumptions?)

  • Working on AI safety won't plausibly help with the Great Filter. AI itself isn't the filter. And if the filter is late, AI research won't save us: a late filter implies that AI is hard. (Restated: an uncolonized galaxy suggests superintelligence has never been developed, which means civilizations fail before developing superintelligence. So if the filter is ahead, it will come before superintelligence. More thoughts of mine.)

  • So what could help if there's a filter in front of us? The filter is likely non-obvious, because every species before us failed to get through. This decreases the promise of guarding against specific obvious scenarios like asteroid/global warming/nuclear war/pandemic. I have not looked into the less obvious scenarios, but a planetary colony could be useful for some, such as the conversion of regular matter into strange matter as described in this post. (Should the Great Filter caution us against performing innocuous-seeming physics experiments? Perhaps there is a trap in physics that makes up the filter. Physics experiments to facilitate space exploration could be especially deadly--see my technology tree point.)

Colonizing planets before it would be reasonable to do so looks like a decent project to me in the world where AI is hard and the filter is some random thing we can't anticipate. These are both strong possibilities.

Random trippy thought: Species probably vary on psychological measures like willingness to investigate and act on non-obvious filter candidates. If we think we're bad at creative thinking relative to a hypothetical alien species, it's probably a failing strategy for us. If we think the filter is ahead of us, we should go into novel-writing mode and think: what might be unique about our situation as a species that will somehow allow us to squeak past the filter? Then we can try to play to that strength.

We could study animal behavior to answer questions of this sort. A quick example: Bonobos are quite intelligent, and they may be much more benevolent than humans. If the filter is late, bonobo-type civilizations have probably been filtered many times. This suggests that working to make humanity more bonobo-like and cooperative will not help with a late filter. (On the other hand, I think I read that humans have an unusual lack of genetic diversity relative to the average species, due to a relatively recent near-extinction event... so perhaps this adds up to a significant advantage in intraspecies cooperation overall?)

BTW, there's more discussion of this thread on Facebook and Arbital.

If the filter is ahead of us, then it's likely to be the sort of thing which civilizations don't ordinarily protect against. Humans seem to really like the idea of going to space. It's a common extension of basic drives for civilizations to expand, explore, conquer, discover, etc.

> This decreases the promise of guarding against specific obvious scenarios like asteroid/global warming/nuclear war/pandemic.

Civilizations can be predictably bad at responding to such scenarios, which are often coordination problems or other kinds of dilemmas, so I think it's still very likely that such scenarios are filters.

> I have not looked into the less obvious scenarios, but a planetary colony could be useful for some, such as the conversion of regular matter into strange matter as described in this post. (Should the Great Filter caution us against performing innocuous-seeming physics experiments? Perhaps there is a trap in physics that makes up the filter.)

Civilizations seem to have strong drives to explore other planets anyway. So even if these kinds of possibilities are really neglected, I think it's unlikely that they are filters, unless they only occur to pre-expansion civilizations, in which case our plans for colonizing other planets can't be implemented soon enough to remove the risk.

It's hubris to think that you need to have modeled the risk for it to be able to kill you. Must also invest in heuristic robustness measures.

Yes but going to another planet is so useless to known x-risks that it doesn't even work as a heuristic. Allocating government funding towards any other area would be just as good along general civilization-robustness lines.

It's bad if evaluated in the reference class of "things that work for known x-risks". But heuristics should be applied at various levels of abstraction. It looks great on robustness, resilience, and redundancy grounds - i.e. in the reference class of "things that stop things from dying". Or if you look at all of human civilization in the reference class of species, or in the reference class of civilizations.

When not looking at specific risks, I still don't see how it works well on generic robustness/resiliency/redundancy grounds compared to other things. Better healthcare, more education, less military conflict... tons of things seem to be equally good if not better along those lines, when it comes to improving the overall strength of the human race.

They may be good for improving the overall strength of the human race but to say that improves the robustness and resiliency is a non sequitur.

The idea (see e.g. here, here just to take my top two google results) is to work on modularity, backups, decentralization, adaptivity, et cetera. Things like healthcare and education are centralized and don't adapt.

I know you said "I don't see how..." but in order to see how, probably the best thing is to read around the topic, and likewise for other puzzled readers.

These are sufficiently generic criteria that all kinds of systems can improve them. Healthcare, for instance: build more advanced healthcare centers in more areas of the world. This will give any segment of the population more redundancy and resiliency when performing healthcare related functions. Same goes with education: provide more educational programs so that they are redundant and resilient to anything that happens to other educational programs and provide varied methods of education. If you take an old-fashioned geopolitical look at the world then sure it seems like being on another planet makes you really robust, but if we're protecting against Unknown Unknowns then you can't assume that far-away-and-in-space is a more valuable direction to go in, out of all the other directions that you can go for improving resilience and redundancy.

Making healthcare centers more advanced would prima facie reduce the resiliency of healthcare systems by making them more complex and brittle. One would have to argue for more specific changes.

You don't need to resort to a geopolitical stance to want to be on another planet. Physical separation and duplication is useful for redundancy of basically everything. Any reasonable reference class makes this look good.

For the last two layers of nested comments you have not actually addressed my arguments, which can be seen if you look carefully over them, nor have you given any impression of really engaging seriously with the issue, so this is my final comment for the thread.

> Making healthcare centers more advanced would prima facie reduce the resiliency of healthcare systems by making them more complex and brittle.

I said to build more healthcare centers. The more healthcare centers you have, the more redundancy you have. If you add very advanced healthcare centers or very basic ones without removing existing ones, then you have the option of providing more and different types of healthcare. This provides adaptiveness and redundancy in the form of different types of healthcare provision. If you add more healthcare professionals, you are achieving redundancy and adaptiveness by adding new talent and new ways of thinking to the field. And so on.

The whole redundancy-adaptiveness-etc stance is perfectly useful when you have some idea of what the risk actually is. If you really want to protect against "unknown unknowns" then you have no reason to think that the problem with humanity is going to be that we're all on the same planet, as opposed to the problem being that we don't have enough hospitals or didn't learn how to cure cancer or something of the sort.

> You don't need to resort to a geopolitical stance to want to be on another planet. Physical separation and duplication is useful for redundancy of basically everything.

A colony on another planet is not some sort of parallel civilization that can support and replace the critical functions of the Earth-based one like an electric power generator. You can't use facilities and resources on Mars or the Moon to prop up a failing system on Earth and vice versa without extremely high costs and time delays. The combined Earth + space colony civilization within the current technological horizon isn't an integrated, resilient, adaptive system where strengths of one area can rapidly support the other. Even if the extraterrestrial colony were self-sustaining, there would essentially be two independent systems with their own possible failure modes, which is worse than systems which can flexibly support each other.

Physical separation can be taken in a bunch of different ways. Maybe the next x-risk will be best mitigated by minimizing the number of people who are within a five meter radius of another. Maybe the next x-risk will be best mitigated by increasing the number of people who are more than 25 meters beneath the surface of the planet. Maybe it will be mitigated by evening the distribution of people across the planet, to be less focused in cities and more distributed across the countryside or oceans.

In any case, protecting Earth's civilization has a much higher payoff than protecting a small civilization on an extraterrestrial body.

> For the last two layers of nested comments you have not actually addressed my arguments, which can be seen if you look carefully over them, nor have you given any impression of really engaging seriously with the issue, so this is my final comment for the thread.

Hmm, well that's puzzling to me, because it looks like I answered them pretty directly.

> Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water.

Well, this might be a bit of an overstatement -- we don't really have a good idea of what's up there. There is good evidence for titanium, and there may be platinum-group metals up there. Who knows what else?

Colonies on the moon or Mars, or inside hollowed-out asteroids, certainly don't make sense as x-risk mitigation in the near or medium term, but at some point they're going to be necessary.

> Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there's an asteroid, go somewhere safe on Earth...

What if it's a big asteroid?

There are no known Earth-crossing minor planets large enough that a shelter on the other side of the world would be destroyed. All of them are approximately the size of the dinosaur-killer asteroid or smaller. We've surveyed most of the large ones and there are no foreseeable impact risks from them.

Large asteroids are easier to detect from a long distance. A very large asteroid would have to come in from some previously unknown, unexpected orbit for it to be previously undetected. So probably a comet-like orbit, which for a large asteroid is probably ridiculously unusual.

I really don't know how big it would have to be to destroy a solid underground or underwater structure. Maybe around the size of the Vredefort asteroid if not larger. But we haven't had such an impact since the end of the Late Heavy Bombardment, roughly 3.8 billion years ago, when these objects were cleared from Earth's orbit.

The big threat is from comets, because we have not tracked the vast majority of them. There is evidence of periodicity in bombardment that would correlate with perturbation of the Oort Cloud of comets (see the book Global Catastrophic Risks). Burned-out comets can be very dark, and we would have little warning.

If it's so big no bunkers work, how long would we have to wait on Mars before coming back?

Around 100 km diameter would boil the oceans. It is possible that a bunker in Antarctica that can handle hundreds of atmospheres of pressure (due to the oceans being above us in vapor form) could work. But it would have to last for something like 1000 years. Or we would have to stay on Mars for 1000 years.
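The "hundreds of atmospheres" figure can be sanity-checked by spreading the weight of a fully vaporized ocean over Earth's surface. A rough sketch with approximate round numbers of my own choosing:

```python
# If the oceans were fully vaporized, the added surface pressure is
# roughly the ocean's weight divided by Earth's surface area.
OCEAN_MASS = 1.4e21      # kg, total ocean mass (approximate)
EARTH_AREA = 5.1e14      # m^2, Earth's surface area
G_SURFACE = 9.81         # m/s^2, surface gravity

pressure_pa = OCEAN_MASS * G_SURFACE / EARTH_AREA
pressure_atm = pressure_pa / 101_325
print(f"{pressure_atm:.0f} atm")   # on the order of a few hundred atmospheres
```

That comes out to roughly 270 atm, consistent with a bunker needing to handle hundreds of atmospheres of steam pressure.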

> Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water. Whatever resources you have will degrade with inefficiencies and damage. Your only hope is to just wait for however many years or millennia it takes for Earth to become habitable again and then jump back in a prepackaged spacecraft. But, as noted above, it's vastly easier to just do this in a shelter on Earth.

You are forgetting the rocks, including metals and so forth that we know to be present there (and on the asteroids, which are an even more serious target). Lunar dirt and rock is about 10% aluminum, just like earth dirt and rock is, and just like stony asteroids are. Oxygen is the most abundant element in them, followed by silicon. Iron is also present in small (but inexpensively magnetically collectible) amounts in lunar regolith due to meteorite impacts from metallic asteroids.

The problem with earth is that as long as we stay here, we tend to only develop technologies optimized for this environment -- which is small, crowded, and vulnerable. If you develop technologies for the Moon, that same approach will tend to work almost anywhere in the universe. You wouldn't stay Moon-only for long.

We do have approaches that could be used, but they aren't mature because we don't have a need, thanks to plentiful water and water-based geology. For example, we have long known that you can convert any substance to plasma by raising the temperature to around 10,000 K, and the dissociated ions can be separated by mass-to-charge ratio (a la mass spectrometry). Efficiency in such a system would be tricky, but isn't necessarily insoluble (it might require being done at very large scale, for example). Energy efficiency itself is also somewhat less relevant given the abundance of sunlight.

The big issue with dragging our feet on space is more to do with astronomical waste than x-risk in my opinion. Every day we wait to build the first self replicating robotic space factory is another huge loss in terms of expected utility from economic growth. The chance of an asteroid impact probably isn't high enough to rate by comparison to the missed gains of even a fraction of 1% of the solar output translated to meaningful economic activity.

I'm not sure expanding into space necessarily (in the "all else equal" sense) reduces x-risk, since space warfare has the capacity to be pretty brutal (impact weapons, e.g.) and the increased computational resources that would be granted by having a mature self replicating space industrial capacity could lead to earlier brute forcing of AGI. It's probably important to control who has access to space for it to actually reduce x-risk (just like any other form of great power, really). You would certainly eliminate some x-risks entirely though (natural asteroid impact, virus that wipes out humanity, global warming caused by reliance on carbon based fuels, nearby supernova, etc.)

> The big issue with dragging our feet on space is more to do with astronomical waste than x-risk in my opinion.

In that case you should invest directly in base technologies. The private sector will find the most profitable uses for them, and usually there are more profitable applications for technology than space. Everyone loves to talk about all the new technologies which came out of the U.S. space program, but imagine how much more we would have gotten had we invested the same amount of money directly into medical technology, material science, and orange-flavored powdered drink mix.

> Every day we wait to build the first self replicating robotic space factory is another huge loss in terms of expected utility from economic growth.

The technologies required for that are various things which are beyond our current abilities. We can't even do self replication on Earth. We may as well start with the fundamental nanoengineering and artificial intelligence domains. We don't know how space tech and missions will evolve, so if we try to make applied technology for current missions then much of the effort will be poorly targeted and less useful. It's already clear that more serious basic problems in materials science, AI and other domains must be overcome for space exploration to provide positive returns, and those are the fields which both the private sector and the government are less interested in supporting (due to long term horizons and riskiness of profits for the private sector, and lack of politically sellable 'results' for the government).

> In that case you should invest directly in base technologies. The private sector will find the most profitable uses for them, and usually there are more profitable applications for technology than space. Everyone loves to talk about all the new technologies which came out of the U.S. space program, but imagine how much more we would have gotten had we invested the same amount of money directly into medical technology, material science, and orange-flavored powdered drink mix.

I'm with you on the spinoffs argument; however, we're concerned with technologies specifically useful for tapping space resources. What is the profitable application of a zero-gravity refinery for turning heterogeneous rocks into aluminum? Assume the process is (at small scale) around 5% as energy efficient as electrolyzing bauxite and requires a high vacuum. Chances are such a thing could be worth something in a world without cheaper ways to get aluminum, assuming you could work around the gravity difference. Not so much in a world with abundant bauxite, gravity, and an atmosphere. So there is little incentive to develop in that direction unless you are actually planning to use it in space, where it would be highly useful (because aluminum is so useful in the service of energy collection in space that 5% energy efficiency actually wouldn't slow growth by much).

> The technologies required for that are various things which are beyond our current abilities. We can't even do self replication on Earth. We may as well start with the fundamental nanoengineering and artificial intelligence domains. We don't know how space tech and missions will evolve, so if we try to make applied technology for current missions then much of the effort will be poorly targeted and less useful. It's already clear that more serious basic problems in materials science, AI and other domains must be overcome for space exploration to provide positive returns, and those are the fields which both the private sector and the government are less interested in supporting (due to long term horizons and riskiness of profits for the private sector, and lack of politically sellable 'results' for the government).

We actually do facilitate the replication of machinery already, with the aid of human labor. An orbital factory wouldn't have much signal delay relative to human reaction times, so the minimum requirement for a fully self-replicating space swarm seems to be telerobotics good enough to mimic the human hand well enough to perform maintenance and assembly tasks. No new advances in nanoengineering or artificial intelligence are needed. However, until such a system replicates itself adequately to return results, it would be a monetary sink rather than a source of profit. It would become profitable at some point, because the cheap on-site energy, ready-made vacuum, zero gravity, absence of atmosphere/weather, reduction of rent/crowding issues due to 3D construction, enhanced transport logistics between factories due to vacuum/zero gravity, etc., would all contribute to making it more efficient.

One thing to keep in mind is that we currently don't have the ability to create a space colony that can sustain itself indefinitely. So pursuing a strategy of creating a space colony in case of human life on Earth being destroyed probably should look like capacity-building so that we can create an indefinitely self-sustaining space colony, rather than just creating a space colony.

I think so! The same goes for bunkers and better facilities for rebuilding from a catastrophe (energy, food, tech for fertility, tech for reindustrialization, and literature on these). The same goes for research on mitigating risks from AI and synthetic pandemics. It seems, as argued in Nick Beckstead's thesis, that these things that have a plausible long-run impact are overwhelmingly important relative to things that don't. There is some room for argument around the edges about which short-range philanthropic endeavours might have some positive indirect long-run effects, but by and large, we have a situation where society has left all of these areas neglected relative to their overwhelming importance. The group of effective altruists and rationalists who are willing to focus on these issues of direct long-run importance is perhaps 200 people, many of whom are still in school. So we have to prioritize. And that means more than half of our effort should currently go into AI and synthetic pandemics, including their policy; much of the remainder should go into identifying new technological risks; and bunkers and lunar colonies should have only a couple of percent of our attention.

Really, what we need is a lot more people on all of these areas of direct long-run importance.

> The same goes for bunkers

For what it's worth, Nick did a shallow investigation of bunker building and found it was likely not very effective (not that this necessarily argues against general efforts to increase civilisation's robustness).

Yep, the robustness thing is picked up by Jebari while some GCRI folks give their own remarks in "Isolated refuges for surviving global catastrophes"

The most efficient way to colonize space is with self replicating robots/factories. Human settlement is probably going to be more of an afterthought, or along the lines of on-site repair and teleoperations personnel for the robots doing the heavy lifting. The concept of slave labor in orbital mining colonies doesn't make much sense outside of science fiction.

Once a certain critical mass has been reached in terms of having systems that can convert raw space materials (asteroids, lunar regolith, and so on) to more useful configurations of their raw elements (metal, solar panels, breathable air, and so on) it will make more sense to set up habitats. These will probably be between planets, not on planets.

This will have some positive consequences and some negative ones where x-risk is concerned. On the negative side, access to high-energy weapons is almost an inherent part of being in space, since it is relatively easy to create high-velocity projectiles (gravity alone, including the sun's, adds huge amounts of energy to any sizable object), and sterilizing the entire planet will be more plausible than it is even with nuclear weaponry.

On the positive side, we will have decentralized our population and have the ability to survive if the earth itself is rendered uninhabitable. We will also likely develop more easily decentralized types of manufacturing tech which will protect against supply chain disruption, thus reducing economic paths of civilizational decline.

It is also worth noting that the earth's carrying capacity can essentially be extended, guarding against overpopulation, by returning refined materials and/or energy from space, once the level of off-planet industrial growth hits a point where it becomes profitable to do so. So even from the perspective of people who want to stay on earth forever, there is incentive to develop this. You can also use it to guard against asteroid impacts, but that's a relatively minor gain compared to the rest of the picture.

No, but maybe a Mars colony.

The moon has a couple issues:

  1. Resources. In Situ Resource Utilization is necessary for any colony of any size. (It could also dramatically reduce transportation costs for a smaller colony.) Unfortunately, "magnificent desolation" sums it up pretty well. Yes, you can make lunar concrete, and yes, they found ice in the permanently shadowed craters of the lunar poles, and yes, there is silica in the sand, just like all sand on earth. But that's about it. I'm all for unmanned mining, but humans would be putting the cart before the horse.

  2. The moon is tidally locked to the earth, meaning that the same face is always facing us. As a result, it rotates exactly once a month. That's a really long day. If using solar power, that means needing enough batteries to last 14 days of darkness. For colonization, it means crops cannot be grown using direct sunlight, necessitating enormous amounts of energy for artificial lighting. Yes, the peaks of eternal light at the poles are continuously in sunlight, but they don't have a ton of surface area, and would need extremely tall, rotating solar panels to take advantage of the sunlight. So the growth of any colony would be extremely limited.
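The battery problem can be made concrete with a rough sizing sketch. The continuous load and the pack-level energy density here are my own assumed numbers, not figures from the comment:

```python
# Battery mass needed to ride out one lunar night for a continuous load,
# at roughly current lithium-ion pack-level energy densities.
LUNAR_NIGHT_HOURS = 14 * 24        # ~336 hours of darkness
LOAD_KW = 10                       # assumed continuous habitat load
SPECIFIC_ENERGY_WH_PER_KG = 200    # assumed pack-level Li-ion figure

energy_kwh = LOAD_KW * LUNAR_NIGHT_HOURS                   # 3360 kWh
battery_mass_kg = energy_kwh * 1000 / SPECIFIC_ENERGY_WH_PER_KG
print(battery_mass_kg / 1000, "tonnes")                    # ~17 tonnes per 10 kW
```

So every 10 kW of continuous load costs on the order of 17 tonnes of batteries landed on the Moon, which is why the two-week night dominates lunar power planning.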

On the resource side, Mars has enormous icecaps, large glaciers, and small concentrations of chemically bound water in the soil even near the equator. It even occasionally has liquid water on the surface, in areas where the salt concentration is high enough that frost can form a liquid brine. This provides a ready source of both H2 and O2. Mars also has CO2, Ar, and N2 in the atmosphere. The moon, however, is lacking in carbon, which makes it difficult to grow food, and it doesn't have any inert gas handy for a breathable atmosphere. (You don't want to breathe 100% O2.) N2 is also a necessary component in fertilizers.

With respect to power and crop growth, Mars has a day that lasts 24 hours, 39 minutes, which makes solar power viable without massive batteries to store the electricity for 2 weeks. The downside is that Mars is further from the sun. Wind isn't strong enough to be useful, and geothermal may or may not be possible, so we're stuck with mediocre solar. A small nuclear reactor like what is used on nuclear submarines would be helpful to power growing colonies on either the moon or mars, but politically difficult.
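The inverse-square penalty for Mars solar is easy to quantify. This uses mean orbital distance; real insolation also varies with Mars's eccentric orbit and dust storms:

```python
# Sunlight falls off with the square of distance from the sun.
EARTH_AU = 1.0
MARS_AU = 1.524                 # mean orbital distance of Mars
SOLAR_CONSTANT_EARTH = 1361     # W/m^2 at 1 AU, above the atmosphere

mars_fraction = (EARTH_AU / MARS_AU) ** 2
mars_flux = SOLAR_CONSTANT_EARTH * mars_fraction
print(f"{mars_fraction:.2f}x Earth, ~{mars_flux:.0f} W/m^2")
```

Mars gets roughly 43% of Earth's sunlight, which is "mediocre" but workable, and with a 24.6-hour day the storage requirement is a single night rather than two weeks.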

So, if you want to maximize the probability that a self-sustaining colony is built in space, I would concentrate on tech applicable to Mars. In particular, rather than looking at all the components needed by such a colony, I would try to answer a different question: what is the minimum amount (number, mass, etc.) of machines necessary to produce most of the heaviest components of such a colony?

There are already plenty of people in the space community, but as far as I can tell almost all of them are bikeshedding. If you want to make a difference there, I strongly recommend finding a clever, minimalist solution to the problem of bootstrapping a martian (or lunar) industrial base.