Comment author: capybaralet 05 January 2017 06:13:17PM 0 points [-]

"But maybe that's just because I am less satisfied with the current EA "business model"/"product" than most people."

Care to elaborate (or link to something)?

Comment author: John_Maxwell_IV 15 January 2017 01:25:21AM 0 points [-]
Comment author: Brian_Tomasik 02 January 2017 09:02:52AM *  0 points [-]

Interesting about dinosaurs. :) I added this to my piece as footnote 4 (see there for the hyperlinks):

John Maxwell disputes this claim. My reasoning is that dinosaurs lasted for at least 135 million years before going extinct 66 million years ago. It's easy to imagine that dinosaurs might have lasted, say, twice as long as they did, in which case they would still rule the Earth today.

I agree that, among ways all humans might go extinct that leave around other animals, runaway climate change and nuclear war are contenders. However, I'm skeptical that there would be many vertebrates left in these scenarios. I would think humans would try to eat every mammal, bird, and fish they could find. I guess humans couldn't kill every last mouse or minnow, but in these scenarios, plant growth would also be compromised (or else humans would be eating plants), so it's not clear how well these small animals could survive.

A naive way to estimate: divide the number of times civilization has arisen (once) by the number of times Earth has been "wiped" by mass extinctions.

Interesting. :) But this is (perhaps very strongly) biased by the selection effect that we find ourselves here and not on the (many?) other planets where intelligent life never got to the human level.

Why do you believe that humans are outliers in their degree of compassion relative to other social species?

This was a statement about selection bias (namely, that humans care more about what they care about than other civilizations do) rather than an empirical generalization from biology. I agree humans seem to be somewhere in the middle in terms of peacefulness relative to primates. Unlike cetaceans, wolves, etc., we're not obligate carnivores. We're also more compassionate to outgroup members than ants are, although it's hard to say what ant morality would look like if ants were as intelligent as mammals.

Comment author: John_Maxwell_IV 03 January 2017 07:45:36PM *  0 points [-]

I still don't see why we should expect these future extinction events to wipe out all vertebrates when vertebrates made it through dinosaur extinction events. Most plants aren't human-edible, and I'm skeptical humans would be systematic enough in foraging through remote wilderness to kill off more than half the wild vertebrates on the planet.

Interesting. :) But this is (perhaps very strongly) biased by the selection effect that we find ourselves here and not on the (many?) other planets where intelligent life never got to the human level.

Yep.

humans seem to be somewhere in the middle in terms of peacefulness relative to primates

How certain are you about this? Some quick Googling:

  • "The northern muriqui has been argued to be important to understanding human evolution, since it is one of the few primates that has tolerant, nonhierarchial relationships among and between males and females, a feature shared with hunter-gatherer humans, but which contrasts with the ranked relationships of most other primates." (source)

  • "Male primates, in general, take very little interest in helping to rear offspring... Pair bonding of any sort is rare among primates, though gibbons seem to be lifelong monogamists, and some new world monkey groups, such as marmosets, have only one reproductively active pair in any group." (source)

  • "This was also studied in rhesus macaques and pigtail macaques. They found that infants, when separated from their mothers, went though all these stages of separations- protest, despair etc. The saw the same thing with rhesus and pigtails, but in bonnet macaques, the infants don't go through all this psychological trauma. It's pretty clear why if you look at their social organization- there are a lot of allomothers in bonnet macaques and babies are often left by their moms in the wild and someone else will take care of it and bring it back to her later. So it's important to pick more than one species and to compare across species when you're doing this comparative approach for behavioral models... We also are different because we (both sexes) cooperate with non-kin pretty often." (source--first bit is interesting because I don't think humans really alloparent, which seems like an altruistic behavior?)

  • Random related thought: Somewhere I read that humans "self-domesticated" over the course of our species through e.g. capital punishment for murderers. Does that mean that we are "just cooperative enough" to be civilized? (In other words, did this "self-domestication" process occur until the point at which large scale civilization became possible, and that's where we are right now?)

Some of this stuff might be related to the evolution of intelligence though, e.g. human babies are born prematurely relative to other species because our large heads would not fit through the birth canal otherwise. So perhaps a primate species would need to engage in pair bonding in order to make this sort of 'premature' birth (and thus the evolution of high intelligence) possible. This factor seems relatively contingent on primate anatomy. So maybe a non-primate-descended intelligent species would be less likely to experience pair bonding (I think it's rare in the animal kingdom) and thus be less benevolent. (BTW, I think species that pair bond are a strict (and small) subset of species that are considered K-selected, but I could be wrong. It seems pretty likely that intelligent aliens would be K-selected in some form.)

Comment author: Denkenberger 30 December 2016 02:52:45PM 1 point [-]

A multipandemic could cause human extinction. Even a single virus has had a 100% kill rate.

Comment author: John_Maxwell_IV 31 December 2016 01:02:40PM *  1 point [-]

Interesting paper! I'm intuitively skeptical, though--with 7 billion people, it just seems really hard to kill off every last person.

Where was this paper posted?

Comment author: Brian_Tomasik 27 December 2016 02:27:25AM 0 points [-]

Thanks. :) I discuss that a bit here. I'd be curious to know your probability that non-humans would re-establish civilization if humans went extinct.

Comment author: John_Maxwell_IV 27 December 2016 01:17:52PM *  1 point [-]

I'd be curious to know your probability that non-humans would re-establish civilization if humans went extinct.

Uninformed speculation follows.

On the face of things it seems pretty likely. "...if the dinosaurs hadn't been killed by an asteroid, plausibly they would still rule the Earth, without any advanced civilization." I got the impression that the dinosaurs experienced several mass extinctions, and mammals displaced them when there was a mass extinction associated with climate change? Periodic mass extinctions are evidence against Earth getting "clogged" this way.

I don't feel like I have a good sense of likely causes of human extinction. Destruction of human civilization seems likely; most civilizations that have existed have eventually ended. But when I look at Wikipedia's page on human extinction, scenarios where every last human dies while other life persists on Earth don't seem super numerous. For example, it seems tricky to engineer a virus with a 100% kill rate that is also infectious enough to infect all 7 billion of us. (Do we have recorded instances of entire species being wiped out due to illness this way?) And if nanobots or some physics experiment eats the planet, that will destroy all the other life too. The most likely scenario seems like destruction of current human civilization alongside destruction of viable ecological niches for technologically unsophisticated human bands--runaway global warming or nuclear winter?

If that's the scenario that comes about, I would guess that lots of animals will survive, analogous to extinction events that killed off dinosaurs. I don't think that a big fraction of the great filter is between development of animals and civilization, although it seems plausible that there is some filter here. A naive way to estimate: divide the number of times civilization has arisen (once) by the number of times Earth has been "wiped" by mass extinctions. Then figure out how frequently mass extinctions occur and how many more "wipes" we can expect before Earth is uninhabitable.
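
As a concrete version of that back-of-the-envelope estimate, here's a minimal sketch. Every input besides the single occurrence of civilization is an invented assumption (e.g. five major mass extinctions so far, a wipe every ~100 million years), not an established figure:

```python
# Naive estimate sketch; all inputs except n_civilizations are assumptions.
n_civilizations = 1    # civilization has arisen once (us)
n_wipes_so_far = 5     # assumed count of major mass extinctions ("wipes")

# Naive per-wipe chance that the biosphere produces a civilization.
p_civ_per_wipe = n_civilizations / n_wipes_so_far

years_left = 1e9       # assumed habitable time remaining (~1 billion years)
wipe_interval = 1e8    # assumed interval between mass extinctions

remaining_wipes = years_left / wipe_interval
expected_future_civs = remaining_wipes * p_civ_per_wipe

print(f"P(civilization per wipe) ~ {p_civ_per_wipe:.2f}")
print(f"Expected future civilizations ~ {expected_future_civs:.1f}")
```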

On balance it's plausible our hypothetical replacements would be less compassionate, because compassion is something humans value a lot, while a random other species probably values something else more. The reason I'm asking this question in the first place is that humans are outliers in their degree of compassion.

Why do you believe that humans are outliers in their degree of compassion relative to other social species?

Almost by definition, a species that creates a civilization is capable of large-scale cooperation. But this large-scale cooperation could look very different from human cooperation. (I'm guessing it would be relatively easy for a eusocial species to control its reproduction, so if it achieved sufficient intelligence to understand the basics of breeding, it might be able to "bootstrap" itself to higher levels of intelligence from there.)

(I can imagine exotic scenarios where large-scale cooperation is less necessary for starfaring: consider a species that lived much longer than humans, meaning individuals had longer lifetimes over which to accumulate knowledge, which makes knowledge-sharing through culture less necessary. But I believe that species tend to be longer-lived in highly stable environments, and a highly stable environment is less likely to stumble across a configuration that creates an ecological niche for a highly intelligent tool-using species.)

It occurs to me that we might want to focus on how cohesively a species cooperates over how compassionate it seems to be. If you look at human actions like factory farming, these seem to be less a product of some human predilection for cruelty and more a result of incentive structures. In a post-scarcity society, we'd expect this to be less of a consideration. But a post-scarcity society requires more than just technology. Incentive structures also seem less contingent on biological factors and more contingent on societal factors.

Comment author: rohinmshah  (EA Profile) 21 December 2016 07:51:39AM 2 points [-]

Note that it is possible for the credit to sum to more than 100%.

Yes, I agree that this is possible (this is why I said it could be "a reasonable conclusion by each organization"). My point is that because of this phenomenon, you can have the pathological case where from a global perspective, the impact does not justify the costs, even though the impact does justify the costs from the perspective of every organization.
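
A toy numerical sketch of that pathological case (the numbers are invented): suppose two organizations are each counterfactually necessary for a shared outcome worth 100, and each spends 60.

```python
# Toy illustration of credit summing to more than 100% (invented numbers).
outcome_value = 100            # value of the shared outcome
org_costs = {"A": 60, "B": 60}

# Each org reasons "without me the outcome wouldn't happen", so each
# credits itself with the full counterfactual impact.
for org, cost in org_costs.items():
    print(f"Org {org}: claimed impact {outcome_value} > cost {cost} -> worthwhile")

# Credit sums to 200% of the outcome, and globally costs exceed value.
total_cost = sum(org_costs.values())
print(f"Global view: value {outcome_value} < total cost {total_cost} -> not worthwhile")
```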

I discuss point 6 here

Yeah, I agree that potential economies of scale are much greater than diminishing marginal returns, and I should have mentioned that. Mea culpa.

Issues with how to assess impact, metrics etc. are discussed in-depth in the organisation's impact evaluations.

My impression is that organizations acknowledge that there are issues, but the issues remain. I'll write up an example with GWWC soon.

Just to clarify, you'd like to see funding to meta-charities increase, so don't think these worries are actually sufficient to warrant a move back to first order charities?

That's correct.

PS. One other small thing – it's odd to class GiveWell as not meta, but 80k as meta. I often think of 80k as the GiveWell of career choice. Just as GiveWell does research into which charities are most effective and publicises it, we do research into which career strategies are most effective and publicise it.

I agree that 80k's research product is not meta the way I've defined it. However, 80k does a lot of publicity and outreach that GiveWell for the most part does not do. For example: the career workshops, the 80K newsletter, the recent 80K book, the TEDx talks, the online ads, the flashy website that has popups for the mailing list. To my knowledge, of that list GiveWell only has online ads.

Comment author: John_Maxwell_IV 23 December 2016 12:10:25PM 0 points [-]

I agree that 80k's research product is not meta the way I've defined it. However, 80k does a lot of publicity and outreach that GiveWell for the most part does not do. For example: the career workshops, the 80K newsletter, the recent 80K book, the TEDx talks, the online ads, the flashy website that has popups for the mailing list. To my knowledge, of that list GiveWell only has online ads.

Maybe instead of talking about "meta traps" we should talk about "promotion traps" or something?

Comment author: HowieL 21 December 2016 01:23:44AM *  12 points [-]

I'll add two more potential traps. There's overlap with some of the existing ones but I think these are worth mentioning on their own.

9) Object level work may contribute more learning value.

I think it's plausible that the community will learn more if it's more focused on object level work. There are several plausible mechanisms. For example (not comprehensive): object level work might have better feedback loops, object level work may build broader networks that can be used for learning about specific causes, or developing an expert inside view on an area may be the best way to improve your modelling of the world. (Think about liberal arts colleges' claim that it's worth having a major even if your educational goals are broad "critical thinking" skills.)

I'm glossing over lots of open questions here about how to model the learning of a community. For example: is it more efficient for communities to learn by their current members learning or by recruiting new members with preexisting knowledge/skills?

I don't have an answer to this question, but when I think about it I try to take the perspective of a hypothetical EA community ten years from now and ask whether it would prefer to be made up primarily of people with ten years' experience working on meta causes, or of a biologist, a computer scientist, a lawyer, and so on.

10) The most valuable types of capital may be "cause specific".

I suppose (9) is a subset of (10). But it may be that it's important to invest today in capital that will pay off tomorrow. (E.g., see 80k on career capital.) And cause specific opportunities may be better developed (and have higher returns) than meta ones. So, learning value aside, it may be valuable for EA to have lots of people who have invested in graduate degrees or built professional networks. But these types of opportunities may sometimes require you to do object level work.

Comment author: John_Maxwell_IV 23 December 2016 12:00:32PM 1 point [-]

9) seems pretty compelling to me. To use some analogies from the business world: it wouldn't make sense for a company to hire lots of people before it had a business model figured out, or run a big marketing campaign while its product was still being developed. Sometimes it feels to me like EA is doing those things. (But maybe that's just because I am less satisfied with the current EA "business model"/"product" than most people.)

In response to Lunar Colony
Comment author: kbog  (EA Profile) 20 December 2016 09:31:31PM *  13 points [-]

As far as I can tell there is zero serious basis for going to other planets in order to save humanity, and it's an idea that stays alive merely because of science fiction fantasies and publicity statements from Elon Musk and the like. I've yet to see a likely catastrophic scenario in which a human space colony would be useful that couldn't be guarded against much more easily with infrastructure on Earth.

-Can it help prevent x-risk events? Nope, there's nothing it can do for us except tourism and moon rocks.

-Is it good for keeping people safe against x-risks? Nope. In what scenario does having a lunar colony efficiently make humanity more resilient? If there's an asteroid, go somewhere safe on Earth. If there's cascading global warming, move to the Yukon. If there's a nuclear war, go to a fallout shelter. If there's a pandemic, build a biosphere.

-Can it bring people back to Earth after an extended period of isolation? Nope, the Moon has none of the resources required for sustaining a spacefaring civilization, except sunlight and water. Whatever resources you have will degrade with inefficiencies and damage. Your only hope is to just wait for however many years or millennia it takes for Earth to become habitable again and then jump back in a prepackaged spacecraft. But, as noted above, it's vastly easier to just do this in a shelter on Earth.

-It's physically impossible to terraform the Moon with conceivable technology, as it has month-long days, and far too little gravity to sustain an atmosphere.

-"But don't we need to leave the planet EVENTUALLY?" Maybe, but if we have multiple centuries or millennia then you should wait for better general technology and AI to be developed to make space travel easy, instead of funneling piles of money into it now.

I really fail to see the logic behind "Earth might become slightly less habitable in the future, so we need to go to an extremely isolated, totally barren wasteland that is absolutely inhospitable to all carbon-based life in order to survive." Whatever happens to Earth, it's still not going to have 200 degree temperature swings, a totally sterile geology, cancerous space radiation, unhealthy minimal gravity and a multibillion dollar week-long commute.

In response to comment by kbog  (EA Profile) on Lunar Colony
Comment author: John_Maxwell_IV 22 December 2016 11:44:07AM *  2 points [-]

I'm in favor of questioning the logic of people like Musk, because I think the mindset needed to be a successful entrepreneur is significantly different than the mindset needed to improve the far future in a way that minimizes the chance of backfire. I'm also not that optimistic about colonizing Mars as a cause area. But I think you are being overly pessimistic here:

  • The Great Filter is arguably the central fact of our existence. Either we represent an absurd stroke of luck, perhaps the only chance the universe will ever have to know itself, or we face virtually certain doom in the future. (Disregarding the simulation hypothesis and similar. Maybe dark matter is computronium and we are in a nature preserve. Does anyone know of other ways to break the Great Filter's assumptions?)

  • Working on AI safety won't plausibly help with the Great Filter. AI itself isn't the filter. And if the filter is late, AI research won't save us: a late filter implies that AI is hard. (Restated: an uncolonized galaxy suggests superintelligence has never been developed, which means civilizations fail before developing superintelligence. So if the filter is ahead, it will come before superintelligence. More thoughts of mine.)

  • So what could help if there's a filter in front of us? The filter is likely non-obvious, because every species before us failed to get through. This decreases the promise of guarding against specific obvious scenarios like asteroid/global warming/nuclear war/pandemic. I have not looked into the less obvious scenarios, but a planetary colony could be useful for some, such as the conversion of regular matter into strange matter as described in this post. (Should the Great Filter caution us against performing innocuous-seeming physics experiments? Perhaps there is a trap in physics that makes up the filter. Physics experiments to facilitate space exploration could be especially deadly--see my technology tree point.)

Colonizing planets before it would be reasonable to do so looks like a decent project to me in the world where AI is hard and the filter is some random thing we can't anticipate. These are both strong possibilities.

Random trippy thought: Species probably vary on psychological measures like willingness to investigate and act on non-obvious filter candidates. If we think we're bad at creative thinking relative to a hypothetical alien species, that strategy is probably a losing one for us. If we think the filter is ahead of us, we should go into novel-writing mode and think: what might be unique about our situation as a species that will somehow allow us to squeak past the filter? Then we can try to play to that strength.

We could study animal behavior to answer questions of this sort. A quick example: Bonobos are quite intelligent, and they may be much more benevolent than humans. If the filter is late, bonobo-type civilizations have probably been filtered many times. This suggests that working to make humanity more bonobo-like and cooperative will not help with a late filter. (On the other hand, I think I read that humans have an unusual lack of genetic diversity relative to the average species, due to a relatively recent near-extinction event... so perhaps this adds up to a significant advantage in intraspecies cooperation overall?)

BTW, there's more discussion of this thread on Facebook and Arbital.

Comment author: Brian_Tomasik 20 December 2016 01:35:12AM 0 points [-]

I'm now more ambivalent about global stability than I was previously because, in addition to making uncooperative/violent futures less likely, it also makes space colonization more likely. The overall impact on suffering from a negative-utilitarian standpoint is unclear.

This perspective also requires assuming that the far future dominates over short-term effects in the calculation, which not everyone agrees with.

Comment author: John_Maxwell_IV 22 December 2016 09:16:21AM *  1 point [-]

Another complication: An unstable world could cause human extinction and create the opportunity for some other intelligent species to arise. Even the most destructive human war would probably not kill every animal living around deep-sea vents. We went from the first animals to humans in 600 million years of evolution, and life on Earth probably has at least 1 billion years left (casual Googling). So a new intelligent species evolving on Earth after human extinction seems like a strong possibility.
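
As a crude back-of-the-envelope check on this, using only the figures quoted above:

```python
# Crude comparison using the figures from the paragraph above.
years_first_animals_to_humans = 600e6  # ~600 million years of evolution
habitable_years_left = 1e9             # ~1 billion years (casual Googling)

# Rough number of times the animals-to-intelligence run could replay.
replays = habitable_years_left / years_first_animals_to_humans
print(f"Room for roughly {replays:.1f} more animals-to-intelligence runs")
```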

It's an open question whether we would make for better star colonizers than some later species. Relevant post of mine.

I wish we'd get more systematic about identifying and resolving crucial considerations of this sort.

Comment author: Brian_Tomasik 16 December 2016 07:02:36PM 0 points [-]

Yeah, successful ocean fertilization is a scary scenario in my eyes due to increasing oceanic animal suffering. A bit more discussion here.

Comment author: John_Maxwell_IV 17 December 2016 07:53:03AM *  2 points [-]

I would expect that improvements to global stability make ocean fertilization a positive prospect in the long run. Do you disagree?

Comment author: John_Maxwell_IV 15 December 2016 11:25:38PM *  2 points [-]

It was interesting to see you mention ocean fertilization. I know this has been proposed as a solution for global warming as well. It seems like scientists are mostly against it on precautionary principle grounds, which is frustrating. If we just had better ocean property rights, fishing companies might be incentivized to fertilize, which could improve food security while helping to reverse global warming as a side effect. It definitely seems to me like a concept that deserves further study. The best argument against it probably involves wild animal suffering.
