This other Ryan Greenblatt is my old account[1]. Here is my LW account.
Account lost to the mists of time and expired university email addresses.
If your institute would like to contribute to this discussion, I would advise you to publish your work in a leading economics journal and to present your work at reputable economics departments and conferences.
I'm aware of various people considering trying to argue with economists about explosive growth (e.g. about the conclusions of this report).
In particular, the probability of explosive growth if you condition on human-level machine intelligence. More precisely, something like human-level machine intelligence and human-level robotic bodies, where the machine intelligence requires 10^14 FLOP per human-equivalent second (e.g. 1/10 of an H100), can run 5x faster than humans using current hardware, and the robotic bodies cost $20,000 (on today's manufacturing base).
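As a quick sanity check on these stipulated numbers (a sketch of my own; the ~10^15 FLOP/s effective H100 throughput figure is my assumption, not from the original):

```python
# Sanity-check arithmetic for the stipulated human-level AI scenario.
# Assumption (mine, not the comment's): an H100 delivers roughly
# 1e15 FLOP/s of effective throughput.
FLOP_PER_HUMAN_EQUIV_SECOND = 1e14  # stipulated inference cost
H100_FLOP_PER_SECOND = 1e15         # assumed H100 throughput

# Fraction of an H100 needed to run one human equivalent in real time.
h100_fraction = FLOP_PER_HUMAN_EQUIV_SECOND / H100_FLOP_PER_SECOND

# One H100 could then run this many human equivalents in real time,
# or a fifth as many at the stipulated 5x speedup.
humans_per_h100 = 1 / h100_fraction
humans_per_h100_at_5x = humans_per_h100 / 5

print(h100_fraction, humans_per_h100, humans_per_h100_at_5x)
```

Under these assumptions the stipulated "1/10 of an H100" per real-time human equivalent checks out.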
From my understanding they didn't ever end up trying to do this.
Personally, I argued against this being a good use of time:
So, I think the main question here is a question of whether this is a good use of time.
I think it's probably better to start by trying to talk with economists rather than trying to write a paper.
It is deeply misleading to suggest that accelerating economic growth “has been the norm for most of human history”.
From my understanding of historical growth rate estimates this is wrong. (As in, it is not "deeply misleading".)
Most historical growth rates were far slower than economic growth today. I think you might mean that we have transitioned over time from slower to faster growth modes.
To me, this sounds very similar to "economic growth has accelerated over time". And it sounds like this has happened over a long total period of time.
Maybe you think it has been very discrete, with phases (this seems unlikely to me, as the dominant driver is likely to be population growth and improved capacity for technological development (e.g. reducing malnutrition)). Or maybe you think it is key that the change in the rate of growth has historically been slow in sidereal time.
I think literal extinction is unlikely even conditional on misaligned AI takeover due to:
This is discussed in more detail here and here.
Insofar as humans and/or aliens care about nature, similar arguments apply there too, though this is mostly beside the point: if humans survive and have (even a tiny bit of) resources, they can preserve some nature easily.
I find it annoying how confident this article is without really bothering to engage with the relevant arguments here.
(Same goes for many other posts asserting that AIs will disassemble humans for their atoms.)
(This comment echoes Owen's to some extent.)
This includes the potential for the AI to have preferences that are morally valuable from a typical human perspective.
One key issue with this model is that I expect that the majority of x-risk from my perspective doesn't correspond to extinction, but instead to some undesirable group ending up with control over the long-run future (either AIs seizing control (AI takeover) or undesirable human groups).
So, I would reject:
We can model extinction here by n(t) going to zero.
You might be able to recover things by supposing that, in the x-risk case, n(t) gets multiplied by some constant instead of going to zero?
(Further, even if AI takeover does result in extinction there will probably still be some value due to acausal trade and potentially some value due to the AI's preferences.)
(Regardless, I expect that if you think the singularity is plausible, the effects of discounting are more complex because we could very plausibly have >10^20 experience years per year within 5 years of the singularity due to e.g. building a Dyson sphere around the sun. If we just look at AI takeover, ignore (acausal) trade, and assume for simplicity that AI preferences have no value, then it is likely that the vast, vast majority of value is contingent on retaining human control. If we allow for acausal trade, then the discount rates of the AI will also be important to determine how much trade should happen.)
(Separately, pure temporal discounting seems pretty insane and incoherent with my view of how the universe works.)
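To illustrate the point about discounting and the singularity (a toy sketch of my own, not the model under discussion; the rate and horizon are hypothetical): exponential pure time discounting barely dents the value of 10^20 experience-years per year if that output starts within 5 years, because the 10^20 factor dwarfs the discount accumulated over so short a delay.

```python
import math

# Toy model (my own, hypothetical numbers): 1e20 experience-years per
# year starting at year `start_year`, discounted exponentially at rate
# `delta` per year, summed over a finite horizon.
def discounted_total(delta, start_year, horizon=1000):
    return sum(1e20 * math.exp(-delta * t) for t in range(start_year, horizon))

# Even at a steep 5%/year pure time discount, output arriving in
# 5 years still yields an astronomically large discounted total.
print(f"{discounted_total(0.05, start_year=5):.2e}")
```

The result is on the order of 10^21 discounted experience-years, so under this toy model the discount rate changes almost nothing about which futures dominate the calculation.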
I tried to find out if the time-horizons for potential x-risk events have been explicitly discussed in longtermism literature but I didn’t come across anything.
See here
More specifically, is there any good reason to assume that the odds are in favor of humans even by a little bit? If so, what exactly is the argument for that?
There is a good argument from your perspective: human resource utilization is likely to be more similar to your values on reflection than a randomly chosen other species.
Is there any specific reason for discounting the possibility of arthropods or reptiles evolving over millions of years into something that equals or surpasses the intelligence of the humans that were last alive?
No, I think the analysis shouldn't discount this. Unless there is an unknown hard-to-pass point (a filter) between existing mammals/primates and human-level civilization, it seems like life re-evolving is quite likely. (I'd say an 85% chance of a new civilization conditional on human extinction but not primate extinction, and 75% if primates also go extinct.)
There is also the potential for alien civilizations, though I think this has a lower probability (perhaps 50% that aliens capture >75% of the cosmic resources in our light cone if earth-originating civilizations don't capture these resources).
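Combining these guesses (my own toy calculation, assuming independence between re-evolution and alien expansion; the original comment doesn't do this arithmetic):

```python
# Toy combination of the probability estimates above (my arithmetic,
# assuming independence; the 0.85 and 0.5 figures are from the comment).
p_reevolve = 0.85  # new civilization re-evolves, given human extinction
p_aliens = 0.50    # aliens capture most resources, given no earth civilization

# Probability that *some* civilization ends up using the cosmic
# resources, conditional on human (but not primate) extinction:
p_some_civ = p_reevolve + (1 - p_reevolve) * p_aliens
print(p_some_civ)
```

Under these assumptions, some civilization ends up using the resources with probability around 0.93, which is why the later point about only taking a modest discount seems plausible.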
IMO, the dominant effect of extinction due to bio-risk is that a different earth-originating species acquires power, and my values on reflection are likely to be closer to humanity's values on reflection than to the other species'. (I also have some influence over how humanity spends its resources, though I expect this effect is not that big.)
If you were equally happy with other species, then I think you still only take a 10x discount from these considerations because there is some possibility of a hard-to-pass barrier between other life and humans. 10x discounts don't usually seem like cruxes IMO.
I would also note that for AI x-risk, intelligent life re-evolving is unimportant. (I also think AI x-risk is unlikely to result in extinction because AIs are unlikely to want to kill all humans, for various reasons.)
And over time scales of billions, we could enter the possibility of evolution from basic eukaryotes too.
Earth will be habitable for roughly 1 billion more years, which probably isn't quite enough for this.
Perceived counter-argument:
My proposed counter-argument, loosely based on the structure of yours:
The vast majority of value from my perspective on reflection (where my perspective on reflection is probably somewhat utilitarian, but this is somewhat unclear) in the future will come from agents who are trying to optimize explicitly for doing "good" things and are being at least somewhat thoughtful about it, rather than those who incidentally achieve utilitarian objectives. (By "good", I just mean what seems to them to be good.)
At present, the moral views of humanity are a hot mess. However, it seems likely to me that a reasonable fraction of the total computational resources of our lightcone (perhaps 50%) will in expectation be spent based on the result of a process in which an agent or some agents think carefully about what would be best, in a pretty deliberate and relatively wise way. This could involve eventually deferring to other smarter/wiser agents or massive amounts of self-enhancement. Let's call this a "reasonably-good-reflection" process.
Why think a reasonable fraction of resources will be spent like this?
I expect that I am pretty aligned (on reasonably-good-reflection) with the result of random humans doing reasonably-good-reflection, as I am also a human, and many of the underlying arguments/intuitions that seem important to me seem likely to seem important to many other humans (given various common human intuitions) upon those humans becoming wiser. Further, I really just care about the preferences of (post-)humans who end up caring most about using vast, vast amounts of computational resources (assuming I end up caring about these things on reflection), because the humans who care about other things won't use most of the resources. Additionally, I care "most" about the on-reflection preferences I have which are relatively less contingent and more common among at least humans, for a variety of reasons. (One way to put this is that I care less about worlds in which my preferences on reflection seem highly contingent.)
So, I've claimed that reasonably-good-reflection resource usage will be non-trivial (perhaps 50%) and that I'm pretty aligned with humans on reasonably-good-reflection. Supposing these, why think that most of the value is coming from something like reasonably-good-reflection preferences rather than other things, e.g. not-very-thoughtful indexical (selfish) consumption preferences? Broadly, three reasons:
(Aside: I was talking about not-very-thoughtful indexical preferences. It seems likely to me that doing a reasonably good job reflecting on selfish preferences gets you back to something like de facto utilitarianism (at least as far as how you spend the vast majority of computational resources), because personal identity and indexical preferences don't make much sense, and the thing you end up thinking is more like "I guess I just care about experiences in general".)
What about AIs? I think there are broadly two main reasons to expect that what AIs do on reasonably-good-reflection will be worse from my perspective than what humans do:
Note that we're conditioning on safety/alignment technology failing to retain human control, so we should imagine correspondingly less human control over AI values.
I think the fraction of computational resources of our lightcone used based on the result of a reasonably-good-reflection process seems similar between human control and AI control (perhaps 50%). It's possible to mess this up, of course, and either mess up the reflection or lock in bad values too early. But when I look at the balance of arguments, humans messing this up seems about as likely as AIs messing this up. So the main question is what the result of such a process would be. One way to put this is that I don't expect humans to differ substantially from AIs in terms of how "thoughtful" they are.
I interpret one of your arguments as being "Humans won't be very thoughtful about how they spend vast, vast amounts of computational resources. After all, they aren't thoughtful right now." To the extent I buy this argument, I think it applies roughly equally well to AIs. So naively, it just divides both sides rather than making AI look more favorable. (At least, if you accept that almost all of the value comes from being at least a bit thoughtful, which you also contest. See my arguments for that.)
Huh, I thought that most of the disagreement between people around these parts and bioethicists is in the direction of people around here being more pro-freedom for human subjects/patients. (Freedoms aren't exactly the same as protections, but I interpret small-c conservative as being more about freedoms.)
Examples:
Generally, I personally think that much more freedom in medicine would be better.
(In fact, a total free-for-all would plausibly be better than the status quo, I think, though I'm pretty uncertain.)
I agree that there is a disagreement around how utilitarian the medical system should be vs some more fairness based principle.
However, if you go fully in the direction of individual liberties, government involvement in the medical system doesn't matter much. E.g., in a simple system like:
The state doesn't need to make any tradeoffs in health care as it isn't involved. Places like (e.g.) hospitals can do whatever they want with respect to prioritizing care and they could in principle compete etc.
(I'm not claiming that fully in the direction of individual liberties is the right move, e.g. it seems like people are often irrational about health care and hospitals often have monopolies which can cause issues.)