OCB

Owen Cotton-Barratt

8899 karma · Joined Aug 2014

Sequences
3

Reflection as a strategic goal
On Wholesomeness
Everyday Longtermism

Comments
788

Topic contributions
3

I might think of FHI as having borrowed prestige from Oxford. I think it benefited significantly from that prestige. But in the longer run it gets paid back (with interest!).

That metaphor doesn't really work, because it's not that FHI loses prestige when it pays it back -- but I think the basic dynamic of it being a trade of prestige at different points in time is roughly accurate.

I'm worried I'm misunderstanding what you mean by "value density". Could you perhaps spell this out with a stylized example, e.g. comparing two different interventions protecting against different sizes of catastrophe?

I think human extinction over 1 year is extremely unlikely. I estimated 5.93*10^-12 for nuclear wars, 2.20*10^-14 for asteroids and comets, 3.38*10^-14 for supervolcanoes, a prior of 6.36*10^-14 for wars, and a prior of 4.35*10^-15 for terrorist attacks.

Without having dug into them closely, these numbers don't seem crazy to me for the current state of the world. I think that the risk of human extinction over 1 year is almost all driven by some powerful new technology (with residues for the wilder astrophysical disasters, and the rise of some powerful ideology which somehow leads there). But this is an important class! In general dragon kings operate via something which is mechanically different than the more tame parts of the distribution, and "new technology" could totally facilitate that.

Do you have a sense of the extent to which the dragon king theory applies in the context of deaths in catastrophes?

Unfortunately, for the relevant part of the curve (catastrophes large enough to wipe out large fractions of the population) we have no data, so we'll be relying on theory. My understanding (based significantly just on the "mechanisms" section of that Wikipedia page) is that dragon kings tend to arise in cases where there's a qualitatively different mechanism which causes the very large events but doesn't show up in the distribution of smaller events. In some cases we might not have such a mechanism, and in others we might. It certainly seems plausible to me when considering catastrophes, via the kind of mechanisms I allude to in the first half of this comment. And plausibility is enough to drive significant concern: if we can't rule such a mechanism out, it's prudent to be concerned, even at the risk of having wasted some resources if we turn out to be in a world where the total risk is extremely small.

Sorry, I understood that you primarily weren't trying to model effects on extinction risk. But I understood you to be suggesting that this methodology might be appropriate for what we were doing in that paper -- which was primarily modelling effects on extinction risk.

Sorry, this isn't speaking to my central question. I'll try asking via an example:

  • Suppose we think that there's a 1% risk of a particular catastrophe C in a given time period T which kills 90% of people
  • We can today make an intervention X, which costs $Y, and means that if C occurs then it will only kill 89% of people
    • We pay the cost $Y in all worlds, including the 99% in which C never occurs
  • When calculating the cost to save a life for X, do you:
    • A) condition on C, so you save 1% of people at the cost of $Y; or
    • B) don't condition on C, so you save an expected 0.01% of people at a cost of $Y?

I'd naively have expected you to do B) (from the natural language descriptions), but when I look at your calculations it seems like you've done A). Is that right?
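In case it helps, here's a minimal sketch of the two calculations in code (the population figure and the cost of X are numbers I've made up purely for illustration):

```python
# Stylized example from above: a 1% chance of catastrophe C in period T,
# killing 90% of people; intervention X reduces that to 89% at cost $Y.
population = 8e9           # assumed population, for illustration only
p_C = 0.01                 # P(C occurs during T)
deaths_without_X = 0.90
deaths_with_X = 0.89
cost_Y = 1e9               # hypothetical cost of X, in dollars

lives_saved_if_C = (deaths_without_X - deaths_with_X) * population

# A) Condition on C: cost divided by lives saved in the worlds where C occurs.
cost_per_life_A = cost_Y / lives_saved_if_C

# B) Don't condition on C: cost divided by *expected* lives saved across all
#    worlds, including the 99% of worlds in which C never occurs.
cost_per_life_B = cost_Y / (p_C * lives_saved_if_C)

print(f"A) conditional on C: ${cost_per_life_A:,.2f} per life saved")
print(f"B) unconditional:    ${cost_per_life_B:,.2f} per life saved")
# B) is 1/p_C = 100x larger than A), which is the distinction I'm asking about.
```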

I think if you're primarily trying to model effects on extinction risk, then doing everything via "proportional increase in population", and nowhere directly analysing extinction risk, seems like a weirdly indirect way to do it -- and leaves me with a bunch of questions about whether that's really the best way to do it.

Re.

Cotton-Barratt 2020 says “it’s usually best to invest significantly into strengthening all three defence layers”:

“This is because the same relative change of each probability will have the same effect on the extinction probability”. I agree with this, but I wonder whether tail risk is the relevant metric. I think it is better to look into the expected value density of the cost-effectiveness of saving a life, accounting for indirect longterm effects as I did. I predict this expected value density to be higher for the first layers, which correspond to lower severities but are more likely to be called upon. So, to equalise the marginal cost-effectiveness of additional investments across all layers, it may well be better to invest more in prevention than in response, and more in response than in resilience.

That paper was explicitly considering strategies for reducing the risk of human extinction. I agree that relative to the balance you get from that, society should skew towards prioritizing response and especially prevention, since these are also important for many of society's values that aren't just about reducing extinction risk.

I'm worried that modelling the tail risk here as a power law is doing a lot of work, since it's an assumption which makes the risk of very large events quite small. In particular, since you're taking a power law in the ratio, the structure of the assumption essentially implies that extinction is impossible (setting aside the threshold that comes from requiring a certain number of humans for a viable population).
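To illustrate why I think the functional form matters so much, here's a toy sketch (the tail exponent, fitting threshold, and population figure are all numbers I've made up for illustration, not taken from your model):

```python
# Toy illustration: a power-law (Pareto) tail fitted to moderate catastrophes
# assigns little probability to near-total ones, and the choice of variable
# the power law is taken in can rule out extinction entirely.
population = 8e9
alpha = 1.5      # assumed tail exponent
x_min = 1e6      # tail fitted above catastrophes killing 1 million people

def p_deaths_at_least(x):
    """P(deaths >= x) under a simple Pareto tail: (x / x_min) ** -alpha."""
    return (x / x_min) ** -alpha

print(p_deaths_at_least(0.1 * population))    # ~10% of the population killed
print(p_deaths_at_least(0.99 * population))   # ~99% killed
print(p_deaths_at_least(population))          # everyone: tiny but non-zero here

# If instead the power law is taken in something like the ratio of initial to
# final population, extinction corresponds to an infinite ratio, which gets
# probability exactly zero -- the structure of the assumption rules it out.
```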

But we know from (the fancifully named) dragon king theory that the very largest events are often substantially larger than would be predicted by power law extrapolation.

I'm confused by some of the set-up here. When considering catastrophes, your "cost to save a life" represents the cost to save that life conditional on the catastrophe being due to occur? (I'm not saying "conditional on occurring" because presumably you're allowed interventions which try to avert the catastrophe.)

Understood this way, I find this assumption very questionable, since I feel like the effect of having more opportunities to save lives in catastrophes is roughly offset by the greater difficulty of preparing to take advantage of those opportunities pre-catastrophe.

Or is the point that you're only talking about saving lives via resilience mechanisms in catastrophes, rather than trying to make the catastrophes not happen or be small? But in that case the conclusions about existential risk mitigation would seem unwarranted.

Habryka identifies himself as the author of a different post which is linked to and being discussed in a different comment thread.
