
Which existential risk cause should you focus on? The cause where you have the largest impact on decreasing total existential risk. That's not the same as working on the cause where you have the largest impact when seen in isolation.

Model

Suppose there are $n$ existential risks, each with probability $p_i$ of ending the world. For each cause $i$ you can reduce the probability of the world ending from that cause by $\delta_i$, but only if you spend your whole career doing it.

For instance, suppose the risks are AI, biorisk, and asteroids. They have associated probabilities $p_{\text{AI}} = 0.9$, $p_{\text{bio}} = 0.1$, and $p_{\text{ast}} = 0.01$.[1] How much could you decrease the probability of extinction for each cause? You're pretty good at deflecting asteroids and killing viruses escaping from labs, but not that good at making humans lovable for AIs. Your probability reductions are, say, $\delta_{\text{AI}}$, $\delta_{\text{bio}}$, and $\delta_{\text{ast}}$, with $\delta_{\text{bio}} = \delta_{\text{ast}}$ and both larger than $\delta_{\text{AI}}$.

| Risk type | Probability ($p_i$) | Probability reduction ($\delta_i$) |
|---|---|---|
| AI | 0.9 | $\delta_{\text{AI}}$ |
| Biorisk | 0.1 | $\delta_{\text{bio}}$ |
| Asteroids | 0.01 | $\delta_{\text{ast}}$ |

Which career should you choose? It sounds plausible that you should be agnostic between the biorisk and asteroid paths. That's where you'll reduce the probability of extinction the most, after all. But we should do a decision-theoretic analysis of the problem to make sure.

Let's use the utility function where the world surviving has utility $1$ and the world ceasing to exist has utility $0$. Let $a$ be a $0$–$1$ vector with $a_i = 1$ if you choose action $i$ and $a_i = 0$ otherwise; since you only have one career, $\sum_i a_i \leq 1$. Then you ought to solve the total utility maximization problem

$$\max_{a} \prod_{i=1}^{n} \left(1 - p_i + a_i \delta_i\right) \quad \text{subject to} \quad \sum_{i=1}^{n} a_i \leq 1.$$

Why? Because you don't care which event causes extinction, only that it doesn't happen. And the total probability of no extinction equals $\prod_{i=1}^{n} (1 - p_i + a_i \delta_i)$.
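
To make the objective concrete, here is a minimal sketch in Python. The probabilities $p_i$ are the ones from the table above; the reductions $\delta_i$ are made-up placeholder values, since the post's example numbers are not pinned down here.

```python
import numpy as np

p = np.array([0.9, 0.1, 0.01])         # extinction probabilities: AI, biorisk, asteroids
delta = np.array([0.002, 0.01, 0.01])  # hypothetical career-long probability reductions

def survival_probability(p, delta, choice=None):
    """Probability of no extinction, prod_i (1 - p_i + a_i * delta_i),
    where a is the one-hot career-choice vector (all zeros if choice is None)."""
    a = np.zeros_like(p)
    if choice is not None:
        a[choice] = 1.0
    return np.prod(1.0 - p + a * delta)

for i, name in enumerate(["AI", "biorisk", "asteroids"]):
    print(name, survival_probability(p, delta, i))
```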

Anyway, we can show that

  1. the optimal action, i.e., career path, is the one with the highest $\delta_i / (1 - p_i)$;
  2. the multiplicative improvement you're causing by choosing action $i$ is $1 + \delta_i / (1 - p_i)$.

Proof

Define

$$C = \prod_{j=1}^{n} (1 - p_j),$$

the probability that the world survives if you do nothing. The utility when taking action $i$ equals

$$\prod_{j=1}^{n} \left(1 - p_j + a_j \delta_j\right) = C \cdot \frac{1 - p_i + \delta_i}{1 - p_i} = C \left(1 + \frac{\delta_i}{1 - p_i}\right),$$

which is clearly maximized by the $i$ that maximizes $\delta_i / (1 - p_i)$.
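
As a sanity check, here is a small numeric sketch (reusing the hypothetical $\delta_i$ from the earlier snippet) confirming both claims: ranking careers by $\delta_i/(1-p_i)$ matches ranking them by total utility, and each career's utility divided by the do-nothing utility equals $1 + \delta_i/(1-p_i)$.

```python
import numpy as np

p = np.array([0.9, 0.1, 0.01])         # extinction probabilities from the table
delta = np.array([0.002, 0.01, 0.01])  # hypothetical career-long reductions

baseline = np.prod(1 - p)  # C: survival probability if you do nothing
utilities = np.array([np.prod(1 - p + np.eye(3)[i] * delta) for i in range(3)])
benefit = delta / (1 - p)  # delta_i / (1 - p_i)

assert np.argmax(utilities) == np.argmax(benefit)      # claim 1: same optimal action
assert np.allclose(utilities / baseline, 1 + benefit)  # claim 2: multiplicative improvement
```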

Consequences

You need to take both the probability of extinction by cause $i$ and your ability to reduce that probability into account when you choose your career. If, for instance, the probability of AI ending the world ($p_{\text{AI}}$) is higher than the probability of biorisk ending the world ($p_{\text{bio}}$), you need to be at least $(1 - p_{\text{bio}})/(1 - p_{\text{AI}})$ times better at biorisk than at AI risk (in terms of reducing the probability) to justify working on biorisk. If the probability of bio extinction is $0.1$ and the probability of AI extinction is $0.9$, you need to be $(1 - 0.1)/(1 - 0.9) = 9$ times better at biorisk to justify doing biorisk instead of AI.
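
A quick way to see the threshold in code (using the $p_i$ from the table; the ratio below is how much larger your biorisk reduction must be than your AI-risk reduction):

```python
p_ai, p_bio = 0.9, 0.1
threshold = (1 - p_bio) / (1 - p_ai)  # delta_bio must exceed threshold * delta_ai
print(threshold)  # 9.0
```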

We can expand the table above to include the benefit of taking each action:

| Risk type | Probability ($p_i$) | Probability reduction ($\delta_i$) | Benefit ($\delta_i / (1 - p_i)$) |
|---|---|---|---|
| AI | 0.9 | $\delta_{\text{AI}}$ | $\delta_{\text{AI}} / 0.1 = 10\,\delta_{\text{AI}}$ |
| Biorisk | 0.1 | $\delta_{\text{bio}}$ | $\delta_{\text{bio}} / 0.9 \approx 1.11\,\delta_{\text{bio}}$ |
| Asteroids | 0.01 | $\delta_{\text{ast}}$ | $\delta_{\text{ast}} / 0.99 \approx 1.01\,\delta_{\text{ast}}$ |

So, unless you are roughly ten times better at deflecting asteroids than at AI safety (precisely, unless $\delta_{\text{ast}} > 9.9\,\delta_{\text{AI}}$), the AI safety career is better than the asteroid career. But not by a lot in absolute terms, as the multiplicative improvements $1 + \delta_i/(1 - p_i)$ are virtually indistinguishable from one another when the $\delta_i$ are small. But of course, a higher number is a higher number, and they do add up. If we only care about the part $\delta_i/(1 - p_i)$, which might be reasonable, the AI career is $9.9\,\delta_{\text{AI}}/\delta_{\text{ast}}$ times better than the asteroid career. Which is more impressive.
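
To put numbers on this, here is a sketch with made-up reductions, chosen so that you are five times better at biorisk and asteroid deflection than at AI safety, showing how the benefit column and the comparison play out under that assumption:

```python
p = {"AI": 0.9, "biorisk": 0.1, "asteroids": 0.01}
delta = {"AI": 0.002, "biorisk": 0.01, "asteroids": 0.01}  # hypothetical reductions

benefit = {k: delta[k] / (1 - p[k]) for k in p}
for k, b in sorted(benefit.items(), key=lambda kv: -kv[1]):
    print(f"{k:9s}  benefit = {b:.4f}  multiplicative improvement = {1 + b:.4f}")

# Even though you are five times worse at AI safety, its benefit is about
# twice that of asteroid deflection:
print("AI vs asteroids:", benefit["AI"] / benefit["asteroids"])  # ~1.98
```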

A model with uncertainty

So, you say you have epistemic uncertainty about the probabilities of extinction from each cause? Perhaps you think your choice of entering a field may remove the risk entirely, rather than reduce it by a small number? (E.g., either you solve AI alignment, or you don't.)

That turns out not to matter, for the problem doesn't change much when you allow for uncertainty. Provided $(p_i, \delta_i)$ and $(p_j, \delta_j)$ are independent whenever $i \neq j$, we find that

$$E\left[\prod_{i=1}^{n} \left(1 - p_i + a_i \delta_i\right)\right] = \prod_{i=1}^{n} \left(1 - \bar{p}_i + a_i \bar{\delta}_i\right),$$

where $\bar{p}_i = E[p_i]$ and $\bar{\delta}_i = E[\delta_i]$. The problem is maximized by the action with the highest $\bar{\delta}_i / (1 - \bar{p}_i)$.
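
A Monte Carlo sketch can illustrate this. The Beta distributions below are made-up stand-ins for your epistemic uncertainty (chosen so their means match the earlier hypothetical numbers); the point is only that, with the causes sampled independently, the expected survival probability matches the formula with $\bar{p}_i$ and $\bar{\delta}_i$ plugged in.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 500_000

# Hypothetical priors: columns are AI, biorisk, asteroids, sampled independently.
p_draws = rng.beta([9, 1, 1], [1, 9, 99], size=(n_samples, 3))     # means ~0.9, 0.1, 0.01
d_draws = rng.beta([1, 1, 1], [499, 99, 99], size=(n_samples, 3))  # means ~0.002, 0.01, 0.01

for i, name in enumerate(["AI", "biorisk", "asteroids"]):
    a = np.zeros(3)
    a[i] = 1.0
    monte_carlo = np.mean(np.prod(1 - p_draws + a * d_draws, axis=1))
    formula = np.prod(1 - p_draws.mean(axis=0) + a * d_draws.mean(axis=0))
    print(name, round(monte_carlo, 4), round(formula, 4))  # agree up to Monte Carlo noise
```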

Footnotes

  1. These probabilities sum to more than $1$, but that doesn't matter for our purposes. Think of them as the probabilities of independent events, and of "the world ends" as the event that happens if at least one of them occurs.


Comments (4)

Thanks for writing this, Jonas. As someone much below the LessWrong average at math, I would be grateful for a clarification of this sentence:

Provided $(p_i, \delta_i)$ and $(p_j, \delta_j)$ are independent whenever $i \neq j$

What do $i$ and $j$ refer to here? Moreover, is it a reasonable assumption that the uncertainties of existential risks are independent? It seems to me that many uncertainties run across risk types, such as the chance of recovery after civilisational collapse.

$i$ and $j$ are indices for the causes. I wrote $i \neq j$ because you don't have to assume that $p_i$ and $\delta_i$ are independent for the math to work. But everything else will have to be independent.

Maybe the uncertainties shouldn't be independent, but often they will be. Our uncertainty about the probability of AI doom is probably not related to our uncertainty about the probability of pandemic doom, for instance.

I don't understand what $\delta_i$ is. What do you mean by a "probability reduction" of $\delta_i$?

If the probability of extinction by cause $i$ is $p_i$ and the probability reduction for that cause is $\delta_i$, the probability of extinction by that cause becomes $p_i - \delta_i$ if you choose to focus on cause $i$.
