
80,000 Hours uses three factors to measure the effectiveness of working on different cause areas: scale, neglectedness, and solvability. But maybe urgency is important, too. Some areas can wait a long time for humans to work on them, for instance animal welfare or transhumanism; we could work on these 500 years from now (if we're still around). But some problems are urgent, like AI safety and biorisk. Should we work more on areas that are more urgent for us to solve?

5 Answers

Urgency in the sense you seem to have in mind is indeed a relevant consideration in cause prioritization, but I think it should be regarded as a heuristic for finding promising causes rather than as an additional factor in the ITN framework. See BrownHairedEevee's comment for one approach to doing this, proposed by Toby Ord. If you instead wanted to build 'urgency' into the framework, you would need to revise one of the existing factors so that the relevant units are canceled out when the three existing terms and this fourth new term are multiplied together, in such a way that the resulting quantity is still denominated in good done / extra person or dollar (cf. dimensional analysis). But I don't think there's a natural or intuitive way of doing this.
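
For reference, here is a sketch of the dimensional analysis being appealed to, using the units 80,000 Hours typically attaches to importance, tractability, and neglectedness (exact phrasings vary between write-ups):

$$
\frac{\text{good done}}{\%\ \text{of problem solved}}
\;\times\;
\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}
\;\times\;
\frac{\%\ \text{increase in resources}}{\text{extra person or dollar}}
\;=\;
\frac{\text{good done}}{\text{extra person or dollar}}
$$

Any fourth 'urgency' term would need units that cancel against a corresponding change in one of these three factors, which is the difficulty described above.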

Separately, note that the term 'urgency' is sometimes used in EA to refer to a different idea. Some people, e.g. some negative-leaning folk, believe that the felt urgency of an aversive experience is a reason for prioritizing its alleviation, over and above its intensity and duration. In this sense, animal welfare seems (arguably) more, rather than less, urgent than AI risk. I think Rockwell has this sense in mind when they object to your choice of terminology.

I don't fully understand why we can't include it. It seems like the ITN framework describes not the future marginal utility per resource spent on the problem but rather the MU/resource right now. If we want to generalize the ITN framework across time, which in theory we need to do in order to choose a sequence of decisions, we need to incorporate the fact that tractability and scale are functions of time (and, further, of the previous decisions we make).

All this does is change the resulting answer from MU/$ to MU/$(t), where t is time; everything still cancels out the same as before. In practice I don't know whether this is actually useful.
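
To make this concrete, here is a minimal sketch, with invented numbers and a simplifying one-cause-per-period budget, of how a time-indexed MU/$ can change the recommended order of work even though the units cancel exactly as before:

```python
# Sketch: MU/$ as a function of time (all trajectories are invented for illustration).
# A greedy "work on the highest MU/$ right now" rule can underperform once
# MU/$ is allowed to change over time.

mu_per_dollar = {
    "urgent_cause":     [10, 0],   # hypothetical: only tractable this period
    "persistent_cause": [12, 12],  # hypothetical: slightly better now, still available later
}

def greedy_plan(mu, horizon):
    """Each period, pick the not-yet-chosen cause with the highest current MU/$."""
    chosen, total = [], 0
    for t in range(horizon):
        pick = max((c for c in mu if c not in chosen), key=lambda c: mu[c][t])
        chosen.append(pick)
        total += mu[pick][t]
    return chosen, total

print(greedy_plan(mu_per_dollar, horizon=2))
# -> (['persistent_cause', 'urgent_cause'], 12)
# The urgency-aware order (urgent first, persistent second) would have yielded 10 + 12 = 22.
```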

I agree with the view that "urgency" is hard to fit into the formula, because urgency is not itself denominated in "good done / extra person or dollar".

An urgent problem is one that can only be solved right now. So if you don't focus on the more urgent problem, people in the future can't work on it, which may reduce the good that future people could have done. But I don't know how to value the importance of "urgency".

Stan Pinsent
Perhaps it can be captured by ensuring we compare counterfactual impacts. For an urgent, "now or never" cause, we can be confident that any impact we make wouldn't have happened otherwise. For something non-urgent, there is a chance that if we leave it, somebody else could solve it or it could go away naturally. Hence we should discount the expected value of working on this (or in other words we should recognise that the counterfactual impact of working on non-urgent causes, which is what really matters, is lower than the apparent impact).
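
One way to write this discount down (my notation, not necessarily Stan's): let $V$ be the apparent value of solving the problem and $p$ the probability that it would have been solved, or gone away, without your involvement. Then

$$
\text{counterfactual impact} \approx V \times (1 - p),
$$

with $p \approx 0$ for a "now or never" cause, so urgency shows up as a smaller counterfactual discount rather than as a separate factor in the framework.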

Yes! In chapter 6 of The Precipice, Toby Ord talks about prioritizing risks that are more urgent, a.k.a. "soon, sudden, and sharp":

  • Risks that strike sooner rather than later should be prioritized: "One reason is that risks that strike later can be dealt with later, while those striking soon cannot. Another is that there will probably be more resources devoted to risks that occur later on, as humanity becomes more powerful and more people wake up to humanity's predicament. This makes later risks less neglected. And finally, we can see more clearly what to do about risks that are coming to a head now, whereas our work on later risks has more chance of being misdirected.... This makes later risks less tractable right now than earlier ones." (p. 180)
  • We should also prioritize potential catastrophes that we expect to cause damage more suddenly, or rapidly. More slow-burning risks like climate change give the public and policymakers a greater chance to react, whereas catastrophes with sudden effects like a rapidly spreading pandemic are more likely to catch these actors off guard.
  • Finally, for similar reasons, we should prioritize potential catastrophes that are more sharp, or less likely to be preceded by "warning shots".

Outside of x-risks, I've operationalized the "urgency" of problems as something I called stickiness, or the rate at which they are expected to grow or shrink over time:

When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.

A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 50 years or more - we want to focus on problems that we expect to grow over time without our intervention, instead of problems that will go away on their own.

For example, global poverty has negative stickiness because the poverty rate has declined over the last 200 years. I believe its stickiness will continue to be negative, barring a global catastrophe like climate change or World War III.

On the other hand, farm animal suffering has not gone away over time; in fact, it has gotten worse, as a growing number of people around the world are eating meat and dairy. This trend will continue at least until alternative proteins become competitive with animal products. Therefore, farm animal suffering has positive stickiness. (I would expect wild animal suffering to also have positive stickiness due to increased habitat destruction, but I don't know.)
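
As a toy illustration (all figures below are invented placeholders, not estimates), expected stickiness can be read as a compound annual growth rate of the problem's size:

```python
# Sketch: stickiness as an annualized growth rate of a problem's size.
# All figures are made-up placeholders, not real estimates.

def annual_growth_rate(size_start, size_end, years):
    """Compound annual growth rate; negative means the problem is shrinking on its own."""
    return (size_end / size_start) ** (1 / years) - 1

# Hypothetical problem sizes at the start and end of a 20-year window
problems = {
    "global_poverty":        (1000, 700),   # shrinking without intervention
    "farm_animal_suffering": (1000, 1400),  # growing without intervention
}

for name, (start, end) in problems.items():
    print(f"{name}: stickiness ≈ {annual_growth_rate(start, end, 20):+.2%} per year")
# global_poverty: stickiness ≈ -1.77% per year
# farm_animal_suffering: stickiness ≈ +1.70% per year
```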

What do you mean by 

the rate at which they will grow or shrink over time.

Specifically, what mathematical quantity is "they"?

Technically speaking, this could be considered part of "scale" (i.e. lock-in situations affect all future beings).

For the sake of clear communication, and to avoid the potentially asinine conversations long-termism can generate, including urgency seems like a smart idea.

Let's say I can press button a, which will create 1 utility, or button b, which will create 2 utility.

Button a is only pressable for the next year, while button b is pressable for the next two years.

In this example I believe the scale has nothing to do with the urgency.

ElliotJDavies
Did you make a mistake when describing this example? It seems to relate directly to scale: one problem is twice the size of the other (in terms of "utility points").
Charlie_Guthmann
FYI I edited the comment slightly, but it doesn’t change anything. Can you explain how the urgency of the button presses relates to the scale?
david_reinstein
Can you clarify the example: 1. Do I choose between the buttons or can I press both? 2. Are you imagining some possibility of the world ending at some point in this scenario?
Charlie_Guthmann
1. You can only press one button per year due to time/resource/etc. constraints. Moreover, you can only press each button once. 2. No, I wasn't.
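
Spelling out the arithmetic of the button example under these constraints (my reading of it, using only the numbers given above):

$$
U(a \text{ in year 1},\; b \text{ in year 2}) = 1 + 2 = 3
\;>\;
U(b \text{ in year 1},\; \text{nothing pressable in year 2}) = 2,
$$

so the smaller but expiring option goes first, even though neither button's scale has changed.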

"Urgency" strikes me as very much the wrong word here. If you want to feel the urgency of e.g. ending animal farming, just watch these numbers tick by for a few seconds. It sounds like you're pretty directly describing x-risk (which has two distinct flavors). Prioritizing x-risk is a reasonable choice but an outcome or step in the process of cause prioritization, rather than a primary input.

Option value might also be a way of getting at what you describe.

This example seems a bit under-specified; maybe you could flesh it out more?

There seem to be a few things going on:

  1. Some 'cause areas' (or 'problems') may be relevant now, but only relevant in the future with a certain probability
  • e.g., 'animal welfare after the year 2200' is only relevant if humans make it to 2200
  • But 'animal welfare between 2023 and 2200' is relevant as long as we make it until then
  2. Some cause areas (e.g., preventing a big meteor from hitting the earth in 2200) will affect the probability that the others are relevant (or for how long they are relevant)
  3. Some problems may be deferred to 'solve later' without much cost.
  • Hard to find an example here ... maybe 'preventing suffering from meteor strikes predicted between the years 2200-2300, presuming we can do little to improve the technology to avoid that before, say, 2150'

Was this meant to be a response to my comment? I can't tell. If so, I'll try to come up with some examples.

david_reinstein
Sorry, yes, it was. I think the sun was shining on my laptop, so I put it in the wrong thread.
Charlie_Guthmann
Ok, so I'm trying to come up with an example where:

  • You have two choices
  • You can only pick one of the choices per unit of time
  • One of the choices will last two units of time, the other will last one
  • The choices have different scale, but this difference has nothing to do with lock-in, and you should pick the choice with less scale because it is only available for one unit of time

I think an example that perfectly reflects this is hard to come up with, but there are many things that are close.

  1. I have a quiz on Monday worth 10% of my grade and a test on Friday worth 20%. The intersection of materials on both exams is the null set. I have enough time between Monday and Friday to study for the test and hit sufficiently diminishing returns such that the extra day of studying on Monday would increase my test grade by less than half of how much studying for the quiz would increase my quiz grade.
  2. I'm a congressman, and I have two bills that I'm writing: gun control and immigration. Gun control needs to be finished by Monday, and immigration needs to be finished by Friday. The rest of this example is the same as above.
  3. I go to school with Isaac Newton, and convincing him to go into AI safety will provide 2 utility. I also realize there isn't an EA club at my school, and starting the club will provide 1 utility. I know that Isaac isn't applying to jobs for a few months, and I only need 1 hour of his time to hit diminishing returns on increasing his chance of going into AI safety. The deadline for starting a club next year is tomorrow.

Of course, these vacuums are still underspecified. Opportunity cost is rarely just about trading off 2 objects -- in the real world we have many options. I think you are thinking more about cause areas and I'm thinking more about specific interventions. However, I think this could extend more broadly, but it would be more confusing to work out.
david_reinstein
So I can choose $c_1 \in \{a,b\}$ and $c_2 \in \{a,b\}$, then?

What do we mean by 'last'? Do you mean that the choice in period 1, $c_1$, yields benefits (or costs) in periods 1 and 2, while the choice in period 2, $c_2$, only affects outcomes in period 2? Can you define this a bit? Which 'choices' have different scale, and what does that mean?

Maybe you want to define the sum of benefits $U(c_1, c_2)$, e.g. $U(a,b) = a + b$, $U(a,a) = a$, $U(b,b) = b + \beta b$, $U(b,a) = b$, where $a$ and $b$ are positive numbers and $\beta < 1$ is a diminishing-returns parameter? For 'different scale' do you just mean something like $b > a$?

Not sure what this means. So this is like the above: $U(a,b) = a + b > U(b,b) = (1+\beta)b$ if $\beta < \frac{a+b}{b} - 1$. But that's not just 'because $a$ has no value in period 2' but also because of the diminishing returns on $b$ (otherwise I might just choose $b$ in both periods). Does this characterize your case of interest?
Charlie_Guthmann
Yes. But I think, to be very specific, we should call the problems A and B (for instance, the quiz is problem A and the exam is problem B), and a choice to work on problem A equates to spending your resource[1] on problem A in a certain time frame. We can represent this as $a_{i,j}$, where $i$ is the period in which we chose A and $j$ is the number of times we have picked A before. $j$ is sort of irrelevant for problem A, since we can spend at most one resource studying for the quiz, but it is relevant for problem B, to represent the diminishing returns via $\beta^j$.

Neither, if I'm understanding you correctly. I mean that the scale of problem A in period 2, $U(A_2)$, is 0. This also implies that the marginal utility of working on problem A in period 2 is 0. For instance, if I study for my quiz after it happens, that is worthless. This is different from the diminishing returns that are at play when repeatedly studying for the same exam. This is the extreme end of the spectrum, though. We can generalize by acknowledging that the marginal utility of a certain problem is a function of time. For instance, it's better to knock on doors for an election the day before than 3 years before, but probably not infinitely better.

I think I may actually have used scale as meaning both MU/resource and: if we solve the entire problem, how much is that worth? Basically, importance, as described in the ITN framework, except maybe I didn't mean it as a function of the percent of work done but rather the total. Generally, though, I think people consider this to be a constant (which I'm not sure they should...), but that being the case, we are basically talking about the same thing, except they are dividing by a factor of 100, which again doesn't matter for this discussion. I think what Elliot meant is importance, so that's what I'm going to define it as, but I think you picked up on this confusion, which is my bad.

By choices, I meant the problems, like the quiz or the exam. I think I used the incorrect wording here.
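
A minimal sketch of this two-period model as I read it, with placeholder payoffs, the "A is worthless in period 2" assumption described above, and diminishing returns $\beta$ on a second period of work on B:

```python
# Sketch of the two-period toy model discussed above.
# Problem A ("the quiz") is worth a, but only in period 1; problem B ("the exam")
# is worth b in either period, with diminishing returns beta on a second unit of work.
# All numbers are placeholders chosen for illustration.

from itertools import product

a, b, beta = 1.0, 2.0, 0.4  # hypothetical payoffs and diminishing-returns parameter

def utility(choices):
    """Total utility of a sequence of choices, one per period."""
    total, b_picks = 0.0, 0
    for period, choice in enumerate(choices, start=1):
        if choice == "A":
            total += a if period == 1 else 0.0  # A's scale drops to zero after period 1
        else:
            total += b * (beta ** b_picks)      # diminishing returns on repeated work on B
            b_picks += 1
    return total

for seq in product("AB", repeat=2):
    print(seq, utility(seq))
# -> ('A', 'A') 1.0 | ('A', 'B') 3.0 | ('B', 'A') 2.0 | ('B', 'B') 2.8
# Working on the smaller but urgent problem A first is optimal here, consistent with
# the beta < (a+b)/b - 1 = a/b condition in the comment above (0.4 < 0.5).
```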
Charlie_Guthmann
The more I think about this the more confused I get... Going to formalize and answer your questions but it might not be done till tomorrow. 