
kierangreig comments on A Complete Quantitative Model for Cause Selection - Effective Altruism Forum


Comment author: kierangreig 19 May 2016 10:30:53PM *  3 points [-]

I think this quantitative model has some potential and it’s a great addition to the growing literature on cause selection. Thanks for taking the time Michael :)

One aspect of this model which I find problematic, and I feel is something that may be often overlooked when calculating the EV of the far future, is that there is some Cumulative Probability Of Non Existence (CPONE) that is currently not accounted for in the probabilities listed in the EV of the far future spreadsheet.

The CPONE relies on the following:

  • The extinction risk probability per year is always greater than 0 because extinction is possible in any year.

  • Extinction in any year means extinction in all future years. This property is what makes the probability of non-existence cumulative. By cumulative I mean it increases each additional year it is forecast into the future.

  • It follows that the probability of value existing x years in the future decreases as x increases.

I don’t have a great sense of the probability of extinction for humans or their descendants in the far future, but assigning zero probability to this outcome in the far future spreadsheet conflicts with my initial thoughts on the topic. For instance, available estimates seem to put the current annual probability of extinction at ~10^-4, and even much smaller annual extinction probabilities accumulate over large timescales to become significant.
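To illustrate how even a small annual probability compounds, here is a minimal sketch (my own illustration, using the ~10^-4 figure mentioned above, not anything from the spreadsheet):

```python
def survival_after(annual_risk, years):
    """Probability of surviving every one of `years` consecutive years,
    assuming a constant, independent extinction risk each year."""
    return (1 - annual_risk) ** years

# A hypothetical illustration: 10^-4 per year looks tiny, but over
# long horizons the cumulative probability of non-existence dominates.
p_10k = survival_after(1e-4, 10_000)    # roughly e^-1, about 0.37
p_100k = survival_after(1e-4, 100_000)  # roughly e^-10, well under 0.01%
```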

These probabilities of extinction matter because future EV comes from the ∑(estimated value at future time point multiplied by the probability of value existing at that future time point) for all future time points. If we feel that, say, 10^10 years into the future there’s a 0.5 probability humans or their descendants are extinct then all estimated values after that time point have to be multiplied by <0.5 in order to find their EV.
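That sum can be sketched as follows (hypothetical values and survival probabilities, purely illustrative):

```python
def far_future_ev(values, survival_probs):
    """EV = sum over time points of (estimated value at that time point)
    * (probability value exists at that time point).
    `values` and `survival_probs` are parallel lists, one entry each per time point."""
    return sum(v * p for v, p in zip(values, survival_probs))

# Illustrative: equal value at each of four epochs, but the survival
# probability halves after the second epoch, so later epochs contribute
# only half as much expected value.
values = [100, 100, 100, 100]
survival = [1.0, 1.0, 0.5, 0.5]
ev = far_future_ev(values, survival)  # 100 + 100 + 50 + 50 = 300
```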

Given this, I think there’s some chance that the inclusion of reasonable CPONE models into far future EV calculations can cause orders of magnitude difference relative to not including CPONE models.

Please note, I am not sure the points I made in this comment are correct. I haven’t thought about or researched this much, and as such there’s certainly a chance that I will update in future. It’s unclear to me what impact including CPONE in EV of the far future calculations has; maybe one day I will attempt some calculations myself. I currently assign significant probability to it causing orders of magnitude of difference, and that makes me feel that CPONE should be included in models like this. Another solution would be to make it clearer how the model deals with extinction probabilities into the far future and how this may conflict with some people’s views.

[Edited to master bullet point formatting]

Comment author: MichaelDickens  (EA Profile) 19 May 2016 11:17:12PM 2 points [-]

The CPONE model looks interesting. It does seem somewhat reasonable to say that if we assign some non-trivial probability to extinction each year, then this probability accumulates over time such that we are almost guaranteed to go extinct eventually. Ultimately, though, I don't believe this is a correct assumption.

First, just to clarify, I don't believe that the human species going extinct per se is bad. If humans go extinct but some intelligent benevolent[^1] species continues to exist, that would be good too. So the real concern is that valuable beings stop existing, not that humans stop existing. (Some people might claim that humans are the only valuable beings; I think this is pretty wrong but there's not much to argue here.)

If we successfully start colonizing other planets, our probability of extinction decreases as we grow. While it's true that we have a nonzero probability of extinction on any given year, this probability decreases over time. I believe a plausible cumulative distribution function for the probability of extinction would have an asymptote—or else something like an asymptote, e.g., the probability of extinction between 100 and 1000 years from now is about the same as the probability of extinction between 1000 and 10,000 years from now, etc.
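One way to see the difference is a quick sketch (my own illustration with made-up risk schedules, not Michael's model): under a constant annual risk the survival probability decays toward zero, while a risk that falls off fast enough leaves an asymptotic survival probability well above zero.

```python
import math

def survival(risks):
    """Probability of surviving every year, given a per-year risk schedule.
    Summing log(1 - r) avoids underflow from multiplying many small factors."""
    return math.exp(sum(math.log1p(-r) for r in risks))

T = 100_000  # years
constant = survival(1e-4 for _ in range(T))                  # risk never declines
declining = survival(1e-4 / (1 + t) ** 2 for t in range(T))  # risk falls as 1/t^2

# `constant` is tiny (~4.5e-5); `declining` stays near 1 because the
# total accumulated risk converges — the cumulative distribution
# function of extinction has an asymptote below 1.
```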

Perhaps we can model the probability of extinction by looking at history. Almost all individual species have gone extinct so if we treat humans as a regular species then our extinction looks pretty likely. But I believe this is the wrong reference class. We shouldn't just care about humans, we should care about good happy beings in general. I think multi-cellular life on earth makes for a better reference class. Multi-cellular life has taken several big hits over the years, but it's always bounced back. Now, it's plausible that life so far has been net bad, but the point is that this gives us good theoretical reason to believe that we can avoid extinction for potentially billions of years.

Still, maybe you'd say we have like a 90% chance of going extinct in the next 10^10 years conditional on making it through the next century. That sounds pretty defensible to me. That doesn't actually change the outputs of the model as much as you might think. When you're looking at interventions' effects on the far future, the numbers are so big that the prior does a lot of work—10^54 and 10^55 expected utility don't look that different after updating on the prior. (I believe this is the correct model: if you thought GiveDirectly was a little better than AI risk, but then I told you that I updated my estimated utility of the far future from 10^54 to 10^55, would that change your mind? But it would have a bigger impact if you used a much wider prior distribution.) Plus far-future interventions' effects tend to be correlated with each other, so this changes how good they all look but doesn't do as much to change how you prioritize them.

[^1] It's sort of unclear how benevolent humans are; we do have factory farming after all. I'm hoping we get rid of that before too long.

Comment author: RyanCarey 20 May 2016 04:34:15AM *  1 point [-]

The constant (irreducible) extinction risk model is a good one, but here's the catch. You don't really know how low the irreducible risk is once all risk-reducing interventions are made. It could go down to 0.1% annually (suppose there are some unavoidable sun flares or other cosmic disasters), or it could go down to near enough to zero. We don't know which kind of world we'd end up in, but if you care a lot about the long-run future, it's the latter family of worlds that you're aiming to save, and on expected value, those first worlds (whose cosmic endowments are kinda doomed :p) don't really matter. So the point is that the constant irreducible extinction risk model is just good enough that it only needs a slight upgrade to get the right (opposite) answer!

Comment author: Carl_Shulman 20 May 2016 12:55:32PM 3 points [-]

With uncertainty about that extinction rate this Weitzman paper's argument is relevant:

A critical feature of the distant future is currently unresolvable uncertainty about what will then be the appropriate rate of return on capital to use for discounting. This paper shows that there is a well-defined sense in which the ‘‘lowest possible’’ interest rate should be used for discounting the far-distant future part of any investment project. Some implications are discussed for evaluating long-term environmental projects or activities, like measures to mitigate the possible effects of global climate change.

Comment author: AGB 25 May 2016 07:32:39PM 1 point [-]

Why 'once all risk-reducing measures are made'? Presumably what we care about is the marginal risk-reduction measure we can make?

I see no reason to think returns here are close to linear, since a reduction in the extinction rate from 0.2% to 0.1% (500 years -> 1000 years) delivers half the benefits of going from 0.1% to 0.05% (1000 years -> 2000 years), which is half of the benefits of going from 0.05% to 0.025%, etc. So my very weak prior on 'marginal return on effort spent reducing extinction risk' would be that it is roughly exponentially increasing with the overall magnitude of resources thrown at the problem.
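The arithmetic behind that sequence can be sketched as follows (my own illustration, assuming a constant annual extinction risk, under which expected future years follow a geometric distribution with mean 1/risk):

```python
def expected_years(annual_risk):
    """Expected number of future years under a constant annual extinction
    risk (geometric distribution, mean 1 / risk)."""
    return 1 / annual_risk

# Each successive halving of the risk yields twice the absolute gain
# of the previous one: returns increase with effort rather than diminish.
gain_1 = expected_years(0.001) - expected_years(0.002)     # 1000 - 500  = 500
gain_2 = expected_years(0.0005) - expected_years(0.001)    # 2000 - 1000 = 1000
gain_3 = expected_years(0.00025) - expected_years(0.0005)  # 4000 - 2000 = 2000
```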

Which means I don't think you can take the usual shortcut of saying 'if 10% of world resources were spent on this it would be a great return on investment, and diminishing returns, so me spending 0.00001% of the worlds resources is also a great return'.

With that said, massively increasing returns is extremely unusual and feels intuitively odd so I'm very open to alternative models; this came up recently at a London EA discussion as a major objection to some of the magnitudes thrown around in x-risk causes, but I still don't have a great sense of what alternative models might look like.

Comment author: RyanCarey 27 May 2016 04:53:22AM 0 points [-]

Yeah, introducing diminishing returns into a model could change the impact by an order of magnitude, but I'm trying to answer a more binary question: will an intervention on x-risk have a "long-run impact", i.e. either approx. the cosmic endowment or approx. the current millennium? Whether you use a constant discount or an exponential discount is going to make all of the difference. And if you think there's some amount of existential risk that's irreducible, that forces you to include some exponential discounting. So it's kind of different from where you're trying to lead things.

Comment author: kierangreig 20 May 2016 05:34:11PM *  0 points [-]

I believe a plausible cumulative distribution function for the probability of extinction would have an asymptote—or else something like an asymptote, e.g., the probability of extinction between 100 and 1000 years from now is about the same as the probability of extinction between 1000 and 10,000 years from now, etc.

Using that example, the probability of value existing could be roughly modelled as p(n) = (1 − r)^(log(n − 1)), where p is the probability of value existing n years into the future, r is the extinction probability between 10 and 100 years, log is the base-10 logarithm, and n is the number of years in the future. This relationship works for n > 2.

I was curious about what the average of p(n) for that type of function would be over the next 10^11 years. Some available extinction estimates put r between 10% and 50%. I imagine there’s also similar variance among EAs’ r values. Using r = 10%, the average of p(n) over 10^11 years seems like it would be ~3 * 10^-1. Using r = 50%, the average of p(n) over 10^11 years would be ~7 * 10^-4. I used Wolfram Alpha’s integral calculator for these calculations and I am not certain that it’s performing the calculation correctly. These averages for p(n) could make the impact on far future EV significant.
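Those averages can be cross-checked in closed form: writing p(n) = (n − 1)^a with a = log10(1 − r), the antiderivative of p is (n − 1)^(a+1) / (a + 1). A sketch of that check (my own reconstruction, not kierangreig's Wolfram Alpha query):

```python
import math

def average_survival(r, horizon=1e11):
    """Average of p(n) = (1 - r) ** log10(n - 1) over n in [2, horizon].
    Rewrite p(n) = (n - 1) ** a with a = log10(1 - r), then integrate
    analytically using the antiderivative (n - 1) ** (a + 1) / (a + 1)."""
    a = math.log10(1 - r)
    integral = ((horizon - 1) ** (a + 1) - 1) / (a + 1)  # from n = 2 to horizon
    return integral / horizon

avg_low = average_survival(0.10)   # ~3e-1, matching the r = 10% figure above
avg_high = average_survival(0.50)  # ~7e-4, matching the r = 50% figure above
```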

I don’t have strong views on which CPONE model is best and the ones I mention here may be flawed. I softly lean towards including CPONE models because the posterior then more closely reflects the user’s view of reality, it’s not too difficult to include CPONE models, reasonable people may have different CPONE models, and the addition of a CPONE model may result in different cause prioritization conclusions.

I think multi-cellular life on earth makes for a better reference class. Multi-cellular life has taken several big hits over the years, but it's always bounced back.

Interesting. I hadn’t thought of that reference class before :)

When you're looking at interventions' effects on the far future, the numbers are so big that the prior does a lot of work—10^54 and 10^55 expected utility don't look that different after updating on the prior.

Excellent point :) I wasn’t fully taking that into consideration. It updates me towards thinking that CPONE models are less important than I previously thought. Still, I think reasonable people could have a CPONE model which causes more than one order of magnitude of difference in EV and therefore causes a more significant difference after updating on the prior.

[Edited: originally I accidentally used the natural logarithm instead of log base 10 when calculating the average of the probability function over 10^11 years]