
I often encounter the following argument, or a variant of it:

Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society's ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be expected to be very valuable.

I encounter this argument from a wide range of perspectives, including most of the social circles I interact with other than the LessWrong community (academics, friends from school, philanthropists, engineers in the bay area). For example, Holden Karnofsky writes about the general positive effects of progress here (I agree with many of these points). I think that similar reasoning informs people's views more often than it is actually articulated.

I disagree with this argument. This disagreement appears to be responsible for many of my other contrarian views, and to have significant consequences for my altruistic priorities; I discuss some concrete consequences at the end of the post. (The short summary is that I consider differential intellectual progress to be an order of magnitude more important than absolute intellectual progress.) In the body of this post I want to make my position as clear as possible. 

My impression is that I disagree with the conventional view because (1) I take the long-term perspective much more seriously than most people, and (2) I have thought about this question at more length than most people. But overall I remain a bit hesitant in this view due to its unpopularity. Note that my view is common in the LessWrong crowd and has been argued for elsewhere. In general I endorse significant skepticism towards views which are common on LessWrong but unpopular in the wider world (though I think this one is unusually solid).

Values

I suspect that one reason I disagree with conventional wisdom is that I consider the welfare of individual future people to be nearly as valuable as the welfare of existing people, and consequently the collective welfare of future people to be substantially more important than the welfare of existing people.

In particular, I think the original argument is accurate--and a dominant consideration--if we restrict our attention to people living over the next 100 years, and perhaps even the next 500 years. (Incidentally, most serious intellectuals appear to consider it unreasonable to have a view that discriminates between "Good for the people living over the next 500 years" and "Good for people overall.")

Some people who raise this argument consider the welfare of far future people to be of dubious or reduced moral value. But many people who raise this argument purport to share a long-term, risk-neutral, aggregative perspective. I think that this latter group is making an empirical error, which is what I want to address here.

Incidentally, I hope that in the future the EA crowd adopts a more reasonable compromise between long-term, species-agnostic, risk-neutral utilitarianism, and more normal-looking intuitions that by behaving morally we can collectively make all of our lives much better. It seems most EAs grant that there is a place for selfishness, but often reject conventional behaviors which collectively benefit the modern developed world.

I think that part of the resistance to anti-progress arguments comes from the desire to recover conventional pro-social behavior, without explicit recognition of that goal.

This is a painful disagreement

This is a painful disagreement for me for two reasons.

First, I believe that society at large substantially underestimates the welfare gains from economic and technological progress. Indeed, I think that given an exclusive concern for the next few generations, these should probably be the overwhelming concerns of a would-be altruist. I could talk at length about this view and the errors which I think underlie conventional views, but it would be a digression.

In light of this, I find it extremely unpleasant to find myself on the anti-progress side of almost any argument. First, because I think that someone sensible who learns my position will rationally assume that I am guilty of the most common errors responsible for the position, rather than making a heroically charitable assumption. Second, I have a visceral desire to argue for what I think is right (I hear the call of someone being wrong on the internet), and in most everyday discussions that means arguing for the merits of technological and economic progress.

Second, I think that pursuing plans which result in substantially slower growth comes at a material expense for the people alive today, and especially for their children and grandchildren. For the same reason that I would be uncomfortable hurting those around me for personal advantage, I am uncomfortable hurting those around me in the service of utilitarian ends (a problem much exacerbated by the erosion of the act-omission distinction).

[In fact I mostly do try to be a nice guy; in part this is due to the good effects of not-being-a-jerk (which are often substantial), but it's also largely due to a softening of the aggregate-utilitarian perspective and a decision-theoretic view partially calibrated to reproduce intuitions about what we ought to do.]

Why I disagree

For reference, the argument in question:

Historically, technological, economic, and social progress have been associated with significant gains in quality of life and significant improvement in society's ability to cope with challenges. All else equal, these trends should be expected to continue, and so contributions to technological, economic, and social progress should be considered highly valuable.

[Meta]

This is an instance of the general schema "In the past we have observed an association between X [progress] and Y [goodness]. This suggests that X is generally associated with Y, and in particular that this future instance of X will be associated with Y."

I have no problem with this schema in general, nor with this argument in particular. One way of responding to such an argument is to offer a clear explanation of why X and Y have been associated in the observed cases. This then screens off the evidence about the general association of X with Y; if the clear explanation doesn't predict that X and Y will be associated in the future, this undermines the predicted association.

[Object level]

In this case, it seems clear that greater technological capabilities at time T lead to improved quality of life at time T. This is a very simple observation, robustly supported by the historical record. Moreover, it is also clear that improved technological capabilities at time T lead to improved technological capabilities at time T+1. And I could make similar statements for economic progress, and arguably for social progress.

Once we accept this, we have a clear explanation of why faster progress leads to improvements in quality of life. There is no mysterious correlation to be explained.

So now we might ask: do the same mechanisms suggest that technological progress will be good overall, on aggregate utilitarian grounds?

The answer appears to me to be no.

It seems clear that economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course. That is, the relationship between progress at time T and progress at time T+1 will break down eventually. For example, Robin Hanson points out that if exponential growth continued at 1% of its current rate for 1% of the remaining lifetime of our sun, each atom in our galaxy would need to be about 10^140 times as valuable as modern society. Indeed, unless our current understanding of the laws of physics is badly mistaken, progress will eventually and necessarily slow to an extremely modest rate by any meaningful measure.
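Hanson's point can be sanity-checked with a few lines of arithmetic. The rate and horizon below are illustrative assumptions, not Hanson's exact figures; the point is just that any sustained exponential rate compounds to an absurd total over astronomical timescales:

```python
import math

# Illustrative assumptions, not Hanson's exact figures:
growth_rate = 0.0001   # 0.01% per year -- roughly "1% of the current rate"
years = 50_000_000     # ~1% of the sun's remaining ~5 billion years

# Total growth factor over the period, expressed as a power of ten.
log10_factor = years * math.log10(1 + growth_rate)
print(f"total growth: ~10^{log10_factor:,.0f}")
```

Even with these deliberately tiny inputs, the cumulative factor vastly exceeds any plausible physical bound on value per atom, which is the substance of Hanson's observation.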

So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants--they will live in a world that is "saturated," where progress has run its course and has only very modest further effects.

I think this is sufficient to respond to the original argument: we have seen progress associated with good outcomes, and we have a relatively clear understanding of the mechanism by which that has occurred. We can see pretty clearly that this particular mechanism doesn't have much effect on very long-term outcomes.

[Some responses]

1. Maybe society will encounter problems in the future which will have an effect on the long-term conditions of human society, and its ability to solve those problems will depend on its level of development when they are encountered?

[My first response to all of these considerations is really "Maybe, but you're no longer in the regime of extrapolating from past progress. I don't think that this is suggested by the fact that science has made our lives so much better and cured disease." But to be a good sport I'll answer them here anyway.]

For better or worse, almost all problems with the potential to permanently change human affairs are of our own making. There are natural disasters, periodic asteroid impacts, diseases and great die-offs; there is aging and natural climate change and the gradual burning out of the stars. But compared to human activity all of those events are slow. The risk of extinction from asteroids each year is very small, fast climate change is driven primarily by human activity, and the stars burn down at a glacial pace. 

The ability to permanently alter the future is almost entirely driven by technological progress and the use of existing technologies. With the possible exceptions of anthropogenic climate change and a particularly bad nuclear war, we barely even have the ability to really mess things up today: it appears that almost all of the risk of things going terribly and irrevocably awry lies in our future. Hastening technological progress improves our ability to cope with problems, but it also hastens the arrival of the problems at almost the same rate.

2.  Why should progress continue indefinitely? Maybe there will be progress until 2100, and the level of sophistication in that year will determine the entire future?

This scenario just seems to strain plausibility. Again, almost all ways that progress could plausibly stop don't depend on the calendar year, but are driven by human activities (and presumably some intermediating technological progress).

3. Might faster progress beget more progress and a more functional society, which will be better able to deal with the problems that arise at each fixed level of development?

I think this is an interesting discussion but I don't think it has any plausible claim to a "robust" or "non-speculative" argument, or to be a primary consideration in what outcomes are desirable. In particular, you can't justify this kind of thing with "Progress seems to have been good so far," you need to run a much more sophisticated historical counterfactual, and probably you need to start getting into speculating about causal mechanisms if you actually want the story to be convincing. Note that you need to distinguish wealth-related effects (which don't depend on how fast wealth is accumulated, and consequently don't affect our ability to address problems at each fixed level of development) from rate-of-progress related effects, which seems empirically treacherous (not to mention the greater superficial plausibility of wealth effects).

In particular I might note that technological progress seems to have proceeded essentially continuously for the last 1000 or so years, with periodic setbacks but no apparent risk of stalling or backpedaling (outside of small isolated populations). Without the risk of an indefinite stagnation leading to eventual extinction, it's not really clear why momentum effects would have a positive long-term impact (this seems to be begging the question). It is more clear how people being nicer could help, and I grant that there is some evidence for faster progress leading to niceness, but I think this is definitely in the relatively speculative regime.

[An alternative story]

An alternative story is that while progress has a modest positive effect on long-term welfare, this effect is radically smaller than the observed medium-term effects, and in particular much smaller than differential progress. Magically replacing the world of 1800 with the world of 1900 would make the calendar years 1800-1900 a lot more fun, but in the long run all of the same things happen (just 100 years sooner).

That is, if most problems that people will face are of their own creation, we might be more interested in the relative rate at which people create problems (or acquire the ability to create them) vs. resolve problems (or acquire the ability to resolve them). Such relative rates in progress would be much more important than an overall speedup in technological, economic, and social progress. And moreover, we can't use the fact that X has been good for quality of life historically in order to say anything about which side of the ledger it comes down on.

I'd like to note that this is not an argument about AI or about any particular future scenario. It's an argument that I could have made just as well in 1500 (except insofar as natural phenomena have become even less concerning now than they were in 1500). And the observations since 1500 don't seem to discredit this model at all. Predictions only diverge regarding what happens after quality of life stops increasing from technological progress. 

This might operate at the level of e.g. differential technological development, so that some kinds of technological progress create value while others destroy it; or it might operate at a higher level, so that e.g. the accumulation of wealth destroys value while technological progress creates it (if we thought it was better to be as poor as possible given our level of technological sophistication). Or we might think that population growth is bad while everything else is good, or whatever.

The key message is that when we compare our situation to the situation of last century, we mostly observe the overall benefits of progress, but that on a long-run perspective these overall benefits are likely to be much smaller than the difference between "good stuff" and "bad stuff."

(For a more fleshed out version of this story, see again Nick Beckstead's thesis or this presentation.)

Implications

Why does any of this matter? A few random implications:

  • I suspect that addressing poverty is good for the overall pace of progress, and for the welfare of people over the next 200 years. But I don't see much reason to think that it will make our society better in the very long-run, and I think that the arguments to this effect are quite speculative. For example, I think they are much more speculative than arguments offered for more explicitly future-shaping interventions. The same can be said for many other common-sense interventions.
  • I think that faster AI progress is a huge boon for this generation's welfare. But I think that improving our understanding of where AI is going and how it will develop is probably more important, because that reduces the probability that the development of AI unfolds in an unfavorable way rather than merely accelerating its arrival.
  • I think that improvements in decision-making capabilities or (probably) intelligence are more important than other productivity benefits, e.g. the benefits of automation, and so tend to focus on cognitive enhancement or improvements in collective decision-making rather than the more conventional menu of entrepreneurial projects.
I don't think that any of these are particularly speculative or wild propositions, but often people arguing against investment in differential progress seem to have unreasonably high expectations. For example, I expect understanding-where-AI-is-going to have a much smaller effect on the world than helping-AI-get-there, but don't think that is a sufficient argument against it.
Comments

I think this post contributes something novel, nontrivial, and important, in how EA should relate to economic growth, "Progress Studies," and the like. Especially interesting/cool is how this post entirely predates the field of progress studies.

I think this post has stood the test of time well. 

Thanks for a great post. I agree with many of the points.

I do think that we still need to better understand how short-run changes interact with long-run expectations. In cases where we can effect surprisingly large changes to the world today (and poverty interventions have at least a plausible claim to this), the long-run effects may also be large. While I agree that differential intellectual progress is likely an order of magnitude more important than absolute intellectual progress, the opportunities available to us may in some cases be enough to recover the difference the other way. We should be prepared to look for such opportunities (although we should also be a bit sceptical if we haven't also looked for similarly good opportunities in differential progress).

Research on differential technological development seems much less crowded than the sorts of interventions advocated by proponents of the "progress and prosperity" argument that Paul criticizes. So unless one regards that area of research as singularly intractable, the fact that it scores much more highly on both the importance and neglectedness dimensions should make it a more promising cause overall.

Some current things that are trying to push on "differential progress", if I understand you right:

Does that look right? What else would you add?

(Paul, I think I've heard you talk before about trying to improve institutional quality - do you know of anyone you think is doing this well?)

I think that most good things people do push on differential progress in one way or another, it's just a different standard for evaluation. Those do stand out as things that contribute particularly to differential progress.

I would guess that on average progress in the social sciences is a net benefit in terms of differential progress, while progress in the hard sciences is a net cost in terms of differential progress. I think the bulk of improvement in institutions comes from people working within organizations trying to help them run better (with the positive developments then propagated through society), and then economists and the social sciences in a distant second.

"2. Why should progress continue indefinitely? Maybe there will be progress until 2100, and the level of sophistication in that year will determine the entire future?

This scenario just seems to strain plausibility. Again, almost all ways that progress could plausibly stop don't depend on the calendar year, but are driven by human activities (and presumably some intermediating technological progress)."

I think this is where I disagree with you most: I don't think this strains plausibility at all, and in fact I think the statement as given is basically true up to the year. Examples of events that seem determined by 'calendar year' rather than by the level of progress, but where likelihood of survival seems dependent on the level of progress:

  1. First, second, or later contact with intelligent alien life (especially in a 'they find us' scenario, rather than vice-versa).

  2. Asteroid impacts, super-volcanoes, and other 'natural' disasters.

I'm sure people more imaginative than me can think of others.

You mentioned the latter category explicitly and noted that they are very rare. I agree. But 'very rare' still means 'mathematically guaranteed eventually', so eventually there will be a year X where the level of sophistication does in fact determine the entire future. 1 concerns me more than 2 though, since it seems more of a 'make-or-break' event, and more guaranteed (Aside: I'm always a bit surprised that contact with alien life doesn't seem to turn up much in discussions of the very-far-future. It's normally my primary consideration when thinking that far ahead, since I view it as incredibly important to what the long-run looks like and incredibly likely given sufficient time).

This is obviously related to a broader disagreement, which is whether the threats and constraints on humanity are primarily external (disease, cosmic issues, aliens, maybe even resource issues) or internal (AI, nukes, engineered microbes). I lean strongly towards external.

Against an unknown external threat, speeding up broad-based progress is a sensible response. It has so far been the case that the primary things hurting humanity have been external, so up to now broad-based progress has been very powerful. You don't appear to disagree with this. Your actual disagreement is, I think, contained in the sentence "For better or worse, almost all problems with the potential to permanently change human affairs are of our own making."

Arguing against that would take me somewhat out of the scope of this thread and make this comment even longer than it already is. But it seems clear to me that that assertion is not one that many of those you are arguing against would agree with, and without it, I don't think the rest of your argument holds.

My estimates for [1] and [2] together are less than 0.01% per year (for [1], they are very much lower!). So the quantitative effect of speeding up progress, via these channels, is quite small. You would have to be very pessimistic about the plausible trajectory-changing impacts of available interventions before such a small effect was relevant.

I can believe that most of the problems faced by people today are external, and indeed this is related to why I find this disagreement such a painful one. But why do you think that the long-term constraints are external? I've never seen a really plausible quantitative argument for this. Resource limitations are the closest, though (1) I'm quite skeptical as a very long-term factor, but this comes down mostly to views on the likely rate of technological progress over the next centuries, and (2) describing those as "external" rather than "a consequence of progress" is already pushing it.

I think that the "man-made" troubles have already easily surpassed the natural worries, via the risk of nuclear annihilation and anthropogenic climate change. Do you disagree with this? What natural problems do you think might be competitive? The unknown unknown?

Edited for TLDR:

TLDR; We can robustly increase the speed of progress. We can also establish that, contrary to the argument above, increasing the speed of progress has non-trivial very-long-run value. I don't see how we can robustly change the direction of progress. I also don't see how we could robustly know that the direction we move progress in is valuable, even if we could robustly change the direction of progress.

At the risk of stating something we both know, 0.01% per year is not at all 'small' in this context. My estimate would actually be lower, and I still think my argument holds. That's because I think progress will continue for many thousands of years. I agree with you that eventually it has to stop or dwindle into insignificance. But all I need to establish is that there is a non-trivial chance that it does not stop before one of the make-or-break events. If progress is currently set to continue for at least 1000 more years and there is indeed a 0.01% chance per year, there is at least a 10% chance of that scenario (the one that you said 'strains plausibility'). 1000 years just isn't a very long time. And once I have that, speeding up progress becomes very valuable again.
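As a quick check of the arithmetic in this comment (assuming the 0.01% annual risk is independent across years):

```python
# Chance of at least one make-or-break event over the whole period.
p_per_year = 0.0001   # 0.01% per year
years = 1000
p_any = 1 - (1 - p_per_year) ** years
print(f"{p_any:.1%}")  # roughly 9.5%, consistent with the ~10% figure above
```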

"You would have to be very pessimistic about the plausible trajectory-changing impacts of available interventions before such a small effect was relevant."

I am actually very pessimistic about these, because it's not clear to me that we have any examples of trying to do more of this on the margin working. We do have examples of attempts to speed up progress on the margin working.

Assuming that by 'anthropogenic climate change' you essentially mean rising CO2 levels, I don't actually rate climate change as an x-risk, so it's not obvious to me that it belongs in this discussion. I rate it as something which could cause a great deal of suffering and, as a side-effect of that, slow down progress, but not as something that brings about the 'end of the world as we know it'. If it is an x-risk, then in a sense my response is 'well, that's too bad', since I also view it as inevitable; all the evidence I'm aware of suggests that we collectively missed the boat on this one some time ago. In fact, because we've missed the boat already, the best thing to do right now to combat it is very likely to be to speed up progress so as to be better placed to deal with the problems when they come.

In the worst case, you personally saving money and spending it later (in the worst case, to hasten progress at a time when the annual risk of doom has increased!) seems very likely to beat 0.01%, unless you have pretty confident views about the future.

I think research on improving institutional quality, human cognition, and human decision-making also quite easily crosses this bar, has had successes in the past, and has opportunities for more work on the margin. I've written about why I think these things would constitute positive changes. But it's also worth pointing out that if you think there is a 0.01% / year chance of doom now, then improvements in decision-making can probably be justified just by short-term impacts on the probability of handling a catastrophe soon.

How long progress goes on for before stopping seems irrelevant to its value, according to the model you described. The value of an extra year of progress is always the per annum doom risk. Suppose that after some number of years T the per annum doom probability drops to 0. Then speeding up progress by 1 year reduces that number to T-1, reducing the cumulative probability of doom by the per annum doom probability. And this conclusion is unchanged if the doom probability continuously decreases rather than dropping to 0 all at once, or if there are many different kinds of doom, or whatever. It seems to be an extremely robust conclusion. Another way of seeing this is that speeding up progress by 1 year is equivalent to just pausing the natural doom-generating processes for a year, so naturally the goodness of a year of progress is equal to the badness of the doom-generating processes.
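The claim in this comment can be verified with a small model (the rate and horizon are illustrative assumptions): a constant per-annum doom risk r that lasts T years and then drops to zero.

```python
# Constant per-annum doom risk r for T years, then zero.
r, T = 0.0001, 1000

p_doom = 1 - (1 - r) ** T               # baseline cumulative doom probability
p_doom_faster = 1 - (1 - r) ** (T - 1)  # one year of progress shaved off
saving = p_doom - p_doom_faster

# The reduction equals r * (1 - r)**(T - 1): the per-annum risk, discounted
# only by the probability of surviving to the end of the risky period.
print(saving)
```

As the comment argues, the answer is close to the per-annum doom risk r whenever the cumulative risk is modest, largely independent of how long progress continues.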

If you believe a 0.01% / year chance of doom by natural catastrophes, addressing response capabilities to those catastrophes in particular generally seems like it is going to easily dominate faster progress. On your model reducing our probability of doom from natural disasters by 1% over the next year is going to be comparable to increasing the overall rate of progress by 1% over the next year, and given the relative spending on those problems it would be surprising if the latter was cost-effective. I can imagine such a surprising outcome for some issues like encountering aliens (where it's really not clear what we would do), but not for more plausible problems like asteroids or volcanos (since those act by climate disruptions, and there are general interventions to improve society's robustness to massive climate disruptions).

Whether we've missed the boat on climate change or not, it would undermine your claim that historically the main problems have been nature-made and not man-made (which I find implausible), which you were invoking to justify the prediction that the same will hold true in the future. I'm also willing to believe the risk of extremely severe outcomes is not very large, but you don't have to be very large to beat 0.01% / year.

One reason I'm happy ignoring aliens is that the timing would have to work out very precisely for aliens to first visit us during any particular 10k year period given the overall timescales involved of at least hundreds of millions of years and probably billions. There are further reasons I discount this scenario, but the timing thing alone seems sufficient (modulo simulation-like hypotheses where the aliens periodically check up on Earth to see if it has advanced life which they should exterminate, or some other weird thing).

"In the worst case, you personally saving money and spending it later (in the worst case, to hasten progress at a time when the annual risk of doom has increased!) seems very likely to beat 0.01%, unless you have pretty confident views about the future."

I don't disagree on this point, except that I think there are better ways to maximise 'resources available to avert doom in year 20xx' than simply saving money and gaining interest.

"How long progress goes on for before stopping seems irrelevant to its value, according to the model you described. The value of an extra year of progress is always the per annum doom risk."

I basically agree, and I should have made that explicit myself. I only invoked specific numbers to highlight that 0.01% annual doom risk is actually pretty significant once we're working on the relevant timescales, and therefore why I think it is plausible/likely that there will indeed be a year, one day, where the level of sophistication determines the entire future.

"Whether we've missed the boat on climate change or not, it would undermine your claim that historically the main problems have been nature-made and not man-made (which I find implausible), which you were invoking to justify the prediction that the same will hold true in the future."

That wasn't the prediction I was trying to make at all, though on re-reading my post I can see why you might have thought I was. But the negation of 'almost all problems...are of our own making' is not 'most problems are not of our own making'. There's a large gap in the middle there, and indeed it's not clear to me which will dominate in the future. I think external has dominated historically and marginally think it still dominates now, but what I'm much more convinced of is that we have good methods to attack it.

In other words, external doesn't need to be much bigger than internal in the future to be the better thing to work on, all it needs to be is (a) non-trivial and (b) more tractable.

The rest of your post is suggesting specific alternative interventions. I'm open to the possibility that there is some specific intervention that is both more targeted and that is currently being overlooked. I think that conclusion is best reached by considering interventions or classes of interventions one at a time. But as a prior, it's not obvious to me that this would be the case.

And on that note, do you think that the above was the case in 1900? 1800? Has it always been the case that focusing on mitigating a particular category of risk is better than focusing on general progress? Or have we passed some kind of tipping point which now makes it so?

"And on that note, do you think that the above was the case in 1900? 1800? Has it always been the case that focusing on mitigating a particular category of risk is better than focusing on general progress? Or have we passed some kind of tipping point which now makes it so?"

At those dates I think focusing on improving institutional decision-making would almost certainly beat trying to mitigate specific risks, and might well also beat focusing on general progress.

What would be an example? It's quite possible I don't disagree on this, because it's very opaque to me what 'improving institutional decision making' would mean in practice.

Paul gives some examples of things he thinks are in this category today.

If we go back to these earlier dates, I guess I'd think of working to secure international cooperation, and perhaps to establish better practices in things like governments and legal systems.

I think it's hard to find easy concrete things in this category, as they tend to just get done, but with a bit of work progress is possible.

You talk about progress today being of macroscopic relevance if we get an exogenous make-or-break event in the period where progress is continuing. I think it should really be if we get such an event in the period where progress is continuing exponentially. If we've moved into a phase of polynomial growth (plausible for instance if our growth is coming from spreading out spatially) then it seems less valuable. I'm relying here on a view that our (subjective) chances of dealing with such events scale with the logarithm of our resources. I don't think that this changes your qualitative point.

I do think that endogenous risk over the next couple of centuries is of at least comparable size to exogenous risk over the period before exponential growth. I think that increasing the resources devoted to dealing with endogenous risk by 1% will reduce these risks by a similar amount as increasing prosperity by 1% will reduce long-term exogenous risks. And I think it's probably easier to get that 1% increase in the first case than in the second, in large part because there are a lot fewer people already trying to do that.

(epistemic status - A first draft, probably needs more thought and reflection in the future)

I don't necessarily disagree with your argument - but I am much more likely to endorse short-term interventions that address poverty, and short-term technological improvements which reduce suffering, for a few reasons.

I consider current suffering extremely negative and (comparatively) easy to fix. I find serious suffering - of the kind a lot of EA interventions try to prevent - absolutely intolerable: my disgust and fairness foundations (as Haidt would term them) react on a deeply primal level to needless suffering. Improving the speed of progress in attacking this suffering seems a big deal to me because, with some exceptions (the defeat of death and ageing), I consider the lives of those of us lucky enough to be born in WEIRD nations to already be pretty damn good - and the welfare difference between me and a child growing up in Mogadishu may even be incommensurably greater than the difference between me and long-run future saturated people. So I am much more concerned with current suffering, because current suffering is big, and future suffering is likely, in the very long run, to be either catastrophic or very small.

The logical follow-up to that point is that I am interested in directional changes (as you are) that help ensure we end up at "very small" rather than catastrophic - but I am very sceptical about our ability to measure and show which current research is effective at making such changes.

I may be wrong about that scepticism, but even if I am, I think that global suffering has a big negative effect on the likelihood of people investing in decision-making or directional investments - it is hard to plan your mortgage payments when you have a broken leg. Given how comparatively simple it seems to me to fix our collective broken legs, I think even those people who want us to plan our mortgage payments should consider it a high priority to get a splint now, so that we can actually plan those payments without constantly worrying or facing searing pain. On a similar point, I disagree with you that "faster progress leading to niceness" is very speculative. Steven Pinker's excellent "The Better Angels of Our Nature" seems a good reference for this - I think it establishes fairly clearly that moral and technological progress has made us less warlike and less violent as a species.

The argument for differential progress has been made before by Bostrom a few times and Beckstead as well.

http://www.existential-risk.org/figure5.png

I can see six strategies which may prove fruitful to nourish differential technological progress:

1) Increasing safety-savvy insight - e.g. helping with the AI control problem, perfecting our understanding of moral psychology and of cultural evolution.

2) Decreasing rates of progress in dangerous areas - e.g. decelerating the brain emulation project, passing legislation against AI progress, etc.

3) Diminishing the incentives which stimulate progress in undesirable areas - e.g. if aging were cured, chronologically old individuals would no longer have an incentive to accelerate the intelligence explosion.

4) Using a Scorched Earth strategy - e.g. if surpassing a threshold of average wealth would be risky, perhaps because some individuals would have enough power to start a world-wrecking cascade, then burning resources now would guarantee that the future will be safer by being poorer.

5) Using a differentiated Scorched Earth strategy - e.g. try to predict which assets/valuables will be in the hands of those individuals who can start world-destroying cascades in the future, and destroy non-liquid resources from that pool now. Whether non-liquid because rare, like enriched uranium, or non-liquid because hard to exchange, like private islands or old churches, the crucial consideration is whether destroying these resources now will counterfactually prevent similar resources from being used at the time of their maximum destructive potential by these agents.

6) Finding whether there are more strategies not considered among the previous 5.

Which of these, if any, do you see as the lowest-hanging fruit for EAs, and why?

1) Seems like a good start, as it's likely to draw together people with a common concern, but it's very unlikely to stop things. Research capabilities are dispersed, and if you're saying that the stepping-on-each-other's-toes effect is going to outweigh the standing-on-the-shoulders-of-giants effect, then researchers in different parts of world society with different agendas will want to get going with it. It would need enough of the right people to subscribe to this for long enough to prevent the technology. Unlikely. But it might buy time.

Differentiated scorched earth isn't different in consequence from 2) - both are kinds of regulation, one official and centralised, one with the possibility of being unofficial and dispersed. The drawback of the scorched-earth strategy is that it's irreversible. The drawback of the regulatory strategy is that it's gameable.

Curing aging might itself be one of the threats to humanity as we know it: mortality = vulnerability = dependence on others = good society? Strategies to reduce incentives are also not dependable if they rely on new tech fixes, as tech fixes are very hard to predict or encourage, and they may carry their own unknown risks even if we could pull them off (so at some level you could be introducing more risk than you're controlling, and it's hard to figure out which is which).

I personally think a harm-control strategy - a centralised regulatory and intelligence function - is our best bet for differentiated progress. This also comes with the side benefit of forcing debate about whether norms in scientific research align, and of bringing the public in on it; the public is usually in favour of regulating against scary risks even when those risks are too small to warrant it (unless they're framed as protecting us from other human beings).

Hello Paul. I think I'm missing something crucial in your argument. You are saying that investing in technological progress is worse than... what? I understand that there are black-swan technologies which create x-risk, and therefore we want to invest in safety research rather than in speeding up the technology itself. Are you saying anything besides that? In a universe without risk, would you still be against prioritizing progress?

Why are the long-term asymptotics crucial to the question? Yes, by accelerating progress you are "merely" translating everything in time. So what? The result of skipping from 1800 to 1900 is an immense utility gain whatever the long-term scenario.

I think the point is: improving general (intellectual) progress is significantly less effective than improving differential (=specific) intellectual progress. Am I right?

I'm relatively new to the area of existential risk, and come more from the development economics/ global health side of things. I find this argument really interesting, and have a few questions and comments.

(1) Economic development in emerging economies generally leads to a demographic transition where birth rates eventually stabilize at or below the replacement rate. It seems to me that if this transition were hastened, it could lead to a smaller steady state population in the long term and lower long term resource consumption (even if consumption is accelerated in the near term). I'm not sure about the moral implications of eliminating people from a theoretical future, but it seems to me it would be preferable to have a smaller population with a better quality of life, than a large population with a poor quality of life.

(2) You write: "With the possible exceptions of anthropogenic climate change and a particularly bad nuclear war, we barely even have the ability to really mess things up today: it appears that almost all of the risk of things going terribly and irrevocably awry lies in our future. Hastening technological progress improves our ability to cope with problems, but it also hastens the arrival of the problems at almost the same rate." Do you know of any research that has tried to quantify the relative rates/magnitudes of problems generated vs. problems solved? Also, large quality-of-life increases in developing economies could come from applying existing technologies, where the risks, as you note, are relatively small/pretty well understood. Would you support the use of existing technology for development?

(3) Has anyone made an argument for existential risk mitigation based on recent history? For example, trying to look forward with the limited perspective of someone in 1900. Obviously this would be very speculative, but I think it could help make the concepts you are discussing more concrete to outsiders. Also, doing historical research into warnings about, say, the dangers of nuclear physics could help us understand the challenges of practically implementing policies that are theoretically correct.

I generally agree that more resources need to be devoted to existential risk mitigation, but I don't have a very good idea of where the bottlenecks are -- philosophy, theory, or practical implementation?

Firstly, I think this is the best post I've read in a while.

However, I notice I am confused. It seems that much of your argument is basically "the future is very big; as such, we should invest now rather than consuming, because

  • ROI > 0
  • Discount Rate = 0

...where in this case, investing in basically anything other than x-risk counts as consumption.

However, this argument pulls roughly equally against many things. (Uniform) economic growth is not very much worse than deworming, or animal welfare, or civil liberties - they're all basically irrelevant compared to the astronomic waste. So I'm not sure of the reason for your emphasis in this article.

Of course, this has clear implications for the virtues of non-uniform economic growth.

I agree, but there's an argument in favour of progress that you don't mention. If we magically replace say 2015 with 2016, then we get one fewer year of 21st century conditions, and one more year of saturated-world conditions. If we think that an incredibly valuable saturated world is likely, then an extra year of it instead of the 21st century is well worth it.

Or a light-cone that started a year earlier, and thus permanently one extra light year in radius.

Yes, this comes down to empirics: the fraction of extra resources we can reach by starting earlier is very small, so most of our long-term impact comes from nudges to the probability of having a long future at all.

It seems like the empirical question could have come out the other way, and then progress would be more important.
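A back-of-the-envelope sketch of that empirical point (assuming, hypothetically, that reachable resources scale with the cube of the light cone's radius, i.e. with elapsed expansion time): after a billion years of expansion, a one-year head start adds only about three parts in a billion.

```python
# Hypothetical model: a civilization expands at a fixed speed for
# T years, so reachable volume (and hence resources) scales as T**3.
# Question: what fraction of extra resources does a one-year head
# start buy?

def extra_resource_fraction(T_years: float) -> float:
    """Fractional gain in reachable volume from starting one year earlier."""
    return ((T_years + 1) ** 3 - T_years ** 3) / T_years ** 3

# Over a billion years the gain is roughly 3 / T:
print(extra_resource_fraction(1e9))  # ≈ 3e-9
```

Under this toy model, any intervention that shifts the probability of reaching the long future at all by more than a few parts per billion beats a year of pure acceleration.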

[anonymous]

To bring up population ethics again, how does taking a long-term view affect the desirability/cost-effectiveness of population-changing interventions? Because there's little negative feedback between population size and population growth in today's world (it's not Malthusian), small changes in population will last a long time and result in a lot of extra lives. For example, if there's slight negative feedback and population changes decay by about 10% a generation, an extra person today leads eventually to ~10 extra people.

I find this a bit counterintuitive (but probably not wrong), since it means that considering a long time horizon dramatically increases the benefit of saving lives, compared to poverty reduction or treatment of non-fatal diseases. In the extreme case of a total utilitarian who values saving and creating lives symmetrically, they would value saving a life in the tens of millions of dollars instead of a few million dollars (again assuming a ~10% "decay rate").
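The "~10 extra people" figure is just the sum of a geometric series - a minimal sketch, using the commenter's assumed 10% per-generation decay rate:

```python
# If an extra person's effect on population size decays by 10% per
# generation, generation n contains 0.9**n extra people; summing the
# series gives the total extra lives across all generations.

decay = 0.10
extra_lives = sum((1 - decay) ** n for n in range(1000))  # converges quickly
print(round(extra_lives, 2))  # 10.0 — matches the closed form 1 / decay
```

So the multiplier on saving a life is simply the reciprocal of the assumed decay rate, which is why the estimate is so sensitive to that assumption.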

[anonymous]

I largely agree with your argument. The most useful way I've heard this explained is that affecting the direction of progress is greatly more important than affecting the speed of progress. However, I think there are some situations where affecting speed is the most effective way to affect direction, like:

  • When the rate of progress in one area affects the directional outcome of another area (e.g. increasing AI safety technology more quickly improves the expected directional outcome of AI)

  • When our current situation is very risky (e.g. if you think while humans remain solely on earth, we're really likely to destroy ourselves, but you don't want that to happen, so you try to get us to colonize other planets as soon as possible, decreasing our likelihood of extinction)

Do you mean 'affecting the speed of a subfield of tech is the most effective way to affect the direction of movement of the centre of gravity of our tech capabilities'? If so, I agree.

Speeding up a particular tech counts as differential tech development.

[anonymous]

That sounds like what I mean, although I'm not quite sure what you mean by 'centre of gravity' in this context. But yes, this is "differential tech development" through "speeding up a particular tech." So direction is still the goal (just reached less directly).

Another response to this is Nick Bostrom's astronomical waste argument.

tl;dr: The resources in our light cone will decrease even if we don't make use of them. It's quite plausible that even a few months of a massive, highly advanced civilization could have more moral worth than the total moral worth of the next 500 years of human civilization. So accelerating development by even a small amount, allowing an eventual advanced civilization to be slightly larger and last slightly longer, is still massively important relative to other non-x-risk causes.

I discussed some quantitative estimates of this here, with a general argument for why it would be small in light of model uncertainty. Overall it seems at least a few orders of magnitude smaller than other issues that favor faster progress.

I can't see how the content of your essay relates to the notion of this extreme altruism, which is by definition a practical concept whose goal is to do more now rather than in 10 years, let alone 100 years. In the last 100 years millions of pages have been written by academics and lay people, yet only 0.01% of that "mental sweat" has had constructive utility for humankind.

Hi Ilya, I think the reason that Paul is discussing this is because he values everyone equally, regardless of when they exist. And thus he is trying to figure out what actions people should take now in order to maximise the impact he can have on everyone in the world at all times. I agree with your sentiment that much academic work has had little to no utility to humankind (the median published paper is cited once apparently), however there are some questions such as "how can I do as much good as possible" that are significantly understudied, and so I think Paul is contributing there. Additionally I know many people who are pursuing technology entrepreneurship and so articles like this one will help them choose which areas they should be working in.