
In short: some expect that we could not meaningfully slow or halt the development of A(G)I even if we expect extreme risks. Yet there is a surprisingly diverse historical track record of technological delay and restraint, even for strategically promising technologies that were seen as 'obvious' and near-inevitable in their time. Epistemic hurdles around studying 'undeployed' technologies make it likely that we underestimate the frequency of restraint decisions, or misinterpret their causes. From an outside view, this should lead us to be slightly more optimistic about the viability of restraint for future technologies. This analysis does not show that restraint for AGI is currently desirable; that it would be easy; that it would be a wise strategy (given its consequences); or that it is an optimal or competitive approach relative to other available AI governance strategies. However, possible conditions for, and pathways towards, restraint should be explored in greater detail, as part of a comprehensive toolset that offers strategic clarity regarding all options on the table.

Disclaimers: 

  • Background: this short essay was published on the legal academic forum Verfassungsblog, as part of a symposium debate sequence on 'Longtermism and the Law' (see summary Twitter thread here). As such, it approaches the topic from the perspective of 'legal longtermism', but many points apply to longtermist (or AI risk/governance) debates generally. This post has been lightly edited.
  • Epistemic status: this is an initial primer on an in-progress research project, and many of the case studies will need further analysis;
  • My take: it is not yet my view that restraint is warranted as a strategy, nor that it currently compares well to other work. However, I do expect it should be explored amongst other avenues;
  • Feedback & future work: I will cover this in greater detail, including a discussion of advantages and risks of pursuing restraint, in an upcoming profile of the 'Containing' approach, as part of my 'Strategic Perspectives on Long-term AI Governance' sequence. I welcome feedback.

Abstract: If the development of certain technologies, such as advanced, unaligned A(G)I, would be as dangerous as some have suggested, a longtermist (legal) perspective might advocate a strategy of technological delay—or even restraint—to avoid a default outcome of catastrophe. To many, restraint–a decision to withhold indefinitely from the development, or at least the deployment, of the technology–might look implausible. However, history offers a surprising array of cases where strategically promising technologies were delayed, abandoned, or left unbuilt (see in-progress list), even though many at the time perceived their development as inevitable. They range from radiological and weather weapons to atomic planes, from dozens of voluntarily cancelled state nuclear weapons programs to a Soviet 'internet', and many more. It is easy to miss these cases, or to misinterpret their underlying causes, in ways that lead us to be too pessimistic about future prospects for restraint. That does not mean that restraint for future technologies like advanced AI will be easy, or a wise strategy. Yet investigating when and why restraint might be needed, where it is viable, and how legal interventions could contribute to achieving and maintaining it, should be a key pillar within a longtermist legal research portfolio.

 

The question of restraint around AI development

In a famous 2000 essay, entitled ‘Why the Future Doesn’t Need Us’, computer scientist Bill Joy grimly reflected on the range of new technological threats that could await us in the 21st century, expressing early concerns over the potentially extreme risks of emerging technologies such as artificial intelligence (AI). Drawing a link to the 20th-century history of arms control and non-proliferation around nuclear and biological weapons, Joy argued that, since shielding against these future technological threats was not viable, “the only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous.” While Joy’s account reflected an early and in some ways outdated understanding of the particular technological risks that could threaten the long-term trajectory of civilization, the underlying question he posed remains an important and neglected one: if protecting our long-term future required us to significantly delay or even relinquish the development of certain powerful but risky technologies, would this be viable?

Whether strategies of technological delay or even restraint have merit is a crucial consideration for anyone concerned about society’s relation to technology, in both the near term and the long term. It is particularly key for the longtermist project of legal prioritization, in determining how the law should aim to shape the development of new technologies.

The tension is most acute in discussions around the potential future impacts of increasingly capable and potentially transformative AI. While there are many models of what this could look like–such as ‘High-Level Machine Intelligence’, ‘Artificial General Intelligence’, ‘Comprehensive AI Services’, or a ‘Process for Automating Scientific and Technological Advancement’–there is general agreement that, if viable, these advanced AI technologies could bring immense societal benefits, but also potentially extreme risks if they are not properly ‘aligned’ with human values.

Given these risks, it is relevant to ask whether the world ought to continue pursuing the development of advanced AI, or whether it would be better for AI progress to be slowed–or even halted–until we have overwhelming evidence that someone can safely ‘align’ the technology and keep it under control. However, even if we conclude that restraint is needed, some have been skeptical that this would be viable, given the potentially high strategic stakes around advanced AI development and the mixed track record of non-proliferation agreements at constraining competition between major states. But should we be so pessimistic? To understand the viability of technological restraint, it helps to examine when such restraint has been possible in the past. This can help us answer the vital question of what legal, institutional, or policy interventions could increase the prospects of restraint being achieved in the case of AI, should that be necessary.

The ethics of restraint

Longtermist philosophy has had a complex ethical relationship with the prospect of advanced AI. On the one hand, longtermist thinkers have been amongst the earliest proponents of the idea that this technology could be existentially risky.

On the other hand, however, longtermist perspectives have generally favoured the idea that advanced AI systems not only will, but should, be developed eventually. This is because, they argue, safely aligned AI systems could hypothetically be a tremendous force for long-term good: they could bring enormous scientific advancement and economic wealth to the world (e.g. what Sam Altman has called ‘Moore’s Law for Everything’). Aligned AI could also be critical to our long-term survival: humanity today, and in the future, faces many other global catastrophic risks, and successfully addressing these might require the support of advanced AI systems capable of unlocking scientific problems or seeing solutions that we cannot. Moreover, under some moral frameworks, AI systems could themselves have moral standing or value.

Nonetheless, there are conditions under which even a longtermist perspective might advocate a strategy of aiming to relinquish (certain forms of) AI permanently. This could be because we expect that safe alignment of advanced AI systems is astronomically unlikely. Indeed, the more pessimistic we are, the more we should prefer such a strategy of ‘containment’. But if we assume that technological restraint is indeed desirable, would it be viable? While there is an emerging body of longtermist work exploring the viability of strategies of ‘differential technological development’ (i.e. speeding up defensive and safeguarding technologies, while slowing down hazardous capabilities), proposals to outright contain or avert advanced AI systems have not previously received much attention in the debate. This is often because of a perception that military-economic competitive pressures can create strong incentives to develop powerful technologies even at some risk—or that, even if we were to convince some actors to delay, this would just yield the field to other, less scrupulous AI developers, who are less likely to be responsible or to use the technology for good. In this view, both unilateral and coordinated restraint by key AI actors (e.g. private companies or states) are unlikely: the former, because of the strong advantages AI technology would provide; the latter, given the difficulty of negotiating, monitoring, and enforcing multilateral global bans around high-stakes, dual-use, and fast-moving technologies such as AI.

The history of restraint

However, this may be too pessimistic. In fact, surveying the historical track record of technological innovation yields a long list of candidate cases of technological restraint. These include cases where development was delayed by decades or even centuries, seemingly because:

  1. low-hanging fruit went unperceived (e.g. the flying shuttle was only introduced to weaving in 1733, after some five thousand years of the craft);
  2. inventors and investors were uncertain about the legal status of the technology (e.g. mechanized sawmills went unpursued in England for centuries on the basis of a widespread–and mistaken–belief that Parliament had outlawed them);
  3. local cultural preferences shifted (e.g. electric cars held an equal share of the US market in the 1900s, but soon declined in use because cars with internal combustion engines came to be seen as more masculine and appealed to an aspiration for touring);
  4. top-down governmental policies or choices intervened (e.g. the roughly 250-year firearms ban under the Tokugawa Shogunate);
  5. narrow political or bureaucratic infighting caused delays (e.g. in the Indian nuclear weapons program and the early Cold War Soviet ICBM force); or simply because
  6. earlier technological choices funnelled industrial attention and resources away from particular scientific paths (e.g. early neural network-based AI approaches may have been delayed by decades because a ‘hardware lottery’ locked in early attention and funding for symbolic AI approaches instead).

There are also cases of the deliberate non-pursuit or abandonment of envisioned technologies, for a range of reasons: 

  1. concerns over risks and treaty commitments led the US to refrain from pursuing a wide range of proposals for exotic nuclear delivery systems, nuclear ramjet-powered cruise missiles, and advanced space-based missile defence systems such as Projects Excalibur, Zenith Star, Casaba-Howitzer, and Brilliant Pebbles;
  2. treaties led to the end of Vietnam-era weather control programs;
  3. Edward Teller’s plans for ‘continent destroyer’-scale nuclear weapons with a 10-gigaton yield (670,000 times Hiroshima) were left on the drawing board;
  4. the majority of the ~31-38 nuclear weapons programs undertaken or considered were abandoned;
  5. the US, UK, and Soviet Union abandoned ‘death dust’ radiological weapons, even after an attempted treaty ban had failed;
  6. the Soviet leadership pulled the plug on OGAS, an early national ‘internet’;
  7. diverse bioweapon programs were slowed or limited in their efficacy;
  8. the US abandoned Project West Ford, an ‘artificial ionosphere’ of half a billion copper needles put into orbit to safeguard its radio communications;
  9. French hovertrains fell prey to state elites’ conflicts of interest;
  10. nuclear-powered aircraft were abandoned over risk concerns and costs;
  11. in the early ‘90s DARPA axed its 10-year Strategic Computing Initiative to develop ‘machines that think’, instead redirecting funding towards more applied computing uses such as modelling nuclear explosions.

These are just a small sample of a much larger group of possible candidates (in-progress overview). It is key to note that this survey is likely an underestimate: these are only the cases that have been publicly documented. There may be far more instances of restraint where a technology was considered and abandoned but we have no clear records to draw on; cases where the absence of a technology's widespread application today gives us no reason even to realize that it was ever meaningfully on the table; or cases where we (falsely) believe that the decision to abandon pursuit simply reflected an accurate assessment that the technology was unviable.

The epistemology of restraint

Studying cases of past restraint highlights an epistemic challenge that we should keep in mind when considering the future viability of restraint over AI or other powerful technologies: the way we retrospectively understand and interpret the history of technological development (and extrapolate from it to AI) is itself shaped by epistemic hurdles.

For instance, the appearance of dangerous new weapons and the visceral failures of arms control loom particularly large in historical memory; in contrast, we fail to see proposed-but-unbuilt technologies, which are more likely to end up as obscure footnotes. This makes us prone to underestimate the historical frequency of technological restraint, or to misinterpret the motives for restraint. Often, a case where a state decided against pursuing a strategically pivotal technology because of cost, risk, or moral concerns is misread as one where ‘the technology probably was never viable, and they recognized it–but they would have raced for it if they thought there was a chance’.

Of course, it can be difficult to tease out the exact rationales for restraint (and so to understand whether and how these would apply to AI). In some cases, the apparent reason actors pulled the plug does indeed appear to have been a perception (whether or not accurate) that a technology was not viable or would be too costly, or a view that it would be redundant alongside other technologies. In other cases, however, the driving force behind restraint appears to have been political instability, institutional infighting, or diplomatic haggling. Significantly, in a few cases, restraint appears to have reflected genuine normative concerns over potential risks, treaty commitments, international standing, or public pressure. This matters, because it shows that perceived ‘scientific unviability’ is not the only barrier to a technology’s development; rather, it highlights a range of potential intervention points or levers for legal and governance tools in the future. Ultimately, the key point is that while the track record of restraint is imperfect, it is still better than would be expected from the perspective of rational strategic interest–and better than was often expected by people living at the time. From an outside view, it is important to understand the epistemic lenses that skew our assessment of the future viability of restraint or coordination around other technologies (such as advanced AI), in the same way that we should reckon with ‘anthropic shadow’ arguments around extinction risks.

The strategy of AI restraint

In sum, when assessing the viability of restraint around advanced AI, it is important to complement our inside-view assessment with an outside-view understanding of technological history, one that does not only count successful international non-proliferation initiatives, but also considers cases where domestic scientific, political, or legal drivers contributed to restraint.

Of course, to say that we should temper our pessimism does not mean that we should be highly optimistic about technological restraint for AI. For one, there are technological characteristics that appear to have contributed to past unilateral restraint, including long assembly roadmaps with uncertain payoffs and no profitable intermediate use cases; strong, single institutional ‘ownership’ of technology streams; or public aversion. These features appear absent or weaker for AI, making restraint less likely. Similarly, AI technologies do not share many of the features that have enabled coordinated bans on (weapons) technologies, making coordinated restraint difficult.

As such, the history of restraint does not provide a blueprint for a reliable policy path. Still, it highlights interventions that may help induce unexpected unilateral or coordinated restraint around advanced AI. Some of these (such as cases of regime change) are out of scope for legal approaches. Yet, legal scholars concerned about the long-term governance of AI can and should draw on the emerging field of ‘technology law’, to explore the landscape of technological restraint. They can do so not only by focusing on norms or multilateral treaties under international law, but also through interventions that frame policymaker perceptions of AI, alter inter-institutional interests and dynamics, or reroute investments in underlying hardware bases, locking in ‘hardware lotteries’.

Ultimately, technological restraint may not (yet) be desirable, and the prospects for any one of these avenues may remain poor. Yet restraint provides a key backup strategy in the longtermist portfolio, should the slowing or containment of AI ever prove necessary to safeguarding the long-term future.

 

Acknowledgements:

This essay reflects work-in-progress, and has built on useful comments and feedback from many. I especially thank Tom Hobson, Di Cooke, Sam Clarke, Otto Barten, Michael Aird, Ashwin Acharya, Luke Kemp, Cecil Abungu, Shin-Shin Hua, Haydn Belfield, and Marco Almada for insightful comments, pushback, and critiques. The positions expressed in this paper do not necessarily represent their views.

Comments

Many thanks for writing this essay. The history of technological restraint is fascinating. I never knew that Edward Teller wanted to design a 10-gigaton bomb.

Something I have noticed in history is that advocates of technological restraint are often labelled luddites or luddite supporters. Here's an example from 2016:

Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award

After a month-long public vote, the Information Technology and Innovation Foundation (ITIF) today announced that it has given its annual Luddite Award to a loose coalition of scientists and luminaries who stirred fear and hysteria in 2015 by raising alarms that artificial intelligence (AI) could spell doom for humanity. ITIF argued that such alarmism distracts attention from the enormous benefits that AI can offer society—and, worse, that unnecessary panic could forestall progress on AI by discouraging more robust public and private investment.

“It is deeply unfortunate that luminaries such as Elon Musk and Stephen Hawking have contributed to feverish hand-wringing about a looming artificial intelligence apocalypse,” said ITIF President Robert D. Atkinson. “Do we think either of them personally are Luddites? No, of course not. They are pioneers of science and technology. But they and others have done a disservice to the public—and have unquestionably given aid and comfort to an increasingly pervasive neo-Luddite impulse in society today—by demonizing AI in the popular imagination.”

“If we want to continue increasing productivity, creating jobs, and increasing wages, then we should be accelerating AI development, not raising fears about its destructive potential,” Atkinson said. “Raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption. The obvious irony here is that it is hard to think of anyone who invests as much as Elon Musk himself does to advance AI research, including research to ensure that AI is safe. But when he makes inflammatory comments about ‘summoning the demon,’ it takes us two steps back.”

The list of people the ITIF wanted to call luddites included "Advocates seeking a ban on 'killer robots'", probably the Campaign to Stop Killer Robots.

I wonder what the ITIF's position is on Teller's 10-gigaton bomb.

Thank you for this post. I really liked the large number of examples of technological delay or restraint, many of which I was unaware of.

The post presents these cases as examples of delay or restraint, but I would have liked to see a more in-depth discussion of whether individual examples were truly delayed or not deployed because of 'contingent/path-dependent' historical, cultural, and sociological factors, or because of technological, economic, and military-strategic reasons.

Just to pick out one particularly suspicious example: were electric cars really muscled out by combustion engines because combustion engines were perceived as more 'masculine'... or because battery power did not keep up with the power of the internal combustion engine until very recently with new developments in lithium-ion batteries? (see random graph I found below). 

I have not looked at this deeply, but my prior would be that the former is deeply, deeply implausible. At the very least, this is a very non-obvious and contentious claim that needs argument.

 

[Graph: the energy density of lithium-ion batteries has increased from roughly 80 Wh/kg to around 300 Wh/kg since the beginning of the 1990s]

I'm not really sure that the distinction you make between historical, cultural, and sociological factors and technological, economic, and military-strategic factors is particularly useful; the interplay and interconnection between these sets of factors are, I think, so strong that such a separation seems artificial and ahistorical. What is strategically or economically useful is culturally contingent, so parsing out these factors already skews the question in a direction that is not useful.

Also, taking the electric car example, it's (obviously) complex. Certainly, there seem to have been pro-petrol vested interests at play (both economic and historical factors). The reason electric cars were perceived as more feminine is that they suited shorter journeys and had no need for manual cranking (technological AND cultural factors), whereas even then electric cars had a key advantage (far less pollution, one of the main reasons for the car's development), but this obviously was not a powerful enough factor to win out (sociological); if it had been perceived as such, there would likely have been more investment, and so the costs and competitiveness of electric cars in other domains might have improved (sociological factors affecting economic and technological ones).

Taking the above graph on faith, just from 1990 to 2020 the average energy density of batteries has gone up ~5x. That is an absolutely enormous change.

Can you similarly quantify your proposed story 'lack of need for manual cranking & shorter journeys -> more feminine -> fewer cars sold'? It seems we should reject this on priors as too conjunctive and far too weak an effect.

It's interesting to see these lists! It does seem like there are many examples here, and I wasn't aware of many of them.

Many of the given examples relate to setbacks and restraints in one or two countries at a time. But my impression is that people don't doubt that various policy decisions or other interruptions could slow AGI development in a particular country; it's just that this might not substantially slow development overall (just handicap some actors relative to others).

So I think the harder and more useful next research step would be to do more detailed case studies of individual technologies at the international level, to get a sense of whether restraint meaningfully delayed development globally.

Wow, this is a well-written, well-researched post. Thanks for putting it together!

Factors that would lend themselves to AI restraint

  • Constant percentage improvements under Moore's law have gotten more and more expensive: state-of-the-art chip fabs now cost many billions of dollars to build.
  • Preventing new discoveries in AI from being published might just be in the near-term interest of countries that view having differential AI progress as a strategic advantage. The same can be said of companies.
  • Generative models that rely on open datasets like those from Common Crawl could run into copyright issues if they start invading the markets of the artists and writers whose data was used to train them.

Factors working against restraint

  • The widespread distribution of computational resources makes it difficult to prevent progress from being made on AI in the near term.
  • Many countries (including China and the US) view AI as being relevant to their strategic political dominance.
  • The general public does not yet take the idea of dangerous AI seriously, apart from a narrow focus on AI bias, which does not seem particularly relevant to the most concerning aspects of AGI. It will be very difficult to rally public support for legislation unless this changes.
  • The short-term benefits of Moore's law continuing are widespread. If people can't buy a better iPhone next year because we banned new fabs, they are going to be upset.

Possible improvements to the post

Similarly, AI technologies do not share many of the features that have enabled coordinated bans on (weapons) technologies, making coordinated restraint difficult.

It would have been nice to read some in-text examples of the ban-enabling features lacking in AI. I clicked on the links you provided but there was too much information for it to be worth my time to go through them.
