It is important to figure out the best ways to convince people that AI safety is worth taking seriously: although it is (in my opinion, and in the opinion of many people in the EA community) the most important cause area, it often seems weird at first glance. I think one way to make AI safety pitches more persuasive would be to frame AI safety as a problem that arises because the profit-based incentives of private-sector AI developers do not account for the externalities generated by risky AGI projects.

Many of the groups EA pitches to are relatively left leaning; elite university students in particular are much more left leaning than the general population. They are therefore likely to be receptive to arguments for taking AI safety seriously that highlight how the problem stems largely from deeper problems with capitalism. One such problem is that capitalism fails by default to account for externalities: effects of economic activity that are not reflected in that activity's price.[1] Developing AGI generates huge negative externalities. A private-sector actor who creates aligned AGI would probably reap much of the economic gains from it (at least in the short term; it is unclear how those gains would be distributed over longer time scales), but it would pay only a small fraction of the costs of unaligned AGI, which are borne almost entirely by the rest of the world and by future generations. Misalignment risk from AGI is therefore significantly heightened by a structural failure of capitalism that left-leaning people tend to be very mindful of. Even beyond left-leaning students, educated people with an understanding of economics widely acknowledge that a major problem with capitalism is its default failure to deal with externalities. Similarly, many people in the general public view corporations and big tech as irresponsible, greedy actors who harm the public good, even if they lack an understanding of the logic of externalities. So in addition to being particularly persuasive to left-leaning people who understand externalities, this framing seems likely to be persuasive to people with a wider range of political orientations and levels of economic understanding.
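To state the externality point a bit more formally (a rough sketch; the notation is just my shorthand): if a risky AGI project has expected private benefit $B$, private cost $C_{\text{dev}}$, and external cost $C_{\text{ext}}$ borne by everyone else and by future generations, then a profit-maximizing developer proceeds whenever

$$B > C_{\text{dev}},$$

even though the project is only socially worthwhile when

$$B > C_{\text{dev}} + C_{\text{ext}}.$$

Since $C_{\text{ext}} \gg C_{\text{dev}}$ for unaligned AGI, the gap between these two conditions is exactly the market failure this framing points at.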

This argument does not by itself imply that misaligned AGI constitutes an existential risk. But combined with the claim that AI systems will have large impacts of some kind on the future (which many people who are skeptical of AI x-risk still believe), it implies that we will by default significantly underinvest in ensuring that the AI systems which shape the future have positive effects on society. That conclusion seems likely to make people broadly more concerned about the negative effects of AI. Moreover, even if they do not conclude that AI development could pose an existential risk, the argument still implies that AI safety research is a public good which should receive much more funding and attention than it currently does. Since alignment research focused on preventing existential catastrophe seems to me highly related to broader efforts to ensure future AI systems have positive effects on the world, having more people believe the previous claim seems quite good.

As a result, it seems like "AI safety is a public good which will be underinvested in by default" or (more polemically) "AI developers are gambling with the fate of humanity for the sake of profit, and we need to stop them or ensure that their efforts don't have catastrophic effects" should be a more common frame for pitching the importance of AI safety. It is an accurate and rhetorically effective framing of the problem. Am I missing something?

  1. For a longer explanation of externalities, see https://www.imf.org/external/pubs/ft/fandd/basics/external.htm

Comments

It's risky to connect AI safety to one side of an ideological conflict.

There are ways to frame AI safety as (partly) an externality problem without getting mired in a broader ideological conflict.

I think you can stress the "ideological" implications of externalities to lefty audiences while having a more neutral tone with more centrist or conservative audiences.  The idea that externalities exist and require intervention is not IMO super ideologically charged.

I'm very pro framing this as an externality. It doesn't just help with left-leaning people; it can also be helpful for talking to other audiences, such as those immersed in economics or antitrust/competition law.

I like this framing a lot. My 60 second pitch for AI safety often includes something like this. “It’s all about making sure AI benefits humanity. We think AI could develop really quickly and shape our society, and the big corporations building it are thinking more about profits than about safety. We want to do the research they should be doing to make sure this technology helps everyone. It’s like working on online privacy in the 1990s and 2000s: Companies aren’t going to have the incentive to care, so you could make a lot of progress on a neglected problem by bringing early attention to the issue.”

Without thinking too deeply, I believe that this framing for AI risk, i.e. one in line with "AI developers are gambling with the fate of humanity for the sake of profit, and we need to stop them/ensure that their efforts don't have catastrophic effects," could serve as a conversational cushion for those who are unfamiliar with the general state of AI progress and with the existential risk poorly aligned AI poses.

Those unfamiliar with AI might disregard the extent of the risk from AI if approached in conversation with remarks about how there is not only a non-trivial chance that humanity might be extinguished by AI, but that many researchers believe this outcome is highly likely, even within the next 25 years. I imagine such scenarios are, for them, generally unbelievable.

The cushioning could, however, lead people to think about AI risk independently or to search for more evidence and commentary online, which might subsequently lead them to the conclusion that AI does in fact pose a significant existential risk to humanity.

When trying to introduce the idea of AI risk to someone who is unfamiliar with it, it's probably a good idea to give an example of a current issue with AI and then have them extrapolate. The example of poorly designed AI systems being used by corporations to maximize click-through, as covered in the introduction of Human Compatible, seems like a good fit for your framing of AI safety as a public good. Most people are familiar with the ills of algorithms designed for social media, so it is not a great leap to imagine researchers designing more powerful AI systems that harm humanity via a similar design flaw, but at a much more lethal level:

They aren't particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user's preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they click on. 

  • AI capabilities people don't psychologically feel like AI is a threat to their selfish interests (assuming they even understand why it is a threat), because humans value short-term gain more than long-term danger (time discounting). Therefore selfish actors have incentives to work on capabilities.
  • Great point that externalities might mislead people into thinking "ah yes, another instance where we need government regulation to stop greedy companies; government regulation will solve the problem." (Although government intervention for slowing down capabilities and amplifying safety research would indeed be quite helpful.)
  • Not sure how externalities "dilutes" the claims. It's a serious problem that there are huge economic incentives for capabilities, and minuscule economic incentives for safety.
  • I don't think it's very hard to make AI x-risk sound non-weird: (1) intelligence is a meaningful concept; (2) it might be possible to build AI systems with more intelligence than humans; (3) it might be possible to build such AIs within this century; (4) if built, these AIs would have a massive impact on society; (5) this impact might be extremely negative. These core ideas are reasonable-sounding propositions that someone with good social skills could bring up in a conversation.

It is true that private developers internalize some of the costs of AI risk. However, this is also true in the case of carbon emissions; if a company emits CO2, its shareholders do pay some costs in terms of having a more polluted atmosphere. The problem is that the private developer pays only a very small fraction of the total costs, a fraction which, while still quite large in absolute terms, is plausibly worth paying for the upside. For example, if I were entirely selfish and thought AI risk was somewhat less likely than I actually do (say 10%), I would probably be willing to risk a 10% chance of death for a 90% chance of massive resource acquisition and control over the future. However, if I internalized the full costs of that 10% chance (everyone else dying and all future generations being wiped out), then I would not be willing to take that gamble.
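To make that asymmetry concrete, here is a toy expected-value calculation (every number is an illustrative assumption, not an estimate):

```python
# Toy illustration of the externality asymmetry; all numbers are made up.
p_doom = 0.10            # assumed probability the AGI project goes catastrophically wrong
private_upside = 1.0     # developer's payoff if the project succeeds (normalized units)
private_downside = -1.0  # developer's own loss if it fails (they lose everything they have)
external_cost = -1000.0  # additional cost borne by everyone else and by future generations

# Expected value from the developer's (selfish) point of view:
ev_private = (1 - p_doom) * private_upside + p_doom * private_downside

# Expected value once the external costs are internalized:
ev_social = (1 - p_doom) * private_upside + p_doom * (private_downside + external_cost)

print(f"Private EV: {ev_private:+.2f}")  # +0.80 -> the gamble looks worth taking
print(f"Social EV:  {ev_social:+.2f}")   # -99.20 -> the gamble is clearly not worth taking
```

The real payoffs are obviously nothing like this tidy, but the sign flip is the whole point: the gamble only looks attractive because most of the downside falls on people who have no say in it.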

Ultimately, you should probably tailor messages to your audience, given their understanding, objections/beliefs, values, etc. If you think they understand the phrase “externalities,” I agree, but a sizable number of people in the world do not properly understand the concept.

Overall, I agree that this is probably a good thing to emphasize, but FWIW I think a lot of pitches I’ve heard/read do emphasize this insofar as it makes sense to do so, albeit not always with the specific term “externality.”
