Avoiding AI Races Through Self-Regulation

The first group to build artificial general intelligence (AGI) stands to gain a significant strategic and market advantage over its competitors, so companies, universities, militaries, and other actors have strong incentives to race to build AGI first. An AGI race would be dangerous, though, because it would prioritize capabilities over safety and increase the risk of existential catastrophe. A self-regulatory organization (SRO) for AGI may be able to change incentives to favor safety over capabilities and encourage cooperation rather than racing.

Full text available on Map and Territory. (sorry, it's annoying to recreate all the links so just cross-posting the summary)

Comments (4)

Comment author: Evan_Gaensbauer 13 March 2018 11:57:13PM 2 points [-]

I find when I do up a draft of a post in Google Docs, and copy-paste the whole thing to either the EA Forum or LW, the hypertext link formatting remains intact. Sometimes it can mess up the other formatting, though, so it's not the best idea if you're using bullet points or other graphical elements in a post. If you could write up posts in Google Docs and copy-paste them to your Medium blog too without having to recreate the links, then you wouldn't have the problem with recreating links on one or the other site. I don't have a Medium blog, so I wouldn't know if it's possible to copy-paste into it with the links intact.

Comment author: AviN 15 March 2018 12:03:50AM 2 points [-]

This is the approach I used to get this article on the EA Forum:

  • Used the "Publish to the web" feature in Google Docs.
  • Copied and pasted from Firefox to the EA Forum. (Firefox generated much less messy content than Chrome.)
  • Manually fixed all the links to remove the https://www.google.com/url redirect junk. (Not strictly necessary, but it annoyed me.)
  • Edited the HTML heavily to get the one table to show up right.
  • Made a lot of other manual edits.
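If you have to unwrap a lot of those redirect links, a minimal sketch in Python can do it, assuming the links follow Google's usual `https://www.google.com/url?q=<target>&sa=...` wrapper format (the function name here is just illustrative):

```python
from urllib.parse import urlparse, parse_qs

def unwrap_google_redirect(url):
    """If url is a Google redirect link, return its real target; otherwise return it unchanged."""
    parsed = urlparse(url)
    if parsed.netloc == "www.google.com" and parsed.path == "/url":
        target = parse_qs(parsed.query).get("q")
        if target:
            return target[0]  # the "q" parameter holds the real destination
    return url

# Example: a wrapped link comes back as its real destination
print(unwrap_google_redirect("https://www.google.com/url?q=https://example.com/page&sa=D"))
# Ordinary links pass through untouched
print(unwrap_google_redirect("https://example.com/page"))
```

Running that over every href in the pasted HTML would save the manual step, though you'd still have the table and formatting cleanup to do by hand.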

It was a pain though.

Comment author: cassidynelson 15 March 2018 03:44:06AM 2 points [-]

This article has many parallels with Greg Lewis' recent article on the unilateralist's curse, as it pertains to biotechnology development.

If participation in an SRO is voluntary, even if you have 9/10 organisations on board, how do you stop the final one from proceeding with AGI development without oversight? I'd imagine that the setup of an SRO may confer disadvantages on participants, thus indirectly incentivizing non-participation (if the lack of restrictions increases the probability of reaching AGI first).

Do you anticipate an SRO may be an initial step towards a more obligatory framework for oversight?

Comment author: gworley3  (EA Profile) 15 March 2018 06:55:54PM 0 points [-]

An SRO might incentivize participation in several ways. One is idea sharing, be it via patent agreements or sharing of trade secrets among members on a secure forum. Another is via social and possibly legal penalties for non-participation, or by acting as a cartel to lock non-participants out of the market the way many professional groups do.

That said, it does seem like a step in the direction of legal oversight, but one that moves us towards a model of technocratic regulatory bodies rather than one where legislation tries to directly control actions. Creating an SRO would give us an already-existing organization that could step in to serve this role in an official capacity if governments or inter-governmental organizations choose to regulate AI.