Hey folks, Liv Boeree here - I recently did a TED talk on Moloch (a.k.a. the multipolar trap) and how it threatens safe AI development. Posting it here to a) raise awareness and b) get feedback from the community, given the relevance of the topic.

And of course, if any of you are active on social media, I'd really appreciate it being shared as widely as possible, thank you!


I liked the talk. I also loved the boots! Great job.

In terms of feedback/reaction: I work on AI alignment, game theory, and cooperative AI, so Moloch is basically my key concern. And from that position, I highly approve of the overall talk and of virtually all of its content --- except for one point, where I felt a bit so-so. And that is the part about what company leaders can do to help the situation.

The key thing is 9:58-10:09 ("We need leaders who are willing to flip Moloch's playbook. ..."), but I think this part then changes how people interpret 10:59-10:11 ("Perhaps companies can start competing over who ..."). I don't mean to say that I strongly disagree here --- rather, I mean that this part seems genuinely speculative, in contrast with everything else in the talk (which seemed super solid).

More specifically, the talk's formulation suggested to me that the key thing is whether the leaders would be willing to not play the Moloch game. In contrast, it seems quite possible that this by itself wouldn't help at all --- for example because they would just get fired if they tried. My personal guess is that "the key thing" is the affordance the leaders have for not playing the Moloch game / the costs they incur for doing so. Or perhaps the combination of this and the willingness to not play the Moloch game. And this is also how I would frame the 10:59-10:11 part --- that we should try to make it such that the companies can compete on those other things that turn this into a race to the top. (As opposed to "the companies should compete on those other things".)
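To make that distinction concrete, here is a toy sketch (entirely my own illustration, with made-up payoff numbers --- not something from the talk or anyone's actual model): a symmetric two-lab "race vs. cooperate" game. With the baseline payoffs, racing is the unique equilibrium regardless of how willing a leader is to cooperate; only changing the payoffs --- the affordances and costs --- moves the equilibrium.

```python
# Hypothetical illustration: a 2x2 "race vs. cooperate" game showing why
# changing payoffs, rather than willingness alone, removes the Moloch dynamic.
# All payoff numbers are made up for illustration.

from itertools import product

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a symmetric 2x2 game.

    payoffs[(a, b)] = payoff to a player choosing `a` when the other chooses `b`.
    """
    strategies = ["cooperate", "race"]
    equilibria = []
    for a, b in product(strategies, repeat=2):
        a_ok = all(payoffs[(a, b)] >= payoffs[(alt, b)] for alt in strategies)
        b_ok = all(payoffs[(b, a)] >= payoffs[(alt, a)] for alt in strategies)
        if a_ok and b_ok:
            equilibria.append((a, b))
    return equilibria

# Baseline: racing dominates, so even a leader who is "willing" to cooperate
# gets punished for doing so (loses ground, gets replaced, etc.).
baseline = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "race"): 0,      # lone cooperator pays a steep cost
    ("race", "cooperate"): 4,
    ("race", "race"): 1,           # the lose-lose outcome of the trap
}
print(pure_nash_equilibria(baseline))   # [('race', 'race')]

# Changed affordances: regulation, norms, or reputational rewards shrink the
# penalty for cooperating and the gain from defecting.
reshaped = dict(baseline)
reshaped[("cooperate", "race")] = 2
reshaped[("race", "cooperate")] = 2.5
print(pure_nash_equilibria(reshaped))   # [('cooperate', 'cooperate')]
```

The point of the sketch is just that the equilibrium flips when the payoff structure changes, not when players' intentions do --- which is why I'd emphasize affordances and costs over willingness.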

Maybe a link is missing, or the embed function isn't working on my phone? I'm not seeing anything.

(Also, do you have a transcript you could post?)

YouTube link here: https://www.youtube.com/watch?v=WX_vN1QYgmE (it's embedded in the post, as JohnSnow points out — not sure if something is breaking for you?)

Transcript here: https://www.ted.com/talks/liv_boeree_the_dark_side_of_competition_in_ai/transcript 

Executive summary: Competition can drive innovation but also create traps that lead to lose-lose outcomes. This dynamic is happening in AI and needs wise leadership to avoid catastrophe.

Key points:

  1. AI filters create body dysmorphia. News media sensationalizes. These competitions lead to lose-lose outcomes.
  2. Many global problems like pollution arise from misaligned incentives and game theory.
  3. The AI race risks sacrificing safety in pursuit of capabilities. This is like a trap set by the ancient god Moloch.
  4. Historical treaties show we can coordinate to escape traps. AI leaders should focus on alignment and safety.
  5. Steps like Anthropic's scaling policy point the way, but much more leadership is needed to avoid catastrophe.
  6. We must turn the AI game into a race to the top on safety and ethics.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

The TED talk is embedded for me on PC.
