Written by LW user Richard_Ngo.

This is part of LessWrong for EA, a LessWrong repost & low-commitment discussion group (inspired by this comment). Each week I will revive a highly upvoted, EA-relevant post from the LessWrong Archives, more or less at random.

Excerpt from the post:

Ben Pace and I (Richard Ngo) recently did a public double crux at the Berkeley REACH on how valuable it is for people to go into AI policy and strategy work: I was optimistic and Ben was pessimistic. During the actual event, we didn't come anywhere near to finding a double crux on that issue. But after a lot of subsequent discussion, we've come up with some more general cruxes about where impact comes from.

I found Ben's model of how to have impact very interesting, and so in this post I've tried to explain it, along with my disagreements. Ben liked the goal of writing up a rough summary of our positions and having further discussion in the comments, so while he edited it somewhat, he doesn't at all think that it's a perfect argument, and it's not what he'd write if he spent 10 hours on it. He endorsed the wording of the cruxes as broadly accurate. (Full post on LW)

Please feel free to,

Comments

Cool arguments on the impact of policy work for AI safety. I find myself agreeing with Richard Ngo's support of AI policy, given the scale of government influence and the uncertain nature of AI risk. Here are a few quotes from the piece.

How AI could be influenced by policy experts:

in a few decades (assuming long timelines and slow takeoff) AIs that are less generally intelligent than humans will be causing political and economic shockwaves, whether that's via mass unemployment, enabling large-scale security breaches, designing more destructive weapons, psychological manipulation, or something even less predictable. At this point, governments will panic and AI policy advisors will have real influence. If competent and aligned people were the obvious choice for those positions, that'd be fantastic. If those people had spent several decades researching what interventions would be most valuable, that'd be even better.

This perspective is inspired by Milton Friedman, who argued that the way to create large-scale change is by nurturing ideas which will be seized upon in a crisis.

Why EA specifically could succeed:

… From the outside view, our chances are pretty good. We're a movement comprising many very competent, clever and committed people. We've got the sort of backing that makes policymakers take people seriously: we're affiliated with leading universities, tech companies, and public figures. It's likely that a number of EAs at the best universities already have friends who will end up in top government positions. We have enough money to do extensive lobbying, if that's judged a good idea.

These opposing opinions are driven by different views on timelines, takeoff speeds, and sources of risk:

More generally, Ben and I disagree on where the bottleneck to AI safety is. I think that finding a technical solution is probable, but that most solutions would still require careful oversight, which may or may not happen (maybe 50-50). Ben thinks that finding a technical solution is improbable, but that if it's found it'll probably be implemented well. I also have more credence on long timelines and slow takeoffs than he does. I think that these disagreements affect our views on the importance of influencing governments in particular.

Thanks for sharing LW4EA! Particularly the AI safety stuff. It’s an act of community service.
