
Thanks to everyone who responded to my last two questions on AGI; this should be my last on the topic.

I wonder whether:

  • EA folks (working on AI risk) propose pathways for zero AGI development worldwide, or
  • EA folks reject such pathways in arguments about AGI safety or value.


I cannot speak for all EA folks; here's a line of reasoning I'm patching together from the "AGI-never is unrealistic" crowd.

Most AI research isn't explicitly geared towards AGI; while a few groups have that stated goal (for instance, DeepMind), most of the AI community wants to solve the next easiest problem in one of a thousand subdomains, not the more general AGI problem.

So while peak-performance progress may be driven by the few groups pushing for general capability, for the bulk of the field "AGI development" is simply not what they do. This means that even if all the current AGI groups stopped working on it tomorrow, "regular" AI research would still push forward.

One scenario for "everyone avoids generality as hard as possible while still solving as many problems as possible" is the Comprehensive AI Services (CAIS) framework. That is one pathway, though not without safety concerns of its own.

However, as Richard Ngo argues, "Open-ended agentlike AI seems like the most likely candidate for the first strongly superhuman AGI system."

To sum up:

  • regular AI research, not aiming at AGI, will very likely keep trying to cover as many tasks as possible, as most of the field has done, and will eventually cover, in aggregate, a wide enough range of capabilities that alignment issues kick in;
  • more general agents are still likely to appear before we get there, with nothing currently impeding progress (for instance, while DeepMind has a safety team aware of AGI concerns, this doesn't prevent DeepMind from advancing general capability further).

A separate line of reasoning argues that no one will ever admit (in time) that we're close enough to AGI to stop for safety reasons, so everyone can keep claiming "we're not working on AGI, just regular capabilities" until it's too late.

In that scenario, stopping AGI research amounts to stopping/slowing down AI research at large, which is also a thing being discussed!

That's very interesting; I will follow up on those links, and on the other links I have received in comments from other helpful people.

Huh, Eric Drexler is one of the authors: the same one who popularized nanotechnology back when I was a teen, I think...

Thanks.

Jérémy Perret
The very same, yes!