Comments (2)

Thanks for sharing this.

What’s your view on “why doesn’t OpenAI slow down work on capabilities to allow more time for alignment research?”

Is it

  1. OpenAI is worried that DeepMind might build AGI before OpenAI, and that DeepMind is more likely to build AGI unsafely or misuse it

  2. OpenAI is worried that another group we aren’t aware of yet might build AGI before OpenAI and DeepMind, and that this group is more likely to build AGI unsafely or misuse it

  3. OpenAI is already scaling up capabilities only where doing so allows it to do better alignment research

  4. OpenAI is in practice prioritising revenue over existential risk mitigation, despite claiming not to

  5. something else / a mix

I think that OpenAI is not worried about actors like DeepMind misusing AGI, but (a) is worried about actors that might not currently be on most people's radar misusing AGI, (b) thinks that scaling up capabilities enables better alignment research (though it sees other benefits to scaling up capabilities too), and (c) is earning revenue for reasons other than direct existential risk reduction where it does not see a conflict in doing so.
