Fantastic post! Thank you very much for writing it.
Personally I'd add the Foundational Research Institute, which has released a few AI safety-related papers in the last year:
- Suffering-focused AI safety: Why "fail-safe" measures might be our top intervention
- How Feasible is the Rapid Development of Artificial Superintelligence?
- Backup utility functions as a fail-safe AI technique
They've also published a number of draft blog posts that will eventually be incorporated into a strategy paper charting various possibilities for AI risk, somewhat similar to GCRI's "A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis," which you mentioned in your post.
Thanks for writing this. I found it helpful.
One question:
"People who have retired or partially retired [...] can join Giving What We Can and remain members for as long as they continue to donate at least 10% of their spending money (as defined above)."
Is the "10%" number here accurate? At all other locations where "spending money" is mentioned, the corresponding percentage is 1%.