
S-risk FAQ

The idea that the future might contain astronomical amounts of suffering, and that we should work to prevent such worst-case outcomes, has lately attracted some attention. I've written this FAQ to help clarify the concept and to clear up potential misconceptions. [Crossposted from my website on s-risks.] General...

Strategic implications of AI scenarios

[Originally posted on my new website on cause prioritization. This article is an introductory exploration of what different AI scenarios imply for our strategy in shaping advanced AI, and it might be interesting to the broader EA community, which is why I'm crossposting it here.] Efforts to mitigate the risks...