This is a special post for quick takes by Alex Long. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Even if AI kills every human on Earth, that doesn't necessarily make it an existential threat. Humans evolved naturally once, so who's to say that couldn't happen again after we're gone? Who's to say it hasn't already happened on another planet? And if you want to take it to an extreme: maybe we're in a simulation, so none of us exist anyway, and even if our AI kills us all there would still be plenty of intelligent life outside the simulation that's perfectly fine.

A common argument I hear is that if there are potentially trillions or more humans who could exist in the future, then any marginal reduction in existential risk is worth pursuing. But that view seems to rest on some big assumptions about the universe we live in and how intelligent life emerges. I haven't seen this point raised in x-risk / longtermist discussions, though I'm sure I'm not the first to think of it, so I figured I'd post it here.
