anonymous123
As someone who has spent years spreading the message that humans are very prone to self-serving biases (hopefully this is an acceptable paraphrase of some complex ideas!), I've personally been surprised to see your many posts on the forum right now that seem to confidently assert that the outcome was both unforeseeable and unrelated to rationalist ideas (thereby making EAs, including yourself, purely victims rather than potentially also causal agents here).

To me, there seems to be a quite plausible path from ideas about the extreme urgency of AI alignment research and the importance of exercising "extreme" personal agency (relative to existing social norms) to a group of people taking on extreme risks, with great urgency and high personal agency, in order to raise funds for AI alignment research.

I have no connection to any of the people involved and no way to know whether that's what happened in this case; I'm just saying that, given the publicly available information, it seems like a plausible path to what happened here, and I'm curious whether that's something you've considered.