TheoC

Thanks for the response!

The products being widely used doesn't prevent the marginal impact of one additional user from being very high in absolute terms, since the absolute cost of an AI catastrophe would be enormous.

In addition, establishing norms about behaviour can affect the choices of a much larger number of users than just yourself.

You could make similar arguments to suggest that, if you are concerned about climate change and/or animal welfare, it is not worth avoiding flying or eating vegan; but those choices are at least given more serious consideration, both in EA communities and in other communities that care about these causes.

I remain unconvinced that these offsets are particularly helpful, and certainly not at a 1:1 ratio.

My understanding is that alignment as a field is much more constrained by ideas, talent, and infrastructure than by funding. Providing capabilities labs like OpenAI with more resources (and making it easier for similar organisations to raise capital) seems to do much more to shorten timelines than providing some extra cash to the alignment community today does to get us closer to good alignment solutions.

I am not saying it can never be ethical to pay for something like ChatGPT Plus, but if you are not directly using it to help with work on alignment, then I think paying is likely to be very harmful in expectation.

I am pretty surprised that more of the community don't take issue with merely using ChatGPT and similar services: it provides a lot of real-world data that capabilities researchers will use for future training, and it encourages investment in more capabilities research, even if you don't pay them directly.

TheoC

I find this comment pretty patronising, and echo Amber Dawn's point that this leads to discussion being accepted only from those who are sufficiently emotionally detached from an issue (which tends to mean people who aren't directly impacted).

To me, this comment sounds like it is saying that if you are angry about this then you are being irrational, and should wait to calm down before commenting. Anger can be a perfectly rational response, and excessive tone policing can exclude marginalised voices from the conversation.

"Risk-taking and ambition are two sides of the same coin. If you swarm to denouncing risks that failed, you do not understand what it takes to succeed. My very subjective sense of people in the EA community is that we are much more likely to fail due to insufficient ambition than too much risk-taking, especially without the support and skillset of the FTX team."

Risk-taking and ambition are two sides of the same coin when the parties who stand to bear the downside are the ones who benefit from the upside, and can consent to taking the risk. Appropriating user funds to bear the downside risk without their knowledge is not ambition; it is theft, and it is not morally acceptable. If, for example, Alameda had collapsed due to trading losses but customer deposits on FTX were untouched, it would be an entirely different matter.

TheoC

I think Sam comes across very poorly here, even given the situation. I'm not sure who benefitted from him giving this interview. The 'cryptic tweets' make it seem like he's treating the whole situation as some kind of joke, and the focus seems to be on his and FTX's failures rather than on the creditors who have been wronged. Quoting the article:


"Shortly before the interview, Mr. Bankman-Fried had posted a cryptic tweet: the word “What.” Then he had tweeted the letter H. Asked to explain, Mr. Bankman-Fried said he planned to post the letter A and then the letter P. “It’s going to be more than one word,” he said. “I’m making it up as I go.”

So he was planning a series of cryptic tweets? “Something like that.”

But why? “I don’t know,” he said. “I’m improvising. I think it’s time.”"