
Nick K.

108 karma · Joined

Comments (25)

Just noting that these are possibly much stronger claims than "AGI will be able to completely disempower humanity" (depending on how hard it is to solve cold fusion a posteriori).

This is not a fair critique of the post; he's responding to a hypothetical discussed on Twitter.

At the risk of sounding dismissive, it's really not clear to me that anything "went wrong". From my outside perspective, it's not like there was a clear mess-up on the part of EAs anywhere here, just a difficult situation managed to the best of people's abilities.

That doesn't mean it's not worth pondering whether any aspect was handled badly, or more broadly what one can take away from this situation (although we should be wary of over-updating on single notable events). But, not knowing the counterfactuals, and absent a clear picture of what things "going right" would have looked like, it's not evident that this should be chalked up as a failing on the part of EA.

From gwern's summary over on LessWrong, it sounds like the actual report only stated that the firing was "not mandated", which could be interpreted as either "not justified" or "not required". Is it clear from the legal context that the former is implied?

It certainly does seem to push capabilities, although one could argue about whether the extent of it is significant.

Being confused and skeptical about their adherence to their stated philosophy seems justified here, and it is up to them to explain their reasoning behind this decision.

On the margin, this should probably update us towards believing they don't take their stated policy of not advancing the SOTA too seriously.

You don't need to be an extreme longtermist to be sceptical about AI; it suffices to care about the next generation and not want extreme levels of change. I think looking too much into differing morals is the wrong lens here.

The most obvious explanation for the difference between Altman and people more concerned about AI safety (not specifically EAs) seems to be their estimates of how likely AI risk is relative to other risks.

That being said, the point that it's disingenuous to ascribe cognitive bias to Altman for holding whatever opinion he holds is a fair one, and in view of general discourse norms one shouldn't go too far with it. Still, given Altman's exceptional capacity for unilateral action due to his position, it's reasonable to be at least somewhat concerned.

I realize that my question sounded rhetorical, but I'm actually interested in your sources or reasons for your impression. I certainly don't have a good idea of the general opinion, and the media I consume is biased towards what I consider reasonable takes. That being said, I haven't encountered the position you're concerned about very much and would be interested to hear where you did. Regarding this forum, I imagine one could read into some answers, but overall I don't get the impression that the AI CEOs are seen as big safety proponents.

Who considers Altman and Hassabis thought leaders in AI safety? I wouldn't even consider Altman a thought leader in AI; his extraordinary skill seems mostly social and organizational. There's perhaps an argument for Amodei, as Anthropic is currently the only one of the companies whose commitment to safety over scaling is at least reasonably plausible.

Noted! The key point I was trying to make is that I think it would help the discourse to separate 1) how one would act within a given frame from 2) why one thinks each frame is more or less likely (which is more contentious and easily gets a bit political). Since your post aims at the former, and the latter has been discussed at more length elsewhere, it would make sense to further de-emphasize the latter.

May I ask what your feelings on a pause were beforehand?
