BabyBeluga

For what it's worth, I think Eliezer's post was primarily directed at people who have spent a lot less time thinking about this stuff than you, and that this sentence:

"Getting perfect loss on the task of being GPT-4 is obviously much harder than being a human, and so gradient descent on its loss could produce wildly superhuman systems."

is the whole point of his post, and is not at all obvious even to very smart people who haven't spent much time thinking about the problem. I've had a few conversations with, e.g., skilled Google engineers who have said things like "even if we make really huge neural nets with lots of parameters, they have to cap out at human-level intelligence, since the internet itself is human-level intelligence," and then I bring up the hash/plaintext example (which I doubt I'd have thought of if I hadn't already seen Eliezer point it out) and they're like "oh, you're right... huh."
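To make the hash/plaintext point concrete, here's a minimal sketch; the document format and strings are my own invented illustration, not anything from Eliezer's post or any real training corpus:

```python
# A minimal sketch of the hash/plaintext point. The strings and document
# format here are invented for illustration; the argument only needs a
# corpus that can reveal a hash *before* its preimage.
import hashlib

secret = "some string the model has never seen before"
digest = hashlib.sha256(secret.encode()).hexdigest()

# A (hypothetical) training document that states the digest first.
document = f"SHA-256 digest: {digest}\nThe string that was hashed: {secret}"

# A human predicting this document token by token is stuck guessing once the
# digest appears. A predictor with *perfect* loss on the continuation after
# "The string that was hashed: " would have to recover `secret` from `digest`
# alone, i.e. invert SHA-256 -- a wildly superhuman feat.
print(document)
```

No amount of "human-level" ability gets you zero loss on a document like this: a human's loss floor is roughly the entropy of the secret, and the only way to do better is to invert the hash. So "the internet is human-level text" doesn't imply that perfect prediction of the internet is a human-level task.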

I think the point Eliezer's making in this post is just a well-fleshed-out version of the hash/plaintext point (one that makes clear the basic concept isn't confined to that one narrow example), and it's actually pretty significant and non-obvious. It only feels obvious because it has one of the nice properties of simple, good ideas: being "impossible to unsee" once you've seen it.