Most of this stuff is well above my ability to really make a judgement call on, and at the moment I'm just trying to learn about it. Eliezer does make a lot of sense to me in the abstract, though; I honestly don't feel I understand enough to know whether the above rebuttals make sense.

However, there does seem to me to be one pretty likely possibility I haven't seen mentioned: there is now a lot of public and political attention on AI. The problem at the moment is that it all seems abstract, and the dangers seem too sci-fi and unbelievable.

Surely there is a non-trivial chance that, in the time between now and true AGI (at which point I agree that, if Eliezer is right, the game is already up), something very scary and dangerous will happen because of AI? At that point, presumably, sufficient political will could be found to implement the required moratorium.

I'm reminded of a TNG episode where a relatively more primitive society is about to be destroyed by some sort of invading force (I forget the specifics) and the people refuse to believe it and just run about squabbling. So Data blows up something with his phaser, and instantly they fall into line and agree. I'm not suggesting it would be that neat and tidy, but you get the idea.

The question to me is: is there necessarily a correlation between how much danger an AI could cause and how intelligent it has become? Obviously, if it's intelligent enough, it's not going to try something that would get it shut down. But even Eliezer would admit we're not at AGI yet. And in the meantime... how on earth do we know, in this weird "pre-AGI" world we're now in, that something crazy isn't going to happen?

I'm very likely to be wrong as a noob, but there does seem to be a slight contradiction in there somewhere: we don't and can't know what AI is going to do, since it's this inchoate set of fractions, and yet we can be sure it won't do something silly (at least in the early stages) that gives the game away and results in humans shutting it down.

Now, of course, that crazy thing will itself have to be bad. People will almost certainly have to die, and not just one or two, for there to be the political will to act. But that's not literally everyone, or even close.

If I were to sum it up I'd say: Eliezer is fond of saying "we need to get AI alignment right on the first attempt". Sure. But the AI also needs to get its attack right on the first attempt, surely? Or, perhaps better, it needs to time that attack so that it still holds all the cards even if the attack fails. And sure, it's smarter than us, so there's good reason to think it would do that; but I'm smarter than any animal, yet if I made a mistake I could still end up being killed by one. And again (assuming I'm right about political will etc.), it would need to get it right on its first try. Is that really so likely?

It just doesn't seem to me that the chance of that sequence of events is trivial, but I'm happy to be told otherwise and have it explained if I'm being naive, which I probably am. And by the way, I'm not naive about the politics that would have to result either; that's where the "thing that happens" would have to be sufficiently terrible. But given that the "thing Eliezer is predicting" is even more terrible, I don't assign it a trivial probability.

And that's before you get onto other real-world things that could interfere with this "inevitable" AGI emergence. What if climate change wins the "humanity destruction" race? That would prevent there being properly-operated data centres at all. Of course it's also a nasty apocalyptic scenario, but only the biggest doomers think humanity will literally end because of it. I'm guessing this has been raised before and that Eliezer basically thinks AGI emergence will win the destruction "race"? Again, though, that timeframe seems difficult to predict, so the "near 100%" chance again seems very questionable to me.