JoeW

1 karma · Joined Apr 2017

Comments (4)

To clarify: I don't think it will be especially fruitful to try to ensure AIs are conscious, for the reason you mention. Multipolar scenarios don't really work that way: what happens is determined by what's efficient in a competitive world, which doesn't leave much room for changes made now to actually persist.

And yes, if a singleton is inevitable, then our only hope for a good future is to do our best to align the singleton, so that it uses its uncontested power to do good things rather than just pursuing whatever nonsense goal it would otherwise have been given.

What I'm concerned about is the possibility that a singleton is not inevitable (which seems to me the most likely scenario) but that folks attempt to create one anyway. This includes realities where a singleton is impossible or close to it, as well as where a singleton is possible but only with some effort made to push towards that outcome. An example of the latter would just be a soft takeoff coupled with an attempt at forming a world government to control the AI - such a scenario certainly seems to me like it could fit the "possible but not inevitable" description.

A world takeover attempt has the potential to go very, very wrong - and then there's the serious possibility that the creation of the singleton would succeed but its alignment would not. Given this, I don't think it makes sense to push unequivocally for this option, with the enormous risks it entails, until we have a good idea of what the alternative looks like. That we can't control that alternative is irrelevant - we can still understand it! Once we have a reasonable picture of that scenario, we can start to think about whether it's so bad that we should embark on dangerous strategies to try to avoid it.

One element of that understanding would be how likely AIs are to be conscious; another would be how good or bad a life conscious AIs would have in a multipolar scenario. I agree entirely that we don't know this yet - whether for rabbits or for future AIs - and that's part of what I'd need to understand before I'd agree that a singleton seems like our best chance at a good future.

Thanks for the reply. You're right that we can't be sure that conscious beings will do good things, but we don't have that assurance for any outcome we might push for.

If AIs are conscious, then a multipolar future filled with vast numbers of unaligned AIs could very plausibly be a wonderful future, brimming with utility. This isn't overwhelmingly obvious, but it's a real possibility. By contrast, if AIs aren't conscious then this scenario would represent a dead future. So distinguishing the two seems quite vital to understanding whether a multipolar outcome is bad or good.

You point out that even compared to the optimistic scenario I describe above, a correctly-aligned singleton could do better by ensuring the very best future possible. True, but if a singleton isn't inevitable, trying to create one will itself pose a serious existential risk, as would any similar world takeover attempt. And even if the takeover attempt succeeds, we all agree that aligning the resulting singleton is a very difficult task. Most singleton outcomes result in a universe almost entirely full of dead matter, produced by the singleton AI optimising for something irrelevant; even if it's conscious itself, resources that could have been put towards creating utility are almost all wasted as paperclips or whatever.

So it seems to me that, unless you're quite certain we're headed for a singleton future, the question of whether AIs will be conscious or not has a pretty huge impact on what path we should try to take.

I'm still not sure how the consciousness issue can just be ignored. Yes, given the assumption that AIs will be mindless machines with no moral value, obviously we need to build them to serve humans. But if AIs will be conscious creatures with moral value like us, then what? In that case, finding the right thing to do seems like a much harder problem, as it would be far from clear that a future in which machine intelligences gradually replace human intelligences represents a nightmare scenario, or even an existential risk at all. It's especially frustrating to see AI-risk folks treat this question as an irrelevance, since it seems to have such enormous implications for how important AI alignment actually is.

(Note that I'm not invoking a 'ghost in the machine'; I'm making the very reasonable guess that our consciousness is a physical process that occurs in our brains, that it's there for the same reason other features of our minds are there - because it's adaptive - and that similar functionality might very plausibly be useful for an AI too.)

One implication of a multi-agent scenario is that there would likely be enormous variety in the types of minds that exist, as each mind design could be optimised for a different niche. So in such a scenario, it seems quite plausible that each feature of our minds would turn out to be a good solution in at least a few situations, and so would be reimplemented in minds designed for those particular niches.