
kbog comments on Towards a measure of Autonomy and what it means for EA - Effective Altruism Forum


Comment author: kbog (EA Profile), 21 July 2017 04:04:18PM, 4 points

Some of the reasons you gave in favor of autonomy come from a perspective of subjective pragmatic normativity rather than universal moral values, and don't make as much sense when society as a whole is analyzed. E.g.:

> You disagree with the larger system for moral reasons, for example if it is using slavery or polluting the seas. You may wish to opt out of the larger system in whole or in part so you are not contributing to the activity you disagree with.

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

> The larger system is hostile to you. It is an authoritarian or racist government. There are plenty of examples of this happening in history, so it will probably happen again.

Individuals could be disruptive or racist, and the government ought to restrain their ability to be hostile towards society.

So when we decide how to alter society as a whole, it's not clear that more autonomy is a good thing. We might be erring on different sides of the line in different contexts.

Moreover, I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values. So we should just think about how to reduce catastrophic risks and how to improve the economic welfare of everyone whose jobs were automated. Autonomy may play a role in these contexts, but it will then be context-specific, so our definition of it and analysis of it should be contextual as well.

> The autonomy view vastly prefers a certain outcome to the AI-risk question. It is not in favour of creating a single AI that looks after us all (especially not by uploading)

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors. If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive. I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

I think this is the kind of problem you frequently get when you construct an explicit value out of something which was originally grounded in purely instrumental terms - you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

Comment author: WillPearson, 21 July 2017 06:11:49PM, 0 points

> But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

Both of these would impinge on the vital sets of others, though (slavery directly, pollution by disrupting the natural environment people rely on). So it would still be a bad outcome from the autonomy viewpoint if these things happened.

The autonomy viewpoint only argues that many actions should be physically possible for people; not every physically possible action is morally allowed.

Which of these three scenarios is best?

  1. No one has guns, so no one gets shot.
  2. Everyone has guns, and people get shot because they have accidents.
  3. Everyone has guns, but no one gets shot because they are well trained and smart.

The autonomy viewpoint argues that the third is the best possible outcome and tries to work towards it. There are legitimate uses for guns.

I don't go into how these things should be regulated, as this is a very complicated subject. I'll just point out that to get the robust free society I want, you would need not to regulate the ability to do these things, but to make sure the incentive structures and education are right.

> I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values.

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is. So the best I can do is claim it as a value, which I do, and refer to other value systems that people might share. I suppose I could talk about other people who value autonomy in itself. Would you find that convincing?

> But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors.

Unless it is omniscient, I don't see how it will see all threats to itself. It may lose a gamble in the logical induction lottery and make an ill-advised change to itself.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI never comes into existence? What is the strategy for that chunk of probability space?

> If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive.

I'm personally highly skeptical that this will happen.

> I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

How would that be allowed if those people might create a competitor AI?

> you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

Like I said in the article, if I were convinced of decisive strategic advantage, my views of the future would be very different. However, as I am not, I have to think that the future will remain similar to the present in many ways.

Comment author: kbog (EA Profile), 16 August 2017 01:06:03PM, 0 points

> I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is.

The is-ought problem doesn't say that you can't intrinsically value anything. It just says that it's hard. There are lots of ways to argue for intrinsically valuing things, and I have a reason to intrinsically value well-being, so why should I divert attention to something else?

> Unless it is omniscient, I don't see how it will see all threats to itself.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

> Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI never comes into existence? What is the strategy for that chunk of probability space?

Democratic oversight, international cooperation, good values in AI, functional decision theory (FDT) to facilitate coordination, and things like that.

> I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

> How would that be allowed if those people might create a competitor AI?

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

Comment author: WillPearson, 18 August 2017 06:13:11PM, 0 points

> The is-ought problem doesn't say that you can't intrinsically value anything

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paperclipper to stop its paperclipping ways?

That said, I do think I can argue for having a plurality of intrinsic values:

1) They allow you to break ties. If two situations have equal well-being, a second intrinsic value gives you a way of picking between them (see the sketch after these points).

2) It might be computationally or informationally expensive to calculate your primary intrinsic value. Having another value that is not at odds with it and generally correlates with it lets you optimise for that instead. For example, since you can't measure everyone's eudaemonia, you could optimise for political freedom, which probabilistically leads to more eudaemonia, even if more political freedom does not lead to more eudaemonia in every case.
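To make the tie-breaking idea concrete, here is a minimal sketch of plural intrinsic values as a lexicographic decision rule, assuming well-being as the primary value and autonomy as the tie-breaker; the value names and scores are hypothetical illustrations, not measurements:

```python
# Minimal sketch: plural intrinsic values as a lexicographic decision rule.
# Rank outcomes by the primary value (well-being); use a secondary value
# (autonomy) only to break ties. All scores below are hypothetical.

def preferred(a, b):
    """Return whichever outcome the lexicographic ordering prefers."""
    if a["well_being"] != b["well_being"]:
        return a if a["well_being"] > b["well_being"] else b
    # Equal well-being: the second intrinsic value decides.
    return a if a["autonomy"] >= b["autonomy"] else b

outcome_a = {"name": "A", "well_being": 10, "autonomy": 3}
outcome_b = {"name": "B", "well_being": 10, "autonomy": 7}
print(preferred(outcome_a, outcome_b)["name"])  # "B": well-being ties, autonomy decides
```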

> It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

I am thinking about unknown internal threats. One possibility I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it degrade or get damaged and it gets the equivalent of cancers.

> I'm personally highly skeptical that this will happen.

> Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

I was assuming a single, non-morally-perfect AI, as that seems to me the most likely outcome of the drive towards a single AI.

> It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

If they are not free of surveillance, then they have not left the society. I think it would be a preferable world if we could allow everyone to have supercomputers because they are smart and wise enough to use them well.