
kbog comments on Towards a measure of Autonomy and what it means for EA - Effective Altruism Forum


Comment author: kbog  (EA Profile) 21 July 2017 04:04:18PM *  4 points [-]

Some of the reasons you gave in favor of autonomy come from a perspective of subjective pragmatic normativity rather than universal moral values, and don't make as much sense when society as a whole is analyzed. E.g.:

You disagree with the larger system for moral reasons, for example if it is using slavery or polluting the seas. You may wish to opt out of the larger system in whole or in part so you are not contributing to the activity you disagree with.

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

The larger system is hostile to you. It is an authoritarian or racist government. There are plenty of examples of this happening in history, so it will probably happen again.

Individuals could be disruptive or racist, and the government ought to restrain their ability to be hostile towards society.

So when we decide how to alter society as a whole, it's not clear that more autonomy is a good thing. We might be erring on different sides of the line in different contexts.

Moreover, I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values. So we should just think about how to reduce catastrophic risks and how to improve the economic welfare of everyone whose jobs were automated. Autonomy may play a role in these contexts, but it will then be context-specific, so our definition of it and analysis of it should be contextual as well.

The autonomy view vastly prefers a certain outcome to the AI risk question. It is not in favour of creating a single AI that looks after us all (especially not by uploading)

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors. If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive. I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

I think this is the kind of problem you frequently get when you construct an explicit value out of something which was originally grounded in purely instrumental terms - you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

Comment author: WillPearson 21 July 2017 06:11:49PM *  0 points [-]

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

Both of these would impinge on the vital sets of others though (slavery directly, pollution by disrupting the natural environment people rely on). So it would still be a bad outcome from the autonomy viewpoint if these things happened.

The autonomy viewpoint only argues that lots of actions should be physically possible for people; not all actions that are physically possible are necessarily morally allowed.

Which of these three scenarios is best?

  1. No one has guns, so no one gets shot.
  2. Everyone has guns, and people get shot because they have accidents.
  3. Everyone has guns, but no one gets shot because they are well trained and smart.

The autonomy viewpoint argues that the third is the best possible outcome and tries to work towards it. There are legitimate uses for guns.

I don't go into how these things should be regulated, as this is a very complicated subject. I'll just point out that to get the robust free society that I want, you would need to not regulate the ability to do these things, but to make sure the incentive structures and education are correct.

I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values.

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is. So the best I can do is either claim it as a value, which I do, or refer to other value systems that people might share. I suppose I could talk about other people who value autonomy in itself. Would you find that convincing?

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors.

Unless it is omniscient I don't see how it will see all threats to itself. It may lose a gamble on the logical induction lottery and make an ill-advised change to itself.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive.

I'm personally highly skeptical that this will happen.

I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

How would that be allowed if those people might create a competitor AI?

you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

Like I said in the article, if I were convinced of decisive strategic advantage, my views of the future would be very different. However, as I am not, I have to think that the future will remain similar to the present in many ways.

Comment author: WillPearson 22 August 2017 05:38:24PM 0 points [-]

This post covers what I want to get out of my autonomy measure. I think I might have pitched it wrong to start with: it is more along the lines of trying to increase the world economy than a base value. It also covers some of my initial forays into the research on whether freedom is psychologically useful (I need to find out how reliable these studies are).

Comment author: kbog  (EA Profile) 16 August 2017 01:06:03PM *  0 points [-]

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is.

The is-ought problem doesn't say that you can't intrinsically value anything. It just says that it's hard. There are lots of ways to argue for intrinsically valuing things, and I have a reason to intrinsically value well-being, so why should I divert attention to something else?

Unless it is omniscient I don't see how it will see all threats to itself.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

Democratic oversight, international cooperation, good values in AI, FDT to facilitate coordination, stuff like that.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

How would that be allowed if those people might create a competitor AI?

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

Comment author: WillPearson 18 August 2017 06:13:11PM *  0 points [-]

The is-ought problem doesn't say that you can't intrinsically value anything

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?

That said, I do think I can argue for a plurality of intrinsic values.

1) They allow you to break ties. If there are two situations with equal well-being, then having a second intrinsic value would give you a way of picking between the two.

2) It might be computationally complicated or informationally complicated to calculate your intrinsic value. Having another value that is not at odds with it and generally correlates with it allows you to optimise for that instead. For example, you could optimise for political freedom, which would probabilistically lead to more eudaemonia, even if more political freedom does not lead to more eudaemonia in every case, since you can't measure the eudaemonia of everyone.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade and/or get damaged and it gets the equivalent of cancers.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen"

I was assuming a single AI that is not morally perfect, as that seems to me like the most likely outcome of the drive towards a single AI.

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

If they are not free of surveillance, then they have not left the society. I think it would be a preferable world if we could allow everyone to have supercomputers because they are smart and wise enough to use them well.

Comment author: kbog  (EA Profile) 20 August 2017 06:34:18PM *  0 points [-]

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?

Still, this is not right. There are plenty of arguments you can give to a paper clipper, such as Kant's argument for the categorical imperative, or Sidgwick's argument for utilitarianism, or many others.

1) They allow you to break ties. If there are two situations with equal well-being, then having a second intrinsic value would give you a way of picking between the two.

I don't see why we should want to break ties, since it presupposes that our preferred metric judges the different options to be equal. Moreover, your pluralist metric will end up with ties too.

2) It might be computationally complicated or informationally complicated to calculate your intrinsic value.

Sure, but that's not an argument for having pluralism over intrinsic values.

I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade and/or get damaged and it gets the equivalent of cancers.

If a single ASI is unstable and liable to collapse, then basically every view would count that as a problem, because it implies destruction of civilization and so on. It doesn't have anything to do with autonomy in particular.

I was assuming a single AI that is not morally perfect, as that seems to me like the most likely outcome of the drive towards a single AI.

An AI not being morally perfect doesn't imply that it would be racist, oppressive, or generally as bad as or worse than existing or alternative institutions.

If they are not free of surveillance, then they have not left the society.

Why should we care about someone's desire to have a supercomputer which doesn't get checked for the presence of dangerous AGI...?

Comment author: WillPearson 21 August 2017 09:02:34AM *  0 points [-]

I think we want different things from our moral systems. I think my morality/values are complicated and best represented by different heuristics that guide how I think or what I aim for. It would take more time than I am willing to invest at the moment to try and explain my views fully.

Why should we care about someone's desire to have a supercomputer which doesn't get checked for the presence of dangerous AGI...?

Why should we care about someone's desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.

Why care about freedom at all?

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer, if I need it to perform science?

What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with one, or at least the benefits of having them outweigh the potential costs.

Comment author: kbog  (EA Profile) 25 August 2017 09:52:27AM 0 points [-]

Why should we care about someone's desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.

If you can do that, sure. Some people might have a problem with it though, because you're probing their personal thoughts.

Why care about freedom at all?

Because people like being free and it keeps society fresh with new ideas.

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer, if I need it to perform science?

Sure. Just don't use it to build a super-AGI that will take over the world.

What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with one, or at least the benefits of having them outweigh the potential costs.

That's because you can't use what is in your pocket to take over the world. Remember that you started this conversation by asking "How would that be allowed if those people might create a competitor AI?" So if you assume that future people can't create a competitor AI, for instance because their computers have no more comparative power to help take over the world than our current computers do, then of course those people can be allowed to do whatever they want and your original question doesn't make sense.

Comment author: WillPearson 25 August 2017 02:16:42PM *  0 points [-]

Why care about freedom at all?

Because people like being free and it keeps society fresh with new ideas.

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer, if I need it to perform science?

Sure. Just don't use it to build a super-AGI that will take over the world.

What if there is a very small risk that I will do so, let's say 0.0000001%? Using something like the arguments for the cosmic inheritance, this could be seen as likely causing a certain amount of astronomical waste. Judged purely on whether people are alive, this seems like a no-go. But if you take into consideration that a society that stops this kind of activity would be less free, and less free for all people throughout history, this is a negative. I am trying to get this negative included in our moral calculus, or else I fear we will optimise it away.