
Autonomy is a human value that is referenced in discussions around effective altruism. However, I have not seen any attempt to formalise autonomy so that we can discern the impact our decisions will have on it in the future.

 

 *Epistemic Status: Exploratory*

  

In this article I shall introduce a relatively formal measure of autonomy, based on the intuition that autonomy is the ability to do things by yourself with what you have. The measure treats autonomy as a matter of degree rather than something black and white, so it lets you talk about moving from less to more of it. I shall then discuss how increasing autonomy fits in with the values of movements such as poverty reduction, AI risk reduction and the reduction of suffering.

 

Autonomy is not naturally encouraged by the capitalist system, due to the incentives involved. So if we wish for a future with increased autonomy, we need to think explicitly about what it is and how best to promote it.

 

Making autonomy an explicit value 

 

Part of effective altruism is finding shared human values that we can work towards together. Whether it is existential risk reduction or the reduction of suffering, our endeavours are underpinned by our shared values.

 

Autonomy is one of those values. It has not been made fully explicit, and so I think aspects of it have been neglected by the effective altruism community. With that in mind I want to propose a way of measuring autonomy, to spark further discussion.

 

Autonomy is valuable in many value systems that do not make it a primary value, as it allows you to exist outside the dominant economic and political systems. There are lots of reasons you might want to do so, including:

 

  • The larger system is fragile and you want to insulate parts of it from catastrophic failure elsewhere in the system (reducing existential risk by making the system more resilient to the loss of some of its parts).
  • The larger system has no need for you, for example if you are slowly becoming less economically valuable as more jobs are automated. If something like universal basic income is not implemented, then becoming more autonomous might be the only way to survive.
  • You disagree with the larger system for moral reasons, for example if it is using slavery or polluting the seas. You may wish to opt out of the larger system in whole or in part so you are not contributing to the activity you disagree with.
  • The larger system is hostile to you, for example an authoritarian or racist government. There are plenty of examples of this happening in history, so it will probably happen again.
  • You wish to go somewhere outside the dominant system, for example to live in space.

 

Concepts around Autonomy

 

Autonomy, by my definition, is the ability to do a thing by yourself. An example of something you can (probably) do autonomously is opening a door. You need no-one's help to walk over and manipulate the door in such a way that it opens. Not all everyday activities are so simple; things become more complicated when you talk about switching on a light. You can throw the light switch by yourself, but you still rely on an electricity grid maintained by humans (unless you happen to be off-grid) for the light to come on. You do not make the light go on by yourself. The other agents in the process are hidden from you and know nothing of your action at the light switch (apart from a very small blip in energy usage), but they are still required for the light to go on. So you cannot turn on the light autonomously.

 

A useful mental concept when talking about autonomy is the capability footprint of an activity: roughly, the volume of physical space required to do that activity. If that footprint contains other agents (or actors maintained by another agent), then you are not autonomous in that activity. If agents are involved but there are sufficient numbers of them and they have an incentive to carry on doing what they are doing, then you can treat them as part of the environment; you only lose autonomy when agents can decide to stop you performing an action. An example of relying on lots of agents while performing a task that seems autonomous is breathing. We rely on plants to produce the oxygen we need, but there is little chance that all the plants will one day decide to stop producing oxygen, so we can still be said to breathe autonomously. The free market in its idealised state can be seen the same way (if one company decides to stop selling you a product, another will fill the gap). However, in the real world natural monopolies, government regulation, intellectual property and changing economic conditions might mean that products are no longer available to you. So we are going to assume that no humans can be involved in an autonomous activity.

 

So for our light switch example, the footprint includes the wiring in your house, the wiring of the grid, the people maintaining the grid and the people maintaining the power stations (and the miners and rig workers supplying them). So you are not autonomous in that activity.

 

Another example is navigation. If you rely on GPS, your capability footprint expands to include the satellites in orbit; these are actors maintained by another agent, who may stop maintaining them (or degrade their performance) on a whim. If you rely on a compass and map, your capability footprint expands to include the molten iron core of the Earth, but you are autonomous, at least with regard to this activity, because that does not rely on another agent.

 

Making individual activities autonomous is not, by itself, very interesting. For example, being able to navigate autonomously does not insulate you from catastrophe if you cannot produce your own food. So we need the concept of the important activities in your life: the things that sustain you and give your life meaning. Call these the vital set of activities. We can get an idea of how much you rely on others by taking the capability footprint of each non-autonomous activity in the vital set and forming their union, giving a non-autonomous vital footprint. This is something we can try to minimise: the larger your vital footprint, the more easily your essential activities can be disrupted and the more agents you rely upon. But it doesn't capture everything people value. People, myself included, choose to use Google rather than set up their own mail servers. So we need to look at what is inside each vital set to get at the reason why.
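To make the bookkeeping concrete, here is a minimal sketch in Python of the idea. Everything in it (the activities, regions and agents) is a made-up illustration rather than part of the measure itself: an activity counts as autonomous if no other agent in its footprint can decide to stop it, and the non-autonomous vital footprint is the union of the footprints of the remaining activities.

```python
# A toy model of the concepts above. All activities, regions and agents
# are hypothetical placeholders chosen for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class Activity:
    name: str
    # Physical regions the activity depends on (its capability footprint).
    regions: frozenset
    # Agents within that footprint who could decide to stop the activity.
    controlling_agents: frozenset = frozenset()

    def is_autonomous(self) -> bool:
        # You only lose autonomy when another agent can withdraw
        # something the activity needs.
        return not self.controlling_agents


# A hypothetical vital set: the activities that sustain you.
vital_set = {
    Activity("navigate by map and compass", frozenset({"local terrain"})),
    Activity("grow food", frozenset({"garden"})),
    Activity("light the house from the grid",
             frozenset({"house wiring", "grid", "power stations"}),
             frozenset({"grid operators", "power station owners"})),
}

# The non-autonomous vital footprint: the union of the footprints of the
# activities in the vital set that still depend on other agents.
non_autonomous_vital_footprint = frozenset().union(
    *(a.regions for a in vital_set if not a.is_autonomous())
)
print(non_autonomous_vital_footprint)
# e.g. frozenset({'house wiring', 'grid', 'power stations'})
```

Shrinking the vital footprint then corresponds to removing regions (and the agents behind them) from that union, for example by generating and storing your own power.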

 

Some vital capability sets allow you to do more things than others: you can achieve a very small footprint if you adopt stone age technology, but the number of activities you can do is limited. Being able to do more things is better, as it makes us more adaptable, so we need a measure that captures that. The vital capability set has a size, the number of activities you can perform, and we can divide that by the footprint to get the vital capability density. This measure captures both intuitions: that doing more things is good, and that doing things which are less spread out and intermingled with other people is good.
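Continuing the sketch above, and using invented numbers for the physical volume of each region, the vital capability density is just the size of the vital set divided by the volume of the non-autonomous vital footprint:

```python
# Continuing the sketch above; the volumes are invented for illustration.
region_volume_m3 = {
    "house wiring": 1.0,
    "grid": 5.0e7,
    "power stations": 1.0e6,
}

footprint_volume = sum(region_volume_m3[r]
                       for r in non_autonomous_vital_footprint)

# Vital capability density: activities you can perform per unit volume of
# non-autonomous footprint. Higher is better on this measure.
vital_capability_density = len(vital_set) / footprint_volume
print(vital_capability_density)
```

On this toy measure, adding a new autonomous activity raises the density, while making an existing activity depend on a distant supply chain lowers it.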

 

The history of human development has been one of increasing both the vital capability footprint and the size of the vital capability set, and the vital capability density has probably been going up (these things are hard to measure; it is easy to see the direction of change but less easy to measure the magnitudes). The current economic and political system seems very good at increasing the vital capability set, so there is little need for us to do work in that direction. But the vital capability footprint has been expanding, which is the wrong direction, and it seems set to keep doing so, because shrinking it is not incentivised by our current system.

 

Companies are incentivised to keep control of their products and revenue streams so that they can get a return on their investment and stay solvent; trade is the heart of capitalism. This might mean pushing customers towards a larger vital capability footprint. You can see this in the transition from shrink-wrapped software that you own to Software as a Service. Some products do move towards shrinking the footprint, such as solar panels. However, what you really need in order to be independent is the ability to manufacture and recycle solar panels yourself, otherwise you are only energy independent for the lifetime of those panels.

 

The only people likely to work on technologies that reduce the vital capability footprint are the military and the space industry, neither of which will necessarily democratise the technology or has incentives to make things autonomous over the long term.

 

So there is potential work here to improve the human condition that would not get done otherwise. We can try to help people shrink their vital footprint while maintaining or expanding their vital capability set, with the goal of allowing each human to increase their vital capability density over the long term. This is what I will mean when I talk about increasing humanity's autonomy.

 

What is to be done?

 

To increase humanity's autonomy successfully, we will need to figure out how to prevent the negative consequences of making people more independent and capable, and to create advanced technologies that do not exist today.

 

Pinker has hypothesised that the increased interdependence of our society is what has led to the long peace: the story goes that because we rely on other people for things necessary for our livelihoods, we do not want to disrupt their business, as that disrupts our own lives. We would lose that mechanism, if it is important. There is also the risk of giving people increased ability: they might do things by accident that have large negative consequences for other people's lives, such as releasing deadly viruses. So we need to make sure we have found ways to mitigate these scenarios, so that increased autonomy does not lead to more chaos and strife.

 

This is more pertinent when you consider what technologies are needed to reduce the vital footprint. The most important one is intelligence augmentation. Our economy is currently as distributed as it is partly because of the complexity of dealing with all the myriad things we create and how to create them. People and companies specialise in doing a few things and doing them well, because that reduces the complexity they need to manage. So to reduce the size of the vital footprint, you need to enable people to do more things, which means increasing their ability to manage complexity, which means intelligence augmentation. What exactly this looks like is not known at this time. Initiatives like Neuralink seem like part of the solution. We would also need computers we would actually want to interface with: ones that are more resistant to subversion and less reliant on human maintenance. We need to deal with the issue of aligning these systems with our goals, so that they are not external agents (reducing our autonomy), and also with the issues around potential intelligence explosions. I am working on these questions, but more people would be welcome.

 

Making ourselves reliant on a piece of computer hardware that we cannot make ourselves would not decrease our vital footprint, so we need to be able to manufacture such hardware ourselves. Factories and recycling facilities are vast, so we would need to shrink these too. There are already hobbyist movements for decentralising some manufacturing, such as 3D printing, but other manufacturing is still heavily centralised, such as solar panel construction, metal smelting and chip manufacturing. We do not have any obvious current pathways to decentralisation for these things. You also want to make sure everything is as recyclable as possible; if not, your vital footprint grows to include both the mines and the places where you dispose of your rubbish.

 

Other EA views and autonomy

 

I don't take increasing autonomy to be the only human value, but it is interesting to think about how it, taken by itself, might interact with other goals of the effective altruist community. Each of these probably deserves its own essay, but a brief sketch will have to do for now.

 

AI risk

 

The autonomy view strongly prefers a certain outcome on the AI risk question. It is not in favour of creating a single AI that looks after us all (especially not by uploading), but prefers the outcome where everyone is augmented and we create the future together. However, if decisive strategic advantages are possible and humans will definitely seek them out, or create agents that do so, then creating an AI to save us from that fate may be preferable. But trying to find a way that does not involve that is a high priority of the autonomy view.

 

Poverty reduction

 

This can be seen as bringing everyone up to a more similar set of vital activities, so it is not in conflict with the autonomy view. The autonomy view's focus on decreasing the footprint also points at something to do even once everyone has a similar vital set. Aiming for that might be something currently neglected which could have a large impact. For example, the kind of locally operated, self-sustaining solar panel manufacturing and recycling discussed earlier could have a large impact on Africa's development. Also, trying to reduce poverty by getting people involved in a global economic system that seems likely to have less need of people in the future may not be the most effective long-term strategy.

 

Suffering reduction

 

Increasing every human's vital set to be similar should allow everyone to do the activities needed to avoid the same kinds of suffering, so in this regard the views are compatible.

 

However, my view of autonomy is not currently universal, in that I am not trying to increase the autonomy of animals in the same way that I want to increase humanity's. I'm not sure what it would look like to try to give hens the same autonomy as humans. This is partly because I rely on people choosing more autonomy, and I'm not sure how I could communicate that choice to a hen. It is also because humans are currently not very autonomous, so the work to help them alone seems enormous. Perhaps the circle of moral concern will expand as autonomy becomes easier.

 

In conclusion

 

I hope I have given you an interesting view of autonomy. I mainly hope to spark a discussion of what it means to be autonomous. I look forward to other people’s views on whether I have captured the aspects of autonomy important to them.

 

Thanks to my partner for great philosophical discussions about this concept with me, and to someone from an EA meetup in London who saw that I was mainly talking about autonomy and inspired me to try to be explicit about what I cared about.

Comments

Some of the reasons you gave in favor of autonomy come from a perspective of subjective pragmatic normativity rather than universal moral values, and don't make as much sense when society as a whole is analyzed. E.g.:

You disagree with the larger system for moral reasons, for example if it is using slavery or polluting the seas. You may wish to opt out of the larger system in whole or in part so you are not contributing to the activity you disagree with.

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

The larger system is hostile to you, for example an authoritarian or racist government. There are plenty of examples of this happening in history, so it will probably happen again.

Individuals could be disruptive or racist, and the government ought to restrain their ability to be hostile towards society.

So when we decide how to alter society as a whole, it's not clear that more autonomy is a good thing. We might be erring on different sides of the line in different contexts.

Moreover, I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values. So we should just think about how to reduce catastrophic risks and how to improve the economic welfare of everyone whose jobs were automated. Autonomy may play a role in these contexts, but it will then be context-specific, so our definition of it and analysis of it should be contextual as well.

The autonomy view strongly prefers a certain outcome on the AI risk question. It is not in favour of creating a single AI that looks after us all (especially not by uploading)

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors. If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive. I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

I think this is the kind of problem you frequently get when you construct an explicit value out of something which was originally grounded in purely instrumental terms - you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

Both of these would impinge on the vital sets of others though (slavery directly, pollution by disrupting the natural environment people rely on). So it would still be a bad outcome from the autonomy viewpoint if these things happened.

The autonomy viewpoint only argues that lots of actions should be physically possible for people; not all actions that are physically possible are necessarily morally allowed.

Which of these three scenarios is best?

  1. No one has guns so no-one gets shot
  2. Everyone has guns and people get shot because they have accidents
  3. Everyone has guns but no-one gets shot because they are well trained and smart.

The autonomy viewpoint argues that the third is the best possible outcome and tries to work towards it. There are legitimate uses for guns.

I don't go into how these things should be regulated, as this is a very complicated subject. I'll just point out that to get the robust free society that I want, you would need not to regulate away the ability to do these things, but to make sure the incentive structures and education are correct.

I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values.

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is. So the best I can do is either claim it as a value, which I do, or refer to other value systems that people might share. I suppose I could talk about other people who value autonomy in itself. Would you find that convincing?

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors.

Unless it is omniscient I don't see how it will see all threats to itself. It may lose a gamble on the logical induction lottery and make an ill-advised change to itself.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive.

I'm personally highly skeptical that this will happen.

I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

How would that be allowed if those people might create a competitor AI?

you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

As I said in the article, if I were convinced that decisive strategic advantage is possible, my views of the future would be very different. However, as I am not, I have to think that the future will remain similar to the present in many ways.

This post covers what I want to get out of my autonomy measure. I think I might have pitched it wrong to start with; it is more along the lines of trying to increase the world economy than a base value. It also covers some of my initial forays into the research on whether freedom is psychologically useful (I need to find out how reliable those studies are).

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is.

The is-ought problem doesn't say that you can't intrinsically value anything. It just says that it's hard. There's lots of ways to argue for intrinsically valuing things, and I have a reason to intrinsically value well-being, so why should I divert attention to something else?

Unless it is omniscient I don't see how it will see all threats to itself.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

Democratic oversight, international cooperation, good values in AI, FDT to facilitate coordination, stuff like that.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

How would that be allowed if those people might create a competitor AI?

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

The is-ought problem doesn't say that you can't intrinsically value anything

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?

That said, I do think I can argue for a plurality of intrinsic values.

1) They allow you to break ties. If there are two situations with equal well being then having a second intrinsic value would give you a way of picking between the two.

2) It might be computationally complicated or informationally complicated to calculate your intrinsic value. Having another value that is not at odds with it and generally correlates with it allows you to optimise for that instead. For example, you could optimise for political freedom, which would probabilistically lead to more eudaemonia, even if it is not the case that more political freedom always leads to more eudaemonia in all cases, since you can't measure the eudaemonia of everyone.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade or get damaged and it gets the equivalent of cancers.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen"

I was assuming a single non-morally-perfect AI, as that seems to me like the most likely outcome of the drive towards a single AI.

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

If they are not free of surveillance, then they have not left the society. I think it would be a preferable world if we could allow everyone to have supercomputers because they are smart and wise enough to use them well.

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?

Still, this is not right. There are plenty of arguments you can give to a paper clipper, such as Kant's argument for the categorical imperative, or Sidgwick's argument for utilitarianism, or many others.

1) They allow you to break ties. If there are two situations with equal well being then having a second intrinsic value would give you a way of picking between the two.

I don't see why we should want to break ties, since it presupposes that our preferred metric judges the different options to be equal. Moreover, your pluralist metric will end up with ties too.

2) It might be computationally complicated or informationally complicated to calculate your intrinsic value.

Sure, but that's not an argument for having pluralism over intrinsic values.

I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade or get damaged and it gets the equivalent of cancers.

If a single ASI is unstable and liable to collapse, then basically every view would count that as a problem, because it implies destruction of civilization and so on. It doesn't have anything to do with autonomy in particular.

I was assuming a single non-morally-perfect AI, as that seems to me like the most likely outcome of the drive towards a single AI.

AI being non-morally perfect doesn't imply that it would be racist, oppressive, or generally as bad as or worse than existing or alternative institutions.

If they are not free of surveillance, then they have not left the society.

Why should we care about someone's desire to have a supercomputer which doesn't get checked for the presence of dangerous AGI...?

I think we want different things from our moral systems. I think my morality/values are complicated and best represented by different heuristics that guide how I think or what I aim for. It would take more time than I am willing to invest at the moment to try to explain my views fully.

Why should we care about someone's desire to have a supercomputer which doesn't get checked for the presence of dangerous AGI...?

Why should we care about someone's desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.

Why care about freedom at all?

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a super computer, if I need it to perform science?

What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with it, or at least the benefits of having them outweigh the potential costs.

Why should we care about someone's desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.

If you can do that, sure. Some people might have a problem with it though, because you're probing their personal thoughts.

Why care about freedom at all?

Because people like being free and it keeps society fresh with new ideas.

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a super computer, if I need it to perform science?

Sure. Just don't use it to build a super-AGI that will take over the world.

What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with it, or at least the benefits of having them outweigh the potential costs.

That's because you can't use what is in your pocket to take over the world. Remember that you started this conversation by asking "How would that be allowed if those people might create a competitor AI?" So if you assume that future people can't create a competitor AI, for instance because their computers have no more comparative power to help take over the world than our current computers do, then of course those people can be allowed to do whatever they want and your original question doesn't make sense.

Why care about freedom at all?

Because people like being free and it keeps society fresh with new ideas.

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a super computer, if I need it to perform science?

Sure. Just don't use it to build a super-AGI that will take over the world.

What if there is a very small risk that I will do so, let's say 0.0000001%? Using something like the arguments about our cosmic inheritance, this could be seen as likely to cause a certain amount of astronomical waste. Judged purely on whether people are alive, this seems like a no-go. But if you take into consideration that a society which stops this kind of activity would be less free, and less free for all people throughout its history, this is a negative. I am trying to get this negative included in our moral calculus, else I fear we will optimise it away.

Another discussion and definition of autonomy, by philosopher John Danaher:

Many books and articles have been written on the concept of ‘autonomy’. Generations of philosophers have painstakingly identified necessary and sufficient conditions for its attainment, subjected those conditions to revision and critique, scrapped their original accounts, started again, given up and argued that the concept is devoid of meaning, and so on. I cannot hope to do justice to the richness of the literature on this topic here. Still, it’s important to have at least a rough and ready conception of what autonomy is and the most general (and hopefully least contentious) conditions needed for its attainment.

I have said this before, but I like Joseph Raz’s general account. Like most people, he thinks that an autonomous agent is one who is, in some meaningful sense, the author of their own lives. In order for this to happen, he says that three conditions must be met:

Rationality condition: The agent must have goals/ends and must be able to use their reason to plan the means to achieve those goals/ends.

Optionality condition: The agent must have an adequate range of options from which to choose their goals and their means.

Independence condition: The agent must be free from external coercion and manipulation when choosing and exercising their rationality.

I have mentioned before that you can view these as ‘threshold conditions’, i.e. conditions that simply have to be met in order for an agent to be autonomous, or you can have a slightly more complex view, taking them to define a three dimensional space in which autonomy resides. In other words, you can argue that an agent can have more or less rationality, more or less optionality, and more or less independence. The conditions are satisfied in degrees. This means that agents can be more or less autonomous, and the same overall level of autonomy can be achieved through different combinations of the relevant degrees of satisfaction of the conditions. That’s the view I tend to favour. I think there possibly is a minimum threshold for each condition that must be satisfied in order for an agent to count as autonomous, but I suspect that the cases in which this threshold is not met are pretty stark. The more complicated cases, and the ones that really keep us up at night, arise when someone scores high on one of the conditions but low on another. Are they autonomous or not? There may not be a simple ‘yes’ or ‘no’ answer to that question.

Anyway, using the three conditions we can formulate the following ‘autonomy principle’ or ‘autonomy test’:

Autonomy principle: An agent’s actions are more or less autonomous to the extent that they meet the (i) rationality condition; (ii) optionality condition and (iii) independence condition.

Thanks. I know I need to do more reading around this. This looks like a good place to start.