Comment author: Lee_Sharkey 28 August 2017 06:18:19PM *  4 points [-]

Hey kbog, Thanks for this. I think this is well argued. If I may, I'd like to pick some holes. I'm not sure if they are sufficient to swing the argument the other way, but I don't think they're trivial either.

I'm going to use 'autonomy in weapons systems' in favour of 'LAWs' for reasons argued here (see Takeaway 1).

As far as I can tell, almost all of the considerations you give concern inter-state conflict. The intra-state consequences are not explored and I think they deserve to be. Fully autonomous weapons systems potentially obviate the need for a mutually beneficial social contract between the regimes in control of the weapons and the populations over which they rule. All dissent becomes easy to crush. This is patently bad in itself, but it also has consequences for interstate conflict; with less approval needed to go to war, inter-state conflict may increase.

The introduction of weapons systems with high degrees of autonomy poses an arguably serious risk of geopolitical turbulence: it is not clear that all states will develop the capability to produce highly autonomous weapons systems. Those that do not will have to purchase them from more technologically advanced allies willing to sell them. States that find themselves outside of such alliances will be highly vulnerable to attack. This may motivate a nontrivial reshuffling of global military alliances, the outcomes of which are hard to predict. For those without access to these new powerful weapons, one risk mitigation strategy is to develop nuclear weapons, potentially motivating nuclear proliferation.

On your point:

The logic here is a little bit gross, since it's saying that we should make sure that ordinary soldiers like me die for the sake of the greater good of manipulating the political system and it also implies that things like body armor and medics should be banned from the battlefield, but I won't worry about that here because this is a forum full of consequentialists and I honestly think that consequentialist arguments are valid anyway.

My argument here isn't hugely important, but I take some issue with the analogies. I prefer thinking in terms of both actors agreeing on an acceptable level of vulnerability in order to reduce the risk of conflict. In this case, a better analogy is to the Cold War agreement not to build comprehensive ICBM defenses, an analogy which would come out in favour of limiting autonomy in weapons systems. But neither of us is placing much importance on this point overall.

I'd like to unpack this point a little bit:

Third, you might say that LAWs will prompt an arms race in AI, reducing safety. But faster AI development will help us avoid other kinds of risks unrelated to AI, and it will expedite humanity's progress and expansion towards a future with exponentially growing value. Moreover, there is already substantial AI development in civilian sectors as well as non-battlefield military use, and all of these things have competitive dynamics. AGI would have such broad applications that restricting its use in one or two domains is unlikely to make a large difference; after all, economic power is the source of all military power, and international public opinion has nontrivial importance in international relations, and AI can help nations beat their competitors at both.

I believe discourse on AI risks often conflates 'AI arms race' with 'race to the finish'. While these races are certainly linked, and the conflation is therefore justified in some senses, I think it trips up the argument in this case. In an AI arms race, we should be concerned about the safety of non-AGI systems, which may be neglected in an arms race scenario. This weakens the argument that highly autonomous weapons systems might lead to fewer civilian casualties, as this is likely the sort of safety measure that might be neglected when racing to develop weapons systems capable of outdoing the ever more capable weapons of one's rival.

The second sentence only holds if the safety issue is solved, so I don't accept the argument that it will help humanity reach a future exponentially growing in value (at least insofar as we're talking about the long-run future, as there may be some exponential progress in the near term).

It could simply be my reading, but I'm not entirely clear on the point made across the third and fourth sentences, and I don't think they give a compelling case that we shouldn't try to avoid military application or avoid exacerbating race dynamics.

Lastly, while I think you've given a strong case to soften opposition to advancing autonomy in weapons systems, the argument against any regulation of these weapons hasn't been made. Not all actors seek outright bans, and I think it'd be worth acknowledging that (contrary to the title) there are some undesirable things about highly autonomous weapons systems and that we should like to impose some regulations on them, such as, for example, some minimum safety requirements that help reduce civilian casualties.

Overall, I think the first point I made should cause serious pause, and it's the largest single reason I don't agree with your overall argument, as many good points as you make here.

(And to avoid any suspicions: despite arguing on his side, coming from the same city, and having the same rare surname, I am of no known relation to Noel Sharkey of the Stop Killer Robots Campaign, though I confess a pet goal to meet him for a pint one day.)

Comment author: kbog  (EA Profile) 28 August 2017 10:09:41PM *  3 points [-]

Hmm, everything that I mentioned applies to interstate conflict, but not all of it applies only to interstate conflict. Intrastate conflicts might be murkier and harder to analyze, and I think they are something to be looked at, but I'm not sure how much it would modify the main points. The assumptions of the expected utility theory of conflict do get invalidated.

Fully autonomous weapons systems potentially obviate the need for a mutually beneficial social contract between the regimes in control of the weapons and the populations over which they rule. All dissent becomes easy to crush.

Well, firstly, I am of the opinion that most instances of violent resistance against governments in history were unjustified, and that a general reduction in revolutionary violence would do more good than harm. Peaceful resistance is more effective at political change than violent resistance anyway (https://www.psychologytoday.com/blog/sex-murder-and-the-meaning-life/201404/violent-versus-nonviolent-revolutions-which-way-wins). You could argue that governments will become more oppressive and less responsive to peaceful resistance if they have better security against hypothetical revolutions, though I don't particularly expect this to happen, at least in the first world.

Second, this doesn't have much to do with autonomous weapons in particular. It applies to all methods by which the government can suppress dissent, all military and police equipment.

Third, lethal force is a small and rare part of suppressing protests and dissent as long as full-fledged rebellion doesn't break out. Modern riot police are equipped with nonlethal weapons; we can expect that any country with the ability to deploy robots would have professional capabilities for riot control and the deployment of nonlethal weapons. And crowd control is based more on psychology and appearances than on the application of kinetic force.

Finally, even when violent rebellion does break out, nonstate actors such as terrorists and rebels are outgunned anyway. Governments trying to pacify rebellions need to work with the local population, gather intelligence, and assert their legitimacy in the eyes of the populace. Lethal autonomous weapons are terrible for all of these things. They would be very good for the application of quick precise firepower at low risk to friendly forces, but that is far from the greatest problem faced by governments seeking to suppress dissent.

The one thing that implies that rebellion would become less frequent in a country with LAWs is that an army of AGI robots could allow leadership to stop a rebellion without worrying about the loyalty of police and soldiers. By that time, we should probably just make sure that machines have ethical guidelines not to kill their own people, not to support evil governments, and so on. I can see this being a problem, but it's a little too far out and speculative to make plans around it.

This is patently bad in itself, but it also has consequences for interstate conflict; with less approval needed to go to war, inter-state conflict may increase.

The opposite is at least as likely. Nations often go to war in order to maintain legitimacy in the eyes of the population. Argentina's Falklands venture was a good example of this 'diversionary foreign policy' (https://en.wikipedia.org/wiki/Diversionary_foreign_policy).

The introduction of weapons systems with high degrees of autonomy poses an arguably serious risk of geopolitical turbulence: it is not clear that all states will develop the capability to produce highly autonomous weapons systems. Those that do not will have to purchase them from more technologically advanced allies willing to sell them. States that find themselves outside of such alliances will be highly vulnerable to attack. This may motivate a nontrivial reshuffling of global military alliances, the outcomes of which are hard to predict.

How would AI be any different here from other kinds of technological progress? And I don't think that the advent of new military technology has major impacts on geopolitical alliances. I actually cannot think of a case where alliances shifted because of new military technology. Military exports and license production are common among non-allies, and few alliances lack advanced industrial powers; right now there are very few countries in the world which are not on good enough terms with at least one highly developed military power to buy weapons from them.

In an AI arms race, we should be concerned about the safety of non-AGI systems, which may be neglected in an arms race scenario. This weakens the argument that highly autonomous weapons systems might lead to fewer civilian casualties, as this is likely the sort of safety measure that might be neglected when racing to develop weapons systems capable of outdoing the ever more capable weapons of one's rival.

But the same dynamic is present when nations compete with non-AI weapons. The demand for potent firepower implies that systems will cause collateral damage and that soldiers will not be as well trained or as disciplined on rules of engagement as they could be.

The second sentence only holds if the safety issue is solved, so I don't accept the argument that it will help humanity reach a future exponentially growing in value (at least insofar as we're talking about the long-run future, as there may be some exponential progress in the near term).

Well, of course nothing matters if there is an existential catastrophe. But you can't go into this with the assumption that AI will cause an existential catastrophe. It likely won't, and in the scenarios where it doesn't, quicker AI development is likely better. Does this mean that AI should be developed quicker, all things considered? I don't know; I'm just saying that overall it's not clear that it should be developed more slowly.

It could simply be my reading, but I'm not entirely clear on the point made across the third and fourth sentences, and I don't think they give a compelling case that we shouldn't try to avoid military application or avoid exacerbating race dynamics.

I just mean that military use is a comparatively small part of the overall pressure towards quicker AI development.

Lastly, while I think you've given a strong case to soften opposition to advancing autonomy in weapons systems, the argument against any regulation of these weapons hasn't been made. Not all actors seek outright bans, and I think it'd be worth acknowledging that (contrary to the title) there are some undesirable things about highly autonomous weapons systems and that we should like to impose some regulations on them, such as, for example, some minimum safety requirements that help reduce civilian casualties.

There are things that are wrong with AI weapons in that they are, after all, weapons, and there is always something wrong with weapons. But I think there is nothing that makes AI weapons overall worse than ordinary ones.

I don't think that regulating them is necessarily bad. I did say at the end that testing, lobbying, international watchdogs, etc. are the right direction to go in. I haven't thought this through, but my first instinct is to say that autonomous systems should simply follow all the same regulations and laws that soldiers do today. Whenever a nation ratifies an international treaty on military conduct, such as the Geneva Convention, its norms should apply to autonomous systems as well as soldiers. That sounds sufficient to me, at first glance.

Nothing Wrong With AI Weapons

By Kyle Bogosian: With all the recent worries over AI risks, a lot of people have raised fears about lethal autonomous weapons (LAWs) which take the place of soldiers on the battlefield. Specifically, in the news recently: Elon Musk and over 100 experts requested that the UN implement a ban....
Comment author: WillPearson 21 August 2017 09:02:34AM *  0 points [-]

I think we want different things from our moral systems. I think my morality/values are complicated and best represented by different heuristics that guide how I think or what I aim for. It would take more time than I am willing to invest at the moment to try to explain my views fully.

Why should we care about someone's desire to have a supercomputer which doesn't get checked for the presence of dangerous AGI...?

Why should we care about someone's desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.

Why care about freedom at all?

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer, if I need it to perform science?

What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with it, or at least the benefits of having them outweigh the potential costs.

Comment author: kbog  (EA Profile) 25 August 2017 09:52:27AM 0 points [-]

Why should we care about someone's desire to have their thoughts not checked for the presence of malicious genius? They may use their thinking to create something equally dangerous that we have not yet thought of.

If you can do that, sure. Some people might have a problem with it though, because you're probing their personal thoughts.

Why care about freedom at all?

Because people like being free and it keeps society fresh with new ideas.

If I upload and then want to take a spaceship somewhere hard to monitor, will I be allowed to take a supercomputer, if I need it to perform science?

Sure. Just don't use it to build a super-AGI that will take over the world.

What is in my pocket was once considered a dangerous supercomputer. The majority of the world is now trusted with it, or at least the benefits of having them outweigh the potential costs.

That's because you can't use what is in your pocket to take over the world. Remember that you started this conversation by asking "How would that be allowed if those people might create a competitor AI?" So if you assume that future people can't create a competitor AI, for instance because their computers have no more comparative power to help take over the world than our current computers do, then of course those people can be allowed to do whatever they want and your original question doesn't make sense.

Comment author: WillPearson 18 August 2017 06:13:11PM *  0 points [-]

The is-ought problem doesn't say that you can't intrinsically value anything

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?

That said I do think I can argue for a plurality of intrinsic values.

1) They allow you to break ties. If there are two situations with equal well-being, then having a second intrinsic value would give you a way of picking between the two.

2) It might be computationally complicated or informationally complicated to calculate your intrinsic value. Having another that is not at odds with it and generally correlates with it allows you to optimise for that instead. For example, you could optimise for political freedom, which would probabilistically lead to more eudaemonia, even if more political freedom does not lead to more eudaemonia in every case, since you can't measure the eudaemonia of everyone.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade and/or get damaged and it gets the equivalent of cancers.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen"

I was assuming a single, non-morally-perfect AI, as that seems to me like the most likely outcome of the drive to a single AI.

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

If they are not free of surveillance, then they have not left the society. I think it would be a preferable world if we could allow everyone to have supercomputers because they are smart and wise enough to use them well.

Comment author: kbog  (EA Profile) 20 August 2017 06:34:18PM *  0 points [-]

I never said it did; I said it means I can't argue that you should intrinsically value anything. What arguments could I give to a paper clipper to stop its paper-clipping ways?

Still, this is not right. There are plenty of arguments you can give to a paper clipper, such as Kant's argument for the categorical imperative, or Sidgwick's argument for utilitarianism, or many others.

1) They allow you to break ties. If there are two situations with equal well-being, then having a second intrinsic value would give you a way of picking between the two.

I don't see why we should want to break ties, since it presupposes that our preferred metric judges the different options to be equal. Moreover, your pluralist metric will end up with ties too.

2) It might be computationally complicated or informationally complicated to calculate your intrinsic value.

Sure, but that's not an argument for having pluralism over intrinsic values.

I am thinking about unknown internal threats. One possibility that I alluded to is that it modifies itself to improve itself, but does so on a shaky premise and destroys itself. Another possibility is that parts of it may degrade and/or get damaged and it gets the equivalent of cancers.

If a single ASI is unstable and liable to collapse, then basically every view would count that as a problem, because it implies destruction of civilization and so on. It doesn't have anything to do with autonomy in particular.

I was assuming a single, non-morally-perfect AI, as that seems to me like the most likely outcome of the drive to a single AI.

An AI being non-morally perfect doesn't imply that it would be racist, oppressive, or generally as bad as or worse than existing or alternative institutions.

If they are not free of surveillance, then they have not left the society.

Why should we care about someone's desire to have a supercomputer which doesn't get checked for the presence of dangerous AGI...?

Comment author: Brian_Tomasik 27 July 2017 02:54:58AM *  2 points [-]

I think a default assumption should be that works by individual authors don't necessarily reflect the views of the organization they're part of. :) Indeed, Luke's report says this explicitly:

the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.

Of course, there is nonzero Bayesian evidence in the sense that an organization is unlikely to publish a viewpoint that it finds completely misguided.

When FRI put my consciousness pieces on its site, we were planning to add a counterpart article (I think defending type-F monism or something) to have more balance, but that latter article never got written.

Comment author: kbog  (EA Profile) 16 August 2017 01:12:49PM *  1 point [-]

MIRI/FHI have never published anything which talks about any view of consciousness. There is a huge difference between inferring a view from things that people happen to write outside of the organization and the actual research being published by the organization. In the second case, it's relevant to the research, whether it's an official position of the organization or not. In the first case, it's not obvious why it's relevant at all.

Luke affirmed elsewhere that Open Phil really heavily leans towards his view on consciousness and moral status.

Comment author: WillPearson 21 July 2017 06:11:49PM *  0 points [-]

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

Both of these would impinge on the vital sets of others though (slavery directly, pollution by disrupting the natural environment people rely on). So it would still be a bad outcome from the autonomy viewpoint if these things happened.

The autonomy viewpoint only argues that lots of actions should be physically possible for people; not all actions that are physically possible are necessarily morally allowed.

Which of these three scenarios is best?

  1. No one has guns so no-one gets shot
  2. Everyone has guns and people get shot because they have accidents
  3. Everyone has guns but no-one gets shot because they are well trained and smart.

The autonomy viewpoint argues that the third is the best possible outcome and tries to work towards it. There are legitimate uses for guns.

I don't go into how these things should be regulated, as this is a very complicated subject. I'll just point out that to get the robust free society that I want, you would need to not regulate the ability to do these things, but make sure the incentive structures and education are correct.

I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values.

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is. So the best I can do is claim it as a value, which I do, and refer to other value systems that people might share. I suppose I could talk about other people who value autonomy in itself. Would you find that convincing?

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors.

Unless it is omniscient I don't see how it will see all threats to itself. It may lose a gamble on the logical induction lottery and make an ill-advised change to itself.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive.

I'm personally highly skeptical that this will happen.

I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

How would that be allowed if those people might create a competitor AI?

you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.

Like I said in the article, if I were convinced of decisive strategic advantage, my views of the future would be very different. However, as I am not, I have to think that the future will remain similar to the present in many ways.

Comment author: kbog  (EA Profile) 16 August 2017 01:06:03PM *  0 points [-]

I don't think I can argue for intrinsically valuing anything. I agree with not being able to argue ought from is.

The is-ought problem doesn't say that you can't intrinsically value anything. It just says that it's hard. There are lots of ways to argue for intrinsically valuing things, and I have a reason to intrinsically value well-being, so why should I divert attention to something else?

Unless it is omniscient I don't see how it will see all threats to itself.

It will see most threats to itself in virtue of being very intelligent and having a lot of data, and will have a much easier time by not being in direct competition. Basically all known x-risks can be eliminated if you have zero coordination and competition problems.

Also, what happens if a decisive strategic advantage is not possible and this hypothetical single AI does not come into existence? What is the strategy for that chunk of probability space?

Democratic oversight, international cooperation, good values in AI, functional decision theory (FDT) to facilitate coordination, stuff like that.

I'm personally highly skeptical that this will happen.

Okay, but the question was "is a single AI a good thing," not "will a single AI happen".

How would that be allowed if those people might create a competitor AI?

It will be allowed by allowing them to exist without allowing them to create a competitor AI. What specific part of this do you think would be difficult? Do you think that everyone who is allowed to exist must have access to supercomputers free of surveillance?

Comment author: MikeJohnson 23 July 2017 08:57:34PM 2 points [-]

My sense is that MIRI and FHI are fairly strong believers in functionalism, based on reading various pieces on LessWrong, personal conversations with people who work there, and 'revealed preference' research directions. OpenPhil may be more of a stretch to categorize in this way; I'm going off what I recall of Holden's debate on AI risk, some limited personal interactions with people who work there, and Luke Muehlhauser's report (he was up-front about his assumptions on this).

Of course it's harder to pin down what people at these organizations believe than it is in Brian's case, since Brian writes a great deal about his views.

So to my knowledge, this statement is essentially correct, although there may be definitional & epistemological quibbles.

Comment author: kbog  (EA Profile) 26 July 2017 12:05:44AM *  2 points [-]

Well, I think there is a big difference between FRI, where the point of view is at the forefront of their work and explicitly stated in research, and MIRI/FHI, where it's secondary to their main work and is only something which is inferred on the basis of what their researchers happen to believe. Plus, as Kaj said, you can be a functionalist without being all subjectivist about it.

But Open Phil does seem to have this view now to at least the same extent as FRI does (cf. Muehlhauser's consciousness document).

Comment author: kbog  (EA Profile) 23 July 2017 07:07:12PM *  2 points [-]

much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil.

I'm a little confused here. Where does MIRI or FHI say anything about consciousness, much less assume any particular view?

Comment author: SoerenMind  (EA Profile) 21 July 2017 04:03:08PM 6 points [-]

As far as I can see, that's just functionalism/physicalism plus moral anti-realism, which are both well-respected. But as philosophy of mind and moral philosophy are separate fields, you won't see much discussion of the intersection of these views. Completely agreed if you do assume the position is wrong.

Comment author: kbog  (EA Profile) 21 July 2017 04:20:14PM *  6 points [-]

I think the choice of a metaethical view is less important than you think. Anti-realism is frequently a much richer view than just talking about preferences. It says that our moral statements aren't truth-apt, but just because our statements aren't truth-apt doesn't mean they're merely about preferences. Anti-realists can give accounts of why a rigorous moral theory is justified and is the right one to follow, not much different from how realists can. Conversely, you could even be a moral realist who believes that moral status boils down to which computations you happen to care about. Anyway, the point is that anti-realists can take pretty much any view in normative ethics, and justify those views in mostly the same ways that realists tend to justify their views (i.e. reasons other than personal preference). Just because we're not talking about whether a moral principle is true or not doesn't mean that we can no longer use the same basic reasons and arguments in favor of or against that principle. Those reasons will just have a different meaning.

Plus, physicalism is a weaker assertion than the view that consciousness is merely a matter of computation or information processing. Consciousness could be reducible to physical phenomena without being reducible to computational steps. (eta: this is probably what most physicalists think.)

Comment author: kbog  (EA Profile) 21 July 2017 04:04:18PM *  4 points [-]

Some of the reasons you gave in favor of autonomy come from a perspective of subjective pragmatic normativity rather than universal moral values, and don't make as much sense when society as a whole is analyzed. E.g.:

You disagree with the larger system for moral reasons, for example if it is using slavery or polluting the seas. You may wish to opt out of the larger system in whole or in part so you are not contributing to the activity you disagree with.

But it's equally plausible that the larger system will be enforcing morally correct standards and a minority of individuals will want to do something wrong (like slavery or pollution).

The larger system is hostile to you. It is an authoritarian or racist government. There are plenty examples of this happening in history, so it will probably happen again.

Individuals could be disruptive or racist, and the government ought to restrain their ability to be hostile towards society.

So when we decide how to alter society as a whole, it's not clear that more autonomy is a good thing. We might be erring on different sides of the line in different contexts.

Moreover, I don't see a reason that we ought to intrinsically value autonomy. The reasons you gave only support autonomy instrumentally through other values. So we should just think about how to reduce catastrophic risks and how to improve the economic welfare of everyone whose jobs were automated. Autonomy may play a role in these contexts, but it will then be context-specific, so our definition of it and analysis of it should be contextual as well.

The autonomy view vastly prefers a certain outcome to the AI risk question. It is not in favour of creating a single AI that looks after us all (especially not by uploading)

But by the original criteria, a single AI would (probably) be robust to catastrophe due to being extremely intelligent and having no local competitors. If it is a good friendly AI, then it will treat people as they deserve, not on the basis of thin economic need, and likewise it will always be morally correct. It won't be racist or oppressive. I bet no one will want to leave its society, but if we think that that right is important then we can design an AI which allows for that right.

I think this is the kind of problem you frequently get when you construct an explicit value out of something which was originally grounded in purely instrumental terms - you reach some inappropriate conclusions because future scenarios are often different from present ones in ways that remove the importance of our present social constructs.
