Comment author: itaibn 08 September 2017 01:00:00AM 1 point [-]

The article on machine learning doesn't discuss the possibility that more people pursuing machine learning jobs could have a net negative effect. It's true that your venue will generally encourage people who will be more considerate of the long-term and altruistic effects of their research, and so will likely have a more positive effect than the average entrant to the field, but if accelerating the development of strong AI is a net negative, then that could outweigh the benefit of the average researcher being more altruistic.

Comment author: kbog  (EA Profile) 08 September 2017 05:03:32AM *  0 points [-]

Accelerating the development of machine intelligence is not a net negative, since it can make the world better and safer at least as much as it poses a risk. The longer it takes for AGI algorithms to be developed, the more advanced hardware and datasets there will be to support an uncontrolled takeoff. Also, the longer it takes for AI leaders to develop AGI, the more time there is for other nations and organizations to catch up, sparking more dangerous competitive dynamics. Finally, even if it were a net negative, the marginal impact of one additional AI researcher is tiny, whereas the marginal impact of one additional AI safety researcher is large, because the latter community is much smaller.

Comment author: kbog  (EA Profile) 04 September 2017 08:18:06PM 1 point [-]

inverse reinforcement learning could allow AI systems to learn to model the current preferences and likely media reactions of populations, allowing new AI propaganda systems to pre-test ideological messaging with much more accuracy, shaping gov't 'talking points', policy rationales, and ads to be much more persuasive.

The same can be said for messages which come from non-government sources. Governments have always had an advantage in resources and laws, so they've always had the high ground in information warfare/propaganda, but at the same time dissenting ideas are frequently spread. I don't see why the balance would be shifted.

Likewise, the big US, UK, EU media conglomerates could weaponize AI ideological engineering systems to shape more effective messaging in their TV, movies, news, books, magazines, music, and web sites -- insofar as they have any ideologies to promote.

Likewise, the same reasoning goes for small and independent media and activist groups.

Compared to other AI applications, suppressing 'wrong-think' and promoting 'right-think' seems relatively easy. It requires nowhere near AGI. Data mining companies such as YouTube, Facebook, and Twitter are already using semi-automatic methods to suppress, censor, and demonetize dissident political opinions. And governments have strong incentives to implement such programs quickly and secretly, without any public oversight (which would undermine their utility by empowering dissidents to develop counter-strategies). Near-term AI ideological control systems don't even have to be as safe as autonomous vehicles, since their accidents, false positives, and value misalignments would be invisible to the public, hidden deep within the national security state.

Yeah, it is a problem, though I don't think I would classify it as an AI safety issue. The real issue is one of control and competition. YouTube is effectively a monopoly and Facebook/Twitter are sort of a duopoly, and all of them are in the same Silicon Valley sphere with the same values and goals. Alternatives have little chance of success because of a combination of network effects and the 'Voat Phenomenon' (any alternative to the default platform will first attract the extreme types who were the first to be ostracized by the main platform, so the alternative platform will forever have a repulsive core community and a tarnished reputation). I'm sure AI can be used as a weapon to either support or dismantle the strength of these institutions; it seems better to approach the problem from a general perspective rather than just an AI one.

Comment author: zdgroff 29 August 2017 09:26:00PM 0 points [-]

All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do.

Couldn't it be the case, though, that you have a number of machines that together fulfill all the organizational and tactical responsibilities of humans, without any one of them having general intelligence? Given that humans already function as cogs in a machine (a point you make very well from your experience), this seems very plausible.

In that case, the intelligence could be fairly narrow, and I would think we should not bet too much on the AIs having a moral compass.

Comment author: kbog  (EA Profile) 30 August 2017 04:27:30AM 0 points [-]

If they are narrow in focus, then it might be easier to provide ethical guidance over their scope of operations.

Comment author: kbog  (EA Profile) 30 August 2017 04:18:46AM *  4 points [-]

I don't think there is a difference between a moral duty and an obligation.

In 2015, there were more than 2000 respondents, right? Does this mean EA is getting smaller??

Comment author: turchin 28 August 2017 10:47:15PM 1 point [-]

I think the title may be technically correct, but it sounds nasty.

On the nitpicking side, I would argue that the AI weapons problem depends mostly on their level of intelligence. If it is just narrow AI, that's fine. However, the greater their intelligence, the greater the danger, and it may reach catastrophic levels before superintelligence is created.

I would also add that a superintelligence created by the military may be perfectly aligned, but still catastrophically dangerous if it is used as a universal weapon, perhaps against another military superintelligence. And the first step toward not creating a military superintelligence is not creating AI weapons.

Comment author: kbog  (EA Profile) 29 August 2017 10:36:33PM *  0 points [-]

I would also add that a superintelligence created by the military may be perfectly aligned, but still catastrophically dangerous if it is used as a universal weapon, perhaps against another military superintelligence.

A superintelligence would have the ability and (probably) interest to shape the entire world. Whether it comes from the military, a corporation, or a government, it will have a compelling instrumental motivation to neutralize other superintelligences.

Comment author: Paul_Christiano 29 August 2017 04:50:34AM *  4 points [-]

The two arguments I most often hear are:

  • Cheap autonomous weapons could greatly decrease the cost of ending life---within a decade they could easily be the cheapest form of terrorism by far, and may eventually be the cheapest form of mass destruction in general. Think insect-sized drones carrying toxins or explosive charges that are lethal if detonated inside the skull.

  • The greater the military significance of AI, the more difficult it becomes for states to share information and coordinate regarding its development. This might be bad news for safety.

Comment author: kbog  (EA Profile) 29 August 2017 09:11:25AM *  2 points [-]

Cheap autonomous weapons could greatly decrease the cost of ending life---within a decade they could easily be the cheapest form of terrorism by far, and may eventually be the cheapest form of mass destruction in general. Think insect-sized drones carrying toxins or explosive charges that are lethal if detonated inside the skull.

That sounds a lot more expensive than bullets. You can already kill someone for a quarter.

If weaponized insect drones become cheap enough for terrorists to build and use, then counterterrorist organizations will be able to acquire large numbers of potent surveillance tools to find and eliminate their centers of fabrication.

You should have a low prior on new military technologies altering the balance of power between different types of groups in a specific way. It hasn't happened much in history, because competing groups can take advantage of these technologies too.

The greater the military significance of AI, the more difficult it becomes for states to share information and coordinate regarding its development. This might be bad news for safety.

Of course the military AIs will be kept secret. But the rest of AI work won't be like that. Cruise liners aren't less safe because militaries are secretive about warship design.

Plus, in Armstrong, Bostrom, and Shulman's "Racing to the Precipice" paper, it is shown that uncertainty about other nations' AI capabilities actually makes things safer.

Comment author: Robert_Wiblin 29 August 2017 04:33:31AM 1 point [-]

As Geoffrey suggests below, the 'political economy' (to use the term loosely) of robot armies seems quite bad. See for example the argument here: https://noahpinionblog.blogspot.com/2014/03/the-robot-lords-and-end-of-people-power.html .

Comment author: kbog  (EA Profile) 29 August 2017 09:07:04AM *  1 point [-]

If robots are cheap and effective compared to human soldiers, then the common people can get robots to fight as well.

Remember that F-15s and MRAPs are already far more powerful than anything owned by private citizens, and 600 years ago a man-at-arms was impervious to most peasant weapons. Revolution and civil stability are not about sheer military force.

Comment author: Daniel_Eth 29 August 2017 06:59:02AM 1 point [-]

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations. Regarding your second point, how would you avoid an arms race to a situation where they are acting in that level? It happened to a large degree with the financial sector, so I don't see why the military sphere would be much different. The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensure. And I could imagine catastrophes occurring due to errors in machines simply in "peaceful posture," not to mention that this could be very hard to enforce internationally or even determine which countries were breaking the rules. Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

Comment author: kbog  (EA Profile) 29 August 2017 08:59:30AM *  2 points [-]

"The same can be said for humans." - no, that's very much not true. Humans have common sense and can relatively easily think generally in novel situations.

But humans commonly make mistakes on the battlefield. They have training which doesn't perfectly match real-world situations. And they can behave in very unexpected ways. There is no shortage of cases where human soldiers break the laws of war, engage in fratricide (intentional or not), commit war crimes and do other bad things.

Regarding your second point, how would you avoid an arms race to a situation where they are acting at that level?

Well, it's little different from avoiding a race to a situation where AIs are acting on the level of state governors and corporate CEOs; the same general set of cognitive competencies will enable machines to fulfill all of these roles. Several things are possible: AI never outpaces humans in this kind of role, nations agree to maintain human oversight over AI systems at operational or theater levels, or AIs replace humans in all kinds of military leadership roles. I don't think any of these scenarios is necessarily bad. If AI systems are intelligent enough to run theater commands better than humans can, then they will be intelligent enough to know the difference between a border scare and a real war. If they can make a plan to outfight an opposing military force, then they can make a plan to prepare themselves against unnecessary escalations.

The amount of time from having limited deployment of autonomous weapons to the military being mostly automated likely would not be very large, especially since an arms race could ensue.

Why? Automation is not a game changer. Matchlock revolvers were invented in Germany in the 16th century, but it was not until the 19th century that armies widely adopted repeating firearms. Light automatic infantry weapons were developed in WWI but did not become standard individual weapons until the early Cold War. The first guided attack missile, the Kettering Bug, was developed in WWI, yet iron bombs and unguided rockets were still widely used in Vietnam and more recent wars. Automatic tank loaders have been around for more than half a century and have still not been adopted by the majority of industrialized nations. In the 70 years since the end of WW2, tank crews have shrunk from 6 to 4 and destroyer crews from 350 to 250. Not very fast.

not to mention that this could be very hard to enforce internationally or even determine which countries were breaking the rules.

We already don't have international rules preventing countries from keeping their forces on high alert on the border. Countries just don't do it because they know that military mobilization and tensions are not to be taken lightly. Having AIs instead of humans wouldn't change this matter.

Having a hard cutoff at not letting machines kill without human approval seems much more prudent.

No such hard cutoff is possible. What does it mean to have human approval to kill - does each potentially lethal shot need to be approved? Or can the human give ordinary fire orders in a tactical situation the way a commander does with his subordinates?

Comment author: geoffreymiller  (EA Profile) 28 August 2017 11:22:19PM 9 points [-]

Fascinating post. I agree that we shouldn't compare LAWs to (a) hypothetical, perfectly consequentialist, ethically coherent, well-trained philosopher-soldiers, but rather to (b) soldiers as the order-following, rules-of-engagement-implementing, semi-roboticized agents they're actually trained to become.

A key issue is the legitimacy of the LAWs' chain of command, and how it's secured.

Mencius Moldbug had some interesting suggestions in Patchwork about how a 'cryptographic chain of command' over LAWs could actually increase the legitimacy and flexibility of governance over lethal force. https://www.amazon.com/dp/B06XG2WNF1

Suppose a state has an armada/horde/flock of formidable LAWs that can potentially destroy or pacify the civilian populace -- an 'invincible robot army'. Who is permitted to issue orders? If the current political leader is voted out of office, but they don't want to leave, and they still have the LAWs' 'launch codes', what keeps them from using LAWs to subvert democracy? In a standard human-soldier/secret service agent scenario, the soldiers and agents have been socialized to respect the outcomes of democratic elections, and would balk at defending the would-be dictator. They would literally escort him/her out of the White House. In the LAWs scenario, the soldiers/agents would be helpless against local LAWs under the head of state. The robot army would escort the secret service agents out of the White House until they accept the new dictator.

In other words, I'm not as worried about interstate war or intrastate protests; I'm worried about LAWs radically changing the incentives and opportunities for outright dictatorship. Under the Second Amendment, the standard countervailing force against dictatorship is supposed to be civilian ownership of near-equivalent tech that poses a credible threat against dictatorial imposition of force. But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

I guess this is just another example of an alignment problem - in this case between the LAWs and the citizens, with the citizens somehow able to collectively overrule a dictator's 'launch codes'. Maybe every citizen has their own crypto key, and they do some kind of blockchain vote system about what the LAWs do and whom they obey. This then opens the way to majoritarian mob rule with LAWs forcibly displacing/genociding targeted minorities -- or the LAWs must embody some 'human/constitutional rights interrupts' that prevent such bullying.

Any suggestions on how to solve this 'chain of command' problem?

Comment author: kbog  (EA Profile) 29 August 2017 02:39:49AM *  2 points [-]

All I can say is that if you are going to have machines which can fulfill all the organizational and tactical responsibilities of humans, to create and lead large formations, then they are probably going to have some kind of general intelligence like humans do. That means we can expect and demand that they have a decent moral compass.

But in this invincible-robot-army scenario, that implies civilians would need to be able to own and deploy LAWs too, either individually (so they can function as aggrieved tyrant-assassins) or collectively (so they can form revolutionary militias against gov't LAWs).

We don't have civilian tanks or civilian fighter jets or lots of other things. Revolutions are almost always asymmetric.

Comment author: Daniel_Eth 29 August 2017 12:51:42AM 4 points [-]

"I don't know what reason there is to expect a loss in stability in tense situations; if militaries decide that machines are competent enough to replace humans in battlefield decision making, then they will probably be at least as good at avoiding errors."

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data. It's probably the case that automated weapons would greatly decrease minor errors, but they could greatly increase the chance of a major error (though this rate might still be small). Consider the 2010 flash crash - the stock market dropped around 10% within minutes, then less than an hour later it bounced back. Why? Because a bunch of algorithms did stuff that we don't really understand while operating under slightly different assumptions than what happened in real life. What's the military equivalent of the flash crash? Something like a bunch of autonomous weapons in the US and China starting all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead. The way to avoid this sort of problem is to maintain human oversight, and the best place to draw the line is probably at the decision to kill. Partially autonomous weapons (where someone remotely has to make a decision to kill, or at least approve the decision) could provide almost all the benefit of fully autonomous weapons - including greatly reduced collateral damage - yet would not have the same risk of possibly leading to a military flash crash.

Comment author: kbog  (EA Profile) 29 August 2017 02:36:12AM *  2 points [-]

I very much disagree with that. AI and similar algorithms tend to work quite well... until they don't. Oftentimes assumptions are programmed into them which don't always hold, or the training data doesn't quite match the test data.

The same can be said for humans. And remember that we are looking at AI systems conditional upon them being effective enough to replace people on the battlefield. If they make serious errors much more frequently than people do, then it's unlikely that the military will want to use them.

Something like a bunch of autonomous weapons in the US and China starting all-out war over some mistake, then stopping just as soon as it started, yet with 100M people dead.

That requires automation not just at the tactical level, but all the way up to the theatre level. I don't think we should have AIs in charge of major military commands, but that's kind of a different issue, and it's not going to happen anytime soon. Plus, it's easy enough to control whether machines are in a defensive posture, offensive posture, peaceful posture, etc. We already have to do this with manned military units.
