klevanoff comments on Personal thoughts on careers in AI policy and strategy - Effective Altruism Forum




Comment author: klevanoff  27 September 2017 11:08:06PM  7 points

Carrick, this is an excellent post. I agree with most of the points you make. I would, however, like to push back on the wide consensus against acting prematurely.

As you observe, there are often path dependencies at play in AI strategy. Ill-conceived early actions can make corrective action more difficult later on. Under ideal circumstances, we would act with something close to certainty. Achieving this ideal, however, is impractical for several interrelated reasons:

  1. AI strategy is replete with wicked problems. The confidence that we can have in many (most?) of our policy recommendations must necessarily be relatively low. If the marginal costs of further research are high, then undertaking that research may not be worthwhile.

  2. Delaying policy recommendations can sometimes be as harmful as or more harmful than making sub-par policy recommendations. There are several reasons for this. First, there are direct costs (e.g., lives lost prior to implementing sanitary standards). Second, delays allow other actors--most of whom are less concerned with rigor and welfare--to make relative gains in implementing their favored policies. If outcomes are path dependent, then inaction from AI strategists can lead to worse effects than missteps. Third, other actors are likely to gain influence if AI strategists delay. Opaque incentive structures and informal networks litter the path from ideation to policymaking. Even if there are no path dependencies baked into the policies themselves, there are sociopolitical path dependencies in the policymaking process. Gaining clout at an early stage tends to increase later influence. If AI strategists are unwilling to recommend policies, others will do so and reap the reputational gains entailed. Conversely, increased visibility may confer legitimacy on AI strategy as a discipline.

  3. Policy communities in multiple countries are becoming more aware of AI, and policymaking activity is poised to increase. China's national AI strategy, released several months ago, is a long-range plan, the implementation of which is being carried out by top officials. For the CCP, AI is not a marginal issue. Westerners will look to Chinese policies to inform their own decisions. In Washington, think tanks are increasingly recognizing the importance of AI. The Center for a New American Security, for example, now has a dedicated AI program (https://www.cnas.org/research/technology-and-national-security/artificial-intelligence) and is actively hiring. Other influential organizations are following suit. While DC policymakers paid little attention to AlphaGo, they definitely noticed Putin's comments on AI's strategic importance earlier this month. As someone with an inside vantage point, I can say with a high degree of confidence that AI will not remain neglected for long. Inaction on the part of AI strategists will not mean an absence of policy; it will mean the implementation of less considered policy.

As policy discussions in relation to AI become more commonplace and more ideologically motivated, EAs will likely have less ability to influence outcomes, ceteris paribus (hence Carrick's call for individuals to build career capital). Even if we are uncertain about specific recommendations--uncertainty that may be intractable--we will need to claim a seat at the table or risk being sidelined.

There are also many advantages to starting early. To offer a few:

  1. If AI strategists are early movers, they can wield disproportionate influence in framing the discourse. Since anchoring effects can be large, introducing policymakers to AI through the lens of safety rather than, say, national military advantage is probably quite positive in expectation.

  2. Making policy recommendations can be useful in outsourcing cognitive labor. Once an idea becomes public, others can begin working on it. Research rarely becomes policy overnight. In the interim period, proponents and critics alike can refine thinking and increase the analytical power brought to bear on a topic. This enables greater scrutiny for longer-range thought that has no realistic path to near-term implementation, and may result in fewer unidentified considerations.

  3. Taking reversible harmful actions at an early stage allows us to learn from our mistakes. If these mistakes are difficult to avoid ex ante, and we wait until later to make them, the consequences are likely to be more severe. Of course, we may not know which actions are reversible. This indicates to me that researching path dependence in policymaking would be valuable.

This is not a call for immediate action, and it is not to suggest that we should be irresponsible in making recommendations. I do, however, think that we should increasingly question the consensus around inaction and begin to consider more seriously how much uncertainty we are willing to accept, as well as when and how to take a more proactive approach to implementation.

Comment author: WillPearson  28 September 2017 09:09:54AM  0 points

I think it is important to note that in the political world there is a vision of two phases of AI development: narrow AI and general AI.

Narrow AI is happening now. The predictions of 30+% job losses over the next 20 years all concern narrow AI. From my exposure to the political sphere, this is what people there are preparing for.

General AI is conveniently predicted to be more than 20 years away, so people aren't thinking about it: they don't know what it will look like, and they have problems today to deal with.

Getting the policy response to narrow AI right would have a large impact. Large-scale unemployment could destabilize countries, causing economic woes and potentially war.

So perhaps people interested in general AI policy should get involved with narrow AI policy, while making it clear that this is the first battle in a war, not the whole thing. This would place them well, and they could build up reputations. They could stay in contact with the disentanglers so that when the general AI picture is clearer, they can make policy recommendations.

I'd love it if the narrow/general AI split were reflected in all types of AI work.