
This post explores which levers exist in Europe for influencing the governance of AI. The scope includes potential actions taken by bodies/offices/agencies of the EU, its constituent member countries, or other potentially relevant European countries like Switzerland. I’m not looking at self-regulation by leading AI development groups in Europe, both because some attention is already being paid to that and because there aren’t any such groups beyond DeepMind. I should note that at this level of abstraction, the analysis will necessarily be somewhat crude.

My role in AI governance: I’m personally interested in the topic. Beyond that, we at the Effective Altruism Foundation are concerned about risks from AI, and governance is one lever to affect the relevant outcomes. Since our team members mainly hail from European countries and we are based there, it made sense to pick this as an entry point. Please get in touch with me at stefan.torges@ea-foundation.org if you want to talk about content related to this post.

Epistemic status and methodology: I’m neither a legal nor a political science expert (except for some undergrad coursework) and have not worked in governance (except for a few internships). This analysis is based on my fairly superficial understanding of different governance mechanisms and conversations I’ve had with people in the EA community about them. Some of these people had little knowledge about AI governance, some had a lot, and nobody had much knowledge about European governance mechanisms (and how they might relate to AI). Therefore, I have restricted myself to statements I feel sufficiently confident to make based on that knowledge and expressed my remaining thoughts in the form of questions to be addressed in the future. I’m fairly confident that the pathways I outline below cover most of the potential levers that exist. However, I’m much less confident about their absolute and relative importance (with some exceptions).

Acknowledgments: I’m grateful for the helpful comments by people involved with the Effective Altruism Foundation on a draft of this post. I also want to thank the people in the AI governance community for taking the time to speak to me about this.

Summary

  • The best case for working on governance in Europe probably rests on personal fit and comparative advantage. Other, more general reasons strike me as fairly weak.
  • Europe might have some fairly direct influence (executive or legislative) over AI development groups: either because they’re located in Europe (e.g., DeepMind) or because they’re transnational companies operating in Europe (e.g., Google, Facebook).
  • Europe might have significant indirect influence on AI development via a number of different pathways: they might set norms or pass blueprint regulations that are subsequently adopted in other jurisdictions; they might have significant say via international regimes governing AI development and deployment; they might influence the power balance between the US and China by “taking sides” in key situations; their planned “AI build-up” might influence the global AI landscape in hard-to-anticipate ways.
  • I don’t have strong views about the relative importance of these different pathways. I’d welcome more research on the legal situation of DeepMind, the relevance of different international bodies for the governance of AI, the prevalence of the EU as a norm or regulation role model, and what these different pathways imply for career choice in this field.

Why look at Europe at all?

So far, the AI governance community concerned with the long-term effects of transformative AI (mainly within or adjacent to the EA community) seems to have mainly focused on the US and China, with some notable exceptions[1]. The key drivers behind this seem to be the fact that most key AI development groups are located there (e.g., OpenAI, Google, Facebook, Microsoft, Amazon, Tencent, Alibaba, Baidu) and the fact that these two countries seem to be ahead more generally when it comes to AI capabilities research.

However, even assuming that this picture is roughly accurate, it could still make sense for some people to work toward influencing relevant European governance.

This claim is mainly driven by considerations related to personal fit and comparative advantage. For instance, a lot of US government roles are not open to non-US citizens. There will also only be a limited number of policy roles at groups like DeepMind or OpenAI.

There also seems to be an okay outside-view argument in favor of influencing the EU and its member countries (and Europe more generally) when it comes to questions of governance. The EU is the second-largest economy, behind the US but still ahead of China. Its constituent countries, France, the UK, and Germany in particular, also still have a lot of influence (some would say “disproportionate”) in international bodies, partly for historical reasons. My impression is that European scientific institutions are still at the cutting edge in many scientific fields. Therefore, one might expect Europe to matter for the governance of AI in ways that might be hard to anticipate. In particular, this argument pushes for allocating more resources toward influencing Europe at the expense of China, since the US seems to be ahead of Europe on most such measures. I don’t give this argument a lot of weight, though, since I expect a detailed comparative look at the AI landscape to be more informative (which seems to favor China over Europe).

Another more speculative reason, which I also don’t give a lot of weight, might be “threshold effects” in certain international contexts. A toy example is passing a resolution in some international body. Since this usually requires a majority, it could be important to build influence in lots of countries that could sway such a vote.

Concrete pathways for European governance to influence AI development

Direct legislative or executive influence over relevant stakeholders

There are several ways in which European stakeholders might be able to exert direct political influence on leading AI development groups.

DeepMind

DeepMind is one of the leading companies developing AI technology, and it is currently located in the UK. While the company was acquired by Google in 2014 (and is now a subsidiary of Alphabet), its location makes it potentially susceptible to European influence. Conditional on Brexit, this influence would be reduced to that of the UK alone. Personally, I don’t have a good understanding of the legal situation surrounding DeepMind.

Further questions:

  • What legislative or executive levers do the EU or the UK currently have on DeepMind?
  • How does that change when taking into account extraordinary conditions such as national emergencies or wars?

Transnational AI development companies

The EU has significant and direct regulatory influence over transnational companies operating in its market (e.g., Facebook, Google, Amazon, Apple); for instance, it might set certain explainability standards for AI algorithms used in personal assistants offered by Google or Amazon. Such companies often find global compliance easier than differential regional compliance. This has been called the “Brussels effect”. GDPR is a good example of this in the technology sector. Even forced regional compliance alone would likely have ramifications for differential AI development (e.g., compliance might slow down capability development within these companies). To the extent that such companies are relevant to AI progress, the EU is a relevant stakeholder.

Further questions:

  • How likely is such regulation in the first place?
  • Which groups are most likely going to be affected by such regulation?

Other European groups relevant to A(G)I development

Europe seems to be lagging behind the US and China in terms of AI capabilities and their future trajectory (with the exception of DeepMind). However, this might turn out to be wrong on closer inspection (which seems very unlikely) or change over time (which seems somewhat unlikely). If so, there might be relevant A(G)I development groups in Europe at some point. It could also be the case that certain European groups are leaders within certain subfields which are crucial for A(G)I development, even though they lag behind in most areas. Chip development is an illustrative example of such a strategically important area (NB: Europe is not leading in chip development).

Further questions:

  • What is the state of European AI capabilities research compared to the US and China? If Europe is lagging behind, how likely is it to catch up? What’s the most likely development path?
  • Which European countries are most likely to be relevant for AI development?
  • Which European development groups (excluding DeepMind) are most likely to be relevant global players?
  • Are there fields related to A(G)I development in which Europe or European groups are leading? Which ones?

Indirect influence

“Spill-over governance” via role modeling

Regulation and norms related to AI put forward by European countries or the EU might influence relevant governance in other jurisdictions. This is especially relevant to the extent that it applies to the US and China. GDPR, again, can serve as a useful example here: China apparently modeled its data privacy regulation to a large extent on GDPR, and California appears to have done the same. When it comes to AI, the EU is already developing a focus on “Trustworthy AI”, which might have relevant spill-over effects.

Further questions:

  • To what extent has this been the case for other regulation beyond GDPR, especially in the realm of technology policy? How does AI compare to these other examples?

Influence on international regimes

European countries or the EU are likely to play some role in the global governance of AI. So to the extent that the global governance of AI will matter, either through existing regimes or the creation of new ones, European influence will likely be significant. In most international regimes, European countries (the UK, France, and Germany in particular) have considerable influence that is disproportionate to their population size. The EU also has some influence but much less so than some of its constituent members. Even if bilateral negotiations and agreements between the US and China are most relevant, one could imagine third-party countries or bodies playing an important mediation role. Switzerland is probably the prime example here; Norway might also be a candidate.

Further questions:

  • Historically, how have global governance mechanisms for similar technologies (e.g., dual-use technologies) been developed? What has European influence looked like in these cases?
  • Which existing international bodies are likely to be most relevant when it comes to the governance of AI? (e.g., UN Security Council, G7/8, G20, International Telecommunication Union, International Organization for Standardization) Which European countries are most influential within these?

Directly influencing the “AI power balance” between the US and China

European countries or the EU might be in a position where they can influence the “AI power balance” between the US and China. For instance, they could join or abstain from potential US sanction regimes for strategic technologies or resources (cf. Iran nuclear deal framework). They might prevent the acquisition of AI development groups by Chinese companies (cf. discussions about this in Germany and EU regulation, in part as a result of the Chinese acquisition of German robotics firm KUKA). They might engage in sharing crucial intellectual property with the US. This is really a grab bag of different opportunities that might arise where the European response would have an influence on the Sino-American power balance.

Further questions:

  • What are the most relevant areas/scenarios that fall under this category?
  • How has “Europe” responded in analogous situations in the past?
  • How relevant is this type of “European” influence on the power balance?

Indirect effects from building up the European “AI sector”

European countries and the EU seem interested in expanding their AI capabilities (broadly speaking). The global effects of this build-up on AI development are difficult to anticipate but potentially relevant, especially if one could slow it down or stop it. It might draw in funding and talent from the US, but it could also serve as a talent and money pipeline to the US. It might exacerbate “race dynamics” between the US and China, or the presence of a third “safety-conscious” stakeholder might actually slow down such dynamics. All of this could affect AI timelines and which stakeholders are most likely to gain a development advantage.

Further questions:

  • How does this planned European build-up affect global talent and money flows related to AI?
  • How would it affect global “race dynamics”?
  • Overall, would it speed up or slow down A(G)I development in expectation?

Discussion

As I said before, I don’t have particularly strong views about the relative importance of these different pathways. Direct influence seems more important than indirect influence. Within that category, influence over existing leading AI development groups seems more important than potential new ones. Within the “indirect influence” category, I have barely any views. The last pathway (“Indirect effects from building up the European ‘AI sector’”) seems least important and least tractable to make research progress on.

I’d be most interested in an investigation of the potential influence over DeepMind since it could turn out to be quite significant or barely relevant. It also seems fairly straightforward and tractable to research since it strikes me as a concrete legal question. Perhaps this could be complemented by some historical analysis regarding the precedent of extraordinary or even extra-legal means of influence, e.g., potential nationalization (attempts) of foreign companies during wartime.

These are other questions that strike me as most important and tractable:

  • Which existing international bodies are likely to be most relevant when it comes to the governance of AI? (e.g., UN Security Council, G7/8, G20, International Telecommunication Union, International Organization for Standardization) Which European countries are most influential within these?
  • How common is “spill-over” governance via role modeling beyond GDPR, especially in the realm of technology policy? How does AI compare to these other examples?

I would also welcome more systematic research into which European bodies and positions are most important for the different pathways, though that is also beyond the scope of this post.


  1. Charlotte Stix’s work is certainly the most relevant example here. In addition, Allan Dafoe from the Center for the Governance of AI at the Future of Humanity Institute (Oxford) spoke in front of the Subcommittee on Security and Defence of the European Parliament, and he also participated as an Evidence Panelist in the All-Party Parliamentary Group on Artificial Intelligence. The Cambridge Centre for the Study of Existential Risk submitted evidence to the Lords Select Committee on Artificial Intelligence. Still, these strike me as exceptions to the overall focus on the US and China within that cause area. ↩︎

Comments

I would add two relevant institutions and one potentially relevant place for consideration:

  • The EU's "AI High-Level Expert Group" is easy to make fun of, but as they're supposed to be the EU's think tank for AI governance they might be relevant for Brussels effect-type strategies. Some members of the HLEG may be approachable by EAs who happen to be in the right place. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

  • ELLIS is the counterpoint to that, aiming to position itself as the (academic) hub for AI capability development in Europe. The founders' explicit intention is to help Europe catch up in AI. https://ellis.eu/

  • Estonia is said to be a hotspot for technology and microelectronics startups, and it is a testbed for e-governance. Its flat tax rate and e-citizenship program might also make Estonian regulation relevant for international companies that choose to incorporate there (by analogy to Ireland or other tax/regulation havens).

For what it's worth, according to the Artificial Intelligence Index report published in 2018:

Europe has consistently been the largest publisher of AI papers — 28% of AI papers on Scopus in 2017 originated in Europe. Meanwhile, the number of papers published in China increased 150% between 2007 and 2017. This is despite the spike and drop in Chinese papers around 2008.

(I'd post the graphs here, but I don't think images can be inserted into comments.)

My lived experience is that most of the papers I care about (even excluding safety-related papers) come from the US / UK. There are lots of reasons that both of these could be true, but for the sake of improving AGI-related governance, I think my lived experience is a much better measure of the thing we actually care about (which is something like "which region does good AGI-related thinking").

Are you mainly referring to technical papers, or does your statement consider work from folks at FHI, CSER, CFI, etc.?

I was excluding governance papers, because it seems like the relevant question is "will AI development happen in Europe or elsewhere", and governance papers provide ~no evidence for or against that.

You can post an image using standard markdown syntax:

![](link to the image)

For example, to insert the above image, I wrote:
![](https://nunosempere.github.io/ea/AI-Europe.png)

This strikes me as an isolated example of Europe leading on one metric. I plan to write something more comprehensive, but I think just seeing this statistic could create a wrong impression for some people.

(edited to remove accusatory tone)

I think this accusation is uncalled for. There are more statistics in the report, which I linked to, including things like citation impact. But a comprehensive overview of European AI research is, of course, very welcome.

Maybe I misunderstood. What's the point of highlighting only this statistic? It does not seem very representative of the report you're linking to or the overall claim this statistic might support if looked at in isolation.

EDIT: I didn't mean to imply intent on your part. Apologies for the unclear language. Edited original comment as well.

(I'd post the graphs here, but I don't think images can be inserted into comments.)

I think they can (or, at least, it used to be possible to do so). I've done so here and here for example.

RE "why look at Europe at all?", I'd say Europe's gusto for regulation is a good reason to be interested (you discuss that stuff later, but for me it's the first reason I'd give). It's also worth mentioning the "right to an explanation" as well as GDPR.


Do you think these points make Europe/the EU more important than the US or China? Otherwise, they don't give a reason for focusing on Europe/the EU over these countries to the extent that this focus is mutually exclusive, which it is to some extent (e.g., you either set up your think tank in Washington DC or Brussels; you either analyze the EU policy-making process or the US one).

Reasons to focus on the EU/Europe over these countries are, in my opinion:

  • personal fit/comparative advantage
  • diminishing returns for additional people to focus on the US/China (should have noted this in the OP)
  • threshold effects

To answer your question: no.

I basically agree with this comment, but I'd add that the "diminishing returns" point is fairly generic, and should be coupled with some arguments about why there are very rapidly diminishing returns in US/China (seems false) or non-trivial returns in Europe (seems plausible, but non-obvious, and also to be one of the focuses of the OP).

