I would like to hear your views on whether EU regulations on AI are likely to matter to the rest of the world. I have been dedicating significant time and career capital to contributing to these regulations, and I want to make sure I understand the strongest arguments against my impact. One of these is that the regulations won't affect AI developers in the rest of the world. Through the lens I suggested earlier (i.e. that "AGI safety" is a series of actions), the claim would be that the EU regulations don't factor significantly into the outcome, i.e. they are irrelevant and therefore any intervention on them is wasteful.

Concretely, the argument goes this way: besides GDPR and perhaps machinery governance, there is little evidence that EU decisions on digital technologies matter significantly outside EU territory. The global reach of EU decisions on digital matters (the "Brussels Effect") could be a self-serving EU talking point, and the tens of millions that GAFAMI have been spending yearly in Brussels over the past three years have mostly gone to lobbying on non-AI digital files. Short of collecting further evidence on this argument (I believe there is an FHI working paper soon to be released - feel free to add the reference whenever available), I can at least outline the factors likely to alter the influence the EU regulations on AI will have. There are EU-specific factors and environmental factors.

In terms of EU-specific factors, crucial considerations are the balance, transferability and effectiveness of the pro-safety EU policy.

  1. Failing balance: the EU regulating itself out of relevance. If a scandal on the scale of Cambridge Analytica or Snowden's revelations comes up in the coming 12 months, it might put the European Parliament under pressure to resort to broad bans or similarly unbalanced measures that would asphyxiate the EU market. That would make the benefits of capturing the juicy EU market smaller than the compliance costs, and make the potential pro-safety regulations untransferable abroad. This is also why pushing against the technology as a whole is not a good idea in the EU: having to demonstrate mathematical alignment and robustness for every narrow AI system rolled out would be so burdensome that most companies would leave the market. (The counterargument is that it would significantly incentivize research and investment into alignment and robustness testing and demonstration.) On the other hand, if there is balance - e.g. asking companies to do some robustness and accuracy testing and incentivizing them to invest in safety R&D, without stifling the market - we could expect the market to "take the hit" of the regulation and invest in compliance rather than exit entirely. If the regulation is effective at setting safety requirements without unduly constraining the freedom to innovate and compete, we can hope it will drive wider adoption even in traditionally more pro-innovation jurisdictions. This implies that every "unit of foregone freedom to innovate" should pay off in "units of uncertainty-adjusted AGI safety", and calls for a very surgical approach to improving the legislation. Despite their public cries, most of GAFAMI are not even lobbying on the AI regulation because they find it quite balanced (or so they declare in private).
  2. Failing transferability: the EU fails to transfer most of its pro-safety measures. Even if the policy is balanced, it might fail to transfer. GDPR transferred in a more or less unintended way - yes, it was meant to apply to foreign companies; yes, there was recognition of "adequacy" and the notion of foreign equivalents of Data Protection Authorities, but as far as we can tell, no mechanisms were in place to promote GDPR compliance in industry abroad. Commission staff were surprised when Indian and Japanese (and reportedly Chinese) policy staff asked to come to Brussels to discuss GDPR with its architects and translate it into their respective languages and legal environments, to inform their domestic privacy policy negotiations. In the case of AI, though, the Commission is serious about promotion: it has launched an international alliance to promote "trustworthy AI" and the "EU approach to AI". The alliance's role is basically to do public diplomacy, research networking, and influence, and to make concrete "conversions" to the EU approach. The same "alliance" mechanism applied to cybersecurity has been described as very successful by EU insiders - but I am not knowledgeable enough about cybersecurity governance to assess how true that is. If these mechanisms fail, a significant portion of the "impactfulness" of the policy will be lost. However, for now, I don't have reason to believe they will fail.
  3. Failing effectiveness: the EU creates ineffective regulations. That is by far the most likely bad scenario in my opinion. The little "click to accept cookies" ritual could easily transfer to AI for developers ("I have reviewed my algorithm and confirm it is aligned with humanity's values" [√]). This is, after all, a policymaking ecosystem that, not content with having invented website-specific "cookie banners", doubled down a couple of years later by adding "third-party data processing pop-ups" to the mix (again website-specific, rather than, say, having your browser automatically share your preferences in HTTP requests). Passing ineffective, easy laws would enable everyone to feel good about themselves and kill the momentum for more effective and creative mechanisms (e.g. mandated independent red-teaming, incubation sandboxes, software probes, etc.). The worst part is that good ideas are put forward in policy negotiations and never make it into the final text (GDPR originally had a strong right to explainability and an explainability labeling system that could have increased investment in explainable AI quite significantly - for better or worse, it was dropped). The AI Act seems to have relatively smart ideas (sandboxing, monitoring the evolution of the technological landscape, the distinction between narrow and general-purpose AI...), but I fear the quality of the ideas will decrease at each step because of all the compromises.

 

In terms of environmental factors, developments in the global governance of AI and in geopolitics will affect whether the EU becomes more or less relevant to AI safety governance. In particular, transferability (explained above) is also affected by the situation in the rest of the world:

  1. Interoperability of the digital system(s) and their governance: in a world where China and the US decouple structurally tech-wise (separate consumers, suppliers, industries and technologies, and therefore separate institutions and governance), global convergence on, e.g., the EU's higher standards will be more difficult, and the EU policy's impact will therefore be more limited. Think of how transatlantic internet governance has little influence on China's decoupled internet. Regardless of EU policy, this would be bad for EAs, as they would effectively have to shape not only a transatlantic AI governance regime but also a Chinese AI governance regime, and if either of these fails to embed safety, it would structurally decrease our confidence in the outcome being safe.
  2. Sustainability of the digital systems: if China, the US, or both plunge into domestic mayhem (e.g. November 2024 in the US; domestic discontent if China's real GDP growth converges to the 2-3% typical of developed economies within 5 years), there might be a serious weakening of investor appetite for AI R&D. Climate change might also exert so much pressure on both countries over the next 15 years that private and public investments are gradually redirected towards other technologies. Regardless of the reason, an AI winter would slow down market-driven progress towards more general AI systems. This would make the influence of EU policy abroad less of a priority.
  3. Combined effect of both: understanding which world we are likely to end up in is useful for assessing the relevance of EU regulations on AI.
    1. [sustainable & interoperable] = cyber-peace --> maximal and relevant influence of EU policy on the world.
    2. [unsustainable but interoperable] = collapse of the global commons --> relatively significant but irrelevant influence of EU policy on the world.
    3. [unsustainable & not interoperable] = cold war 4.0 --> relatively little and anyway irrelevant influence of EU policy on the world.
    4. [sustainable but not interoperable] = digital trench warfare --> little influence of EU policy on the world, even though it would be needed.
Comments

For what it's worth, I'm optimistic about the EU AI regulation! I think, at least insofar as it is a smoke test, it marks a beginning in much-needed regulation on AI. I am also optimistic about transferability – perhaps not in exact wording or policy, but I like the risk-based approach. I think it gives companies the ability to replace the most risky AIs while keeping the less risky ones, and I think if it is shown to be feasible in the EU, it will likely be adopted elsewhere – if only because companies will already have found replacement solutions for unacceptable-risk AIs.

If there is one complaint I'd level at it, it's that it has been stripped back a little too far. EU laws tend to be vague by design, but even by those standards, this new AI regulation leaves quite a few holes. I am also worried that the initial proposal was much stronger than the current EU AI Act, and that this trend may continue: the scaling back of regulations might even reflect a hesitancy in Court of Justice enforcement. It isn't impossible that this becomes something like the cookie banners on websites in the EU now. If that happens, I fear that less regulation-savvy bodies (like the US government) might adopt a similar regulation to satisfy public pressure while doing as little as possible to regulate AI. This might actually hinder efforts at AI regulation, as it would allow the EU and other regulatory bodies to put up a hollow regulation for good press while leaving the problem unsolved.

Really hope this doesn't happen, or at least that if it does, the regulation serves as a stepping stone to stronger future regulations :)

[anonymous]

Thanks Neil - I share your concerns. However, to your point on vagueness: most policymakers also think it is more vague than usual. The Commission intends to clarify a lot during the trilogue based on input from the Parliament and the Council. They have also mandated the standard-setting organisations ETSI and CEN-CENELEC to develop harmonized standards that operationalize some of the governance concepts very concretely (e.g. defining techno-operationally what "robustness" is or what constitutes sufficient "human oversight"). They did not want to overspecify the regulations because they want a lot of expert input before enshrining details in law. So in my opinion it is a good thing that they have left it vague rather than off-the-mark.

Thanks for writing up your thoughts, and for working on this - I think these are really important questions. Regarding whether the EU regulation will matter: I suppose many if not most important people in AI governance will learn about it, and this will at least impact their thinking/plans/recommendations for their governments somehow, right?

Passing ineffective, easy laws would enable everyone to feel good about themselves and kill the momentum for more effective and creative mechanisms

Do you think it's possible that ineffective laws by the EU might lead European governments to invest more in their own AI regulation efforts? I listened to an interview with Gillian Hadfield about AI regulation, and she seemed fairly confident that we want more flexibility and experimentation compared to status quo law-making, so maybe that would actually end up being good?

[anonymous]

Thanks for your questions Max! Hope this helps:

Regarding whether the EU regulation will matter: I suppose many if not most important people in AI governance will learn about it, and this will at least impact their thinking/plans/recommendations for their governments somehow, right?

I don't think this is the case: most actors involved are not incentivized to care about balance, transferability and effectiveness. Making a gross generalization, we could say civil society groups are concerned about the issues their (often uninformed) donors care about. Industry groups, on the other hand, are hoping to minimize short-term costs from the regulatory requirements, given pressure from financial markets, by reducing either the law's scope of application or its requirements for whatever remains in scope. (There are more stakeholders and angles than just civil society and industry, of course, but in practice most discussions nowadays end up being about whether a solution is pro- or anti-innovation.) Policymakers themselves generally don't specialize in AI governance, but instead have a broad range of topics to deal with. Their concerns revolve around political positioning, and they care about balance, transferability and effectiveness only to the extent these improve their positioning.
 

Of course, policymakers and all people within civil society and industry groups are individuals with their own beliefs, which affect their actions (that is why having EA or longtermist individuals in these roles would matter). They therefore sometimes sacrifice the party's or organization's mission in favor of doing what they think is right, particularly in civil society, where there aren't enough resources to monitor compliance with HQ's talking points and where "doing the right thing" can be argued to be part of the staff's mandate.

In this system, balance is generally achieved thanks to both "sides" pushing as hard as they can in opposite directions: a vibrant civil society and democratically elected policymakers can offset the industry's resource advantage in lobbying. Moreover, transferability is an increasing function of balance. So balance and transferability are not my primary concerns.

Effectiveness, however, seems to be purely accidental: besides occasional individuals stretching the interpretation of their mandate in order to push for effectiveness, there is little incentive or pressure in the system to produce effective policies.

 

Do you think it's possible that ineffective laws by the EU might lead European governments to invest more in their own AI regulation efforts?

The EU AI Act will reduce political demand for national AI regulations among Member States and beyond. As it is a Regulation (as opposed to a Directive), it requires all Member States to apply it the same way, so additional national AI regulations would literally layer on top of, rather than complement or substitute for, the EU regulation. Countries outside the EU would also have less demand for regulations because of a potential de jure Brussels effect - though this effect would have to offset the hypothetical "regulatory competition" effect, i.e. lawmakers trying to be the first to have invented a legislative framework for topic X. The impact of ineffective EU laws on political demand will be smaller than that of effective ones, but not by enough to offset the primary effect.

so maybe that would actually end up being good?

"Effectiveness" to the longtermist/EA community is different from "effectiveness" to the rest of society. For example, AGI-concerned individuals care more about requirements related to safety and alignment than measures to foster digital skills in rural areas. So it is possible that whatever we call ineffective is hailed as a major success of effectiveness by decisionmakers and will cut the demand for further policymaking for the next 20 years. I am very interested in the topic of experimentation and adaptiveness/future-proofing in policy, but since it requires decisionmakers i) acknowledging ignorance or admitting than current decisions might not be the best decisions and ii) considering time horizons of >8 years, it is politically difficult to achieve in representative democracies. 

That's very helpful and makes sense, thanks! It would be interesting to learn about case studies where decisionmakers acknowledged ignorance and acted with longer time horizons in mind. I suppose these will end up being cases that are not much publicly debated and where the consultants involved have longer time horizons.
