This is a linkpost for https://theinsideview.ai/shahar

Shahar Avin is a senior researcher at the Centre for the Study of Existential Risk in Cambridge. In a past life he was a Google engineer, though right now he spends most of his time thinking about how to prevent the risks that could arise if companies like Google end up deploying powerful AI systems, in part by organizing AI Governance role-playing workshops.

In this episode, we talk about a broad variety of topics, including how we could apply what Shahar learned running AI Governance workshops to governing transformative AI, as well as AI Strategy, AI Governance, and Trustworthy AI Development, and we end by answering some Twitter questions.

Below are some highlighted quotes from our conversation (available on YouTube, Spotify, Google Podcasts, and Apple Podcasts). For the full context of each quote, see the accompanying transcript.

We Are Only Seeing The Tip Of The Iceberg

The Most Cutting Edge AI Research Is Probably Private

“I don’t know how much of the most cutting edge research today is public. I would not be confident that it is. It is very easy to look at all of the stuff that is public and see a lot of it, and infer from the fact that you’re seeing a lot of public research that all research must, therefore, be public. I don’t think that is a correct inference to make.”

AI Companies May Not Be Showing All Of Their Cards

“My guess would be that they're not always showing all of the cards. It's not always a calculated decision, but there is a calculated decision to be made of, if I have a result, do I publish or not? And then what goes into the calculation is whether there is a benefit from publishing. It increases your brand, it attracts more talent, it shows that you are at the cutting edge, it allows others to build on your result, and then you get to benefit from building on top of their results. And you have the cost of: as long as you keep it to yourself, no one else knows it, and you can keep on doing the research.”

Aligning Complex Systems Is Hard

Narrow AIs Do Not Guarantee Alignment

“One failure mode is that there is an overall emergent direction that is bad for us. And another is that there is no emergent direction, but the systems are in fact conflicting with each other, undermining each other. So one system is optimizing for one proxy. It generates an externality that is not fully captured by its designers, which gets picked up by another system that has a bad proxy for it, and then tries to do something about it.”

Security Failures Are Unavoidable For Large, Complex Systems

“In particular, if you're building very large, complex, opaque systems, from a system-engineering or system-security perspective, you're just significantly increasing the ways things can go wrong, because you haven't engineered every little part of the thing to be 100% safe, and provably and verifiably secure. And even provably and verifiably secure stuff could fail because you've made some bad assumptions about the hardware.”

Why Regulating AI Makes Sense

Our World Is A Very Regulated World

“Our world is a very regulated world. We tend to see the failures, but we forget that none of these digital technologies would exist around us without standards and interoperability. We wouldn’t be able to move around if transport was not regulated and controlled and mandated in some way. If you don’t have rules, standards, norms, treaties, laws, you just get chaos.”

Compliance Is Part Of The Cost Of Doing Business

“Compliance is part of the cost of doing business in a risky domain. If you have a medical AI startup, you get people inspecting your stuff all the time because you have to pass through a bunch of regulations, and you could get fined or go to jail if you don’t do that. The threat of going to jail is a very strong motivator for someone who just wants to go on building good tech for the world. I’m much more worried in that respect about the US than I am about Europe, because Europe has a regulation-heavy approach, which also explains why they don’t have any very large players in the tech space.”

Concrete AI Regulation Proposals

Data Is Much Harder To Regulate Than Compute

“Data is much harder to regulate than compute because compute is a physical object. You can quantify it. If you have one GPU sitting in front of you, getting a second GPU just next to it is pretty much impossible. You have to go back to the GPU factory. Whereas if you have a bunch of data here and you want a copy of it in a folder next to it, it's basically free.”

Alignment Red Tape And Misalignment Fines

“We should have misalignment fines, in the same way that we fine companies for causing harms. It’s basically a way of internalizing the externalities: if you make a system that causes harm, you should pay for it, and the way we do it is through fines. But I also think there should be alignment red tape. The more powerful your system is, the more you should be paying the red-tape cost of proving that your system is safe and secure and aligned before you’re allowed to make a profit and deploy it in the world.”

When Should You Regulate AI

Making Today’s AI Regulation “Future Ready”

“Governments are now caring about AI where previously they did not, and they care about AI for all of the current reasons: bias and privacy. Once they care about AI, then the game is about making that regulation “future ready”. You don't want just an ossified thing that only cares about privacy, even in a world with giant drone swarms and highly manipulative chatbots. You want the regulation of today to be “updatable”, to take into account new risks, or for the parts of government that created today's regulation to be willing to create new regulation. Ideally you want to decrease the amount of time that it takes to update regulation to account for new risks, and there are various institutional designs that you could use to make that happen.”

You Should Regulate An AI Explosion Before It Happens

“If you want to regulate an explosion, you don’t regulate it as it’s happening, you regulate it before it’s happened. Similarly here, if you get to the point where the technology is radically transforming your world on a month-by-month or week-by-week basis, it’s too late to do this regulation, unless the regulators are also sitting on top of very powerful AI that is helping them keep track of what’s happening in regulation. We need different regulatory processes.”

The Collingridge Dilemma

“When you want to regulate a technology, or steer a technology towards a good outcome, or any big change that you are predicting in the future, if you try to do it too far in advance, you don’t have the details of how the change is going to happen, and so you don’t have a good solution. If you do it too late, then the thing is pretty much locked in and you don’t have much ability to change it.

Trying to find the sweet spot in the middle, where you know enough to regulate but it’s not too late to change how things are going to go, is the game of AI regulation and AI governance. And you can make the game easier by putting in regulation early that can scale up or get adapted as you go along. You could have lots of people who are broadly in agreement that we need something, and put them in places of power, so that when it comes time to regulate, you have lots of allies in lots of places. You could generally teach people the fundamentals of why cooperation is good and why everyone dying is bad.”

Comments

Note: if you want to discuss some of the content of this episode, or one of the above quotes, I'll be at EAG DC this weekend chatting about AI Governance–feel free to book a meeting!

Maybe there isn't a point-in-time sweet spot, as your point about "adapting as you go along" makes clear?

In other words, maybe you need to develop resilience, preparedness, regulatory and response capacities at the same speed as the tech development (and maybe even slightly faster, if there is also AGI risk?)

I am very confident that regulation alone, and idealistic global agreements, will not be sufficient to remove most real-world risks. A more comprehensive approach is needed.

I'm happy to discuss privately about risk and response frameworks IRL with anyone who is serious about IRL implementation.
