EDIT: We have now relaunched. More details here
We wanted to update the EA community about some changes to the Centre for the Governance of AI (GovAI). GovAI was founded in 2018, as part of the University of Oxford’s Future of Humanity Institute (FHI)[1]. GovAI is now becoming an independent nonprofit. We are currently in the process of setting up the organisation, forming a board, and fundraising. You will find updates on our placeholder website and through our new mailing list (see the signup form on the homepage).
These changes were prompted by an opportunity that arose for Allan Dafoe – GovAI’s Director – to take on a senior role at DeepMind. The university informed us that Allan could not hold a dual appointment, and that GovAI’s status as an Oxford-affiliated research centre depended on his university appointment. In response to these constraints, we opted to become an independent nonprofit.
Some background for our decision is that we had previously considered the potential benefits of standing up a nonprofit to support the longtermist AI governance community, particularly the much lower administrative overhead and new opportunities to expand our activities. Moreover, we recognised that our community had grown well beyond Oxford: the majority of our members are based at other universities, companies, and think tanks. An independent nonprofit structure therefore seemed consistent with our ambitions and with the established geography of our community.
Allan will continue as a co-leader of the organisation. I will help set up the organisation over the coming months. FHI’ers Toby Ord, Ben Garfinkel, Alexis Carlier, Toby Shevlane, and Anne le Roux are also likely to be prominently involved (pending university approval).
We would love to hear from people with a diverse set of skills – including research, event organizing, grantmaking, operations, and project management – who are motivated to work on AI governance. We are especially interested in growing our expertise in institutional design. For those interested, we outline our theory of impact here and some research questions here. You can fill out an expression of interest form here.
It succeeded the Oxford-based Governance of AI Program (which was founded in 2017) and the Yale-based Global Politics of AI Research Group (which was founded in 2016). ↩︎
Will GovAI in its new form continue to deal with the topic of regulation (i.e. regulation of AI companies by states)?
DeepMind is owned by Alphabet (Google). Many interventions related to AI regulation can affect the stock price of Alphabet, which Alphabet is legally obligated to try to maximize* (regardless of the good intentions that many senior executives there may have). If GovAI is co-led by an employee of DeepMind, there is seemingly a severe conflict of interest issue in anything that GovAI does (or avoids doing) with respect to the topic of regulating AI companies.

GovAI's research agenda (which is currently linked to from their 'placeholder website') includes the following:
How will this part of the research agenda be influenced by GovAI being co-led by a DeepMind employee?
* [EDIT 2022-01-13: I'm retracting the claim that Alphabet is obligated to try to maximize its stock price (not just the word "legally", which I crossed out 90 minutes after publishing this comment). This does not change the main points made in this comment. For more on this see this comment.]
Thanks for the question. I agree that managing these kinds of issues is important and we aim to do so appropriately.
GovAI will continue to do research on regulation. To date, most of our work has been fairly foundational, though the past 1-2 years have seen an increase in research that may provide fairly concrete advice to policymakers. This is primarily because the field is maturing, policymakers are increasingly seeking to put AI regulation in place, and some folks at GovAI have had an interest in pursuing more policy-relevant work.
My view is that most of our policy work to date has been fairly (small c) conservative: it has seldom passed judgment on whether there should be more or less regulation, or praised specific actors. You can sample some of that previous work here:
We haven't yet decided how we'll manage potential conflicts of interest, and thoughts on what principles to adopt are welcome. Below is a subset of things that are likely to be put in place:
FWIW, I agree that managing conflicts of interest is very important for some lines of work you might want to do, and I'm glad you're thinking about how to do this.
We've now relaunched. We wrote up our current principles with regard to conflicts of interest and governance here: https://www.governance.ai/legal/conflict-of-interest. I'd be curious whether folks have thoughts, in particular @ofer.
This is great! I hope GovAI will maintain this transparency about its funding sources, and publish a policy to that effect.
I think it would be beneficial to have a policy that prevents such funding in the future as well. (There could be conflict of interest issues due to the mere possibility of receiving future funding from certain companies.)
(Also, I take it that "private" here means private sector; i.e. this statement applies to public companies as well?)
Great, this seems super important! Maybe there should be a policy that allows funding from a non-EA source only if all the board members approve it.
In many potential future situations it won't be obvious whether certain funding might compromise the independence or accuracy of GovAI's work; and one's judgment about it will be subjective and could easily be influenced by biases (and it could be very tempting to accept the funding).
Related to the concern that I raised here: I recommend that interested readers listen to (or read the transcript of) this FLI podcast episode with Mohamed Abdalla about their paper, "The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity".
I originally wrote in my above comment that "Alphabet is legally obligated to try to maximize" its stock price, and I'm retracting that claim (not just the "legally" part, which I crossed out 90 minutes after publishing that comment). The claim is probably somewhere between inaccurate and wrong, I'm not an expert in the relevant domain, and I should have at the very least used hedging language when writing it; so that was a bad failure on my part. (All this does not change the main points made in my original comment)
I don't know in what situations shareholders of NASDAQ companies can successfully sue the company/directors/executives for making decisions that are not aligned with their myopic financial incentives. It's an open question (for me) to what extent Alphabet can be modeled as an agent that tries to maximize the stock price; and the answer can seemingly depend on the identity of some major shareholders and the directors.
What follows is some evidence I've found regarding that question.
According to this page on sec.gov:
According to this piece from investopedia.com, as of October 2021 Larry Page and Sergey Brin (the founders) collectively own 5.9% of the outstanding shares. If my calculation is correct, this means that the two founders collectively have at most ~27% of the voting power over Alphabet (which is the figure I got when assuming that "5.9% of the outstanding shares" = 20,429,887 Class B shares). On the other hand, all the Class B shares collectively correspond to ~61% of the voting power. My best guess from what I've read is that the entities that can own Class B shares are mainly the founders and some pre-IPO investors. (If someone who owns Class B shares transfers them to someone who is not permitted to own Class B shares, the shares automatically convert into Class A shares, i.e. they lose their 10x voting power, if my understanding is correct.)
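The arithmetic above can be sketched as a short script. Only the founders' assumed Class B holding (20,429,887 shares) and the ~27% / ~61% results come from this comment; the per-class outstanding share counts below are illustrative assumptions (roughly Alphabet's reported totals circa late 2021), not figures from the cited sources.

```python
# Sketch of Alphabet's dual-class voting arithmetic, under assumed share counts.
# Class A shares carry 1 vote, Class B carry 10 votes, Class C carry none.
VOTES_PER_CLASS = {"A": 1, "B": 10, "C": 0}

# ASSUMED outstanding share counts per class (illustrative, not from filings).
OUTSTANDING = {"A": 300_750_000, "B": 45_300_000, "C": 317_000_000}

# Founders' assumed Class B holding, as used in the comment above.
FOUNDERS_CLASS_B = 20_429_887


def total_votes(outstanding):
    """Total votes across all share classes."""
    return sum(VOTES_PER_CLASS[c] * n for c, n in outstanding.items())


def voting_share(holdings, outstanding):
    """Fraction of total voting power held by `holdings` ({class: count})."""
    votes = sum(VOTES_PER_CLASS[c] * n for c, n in holdings.items())
    return votes / total_votes(outstanding)


founders = voting_share({"B": FOUNDERS_CLASS_B}, OUTSTANDING)
all_class_b = voting_share({"B": OUTSTANDING["B"]}, OUTSTANDING)

print(f"Founders' voting power: {founders:.1%}")      # close to the ~27% above
print(f"All Class B voting power: {all_class_b:.1%}") # close to the ~61% above
```

With these assumed counts the founders land near 27% of the votes and Class B as a whole near 60-61%, matching the comment's estimates; the exact percentages shift with the true share counts.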
Again, I'm not an expert and this is a very amateur analysis that may be completely wrong.
Some interesting nuggets from this page by "Alphabet Investor Relations":