EDIT: We have now relaunched. More details here.

We wanted to update the EA community about some changes to the Centre for the Governance of AI (GovAI). GovAI was founded in 2018 as part of the University of Oxford’s Future of Humanity Institute (FHI)[1]. GovAI is now becoming an independent nonprofit. We are currently in the process of setting up the organisation, forming a board, and fundraising. You will find updates on our placeholder website and through our new mailing list (see the signup form on the homepage).

These changes were prompted by an opportunity that arose for Allan Dafoe – GovAI’s Director – to take on a senior role at DeepMind. The university informed us that it would not be possible for Allan to hold a dual appointment, and that GovAI’s status as an Oxford-affiliated research centre depended on his holding a full appointment there. In response to these constraints, we opted to become an independent nonprofit.

Part of the background to our decision is that we had previously considered the potential benefits of standing up a nonprofit to support the longtermist AI governance community, particularly the much lower administrative overhead and new opportunities to expand our activities. Moreover, we recognised that our community had grown well beyond Oxford: the majority of our members are based at other universities, companies, and think tanks. An independent nonprofit structure therefore seemed consistent with our ambitions and with the established geography of our community.

Allan will continue as a co-leader of the organisation. I will help set up the organisation over the coming months. FHI’ers Toby Ord, Ben Garfinkel, Alexis Carlier, Toby Shevlane, and Anne le Roux are also likely to be prominently involved (pending university approval).

We would love to hear from people with a diverse set of skills – including research, event organising, grantmaking, operations, and project management – who are motivated to work on AI governance. We are especially interested in growing our expertise in institutional design. For those interested, we outline our theory of impact here and some research questions here. You can fill out an expression of interest form here.


  1. It succeeded the Oxford-based Governance of AI Program (which was founded in 2017) and the Yale-based Global Politics of AI Research Group (which was founded in 2016). ↩︎

Comments

Ofer

Will GovAI in its new form continue to deal with the topic of regulation (i.e. regulation of AI companies by states)?

DeepMind is owned by Alphabet (Google). Many interventions that are related to AI regulation can affect the stock price of Alphabet, which Alphabet is legally obligated to try to maximize* (regardless of the good intentions that many senior executives there may have). If GovAI will be co-led by an employee of DeepMind, there is seemingly a severe conflict of interest issue about anything that GovAI does (or avoids doing) with respect to the topic of regulating AI companies.

GovAI's research agenda (which is currently linked to from their 'placeholder website') includes the following:

[...] At what point would and should the state be involved? What are the legal and other tools that the state could employ (or are employing) to close and exert control over AI companies? With what probability, and under what circumstances, could AI research and development be securitized--i.e., treated as a matter of national security--at or before the point that transformative capabilities are developed? How might this happen and what would be the strategic implications? How are particular private companies likely to regard the involvement of their host government, and what policy options are available to them to navigate the process of state influence? [...]

How will this part of the research agenda be influenced by GovAI being co-led by a DeepMind employee?

 

* [EDIT 2022-01-13: I'm retracting the claim that Alphabet is obligated to try to maximize its stock price (not just the word "legally", which I crossed out 90 minutes after publishing this comment). This does not change the main points made in this comment. For more on this see this comment.]

Thanks for the question. I agree that managing these kinds of issues is important and we aim to do so appropriately.

GovAI will continue to do research on regulation. To date, most of our work has been fairly foundational, though the past 1-2 years have seen an increase in research that may provide some fairly concrete advice to policymakers. This is primarily because the field is maturing, policymakers are increasingly seeking to put AI regulation in place, and some folks at GovAI have had an interest in pursuing more policy-relevant work.

My view is that most of our policy work to date has been fairly (small c) conservative and has seldom passed judgment on whether there should be more or less regulation or praised specific actors. You can sample some of that previous work here.

We haven't yet decided how we'll manage potential conflicts of interest. Thoughts on what principles to adopt are welcome. Below is a subset of things that are likely to be put in place:

  • We're aiming for a board that does not have a majority of folks from any one of industry, policy, or academia.
  • Allan will be a co-lead of the organisation. We hope to be able to announce others soon.
  • Whenever someone has a clear conflict of interest regarding a candidate or a piece of research – say we were to publish a ranking of how responsible various AI labs were being – we'll have the person recuse themselves from the decision.
  • For context, I expect most folks who collaborate with GovAI to not be directly paid by GovAI. Most folks will be employed elsewhere and not closely line-managed by the organisation.

FWIW I agree that, for some lines of work you might want to do, managing conflicts of interest is very important, and I'm glad you're thinking about how to do this.

We've now relaunched. We wrote up our current principles with regard to conflicts of interest and governance here: https://www.governance.ai/legal/conflict-of-interest. I'd be curious if folks have thoughts, in particular @ofer.

GovAI is funded by philanthropic organizations. So far, we have received funding from Open Philanthropy, the Center for Emerging Risk Research, and Effective Altruism Funds (a project of the Centre for Effective Altruism).

This is great! I hope GovAI will maintain this transparency about its funding sources, and publish a policy to that effect.

We do not currently accept funding from private companies.

I think it would be beneficial to have a policy that prevents such funding in the future as well. (There could be conflict of interest issues due to the mere possibility of receiving future funding from certain companies.)

(Also, I take it that "private" here means private sector; i.e. this statement applies to public companies as well?)

We will not accept donations that we believe might compromise the independence or accuracy of our work.

Great, this seems super important! Maybe there should be a policy that allows funding from a non-EA source only if all the board members approve it.

In many potential future situations it won't be obvious whether certain funding might compromise the independence or accuracy of GovAI's work; and one's judgment about it will be subjective and could easily be influenced by biases (and it could be very tempting to accept the funding).

Related to the concern that I raised here: I recommend that interested readers listen to (or read the transcript of) this FLI podcast episode with Mohamed Abdalla about their paper, "The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity".

I originally wrote in my above comment that "Alphabet is legally obligated to try to maximize" its stock price, and I'm retracting that claim (not just the "legally" part, which I crossed out 90 minutes after publishing that comment). The claim is probably somewhere between inaccurate and wrong; I'm not an expert in the relevant domain, and I should have at the very least used hedging language when writing it, so that was a bad failure on my part. (All this does not change the main points made in my original comment.)

I don't know in what situations shareholders of NASDAQ companies can successfully sue the company/directors/executives for making decisions that are not aligned with their myopic financial incentives. It's an open question (for me) to what extent Alphabet can be modeled as an agent that tries to maximize the stock price; and the answer can seemingly depend on the identity of some major shareholders and the directors.

What follows is some evidence I've found regarding that question.

According to this page on sec.gov:

At December 31, 2019, there were 299,828,232 shares of Class A Common Stock issued and outstanding, 46,441,036 shares of Class B Common Stock issued and outstanding

our Class B Common Stock has 10 votes per share, while our Class A Common Stock has one vote per share

According to this piece from investopedia.com, as of October 2021 Larry Page and Sergey Brin (the founders) collectively own 5.9% of the outstanding shares. If my calculation is correct, this means that the two founders collectively have at most ~27% of the voting power over Alphabet (which is the figure I got when assuming that "5.9% of the outstanding shares" = 20,429,887 Class B shares). On the other hand, all the Class B shares collectively correspond to ~61% of the voting power. My best guess from what I've read is that the entities that can own Class B shares are mainly the founders and some pre-IPO investors. (If someone who owns Class B shares gives them to someone else who is not permitted to own Class B shares, the shares automatically turn into Class A shares, i.e. they no longer have 10x voting power, if my understanding is correct.)
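For readers who want to check the arithmetic, here is a minimal sketch of that back-of-the-envelope calculation. It only reuses the share counts quoted from sec.gov and the 5.9% founder stake from Investopedia, and it adopts the same upper-bound assumption as above (that all of the founders' shares are Class B); it is not a verified analysis of Alphabet's actual ownership.

```python
# Rough reproduction of the voting-power estimate above, using the share counts
# quoted from sec.gov (Dec 31, 2019) and the ~5.9% founder stake from Investopedia.
# Assumption (upper bound, as in the comment): all founder shares are Class B.

class_a = 299_828_232   # Class A shares, 1 vote each
class_b = 46_441_036    # Class B shares, 10 votes each

total_shares = class_a + class_b
total_votes = class_a + class_b * 10

founder_shares = 0.059 * total_shares        # ~20,429,887 shares
founder_votes_upper = founder_shares * 10    # upper bound: all of them Class B

print(f"founder shares:               {founder_shares:,.0f}")
print(f"founder voting power (upper): {founder_votes_upper / total_votes:.1%}")  # ~26.7%
print(f"all Class B voting power:     {class_b * 10 / total_votes:.1%}")         # ~60.8%
```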

Again, I'm not an expert and this is a very amateur analysis that may be completely wrong.

Some interesting nuggets from this page by "Alphabet Investor Relations":

These Corporate Governance Guidelines are established by the Board of Directors (the “Board”) of Alphabet Inc. to provide a structure within which our directors and management can effectively pursue Alphabet’s objectives for the benefit of its stockholders. The Board intends that these guidelines serve as a flexible framework within which the Board may conduct its business, not as a set of binding legal obligations. These guidelines should be interpreted in the context of all applicable laws, Alphabet’s charter documents and other governing legal documents and Alphabet’s policies.

.

The fundamental responsibility of the directors is to exercise their business judgment to act in what they reasonably believe to be the best interests of Alphabet and its stockholders. It is the duty of the Board to oversee management’s performance to ensure that Alphabet operates in an effective, efficient and ethical manner in order to produce value for Alphabet’s stockholders.

.

Minimum Stock Ownership Requirement. In an effort to more closely align the interests of our directors and senior management with those of our stockholders, each director and senior officer will be required to meet the following minimum stock ownership requirements: (i) each director shall own shares of Alphabet stock equal in value to at least $1,000,000 (One Million Dollars); (ii) the Founders of Google LLC, the Chief Executive Officer of Alphabet, and the Chief Executive Officer of Google LLC shall own shares of Alphabet stock equal in value to at least $30,000,000 (Thirty Million Dollars); and (iii) Senior Vice Presidents of Alphabet or Google LLC shall own shares of Alphabet stock equal in value to at least $6,000,000 (Six Million Dollars).
