
Cullen

3752 karma · Joined Dec 2017 · Working (0-5 years) · Oakland, CA, USA · cullenokeefe.com

Bio

I am a lawyer and policy researcher interested in improving the governance of artificial intelligence. In May 2019, I received a J.D. cum laude from Harvard Law School. I currently work as a Research Scientist in Governance at OpenAI.

I am also a Research Affiliate with the Centre for the Governance of AI at the Future of Humanity Institute; Founding Advisor and Research Affiliate at the Legal Priorities Project; and a VP at the O’Keefe Family Foundation.

My research focuses on the law, policy, and governance of advanced artificial intelligence.

You can share anonymous feedback with me here.

Sequences (2)

Law-Following AI
AI Benefits

Comments (302)

Topic contributions (24)

OP gave some reasoning for their views in their recent blog post:

Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford.

We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks (GCRs). The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, to cut down on the financial and logistical costs of hosting large events at private facilities. This was pitched to us at a time when FTX was making huge commitments to the GCR community, which made resources appear more abundant and lowered our own bar. Since its purchase, the space has gotten meaningful use for community events and gatherings. But with the collapse of FTX, our bar for this kind of work rose, and the original grant would no longer have risen to the level where we would want to provide funding.

Because this was a large asset, we agreed with Effective Ventures ahead of time that we would ask them to sell the Abbey if the event space, all things considered, turned out not to be sufficiently cost-effective. We recently made that request; funds from the sale will be distributed to other valuable projects they run.

While this grant retroactively came in below our new bar, I don’t think that alone is a big problem. If you didn’t make some grants that look less attractive when the expected funding drops by half, you weren’t spending aggressively enough before.

But I still think I personally made a mistake in not objecting to this grant back when the initial decision was made and I was co-CEO. My assessment then was that this wasn’t a major risk to Open Philanthropy institutionally, so it wasn’t my place to try to stop it. I missed how something that could be parodied as an “effective altruist castle” would become a symbol of EA hypocrisy and self-servingness, causing reputational harm to many people and organizations who had nothing to do with the decision or the building.

This is a tough balance to strike because I think it’s easy for organizations to be paralyzed by concerns over reputational risk, rendering them unable to make nearly any decisions. And I think a core part of our hits-based giving philosophy is being able to make major bets that can fail outright, even in embarrassing ways. I want to maintain that openness to risk when the upside justifies it. But this example has made me want to raise our bar for things that could end up looking profligate or irresponsible to the detriment of broader communities we’re associated with.

How does AMF collect feedback from the end-recipients of bednets? How does feedback from them inform AMF's programming?

Do you have any citations for this claim?

According to the book Bullies and Saints: An Honest Look at the Good and Evil of Christian History, some early Christians sold themselves into slavery so they could donate the proceeds to the poor. Super interesting example of extreme and early ETG.

(I'm listening on audiobook so I don't have the precise page for this claim.)

(To avoid bad-faith misinterpretation: I obviously think that nobody should do the same.)

Longtermist shower thought: what if we had a campaign to install Far-UVC in poultry farms? Seems like it could:

  1. Reduce a bunch of diseases in the birds, which is good for:
     a. the birds’ welfare;
     b. the workers’ welfare;
     c. therefore maybe the farmers’ bottom line?;
     d. preventing/suppressing human pandemics (e.g., avian flu)
  2. Hopefully drive down the cost curve of Far-UVC
  3. Maybe also generate safety data in chickens, which could be helpful for derisking it for humans

Insofar as one of the main obstacles is humans' concerns about health effects, this would at least raise those concerns only for a small group of workers.

Narrow point: my understanding is that, per his own claims, the Manifund grant would only fund technical upkeep of the blog, and that none of it is net income to him.

Answer by Cullen · Jul 28, 2023

How probable does he think it is that some UAP observed on Earth are aliens? :-)

Super excited about the artificial conscience paper. I'd note that a similar approach could be very useful for creating law-following AIs:

An LFAI system does not need to store all knowledge regarding the set of laws that it is trained to follow. More likely, the practical way to create such a system would be to make the system capable of recognizing when it faces sufficient legal uncertainty,[10] then seeking evaluation from a legal expert system ("Counselor").[11]

The Counselor could be a human lawyer, but in the long-run is probably most robust and efficient if (at least partially) automated. The Counselor would then render advice on the pure basis of idealized legality: the probability and expected legal downsides that would result from an idealized legal dispute regarding the action if everyone knew all the relevant facts.
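
Not from the paper, but a minimal sketch of how that LFAI-to-Counselor deferral loop might look in code (all names, thresholds, and method signatures here are hypothetical, just to make the structure concrete):

```python
# Hypothetical sketch only: illustrates the "recognize legal uncertainty, then
# defer to a Counselor" structure described above. LegalAdvice, Counselor,
# UNCERTAINTY_THRESHOLD, and the agent methods are all invented for this example.
from dataclasses import dataclass

UNCERTAINTY_THRESHOLD = 0.2  # assumed tunable parameter


@dataclass
class LegalAdvice:
    p_violation: float        # Counselor's probability the action is unlawful
    expected_downside: float  # expected legal downside in an idealized dispute


class Counselor:
    """Stand-in for a legal expert system (or human lawyer) consulted on demand."""

    def evaluate(self, action, facts) -> LegalAdvice:
        # In practice this would be an automated legal-reasoning system;
        # here it is just a placeholder.
        raise NotImplementedError


def decide(agent, counselor: Counselor, action, facts):
    # The LFAI system first estimates its own legal uncertainty about the action.
    if agent.legal_uncertainty(action) > UNCERTAINTY_THRESHOLD:
        # Sufficiently uncertain: ask the Counselor for an idealized-legality assessment.
        advice = counselor.evaluate(action, facts)
        if advice.p_violation * advice.expected_downside > agent.acceptable_legal_risk:
            return agent.decline(action)
    return agent.execute(action)
```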

Utilitarianism is much more explicit in its maximisation than most ideologies, plus it (at least superficially) actively undermines the normal safeguards against dangerous maximisation (virtues, the law, and moral rules) by pointing out these can be overridden for the greater good.

Like yes there are extreme environmentalists and that's bad, but normally when someone takes on an ideology like environmentalism, they don't also explicitly & automatically say that the environment is all that matters and that it's in principle permissible to cheat & lie in order to benefit the environment.

I think it's true that utilitarianism is more maximizing than the median ideology. But I think a lot of other ideologies are minimizing in a way that creates equal pathologies in practice. E.g., deontological philosophies are often about minimizing rights violations, which can be used to justify pretty extreme (and bad) measures.

I would be very curious for Gregory's take on whether he thinks EAs are too epistemically immodest still!
