AI safety governance/strategy research & field-building.
Formerly a PhD student in clinical psychology @ UPenn, college student at Harvard, and summer research fellow at the Happier Lives Institute.
Congratulations on the new role! I agree that engaging with people outside of existing AI risk networks has a lot of potential for impact.
Besides RSPs, can you give any additional examples of approaches that you're excited about from the perspective of building a bigger tent and appealing beyond AI risk communities? The balancing act between "find ideas that resonate with broader audiences" and "find ideas that actually reduce risk rather than merely serving as applause lights or safety washing" seems quite important. I'd be interested in hearing any concrete ideas you think strike this balance well, as well as any high-level advice on how to navigate it.
Additionally, how are you feeling about voluntary commitments from labs (RSPs included) relative to alternatives like mandatory regulation by governments (you can't do X or you can't do X unless Y), preparedness from governments (you can keep doing X but if we see Y then we're going to do Z), or other governance mechanisms?
(I'll note I ask these partially as someone who has been pretty disappointed in the ultimate output from RSPs, though there's no need to rehash that debate here. I am quite curious about how you're reasoning through these questions despite some likely differences in how we think about the success of previous efforts like RSPs.)
Congrats to Zach! This is mostly meant to be a "quick update/celebratory post," I realize, but there's a missing mood that I want to convey in this comment. Note that my thoughts mostly come from an AI safety perspective, so they may be less relevant for folks who focus on other cause areas.
My impression is that EA is currently facing an unprecedented amount of PR backlash, as well as some solid internal criticisms from core EAs who are now distancing themselves from EA. I suspect this will likely continue into 2024. Some examples:
I'd be curious to know how CEA is reacting to this. The answer might be "well, we don't really focus much on AI safety, so we don't really see this as our thing to respond to." The answer might be "we think these criticisms are unfair/low-quality, so we're going to ignore them." Or the answer might be "we take X criticism super seriously and are planning to do Y about it."
Regardless, I suspect that this is an especially important and challenging time to be the CEO of CEA. I hope Zach (and others at CEA) are able to navigate the increasing public and internal scrutiny of EA that I suspect will continue into 2024.
Do you know anything about the strategic vision that Zach has for CEA? Or is this just meant to be a positive endorsement of Zach's character/judgment?
(Both are useful; just want to make sure that the distinction between them is clear).
I appreciate the comment, though I think there's a lack of specificity that makes it hard to figure out where we agree/disagree (or more generally what you believe).
If you want to engage further, here are some things I'd be excited to hear from you:
I think if you (and others at OP) are interested in receiving more critiques or overall feedback on your approach, one thing that would be helpful is writing up your current models/reasoning on comms/advocacy topics.
In the absence of this, people simply notice that OP doesn't seem to be funding some of the main existing examples of comms/advocacy efforts, but they don't really know why, and they don't really know what kinds of comms/advocacy efforts you'd be excited about.
I expect that your search for a "unified resource" will be unsatisfying. I think people disagree enough on their threat models/expectations that there is no real "EA perspective".
Some things you could consider doing:
Among the questions you list, I'm most interested in these:
Thanks for this overview, Trevor. I expect it'll be helpful. I also agree with your recommendations for people to consider working at standard-setting organizations and other relevant EU offices.
One perspective that I see missing from this post is what I'll call the advocacy/comms/politics perspective. Some examples of this with the EU AI Act:
It's difficult to know the impact of any given public comms campaign, but it seems quite plausible to me that many readers would have more marginal impact by focusing on advocacy/comms than focusing on research/policy development.
More broadly, I worry that many segments of the AI governance/policy community might be neglecting to think seriously about what ambitious comms/advocacy could look like in the space.
I'll note that I might be particularly primed to bring this up now that you work for Open Philanthropy. I think many folks (rightfully) critique Open Phil for being too wary of advocacy, campaigns, lobbying, and other policymaker-focused activities. I'm guessing that Open Phil has played an important role in shaping both the financial and cultural incentives that (in my view) lead to an overinvestment in research and an underinvestment in policy/advocacy/comms.
(I'll acknowledge these critiques are pretty high-level and I don't claim that this comment provides compelling evidence for them. Also, you only recently joined Open Phil, so I'm of course not trying to suggest that you created this culture, though I guess now that you work there you might have some opportunities to change it).
I'll now briefly try to do a Very Hard thing, which is "put myself in Trevor's shoes and ask what I actually want him to do." One concrete recommendation I have is something like "try to spend at least 5 minutes thinking about ways in which you or others around you might be embedded in a culture that has blind spots to some of the comms/advocacy stuff." Another is "make a list of the people you read actively or talked to when writing this post, then ask if there were any other people/orgs you could've reached out to, particularly those that focus more on comms+advocacy." (Also, to be clear, you might do both of these things and conclude "yea, actually I think my approach was very solid and I just had Good Reasons for writing the post the way I did.")
I'll stop here since this comment is getting long, but I'd be happy to chat further about this stuff. Thanks again for writing the post and kudos to OP for any of the work they supported/will support that ends up increasing P(good EU AI Act goes through & gets implemented).
I'm excited to see the EAIF share more about their reasoning and priorities. Thank you for doing this!
I'm going to give a few quick takes; happy to chat further about any of these. TLDR: I recommend (1) getting rid of the "principles-first" phrase and (2) issuing more calls for proposals focused on the specific projects you want to see (regardless of whether or not they fit neatly into an umbrella term like "principles-first").
Thanks! I'm familiar with the post. Another way of framing my question is: "Has Holden changed his mind about anything in the last several months? Now that we've had more time to see how governments and labs are responding, what are his updated views/priorities?"
(The post, while helpful, is 6 months old, and I feel like the last several months have given us a lot more info about the world than we had back when RSPs were initially being formed/released.)