
[EDIT: We no longer endorse everything in this post, and have changed our objectives and thinking significantly. As such, the mentioned document is now private. If you have questions, please contact oxford@aisafetyhub.org.]
In January we founded a student group at Oxford focused on technical AI safety. Since then we’ve run speaker events, socials, multiple cohorts of the AGI Safety Fundamentals (AGISF) course, and supervised research projects (“Labs”). We think it went pretty well, so we’re sharing our takeaways and model here.

This post is a short summary of this public document, which goes into more detail about our approach to AI safety community building, our reflections, and our recommendations.

Non-trivial takeaways

  1. Launching as part of an AI group, rather than an EA group, worked well for us. (see more)
  2. Outreach aimed at people interested in AI reached a much larger technical audience than past outreach aimed at people interested in EA or longtermism. (see more)
  3. It was surprisingly easy to interest people in AI safety without appealing to EA or longtermism. (see more)
  4. Much of the value of our speaker events seemed to come from the high-retention, friendly socials we held afterwards. (see more)
  5. Our “Labs” model of student research projects seems effective for participants’ development and research output at minimal time cost to an expert supervisor (~1 hour per week). This is particularly valuable if field building is mentorship-bottlenecked (see more).

Our current model

Our working objective was to increase the number and quality of technical people pursuing a career in AI safety research. [1] To do this, we have been operating with the following pipeline:  [2]

[Pipeline diagram]
Results so far

  • At least two participants in Redwood Research’s MLAB this summer had never encountered AI safety or EA before attending our events this spring.
  • Attendance at our nine speaker events ranged from 24 to 73 people; on average, 69% of attendees had a STEM background (according to survey data).
  • 65 people signed up for our AGI Safety Fundamentals course across 11 cohorts, 57% of whom had STEM backgrounds.

Further Information

Please see the attached public document for further information about the student group and for our contact details.
  1. ^

     We are now reconsidering our working objective and don’t necessarily endorse the stated objective "to increase the number and quality of technical people pursuing a career in AI safety research". However, we think it is important to start from your objective and work backwards, and this is the objective we actually used.

  2. ^

     We want to note that having a target audience of people “interested in AI” creates a self-selection effect that reduces the diversity of thought in our attendance. We are working to improve this.

Comments (3)

Hey there! Just wanted to flag that the document linked in the post is currently not public!

Hey @gergogaspar! We decided to make the document private after posting. Please get in touch with oxford@aisafetyhub.org if you are interested.

I see, thanks!
