Ben Snodin

1188 karma · Joined Dec 2018 · Working (6-15 years) · Oxford, UK
www.bensnodin.com

Bio

Participation: 4

I previously led the Existential Security team at Rethink Priorities, worked on nanotechnology strategy research (resources database here), and co-founded EA Pathfinder. Right now I'm taking time out to work out how best to contribute to AI safety efforts.


I’ve worked as a Senior Research Scholar at the Future of Humanity Institute and spent 5 years working in finance as a quantitative analyst. Before then I completed a PhD in DNA nanotechnology at Oxford University. I’ve signed the Giving What We Can pledge.


Feel free to send me a private message here, or to email me at hello [at] bensnodin dot com. You can also give me anonymous feedback with this form!

Comments (79)

I don't necessarily have a great sense for how good each one is, but here are some names. Though I expect you're already familiar with all of them :).

EA / x-risk-related

  • Future of Life Foundation
  • Active grantmaking, which might happen e.g. at Open Phil or Longview or Effective Giving, is a bit like incubation
  • (Charity Entrepreneurship of course, as you mentioned)

Outside EA

  • Entrepreneur First seems impressive, though I'm not that well placed to judge
  • Maybe this is nitpicking: as far as I know, Y Combinator is an accelerator rather than an incubator (i.e. it's focused on helping existing startups rather than helping people get something started)

PS: good luck with your incubation work at Impact Academy! :)

Like a lot of this post, this is a bit of an intuition-based 'hot take'. But some quick things that come to mind:

  • If I recall correctly, our initial intuitions didn't seem very different from the weighted factor model (WFM) results.
  • When we filled in the WFM, I think we had a pretty limited understanding of what each project involved (so you might not expect super useful results).
  • I became a bit more convinced that it just matters a lot that central AI x-risk people have a lot of context (and that this more than offsets the risk of bias and groupthink), so understanding their view is very helpful.
  • Having a deep understanding of the project and the space just seems very important for figuring out what, if anything, should be done and what kinds of profiles might be best for the potential founders.

Hi Stephen, thanks for the kind words!

I'm wondering if you have any sense of how quickly returns to new projects in this space might diminish? Founding an AI policy research and advocacy org seems like a slam dunk, but I'm wondering how many more ideas nearly that promising are out there.


I guess my rough impression is that there are lots of possible great new projects if there's a combination of a well-suited founding team and support for that team. But "well-suited founding team" might be quite a high bar.

Thanks, I found this helpful to read. I added it to my database of resources relevant for thinking about extreme risks from advanced nanotechnology.

I do agree that MNT seems very hard, and because of that it seems likely that, if it's developed in an AGI/ASI hyper-tech-accelerated world, it would be developed relatively late on (though if tech development is hugely accelerated, maybe it would still be developed pretty fast in absolute terms).

Thanks for sharing Ben! As a UK national and resident I'm grateful for an easy way to be at least a little aware of relevant UK politics, which I otherwise struggle to manage.

Thanks for writing this Joey, very interesting!

Since the top 20% of founders who enter your programme generate most of the impact, and it's fairly predictable who these founders will be, it seems like getting more applicants in that top 20% bracket could be pretty huge for the impact you're able to have. Curious if you have any reaction to that? I don't know whether expanding the applicant pool at the top end is a top priority for the organisation currently.

Thanks for these!

I think my general feeling on these is that it's hard for me to tell whether they actually reduced existential risk. Maybe this is just because I don't understand the mechanisms for a global catastrophe from AI well enough. (Because of this, the link to Neel's longlist of theories of impact was helpful, so thank you for that!)

E.g. my impression is that some people with relevant knowledge seem to think that technical safety work currently can't achieve very much. 

(Hopefully this response isn't too annoying -- I could put in the work to understand the mechanisms for a global catastrophe from AI better, and maybe I will get round to this someday)

I think my motivation comes from a few things: it helps with my personal motivation for work on existential risk, helps me form accurate beliefs on the general tractability of work on existential risk, and helps me advocate to other people about the importance of work on existential risk.

Thinking about it, maybe it would be pretty great to have someone assemble and maintain a good public list of answers to this question! (Or maybe someone already did and I don't know about it.)

I imagine a lot of relevant stuff could be infohazardous (although that stuff might not do very well on the "legible" criterion) -- if so and if you happen to feel comfortable sharing it with me privately, feel free to DM me about it.

Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?

(not saying that this is an original suggestion, you basically mention this in your thoughts on what you could have done differently)
