End-of-year updates for those interested:

  • CFAR made a larger effort to track our programs' impact on existential risk over the last year; you can find a partial account of our findings on our blog.  (Also, while some of the details of our tracking aren't currently published due to privacy concerns, let me know if there's some particular thing you want to know and maybe we can share it.)

  • We're on the cusp of possibly being able to buy a permanent venue, which would dramatically reduce our per-workshop costs and thereby substantially increase our ability to run free programs (which have historically accounted for a substantial majority of our apparent impact on existential risk, despite being a smallish minority of our programs).  There are some details in our fundraiser post, and more on what we've been up to over the last year in our 2017 Retrospective.

I'd be glad to discuss anything CFAR-related with anyone interested. I continue to suspect that donations to CFAR are among the best ways to turn marginal dollars into reducing the talent bottleneck within AI risk efforts. The basic argument: the good we do seems roughly linear in the number of free-to-participant programs we can run (since those can target high-impact AI work), and the number of such programs is in turn roughly linear in donations within the range donations might plausibly reach, plus or minus a rather substantial blip depending on whether we can purchase a venue. I don't know a good way to measure or establish this rigorously, and I imagine many would disagree -- but I'd still welcome discussion, either here or at anna at rationality dot org.
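To make the shape of that argument concrete, here is a minimal toy model, a sketch with entirely hypothetical cost and impact numbers (none of these are real CFAR figures): impact is assumed linear in free-to-participant program count, program count is linear in donations, and a venue purchase changes the slope.

```python
# Toy model of the linearity claim above. Every number here is a
# hypothetical placeholder, not a real CFAR figure.

def free_programs(donations_usd: float, venue_purchased: bool) -> float:
    """Free-to-participant programs the donations can fund."""
    # Hypothetical per-program cost; a permanent venue lowers it.
    cost_per_program = 30_000 if venue_purchased else 50_000
    return donations_usd / cost_per_program

def expected_impact(donations_usd: float, venue_purchased: bool = False) -> float:
    """Impact in arbitrary units, assumed linear in program count."""
    impact_per_program = 1.0  # arbitrary unit of "good done"
    return impact_per_program * free_programs(donations_usd, venue_purchased)

# The "blip": the same marginal dollars fund more programs once a
# venue reduces per-workshop costs.
print(expected_impact(500_000))                        # 10.0
print(expected_impact(500_000, venue_purchased=True))  # ~16.7
```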

 


Thanks Anna! A couple of questions:

  1. If I'm understanding your impact report correctly, you identified 159 IEI alumni, and ~22 very high impact alumni whose path was determined to have been "affected" by CFAR.
     1.1. Can you give me an idea of what that implies for the upcoming year? E.g. does that mean that you expect to have another 22 very high impact alumni affected in the next year?
     1.2. Can you say more about what the threshold was for determining whether or not CFAR "affected" an alumnus? Was it just that they said there was some sort of counterfactual impact, or was there a stricter criterion?
  2. You mention reducing the AI talent bottleneck: is this because you think that the number of people you moved into AI careers is a useful proxy for your ability to teach attendees rationality techniques, or because you think this is/should be the terminal goal of CFAR? (I assume the answer is that you think both are valuable, but I'm trying to get a sense for the relative weighting.)
  3. Do you have "targets" for 2018 impact metrics? Specifically, you mentioned that you think the good you do is roughly linear in donations: could you tell us what the formula is?
     3.1. Or, more generally: could you give us some insight into the value of information we could expect from a donation? E.g. "WAISS workshops will either fail or succeed spectacularly, so it will be useful to run some and see."

We're on the cusp of possibly being able to buy a permanent venue

Are you mostly searching for venues in the Bay Area (+ venues within day-driving distance of the Bay Area)?