What is the AI Safety Camp?
Would you like to work on AI safety or strategy research, and are you looking for a concrete way to get started? We are organizing this camp for aspiring AI safety and strategy researchers. At the camp, you will:
- build connections with others in the field
- build your research portfolio
- receive feedback on your research ideas and help others with theirs
- make concrete progress on open AI safety research questions
Read more about the last research camp here, including a summary of the research produced.
What’s the structure of the camp?
The camp proper is a 10-day intensive event, preceded by 7 weeks of preparation in the form of an online study group of 3-5 people. The aim is to create and publish a research paper, an extensive blog post, or a GitHub repository.
What will attendees work on?
Participants will work in groups on tightly-defined research projects, for example in the following areas:
- Strategy and Policy
- Agent Foundations (decision theory, subsystem alignment, embedded world models, MIRI-style)
- Value learning (inverse reinforcement learning, approval-directed agents, wireheading, …)
- Corrigibility / Interruptibility
- Side Effects, Safe Exploration
- Scalable & Informed Oversight
- Robustness (distributional shift, adversarial examples)
- Human Values (including philosophical and psychological approaches)
When and where?
4–14 October 2018, in Prague, Czech Republic.
Pricing
Attendance is free.
Apply
Applications and more information on aisafetycamp.com
I very much understand your hopes concerning AI talent and the promising value of this camp. However, I'd also like to see an objective assessment of effectiveness (as in effective altruism) for such research attempts. To do so, you would have to show that such research has a comparatively higher chance of producing something outstanding than existing academic research does. Of course, that case needs to be made on the basis of empirical evidence, which I very much hope you can provide. Otherwise, I don't know what sense of "effective" is still present in the meaning of "effective altruism".
Again: I think these kinds of research camps are great as such, i.e. in view of overall epistemic values. They are as valuable as, say, a logic camp or a camp on agent-based models. However, I would never argue that a camp on agent-based models should be financed by EA funds unless I had empirically grounded reasons to believe that such research can contribute to effective charity and the prevention of possible dangers better than existing academic research can.
As for the talent search, you seem to assume that academic institutions cannot uncover such talents. I don't know where you get this evidence from, but PhD grants across the EU, for instance, are geared precisely towards such talents. Why would talented individuals not apply for those? And where do you get the idea that the topic of AI safety won't be funded by, say, the Belgian FWO or the German DFG? Again, you would need to provide empirical reasons that such a systematic bias against projects on these topics exists.
Finally, if the EA community wants to fund reliable project initiators on the topic of AI safety, why not make an open call for experts in the field to apply with project proposals and form teams that can immediately execute these projects within existing academic institutions? Where does this fear of academia come from? Why would a camp like this be more streamlined than an expert proposal, where the PI of a given project employs junior researchers and systematically guides them through the research? In all other aspects of EA this is precisely how we wish to proceed (think of medical research).
For more on the thinking behind streamlined non-mainstream funding, see https://www.openphilanthropy.org/blog/hits-based-giving
I don't think academia is yet on the same page as EA with regard to AI safety, but hopefully it will be soon (with credibility coming from the likes of Stuart Russell and Max Tegmark).