What is the AI Safety Camp?
Would you like to work on AI safety or strategy research, and are you looking for a concrete way to get started? We are organizing this camp for aspiring AI safety and strategy researchers. At the camp, you:
- build connections with others in the field
- build your research portfolio
- receive feedback on your research ideas and help others with theirs
- make concrete progress on open AI safety research questions
Read more about the last research camp here, including a summary of the produced research.
What’s the structure of the camp?
The camp is preceded by 7 weeks of preparation in the form of an online study group of 3-5 people, followed by a 10-day intensive camp with the aim of creating and publishing a research paper, extensive blog post, or GitHub repository.
What will attendees work on?
Participants will work in groups on tightly-defined research projects, for example in the following areas:
- Strategy and Policy
- Agent Foundations (decision theory, subsystem alignment, embedded world models, MIRI-style)
- Value learning (IRL, approval-directed agents, wireheading, …)
- Corrigibility / Interruptibility
- Side Effects, Safe Exploration
- Scalable & Informed Oversight
- Robustness (distributional shift, adversarial examples)
- Human Values (including philosophical and psychological approaches)
When and where?
4–14 October 2018, in Prague, Czech Republic.
Pricing
Attendance is free.
Apply
Applications and more information on aisafetycamp.com
If it would cost the same or less time to get funding via public grants and institutions, I would definitely agree — that is, the same or less time spent filling in application forms, the same or lower average number of applications that need to be submitted before the budget is covered, and the same or less time lost to distractions and 'meddling' by unaligned funders.
Personally, I don't think this applies to AI Safety Camp at all. My guess is that it would cost significantly more time than getting money from 'EA donors' — time we would be better off spending on improving the camps — except perhaps in isolated cases that I have not found out about yet.
I'm also not going to spend the time to write up my thoughts in detail, but here's a summary:
I very much understand your hope concerning AI talent and the promising value of this camp. However, I'd also like to see an objective assessment of effectiveness (as in effective altruism) concerning such research attempts. To do so, you would have to show that such research has a comparatively higher chance of producing something outstanding than existing academic research. Of course, that needs to be done in view of empirical evidence, which I very much hope you can provide. Otherwise, I don't know what sense of "effective" is still p...