What is the AI Safety Camp?
Would you like to work on AI safety or strategy research, and are you looking for a concrete way to get started? We are organizing this camp for aspiring AI safety and strategy researchers. At the camp, you:
- build connections with others in the field
- build your research portfolio
- receive feedback on your research ideas and help others with theirs
- make concrete progress on open AI safety research questions
Read more about the last research camp here, including a summary of the produced research.
What’s the structure of the camp?
The camp is preceded by 7 weeks of preparation in the form of an online study group of 3–5 people, followed by a 10-day intensive camp with the aim of creating and publishing a research paper, an extensive blog post, or a GitHub repository.
What will attendees work on?
Participants will work in groups on tightly-defined research projects, for example in the following areas:
- Strategy and Policy
- Agent Foundations (decision theory, subsystem alignment, embedded world models, MIRI-style)
- Value learning (IRL, approval-directed agents, wireheading, …)
- Corrigibility / Interruptibility
- Side Effects, Safe Exploration
- Scalable & Informed Oversight
- Robustness (distributional shift, adversarial examples)
- Human Values (including philosophical and psychological approaches)
When and where?
4–14 October 2018, in Prague, Czech Republic.
Pricing
Attendance is free.
Apply
Applications and more information on aisafetycamp.com
Oh, I agree that for many ideas to be attractive, they have to gain a promising character. I wouldn't reduce the measure of the pursuit-worthiness of scientific hypotheses to the evidence of their success, though: this measure is rather a matter of prospective values, which have to do with a feasible methodology (how many research paths do we have despite current problems and anomalies?). But indeed, sometimes research may proceed simply as groping in the dark, in spite of all the good methodological proposals (as may have been the case, for example, in research on protein synthesis in the mid-20th century).
However, my point was simply this question: does such an investment in future proposals outweigh the investment in other topics, such that it should be funded from an EA budget rather than from existing public funds? Again: I very much encourage such camps, just not at the expense of funds meant for effectively reducing suffering (given that these projects are highly risky and already heavily funded by, say, OpenPhil).
My point (and remmelt's) was that public funds would be harder and more time- and resource-consuming to obtain.
There is currently a gap at the low end (OpenPhil is too big to spend time on funding such small projects).
And Good Ventures/OpenPhil also already fill a lot of the gap in funding programs with track records of effectively reducing suffering.