
Co-author: Linda Linsefors

 

Summary

Last month, 5 teams of up-and-coming researchers gathered to solve concrete problems in AI alignment at our 10-day AI safety research camp in Gran Canaria.

This post describes

  • the event format we came up with
  • our experience & lessons learned in running it in Gran Canaria
  • how you can contribute to the next camp in Prague on 4-14 October & future editions

 

The event format

In February, we proposed our plans for the AI Safety Camp:

Goals:

Efficiently launch aspiring AI safety and strategy researchers into concrete productivity by creating an ‘on-ramp’ for future researchers.

Specifically: 

  1. Get people started on and immersed into concrete research work intended to lead to published papers.
  2. Address the bottleneck in AI safety/strategy of few experts being available to train or organize aspiring researchers by efficiently using expert time.
  3. Create a clear path from ‘interested/concerned’ to ‘active researcher’.
  4. Test a new method for bootstrapping talent-constrained research fields.

Note: this does not involve reaching out to outside researchers – those not yet working in the field of AI safety – to convince them of the imperative of solving alignment problems. 

Method:

Run an online study group culminating in an intensive in-person research camp. Participants work in teams on tightly-defined research projects on the following topics:

  • Agent foundations
  • Machine learning safety
  • Policy & strategy
  • Human values

Project ideas are proposed by participants prior to the start of the program. After that, participants split into teams around the most popular research ideas (each participant joins one team, and each team focuses on one research topic).

What the camp isn’t about:

The AI Safety Camp is not about convincing anyone about the importance of AI alignment research. The camp is for people who are already on board with the general ideas, and who want to develop their research skills and/or find like-minded people to collaborate with. Trying to convert people from adjacent research fields is a very different project, which we do not think mixes well with this event. 

The AI Safety Camp is not a summer school (unlike the one coming up this August in Prague). There are no teachers, although teams can correspond with experienced advisors. Participants are expected to have the knowledge needed to do research together. However, not everyone is required to have research experience or to know every relevant fact. That is what the team is for – to help each other, and lift each other up.

 

The first camp

How it came to be: 

The project got started when Linda Linsefors tried to figure out how to find AI safety researchers to cowork with in a supportive environment. Effective Altruism Global London (November 2017) was coming up, so she decided to network there to look for a “multiplayer solution” to the problem – one that would also help others in a similar situation. 

After bouncing ideas off various people in the conference corridors, Linda had formed a vague plan of starting a research retreat – renting a venue somewhere and inviting others to try to do research together.

While joining an Open Philanthropy Project open office hour, Tom McGrath (who became our team preparation leader) overheard Linda talking about her idea and wanted to explore it further. Later, while couchsurfing at Sam Hilton’s place, she met Remmelt Ellen (who became our meetings & logistics leader) and together they created and drew attention to a Facebook group and form where people could indicate their interest. Nandi Schoots (who became our interviews & programme leader) and David Kristoffersson (who became our international connector) quickly found the Facebook group and joined our first organisers’ call.

Our core organising team formed within a week, after which we scheduled regular video calls to sort out the format, what to call the event, where to organise it, and so on. We hit the ground running and coordinated well through Facebook chats and Zoom calls, considering we were a bunch of international volunteers. Perhaps our team members were unusually dedicated because each of us had taken the initiative to reach out and join the group. We also deliberately made fast decisions on next actions and who would carry them out – thus avoiding the kind of dragged-out discussions where half of the team has to sit idly by waiting for conclusions that no one acts upon.

Initially, we decided to run the first camp in July 2018 in either Berlin or the UK. Then Las Palmas, Gran Canaria was suggested as an alternative in our Facebook group by Maia Pasek from Crow’s Nest (sadly Maia passed away before the camp started). We decided to run a small pilot camp there in April to test how well the format worked – thinking that Gran Canaria was a cheap, attractive sub-tropical island with on-the-ground collaborators to sort out the venue (this ended up being mostly Karol Kubicki).

However, in February a surprising number of researchers (32) submitted mostly high-quality applications – too many for our 12-person Airbnb apartment (a cancellable booking made by Greg Colbourn). Instead, we booked an entire hostel to run the full edition that we had originally envisaged for July, effectively shortening our planning time by 3 months. 

This forced us to be effective and focus on what was most important to make the camp happen. But we were also basically chasing the clock at every step of the organising process, which led to costly mistakes such as rushing out documents and spending insufficient time comparing available venues (we reviewed many more lessons learned in a 14-page internal document). 

Most of the original organisers were exhausted after the event finished and were not going to lead a second edition any time soon. Fortunately, some of the Gran Canaria camp participants are taking up the mantle to organise the second camp together with EA Czech Republic in Prague this October (for more on this see “Next camps” below).

Team formation:

Each applicant was invited to an interview call (conducted with the help of Markus Salmela), and we accepted 25 applicants for the camp (of these, 4 were unable to join the event). 
From there, we invited participants to jot down their preferences for topics to work on and planned a series of calls to form research teams around the most popular topics.

After forming 5 teams, we had an online preparation period of roughly 6 weeks to get up to speed on our chosen research topics (through Slack channels, calls and in-person chats). This minimised the need to study papers at the camp itself. However, it was up to each team to decide how to best spend this time – e.g. some divided up reading materials, or wrote research proposals and got feedback from senior researchers (including Victoria Krakovna, Stuart Armstrong and Owain Evans).

Event structure:

The camp consisted of coworking punctuated by team support sessions and participant-organised activities.

Programme summary:
Day 1:   Arrival, starting ceremony
Day 2:   Team research
Day 3:   Team research
Day 4:   Research idea presentations, half day off
Day 5:   Team debugging, research ducking in pairs, team research
Day 6:   Inter-team Hamming circles, team research, research ducking in pairs
Day 7:   Day off
Day 8:   Team research
Day 9:   Team research, AlphaZero presentation (participant initiative), career circle
Day 10:  Team research, research presentations, closing ceremony
Day 11:  Feedback form, departure

The programme was split into three arcs (days 1-4, 4-7 and 7-11) in which the workload gradually intensified and then eased off – hopefully enabling teams to do intensive work sprints without burning out. 

The support sessions on days 5 and 6 were aimed at helping teams resolve bottlenecks and become more productive. Although a few participants mentioned that the sessions were surprisingly useful, holding them during daylight hours kept teams from getting on with their research. For future camps, we suggest offering only optional Hamming circles and research ducking sessions in the evenings. 

Participants also shared their own initiatives on the dining room blackboard, such as morning yoga, beach walks, mountain hiking, going out for dinner, a clicker game and an AlphaZero presentation. We wholeheartedly recommend fostering unconference-style initiatives at research events – they give participants the freedom to fill in whatever the organisers have missed. 

Two experienced Centre for Applied Rationality workshop mentors, Ben Sancetta and Anne Wissemann, had the job of supporting participants in sorting out any issues they or their team encountered, and helping ensure that everyone was happy (Anne also oversaw supplies). Luckily, everyone got along so well that Anne and Ben only had a handful of one-on-ones. Nevertheless, having them around was a treat for some participants, as it allowed them to drop in and vent whatever was on their mind, knowing that it would not unduly bother either of them.

Budget:

The total cost of organising the camp was €11,572 (excluding some items paid for by the organisers themselves). 

The funds were managed through the bank account of Effective Altruism Netherlands. Unspent money was transferred to the Czech Association for Effective Altruism for the next camp (they are open to donations if their EA Grant application for the camp gets delayed or rejected).

AI Safety Camp - Gran Canaria - income & expenses 

Results:

Each team has written a brief summary of the work they did during the camp (as well as future plans). Other outcomes include: 

  • The bounded rationality team has received funding from Paul Christiano to continue their work.
  • The Gridworld team has written a blogpost and is making a GitHub pull request for their work to be added to the Safety Gridworlds repository.
  • The Safe AF team is writing a paper on their results.
  • At least 2 participants have changed their career plans towards working on AI safety (many participants were already junior researchers or had already made up their minds prior to the camp).
  • 8 more participants reported an increase in their motivation and/or confidence in doing research work.  

As our Gran Canaria "pilot camp" grew in ambition, we implicitly worked towards the outcomes we expected to see for the “main camp”:

  1. Three or more draft papers have been written that are considered to be promising by the research community.
  2. Three or more researchers who participated in the project would obtain funding or a research role in AI safety/strategy in the year following the camp.

It is too soon to say whether the first goal will be met, although with one paper in preparation and one team having already obtained funding, it looks plausible. The second goal was already met less than a month after the camp.

 

Improving the format

The format of the AI Safety Camp is still under development. Here are two major points we would like to improve. Suggestions are welcome. 

1.   Managing team onboarding:

After the interviews, we accepted applicants on the condition that they would find a research team, which created uncertainty for them.

Forming research teams whose members are a good fit for promising topics lies at the foundation of a productive camp. But it is also a complex problem with many variables and moving parts (e.g. do we accept people first and form teams around them, or form teams first and accept people based on their fit with a team? Should we choose research topics first and then decide who joins which team, or form teams first and then let them choose topics?). 

We handled this at the first camp by trying to do everything at the same time. Although this worked out okay, the onboarding process can be made smoother and easier to follow at future camps.

Note: The irrationality team of 5 people ended up splitting into two sub-groups since one of the topics seemed too small in scope for 5 people. We suggest limiting group size to 4 people at future camps.

2.  Integrating outside advisors: 

Many senior AI safety researchers replied slowly to our email requests to advise our teams, presumably because of busy schedules. This led to a dilemma: 

A. If we waited until we knew what the research topics would be, then we might not have gotten an answer from potential advisors in time. 

B. If we acted before topics had been selected, we would end up contacting many senior researchers who were not specialised in the final topics.

At the first camp, we lacked time to work out a clear strategy, so teams ended up having to reach out to the advisors we had found. For future camps, it should be easier to connect advisors with teams, given that the next organisers are already on the move. Hopefully, experienced researchers reading this post will also be inclined to offer a few spare hours to review research proposals and draft papers (please send us a short email). 

 

Next camps

The next camps will happen in:

  • 4-14 Oct 2018: Prague, Czechia, in collaboration with the Czech Association for Effective Altruism (they will also organise the Human-aligned AI Summer School in August)

  • ~ March 2019: Blackpool, United Kingdom, at the EA Hotel (which offers free accommodation for researchers)

 

If you’ve gotten this far, we can use your contribution:
Apply to join the Prague camp 

Email contact@aisafetycamp.com if you are considering

  • advising research teams on their projects
  • contributing your skills to organising camps
  • funding future camps
  • running your own edition next year
    criteria: experienced organisers who will run our general format & uphold a high quality standard that reflects well on the wider research community

Join our Facebook group to stay informed (or check out our webpage)

  

Acknowledgement

The first AI Safety Camp was made possible by the following donors:
  • Centre for Effective Altruism: €2,961
  • Machine Intelligence Research Institute: €3,223
  • Greg Colbourn: €3,430
  • Lotta and Claes Linsefors: €4,000

Comments

[anonymous]:

Just want to highlight the bit where you describe how you exceeded your goals (at least, that's my takeaway):

As our Gran Canaria "pilot camp" grew in ambition, we implicitly worked towards the outcomes we expected to see for the “main camp”:

  1. Three or more draft papers have been written that are considered to be promising by the research community.
  2. Three or more researchers who participated in the project would obtain funding or a research role in AI safety/strategy in the year following the camp.

It is too soon to say whether the first goal will be met, although with one paper in preparation and one team having already obtained funding, it looks plausible. The second goal was already met less than a month after the camp.

Congrats!

Thanks, yeah, perhaps we should have included that in the summary.

Personally, I was impressed by the commitment with which the researchers worked on their problems (and how they generally stepped in when there was dishwashing and other chores to be done). My sense is that the camp filled a ‘gap in the market’: a small group of young people serious about AI alignment research who wanted to work with others to develop their skills and start producing output.