Comment author: Elityre 08 October 2018 03:09:44PM *  8 points

It seems to me that in many cases the specific skills that are needed are both extremely rare and not well captured by the standard categories.

For instance, Paul Christiano seems to me to be an enormous asset in solving the core problems of AI safety. If "we didn't have a Paul," I would be willing to trade huge amounts of EA resources to have him working on AI safety, and I would similarly trade huge resources to get another Paul-equivalent working on the problem.

But it doesn't seem like Paul's skillset is one that I can easily select for. He's knowledgeable about ML, but there are many people with ML knowledge (about 100 new ML PhDs each year). That isn't the thing that distinguishes him.

Nevertheless, Paul has some qualities, above and beyond his technical familiarity, that allow him to do original and insightful thinking about AI safety. I don't understand what those qualities are, or know how to assess them, but they seem to me to be much more critical than having object-level knowledge.

I have close to no idea how to recruit more people who can do the sort of work that Paul can do. (I wish I did. As I said, I would give up way more than my left arm to get more Pauls.)

But I'm afraid there's a tendency here to Goodhart on the easily measurable virtues, like technical skill or credentials.

Comment author: AdamGleave 09 October 2018 03:20:49AM *  9 points

There aren't many people with PhD-level research experience in relevant fields who are focusing on AI safety, so I think it's a bit early to conclude these skills are "extremely rare" amongst qualified individuals.

AI safety research spans a broad range of areas, but for the more ML-oriented research the skills are, unsurprisingly, not that different from other fields of ML research. There are two main differences I've noticed:

  • In AI safety you often have to turn ill-defined, messy intuitions into formal problem statements before you can start working on them. In other areas of AI, people are more likely to have already formalized the problem for you.
  • It's important to be your own harshest critic. This is cultivated in some other fields, such as computer security and (in a different way) in mathematics. But ML tends to encourage a sloppy attitude here.

Both of these, though, I think are fairly easy to assess by looking at someone's past work and talking to them.

Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields. I've been involved in screening in two different industries (financial trading and, more recently, AI research). In both cases there's always been a lot of guesswork involved, and I don't get the impression it's any better in other sectors. If anything, I've found screening in AI easier: at least you can actually read the person's work, rather than having everything hidden behind an NDA (common in many industries).

Comment author: AdamGleave 23 April 2018 01:25:13AM 10 points

Upvoted because this is an important topic I've seen little discussion of. Although you take pains to draw attention to the limitations of this data set, these caveats aren't included in the conclusion, so I'd be wary of anyone acting on this verbatim. I'd be interested in seeing drop-out rates in other social movements to give a better idea of the base rate.

Comment author: AdamGleave 24 March 2018 07:15:13PM 3 points

Thanks for running this! It's unfortunate this is at the same time as ICML/IJCAI/AAMAS; I'd have been interested in attending otherwise. I'm not sure what proportion of your target audience go to the major ML conferences, but it might be worth trying to schedule around them next year.