
Elityre comments on Bottlenecks and Solutions for the X-Risk Ecosystem - Effective Altruism Forum


Comment author: Elityre, 08 October 2018 03:09:44PM, 8 points

It seems to me that in many cases the specific skills that are needed are both extremely rare and not well captured by the standard categories.

For instance, Paul Christiano seems to me to be an enormous asset to solving the core problems of AI safety. If "we didn't have a Paul" I would be willing to trade huge amounts of EA resources to have him working on AI safety, and I would similarly trade huge resources to get another Paul-equivalent working on the problem.

But it doesn't seem like Paul's skillset is one that I can easily select for. He's knowledgeable about ML, but there are many people with ML knowledge (about 100 new ML PhDs each year). That isn't the thing that distinguishes him.

Nevertheless, Paul has some qualities, above and beyond his technical familiarity, that allow him to do original and insightful thinking about AI safety. I don't understand what those qualities are, or know how to assess them, but they seem to me to be much more critical than having object level knowledge.

I have close to no idea how to recruit more people who can do the sort of work that Paul can do. (I wish I did. As I said, I would give up way more than my left arm to get more Pauls.)

But I'm afraid there's a tendency here to Goodhart on the easily measurable virtues, like technical skill or credentials.

Comment author: AdamGleave, 09 October 2018 03:20:49AM, 9 points

There aren't many people with PhD-level research experience in relevant fields who are focusing on AI safety, so I think it's a bit early to conclude these skills are "extremely rare" amongst qualified individuals.

AI safety research spans a broad range of areas, but for the more ML-oriented research the skills are, unsurprisingly, not that different from other fields of ML research. There are two main differences I've noticed:

  • In AI safety you often have to turn ill-defined, messy intuitions into formal problem statements before you can start working on them. In other areas of AI, people are more likely to have already formalized the problem for you.
  • It's important to be your own harshest critic. This is cultivated in some other fields, such as computer security and (in a different way) mathematics. But ML tends to encourage a sloppy attitude here.

Both of these, though, I think are fairly easy to assess from looking at someone's past work and talking to them.

Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields. I've been involved in screening in two different industries (financial trading and, more recently, AI research). In both cases there's always been a lot of guesswork involved, and I don't get the impression it's any better in other sectors. If anything I've found screening in AI easier: at least you can actually read the person's work, rather than everything being behind an NDA (common in many industries).

Comment author: Benito, 09 October 2018 06:53:53AM, 4 points

"Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields."

Quite. I think my model of Eli was setting the highest standard possible: not merely a good researcher, but a great one, the sort of person who can bring whole new paradigms or subfields into existence (Kahneman & Tversky, Von Neumann, Shannon, Einstein, etc.). He was then noting that, because the tails come apart (aka regressional Goodharting), optimising for the normal metrics used in standard hiring practices won't get you these researchers. (I realise that probably wasn't true for Von Neumann, but I think it was true for all the others.)
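
A minimal sketch of the "tails come apart" point, using a toy Gaussian model of my own rather than anything from the thread: even when an easily measured proxy (credentials, technical skill) correlates reasonably well with the trait you actually care about, the candidate who maxes out the proxy is usually not the candidate who maxes out the trait. The correlation value and population size below are illustrative assumptions.

```python
# Toy illustration of "the tails come apart" (regressional Goodharting).
# Assumes a simple Gaussian model: a measurable proxy correlates at rho
# with the trait we actually care about (research ability).
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6              # assumed proxy/ability correlation (illustrative)
n_candidates = 10_000  # assumed applicant pool size (illustrative)
n_trials = 1_000

hits = 0
for _ in range(n_trials):
    ability = rng.standard_normal(n_candidates)
    noise = rng.standard_normal(n_candidates)
    # Proxy has the same marginal scale as ability, correlation rho.
    proxy = rho * ability + np.sqrt(1 - rho**2) * noise

    best_on_proxy = np.argmax(proxy)               # who hiring-by-metric picks
    top_1pct = ability >= np.quantile(ability, 0.99)
    hits += top_1pct[best_on_proxy]                # did we land a top-1% person?

print(f"P(best-on-proxy candidate is in the top 1% of ability): {hits / n_trials:.2f}")
# With rho = 0.6 this probability comes out well below 1: optimising hard on
# the proxy usually does not hand you the very best person on the trait itself.
```

The pattern the simulation shows is the point Benito is making: moderate correlation is enough for the proxy to work fine in the bulk of the distribution, but at the extreme tail the best-on-proxy and best-on-trait candidates increasingly diverge.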