Comment author: 80000_Hours 10 October 2018 07:31:53PM *  4 points [-]

"Does the answer refer to high impact opportunities in general in the world"

That question is intended to look at the highest-impact jobs available in the world as a whole, in contrast with the organisations being surveyed. Given the top response was government and policy experts, I think people interpreted it correctly.

Comment author: Denise_Melchin 10 October 2018 07:08:37PM *  9 points [-]

Echoing David, I'm somewhat sceptical of the responses to "what skills and experience they think the community as a whole will need in the future". Does the answer refer to high impact opportunities in general in the world, or only those mostly located at EA organisations?

I'm also not sure about the relevance to individual EAs' career decisions. Implying it is relevant could be outright dangerous if this answer is built on the needs of jobs that are mostly located at EA organisations. From what I understand, EA organisations have recently seen a sharp increase not only in the number but also in the quality of applications. That's great! But it's pretty unfortunate for people who took the arguments about 'talent constraints' seriously and focused their efforts on finding a job in the EA Community. They are now finding out that they may have few prospects there, even if they are very talented and competent.

There's no shortage of high impact opportunities outside EA organisations. But the EA Community lacks the knowledge to identify them and resources to direct its talent there.

There are only a few dozen roles at EA orgs each year, never mind roles that are a good fit for an individual EA's skillset. Even if we only look at the most talented people, there are more capable people than the EA Community is able to place within its own organisations. And this will only get worse - the EA Community is growing faster than the number of jobs at EA orgs.

If we don't have the knowledge and connections to allocate all our talent right now, that's unfortunate, but not necessarily a big problem as long as this is clearly communicated. What is a big problem is accidentally misleading people into thinking it's best to focus their career efforts mostly on EA orgs, instead of viewing them as a small sliver of a vast option space.

Comment author: DavidNash 10 October 2018 03:25:06PM *  5 points [-]

Looking at this part -

"We did include more people from organisations focused on long-termism. It’s not clear what the right method is here, as organisations that are bigger and/or have more influence over the community ought to have more representation, but we think there’s room for disagreement with this decision."

I think one potential reason there are more people interested in EA working at LTF organisations is that EA and LTF are both relatively new ideas. Not many people are considering careers in these areas, so it is much easier for a community to found and staff the majority of organisations.

If global development had been ignored until 5 years ago, it's very likely most of the organisations in this area would have been founded by people interested in EA, and they might be over-represented in surveys like this.

There may be talent gaps in other cause areas (beyond development and animals) that are missed because those areas don't have leaders with EA backgrounds, but that doesn't mean those gaps should be underweighted.

It may be worth running a separate survey to gather opinions on talent gaps in priority areas, whether or not those areas are led by people involved in EA.

Comment author: Tom_Voltz 10 October 2018 10:28:00AM 0 points [-]

Sounds great, thanks!

Comment author: Mati_Roy  (EA Profile) 10 October 2018 04:59:34AM 2 points [-]

feature idea: be able to mark (old) articles as obsolete

Comment author: kbog  (EA Profile) 09 October 2018 07:33:44PM 0 points [-]

OK, I've sent you a connection request.

Comment author: sirshred 09 October 2018 05:26:32PM 0 points [-]

Please link to the examples here when they are finished, thanks!

Comment author: oliverbramford 09 October 2018 12:17:21PM *  4 points [-]

The required skills and experience of senior hires vary between fields and roles; senior x-risk staff are probably best-placed to specify these requirements in their respective domains of work. You can look at x-risk job ads and the recruitment webpages of leading x-risk orgs for some reasonable guidance. (We are developing a set of profiles for prospective high-impact talent, to give a more nuanced picture of who's required.)

"Exceptionally good judgement and decision-making", for senior x-risk talent, I believe requires:

  • a thorough and nuanced understanding of EA concepts and how they apply to the context

  • good pragmatic foresight - an intuitive grasp of the likely and possible implications of one's actions

  • a conscientious risk-aware attitude, with the ability to think clearly and creatively to identify failure modes

Assessing good judgement and decision-making is hard; it's particularly hard to assess the consistency of a person's judgement without knowing or working with them over at least several months. Some methods:

  • Speaking to a person can quickly clarify their level of knowledge of EA concepts and how they apply to the context of their role.

  • Speaking to references could be very helpful, to get a picture of how a person updates their beliefs and actions.

  • Actually working with them (perhaps via a work trial, partnership or consultancy project) is probably the best way to test whether a person is suitable for the role.

  • A critical thinking psychometric test may plausibly be a good preliminary filter, but is perhaps more relevant for junior talent. A low score would be a big red flag, but a high score is far from sufficient to imply overall good judgement and decision-making.

Comment author: jonleighton  (EA Profile) 09 October 2018 12:03:05PM *  5 points [-]

As you know, Lee, your post increased our interest at OPIS (http://www.preventsuffering.org) in this issue as a potentially tractable cause area, and after the Lancet Commission report a year ago we became engaged with it through our UN Human Rights Council event and advocacy (http://www.preventsuffering.org/pain/). We have since been contacted by palliative care associations about collaborating, so I prepared a document with some new thoughts and an analysis of promoting morphine access as a potentially cost-effective EA cause area, for those interested in relieving some of the worst human suffering. The document is here: http://www.preventsuffering.org/wp-content/uploads/2018/10/Relieving-extreme-physical-pain-in-humans-–-an-opportunity-for-effective-funding.pdf. I will also create a new EA Forum post to elicit feedback.

Comment author: Tadhg-Giles 09 October 2018 09:45:24AM 0 points [-]

Hey kbog, love your line of thinking! I'm setting up an entrepreneurial initiative in London around this topic. Contact me via LinkedIn - https://www.linkedin.com/in/tadhg-giles-94a65882/ or via email tadhggiles@gmail.com if you'd like to talk about it!

Comment author: Benito 09 October 2018 06:53:53AM 4 points [-]

Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields.

Quite. I think my model of Eli was setting the highest standard possible - not merely a good researcher but a great one, the sort of person who can bring whole new paradigms or subfields into existence (Kahneman & Tversky, von Neumann, Shannon, Einstein, etc.) - and then noting that, because the tails come apart (aka regressional Goodharting), optimising for the normal metrics used in standard hiring practices won't get you these researchers (I realise that probably wasn't true for von Neumann, but I think it was true for all the others).
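
As a side note, the "tails come apart" point is easy to see in a toy simulation (entirely my illustration; the correlation strength, pool size and trial count are arbitrary assumptions, not anything from the comment above):

```python
# Toy simulation: a hiring proxy correlated with true research ability.
# Even with a strong correlation, the top scorer on the proxy is usually
# not the person with the highest true ability once the pool is large.
import random

random.seed(0)
trials, pool_size, hits = 2000, 500, 0
for _ in range(trials):
    pool = []
    for _ in range(pool_size):
        ability = random.gauss(0, 1)                      # what you actually care about
        proxy = 0.8 * ability + 0.6 * random.gauss(0, 1)  # correlation ~0.8 with ability
        pool.append((proxy, ability))
    top_proxy = max(pool, key=lambda p: p[0])
    top_ability = max(pool, key=lambda p: p[1])
    hits += top_proxy is top_ability
print(hits / trials)  # well below 1: optimising the proxy usually misses the very best
```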

Comment author: AdamGleave 09 October 2018 03:20:49AM *  9 points [-]

There aren't many people with PhD-level research experience in relevant fields who are focusing on AI safety, so I think it's a bit early to conclude these skills are "extremely rare" amongst qualified individuals.

AI safety research spans a broad range of areas, but for the more ML-oriented research the skills are, unsurprisingly, not that different from other fields of ML research. There are two main differences I've noticed:

  • In AI safety you often have to turn ill-defined, messy intuitions into formal problem statements before you can start working on them. In other areas of AI, people are more likely to have already formalized the problem for you.
  • It's important to be your own harshest critic. This is cultivated in some other fields, such as computer security and (in a different way) in mathematics. But ML tends to encourage a sloppy attitude here.

Both of these I think are fairly easily measurable from looking at someone's past work and talking to them, though.

Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields. I've been involved in screening in two different industries (financial trading and, more recently, AI research). In both cases there's always been a lot of guesswork involved, and I don't get the impression it's any better in other sectors. If anything I've found screening in AI easier: at least you can actually read the person's work, rather than everything being behind an NDA (common in many industries).

Comment author: Jon_Behar 08 October 2018 05:51:56PM 1 point [-]

Definitely agree on the value of spreading basic principles, though I think we also need to focus on some charity-specific themes given that we want to change giving behavior. In addition to the general frameworks you mention, I think it’s valuable to promote “intentional”, “informed”, and “impactful” giving as these are very uncontroversial ideas. And while it’s most valuable when someone buys into all three of those notions in a big way, there’s also value to getting a lot of people to buy in partially. If millions more people see the value of informed giving, incentives will improve and new products will emerge to meet that demand.

FWIW, I think the more accessible approach makes sense even in a world with huge variation in impact across charities. I think you’ll get more money to the “elite” charities if you have a culture where people seek out the best cancer charity they can find, the best local org they can find, etc vs trying “to get more people to adopt the whole EA mindset.”

Comment author: Peter_Hurford  (EA Profile) 08 October 2018 05:12:04PM 1 point [-]

Instead, you could assign based on whether they have an odd or even number of letters in their name.

You could SHA-256 hash the names and then randomize based on that. Doing so should remove all chances of confounding effects.
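
A minimal sketch of what that might look like in Python (my illustration, not part of the original suggestion; the two-group setup and example names are arbitrary):

```python
import hashlib

def assign_group(name: str, n_groups: int = 2) -> int:
    """Deterministically map a name to a group via its SHA-256 hash."""
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_groups

# The same name always lands in the same group, but the assignment has no
# systematic relationship to surface features like name length.
print(assign_group("Alice"), assign_group("Bob"))
```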

Comment author: Elityre 08 October 2018 04:36:07PM 4 points [-]

I'm not sure how much having a "watered down" version of EA ideas in the zeitgeist helps, because I don't have a clear sense of how effective most charities are.

If the difference between the median charity and the most impactful charity is 4 orders of magnitude ($1 to the most impactful charities does as much good as $10,000 to the median charity), then even a 100x improvement over the median charity is not very impactful. It's still only 1% as good as donating to the best charity. If that were the case, it's probably more efficient to just aim to get more people to adopt the whole EA mindset.

On the other hand, if the variation is much smaller, it might be the case that a 100x improvement gets you to about half of the impact per dollar of the best charities.
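
To make the arithmetic explicit, here's a minimal sketch (the spread figures are the hypotheticals above, not measured numbers):

```python
def relative_impact(improvement: float, spread: float) -> float:
    """Impact of a charity `improvement`x better than the median, as a fraction of the best charity."""
    return improvement / spread

print(relative_impact(100, 10_000))  # 4-orders-of-magnitude world: 0.01, i.e. 1% of the best
print(relative_impact(100, 200))     # much smaller spread: 0.5, i.e. about half the best
```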

Which world we're living in matters a lot for whether we should invest in this strategy.

That said, promotion of EA principles, like cost-effectiveness and EV estimates, separate from the EA brand, seems almost universally good, and extends far beyond people's choice of charities.

Comment author: Elityre 08 October 2018 03:09:44PM *  8 points [-]

It seems to me that in many cases the specific skills that are needed are both extremely rare and not well captured by the standard categories.

For instance, Paul Christiano seems to me to be an enormous asset to solving the core problems of AI safety. If "we didn't have a Paul" I would be willing to trade huge amounts of EA resources to have him working on AI safety, and I would similarly trade huge resources to get another Paul-equivalent working on the problem.

But it doesn't seem like Paul's skillset is one that I can easily select for. He's knowledgeable about ML, but there are many people with ML knowledge (about 100 new ML PhDs each year). That isn't the thing that distinguishes him.

Nevertheless, Paul has some qualities, above and beyond his technical familiarity, that allow him to do original and insightful thinking about AI safety. I don't understand what those qualities are, or know how to assess them, but they seem to me to be much more critical than having object level knowledge.

I have close to no idea how to recruit more people that can do the sort of work that Paul can do. (I wish I did. As I said, I would give up way more than my left arm to get more Pauls).

But I'm afraid there's a tendency here to Goodhart on the easily measurable virtues, like technical skill or credentials.

Comment author: Elityre 08 October 2018 02:57:26PM *  6 points [-]

In the short term, senior hires are most likely to come from finding and onboarding people who already have the required skills, experience, credentials and intrinsic motivation to reduce x-risks.

Can you be more specific about what the required skills and experience are?

Skimming the report, you say "All senior hires require exceptionally good judgement and decision-making." Can you be more specific about what that means and how it can be assessed?

Comment author: Elityre 08 October 2018 02:18:33PM 3 points [-]

Intellectual contributions to the rationality community: including CFAR’s class on goal factoring

Just a note: I think this might be a bit misleading. Geoff and other members of Leverage Research taught a version of goal factoring at some early CFAR workshops, and Leverage did develop a version of goal factoring inspired by CT. But my understanding is that CFAR staff independently developed goal factoring (starting from an attempt to teach applied consequentialism), and this is an instance of parallel development.

[I work for CFAR, though I had not yet joined the EA or rationality community in those early days. I am reporting what other long-standing CFAR staff told me.]

Comment author: Ronja_Lutz 08 October 2018 12:18:08PM 0 points [-]

Thanks for this helpful post! I'm currently running EA Berlin on a part-time grant and was wondering about your thoughts on workgroups, since we do a bunch of project-based work that might fit with that. Was "sparking workgroups" something of a side effect, or did you actively encourage it? Do members run them independently, or do you support them, and how?

Comment author: Safa_Amirbayat 06 October 2018 11:24:56PM -1 points [-]

Hello lovely people,

Call to all EAs in politics! (I am sorry for posting this here, I haven't enough karma for my own post).

Government systems can be one of the best (and worst) ways to make geniuses from idiots and angels from devils.

I am an Officer for the Camberwell Green district in London and have had many opportunities to speak with decision makers about EA.

I want to set up a point of contact - if one doesn't exist already - for discussing best practice for implementing EA policies in our respective spheres of influence.

I'll also be at EA Global later this month, looking forward to seeing you all.

Safa Amirbayat
