
MichaelPlant comments on Comparative advantage in the talent market - Effective Altruism Forum


Comment author: MichaelPlant 12 April 2018 10:16:26AM 13 points

However, we can also err by thinking about a too narrow reference class

Just to pick up on this, a worry I've had for a while - which I don't think I'm going to do a very good job of explaining here - is that the reference class people use is "current EAs", not "current and future EAs". To explain: when I started to get involved in EA back in 2015, 80k's advice, in caricature, was that EAs should become software developers or management consultants and earn to give, whereas research roles, such as becoming a philosopher or historian, were low priority. Now the advice has, again in caricature, swung the other way: management consultancy looks very unpromising, and people are being recommended to do research. There's even occasional discussion (see MacAskill's 80k podcast) that, on the margin, philosophers might be useful. If you'd taken 80k's advice seriously and gone into consultancy, it seems you would have done the wrong thing. (Objection, imagining Wiblin's voice: but what about personal fit? We talked about that. Reply: if personal fit does all the work - i.e. "just do the thing that has the greatest personal fit" - then there's no point in making more substantive recommendations.)

I'm concerned that people will funnel themselves into jobs that are high-priority now, in which they have a small comparative advantage over other EAs, rather than jobs in which they will later have a much bigger comparative advantage over other EAs. At the present time, the conversation is about EA needing more people in operations roles. Suppose two EAs, C and D, are thinking about what to do. C realises he's 50% better than D at ops and 75% better at research, so C goes into ops because that's the higher priority. D goes into research. Time passes and the movement grows. E now joins. E is better than C at ops. The problem is that C has taken an ops role and it's much harder for C to transition to research. C only has a comparative advantage at ops in the first time period; thereafter he doesn't. Overall, it looks like C should just have gone into research, not ops.
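To make the trade-off concrete, here's a minimal Python sketch of the C/D/E scenario. Every number is invented for illustration (D is normalised to 1.0 in both roles), and it naively treats ops and research output as interchangeable, which they aren't:

```python
# Toy model of the C/D/E story over two periods; all numbers invented.
prod = {
    "C": {"ops": 1.5, "research": 1.75},  # 50% / 75% better than D
    "D": {"ops": 1.0, "research": 1.0},   # baseline
    "E": {"ops": 2.0, "research": 1.0},   # joins in period 2, better at ops
}

def total_output(plan):
    """Sum productivity over a list of per-period {person: role} dicts."""
    return sum(prod[person][role]
               for period in plan
               for person, role in period.items())

# C fills the urgent ops gap now, then is locked in when E arrives.
ops_first = [{"C": "ops", "D": "research"},
             {"C": "ops", "D": "research", "E": "ops"}]

# C goes straight into research, accepting worse ops coverage in period 1.
research_first = [{"C": "research", "D": "ops"},
                  {"C": "research", "D": "ops", "E": "ops"}]

print(total_output(ops_first))       # 7.0
print(total_output(research_first))  # 7.5
```

Under these toy numbers, research-first wins overall: the one-period gain from C covering ops is smaller than the cost of C being locked out of research once E arrives.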

In short, our comparative advantage is not fixed, but will change over time simply based on who else shows up. Hence we should think about comparative advantage over our lifetimes rather than the shorter term. This likely changes things.

Comment author: Denise_Melchin 12 April 2018 06:58:00PM 5 points

I completely agree. I considered making the point in the post itself, but I didn't because I'm not sure about the practical implications myself!

Comment author: MichaelPlant 13 April 2018 11:18:07PM 1 point

I agree it's really complicated, but it merits some thinking. The one practical implication I take is "if 80k says I should be doing X, there's almost no chance X will be the best thing I could do by the time I'm in a position to do it".

Comment author: Ben_Todd 20 April 2018 06:24:03AM 2 points

That seems very strong - you're saying all our recommendations are wrong, even though we're already trying to take account of this effect.

Comment author: ThomasSittler 12 April 2018 06:32:44PM 4 points

Great point. Has anyone thought about what kind of changes to the talent landscape we should expect in the next 3-6 years, and how that would affect comparative advantage considerations?

Comment author: Sebastian_Oehm 13 April 2018 01:06:00PM 6 points

You could try to model this by estimating how (i) the talent needs and (ii) the talent availability will be distributed as the community scales further.

(i) If you assume that the EA community grows, you may think that the mix of skillsets we need in the community will change. E.g. you might believe that if the community grows by a factor of 10, we don't need 10x as many people thinking about movement building strategy (the size of that problem doesn't scale linearly with the number of people) or entrepreneurial skills (as the average org will be larger and more established), but that an increase by a factor of, say, 2-5 might be sufficient. On the other hand, you'd quite likely need ~10x as many ops people.

(ii) For the talent distribution, one could model this using one of the following assumptions (a rough code sketch follows the list):

1) Linearly scale the current talent distribution (i.e. assume that the distribution of skillsets in the future community would be the same as today).

2) Assume that the future talent distribution will become more similar to a relevant reference class (e.g. talent distribution for graduates from top unis)
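Putting (i) and (ii) together, here's a minimal sketch of what such a model could look like. Every figure (today's headcounts, the growth factors, the reference-class shares) is an invented placeholder, not data:

```python
# Needs-vs-supply sketch: a community of 100 people today, grown 10x.
current_supply = {"ops": 10, "research": 20, "movement_building": 10,
                  "entrepreneurship": 10, "other": 50}
growth = 10

# (i) Needs scale at skill-specific rates, not uniformly with size.
need_growth = {"ops": 10, "research": 8, "movement_building": 3,
               "entrepreneurship": 4, "other": 10}
future_need = {k: current_supply[k] * need_growth[k] for k in current_supply}

# (ii) Assumption 1: supply is today's distribution, scaled linearly.
supply_a1 = {k: v * growth for k, v in current_supply.items()}

# (ii) Assumption 2: supply converges to a reference class, e.g. the
# skill shares of graduates from top unis (placeholder shares).
reference_shares = {"ops": 0.15, "research": 0.25, "movement_building": 0.05,
                    "entrepreneurship": 0.10, "other": 0.45}
total_people = sum(current_supply.values()) * growth
supply_a2 = {k: s * total_people for k, s in reference_shares.items()}

for label, supply in [("assumption 1", supply_a1), ("assumption 2", supply_a2)]:
    shortage = {k: future_need[k] - supply[k] for k in future_need}
    print(label, {k: round(v) for k, v in shortage.items()})
```

Positive numbers are projected shortages, negative ones surpluses; the interesting skills are the ones whose shortage depends heavily on which supply assumption you pick.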

A few conclusions I'd draw from this, e.g.:

  • weak point against building skills in start-ups - if you're great at this, start stuff now

  • weak point in favour of building management skills, especially with assumption 1), but less so with assumption 2)

  • weak point against specialising in areas where EA would really benefit from having just 2-3 experts but is unlikely to need many more (e.g. history, psychology, institutional decision making, nanotech, geoengineering) if you're also a good fit for something else, as we might just find those experts along the way

  • especially if 2), a weak point against working on biorisk (or investing substantially in building skills in bio) if you might be an equal fit for technical AI safety, as the maths/computer science : biology ratio at most unis is more like 1:1 (see https://www.hesa.ac.uk/news/11-01-2018/sfr247-higher-education-student-statistics/subjects), but we probably want 5-10x as many people working on AI as on biorisk; quick arithmetic in the sketch below. [The naive view using the current talent distribution might suggest that you should work on bio rather than AI if you're an equal fit, as the current AI : bio talent ratio seems to be > 10 : 1]
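To spell out the arithmetic in that last bullet (with invented round numbers):

```python
# Bio vs AI under assumption 2; both ratios are rough assumptions.
graduate_supply_ratio = 1.0  # maths+CS : biology graduates, roughly 1:1
desired_work_ratio = 7.5     # midpoint of the 5-10x AI : biorisk range

# If future supply mirrors graduates but need is skewed towards AI,
# the AI side is undersupplied relative to need by roughly this factor:
print(desired_work_ratio / graduate_supply_ratio)  # 7.5
```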

All of this is less relevant if you apply a high discount rate to work done in 5-10 years relative to work done now.

Comment author: JanBrauner 14 April 2018 12:19:40PM 0 points

I really like that idea. It might also be useful to check whether this model would have predicted past changes of career recommendations.

Comment author: John_Maxwell_IV 13 April 2018 05:30:38AM 3 points

Before operations it was AI strategy researchers, and before AI strategy researchers it was web developers. At various times it has been EtG, technical AI safety, movement-building, etc. We can't predict talent shortages precisely in advance, so if you're a person with a broad skillset, I do think it might make sense to act as flexible human capital and address whatever is currently most needed.

Comment author: MichaelPlant 13 April 2018 11:14:10PM 1 point

I think I'd go the other way and suggest people focus more on personal fit: i.e. do the thing in which you have greatest comparative advantage relative to the world as a whole, not just to the EA world.

Comment author: Ben_Todd 20 April 2018 06:22:00AM 2 points

I agree with the "in short" section. I'm less sure about exactly how it changes things. It seems reasonable to think more about your comparative advantage compared to the world as a whole (taking that as a proxy for the future composition of the community), or maybe just to try to think more about which types of talent will be hardest to attract in the long term. I don't think the changes in advice about EtG and consulting were much due to this exact mistake.

One small thing we'll do to help with this is ask people to project the biggest talent shortages at longer time horizons in our next talent survey.

Comment author: Michael_PJ 12 April 2018 08:46:53PM 1 point

This is a good point, although talent across time is comparatively harder to estimate. So "act according to present-time comparative advantage" might be a passable approximation in most cases.

We also need to consider the interim period when thinking about trades across time. If C takes the ops job, then in the period between C taking the job and E joining the movement, we get better ops coverage. It's not immediately obvious to me how this plays out; it might need a little bit of modelling.
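For instance, here's one way that modelling could start, reusing the toy productivity numbers from the earlier sketch (all invented): ops output earns an extra "bottleneck" weight w while it's the scarce thing, i.e. before E arrives, and counts normally afterwards, while C stays locked into whichever role he chose.

```python
# Interim-period sketch; all numbers invented.
def total_value(c_role, t_e, T=10, w=2.0):
    """Value over T periods if C is locked into c_role and E joins at t_e."""
    c_out = {"ops": 1.5, "research": 1.75}[c_role]  # C's productivity
    d_out = 1.0                                     # D takes the other role
    e_ops = 2.0                                     # E's ops productivity
    if c_role == "ops":
        pre = (w * c_out + d_out) * t_e             # C ops, D research
        post = (c_out + e_ops + d_out) * (T - t_e)  # premium gone once E is in
    else:
        pre = (w * d_out + c_out) * t_e             # D ops, C research
        post = (d_out + e_ops + c_out) * (T - t_e)
    return pre + post

for t_e in range(0, 11, 2):
    gap = total_value("ops", t_e) - total_value("research", t_e)
    print(f"E joins at t={t_e}: ops-first advantage = {gap:+.2f}")
```

Under these numbers the answer flips roughly halfway through the horizon: if E shows up soon, C taking the ops job loses overall; if the interim is long, it wins. So the question seems to turn on how long the ops gap would otherwise stay unfilled.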