Gregory_Lewis comments on Comparative advantage in the talent market - Effective Altruism Forum

Comment author: Gregory_Lewis 12 April 2018 07:19:30AM 7 points

Bravo!

FWIW I am one of the people doing something similar to what you advocate: I work in biorisk for comparative advantage reasons, although I think AI risk is a bigger deal.

That said, this sort of trading might be easier within broad cause areas than between them. My impression is that the received wisdom among far-future EAs is that AI and bio are both 'big deals': AI might be (even more) important, yet bio (even more) neglected. For this reason, even though I suspect most (myself included) would recommend a 'pluripotent far-future EA' look into AI first, it wouldn't take much to tilt the scales the other way (e.g. disposition, comparative advantage, and the other things you cite). It also means individuals may not suffer a motivation hit if they are merely doing a very good thing rather than the very best thing by their lights. I think a similar point applies to the means of furthering a particular cause (whether to strike out on one's own versus looking for a role in an existing group, operations versus research, etc.).

When the issue is between cause areas, one needs to grapple with decisive considerations that open chasms which are hard to cross with talent arbitrage. In the far-future case, the usual story around astronomical waste etc. implies (pace Tomasik) that work on the far future is hugely more valuable than work in another cause area like animal welfare. Thus even if one is comparatively advantaged in animal welfare, one may still think one's marginal effect is much greater in the far-future cause area.

As you say, this could still be fertile ground for moral trade, and I also worry about more cynical reasons that explain why this hasn't happened (cf. the fairly limited donation trading so far). Nonetheless, I'd like to offer a few less cynical reasons that carry the balance of my credence.

As you say, Allison and Bettina should think, "This is great: by doing this I get to have a better version of me do work on the cause I think is most important!" Yet they might mutually recognise that their cognitive foibles mean they will struggle with their commitment to a cause they each consider objectively less important, and this term might outweigh their comparative advantage.

It also may be the case that developing considerable sympathy for a cause area is not enough. Both within and outside EA, I generally salute well-intentioned efforts to make the world better: I wish folks working on animal welfare, global poverty, or (developed-world) public health every success. Yet when I was doing the latter, despite finding it intrinsically valuable, I struggled considerably with motivation. I imagine the same would apply if I traded places with an 'animal-EA' for comparative advantage reasons.

It would have been (prudentially) better if I could 'hack' my beliefs to find this work more intrinsically valuable. Yet people are (rightly) chary to hack prudentially useful beliefs (cf. Pascal's wager, where Pascal anticipated the 'I can't just change my belief in God' point and recommended atheists go to church and do other things which would encourage religious faith to take root), given the hack may spill over into other domains where they take epistemic accuracy to be very important. If cause area decisions mostly rely on such beliefs (which I hope they do), there may not be much opportunity to hack away this motivational bracken to provide fertile ground for moral trade. 'Attitude hacking' (e.g. I really like research, but I'd be better at ops, so I try to make myself more motivated by operations work) lacks this downside, and so looks much more promising.

Further, a better ex ante strategy across the EA community might be not to settle for moral trade, but instead to discuss the merits of the different cause areas. Allison and Bettina each take the balance of reason to be on their side, and so might hope that either a) they persuade their counterpart to join them, or b) they realise they are mistaken and so migrate to something more important. Perhaps this implies an idealistic view of how likely people are to change their minds about these matters. Yet the track record of quite a lot of people changing their minds about which cause areas are most important (I am one example) gives some cause for hope.

Comment author: AGB 16 April 2018 01:58:51AM 2 points

I suspect that the motivation hacking you describe is significantly harder for researchers than for, say, operations, HR, software developers, etc. To take your language, I do not think that cause area beliefs are generally 'prudentially useful' for these roles, whereas in research a large part of your job may be justifying, developing, and improving the accuracy of those exact beliefs.

Indeed, my gut says that most people who would be good fits for these many critical and under-staffed supporting roles don't need a particularly strong or well-reasoned opinion on which cause area is 'best' in order to do their job extremely well. At that point I expect factors like 'does the organisation need the particular skills I have', and even straightforward issues like geographical location, to dominate cause prioritisation.

I speculate that the only reason this fact hasn't permeated into these discussions is that many of the most active participants, including yourself and Denise, are in fact researchers or potential researchers and so naturally view the world through that lens.

Comment author: Gregory_Lewis 20 April 2018 12:17:35PM 0 points

I'd hesitate to extrapolate my experience across to operational roles for the reasons you say. That said, my impression is that operations folks place a similar emphasis on these things as I do. Tanya Singh (one of my colleagues) gave a talk on 'x-risk/EA ops'. From the Q&A (with apologies to Roxanne and Tanya for my poor transcription):

One common retort we get about people who are interested in operations is maybe they don't need to be value-aligned. Surely we can just hire someone who has operations skills but doesn't also buy into the cause. How true do you think this claim is?

I am by no means an expert, but I have a very strong opinion. I think it is extremely important to be values aligned to the cause, because in my narrow slice of personal experience that has led to me being happy, being content, and that's made a big difference as to how I approach work. I'm not sure you can be a crucial piece of a big puzzle or a tightly knit group if you don't buy into the values that everyone is trying to push towards. So I think it's very very important.

Comment author: AGB 16 April 2018 02:01:56AM 1 point

I agree with your last paragraph, but do indeed think you are being unreasonably idealistic :)