
Cross-posted to my blog.

Counterfactuals matter. When you’re taking a job, you should care about who would take the job if you didn’t, and how much worse a job they’d do than you would.

This matters from the other side too: employers should consider counterfactuals when deciding who to hire. Suppose you’re an employer considering a promising candidate. What would that candidate do if you didn’t hire them? How good would that be compared to working for you?

If a particular candidate cares a lot about improving the lives of sentient beings, they’d probably do something valuable even if they didn’t get hired, and this should count as a consideration against hiring them.

This comes into play for organizations that hire many altruistically-minded people but also hire people who might not focus on doing good if they took a different job. For example, GiveWell hires some analysts who might otherwise go into consulting, and MIRI hires researchers who might otherwise go into academia and work on relatively unimportant math problems. These employees end up doing way more good than they would otherwise. But GiveWell and MIRI also hire many people who care a lot about improving the world and would do a lot of good even if they worked somewhere else. The only benefit of hiring these people comes from the differential between the good they do there versus the good they do elsewhere.[1]

Now, this is not to say you should never hire EAs or other altruists. There are two big reasons why hiring altruists still makes sense in many cases:

  1. You don’t have any alternative candidates worth hiring, or finding such a candidate would require a large investment.
  2. A particular altruistic candidate looks sufficiently better than the alternative candidate that the difference between candidates exceeds the difference between the altruistic candidate’s value at your organization and their value elsewhere.

That second reason may be a bit confusing, so let’s delve into it further. When a candidate chooses to work for you instead of somewhere else, presumably that’s because they will do more good working for you. The amount of good you do by hiring them is how much good they do directly minus how much good they would have done if you hadn’t hired them. Let’s say they would do X good working for you and Y good working elsewhere, so you add Z = X - Y value to the world by hiring them. If your alternative non-altruistic candidate would do less than Z good by working for you, it’s still worth it to hire the more altruistic applicant.
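To make the arithmetic concrete, here’s a minimal sketch in Python. The function name and all the numbers are hypothetical, and the units of “good” are arbitrary; it just restates Z = X - Y for two candidates:

```python
def net_value_of_hire(good_at_your_org, good_elsewhere):
    """Net good added to the world by hiring a candidate: Z = X - Y."""
    return good_at_your_org - good_elsewhere

# Hypothetical numbers: the altruist does more good directly (X = 100),
# but would have done most of it anyway (Y = 80).
altruist_z = net_value_of_hire(good_at_your_org=100, good_elsewhere=80)    # Z = 20
non_altruist_z = net_value_of_hire(good_at_your_org=30, good_elsewhere=0)  # Z = 30

print(altruist_z, non_altruist_z)  # 20 30
```

On these made-up numbers, the non-altruistic candidate does far less good directly (30 vs. 100) but still adds more net value (30 vs. 20), because the altruist would have done 80 good anyway.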

How much does this consideration matter in practice? It probably depends on each organization’s particular circumstances. I would expect that this wouldn’t change your decision about who to hire in the overwhelming majority of cases, but it may make a difference sometimes.

Notes

  1. This doesn’t just apply to people within the EA community; I’m certainly not claiming that only EAs do things that are effective. It applies to anyone who would do valuable things otherwise.


Comments

Organizations become dysfunctional when employees have reasons to act in a way that's incompatible with the goals of the organization. See the principal-agent problem. Aligning the incentives of employees with the goals of the organization is nontrivial. You can see this play out in the business world:

  • One reason Silicon Valley wins is that its companies offer meaningful equity to their employees. Peter Thiel recommends against compensating startup CEOs with lots of cash or using consultants.

  • Y Combinator companies are most interested in hiring programmers motivated by building a great product. If you're building an app for skiers, it may be wise to skip the programming genius in favor of a solid developer who's passionate about skiing.

  • For a non-Silicon Valley example, Koch Industries has grown 2000x since 1967. Rather than using breakthrough technology, connections, or cheap capital, Charles Koch credits his success to the culture he built based on "crazy ideas drawing from my interest in the philosophy of science, the scientific method, and my studies of how people can best live and work together." On the topic of hiring, he says: "most [companies] hire first on talent and then they hope that the values are compatible with their culture... We hire first on values."

The principal-agent problem is especially relevant in large organizations. An employee's incentives are set up by their boss, so each additional layer of hierarchy is an opportunity for incentive problems to compound. (Real-life example from my last job: boss says I need to work on project x, even though we both know it's not very important for the company, because finishing it makes our department look good.) This is the stuff out of which Dilbert comics are made.

But incentives are still super important in small organizations. You'd think that a company's CEO, of all people, would be immune, but Thiel observed that high CEO salaries predict startup failure because they make the company's culture not "equity-focused".

There are also benefits associated with having a team that's unified culturally.

Using non-EA employees seems fine when incentives are easy to set up and the work is not core to the organization's mission, e.g. ditch-digging type work.

I agree with this. Organisational culture matters a lot. I would suggest that a good strategy is hiring a mix, where there are enough people motivated by the right thing to set and maintain the culture; those new to the culture will, for the most part, adopt it (provided the culture is based around sound principles, as EA is). This provides several benefits: (a) the flexibility to hire in people with specialist skills from outside current EA, (b) encouraging the development of more EAs, and (c) providing outside perspectives that can, where appropriate, be used to improve the implementation of EA principles (or the refinement of the principles themselves).

(Note that what you're most likely to be looking at in many of these cases (new people) is not "people who are opposed to EA" but more likely "people who haven't previously encountered, thought deeply about, or had a lot of exposure to the best thinking/arguments around EA".)

(This is obviously relevant to generic culture-building, not necessarily EA-specific)

Excellent point - that's an important reason to hire value-aligned people that I hadn't really considered. I expect it wouldn't matter much in some cases; for example, my understanding is that most GiveWell employees wouldn't be doing anything particularly altruistic if they worked elsewhere, and GiveWell doesn't seem to have substantial principal-agent problems. But I would expect you'd want to hire value-aligned employees in most cases.

Edit: Alternatively, you can benefit from hiring value-aligned people who probably wouldn't do something similarly effective otherwise. For example, I'd expect that effective animal organizations hire some people who care about animals but otherwise would have worked at a shelter or something similarly small-scale.

GiveWell doesn't seem to have substantial principal-agent problems

Grantmaking seems to me like an area where it's especially important to hire value-aligned people. Handing out large amounts of money = conflict of interest opportunities galore.

It's also hard to observe how good a job a GiveWell analyst is doing. It seems easy for poorly-aligned analysts to do suboptimal work (mainly through subtle omissions) in a way that more motivated people would not. For example, a non-altruistic employee may not choose to highlight a crucial consideration that renders three months of their work irrelevant.

There are two big reasons why hiring altruists still makes sense in many cases:

  1. You don’t have any alternative candidates worth hiring, or finding such a candidate would require a large investment.
  2. A particular altruistic candidate looks sufficiently better than the alternative candidate that the difference between candidates exceeds the difference between the altruistic candidate’s value at your organization and their value elsewhere.

Another consideration for hiring an altruistic candidate is that altruistic candidates are more open to lower salaries, which makes it more likely that additional hires can be made.

Encouraging altruistic workers to self-subsidize altruistic work is a dangerous path to go down, in my opinion. On a large scale, it can (and does!) put downward pressure on sector-wide wages. That, in turn, can push qualified people away from the sector (hurting the talent pool), disproportionately exclude people from humble backgrounds (thus helping ensure that people of privilege are over-represented in those jobs, which is not good - e.g. this post), and, according to some sociological theories, possibly lead to a societal devaluation of such work (as seen with care work, which is also bad). I'd much rather let such people seek normal wages and then donate the excess - you get the same benefits but avoid all of the associated problems.

If the prospective employee is an EA, then they are presumably already paying lots of attention to the question "How much good would I do in this job, compared with the amount of good I would do if I did something else instead?" And the prospective employee has better information than the employer about what that alternative would be and how much good it would do. So it's not clear how much is added by having the employer also consider this.

Employees don't know who else is being considered for the position, so they don't have as much information about the tradeoffs there as employers do.

Alternatively, you could interpret me as saying that employees should consider how their taking a job affects what jobs other people take; although at that point you're looking at fairly far-removed effects and I'm not sure you can do anything useful at that level.

Can someone elaborate on what assessing candidates' value-alignment/morality and making decisions about that looks like in practice? I work in the 'traditional' charitable sector (for lack of a better word), and the number one piece of advice that I've always heard given to hiring managers is "ask people about their skills, never about their morality or commitment to the cause" (and, as an addendum, dock people who spend too much time in the interview talking about those things). Obviously there are special cases where value-alignment is considered - e.g. cases where people avoid hiring candidates with associations with organizations that seem a little 'questionable' vis-a-vis the cause - but overall I've not really heard a lot about assessing or taking into account people's value-alignment/morality during hiring decisions. So, with that in mind: do some EA-aligned organizations screen candidates based on morality/commitment/value-alignment? If so, how do they go about doing that - what sort of interview questions can get that information out of people accurately?

the number one piece of advice that I've always heard given to hiring managers is "ask people about their skills, never about their morality or commitment to the cause"

Is there a rationale given for this advice?

The rationale is mostly born of experience, from what I can tell (e.g. managers experiencing consistent success with this setup), but formally it is that 1) you should hire based on who will do the most good in the position, and 2) asking about experience and skills is the best way of figuring out who that is. Outside of corruption, which is a whole other discussion, the difference between very moral person A and mediocrely moral person B is that person A may dedicate more time to thinking about and working on the cause, which in turn becomes results. If person A is not as smart as person B, but works harder and gets better results as a result, you should hire person A. Conversely, if person B really doesn't care that much and slacks off a lot, but is a genius who consistently gets better results than person A, you should hire person B. In both cases, asking about their morality isn't going to tell you who will be most effective - it's an easy thing to lie about, and when it does play a large role it will show up in their skills and experience anyway (past success is an indicator of future success).

Interesting stuff, thanks! So I guess this could be a motivating factor for lower salaries at nonprofit organizations, if accepting a low salary is a credible indicator of being a moral person? (I see your comment downthread is also about this and is interesting.)

Possibly! Outside of a few annoying high-profile groups (who shall not be named), you don't really hear people working for charitable causes say "I'm in it for the money". I'm pretty sure this situation is mostly driven by a lack of money mixed with the availability of people who are willing to take a pay cut to work in aid, rather than it being a conscious attempt at screening workers for morality. It may be worth researching the 'screening for morality' aspect further - I haven't really seen much on the implications of it (hence my curiosity about how it would work in practice - it's a very interesting thought!). Either way, there's a sweet spot somewhere; it's just a question of where - how far below market rate should you pay charitable workers to best balance screening for morality, saving money, and minimizing possible side effects like those I mentioned downpost?

In humanitarian work, for example, I think we've gone too far (as one writer put it, "it's unrealistic to expect us to live like monks"). On a related note, it may be worth looking into the large debate on the professionalization of the humanitarian aid sector. Basically, for a very long time the humanitarian aid sector under-invested in the professional development, mental health, safety, and general wellbeing of its workers, because the kind of people who work in frontline aid tend to be willing to do it anyway even if they are getting paid next to nothing, are in serious danger all the time, and are under-invested in by their organization. Unsurprisingly, burn-out and untreated PTSD are common. As an aside, professionalization also seems to be slowly increasing the effectiveness of humanitarian aid, which is great.

Yes, I agree with John.

Could this also be taken as a weak argument for EAs earning to give on the margin, given that the counterfactuals are lower?

In that case I don't think it's different from the standard replaceability argument--earning to give is non-replaceable but direct work is somewhat replaceable.