
Summary: A naive understanding of replaceability might lead us to think too highly of earning to give. A more complete understanding might lead us away from it, as described here and here. An even more complete understanding, based on the possibility of differences in priorities, might lead us back again. In particular,

  • If you earn to give, or take a position an organization wouldn't otherwise have filled, you can ensure you contribute counterfactually to your own priorities. In a position that would otherwise have been filled by someone about as suitable as you, however, you may mostly be contributing counterfactually to the priorities of the people displaced by your taking that position, and those priorities may differ substantially from your own.
  • Highly competitive cause-neutral positions may have lower impact by your own prioritization than previously thought, if your priorities differ significantly from the average priorities of the other applicants or of the EA community as a whole.

Naive replaceability

Naively, if you're considering working in position P at charity X or in position Q, earning to give, at company Y, you might compare the following two impact differences and choose the position with the higher one:

  1. Your impact in position P at charity X - A's impact in position P at charity X
  2. Your impact in position Q, earning to give, at company Y - B's impact in position Q at company Y

where A and B are the people who'd be in those positions if you weren't. In both 1. and 2., the counterfactual you're comparing against is one in which you do nothing.

If position P is highly competitive, you might expect 1. to be small (assuming you'd get the position), while 2. looks big as long as the charities you donate to are funding-constrained.
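For concreteness, here's a minimal sketch of the naive comparison in Python; all the impact numbers are hypothetical and only meant to illustrate the shape of the calculation:

```python
# Naive replaceability comparison; all numbers are hypothetical "impact units".
your_impact_in_P = 100       # you in position P at charity X
a_impact_in_P = 95           # A, the next-best applicant, in position P
your_donations_in_Q = 60     # impact of your donations from position Q, earning to give
b_donations_in_Q = 5         # impact of B's donations from position Q

diff_1 = your_impact_in_P - a_impact_in_P          # 1. = 5
diff_2 = your_donations_in_Q - b_donations_in_Q    # 2. = 55

# On the naive view, you'd earn to give whenever diff_2 > diff_1.
print(diff_1, diff_2)
```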

But this can't always be right, since otherwise all EAs would earn to give, and we wouldn't know where to give, because no one would be focused on prioritization and cost-effectiveness analysis! (Or we'd be relying on research done on the side by people earning to give full-time.)


A more complete picture

There are a few things missing from the above analysis. Sometimes, because of a talent constraint, the position would otherwise go unfilled, and we should instead have:

1. = You in position P at charity X - no one in position P at charity X

It's also possible that your working in position P at charity X will bring more people to it or to similar charities overall.

Furthermore, we haven't considered the further displacements that would happen if you take position P at charity X. If you displace A, they might displace someone else somewhere else:

A in position R - C in position R, where R is the position A takes instead of P

We should add this to 1., and there can be other similar terms to add to 1. for further displacements down the line:

+ (C in position S - D in position S) + (D in position T - E in position T) + …

I'll call this sequence of displacements a displacement chain (or replacement chain).

One of these displacements could push an altruist into earning to give or into an extra, otherwise unfilled position, as I mentioned above, and you might expect this to be the biggest term in the sum. However, positions further along the chain may become less important (since they're less desirable), so the more terms that come before this one in the chain, the smaller I'd expect it to be, and the largest term might often actually come earlier.

Similarly, we get more terms for 2., but these are smaller, since people working in the industry are usually much less altruistic. So, after adding the extra terms to 1. and 2. as appropriate, their values should look closer.
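As a toy sketch of this accounting, here's how the terms along a displacement chain might be added up; all the values are hypothetical:

```python
# Each term in a displacement chain is (the displaced person's impact in their
# next position) minus (the impact of whoever would have held that position
# instead). All values are hypothetical.
chain_terms = [
    5,   # you in P at charity X - A in P at charity X
    20,  # A in position R       - C in position R (A pushed towards a less contested position)
    8,   # C in position S       - D in position S
    3,   # D in position T       - E in position T (terms tend to shrink down the chain)
]

value_of_taking_P = sum(chain_terms)  # the adjusted version of 1.
print(value_of_taking_P)              # 36 in this toy example
```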


Differing priorities along the chain

However, the further along the displacement chain, the less altruistic you should expect the people to be and, crucially, the less you should expect them to prioritize the same causes as you. This is regression toward the mean. If you get position P at charity X, and the person A you displace ends up focusing on a cause you hardly care about at all (you might have had overlapping but nonidentical priorities), you should expect the next term we added to 1. to be pretty small after discounting based on prioritization. There might be a lot of overlap in prioritization at this first step, though, so this might not usually be a significant concern here. However, this term might not be big in the first place if A is displaced into a position Q at another organization that would have been filled anyway.

Furthermore, it seems like you should apply discounting for differences in cause prioritization to all the other terms, and more of it the further along the chain. The multiplying factor need not tend towards 0, though, since if the chain stays in the EA community, there's still significant overlap with your priorities. For example, if you and 10% of the EA community prioritize cause A (a cause, not the person A above) above all other causes, then you might expect the multiplying factor based on differing causes to be at least 0.1 (= 10%). If the chain is long enough, you might expect the factor to be around 0.1 for the term that should have been the biggest, possibly an altruist pushed into earning to give or into an otherwise unfilled position.
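Here's a rough sketch of what this discounting might look like, assuming (purely hypothetically) that the overlap with your priorities decays geometrically towards the 0.1 community-wide floor from the example above; the retention rate per step is a made-up illustrative parameter:

```python
# Hypothetical prioritization discounting along a displacement chain.
# overlap_floor: share of the EA community with your top priority (0.1 in the example).
# retention: made-up rate at which priority-overlap survives each displacement step.
def discount_factor(step, overlap_floor=0.1, retention=0.6):
    # Decays geometrically towards the floor; your own term (step 0) is undiscounted.
    return overlap_floor + (1 - overlap_floor) * retention ** step

chain_terms = [5, 20, 8, 3]  # the same hypothetical, undiscounted terms as above
discounted = [discount_factor(i) * t for i, t in enumerate(chain_terms)]
print([round(d, 2) for d in discounted], round(sum(discounted), 2))
# Later terms are discounted more, so chains whose big terms come early look better.
```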

What I take from this is that, all else equal, you should prefer chains where the big terms (before prioritization discounting) come earlier, since earlier terms are discounted less in expectation, especially if your priorities differ significantly from those of the other applicants or of the EA community on average. This favours earning to give (relatively more than before this analysis of differing priorities, not absolutely) and "extra positions" that wouldn't be filled without you, because most of the impact is in the first term. To a lesser extent, it favours positions with a lot of overlap with earning to give, e.g. possibly quantitative/tech, management and operations roles.

So, if you're thinking of applying for a position where you expect the other applicants to have priorities very different from yours (apart from the cause the position itself serves), and your priorities differ significantly from the EA average, it might actually be better to let them have it. You can then better ensure you contribute counterfactually to your own priorities, rather than to theirs or to the priorities of someone else down the chain.


Aside on honesty

It might even make sense, in theory, to apply to EA organizations working on causes you support the least, so that you can displace the EA who would have otherwise worked there towards causes you support more. I have two objections:

1. This is probably not the most effective thing you could do, even if it worked.

2. I don't think you could do this without lying to or otherwise misleading the org about your concern for their cause, and given the risks to your own reputation and to coordination and trust in the EA community as a whole, no one should do this.


Cause-neutral charities

The kinds of charity work I suspect this argument applies against most strongly are cause-neutral ones, like meta-EA, facilitating donations to and raising funds for EA charities cause-neutrally, cause prioritization research, and community building. But these kinds of roles are basically public goods for the EA community, since every cause area benefits from them, and public goods tend to be underproduced if left to markets.

RC Forward, which allows Canadians to get tax credits for donating to EA charities, recently started taking fees for this service, and this makes perfect sense for a cause-neutral org like it! Unfortunately, I don't think there's any similar solution for cause-neutral roles at EA charities. We may be asking people to use their careers to counterfactually benefit causes they don't prioritize. Off the top of my head, this only really makes sense to me for EAs

1. whose cause prioritization is similar to that of the community as a whole, or who are happy to defer to it,

2. who prioritize "cause-neutral causes" or meta-EA, like cause prioritization or community building,

3. who are looking to build career capital at EA orgs, or

4. who are a much better fit than the candidate they displace (or where someone else along the displacement chain is a much better fit than the person they displace, or than no hire at all if the chain ends at an otherwise unfilled position, all according to their own cause prioritization).

This is probably not exhaustive.

Maybe we have enough EAs like this that it's not a problem? There doesn't seem to be any shortage of applicants to cause-neutral EA orgs.

Comments

Thanks for including a summary! That's a really good feature for posts to have, especially posts that put forward a philosophical argument. I'd recommend using bullet points and paragraph breaks even within the summary, though, if it's more than a few sentences.

Thanks for the suggestion! I've updated the summary to include bullet points, and I'll try to remember to do this in the future, too.

A consideration that might sometimes go in the opposite direction is the movement of EA career capital, but I'm less sure about this. If you take a job at an EA org, you can ensure the EA career capital you build there stays within your priorities. If someone else takes the job and has different priorities, they might take that capital towards causes you don't prioritize in the future. However, you need to compare this capital to the capital you'd build in your other options, e.g. earning to give (and you also need to consider the capital built along the chains), which is why I'm less sure about this.

An EA organization focused on one cause (or only a few causes) could see it similarly. Although they might increase the number of people working on their cause in the short term by accepting people with multiple priorities over people who prioritize only the organization's cause, they'll want to take people who will keep the EA career capital they build there within their cause area.

I like this analysis! Some slight counter-considerations:

Displacements can also occur in donations, albeit probably less starkly than with jobs, which are discrete units. If my highest priority charity announces its funding gap somewhat regularly, and I donate a large fraction of that gap, this would likely lower the expected amount donated by others to this charity and this difference might be donated to causes I consider much less important. (Thanks to Phil Trammell for pointing out this general consideration; all blame for potentially misapplying it to this situation goes to me.)

Also, in the example you gave where about 10% of people highly prioritize cause A, wouldn't we expect the multiplier to be significantly larger than 0.1 because conditional on a person applying to position P, they are quite likely to have a next best option that is closely aligned with yours? Admittedly this makes my first point less of a concern since you could also argue that the counterfactual donor to an unpopular cause I highly prioritize would go on to fund similar, probably neglected causes.

Displacements can also occur in donations, albeit probably less starkly than with jobs, which are discrete units. (...)

Agreed! This is a good point. I think many causes are still very funding-constrained (e.g. animal welfare and global health and poverty, at least), though, so this effect would be pretty negligible for them. This is a concern for less funding-constrained causes.

Also, in the example you gave where about 10% of people highly prioritize cause A, wouldn't we expect the multiplier to be significantly larger than 0.1 because conditional on a person applying to position P, they are quite likely to have a next best option that is closely aligned with yours?

Yes, this might be true for the next position Q which A would take. However, the effect there might be small in expectation anyway (before discounting) if it's another displacement to another position that would have been filled anyway, rather than a displacement into earning to give or into an otherwise unfilled position, and subsequent effects in the chain are discounted more, approaching a factor of 0.1.

I've updated the section to clarify. Thanks!
