Recently there has been a spate of discussion on the EA Forum and elsewhere about increased spending in EA and its potential negative consequences.


There are various potential concerns one might have about this, and addressing all of them would require a much longer discussion. But it seems like one common worry is something like:

  • Having a frugal EA movement has positive selection effects.
    • Living frugally is a costly signal.
    • This ensures people will only want to join if they’re very altruistically motivated.
  • Conversely, spending more on community building, EA salaries, etc has negative selection effects.
    • It will attract people who are primarily motivated by money rather than altruism.

 

I think this argument conflates two separate questions:

  • How demanding should EA be?
  • How should we value EA time compared to money?

 

These two questions seem totally separable to me. For instance, say A works at an EA org.

  • His work produces $500/hour of value.
  • He gets paid $50/hour by his employer.
  • He has a fraudulent charge of $100 on a card that he could dispute.
    • This requires him to spend 1 hour on the phone with customer service.
    • He is indifferent between this and spending an hour doing a relatively unpleasant work task.

As things currently stand, he might spend the hour to recover the $100. But I think it would clearly be better if someone paid him $100 to spend an hour doing the unpleasant work task for his organization rather than trying to recover the money. It would keep his utility (and thus demandingness) constant, while resulting in $400 of surplus value created.
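For concreteness, the surplus arithmetic in this example can be written out in a few lines (a toy calculation using the dollar figures above):

```python
# Toy calculation of the example above: A produces $500/hour of value
# and faces a $100 fraudulent charge that takes 1 hour to dispute.
value_per_hour = 500   # value of an hour of A's work to his org
charge = 100           # amount recoverable by an hour on the phone

# If someone instead pays A $100 to work that hour, A is equally well
# off (the payment replaces the recovered charge, and he is indifferent
# between the two unpleasant tasks), while his org gains $500 of value
# for a $100 outlay.
surplus = value_per_hour - charge

print(surplus)  # 400 -- the $400 of surplus value from the example
```

The general point: whenever an hour of someone's work is worth far more than the money they'd spend that hour recovering, buying the hour back creates surplus.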

 

I think in the current EA movement:

  • I feel unsure about whether it would be better to increase or decrease demandingness.
    • It seems like a tough tradeoff.
    • Increasing demandingness pushes people to do and sacrifice more, as well as selecting for altruistically motivated people.
    • On the other hand, it may exclude people who could make valuable contributions but aren’t as dedicated, as well as leading to demotivation and burnout.
  • I do think it would be better to increase the monetary value we assign to the time of people doing EA work on average.
    • Given the current stock of funding vs human capital in EA, I think the time of the highest performing EAs is worth a lot.
    • I suspect the current artificially low salaries in EA often lead to people making inefficient time/money tradeoffs.

 

I think many people have an intuitive worry that paying EAs more will cause EA to lose its edge. Initially EA was a scrappy movement of people who really cared, and they worry that giving people more money will make it soft and cushy.

I’m sympathetic to that, but I think there are a lot of ways EA can be demanding that don’t rely on frugality. We could expect EAs to:

  • work 7 days a week
  • prioritize work over socializing or hobbies
  • work long hours and be constantly available/responsive outside work hours
  • leave a fun and/or high-status job for an unpleasant and/or low-status one
  • do job tasks that are valuable even if they’re ones they don’t enjoy
  • move to a location they don’t like for work
  • prioritize their impact over relationships with family, friends, romantic partners, or children
  • admit when they were wrong even if it’s painful
  • train skills they think are valuable even if it feels unnatural and hard
  • act in a way that represents EA well, even if they’d rather be petty and uncharitable
  • practice the virtue of silence

There are probably many other things I’m not thinking of here that are both demanding and potentially quite valuable. Many of the most effective EAs I know have done and continue to do a bunch of these things, and I think they’re pretty awesome and hardcore for doing so.

I think these are all more efficient costly signals than frugality, but my impression is that they tend to be regarded by people (both inside and outside EA) as worse signals of altruism, and I’m wondering why that is.


 

Comments

This post is great, thanks for writing it.

I'm not quite sure about the idea that we should have certain demanding norms because they are costly signals of altruism. It seems to me that the main reason to have demanding norms isn't that they are costly signals, but rather that they are directly impactful. For instance, I think that the norm that we should admit that we're wrong is a good one, but primarily because it's directly impactful. If we don't admit that we're wrong, then there's a risk we continue pursuing failed projects even as we get strong evidence that they have failed. So having a norm that counteracts our natural tendency not to want to admit when we're wrong seems good.

Relatedly, and in line with your reasoning, I think that effective altruism should be more demanding in terms of epistemics than in terms of material resources. Again, that's not because that's a better costly signal, but rather because better epistemics likely makes a greater impact difference than extreme material sacrifices do. I developed these ideas here; see also our paper on real-world virtues for utilitarians.

current artificially low salaries in EA often lead to people making inefficient time/money tradeoffs.

 

I agree that this is common, so I agree with your central point, which is important.

But I'm not sure I like your suggestions to move towards other demanding costly signals, like encouraging workaholism.

Rather, a better solution, which seems to be the norm in the not-for-profit world, might be to simply pay slightly, but non-trivially, below market rates. If someone could earn $500/h in the private sector for similarly pleasant work, EA orgs could just pay ~80%. This should deter people who only care about money, while employees who net >$800k/y (more than current EA non-profit exec compensation) still don't need to penny-pinch (they can buy a house close to work in a city like SF, raise a family, live a solidly middle-class lifestyle, fly business class) if they just forgo high-end luxury goods. There could be a progressive 'EA tax' built into the salary, so if support staff only earn $50/h, EA orgs might want to pay ~95% or something and offer Google-like perks such as catered food and a laundry service to make them more efficient.

I think these are all more efficient costly signals than frugality, but my impression is that they tend to be regarded by people (both inside and outside EA) as worse signals of altruism, and I’m wondering why that is.

Some more random thoughts:

  • People really don't like it when people earn very high amounts. Politicians often play up their frugality to the point where they're probably bad at their job (e.g. former Austrian chancellor who made a point of flying coach or José Mujica driving an old car)... people seem to like this though as evidenced by them being voted into office. Utilitarian arguments for higher salaries might come across as self-serving. 
  • Also, in finance, costly signals are long hours and conspicuous consumption, since retaining employees is very valuable and frugal people could retire after working for just a couple of years (maybe that's why EAs did so well in finance: they didn't have diminishing returns in utility to salary increases, all without the drug habit).
  • Long hours need not be cost-effective... maybe you could pay two EA org people $100k/y rather than one person $250k/y, especially because the nonprofit world is not as zero-sum as the private sector, where working very hard might pay off much more. Perhaps different work intensities lend themselves well to EtG in the for-profit world vs., say, philosophy in the non-profit world. In the non-profit world, objectives are often much less clearly defined, and so it might not make sense to work very hard, but rather more deliberately (see Bezos shifting from working very hard in the beginning of Amazon while in execution mode, to later saying he sleeps 8 hours a night because he's just making very high-level prioritization decisions, not unlike a philosopher).

Are you proposing to bite the bullet on the $100/hr card charge scenario by the $50/hr staffer (paid "$47.5/hr plus perks" at the EA org)?

"Market rate" of $50/hr for labor netting $500/hr of value seems well within the distribution I'd expect (not to mention that EA orgs might value that work even more than any org in industry ever will, perhaps because we're counting the consumer surplus and un-capturable externalities and the industry employer won't).

Great question - prompted me to think more about this problem. 

I maintain that the most elegant solution might be for EA orgs to pay slightly below market-rate (with a progressive element). But I’m quite uncertain about this and I'd love for people to think more about optimal compensation at EA orgs.

Some more thoughts on this:

  • I very much agree with the central argument made here that we should not have EAs live with a poverty mindset and sweat the small stuff. I think it’s a very big problem that creates a lot of lost utility. I also think a behavioral economics angle might make sense here (many people might irrationally be too frugal to increase their productivity).
  • My point was not about the absolute level of pay for a given position, which maybe should be higher. Concretely, we can still pay $100/h for an office assistant, and this will inevitably attract better candidates. This should take care of a lot of 'card charge scenarios' (e.g. I saw a job ad for a high-impact PA at ~$30/h recently, and an ACE CEO ad for less than $100k/y, which seems low).
  • It’s also not about the absolute level of funding for orgs, which should maybe also be higher. In other words, we might want to hire 2 office assistants for 40h/week rather than 1 person for 80h/week, especially at lower salary levels. This way, a person on $50/h or $100k/y can deal with a $100 card charge, but only after a 40h work week, to which they can add 10h of life admin that is worth more than their salary. This is theoretically equivalent to someone doing part-time EtG when it’s above their salary level (with incentives neatly aligned, i.e. they’ll know best what life admin to outsource). At scale, the advantages of division of labor from focusing more on one's job might not outweigh diminishing returns to increasing hours spent at the office.
  • Note that you have card charge-like scenarios even if you pay above market rates. But even if you only pay $99k/y for people whose market rate is $100k/y, this will deter those who only care about money, for only $1k a year. The cost of this is small, as it only takes away at most ~10h worth of outsourced life admin like card charge scenarios. Seems like a small price to pay for having EA org employees be value-aligned and less likely to shirk.
  • Also, if employees forgo just some luxury goods and conspicuous consumption in their social reference class (not saying one should never go on holidays or to restaurants), one could afford everything that would make one instrumentally more effective.
  • We should think more out of the box with perks. NYC academics for instance often get subsidized university housing in the city that has a high market value and makes them more efficient. This seems to align incentives well, but maybe that’s too much of a faff administratively. Companies like Amazon are doing this.
  • If your job ad says ‘we pay $X’, then you’ll attract candidates whose market rate clusters around $X. Auction theory suggests that the best applicant who gets offered the job will have a market rate slightly higher than X.
  • We can have steady salary increases that match an employee's current market rate to retain them. This would have the features of efficiency wages or even a cheerful price.
  • Performance-based pay is also something that is underdiscussed in EA.
  • Anthropic has optional equity donation matching; iirc it’s quite high, like 3x. That could be another thing that should be explored more.

Again maybe salaries should generally be higher to attract better candidates, maybe there’s something generally amiss here… consider that:

  • The GiveWell CEO makes ~$300k/y, but they move ~1000x as much money. Increasing the salary by $1m would only need to lead to a 0.3% increase in money moved to break even. Ideally, the compensation would include impact certificate equity that is traded.

Analogously:

  • In the US, senators and governors make $175k. The president and the highest-paid civil servant (Fauci) make ~$400k/y; the vice president makes $260k.
  • UK MPs earn ~$100k/y. In contrast, median executive pay at the 300 biggest U.S. companies is ~$14M. Imagine increasing MP pay to $1M/y: 650 MPs would then earn $0.65B. If they'd counterfactually raise UK GDP by just 0.1%, then the cost-benefit ratio would be about 5 (0.1% of UK GDP is ~$2.7B). In the US, it might be ~10x higher.
  • If it’s unpopular to increase politicians’ pay, then maybe we could increase the pay of senior civil servants working in central government. In the UK, the Department for Trade pays people $350k/y (source).
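The back-of-the-envelope arithmetic in the MP example can be checked in a few lines (using the figures quoted above, including the comment's own ~$2.7T GDP assumption):

```python
# Check of the UK MP pay example: raise 650 MPs to $1M/y each and
# compare against a counterfactual 0.1% boost to UK GDP.
n_mps = 650
pay_per_mp = 1_000_000            # proposed $1M/y salary
total_cost = n_mps * pay_per_mp   # $650M/y, i.e. ~$0.65B

uk_gdp = 2.7e12                   # ~$2.7T, the comment's assumption
benefit = 0.001 * uk_gdp          # 0.1% of GDP = $2.7B

print(benefit / total_cost)       # ~4.2, roughly the "about 5" quoted
```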

But optics matter as well (at the very least instrumentally). Elsewhere in this thread someone called a $125k/y salary a 'vow of poverty', and that seems tone-deaf.

I thought this recent quote was relevant here and resonated:

For the people who are Doing Thing, a lot of their time is spent on Stupid Stuff, and they must Beware Trivial Inconveniences and the expenses involved in various actions. This eats up a large portion of their time and cognitive efforts.

Even remarkably small stupid stuff can eat up a remarkably large amount of time that could otherwise have been useful. Having a good, well-equipped and stocked place to work, where you get meals taken care of and can meet with people and interact spontaneously and other neat stuff like that is a big game. So can ‘stop worrying about money’ either in general or in a particular context, or ‘not have to run these stupid errands.’ Life is so much better when there’s other people who can take care of stuff for you.

A lot of work and potential work, and a lot of community things and events, and many other things besides, are bottlenecked by activation energy and free time and relatively small expenses and stuff like that. Even I end up worrying a bunch about relatively small amounts of money, and getting time wasted and rising stress levels about things where the stakes are quite low, and fussing with stupid little stuff all the time. You really could substantially increase the productivity (although not at zero risk of weirdness or distortions of various kinds) of most of the people I know and almost anyone not high up in a corporation or otherwise super loaded, if you gave that person any combination of:

  1. A credit card and told me ‘seriously, use this for whatever you want as long as it’s putting you in a better position to do the stuff worth doing, it doesn’t matter, stop caring about money unless it’s huge amounts.’ More limited versions help less, but still help, and for someone more money constrained than I am, all this doubtless helps far more.
  2. A person I could call that would be an actually useful secretary and/or concierge, especially someone who could run around and do physical tasks. We have a nanny for the kids, and that helps a ton, but that alone doesn’t come anywhere near what would be useful here. The only concierge service I know about, which I somehow got access to, is completely useless to me because it assumes I’m super rich, and I’m not, and also the people who work there can’t follow basic directions or handle anything at all non-standard.
  3. A person I could have do research for me and figure things out, assemble spreadsheets, that sort of thing.
  4. A nice office space and hangout space I could use and take people to, especially where other interesting people also often went, and where everything was made easy, ideally including free food.

And I think that applies to basically everyone who hasn’t already gotten such things handled. And it’s a shame that we can’t find ways to get these sorts of things usefully into the hands of the People Doing Thing that we think are special and the limited resource that actually matters.

I agree with most of this, but one point on

Also, if employees forgo just some luxury goods and conspicuous consumption in their social reference class (not saying one should never go on holidays or to restaurants), one could afford everything that would make one instrumentally more effective.

This is probably true for people without children, but not necessarily when you're talking about extended hours of good-quality childcare. And once you have children, the costs of leisure like holidays and restaurants also increase.

Yes, you're right, some people have very high costs, like people who care for others or people with health issues. As a matter of public policy I think we should incentivize people to have more children, and also, again, maybe EA orgs should give good perks packages (including care benefits, health and disability insurance).

But pricing all of this in, I still maintain that in most cases the pay should be slightly under market-rate (recall that I said this should be progressive, where the 'EA pay cut' gets larger as salary increases... I don't have strong opinions about the absolute level).

Even if we were to pay slightly above market rate, just like the card charge scenario, problems like the high cost of childcare will persist at many salary levels. Also, again, if we hire someone whose long-term average market rate is $30/h and childcare is $40/h, then maybe it's better to still pay them $29.50/h; their comparative advantage is then to do 'EtG' by watching their children, taking paid time off and then coming back to a normal working week, rather than artificially inflating their salary to $41/h so that they can take less time off and work longer hours.

In the UK, the Department for Trade pays people $350k/y

I think you're talking about the Department for Transport, which has a few staff members at that level, mostly engineers overseeing major infrastructure projects.

From the source: "maximum pay band of almost £265,000 (at DIT)."

Oh I see, it really is DIT!

I would point out that the "maximum pay band" likely represents the Permanent Secretary, of which there is one. The context was about unequal pay amongst DIT executives. So "pays people £265k" is probably not accurate.

Maybe you think I'm being a bit pedantic (and I probably am) but I feel like the way it's drafted right now suggests civil servants routinely get paid a lot more than they actually do. Even at the executive level, managing budgets of billions of pounds, very few senior civil servants get paid more than £200k - it's extremely rare.

Edit: I was wrong, and my original instinct was correct - even the Permanent Secretary does not get paid this much. The person you're referring to is the Chief Trade Negotiator for the United Kingdom. Data on all executive pay is here. "People" at DIT being paid more than £200k is definitively incorrect. https://data.gov.uk/dataset/7d114298-919b-4108-9600-9313e34ce3b8/organogram-of-staff-roles-salaries

It seems like there would be some really easily available stats for salaries (histogram, means, medians).

I'm not saying you need to present this. It's just that this info is unlikely to be secret, and wrong answers are easily contestable, so my sense is that your anecdotes have authority and should be trusted.

I've actually just found the data and posted it. You can see all the pay bands for everyone in the department, both junior and senior!

People spend large sums of their own money, plus a year or two of their own time working for free, to get elected to Congress. It seems the job is desirable enough on its own terms that  a salary increase isn't going to make a difference. Similarly for ambassadorships, which are the only type of appointed job where you're routinely allowed to do this. It seems to me the inherent desirability of the jobs is high enough that more salary is not going to attract better people. 

Very minor note, but I love that you included "practice the virtue of silence" in your list.

practice the virtue of silence

 

Honest question: Besides the negative example of myself, can you or the OP give some examples of  practicing or not practicing this virtue?

Motivation: The issue here is heterogeneity (which might be related in a deeper way to the post itself). 

I think some readers are going to overindex on this, and become very silent, while myself, or other unvirtuous people will just ignore it (or even pick at it irritatingly). 

So without more detail, the result could be the exact opposite of what you want.

I'm referencing this Slate Star Codex post: https://slatestarcodex.com/2013/06/14/the-virtue-of-silence/

So not indiscriminate silence, but more like avoiding spreading infohazards, contributing to inflammatory and unproductive conversations, sharing cruel gossip, that sort of thing. Scott makes the point that this is a very difficult virtue to practice, because by its nature it's sort of impossible to get noticed and praised for it.

I agree it's very valuable, but how can it be a good signal and impossible to notice at the same time?

Compare:

  • In this instance, someone demonstrated a virtue (I just saw them go out of their way to help a coworker)

  • They generally demonstrate a virtue (they never make ad hominem attacks)

Now, technically, these are really the same: even in the latter the signal is composed of individual observations. But they differ in that with the former each instance gives lots of signal (going out of your way is rare) while in the latter each instance gives very little signal (even someone pretty disagreeable is still going to spend most of their time not making ad hominem attacks).

I'm interpreting Caroline as saying that when someone is practicing this virtue well you don't notice any individual instance of silence, and praise is generally something we do at the instance level. On the other hand, we can still notice that someone, over many opportunities, has consistently refrained from harmful speech.

I agree, though, that it isn't a very good signal because of the difficulty in reception (less legible).

Self-signaling value ain't something to sneeze at. Personally, a lot of my desire-for-demandingness is about reinforcing my identity as someone who's willing to make sacrifices in order to do good. ("Reinforcing" meaning both getting good at that skill, and assuring myself that that's what I'm like. :)

[anonymous]

This post is great. I have wondered whether there should be more emphasis in EA on 'warrior mindset' - working really hard to get important stuff done (discussed a bit here). A lot of highly effective people do seem to work very hard, and I think that is an important norm to spread in EA.

One extremely under-rated impact of working harder is that you learn more. You have sub-linear short-term impact with increasing work hours because of things like burnout, or even just using up the best opportunities, but long-term you have super-linear impact (as long as you apply good epistemics) because you just complete more operational cycles and try more ideas about how to do the work. 

Other than variations of working crazy hours (which I think is both higher impact and better signal of altruism than donations, at least within the movement), the other actions you suggest don't seem like very legible signals of people acting morally in a demanding way. So this weakens their value as a signal for altruism, though they may still be object-level great things to do, or good signaling for other reasons. 

Agreed, that is unfortunate and seems like an important part of what's going on.

Thanks Caroline for writing this! I think it's a really rich vein to mine because it pulls together several threads I've been thinking a lot about lately.

One issue it raises is whether we should care about the "altruist" in effective altruists. If someone is doing really useful things because they think FTX will pay them a lot of money or fund their political ambitions, is this good because useful things happen or bad because they won't be a trustworthy agent for EA when put into positions of power? My instinct is to prefer giving people good incentives over selecting people who are virtuous: I think virtue tends to be very situationally dependent and that very admirable people can do bad things and self-deceive if it's in their interest to do so. But it's obviously not either-or. I also tend to have fairly bourgeois personal preferences and think EA should aspire to universality such that lots of adherents can be materially prosperous and conventionally successful and either donate ~20% of their income or work/volunteer for a useful cause (a sort of prosperity gospel form of EA amenable to wide swathes of the professional and working class rather than a self-sacrifice form that could be more pure).

A separate issue is one of community health. So on an individual level maybe it could be fine if people join EA because the retreats are lit and the potential for power and status is high, but as a group there may be some tipping point where people's self-identity changes as the community in fact prizes the perks and status over results. This could especially be a concern insofar as 1. goals that are far off make it easy to self-deceive about progress and 2. building the EA community can be seen as an end in itself in a way that risks circularity and self-congratulation. You can say the solution here is to really elevate people who do in fact achieve good results (because achieving good things for the world is what we care about), but lots of results take a long time to unfold (even for "near-termist" causes) and are uncertain (e.g. Open Phil's monetary policy and criminal justice reform work, both of which I admire and think have been positive). For example, while I've been in the Bahamas, people have been very complimentary of 1Day Sooner (where I work and which I think EAs tend to see as a success story). I'm proud of my work at 1Day and hopeful what we've already done is expanding the use of challenge studies to develop better vaccines, but despite achieving some intermediate procedural successes (positive press coverage, some government buy-in and policy choices, some academic and bioethics work), I think the jury is very much still out on what our impact will end up being and most of our impact will likely come from future work.

The point about self-identity and developing one's moral personhood really drives me in a direction of wanting to encourage people to make altruist choices that  are significant and legible to themselves and others. For example, becoming a kidney donor made me identify myself more with the desire to have an impact which led me further into doing EA types of work. I think the norm of donating a significant portion of your income to charity is an important one for this reason, and I've been disappointed to see that norm weaken in recent years. I do worry that some of the types of self-sacrificing behavior you mention aren't legible enough or state change-y enough to have this permanent character/self-identity building effect. 

There's an obvious point here about PR and I do think committing to behavior that we're proud to display in public is an important principle (though not one that I think necessarily cuts against paying EAs a lot). First, public display is epistemically valuable because (a) it unearths criticisms and ideas an insular community won't necessarily generate and (b) views that have overlapping consensus among diverse audiences are more likely to be true. Second, hiding things isn't a sustainable strategy and also looks bad on its own terms. 

Last thought that is imperfectly related: I do think there may be a bit of a flaw in EA considering meta-level community building on the same plane as object-level work, and this might be driving a bit of inflation in meta-level activities that manifests itself in opulent EA college resources (and maybe some other things) that are intuitively jarring even as they can seem intellectually justified. So if you consider object and meta-level stuff on the same plane, the $1 invested in recruiting EAs who then eventually spend $10 and recruit more EAs seems like an amazing investment (way better than spending that $1 on an actual EA object-level activity). But this seems intuitively to me like it's missing something, and discounting the object-level $ for the $ spent on the meta-level needed for fundraising doesn't seem to solve the problem. I'm not sure, but I think the issue (and this also applies to other power-seeking behavior like political fundraising) is that the community building is self-serving (not "altruistic") and from a view outside EA does not seem morally praiseworthy. We could take the position that that outside view is simply wrong insofar as it doesn't take into account the possibility that we are in fact right about our movement being right. The Ponzi-ishness of the whole thing doesn't quite sit well, but I haven't come to a well-reasoned view.

I think these are all more efficient costly signals than frugality, but my impression is that they tend to be regarded by people (both inside and outside EA) as worse signals of altruism, and I’m wondering why that is.

(This reply addresses only some of your bullet points) If someone works crazy hours, that could be for altruism, but it could also be for higher status. Many people in the private sector selfishly choose to work long hours to be promoted, so it wouldn't be that surprising that in non-profits people would as well, even if the monetary payoffs were lower.

(Obviously people can donate money for status reasons too).

Love this post! I’d build on this question by slightly re-framing it. Instead of asking “should EA be more demanding?”, I’d ask:

  1. What should we promote as a general norm about demandingness?
  2. What should you personally aspire to in terms of demandingness?

I usually have completely different answers to these questions. General norms have to be simpler and work for a larger number of people. Personal aspirations can be tailor-made to your particular circumstances, personality, and goals.

Of course, you could make the general norm for each person to think specifically about what they should personally aspire to, and maybe we should just do that. One of my favorite things about EA is that we don’t tend to oversimplify things for people, but rather push people to really engage with the complexities and nuances of ideas.

Very much agree with this post. It reminds me of a problem we see in certain other fields (entry-level journalism, congressional staff, etc.): the salaries are so low that it's as if the job is selecting for people who are either 1) independently wealthy (or heavily supported by parents), or 2) eager for some other job benefit (such as access to power). Neither characteristic is likely to make an employee... effective. Passion about EA might be good for some things, but in many roles, there is no reason to think that personal passion equates to skill, knowledge, personal productivity, etc.

Imagine there's a top CEO who earns $10 million currently, and who knows MacKenzie Scott well. If paid $20 million, he would leave his job, and spend a year designing and implementing a strategy to convince MacKenzie Scott to take an EA approach with 25% of her yearly giving.  Wouldn't that be worth it, if it worked? 

Or this: Open Phil recently was hiring a General Counsel. Imagine two options: 1) "We'll pay $125k, but no more than that, because we want a General Counsel who has essentially taken a vow of poverty, and/or who is so desperate for a job that they'll work anywhere." 2) "We'll pay market rate for the best non-profit lawyer we can find." 

Which option is likely to lead to a General Counsel who gives better advice to Open Phil about its operations, its legal structure, etc.? 

$125k ... essentially taken a vow of poverty

Someone earning $125k, even with a family of four to support, is at the 98th income percentile globally. Within the US it's 89th for an individual or 75th for a household, and even in the Bay Area it's above the median household income.

I don't disagree with your overall point (paying more for better people can make a lot of sense), but it's still useful to be calibrated on what constitutes poverty.

Fair enough, that was probably too dramatic of a rhetorical flourish! :) 

I guess my point is that if you want top corporate counsel to consider working for Open Phil, you have to take the market into account. Someone who could make $700k as General Counsel for the Gates Foundation or $1m+ at a public corporation probably isn't going to take an 80% or 90% pay cut to work at Open Phil. Even the most well-motivated altruist is probably still going to find a reason to convince themselves that they'll do as much good for the world at Gates, and they'll have much more money to contribute to charity themselves, so they might as well take the job that allows them to easily pay for their kids' college, etc. 

(I don't disagree with you, or at least I suspect there could be very little actual disagreement if we drilled down to the root issues).

Assuming the comp for the general counsel was at market rates, which might not be true, here are some key points:

  • The institution you mention has a culture, management, and reputation that are strong even among good EA organizations. I think these traits greatly reduce the downsides of, and allow the integration of, talent at very high compensation.
  • With more uncertainty, I think another consideration is that the source of funding at that institution is more proximate than it is for most grantees. As a consequence of upstream conditions related to this proximity, the deployment of money is more disciplined and the systemic consequences of high salaries are lower. (I guess the crude way of saying this is that the incentives are more aligned, but that isn't quite right.)

I think a key issue that shouldn't be ignored is conditions: culture, management capacity, and the ability to buy and absorb high-value talent.

There's a lot going on in the preceding sentence, but I think there's a huge point here related to funding debates going on now: 

The conditions for the deployment of money and the use of funds or capacity can be hard to attain, and people without these faculties may never see this. I'll be even more direct: a lot of funding issues right now have nothing to do with cause area, and everything to do with competence and composition.

This can be true, and can be the overwhelming consideration for impact, even if the funding situation is as unbalanced as some describe.

I've written a blog post relating to this article, arguing that while levels of demandingness are conceptually separate from such trade-offs, what kinds of resources we most demand may empirically affect the overall level of demandingness.

In most situations I doubt we should care about costly signals of altruism at all. Effectiveness in the work should be all that matters. If I'm hiring, all else being equal I will naturally prefer the person who will work 7 days a week for a lower salary vs the one who will only work 5 days and demand more. I don't need any signal here, other than making them the job offer on my preferred terms and seeing if they take it. But if the lazier, greedier one is 10x as effective per unit time, I should obviously prefer them despite them being a worse person in some philosophical sense. 

Perhaps in extremely critical positions where people could put important projects at risk by making selfish decisions there should be more of a need for people to prove the altruism part. But ultimately if people reliably do the right thing I don't care if they're doing it for selfish or altruistic motives. That's between them and their conscience. 

I quite like this idea, and many of the most frugal people I know do a ton of these things as well. I think a bunch of them pretty clearly signal altruism. Interestingly, I would say that the things that make EA soft and cushy financially seem to cross-apply to non-financial areas as well. E.g. I am not sure the average EA is working more hours than they worked 5 years ago, even with the increases in salary, PAs, and time-to-money tradeoffs.

I also agree there is a lot more that could be listed. I think "leave a fun and/or high-status job for an unpleasant and/or low-status one" hints at decisions that have to be made between competing values. This is maybe the biggest way more dedicated EAs end up with really different impacts from less dedicated ones. Working 5% more or taking 5% less salary may not be the biggest part of someone's impact, but it correlates (due to selection effects) with what happens when hard choices come up with impact on one side and personal benefit on the other. The more dedicated person is more likely to pick impact, and this can lead to huge differences. E.g. the charity research I find most fun to do might have ~0 impact, whereas the research I think is highest impact might be considerably less fun but significantly more valuable.

Thanks for writing this Caroline, really interesting post! I think it's probably true that having talented people work really hard on important things is higher impact than having people donate a little bit more money. 

However, I am concerned about the idea that one should prioritize their impact over relationships with family, friends, romantic partners, or children, for two reasons: 

  1. I think it's important to note that, personally, donating 10-20% of my income to effective charities makes literally zero difference to my life enjoyment.* But neglecting relationships would significantly reduce my life enjoyment. If lots of EAs are less happy (and potentially also their partners, friends, and family), then the corresponding increase in impact from working hard would need to outweigh that reduction in happiness to provide a net benefit. 
  2. If lots of EAs are less happy, it would presumably be harder to attract new people and also increase burnout. There might also be diminishing marginal returns to work in many cases (e.g. once GiveWell has analysed a charity for 100 hours, the 101st hour probably doesn't provide that much more information). But returns to donations are probably linear, unless you are dealing with large amounts of money such that you run out of equally cost-effective opportunities. 
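The diminishing-vs-linear contrast in point 2 can be made concrete with toy curves. Both functions below are my own illustrative assumptions (a logarithmic curve for research hours, a straight line for donations), not anything claimed in the comment:

```python
import math

# Toy model: research hours with diminishing (logarithmic) returns vs.
# donations with linear returns. Both curves are illustrative assumptions.

def research_value(hours, scale=100.0):
    # Concave returns: each additional hour adds less than the last.
    return scale * math.log1p(hours)

def donation_value(dollars, cost_effectiveness=1.0):
    # Linear returns, assuming equally cost-effective opportunities remain.
    return cost_effectiveness * dollars

# The 101st research hour is worth far less than the 1st...
marginal_hour_101 = research_value(101) - research_value(100)
marginal_hour_1 = research_value(1) - research_value(0)
print(marginal_hour_101 < marginal_hour_1)  # prints "True"

# ...while the marginal donated dollar is worth the same throughout.
print(donation_value(200) == 2 * donation_value(100))  # prints "True"
```

The crossover point between the two curves would depend entirely on the parameters, which is why the original comment hedges with "in many cases".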

I am unsure whether this means EAs shouldn't work 7-day weeks and de-prioritize relationships, but I don't think it's clear they should. Of course, this might work for some people but not others! 

* I may be in a particularly privileged position here, as I currently live with my parents and do not pay rent or have kids, but I suspect a high proportion of EAs would reach a roughly similar conclusion. 
