I recently gave a talk at EAG:Boston on this topic.

The video is now up.

Below is the blurb and a rough script (it may be easier to follow by watching the video at boosted speed than by reading the script).

All these ideas are pretty speculative, and we're working on some more in-depth articles on this topic, so I'd be keen to get feedback.

* * *

A common objection to effective altruism is that it encourages an overly “individual” way of thinking, which reduces our impact. Ben will argue that, at least in a sense, that’s true.

When you’re part of a community, the careers, charities and actions that are highest-impact change. An overly narrow, individual analysis, which doesn’t take account of how the rest of the community will respond to your actions, can lead you to have less impact than you otherwise could. Ben will suggest some better options and rules of thumb for working together.

* * *

Introduction - why work together?

  • Here’s one of the most powerful ways to have more impact: join a community.
  • Why’s that?
  • Well, one reason is it’s motivating - being around other people who want to help others changes your social norms, and makes you more motivated.
  • It’s like networking on steroids - once one person vouches for you, they can introduce you to everyone else.
  • And also, what I want to talk about today, you can trade and coordinate.
    • Let’s suppose I want to build and sell a piece of software. One approach would be to learn all the skills needed myself - design, engineering, marketing and so on.
    • A much better approach is to form a team who are skilled in each area, and then build it together. Although I’ll have to share the gains with the other people, the size of the gains will be much larger, so we’ll all win.
      • [diagram]
    • One thing that’s going on here is specialisation - each person can focus on a specific skill, which lets them be more effective.
    • Another thing is that the team can share fixed costs (same office, same company registration, same operational procedures etc.), letting us achieve economies of scale.
    • In total, we get what’s called the “gains from trade”.
    • An important thing about trade is that you can do it with people who don’t especially share your values, and both gain.
      • Suppose, hypothetically, one group runs an animal charity, and they don’t think global poverty is that high impact.
      • Another group runs a global poverty charity, and they don’t think factory farming is that high impact.
    • But imagine both groups know some donors who might be interested in the other cause. They can make mutual introductions. Making an introduction doesn’t cost much, but could be a huge benefit to the other group. So, if both groups trade, they both gain.
    • This is why 100 people working together have the potential to have far more impact than 100 people doing what individually seems best.

 

Why the EA community?

  • What I’ve said so far applies to any community, and there are lots of great communities out there.
  • But I know many people who think that getting involved in the EA community has been an especially big boost to their impact.
  • Why’s that?
  • Well, as we’ve shown, you can trade with people in general to have a greater impact, even when you don’t especially share their values.
  • But if you do share values, you don’t even need to trade.
  • What do I mean?
  • Well, if I help someone else in the EA community have more impact, then I’ve also had more impact, so we both achieve our goals.
  • This means I don’t need to worry about getting a favour back from the other person to break even. Just helping them is already valuable.
  • This unleashes far more opportunities to work together that just wouldn’t be efficient in a community where people don’t share my aims as much.
    • (Technically speaking, transaction costs and principal-agent problems are dramatically reduced.)

 

  • We don’t normally think about it like this, but earning to give can actually be an example of this kind of coordination.
    • In the early days of 80k, we needed one person to run the org, and we needed funding. Another guy called Matt and I both considered the position. We realised Matt had higher earning potential than me, while I was better suited to running 80,000 Hours, hopefully.
    • There were other factors at play, but in part, this is why I became the CEO, and Matt earned to give and became one of our largest donors, as well as seed funding several other orgs.
    • The alternative would have been for us both to earn to give, in which case, no 80k; or for us both to work at 80k, in which case it would have taken us much longer to fundraise (and the other orgs wouldn’t have benefited).
    • With the community as a whole, some people are like Matt, relatively better suited to earning money, and others to running non-profits. We can achieve more if the people best suited to earning money earn to give and fund everyone else.
  • In sum, by working together effectively, we have the potential to achieve far more.

How can we work together better?

  • However, I don’t think we work together as a community as well as we could.
  • Effective altruism encourages us to ask which individual actions lead to the most impact.
  • Some critiques of the community have suggested this question could bias us so that we don’t actually do the highest impact things.
  • Here is perhaps the most well-known criticism in this vein, in the London Review of Books.
    • [read out]
    • I agree with Jeff McMahan’s response to this article. He points out that ultimately what we can control are our individual actions, so they are the most relevant. We can’t direct the whole of society.
  • However, I think there’s also truth in Amia Srinivasan’s view: when we think about what’s best from an individual perspective, we have to be wary of the biases of this way of thinking, otherwise we might not actually find the highest-impact individual actions.

  • I often see people in the community taking what I call a “narrow, single player” perspective to figuring out what’s best - not fully factoring in the relevant counterfactuals and how the community will adjust to their actions.
  • This might have worked before we had a community, but these days it doesn’t.
  • Instead we need to move to what I call a “multiplayer perspective”.
  • In particular:
    • First, we need to take a different approach to choosing between our options.
    • Second, new options become promising.
    • I’ll cover both in turn.

1) How to choose between our options

    • Let’s consider this question: should I take a job at a charity in the community, like GiveWell, AMF or CEA?
    • Suppose Amy is considering working at one of these charities. What’s her impact?
    • The naive view is that the job is high impact, so if she takes it, she’ll have a big impact.
    • But then she hears about EA, and someone points out: if she doesn’t take the job, someone else will, so actually her impact is small. The job is only worth taking if she’d be much better than the person who replaces her.
    • We call this analysis “simple replaceability” and it’s an example of single player thinking.
    • This leads to lots of people not wanting to do direct work, and thinking it’s better to earn to give instead.
    • But this is wrong. And I apologise, because it’s partly our fault for talking loosely about the simple replaceability view in the past. But today I want to help stamp it out.

 

  • The first problem is that you won’t always be replaced. There’s a chance that the charity just won’t hire anyone otherwise.
    • In fact this often seems to be the case. When we talk to the organisations, there are roles they’ve been trying to fill for a while, but haven’t been able to.
    • One reason for this is that there are donors with money on the sidelines: if the organisations were able to find someone with a good level of fit, they could fundraise enough money to pay for their salaries.
    • This means the organisations do what’s called “threshold hiring” - they hire anyone above that level of fit.
    • And there are other ways you can end up not being replaceable, such as supply-demand effects, which we cover elsewhere.
    • Either way, the simple analysis of replaceability ends up underestimating the impact.
    • Instead, you can end up being pretty valuable to the organisation you work at.
    • One way to estimate the effect is to ask the org how much you’d need to donate to them to be indifferent between you taking the job and them getting donations. This helps to measure the size of the benefit to the organisation. [show Q on slide]
    • We actually did this with 12 organisations in the community, and for their most recent hire, they gave figures of
      • [slide: average of $126,000 – $505,000 and a median of $77,000 – $307,000 per year.]
      • The organisations may well be biased upwards, but since it’s significantly more than most people who could work at EA orgs could donate, it at least suggests they’re having much more impact than they would through earning to give.
      • We also asked the orgs simply how funding- vs talent-constrained they are, and there was a clear skew towards talent constraints rather than funding constraints.
        • Interestingly, the animal-focused orgs were more funding constrained, so if you remove them, the figures are higher.

  • So, the first problem with the simple analysis of replaceability is that you might not actually be replaced. The second problem is where the community comes in.
  • Even if you take the job, and someone else would have taken it anyway, that person is freed up to go and do something else that’s valuable. So there’s a spillover benefit to the rest of the community.
  • If you were considering a job that would otherwise be filled by someone who didn’t care about social impact, like a random job in the corporate world, then you can mostly ignore these spillovers - the “single player” view would be roughly right.
  • But in the current community, that person will probably go and do something else you think is high-impact, so it’s a significant benefit.
    • This is not hypothetical - I’ve seen real cases where someone didn’t take a job because they thought they’d be replaceable, which then meant someone else had to be pulled away from another high-impact role.
  • So, there’s a second benefit that’s ignored by the simple analysis of replaceability.
    • And, for both reasons, the impact of taking the job is higher.
  • How valuable? I think this is still an unsolved problem, but here is a sketch of our thinking.

 

  • Basically, you cause a chain reaction of replacements. The chain ends when either (i) it hits a threshold job where the person isn’t replaceable, or (ii) it reaches the marginal opportunity in the community - the best role that has not yet been taken.
    • So, at worst, you’re adding someone to the marginal position in the community, which is still a significant impact. (The sketch below illustrates this lower bound.)
    • Plus, by triggering the chain, you’re hopefully helping people switch into roles that better play to their relative strengths. So we also get a more efficient allocation.
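To make that lower bound concrete, here is a minimal sketch in Python. The role values are made-up numbers for illustration only, not estimates from the talk.

```python
# A minimal sketch of the replacement-chain logic, with made-up numbers.
# Each entry is the hypothetical value of a role (in $k/year), listed in
# descending order of impact; the last entry is the marginal role - the
# best role that wouldn't otherwise have been filled.
role_values = [300, 250, 200, 120, 60]

# Simple replaceability: if you'd be replaced in role 0, your impact is
# just your edge over the replacement, i.e. roughly zero.
# Multiplayer view: the displaced candidate takes role 1, displacing
# someone into role 2, and so on, until the chain fills the marginal role.
marginal_value = role_values[-1]
print(f"Lower bound on counterfactual impact: ~${marginal_value}k/year")

# On top of this, each swap along the chain can move someone into a role
# that better fits their relative strengths, adding further value.
```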

 

  • Zooming out: rather than identifying the single highest-impact job and trying to take it, we should see that the community has several thousand roles to fill, and wants to achieve the optimal allocation over them. The question is: what can you do to take the community towards the optimal allocation over the best roles?

  • I think the key concept here is your comparative advantage, compared to the rest of the community.

  • Comparative advantage can be a little counterintuitive, and is not always the same as personal fit, so let’s explore a little more.
  • Let’s imagine there are two roles that need filling, research and outreach; and two people.
    • [diagram]
    • Charlie is 1 at outreach and 2 at research.
    • Dora is 2 at outreach and 10 at research.
    • What’s best?
    • It depends on exactly how we interpret these numbers, but probably Charlie should do outreach and Dora should do research, because 1 + 10 > 2 + 2.
  • This is surprising because, in a sense, Charlie is actually worse at outreach than Dora, and worse at outreach than research, so he in no sense has an absolute advantage in outreach - but it turns out he does have a comparative advantage in it. This is because he’s relatively less bad at outreach compared to Dora. (See the code sketch after this list.)
  • I have a suspicion this might be a real example.
    • [slide]
    • Lots of people in the community are good at analytical things compared to people in general, so they figure they should do research.
    • But this doesn’t follow. What actually matters is how good they are at research relative to others in the community.
    • If we have lots of analytical people and few outreach people, then even though you’re good at research compared to people in general, you might have a comparative advantage in outreach.
    • Something similar seems true with operations roles.
    • It may also be true with earning to give. People often reason that because they have high earning potential, they should earn to give. But this doesn’t quite follow. What actually matters is their earning potential relative to others who could do direct work. If the other direct work people also have high earning potential, then they might have a comparative advantage in direct work.
    • Unless you’re Chuck Norris, who has a comparative advantage in everything.
  • How can you figure out your comparative advantage in real cases? Ask people in charge of hiring at the orgs - what would the next best hire do anyway, and what would their donation potential be compared to yours?
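To make the toy example concrete, here is a minimal sketch in Python, treating the Charlie/Dora scores as additive units of output (one of the possible interpretations mentioned above). It enumerates both assignments and keeps the one that maximises total output.

```python
from itertools import permutations

# Hypothetical productivity scores from the toy example above,
# treated as additive units of output.
productivity = {
    "Charlie": {"outreach": 1, "research": 2},
    "Dora":    {"outreach": 2, "research": 10},
}
people = ["Charlie", "Dora"]
roles = ["outreach", "research"]

# Try every assignment of people to roles and keep the one with the
# highest total output.
best = max(
    permutations(roles),
    key=lambda assignment: sum(
        productivity[person][role]
        for person, role in zip(people, assignment)
    ),
)
for person, role in zip(people, best):
    print(f"{person} -> {role}")
# Charlie -> outreach, Dora -> research (total 1 + 10 = 11), which beats
# the reverse assignment (total 2 + 2 = 4), even though Charlie has no
# absolute advantage in outreach.
```

Brute-force enumeration obviously doesn’t scale to thousands of people and roles, but the principle carries over: the best allocation follows comparative, not absolute, advantage.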

  • So, let’s sum up: how do you work out your impact from taking a job in the community?
    • First, you probably cause some boost to the org itself, and you’re not fully replaceable. You can roughly estimate the size of this boost by asking the org to make tradeoffs, or making your own estimates.
    • Second, you cause some spillover benefit to the rest of the community, because you free up someone else to go and work elsewhere.
      • The key questions are then whether the role is above the bar for the community as a whole, and whether it plays to your comparative advantage compared to the other people who might take it.
      • Are you moving the community towards a better overall allocation?

 

  • I think basically the same analysis applies to donating as well.
    • Sometimes people don’t want to donate because they think someone else will fund the charity anyway.
    • But they ignore the fact that even if their donation were 100% replaced, at worst they’d be freeing up another donor, sooner, to go and donate to something else.
    • There’s lots more to say here; I go into a bit more detail in this article.



2) The new options that become available

 

  • Being part of a community also changes your career by opening up new options that wouldn’t be on the table if you were just in a single player game.

  • The EA mindset, with a focus on individual actions, can lead us to neglect paths to impact through helping others achieve more, because the impact is less salient. However, now that there are thousands of other effective altruists, the highest-impact option for you could well be one that involves “boosting” others in the community.

    • Here are five examples. They aren’t exhaustive or exclusive, but are just some ideas.

    • First, five-minute favours.
      • We all have different strengths and weaknesses, knowledge and resources. Now the community is so big, there are probably lots of small ways we can help others in the community have far more impact at very little cost to ourselves - “five-minute favours”, a term I took from Adam Grant. These are really worth looking for.
      • Do you know a job that needs filling? There’s a good chance someone in this room would be a good candidate. Introducing them might take you under an hour, but it could have a major impact for years.
      • There’s probably someone in this room who has a problem you could easily solve, because you’ve already solved it, or you know a book that contains the answer, and so on.
    • Here’s a second, more involved example of helping others be more effective: operations roles in general.
    • Kyle went to Oxford, and ended up becoming Nick Bostrom’s assistant. He reasoned that if he could save Bostrom time, then he could let more research and outreach get done. I think this could be really high impact.
    • People feel like these roles are easily replaceable by people outside the community, but because you have to make lots of little decisions that require a good understanding of the aims of the organisation, they’re actually often very hard to outsource.

 

    • A third category is that building community infrastructure becomes much more valuable.
      • Having a job board isn’t really needed when there are only a few hundred people in the community, but now there are thousands it can play a useful role, so we recently made one.
        • [show screenshot]
      • Infrastructure is anything that helps the community coordinate more efficiently, such as this event, or setting up good norms that make it easier to work together, like stating the evidence for your views, or being nice.
      • If you can help 1,000 people be 1% more effective, that’s like having the impact of 10 people.
      • On the other hand, if you do something destructive, it ruins things for everyone else.

 

    • A fourth category is knowledge sharing.
    • The more people there are in the community, the more valuable it is to do research into what the community should do and share it, because there are more people who might act on the findings.
      • One example is writing up reports on areas we have special knowledge of.
        • [give examples from EA forum]
      • This can mean it’s sometimes worth going and learning about areas that don’t seem like the highest priority but might turn out to be. In a smaller community, this exploration wouldn’t be worth the time, but as we become larger, it is.
  • A fifth example: specialisation becomes more worthwhile.
      • If the community were just a couple of people, we’d all need to be generalists. But in a community of, say, 1,000 people, we can each become experts in our individual areas, and together be more than 1,000 times as productive as an individual. This is just the division of labour we mentioned at the very start, with the software example.
      • For instance, Dr Greg Lewis did the research for 80,000 Hours into how many lives doctors save, and this convinced him that he wouldn’t have much social impact as a doctor through clinical practice.
      • Instead he decided to do a masters in public health.
      • Part of the reason was that it’s an important area for the community, especially around pandemics, but there’s a lack of people with the skillset.
      • Greg actually thinks AI risk might be higher priority in general, but as a doctor, he has a comparative advantage in public health.
      • Right now, I, and many others, think that one of the greatest weaknesses of the community is a lack of specialist expertise. We’re generally pretty young and inexperienced.
      • Some particular gaps include the following, which I’m not going to read out:
        • Policy experts - go and work in politics or at a think tank
        • Bioengineering PhDs
        • Machine learning PhDs
        • Economics PhDs
        • Other under-represented areas, such as history and anthropology
        • Entrepreneurial managers and operational people
        • Marketing and outreach experts

Summary

  • When choosing whether to take a job, or donate somewhere, don’t assume you’re replaceable.
    • Rather, ask the organisation, or others who are aware of the alternative options, about your relative strengths and weaknesses.
    • You might also trigger a chain of people going into other roles. Consider whether the role plays to your comparative advantage.
  • Look for ways to boost the impact of others in the community.
    • 5 minute favours
    • Operations roles
    • Community infrastructure
    • Sharing knowledge.
    • Specialisation.

 

End

  • We still have a lot to learn about how to best work together, and there’s a lot more we could do. But I really believe that if we do work together effectively, then, in our lifetimes, the community can make major progress on reducing catastrophic risks, eliminating factory farming, ending global poverty, and many other issues.

 

 

* * *

Comments

With regard to "following your comparative advantage":

Key statement: While "following your comparative advantage" is beneficial as a community norm, it might be less relevant as individual advice.

Imagine two people, Ann and Ben. Ann has very good career capital to work on cause X: she studied a relevant subject, has relevant skills, maybe some promising work experience and a network. Ben has very good career capital to contribute to cause Y. Both have the aptitude to become good at the other cause as well, but it would take some time, involve some cost, and maybe not be as safe.

Now Ann thinks that cause Y is 1000 times as urgent as cause X, and for Ben it is the other way around. Both consider retraining for the cause they think is more urgent.

From a community perspective, it is reasonable to promote the norm that everyone should follow their comparative advantage. This avoids prisoner's dilemma situations and increases the total impact of the community. After all, the solution that would best satisfy both Ann's and Ben's goals would be for each to continue in their respective areas of expertise. (Let's assume they could be motivated to do so.)

However, from a personal perspective, let's look at Ann's situation: In reality of course, there will rarely be a Ben to mirror Ann, who would also be considering retraining at exactly the same time as Ann. And if there was, they would likely not know each other. So Ann is not in the position to offer anyone the specific trade that she could offer Ben, namely: "I keep contributing to cause X if you continue contributing to cause Y."

So these might be Ann's thoughts: "I really think that cause Y is much more urgent than anything I could contribute to cause X. And yes, I have already considered moral uncertainty. If I went on to work on cause X, this would not directly cause someone else to work on cause Y. I realize that it is beneficial for EA to have a norm that people should follow their comparative advantage, and the creation of such a norm would be very valuable. However, I do not see how my decision could possibly have any effect on the establishment of such a norm"

So for Ann it seems to be a prisoner’s dilemma without iteration, and she ought to defect.

I see one consideration why Ann should continue working towards cause X: if Ann believed that EA was going to grow a lot, EA would reach many people with a better comparative advantage for cause Y. And if EA successfully promoted said norm, those people would all work on cause Y, until Y was no longer neglected enough to be much more urgent than cause X. Whether Ann believes this is likely to happen depends strongly on her predictions of the future of EA and on the specific characteristics of causes X and Y. If she believed this would happen (soon), she might think it best for her to continue contributing to X. However, I think this consideration is fairly uncertain and I would not give it high weight in my decision process.

So it seems that

  • it clearly makes sense (for CEA/ 80000 hours/ ...) to promote such a norm
  • it makes much less sense for an individual to follow the norm, especially if said individual is not cause-agnostic or does not think that all causes are within the same 1-2 orders of magnitude of urgency.

All in all, the situation seems pretty weird. And there does not seem to be a consensus amongst EAs on how to deal with this. A real world example: I have met several trained physicians who thought that AI safety was the most urgent cause. Some retrained to do AI safety research, others continued working in health-related fields. (Of course, for each individual there were probably many other factors that played a role in their decision apart from impact, e.g. risk aversion, personal fit for AI safety work, fit with the rest of their lives, ...)

PS: I would be really glad if you could point me to errors in my reasoning or aspects I missed, as I, too, am a physician currently considering retraining for AI safety research :D

PPS: I am new to this forum and need 5 karma to be able to post threads. So feel free to upvote.

Hi there,

I think basically you're right, in that people should care about comparative advantage to the degree that the community is responsive to their choices, and they're value-aligned with typical people in the community. If no-one is going to change their career in response to your choice, then you default back to whatever looks highest-impact in general.

I have a more detailed post about this, but I conclude that people should consider all of role impact, personal fit and comparative advantage, putting more or less emphasis on comparative advantage compared to personal fit depending on certain conditions.

Very much enjoyed this. Good to see the thinking developing.

My only comment is on simple replaceability. I think you're right to say this is too simple in an EA context, where this could cause a cascade of replacements, or the work wouldn't have got done anyway.

Do you think simple replaceability doesn't apply outside the EA world? For example, person X wants to be a doctor because they think they'll do good. If they take a place at med school, should we expect that 'frees up' the person who doesn't get the place to go and do something else instead? My assumption is the borderline medical candidate is probably not that committed to doing the most good anyway.

To push the point in a familiar case: assume I'm offered a place at an investment bank and I was going to E2G, but I decide to do something more impactful, like work at an EA org. It's unlikely that the person who gets my job and salary instead would be donating to good causes.

If you think replaceability is sometimes true and other times not, it would be really helpful to specify that. My guess is motivation and ability to be an EA play the big role.

Hi Michael,

I'm writing a much more detailed piece on replaceability.

But in short, simple replaceability could still be wrong in that the doctor wouldn't be replaced. In general, a greater supply of doctors should mean that more doctors get hired, even if each extra candidate adds less than one doctor on the margin.

But yes you're right that if the person you'd replace isn't value-aligned with you, then the displacement effects seem much less significant, and can probably often be ignored.

"If you think replaceability is sometimes true and other times not, it would be really helpful to specify that. My guess is motivation and ability to be an EA play the big role."

We did state this in our most recent writing about it from 2015: https://80000hours.org/2015/07/replaceability-isnt-as-important-as-you-might-think-or-weve-suggested/ It's pretty complex to specify the exact conditions under which it does and doesn't matter, and I'm still working on that.

That's a great talk, thank you for it. This is why I've started to mind people being encouraged to figure out what "their cause area" is.

Apart from the fact that they're likely to change their mind within a few years anyway, it's more valuable for the world for them to focus on what they're good at even if it's not in their preferred cause area. Cooperation between cause areas is important.

(Also, "figuring out what the best cause areas are" might be something that should also be done by people whose comparative advantage it is).

[anonymous]

Great talk!

Given the value that various blogs and other online discussions have provided to the EA community, I'm a bit surprised by the relative absence of 'advancing the state of community knowledge by writing etc.' in 80k's advice. In fact, I've found that the advice to build lots of career capital and fill every gap with an internship has discouraged me from such activities in the past.

"One reason for this is that there are donors with money on the sidelines: if the organisations were able to find someone with a good level of fit, they could fundraise enough money to pay for their salaries."

Can you (very roughly) quantify to what extent this is the case for EA organisations? (I imagine they will vary as to how donor-rich vs. potential-hire-rich they are, so some idea of the spread would be helpful.)

Hey, the most relevant data we have on this is here: https://80000hours.org/2017/03/what-skills-are-effective-altruist-organisations-missing/

We hope to do a more detailed survey this summer.

In terms of quantifying the amount of money available, we also have some survey results that I'm hoping to publish. But a key recent development is that now the Open Philanthropy Project is funding charities in the community.