Comment author: Ben_Todd 22 December 2016 09:08:33PM 1 point [-]

Hmm, my comment about this was lost.

On second thoughts, "leisure time" isn't quite what I meant. I was thinking more that it would come out of other extracurriculars (e.g. chess society).

Anyway, I think there are three main types of cost:

  1. Immediate impact you could have had doing something else e.g. part-time job and donating the proceeds.

  2. Better career capital you could have gained otherwise. I think this is probably the bigger issue. However, I also think running a local group is among the best options for career capital while a student, especially if you're into EA. So it's plausible the opportunity cost is near zero. If you want to do research and would be giving up a research project, though, it could be pretty significant.

  3. More fun you could have had elsewhere. This could be significant on a personal level, but it wouldn't be a big factor in a calculation measured in terms of GiveWell dollars.

Comment author: rohinmshah  (EA Profile) 23 December 2016 06:37:02PM 0 points [-]

Okay, this makes more sense. I was mainly thinking of the second point -- I agree that the first and third points don't make too much of a difference. (However, some students can take on important jobs, e.g. Oliver Habryka working at CEA while being a student.)

Another possibility is that you graduate faster. Instead of running a local group, you could take one extra course each semester. Aggregating this, for every two years of not running a local group, you could graduate a semester earlier (four extra courses is roughly one semester's course load).

(This would be for UC Berkeley, I think it should generalize about the same to other universities as well.)

Comment author: AGB 22 December 2016 06:18:54PM 0 points [-]

I think the arguments in favor of meta are intuitive, but not easy to find. For one thing, the orgs' posts tend to be org-specific (unsurprisingly) rather than a general defense of meta work. In fact, to the best of my knowledge the best general arguments have never been made on the forum at the top level because it's sort-of-assumed that everybody knows them. So while you're saying Peter's post is the only such post you could find, that's still more than the reverse (and with your post, it's now 2 - 0).

At the comment level it's easy to find plenty of examples of people making anti-meta arguments.

Comment author: rohinmshah  (EA Profile) 22 December 2016 07:36:26PM 0 points [-]

I think the arguments in favor of meta are intuitive, but not easy to find. For one thing, the orgs' posts tend to be org-specific (unsurprisingly) rather than a general defense of meta work.

Huh, there is a surprising lack of a canonical article that makes the case for meta work. (Just tried to find one.) That said, it's very common when getting interested in EA to hear about GiveWell, GWWC and 80K, and to look them up, which gives you a sense of the arguments for meta.

Also, I would actually prefer that the arguments against also be org-specific, since that's typically more decision-relevant, but a) that's more work and b) it's hard to do without actually being a part of the organization.

Anyway, even though there's not a general article arguing for meta (which I am surprised by), that doesn't particularly change my belief that a lot of people know the arguments for but not the arguments against. This has increased my estimate of the number of people who know neither the arguments for nor the arguments against.

Comment author: Elizabeth 22 December 2016 05:03:27PM 0 points [-]

Hypothesis: there's lots of good, informal meta work to be done, like convincing your aunt to donate to GiveWell rather than Heifer International, or convincing your company to do a cash fundraiser rather than a canned food drive. But the marginal returns diminish really quickly: once you've convinced all the relatives who are amenable, it is really hard to convince the holdouts or find new relatives. But the remaining work isn't just lower expected value; it has much slower, more ambiguous feedback loops, so it's easy to miss the transition.

Object-level work is hard, and there are few opportunities to do it part time. Part-time meta work is easy to find and sometimes very high value. My hypothesis is that when people think about doing direct work full time, these facts conspire to make meta work the default choice. In fact, full-time meta work is the most difficult thing, because of poor feedback loops, and the easiest to be actively harmful with, because you risk damaging the reputation of EA or charity as a whole.

I think we need to flip the default so that people look to object-level work, not meta, when they have exhausted their personal low-hanging fruit.

Comment author: rohinmshah  (EA Profile) 22 December 2016 05:40:45PM 0 points [-]

My hypothesis is that when people think about doing direct work full time, these facts conspire to make meta work the default choice.

I'm confused about what you mean by the "default". Do you mean the default career choice?

My impression was that most people don't even do their personal low hanging fruit because of the social awkwardness around it. What sorts of things do you think people do after exhausting their personal low hanging fruit?

If you mean that the default choice is to work at a meta organization, that seems unlikely -- most meta organizations are small, and it's my impression that CEA often has trouble filling positions. According to the annual survey, 512 people said they were going to earn to give, while only 190 people said "direct charity/non-profit work", and only a portion of those would be at meta organizations. So it seems like earning to give is the default choice.

In fact, full-time meta work is the most difficult thing, because of poor feedback loops

The feedback loops don't seem poor to me. If you're trying to do outreach, you can see exactly how your techniques are working based on how many people you get interested and how interested they are and what they go on to do.

and the easiest to be actively harmful with, because you risk damaging the reputation of EA or charity as a whole.

If you're working in animal welfare, you could turn off a lot of people ("those preachy vegans, they're all crazy"), harming animals. If you're working in x-risk, there can often be the chance that you actually increase x-risk (for example, you show how to have some basic AI safety features, and then people think the safety problem is solved and build an AGI with those safety features, which turn out not to be enough). Even in global poverty, we have stories like PlayPump, though in theory we should be able to avoid that.

If you include long-run far future effects there are tons of arguments that action X could actually be net negative.

Comment author: AGB 22 December 2016 07:15:32AM *  2 points [-]

I agree, I think it's just disproportionately the case that donors to meta work are not taking into account these considerations.

What makes you think this? I found this post interesting, but not new; it's all stuff I've thought about quite hard before. I would have thought I was roughly representative of meta donors here (I certainly know people who have thought harder), though I'd be happy for other such donors to contradict me.

Comment author: rohinmshah  (EA Profile) 22 December 2016 05:14:43PM 0 points [-]

I've had conversations with people who said they've donated to GWWC because of high leverage ratios, and my impression based on those conversations is that they take the multiplier fairly literally ("even if it's off by an order of magnitude it's still worthwhile") without really considering the alternatives.

In addition, it's really easy to find all of the arguments in favor of meta, including (many of) the arguments that impact is probably being undercounted -- you just have to read the fundraising posts by meta orgs. I don't know of any post other than Hurford's that suggests considerations against meta. It took me about a year to generate all of the ideas not in that post, and it certainly helped that I was working in meta myself.

Comment author: Ben_Todd 21 December 2016 08:57:37PM 0 points [-]

Consider instead the case where a general member of a local group comes to a workshop and takes the GWWC pledge on the spot (which I think happens not infrequently?). The local group has done the job of finding the member and introducing her to EA, maybe raising the probability to 30%. 80K would count the full impact of that pledge, and the local group would probably also count a decent portion of that impact.

I can't speak for the other orgs, but 80k probably wouldn't count this as "full impact".

First, the person would have to say they made the pledge "due to 80k". Whereas if they were heavily influenced by the local group, they might say they would have taken it otherwise.

Second, as a first approximation, we use the same figure GWWC does for the value of a pledge in terms of donations. IIRC this already assumes only 30% is additional, once counterfactually adjusted. This % is based on their surveys of the pledgers. (Moreover, for the largest donors, who determine 75% of the donations, we ask them to make individual estimates too.)

Taken together, 80k would attribute at most 30% of the value.

Third, you can still get the undercounting issue I mentioned. If someone later takes the pledge due to the local group, but was influenced by 80k, 80k probably wouldn't count it.
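Putting the first two points together, here's a rough sketch of the counting rule (purely illustrative; the function name and the example dollar figure are my simplifications of the approximations above, not our actual model):

    # Illustrative sketch only -- the 30% figure and the "probably not" filter are the
    # approximations described above; the dollar amount is arbitrary.
    def pledge_value_counted(gwwc_value_of_pledge, said_probably_not_otherwise):
        """Value 80k would attribute to one pledge it influenced."""
        if not said_probably_not_otherwise:
            return 0.0                      # first filter: the "due to 80k" survey answer
        counterfactual_share = 0.30         # second filter: GWWC's additionality estimate
        return gwwc_value_of_pledge * counterfactual_share

    # e.g. a pledge valued at $100,000 of lifetime donations would be counted at
    # $30,000 at most, and at $0 if the person says they'd have pledged anyway.
    print(pledge_value_counted(100_000, True))   # 30000.0
    print(pledge_value_counted(100_000, False))  # 0.0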

I don't know how 80k considers the impact of their career workshops, but I would bet money that they don't take into account the costs to the local group that hosts the workshop.

What would you estimate is the opportunity cost of student group organiser time per hour?

How would it compare to time spent by 80k staff?

Comment author: rohinmshah  (EA Profile) 21 December 2016 10:18:30PM 0 points [-]

First, the person would have to say they made the pledge "due to 80k".

Yes, I'm predicting that they would say that almost always (over 90% of the time).

this already assumes only 30% is additional, once counterfactually adjusted.

That does make quite a difference. It seems plausible then that impact is mostly undercounted rather than overcounted. This seems more like an artifact of a weird calculation (why use GWWC's counterfactual instead of having a separate one?). And you still have the issue that impact may be double-counted; it's just that, since you tend to undercount impact in the first place, the effects seem to cancel out.

That's a little uncharitable of me, but the point I'm trying to make is that there is no correction for double-counting impact -- most of your counterarguments seem to be saying "we typically underestimate our impact so this doesn't end up being a problem". You aren't using the 30% counterfactual rate because you're worried about double counting impact with GWWC. (I'm correct about that, right? It would be a really strange way to handle double counting of impact.)

Nitpick: This spreadsheet suggests 53%, and then adds some more impact based on changing where people donate (which could double count with GiveWell).

Third, you can still get the undercounting issue I mentioned. If someone later takes the pledge due to the local group, but was influenced by 80k, 80k probably wouldn't count it.

I agree that impact is often undercounted, and I accept that it may be undercounted to such a degree that double counting would not get you over 100%. I still worry that people think "Their impact numbers are great and probably significant underestimates" without thinking about the issue of double counting, especially since most orgs make sure to mention how their impact estimates are likely underestimates.

Even if people just donated on the basis of "their impact numbers are great" without thinking about both undercounting and overcounting, I would worry that they are making the right decision for the wrong reasons. We should promote more rigorous thinking.

My perspective is something like "donors should know about these considerations", whereas you may be interpreting it as "people who work in meta don't know/care about these considerations". I would only endorse the latter in the one specific case of not valuing the time of other groups/people.

What would you estimate is the opportunity cost of student group organiser time per hour?

The number I use for myself is $20, mostly just made up so that I can use it in Fermi estimates.

How would it compare to time spent by 80k staff?

Unsure. Probably a little bit higher, but not much. Say $40?

(I have not thought much about the actual numbers. I do think that the ratio between the two should be relatively small.)

I also don't care too much that 80k doesn't include costs to student groups because those costs are relatively small compared to the costs to 80k (probably). This is why I haven't really looked into it. This is not the case with GWWC pledges or chapter seeding.

Comment author: Jeff_Kaufman 21 December 2016 07:18:46PM 0 points [-]

I just realized: there's no way that rss feed can work, because it needs to be authenticated with your cookies. Sorry!

Comment author: rohinmshah  (EA Profile) 21 December 2016 08:17:40PM 0 points [-]

Okay, that makes sense. I ran into that issue fairly quickly and thought there might be a workaround but tabled that to look at later.

Comment author: Ben_Todd 21 December 2016 05:51:36PM *  2 points [-]

These are all reasonable concerns. I can't speak for the details of the two estimates you mention, though my impression is that the points listed have probably already been considered by the people making the estimates, although you could easily differ from them in your judgement calls.


With LEAN not including the costs of the chapter heads, they might have just decided that the costs of this time are low. Typically, in these estimates, people are trying to work out something like GiveWell dollars in vs. GiveWell dollars out. If a chapter head wouldn't have worked on an EA project or earned to give to GiveWell charities otherwise, then the opportunity cost of their time could be small when measured in GiveWell dollars. In practice, it seems like much chapter time comes out of other leisure activities.


With 80k, we ask people taking the pledge whether they would have taken it if 80k never existed, and only count people who say "probably not". These people might still be biased in our favor, but on the other hand, there are people we've influenced who were then pushed over the edge by another org. We don't count these people towards our impact, even though we made it easier for the other org.

(We also don't count people who were influenced by us indirectly, and so don't know they were influenced.)


Zooming out a bit, ultimately what we do is make people more likely to pledge.

Here's a toy model.

  • At time 0, you have 3 people.
  • Amy has a 10% chance of taking the pledge
  • Bob has an 80% chance
  • Carla has a 90% chance

80k shows them a workshop, which makes each of them 10 percentage points more likely to take it, so at time 1, the probabilities are:

  • Amy: 20%
  • Bob: 90%
  • Carla: 100% -> she actually takes it

Then GWWC shows them a talk, which has the same effect. So at time 2:

  • Amy: 30%
  • Bob: 100% -> actually takes it
  • Carla: 100% (overdetermined)

Given current methods, 80k gets zero impact. Although they got Carla to pledge, Carla tells them she would have taken it otherwise due to GWWC, which is true.

GWWC counts both Carla and Bob as new pledgers in their total, but when they ask them how much they would have donated otherwise, Carla says zero (80k had already persuaded her) and Bob probably gives a high number too (~90%), because he was already close to doing it. So this reduces GWWC's estimate of the counterfactual value per pledge. In total, GWWC adds 10% of the value of Bob's donations to their estimates of counterfactual money moved.

This is pessimistic for 80k, because without 80k, GWWC wouldn't have persuaded Bob, but this isn't added to our impact.

It's also a bit pessimistic for GWWC, because none of their effect on Amy is measured, even though they've made it easier for other organisations to persuade her.

In either case, what's actually happening is that 80k is adding 30 percentage points of pledge probability and GWWC 20 percentage points. The current method of asking people what they would have done otherwise is a rough approximation of this, but it can both overcount and undercount what's really going on.
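To make the bookkeeping explicit, here's a minimal sketch of this toy model in Python (the attribution rules are the simplifications described above, not any organisation's actual methodology):

    # Minimal sketch of the toy model above; illustrative only.
    BOOST = 0.10  # each intervention adds 10 percentage points

    def apply_intervention(probs):
        """Return updated pledge probabilities and the probability points actually added."""
        new_probs, points_added = {}, 0.0
        for name, p in probs.items():
            new_p = min(p + BOOST, 1.0) if p < 1.0 else p
            points_added += new_p - p
            new_probs[name] = new_p
        return new_probs, points_added

    time0 = {"Amy": 0.10, "Bob": 0.80, "Carla": 0.90}
    time1, points_80k = apply_intervention(time0)   # 80k's workshop: Carla pledges
    time2, points_gwwc = apply_intervention(time1)  # GWWC's talk: Bob pledges

    # Survey-based credit, as described above:
    # Carla tells 80k she'd have pledged anyway (GWWC would have got her there), so 80k counts 0.
    # Bob tells GWWC he was already ~90% likely to pledge, so GWWC counts ~10% of his value.
    credit_80k = 0.0
    credit_gwwc = 1.0 - time1["Bob"]

    print(f"Probability points actually added: 80k={points_80k:.2f}, GWWC={points_gwwc:.2f}")
    print(f"Survey-based credit (fraction):    80k={credit_80k:.2f}, GWWC={credit_gwwc:.2f}")

Running it gives 0.30 vs 0.20 probability points actually added, against 0.00 vs 0.10 of survey-based credit, which is the mismatch described above.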

Comment author: rohinmshah  (EA Profile) 21 December 2016 08:12:09PM *  1 point [-]

In practice, it seems like much chapter time comes out of other leisure activities.

I strongly disagree.

I can't speak for the details of the two estimates you mention, though my impression is that the points listed have probably already been considered by the people making the estimates.

This is exactly why I focused on general high-level meta traps. I can give several plausible ways in which the meta traps may be happening, but it's very hard to actually prove that it is indeed happening without being on the inside. If GWWC has an issue where it is optimizing metrics instead of good done, there is no way for me to tell since all I can see are its metrics. If GWWC has an issue with overestimating their impact, I could suggest plausible ways that this happens, but they are obviously in a better position to estimate their impact and so the obvious response is "they've probably thought of that". To have some hard evidence, I would need to talk to lots of individual pledge takers, or at least see the data that GWWC has about them. I don't expect to be better than GWWC at estimating counterfactuals (and I don't have the data to do so), so I can't show that there's a better way to assess counterfactuals. To show that coordination problems actually lead to double-counting impact, I would need to do a comparative analysis of data from local groups, GWWC and 80k that I do not have.

There is one point that I can justify further. It's my impression that meta orgs consistently don't take into account the time spent by other people/groups, so I wouldn't call that one a judgment call. Some more examples:

  • CEA lists "Hosted eight EAGx conferences" as one of their key accomplishments, but as far as I can tell they don't consider the costs to the people who ran the conferences, which can be huge. And there's no way that you could expect this to come out of leisure time.
  • I don't know how 80k considers the impact of their career workshops, but I would bet money that they don't take into account the costs to the local group that hosts the workshop.

(We also don't count people who were influenced by us indirectly, and so don't know they were influenced.)

Yes, I agree that there is impact that isn't counted by these calculations, but I expect this is the case with most activities (with perhaps the exception of global poverty, where most of the impacts have been studied and so the "uncounted" impact is probably low).

Here's a toy model.

The main issue is that I don't expect that people are performing these sorts of counterfactual analyses when reporting outcomes. It's a little hard for me to imagine what "90% chance" means so it's hard for me to predict what would happen in this scenario, but your analysis seems reasonable. (I still worry that Bob would attribute most or all of the impact to GWWC rather than just 10%.)

However, I think this is mostly because you've chosen a very small effect size. Under this model, it's impossible for 80k to ever have impact -- people will only say they "probably wouldn't" have taken the GWWC pledge if they started under 50%, but if they started under 50%, 80k could never get them to 100%. Of course this model will undercount impact.

Consider instead the case where a general member of a local group comes to a workshop and takes the GWWC pledge on the spot (which I think happens not infrequently?). The local group has done the job of finding the member and introducing her to EA, maybe raising the probability to 30%. 80K would count the full impact of that pledge, and the local group would probably also count a decent portion of that impact.

More generally, my model is that there are many sources that lead to someone taking the GWWC pledge (80k, the local group, online materials from various orgs), and a simple counterfactual analysis would lead to every such source getting nearly 100% of the credit, and based on how questions are phrased I think it is likely that people are actually attributing impact this way. Again, I can't tell without looking at data. (One example would be to look at what impact EA Berkeley members attribute to GWWC.)

Comment author: Ben_Todd 21 December 2016 05:15:39PM 1 point [-]

At a glance, it seems like most of the meta-traps don't apply to stuff like promotion of object-level causes.

That's why Peter Hurford distinguished between second-level and first-level meta, and focused his criticism on the second-level.

80,000 Hours and GiveWell are both mainly doing first-level meta (i.e. we promote specific first-order opportunities for impact), though we also do some second-level meta (promoting EA as an idea). 80k does more second-level meta day-to-day than GiveWell, though GiveWell explains their ultimate mission in second-level meta terms:

We aim to direct as much funding as possible of this large pool to the best giving opportunities we can find, and create a global, public, open conversation about how best to help people. We picture a world in which donors reward charities for effectiveness in improving lives.

One other quick point is that I don't think coordination problems arise especially from meta-work. Rather, coordination problems can arise anywhere in which the best action for you depends on what someone else is going to do. E.g. you can get coordination problems among global health donors (GiveWell has written a lot about this). The points you list under "coordination problems" seem more like examples of why the counterfactuals are hard to assess, which is already under trap 8.

Comment author: rohinmshah  (EA Profile) 21 December 2016 07:03:54PM 0 points [-]

At a glance, it seems like most of the meta-traps don't apply to stuff like promotion of object-level causes. That's why Peter Hurford distinguished between second-level and first-level meta, and focused his criticism on the second-level.

I mostly agree, but I think a lot of them do apply to first-level meta in many cases. For example I talked about how they apply to GWWC, which is first-level meta (I think).

80,000 Hours and GiveWell are both mainly doing first-level meta (i.e. we promote specific first order opportunities for impact)

Yes, and I specifically didn't include that kind of first-level meta work. I think the parts of first-level meta that are affected by these traps are efforts to fundraise for effective organizations, mainly ones that target EAs specifically. Even for general fundraising though, I think several traps still do apply, such as trap #1, #6 and #8.

One other quick point is that I don't think coordination problems arise especially from meta-work.

I agree, I think it's just disproportionately the case that donors to meta work are not taking into account these considerations. GiveWell and ACE take these considerations into account when making recommendations, so anyone relying on those recommendations has already "taken it into account". This may arise in X-risk, I'm not sure -- certainly it seems to apply to the part of X-risk that is about convincing other people to work on X-risk.

The points you list under "coordination problems" seem more like examples of why the counterfactuals are hard to assess, which is already under trap 8.

Well, even if each organization assesses counterfactuals perfectly, you still have the problem that the sum of the impacts across all organizations may be larger than 100%. The made-up example with Alice was meant to illustrate a case where each organization assesses their impact perfectly, comes to a ratio of 2:1 correctly, but in aggregate they would have spent more than was warranted.
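To spell out the arithmetic (assuming, as the $5,000 figure in the original example implies, five organizations each spending $1,000 and each being individually necessary for Alice's additional impact):

    # Toy version of the Alice example; numbers made up, as in the post.
    n_orgs = 5
    cost_per_org = 1_000
    alice_extra_impact = 2_000

    # Each org's counterfactual assessment is individually correct:
    # "without us, the $2,000 wouldn't have happened, and we spent $1,000."
    per_org_leverage = alice_extra_impact / cost_per_org      # 2.0, i.e. 2:1

    # But in aggregate, the movement spent $5,000 to get $2,000 of benefit.
    total_spent = n_orgs * cost_per_org                       # 5000
    aggregate_leverage = alice_extra_impact / total_spent     # 0.4, i.e. worse than 1:1

    print(per_org_leverage, total_spent, aggregate_leverage)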

Comment author: Stefan_Schubert 21 December 2016 12:50:04PM 1 point [-]

I would in fact count this as "meta" work -- it would fall under "promoting effective altruism in the abstract".

I don't think that promoting X-risk should be counted as "promoting effective altruism in the abstract".

My point is that an RCT proves to you that distributing bed nets in a certain situation causes a reduction in child mortality.

There are two kinds of issues here:

1) Does the intervention have the intended effect, or would that effect have occurred anyway?

2) Does the donation make the intervention occur, or would that intervention have occurred anyway (for replaceability reasons)?

Bednet RCTs help with the first question, but not with the second. For meta-work and X-risk both questions are very tricky.

Comment author: rohinmshah  (EA Profile) 21 December 2016 06:32:06PM 0 points [-]

Bednet RCTs help with the first question, but not with the second. For meta-work and X-risk both questions are very tricky.

Yes, I agree.

Comment author: Telofy  (EA Profile) 21 December 2016 08:55:20AM 2 points [-]

… but people are aware of the problem and tackle it (research into the probabilities of various existential risks, looking for particularly neglected existential risks such as AI risk). I haven't seen anything similar for meta organizations.

I’m closest to the EA Foundation and know that their strategy rests in large part on focusing on hard-to-quantify, high-risk high-return projects, because these are likely to be neglected. I don’t know if other meta organizations are doing something similar, but it is possible.

Imagine that Alice will now have an additional $2,000 of impact, and each organization spent $1,000 to accomplish this. Then each organization would (correctly) claim a leverage ratio of 2:1, but the aggregate outcome is that we spent $5,000 to get $2,000 of benefit, which is clearly suboptimal. These numbers are completely made up for pedagogical purposes and not meant to be actual estimates. In reality, even in this scenario I suspect that the ratio would be better than 1:1, though it would be smaller than the ratio each organization would compute for itself.

Yes. Good point, and another reason fund ratios are silly (and possibly toxic). The other one is this one. I’ve written an article on a dangerous phenomenon, also related to this attribution problem, that has been limiting work in some cause areas.

Comment author: rohinmshah  (EA Profile) 21 December 2016 10:43:24AM 0 points [-]

I’m closest to the EA Foundation and know that their strategy rests in large part on focusing on hard-to-quantify, high-risk high-return projects, because these are likely to be neglected. I don’t know if other meta organizations are doing something similar, but it is possible.

Huh, interesting. I don't know much about the EA Foundation, but my impression is that this is not the case for other meta orgs.

The other one is this one.

Yeah, I forgot about evaluating from a growth perspective, despite reading and appreciating that article before. Whoops.
