Nov 1 2017 · 14 min read


 

tl;dr

80,000 Hours’ cause priorities framework focuses too heavily on neglectedness at the expense of individuals’ traits. Neglectedness is inapplicable in causes where progress yields comparatively little or no ‘good done’ until everything is tied together at the end; it is insensitive to the slope of diminishing returns, from which it draws its relevance; and as an a priori heuristic it carries much lower evidential weight than even a shallow dive into a cause would provide.

 

Abstract

For some time I’ve been uneasy about 80,000 Hours’ stated cause prioritisation, and about the broader EA movement’s valorisation of neglectedness with what I see as very little justification. 80K recently updated their article 'How to compare different global problems in terms of impact' with an elegant new format, which makes it easier to see the parts I agree with and critique those I don’t. This essay is intended as a constructive exploration of the latter - specifically their definitions of solvability and neglectedness, and why I think they overweight the importance of neglectedness.

 

80K’s framework

80K offer three factors which they think, multiplied together, give the value of contributing to an area. Here are the factors along with their definition of each:

Scale (of the problem we're trying to solve) = Good done / % of the problem solved

Solvability = % of the problem solved / % increase in resources

Neglectedness = % increase in resources / extra person or $

Let’s look at them in turn.

 

Scale

I’ll skim over this one - for most problems the definition seems to capture decently what we think of as scale. That said, there is a class of issues for which - depending on how you interpret it - it’s misleading or inapplicable. These are what I’ll call ‘clustered value problems’: those where progress yields comparatively little or no ‘good done’ until everything is tied together at the end.1

Examples of this might be getting friendly AI research right (if any deviation from perfect programming might still generate a paperclipper), eliminating an infectious disease, or developing any important breakthrough technology (vaccines, cold fusion, mass-producible in vitro meat, a space elevator, etc).

In such cases it wouldn’t make much sense to look only at the value of a ‘% of the problem solved’ unless it was the last percentage (and then that would make it look disproportionately good).

For these problems we should treat scale as 'good done / average % of the problem solved', distinguishing them from what I’ll call ‘distributed value problems’ (ie any that aren’t clustered), where ‘(marginal) % of the problem solved’ is the better denominator (while noting that distributedness/clusteredness is really a spectrum).
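To make the distinction concrete, here’s a minimal sketch (with entirely made-up value functions) of why the marginal ‘% of the problem solved’ misleads for clustered value problems, while the average behaves sensibly:

```python
# Toy illustration: how 'good done per % of the problem solved' behaves for a
# distributed value problem vs a clustered value problem. All numbers invented.

def distributed_value(pct_solved):
    """Distributed value problem: each % solved yields roughly equal good."""
    return 1.0 * pct_solved

def clustered_value(pct_solved):
    """Clustered value problem: (almost) all the good arrives only at completion."""
    return 100.0 if pct_solved >= 100 else 0.0

for value in (distributed_value, clustered_value):
    marginal = value(60) - value(59)   # good from a typical marginal %
    final = value(100) - value(99)     # good from the final %
    average = value(100) / 100         # good per %, averaged over the whole problem
    print(f"{value.__name__}: marginal={marginal}, final={final}, average={average}")

# distributed_value: marginal, final and average all agree (1.0 each).
# clustered_value: the marginal % looks worthless (0.0), the final % looks
# disproportionately good (100.0), and only the average (1.0) gives a sensible
# scale figure.
```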

 

‘Solvability’ and Neglectedness

Solvability is clearly crucial in thinking about prioritising problems. Here’s 80K’s definition of it again:

% of the problem solved / % increase in resources

And of neglectedness:

% increase in resources / extra person or $

There are, I think, three independently fatal problems that make these factors useless:

  1. As presented, ‘solvability’ would imply that any problem that no-one has worked on is unsolvable, since it would require an infinite % increase in resources to make any progress (or rather, the value would just be undefined, since it’s equivalent to a division by 0).

  2. By making this all a nice cancellable equation, it makes the actual values of all but the first numerator (‘good done’) and the last denominator (‘extra person or $’) irrelevant (unless they happen to be 0/infinity, per above). This is really just the equation ‘good done / extra person or $’ in fancy clothing, so the real world values of ‘% of the problem solved’ and ‘% increase in resources’ are no more pertinent to how much good per $ we’ll do than the interloping factor would be in ‘good done / dragons in Westeros * dragons in Westeros / extra person or $’.

    Perhaps 80K didn’t intend the framework to be taken too literally, but inasmuch as they use it at all, it seems reasonable to criticise it on its own terms.

  3. Intuitively, we can see how the real world value of the ‘% of the problem solved’ factor might have a place in a revised equation - eg ‘% of the problem solved / extra person or $’ (higher would obviously indicate more value from working on the problem, all else being equal). But ‘% increase in resources’ has no such use, because it’s a combination of two factors - 100 × absolute contribution / resources already contributed to the problem - the latter of which is totally irrelevant to what we mean by ‘solving a problem’ (and is also the source of the potential division by 0). Because it accounts for contributions of people before me, this factor can increase even if my actual contribution shrinks, and vice versa.2 So it can’t be a reliable multiplier in any list of desirable traits for a high priority cause.

    By using ‘% increase in resources’ as the denominator instead of ‘absolute increase in resources’ I think 80K mean to capture the notion of diminishing returns. But diminishing returns in this regard is the hypothesis that people working on related areas of a potentially multifarious problem, often in competition with each other, will tend to achieve less than those who worked on it before them. It’s not a clear example of the economic notion, since multiple factors are constantly changing within most cause areas (and the economic notion is about increasing a single variable), and even if it were it shouldn’t be hard-coded into our very definition of problem-solving. (The sketch after this list illustrates all three problems with toy numbers.)
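To see all three problems with toy numbers (every figure below is invented purely for illustration):

```python
# Toy numbers for 80K's three-factor product. Units: good in arbitrary 'utility'
# units, resources in $m.
good_done = 1000          # good from fully solving the problem
extra = 1                 # the extra $1m we're considering adding

# Problem 2: the product telescopes to good_done / extra, so the real-world
# values of '% of the problem solved' and '% increase in resources' never matter.
for pct_solved in (0.1, 2.0, 50.0):            # % solved by the extra $1m
    for pct_increase in (0.5, 2.0, 200.0):     # % increase in resources it represents
        scale = good_done / pct_solved
        solvability = pct_solved / pct_increase
        neglectedness = pct_increase / extra
        assert abs(scale * solvability * neglectedness - good_done / extra) < 1e-6

# Problem 1: if no one has worked on the problem yet, '% increase in resources'
# involves dividing by zero, so 'solvability' and 'neglectedness' are undefined:
#   pct_increase = 100 * extra / 0   ->  ZeroDivisionError

# Problem 3 (and footnote 2): '% increase in resources' depends on what others
# have already spent, not just on my contribution, so it can rise while my
# actual contribution shrinks:
print(100 * 100 / 100_000)     # $100 on top of $100,000   -> 0.1%
print(100 * 200 / 1_000_000)   # $200 on top of $1,000,000 -> 0.02%
```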

So although it doesn’t fit into a nice cancellable equation, I think we need to model diminishing returns separately - and then check the plausibility of our model for our purposes. Before we think about that, let’s quickly revisit clustered value problems.

 

Clustered value problems

Because work on clustered value problems (by definition) yields the hypermajority of its value at the point where the last piece is placed in the jigsaw, diminishing returns are largely irrelevant. People might work on the easiest parts of the solution first, but assuming that a fixed number of work-hours - or at least a fixed amount of ingenuity - is necessary to reach the breakthrough, someone will have to plough through the hard stuff to the end, and the value of every resource contributed is equal throughout.

 

Diminishing returns

So distributed value problems are the important case for diminishing returns, and I assume they are what the 80K piece is addressing.

For them, a more plausible approach to marginal solvability, which captures diminishing marginal returns, follows from the following pair of claims: a) we can estimate the rate of diminishing returns D by looking at multiple points in the history of the project, comparing (% of problem solved / absolute resources spent) and selecting an appropriate function.3 Therefore... b) we could apply that function to the number of resources that have been used, R, to figure out the contribution of adding a marginal resource:

Marginal contribution per resource = (R + 1)^D – R^D

And we could now define marginal solvability per resource:

Marginal solvability = marginal % of the problem solved per extra resource = (R + 1)^D – R^D

On this definition, ‘neglectedness’ (or rather, its inverse - heededness?) is just the value of R. And all else being equal, if these claims are true, then we can expect to solve a smaller % of a problem the higher the value of R.
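As a minimal sketch of how this could work in practice (the history data below is invented, and I’ve added a proportionality constant c, since real data won’t arrive pre-scaled), assume ‘% of the problem solved’ grows as c * R^D:

```python
import math

# Invented history of a problem: (resources spent so far, % of the problem solved).
history = [(10, 5.0), (40, 11.0), (160, 24.0), (640, 52.0)]

# Fit % solved = c * R^D by least squares in log-log space.
xs = [math.log(r) for r, _ in history]
ys = [math.log(p) for _, p in history]
n = len(history)
mx, my = sum(xs) / n, sum(ys) / n
D = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c = math.exp(my - D * mx)

def marginal_solvability(R):
    """Extra % of the problem solved by the (R + 1)th resource, under the fitted model."""
    return c * ((R + 1) ** D - R ** D)

# With D < 1 (diminishing returns), the marginal contribution shrinks as R -
# the resources already devoted to the problem - grows:
for R in (10, 100, 1000):
    print(R, round(marginal_solvability(R), 4))
```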

But these claims make two assumptions that we shouldn’t take for granted:

  1. Diminishing returns will apply due to problem prioritisation: even if each new resource added (roughly speaking, each new hire4) within a cause has equivalent competence to those before them (where 'competence' is a measure of 'ability to solve the problem to which the organisation applies itself, directly or indirectly'), they will tend to achieve less than those recruited before them.

  2. Resources will be at best fungible: each new hire within a cause tends to have equivalent or lesser competence than those before them. More precisely, each dollar spent on a marginal hire tends to have equivalent or lesser value than the dollars spent on previous hires. That is, in the new marginal solvability equation, each individual would approximately contribute M marginal resources, where M is a constant.

Looking at these in turn...

 

Diminishing returns due to problem prioritisation

This seems like a workable approximation in many cases, but the extent to which it applies could vary enormously from cause to cause - sometimes it might even reverse. To the extent that an organisation is perfectly rational, its staff will be picking the lowest hanging fruit to work on, but there are several possible caveats (in very roughly ascending order of importance):

  1. Economies of scale, which mean that even if a hire can't achieve as much as the hire before them, they might be sufficiently cheaper as to be equal or better value.

    Anticipating the value of economies of scale seems sufficiently difficult even from month to month that it's unlikely to be worth making life plans based on them. However, they might be relevant to someone investigating a new job rather than a new career – or to someone investigating where to donate.

  2. High risk roles. A new organisation might need to reliably produce results early on to justify its existence, but once established might be able to take on projects less likely to succeed, but with higher expectation. This could potentially happen in medium or even very large organisations, depending on the cost and risk of the new projects (the Open Philanthropy Project and Google X are real-life examples of this).

    High-risk roles (distinct from individuals with high-risk strategies, whom we'll come to shortly) are by their nature substantially rarer than normal-risk roles. They also might not require any particularly different skills from other direct work, so again they seem more relevant to people considering donating or changing jobs rather than deciding career paths.

  3. Diminishing returns assumes that all factors except the number of resources you’re adding stay constant. But the bigger the cause, the more variables within it will constantly be changing, so the less this will be the case. For example, new technologies are constantly emerging, many of which require specialist skills to use effectively, if only to see their potential - eg cheap satellites (and cheap orbital access for them) that allow rapid responses to deforestation.

  4. Some groundwork - possibly a great deal of groundwork - might be required to determine what the low-hanging fruit are, or to make them pickable when they’re identified. In such cases the actual value of working in a field might increase as such work is done.

    This could be a huge factor, especially given the relative novelty of RCTs and cost-benefit analysis. Even many purportedly saturated fields still have huge uncertainty around the best interventions. Also, as our social scientific tools improve we can reasonably hope to uncover high value interventions (ie low hanging fruit) we’ve hitherto missed.

  5. Most people aren’t perfectly rational altruists - and many of the incentives they face within their field won’t be perfectly altruistic.

    Bizarrely, the nonrationality of people and organisations didn’t even occur to me until draft 2 of this essay - and I suspect in many cases this is a very strong effect. If it weren’t, there would be no need for an EA movement. Looking at Wikipedia’s list of animal charities based in the UK, for example, I count 4 of 76 that appear to have a focus on factory farming, which most of us would consider by far the most important near-term cause within the area. I won’t single out others for negative comment, and no doubt our specific views will vary widely, but I imagine most EAs would agree with me that the list bears little to no resemblance to an optimal allocation of animal-welfare-oriented resources. In some causes people are provably acting irrationally, since they’re working at directly cross purposes - the climate change activists advocating nuclear power or geoengineering and those arguing against them can’t all be acting rationally, for example.5

    In some cases irrationality could increase the value of more rational people entering the field - eg by creating another RCT in a field that already has plenty of resources, or simply by creating the option of project managing or otherwise directing someone toward a more valuable area (this essentially seems to have been the situation that Givewell and Giving What We Can’s founders walked into when they first founded the orgs). It might even tend to do so, if the latter interaction turned out to be a major factor.

  6. The more work fully solving the problem requires, the slower we would expect returns to diminish. The density of sub-problems of any given tractability will be greater, so it will take more resources to get through the low-hanging fruit. There’s just orders of magnitude more to do in eg creating a robust system of global governance than in eliminating schistosomiasis, so we would expect returns to diminish at a correspondingly slower rate in the former.6

    We should expect a priori that problem areas will tend to be larger the more people they have working on them. Smaller areas with several people working on them quickly cease to be problem areas, and recruitment for an area would probably slow – if not reverse – as the amount of effective work left to be done within it diminishes.

The hypothesis of diminishing returns due to problem prioritisation is ultimately an empirical claim, so shouldn’t be assumed to hold within any given field without at least some checking against some plausible metrics.

 

Fungible marginal resources

As a sufficiently broad tendency, this is surely true. Just as a rational organisation would pick the lowest hanging fruit to work on, so it would aim to pick the lowest hanging fruit when considering each new staff hire. Nonetheless, there’s one big problem with this assumption.

Problems are disproportionately solved by small numbers of people doing abnormal things, rather than by throwing sufficient fungible resources at them – ie the value of individual contributions follows a power law distribution.

This can be due to a number of reasons, both personal and circumstantial, that might not transfer across causes, so an individual could offer far more to one cause area than another. This is a grandiose way of saying ‘personal fit is very important’, but I think it’s a potentially massive factor - one that can dwarf any diminishing returns - and it means that, at least for direct work, we should think of cause-person pairings, rather than individual causes, as our unit of investigation.
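As a rough sketch of what a power law of individual contributions implies (the distribution and its shape parameter here are invented for illustration):

```python
import random

random.seed(0)

# Sample the value contributed by 10,000 individuals from a Pareto (power law)
# distribution with an invented shape parameter.
alpha = 1.2
contributions = sorted((random.paretovariate(alpha) for _ in range(10_000)), reverse=True)

total = sum(contributions)
print(f"Top 1% of people: ~{sum(contributions[:100]) / total:.0%} of the value")
print(f"Top 10% of people: ~{sum(contributions[:1000]) / total:.0%} of the value")

# When contributions are distributed like this, where a marginal person lands on
# the curve (ie their personal fit with the cause) can matter far more than the
# cause-level diminishing returns discussed above.
```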

 

The consequences of devaluing neglectedness

When I’ve discussed whether the EA movement overemphasises neglectedness, one response I’ve often heard is that it’s really just a heuristic to help decide which causes to look into in the first place. I think there’s some truth to that, especially for ‘top’ EA areas - I certainly don’t think, for example, that Givewell’s recommendations for combating various neglected tropical diseases are based (even in significant fraction) on them being neglected, that ACE recommend factory farming campaigns because so few animal welfare charities address them, or that FHI are so concerned about AI because other academics weren’t. These areas all seem to have reasonable independent arguments to recommend them.

That said, this essay is partly a response to 80K’s priorities framework, which explicitly describes neglectedness as a core part of their cause assessment. If they were ultimately just using it as a top-level heuristic, I would expect them to say as much, rather than rating it as a multiplier with equal weighting to scale and solvability.

What worries me more is that we can see neglectedness invoked explicitly to deter EAs from getting involved in causes that would otherwise seem very promising. This is clearest in 80K’s climate change profile, for example. In the ‘overall view’, they present the three components - ‘scale’, ‘solvability’ and ‘neglectedness’ - as visual equals. And in the ‘major arguments against working on it’ section they present info like ‘the US government spends about $8 billion per year on direct climate change efforts’ as a negative in itself.

It seems very strange to me to imagine this as a strong deterrent. Effective altruists often compare commercial thinking favourably with the ways we think about doing good7 - but it would be very strange to imagine an entrepreneur thinking that a large injection of cash into an area was a reason not to go into it.

80K’s profile on nuclear security similarly discounts its value in part due to it being ‘likely less neglected than other problems’. And so on in their biosecurity, global health and anti-smoking profiles.

And anecdata-ly, as someone concerned that climate change might be a much more pressing issue than EAs typically credit, I’ve repeatedly had conversations with people who dismiss it as a cause with only some variation of the phrase ‘well, it’s not very neglected’.

We should change this type of thinking.

 


1. Throughout this essay, you should take it as read that by any change to a value I mean ‘counterfactually adjusted expected change, all else being equal’.

2. For example, if I gave $100 to AMF and $100,000 had been given before mine, my % increase in resources would be 0.1% - but if I gave $200 and $1,000,000 had been given before mine, it would be only 0.02%.

When I showed 80K an early draft of this essay, they pointed out that the original solvability factor, ‘% of the problem solved / % increase in resources’ is really elasticity of solvability, and thought that anyone who noticed the problem above would have recognised that. But since my criticisms are of the components of the factor, I don’t see how relabelling it would matter.

3. This is easier said than done, and I won’t attempt it here. In Defining Returns Functions and Funding Gaps, Max Dalton looks at a few possible categories of model for the slope of diminishing returns, and in the follow-up he and Owen Cotton-Barratt give some reasons for choosing between them. I suspect in some fields someone clever enough could throw the huge amounts of data on resources spent and outcomes generated at a machine learning program and get some surprisingly cogent output. If it were possible to do so for whole cause areas, it could yield huge insights into cause prioritisation.

4. In general this essay discusses individuals working for organisations (which we can extrapolate to individuals working on cause areas), but virtually identical reasoning could extend to organisations if there were any particularly irreplaceable/irreducible ones. Ie if you replaced all phrases like ‘person hired at an organisation’ with ‘organisation founded to work on a cause’, the argument would be essentially the same.

Throughout the essay I tend to treat individuals as unitary, to be either added to or subtracted from a cause as a whole, but this is for convenience and not strictly accurate. For people it’s a decent approximation: people’s work hours (especially effective altruists’) will rarely diverge by more than a factor of two.

For organisations, it would be more important to account for the difference, since one can be orders of magnitude larger than another.

5. HT Goodwin Gibbins for these examples. There has been political advocacy both to promote geoengineering, in order to mitigate the effects of climate change, and to prevent it, because of fears that it will reduce the political will to solve climate change or risk even worse harm. Nuclear power clashes are so ubiquitous they have their own Wikipedia pages: in the red corner, https://en.wikipedia.org/wiki/Anti-nuclear_movement, in the blue corner, https://en.wikipedia.org/wiki/Pro-nuclear_movement.

On 80K’s current definition, even if all this adverse advocacy had perfectly cancelled itself out, the problem of climate change would have become more solvable and, in broader EA parlance, less neglected.

6. This is closely related to Givewell’s room for more funding metric, which I’ve heard people equate with the idea of neglectedness, but which is functionally quite different.

7. Eg Will MacAskill’s discussion in the ‘Overhead Costs, CEO Pay and Confusions’ chapter of Doing Good Better, and the ideas discussed, especially by Michael Faye, in this talk.

 


Thanks to Nick Krempel (not a member of the EA community, but a supersmart guy) for a great deal of help thinking through this, and to Michael Plant (whom I originally forgot to thank!) for a lot of helpful feedback. Needless to say, errors and oversights are thoroughly mine.

Comments (18)

Hi Sacha, thanks for writing this, good food for thought. I'll get back to you properly next week after EA Global London (won't have any spare time for at least 4 days).

I just wanted to point out quickly that we do have personal fit in our framework and it can give you up to a 100x difference between causes: https://80000hours.org/articles/problem-framework/#how-to-assess-personal-fit

I also wonder if we should think about the effective resources dedicated to solving a problem using a Cobb-Douglas production function: Effective Resources = Funding ^ 0.5 * Talent ^ 0.5. That would help capture cases where an increase in funding without a commensurate increase in talent in the area has actually increased the marginal returns to an extra person working on the problem.
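For illustration (with invented numbers), that Cobb-Douglas suggestion implies the marginal value of an extra person rises when funding grows without a matching increase in talent:

```python
def effective_resources(funding, talent):
    """Cobb-Douglas combination suggested above: Funding^0.5 * Talent^0.5."""
    return funding ** 0.5 * talent ** 0.5

def marginal_person(funding, talent):
    """Extra effective resources from one more person, holding funding fixed."""
    return effective_resources(funding, talent + 1) - effective_resources(funding, talent)

# Doubling funding without adding talent raises the marginal value of an extra person:
print(round(marginal_person(funding=100, talent=25), 2))   # ~0.99
print(round(marginal_person(funding=200, talent=25), 2))   # ~1.40
```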

A few of the points made in this piece are similar to the points I make here: https://casparoesterheld.com/2017/06/25/complications-in-evaluating-neglectedness/

For example, the linked piece also argues that returns may diminish in a variety of different ways. In particular, it also argues that the returns diminish more slowly if the problem is big and that clustered value problems only produce benefits once the whole problem is solved.

That's a good piece - we thought about many of these issues when working on the framework, and I agree it's not all clearly explained on the page.

I know this is a late reply to an old comment, but it would be awesome to know to what extent you think you've addressed the issues raised. Or, if you didn't address them, what was your reason for discarding them?

I am working through the cause prio literature at the moment and I don't really feel that 80k addresses all (or most) of the substantial concerns raised. For instance, the assessments of climate change and AI safety are great examples where 80k's considerations can be quite easily attacked given conceptual difficulties in the underlying cause prio framework/argument.

This will mainly need to wait for a separate article or podcast, since it's a pretty complicated topic.

However, my quick impression is that the issues Caspar mentions are mentioned in the problem framework article.

I also agree that their effect is probably to narrow the difference between AI safety and climate change, however I don't think they flip the ordering, and our 'all considered' view of the difference between the two was already narrower than a naive application of the INT framework implies – for the reasons mentioned here – so I don't think it really alters our bottom lines (in part because we were already aware of these issues). I'm sorry, though, that we're not clearer that our 'all considered' views are different from 'naive INT'.

Thanks for the quick reply! 

Yeah, an article or podcast on the framework and possible pitfalls would be great. I generally like ITN for broad cause assessments (i.e., is this interesting to look at?) but the quantitative version that 80k uses does seem to have some serious limitations if one digs more deeply into the topic. I would be mostly concerned about people new to EA either having false confidence in numbers or being turned off by an overly simplistic approach. But you obviously have much more insight into people's reactions and I am looking forward to how you develop and improve on the content in the future!

Just read this. Nice point about future people.

It sounds like we agree on most of this, though perhaps with differing emphasis - my feeling is that neglectedness is such a weak heuristic that we should abandon it completely, and at the very least avoid making it a core part of the idea of effective altruism. Are there cases where you would still advocate using it?

I certainly don’t think, for example, that Givewell’s recommendations for combating various neglected tropical diseases are based (even in significant fraction) on them being neglected, that ACE recommend factory farming campaigns because so few animal welfare charities address them, or that FHI are so concerned about AI because other academics weren’t.

A few brief thoughts:

1) My understanding is that there are two different prioritization frameworks: the 80,000 Hours framework (for cause areas) and the GiveWell framework (for measurable interventions). The 80,000 Hours framework looks at how much good would be done by solving a problem (scale), how much of the problem would be solved by doubling the money/talent currently going towards it (tractability), and how much money/talent is currently going towards it (neglectedness). The GiveWell framework looks at whether there is strong evidence that an intervention has a positive impact (evidence of effectiveness), how much you have to spend on the intervention to have some standardized unit of impact such as saving a life (cost effectiveness), and whether additional donations would enable the charity to expand its implementation of the intervention (room for more funding). The GiveWell framework does not seem to explicitly consider any of the 80,000 Hours factors: if there is an intervention that is evidence-based and cost effective that is in need of more funding, then it does well under the GiveWell framework even if there are significant resources going towards the underlying problem (i.e. it is not neglected), the problem as a whole is relatively unimportant (i.e. it is small in scale), or the portion of the problem that would be solved is relatively small (i.e. it is not tractable in the 80,000 Hours sense). However, if an intervention performs poorly on all three 80,000 Hours factors (i.e. it is relatively unimportant, doubling the resources going towards it would not solve a substantial portion of it, and it would take significantly more resources to achieve that doubling), then it is unlikely to be considered cost effective by GiveWell.

2) Neglectedness is generally important for two reasons. The first is room for more funding: if there is already enough funding to implement an intervention to the maximum extent feasible given other constraints, then additional donations will either be diverted to a different intervention or will be spent on the intervention only after some delay. In both cases, there is a significant opportunity cost if there is a comparably effective alternative intervention with room for more funding. The second is diminishing marginal returns: an intervention will generally be implemented first in settings where it has a higher impact, which means that as more is spent on an intervention, the intervention tends to expand to settings where it is less effective.

3) GiveWell implicitly considers diminishing marginal returns as part of its cost effectiveness factor. For example, in its cost effectiveness estimate for bednets, it looks at baseline malaria rates in countries where the Against Malaria Foundation (AMF) operates (see A6 and A13 of this spreadsheet). As AMF expands to countries with lower malaria rates, GiveWell will increase its cost per life saved estimate for AMF (if other factors remain the same). In addition to its implicit consideration of diminishing marginal returns, GiveWell explicitly considers room for more funding.

4) By contrast, the 80,000 Hours framework does not appear to take into account diminishing marginal returns or room for more funding. To see this, consider the following example. Suppose that there is a village of people living in poverty. Any person in that village who receives cash transfers gains additional utility as follows:

Total Cash Received | Total Additional Utility

$1,000 | 5

$2,000 | 9

$3,000 | 12

$4,000 | 14

$5,000 | 15

Currently each person is receiving $2,000, which means that each of them is gaining 9 additional units of utility. You are considering transferring an extra $1,000 to each of them. Under the 80,000 Hours framework, you would start by calculating how much good would be done by solving the problem (scale). In this case, eliminating poverty would result in 15 additional units of utility per person. (I am including the impact of the first $2,000.) The next step is to look at the portion of the problem that would be solved by doubling spending (tractability). In this case, if spending were doubled to $4,000 per person, then each person would gain 5 additional units of utility (since they would go from having 9 additional units of utility to having 14 additional units of utility). Thus, doubling spending would result in 1/3 of the problem being solved (gaining 5 additional units of utility out of 15 possible additional units). The last step is to determine how far your money would go towards the doubling (neglectedness). In this case, donating $1,000 to each person would take you halfway to the doubling (which would require $2,000 per person). Multiplying the three factors together, you get that your donation of $1,000 to each person would result in a gain of 2.5 additional units of utility per person (15 additional units scale x 1/3 tractability x 1/2 neglectedness). However, your donation will actually result in a gain of 3 additional units of utility per person (12 additional units compared to 9 additional units).

Why does the 80,000 Hours formula give a lower number? Because it assumes that the average per-dollar impact of the money needed to double spending (in this case $2,000 per person) is as large as the average per-dollar impact of a smaller amount of money (in this case $1,000 per person). In other words, it assumes constant marginal returns (rather than diminishing marginal returns). (The same issue exists with respect to room for more funding: if in doubling spending, you fill the entire funding gap for the most effective intervention and have to spend the remainder of the money on a less effective intervention, then the average effectiveness of the money spent doubling will be less than the average effectiveness of money used to fill the funding gap.) A quick code check of this arithmetic follows after this list.

5) ACE's focus on factory farming is in fact based partly on neglectedness (see here and here).
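Here's a quick check of the arithmetic in point 4, as a sketch using the utility schedule from the table above:

```python
# Total additional utility per person as a function of total cash received
# (the schedule from the table in point 4).
utility = {0: 0, 1000: 5, 2000: 9, 3000: 12, 4000: 14, 5000: 15}

current_spend = 2000   # each person currently receives $2,000
my_transfer = 1000     # the extra $1,000 per person being considered

scale = utility[5000]                                                          # 15 units if fully solved
tractability = (utility[2 * current_spend] - utility[current_spend]) / scale  # (14 - 9) / 15 = 1/3
neglectedness = my_transfer / current_spend                                    # 1,000 / 2,000 = 1/2

framework_estimate = scale * tractability * neglectedness                      # 15 * 1/3 * 1/2 = 2.5
actual_gain = utility[current_spend + my_transfer] - utility[current_spend]    # 12 - 9 = 3

print(framework_estimate, actual_gain)
# 2.5 vs 3: the framework implicitly assumes constant (rather than diminishing)
# marginal returns across the whole doubling, so it underestimates the marginal
# $1,000 here.
```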

Note: I am not affiliated with GiveWell or 80,000 Hours, so nothing in this comment should be taken as an official description of their frameworks.

I'm going to write a relatively long comment making a relatively narrow objection to your post. Sorry about that, but I think it's a particularly illustrative point to make. I disagree with these two points against the neglectedness framing in particular:

  1. that it could divide by zero, and this is a serious problem
  2. that it splits a fraction into unnecessarily conditional parts (the "dragons in Westeros" problem).

Firstly, in response to (1): this is a legitimate illustration that the framework only applies where it applies, but it seems like in practice it isn't an obstacle. Specifically, the framing works well when your proposed addition is small relative to the existing resource, and it seems like that's true of most people in most situations. I'll come back to this later.

More importantly, I feel like (2) misses the point of what the framework was developed for. The goal is to get a better handle on what kinds of things to look for when evaluating causes. So the fact that the fraction simplifies to "good done per additional resource" is sort of trivial – that's the goal, the metric we're trying to optimize. It's hard to measure that directly, so the value added by the framework is the claim that certain conditionalizations of the metric (if that's the right word) yield questions that are easier to answer, and answers that are easier to compare.

That is, we write it as "scale times neglectedness times solvability" because we find empirically that those individual factors of the metric tend to be more predictable, comparable and measurable than the metric as a whole. The applicability of the framework is absolutely contingent on what we in-practice discover to be the important considerations when we try to evaluate a cause from scratch.

So while there's no fundamental reason why neglectedness, particularly as measured in the form of the ratio of percentage per resource, needs to be a part of your analysis, it just turns out to be the case that you can often find e.g. two different health interventions that are otherwise very comparable in how much good they do, but with very different ability to consume extra resources, and that drives a big difference in their attractiveness as causes to work on.

If ever you did want to evaluate a cause where the existing resources were zero, you could just as easily swap the bad cancellative denominator/numerator pair with another one, say the same thing in absolute instead of relative terms, and the rest of the model would more or less stand up. Whether that should be done in general for evaluating other causes as well is a judgement call about how these numbers vary in practice and what situations are most easily compared and contrasted.


I suggest summarising your reasoning as well as your conclusion in your tl;dr e.g. adding something like the following: "as neglectedness is not a useful proxy for impact w/r/t many causes, such as those where progress yields comparatively little or no ‘good done’ until everything is tied together at the end, or those where progress benefits significantly from economies of scale."

Ta Holly - done.

What do you think of neglectedness popping up in Owen's model when he was not trying to produce it? And his general logarithmic returns? I do agree with you that even if the cause area is not neglected, there could be cost effective interventions, as I argue here. But I would still say that within interventions, neglectedness is an important indicator of cost effectiveness.

Excellent to see some challenge to this framework! I was particularly pleased to see this line: "in the ‘major arguments against working on it’ section they present info like ‘the US government spends about $8 billion per year on direct climate change efforts’ as a negative in itself." I've often thought that 80k communicates about this oddly -- after all, for all we know, maybe there's room for $10 billion to be spent on climate change before returns start diminishing.

However, having looked through this, I'm not sure I've been convinced to update much against neglectedness. After all, if you clarify that the % changes in the formula are really meant to be elasticities (which you allude to in the footnotes, and which I agree isn't clear in the 80k article), then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that's also consistent with the elasticity view of neglectedness, isn't it?)

Why I still think I'm in favour of including neglectedness: because it matters for counterfactual impact. I.e. with a crowded area (e.g. climate change), it's more likely that if you had never gone into that area, someone else would have come along and achieved the same outcomes as you (or found out the same results as you). And this likelihood drops if the area is neglected.

So a claim that might usefully update my views looks something like this hypothetical dialogue:

  • Climate change has lots of people working on it (bad)

  • However there are sub-sectors of climate change work that are high impact and neglected (good)

  • But because lots of other people work on climate change, if you hadn't done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)

  • But [some argument that I haven't thought of!]

then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that's also consistent with the elasticity view of neglectedness, isn't it?)

Can you expand on this? I only know of elasticity from reading around it after Rob's response to the first draft of this essay, so if there's some significance to it that isn't captured in the equations given, I may not know it. If it's just a case of relabelling, I don't see how it would solve the problems with the equations, though - unused variables and divisions by zero seem fundamentally problematic.

But because lots of other people work on climate change, if you hadn't done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)

But [

this only holds to the extent that the field is proportionally less neglected - a priori you're less replaceable in an area that's 1/3 filled than one which is half filled, even if the former has a far higher absolute number of people working in it.

]

which is just point 6 from the 'Diminishing returns due to problem prioritisation' section applied. I think all the preceding points from the section could apply as well - eg the more rational people tend to work on (eg) AI-related fields, the better comparative chance you have of finding something importantly neglected within climate change (5), your awesome high-impact neglected climate change thing might turn out to be something which actually increases the value of subsequent work in the field (4), and so on.

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus. I just think it's one of a huge number of variables that do so, and a comparatively low-weighted one. As such, I can't see a good reason for EAs having chosen to focus on it over several others, let alone over trusting the estimates from even a shallow dive into what options there are for contributing to an area.

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus.

On reflection I don't think I believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.

The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on

This was an insightful comment for me, and the argument does seem correct at first glance. I guess the reason I'd still disagree is that I observe people thinking about within-cause choices very differently from how they think about across-cause choices, so they're more rational in one context than the other. A key part of effective altruism's value, it seems to me, is the recognition of this discrepancy and the argument that it should be eliminated.

in which case more people working on a field would indicate that it was more worth working on.

I think if you really believe people are rational in the way described, more people working on a field doesn't necessarily give you a clue as to whether more people should be working on it or not, because you expect the number of people working on it to roughly track the number of people who ought to work on it -- you think the people who are not working on it are also rational, so there must be circumstances under which that's correct, too.

Even if people pick interventions at random, the more people who enter a cause, the more the best interventions will get taken (by chance), so you still get diminishing returns even if people aren't strategically selecting.

To clarify, this only applies if everyone else is picking interventions at random, but you're still managing to pick the best remaining one (or at least better than chance).

It also seems to me like it applies across causes as well as within causes.
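A tiny simulation of that point (all numbers invented): even if earlier entrants pick interventions completely at random, the best intervention left for a strategic newcomer still gets worse in expectation as the cause fills up.

```python
import random

random.seed(0)

def best_remaining_after(n_entrants, n_interventions=100, trials=2000):
    """Average value of the best intervention left after n random entrants each take one."""
    total = 0.0
    for _ in range(trials):
        values = [random.random() for _ in range(n_interventions)]  # invented intervention values
        taken = set(random.sample(range(n_interventions), n_entrants))
        total += max(v for i, v in enumerate(values) if i not in taken)
    return total / trials

for n in (0, 50, 90, 99):
    print(n, round(best_remaining_after(n), 3))
# The expected best remaining value declines as more people enter, even with no
# strategic selection by anyone before you - diminishing returns by attrition.
```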