Comment author: RomeoStevens 29 January 2018 08:55:44PM *  7 points [-]

This is a big part of why I find the 'EA is talent constrained not funding constrained' meme to be a bit silly. The obvious counter is to spend money learning how to convert money into talent. I haven't heard of anyone focusing on this problem as a core area, but if it's an ongoing bottleneck then it 'should' be scoring high on effective actions.

There is a lot of outside view research on this that could be collected and analyzed.

Comment author: Arepo 30 January 2018 08:37:05PM 2 points [-]

I pretty much agree with this - though I would add that you could also spend the money on just attracting existing talent. I doubt the Venn diagram of 'people who would plausibly be the best employee for any given EA job' and 'people who would seriously be interested in it given a relatively low EA wage' always forms a perfect circle.

Comment author: DavidMoss 30 January 2018 06:33:33PM 3 points [-]

I didn't read the post as meaning either "scale is bad if it is the only metric that is used" or "Scale, neglectedness, solvability is only one model for prioritisation. It's useful to have multiple different models...."

When looking at scale in a scale, neglectedness, tractability framework, it's true that the other factors can offset the influence of scale: e.g. if something is large in scale but intractable, the intractability counts against the cause being considered and at least somewhat offsets the consideration that the cause is large in scale. But this doesn't touch on the point this post makes, which is that, looking at scale itself as a consideration, the 'total scale' may be of little or no relevance to the evaluation of the cause; rather, 'scale' is only of value up to a given bottleneck and of no value beyond that.

I almost never see people talking of scale in this way in the context of a scale, neglectedness, tractability framework, i.e. dividing up the total scale into tractable bits, less tractable bits and totally intractable bits. Rather, I more typically see people assigning some points for scale, evaluating tractability independently and assigning some points for that, and evaluating neglectedness independently and assigning some points for that.
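To make the contrast concrete, here is a toy sketch (my own illustration, not anything from the post or comment) of the difference between scoring total scale independently and scoring scale only up to a bottleneck:

```python
def naive_scale_points(total_scale):
    """Scoring rule 1: points proportional to total scale,
    evaluated independently of tractability."""
    return total_scale

def bottlenecked_scale_points(total_scale, bottleneck):
    """Scoring rule 2: scale only counts up to the bottleneck;
    anything beyond it adds nothing to the cause's score."""
    return min(total_scale, bottleneck)

# A cause affecting 1,000,000 people, where current approaches can
# only ever reach 10,000 of them:
print(naive_scale_points(1_000_000))                 # 1000000
print(bottlenecked_scale_points(1_000_000, 10_000))  # 10000
```

Under rule 2, the remaining 990,000 people are simply irrelevant to the evaluation until something changes the bottleneck, which is the sense in which 'total scale' can carry no information.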

Comment author: Arepo 30 January 2018 08:27:33PM 1 point [-]

I read this the same way as Max. The issue of cost to solve (eg) all cases of malaria is really tractability, not scale. Scale is how many people would be helped (and to what degree) by doing so. Divide scale by cost, and you have a sensible-looking cost-benefit analysis, one that is sensitive to the 'size and intensity of the problem' (ie scale).

I do think there are scale-related issues with drawing lines between 'problems', though. If a marginal contribution to malaria nets now achieves twice as much good as the same marginal contribution would in 5 years, are combatting malaria now and combatting malaria in five years 'different problems'? Or do you just try to average out the cost-benefit ratio between somewhat arbitrary points (eg now and when the last case of malaria is prevented/cured)? But I also think the models Max and Owen have written about on the CEA blog do a decent job of dealing with this kind of question.

Comment author: Greg_Colbourn 30 January 2018 06:23:38PM 1 point [-]

Regarding potentially tax-deductible items mentioned in section 5, accommodation, or anything regarded as being for personal use, is usually not included. It would be regarded as payment in kind and therefore taxable (and would also make the tax reporting more complicated!). This is in the UK at least. E.g. https://www.gov.uk/expenses-and-benefits-accommodation/whats-exempt

Comment author: Arepo 30 January 2018 08:11:38PM *  2 points [-]

I had a feeling that might be the case. That page still leaves some possible alternatives, though, eg this exemption:

an employer is usually expected to provide accommodation for people doing that type of work (for example a manager living above a pub, or a vicar looking after a parish)

It seems unlikely, but worth looking at whether developing a sufficient culture of EA orgs offering accommodation might satisfy the 'usually expected' criterion.

It also seems a bit vague about what would happen if the EA org actually owned the accommodation rather than reimbursing rent as an expense, or if a wealthy EA would-be donor did and let employees (potentially of multiple EA orgs) stay in it for little or no money (and, if so, in the latter case, whether 'wealthy would-be donor' could potentially be a conglomerate a la EA funds).

There seems to be at least some precedent for this in the UK, in that some schools and universities offer free accommodation to their staff, which doesn't seem to come under any of the exemptions listed on the page.

Obviously other countries with an EA presence might have more/less flexibility around this sort of thing. But if you have an organisation giving accommodation to 10 employees in a major developed-world city, it seems like you'd be saving (in the UK) 20% tax on something in the order of £800 per month per employee, ie roughly £1,900 per employee or £19,000 across the organisation per year, which seems like a far better return than investing the money would get (not to mention, if it's offered as a benefit of the job, being essentially doubly invested: once on the tax savings, once on the normal value of owning a property).
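Taking the figures in the paragraph above at face value (all of them illustrative assumptions: £800/month rent, basic-rate 20% tax, 10 employees), the arithmetic works out as follows:

```python
# Rough sketch of the tax-saving arithmetic above. All figures are
# illustrative assumptions, not tax advice.
MONTHLY_RENT_GBP = 800     # assumed accommodation cost per employee
MARGINAL_TAX_RATE = 0.20   # assumed UK basic rate
EMPLOYEES = 10

annual_rent = MONTHLY_RENT_GBP * 12                    # 9600 per employee
saving_per_employee = annual_rent * MARGINAL_TAX_RATE  # 1920.0
total_annual_saving = saving_per_employee * EMPLOYEES  # 19200.0

print(saving_per_employee, total_annual_saving)
```

At higher marginal rates, or including National Insurance, the saving would be correspondingly larger; the exact figure depends entirely on the assumptions plugged in.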

So while I'm far from confident that it would be ultimately workable, it seems like there would be high EV in an EA with tax law experience looking into it in each country with an EA org.

19

The almighty Hive will

I’ve been wondering whether EA can’t find some strategic benefits from a) a peer-to-peer trust economy, or b) rational coordination towards various goals. These seem like simple ideas, but I haven’t seen them publicly discussed.  I’ll start from the related and oversimplifying assumptions that  a) there’s a wholly fungible pool of... Read More
Comment author: hollymorgan 05 November 2017 01:46:05AM 2 points [-]

I suggest summarising your reasoning as well as your conclusion in your tl;dr e.g. adding something like the following: "as neglectedness is not a useful proxy for impact w/r/t many causes, such as those where progress yields comparatively little or no ‘good done’ until everything is tied together at the end, or those where progress benefits significantly from economies of scale."

Comment author: Arepo 06 November 2017 08:16:15PM 1 point [-]

Ta Holly - done.

Comment author: caspar42 02 November 2017 09:48:43PM 4 points [-]

A few of the points made in this piece are similar to the points I make here: https://casparoesterheld.com/2017/06/25/complications-in-evaluating-neglectedness/

For example, the linked piece also argues that returns may diminish in a variety of different ways. In particular, it also argues that the returns diminish more slowly if the problem is big and that clustered value problems only produce benefits once the whole problem is solved.

Comment author: Arepo 06 November 2017 08:04:13PM *  0 points [-]

Just read this. Nice point about future people.

It sounds like we agree on most of this, though perhaps with differing emphasis - my feeling is that neglectedness is such a weak heuristic that we should abandon it completely, and at the very least avoid making it a core part of the idea of effective altruism. Are there cases where you would still advocate using it?

Comment author: Arepo 06 November 2017 07:36:07PM *  1 point [-]

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus.

On reflection I don't think I even believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.

Comment author: Sanjay 02 November 2017 02:05:29AM *  1 point [-]

Excellent to see some challenge to this framework! I was particularly pleased to see this line: "in the ‘major arguments against working on it’ section they present info like ‘the US government spends about $8 billion per year on direct climate change efforts’ as a negative in itself." I've often thought that 80k communicates about this oddly -- after all, for all we know, maybe there's room for $10 billion to be spent on climate change before returns start diminishing.

However, having looked through this, I'm not sure I've been convinced to update much against neglectedness. After all, if you clarify that the % changes in the formula are really meant to be elasticities (which you allude to in the footnotes, and which I agree isn't clear in the 80k article), then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that's also consistent with the elasticity view of neglectedness, isn't it?)
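For what it's worth, the elasticity framing can be made concrete with a toy model (my own assumption for illustration, not something the article or this comment commits to): suppose total good done grows logarithmically with the resources R already devoted to an area, so the marginal value of one extra unit of resources scales as 1/R.

```python
def marginal_value(resources, k=1.0):
    """Marginal good done per extra unit of resources,
    assuming total good done = k * ln(resources)."""
    return k / resources

# Under this toy model, a neglected area (R = 10) offers ten times
# the marginal value of a crowded one (R = 100):
print(marginal_value(10))   # 0.1
print(marginal_value(100))  # 0.01
```

Part of Arepo's objection, as I read it, is that returns needn't diminish like this at all - e.g. for 'clustered value' problems nothing is gained until the whole problem is solved, in which case this functional form is simply the wrong model.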

Why I still think I'm in favour of including neglectedness: because it matters for counterfactual impact. I.e. with a crowded area (e.g. climate change), it's more likely that if you had never gone into that area, someone else would have come along and achieved the same outcomes as you (or found out the same results as you). And this likelihood drops if the area is neglected.

So a claim that might usefully update my views looks something like this hypothetical dialogue:

  • Climate change has lots of people working on it (bad)

  • However there are sub-sectors of climate change work that are high impact and neglected (good)

  • But because lots of other people work on climate change, if you hadn't done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)

  • But [some argument that I haven't thought of!]

Comment author: Arepo 03 November 2017 04:54:57PM *  0 points [-]

then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that's also consistent with the elasticity view of neglectedness, isn't it?)

Can you expand on this? I only know of elasticity from reading around it after Rob's comments in response to the first draft of this essay, so if there's some significance to it that isn't captured in the equations given, I maybe don't know it. If it's just a case of relabelling, I don't see how it would solve the problems with the equations, though - unused variables and divisions by zero seem fundamentally problematic.

But because lots of other people work on climate change, if you hadn't done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)

But [

this only holds to the extent that the field is proportionally less neglected - a priori you're less replaceable in an area that's 1/3 filled than one which is half filled, even if the former has a far higher absolute number of people working in it.

]

which is just point 6 from the 'Diminishing returns due to problem prioritisation' section applied. I think all the preceding points from that section could apply as well - eg the more that rational people tend to work on fields like AI, the better comparative chance you have of finding something importantly neglected within climate change (5); your awesome high-impact neglected climate change thing might turn out to be something which actually increases the value of subsequent work in the field (4); and so on.

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus. I just think it's one of a huge number of variables that do so, and a comparatively low-weighted one. As such, I can't see a good reason for EAs having chosen to focus on it over several others, let alone over trusting the estimates from even a shallow dive into what options there are for contributing to an area.

8

Against neglectedness

  tl;dr 80 000 Hours’ cause priorities framework focuses too heavily on neglectedness at the expense of individuals’ traits. It's inapplicable in causes where progress yields comparatively little or no ‘good done’ until everything is tied together at the end, is insensitive to the slope of diminishing returns from which... Read More
Comment author: Buck 28 October 2017 12:06:41AM 8 points [-]

I might also prompt people to say what they didn't like with the other person's vote, rather than just voting anonymously (and snarkily) with karma points.

The problem is that this takes a lot of time, and people with good judgement are more likely to have a high opportunity cost of time; you want to make it as cheap as possible for people with good judgement to discourage bad comments; I think that the current downvoting system is working pretty well for that purpose. (One suggestion that's better than yours is to only allow a subset of people (perhaps those with over 500 karma) to downvote; Hacker News for example does this.)

Comment author: Arepo 28 October 2017 09:58:46AM 2 points [-]

Please let's not give people any more incentives to game the karma system than they already have.
