
Comment author: Arepo 06 November 2017 07:36:07PM *  1 point [-]

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus.

On reflection I don't think I even believe this. The same assumption of rationality that says that people will tend to pick the best problems in a cause area to work on suggests that (a priori) they would tend to pick the best cause area to work on, in which case more people working on a field would indicate that it was more worth working on.

Comment author: Sanjay 02 November 2017 02:05:29AM *  1 point [-]

Excellent to see some challenge to this framework! I was particularly pleased to see this line: "in the ‘major arguments against working on it’ section they present info like ‘the US government spends about $8 billion per year on direct climate change efforts’ as a negative in itself." I've often thought that 80k communicates about this oddly -- after all, for all we know, maybe there's room for $10 billion to be spent on climate change before returns start diminishing.

However, having looked through this, I'm not sure I've been convinced to update much against neglectedness. After all, if you clarify that the % changes in the formula are really meant to be elasticities (which you allude to in the footnotes, and which I agree isn't clear in the 80k article), then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that's also consistent with the elasticity view of neglectedness, isn't it?)
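Concretely, the factored formula can be read as a chain of elasticities whose intermediate terms cancel; a toy sketch of this reading (all numbers below are made up for illustration, not 80,000 Hours' estimates):

```python
# Toy restatement of the 80k-style factored formula, read as elasticities.
# The three illustrative inputs are assumptions of mine, not 80k's figures.

def marginal_good(scale, solvability, neglectedness):
    """Chain of elasticities: each numerator cancels the next denominator.

    scale:         good done per % of problem solved
    solvability:   % of problem solved per % increase in resources
    neglectedness: % increase in resources per extra person
    """
    return scale * solvability * neglectedness

# A crowded field: one extra person barely changes total resources,
# so the neglectedness term is tiny.
crowded = marginal_good(scale=1000.0, solvability=0.5, neglectedness=0.001)

# A neglected field with the same scale and solvability.
neglected = marginal_good(scale=1000.0, solvability=0.5, neglectedness=0.1)

print(crowded, neglected)
```

On this reading, diminishing marginal returns show up as the solvability and neglectedness elasticities shrinking as a field fills up, rather than as a separate correction.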

Why I still think I'm in favour of including neglectedness: because it matters for counterfactual impact. I.e. with a crowded area (e.g. climate change), it's more likely that if you had never gone into that area, someone else would have come along and achieved the same outcomes as you (or found out the same results as you). And this likelihood drops if the area is neglected.

So a claim that might usefully update my views looks something like this hypothetical dialogue:

  • Climate change has lots of people working on it (bad)

  • However there are sub-sectors of climate change work that are high impact and neglected (good)

  • But because lots of other people work on climate change, if you hadn't done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)

  • But [some argument that I haven't thought of!]

Comment author: Arepo 03 November 2017 04:54:57PM *  0 points [-]

then surely lots of the problems actually go away? (i.e. thinking about diminishing marginal returns is important and valid, but that's also consistent with the elasticity view of neglectedness, isn't it?)

Can you expand on this? I only know of elasticity from reading around it after Rob raised it in response to the first draft of this essay, so if there's some significance to it that isn't captured in the equations given, I may not know it. If it's just a case of relabelling, I don't see how it would solve the problems with the equations, though - unused variables and divisions by zero seem fundamentally problematic.

But because lots of other people work on climate change, if you hadn't done your awesome high-impact neglected climate change thing, someone else probably would have since there are so many people working in something adjacent (bad)

But [

this only holds to the extent that the field is proportionally less neglected - a priori you're less replaceable in an area that's 1/3 filled than one which is half filled, even if the former has a far higher absolute number of people working in it.

]

which is just point 6 from the 'Diminishing returns due to problem prioritisation' section applied. I think all the preceding points from that section could apply as well - e.g. the more that rational people tend to work on (say) AI-related fields, the better comparative chance you have of finding something importantly neglected within climate change (5); your awesome high-impact neglected climate change thing might turn out to be something which actually increases the value of subsequent work in the field (4); and so on.
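The proportional-fill point can be put as a toy model (assuming, purely for illustration, that your replaceability tracks the fraction of a field's useful niches already taken, not its headcount):

```python
# Toy model of replaceability: a field that is 1/3 'filled' but large in
# absolute terms, versus one that is 1/2 filled but small.  The modelling
# assumption (mine, for illustration) is that the chance someone else
# would have done your project equals the filled fraction.

def counterfactual_share(filled_fraction):
    """Fraction of your impact that survives replaceability."""
    return 1.0 - filled_fraction

big_field = {"workers": 30000, "filled_fraction": 1 / 3}
small_field = {"workers": 300, "filled_fraction": 1 / 2}

big_share = counterfactual_share(big_field["filled_fraction"])      # 2/3
small_share = counterfactual_share(small_field["filled_fraction"])  # 1/2

# Despite 100x the headcount, the big field leaves you *less* replaceable
# under this assumption.
assert big_share > small_share
```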

To be clear, I do think neglectedness will roughly track the value of entering a field, ceteris literally being paribus. I just think it's one of a huge number of variables that do so, and a comparatively low-weighted one. As such, I can't see a good reason for EAs having chosen to focus on it over several others, let alone over trusting the estimates from even a shallow dive into what options there are for contributing to an area.


Against neglectedness

  tl;dr 80 000 Hours’ cause priorities framework focuses too heavily on neglectedness at the expense of individuals’ traits. It's inapplicable in causes where progress yields comparatively little or no ‘good done’ until everything is tied together at the end, is insensitive to the slope of diminishing returns from which... Read More
Comment author: Buck 28 October 2017 12:06:41AM 8 points [-]

I might also prompt people to say what they didn't like with the other person's vote, rather than just voting anonymously (and snarkily) with karma points.

The problem is that this takes a lot of time, and people with good judgement are more likely to have a high opportunity cost of time; you want to make it as cheap as possible for people with good judgement to discourage bad comments; I think that the current downvoting system is working pretty well for that purpose. (One suggestion that's better than yours is to only allow a subset of people (perhaps those with over 500 karma) to downvote; Hacker News for example does this.)

Comment author: Arepo 28 October 2017 09:58:46AM 2 points [-]

Please let's not give people any more incentives to game the karma system than they already have.


Job: Country Manager needed for Germany at Founders Pledge

Founders Pledge is looking for someone to lead our growth and community in Berlin. This is a great opportunity for someone who wants to raise a huge amount of money for effective charities and build an unrivalled network in the Berlin startup scene. To apply please send a short email... Read More
Comment author: Arepo 03 April 2016 12:00:38AM 0 points [-]

If it's not a force for good, and if you believe investment banking and similar roles damage the economy, that makes earning to give via them look more attractive.

Comment author: Robert_Wiblin 21 February 2016 02:53:09PM *  4 points [-]

I'll start with the most important first:

"Perhaps the global economy is advancing fast enough or faster than enough to keep pace with the increasing difficulty of switching resource-bases, but that feels like a potential house of cards - if something badly damages the global economy (say, a resource irreplaceably running out, or a project to replace one unexpectedly failing), the gulf between several other depleting resources and their possible replacements could effectively widen."

Yes, I acknowledge that is a risk. Personally I have never found a persuasive case that this will probably happen for any particular pressing need we have. But, as I say, the future is uncertain and even if everyone thinks it's unlikely, we could be wrong. So work to make a bigger buffer does have value.

But the question I am concerned with is whether it's the most valuable problem to work on. The considerations above, and current prices for such goods make me think the answer is no.

"The possible cascade from this is a GCR in itself, and one that numerous people seem to consider a serious one. I feel like we'd be foolish to dismiss the large number of scientifically literate doomsayers based on non-expert speculation."

Certainly there are many natural scientists who have that attitude. I used to place more stock in their pronouncements. However, three things reduced my trust:

  • Noticing that market prices - a collective judgement of millions of informed people in these industries - seemed to contradict their concerns. Of course anyone could be wrong, but I place more weight on market prices than individual natural scientists who lack a lot of relevant knowledge.
  • Many of these natural scientists show an astonishing lack of understanding of economics when they comment on these things. This made me think that while they may be good at identifying potential problems, they cannot be trusted to judge our processes for solving them, because academic specialisation means they are barely even aware of them.
  • Looking into specific cases and trends (e.g. food yields or predictions of peak oil) and coming away unconvinced the data supports pessimism.

I think the pessimistic take here is a contrarian bet. It may be a bet worth making, but it has to be compared to other contrarian bets that could be more compelling.

"it seems far too superficial to justify turning people away from working on the subject if that's where their skills and interests lie."

My comments in the piece are that I merely don't encourage people to work on it, and that it is the best fit for some people's skills.

"In particular it seems unclear that economic-philosophical research into GCR and X-risk has a greater chance of actually lowering such outcomes than scientific and technological research into technologies that will reliably do so once/if they're available."

The contrast I intended to draw there is with research into non-resource shortage related GCRs - particularly dangers from new technologies.

"Yes, people can switch from one resource to another as each runs low, but it would be very surprising if in almost all cases the switch wasn't to a higher-hanging fruit. People naturally tend to grab the most accessible/valuable resources first."

It's true that the fruit we will switch to are higher now. But technological progress is constantly lowering the metaphorical tree. In some cases the fruit will be higher at the future time, in other cases it will be lower. My claim is that I don't see a reason for it to be higher overall, in expectation.

Comment author: Arepo 22 February 2016 06:30:16PM *  2 points [-]

But the question I am concerned with is whether it's the most valuable problem to work on. The considerations above, and current prices for such goods make me think the answer is no.

Sure. I mean, we basically agree, except that I feel much lower confidence (and anxiety at the confidence with which non-specialists make these pronouncements). Going into research in general is something I've been more pessimistic about as an EA approach than 80K are, but if someone already partway down the path to a career based on resource depletion showed promise and passion in it, I'd think it plausible it was optimal for them to continue.

Certainly there are many natural scientists who have that attitude. I used to place more stock in their pronouncements. However, three things reduced my trust:

  • Noticing that market prices - a collective judgement of millions of informed people in these industries - seemed to contradict their concerns. Of course anyone could be wrong, but I place more weight on market prices than individual natural scientists who lack a lot of relevant knowledge.

I would probably trust the market over a single scientist, but I would trust the collective judgement of a field of scientists over the market. I don't see what mechanism is supposed to make the market a reliable predictor of anything if not a reflection of the scientific understanding of the field with individual randomness mostly drowned out.

  • Many of these natural scientists show an astonishing lack of understanding of economics when they comment on these things. This made me think that while they may be good at identifying potential problems, they cannot be trusted to judge our processes for solving them, because academic specialisation means they are barely even aware of them.

I've seen the same, but my own sense is that the reverse problem - economists having an astonishing lack of understanding of science - is much more acute. Also, I find scientists more scrupulous about the limits of their predictive ability. To give specific examples, two of which involve figures close to the EA movement: Stephen Landsburg informing Stephen Hawking that his understanding of physics is '90% of the way there'; Robin Hanson arguing, without a number in sight, that 'Most farm animals prefer living to dying; they do not want to commit suicide' and therefore that vegetarianism is harmful; and Bjorn Lomborg's head-on collision with apparently the entire field of climate science in The Skeptical Environmentalist.

  • Looking into specific cases and trends (e.g. food yields or predictions of peak oil) and coming away unconvinced the data supports pessimism.

I can't opine on this, except that I still feel greater epistemic humility is worthwhile. If your conclusions are right, it seems worth trying to get them published in a prominent scientific journal (or if not by you then by an academic who shares your views - and perhaps hasn't already alienated the journal in question) - even if you don't manage, one would hope you'd get decent feedback on what they perceived as the flaws in your argument.

It's true that the fruit we will switch to are higher now. But technological progress is constantly lowering the metaphorical tree. In some cases the fruit will be higher at the future time, in other cases it will be lower. My claim is that I don't see a reason for it to be higher overall, in expectation.

Perhaps, but I don't feel like you've acknowledged the problem that technological progress relies on technological progress, such that this could turn out to be a house of cards. As such, it needn't necessarily be resource depletion that brings it crashing down - any GCR could have the same effect. So work on resource depletion provides some insurance against such a multiply-catastrophic scenario.

Comment author: Arepo 22 February 2016 05:53:38PM 1 point [-]

(reposted from slightly divergent Facebook discussion)

I sometimes wonder if the 'neglectedness criterion' isn't overstated in current EA thought. Is there any solid evidence that it makes marginal contributions to a cause massively worse?

Marginal impact is a product of a number of factors of which the (log of the?) number of people working on it is one, but the bigger the area the thinner that number will be stretched in any subfield - and resource depletion is an enormous category, so it seems unlikely that the number of people working on any specific area of it will exceed the number of people working on core EA issues by more than a couple of orders of magnitude. Even if that equated to a marginal effectiveness multiplier of 0.01 (which seems far too pessimistic to me), we're used to seeing such multipliers become virtually irrelevant when comparing between causes. I doubt if many X-riskers would feel deterred if you told them their chances of reducing X-risk was comparably nerfed.
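The arithmetic behind a 0.01 multiplier can be made explicit under an assumed logarithmic-returns model, where the total good from n workers is k·log(n):

```python
# Under log returns, total good from n workers is k*log(n), so the
# marginal value of one extra worker is roughly k/n.  Two orders of
# magnitude more workers therefore cut marginal impact by ~100x.
# The model and the headcounts are illustrative assumptions.

def marginal_value(n, k=1.0):
    # derivative of k*log(n) with respect to n
    return k / n

core_ea = marginal_value(1000)
crowded_subfield = marginal_value(100000)  # 100x the people

multiplier = crowded_subfield / core_ea
print(multiplier)  # 0.01 under this model
```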

Michael Wiebe commented on my first reply:

No altruism needed here; profit-seeking firms will solve this problem.

That seems like begging the question. So long as the gap between a depleting resource and its replacement is sufficiently small, they probably will do so, but if for some reason it widens sufficiently, profit-seeking firms will have little incentive or even ability to bridge it.

I'm thinking of the current example of in vitro meat as a possible analogue - once the technology for that's cracked, the companies that produce it will be able to make a killing undercutting naturally grown meat. But even now, with prototypes appearing, it seems too distant to entice more than a couple of companies to actively pursue it. Five years ago, virtually none were - all the research on it was being done by a small number of academics. And that is a relatively tractable technology that we've (I think) always had a pretty clear road map to developing.

Comment author: Arepo 21 February 2016 10:59:21AM 1 point [-]

'Julian Simon, the incorrigible optimist, won the bet - with all five becoming cheaper in inflation adjusted terms.'

I hope he paid Stanislav Petrov off for that.

Less glibly, I lean towards agreeing with the argument, but very weakly - it seems far too superficial to justify turning people away from working on the subject if that's where their skills and interests lie.

In particular it seems unclear that economic-philosophical research into GCR and X-risk has a greater chance of actually lowering such outcomes than scientific and technological research into technologies that will reliably do so once/if they're available.

Yes, people can switch from one resource to another as each runs low, but it would be very surprising if in almost all cases the switch wasn't to a higher-hanging fruit. People naturally tend to grab the most accessible/valuable resources first.

Perhaps the global economy is advancing fast enough or faster than enough to keep pace with the increasing difficulty of switching resource-bases, but that feels like a potential house of cards - if something badly damages the global economy (say, a resource irreplaceably running out, or a project to replace one unexpectedly failing), the gulf between several other depleting resources and their possible replacements could effectively widen. The possible cascade from this is a GCR in itself, and one that numerous people seem to consider a serious one. I feel like we'd be foolish to dismiss the large number of scientifically literate doomsayers based on non-expert speculation.

Comment author: Arepo 06 February 2016 01:45:20PM 0 points [-]

Slight quibble:

This introduces another factor we need to control for. Yes, if you really are better than the alternative CEO you might sell more cigarettes, and yes the board clearly thought you were the best choice for CEO - but what if they're wrong? We need to adjust by the probability that you are indeed the best choice for CEO, conditional on the board thinking you were.

This seems like a pretty hard probability to estimate. My guess is it is quite low - I would expect many potential applicants, and a relatively poor ability to discriminate between them - but in lieu of actual analysis let's just say 50%.

You seem to shift here between p(Best among applicants) and p(Better than the guy who would have been hired in lieu of you). Guesstimating 50% for the former sounds reasonable-ish to me, but I would guess it's substantially higher for the latter.

Maybe this comes out in the wash, since the difference between you and your actual replacement is smaller in expectation than the difference between you and the best among all the applicants.
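A quick Monte Carlo sketch distinguishes the two probabilities (normal abilities and noisy board scores are purely illustrative modelling assumptions):

```python
import random

# Monte Carlo sketch of the two probabilities being conflated:
#   p_best   = P(the hire is genuinely the best applicant)
#   p_better = P(the hire is better than whoever would have been hired
#                instead, i.e. the runner-up in the board's eyes)
# Normal abilities and additive Gaussian score noise are assumptions
# made for illustration only.

def simulate(n_applicants=20, noise=1.0, trials=20000, seed=0):
    rng = random.Random(seed)
    best_count = 0
    better_count = 0
    for _ in range(trials):
        ability = [rng.gauss(0, 1) for _ in range(n_applicants)]
        # Board observes ability plus noise and hires the top scorer.
        score = [a + rng.gauss(0, noise) for a in ability]
        order = sorted(range(n_applicants), key=lambda i: score[i],
                       reverse=True)
        hired, runner_up = order[0], order[1]
        if ability[hired] == max(ability):
            best_count += 1
        if ability[hired] > ability[runner_up]:
            better_count += 1
    return best_count / trials, better_count / trials

p_best, p_better = simulate()
print(p_best, p_better)
```

Being best among all applicants entails being better than the runner-up, so p_better is at least p_best by construction; the interesting question is how large the gap is for realistic noise levels.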
