
Job: Country Manager needed for Germany at Founders Pledge

Founders Pledge is looking for someone to lead our growth and community in Berlin. This is a great opportunity for someone who wants to raise a huge amount of money for effective charities and build an unrivalled network in the Berlin startup scene. To apply please send a short email...
Comment author: Arepo 03 April 2016 12:00:38AM 0 points [-]

If it's not a force for good, and if you believe investment banking and similar roles damage the economy, that makes earning to give via them look more attractive.

Comment author: Robert_Wiblin 21 February 2016 02:53:09PM *  4 points [-]

I'll start with the most important point:

"Perhaps the global economy is advancing fast enough or faster than enough to keep pace with the increasing difficulty of switching resource-bases, but that feels like a potential house of cards - if something badly damages the global economy (say, a resource irreplaceably running out, or a project to replace one unexpectedly failing), the gulf between several other depleting resources and their possible replacements could effectively widen."

Yes, I acknowledge that is a risk. Personally I have never found a persuasive case that this will probably happen for any particular pressing need we have. But, as I say, the future is uncertain and even if everyone thinks it's unlikely, we could be wrong. So work to make a bigger buffer does have value.

But the question I am concerned with is whether it's the most valuable problem to work on. The considerations above, and current prices for such goods make me think the answer is no.

"The possible cascade from this is a GCR in itself, and one that numerous people seem to consider a serious one. I feel like we'd be foolish to dismiss the large number of scientifically literate doomsayers based on non-expert speculation."

Certainly there are many natural scientists who have that attitude. I used to place more stock in their pronouncements. However, three things reduced my trust:

  • Noticing that market prices - a collective judgement of millions of informed people in these industries - seemed to contradict their concerns. Of course anyone could be wrong, but I place more weight on market prices than individual natural scientists who lack a lot of relevant knowledge.
  • Many of these natural scientists show an astonishing lack of understanding of economics when they comment on these things. This made me think that while they may be good at identifying potential problems, they cannot be trusted to judge our processes for solving them, because academic specialisation means they are barely even aware of them.
  • Looking into specific cases and trends (e.g. food yields or predictions of peak oil) and coming away unconvinced the data supports pessimism.

I think the pessimistic take here is a contrarian bet. It may be a bet worth making, but it has to be compared to other contrarian bets that could be more compelling.

"it seems far too superficial to justify turning people away from working on the subject if that's where their skills and interests lie."

My comments in the piece are merely that I don't encourage people to work on it, and that it is the best fit for some people's skills.

"In particular in seems unclear that economic-philosophical research into GCR and X-risk has a greater chance of actually lowering such outcomes than scientific and technological research into technologies that will reliably do so once/if they're available."

The contrast I intended to draw there is with research into non-resource shortage related GCRs - particularly dangers from new technologies.

"Yes, people can switch from one resource to another as each runs low, but it would be very surprising if in almost all cases the switch wasn't to a higher-hanging fruit. People naturally tend to grab the most accessible/valuable resources first."

It's true that the fruit we will switch to are higher now. But technological progress is constantly lowering the metaphorical tree. In some cases the fruit will be higher at the future time, in other cases it will be lower. My claim is that I don't see a reason for it to be higher overall, in expectation.

Comment author: Arepo 22 February 2016 06:30:16PM *  2 points [-]

"But the question I am concerned with is whether it's the most valuable problem to work on. The considerations above, and current prices for such goods make me think the answer is no."

Sure. I mean, we basically agree, except that I feel much lower confidence (and anxiety at the confidence with which non-specialists make these pronouncements). Going into research in general is something that I've mostly felt more pessimistic about as an EA approach than 80K are, but if someone already partway down the path to a career based on resource depletion showed promise and passion in it, I'd think it plausible that continuing was optimal for them.

"Certainly there are many natural scientists who have that attitude. I used to place more stock in their pronouncements. However, three things reduced my trust:"

  • Noticing that market prices - a collective judgement of millions of informed people in these industries - seemed to contradict their concerns. Of course anyone could be wrong, but I place more weight on market prices than individual natural scientists who lack a lot of relevant knowledge.

I would probably trust the market over a single scientist, but I would trust the collective judgement of a field of scientists over the market. I don't see what mechanism could make the market a reliable predictor of anything here, if not its reflection of the field's scientific understanding, with individual randomness mostly drowned out.
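
A minimal sketch of the mechanism in question, with all numbers invented for illustration: if each trader's estimate were the true value plus independent noise, averaging many of them would drive the error down, which is the only sense in which "individual randomness" gets drowned out - and the caveat in the final comment is exactly the failure mode at issue in this thread.

```python
import random

def noisy_estimate(true_value: float, noise_sd: float) -> float:
    """One trader's estimate: the true value plus independent Gaussian noise."""
    return true_value + random.gauss(0, noise_sd)

random.seed(0)
TRUE_VALUE = 100.0  # hypothetical 'true' long-run scarcity price
NOISE_SD = 30.0     # hypothetical spread of individual errors

single_expert = noisy_estimate(TRUE_VALUE, NOISE_SD)
market = sum(noisy_estimate(TRUE_VALUE, NOISE_SD) for _ in range(10_000)) / 10_000

print(f"single estimate error: {abs(single_expert - TRUE_VALUE):.2f}")
print(f"aggregate error:       {abs(market - TRUE_VALUE):.2f}")
# With independent errors, the aggregate's standard error shrinks as 1/sqrt(N).
# If errors are correlated - a shared blind spot across traders - averaging
# does not remove them, which is the scenario the scientists' case relies on.
```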

  • Many of these natural scientists show an astonishing lack of understanding of economics when they comment on these things. This made me think that while they may be good at identifying potential problems, they cannot be trusted to judge our processes for solving them, because academic specialisation means they are barely even aware of them.

I've seen the same, but my own sense is that the reverse problem - economists having an astonishing lack of understanding of science - is much more acute. Also, I find scientists more scrupulous about the limits of their predictive ability. To give specific examples, two of which are by figures close to the EA movement: Stephen Landsburg informing Stephen Hawking that his understanding of physics is '90% of the way there'; Robin Hanson arguing, without a number in sight, that 'Most farm animals prefer living to dying; they do not want to commit suicide' and therefore that vegetarianism is harmful; and Bjorn Lomborg's head-on collision with apparently the entire field of climate science in The Skeptical Environmentalist.

  • Looking into specific cases and trends (e.g. food yields or predictions of peak oil) and coming away unconvinced the data supports pessimism.

I can't opine on this, except that I still feel greater epistemic humility is worthwhile. If your conclusions are right, it seems worth trying to get them published in a prominent scientific journal (or, if not by you, then by an academic who shares your views - and perhaps hasn't already alienated the journal in question). Even if you didn't manage it, one would hope you'd get decent feedback on what the reviewers perceived as the flaws in your argument.

"It's true that the fruit we will switch to are higher now. But technological progress is constantly lowering the metaphorical tree. In some cases the fruit will be higher at the future time, in other cases it will be lower. My claim is that I don't see a reason for it to be higher overall, in expectation."

Perhaps, but I don't feel like you've acknowledged the problem that ongoing technological progress relies on prior technological progress, such that this could turn out to be a house of cards. As such, it needn't necessarily be resource depletion that brings it crashing down - any GCR could have the same effect. So work on resource depletion provides some insurance against such a multiply-catastrophic scenario.

Comment author: Arepo 22 February 2016 05:53:38PM 1 point [-]

(reposted from a slightly divergent Facebook discussion)

I sometimes wonder if the 'neglectedness criterion' isn't overstated in current EA thought. Is there any solid evidence that a cause's lack of neglectedness makes marginal contributions to it massively worse?

Marginal impact is a product of a number of factors, of which the (log of the?) number of people working on a cause is one; but the bigger the area, the thinner that number will be stretched across any subfield - and resource depletion is an enormous category, so it seems unlikely that the number of people working on any specific area of it will exceed the number of people working on core EA issues by more than a couple of orders of magnitude. Even if that equated to a marginal effectiveness multiplier of 0.01 (which seems far too pessimistic to me), we're used to seeing such multipliers become virtually irrelevant when comparing between causes. I doubt many X-riskers would feel deterred if you told them their chances of reducing X-risk were comparably nerfed.
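
A worked version of that multiplier, purely as a sketch - the logarithmic-returns assumption and both headcounts are invented for illustration:

```python
def marginal_impact(n_workers: int, scale: float = 1.0) -> float:
    """Marginal impact of one extra worker, assuming total impact grows like
    scale * log(n); its derivative, and hence the marginal impact, is scale / n."""
    return scale / n_workers

core_ea_cause = marginal_impact(1_000)    # hypothetical headcount: a 'core' EA cause
depletion_sub = marginal_impact(100_000)  # hypothetical headcount: a depletion subfield

print(f"crowdedness multiplier: {depletion_sub / core_ea_cause:.2f}")  # -> 0.01
# Two orders of magnitude more workers yield a 0.01x marginal multiplier - which,
# per the comment above, is routinely swamped when the causes themselves differ
# in expected value by many orders of magnitude.
```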

Michael Wiebe commented on my first reply:

"No altruism needed here; profit-seeking firms will solve this problem."

That seems like begging the question. So long as the gap between a depleting resource and its replacement is sufficiently small, they probably will do so, but if for some reason it widens sufficiently, profit-seeking firms will have little incentive or even ability to bridge it.

I'm thinking of the current example of in vitro meat as a possible analogue - once the technology for it is cracked, the companies that produce it will be able to make a killing undercutting naturally grown meat. But even now, with prototypes appearing, it seems too distant to entice more than a couple of companies to actively pursue it. Five years ago, virtually none were - all the research on it was being done by a small number of academics. And that is a relatively tractable technology that we've (I think) always had a pretty clear road map to developing.

Comment author: Arepo 21 February 2016 10:59:21AM 1 point [-]

'Julian Simon, the incorrigible optimist, won the bet - with all five becoming cheaper in inflation-adjusted terms.'

I hope he paid Stanislav Petrov off for that.

Less glibly, I lean towards agreeing with the argument, but very weakly - it seems far too superficial to justify turning people away from working on the subject if that's where their skills and interests lie.

In particular it seems unclear that economic-philosophical research into GCR and X-risk has a greater chance of actually lowering such outcomes than scientific and technological research into technologies that will reliably do so once/if they're available.

Yes, people can switch from one resource to another as each runs low, but it would be very surprising if in almost all cases the switch wasn't to a higher-hanging fruit. People naturally tend to grab the most accessible/valuable resources first.

Perhaps the global economy is advancing fast enough or faster than enough to keep pace with the increasing difficulty of switching resource-bases, but that feels like a potential house of cards - if something badly damages the global economy (say, a resource irreplaceably running out, or a project to replace one unexpectedly failing), the gulf between several other depleting resources and their possible replacements could effectively widen. The possible cascade from this is a GCR in itself, and one that numerous people seem to consider a serious one. I feel like we'd be foolish to dismiss the large number of scientifically literate doomsayers based on non-expert speculation.

Comment author: Arepo 06 February 2016 01:45:20PM 0 points [-]

Slight quibble:

"This introduces another factor we need to control for. Yes, if you really are better than the alternative CEO you might sell more cigarettes, and yes the board clearly thought you were the best choice for CEO - but what if they're wrong? We need to adjust by the probability that you are indeed the best choice for CEO, conditional on the board thinking you were."

"This seems like a pretty hard probability to estimate. My guess is it is quite low - I would expect many potential applicants, and a relatively poor ability to discriminate between them - but in lieu of actual analysis let's just say 50%."

You seem to shift here between p(Best among applicants) and p(Better than the guy who would have been hired in lieu of you). Guesstimating 50% for the former sounds reasonable-ish to me, but I would guess it's substantially higher for the latter.

Maybe this comes out in the wash, since the difference between you and your actual replacement is smaller in expectation than the difference between you and the best among all the applicants.
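
A back-of-envelope version of the adjustment under discussion - a sketch only, every number a placeholder: the expected counterfactual harm of taking the job is the gap between you and the candidate the board would otherwise have hired, discounted by the probability that you really are better, conditional on the board preferring you.

```python
def expected_extra_harm(p_better_given_hired: float,
                        gap_if_better: float,
                        gap_if_worse: float = 0.0) -> float:
    """Expected extra cigarettes sold (arbitrary units) from you taking the job,
    relative to the runner-up the board would otherwise have hired."""
    return (p_better_given_hired * gap_if_better
            + (1 - p_better_given_hired) * gap_if_worse)

# The post's placeholder: a 50% chance the board correctly picked the best applicant.
print(f"{expected_extra_harm(0.5, gap_if_better=10.0):.1f}")  # -> 5.0

# Arepo's point: P(better than the actual runner-up) is plausibly higher, but the
# expected gap is correspondingly smaller - so it may come out in the wash.
print(f"{expected_extra_harm(0.8, gap_if_better=4.0):.1f}")   # -> 3.2
```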

Comment author: Denkenberger 15 July 2015 03:41:36AM *  5 points [-]

The recent talk of not having good giving opportunities is confusing to me. Maybe there are limited opportunities to save a life at $3000, but there should be many opportunities to save a life at $10,000 (UNICEF? Oxfam?). This is still far better than developed-country charities (where it costs millions of dollars to save a life), so it is still a great opportunity. As for global catastrophic risk, it is true that some areas receive a lot of funding. But then there is the Global Catastrophic Risk Institute, whose integrated assessment - looking across all the global catastrophic risks to prioritize interventions - is largely unfunded (disclosure: I am a research associate at the Global Catastrophic Risk Institute). And in general, if we took seriously the value of future generations, we should be funding global catastrophic risk reduction orders of magnitude more than we currently do.
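
To make the orders of magnitude explicit - a sketch using the comment's own figures, with the "millions of dollars" claim pinned to an illustrative $2m:

```python
# Cost per life saved (USD); the first two figures are from the comment above,
# the third is an illustrative stand-in for "millions of dollars".
cost_per_life = {
    "best global health opportunities": 3_000,
    "wider opportunities (UNICEF? Oxfam?)": 10_000,
    "typical developed-country charity": 2_000_000,
}

baseline = cost_per_life["typical developed-country charity"]
for name, cost in cost_per_life.items():
    print(f"{name}: ~{baseline / cost:.0f}x the baseline's cost-effectiveness")
```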

Comment author: Arepo 19 July 2015 09:46:57PM 0 points [-]

+1

The idea that we could become saturated with money seems bizarre. It's not like GiveWell's top charities have run out of RfMF (room for more funding), and even if they do, and all the top organisations are genuinely more talent- than money-constrained, it doesn't follow that there's a better option than putting money towards them.

You could potentially fund scholarships for people learning the skills they need - not that I would expect this to be a top-tier use of the money, but it seems likely to be as good a use of your resources or better than either a) applying or studying for a highly 'talent-constrained' job which, if it's that hard, there's little reason to expect yourself to be competent for, or b) sitting around and waiting for someone to show up.

Comment author: Tom_Ash  (EA Profile) 30 January 2015 01:28:32PM 4 points [-]

David Moss brings up the question of why EAs are disproportionately consequentialist in the Facebook thread:

"This kinda begs the question of what consequentialism is good for and why it seems to have an affinity for EA. A couple of suggestions: consequentialism is great for i) mandating (currently) counter-intuitive approaches (like becoming really rich to help reduce poverty) and ii) being really demanding relative to (currently) standard levels of demandingess (i.e. give away money until it stops being useful; not give away £5 a month if that doesn't really detract from your happiness in any way). These benefits to consequentialism are overturned in cases where i) your desired moral outcome is not counter-intuitive (if people are already inclined to think you should never harm innocent creatures or should always be a good ally, then consequentialism just makes people have to shoulder a, potentially very difficult, burden of proof, to show that their preferred action is actually helpful in this case), ii) if people were inclined to think that something is something that you should never do, as a rule, then consequentialism just makes people more open to potentially trading-off and doing things they otherwise would never do, in the right circumstances."

These two factors may partly explain why EAs are disproportionately consequentialist, but I'm not convinced they're the main explanation. I don't know what that explanation is, but I think other factors include that:

a) consequentialism is a contrarian, counter-intuitive moral position, and EA can be too

b) consequentialism goes along with a quantitative mindset

c) many EAs were recruited through people's social circles, and the seeds for these were often consequentialist or philosophical (studying philosophy being a risk factor for consequentialism)

Comment author: Arepo 31 January 2015 06:22:43PM 0 points [-]

"These two factors may partly explain why EAs are disproportionately consequentialist, but I'm not convinced they're the main explanation. I don't know what that explanation is..."

I would guess at a simpler explanation: that (virtually all actually supported) forms of consequentialism imply EA, whereas other moral theories, if they imply anything relevant at all, tend to imply it's optional.

One exception to consequentialisms implying EA is, for example, Randian Objectivism. And I doubt it's a coincidence that the EA movement contains a very small number of (I know of 0) Randian Objectivists ;)

Comment author: Arepo 31 January 2015 06:12:51PM *  0 points [-]

"Another reason is that consequentialism may be false. The importance of this possibility depends upon the probability we assign to it, but it must carry some weight unless it can be rejected absolutely, which is only plausible on the most extreme forms of moral subjectivism."

I don't think this is true. It's perfectly possible to find some views (e.g. 'the set of all nonconsequentialist moral views') incoherent enough as to be impossible to consider (or at least, no more so than the negation of various axioms of applied maths and logic would be), but some others to be conceivable.

I basically adhere to that view (in fact thinking it of the, albeit poorly defined, set of 'non-utilitarian moral views'); I don't know (or much care) if people would describe me as a moral realist, but I doubt anyone would accuse me of being an extreme moral subjectivist!

Btw, I'm glad to see this post, and sad that it hasn't been upvoted more. I have nothing against the more emotion-oriented content that seems to dominate the top-voted page on this forum, but it's of little interest to me. I hope we begin to see more posts examining the logic and science behind EA.

Comment author: Arepo 12 January 2015 06:16:29PM 0 points [-]

'Existential opportunity'?

Comment author: Arepo 12 January 2015 06:42:41PM 0 points [-]

Everything I can think of sounds mercantile: 'Existential profit', 'Existential gain'.
