Comment author: MichaelPlant 24 April 2018 07:28:02PM 6 points [-]

I did think that while writing it, and it worried me too. Despite that, the thought doesn't strike me as totally stupid. If we think it's reasonable to talk about commitment devices in general, it seems like one we ought to talk about in particular when it comes to one's choice of partner. If you want to do X, finding someone who supports you toward your goal of achieving X seems rather helpful, whereas finding a partner who will discourage you from achieving X seems unhelpful. Nevertheless, I accept one of the obvious warning signs of being in a cult is that the cult leaders tell you to date only people inside the cult lest you get 'corrupted'...

Comment author: Halstead 24 April 2018 07:35:08PM 2 points [-]

haha yeah that was my take. I think the best norm to propagate is "go out with whoever makes you happy".

Comment author: MichaelPlant 24 April 2018 06:33:40PM 1 point [-]

Ah, that's great. Thanks very much for that. I think "dating a non-EA" is a particularly dangerous(/negative impact?) phenomenon we should probably be talking about more. I also know someone, A, whose non-EA-inclined partner, B, was really unhappy that A wasn't aiming to get a high-paying professional job, and it really wrenched A from focusing on trying to do the most useful stuff. Part of the problem was that B's family wanted B to be dating a high earner.

Comment author: Halstead 24 April 2018 06:46:46PM 12 points [-]

This comment comes across as a tad cult-y.

Comment author: MichaelPlant 16 April 2018 08:57:32AM 2 points [-]

The last two sentences of this come across as pretty curt to me.

Yeah, on re-reading, the "How is that not a one off of $1bn?" does seem snippy. Okay. Fair cop.

Comment author: Halstead 18 April 2018 09:40:03AM 1 point [-]

I didn't see it as all that snippy. I think downvotes should be reserved for more severe tonal misdemeanours than this.

There's a bit of a difficult balance between necessary policing of tone and engagement with substantive arguments. I think as a rule people tend to talk about tone too much in arguments, to the detriment of talking about the substance.

Comment author: MichaelPlant 15 April 2018 10:42:41PM *  1 point [-]

As I have noted before on this forum, most people advancing person-affecting views tend to opt for asymmetric versions where future bad lives matter but future good lives don't. If you're temporally neutral and aggregative, then you end up with a moral theory which is practically exactly the same as negative utilitarianism (priorities one two three four etc. are preventing future suffering).

If someone did take an asymmetric view and really committed to it, I would think they should probably be in favour of increasing existential risk, as that removes the possibility of future suffering, rather than trying to reduce existential risk. I suppose you might have some (not obviously plausible) story about how humanity's survival decreases future suffering: you could think humans will remove misery in surviving non-humans if humans dodge existential risk, but that this misery wouldn't be averted if humans went extinct and other life kept living.

Comment author: Halstead 16 April 2018 09:02:49AM 2 points [-]

I think the argument is as you describe in the last sentence, though I haven't engaged much with the NUs on this.

Comment author: Halstead 15 April 2018 04:05:09PM *  3 points [-]

Three cheers for this. Two ways in which the post might understate the case for person-affecting views focusing on ex risk:

  1. Most actions to reduce ex risk would also reduce catastrophic non-ex risks, e.g. efforts to reduce the risk of an existential attack by an engineered pathogen would also reduce the risk of, say, >100m people dying in an attack by an engineered pathogen. I would expect the benefits from reducing GCRs as a side-effect of reducing ex risks to be significantly larger than the benefits accruing from preventing ex risks, because the probability of GCRs is much, much greater. I wouldn't be that surprised if that increased the EV of ex risk reduction by an order of magnitude, thereby propelling ex risk reduction further into AMF territory (a rough numerical sketch of this follows the list).

  2. As I have noted before on this forum, most people advancing person-affecting views tend to opt for asymmetric versions where future bad lives matter but future good lives don't. If you're temporally neutral and aggregative, then you end up with a moral theory which is practically exactly the same as negative utilitarianism (priorities one two three four etc. are preventing future suffering).
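To make the order-of-magnitude claim in point 1 concrete, here is a rough Python sketch. Every number in it is a hypothetical placeholder chosen only to show the structure of the calculation, not an estimate from the post or from the paper.

```python
# Hypothetical illustration of point 1: on a person-affecting view, only harm to
# presently existing people counts, so side-effect reductions in global
# catastrophic risk (GCR) can dominate the expected value of an ex risk
# intervention if GCRs are much more probable than existential catastrophes.
# All numbers below are made up for illustration.

p_xrisk_averted = 1e-6     # assumed chance the intervention averts an existential catastrophe
p_gcr_averted = 1e-3       # assumed chance it averts a >100m-death catastrophe as a side effect

present_people = 7.5e9     # roughly the number of people alive today
gcr_deaths = 1e8           # deaths in the assumed non-existential catastrophe

ev_ex_risk_only = p_xrisk_averted * present_people                       # 7,500 expected deaths averted
ev_with_gcr_side_effect = ev_ex_risk_only + p_gcr_averted * gcr_deaths   # 107,500 expected deaths averted

print(ev_with_gcr_side_effect / ev_ex_risk_only)  # ~14x, i.e. roughly an order of magnitude
```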

It is in general good to reassert that there are numerous reasons to focus on ex risk aside from the total view, including neglectedness, political short-termism, the global public goods aspect, the context of the technologies we are developing, the tendency to neglect rare events, etc.

Comment author: AviN 03 April 2018 03:20:59PM *  1 point [-]

I mentioned this in a previous comment, but in case readers missed it:

  • The increase in flock size from December 2015 to December 2017 is far better explained by the US egg industry's recovery from an avian influenza outbreak than by cage-free pledges.

  • Norwood and Lusk (2011) estimate based on price elasticity data that, on the margin, a reduction in demand for 1 conventional egg causes a reduction in supply of 0.91 conventional eggs. But correspondingly, an increase in demand for 1 cage-free egg should lead to an increase in supply of less than 1 cage-free egg. So it's unclear why we should expect the transition to cage-free to increase the number of layer hens. If anything, the increase in prices caused by the transition should reduce the number of layer hens.

Comment author: Halstead 05 April 2018 06:40:25PM 0 points [-]

Thanks for this. The true cost-effectiveness estimate should still be reduced by whatever the displacement effect is, even if it isn't large. If we expect 9% of a conventional egg to still be consumed for each unit of demand switched to cage-free eggs, then we should adjust the impact of the campaign downward by whatever the welfare effect of 9% of a conventional egg is (a rough sketch of this adjustment is below).
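A minimal sketch of the downward adjustment being described, assuming the 0.91 displacement figure from the Norwood and Lusk estimate cited above; the campaign size and welfare weights are hypothetical placeholders, not real data.

```python
# Sketch of adjusting a cage-free campaign's impact for the ~9% of conventional
# production that isn't displaced when demand switches (per the 0.91 figure cited
# above). Campaign size and welfare weights are hypothetical placeholders.

displacement = 0.91                       # conventional eggs no longer supplied per egg of demand switched
residual_conventional = 1 - displacement  # ~0.09 of a conventional egg still consumed per egg switched

hen_years_switched = 1_000_000            # hypothetical hen-years moved from cages to cage-free
harm_conventional = 1.0                   # assumed suffering per hen-year in a conventional cage (normalised)
harm_cage_free = 0.6                      # assumed suffering per hen-year in a cage-free system

# Naive benefit: count the full welfare difference for every hen-year switched.
naive_benefit = hen_years_switched * (harm_conventional - harm_cage_free)

# Adjusted benefit: subtract the welfare cost of the residual conventional production.
adjusted_benefit = naive_benefit - hen_years_switched * residual_conventional * harm_conventional

print(naive_benefit, adjusted_benefit)  # roughly 400,000 and 310,000 (hypothetical units of suffering averted)
```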

Comment author: weeatquince  (EA Profile) 25 March 2018 10:05:04AM *  0 points [-]

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely)

Good point. Agreed. Had not considered this.

I tend to deflate their significance because SAI has natural analogues... volcanoes ... industrial emissions.

This seems like flawed thinking to me. Data from natural analogues should be built into predictive SAI models. Accepting that model uncertainty is a factor worth considering means questioning whether these analogues are actually good predictors of the full effects of SAI.

(Note: the LHC also had natural analogues in atmospheric cosmic rays; I believe this was accounted for in FHI's work on the matter.)

-

I think the main thing that model uncertainty suggests is that mitigation or less extreme forms of geoengineering should be prioritised much more.

Comment author: Halstead 25 March 2018 10:15:14AM 0 points [-]

I agree that mitigation should be prioritised.

SAI has advantages that other approaches don't have, which is why it serves as insurance against model uncertainty about the sensitivity of the climate to GHGs. Carbon dioxide removal is much slower-acting, will be incredibly expensive, and has other costs. The other main proposed form of solar geoengineering involves tropospheric cooling by brightening clouds etc. Uncertainties about this are probably greater than for SAI.

Comment author: weeatquince  (EA Profile) 23 March 2018 06:50:45PM 1 point [-]

My very limited understanding of this topic is that climate models, especially of unusual phenomena, are highly uncertain, and therefore there is some chance that our models are incorrect. This means that SAI could go horribly wrong, not have the intended effects, or make the climate spin out of control in some catastrophic way.

The chance of this might be small, but if you are worried about existential risks it should definitely be considered. (In fact, I thought this was the main x-risk associated with SAI and similar grand geo-engineering exercises.)

I admit I have not read your article (only this post) but I was surprised this was not mentioned and I wanted to flag the matter.

For a similar case see the work of FHI researchers Toby Ord and Anders Sandberg on the risks of the Large Hadron Collider (LHC) here: https://arxiv.org/abs/0810.5515. I am reasonably sure that SAI models are a lot more uncertain than the LHC physics.

Comment author: Halstead 23 March 2018 07:35:32PM *  1 point [-]

I discuss this in the paper under the heading of 'unknown risks'. I tend to deflate their significance because SAI has natural analogues - volcanoes, which haven't set off such catastrophic spirals. The massive 1991 Pinatubo eruption reduced global temperatures by roughly 0.5 degrees. There is also already an enormous amount of tropospheric cooling due to industrial emissions of sulphur and other particulates. The effects of this could be very substantial - (from memory) at most cancelling out up to half of the total warming effect of all CO2 ever emitted. Due to concerns about air pollution, we are now reducing emissions of these tropospheric aerosols. This could have a very substantial warming effect.

Concerns about model uncertainty cut in both directions and I think the preponderance of probabilities favours SAI (provided it can be governed safely). Estimates of the sensitivity of the climate to CO2 are also beset by model uncertainty. The main worry is the unprecedented warming effect from CO2 having unexpected runaway effects on the ecosystem. It is clear that SAI would allow us to reduce global temperatures and so would on average reduce the risk of heat-induced tipping points or runaway processes. Moreover, SAI is controllable on tight timescales - we get a response to our action within weeks - allowing us to respond if something weird starts happening as a result of GHGs or of SAI. The downside risk associated with model uncertainty about climate sensitivity to GHGs is much greater than that associated with the effects of SAI, in my opinion. SAI is insurance against this model uncertainty.

Comment author: casebash 21 March 2018 01:00:23AM *  2 points [-]

I'd really appreciate a sentence or two on each of the following questions:

  • What is termination shock risk?
  • What is the main concern with unilateral deployment?
  • What is the worry re: interstate conflict?

Comment author: Halstead 23 March 2018 05:20:46PM 1 point [-]

Termination shock: the worry that after SAI is deployed, it is for some reason stopped suddenly, leading to rapid and large warming. Unilateral deployment: the worry that a state or other actor would deploy SAI unilaterally in a way that would damage other states.

The concern I have about interstate conflict is this: SAI would have to be deployed for decades, up to a century, to provide benefits. Over this period, there would need to be global agreement on SAI - a technology that would have divergent regional climatic effects. If there are adverse weather events (caused by SAI or not), victims would be angry and this could heighten interstate tension. Generally, maintaining agreement on something like that for decades seems like it would be really hard.

Comment author: MichaelPlant 22 March 2018 05:12:13PM 0 points [-]

weird account of the meaning of 'benefitting others'.

The account might be uncommon in ordinary language, but most philosophers accept that creating lives doesn't benefit the created person. I'm at least being consistent, and I don't think that consistency is objectionable. Calling the view weird is unhelpful.

But suppose people typically think it's odd to claim you're benefiting someone by creating them. Then the stated definition of what EA is about will be at least somewhat misleading to them when you explain EA in greater detail. Consistent with other things I've written on this forum, I think EA should take avoiding being misleading very seriously.

I'm not claiming this is a massive point, it just stuck out to me.

Comment author: Halstead 22 March 2018 05:58:27PM 0 points [-]

Agreed, weirdness accusation retracted.

I suppose there are two ways of securing neutrality - letting people pick their own meaning of 'doing good', and letting people pick their own meaning of 'benefiting others'.
