Comment author: Joey 24 April 2018 04:41:24PM 23 points [-]

So I want to be pretty careful about going into details, but I can mix some stories together to make a plausible-sounding story based on what I have heard. Please keep in mind this story is a fiction based on a composite of case studies I've witnessed, not a real example of any particular person.

Say Alice is an EA. She learns about it in her first year of college. She starts by attending an EA event or two and eventually ends up being a member of her university chapter and reading the EA Forum pretty heavily. She takes the GWWC pledge and a year later she takes a summer internship at an EA organization. During this time she identifies strongly with the EA movement and considers it one of her top priorities.

Sadly, while Alice is away at her internship her chapter suffers, and when she gets back she hits a particularly rough year of school. Thinking mainly about her long-term impact, she prioritizes school over setting the chapter back up. The silver lining is that at the end of this rough year she starts a relationship. The person is smart and well suited to her, but does not share her charitable interests. Over time she stops reading the EA content she used to, and the chapter never gets restarted. After her degree ends she takes a job in consulting that she says will give her career capital, but she has a sense her heart is not as into EA as it once was. She knows a big factor is that her boyfriend's family would approve of a more conventional job than a charity-focused one, plus she is confident she can donate and have some impact that way.

Her first few paychecks she rationalizes as needing to move out and get established; the next few go to building up a safe six-month runway. The donations never happen. There's always some reason or another to put them off, and EA now sits low on the priority list, just a thing she did in college, like playing a sport. Alice ends up donating a fairly small amount to effective charities (a little over 1%). Her involvement was at its peak when she was in college, and she knows her college self would be disappointed. Each choice made sense at the time. Many of them even follow traditional EA advice, but the end result is that Alice does not really feel she is an EA anymore; she has many other, stronger identities. In this story, with different recommendations from the EA movement and different choices from Alice, she could have ended up earning to give and donating a large percentage long term, or working at an EA org long term, but instead she "value drifted".

Comment author: MichaelPlant 24 April 2018 06:33:40PM 3 points [-]

Ah, that's great. Thanks very much for that. I think "dating a non-EA" is a particularly dangerous(/negative impact?) phenomenon we should probably be talking about more. I also know someone, A, whose non-EA-inclined partner, B, was really unhappy that A wasn't aiming to get a high-paying professional job, and it really wrenched A away from focusing on trying to do the most useful stuff. Part of the problem was that B's family wanted B's partner to be a high earner.

Comment author: MichaelPlant 23 April 2018 10:30:57PM *  16 points [-]

Thanks very much for doing this.

Could you possibly say more (i.e. as much as you can) about why people left? Moving city, leaving university or starting a family don't have to stop someone being an EA, so more explanation seems needed. For instance, "X moved city" by itself doesn't really explain what happened, whereas "X moved city, didn't know any EAs and lost motivation without group support" or "Y started a family and realised they wanted a higher quality of life than they could find working for an EA org" do. Putting this in dating terms, one reason people sometimes give when they break up with someone is "I'm moving to city Z and it would never work", but that's not quite a sufficient/honest reason, which would be "I'm moving to Z and this will make things sufficiently hard that I want to stop. If I liked you a lot more I'd suggest we do long distance; but I don't like you that much, so we're breaking up". I'd want to know whether people stopped 'believing' in EA, kept thinking it was important but lost motivation, or something else.

Equally, I'd be interested if you did a survey of the people who stayed and asked why they stayed, to see what the differences were. If the explanations from the remainers and the leavers are consistent with each other, then they don't provide any explanatory power.

I'd add the (usual) proviso that people don't really know why they do what they do and self-reports are to be treated with some suspicion. It's generally more useful to see what people do rather than listen to what they say.

Finally, it would be interesting to compare these retention ratios to other things - religion, use of a given tech product, dieting, etc. It strikes me that 50% retention after 5 years might be pretty good in some sense, though I agree it's also worrying put another way.

Comment author: SiebeRozendal 17 April 2018 02:55:20PM 1 point [-]

You're implicitly using the life-comparative account of the badness of death - the badness of your death is equal to the amount of happiness you would have had if you'd lived.

I have heard surprisingly many non-philosophers argue for the Epicurean view: that death is not bad for the individual because there's no one for it to be bad for. They would argue that death is only bad because others will grieve and suffer other negative consequences. However, on this view a painless extinction event would not be bad at all.

This is all to say that one's conception of the badness of death indeed matters a lot for the negative value of extinction.

Comment author: MichaelPlant 17 April 2018 05:37:00PM *  0 points [-]

Ah, good point! Yes, I didn't mention this for some reason, although I should have. Indeed, if (like me) you're sympathetic to person-affecting views in population ethics and Epicureanism about the badness of death, then the only reason to reduce X-risk would be to reduce the suffering of currently living people during their lifetimes. In short, X-risk would not be much of a priority on this combination of views, but that's basically pretty obvious if you hold it.

Comment author: Alex_Barry 16 April 2018 12:15:18AM 0 points [-]

Also, not sure why my comment was downvoted. I wasn't being rude (or, I think, stupid) and I think it's unhelpful to downvote without explanation as it just looks petty and feels unfriendly.

I didn't downvote, but:

In which case I'm not understanding your model. The 'Cost per life year' box is $1bn/EV. How is that not a one off of $1bn? What have I missed?

The last two sentences of this come across as pretty curt to me. I think there is a wide range in how people interpret things like these, so it is probably just a bit of a communication style mismatch. (I think I have noticed myself having a similar reaction to a few of your comments before, where I don't think you meant any rudeness.)

I think it's unhelpful to downvote without explanation as it just looks petty and feels unfriendly.

I agree with this on some level, but I'm not sure I want there to be uneven costs to upvoting/downvoting content. I think there is also an unfriendliness vs. enforcing standards tradeoff where the marginal decisions will typically look petty.

Comment author: MichaelPlant 16 April 2018 08:57:32AM 2 points [-]

The last two sentences of this come across as pretty curt to me.

Yeah, on re-reading, the "How is that not a one off of $1bn?" does seem snippy. Okay. Fair cop.

Comment author: Halstead 15 April 2018 04:05:09PM *  3 points [-]

Three cheers for this. Two ways in which the post might understate the case, on person-affecting views, for focusing on ex risk:

  1. Most actions to reduce ex risk would also reduce catastrophic non-ex risks, e.g. efforts to reduce the risk of an existential-level attack with an engineered pathogen would also reduce the risk of, say, >100m people dying in an attack with an engineered pathogen. I would expect the benefits from reducing GCRs as a side-effect of reducing ex risks to be significantly larger than the benefits from preventing ex risks themselves, because the probability of GCRs is much, much greater. I wouldn't be that surprised if this increased the EV of ex risk reduction by an order of magnitude, thereby propelling ex risk reduction further into AMF territory.

  2. As I have noted before on this forum, most people advancing person-affecting views tend to opt for asymmetric versions where future bad lives matter but future good lives don't. If you're temporally neutral and aggregative, then you end up with a moral theory which is practically identical to negative utilitarianism (priorities one, two, three, four, etc. are preventing future suffering).

It is in general good to reassert that there are numerous reasons to focus on ex risk aside from the total view, including neglectedness, political short-termism, the global public goods aspect, the context of the technologies we are developing, the tendency to neglect rare events, etc.

Comment author: MichaelPlant 15 April 2018 10:42:41PM *  1 point [-]

As I have noted before on this forum, most people advancing person-affecting views tend to opt for asymmetric versions where future bad lives matter but future good lives don't. If you're temporally neutral and aggregative, then you end up with a moral theory which is practically identical to negative utilitarianism (priorities one, two, three, four, etc. are preventing future suffering).

If someone did take an asymmetric view and really committed to it, I would think you should probably be in favour of increasing existential risk, as that removes the possibility of future suffering, rather than trying to reduce it. I suppose you might have some (not obviously plausible) story about how humanity's survival decreases future suffering: you could think humans will remove misery among surviving non-humans if humans dodge existential risk, but that this misery wouldn't be averted if humans went extinct and other life kept living.

Comment author: Gregory_Lewis 14 April 2018 12:07:15AM 0 points [-]

The EV in question is the reduction in x-risk for a single year, not across the century. I'll change the wording to make this clearer.

Comment author: MichaelPlant 14 April 2018 11:20:33AM 0 points [-]

Ah. So the EV is for a single year. But I still only see $1bn. So your number is "this is the cost per life year saved if we spend the money this year and it causes an instantaneous reduction in X-risk for this year"?

So your figure is the cost-effectiveness of reducing instantaneous X-risk at Tn, where Tn is now, whenever now is. But it's not the cost-effectiveness of that reduction at Tf, where Tf is some year in the future, because the further in the future it occurs, the lower the EV is on PAA. If I'm wondering, from the perspective of T0, what the cost-effectiveness would be of spending $1bn in 10 years' time and causing a reduction at T10, then on your model I increase the mean age by 10 years to 48 and the average cost per life year becomes $12k. From the perspective of T10, reducing X-risk in the way you describe at T10 is, again, $9k.

By contrast, for totalists the calculations would be the same (excepting inflation, etc.).
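To make the arithmetic being disputed here concrete, here is a minimal sketch in Python of the person-affecting calculation as I understand it. Every input is an illustrative assumption of mine rather than a number from Gregory's actual model: a one-off $1bn spend, a hypothetical count of expected lives saved from the single-year risk reduction, and a flat life expectancy of 86, so that only the remaining years of presently existing people count. It shows why pushing the mean age of beneficiaries up by 10 years raises the cost per life year, but it will not reproduce the exact $9k and $12k figures above, which come from the original model.

```python
# Illustrative sketch of a person-affecting cost-per-life-year calculation.
# All inputs are invented for illustration; they are not the original model's.

SPEND = 1e9  # one-off spend in dollars

def cost_per_life_year(expected_lives_saved: float,
                       mean_age: float,
                       life_expectancy: float = 86.0) -> float:
    """On a person-affecting view, only the years remaining to presently
    existing people count, so value scales with (life_expectancy - mean_age)."""
    remaining_years = life_expectancy - mean_age
    expected_life_years = expected_lives_saved * remaining_years
    return SPEND / expected_life_years

# Risk reduction now, with an assumed mean beneficiary age of 38.
now = cost_per_life_year(expected_lives_saved=2_300, mean_age=38)

# Same spend, but the reduction happens at T+10, evaluated against today's
# population, so the mean age of the (still presently existing) beneficiaries
# is 48 and each saved life is worth fewer remaining years.
later = cost_per_life_year(expected_lives_saved=2_300, mean_age=48)

print(f"cost per life year, reduction now:    ${now:,.0f}")    # ~ $9,100
print(f"cost per life year, reduction at T10: ${later:,.0f}")  # ~ $11,400
```

The point of the comparison is that on PAA the figure degrades as the affected population ages, whereas on the total view (as noted above) it is insensitive to when the reduction happens, setting aside inflation and discounting.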

Also, not sure why my comment was downvoted. I wasn't being rude (or, I think, stupid) and I think it's unhelpful to downvote without explanation as it just looks petty and feels unfriendly.

Comment author: Denise_Melchin 12 April 2018 06:58:00PM 5 points [-]

I completely agree. I considered making the point in the post itself, but I didn't because I'm not sure about the practical implications myself!

Comment author: MichaelPlant 13 April 2018 11:18:07PM 0 points [-]

I agree it's really complicated, but merits some thinking. The one practical implication I take is "if 80k says I should be doing X, there's almost no chance X will be the best thing I could do by the time I'm in a position to do it"

Comment author: John_Maxwell_IV 13 April 2018 05:30:38AM 4 points [-]

Before operations it was AI strategy researchers, and before AI strategy researchers it was web developers. At various times it has been EtG, technical AI safety, movement-building, etc. We can't predict talent shortages precisely in advance, so if you're a person with a broad skillset, I do think it might make sense to act as flexible human capital and address whatever is currently most needed.

Comment author: MichaelPlant 13 April 2018 11:14:10PM 1 point [-]

I think I'd go the other way and suggest people focus more on personal fit: i.e. do the thing in which you have greatest comparative advantage relative to the world as a whole, not just to the EA world.

Comment author: MichaelPlant 13 April 2018 11:04:39PM 1 point [-]

The economic impact of vegetarianism or veganism is only one factor in the decision of whether one should become a vegetarian or vegan, but an important one

I'm confused by this. If you genuinely think your purchase decisions will make no difference to what happens to animals, then you might as well go ahead and order the big bucket at KFC with a clear conscience.

Comment author: Alex_Barry 13 April 2018 10:33:52PM *  1 point [-]

If this isn't true, or consensus view amongst PAAs is "TRIA, and we're mistaken to our degree of psychological continuity", then this plausibly shaves off an order of magnitude-ish and plonks it more in the 'probably not a good buy' category.

It would also have the same (or worse) effect on other things that save lives (e.g. AMF), so it is not totally clear how much worse x-risk would look compared to everything else. (Although perhaps, e.g., deworming would come out very well, if it just reduces suffering over a short-ish timescale. The fact that it mostly affects children might sway things the other way, though!)

Comment author: MichaelPlant 13 April 2018 10:59:19PM 2 points [-]

It would also have the same (or worse) effect on other things that save lives (e.g. AMF)

I agree. As I said here, TRIA implies you should care much less about saving young lives. The upshot of TRIA vs PAA combined with the life-comparative account is that you should focus more on improving lives than on saving lives if you accept TRIA.

Although perhaps e.g. deworming would come out very well, if it just reduces suffering for a short-ish timescale

Just on this note, GiveWell claims only 2% of the value of deworming comes from short-term health benefits and 98% from economic gains (see their latest cost-effectiveness spreadsheet), so they don't think the value is on the suffering-reducing end.
