Comment author: BenMillwood (EA Profile) 16 June 2018 01:21:37PM 0 points

On what grounds do you expect EAs to have better personal ability?

Something I've been idly concerned about in the past is the possibility that EAs might be systematically more ambitious than equivalently competent people, and thus at a given level of ambition, EAs would be systematically less competent. I don't have a huge amount of evidence for this being borne out in practice, but it's one not-so-implausible way that EA charity founders might be worse than average at the skills needed to found charities.

Comment author: BenMillwood (EA Profile) 03 June 2018 07:47:44AM 1 point

I think this framing is a good one, but I don't immediately agree with the conclusion you make about which level to prioritize.

Firstly, consider the benefits we expect from a change in someone's view at each level. Do most people stand to improve their impact the most by choosing the best implementation within their cause area, or switching to an average implementation in a more pressing cause area? I don't think this is obvious, but I lean to the latter.

Higher levels are more generalizable: cross-implementation comparisons are only relevant to people within that cause, whereas cross-cause comparisons are relevant to everyone who shares approximately the same values, so focusing on lower levels limits the size of the audience that can benefit from what you have to say.

Low-level comparisons tend to require domain-specific expertise, which we won't be able to have across a wide range of domains.

I also think there's just a much greater deficit of high-quality discussion of the higher levels. They're virtually unexamined by most people. Speaking personally, my introduction to EA was approximately that I knew I was confused about the medium-level question, so I was directly looking for answers to that: I'm not sure a good discussion of the low-level question would have captured me as effectively.

Comment author: MichaelPlant 24 May 2018 10:03:51PM 1 point

Interesting thought, putting it on Medium. Someone put it on Hacker News here, where people were, um, not terribly nice about it, so I had some reservations about that.

Comment author: BenMillwood (EA Profile) 25 May 2018 02:12:28PM 1 point

I don't think you should update too much on people being unkind on the internet :)

Comment author: oge 12 May 2018 01:43:06PM 0 points

I posted the story to let folks know of a possible altruistic target: letting people live as long as they want by vitrifying their nervous systems for eventual resuscitation.

Comment author: BenMillwood (EA Profile) 13 May 2018 09:33:26AM 8 points

There are many, many possible altruistic targets. I think to be suitable for the EA forum, a presentation of an altruistic goal should include some analysis of how it compares with existing goals, or what heuristics lead you to believe it's worthy of particular attention.

Comment author: AviN 22 April 2018 01:39:38PM 1 point

I wonder if the cutoff point is more like 25,000 though, the number of broiler chickens raised in a shed. It's unclear to me whether producers respond to small changes in demand by adjusting the numbers of broilers in a shed or only by adjusting the number of sheds in use.

If the cutoff point is more like 25,000, then this would imply that most veg*ns go their entire lives without preventing the existence of a single broiler through their consumption changes, while a minority prevent the existence of a huge number.

For what it's worth, it seems likely that donations to AMF are similar since their distributions typically cover hundreds of thousands or millions of people.

Comment author: BenMillwood (EA Profile) 13 May 2018 09:06:49AM 1 point

I think it's sort of bizarre to suggest that out of 25,000 vegetarians, one is responsible for the shed being closed, and the others did nothing at all. Why privilege the "last" decision to not purchase a chicken? It makes more sense to me that you'd allocate the "credit" equally to everyone who chose not to eat meat.

The first 24,999 needed to not buy a chicken in order for the last one to be in a position for their choice to make a difference.
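A minimal sketch of the arithmetic behind this, using the 25,000-bird shed from the parent comment (the numbers are purely illustrative, and it assumes, hypothetically, that the shed closes exactly when 25,000 people each forgo one bird's worth of demand): either way of assigning credit sums to the same total impact, so splitting it equally works out to one bird prevented per vegetarian rather than 25,000 for whoever happened to be "last" and zero for everyone else.

```python
# Toy model of the 25,000-bird shed and the two credit-assignment
# conventions discussed above. All numbers are illustrative assumptions.

SHED_SIZE = 25_000     # broilers per shed (the assumed demand cutoff)
VEGETARIANS = 25_000   # people who each forgo one bird's worth of demand

birds_prevented = SHED_SIZE  # one shed's worth of production never happens

# Convention 1: all credit goes to the "last" abstainer who crosses the threshold.
last_person_credit = birds_prevented
everyone_else_credit = 0

# Convention 2: credit is split equally, since all 25,000 abstentions were needed.
equal_credit = birds_prevented / VEGETARIANS  # = 1 bird per person

# Either way the total is the same; only the distribution of credit differs.
assert last_person_credit + everyone_else_credit * (VEGETARIANS - 1) == birds_prevented
assert equal_credit * VEGETARIANS == birds_prevented

print(f"Equal split: {equal_credit:.0f} bird(s) prevented per vegetarian")
print(f"Last-person convention: {last_person_credit} birds for one person, 0 for the rest")
```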

Comment author: Darius_Meissner 10 May 2018 02:16:58PM 2 points

Thanks, Tom! I agree with you that, all else being equal,

solutions that destroy less option value are preferable

though I still think that in some cases the benefits of hard-to-reverse decisions can outweigh the costs.

It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

This seems to assume that our future selves will actually make important decisions purely (or mostly) based on their epistemic status. However, as CalebWithers points out in a comment:

I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think.

If this is valid (as it seems to me), then many of the important decisions of our future selves are a result of some more or less conscious psychological drives rather than an all-things-considered, reflective and value-based judgment. It is very hard for me to imagine that my future self could ever decide to stop being altruistic or caring about effectiveness on the basis of being better informed and more rational. However, I find it much more plausible that other psychological drives could bring my future self to abandon these core values (and find a rationalization for it). To be frank, though I generally appreciate the idea of 'being loyal to and cooperating with my future self', it seems to me that I place considerably less trust in the driving motivations of my future self than many others do. From my perspective now, it is my future self that might act disloyally with regard to my current values, and that is what I want to find ways to prevent.

It is worth pointing out that in the whole article and this comment I mostly speak about high-level, abstract values such as a fundamental commitment to altruism and to effectiveness. This is what I don't want to lose and what I'd like to lock in for my future self. As illustrated by RandomEA's comment, I would be much more careful about attempting to tie myself to the mast with respect to very specific values such as discount rates between humans and non-human animals, specific cause-area or intervention preferences, etc.

Comment author: BenMillwood (EA Profile) 13 May 2018 08:36:40AM 2 points

It's not enough to place a low level of trust in your future self for commitment devices to be a good idea. You also have to put a high level of trust in your current self :)

That is, if you believe in moral uncertainty, and believe you currently haven't done a good job of figuring out the "correct" way of thinking about ethics, you may think you're likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won't even be interested in those questions anymore.

Comment author: Dunja 05 May 2018 05:27:44PM 0 points

Sure, but can we really speak of "choice" for those who have no other options? Again: your argument can be used to defend any form of slavery, as long as slaves became slaves "out of choice". If otherwise they wouldn't have survived, what kind of choice is that?! Imagine the alternative: there is consumer-driven pressure on companies to introduce serious controls on working conditions. As a result, current sweatshops eventually become much safer places to work. It's a long-term win-win scenario.

Comment author: BenMillwood (EA Profile) 09 May 2018 09:06:01AM 0 points

I think on balance there's a strong chance you're right, but there IS a possible lose-lose outcome, where the consumer pressure drives the companies to fire all their sweatshop employees and move somewhere they can hire people from a different, less needy population (one that maybe has different labour laws, or in some other way pacifies many of the consumer activists).

Comment author: BenMillwood (EA Profile) 09 May 2018 08:27:13AM 1 point

First of all, thanks for this post -- I think it's really valuable to get a realistic sense of how these beliefs play out over the long term.

Like others in the comments, though, I'm a little critical of the framing and skeptical of the role of commitment devices. In my mind, we can view commitment devices as essentially being anti-cooperative with our future selves. I think we should default to viewing these attempts as suspicious, similarly to how we would caution against acting anti-cooperatively towards any other non-EAs.

Implicit is the assumption that if we change, it must be for "bad" reasons. It's natural enough -- clearly we can't think of any good reasons, otherwise we would already have changed -- but it lacks imagination. We may learn of a reason why things are not as we thought. Limiting your options according to your current knowledge or preferences means limiting your ability to flourish if the world turns out to be very different from your expectations.

More abstractly, imagine that you heard about someone who believed that doing X was a really good idea, and then three years later believed that doing X was not a good idea. Without any more details, which of the two do you think is more likely to be correct?

(At the same time, I think we're all familiar with failing to achieve goals because we failed to commit to them, even as we knew they were worth it, so there can be value in binding yourself. It's also good signalling, of course. But such explanations or justifications need to be strong enough to overcome the general prior based on the above argument.)

Comment author: Denkenberger 24 April 2018 10:08:49PM 1 point

I think the 10% versus 50% descriptions are useful, and I'm surprised I have not seen them before on the forum, except for my comment here. In that comment, I was arguing that free time could be defined as 40 hours a week, so if you volunteer effectively four hours a week, that would make you a 10% EA. But this also means if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA. Another way is having an EA job (which is typically half of market salary, so it is like donating 50%) that is nominally 40 hours a week, but actually working 60 hours a week, so it is like you are volunteering half of your "free" time. Then it would be nice clean orders of magnitude. But 100% is not very common, and it could be misleading, so 50% is ok.

Comment author: BenMillwood (EA Profile) 09 May 2018 07:46:08AM 0 points

But this also means if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA

If you gave 60% of your income, would that make you a 110% EA? If so, I think that mostly just highlights that this metric should not be taken too seriously. (I was going to criticize it on more technical grounds, but I think to do so would be to give legitimacy to the idea that people should compare their own "numbers" with each other, which seems likely to be a bad idea.)

Comment author: Gregory_Lewis 02 February 2018 06:48:12PM 2 points

That seems surprising to me, given the natural model for the counterpart in the case you describe would be a sibling, and observed behaviour between sibs is pretty divergent. I grant your counterfactual sibling would be more likely than a random member of the population to be writing something similar to the parent comment, but the absolute likelihood remains very low.

The fairly intermediate heritabilities of things like intelligence, personality traits etc. also look pretty variable. Not least, there's about a 0.5 chance your counterpart would be the opposite sex to you.

I agree that even if history is chaotic in some respects, it is not chaotic in every respect, and there can be forcing interventions (one can grab a double pendulum, etc.), yet the effects of less overwhelming interventions may be pretty hard to fathom in the chaotic case ("it's too early to say whether the French Revolution was good or bad", etc.).

Comment author: BenMillwood (EA Profile) 09 February 2018 05:48:26PM 0 points

Not that it's obviously terribly important to the historical chaos discussion, but I think siblings aren't a great natural model. Siblings differ by at least (usually more than) nine months, which you can imagine affecting them biologically, via the physiology of the mother during pregnancy, or via the medical / material conditions of their early life. They also differ in social context -- after all, one of them has one more older sibling, while the other has one more younger one. Two agents interacting may exaggerate their differences over time, or perhaps they sequentially fill particular roles in the eyes of the parents, which leads to differences in treatment. So I think there are lots of sources of sibling difference that aren't present in hypothetical genetic reshuffles.

(That said, the coinflip on sex seems pretty compelling.)
