Comment author: oge 12 May 2018 01:43:06PM 0 points

I posted the story to let folks know of a possible altruistic target: letting people live as long as they want by vitrifying their nervous systems for eventual resuscitation.

Comment author: BenMillwood (EA Profile) 13 May 2018 09:33:26AM 7 points

There are many, many possible altruistic targets. I think to be suitable for the EA forum, a presentation of an altruistic goal should include some analysis of how it compares with existing goals, or what heuristics lead you to believe it's worthy of particular attention.

Comment author: AviN 22 April 2018 01:39:38PM 1 point

I wonder if the cutoff point is more like 25,000 though, the number of broiler chickens raised in a shed. It's unclear to me whether producers respond to small changes in demand by adjusting the numbers of broilers in a shed or only by adjusting the number of sheds in use.

If the cutoff point is more like 25,000, then this would imply that most veg*ns go their entire lives without preventing the existence of a single broiler through their consumption changes, while a minority prevent the existence of a huge number.

For what it's worth, it seems likely that donations to AMF are similar since their distributions typically cover hundreds of thousands or millions of people.

Comment author: BenMillwood (EA Profile) 13 May 2018 09:06:49AM * 1 point

I think it's sort of bizarre to suggest that out of 25,000 vegetarians, one is responsible for the shed being closed, and the others did nothing at all. Why privilege the "last" decision to not purchase a chicken? It makes more sense to me that you'd allocate the "credit" equally to everyone who chose not to eat meat.

The first 24,999 needed to not buy a chicken in order for the last one to be in a position for their choice to make a difference.
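The expected-value arithmetic behind both views can be sketched in a few lines. This is only an illustrative model of the threshold argument above, using the parent comment's hypothetical 25,000-bird batch size:

```python
# Threshold model from the discussion above: production adjusts only in
# batches of SHED birds, so a single forgone purchase matters only when
# it happens to tip demand across a batch boundary.

SHED = 25_000  # hypothetical batch size (broilers per shed)

# "Last buyer" view: each forgone purchase has a 1/SHED chance of being
# the pivotal one that prevents a whole shed's worth of birds.
p_pivotal = 1 / SHED
expected_birds_prevented = p_pivotal * SHED  # expected effect per forgone purchase

# "Equal credit" view: spread the shed's SHED birds over all SHED
# people whose abstention was needed for the threshold to be crossed.
credit_per_person = SHED / SHED

# Both views assign roughly one bird per forgone purchase in expectation;
# they differ only in how the (identical) total effect is attributed.
```

On either accounting the expected impact per decision is about one bird; the two views disagree only about variance, i.e. whether the impact is concentrated in one pivotal person or shared across everyone whose abstention was necessary.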

Comment author: Darius_Meissner 10 May 2018 02:16:58PM * 1 point

Thanks, Tom! I agree with you that all else being equal

solutions that destroy less option value are preferable

though I still think that in some cases the benefits of hard-to-reverse decisions can outweigh the costs.

It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

This seems to assume that our future selves will actually make important decisions purely (or mostly) based on their epistemic status. However, as CalebWithers points out in a comment:

I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think.

If this is valid (as it seems to me), then many of the important decisions of our future selves will result from more or less conscious psychological drives rather than from an all-things-considered, reflective and value-based judgment. It is very hard for me to imagine that my future self could ever decide to stop being altruistic or caring about effectiveness on the basis of being better informed and more rational. However, I find it much more plausible that other psychological drives could bring my future self to abandon these core values (and find a rationalization for it). To be frank, though I generally appreciate the idea of 'being loyal to and cooperating with my future self', I place considerably lower trust in the driving motivations of my future self than many others do. From my perspective now, it is my future self that might act disloyally with regard to my current values, and that is what I want to find ways to prevent.

It is worth pointing out that in the whole article and this comment I mostly speak about high-level, abstract values such as a fundamental commitment to altruism and to effectiveness. This is what I don't want to lose and what I'd like to lock in for my future self. As illustrated by RandomEA's comment, I would be much more careful about attempting to tie-myself-to-the-mast with respect to very specific values such as discount rates between humans and non-human animals, specific cause-area or intervention preferences, etc.

Comment author: BenMillwood (EA Profile) 13 May 2018 08:36:40AM 0 points

It's not enough to place a low level of trust in your future self for commitment devices to be a bad idea. You also have to put a high level of trust in your current self :)

That is, if you believe in moral uncertainty, and believe you currently haven't done a good job of figuring out the "correct" way of thinking about ethics, you may think you're likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won't even be interested in those questions anymore.

Comment author: Dunja 05 May 2018 05:27:44PM 0 points

Sure, but can we really speak of "choice" for those who have no other options? Again: your argument can be used to defend any form of slavery, as long as slaves became slaves "out of choice". If otherwise they wouldn't have survived, what kind of choice is that?! Imagine the alternative: there is consumer-driven pressure on companies to introduce serious control of working conditions. As a result, current sweatshops eventually become much safer for work. It's a long-term win-win scenario.

Comment author: BenMillwood (EA Profile) 09 May 2018 09:06:01AM * 0 points

I think on balance there's a strong chance you're right, but there IS a lose-lose outcome, where the consumer pressure drives the companies to fire all their sweatshop employees and move to a place where they can hire from a different, less needy population (one that maybe has different labour laws, or in some other way pacifies many of the consumer activists).

Comment author: BenMillwood (EA Profile) 09 May 2018 08:27:13AM 1 point

First of all, thanks for this post -- I think it's really valuable to get a realistic sense of how these beliefs play out over the long term.

Like others in the comments, though, I'm a little critical of the framing and skeptical of the role of commitment devices. In my mind, we can view commitment devices as essentially being anti-cooperative with our future selves. I think we should default to viewing these attempts as suspicious, similarly to how we would caution against acting anti-cooperatively towards any other non-EAs.

Implicit is the assumption that if we change, it must be for "bad" reasons. It's natural enough -- clearly we can't think of any good reasons, otherwise we would already have changed -- but it lacks imagination. We may learn of a reason why things are not as we thought. Limiting your options according to your current knowledge or preferences means limiting your ability to flourish if the world turns out to be very different from your expectations.

More abstractly, imagine that you heard about someone who believed that doing X was a really good idea, and then three years later, believed that doing X was not a good idea. Without any more details, who do you think is most likely to be correct?

(At the same time, I think we're all familiar with failing to achieve goals because we failed to commit to them, even as we knew they were worth it, so there can be value in binding yourself. It's also good signalling, of course. But such explanations or justifications need to be strong enough to overcome the general prior based on the above argument.)

Comment author: Denkenberger 24 April 2018 10:08:49PM * 1 point

I think the 10% versus 50% descriptions are useful, and I'm surprised I have not seen them before on the forum, except for my comment here. In that comment, I was arguing that free time could be defined as 40 hours a week, so if you volunteer effectively four hours a week, that would make you a 10% EA. But this also means if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA. Another way is having an EA job (which typically pays half of market salary, so it is like donating 50%) that is nominally 40 hours a week but actually involves working 60 hours a week, so it is like volunteering half of your "free" time. Then it would be nice clean orders of magnitude. But 100% is not very common, and it could be misleading, so 50% is ok.

Comment author: BenMillwood (EA Profile) 09 May 2018 07:46:08AM 0 points

But this also means if you donate 50% and spend 50% of your free time effectively (like I try to do), you would be a 100% EA

If you gave 60% of your income would that make you a 110% EA? If so, I think that mostly just highlights that this metric should not be taken too seriously. (I was going to criticize it on more technical grounds, but I think to do so would be to give legitimacy to the idea that people should compare their own "numbers" with each other, which seems likely to be a bad idea.)
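For concreteness, the metric under discussion (as I read the parent comment) could be written as follows. The function name and the 40-hours-a-week definition of free time are just the parent comment's assumptions restated, nothing official:

```python
def ea_percent(donation_fraction, volunteer_hours_per_week,
               free_hours_per_week=40):
    """Hypothetical '% EA' metric: donation fraction plus the
    fraction of free time spent volunteering, as a percentage."""
    return 100 * (donation_fraction
                  + volunteer_hours_per_week / free_hours_per_week)

print(ea_percent(0.10, 0))   # donate 10%, no volunteering: a "10% EA"
print(ea_percent(0.50, 20))  # donate 50%, half your free time: "100% EA"
print(ea_percent(0.60, 20))  # donate 60% instead and the metric tops 100
```

This illustrates the objection: nothing caps the metric at 100, so it reads more like a loose rhetorical scale than a number worth comparing between people.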

Comment author: Gregory_Lewis 02 February 2018 06:48:12PM 2 points

That seems surprising to me, given the natural model for the counterpart in the case you describe would be a sibling, and observed behaviour between sibs is pretty divergent. I grant your counterfactual sibling would be more likely than a random member of the population to be writing something similar to the parent comment, but the absolute likelihood remains very low.

The fairly intermediate heritabilities of things like intelligence, personality traits etc. also look pretty variable. Not least, there's about a 0.5 chance your counterpart would be the opposite sex to you.

I agree that even if history is chaotic in some respects, it is not chaotic in every respect, and there can be forcing interventions (one can grab a double pendulum, etc.), yet less overwhelming interventions may be pretty hard to fathom in the chaotic case ("it's too early to say whether the French Revolution was good or bad", etc.).

Comment author: BenMillwood (EA Profile) 09 February 2018 05:48:26PM * 0 points

Not that it's obviously terribly important to the historical chaos discussion, but I think siblings aren't a great natural model. Siblings differ by at least (usually more than) nine months, which you can imagine affecting them biologically, via the physiology of the mother during pregnancy, or via the medical / material conditions of their early life. They also differ in social context -- after all, one of them has one more older sibling, while the other has one more younger one. Two agents interacting may exaggerate their differences over time, or perhaps they sequentially fill particular roles in the eyes of the parents, which leads to differences in treatment. So I think there are lots of sources of sibling difference that aren't present in hypothetical genetic reshuffles.

(That said, the coinflip on sex seems pretty compelling.)

Comment author: RomeoStevens 29 January 2018 08:59:50PM * 7 points

Another framing of that solution: EA needs a full time counselor who works with EAs gratis. I expect that paying the salary of such a person would be +ROI.

Comment author: BenMillwood (EA Profile) 09 February 2018 04:40:45PM 5 points

I would be interested in funding this.

Comment author: BenMillwood (EA Profile) 16 December 2017 04:38:03PM 3 points

For the benefit of future readers: Giving Tuesday happened, and the matching funds were exhausted within about 90 seconds. Of ~$370k in total donations, we matched ~$46k, or about 13%, which was lower than hoped. William wrote up a lessons-learned document as a Google doc.

Comment author: BenMillwood (EA Profile) 25 November 2017 08:07:49AM 0 points

I'm going to write a relatively long comment making a relatively narrow objection to your post. Sorry about that, but I think it's a particularly illustrative point to make. I disagree with these two points against the neglectedness framing in particular:

  1. that it could divide by zero, and this is a serious problem
  2. that it splits a fraction into unnecessarily conditional parts (the "dragons in Westeros" problem).

Firstly, in response to (1): this is a legitimate illustration that the framework only applies where it applies, but in practice it seems like it isn't an obstacle. Specifically, the framing works well when your proposed addition is small relative to the existing resource, and it seems like that's true of most people in most situations. I'll come back to this later.

More importantly, I feel like (2) misses the point of what the framework was developed for. The goal is to get a better handle on what kinds of things to look for when evaluating causes. So the fact that the fraction simplifies to "good done per additional resource" is sort of trivial – that's the goal, the metric we're trying to optimize. It's hard to measure that directly, so the value added by the framework is the claim that certain conditionalizations of the metric (if that's the right word) yield questions that are easier to answer, and answers that are easier to compare.

That is, we write it as "scale times neglectedness times solvability" because we find empirically that those individual factors of the metric tend to be more predictable, comparable and measurable than the metric as a whole. The applicability of the framework is absolutely contingent on what we in-practice discover to be the important considerations when we try to evaluate a cause from scratch.

So while there's no fundamental reason why neglectedness, particularly as measured in the form of the ratio of percentage per resource, needs to be a part of your analysis, it just turns out to be the case that you can often find e.g. two different health interventions that are otherwise very comparable in how much good they do, but with very different ability to consume extra resources, and that drives a big difference in their attractiveness as causes to work on.

If ever you did want to evaluate a cause where the existing resources were zero, you could just as easily swap the bad cancellative denominator/numerator pair with another one, say the same thing in absolute instead of relative terms, and the rest of the model would more or less stand up. Whether that should be done in general for evaluating other causes as well is a judgement call about how these numbers vary in practice and what situations are most easily compared and contrasted.
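To make the cancellation concrete, here is the factored metric with made-up numbers. The figures and units are purely illustrative, not a real cause evaluation:

```python
# Scale x solvability x neglectedness, written so the intermediate
# units visibly cancel to "good done per extra dollar".

scale = 1000.0     # good done per % of the problem solved
solvability = 0.5  # % of problem solved per % increase in resources
existing_dollars = 2_000_000
neglectedness = 100 / existing_dollars  # % increase in resources per
                                        # extra dollar (undefined when
                                        # existing_dollars == 0)

marginal_good = scale * solvability * neglectedness  # good per extra dollar

# Collapsing the cancelling pair gives "% solved per extra dollar" in
# absolute terms -- the quantity you could instead estimate directly,
# e.g. for a cause where existing resources are zero.
pct_solved_per_dollar = solvability * neglectedness
marginal_good_direct = scale * pct_solved_per_dollar
```

The point of the factored form is that each named factor tends to be easier to estimate and compare across causes than the product, even though algebraically the product is all that matters.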
