Comment author: pmelchor  (EA Profile) 10 May 2018 05:50:11PM 4 points [-]

Great posts, Joey and Darius!

I'd like to introduce a few considerations as an "older" EA (I am 43 now):

  • Scope of measurement: Joey’s post was based on five years of data. As Joey mentioned, “it would take a long time to get good data”. However, it may well be that expanding the time scope would yield very different results. It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would look not like a horizontal line but like a zigzag. I base this on purely anecdotal evidence, but I have seen many people (including myself) recover interests, hobbies, passions, etc. once their children are older. I am quite new to the movement, but there is no way that 10 years ago I would have put in the time I am now devoting to EA. If I had started my involvement in college (supposing EA had been around), you would have seen a sharp decline during my thirties (and tagged it as value drift), without knowing there would be a sharp increase in my forties.

  • Expectations: This is related to my previous point. Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions. Keeping the initial engagement levels constant sounds good in theory, but it may not be the best strategy in the long run (e.g. it could lead to burnout). Maybe we should think of “engagement fluctuations” as something natural and to be expected instead of something dangerous that must be fought against.

  • EA interaction styles: If and as the median age of the community goes up, we may need to adapt the ways in which we interact (or rather add to the existing ones). It can be much harder for people with full-time jobs and children to attend regular meetings or late-afternoon “socials”. How can we make it easier for people who have very strong demands on their time to stay involved without feeling that they are missing out or that they just can’t cope with everything? I don’t have an answer right now, but I think this is worth exploring.

The overall idea here is that instead of fighting an uneven involvement/commitment across time, it may be better to actually plan for it and find ways of accommodating it within a “lifetime contribution strategy”. It may well be that there is a minimum threshold below which people completely abandon EA. If that is so, I suggest we think of ways of making it easy for people to stay above that threshold at times when other parts of their lives are especially demanding.

Comment author: Darius_Meissner 10 May 2018 07:56:18PM *  1 point [-]

Great points, thanks for raising them!

It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag.

It would be very encouraging if this were a common phenomenon and many people who 'drop out' eventually came back to EA ideals. It would provide a counterexample to something I wrote in an earlier comment:

It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to 'changing the world for the better'. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): "If you're not a socialist at the age of 20 you have no heart. If you're not a conservative at the age of 40, you have no head".

Regarding your related point:

Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions (...) and find ways of accommodating it within a “lifetime contribution strategy”

I strongly agree with this, which was my motivation for writing the post in the first place! I don't think constant involvement/commitment to (effective) altruism is necessary to maximise your lifetime impact. That said, it seems like for many people there is a considerable chance of never 'finding their way back' to this commitment after they have spent years or decades in non-altruistic environments, started a family, settled down etc. This is why I'd generally think people with EA values in their twenties should consider ways to at least stay loosely involved and updated over the mid- to long-term, to reduce the chance of this happening. So it is a great counterexample to hear that you actually managed to do just that! In any case, more research is needed on this - I somewhat want to caution against survivorship bias, which could become an issue if we mostly talk to the people who did what is possibly exceptional (e.g. took up a strong altruistic commitment in their forties or have been around EA for a long time).

Comment author: ThomasSittler 06 May 2018 05:20:21PM *  7 points [-]

Thanks for the post. I'm sceptical of lock-in (or, more Homerically, tie-yourself-to-the-mast) strategies. It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

I know you said your post just aims to provide ideas and tools for how you can avoid value drift if you want to do so. But even so, in the spirit of compromise between your time-slices, solutions that destroy less option value are preferable.

Comment author: Darius_Meissner 10 May 2018 02:16:58PM *  1 point [-]

Thanks, Tom! I agree with you that, all else being equal,

solutions that destroy less option value are preferable

though I still think that in some cases the benefits of hard-to-reverse decisions can outweigh the costs.

It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

This seems to assume that our future selves will actually make important decisions purely (or mostly) based on their epistemic status. However, as CalebWithers points out in a comment:

I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think.

If this is valid (as it seems to me), then many of the important decisions of our future selves will be the result of more or less conscious psychological drives rather than an all-things-considered, reflective and value-based judgment. It is very hard for me to imagine that my future self could ever decide to stop being altruistic or caring about effectiveness on the basis of being better informed and more rational. However, I find it much more plausible that other psychological drives could bring my future self to abandon these core values (and find a rationalization for it). To be frank, though I generally appreciate the idea of 'being loyal to and cooperating with my future self', it seems that I place considerably lower trust in the driving motivations of my future self than many others do. From my perspective now, it is my future self that might act disloyally with regard to my current values, and that is what I want to find ways to prevent.

It is worth pointing out that in the whole article and this comment I mostly speak about high-level, abstract values such as a fundamental commitment to altruism and to effectiveness. This is what I don't want to lose and what I'd like to lock in for my future self. As illustrated by RandomEA's comment, I would be much more careful about attempting to tie myself to the mast with respect to very specific values such as discount rates between humans and non-human animals, specific cause area or intervention preferences, etc.

Comment author: KarolinaSarek 06 May 2018 07:03:47PM 4 points [-]

Thank you, Joey, for gathering those data. And thank you, Darius, for providing us with the suggestions for reducing this risk. I agree that further research on the causes of value drift and how to avoid it is needed. If the phenomenon were explained correctly, that could be a great asset to EA community building. But regardless of this explanation, your suggestions are valuable.

It seems to be a generally complex problem, because retention encapsulates the phenomenon in which a person develops an identity, skill set, and consistent motivation or dedication to significantly change the course of their life. CEA, in their recent model of community building, framed it as resources, dedication, and realization.

Decreasing retention is also observed in many social movements, and some insights about how it happens can be culled from the sociological literature. Although the area is still underexplored and the sociological analyses might be of mediocre quality, it might still be useful to have a look at them. For example, this analysis suggests that a “movement’s ability to sustain itself is a deeply interactive question predicted by its relationship to its participants: their availability, their relationships to others, and the organization’s capacity to make them feel empowered, obligated, and invested.”

Additional aspects of value drift to consider on an individual level that might not be relevant to other social movements include mental health and well-being, pathological altruism, and purchasing fuzzies and utilons separately.

The reasons for value drift away from EA seem as important for understanding the process as the value drift that led to EA in the first place. For example, in Joey's post he gave an illustrative story of Alice. What could explain her value drift is the fact that people during their first year of college are more prone to social pressure and the need for belonging. That could be what made her become an EA, and why she drifted when she left college and her EA peers. So "surround yourself with value-aligned people" for the whole course of your life. That also stresses the importance of the untapped potential of local groups outside the main EA hubs. For this reason, local groups are worth considering even if, in the case of outreach, we shouldn't rush to translate effective altruism.

About the data itself: we might be making wrong inferences in trying to explain it, because it shows only a fraction of the process. Maybe if we observed the curve of engagement over a longer period it would fluctuate, e.g. 50% in the first 2-5 years, 10% in the 6th year, 1% for the next 2-3 years, and then coming back to 10%, 50%, etc. We might hypothesize that life situation influences the baseline level of engagement for a short period (1 month to 3 years). By analogy with changes in the baseline of happiness and the influence of life events explained by hedonic adaptation, maybe we have something like altruistic adaptation, which shifts after a significant life event (changing the city, marriage etc.) and then comes back to baseline.

Additionally, since the level of engagement in EA and other significant variables do not correlate perfectly, the data could also be explained by regression to the mean: if some of the EAs were hardcore at the beginning, they will tend to be closer to the average on a second measurement, so from 50% to 10%, and from 10% to 1%. Anyhow, the likelihood that value drift is real is higher than that it is not.

More could be done about value drift on the structural level, e.g. it might also be explained by the main bottlenecks in the community itself, like the Mid-Tier Trap (e.g. too good to run a local group, but not good enough to be hired by the main EA organizations -> multiple unsuccessful job applications -> frustration -> dropping out).

Because the mechanism of value drift would determine the strategies to minimize its risk or harm, and because the EA community might not be representative of other social movements, we should systematically and empirically explore those and other factors in order to find the 80/20 of long-lasting commitment.

Comment author: Darius_Meissner 10 May 2018 01:44:37PM *  0 points [-]

Thanks for your comment, Karolina!

That also stresses the importance of untapped potential of local groups outside the main EA hubs.

Yep, I see engaging people & keeping up their motivation in one location as a major contribution of EA groups to the movement!

maybe we have something like altruistic adaptation, which shifts after a significant life event (changing the city, marriage etc.) and then comes back to baseline.

This is an interesting suggestion, though I think it unlikely. It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to 'changing the world for the better'. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): "If you're not a socialist at the age of 20 you have no heart. If you're not a conservative at the age of 40, you have no head".

More could be done about value drift on the structural level, e.g. it might also be explained by the main bottlenecks in the community itself, like the Mid-Tier Trap

This is a valuable and under-discussed point that I endorse!

Comment author: CalebWithers  (EA Profile) 07 May 2018 12:33:14PM *  9 points [-]

Thanks for writing this - it seems worthwhile to be strategic about potential "value drift", and this list is definitely useful in that regard.

I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.

In the vein of Denise_Melchin's comment on Joey's post, I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think. In the vein of The Replacing Guilt series, I don't think that attempting to override these other values is generally sustainable for long-term motivation.

This hypothesis would point away from pledges or 'locking in' (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to "reduce the risk of value drift", we might instead recognize that spending time with value-aligned people is an opportunity both to meet our social needs and to cultivate our impactfulness.

Comment author: Darius_Meissner 10 May 2018 01:22:33PM *  0 points [-]

Thanks for your comment! I agree with everything you have said and like the framing you suggest.

I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously

This is what I tried to address, though you have expressed it more clearly than I could! As some others have pointed out as well, it might make sense to differentiate between 'value drift' (i.e. a change of internal motivation) and 'lifestyle drift' (i.e. a change of external factors that makes implementation of values more difficult). I acknowledge that, as Denise's comment points out, the term 'value drift' is not ideal in the way that Joey and I used it and that:

As the EA community we should treat people sharing goals and values of EA but finding it hard to act towards implementing them very differently to people simply not sharing our goals and values anymore. Those groups require different responses. (Denise_Melchin comment).

However, it seems reasonable to me to be concerned about, and attempt to avoid, both value and lifestyle drift, and in many cases it will be hard to draw a line between the two (as changes in lifestyle likely precipitate changes in values and vice versa).

Comment author: Darius_Meissner 03 May 2018 09:41:42AM *  8 points [-]

Now that a new version of the handbook is out, could you update the 'More on Effective Altruism' link? It is quite prominent in the 'Getting Started' navigation panel on the right-hand side of the EA Forum.

Comment author: Darius_Meissner 03 May 2018 09:38:45AM 2 points [-]

In light of the recently published 2nd edition of the EA Handbook, could this page be updated as well? The 'more on effective altruism' link in the navigation menu is quite prominent and it would be great to lead visitors to the most up-to-date content.