
Concrete Ways to Reduce Risks of Value Drift

This post is motivated by Joey’s recent post on ‘Empirical data on value drift’ and some of the comments. Its purpose is not to argue for why you should avoid value drift, but to provide you with ideas and tools for how you can avoid it if you want to do so.

Introduction

“And Harry remembered what Professor Quirrell had said beneath the starlight: Sometimes, when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been…

And Harry couldn’t understand Professor Quirrell’s words, it might have been an alien that had spoken, (...) something built along such different lines from Harry that his brain couldn’t be forced to operate in that mode. You couldn’t leave your home planet while it still contained a place like Azkaban. You had to stay and fight.”

Harry Potter and the Methods of Rationality

I use the term value drift in a broad sense to mean certain life changes that would lead you to lose most of the expected altruistic value of your life. Those could be a) changes to your value system or internal motivation, and b) changes in your life circumstances leading to difficulties implementing your values (I acknowledge that the term is non-ideal. Value drift seems to capture part a) well, and part b) might better be captured by 'lifestyle drift'; see terminology discussion here). On the motivational side, this could be by ceasing to see helping others as one of your life’s priorities (losing the ‘A’ in EA), or pivoting towards an ineffective cause area or intervention (losing the ‘E’ in EA).

Of course, changing your cause area or intervention to something that is equally or more effective within the EA framework does not count as value drift. Note that even if your future self were to decide to leave the EA community, as long as you still see ‘helping others effectively’ as one of your top priorities in life it might not constitute value drift. You don’t need to call yourself an EA to have a large impact. But I am convinced that EA as a community helps many members uphold their motivation for doing the most good.

Most of the potential value of EAs lies in the mid- to long-term, when more and more people in the community take up highly effective career paths and build their professional expertise to reach their ‘peak productivity’ (likely in their 40s). If value drift is common, then many of the people currently active in the community will cease to be interested in doing the most good long before they reach this point. This is why, speaking for myself, losing my altruistic motivation in the future would equal a small moral tragedy to my present self. I think that as EAs we can reasonably have a preference for our future selves not to abandon our fundamental commitment to altruism or effectiveness.

Caveat: the following suggestions are all very tentative and largely based on my intuition of what I think will help me avoid value drift; please take them with a large grain of salt. I acknowledge that other people function differently in some respects, that some of the suggestions below will not have beneficial effects for many people and could even be harmful for some. Also keep in mind that some of the suggestions might involve trade-offs with other goals. A toy example to illustrate the point: it might turn out that getting an EA tattoo is a great commitment mechanism, however it could conflict with the goal (among others) to spend your limited weirdness points wisely and might have negative effects on how EA is perceived by people around you. Please reflect carefully on your personal situation before adopting any of the following.

What you can do to reduce risks of value drift:

  • Beware of falling prey to cognitive biases when thinking about value drift: You probably systematically underestimate a) the likelihood of changing significantly in the future (i.e. the end-of-history illusion) and b) the role that social dynamics play in your motivation. There is a danger in believing both that your fundamental values will not change or that you have control over how they will change, and in believing that your mind works radically differently from other people’s (e.g. the atypical mind fallacy or bias blind spot); for instance, that your motivation is grounded more in rational arguments and less in social dynamics than it is for others. In particular, beware of base rate neglect when thinking that the risk of value drift occurring to you personally is very low; Joey’s post provides a very rough base rate for orientation.
  • Surround yourself with value aligned people: There is a saying that you become the average of the five people closest to you. Therefore, surround yourself with people who motivate and inspire you in your altruistic pursuits. From this perspective, it seems especially beneficial to spend time with other EAs to keep up and regain your motivation; though ‘value aligned’ people don’t have to be EAs, of course. However, it is worth pointing out that you should beware of groupthink and of surrounding yourself only with people who are very similar to you. As a community we should retain our ability to take the outside view and engage critically with community trends and ideas. If you decide you want to spend more time with value aligned people / other EAs, here are some concrete ways: making an effort to have regular social interactions with value aligned people (e.g. meeting for lunch/dinner, coffee), engaging in or starting your own local EA chapter, attending EA Global conferences or retreats, becoming friends with EAs, completing internships at EA aligned organisations, getting in touch with value aligned people & other EAs online and chatting/skyping to exchange ideas, sharing a flat etc. Avoiding value drift might increase the importance you should place on living in an EA hub, such as the Bay Area, London, Oxford or Berlin, or other places with a supportive community.
  • Discount the expected value of your longer term altruistic plans by the probability that they will never be realised due to value drift (see Joey’s post for a very rough base rate). This consideration might lead you to place relatively more weight on how you can achieve near term impact or reduce risks of value drift. However, a counter-consideration is that your future self will have more skills, knowledge and resources to do good, which could make capacity building in the near term extremely valuable. Attempt to balance these considerations – the risk of value drift tomorrow against the risk of underinvesting in building your capacity today.
  • Make reducing risks of value drift a top altruistic priority: Think about whether you agree that most of the potential social impact of your life lies several years or decades in the future. If yes, then thinking about risks of value drift in your own life and implementing concrete steps to reduce them is likely going to be (among) the highest expected value activities for you in the short-term. I expect that learning more about the causes of value drift on the individual level has a high moral value of information by making it easier for yourself to anticipate and avoid future life circumstances that contribute to it. Joey’s post indicates that value drift occurs for various different reasons and many of those seem to be circumstantial rather than coming from disagreement with fundamental EA principles (e.g. moving to a new city without a supportive EA community, transitioning from university to workforce, finding a non-EA partner and investing heavily in the relationship, marrying, having kids etc.).
  • Think about what your priorities are in life: There are many different ways to lead a happy and fulfilling life. A subset of those ways revolve around altruism. And a subset of these count as effectively altruistic. While you should be careful not to sacrifice your long-term happiness to short-term altruistic goals – being unhappy with your way of life, even if it is doing a ton of good in the short-term, is a sure way to lose your motivation and pivot over time – there are ways to live a very happy and fulfilled life that is also dedicated to EA principles.
  • Confront yourself with your major motivational sources regularly: This is related to the above point. For example, talk to other EAs about what motivates you and them, reread your preferred book by your favourite moral philosopher, watch motivating talks or read motivating articles (a quick shout-out for Nate Soares’ ‘On Caring’) or revisit whatever increased your motivation to become an EA in the first place. In addition, consider writing a list of personalised, motivational affirmations for yourself that you read regularly or when feeling low and unmotivated. When considering (re-)watching emotionally salient videos (e.g. slaughterhouse videos), please bear in mind that this can have traumatic effects for some people and might thus be counterproductive.
  • Send your future self letters: describing a) your altruistic motivation, b) wishes for how you should live your life in the years to come and including c) concrete resources (e.g. the new EA Handbook) to re-learn and potentially regain motivation. Consider adding d) a list of ways in which your present self would accept value changes to prevent your future self from rationalising value drift after the fact (e.g. value changes resulting from your future self being better informed, say, about moral philosophy and overall more rational – as opposed to purely circumstantial value drift).
  • Conduct (semi-)annual reviews and planning: By evaluating how your life is going according to your own priorities, goals and values, you can know whether you are still on track to achieving them or whether you should make changes to the status quo.
  • Really make bodily and mental health a priority: This is particularly important for the EA community, which is focused on (self-)optimization and where some people might be tempted in the short run to work really hard and long hours, reduce sleep, neglect nutrition and exercise, and do other things that are neither healthy nor sustainable in the long run. Experiment with and implement practices in your life to reduce the chance of a future (mental) health breakdown, which would a) be very bad in itself, b) radically limit your ability to do good in the short-term and c) could cause a reshuffling of your priorities or act as a Schelling point for your future self to disengage from EA. Julia Wise offers great advice on self-care and burnout prevention for EAs.
  • Make doing good enjoyable: This is related to the above point on mental health. By finding ways to make engaging in altruistic behaviour enjoyable, you create a positive emotional association with the activity. This should help you keep up the commitment in the long run. On the flipside, be careful when engaging in altruistic activities that you have (strong) negative associations with. Julia Wise writes “effective altruism is not about driving yourself to a breakdown. We don't need people making sacrifices that leave them drained and miserable. We need people who can walk cheerfully over the world”. A further advantage of finding ways to combine effective altruism with ‘having fun’ or ‘being cheerful’ is that it will likely make EA much more attractive for others. Concretely, you might want to try the following: Many activities are more fun in a group than alone, so engage in altruistic endeavours together with others if possible. Attempt to associate EA in your life not just with work, but also with socialising, friendship and fun. Make sure not to overwork yourself and keep in mind that “the important lesson of working a lot is to be comfortable with taking a break” (from Peter Hurford's ‘How I Am Productive’).
  • Do good directly: You might want to consider keeping habits of doing good directly, even in cases where these are not top-priority do-gooding activities by themselves. I believe this can be helpful to keep up and increase internal motivation to engage in altruistic activities as well as for cultivating a sense of ‘being an altruistic person’. For example, you could live veg*an, live frugally, donate some amount of money every year (even if the sums are small) and keep up to date with cause area and charity recommendations when making your donation decisions. However, as a counter to this point, I have met someone arguing that spending willpower on low-impact activities might potentially lead to ego depletion (note that this effect is disputed) or compassion fatigue for some people, thereby decreasing their motivation to engage in high-impact behaviour. Regarding career choice, you might see reducing risks of value drift as one reason to place a higher weight on direct work or research within an EA aligned organisation relative to other options such as earning to give or building career capital.
  • Consider ‘locking in’ part of your donation or career plans: While the flexibility to change your plans and retain future option value are important considerations, in some cases making hard-to-reverse decisions could be beneficial to avoid value drift. Application for career planning: be wary of building very general career capital for a long time, “particularly if the built capacity is broad and leaves open appealing non-altruist paths”, Joey writes. Instead you might consider specialising and building more narrow, EA-focused career capital (which is endorsed by 80,000 Hours for people focusing on top-priority paths anyway). However, in this article Ben Todd discusses some counterarguments to locking in your career decisions too early. Application for donations: Consider putting your donations in a donor advised fund instead of a savings account and potentially take a donation pledge (see point below). Joey writes, “that way even if you become less altruistic in the future, you can’t back out on the pledged donations and spend it on a fancier wedding or a bigger house”.
  • Consider taking the Giving What We Can pledge: For me, the ‘lock in’ aspect of the pledge as a commitment device was among the strongest reasons to take it. It is worth pointing out though that taking the pledge could have downsides for some people (e.g. losing flexibility and falling prey to the overjustification effect; for details, read Michael Dicken’s post).
  • Commit yourself publicly: This is another form of ‘lock in’. For example, you could participate in an EA group, write articles describing EA and your motivation to dedicate your life to doing the most good, post on social media about this, talk to other people about EA and be public about your EA career and donation plans, wear EA T-shirts etc. The idea behind this is to engineer peer pressure for your future self and a potential loss of social status that could come with abandoning EA principles; I believe this works (subconsciously) for many as a motivational driving force to stay engaged. For this strategy to work, it seems more important what you think your peers think of you than what they actually think of you. Having said that, I encourage fostering a social norm among EAs not to shame or blame others when value drift occurs to them, in line with the overall recommendation for EAs to be especially nice and considerate.
  • Relationships: For those looking for a partner, I endorse the recommendation of generally just choosing whoever makes you happiest. For most people this anyway includes finding partners who share their values. It is worth pointing out that avoiding value drift might give you an additional reason to place some weight on finding partners who share your values and wouldn't put you under pressure in the long-term to give up your altruistic commitments or make it much harder to implement them. Concretely, you might consider looking for partners via platforms that allow you to share a lot about yourself and don’t match you with people with opposing values (e.g. OkCupid).
  • Apply findings of behavioural science research: I suspect that there are relevant insights from the research on nudging or on successful habit creation and retention (e.g. see these articles, one & two), that can be applied to help you avoid long-term value drift. One way to use nudges to make yourself engage in a desired altruistic behaviour is by making the behaviour the default option. For instance, you might set up automated, recurring donations (i.e. donating as default option) or, Joey writes, “ask your employer to automatically donate a pre-set portion of your income to charity before you even see it in your bank account”. As another example, by working for an EA aligned organisation you can make high-impact direct work or research your default option.

What EA organisations can do to deal with value drift:

  • Encourage norms of considerateness, friendliness and welcomingness within the EA community, which is beneficial in its own right but also helps keep motivational levels of community members high.
  • Conduct further research on causes of value drift and how to avoid it. An obvious starting point is researching the EA ‘reference class’, i.e. looking at the value drift experiences of other social movements. I acknowledge that many EA organisations have already spent significant efforts on similar research projects (e.g. Open Philanthropy Project, Sentience Institute). In particular, there might be ways for Rethink Charity to expand the EA survey to gather more rigorous data on value drift (selection effects are obviously problematic – the people whose values drifted the most will likely not participate in the survey).
  • Continue to support and expand opportunities for community members to surround themselves with other great people, e.g. by organising EAG(x) conferences and EA retreats, supporting local chapters and creating friendly and welcoming online communities (such as this forum or EA Facebook groups).
  • Incorporate the findings of research on value drift into EA career advice, especially when recommending careers whose value will only be realized decades in the future. Rob Wiblin already indicated that 80,000 Hours considers incorporating this into their discussion of discount rates.

I would highly appreciate your suggestions for concrete ways to reduce risks of value drift in the comments.

I warmly thank the following people for providing me with their input, suggestions and comments to this post: Joey Savoie, Pascal Zimmer, Greg Lewis, Jasper Götting, Aidan Goth, James Aung, Ed Lawrence, Linh Chi Nguyen, Huw Thomas, Tillman Schenk, Alex Norman, Charlie Rogers-Smith.

Comments (23)

Comment author: CalebWithers, 07 May 2018 12:33:14PM, 9 points

Thanks for writing this - it seems worthwhile to be strategic about potential "value drift", and this list is definitely useful in that regard.

I have the tentative hypothesis that a framing with slightly more self-loyalty would be preferable.

In the vein of Denise_Melchin's comment on Joey's post, I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think. In the vein of The Replacing Guilt series, I don't think that attempting to override these other values is generally sustainable for long-term motivation.

This hypothesis would point away from pledges or 'locking in' (at least for the sake of avoiding value drift) and, I think, towards a slightly different framing of some suggestions: for example, rather than spending time with value-aligned people to "reduce the risk of value drift", we might instead recognize that spending time with value-aligned people is an opportunity to both meet our social needs and cultivate one's impactfulness.

Comment author: Darius_Meissner, 10 May 2018 01:22:33PM, 0 points

Thanks for your comment! I agree with everything you have said and like the framing you suggest.

I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously

This is what I tried to address though you have expressed it more clearly than I could! As some others have pointed out as well, it might make sense to differentiate between 'value drift' (i.e. change of internal motivation) and 'lifestyle drift' (i.e. change of external factors that make implementation of values more difficult). I acknowledge that, as Denise's comment points out, the term 'value drift' is not ideal in the way that Joey and I used it and that:

As the EA community we should treat people sharing goals and values of EA but finding it hard to act towards implementing them very differently to people simply not sharing our goals and values anymore. Those groups require different responses. (Denise_Melchin comment).

However, it seems reasonable to me to be concerned about and attempt to avoid both value and lifestyle drift, and in many cases it will be hard to draw a line between the two (as changes in lifestyle likely precipitate changes in values and the other way around).

Comment author: pmelchor, 10 May 2018 05:50:11PM, 4 points

Great posts, Joey and Darius!

I'd like to introduce a few considerations as an "older" EA (I am 43 now):

  • Scope of measurement: Joey’s post was based on 5-year data. As Joey mentioned, “it would take a long time to get good data”. However, it may well be that expanding the time scope would yield very different results. It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag. I base this on purely anecdotal evidence, but I have seen many people (including myself) recover interests, hobbies, passions, etc. once their children are older. I am quite new to the movement, but there is no way that 10 years ago I would have put in the time I am now devoting to EA. If I had started my involvement in college (supposing EA had been around), you could have seen a sharp decline during my thirties (and tagged that as value drift)… without knowing there would be a sharp increase in my forties.

  • Expectations: This is related to my previous point. Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions. Keeping the initial engagement levels constant sounds good in theory, but it may not be the best strategy in the long run (e.g. potentially leading to burnout, etc). Maybe we should think of “engagement fluctuations” as something natural and to be expected instead of something dangerous that must be fought against.

  • EA interaction styles: If and as the median age of the community goes up, we may need to adapt the ways in which we interact (or rather add to the existing ones). It can be much harder for people with full-time jobs and children to attend regular meetings or late afternoon “socials”. How can we make it easier for people that have very strong demands on their time to stay involved without feeling that they are missing out or that they just can’t cope with everything? I don’t have an answer right now, but I think this is worth exploring.

The overall idea here is that instead of fighting an uneven involvement/commitment across time, it may be better to actually plan for it and find ways of accommodating it within a “lifetime contribution strategy”. It may well be that there is a minimum threshold below which people completely abandon EA. If that is so, I suggest we think of ways of making it easy for people to stay above that threshold at times when other parts of their lives are especially demanding.

Comment author: Darius_Meissner, 10 May 2018 07:56:18PM, 1 point

Great points, thanks for raising them!

It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag.

It would be very encouraging if this is a common phenomenon and many people 'dropping out' might potentially come back at some point to EA ideals. It provides a counterexample to something I have commented earlier:

It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to 'changing the world for the better'. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): "If you're not a socialist at the age of 20 you have no heart. If you're not a conservative at the age of 40, you have no head".

Regarding your related point:

Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions (...) and find ways of accommodating it within a “lifetime contribution strategy”

I strongly agree with this, which was my motivation to write the post in the first place! I don't think constant involvement/commitment to (effective) altruism is necessary to maximise your lifetime impact. That said, it seems like for many people there is a considerable chance of never 'finding their way back' to this commitment after they have spent years/decades in non-altruistic environments, on starting a family, on settling down etc. This is why I'd generally think people with EA values in their twenties should consider ways to at least stay loosely involved/updated over the mid- to long-term to reduce the chance of this happening. So it's a great example to hear that you actually managed to do just that! In any case, more research is needed on this - I somewhat want to caution against survivorship bias, which could become an issue if we mostly talk to the people who did what is possibly exceptional (e.g. took up a strong altruistic commitment in their forties or have been around EA for a long time).

Comment author: pmelchor, 11 May 2018 09:28:36AM, 2 points

Good points. If I were doing a write up on this subject it would be something like this:

"As the years go by, you will likely go through stages during which you cannot commit as much time or other resources to EA. This is natural and you should not interpret lower-commitment stages as failures: the goal is to maximize your lifetime contributions and that will require balancing EA with other goals and demands. However, there is a risk that you may drift away from EA permanently if your engagement is too low for a long period of time. Here are some tools you can use to prevent that from happening:"

Comment author: KarolinaSarek, 06 May 2018 07:03:47PM, 4 points

Thank you, Joey, for gathering those data. And thank you, Darius, for providing us with these suggestions for reducing this risk. I agree that further research on the causes of value drift and how to avoid it is needed. If the phenomenon is explained correctly, that could be a great asset to EA community building. But regardless of this explanation, your suggestions are valuable.

It seems to be a generally complex problem because retention encapsulates the phenomenon in which a person develops an identity, skill set, and consistent motivation or dedication to significantly change the course of their life. CEA in their recent model of community building framed it as resources, dedication, and realization.

Decreasing retention is also observed in many social movements. Some insights about how it happens can be gleaned from the sociological literature. Although it is still underexplored and the sociological analysis might be of mediocre quality, it might still be useful to have a look at it. For example, this analysis indicates that a “movement’s ability to sustain itself is a deeply interactive question predicted by its relationship to its participants: their availability, their relationships to others, and the organization’s capacity to make them feel empowered, obligated, and invested."

Additional aspects of value drift to consider on an individual level that might not be relevant to other social movements: mental health and well-being, pathological altruism, purchasing fuzzies and utilons separately.

The reasons for value drift away from EA seem to be as important in understanding the process as the value drift that led to EA. E.g. in Joey's post, he gave an illustrative story of Alice. What could explain her value drift is the fact that people during their first year of college are more prone to social pressure and the need for belonging. That could make her become an EA and then drift away when she left college and her EA peers. So "surround yourself with value aligned people" for the whole course of your life. That also stresses the importance of the untapped potential of local groups outside the main EA hubs. For this reason, in the case of outreach, it's worth considering whether we should rush to translate effective altruism.

About the data itself: we might be making wrong inferences in trying to explain those data, because they show only a fraction of the process. Maybe if we could observe the curve of engagement, it would fluctuate over a longer period of time, e.g. 50% in the first 2-5 years, 10% in the 6th year, 1% for the next 2-3 years, and then coming back up to 10%, 50% etc.? We might hypothesize that life situation influences the baseline engagement for a short period (1 month - 3 years). Analogous to changes in the baseline of happiness and the influence of life events explained by hedonic adaptation, maybe we have something like altruistic adaptation, which changes after a significant life event (changing the city, marriage etc.) and then comes back to baseline.

Additionally, since the level of engagement in EA and other significant variables do not correlate perfectly, the data could also be explained by regression to the mean: if some of the EAs were hardcore at the beginning, they will tend to be closer to the average on a second measurement, so from 50% to 10%, and those at 10% to 1%. Anyhow, the likelihood that value drift is real is higher than that it's not.

More could be done about value drift on the structural level, e.g. it might also be explained by the main bottlenecks in the community itself, like the mid-tier trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations -> multiple unsuccessful job applications -> frustration -> dropout).

Because the mechanism of value drift would determine the strategies to minimize its risk or harm, and because the EA community might not be representative of other social movements, we should systematically and empirically explore those and other factors in order to find the 80/20 of long-lasting commitment.

Comment author: Denise_Melchin, 11 May 2018 06:29:53PM, 6 points

More could be done about value drift on the structural level, e.g. it might also be explained by the main bottlenecks in the community itself, like the mid-tier trap (e.g. too good for running a local group, but not good enough to be hired by the main EA organizations -> multiple unsuccessful job applications -> frustration -> dropout).

Doing effective altruistic things ≠ Doing Effective Altruism™ things

All the main Effective Altruism orgs together employ only a few dozen people. There are two orders of magnitude more people interested in Effective Altruism. They can't all work at the main EA orgs.

There are lots of highly impactful opportunities out there that aren't branded as EA - check out the career profiles on 80,000 Hours for reference: academia, politics, tech startups, doing EtG in random places, etc.

We should be interested in having as high an impact as possible and not in 'performing EA-ness'.

I do think that EA orgs dominate the conversations within the EA sphere, which can lead to this unfortunate effect where people quite understandably feel that the best thing they can do is work there (or at an 'EA approved' workplace like DeepMind or Jane Street) - or nothing. That's counterproductive and sad.

A potential explanation: it's difficult for people to evaluate the highly impactful positions in other fields. Therefore the few organisations and firms we can all agree on are Effectively Altruistic get a disproportionate amount of attention and 'status'.

As a community, we should encourage people to find the highest-impact opportunity for them out of many possible options, of which working at EA orgs is only a tiny fraction.

Comment author: Darius_Meissner 10 May 2018 01:44:37PM *  0 points [-]

Thanks for your comment, Karolina!

That also stresses the importance of untapped potential of local groups outside the main EA hubs.

Yep, I see engaging people & keeping up their motivation in one location as a major contribution of EA groups to the movement!

maybe we have something like altruistic adaptation: engagement changes after a significant life event (moving to a new city, marriage, etc.) and then returns to baseline.

This is an interesting suggestion, though I think it unlikely. It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to 'changing the world for the better'. However, most of these people have reduced their altruistic engagement over time and have at some point started a family, bought a house etc. and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): "If you're not a socialist at the age of 20 you have no heart. If you're not a conservative at the age of 40, you have no head".

More could also be done about value drift on the structural level. It might be partly explained by bottlenecks in the community itself, like the Mid-Tier Trap

This is a valuable and under-discussed point that I endorse!

Comment author: ThomasSittler 06 May 2018 05:20:21PM *  7 points [-]

Thanks for the post. I'm sceptical of lock-in (or, more Homerically, tie-yourself-to-the-mast) strategies. It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

I know you said your post just aims to provide ideas and tools for how you can avoid value drift if you want to do so. But even so, in the spirit of compromise between your time-slices, solutions that destroy less option value are preferable.

Comment author: Yannick_Muehlhaeuser 06 May 2018 05:48:51PM 4 points [-]

There's probably something to be gained by investigating this further, but I would guess that most cases of value drift are due to a loss of willpower and motivation rather than an update of one's opinion. I think the term 'value drift' is a bit ambiguous here, because the stuff you mention is something we don't really want to include in whatever term we use. Now that I think about it, what really makes the difference here are deeply held intuitions about the range of our moral duty, for which 'changing your mind' doesn't always seem the appropriate description.

Comment author: Joey 06 May 2018 06:11:43PM 3 points [-]

Say a person could check a box and commit to being vegan for the rest of their life. Do you think that would be an ethical/good thing for someone to do, given what we know about average recidivism in vegans?

Comment author: RandomEA 07 May 2018 11:03:07AM *  4 points [-]

It could turn out to be bad. For example, say she pledges in 2000 to "never eat meat, dairy, or eggs again." By 2030, clean meat, dairy, and eggs become near universal (something she did not anticipate in 2000). Her view in 2030 is that she should be willing to order non-vegan food at restaurants since asking for vegan food would make her seem weird while being unlikely to prevent animal suffering. If she takes her pledge seriously and literally, she is tied to a suboptimal position (despite only intending to prevent loss of motivation).

This could happen in a number of other ways:

  1. She takes the Giving What We Can Further Pledge* intending to prevent herself from buying unnecessary stuff but the result is that her future self (who is just as altruistic) cannot move to a higher cost of living location.

  2. She places her donation money into a donor-advised fund intending to prevent herself from spending it non-altruistically later but the result is that her future self (who is just as altruistic) cannot donate to promising projects that lack 501(c)(3) status.

  3. She chooses a direct work career path with little flexible career capital intending to prevent herself from switching to a high earning career and keeping all the money but the result is that her future self (who is just as altruistic) cannot easily switch to a new cause area where she would be able to have a much larger impact.

It seems to me that actions that bind you can constrain you in unexpected ways despite your intention being to only constrain yourself in case you lose motivation. Of course, it may still be good to constrain yourself because the expected benefit from preventing reduced altruism due to loss of motivation could outweigh the expected cost from the possibility of preventing yourself from becoming more impactful. However, the possibility of constraining actions ultimately being harmful makes me think that they are distinct from actions like surrounding yourself with like-minded people and regularly consuming EA content.

*Giving What We Can does not push people to take the Further Pledge.

Comment author: MichaelPlant 06 May 2018 08:43:29PM 3 points [-]

It seems strange to override what your future self wants to do,

I think you're just denying the possibility of value drift here. If you think it exists, then commitment strategies could make sense. If you don't, they won't.

Comment author: ThomasSittler 07 May 2018 09:14:01AM 2 points [-]

Michael -- keen philosopher that you are, you're right ;)

The part you quote does ultimately deny that value drift is something we ought to combat (holding constant information, etc.). That would be my (weakly held) view on the philosophy of things.

In practice, though, there may be large gains from compromise between time-slices, compared to the two extremes of always doing what your current self wants or using drastic commitment devices. So we could aim to capture those gains as long as we're unsure about the philosophy.

Comment author: Khorton 06 May 2018 09:48:00PM 2 points [-]

I disagree - I think you can believe "value drift" exists and also allow your future self autonomy.

My current "values" or priorities are different from my teenage values, because I've learned and because I have a different peer group now. In ten years, they will likely be different again.

Which "values" should I follow: 16-year-old me, 26-year-old me, or 36-year-old me? It's not obvious to me that the right answer is 26-year-old me (my current values).

Comment author: Darius_Meissner 10 May 2018 02:16:58PM *  1 point [-]

Thanks, Tom! I agree with you that, all else being equal,

solutions that destroy less option value are preferable

though I still think that in some cases the benefits of hard-to-reverse decisions can outweigh the costs.

It seems strange to override what your future self wants to do, if you expect your future self to be in an equally good epistemic position. If anything, future you is better informed and wiser...

This seems to assume that our future selves will actually make important decisions purely (or mostly) based on their epistemic status. However, as CalebWithers points out in a comment:

I believe most people who appear to have value "drifted" will merely have drifted into situations where fulfilling a core drive (e.g. belonging, status) is less consistent with effective altruism than it was previously; as per The Elephant in the Brain, I believe these non-altruistic motives are more important than most people think.

If this is valid (as it seems to me), then many of the important decisions of our future selves are the result of more or less conscious psychological drives rather than an all-things-considered, reflective, value-based judgment. It is very hard for me to imagine that my future self could ever decide to stop being altruistic or caring about effectiveness on the basis of being better informed and more rational. However, I find it much more plausible that other psychological drives could bring my future self to abandon these core values (and find a rationalization for it). To be frank, though I generally appreciate the idea of 'being loyal to and cooperating with my future self', I seem to place considerably lower trust in the driving motivations of my future self than many others do. From my perspective now, it is my future self that might act disloyally with regard to my current values, and that is what I want to find ways to prevent.

It is worth pointing out that throughout the article and this comment I mostly speak about high-level, abstract values such as a fundamental commitment to altruism and to effectiveness. This is what I don't want to lose and what I'd like to lock in for my future self. As illustrated by RandomEA's comment, I would be much more careful about attempting to tie myself to the mast with respect to very specific values such as discount rates between humans and non-human animals, specific cause-area or intervention preferences, etc.

Comment author: BenMillwood  (EA Profile) 13 May 2018 08:36:40AM 0 points [-]

It's not enough to place a low level of trust in your future self for commitment devices to be a bad idea. You also have to put a high level of trust in your current self :)

That is, if you believe in moral uncertainty, and believe you currently haven't done a good job of figuring out the "correct" way of thinking about ethics, you may think you're likely to make mistakes by committing and acting now, and so be willing to wait, even in the face of a strong chance your future self won't even be interested in those questions anymore.

Comment author: Emanuele_Ascani 13 May 2018 10:26:39AM *  2 points [-]

One thing I find really helpful for staying consistent in my values is introspection, followed by writing the results down in a note, both a physical one and in a text file on my PC. This strategy really works for me, both for figuring out who I am and for keeping my actions consistent with it over long periods of time. I still have 70% of the notes I wrote 5 years ago, and 100% of the most important ones, which form the core of all my values.

Comment author: John_Maxwell_IV 10 May 2018 02:34:07AM *  1 point [-]

In particular, there might be ways for Rethink Charity to expand the EA survey to gather more rigorous data on value drift (selection effects are obviously problematic – the people whose values drifted the most will likely not participate in the survey).

An easy way to gather a pool of "value drifted" people to survey could be to look at previous iterations of the EA survey and identify people who filled out the survey at some point in the past, but haven't filled it out in the past N years. Then you could email them a special survey asking why they haven't been filling out the survey, perhaps offering a chance to win an Amazon gift card as an incentive, and include questions about sources of value drift.

Comment author: Khorton 14 May 2018 12:06:12AM *  0 points [-]

What you're calling "value drift," Evangelical Christians call "backsliding." The idea is you've taken steps toward a countercultural lifestyle in line with your values, but now you're sliding back toward the mainstream - for an Evangelical Christian, an example would be binge drinking with friends. Backsliding is common and Evangelicals use many of the techniques listed above to counteract it.

Evangelicals heavily emphasize community. Christians are encouraged to attend services, join a small group Bible study, socialize with each other, and marry other Christians.

I also remember being encouraged to establish good habits and stick with them - for example, reading the Bible every morning.

We also, of course, begin with a public commitment to Christianity. And community members will pull you aside and have a chat with you (read: judge you) if they think you're in danger of backsliding.

I've seen all of these strategies work, although some have undesirable side effects.

Comment author: JoshP 07 May 2018 01:49:16PM 0 points [-]

Good article in lots of ways. I'm perhaps slightly put off by the sheer amount of info here: I don't feel like I can take all of this in easily, given my own laziness and the number of goals I already prioritise. Not sure there's an easy solution to that (maybe some sort of top two or three suggestions?), but it feels like a bit of an information overload. Thanks for writing it though Darius, I enjoyed it :)

Comment author: Joey 07 May 2018 03:18:04PM 2 points [-]

Personally, if I were to simplify this post down to its top two pieces of advice: 1) focus on doing good now, and 2) surround yourself with people who will keep encouraging you to do good long term.