Comment author: pmelchor  (EA Profile) 11 September 2018 07:32:00AM 3 points [-]

I am personally very interested in cause areas like global poverty, so it is great to see more people wanting to discuss the related issues in depth.

Nevertheless, I strongly support the definition of EA as a question (how can we use our resources to help others the most?), and that makes me reluctant to tag myself as an "[enter category here] EA" (e.g. "near-term EA", "far-future EA"...).

In practical terms, this leads me to enjoy having my views challenged by people who have come to different conclusions, and I tend to favour a "portfolio approach" to doing good, somewhat along the lines of Open Phil's "worldview diversification".

Regarding discussion, there should be great spaces for both the meta topics and the cause-specific ones. Wouldn't it be ideal if we could host all those discussions under the same roof? Maybe this thread can be used as an input for the upcoming EA Forum 2.0. The feature request would be something like "make it easy to host and find worldview-specific discussions".

Comment author: Carl_Shulman 12 August 2018 10:12:06PM *  4 points [-]

The argument is that some things in the relatively near term have lasting effects that cannot be reversed by later generations. For example, if humanity goes extinct as a result of war with weapons of mass destruction this century, before it can become more robust (e.g. by being present on multiple planets, creating lasting peace, etc.), then there won't be any future generations to act in our stead (at least not for the many millions of years it would take another species to follow in our footsteps, if that happens at all before the end of the Earth's habitability).

Likewise, if our civilization were replaced this century by unsafe AI with stable, less morally valuable ends, then future generations over millions of years would be controlled by AIs pursuing those same ends.

This period appears exceptional over the course of all history so far in that we might be able to destroy or permanently worsen the prospects of civilization as a result of new technologies, but before we have reached a stable technological equilibrium or dispersed through space.

Comment author: pmelchor  (EA Profile) 15 August 2018 02:46:04PM 0 points [-]

Thanks, Carl. I fully agree: if we are convinced it is essential that we act now to counter existential risks, we must definitely do that.

My question is more theoretical (feel free not to continue the exchange if you find this less interesting). Imagine we lived in a world just like ours, but where risks from AI, global pandemics, etc. are simply not possible: for whatever reason, those huge risks are just not there. An argument in favour of weighting the long-term future heavily could still be valid (there could be many more people alive in the future and therefore a great potential for either flourishing or suffering). But how should we weigh that against the responsibility to help people alive today, given that we are the only ones who can do it (future generations will not be able to replace us in that role)?

Comment author: RandomEA 04 August 2018 06:12:11PM *  44 points [-]

Here are ten reasons you might choose to work on near-term causes. The first five are reasons you might think near-term work is more important, while the latter five are reasons you might work on near-term causes even if you think long-term future work is more important.

  1. You might think the future is likely to be net negative. Click here for why one person initially thought this and here for why another person would be reluctant to support existential risk work (it makes space colonization more likely, which could increase future suffering).

  2. Your view of population ethics might lead you to think existential risks are relatively unimportant. Of course, if your view were merely a standard person-affecting view, it would be subject to the response that work on existential risk is high value even if only the present generation is considered. However, you might go further and adopt an Epicurean view, under which it is not bad for a person to die a premature death (meaning that death is only bad to the extent that it inflicts suffering on oneself or others).

  3. You might have a methodological objection to applying expected value reasoning to cases where the probability is small. While the author attributes this view to Holden Karnofsky, Karnofsky now puts much more weight on the view that improving the long-term future is valuable.

  4. You might think it's hard to predict how the future will unfold and what impact our actions will have. (Note that the post is from five years ago and may no longer reflect the views of the author.)

  5. You might think that AI is unlikely to be a concern for at least 50 years (perhaps based on your conversations with people in the field). Given that ongoing suffering can only be alleviated in the present, you might think it's better to focus on that for now.

  6. You might think that when there is an opportunity to have an unusually large impact in the present, you should take it even if that impact is smaller than the expected impact of spending the same money on long-term future causes.

  7. You might think that the shorter feedback loops of near term causes allow us to learn lessons that may help with the long term future. For example, Animal Charity Evaluators may help us get a better sense of how to estimate cost-effectiveness with relatively weak empirical evidence, Wild Animal Suffering Research may help us learn how to build a new academic field, and the Good Food Institute may help us gain valuable experience influencing major economic and political actors.

  8. You might feel like you are a bad fit for long-term future causes because they require more technical expertise (making it hard to contribute directly) and are less funding-constrained (making it hard to contribute financially).

  9. You might feel a spiritual need to work on near-term causes. Relatedly, you might feel that you're more likely to keep doing direct work over the long term if you can stay motivated by videos of animal suffering (similar to how you might donate a smaller portion of your income because you think doing so makes you more likely to keep giving over the long term).

  10. As you noted, you might think there are public image or recruitment benefits to near-term work.

Note: I do not necessarily agree with any of the above.

Comment author: pmelchor  (EA Profile) 11 August 2018 10:45:31PM 5 points [-]

I think there is an 11th reason why someone may want to work on near-term causes: while we may be replaceable by the next generations when it comes to working on the long-term future, we are irreplaceable when it comes to helping people and sentient beings who are alive today. In other words: influencing what may happen 100 years from now can be done by us, our children, our grandchildren and so on; however, only we can help, say, the 700 million people living in extreme poverty today.

I have not come across counter-arguments to this one: has it been discussed in previous posts or related material? Or maybe it is a basic question in moral philosophy 101 and I am just not knowledgeable enough :-)

Comment author: Darius_Meissner 10 May 2018 07:56:18PM *  1 point [-]

Great points, thanks for raising them!

> It is possible that a graph plotting a typical EA's degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag.

It would be very encouraging if this were a common phenomenon and many people who 'drop out' came back to EA ideals at some point. It provides a counterexample to something I have commented earlier:

> It is worth pointing out that most of this discussion is just speculation. The very limited anecdata we have from Joey and others seems too weak to draw detailed conclusions. Anyway: From talking to people who are in their 40s and 50s now, it seems to me that a significant fraction of them were at some point during their youth or at university very engaged in politics and wanted to contribute to 'changing the world for the better'. However, most of these people have reduced their altruistic engagement over time, have at some point started a family, bought a house, etc., and have never come back to their altruistic roots. This common story is what seems to be captured by the saying (that I neither like nor endorse): "If you're not a socialist at the age of 20 you have no heart. If you're not a conservative at the age of 40, you have no head".

Regarding your related point:

> Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions (...) and find ways of accommodating it within a "lifetime contribution strategy"

I strongly agree with this, which was my motivation for writing the post in the first place! I don't think constant involvement/commitment to (effective) altruism is necessary to maximise your lifetime impact. That said, it seems that for many people there is a considerable chance of never 'finding their way back' to this commitment after they have spent years or decades in non-altruistic environments, started a family, settled down, etc. This is why I'd generally think people with EA values in their twenties should consider ways to at least stay loosely involved and updated over the mid- to long term, to reduce the chance of this happening. So it is great to hear that you actually managed to do just that! In any case, more research is needed on this - I somewhat want to caution against survivorship bias, which could become an issue if we mostly talk to people who did what is possibly exceptional (e.g. took up a strong altruistic commitment in their forties or have been around EA for a long time).

Comment author: pmelchor  (EA Profile) 11 May 2018 09:28:36AM 2 points [-]

Good points. If I were doing a write-up on this subject, it would be something like this:

"As the years go by, you will likely go through stages during which you cannot commit as much time or other resources to EA. This is natural and you should not interpret lower-commitment stages as failures: the goal is to maximize your lifetime contributions and that will require balancing EA with other goals and demands. However, there is a risk that you may drift away from EA permanently if your engagement is too low for a long period of time. Here are some tools you can use to prevent that from happening:"

Comment author: pmelchor  (EA Profile) 10 May 2018 05:50:11PM 5 points [-]

Great posts, Joey and Darius!

I'd like to introduce a few considerations as an "older" EA (I am 43 now):

  • Scope of measurement: Joey’s post was based on five years of data. As Joey mentioned, “it would take a long time to get good data”. However, it may well be that expanding the time scope would yield very different results. It is possible that a graph plotting a typical EA’s degree of involvement/commitment with the movement would not look like a horizontal line but rather like a zigzag. I base this on purely anecdotal evidence, but I have seen many people (including myself) recover interests, hobbies, passions, etc. once their children are older. I am quite new to the movement, but there is no way that 10 years ago I would have put in the time I am now devoting to EA. If I had started my involvement in college (supposing EA had been around), you could have seen a sharp decline during my thirties (and tagged that as value drift)… without knowing there would be a sharp increase in my forties.

  • Expectations: This is related to my previous point. Is it optimal to expect a constant involvement/commitment with the movement? As EAs, we should think of maximizing our lifetime contributions. Keeping the initial engagement levels constant sounds good in theory, but it may not be the best strategy in the long run (potentially leading to burnout, for example). Maybe we should think of “engagement fluctuations” as something natural and to be expected instead of something dangerous that must be fought against.

  • EA interaction styles: If and as the median age of the community goes up, we may need to adapt the ways in which we interact (or rather add to the existing ones). It can be much harder for people with full-time jobs and children to attend regular meetings or late-afternoon “socials”. How can we make it easier for people who have very strong demands on their time to stay involved without feeling that they are missing out or that they just can’t cope with everything? I don’t have an answer right now, but I think this is worth exploring.

The overall idea here is that instead of fighting an uneven involvement/commitment across time, it may be better to actually plan for it and find ways of accommodating it within a “lifetime contribution strategy”. It may well be that there is a minimum threshold below which people completely abandon EA. If that is so, I suggest we think of ways of making it easy for people to stay above that threshold at times when other parts of their lives are especially demanding.