Dec 21 2015 · 3 min read


[This is a personal and emotional post about my feelings about EA. Probably not for everyone! Could cause sadness, scrupulosity concerns, and guilt.]

I think it's true that 2x the suffering is 2x as bad, and it would be emotionally accurate for it to make me 2x as sad, i.e. if it did then my emotions would better reflect reality. But I worry that a lot of people get tangled up in the distinction between emotional accuracy and the instrumental value of emotions. They're often correlated; it's useful to be more scared of dying in a car crash than of dying by lion attack. And emotions can be motivating, so having emotions that reflect reality can cause greater effectiveness.

But this gets tricky with EA.

I believe the moral importance of suffering increases linearly as suffering increases, but there are non-linear marginal returns to having emotions that reflect that. Just as there are instrumentally rational techniques that require irrationality, there are instrumentally useful emotions that require emotional inaccuracy. I don't know what emotions are most instrumentally useful for improving the world, but they're probably not going to be the ones that correspond linearly to the reality of the amounts of suffering in the world. 

I only know from the inside my own seemingly morally relevant experiences, my subjective feelings of joy and serenity and curiosity and sorrow and anger and apathy. In practice, I can only at my most emotionally expansive moments hold in my mind all the morally relevant experiences I think I have in my median hour. So I can maybe comprehend less than 1/140,000 of the morally important things I've personally felt*. I don't know if I'm an outlier in that regard, but I'm pretty certain that I am completely incapable of emotionally understanding a fraction of the value of a life (even when I have the huge advantage of having felt the life from the inside). And that's not changing any time soon.

Yet, it somehow seems to be true that billions or trillions of beings are having morally relevant experiences right now, and had them in the past, and (many times) more could have morally relevant experiences in the future. My emotions are not well-equipped to deal with this; they can't really understand numbers bigger than a three or experiences longer than an hour (true story) (I may be unusually incompetent in this regard, but probably not by many orders of magnitude).

The cost to save a human life might be a few thousand dollars. The value of each sentient life is incomprehensibly vast**. EA is a "bargain" because so many lives are so drastically undervalued by others. And resources are scarce; even if some lives weren't undervalued relative to others, we still couldn't give everyone what their value alone would compel us to give if we had more to give.

Having to triage is desperately sad. The fact that we can't help everyone is terrible and tragic; we should never stop fighting to be able to help everyone more. I worry about losing sight of this, and denying the emotional correctness of feeling an ocean of sorrow for the suffering around us. To feel it is impossible, and would be debilitating. 

I can't emotionally comprehend all of what I'm doing and not doing, and wouldn't choose to if I could. That's why, for me, effective altruism is a leap of faith. I'm learning to live a life I can't emotionally fully understand, and I think that's okay. But I think it's good to remind myself, from time to time, what I'm missing by necessity. 

 

 

*Assuming I have no morally relevant experiences while sleeping, which seems untrue. (At 16 waking hours a day, 140,000 hours comes to roughly 24 years of waking life, so a single hour is about 1/140,000 of it.)

**With the exception of borderline-sentient or very short-lived beings that have lives with little (but nonzero!) moral value.  


Comments (11)

Great post!

Out of interest, can you give an example of an "instrumentally rational technique that requires irrationality"?


Great post! It reminds me of this one: http://mindingourway.com/the-value-of-a-life/

Yeah, definitely includes some similar ideas :) Hadn't read that before (but had heard the idea vaguely floated).

This is really powerful, Claire, thank you for sharing it!

Let me be clear that I'm not prescribing how you should feel, just brainstorming about an instrumental approach toward the most effective emotional tonality.

For an ideal emotional tonality, I wonder if it might be helpful to orient not toward feeling sorrow for the lives not saved or morally relevant experiences not had, but toward feeling neutral about them, and to draw positive experience only from additional lives saved and morally relevant experiences had. This can tap into the power of positive reinforcement and rewards, which research suggests tend to motivate effective behavior better than negative reinforcement does. Since sadness tends not to be motivating, Effective Altruists might be better off orienting toward avoiding sadness and focusing on experiencing joy over successes.

This presumes an ability to self-modify one's emotions, which is certainly doable, but quite effortful.

Again, this is not meant to be prescriptive, just brainstorming about what an ideal emotional tonality could be.

Thanks Gleb.

It was my understanding that thinking of both potential good and bad outcomes (mental contrasting) was more powerfully motivating than thinking of either alone. In my experience, psychology research on this subject also isn't super reliable. Personally, I definitely find thinking about bad outcomes motivating, as I'm a naturally happy person and good outcomes don't make me much happier than the baseline for long.

I expect this varies a lot from person to person.

The motivational aspects do vary a lot from person to person :-) How specific emotions affect motivation is more consistent across people, however - far from universal, but true for the majority.

Negative feelings of sadness/sorrow tend to be demotivating, and may lead to depression. Anxiety can be motivating or demotivating, depending on the extent of the anxiety. Anger/frustration tends to be motivating.

Positive feelings of satisfaction/contentment are usually demotivating. Joy/pleasure/excitement can be motivating, especially if coupled with a clear means of gaining these experiences.

Just discovering this now, but it really resonates. “A leap of faith” — I like that.

I think this is an incredibly powerful post, and definitely worth sharing. I wonder if there's a way to edit some of it to make it more front-facing, without losing out on any of the emotional power.

Sadly, I don't think there's a way to make it a good 'front facing' pick, because it would seem too 'hardcore' to newcomers.

Thanks! If you have any suggestions, I'd love to hear them.

Graham Oddie, in his argument for moral realism, writes about the distinction between value and experience of value.

Feelings of empathy can be considered moral sense-data which grow more or less vivid depending on your distance from, and acquaintance with, the object of moral concern. It's similar to the way objects appear large when close but small when far away.

In that respect, there's nothing inappropriate about feeling more strongly about nearby moral problems and less strongly about faraway moral problems if you recognize that the intrinsic value of the issue is unchanged by the factors which affect your mentality.

But if we only cared about distant others as much as we're naturally inclined to, or as much as the average person down the street does, we might not do anything, because moral beliefs alone rarely provide sufficient motivation to act (in my opinion). So I dunno, I agree with him in terms of moral epistemology, but not in terms of moral motivation. We can define, to some extent, how much we care, and we can influence each other to care more or less.

It's rarely possible to care about all other people equally. The naive view, on which we would experience equal care and concern for the trials and experiences of each individual, would be not just unfeasible but wholly undesirable.

At the end of the day, the correct framework for the consequentialist is to recognize that they must adopt not just the actions, but also the levels of empathy, dispositions, and attitudes which best facilitate their productivity and contribution to the world.

This could involve caring about people in terms of instrumental rather than intrinsic value - caring about yourself the most because you are an effective altruist, caring about your friends and family somewhat less because they provide emotional support and stability to an effective altruist, and caring about distant others the least. This gives you a justification for possessing some attitudes and approaches to relationships which are similar to those of a normal person. You need not be constantly thinking of people in utilitarian terms; you can take a dual mental approach where your inner attitudes remain similar to those of ordinary people if (and only if) those kinds of attitudes make you the most effective kind of person - see "Alienation, Consequentialism and the Demands of Morality" by Peter Railton.