Feb 9, 2017 · 4 min read

I'm an employee of the Centre for Effective Altruism, but my thoughts are not necessarily those of my employer.

As someone who frequents the EA community in the Bay Area, I've noticed a verbal habit that I think we should be wary of. In discussing what they are focusing on, people often say things like "I primarily care about existential risks" or, more cringe-inducingly, "I don't care about global poverty." I use this example given the existential risk-oriented disposition of many Bay Area EAs, but I don't expect they are exceptional in this misuse of language. While this may be little more than a verbal misstep, I think it hints at other problems we may face.

I know what they mean. In most cases it's not that they actually primarily care about just making sure humans continue to exist, period, and don't care about the quality of life of the billions of people alive today. It's that, given the high level of implicit understanding between the people conversing, "care" actually means something closer to "prioritize" or "think is the best use of my resources." That said, I still think this shorthand is dangerous for the EA brand, the EA community, and our EA goals.

The brand risk needs little explanation. A newcomer to the community would be really weirded out, if not horrified, to hear that a person "doesn't care about global poverty" or whatever other cause they're disavowing. Coupled with the often-cavalier tone in which such comments are made, EAs certainly stand to shock people who might otherwise be inclined to get involved. Sure, some might be intrigued by the boldness of these statements, but would you have been more intrigued, or turned off? What about the other EAs you respect, back in the infancy of their involvement? (Perhaps you would have been intrigued; I know I would have been turned off.) Should a large media outlet, social justice-oriented group, or charity in our network catch wind of such language, there's no telling what negative ramifications it could have for the movement.

It also serves as a quick way to create a rift between oneself and one's potential allies in the community. People who are deeply involved in EA and focus on public health, factory farming, better science, etc. have chosen that cause because they think it is the one with the most promise. It's certainly not because they don't care about other causes that hinder the wellbeing of creatures they deem morally relevant. Particularly on an emotional level, vocally not "caring" about a person's chosen cause seems to imply not caring about those whom the cause impacts. Maybe the cavalier EA really doesn't care about the cause, but I've only found this to be the case amongst people who don't think that animals have morally relevant suffering. The cavalier EA probably does care about young African children, at least intellectually, and by speaking this way risks alienating the very people with whom they should be collaborating.

Finally, and in my perception most detrimentally, one risks coming to believe that one does not, in fact, care about e.g. global poverty. Yes, it's really draining to have your heartstrings tugged every which way, to feel the opportunity cost of all the things you forgo by focusing your efforts. If, like me, you tried to help people before finding the community, the EA approach to doing good was a bit of a relief, giving you emotional permission to block out some of the endless charitable asks. Maybe you instead came to EA from a place of excitement at the opportunities for influence and power, or intrigued by the complexity of the problems at hand. For those types, caring about anything except The Goal may feel compromising, antithetical to the classic advice for career or startup success. And yeah, it is, but only in the short run. Horse-blinder focus is only good when you are pushing to make progress on a predetermined instrumental goal, and that goal shouldn't stay stable for long, given how rapidly most problems one might address change.

In the long run, as you reevaluate your fundamental goal and your path to achieving it, emotional openness is likely to be essential to the cognitive openness required to notice you're wrong and change course. This isn't to say that you should start from square one every time you check your plans; it's to say that you probably were wrong in some way, and keeping in mind the reason you're here in the first place (because you care) will make you less prone to personal bullshitting. Nothing like an occasional stare into the void to keep your other motives in check.

I've also found that intentionally remembering that I do care reinvigorates my resolve to do the best I can. I periodically reaffirm my compassion for those whose afflictions fall outside my priorities. I find it enraging that people can be incarcerated for years for petty crimes or crimes they didn't commit. I am sickened by the nonchalant treatment and killing of lab animals. I ache for teenagers mired in deep depression and suicidal thoughts. I remind myself of their plight not as a form of self-torture but as a means of self-humbling, to remind myself how simplistic my characterizations of the world can become. This leads me to reevaluate my cause priorities, and it reminds me why it's important to resist the lure of other motives.

Perhaps I'm blowing this offhand choice of words out of proportion. It's just a word, after all; we have far bigger things to tackle. But if, as it seems, it hints at larger risks in how we perceive and are perceived by the world, it behooves us to use "care" with care.

Footnote: I'm pretty uncertain about this post, and expect it to seem kind of melodramatic. It feels like I'm pointing at a real problem, but hey, I may just be pedantic. Please call me out on it if so. One of my goals for this year is to get more feedback on my thoughts, since it's easier to be self-deceived without it.

Comments (9)

Thanks for writing this, Roxanne. I agree that this is a risk - and I've also cringed sometimes when I've heard EAs say they "don't care" about certain things. I think it's good to highlight this as something we should be wary of.

It reminds me a bit of how in academia people often say, "I'm interested in x", where x is some very specific, niche subfield, implying that they're not interested in anything else - whereas what they really mean is, "x is the focus of my research." I've found myself saying this about my own research, and then often caveating, "actually, I'm interested in a tonne of wider stuff, this is just what I'm thinking about at the moment!" So I'd like it if the norm in EA leaned more towards saying things like "I'm currently focusing on/working on/thinking about x" rather than "I care about x."

Yeah, I can see how this could come off poorly. I'd recommend using the word "focus" instead (i.e. "I focus mostly on X-risks")

Thanks for recommending a concrete change in behavior here!

I also appreciate the discussion of your emotional engagement / other EAs' possible emotional engagement with cause prioritization -- my EA emotional life is complicated, I'm guessing others have a different set of feelings and struggles, and this kind of post seems like a good direction for understanding and supporting one another.

ETA: personally, it feels correct when the opportunity arises to emotionally remind myself of the gravity of the ER-triage-like decisions that humans have to make when allocating resources. I can do this by celebrating wins (e.g. donations / grants others make, actual outcomes) as well as by thinking about how far we have to go in most areas. It's slightly scary, but makes me more confident that I'm even-handedly examining the world and its problems to the best of my abilities and making the best calls I can, and I hope it keeps my ability to switch cause areas healthy. I'd guess this works for me partially because those emotions don't interfere with my ability to be happy / productive, and I expect there are people whose feelings work differently and who shouldn't regularly dwell on that kind of thing :)

To generalize: people are strongly reinforced, in attention dollars, for doing this sort of thing. Stating reasonable beliefs doesn't generate interesting discussion, so people extreme-ize. See "EA has a lying problem", where it seemed fairly likely at the outset that the claim would be updated to something less strong in the end. I think this is a major problem: it encourages the hijacking of useful things for attention spirals as people compete to be more extreme, and it results in evaporative cooling of the movement as outsiders grow more and more annoyed. I think this has already started.

You're clearly pointing at a real problem, and the only case in which I can read this as melodramatic is the case in which the problem is already very serious. So, thank you for writing.

When the word "care" is used carelessly, or, more generally, when the emotional content of messages is not carefully tended to, this nudges EA towards being the sort of place where e.g. the word "care" is used carelessly. This has all sorts of hard-to-track negative effects; the sort of people who are irked by misuse of the word "care" are disproportionately likely to be careful about this sort of thing themselves. If not paying attention to the connotations of words can drive our friends away, it's easy to see how a harmful "positive" feedback loop might be created.

Great point.

Note that sometimes people actually don't care about things. E.g., some EAs don't care about animal suffering or the far future.

Also, there's the meaning of "I don't care about global poverty" which is: "Though obviously global poverty causes suffering, I think that alleviating it would probably cause about as much suffering in expectation, so I don't care how much global poverty there is on the margin at the moment." (I don't agree with this view, but some EAs hold it.)

Thanks for this - I'm pretty sure I'm guilty of doing this carelessly, and I agree that it's actually not great.

Brief comment: I personally use the word 'care' to imply that I prioritise something not just abstractly, but also have a gut, System 1 desire to work on the problem. I expect people in my reference class to mostly continue to use 'care' unless a better alternative is proposed.

Amanda Askell has interesting thoughts that suggest giving "care" a counterfactual meaning: what you care about is what you would care about if you were in a context where it was something you could potentially change. In a way, the distinction is between people who think about "care" in terms of rank ("oh, that isn't the thing I most care about") and those who think in terms of absolutes ("oh, I think the moral value of this is positive"). This is further complicated by the fact that some people are thinking of the expected value of an action, while others are thinking of the absolute value of whatever the action affects.

Semantically, if we think it is a good idea to "expand our circle of care", we should probably adopt the counterfactual meaning of "care", as it broadens the scope of things we can truthfully claim to care about.