(cross-posted from my blog)
I think that tribalism is one of the biggest problems with humanity today, and that even small reductions in it could cause a massive boost to well-being.
By tribalism, I basically mean the phenomenon where arguments and actions are evaluated primarily based on who makes them and which group they seem to support, not on anything else. E.g., if a group thinks that X is bad, then it's often seen as outright immoral to make an argument which would imply that X isn't quite as bad, or that some things which are classified as X would be more correctly classified as non-X instead. I don't want to give any specific examples so as not to derail the discussion, but hopefully everyone can think of some; the article "Can Democracy Survive Tribalism" lists a lot of them, picked from various sides of the political spectrum.
Joshua Greene (among others) argues in his book Moral Tribes that tribalism exists for the purpose of coordinating aggression and alliances against other groups (so that you can kill them and take their stuff, basically). It specifically exists to make you hurt others, as well as to defend yourself against people who would hurt you. And while defending yourself against people who would hurt you is clearly good, attacking others is clearly not. When everything is viewed in tribal terms, we can't make much progress on things that actually matter: as someone commented, "people are fine with randomized controlled trials in policy, as long as the trials are on things that nobody cares about".
Given how deep tribalism sits in the human psyche, it seems unlikely that we'll be getting rid of it anytime soon. That said, there do seem to be a number of things that affect the amount of tribalism we have:
* As Steven Pinker argues in The Better Angels of Our Nature, violence in general has declined over historical time, replaced by more cooperation and an assumption of human rights; Democrats and Republicans may still hate each other, but they generally agree that they shouldn't be killing each other.
* As a purely anecdotal observation, my impression is that people on the autism spectrum tend to be less tribal, up to the point of not being able to perceive tribes at all. (This suggests, somewhat oddly, that the world would actually be a better place if everyone were slightly autistic.)
* Feelings of safety or threat seem to play a large role in feelings of tribalism: if you perceive (correctly or incorrectly) that a group Y is out to get you and that they are a real threat to you, then you will react much more aggressively to any claims that might be read as supporting Y. Conversely, if you feel safe and secure, then you are much less likely to feel the need to attack others.
The last point is especially troublesome, since it can give rise to self-fulfilling predictions. Say that Alice says something to Bob, and Bob misperceives this as an insult; Bob feels threatened and snaps at Alice, and now Alice feels threatened as well, so she shouts back. The same kind of phenomenon seems to be going on at a much larger scale: whenever someone perceives a threat, they are no longer willing to give others the benefit of the doubt, and would rather treat the other person as an enemy. (Which isn't too surprising, since it makes evolutionary sense: if someone is out to get you, then the cost of misclassifying them as a friend is much bigger than the cost of misclassifying a would-be friend as an enemy. You can always find new friends, but it only takes one person to get near you and hurt you really badly.)
One implication might be that general mental health work, not only in the conventional sense of "healing disorders", but also positive psychology-style mental health work that actively seeks to make people happy rather than just fine, could be even more valuable for society than we've previously thought. Curing depression etc. would be enormously valuable even by itself, but if we could figure out how to make people generally happier and more resilient to negative events, then fewer things would threaten their well-being and they would perceive fewer things as threats, reducing tribalism.
Right, I think an obvious case can be made that mental health is Important; making the case that it's also Tractable and Neglected requires more nuance but I think this can be done. E.g., few non-EA organizations are 'pulling the ropes sideways', have the institutional freedom to think about this as an actual optimization target, or are in a position to work with ideas or interventions that are actually upstream of the problem. My intuition is that mental health is hugely aligned with what EAs actually care about, and is much much more tractable and neglected than the naive view suggests. To me, it's a natural fit for a top-level cause area.
The problem I foresee is that EA hasn't actually added a new Official Top-Level Cause Area since... maybe EA was founded? And so I don't expect to see much of a push from the EA leadership to add mental health as a cause area -- not because they don't want it to happen, but because (1) there's no playbook for how to make it happen, and (2) there may be local incentives that hinder doing this.
More specifically: mental health interventions that actually work are likely to be weird. E.g., Michael D. Plant's ideas about drug legalization are a little weird; Enthea's ideas about psilocybin are more weird; QRI's valence research is very weird. Now, at EAG there was a talk suggesting that we 'Keep EA Weird'. But I worry that's a retcon: weird things have been grandfathered into EA, but institutional EA is not actually very weird, and despite lots of funding, it has very little funding for Actually Weird Things. Looking at what gets funded ('revealed preferences'), I see support for lots of conventionally-worthy things and some appetite for moderately weird things, but almost none for things that are sufficiently weird that they could seed a new '10x+' cause area ("zero-to-one weird").
(Note to all EA leadership reading this: I would LOVE LOVE LOVE to be proven wrong here!)
So, my intuition is that EAs who want this to happen will need to organize, make some noise, 'start the party', and in general nurture this mental-health-as-cause-area thing until it's mature enough that 'core EA' orgs won't need to take a status hit to fund it. I.e., if we want EA to rally around mental health, it's literally up to people like us to make that happen.
I think if we can figure out good answers to the following questions, we'd have a good shot:
* Why do you think mental health is Neglected and Tractable?
* Why us, why now, why hasn't it already been done?
* Which threads & people in EA do you think could be rallied under the banner of mental health?
* Which people in 'core EA' could we convince to be a champion of mental health as an EA cause area?
* Who could tell us What It Would Actually Take to make mental health a cause area?
* What EA and non-EA organizations could we partner with here? Do we have anyone with solid connections to these organizations?
(Anyone with answers to these questions, please chime in!)
FWIW, my impression of EA leadership is that they (correctly) find that mental health isn't the best target for currently existing people, given other opportunities in global health, and that it isn't the best target for future people, given the dominance of x-risk etc. I don't see a huge 'gap in the market' for marginal efforts on global mental health to have a really outsized impact.
Openphil funds a variety of things outside the 'big cause areas' (criminal justice, open science, education, etc.), so there doesn't seem to be a huge barrier to this cause area getting traction.