(cross-posted from my blog)
I think that tribalism is one of the biggest problems with humanity today, and that even small reductions of it could cause a massive boost to well-being.
By tribalism, I basically mean the phenomenon where arguments and actions are primarily evaluated based on who makes them and which group they seem to support, not anything else. E.g. if a group thinks that X is bad, then it's often seen as outright immoral to make an argument which would imply that X isn't quite as bad, or that some things which are classified as X would be more correctly classified as non-X instead. I don't want to give any specific examples so as to not derail the discussion, but hopefully everyone can think of some; the article "Can Democracy Survive Tribalism" lists a lot of them, picked from various sides of the political spectrum.
Joshua Greene (among others) argues in his book Moral Tribes that tribalism exists for the purpose of coordinating aggression and alliances against other groups (so that you can kill them and take their stuff, basically). It specifically exists to make you hurt others, as well as to defend yourself against people who would hurt you. And while defending yourself against people who would hurt you is clearly good, attacking others is clearly not. And when everything is viewed in tribal terms, we can't make much progress on things that actually matter: as someone commented, "people are fine with randomized controlled trials in policy, as long as the trials are on things that nobody cares about".
Given how deep tribalism sits in the human psyche, it seems unlikely that we'll be getting rid of it anytime soon. That said, there do seem to be a number of things that affect the amount of tribalism we have:
* As Steven Pinker argues in The Better Angels of Our Nature, violence in general has declined over historical time, replaced by more cooperation and an assumption of human rights; Democrats and Republicans may still hate each other, but they generally agree that they still shouldn't be killing each other.
* As a purely anecdotal observation, my impression is that people on the autism spectrum tend to be less tribal, sometimes to the point of not being able to perceive tribes at all. (This suggests, somewhat oddly, that the world might actually be a better place if everyone were slightly autistic.)
* Feelings of safety or threat seem to play a lot into feelings of tribalism: if you perceive (correctly or incorrectly) that a group Y is out to get you and that they are a real threat to you, then you will react much more aggressively to any claims that might be read as supporting Y. Conversely, if you feel safe and secure, then you are much less likely to feel the need to attack others.
The last point is especially troublesome, since it can give rise to self-fulfilling predictions. Say that Alice says something to Bob, and Bob misperceives this as an insult; Bob feels threatened so snaps at Alice, and now Alice feels threatened as well, so shouts back. The same kind of phenomenon seems to be going on at a much larger scale: whenever someone perceives a threat, they are no longer willing to give others the benefit of the doubt, and would rather treat the other person as an enemy. (Which isn't too surprising, since it makes evolutionary sense: if someone is out to get you, then the cost of misclassifying them as a friend is much bigger than the cost of misclassifying a would-be friend as an enemy. You can always find new friends, but it only takes one person getting near you to hurt you really badly.)
One implication might be that general mental health work, not only in the conventional sense of "healing disorders", but also the positive psychology-style mental health work that actively seeks to make people happy rather than just fine, could be even more valuable for society than we've previously thought. Curing depression etc. would be enormously valuable even by itself, but if we could figure out how to make people generally happier and more resilient to negative events, then fewer things would threaten their well-being and they would perceive fewer things as being threats, reducing tribalism.
Points 1-3 look general, and can in essence be claimed to apply to any putative cause area not currently thought to be a good candidate. E.g.
1) Current anti-aging interventions are pretty bad on average.
2) There could be low-hanging fruit behind things that look 'too weird to try'.
3) EA may be in a position to signal-boost weird things that have a plausible chance of working.
Mutatis mutandis criminal justice reform, improving empathy, human enhancement, and so on. One could adjudicate these competing areas by evidence that some really do have this low-hanging fruit. Yet it remains unclear that (for example) the psilocybin data gives more of a boost than (say) cryonics. Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts similar marginal returns to the 'big three'.
(Given GCR/x-risks, I think the 'opportunities' for saving quite a lot of lives - everyone's - are increasing. I agree that, setting that aside (which one shouldn't), it seems likely that status quo progress should exhaust preventable mortality faster than preventable ill-health. Yet I don't think we are there yet.)
I worry that you're also using a fully-general argument here, one that would also apply to established EA cause areas.
This stands out to me in particular:
There's a lot here that I'd challenge. E.g., (1) I think you're implici...