(cross-posted from my blog)

I think that tribalism is one of the biggest problems with humanity today, and that even small reductions of it could cause a massive boost to well-being.

By tribalism, I basically mean the phenomenon where arguments and actions are primarily evaluated based on who makes them and which group they seem to support, not anything else. E.g. if a group thinks that X is bad, then it's often seen as outright immoral to make an argument which would imply that X isn't quite as bad, or that some things which are classified as X would be more correctly classified as non-X instead. I don't want to give any specific examples so as to not derail the discussion, but hopefully everyone can think of some; the article "Can Democracy Survive Tribalism" lists a lot of them, picked from various sides of the political spectrum.

Joshua Greene (among others) argues in his book Moral Tribes that tribalism exists for the purpose of coordinating aggression and alliances against other groups (so that you can kill them and take their stuff, basically). It specifically exists to make you hurt others, as well as to defend you against those who would hurt you. And while defending yourself is clearly good, attacking others is clearly not. And everything being viewed in tribal terms means that we can't make much progress on things that actually matter: as someone commented, "people are fine with randomized controlled trials in policy, as long as the trials are on things that nobody cares about".

Given how deep tribalism sits in the human psyche, it seems unlikely that we'll be getting rid of it anytime soon. That said, there do seem to be a number of things that affect the amount of tribalism we have:

* As Steven Pinker argues in The Better Angels of Our Nature, violence in general has declined over historical time, replaced by more cooperation and an assumption of human rights; Democrats and Republicans may still hate each other, but they generally agree that they still shouldn't be killing each other.
* As a purely anecdotal observation, my impression is that people on the autism spectrum tend to be less tribal, up to the point of not being able to perceive tribes at all. (this suggests, somewhat oddly, that the world would actually be a better place if everyone were slightly autistic)
* Feelings of safety or threat seem to play a lot into feelings of tribalism: if you perceive (correctly or incorrectly) that a group Y is out to get you and that they are a real threat to you, then you will react much more aggressively to any claims that might be read as supporting Y. Conversely, if you feel safe and secure, then you are much less likely to feel the need to attack others.

The last point is especially troublesome, since it can give rise to self-fulfilling predictions. Say that Alice says something to Bob, and Bob misperceives this as an insult; Bob feels threatened so snaps at Alice, and now Alice feels threatened as well, so shouts back. The same kind of phenomenon seems to be going on at a much larger scale: whenever someone perceives a threat, they are no longer willing to give others the benefit of the doubt, and would rather treat the other person as an enemy. (which isn't too surprising, since it makes evolutionary sense: if someone is out to get you, then the cost of misclassifying them as a friend is much bigger than the cost of misclassifying a would-be friend as an enemy. you can always find new friends, but it only takes one person to get near you and hurt you really badly)
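
To make the evolutionary logic here concrete, the friend-or-enemy judgement can be viewed as a simple expected-cost comparison. Below is a minimal sketch of that idea; the cost numbers are made-up illustrations (not estimates from any study), chosen only to show how an asymmetry in costs produces a hair-trigger:

```python
# Minimal decision-theoretic sketch of the friend/enemy asymmetry.
# The cost values are illustrative assumptions, not empirical estimates.

COST_OF_BETRAYAL = 100.0    # harm if a hostile person is trusted as a friend
COST_OF_LOST_FRIEND = 1.0   # opportunity cost of shunning a would-be friend

def should_treat_as_enemy(p_hostile: float) -> bool:
    """Treat someone as an enemy when the expected cost of trusting them
    exceeds the expected cost of shunning them."""
    expected_cost_of_trusting = p_hostile * COST_OF_BETRAYAL
    expected_cost_of_shunning = (1 - p_hostile) * COST_OF_LOST_FRIEND
    return expected_cost_of_trusting > expected_cost_of_shunning

# With a 100:1 cost asymmetry, even a ~1% perceived chance of hostility
# already tips the decision toward treating the other person as an enemy:
print(should_treat_as_enemy(0.01))  # True
```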

One implication might be that general mental health work, not only in the conventional sense of "healing disorders", but also the positive psychology-style mental health work that actively seeks to make people happy rather than just fine, could be even more valuable for society than we've previously thought. Curing depression etc. would be enormously valuable even by itself, but if we could figure out how to make people generally happier and more resilient to negative events, then fewer things would threaten their well-being and they would perceive fewer things as being threats, reducing tribalism.

Comments

I wonder if there are good ways to transform a tribalism of "us versus them" into an "us versus it," aligning joint interest in overcoming the constraints of amoral causes of suffering. This framing is somewhat commonplace when we talk about combating malaria on a global scale and fighting cancer in individual care. There are countless examples in the medical world of this personification of amoral disease "agents." It seems like it may be a way to repurpose our tribalistic cognitive mechanisms for good.

I've written a bit about this for a course with Josh Greene a few years back, and I'd be happy to share if anyone is interested.

In a lot of cases today, certain populations are “forced” into a tribalist state in order to survive and prevent s-risks. This usually occurs when a larger tribe subjugates, brutalizes, and terrorizes a smaller and less tribally organized tribe, which forces the smaller tribe to act in tribal ways to defend itself from ethnic cleansing or genocide. One can call this effect induced tribalism. It is also something I should make a larger post about.

Examples of this induced tribal effect include, but are not limited to:

  • Kurdish & Yezidi people
  • Armenians and Assyrians
  • Tibetan people
  • Native Americans
  • Tutsis (though Rwanda is a special case due to its non-ethnic, slightly non-tribal identity policy today: the ‘we are all Rwandans’ position/policy of the current Rwandan government)

Some would also include post-Holocaust Zionism (among European Jews) in this category.

From an analysis of the histories of these groups, nationalism and tribalism largely grew in response to an active threat against their lives, rather than prior to it (as would be the case if they were the aggressors). EA is largely unaware of the tribal dynamics that occur in the world, and would benefit from research that makes sure helping out one ‘tribe’ doesn’t come at the destruction and devastation of another. Tribalism can and should end, and Kaj puts forward a strong argument, though efforts should also take into consideration complex situations where tribalism has been or still is necessary to prevent s-risks.

QRI is very interested in and working hard on the mental health cause area; notable documents are Quantifying Bliss and Wireheading Done Right - more to come soon. There are also other good things in this space from e.g. Michael D. Plant, Spencer Greenberg, and perhaps Enthea.

On the tribalism point, I agree tribalism causes a lot of problems; I'd also agree with what I take you to be saying, that in some senses it may be a load-bearing part of an evolutionarily stable strategy (ESS) which may prove troublesome to tinker with. Finally, I agree that mental health is a potentially upstream factor in some of the negative-sum presentations of tribalism.

I would say it's unclear at this point whether the EA movement has the plasticity required to make mental health an Official Cause Area -- I believe the leadership is interested but "revealed constraints" seem to tell a mixed story. I'm certainly hoping it'll happen if enough people get together to 'start the party'.

(Personal opinions; not necessarily shared by others at QRI)

Can you say more about the "revealed constraints" here? What would be the appropriate preconditions for "starting the party?" I think it can and should be done - we've embraced frontline cost-effectiveness in doing good today, and we've embraced initiatives oriented towards good in the far future even in the absence of clear interventions; even so, global mental health hasn't quite fit into either of those EA approaches, despite being a high-burden problem that is extremely neglected and arguably tractable.

Mental health interventions will become more cost-effective on an absolute scale, as we advance knowledge and implement better, and on a relative scale, as we largely overcome the burden of communicable diseases like malaria. The EA community should rally around the possibility of accelerating the development and dissemination of mental health interventions. It is quite exciting to see the work of some EAs in this space, and I think EAs could bring real value to the academic and nonprofit leaders involved in global mental health. It may be that working on this issue is more valuable at this point than merely funding this cause, but that's a broader strategy discussion. I'm excited to join other EAs in building the case for the relevance of mental health to the movement.

Can you say more about the "revealed constraints" here? What would be the appropriate preconditions for "starting the party?" I think it can and should be done - we've embraced frontline cost-effectiveness in doing good today, and we've embraced initiatives oriented towards good in the far future even in the absence of clear interventions; even so, global mental health hasn't quite fit into either of those EA approaches, despite being a high-burden problem that is extremely neglected and arguably tractable.

Right, I think an obvious case can be made that mental health is Important; making the case that it's also Tractable and Neglected requires more nuance but I think this can be done. E.g., few non-EA organizations are 'pulling the ropes sideways', have the institutional freedom to think about this as an actual optimization target, or are in a position to work with ideas or interventions that are actually upstream of the problem. My intuition is that mental health is hugely aligned with what EAs actually care about, and is much much more tractable and neglected than the naive view suggests. To me, it's a natural fit for a top-level cause area.

The problem I foresee is that EA hasn't actually added a new Official Top-Level Cause Area since... maybe EA was founded? And so I don't expect to see much of a push from the EA leadership to add mental health as a cause area -- not because they don't want it to happen, but because (1) there's no playbook for how to make it happen, and (2) there may be local incentives that hinder doing this.

More specifically: mental health interventions that actually work are likely to be weird -- e.g., Michael D. Plant's ideas about drug legalization are a little weird; Enthea's ideas about psilocybin are more weird; QRI's valence research is very weird. Now, at EAG there was a talk suggesting that we 'Keep EA Weird'. But I worry that's a retcon -- that weird things have been grandfathered into EA, but institutional EA is not actually very weird, and despite lots of funding, it has very little funding for Actually Weird Things. Looking at what gets funded ('revealed preferences') I see support for lots of conventionally-worthy things and some appetite for moderately weird things, but almost none for things that are sufficiently weird that they could seed a new '10x+' cause area ("zero-to-one weird").

Note to all EA leadership reading this: I would LOVE LOVE LOVE to be proven wrong here!

So, my intuition is that EAs who want this to happen will need to organize, make some noise, 'start the party', and in general nurture this mental-health-as-cause-area thing until it's mature enough that 'core EA' orgs won't need to take a status hit to fund it. I.e., if we want EA to rally around mental health, it's literally up to people like us to make that happen.


I think if we can figure out good answers to these questions we'd have a good shot:

  • Why do you think mental health is Neglected and Tractable?

  • Why us, why now, why hasn't it already been done?

  • Which threads & people in EA do you think could be rallied under the banner of mental health?

  • Which people in 'core EA' could we convince to be a champion of mental health as an EA cause area?

  • Who could tell us What It Would Actually Take to make mental health a cause area?

  • What EA, and non-EA, organizations could we partner with here? Do we have anyone with solid connections to these organizations?

(Anyone with answers to these questions, please chime in!)

FWIW, my impression of EA leadership is that they (correctly) find that mental health isn't the best target for currently existing people due to other things in global health, and it isn't the best thing for future people, due to dominance of X risk etc. I don't see a huge 'gap in the market' for marginal efforts re global mental health for really outsized impact.

Openphil funds a variety of things outside the 'big cause areas' (criminal justice, open science, education, etc.), so there doesn't seem a huge barrier to this cause area getting traction.

Funding weird stuff is a bit tricky, as only a tiny minority of weird things are worthwhile, even ex ante: most are meritless. I guess you want to select from a propitious reference class, and to look for some clear forecast indicators that can allow it to be promptly dropped from the portfolio. It doesn't strike me as crazy that there's no current weird project candidate that clears the bar as being worth speculative investment.

FWIW, my impression of EA leadership is that they (correctly) find that mental health isn't the best target for currently existing people due to other things in global health

Can you say what you think is more valuable? If I'm looking at GW's top charities, the options are AMF or SCI. AMF is about saving lives, rather than improving lives, so that's a moral question as to how you trade those off. I'm not really sure how to think of the happiness impact of SCI. GW seem to argue it's worthwhile because it increases income for the recipient, but I'm pretty sceptical that increases in income, even at low levels, improve aggregate happiness (see this paper on GiveDirectly that found it didn't increase overall happiness)

I don't think mental health has comparably good interventions to either of these, even given the caveats you note. Cost per QALY or similar for treatment looks to have central estimates much higher than these, and we should probably guess that mental health interventions in poor countries have more regression to the mean ahead of them.

Some hypothetical future intervention could be much better, but looking for these isn't that neglected, and such progress looks intractable given we understand the biology of a given common mental illness much more poorly than a typical NTD.

I don't think mental health has comparably good ... [c]ost per QALY or similar.

Some hypothetical future intervention could be much better, but looking for these isn't that neglected, and such progress looks intractable given we understand the biology of a given common mental illness much more poorly than a typical NTD.

I think the core argument for mental health as a new cause area is that (1) yes, current mental health interventions are pretty bad on average, but (2) there could be low-hanging fruit locked away behind things that look 'too weird to try', and (3) EA may be in a position to signal-boost the weird things ('pull the ropes sideways') that have a plausible chance of working.

Using psilocybin as an adjunct to therapy seems like a reasonable example of some low-hanging fruit that's effective, yet hasn't been Really Tried, since it is weird. And this definitely does not exhaust the set of weird & plausible interventions.

I'd also like to signal-boost @MichaelPlant's notion that "A more general worry is that effective altruists focus too much on saving lives rather than improving lives." At some point, we'll get to hard diminishing returns on how many lives we can 'save' (delay the passing of) at reasonable cost or without significant negative externalities. We may be at that point now. If we're serious about 'doing the most good we can do', I think it's reasonable to explore a pivot to improving lives -- and mental health is a pretty key component of this.

1-3 look general, and can in essence be claimed to apply to any putative cause area not currently thought to be a good candidate. E.g.

1) Current anti-aging interventions are pretty bad on average. 2) There could be low-hanging fruit behind things that look 'too weird to try'. 3) EA may be in a position to signal-boost weird things that have a plausible chance of working.

Mutatis mutandis criminal justice reform, improving empathy, human enhancement, and so on. One could adjudicate these competing areas by evidence that some really do have these low-hanging fruit. Yet it remains unclear that (for example) things like psilocybin data give more of a boost than (say) cryonics. Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts similar marginal returns to the 'big three'.

(Given GCR/x-risks, I think the 'opportunities' for saving quite a lot of lives - everyone's - are increasing. Setting that aside - which one shouldn't - I agree it seems likely that status quo progress will exhaust preventable mortality faster than preventable ill-health. Yet I don't think we are there yet.)

I worry that you're also using a fully-general argument here, one that would also apply to established EA cause areas.

This stands out at me in particular:

Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts similar marginal returns to the 'big three'.

There's a lot here that I'd challenge. E.g., (1) I think you're implicitly overstating how good the marginal returns on the 'big three' actually are, (2) you seem to be doubling down on the notion that "saving lives is better than improving lives" or that "the current calculus of EA does and should lean toward reduction of mortality, not improving well-being", which I challenged above, (3) I don't think your analogy between cryonics (which, for the record, I'm skeptical on as an EA cause area) and e.g., Enthea's collation of research on psilocybin seems very solid.

I would also push back on how dismissive "Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights" sounds. Enthusiasts are the ones that create new cause areas. We wouldn't have any cause areas, save for those 'silly enthusiasts'. Perhaps I'm misreading your intended tone, however.

Respectfully, I take 'challenging P' to require offering considerations for ¬P. Remarks like "I worry you're using a fully-general argument" (without describing what it is or how my remarks produce it), "I don't think your analogy is very solid" (without offering dis-analogies) don't have much more information than simply "I disagree".

1) I'd suggest astronomical stakes considerations imply that at least one of the 'big three' has extremely large marginal returns. If one prefers something much more concrete, I'd point to the humane reforms improving quality of life for millions of animals.

2) I don't think the primacy of the big three depends in any important way on recondite issues of disability weights or population ethics. Conditional on a strict person-affecting view (which denies the badness of death), I would still think the current margin of global health interventions should offer better yields. I think this based on current best estimates of disability weights in things like the GCPP, and the lack of robust evidence for something better in mental health (we should expect, for example, Enthea's results to regress significantly, perhaps all the way back to the null).

On the general point: I am dismissive of mental health as a cause area insofar as I don't believe it to be a good direction for EA energy to go relative to the other major ones (and especially my own 'best bet' of xrisk). I don't want it to be a cause area as it will plausibly compete for time/attention/etc. with other things I deem more important. I'm no EA leader, but I don't think we need to impute some 'anti-weirdness bias' (which I think is facially implausible given the early embrace of AI stuff etc) to explain why they might think the same.

Naturally, I may be wrong in this determination, and if I am wrong, I want to know about it. Thus having enthusiasts go into more speculative things outside the currently recognised cause areas improves the likelihood of the movement self-correcting and realising mental health should be on a par with (e.g.) animal welfare as a valuable use of EA energy.

Yet anointing mental health as a cause area before this case has been persuasively made would be a bad approach. There are many other candidates for 'cause area No. n+1' which (as I suggested above) have about the same plausibility as mental health. Making them all recognised 'cause areas' seems the wrong approach. Thus the threshold should be higher.

Just to chip in.

I agree that, if you care about the far future, mental health (along with poverty, physical health, and pretty much anything apart from X-risk focused interventions) will at least look like a waste of time. Further analysis may reveal this to be a bit more complicated, but this isn't the time for such complicated, further analysis.

I don't want it to be a cause area as it will plausibly compete for time/attention/etc

I think this probably isn't true, just because those interested in current-human vs far-future stuff are two different audiences. It's more a question of whether, inasmuch as people are going to focus on current stuff, they would do more good if they focused on mental health over poverty. There's a comment about moral trade to be made here.

I also find the apparent underlying attitude here unsettling. It's a sort of 'I think your views are stupid and I'm confident I know best, so I just want to shut them out of the conversation rather than let others make up their own minds' approach. On a personal level, I find this thinking (which, unless I'm paranoid, I've encountered in the EA world before) really annoying. I say some stuff in the area in this post on moral inclusivity.

I also think both of you are being too hypothetical about mental health. Halstead and Snowden have a new report where they reckon StrongMinds is $225/DALY, which is comparable to AMF if you think AMF's life saving is equivalent to 40 years of life-improving treatments.
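
To spell out the arithmetic implied by that comparison (a rough back-of-envelope using only the figures above, not additional numbers from the report): if saving one life is valued at 40 years of life-improving treatments, the break-even point is

$$
\$225/\text{DALY} \times 40~\text{DALYs} = \$9{,}000~\text{per life saved,}
$$

so the two come out comparable whenever a life saved through AMF costs on the order of $9,000.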

Drugs policy reform I consider to be less at the 'this might be a good idea but we have no reason to think so' stage and more at the 'oh wow, if this is true it's really promising and we should look into it to find out if it is true' stage. I'm unclear what the bar is to be anointed an 'official cause', or who we should allow to be in charge of such censorious judgements.

Hi Gregory,

We have never interacted before this, at least to my knowledge, and I worry that you may be bringing some external baggage into this interaction (perhaps some poor experience with some cryonics enthusiast...). I find your "let's shut this down before it competes for resources" attitude very puzzling and aggressive, especially since you show zero evidence that you understand what I'm actually attempting to do or gather support for on the object-level. Very possibly we'd disagree on that too, which is fine, but I'm reading your responses as preemptively closed and uncharitable (perhaps veering toward 'aggressively hostile') toward anything that might 'rock the EA boat' as you see it.

I don't think this is good for EA, and I don't think it's working off a reasonable model of the expected value of a new cause area. I.e., you seem to be implying the expected value of a new cause area would be at best zero, but more probably negative, due to zero-sum dynamics. On the other hand, I think a successful new cause area would more realistically draw in or internally generate at least as many resources as it would consume, and probably much more -- my intuition is that at the upper bound we may be looking at something as synergistic as a factorial relationship (with three causes, the total 'EA pie' might be 3×2×1=6; with four causes the total 'EA pie' might be 4×3×2×1=24). More realistically, perhaps additive growth: 4+3+2+1=10 instead of 3+2+1=6. This could be and probably is very wrong -- but at the same time I think it's more accurate than a zero-sum model.

At any rate, I'm skeptical that we can turn this discussion into something that will generate value to either of us or to EA, so unless you have any specific things you'd like to discuss or clarify, I'm going to leave things here. Feel free to PM me questions.

I prefer to keep discussion on the object level, rather than offering adverse impressions of one another's behaviour (e.g. uncharitable, aggressive, censorious etc.)[1] with speculative diagnoses as to the root cause of these ("perhaps some poor experience with a cryonics enthusiast").

To recall the dialectical context: the implication upthread was a worry that the EA community (or EA leadership) are improperly neglecting the mental health cause area, perhaps due to (in practice) some anti-weirdness bias. To which my counter-suggestion was that maybe EA generally/leaders thereof have instead made their best guess that this area isn't more promising than those cause areas they already attend to.

I accept that conditional on some recondite moral and empirical matters, mental health interventions look promising. Yet that does not distinguish mental health beyond many other candidate cause areas, e.g.:

  • Life extension/cryonics
  • Pro-life advocacy/natural embryo loss mitigation
  • Immigration reform
  • Improving scientific norms etc.

All generally have potentially large scale and are sometimes neglected, but have less persuasive tractability. In terms of some hypothetical disaggregated EA resource (e.g. people, money), I'd prefer it to go into one of the 'big three' than any of these other areas, as my impression is the marginal returns for any of the three are greater. In other senses there may not be such zero-sum dynamics (i.e. conditional on Alice only wanting to work in mental health, better that she work in EA-style mental health), yet I aver this doesn't really apply to which topics the movement gives relative prominence to (after all, one might hope that people switch from lower- to higher-impact cause areas, as I have attempted to do).

Of course, there remains value in exploration: if in fact EA writ large is undervaluing mental health, it would want to know about it and change tack. What I hope would happen, if I am wrong in my determination of mental health, is that public discussion would persuade more and more people of the merits of this approach (perhaps I'm incorrigible, hopefully third parties are not), so that it gains momentum from a large enough crowd of interested people that it becomes its own thing with similar size and esteem to areas 'within the movement'. Inferring from the fact that this has not yet happened that the EA community is not giving it a fair hearing is not necessarily wise.

[1]: I take particular exception to the accusations of censoriousness (from Plant) and wanting to 'shut down discussion' [from Plant and yourself]. In what possible world is arguing publicly on the internet a censorious act? I don't plot to 'run the mental health guys out of the EA movement', I don't work behind the scenes to talk to moderators to get rid of your contributions, I don't downvote remarks or posts on mental health, and so on and so forth for any remotely plausible 'shutting down discussion' behaviour. I leave adverse remarks I could make to this apophasis.

I prefer to keep discussion on the object level

I'm not seeing object-level arguments against mental health as an EA cause area. We have made some object-level arguments for, and I'm working on a longer-form description of what QRI plans in this space. Look for more object-level work and meta-level organizing over the coming months.

I'd welcome object-level feedback on our approaches. It didn't seem like your comments above were feedback-focused, but rather they seemed motivated by a belief that this was not "a good direction for EA energy to go relative to the other major ones." I can't rule that out at this point. But I don't like seeing a community member just dishing out relatively content-free dismissiveness on people at a relatively early stage in trying to build something new. If you don't see any good interventions here, and don't think we'll figure out any good interventions, it seems much better to just let us fail, rather than actively try to pour cold water on us. If we're on the verge of using lots of community resources on something that you know to be unworkable, please pour the cold water. But if your argument boils down to "this seems like a bad idea, but I can't give any object-level reasons, but I really want people to know I think this is a bad idea" then I'm not sure what value this interaction can produce.

But, that said, I'd also like to apologize if I've come on too strong in this back-and-forth, or if you feel I've maligned your motives. I think you seem smart, honest, invested in doing good as you see it, and are obviously willing to speak your mind. I would love to channel this into making our ideas better! In trying to do something new, there's approximately a 100% chance we'll make a lot of mistakes. I'd like to enlist your help in figuring out where the mistakes are and better alternatives. Or, if you'd rather preemptively write off mental health as a cause area, that's your prerogative. But we're in this tent together, and although all the evidence I have suggests we have significantly different (perhaps downright dissonant) cognitive styles, perhaps we can still find some moral trade.

Best wishes, Mike

Feelings of safety or threat seem to play a lot into feelings of tribalism: if you perceive (correctly or incorrectly) that a group Y is out to get you and that they are a real threat to you, then you will react much more aggressively to any claims that might be read as supporting Y.

This sounds roughly supported by Karen Stenner's work in The Authoritarian Dynamic which argues that "political intolerance, moral intolerance and punitiveness" are increased by perceived levels of threat.

Your comments about increasing happiness and comfort are particularly striking in light of this opinionated description (from a review) of the different groups (based on interviews):

Authoritarians tended to be closed-minded, unintelligent, lacking in self-confidence, unhappy, unfriendly, unsophisticated, inarticulate, and generally unappealing. Libertarians tended toward the opposite; they seemed happy, gregarious, relaxed, warm, open, thoughtful, eloquent, and humble.

That said, I am sceptical prima facie that any positive psychology interventions would be powerful enough at producing these effects to be warranted on these grounds.

Thanks for the reference! That sounds valuable.

Thanks for the post Kaj. I agree that this is a high priority area.

"By tribalism, I basically mean the phenomenon where arguments and actions are primarily evaluated based on who makes them and which group they seem to support, not anything else. "

I think tribalism could be described as a class of (largely biased) decision and judgement heuristics. It might be helpful to investigate why a person chooses to use such heuristics.

At least, as a heuristic it is much less cognitively taxing than the alternative of trying to figure things out by oneself, or looking for expert opinions. Also, it is uncomfortable for many people to challenge one's beliefs or belongingness to a group. These underlying factors suggest possible avenues for interventions and preventions.


Thanks for posting this. I agree this is a hugely neglected issue. It would be good to see a more coherent and sustained movement towards reducing this problem.

Anyone wanting to learn more should read Dan Kahan.

I don't see any high-value interventions here. Simply pointing out a problem people have been aware of for millennia will not help anyone.

There seem to be a lot of leads that could help us figure out the high-value interventions, though:
i) knowledge about what causes it and what has contributed to changes of it over time
ii) research directions that could help further improve our understanding of what causes it / what doesn't cause it
iii) various interventions which already seem like they work in a small-scale setting, though it's still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group conversations)
iv) and of course psychology in general is full of interesting ideas for improving mental health and well-being that haven't been rigorously tested, which also suggests that
v) any meta-work that would improve psychology's research practices would also be even more valuable than we previously thought.

As for the "pointing out a problem people have been aware of for millennia", well, people have been aware of global poverty for millennia too. Then we got science and randomized controlled trials and all the stuff that EAs like, and got better at fixing the problem. Time to start looking at how we could apply our improved understanding of this old problem to fixing it.

First, I consider our knowledge of psychology today to be roughly equivalent to that of alchemists when alchemy was popular. Like with alchemy, our main advantage over previous generations is that we're doing lots of experiments and starting to notice vague patterns, but we still don't have any systematic or reliable knowledge of what is actually going on. It is premature to seriously expect to change human nature.

Improving our knowledge of psychology to the point where we can actually figure things out could have a major positive effect on society. The same could be said for other branches of science. I think basic science is a potentially high-value cause, but I don't see why psychology should be singled out.

Second, this cause is not neglected. It is one of the major issues intellectuals have been grappling with for centuries or more. Framing the issue in terms of "tribalism" may be a novelty, but I don't see it as an improvement.

Finally, I'm not saying that there's nothing the effective altruism community can do about tribalism. I'm saying I don't see how this post is helping.

edit: As an aside, I'm now wondering if I might be expressing the point too rudely, especially the last paragraph. I hope we manage to communicate effectively in spite of any mistakes on my part.

Interesting post.

The first thought that came to my mind is related to the other post on this forum about psychedelics.

My interpretation is that therapeutic psilocybin experiences can create a feeling of all being part of the same team / global interconnectedness. I wonder if this would lead to less tribalism. It seems like it very well may.

"In 6-month follow-up interviews, participants were asked: ‘Did this treatment work for you, and if so how?’ and responses were analysed for consistent themes (Watts et al. 2017). Of the 17 patients who endorsed the treatment’s effectiveness, all made reference to one particular mediating factor: a renewed sense of connection or connectedness. This factor was found to have three distinguishable aspects: connection to (1) self, (2) others and (3) the world in general (Watts et al. 2017)."

References

https://www.ncbi.nlm.nih.gov/pubmed/28795211 via a friend.

I would worry that the "feeling of all being part of the same team" could just as likely lead to more tribalism as to less. It's a question of who "all" refers to. Reminds me of discussions around empathy and compassion: if our other-regarding behaviors are strengthened toward those close to us, it can actually make us worse to those further away (even if only because of resource constraints).

I've had this instinct myself for a while and blogged about it today (http://www.zachgroff.com/2017/10/democratic-dysfunction-may-get-in-way). I'm quite sympathetic to Pinker's thesis but becoming less sympathetic with each passing day. (Maybe I just need to reread the book.)

Do you think tribalism is indeed getting better, and even if so, do you think its rate of decrease might be slowing given the rise of far right populism and leftist identity politics? Articles like this make me worried: https://www.vox.com/the-big-idea/2017/9/5/16227700/hyperpartisanship-identity-american-democracy-problems-solutions-doom-loop

I think most people that write about this subject don't take a step back and look at the historical context and general trends in society, which makes it really hard to work out what's going on.

It's hard to tell from just observing the news how views/public opinion are trending. If the number of KKK members has gone from 3,000 to 300, but we only start interviewing and televising them at the 300 level, it will appear as if they are more present in society than in the past.

One study of polarisation (which in some ways is similar to tribalism) shows that polarisation could be increasing the most in older generations, who use the internet least. This might suggest that as people come online, we're hearing more from a more polarised generation who, before the internet, wouldn't have been letting people know about their views as much.

https://www.brown.edu/Research/Shapiro/pdfs/age-polars.pdf

Here is another post about how we may start to interpret events in a particular way even when that interpretation doesn't match the trends. It seems like a lot of people are now focused on the far right/extremism and tribalism, when they weren't before the election.

http://slatestarcodex.com/2016/11/07/tuesday-shouldnt-change-the-narrative/

Wow, the older generation thing is really interesting. Definitely giving that paper a read.

Re: interpretation of events, yeah, that makes sense. I just find it alarming that Trump could get 30%+, or Brexit.

It could be that we are politically engaged and read about every event that happens, but the majority of people don't pay much attention to politics. So Trump getting 30%+ is based on a lot of those voters having read one or maybe two favourable things about him and nothing else; similarly with Democrat voters.

For example, Fox News averages 3 million viewers, which is less than 1% of the population.

I assume this means 3 million viewers at any one time - the total number of people who primarily get their news from Fox would be much larger.

True; looking at this article, it seems that it could be as high as 24 million, which is just above 7% of the population, but the political scientist in the post has doubts about how accurate the figure is, and about whether people who watch do so for 5 minutes or 5 hours.

https://www.csmonitor.com/USA/Society/2017/0119/Is-watching-Fox-News-the-ultimate-conservative-calling-card