tylermjohn comments on Why & How to Make Progress on Diversity & Inclusion in EA - Effective Altruism Forum

Comment author: tylermjohn 27 October 2017 01:35:14AM 4 points

Hi KelseyPiper, thanks so much for a thoughtful reply. I agree with most of this - I described these benefits as "pure" benefits because I had assumed up front the many costs you rightly point out. That is, assuming we read Kelly's piece and come away with a sense of the costs and benefits of promoting diversity and inclusion in the Effective Altruism movement, the benefits I've pointed out above are "pure" because they come along for free with the labor involved in making the EA community more inclusive, and don't require additional effort. But I understand how that could be misleading, so I take your criticism on board. I also agree that this will involve priority-setting: even if we think all of these suggestions are important and that some people should be doing all of them to some extent (and especially if not), there are some we ought to spend more time on than others as a community.

I also agree that the EA community should focus on identifying and working on the very most important things, though I might disagree slightly with how you've characterized that. I don't think we should be a community doing work that fosters "fast progress on the most important things," because we should be doing whatever does the most good in the long run, all things considered - and fostering "fast progress" on the most important things does not necessarily correlate with doing the most good in the long run, unless we define "fosters fast progress" in a way that makes the claim trivial. Suppose, for example, we could perform one of two interventions: one adds an average of +5 well-being to all of the global poor over twenty years, for one generation; the other adds an average of +5 well-being to all of the global poor over one hundred years, for all generations. We should choose the latter intervention, even though the former is in a sense fostering faster progress. I make this point not to be pedantic, but because I think some EAs sometimes forget that what we (or many of us) are trying to do is produce the most benefit and avert the most harm all things considered, not simply make a lot of progress on some very important projects very quickly - and I think this is quite relevant to this conversation.
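To make the arithmetic behind this thought experiment explicit, here is a minimal sketch. The numbers are entirely illustrative, and the finite evaluation horizon is an assumption I'm adding (the original example leaves totals unspecified), but it shows why the "slower" intervention can dominate:

```python
# Toy comparison of two hypothetical interventions (illustrative numbers only).
# Intervention A: +5 average well-being, effect lasting 20 years (one generation).
# Intervention B: +5 average well-being, effect persisting for all generations.
# We assume a finite evaluation horizon so the total for B is well-defined.

def total_benefit(gain_per_year: int, effect_years: int, horizon_years: int) -> int:
    """Sum the yearly well-being gain over however long the effect lasts,
    truncated at the evaluation horizon."""
    return gain_per_year * min(effect_years, horizon_years)

HORIZON = 1000  # assumed evaluation horizon, in years

a = total_benefit(5, 20, HORIZON)       # effect ends after 20 years
b = total_benefit(5, HORIZON, HORIZON)  # effect persists for the whole horizon

print(a, b)  # 100 5000 - B dominates despite fostering "slower" progress
```

The point the sketch makes is simply that total benefit scales with duration, so any accounting that rewards speed rather than the all-things-considered sum can rank the interventions the wrong way round.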

To your question as to why "the magnitude of the current EA movement's contributions to harmful societal structures in the United States might outweigh the magnitude of the effects EA has on nonhumans and on the poorest humans," I unfortunately haven't written anything on this, and perhaps I should. But I can say a few things. First, I certainly don't think it's obvious that the EA movement's contributions to such harmful structures will outweigh the magnitude of the effects we have on nonhumans and on the poorest humans. I only claimed that it is non-obvious that the effect size is "very small" compared to our positive effects - it's something more EAs should treat as non-negligible.

Still, here are some of the basic reasons why I think the EA movement's contributions to harmful social structures could well be of sufficient magnitude that we should keep constant account of them in our efforts to do good in the world, apart from the reputation costs and the instrumental epistemic benefits of inclusion and diversity work.

First, the fundamental structure of society and its social, legal, and political norms profoundly shape the kinds and quality of life of all beings, as well as cultural and moral mores. Ensuring that these structures and norms are good ones is therefore crucial to ensuring that the long-run future is good, and shaping them for the better may make the trajectory of the future far better than the counterfactual where we shape them for the worse (for reasons of legal precedent, memetics, psychological and value anchoring, and more).

Second, norms against harming others are very sticky - much stickier than norms favoring helping others, except in certain particular cases (e.g. within one's own family). They are psychologically sticky, whether for innate biological reasons or for entirely cultural ones; which of these is true makes a difference to how much staying power this stickiness has. But either way, setting good norms around not causing harm to others, and ensuring that these norms are stringently upheld rather than violated so that we internalize them as commonsense norms, seems like a good way to shape how the future goes. Such norms are also easier to enforce through sanction, blame, and punishment, whereas norms of aid (especially effective aid) are harder to enforce, and our legal and political history suggests that norms against harm are much easier to codify into law. For all these reasons, ensuring that we have good norms in these areas, and that we do not violate them, looks like a very important intervention for shaping the social and legal institutions of future societies.

Third, there are reasons to think that our moral and political attitudes towards others are psychologically intertwined in complex ways. How we treat and think about some groups, and the norms we have around harming and helping them, seem to affect how we treat and think about other groups. This seems especially important if we are interested in expanding the human moral circle to include nonhuman animals and silicon-based sentient life. If our negative attitudes, norms, laws, and practices around other humans have negative downstream effects on our attitudes, norms, laws, and practices around other animals and inorganic sentient beings, then the benefits of prioritizing moral development and averting social structures which favor some sentient beings over others may be very important. If AI value alignment is decided as the result of a political arms race, then having a broader moral circle may significantly shape the impact of intelligent and superintelligent AI, for better or worse. (Here I'm out of my depth, and my impression is that this is a matter of significant disagreement, so I certainly won't come down hard on this.) The main point is that our norms, attitudes, laws, and practices around humans - and who our society decides is worthy of full moral consideration - may have significant downstream effects in complicated and to some extent unpredictable ways. The more skeptical we are about how much we know about the future, the greater our uncertainty should be about these effects.
I think it's reasonable to worry that this is too speculative, or too optimistic about the downstream consequences of our norm-shaping on the far future. But we should remember that there are skeptical considerations cutting in the opposite direction as well: measurability bias may irrationally lead us to discount the less measurable, long-term effects of our actions in favor of more measurable, short-term ones.

I am not arguing that actively averting oppressive social structures and hierarchies of dominance should be a main cause area for EAs (although that could be an upshot of this conversation too, depending on the probabilities we assign to the hypotheses delineated above). But given the psychological, social, and legal stickiness of norms against harming, failing to make EA a more diverse and inclusive community raises the probability of EAs harming marginalized communities and failing to create and uphold norms against harming them - and the more influential the EA community is, the more this holds true. So it seems to me there is a plausible case that entrenching strong norms against treating marginalized communities inequitably within the EA community is an effective cause area we should spend some of our time on, even if we should spend the majority of our time advocating for farmed and wild animals and the global poor.