
Preamble

This is an extract from a post called "Doing EA Better", which argued that EA's new-found power and influence obligates us to solve our movement's significant problems with respect to epistemics, rigour, expertise, governance, and power.

We are splitting DEAB up into a sequence to facilitate object-level discussion.

Each post will include the relevant parts of the list of suggested reforms. There isn't a perfect correspondence between the subheadings of the post and the reforms list, so not all reforms listed will be 100% relevant to the section in question.

Finally, we have tried (imperfectly) to be reasonably precise in our wording, and we ask that before criticising an argument of ours, commenters ensure that it is an argument that we are in fact making.

Main

Summary: Diverse communities are typically much better at accurately analysing the world and solving problems, but EA is extremely homogenous along essentially all dimensions. EA institutions and norms actively and strongly select against diversity. This provides short-term efficiency at the expense of long-term epistemic health.

The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male[9] in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent. He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above average. He hung around LessWrong for a while as a teenager, and now wears EA-branded shirts and hoodies, drinks Huel, and consumes a narrow range of blogs, podcasts, and vegan ready-meals. He moves in particular ways, talks in particular ways, and thinks in particular ways. Let us name him “Sam”, if only because there’s a solid chance he already is.[10]

Even leaving aside the ethical and political issues surrounding major decisions about humanity’s future being made by such a small and homogenous group of people, especially given the fact that the poor of the Global South will suffer most in almost any conceivable catastrophe, having the EA community overwhelmingly populated by Sams or near-Sams is decidedly Not Good for our collective epistemic health.

As noted above, diversity is one of the main predictors of the collective intelligence of a group. If EA wants to optimise its ability to solve big, complex problems like the ones we focus on, we need people with different disciplinary backgrounds[11], different kinds of professional training, different kinds of talent/intelligence[12], different ethical and political viewpoints, different temperaments, and different life experiences. That’s where new ideas tend to come from.[13]

Worryingly, EA institutions seem to select against diversity. Hiring and funding practices often select for highly value-aligned yet inexperienced individuals over outgroup experts, university recruitment drives are deliberately targeted at the Sam Demographic (at least by proxy), and EA organisations are advised to maintain a high level of internal value-alignment to maximise operational efficiency. The 80,000 Hours website seems purpose-written for Sam, and is noticeably uninterested in people with humanities or social sciences backgrounds,[14] or those without university education. Unconscious bias is also likely to play a role here – it does everywhere else.

The vast majority of EAs will, when asked, say that we should have a more diverse community, but in that case, why is only a very narrow spectrum of people given access to EA funding or EA platforms? There are exceptions, of course, but the trend is clear.

It’s worth mentioning that senior EAs have done some interesting work on moral uncertainty and value-pluralism, and we think several of their recommendations are well-taken. However, the focus is firmly on individual rather than collective factors. The point remains that an overwhelmingly utilitarian community in which everyone individually tries to keep all possible viewpoints in mind is no substitute for a genuinely philosophically diverse one. None of us are so rational as to obviate the need for true diversity through our own thoughts.[15]

Suggested reforms

Below, we have a preliminary, non-exhaustive list of relevant suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.

Asterisks (*) indicate that we are less sure about a suggestion, but sure enough that we think it is worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.

Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!

Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.

Critique

Red Teams

  • Red teams should be paid, composed of people with a variety of views, and former- or non-EAs should be actively recruited for red-teaming
    • Interesting critiques often come from dissidents/exiles who left EA in disappointment or were pushed out due to their heterodox/“heretical” views (yes, this category includes a couple of us)
  • The judging panels of criticism contests should include people with a wide variety of views, including heterodox/“heretical” views

Epistemics

General

  • EA should study social epistemics and collective intelligence more, and epistemic efforts should focus on creating good community epistemics rather than merely good individual epistemics
    • As a preliminary programme, we should explore how to increase EA’s overall levels of diversity, egalitarianism, and openness
  • EAs should practise epistemic modesty
    • We should read much more, and more widely, including authors who have no association with (or even open opposition to) the EA community
    • We should avoid assuming that EA/Rationalist ways of thinking are the only or best ways
    • We should actively seek out not only critiques of EA, but critiques of and alternatives to the underlying premises/assumptions/characteristics of EA (high modernism, elite philanthropy, quasi-positivism, etc.)
    • We should stop assuming that we are smarter than everybody else
  • EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/“heretical” views

Ways of Knowing

  • EAs should consider how our shared modes of thought may subconsciously affect our views of the world – what blindspots and biases might we have created for ourselves?
  • EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia
    • History is full of people who thought they were very rational saying very silly and/or unpleasant things: let’s make sure that doesn’t include us
  • EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews

Diversity

  • EA institutions should select for diversity
    • With respect to:
      • Hiring (especially grantmakers and other positions of power)
      • Funding sources and recipients
      • Community outreach/recruitment
    • Along lines of:
      • Academic discipline
      • Educational & professional background
      • Personal background (class, race, nationality, gender, etc.)
      • Philosophical and political beliefs
    • Naturally, this should not be unlimited – some degree of mutual similarity of beliefs is needed for people to work together – but we do not appear to be in any immediate danger of becoming too diverse
  • Previous EA involvement should not be a necessary condition to apply for specific roles, and job postings should not assume that all applicants will identify with the label “EA”
  • EA institutions should hire more people who have had little to no involvement with the EA community, provided that they care about doing the most good
  • People with heterodox/“heretical” views should be actively selected for when hiring to ensure that teams include people able to play “devil’s advocate” authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view
  • Community-building efforts should be broadened, e.g. involving a wider range of universities, and group funding should be less contingent on the perceived prestige of the university in question and more focused on the quality of the proposal being made
  • EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
  • A greater range of people should be invited to EA events and retreats, rather than limiting e.g. key networking events to similar groups of people each time
  • There should be a survey on cognitive/intellectual diversity within EA
  • EAs should not make EA the centre of their lives, and should actively build social networks and career capital outside of EA

Expertise & Rigour

Reading

  • Insofar as a “canon” is created, it should be of the best-quality works on a given topic, not the best works by (orthodox) EAs about (orthodox) EA approaches to the topic
    • Reading lists, fellowship curricula, and bibliographies should be radically diversified
    • We should search everywhere for pertinent content, not just the EA Forum, LessWrong, and the websites of EA orgs
    • We should not be afraid of consulting outside experts, both to improve content/framing and to discover blind-spots

Experts & Expertise

  • EAs should deliberately broaden their social/professional circles to include external domain-experts with differing views
  • When hiring for research roles at medium to high levels, EA institutions should select in favour of domain-experts, even when that means passing over a highly “value-aligned” or prominent EA

Funding & Employment

Grantmaking

  • Grantmakers should be radically diversified to incorporate EAs with a much wider variety of views, including those with heterodox/“heretical” views

Transparency & Ethics

Moral Uncertainty

  • EAs should practise moral uncertainty/pluralism as well as talking about it
  • EAs who advocate using ethical safeguards such as “integrity” and “common-sense morality” should publicly specify what they mean by this, how it should be operationalised, and where the boundaries lie in their view
  • EA institutions that subscribe to moral uncertainty/pluralism should publish their policies for weighting different ethical views within 12 months
Comments (17)
MHR

I think Ozy Brennan's response to this section was very good. To quote the relevant section (though I would encourage readers to read the whole piece, which also includes some footnotes):

It is true that effective altruism is very homogeneous, and this is a problem. I am 100% behind inclusivity efforts. And I praise the authors for their observation that inclusivity goes beyond the standard race/class/gender to matters of culture and intellectual diversity.

However, I think that this subject should be addressed with care. When you’re talking about homogeneity, it’s important to acknowledge effective altruist members of various groups underrepresented in effective altruism. Very few things are more unwelcoming than “by the way, people like you don’t exist here.”

Further, the description itself is offensive in many ways. Describing the average member of a movement with as many Jews as effective altruism as “culturally Protestant” is quite anti-Semitic. The authors fail to mention queerness and transness, probably because it would be a bit inconvenient for their point to mention that an enormous number of EAs are bisexual and trans women are represented in EA at something like forty times the population rate. The average effective altruist is “neurodivergent” which… is a bad thing, apparently? We need to go represent the neurotypical point of view, which is inescapable everywhere else in politics, corporations, and the media? The vague term “neurodivergence” actually understates the scale of effective altruism’s inclusion problem. Effective altruism is inclusive of a relatively narrow range of neurodivergences: it’s strikingly unwelcoming of, say, non-Aspie autistics.

Finally, some of this homogeneity is about things that are… true? I realize it’s rude to say so, but consuming animal products in the vast majority of situations in fact supports an industry which tortures animals, and God in fact doesn't exist. I am glad that the effective altruism movement has reached general consensus on these things! Effective Altruist Political Ideology is hardly correct in every detail, but I don't think it's a bad sign if a movement broadly agrees on a lot of political issues. Some political policies are harmful! Other policies make things better!

Further, perhaps I am interpreting the authors uncharitably, but I suspect that when they say “there should be more diversity of political opinions” they mean “there should be more leftists.” I am just ever-so-slightly suspicious that if my one-sided archnemesis Richard Hanania showed up with a post about how the top cause area is fighting wokeness, the authors would not be happy with this and in fact would probably start talking about racism. Which is fine! I too agree that fighting wokeness is not the top cause area! But in this case your criticism is not “effective altruism should be more inclusive of different political views,” it’s “effective altruism’s political views are wrong and they should have different, correct ones,” and it is dishonest to smuggle it in as an inclusivity thing.

Not sure at all how Doing EA Better is 'quite anti-semitic', and I certainly think accusations of anti-semitism shouldn't just be thrown around, particularly given how common a problem anti-semitism actually is. I certainly don't see how a rather amusing stereotypical description of the average EA as 'culturally Protestant' is antisemitic; whilst there are lots of us Jews in EA, I'm not sure I find it at all offensive to not be mentioned!

I also strongly disagree that Doing EA Better suggests having lots of Sams in it is bad (hell, the authors say that 'Several of the authors of this post fit this description eerily well'), so I'm not sure the accusations of, say, prejudice against neurodivergent people or antisemitism really hold much water. I also don't get how 'eats a narrow range of vegan ready-meals' becomes 'thinks being vegan is bad'; it reads to me like a comment on how culturally homogenous we are, in that Huel, Bol, Planty etc. could become a cultural thing, rather than all the other vegan foods out there.

+1 to "how is this anti-Semitic?" (I'm also Jewish)

If there were no difference at all between the beliefs/values/behaviours of the average member of this community and those of the average member of the human species, then there would be no reason for the concept "Effective Altruism" to exist at all.

It would be a terrible thing for our community to directly discriminate against traits which are totally irrelevant to what someone has to offer to the EA project (such as race/gender/sexual preference), and I've never heard anyone around here disagree with that.

But when it comes to traits such as being highly intelligent, not being a political extremist, or having intellectual curiosity about any part of the universe other than our comparatively tiny planet (aka "thinks space is cool"), having these traits be over-represented in the community is an obviously good thing!

Dear authors, if you think the community at large has the wrong idea about moral philosophy, I think the best response is to present compelling arguments which criticize utilitarianism directly!

If you think the community at large has the wrong economic/political beliefs, please argue against these directly!

Or if you think a particular organisation in the movement is making a particular mistake which it wouldn't have made had it consulted more domain experts, please lay out a compelling case for this as well!

Jason

I remain of the opinion that posts made in good faith should not be voted below one karma without a pretty good reason. The original Doing EA Better was just too massive to facilitate discussion of specifics, and splitting it up to facilitate more specific discussion seems reasonable. I do not see a good reason for this to have negative karma.

While I haven't voted either way on this post, I think it is one of the least well done portions of the original larger post. Ozy's response, quoted in the comment above, shows how this post is harmful on its own terms.

I recall at least one, possibly both, of the other segments being in the negative at certain points in time before settling to weakly positive karma. My memory could be wrong, but that suggests that the early negative vote isn't primarily a function of this particular segment being problematic.

A charitable interpretation would be that it's a symptom of there being no separate way to 'upvote and disagreevote' something that a Forum user thinks is important but still disagrees with. -5 from 17 (at time of writing) does seem unbalanced though, especially given the original DEAB post was highly upvoted, and one of the most common suggestions was for the authors to break it up into smaller chunks.

I think the unfortunate absence of disagreevote on posts is a good bit of what we are seeing. Given the single vote type, I'm more okay with downvote-to-disagreevote on posts that have a decent amount of karma. But negging a substantial post sends an implied message that the content was inappropriate or unwelcome, which comes across as somewhat unfriendly at best.

(For comments, negging serves a more useful purpose in rank ordering comments, and getting a light neg on a five-line comment just doesn't have the same sting as on a substantial post.)

Personally, I don't think it's a problem if a substantial post has negative karma. Someone could read a post, agree that it was well-written and detailed, but still think it was bad and that they'd want to see fewer posts like it. A downvote seems like the right response there.

Overall, I think there's a tendency for people to upvote things more often when they are very long, and that this is one factor that pushes the average Forum post to be too long. This makes me especially wary about norms like "negging a substantial post is somewhat unfriendly".

That said, I'd be quite happy to see disagree-voting added to posts, since many people would find it useful (including me!).

Agree on not upvoting for length; I meant "substantial" to exclude shower thoughts and similar material that clearly should be a shortform.

I think a net karma of (say) +5 conveys pretty effectively that the community doesn't think much of the post and "want[s] to see fewer posts like" it. The difference between that and a net karma of -5 is that the latter comes across as a sanction. Although people ideally wouldn't take karma on their posts personally, people are also human and are prone to do so. And it's well-documented that people tend to view taking away something they had (say, $100) as significantly more negative than acquiring the same thing.

So, if posts were movies, ending up with +5 (where the median post gets much more) is somewhat like bombing at the box office and being panned by the critics. Everyone gets the message loud and clear. Getting -5 is inching closer to receiving a Razzie. That feels like potential overdeterrence to me unless there are strong reasons for the award.

Also, a downvote is a blunt instrument, and it's worth thinking about the message that assigning a net negative karma to a reasonably well-written, medium-or-higher effort post sends to newer and casual users. I fear that message will often come across as "Better not write something the majority disagrees with; the same thing might happen to you." Any karma system almost inevitably incentivizes widely-acceptable bromides already, and I think assigning net negative karma to reasonably high-effort posts absent strong reason risks intensifying that tendency. (To be clear, I have standard downvoted at least two posts already in the negative this week, one for a click-baity title and one for excessive promotion of a for-profit enterprise, so I am not suggesting strong reasons do not occur.)

In the end, I don't think any marginal increase in signalling that is achieved by net negative (vs. very low positive) karma is worth the downsides of net negative karma in most cases of good-faith, rule-compliant, reasonable-effort posts.

Thanks for continuing to engage so thoughtfully!

I agree that +5 and -5 will feel more different to most people than +5 and +15.

I think this reflects a common dilemma with karma systems, which is that people tend to use them in one of two ways:

  1. Voting based on how they feel about content, without regard for its current karma
  2. Voting so that they bring content closer to the karma score they think it should have

There are many cases where I've seen a comment at, say, -10, and I've had the thought "I dislike this comment, but -10 seems too harsh", and I've had to choose whether to upvote or downvote (or leave it alone).

My behavior in those cases isn't consistent — it depends on the context, my mood, etc.

 

I expect that method (2) leads to fewer pile-ons and reduces echo chamber effects. But it also creates a weird dynamic where people are upvoting things they think are bad and vice-versa to make a more complicated point (what would Aaron Hamlin say?). 

If someone were deciding how to vote on my post, I think I'd want them to just express their feelings regardless of what other people had done, because that result would feel more "true" to me and give me more information about what readers actually thought.

 

I'm not sure there is a right answer in the end, and I'm definitely not confident enough to try to push people in one direction or the other (to the point of calling it "unfriendly" to downvote posts below zero, or, say, "dishonest" to vote against one's feelings).

I upvoted this post due to this comment. I don’t see a good reason for this to have negative karma either.

I definitely think it's a good idea for EA to expand the variety of academic disciplines of its members. I certainly think that the social sciences would benefit EA: for example, sociology could give us a framework of the social, cultural, and institutional relationships that underlie the problems found within developing countries. This could inform how we direct our resources. I also think that EAs may be blind to the idea that diversity increases a group's collective intelligence, because we assume that we already recruit the most talented people (e.g., highly educated people studying the most relevant subjects) and that, if we recruit the most talented people, then our epistemics are surely top-notch. This excludes lots of people, especially those in poorer countries where education isn't as easily accessible, as well as many ways of thinking/knowing.

I think I shut my epistemology brain off at "make beliefs pay rent in anticipated experiences" when I first got Sequences-pilled. Emphasizing predictions and being wrong, constraining anticipation: this is, I think, the most productive way to think about belief. I definitely kept grinding up applied epistemology by reading Jaynes or something like that. But I still haven't seen an adequate argument that I'm wrong about feeling like we solved ways of knowing. Sometimes a subset of philosophy gets solved! You can even bring in the merits of standpoint epistemology under the banner of making beliefs pay rent by celebrating how much demographic diversity improves your Brier score, if you want; I think that'd be clearheaded and understandable!
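To make the Brier-score reference concrete, here is a minimal sketch (in Python) of how one might compare the calibration of two forecasting teams. The team names, forecast probabilities, and outcomes below are entirely made-up illustrative assumptions, not data from any actual study of EA or forecasting groups.

```python
# Minimal sketch: comparing two hypothetical teams' forecasts with the
# Brier score (mean squared error between probabilistic forecasts and
# binary outcomes). Lower is better. All numbers are invented.

def brier_score(forecasts, outcomes):
    """Mean of (forecast probability - actual outcome)^2."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Outcomes of five hypothetical binary questions (1 = happened, 0 = didn't)
outcomes = [1, 0, 1, 1, 0]

# Probabilities assigned by two hypothetical teams to the same questions
homogeneous_team = [0.9, 0.4, 0.6, 0.9, 0.5]
diverse_team     = [0.8, 0.2, 0.8, 0.7, 0.3]

print("Homogeneous team:", round(brier_score(homogeneous_team, outcomes), 3))  # 0.118
print("Diverse team:    ", round(brier_score(diverse_team, outcomes), 3))      # 0.06
```

On these invented numbers the second team scores lower (better); the substantive claim about demographic diversity would of course only stand if real forecast data showed the same pattern.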

About steel-manning vs charitably interpreting

The ConcernedEAs state:

"People with heterodox/'heretical' views should be actively selected for when hiring to ensure that teams include people able to play 'devil’s advocate' authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view"

I disagree. The ability to accurately evaluate the views of the heterodox minority depends on developing a charitable interpretation (not necessarily a steel-manning) of those views. Furthermore, if the majority cannot or will not develop such a charitable interpretation, then the heretic must put their argument in a form that the majority will accept (for example, using jargon and selectively adopting non-conflicting elements of the majority ideology). This unduly increases the burden on the person with heterodox views.

The difference between a charitably-interpreted view and a steel-manned view is that the steel-manned view is strengthened to seem like a stronger argument to the opposing side. Unfortunately, if there are differences in evaluating strength of evidence or relevance of lines of argument (for example, due to differing experiences between the sides), then steel-manning will actually distort the argument. A charitable interpretation only requires that you accurately determine what the person holding the view intends to mean when they communicate it, not that you make the argument seem correct or persuasive to you.

Sometimes I think EAs mean "charitable interpretation" when they write "steel-manning". Other times I think that they don't. So I make the distinction here.

It's up to the opposing side to charitably interpret any devil's advocate position or heretical view. While you could benefit from including diverse viewpoints, the burden is on you to interpret them correctly, to gain any value available from them.

Developing charitable interpretation skills

To charitably interpret another's viewpoint takes Scout Mindset, first of all. With the wrong attitude, you'll produce the wrong interpretation no matter how well you understand the opposing side. It also takes some pre-existing knowledge of the opposing side's worldview, typical experiences, and typical communication patterns. That comes from research and communication skills training. Trial-and-error also plays a role: this is about understanding another's culture, like an anthropologist would. Immersion in another person's culture can help.

However, I suspect that the demands on EAs to charitably interpret other people's arguments are not that extreme. Charitable interpretations are not that hard in the typical domains in which you require them. To succeed at including heterodox positions, though, the demands on EAs' empathy, imagination, and communication skills do go up.

About imagination, communication skills, and empathy for charitably interpreting

EAs have plenty of imagination; that is, they can easily consider all kinds of strange views. It's a notable strength of the movement, at least in some domains. However, EAs need training or practice in advanced communication skills and argumentation. They can't benefit from heterodox views without them. Their idiosyncratic takes on argumentation (adjusting Bayesian probabilities) and communication patterns (Schelling points) fit some narrative about their rationalism or intelligence, I suppose, but they could benefit from long-standing work in communication, critical thinking, and informal logic. As practitioners of rationalism for whom mathematics is integral, I would think that EAs would have first committed their thinking to consistent analysis with easier tools, such as inference structures, setting aside word-smithing for argument analysis. Instead, IBT gives EAs an excuse not to grapple with the more difficult skills of analyzing argument structures, detailing inference types, and developing critical questions about information gaps present in an argument. EDIT: that's a generalization, but it is how I see the impact of IBT in practical use among EAs.

The movement has not developed in any strong way around communication skills specifically, aside from a commitment to truth-seeking and open-mindedness, neither of which is required in order to understand others' views, but both of which are still valuable for empathy.

There's a generalization that "lack of communication skills" is some kind of remedial problem. There are communication skills that fit that category, but those skills are not what I mean.

After several communication studies courses, I learned that communication skills are difficult to develop, that they require setting aside personal opinions and feelings in favor of empathy, and that specific communication techniques require practice. A similar situation exists with interpreting arguments correctly: it takes training in informal logic and plenty of practice. Scout mindset is essential to all this, but not enough on its own.

Actually, Galef's podcast Rationally Speaking includes plenty of examples of charitable interpretation, accomplished through careful questions and sensitivity to nuance, so there's some educational material there.

Typically the skills that require practice are the ones that you (and I) intentionally set aside at the precise time that they are essential: when our emotions run high or the situation seems like the wrong context (for example, during a pleasant conversation or when receiving a criticism). Maybe experience helps with that problem, maybe not. It's a problem that you could address with cognitive aids, when feasible.

Is moral uncertainty important to collective morality?

Ahh, am I right that you see the value of moral uncertainty models as their use in establishing a collective morality given differences in the morality held by individuals?
