
Should there be a community around EA? Should EA aim to be one coherent movement?

I believe in the basic EA values and think they are important values to strive for. However, I think that EA’s current way of self-organizing – as a community and an umbrella for many causes – is not well suited to optimizing for these values.

In this post I will argue that there are substantial costs to being a community (as opposed to being “just” a movement or a collection of organizations). Separately, I will argue that EA has naturally grown in scope for the past ten years (without much pruning), and that now may be a good time to restructure.

In the following sections I will explore (potential) negative facets of EA as a community and as a large umbrella of causes:

  1. If the community aspect of EA becomes too dominant, we will find ourselves with cult-like problems, such as the incentive to stay in the community becoming stronger than the incentive to be truth-seeking.
  2. Currently, EA’s goal is very broad: “do good better”. Originally, it colloquially meant something fairly specific: when considering where to donate, keep in mind that some (traditional) charities save many more QALYs per dollar than others. Over the past ten years, however, the objects of charity EA covers have grown vastly in scope, e.g. animals and future beings (also see a and b). We should beware of reaching a point where EA is so broad (in values) that the main thing two EAs have in common is some kind of vibe: ‘we have similar intellectual aesthetics’ and ‘we belong to the same group’, rather than ‘we’re actually aiming for the same things’. EA shouldn’t be some giant fraternity with EA slogans as its mottos; it should be goal-oriented.

I think most of these issues would go away if we:

  • De-emphasize the community aspect
  • Narrow the scope of EA, for example into:
    • A movement focusing on doing traditional charities better; and
    • An independent incubator of neglected but important causes

1. Too Much Emphasis on Community

In this section I will argue that a) EA is not good as a community and b) being a community is bad for EA. That is, there are high costs associated with self-organizing as a community. The arguments are independent, so the costs you associate with each argument should be added up to get a lower bound for the total cost of organizing as a community.

Problems with Ideological Communities in General 

The EA community is bad in the sense that any X community is bad. EA in itself is good. Community in itself is good. However, fusing an idea to a community is often bad.

Groups of people can lie anywhere on the spectrum from purpose to people. At one extreme you have movements or organizations that have a purpose and people coordinating to make it happen. Think of a political movement with one narrow, urgent purpose. People in this movement form an alliance because they want the same outcome, but they don’t have to personally like each other.

At the other extreme you have villages, in which people support each other but don’t feel the urge to be on the same page as their neighbor ideologically. (They may find the guy who cares a lot about X a weirdo, but they accept him as one of them.) For an unexpected example, consider the Esperanto community. It was founded on an idea, but today it is very much at the community end of the spectrum rather than the ideological one.

Both extremes (a main focus on ideology/purpose or on community/people) can be healthy. However, combining ideology with community tends to lead to dysfunctional dynamics. The ideology takes a hit because people sacrifice epistemics and goal-directedness for harmony. At the same time, the people take a hit because their main community is no longer just a place to find solidarity and refuge, but also the place where they apply for positions and where their competence and usefulness are measured. Both ideology and community wellbeing are compromised.

EA as a Community 

An emphasis on community encourages EA members to make new connections with other EAs. However, if too much of someone’s social network lies within EA, they may become too dependent on the community.

Currently, for some people, many facets of their life are intertwined with EA. What happens when someone reaches a point where a critical number of their friends, colleagues, and romantic interests are EAs? Losing social standing in EA then means risking a critical portion of their social network… Most communities are not work-related, like a salsa dance community. Most communities that are related to a field or based on work are much looser and less homogeneous in values, e.g. the algebraic geometry (mathematics) community. The combination of work-centered and fairly tight-knit is less common.

Additionally, many individual EAs benefit financially from EA (through grants or salaries) and derive their sense of belonging from it. Some EAs are in careers that have good prospects within EA but bleak ones outside of it (e.g. independent AI safety researcher or wild animal welfare analyst). For these people there is a strong (subconscious) incentive to preserve their social connections within the community.

1. Human costs of high intertwinement of many facets of life with EA

Higher anxiety and a higher sensitivity to failures when observed by other EAs. When many of one’s needs are met within EA, one’s supply of many resources depends on one’s acceptance within the EA community. That is, individuals become reliant on their social standing within EA, and their supply of the good things in life becomes less robust. It would only be natural to become vigilant (i.e. anxious) about this aspect of life.

For example, if an individual’s EA-related job is going poorly, this may make them insecure about their position in the community and very anxious. By contrast, if their eggs were spread across more baskets, they would probably be less likely to catastrophize damage to their career. EAs do seem to have anxiety around rejection and failure, as pointed out by Damon.

Power dynamics and group status are amplified in an ideological community (as opposed to, for example, an ideology without a community). Julia has written about power dynamics she observes in EA. Many people would like to be high up in the social hierarchy of their primary community; if EA is the primary community of many, that has repercussions.

Career dependency. Choosing a career as a full-time community leader is well accepted within EA. However, people may find it difficult to find a stimulating job outside of EA if their main work experience is in community building.

Incentive to preserve social connections may override desire for truth-seeking. For example, I get the impression that there are subgroups in EA in which it’s especially cool to buy into AI risk arguments. There is a cynical view that one of the reasons mathsy, academically inclined people like arguments for x-risk from AI is that these arguments could make them and their friends heroes. For an in-depth explication of this phenomenon, consider the motivated reasoning critique of effective altruism.

If you or your friends just received a grant to work on x-risk from AI, then it would be quite inconvenient if you stopped believing x-risk from AI was a big problem. 

2. Epistemic costs of intertwinement: Groupthink 

Groupthink is hard to distinguish from convergence. When a group agrees on something and the outcome of its decision process is positive, we usually call this convergence. In the moment, it is hard for members to judge whether groupthink or convergence is happening. Groupthink is usually only identified after a fiasco.

Quotes. Two anonymous quotes about groupthink in EA:

a. “Groupthink seems like a problem to me. I’ve noticed that if one really respected member of the community changes their mind on something, a lot of other people quickly do too. And there is some merit to that, if you think someone is really smart and shares your values — it does make sense to update somewhat. But I see it happening a lot more than it probably should.”

b. “Too many people think that there’s some group of people who have thought things through really carefully — and then go with those views. As opposed to acknowledging that things are often chaotic and unpredictable, and that while there might be some wisdom in these views, it’s probably only a little bit.”

Currents in the ocean. The EA community wants to find out what the most important cause is. But many things are important for orthogonal reasons, and perhaps there is no point in forming an explicit ranking between cause areas. However, EA as a community wants to be coherent, and it does try to form a ranking.

A decade ago people directed their careers towards earning to give. In 2015, Global Poverty was considered 1.5 times as important as AI Risk, and in 2020 only almost as important (see). In my experience, and that of some people who’ve seen the community evolve for a long time (whom I’ve spoken to in private), EA experiences currents. And the current sways large numbers of people.

An indication that unnecessary convergence happens. On topics like x-risk we may think that EAs agree because they’ve engaged more with the arguments. However, in the EA and rationality spheres I think there is homogeneity or convergence where you wouldn't expect it by default: polyamory being much more common, a favorable view of cuddle piles, a preference for non-violent communication, a preference for explicit communication about preferences, waves of meditation being popular, waves of woo being unpopular, etc. The reader can probably think of more things that are much more common in EA than outside of it, even though they are in principle unrelated to what EA is about.

This could be a result of some nebulous selection effect or could be due to group convergence.

When I know someone is an EA or rationalist, my base rate for a lot of beliefs, preferences and attributes instantly becomes different from my base rate for a well-educated, western person.

I think this is a combination of 1) correlation in traits that were acquired before encountering EA; 2) unintended selection of arbitrary traits by EA (for example because a decent number of people with that trait were already present); and 3) convergence or groupthink. I think we should try to avoid 2) and 3).

EA is hard to attack as an outsider. Isn't EA particularly good at combating groupthink, for example by inviting criticism? you may ask. No, I do not think EA is particularly immune to it.

It is difficult to get the outside criticism we need, because EAs only appreciate criticism when it's written in the EA style, which is hard to acquire. For example, the large volume of existing texts that one would have to be familiar with before being able to emulate the style is fairly prohibitive.

An existence proof? that EA-like groups may not be immune to groupthink. Some examples of (small) communities that overlap demographically with EA and that highly value critical thinking are Leverage, CFAR and MIRI. To be clear, I think Leverage, CFAR and MIRI are all very different from EA as a community. However, these organizations do consist of people who particularly enjoy and (in some contexts) encourage critical thinking, and they may nonetheless have suffered from groupthink, as expanded on in these blog posts by Jessicata and Zoe Curzi.

3. Special responsibilities are a cost of organizing as a community

A moral compass often incorporates special responsibilities of individuals (EA or not) towards their children, the elderly in their care, family, friends, locals and community.

Being a community gives EA a similar special responsibility towards its members. However, if EA is too much of a community, it may have to spend more resources on keeping its members happy and healthy than it would if it were simply maximizing utility.

I think EA as a movement should only care about its 'members' insofar as the evidence suggests that this does in fact create the most utility per dollar. However, by being people’s community, EA takes on a special responsibility towards its members that goes beyond this.

To be clear, I do think that EA organizations should treat their employees in a humane fashion and care for them (just like most companies do). However, being a community goes beyond this. For example, it gives EA a special responsibility to (almost) anyone who identifies as EA.

Advantages of a community 

One factor that makes it so attractive to increase the community aspect of EA is this: people new to EA come across a super important idea (EA), and none of their friends seem to care. They want to do something about it. They can instantly contribute to “EA”, for example by supporting people who are already contributing directly, or by doing EA community building.

Young people studying something like economics (which could be a very useful expertise for EA in ten years!) end up doing community building because they want to contribute now. Because of this sensed importance, people want to help and join others in their efforts, and this is how the community-starting mechanism gets bootstrapped. (This sensed importance makes joining EA different from joining a company.) However, because EA is an ideology rather than one narrow project, people are buying into an entire community rather than joining a project.

To play devil’s advocate I will highlight some advantages of having an EA community. 

  • Many people would like a community. By offering one, EA may attract more capable people to work on EA goals.
    • Counter-argument (not supported by data or other evidence, just a hunch): the kind of people who crave a community tend to come from a place of feeling isolated and like a misfit, to crave a feeling of meaning, and to be less emotionally stable. I don't think it's necessarily productive to have a community full of people with these attributes.
    • In fact, we may have been unable to attract people with the most needed skills because of the approach to community building that EA has been taking, which is what this post argues.
  • A community increases trust among its members, which avoids costly checks. For example, it avoids checking whether someone will screw you over, and people can defer some judgements to the community.
    • I think avoiding checks on whether someone will screw you over is just plain good.
    • Counter-argument to the ease of deferring judgements: I think this can lead to people assuming the community has already judged something and no more scrutiny is needed, when in fact more scrutiny is needed.

2. Too Broad a Scope

In this section I will argue that:

  1. Optimizing for utility leads to different goals depending on what you value and on your meta-preferences.
  2. Some important goals don’t naturally go together, i.e. are inconvenient to optimize for within the same movement.
  3. More concrete and actionable goals are easier to coordinate around.

1. Foundation of EA may be too large

To quote Tyler Alterman: ‘The foundation of EA is so general as to be nearly indisputable. One version: "Do the most good that you can." [fill in own definitions of 'good,' 'the most,' etc]. The denial of this seems kind of dumb: "Be indifferent about doing the most good that you can" ?’

Many different (even contradictory!) actual goals can stem from trying to act altruistically effectively. For example a negative utilitarian and a traditional one disagree on how to count utility. I think that the current umbrella of EA cause areas is too large. EA may agree on mottos and methods, but the ideology is too broad to agree on what matters on an object-level.

People who subscribe to the most general motto(s) of EA could still disagree on:

  1. What has utility (Should we care about animals? Should we care about future humans?)
  2. Meta-preferences such as risk-tolerance and time-horizons

a. What has utility? People have different values.

b. Meta-preferences such as risk-tolerance and time-horizons. Even if you have the same values, you may still have different ‘meta-preferences’, such as risk tolerance, time horizons, etc. For example, people need different amounts of evidence before they’re comfortable with investing in a project. People could all be epistemically sound while having different thresholds for when they think the evidence for a project is strong enough to pour resources into it. (Ajeya talked about this in a podcast.)

For example, one EA slogan is ‘be evidence-based’, but this could lead to different behavior depending on how risk-averse you are about the evidence pointing in the right direction. In global health and wellbeing you can try a health intervention such as malaria nets, worm medication, etc. You measure some outcomes and compare with the default treatment. In this case, you can have high demands for evidence.

Say you instead consider x-risks from natural disasters. In this case, you cannot do experiments or intervention studies. Say you consider evidence for a particular intervention to prevent x-risks from emerging technologies. In this case, the only evidence you can work with is fairly weak. You could extrapolate trends, or use analogies, but that's weaker still. So far people have also relied on first principles.

People have different bars for conclusiveness of evidence. Many scientists would probably stick with interventional and observational studies. Philosophers may be content with first-principles reasoning. People will care about different projects in part because they have different meta-preferences.

2. Friction between Goals

Spread the idea of QALYs independently of growing EA. It would be great if the meme ‘when donating, keep in mind that some charities are more effective than others’ became more widespread. Not everyone who incorporates this idea into their worldview has to ‘become an EA’. However, because EA is currently a community and an umbrella of many outgrowths, it’s difficult to expose people only to ‘vanilla’ EA ideas. We should find a way of separating this core idea from current EA and let it integrate into mainstream culture.

The goal of normalizing doing traditional charities better and the goal of shining light on new cause areas don’t go well together. Trying to convince people that charity evaluation should be more evidence-based doesn’t gel well with working on niche causes. Even if they’d be sympathetic to counting QALYs, people who are more convention-oriented may feel too stretched by charities that target new cause areas, for example ones in the far future or ones regarding (wild) animals.

Counterfactual effective altruistic people and projects. Even if we’re able to internally marry all the different causes, such that no individual in EA feels iffy about it, we have probably already deterred a lot of people from joining (projects within) the movement or from feeling sympathy towards it. It’s of course hard to know how many counterfactual EAs we might have lost, or what exactly the preferences of counterfactual EAs are. But we should keep them in mind.

Note that ‘counterfactual EAs’ may actually not be the best phrasing. We shouldn’t aim to expand EA as much as we can as a goal in itself. If we focus more on doing object-level projects than on how many people we can get to buy into EA ideas, we may end up with more people doing good effectively (than the number of effective EAs we would otherwise have ended up with).

Why should we? We should not try to get people who care a lot about animal welfare or decreasing global inequality to care about x-risk. Many goals are basically orthogonal. There’s no need for one movement that ‘collects’ all of the most important goals. It’s fine for there to be a different movement for each goal.

As a side-note, I do think there’s a place for a cause area incubator, i.e. a group of people who work professionally to get completely new or overlooked cause areas off the ground. Current EA is much bigger than that though. Current EA includes: active work in cause areas that have more than 200 professionals; directing people’s careers; philosophy; etc.

EA as a movement doesn’t have to encompass all the cool things in the world. There can be different movements for different cool things. To me the question is no longer ‘should it be split up?’ but ‘how should it be split up?’

3. EA is not focused enough

In my opinion EA as a community currently bears a lot of similarity to a service club such as Rotary International. Service clubs are substantially about people doing favors for other club members.

EA started small but became big quite quickly, and has mostly aimed to expand. This aim of keeping all the sheep in the same herd facilitates community-based activities (as opposed to goal-based activities). For example, improving the mental health of young EAs, supporting new EAs, and organizing EA socials all mostly support the community and only indirectly support EA goals. Additional examples of EA-focused activities are: organizing EA Global, organizing local groups, running a contest for criticisms, maintaining quality on the EA Forum, and so on. These activities may or may not prove to be worth their cost, but their worth to the world is indirect (via the effectiveness of EA) rather than a matter of directly improving the world.

If a project or organization has very concrete goals (such as submitting papers to an AI safety workshop), then it’s usually clear how many resources should go into maintaining the team or organization and how many into directly trying to achieve the goal. Making sure you work with collaborators towards a positive goal is altruistic; generally participating in a service club less so.

Advantages of an Umbrella

  • People can work on riskier endeavors if they feel there are people like them working on less risky projects. If you feel like you’re part of a bigger movement you may be happy to do whatever needs to be done in that movement that’s to your comparative advantage.
  • It may be annoying or messy to decide where to draw the lines.
  • There are a lot of ideas that flow out of Doing Good Better that are less obvious than ‘certain charities are more effective than others’, such as ‘earning to give’ or directing your career towards positive impact.
    • It may be possible to expose people to these ideas without them being a core part of the umbrella? For example, it’s also possible to reference Popper or Elon Musk even though they are not at the core of EA.

The way forward: less is more

Less community

Less EA identity. Currently a community aspect of EA is created by people ‘identifying’ as EA, which is unnecessary in my opinion. So I’d advocate for seeing oneself not as ‘an EA’, but just as someone who’s working on x for reason y.

Fewer community events. I’m in favor of project-based events, but am wary of non-specific networking events. 

Less internal recruiting. EA is community-focused in that advertisements for opportunities are often broadcast in EA groups rather than in universities or on LinkedIn directly. Currently a common funnel is: an EA group advertisement is placed in a university; once people have entered the group, they see advertisements for scholarships, etc.

Instead I’d aim for removing the community mode of communicating about opportunities and advertising specific opportunities directly in universities. We shouldn’t make opportunities conditional on in-group status, so we should try to make opportunities equally accessible to all. (Also try to avoid having ‘secret’ signals, readable only by EAs, that an opportunity is very cool.)

Narrowing the scope 

EA as it is could be split into separate movements, each narrower in scope and more focused.

Split off EA into:

  • A movement focusing on doing traditional charities better;
  • A movement or organization focusing on becoming an incubator of neglected but important causes;
  • A couple of mature scientific fields (much like physics has split off from philosophy);

The EA movement and branding could split into 1) the original EA, namely doing traditional charities better by assessing QALYs per dollar; and 2) an incubator. This split would for example mean that EA Global would no longer exist; instead there could be completely independent conferences with narrower focuses (e.g. not deliberately run in parallel or shortly after one another).

The incubator could be an organization that identifies ‘new’ causes; does basic research on them; and hands out grants to charities that work for the causes. Once a cause area becomes large enough that it can stand on its own, it’s cut off from the metaphorical umbilical cord. So for example, AI risk would probably be cut off around now. (Note that there could be multiple organizations and/or research labs working in the newly split off field.)

Two advantages of separating an incubator from traditional EA are:

  • The cause areas in the incubator would all be small and so would be more balanced in size. As a cause area becomes sizable, it can be cut off.
  • The incubator could absorb all the weirdness points, and even if people don’t feel attracted to the incubator, they wouldn’t find the weirdness fishy, as an incubator ought to support innovative ideas.

If that seems useful, then in addition to movements 1) doing traditional charities better by assessing QALYs per dollar and 2) an incubator, we could have a movement centered around 3) longtermism, or public perception of and solutions to x-risk.


Overall Recommendation: EA should drop expansionism and loosen its grip. 

Comments (10)

I thought this post raised many points worth pondering, but I am skeptical of the actual suggestions largely because it underrates the benefits of the current setup and neglects the costs. I'll list my thoughts below:

a) Yeah, the community aspect is worrying in terms of how it distorts people's incentives, but I believe that we also have to be willing to back ourselves and not risk crippling our effectiveness by optimising too much on minimising downside in the case where we are wrong.

b) Ways of thinking and frameworks of the world are much more than merely a "vibe" or "intellectual aesthetic".

c) Groupthink can be addressed by other, less costly interventions, such as the Criticism and Red Teaming Contest. I imagine that we could run other projects in this space, such as providing longer-term funding for people who bring a different perspective to the table. These aren't perfect, but achieving goals is much easier when you have many like-minded people, so swinging too far the other way could cripple us too.

d) I don't see the EA style as being hard to acquire. However, I agree that it's important for us to be able to appreciate criticisms written in other styles, as otherwise we'll learn from others at a much slower rate.

e) I feel that EA is somewhat dropping the ball on special responsibilities at the moment. With our current resources and community size, I think that we could address this without substantially impacting our mission, although this might change over the longer term.

f) I feel that the advantages of a community are vastly underrated by this post. For one, the community provides a vital talent pool. Many people would never have gotten involved in direct work if there wasn't a lower-commitment step than redirecting their career that they could take first, or a local community running events to help them understand why the cause was important. I suppose we could structure events and activities to minimise the extent to which people become friends, but that would just be a bad community.

g) That said, we should dedicate more effort towards recruiting people with the specific skills that we need. We need more programs like the Legal Priorities Summer Institute or the EA Communicators Fellowship. I'm also bullish on cause-specific movement building to attract people to direct work who care about the specific cause, but who might not vibe with EA.

h) Giving What We Can and GiveWell are spreading the meme ‘when donating keep in mind that some charities are more effective than others' without positioning it in a broader EA framework. I'm really happy to see this as I think that many people who would never want to be part of the EA community might be persuaded to adopt this meme.

i) "We should not try to get people who care a lot about animal welfare or decreasing global inequality to care about x-risk" - I agree that we want to limit the amount of effort/resources spend trying to poach people from other cause areas, but shifting from one cause are to another could potential lead to orders of magnitude in the impact someone has.

j) EA was pretty skeptical about being too meta-focused at first due to worries that we might lose our focus on direct work, but in retrospect, I suspect that we were wrong not to have spent more money on community-building, as it seems to have paid dividends in terms of recruiting talent.

k) I'd like to see the section on less internal recruiting engage with the argument that value alignment is actually important. I think that the inevitable result of hiring people from society at large would be a watering down of EA ideas, and rather than a marginally less impactful project being pursued, I expect that in many cases this could reduce impact by an order of magnitude from having people pursue the highest impact project that is high status rather than the highest impact project. There's also significant risk in that once you bring in people who aren't value-aligned, they bring in more people who aren't value-aligned, and then the whole culture changes. I'm guessing you might not think this is important given that you've described it as a "vibe", but I'd suggest that having the right culture is a key part of achieving high performance.

l) "So for example, AI risk would probably be cut off around now" - While this would free up resources to incubate more causes, this could also be a major blunder if AI risk is an immediate, short-term priority and we need to be moving on this ASAP.

Most of what I've written here is criticism, so I wanted to emphasise again that I found the ideas here fascinating and I definitely think reading your post was worth my time :-).

Thanks!

a) I broadly like the idea that “we also have to be willing to back ourselves and not risk crippling our effectiveness by optimising too much on minimising downside in the case where we are wrong”. I would like to note that downgrading the self-directed investment reduces the need for caution, and so reduces the crippling effect.

j) I think it’s hard to decide how much meta-investment is optimal. You talk about it as if it’s a matter of dialling up or down one parameter (money) though, which I think is not the right way to think about it. The ‘direction’ in which you invest in meta-things also matters a lot. In my ideal world “Doing Good Better” becomes part of the collective meme-space just like “The Scientific Method” has. However, it’s not perfectly obvious which type of investment would lead to such normalisation and adoption. 

h) I’m happy to hear Giving What We Can and GiveWell don’t position themselves in the wider EA framework. I’m not very up to date with how (effectively) they are spreading memes.

c) Running an intervention such as the Criticism and Red Teaming Contest is only effective if people can fundamentally change their minds based on submissions. (And don’t just enjoy that we’re all being so open-minded by inviting criticisms, or only change their minds about non-core topics.)

f) I agree talent is important. However, I think organising as a community may just as well have made us lose out on talent. (This “a local community running events to help them understand why the cause was important” actually gives me some pyramid scheme vibes, btw.)

i) I wasn’t talking about poaching. I was talking more about this: caring about all EA cause areas should not in any way be a condition for, or desired outcome of, someone caring about one EA cause area.
Re “shifting from one cause area to another could potentially increase someone's impact by orders of magnitude”: sure, but I think in EA the cost of switching has also been high. What people think is the most impactful area switches all the time, and skilling up in a new area takes time. If someone works in a useful area, has built up expertise there, and the area is to their comparative advantage, then it would be best if they stayed in that area.

k) Here we disagree. I think that within a project there should be value-alignment. However, the people within a project imo do not have to be value-aligned to EA at large. 

Re “I'd suggest that having the right culture is a key part of achieving high performance”: I personally think “doing the thing” and engagement with concrete projects are most important.

I also actually feel like “could reduce impact by an order of magnitude from having people pursue the highest impact project that is high status rather than the highest impact project” is currently partially caused by the EA community being important to people. If people’s primary community is something like a chess club, pub or family, then there are probably loads of ways to increase status in that group that have nothing to do with the content of their job (e.g. getting better at chess, being funny, and being reliable and kind). However, if the status that’s most important to you is whether other EAs think your work is impactful, then you end up with people wanting to work on the hottest topic, rather than doing the most impactful thing based on their comparative advantage.

I really liked reading this, as I think it captures my most recent concerns/thoughts around the EA community.

  • I strongly agree that the costs of intertwining the professional and personal spheres require more careful thought—particularly re: EA hubs and student groups. The epistemic costs seem most important to me here: how can we minimize social costs for 'going against' a belief(s) held by the majority of one's social circle?
  • I think more delineation would be helpful, particularly between the effective giving and cause incubation approaches of EA. I would hate to see the former be bottlenecked by the latter, or vice versa.

However, I'm not really sure I agree that the foundation of EA is too large. I am selfishly motivated here because I think the EA community, writ large, is one of my absolute favourite things and I would hate to see it go away (which is the thrust of your suggestion). I think there are core principles to EA that give it just enough of a shape while still bringing together people who are interested, to whatever extent they have decided is appropriate for their life, in maximizing social impact.

I don't really know if I have good alternative proposals, though. I think I'd want to see more delineation and then see if that solves most of the problems.

This type of piece is what the Criticism contest was designed for, and I hope it gets a lot of attention and discussion. EA should have the courage of its convictions; global poverty and AI alignment aren't going to be solved by a friend group, let alone the same friend group.

Could you describe in other words what you mean by "friend group"?

While a group formed around hiking, tabletop games or some fanfic may not solve AI (ok, the fanfic part might), friends with a common interest in ships and trains probably have an above-average shot at solving global logistics problems.

I’m using ‘friend group’ to mean something like a relatively small community with tight social ties and a large and diverse set of semi-reliable identifiers.

EA attracts people who want to do large amounts of good. Weighted by engagement, the EA community is made up of people for whom this initial interest in EA was reinforced socially or financially, often both. Many EAs believe that AI alignment is an extremely difficult technical problem, on the scale of questions motivating major research programs in math and physics. My claim is that such a problem won’t be directly solved by this relatively tiny subset of technically-inclined do-gooders, nice people who like meet-ups and have suspiciously convergent interests outside of AI stuff.

EA is a friend group; algebraic geometers are not. Importantly, even if you don’t believe alignment is that difficult, we’d still solve it more quickly without tacking on this whole social framework. It worries me that alignment research isn’t catching on in mainstream academia (like climate change did); this seems to indicate that some factor in the post above (like groupthink) is preventing EAs from either constructing a widely compelling argument for AI safety, or making it compelling for outsiders who aren’t into the whole EA thing.

Basically we shouldn’t tie causes unnecessarily to the EA community - which is a great community - unless we have a really good reason.

While I think this post touches on some very important points - EA, as a movement, should be more conscious of its culture - the proposed solution would be terrible in my opinion.

Splitting up EA would mean losing common ground. Currently, resource allocation across different goals can be decided under the "doing good better" principles, whatever that means. Without that, the causes would compete with each other for talent, donors, etc.; networks would fragment, and efficiency would decrease.

However, EA-identifying people should think more clearly about what these common principles are, and should be more intentional about creating the culture, in order to avoid some of the problems described in the EA community.

Incentive to preserve social connections may override desire for truth-seeking

EA/the rationalist diaspora is supposed to protect the practice of healthy disagreement and earnest truthseeking by lifting up people who're good at it, rewarding the bringers of bad news, and so on. I find the existence of that community to be so protective against broader societal pressures to be lazy about truthseeking that I can't imagine how things would be better if you got rid of it.

It's very possible that we're not doing that well enough and in some way it's failing (there are, indeed, many social pressures relating to community that undermine truthseeking), but we shouldn't just accept that as inevitable. If it's failing, we need to talk about that.

 

I think if we really just try to build a moral community around the broadest notion of EA (epistemically rigorous optimization of, and dialog between, shared values, essentially), that'll turn out to be in some ways... less intense than the community people are expecting. But it'll still be a community: still a shared morality that inspires some affection between people, informal ledgers of favors, a shared language, an ongoing dialog, mutual respect and trust.

I agree that it’s great that EA values truth-seeking. However, I’m not sure a social community is essential to acting according to this value, since this value could be incorporated on the level of projects and organisations just as well.

For example, consider the scientific method and thinking in a sciency way. Although we can speak of ‘the scientific community’, it’s a community with fairly weak social ties, and most things happen on the level of projects and organisations. These (science) projects and organisations usually heavily incorporate scientific thinking.

For an individual's experience and choices there are usually many ‘communities’ relevant at the same time, e.g. their colleagues, school-mates, country of residence, people sharing their language, etc. However, each of these ‘communities’ has a differently sized impact on their experience and choices. What I’m arguing for is increasing the ‘grasp’ of projects and organisations and decreasing the grasp of the wider EA community.

I thought this was a great post, thanks for writing it. Some notes:

  • If a community rests on broad, generally-agreed-to-be-true principles, like a kind of lowest-common-denominator beneficentrism, some of these concerns seem to me to go away.
    • Example: People feel free to change their minds ideologically; the only sacred principles are something like "it's good to do good" and "when doing good, we should do so effectively", which people probably won't disagree with; and if they did disagree, that should probably make them not-EAs.
    • If a core value of EA is truth-seeking/scout mindset, then identifying as an EA may reduce groupthink. (This is similar to what Julia Galef recommends in The Scout Mindset.)
  • I feel like, if there wasn't an EA community, there would naturally spring up an independent effective global health & poverty community, an independent effective animal advocacy community, an independent AI safety community, etc., all of which would be more homogeneous and therefore possibly more at risk of groupthink. The fact that EA allows people with these subtly different inclinations (of course there's a lot of overlap) to exist in the same space should if anything attenuate groupthink.
    • Maybe there's evidence for this in European politics, where narrow parties like Socialists, Greens and (in Scandinavia though not in Germany) Christian Democrats may be more groupthinky than big-tent parties like generic Social Democratic ones. I'm not sure if this is true though.
  • Fwiw, I think EA should not grow indefinitely. I think at a certain point it makes sense to try to advocate for some core EA values and practices without necessarily linking them to (or weighing them down with) EA.
  • I agree that it seems potentially unhealthy to have one's entire social and professional circle drawn from a single intellectual movement.

Many different (even contradictory!) actual goals can stem from trying to act altruistically effectively. For example a negative utilitarian and a traditional one disagree on how to count utility. I think that the current umbrella of EA cause areas is too large. EA may agree on mottos and methods, but the ideology is too broad to agree on what matters on an object-level.

This just doesn't seem to cause problems in practice? And why not? I think because (1) we should and often do have some uncertainty about our moral views and (2) even though we think A is an order of magnitude more important to work on than B, we can still think B is orders of magnitude more important than whatever most non-EAs do. In that case two EAs can disagree and still be happy that the other is doing what they're doing.
