
Summary: I argue for a very broad, inclusive EA, based on the premise that the culture of a region is more important than any specific group within that region, and that a broad and inclusive EA will help shift the overall culture of the world in a better direction. As a concrete strategy, I propose a division into low- and high-level EA - a division which I argue already exists within EA - and then selling people on low-level EA (using EA concepts within their chosen cause area to make that cause more effective), even if they are already committed to causes which traditional EA would consider low-impact or ineffective. I argue that in the long term, this will both boost the general effectiveness of all altruistic work done in the world, and also bring more people into high-level EA.

Related post / see also: All causes are EA causes, by Ian David Moss.

An analogy

Suppose that you were a thinker living in a predominantly theocratic world, where most people were, if not exactly hostile to science, then at least utterly uninterested in it. You wanted to further scientific understanding in the world, and were deciding between two kinds of strategies:

1) Focus on gathering a small group of exceptional individuals to do research and to directly further scientific progress, so that the end result of your life's work would be the creation of a small elite academy of scientists who did valuable research.

2) Focus on spreading ideas and attitudes that made people more amenable to the idea of scientific inquiry, so that the end result of your life's work would be your society shifting towards modern Western-style attitudes to science about a hundred years earlier than they would otherwise.

(I am not assuming that these strategies would have absolutely no overlap: for instance, maybe you would start by forming a small elite academy of scientists to do impressive research, and then use their breakthroughs to impress people and convince them of the value of science. But I am assuming that there are tradeoffs between the two goals, and that you ultimately have to choose to focus more on one or the other.)

Which of these outcomes, if successful, would do more to further scientific progress in the world?

It seems clear to me that the second outcome would: most obviously, because if people become generally pro-science, that will lead to the creation of many elite scientific academies, not just one. Founding an academy composed of exceptional individuals may enable them to produce a lot of important results, but they still have to face the population's general indifference, and many of their discoveries may eventually be forgotten entirely. The combined output of a whole civilization's worth of scientists is unavoidably going to outweigh the accomplishments of any small group.

Mindsets matter more than groups

As you have probably guessed, this is an analogy for EA, and a commentary on some of the debates that I've seen on whether to make EA broad and inclusive, or narrow and weird. My argument is that, in the long term, a civilization will do a lot more good if core EA concepts, such as evaluating charities based on their tractability, have permeated the whole civilization, than if just a small group of people focuses on particularly high-impact interventions. Just as a civilization of science-minded people will outdo a single elite scientific academy, a civilization that has been permeated by EA ideas will form lots of groups focused on high-impact interventions.

This could be summed up as the intuition that civilizational mindsets are more important than any group or individual. (Donella Meadows identifies a system's mindset or paradigm as one of the most effective points at which to intervene in a system.) Any given group can only do so much, but mindsets will consistently lead to the formation of many different groups. Consider for instance the spread of environmentalist ideas over the last century or so: we are now at a point where these ideas are taken so much for granted that a lot of different people think that environmentalist charities are self-evidently a good idea and that people who do such work are praiseworthy. Or consider the spread of the idea that education is important, with the result that an enormous number of education-focused charities now exist. E.g. Charity Navigator alone lists close to 700 education-focused charities and over 400 environment-focused charities.

If EA ideas were thought to be similarly obvious, we could have hundreds of EA organizations - or thousands or tens of thousands, given that I expect Charity Navigator to only list a small fraction of all the existing charities in the world.

Now, there are currently a lot of people working on what many EAs would probably consider ineffective causes, and who have emotional and other commitments to those causes. Many of those people would likely resist the spread of EA ideas, as EA implies that they should change their focus to doing something else.

I think this happening would be bad - and I don't mean "it's bad that we can't convert these people to more high-impact causes". I mean "I consider everyone who tries to make the world a better place to be my ally, and I'm happy to see people do anything that contributes to that; and if they have personal reasons for sticking with some particular cause, then I would at least want to enable them to be as effective as possible within that cause".

In other words, if someone is committed to getting guide dogs to blind people, then I think that's awesome! It may not be the most high-impact thing to do, but I do want to enable blind people to live the best possible lives too, and altruists working to enable that is many times better than those altruists doing nothing at all. And if this is the field that they are committed to, then I hope that they will use EA concepts within that field: figure out whether there are neglected approaches to helping blind people (could there be something even better than guide dogs?), gather more empirical data to verify existing assumptions about which dog breeds and training techniques are best for helping blind people, consider things like job satisfaction and personal fit in deciding whether they personally want to train guide dogs, do administrative work in matching those dogs to blind people, or earn to give, etc.

If EA ideas do spread in this way to everybody who does altruistic work, then that will make all altruistic work more effective. And as the ideas become more generally accepted, a greater proportion of people will end up taking them for granted and considering them obvious. Such people are more likely to apply them to the question of career choice before they're committed to any specific cause. Both outcomes - all altruistic work becoming more effective, and more people going into high-impact causes - are fantastic.

A concrete proposal for mindset-focused EA strategy

Maybe you grant that all of that sounds like a good idea in principle, but how would it be applied in practice?

My proposal is to explicitly talk about two kinds of EA (these may need catchier names):

1. High-level EA: taking various EA concepts of tractability, neglectedness, room for more funding, etc., and applying them generally, to find whatever cause or intervention that can be expected to do the most good in the world.

2. Low-level EA: taking some specific cause for granted, and using EA concepts to find the most effective ways of furthering that specific cause.

With this distinction in place, we can talk about how people can do high-level EA if they are interested in doing the most good in the world in general, or apply low-level EA within a cause if they are interested in some specific cause. And, to some extent, this is what's already happening within the EA community: while some people are focused specifically on high-level EA and general cause selection, a lot of others "dip their toes" into high-level EA for a bit to pick their preferred cause area (e.g. global poverty, AI, animal suffering), and then do low-level EA on their chosen cause area from that moment forward. As a result, we already have detailed case studies of applying low-level EA to specific areas: e.g. Animal Charity Evaluators is a low-level EA organization within the cause of animal charity, and has documented ways in which they have applied EA concepts to that cause.

The main modification is to talk about this distinction more explicitly, and to phrase things so as to make it more obvious that people from all cause areas are welcome to apply EA principles to their work. Something like the program of EA Global events could be kept mostly the same, with some of the programming focused on high-level EA content and some of it focused on low-level EA; just add in some talks/workshops/etc. on applying low-level EA more generally. (Have a workshop about doing this in general, find a guide dog charity that has started applying low-level EA to its work and have its leader give a talk on what they've done, etc.) Of course, to spread EA ideas more effectively, some people would need to focus on making contact with existing charities that are outside the current umbrella of EA causes and, if the people in those charities are receptive to it, work together with them to figure out how they could apply EA to their work.

Comments

Hey Kaj,

I agree with a lot of these points. I just want to throw some counter-points out there for consideration. I'm not necessarily endorsing them, and don't intend them as a direct response, but thought they might be interesting. It's all very rough and quickly written.

1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they're already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.

It becomes particularly difficult because the leaders, who do the broad outreach, want to focus on high-level EA. It's more transparent and open to pitch high-level EA directly.

There are probably ways you could implement a division without incurring these problems, but it would need some careful thought.

2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It's what makes us different from simply "competent" do-gooding, and often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in scale of influence to offset this.

3) Often the best way to promote general ideas is to live them. With your example of promoting science, people often seem to think the Royal Society was important in building the scientific culture in the UK. It was an elite group of scientists who just went about the business of doing science. Early members included Newton and Boyle. The society brought likeminded people together, and helped them to be more successful, ultimately spreading the scientific mindset.

Another example is Y Combinator, which has helped to spread norms about how to run startups, encourage younger people to do them, reduce the power of VCs, and have other significant effects on the ecosystem. The partners often say they became famous and influential due to reddit -> dropbox -> airbnb, so much of their general impact was due to having a couple of concrete successes.

Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on doing is just having a huge impact - finding the "airbnb of EA" or the "Newton of EA".

Thanks!

1) Having a high/low distinction is part of what has led people to claim EAs are misleading. One version of it involves getting people interested through global poverty (or whatever causes they're already interested in), and then later trying to upsell them into high-level EA, which presumably has a major focus on GCRs, meta and so on.

Yeah, agreed. Though part of what I was trying to say is that, as you mentioned, we have the high/low distinction already - "implementing" that distinction would just be giving an explicit name to something that already exists. And something that has a name is easier to refer to and talk about, so having some set of terms for the two types could make it easier to be more transparent about the existence of the distinction when doing outreach. (This would be the case regardless of whether we want to expand EA to lower-impact causes or not.)

2) It sometimes seems like the most innovative and valuable idea within EA is cause selection. It's what makes us different from simply "competent" do-gooding, and often seems to be where the biggest gains in impact lie. Low-level EA seems to basically be EA minus cause selection, so by promoting it, you might lose most of the value. You might need a very big increase in scale of influence to offset this.

I guess the question here is, how much would efforts to bring in low-level EAs hurt the efforts to bring in high-level EAs. My intuition would be that the net effect would be to bring in more high-level EAs overall (a smaller percentage of incoming people would become high-level EAs, but that would be offset by there being more incoming people overall), but I don't have any firm support for that intuition and one would have to test it somehow.

3) Often the best way to promote general ideas is to live them. ... Maybe if EA wants to have more general impact on societal norms, the first thing we should focus on doing is just having a huge impact - finding the "airbnb of EA" or the "Newton of EA".

I agree that the best way to promote general ideas can be to live them. But I think we need to be more specific about what a "huge impact" would mean in this context. E.g. High Impact Science suggests that Norman Borlaug is one of the people who have had the biggest positive impact on the world - but most people have probably never heard of him. So for spreading social norms, it's not enough to live the ideas and make a big impact; one has to do it in a sufficiently visible way.

Ian David Moss has a post on this forum arguing for things along the lines of 'EA for the rich country fine arts' and other such restricted scope versions of EA.

My biggest objection to this is that, to stay in line with people's habitual activities, the rationales for the restricted scope have to be very gerrymandered (perhaps too much to be credible if stated explicitly), and optimizing within that restricted objective function may pick out things that are overall bad, e.g. the recent media discussion comparing interventions purely in terms of their carbon emissions without taking anything else into account, suggesting that the existence of a member of a society with GDP per capita of $56,000 is bad if it includes carbon emissions with a social cost of $2,000 per person.

Ian David Moss has a post on this forum arguing for things along the lines of 'EA for the rich country fine arts' and other such restricted scope versions of EA.

Thanks for the link! I did a quick search to find if someone had already said something similar, but missed that.

My biggest objection to this is that, to stay in line with people's habitual activities, the rationales for the restricted scope have to be very gerrymandered (perhaps too much to be credible if stated explicitly), and optimizing within that restricted objective function may pick out things that are overall bad,

I'm not sure whether the first one is really an issue - just saying that "these are general tools that you can use to improve whatever it is that you care about, and if you're not sure what you care about, you can also apply the same concepts to find that" seems reasonable enough to me, and not particularly gerrymandered.

I do agree that optimizing too specifically within some narrow domain can be a problem that produces results that are globally undesirable, though.

Views my own, not my employer's.

Thanks for writing this up! I agree that it could be a big win if general EA ideas besides cause prioritization (or the idea of scope-limited cause prioritization) spread to the point of being as widely accepted as environmentalism. Some alternatives to this proposal though:

  1. It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, cognitive biases, etc completely unconnected to altruism than to try to explicitly spread narrow or cause-specific EA. People on average care much more about being productive, making money, having good relationships, finding meaning, etc than about their preferred altruistic causes. And it really would be a big win if they succeeded -- less ambiguously so than with narrow EA I think (see Carl's comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.
  2. Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g. math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I'm unsure how difficult this would be.

Both of these alternatives seem to have what is (to me) an advantage: they don't involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people can have high trust collaboration relatively quickly.

FWIW, I think I would be much more excited to evangelize broad low-level EA memes if there were some strong alternative channel to distinguish cause-neutral, super intense/obsessive EAs. Science has a very explicit distinction between science fans and scientists, and a very explicit funnel from one to the other (several years of formal education). EA doesn't have that yet, and may never. My instinct is that we should work on building a really really great "product", then build high and publicly-recognized walls around "practitioners" and "consumers" (a practical division of labor rather than a moral high ground thing), and then market the product hard to consumers.

Thanks for the comment!

  1. It might be better to spread rationality and numeracy concepts like expected value, opportunity costs, comparative advantage, cognitive biases, etc completely unconnected to altruism than to try to explicitly spread narrow or cause-specific EA. People on average care much more about being productive, making money, having good relationships, finding meaning, etc than about their preferred altruistic causes. And it really would be a big win if they succeeded -- less ambiguously so than with narrow EA I think (see Carl's comment below). The biggest objection to this is probably crowdedness/lack of obvious low-hanging fruit.

I agree with the "lack of obvious low-hanging fruit". It doesn't actually seem obvious to me how useful these concepts are to people in general, as opposed to more specific concrete advice (such as specific exercises for improving their social skills etc.). In particular, Less Wrong has been devoted to roughly this kind of thing, and even among LW regulars who may have spent hundreds of hours participating on the site, it's always been controversial whether the concepts they've learned from the site have translated into any major life gains. My current inclination would be that "general thinking skills" just aren't very useful for dealing with your practical life, and that concrete domain-specific ideas are much more useful.

You said that people in general care much more about concrete things in their own lives than their preferred altruistic causes, and I agree with this. But on the other hand, the kinds of people who are already committed to working on some altruistic cause are probably a different case: if you're already devoted to some specific goal, then you might have more of an interest in applying those things. If you first targeted people working in existing organizations and won them over to using these ideas, then they might start teaching the ideas to all of their future hires, and over time the concepts could start to spread to the general population more.

  2. Another alternative might be to focus on spreading the prerequisites/correlates of cause-neutral, intense EA: e.g. math education, high levels of caring/empathy, cosmopolitanism, motivation to think systematically about ethics, etc. I'm unsure how difficult this would be.

Maybe. One problem here is that some of these correlate only very loosely with EA: plenty of people who have completed a math education aren't EAs. And I think that another problem is that in order to really internalize an idea, you need to actively use it. My thinking here is similar to Venkatesh Rao's, who wrote:

Strong views represent a kind of high sunk cost. When you have invested a lot of effort forming habits, and beliefs justifying those habits, shifting a view involves more than just accepting a new set of beliefs. You have to:

  1. Learn new habits based on the new view
  2. Learn new patterns of thinking within the new view

The order is very important. I have never met anybody who has changed their reasoning first and their habits second. You change your habits first. This is a behavioral conditioning problem largely unrelated to the logical structure and content of the behavior. Once you’ve done that, you learn the new conscious analysis and synthesis patterns.

This is why I would never attempt to debate a literal creationist. If forced to attempt to convert one, I’d try to get them to learn innocuous habits whose effectiveness depends on evolutionary principles (the simplest thing I can think of is A/B testing; once you learn that they work, and then understand how and why they work, you’re on a slippery slope towards understanding things like genetic algorithms, and from there to an appreciation of the power of evolutionary processes).

I wouldn't know how to spread something like cosmopolitanism, to a large extent because I don't know how to teach the kinds of thinking habits that would cause you to internalize cosmopolitanism. And even after that, there would still be the step of getting from all of those prerequisites to actually applying EA principles in practice. In contrast, teaching EA concepts by getting people to apply them to a charitable field they already care about gets them into applying EA-ish thinking habits directly.

Both of these alternatives seem to have what is (to me) an advantage: they don't involve the brand and terminology of EA. I think it would be easier to push on the frontiers of cause-neutral/broad EA if the label were a good signal of a large set of pretty unusual beliefs and attitudes, so that people can have high trust collaboration relatively quickly.

That's an interesting view, which I hadn't considered. I might view it more as a disadvantage, in that in the model that I was thinking of, people who got into low-level EA would almost automatically also be exposed to high-level EA, causing the idea of high-level EA to spread further. If you were only teaching related concepts, that jump from them to high-level EA wouldn't happen automatically, but would require some additional steps. (That said, if you could teach enough of those prerequisites, maybe the jump would be relatively automatic. But this seems challenging for the reasons I've mentioned above.)

I want to suggest a more general version of Ajeya's views, which is:

If someone did want to put time and effort into creating the resources to promote something akin to "broad effective altruism" they could focus their effort in two ways:

  1. on research and advocacy that does not add to (and possibly detracts attention from) the "narrow effective altruism" movement.

  2. on research and advocacy that benefits the effective altruism movement.

EXAMPLES

  1. E.g. researching what the best arts charity in the UK is. Not useful, as it is very unlikely that anyone who takes a cause-neutral approach to charity would want to give to a UK arts charity. There is also a risk of misleading people, for example if you google effective altruism and a bunch of materials on UK arts come up first.

  2. E.g. researching general principles of how to evaluate charities. Researching climate change solutions. Researching systemic change charities. These would all expand the scope of EA research and writings, might produce plausible candidates for the best charity/cause, and at the same time act to attract more people into the movement. Consider climate change: it is a problem that humanity has to solve at some point this century (unlike UK arts), and it is also a cause many non-EAs care about strongly.

CONCLUSION

So if there were at least some effort put into a "broad effective altruism" expansion, I would strongly recommend starting with ways to expand the movement that are simultaneously useful areas for us to be considering in more detail.

(That said, FWIW I am very wary of attempts to expand into a "broad effective altruism", for some of the reasons mentioned by others.)

Does anyone know which version of your analogy early science actually looked like? I don't know very much about the history of science, but it seems worth noting that science is strongly associated with academia, which is famous for being exclusionary & elitist. ("The scientific community" is almost synonymous with "the academic science community".)

Did science ever call itself a "movement" the way EA calls itself a movement? My impression is that the skeptic movement (the thing that spreads scientific ideas and attitudes through society at large) came well after science proved its worth. If broad scientific attitudes were a prerequisite for science, that predicts that the popular atheism movement should have come several centuries sooner than it did.

If your goal is to promote scientific progress, it seems like you're better off focusing on a few top people who make important discoveries. There's plausibly something similar going on with EA.

I'm somewhat confused that you list the formation of many groups as a benefit of broad mindset spread, but then say that we should try to achieve the formation of one very large group (that of "low-level EA"). If our goal is many groups, maybe it would be better to just create many groups? If our goal is to spread particular memes, why not the naive approach of trying to achieve positions of influence in order to spread those particular memes?

The current situation WRT growth of the EA movement seems like it could be the worst of both worlds. The EA movement does marketing, but we also have discussions internally about how exclusive to be. So people hear about EA because of the marketing, but they also hear that some people in the EA movement think that maybe the EA movement should be too exclusive to let them in. We'd plausibly be better off if we adopted a compromise position of doing less marketing and also having fewer discussions about how exclusive to be.

Growth is a hard-to-reverse decision. Companies like Google are very selective about who they hire because firing people is bad for morale. The analogy here is that instead of "firing" people from EA, we're better off if we don't do outreach to those people in the first place.

[Highly speculative]: One nice thing about companies and universities is that they have a clear, well-understood inclusion/exclusion mechanism. In the absence of such a mechanism, you can get concentric circles of inclusion/exclusion and associated internal politics. People don't resent Harvard for rejecting them, at least not for more than a month or two. But getting a subtle cold shoulder from people in the EA community will produce a lasting negative impression. Covert exclusiveness feels worse than overt exclusiveness, and having an official party line that "the EA movement must be welcoming to everyone" will just cause people to be exclusive in a more covert way.

I'm somewhat confused that you list the formation of many groups as a benefit of broad mindset spread, but then say that we should try to achieve the formation of one very large group (that of "low-level EA"). If our goal is many groups, maybe it would be better to just create many groups?

I must have expressed myself badly somehow - I specifically meant that "low-level EA" would be composed of multiple groups. What gave you the opposite impression?

For example, the current situation is that organizations like the Centre for Effective Altruism and Open Philanthropy Project are high-level organizations: they are devoted to finding the best ways of doing good in general. At the same time, organizations like Centre for the Study of Existential Risk, Animal Charity Evaluators, and Center for Applied Rationality are low-level organizations, as they are each devoted to some specific cause area (x-risk, animal welfare, and rationality, respectively). We already have several high- and low-level EA groups, and spreading the ideas would ideally cause even more of both to be formed.

If our goal is to spread particular memes, why not the naive approach of trying to achieve positions of influence in order to spread those particular memes?

This seems completely compatible with what I said? On my own behalf, I'm definitely interested in trying to achieve a position of higher influence to better spread these ideas.

EAs seems pretty open to the idea of being big-tent with respect to key normative differences (animals, future people, etc). But total indifference to cause area seems too lax. What if I just want to improve my local neighborhood or family? Or my country club? At some point, it becomes silly.

It might be worth considering parallels with the Catholic Church and the Jesuits. The broader church is "high level", but the requirements for membership are far from trivial.

"Total indifference to cause area" isn't quite how I'd describe my proposal - after all, we would still be talking about high-level EA, a lot of people would still be focused on high-level EA and doing that, etc. The general recommendation would still be to go into high-impact causes if you had no strong preference.

Really appreciate you taking the time to write this up! My initial reaction is that the central point about mindset-shifting seems really right.

My proposal is to explicitly talk about two kinds of EA (these may need catchier names)

It seems (to me) that “low-level” and “high-level” could read as value-laden in a way that might make people practicing “low-level” EA (especially in cause areas not already embraced by lots of other EAs) feel like they’re not viewed as “real” EAs, and so work at cross-purposes with the tent-broadening goal of the proposal. Quick brainstorm of terms that make a descriptive distinction instead:

  1. cause-blind EA vs. cause-specific or cause-limited EA
  2. broad EA vs. narrow EA
  3. inter-cause vs. intra-cause

(Thoughts/views only my own, not my employer’s.)

"General vs. specific" could also be one

Hmm, maybe

  • Global EA vs local EA
  • Total EA vs focused EA

A few thoughts:

  • If you believe that existential risk is literally the most important issue in the world and that we will be facing possible extinction events imminently, then it follows that we can't wait to develop a mass movement and that we need to find a way to make the small, exceptional group strategy work (although we may also spread low-level EA, just not as our focus)
  • I suspect that most EAs would agree that spreading low-level EA is worthwhile. The first question is whether this should be the focus/a major focus (as noted above). The second question is whether this should occur within EA or be a spin-off/a set of spin-offs. For example, I would really like to see an Effective Environmentalism movement.
  • Some people take issue with the name Effective Altruism because it implies that everything else is Ineffective Altruism. Your suggestion might mitigate this to a certain extent, but we really need better names!

I agree that if one thinks that x-risk is an immediate concern, then one should focus specifically on that now. This is explicitly a long-term strategy, so assumes that there will be a long term.

One thing to keep in mind is that people often (or even usually) choose the middle ground by themselves. Matt Ball often mentions how this happens in animal rights, with people deciding to reduce meat after learning about the merits of vegetarianism, and notes that Nobel laureate Herb Simon is known for this observation that people opt for sub-optimal decisions.

Thus, I think that in promoting pure EA, most people will practice weak EA (i.e. not cause-neutral) of their own accord, so perhaps the best way to proliferate weak EA is by promoting strong EA.

This can be an issue, but I think Matt Ball has chosen not to present a strong position because he believes that it is off-putting; instead, he undermines the strong position and presents a sub-optimal one. However, he says this is in fact optimal, as it reduces more harm.

If applied to EA, we would undermine a position we believe might put people off because it is too complicated / esoteric, and present a first step that will do more good.

My point was that EAs probably should exclusively promote full-blown EA, because that has a good chance of leading to more uptake of both full-blown and weak EA. Ball's issue with people choosing to go part-way after hearing the veg message is that it often leads to more animals being killed, due to people replacing beef and pork with chicken. That's a major impetus for his direct "cut out chicken before pork and beef" message. It doesn't undermine veganism, because chicken-reducers are more likely to continue on towards that lifestyle, probably even more so than someone who went vegetarian right away. Vegetarians have a very high drop-out rate, but many believe that those who transitioned gradually last longer.

I think that promoting effectively giving 10% of one's time and/or income (for the gainfully employed) is a good balance between promoting a high-impact lifestyle and being rejected due to high demandingness. I don't think it would be productive to lower the bar on that (i.e. by saying cause neutrality is optional).

On the face of it, the idea does sound quite good. However, we need to place it into a broader movement context and look at how it has been evaluated to consider how effective it is likely to be, and what other impacts the approach has that aren’t immediately clear.

A central issue with EA is that it says, for instance, that we need to consider scope, neglectedness and tractability, but meeting these criteria doesn't then lead to effectiveness or optimal outcomes; it just flags that an approach is worth more consideration.

Consequently, we can note the ‘pragmatic’ trend in EA support for animal-related groups, but this trend isn’t well understood, and neither is it contextualised. Where we are trying to be inclusive and encourage more people into EA, this is the type of thing we need to consider, so we need to look at things like ideology and organisational / movement culture when determining how groups inter-relate and what impact this has. I think many people who are looking at different aspects of EA don’t have the time to do this, and expect EAAs to do this work, but there isn’t any evidence that this form of evaluation has been taking place up to now. My own observation of the movement is that this is a neglected area, and one that will likely be quite important in terms of inclusion.

In terms of EA, the trade-off would be making EA look more appealing by diminishing it in terms of elitism, specifically where a certain ‘lower’ section of EAs were to say they aren’t like the ‘higher’ ones. The corollary in the animal movement is to claim veganism is extreme, all or nothing, fundamentalist, angry, crazy, puritan, dogmatic, absolutist, hardline and so on. These are stereotypes that Matt Ball, Tobias Leenaert and Brian Kateman have played on in order to centre their pragmatic (or not vegan) approach. I think people who have paid attention to what they say are likely to recognise this (see in particular Matt Ball’s recent Vox video); it is just that rights activists are more sensitive to it because it infringes on our work.

I think it is possible to claim that the work of the mainstream groups hasn’t been contextualised, or even criticised, from within EA; it has largely been encouraged and supported by EAs and other mainstream animal activists, because it either sounds good on the face of it, or it hasn’t caused any issues for the work they are doing, or it is simply expedient to go along with that flow. We can also look at the divisions created and perpetuated, and ask whether we really want to replicate the behaviour of some EAs within the animal movement and transfer that into EA. I think the answer would be no; however, we would then still need to consider whether we ought to be validating that work in the organisations that EAs support, and I would say no to that as well.

Links.

Disrupting the animal movement: https://qz.com/829956/how-the-vegan-movement-broke-out-of-its-echo-chamber-and-finally-started-disrupting-things/

Focus on Fish: A Call to Effective Altruists: http://commons.pacificu.edu/cgi/viewcontent.cgi?article=1567&context=eip

Utilitarian equivocation and moral consistency: https://network23.org/orcasandanimals/2017/06/21/effective-altruism-for-animals-utilitarian-equivocation-and-moral-consistency/

I think first we would need to ascertain whether low-level (maybe foundational) EA is actually taking place; otherwise we could risk creating a divide within the movement around consistency. So we would need to see the evidence for where the process has been applied. Perhaps there could be a scheme that grades how much EA process has been applied, and directs us to where we could locate that information. Maybe it could also be undertaken by an external group that is neutral to EA.

I think we ought to be fairly uncertain about how much process is presently applied by EA-backed organisations (particularly in EAA; I don't know so much about other areas), and be cautious about getting too far ahead when groups may have further to go in order to meet what may reasonably be considered a foundational level.