by Lila
Feb 19, 2017 · 2 min read

I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume that the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still under-value these perspectives). So I provide a unique perspective: a former "insider" who had a change of heart about the principles of EA. 

Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don't know what compels me to "bite bullets" and accept these conclusions. Moral particularism is closest to my current beliefs. 

Some specific issues with EA ethics: 

  • Absurd expected value calculations/Pascal's mugging
  • Hypothetically causing harm to individuals for the good of the group. Some utilitarians come up with ways around this (e.g. the reputation cost would outweigh the benefits). But this raises the possibility that in some cases the costs won't outweigh the benefits, and we'll be compelled to do harm to individuals.  
  • Under-valuing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child's arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework. The unique costs of violence are also apparent through people's extreme actions to avoid violence. Large migrations of people are most associated with war. Economic downturns cause increases in migration to a lesser degree, and disease outbreaks to a far lesser degree. This prioritization doesn't line up with how bad EAs think these problems are. 

Once I rejected utilitarianism, much of the rest of EA fell apart for me:

  • Valuing existential risk and high-risk, high-reward careers rely on expected value calculations
  • Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me). I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their large numbers. (I still endorse veganism, I just don't donate to animal charities.)
  • GiveWell's recommendations are overly focused on disease-associated mortality and short-term economic indicators, from my perspective. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think that they undervalue how much reproductive freedom benefits women.)  

The remaining principles of EA, such as donating significant amounts of one's money and ensuring that a charity is effective in achieving its goals, weren't unique enough to convince me to stay in the community. 


Comments

Thank you, Lila, for your openness in explaining your reasons for leaving EA. It's good to hear legitimate reasons why someone might leave the community. It's certainly better than the outsider anti-EA arguments that do tend to misrepresent EA too often. I hope that other insiders who leave the movement will also be kind enough to share their reasoning, as you have here.

While I recognize that Lila does not want to participate in a debate, I nevertheless would like to contribute an alternate perspective for the benefit of other readers.

Like Lila, I am a moral anti-realist. Yet while she has left the movement largely for this reason, I still identify strongly with the EA movement.

This is because I do not feel that utilitarianism is required to prop up as many of EA's ideas as Lila does. For example, non-consequentialist moral realists can still use expected value to try to maximize the good done without thinking that the maximization itself is the ultimate source of that good. Presumably if you think lying is bad, then refraining from lying twice may be better than refraining from lying just once.

I agree with Lila that many EAs are too glib about deaths from violence being no worse than deaths from non-violence. But to the extent that this is true, we can just weight these differently. For example, Lila rightly points out that "violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework". EAs should definitely take into account these extra considerations about violence.

But the main difference between myself and Lila here is that when she sees EAs not taking things like this into consideration, she takes that as an argument against EA; against utilitarianism; against expected value. Whereas I take it as an improper expected value estimate that doesn't take into account all of the facts. For me, this is not an argument against EA, nor even an argument against expected value -- it's an argument for why we need to be careful about taking into account as many considerations as possible when constructing expected value estimates.

As a moral anti-realist, I have to figure out how to act not by discovering rules of morality, but by deciding on what should be valued. If I wanted, I suppose I could just choose to go with whatever felt intuitively correct, but evolution is messy, and I trust a system of logic and consistency more than any intuitions that evolution has forced upon me. While I still use my intuitions because they make me feel good, when my intuitions clash with expected value estimates, I feel much more comfortable going with the EV estimates. I do not agree with everything individual EAs say, but I largely agree with the basic ideas behind EA arguments.

There are all sorts of moral anti-realists. Almost by definition, it's difficult to predict what any given moral anti-realist would value. I endorse moral anti-realism, and I just want to emphasize that EAs can become moral anti-realist without leaving the EA movement.

The way I think about violence has to do with the importance/tractability/neglectedness framework: I see it as very important but not all that tractable. I do see a lot of its importance as related to the indirect harms it causes. What does it do to a person's family when they are assaulted or killed, or when they go to prison for violence? How does it affect their children and other children around who are forming a concept of what's normal? As a social worker, I saw a lot of people harmed by the violence they themselves had carried out, whether as soldiers, gang members, or family members. (I think about indirect effects with more typical EA causes too - I suspect parental grief is a major cost of child mortality that we don't pay enough attention to.)

My understanding is that the most promising interventions on large-scale violence prevention are around preventing return to war after an initial conflict, since areas that just had a war are particularly likely to have another one soon. Copenhagen Consensus considers the most effective intervention "deploy UN peacekeeping forces" which isn't easy to influence (though there are also some others listed that seem more tractable.) http://www.copenhagenconsensus.com/sites/default/files/CP%2B-%2BConflicts%2BFINISHED.pdf

I really like this response -- thanks, Eric. I'd say the way I think about maximizing expected value is that it's the natural thing you'll end up doing if you're trying to produce a particular outcome, especially a large-scale one that doesn't hinge much on your own mental state and local environment.

Thinking in 'maximizing-ish ways' can be useful at times in lots of contexts, but it's especially likely to be helpful (or necessary) when you're trying to move the world's state in a big way; not so much when you're trying to raise a family or follow the rules of etiquette, and possibly even less so when the goal you're pursuing is something like 'have fun and unwind this afternoon watching a movie'. There my mindset is a much more dominant consideration than it is in large-scale moral dilemmas, so the costs of thinking like a maximizer are likelier to matter.

In real life, I'm not a perfect altruist or a perfect egoist; I have a mix of hundreds of different goals like the ones above. But without being a strictly maximizing agent in all walks of life, I can still recognize that (all else being equal) I'd rather spend $1000 to protect two people from suffering from violence (or malaria, or what-have-you) than spend $1000 to protect just one person from violence. And without knowing the right way to reason with weird extreme Pascalian situations, I can still recognize that I'd rather spend $1000 to protect those two people, than spend $1000 to protect three people with 50% probability (and protect no one the other 50% of the time).
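To make the arithmetic explicit, here is a minimal sketch in Python of the comparison above; the head counts and probabilities are just the illustrative figures from that paragraph, nothing else is assumed:

```python
# Expected number of people protected per $1000, for the hypothetical
# options described above (illustrative numbers only).

protect_two_certain = 2 * 1.0            # two people, with certainty    -> EV = 2.0
protect_one_certain = 1 * 1.0            # one person, with certainty    -> EV = 1.0
protect_three_maybe = 3 * 0.5 + 0 * 0.5  # three people, 50% of the time -> EV = 1.5

# Both stated preferences (two over one, and two-for-sure over three-at-50%)
# are exactly what a plain expected-value ranking would recommend here.
print(protect_two_certain, protect_one_certain, protect_three_maybe)  # 2.0 1.0 1.5
```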

Acting on preferences like those will mean that I exhibit the outward behaviors of an EV maximizer in how I choose between charitable opportunities, even if I'm not an EV maximizer in other parts of my life. (Much like I'll act like a well-functioning calculator when I'm achieving the goal of getting a high score on a math quiz, even though I don't act calculator-like when I pursue other goals.)

For more background on what I mean by 'any policy of caring a lot about strangers will tend to recommend behavior reminiscent of expected value maximization, the more so the more steadfast and strong the caring is', see e.g. 'Coherent decisions imply a utility function' and The "Intuitions" Behind "Utilitarianism":

When you’ve read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you’ve seen the “Dutch book” and “money pump” effects that penalize trying to handle uncertain outcomes any other way, then you don’t see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn’t goddamn multiply.

The primitive, perceptual intuitions that make a choice “feel good” don’t handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words.

When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don’t think that their protestations reveal some deep truth about incommensurable utilities.

Part of it, clearly, is that primitive intuitions don’t successfully diminish the emotional impact of symbols standing for small quantities—anything you talk about seems like “an amount worth considering.”

And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there’s any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.

So it seems like there should be an unconditional social injunction against preferring money to life, and no “but” following it. Not even “but a thousand dollars isn’t worth a 0.0000000001% probability of saving a life.” Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.

The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.

On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.

But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from zero to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.

As Peter Norvig once pointed out, if Asimov’s robots had strict priority for the First Law of Robotics (“A robot shall not harm a human being, nor through inaction allow a human being to come to harm”) then no robot’s behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.

Whatever value is worth thinking about at all must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.

I don’t say that morality should always be simple. I’ve already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize—that the valuation of this one event is more complex than I know.

But that’s for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided—at least if you care more about the destination than the journey. When you’ve reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as “Shut up and multiply.” Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply.

It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math. And that’s why I’m a utilitarian—at least when I am doing something that is overwhelmingly more important than my own feelings about it—which is most of the time, because there are not many utilitarians, and many things left undone.

... Also, just to be clear -- since this seems to be a weirdly common misconception -- acting like an expected value maximizer is totally different from utilitarianism. EV maximizing is a thing wherever you consistently care enough about your actions' consequences; utilitarianism is specifically the idea that the thing people should (act as though they) care about is how good things are for everyone, impartially.

But often people argue against the consequentialism aspect of utilitarianism and the consequent willingness to quantitatively compare different goods, rather than arguing against the altruism aspect or the egalitarianism; hence the two ideas get blurred together a bit in the above, even though you can certainly maximize expected utility for conceptions of "utility" that are partial to your own interests, your friends', etc.

Hi Lila,

Since you've indicated you're not interested in a debate, please don't feel that this is directed at you. But as a general point:

I think extreme ideas get disproportionate attention compared to the amount of action people actually take on them. EAs are a lot more likely than other people to consider whether invertebrates matter, and to have thought about Pascal's-muggging type situations, but I think mainstream EAs remain pretty unsure about these. In general I think EAs appear a lot weirder because they're willing to carefully think through and discuss weird ideas and thought experiments, even if in the end they're not persuaded.

Ideas like "being willing to cause harm to one person to benefit others" sound bad and weird until you consider that people do this all the time. We have fire departments even though we know some fire fighters will die in the line of duty. Emergency room staff triage patients, leaving some to die in order to save others. I wash my toddler's hair even though she dislikes it, because the smell will bother people if I don't. It's hard to imagine how societies would work if they weren't willing to do things like these.

I think EA has work to do on making clear that we're not a philosophical monolith, and emphasizing our commonalities with other value systems.

Ideas like "being willing to cause harm to one person to benefit others" sound bad and weird until you consider that people do this all the time. We have fire departments even though we know some fire fighters will die in the line of duty. Emergency room staff triage patients, leaving some to die in order to save others. I wash my toddler's hair even though she dislikes it, because the smell will bother people if I don't.

None of these seem to get at the core part of Lila's objection. The firemen volunteered to do the job, understanding the risks. The emergency room might not treat everyone, but that's an omission rather than an act - they don't inflict additional harm on people. People generally think that parents have a wide degree of freedom to judge what is best for their kids, even if the kids disagree, because the parents know more.

However, what Lila is talking about (or at least a steelmanned version - I don't want to put words into her mouth) is actively inflicting harm, which would not have occurred otherwise, on someone who has the capacity to rationally consent but has chosen not to. Utilitarians have a prima facie problem with cases like secretly killing people for their organs. Just because utilitarianism gives the same answer as conventional ethics in other cases doesn't mean there aren't cases where it widely diverges.

Ok, a government drafts some of its residents, against their will, to fight and die in a war that it thinks will benefit its population overall. This seems to be acceptable to typical people if the war is popular enough (look at the vitriol against conscientious objectors during the World Wars).

Many governments have abolished conscription over time precisely because citizens don't agree with that (as of 2011, countries with active military forces were roughly split in half between those with some form of conscription or emergency conscription and those with no conscription even in emergency cases - http://chartsbin.com/view/1887)

kbog

Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.)

You can be a moral realist and be very skeptical of anyone who is confident in a moral system, and you can be an anti-realist and be really confident in a moral system. The metaethical question of realism can affect the normative question over moral theories, but it doesn't directly tell us whether to be confident or not.

Anti-realists reject the claim that any moral propositions are true. So they don't think there is a fact of the matter about what we morally ought to do. But this doesn't mean they believe that anyone's moral opinion is equally valid. The anti-realist can believe that our moral talk is not grounded in facts while also believing that we should follow a particular moral system.

Finally, it seems to me that utilitarians in EA have arguments for it which are at least as well grounded as people with any other moral view in any other community, with the exception of actual moral philosophers. I, for instance, don't think utilitarianism is self-evident. But I think that debunking arguments against moral intuitions are very good and that subjective normativity is the closest pointer to objective normativity that we have, that this implies that we ought to give equal respect to subjective normativity experienced by others, that the von Neumann-Morgenstern axioms of decision-making are valuable for a moral theory and point us towards maximizing expected value, and that there is no reason to be risk averse. I think a lot of utilitarians in EA would say something vaguely like this, and the fact that they don't do so explicitly is no proof that they have no justification whatsoever or respect for opposing views.

My view is that morality is largely the product of the whims of history, culture, and psychology.

Empirically, yes, this happens to be the case, but realists don't disagree with that. (Science is also the product of the whims of history, culture, and psychology.) They disagree over whether these products of history, culture, and psychology can be justified as true or not.

Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions.

Plenty of realists have had this view. And you could be an anti-realist who believes in systematizing complex belief systems as well - it's not clear to me why you can't be both.

So I'm just not sure that your reasoning for your normative ideas is valid, because you're talking as if they follow from your metaethical assumptions when they really should not (without some more details/assumptions).

Note that many people in EA take moral uncertainty seriously, something which is rarely done anywhere else.

Absurd expected value calculations/Pascal's mugging -> Valuing existential risk and high-risk, high-reward careers rely on expected value calculations

Pascal's Mugging is a thought experiment with exceptionally low probability, arbitrarily high stakes events. It poses a noteworthy counterargument to the standard framework of universally maximizing expected value, and therefore is philosophically interesting. That is after all the reason why Nick Bostrom and Eliezer Yudkowsky, two famous effective altruists, developed the idea. But to say that it poses an argument against other expected value calculations is a bit of a non sequitur - there is no clear reason not to say that in Pascal's Mugging, we shouldn't maximize expected value, but in existential risk where the situation is not so improbable and counterintuitive, we should maximize expected value. The whole point of Pascal's Mugging is to try to show that some cases of maximizing expected value are obviously problematic, but I don't see what this says about all the cases where maximizing expected value is not obviously problematic. If there were a single parsimonious decision theory that was intuitive and worked well in all cases including Pascal's Mugging, then you might abandon maximizing expected value in favor of it, but there is no such theory.

There are actual reasons why people like the framework of maximizing expected value, such as how it's invulnerable to Dutch books and doesn't lead to intransitivity. In Pascal's Mugging, maybe we can accept losing these properties, because it's such a problematic case. But in other scenarios we will want to preserve them.
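As a toy illustration of the money-pump point (my own example, not something from the comment), an agent with the intransitive preferences C over A, B over C, and A over B can be walked around a cycle of trades it "wants" and simply lose money:

```python
# A money pump against intransitive preferences: each trade moves the agent
# to something it strictly prefers, yet after three trades it holds the same
# item and has simply lost money. (Toy numbers, purely illustrative.)

prefers = {("C", "A"), ("B", "C"), ("A", "B")}  # read as: first is preferred to second
fee = 1.0

holding, money = "A", 100.0
for new, old in [("C", "A"), ("B", "C"), ("A", "B")]:
    assert holding == old and (new, old) in prefers  # a swap the agent genuinely wants...
    holding, money = new, money - fee                # ...for which it pays a small fee

print(holding, money)  # A 97.0 -- same item as before, $3 poorer
```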

It's also worth noting that many of those working on existential risk don't rely on formal mathematical calculations at all, or believe that their cause is very high in probability anyway, as people at MIRI for instance have made clear.

Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me). I value animals (particularly non-mammals) very little compared to humans

But you are wrong about that. Valuing animal interests comparably to humans' is not a uniquely utilitarian principle. Numerous non-utilitarian arguments for this have been advanced by philosophers such as Regan, Norcross, and Korsgaard, and they have been received very well in philosophy. In fact, they are received so well that there is hardly a credible position which involves rejecting the set of them.

You might think that lots of animals just don't experience suffering, but so many EAs agree with this that I'm a little puzzled as to what the problem is. Sure, there's far more people who take invertebrate suffering seriously in a group of EAs than in a group of other people. But there's so many who don't think that invertebrates are sentient that, to be quite honest, this looks less like "I'm surrounded by people I disagree with" and more like "I'm not comfortable in the presence of people I disagree with."

Also, just to be clear, although you never stated it explicitly: the idea that we should make serious sacrifices for others according to a framework of maximizing expected value does not imply utilitarianism. Choosing to maximize expected value is a question of decision theory on which many moral theories don't take a clear side, while the obligation to make significant sacrifices for the developing world has been advanced by non-utilitarian arguments from Cohen, Singer, Pogge, and others. These arguments, also, are considered compelling enough that there is hardly a credible position which involves rejecting the set of them.

I don't expect you to convince me to stay.

Maybe I should have said "I'd prefer if you didn't try to convince me to stay". Moral philosophy isn't a huge interest of mine anymore, and I don't really feel like justifying myself on this. I am giving an account of something that happened to me. Not making an argument for what you should believe. I was very careful to say "in my view" for non-trivial claims. I explicitly said "Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me)." So I'm not interested in hearing why prioritizing animals does not necessarily rely on total view utilitarianism.

I'm clearing up the philosophical issues here. It's fine if you don't agree, but I want others to have a better view of the issue. After all, you started your post by saying that EAs are overconfident and think their views are self evident. Well, what I'm doing here is explaining the reasons I have for believing these things, to combat such perceptions and improve people's understanding of the issues. Because other people are going to see this conversation, and they're going to make some judgement about EAs like me because of it.

But if you explicitly didn't want people to respond to your points... heck, I dunno what you were looking for. You shouldn't expect to not have people respond with their points of view, especially when you disagree on a public forum.

You're free to offer your own thoughts on the matter, but you seemed to be trying to engage me in a personal debate, which I have no interest in doing. This isn't a clickbait title, I'm not concern trolling, I really have left the EA community. I don't know of any other people who have changed their mind about EA like this, so I thought my story might be of some interest to people. And hey, maybe a few of y'all were wondering where I went.

Lila, what will you do now? What questions or problems do you see in the path ahead? What good things will you miss by leaving the EA community?

For some reason, I've always felt a deep sense of empathy for people who do what you have done. It is very honest and generous of you to do it this way. I wish you only the very best in all you do.

(This is my first post on this forum. I am new to EA.)

One thing I'm unclear on is:

Is s/he leaving the EA community and retaining the EA philosophy or rejecting the EA philosophy and staying in the EA community or leaving both?

What EAs do and what EA is are two different things after all. I'm going to guess leaving the EA community given that yes most EAs are utilitarians and this seems to be foundational to the reason Lila is leaving. However the EA philosophy is not utilitarian per se so you'd expect there to be many non-utilitarian EAs. I've commented on this before here. Many of us are not utilitarian. 44% of us according to the 2015 survey in fact. The linked survey results argue that this sample accurately estimates the actual EA population. 44% is a lot of non-utilitarian EAs. I imagine many of them aren't as engaged in the EA community as the utilitarian EAs, despite self-identifying as EAs.

If s/he is just leaving the community then, to me, this is only disheartening insofar as s/he doesn't interact with the community from this point on. So I do hope Lila continues to be an EA outside of the EA community, where s/he can spread goodness in the world using her/his non-utilitarian prioritarian ethics (prioritizing victims of violence) and using the EA philosophy as a guide.

The "movement isn't diverse enough" is a legitimate complaint and a sound reason to leave a movement if you don't feel like you fit in. So s/he might well do much better for the world elsewhere in some other movement that has a better personal fit. And as long as she stays in touch with EA then we can have some good 'ol moral trade for the benefit of all. This trade could conceivably be much more beneficial for EA and for Lila if s/he is no longer in the EA community.

It feels like there are a range of different communities for EA, both on and offline. I've never really looked into philosophy, and most of my conversations with people revolve around practical things to do; they rarely go into existential risk or invertebrates, and there is a lot more focus on system change and mental health among the more popular fringe ideas.

Also there are quite a few people who dip in and out of the community and will turn up once a year or just read the latest updates which seems good to me, everyone has different priorities and tasks taking up their time.

This isn't really to persuade you, just to highlight to anyone reading that there doesn't seem to be one type of community, and that you don't have to be in or out, you can just use the tools provided for free.

I'd be interested in an elaboration on why you reject expected value calculations.

My personal feeling is that expected-value calculations with very small probabilities are unlikely to be helpful, because my calibration for these probabilities is very poor: a one in ten million chance feels identical to a one in ten billion chance for me, even though their expected-value implications are very different. But I expect to be better-calibrated on the difference between a one in ten chance and a one in a hundred chance, particularly if-- as is true much of the time in career choice-- I can look at data on the average person's chance of success in a particular career. So I think that high-risk high-reward careers are quite different from Pascal's muggings.
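For what it's worth, here is a rough sketch of that calibration point in Python; the payoff and probabilities are made-up numbers chosen only to show the scale of the differences involved:

```python
# Two long-shot probabilities that "feel" identical differ a thousand-fold
# in expected value, while everyday-scale probabilities are easier to compare.
# All numbers are hypothetical.

payoff = 1_000_000  # stand-in for "value realized if the long shot pays off"

ev_one_in_ten_million = (1 / 10_000_000) * payoff      # 0.1
ev_one_in_ten_billion = (1 / 10_000_000_000) * payoff  # 0.0001 -- 1000x smaller

ev_one_in_ten     = 0.10 * 10  # one-in-ten chance of helping 10 people  -> 1.0
ev_one_in_hundred = 0.01 * 10  # one-in-a-hundred chance of the same     -> 0.1

print(ev_one_in_ten_million, ev_one_in_ten_billion, ev_one_in_ten, ev_one_in_hundred)
```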

Can you explain why (and whether) you disagree?

That's a good point, though my main reason for being wary of EV is related to rejecting utilitarianism. I don't think that quantitative, systematic ways of thinking are necessarily well-suited to thinking about morality, any more than they'd be suited to thinking about aesthetics. Even in biology (my field), a priori first-principles approaches can be misleading. Biology is too squishy and context-dependent. And moral psychology is probably even squishier.

EV is one tool in our moral toolkit. I find it most insightful when comparing fairly similar actions, such as public health interventions. It's sometimes useful when thinking about careers. But I used to feel compelled to pursue careers that I hated and probably wouldn't be good at, just on the off chance it would work. Now I see morality as being more closely tied to what I find meaning in (again, anti-realism). And I don't find meaning in saving a trillion EV lives or whatever.

Model uncertainty drastically increases in the tails is how I think about it.

Lila, thanks for sharing. You've made it clear that you've left the EA movement, but I'm wondering whether and, if so, why, your arguments also have pushed you away from being committed to "lowercase effective altruism"---that is, altruism which is effective, but isn't necessarily associated with this movement.

Are you still an altruist? If so, do you think altruism is better engaged in with careful attention put to the effectiveness of the endeavors?

Thanks in advance.

Every so often this sort of thing happens to me - deep down I wonder if philosophy is much more interesting than it is useful! As you say, I think trying to get a human (i.e., myself) to figure out how they make decisions and then act in a fully rational manner is a task that's too difficult, or at least too difficult for me.

However, what I come back to is that I don't need to make things complicated to believe that donating to AMF or GiveDirectly is fundamentally a worthwhile activity.

Yea as a two-level consequentialist moral anti-realist I actually am pretty tired of EA's insistence on "how many lives we can save" instead of emphasizing how much "life fulfillment and happiness" you can spread. I always thought this was not only a PR mistake but also a utilitarian mistake. We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant utils-wise than preventing a death with less suffering.

Nonetheless, this is the first I've heard that violence and exploitation are under-valued by EAs. It always seemed to me that EAs generally weep and feel angsty feelings in their gut when they read about the violence and exploitation of their fellow man. But what can we do? Regions of violence are notoriously difficult for setting up interventions that are tractable. As such, it always seemed to me that we should focus on what we know works, since lifting people out of disease and poverty empowers them to address issues of violence and exploitation themselves. And giving someone their own agency back in this way is, in my view, something worth putting a lot of moral weight on due to its long-term (albeit hard-to-measure) consequences.

And now I'm going to say something that I feel some people probably wont like.

I consistently feel that a lot of the critique of EA has to do with how others perceive EAs rather than what they are really like - i.e., prejudice. I mentioned above that I generally feel EAs are legit moved to tears (or whatever is a significant feeling for them) regarding issues of violence. But I find that as soon as this person spends most of his/her time in the public space talking about math and weird utilitarian expected value calculations, this person is suddenly viewed as no longer having a heart, or "the right heart." The amount of compassion and empathy a person has is not tied to what weird mathematical arguments they push out but to what they do and feel inside (this is how I operationalize "compassion" at any rate: an internal state leading to external consequences. Yes I know, that's a pretty virtue ethics way to look at it, so sue me).

Anyway, maybe part of this is because I know what it feels like to be the high-school nerd who secretly cries when he sees someone getting bullied at break time, but who then talks to people about and develops extensively researched weird ideas like transhumanism as a means of optimizing human flourishing (instead of, say, caring to go to an anti-bullying event that everyone thinks I should be going to if I really cared about bullying). It makes sense to me that many people think I have my priorities wrong. But it certainly isn't due to a lack of compassion and concern for my fellow man. It's not too hard to go from this analogy to how EAs in general get perceived.

This is perhaps what I absolutely love about the EA community. I've finally found a community of nerds where I can be myself and go in depth with uber-weird (any and all) ideas without being looked at as any less compassionate <3.

When people talk about ending violence and exploitation by doing something that will change the system that keeps these problems in place, I get upset. This "system" is often invisible and amorphous and a product of ideology rather than, say, cost-effectiveness calculations. Why this gets me upset is that it often means people are willing to sacrifice giving someone their agency back - something you clearly can do by donating to proven disease and poverty alleviation interventions - in order to instead donate to or support a cause against violence and exploitation because it aligns with their ideology. This essentially seems to me a way of making donation about yourself - trying to make sure you feel content in your own ethical worldview, because specifically not doing anything about that violence and exploitation makes you feel bad - rather than making it about the individuals on the receiving end of the donation.

Yea I know, my past virtue ethics predilections are showing again. Even if someone like the person I've described above supports an anti-violence cause that, though difficult to get an effectiveness measure from, is nonetheless doing a lot of good in the world that we can't measure, I still don't like it. I'm caring what people think and arguing that certain self-serving thoughts appear morally problematic independent of the end result they cause. So let me show I'm also strongly opposed to forms of anti-realist virtue ethics. It's not enough to merely be aligned with the right way of thinking/ideology etc. and then have good things come from that. The end result - the actual people on the receiving end - is what actually matters. And this is why I find a "mostly" utilitarian perspective so much more humanizing than that of the many people who get uncomfortable with its extreme conclusions and then reject the whole thing. A more utilitarian perspective forces you to make it about the receiver.

Whatever the case, writing this has made me sad. I'm sad to see you go, you seem highly intelligent and a likely asset to the movement, and as someone who is on the front-line of EA and PR I take this as a personal failure but wish you the best. Does anyone know of any EA-vetted charities working on violence and exploitation prevention? Even ones that are a stretch tractability-wise would be good. I'd like to donate - always makes me feel better.

We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant utils-wise than preventing a death with less suffering.

What do you mean by 'we'? Negative utilitarians?

Yes, precisely. Although - there are so many variants of negative utilitarianism that "precisely" is probably a misnomer.

OK, then since most EAs (and philosophers, and the world) think that other things like overall well-being matter it's misleading to suggest that by valuing saving overall good lives they are failing to achieve a shared goal of negative utilitarianism (which they reject).

I'm confused and your 4 points only make me feel I'm missing something embarrassingly obvious.

Where did I suggest that valuing saving overall good lives means we are failing to achieve a shared goal of negative utilitarianism? In the first paragraph of my post and the part you seem to think is misleading I thought I specifically suggested exactly the opposite.

And yes, negative utilitarianism is a useful ethical theory that nonetheless many EAs and philosophers will indeed reject given particular real-world circumstances. And I wholeheartedly agree. This is a whole different topic though, so I feel like you're getting at something others think is obvious that I'm clearly missing.

These may be silly questions, apologies if so:

  • Can one be a moral anti-realist and a moral particularist? (Do you mean non-cognitivist? It's just because I didn't think many in EA were moral anti-realists, but perhaps could be non-cognitivists)
  • What do you feel the consequences of moral uncertainty are?
  • Are you saying that moral particularism is closest to your beliefs as a result of moral uncertainty? Or closest to your beliefs were you to be a moral realist?
  • In being an anti-realist, does that mean none of the claims made above are morally normative in nature?

Like others have below I'd like to thank you for an honest and interesting post.

You can be a particularist and an antirealist.

Do you mean non-cognitivist? It's just because I didn't think many in EA were moral anti-realists, but perhaps could be non-cognitivists)

Non-cognitivists are anti-realists, along with error theorists.

Note that a framework of moral uncertainty doesn't seem to make a lot of sense for the anti-realist, because there isn't a clear sense in which one ought to pay attention to it. Maybe it can work, it's just less clear.

You're probably correct - reading up, I realise I didn't understand it as well as I thought I did - but I still have a few questions.

If one is a particularist and anti-realist how do those judgements have any force that can possibly be called moral?

As for moral uncertainty, I meant that if one ascribes some non-zero probability to there being genuine moral demands on one, it would seem one still has reason to follow them. If you're right then nothing you do matters so you've lost nothing. If you're wrong you have done something good. So, it would seem moral uncertainty gives one reasons to act in a certain way, because some probability of doing good has some motivating power even if not as much as certainly doing good.

I think I was mixed up about non-cognitivism, but some people seem to be called non-cognitivists and realists? For example David Hume, who I've heard called a non-cognitivist and a consequentialist, and Simon Blackburn who is called a quasi-realist despite being a non-cognitivist. Are either of these people properly called realists?

If one is a particularist and anti-realist how do those judgements have any force that can possibly be called moral?

The antirealist position is that calling something moral or immoral entails a different kind of claim than what the realist means. Since moral talk is not about facts in the first place, something need not be a factual claim to have moral force. Instead, if a moral statement is an expression of emotion for instance, then to have moral force it needs to properly express emotions. But I'm not well read here so that's about as far as I understand it.

I meant that if one ascribes some non-zero probability to there being genuine moral demands on one, it would seem one still has reason to follow them.

Sure, though that's not quite what we mean by moral uncertainty, which is the idea that there are different moral theories and we're not sure which is right. E.g.: https://philpapers.org/archive/URAMIM.pdf

You're referring to a kind of metaethical uncertainty, uncertainty over whether there are any moral requirements at all. In which case this is more relevant, and the same basic idea that you have: http://www.journals.uchicago.edu/doi/full/10.1086/505234 And, yeah, it's a good argument, though William MacAskill has a paper out there claiming that it doesn't always work.

I think I was mixed up about non-cognitivism, but some people seem to be called non-cognitivists and realists?

Generally speaking you cannot be both. There are antirealists and there are realists. Noncognitivists are antirealists and so are error theorists.

For example David Hume, who I've heard called a non-cognitivist and a consequentialist

Just as one can be an antirealist particularist, one can be an antirealist consequentialist.

Simon Blackburn who is called a quasi-realist despite being a non-cognitivist.

So, quasi-realism is different, probably best considered something in between. There are blurry boundaries between anti-realism and realism.

I would recommend reading from here if you want to go deep into the positions, and then any particular citations that get your interest:

https://plato.stanford.edu/entries/moral-anti-realism/

https://plato.stanford.edu/entries/moral-realism/

https://plato.stanford.edu/entries/moral-anti-realism/projectivism-quasi-realism.html

Or, if you want a couple of particular arguments, look at sources 3 and 4 linked by Rob.

Once you've read most of the above, you might want to look at things written by rationalists as well.

I think the intuition that moral judgments need to have "force" or make "demands" is a bit of a muddle, and some good readings for getting un-muddled here are:

  1. Peter Hurford's "A Meta-Ethics FAQ"
  2. Eliezer Yudkowsky's Mere Goodness
  3. Philippa Foot's "Morality as a System of Hypothetical Imperatives"
  4. Peter Singer's "The Triviality of the Debate Over 'Is-Ought' and the Definition of 'Moral'"

Kyle might have some better suggestions for readings here.

For me, most of the value I get out of commenting in EA-adjacent spaces comes through tasting the ways in which I gently care about our causes and community. (Hopefully it is tacit that one of the many warm flavors of that value for me is in the outcomes our conversations contribute to.)

But I suspect that many of you are like me in this way, and also that, in many broad senses, former EAs have different information than the rest of us. Perhaps the feedback we hear when anyone shares some of what they've learned before they go will tend to be less rewarding for them to share, and more informative to us to receive, than most other feedback. In that spirit, I'd like to affirm that it's valuable to have people in similar positions to Lila's share. Thanks to Lila for doing so.

I felt like this post just said that the person had some idiosyncratic reasons they did not like EA, so they left. Well, great, but I'm not sure how that helps anyone else.

Here's a thought I think is more useful. For a long time I have been talking anonymously about politics online. Lately I think this is pointless because it's too disconnected from anything I can accomplish. The tractability of these issues for me is too low. So to encourage myself to think more efficiently, and to think mainly about issues I can do something about, I'm cutting out all anonymous online talk about big social issues. In general, I'm going to keep anonymous communications to a minimum.

Should we maybe take this as a sign that EA needs to become more like Aspirin, or many other types of medicine? I just checked an Aspirin leaflet, and it said clearly exactly what Aspirin is for. The common “doing the most good” slogan kind of falls short of that.

The definition from the FAQ is better, especially in combination with the additional clarifications below on the page:

Effective altruism is using evidence and analysis to take actions that help others as much as possible.

We’ve focused a lot on finding (with high recall) all the value aligned people who find EA to be exactly the thing they’ve been looking for all their lives, but just like with medicine, it’s also important to prevent the people who are not sufficiently aligned from taking it – for the sake of the movement and for their own sake.

Aspirin may be a good example because it’s not known for any terrible side effects, but if someone takes it for some unrelated ailment, they’ll be disillusioned and angry about their investment.

Do we need to be more clear not only about who EA is for but also who EA is probably not for?

it’s also important to prevent the people who are not sufficiently aligned from taking it – for the sake of the movement

How so?

If they're not aligned then they'll eventually leave. Along the way, hopefully they'll contribute something.

It would be a problem if we loosened our standards and weakened the movement to accommodate them. But I don't see what's harmful about someone thinking that EA is for them, exploring it and then later deciding otherwise.

and for their own sake.

Seriously? We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

There be dragons! Dragons with headaches!

I think the discussion that has emerged here is about an orthogonal point from the one I wanted to make.

Seriously? We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

A year ago I would’ve simply agreed or said the same thing, and there would’ve been no second level to my decision process, but reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective was joined by a more cooperation-based strategic perspective.

So I certainly agree with you that I care incomparably more about reducing suffering than about pandering to some privileged person’s divergent moral goals, but here are some more things I currently believe:

  1. The EA movement has a huge potential to reduce suffering (and further related moral goals).
  2. All the effort we put into strengthening the movement will fall far short of their potential if it degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, and any other of various failure modes.
  3. People losing interest in EA or even leaving with a loud, public bang are one thing that is really, really bad for cohesion within the movement.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree. When someone leaves with a loud, public bang, they’ll likely pull even more people after them.

If I may, for the moment, redefine “self-interested” to include the “self-interested” pursuit of altruistic goals at the expense of other people’s (selfish and nonselfish) goals, then such a “self-interested” approach will run us into several of the walls or failure modes above:

  1. Lethargy will ensue when enough people publicly and privately drop out of the movement to ensure that those who remain are disillusioned, pessimistic, and unmotivated. They may come to feel like the EA project has failed or is about to, and so don’t want to invest in it anymore. Maybe they’ll rather join some adjacent movement or an object-level organization, but the potential of the consolidated EA movement will be lost.
  2. Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals. I increasingly have the feeling that the whole talk about ACE being overconfident is just a meme perpetuated by people who haven’t been following ACE or animal advocacy closely.
  3. Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking. It happens when we moral-trade away too much of our actual moral goals.
  4. But if we trade away too little, we’ll create enemies, resulting in more and more zero-sum fights with groups with other moral goals.

The failure modes most relevant to this post are the lethargy and the zero-sum fights one:

If they're not aligned then they'll eventually leave. Along the way, hopefully they'll contribute something.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach. They’ll further my moral goal of reducing suffering for the time they’re around, but if they’re, e.g., a Kantian, they’ll afterwards feel instrumentalized and become a more or less vocal opponent. That’s probably more costly for us than whatever they may’ve contributed along the way unless the first was as trajectory-changing as I think movement building (or movement destroying) can be.

So I should’ve clarified, also in the interest of cooperation, I care indefinitely more about reducing suffering than about pandering to divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected or they’ll cause us great costs in suffering we’ll fail to reduce.

but reading about religious and movement dynamics (e.g., most recently in The Righteous Mind), my perspective was joined by a more cooperation-based strategic perspective.

This is not about strategic cooperation. This is about strategic sacrifice - in other words, doing things for people that they never do for you or others. Like I pointed out elsewhere, other social movements don't worry about this sort of thing.

All the effort we put into strengthening the movement will fall far short of their potential if it degenerates into infighting/fragmentation, lethargy, value drift, signaling contests, a zero-sum game, and any other of various failure modes.

Yes. And that's exactly why this constant second-guessing and language policing - "oh, we have to be more nice," "we have a lying problem," "we have to respect everybody's intellectual autonomy and give huge disclaimers about our movement," etc - must be prevented from being pursued to a pathological extent.

People losing interest in EA or even leaving with a loud, public bang are one thing that is really, really bad for cohesion within the movement.

Nobody who has left EA has done so with a loud public bang. People losing interest in EA is bad, but that's kind of irrelevant - the issue here is whether it's better for someone to join then leave, or never come at all. And people joining-then-leaving is generally better for the movement than people never coming at all.

When someone just sort of silently loses interest in EA, they’ll pull some of their social circle after them, at least to some degree.

At the same time, when someone joins EA, they'll pull some of their social circle after them.

Lethargy will ensue when enough people publicly and privately drop out of the movement to ensure that those who remain are disillusioned, pessimistic, and unmotivated.

But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Infighting or fragmentation will result when people try to defend their EA identity. Someone may think, “Yeah, I identify with core EA, but those animal advocacy people are all delusional, overconfident, controversy-seeking, etc.” because they want to defend their ingrained identity (EA) but are not cooperative enough to collaborate with people with slightly different moral goals.

We are talking about communications between people within EA and people outside EA. I don't recognize a clear connection between these issues.

Value drift can ensue when people with new moral goals join the movement and gradually change it to their liking.

Sure, but I don't think that people with credible but slightly different views of ethics and decision theory ought to be excluded. I'm not so closed-minded as to think that anyone who isn't a thorough expected value maximizer shouldn't be in our community.

It happens when we moral-trade away too much of our actual moral goals.

Moral trades are Pareto improvements, not compromises.

Someone who finds out that they actually don’t care about EA will feel exploited by such an approach.

But we are not exploiting them in any way. Exploitation involves manipulation and deception. I am in no way saying that we should lie about what EA stands for. Someone who finds out that they actually don't care about EA will realize that they simply didn't know enough about it before joining, which doesn't cause anyone to feel exploited.

Overall, you seem to be really worried about people criticizing EA, something which only a tiny fraction of people who leave will do to a significant extent. This pales in comparison to the actual contributions which people make - something every EA does. You'd have to believe that someone verbally criticizing EA is more significant than the contributions of many, perhaps dozens, of people actually being in EA. That's odd.

So I should’ve clarified, also in the interest of cooperation, I care indefinitely more about reducing suffering than about pandering to divergent moral goals of “privileged Western people.” But they are powerful, they’re reading this thread, and they want to be respected or they’ll cause us great costs in suffering we’ll fail to reduce.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

So I'll tell you how to make the people who are reading this thread respect us.

Imagine that you come across a communist forum and someone posts a thread saying "why I no longer identify as a Marxist." This person says that they don't like how Marxists don't pay attention to economic research and they don't like how they are so hostile to liberal democrats, or something of the sort.

Option A: the regulars of the forum respond as follows. They say that they actually have tons of economic research on their side, and they cite a bunch of studies from heterodox economists who have written papers supporting their claims. They point out the flaws and shallowness in mainstream economists' attacks on their beliefs. They show empirical evidence of successful central planning in Cuba or the Soviet Union or other countries. Then they say that they're friends with plenty of liberal democrats, and point out that they never ban them from their forum. They point out that the only times they downvote and ignore liberal democrats is when they're repeating debunked old arguments, but they give examples of times they have engaged seriously with liberal democrats who have interesting ideas. And so on. Then they conclude by telling the person posting that their reasons for leaving don't make any sense, because people who respect economic literature or want to get along with liberal democrats ought to fit in just fine on this forum.

Option B: the regulars on the forum apologize for not making it abundantly clear that their community is not suited for anyone who respects academic economic research. They affirm the OP's claim that anyone who wants to get along with liberal democrats is not welcome and should just stay away. They express deep regret at the minutes and hours of their intellectual opponents' time that they wasted by inviting them to engage with their ideas. They put up statements and notices on the website explaining all the quirks of the community which might piss people off, and then suggest that anyone who is bothered by those things could save time if they stayed away.

The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who has the same opinions of the OP would - if reading the thread - tend to be attracted to the forum. Option B? I'm not sure if it looks snobbish, or just pathetic.

I wanted to pose a question (that I found plausible), and now you’ve understood what I was asking, so my work here is pretty much done.

But I can also, for a moment longer, stay in my role and argue for the other side, because I think there are a few more good arguments to be made.

The forum which takes option A looks respectable and strong. They cut to the object level instead of dancing around on the meta level. They look like they know what they are talking about, and someone who has the same opinions of the OP would - if reading the thread - tend to be attracted to the forum. Option B? I'm not sure if it looks snobbish, or just pathetic.

It’s true that I hadn’t considered the “online charisma” of the situation, but I don’t feel like Option B is what I’d like to argue for. Neither is Option A.

Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.

They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked to an FAQ entry that fits their case only so-so.

It’s really just the basic Eternal September Effect that I’m describing, part of what Durkheim described as anomie.

Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.

But the kind of strategy I am referring to also increases the rate at which new people enter the movement, so there will be no such lethargy.

That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Also what Fluttershy said. If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case. My automatic reaction to your objections is, “Oh, I must’ve been wrong!” then “Well, good thing I didn’t state my opinion strongly. That’d be embarrassing,” and only after some deliberation do I remember that I had already considered many of these objections and gradually update back in the direction of my previous hypothesis. My opinions are quite unusually fluid.

Like I pointed out elsewhere, other social movements don't worry about this sort of thing.

Other social movements end up like feminism, with oppositions and toxoplasma. Successful social movements don’t happen by default by not worrying about these sorts of dynamics, or I don’t think they do. That doesn’t mean that my stab at a partial solution goes in the correct direction, but it currently seems to me like an improvement.

Yes. And that's exactly why this constant second-guessing and language policing - "oh, we have to be more nice," "we have a lying problem," "we have to respect everybody's intellectual autonomy and give huge disclaimers about our movement," etc - must be prevented from being pursued to a pathological extent.

Let’s exclude the last example or it’ll get recursive. How would you realize that? I’ve been a lurker in a very authoritarian forum for a while. They had some rules and the core users trusted the authorities to interpret them justly. Someone got banned every other week or so, but they were also somewhat secretive, never advertised the forum to more than one specific person at a time and only when they knew the person well enough to tell that they’d be a good fit for the forum. The core users all loved the forum as a place where they could safely express themselves.

I would’ve probably done great there, but the authoritarian thing scared me on a System 1 level. The latter (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)

The lying problem thing is a case in point. She didn’t identify with the movement, just picked out some quotes, invented a story around them, and later took most of it back. Why does she even write something about a community she doesn’t feel part of? If most of her friends had been into badminton and she didn’t like it, she wouldn’t have caused a stir in the badminton community accusing it of having a lying or cheating problem or something. She would’ve tried it for a few hours and then largely ignored it, not needing to make up any excuse for disliking it.

It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.

Moral trades are Pareto improvements, not compromises.

What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.

Nobody who has left EA has done so with a loud public bang.

I counted this post as a loud, public bang.

People losing interest in EA is bad, but that's kind of irrelevant - the issue here is whether it's better for someone to join then leave, or never come at all. And people joining-then-leaving is generally better for the movement than people never coming at all.

I don’t think so, or at least not when it’s put into less extreme terms. I’d love to get input on this from an expert in social movements or organizational culture at companies.

Consultancy firms are known for their high churn rates, but that seems like an exception to me. Otherwise, high onboarding costs (which we definitely have in EA), a gradual lowering of standards, minimization of communication overhead, and surely many other factors drive a lot of companies toward hiring with high precision and low recall rather than the other way around, and then investing heavily in retaining the good employees they have. (Someone at Google, for example, said “The number one thing was to have an incredibly high bar for talent and never compromise.” They don’t want to get lots of people in, get them up to speed, hope they’ll contribute something, and lose most of them again after a year. They would rather grow more slowly than get diluted like that.)

We probably can’t interview and reject people who are interested in EA, so the closest thing we can do is to help them decide as well as possible whether it’s really what they want to become part of long-term.

I don’t think this sort of thing, from Google or from EAs, would come off as pathetic.

But again, this is the sort of thing where I would love to ask an expert like Laszlo Bock for advice rather than trying to piece together some consistent narrative from a couple of books and interviews. I’m really a big fan of just asking experts.

Option A looks really great until we consider the cost side of things. Several people with a comprehensive knowledge of economics, history, and politics investing hours of their time (per person leaving) on explaining things that must seem like complete basics to these experts? They could be using that time to push their own boundaries of knowledge or write a textbook or plan political activism or conduct prioritization research. And they will. Few people will have the patience to explain the same basics more than, say, five or ten times.

What I wrote in response to the OP took me maybe half an hour. If you want to save time then you can easily make quicker, smaller points, especially if you're a subject matter expert. The issue at stake is more about the type of attitude and response than the length. What you're worried about here applies equally well against all methods of online discourse, unless you want people to generally ignore posts.

They’ll write FAQs, but then find that people are not satisfied when they pour out their most heartfelt irritation with the group only to be linked to an FAQ entry that fits their case only so-so.

The purpose is not to satisfy the person writing the OP. That person has already made up their mind, as we've observed in this thread. The purpose is to make observers and forum members realize that we know what we are talking about.

Option B doesn’t have much to do with anything. I’m hoping to lower the churn rate by helping people predict from the outset whether they’ll want to stick with EA long term. Whatever tone we’ll favor for forum discussions is orthogonal to that.

Okay, so what kinds of things are you thinking of? I'm kind of lost here. The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to give a pretty good overview of what we do and stand for. You said that the one-sentence descriptions of EA aren't good enough, but they can't possibly be, and no one joins a social movement based on its one-sentence description.

That’s also why a movement with a high churn rate like that would be doomed to having discussions only on a very shallow and, for many, tedious level.

The addition of new members does not prevent old members from having high-quality discussions. It only increases the amount of new-person discussion, which seems perfectly good to me.

If you imagine me as some sort of ideologue with fixed or even just strong opinions, then I can assure you that neither is the case.

I'm not. But the methodology you're using here is suspect and prone to bias.

Other social movements end up like feminism, with oppositions and toxoplasma.

Or they end up successful and achieve major progress.

If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.

Successful social movements don’t happen by default by not worrying about these sorts of dynamics, or I don’t think they do.

No social movement has done things like this, i.e. trying to save time and effort for outsiders who are interested in joining by deflecting their interest, at the expense of its own short-term goals. And no other social movement has had this level of obsessive theorizing about movement dynamics.

How would you realize that?

By calling out such behavior when I see it.

The latter (about careful advertisement) is roughly what I’m proposing here. (And if it turns out that we need more authoritarianism, then I’ll accept that too.)

That sounds like a great way to ensure intellectual homogeneity as well as slow growth. The whole side of this which I ignored in my above post is that it's completely wrong to think that restricting your outward messages will not result in false negatives among potential additions to the movement. So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don't leave, at least not in any bad way, it's going to be >1.
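
(For concreteness, here is a minimal back-of-the-envelope sketch of that break-even comparison in Python. The function name and every number below are hypothetical placeholders, not estimates of anything.)

    # Break-even sketch for a more restrictive outward message.
    # All inputs are hypothetical placeholders, not estimates.
    def net_effect(joiners_deterred, value_per_joiner,
                   bad_exits_prevented, cost_per_bad_exit):
        # Value of loud exits avoided minus value of contributors deterred.
        return (bad_exits_prevented * cost_per_bad_exit
                - joiners_deterred * value_per_joiner)

    # If one prevented loud exit costs about as much as one contributor is worth,
    # then deterring even two would-be contributors makes the tweak net negative.
    print(net_effect(2, 1.0, 1, 1.0))  # -> -1.0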

Why does she even write something about a community she doesn’t feel part of?

She's been with the rationalist community since its early days, as a member of MetaMed, so maybe that has something to do with it.

Movements really get criticized by people who are at the opposite end of the spectrum and completely uninvolved. Every political faction gets its worst criticism from ideological opponents. Rationalists and EAs get most of their criticism from ideological opponents. I just don't see much of this hypothesized twilight-zone criticism that comes from nearly-aligned people, and when it does come it tends to be interesting and worth listening to. You only think of it as unduly significant because you are more exposed to it; you have no idea of the extent and audience of the much more negative pieces written by people outside the EA social circle.

It’s in the nature of moral intuitions that we think everyone should share ours, and maybe there’ll come a time when we have the power to change values in all of society and have the knowledge to know in what direction to change them and by how much, but we’re only starting in that direction now. We can still easily wink out again if we don’t play nice with other moral systems or don’t try to be ignored by them.

I am not talking about not playing nice with other value systems. This is about whether to make conscious attempts to homogenize our community with a single value system and to prevent people with other value systems from making the supposed mistake of exploring our community. It's not cooperation, it's sacrificial, and it's not about moral systems, it's about people and their apparently precious time.

What’s the formal definition of “compromise”? My intuitive one included Pareto improvements.

Stipulate any definition, the point will be the same; you should not be worried about EAs making too many moral trades, because they're going to be Pareto improvements.

I counted this post as a loud, public bang.

Then you should be much less worried about loud public bangs and much more worried about getting people interested in effective altruism.

I’d love to get input on this from an expert in social movements or organizational culture at companies.

Companies experience enormous costs in training new talent and opportunity costs if their talent needs to be replaced. Our onboarding costs are very low in comparison. Companies also have a limited amount of talent they can hire, while a social movement can grow very quickly, so it makes sense for companies to be selective in ways that social movements shouldn't be. If a company could hire people for free then it would be much less selective. Finally, the example you selected (Google) is unusually selective compared to most other companies.

The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to give a pretty good overview of what we do and stand for.

Lila has probably read those. I think Singer’s book contained something to the effect that the book is probably not meant for anyone who wouldn’t pull the drowning child out of the pond. MacAskill’s book is more of a how-to; such a meta question would feel out of place there, but I’m not sure; it’s been a while since I read it.

Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.

If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.

A lot of your objections take the form of likely-sounding counternarratives to my narratives. They don’t make me feel like my narratives are less likely than yours, but I increasingly feel like this discussion is not going to go anywhere unless someone jumps in with solid knowledge of history or organizational culture, historical precedents and empirical studies to cite, etc.

So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don't leave, at least not in any bad way, it's going to be >1.

That’s a good way to approach the question! We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc. With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined. Once we exhaust our options for such tweaks, the problem becomes as hairy as you put it.

I haven’t really dared to take a stab at how such an improvement should be worded. I’d rather base this on a bit of survey data among people who feel that EA values are immoral from their perspective. The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them. They’ll probably already have known that (and the differences may be too subtle to have helped Lila), but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.

And no other social movement has had this level of obsessive theorizing about movement dynamics.

Oh dear, yeah! We should both be writing our little five-hour research summaries on possible cause areas rather than starting yet another marketing discussion. I know someone at CEA who’d get cross with me if he saw me doing this again. xD

It’s quite possible that I’m overly sensitive to being attacked (by outside critics), and I should just ignore it and carry on doing my EA things, but I don’t think I overestimate this threat to such an extent that further investment of our time into this discussion would be proportional.

kbog

Lila has probably read those.

Sure. But Lila complained about small things that are far from universal to effective altruism. The vast majority of people who differ in their opinions on the points described in the OP do not leave EA. As I mentioned in my top level comment, Lila is simply confused about many of the foundational philosophical issues which she thinks pose an obstacle to her being in effective altruism. Some people will always fall through the cracks, and in this case one of them decided to write about it. Don't over-update based on an example like this.

Note also that someone who engages with EA to the extent of reading one of these books will mostly ignore the short taglines accompanying marketing messages, which seem to be what you're after. And people who engage with the community will mostly ignore both books and marketing messages when it comes to making an affective judgement.

Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.

And texts that don't appeal to moral obligation make a weak argument that is simply ignored. That results in apathy and a frivolous approach.

A lot of your objections take the form of likely-sounding counternarratives to my narratives.

Yes, and it's sufficient. You are proposing a policy which will necessarily hurt short-term movement growth. The argument depends on being able to establish a narrative to support its value.

We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc.

But on my side, we shouldn't only count those who join the movement and stay; we should also count those who hear about it and are lightly positive about it, share some articles and books with their friends, publish a positive critique about it, start a conversation with their friends about EA, like it on social media, etc.

With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined.

I don't see how. The more restrictive your message, the less appealing and widespread it is.

The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them.

What a great way to signal-boost messages which harm our movement. Time for the outside view: do you see any organization in the whole world which does this? Why?

Are you really advocating messages like "EA is great but if you don't agree with universally following expected value calculations then it may not be for you?" If we had done this with any of the things described here, we'd be intellectually dishonest - since EA does not assume absurd expected value calculations, or invertebrate sentience, or moral realism.

It's one thing to try to help people out by being honest with them... it's quite another to be dishonest in a paternalistic bid to keep them from "wasting time" by contributing to our movement.

but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.

That is what the vast majority of people who read about EA already do.

It’s quite possible that I’m overly sensitive to being attacked (by outside critics),

Not only that, but you're sensitive to the extent that you're advocating caving in to their ideas and giving up the ideological space they want.

This is why we like rule consequentialism and heuristics instead of doing act-consequentialist calculations all the time. A movement that gets emotionally affected by its critics and shaken by people leaving will fall apart. A movement that makes itself subservient to the people it markets to will stagnate. And a movement whose response to criticism is to retreat to narrower and narrower ideological space will become irrelevant. But a movement that practices strength and assures its value on multiple fronts will succeed.

You get way too riled up over this. I started out being like “Uh, cloudy outside. Should we all pack umbrellas?” I’m not interested in an adversarial debate over the merits of packing umbrellas, one where there is winning and losing and all that nonsense. I’m not backing down; I was never interested in that format to begin with. It would incentivize me to exaggerate my confidence in the merits of packing umbrellas, which has been low all along; incentivize me not to be transparent about my epistemic status, as it were, my suspected biases and such; and so would incentivize an uncooperative setup for the discussion. The same probably applies to you.

I’m updating down from 70% for packing umbrellas to 50% for packing umbrellas. So I guess I won’t pack one unless it happens to be in the bag already. But I’m worried I’m over-updating because of everything I don’t know about why you never assumed what ended up as “my position” in this thread.

kbog

You get way too riled up over this.

As you pointed out yourself, people around here systematically spend too much time on the negative-sum activity (http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/) of speculating on their personal theories for what's wrong with EA, usually from a position of lacking formal knowledge or seasoned experience with social movements. So when some speculation of the sort is presented, I say exactly what is flawed about the ideas and methodology, and will continue to do so until epistemic standards improve. People should not take every opportunity to question whether we should all pack umbrellas; they should go about their ordinary business until they find a sufficiently compelling reason for everyone to pack umbrellas, and then state their case.

And, if my language seems too "adversarial"... honestly, I expect people to deal with it. I don't communicate in any way which is out of bounds for ordinary Internet or academic discourse. So, I'm not "riled up", I feel entirely normal. And insisting upon a pathological level of faux civility is itself a kind of bias which inhibits subtle ingredients of communication.

We’ve been communicating so badly that I would’ve thought you’d be one to reject an article like the one you linked. Establishing the sort of movement that Eliezer is talking about was the central motivation for making my suggestion in the first place.

If you think you can use a cooperative type of discourse in a private conversation where there is no audience that you need to address at the same time, then I’d like to remember that for the next time when I think we can learn something from each other on some topic.

When you speculate too much on complicated movement dynamics, it's easy to overlook things like this via motivated reasoning.

Thanks for affirming the first point. But lurkers on a forum thread don't feel respected or disrespected. They just observe and judge. And you want them to respect us, first and foremost.

I appreciate that you thanked Telofy; that was respectful of you. I've said a lot about how using kind communication norms is both agreeable and useful in general, but the same principles apply to our conversation.

I notice that, in the first passage I've quoted, it's socially (but not logically) implied that Telofy has "speculated", "overlooked things", and used "motivated reasoning". The second passage I've quoted states that certain people who "don't feel respected or disrespected" should "respect us, first and foremost", which socially (but not logically) implies that they are both less capable of having feelings in reaction to being (dis)respected, and less deserving of respect, than we are.

These examples are part of a trend in your writing.

Cut it out.

Thank you. <3

kbog

which socially (but not logically) implies that they are both less capable of having feelings in reaction to being (dis)respected, and less deserving of respect, than we are.

I've noticed that strawmanning and poor interpretations of my writing are a trend in your writing. Cut it out.

I did not state that lurkers should respect us at the expense of us disrespecting them. I stated quite clearly that lurkers feel nothing of the sort, since they are observers. This has nothing to do with who they are, and everything to do with the fact that they are passively reading the conversation rather than being a subject of it. Rather, I argued that lurkers should be led to respect us instead of being unimpressed by us, and that they would be unimpressed by us if they saw that the standard reaction to somebody criticizing and leaving the movement was to leave their complaints unassailed and to affirm that such people don't fit in the movement.

We're trying to make the world a better place as effectively as possible. I don't think that ensuring convenience for privileged Western people who are wandering through social movements is important.

I'm certainly a privileged Western person, and I'm aware that that affords me many comforts and advantages that others don't have! I also think that many people from intersectional perspectives within the scope of "privileged Western person" other than your own may place more or less value on respecting people's efforts, time, and autonomy than you do, and that their perspectives are valid too.

(As a more general note, and not something I want to address to kbog in particular, I've noticed that I do sometimes System-1-feel like I have to justify arguments for being considerate in terms of utilitarianism. Utilitarianism does justify kindness, but feeling emotionally compelled to argue for kindness on grounds of utilitarianism rather than on grounds of decency feels like overkill, and makes it feel like something is off--even if it is just my emotional calibration that's off.)

I'm certainly a privileged Western person, and I'm aware that that affords me many comforts and advantages that others don't have!

This isn't about "let's all check our privileges", this is "the trivial interests of wealthy people are practically meaningless in comparison to the things we're trying to accomplish."

I also think that many people from intersectional perspectives within the scope of "privileged Western person" other than your own may place more or less value on respecting people's efforts, time, and autonomy than you do, and that their perspectives are valid too.

There's nothing necessarily intersectional/background-based about that, you can find philosophers in the Western moral tradition arguing the same thing. Sure, they're valid perspectives. They're also untenable, and we don't agree with them, since they place wealthy people's efforts, time, and autonomy on par with the need to mitigate suffering in the developing world, and such a position is widely considered untenable by many other philosophers who have written on the subject. Having a perspective from another culture does not excuse you from having a flawed moral belief.

But don't get confused. This is not "should we rip people off/lie to people in order to prevent mothers from having to bury their little kids" or some other moral dilemma. This is "should we go out of our way to give disclaimers and pander to the people we market to, something which other social movements never do, in order to save them time and effort." It's simply insane.

(As a more general note, and not something I want to address to kbog in particular, I've noticed that I do sometimes System-1-feel like I have to justify arguments for being considerate in terms of utilitarianism. Utilitarianism does justify kindness, but feeling emotionally compelled to argue for kindness on grounds of utilitarianism rather than on grounds of decency feels like overkill, and makes it feel like something is off--even if it is just my emotional calibration that's off.)

The kind of 'kindness' being discussed here - going out of one's way to make your communication maximally considerate to all the new people it's going to reach - is not grounded in traditional norms and inclinations to be kind to your fellow person. It's another utilitarian-ish approach, equally impersonal as donating to charity, just much less effective.

There's nothing necessarily intersectional/background-based about that

People have different experiences, which can inform their ability to accurately predict how effective various interventions are. Some people have better information on some domains than others.

One utilitarian steelman of this position that's pertinent to the question of the value of kindness and respect of others' time would be that:

  • respecting people's intellectual autonomy and being generally kind tends to bring more skilled people to EA
  • attracting more skilled EAs is worth it in utilitarian terms
  • there are only some people who have had experiences that would point them to this correct conclusion

Sure, they're valid perspectives. They're also untenable, and we don't agree with them

The kind of 'kindness' being discussed here [is]... another utilitarian-ish approach, equally impersonal as donating to charity, just much less effective.

I feel that both of these statements are untrue of myself, and I have some sort of dispreference for speech about how "we" in EA believe one thing or another.

I'm not going to concede the ground that this conversation is about kindness or intellectual autonomy. Because it's really not what's at stake. This is about telling certain kinds of people that EA isn't for them.

there are only some people who have had experiences that would point them to this correct conclusion

But this is about optimal marketing and movement growth, a very objective empirical question. It doesn't seem to have much to do with personal experiences; we don't normally bring up intersectionality in debates about other ordinary things like this, we just talk about experiences and knowledge in common terms, since race and so on aren't dominant factors.

By the way, think of the kind of message that would be sent. "Hey you! Don't come to effective altruism! It probably isn't for you!" That would be interpreted as elitist and close-minded, because there are smart people who don't have the same views that other EAs do and they ought to be involved.

Let's be really clear. The points given in the OP, even if steelmanned, do not contradict EA. They happened to cause trouble for one person, that's all.

I have some sort of dispreference for speech about how "we" in EA believe one thing or another.

You can interpret that kind of speech prescriptively - i.e., I am making the claim that given the premises of our shared activities and values, effective altruists should agree that reducing world poverty is overwhelmingly more important than aspiring to be the nicest, meekest social movement in the world.

Edit: also, since you stated earlier that you don't actually identify as EA, it really doesn't make any sense for you to complain about how we talk about what we believe.

I agree with your last paragraph, as written. But this conversation is about kindness, and trusting people to be competent altruists, and epistemic humility. That's because acting indifferent to whether or not people who care about similar things as we do waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

But this is about optimal marketing and movement growth, a very empirical question. It doesn't seem to have much to do with personal experiences

I'm happy to discuss optimal marketing and movement growth strategies, but I don't think the question of how to optimally grow EA is best answered as an empirical question at all. I'm generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way & extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I'd expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one, AB-testing EA's overall conversation norms isn't possible. If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.

"Conversation norms" might seem like a dangerously broad term, but I think it's pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse. There's no reason to think that a decrease in quality of EA's conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA's conversation norms become less healthy, key people are pushed away, or don't engage with us in the first place, and this destroys utility we'd have otherwise produced.

It may be worse than this, even: if counterfactual EAs who care a lot about having healthy conversational norms are a somewhat homogeneous group of people with skill sets that are distinct from our own, this could cause us to disproportionately lack certain classes of talented people in EA.

Really liked this comment. Would be happy to see a top level post on the issue.

kbog

I agree that it would be better out of context, since it's strawmanning the comment that it's trying to respond to.

That's because acting indifferent to whether or not people who care about similar things as we do waste time figuring things out is cold

No, it's not cold. It's indifferent, and normal. No one in any social movement worries about wasting the time of people who come to learn about things. Churches don't worry that they're wasting people's time when inviting them to come in for a sermon; they don't advertise all the reasons that people don't believe in God. Feminists don't worry that they're wasting people's time by not advertising that they want white women to check their privilege before women of color. BLM doesn't worry that it's wasting people's time by not advertising that they don't welcome people who are primarily concerned with combating black-on-black violence. And so on.

Learning what EA is about does not take a long time. This is not like asking people to read Marx or the LessWrong sequences. The books by Singer and MacAskill are very accessible and do not take long to read. If someone reads it and doesn't like it, so what? They heard a different perspective before going back to their ordinary life.

is cold in a way that disproportionately drives away certain types of skilled people who'd otherwise feel welcome in EA.

Who thinks "I'm an effective altruist and I feel unwelcome here in effective altruism because people who don't agree with effective altruism aren't properly shielded from our movement"? If you want to make people feel welcome then make it a movement that works for them. I fail to see how publicly broadcasting incompatibility with others does any good.

Sure, it's nice to have a clearly defined outgroup that you can contrast yourselves with, to promote solidarity. Is that what you mean? But there are much easier and safer punching bags to be used for this purpose, like selfish capitalists or snobby Marxist intellectuals.

Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.

Intersectionality does not mean simply looking at people's experiences from different backgrounds. It means critiquing and moving past sweeping modernist narratives of the experiences of large groups by investigating the unique ways in which orthogonal identity categories interact. I don't see why it's helpful, given that identity hasn't previously entered the picture at all in this conversation, and that there don't seem to be any problematic sweeping identity narratives floating around.

The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don't understand or don't appreciate that unhealthy conversation norms subtly but surely drive away valuable people.

I am a little bit confused here. You are the one saying that we should make outward facing statements telling people that EA isn't suited for them. How is that not going to drive away valuable people, in particular the ones who have diverse perspectives?

And in what way is failing to make such statements an unhealthy conversational norm? I have never seen any social movement perform this sort of behavior. If doing so is a conversational norm then it's not one which people have grown accustomed to expect.

Moreover, the street runs both ways. Here's a different perspective which you may have overlooked due to your background: some people want to be in a movement that's solid and self-assured. Creating an environment where language is constantly being policed for extreme niceness can lead some people to feel uninterested in engaging in honest dialogue.

If you're the sort of person who doesn't use particularly friendly conversation norms in the first place, you're likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.

You can reject quantitative metrics, and you can also give some credence to allegations of bias. But you can't rely on this sort of thing to form a narrative. You have to find some kind of evidence.

When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA's conversation norms worse.

This is a strawman of my statements, which I have no interest in validating through response.

I agree with the characterization of EA here: it is, in my view, about doing the most good that you can do, and EA has generally defined "good" in terms of the well-being of sentient beings. It is cause-neutral.

People can disagree on whether potential beings (who would not exist if extinction occurred) have well-being (total vs. prior-existence), they can disagree on whether non-human animals have well-being, and can disagree on how much well-being a particular intervention will result in, but they don't arbitrarily discount the well-being of sentient beings in a speciesist manner or in a manner which discriminates against potential future beings. At least, that's the strong form of EA. This doesn't require one to be a moral realist, though it is very close to utilitarianism.

If I'm understanding this post correctly, the "weak form" of EA - donating more and donating more effectively to causes you already care about, or even just donating more effectively given the resources you're willing to commit - is not unique enough for Lila to stay. I suspect, though, that many EAs (particularly those who are only familiar with the global poverty aspect of EA) only endorse this weak form, but the more vocal EAs are the ones who endorse the strong form.

If morality isn't real, then perhaps we should just care about ourselves.

But suppose we do decide to care about other people's interests - maybe not completely, but at least to some degree. To the extent that we decide to devote resources to helping other people, it makes sense that we should do this to the maximal extent possible and this is what utilitarianism does.

To the extent that we decide to devote resources to helping other people, it makes sense that we should do this to the maximal extent possible

I don't think I do anything in my life to the maximal extent possible.

So you don't want to raise your kids so that they can achieve their highest potential? Or if you're training for a 5K/half-marathon, you don't want to make the best use of your time training? You don't want to get your maximal PR? I digress.

I do not believe in all the ideas, especially about MIRI (AI risk). Although, in my mind, EA is just getting the biggest bang for your buck. Donating is huge! And organizations, such as GiveWell, are just tools. Sure, I could scour GuideStar and evaluate and compare 990 forms--but why go through all the hassle?

Anyway, honestly it doesn't really matter that people call themselves "effective altruists." And the philosophical underpinnings--which are built to be utilitarian-independent--seem like they came after the fact. "Effective Altruism" is just a label really; so we can be on the same general page: Effective Altruism has Five Serious Flaws - Avoid It - Be a DIY Philanthropist Instead

There's some statistic out there that says two-thirds or so of donors do no research at all into the organizations they give to. I hope that some people just wouldn't give at all ~ nonmaleficence.

If morality isn't real, then perhaps we should just care about ourselves.

Lila's argument that "morality isn't real" also carries over to "self-interest isn't real". Or, to be more specific, her argument against being systematic and maximizing EV in moral dilemmas also applies to prudential dilemmas, aesthetic dilemmas, etc.

That said, I agree with you that it's more important to maximize when you're dealing with others' welfare. See e.g. One Life Against the World:

For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.)

Why might it not be obvious? Well, suppose there's a qualitative duty to save what lives you can - then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend - so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost - and thus passing to the entire world changes little.

I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, if it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend rhetorical saving the world - it is as if they had saved an intergalactic civilization.

Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save the child. I'm nearby, within reach, so I leap forward and drag one child off the railroad tracks - and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. "Quick!" you scream to me. "Do something!" But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child, or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?

Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don't think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives.