I don't intend to convince you to leave EA, and I don't expect you to convince me to stay. But typical insider "steel-manned" arguments against EA lack imagination about other people's perspectives: for example, they assume the audience is utilitarian. Outsider anti-EA arguments are often mean-spirited or misrepresent EA (though I think EAs still under-value these perspectives). So I offer a perspective that's rarer: a former "insider" who had a change of heart about the principles of EA.
Like many EAs, I'm a moral anti-realist. This is why I find it frustrating that EAs act as if utilitarianism is self-evident and would be the natural conclusion of any rational person. (I used to be guilty of this.) My view is that morality is largely the product of the whims of history, culture, and psychology. Any attempt to systematize such complex belief systems will necessarily lead to unwanted conclusions. Given anti-realism, I don't know what compels me to "bite bullets" and accept these conclusions. Moral particularism is closest to my current beliefs.
Some specific issues with EA ethics:
- Absurd expected value calculations/Pascal's mugging
- Hypothetically causing harm to individuals for the good of the group. Some utilitarians come up with ways around this (e.g. the reputation cost would outweigh the benefits). But this raises the possibility that in some cases the costs won't outweigh the benefits, and we'll be compelled to do harm to individuals.
- Under-valuing violence. Many EAs glibly act as if a death from civil war or genocide is no different from a death from malaria. Yet this contradicts deeply held intuitions about the costs of violence. For example, many people would agree that a parent breaking a child's arm through abuse is far worse than a child breaking her arm by falling out of a tree. You could frame this as a moral claim that violence holds a special horror, or as an empirical claim that violence causes psychological trauma and other harms that must be accounted for in a utilitarian framework. The unique costs of violence are also apparent in the extreme actions people take to avoid it. Large migrations of people are most strongly associated with war; economic downturns drive migration to a lesser degree, and disease outbreaks to a far lesser degree. These revealed preferences don't line up with how EAs rank the severity of these problems.
Once I rejected utilitarianism, much of the rest of EA fell apart for me:
- Valuing existential risk and high-risk, high-reward careers relies on expected value calculations
- For me, prioritizing animals (particularly invertebrates) relied on total-view utilitarianism. I value animals (particularly non-mammals) very little compared to humans and find the evidence for animal charities very weak, so the only convincing argument for prioritizing farmed animals was their large numbers. (I still endorse veganism; I just don't donate to animal charities.)
- GiveWell's recommendations are overly focused on disease-associated mortality and short-term economic indicators, from my perspective. They fail to address violence and exploitation, which are major causes of poverty in the developing world. (Incidentally, I also think that they undervalue how much reproductive freedom benefits women.)
The remaining principles of EA, such as donating significant amounts of one's money and ensuring that a charity is effective in achieving its goals, weren't unique enough to convince me to stay in the community.
On the antirealist position, calling something moral or immoral is a different kind of claim from what the realist means by it. Since moral talk is not about facts in the first place, a statement need not be factual to have moral force. Instead, if a moral statement is an expression of emotion, for instance, then to have moral force it needs to properly express that emotion. But I'm not well read here, so that's about as far as my understanding goes.
Sure, though that's not quite what we mean by moral uncertainty, which is the idea that there are different moral theories and we're not sure which one is right. E.g.: https://philpapers.org/archive/URAMIM.pdf
You're referring to a kind of metaethical uncertainty: uncertainty over whether there are any moral requirements at all. In that case this is more relevant, and it's the same basic idea you have: http://www.journals.uchicago.edu/doi/full/10.1086/505234 And yeah, it's a good argument, though William MacAskill has a paper out there arguing that it doesn't always work.
Generally speaking, you cannot be both: there are antirealists and there are realists. Noncognitivists are antirealists, and so are error theorists.
Just as one can be an antirealist particularist, one can be an antirealist consequentialist.
Quasi-realism is different; it's probably best considered something in between, since the boundaries between antirealism and realism are blurry.
I would recommend reading from here if you want to go deep into the positions, and then any particular citations that get your interest:
https://plato.stanford.edu/entries/moral-anti-realism/
https://plato.stanford.edu/entries/moral-realism/
https://plato.stanford.edu/entries/moral-anti-realism/projectivism-quasi-realism.html
Or, if you want a couple of particular arguments, look at sources 3 and 4 linked by Rob.
Once you've read most of the above, you might want to look at things written by rationalists as well.