In 2017, Ari Ne’eman wrote an essay about effective altruism and disability rights that quoted me, and I have only just gotten around to writing up a response to it.

I am going to skip the first part of the essay, which establishes that Peter Singer is quite ableist. I agree that Peter Singer is quite ableist, and anyway have literally never had a discussion with any effective altruist advocating for mass infanticide of disabled babies. It is not, I think, a live issue in the effective altruism community.

Later in the post, however, Ne’eman writes:

After all, from Singer’s utilitarian perspective there is no meaningful moral distinction to be made between charitable donations and social services like healthcare. Both are commitments a society makes to improving the conditions of its less fortunate. It would be illogical to argue for perfect efficiency in voluntary contributions and not with taxpayer funds. As Singer makes clear in his views on health policy, utilitarian thinking belongs in government just as much as it does in philanthropy.

By what right should the government fund wheelchairs when these funds might be more efficiently spent on paralysis prevention or cure research? How can we justify vocational rehabilitation services, which spend considerable sums helping disabled adults find employment, when non-disabled people might achieve similar outcomes for a fraction of the cost? If resources should always flow to their area of greatest efficiency, there is little hope for people with significant disabilities, a notoriously resource-intensive group to assist.

Indeed, the very idea of a welfare state comes under threat in this kind of moral calculus. Since effective altruism allows for no greater moral weight to be given to those in one’s own family or community, it becomes hard to justify expensive programs like housing assistance or in-home care for seniors and people with disabilities, which can cost tens of thousands of dollars per person, when malaria net charities can save lives elsewhere at a fraction of the cost.

First, while this is a tangential issue, I would like to address the middle paragraph. It is far from obvious to me that buying people wheelchairs is less cost-effective than paralysis cure research, or that helping disabled people get jobs is less cost-effective than helping high-school dropouts or felons or whomever else get jobs. Ne’eman doesn’t link to any sort of cost-benefit analysis establishing this claim. And it’s also just… kind of a weird argument? “How does effective altruism deal with the fact that the issues I passionately advocate for are self-evidently worse ways to spend money than the issues I don’t advocate for, so self-evidently that I don’t even have to back up my claims in any way?” That seems like a bit of an own goal? I assume this is not what Ne’eman means, but I am incredibly confused about what on earth he could mean.

On the main point: I’m not sure whether Ne’eman is arguing about Everyone Wakes Up A Perfect Utilitarian One Day World or the actual world we actually have right now, so I’m going to take each case one at a time.

The first piece of good news about Everyone Wakes Up A Perfect Utilitarian One Day World is that it is absolutely never going to happen. No effective altruist is a perfect utilitarian. (Some of us aren’t even utilitarians at all.) I can’t overstate how unlikely it is that effective altruists will not only convince every random nurse in Montana to take the Giving What We Can pledge, but that every such nurse will also decide to assess their every purchasing decision by the greatest good for the greatest number.

Everyone Wakes Up A Perfect Utilitarian One Day World would definitely buy a lot of malaria nets. But there is not an infinite demand for malaria nets. At some point, everyone who needs a malaria net would have one, and we would spend money on other things. These things would… probably include wheelchairs? A wheelchair is only a few thousand dollars. That’s a pretty small price to let people leave the house and participate in their communities! In fact, Everyone Wakes Up A Perfect Utilitarian One Day World may very well have more people with wheelchairs. It is difficult to imagine Everyone Wakes Up A Perfect Utilitarian One Day World doing much asset testing, and it would certainly give wheelchairs to all the people in the developing world who can’t afford one.

I am really baffled by the idea that in Everyone Wakes Up A Perfect Utilitarian One Day World the malaria nets would come out of the wheelchair budget instead of the $100 million spent on the average big-budget movie or the $32 billion we spend every year on chewing gum. $32 billion would buy a heck of a lot of wheelchairs.

On the other hand, Ari Ne’eman might be referring to the actual world we actually exist in, in which case… no one is advocating for defunding housing assistance or vocational rehab? You can scroll down Open Philanthropy’s list of grants. They give money to political advocacy related to farmed animal welfare, pandemic preparedness, climate change, and criminal justice reform. (Although their criminal justice reform work has recently spun out into its own organization.) Defunding the welfare state is not on the list.

It’s true that money the government spends on pandemic preparedness and climate change is money it can’t spend on something else, but there’s no reason to assume that something else would be Medicaid. I, for one, would hope we pay for pandemic preparedness by buying fewer fighter jets.

Actually existing effective altruism is, if anything, good for disability rights. Effective altruist Sam Bankman-Fried made the second-largest donation to Joe Biden’s presidential campaign, thus protecting the Affordable Care Act and Medicaid and making concrete improvements in the lives of disabled people.

I think the heart of my disagreement with Ne’eman comes earlier in the post:

Unfortunately, effective altruists take [concern for impact] a step further, arguing that impact is the only appropriate factor to consider in charitable contributions, with considerations like a community’s responsibility to its members discounted as illegitimate. Effective altruism does not just teach that it is better to contribute to malaria nets in Africa than to arts funding at home, a reasonable conclusion to most. Instead, the ideology goes a step further, arguing that one should invest in interventions in the developing world rather than things like homeless shelters or social services for low-income Americans in one’s own community. Anything less is unethical favoritism, since charitable dollars spent in one’s local communities require greater investment per life saved than those in more impoverished parts of the world.

So I live near Marin County, one of the richest counties in the United States. Marin is also home to the Buck Trust, a trust with hundreds of millions of dollars that can only be used to help people in Marin County. Unfortunately, since Marin County doesn’t actually contain many people in need of help, the money goes to symphony orchestras and high-school sports teams and bike paths and a study of French intensive gardening and giving some money to every kid in Marin with top grades. (Remember, these are some of the richest kids in the US.)

I live in Oakland. There are any number of people in Oakland who need the money more than a high school sports team in Marin does, starting with the people in the homeless encampment a few blocks away from me and continuing from there. But this money doesn’t go to people in Oakland, because Beryl Buck wanted to help people in her community.

That seems kind of… bad?

And if we say “never mind that this is ‘your community,’ it is wrong to spend hundreds of millions of dollars helping rich white people in Marin when there are poor black people in Oakland who need the money way more”… well, it raises the question of why it’s okay to spend the money helping poor black people in Oakland and not helping poor black people in Africa. Is the system here that it’s outrageous if the other community is within driving distance but not if you’d have to take a plane?

Every egalitarian instinct rebels against what Ne’eman says in this essay. He is functionally advocating that the richest people, the ones with the greatest capacity to help, spend their time, resources, and money on the people who need it least.

There are, in fact, disabled people in the developing world. There are probably more disabled people in the developing world than in the developed world. Why did you decide that they don’t count? Why, when you’re helping severely disabled people, is there this whole group of severely disabled people in desperate need whom you’re ignoring?

There are some disabilities that aren’t represented much in the developing world, because developing-world healthcare systems can’t afford to take care of them, and so the people who have them are dead. As Ne’eman says when he discusses Singer’s view of infanticide, disabled people dying is bad.

In his conclusion, Ari Ne’eman writes:

By removing the attacks on disabled people, Singer’s ideas become milquetoast, almost mainstream liberal platitudes about being nice to animals and using data in decision-making. To put it another way, in so far as effective altruism is compatible with disability rights, it offers nothing new. And in so far as it is new, it is not compatible.

This is the section it is most unfair for me to respond to five years later, because it’s now way more obvious how effective altruism differs from mainstream liberalism.

I wish I lived in the world Ari Ne’eman does. Imagine if all charities rigorously collected information on their outcomes and made it publicly available—including their mistakes, the negative side effects, the flaws in the data collection, and the reasons to believe that it didn’t work at all. Imagine if a liberal eating meat were as unthinkable as using the N-word, and donating ten percent of your income to charities that help people in the developing world were as normal as celebrating a gay friend’s wedding. Imagine if everyone responded to the Covid-19 pandemic by throwing billions of dollars at biosecurity and pandemic preparedness, and by realizing that we were deeply unprepared for a pandemic, and so we should get ahead of issues like artificial intelligence that aren’t problems yet but might be soon.

It sounds very nice, Ari Ne’eman’s world. I want to move there.

Effective altruists talk a lot about ethics. It’s fun to get into the nitty-gritty of virtue ethics versus deontology. But, frankly, the core effective altruist beliefs are platitudes, the kind of thing I teach my four-year-old. “It is bad to hurt animals.” “Every person matters equally, even if they are poor or black or far away.” “We should care about the effects of our actions on future generations.” “We should try to figure out what ways of helping people work the best and do those instead of ones that don’t work as well.”

But if you actually take those platitudes seriously they lead to bizarre actions. You donate a kidney to a stranger. You skip a vacation to pay for children you don’t know not to get a disease you can’t pronounce. You shut down a charity that donors love because the evidence isn’t good enough. You decide that rather than being a doctor you’ll help sentient beings more if you become a food scientist and make soybeans taste really really good. You spend a lot of time talking about worries that sound like science fiction movies starring Arnold Schwarzenegger.

The new thing in effective altruism isn’t ableism. The new thing is that, unlike everyone else in the entire world, you should actually not flunk Applied Kindergarten Ethics.

And I can assure you that, should we Applied Kindergarten Ethics-Passers take over the world, we will definitely buy people wheelchairs.

Comments

Thank you so much for writing this. 

Effective altruists talk a lot about ethics. It’s fun to get into the nitty-gritty of virtue ethics versus deontology. But, frankly, the core effective altruist beliefs are platitudes, the kind of thing I teach my four-year-old. “It is bad to hurt animals.” “Every person matters equally, even if they are poor or black or far away.” “We should care about the effects of our actions on future generations.” “We should try to figure out what ways of helping people work the best and do those instead of ones that don’t work as well.”

But if you actually take those platitudes seriously they lead to bizarre actions. You donate a kidney to a stranger. You skip a vacation to pay for children you don’t know not to get a disease you can’t pronounce. You shut down a charity that donors love because the evidence isn’t good enough. You decide that rather than being a doctor you’ll help sentient beings more if you become a food scientist and make soybeans taste really really good. You spend a lot of time talking about worries that sound like science fiction movies starring Arnold Schwarzenegger.

This is just so powerful and so true, tbh it almost made me tear up, really describes well what we should be about here. 

Now this is a Cold Take.

I like the division here between "everyone wakes up a perfect utilitarian" and "EA on the margin".


Based on the quoted material, I understand Ne'eman to be disparaging the fact that utilitarianism has a tendency to deprivilege positive impacts that we can have on our local environment - indeed, the positive version of this is one of the things I most admire about the EA community. However, I think the relevant, annoying quirk of communication is that we can say "X is a really great intervention for improving welfare", and have people who don't already buy into the utilitarian framework realise - correctly - that this means Y is implicitly worse than X. Often this gets garbled from "Y is worse than X" to "Y is bad and X is good" (as opposed to "Y is good and X is even better").

The 'steelman' version of the case as I understand it is that being a committed EA / utilitarian (the two are synonymous unless people are being unusually careful, and the conflation is conceptually accurate here anyway) means that you'll often have to trade off between doing 'unintuitive good in large quantities' and doing 'more intuitive good in smaller amounts'. People placing more credence in utilitarianism might accept the greater increase in U as self-evidently preferable to the loss of intuitiveness, but many others will make judgements on the basis of their moral intuition and come to the opposite conclusion. It's not totally far-fetched or clearly 'evil' to have a moral philosophy which takes doing 'intuitive good' to be a duty, and maximising U to be supererogatory - great if you want to, but you're not 'bad' for doing otherwise.

I think in particular that here you are strawmanning:
“How does effective altruism deal with the fact that the issues I passionately advocate for are self-evidently worse ways to spend money than the issues I don’t advocate for, so self-evidently that I don’t even have to back up my claims in any way?”
In particular, you're begging the question by using the word 'worse', as you've already assumed that U(X)>U(Y) implies X>Y, whereas Ne'eman is saying (in my own interpretation) that his internal moral compass 'knows' Y>X, which is frankly quite difficult to argue against if he isn't concerned about potentially being inconsistent (and many people aren't!  Remember that rationality is instrumentally useful because it prevents Dutch-book attacks and helps us maximise utility functions effectively, but if you don't have a utility function, then being irrational might be a reasonable price to pay to satisfy your moral impulses).

It's hard to say, and I would love to see if there's been any work done on this front, but I would hypothesise that, at the meta-level, people would buy in more willingly to the EA mission if it were presented as being totally supererogatory (and then maybe would stick around long enough to realise that this might not be a necessary stipulation) - the alternative being to go 'all-or-nothing' in demanding people submit to the 'tyranny of the QALY'.

 

Tl;dr: to a utilitarian, not being a utilitarian is inconsistent, but an ethical belief isn't ipso facto stupid just because it is inconsistent with utilitarianism, and I'd think that taking Ne'eman's implicit concern seriously looks something like portraying utility maximisation as supererogatory (maybe even if you totally don't agree and think it's 100% required?)

(sorry if this was ranty - unfortunately my default writing style - I really did like the post :) )

Someday I will write a piece on how disability rights and EA are often compatible, if we discard Singer's (disgusting) takes on infanticide and truly value ALL lives. Our focus on QALYs currently makes EA an extremely uncomfortable place to be disabled. (I could write a whole piece on that too.) Nearly every global health piece I read reminds me that my life is worth less than other people's, which honestly I do not love. But if we take all lives to be worth saving, EA can be a powerful way to support disabled people.

Disability is more common in the developing world than the developed, and often the people in most need are disabled.  The poorest of the poor - the people EA is likely to be most concerned about - are often disabled.  When EAs send money through GiveDirectly, they are directly supporting the livelihoods of quite a few disabled people.  (A rough estimate would suggest perhaps a quarter of GiveDirectly's recipients are disabled.)

And in the developing world, disabled people are often not more resource-intensive to support than able-bodied people. The treatments and support needed for disabled people in poor contexts are often cheap enough to be perfectly EA-compatible. (E.g. economic growth - something EAs frequently argue we should support more - may lift all boats, but it is extra important for disabled people to have work options that are not physical labor. So growth of a services sector can be especially useful to disabled people.) Development is disability justice.

I would be very happy to read this piece, and I encourage you to write it (not just for my own desire to read it, but because I think that the world will be ever-so-slightly a better place if people in the EA community read and grapple with these ideas).

"(A rough estimate would suggest perhaps a quarter of GiveDirectly's recipients are disabled.)" If you don't mind sharing, I'm curious about how you (roughly) calculated this? 
