In a 2013 TED talk, Peter Singer claims:

“It costs about 40,000 dollars to train a guide dog and train the recipient so that the guide dog can be an effective help to a blind person. It costs somewhere between 20 and 50 dollars to cure a blind person in a developing country if they have trachoma.”

 

Unfortunately, this claim is not accurate. To begin with, blindness from trachoma is irreversible, so it is only possible to prevent blindness from trachoma, not to cure it. According to a GiveWell blog post, one trachoma surgery does cost ~$20-60, but “there can be a small improvement in vision following surgery”. According to their back-of-the-envelope calculation, which rests on several assumptions, one case of full-blown blindness is averted for every 6-20 successful surgeries. In any case, my point is that people who use this example to advertise GiveWell don't read what GiveWell actually says about it.
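To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (my own reconstruction from the figures quoted above, not GiveWell's calculation):

```python
# Back-of-the-envelope reconstruction (mine, not GiveWell's):
# implied cost per case of blindness averted, from the figures above.
cost_per_surgery_usd = (20, 60)        # low and high cost per trachoma surgery
surgeries_per_case_averted = (6, 20)   # successful surgeries per case of blindness averted

low = cost_per_surgery_usd[0] * surgeries_per_case_averted[0]
high = cost_per_surgery_usd[1] * surgeries_per_case_averted[1]
print(f"Implied cost per case of blindness averted: ${low}-${high}")  # $120-$1200
```

Even on these rough numbers, the implied cost per case of blindness averted is well above the $20-50 that Singer cites.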

 

-------

EDIT (2017-05-16): Even though GiveWell hasn't made such a claim and may have a different opinion, one doctor (who has a much deeper understanding of these issues than I do) commented that she "would be comfortable with saying that for about $100 we can prevent trachoma-induced blindness" and that Singer's claim was not nearly as inaccurate as I made it seem.

-------

 

As of 2017-05-10, Giving What We Can also gives a similar example:

“In the developing world there are more than a million people suffering from trachoma-induced blindness and poor vision which could be helped by a safe eye operation, costing only about $100 and preventing 1-30 years of blindness and another 1-30 years of low vision, according to GiveWell.org”

They also do something EAs (including me) don't do often enough — provide a source. The source is a GiveWell page that was published in 2009 and carries a disclaimer:

“The content on this page has not been recently updated. This content is likely to be no longer fully accurate, both with respect to the research it presents and with respect to what it implies about our views and positions.”

The page has the following text:

“We have not done thorough cost-effectiveness analysis of this program. Because such analysis is highly time-consuming - and because the results can vary significantly depending on details of the context - we generally do not provide cost-effectiveness analysis for an intervention unless we find what we consider to be a strong associated giving opportunity.

We provide some preliminary figures based on the Disease Control Priorities in Developing Countries report, which we previously used for cost-effectiveness estimates until we vetted its work in 2011, finding major errors that raised general concerns.

We have relatively little information about the likely impact of this program, so it's difficult to estimate the cost-effectiveness.”

[...]

 

“Using a simple conversion calculation, we estimate that $100 prevents 1-30 years of blindness and an additional 1-30 years of low vision when spent on surgeries (though insignificant benefits, in these terms, when spent on antibiotics). The source of the Disease Control Priorities in Developing Countries report's estimate is unclear and these figures should be taken with extreme caution.”

 

It seems unfair to provide just the numbers and skip all these disclaimers. Despite knowing about this uncertainty, I sometimes feel tempted to omit the disclaimers too and just present the numbers to be more convincing. After all, the goal is very admirable: to help more people living in extreme poverty. But I believe that in the long run EA will achieve more if we are totally honest and upfront about uncertainties and never take such shortcuts. Otherwise we might not be trusted the next time we have something to say. Furthermore, to influence the world we need our community to have a correct model of the world.

 

On the other hand, trachoma is a horrible disease. Just watch this excerpt:

tl;dw: eyelids turn inwards and eyelashes scrape the eyeball, causing intense pain on every blink. That scraping eventually causes blindness. People treat themselves by pulling out their eyelashes with tweezers. One woman said she does it every 2 weeks. Horrible.

 

If you worry about being convincing, you can talk about that and then honestly discuss the uncertainty in the numbers. Most people are scope insensitive anyway. Or you can talk about cataract surgery instead of trachoma, because the disclaimers on that page seem slightly less severe. Or just talk about your favorite charity and then add: "imagine if suffering could be prevented so cheaply in our country; action would be taken urgently". But the main points of this post are:

 

  • many of us were overstating the point that money goes further in poor countries

  • many of us don’t do enough fact checking, especially before making public claims

  • many of us should communicate uncertainty better

-------

EDIT (2017-05-15):

Many people in the comments gave other reasons not to use the comparison, but if you decide to use it anyway and want to quote GiveWell, you could use the figures from this comment by Peter Singer. Alternatively, you can use one of the other comparisons proposed by Ben Todd.

 

Comments

Regrettably, I misspoke in my TED talk when I referred to "curing" blindness from trachoma. I should have said "preventing." (I used to talk about curing blindness by performing cataract surgery, and that may be the cause of the slip.) But there is a source for the figure I cited, and it is not GiveWell. I give the details in The Most Good You Can Do, in an endnote on p. 194, but to save you all looking it up, here it is:

"I owe this comparison to Toby Ord, “The moral imperative towards cost-effectiveness,” http://www.givingwhatwecan.org/sites/givingwhatwecan.org/files/attachments/moral_imperative.pdf. Ord suggests a figure of $20 for preventing blindness; I have been more conservative. Ord explains his estimate of the cost of providing a guide dog as follows: “Guide Dogs of America estimate $19,000 for the training of the dog. When the cost of training the recipient to use the dog is included, the cost doubles to $38,000. Other guide dog providers give similar estimates, for example Seeing Eye estimates a total of $50,000 per person/dog partnership, while Guiding Eyes for the Blind estimates a total of $40,000.” His figure for the cost of preventing blindness by treating trachoma comes from Joseph Cook et al., “Loss of vision and hearing,” in Dean Jamison et al., eds., Disease Control Priorities in Developing Countries, 2d ed. (Oxford: Oxford University Press, 2006), 954. The figure Cook et al. give is $7.14 per surgery, with a 77 percent cure rate. I thank Brian Doolan of the Fred Hollows Foundation for discussion of his organization’s claim that it can restore sight for $25. GiveWell suggests a figure of $100 for surgeries that prevent one to thirty years of blindness and another one to thirty years of low vision but cautions that the sources of these figures are not clear enough to justify a high level of confidence."

Now, maybe there is some more recent research casting doubt on this figure, but note that the numbers I use allow that the figure may be $100. (Typically, when I speak on this, I give a range, saying that for the cost of training one guide dog, we may be able to prevent somewhere between 400 and 1,600 cases of blindness.) Probably it isn't necessary even to do that. The point would be just as strong if it were 400, or even 40.
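For readers who want the arithmetic spelled out, here is a rough reconstruction of that range; the per-case bounds of $25 and $100 come from the endnote above, and the code is my sketch, not Singer's source:

```python
# Rough reconstruction of the 400-1,600 range (my arithmetic).
guide_dog_cost = 40_000      # USD: training the dog plus the recipient
cost_per_case = (25, 100)    # USD per case of blindness prevented (endnote's bounds)

high = guide_dog_cost // cost_per_case[0]   # 1,600 cases at $25 each
low = guide_dog_cost // cost_per_case[1]    # 400 cases at $100 each
print(f"{low}-{high} cases of blindness prevented per guide dog")
```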

EDIT: this comment contains some mistakes

To begin with, I want to say that my goal is not to put blame on anyone but to change how we speak and act in the future.

His figure for the cost of preventing blindness by treating trachoma comes from Joseph Cook et al., “Loss of vision and hearing,” in Dean Jamison et al., eds., Disease Control Priorities in Developing Countries, 2d ed. (Oxford: Oxford University Press, 2006), 954. The figure Cook et al. give is $7.14 per surgery, with a 77 percent cure rate.

I am looking at this table from the cited source (Loss of Vision and Hearing, DCP2). It's a 77% cure rate for trachoma, which sometimes develops into blindness, not a 77% cure rate for blindness. At least that's how I interpret it; I can't be sure, because the source cited for the figure in the DCP2 table doesn't even mention trachoma! From what I've read, recurrences sometimes happen, so a 77% cure rate for trachoma is much more plausible. I'm afraid Toby Ord made the mistake of implying that curing trachoma = preventing blindness.

What is more, Toby Ord used the same DCP2 report that GiveWell used, and GiveWell found major errors in it. To sum up very briefly:

Eventually, we were able to obtain the spreadsheet that was used to generate the $3.41/DALY estimate. That spreadsheet contains five separate errors that, when corrected, shift the estimated cost effectiveness of deworming from $3.41 to $326.43. [...] The estimates on deworming are the only DCP2 figures we’ve gotten enough information on to examine in-depth.

Regarding the Fred Hollows Foundation, please see GiveWell's page about them and this blog post. In my eyes, these discredit the organization's claim that it restores sight for $25.

In conclusion, without further research we have no basis for the claim that trachoma surgeries can prevent 400, or even 40, cases of blindness for $40,000. We simply don't know. I wish we did; I want to help those people in the video.

I think one thing that is happening is that we are too eager to believe any figures we find if they support an opinion we already hold. That severely worsens the already existing problem of the optimizer's curse.


I also want to add that preventing 400 cases of blindness for $40,000 (i.e. one case for $100) sounds to me much more effective than GiveWell's top charities. GiveWell seems to agree; see these quotations from this page:

Based on very rough guesses at major inputs, we estimate that cataract programs may cost $112-$1,250 per severe visual impairment reversed [...] Based on prior experience with cost-effectiveness analyses, we expect our estimate of cost per severe visual impairment reversed to increase with further evaluation. [...] Our rough estimate of the cost-effectiveness of cataract surgery suggests that it may be competitive with our priority programs; however, we retain a high degree of uncertainty.

We tell the trachoma example and then advertise GiveWell, even though GiveWell's top and standout charities are not even related to blindness and no one in EA ever talks about blindness. So people probably assume that GiveWell's recommended charities are much more effective than a surgery that cures blindness for $100, but they are not.

Because GiveWell's estimates for cataract surgeries are based on guesses, I think we shouldn't use those figures in introductory EA talks either. We can state the disclaimers, but a person who hears the example might skip them when retelling the thought experiment (out of a desire to sound more convincing). And then the same thing will happen again.

These are good points, and I'm suitably chastened for not being sufficiently thorough in checking Toby Ord's claims.
I'm pleased to see that GiveWell is again investigating treating blindness: http://blog.givewell.org/2017/05/11/update-on-our-views-on-cataract-surgery/. In this very recent post, they say: "We believe there is evidence that cataract surgeries substantially improve vision. Very roughly, we estimate that the cost-effectiveness of cataract surgery is ~$1,000 per severe visual impairment reversed.[1]"
The footnote reads: "This estimate is on the higher end of the range we calculated, because it assumes additional costs due to demand generation activities, or identifying patients who would not otherwise have known about surgery. We use this figure because we expect that GiveWell is more likely to recommend an organization that can demonstrate, through its demand generation activities, that it is causing additional surgeries to happen. The $1,000 figure also reflects our sense that cost-effectiveness in general tends to worsen (become more expensive) as we spend more time building our model of any intervention. Finally, it is a round figure that communicates our uncertainty about this estimate overall." But it's reasonable to say that until they complete this investigation, which will be years rather than months, it may be better to avoid using the example of preventing or curing blindness. So the options seem to be either not using the example of blindness at all, or using this rough figure of $1,000, with suitable disclaimers. It still leads to 40 cases of severe visual impairment reversed vs. 1 case of providing a blind person with a guide dog.

The mention of the specific errors found in DCP2 estimates of deworming efficacy seems to be functioning here as guilt by association. I can't see any reason they should be extrapolated to all the other calculations in different chapters of a >1000-page document. The figure from DCP2 for trachoma treatment directly references the primary source, so it's highly unlikely to be vulnerable to any spreadsheet errors.

The table Toby cites and you reference here (Table 50.1 from DCP2) says "trichiasis surgery". This means surgical treatment for a late stage of trachoma. Trichiasis is not synonymous with trachoma; it is a late and severe complication of trachoma infection, by which stage the eyelashes are causing corneal friction. It doesn't 'sometimes' lead to blindness, though that is true of trachoma infections when the whole spectrum is considered: trichiasis frequently causes corneal damage leading to visual impairment and blindness. You are right to point out that not every person with trichiasis will develop blindness, and a "Number Needed to Treat" (NNT) is needed to correct the estimate from $20 per case of blindness prevented. However, we don't have good epidemiological data to say whether that number is 1, 2, 10 or more. Looking at the literature, it's likely to be closer to 2 than 10. The uncertainty factor encoded in Peter Singer's use of $100 per person would allow for a number needed to treat of 5.
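To make that scaling explicit, here is a tiny sketch (the ~$20 per successful surgery is the figure under discussion; the NNT values are hypothetical):

```python
# Illustrative only: how the number needed to treat (NNT) scales the
# ~$20-per-successful-surgery figure into a cost per case of blindness
# prevented. The NNT values below are hypothetical.
cost_per_successful_surgery = 20  # USD, the figure under discussion
for nnt in (1, 2, 5, 10):
    cost = cost_per_successful_surgery * nnt
    print(f"NNT = {nnt:>2}: ~${cost} per case of blindness prevented")
```

An NNT of 5 lands on the ~$100 per person that Singer's phrasing allows for.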

In this case the term "cure" is appropriate, as trichiasis is the condition being treated by surgery. At one point Toby's essay talks about curing blindness as well as curing trachoma. Strictly speaking, trichiasis surgery is tertiary prevention (treatment of a condition which has already caused damage, to prevent further damage), but the error is not so egregious as to elicit the scorn of the hypothetical doctor you quote below. (Source: I am a medical doctor specialising in infectious diseases. I think the WHO fact sheet you link to is oversimplifying matters when it states "blindness caused by trachoma is irreversible".)

[Edited to add DOI: I'm married to Toby Ord]

Thank you very much for writing this. Ironically, I did not do enough fact-checking before making public claims. Now I am not even sure I was right to say that everyone should frequently check facts in this manner, because it takes a lot of time and it's easy to make mistakes, especially when it's not our field of expertise.

Trichiasis surgery, then, does seem to be absurdly effective in preventing blindness and pain. I was puzzled why GiveWell hasn't looked into it more. Well, they explain it here: the same uncertainty about the "Number Needed to Treat".

I want to ask, if you don't mind:

  • When the literature says that the surgery costs ~$20-60 or $7.14, is that for both eyes?
  • Do you think it's fair to say that it costs, say, $100 to prevent trachoma-induced blindness? Or is there too much uncertainty to use such a number when introducing EA?

Thanks for responding!

I think it's laudable to investigate the basis for claims as you've done. It's fair to say that evidence appraisal and communication really is a specialist area in its own right, and outside our areas of expertise it's common to make errors in doing so. And while we all like evidence that confirms what we think, other biases may be at play. I think some people in effective altruism also put a high value on identifying and admitting mistakes, so we might also be quick to jump on a contrary assessment even if it has some errors of its own.

I think your broader point about communicating the areas and extent of uncertainty is important, but how we do that when communicating in different domains is not simple. For example, you can look at how NICE investigates the efficacy of clinical interventions. They have to distill thousands of pages of evidence into a decision, and even the 'summary' of that can be hundreds of pages long. At the front of that will be an 'executive summary' which can't possibly capture all the areas of uncertainty and imperfect evidence, but usually represents their best assessment, because ultimately they have to make concrete recommendations.

Another approach is that seen in the Cochrane systematic reviews. These take a very careful approach to criticising the methodology of all studies included in their analysis. A running joke, though, is that every Cochrane review reaches the same conclusion: "more evidence is needed". This is precise and careful, but often lacks any practical conclusion.

Re your 2 questions:

It's $7.14 for one eye (in 2001) with 77% success, according to this source: https://www.ncbi.nlm.nih.gov/pubmed/11471088
In Toby Ord's essay he uses this to derive the "less than $20 per person" figure (7.14 × 2 / 0.77 ≈ $18.5): https://www.givingwhatwecan.org/sites/givingwhatwecan.org/files/attachments/moral_imperative.pdf
So that's for both eyes (in 2001 dollars).

My main area of uncertainty on that figure is the number needed to treat. I've spoken to a colleague who is an ophthalmologist and has treated trichiasis in Ghana. Her response was "trachoma with trichiasis always causes blindness". But in the absence of solid epidemiology to back that up, I think it's wise to allow for the NNT being higher than 1. I would be comfortable with saying that for about $100 we can prevent trachoma-induced blindness, in order to contrast that with things we consider a reasonable buy in other contexts. (I haven't assessed any organisations, so I don't know whether there are orgs that do it for that little: they may, for instance, do surgeries on a wider range of conditions with varying DALYs gained per dollar spent.)
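Putting the two answers together, here is a sketch of the full chain from the 2001 per-eye cost to a rough per-case-of-blindness figure (the NNT of 5 is purely illustrative):

```python
# Sketch of the full chain, using the numbers discussed above.
cost_per_eye = 7.14     # USD per eye (2001)
success_rate = 0.77     # per-surgery success rate
nnt = 5                 # number needed to treat: hypothetical, for illustration

cost_per_person_cured = cost_per_eye * 2 / success_rate  # ~$18.5 ("less than $20")
cost_per_blindness_prevented = cost_per_person_cured * nnt
print(f"~${cost_per_person_cured:.2f} per person cured of trichiasis")
print(f"~${cost_per_blindness_prevented:.0f} per case of blindness prevented (NNT = {nnt})")
```

With an NNT of 5 this comes out near the ~$100 figure discussed above; a lower NNT would make it cheaper.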

It's pretty much like you said in this comment. I completely agree with you, and I'm putting it here because of how well I think you've driven home the point:

...I myself once mocked a co-worker for making an effort to recycle when the same effort could have so much more impact for people in Africa. That's wrong in any case, but I was probably also wrong in my reasoning because of the numbers.

Also, I'm afraid that some doctor will stand up during an EA presentation and say

You kids pretend to be visionaries, but in reality you don't have the slightest idea what you are talking about. Firstly, it's impossible to cure trachoma-induced blindness. Secondly [...] You should go back to playing in your sandboxes instead of preaching to adults about how to solve real-world problems.

Also, I'm afraid that the doctor might be partially right.

Also, my experience has persistently been that the guide dog vs. trachoma example is quite off-putting, in a "now this person who might have gotten into EA is going to avoid it" kind of way. So if we want more EAs, this example seems miserably inept at getting people into EA. I myself have stopped using the example in introductory EA talks altogether. I might be an outlier, though, and will start using it again if given a good argument that it works well, but I suspect I'm not the only one who has seen better results introducing EA by not bringing up this example at all. Now, with all the uncertainty around it, it would seem that both emotions and numbers argue against the EA community using this example in introductory talks? Save it for the in-depth discussions that happen after an intro instead?

I strongly agree with both of the comments you've written in this thread so far, but the last paragraph here seems especially important. Regarding this bit, though:

I might be a bit of an outlier

This factor may push in the opposite direction from what you'd think, given the context. Specifically, if people who might have gotten into EA in the past ended up avoiding it because they were exposed to this example, then you'd expect the example to look more popular than it would if everyone who once stood a reasonable chance of becoming an EA (or even a hardcore EA) had stuck around to give you their opinion on whether you should use it. So, keep doing what you're doing! I like your approach.

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and organized a workshop where we covered all the philosophical objections to Effective Altruism. All the objections were fairly straightforward to address except for one, which, in addressing it, seemed to upend how many participants viewed EA, given the image of EA they had up to that point. That objection is: Effective Altruism is not that effective.

There is a lot to be said for this objection, and I highly recommend that anyone who calls themselves an EA read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one. If we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of the wormwars, I also recommend taking a look at this, which is a good-faith attempt to track the entire "deworming isn't that effective" controversy.

Disclaimer: I donate to SCI and rank it near the top of my priorities, just below AMF currently. I even donate to less certain charities like ACE's recommendations. So I certainly don't mean to dissuade anyone from donating with this comment. Reasoning under uncertainty is a thing, and you can see these two recent posts if you want insight into how an EA might try to go about it effectively.

The take-home message, though, is the same as the three main points raised by the OP. If it had been made clear to us from the get-go what mechanisms determine how much impact an individual donation to an EA-recommended charity has, then this "EA is not E" objection would have been as innocuous as the rest. Instead, after addressing this concern and setting straight how things actually work (I still don't completely understand it; it's complicated), participants felt that their initial exposure to EA (such as through the guide dog example and other oversimplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") contained false advertising. The words "slight disillusionment" come to mind, given these were all dedicated EAs going into the workshop.

So yes, I bow down to the almighty points bestowed by OP:

  • many of us were overstating the point that money goes further in poor countries

  • many of us don’t do enough fact checking, especially before making public claims

  • many of us should communicate uncertainty better

By the way, the scope insensitivity link does not seem to work, I'm afraid. (Update: thanks for fixing!)


Overstatement seems to be selected for when 1) evaluators like GiveWell are deferred to rather than questioned, and 2) you want to market that faithful deference to others.

I agree with those concerns.

In addition, some people might perceive the "guide dogs vs. trachoma surgeries" example as ableist, or might think that EAs are suggesting that governments spend less on handicapped people and more on foreign aid. (This is a particularly significant issue in Germany, where there have been lots of protests by disability rights advocates against Singer, including more recently when he gave talks about EA.)

In fact, one of the top Google hits for "guide dog vs trachoma surgery" is this:

The philosopher says funding should go toward prevention instead of guide-dog training. Activists for the blind, of course, disagree.

For these reasons, I suggest not using the guide dog example at all anymore.

The above article also makes the following interesting point:

Many people are able to function in society at a much higher level than ever before because of service dogs and therapy dogs. You would think that’s a level of utility that would appeal to Singer, but he seems to have a blind spot of his own in that respect.

This suggests that both guide dogs and trachoma surgeries cause significant flow-through effects. All of these points combined might decrease the effectiveness difference from 1000x to something around 5x-50x (see also Why Charities Don't Differ Astronomically in Cost-Effectiveness).

I don't understand the objection that it is "ableist" to say funding should go towards preventing people from becoming blind rather than training guide dogs.

If "ableism" is really supposed to be like racism or sexism, then we should not regard it as better to be able to see than to have the disability of not being able to see. But if people who cannot see are no worse off than people who can see, why should we even provide guide dogs for them? On the other hand, if -- more sensibly -- disability activists think that people who are unable to see are at a disadvantage and need our help, wouldn't they agree that it is better to prevent many people -- say, 400 -- experiencing this disadvantage than to help one person cope a little better with the disadvantage? Especially if the 400 are living in a developing country and have far less social support than the one person who lives in a developed country?

Can someone explain to me what is wrong with this argument? If not, I plan to keep using the example.

Here's what I have usually found most unfortunate about the comparison, though I don't mean to compete with anyone who thinks that the math, or anything else, is more unfortunate.

  1. The decision to sacrifice the well-being of one person for that of others (even many others) should be hard. If we want to be trusted (and the whole point of GiveWell is that people don’t have the time to double-check all research no matter how accessible it is – plus, even just following a link to GiveWell after watching a TED talk requires that someone trusts us with their time), we need to signal clearly that we don’t make such decisions lightly. It is honest signaling too, since the whole point of EA is to put a whole lot more effort into the decision than usual. Many people I talk to are so “conscientious” about such decisions that they shy away from them completely (implicitly making very bad decisions). It’s probably impossible to show just how much effort and diligence has gone into such a difficult decision in a short talk, so I’d rather focus on cases where I am, or each listener is, the one at whose detriment we make the prioritization decision, just like in the Child in the Pond case. Few people would no-platform me because they think it’s evil of me to ruin my own suit.
  2. Sacrificing oneself, or rather some trivial luxury of oneself, also avoids the common objection why a discriminated against minority should have to pay when there are [insert all the commonly cited bad things like tax cuts for the most wealthy, military spending, inefficient health system, etc.]. It streamlines the communication a lot more.
  3. The group at whose detriment we decide should never be a known, discriminated-against minority in such examples, because these people are used to being discriminated against, and their allies are used to seeing them discriminated against, so when someone seems to be saying that they shouldn’t receive some form of assistance, they have a huge prior for assuming that it’s just another discriminatory attack. I think their heuristic more or less fails in this case, but that is not to say that it’s not a very valid heuristic. I’ve been abroad in a country where pedestrian crosswalks are generally ignored by car drivers. I’m not going to just blindly walk onto the street there, even if the driver of the only car coming toward me is actually one who would’ve stopped for me if I did. My heuristic fails in that case, but it generally keeps me safe.
  4. Discriminated-against minority groups are super few, especially ones the audience will be aware of. Some people may be able to come up with a dozen or so, some with several dozen. But in my actual prioritization decisions for the Your Siblings charity, I had to decide between groups with such fuzzy reference classes that there must be basically arbitrarily many such groups. Street children vs. people at risk of malaria vs. farmed animals? Or street children in Kampala vs. people at risk of malaria in the southern DRC vs. chickens farmed for eggs in Spain? Or street children of the lost generation in the suburbs of Kampala who were abducted for child sacrifice but freed by the police and delivered to the orphanage we’re cooperating with vs. …. You get the idea. If we’re unbiased, then what are the odds that we’ll draw a discriminated-against group from the countless potential examples in this urn? This should heavily update a listener toward thinking that there’s some bias against the minority group at work here. Surely, the real explanation is something about salience in our minds or ease of communication and not about discrimination, but they’d have to know us very well to have that much trust in our intentions.
  5. People with disabilities probably exhibit the distance “bias” at the same rate as anyone else, so they’ll perceive the blind person with the guide dog as in-group, the blind people suffering from cataracts in developing countries as a completely neutral foreign group, and us as attacking them, making us the out-group. Such controversy is completely avoidable and highly dangerous, as Owen Cotton-Barratt describes in more detail in his paper on movement growth. Controversy breeds an opposition (and one that is not willing to engage in moral trade with us) that destroys option value, particularly by depriving us of the highly promising option to draw on the democratic process to push for the most uncontroversial implications of effective altruism that we can find. Scott Alexander has written about it under the title “The Toxoplasma of Rage.” I don’t think publicity is worth sacrificing the political power of EA for, but that is just a great simplification of Owen Cotton-Barratt’s differentiated points on the topic.
  6. Communication is by necessity cooperative. If we say something, however true it may be, and important members of the audience understand it as something false or something else entirely (that may not have propositional nature), then we failed to communicate. When this happens, we can’t just stamp our collective foot on the ground and be like, “But it’s true! Look at the numbers!” or “It’s your fault you didn’t understand me because you don’t know where I’m coming from!” That’s not the point of communication. We need to adapt our messaging or make sure that people at least don’t misunderstand us in dangerous ways.

(I feel like you may disagree on some of these points for similar reasons that The Point of View of the Universe seemed to me to argue for a non-naturalist type of moral realism while I “only” try to assume some form of non-cognitivist moral antirealism, maybe emotivism, which seems more parsimonious to me. Maybe you feel like or have good reasons to think that there is a true language (albeit in a non-naturalist sense) so that it makes sense to say “Yes, you misunderstood me, but what I said is true, because …,” while I’m unsure. I might say, “Yes, you misunderstood me, but what I meant was something you’d probably agree with. Let me try again.”)

Blind people are not a discriminated-against group, at least not in the first world. The extreme poor, on the other hand, often face severe discrimination: they are mistreated and have their rights violated by those with power, especially if they are Indians of low caste.

Comparative intervention effectiveness is a pillar of EA, distinct from personal sacrifice, so the two are not interchangeable. I reject the idea that there is some sort of prejudice in choosing to help one group over another, whether the groups are defined by physical condition, location, or anything else. One always has to choose; no one can help every group. Taking the example of preventing blindness vs. assisting the blind: the former is clearly the wildly superior intervention for blindness, so it is absurd to call it prejudiced against the blind.

Thanks! In response to which point is that? I think points 5 and 6 should answer your objection, but tell me if they don't. Truth is not at issue here (if we ignore the parenthetical at the very end, which isn't meant to be part of my argument). I'd even say that Peter Singer deals in concepts of unusual importance and predictive power. But I think it's important to make sure that we're not being misunderstood in dangerous ways by valuable potential allies.

The objection that it's ableist to promote funding for trachoma surgeries rather than guide dogs doesn't have to do with how many QALYs we'd save by providing someone with a guide dog or a trachoma surgery. Roughly, this objection is about how much respect we're showing to disabled people. I'm not sure how many of the people who have said that this example is ableist are utilitarians, but we can actually make a good case that using the example causes negative consequences precisely because it's ableist. (It's also possible that using the example as it's typically used causes negative consequences by affecting how intellectually rigorous EA is, but that's another topic.) A few points that might be used to support this argument:

  • On average, people get a lot of value out of having self-esteem; often, having more self-esteem on the margins enables them to do value-producing things they wouldn't have done otherwise (flow-through effects!). Sometimes, it just makes them a bit happier (probably a much smaller effect in utilitarian terms).
  • Roughly, raising or lowering the group-wise esteem of a group has an effect on the self-esteem of some of the group's members.
  • Keeping from lowering a group's esteem isn't very costly, if doing so involves nothing more than using a different tone. (There are of course situations where making a certain claim will raise or lower a group's esteem a large amount if a certain tone is used, and a lesser amount if a different tone is used, even though the group's esteem is nevertheless changed in the same direction in either case).
  • Decreases in a group's ability to do value-producing things or be happy, caused by their esteem having been lowered by someone acting in an ableist manner, do not cause others to experience a similarly sized boost in their ability to be happy or do value-producing things. (I.e., the truth value of claims that "status games are zero-sum" has little effect on the extent to which it's true that decreasing a group's esteem by e.g. ableist remarks has negative utilitarian consequences.)

I've generally found it hard to make this sort of observation publicly in EA-inhabited spaces, since I typically get interpreted as primarily trying to say something political, rather than primarily trying to point out that certain actions have certain consequences. It's legitimately hard to figure out what the ideal utilitarian combination of tone and example would be for this case, but it's possible to iterate towards better combinations of the two as you have time to try different things according to your own best judgement, or just ask a critic what the most hurtful parts of an example are.

Peter, even if a trachoma operation cost the same as training a guide dog, and didn't always prevent blindness, it would still be an excellent cost comparison because vision correction is vastly superior to having a dog.

And moreover, it doesn't just improve vision; it removes a source of intense pain.

If I try to steelman the argument, it comes out something like:

Some people, when they hear about the guide dog vs. trachoma surgery contrast, will take the point to be that ameliorating a disability is intrinsically less valuable than preventing or curing an impairment. (In other words, that helping people live fulfilling lives while blind is necessarily a less worthy cause than "fixing" them.) Since this is not in fact the intended point, a comparison of more directly comparable interventions would be preferable, if available.

Why is the choice not directly comparable? If it were possible to offer a blind person a choice between being able to see, or having a guide dog, would it be so difficult for the blind person to choose?

Still, if you can suggest better comparisons that make the same point, I'll be happy to use them.

Hi Peter,

Some examples that might be useful:

1) Differences in income

A US college graduate earns about 100x more than GiveDirectly recipients, suggesting money can go far further with GiveDirectly. (100x further if utility ~log-income.) https://80000hours.org/career-guide/anyone-make-a-difference/

2) The cost to save a life

GiveWell now says $7,500 for a death prevented by malaria nets (plus many other benefits). Rich-country governments, however, are often willing to pay over $1m to save the life of one of their citizens, a factor of 130+ difference. https://80000hours.org/career-guide/world-problems/#global-health-a-problem-where-you-could-really-make-progress

3) Cost per QALY

It still seems possible to save QALYs for a few hundred dollars in the developing world, whereas the UK's NHS is willing to fund most things that save a QALY for under £20,000, and some that cost over £30,000, which is again a factor of roughly 100 difference.

So I still think a factor of 100x difference is defensible, though if you also take into account Brian's point below, it might be reduced to, say, a factor of 30; that's basically just a guess, and it could go the other way too. More on this: http://reflectivedisequilibrium.blogspot.com/2014/01/what-portion-of-boost-to-global-gdp.html
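To put the three comparisons on a common footing, here is a rough tally of the multipliers (my arithmetic on the figures above; currencies are mixed loosely and "a few hundred dollars per QALY" is taken as ~$200, so treat the outputs as order-of-magnitude only):

```python
# Rough multipliers behind the three comparisons above (order-of-magnitude only).
income_ratio = 100               # US graduate vs GiveDirectly recipient income;
                                 # with utility ~ log(income), marginal utility is
                                 # ~ 1/income, so a dollar goes ~100x further.
life_ratio = 1_000_000 / 7_500   # willingness to pay per life saved: ~133x
qaly_ratio = 20_000 / 200        # NHS threshold vs ~$200 per QALY: ~100x

print(f"Income: ~{income_ratio}x, life saved: ~{life_ratio:.0f}x, QALY: ~{qaly_ratio:.0f}x")
```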

I like these examples but they do have some limitations.

I'm still searching for some better examples that are empirically robust as well as intuitively powerful.

(I'm looking for the strongest references to support my claim here that "there is a strong case that most donations go to charities that improve well-being far less per dollar than others". Of course, I'm willing to admit there's some possibility that we don't have strong evidence for this.)

1) Differences in income: This will not be terribly convincing to anyone who doesn't already accept the idea of vastly diminishing marginal utility, and there is the standard (inadequate but hard to rebut quickly) objection that "things are much cheaper in developing countries".

2) The cost to save a life: Yes, rich-country governments factor this into their calculations, but is this indeed the calculation that is relevant when considering "typical charities operating in rich countries"? It also does not identify a particular intervention that is "much less efficient".

3) Cost per QALY / UK NHS: Similar limitations as in case 2.

What is the strongest statistic or comparison for making this point? Perhaps Sanjay Joshi of SoGive has some suggestions?

Perhaps a comparison could be based on the tables near the end of Jamison, D. T., et al., eds. (2006), Disease Control Priorities in Developing Countries. 2006 was a long time ago, however.

significant flow-through effects.

And those flow-through effects may often be bigger per person when helping people in richer countries, since people in richer countries tend to have more impact on economic growth, technological and memetic developments, etc. than those in poorer countries. (Of course, whether these flow-through effects are net good or net bad is not clear.)

On the ableism point, my best guess is that the right response is to figure out the substance of the criticism. If we disagree, we should admit that openly, and forgo the support of people who do not in fact agree with us. If we agree, then we should account for the criticism and adjust both our beliefs and statements. Directly optimizing on avoiding adverse perceptions seems like it would lead to a distorted picture of what we are about.

The article Vollmer cites says:

Singer’s idea about the relative value of guide dogs sets up a false dichotomy, assuming that you can fund guide dogs or fund medical prevention. In fact, you can do both.

In this case, that seems to be the substance of the criticism. You can't anticipate every counter-argument one could make when talking to bigger audiences, but this one is pretty common. It might be necessary to say:

if I have to decide where to donate my $100...

I'm not sure it would help; it could be that such arguments trigger bad emotions for other reasons, and the counter-arguments we hear are just rationalizations of those emotions. It does feel like a minefield.

Therefore, when comparing any two charities while introducing someone (especially an audience) to EA, we must phrase the comparison carefully and sensitively. By the way, I think there is something to learn from the way Singer phrased it in the TED talk:

Take, for example, providing a guide dog for a blind person. That's a good thing to do, right? Well, right, it is a good thing to do, but you have to think what else you could do with the resources. It costs about 40,000 dollars...

Thanks for researching and writing this up! We've been discussing the topic a lot at CEA/Giving What We Can over the last few days. I think this points to the importance of flagging publication dates (as GiveWell does, indicating that the research on a certain page was current as of a given date but isn't necessarily accurate anymore). Fact-checking, updating, or simply flagging information as older and possibly inaccurate was on our to-do list for materials on the Giving What We Can site, which go back as much as 10 years and sometimes no longer represent our best understanding. I now think it needs to be a higher priority than I did.

For individuals rather than organizations, I'm unsure about the best way to handle things like this, which will surely come up again. If someone publishes a paper or blog post, how often are they obliged to update it with corrected figures? I'm thinking of a popular post that used PSI's figure of around $800 to save a child's life. In 2010, when it was written, that seemed like a reasonable estimate, but it doesn't now. Is the author responsible for updating the figure everywhere the post was published and re-published? (That's a strong disincentive to ever write anything that includes a cost-effectiveness estimate, since they're always changing.) Does everyone who quoted or referred to it need to go back each year and include a new estimate? My guess is that it's good practice, particularly when we notice people creating new material that cites old figures, to give them a friendly note with a link to newer sources, with the understanding that this stuff is genuinely confusing and hard to stay on top of.

It's obviously impossible to make everyone update figures all the time, and if there is an old publication date, everyone probably understands that the content could be outdated. I just think that the date should always be featured prominently (e.g., on this page it could be better). Flagging pages the way GiveWell does is a great idea. But featured pages that have no date should probably be checked or updated quite often. I mean pages like "top charities", "what we can achieve" and "myths about aid" in GWWC's case.

This feels like nitpicking that gives the impression of undermining Singer's original claim when in reality the figures support it. I have no reason to believe Singer was claiming that, of all possible charitable donations, trachoma surgery is the most effective; he was merely giving the most stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both are about blindness, so there are no hard comparisons across kinds of suffering or disability).

I agree that within the EA community, and when presenting EA analyses of cost-effectiveness, it is important to be upfront about the full complexity of the figures. However, Singer's purpose at TED isn't to carefully pick the most cost-effective donations but to force people to confront the fact that cost-effectiveness matters. While those of us already in EA might find a statement like "we prevent 1 year of blindness for every 3 surgeries done, which on average cost..." perfectly compelling, audience members who aren't yet persuaded simply tune out. After all, it's just more math talk, and they are interested in emotional impact. The only way to convince them is to ignore getting the numbers perfectly right and focus on the emotional impact of choosing to help a blind person in the US get a dog rather than many people in poor countries avoid blindness.

Now, it's important that we don't simplify in misleading ways, but even with the qualifications here it is obvious that it still costs orders of magnitude more to train a dog than to prevent blindness via this surgery. Moreover, once one factors in considerations like pain, the imperfect replacement for eyes provided by a dog, etc., the original numbers are probably too favorable to dog training as far as relative cost-effectiveness goes.

This isn't to say that your point here isn't important with regard to people inside EA making estimates, or GiveWell analyses, or the like. I'm just pointing out that it's important to distinguish the kind of thing being done at a TED talk like this from what GiveWell does. So long as people who leave the TED talk and do their research find the big picture intact (a guide dog costs vastly more than a trachoma surgery), it's a victory.

I think there is truth in what you said. But I also have disagreements:

"The only way to convince them is to ignore getting the numbers perfectly right and focus on the emotional impact"

That's a dangerous line of reasoning. If we can't make a point with honest numbers, we shouldn't make the point at all. We might fail to notice when we are wrong if we use bogus numbers to prove whatever opinion we already hold.

What is more, many people who become EAs after hearing such TED talks already think in numbers. They go on believing the same numbers afterwards and are more likely to dismiss other cause areas because of it. I myself once mocked a co-worker for making an effort to recycle when the same effort could have so much more impact for people in Africa. That's wrong in any case, but I was probably also wrong in my reasoning because of the numbers.

Also, I'm afraid that some doctor will stand up during an EA presentation and say

You kids pretend to be visionaries, but in reality you don't have the slightest idea what you are talking about. Firstly, it's impossible to cure trachoma-induced blindness. Secondly [...] You should go back to playing in your sandboxes instead of preaching to adults about how to solve real-world problems.

Also, I'm afraid that the doctor might be partially right.

If we're ignoring getting the numbers right and instead focusing on the emotional impact, we have no claim to the term "effective". This sort of reasoning is why epistemics around do-gooding are so bad in the first place.

I hate to admit it, but I think there does exist a utilitarian trade-off between marketability and accuracy. Although I'm thrilled that the EA movement prides itself on being as factually accurate as possible and I believe the core EA movement absolutely needs to stick with that, there is a case to be made that an exaggerated truth may be an important teaching tool in helping non-EAs understand why EAs do what they do.

It seems likely that Peter Singer's example has had a net-positive impact, despite the inaccuracies. Even I was originally drawn to EA by this example, among a few of his others. I've since been donating at least 10% and been active in EA projects. I'm sure I'm not the only one.

We just have to be careful that the integrity of the EA movement isn't compromised by inaccurate examples like this. But I think anyone who goes far enough with EA to learn that this example is inaccurate, or who even cares to find out, will most likely already have converted to an EA mindset, which is Mr. Singer's end goal.