Bernadette_Young

481 karma · Joined Aug 2014 · 108 comments

That's still a very important point that doesn't seem to have been made in the analysis here: the demographic questions were not included in the questions put to all respondents. There are good reasons to think that people taking the "full" and "donations only" surveys will differ systematically (e.g. those taking the full survey are more likely to have been involved with EA for longer). If the non-responses are not random, that's an important caveat on all these findings and it very much limits any comparisons that can be done over time. I can't see it discussed in the post?

Thanks for responding!

I think it's laudable to investigate the basis for claims as you've done. It's fair to say that evidence appraisal and communication really is a specialist area in its own right, and outside our areas of expertise it's common to make errors in doing so. And while we all like evidence that confirms what we think, other biases may be at play. I think some people in effective altruism also put a high value on identifying and admitting mistakes, so we might also be quick to jump on a contrary assessment even if it has some errors of its own.

I think your broader point about communicating the areas and extent of uncertainty is important, but the solution to how we do that when communicating in different domains is not simple. For example, you can look at how NICE investigates the efficacy of clinical interventions. They have to distill thousands of pages of evidence into a decision, and even the 'summary' of that can be hundreds of pages long. At the front of that will be an 'executive summary', which can't possibly capture all the areas of uncertainty and imperfect evidence, but usually represents their best assessment, because ultimately they have to make concrete recommendations.

Another approach is that seen in the Cochrane Systematic Reviews. These take a very careful approach to criticising the methodology of all studies included in their analysis. A running joke, though, is that every Cochrane review reaches the same conclusion: "More Evidence is Needed". This is precise and careful, but often lacks any practical conclusion.

Re your 2 questions:

It's $7.14 for one eye (in 2001), with 77% success, according to this source: https://www.ncbi.nlm.nih.gov/pubmed/11471088

In Toby Ord's essay (https://www.givingwhatwecan.org/sites/givingwhatwecan.org/files/attachments/moral_imperative.pdf) he uses this to derive the "less than $20 per person" figure: $7.14 × 2 / 0.77 ≈ $18.5. So that's both eyes (in 2001 terms).
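To make the arithmetic explicit, here is a minimal sketch of that calculation (the figures come from the sources above; the variable names are my own):

```python
# Cost per person cured of trichiasis, following the derivation in Toby Ord's essay:
# $7.14 per eye (2001 prices), both eyes treated, 77% surgical success rate.
cost_per_eye = 7.14    # USD, 2001
success_rate = 0.77    # proportion of surgeries that succeed

cost_per_person_cured = cost_per_eye * 2 / success_rate
print(f"~${cost_per_person_cured:.2f} per person cured")  # ~$18.55, i.e. "less than $20"
```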

My main area of uncertainty on that figure is around the number needed to treat (NNT). I've spoken to a colleague who is an ophthalmologist and has treated trichiasis in Ghana. Her response was "trachoma with trichiasis always causes blindness". But in the absence of solid epidemiology to back that up, I think it's wise to allow for the NNT being higher than 1. I would be comfortable with saying that for about $100 we can prevent trachoma-induced blindness, in order to contrast that with things we consider a reasonable buy in other contexts. (I haven't assessed any organisations, so I don't know whether there are orgs who do it for that little: they may, for instance, do surgeries on a wider range of conditions with varying DALYs gained per dollar spent.)

The mention of the specific errors found in the DCP2 estimates of de-worming efficacy seems to be functioning here as guilt by association. I can't see any reason they should be extrapolated to all the other calculations in different chapters of a >1000-page document. The figure from DCP2 for trachoma treatment directly references the primary source, so it's highly unlikely to be vulnerable to any spreadsheet errors.

The table Toby cites and you reference here (Table 50.1 from DCP2) says "trichiasis surgery". This means surgical treatment for a late stage of trachoma. Trichiasis is not synonymous with trachoma, but is a late and severe complication of trachoma infection, by which stage the eyelashes are causing corneal friction. It doesn't 'sometimes' lead to blindness, though that is true of trachoma infections when the whole spectrum is considered. Trichiasis frequently causes corneal damage leading to visual impairment and blindness. You are right to point out that not every person with trichiasis will develop blindness, and a "number needed to treat" is needed to correct the estimate from $20 per case of blindness prevented. However, we don't have good epidemiological data to say whether that number is 1, 2, 10 or more. Looking at the literature, it's likely to be closer to 2 than 10. The uncertainty factor encoded in Peter Singer's use of $100 per person would allow for a number needed to treat of 5.
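To make the role of the NNT concrete, here is a small illustration (my own sketch, not from the essay) of how the cost per case of blindness prevented scales with the assumed NNT:

```python
# Illustrative scaling of cost per case of blindness prevented with the assumed
# number needed to treat (NNT), starting from ~$18.5 per person cured (2001 prices).
cost_per_person_cured = 18.5   # USD, from the calculation above

for nnt in (1, 2, 5, 10):
    cost_per_case_prevented = cost_per_person_cured * nnt
    print(f"NNT = {nnt:2d}: ~${cost_per_case_prevented:.0f} per case of blindness prevented")

# An NNT of 5 gives roughly $92, still within the ~$100 per person figure discussed above.
```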

In this case the term "cure" is appropriate, as trichiasis is the condition being treated by surgery. At one point Toby's essay talks about curing blindness as well as curing trachoma. Strictly speaking, trichiasis surgery is tertiary prevention (treatment of a condition which has already caused damage, in order to prevent further damage), but the error is not so egregious as to elicit the scorn of the hypothetical doctor you quote below. (Source: I am a medical doctor specialising in infectious diseases. I think the WHO fact sheet you link to is over-simplifying matters when it states "blindness caused by trachoma is irreversible".)

[Edited to add DOI: I'm married to Toby Ord]

I'm pleased to see the update on GWWC recommendations; it was perturbing to have such different messages being communicated in different channels.

However, I'm really disappointed to hear the Giving What We Can trust will disappear - not least because it means I no longer have a means of leaving a legacy to effective charities in my will (which I'll now need to change). Previously the GWWC trust meant I could leave money, hedging against changes in the landscape of what's effective, with an org whose philosophy I agree with and whose decisions I had a good track record of trusting. EA Funds requires that I either specify organisations (which I can do myself in a will, but they might not be the best picks at the relevant time), or trust a single individual in whom I don't have the same confidence. Also, if a legacy is likely to be a substantial amount of money, I am more risk-averse about where it goes.

Ethics approval would probably depend on not collecting identifying data like names, so it would be important to build that into your design. College name would work, but pseudo-randomising by leafleting some colleges would introduce significant confounding, because colleges frequently differ in their make-up and culture.

Thanks Georgie - I see where we were misunderstanding each other! That's great - research like this is quite hard to get right, and I think it's an excellent plan to have people with experience and knowledge about the design and execution as well as analysis involved. (My background is medical research as well as clinical medicine, and a depressing amount of research - including randomised clinical trials - is never able to answer the important question because of fundamental design choices. Unfortunately knowing this fact isn't enough to avoid the pitfalls. It's great that EA is interested in data, but it's vital we generate and analyse good data well.)

Unless you have a specific hypothesis that you are testing, I think the survey is the wrong methodology to answer this question. If you actually want to explore the reasons why (and expect there will not be a single answer) then you need qualitative research.

If you do pursue questions on this topic in a survey format, it is likely you will get misleading answers unless you have the resources to very rigorously test and refine your question methodology. Since you will essentially be asking people whether they are not doing something they have said is good to do, there will be all sorts of biases at play, and it will be very difficult to write questions that function the way you expect them to. To the best of my knowledge, question testing didn't happen at all with the first survey; I don't know if any happened with the second.

I appreciate that the survey uses a vast amount of people's resources, and is done for good reasons. I hate sounding like a doom-monger, but there are pitfalls here and significant limitations on surveys as a research method. I think the EA community risks falling into a trap on this topic, thinking dubious data is better than none, when actually false data can literally cost lives. As previously, I would strongly suggest getting professional involvement.

The median EA donation ($330) was pretty low. There could be various reasons for this, but we can only really pin down an explanation when .impact conduct the next EA Survey.

According to the reports, the first survey, in 2014 (i.e. reported in 2015), found a median donation of $450 in 2013, with 766 people reporting their donations.

The next survey, in 2015 (i.e. reported in 2016), found a median donation of $330 in 2014, with 1341 people reporting their donations.

Repeating the survey gathered more data and actually produced a lower estimate. I'm interested in how the third survey will help us understand this better.

I didn't down vote it, but I suspect others who did were - like me - frustrated by the accusation of not engaging with you on the substantive points that are summarised in Jeff's post. This post followed a discussion with literally hundreds of comments and dozens of people in this community discussing them with you.

I could explain why I think the term astroturfing does apply to your actions, even though they were not exactly the same as Holden's activities, but the pattern of discussion I've experienced and witnessed with you gives me very low credence that the discussion will lead to any change in our relative positions.

I hope the break is good for your health and wish you well.
