Comment author: Cornelius  (EA Profile) 19 May 2017 08:09:36PM *  3 points [-]

We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals.

Perhaps I am not understanding, but isn't it possible to simplify your model by homing in on one particular thing GFI is doing and pretending that a donation goes towards only that? Oxfam's impact is notoriously difficult to model (too big, too many counterfactuals), but as soon as you look only at their disaster management programs (where they've done RCTs to demonstrate effectiveness), suddenly we have far better assurance of cost-effectiveness. This approach wouldn't grant a cost-effectiveness figure for all of GFI, but it would for at least one of their initiatives. Doing this should also drastically simplify your counterfactuals.

I've read the full report on GFI by ACE. Both it and this post suggest to me that a broad capture-everything approach is being undertaken by both ACE and OPP. I don't understand. Why do I not see a systematic list of all of GFI's projects and activities, either on ACE's website or here, followed by an incremental, systematic review of each one in isolation? I realize I likely sound like an obnoxious physicist encountering a new subject, so do note that I am just confused. This is far from my area of expertise.

However, this approach is a bit silly because it does not model the acceleration of research: If there are no other donors in the field, then our donation is futile because £10,000 will not fund the entire effort required.

Could you explain this more clearly to me please? With some stats as an example it'll likely be much clearer. Looking at the development of the Impossible Burger seems a fair phenomenon to base GFI's model on, at least for now, and at least insofar as it is being used to model a GFI donation's counterfactual impact in supporting similar products GFI is trying to push to market. I don't understand why the approach is silly just because $10,000 wouldn't fund the entire effort, or how this is tied to the acceleration of research.

Regarding acceleration dynamics, then, isn't it best to just model based on the most pessimistic, conservative curve? It makes sense to me that this would be the diminishing-returns one. This also fits with what I know about clean meat. If we eventually do need to simulate all elements of meat (we might as well assume we do, for the sake of being conservative), we'll have to go beyond the scaffolding and growth-medium problems and also include an artificial blood circulation system for the meat being grown. No such system yet exists, and it seems reasonable to suspect that the more precisely we want to simulate meat, the faster the scientific problems multiply. So a diminishing-returns curve is what I'd expect for GFI's impact, at least insofar as its work on clean meat is concerned.
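To make concrete what I mean by modelling on a conservative diminishing-returns curve, here is a toy sketch. The functional form, the scale parameter, and the funding figures are entirely my own assumptions for illustration; they are not anything GFI or OPP has published:

```python
import math

# Toy sketch (my assumptions, not GFI's or OPP's actual model): suppose
# research progress grows with cumulative funding along a diminishing-returns
# curve, progress(F) = cap * (1 - exp(-F / scale)). A small donation then buys
# the *marginal* progress, even though it cannot fund the whole effort alone.
def progress(funding, cap=1.0, scale=50_000_000):
    """Fraction of the research problem solved at a given cumulative funding level."""
    return cap * (1 - math.exp(-funding / scale))

existing_funding = 20_000_000   # hypothetical funding already in the field
donation = 10_000               # the marginal donation being modelled

marginal_impact = progress(existing_funding + donation) - progress(existing_funding)
print(f"Marginal progress bought by the donation: {marginal_impact:.6f}")
# Non-zero, but it shrinks as existing_funding grows -- the pessimistic,
# conservative behaviour described above.
```

The point is just that, under a curve like this, a $10,000 donation has non-zero marginal impact even though it funds only a sliver of the total effort, and that impact shrinks as the field becomes better funded.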

Comment author: saulius  (EA Profile) 13 May 2017 02:01:41PM *  4 points [-]

EDIT: this comment contains some mistakes

To begin with, I want to say that my goal is not to put blame on anyone but to change how we speak and act in the future.

His figure for the cost of preventing blindness by treating trachoma comes from Joseph Cook et al., “Loss of vision and hearing,” in Dean Jamison et al., eds., Disease Control Priorities in Developing Countries, 2d ed. (Oxford: Oxford University Press, 2006), 954. The figure Cook et al. give is $7.14 per surgery, with a 77 percent cure rate.

I am looking at this table from the cited source (Loss of Vision and Hearing, DCP2). It's a 77% cure rate for trachoma, a condition that sometimes develops into blindness, not a 77% cure rate for blindness. At least that's how I interpret it; I can't be sure, because the source cited for the figure in the DCP2 table doesn't even mention trachoma! From what I've read, recurrences sometimes happen, so a 77% cure rate for trachoma is much, much more plausible. I'm afraid Toby Ord made the mistake of implying that curing trachoma = preventing blindness.
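To illustrate why the distinction matters, here is a rough back-of-the-envelope sketch. The $7.14 and 77% figures are the ones quoted above; the progression-to-blindness fractions are made-up placeholders, not numbers from DCP2:

```python
cost_per_surgery = 7.14   # Cook et al., as quoted above
cure_rate = 0.77          # cure rate for trachoma, not for blindness

cost_per_trachoma_cure = cost_per_surgery / cure_rate
print(f"Cost per trachoma case cured: ${cost_per_trachoma_cure:.2f}")

# If only some fraction of untreated cases would actually have gone blind,
# the cost per case of blindness *prevented* is correspondingly higher.
for progression_to_blindness in (1.0, 0.2, 0.05):   # hypothetical fractions
    cost = cost_per_trachoma_cure / progression_to_blindness
    print(f"  assuming {progression_to_blindness:.0%} would have gone blind: "
          f"${cost:.2f} per case of blindness prevented")
```

Under the naive reading, where every cured case counts as a prevented case of blindness, you get roughly $9 per case; if only a small fraction of cases would actually have progressed to blindness, the real figure could be an order of magnitude or more higher.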

What is more, Toby Ord used the same DCP2 report that GiveWell used and GiveWell found major errors in it. To sum up very briefly:

Eventually, we were able to obtain the spreadsheet that was used to generate the $3.41/DALY estimate. That spreadsheet contains five separate errors that, when corrected, shift the estimated cost effectiveness of deworming from $3.41 to $326.43. [...] The estimates on deworming are the only DCP2 figures we’ve gotten enough information on to examine in-depth.

Regarding the Fred Hollows Foundation, please see GiveWell's page about them and this blog post. In my eyes, these discredit the organization's claim that it restores sight for $25.

In conclusion, without further research we have no basis for the claim that trachoma surgeries can prevent 400, or even 40, cases of blindness for $40,000. We simply don't know. I wish we did; I want to help those people in the video.

I think one thing that is happening is that we are too eager to believe any figures we find if they support an opinion we already hold. That severely worsens the already existing problem of the optimizer's curse.


I also want to add that preventing 400 blindness cases for $40,000 (i.e. one case for $100) sounds to me much more effective than GiveWell's top charities. GiveWell seems to agree; see these quotations from this page:

Based on very rough guesses at major inputs, we estimate that cataract programs may cost $112-$1,250 per severe visual impairment reversed [...] Based on prior experience with cost-effectiveness analyses, we expect our estimate of cost per severe visual impairment reversed to increase with further evaluation. [...] Our rough estimate of the cost-effectiveness of cataract surgery suggests that it may be competitive with our priority programs; however, we retain a high degree of uncertainty.
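For comparison, here is the arithmetic laid side by side. The figures are taken straight from the text above; this is not GiveWell's model, just the quoted numbers:

```python
claimed_cost_per_case = 40_000 / 400          # the "$100 per case" trachoma claim
givewell_low, givewell_high = 112, 1_250      # GiveWell's rough cataract range, quoted above

print(f"Claimed trachoma figure: ${claimed_cost_per_case:.0f} per case of blindness prevented")
print(f"GiveWell cataract range: ${givewell_low}-${givewell_high} per severe visual impairment reversed")
# If the $100 figure were real, it would beat even the low end of a range
# GiveWell already calls possibly competitive with its priority programs.
```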

We tell the trachoma example and then advertise GiveWell, whose top and standout charities are not even related to blindness, and no one in EA ever talks about blindness. So people probably assume that GiveWell's recommended charities are much more effective than a surgery that cures blindness for $100, but they are not.

Because GiveWell’s estimates for cataract surgeries are based on guesses, I think we shouldn’t use those figures in introductory EA talks as well. We can tell the disclaimers but the person who hears the example might skip them when retelling the thought experiment (out of desire to sound more convincing). And then the same will happen.

Comment author: Cornelius  (EA Profile) 13 May 2017 09:21:08PM *  1 point [-]

It's pretty much like you said in this comment. I completely agree with you, and I'm quoting it here because of how well I think you've driven the point home:

...I myself once mocked a co-worker for taking an effort to recycle when the same effort could do so much more impact for people in Africa. That's wrong in any case, but I was probably wrong in my reasoning too because of numbers.

Also, I'm afraid that some doctor will stand up during an EA presentation and say

You kids pretend to be visionaries, but in reality you don't have the slightest idea what you are talking about. Firstly, it's impossible to cure trachoma induced blindness. Secondly [...] You should go back to play in your sandboxes instead of preaching adults how to solve real world problems

Also, I'm afraid that the doctor might be partially right

Also, my experience has persistently been that the blindness vs trachoma example is quite off-putting, in a "now this person who might have gotten into EA is going to avoid it" kind of way. So if we want more EAs, this example seems miserably inept at getting people into EA. I myself have stopped using the example in introductory EA talks altogether. I might be an outlier, though, and will start using it again if given a good argument that it works well, but I suspect I'm not the only one who has seen better results introducing people to EA by not bringing up this example at all. Now, with all the uncertainty around it, it would seem that both emotions and numbers argue against the EA community using this example in introductory talks? Save it for the in-depth discussions that happen after an intro instead?

Comment author: Cornelius  (EA Profile) 11 May 2017 12:43:46AM *  9 points [-]

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and held a workshop where we covered all the philosophical objections to Effective Altruism. All the objections were fairly straightforward to address except one; addressing it seemed to upend how many participants viewed EA, given the image of EA they had held until then. That objection is: Effective Altruism is not that effective.

There is a lot to be said for this objection, and I highly recommend that anyone who calls themselves an EA read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one. If we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of the wormwars, I also recommend taking a look at this, which is a good-faith attempt to track the entire "deworming-isn't-that-effective" controversy.

Disclaimer: I donate to SCI and rank it near the top of my priorities, just below AMF currently. I even donate to less certain charities like ACE's recommendations. So I certainly don't mean to dissuade anyone from donating with this comment. Reasoning under uncertainty is a thing, and you can see these two recent posts if you want insight into how an EA might try to go about it effectively.

The take-home here, though, is the same as the three main points raised by the OP. If it had been made clear to us from the get-go what mechanisms are at play in determining how much impact an individual has with their donation to an EA-recommended charity, then this "EA is not E" objection would have been as innocuous as the rest. Instead, after addressing this concern and setting straight how things actually work (I still don't completely understand it; it's complicated), participants felt their initial exposure to EA (such as through the guide dog example and other over-simplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") contained false advertising. The words "slight disillusionment" come to mind, given that these were all dedicated EAs going into the workshop.

So yes, I bow down to the almighty points bestowed by the OP:

  • many of us were overstating the point that money goes further in poor countries

  • many of us don’t do enough fact checking, especially before making public claims

  • many of us should communicate uncertainty better

Btw, the "Scope insensitive" link does not seem to work, I'm afraid. (Update: Thanx for fixing!)

Comment author: Cornelius  (EA Profile) 10 May 2017 11:30:40PM 2 points [-]

Everyone is warm (±37°C, ideally), open-minded, reasonable and curious.

You, sir, will be thoroughly quoted and requoted on this gem, lol. I commend this heartfelt post.

In response to Why I left EA
Comment author: rviss 21 February 2017 01:09:21AM 12 points [-]

Lila, what will you do now? What questions or problems do you see in the path ahead? What good things will you miss by leaving the EA community?

For some reason, I've always felt a deep sense of empathy for people who do what you have done. It is very honest and generous of you to do it this way. I wish you only the very best in all you do.

(This is my first post on this forum. I am new to EA.)

In response to comment by rviss on Why I left EA
Comment author: Cornelius  (EA Profile) 10 May 2017 08:46:00PM *  0 points [-]

One thing I'm unclear on is:

Is s/he leaving the EA community and retaining the EA philosophy, rejecting the EA philosophy while staying in the EA community, or leaving both?

What EAs do and what EA is are two different things, after all. I'm going to guess leaving the EA community, given that, yes, most EAs are utilitarians and this seems to be foundational to Lila's reasons for leaving. However, the EA philosophy is not utilitarian per se, so you'd expect there to be many non-utilitarian EAs. I've commented on this before here. Many of us are not utilitarian: 44% of us, in fact, according to the 2015 survey. The linked survey results argue that this sample accurately estimates the actual EA population, and 44% is a lot of non-utilitarian EAs. I imagine many of them aren't as engaged in the EA community as the utilitarian EAs are, despite self-identifying as EAs.

If s/he is just leaving the community, then to me this is only disheartening insofar as s/he doesn't interact with the community from this point on. So I do hope Lila continues to be an EA outside of the EA community, where s/he can spread goodness in the world through her/his non-utilitarian prioritarian ethics (prioritizing victims of violence), using the EA philosophy as a guide.

The "movement isn't diverse enough" is a legitimate complaint and a sound reason to leave a movement if you don't feel like you fit in. So s/he might well do much better for the world elsewhere in some other movement that has a better personal fit. And as long as she stays in touch with EA then we can have some good 'ol moral trade for the benefit of all. This trade could conceivably be much more beneficial for EA and for Lila if s/he is no longer in the EA community.

Comment author: lucarade 09 August 2016 10:56:53PM 8 points [-]

Have you looked into why SC failed and whether there are parallels between its organizational structure and EA's? Although you've convincingly argued that the two movements differ significantly in many of the specifics, there might be useful insights into how to prevent failure modes in the more general sense of a movement seeking to improve altruism.

Comment author: Cornelius  (EA Profile) 07 May 2017 10:29:23AM 0 points [-]

The movement started around 1870 and still appears to have been active around 1894 (the latest handbook in the OP). WW1 was 1914-1918 and WW2 1939-1945. I'd like to know if it survived to 1945. If it did, that is its cut-off, since my guess is that it died very quickly after WW2, when eugenics rapidly spread through the world's collective consciousness as an unspeakable evil. I imagine the movement couldn't adapt quickly enough to the bad PR and silently faded or rebranded itself. For instance, the Charity Organization Society of Denver, Colorado, is the forerunner of the modern United Way of America.

So I imagine the lesson for EA is to beware the rapid and irreversible effects of having EA tied implicitly to something everyone everywhere has suddenly started to hate in the strongest possible terms. This is probably why it is a good idea for EA to stay out of politics. Once you associate a movement with something political, good luck disassociating yourself when some major bad stuff happens. Or maybe the lesson is just that EA should beware WW3. Who knows.

Comment author: Cornelius  (EA Profile) 06 May 2017 06:57:22PM *  0 points [-]

Update: Nir Eyal very much appears to self-identify as an effective altruist despite being a non-utilitarian. See his interview with Harvard EA here, specifically about non-utilitarian effective altruism, and this article on effective altruism from back in 2015. Wikipedia even mentions him as a "leader in Effective Altruism".

Comment author: weeatquince  (EA Profile) 30 March 2017 09:19:03AM 0 points [-]

This is a good paper and well done to the authors.

I think section 3 is very weak. I am not flagging this as a flaw in the argument, just as the area where I see the most room for improvement in the paper and/or the most need for follow-up research. The authors do say that more research is needed, which is good.

Some examples of what I mean by the argument being weak:

  • The paper says it is "reasonable to believe that AMF does very well on prioritarian, egalitarian, and sufficientarian criteria". "Reasonable to believe" is not a strong claim. No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about, and to evaluate charities on these metrics. This could be done but is not happening.

  • The paper says Iason "fail[s] to show that effective altruist recommendations actually do rely on utilitarianism", but the paper also fails to show that effective altruist recommendations actually do not rely on utilitarianism.

  • Etc.

Why I think more research is useful here: when the strongest case you can make for EA to people with equality as a moral intuition begins with "it is reasonable to believe . . .", it is very hard to make EA useful to such people. For example, when I meet people new to EA who care a lot about equality, making the case that 'if you care about minimising suffering, this 'AMF' thing comes out on top, and it is reasonable to assume that if you care about equality it could also be at the top, because it is effective and helps the poorest' carries a lot less weight than perhaps saying: 'hey, we funded a bunch of people who, like you, care foremost about equality to map out their values and rank charities, and this one came out on top.'

Note: cross-posting a summarised comment on this paper from a discussion on Facebook: https://www.facebook.com/groups/798404410293244/permalink/1021820764618273/?comment_id=1022125664587783

Comment author: Cornelius  (EA Profile) 03 May 2017 08:04:05PM *  2 points [-]

No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about and evaluate charities on these metrics.

This appears to be demonstrably false, and in very strong terms, given how strong a claim you've made and that I only need to find one counterexample to prove it wrong. We have many non-utilitarian egalitarian luminaries making a concerted effort to come up with exactly the metrics that would tell us, based on egalitarian/prioritarian principles, which charities and interventions we should prioritize:

  • Adam Swift: Political theorist, sociologist, specializes in liberal egalitarian ethics, family values, communitarianism, school choice, social justice.

  • Ole Norheim: Harvard physician, medical ethics professor working on distributive theories of justice and fair priority setting in low- and high-income countries. He heads the Priority Setting in Global Health (2012-2017) research project, which aims to do exactly what you claimed nobody is working on.

  • Alex Voorhoeve: Egalitarian theorist, member of the Priority Setting in Global Health project, featured on the BBC, and, unsurprisingly, a co-author with Norheim.

  • Nir Eyal: Harvard Global Health and Social Medicine Prof., specializes in population-level bioethics. Is currently working on a book that defends an egalitarian consequentialist (i.e. instrumental egalitarianism) framework for evaluating questions in bioethics and political theory.

All of these folks are mentioned in the paper.

I don't want to call these individuals Effective Altruists without having personally seen or heard them self-identify as such, but they have all publicly pledged 10% of their lifetime income to effective charities via Giving What We Can.

So if the old adage "actions speak louder than words" still rings true, then these non-utilitarians are far "more EA" than any number of utilitarians who publicly proclaim that they are part of effective altruism but then do nothing.

And none of this should be surprising. The 2015 EA Survey shows that only 56% of respondents identify as utilitarian. The linked survey results argue that this sample accurately estimates the actual EA population, which would mean that ~44% of all EAs are non-utilitarian. That's a lot. So even if utilitarians are the largest single group, of course the rest of us non-utilitarian EAs aren't just lounging around.

Comment author: Peter_Hurford  (EA Profile) 25 April 2017 09:55:34PM 0 points [-]

Good question!

We have people report both household and individual income. If you have an individual income and you're comfortable disclosing that, put that as "individual income" and then report your joint income as "household income".

After that, I'd recommend that each of you disclose the full joint donation amount on both surveys.

From there, we can figure it out.

Thanks! We'll try to make this more clear next year and we'd love any suggestions for a better way to handle joint donations.

Comment author: Cornelius  (EA Profile) 29 April 2017 08:48:22PM 1 point [-]

I think that joint donations, not only with kin or within couples but also with friends in an extended community, may become more common if EA becomes more prevalent in collectivist cultures. Right now EA is concentrated primarily in the UK, the Netherlands, Germany, Switzerland, Australia, and America, which are all pretty much archetypal individualist cultures.

I mention this because I consistently notice the EA community focusing on advertising what the individual can accomplish with their donation. This may not be best if EA is to achieve broad appeal in pretty much any country in Asia, where emphasizing what a community can accomplish with a collective donation might resonate far more.

I'm no expert on this topic though.

In response to comment by Cornelius  (EA Profile) on Why I left EA
Comment author: Carl_Shulman 06 March 2017 05:09:54PM *  4 points [-]

OK. Then, since most EAs (and philosophers, and the world) think that other things, like overall well-being, matter, it's misleading to suggest that by valuing saving overall good lives they are failing to achieve a shared goal of negative utilitarianism (which they reject).

In response to comment by Carl_Shulman on Why I left EA
Comment author: Cornelius  (EA Profile) 11 April 2017 07:40:46AM 0 points [-]

I'm confused, and your four points only make me feel I'm missing something embarrassingly obvious.

Where did I suggest that valuing saving overall good lives means we are failing to achieve a shared goal of negative utilitarianism? In the first paragraph of my post, the part you seem to think is misleading, I thought I specifically suggested exactly the opposite.

And yes, negative utilitarianism is a useful ethical theory that many EAs and philosophers will nonetheless reject given particular real-world circumstances, and I wholeheartedly agree. This is a whole different topic, though, so I feel like you're getting at something others consider obvious that I'm clearly missing.
