Comment author: MichaelPlant 24 April 2018 06:33:40PM 3 points [-]

Ah, that's great. Thanks very much for that. I think "dating a non-EA" is a particularly dangerous(/negative-impact?) phenomenon we should probably be talking about more. I also know someone, A, whose non-EA-inclined partner, B, was really unhappy that A wasn't aiming for a high-paying professional job, and it really wrenched A away from focusing on trying to do the most useful stuff. Part of the problem was that B's family wanted B to be dating a high earner.

Comment author: Cornelius  (EA Profile) 17 August 2018 09:37:07PM *  0 points [-]

To flip this one on its head: I think that, counterfactually, it could actually be "better" for the world at large for most EAs to date non-EAs, because of the drastic increase in impact that can typically be expected if you convince your lover of EA - which on balance seems more likely to me than value drift from dating a non-EA, if you are in fact a committed EA. However, I think value drift becomes far more of an issue in long-term relationships exceeding 2 years:

  • < 2 year relationship. Value drift potential = low. Convert lover to EA potential = very high
  • > 2 year relationship. Value drift potential = medium. Convert lover to EA potential = very low if it didn't happen in the first 2 years
  • > 5 year relationship. Value drift potential = high. Convert lover to EA potential = extremely low if it didn't happen in the first 5 years

Suffice it to say, my current girlfriend is now much more EA-minded, and I have received messages from my ex saying she still eats less meat even after she stopped dating me (I'll take her word for it). I know my behaviour has been very strongly influenced by the people I've dated, so there's no reason to assume the reverse doesn't happen.

Fun-fact: I use this as an excuse to argue with my girlfriend that clearly I should be dating many many girls short-term for obvious EA-reasons.

Comment author: the_jaded_one 29 January 2017 05:53:47PM 0 points [-]

I am confused. If you took it as given, why bother talking about whether Alliance for Safety and Justice and Cosecha are good charities?

Well, I am free both to assert that it is a sensible background assumption that it is not usually good for EA to do highly political things, and to argue a few relevant special cases of highly political EA things that aren't good, without taking on the bigger task of specifying and defending that assumption. But I offer Robin Hanson's post as some degree of defence.

I expect that they would become culture-war issues as soon as they become more prominent. Do you disagree?

I disagree strongly for synthetic meat: it will be an open-and-shut case once the quality surpasses real meat. I think wild animal suffering is emotive and will generate debate, but I don't think it will split left-right, mostly because I can't even decide which of {left, right} maps to which of {wild-suffering-bad, wild-suffering-OK}.

Or do you think that the appropriate role of EA is to elevate issues into culture-war prominence and then step aside?

Well, hopefully EA can elevate issues that are approximately Pareto improvements from irrelevance straight to broad consensus, skipping any kind of war.

that's a tribal war between economists and epidemiologists?


Or do you mean that they shouldn't take sides in issues associated with the American left and right, even if they sincerely believe that one of those issues is the best way to improve the world?

Yes, this. And if they do believe that one particular side of the US/EU culture war is the most important cause, then they should provide rock-solid evidence that it is - evidence that deals with the best arguments from the other side, as well as with the argument from the marginal utility of extra effort, which is critically missing in the OP.

Comment author: Cornelius  (EA Profile) 28 July 2017 09:29:06PM 0 points [-]

that's a tribal war between economists and epidemiologists?


I guess you aren't up to speed with the worm wars. Things have gotten pretty tribal there, with Twitter wars between respected academics (made worse by a viral BuzzFeed article that arguably politicized the issue...), but nobody (to date) would argue that EAs should stay out of deworming altogether because of that.

On the contrary precisely because of all this shit I'd think we need more EAs working on deworming.

Of course, in the case of deworming it seems clearer that throwing in EAs will lead to a better outcome. This isn't nearly as clear when it comes to politics, so I am with you that EAs should be more wary when it comes to recommending political/politicized work. Either way, I think ozymandias's point was that just as we don't tell EAs in deworming to abandon a sinking ship, it also seems absurd to have a blanket ban on EA political/politicized recommendations. You don't want a blanket ban and don't mind EA endorsing political charities, since, as you've said, you don't mind your favourite immigration charity being recommended. So the argument between you and ozymandias seems to mostly be about "to what degree."

And neither of you has actually operationalized your stance on "to what degree", which, in my view, is why the argument between the two of you dwindled into the void.

Comment author: carneades 23 December 2016 03:15:45PM -1 points [-]

The link to your argument regarding international aid is broken, so I'll post this here. While I am all for effective altruism in principle, the claim that the particular aid organizations that GiveWell and others promote do the most good is patently false. I live and work in West Africa, and I see every day the devastating economic harm that organizations like the Against Malaria Foundation wreak on communities. Effective Altruism as a movement has failed to actually be effective because it promotes charities that do more harm than good. Here's a video as to why: Stop Giving Well

Comment author: Cornelius  (EA Profile) 13 July 2017 09:04:20AM 1 point [-]

I see every day the devastating economic harm that organizations like the Against Malaria Foundation wreak on communities.

Make a series of videos about that instead then if it's so prevalent. It would serve to undermine GiveWell far more and strengthen your credibility.

Your video against GiveWell does not address or debunk any of GiveWell's evidence. It's a philosophical treatise on GiveWell's methods, not an evidence-based one. Arguing by analogy from your own experience is not evidence. I've been robbed 3 times living in Vancouver and zero times in Africa, despite living in Namibia/South Africa for most of my life. This does not, however, entail that Vancouver is more dangerous. I in fact have near-zero evidence to back up the claim that Vancouver is more dangerous.

All of your methodological objections (and far stronger anti-EA arguments) were systematically raised in Iason Gabriel’s piece on criticisms of effective altruism. And all of these criticisms were systematically responded to, and found lacking, in Halstead et al.'s defense paper.

I'd highly recommend reading both. They are both pretty badass.

Comment author: Cornelius  (EA Profile) 27 June 2017 11:09:34PM *  2 points [-]

I've for a long time seen things this way:

  • GiveWell: emphasizes effectiveness: the logic pull
  • TLYCS: emphasizes altruism: the emotion pull
  • GWWC: emphasizes the pledge: the act that unifies us as a common movement (or I think+feel it does)

One cute EA family.

Comment author: Cornelius  (EA Profile) 19 May 2017 08:09:36PM *  4 points [-]

We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals.

Perhaps I am not understanding, but isn't it possible to simplify your model by homing in on one particular thing GFI is doing and pretending that a donation goes towards only that? Oxfam's impact is notoriously difficult to model (too big, too many counterfactuals), but as soon as you look only at their disaster management programs (where they've done RCTs to showcase effectiveness), suddenly we have far better cost-effectiveness assurance. This approach wouldn't yield a cost-effectiveness figure for all of GFI, but it would for at least one of their initiatives. Doing this should also drastically simplify your counterfactuals.

I've read ACE's full report on GFI. Both it and this post suggest to me that a broad capture-everything approach is being taken by both ACE and OPP. I don't understand. Why do I not see a systematic list of all of GFI's projects and activities, both on ACE's website and here, followed by an incremental systematic review of each one in isolation? I realize I likely sound like an obnoxious physicist encountering a new subject, so do note that I am just confused. This is far from my area of expertise.

However, this approach is a bit silly because it does not model the acceleration of research: If there are no other donors in the field, then our donation is futile because £10,000 will not fund the entire effort required.

Could you explain this more clearly to me, please? An example with some stats would likely make it much clearer. Looking at the development of the Impossible Burger seems a fair phenomenon to base GFI's model on, at least for now, and at least insofar as it is being used to model a GFI donation's counterfactual impact in supporting similar products GFI is trying to push to market. I don't understand why the approach is silly because $10,000 wouldn't support the entire effort, or how this is tied to the acceleration of research.

Regarding acceleration dynamics, then: isn't it best to just model based on the most pessimistic, conservative curve? It makes sense to me that this would be the diminishing-returns one. This also falls in line with what I know about clean meat. If we eventually do need to simulate all elements of meat (we might as well assume we do, for the sake of being conservative), we'll also have to go beyond merely the scaffolding and growth-medium problems and include an artificial blood circulation system for the meat being grown. No such system yet exists, and it seems reasonable to suspect that our scientific problems rise exponentially the more precisely we try to simulate meat. So a diminishing-returns curve is to be expected for GFI's impact - at least insofar as its work on clean meat is concerned.

Comment author: saulius  (EA Profile) 13 May 2017 02:01:41PM *  6 points [-]

EDIT: this comment contains some mistakes

To begin with, I want to say that my goal is not to put blame on anyone but to change how we speak and act in the future.

His figure for the cost of preventing blindness by treating trachoma comes from Joseph Cook et al., “Loss of vision and hearing,” in Dean Jamison et al., eds., Disease Control Priorities in Developing Countries, 2d ed. (Oxford: Oxford University Press, 2006), 954. The figure Cook et al. give is $7.14 per surgery, with a 77 percent cure rate.

I am looking at this table from the cited source (Loss of Vision and Hearing, DCP2). It's a 77% cure rate for trachoma, which sometimes develops into blindness - not a 77% cure rate for blindness. At least that's how I interpret it; I can't be sure, because the cited source of the figure in the DCP2's table doesn't even mention trachoma! From what I've read, recurrences sometimes happen, so a 77% cure rate for trachoma is much, much more plausible. I'm afraid Toby Ord made the mistake of implying that curing trachoma = preventing blindness.
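To make the difference concrete, here's a back-of-the-envelope sketch. The cost and cure-rate figures are the ones cited above; the progression-to-blindness rate is a purely hypothetical placeholder I made up for illustration, not a figure from the DCP2 or any other source:

```python
# Cited figures (Cook et al. via DCP2): $7.14 per surgery, 77% cure rate.
cost_per_surgery = 7.14
cure_rate = 0.77
budget = 40_000

# If "cure" meant "blindness prevented", $40,000 would look spectacular:
trachoma_cases_cured = budget / cost_per_surgery * cure_rate
print(round(trachoma_cases_cured))  # ~4314 trachoma cases cured

# But curing trachoma is not the same as preventing blindness. If only some
# fraction of untreated cases would ever have progressed to blindness, the
# blindness figure shrinks proportionally. The 5% below is hypothetical:
assumed_progression_rate = 0.05
blindness_cases_prevented = trachoma_cases_cured * assumed_progression_rate
print(round(blindness_cases_prevented))  # ~216 under this made-up assumption
```

The point is not the particular numbers but that the headline "cost per case of blindness prevented" is extremely sensitive to a progression rate the cited table doesn't even supply.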

What is more, Toby Ord used the same DCP2 report that GiveWell used and GiveWell found major errors in it. To sum up very briefly:

Eventually, we were able to obtain the spreadsheet that was used to generate the $3.41/DALY estimate. That spreadsheet contains five separate errors that, when corrected, shift the estimated cost effectiveness of deworming from $3.41 to $326.43. [...] The estimates on deworming are the only DCP2 figures we’ve gotten enough information on to examine in-depth.

Regarding the Fred Hollows Foundation, please see GiveWell’s page about them and this blog post. In my eyes these discredit the organization’s claim that it restores sight for $25.

In conclusion, without further research we have no basis for the claim that trachoma surgeries can prevent 400, or even 40 cases of blindness for $40,000. We simply don't know. I wish we did, I want to help those people in the video.

I think one thing that is happening is that we are too eager to believe any figures we find if they support an opinion we already hold. That severely worsens the already existing problem of the optimizer’s curse.

I also want to add that preventing 400 cases of blindness for $40,000 (i.e. one case for $100) sounds much more effective to me than GiveWell's top charities. GiveWell seems to agree; see these citations from this page:

Based on very rough guesses at major inputs, we estimate that cataract programs may cost $112-$1,250 per severe visual impairment reversed [...] Based on prior experience with cost-effectiveness analyses, we expect our estimate of cost per severe visual impairment reversed to increase with further evaluation. [...] Our rough estimate of the cost-effectiveness of cataract surgery suggests that it may be competitive with our priority programs; however, we retain a high degree of uncertainty.

We tell the trachoma example and then advertise GiveWell, even though GiveWell’s top and standout charities are not even related to blindness and no one in EA ever talks about blindness. So people probably assume that GiveWell’s recommended charities are much more effective than a surgery that cures blindness for $100, but they are not.

Because GiveWell’s estimates for cataract surgeries are based on guesses, I think we shouldn’t use those figures in introductory EA talks either. We can state the disclaimers, but a person who hears the example might skip them when retelling the thought experiment (out of a desire to sound more convincing). And then the same thing will happen.

Comment author: Cornelius  (EA Profile) 13 May 2017 09:21:08PM *  1 point [-]

It's pretty much like you said in this comment, and I completely agree with you; I'm putting it here because of how well I think you've driven home the point:

...I myself once mocked a co-worker for taking an effort to recycle when the same effort could do so much more impact for people in Africa. That's wrong in any case, but I was probably wrong in my reasoning too because of numbers.

Also, I'm afraid that some doctor will stand up during an EA presentation and say

You kids pretend to be visionaries, but in reality you don't have the slightest idea what you are talking about. Firstly, it's impossible to cure trachoma-induced blindness. Secondly [...] You should go back to playing in your sandboxes instead of preaching to adults about how to solve real-world problems

Also, I'm afraid that the doctor might be partially right.

Also, my experience has persistently been that the blindness vs. trachoma example is quite off-putting, in a "now this person who might have gotten into EA is going to avoid it" kind of way. So if we want more EAs, this example seems miserably inept at attracting them. I myself have stopped using the example in introductory EA talks altogether. I might be an outlier, though, and will start using it again if given a good argument that it works well, but I suspect I'm not the only one who has seen better results introducing EA without bringing up this example at all. Now, with all the uncertainty around it, it would seem that both emotions and numbers argue against the EA community using this example in introductory talks. Save it for the in-depth discussions that happen after an intro instead?

Comment author: Cornelius  (EA Profile) 11 May 2017 12:43:46AM *  9 points [-]

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and held a workshop where we covered all the philosophical objections to Effective Altruism. All the objections were fairly straightforward to address except for one which, in addressing it, seemed to upend how many participants viewed EA, given the image they had of EA so far. That objection is: Effective Altruism is not that effective.

There is a lot to be said for this objection, and I very highly recommend that anyone who calls themselves an EA read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one. If we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of the worm wars, I also recommend taking a look at this, which is a good-faith attempt to track the entire "deworming-isn't-that-effective" controversy.

Disclaimer: I donate to SCI and rank it near the top of my priorities, just below AMF currently. I even donate to less certain charities like ACE's recommendations. So I certainly don't mean to dissuade anyone from donating with this comment. Reasoning under uncertainty is a thing, and you can see these two recent posts if you want insight into how an EA might try to go about it effectively.

The take-home of this, though, is the same as the three main points raised by the OP. If it had been made clear to us from the get-go what mechanisms are at play in determining how much impact an individual has with their donation to an EA-recommended charity, then this "EA is not E" objection would have been as innocuous as the rest. Instead, after addressing this concern and setting straight how things actually work (I still don't completely understand it; it's complicated), participants felt their initial exposure to EA (such as through the guide-dog example and other over-simplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") contained false advertising. The words "slight disillusionment" come to mind, given that these were all dedicated EAs going into the workshop.

So yes, I bow down to the almighty points bestowed by OP:

  • many of us were overstating the point that money goes further in poor countries

  • many of us don’t do enough fact checking, especially before making public claims

  • many of us should communicate uncertainty better

Btw, the "scope insensitive" link does not seem to work, I'm afraid (Update: thanks for fixing!)

Comment author: Cornelius  (EA Profile) 10 May 2017 11:30:40PM 2 points [-]

Everyone is warm (±37°C, ideally), open-minded, reasonable and curious.

You, sir, will be thoroughly quoted and requoted on this gem, lol. I commend this heartfelt post.

In response to Why I left EA
Comment author: rviss 21 February 2017 01:09:21AM 13 points [-]

Lila, what will you do now? What questions or problems do you see in the path ahead? What good things will you miss by leaving the EA community?

For some reason, I've always felt a deep sense of empathy for people who do what you have done. It is very honest and generous of you to do it this way. I wish you only the very best in all you do.

(This is my first post on this forum. I am new to EA.)

In response to comment by rviss on Why I left EA
Comment author: Cornelius  (EA Profile) 10 May 2017 08:46:00PM *  1 point [-]

One thing I'm unclear on is:

Is s/he leaving the EA community and retaining the EA philosophy or rejecting the EA philosophy and staying in the EA community or leaving both?

What EAs do and what EA is are two different things, after all. I'm going to guess leaving the EA community, given that most EAs are utilitarians and this seems foundational to Lila's reason for leaving. However, the EA philosophy is not utilitarian per se, so you'd expect there to be many non-utilitarian EAs. I've commented on this before here. Many of us are not utilitarian - 44% of us, in fact, according to the 2015 survey. The linked survey results argue that this sample accurately estimates the actual EA population. 44% is a lot of non-utilitarian EAs. I imagine many of them aren't as engaged in the EA community as the utilitarian EAs, despite self-identifying as EAs.

If s/he is just leaving the community then, to me, this is only disheartening insofar as s/he doesn't interact with the community from this point on. So I do hope Lila continues to be an EA outside of the EA community, where s/he can spread goodness in the world using her/his non-utilitarian prioritarian ethics (prioritizing victims of violence), with the EA philosophy as a guide.

The "movement isn't diverse enough" complaint is legitimate and a sound reason to leave a movement if you don't feel like you fit in. So s/he might well do much more good for the world elsewhere, in some other movement with a better personal fit. And as long as she stays in touch with EA, we can have some good ol' moral trade for the benefit of all. This trade could conceivably be much more beneficial both for EA and for Lila if s/he is no longer in the EA community.

Comment author: lucarade 09 August 2016 10:56:53PM 8 points [-]

Have you looked into why SC failed and if there's parallels between its organizational structure and EA's? Although you've convincingly argued that in many of the specifics the two movements differ significantly, there might be useful insights into how to prevent failure modes in a more general sense of a movement seeking to improve altruism.

Comment author: Cornelius  (EA Profile) 07 May 2017 10:29:23AM 0 points [-]

The movement started around 1870 and still appears to have been active around 1894 (the latest handbook in the OP). WW1 was 1914-1918 and WW2 1939-1945. I'd like to know if it survived to 1945. If it did, that is its cut-off, since my guess is that it died very quickly after WW2, when eugenics rapidly spread through the world's collective consciousness as an unspeakable evil. I imagine the movement couldn't adapt quickly enough to the bad PR and silently faded or rebranded itself. For instance, the Charity Organization Society of Denver, Colorado, is the forerunner of the modern United Way of America.

So I imagine the lesson for EA is to beware the rapid and irreversible effects of having EA implicitly tied to something everyone everywhere has suddenly started to hate in the strongest possible terms. This is probably why it is a good idea for EA to stay out of politics. Once you associate a movement with something political, good luck disassociating yourself when some major bad stuff happens. Or maybe the lesson is just that EA should beware WW3. Who knows.
