Comment author: Cornelius (EA Profile) 26 March 2017 02:04:44AM 0 points

Perhaps "systemic change bias" needs to be coined, or something to that effect, to be used in further debates.

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Comment author: Carl_Shulman 26 March 2017 04:07:38AM * 2 points

> Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Those causes get criticized because of how hard they are to quantify. The relatively neglected thing is recognizing both strands, and arguing for Goldilocks positions between 'linear, clearly evidence-backed, non-systemic charity' and 'far too radical for most people interested in systemic change.'

Comment author: Ben_Todd 25 March 2017 04:44:40AM 7 points

> I read him as saying that the EA community would not support e.g. the abolitionist movement were it around then, precisely because of the difficulties in EV calculations, and I agree with him on that.

Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.

Turning to current issues, ending factory farming is also a cause that likely requires large scale social change through advocacy, and lots of EAs work on that.

Comment author: Carl_Shulman 25 March 2017 08:15:34AM 9 points

> Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.

And Bentham was ahead of the curve on:

  • Abolition of slavery
  • Calling for legal equality of the sexes
  • The first known argument for legalization of homosexuality in England
  • Animal rights
  • Abolishing the death penalty and corporal punishment (including of children)
  • Separation of church and state
  • Freedom of speech

> precisely because of the difficulties in EV calculations

The extensive work on factory farming is certainly one counterexample, but the interest in artificial intelligence may be a more powerful one on this point.

In response to comment by Carl_Shulman on Why I left EA
Comment author: Cornelius (EA Profile) 06 March 2017 05:11:44AM 1 point

Yes, precisely. Although - there are so many variants of negative utilitarianism that "precisely" is probably a misnomer.

In response to comment by Cornelius  (EA Profile) on Why I left EA
Comment author: Carl_Shulman 06 March 2017 05:09:54PM * 4 points

OK. Then, since most EAs (and philosophers, and the world) think that other things, like overall well-being, matter, it's misleading to suggest that by valuing saving overall good lives they are failing to achieve a shared goal of negative utilitarianism (which they reject).

In response to Why I left EA
Comment author: Cornelius (EA Profile) 03 March 2017 10:56:38PM 0 points

Yeah, as a two-level consequentialist moral anti-realist I actually am pretty tired of EA's insistence on "how many lives we can save" instead of emphasizing how much "life fulfillment and happiness" you can spread. I always thought this was not only a PR mistake but also a utilitarian mistake. We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant utils-wise than preventing a death with less suffering.

Nonetheless, this is the first I've heard that violence and exploitation are under-valued by EAs. It always seemed to me that EAs generally weep and feel angsty feelings in their gut when they read about the violence and exploitation of their fellow man. But what can we do? Violent regions are notoriously difficult places in which to set up tractable interventions. As such, it always seemed to me that we should focus on what we know works, since lifting people out of disease and poverty empowers them to address issues of violence and exploitation themselves. And giving someone their own agency back in this way is, in my view, something worth putting a lot of moral weight on due to its long-term (albeit hard-to-measure) consequences.

And now I'm going to say something that I feel some people probably won't like.

I consistently feel that a lot of the critique of EA has to do with how others perceive EAs rather than what they are really like - i.e., prejudice. I mentioned above that I generally feel EAs are legit moved to tears (or whatever is a significant feeling for them) regarding issues of violence. But I find that as soon as a person spends most of his/her time in the public space talking about math and weird utilitarian expected-value calculations, that person is suddenly viewed as no longer having a heart, or "the right heart." The amount of compassion and empathy a person has is not tied to what weird mathematical arguments they push out but to what they do and feel inside (this is how I operationalize "compassion" at any rate: an internal state leading to external consequences. Yes, I know, that's a pretty virtue-ethics way to look at it, so sue me).

Anyway, maybe part of this is because I know what it feels like to be the high-school nerd who secretly cries when he sees someone getting bullied at break time, but who then talks to people about and develops extensively researched weird ideas like transhumanism as a means of optimizing human flourishing (instead of, say, caring to go to the anti-bullying event that everyone thinks I should be going to if I really cared about bullying). It makes sense to me that many people think I have my priorities wrong. But it certainly isn't due to a lack of compassion and concern for my fellow man. It's not too hard to go from this analogy to the argument that EAs are misjudged in the same way.

This is perhaps what I absolutely love about the EA community. I've finally found a community of nerds where I can be myself and go in depth with uber-weird (any and all) ideas without being looked at as any less compassionate <3.

When people talk about ending violence and exploitation by doing something that will change the system that keeps these problems in place, I get upset. This "system" is often invisible and amorphous, and a product of ideology rather than, say, cost-effectiveness calculations. Why this gets me upset is that I often find it means people are willing to sacrifice giving someone their agency back - when it is clear you can do so through donating to proven disease and poverty alleviation interventions - in order to donate to or support a cause against violence and exploitation because it aligns with their ideology. This essentially seems to me a way of making donation about yourself - trying to make sure you feel content in your own ethical worldview, because specifically not doing anything about that violence and exploitation makes you feel bad - rather than making it about the individuals on the receiving end of the donation.

Yeah, I know, my past virtue-ethics predilections are showing again. Even if someone like the person I've described above supports an anti-violence cause that, though difficult to get an effectiveness measure for, is nonetheless doing a lot of good in the world we can't measure, I still don't like it. I'm caring what people think and arguing that certain self-serving thoughts appear morally problematic independent of the end result they cause. So let me show I'm also strongly opposed to forms of anti-realist virtue ethics. It's not enough to merely be aligned with the right way of thinking/ideology and have good things come from that. The end result - the actual people on the receiving end - is what actually matters. And this is why I find a "mostly" utilitarian perspective so much more humanizing than the views of the many people who get uncomfortable with its extreme conclusions and then reject the whole thing. A more utilitarian perspective forces you to make it about the receiver.

Whatever the case, writing this has made me sad. I'm sad to see you go - you seem highly intelligent and a likely asset to the movement - and as someone who is on the front line of EA and PR, I take this as a personal failure, but I wish you the best. Does anyone know of any EA-vetted charities working on violence and exploitation prevention? Even ones that are a stretch tractability-wise would be good. I'd like to donate - it always makes me feel better.

In response to comment by Cornelius  (EA Profile) on Why I left EA
Comment author: Carl_Shulman 04 March 2017 12:24:06AM * 4 points

> We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant utils-wise than preventing a death with less suffering.

What do you mean by 'we'? Negative utilitarians?

Comment author: Peter_Hurford (EA Profile) 15 February 2017 09:39:46PM 1 point

Would it make sense to donate to the LJAF for promoting open science?

Comment author: Carl_Shulman 15 February 2017 10:06:19PM 2 points

If you were trying to mimic them, I'd give more to some of their grantees, like METRICS or COS.

Comment author: BenHoffman 15 February 2017 06:27:59PM * 1 point

This seems like evidence for a combination of the second and third possibilities in the trilemma. Either GiveWell should expect to be able to point to empirical evidence of dramatic results soon (if not already), or it should expect to reach substantially diminishing returns, or both.

I agree that there are lots of practical reasons why you can't just firehose this stuff - that's part of the diminishing returns story!

I could imagine a scenario that slips in between 2 and 3, like you don't hit substantially diminishing returns on malaria until the last 1% of incidence, but is there reason to think that's the case?

Comment author: Carl_Shulman 15 February 2017 09:57:54PM 2 points

> I could imagine a scenario that slips in between 2 and 3, like you don't hit substantially diminishing returns on malaria until the last 1% of incidence, but is there reason to think that's the case?

I suggest reading about the Gates malaria eradication plans, including the barriers that led Gates to think ITNs (insecticide-treated nets) alone can't achieve eradication.

Comment author: BenHoffman 15 February 2017 04:27:09PM * 0 points

GBD 2015 estimates that communicable, maternal, neonatal, and nutritional deaths worldwide amounted to about 10 million in 2015, and they are declining at a rate of about 20% per decade. If top charities could scale up to solve that whole problem at current cost-effectiveness levels, then at $5,000 per life saved the whole thing would cost $50 billion/yr. That's more than Good Ventures has on hand - but it's not an order of magnitude more. It's not more than what Good Ventures, its donors, the Gates Foundation ($40 billion), and Warren Buffett's planned gifts to the Gates Foundation add up to - and all of those parties seem to be interested in this program area.

That's an extreme upper bound. It's not limited to the developing world, or to especially tractable problems. You almost certainly can't scale up that high at current costs - after all, the GiveWell top charities are supposed to be the ones pursuing the most important low-hanging fruit, tractable interventions for important but straightforward problems. But then, how high can you scale up at similar cost-effectiveness numbers? Can you do a single disease? For one continent? One region? One country? Now, we're getting to magnitudes that may fall well within Good Ventures's ability to fund the whole thing. (Starting with a small area where you can show clear gains is not a new idea - it's the intuition behind Jeffrey Sachs's idea of millennium villages.) And remember that once you wipe out a communicable disease, it's much cheaper to keep it away; when's the last time people were getting smallpox? Similarly, nutritional interventions such as food fortification tend to be permanent. There's a one-time cost, and then it's standard practice.

GBD 2015 estimates that there are only about 850,000 deaths due to neglected tropical diseases each year, worldwide. At $5,000 per life saved, that's about $4.2 billion to wipe out the whole category. Even less if you focus on one continent, or one region, or one country. To name one example, Haiti is a poor island with 0.1% of the world's population; can we wipe out neglected tropical diseases for $4.2 million there? $40 million?
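
[Editorial sketch: the back-of-envelope arithmetic above, spelled out using only the GBD 2015 death counts and the $5,000-per-life figure already quoted; a rough upper-bound illustration, not a real cost model.]

    # Back-of-envelope arithmetic from the figures quoted above.
    COST_PER_LIFE = 5_000  # USD, assumed cost per life saved

    def naive_cost(deaths_per_year):
        """Cost to avert all of one year's deaths at a constant cost per life."""
        return deaths_per_year * COST_PER_LIFE

    # Communicable, maternal, neonatal, and nutritional deaths (GBD 2015):
    print(naive_cost(10_000_000))  # 50,000,000,000 -> ~$50 billion/yr

    # Neglected tropical diseases (GBD 2015):
    ntd = naive_cost(850_000)
    print(ntd)                     # 4,250,000,000 -> ~$4.2 billion

    # Haiti, at roughly 0.1% of world population:
    print(ntd * 0.001)             # ~4,250,000 -> ~$4.2 million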

Comment author: Carl_Shulman 15 February 2017 06:01:06PM 2 points

I don't think linear giving opportunities closely analogous to bednets can absorb $10 billion without diminishing returns (although you might be able to beat that with R&D, advocacy, gene drives, and other leveraged strategies over a longer period). But I think this is a flawed argument.

> If top charities could scale up to solve that whole problem at current cost-effectiveness levels, then at $5,000 per life saved the whole thing would cost $50 billion/yr. That's more than Good Ventures has on hand - but it's not an order of magnitude more. It's not more than what Good Ventures, its donors, the Gates Foundation ($40 billion), and Warren Buffett's planned gifts to the Gates Foundation add up to - and all of those parties seem to be interested in this program area.

The original text strongly suggested a one-time cost, not a recurring annual cost. When you have diminishing returns within a single year (especially as programs are scaled up; BMGF has ramped up its spending over time), the fact that they don't spend everything in a firehose in a single year is far from shocking. (Note that BMGF has also spent a lot on US education; it doesn't have a pure global poverty focus, although that is its main agenda.)

GWWC's FAQ claims:

> The Institute for Health Metrics and Evaluation estimated that between 2000 and 2014 the $73.6 billion spent on child health by donors (including both private and public) averted the death of 14 million infants and children. This is in addition to the $133 billion spent on child health by low- and middle-income country governments, which is estimated to have averted the deaths of 20 million children.

The annual figure for this is ~$14 billion (roughly $207 billion combined over those 15 years) - and not all of it is spent where the evidence is best (corruption, etc.).

Gates Foundation spending is several billion dollars per year spread across a number of areas.

Total spending in these areas is not so large that a billion dollars a year is a drop in the bucket, and these diseases have been massively checked or reduced (e.g. malaria, vaccinations, slowing HIV infections, smallpox eradication, salt iodization, etc.).

And we haven't explicitly talked about possible leverage from R&D and advocacy on poverty.

> Starting with a small area where you can show clear gains is not a new idea - it's the intuition behind Jeffrey Sachs's idea of millennium villages

Those were criticized at the time for spending so much on the same people - including on less well-supported interventions and past the point of diminishing returns - rather than delivering more cost-effective interventions across a larger number of people. Local effectiveness of medical interventions is tested in clinical trials.

> And remember that once you wipe out a communicable disease, it's much cheaper to keep it away; when's the last time people were getting smallpox?

Smallpox was a disease found only in humans, with a highly effective vaccine. Such diseases are regularly locally extirpated, but getting universal coverage around the world, down to the last holdout regions (civil war, conspiracy theories about the vaccinations), can be very hard, as in polio eradication, and infectious diseases can quickly recolonize afterwards (malaria rebounded after the failed 1960s eradication effort in places without continuing high-quality prevention). Polio eradication is close and is a priority of e.g. Gates Foundation funding, but it is also quite expensive - more than $10 billion so far. For harder-to-control diseases without vaccines, like malaria, even more so (and you couldn't just spend more in one big-bang year and be sure you hadn't missed a spot).

Comment author: BenHoffman 14 February 2017 06:32:59PM * 1 point

If a substantial share of other donors were already observed defecting, that seems like it would be the single most important consideration to mention in the 2015 post explaining splitting, and I am baffled as to why it was left out.

It seems like genuinely unfriendly behavior on the part of other donors and it would have been a public service at that time to call them out on this.

Comment author: Carl_Shulman 15 February 2017 04:33:16AM * 6 points

> that seems like it would be the single most important consideration to mention in the 2015 post explaining splitting, and I am baffled as to why it was left out.

Ben, you have advocated simply giving to the best thing at the margin. Doing that while taking room for more funding into account automatically results in what you are calling 'defecting' here in this post (a term I object to, since the game-theoretic analogy is dubious, and you're using it in a highly morally charged way to criticize a general practice with respect to a single actor). That's a normal way of assessing donations in effective altruism, and common among strategic philanthropists.

The 'driving away donors' bit was repeatedly discussed, as was the routine occurrence of such issues in large-scale philanthropy (where foundations bargain with each other over shares of funding in areas of common interest).

Comment author: RobBensinger 14 February 2017 04:40:53PM * 4 points

Thanks for summarizing this, Ben!

> First, the adversarial framing here seems unnecessary. If the other player hasn't started defecting in the iterated prisoner's dilemma, why start?

I might be getting this wrong, but my understanding is that a bunch of donors immediately started 'defecting' (= pulling out of funding the kinds of work GV is excited about) once they learned of GV's excitement for GW/OPP causes, on the assumption that GV would at some future point adopt a general policy of (unconditionally?) 'cooperating' (= fully funding everything to the extent it cares about those things).

I think GW/GV/OPP arrived at their decision in an environment where they saw a non-trivial number of donors preemptively 'defecting' either based on a misunderstanding of whether GW/GV/OPP was already 'cooperating' (= they didn't realize that GW/GV/OPP was funding less than the full amount it wanted funded), or based on the assumption that GW/GV/OPP was intending to do so later (and perhaps could even be induced to do if others withdrew their funding). If my understanding of this is right, then it both made the cooperative equilibrium seem less likely, and made it seem extra important for GW/GV/OPP to very loudly and clearly communicate their non-CooperateBot policy lest the misapprehension spread even further.
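
[Editorial sketch: to make the iterated-game framing concrete, here is a minimal simulation with hypothetical payoffs and strategies - not a model of the actual donors - of why a big funder known to act as a CooperateBot invites 'defection', while a conditional policy in the spirit of splitting does not.]

    # Standard prisoner's dilemma payoffs (T=5 > R=3 > P=1 > S=0), purely illustrative.
    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(big_funder, small_donor, rounds=10):
        """Total payoffs for (big funder, small donor) over repeated rounds."""
        history, big_total, small_total = [], 0, 0
        for _ in range(rounds):
            b, s = big_funder(history), small_donor(history)
            pb, ps = PAYOFFS[(b, s)]
            big_total, small_total = big_total + pb, small_total + ps
            history.append((b, s))
        return big_total, small_total

    cooperate_bot = lambda h: "C"  # unconditionally fills the funding gap
    tit_for_tat = lambda h: "C" if not h or h[-1][1] == "C" else "D"  # conditional policy
    cooperator = lambda h: "C"
    defector = lambda h: "D"       # always redirects funds elsewhere

    print(play(cooperate_bot, cooperator))  # (30, 30)
    print(play(cooperate_bot, defector))    # (0, 50): defecting on a CooperateBot pays
    print(play(tit_for_tat, defector))      # (9, 14): against a conditional policy it doesn't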

I think the difficulty of actually communicating en masse with smaller GW donors, much less having a real back-and-forth negotiation with them, played a very large role in GW/GV/OPP's decisions here, including their decision to choose an 'obviously arbitrary' split number like 50% rather than something more subtle.

> It also assumes that people are taking the cost-per-life-saved numbers at face value, and if so, then GiveWell already thinks they've been misled.

I'm not sure I understand this point. Is this saying that if people are already misled to some extent, or in some respect, then it doesn't matter in what related ways one's actions might confuse them?

(Disclaimer: I work for MIRI, which has received an Open Phil grant. As usual, the above is me speaking on my own behalf, not on MIRI's.)

Comment author: Carl_Shulman 15 February 2017 04:26:13AM * 3 points

Cross-posted from Ben's blog:

> if Good Ventures committed to fully funding the GiveWell top charities, other donors might withdraw funding to fund the next-best thing by their values, confident that they'd be offset. A commitment to "splitting" would prevent this...

> I have two main objections to this. First, the adversarial framing here seems unnecessary. If the other player hasn't started defecting in the iterated prisoner's dilemma, why start?

If GV fully funded the top charities, and others also funded them, then they would be overfunded by GV's lights. If A and B both like X (and have the same desired funding level for it), but have different second choices of Y and Z, the fully cooperative solution would not involve either A or B funding X alone.
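
[Editorial sketch: a toy numeric version of this point, with all numbers hypothetical. Suppose A and B each have $10 to give, both value X at 2 utils per dollar up to a $10 funding level, and each values their second choice (Y for A, Z for B) at 1 util per dollar.]

    # Hypothetical utilities for the A/B/X/Y/Z example above.
    def outcome(a_to_x, b_to_x):
        x = min(a_to_x + b_to_x, 10)  # X is only worth funding up to $10
        y = 10 - a_to_x               # A's remainder goes to Y
        z = 10 - b_to_x               # B's remainder goes to Z
        return 2 * x + y, 2 * x + z   # (A's utility, B's utility)

    print(outcome(10, 0))   # (20, 30): A funds X alone and B free-rides
    print(outcome(0, 10))   # (30, 20): B funds X alone and A free-rides
    print(outcome(10, 10))  # (20, 20): X overfunded, Y and Z get nothing
    print(outcome(5, 5))    # (25, 25): splitting X's cost, the cooperative solution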

[CoI notice: I consult for OpenPhil.]

Comment author: erikaalonso 13 January 2017 12:38:41AM * 21 points

Hi everyone! I’m here to formally respond to Sarah’s article, on behalf of ACE. It’s difficult to determine where the response should go, as it seems there are many discussions, and reposting appears to be discouraged. I’ve decided to post here on the EA forum (as it tends to be the central meeting place for EAs), and will try to direct people from other places to this longer response.

Firstly, I’d like to clarify why we have not inserted ourselves into the discussion happening in multiple Facebook groups and fora. We have recently implemented a formal social media policy which encourages ACE staff to respond to comments about our work with great consideration, and in a way that accurately reflects our views (as opposed to those of one staff member). We are aware that this might come across as “radio silence” or lack of concern for the criticism at hand—but that is not the case. Whenever there are legitimate critiques about our work, we take it very seriously. When there are accusations of intent to deceive, we do not take them lightly. The last thing we want to do is respond in haste only to realize that we had not given the criticism enough consideration. We also want to allow the community to discuss amongst themselves prior to posting a response. This is not only to encourage discussion amongst individual members of the community, but also so that we can prioritize responding to the concerns shared by the greatest number of community members.

It is clear to us now that we have failed to adequately communicate the uncertainty surrounding the outcomes of our leafleting intervention report. We absolutely disagree with claims of intentional deception and the characterization of our staff as acting in bad-faith—we have never tried to hide our uncertainty about the existing leafleting research report, and as others have pointed out, it is clearly stated throughout the site where leafleting is mentioned. However, our reasoning that these disclaimers would be obvious was based on the assumption that those interested in the report would read it in its entirety. After reading the responses to this article, it’s obvious that we have not made these disclaimers as apparent as they should be. We have added a longer disclaimer to the top of our leafleting report page, expressing our current thoughts and noting that we will update the report sometime in 2017.

In addition, we have decided to remove the impact calculator (a tool which included an ability to enter donations directed to leafleting and receive estimates of high and low bounds of animals spared) from our website entirely, until we feel more confident that it is not misleading to those unfamiliar with cost-effectiveness calculations and/or with how the low/best/high error bounds express the uncertainty in those numbers. It is not typical for us to remove content from the site, but we intend to operate with an abundance of caution. This change seems to be the best option, given that people believe we are being intentionally deceptive in keeping it online.

Finally, leadership at ACE all agree it has been too long since we have updated our Mistakes page, so we have added new entries concerning issues we have reflected upon as an organization.

We also notice that there is concern among the community that our recommendations are suspect due to the weak evidence supporting our cost-effectiveness estimates of leafleting. The focus on leafleting for this criticism is confusing to us, as our cost-effectiveness estimates address many interventions, not only leafleting, and the evidence for leafleting is not much weaker than other evidence available about animal advocacy interventions. On top of that, cost-effectiveness estimates are only a factor in one of the seven criteria used in our evaluation process. In most cases, we don’t think that they have changed the outcome of our evaluation decisions. While we haven’t come up with a solution for clarifying this point, we always welcome and are appreciative of constructive feedback.

We are committed to honesty, and are disappointed that the content we've published on the website concerning leafleting has caused so much confusion as to lead anyone to believe we are intentionally deceiving our supporters for profit. On a personal note, I’m devastated to hear that our error in communication has led to the character assassination not only of ACE, but of the people who comprise the organization—some of the hardest working, well-intentioned people I’ve ever worked with.

Finally, I would like everyone to know that we sincerely appreciate the constructive feedback we receive from people within and beyond the EA movement.

*Edited to add links

Comment author: Carl_Shulman 24 January 2017 03:13:27AM * 8 points

> After reading the responses to this article, it's obvious that we have not made these disclaimers as apparent as they should be...until we feel more confident that it is not misleading to those unfamiliar with cost-effectiveness calculations

When there are debates about how readers are interpreting text, or potentially being misled by it, empirical testing (e.g. having Mechanical Turk readers view a page and then answer questions about the topic where they might be misled) is a powerful tool (and also avoids reliance on staff intuitions that might be affected by a curse of knowledge). See here for a recent successful example.
