Comment author: MichaelPlant 03 April 2017 10:22:03AM 2 points

Agree with the above, but wanted to ask: what do you mean by a 'strong presentist' view? I've not heard/seen the term and am unsure what it is contrasted with.

Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?

Comment author: Carl_Shulman 05 April 2017 05:12:07PM 1 point

"Is 'weak presentism' that you give some weight to non-presently existing people, 'strong presentism' that you give none?"

In my comment, yes.

Comment author: William_MacAskill 31 March 2017 05:13:07PM 1 point

That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:

"We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent." "One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved]." http://www.openphilanthropy.org/blog/worldview-diversification

And to clarify my first comment, "unlikely to be optimal" = I think it's a contender, but the base rate for "X is an optimal intervention" is really low.

Comment author: Carl_Shulman 31 March 2017 07:23:54PM * 13 points

"if you are only considering the impact on beings alive today...factory farming"

The interventions you are discussing don't help any beings alive at the time, but only affect the conditions (or existence) of future ones. In particular, cage-free campaigns, and campaigns for slower-growth genetics and lower crowding among chickens raised for meat, are all about changing the conditions into which future chickens will be born; they don't involve moving any particular chickens from the old systems to the new ones.

I.e. the case for those interventions already involves rejecting a strong presentist view.

"That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:"

Suppose there's an intelligence explosion in 30 years (not wildly unlikely in expert surveys), and an expansion of population by 3-12 orders of magnitude over the following 10 years (with AI life of various kinds outnumbering both human and non-human animals today, with vastly more total computation). Then almost all the well-being of the next 50 years lies in that period.
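As a toy illustration of why the post-explosion period dominates (the 40/10-year split and the normalization below are my assumptions, chosen only to show the orders of magnitude):

```python
# Toy model: share of the next 50 years' person-years that falls in the
# post-explosion period, under an assumed 3-12 order-of-magnitude expansion.
baseline = 1.0  # current population, normalized

for orders_of_magnitude in (3, 12):
    pre_years = 40 * baseline                              # ~40 years near today's scale
    post_years = 10 * baseline * 10**orders_of_magnitude   # ~10 years post-expansion
    share = post_years / (pre_years + post_years)
    print(orders_of_magnitude, round(share, 4))
# -> 0.996 at 3 orders of magnitude; ~1.0 at 12
```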

Also in that scenario existing beings could enjoy accelerated subjective speed of thought and greatly enhanced well-being, so most of the QALY-equivalents for long-lived existing beings could lie there.

Comment author: Cornelius (EA Profile) 26 March 2017 02:04:44AM 0 points

Perhaps "systemic change bias" needs to be coined, or something to that effect, to be used in further debates.

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Comment author: Carl_Shulman 26 March 2017 04:07:38AM * 2 points

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Those causes get criticized because of how hard they are to quantify. The relatively neglected thing is recognizing both strands, and arguing for Goldilocks positions between 'linear clear evidence-backed non-systemic charity' and 'far too radical for most interested in systemic change.'

Comment author: Ben_Todd 25 March 2017 04:44:40AM 7 points

I read him as saying that the EA community would not support e.g. the abolitionist movement were it around then, precisely because of the difficulties in EV calculations, and I agree with him on that.

Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.

Turning to current issues, ending factory farming is also a cause that likely requires large scale social change through advocacy, and lots of EAs work on that.

Comment author: Carl_Shulman 25 March 2017 08:15:34AM 9 points

Just as an aside, I'm not sure that's obvious. John Stuart Mill was a leader in the abolition movement. He was arguably the Peter Singer of those times.

And Bentham was ahead of the curve on:

  • Abolition of slavery
  • Calling for legal equality of the sexes
  • The first known argument for legalization of homosexuality in England
  • Animal rights
  • Abolishing the death penalty and corporal punishment (including of children)
  • Separation of church and state
  • Freedom of speech

precisely because of the difficulties in EV calculations

The extensive work on factory farming is certainly one counterexample, but the interest in artificial intelligence may be a more powerful one on this point.

In response to comment by Carl_Shulman on Why I left EA
Comment author: Cornelius (EA Profile) 06 March 2017 05:11:44AM 1 point

Yes, precisely. Although - there are so many variants of negative utilitarianism that "precisely" is probably a misnomer.

In response to comment by Cornelius  (EA Profile) on Why I left EA
Comment author: Carl_Shulman 06 March 2017 05:09:54PM * 4 points

OK, then: since most EAs (and philosophers, and the world) think that other things, like overall well-being, matter, it's misleading to suggest that by valuing saving overall good lives they are failing to achieve a shared goal of negative utilitarianism (which they reject).

In response to Why I left EA
Comment author: Cornelius (EA Profile) 03 March 2017 10:56:38PM 0 points

Yea, as a two-level consequentialist moral anti-realist I actually am pretty tired of EA's insistence on "how many lives we can save" instead of emphasizing how much "life fulfillment and happiness" you can spread. I always thought this was not only a PR mistake but also a utilitarian mistake. We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant utils-wise than preventing a death with less suffering.

Nonetheless, this is the first I've heard that violence and exploitation are under-valued by EAs. It always seemed the case to me that EAs generally weep and feel angsty feelings in their gut when they read about the violence and exploitation of their fellow man. But what can we do? Regions of violence are notoriously difficult places for setting up interventions that are tractable. As such, it always seemed to me that we should focus on what we know works, since lifting people out of disease and poverty empowers them to address issues of violence and exploitation themselves. And giving someone their own agency back in this way is, in my view, something worth putting a lot of moral weight on due to its long-term (albeit hard-to-measure) consequences.

And now I'm going to say something that I feel some people probably won't like.

I consistently feel that a lot of the critique of EA has to do with how others perceive EAs rather than what they are really like, i.e. prejudice. I mentioned above that I generally feel EAs are legit moved to tears (or whatever is a significant feeling for them) regarding issues of violence. But I find that as soon as such a person spends most of his/her time in the public space talking about math and weird utilitarian expected-value calculations, this person is suddenly viewed as no longer having a heart, or "the right heart." The amount of compassion and empathy a person has is not tied to what weird mathematical arguments they push out but to what they do and feel inside (this is how I operationalize "compassion" at any rate: an internal state leading to external consequences. Yes I know, that's a pretty virtue-ethics way to look at it, so sue me).

Anyway, maybe part of this is because I know what it feels like to be the high-school nerd who secretly cries when he sees someone getting bullied at break time, but who then talks to people about and develops extensively researched weird ideas like transhumanism as a means of optimizing human flourishing (instead of, say, caring to go to the anti-bullying event that everyone thinks I should be going to if I really cared about bullying). It makes sense to me that many people think I have my priorities wrong. But it certainly isn't due to a lack of compassion and concern for my fellow man. It's not too hard to go from this analogy to how EAs get perceived.

This is perhaps what I absolutely love about the EA community. I've finally found a community of nerds where I can be myself and go in depth with uber-weird (any and all) ideas without being looked at as any less compassionate <3.

When people talk about ending violence and exploitation by doing something that will change the system that keeps these problems in place, I get upset. This "system" is often invisible and amorphous, a product of ideology rather than of cost-effectiveness calculations. Why this gets me upset is that it often means people are willing to sacrifice giving someone their agency back - which it is clear you can do by donating to proven disease and poverty alleviation interventions - in order to donate to or support a cause against violence and exploitation because it aligns with their ideology. This essentially seems to me a way of making donation about yourself - trying to make sure you feel content in your own ethical worldview, because specifically not doing anything about that violence and exploitation makes you feel bad - rather than making it about the individuals on the receiving end of the donation.

Yea I know, my past virtue-ethics predilections are showing again. Even if someone like the person I've described above supports an anti-violence cause that, though difficult to get an effectiveness measure from, is nonetheless doing a lot of good we can't measure, I still don't like it. I'm caring what people think, and arguing that certain self-serving thoughts appear morally problematic independent of the end result they cause. So let me show that I'm also strongly opposed to forms of anti-realist virtue ethics: it's not enough to merely be aligned with the right way of thinking/ideology and have good things come from that. The end result - the actual people on the receiving end - is what actually matters. And this is why I find a "mostly" utilitarian perspective so much more humanizing than the stance of a lot of people who get uncomfortable with its extreme conclusions and then reject the whole thing. A more utilitarian perspective forces you to make it about the receiver.

Whatever the case, writing this has made me sad. I'm sad to see you go; you seem highly intelligent and a likely asset to the movement, and as someone who is on the front line of EA and PR I take this as a personal failure, but I wish you the best. Does anyone know of any EA-vetted charities working on violence and exploitation prevention? Even ones that are a stretch tractability-wise would be good. I'd like to donate - it always makes me feel better.

In response to comment by Cornelius  (EA Profile) on Why I left EA
Comment author: Carl_Shulman 04 March 2017 12:24:06AM * 4 points

We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant utils-wise than preventing a death with less suffering.

What do you mean by 'we'? Negative utilitarians?

Comment author: Peter_Hurford (EA Profile) 15 February 2017 09:39:46PM 1 point

Would it make sense to donate to the LJAF for promoting open science?

Comment author: Carl_Shulman 15 February 2017 10:06:19PM 2 points

If you were trying to mimic them, I'd give more to some of their grantees, like METRICS or COS.

Comment author: BenHoffman 15 February 2017 06:27:59PM * 1 point

This seems like evidence for a combination of the second and third possibilities in the trilemma. Either GiveWell should expect to be able to point to empirical evidence of dramatic results soon (if not already), or it should expect to reach substantially diminishing returns, or both.

I agree that there are lots of practical reasons why you can't just firehose this stuff - that's part of the diminishing returns story!

I could imagine a scenario that slips in between 2 and 3, like you don't hit substantially diminishing returns on malaria until the last 1% of incidence, but is there reason to think that's the case?

Comment author: Carl_Shulman 15 February 2017 09:57:54PM 2 points

I could imagine a scenario that slips in between 2 and 3, like you don't hit substantially diminishing returns on malaria until the last 1% of incidence, but is there reason to think that's the case?

I suggest reading about the Gates malaria eradication plans, including the barriers that led Gates to think ITNs (insecticide-treated bednets) alone can't achieve eradication.

Comment author: BenHoffman 15 February 2017 04:27:09PM * 0 points

GBD 2015 estimates that communicable, maternal, neonatal, and nutritional deaths worldwide amounted to about 10 million in 2015. And they are declining at a rate of about 20% per decade. If at current cost-effectiveness levels, top charities could scale up to solve that whole problem, then if we assume a cost of $5,000 per life saved, the whole thing would cost $50 billion/yr. That's more than Good Ventures has on hand - but it's not an order of magnitude more. It's not more than Good Ventures and its donors and the Gates foundation ($40 billion) and Warren Buffett's planned gifts to the Gates Foundation add up to - and all of those parties seem to be interested in this program area.

That's an extreme upper bound. It's not limited to the developing world, or to especially tractable problems. You almost certainly can't scale up that high at current costs - after all, the GiveWell top charities are supposed to be the ones pursuing the most important low-hanging fruit, tractable interventions for important but straightforward problems. But then, how high can you scale up at similar cost-effectiveness numbers? Can you do a single disease? For one continent? One region? One country? Now, we're getting to magnitudes that may fall well within Good Ventures's ability to fund the whole thing. (Starting with a small area where you can show clear gains is not a new idea - it's the intuition behind Jeffrey Sachs's idea of millennium villages.) And remember that once you wipe out a communicable disease, it's much cheaper to keep it away; when's the last time people were getting smallpox? Similarly, nutritional interventions such as food fortification tend to be permanent. There's a one-time cost, and then it's standard practice.

GBD 2015 estimates that there are only about 850,000 deaths due to neglected tropical diseases each year, worldwide. At $5,000 per life saved, that's about $4.2 billion to wipe out the whole category. Even less if you focus on one continent, or one region, or one country. To name one example, Haiti is a poor island with 0.1% of the world's population; can we wipe out neglected tropical diseases for $4.2 million there? $40 million?
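Multiplying these out (a quick sanity check; all inputs are the figures already stated above):

```python
# Back-of-envelope totals from the figures in this comment.
cost_per_life = 5_000                  # assumed cost per life saved ($)

# Communicable, maternal, neonatal, and nutritional deaths (GBD 2015):
annual_deaths = 10_000_000
print(annual_deaths * cost_per_life)   # 50_000_000_000 -> $50 billion/yr

# Neglected tropical diseases:
ntd_deaths = 850_000
ntd_total = ntd_deaths * cost_per_life
print(ntd_total)                       # 4_250_000_000 -> ~$4.2 billion

# Haiti, at ~0.1% of world population:
print(ntd_total * 0.001)               # ~$4.25 million
```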

Comment author: Carl_Shulman 15 February 2017 06:01:06PM 2 points

I don't think linear giving opportunities closely analogous to bednets will take $10BB without diminishing returns (although you might be able to beat that with R&D, advocacy, gene drives, and other leveraged strategies for a longer period). But I think this is a flawed argument.

If at current cost-effectiveness levels, top charities could scale up to solve that whole problem, then if we assume a cost of $5,000 per life saved, the whole thing would cost $50 billion/yr. That's more than Good Ventures has on hand - but it's not an order of magnitude more. It's not more than Good Ventures and its donors and the Gates foundation ($40 billion) and Warren Buffett's planned gifts to the Gates Foundation add up to - and all of those parties seem to be interested in this program area.

The original text strongly suggested a one-time cost, not a recurring annual cost. Given diminishing returns within a single year (especially as programs are scaled up; BMGF has ramped up its spending over time), the fact that they don't spend everything in a firehose in a single year is far from shocking. (Note that BMGF has also spent a lot on US education; it's not purely global-poverty focused, although that is its main agenda.)

GWWC's FAQ claims:

The Institute of Health Metrics and Evaluation estimated that between 2000 and 2014 the $73.6 billion spent on child health by donors (including both private and public) averted the death of 14 million infants and children. This is in addition to the $133 billion spent on child health by low- and middle-income country governments, which is estimated to have averted the deaths of 20 million children.

The annual figure for this is ~$14 billion (and not all of it spent where the evidence is best, including losses to corruption, etc.).
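For reference, the arithmetic behind that annual figure (treating 2000-2014 as 15 years is my assumption):

```python
# Annualizing the IHME figures quoted above.
donor_spend = 73.6e9        # donor spending on child health, 2000-2014 ($)
government_spend = 133e9    # low- and middle-income government spending ($)
years = 15                  # 2000-2014 inclusive (assumption)

print((donor_spend + government_spend) / years)  # ~1.38e10 -> ~$14 billion/yr

# Implied cost per child death averted across both funding streams:
deaths_averted = 14e6 + 20e6
print((donor_spend + government_spend) / deaths_averted)  # ~$6,100
```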

Gates Foundation spending is several billion dollars per year spread across a number of areas.

Total spending in these areas is not so large that a billion dollars a year is a drop in the bucket, and these diseases have been massively checked or reduced (e.g. malaria, vaccinations, slowing HIV infections, smallpox eradication, salt iodization, etc).

And we haven't explicitly talked about possible leverage from R&D and advocacy in poverty.

Starting with a small area where you can show clear gains is not a new idea - it's the intuition behind Jeffrey Sachs's idea of millennium villages

Those were criticized at the time for spending so much on the same people - including on less well-supported interventions, and past the point of diminishing returns - rather than delivering more cost-effective interventions across a larger number of people. Local effectiveness of medical interventions is tested in clinical trials.

And remember that once you wipe out a communicable disease, it's much cheaper to keep it away; when's the last time people were getting smallpox?

Smallpox was a disease found only in humans, with a highly effective vaccine. Such diseases are regularly locally extirpated, although getting universal coverage around the world to the last holdout regions (civil war, conspiracy theories about the vaccinations) can be very hard, as in polio eradication, and infectious diseases can quickly recolonize afterwards (malaria rebounded after the failed 1960s eradication effort in places without continuing high-quality prevention). But polio eradication is close, and is a priority of e.g. Gates Foundation funding. It's also quite expensive: more than $10 billion so far. For harder-to-control diseases without vaccines, like malaria, even more so (and you couldn't just spend more in a big-bang single year and be sure you haven't missed a spot).

Comment author: BenHoffman 14 February 2017 06:32:59PM * 1 point

If a substantial share of other donors were already observed defecting, that seems like it would be the single most important consideration to mention in the 2015 post explaining splitting, and I am baffled as to why it was left out.

It seems like genuinely unfriendly behavior on the part of other donors and it would have been a public service at that time to call them out on this.

Comment author: Carl_Shulman 15 February 2017 04:33:16AM * 6 points

that seems like it would be the single most important consideration to mention in the 2015 post explaining splitting, and I am baffled as to why it was left out.

Ben, you have advocated just giving to the best thing at the margin, simply. Doing that while taking room for more funding into account automatically results in what you are calling 'defecting' in this post (which I object to, since the game-theoretic analogy is dubious, and you're using it in a highly morally charged way to criticize a general practice with respect to a single actor). That's a normal way of assessing donations in effective altruism, and common among strategic philanthropists.

The 'driving away donors' bit was repeatedly discussed, as was the routine occurrence of such issues in large-scale philanthropy (where foundations bargain with each other over shares of funding in areas of common interest).
