Comment author: the_jaded_one 06 April 2017 05:37:09PM *  2 points [-]

these bottom lines remain in every estimate of the global income distribution I’ve seen so far... Many people in the world live in serious absolute poverty, surviving on as little as one hundredth the income of the upper-middle class in the US.

But is this bottom line really approximately true?

A salary of $70,000 could be considered upper-middle-class. 1/100th of $70,000 is $700.

According to the chart, that is slightly greater than the income of the median Indian, adjusted for PPP.

Since these figures are PPP-adjusted, that should mean that $700 in Western Europe or the US affords you the same quality of life as the median Indian person has - without any additional resources such as extra meals from sympathetic passers-by or free accommodation in a shelter (otherwise, to be 100 times richer, you would also need 100 units per day of those additional resources: $70,000 plus 100 meals a day plus low-quality accommodation for 100 people).

However, $700/year (= $1.91/day, = €1.80/day, = £1.53/day), without gifts or handouts, is not enough money to stay alive in the West. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter. Clearly, the median person in India is better off than a dead person.

A realistic minimum amount of money needed to survive in the West is probably $2,000-$5,000/year, again without gifts or handouts - implying that to be 100 times richer than the median Indian, you would have to earn at least $200,000-$500,000 net of tax (or at least net of the portion of tax that isn't spent on things that benefit you, which at that income level is almost all of it, unless you are somehow getting huge amounts of government money spent on you in particular).
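To make the arithmetic explicit, here is a minimal sketch in Python (the per-day and 100x figures follow directly from the numbers above; the euro and pound rates are assumed approximate 2017 values):

```python
# Toy check of the figures above (currency rates are assumed ~2017 values).
annual_income = 700        # 1/100th of a $70,000 upper-middle-class salary
usd_to_eur = 0.94          # assumption: approximate 2017 exchange rate
usd_to_gbp = 0.80          # assumption: approximate 2017 exchange rate

per_day = annual_income / 365
print(f"${per_day:.2f}/day, €{per_day * usd_to_eur:.2f}/day, "
      f"£{per_day * usd_to_gbp:.2f}/day")
# -> roughly $1.92/day, €1.80/day, £1.53/day

# If bare survival in the West costs $2,000-$5,000/year, being "100 times
# richer" than that floor requires:
for floor in (2_000, 5_000):
    print(f"survival floor ${floor:,}/yr -> 100x = ${floor * 100:,}/yr")
# -> $200,000/yr and $500,000/yr net of tax
```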

The reality is that a PPP conversion factor tries to represent a nonlinear mapping with a single straight line, and it fails badly at the extremes - and the extremes are exactly where this (misleading) factor of 100 comes from.
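One way to illustrate that failure is a toy affine model with a fixed survival floor - the floor and scale below are illustrative assumptions, not measured values, and the sketch only shows how a single multiplicative factor can overstate the gap at the low end:

```python
# Toy model (illustrative only): suppose the Western income needed to match a
# given PPP-adjusted income is affine - a fixed survival floor plus a
# proportional part - rather than purely proportional, as a single PPP factor
# assumes. The floor and scale are made-up numbers for illustration.
FLOOR = 2_000   # assumed fixed yearly cost of bare survival in the West
SCALE = 1.0     # assumed proportional component (figures already PPP-adjusted)

def west_equivalent(income_ppp):
    """Western income giving roughly the same living standard (toy model)."""
    return FLOOR + SCALE * income_ppp

median_indian = 700       # PPP-adjusted figure discussed above
us_upper_middle = 70_000

print(f"single-factor ratio: {us_upper_middle / median_indian:.0f}x")  # 100x
print(f"with a survival floor: "
      f"{us_upper_middle / west_equivalent(median_indian):.0f}x")      # ~26x
```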

Comment author: William_MacAskill 11 April 2017 08:18:03PM *  6 points [-]

"However, $700/year (= $1.91/day, =€1.80/day, =£1.53 /day) (without gifts or handouts) is not a sufficient amount of money to be alive in the west. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter."

One could live on that amount of money per day in the West. You'd live in a second-hand tent and scavenge food from bins (which would count towards your 'expenditure', because we're talking about consumption expenditure, but wouldn't count for much). Your life expectancy would be considerably lower than others' in the West, but probably not lower than the 55-year life expectancy of Burkina Faso (as an example comparison - and bear in mind that number includes infant mortality). Your life would suck very badly, but you wouldn't die, and it wouldn't be that dissimilar to the lives of the millions of people who live in makeshift slums or shanty towns and scavenge from dumps to make a living. (Such people aren't representative of all extremely poor people, but they are a notable fraction.)

Comment author: RobBensinger 31 March 2017 08:43:25PM *  2 points [-]

And it's framed as long-run future because we think that there are potentially lots of things that could have a huge positive impact on the value of the long-run future which aren't GCRs - like humanity having the right values, for example.

I don't have much to add to what Rob W and Carl said, but I'll note that Bostrom defined "existential risk" like this back in 2008:

A subset of global catastrophic risks is existential risks. An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to reduce its quality of life (compared to what would otherwise have been possible) permanently and drastically.

Presumably we should replace "intelligent" here with "sentient" or similar. The reason I'm quoting this is that on the above definition, it sounds like any potential future event or process that would cost us a large portion of the future's value counts as an xrisk (and therefore as a GCR). 'Humanity's moral progress stagnates or we otherwise end up with the wrong values' sounds like a global catastrophic risk to me, on that definition. (From a perspective that does care about long-term issues, at least.)

I'll note that I think there's at least some disagreement at FHI / Open Phil / etc. about how best to define terms like "GCR", and I don't know if there's currently a consensus or what that consensus is. Also worth noting that the "risk" part is more clearly relevant than the "global catastrophe" part -- malaria and factory farming are arguably global catastrophes in Bostrom's sense, but they aren't "risks" in the relevant sense, because they're already occurring.

Comment author: William_MacAskill 04 April 2017 07:00:51PM 0 points [-]

"counts as an xrisk (and therefore as a GCR)"

My understanding: GCR = (something like) risk of major catastrophe that kills 100mn+ people

(I think the GCR book defines it as risk of 10mn+ deaths, but that seemed too low to me).

So, as I was using the term, something being an x-risk does not entail it being a GCR. I'd count 'Humanity's moral progress stagnates or we otherwise end up with the wrong values' as an x-risk but not a GCR.

Interesting (/worrying!) how we're understanding widely-used terms so differently.

Comment author: Carl_Shulman 31 March 2017 07:23:54PM *  13 points [-]

"if you are only considering the impact on beings alive today...factory farming"

The interventions you are discussing don't help any beings alive at the time, but only affect the conditions (or existence) of future ones. In particular, cage-free campaigns, and campaigns for slower-growth genetics and lower crowding among chickens raised for meat, are all about changing the conditions into which future chickens will be born; they don't involve moving any particular chickens from the old systems to the new ones.

I.e. the case for those interventions already involves rejecting a strong presentist view.

"That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:"

Suppose there's an intelligence explosion in 30 years (not wildly unlikely in expert surveys), and an expansion of population by 3-12 orders of magnitude over the following 10 years (with AI life of various kinds outnumbering both the humans and the non-human animals alive today, and with vastly more total computation). Then almost all the well-being of the next 50 years lies in that period.

Also, in that scenario, existing beings could enjoy accelerated subjective speed of thought and greatly enhanced well-being, so most of the QALY-equivalents for long-lived existing beings could lie there.

Comment author: William_MacAskill 04 April 2017 06:55:55PM 5 points [-]

Mea culpa that I switched from "impact on beings alive today" to "benefits over the next 50 years" without noticing.

Comment author: Robert_Wiblin 31 March 2017 12:04:07AM *  12 points [-]

Someone taking a hard 'inside view' about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I'm thinking something like:

1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce this risk by 1% (a relative reduction, i.e. from 10% to 9.9%).

$1 billion / (0.1 risk × 1% reduction × 8 billion lives) = $125 per life saved. Compare with $3,000-$7,000+ per life saved for AMF.
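Spelling that back-of-the-envelope calculation out (a sketch using only the illustrative figures above):

```python
# Back-of-the-envelope version of the figures above.
risk = 0.10                # assumed chance of AI killing everyone within 50 years
relative_reduction = 0.01  # assumed effect of an extra $1bn of safety research
cost = 1e9                 # dollars
population = 8e9           # lives at stake

expected_lives_saved = risk * relative_reduction * population  # 8 million
print(f"${cost / expected_lives_saved:.0f} per life saved")
# -> $125, vs $3,000-$7,000+ for AMF
```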

This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.

I'm probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime).

Comment author: William_MacAskill 31 March 2017 05:13:07PM 1 point [-]

That's reasonable, though if the aim is just "benefits over the next 50 years" I think that campaigns against factory farming seem like the stronger comparison:

"We’ve estimated that corporate campaigns can spare over 200 hens from cage confinement for each dollar spent. If we roughly imagine that each hen gains two years of 25%-improved life, this is equivalent to one hen-life-year for every $0.01 spent." "One could, of course, value chickens while valuing humans more. If one values humans 10-100x as much, this still implies that corporate campaigns are a far better use of funds (100-1,000x) [So $30-ish per equivalent life saved]." http://www.openphilanthropy.org/blog/worldview-diversification

And to clarify my first comment, "unlikely to be optimal" = I think it's a contender, but the base rate for "X is an optimal intervention" is really low.

Comment author: RobBensinger 30 March 2017 09:47:22PM *  12 points [-]

To some, it’s just obvious that future lives have value and the highest priority is fighting existential threats to humanity (‘X-risks’).

I realize this is just an example, but I want to mention as a side-note that I find it weird what a common framing this is. AFAIK almost everyone working on existential risk thinks it's a serious concern in our lifetimes, not specifically a "far future" issue or one that turns on whether it's good to create new people.

As an example of what I have in mind, I don't understand why the GCR-focused EA Fund is framed as a "long-term future" fund (unless I'm misunderstanding the kinds of GCR interventions it's planning to focus on), or why philosophical stances like the person-affecting view and presentism are foregrounded. The natural things I'd expect to be foregrounded are factual questions about the probability and magnitude over the coming decades of the specific GCRs EAs are most worried about.

Comment author: William_MacAskill 30 March 2017 11:34:06PM *  9 points [-]

Agree that GCRs are a within-our-lifetime problem. But in my view mitigating GCRs is unlikely to be the optimal donation target if you are only considering the impact on beings alive today. Do you know of any sources that make the opposite case?

And it's framed as long-run future because we think that there are potentially lots of things that could have a huge positive impact on the value of the long-run future which aren't GCRs - like humanity having the right values, for example.

Comment author: William_MacAskill 08 March 2017 01:59:04AM *  10 points [-]

In my previous post I wrote: “The existence of this would bring us into alignment with other societies, which usually have some document that describes the principles that the society stands for, and have some mechanism for ensuring that those who choose to represent themselves as part of that society abide by those principles.” I now think that's an incorrect statement. EA, currently, is all of the following: an idea/movement, a community, and a small group of organisations. On the 'movement' understanding of EA, analogous movements don't have a community panel similar to what I suggested, and only some have 'guiding principles'. (Though communities and organisations, or groups of organisations, often do.)

Julia created a list of potential analogies here:

[https://docs.google.com/document/d/1aXQp_9pGauMK9rKES9W3Uk3soW6c1oSx68bhDmY73p4/edit?usp=sharing].

The closest analogy to what we want to do comes from the open-source community: many, but not all, of the organisations within it have created their own codes of conduct, many of them very similar to each other.

Comment author: William_MacAskill 11 February 2017 12:07:00AM 4 points [-]

One thing to note, re diversification (which I do think is an important point in general), is that it's easy to think of Open Phil as a single agent rather than as a collection of agents; and because Open Phil is a collective entity, there are gains from diversification even with the funds.

For example, there might be a grant that a program officer wants to make, but there's internal disagreement about it, and the program officer doesn't have time (given the opportunity cost) to convince others at Open Phil why it's a good idea. (This has historically been true for, say, the EA Giving Fund.) Having a separate pool of money would allow them to fund things like that.

Comment author: LukeDing 10 February 2017 11:51:24AM 21 points [-]

Even though I have supported many EA organisations over the years (across meta, x-risk, and global poverty, some at quite early stages) and devote a great deal of time to trying to do it well, I feel the EA Funds could still be really useful.

There is a limit to how much high-quality due diligence one could do. It takes time to build relationships, analyse opportunities, and monitor them. This is also the reason I have not supported some of the EA Ventures projects - not necessarily because of the projects themselves, but because I did not have the bandwidth.

I am really impressed with some of the really high-leverage, high-impact work that Nick Beckstead supported through his donor group; I remember his catalysing the formation of CSER and his early support for Founders Pledge. The possibility of participating in Elie's work beyond top charities also sounds exciting. I have not had the time to analyse animal charities, and this will help too.

I think donating to the EA Funds alongside my existing donations will provide diversification and allow me to support projects to which I do not have direct access, or which I do not have the time and/or resources to support on a standalone basis.

The EA Funds could also have benchmarking and signalling value (especially for less well-known projects) if they publish their donation decisions.

Comment author: William_MacAskill 10 February 2017 11:45:05PM 8 points [-]

Thanks so much for this, Luke! If someone who, like you, dedicates half their working time to philanthropy says "There is a limit to how much high-quality due diligence one could do. It takes time to build relationships, analyse opportunities, and monitor them" - that's pretty useful information!
