Comment author: Jay_Shooster 13 April 2017 09:30:31PM *  5 points [-]

"I think this article would have been better if it had noted these issues."

Yes, it would have! Very glad you raised them. This is part of what I had in mind when mentioning "reputational risk" but I'm glad you fleshed it out more fully.

That being said, I think there is a low-cost way to reap the benefits I'm talking about with integrity. Perhaps we have different standards/expectations of what counts as misleading on a resume, and what kind of achievements should be required for certain accolades. Maybe a short application should be required before giving a 20-minute presentation like this. I don't know. But I find it hard to believe that we couldn't be much more generous in bestowing accolades on dedicated members of the community without engaging in deception.

Maybe I can try to restate this in a way that would seem less deceptive...

I genuinely believe that there are tons of deserving candidates for accolades and speaking engagements in our community. I think that we can do more to provide opportunities for these people at a very low cost. I hope to help organize an event like this in NYC. I probably wouldn't leave it open to just anyone to participate, but I would guess (from my experience with the NYC community) that few people would volunteer to speak who didn't have an interesting and informed perspective to share in a 15-minute presentation. Perhaps I have an overly positive impression of the EA community though.

(P.S. I think your response is a model of polite and constructive criticism. Thanks for that!)

Comment author: Robert_Wiblin 13 April 2017 10:36:50PM 3 points [-]

I agree it's possible to do these things without being misleading (e.g. give awards to those who deserve them, and put forward good speakers).

I suspect society adapts to ensure 'no free positive signals' (something like a social equivalent of conservation of energy). Imagine that you did put forward a lousy speaker (not that you were advocating doing this). If it's easy to put on events like this in such a way that nobody involved suffers a reputation hit (e.g. nobody attends and the organisation putting on the event couldn't care less that you put forward a bad speaker), then I bet the line 'gave a talk at a law school' won't actually be that useful on a CV. Or it will quickly become devalued by people who read CVs as they cotton on to what's going on.

While at any point in time there are some misleading signals you can grab that haven't yet been devalued, it's probably more efficient (and more enduring) to gain real skills and translate them into credible signals.

But your post is most charitably read as saying 'give good speakers opportunities to perform' and 'reward people who have done virtuous things'.

Comment author: BenHoffman 12 April 2017 04:44:44AM 2 points [-]

I'm glad I was mistaken about at least part of this - if the stitching-together was originally meant to avoid overstating what percentile someone was in, and originally intended for point estimates rather than to illustrate a trend, then that seems pretty reasonable.

In that context (which I didn't have, and I hope it's clear to you how without that context I'd have drawn the opposite conclusion), using the existing stitched-together data to make a chart seems like a neutral error, the sort of thing someone does because that's the dataset they happen to have lying around. (Unless, of course, someone would have been more likely to notice and flag a chart with a suppressed trend than a chart with an exaggerated one. That sort of bias is very hard to overcome.)

This is why things like keeping track of sources are so important, though. Without that, a decision intended to make a tool more conservative ended up being used in a graph where it could be expected to exaggerate a trend, and no one seems to have noticed until you went digging (for which, again, thank you). I'm glad you intend to do better with your version.

Comment author: Robert_Wiblin 12 April 2017 06:18:16AM *  2 points [-]

Here are some other thoughts off the top of my head. As I see it there are different points this figure could be used to support:

i) The social impact of someone earning, e.g. $100k a year, is potentially quite large, as they are earning more than the global average, making them unusually powerful.

ii) It's high-impact to help people in the developing world because many people are so very poor.

iii) This high level of inequality is an indication of a deep injustice in the economic system that needs to be resolved.

It seems like some folks are particularly worried about the graph being used to support the third point. But I can't actually recall anyone in EA circles using it to make that case (though I think one could try). Our workshop notes that some in the audience may see things that way, but then works to remain neutral on the topic as it would be a big debate in itself.

Point i) seems best measured by someone's disposable income as a fraction of total global disposable income, or at least the average global disposable income.

Point ii) is best made by the ratio of the income of our hypothetical donor to that of someone at the 10th percentile (or whatever income percentile benefits from marginal work by GiveDirectly or AMF). Despite outstanding income growth in the middle of the distribution, IIRC the 10th percentile's income hasn't risen much at all; it remains around the minimum subsistence level. With graduate incomes rising in the US, this ratio has probably increased since 2008. Whether this ratio is 30, 100 or 300 is one factor in how good the opportunities in poverty reduction look as a cause relative to others (what the ratio is and how much it matters is discussed in thejadedone's thread). We turn to this ratio later in our career guide, and recently did a fact check on the incomes of GiveDirectly recipients.

Interestingly, the ratio of a reader’s income to the global median doesn’t seem the best measure for any of these purposes.


Comment author: Robert_Wiblin 12 April 2017 05:35:16AM *  4 points [-]

Hi Ben, thanks for retracting the comment.

The broader concern I share is the risk of data moving from experts to semi-experts to non-experts, with a loss of understanding at each stage. This is basically a ubiquitous problem, and EA is no exception. From looking into this back in 2013 I understand well where these numbers come from, the parts of the analysis that make me most nervous, and what they can and can't show. But I think it's fair to say that there has existed a risk of derivative works being produced by people dabbling in the topic on a tough schedule, and i) losing the full citation, or ii) accidentally presenting the numbers in a misleading way.

A classic case of this playing out at the moment is the confusion around GiveWell's estimated 'cost per life saved' for AMF vs the new 'cost per life saved equivalent'. GiveWell has tried, but research communication is hard. I feel sorry for people who engage in EA advocacy part time, as it's very easy for them to get a detail wrong or have their facts out of date (snap quiz: in light of the latest research, how probable is it that deworming impacts i) weight, ii) school attendance, iii) incomes later in life?). This stuff should be corrected, but with love, as folks are usually doing their best, and not everyone can be expected to fully understand or keep up with research in effective altruism.

One valuable thing about this debate has been that it reminds us that people working on communicating ideas need to speak with the experts who are aware of the details and stress about getting things as accurate as they can be in practice. Ideally one individual should become the point-person who truly understands any complex data source (and gets replaced when staff move on).

Comment author: BenHoffman 11 April 2017 06:00:31PM *  0 points [-]

On reflection, it's not clear to me that anyone has the appropriate level of urgency around this. Two distinct datasets were stitched together at the 80th percentile. The dataset used for the above-80 figures was chosen specifically because it had higher numbers. This chart was then used specifically to illustrate how unequally the quantity was distributed.

This is not a problem on the level of "someone could potentially be misled". This is a problem on the level of "this chart was cooked up specifically to favor the intended conclusion." When you're picking and choosing sources for part of a trend, it stops mattering that the chart was originally based on real data.

It's entirely possible for someone to make this sort of error thoughtlessly rather than maliciously, but now that the error has been discovered, the honest thing to do is promptly and prominently retract the chart, with an explanation.

It's also possible I'm somehow misunderstanding. For instance, I'm confused about why there isn't at least a small discontinuity around the 80th percentile: substantially different methodologies shouldn't produce exactly the same numbers.

Comment author: Robert_Wiblin 12 April 2017 04:22:43AM 4 points [-]

“This is a problem on the level of “this chart was cooked up specifically to favor the intended conclusion....”

Actually, our incentives were the precise reverse when this data was being put together. These figures first appeared in the ‘How Rich Are You Calculator’. In that context we took people who knew their income, and told them what percentage of households they were richer than. It would have been in our interests to include the lowest income numbers possible for the richest folks in the world, in order to inflate what global income percentile people stood at.

That could have been achieved by going with PovcalNet's numbers the whole way. Had we been lazy, that would have been easier than what we actually did, as those numbers were already public. We could then have claimed that an individual earning $36,500 is richer than 99.85% of the world! But this is quite wrong. PovcalNet is designed to be reliable for lower incomes, as part of the World Bank's attempt to measure poverty and economic development around the world. It progressively understates the incomes of people at the top of the income distribution, as they aren't well sampled; hence the need for Milanović's alternative numbers for that group.

GWWC used Milanović’s numbers for as much of the distribution as he gave us data for (i.e. it did not exercise discretion about where to switch).

Unfortunately, I was not working at GWWC when the two datasets were combined, so I wouldn’t want to comment on how that was done. Any new chart should document how things like that are performed (and mine will).

The most material problem as I see it is that PovcalNet and other measures of poverty usually measure consumption (to ensure inclusion of e.g. growing your own food or foraging for free things), while figures for people in developed countries measure income (as that's what people know and what can be found on tax records, while most households don't know their net consumption in any given year). The effect on the shape should be modest:

  • Most people on the graph, which caps out at $100k, are not among the super-rich, which means they will consume most of their lifetime income before they die. The US personal household savings rate is a measly 6%, suggesting pretty small adjustments.
  • A large fraction of people at the bottom of the distribution are not in a position to accumulate significant financial assets; most 'savings' will come in the form of consumer durables (e.g. bricks or a roof on a house) that will be picked up as consumption. Furthermore, using consumption inflates the income of the poor relative to the rich, because it includes things received for free that wouldn't be counted in income measures for people in the developing world.

Nonetheless, I think this does bias the graph towards showing higher inequality. I’m not yet sure how I'll fix this, as I don’t know of reliable figures across the whole distribution that use only one of these measures, or figures of net savings as a percent of income across the income distribution, which could be used to fix the discrepancy. I’m open to ideas or new data sources if anyone has one. In the absence of that we’ll just have to continue explaining this weakness of the method.

I’m looking forward to improving this as far as I can, but I suspect that it won’t change the big picture very much.

Comment author: BenHoffman 08 April 2017 09:21:25PM *  2 points [-]

Thanks for writing this! This is a helpful overview of some of the challenges in coming up with a single quantitative view.

Overall, I think this suggests two things about how to display and interpret the relevant data.

First, when using purely quantitative estimates of distributions in currency terms to illustrate an overall trend, use a variety of different consistent estimates. It seems like when the whole thing you're trying to estimate is income inequality, stitching together different sources for the portions below and above the 80th percentile is very likely to introduce problems. For instance, if your above-80% source is better at detecting income, or otherwise biased upwards relative to your below-80% source, then this will substantially overestimate income inequality.
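To make the stitching worry concrete, here is a toy numeric sketch (every figure below is invented for illustration; this is not real income data): if the above-80% source runs, say, 30% higher than the below-80% source would for the same people, any top-to-bottom ratio computed off the stitched curve inherits that 30% inflation directly.

```python
# Invented toy numbers illustrating how stitching two differently-biased
# sources can exaggerate measured inequality.
bottom_income = 1_000            # hypothetical 10th-percentile income (source A)
top_income_source_a = 100_000    # what source A would report at the 99th percentile
source_b_bias = 1.3              # suppose source B runs 30% high relative to source A

# The stitched curve uses source B above the 80th percentile:
top_income_stitched = top_income_source_a * source_b_bias

ratio_consistent = top_income_source_a / bottom_income  # 100x using one source throughout
ratio_stitched = top_income_stitched / bottom_income    # ~130x: the bias passes straight through
```

The same logic runs in reverse: if the top source were biased low, the stitched chart would understate inequality, which is why drawing each source's full curve separately makes the uncertainty visible.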

I would have liked to instead see the whole curve drawn from PovcalNet numbers, with the trendline from Milanović overlaid on it. Or, ideally, as many different estimated lines as you can measure on the same axes. It's fine to merely footnote or link to explanations of exactly why the lines differ, how they were generated, and your thoughts on which ones better estimate which quantiles. But when you use a single graph, people are likely to assume it's an authoritative illustration of a single data source; showing multiple estimates makes it clearer that there is uncertainty about the details, but not about the fact that the distribution is very unequal.

If you're worried that this would lead to a too-noisy graph, I highly recommend Edward Tufte's books for advice on how to visually display a large amount of quantitative information elegantly.

Second, we shouldn't use these numbers directly to make judgments about specific programs to help poor people. Instead, when trying to evaluate any particular decision, we should make sure we understand how a difference in dollar figures relates to a difference in material conditions, since this will not be perfectly consistent.

For instance, if you use "purchasing power parity" figures, you may get a better estimate of how big differences in material circumstances are, but at the cost of obscuring things like what percentage of someone's income a cash transfer of a certain size will constitute. For this reason, the work charities like GiveDirectly and JPAL are doing directly reporting on what happens as a result of various interventions is extremely important.

Comment author: Robert_Wiblin 08 April 2017 09:47:00PM 1 point [-]

Thanks Ben, this sounds reasonable. I'm working to create a new figure that will use more recent data, adjust for inflation up to 2017, and offer more detail about precisely how it was constructed. I'll keep these ideas in mind.

Unfortunately, as I'm waiting on other busy people to get back to me with the data and information I need, I can't say when I'll be able to put it up.


How accurately does anyone know the global distribution of income?

Cross posted from the 80,000 Hours blog. How much should you believe the numbers in charts like this? People in the effective altruism community often refer to the global income distribution to make various points: The richest people in the world are many times richer than the poor. People...
Comment author: William_MacAskill 30 March 2017 11:34:06PM *  9 points [-]

Agree that GCRs are a within-our-lifetime problem. But in my view mitigating GCRs is unlikely to be the optimal donation target if you are only considering the impact on beings alive today. Do you know of any sources that make the opposite case?

And it's framed as long-run future because we think that there are potentially lots of things that could have a huge positive on the value of the long-run future which aren't GCRs - like humanity having the right values, for example.

Comment author: Robert_Wiblin 31 March 2017 12:04:07AM *  12 points [-]

Someone taking a hard 'inside view' about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I'm thinking something like:

1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce the size of this risk by 1%.

$1 billion / (0.1 risk × 1% reduction × 8 billion lives) = $125 per life saved. That compares with $3,000-7,000+ for AMF.
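The arithmetic above can be checked in a few lines (the inputs are the comment's hypothetical figures, not established estimates):

```python
# Sketch of the expected-value arithmetic; all inputs are the comment's
# illustrative assumptions, not established estimates.
risk = 0.10                 # assumed probability of AI killing everyone within 50 years
relative_reduction = 0.01   # assumed effect of an extra $1b of safety research on that risk
spend = 1_000_000_000       # dollars
lives_at_stake = 8_000_000_000

expected_lives_saved = risk * relative_reduction * lives_at_stake  # 8 million in expectation
cost_per_life = spend / expected_lives_saved

print(round(cost_per_life, 2))  # 125.0 dollars per expected life saved
```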

This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.

I'm probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime).

Comment author: Robert_Wiblin 30 March 2017 11:53:00PM *  2 points [-]

"It also exists amongst academic philosophers"

As far as I can tell, virtually all academic philosophy bottoms out at some kind of intuition jousting. In each philosophical sub-field, no matter what axioms you accept, common sense is going to suffer some damage, and people differ on where they'd least mind taking the hit. And there doesn't seem to be another means of choosing among the most foundational premises on which people's models are built.

I predict nothing will stop people from intuition jousting except a more objective, or dialectically persuasive, way to answer philosophical questions.

In response to EA Funds Beta Launch
Comment author: Brian_Tomasik 02 March 2017 03:29:03AM *  15 points [-]

Open Phil currently tries to set an upper limit on the proportion of an organization’s budget they will provide, in order to avoid dependence on a single funder. In the case where EA Funds generates recurring donations from a large number of donors, Fund Managers may be able to fully fund an organization already identified, saving the organization from spending additional time raising funds from many small donors individually.

It seems like in practice, donations from EA Funds are extremely correlated with OPP's own donations. That is, if OPP decided to stop funding a charity, presumably the EA Funds fund would also stop donating, because the charity no longer looks sufficiently promising. So the risk involved in depending on getting fully funded by OPP + EA Funds is seemingly about as high as the risk of depending on getting fully funded by just OPP. In this case, either fully funding a charity isn't a good thing, or OPP should already be doing it.

This comment isn't very important -- just an observation about argument 1.3.

Comment author: Robert_Wiblin 02 March 2017 07:03:59PM 14 points [-]

I love EA Funds, but my main concern is that as a community we are getting closer and closer to a single point of failure. If OPP reaches the wrong conclusion about something, there are now fewer independent donors forming their own views to correct it. This was already true because of how much people used the views of OPP and its staff to guide their own decisions.

We need some diversity (or outright randomness) in funding decisions for robustness.

Comment author: SoerenMind  (EA Profile) 13 February 2017 02:39:10PM 3 points [-]

If the funding for a problem with a known total funding need (e.g. creating drug x, which costs $1b) goes up 10x, its solvability will go up 10x too. How do you resolve the fact that this makes problems with low funding look very intractable? I guess the high neglectedness makes up for it. But this definition of solvability doesn't quite capture my intuition.

Comment author: Robert_Wiblin 28 February 2017 10:22:57PM 0 points [-]

Don't the shifts in solvability and neglectedness perfectly offset one another in such a case? Can you write out the case you're considering in more detail?
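The offsetting claim can be written out in a toy model (my own stylisation for illustration, not 80,000 Hours' official scoring formulas): if solvability at current funding scales with funding divided by the total need, and neglectedness scales with 1/funding, the two factors cancel and the score depends only on the total need.

```python
# Toy stylisation of the solvability/neglectedness offset (illustrative only).
def score(current_funding, total_need, importance=1.0):
    solvability = current_funding / total_need   # fraction of the problem "bought" so far
    neglectedness = 1.0 / current_funding        # extra dollars matter less in crowded areas
    return importance * solvability * neglectedness  # algebraically = importance / total_need

total_need = 1e9  # e.g. drug x costs $1b in total

low_funding_score = score(1e6, total_need)
high_funding_score = score(1e7, total_need)  # 10x the funding, same overall score
```

On this stylisation, a 10x funding increase multiplies solvability by 10 and divides neglectedness by 10, leaving the product unchanged, which is the exact cancellation the comment is pointing at.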
