We all know that an extra dollar is worth more to you the poorer you are. That's why it can be good to donate money to an organisation like GiveDirectly even when a few cents in the dollar get used up in transaction costs. But how much more is it worth? Economists have a good quantitative model of what is going on, which lets us make rough comparisons, such as whether people on $1,000 per year would get more value from an extra $100 than people on $2,000 per year would get from an extra $200. This can help us work out how much additional cost we should bear to get money to the very poorest people.
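To make that comparison concrete, here is a minimal sketch of the calculation, assuming the standard isoelastic (CRRA) utility function that this kind of model is built on; the function names and the particular eta values are just illustrative, not taken from the article. With logarithmic utility (eta = 1), an extra $100 on $1,000 and an extra $200 on $2,000 are both 10% increases and so come out as equally valuable; with eta above 1, the poorer recipients gain more.

```python
import math

def crra_utility(c, eta):
    """Isoelastic (CRRA) utility of consumption c; eta = 1 is the logarithmic case."""
    if eta == 1:
        return math.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)

def utility_gain(income, transfer, eta):
    """Utility gained from adding `transfer` to `income`."""
    return crra_utility(income + transfer, eta) - crra_utility(income, eta)

# Illustrative eta values only: 1.0 (logarithmic) and 1.5.
for eta in (1.0, 1.5):
    poorer = utility_gain(1_000, 100, eta)
    richer = utility_gain(2_000, 200, eta)
    print(f"eta={eta}: gain at $1,000 = {poorer:.4f}, "
          f"gain at $2,000 = {richer:.4f}, ratio = {poorer / richer:.2f}")
```

Under eta = 1 the ratio comes out at exactly 1.0 (both transfers are 10% rises), while under eta = 1.5 the gain to the poorer recipients is about 1.4 times as large.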
It can also be useful for improving our thinking about the relative values of different financial flows such as remittances and aid. It is easy to find out the sizes of these flows in dollars, but what about their size in terms of value to the individuals involved? If the individuals in one case are substantially richer, then this can really change things.
I've written an article explaining how all of this works up on centerforeffectivealtruism.org. Have a read and let me know what you think.
I think it is mainly from individuals' explicit preferences over hypothetical gambles for income streams. For example, if you are indifferent between a sure salary of $50,000 p.a. and a 50-50 gamble between a salary of $25,000 and one of $100,000, then that fits logarithmic utility (eta = 1). Note that while people's intuitions about such cases are far from perfect (e.g. they will have status quo bias), this methodology is actually very similar to that of QALYs/DALYs. But I imagine all the methods you mention are used. Also, other methods such as happiness surveys give results in the same ballpark. If asking about the ideal societal distribution, that is actually a somewhat different question, as there could be additional moral reasons in favour of equality or priority to the worst off on top of diminishing marginal utility effects. Eta is typically intended to set aside such issues, though there are other tests to measure those.
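(A quick arithmetic check of that example, added here as a sketch rather than part of the comment: under logarithmic utility, the certainty equivalent of a 50-50 gamble is the geometric mean of the two outcomes, and the geometric mean of $25,000 and $100,000 is exactly $50,000, so indifference at a sure $50,000 does indeed fit eta = 1.)

```python
import math

# Under logarithmic utility (eta = 1), the expected utility of the
# 50-50 gamble between salaries of $25,000 and $100,000...
expected_u = 0.5 * math.log(25_000) + 0.5 * math.log(100_000)

# ...equals the utility of a sure salary at the geometric mean of the two outcomes.
certainty_equivalent = math.exp(expected_u)
print(round(certainty_equivalent))  # ~50000, so indifference at $50,000 fits eta = 1
```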
Thank you Toby. Using 'preference over gambles' as a way of measuring diminishing marginal utility will depend strongly on the expected utility maximization assumption; in practice, I believe it could be vulnerable to reference-point effects. (Also, the logarithmic utility function is obviously an imposed parametric assumption, but a good start.)
Still, these approaches seem reasonable, especially insofar as broadly similar results come from varying contexts.