Alexander de Vries


The claim that these factors are caused only by GDP is not at all central to my argument against your analysis, nor is it a claim I would make. The key point is that any causal effect of GDP on happiness must necessarily run through other factors like shelter, clean water, health, etc. Nobody (except economists like me) feels more satisfied with their life as a result of hearing that GDP/cap has gone up by 3% this year.

As such, if you control for all of those mediating factors, it will be literally impossible to find a significant effect of GDP/cap on happiness whether or not such an effect actually exists. If the effect is real, such an analysis would necessarily find a false negative.

Similarly, your counterexample would only be valid if I were claiming that GDP is the only factor in happiness. Again, I do not claim that, nor does anyone I know of. There are plenty of factors which account for national average life satisfaction, one of which is GDP. It is perfectly possible for Costa Rica to be higher on other factors and therefore be happier than the US despite a lower GDP.

There are some economists who oppose GDP as a measure of value, and some who support it. If you're appealing to expertise, there's a huge difference between consensus view and "some experts agree with me".

Others have pointed out that most of the factors you take into account are (often strongly) correlated with GDP per capita. What I think is more important from an econometric perspective is that many of them are caused by GDP per capita. If you're trying to measure the effect of (average) income on happiness, and you control for almost all the mediators of income's effect on happiness, of course you'll find that there is almost no independent effect of income left over! 

In my opinion, this analysis, while certainly interesting and useful for other purposes, says nothing at all about the effect of GDP per capita on life satisfaction or happiness.

Great point, I hadn't thought of it that way. Indeed, if you consider "increase in de facto consumption income over the next 4 years" to be a production increase (which I now agree is a reasonable point of view to take), then the long term effects on production are positive. I need to think more about how exactly that works out, but you've possibly convinced me of this.

My remaining point of contention, then, is that you say this argument isn't intuitive to those without experience in the field, which is true, but GiveDirectly didn't even try to make it! The only argument they affirmatively used to support their big claim was pointing vaguely at the Egger et al. (2022) spillovers study, which has general equilibrium models and economic slack as its focus and says very little about durable consumption goods at all. So if GD wants to explain their reasoning to anyone, they should do it more like you did, and IMO a full analysis of multiple pages would be not just worthwhile but necessary.

Thanks for bringing your considerable expertise here! To be clear, my contention was that GiveDirectly hasn't shown this, not that the evidence doesn't exist. If they had made something like your comment, I would never have written this post in the first place.

That said, to engage with what you're saying:

-Many durable goods provide a permanent or long-term improvement in quality of life (like the metal roofs and cement floors you mentioned) without bringing an increase in productive capacity. Those goods are important in their own right, but they don't support the claim being made.

-Here I'm really going out of my knowledge zone, but based on what I currently know, I'm skeptical that kids going to school more in developing countries has a large total effect. The main reason I'm worried about this is that I've seen too many studies where it turns out that most of the teachers aren't showing up at these schools, or the kids can't actually write simple words by age 10, that kind of stuff. This opinion is lightly held because I'm not very familiar with the literature here.

-I do believe some poverty traps exist, and there is probably a long-term effect of cash transfers, mostly through investments like those you described. I just think the effect is likely small to medium.

The main reason I feel like I can make some of these statements with reasonable confidence (besides my econ background) is that, well, anyone can interpret a randomized controlled trial. And the few randomized controlled trials that have been done show mixed to no effects over the long term. More data coming in a few years, though!

You're right, I shouldn't have taken them at their word there, it's probably not a small portion of philanthropic spending. US philanthropic spending is about 500 billion per year, and the US has the highest spending as a % of GDP. I can't find sources for total worldwide spending online, but if I had to guess, it's about 1.5 trillion, in which case the named figure is some 20-ish percent of the total.

Will edit post.

So I did a fair bit more research on this subject for a post I'm writing on it, and from what I can tell, that Blanchflower study you mentioned makes the exact mistake Bartram points out; if you use controls correctly, the U-shape only shows up in a few countries.

This study by Kratz and Brüderl is very interesting - it points out four potential causes of bias in the age-happiness literature and conducts its own analysis without those biases, finding a constant downward slope in happiness. I think they miss the second-biggest issue, though (after overcontrol bias): there is constant confusion between different happiness measures, as I described in my post above, and that really matters when studying a subject with effects this small.

If I ever have time, I'm planning on doing some kind of small meta-analysis, taking the five or so biggest unbiased studies in the field. I'd have to learn some more stats first, though :)

Thanks for doing the back of the envelope calculation here! This made me view blood donation as significantly more effective than I did before. A few points:

  • Your second source doesn't exactly say that one third of blood is used during emergencies, but rather that 1/3rd is used in "surgery and emergencies including childbirth". Not all surgeries are emergencies, and not all emergencies are potentially fatal.
  • However, I think this is more than balanced out by the fact that according to the same source, the other two thirds are used to treat "medical conditions including anemia, cancer, and blood disorders." A lot of those conditions are potentially fatal, so I think it probably actually ends up at more than 1/3rd of blood donated going to life-saving interventions.

I'd love to see someone do the full calculation sometime. Based on this, I expect that for a lot of people, donating blood is sufficiently effective that they should do it once in a while, even instead of an hour of effective work or earning-to-give.
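In case it's useful, here's a minimal Python sketch of how such a back-of-the-envelope calculation might be structured. Every number below (the probabilities that a given use is life-saving, and the units needed per life saved) is a placeholder assumption I made up for illustration, not a figure from the sources above.

```python
# Rough sketch of a back-of-the-envelope estimate of lives saved per blood donation.
# Every number here is an illustrative assumption, not a figure from the cited sources.

share_surgery_emergency = 1 / 3    # "surgery and emergencies including childbirth"
share_medical_conditions = 2 / 3   # anemia, cancer, blood disorders, etc.

# Assumed probabilities that a unit used in each category goes to a potentially
# life-saving intervention.
p_lifesaving_surgery_emergency = 0.2
p_lifesaving_medical = 0.4

# Assumed number of donated units needed per life actually saved.
units_per_life_saved = 10

share_lifesaving = (share_surgery_emergency * p_lifesaving_surgery_emergency
                    + share_medical_conditions * p_lifesaving_medical)

lives_saved_per_donation = share_lifesaving / units_per_life_saved
print(f"Share of donated blood going to life-saving uses: {share_lifesaving:.0%}")
print(f"Rough lives saved per donation: {lives_saved_per_donation:.3f}")
```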

[I'll be assuming a consequentialist moral framework in this response, since most EAs are in fact consequentialists. I'm sure other moral systems have their own arguments for (de)prioritizing AI.]

Almost all the disputes over prioritizing AI safety are really epistemological rather than ethical; the two big exceptions are a disagreement about how to value future persons, and one about how ethics handles very large numbers of people (Pascal's Mugging-adjacent situations).

I'll use the importance-tractability-neglectedness (ITN) framework to explain what I mean. The ITN framework is meant to figure out whether an extra marginal dollar to cause 1 will have more positive impact than a dollar to cause 2; in any consequentialist ethical system, that's enough reason to prefer cause 1. Importance is the scale of the problem: the (negative) expected value in the counterfactual world where nothing is done about it, which I'll denote CEV, for counterfactual expected value. Tractability is the share of the problem which can be solved with a given amount of resources: percent-solved-per-dollar, which I'll denote %/$. Neglectedness means comparing cause 1 to other causes with similar importance-times-tractability and seeing which currently has more funding. In an equation, we should prefer cause 1 when:

$$\text{CEV}_1 \times (\%/\$)_1 > \text{CEV}_2 \times (\%/\$)_2$$
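To make that comparison concrete, here's a tiny illustrative sketch; the function name and all the numbers are mine, purely for illustration.

```python
# Illustrative comparison of the marginal value of a dollar to two causes:
# counterfactual expected value (CEV, here in lives at stake) times
# tractability (share of the problem solved per marginal dollar).

def marginal_value(cev_lives: float, share_solved_per_dollar: float) -> float:
    """Expected lives saved by one extra marginal dollar."""
    return cev_lives * share_solved_per_dollar

# Purely made-up numbers for illustration:
cause_1 = marginal_value(cev_lives=8e9, share_solved_per_dollar=1e-12)
cause_2 = marginal_value(cev_lives=1e6, share_solved_per_dollar=1e-8)

# Prefer cause 1 for the marginal dollar iff its marginal value is higher.
print(cause_1, cause_2, cause_1 > cause_2)
```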

Now let's do ITN for AI risk specifically:

Tractability - This is entirely an epistemological issue, and one which changes the result of any calculations done by a lot. If AI safety is 15% more likely to be solved with a billion more dollars to hire more grad students (or maybe Terence Tao), few people who are really worried about AI risk would object to throwing that amount of money at it. But there are other models under which throwing an extra billion dollars at the problem would barely increase AI safety progress at all, and many are skeptical of spending vast amounts of money, which could otherwise help alleviate global poverty, on an issue with so much uncertainty.

Neglectedness - Just about everyone agrees that if AI safety is indeed as important and tractable as safety advocates say, it currently gets fewer resources than other issues on the same or smaller scales, like climate change and nuclear war prevention.

Importance - Essentially, importance is probability-of-doom [p(doom)] multiplied by how-bad-doom-actually-is [v(doom)], which gives us expected-(negative)-value [CEV] in the counterfactual universe where we don't do anything about AI risk.

The obvious first issue here is an epistemological one: what is p(doom)? A 10% chance of everyone dying is a lot different from 1%, which is in turn very different from 0.1%. And some people think p(doom) is over 50%, or almost nonexistent! All of these numbers have very different implications for how much money we should put into AI safety.
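To show how much this one number moves the answer, here's a small illustrative sketch; the v(doom) and tractability values are placeholder assumptions, not estimates.

```python
# How the implied value per dollar scales with p(doom), holding everything else fixed.
# v(doom) is taken here as the current world population (ignoring future generations);
# the tractability number is a placeholder assumption.

v_doom = 8e9            # lives lost if doom happens
tractability = 1e-12    # assumed share of the problem solved per marginal dollar

for p_doom in [0.5, 0.1, 0.01, 0.001]:
    lives_per_dollar = p_doom * v_doom * tractability
    print(f"p(doom) = {p_doom}: expected lives saved per dollar ≈ {lives_per_dollar:.1e}")
```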

Alright, let's briefly take a step back before looking at how to calculate v(doom). Our equation now looks like this:

$$p(\text{doom}) \times v(\text{doom}) \times (\%/\$)_{\text{AI}} > \text{CEV}_2 \times (\%/\$)_2$$

Assuming that the right side of the equation is constant, we now have three variables that can move around: p(doom), (%/$)_AI, and v(doom). I've shown that the first two have a lot of variability, which can lead to multiple orders of magnitude difference in the results.

The 'longtermist' argument for AI risk is, plainly, that v(doom) is so unbelievably large that the variations in p(doom) and (%/$)_AI are too small to matter. This is based on an epistemological claim and two ethical claims.

Epistemological claim: the expected number of people (or sentient beings) in the future is huge. An OWID article estimates it at between 800 trillion and 625 quadrillion given a stable population of 11 billion on Earth, while some longtermists, assuming things like space colonization and uploaded minds, go up to 10^53 or something like that. This is the Astronomical Value Thesis.

This claim, at its core, is based on an expectation that existential risk will effectively cease to exist soon (or at least drop to very very low levels), because of something like singularity-level technology. If x-risk stays at something like 1% per century, after all, it's very unlikely that we ever reach anything like 800 trillion people, let alone some of the higher numbers. This EA Forum post does a great job of explaining the math behind it.
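Here's a quick illustrative version of that math; the per-century risk, the lives-per-century figure, and the target are rough assumptions based on the numbers above.

```python
# If existential risk stays around 1% per century, the probability of surviving
# long enough to accumulate ~800 trillion lives is vanishingly small.
# "11 billion lives per century" is a rough stand-in for a stable 11-billion population.

risk_per_century = 0.01
lives_per_century = 11e9
target_lives = 800e12   # the OWID-style lowball estimate

centuries_needed = target_lives / lives_per_century            # ~73,000 centuries
p_no_catastrophe = (1 - risk_per_century) ** centuries_needed

print(f"Centuries needed: {centuries_needed:,.0f}")
print(f"P(no existential catastrophe over that span): {p_no_catastrophe:.2e}")
```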

Moral claim 1: We should care about potential future sentient beings; 800 trillion humans existing is 100,000 times better than 8 billion, and the loss of 800 trillion future potential lives should be counted as 100,000 times as bad as the loss of today's 8 billion lives. This is a very non-intuitive moral claim, but many total utilitarians will agree with it.

If we combine the Astronomical Value Thesis with moral claim 1, we get to the conclusion that v(doom) is so massive that it overwhelms nearly everything else in the equation. To illustrate, I'll use the lowball estimate of 800 trillion lives:

$$p(\text{doom}) \times 800{,}000{,}000{,}000{,}000 \times (\%/\$)_{\text{AI}} \quad \text{vs.} \quad \text{CEV}_2 \times (\%/\$)_2$$

You don't need advanced math to know that the side with that many zeroes is probably larger. But valid math is not always valid philosophy, and it has often been said that ethics gets weird around very large numbers. Some people say that this is in fact invalid reasoning, and that it resembles the case of Pascal's mugging, which infamously 'proves' things like the claim that you should exchange $10 for a one-in-a-quadrillion chance of getting 50 quadrillion dollars (after all, the expected value is $50).

So, to finish, moral claim 2: at least in this case, reasoning like this with very large numbers is ethically valid.

 

And there you have it! If you accept the Astronomical Value Thesis and both moral claims, just about any spending which decreases x-risk at all will be worth prioritizing. If you reject any of those three claims, it can still be entirely reasonable to prioritize AI risk, if your p(doom) and tractability estimates are high enough. Plugging in the current 8 billion people on the planet as v(doom):

$$p(\text{doom}) \times 8{,}000{,}000{,}000 \times (\%/\$)_{\text{AI}} \quad \text{vs.} \quad \text{CEV}_2 \times (\%/\$)_2$$

That's still a lot of zeroes!

Context: there has recently been a new phase 1/2b RCT published in The Lancet, confirming an efficacy rate of roughly 80% for the R21/MM malaria vaccine (and confirming that booster shots work).

Quoting https://www.bbc.com/news/health-62797776:

'Prof Hill said the vaccine - called R21 - could be made for "a few dollars" and "we really could be looking at a very substantial reduction in that horrendous burden of malaria".

He added: "We hope that this will be deployed and available and saving lives, certainly by the end of next year."'

If the vaccine makes it through phase III trials, this seems intuitively like a much more effective malaria intervention than bednets.
