
Erich_Grunewald

Associate Researcher @ Institute for AI Policy and Strategy
2371 karma · Working (6-15 years) · Berlin, Germany · www.erichgrunewald.com

Bio

Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).

Comments (267)

Let me see if I can rephrase your argument, because I'm not sure I get it. As I understand it, you're saying:

  1. In humans, higher IQ means better performance across a variety of tasks. This is analogous to AI, where more compute/parameters/data etc. means better performance across a variety of tasks.
  2. AI systems tend to share a common underlying architecture, just as humans share the same basic biology.
  3. For humans, when IQ increases, there are improvements across the board, but still specialization, meaning no single human (the one with the highest IQ) will be better than all other humans at all of those things.
  4. By analogy: For AIs, when they're scaled up, there are improvements across the board, but (likely) still specialization, meaning no single AI (the one with the most compute/parameters/data/etc.) will be better than all other AIs at all of those things.

Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better in the areas I mentioned than all humans, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.

If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but that seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.

If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.

For an agent to conquer the world, I think it would have to be close to the best across all those areas

That seems right.

I think this is super unlikely based on it being super unlikely for a human to be close to the best across all those areas

I'm not sure that follows? I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs. I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human, who will typically improve in just one or a few domains from one year to the next, depending on where they focus their effort.

Yes, that's true. Can you spell out for me what you think that implies in a little more detail?

A 10-fold increase in the number of GPUs above GPT-5 would require a 1 to 2.5 GW data center, which doesn’t exist and would take years to build, OR would require decentralized training using several data centers. Thus GPT-5 is expected to mark a significant slowdown in scaling runs.

Why do you think decentralized training using several data centers will lead to a significant slowdown in scaling runs? Gemini was already trained across multiple data centers.

Interesting post! Another potential downside (which I don't think you mention) is that strict liability could disincentivize information sharing. For example, it could make AI labs more reluctant to disclose new dangerous capabilities or incidents (when that's not required by law). That information could be valuable for other AI labs, for regulators, for safety researchers, and for users.

Thank you for writing this! I love rats and found this -- and especially watching the video of the rodent farm and reading your account of the breeder visit -- distressing and pitiful.

Can you specify what you mean by "2.7x is a ridiculous number"?

I ask because it does happen that economies grow like that in a fairly short amount of time. For example, since the year 2000:

  • China's GDPpc 2.7x'd about 2.6 times
  • Vietnam's did it ~2.4 times
  • Ethiopia's ~2.1 times
  • India's ~1.7 times
  • Rwanda's ~1.3 times
  • The US's GDPpc is on track to 2.7x its 2000 level by about 2029, assuming a 4% annual increase

So I assume you don't mean something like "2.7x never happens". Do you mean something more like "it's hard to find policies that produce 2.7x growth in a reasonable amount of time" or "typically it takes economies decades to 2.7x"?
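
(For what it's worth, the arithmetic behind multiples like these is simple: at a constant annual growth rate g, an economy 2.7x's in ln(2.7)/ln(1+g) years. Here is a minimal sketch of that calculation; the growth rates below are illustrative assumptions, not measured figures.)

```python
import math

def years_to_multiply(multiple: float, annual_growth: float) -> float:
    """Years for GDP per capita to grow by `multiple` at a constant annual growth rate."""
    return math.log(multiple) / math.log(1 + annual_growth)

# Illustrative growth rates (assumptions for the sake of the example).
for label, g in [("8% annual growth", 0.08), ("4% annual growth", 0.04), ("2% annual growth", 0.02)]:
    print(f"{label}: ~{years_to_multiply(2.7, g):.0f} years to 2.7x")
```

At roughly 8% a year, 2.7x takes about 13 years; at 4%, about 25 years; at 2%, about 50 years.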

I think the biggest danger to that reasoning is the premise that they are caused by GDP, and only by gdp, which I quite flatly dispute.

Well, this seems like something that is actually worth finding out. Because if it is the case that GDP (/ GDP per capita) does have a significant causal influence on one (or more) of them, then you are conditioning on a mediator, (partially) hiding the causal effect of GDP on the outcome. It seems to me like your model assumes that GDP does not have any causal influence on any of these variables, which seems like a pretty strong assumption. Unless I am misunderstanding something.

(ETA: Similarly, if both GDP and life satisfaction causally influence one of the variables, you are conditioning on a collider. That could introduce a spurious negative correlation masking a real correlation between GDP and life satisfaction, via Berkson's paradox. For example, suppose both life satisfaction and GDP cause social stability. Then, when you stratify by social stability, it would not be surprising to find a spurious negative correlation between GDP and life satisfaction, because a high-social-stability country, if it happens to have relatively low GDP, must have very high life satisfaction in order to achieve high social stability, and vice versa.)
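
To make the mediator and collider points concrete, here is a minimal simulation sketch. The variables, the linear relationships, and the coefficients are purely illustrative assumptions, not claims about real data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Mediator case (illustrative assumption): GDP -> health -> life satisfaction.
gdp = rng.normal(size=n)
health = 0.8 * gdp + rng.normal(size=n)            # GDP causally improves health
satisfaction = 0.8 * health + rng.normal(size=n)   # health causally improves satisfaction

print(np.corrcoef(gdp, satisfaction)[0, 1])        # real effect is visible (~0.45)
similar_health = np.abs(health) < 0.1              # "control for" the mediator
print(np.corrcoef(gdp[similar_health], satisfaction[similar_health])[0, 1])  # effect mostly vanishes

# Collider case (illustrative assumption): GDP -> stability <- life satisfaction.
gdp2 = rng.normal(size=n)
satisfaction2 = rng.normal(size=n)                 # independent of gdp2 by construction
stability = gdp2 + satisfaction2 + rng.normal(size=n)

print(np.corrcoef(gdp2, satisfaction2)[0, 1])      # ~0 unconditionally
high_stability = stability > np.quantile(stability, 0.8)
print(np.corrcoef(gdp2[high_stability], satisfaction2[high_stability])[0, 1])  # spuriously negative
```

Stratifying by the mediator hides a real effect; stratifying by the collider manufactures a spurious negative one (Berkson's paradox).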

Any attempt at a defense of GDP, specifically, needs to take into account the fact that it’s just a deeply flawed measure of value. That’s why econ nobelists have been arguing against it for over a decade (and likely much longer, given that whole international reports were being published on it in 2012). So even if it *were* more predictive than the model suggests, that still wouldn’t address the fact it’s known to be misleading, all on its own, and not something I would spend a lot of time defending on the merits.

My understanding of these critiques is that they say either that (1) GDP is not intrinsically valuable, (2) GDP does not perfectly measure anything that we care about, or fails to measure many things that we care about, and/or (3) GDP focuses too narrowly on quantifiable economic transactions.

But if you were to find empirically that GDP causes something we do care about, e.g., life satisfaction, then I don't understand how those critiques would be relevant? (1) would not be relevant because we don't care about increasing GDP for its own sake, only in order to increase life satisfaction. (2) would not be relevant because whatever GDP would or would not succeed in measuring, it does measure something, and it would be desirable to increase whatever it measures (since whatever that is, causes life satisfaction). (3) would not be relevant because whatever does or does not go into the measure, again, it does measure something, and it would be desirable to increase whatever it measures.

But perhaps the most definitive argument against the unique value of gdp is in simple counterexamples. Between 2005 and 2022, Costa Rica had a higher life satisfaction than the United States, with less than a third of the GDPpc. This simply wouldn’t be possible, if gdp just bought you happiness. Ergo, that simply cannot be the answer.

Your reductio shows that GDP cannot be the only thing that has a causal influence on life satisfaction (assuming measurements are good, etc.). But I don't think OP or anyone else in this comment section is saying that GDP/wealth/money is the only thing that influences life satisfaction, only at most that it is one thing that has a comparatively strong influence on it. And your counterexample does not disprove that.

I don't know if these things make it robustly good, but some considerations:

  • Raising and killing donkeys for their skin seems like it could scale up more than the use of working donkeys, since (1) there may be increasing demand for donkey skin as China develops economically, and (2) there may be diminishing demand for working donkeys as Africa develops economically. So it could be valuable to have a preemptive norm/ban against slaughtering donkeys for this use, even if the short-term effect is net-negative.
  • It is not obvious that working donkeys have net-negative lives. My impression is that their lives are substantially better than the lives of most factory-farmed animals, though that is a low bar. One reason to think so is that working donkeys' owners live closer to, and are more dependent on, their animals than factory farm operators do, meaning they benefit more from their animals being healthy and happy.
  • Markets in donkey skin could have some pretty bad externalities, e.g., with people who rely on working donkeys for a living seeing their animals illegally poached. (On the other hand, this ban could also make such effects worse, by pushing the market underground.) Meanwhile, working donkeys do useful work, so they probably improve human welfare a bit. (I doubt donkey skin used for TCM improves human welfare.)
  • On non-utilitarian views, you may place relatively more value on not killing animals, and/or relatively less value on reducing suffering. So if you give some weight to those views, that may be another reason to think this ban is net positive.

That makes sense. I agree that capitalism likely advances AI faster than other economic systems. I just don’t think the difference is large enough for the economic system to be a very useful frame of analysis (or point of intervention) when it comes to existential risk, let alone the primary frame.
