> For an agent to conquer the world, I think it would have to be close to the best across all those areas.
That seems right.
> I think this is super unlikely, based on it being super unlikely for a human to be close to the best across all those areas.
I'm not sure that follows? I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs. I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human, who will typically improve in just one or a few domains from one year to the next, depending on where they focus their effort.
> A 10-fold increase in the number of GPUs above GPT-5 would require a 1 to 2.5 GW data center, which doesn’t exist and would take years to build, OR would require decentralized training using several data centers. Thus GPT-5 is expected to mark a significant slowdown in scaling runs.
Why do you think decentralized training using several data centers will lead to a significant slowdown in scaling runs? Gemini was already trained across multiple data centers.
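As an aside, the quoted 1 to 2.5 GW figure seems consistent with a simple back-of-the-envelope. Here is a minimal sketch; the GPU counts, per-GPU wattage, and overhead factor below are all my own assumptions, not numbers from the quoted post:

```python
# Rough power arithmetic behind a "1 to 2.5 GW" figure (back-of-the-envelope;
# the GPU counts, wattage, and PUE below are illustrative assumptions).
def datacenter_power_gw(num_gpus, watts_per_gpu=700, pue=1.3):
    """Facility power in GW: GPU draw times datacenter overhead (PUE)."""
    return num_gpus * watts_per_gpu * pue / 1e9

# If a frontier-scale training run uses on the order of 100k-250k GPUs...
for gpus in (100_000, 250_000):
    print(f"{gpus:>7,} GPUs: {datacenter_power_gw(gpus):.2f} GW"
          f" -> 10x: {datacenter_power_gw(10 * gpus):.1f} GW")
# ~0.09-0.23 GW at current scale, so a 10x scale-up lands around 0.9-2.3 GW.
```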
Interesting post! Another potential downside (which I don't think you mention) is that strict liability could disincentivize information sharing. For example, it could make AI labs more reluctant to disclose new dangerous capabilities or incidents (when that's not required by law). That information could be valuable for other AI labs, for regulators, for safety researchers, and for users.
Can you specify what you mean by "2.7x is a ridiculous number"?
I ask because it does happen that economies grow like that in a fairly short amount of time. For example, since the year 2000, several economies (China and India, for instance) have grown by well over 2.7x.
So I assume you don't mean something like "2.7x never happens". Do you mean something more like "it's hard to find policies that produce 2.7x growth in a reasonable amount of time" or "typically it takes economies decades to grow 2.7x"?
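For concreteness, the "decades" intuition follows from simple compounding arithmetic: at a constant annual growth rate g, growing 2.7x takes ln(2.7)/ln(1+g) years. A quick sketch:

```python
# Years for an economy to grow 2.7x at a constant annual growth rate g,
# from (1 + g)^t = 2.7  =>  t = ln(2.7) / ln(1 + g).
import math

for g in (0.02, 0.03, 0.05, 0.07, 0.10):
    t = math.log(2.7) / math.log(1 + g)
    print(f"{g:.0%} annual growth: {t:.0f} years to 2.7x")
# ~50 years at 2%, ~34 at 3%, ~20 at 5%, ~15 at 7%, ~10 at 10%.
```

So at rich-country growth rates (2-3%) it does take decades, while sustained fast-growth episodes (7-10%) get there in 10-15 years.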
> I think the biggest danger to that reasoning is the premise that they are caused by GDP, and only by GDP, which I quite flatly dispute.
Well, this seems like something that is actually worth finding out. Because if it is the case that GDP (or GDP per capita) does have a significant causal influence on one (or more) of them, then you are conditioning on a mediator, thereby (partially) hiding the causal effect of GDP on the outcome. It seems to me like your model assumes that GDP does not have any causal influence on any of these variables, which seems like a pretty strong assumption. Unless I am misunderstanding something.
(ETA: Similarly, if both GDP and life satisfaction causally influence one of the variables, you are conditioning on a collider. That could introduce a spurious negative correlation masking a real correlation between GDP and life satisfaction, via Berkson's paradox. For example, suppose both life satisfaction and GDP cause social stability. Then, when you stratify by social stability, it would not be surprising to find a spurious negative correlation between GDP and life satisfaction, because a high-social-stability country, if it happens to have relatively low GDP, must have very high life satisfaction in order to achieve high social stability, and vice versa.)
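To make both pitfalls concrete, here is a minimal simulation sketch (my own illustration; `gdp`, `health`, and `stability` are made-up stand-ins, not the actual data): conditioning on a mediator erases a real causal effect, and conditioning on a collider manufactures a spurious negative correlation between two independent variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# --- Pitfall 1: conditioning on a mediator ---
# True model: gdp -> health -> satisfaction (GDP's effect is fully mediated).
gdp = rng.normal(size=n)
health = 0.8 * gdp + rng.normal(size=n)
satisfaction = 0.8 * health + rng.normal(size=n)

def residualize(y, x):
    """Remove the linear component of x from y ("controlling for" x)."""
    slope = np.cov(x, y)[0, 1] / np.var(x)
    return y - slope * x

print("corr(gdp, satisfaction):   ",
      np.corrcoef(gdp, satisfaction)[0, 1])                  # clearly positive
print("partial corr, given health:",
      np.corrcoef(residualize(gdp, health),
                  residualize(satisfaction, health))[0, 1])  # ~0

# --- Pitfall 2: conditioning on a collider (Berkson's paradox) ---
# True model: gdp and satisfaction are INDEPENDENT; both cause stability.
gdp2 = rng.normal(size=n)
sat2 = rng.normal(size=n)
stability = gdp2 + sat2 + rng.normal(scale=0.5, size=n)

high = stability > 1.0  # stratify on high social stability
print("corr overall:              ", np.corrcoef(gdp2, sat2)[0, 1])  # ~0
print("corr among high-stability: ",
      np.corrcoef(gdp2[high], sat2[high])[0, 1])     # spuriously negative
```

In the first case the real effect of `gdp` on `satisfaction` vanishes once you control for the mediator; in the second, stratifying on the collider produces exactly the spurious negative correlation described above.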
> Any attempt of a defense of GDP, specifically, needs to take into account the fact that it’s just a deeply flawed measure of value. That’s why econ nobelists have been arguing against it for over a decade (and likely much longer, given that whole international reports were being published on it in 2012). So even if it *were* more predictive than the model suggests, that still wouldn’t address the fact it’s known to be misleading, all on its own, and not something I would spend a lot of time defending on the merits.
My understanding of these critiques is that they say either that (1) GDP is not intrinsically valuable, (2) GDP does not perfectly measure anything that we care about, or fails to measure many things that we care about, and/or (3) GDP focuses too narrowly on quantifiable economic transactions.
But if you were to find empirically that GDP causes something we do care about, e.g., life satisfaction, then I don't understand how those critiques would be relevant? (1) would not be relevant because we don't care about increasing GDP for its own sake, only in order to increase life satisfaction. (2) would not be relevant because whatever GDP would or would not succeed in measuring, it does measure something, and it would be desirable to increase whatever it measures (since whatever that is causes life satisfaction). (3) would not be relevant because whatever does or does not go into the measure, again, it does measure something, and it would be desirable to increase whatever it measures.
> But perhaps the most definitive argument against the unique value of GDP is in simple counterexamples. Between 2005 and 2022, Costa Rica had a higher life satisfaction than the United States, with less than a third of the GDPpc. This simply wouldn’t be possible if GDP just bought you happiness. Ergo, that simply cannot be the answer.
Your reductio shows that GDP cannot be the only thing that has a causal influence on life satisfaction (assuming measurements are good, etc.). But I don't think OP or anyone else in this comment section is saying that GDP/wealth/money is the only thing that influences life satisfaction, only at most that it is one thing that has a comparatively strong influence on it. And your counterexample does not disprove that.
I don't know if these things make it robustly good, but some considerations:
Let me see if I can rephrase your argument, because I'm not sure I get it. As I understand it, you're saying:
Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better in the areas I mentioned than all humans, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.
If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but that seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.
If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.