It seems to me like you disagree with Carl because you write:
- The reason for an investor to make a bet is that they believe they will profit later.
- However, savvy investors who believe in near-term TAI won't value future profits (since they expect to be dead or super-rich anyway).
- Therefore, there is no way for them to win by betting on near-term TAI.
So you're saying that investors can't win by betting on near-term TAI. But Carl thinks they can win.
Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.
Could you say more about what you mean by this?
Thanks for these great questions Ben!
To take them point by point:
if they had explained why their views were not moved by the expert reviews OpenPhil has already solicited.
I included responses to each review, explaining my reactions to it. What kind of additional explanation were you hoping for?
Davidson 2021 on semi-informative priors received three reviews.
By my judgment, all three made strong negative assessments, in the sense (among others) that if one agreed with the review, one would not use the report's reasoning to inform decision-making in the manner advocated by Karnofsky (and by Beckstead).
For Hajek & Strasser's and Halpern's reviews, I don't think "strong negative assessment" is supported by your quotes. The quotes focus on things like 'the reported numbers are too precise' and 'we should use more than a single probability measure', rather than on whether the estimate is too high or too low overall, or whether we should be worrying more vs less about TAI. I also think the reviews are more positive overall than you imply; e.g. Halpern's review says "This seems to be the most serious attempt to estimate when AGI will be developed that I've seen".
Davidson 2021 on explosive growth received many reviews... Two of them made strong negative assessments.
I agree that these two reviewers assign much lower probabilities to explosive growth than I do (I explain why I continue to disagree with them in my responses to their reviews). Again, though, I think these reviews are more positive overall than you imply; e.g. Jones states that the report "is balanced, engaging a wide set of viewpoints and acknowledging debates and uncertainties... is also admirably clear in its arguments and in digesting the literature... engages key ideas in a transparent way, integrating perspectives and developing its analysis clearly and coherently." This is important as it helps us move from "maybe we're completely missing a big consideration" to "some experts continue to disagree for certain reasons, but we have a solid understanding of the relevant considerations and can hold our own in a disagreement".
Thanks for this!
I won't address all of your points right now, but I will say that I hadn't considered that "R&D is compensating for natural resources becoming harder to extract over time", which would increase the returns somewhat. However, my sense is that raw-resource extraction is a small share of GDP, so I don't think this effect would be large.
Sorry for the slow reply!
I agree you can probably beat this average by aiming specifically at R&D for boosting economic growth.
I'd be surprised if you could spend hundreds of millions of dollars per year and consistently beat the average by a large amount (>5x), though:
Another relevant point is that some interventions increase R&D inputs in a non-targeted, or weakly targeted, way: e.g. increasing high-skill immigration to the US, or increasing government funding for broad R&D pots. The 'average R&D' number seems particularly useful for these interventions.
Great question!
I would read Appendix G as conditional on "~no civilizational collapse (from any cause)", but not conditional on "~no AI-triggered fundamental reshaping of society that unexpectedly prevents growth". I think the latter would be incorporated in "an unanticipated bottleneck prevents explosive growth".
I think the question of GDP measurement is a big deal here. GDP deflators determine what counts as "economic growth" as opposed to nominal price changes, but deflators don't really know what to do with new products that didn't previously exist. What was the "price" of an iPhone in 2000? Infinity? Could this help recover Roodman's model? If the ideas being produced end up as new products that never existed before, could that mean that GDP deflators should be "pricing" these replacements as massively cheaper, thus increasing the resulting "real" growth rate?
This is an interesting idea. It wasn't a focus of my work, but my loose impression is that when economists have attempted to correct for these kinds of problems the resulting adjustment isn't nearly large enough to make Roodman's model consistent with the recent data. Firstly, measurements of growth in the 1700s and 1800s face the same problem, so it's far from clear that the adjustment would raise recent growth relative to old growth (which is what Roodman's model would need). Secondly, I think that when economists have tried to measure willingness to pay for 'free' goods like email and social media, the willingness is not high enough to make a huge difference to GDP growth.
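To make the mechanics being debated concrete, here is a toy deflator calculation (all numbers invented for the illustration):

```python
# Toy illustration of how a GDP deflator converts nominal into real growth.
# All numbers are invented for the example.
nominal_2000 = 100.0   # nominal GDP in year 2000
nominal_2001 = 110.0   # nominal GDP in year 2001 (10% nominal growth)
deflator_2000 = 1.00   # price index, base year
deflator_2001 = 1.03   # measured prices rose 3%

real_2000 = nominal_2000 / deflator_2000
real_2001 = nominal_2001 / deflator_2001
real_growth = real_2001 / real_2000 - 1
print(f"real growth = {real_growth:.1%}")  # prints "real growth = 6.8%"
```

The measurement problem raised in the question is that a genuinely new good has no earlier-period price, so the index cannot register its arrival as a (potentially enormous) effective price decline; in practice new goods are typically spliced into the index only once they have observed prices in consecutive periods.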
Thank you for this comment! I'll reply to different points in different comments.
But then the next point seems very clear: there's been tons of population growth since 1880, and yet growth rates are not 4x the 1880 growth rates despite there being 4x the population. The more people -> more ideas thing may or may not be true, but it hasn't translated into more growth.
So if AI is exciting because AIs could start expanding the number of "people" or agents coming up with ideas, why aren't we seeing huge growth spurts now?
The most plausible models have diminishing returns to efforts to generate new ideas. In these models, you need an exponentially growing population to sustain exponential growth. So these models aren't surprised that growth hasn't increased since 1880.
At the same time, these same models imply that if increasing output causes the population to increase (more output -> more people), then there can be super-exponential growth, because the feedback loop makes the population itself grow super-exponentially.
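The two regimes described above can be sketched with a toy semi-endogenous growth model (this is an illustration of the model family, not the report's actual model; the functional form and all parameter values are my own illustrative choices):

```python
# Toy semi-endogenous growth model: ideas A accumulate as dA/dt = L * A**phi,
# with phi < 1 capturing diminishing returns to idea-generation effort.
# All parameter values are illustrative.

def growth_rates(feedback, steps, dt=0.01, phi=0.5, pop_growth=0.02):
    """Return the growth rate of ideas A at each step.

    feedback=False: population L grows exogenously and exponentially.
    feedback=True:  L tracks output/ideas (more output -> more 'people').
    """
    A, L = 1.0, 1.0
    rates = []
    for _ in range(steps):
        dA = L * A**phi * dt
        rates.append(dA / (A * dt))  # instantaneous growth rate of A
        A += dA
        L = A if feedback else L * (1 + pop_growth * dt)
    return rates

exo = growth_rates(feedback=False, steps=2000)  # exogenous exponential pop.
fb = growth_rates(feedback=True, steps=150)     # output -> population loop

# Exogenous case: the growth rate of A settles toward a constant, i.e.
# steady exponential growth despite a growing population.
# Feedback case: the growth rate itself keeps rising -- super-exponential.
```

With exogenous exponential population growth the model settles into ordinary exponential growth (no acceleration since 1880), while closing the output -> population loop makes the growth rate climb without bound, which is the regime the AI argument points at.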
So my overall opinion is that it's 100% consistent to think both:
- more people -> more ideas, and
- growth hasn't accelerated since 1880 (because population growth was merely exponential), while AI that closes the output -> more "people" loop could drive much faster growth.
I agree that bottlenecks like the ones you mention will slow things down. I think that's compatible with this being a "jump forward a century" thing, though.
Let's consider the case of a cure for cancer. First, even if it takes "years to get it out due to the need for human trials and to actually build and distribute the thing", AGI could still bring the cure forward from 2200 to 2040 (assuming we get AGI in 2035).
Second, the excess top-quality labour from AGI could help us route around the bottlenecks you mentioned: