
Karthik Tadepalli

Economics PhD @ UC Berkeley
2951 karma · Pursuing a doctoral degree (e.g. PhD) · karthiktadepalli.com

Bio

I research a wide variety of issues relevant to global health and development. I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!

Sequences
1

What we know about economic growth in LMICs

Comments
375

AI companies are constrained by the risk that they might not be able to monetize their products effectively enough to recover the insane compute costs of training. As an extreme example, if everyone used free GPT but zero people were willing to pay for a subscription, then investors would become significantly less excited about AI companies, because the potential profits they could expect to recover would be lower than if people were willing to buy subscriptions at a high rate.

So I think it's better to frame the impact of a subscription not as "you give OAI $20" but rather "you increase OAI's (real and perceived) ability to monetize its products by 1/(# of subscribers)".
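A minimal way to make that framing concrete (the subscriber count below is purely illustrative, not an actual figure):

$$
\text{impact of one subscription} \;\approx\; \frac{1}{N_{\text{subscribers}}} \times \text{OAI's (real and perceived) ability to monetize}
$$

So if OAI had, say, $N_{\text{subscribers}} = 10^7$, an individual subscription would shift that signal by one ten-millionth of the total, rather than simply handing over a flat $20.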

Without rehashing the moral offsetting debate, I seriously doubt that there are any AI safety funding options that provide enough benefit to offset the harm of enabling OpenAI. This intuition comes from the fact that Open Phil funds a ton of AI safety work, so your money would only be marginal for AI safety work that falls below their funding bar, combined with my anecdotal (totally could be wrong) view that AI safety projects are more limited by manpower than by money.

This note says:

Total costs saved (direct + indirect) through sexual and reproductive healthcare is sourced from the Copenhagen Consensus, as their analysis indicates that for each dollar spent on SRHR, $120 would be saved.

I suppose the other numbers are extrapolations from this figure, though it's hard to say.

I think Kelsey Piper's article marks a huge turning point. In 2022, there were lots of people saying in an abstract sense "we shouldn't work with AI companies", but I can't imagine that article being written in 2022. And the call for attorneys for ex-OpenAI employees is another step so adversarial I can't imagine it being taken in 2022. Both of these have been pretty positively received, so I think they reflect a real shift in attitudes.

To be concrete, I imagine that if Kelsey had written an article in 2022 about the non-disparagement clause (assuming it existed then), a lot of people's response would have been "this clause is bad, but we shouldn't alienate the most safety-conscious AI company or else we might increase risk". I don't see anyone saying that today. The obvious reason is that people have quickly updated on evidence that OpenAI is not actually safety-conscious. My fear was that they would not update this way, hence my positive reaction.

Obelus seems to be the organizational name under which Asterisk is registered - both the asterisk and the obelus are punctuation symbols, so I highly doubt that Obelus exists separately from Asterisk.

Charging readers is probably an attempt to be financially independent of EV, which is a worthy goal for all EA organizations and especially media organizations that may have good cause to criticize EV at some point.

The eggs and milk quip is just a quip about their new prices; I don't understand what's offensive about it.

The California issue is weird to me too.

[Conflict note: writing an article for Asterisk now]

I find it encouraging that EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies (cf. Why Not Slow AI Progress?). Previously, I worried that social/professional entanglements and image concerns would lead EAs to align with AI companies even after receiving clear signals that AI companies are not interested in safety. I'm glad to have been wrong about that.

Caveat: we've only seen this kind of scrutiny applied to OpenAI, and it remains to be seen whether Anthropic and DeepMind will face the same scrutiny.

I read it as aiming to reduce AI risk by increasing the cost of scaling.

I also don't see how breaking DeepMind off from Google would increase competitive dynamics. Google, Microsoft, Amazon, and other big tech backers are likely to push the labs they own or partner with to race even faster, since they likely care much less about AI risk than the labs themselves do. Coordination between DeepMind and e.g. OpenAI seems much easier than coordination between Google and Microsoft.

I think a neutral world is much better than extinction, and most dystopias are also preferable to human extinction. The latter is debatable, but the former seems clear? What do you have in mind by a neutral world?
