
aaguirre

260 karma · Joined Nov 2020

Bio

Professor of Physics at UCSC, and co-founder of the Future of Life Institute, Metaculus, and the Foundational Questions Institute

Comments (22)

The important things about a pause, as envisaged in the FLI letter, for example, are that (a) it actually happens, and (b) it is not lifted until there is an affirmative demonstration that the risk has been addressed. The FLI pause call was not, in my view, on the basis of any particular capability or risk, but because of the out-of-control race to run ever-larger giant scaling experiments without any reasonable safety assurances. This pause should still happen, and it should not be lifted until there is a way in place to assure that safety. Many of the things FLI hoped could happen during the pause are happening — there is huge activity in the policy space developing standards, governance, and potentially regulations. It's just that now those efforts are racing the un-paused technology.

In the case of "responsible scaling" (for which I think "controlled scaling" or "safety-first scaling" would be better terms), what I think is very important is that there not be a presumption that the pause will be temporary, and lifted "once" the right mitigations are in place. We may well hit a point (and may be there now) where it is pretty clear that we don't know how to mitigate the risks of the next generation of systems we are building (and it may not even be possible), and new, bigger ones should not be built until we can do so. An individual company pausing "until" it believes things are safe is subject to the exact same competitive pressures that are driving scaling now — both against pausing, and in favor of lifting a pause as quickly as possible. If the limitations on scaling come from the outside, via regulation or oversight, then we should ask for something stronger: before proceeding, show those outside organizations that scaling is safe. The pause should not be lifted until or unless that is possible. And that's what the FLI pause letter asks for.

I'm not sure about this, but there is a possibility that this sort of model would violate US online gambling laws. (These laws, along with those against unregulated trading of securities, are the primary obstacles to prediction markets in the US.) IIRC, you can get into trouble with these rules if there is a payout on the outcome of a single event, which seems like it would be the case here. There is a definite gray area, but before setting up such a thing one would certainly want to get some legal clarity.

I'd note that Metaculus is not a prediction market, and there are no assets to "tie up"; Tachyons are not a currency you earn by betting. Nonetheless, as with any prediction system, there are a number of incentives skewing one way or another. But for a question like this I'd say it's a pretty good aggregator of what people who think about such issues (and have an excellent forecasting track record) believe — there's heavy overlap between the Metaculus and EA communities, and most of the top forecasters are pretty aware of the arguments.

Great, thanks! Just PM me (anthony@futureoflife.org) and I'll put you in touch once the project is underway.

Probably some of both; the toolkit we can make available to all, but the capacity to advise will obviously be limited by available personnel.

Totally agree here that what's interesting is the ways in which things turn out well due to agency rather than luck. Of course if things turn out well, it's likely to be in part due to luck — but as you say that's less useful to focus on. We'll think about whether it's worth tweaking the rules a bit to emphasize this.

Even if you don't speak for FLI, I (at least somewhat) do, and agree with most of what you say here — thanks for taking the time and effort to say it!

I'll also add that — again — we envisage this contest as just step 1 in a bigger program, which will include other sets of constraints.

There's obviously lots I disagree with here, but at bottom, I simply don't think it's the case that economically transformative AI necessarily entails singularity or catastrophe within 5 years in any plausible world: there are lots of imaginable scenarios compatible with the ground rules set for this exercise, and I think assigning accurate probabilities amongst them and relative to others is very, very difficult.

Speaking as one partly responsible for that conjunction, I'd say the aim here was to target a scenario that is interesting (AGI) but not too interesting. (It's called a singularity for a reason!) It's arguably a bit conservative in terms of AGI's transformative power, but rapid takeoff is not guaranteed (Metaculus currently gives ~20% probability to >60 months), nor is superintelligence axiomatically the same as a singularity. It is also in a conservative spirit of "varying one thing at a time" (rather than a claim of maximal probability) that we kept much of the rest of the world relatively similar to how it is now.

Part of our goal is to use this contest as a springboard for exploring a wider variety of scenarios and "ground assumptions", and there I think we can try out some that are more radically transformative.

Thanks Hayden!

FLI is also quite funding-constrained, particularly on technical-adjacent policy research work, where in my opinion there is going to be a lot of important research to do and a dearth of resources to do it. For example, the charge to NIST to develop an AI risk assessment framework, just passed in the US NDAA, is likely to be critical to get right. FLI will be working hard to connect technical researchers with this effort, but is very resource-constrained.

I generally consider the idea that AI safety (including research) is not funding-constrained to be incorrect and potentially dangerous — but that's a bigger topic for discussion.
