
Maxwell Tabarrok


Comments

Yes, that's fair. I do think that even specific advocacy can have risks, though. Most advocacy is motivated by AI fear, which can be picked up and used to support lots of other bad policies, e.g. how Sam Altman was received in Congress.

I do make the "by default" claim, but I also give reasons why advocating for specific regulations can backfire, e.g. the environmentalist success with NEPA. Environmentalists had huge success in getting the specific legal powers and constraints on government that they asked for, but those have been repurposed in service of default government incentives. Also, advocacy for a specific set of regulations has spillovers onto others. When AI safety advocates make the case for fearing AI progress, they provide support for a wide range of responses to AI, including lots of nonsensical ones.

Thank you for reading and for your well-thought-out comment!

I agree with this and I'm glad you wrote it. 

To steelman the other side, I would point to 16th-century New World encounters between Europeans and Native Americans. It seems like this was a case where the technological advantage of the Europeans made conquest better than comparative-advantage trade.

The high productivity of the Europeans made it easy for them to lawfully accumulate wealth (e.g. buying large tracts of land for small quantities of manufactured goods), but they still often chose to take land by conquest rather than trade.

Maybe transaction frictions were higher there than they would be with AIs, since we'd share a language and be able to use AI tools to communicate.

Thank you for reading and for the kind words :)

Thank you for reading and for your insightful reply!

I think you've correctly pointed out one of the cruxes of the argument: that humans have average "quality of sentience," as you put it. In your analogous examples (except for the last one), we have a lot of evidence to compare things to. We can say with relative confidence where our genetic line or academic research stands in relation to what might replace it because we can measure what average genes or research are like.

So far, we don't have this ability for alien life. If we start updating our estimate of the number of alien life forms in our galaxy, their "moral characteristics," whatever that might mean, will be very important for the reasons you point out.

Thank you for reading and for your detailed comment. In general I would agree that my post is not a neutral survey of the VWH but a critical response, and I think I made that clear in the introduction even if I did not call it red-teaming explicitly. 

I'd like to respond to some of the points you make.

  1. "As Zach mentioned, I think you at least somewhat overstate the extent to which Bostrom is recommending as opposed to analyzing these interventions."

    I think this is overall unclear in Bostrom's paper, but he does have a section called "Policy Implications" right at the top of the paper where he says, "In order for civilization to have a general capacity to deal with “black ball” inventions of this type, it would need a system of ubiquitous real-time worldwide surveillance. In some scenarios, such a system would need to be in place before the technology is invented." I think it is confusing because he starts out analyzing the urn of technology; then, conditional on there being black balls in the urn, he recommends ubiquitous real-time worldwide surveillance; and the 'high-tech panopticon' example is just one possible incarnation of that surveillance that he is analyzing. I think it is hard to deny that he is recommending the panopticon if existential risk prevention is the only value we're measuring. He doesn't claim all-things-considered support, but my response isn't about other considerations of a panopticon. I don't think a panopticon is any good even if existential risk is all we care about. 
     
  2. "You seem to argue (or at least give the vibe) that there's so little value in trying to steer technological development for the better that we should mostly not bother and instead just charge ahead as fast as possible."

    I think this is true insofar as it goes, but you miss what is, in my opinion, the more important second part of the argument. Predicting the benefits of future tech is very difficult, but even if we knew all of that, getting the government to actually steer in the right direction is harder. For example, economists have known for centuries that domestic farming subsidies are inefficient. They are wasteful and they produce big negative externalities. But almost every country on earth has big domestic farming subsidies because they benefit a small, politically active group. I admit that we have some foreknowledge of which technologies look dangerous and which do not. That is far from sufficient for using the government to decrease risk. 

    The point of Enlightenment Values is not that no one should think about the risks of technology and we should all charge blindly forward. Rather, it is that decisions about how best to steer technology for the better can and should be made at the individual level, where they are more voluntary, constrained by competition, and where mistakes are hedged by lots of other people making different decisions. 
     
  3. "A core premise/argument in your post appears to be that pulling a black ball and an antidote (i.e., discovering a very dangerous technology and a technology that can protect us from it) at the same time means we're safe. This seems false, and I think that substantially undermines the case for trying to rush forward and grab balls from the urn as fast as possible."

    There are technologies like engineered viruses and vaccines, but how they interact depends much more on their relative costs. An antidote to a $5-per-infection virus might need to be a $1-per-dose vaccine or $0.50-per-mask PPE. If you just define an antidote as "a technology which is powerful and cheap enough to counter the black ball should the two be pulled simultaneously," then the premise stands.
     
  4. "Do you (the reader) feel confident that everything will go well in that world where all possible techs and insights are dumped on us at once?"

    Until meta-understanding of technology greatly improves, this is ultimately a matter of opinion. If you think there exists some technology that is incompatible with civilization in all contexts, then I can't really prove you wrong, but it doesn't seem right to me. 

    Type-0 vulnerabilities were 'surprising strangelets': not technologies that are incompatible with civilization in all contexts, but risks that come from unexpected phenomena, like the Large Hadron Collider opening a black hole or something like that. 
     
  5. "I think the following bolded claim is false, and I think it's very weird to make this empirical claim without providing any actual evidence for it: "AI safety researchers argue over the feasibility of ‘boxing’ AIs in virtual environments, or restricting them to act as oracles only, but they all agree that training an AI with access to 80+% of all human sense-data and connecting it with the infrastructure to call out armed soldiers to kill or imprison anyone perceived as dangerous would be a disaster."

    You're right that I didn't run any survey of AI researchers for this question. The near-tautological nature of "properly aligned superintelligence" guarantees that if we had it, everything would go well. So yes, probably lots of AI researchers would agree that a properly aligned superintelligence would use surveillance to improve the world. This is a pretty empty statement, in my opinion. The question is about what we should do next. This hypothetical aligned intelligence tells us nothing about what increasing state AI surveillance capacity does on the margin. Note that Bostrom is not recommending that an aligned superintelligent being do the surveillance. His recommendations are about increasing global governance and surveillance on the margin. The AI he mentions is just a machine-learning classifier that can help a human government blur out the private parts of the footage the cameras collect. 
     
  6. "I'm just saying that thinking that increased surveillance, enforcement, moves towards global governance, etc. would be good doesn't require thinking that permanent extreme levels (centralised in a single state-like entity) would be good."

    This is only true if you have a reliable way of taking back increased surveillance, enforcement, and moves towards global governance. The alignment and instrumental-convergence problems I outlined in those sections give strong reasons why these capabilities are extremely difficult to take back. Bostrom barely mentions the issue of getting governments to enact his risk-reducing policies once they have the power to enforce them, let alone offers a mechanism design which would judiciously use its power to guide us through the time of perils and then reliably step down. Without such a plan, the issues of power-seeking and misalignment are not ones you can ignore.

Bostrom may have talked about this elsewhere, since I've heard other people say this, but he doesn't make this point in the paper. He only mentions AI briefly as a tool the panopticon government could use to analyze the video and audio coming in from its surveillance. He also says:

"Being even further removed from individuals and culturally cohesive ‘peoples’ than are typical state governments, such an institution might by some be perceived as less legitimate, and it may be more susceptible to agency problems such as bureaucratic sclerosis or political drift away from the public interest."

He also considers what might be required for a global state to bring other world governments to heel. So I don't think he is assuming that the state can completely ignore all dissent or resistance because it FOOMs into an all-powerful AI. 

Either way, I think that is a really bad argument. It's basically just saying "if we had an aligned superintelligence running the world, everything would be fine," which is almost tautologically true. But what are we supposed to conclude from that? I don't think that tells us anything about increasing state power on the margin. Also, aligning the interests of a powerful AI with a powerful global state is not sufficient for alignment of AI with humanity more generally. Powerful global states are not very well aligned with the interests of their constituents. 

My reading is that Bostrom is making arguments about how human governance would need to change to address risks from some types of technology. The arguments aren't explicitly contingent on any AI technology that isn't available today. 

Bostrom says in the policy recommendations: 

"Some areas, such as synthetic biology, could produce a discovery that suddenly democratizes mass destruction, e.g. by empowering individuals to kill hundreds of millions of people using readily available materials. In order for civilization to have a general capacity to deal with “black ball” inventions of this type, it would need a system of ubiquitous real-time worldwide surveillance. In some scenarios, such a system would need to be in place before the technology is invented."

So if we assume that some black balls like this are in the urn, which I do in the essay, this is a position that Bostrom explicitly advocates, not just one which he analyzes. But even assuming that the VWH is true and a technology like this does exist, I don't think this policy recommendation is helpful.

State-enforced "ubiquitous real-time worldwide surveillance" is neither necessary nor sufficient to address a type-1 vulnerability like this, unless the definition of a type-1 vulnerability trivially assumes that it is. Advanced technology that democratizes protection, like vaccines, PPE, or drugs, can alleviate a risk like this, so a panopticon is not necessary. And a state with ubiquitous surveillance need not stop pandemics to stay rich and powerful, and may indeed create them to keep its position, so a panopticon is not sufficient either. 

Even if we knew a black ball was coming, setting up a panopticon would probably do more harm than good, and it certainly would if we didn't come up with any new ways of aligning and constraining state power. I don't think Bostrom would agree with that statement, but that is what I defend in the essay. Do you think Bostrom would agree with it, on your reading of the VWH?

This might be the best strategy if we're all eventually doomed. Although it might turn out that the tech required to colonize planets comes after a bunch of black balls; at the least, things like nuclear rockets and some bio-tech seem likely to come first. 

Even Bostrom doesn't think we're inevitably doomed, though. He just thinks that global government is the only escape hatch. 
