
FlorentBerthet comments on I am Nate Soares, AMA! - Effective Altruism Forum



Comment author: FlorentBerthet 10 June 2015 04:49:33PM 9 points

Congrats on the new position!

My question: what advances does MIRI hope to achieve in the next 5 years?

Comment author: So8res 11 June 2015 11:19:23PM 6 points

Short version: FAI. (You said "hope", not "expect" :-p)

Longer version: Hard question, both because (a) I don't know how you want me to trade off between how nice the advance would be and how likely we are to get it, and (b) my expectations for the next five years are very volatile. In the year since Nick Bostrom released Superintelligence, there has been a huge wave of interest in the future of AI (due in no small part to the efforts of FLI and their wonderful Puerto Rico conference!), and my expectations of where I'll be in five years range all the way from "well that was a nice fad while it lasted" to "oh wow there are billions of dollars flowing into the field".

But I'll do my best to answer. The most obvious Schelling point I'd like to hit in 5 years is "fully naturalized AIXI," that is, a solid theoretical understanding of how we would "brute force" an FAI if we had ungodly amounts of computing power. (AIXI is an equation that Marcus Hutter uses to define an optimal general intelligence under certain simplifying assumptions that don't hold in the real world. AIXI is sufficiently powerful that you could use it to destroy the world while demonstrating something that would surely look like "intelligence" from the outside, but it's not yet clear how you could use it to build a generally intelligent system that maximizes something in the world -- for example, even if you gave me unlimited computing power, I wouldn't yet know how to write the program that stably and reliably pursues the goal of turning as much of the universe as possible into diamond.)
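For readers unfamiliar with the formalism being referenced: one standard way to write Hutter's AIXI equation (with horizon m, and using ℓ(q) for the length of program q on a universal Turing machine U) is:

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\left( r_k + \cdots + r_m \right)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

That is, AIXI picks the action maximizing expected total reward, where the expectation is taken over all computable environments, each weighted by the algorithmic (Solomonoff) prior 2^{-ℓ(q)}. The "simplifying assumptions" in question include unbounded computation and the agent being cleanly separated from (and larger than) its environment.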

Formalizing "fully naturalized AIXI" would require a better understanding of decision theory (How do we want advanced systems to reason about counterfactuals? Preferences alone are not enough to determine what counts as a "good action"; that notion also depends on how you evaluate the counterfactual consequences of taking various actions, and we lack a theory of idealized counterfactual reasoning.), logical uncertainty (What does it even mean for a reasoner to reason reliably about something larger than the reasoner? Solomonoff induction basically works by having the reasoner be just friggin' bigger than the environment, and I'd be thrilled if we could get a working theoretical model of "good reasoning" in cases where the reasoner is smaller than the environment.), and a whole host of other problems (many of them covered in our technical agenda).
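The "bigger than the environment" point about Solomonoff induction can be seen directly in its definition. The universal prior assigns to each finite observation string x the weight

```latex
M(x) \;=\; \sum_{p \,:\, U(p) \,=\, x\ast} 2^{-\ell(p)}
```

where the sum ranges over all programs p whose output on the universal machine U begins with x. The predictor effectively runs every computable hypothesis for the entire environment, so it only makes sense for a reasoner with strictly more computational resources than the world it is predicting -- exactly the assumption that fails for an agent embedded in its own environment.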

5 years is a pretty wildly optimistic timeline for developing fully naturalized AIXI, though, and I'd be thrilled if we could make concrete progress in any one of the topic areas listed in the technical agenda.