Hello Effective Altruism Forum, I am Nate Soares, and I will be here to answer your questions tomorrow, Thursday the 11th of June, 15:00-18:00 US Pacific time. You can post questions here in the interim.
Last Monday, I took the reins as executive director of the Machine Intelligence Research Institute. MIRI focuses on studying technical problems of long-term AI safety. I'm happy to chat about what that means, why it's important, why we think we can make a difference now, what the open technical problems are, how we approach them, and some of my plans for the future.
I'm also happy to answer questions about my personal history and how I got here, or about personal growth and mindhacking (a subject I touch upon frequently in my blog, Minding Our Way), or about whatever else piques your curiosity. This is an AMA, after all!
EDIT (15:00): All right, I'm here. Dang there are a lot of questions! Let's get this started :-)
EDIT (18:00): Ok, that's a wrap. Thanks, everyone! Those were great questions.
(1) Things Executive!Nate will do differently from Researcher!Nate? Or things Nate!MIRI will do differently from Luke!MIRI? For the former, I'll be thinking lots more about global coordination and engaging with interested academics, and lots less about specific math problems. For the latter, the biggest shift is probably going to be something like "more engagement with the academic mainstream," although it's a bit hard to say: Luke probably would have pushed in that direction too, after growing the research team a bit. (I have a lot of opportunities available to me that weren't available to Luke at this time last year.)
(2) The old SIAI definitely made some obvious mistakes; see e.g. Holden Karnofsky’s 2012 critique. Luke tried to transfer a number of the lessons learned to me, but it remains to be seen whether I actually learned them :-) The concrete list includes things like (a) constantly drive to systematize, automate, and outsource the busywork; (b) always attack the biggest constraint (by contrast, most people seem to have a default mode of "try to do everything that meets a certain importance level"); (c) put less emphasis on explicit models that you've built yourself and more emphasis on advice from others who have succeeded in doing something similar to what you're trying to do.
(3) MIRI played a pretty big role in getting long-term AI alignment issues onto the world stage. There are lots and lots of things I've learned from that particular success. Perhaps the biggest is "don't disregard intellectual capital."