On August 1, I'll be moderating a panel at EA Global on the relationship between effective altruism, astronomical stakes, and artificial intelligence. The panelists will be Stuart Russell (UC Berkeley), Nick Bostrom (Future of Humanity Institute), Nate Soares (Machine Intelligence Research Institute), and Elon Musk (SpaceX, Tesla). I'm very excited to have this conversation with some of the leading figures in AI safety!
As part of the panel, I'd love to ask our panelists some questions from the broader EA community. To that end, please submit questions below that you'd like to be considered for the event. I'll be selecting a set of these questions and integrating them into our discussion. I can't guarantee that every question will fit into the time allotted, but I'm confident that you can come up with some great questions to facilitate high-quality discussion among our panelists.
Thanks in advance for your questions, and looking forward to seeing some of you at the event!
Mr. Musk has personally donated $10 million via the Future of Life Institute towards a variety of AI safety projects. Additionally, MIRI is currently engaged in its annual fundraising drive with ambitious stretch goals, which include the hiring of several (and potentially many) additional researchers.
With this in mind, is the bottleneck to progress in AI safety research the availability of funding or of researchers? Stated differently, if a technically competent person assesses AI safety to be the most effective cause, which approach is more effective: earning to give to MIRI or FLI, or becoming an AI safety researcher?
Related: What is your estimate of the field's room for more funding over the next few years?