My current thoughts on MIRI's "highly reliable agent design" work

Interpreting this writeup: I lead the Open Philanthropy Project's work on technical AI safety research. In our MIRI grant writeup last year, we said that we had strong reservations about MIRI’s research, and that we hoped to write more about MIRI's research in the future. This writeup explains my current... Read More
How I missed my pledge and how I'm fixing it

I failed to meet my pledge this year due to poor budgeting. In this post, I explain what I think happened and how I plan to avoid similar problems in the future. I wrote this post because I felt like I owed the community something for missing my pledge, and I thought... Read More
Three areas of research on the superintelligence control problem

I’ve recently published an introduction to research on superintelligence risk, with the aim of making it easier for students to get into this area.... Read More
How much does work in AI safety help the world?

Using a method of Owen's, we made an interactive tool to estimate the probability that joining the AI safety research community would actually avert existential catastrophe.... Read More
Effective Altruism Global SF panel on AI: question submissions thread

On August 1, I'll be moderating a panel at EA Global on the relationship between effective altruism, astronomical stakes, and artificial intelligence. The panelists will be Stuart Russell (UC Berkeley), Nick Bostrom (Future of Humanity Institute), Nate Soares (Machine Intelligence Research Institute), and Elon Musk (SpaceX, Tesla). I'm very excited... Read More
Research position at Future of Humanity Institute

The Future of Humanity Institute is hiring, and I wanted to personally extend the invitation to the EA community: Applications are invited for a full-time Postdoctoral Research Fellow in Artificial Intelligence (AI) safety within the Future of Humanity Institute (FHI) at Oxford University. The post is fixed-term for 2 years... Read More
Request for proposals for Musk/FLI AI research grants

Recently, Elon Musk donated $10M to fund research on making AI more robust and beneficial, motivated in part by Nick Bostrom's book Superintelligence and by AI's links to existential risk. Many EAs I know are interested in the relationship between artificial intelligence and existential risk, and there has been some... Read More
