Hot on the heels of 80K's excellent AI risk research career profile (https://80000hours.org/career-guide/top-careers/profiles/artificial-intelligence-risk-research/), we're delighted to announce the funding of a new international Leverhulme Centre for the Future of Intelligence, to be led by Cambridge, with spokes at Oxford (Nick Bostrom), Imperial (Murray Shanahan), and Berkeley (Stuart Russell). The Centre proposal was developed by us at CSER, but it will be a stand-alone centre, albeit one collaborating extensively with CSER.

Building on the by-now-familiar "Puerto Rico Agenda", it will have the long-term safe and beneficial development of AI at its core, but with a slightly broader remit than CSER's focus on catastrophic AI risk and superintelligence. For example, it will consider some near-term challenges such as lethal autonomous weapons, as well as some of the longer-term philosophical and practical issues surrounding the opportunities and challenges we expect to face, should greater-than-human-level intelligence be developed later this century.

It builds on the pioneering work of FHI, FLI and others, and on the generous support of Elon Musk, whose (separate) $10M grants programme in January of this year massively boosted this field. One of the most important things this Centre will achieve is taking a big step towards making this global area of research one in which the best talents can expect to have lasting careers - the Centre is funded for a full 10 years, and we will aim to build longer-lasting funding on top of this.

In practical terms, this means that at least ~10 new postdoc positions will be opening up in this space (we're currently pursuing matched funding opportunities) across academic disciplines and locations (Cambridge, Oxford, Berkeley, Imperial and elsewhere). Our first priority will be to identify and hire a world-class Executive Director, who would start in October. This will be a very influential position over the coming years. Research positions will most likely begin in April 2017.

Between now and then, FHI is hiring AI safety researchers, and CSER will be hiring an AI policy postdoc in the spring. I'll have limited time to post before the Christmas break (I'll be away at NIPS and then occupied with funder deadlines and CSER recruitment), but will be happy to post more over the break if desired.

Thank you so much as always to the Effective Altruism community for their support of existential risk/far future work, both financially and intellectually - it has made a huge difference over the last couple of years.

Seán (Executive Director, CSER)

http://www.eurekalert.org/pub_releases/2015-12/uoc-cul120215.php


Human-level intelligence is familiar in biological 'hardware' -- it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be "the biggest event in human history". Professor Stephen Hawking agrees, saying that "when it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right."

Now, thanks to an unprecedented £10 million grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: "Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad".

The Centre is a response to the Leverhulme Trust's call for "bold, disruptive thinking, capable of creating a step-change in our understanding". The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University's Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity's future including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: "The Centre is intended to build on CSER's pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones."

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge's Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, "a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH's vision and expertise."

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John's College, Cambridge, said: "The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks -- from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications."

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: "With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre."


Comments (2)

For example, it will consider some near-term challenges such as lethal autonomous weapons, and as well as some of the longer-term philosophical and practical issues surrounding the

You've missed the end of this sentence.

Thanks so much for the spot, Daniel, greatly appreciated! I was working a little too quickly yesterday :)
