Paul_Crowley comments on What Should the Average EA Do About AI Alignment? - Effective Altruism Forum
posted on 25 February 2017 08:07PM
Nitpick: "England" here probably wants to be something like "the south-east of England". There's not a lot you could do from Newcastle that you couldn't do from Stockholm; you need to be within travel distance of Oxford, Cambridge, or London.
Thanks, fixed.
Actually, is anyone other than DeepMind in London? (The section where I brought this up was on volunteering, which I assume is less relevant for DeepMind than for FHI.)
One of the spokes of the Leverhulme Centre for the Future of Intelligence is at Imperial College London, headed by Murray Shanahan.
A technical AI-safety-relevant postdoc position will be opening up with this CFI spoke shortly, looking at trust/transparency/interpretability in AI systems.
... Aaand 33 hours later: https://twitter.com/mpshanahan/status/836249423369756672
Murray will remain involved with CFI, albeit at reduced hours. The current intention is that there will still be a postdoc in trust/transparency/interpretability based at Imperial, although we are looking into the possibility of having a colleague of Murray's supervising or co-supervising.