Raemon comments on What Should the Average EA Do About AI Alignment? - Effective Altruism Forum




Comment author: Paul_Crowley 25 February 2017 09:05:30PM 2 points

Nitpick: "England" here probably wants to be something like "the south-east of England". There's not a lot you could do from Newcastle that you couldn't do from Stockholm; you need to be within travel distance of Oxford, Cambridge, or London.

Comment author: Raemon 25 February 2017 10:37:20PM 1 point

Thanks, fixed.

Actually, is anyone other than DeepMind in London? (The section where I brought this up was on volunteering, which I assume is less relevant for DeepMind than for FHI.)

Comment author: RobBensinger 25 February 2017 11:15:11PM 4 points

One of the spokes of the Leverhulme Centre for the Future of Intelligence is at Imperial College London, headed by Murray Shanahan.

Comment author: Sean_o_h 26 February 2017 04:33:19PM 5 points

There will shortly be a technical AI-safety-relevant postdoc position opening up with this CFI spoke, looking at trust/transparency/interpretability in AI systems.

Comment author: RobBensinger 27 February 2017 08:02:25PM 3 points

Comment author: Sean_o_h 09 March 2017 04:31:10PM 0 points

Murray will be remaining involved with CFI, albeit at reduced hours. The current intention is that there will still be a postdoc in trust/transparency/interpretability based out of Imperial, although we are looking into the possibility of having a colleague of Murray's supervising or co-supervising.