Comment author: RomeoStevens 13 October 2018 07:05:56AM 4 points [-]

Was the question of whether senior talent should have better access to high-quality executive assistants explored?

Actually, more generally, any negative findings would probably be useful too.

Comment author: Sean_o_h 15 October 2018 11:56:13AM 1 point [-]

I can say anecdotally that at different points, access to excellent executive assistants (or people effectively functioning in such roles) has been hugely helpful for me and other folks in xrisk leadership positions I've known.

Comment author: Sean_o_h 29 August 2018 10:02:13AM *  4 points [-]

Great summary, thanks.

"The sink source should then be owned by a team seen as extremely responsible, reliable, and committed to safety above all else. I recommend FHI or MIRI (or both!) take on that role."

Were this to happen, these orgs would not be seen as the appropriate 'owners' by most folk in mainstream AI (I say this as a fan of both). Their work is not really well-known outside of EA/Bay Area circles (other than people having heard of Bostrom as the 'superintelligence guy').

One possible path would be for a high-reputation network to take on this role - e.g. something like the Partnership on AI's safety-critical AI group (which includes a number of people working on long-term safety as well as near-term safety), or something similar. The process might be normalised by focusing on reviewing/advising on risky or dual-use AI research in the near term - e.g. research that highlights new ways of mounting adversarial attacks on current systems, or that enables new surveillance capabilities (e.g. https://arxiv.org/abs/1808.07301). This could help set the precedents for, and establish the institutions needed for, safety review of AGI-relevant research (right now I think it would be too hard to say in most cases what would constitute a 'risky' piece of research from an AGI perspective, given that most of it for now would look like building blocks of fundamental research).

Comment author: Peter_Hurford  (EA Profile) 29 August 2018 04:02:23AM 2 points [-]

Thanks for sharing these reflections. I really appreciate them and it's exciting to see all this progress. I think some additional context about what the Human Level AI multi-conference is would be helpful. It sounds like it was a mix of non-EA and EA AI researchers meeting together?

Comment author: Sean_o_h 29 August 2018 09:50:03AM *  2 points [-]

"It sounds like it was a mix of non-EA and EA AI researchers meeting together?"

Mostly the former; maybe 95% / 5%, or an even higher ratio. It's probably best described as a slightly non-mainstream AI conference (focused on AGI more so than narrow AI, but with high-quality speakers from DeepMind, Facebook, MIT, DARPA, etc.) in which some EA folk participated.

https://www.hlai-conf.org/

Comment author: Sean_o_h 07 August 2018 07:39:51AM 9 points [-]

Incidentally, CSER's Simon Beard has a working paper just up looking at sources of evidence for probability assessments of different Xrisks and GCRs, and the underlying methodologies. It may be useful for people thinking about the topic of this post (I also imagine he'd be pleased to get comments, as this will go to a peer-reviewed publication in due course). http://eprints.lse.ac.uk/89506/1/Beard_Existential-Risk-Assessments_Accepted.pdf

Comment author: Sean_o_h 30 December 2017 10:24:57AM 4 points [-]

"Last year I criticised them for not having produced any online research over several years;" I should have responded to this last year. CSER's first researcher arrived in post in autumn 2015; at the point at which last year's review was done, CSER had been doing active research for 1 year (in the case of the earliest-arrived postdoc), not several. As all were new additions to the Xrisk community, there wasn't already work in the pipeline; getting to the point of having peer-reviewed publications online within a year, when getting up to speed in a new area, is a challenging ask.

Prior to autumn 2015, I was the only employee of CSER (part-time, shared with FHI), and my role was fundraising, grantwriting, research planning, lecture organisation, and other 'get a centre off the ground' activities. I think it's incorrect to consider CSER a research-generating organisation before that point.

We have now had our earliest two postdocs in post for >2 years, and publications are beginning to come through the peer review process.

Comment author: Sean_o_h 21 December 2017 12:57:12PM 7 points [-]

Thank you for all the great and detailed analysis again. +1 on GCRI's great work, on a shoestring budget, this year. I think my comment from last year's version of this post holds word for word, but more strongly (copied below for convenience). I would note that I believe Seth and others on his team are working with some very limited funding horizons, which considerably limits what they can do - EA support would likely make a very big positive difference.

"I would encourage EAs to think about supporting GCRI, whether on AI safety or (especially) GCR more broadly. (1) As you note, they've done some useful analyses on a very limited budget. (2) It's my view that a number of their members (Seth Baum and Dave Denkenberger in particular in my case) have been useful and generous information and expertise resources to this community over the last couple of years (Seth has both provided useful insights, and made very useful networking connections, for me and others in FHI and CSER re: adjacent fields that we could usefully engage with, including biosecurity, risk analysis, governance etc). (3) They've been making good efforts to get more relevant talent into the field - e.g. one of their research associates, Matthias Maas, gave one of the best-received contributed talks at our conference this week (on nuclear security and governance). (4) They're less well-positioned to secure major funding from other sources than some of the orgs above. (5) As a result of (4), they've never really had the opportunity to "show what they can do" so to speak - I'm quite curious as to what they could achieve with a bigger budget and a little more long-term security.

So I think there's an argument to be made on the grounds of funding opportunity constraints, scalability, and exploration value."

Comment author: Sean_o_h 21 December 2017 02:22:43PM 2 points [-]

Also, +1 on the great work being done by AI Impacts (also on a shoestring!)

Comment author: RobBensinger 27 February 2017 08:02:25PM 3 points [-]

Comment author: Sean_o_h 09 March 2017 04:31:10PM 0 points [-]

Murray will be remaining involved with CFI, albeit at reduced hours. The current intention is that there will still be a postdoc in trust/transparency/interpretability based out of Imperial, although we are looking into the possibility of having a colleague of Murray's supervising or co-supervising.

Comment author: RobBensinger 25 February 2017 11:15:11PM 4 points [-]

One of the spokes of the Leverhulme Centre for the Future of Intelligence is at Imperial College London, headed by Murray Shanahan.

Comment author: Sean_o_h 26 February 2017 04:33:19PM 5 points [-]

There will be a technical AI safety-relevant postdoc position opening up with this CFI spoke shortly, looking at trust/transparency/interpretability in AI systems.

Comment author: Sean_o_h 15 December 2016 01:45:54PM 13 points [-]

Thanks for a very detailed and informative post.

As I have before, I would encourage EAs to think about supporting GCRI, whether on AI safety or (especially) GCR more broadly. (1) As you note, they've done some useful analyses on a very limited budget. (2) It's my view that a number of their members (Seth Baum and Dave Denkenberger in particular in my case) have been useful and generous information and expertise resources to this community over the last couple of years (Seth has both provided useful insights, and made very useful networking connections, for me and others in FHI and CSER re: adjacent fields that we could usefully engage with, including biosecurity, risk analysis, governance etc). (3) They've been making good efforts to get more relevant talent into the field - e.g. one of their research associates, Matthias Maas, gave one of the best-received contributed talks at our conference this week (on nuclear security and governance). (4) They're less well-positioned to secure major funding from other sources than some of the orgs above. (5) As a result of (4), they've never really had the opportunity to "show what they can do" so to speak - I'm quite curious as to what they could achieve with a bigger budget and a little more long-term security.

So I think there's an argument to be made on the grounds of funding opportunity constraints, scalability, and exploration value. The argument is perhaps less strong on the grounds of AI safety alone, but stronger for GCR more broadly.
