
Chris Leong

Organiser @ AI Safety Australia and NZ
6261 karma · Sydney NSW, Australia

Bio


Currently doing local AI safety movement building in Australia and NZ.

Comments (1030)

I'd love to hear from other people whether the management/leadership crunch has lessened.

A crash in the stock market might actually intensify AI arms races if companies feel they no longer have the option to go slow.

There was anecdotal evidence that some of the concerns and risks relating to outreach to high-school audiences have indeed been borne out to some extent, e.g. reports of participants feeling overwhelmed.


Could you say more about this?

Is there any chance that you could make the content publicly available?

Within AI risk, it seems plausible that the community is somewhat too focused on risks from misalignment rather than misuse or concentration of power.


My strong bet is that most interventions aimed at countering concentration of power end up being net negative by further proliferating dual-use technologies that can't adequately be defended against.

Do you have any proposed interventions that avoid this drawback?

Further, why should this be prioritised when many powerful actors are already dead set on proliferating these technologies as quickly as possible? Consider the large open-source labs, the government spending on accelerating commercialisation (which dwarfs spending on AI safety), and the efforts by various universities and researchers at commercial labs to publish as much as possible about how to build such systems.

I'm confused. Don't you already have a second building? Is it dedicated to events or to hosting more guests?

It knows the concept of cruxes? I suppose that isn’t that surprising in retrospect.

This is a great project idea!

In case you were wondering, "Secure Software Development Practices for Generative AI and Dual-Use Foundation Models" completely avoids any discussion of the fact that releasing a dual-use model could be dangerous, or of the idea that the impacts of any such models should be evaluated before use. This is a truly stunning display of ball-dropping.

Update: I just checked NIST AI 600-1 as well. The report is extremely blasé about CBRN hazards from general AI (admitting, though, that "chemical and biological design tools might pose risks to society or national security"). It quotes the RAND report claiming that the current generation of models doesn't pose any such risks beyond those of web search, neglecting to mention that those results only applied to a model accessed over an API (rather than an open-weights release). As far as the authors are concerned, these risks just need to be "carefully monitored".
