
Chris Leong

Organiser @ AI Safety Australia and NZ
6315 karma · Sydney NSW, Australia

Bio

Participation: 7

Currently doing local AI safety Movement Building in Australia and NZ.

Comments: 1032

That means people who’ve spent decades building experience in the field will no longer be able to find jobs.


Hot take: I'd likely be less excited about people with decades in the field versus new blood, given that things seem stuck.

I think you've missed the main con, which is quite a subtle disadvantage that would only arise over longer periods of time.

Hiring people who aren't aligned in terms of values can exert subtle pressure on an organisation to drift toward the mainstream over time. I know some people are going to say something along the lines of "why should we trust ourselves over other people?", and my answer is that if you don't have a particularly high regard for EA, you should go find a group that you do hold in high regard and support their work instead. Life's too short to waste on a group you find a bit "meh", and there are a lot of different groups out there.

Titotal argues that we should "have normal people around to provide sanity checks". I agree that it's important not to get too caught up in the EA bubble and to maintain an understanding of how the rest of the world thinks, but I don't see this as outweighing the costs of introducing a high risk of value drift.

There is some merit to the argument that being value-aligned isn't particularly relevant to certain roles, but it's more complex than that because people's roles can change over time. Suppose you hire an employee for role X, they later apply to shift to role Y, and you pass them over in favour of an employee who is more value-aligned but less qualified. That's a recipe for internal conflict. In practice, I suspect there are some roles, such as accountant, where professional skills matter more and the person is more likely to be happy sticking to that particular area.

I'd love to hear from others: has the management/leadership crunch lessened?

A crash in the stock market might actually increase AI arms races if companies don't feel like they have the option to go slow.

There was anecdotal evidence that some of the concerns and risks relating to outreach to high-school audiences have indeed been borne out to some extent, e.g. some evidence that participants found it overwhelming.


Could you say more about this?

Is there any chance that you could make the content publicly available?

Within AI risk, it seems plausible the community is somewhat too focused on risks from misalignment rather than mis-use or concentration of power.


My strong bet is that most interventions targeted toward concentration of power end up being net-negative by further proliferating dual-use technologies that can't adequately be defended against.

Do you have any proposed interventions that avoid this drawback?

Further, why should this be prioritised when there are already many powerful actors dead set on proliferating these technologies as quickly as possible? Count the large open-source labs, all of the money that governments are spending on accelerating commercialisation (which dwarfs spending on AI safety), and all the efforts by various universities and researchers at commercial labs to publish as much as possible about how to build such systems.

I'm confused. Don't you already have a second building? Is that dedicated to events or to more guests?

It knows the concept of cruxes? I suppose that isn’t that surprising in retrospect.
