I think you've missed the main con: a subtle disadvantage that would only arise over a longer period of time.
Hiring people who aren't value-aligned can exert subtle pressure to drift toward the mainstream over time. I know some people will say something like "why should we trust ourselves over other people?", and my answer is that if you don't hold EA in particularly high regard, you should go find a group you do hold in high regard and support their work instead. Life's too short to waste on a group you find a bit "meh", and there are plenty of other groups out there.
Titoal argues that we should "have normal people around to provide sanity checks". I agree that it's important not to get too caught up in the EA bubble and to maintain an understanding of how the rest of the world thinks, but I don't see this as outweighing the cost of introducing a high risk of value drift.
There is some merit to the argument that value alignment isn't particularly relevant to certain roles, but it's more complicated than that, because people's roles can change over time. Suppose you hire an employee for role X and they later apply to move into role Y, but you turn them down in favour of a more value-aligned, less qualified candidate. That's a recipe for internal conflict. In practice, I suspect there are some roles, such as accountant, where professional skills matter more and the person is more likely to be happy sticking to that particular area.
Within AI risk, it seems plausible that the community is somewhat too focused on risks from misalignment rather than misuse or concentration of power.
My strong bet is that most interventions targeted toward concentration of power end up being net-negative by further proliferating dual-use technologies that can't adequately be defended against.
Do you have any proposed interventions that don't contain this drawback?
Further, why should this be prioritised when there are already many powerful actors dead set on proliferating these technologies as quickly as possible? Count the large open-source labs, the money governments are spending to accelerate commercialisation (which dwarfs spending on AI safety), and the efforts by various university researchers and researchers at commercial labs to publish as much as possible about how to build such systems.
Hot take: I'd likely be less excited about people with decades in the field than about new blood, given that things seem stuck.