This seems aimed at regulators; I'd be more interested in a version for orgs like the CIA or NSA.
Both those orgs seem to have a lot more flexibility than regulators to more or less do what they want when national security is an issue, and AI could plausibly become just that kind of issue.
So 'policy ideas for the NSA/CIA' could be both more ambitious and more actionable.
I did write the survey assuming AI researchers have at least been exposed to these ideas, even if they were completely unconvinced by them, as that matches my personal experience of AI researchers who don't care about alignment. But if my experience doesn't generalize, I agree that more explanation is necessary.
I agree that private docs and group chats are totally fine and normal. The bit that concerns me is 'discuss how to position themselves and how to hide their more controversial views or make them seem palatable', which seems like a problematic thing for leaders to be doing in private. (Just to reiterate, I have zero evidence for or against this happening.)
I think there's something epistemically off about letting users filter for only bad AI news. The first tag doesn't have that problem, but I'd still worry about missing important info. I prefer the approach of simply asking users to stay vigilant against the phenomenon I described.