This is a special post for quick takes by Yadav. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

(I am mostly articulating feelings here. I am unsure about what I think should change). 

I am somewhat disappointed with how Manifund has turned out. This isn't a critique of the Manifund team, or a claim that regranting is a bad idea, but after a few months of excitement and momentum, things have somewhat decelerated. While the occasional cool project appears, most of the projects on the website don't seem particularly impressive to me. I also feel that some of the regrantors are slow to move money, though that may itself be a consequence of the weak project pipeline.

Suggestion: Enlarge the font size for pronouns on EA Global/EA retreat name cards

There was a period when I used they/them pronouns and was frequently misgendered at EA events. This likely occurred because I present as male, but regardless, it was a frustrating experience. I often find it difficult to correct people and explicitly mention my preferred pronouns, especially in socially taxing environments like EAGs or retreats. Increasing the size of the pronouns on name cards could be helpful.

I wonder if anyone has examined the pros and cons of protesting against AI labs. I have seen a lot of people who are uncertain about this. It could be useful for someone to write a post on it, even after only putting in maybe <10 hours of thinking.

I'm doing some thinking on the prospects for international cooperation on AI safety, particularly potential agreements to slow down risky AI progress like CHARTS. Does anyone know of a good website or resource that summarizes different countries' current views and policies regarding deliberately slowing AI progress? For example, something laying out which governments seem open to restrictive policies or agreements to constrain the development of advanced AI (like the EU?) versus which ones want to charge full steam ahead, no matter the risks. Or which countries seem undecided or could be persuaded. Basically, I'm looking for something that synthesizes various countries' attitudes and stated priorities when it comes to potentially regulating the pace of AI advancement, especially policies that could slow the race to AGI. Let me know if you have any suggestions!

Not exactly what you're looking for (because it focuses on the US and China rather than giving an overview of lots of countries), but you might find "Prospects for AI safety agreements between countries" useful if you haven't already read it, particularly the section on CHARTS.
