This is a special post for quick takes by EffectiveAdvocate. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I can't find a better place to ask this, but I was wondering whether (and where) there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi Mowshowitz and Eliezer Yudkowsky. The question came up briefly in the recent 80,000 Hours podcast with Zvi, and I know from his blog that he is also very sceptical of interventions for non-human animals, but I had a hard time finding a clear explanation of where this belief comes from.

I really like Zvi's work, and he has been right about a lot of things I was initially on the other side of, so I would be curious to read more of his or similar people's thoughts on this.

This seems like a place where there is potentially a motivation gap: people who don't work on animal welfare have little incentive to convince me that the things I work on are not that useful.

Perhaps the large uncertainty around the question also makes people less likely to argue against it publicly. I imagine many people believe, with very low confidence, that some interventions for non-human animals are not the most cost-effective, but stay relatively quiet because of that uncertainty.

Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. The last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community's growth. I think that event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.

However, I'm less inclined to view the upcoming event as an opportunity to introduce proto-EAs to the community. Recalling how expensive the previous edition in Rotterdam appeared, I'm concerned that the cost may deter potential newcomers, especially given the feedback I've heard about its perceived extravagance. I think we all understand why these events are worth our charitable euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency and effectiveness.

While the funding landscape may have changed (and this problem may have solved itself as a result), I think it remains crucial to consider the aesthetics of events like these, where the goal is in part to welcome new members into our community.

Thanks for sharing your thoughts! 

It's a pity you don't feel comfortable inviting people to the conference - that's the last thing we want to hear!

So far our visual style for EAGxUtrecht hasn't been austere[1] so we'll think more about this. Normally, to avoid looking too fancy, I ask myself: would this be something the NHS would spend money on?

But I'm not sure how to balance the appearance of prudence with making things look attractive. Things that make me lean towards making things look attractive include:

  • This essay on the value of aesthetics to movements 
  • This SSC review, specifically the third reason Pease mentions for the Fabians' success
  • The early success of SMA and their choice to spend a lot on marketing and design
  • Things I've heard from friends who could really help EA, saying things like, "ugh, all this EA stuff looks the same/like it was made by a bunch of guys"

For what it's worth, the total budget this year is about half of what was spent in 2022, and we have the capacity for almost the same number of attendees (700 instead of 750). 

In case it's useful, here are some links that show the benefits of EAGx events. I admit they don't provide a slam-dunk case for cost-effectiveness, but they might be useful when talking to people about why we organise them: 

  • EAGx events seem to be a particularly cost-effective way of building the EA community, and we think the EA community has enormous potential to help build a better world. 
  • Open Philanthropy’s 2020 survey of people involved in longtermist priority work (a significant fraction of work in the EA community) found that about half of the impact that CEA had on respondents was via EAG and EAGx conferences.
  • Anecdotally, we regularly encounter community members who cite EAGx events as playing a key part in their EA journey. You can read some examples from CEA’s analysis

Thanks again for sharing your thoughts! I hope your pseudonymous account is helping you use the forum, although I definitely don't think you need to worry about looking dumb :)

  1. ^

    We're going for pink and fun instead. We're only going to spend a few hundred euros on graphic design. 

Hi James, I feel quite guilty for prompting you to write such a long, detailed, and persuasive response! Striving to find a balance between prudence and appeal seems like the ideal goal. Using the NHS's spending habits as a heuristic to avoid extravagance seems smart (although I would not say that this should apply to other events!). Most importantly, I am relieved to learn that this year's budget per person will likely be significantly lower.

I totally agree that these events are invaluable. EAGs and EAGxs have been crucial in expanding my network and enhancing my impact and agency. However, as mentioned, I am concerned about perceptions. Having heard this I feel reassured, and I will see who I can invite! Thank you! 

That's nice to read! But please don't feel guilty, I found it to be a very useful prompt to write up my thoughts on the matter. 

Recent announcements from Meta have had me thinking more about "open source" AI systems, and I am wondering whether it would be worthwhile to reframe open-source models by referring to them as "models with publicly available weights" or "free-weight models".

This is not just more accurate, but also a better political frame for those (like me) who think that releasing model weights publicly is probably not going to lead to safer AI development.

We can also talk about irreversible proliferation.
