I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but it is also a question of more general interest. The last time this question was posted, it got some great responses.
This post is a companion post for What posts are you thinking...
I recently published an op-ed in The Crimson advocating, sort of, for an earning-to-give strategy.
The Crimson is widely read among Harvard students, and its content runs through many circles — not just those who care about student journalism.
I thought the piece was ...
Great post!
Do note that, given the context and background, a lot of your peers are probably going to be nudged toward charitable ideas anyway. I'd encourage you to be mindful that your actions have counterfactual impact, while also accounting for the value of your own time and your potential to do good.
I also encourage you to be careful not to epistemically take over other people's world models with claims like "AI is going to kill us all." I think an uncomfortable amount of the space inadvertently and unknowingly does this, and it's one of the key reasons I never started an EA group at my university.
TL;DR: Global performance indicators (GPIs) compare countries' policy performance, encouraging competition and pressuring policymakers to reform. While GPIs can be effective, creating them carries risks such as public backlash. However, certain characteristics can mitigate these ...
I'm a big fan of these intervention reports. They're not directly relevant to anything I'm working on right now, so I'm only skimming them, but they seem high quality to me. I especially appreciate how you draw on both relevant social science external to the movement and more anecdotal evidence and reasoning specific to animal advocacy.
When you summarise the studies, I'd find it more helpful if you summarised the key evidence rather than the authors' all-things-considered views.
E.g. in the cost-effectiveness section you mention that costs are low, seeming to assum...
I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers.
I’ll add links for these episodes once they become available and plan to update ...
Thanks, Philippe! Good luck at Boston!! I wanted to do it this year, but it didn't work out with my schedule.
This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three...
I feel this claim is disconnected from the definition of the singularity given in the paper:
...The singularity hypothesis begins with the supposition that artificial agents will gain the ability to improve their own intelligence. From there, it is claimed that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion in which artificial agents quickly become orders of magnitude more intelligent than their human creators. The result will be a singularity, understood as a fundamental discontinuity
Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. The last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community's growth. I think that event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.
However, I'm less inclined to view the upcoming event as an opportunity to introduce proto-EAs to the community. Recalling the feedback I've heard about the previous Rotterdam edition's perceived extravagance, I'm concerned that the cost may deter potential newcomers. I think we all understand why these events are worth our charitable euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency and effectiveness.
While the funding landscape may have changed (and this problem may have resolved itself as a result), I think it remains crucial to consider the aesthetics of events like these, where the goal is in part to welcome new members into our community.
Manifold is hosting a festival for prediction markets: Manifest 2024! We’ll have serious talks, attendee-run workshops, and fun side events over the weekend. Chat with special guests like Nate Silver, Scott Alexander, Robin Hanson, Dwarkesh Patel, Cate Hall, and...
Hi Ben! Thanks for your comment.
I'm curious what you think the upsides and the downsides are?
I'll also add to what Austin said — in general, I think the strategy of [inviting a highly accomplished person in field X to a conference about field Y] is an underrated way to cross-pollinate between fields. I think this is especially true of something like prediction markets, which are by necessity applicable across disciplines; prediction markets are useless absent something to predict on. This is the main reason I'm in favor of inviting e.g. Rob Mile...
I was originally going to write an essay based on this prompt, but I don't think I actually understand the Epicurean view well enough to do it justice. So instead, here's a quick list of what seem to me to be its implications. I don't exactly agree with the Epicurean view, but I do tend to believe that death in itself isn't bad; it's only bad in that it prevents you from having future good experiences.