April 27 - May 4
DIY Debate Week

Write posts with slider-polls. Read this to learn how.



Quick takes


This article gave me 5% more energy today. I love the no fear, no bull#!@$, passionate approach. I hope this kindly packaged "get off your ass privileged people" can spur some action, and it's great to see these sentiments front and center in a newspaper like the Guardian!

https://www.theguardian.com/lifeandstyle/2025/apr/19/no-youre-not-fine-just-the-way-you-are-time-to-quit-your-pointless-job-become-morally-ambitious-and-change-the-world?CMP=Share_AndroidApp_Other 


Should I Be Public About Effective Altruism?

TL;DR: I've kept my EA ties low-profile due to career and reputational concerns, especially in policy. But I'm now choosing to be more openly supportive of effective giving, despite some risks.

For most of my career, I’ve worked in policy roles—first as a civil servant, now in an EA-aligned organization. Early on, both EA and policy work seemed wary of each other. EA had a mixed reputation in government, and I chose to stay quiet about my involvement, sharing only in trusted settings.

This caution gave me flexibility. My public profile isn’t linked to EA, and I avoided permanent records of affiliation. At times, I’ve even distanced myself deliberately. But I’m now wondering if this is limiting both my own impact and the spread of ideas I care about.

Ideas spread through visibility. I believe in EA and effective giving and want it to become a social norm—but norms need visible examples. If no one speaks up, can we expect others to follow?

I’ve been cautious about reputational risks—especially the potential downsides of being tied to EA in future influential roles, like running for office. EA still carries baggage: concerns about longtermism, elitism, the FTX/SBF scandal, and public misunderstandings of our priorities. But these risks seem more manageable now. Most people I meet either don’t know EA, or have a neutral-to-positive view when I explain it. Also, my current role is somewhat publicly associated with EA, and that won’t change. Hiding my views on effective giving feels less justifiable.

So, I’m shifting to increased openness: I’ll be sharing more and be more honest about the sources of my thinking, my intellectual ecosystem, and I’ll more actively push ideas around effective giving when relevant. I’ll still be thoughtful about context, but near-total caution no longer serves me—or the causes I care about.

This seems likely to be a shared challenge. I'm curious to hear how others are navigating it and whether your thinking has changed lately.


Announcing PauseCon, the PauseAI conference.
Three days of workshops, panels, and discussions, culminating in our biggest protest to date.
Twitter: https://x.com/PauseAI/status/1915773746725474581
Apply now: https://pausecon.org


People often appeal to Intelligence Explosion/Recursive Self-Improvement as a win condition for current model developers, e.g. Dario argues Recursive Self-Improvement could enshrine the US's lead over China.

This seems non-obvious to me. For example, suppose OpenAI trains GPT 6 which trains GPT 7 which trains GPT 8. Then a fast follower could take GPT 8 and then use it to train GPT 9. In this case, the fast follower has a lead and has spent far less on R&D (since they didn't have to develop GPT 7 or 8 themselves).  

I guess people are thinking that OpenAI will be able to ban GPT 8 from helping competitors? But has anyone argued for why they would be able to do that (either legally or technically)?
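
To make the cost intuition concrete, here's a minimal, purely illustrative Python sketch; all dollar figures are invented placeholders, and the model names just follow the hypothetical above:

```python
# Hypothetical, invented R&D costs per model generation, in $B.
# These figures are placeholders purely to show the structure of the argument.
costs_billion = {"GPT-6": 20, "GPT-7": 40, "GPT-8": 80, "GPT-9": 160}

# The leader pays to develop every generation through GPT-9.
leader_spend = sum(costs_billion.values())  # 300

# A fast follower that obtains GPT-8 only pays for the GPT-9 training run,
# skipping the GPT-6/7/8 development bills entirely.
follower_spend = costs_billion["GPT-9"]  # 160

print(f"Leader cumulative R&D: ${leader_spend}B")
print(f"Fast-follower R&D:     ${follower_spend}B")
```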


At risk of violating @Linch's principle "Assume by default that if something is missing in EA, nobody else is going to step up.", I think it would be valuable to have a well-researched estimate of the counterfactual value of getting funding from different sources (whether for-profit investors or donors).

For example, in global health we could make GiveWell the baseline, as I doubt there is any funding source where switching has less counterfactual impact: the money will only ever be shifted from something slightly less effective. For example, if my organisation received funding from GiveWell, we might only make slightly better use of that money than wherever it would otherwise have gone, and we're not going to be increasing the overall donor pool either.

Who knows, for-profit investment dollars could be 10x-100x more counterfactually impactful than GiveWell dollars, which could mean a for-profit company trying to do something good could plausibly be 10-100x less effective than a charity and still do as much counterfactual good overall (see the sketch below). Or is this a stretch?

This would be hard to estimate but doable, and must have been done at least on a casual scale by some people.

Examples (and random guesses) of counterfactual comparisons of the value of each dollar given by a particular source might look something like this:

1. GiveWell: 1x
2. Gates Foundation: 3x
3. Individual donors, NEW donations: 10x
4. Individual donors, SHIFTING donations: 5x
5. Non EA-aligned foundations: 8x
6. Climate funding: 5x
7. For-profit investors: 20x

Or this might be barking up the wrong tree; not sure (and I have mentioned it before).
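
As a very rough sketch of how such multipliers would change the bottom line, here is a minimal Python example; the multipliers are just the guesses from the list above, and the cost-effectiveness figures are invented placeholders rather than estimates:

```python
# Minimal sketch: counterfactual-adjusted impact =
#   cost-effectiveness of the recipient's work x counterfactual multiplier of the funding source.
# Multipliers are the rough guesses from the list above; cost-effectiveness
# numbers below are invented placeholders, not estimates.

multipliers = {
    "GiveWell": 1,
    "Gates Foundation": 3,
    "Individual donors (NEW donations)": 10,
    "Individual donors (SHIFTING donations)": 5,
    "Non EA-aligned foundations": 8,
    "Climate funding": 5,
    "For-profit investors": 20,
}

def counterfactual_impact(cost_effectiveness: float, source: str) -> float:
    """Impact per dollar once the counterfactual value of the funding source is included."""
    return cost_effectiveness * multipliers[source]

# A charity 20x as cost-effective, funded by GiveWell, versus a for-profit
# only 1x as cost-effective, funded by ordinary investors:
print(counterfactual_impact(20, "GiveWell"))             # 20
print(counterfactual_impact(1, "For-profit investors"))  # 20 -- comparable overall
```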

 

