Thanks to generous initial contributions from Ozzie Gooen and Peter Hurford, we've started a .impact fund which can pay for small costs and virtual assistant (VA) work for independent EA projects. If you're interested in making use of this, contact Ozzie!

Examples of things the fund could pay for include domain names, web hosting, and printing or mailing out leaflets for an experiment. We've found some good virtual assistants and opened paid-for accounts with them that you could use. These virtual assistants are generally cheap overseas freelancers who bill us hourly, and are happy to help with any task, including time-consuming administrative ones. We also have an account at US-based Fancy Hands, which provides us with a certain number of 15 minute tasks per month.

Though we're open to offering more in some cases, at least for now the fund will mostly be limited to very small-scale spending: $10 for a domain name here, $30 of VA work to complete or drive forward a delayed project there. This seems like a valuable niche in the EA movement; EA Ventures is targeting larger projects, and we'd encourage anyone with one of those to apply there.

Naturally, it doesn't make sense to spend hours on an application for such small amounts. Just fire off a brief exploratory email, and we may approve spending quickly, including for future work on the project.

As a reminder, .impact is the place to coordinate and communicate about EA projects. For more, see Marcus Davis' recent post reintroducing it.

This seems like an extremely useful resource. As someone familiar with developing high-impact ideas, I know that a little extra help to get something started and see whether a project is viable can mean the difference between success and failure.

I think that for many people with great ideas, sometimes the "startup inertia" is too great, and the .impact fund can go a long way towards reducing startup inertia.

This looks great!

Alasdair Pearce made a point worth discussing, which I'll post my reply to below:

What is the reasoning that people interested in EA are unable to finance such micro amounts themselves? Certainly there are a certain number of people in EA with the financial resources to make big donations to EA causes, or looking to work full time on EA work through donations, but I am a bit sceptical that there is a big bottleneck of people who have the time and expertise to apply for funds from you but not to finance things which would cost beer money themselves. What is the plausible narrative that makes these donations to globally rich people (who can speak English and use email on a regular basis) more valuable than donating this money to GiveDirectly?

Even when people could afford these costs (for instance by funging against donations), asking for money serves another useful purpose: it serves as a sanity check on whether this is a reasonable use of resources.

For people who can afford it and are confident the expense is justified, it makes sense for them just to pay. Having a system they can apply to makes sense both for those who cannot afford it, and those who want a hard-nosed outside opinion.

Here's how I replied:

I think the reasoning is that people are irrationally put off from spending small amounts of money, and in particular put off from getting VAs to finish projects. In addition, there's a high and off-putting overhead to getting set up with VAs for the first time, which we've had to handle anyway.

Of course, people could afford to pay these small sums themselves, and I'm very sensitive to concerns about donating to the global rich rather than GiveDirectly. Indeed I think a lot of spending in the EA movement broadly conceived is misplaced on these grounds, ultimately gets focused on helping those globally rich people, and does less good than GiveDirectly. But I think this particular .impact spending, which is after all especially small, is worth trying.

Indeed I think a lot of spending in the EA movement broadly conceived is misplaced on these grounds, ultimately gets focused on helping those globally rich people, and does less good than GiveDirectly.

Do you want to elaborate on this?

Not really ;)

Reductio ad absurdum: why should we give any money to organizations like GiveWell, which then turn around and pay that money to their globally rich employees doing EA research work?

Paying GiveWell employees a solid salary lets them focus on EA work full-time without having to expend a lot of stress and energy making sure their needs get met. It's effectiveness that matters, not overhead. Providing virtual assistants to independent EAs seems like it could do much the same thing: a free virtual assistant exoskeleton lets them spend more of their time on nonreplaceable EA work and less on the menial, outsourceable tasks that arise along the way.

The counterargument would be that many EAs have full-time earning-to-give jobs, so they could just deduct the funding necessary for their independent EA projects from whatever their normal charitable donations would be. So I think funds like this make sense for EAs who don't have full-time earning-to-give jobs: students who want to do effective activism (say for pandemic risks?) in their spare time, someone who quits their job and lives off savings in order to research EA-related topics full-time, someone who thinks earning to give is oversubscribed and is focusing on an EA project instead of advancing their career, etc.

That's all reasonable. I think the strongest point Alasdair could be making is the claim that even those people could afford to spend $10s or even $100s themselves. (I agree that that claim isn't obviously true, though, depending on what you mean by 'afford'.) By contrast, most GiveWell staff couldn't afford to work for no salary whatsoever.

Even if they could, that may be demanding a higher level of self-sacrifice from volunteers than we want to. Consider that giving 10+% of your time or income qualifies you for EA status; if someone is giving 20% of their time and living paycheck to paycheck, asking for more money on top of that will set the bar too high for some worthy volunteers.
