
Epistemic status: Guessing based on what makes sense to me. I have not done any actual calculations; if someone else wants to do that, that would be great. Arb Research has done some calculations for AISC.

For some time, AI Safety has had a culture of not caring about cost-effectiveness because there was so much money anyway. I am not convinced that this was ever the correct approach, but regardless, it’s not a good approach anymore. With the current funding situation, AI Safety should care about cost-effectiveness.

This post is meant as a conversation starter. 

However, I’m not trying to start another conversation about “What should OpenPhil do with their money”. I’m not interested in that conversation, unless I’m talking to someone who is actually involved in that decision process. 

The conversation I’d like to have is with: 

  • People making funding decisions for themselves. There are many projects individual donors can contribute to. 
  • People who want to help make a more thorough quantitative analysis. If you’re interested in this, I’m happy to answer questions about anything I’m involved in.

Some context (feel free to skip)

What is the current situation?

  • Interest in AI Safety is growing fast. With this comes more funding opportunities.
  • We have not had any new large donors in a while (not counting FTX, since that money was not real). OpenPhil is still the largest source of funding.
  • You should not assume that OpenPhil is already funding everything that you think is worth funding. I’ve had many private conversations with people who disagree with what OpenPhil chooses to fund or not fund. This means that even if OpenPhil had infinite money, there would probably still be funding opportunities for other donors.
  • Some orgs are still dealing with the aftermath of FTX. E.g. promised funds that were never paid out, or money that has already been used and might be recalled in a clawback claim.

What about government funding?

Governments are getting interested in AI risk. This is great, and will lead to a lot more money being spent on AI Safety. However, this money will be spent on things that look good and respectable to governments, which will not cover everything EAs think is worth funding.

Things I expect the governments to spend money on:

  • Evaluations of models
  • Interpretability
  • Supporting prestigious academics and their teams

Things I don’t expect governments to spend money on:

  • Community building and upskilling programs outside academia
  • More obscure research directions
  • People without standard credentials

I could be wrong about the specifics, but I don’t think I’m wrong about the fact that government funding will leave some very valuable funding opportunities on the table. I.e., there is still room for EA funding to make a large difference.

I expect academic AI Safety research to be very important, but I also think there is a role for things that don’t quite fit into academia, and this is where I think EA funding can make a huge difference.

My guess for the most cost-effective AI Safety projects

In a better version of this post I would include summaries of all the projects, but this is the version I have time to write. Follow the links to the project descriptions on Manifund for more info.

Scalable online infrastructure

These are all projects where a small team can serve a large number of people. I expect the most cost-effective AI Safety projects to be in this category.

Building and maintaining the Alignment Ecosystem | Manifund 

Alignment Ecosystem Development (AED) is building and maintaining most of the AI Safety online information platforms. Their resources are infinitely scalable, i.e. they can support an arbitrarily large amount of traffic to their websites, which means their impact grows as the community grows.

Their current biggest bottleneck is visibility. Not enough people are aware of their resources. I encourage everyone to share aisafety.training, which I claim to be the single most useful link to give to aspiring AI safety researchers. 

Their second biggest bottleneck is funding. The project is currently volunteer-based, which is a fragile situation. This is bad, because for this type of project, reliability and continuity are super important.

  • The longer a resource exists the more people will find out about it and start using it.
  • A website like aisafety.training is much more useful if I can trust that it lists all the events I might want to go to than if it lists just some of them.

AED has done a lot with volunteer power alone. It’s possible that they will be able to continue this way, but with a small team of paid staff the project would be much more stable. Given the size of their potential impact, I expect that funding them would be a cost-effective donation.

10th edition of AI Safety Camp | Manifund 

Disclosure: I’m one of the AISC organisers.

AISC is an online, part-time AI Safety research program. We help people take the step from just learning and thinking about AI Safety to actually getting involved and doing something.

We’ve found a format that scales very well. The 9th edition of AISC has 133 participants, run by only 2 organisers. I expect the next one to be even bigger, if it happens.

Arb is currently evaluating AI Safety Camp, including its cost-effectiveness. See their preliminary report here.

AISC is currently fundraising in order to be able to continue existing. We have a separate post about that here.

AI Safety Fundamentals – BlueDot Impact 

I don’t know if they need more money. I have not found any fundraiser, but I've also not looked very hard. I list them here anyway because they obviously belong on this list.

BlueDot Impact is not just running large online courses. Their curriculum is used by many more in-person and online study groups and programs.

Help Apart Expand Global AI Safety Research | Manifund 

Edit: Apart was not originally included in this blog post, because I forgot. But they clearly belong here too.

Apart runs the Alignment Jams, which are weekend-long research sprints on various topics relevant to AI safety. The Alignment Jams are hybrid online and in-person events, i.e. it’s possible to join online from anywhere in the world, but Apart also helps local groups run in-person meetups centred around these sprints.

Low-cost hubs

My guess is that this category is less cost-effective than scalable online infrastructure, but that it will be able to absorb more funding.

It’s hard to be an AI safety researcher all on your own, but it’s also hard to afford to move to London or the SF Bay Area. This is why I’m excited about projects aimed at starting and maintaining hubs in other locations. Such hubs lower the barrier to entry, and also lower the cost of salaries for people working there.

I know of two projects in this category, and I think both are promising:

  • The Serbia Hub
  • The EA Hotel

Comments (2)

I notice all the suggestions are field-building. Do you have any examples in mind of "More obscure research directions"? I know you wrote elsewhere that your general view is we need to try lots of different things, so maybe field-building is the key lever from your perspective?

Executive summary: Funding for AI safety is tightening, so cost-effectiveness should now be prioritized. The most cost-effective projects are likely scalable online infrastructure and low-cost hubs, which can serve more researchers per dollar spent.

Key points:

  1. Interest in and funding for AI safety are growing, but OpenPhil is still the largest donor, and gaps remain in what governments will fund.
  2. Scalable online infrastructure like platforms, courses, and camps can serve unlimited participants with small teams, so they are highly cost-effective.
  3. Low-cost geographic hubs also increase accessibility and affordability for aspiring AI safety researchers.
  4. BlueDot Impact, Alignment Ecosystem Development, AI Safety Camp, the Serbia Hub, and EA Hotel are given as examples worth considering.
  5. More funding is needed for reliability and continuity, as well as raising awareness of these scalable resources.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
