
Summary

  1. We research high leverage AI Safety interventions. Our team of analysts generate, identify, and evaluate potentially high impact opportunities.  
  2. When we find them, we make them happen. Once a top idea has been vetted, we use a variety of tools to turn it into a reality, including grantmaking, advocacy, RFPs, and incubating it ourselves.

More details

While we are means-neutral and open to whichever methods make the most sense for a given intervention, we will primarily use two tools: grantmaking and RFPs. 

Requests for Proposals (RFPs) - funding ideas no one is working on

RFPs are a bit like job ads for organizations, usually for contract work. Instead of hiring an individual for a job, an RFP is put out to hire an organization or individual for a contract, with much less management overhead than if the project were done in-house. (If you’d like a more detailed explanation of how they work, please see Appendix A.) 

RFPs are amazing because they fix an underlying problem with most grantmaking: they let you make an idea happen even if nobody is currently working on it. 

Think of it from the perspective of a large foundation. You’re a program officer there and just had an awesome idea for how to make AI safer. You’re excited. You have tons of resources at your disposal. All you have to do is find an organization that’s doing the idea, then give them oodles of money to scale it up. 

The problem is, you look around and find that nobody’s doing it. Or maybe there’s one team doing it, but they’re not very competent, and you worry they’ll do a poor job of it. 

Unfortunately for you, you’re out of luck. You could go start it yourself, but you’re in a really high impact role and running a startup wouldn’t be your comparative advantage. In your spare time you could try to convince existing orgs to do the idea, but that’s socially difficult and it’s hard to find the right team who’d be interested. Unfortunately, the usual grantmaking route is limited to choosing from existing organizations and projects. 

Now, if you had RFPs in your toolkit, you’d be able to put out an RFP for the idea. You could say, “The Nonlinear Fund is looking to fund people to do this idea. We’ll give up to $200,000 for the right team(s) to do it.” Then people will come. 

Values-aligned organizations that might not have known that you were interested in these projects will apply. Individuals who find the idea exciting and high impact will come forward. It will also help spread the idea, since people will know that there’s money and interest in the area. 

This is why Nonlinear (1) will do RFPs in addition to the usual grantmaking. This means our prioritization research won’t be limited to evaluating existing projects. 

We do not currently have a set timeline for when we will issue RFPs or run grantmaking rounds. If you would like to hear about funding opportunities when they do come up, either as an individual or an organization, make sure to subscribe to our newsletter or periodically check out our website.

Research methods

We will have a team of research analysts working on generating, identifying, evaluating, and comparing different intervention opportunities. 

We will use a research process similar to the one Charity Entrepreneurship used to help launch multiple GiveWell-funded and Open Philanthropy Project-funded charities. This involves, among other things, using the spreadsheet method to systematically identify the highest-impact opportunities. The main elements of this method are listed below, followed by a rough illustrative sketch:

  • Collect as many potential ideas as possible and record them in a spreadsheet.
  • Identify the best criteria to evaluate the ideas against. Add these as column headers for the spreadsheet (e.g. cost-effectiveness, potential flow-through effects, etc.).
  • Systematically go through the spreadsheet, collecting information to inform how well the ideas do on each of the criteria.
  • Try to destroy the ideas, finding disconfirming evidence or crucial considerations that rule them out.
  • Compare the ideas that survive the gauntlet and strategize about how to get them implemented.
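To make the structure of this method concrete, here is a minimal sketch of how such a spreadsheet-style evaluation could be represented in code. The criteria, weights, ideas, and scores below are hypothetical placeholders chosen for illustration; they are not Nonlinear’s actual criteria, data, or tooling.

```python
# A minimal, hypothetical sketch of the "spreadsheet method" described above.
# The criteria, weights, ideas, and scores are invented placeholders, not
# Nonlinear's actual data or tooling.

criteria_weights = {
    "cost_effectiveness": 0.5,
    "flow_through_effects": 0.3,
    "downside_risk": 0.2,  # higher score = lower expected downside
}

ideas = [
    {"name": "Idea A", "cost_effectiveness": 7, "flow_through_effects": 5,
     "downside_risk": 8, "ruled_out": False},
    # Ruled out during the "try to destroy the idea" step.
    {"name": "Idea B", "cost_effectiveness": 9, "flow_through_effects": 4,
     "downside_risk": 3, "ruled_out": True},
]

def weighted_score(idea):
    """Weighted sum of an idea's scores across all criteria."""
    return sum(weight * idea[criterion]
               for criterion, weight in criteria_weights.items())

# Only ideas that survive the attempt to rule them out are compared.
survivors = [idea for idea in ideas if not idea["ruled_out"]]
for idea in sorted(survivors, key=weighted_score, reverse=True):
    print(f"{idea['name']}: {weighted_score(idea):.2f}")
```

In practice the "spreadsheet" is literally a spreadsheet rather than code; the sketch just shows how the criteria-as-columns, score-and-compare structure fits together.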

What about the risks?

Reducing astronomical risks is risky. There are many potential ways to accidentally make things worse. This is why, in addition to spending hundreds of hours evaluating ideas, we will have a panel of advisors vet our work to maximize the chance of spotting dangers beforehand. Our board of advisors currently includes Jonas Vollmer, Spencer Greenberg, Alex Zhu, Robert Miles, and David Moss. We are working on bringing in people from all the major safety organizations and major viewpoints to make sure our interventions are robustly positive. 

Who we are

Kat Woods (previously Katherine Savoie): Prior to Nonlinear, she co-founded multiple GiveWell- and Open Philanthropy-funded charities: Charity Entrepreneurship (a charity startup incubator in poverty and animal welfare), Charity Science Health (which increases vaccination rates in India), and Charity Science Outreach (a meta charity). The connecting theme across her organizations has been a focus on systematic prioritization research to identify priority interventions, and then turning those ideas into high-impact projects and organizations. 

Emerson Spartz: Named "King of Viral Media" by Forbes, Spartz is one of the world's leading experts on internet virality and has been featured in major media including CBS, CNBC, CNN, and hundreds more. He was named to both Forbes' and Inc Magazine's "30 Under 30" lists. Spartz is the founder of Dose, a top digital media company with $35 million in funding. By the age of 19, Spartz became a New York Times bestselling author after publishing his first book. He helps run Nonlinear part time while also angel investing and reading all the things. 

You? We’re hiring! Please see the section below for more details. 

Ways to get involved

  • Receive research updates and funding opportunities by signing up to our newsletter.
  • We’re hiring! If you want an EA job or internship, check out our job descriptions. The deadline for applications is April 2nd. Kat will be attending EAG, so please reach out to her while you’re there to ask any questions you might have. We are looking for:
    • Research analysts. If you like obsessively learning about EA things, and you probably do if you’re still reading this blog post, we need your skills!
    • Video editor for Robert Miles. If you like his videos, want there to be more of them, and can edit videos, the world needs you!
    • Technical help. We are looking to automate some cool EA things, like an automatic EA podcast. If you have ideas and know-how on how to do that, please apply!
    • High impact executive assistant. If you like what Nonlinear is doing and want there to be more of it, help save Kat and Emerson time. Additional benefit: if you dream of traveling the world, you can travel with Kat who lives nomadically (Caribbean this winter, Europe this summer). This position can also be done remotely.
    • Social media. Have you spent an embarrassing amount of time figuring out how to get more likes? Use your social media addiction for the greater good!

We greatly value any feedback or suggestions you might have. Please post your questions and comments below or reach out to Kat at EAG if you are attending. 

1 - Nonlinear’s full name is The Nonlinear Fund. We will mostly refer to ourselves as Nonlinear unless the situation is sufficiently formal that the full name is worth the extra syllables. 

Appendix A - More detailed explanation of RFPs

RFPs are frequently used in the charity sector: the original charity will “request proposals” for accomplishing a certain goal. Sometimes the goals are broad, like “decrease malaria infections in Uganda”; sometimes they’re more specific, like “hand out 10,000 bednets in the Budaka district”. 

Then charities will send in applications, usually including a plan for how they’d accomplish the goal, an explanation of why their organization is trustworthy and competent (a “CV” of the org), and a proposed budget. 

The original charity reviews the applications, interviews the top contenders, and then chooses a winner. 

The grantee then goes and executes on the plan. There are varying degrees of management from the original charity: sometimes it’s checking in once a month, sometimes once a year; sometimes it’s a recurring agreement, sometimes a one-off. Regardless, it always takes less management time than if the charity did the work itself.

Comments

Neither founder seems to have a background in technical AI safety research. Why do you think Nonlinear will be able to research and prioritize these interventions without prior experience or familiarity with technical AI safety research?

Relatedly, wouldn't the organization be better off if it hired a full-time researcher or had a co-founder with a background in technical AI safety research? Is this something you're considering doing?

Similar questions came to mind for me as well.

Relatedly, I'd be interested to hear more about Nonlinear's thoughts on what the downside risks of this org/approach might be, and how you plan to mitigate them. I appreciated the section "What about the risks?", and I think gathering that board of advisors seems like a great step, but I'd be interested to hear you expand on that topic.

I think the main downside risks I'd personally have in mind would be risks to the reputations and relationships of other people working on related issues (particularly AI safety, other existential risks, and other longtermist stuff). It seems important to avoid seeming naive, slapdash, non-expert, or weird when working on these topics, or at least to find ways of minimising the chances that such perceptions would rub off on other people working in the same spaces. 

(To be clear, I do not mean to imply that the existence of concerns like this means no one should ever do any projects like this. All projects will have at least some degree of downside risks, and some projects are very much worth doing even given those risks. It's always just a matter of assessing risks vs benefits, thinking of mitigation options, and trying to account for biases and the unilateralist's curse. So I'm just asking questions, rather than trying to imply a criticism.)

P.S. Some sources that inform/express my views on this sort of topic (in a generic rather than AI-safety-specific way):

Thanks for the links and thoughtful question! 

From an overarching viewpoint, I am personally extremely motivated to avoid accidentally doing more harm than good. I have seen how easy it is to do that in the relatively forgiving fields of poverty and animal welfare, and in AI safety the stakes are much higher and the field much smaller. I literally (not figuratively or hyperbolically) lose sleep over this concern. So when I say we take it seriously, it’s not corporate speak for appeasing the masses, but a deeply, genuinely held concern. I say this to point towards the fact that whatever our current methods are for avoiding harm, we are motivated to find other ways to increase our robustness. 

More specifically, another approach we’re using is being extremely cautious in launching things, even if we are not convinced by an advisor’s object-level arguments. Last year I was considering launching a project, but before I went for it, I asked a bunch of experts in the area. Lots of people liked the idea, but some were worried about it for various reasons. I wasn’t convinced by their reasoning, but I am convinced by epistemic modesty arguments and they had more experience in the area, so I nixed the project. We intend to keep a similar mindset moving forward, while still keeping in mind that no project will ever be universally considered good.

That sounds good to me.

I wasn’t convinced by their reasoning, but I am convinced by epistemic modesty arguments and they had more experience in the area, so I nixed the project.

I agree that the epistemic modesty/humility idea of "defer at least somewhat to other people, and more so the more relevant experience/expertise they have" makes sense in general. 

I also think that the unilateralist's curse provides additional reason to take that sort of approach in situations (like this one) where "some number of altruistically minded actors each have the ability to take an action that would cause accidental harm to others" (quoting from that link). So it's good to hear you're doing that :)

On a somewhat related note, I'd be interested to hear more about what you mean by "advocacy" when you say "Once a top idea has been vetted, we use a variety of tools to turn it into a reality, including grantmaking, advocacy, RFPs, and incubating it ourselves." Do you mean like advocacy to the general public? Or like writing EA Forum posts about the idea to encourage EAs to act on it? 

Part of me wonders if a better model than the one outlined in this post would be for Nonlinear to collaborate with well-established AI research organisations, who can advise on high-impact interventions, which Nonlinear then does the grunt work to turn into a reality.

Even in this alternative model I agree that Nonlinear would probably benefit from someone with in-depth knowledge of AI safety as a full-time employee.

This is indeed part of our plan! No need to re-invent the wheel. :) 

One of our first steps will be to canvass existing AI Safety organizations and compile a comprehensive list of ideas they want done. We will do our own due diligence before launching any of them, but I would love for Nonlinear to become the organization people come to when they have a great idea that they want to make happen. 

Sounds good!

Replied to the question about hiring a full-timer above: https://forum.effectivealtruism.org/posts/fX8JsabQyRSd7zWiD/introducing-the-nonlinear-fund-ai-safety-research-incubation?commentId=ANTbuSPrNTwRHvw73

For hiring full-time RAs, we have plans to do that in the future. Right now we are being slow on hiring full-timers. We want to get feedback from external people first (thank you!) and have a more solidified strategy before taking on permanent employees. 

We are, however, working on developing a technical advisory board of people who are experts in ML. If you know anybody who'd be keen, please send them our way! 

I see, makes sense!

Good and important points! 

Sorry for the miscommunication. We are not intending to do technical AI safety work. We are going to focus on non-technical for the time being. 

I am in the process of learning ML but am very far from being able to make contributions in that area. This is mostly so that I have a better understanding of the area and can better communicate with people with more technical expertise. 

Thanks for the reply Kat!

However, I'm still a bit confused. When you say "We are not intending to do technical AI safety work. We are going to focus on non-technical for the time being.", do you mean you will only be researching high-leverage, non-technical AI Safety interventions? Or do you mean that the research work you're doing is non-technical?

I understand that the research work you're doing is non-technical (in that you probably aren't going to directly use any ML to do your research), but I'm not that aware of what the non-technical AI Safety interventions are, aside from semi-related things like working on AI strategy and policy (e.g. FHI's GovAI, The Partnership on AI) and advocating against shorter-term AI risks (e.g. Future of Life Institute's work on Lethal Autonomous Weapons Systems). Could you elaborate on what you mean when you say you will focus on non-technical AI safety work for the time being? Maybe you could give some examples of possible non-technical AI safety interventions? Thanks!

For sure. One example we'll be researching is scaling up the provision of PAs for high-impact people in AI safety. It seems like one of the things bottlenecking the movement is talent. Getting more talent is one solution, which we should definitely be working on. Another is helping the talent we already have be more productive. Setting up an organization that specializes in hiring PAs and pairing them with top AI safety experts seems like a potentially great way to boost the impact of already high-impact people. 

Great, I think that's a good idea actually! I'm looking forward to seeing other potentially good ideas like that from Nonlinear's research.

I'm not that aware of what the non-technical AI Safety interventions are, aside from semi-related things like working on AI strategy and policy (e.g. FHI's GovAI, The Partnership on AI) and advocating against shorter-term AI risks (e.g. Future of Life Institute's work on Lethal Autonomous Weapons Systems).

Just wanted to quickly flag: I think the more popular interpretation of the term AI safety points to a wide landscape that includes AI policy/strategy as well as technical AI safety (which is also often referred to by the term AI alignment).

Thanks for clarifying! I wasn't aware. 

I thought the term AI safety was shorthand for technical AI safety, and didn't really include AI policy/strategy. I personally use the term AI risk (or sometimes AI x-risk) to group together work on AI safety and AI strategy/policy/governance, i.e. work on AI risk = work on AI safety or AI strategy/policy. 

I was aware though of AI safety being referred to as AI alignment.

What do you feel your comparative advantage is versus other organisations in this space? In particular, the Long-Term Future Fund and Survival & Flourishing?

How is Nonlinear currently funded, and how does it plan to get funding for the RFPs?

We currently have a donor who is funding everything. In the future, we intend for it to be a combination of 1) fundraising for specific ideas when they are identified and 2) fundraising for non-earmarked donations from people who trust our research and assessment process.

Out of interest, have you already talked to people/institutions (beyond that current donor) who might provide either of those types of funding in future?

This organization is interesting. I have a few questions, which I'll split up so people can vote on them separately:

What made you decide to start an organization on researching high leverage AI Safety interventions?

Semi-related: Could you say more about precisely what the scope of Nonlinear will/might be? 

Some possibilities that come to mind, in terms of areas addressed:

  1. Just direct technical AI safety work
  2. Also "meta" work that increases the amount/quality of direct technical AI safety work, e.g. the AI safety camp
  3. Also AI governance work
  4. Also work on other existential risks or longtermist priorities
  5. Also work that's not focused specifically on AI safety or governance but could still help with work on those things, such as work on forecasting or improving institutional decision-making

And some possibilities that come to mind in terms of type of project:

  1. Tuition costs (e.g. for PhD students)
  2. Teaching buyouts
  3. Independent research projects lasting something like 0.2-2 FTE years
  4. Projects like the AI safety camp
  5. Projects like new startups working on building aligned AGI

We will use a research process similar to the one Charity Entrepreneurship used to help launch multiple GiveWell-funded and Open Philanthropy Project-funded charities.

This is interesting. 

Could you also say more about what you expect will be the main ways Nonlinear's process will differ from Charity Entrepreneurship's process? 

Not sure if you noticed, but your comment got cut off after "making"

Oh, thanks! Fixed.

(I decided that that sentence was unimportant halfway through writing, but evidently left the half-formed monstrosity in place when I hit submit.)

The discussion of requests for proposals reminds me of how the Long-Term Future Fund is (I believe) interested in doing more active grantmaking in future. You or readers may be interested in seeing some comments on that from their recent AMA (search "active grant" on that page; there are a few separate comment threads touching on it). 

What grant sizes do you think you will be giving for your first year of making grants?

Could you provide some possible examples of AI safety interventions that could be carried out? I’m unclear on what these might look like.

I could imagine that the feedback loops for technical AI safety research might be long - i.e. 2 years or longer (although I'm unsure). Would you agree with this? 

Also, what number of months of FTE work do you think you'll be granting for usually?

From this post, I infer that the rough, big-picture theory of change for Nonlinear is as follows:

  1. "Our team of analysts generate, identify, and evaluate potentially high impact opportunities."
  2. "Once a top idea has been vetted, we use a variety of tools to turn it into a reality, including grantmaking, advocacy, RFPs, and incubating it ourselves."
  3. "The existence of those projects/ideas/opportunities causes a reduction in existential risk from AI"

Does that sound accurate to you? In particular, is that third step your primary pathway and objective, or do you have other pathways in mind (like the publication of Nonlinear's research reports having an impact itself) or other objectives (like trajectory changes other than existential risk reduction)?

Also, do you already have a more explicit and fleshed out theory of change? Perhaps in diagram form? This might cover things like what audiences you seek to reach, what kinds of projects you seek to create, and what sorts of ways you think they'll reduce risks. (This is just a question, not a veiled critique; I think your current theory of change may be sufficiently explicit and fleshed out for this very early stage of the project.)

ETA: Ah, I now see that your site's About page already provides more info on this. I think that shifts me from wondering what you see the relevant pathways and objectives are to wondering how much weight you expect to put on each (e.g., x-risk reduction generally vs AI safety vs cause prioritisation, or grantmaking vs action recommendations vs donation recommendations). 
