
TL;DR

Over the past few years, the availability of funding for AI safety projects has increased significantly. Yet opportunities for individual donors to find high impact grant opportunities remain. In this post I review the recent history of AI safety organizations, speculate on the ways in which funding early AI safety organizations may have impacted today’s AI safety landscape, and on this basis propose some heuristics for finding untapped funding opportunities:

  • Focus on which people you’re bringing into the community
  • Personal brands and platforms matter, and have accounted for some of the largest successes of the community
  • Culture and intellectual environments matter
  • Small amounts of funding today may move much larger amounts of funding in the future

Introduction

For early AI safety organizations, funding was a major bottleneck. During the 2000-2010 period, large donors were sparse and, with few exceptions, not motivated to commit large amounts of capital to such esoteric causes[1]. Small donors accounted for a substantial fraction of the budget of organizations such as SIAI[2], while others located themselves within universities. A private donor looking for funding opportunities at this time would quickly find multiple organizations with significant room for funding, and much of the work in making grants consisted in comparing different funding opportunities to determine which would lead to greatest counterfactual impact.

 

Over the past few years that situation has changed. Multiple large grant-making organizations have identified AI safety as an important cause area,[3] including the Open Philanthropy Project[4], the Future of Life Institute[5], the Berkeley Existential Risk Initiative[6], and the EA Long Term Future Fund[7]. While the amounts that each organization or individual hopes to deploy within AI safety are not public (or in many cases are not yet decided), it seems likely that the amount of funding that would be deployed if there were many large, promising funding opportunities exceeds the room for more funding across currently existing AI safety organizations.

 

This changes the nature of funding decisions, since in a world with more funding than projects, the decision to fund a project looks more like an evaluation of whether the project crosses some usefulness threshold, and less like a comparison between projects.

 

In this post I propose some initial thoughts on how grant makers might approach this changed funding landscape. Of particular relevance to me are the opportunities available to individual donors acting in the midst of multiple large foundations deploying significant amounts of capital.

 

Everything here should be interpreted as early thoughts on the topic, and I expect to update significantly based on feedback from the community.

Growth

If today’s AI safety community is to have a meaningful effect on the overall development of AI then I believe it must at some point in the future grow significantly beyond its current size. By “grow” I mean an increase in either the number of people involved, or a spreading of ideas, culture, and public interest. There appears at present to be a vast amount of technical work to do, and an even greater amount of operational and engineering work must surely follow. In the long term, it seems to me that the total amount of work being done on AI safety in a given year must be non-trivial compared to the total amount of work being done on AI, yet at the moment these two fields are several orders of magnitude apart in terms of people and output. Since the AI field itself is currently growing quickly, it seems unlikely that the AI safety community will have a meaningful impact on the future development of AI if it never grows beyond its current size.

 

This does not imply that the community should grow immediately or that it needs to grow quickly, only that it must eventually grow if it is to have a meaningful impact. Thankfully, the community looks to me to be well poised for significant growth over the coming years and decades. When making funding decisions, grant makers should therefore think carefully about how funding decisions flow through scenarios in which there is significant growth at some point in the future.

 

If the community does grow significantly, then much of our raw output (research, outreach, policy proposals, and so on) will be produced in the future, when there are larger numbers of people doing direct work, and when those people are better informed by what has or has not worked so far, have more information on how the overall AI landscape is evolving, and so on. For this reason we may have substantial leverage now, since small efforts today may be able to affect the work done by the future AI safety community: what its focus is, how clear its research agenda is, who is involved, and how it is perceived externally.

 

I therefore think that much of the impact of funding decisions today flows through this larger future AI safety community: that is, either the community does not grow, and grants made today have little impact on the long term future of AI, or else the community does grow, and the impact of today’s grants plays out via its effects on the future where most of the total work is done.

Examples from recent history

If the impact of today’s funding decisions mostly flows through tomorrow’s AI safety community, then it may be helpful to understand how funding decisions made in the past have affected today’s AI safety community. In this section I will give a brief rundown of the oldest organizations in this space, and some thoughts on how their actions have shaped today’s AI safety community.

 

Unfortunately the task of reconstructing a detailed history of these organizations is a much larger project, so I’m going to present here some very rough notes based on published timelines[8] and on my own understanding of how these organizations evolved. If my reconstruction here is incorrect then I hope that this lens will still provide an intuition pump for thinking about how funding decisions made today may affect the future.

 

Some caveats:

  • I am arbitrarily including everything up to 2010 as “recent history”
  • I am trying to assess the ways in which the early forms of today’s organizations affected the present landscape. I am not trying to assign positive or negative value to these effects.
  • As always, it is very difficult to properly allocate counterfactual credit and this is just a first pass.

Singularity Institute for Artificial Intelligence

Singularity Institute was founded by Eliezer Yudkowsky in July 2000[9] with the stated mission of “creating a friendly, self-improving artificial intelligence”. In 2013 the organization was renamed to the Machine Intelligence Research Institute.

 

Here I list some of the work that SIAI engaged in during these years and my best guess as to their effects on today’s AI safety community.

  • Direct research
  • It seems that some of the early Yudkowsky publications (pre-2012) convinced key individuals to take AI safety seriously.
  • But today’s AI safety literature does not seem to build directly on conceptual foundations laid by this early work.
  • The main exception appears to be the work on decision theory, which does seem to trace its roots directly to technical work from this period.
  • Lesswrong
  • Appears to have caused a substantial number of people to devote their careers to working on AI safety, and many more to become involved outside of full time work.
  • The specific ideas written about in the sequences are clearly widely known, but it is unclear to me how much impact they have had on today’s technical AI safety landscape.
  • The culture created within lesswrong appears to have spread via numerous lesswrong meetups. My sense is that this culture has had significant effects on the culture of today’s AI safety community.
  • Visiting Fellows Program
  • I personally know of two individuals who trace their full time involvement with AI safety to the SI summer fellows program. On this basis I estimate that there are at least two more individuals that I am not aware of.
  • Singularity Summit
  • I do not have any sense of the impact of Singularity Summit on today’s AI safety community.
  • Public outreach
  • What kinds of PR did Eliezer do in the very early days?
  • Whatever he did, it does not seem to have significantly shaped today’s public perception of AI or AI safety.

 

Overall, Singularity Institute may deserve partial credit for the very existence of an AI safety community today. Much of its impact appears to have flowed through culture creation and bringing people into the community. Its early publications appear to have had less direct impact on shaping the research agendas being pursued today.

Future of Humanity Institute

FHI was established by Nick Bostrom and Anders Sandberg in 2005 with funding from James Martin and the Bright Horizons Foundation.[10] Some of the activities that FHI undertook in the years up to 2012 were:

  • Funding of the blog Overcoming Bias
  • Organization of Whole Brain Emulation Workshop
  • Direct research
  • Publishing of Global Catastrophic Risks
  • Publishing of Anthropic Bias
  • Publishing of Human Enhancement
  • Numerous papers by Bostrom, Sandberg, and others
  • Creation of an intellectual microcosm
  • Toby Ord was a researcher at FHI when he and others launched Giving What We Can, which led directly to the formation of the Effective Altruism movement.

 

FHI appears to have placed less emphasis on AI safety in its early days compared to the present day. To my mind, it looks as if much of the impact of FHI’s early work flows through the later publishing of Superintelligence by Bostrom. To a first approximation, the early FHI research appears to have had much of its impact by giving Bostrom the credibility and platform that allowed his later book to have such a broad reception.

 

One might also speculate about what contribution, if any, FHI’s early work had in spurring Toby Ord to start Giving What We Can in 2009, which was one of the foundations on which the early Effective Altruism community was built. Ord and Bostrom were publishing together as early as 2006[11]. On the other hand, there are no other FHI staff on the list of early Giving What We Can pledges.[12]

GiveWell

GiveWell was founded in 2007 by Holden Karnofsky and Elie Hassenfeld to study philanthropic giving opportunities and make recommendations to small and medium-sized donors. GiveWell was never focused on AI safety, although GiveWell Labs does appear to have conducted an informal evaluation of SIAI in 2012, which was negative.

 

From an AI safety perspective, GiveWell’s largest impact has almost certainly been the creation of the Open Philanthropy Project, which is the primary adviser to the multi-billion-dollar foundation Good Ventures, and has identified risks from advanced AI as a focus area.

 

It seems to me that the largest impacts from GiveWell’s early work flowed through the people who went on to create OpenPhil, and through the culture of approaching philanthropy as an intellectually serious endeavor. The direct work performed by GiveWell was almost certainly crucial in attracting talent to the cause, and capturing the attention of Good Ventures, but this impact appears to flow to a large extent through people and culture, both of which were critical to the later creation of OpenPhil.

Heuristics for funding AI safety organizations

Focus on People

Some of the largest impacts of funding FHI or SIAI in their early years appear to have flowed through the individuals those organizations hired, who went on to contribute substantially within the community. I suspect that the early research performed by these organizations was impactful primarily insofar as it attracted certain individuals into the community.

 

One lens through which grant makers may view funding decisions is therefore as a sort of hiring decision, in which the grant maker asks which people will be moved into the AI safety community as a result of the grant. For organizations that would not otherwise receive funding, this may encompass all individuals in the organization. For others, the grant effectively moves into the community whichever individuals end up being hired as a result of it.

 

When making a hiring decision, it is of course important to review past work and evaluate the impact that the candidate’s planned work will have, yet most hiring decisions also place substantial weight on a more nebulous sense of the potential of bringing intelligent, motivated, value-aligned individuals into an organization. Grant makers should similarly take this more nebulous component seriously, since it appears to have accounted for a significant amount of the impact of past funding decisions in the field.

 

Interestingly, this matches the oft-quoted wisdom among startup investors of making early-stage investments largely on the strength of the team that runs the company.

Platforms Matter

Bostrom’s book Superintelligence probably could not have gained such widespread exposure without Bostrom having built up a serious academic platform[13]. The platform on which he launched Superintelligence was built over 14 years, and during this time he authored more than a hundred papers, presented at many hundreds of conferences and workshops, and gave numerous media appearances.[14] Yet a large component of the impact of all this work thus far does not seem to have been the direct insights gleaned or the direct effects of public outreach on specific issues; instead it seems to have flowed through the creation of a platform upon which his later views on AI safety could reach a broad audience.

 

A grant maker considering funding FHI during the 2000-2010 period may have been tempted to evaluate the impact of the direct work that the organization planned to pursue in the short term, but this would have largely missed the most important resource being generated at FHI.

Culture and Intellectual Environments

Through the creation of LessWrong, SIAI caused a worldwide network to coalesce around ideas in philosophy, math, and AI. This network had a distinctive culture that appears to me to have had a substantial impact on today’s AI safety community[15]. The long-term cultural impacts of FHI appear to have been even larger, particularly if we give it partial credit for creating the intellectual environment in which Giving What We Can and eventually the broader Effective Altruism movement formed.

 

Culture is difficult to measure or predict. Early funders of SIAI or FHI would have had difficulty foreseeing the long-term cultural impacts these organizations would have, yet these cultural outcomes do appear to have been a very significant component of the respective organizations’ impact, so are worth taking seriously.

 

This also matches a widespread belief among technology startups that early company culture is a critical determinant of long term success or failure.

Funding begets further funding

If the growth thesis above is correct, then making grants today has the potential to move many times as much money in the future: a grant made now may cause an organization to be more likely to exist, or to be more prominent, during a future AI safety growth phase.

 

Early GiveWell supporters surely deserve some credit for the much larger amounts of money that OpenPhil is now moving. Similarly, early FHI and SIAI backers deserve some credit for the more substantial present day budgets of those organizations.

 

For AI safety grant makers today, this implies that funding early-stage organizations may have particularly high impact, since it can make the difference between an organization existing and not existing during a future growth phase. This provides one argument in favor of giving now rather than giving later.[16]
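
To make this leverage argument concrete, here is a minimal toy calculation, written as a Python sketch. Every number in it is a made-up assumption for illustration, not an estimate of any real grant or organization.

```python
# Toy model: how a small grant today might move much larger sums later.
# All numbers below are illustrative assumptions, not real estimates.

grant_today = 50_000               # size of the grant under consideration (USD)
future_annual_budget = 5_000_000   # plausible annual budget during a growth phase (USD)
growth_phase_years = 5             # years of operation during that phase

# Assumed probabilities that the organization survives to the growth phase.
p_survive_with_grant = 0.5
p_survive_without_grant = 0.3

# Expected future budget attributable to the grant (the counterfactual difference).
counterfactual_budget = (
    (p_survive_with_grant - p_survive_without_grant)
    * future_annual_budget
    * growth_phase_years
)

leverage = counterfactual_budget / grant_today
print(f"Expected future budget attributable to the grant: ${counterfactual_budget:,.0f}")
print(f"Leverage multiple on the original grant: {leverage:.0f}x")
```

Under these made-up numbers the grant is credited with roughly a hundred times its own size in expected future budget; the point is only that the leverage is driven by the difference in survival probability, not by the size of the grant itself.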

Feedback Loops

It seems to me that the largest effects of funding AI safety organizations will materialize through growth and the feedback loops that drive it (a toy sketch of the hiring loop follows the list):

  • Cultural feedback
  • Initial effect: founding team lays down a certain culture within an organization
  • Feedback effect: as new people enter the organization they adopt the culture of those already in the organization
  • Hiring feedback
  • Initial effect: an organization uses money from a grant to hire people
  • Feedback effect: some of those people start new organizations, which continue to hire more people, or contribute to hiring decisions
  • Memetic feedback
  • Initial effect: by encountering work published by an organization, individuals outside the AI safety community adjust their views on AI safety
  • Feedback effect: memes propagate to further individuals
  • Funding feedback
  • Initial effect: an organization receives a grant
  • Feedback effect: other grant makers choose whether to fund this and similar organizations based on its perceived success or failure
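
As a rough illustration of how the hiring loop can compound, here is a minimal sketch with made-up parameters (not estimates of the actual community): a handful of people hired with one grant can, over a decade, account for a much larger community if some fraction of them go on to found organizations that hire in turn.

```python
# Toy simulation of the hiring feedback loop described above.
# All parameters are made-up assumptions for illustration only.

initial_hires = 3          # people hired directly with the original grant
spinoff_rate = 0.1         # fraction of community members who found a new org each year
hires_per_new_org = 3      # people each new organization hires in its first year
years = 10                 # number of years to simulate

community_size = float(initial_hires)
for year in range(1, years + 1):
    new_orgs = community_size * spinoff_rate          # expected new organizations this year
    community_size += new_orgs * hires_per_new_org    # their hires join the community
    print(f"Year {year:2d}: expected community size ≈ {community_size:.1f}")
```

With these assumptions the three initial hires grow into an expected community of roughly forty people after ten years; the qualitative point is that the compounding rate, not the size of the initial grant, dominates the long-run effect.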

Conclusion

With the entry of several large grant makers to the AI safety space, many of the largest organizations appear to have exhausted their capacity for additional funding. Furthermore, the budgets of these organizations remain small compared to the total capital that AI safety grant makers hope to deploy in the next few years, so the re-emergence of significant funding gaps among well-established organizations seems unlikely even as these organizations scale up.

 

Nevertheless, individual grant makers may hope to find large opportunities for impact by identifying funding opportunities among small organizations, since we must assume that the AI safety community will grow substantially at some point if it is to have substantial impact.

 

When making funding decisions, grant makers should pay special attention to opportunities to “hire” people into the overall AI safety community, and the unique culture that some organizations foster. In addition, grant makers should seize opportunities to help individuals build up platforms upon which their ideas can reach a wide audience, and be aware that grants made today may move much larger sums of money in the future.

 

Donors who take these nebulous factors seriously may find opportunities for impact missed by grant makers that focus more on the immediate work generated by grants.

 


[1] For one snapshot of the funding landscape from 2003 until 2010, see http://lesswrong.com/lw/5il/siai_an_examination/

[2] The Singularity Institute for Artificial Intelligence, which was renamed to the Machine Intelligence Research Institute in 2013

[3] See e.g. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence

[4] https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/

[5] https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

[6] http://existence.org/grants

[7] https://app.effectivealtruism.org/funds/far-future

[11] Nick Bostrom and Toby Ord. "The reversal test: eliminating status quo bias in applied ethics." Ethics 116.4 (2006): 656-679.

[13] Looking through the nonfiction category of the New York Times bestsellers list, I see few published by individuals who do not already have a substantial platform.

[15] This is very difficult to justify explicitly. All I can offer is my own intuitions.

[16] Just one argument of many; for a more complete treatment see https://rationalaltruist.com/2013/03/12/giving-now-vs-later/

Comments

This is great. You might want to cross-post it to LessWrong, since a lot of visitors to that site who may not visit the EA Forum would probably be interested in this information.

To add onto the "platforms matter" point, you could tell a story similar to Bostrom's (build up credibility first, then have impact later) with Max Tegmark's career. He explicitly advocates this strategy to EAs in 25:48 to 29:00 of this video: https://www.youtube.com/watch?v=2f1lmNqbgrk&feature=youtu.be&t=1548.

Thanks for the pointer - noted!

This post is excellent. I find the historical work particularly useful, both as a collation of timelines and for the conclusions you tease out of it.

Considering the high quality and usefulness of this post, it is churlish to ask for more, but I'll do so anyway.

Have you given any thought to how donors might identify funding opportunities in the AI safety space? OpenPhil have written about how they found many more giving opportunities after committing to give, but it may be difficult to shop around a more modest personal giving budget.

A fallback here could be the far future EA fund, but I would be keen to hear other ideas

If you find your opportunities are being constrained by small donation size, you can use donor lotteries to trade your donation for a small chance of a large budget (just get in touch with CEA if you need a chance at a larger pot). You may also be interested in a post I made on this subject.

Thanks Carl, this looks great. By

just get in touch with CEA if you need a chance at a larger pot

do you mean (a) get in touch with CEA if you need a chance at a larger pot than the current lotteries offer or (b) get in touch with CEA if you need a chance at a larger pot by entering a lottery (as there currently aren't any)?

Thank you!

In terms of finding opportunities, I don't have a complete framework but I do have some rough heuristics: (1) look for opportunities that the large donors can't find, are too small for them to act on, or for some other reason fail to execute on (2) follow the example of angel investors in the tech community by identifying a funding thesis and then reaching out through personal networks to find people to fund at the very early stage of starting projects/organizations.

In terms of the historical work, I'm considering organizing a much deeper investigation into the history of these organizations. If you or anyone else is interested in working full time / part time on this, do let me know!

Thanks Alex! Those sound like useful heuristics, though I'd love to see some experience reports (perhaps I ought to generate them).

I would be interested! I'll reach out via private message

It's no big deal, but your formatting is a little different from the normal forum formatting - it might be worth requesting .impact provide a button to clear extraneous formatting via the issues link at http://effective-altruism.com/ea/vm/ea_forum_faq/

Thanks for the note. I'll file an issue. FWIW I originally wrote this in Google Docs, then the best way I could find to get it here was to export as an HTML file, then copy and paste from there to here.
