
This is the introductory post in a sequence showcasing a series of mini-research projects (“speedruns”) into scalable longtermist projects, conducted by Rethink Priorities’ General Longtermism Team in the fall of 2022. Each speedrun involved an initial scoping and evaluation of an idea for a scalable longtermist project, to identify whether the project could be a top candidate for our team to help bring about.

This post explains how and why you might want to use the speedruns. The appendix contains additional information about how and why we produced the speedruns.

Who is this sequence for?

We imagine the speedruns might be interesting to:

  1. Potential longtermist entrepreneurs who might want to use the speedruns as a source of information and inspiration when deciding what projects to work on.
  2. Potential funders of entrepreneurial longtermist projects (for the same reason).
  3. Researchers who might be interested in looking further into any of the project ideas.
  4. (Aspiring) junior researchers interested in empirical global priorities research who might want to use the speedruns as an example of what such research can look like. 
  5. Stakeholders and the public interested in Rethink Priorities’ General Longtermism team who might want to use the speedruns as an insight into how we work, such as potential funders and job applicants. 

Things to keep in mind when using the speedruns

  • The speedruns are very preliminary and should not be considered the final word on whether a project is promising or not. They were written by junior generalists, and we spent only ~15h on each, prioritizing speed (surprise surprise) over depth and rigor. So they likely contain mistakes, miss important information, and include poor takes. We would have conducted a more in-depth investigation before launching any of these projects (and recommend that others do the same). 
  • The project ideas covered in the speedruns are very exploratory and tentative. They are not plans for what RP will work on, but just ideas for what RP could consider working on.
  • The three speedruns in this series should not be considered as “the three most promising projects in our view”. “Project promisingness” was not a criterion in deciding which speedruns to publish (more on how we chose which speedruns to publish in the Appendix). That said, all topics we conducted speedruns on scored relatively highly on an internal weighted factor model.
  • Opinions on whether a given project is worth pursuing will differ for people in a different position to our team. The speedruns were conducted with the specific aim of helping our team figure out which projects to spend further resources on. So they take into account factors that are specific to our team and strategy, such as our relative fit to work on the project in question. 
  • We have not updated the conclusions of the speedruns to reflect the recent changes to the funding situation. The speedruns were all conducted before the drastic changes in the EA funding landscape towards the end of 2022, and so they operate with a rough cost-effectiveness bar that is probably outdated.

Overview of the sequence

So far, we are planning for this sequence to contain 3 of the 13 speedruns we conducted in fall 2022. There’s no particular order we recommend reading them in.

The speedruns we’re planning to include in this sequence (so far) are:

  1. Develop an affordable super PPE
  2. Create AI alignment prizes
  3. Demonstrate the ability to rapidly scale food production in the case of nuclear winter

These are the speedruns that (a) did not develop into longer-term research projects which might have other publishable outputs, and (b) were close to a publicly legible format at the start of 2023 (more on how we chose which speedruns to publish in the Appendix). We might continue to add to this sequence with more speedruns in the future.


Appendix: Further details on why and how we produced the speedruns

What are speedruns?

The speedruns were short (10-15h) research projects conducted over a few days. 
The overarching aim was to help our team assess whether we wanted to invest significant further time in exploring a given project. The speedruns did this by:

  1. Scoping out the project to define it and its different variations more clearly, 
  2. Doing a shallow analysis of cost-effectiveness, downside risk, marginal value of additional projects in this space, and other relevant factors. 

The idea was to get an initial sense of the promisingness of a project and the degree to which further work might change our assessment. We explicitly prioritized speed over rigor in order to get an okay sense of the promisingness of several projects, rather than a good sense of the promisingness of a few, so that we could prioritize further research.
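To make the shallow cost-effectiveness analysis mentioned above more concrete, here is a stylized back-of-the-envelope calculation of the kind such an analysis might include. This is a minimal sketch: the project cost, success probability, risk-reduction estimate, and funding bar are all made-up placeholders, not figures from any of our speedruns.

```python
# Stylized back-of-the-envelope cost-effectiveness estimate of the kind a
# speedrun's shallow analysis might contain. Every number below is a made-up
# placeholder, not a figure from any actual speedrun.

project_cost_usd = 5_000_000        # hypothetical cost of piloting the project
p_success = 0.3                     # hypothetical chance the pilot succeeds
xrisk_reduction_if_success = 1e-5   # hypothetical fraction of x-risk averted

expected_reduction = p_success * xrisk_reduction_if_success
cost_per_unit_usd = project_cost_usd / expected_reduction

# Compare against a hypothetical funder's bar: the maximum acceptable cost
# (in $) per unit of existential risk reduced.
funding_bar_usd = 2e12

print(f"Expected x-risk reduction: {expected_reduction:.2e}")
print(f"Cost per unit of x-risk reduced: ${cost_per_unit_usd:,.0f}")
print("Clears the bar" if cost_per_unit_usd <= funding_bar_usd else "Does not clear the bar")
```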

All of the speedruns were conducted by junior researchers (fellows or research assistants who were in their first couple of months on the team) without expertise in the specific area of the speedrun. We thought that, in addition to their direct information value, speedruns would be useful practice for junior researchers to build research skills, and that this would have important talent development benefits.

Why did we produce the speedruns?

This section is a summary of the relevant parts of our 2022 summary post; see that post for a full explanation of how speedruns fit into our strategy.

Broadly, we produced the speedruns in order to identify top candidate projects for our team to try to help incubate.

The primary aim of our team in 2022 was to facilitate the creation of faster and better longtermist “megaprojects”, i.e., projects that we believe have a decent shot of reducing existential risk at scale (spending hundreds of millions of dollars per year). (However, in practice we focused more on somewhat scalable projects that could be piloted much more cheaply; hence the title of this post.)

We tested several different approaches to making this happen, one of which was to identify a small set of project ideas we thought looked unusually promising, then attempt to recruit founders to work on those projects (elsewhere called the “project-first approach”).

To identify the set of project ideas, we made a rough weighted factor model to prioritize which ideas to research further, then conducted speedruns on ideas that scored highly on this model. The plan was then to pick the ideas that looked most promising after the speedruns, analyze them further, and eventually circulate them and try to incubate related projects.
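For readers unfamiliar with weighted factor models, the sketch below illustrates the basic mechanics: each idea gets a score on each factor, and the overall score is a weighted sum used to rank ideas. The factors, weights, and scores shown are hypothetical placeholders, not the ones from our internal model.

```python
# Minimal sketch of a weighted factor model (hypothetical factors, weights,
# and scores; not Rethink Priorities' actual internal model).

# Weights sum to 1; each factor is scored 0-10 per project idea.
WEIGHTS = {
    "impact_if_successful": 0.40,
    "tractability": 0.25,
    "team_fit": 0.20,
    "downside_risk": 0.15,  # scored so that higher = safer
}

ideas = {
    "Example idea A": {"impact_if_successful": 8, "tractability": 5,
                       "team_fit": 6, "downside_risk": 7},
    "Example idea B": {"impact_if_successful": 6, "tractability": 8,
                       "team_fit": 7, "downside_risk": 9},
}

def weighted_score(scores: dict) -> float:
    """Overall score: weighted sum of the per-factor scores."""
    return sum(weight * scores[factor] for factor, weight in WEIGHTS.items())

# Rank ideas from highest to lowest overall score, e.g. to pick speedrun topics.
for name, scores in sorted(ideas.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```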

How did we choose the topics of the speedruns and which ones to publish? 

We conducted a total of 13 speedruns (we had initially planned to conduct >20, but we paused to reassess as a result of the FTX collapse). The topics were chosen based on a combination of the following factors:

  1. Score on the internal weighted factor model used to prioritize between the projects in our projects database.  
  2. Researcher interest.
  3. Degree to which there was existing work on this project (that we knew about).
  4. Urgency for internal decision-making. 

We’re only publishing 3 speedruns. Where we’re not publishing a speedrun, it’s for one of the following reasons:

  1. Internally focused: The speedrun seemed mostly relevant to internal decision-making and did not seem like it would be useful to people outside of RP.
  2. Developed into a larger project: The speedrun kicked off a larger research project (in some cases leading to other publishable outputs).
  3. Time-consuming: The speedrun would have been particularly time-consuming to publish (e.g., because of containing large amounts of potentially infohazardous information, claims that could easily be misunderstood, or just being unusually early stage). 
  4. Non-public work: Work that for strategic or confidentiality reasons cannot be made public at this time.

Here is an overview of all of the speedrun topics (roughly sorted by cause area), whether we’re publishing them, and why:

| Speedrun topic | Are we publishing this? If not, why not? |
| --- | --- |
| Develop an affordable super PPE | Publishing |
| Mass deployment of air sterilization techniques | Developed into a larger project |
| Rethink Priorities runs coordination activities for the biosecurity community | Internally focused |
| A quick ranking of all of the AI-related projects on our list | Time-consuming |
| Create AI alignment prizes | Publishing |
| Establish an AI ethics organization | Time-consuming |
| Establish an AI auditing organization | Non-public work |
| Establish an AI whistleblowing organization | Non-public work |
| Infrastructure to support independent researchers | Developed into a larger project |
| Research fit-testing/upskilling programs such as xERIs | Internally focused + Time-consuming |
| Demonstrate the ability to rapidly scale food production in the case of nuclear winter | Publishing |
| Create and distribute civilisation restart manuals | Time-consuming |
| Consolidate an AI technical safety or AI governance hub | Time-consuming |


 If you are doing (or considering doing) work related to one of the speedruns we did not publish, feel free to reach out to me (at marie at rethinkpriorities dot org) and we can talk about it in more detail.

Acknowledgements

This research is a project of Rethink Priorities. It was written by Marie Davidsen Buhl. Thanks to my colleagues Bill Anderson-Samways, Joe O’Brien, Linch Zhang, Max Räuker, Onni Aarne, Patrick Levermore, Peter Wildeford, and Renan Araujo for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.
