
EDIT 2023/11/21

We have finished evaluating the first batch of our applications.

As we have not yet finished the hiring process, people are still welcome to apply. We are still very likely to skim future applications; however, we make no firm commitment to do so.

Candidates are welcome to message me either here or in other channels to flag their late applications with us.

This is a linkpost for our job ad on Notion.

The Long-Term Future Fund is looking for a full-time fund chair. You can apply here.

Summary

  • The Long-Term Future Fund is seeking a new full-time fund chair to lead strategy, fundraising, management, and grant evaluation. 
  • The chair will articulate a vision for the fund, coordinate stakeholders, improve processes, represent the fund, and make final decisions.  [more]
  • The role offers a competitive salary with location flexibility, starting with a 3-month trial period. [more]
  • The fund aims to distribute $5M-$15M (last year, we distributed over $10M) annually to reduce existential risk, especially from AI. We expect the LTFF chair to contribute substantially to our strategy and fundraising efforts. 
  • We have a strong preference for a full-time candidate though we will seriously consider part-time candidates. [more]
  • Applications are open now through October 23rd. 
    • The form might take up to an hour, including screening questions.

Why is LTFF looking for a fund chair?

The Long-Term Future Fund (LTFF) has had a significant impact on the long-termist and AI safety funding ecosystems:

  • We are one of the few significant sources of funding for longtermist or AI safety work, allowing (some) worldview diversification away from Open Phil.
  • In 2022, LTFF was operationally able to distribute 12 million dollars to over 250 small projects. This entailed clearing nontrivial logistical hurdles: complying with nonprofit law across multiple countries, maintaining consistent operational capacity, and keeping a careful eye on downside risk mitigation.
  • We run the largest “always open” application form, where anybody can apply, connecting funding to a wide range of excellent grantees who lack pre-existing networks.
  • We are one of the primary donation options for new donors in longtermism, with a relatively well-known and accessible brand.
  • We believe we are able to fund a wide range of excellent small longtermist projects, and increase grant capacity in the ecosystem overall.
  • We have contributed to improvements in the epistemics and transparency around the longtermist funding ecosystem, e.g. by writing very detailed and frequently viewed posts about our past grants, concerns, and how we make decisions.

However, despite the successes, there have been some challenges to increased scale and other significant limitations:

  • We face significant strategic confusion as a fund, with substantial uncertainty and disagreement on questions like:
    • How much should we really envision ourselves as “longtermist”, as opposed to just focused on near- and medium-term catastrophic risks? 
    • Should we focus more on AI, vs be willing to fund a wide range of interventions to reduce other catastrophic risks like engineered pandemics? 
    • Should we be willing to be more directly antagonistic towards the interests of the big AI labs? 
    • How good is independent research and upskilling, vs funding established organizations and programs? 
    • How important is good forecasting, relative to direct x-risk reduction interventions?
  • We do not get back to applicants as quickly as we would like. Our current median response time is 28 days, with a long tail of much slower replies. While we are likely still faster than many other funders, we believe that our response times inhibit both our and our grantees’ ability to move nimbly, in addition to causing unnecessary stress for potential grantees. 
  • We do not (yet) communicate much with our donors or proactively reach out to new donors. This likely limits our fundraising, particularly from donors who are less familiar with longtermism, existential security, and catastrophic AI risk.
  • Our part-time fund manager setup reduces reliability and consistency. While it has many advantages, relying entirely on part-time fund managers can make it difficult to plan and/or move quickly and deliberately.
  • We have limited capacity to advance our grantees’ goals and push them to excel. Because of capacity constraints and the apparent deluge of new applications, we never seem to have enough time to regularly provide detailed feedback or set up processes that reliably help our grantees excel (other than by providing money).

We think a good Long-Term Future Fund chair can help us remove or at least ameliorate many of the above challenges, while maintaining or enhancing the aspects of LTFF that are currently excellent. 

Responsibilities and fit

As a fund chair, you will be responsible for:

  • Strategy
    • Articulating and shepherding a vision and strategy for LTFF going forwards.
    • Keeping the focus of LTFF on trying to cost-effectively solve important but hard problems like AI alignment; consistently being willing to pivot and adjust the strategy, processes, personnel, or other aspects of the fund to preserve a focus on longer-term impact.
    • Holding firm and staying laser-focused on what matters, pushing back against individual and institutional incentives that may push you towards myopic bureaucratic practices and short-term goals.
  • Fundraising
    • Creating a product that impact-oriented donors are excited to donate to.
    • Leading or helping with fundraising so that LTFF reliably has enough resources to fund high-impact projects.
    • Providing donors with high transparency by leading the fund in an open and high-integrity manner.
  • Management
    • Being the face and core decision-maker at LTFF, speaking on behalf of LTFF and setting the relevant institutional policies.
    • Maintaining and improving relationships with key stakeholders: donors, grantees, advisors, and decision-makers at other foundations.
    • Coordinating between and resolving disputes between the internal members of LTFF and close affiliates: different grantmakers, operations staff, EV, and future donor and/or grantee liaisons.
    • Creating good hiring and other institutional processes for integrity; guarding against fund managers or grantees abusing power they derive from LTFF, such as by wrongfully using collective resources for personal gain.
  • Grant evaluation
    • Ensuring that the grantee experience is as smooth and hassle-free as possible.
    • Managing other fund managers, doing grant evaluations yourself, and creating new processes to handle increased evaluation load, so that grant evaluations are done quickly, consistently, and with good judgment, preserving upside potential while mitigating downside risk.

I (Linch) think a well-run LTFF with a good chair could reliably distribute ~$5M-$15M yearly to high-impact projects going forwards. A good chair can potentially be responsible for ~$500K-~$5M of value per year, including via both improved decision quality and better counterfactual fundraising from new donors.

You might be a good fit if you:

  • Have experience leading or managing a significant project
  • Have a fairly deep understanding of technical AI alignment, and/or other emerging scientific fields of interest
    • E.g. 3+ years of direct research experience in academia, in a research-focused corporate lab, or as an independent researcher
  • Possess experience in fundraising or are comfortable with the concept and execution of fundraising initiatives
  • Are considered by people you know and respect to have good judgment on difficult and subjective decisions
  • Have a strong internal sense of honesty and integrity
  • Are a reasonably good “peacemaker”: are frequently capable of finding mutually beneficial trades across a variety of stakeholders
  • Are excited about making difficult calls in a fast-moving domain
  • Are generally considered industrious and reliable
    • E.g. tend to meet deadlines and make meetings, have strong executive function, and rarely drop tasks
  • Are good at Fermi estimates and back-of-the-envelope calculations; regularly factor uncertainty into your numerical analysis
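The last bullet lends itself to a quick illustration. Below is a minimal sketch of the kind of back-of-the-envelope estimate meant here, applied to the chair role itself. All distributions are illustrative assumptions, only loosely anchored to the $5M-$15M annual grant volume mentioned in this post; this is not an actual LTFF model.

```python
import random

# Hypothetical Fermi estimate of the annual value a good fund chair adds.
# Every parameter below is an illustrative assumption, not an LTFF figure.

def sample_chair_value():
    # Annual grant volume: log-uniform between $5M and $15M (3**0 = 1x, 3**1 = 3x).
    grants = 5e6 * (3 ** random.random())
    # Fractional improvement in grant decision quality: uniform 2%-20%.
    decision_uplift = random.uniform(0.02, 0.20)
    # Counterfactual new fundraising: uniform $0-$2M per year.
    new_funds = random.uniform(0, 2e6)
    # Assume a dollar of counterfactually raised funds is worth ~$0.50 of value.
    return grants * decision_uplift + 0.5 * new_funds

random.seed(0)
samples = sorted(sample_chair_value() for _ in range(100_000))
p10, p50, p90 = (samples[int(len(samples) * q)] for q in (0.10, 0.50, 0.90))
print(f"10th/50th/90th percentile value: ${p10:,.0f} / ${p50:,.0f} / ${p90:,.0f}")
```

Propagating ranges rather than point estimates this way is what “factoring uncertainty into your numerical analysis” looks like in practice; the spread between percentiles is usually more informative than any single output number.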

Evidence you might be a poor fit:

  • Have never been responsible for projects other than ones initiated by a boss or advisor
  • Are an “ideas person” to the extent that you’ve never delved deeply into a technical or otherwise complex subject outside of formal schooling
  • Strongly prefer to focus on a single project at a time
  • Dread the idea of meetings
  • Find asking for money and/or being responsible for weighty decisions extremely stressful or repellent
  • Really don’t like disappointing people
  • Find integrity hard and have repeatedly been criticized in the past for being bad at navigating boundaries, conflicts of interest, etc.
  • Frequently adopt a “my way or the highway” approach to disagreements
  • Score below the 30th percentile on conscientiousness in OCEAN (Big Five)

Nice-to-haves:

  • People management experience and ability
    • You will be navigating a number of internal stakeholders like fund managers, but the fund managers are relatively independent (and usually have other day jobs) so not being a great people manager is okay.
  • Communication ability
    • Being good at oral and written communication is a strong plus but others on the fund (e.g. myself) can largely cover for this specific deficiency
  • Substantial prior experience grantmaking
    • Having someone with significant prior grantmaking experience would de-risk the LTFF chair role a lot, but we’re tentatively willing to take on that risk for otherwise great candidates
  • Fundraising experience
  • Willingness to live in the Bay Area
  • A strong network in the longtermist and/or AI safety and/or biosecurity fields

Practical Details

Compensation: We aim for our salaries to be competitive with nonprofit counterfactuals like researchers at AI safety nonprofits or mid-level scientific program officers at large private foundations. We expect to pay between $120,000 and $240,000, depending on years of experience, location, and how remunerative your skill-sets are elsewhere. The higher end would be for people living in the Bay Area (as opposed to remote), people with many years of relevant experience, and people with deep technological knowledge and expertise. After the first year, you will be one of the main people setting the policies that will play a large role in determining your future salary.

Salaries will likely be prorated for temporary and part-time candidates. We are likely also willing to raise the salary for excellent candidates.

Flexibility: We have a strong preference for a full-time candidate. We will seriously consider part-time (ideally 20h/week or more; basically we want LTFF to be your main professional responsibility) candidates as well if we cannot find a good full-time fit.

Location: We have a strong preference for a fund chair who lives in the SF Bay Area. However, we will seriously consider remote candidates. Please note that we will probably sponsor visas for hires willing to relocate to the SF Bay Area. 

Timeline and Process: We will evaluate applications on a rolling basis, with preference for applications submitted before October 23. We will offer interviews and (paid) trial tasks to approximately 30 candidates. Based on the selection process, we may then make an offer to you for a 3-month trial period as LTFF chair, with a negotiable starting date. If you do as well as expected or better in the trial period, we will make a permanent offer.

EDIT: We will close applications soon. For those who are still interested, please apply by 2023/11/17 11:59 PM Pacific Time if you want us to look at your application!

Benefits

(The benefits below are inherited from EV, our current fiscal sponsor. As LTFF fund chair, you will likely be the second or third employee at EA Funds, and can play a large role in setting the benefits package that works best for you and the organization.)

Our benefits reflect our belief in investing in our people to build the strongest possible team. We want everyone to be able to perform at their best as we provide world-class support and maximize our positive impact.

  • Prioritized health & wellbeing
    We provide private medical, vision and dental insurance, up to 3 weeks’ paid sick leave, and a mental health allowance of $6,000 each year.
  • Flexible working
    You’re generally free to set your own schedule (with some overlapping hours with colleagues). 
  • Generous vacation
    We provide all team members with 25 days' holiday each year, plus public holidays.
  • Professional development opportunities
    We offer a $6,000 allowance each year and build in opportunities for career growth through on-the-job learning, increasing responsibility, and role progression pathways.
  • Parental leave and support
    New parents have up to 14 weeks of fully-paid leave and up to 52 weeks of leave in total. We also provide financial support to help parents balance childcare needs.
  • Pension and income protection plans
    We offer a 10% employer / 0% employee 401K contribution, and income protection insurance.
  • Equipment to help your productivity
    We will pay for high-quality and ergonomic equipment (laptop, monitors, chair, etc.), in the office or at home if you work remotely.
  • Work environment with catered meals, gym, ergonomic equipment, and ample opportunities to cowork with members of other organizations working on the world’s most pressing problems

To apply:
Please apply here (we estimate the application takes about an hour). Applications will be evaluated on a rolling basis, with preference for applications submitted before October 23.

EA Funds serves a global community, and our team works with people and organizations all over the world. We’re committed to fostering a culture of inclusion, and we encourage individuals with diverse backgrounds and experiences to apply. We especially encourage applications from women, gender minorities, and people of color who are excited about contributing to our mission. We’re an equal-opportunity employer.

Appendix A: A Typical Day as LTFF Fund Chair

(Note that this is very hypothetical: we’ve never had a full-time fund chair before, and the schedule below assumes a more competent LTFF than currently exists).

9am-11am: You review your notes on a proposed, more quantitative alternative to the way LTFF currently does grant evaluations. One of your contractors made a scrappy mathematical model that tentatively suggests the proposed evaluation method has slightly higher EV, but you’re concerned that it’s not worth the switching costs in practice. You also notice a hole in the model. After two hours, you’re still not sure, and you reluctantly decide that you need to spend more time evaluating this proposal later.

11am-11:40am: An applicant with a fairly complicated AI safety research proposal emailed LTFF three days ago saying that they need a more urgent response than they had indicated in the application. Their primary grant investigator is on vacation, so you dedicated this time today to doing the grant evaluation yourself. You look through the PI’s notes and some of the applicant’s public outputs and come to a tentative conclusion. You write down your notes, give the application a tentative score, and put it into Voting.

11:40am-12pm: You jot down some notes on how to change processes so that urgent applications are less likely to slip through the cracks going forwards.

1pm-2pm: You facilitate the weekly LTFF grantmaker meeting. You spend the first 20 minutes soliciting feedback on the new proposed grant evaluation alternative, and open up the next 40 minutes to discuss potentially controversial grants.

2pm-3pm: You answer and send off some emails and Slack threads (you usually do this after lunch but today the grantmaker meeting happened first). Among other things, you pass along a geoengineering grant application to a different grantmaking fund in your network that specializes in extreme climate risks.

3pm-4pm: You vote on grant applications. You read through the applications quickly, look through the notes from the respective primary investigators of each grant, and try your best to form an independent evaluation of the impact of each grant.

4pm-4:30pm: You take a short break. Sometimes you nap, but today you decide to argue with people on LessWrong instead.

4:30pm-5pm: You take a call with a whistleblower about one of your grantees. You ascertain the problem (alleged issues with managerial incompetence and academic integrity). You take notes, ask some questions, and confirm the relevant confidentiality policy (okay to share within LTFF and Comm Health, but please ask before sharing elsewhere).

5pm-5:30pm: You try to decide on next steps for the whistleblowing case. You also share your notes with Comm Health. 

5:30pm-6pm: You do a daily review and decide what to work on the next day. 

10:30pm-11pm: You take a call with a new earning-to-give donor from Eastern Europe. The donor is interested in making another donation, but this one specifically earmarked for AI governance. You explain your policy on this matter: it isn’t feasible for LTFF to have narrowly targeted donations to specific focus areas without funging, and trying to set up an alternative system would not be worth the overhead. The donor is understanding and says she’ll circle back on whether it makes more sense for her to make that donation to LTFF anyway vs give directly to GovAI or a similar organization. You wish her luck.

Comments (21)

Larks

I love the 'Day in the Life', feels like it did a good job exposing the sorts of issues and frustrations you deal with!

The benefits below are inherited from EV, our current fiscal sponsor. As LTFF fund chair, you will likely be the second or third employee and EA funds

Is this an announcement that EA Funds is spinning out of EV?

It is not. Was it the word “current” that gave you that impression?

I wasn't sure how to read it, but my highest probability interpretation was "we're currently part of EV, but we're spinning out soon and you would likely be the second or third employee of the spun out organization".

Thanks for clarifying!

(What does "you will likely be the second or third employee and EA funds" mean?)

*"and" should be "at," sorry.

Thanks! So is the idea that this person would be some high-number employee of EV, but you don't think of it that way? And instead EA Funds and the other orgs legally within EV think of themselves as separate organizations with their own employee counts?

And instead EA Funds and the other orgs legally within EV think of themselves as separate organizations with their own employee counts?

Yes, I don't want to speak for Caleb (the only existing FT employee of EA Funds[1]) and I certainly don't want to speak for other orgs, but I think the second or third employee at EA Funds should think of themselves as being a low-number employee at EA Funds, rather than the 200th or whatever employee of EV. Whereas eg the 50th employee at Rethink Priorities should meaningfully think of themselves as the 50th employee of RP, even if they're the first member of a new team at RP.

De facto, EA Funds does the following things independently or almost independently of EV:

  • fundraise
  • branding
  • set organizational strategy and theory of change
  • hire
  • decide which offices to work at
  • if we ever have work retreats, EA Funds will decide where the retreat is, and there's no expectation that non-EA Funds EV people will be invited (and similarly other EV org retreats don't invite us)
  • organizational communication in its own Slack
  • internal processes etc
  • set non-executive pay rates
    • Though I think it needs to be reviewed by EV
  • pay contractors, including ones who work at other EV orgs

The following things are shared:

  • The default benefits package
  • payroll, accounting
  • legal risk (which in the last year has been quite frustrating)
    • all my high-level posts written in a professional capacity for EA Funds are reviewed by EV lawyer(s).
  • certain shared services that might be easier to do as part of EV, but we still pay for them separately
    • in particular, we currently pay EV Ops for grant disbursement (and EV Ops people won't consider themselves as working for EA Funds, and vice versa)
    • We also pay tech people who work for other EV orgs like CEA for website maintenance etc.

The closest non-EA analogy I can think of is Alphabet. I expect people at the flagship Alphabet subcompany (Google) shouldn't have much of a mental distinction between Alphabet and Google, but e.g., the 20th employee at Wing would be very mistaken if they approach joining Wing expecting a "big company" vibe. Similarly, "flagship" orgs within EV like CEA and EV Operations are likely more closely connected to the EV brand and organizational center etc., but people at more peripheral projects like GovAI and Asterisk should mostly not think of themselves as EV employees first, except in the legal sense. (Likewise, Epoch and Apollo employees probably do not think of themselves as Rethink Priorities employees.) And the reality is that the Alphabet <> Wing connection is in many ways stronger than EV <> EA Funds (e.g. employees of Alphabet subsidiaries probably code in the same repo, get paid in the same stock, etc).

EA Funds is a bit of an edge case because it originally grew out of CEA before spinning out. It also has "EA" in the name. We internally do not think of ourselves as particularly tied to EV, and it's rather frustrating (from my personal perspective) to get most of the costs of being in a large organization with very few of the benefits, and if external people continue to primarily think of us as an "EV project" I think this itself will be a good reason (though not necessarily dispositive) to spin out completely and rebrand, though of course that is also costly.

  1. ^

    I'm a contractor. If I decide to be an employee, I'd be the 2nd employee.

Presumably the EV board could fire you all though if it wanted to, and my understanding is that at least in the past they had a veto on grants. If the buck ultimately stops with them, that seems like a pretty meaningful way in which you are part of EV.

Yes, this is a good point re "firing". I'm confused about how much it matters when there's de jure power that's not in fact exercised,[1] but I agree that in principle there's some real oversight responsibility from EV. Which from an individual perspective is rather rough; EV can fire us but doesn't give us money and has no obligation to keep us employed. At some point I'd be interested in interviewing donors about whether they perceive the costs of such oversight as greater or lower than the benefits; I can see it going either way.[2]

  1. ^

    I'm not aware of any cases other than CEA.

  2. ^

    From the donor perspective on the one hand having a smaller number of entities to model makes it easier to decide whether to donate, on the other hand having a board to complain to in case of abuse etc seems valuable for mitigating certain types of downside risks.

  • Having the EV board as your board ≠ becoming a coworker of someone at 80k
  • There is an important sense that EA Funds is ""merely"" a project of EV, and another important sense in which they are a ~2 person team
  • Fiscal sponsorship is pretty common
  • EV's board has never (to my knowledge) fired an employee of an organization, but they have fired a CEO

As another example of this, apparently the EV board banned GWWC from advertising, which suggests they would be willing to exercise that power over your fundraising and branding.

We have a strong preference for a fund chair who lives in the SF Bay Area.

Why? Is EA Funds trying to consolidate staff in the Bay generally, or for the LTFF specifically? I'm worried that this perpetuates existing dynamics around the concentration of funding in the Bay, especially at a time when EA Funds is, in theory, becoming more separate from Open Phil.

The other full-time members of EA Funds work from Berkeley (as do many of the fund managers) and we find working in person very useful. It’s not implausible that we’d move to be near the new fund chair if they weren’t able to relocate to the Bay Area.

The Bay Area is also great as it has a really high density of EA and AIS stakeholders, and I’d expect it to have the plurality (but nowhere near the majority) of our grantees amongst locations of similar sizes.

I travel pretty regularly to the UK and Boston - though I haven’t visited South America, Asia etc. very much. My guess is that some of this travel helps to reduce the dynamics that you’re worried about but I don’t have great solutions beyond what we’re currently doing. If we had a lot more in the way of resources I could imagine translating the website and hiring interpreters to help us evaluate grants in other languages - but empirically I think this rarely stops people applying and I expect it to get better as LLMs improve.

In addition to the points Caleb raised, I think there are pretty good reasons to believe that longtermists[1] are underinvesting in Bay Area fundraising. 

Note that this is entirely theoretical currently; all of my donor calls to date have been online, except a few in person at a Bay Area conference (which I presumably could've attended even if I lived elsewhere). I wouldn't endorse the LTFF chair moving to the Bay just for fundraising until the model is more proven.

  1. ^

    Probably EA in general, but I'm most convinced about the longtermist side.

Nitpick: I would be sad if people ruled themselves out for e.g. being "20th percentile conscientiousness" since in my impression the popular tests for OCEAN are very sensitive to what implicit reference class the test-taker is using. 

For example, I took one a year ago and got third percentile conscientiousness, which seems pretty unlikely to be true given my abilities to e.g. hold down a grantmaking job, get decent grades in grad school, successfully run 50-person retreats, etc. I think the explanation is basically that this is how I respond to "I am often late for my appointments": "Oh boy, so true. I really am often rushing to my office for meetings and often don't join until a minute or two after the hour." And I could instead be thinking, "Well, there are lots of people who just regularly completely miss appointments, don't pay bills, lose jobs, etc. It seems to me like I'm running late a lot, but I should be accounting for the vast diversity of human experience and answer 'somewhat disagree'." But the first thing is way easier; you kinda have to know about this issue with the test to do the second thing.

(Unless you wouldn't hire someone because they were only ~1.3 standard deviations more conscientious than I am, which is fair I guess!)

Yeah this is a good point; fwiw I was pointing at "<30th percentile conscientiousness" as a problem that I have, as someone who is often late to meetings by more than 1-2 minutes (including twice today). My guess is that my (actual, not perceived) level of conscientiousness is pretty detrimental to LTFF fund chair work, while yours should be fine? I also think "Harvard Law student" is just a very wacky reference class re: conscientiousness; most people probably come from a less skewed sample than yours. 

I agree with the overall point from tlevin, but I think that "evidence you are not a good fit" is still a reasonable way to describe this, and my guess is that fewer good applicants will rule themselves out than "rule themselves in" as a result of this line.

I'm pretty unsure though - I think often people who are a good fit don't apply for small reasons like the one that tlevin said, but also making the bar seem really low or very vague is bad for managing applicant expectations and doesn't sufficiently take advantage of selection effects.

We've been evaluating applications on a rolling basis, and I think now is a good stopping point for evaluating applications (we've already proceeded to the next step for many candidates).

For those who are still interested, please apply by 2023/11/17 11:59 PM Pacific Time if you want us to look at your application!

(I think the links in the summary are linking to a collaborative edit version of this doc, rather than the places you want it to)

Thanks! I think it's now fixed.
