Ajeya

1675 karma · Joined Aug 2016

Sequences (1): Planned Obsolescence

Comments (74)
Ajeya · 5mo
Thanks Mishaal!

  1. I think previous experience taking on operationally challenging projects is definitely the most important thing here, though it may not necessarily be traditional job experience (running a student group or local group can also provide good experience here). Beyond that, demonstrating pragmatism and worldliness in interviews (for example, when discussing real or hypothetical operational or time management challenges) is useful.
  2. I think an important quality in a role like this is steadiness — not getting easily overwhelmed by juggling a lot of competing tasks, and being able to get the easy stuff done quickly while making smart calls about prioritizing among the harder, more nebulous tasks. And across all our roles, being comfortable with upward feedback and disagreement is key.

For me personally, research and then grantmaking at Open Phil has been excellent for my career development, and it's pretty implausible that grad school in ML or CS, or an ML engineering role at an AI company, or any other path I can easily think of, would have been comparably useful. 

If I had pursued an academic path, then assuming I was successful on that path, I would be in my first or maybe second year as an assistant professor right about now (or maybe I'd just be starting to apply for such a role). Instead, at Open Phil, I wrote less-academic reports and posts about less established topics in a more home-grown style, gave talks in a variety of venues, talked to podcasters and journalists, and built lots of relationships in industry, academia, and the policy world in the course of funding and advising people. I am likely more noteworthy among AI companies, policymakers, and even academic researchers than I would have been if I had spent that time doing technical research in grad school and then gone on to a faculty role — and I additionally get to direct funding, an option that wouldn't have been easily available to me on that alternative path.

The obvious con of OP relative to a path like that is that you have to "roll your own" career path to a much greater degree. If you go to grad school, you will definitely write papers, and then be evaluated based on how many good papers you've written; there isn't something analogous you will definitely be made to do and evaluated on at OP (at least not something clearly publicly visible). But I think there are a lot of pros:

  • The flipside of the social awkwardness and stress that Linch highlighted in one of his questions is that a grantmaking role teaches you how to navigate delicate power dynamics, say no, give tough feedback, and make non-obvious decisions that have tangible consequences over reasonably short timeframes. I think I've developed more social maturity and operational effectiveness than I would have in a research role; this is a pretty important and transferable skillset.
  • There is more space than there would be in a grad school or AI lab setting to think about weird questions that sit at the intersection of different fields and have no obvious academic home, such as the trajectory of AI development and timelines to very powerful AI. While independent research or other small-scale nonprofit research groups could offer a similar degree of space to think about "weird stuff," OP is unusual in combining that kind of latitude with the ability to direct funding (and thus the ability to help make big material projects happen in the world).
     

I'm very interested in these paths. In fact, I currently think that well over half the value created by the projects we have funded or will fund in 2023 will go through "providing evidence for dangerous capabilities" and "demonstrating emergent misalignment"; I wouldn't be surprised if that continues to be the case.

The way I approach the role involves thinking deeply about what technical research we want to see in the world and why, and trying to articulate that to potential grantees (in one-on-one conversations, posts like this one, RFPs, talks at conferences, etc.) so that they can form a fine-grained understanding of how we're thinking about the core problems and where their research interests overlap with Open Phil's philanthropic goals in the space. To do this well, it's really valuable to have a good grip on the existing work in the relevant area(s).

I think this is definitely a real dynamic, but many EAs seem to exaggerate it in their minds and inappropriately round the impact of external research down to zero. Here are a few scattered points on this topic:

  • Third-party researchers can influence the research that happens at labs through the normal diffusion process by which all research influences all other research. There's definitely some barrier to research insight diffusing from academia to companies (and e.g. it's unfortunately common for an academic project to have no impact on company practice because it just wasn't developed with the right practical constraints in mind), but it still happens all the time (and some types of research, e.g. benchmarks, are especially easy to port over). If third-party research can influence lab practice to a substantial degree, then funding third-party research just straightforwardly increases the total amount of useful research happening, since labs can't hire everyone who could do useful work.
  • It will increasingly be possible to do good (non-interpretability) research on large models through APIs provided by labs, and Open Phil could help facilitate that and increase the rate at which it happens. We can also help facilitate greater compute budgets and engineering support.
  • The work of the lab-external safety research community can also impact policy and public opinion; the safety teams at scaling labs are not its only audience. For example, capability evaluations and model organisms work could both have at least as big an impact on policy as they do on the technical safety work happening inside labs.
  • We can fund nonprofits and companies which directly interface with AI companies in a consulting-like manner (e.g. red-teaming consultants); I expect an increasing fraction of our opportunities to look like this.
  • Academics and other external safety researchers we fund now can end up joining scaling labs later (as e.g. Ethan Perez and Collin Burns did), to implement ideas that they developed on the outside; I think this is likely to happen more and more.
  • Some research directions benefit less than others from access to cutting edge models. For example, it seems like there's a lot of interpretability work that can be done on very small models, whereas scalable oversight work seems harder to do without quite smart models.

Professors typically have their own salaries covered, but need to secure funding for each new student they take on, so providing funding to an academic lab allows them to take on more students and grow (it's not always the case that everyone is taking on as many students as they can manage). Additionally, it's often hard for professors to get funding for non-student expenses (compute, engineering help, data labeling contractors, etc) through NSF grants and similar, which are often restricted to students.

Yeah, I feel a lot of this stress as well, though FWIW for me personally research was more stressful. I don't think there's any crisp institutional advice or formula for dealing with this kind of thing, unfortunately. One disposition that I think makes it hard to be a grantmaker at OP (in addition to your list, which I think is largely overlapping) is being overly attached to perfection and to satisfyingly clean, beautifully justifiable answers and decisions.

It's hard to project forward of course, but currently there are ~50 applicants to the TAIS team and ~100 to the AI governance team (although I think a number of people are likely to apply close to the deadline).

There is certainly no defined age cutoff, and we are usually extra excited when we can hire candidates who bring many years of career experience to the table in addition to other qualifications!

I'll just add that in a lot of cases, I fund technical research that I think is likely to help with policy goals (for example, work in the space of model organisms of misalignment can feed into policy goals).
