This post was prompted by three other posts on the EA forum. A recent post raises the alarm about abuse of power and relationships in the EA community. An earlier post suggested that the EA community welcomes shallow critiques but is less receptive to deep critiques. And Dustin Moskovitz has recently mentioned that the EA forum functions as a conflict-of-interest appeals board for Open Philanthropy. Yet on the EA forum, there don't seem to be many specific criticisms of Open Philanthropy.

 

For the sake of epistemics, I wanted to create this post and invite individuals to voice any issues they may have with Open Philanthropy or propose potential solutions.

 

I’ll start by discussing the funding dynamics within the field of technical alignment (alignment theory, applied alignment), with a particular focus on Open Philanthropy.

 

In the past two years, the technical alignment organisations which have received substantial funding include:

All of these organisations are situated in the San Francisco Bay Area. Although many people are thinking about the alignment problem, there is much less funding for technical alignment researchers in other locations (e.g., the east coast of the US, the UK, or other parts of Europe).

 

This collectively indicates that, all else being equal, having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary. As a concerned EA, I find this incredibly alarming and in need of significant reform. Residency in the San Francisco Bay Area also appears to be a must. A skeptical perspective would be that Open Philanthropy allocates its resources to those with the most political access: since it's hard to solve the alignment problem, the only people grantmakers end up trusting to do so are those who are very close to them.

 

This is a problem with Open Philanthropy's design and processes, and it points to the biases of the technical alignment grantmakers and decision makers. It seems almost inevitable given (1) community norms around conflicts of interest and (2) Open Philanthropy's strong centralisation of power. This is not to say that any specific individual is to blame; rather, processes, structure, and norms are the more useful targets for reform.

 

Right now, even if a highly respected alignment researcher thinks your work is extremely valuable, funding can ultimately be blocked by an Open Philanthropy grantmaker, which could cause people to leave alignment altogether.

One common suggestion involves making the grantmaking process more democratic or less centralised. For example, the "regranting" approach has been successful for other grantmakers: a large pool of grantmakers or regrantors is selected, each with the autonomy to make their own decisions. With more grantmakers, there is less potential for Goodharting by individuals and a lower likelihood of funding only those who are known best by a few Open Philanthropy staff. Additionally, Open Philanthropy can still choose regrantors who are more aligned with EA values or have previously demonstrated good judgement. A smaller thing that could help is indicating what proportion of funding in each specific area goes to organisations where there are, or have been, intimate relationships between organisation leadership and Open Philanthropy grantmakers.

 

It seems vital for community health to be able to speak about these sorts of issues openly. What criticisms of Open Philanthropy are important to share, or what are other ways they should improve?

Comments

In the past two years, the technical alignment organisations which have received substantial funding include

Your post does not actually say this, but when I read it I thought you were saying that these are all the organizations that have received major funding in technical alignment. I think it would have been clearer if you had said "include the following organizations based in the San Francisco Bay Area:" to make it clear you're discussing a subset.

Anyway, here are the public numbers, for those curious, of $1 million+ grants in technical AI safety in 2021 and 2022 (ordered by total size) made by Open Philanthropy:

  • Redwood Research: $9.4 million, and then another grant for $10.7 million
  • Many professors at a lot of universities: $14.4 million
  • CHAI: $11.3 million
  • Aleksander Madry at MIT: $1.4 million
  • Hofvarpnir Studios: $1.4 million
  • Berkeley Existential Risk Initiative - CHAI collaboration: $1.1 million
  • Berkeley Existential Risk Initiative - SERI MATS Program: $1 million
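(If I've added these up correctly, the listed grants come to roughly $50.7 million in total.)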

The Alignment Research Center received much less: $265,000.

There isn't actually any public grant saying that Open Phil funded Anthropic. However, that isn't to say that they couldn't have made a non-public grant. It was public that FTX funded Anthropic.

having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary

Based on spending some time in Berkeley, I think a more accurate way to describe this is as follows:

People who care about AI safety and are involved in EA tend to move to Berkeley because that is where everyone else is. It really can increase your productivity if you can easily interact with others working in your field and know what is going on, or so the established wisdom goes. The people who have been around the longest are often leading research organizations or are grantmakers at Open Phil. They go to the same parties, have the same friends, work in the same offices, and often spend nearly all of their time working with little time to socialize with anyone outside their community. Unless they make a special effort to avoid dating anyone in their social community, they may end up dating a grantmaker.

If we want these conflicts of interest to go away, we could try simply saying it should be a norm for Open Phil not to grant to organizations with possible conflicts of interest. But knowing the Berkeley social scene, this means that many Open Phil grantmakers wouldn't be able to date anyone in their social circles, since basically everyone in their social circles is receiving money from Open Phil.

The real question is, as you say, one of structure: whether so many of the EA-aligned AI safety organizations should be headquartered in close proximity, and whether EAs should live together and be friends with basically only other EAs. That's the dynamic that created the conflicts. I don't think the answer to this is extremely obvious, but I don't really feel like trying to argue both sides of it right now.

It's possibly true that regrantors would reduce this effect in grantmaking, because you could designate regrantors in other places or who have different friends. But my suspicion would be that regrantors would by default be the same people who are already receiving grants.

There isn't actually any public grant saying that Open Phil funded Anthropic

I was looking into this topic, and found this source:

Anthropic has raised a $124 million Series A led by Skype co-founder Jaan Tallinn, with participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research and Eric Schmidt. The company is a developer of AI systems.

Speculating, and conditional on the Pitchbook data being correct: I don't think Moskovitz funded Anthropic because of his object-level beliefs about their value or because they're such good pals; rather, I'm guessing he received a recommendation from Open Philanthropy, even if Open Philanthropy wasn't the vehicle he used to transfer the funds.

Also note that Luke Muehlhauser is part of the board of Anthropic; see footnote 4 here.

My impression is that the lag between "idea is common knowledge at OpenPhil" and "idea is public" is fairly long. I was hoping that hiring more communications staff would help with this, and maybe it has, but the problem still seems to exist.

As evidence that sharing information is valuable: 6 of the 20 most valuable Forum posts from 2022 were from OpenPhil authors, and Holden is the only author to have multiple posts in the top 20. (I think my personal list of most valuable posts is even more skewed towards OpenPhil authors.)

I understand the reasons why it's costly to do public writeups (ironically, Holden's post on public discourse is one of the better references), and I'm not confident that people are actually making a mistake here,[1] but it does seem plausible that this is being undervalued.

 

  1. Also I'm biased in favor of liking stuff that's on the Forum

In the past two years, the technical alignment organisations which have received substantial funding include:

In context it sounds like you're saying that Open Phil funded Anthropic, but as far as I am aware that is simply not true.

I think maybe what you meant to say is that, "These orgs that have gotten substantial funding tend to have ties to Open Phil, whether OP was the funder or not." Might be worth editing the post to make that more explicit, so it's clear whether you're alleging a conflict of interest or not.

"having strong or intimate connections with employees of Open Philanthropy greatly enhances the chances of having funding, and it seems almost necessary"

Is this a well-identified phenomenon (in the causal inference sense)?

Consider the following directed acyclic graph:

Connected with OpenPhil employees ----------> Gets funding from OpenPhil
                ^                                          ^
                |                                          |
                +------------ Works on alignment ----------+

One explanation for the correlation you identify is that being connected with OpenPhil employees leads to people getting funding from OpenPhil (as demonstrated by the horizontal arrow). However, another explanation is that working on alignment causes one to connect with others who are interested in the problem of AI alignment, as well as to get funding from a philanthropic organisation which funds work on AI alignment (as demonstrated by the vertical arrows).

These two explanations are observationally equivalent, in the absence of exogenous variation with respect to how connected one is to OpenPhil employees. Since claiming that it is "almost necessary" to have "strong or intimate connections with employees of Open Philanthropy" to get funding implies wrongdoing from OpenPhil, I'd be interested in evidence which would demonstrate this!
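To make the "observationally equivalent" point concrete, here is a minimal simulation sketch (mine, not the commenter's), with entirely made-up probabilities: in one world connections directly cause funding, in the other working on alignment is a common cause of both, yet the observed correlation between connections and funding is positive in both.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# World A: connections have a direct causal effect on funding.
connected_a = rng.binomial(1, 0.3, n)
funded_a = rng.binomial(1, 0.1 + 0.5 * connected_a)

# World B: working on alignment is a common cause of connections and funding;
# connections have no direct effect on funding.
alignment = rng.binomial(1, 0.3, n)
connected_b = rng.binomial(1, 0.1 + 0.6 * alignment)
funded_b = rng.binomial(1, 0.05 + 0.5 * alignment)

for label, c, f in [("direct effect", connected_a, funded_a),
                    ("common cause only", connected_b, funded_b)]:
    print(label, round(float(np.corrcoef(c, f)[0, 1]), 3))

Both worlds show a clearly positive correlation between being connected and being funded, so the observed association alone cannot distinguish the two graphs, which is exactly the identification problem raised here.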

Thanks for writing this!

I think Open Phil should clarify how they allocate resources between different cause areas. The high-level approach is worldview diversification, but I would like to know more about the specifics. For example, is some sort of softmax approach being used? Why, or why not?
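For readers unfamiliar with the term, here is a minimal sketch of what a softmax-style allocation across cause areas could look like; the cause areas, scores, and temperature are hypothetical, and this is not a claim about Open Phil's actual process.

import numpy as np

def softmax_allocation(scores, temperature=1.0):
    # Convert raw cause-area scores into budget shares that sum to 1.
    scaled = np.array(scores, dtype=float) / temperature
    weights = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return weights / weights.sum()

# Hypothetical scores for illustration only.
cause_areas = ["global health", "farm animal welfare", "AI safety", "biosecurity"]
shares = softmax_allocation([3.0, 2.0, 2.5, 1.5], temperature=1.0)
for area, share in zip(cause_areas, shares):
    print(f"{area}: {share:.1%}")

A lower temperature concentrates the budget on the highest-scoring area, while a higher temperature spreads it more evenly, which is one way to interpolate between "fund only the top cause" and "diversify across worldviews".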

It also looks like Open Phil is directing too many resources towards GiveWell's (GW's) top charities, given the uncertainties about their near-term effects on animals and their long-term effects (see here). Even though Open Phil has funded Rethink Priorities to do research on moral weights (see here), which is quite relevant to getting a coherent picture of the near-term effects of GW's top charities, it seems this research should have been done earlier, before significant funding was moved towards GW.

Has Open Phil considered relocating away from the Bay Area? Would this reduce the incidence of tangled relationships and shared offices between grantmakers and grantees? Would it be good (by reducing bias in funding decisions) or bad (by making grantmakers less well-informed)? They should really publish a cost-benefit analysis of the topics you're bringing up.

I know that a few Open Phil staff live outside the Bay Area and work remotely.
