
Linch

@ EA Funds
24594 karma · Joined · Working (6-15 years) · openasteroidimpact.org


I'm interested in what people think are the strongest arguments against this view. Here are a few counterarguments that I'm aware of:

1. Empirically the AI-focused scaling labs seem to care quite a lot about safety, and make credible commitments for safety. If anything, they seem to be "ahead of the curve" compared to larger tech companies or governments.

2. Government/intergovernmental agencies, and to a lesser degree larger companies, are bureaucratic and sclerotic and generally less competent. 

3. The AGI safety issues that EAs worry about the most are abstract and speculative, so having a "normal" safety culture isn't as helpful as buying into the more abstract arguments, which you might expect to be easier for newer companies to do.

4. Scaling labs share "my" values. So AI doom aside, all else equal, you might still want scaling labs to "win" over democratically elected governments/populist control.

We should expect the incentives and culture of AI-focused companies to make them uniquely terrible for producing safe AGI.
 

From a “safety from catastrophic risk” perspective, I suspect an “AI-focused company” (e.g. Anthropic, OpenAI, Mistral) is abstractly pretty close to the worst possible organizational structure for getting us towards AGI. I have two distinct but related reasons:

  1. Incentives
  2. Culture

From an incentives perspective, consider realistic alternative organizational structures to “AI-focused company” that nonetheless have enough firepower to host successful multibillion-dollar scientific/engineering projects:

  1. As part of an intergovernmental effort (e.g. CERN’s Large Hadron Collider, the ISS)
  2. As part of a governmental effort of a single country (e.g. Apollo Program, Manhattan Project, China’s Tiangong)
  3. As part of a larger company (e.g. Google DeepMind, Meta AI)

In each of those cases, I claim that there are stronger (though still not ideal) organizational incentives to slow down, pause/stop, or roll back deployment if there is sufficient evidence or reason to believe that further development can result in major catastrophe. In contrast, an AI-focused company has every incentive to go ahead on AI when the case for pausing is uncertain, and minimal incentive to stop or even take things slowly. 

From a culture perspective, I claim that without knowing any details of the specific companies, you should expect AI-focused companies to be more likely than plausible contenders to have the following cultural elements:

  1. Ideological AGI Vision: AI-focused companies may have a large contingent of “true believers” who are ideologically motivated to make AGI at all costs, and
  2. No Pre-existing Safety Culture: AI-focused companies may have minimal or no strong “safety” culture where people deeply understand, have experience in, and are motivated by a desire to avoid catastrophic outcomes.

The first one should be self-explanatory. The second one is a bit more complicated, but basically I think it’s hard to have a safety-focused culture just by “wanting it” hard enough in the abstract, or by talking a big game. Instead, institutions tend to have a (relatively) safer and more robust culture if they have previously suffered the (large) costs of not focusing enough on safety.

For example, engineers who aren’t software engineers understand fairly deep down that their mistakes can kill people, and that their predecessors’ fuck-ups have indeed killed people (think bridges collapsing, airplanes falling, medicines not working, etc.). Software engineers rarely have that experience.

Similarly, governmental institutions have institutional memories of major historical fuckups and the problems they caused, in a way that new startups very much don’t.

Introducing Ulysses*, a new app for grantseekers. 


 

We (Austin Chen, Caleb Parikh, and I) built an app! You can test the app out if you’re writing a grant application! You can put in sections of your grant application** and the app will try to give constructive feedback on your application. Right now we're focused on the "Track Record" and "Project Goals" sections of the application. (The main hope is to save back-and-forth time between applicants and grantmakers by asking you questions that grantmakers might want to ask.)

Austin, Caleb, and I hacked together a quick app as a fun experiment in coworking and LLM apps. We wanted a short project that we could complete in ~a day. Working on it was really fun! We mostly did it for our own edification, but we’d love it if the product is actually useful for at least a few people in the community!

As grantmakers in AI Safety, we’re often thinking about how LLMs will shape the future; the idea for this app came out of brainstorming, “How might we apply LLMs to our own work?”. We reflected on common pitfalls we see in grant applications, and I wrote a very rough checklist/rubric and graded some Manifund/synthetic applications against the rubric. Caleb then generated a small number of few-shot prompts by hand and then used LLMs to generate further prompts for different criteria (e.g., concreteness, honesty, and information on past projects) using a “meta-prompting” scheme. Austin set up a simple interface in Streamlit to let grantees paste in parts of their grant proposals. All of our code is open source on GitHub (but not open weight 😛).***
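For readers curious what this kind of pipeline looks like in practice, here is a minimal sketch of a Streamlit + OpenAI feedback loop in the spirit of the above. The rubric wording, criteria list, prompts, function names, and model choice are my own illustrative assumptions, not the actual code in the repo; see the GitHub link for the real thing.

```python
# Minimal sketch (not the actual repo code): a Streamlit page that takes one
# section of a grant application and asks an LLM for criterion-based feedback.
# The criteria list, prompt wording, and model name are illustrative assumptions.
import os

import streamlit as st
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
MODEL = "gpt-4o-mini"  # assumed model choice for this sketch

CRITERIA = ["concreteness", "honesty", "information on past projects"]


def criterion_prompt(criterion: str) -> str:
    """'Meta-prompting' in miniature: ask the LLM to write the reviewer prompt
    for a given criterion, instead of hand-writing every prompt."""
    meta = (
        "Write a short system prompt for an assistant that reviews the "
        f"'Track Record' section of a grant application for {criterion}, "
        "asking the questions a grantmaker would want answered."
    )
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": meta}]
    )
    return resp.choices[0].message.content


st.title("Grant application feedback (prototype sketch)")
section_text = st.text_area("Paste your 'Track Record' section here")

if st.button("Get feedback") and section_text:
    for criterion in CRITERIA:
        system_prompt = criterion_prompt(criterion)
        feedback = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": section_text},
            ],
        )
        st.subheader(f"Feedback on {criterion}")
        st.write(feedback.choices[0].message.content)
```

In a real app the criterion prompts would presumably be generated once and refined by hand rather than regenerated on every request, but keeping everything inline makes the sketch easier to read.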

This is very much a prototype, and everything is very rough, but please let us know what you think! If there’s sufficient interest, we’d be excited about improving it (e.g., by adding other sections or putting more effort into prompt engineering). To be clear, the actual LLM feedback isn’t necessarily good or endorsed by us, especially at this very early stage. As usual, use your own best judgment before incorporating the feedback.

*Credit to Saul for the name, who originally got the Ulysses S. Grant pun from Scott Alexander.

** Note: Our app will not be locally saving your data. We are using the OpenAI API for our LLM feedback. OpenAI says that it won’t use your data to train models, but you may still wish to be cautious with highly sensitive data anyway. 

*** Linch led a discussion on the potential capabilities insights of our work, but we ultimately decided that it was asymmetrically good for safety; if you work on a capabilities team at a lab, we ask that you pay $20 to LTFF before you look at the repo.


 

The broader question I'm confused about is how much to update on the local/object-level question of whether the labs are doing "kind of reasonable" stuff, vs what their overall incentives and positions in the ecosystem point them toward doing.

E.g., your site puts OpenAI and Anthropic as the least-bad options based on their activities, but from an incentives/organizational perspective, their place in the ecosystem is just really bad for safety. Contrast with, e.g., being situated within a large tech company[1] where having an AI scaling lab is just one revenue source among many, or Meta's alleged "scorched earth" strategy, where they are trying very hard to commoditize the LLM component.

  1. ^

    eg GDM employees have Google/Alphabet stock, most of the variance in their earnings isn't going to come from AI, at least in the short term.

I'm happy to be one of the intermediaries, if Torres etc are willing to trust me (no particular reason to think they would)

Yudkowsky's comments at his sister's wedding seem surprisingly relevant here:

David Bashevkin:

And I would not have thought that Eliezer Yudkowsky would be the best sheva brachos speaker, but it was the most lovely thing that he said. What did Eliezer Yudkowsky say at your sheva brachos?

Channah Cohen:

Yeah, it’s a great story because it was mind-blowingly surprising at the time. And it is, I think the only thing that anyone said at a sheva brachos that I actually remember, he got up at the first sheva brachos and he said, when you die after 120 years, you’re going to go up to shamayim [this means heaven] and Hakadosh Baruch Hu [this means God]. And again, he used these phrases—


Channah Cohen:

Yeah. Hakadosh Baruch Hu will stand the man and the woman in front of him and he will go through a whole list of all the arguments you ever had together, and he will tell you who was actually right in each one of those arguments. And at the end he’ll take a tally, and whoever was right more often wins the marriage. And then everyone kind of chuckled and Ellie said, “And if you don’t believe that, then don’t act like it’s true.”

David Bashevkin:

What a profound… If you don’t believe that, then don’t act like it’s true. Don’t spend your entire marriage and relationship hoping that you’re going to win the test to win the marriage.

I'm at work and don't have the book with me, but you can look at the "Acknowledgements" section of Superintelligence. 

I agree that it's not clear whether the Department of Philosophy acted reasonably in the unique prestige ecosystem that universities inhabit, whether in the abstract or after adjusting for FHI quite possibly being unusually difficult/annoying to work with. I do think history will vindicate my position in the abstract, and that "normal people" with a smattering of facts about the situation (though perhaps not the degree of granularity where you understand the details of specific academic squabbles) will agree with me.

I don't think it's just an in-group perspective! Bostrom literally gives and receives feedback from kings; other members of FHI have gone on to influential positions in multi-billion dollar companies. 

Are you really saying that if you ask the general public (or members of the intellectual elite), typical philosophy faculty at prestigious universities will be recognized to be as or more impressive or influential in comparison? 

Linch

(I work for EA Funds, including EAIF, helping out with public communications among other work. I'm not a grantmaker on EAIF and I'm not responsible for any decision on any specific EAIF grant). 

Hi. Thanks for writing this. I appreciate you putting the work into this, even though I strongly disagree with the framing of most of the parts of the doc that I feel informed enough to opine on, as well as most of the object-level claims.

Ultimately, I think the parts of your report about EA Funds are mostly incorrect or substantively misleading, given the best information I have available. But I think it’s possible I’m misunderstanding your position or I don’t have enough context. So please read the following as my own best understanding of the situation, which can definitely be wrong. But first, onto the positives:

  • I appreciate that the critical points in the doc are made as technical critiques, rather than paradigmatic ones. Technical critiques are ones that people are actually compelled to respond to, and can actually compel action (rather than just making people feel vaguely bad/smug without compelling any change).
  • The report has many numerical/quantitative details. In theory, those are easier to falsify.
  • The report appears extensive and must have taken a long time to write.

There are also some things the report mentioned that we have also been tracking, and I believe we have substantial room for improvement:

  • Our grant evaluation process is still slower than we would like.
    • While broadly faster than other comparable funds I’m aware of (see this comment), I still think we have substantial room for improvement.
  • Our various subfunds, particularly EAIF, have at points been understaffed and under-capacity. 
    • While I strongly disagree with the assessment that hiring more grantmakers is “fairly straightforward” (bad calls with grant evaluations are very costly, for reasons including but not limited to insufficient attention to adversarial selection; empirically most EA grantmaking organizations have found it very difficult to hire), I do think on the margin we can do significantly more on hiring.
  • Our limited capacity has made it difficult for us to communicate and/or coordinate with all the other stakeholders in the system, so we’re probably missing out on key high-EV opportunities (e.g. several of our existing collaborations in AI safety started later than they counterfactually could have, and we haven’t been able to schedule time to fly out to coordinate with folks in London/DC/Netherlands/Sweden).
    • One of the reasons I came on to EA Funds full-time is to help communicate with various groups. 

Now, onto the disagreements:

Procedurally:

  • I was surprised that so many of the views ascribed to “EA Funds’ leadership” were based on notes from a single informal call with the EA Funds project lead, which you did not confirm were okay to share publicly.
    • They said that they usually explicitly request privacy before being quoted publicly, but are not sure if they did so in this instance.
    • They were also surprised that there was a public report out at all.
      • My best guess is that there was a misunderstanding arising from a norm difference: you come from the expectation that professional meetings are public unless explicitly stated otherwise, whereas the norm that I (and I think most EAs?) am more used to is that 1-1 meetings are private unless explicitly stated otherwise.
    • They also disagree with the characterization of almost all of their comments (will say more below), which I think speaks to the epistemic advantages of confirming before publicly attributing comments made by someone else.
  • I’d have found it helpful if you shared a copy of the post before making it public.
    • We could’ve corrected most misunderstandings.
    • If you were too busy for private corrections, I could’ve at least written this response earlier.
  • Many claims in the report were simply false (more details below). Having a fact-checking process might be useful going forward.

 

Substantively:

  • When the report quoted “CEA has had to step in and provide support in evaluating EAIF grants for them”, I believe this is false, or at least substantively misleading.
    • The closest thing I can think of is that we ask CEA Comm Health for help in reviewing comm health issues with our grants (which, as I understand it, is part of their explicit job duties and an arrangement both sides are happy with).
      • It’s possible your source misunderstood the relationship and thought the Comm Health work was supererogatory or accidental? 
    • We frequently ask technical advisors for advice on project direction in a less institutionalized capacity as well[1]. I think “step in” conveys the wrong understanding, as most of the grant evaluation is still done by the various funds.
    • (To confirm this impression, we checked with multiple senior people involved with CEA’s work; however, this was not an exhaustive sweep and it’s definitely possible that my impression is incorrect). 
  • While it is true that we’re much slower than we would like, it seems very unreasonable to single out EA Funds grantmaking as "unreasonably long" when other grantmakers are as slow or slower. 
    • See e.g. Abraham Rowe’s comment here
      • “Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I'd guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn't the right comparison). Given that LTFF is funding a lot of research, 2 months is almost certainly better than most academic grants.
      • My impression from what I think is a pretty large sample of EA funders and grants is also that EA Funds is the fastest turnaround time on average compared to the list you mention [Editor's note: "Open Phil, SFF, Founders Pledge, and Longview"] ([with] exceptions in some cases in both directions for EA Funds and other funders)”
      • I also sanity-checked with both Google Search and GPT-4[2]
    • Broadly, I’m aware that other people on the forum also believe that we’re slow, but I think most people who believe this do so because:
      • We have our own aspirations to be faster, and we try to do so.
      • They think from a first-principles perspective that grantmakers “can” be faster.
      • We talk about our decisions and process very publicly, and so become more of an easy target for applicants’ grievances.
    • But while I understand and sympathize with other people’s frustrations[3], it is probably not factually true that we’re slower in relative terms than other organizations, and it’s odd to single out EA Funds here.
  • When your report said, “Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, and so ended up disengaging from further meta grantmaking coordination” my guess is that the quoted position is not true.
    • I feel like we’re keeping tabs on all the major donors (OP, SFF, Longview, etc.). So I’m not sure who they could possibly be referring to.
      • Though I guess it’s possible that there is a major donor that’s so annoyed with us that they made efforts to hide themselves from us so we haven’t even heard about them.
      • But I think it’s more likely that the person in question isn’t a major donor.
    • To be clear, any donor feeling frustrated with us is regrettable. While it is true that not all donors can or should want to work with us (e.g. due to sufficiently differing cause prioritization or empirical worldviews), it is still regrettable that people have an emotionally frustrating experience. 
  • Your report says that EA Funds leadership was “strongly dismissing the value of prioritization research, where other grantmakers generally expressed higher uncertainty”, but this is false.
    • I want to be clear that EA Funds has historically been, and currently is, quite positive on cause prioritization in general (though of course specific work may be lower quality, or good work that’s not cause prioritization may be falsely labeled as cause prioritization).
    • By revealed preferences, EAIF has given very large grants to worldview investigations and moral weights work at Rethink Priorities
    • By stated preferences, “research that aids prioritization across different cause areas” was listed as one of the central examples of things that EAIF would be excited to fund.
    • My understanding is that the best evidence you have for this view is that EA Funds leadership “would consider less than half of what [Rethink Priorities] does cause prioritization.” 
    • I’m confused why you think the quoted statement is surprising or good evidence, given that the stated claim is just obviously true. E.g., The Rodenticide Reduction Sequence, Cultured meat: A comparison of techno-economic analyses, or Exposure to Lead Paint in Low- and Middle-Income Countries (to give three examples of work that I have more than a passing familiarity with) are much more about intervention prioritization than intercause prioritization. An example of the latter is the moral weights work at Rethink Priorities’ Worldview Investigations.
    • My best guess is that this is just a semantic misunderstanding, where EA Funds’ project lead was trying to convey a technical point about the difference between intercause prioritization and intervention prioritization, whereas you understood his claim as an emotive position of “boo cause prioritization”.
  • Your report states that “EA Funds leadership doesn't believe that there is more uncertainty now with EA Fund's funding compared to other points in time.” This is clearly false.
    • I knew coming on to EA Funds that the job would have greater uncertainty than other jobs I’ve had in the past, and I believe this was adequately communicated to me.
    • We think about funding uncertainty a lot. EA Funds’ funding has always been uncertain, and things have gotten worse since Nov 2022.
    • Nor would it be consistent with either our stated or revealed preferences. 
      • Our revealed preference is that we spend substantially more staff time on fundraising than we have in the past.
      • Our stated preferences include “I generally expect our funding bar to vary more over time and to depend more on individual donations than it has historically.” and “LTFF and EAIF are unusually funding-constrained right now” 
        • The last one was even the title of a post with 4000+ views!
        • I don’t have a good idea for how much more unambiguous we could be.

 

Semantically:

I originally wanted to correct misunderstandings and misrepresentations of EA Funds’ positions more broadly in the report. However, I think there were just a lot of misunderstandings overall, so it's simpler for people to just assume I contest almost every characterization of the form “EA Funds believes X”. A few select examples:

  • When your report claimed “​​leadership is of the view that the current funding landscape isn't more difficult for community builders” I a) don’t think we’ve said that, and b) to the extent we believe it, it’s relative to eg 2019; it’d be false compared to the 2022 era of unjustly excessive spending.
  • “The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they're reliably around; only if the [Meta-Charity Funders] was around for some time, would EA Funds find it worth factoring into their plans. ” 
    • To clarify: at the time, the fund chair was unsure if MCF was only going to have one round. If they only have one round, it wouldn't make sense to change EA Funds’ strategy based on that. If they have multiple rounds (e.g., more than 2), it could be worth factoring in. The costs of coordination are nontrivial.
    • It’s also worth noting that the fund chair had 2 calls with MCF and passed on various grants that they thought MCF might be interested in evaluating, which some people may consider coordination.
    • We’ve also coordinated more extensively with other non-OP funders, and have plans in the works for other collaborations with large funders.
  • “In general, they don't think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work. ”
    • AFAIK nobody at EA Funds believes this.
  • [EA funds believes] “so if EA groups struggle to raise money, it's simply because there are more compelling opportunities available instead.”
    • The statement seems kind of conceptually confused? Funders should always be trying to give to the most cost-effective projects on the margin.
    • The most charitable position that’s similar to the above I could think of is that some people might believe that 
      • “most community-building projects that are not funded now aren’t funded because of constraints on grantmaker capacity,” so grantmakers make poor decisions
        • Note that the corollary to the above is that many of the community building projects that are funded should not have been funded. 
      • I can’t speak to other people at EA Funds, but my own best guess is that this is not true (for projects that people online are likely to have heard of).
        • I’m more optimistic about boutique funding arrangements for projects within people’s networks that are unlikely to have applied to big funders, or people inspiring those around them to create new projects.
    • If projects aren’t funded, the biggest high-level reason is that there are limited resources in the world in general, and in EA specifically. You might additionally also believe that meta-EA in general is underfunded relative to object-level programs. 
  • (Minor): To clarify “since OP's GHW EA team is focusing on effective giving, EAIF will consider this less neglected” we should caveat this by saying that less neglected doesn’t necessarily mean less cost-effective than other plausible things for EAIF to fund.
  • Re: “EA Funds not posting reports or having public metrics of success”
    • We do post public payout reports.
    • My understanding is that you are (understandably) upset that we don’t have clear metrics and cost-effectiveness analyses written up. 
      • I think this is a reasonable and understandable complaint, and we have indeed gotten this feedback before from others and have substantial room to improve here.
    • However, I think many readers might interpret the statement as something stronger, e.g. interpreting it as us not posting reports or writing publicly much at all.
      • As a matter of practice, we write a lot more about what we fund and our decision process than any other EA funder I’m aware of (and likely more than most other non-EA funders). I think many readers may get the wrong impression of our level of transparency from that comment.

Note to readers: I reached out to Joel to clarify some of these points before posting. I really appreciate his prompt responses! Due to time constraints, I decided to not send him a copy of this exact comment before posting publicly.

  1. ^

    I personally have benefited greatly from talking to specialist advisors in biosecurity.

  2. ^

    From GPT4

    “The median time to receive a response for an academic grant can vary significantly depending on the funding organization, the field of study, and the specific grant program. Generally, the process can take anywhere from a few months to over a year. ”
    “The timeline for receiving a response on grant applications can vary across different fields and types of grants, but generally, the processes are similar in length to those in the academic and scientific research sectors.”
    “Smaller grants in this field might be decided upon quicker, potentially within 3 to 6 months [emphasis mine], especially if they require less funding or involve fewer regulatory hurdles.”

  3. ^

    Being funded by grants kind of sucks as an experience compared to e.g. employment; I dislike adding to such frustrations. There are also several cases I’m aware of where counterfactually impactful projects were not taken due to funders being insufficiently able to fund things in time; in some of those instances I'm more responsible than anybody else.
