
Linch

Founder and CEO @ Open Asteroid Impact
24,516 karma · Joined · Working (6-15 years) · openasteroidimpact.org

Comments (2651)

The broader question I'm confused about is how much to update on the local/object-level evidence of whether the labs are doing "kind of reasonable" stuff, vs what their overall incentives and positions in the ecosystem point them towards doing. 

E.g., your site puts OpenAI and Anthropic as the least-bad options based on their activities, but from an incentives/organizational perspective, their place in the ecosystem is just really bad for safety. Contrast with, e.g., being situated within a large tech company[1] where having an AI scaling lab is just one revenue source among many, or Meta's alleged "scorched Earth" strategy where they are trying very hard to commoditize their complement (LLMs).

  1. ^

    e.g., GDM employees have Google/Alphabet stock, so most of the variance in their earnings isn't going to come from AI, at least in the short term.

I'm happy to be one of the intermediaries, if Torres etc. are willing to trust me (no particular reason to think they would).

Yudkowsky's comments at his sister's wedding seem surprisingly relevant here:

David Bashevkin:

And I would not have thought that Eliezer Yudkowsky would be the best sheva brachos speaker, but it was the most lovely thing that he said. What did Eliezer Yudkowsky say at your sheva brachos?

Channah Cohen:

Yeah, it’s a great story because it was mind-blowingly surprising at the time. And it is, I think the only thing that anyone said at a sheva brachos that I actually remember, he got up at the first sheva brachos and he said, when you die after 120 years, you’re going to go up to shamayim [this means heaven] and Hakadosh Baruch Hu [this means God]. And again, he used these phrases—

Yeah. Hakadosh Baruch Hu will stand the man and the woman in front of him and he will go through a whole list of all the arguments you ever had together, and he will tell you who was actually right in each one of those arguments. And at the end he’ll take a tally, and whoever was right more often wins the marriage. And then everyone kind of chuckled and Ellie said, “And if you don’t believe that, then don’t act like it’s true.”

David Bashevkin:

What a profound… If you don’t believe that, then don’t act like it’s true. Don’t spend your entire marriage and relationship hoping that you’re going to win the test to win the marriage.

I'm at work and don't have the book with me, but you can look at the "Acknowledgements" section of Superintelligence. 

I agree that it's not clear whether the Department of Philosophy acted reasonably in the unique prestige ecosystem which universities inhabit, whether in the abstract or after adjusting for FHI quite possibly being unusually difficult/annoying to work with. I do think history will vindicate my position in the abstract, and that "normal people" with a smattering of facts about the situation (though perhaps not the degree of granularity at which you understand the details of specific academic squabbles) will agree with me.

I don't think it's just an in-group perspective! Bostrom literally gives feedback to and receives feedback from kings; other members of FHI have gone on to influential positions in multi-billion dollar companies. 

Are you really saying that if you ask the general public (or members of the intellectual elite), typical philosophy faculty at prestigious universities will be recognized to be as or more impressive or influential in comparison? 


(I work for EA Funds, including EAIF, helping out with public communications among other work. I'm not a grantmaker on EAIF and I'm not responsible for any decision on any specific EAIF grant). 

Hi. Thanks for writing this. I appreciate you putting the work into this, even though I strongly disagree with the framing of most of the parts of the doc that I feel informed enough to opine on, as well as with most of the object-level claims. 

Ultimately, I think the parts of your report about EA Funds are mostly incorrect or substantively misleading, given the best information I have available. But I think it’s possible I’m misunderstanding your position or I don’t have enough context. So please read the following as my own best understanding of the situation, which can definitely be wrong. But first, onto the positives:

  • I appreciate that the critical points in the doc are made as technical critiques, rather than paradigmatic ones. Technical critiques are ones that people actually feel compelled to respond to, and they can actually compel action (rather than just making people feel vaguely bad/smug without compelling any change).
  • The report has many numerical/quantitative details. In theory, those are easier to falsify.
  • The report appears extensive and must have taken a long time to write.

There are also some things the report mentioned that we have also been tracking, and I believe we have substantial room for improvement:

  • Our grant evaluation process is still slower than we would like.
    • While broadly faster than other comparable funds I’m aware of (see this comment), I still think we have substantial room for improvement.
  • Our various subfunds, particularly EAIF, have at points been understaffed and under-capacity. 
    • While I strongly disagree with the assessment that hiring more grantmakers is “fairly straightforward” (bad calls with grant evaluations are very costly, for reasons including but not limited to insufficient attention to adversarial selection; empirically most EA grantmaking organizations have found it very difficult to hire), I do think on the margin we can do significantly more on hiring.
  • Our limited capacity has made it difficult for us to communicate and/or coordinate with all the other stakeholders in the system, so we're probably missing out on key high-EV opportunities (e.g., several of our existing collaborations in AI safety have started later than they counterfactually could have, and we haven't been able to schedule time to fly out to coordinate with folks in London/DC/Netherlands/Sweden).
    • One of the reasons I came on to EA Funds full-time is to help communicate with various groups. 

Now, onto the disagreements:

Procedurally:

  • I was surprised that so many of the views ascribed to “EA Funds’ leadership” were from notes taken from a single informal call with the EA Funds project lead, which you did not confirm was okay to share publicly. 
    • They said that they usually explicitly request privacy before being quoted publicly, but are not sure if they did so in this instance.
    • They were also surprised that there was a public report out at all.
      • My best guess is that there was a misunderstanding that arose from a norm difference, where you come from the expectation that professional meetings are public unless explicitly stated otherwise, whereas the norm that I (and I think most EAs?) am more used to is that 1-1 meetings are private unless explicitly stated otherwise. 
    • They also disagree with the characterization of almost all of their comments (will say more below), which I think speaks to the epistemic advantages of confirming before publicly attributing comments made by someone else.
  • I’d have found it helpful if you shared a copy of the post before making it public.
    • We could've corrected most misunderstandings.
    • If you were too busy for private corrections, I could’ve at least written this response earlier.
  • Many claims in the report were false (more details below). Having a fact-checking process might be useful going forward.

 

Substantively:

  • When the report quoted “CEA has had to step in and provide support in evaluating EAIF grants for them” I believe this is false or at least substantively misleading.
    • The closest thing I can think of is that we ask CEA Comm Health for help in reviewing comm health issues with our grants (which, as I understand it, is part of their explicit job duties and which both sides are happy with).
      • It’s possible your source misunderstood the relationship and thought the Comm Health work was supererogatory or accidental? 
    • We frequently ask technical advisors for advice on project direction in a less institutionalized capacity as well[1]. I think “step in” conveys the wrong understanding, as most of the grant evaluation is still done by the various funds.
    • (To confirm this impression, we checked with multiple senior people involved with CEA’s work; however, this was not an exhaustive sweep and it’s definitely possible that my impression is incorrect). 
  • While it is true that we're much slower than we would like, it seems very unreasonable to single out EA Funds' grantmaking as "unreasonably long" when other grantmakers are as slow or slower. 
    • See e.g. Abraham Rowe’s comment here
      • “"Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I'd guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn't the right comparison). Given that LTFF is funding a lot of research, 2 months is almost certainly better than most academic grants.
      • My impression from what I think is a pretty large sample of EA funders and grants is also that EA Funds is the fastest turnaround time on average compared to the list you mention [Editor's note: "Open Phil, SFF, Founders Pledge, and Longview"] ([with] exceptions in some cases in both directions for EA Funds and other funders)”
      • I also sanity-checked with both Google Search and GPT-4.[2]
    • Broadly, I'm aware that other people on the forum also believe that we're slow, but I think most people who believe this do so because:
      • We have our own aspirations to be faster, and we try to do so.
      • They think from a first-principles perspective that grantmakers “can” be faster.
      • We talk about our decisions and process very publicly, and so become more of an easy target for applicants’ grievances.
    • But while I understand and sympathize with other people’s frustrations[3], it is probably not factually true that we’re slower in relative terms than other organizations, and it’s odd to single out EA Funds here.
  • When your report said, “Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, and so ended up disengaging from further meta grantmaking coordination” my guess is that the quoted position is not true.
    • I feel like we're keeping tabs on all the major donors (OP, SFF, Longview, etc.), so I'm not sure who they could possibly be referring to.
      • Though I guess it’s possible that there is a major donor that’s so annoyed with us that they made efforts to hide themselves from us so we haven’t even heard about them.
      • But I think it’s more likely that the person in question isn’t a major donor.
    • To be clear, any donor feeling frustrated with us is regrettable. While it is true that not all donors can or should want to work with us (e.g. due to sufficiently differing cause prioritization or empirical worldviews), it is still regrettable that people have an emotionally frustrating experience. 
  • Your report says that EA Funds leadership was “strongly dismissing the value of prioritization research, where other grantmakers generally expressed higher uncertainty”, but this is false.
    • I want to be clear that EA Funds has historically been, and currently is, quite positive on cause prioritization in general (though of course specific work may be lower quality, or good work that’s not cause prioritization may be falsely labeled as cause prioritization).
    • By revealed preferences, EAIF has given very large grants to worldview investigations and moral weights work at Rethink Priorities.
    • By stated preferences, “research that aids prioritization across different cause areas” was listed as one of the central examples of things that EAIF would be excited to fund.
    • My understanding is that the best evidence you have for this view is that EA Funds leadership “would consider less than half of what [Rethink Priorities] does cause prioritization.” 
    • I'm confused why you think the quoted statement is surprising or good evidence, given that the stated claim is just obviously true. E.g., The Rodenticide Reduction Sequence, Cultured meat: A comparison of techno-economic analyses, or Exposure to Lead Paint in Low- and Middle-Income Countries (to give three examples of work that I have more than a passing familiarity with) are much more about intervention prioritization than intercause prioritization. An example of the latter is the moral weights work at Rethink Priorities' Worldview Investigations.
    • My best guess is that this is just a semantic misunderstanding, where EA Funds' project lead was trying to convey a technical point about the difference between intercause prioritization vs intervention prioritization, whereas you understood his claim as an emotive position of "boo cause prioritization".
  • Your report states that “EA Funds leadership doesn't believe that there is more uncertainty now with EA Fund's funding compared to other points in time.” This is clearly false. 
    • I knew coming on to EA Funds that the job would have greater uncertainty than other jobs I've had in the past, and I believe this was adequately communicated to me. 
    • We think about funding uncertainty a lot. EA Funds’ funding has always been uncertain, and things have gotten worse since Nov 2022.
    • Nor would it be consistent with either our stated or revealed preferences. 
      • Our revealed preference is that we spend substantially more staff time on fundraising than we have in the past.
      • Our stated preferences include “I generally expect our funding bar to vary more over time and to depend more on individual donations than it has historically.” and “LTFF and EAIF are unusually funding-constrained right now” 
        • The last one was even the title of a post with 4000+ views!
        • I don’t have a good idea for how much more unambiguous we could be.

 

Semantically:

I originally wanted to correct misunderstandings and misrepresentations of EA Funds' positions more broadly in the report. However, I think there were just a lot of misunderstandings overall, so I think it's simpler for people to just assume I contest almost every characterization of the form “EA Funds believes X”. A few select examples:

  • When your report claimed “leadership is of the view that the current funding landscape isn't more difficult for community builders”, I (a) don't think we've said that, and (b) to the extent we believe it, it's relative to e.g. 2019; it'd be false compared to the 2022 era of unjustly excessive spending.
  • “The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they're reliably around; only if the [Meta-Charity Funders] was around for some time, would EA Funds find it worth factoring into their plans. ” 
    • To clarify, at the time the fund chair was unsure if MCF was only going to have one round. If they only have one round, it wouldn't make sense to change EA Funds' strategy based on that; if they have multiple rounds (e.g., more than two), it could be worth factoring in. The costs of coordination are nontrivial.
    • It’s also worth noting that the fund chair had 2 calls with MCF and passed on various grants that they thought MCF might be interested in evaluating, which some people may consider coordination.
    • We’ve also coordinated more extensively with other non-OP funders, and have plans in the works for other collaborations with large funders.
  • “In general, they don't think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work. ”
    • AFAIK nobody at EA Funds believes this.
  • [EA funds believes] “so if EA groups struggle to raise money, it's simply because there are more compelling opportunities available instead.”
    • The statement seems kind of conceptually confused? Funders should always be trying to give to the most cost-effective projects on the margin.
    • The most charitable position that’s similar to the above I could think of is that some people might believe that 
      • “most community-building projects that are not funded now aren’t funded because of constraints on grantmaker capacity,” so grantmakers make poor decisions
        • Note that the corollary to the above is that many of the community building projects that are funded should not have been funded. 
      • I can’t speak to other people at EA Funds, but my own best guess is that this is not true (for projects that people online are likely to have heard of).
        • I’m more optimistic about boutique funding arrangements for projects within people’s networks that are unlikely to have applied to big funders, or people inspiring those around them to create new projects.
    • If projects aren’t funded, the biggest high-level reason is that there are limited resources in the world in general, and in EA specifically. You might additionally also believe that meta-EA in general is underfunded relative to object-level programs. 
  • (Minor): To clarify “since OP's GHW EA team is focusing on effective giving, EAIF will consider this less neglected”, we should caveat this by saying that less neglected doesn't necessarily mean less cost-effective than other plausible things for EAIF to fund.
  • Re: “EA Funds not posting reports or having public metrics of success”
    • We do post public payout reports.
    • My understanding is that you are (understandably) upset that we don’t have clear metrics and cost-effectiveness analyses written up. 
      • I think this is a reasonable and understandable complaint, and we have indeed gotten this feedback before from others and have substantial room to improve here.
    • However, I think many readers might interpret the statement as something stronger, e.g. interpreting it as us not posting reports or writing publicly much at all.
      • As a matter of practice, we write a lot more about what we fund and our decision process than any other EA funder I’m aware of (and likely more than most other non-EA funders). I think many readers may get the wrong impression of our level of transparency from that comment.

Note to readers: I reached out to Joel to clarify some of these points before posting. I really appreciate his prompt responses! Due to time constraints, I decided to not send him a copy of this exact comment before posting publicly.

  1. ^

    I personally have benefited greatly from talking to specialist advisors in biosecurity.

  2. ^

    From GPT-4:

    “The median time to receive a response for an academic grant can vary significantly depending on the funding organization, the field of study, and the specific grant program. Generally, the process can take anywhere from a few months to over a year. ”
    “The timeline for receiving a response on grant applications can vary across different fields and types of grants, but generally, the processes are similar in length to those in the academic and scientific research sectors.”
    “Smaller grants in this field might be decided upon quicker, potentially within 3 to 6 months [emphasis mine], especially if they require less funding or involve fewer regulatory hurdles.”

  3. ^

    Being funded by grants kind of sucks as an experience compared to e.g. employment; I dislike adding to such frustrations. There are also several cases I'm aware of where counterfactually impactful projects were not taken due to funders being insufficiently able to fund things in time; in some of those instances I'm more responsible than anybody else.

I thought this summary by TracingWoodgrains was good (in terms of being a summary; I don't know enough about the object-level to know whether it's true). If roughly accurate, it paints an extremely unflattering picture of Johnson.

A relevant reframing here is whether having a PhD provides a high Bayes factor update towards being hired. E.g., if people with and without PhDs each have a 2% chance of being hired, and ">50% of successful applicants had a PhD" is true only because most applicants have a PhD, then you should probably not include this. But if 1 in 50 applicants overall are hired, while that rises to 1 in 10 for people with a PhD and falls to 1 in 100 for people without one, then the PhD is a massive evidential update even if there is no causal effect.
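
To make the arithmetic concrete, here's a minimal sketch using the hypothetical numbers above (these are illustrative figures from the example, not real hiring data):

```python
# Illustrative sketch only: the rates below are the hypothetical numbers from
# the comment above, not real hiring data.

def odds(p: float) -> float:
    """Convert a probability into odds."""
    return p / (1 - p)

p_hired = 1 / 50                # overall hire rate across all applicants
p_hired_given_phd = 1 / 10      # hire rate among applicants with a PhD
p_hired_given_no_phd = 1 / 100  # hire rate among applicants without a PhD

# Bayes factor: how much learning "this applicant has a PhD" multiplies the
# prior odds of being hired (and likewise for "no PhD").
bf_phd = odds(p_hired_given_phd) / odds(p_hired)        # ~5.4x update towards "hired"
bf_no_phd = odds(p_hired_given_no_phd) / odds(p_hired)  # ~0.49x update away from "hired"

print(f"PhD multiplies the odds of being hired by ~{bf_phd:.1f}x")
print(f"No PhD multiplies the odds of being hired by ~{bf_no_phd:.2f}x")

# Contrast with the first scenario: if applicants with and without PhDs are both
# hired at 2%, the Bayes factor is exactly 1 (no evidential update), even though
# ">50% of successful applicants had a PhD" can still be true simply because most
# applicants have one.
```

On these made-up numbers the PhD shifts the hiring odds by roughly 5x, which is the kind of update plausibly worth stating in a job ad; in the 2%-either-way scenario the ">50%" statistic carries no evidential weight at all.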

@JWS asked the question: why do EA critics hate EA so much? Are all EA haters just irrational culture warriors?

I genuinely don't know if this is an interesting/relevant question that's unique to EA. To me, the obvious follow-up question here is whether EA is unique or special in having this (average) level of vitriol in critiques of us. Like, is the answer to "why is so much EA criticism hostile and lazy?" the same as the answer to "why is so much criticism, period, hostile and lazy?" Or are there specific factors of EA that are relevant here?

I haven't been sufficiently embedded in other intellectual or social movements. I was a bit involved in global development before and don't recall much serious vitriol; maybe someone like Easterly or Moyo is closest. I guess maybe MAGA implicitly doesn't like global dev? 

But on the other hand, I've heard from people involved in, say, animal rights who say that the "critiques" of EA are all really light and milquetoast by comparison.

I'd really appreciate answers from people who have been more "around the block" than I have. 
