The broader question I'm confused about is how much to update on the local/object-level question of whether the labs are doing "kind of reasonable" stuff, vs. what their overall incentives and positions in the ecosystem point them toward doing.
E.g. your site puts OpenAI and Anthropic as the least-bad options based on their activities, but from an incentives/organizational perspective, their place in the ecosystem is just really bad for safety. Contrast with, e.g., being situated within a large tech company[1] where having an AI scaling lab is just one revenue source among many, or Meta's alleged "scorched-earth" strategy where they are trying very hard to commoditize LLMs as a complement to their core products.
E.g. GDM employees hold Google/Alphabet stock, so most of the variance in their earnings isn't going to come from AI, at least in the short term.
Yudkowsky's comments at his sister's wedding seem surprisingly relevant here:
David Bashevkin:
And I would not think that Eliezer Yudkowsky would be the best sheva brachos speaker, but it was the most lovely thing that he said. What did Eliezer Yudkowsky say at your sheva brachos?
Channah Cohen:
Yeah, it’s a great story because it was mind-blowingly surprising at the time. And it is, I think the only thing that anyone said at a sheva brachos that I actually remember, he got up at the first sheva brachos and he said, when you die after 120 years, you’re going to go up to shamayim [this means heaven] and Hakadosh Baruch Hu [this means God]. And again, he used these phrases—
Yeah. Hakadosh Baruch Hu will stand the man and the woman in front of him and he will go through a whole list of all the arguments you ever had together, and he will tell you who was actually right in each one of those arguments. And at the end he’ll take a tally, and whoever was right more often wins the marriage. And then everyone kind of chuckled and Ellie said, “And if you don’t believe that, then don’t act like it’s true.”
David Bashevkin:
What a profound… If you don’t believe that, then don’t act like it’s true. Don’t spend your entire marriage and relationship hoping that you’re going to win the test to win the marriage.
I'm at work and don't have the book with me, but you can look at the "Acknowledgements" section of Superintelligence.
I agree that it's not clear whether the Department of Philosophy acted reasonably within the unique prestige ecosystem that universities inhabit, whether in the abstract or after adjusting for FHI quite possibly being unusually difficult/annoying to work with. I do think history will vindicate my position in the abstract, and that "normal people" with a smattering of facts about the situation (though perhaps not the degree of granularity where you understand the details of specific academic squabbles) will agree with me.
I don't think it's just an in-group perspective! Bostrom literally gives feedback to and receives feedback from kings; other members of FHI have gone on to influential positions in multi-billion-dollar companies.
Are you really saying that if you ask the general public (or members of the intellectual elite), typical philosophy faculty at prestigious universities will be recognized as comparably or more impressive and influential?
(I work for EA Funds, including EAIF, helping out with public communications among other things. I'm not a grantmaker on EAIF, and I'm not responsible for any decision on any specific EAIF grant.)
Hi. Thanks for writing this. I appreciate you putting the work into this, even though I strongly disagree with the framing of most of the parts of the doc that I feel informed enough to opine on, as well as with much of the object-level content.
Ultimately, I think the parts of your report about EA Funds are mostly incorrect or substantively misleading, given the best information I have available. But I think it’s possible I’m misunderstanding your position or I don’t have enough context. So please read the following as my own best understanding of the situation, which can definitely be wrong. But first, onto the positives:
There are also some things the report mentioned that we ourselves have been tracking, and where I believe we have substantial room for improvement:
Now, onto the disagreements:
Procedurally:
Substantively:
Semantically:
I originally wanted to correct misunderstandings and misrepresentations of EA Funds’ positions more broadly in the report. However, I think there were just a lot of misunderstandings overall, so it's simpler for people to just assume I contest almost every characterization of the form “EA Funds believes X”. A few select examples:
Note to readers: I reached out to Joel to clarify some of these points before posting. I really appreciate his prompt responses! Due to time constraints, I decided to not send him a copy of this exact comment before posting publicly.
I personally have benefited greatly from talking to specialist advisors in biosecurity.
From GPT-4:
“The median time to receive a response for an academic grant can vary significantly depending on the funding organization, the field of study, and the specific grant program. Generally, the process can take anywhere from a few months to over a year. ”
“The timeline for receiving a response on grant applications can vary across different fields and types of grants, but generally, the processes are similar in length to those in the academic and scientific research sectors.”
“Smaller grants in this field might be decided upon quicker, potentially within 3 to 6 months [emphasis mine], especially if they require less funding or involve fewer regulatory hurdles.”
Being funded by grants kind of sucks as an experience compared to, e.g., employment; I dislike adding to such frustrations. There are also several cases I’m aware of where counterfactually impactful projects were not taken on because funders were insufficiently able to fund things in time; in some of those instances, I'm more responsible than anybody else.
I thought this summary by TracingWoodgrains was good (as a summary; I don't know enough about the object level to know whether it's accurate). If roughly accurate, it paints an extremely unflattering picture of Johnson.
A relevant reframing here is whether having a PhD provides a high Bayes factor update toward being hired. E.g., if people with and without PhDs each have a 2% chance of being hired, but ">50% of successful applicants had a PhD" simply because most applicants have a PhD, then you should probably not include this. But if 1 in 50 applicants is hired overall, rising to 1 in 10 for applicants with a PhD and falling to 1 in 100 for those without, then the PhD is a massive evidential update even if there is no causal effect. (A worked sketch of this arithmetic follows below.)
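To make the distinction concrete, here is a minimal sketch of the arithmetic under stated assumptions: the hire rates are the ones from the paragraph above, while the 80% PhD share of the applicant pool is my own illustrative number.

```python
# Minimal sketch of the selection-effect vs. evidence distinction above.
# Hire rates (2%, 1-in-10, 1-in-100) are from the comment; the 80% PhD
# share of the applicant pool is an illustrative assumption.

def phd_share_of_hires(p_hire_phd, p_hire_no_phd, phd_share_of_pool):
    """Fraction of successful applicants who hold a PhD."""
    hired_phd = phd_share_of_pool * p_hire_phd
    hired_no_phd = (1 - phd_share_of_pool) * p_hire_no_phd
    return hired_phd / (hired_phd + hired_no_phd)

def odds(p):
    return p / (1 - p)

# Case 1: a PhD is zero evidence (2% hire rate either way), yet because 80%
# of applicants hold one, ">50% of successful applicants had a PhD" anyway.
print(phd_share_of_hires(0.02, 0.02, 0.80))   # 0.8

# Case 2: 1-in-10 (PhD) vs 1-in-100 (no PhD) hire rates, per the comment.
print(phd_share_of_hires(0.10, 0.01, 0.80))   # ~0.976
print(0.10 / 0.01)                            # a PhD multiplies the hire rate 10x
print(odds(0.10) / odds(0.01))                # hiring odds ratio ~11x
```

Note that the "fraction of hires with a PhD" statistic depends heavily on pool composition, while the rate ratio and odds ratio do not, which is why the latter are the more informative things to report.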
@JWS asked the question: why do EA critics hate EA so much? Are all EA haters just irrational culture warriors?
I genuinely don't know if this is an interesting/relevant question that's unique to EA. To me, the obvious follow-up question is whether EA is unique or special in attracting this (average) level of vitriol in critiques of us. Is the answer to "why is so much EA criticism hostile and lazy?" the same as the answer to "why is so much criticism, period, hostile and lazy?" Or are there specific factors of EA that are relevant here?
I haven't been sufficiently embedded in other intellectual or social movements to compare. I was a bit involved in global development before and don't recall much serious vitriol; maybe critics like Easterly or Moyo come closest. I guess MAGA implicitly doesn't like global dev?
But otoh I've heard from other people involved in, say, animal rights that the "critiques" of EA are all really light and milquetoast by comparison.
I'd really appreciate answers from people who have been more "around the block" than I have.
I really liked the book.