RobBensinger
Machine Intelligence Research Institute

Update Apr. 15:  I talked to a CEA employee and got some more context on why CEA hasn't done an SBF investigation and postmortem. In addition to the 'this might be really difficult and it might not be very useful' concern, they mentioned that the Charity Commission investigation into EV UK is still ongoing a year and a half later. (Google suggests that statutory inquiries by the Charity Commission take an average of 1.2 years to complete, so the super long wait here is sadly normal.)

Although the Commission has said "there is no indication of wrongdoing by the trustees at this time", and the risk of anything crazy happening is lower now than it was a year and a half ago, I gather that it's still at least possible that the Commission could take some drastic action like "we think EV did bad stuff, so we're going to take over the legal entity that includes the UK components of CEA, 80K, GWWC, GovAI, etc.", which may make it harder for CEA to usefully hold the steering wheel on an SBF investigation at this stage.

Example scenario: CEA tries to write up some lessons learned from the SBF thing, with an EA audience in mind. EAs tend to have unusually high standards, and a CEA staffer writes a comment that assumes this context, without running the comment by lawyers because it seemed innocent enough. Because of those high standards, the Charity Commission misreads the comment as implying something far worse happened than is actually the case.

This particular scenario may not be a big risk, but the sum of the risk of all possible scenarios like that (including scenarios that might not currently be on their radar) seems non-negligible to the CEA person I spoke to, even though they don't think there's any info out there that should rationally cause the Charity Commission to do anything wild here. And trying to do serious public reflection while carefully nitpicking every sentence for possible ways the Charity Commission could misinterpret it doesn't seem like an optimal set-up for deep, authentic, and productive soul-searching.

The CEA employee said that they thought this is one reason (but not the only reason) EV is unlikely to run a postmortem of this kind.

 

My initial thoughts on all this: This is very useful info! I had no idea the Charity Commission investigation was still ongoing, and if there are significant worries about that, that does indeed help make CEA and EV’s actions over the last year feel a lot less weird-and-mysterious to me.

I’m not sure I agree with CEA or EV’s choices here, but I no longer feel like there’s a mystery to be explained here; this seems like a place where reasonable people can easily disagree about what the right strategy is. I don't expect the Charity Commission to in fact take over those organizations, since as far as I know there's no reason to do that, but I can see how this would make it harder for CEA to do a soul-searching postmortem.

I do suspect that EV and/or CEA may be underestimating the costs of silence here. I could imagine a frog-boiling problem arising here, where it made sense to delay a postmortem for a few months based on a relatively small risk of disaster (and a hope that the Charity Commission investigation in this case might turn out to be brief), but it may not make sense to continue to delay in this situation for years on end. Both options are risky; I suspect the risks of inaction and silence may be getting systematically under-weighted here. (But it’s hard to be confident when I don’t know the specifics of how these decisions are being made.)

 

I ran the above by Oliver Habryka, who said:

“I talked to a CEA employee and got some more context on why CEA hasn't done an SBF investigation and postmortem.”

Seems like it wouldn't be too hard for them to just advocate for someone else doing it?

Or to just have whoever is leading the investigation leave the organization.

In general it seems to me that an investigation is probably best done in a relatively independent vehicle anyways, for many reasons.

“My thoughts on all this: This is very useful info! I had no idea the Charity Commission investigation was still ongoing, and that does indeed help make CEA and EV’s actions over the last year feel a lot less weird-and-mysterious to me.”

Agree that this is an important component (and a major component for my models).


I have some information suggesting that Oliver's and/or the CEA employee's account may be wrong, or missing part of the story. But I'm confused about the details, so I'll look into things more and post an update here if I learn more.

I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures.

I agree with this.

[...] I feel like we still want to know if any one in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside". That's a signal someone is a bad leader in my view, which is useful knowledge going forward.

I don't really agree with this. Everyone has some probability of turning out to be dodgy; it matters exactly how strong the available evidence was. "This EA leader writes people off immediately when they have even a tiny probability of being untrustworthy" would be a negative update about the person's decision-making too!

"Just focus on the arguments" isn't a decision-making algorithm, but I think informal processes like "just talk about it and individually do what makes sense" perform better than rigid algorithms in cases like this.

If we want something more formal, I tend to prefer approaches like "delegate the question to someone trustworthy who can spend a bunch of time carefully weighing the arguments" or "subsidize a prediction market to resolve the question" over "just run an opinion poll and do whatever the majority of people-who-see-the-poll vote for, without checking how informed or wise the respondents are".

Knowing what people think is useful, especially if it's a non-anonymous poll aimed at sparking conversations, questions, etc. (One thing that might help here is to include a field for people to leave a brief explanation of their vote, if the polling software allows for it.)

Anonymous polls are a bit trickier, since random people on the Internet can easily brigade such a poll. And I wouldn't want to assume that something's a good idea just because most EAs agree with it; I'd rather focus on the arguments for and against.

4/4 Update: An EA who was involved in EA's early response to the FTX disaster has given me their take on why there hasn't yet been an investigation. They think the lack of one (at least among the EA leaders they talked to a lot at the time) had "little to do with a desire to protect the reputation of EA or of individual EAs", and more to do with things like "general time constraints and various exogenous logistical difficulties".

See this comment for a lot more details, and a short response from Habryka.

Also, some corrections: I said that "there was a narrow investigation into legal risk to Effective Ventures last year", which I think risks overstating how narrow the investigation probably was. I also said that Julia Wise had "been calling for the existence of such an investigation", when from her perspective she "listed it as a possible project rather than calling for it exactly". Again, see the comment for details.

Update Apr. 4: I’ve now spoken with another EA who was involved in EA’s response to the FTX implosion. To summarize what they said to me:

  • They thought that the lack of an investigation was primarily due to general time constraints and various exogenous logistical difficulties. At the time, they thought that setting up a team who could overcome the various difficulties would be extremely hard for mundane reasons such as:
    • thorough, even-handed investigations into sensitive topics are very hard to do (especially if you start out low-context);
    • this is especially true when they are vaguely scoped and potentially involve a large number of people across a number of different organizations;
    • “professional investigators” (like law firms) aren’t very well-suited to do the kind of investigation that would actually be helpful;
    • legal counsels were generally strongly advising people against talking about FTX stuff in general;
  • various old confidentiality agreements would make it difficult to discuss what happened in some relevant instances (e.g. meetings held under the Chatham House Rule);
    • it would be hard to guarantee confidentiality in the investigation when info might be subpoenaed or something like that;
    • and a general plethora of individually-surmountable but collectively-highly-challenging obstacles.
  • They flagged that at the time, most people involved were already in an exceptionally busy and difficult time, and so had less bandwidth for additional projects than usual.
  • A caveat here is that the EV board did block some people from speaking publicly during the initial investigation into EV's legal situation. That investigation ended back in the summer of 2023. 
  • Julia Wise and Ozzie Gooen wrote on the EA Forum that this is a potentially useful project for someone to take on. As far as this person knew, no one in EA leadership did or would try to stop such a project, and their impression was that Julia and Ozzie did indeed try to investigate what reforms should happen, though the person I spoke to didn't follow that situation closely.
  • The person I spoke to didn't want to put words in the mouth of EA leaders, and their information is mostly from ~1 year ago and might be out of date. But to the extent some people aren't currently champing at the bit to make this happen, their impression (with respect to the EA leaders they have interacted with relatively extensively) is that this has little to do with a desire to protect the reputation of EA or of individual EAs.
  • Rather, their impression is that for a lot of top EA leaders, this whole thing is a lot less interesting, because those EAs think they know what happened (and that it's not that interesting). So the choice is like "should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?" And maybe they are underrating how interesting others would find it, but that made the whole idea not so important-seeming (at least in the early days after FTX’s collapse, relative to all the other urgent and confusing things swirling around in the wake of the collapse) from their perspective.

I vouch for this person as generally honest and well-intentioned. I update from the above that community leaders are probably less resistant to doing some kind of fact-finding inquiry than I thought. I’m hoping that this take is correct, since it suggests to me that it might not be too hard to get an SBF postmortem to happen now that the trial and the EV legal investigation are both over (and now that we’re all talking about the subject in the first place).

If the take above isn’t correct, then hopefully my sharing it will cause others to chime in with further objections, and I can zigzag my way to understanding what actually happened!

I shared the above summary with Oliver Habryka, and he said:

Hmm, I definitely don’t buy the "this has little to do with EA leadership desire to protect their reputation". A lot of the reason for the high standards is for PR reasons.

I think people are like "Oh man, doing a good job here seems really hard, since doing it badly seems like it would be really costly reputation-wise. But if someone did want to put in the heroic effort to do a good enough job to not have many downsides, then yeah, I would be in favor of that. But that seems so hard to do that I don’t really expect it to happen."

Like, the primary thing that seems to me the mistake is the standard to which any such investigation is being held before people consider it net-positive.

I'll also share Ozzie Gooen's Twitter take from a few days ago:

My quick guess (likely to be wrong!)

  • There are really only a few people "on top" of EA.
  • This would have essentially been "these top people investigating each other, and documenting that for others in the community"
  • These people don't care that much about being trusted in the community. They fund the community and have power.
  • The EA community really doesn't have much power over the funders/leaders.
  • These people generally feel like they understand the problem well enough.

And, some corrections to my earlier posts about this:

  • I said that "there was a narrow investigation into legal risk to Effective Ventures last year", which I think may have overstated the narrowness of the investigation a bit. My understanding is that the investigation's main goal was to reduce EV's legal exposure, but to that end the investigation covered a somewhat wider range of topics (possibly including things like COI policies), including things that might touch on broader EA mistakes and possible improvements. But it's hard to be sure about any of this because details of the investigation's scope and outcomes weren't shared, and it doesn't sound like they will be.
  • I said that Julia Wise had "been calling for the existence of such an investigation"; Julia clarifies on social media, "I would say I listed it as a possible project rather than calling for it exactly."
    • Specifically, Julia Wise, Ozzie Gooen, and Sam Donald co-wrote a November 2023 blog post that listed "comprehensive investigation into FTX<>EA connections / problems" as one of four "projects and programs we'd like to see", saying "these projects are promising, but they're sizable or ongoing projects that we don't have the capacity to carry out". They also included this idea in a list of Further Possible Projects on EA Reform.

Fair! I definitely don't want to imply that there's been zero reflection or inquiry in the wake of FTX. I just think "what actually happened within EA networks, and could we have done better with different processes or norms?" is a really large and central piece of the puzzle.

The issue is that there are degrees of naiveness. Oliver's view, as I understand it, is that there are at least three positions:

  • Maximally Naive: Buy nice event venues, because we need more places to host events.
  • Moderately Naive: Don't buy nice event venues, because it's more valuable to convince people that we're frugal and humble than it is valuable to host events.
  • Non-Naive: Buy nice event venues, because we need more places to host events, and the value of signaling frugality and humility is in any case lower than the value of signaling that we're willing to do weird and unpopular things when the first-order effects are clearly positive. Indeed, trying to look frugal here may even cause more harm than benefit, since:
    • (a) it nudges EA toward being a home for empty virtue-signalers instead of people trying to actually help others, and
    • (b) it nudges EA toward being a home for manipulative people who are obsessed with controlling others' perceptions of EA, as opposed to EA being a home for honest, open, and cooperative souls who prize doing good and causing others to have accurate models over having a good reputation.

Optimizing too hard for reputation can get you into hot water, because you've hit the sour spot of being too naive to recognize that many others can see what you're doing and discount your signals accordingly, but not naive enough to just blithely do the obvious right thing without overthinking it.

There are obviously cases where reputation matters for impact, but many people fall into the trap of fixating on reputation when they lack the skill to take into account enough higher-order effects.

(Of course, the above isn't the only reason people might disagree on the utility of event venues. If you think EA is mainly bottlenecked on research and ideas, then you'll want to gather people together to solve problems and share their thoughts. If you instead think EA's big bottleneck is that we aren't drawing in enough people to donate to GiveWell top charities, then you should think events are a lot less useful, unless maybe it's a very large event targeted at drawing in new people to donate.)

I mean "done enough" in the sense that 80K is at fault for falling short, not in the sense that they should necessarily stop sharing that message.

I haven't heard any arguments against doing an investigation yet, and I could imagine folks might be nervous about speaking up here. So I'll try to break the ice by writing an imaginary dialogue between myself and someone who disagrees with me.

Obviously this argument may not be compelling compared to what an actual proponent would say, and I'd guess I'm missing at least one key consideration here, so treat this as a mere conversation-starter.


Hypothetical EA: Why isn't EV's 2023 investigation enough? You want us to investigate; well, we investigated.

Me: That investigation only examined legal risk to EV. Everything I've read (and everything I've heard privately) suggests that it wasn't at all trying to answer the question of whether the EA community made any moral or prudential errors in how we handled SBF over the years. Nor was it trying to produce common-knowledge documents (either private or public) to help any subset of EA understand what happened. Nor was it trying to come up with any proposal for what we should do differently (if anything) in the future.

I take it as fairly obvious that those are all useful activities to carry out after a crisis, especially when there was sharp disagreement, within EA leadership, long before the FTX implosion, about how we should handle SBF.

Hypothetical EA: Look, I know there’s been no capital-I “Investigation”, but plenty of established EAs have poked around at dinner parties and learned a lot of the messy complicated details of what happened. My own informal poking around has convinced me that no EAs outside FTX leadership did anything super evil or Machiavellian. The worst you can say is that they muddled along and had miscommunications and brain farts like any big disorganized group of humans, and were a bit naively over-trusting.

Me: Maybe! But scattered dinner conversation with random friends and colleagues, with minimal following up or cross-checking of facts, isn’t the best medium for getting an unbiased picture of what happened. People skew the truth, withhold info, pass the blame ball around. And you like your friends, so you’re eager to latch on to whatever story shows they did an OK job.

Perhaps your story is true, but we shouldn’t be scared of checking, applying the same level of rigor we readily apply to everything else we’re doing.

The utility of this doesn't require that any EAs be Evil. A postmortem is plenty useful in a world where we were “too trusting” or were otherwise subject to biases in how we thought, or how we shared information and made group decisions — so we can learn from our mistakes and do better next time.

And if we’ve historically been “too trusting”, it seems doubly foolish to err on the side of trusting every individual, institution, and process involved in the EA-SBF debacle, and write them a preemptive waiver for all the errors we’re studiously avoiding checking whether they’ve made.

Hypothetical EA: Look, there’s just no reason to use SBF in particular for your social experiment in radical honesty and perfect transparency. It was to some extent a matter of luck that SBF succeeded as well as he did, and that he therefore had an opportunity to cause so much harm. If there were systemic biases in EA that caused us to err here, then those same biases should show up in tons of other cases too.

The only reason to single out the SBF case in particular and give it 1000x more attention than everything else is that it’s the most newsworthy EA error.

But the main effect of this is to inflate and distort minor missteps random EA decision-makers made, bolstered by the public’s hindsight bias and cancel culture and by journalists’ axe-grinding, so that the smallest misjudgments an EA makes look like horrific unforgivable sins.

SBF is no more useful for learning about EA’s causal dynamics than any other case (and in fact SBF is an unusually bad place to try to learn generalizable lessons, because the sky-high stakes will cause people to withhold key evidence and/or bend the truth toward social desirability); it’s only useful as a bludgeon, if you came into all this already sure that EA is deeply corrupt (or that particular individuals or orgs are), and you want to summon a mob to punish those people and drive them from the community.

(Or, alternatively, if you’re sad about EA’s bad reputation and you want to find scapegoats: find the specific Bad EAs and drive them out, to prove to the world that you’re a Good EA and that EA-writ-large is now pure.)

Me: I find that argument somewhat compelling, but I still think an investigation would make sense.

First, extreme cases can often illustrate important causal dynamics that are harder to see in normal cases. E.g., if EA has a problem like “we fudge the truth too much”, this might be hard to detect in low-stakes cases where people have less incentive to lie. People’s behavior when push comes to shove is important, given the huge impact EA is trying to have on the world; and SBF is one huge instance where push came to shove and our character was really tested.

And, yes, some people may withhold information more because of the high stakes. But others will be much more willing to spend time on this question because they recognize it as important. If nothing else, SBF is a Schelling point for us all to direct our eyes at the same thing simultaneously, and see if we can converge on some new truths about the world.

Second, and moving away from abstractions to talk about the specifics of this case: My understanding is that a bunch of EAs tried to warn the community that SBF was extremely shady, and a bunch of other EAs apparently didn't believe the warnings, or didn't want those warnings widely shared even though they believed them.

“SBF is extremely shady” isn’t knowledge that FTX was committing financial fraud, and shouting "SBF is extremely shady" from the hills wouldn’t necessarily have prevented the fraud from happening. But there's some probability it might have been the tipping point at various important junctures, as potential employees and funders and customers weighed their options. And even if it wouldn't have helped at all in this case, it's good to share that kind of information in case it helps the next time around.

I think it would be directly useful to know what happened to those warnings about SBF, so we can do better next time. And I think it would also help restore a lot of trust in EA (and a lot of internal ability for EAs to coordinate with each other) if people knew what happened — if we knew which thought leaders or orgs did better or worse, how processes failed, how people plan to do better next time.

I recognize that this will be harder in some ways with journalists and twitter users breathing down your necks. And I recognize that some people may suffer unfair scrutiny and criticism because they were in the wrong place at the wrong time. To some extent I just think we need to eat that cost; when you’re playing chess with the world and making massively impactful decisions, that comes with some extra responsibility to take a rare bit of unfair flack for the sake of being able to fact-find and orient at all about what happened. Hopefully the fact that some time has passed, and that we’re looking at a wide variety of people and orgs rather than a specific singled-out individual, will mitigate this problem.

If FTX were a total bolt out of the blue, that would be one thing. But apparently there were rather a lot of EAs who thought SBF was untrustworthy and evil, and had lots of evidence on hand to cite, at the exact same time 80K and Will and others were using their megaphones to broadcast that SBF is an awesome EA hero. I don’t know that 80K or Will in particular are the ones who fucked up here, but it seems like somebody fucked up in order for this perception gap to exist and go undiscussed.

I understand people having disagreements about someone's character. Hindsight bias is a thing, and I'm sure people had reasons at the time to be skeptical of some of the bad rumors about SBF. But I tend to think those disagreements should be things that are argued about rather than kept secret. Especially if the secret conversations empirically have not resulted in the best outcomes.

Hypothetical EA: I dunno, this whole “we need a public airing out of our micro-sins in order to restore trust” thing sounds an awful lot like the exact “you’re looking for scapegoats” thing I was warning about.

You’re fixated on this idea that EAs did something Wrong and need to be chastised and corrected, like we’re perpetrators alongside SBF. On the contrary, I claim that the non-FTX EAs who interacted the most with Sam should mostly be thought of as additional victims of Sam: people who were manipulated and mistreated, who often saw their livelihoods threatened as a result and their life’s work badly damaged or destroyed.

The policies you’re calling for amount to singling out and re-victimizing many of Sam’s primary victims, in the name of pleasant-sounding abstractions like Accountability — abstractions that have little actual consequentialist value in this case, just a veneer of “that sounds nice on paper”.

Me: It's unfortunately hard for me to assess the consequentialist value in this case, because no investigation has taken place. I've gestured at some questions I have above, but I'm missing most of the pieces about what actually happened, and some of the unknown unknowns here might turn out to swamp the importance of what I know about. It's not clear to me that you know much more than me, either. Rather than pitting your speculation against mine, I'd rather do some actual inquiry.

Hypothetical EA: I think we already know enough, including from the legal investigation into Sam Bankman-Fried and who was involved in his conspiracy, to make a good guess that re-victimizing random EAs is not a useful way for this movement to spend its time and energy. The world has many huge problems that need fixing, and it's not as though EA's critics are going to suddenly conclude that EAs are Good After All if we spill all of our dirty laundry. What will actually happen is that they'll cherry-pick and distort the worst-sounding tidbits, while ignoring all the parts you hoped would be "trust-restoring".

Me: Some EA critics will do that, sure. But there are plenty of people, both within EA and outside of it, who legitimately just want to know what happened, and will be very reassured to have a clearer picture of the basic sequence of events, which orgs did a better or worse job, which processes failed or succeeded. They'll also be reassured to know that we know what happened, vs. blinding ourselves to the facts and to any lessons they might contain.

Or maybe they'll be horrified because the details are actually awful (ethically, not legally). Part of being honest is taking on the risk that this could happen too. That's just not avoidable. If we're not the sort of community that would share bad stuff if it were true, then people are forced to be that much more worried that we're in fact hiding a bunch of bad stuff.

Hypothetical EA: I just don't think there's that much crucial information EA leaders are missing, from their informal poking around. You can doubt that, but I don't think a formal investigation would help much, since people who don't want to speak now will (if anything) probably be even more tight-lipped in the face of what looks like a witch-hunt.

You say that EAs have a responsibility to jump through a bunch of transparency hoops. But whether or not you agree with my “EAs are victims” frame: EAs don’t owe the community their lives. If you’re someone who made personal sacrifices to try to make the world a better place, that doesn’t somehow come with a gotcha clause where you now have incurred a huge additional responsibility that we’d never impose on ordinary private citizens, to dump your personal life into the public Internet.

Me: I don’t necessarily disagree with that, as stated. But I think particular EAs are signing up for some extra responsibility, e.g., when they become EA leaders and ask for a lot of trust on the part of their community.

I wouldn’t necessarily describe that responsibility as “huge”, because I don’t actually think a basic investigation into the SBF thing is that unusual or onerous.

I don’t see myself as proposing anything all that radical here. I’m even open to the idea that we might want to redact some names and events in the public recounting of what happened, to protect the innocent. I don’t see anything weird about that; what strikes me as puzzling is the complete absence of any basic fact-finding effort (beyond the narrow-scope EV legal inquiry).

And what strikes me as doubly puzzling is that there hasn't even been a public statement that CEA and others are not planning to look into this at all, nor has there been any public argument for this policy — whence this dialogue. As though EAs are just hoping we'll quietly forget about this pretty major omission, so they don't have to say anything potentially controversial. That I don't really respect; if you think this investigation is a bad idea, do the EA thing and make your case!

Hypothetical EA: Well, hopefully my arguments have given you some clues about (non-nefarious reasons why) EAs might want to quietly let this thing die, rather than giving a big public argument for letting it die. In addition to the obvious fact that folks are just very busy, and more time spent on this means less time spent on a hundred other things.

Me: And hopefully my arguments have helped remind some folks that things are sometimes worth doing even when they're hard.

All the arguments in the world don't erase the fact that at the end of the day, we have a choice between taking risks for the sake of righting our wrongs and helping people understand what happened, versus hiding from the light of day and quietly hoping that no one calls us out for retreating from our idealistic-sounding principles.

We have a choice between following the path of least resistance into ever-murkier, ever-more-confusing, ever-less-trusting waters; or taking a bold stand and doing whatever we can to give EAs and non-EAs alike real insight into what happened, and a real capacity to adjust course if and only if some course-changing is warranted.

There are certainly times when the boring, practical, un-virtuous-sounding option really is the right option. I don't think this is one of those times; I think we need to be better than that this one time, or we risk losing by a thousand cuts some extremely precious things that used to be central to what made EA EA.

... And if you disagree with me about all that, well, tell me why I'm wrong.
