Ben_West

Advisor @ CEA
13278 karma · Joined Sep 2014 · Working (15+ years) · Panama City, Panama
🤷♂🤷♂🤷♂.ws

Bio

Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://www.centreforeffectivealtruism.org/careers

How others can help me

Feedback always appreciated; feel free to email/DM me or use this link if you prefer to be anonymous.

Sequences (3)

AI Pause Debate Week
EA Hiring
EA Retention

Comments (986)

Topic contributions (6)

Okay, that seems reasonable. But I want to repeat my claim[1] that people are not blocked by "not really knowing what worked and didn't work in the FTX case" – even if e.g. there was some type of rumor which was effective in the FTX case, I still think we shouldn't rely on that type of rumor being effective in the future, so knowing whether or not this type of rumor was effective in the FTX case is largely irrelevant.[2]

I think the blockers are more like: fraud management is a complex and niche area that very few people in EA have experience with, getting up to speed with it is time-consuming, and ~all of the standard practices rest on assumptions like "the risk manager has some amount of formal authority" which aren't true in EA.

(And to be clear: I think these are very big blockers! They just aren't resolved by doing an investigation.)

  1. ^

    Or maybe more specifically: I would like people to explicitly refute my claim. If someone does think that rumor mills are a robust defense against fraud and they were just implemented poorly last time, I would love to hear that!

  2. ^

    Again, under the assumption that your goal is fraud detection. Investigations may be more or less useful for other goals.

Suppose I want to devote some amount of resources towards finding alternatives to a rumor mill. I had been interpreting you as claiming that, instead of directly investing these resources towards finding an alternative, I should invest these resources towards an investigation (which will then in turn motivate other people to find alternatives).

Is that correct? If so, I'm interested in understanding why – usually if you want to do a thing, the best approach is to just do that thing.

Oh good point! That does seem to increase the urgency of this. I'd be interested to hear if CE/AIM had any thoughts on the subject.

Interesting! I'm glad I wrote this then.

Do you think "[doing an investigation is] one of the things that would have the most potential to give rise to something better here" because you believe it is very hard to find alternatives to the rumor mill strategy? Or because you expect alternatives to not be adopted, even if found?

the choice is like "should I pour in a ton of energy to try to set up this investigation that will struggle to get off the ground to learn kinda boring stuff I already know?"

I'm not the person quoted, but I agree with this part, and some of the reasons why I expect the results of an investigation like this to be boring aren't based on any private or confidential information, so they are perhaps worth sharing.

One key reason: I think rumor mills are not very effective fraud detection mechanisms.

(This seems almost definitionally true: if something was clear evidence of fraud then it would just be described as "clear evidence of fraud"; describing something as a "rumor" seems to almost definitionally imply a substantial probability that the rumor is false or at least unclear or hard to update on.[1])

E.g. if I imagine a bank whose primary fraud detection mechanism was "hope the executives hear rumors of malfeasance," I would not feel very satisfied with their risk management. If fraud did occur, I wouldn't expect their primary process improvement to be "see if the executives could have updated from rumors better." I am therefore somewhat confused by how much interest there seems to be in investigating how well the rumor mill worked for FTX.[2]

To be clear: I assume that the rumor mill could function more efficiently, and that there’s probably someone who heard "SBF is often overconfident" or whatever and could have updated from that information more accurately than they did. (If you’re interested in my experience, you can read my comments here.) I’m just very skeptical that a new and improved rumor mill is substantial protection against fraud, and don’t understand what an investigation could show me that would change my mind.[3] Moreover, even if I somehow became convinced that rumors could have been effective in the specific case of FTX, I would still likely be skeptical of their efficacy in the future.

Relatedly, I’ve heard people suggest that 80k shouldn’t have put SBF on their website given some rumors that were floating around. My take is that the base rate of criminality among large donors is high, and a rumor mill does not do very much to lower that rate; so I expect the risk to remain relatively high for high-net-worth people 80k puts on the front page in the future, and I don't need an investigation to tell me that.

To make some positive suggestions about things I could imagine learning from/finding useful:

  1. I have played around with the idea of some voluntary pledge for earning-to-give companies, where they could opt into additional risk management and transparency policies (e.g. selecting some processes from Sarbanes-Oxley). My sense is that these policies do actually substantially reduce the risk of fraud (albeit at great expense), and might be worth doing.[4]
    1. At least, it seems like this should be our first port of call. Maybe we can’t actually implement industry best practices around risk management, but it feels like we should at least try before giving up and doing the rumor mill thing.
  2. My understanding is that a bunch of work has gone into making regulations so that publicly traded companies are less likely to commit fraud, and these regulations are somewhat effective, but they are so onerous that many companies are willing to stay private and forgo billions of dollars in investment just to not have to deal with them. I suspect that EA might find itself in a similarly unfortunate situation where reducing risks from "prominent individuals" requires the individuals in question to do something so onerous that no one is willing to become "prominent." I would be excited about research into a) whether this is in fact the case, and b) what to do about it, if so.
  3. Some people probably disagree with my claim that rumor mills are ineffective. If so, research into this would be useful. E.g. it's been on my backlog for a while to write up a summary of Why They Do It, or a fraud management textbook.
    1. Why They Do It is perhaps particularly useful, given that one of its key claims is that, unlike with blue-collar crime, character traits don’t correlate well with propensity to commit white-collar crimes, and I think this may be a crux between me and people who disagree with me.

All that being said, I think I'm weakly in favor of someone more famous than me[5] doing some sort of write up about what rumors they heard, largely because I don't expect the above to convince many people, and I think such a write up will mostly result in people realizing that the rumors were not very motivating. 

 

  1. ^

     Thanks to Chana Messinger for this point.

  2. ^

     One possible reason for this is that people are aiming for goals other than detecting fraud, e.g. they are hoping that rumors could also be used to identify other types of misconduct. I have opinions about this, but this comment is already too long so I'm not going to address it here.

  3. ^

     E.g. I appreciate Nate writing this, but if in the future I learned that a certain person had spoken to Nate, I wouldn't update my beliefs about the likelihood of them committing financial misconduct very much (and I believe that Nate would agree with this assessment).

  4. ^

     Part of why I haven't prioritized this is that there aren't a lot of earning-to-give companies anymore, but I think it's still potentially worth someone spending time on this.

  5. ^

     I have done my own version of this, but my sense is that people (very reasonably) would prefer to hear from someone like Will.

Thanks for doing this! You say "My own impression (quite low-confidence!) is that spending on EA focus areas like technologies such as far-UVC, synthesis screening, and GCBR-specific concerns is likely dominated by EA" and I'm trying to figure out precisely how dominant EA is. 

You say "Therefore, I would guess it is highly unlikely that philanthropic spending on technologies such as far-UVC, preventing bioterrorism, synthesis screening, and regulating dual-use research of concern represent more than 5% of the total biosecurity spend." And also EA funding is ~4% of total biosecurity spend. Can we conclude from this that EA is likely >80% of GCBR-specific funding?
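If I'm following the arithmetic, the bound works out as a minimal sketch like this (the 4% and 5% figures are the ones quoted above; the key assumption, which isn't stated in the post, is that essentially all EA biosecurity funding goes to those GCBR-specific areas):

```python
# Back-of-the-envelope bound using the figures quoted above.
# Assumption (mine, not the post's): essentially all EA biosecurity
# funding targets the GCBR-specific areas listed.
ea_share_of_total = 0.04   # EA funding as a fraction of total biosecurity spend (~4%)
gcbr_cap = 0.05            # GCBR-specific spend as a fraction of total (< 5%)

# If all EA spend is GCBR-specific, EA's share of GCBR-specific funding
# is at least ea_share_of_total / gcbr_cap.
lower_bound = ea_share_of_total / gcbr_cap
print(f"EA share of GCBR-specific funding >= {lower_bound:.0%}")  # → 80%
```

Since 5% is an upper bound on the GCBR-specific denominator, the true EA share would be even higher than 80% under that assumption.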

ACE's "organizational health" criterion is described here and they wrote a blog post about it here. tl;dr is that they have a checklist of various policies and also survey staff, then combine this into a rating on dimensions like "Harassment and discrimination policies".

As an example of it in action, see the 2022 review of Vegetarianos Hoy:

A few staff (1–3 individuals) report that they have experienced harassment or discrimination at their workplace during the last 12 months, and a few (1–3 individuals) report to have witnessed harassment or discrimination of others in that period. In particular, they report low recognition of others’ work and low salaries. All of the claimants reported that the situation was not handled appropriately...

Vegetarianos Hoy’s leadership team recognizes reported issues and reports that they have taken steps to resolve them. In particular, they report they are aware of alleged issues and have hired a Culture and Talent Analyst position and two new leadership positions.

I think OP also deserves a lot of the credit, but I am not aware of anything publicly written to describe what they have done.

Thanks for doing this! It seems cool.

We're happy to sink hundreds of hours into fun "criticism of EA" contests, but when the biggest disaster in EA's history manifests, we aren't willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there's anything we should change in response? 

I disagree with this framing.

Something that I believe I got wrong pre-FTX was base rates/priors: I had assumed that if a company was making billions of dollars, had received investment from top-tier firms, complied with a bunch of regulations, etc. then the chance of serious misconduct was fairly low.

I have now spent a fair amount of time documenting that this is not true, in data sets of YCombinator companies and major philanthropists.

It's hard to measure this, but at least anecdotally some other people (including in "EA leadership" positions) tell me that they were updated by this work and think that they similarly had incorrect priors.

I think what you are calling an "investigation" is fine/good, but it is not the only way to "get the facts straight" or "see if there's anything we should change in response".

ICYMI: I wrote this in response to a previous "EA leaders knew stuff" story. [Although I'm not sure if I'm one of the "leaders" Becca is referring to, or if the signs I mentioned are what she's concerned about.]
