A lot of EAs are reporting that some things seem like early signs of character or judgment flaws in SBF — an argument that seems wrong, an action that seems unjustified, etc. — now that they can reexamine those data points with the benefit of hindsight.

But the mental motions involved in "revisit the past and do a mental search for warning signs confirming that a Bad Person is bad" are pretty different from the mental motions involved in noticing and responding to problems before the person seems Bad at all.

"Noticing red flags" often isn't what it feels like from the inside to properly notice, respond to, and propagate warning signs that someone you respect is fucking up in a surprising way.

Things usually feel like "red flags" after you're suspicious, rather than before.

You're hopefully learning some real-world patterns via this "reinterpret old data points in a new light" process. But you aren't necessarily training the relevant skills and habits by doing this.

From my perspective, the whole idea that the relevant skillset is specifically about spotting Bad Actors is itself sort of confused. Like, EAs might indeed have too low a prior on bad actors existing, but also, the idea that the world is sharply divided into Fully Good Actors and Fully Bad Actors is part of what protected SBF in the first place!

It kept us from doing mundane epistemic accounting before he seemed Bad. If you're discouraged from just raising a minor local Criticism or Objection for its own sake — if you need some larger thesis or agenda or axe to grind, before it's OK to say "hey wait, I don't get X" — then it will be a lot harder to update incrementally and spot problems early. 

(And, incidentally, a lot harder to trust your information sources! EA will inevitably make slower intellectual progress insofar as we don't trust each other to just say what's on our mind like an ordinary group of acquaintances working on a project together, and instead have to try to correct for various agendas or strategies we think the other party might be implementing.)

(Even if nobody's lying, we have to worry about filtered evidence, where people are willing to say X if they believe X but unwilling to say not-X if they believe not-X.)


Suppose that I say "the mental motions needed to spot SBF's issues early are mostly the same as the mental motions needed to notice when Eliezer's saying something that doesn't seem to make sense, casually updating at least a little against Eliezer's judgment in this domain, and naively blurting out 'wait, that doesn't currently make sense to me, what about objection X?'"

(Or if you don't have much respect for Eliezer, pick someone you do have respect for — Holden Karnofsky, or Paul Graham, or Peter Singer, or whoever.)

I imagine some people's reaction to that being: "But wait! Are you saying that Eliezer/Holden/whoever is a bad actor?? That seems totally wrong, what about evidence A B C X Y Z..."

Which seems to me to be missing the point:

1. The processes required to catch bad actors reliably are often (though not always) similar to the processes required to correct innocent errors by good actors.

You do need to also have "bad actor" in your hypothesis space, or you'll be fooled forever even as you keep noting weird data points. (More concretely, since "bad actor" is vague verbiage: you need to have probability mass on people being liars, promise-breakers, Machiavellian manipulators, etc.)

But in practice, I think most of the problem lies in people not noticing or sharing the data points in the first place. Certainly in SBF's case, I (and I think most EAs) had never even heard any of the red flags about SBF, as opposed to us hearing a ton of flags and trying to explain them away.

So something went wrong in the processes "notice when something is off", "blurt out when you notice something is off", and "propagate interesting blurtings so others can hear about them", more so than in the process "realize that someone might be a bad actor if a long list of publicly discussed things already seem off about them".

(Though I assume some EAs — ones with more insider knowledge about SBF than me — made the latter mistake too.)

2. If a community only activates its "blurt out objections when you think you see an issue" reflex when it thinks it might be in the presence of bad actors, then (a) it will be way harder for the community to notice when a bad actor is present, but also (b) a ton of other dysfunctions become way likelier in the community.

I think (b) is where most of the action is.

EA has a big problem, I claim — relative to its goals and relative to what's possible, not necessarily relative to the average intellectual community — with...

  • excessive deference;
  • passivity and "taking marching orders" (rather than taking initiative);
  • not asking questions or raising objections;
  • learned helplessness;
  • lack of social incentive to blurt things out when you're worried you might be wrong;
  • lack of social incentive to build up your own inside-view model (especially one that disagrees with all the popular views among elite EAs);
  • general lack of error-correction and propagation-of-information-about-errors;
  • excessive focus on helping EA's image ("protecting the brand"), over simple inquiry into obvious questions that interest or confuse you.

I think EA leadership is unusually correct, and I think it legit can be hard for new EAs to come up with arguments that haven't already been extensively considered at some point in the past, somewhere on the public Internet or in unpublished Google Docs or wherever. So I think it's easy to understand why a lot of EAs are wary of looking stupid by blurting out their naive first-pass objections to things.

But I think that not blurting those things out turns out to have really serious costs at the community level. (Even in cases where a myopic Causal Decision Theorist would say it's individually rational.)

First, because it means that the EA with a nagging objection never learns why their objection is right or wrong, and therefore permanently has a hole in their model of reality.

And second, because a lot of how EA ended up unusually correct in the first place was people autistically blurting out objections to "obvious-seeming" claims.

If we keep the cached conclusions of that process but ditch the methods that got us here, we're likely to drift away from truth over time (or at least fail to advance the frontier of knowledge nearly as much as we could).

EA is not "finished". We have not solved the problem of "figure out a plan that saves the world", such that the main obstacle is Implementing Existing Ideas. The main obstacle continues to be Figuring Things Out.

EAs should note and propagate criticisms and objections to their Favorite Ideas and Favorite People just because they're curious about what the answer is.

(And aren't hindered by Modest Epistemology or Worry About Looking Dumb or Worry About Making EA Look Bad, so they're free to blurt without first doing a complicated calculus about whether it's Okay to say the first thought that popped into their head.)

They shouldn't need to suspect that their Favorite Idea is secretly false/bad, or that their Favorite Person is secretly evil/corrupt, in order to notice an anomaly and go "huh, what's that about?" and naively raise the issue (including raising it in public).

Most Bayesian updating is incremental; and when a single piece of evidence is obviously decisive, it's less likely that EAs will be the only ones who notice it, so it matters less whether we spot the thing first. The ambiguous, hard-to-resolve cases that require unusual heuristics, experience, or domain knowledge are most of where we can hope to improve the world.

If EAs want to outperform, they need to be good at the micro-level updates, and at building up good intuitions about areas via many, many repeated instances of poking at small things and seeing how reality shakes out.

I think we need to fix that process in EA — practice it more at the individual level, and find ways to better incentivize it at the group level.

Not just when there's a big Generate A Far-Mode Criticism Of EA contest, or a clear Bad Guy to criticize, but when you just see an Eliezer-comment or Rob-comment or Toby-comment that doesn't quite make sense to you and you blurt out that tiny note of dissonance, even if you fully expect that there's a perfectly good response you just aren't thinking of.

(Or no good response, but it's OK because Eliezer Yudkowsky and Rob Bensinger and Toby Ord are not perfect angelic beings and people make mistakes.)

I do think that EA leadership isn't dumb, and has thought a lot about the Big Questions, such that you'll often be able to beat the larger intellectual market and guess at important truths if you try exercises like "attempt to come up with a good reason why Carl Shulman / Holden Karnofsky / etc. might be doing X, even though X isn't what I'd do at a glance".

But I don't think this exercise should be required in order to blurt out a first-order objection. Noticing when something seems false is a lot easier than doing that and generating a plausible hypothesis about another human's brain. And if you do come up with a plausible-sounding hypothesis, well, blurting out your first-order objection is a great way to test whether your hypothesis is correct!

Comments

I think the issue you’re addressing is a real and important one. However, I think current norms are a response to disadvantages of blurting, both on an individual and movement level. As you note, most people’s naive divergent first impressions are wrong, and on issues most salient to the community, there’s usually somebody else who’s thought about it more. If we added lots more blurting, we’d have an even greater problem with finding the signal in the noise. This adds substantial costs in terms of reader energy, and it also decreases the reward for sharing carefully vetted information because it gets crowded out by less considered blurting.

Hence, the current equilibrium, in which ill-considered blurting gets mildly socially punished by people with better-considered views who are frustrated by the blurter, leading to pre-emptive self-censorship and something of a runaway “stay in your lane” feedback loop that can result in “emperor has no clothes” problems like this one. Except it wasn’t a child or “blurter” who exposed SBF - it was his lead competitor, one of the most expert people on the topic.

I’ve said it before and I’ll say it again, EA’s response to this fraud cannot be - not just shouldn’t, but can’t - to achieve some combination of oracular predictive ability and perfect social coordination for high-fidelity information transmission. It just ain’t gonna happen. We should assume that we cannot predict the next scandal. Instead we should focus on finding general-purpose ways to mitigate or prevent scandal without having to know exactly how it will occur.

This comes down to governance. It’s things like good accounting, finding ways to better protect grantees in the case that their funder goes under, perhaps increased transparency of internal records of EA orgs, that sort of thing.

If we added lots more blurting, we’d have an even greater problem with finding the signal in the noise.

The EA Forum is a hub for a wide variety of approaches and associated perspectives: global development randomista, anti-factory-farming activist, pandemic preparedness lobbyist, AI alignment researcher, Tomasik-style "what if electrons are conscious?" theorist, etc. On top of that, it has a karma system, post curation, and many options for filtering/tagging and subscribing to particular kinds of content.

So both in terms of the forum's infrastructure and in terms of its content and audience, I have a hard time imagining a more ideal venue for high-quality 101-level questions and conversations. What specific signal are people worried about losing? Is there any way (e.g., with tags or mod curation or new features) to encourage freer discussion on this forum and preserve that signal?

(One bit of social tech that might help here is just to flag at the top of your comment what the epistemic status of your statements is. That plus a karma system addresses most of the "wasting others' time" problem, IMO.)

Except it wasn’t a child or “blurter” who exposed SBF - it was his lead competitor, one of the most expert people on the topic.

Sure. SBF isn't my real crux for thinking EA is largely bottlenecked on blurting. It's an illustrative example of one reason we're likely to benefit from more blurting.

Discovering the fraud sooner probably wouldn't have been trivial, especially if the fraud started pretty recently; but there are many outcomes that fall short of full discovery and yet are a fair bit better than the status quo. (As well as many other dimensions on which I expect blurting to improve EA's ability to help the world.)

I’ve said it before and I’ll say it again, EA’s response to this fraud cannot be - not just shouldn’t, but can’t - to achieve some combination of oracular predictive ability and perfect social coordination for high-fidelity information transmission. It just ain’t gonna happen. We should assume that we cannot predict the next scandal. Instead we should focus on finding general-purpose ways to mitigate or prevent scandal without having to know exactly how it will occur.

I fully agree! But it sounds like SBF already should have been a fairly scandalous name throughout EA, based on reports about the early history of Alameda. Never mind whether we could have predicted the exact specifics of what happened at FTX; why did the Alameda info stay bottled up for so many years, such that I and others are only hearing about it now? This seems like a misstep regardless of how it would have changed our relationship to SBF.

Context: This post popped into my head because I was having a conversation with Peter Hartree about whether a specific argument by Peter Thiel made sense. And I was claiming that at a glance, a specific Thiel-argument seemed locally invalid to me, in the sense of Local Validity as a Key to Sanity and Civilization.

And Peter's response was that Thiel has a good enough track record that we should be very reluctant to assume he's wrong about something like this, and should put in an effort to steel-man him and figure out what alternative, more-valid things he could have meant.

And I'm OK with trying that out as an intellectual exercise. (Though I've said before that steelmanning can encourage fuzzy thinking and misunderstandings, and we should usually prioritize fleshmanning / passing people's Ideological Turing Test, rather than just trying to make up arguments that seem more plausible ex nihilo.)

But I felt an urge to say "this EA thing where we steel-man and defer to impressive people we respect, rather than just blurting out when a thing doesn't make sense to us until we hear a specific counter-argument, is part of our Bigger Problem". I think this problem keeps cropping up in EA across a bunch of domains — not just "why didn't the EA Forum host a good early discussion of at least one SBF red or yellow flag?", but "why do EAs keep getting into deference cascades that lead them to double- and triple-count evidence for propositions?", and "why are EAs herding together and regurgitating others' claims on AI alignment topics rather than going off in dozens of strange directions to test various weird inside-view ideas and convictions and build their own understanding?".

(Do people not realize that humanity doesn't know shit about alignment yet? I feel like people keep going into alignment research and being surprised by this fact.)

It all feels like one thing to me — this idea that it's fine for me to blurt out an objection to a local thing Thiel said, even though I respect him as a thinker and am a big fan of some of his ideas. Because blurting out objections is just the standard way for EAs to respond to anything that seems off to them.

I then want to be open that I may be wrong about Thiel on this specific point, and I want to listen to counter-arguments. But I don't think it would be good (for my epistemics or for the group's epistemics) to go through any special mental gymnastics or justification-ritual in order to blurt out my first-order objection in the first place.

I wanted to say all that in the Peter Thiel conversation. But then I worried that I wouldn't be able to communicate my point because people would think that I'm darkly hinting at Peter Thiel being a bad actor. (Because they're conflating the problem "EAs aren't putting a high enough prior on people being bad actors" with this other problem, and not realizing that organizing their mental universe around "bad actors vs. good actors" can make it harder to spot early signs of bad actors, and also harder to do a lot of other things EA ought to try to do.)

So I wrote this post. :)

[Epistemic status: I'm writing this in the spirit of blurting things out. I think I'm pointing to something real, but I may be wrong about all the details.]

  • lack of social incentive to blurt things out when you're worried you might be wrong;
  • lack of social incentive to build up your own inside-view model (especially one that disagrees with all the popular views among elite EAs);

You are correct that there is an incentive problem here. But the problem is not just a lack of incentive; there is an actual incentive to fall in line.

Because funding is very centralised in EA, there are strong incentives to agree with the people who control the money. The funders are obviously smarter than to select only "yes"-sayers, but they are also humans with emotions and limited time. There are types of ideas, projects, and criticism that don't appeal to them. This is not meant as criticism of individuals but as criticism of the structure; given the structure, I don't see how things could be otherwise.

This shapes the community in two major ways. 

  1. People who don't fit the mould of what the funders like don't get funded.
  2. People are self-censoring in order to fit what they think the mould is.

I think the only way out of this is to have less centralised funding. Some steps that may help:

  • Close the EA Funds. Specifically, don't collect decentralised funding into centralised funds. 
  • Encourage more people to earn-to-give and encourage all earning-to-givers to make their own funding decisions. 
  • Maybe set up some infrastructure to help funders find projects? Maybe EA Funds could be replaced by some type of EA GoFundMe platform? I'm not sure what would be the best solution. But if I were to build something like this, I would start by talking to earning-to-givers about what would appeal to them.

 

Ironically, the FTX Future Fund actually got this right: its regranting program was explicitly designed to decentralise funding decisions.

I would prefer more people give through donor lotteries rather than deferring to EA funds or some vague EA vibe. Right now I think EA funds do like $10M / year vs $1M / year through lotteries, and probably in my ideal world the lottery number would be at least 10x higher.

I think that EAs are consistently underrating the value of this kind of decentralization. With all due respect to EA funds I don't think it's reasonable to say "thinking more about how to donate wouldn't help because obviously I should just donate 100% to EA funds." (That said, I don't have a take about whether EA funds should shut down. I would have guessed not.)

I think that's probably the lowest hanging fruit, though it might only help modestly. The effect size depends on how much the problem is lacking diverse sources of money vs lacking diverse sources of attention.
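(For anyone unfamiliar with the mechanism being contrasted with EA Funds here: a donor lottery pools contributions and selects one participant to direct the whole pot, with probability proportional to their contribution. Below is a minimal illustrative sketch of that core idea only; the function name, donor names, and amounts are made up, and real lotteries add operational details such as a guarantor and fixed block sizes.)

```python
import random


def run_donor_lottery(contributions):
    """Pick one allocator, with probability proportional to contribution size.

    contributions: dict mapping donor name -> amount contributed (in dollars).
    Returns (winner, pot): the selected allocator and the total pot they direct.
    """
    pot = sum(contributions.values())
    # random.choices does weighted sampling without needing normalized weights,
    # so a donor who gave 60% of the pot has a 60% chance of directing it.
    winner = random.choices(
        population=list(contributions.keys()),
        weights=list(contributions.values()),
        k=1,
    )[0]
    return winner, pot


# Hypothetical example: three donors pool $5,000; Ada has a 60% chance,
# Ben a 30% chance, and Cleo a 10% chance of choosing where the pot goes.
winner, pot = run_donor_lottery({"Ada": 3_000, "Ben": 1_500, "Cleo": 500})
print(f"{winner} directs the full ${pot:,} pot")
```

In expectation each participant directs exactly what they contributed, so the lottery doesn't change where money goes on average; the point is that the winner can justify spending serious evaluation time on a pot much larger than their own donation, which speaks to the "diverse sources of attention" concern above.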

Personally the FTX regrantor system felt like a nice middle ground between EA Funds and donor lotteries in terms of (de)centralization. I'd be excited to donate to something less centralized than EA Funds but more centralized than a donor lottery.

Maybe something like the S-process used by SFF?
https://survivalandflourishing.fund/s-process

It would be cool to have a grant system where anyone can list themselves as a fund manager, and donors can pick which fund managers' decisions they want to back with their donations. If I remember correctly, the s-process could facilitate something like that.

At present, the Funds evaluate hundreds of grant applications a year, generally seeking high four to low six figures in funding (median seems to be mid-five). In a world where the Funds were largely replaced by lotteries, where would those applicants go? Do we predict that the lottery winners would largely take over the funding niche currently filled by the Funds? If so, how would this change affect the likelihood of funding for smaller grant applicants?

Strong agree, and note that it's not obvious after browsing both GWWC and EAF how to donate to lotteries currently, or when they'll be running next. I'd like to see them run more regularly and placed prominently on the websites.

Ideally, EigenTrust or something similar should be able to help with regranting once it takes off, no? : )

Is there somewhere we can see how the winners of donor lotteries have been donating their winnings?

Linda - interesting ideas. 

As you note, centralized funding in EA, and fear of offending the funding decision-makers, can create incentives to pre-emptively self-censor, and can reduce free-spirited 'blurting'.

Additional suggestion: One way to help reduce this effect would be for EA funding decision-makers to give a little more feedback about why grant proposals get turned down. Several programs at the moment take the view that it's reasonable to say 'We just don't have enough time or staff to give any individualized feedback about grant proposals that we don't fund; sorry/not sorry'.  

Someone may have spent several hours (or days) writing a grant proposal, and the proposal judges/funders may have spent a couple of hours reading it, but they can't spend five minutes writing an explanation of why it's turned down?

This lack of feedback can lead grant applicants to assume that there may have been personal, reputational, or ideological bias in the decision-making process. This might lead them to be extra-cautious in what they say in future on EA Forum, or in other contexts, for fear of offending the funding decision-makers.

tldr: if EA funders have time to read grant proposals and to take them seriously, then they have time to give honest, candid, constructive feedback on those proposals; this would help reduce fear and anxiety around the funding process, and would probably reduce self-censorship, and would promote a more resilient culture of blurting.

I totally agree with you regarding the value of feedback.

Someone may have spent several hours (or days) writing a grant proposal, and the proposal judges/funders may have spent a couple of hours reading it, but they can't spend five minutes writing an explanation of why it's turned down?

I'm also confused by this. I'm guessing it's more about the discomfort around giving negative feedback, than it is about time? 

I'm very much in favour of acknowledging the cost associated with the energy drain of dealing with negative emotions. There are lots of things around the emotional cost of applications that could be improved, if we agreed that this is worth caring about.

Clarification (because based on past experience this seems to be necessary): I don't think the feelings of fellow EAs are the only thing that matters, or even the top priority or anything like that. What I do think is that we are losing both valuable people (who could have contributed to the mission) and productivity because we ignore that personal emotions are a thing that exists.

I like the idea of less centralized funding, and I think giving more influence to a diverse set of mid-size funders is a critical part of reducing the risk of undue influence by a few megadonors. But the implementation may be complicated.

I feel that most people who are not "professional" EAs (for lack of a better word, and definitely including myself) would be pretty bad at playing grantmaker without devoting quite a bit of time to it. And most people who have enough income to be systematically important funders are spending a lot of time/energy at their day jobs and are unlikely to have bandwidth to do more than -- at most -- evaluate others' evaluations of funding candidates.

I can think of two possible ways to get some of what you're looking for if the assumptions above are correct:

  • It may be plausible to have a centralized fund designed to accept donations from small-to-midsize EtGers set up in a way that minimizes the influence of "the people who control (most of) the money." In other words, the fund would be independent of CEA/EV, would not invite anyone who has Open Phil's ear to be a grantmaker, would not accept money from any potential megadonors, etc. That would at least create one "independent" funding stream.
  • It may be preferable to have an organization offering several funds in a specified area -- e.g., separate pools of money for EA infrastructure managed separately by Jim, Pam, and Dwight. Presumably, the fund managers could talk to and cooperate with each other, but there would likely be more independence and diversity of approach than under a committee system. (I think that is true even if a committee almost always approves the lead grantmaker's recommendation; the very existence of an approval requirement can significantly shape an individual's exercise of initial discretion.) Perhaps there could be a mechanism for Jim and Pam to go to an oversight board if they felt Dwight was planning to fund something harmful or unusually silly.

Those ideas would have costs as well as benefits. In particular, I've always appreciated that the EA community tries to minimize the amount of time/resources organizations need to devote to fundraising as opposed to substantive work. I get the sense that top leadership in many "mainstream" charities spends a lot of its bandwidth on fundraising and donor management. Decentralization would require a step back from that. But I think either of the two ideas above might achieve some of your goals with significantly lower downsides than having (e.g.) several dozen different midsize EtGers evaluating grant proposals. 

I feel that most people who are not "professional" EAs (for lack of a better word, and definitely including myself) would be pretty bad at playing grantmaker without devoting quite a bit of time to it.

I think you overestimate the difference between you and "professional" EAs. Good grantmaking is both hard and time-consuming for everyone.

If someone is doing grant evaluation as their full-time job, then they are probably better at it than you, because they can spend more time on it. But as far as I know, most EA grants are evaluated by people doing this as volunteer work on the side. They usually have an EA job, but that job is often something other than grantmaking.

I think OpenPhil is the only org that employs full-time grantmakers? But you can't even apply to OpenPhil unless you either fit into one of their pre-defined programs or know the right people. The only time I asked OpenPhil for money (I went to their office hour at EA Global), they politely told me that they would not even evaluate my project, because it was too small to be worth their time. To be clear, I'm not writing this to complain, and I'm not saying they made the wrong judgment. Having paid professional evaluators looking at every small project is expensive.

I just hate that people like yourself think that there are some grant experts out there, looking at all the grants and making much better evaluations than you could have done. Because there aren't. That's not how things are currently run.

In particular, I've always appreciated that the EA community tries to minimize the amount of time/resources organizations need to devote to fundraising as opposed to substantive work. I get the sense that top leadership in many "mainstream" charities spends a lot of its bandwidth on fundraising and donor management.

I agree. I know some academics and have an idea of how much time and effort they spend on grant applications. We don't want to end up in that situation.

I think this can be solved by having an EA-wide standard for grant applications. If I were in charge, it would be a Google Doc template. If I want funding, I can fill it in with my project and then send it to all the relevant mid-sized funders.

By "professional EA," I meant that -- at least by and large -- the fund managers have relevant professional expertise in the subject area of the fund. An investment banker, law firm partner, neurosurgeon, or corporate CEO is very unlikely to have that kind of experience. My assumption is that those folks will take significantly longer to adequately evaluate a grant proposal than someone with helpful background knowledge from professional experience. And given the requirements of most jobs that pay enough to make independent grantmaking viable, I don't think most people would have enough time to devote to adequately evaluating grants without strong subject-matter background. In contrast, I imagine that I would do a better job evaluating grant proposals in my field of expertise (law) than the bulk of the EA Funds managers, even if the specific subject matter of the grant was a branch of law I hadn't touched since law school.

I'm a public-sector lawyer, so no one has dissuaded me from independent grantmaking. I don't have nearly the money to have ever thought about it!

In case someone is interested in the idea of a "Donors Choose" type system: unless the proposed grantees had their own 501(c)(3)s, what you're describing would need some degree of 501(c)(3) organizational oversight/control/overhead to keep the tax deduction in the US (which higher-income individuals definitely care about). A straight-up "EA GoFundMe" wouldn't be any more tax-deductible than vanilla GoFundMe is. Certain types of grants -- those that could be seen as closer to gifts to individuals rather than compensation for work to be performed -- might need heightened scrutiny to avoid problems with private inurement (benefit).

501(c)(3) status can be accessed via fiscal sponsorship. There is already a network of agreements between EA orgs to re-grant to each other for tax reasons, mostly thanks to Rethink:
https://rethink.charity/donate 

In order to tap into this, an individual needs to be paid through an org that is tapped into this network. For AI Safety projects, I think AI Safety Support would be willing to provide this service (I know they have already done this for two projects and one person). I don't know what the options are for other cause areas, but if this becomes a major bottleneck, then it seems like a good plan would be to set up orgs to financially host various projects.

I posted for the ~first time in the EA forum after the SBF stuff, and was pretty disappointed by the voting patterns: almost all critical posts get highly upvoted (well, taking into account the selection effect where I wouldn't see negative-karma posts), seemingly regardless of how valid or truthseeking or actionable they are. And then the high-karma comments very often just consist of praise for writing up that criticism, or ones that take the criticism for granted and expand on it, while criticism of criticism gets few upvotes.

(Anyway, after observing the voting patterns on this comment thread of mine, I see little reason to engage on this forum anymore. I find the voting patterns on LW healthier.)

Over a year ago, I thought it was completely inexplicable that SBF hired a Chief Regulatory Officer who was a lawyer known only for involvement in online fraud. There is no legitimate reason to hire such a person. And even apart from the fraud, his resume was not the typical profile of someone who a legitimate multi-billionaire would hire at a high level. An actual multi-billionaire could have poached away the general counsel from a Fortune 100 company. Why settle for someone like this? https://coingeek.com/tether-links-to-questionable-market-makers-yet-another-cause-for-concern/ 

I finally worked up the courage to hint at this point 4 months ago ( https://forum.effectivealtruism.org/posts/KBw6wKDbvmqacbB5M/crypto-markets-ea-funding-and-optics?commentId=wcvYZtw7b4xvrdetL ), and then was a little more direct 2 months ago ( https://forum.effectivealtruism.org/posts/YgbpxJmEdFhFGpqci/winners-of-the-ea-criticism-and-red-teaming-contest?commentId=odHG4hhM2FSGiXXFQ). 

Lo and behold, that guy was probably in on the whole thing: he also served as general counsel to Alameda! https://www.nbcnews.com/news/ftxs-regulatory-chief-4-job-titles-2-years-was-really-rcna57965

I see no mention in either of your forum posts of the aforesaid lawyer?

My post from 4 months ago linked to a story about the lawyer, which is why I said I merely hinted at this point. The post from 2 months ago didn't expressly mention it, but a followup post definitely did in detail (I deleted the post soon thereafter because I got a few downvotes and I got nervous that maybe it was over the line). 

I find Stuart Buck extremely impressive.

He has a lot of comments that went against the grain at the time, pointing out issues with SBF.

His contributions are focused and he's not salesy.

It’s worth pausing and taking a look at his org, which looks like one of the most impressively advised organizations I’ve ever seen (and it has low overlap with EA figures, which is healthy).
 

Thanks! There's a very natural reason for all of this: https://twitter.com/stuartbuck1/status/1595254445683654657 

Oh, this is excellent! I do a version of this, but I haven't paid enough attention to what I do to give it a name. "Blurting" is perfect.

I try to make sure to always notice my immediate reaction to something, so I can more reliably tell what my more sophisticated reasoning modules transform that reaction into. Almost all the search-process imbalances (e.g., filtered recollections, motivated stopping, etc.) come into play during the sophistication, so it's inherently risky. But refusing to reason past the blurt is equally inadvisable.

This is interesting from a predictive-processing perspective.[1] The first thing I do when I hear someone I respect tell me their opinion is to compare that statement to my prior mental model of the world. That's the fast check. If it conflicts, I aspire to mentally blurt out that reaction to myself.

It takes longer to generate an alternative mental model (i.e., sophistication) that is able to predict the world described by the other person's statement, and there's a lot more room for bias to enter via the mental equivalent of multiple comparisons. Thus, if I'm overly prone to conform, that bias will show itself after I've already blurted out "huh!" and made note of my prior. The blurt helps me avoid the failure mode of conforming and feeling like that's what I believed all along.

Blurting is a faster and more useful variation on writing down your predictions in advance.

  1. ^ Speculation. I'm not very familiar with predictive processing, but the claim seems plausible to me on alternative models as well.

I think this is one of those places where a relative outsider may have a comparative advantage in some cases. Speaking for myself, I think I would be less likely to blurt if my employer had anything to do with EA, if I was hoping to get grants, if my sense of social identity was strongly tied to the EA community, etc. Of course, it means I know less and am more likely to say something stupid, too . . .

As a thought experiment, and recognizing the huge risk of hindsight bias, consider if a well-educated and well-informed outsider had been asked to come up a year ago with a list of events that would have been moderately serious to catastrophic for EA, along with estimated probabilities. I think there is a very high chance they would have come up with “loss of a megadonor,” and a high chance they would have come up with “and I note that SBF’s assets are in a high volatility field with frequent bankruptcies, in a very young company.” I think there is a very high chance that they would have come up with “major public scandal involving a key figure,” and a high chance they would have come up with a list of key figures that included SBF. I suspect that most outsiders would have assigned a higher probability to these things than most insiders, although this is impossible to test at this point.

I don't think it likely "SBF is discovered to be running a massive fraud" would explicitly be on most hypothetical outsiders' list, but I think it is more likely to have appeared than if an insider had prepared the list. That is due to (1) self-censoring, and (2) less deference to / weighing of what key figures in the community thought of SBF. At least when you're talking about assessing serious to catastrophic risks, I think you want to get both the "insider" and "outsider" views. 
