Epistemic status: Motivated by the feeling that there's something like a missing mood in the EA sphere.  Informed by my personal experience, not by rigorous survey.  Probably a bit scattershot, but it's already more than a month after I wanted to publish this.  (Minus this parenthetical, this post was entirely written before the Bostrom thing.  I just kept forgetting to post it.)

The last half year - the time since I moved to Berkeley to work on LessWrong, and consequently found myself embedded in the broader Bay Area rationality & EA communities - has been surprisingly normal.

The weeks following the FTX collapse were, admittedly, a little less so.

One thing has kept coming up, though.  I keep hearing that people are reluctant to voice disagreements, criticisms, or concerns they have, and each time I do a double-take.  (My consistent surprise is part of what prompted me to write this post: both those generating the surprise, and those who are surprised like me, might benefit from this perspective.)

The type of issue where one person has an unpleasant[1] interaction with another person is difficult to navigate.  The current solution of discussing those things with the CEA Community Health team at least tries to balance the concerns of reducing both false positives and false negatives; earlier and more public discussion of those concerns is not a Pareto improvement[2].

But most of these cases involve other fears: that you will annoy an important funder by criticizing ideas they support, by raising concerns about their honesty given publicly-available evidence, or something similar.  And the degree to which these fears have shaped the epistemic landscape makes me feel like I took a wrong turn somewhere and ended up in a mirror universe.

Having these fears - probably common!  Discussing those fears in public - not crazy!  Acting on those fears?  (I keep running face-first into the fact that not everybody has read The Sequences, that not everybody who has read them has internalized them, and that not everybody who has internalized them has externalized that understanding through their actions.[3])

My take is that acting on those fears - by not publishing that criticism, or not raising those concerns with receipts attached - is harmful[4].  For simplicity's sake, let's consider the Cartesian product of the options (a small code sketch after the lists below enumerates the results):

  • to publicize a criticism, or not
  • the criticism being accurate, or not
  • the funder deciding to fund your work, or not

The set of possible outcomes:

  1. you publicize a criticism; the criticism is accurate; the funder funds your work
  2. you publicize a criticism; the criticism is accurate; the funder doesn't fund your work
  3. you publicize a criticism; the criticism is inaccurate; the funder funds your work
  4. you publicize a criticism; the criticism is inaccurate; the funder doesn't fund your work
  5. you don't publicize a criticism; the criticism is accurate; the funder funds your work
  6. you don't publicize a criticism; the criticism is accurate; the funder doesn't fund your work
  7. you don't publicize a criticism; the criticism is inaccurate; the funder funds your work
  8. you don't publicize a criticism; the criticism is inaccurate; the funder doesn't fund your work

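As a minimal sketch (not part of the original post; the variable names are purely illustrative), the eight outcomes are just the Cartesian product of the three binary choices above, which a few lines of Python can enumerate:

```python
# Minimal sketch: enumerate the eight outcomes as the Cartesian product
# of three binary choices (publicize?, accurate?, funded?).
from itertools import product

options = [
    (True, False),  # you publicize a criticism, or not
    (True, False),  # the criticism is accurate, or not
    (True, False),  # the funder funds your work, or not
]

for i, (publicize, accurate, funded) in enumerate(product(*options), start=1):
    print(
        f"{i}. you {'publicize' if publicize else 'do not publicize'} a criticism; "
        f"the criticism is {'accurate' if accurate else 'inaccurate'}; "
        f"the funder {'funds' if funded else 'does not fund'} your work"
    )
```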
What predicted outcomes are motivating these fears?  2 and 4 are the obvious candidates.

I won't pretend that these are impossible, or that you would necessarily see another funder step in if such a thing happened.  You could very well pay costs for saying things in public.  I do think that people overestimate how likely those outcomes are, or how high the costs will be, and underestimate the damage that staying silent causes to community epistemics.

But I will bite the bullet: assuming the worst, you should pay those costs.  In the long run, you do not achieve better outcomes by pretending to have beliefs other than those you have, in order to extract grant money from intolerant funding sources.

If your criticism is accurate, and a potential source of funding decides not to fund you when they otherwise would have because of it, the only way for others to orient and react to that defection is for them to see the criticism[5] and the subsequent lack of funding[6].

If your criticism is not accurate, and a potential source of funding decides to not fund you as a result, the details end up being pretty important.  From the funder's perspective, the "best" possible reason for that kind of decision is if the criticism betrays serious intellectual or epistemic failure by the critic.  This might happen in the least convenient possible world, but in practice I think most such fears, when coming from good-faith actors, are the product of imposter syndrome.  (Needless to say, grifters and other bad actors are correct to have such fears.  Making EA more robust to adversarial forces is another excellent reason for being forthright about one's honest opinions.)

Then there are criticisms one might fear would carry more indirect social costs.  Take as examples this and this.  Let me also take this opportunity to put my money where my mouth is, by making public a disagreement I have[7].  I do not think we should be inviting AI hardware capabilities organizations to EAG(x) career fairs.  The proposed theory of change[8] suffers, on priors, from being dominated by the first-order effects of having better AI hardware.  If you want to subsidize hardware for alignment research, doing it by starting a general AI hardware capabilities organization seems deeply perilous.  Just start ~~a crypto exchange~~ literally any other startup!

There are some practical takeaways from adopting this stance:

  • It is important to support someone who pays costs as a result of publishing rigorous, well-motivated criticism.  Relying on an after-the-fact process to catch you if you step on a broken stair is scary enough; knowing that there's no net at all renders this hardly more useful than yelling into the void.  There will be those laudable individuals willing to take the leap regardless, but it is simply good policy to support those who pay costs to generate positive externalities.  I'm not really sure what this support looks like, and there are obvious difficulties with trying to formalize anything here, but it would be good for something to exist.  Maybe after-the-fact prizes to those who proffered early EA-related criticisms, which were ignored/misunderstood/rabbit-holed at the time, but have since been integrated?  I believe this has happened at least once but can't currently find a reference.
  • You should carefully consider the price of your silence.  There are good reasons to be able to credibly promise that you will keep certain things secret, but many conversations end up happening in a totally unnecessary regime of secrecy out of social inertia.  NDAs are probably much more expensive than naive calculations would suggest.
  • Correspondingly, defaults are very important.  I claim that you should default to openness.  This forces you to be explicit about what you agree to keep secret, and reduces ambiguity about other people's expectations of you (which in turn reduces your own mental overhead for tracking those expectations).
  • Notice when you are flinching away from considering a specific course of action.
  • It helps a lot to be resilient to the "things go to shit because you decided you weren't going to stay quiet about something bad" scenario.  One reason I might be surprised by the reports I hear of self-suppressed criticisms is some mixture of the typical mind fallacy and the fundamental attribution error.  My realistic worst-case outcome, assuming I somehow managed to piss off everyone doing hiring and funding in domains I consider important, is that I have to give up on direct work entirely.  I switched to direct work as a mid-career software engineer with significant prior experience in industry, and if my former employer's fortunes have changed enough that they no longer want me back, I'm not concerned about my ability to find another industry role, nor am I under any meaningful time pressure to do so.  My friends and family in LA would also be quite glad to have me back.  Don't misunderstand: this would suck a lot.  But it would suck because of what it implied about my ability to effect the kind of change that motivated me to make the jump in the first place, not because I'd be totally bereft of social and professional opportunities as a result.  The same is not necessarily true of many others, and I expect that makes it much harder when such a situation arises.

The main thing I want this post to accomplish for readers is to raise to the level of conscious awareness the existence of these dynamics, and to hopefully let them notice in real-time if they ever run into them.  It's much easier to choose to do the right thing when you consciously notice the choice in front of you.

  1. ^

    Or worse!

  2. ^

    Though I can conceive of a case where it'd be a Kaldor-Hicks improvement.

  3. ^

    Not everybody needs to read The Sequences to understand why being honest and not submitting to blackmail & extortion are both critical to establishing healthy equilibria in communities.  I, personally, did become more scrupulously honest after internalizing those lessons.

  4. ^

    I'm not totally sure how I relate to setting the zero point, or to what things should be considered supererogatory.  In this case I chose the word "harmful" because it feels like the correct frame due to the background context, but I don't think a different choice would be crazy.

  5. ^

    Which has the advantage of being accurate, and is therefore more likely to be judged correct by astute observers!

  6. ^

    Unfortunately probabilistic, but since "obviously good" ideas tend to be slam dunks across multiple funding sources, it wouldn't take many such cases to establish a pattern.

  7. ^

    The originating thought came from someone else, but I agree with it.

  8. ^

    Which, to be clear, is not coming directly from the organization, so may not be an accurate representation of their views.

Comments

I do agree with you that silence can hurt community epistemics.

In the past I also thought that people who worried about missing out on job and grant opportunities if they voiced criticisms on the EA Forum were overestimating the risks. I am ashamed to say that I thought this was merely the result of their social anxiety, and pretty irrational.

Then last year I applied to an explicitly identified longtermist (central) EA org. They rejected me straight away, with the reason that I wasn't bought into longtermism (as written up here, which is now featured in the EA Handbook as the critical piece on longtermism...). This was perfectly fine by me; my interactions with the org were kind and professional, and I had applied on a whim anyway.

But only later did I realise that this meant the people who say they are afraid to be critical of longtermism (and potentially other bits of EA) because they worry about losing out on opportunities were more correct than I had previously thought.

I still think it's harmful not to voice disagreements. But evidently there is more of a cost to individuals than I thought, especially to those who are financially reliant on EA funding or EA jobs, and I was unreasonably dismissive of this possibility.

I am a bit reluctant to write this. I very much appreciated being told the reason for the rejection and I think it's great that the org invested time and effort to do so. I hope they'll continue doing this in the future, even if insufficient buy-in to longtermism is the reason for rejection.

I’d also like to point out this post as related to the topic of speaking your mind etc: https://forum.effectivealtruism.org/posts/qtGjAJrmBRNiJGKFQ/the-writing-style-here-is-bad

To my mind the artificial academic tone here makes people feel the stakes are much higher than they should be for an online discussion forum. Also, people who have English as a second language likely have much more insight into the blunders EA is making, especially when it comes to messaging our ideas to the public.

By selecting for people who talk in strict academic tones and with an overwrought style, I’d imagine we lose a lot of legitimate opinions that could help us course correct.

To the extent that writing on the EA Forum matters to EA decision-making, it has high stakes. Opinions that would actually allow the EA community to course-correct have stakes worth millions of dollars.
