Bessemer12

140 karma · 6 comments
Saw this post for the first time after it was linked from one of the recent FTX posts, and wanted to say thank you for having taken the time to write and express these concerns, which clearly weren't very popular but turned out to be...prescient. I'm a bit frustrated this didn't get more karma or engagement at the time.

I'm also frustrated that I probably just scrolled past without clicking or considering it because it didn't have that much karma and seemed 'against the mood.' It feels important for everyone (like me) who was caught off guard this week to recognize that this was not, actually, unforeseeable. It's humbling to realize how much work our cognitive biases must have been doing. Anyway, thanks!

I definitely felt dumb when I first encountered EA. Certain kinds of intelligence are particularly overrepresented and valorized in EA (e.g. quantitative/rational/analytical intelligence) and those are the kinds I've always felt weakest in (e.g. I failed high school physics and stopped taking math as soon as I could). When I first started working in EA, I felt a lot of panic about being found out for being secretly dumb because I couldn't keep up with discussions that leaned on those kinds of intelligence. I feel a lot better about this now, though it still haunts me sometimes.

What's changed since then?

  1. I still make dumb math mistakes when I'm required to do math in my current role -- but I've found that I have other kinds of intelligence that some of the colleagues I initially felt intimidated by are less strong in (e.g. judgment, emotional intelligence, intuitions about people), and even though these can easily be dismissed or considered 'fluffy', they actually do matter, and have mattered in concrete ways.
  2. I've come to realize that intelligence isn't useful to consider as a monolithic category -- most things can be decomposed into a bunch of specific skills, and ~anyone can get better at really specific skills if they decide it's important enough and ask for help. The reason I've kept making dumb mistakes is mostly a learned helplessness I developed over years of failing at certain classes in school. This caused me to develop an identity around being hopeless along those dimensions, but that really isn't the same as being "dumb", and I wish I'd recognized this sooner and just tried to solve a few key gaps in my knowledge that I felt embarrassed by.
  3. Spending a lot of time in EA social circles can make you feel like people have value in proportion to how smart they are (again, smart in very specific ways), and people often seem to treat intelligence as a proxy for ability to have impact. But looking back at history, the track record of highly intelligent people is a pretty mixed bag (smart people are also responsible for breakthroughs that create terrible threats!), and other character traits have also mattered a lot for doing good (e.g. work ethic, courage, moral fortitude). It feels good to remind myself that e.g. Stanislav Petrov and Vasili Arkhipov seem like they were pretty average people intelligence-wise; it didn't stop them from preventing nuclear war, so why should being bad at math lower my level of ambition about how much good I can do?
  4. Another thing that's helped is just having a sense of humor about feeling dumb. I used to feel a lot of shame about asking 'dumb'-sounding questions and thus giving myself away as a dumb person in both work and social contexts. This caused a lot of anxiety and meant I was pretty nervous and serious most of the time. Over time I learned that if I asked my real questions in a lighthearted way and lightened up around discussions of stuff I didn't understand, people took it well and I enjoyed the interactions more.
  5. I've also realized over time how much the desire to be perceived as smart shapes EA group dynamics. This leads to a) more people doing what I described above (not asking questions or revealing that they don't understand, which leaves you feeling alone in the feeling), and b) people talking or writing in unnecessarily complex or erudite ways in order to signal that they're part of the in-group. Becoming aware of these dynamics helped me start opting out of them/trying to intentionally violate these scripts more often.

I hope this is at least somewhat helpful -- I'm sorry you're feeling this way and I can definitely assure you you're not alone (and I really hope you don't leave EA for this reason)!

"It is also similarly the case that EA's should not support policy groups without clear rationale, express aims and an understanding that sponsorship can come with the reasonable assumption from general public, journalists, or future or current members, that EA is endorsing particular political views."

  • This doesn't seem right to me -- I think anyone who understands EA should explicitly expect more consequentialist grant-makers to be willing to support groups whose political beliefs they might strongly disagree with if they also thought the group was going to take useful action with their funding.
  • As an observer, I would assume EA funders are just thinking through who has [leverage, influence, is positioned to act in the space, etc.] and putting aside any distaste they might feel for the group's politics more readily than non-EA funders would (e.g. the CJR program also funded conservative groups working on CJR whose views the program director presumably didn't like or agree with, for similar reasons).

"Other mission statements are politically motivated to a degree which is simply unacceptable for a group receiving major funds from an EA org."

  • This seems to imply that EA funders should only give funding to groups that pass a certain epistemic purity test or are untouched by political considerations. I think applying EA-like epistemic standards to movement organizations in the US that touch on ~anything political would probably preclude you from funding anything political at all (maybe you're arguing EA should therefore never fund anything that touches on politics, but that seems likely to leave a lot of impact on the table if taken to an extreme).
  • My guess is that if you looked at grantees in many other OP cause areas, you would see a large spread of opinions expressed by the grantees, many of which don't follow EA-like epistemic norms. E.g. I understand the FAW grant-making team supports a range of groups whose views on animal welfare are, in some cases, ideologically far afield from the internally stated goals of the program. Again, I don't assume that the OP FAW POs necessarily endorse those views -- I assume those groups are being funded because the PO believes they are going to do effective work, or productively contribute to the FAW movement ecosystem overall (e.g. by playing bad cop to another organization's good cop with industry).

I think the point has been made in a few places that more money means a lower barrier to entry and is an opportunity to reduce elitism in EA, and I just wanted to add some nuance:

  • I think deploying money to literally make participation in the movement possible for more people is great (e.g. offering good salaries/healthcare/scholarships to people who would otherwise be barred from an event by finances).
  • On the other hand, I think excessive perks/fancy events etc. are likely to be especially alienating for people who have close family members struggling financially (this aligns with my own experience), so I worry that spending of this kind may actually make the movement feel less welcoming to people from a different socioeconomic background instead of more.

Thank you for writing this post; I know these take a lot of time and I think this was a really valuable contribution to the discourse/resonated strongly with me. 

I find it helpful to get clearer about who the audience is in any given circumstance, what they most want/value, and how money might help/hurt in reaching them. When you have a lot of money, it's tempting to use it as an incentive without noticing that it's not what your audience actually most values. (It also creates the danger of attracting the audience that does most value money, which we obviously don't want.)

For example, I think two critical audiences are 'talented people who really want to do good' and 'talented people who mostly just like solving hard problems.' We're competing with different kinds of entities/institutions in each case, but I think money is rarely the operative thing for the individuals I'd most like to see involved in EA.

  • For young people who really want to do good, for example:
    • Many of them already know they could go work in finance/consulting and get lots of money and travel and perks, but choose not to. I was in this situation, and all the free stuff made me assume that there was nothing intrinsically motivating/valuable about the work, since they had to dangle so many extrinsic rewards to get me to do it. EA is competing for those people with other social movements/nonprofits, and I suspect the more EA starts looking like the finance career option in terms of extrinsic rewards, the more those people might end up dismissing it/feeling like they're being "bought."
  • For people who really like solving hard problems:
    • I'm thinking here of people I know who are extremely smart and have chosen to do ethically neutral-to-dubious work because they got nerd-sniped/enjoyed the magnitude of the challenge/felt like it was a good fit for their brain/etc.
    • My sense is that money was a factor for them, but more as a neutral indicator of value/a signal that they were working on something hard and valuable than because of a desire to live a specific lifestyle (many still live with roommates/don't spend much). I think the best way to get these people is to emphasize that we have even harder and more interesting problems they could be solving (though offering reasonably comparable salaries so that the choice to switch is less disruptive also seems good).