I've seen a lot of discussion in the EA community recently about the divide between people who think EA should focus on high-level philosophical arguments and those who think EA should work on making our core insights more appealing to the public at large.

 

Over the last year the topic has become increasingly salient; the big shifts from my perspective were Scott Alexander's Open EA Global post, the FTX crash, and the Wytham Abbey purchase. I quite frequently see those in the first camp, the people who don't want to prioritize social capital, use the argument that epistemics in EA have declined.

 

For those who haven't studied philosophy, epistemics broadly refers to knowledge itself, or the study of how we gain knowledge and sort good information from bad. As someone who is admittedly on the side of growing EA's social capital, I find that the argument that the community's epistemics have declined tends to assume a number of things, namely:

  • It is a simple matter to judge who has high quality epistemics
  • Those with high quality epistemics usually agree on similar things
  • It's a given that the path of catering to a smaller group of people with higher quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower quality epistemics

 

In the spirit of changing EA forum discussion norms, I'll go ahead and say directly that my immediate reaction to this argument is something like: "You and the people who disagree with me are less intelligent than I am, and the people who agree with me are smarter than you as well." In other words, it feels like whoever makes this argument is indirectly saying my epistemics are inferior to theirs.

 

This is especially true when someone brings up the "declining epistemics" argument to defend EA orgs from criticism, like in this comment. For instance, the author writes:

"The discussion often almost completely misses the direct, object-level, even if just at back-of-the-envelope estimate way."

I'd argue that by bemoaning the intellectual state of EA, one risks focusing entirely on the object level, when in a real utilitarian calculus, things outside the object level can matter much more than the object level itself. The Wytham Abbey purchase is a great example.

 

This whole split may also point to the divergence between rationalists and newer effective altruists.

 

My reaction is admittedly not extremely rational or well thought out, and it doesn't have high quality epistemics backing it. But it's important to point out emotional reactions to the arguments we make, especially if we ever intend to convince the public of Effective Altruism's usefulness.

 

I don't have any great solutions to this debate, but I'd like to see less talk of epistemic decline in the EA forum, or at least have people state it more blatantly rather than dressing up their ideas in fancy language. If you think that less intelligent or thoughtful people are coming into the EA movement, I'd argue you should say so directly to help foster discussion of the actual topic. 

 

Ultimately I agree that epistemics are important to discuss, and that the overall epistemics of discussion in EA-related spaces have declined. However, I think the way this topic is being discussed and leveraged in arguments is toxic to fostering trust in our community, and it assumes that high quality epistemics are a good in themselves.

Comments

It's worth pointing out that the assumption that growing social capital comes at the cost of truth-seeking is not necessarily true. Sometimes the general public is correct, and your group of core members is wrong. A group that is insular, homogenous, and unwelcoming to new members can risk having incorrect beliefs and assumptions frozen in place, whereas one that is welcoming to newcomers with diverse skills, backgrounds, and beliefs is more likely to have poor assumptions and beliefs challenged.

I agree, taking your words literally, with all their qualifications ("can risk," "not necessarily true," "sometimes," etc). There are a few other important caveats I think EA needs to keep in mind.

  • "Newcomer" is a specific role in the group, which can range from a hostile onlooker, to a loud and incompetent novice, to a neutral observer, to a person using the group for status or identity rather than to contribute to its mission, to a friendly and enthusiastic participant, to an expert in the topic area who hasn't been part of the specific group before.
  • Being welcoming to newcomers does not mean tolerating destructive behavior, and it can absolutely encompass vigorous and ongoing acculturation of the newcomer to the group, requiring them to conform to the group's expectations in order to preserve the group's integrity.
  • To avoid such acculturation efforts resulting in total conformism and cultish behavior, the group needs to find a way to enact them that is professional and limited to specific, appropriate domains.
  • Generally, the respect that a participant has earned from the group is an important determinant of how seriously their proposals for change will be taken. They should expect that most of the ideas they have for change will be bad ones, and that the group knows better, until they've spent time learning and understanding why things are done the way they are (see Chesterton's Fence). Over time, they will gain the ability to make more useful proposals and be entrusted with greater independence and responsibility.

In EA, I think we have foolishly focused WAY too much on inviting newcomers in, and completely failed at acculturation. We also lack adequate infrastructure to build the intimate working relationships that allow bonds of individual trust and respect to develop and structure the group as a whole. Those pods of EAs who have managed to do this are typically those working in EA orgs, and they get described as "insular" because they haven't managed to integrate their local networks with the broader EA space.

I don't see a need to be more blandly tolerant of whatever energy newcomers are bringing to the table in EA. Instead, I think we need very specific interventions to build one-on-one, long-term, working relationships between less and more experienced EA individuals, and a better way to update the rest of EA on the behavior and professional accomplishments of EAs. Right now, we appear to be largely dependent on hostile critics to provide this informational feedback loop, and it's breaking our brains. At the same time, we've spent the last few years scaling up EA participation so much without commensurate efforts at acculturation, and that has badly shrunk our capacity to do so, possibly without the ability to recover.

Excellent points here. I think this is close to what I am trying to get at.

I agree that we shouldn’t just open the floodgates and invite anyone and everyone.

I absolutely agree! To put it more plainly, I intuit that this distinction is a core cause of tension in the EA community, and is the single most important discussion to have with regards to how EA plans to grow our impact over time.

 

I've come down on the side of social capital not because I believe the public is always right, or that we should put every topic to a sort of 'wisdom of the crowds' referendum. I actually think that a core strength of EA and rationalism in general is the refusal to accept popular consensus at face value.

 

Over time, it seems to me that EA has leaned too far in the direction of supporting outlandish and difficult-to-explain cause areas, without giving any thought to convincing the public of these arguments. AI Safety is a great example here. Regardless of your AI timelines or priors on how likely AGI is to come about, it seems like a mistake to me that so much AI Safety research and discussion is gated. Most of the things EA talks about with regard to the field would absolutely freak out the general public - I know this from running a local community organization.

 

In the end, if we want to grow and become an effective movement, we have to at least optimize for attracting workers in tech, academia, etc. If many of our core arguments cease to be compelling to these groups, we should take a look at our messaging and try to keep the core of the idea while tweaking how it's communicated.

" Most of the things EA talks about with regard to the field would absolutely freak out the general public" - This is precisely what worries me and presumably others in the field. Freaking people out is a great way of making them take wild, impulsive actions that are equally likely to be net-negative as net-positive. Communication with the public should  probably aim to not freak them out.

EA is currently growing relatively fast, so I suspect that the risk of insularity is overrated for now. However, this is a concern that I would have to take more seriously if recent events were to cause movement growth to fall off a cliff.

Thanks for sharing your side here. That seems really frustrating. 

This is definitely a heated topic, and it doesn't seem like much fun on any side of it.

I think in the background, there are different clusters of people who represent different viewpoints regarding philosophical foundations and what good thinking looks like. Early EA was fairly small, and correspondingly, it favored some specific and socially unusual viewpoints. 

Expanding "EA" to more people is really tricky. It's really difficult to expand in ways where newer people are highly aligned with clusters of the "old guard."

I think it's important to see both sides of this. Newer people really hate feeling excluded, but at the same time, if the group grows too much, then we arguably lose many of the key characteristics that make EA valuable in the first place. If we started calling everything and everyone "EA", then there basically wouldn't be EA.

Frustratingly, I think it's difficult to talk about on either end. My guess is that much of the uneasiness is not said at all, in part because people don't want to get attacked. I think saying "epistemics are declining" is a way of trying to be a bit polite. A much more clear version of this might look like flagging specific posts or people as good or bad, but no one wants to publicly shame specific people/posts.

So, right now we seem to be in a situation where both sides of this don't particularly trust each other, and also are often hesitant to say things to each other, in large part because they don't trust the other side. 

For what it's worth, as good as some people might think their/EA epistemics are, the fact that this discussion is going so poorly demonstrates, I think, that there's a lot of room for improvement. (I think it's fair to say that being able to discuss heated issues online is a case of good epistemics, and right now our community isn't doing terrifically here.)

(I really wasn't sure how to write this. I'm fairly sure that I did a poor job, but I hope it's still better than me not commenting.)

I agree with most of what you’ve written here, and I actually think this is a much better framing on why open discussion of disagreement between the new guard and the old guard is important.

If we let this problem fester, a bunch of people who are newer to the movement will get turned away. If we can instead increase the number of talented and influential people who join EA while getting better at convincing others of our ideas, that's where most of the impact lies to me.

A related topic is the youth and academic focus of EA. If we truly want to convince the decision makers in society then we need to practice appealing to people outside an academic setting.

Good to know, thanks.

A few responses:

> If we let this problem fester, a bunch of people who are newer to the movement will get turned away.

Agreed. I think the situation now does result in it being hard for many people to join.

> If we can instead increase the amount of talented and influential people that join EA while getting better at convincing others of our ideas, that’s where most of the impact lies to me.

I think you're looking at this differently to how some EA decision makers are looking at it. I think that many EA community members have a view of the EA community that makes it seem much more important than many current decision makers do. There's been a very long controversy here, where some people would say, "All that matters is that we grow EA ideas as much as possible", and others say things more like, "We already know what to do. We really just need very specific skill sets."

Growth comes with a lot of costs. I think that recent EA failures have highlighted issues that come from trying to grow really quickly. You really need strong bureaucracies to ensure that growth isn't a total mess (e.g., we promote some billionaire who commits $8B of fraud, or EAs fund some new local leader who creates an abusive environment).

> I think that many EA community members have a view of the EA community that makes it seem much more important than many current decision makers do.

 

Interesting, I actually feel that I have the alternative view. In my mind people who are decision makers in EA severely overestimate the true impact of the movement, and by extension their own impact, which makes them more comfortable with keeping EA small and insular. Happy to expand here if you're curious.
 

> Growth comes with a lot of costs. I think that recent EA failures have highlighted issues that come from trying to grow really quickly.

 

Would you mind throwing in a couple of examples? To my mind, the whole SBF/FTX fiasco was a result of EA's focus on elite people who presented as having 'high quality epistemics.'

 

Many people outside the rat sphere in my life think the whole FTX debacle, for instance, is ridiculous because they don't find SBF convincing at all. SBF managed to convince so many people in the movement of his importance because of his ability to expound and rationalize his opinions on many different topics very quickly. This type of communication doesn't get you very far with normal, run of the mill folks.

> Would you mind throwing in a couple of examples?

Leverage Research gets attention here. I believe there were a few cases of sexual harassment and similar (depressingly common when you have a lot of people together). There were several projects I know of that were just done poorly and needed to get bailed out or similar. CEA went through a very tough time for its first ~5 years or so (it went through lots of EDs).

> This type of communication doesn't get you very far with normal, run of the mill folks.

I don't see it that way. Lots of relatively normal folks put money into FTX. Journalists and VCs were very positive about SBF/FTX.

Wasn't the Leverage issue back around 2016? Also, that doesn't strike me as a cost of growing too fast. From my recollection, the issue was that Leverage was extremely secretive and a lot of the psychological abuse was justified with the idea that they were saving the world.

I'd argue that elitist thinking and strange beliefs the public would never accept are dangerous, and usually improved by more scrutiny. If we can make our internal messaging more palatable to the public, we will avoid fiascos like Leverage.

With regards to FTX, as I mentioned below, the millions who put money in were speculating. EA folks who helped SBF were trying to do good. It's an important distinction.

If you are an individual engaging in a speculative bet you can afford, you don’t really need to worry about optics or the impact of potential failure. If someone was gambling on the success of FTX with a bet they couldn’t afford to pay, I don’t think they would be someone we should defend in our community anyway.

However people at the top of the EA movement made gigantic bets on FTX working out, at least with regards to their social capital.

> I don't see it that way. Lots of relatively normal folks put money into FTX. Journalists and VCs were very positive about SBF/FTX.

This. I do think that blaming rationalist culture is mostly a distraction, primarily because way too much normie stuff promoted SBF.

I had a very different opinion of the whole crypto train (that is, crypto needs to at least stop involving real money, if not be flat out banned altogether).

Yes, EA failed. But let's be more careful about suggesting that normies didn't fail here.

> Many people outside the rat sphere in my life think the whole FTX debacle, for instance, is ridiculous because they don't find SBF convincing at all. SBF managed to convince so many people in the movement of his importance because of his ability to expound and rationalize his opinions on many different topics very quickly. This type of communication doesn't get you very far with normal, run of the mill folks.

I ignored SBF and the crypto crowd; however, I disagree with this, primarily because I think it predictably overrates how much you wouldn't fall for a scam. We need to remember that before SBF collapsed, at least a million people decided to get on the FTX train, and the mainstream financial media was fawning over SBF. So while I do think EA failed here, IMO the real failure is that crypto, until November, was treated as though it were legitimate when it isn't.

Your comment minimizes EA’s role in getting SBF as far as he got. If you read the now-deleted Sequoia article it’s clear that the whole reason he was able to take advantage of the Japan crypto arbitrage is because he knew and could convince people in the movement to help him.

Most of the million who hopped on the crypto/SBF train were blatantly speculating and trying to make money. I see those in EA who fell for it as worse because they were ostensibly trying to do good.

I agree that EA failed pretty hard here. My big disagreements are probably on why EA failed, not that EA failed to prevent harm.

What would you say caused EA to fail?

If I were to extract generalizable lessons from the FTX collapse, the major changes I would make are:

  1. EA should stay out of crypto, until and unless the situation improves to the extent that it doesn't have to rely on speculators. One big failure is that EAs thought they could pick winners better than other investors could.

  2. Good Governance matters. By and large, EA failed at basic governance tasks, and I think governance needs to be improved. My thoughts are similar to this post:

https://forum.effectivealtruism.org/posts/sEpWkCvvJfoEbhnsd/the-ftx-crisis-highlights-a-deeper-cultural-problem-within

These are the biggest changes I would make on the margin, IMO.

In the spirit of the communication style you advocate for... my immediate emotional reaction to this is "Eternal September has arrived".

I dislike my comment being summarized as "brings up the "declining epistemics" argument to defend EA orgs from criticism". In the blunt style you want, this is something between distortion and manipulation.

On my side, I wanted to express my view on the Wytham debate, and I wrote a comment expressing my views on it.

I also dislike the way my comment is straw-manned by selective quotation.

In the bullet point immediately following "The discussion often almost completely misses the direct, object-level, even if just at back-of-the-envelope estimate way," I do explicitly acknowledge the possible large effects of higher-order factors.
 

> In contrast, large fraction attention in the discussion seems spent on topics which are both two steps removed from the actual thing, and very open to opinions. Where by one step removed I mean e.g. "how was this announced" or "how was this decided", and two steps removed is e.g. "what will be the impact of how was this announced on the sentiment of the twitter discussion". While I do agree such considerations can have large effect, driving decisions by this type of reasoning in my view moves people and orgs into the sphere of pure PR, spin and appearance.

What I object to is a combination of:
1. ignoring the object level, or discussing it in a very lazy way
2. focusing on the 2nd order... but not in a systematic way, mostly based on saliency and emotional pull (e.g., how will this look on twitter)

Yes, it is a simple matter to judge where this leads in the limit. We have a bunch of examples of what the discourse looks like when completely taken over by these considerations - e.g., political campaigns. Words have little meaning connected to physical reality, but are mostly tools in the fight for the emotional states and minds of other people.

Also: while "those with high quality epistemics usually agree on similar things" is a distortion making the argument personal, about people, in reality, yes, good reasoning often converges to similar conclusions.

Also: "It's a given that the path of catering to a smaller group of people with higher quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower quality epistemics."

No, it's not a given. Just, so far, effective altruism was about using evidence and reason to figure out how to benefit others as much as possible, and acting based on that. Based on the thinking so far, it was decidedly not trying to be a mass movement "making our core insights more appealing to the public at large."

In my view, no one has figured out yet what the "appealing to the masses, don't need to think much" version of effective altruism should look like for it to actually be good.

(Edit: Also, I quite dislike the frame-manipulation move of shifting from "epistemic decline of the community" to "less intelligent or thoughtful people joining". You can imagine a randomized experiment where you take two groups of equally intelligent and thoughtful people, and you make them join communities with different styles of epistemic culture (e.g., physics and multi-level marketing). You will get very different results. While you seem to interpret a lot of things as being about people (are they smart? have they studied philosophy?), I think it's often much more about norms.)

Thanks for responding. I wasn't trying to call you out, and perhaps I shouldn't have quoted your comment so selectively.

We seem to have opposite intuitions on this topic. My point with this post is that my visceral reaction to these arguments is that I'm being patronized. I even admit at the end of my post that declining epistemic quality is a legitimate concern.

In some of my other comments I’ve admitted that I could’ve phrased this whole issue better, for sure.

I suppose to me, current charities/NGOs are so bad, and young people feel so powerless to change things, that the core EA principles could be extremely effective if spread.

> Also: while "those with high quality epistemics usually agree on similar things" is a distortion making the argument personal, about people, in reality, yes, good reasoning often converges to similar conclusions.

This. Aumann's Agreement Theorem tells us that Bayesians who have common priors and trust each other to be honest cannot disagree.

The in-practice version of this is that a group agreeing on similar views around certain subjects isn't automatically irrational, unless we have outside evidence or one of the theorem's conditions fails.
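For reference, a minimal sketch of the theorem's standard statement (my paraphrase, assuming the usual common-knowledge formulation): if two agents share a common prior $P$ and have information partitions $\mathcal{I}_1$ and $\mathcal{I}_2$, then

$$\big(P(A \mid \mathcal{I}_1) = q_1\big) \text{ and } \big(P(A \mid \mathcal{I}_2) = q_2\big) \text{ being common knowledge} \;\Longrightarrow\; q_1 = q_2,$$

i.e., once their posteriors for an event $A$ are common knowledge, the two agents cannot "agree to disagree" about its probability.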

Aumann's agreement theorem is pretty vacuous because the common prior assumption never holds in important situations, e.g. everyone has different priors on AI risk.

I'm just going to register a disagreement that I think is going to be a weird intersection of opinions. I despise posting online but here goes. I think this post is full of applause lights and quite frankly white psychodrama. 

I'm a queer person of colour and quite left-wing. I really disliked Bostrom's letter but still lean hard on epistemics being important. I dislike Bostrom's letter because I think it expresses an untrue belief and equivocates out of grey tribe laziness. But reading a lot of how white EAs write about being against the letter, it sounds like you're more bothered by issues of social capital and optics for yourselves, not by any real impact reason.

I believe this for two reasons:

1. This post bundles together the Bostrom letter and Wytham. I personally think Wytham quite possibly could be negative EV (mostly because I think Oxford real estate is inflated and the castle is aesthetically ugly and not conducive to good work being done). But the wrongness in the Bostrom letter isn't that it looks bad. I am bothered by Bostrom holding a wrong belief, not a belief that is optically bad.

2. You bundled in AI safety in later discussions about this. But there are lots of neartermist causes that are really weird, e.g. shrimp welfare. Your job as a community builder isn't to feel good and be popular; it's to truth-seek and present morally salient facts. The fact that AI safety is the hard one for you speaks to a cohort difference, not anything particular about these issues. For instance, in many Silicon Valley circles AI safety makes EA more popular!

Lastly, I don't think the social capital people actually complete the argument for the full implications of what it means for EA to become optics-aware. Do we now go full Shorism and make sure we have white men in leadership positions so we're socially popular? The discussion devolved to the meta-level of epistemics because the object-level discussion is often low quality, and the epistemics need to hold for object-level utilitarian calculus to even exist, because we're doing group decision-making. It all just seems like a way to descend into respectability politics and ineffectiveness. I want to be part of a movement that does what's right and true, not what's popular.

On a personal emotional note, I can't help but wonder how the social capital people would have acted in previous years with great queer minds. It was just a generation ago that queer people were socially undesirable and hidden away. If your ethics are so sensitive to the feelings of the public, I frankly do not trust them. I can't help but feel a lot of the expressions of fear by mostly white EAs in these social capital posts are status anxieties about their inability to sit around the dinner table and brag about their GiveWell donations.

Could you clarify the meaning of "Shorism" here? I assume you're referring to David Shor?

A couple of things. I didn't actually mention the Bostrom letter at all, or any sort of race or gender identity politics. This was an intentional decision to try to get at what I was saying without muddying the waters.

You seem to be assuming I am advocating for 100% maximization of social outcomes and capital in all situations, which is absolutely not what I want. I simply think we can do more work on messaging without losing epistemic quality.

Even if there is a trade off between the two, I’d argue optimizing a little more for social capital while keeping the core insights would be more impactful than remaining insular.

I'll defend the declining epistemics point of view, even though I don't think it's quite so bad as some others think.

I recommend first reading about the Eternal September. The basic effect is that when new people join a movement, it takes time for them to get up to speed, and if people join at too fast a rate this process breaks down. This isn't necessarily a lack of intelligence; it's more a result of needing a certain number of more experienced people to help newer members understand why things work the way that they do.

When a movement grows too fast, it's very easy for this cultural knowledge to dissipate and for the values of a movement to change not because people deeply understood the old values and consciously decided that it would be better because they were different, but more because people are importing their assumptions from the broader society.

Now, some people think it would be arrogant for a group of people to think that they have better access to the truth than society on average. On the other hand, almost no one denies that there are groups with worse access to the truth, or that we have some idea of which groups these are. And the hypothesis that we can identify some groups with worse access to the truth, but can't identify groups with better access, would be an extremely strange and weird hypothesis to entertain, the kind of hypothesis that is generally only produced by social processes.

Once we've accepted that it is possible to identify at least some groups with better epistemics than average, then it becomes reasonable to suggest that EA could be one of those groups. Indeed, if someone didn't think EA was one of those groups, I'd wonder why they decided to join EA rather than something else.

So once you accept that it is likely that your group has above-average epistemics, it follows quite quickly that you don't want it regressing to the societal mean. And this is a major challenge, because social forces naturally push your movement to regress to the mean, and so maintaining the current quality requires constantly pushing back against these entropic forces.

I'd challenge you to reverse your analysis and consider what are the "smuggled assumptions" in the "epistemic decline isn't an issue" hypothesis. As an example, one assumption might be, "We mostly can't tell who has high-quality epistemics or not". And I would push back against this by pointing out, as I did above, that people are pretty good at agreeing that certain groups have low-quality epistemics (flat-earthers, creationists, lizard-person conspiracy theorists) and so it would be strange if we had no idea of high-quality epistemics, particularly since you can get pretty far towards high-quality epistemics by not doing things that clearly degrade your epistemics. 

I think that whichever group within EA values the epistemic norms should stop complaining, and work on exhibiting those traits, explaining clearly why they are useful, and how to build them, and proactively pushing for the community to build those traits. That will be hard, but the alternative is to be insular and insulting, and will continue to undermine their goals, which might be narrowly epistemically virtuous, but is counterproductive if we have goals outside of pure epistemology. And EA, as distinct from LessWrong, is about maximizing the good in an impartial welfarist sense, not the art of human rationality.

I guess that sounds a lot like suggesting that people who value epistemics should just surrender the public conversation, which is essentially the same as surrendering the direction that EA takes?

I think that epistemics will ground out in terms of more impact, but explaining this would take a bit of work and so I'm deciding to pass today because I already spent too long on my comment above. However, feel free to ping me in a few days if you'd like me to write something up.

> I guess that sounds a lot like suggesting that people who value epistemics should just surrender the public conversation, which is essentially the same as surrendering the direction that EA takes?

That's not what I was saying, and I don't really understand where you got that idea from. I was saying that the people who value epistemics need to actually do work, and push for investment by EA into community epistemics. In other words, they should "stop complaining, and work on exhibiting those traits, explaining clearly why they are useful, and how to build them, and proactively pushing for the community to build those traits."
That's what they should do, instead of complaining and thinking that it will help, when what it actually does is hurt the community while failing to improve epistemic norms.

Perhaps people already are exhibiting those traits =P? It’s not like it would necessarily be super legible if they were.

And it’s hard to make progress on a problem if you want to hide that it exists.

I really wanted to like this post, because I agree that "declining epistemics" is rarely a complaint that is made in good faith, and is often harmful when compared to actually asking about the issues. However, the problem that the high-decoupler epistemic purists / LessWrong crowd seem to have is a different one than you imply: that they do ignore impactful and important emotional and social factors which matter, and instead focus only on easily assessed object-level issues. (A version of the McNamara fallacy.)

You correctly point out that "in a real utilitarian calculus, things outside the object level can matter much more than the object level itself," which I strongly agree with - but then you explicitly decide not to do any of that calculus and appeal to emotion instead: not to say that you think it matters more than the object-level factors in certain cases, but to say that you feel like it should.

I appreciate the feedback!

 

In an ideal situation I would definitely try to outline the uses I take issue with, and provide arguments from both sides. At the same time, this is my first top-level post, and I've held back on posting something similar multiple times due to the high standard of rigor here.

 

I suppose I decided that, when it comes to community building especially, intuitions, moods, and gut feelings are something EA should be aware of and respond to, even if they can't always be explained rationally. My plan is to develop this idea more in subsequent posts.

I understand, and I think both that the bar for people posting should be lower and that people's own standards for posting should be higher. I definitely appreciate that writing is hard, and it's certainly something I still work on. The biggest piece of advice I'd have is to write drafts, share and get feedback on them, and plan to take significant amounts of time to write good posts, because thinking and revising your thoughts takes time, as does polishing writing.

That’s the stance I took for a long time, and unfortunately I posted nothing because I’m busy and I guess I don’t value drafting posts that much.

Realistically I may not be suited to posting here.

I suspect that the amount of effort you put into drafting and thinking about posts, distributed slightly differently, would result in some very good posts, and the best thing to do is to draft things and then share them with a small group of people, and decide then whether to put in effort or to abandon them - rather than preemptively not posting without getting feedback.

> I don't have any great solutions to this debate, but I'd like to see less talk of epistemic decline in the EA forum, or at least have people state it more blatantly rather than dressing up their ideas in fancy language. If you think that less intelligent or thoughtful people are coming into the EA movement, I'd argue you should say so directly to help foster discussion of the actual topic.

Tyler Cowen said:

> ...effective altruism seems to attract more talented young people than anything else I know of right now by a considerable margin. And I've observed this by running my own project, Emergent Ventures for Talented Young People. And I just see time and again, the smartest and most successful people who apply get grants. They turn out to have connections to the EA movement. And that's very much to the credit of effective altruism.

I agree with Cowen: I think EA has done a great job attracting talented people.

I don't like the idea of driving those people away.

Like you, I am fairly suspicious of the "declining epistemic quality" idea.

But it seems possible that this idea is correct, and also that "less intelligent or thoughtful people are coming into the EA movement" is an incorrect diagnosis of what's going wrong.

I don't think people should jump to the most inflammatory diagnosis.

I mostly agree that EA has done a good job, but don't see any reason to think that "declining epistemic quality" implies that "less intelligent or thoughtful people are coming into the EA movement," nor that it disagrees with Tyler's claim that "EA has done a great job attracting talented people."

What I think is happening, based on my observations and model of the community, is that a group of EAs (mostly now in their 30s) has spent a decade or more building their internal understanding and improving their epistemics in order to do good, and many of them now see new people (mostly in their late teens and 20s) who are on a similar path, but are not as far along, who haven't yet managed to do all of the work of improving their understanding. The older EAs who are complaining are pointing out, correctly, that bringing in these new people has lowered the epistemic standards of the community.

I think that this explains why all three of the original statements are correct, and why it's also a bad idea to dismiss the people coming in.

One difference between our perspectives is that I don't take for granted that this process will occur unless the conditions are right. And the faster a movement grows, the less likely it is for lessons to be passed on to those who are coming in. This isn't dismissing these people, just how group dynamics work and a reality of more experienced people having less time to engage.

I want to see EA grow fast. But past a certain threshold (I'm not sure exactly where it is), our culture will most likely start to degrade. That said, I'm less concerned about this than before. As terrible as the FTX collapse and recent events have been, they may have actually resolved any worries about potentially growing too fast.

> And the faster a movement grows, the less likely it is for lessons to be passed on to those who are coming in.

That's assuming a lot about what movement growth means. (But I don't think the movement should be grown via measuring attendance at events. That seems to be a current failure, and part of why I think that investment in EA community building is flawed.)

Hey Wil,

as someone who is likely in the "declining epistemics would be bad" camp, I will try to write this reply while mindfully attempting to be better at epistemics than I usually am.

Let's start with some points where you hit on something true:

> However, I think the way this topic is being discussed and leveraged in arguments is toxic to fostering trust in our community

I agree that talk about bad epistemics can come across as being unwelcoming to newcomers and considering them stupid. Coupled with the elitist vibe many people get from EA, this is not great.

I also agree that many people will read the position you describe as implying "I am smarter than you", and people making that argument should be mindful of this, and think about how to avoid giving this impression.

You cite as one of the implied assumptions:

> Those with high quality epistemics usually agree on similar things

I think it is indeed a danger that "quality epistemics" is sometimes used as a shortcut to defend things mindlessly. In an EA context, I often disagreed with arguments that defer strongly to experts in EA orgs. These arguments vaguely seem to neglect that these experts might have systematic biases qua working in those orgs. Personally, I probably sometimes use "bad epistemics" as a cached thought internally when encountering a position for which I have mostly seen arguments that I found unconvincing in the past.

Now for the parts I disagree with:

I scrolled through some of the disagreeing comments on Making Effective Altruism Enormous, and tried to examine if any have the implicit assumptions you state:

> It is a simple matter to judge who has high quality epistemics

This comment argues that broadening the movement too much will reduce nuance by default. While it implies that EA discussions have more nuance than the average discussion, I do not think the poster or anyone else in the thread says it is easy to identify people with good epistemics. Furthermore, many argue that growth should not be too fast, so that people can get used to EA discussion norms, which implies that people do not necessarily think that bad epistemics are fundamental.

> Those with high quality epistemics usually agree on similar things

I don't think the strong version of this statement ("usually") holds true for most people in the epistemics camp. Some people, including me, would probably agree that, e.g., disagreeing with "it is morally better to prioritize expected impact over warm feelings" is usually not good epistemics. I.e., there are a few core tenets that "those with high quality epistemics" usually agree on.

> It's a given that the path of catering to a smaller group of people with higher quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower quality epistemics

While many people probably think it is likely, I do not think the majority consider it "a given". I could not find a comment in the above discussion that argues or assumes it is obvious.

What I personally believe:

My vague position is that one of the core advantages of the EA community is caring about true arguments, and consequently earnest and open-minded reasoning. Insofar as I would complain about bad epistemics, it is definitely not that people are dumb. Rather, I think there is a danger that, in some discussions, people engage a bit more in what seems like motivated reasoning than the EA average, and seem less interested in understanding other people's positions and changing their minds. These are gradual differences; I do not mean to imply that there is one camp that reasons perfectly and impartially, and another that does not.
Without fleshing out my opinion too much (the goal of this comment is not to defend my position), I usually point to the thought experiment "What would have happened if Eliezer Yudkowsky had written AI safety posts on the machine learning subreddit?" to illustrate how important having an open-minded and curious community can be.

For example, in your post you posit three implicit assumptions, and later link to a single comment as justification. And tbf, that comment reads as a little bit dismissive, but I don't think it actually carries the three assumptions you outline, and it should not be used to represent a whole "camp", especially since this debate was very heated on both sides. It is not really visible that you try to charitably interpret the position you disagree with. And while it's good that you clearly state that something is an emotional reaction, I think it would also be good if that reaction were accompanied by a better attempt to understand the other side.

You make some great points here. I'll admit my arguments weren't as charitable as they should've been, and were motivated more by heat than light.

I hope to find time to explore this in more detail and with more charity!

Your point about genuine truth seeking is certainly something I love about EA, and don’t want to see go away. It’s definitely a risk if we can’t figure out how to screen for that sort of thing.

Do you have any recommendations for screening based on epistemics?

> Those with high quality epistemics usually agree on similar things

On factual questions, this is how it should be, and this matters. Put another way, it's not a problem for EAs to come to agree on factual questions, without further assumptions.
