
Epistemic status: uncertain. I removed most weasel words for clarity, but that doesn't mean I'm very confident. Don't take this too literally. This is a rough guess; I'd love to see a cluster analysis to check whether there's actually a trend.[1] I'm interested in feedback on this: whether it matches the intuitions of others reading it, and whether there are important aspects I've missed or really messed up.

I separate a lot of interesting intellectuals into disagreeables and assessors.[2]

Disagreeables are highly disagreeable. They're quick to call out bullshit and are excellent at coming up with innovative ideas. Unfortunately, they produce a whole lot of false positives; they're pretty overconfident and wrong a great deal of the time. They're "idea detectors" with the sensitivity turned up to 11.

Disagreeables often either work alone or on top of a structure that believes (almost) everything they say.

Assessors are well-calibrated. If/when they participate in forecasting tournaments, they do really well. Assessors don't try to come up with many brilliant new ideas and don't spend much effort questioning the most deeply held group assumptions, but they're awfully good at not being wrong. When they say X is true, X is indeed very likely to be true.

Both disagreeables and assessors are frustrated by mainstream epistemics, but for different reasons.

Disagreeables tend to make dramatic statements like,

  • “There’s a big conspiracy that everyone is in on”
  • “Everyone is just signaling all the time, they’re not even trying to be right”
  • “This organization is a huge Ponzi scheme”

Assessors make calmer clarifications like,

  • “Yes, disagreeable person said X is a huge Ponzi scheme. There’s some truth there, but it’s a big oversimplification”
  • “I’m quite sure that Y’s paper is unlikely to replicate, after closely looking at the related papers”

If the emperor really has no clothes, disagreeables would be the first to (loudly) call that out. If the emperor has a large set of broadly reasonable policies, but a few that are subtly mistaken, the assessors would be a better fit to diligently identify and explain these.

Some disagreeables include Socrates, Nietzsche, Wittgenstein, Nassim Taleb, Robin Hanson, early Steve Jobs, other tech founders, angry public figures, and professional gurus of all kinds.

Assessors include David Hume[3], Bertrand Russell, Robert Caro, Scott Alexander, Superforecasters, some of late Steve Jobs, good CEOs, and many not-particularly-angry politicians (Eisenhower/Clinton/Obama come to mind).

To the disagreeables, assessors seem like boring blankfaces and bureaucrats who maintain the suffocating status quo. To assessors, disagreeables often seem like reckless cult leaders who go around creating epistemic disarray.

Disagreeables value boldness and novelty: being the most interesting person in the room, making a big statement. Assessors value nuance, clarity, and discernment: getting things really right, even if the truth is tedious or boring.

I feel like Rationalists lean disagreeable, and Effective Altruists lean assessor.

The ideal is a combination of both. Have disagreeables come up with ideas and assessors assess them. But this is really hard to do!

Disagreeables normally don't exactly announce that they are disagreeable; they often have compelling-sounding arguments for why everyone else is wrong (including all the assessors). Disagreeables often really, truly, and absolutely believe their inaccuracies. Meanwhile, assessors can be very soft-spoken and boring.[4] Great assessors who are unsure about X, even after a lot of research, can seem a whole lot like regular people who are unsure about X.

I wish we had a world where the people with great ideas were also all well-calibrated. But we don't live in that world. As such, I can rarely recommend interesting books outright; I need to condition my reviews.

"These books are very interesting, but I'd only recommend them if you're already familiar with the topic and surrounding debate, otherwise they might cause you more harm than good."

Or with people, 

"These people have the cool ideas, but you can't take them too seriously. You instead have to wait for these other people to review the ideas, but honestly, you’ll likely be waiting a while."

Summary Table

|  | Disagreeables | Assessors |
| --- | --- | --- |
| Goal | Innovation | Not being wrong |
| Traits | Disagreeable, innovative, interesting, individualistic, unreasonable, (occasionally) angry | Calibrated, nuanced, clear, strong discernment, reasonable, calm |
| Examples | Socrates, Nietzsche, Wittgenstein, Nassim Taleb, Robin Hanson, early Steve Jobs, other tech founders, angry public figures, professional gurus of all kinds | David Hume[3], Bertrand Russell, Robert Caro, Scott Alexander, Superforecasters, some of late Steve Jobs, good CEOs, many not-particularly-angry politicians (Eisenhower/Clinton/Obama) |
| Example quotes | “There’s a big conspiracy that everyone is in on”; “Everyone is just signaling all the time, they’re not even trying to be right”; “This organization is a huge Ponzi scheme” | “Yes, disagreeable person said X is a huge Ponzi scheme. There’s some truth there, but it’s a big oversimplification”; “I’m quite sure that Y’s paper is unlikely to replicate, after closely looking at the related papers” |
| Failure modes | Wild overconfidence; convincing the public that personal “pet theories” are either widely accepted or self-evident | Too quiet to draw attention; focusing on being accurate about things that don’t even matter that much |
| Great for | Idea generation, calling out huge mistakes, big simplifications (when justified), tackling hard problems in areas with stigmas | Prioritization, filtering the ideas of disagreeables, catching many moderate-sized mistakes |

[1] This work has flavors of personality profiling tools like the Enneagram and Myers-Briggs. If you hate those things, you should probably be suspicious of this.

[2] These aren't all the types, but they're the main ones I think of. Another interesting type is "bulldogs", who dogmatically champion one or two ideas over several decades. Arguably "philosopher king/queen/ruler" is a good archetype, though it overlaps heavily with disagreeables and assessors. 

[3] I'm not really sure about Hume, this is just my impression of him from several summaries.

[4] See this set of interviews of superforecasters for an idea. I think these people are interesting, but I could easily imagine overlooking them if I just heard them speak for a short period.

Comments

I agree with the general point that idea generators are often overconfident and poorly calibrated, but I'm not super excited about conflating that with being disagreeable. There's something there, in that being impatient with the failings of others can be a good impetus to create something better, but I think there are also plenty of generators who aren't highly disagreeable, and if that's the case it seems bad for the dichotomy as formulated here to catch on.

One example archetype of a generator who isn't disagreeable is someone who is both very intelligent and very enthusiastic/excitable. This person will generate lots of cool new ideas they're super excited about, and will likely still be overconfident and poorly calibrated, without feeling particularly motivated to yell about other people's bad ideas.

This clustering is based on anecdotal data; I wouldn't be too surprised if it were wrong. I'd be extremely curious for someone to do a cluster analysis and see if there are any real clusters here.

I feel like I've noticed a distinct cluster of generators who are disagreeable, and have a hard time thinking of many who are agreeable. Maybe you could give some examples that come to mind for you? Anders Sandberg comes to my mind, and maybe some futurists and religious people.

My hunch is that few top intellectuals (that I respect) would score in the 70th percentile or above on Big 5 agreeableness, but I'm not sure. It's an empirical question.

I don't remember hearing about the generators/evaluators dichotomy that you & Stefan mention. I like that dichotomy too; it's quite possible it's better than the one I raise here.

Spencer Greenberg also comes to mind; he once noted that his agreeableness is in the 77th percentile. I'd consider him a generator.

At the very least I think we can be more confident in the generators/evaluators (or /assessors) dichotomy, than in the further claim that the former tend to be disagreeable.

I'm coming at this from science, where a lot of top generators have a strong "this is so cool!" sort of vibe to them – they have a thousand ideas and can't wait to try them out. Don't get me wrong, I think disagreeable generators play an important role in science too, but it's not my go-to image of a generator in that space.

[Wild speculation] It's plausible to me that this varies by field, based on the degree to which that field tends to strike out into new frontiers of knowledge vs generate new theories for things that are already well-studied. In the latter case, in order for new ideas to be useful, the previous work on the topic needs to be wrong in some way – and if the people who did the previous work are still around they'll probably want to fight you. So if you want to propose really new ideas in those sorts of fields you'll need to get into fights – and so generators in these fields will be disproportionately disagreeable. Whereas if everyone agrees that there are oodles of things in the field that are criminally understudied, you can potentially get quite a long way as a generator before you need to start knocking down other people's work.

Obviously if this theory I just made up has any validity, it will be more of a spectrum than a binary. But this sort of dynamic might be at play here.

Dr. Greger from NutritionFacts.org also seems like an agreeable generator. Actually he may be disagreeable in that he's not shy about pointing out flaws in studies and others' conceptions, but he does it in an enthusiastic, silly and not particularly abrasive way.

It's interesting that some people may still disagree often but not be doing it in a disagreeable manner.

Albert Einstein also comes to mind as an agreeable generator. I haven't read his biography or anything, but based on the collage of stories I've heard about him, he never seemed like a very disagreeable person but obviously generated important new ideas.

Disclaimer: I have disagreeable tendencies, working on it but biased. I think you're getting at something useful, even if most people are somewhere in the middle. I think we should care most about the outliers on both sides because they could be extremely powerful when working together.

I want to add some **speculations** on these roles in the context of the level at which we're trying to achieve something: individual or collective.

When no single agent can understand reality well enough to be a good principal, it seems most beneficial for the collective to consist of modestly polarized agents (this seems true from most of the literature on group decision-making and policy processes, e.g. “Adaptive Rationality, Garbage Cans, and the Policy Process”).

This means that the EA network should want people who are confident enough in their own world views to explore them properly, who are happy to generate new ideas through epistemic trespassing, and to explore outside of the Overton window etc. Unless your social environment productively reframes what is currently perceived as "failure", overconfidence seems basically required to keep going as a disagreeable.

By nature, overconfidence gets punished in communities that value calibration and clear metrics of success. Disagreeables become poisonous as they feel misunderstood, and good assessors become increasingly conservative. The successful ones of the two characters build up different communities in which they are high status, and these communities extremize one another.

To succeed altogether, we need to walk the very fine line between productive epistemic trespassing and conserving what we have.

Disagreeables can quickly lose status with assessors because they seem insufficiently epistemically humble or outright nuts. Making your case against a local consensus costs you points. Not being well calibrated on what reality looks like costs you points.

If we are in a sub-optimal reality, however, effort needs to be put into defying the odds and changing reality. To have the chutzpah to change a system, it helps to ignore parts of reality at times. It helps to believe that you can have sufficient power to change it. If you're convinced enough of those beliefs, they often confer power on you in and of themselves.

Incrementally assessing baseline and then betting on the most plausible outcomes also deepens the tracks we find ourselves on. It is the safe thing to do and stabilizes society. Stability is needed if you want to make sure coordination happens. Thus, assessors rightly gain status for predicting correctly. Yet, they also reinforce existing narratives and create consensus about what the future could be like.

Consensus about the median outcome can make it harder to break out of existing dynamics because the barrier to coordinating such a break-out is even higher when everyone knows the expected outcome (e.g. odds of success of major change are low).

In a world where ground truth doesn't matter much, the power of disagreeables is to create a mob that isn't anchored in reality but that achieves the coordination to break out of local realities.

Unfortunately, for those of us who lack the capabilities to achieve such aims alone - changing not just our local social reality but the human condition - creating a cult just isn't helpful. None of us has sufficient data or compute to do it alone.

To achieve our mission, we will need constant error correction. Plus, the universe is so large that information won't always travel fast enough, even if there were a sufficiently swift processor. So we need to compute decentrally and somehow still coordinate.

It seems hard for single brains to be both explorers and stabilizers simultaneously, however. So as a collective, we need to appropriately value both and insure one another. Maybe we can help each other switch roles to make it easier to understand both. Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.

As of right now, only very high status or privileged people really say what they think and most others defer to the authorities to ensure their social survival. At an individual level, that's the right thing to do. But as a collective, we would all benefit if we enabled more value-aligned people to explore, fail and yet survive comfortably enough to be able to feed their learnings back into the collective.

This is of course not just a norms question, but also a question of infrastructure and psychology.

Thanks for the comment (this could be its own post). This is a lot to get through, so I'll comment on some aspects.

I have disagreeable tendencies, working on it but biased

I have some too! There are times when I'm fairly sure my intuitions lean overconfident in a research project (due to selection effects, at least), but it doesn't seem worth debiasing, because I'm going to be doing it for a while no matter what, and I'm not writing about its prioritization. I don't feel like I'm a great example of either a disagreeable or an assessor, but I can lean one way or the other in different situations.

Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.

I would definitely advocate for the appreciation of both disagreeables and assessors. I agree it's easy for assessors to team up against disagreeables (for example, when a company gets full of MBAs), particularly when they don't respect them.

Some Venture Capitalists might be examples of assessors who appreciate and have learned to work with disagreeables. I'm sure they spend a lot of time thinking, "Person X seems slightly insane, but no one else is crazy enough to make a startup in this space, and the downside for us is limited."

As of right now, only very high status or privileged people really say what they think and most others defer to the authorities to ensure their social survival.

This clearly seems bad to me. For what it's worth, I don't feel like I have to hide much of what I think, though maybe I'm somewhat high status. Sadly, I know that high-status people sometimes can say even less than low-status people, because they have more people paying attention and more to lose. I think we could really use improved epistemic setups somehow.

I was writing a LessWrong post on what I called "generators" and "evaluators" of ideas a few years ago, but never finished it. "Evaluators" and "assessors" seem similar. The main difference was that I didn't postulate that "generators" are disagreeable, unlike your "disagreeables".

This post by the cognitive scientist Hugo Mercier may also be of relevance: "Why assholes are more likely to be wrong".

I like that naming setup. I considered using the word "evaluators", but decided against it because I've personally been using "evaluator" to mean something a bit distinct. 

Great post! 

I love Taleb but he embarrasses me, so it's nice to find a dialectical role for him in my head.

I think I've met a few people who can do both modes, and they're all deeply impressive people. What in late Jobs do you see as sane assessment? He always seemed like a wild half-charlatan, half-prophet to me.

I think that Jobs, later on (after he re-joined Apple), was just a great manager. This meant he considered a whole lot of decisions and arguments, and generally made smart decisions upon reflection.

I think he (and other CEOs) can be wildly inaccurate in how they portray themselves to the public. However, I think they can have great decision-making on company-internal matters. It's a weird, advantageous inconsistency.

This book goes into some detail:
https://www.amazon.com/Becoming-Steve-Jobs-Evolution-Visionary-ebook/dp/B00N6PCWY8/ref=sr_1_3?keywords=steve+jobs&qid=1636131865&rnid=2941120011&s=books&sr=1-3

Thanks a lot for the post! This felt like one of the rare posts that clearly defines and articulates a thing that feels intuitively true, but which I've never properly thought about before.

Thanks for the post - I can see what you're getting at, but this doesn't feel like two clearly distinct categories to me. The first person I thought to try and apply this to had strong traits from both columns, for example. As a similar but more available example, where would you fit Bryan Caplan here? He's disagreeable without being angry, and is trying hard not to be wrong while happily telling others why they are. 

I'm not sure whether my intuition here is that these can both be strong/weak in the same person, that there's more of a spectrum, or that they're a set of characteristics that may or may not cluster the way you've described. I'm not really sure what shape you meant for this to take, or how well it applies in these intermediate cases.

I'd note that I expect these clusters (and I suspect they are clusters) to cover a minority of intellectuals. They stand out a fair bit to me, but they're unusual.

I agree Bryan Caplan leans disagreeable, but is less intense than others. I found The Case Against Education and some of his other work purposefully edgy, which is disagreeable-type-stuff, but at the same time, I found his interviews to often be more reasonable. 

I would definitely see the "disagreeable" and "assessor" archetypes as a spectrum, and also think one person can have the perks of both.

The distinction reminds me of the foxes vs. hedgehogs model from Superforecasting / Tetlock. Hedgehogs are "one great idea" thinkers who see everything in the light of the idea they're following, whereas foxes are more nuanced, taking in many viewpoints and trying to converge on the most accurate beliefs. I think he mentions in the book that while foxes tend to make much better forecasters, hedgehogs are not only more entertaining but also better at coming up with good questions to forecast in the first place.

An entirely different thought: The Laws of Human Nature by Robert Greene was the first Audible book I returned without finishing. It was packed with endless "human archetypes" described in great detail, making some rather bold claims about what "this type" will do in some given situation. You already mention in the footnotes that people who dislike e.g. personality profiling tools might not like this post. It did indeed somewhat remind me of that book, but maybe your "assessor" way of describing the model, as opposed to Greene's very overconfident-seeming way of writing, made this seem much more reasonable. There seems to be a fine line between actually useful models of this kind which have some predictive power (or at least allow thoughts to be a bit tidier), and those that are merely peculiarly entertaining, like Myers-Briggs. And I find it hard to tell from the outside on which side of that line any given model falls.

There seems to be a fine line between actually useful models of this kind which have some predictive power (or at least allow thoughts to be a bit tidier), and those that are merely peculiarly entertaining, like Myers-Briggs. And I find it hard to tell from the outside on which side of that line any given model falls. 

I have mixed feelings here. I think I'm more sympathetic than most people to Myers-Briggs when it's used correctly. There definitely seems to be some signal that it captures (some professions are heavily biased toward a narrow part of the spectrum). It doesn't seem all that different from categorizing philosophy as "continental" vs. "analytical". It's definitely not the best categorization: there are some flawed assumptions baked into it (most famously, either/or types as opposed to a spectrum), the org that owns it seems pretty weird, and lots of people make overconfident statements around it. But I think it can serve a role when used correctly.

Anyway, I imagine what we'd really want is a "Big 5 of Intellectuals" or similar. For that, it would be great for someone to eventually do some sort of cluster analysis.
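
For anyone tempted to actually run that analysis: here's a minimal sketch, purely illustrative and not from the post, of what it might look like in Python with scikit-learn. The trait columns and ratings below are invented placeholders; a real version would need actual per-person scores (e.g. Big 5 agreeableness, forecasting calibration, some rating of idea generation).

```python
# Hypothetical sketch: do rated intellectuals fall into natural clusters?
# All ratings below are random placeholders, so the output is meaningless
# on its own; the point is only the shape of the analysis.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# 40 people x 3 invented traits: agreeableness, calibration, idea generation.
ratings = rng.uniform(0, 100, size=(40, 3))

# Standardize so no single trait dominates the distance metric.
X = StandardScaler().fit_transform(ratings)

# If k=2 scored clearly better than its neighbors, that would be weak
# evidence for a real disagreeables/assessors split in the data.
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")
```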
 

I don't necessarily recommend that the disagreeables/assessors terminology take off; I'd prefer for it to be used in discussion that finds something better.

Scott Garrabrant has discussed this (or some very similar distinction) in some LessWrong comments. There's also been a lot of discussion about babble and prune, which is basically the same distinction, except happening inside a single mind instead of across multiple minds.

Good find; I hadn't seen that discussion before.

For those curious: Scott makes the point that it's good to separate "idea generation" from "vetted ideas that aren't wrong", and that it's valuable to have spaces where people can suggest ideas without needing them to be right. I agree a lot with this.

I have this model where in a healthy society, there can be contexts where people generate all sorts of false beliefs, but also sometimes generate gold (e.g. new ontologies that can vastly improve the collective map). If this context is generating a sufficient supply of gold, you DO NOT go in and punish their false beliefs. Instead, you quarantine them. You put up a bunch of signs that point to them and say e.g. “80% boring true beliefs 19% crap 1% gold,” then you have your rigorous pockets watch them, and try to learn how to efficiently distinguish between the gold and the crap, and maybe see if they can generate the gold without the crap. However sometimes they will fail and will just have to keep digging through the crap to find the gold.

Great post. Reminds me of Eric Weinstein on excellence vs. genius: https://youtu.be/bsgWSPWX-6A?t=553
