
This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what one finds the most persuasive view on an issue, hewing instead to an idealized consensus of experts. I start by better pinning down exactly what is meant by ‘epistemic modesty’, go on to offer a variety of reasons that motivate it, and reply to some common objections. Along the way, I show common traps people being inappropriately modest fall into. I conclude that modesty is a superior epistemic strategy, and ought to be more widely used - particularly in the EA/rationalist communities.

[gdoc]

 

Provocation

I argue for this:

In virtually all cases, the credence one holds for any given belief should be dominated by the balance of credences held by one’s epistemic peers and superiors. One’s own convictions should weigh no more heavily in the balance than those of a single other epistemic peer.

 

Introductions and clarifications

A favourable motivating case

Suppose your mother thinks she can make some easy money day trading blue-chip stocks, and plans to kick off tomorrow shorting Google on the stock market, as she’s sure it’s headed for a crash. You might want to dissuade her in a variety of ways.

You might appeal to an outside view:

Mum, when you make this short you’re going to be betting against some hedge fund, quant, or whatever else. They have loads of advantages: relevant background, better information, lots of data and computers, and so on. Do you really think you’re odds on to win this bet?

Or appeal to some reference class:

Mum, I’m pretty sure the research says that people trying to day-trade stocks tend not to make much money at all. Although you might hear some big successes on the internet, you don’t hear about everyone else who went bust. So why should you think you are likely to be one of these remarkable successes?

Or just cite disagreement:

Look Mum: Dad, sister, the grandparents and I all think this is a really bad idea. Please don’t do it!

None of these directly challenge the object level claim (i.e. “Google isn’t overvalued, because X”). Instead, these considerations attempt to situate the cogniser within some population, and from characteristics of this population infer the likelihood of this cogniser getting things right.

Call the practice of using these sorts of considerations epistemic modesty. We can distinguish two components:

  1. ‘In theory’ modesty: That considerations of this type should in principle influence our credences.
  2. ‘In practice’ modesty: That one should in fact use these considerations when forming credences.

 

Weaker and stronger forms of modesty

Some degree of modesty is (almost) inarguable. If one leaves for work on Tuesday and finds all one’s neighbours have left their bins out, that is at least reason to doubt one’s belief that bin day is Thursday, and perhaps sufficient to believe instead that bins go out on Tuesday (and follow suit with one’s own). If it appears that, say, the coagulation cascade ‘couldn’t evolve’, the near unanimity of assent for evolution among biologists at least counts against this, if it is not a decisive reason to believe, despite one’s impressions, that it could. Nick Beckstead suggests something like ‘elite common sense’ forms a prior which one should be hesitant to diverge from without good reason.

I argue for something much stronger (c.f. the Provocation above): in theory, one’s credence in some proposition P should be almost wholly informed by modest considerations. That is, ceteris paribus, the fact that it appears to one that P should weigh no more heavily in one’s determination regarding P than knowing that it appears to someone else that P. Not only is this the case in theory, but it is also the case in practice. One’s all-things-considered judgement on P should be just that implied by an idealized expert consensus on P, no matter one’s own convictions regarding P.

 

Motivations for more modesty

Why believe ‘strong form’ epistemic modesty? I first show families of cases where ‘strong modesty’ leads to predictably better performance, and show these results generalise widely.[1] 

The symmetry case    

Suppose Adam and Beatrice are perfect epistemic peers, equal in all respects which could bear on them forming more or less accurate beliefs. They disagree on a particular proposition P (say “This tree is an Oak tree”). They argue about this at length, such that all considerations Adam takes to favour “This is an Oak tree” are known to Beatrice, and vice versa.[2] After this, they still disagree: Adam has a credence of 0.8, Beatrice 0.4.

Suppose an outside party (call him Oliver) is asked for his credence of P, given Adam and Beatrice’s credences and their epistemic peer-hood to one another, but bereft of any object-level knowledge. He should split the difference between Adam and Beatrice - 0.6: Oliver doesn’t have any reason to favour Adam over Beatrice’s credence for P as they are epistemic peers, and so splitting the difference gives the least expected error.[3] If he was faced with a large class of similar situations (maybe Adam and Beatrice get into the same argument for Tree 2 to Tree 10,000) Oliver would find that difference splitting has lower error than biasing to either Adam or Beatrice’s credence.
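To see the arithmetic concretely, here is a minimal simulation sketch (not part of the original argument; the noise model and all numbers are illustrative assumptions): two unbiased peers report the truth plus independent error, and difference-splitting beats siding with either peer.

```python
import random

random.seed(0)
trials = 100_000
err_adam = err_beatrice = err_split = 0.0

for _ in range(trials):
    truth = random.uniform(0.2, 0.8)          # the 'correct' credence for this tree (assumed)
    adam = truth + random.gauss(0, 0.15)      # each peer reports the truth plus independent noise
    beatrice = truth + random.gauss(0, 0.15)
    err_adam += abs(adam - truth)
    err_beatrice += abs(beatrice - truth)
    err_split += abs((adam + beatrice) / 2 - truth)

print(f"Always side with Adam:     mean error {err_adam / trials:.3f}")
print(f"Always side with Beatrice: mean error {err_beatrice / trials:.3f}")
print(f"Split the difference:      mean error {err_split / trials:.3f}")  # ~30% lower
```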

Adam and Beatrice should do likewise. They also know they are epistemic peers, and so they should also know that for whatever considerations explain their difference (perhaps Adam is really persuaded by the leaf shapes, but Beatrice isn’t) Adam’s take and Beatrice’s take are no more likely to be right than one another. So Adam should go (and Beatrice vice-versa), “I don’t understand why Beatrice isn’t persuaded by the leaf shapes, but she expresses the same about why I find it so convincing. Given she is my epistemic peer, ‘She’s not getting it’, and, ‘I’m not getting it’ are equally likely. So we should meet in the middle”.

The underlying intuition is one of symmetry. Adam and Beatrice have the same information. The correct credence regarding P given this information should not depend on which brain - Adam’s or Beatrice’s - happens to be doing the reasoning. Given this, they should hold the same credence[4], and as Adam is as likely as Beatrice to be the one further from the truth, the shared credence should be in the middle.

Compressed sensing of (and not double-counting) the object level

It seems odd that both Adam and Beatrice do better discarding their object level considerations regarding P. If we adjust the scenario above so they cannot discuss with one another but are merely informed of each other’s credences (and that they are peers regarding P), the right strategy remains to meet in the middle.[5] Yet how come Adam and Beatrice are doing better if they ignore relevant information? Both Adam and Beatrice have their ‘inside view’ evidence (i.e. what they take to bear on the credence of P) and the ‘outside view’ evidence (what each other think about P). Why not use a hybrid strategy which uses both?

Yet to whatever extent Adam or Beatrice’s hybrid approach leads them to diverge from equal weight, they will do worse. Oliver can use the ‘meet in the middle’ strategy to get expectedly better accuracy than either of them manages by biasing towards their own inside view determination. In betting terms, Oliver can arbitrage any difference in credence between Adam and Beatrice.

We can explain why: the credences Adam and Beatrice offer can be thought of as very compressed summaries of the considerations they take to bear upon P. Whatever ‘inside view’ considerations Adam took to bear upon P are already ‘priced in’ to the credence he reports (ditto Beatrice). Modesty is not ignoring this evidence, but weighing it appropriately: if Adam then tries to adjust the outside view determination by his own take on the balance of evidence, he double counts his inside view: once in itself, and once more by including his credence as weighing equally to Beatrice’s in giving the outside view.

One’s take on the set of considerations regarding P may err, either by bias,[6] ignorance, or ‘innocent’ mistake. Splitting the difference between one’s own and one’s peer’s very high level summaries of these captures the greater part of the benefit of hashing out where these summaries differ.[7] Modesty correctly diagnoses that one’s high level summary is no more likely to be accurate than one’s peer’s, and so holds both in equal regard, even in cases where the components of one’s own summary are known better.

Repeated measures, brains as credence sensors, and the wisdom of crowds

Modesty outperforms non-modesty in the n=2 case. The degree of outperformance grows (albeit concavely) as n increases.

Scientific fields often have to deal with unreliable measurement. They commonly mitigate this by taking repeat measurements. If you have a crummy thermometer, repeating readings several times improves accuracy over taking just the one. Human brains also try to measure things, and they are also often unreliable. It is commonly observed that the average of their measurements nonetheless tends to lie closer to the mark than the vast majority of individual measurements. Consider the commonplace ‘guess how many skittles are in this jar’ or similar estimation games: the usual observation is that the average of all the guesses is better than all (or almost all) the individual guesses.

A toy model makes this unsurprising. The individual guesses will form some distribution centered on the true value. Thus the expected error of a given individual guess is the standard deviation of this distribution. The expected error of the average of all guesses is given by the standard error, which is the standard deviation divided by root(number of guesses):[8] with 10 individuals, the error is about 3 times smaller than the expected error of each individual guess; with 100, 10 times smaller; and so on.
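A sketch of this toy model (the jar size, guess spread, and normal noise are assumptions for illustration): the empirically observed error of the average tracks the spread of individual guesses divided by the square root of the number of guesses.

```python
import random
import statistics

random.seed(0)
true_count = 500     # skittles actually in the jar (assumed)
sigma = 100          # spread of individual guesses (assumed)

for n in (1, 10, 100):
    # deviation of the average of n guesses from the truth, over many repeated games
    deviations = [
        statistics.mean(random.gauss(true_count, sigma) for _ in range(n)) - true_count
        for _ in range(2000)
    ]
    observed = statistics.pstdev(deviations)   # empirical standard error of the average
    print(f"n={n:3d}  observed error of the average ≈ {observed:5.1f}  "
          f"(sigma / sqrt(n) = {sigma / n ** 0.5:5.1f})")
```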

Analogously, human brains also try to measure credences or degrees of belief, and are similarly imperfect in doing so as when they’re trying to estimate ‘the number of X’. One may expect a similar ‘wisdom of crowds’ effect to operate here too. In the same way Adam and Beatrice would do better in the situation above if they took the average (even if it went against their view of the balance of reasons by their lights), if Adam-to-Zabaleta (all epistemic peers) investigated the same P, they’d expect to do better taking the average of their group versus steadfastly holding to the credences they each arrived at ‘by their lights’. Whatever inaccuracies may throw off their individual estimates of P somewhat cancel out.

Deferring to better brains

The arguments above apply to cases where one is dealing with epistemic peers. If not, one needs to adjust by some measure of ‘epistemic virtue’. In cases where Adam is an epistemic superior to Beatrice, they should meet closer to Adam’s view, commensurate with the degree of epistemic superiority (and vice versa).
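One simple way to cash out ‘meeting closer to the superior’ is a weighted average, with weights tracking judged reliability. The sketch below is illustrative only; the 2:1 weighting is an assumption, not anything argued for here.

```python
def weighted_credence(credences, weights):
    """Combine reported credences, weighting each by its holder's judged reliability."""
    return sum(c * w for c, w in zip(credences, weights)) / sum(weights)

# Suppose Adam (0.8) is judged twice as reliable as Beatrice (0.4) on this topic.
print(f"{weighted_credence([0.8, 0.4], [2.0, 1.0]):.2f}")  # 0.67 - closer to Adam's view
```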

Although reasons for being an epistemic superior could be ‘they’re a superforecaster’ or ‘they’re smarter than I am’, perhaps the most common source of epistemic superiors lies under the heading of ‘subject matter expert’. On topics from human nutrition, to voting rules, to the impact of the minimum wage, to the nature of consciousness, to basically anything that isn’t trivial, one can usually find a fairly large group of very smart people who have spent many years studying that topic, and who make their views about it public (sometimes not even behind a paywall). That they at least have a much greater body of relevant information and have spent longer thinking about it gives them a large advantage compared to you.

In such cases, the analogy might be that your brain is a sundial, whilst theirs is an atomic clock. So if you have the option of taking their readings rather than yours, you should do so. The evidence a reading of a sundial provides about the time conditional on the atomic clock reading is effectively zero. ‘Splitting the difference’ in analogous epistemic cases should result in both you and your epistemic superior agreeing that they are right and you are wrong.

Inference to the ideal epistemic observer

We can summarise these motivations by analogy to ideal observers (used elsewhere in perception and ethical theory). We can gesture at an ideal (epistemic) observer: just that which is able to form the most accurate credence for P given whatever prior; they have vast intelligence, full knowledge of all matters that bear upon P, perfect judgement, and in essence all epistemic virtues in excelsis.

Now consider this helpful fiction:

The epistemic fall: Imagine a population solely comprised of ideal observers, who all share the same (correct) view on P. Overnight their epistemic virtues are assailed: they lose some of their reasoning capacity; they pick up particular biases that could throw them one way or another; they lose information, and so on, and each one to varying degrees.

They wake up to find they now have all sorts of different credences about P, and none of them can remember what credence they all held yesterday. What should they do?

It seems our fallen ideal observers can begin to piece together what their original credence about P was by finding out more about one another’s credences and remaining epistemic virtue, and so backpropagate their return to epistemic apotheosis. If they find they’re all similarly virtuous and are evenly scattered, their best guess is that the ideal observer was in the middle of the distribution (c.f. the wisdom of crowds). If they see a trend that those with greater residual virtue tend to hold a higher credence in P, they should attempt to extrapolate this trend to locate the ideal-agent origin from which they were differentially blown off course. If they see one group demonstrates a bias that others do not, they can correct the position of this group before trying these procedures. If they find the more virtuous agents are more scattered regarding P (or that they segregate into widely dispersed aggregations), this should make them very unsure about where the ideal observer initially was. And so on.
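A toy sketch of the ‘extrapolate the trend’ move (the ideal credence, the virtue-linked bias, and the noise model below are all invented for illustration): regress reported credences on residual virtue, and read off the fitted value at full virtue as the guess for the ideal observer’s original view.

```python
import numpy as np

rng = np.random.default_rng(0)
ideal_credence = 0.8                     # the pre-fall consensus, hidden from the fallen agents
n_agents = 200

virtue = rng.uniform(0.0, 1.0, n_agents)                   # residual epistemic virtue, 1 = unimpaired
bias = -0.3 * (1 - virtue)                                 # a shared bias that shrinks as virtue rises
scatter = rng.normal(0.0, 0.15, n_agents) * (1 - virtue)   # idiosyncratic error, also shrinks
credences = np.clip(ideal_credence + bias + scatter, 0.0, 1.0)

slope, intercept = np.polyfit(virtue, credences, 1)        # linear trend of credence on virtue
print(f"Extrapolated credence at full virtue: {slope + intercept:.2f} "
      f"(hidden ideal credence: {ideal_credence})")
print(f"Naive average over all agents:        {credences.mean():.2f}")
```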

Such a model clarifies the benefit of modesty. Although we didn’t have some grand epistemic fall, it is clear we all fall manifestly short of an ideal observer. Yet we all fall short in different respects, and in different degrees. One should want to believe whatever one would believe if one was an ideal observer, shorn of one’s manifest epistemic vices. Purely immodest views must say their best guess is the ideal observer would think the same as they do, and hope that all the vicissitudes of their epistemic vice happen to cancel out. By accounting for the distribution of cognisers, modesty allows a much better forecast, and so a much more accurate belief. And the best such forecast is the strong form of modesty, where one’s particular datapoint, in and of itself, should not be counted higher than any other.

 

Excursus: Against common justifications for immodesty

So much for strong modesty in theory. How does it perform in practice?

One rough heuristic for strong modesty is this: for any question, find the plausible expert class to answer that question (e.g. if P is whether to raise the minimum wage, talk to economists). If this class converges on a particular answer, believe that answer too. If they do not agree, have little confidence in any answer. Do this no matter what one’s own impression of the object level considerations recommends (by one’s lights) as the answer.

Such a model captures all the common sense cases of modesty - trust the results in typical textbooks, defer to consensus in cases like when to put the bins out, and so on. I now show it is also better in many cases where people think it is better to be immodest.

Being ‘well informed’ (or even true expertise) is not enough

A common refrain is that one is entitled to ‘join issue’ with the experts due to one having made some non-trivial effort at improving one's knowledge of the subject. “Sure, I accept experts widely disagree on macro-economics, but I’m confident in neo-Keynesianism after many months of careful study and reflection.”

This doesn’t fly, by the symmetry argument above. Our outsider observes widespread disagreement in the area of macroeconomics, and that many experts who spend years on the subject nonetheless greatly disagree. Although it is possible the ideal observer would have been in one or another of the ‘camps’ (the clustering implies intermediate positions are less plausible), the outsider cannot adjudicate which one if we grant that the economists in each camp appear to have similar levels of epistemic virtue. The balance of this outside view changes imperceptibly if another person - who, despite a few months of study, remains nowhere near peerhood with (let alone superiority to) these divided experts - happens to side with one camp or another. By symmetry, one’s own view of the balance of reason should remain unchanged if this ‘another person’ happens to be you.

The same applies even if you are a bona fide expert. Unless the distribution of expertise is such that there is a lone ‘world authority’ above all others (and you’re them) your fellow experts form your epistemic peer group. Taking the outside view is still the better bet: the consensus of experts tends to be right more often than dissenting experts, and so some difference splitting (weighed more to the consensus owing to their greater numbers) is the right answer.[9]

Common knowledge ‘silver bullet arguments’

Suppose one takes an introductory class in economics. From this, one sees there must be a ‘knock-down’ argument against a minimum wage:

Well, suppose you’re an employee whose true value on the free market is less than the minimum wage. Under the minimum wage, the firm may well decide not to charitably employ you above your market value, and just fire you instead. You’re worse off, as you’re on the dole, and the firm’s worse off, as it has to meet its labour demand another way. Everyone’s lost! So much for the minimum wage!

Yet one quickly discovers economists seem to be deeply divided over the merits of the minimum wage (as they are about most other things). See for example this poll of 38 economic experts in the US, who were pretty evenly divided on whether the minimum wage would ‘hit’ employment for low-skill workers, yet leant in favour of the minimum wage ‘all things considered’.

It seems risible to suppose these economists don’t know their economics 101. What seems much more likely is that they know other things that you don’t which make the minimum wage more reasonable than your jejune understanding of the subject suggests. One need not belabour which side the outside view strongly prefers.

Yet it is depressingly common for people to confidently hold that view X or Y is decisively refuted by some point or another, notwithstanding the fact this point is well known to the group of experts that nonetheless hold X or Y. Of course in some cases one really has touched on the decisive point the experts have failed to appreciate. More often, one is proclaiming that one is on the wrong side of the Dunning-Kruger effect.

Debunking the expert class (but not you)

To the litany of cases where (apparent) experts screwed up, we can add verses without end. So we might be inclined to debunk a particular ‘expert consensus’ due to some bias or irrationality we can identify. Thus, having seen there are no ‘real’ experts to help us, we must look at the object level case.

The key question is this: “How are you better?” And it is here that debunking attempts often flounder:

An undercutting defeater for one aspect of epistemic superiority for the expert class is not good enough. Maybe one can show the expert class has a poor predictive track record in their field. Unless one has a better track record in that field oneself, this merely puts one on a par with respect to this desideratum of epistemic virtue. They likely have others (e.g. more relevant object-level knowledge) that should still give them an edge, albeit attenuated.

An undercutting defeater that seems to apply equally well to oneself as the expert class also isn’t enough. Suppose (say) economics is riven by ideological bias: why are you less susceptible to these biases? The same ideological biases that might plague professional economists may also plague amateur economists, but the former retain other advantages.

Even if a proposed debunking is ‘selectively toxic’ to the experts versus you, they still might be your epistemic superiors all things considered. Both Big Pharma and Professional Philosophy may be misaligned, but perhaps not so much as to be orthogonal or antiparallel to the truth: in both cases they still expectedly benefit by finding drugs that work or making good arguments, respectively. They may still fare better overall than “intelligent layperson who’s read extensively”, even if the latter is not subject to ‘publish or perish’ or similar.

Even if a proposed debunking shows one as decisively superior to that expert class, there may be another expert class which remains epistemically superior to you. Maybe you can persuasively show professional philosophers are so compromised on consciousness that they should not be deferred to about it. Then the real expert class may simply switch to something like ‘intelligent people outside the academy who think a lot about the topic’. If it’s the case that this group of people do not share your confidence in your view, it seems outsiders should still reject it - as should you.

It need not be said that the track record for these debunking defeaters is poor. Most crackpots have a persecution narrative to explain why the mainstream doesn’t recognise or understand them, and some of the most mordant criticisms of the medical establishment arise from those touting complementary medicine. Thus ‘explaining away’ expert disagreement may not put one in a more propitious reference class than one started from. One should be particularly suspicious of debunking(s) sufficiently general that the person holding the unorthodox view has no epistemic peers - they are akin to Moses, descending from Mt. Sinai, bringing down God-breathed truth for the rest of us.[10]

Private evidence and pet arguments

Suppose one thinks one is in receipt of a powerful piece of private evidence: maybe you’ve got new data or a new insight. So even though the experts are generally in the right, in this particular case they are wrong because they are unaware of this new consideration.

New knowledge will not spread instantaneously, and that someone can be ‘ahead of the curve’ comes as no surprise. Yet many people who take themselves to have private evidence are wrong: maybe experts know about it but don’t bother to discuss it because it is so weak, or it is already in the literature (but you haven’t seen it), or it isn’t actually relevant to the topic, or whatever else. Most mavericks who take themselves to have new evidence that overturns consensus are mistaken.

The natural risk is people tend to be too partial to their pet arguments or pet data, and so give them undue weight, and so one’s ‘insider’ perceptions should perhaps be attenuated by this fact. I suspect most are overconfident here.[11] If this private evidence really is powerful, one should expect it to be persuasive to members of this expert class once they become aware of it. So it seems the credence one should have is the (appropriately discounted) forecast of what the expert class would think once you provide them this evidence.

The natural test of the power of this private evidence is to make it public. If one observes experts (or just epistemic peers) shift to your view, you were right about how powerful this evidence was. If instead one sees a much more modest change in opinion, this should lead one to downgrade one’s estimate of how powerful this evidence really is (and perhaps provide calibration data for next time). Holding instead that this really is decisive evidence leads one to the problematic ‘common knowledge silver bullet’ case discussed above. Inferring from this that experts just can’t understand your reasoning, or are biased against outsiders, or whatever else, produces a suspiciously self-serving debunking argument, also discussed above.

 

Objections

So much for the case in favour. What about the case against? I divide objections into those ‘in theory’, and those ‘in practice’.

In theory

There’s no pure ‘outside view’[12]

It is not the case you can bootstrap an outside view from nothing. One needs to at least start with some considerations as to what makes one an epistemic peer or superior, and probably some minimal background knowledge of ‘aboutness’ to place topics under one or another expert class. 

In the same way large amounts of our empirical information are now derived by instrument rather than direct application of our senses (but were ultimately germinated from direct sensory experience), large amounts of our epistemic information can be derived by deferring to better (or more) brains rather than using our own, even if this relies on some initial seed epistemology we have to realise for ourselves. This ‘germinal set of claims’ can still be modestly revised later.

Immodestly modest?

One line of attack from the social epistemology literature is that strong forms of modesty are self-defeating. If one is modest, one should presumably be modest about ‘What is the right way to form beliefs if epistemic peers disagree with you?’ Yet one finds that very few people endorse the sort of epistemic modesty advocated above. When one looks among potential expert classes, such as more intelligent friends of mine (i.e. friends of mine), epistemologists, and so on, conciliatory views like these command only a minority. So the epistemically modest should vanish as they defer to the more steadfast consensus.

If so, so much the worse for modesty. I offer a couple of incomplete defences:

One reply is to haggle over the topic of disagreement. In my limited reading of ‘equal weight/conciliatory views and their detractors’, I take the detractors to be suggesting something like “one is ‘within one’s rights’ to be steadfast”, rather than something like “you’re more accurate if you’re steadfast”. Maybe there are epistemic virtues which aren’t the same as being more accurate. Yet there may be less disagreement on ‘conditional on an accuracy-first view, is modesty the right approach?’

This only gets one so far (after all, shouldn’t we be modest about whether to care only about accuracy?). A more general defence is this: the ‘what if you apply the theory to itself?’ problem looks pretty pervasive across theories.[13] Accounts of moral uncertainty that in whatever sense involve weighing normative theories by their plausibility tend to run into problems if the same accounts are applied ‘one level up’ to meta-moral uncertainty. Bayesian accounts of epistemology seem to go haywire if we think one should have a credence in Bayesian epistemology itself, especially if one assigns any non-zero credence to a theory which entails object level credences have undefined values.

Closer to home, milder versions of conciliation (e.g. “Pay some attention to peer disagreement, but it’s not the only factor”) share a similarly troublesome recursive loop (“Well, I see most other people are steadfast, so I should update to be a bit less conciliatory, but now I have to apply my modified view to this disagreement again”), and neat convergence is not guaranteed. The theories which avoid this problem (e.g. ‘Wholly steadfast, so peer disagreement should be ignored’) tend to be the least plausible on the object level (e.g. that if you believe bin day is Thursday, the fact all your neighbours have their bins out on Tuesday is not even a reason to reconsider your belief).

A solution to these types of problems remains elusive. Yet modesty finds itself in fairly good company. It may be the case that a good resolution to this type of issue would rule out the strong form of modesty advocated here, in favour of some intermediate view. Until then, I hope the (admittedly inelegant) “Be modest, save for meta-epistemic norms about modesty itself” is not too great a cost to weigh against the merits of the approach.

 

In practice

I take most of the action to surround whether modesty makes sense as a practical procedure in the real world, even granting its ‘in theory’ virtue. Given the strength of the modesty I advocate, the fact that we use something like it in some cases, and can identify that it can help in others, is not enough. It needs to be shown to be a better strategy than even slightly weaker forms, in circumstances deliberately selected to pose the greatest challenge to strong modesty.

Trivial (and less trivial) non-use cases

For some topics there’s no relevant epistemic peers or superiors to consider. This is commonly the case with pretty trivial beliefs (e.g. my desk is yellow).

Modesty also doesn’t help much for individual tastes, idiosyncrasies, or circumstances. If Adam works best listening to Bach and Beatrice to Beethoven, they probably won’t do better ‘meeting in the middle’ and both going half-and-half for each (or maybe picking a composer intermediate in history, like Mozart). Anyway, Adam is probably Beatrice’s significant epistemic superior on “What music does Adam work best listening to?”, and vice-versa. One can also be credulous of claims like “It turned out this diet really helped my back pain”: perhaps it’s placebo, or perhaps it is one of those cases where different things work for different people, and one expects in such cases individuals to have privileged access to what worked for them.[14] 

There will be cases where one really is plowing a lonely furrow where there aren’t any close epistemic peers or superiors. It’s possible I really am the world’s leading expert on “How many counter-factual DALYs does a doctor avert during their career?”, because no one else has really looked into this question. My current role involves investigating global catastrophic biological risks, which appears understudied to the point of being pre-paradigmatic.

These comprise a very small minority of the topics I have credences about. Yet even here modesty can help. One can use more distant bodies of experts: I am reassured that my autumnal estimate for the ‘DALY question’ coheres with expert consensus that medical practice had a minor role in improvements to human health, for example. Even if I don’t have any epistemic peers, I can simulate some by asking, “If there were lots of people as or more reasonable than me looking at this, would I expect them to agree with my take?” Given that the econometric-esque methods I deploy to answer the ‘DALY question’ could probably be done better by an expert, and in any case reasonable people are often sceptical of these methods in other areas, I am less confident of my findings than my ‘inside view’ suggests, which I take to be a welcome corrective to ‘pet argument’ biases.[15]

In theory, the world should be mad

Whether devoured by Moloch, burned by Ra, trapped by aberrant signalling equilibria, or whatever else, we can expect to predict when apparent expert classes (and apparent epistemic peers) are going to collectively go wrong. With this knowledge, we can know on which topics we should expect to outperform expertise ourselves. Rather than the usual scenario, where we find ourselves looking up (at experts) or around (at our peers), we find ourselves in many situations where those who are usually epistemic peers or superiors are below us - and above us, only sky.

We could distinguish two sorts of madness, a surprising absence of expertise and a surprising error of expertise:

The former is a gap in the epistemic market. Although an important topic should be combed over by a body of experts, for whatever reason it isn’t, and so it takes surprisingly little effort to climb to the summit of epistemic superiority. In such cases our summaries of expert classes as ranging over a broad area conceal that the degree of expertise is very patchy: public health experts generally know a great deal about the health impacts of smoking; they usually know much less about the health impacts of nicotine.

The latter is a stronger debunking argument. One appeals to some features of the world that generate expertise, and suggests these expertise-generating features are anti-correlated with the truth; thus one can adjudicate between warring expert camps (or just indict all so-called ‘experts’) based on this knowledge. One strong predictor of incompatibilism regarding free will among philosophers is believing in God. If we are confident these beliefs in God are irrational, then we can winnow the expert class by this consideration and side with the compatibilist camp much more strongly.

Yet, similar to the problems of debunking mentioned earlier, that there is a good story suggesting one of these things does not imply one will do better ‘striking out on one's own’. Even in ‘diseased’ fields where accuracy is poorly correlated with expert activity, it is hard to think of cases where these line up orthogonal or worse. Big pharma studies are infamous, but even if you’re in big pharma optimising for ‘can I get evidence to support my product’, your drug actually working does make this easier. Even in pre-replication crisis psychology, true results would be overrepresented versus false ones in the literature compared to some base rate across generated hypotheses.

The ‘residual’ expert class still often remains better. Although most public health experts know little about nicotine per se, there are some nearby health experts, perhaps scattered across our common-sense demarcation of fields, who do know about the impacts of nicotine. It may still take quite a lot of effort to reach parity or superiority with these. Even if we want to strike all theists from the ranks of free will philosophers, compatibilism does not rise close to unanimity, which cautions against extremely high confidence that it is the correct view.[16] So, I aver, the world is not that mad.

Empirically, the world is mad

One can offer a more direct demonstration of world-madness, and so refute modesty: outperformance.

A common reply is to point to a particular case where those being modest would have gotten it wrong. There are lots of cases where amateurs and mavericks were ridiculed by common sense or experts-at-the-time, only to be subsequently vindicated.

Another problem is the modest view introduces a lag - it seems one often needs to wait for the new information to take root among one’s epistemic peers before changing one’s view, whilst a cogniser just relying on the object level updates on correct arguments ‘at first sight’. It is often crucially important to be fast as well as right in both empirical and moral matters: it is extremely costly if a view makes one slower to recognise (among many other past moral catastrophes) the horror of slavery.

Yet modesty need not be infallible, merely an improvement. Citing cases where it goes poorly is (hopefully less than) half the story. Modesty does worse in cases where the maverick is right, yet better where the maverick is wrong: there are more cases of the latter than the former. Modesty does worse in being sluggish to respond to moral revolutions, yet better at avoiding being swept away by waves of mistaken sentiment: again, the latter seem more common than the former.[17]

Maybe one can follow a strategy of ‘picking the hits’ of when to carve out exceptions, and so have a superior track record. Yet, empirically, I don’t see it. When I look at people who are touted as particularly good at being ‘correct contrarians’, I see at best something like an ‘epistemic venture capitalist’ - their bold contrarian guesses are right more often than chance, but not right more often than not. They appear by my lights to be unable to judiciously ‘pick their battles’, staking out radical views on topics where there isn’t a good story as to why the experts would be getting this wrong (still less why they’re more likely to get it right). So although they do get big wins, the modal outcome of their contrarian take is a bust.[18]

Modesty should price in the views of better-than-chance contrarians into how it weighs consensus. Confidence in a consensus view should fall if a good contrarian takes aim at it, but not so much one now takes the contrarian view oneself. If one happens to be a particularly successful contrarian one should follow the same approach: “I get these right surprisingly often, but I’m still wrong more often than not, so it might be worth it to look into this further to see if I can strike gold, but until then I should bank on the consensus view.”

Expert groups are seldom in reflective equilibrium

Even if modesty works well in the ideal case of a clearly identified ‘expert class’, it can get a lot messier in reality:

  1. Suppose one is in the early 1940s and asks, “Are there going to be explosives many orders of magnitude more powerful than current explosives?” One can imagine if one consulted explosive experts (however we cash that out), their consensus would generally say ‘no’. If one was able to talk to the physicists working on the Manhattan project, they would say ‘yes’. Which one should an outside view believe?[19]
  2. Most people believe god exists (the so called ‘common consent argument for God’s existence’); if one looks at potential expert classes (e.g. philosophers, people who are more intelligent), most of them are Atheists. Yet if one looks at philosophers of religion (who spend a lot of time on arguments for or against God’s existence), most of them are Theists - but maybe there’s a gradient within them too. Which group, exactly, should be weighed most heavily?

So constructing the ideal ‘weighted consensus’ modesty recommends deferring to can become a pretty involved procedure. One must carefully divine whether a given topic lies closer to the magisterium of one or another putative expert class (e.g. maybe one should lean more to the physicists, as the question is really more ‘about physics’ than ‘about explosives’). One might have to carefully weigh up the relevant epistemic virtues of various expert classes that appear far from reflective equilibrium with one another (so perhaps one might use the likely selection effects in philosophy of religion to partly discount the apparent support it provides). One might have to delve into complicated issues of independence: although most people may believe god exists, unlike guesses of how many skittles are in the jar, they are not all forming this belief independently from one another.[20]

This exercise begins to look increasingly inside-view-esque. Trying to determine the right magisterium involves getting closer to object level considerations about the ‘aboutness’ of topics; trying to tease apart issues of independence and selection amounts to looking at belief forming practices, and veers close to object level justifications for the belief in question. At some point it becomes extraordinarily challenging to try and back-trace from all these factors to the likely position of the ideal observer: the degrees of freedom these considerations invite (and the challenge in estimating them reliably) make strong modesty go worse.

One should not give up too early, though: modesty can still work pretty well even in these tricky cases. One can ask whether there’s any communication between the classes, and if so any direction of travel (e.g. did some explosive experts end up talking to the physicists, and agreeing they were right? Vice-versa?). Even if they were completely isolated, one can ask whether a third group with access to both made a decision (e.g. the agreement of the U.S. and German governments with the implied view of the physicists). This is a lot more involved, but the expected ‘accuracy yield per unit time spent’ may still be greater than (for example) making a careful study of the relevant physics.

A broader modification would be ‘immodest only for the web of belief, but modest for the weights’: one uses an inside view to piece together the graph of considerations around P, but one still defers to consensus on the weights. This may avoid cases where (for example) strong modesty mistakes astronomers for the expert class on whether space travel is feasible (versus primordial rocket scientists): astronomers and rocket scientists agreed about the necessary acceleration, but astronomers were inexpert on the key question of whether that acceleration could be produced.[21]

What if one cannot even do that? Then modesty (rightly) offers a counsel of despair. If an area is so fractious there’s no agreement, with no way to see which of numerous disparate camps has better access to the truth of the matter; so suffused with bias that even those with apparent epistemic virtues (e.g. judgement, intelligence, subject-matter knowledge) cannot be seen to even tend towards the truth; what hope does one have to do better than they? In attempting to thread the needle through these hazards towards the right judgement, one will almost certainly run aground somewhere or somehow, like all one's epistemic peers or superiors who made the attempt before. Perhaps reality obliges us to undertake these doxastic suicide missions from time to time. If modesty cannot help us, it can at least provide the solace of a pre-emptive funeral, rather than (as immodest views would) cheer us on to our almost certain demise.

Somewhat satisfying Shulman  

Carl Shulman encourages me to offer my credences and rationale in cases he takes to be particularly difficult for my view, and suggests in these cases I either arrive at absurd credences or I am covertly abandoning the strong modesty approach. I offer these below for readers to decide - with the rider that if these are in fact absurd, ‘I’m an idiot’ is a competing explanation to ‘strong modesty is a bad epistemic practice’ (and that, assuredly, whatever one’s credence on the latter, one’s credence in the former should be far greater).

 

| Proposition (roughly) | Credence (ish) | (Modesty-based) rationale, in sketch |
| --- | --- | --- |
| Theism | 0.1[22] | Mostly discount common consent (non-independence) and PoR (selection). Major hits from more intelligent/better informed people tending to be atheist, but struggle to extrapolate this closer to 0 given existence proofs of very epistemically virtuous religious people. |
| Libertarian free will | 0.1 | Commands a non-trivial minority across virtuous epistemic classes (philosophers, intelligent people, etc.), only somewhat degraded by selection worries. |
| Jesus rose from the dead | 0.005 | Christianity in particular is a very small fraction of the possibility space of Theism. Support from its widespread acceptance is mostly (but not wholly) screened off by non-independence effects. Relevant (but distant) expert classes in history etc. weigh adversely. |
| There has been a case of cold fusion | 10^-5 | Strong pan-scientific consensus against; the cold fusion community looks renegade and much less epistemically virtuous. Base rate of such claims conditional on no real effect gives a very adverse reference class. |
| ESP | 10^-6 | Very strong (but not complete) consensus against among elite common sense, scientists, etc.; bad predictive track records for ESP researchers; distant consensuses highly adverse. Some greatly attenuated boost from survey data/small fraction of reasonable believers. |

 

Practical challenges to modesty

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revises her credence to 0.6, but Adam doesn’t. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam’s credence. With larger cases one can imagine people wrongly deferring to a consensus around a view they should think is implausible, and in general there is the nigh-intractable challenge of trying to infer cases of double counting from patterns of ‘all things considered’ credences.
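A tiny sketch with the numbers from the example above: averaging Beatrice’s already-updated report alongside Adam’s own gives Adam’s inside view one-and-a-half helpings of weight.

```python
adam, beatrice, charlie = 0.8, 0.4, 0.4            # 'by my lights' credences

beatrice_reported = (adam + beatrice) / 2          # 0.6 - already contains half of Adam's view
naive = (adam + beatrice_reported + charlie) / 3   # Adam effectively counted one-and-a-half times
correct = (adam + beatrice + charlie) / 3          # each inside view counted exactly once

print(f"Naive aggregate:   {naive:.2f}")   # 0.60
print(f"Correct aggregate: {correct:.2f}") # 0.53
```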

One can rectify this by distinguishing ‘credence by my lights’ versus ‘credence all things considered’. So one can say “Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc.” Ironically, one’s personal ‘inside view’ of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst one’s all-things-considered modest view is usually for private consumption.

Community benefits to immodesty

Modesty could be parasitic on a community level. If one is modest, one need never trouble oneself with any ‘object level’ considerations at all, and could simply cultivate the appropriate weighting of consensuses to defer to. If everyone free-rode like that, no one would discover any new evidence or have any new ideas, and the community would collectively stagnate.[23] Progress only happens if people get their hands dirty on the object-level matters of the world, try to build models, and make some guesses - sometimes the experts have gotten it wrong, and one won’t ever find that out by deferring to them based on the fact they usually get it right.[24]

The distinction between ‘credence by my lights’ versus ‘credence all things considered’ allows the best of both worlds. One can say ‘by my lights, P’s credence is X’ yet at the same time ‘all things considered though, I take P’s credence to be Y’. One can form one’s own model of P, think the experts are wrong about P, and marshal evidence and arguments for why you are right and they are wrong; yet soberly realise that you are more likely than not mistaken; yet also think this effort is nonetheless valuable, because even if one is most likely heading down a dead-end, the corporate efforts of people like you promise a good chance of someone finding a better path.

Scott Sumner seems to do something similar:

In macro, it's important for people like me to always search for the truth, and reach conclusions about economic models in a way that is independent of the consensus model. In that way, I play my "worker ant" role of nudging the profession towards a greater truth. But at the same time we need to recognize that there is nothing special about our view. If we are made dictator, we should implement the consensus view of optimal policy, not our own. People have trouble with this, as it implies two levels of belief about what is true. The view from inside our mind, and the view from 20,000 miles out in space, where I see there is no objective reason to favor my view over Krugman's.

Despite this example, maybe it is the case that ‘having a creative brain which makes big discoveries’ is anticorrelated with ‘having a sober brain well-calibrated to its limitations compared to others’: anecdotally, eccentric views among geniuses are common. Maybe for most it isn’t psychologically tenable to spend one's life investigating a renegade view one thinks is ultimately likely a dead-end, and in fact people who do groundbreaking research generally have to be overconfident to do the best science. If so, we should act communally to moderate this cost, but not celebrate it as a feature.

Not everyone has to be working on discovering new information. One could imagine a symbiosis between eccentric overconfident geniuses, whose epistemic comparative advantage is to gambol around idea-space to find new considerations, and well-calibrated thoughtful people, whose comparative advantage is in soberly weighing considerations to arrive at a well-calibrated all-things-considered view.

 

Conclusion: a paean, and a plea

I have argued above for a strong approach to modesty, one which implies - at least in terms of one’s ‘all things considered view’ - that one’s view of the object level merits counts for very little. Even if I am mistaken about the ideal strength of modesty, I am highly confident both the EA and rationalist communities err in the ‘insufficiently modest’ direction. I close with these remarks.

Rationalist/EA exceptionalism 

Both communities endure a steady ostinato of complaints about arrogance. They’ve got a point. I despair when I see some wannabe-iconoclast spout off about how obviously the solution to some famously recondite issue is X, and how the supposed experts who disagree obviously just need to better understand the ‘tenets of EA’ or the sequences. I become lachrymose when further discussion demonstrates said iconoclast has a shaky grasp of the basics, that they are recapitulating points already better discussed in the literature, and so forth.[25]

To stress (and to pre-empt), the problem is not, “You aren’t kowtowing appropriately to social status!” The problem is considerable over-confidence married with inadequate understanding. This not only looks bad to outsiders,[26] but is also bad in itself: the individual (and the community) could get to the truth faster if they were more modest about their likely position in the distribution of knowledge about X, and then did commonsensical things to increase it.

Consider Gell-Mann amnesia (via Michael Crichton):

You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

Gell-Mann cases invite inferring adverse judgements by extrapolating from an instance of poor performance. When experts in multiple different subjects say the same thing (i.e. Murray and Crichton chatted to an expert on Palestine who had the same impression), this adverse inference gets all the stronger.

Some, perhaps many, pieces of work or corporate projects in our community share this property: they might look good or groundbreaking to us as the relatively less informed, but domain experts in the fields they touch upon tend to report they are misguided or rudimentary. Although it is possible to indict all these judgements, akin to a person who gives very adverse accounts of all of their previous romantic partners, we may start to wonder about a common factor explanation. Our collective ego is writing checks our epistemic performance (or, in candour, performance generally) cannot cash; general ignorance, rather than particular knowledge, may explain our self-regard.

To discover, not summarise

It is thought that to make the world go better new things need to be discovered, above and beyond making sound judgements on existing knowledge. Quickly making accurate determinations of the balance of reason for a given issue is greatly valuable for the latter, but not so much for the former.

Yet the two should not be confused. If one writes a short overview of a subject ‘for internal consumption’ which gives a fairly good impression of what a particular view should be, one should not be too worried if a specialist complains that you haven’t covered all the topics as adequately as one might. However, if one is aiming to write something which articulates an insight or understanding not just novel to the community, but novel to the world, one should be extremely concerned if domain experts review this work and say things along the lines of, “Well, this is sort of a potted recapitulation of work in our field, and this insight is widely discussed”.

Yet I see this happen a lot to things we tout as ‘breakthrough discoveries’. We want to avoid cases where we waste our time in unwitting recapitulation, or fail to catch elementary mistakes. Yet too often we license ourselves to pronounce these discoveries without sufficient modesty in cases where there’s already a large expert community working on similar matters. This does not preclude such discoveries, but it cautions us to carefully check first. On occasions where I take myself to have a new insight in areas outside my field (most often philosophy), I am extremely suspicious of my supposed discovery: all too often it would arise from my misunderstanding, or already be in the literature somewhere I haven’t looked. I carefully consult the literature as best as I can, and run the idea by true domain experts, to rule out these possibilities.[27]

Others seem to lack this modesty, and so predictably err. More generally, a more modest view of ‘intra-community versus outside competence’ may also avoid cases of having to reinvent the wheel (e.g. that scoring rule you spent six months deriving for a karma system is in this canonical paper), or for an effort to derail (e.g. oh drat, our evaluation provides worthless data because of reasons we could have known from googling ‘study design’).

Paradoxically pathological modesty 

If the EA and rationalist communities comprised a bunch of highly overconfident and eccentric people buzzing around bumping their pet theories together, I might worry about overall judgement and how much novel work gets done, but I would at least grant this looks like fertile ground for new ideas to be developed.

Alas, not so much. What occurs instead is deference approaching fawning obeisance to a small set of people the community anoints as ‘thought leaders’, and so a centralization on one particular eccentric and overconfident view.[28] So although we may preach immodesty on behalf of the wider community, our practice within it is much more deferential.

I hope a better understanding of modesty can get us out of this ‘worst of both worlds’ scenario. It can at least provide better ‘gurus’ to defer to. Better, modesty also helps to correct two mistaken impressions: one, the overly wide gap imagined between our gurus and other experts; two, the overly narrow gap imagined between ‘intelligent layperson in the community’ and ‘someone able to contribute to the state of the art'. Some topics are really hard: becoming someone with ‘something useful to say’ about them takes not days but years; there are many deep problems we must concern ourselves with; the few we select as champions, despite their virtue, cannot do them all alone; and we need all the outside help we can get.

Coda

What the EA community mainly has now is a briar-patch of dilettantes: each ranges widely, but with shallow roots, forming whorls around others where it deems it can find support. What it needs is a forest of experts: each spreading not so widely; forming a deeper foundation and gathering more resources from the common ground; standing apart yet taller, and in concert producing a verdant canopy.[29] I hope this transformation occurs, and aver modesty may help effect it.

  

Acknowledgements

I thank Joseph Carlsmith, Owen Cotton-Barratt, Eric Drexler, Ben Garfinkel, Roxanne Heston, Will MacAskill, Ben Pace, Stefan Schubert, Carl Shulman, and Pablo Stafforini for their helpful discussion, remarks, and criticism. Their kind help does not imply their agreement. The errors remain my own.

 

[Edit 30/10: Rewording and other corrections - thanks to Claire Zabel and Robert Wiblin]

  


[1] Much of this follows discussion in the social epistemology literature about conciliationism, or the ‘equal weight view’. See here for a summary

[2] They also argue at length about the appropriate weight each of these considerations should have on the scales of judgement. I suggest (although this is not necessary for this argument) that in many cases most of the action lies in judging the ‘power’ of evidence. In most cases I observe people agree that a given consideration C influences the credence one holds in P; they usually also agree in its qualitative direction; the challenge comes in trying to weigh each consideration against the others, to see which considerations one’s credence over P should pay the greatest attention to.

This may represent a general feature of webs of belief being dense and many-many (A given credence is influenced by many other considerations, and forms a consideration for many credences in turn), or it may simply be a particular feature of webs of belief in which humans perform poorly: although I am confident I can determine the sign of a particular consideration, I generally don’t back myself to hold credences (or likelihood ratios) to much greater precision than the first significant digit, and I (and, perhaps, others) struggle in cases where large numbers of considerations point in both directions.  

[3] In the literature this is called ‘straight averaging’. For a variety of technical reasons this doesn’t quite work as a peer update rule. That said, given things like Bayesian aggregation remain somewhat open problems, I hope readers will accept my promissory note that there will be a more precise account which produces effectively the same results (maybe ‘approximately splitting the difference’) through the same motivation.

[4] C.f. Aumann’s agreement theorem. As an aside (which I owe to Carl Shulman), straight averaging will not work in some degenerate cases where (similar to ‘common knowledge puzzles’) one can infer precise observations from the probabilities stated. The neatest example I can find comes from Hal Finney (see also):

Suppose two coins are flipped out of sight, and you and another person are trying to estimate the probability that both are heads. You are told what the first coin is, and the other person is told what the second coin is. You both report your observations to each other.

Let's suppose that they did in fact fall both heads. You are told that the first coin is heads, and you report the probability of both heads as 1/2. The other person is told that the second coin is heads, and he also reports the probability as 1/2. However, you can now both conclude that the probability is 1, because if either of you had been told that the coin was tails, he would have reported a probability of zero. So in this case, both of you update your information away from the estimate provided by the other.
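To make the example concrete, here is a minimal sketch (my own illustration, not anything from Finney's original) that enumerates the four equally likely outcomes and conditions on the pair of reports both people actually give:

```python
# Enumerate the two-coin example: each person sees one coin and reports
# P(both heads | what they saw); then condition on both reports being 1/2.
from itertools import product

outcomes = list(product(["H", "T"], repeat=2))  # (first coin, second coin)

def report_given_first(first):
    # You see only the first coin
    return 0.5 if first == "H" else 0.0

def report_given_second(second):
    # The other person sees only the second coin
    return 0.5 if second == "H" else 0.0

# Keep only outcomes consistent with both reports being 0.5
consistent = [o for o in outcomes
              if report_given_first(o[0]) == 0.5 and report_given_second(o[1]) == 0.5]
p_both_heads = sum(1 for o in consistent if o == ("H", "H")) / len(consistent)
print(p_both_heads)  # 1.0 -- both can now infer that both coins are heads
```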

[5] To motivate: Adam and Beatrice no longer know whether the reasons they hold for or against P are private evidence or not. Yet (given epistemic peerhood), they have no principled reason to suppose “I know something that they don’t” is more plausible than the opposite. So again they should be symmetrical.

[6] (On which more later) it is worth making clear that the possibility of bias for either Adam or Beatrice doesn’t change the winning strategy in expectation. Say Adam’s credence for P is in fact biased upwards by 0.4. If Adam knows this, he can adjust and become unbiased; if Oliver or Beatrice knows this (and knows Adam doesn’t), this breaks peerhood for Adam, but they can simulate an unbiased Adam*, who would remain a peer, and act accordingly. If none of them know this, then it is the case that Beatrice wins, as does Oliver following a non-averaging ‘go with Beatrice’ strategy. Yet this is simply epistemic luck: without information, all reasonable prior distribution candidates for (Adam’s bias - Beatrice’s bias) are symmetrical about 0.

[7] Another benefit of modesty is speed: although Adam and Beatrice’s credences (and thus the average) get more accurate if they have time to discuss them, and so catch one another if they make a mistake or reveal previously-private evidence, averaging is faster, and the trade-off of time for better precision may not be worth it. It remains the case, as per the first example, that they do better, after this discussion, if they meet in the middle on any residual disagreement.

[8] A further (albeit minor and technical) dividend is that although individual guesses may form any distribution (for which the standard deviation may not be a helpful summary), the central limit theorem applies to the distribution of the average of guesses, so it tends to normality.
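For what it's worth, the point is easy to check numerically. A quick sketch (the lognormal distribution and sample sizes are arbitrary assumptions chosen for illustration):

```python
# Individual guesses drawn from a heavily skewed distribution; the
# distribution of the average over many guessers is far closer to normal.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_guessers = 10_000, 50
guesses = rng.lognormal(mean=0.0, sigma=1.0, size=(n_trials, n_guessers))  # skewed
averages = guesses.mean(axis=1)

def skew(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print(f"skew of individual guesses: {skew(guesses):.2f}")
print(f"skew of averages of 50:     {skew(averages):.2f}")
```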

[9] Even if one is the world authority, there should be some deference to lesser experts. In cases where the world expert is an outlier, one needs to weigh up numbers versus (relative) epistemic superiority to find the appropriate middle.

[10] God from the Mount of Sinai, whose gray top
Shall tremble, he descending, will himself
In Thunder Lightning and loud Trumpets sound
Ordaine them Lawes…

Milton, Paradise Lost

[11] I take the general pattern that strong modesty usually insulates one from common biases to be a further point in its favour.

[12] I owe this to Eric Drexler.

[13] A related philosophical defence would point out that the self-undermining objection would only apply to whether one should believe modesty, not whether modesty is in fact true.

[14] I naturally get much more sceptical if that person then generalises from this N=1 uncontrolled unblinded crossover trial to others, or takes it as lending significant support against some particular expert consensus or expertise more broadly: “Doctors don’t know anything about back pain! They did all this rubbish but I found out all anyone needs to do is cut carbs!”

[15] It also provokes fear and trembling in my pre-paradigmatic day job, given I don’t want the area to have strong founder effects which poorly track the truth.

[16] For example:

One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own.

[17] Aside: A related consideration is ‘optimal damping’ of credences, which is closely related to resilience. Very volatile credences may represent the buffeting of a degree of belief by evidence that is large relative to one's prior - but they may also represent poor calibration in overweighting new evidence (and vice versa). The ‘ideal’ response in terms of accuracy is given by standard theory. Yet it is also worth noting that one may, for prudential reasons, want to introduce further lag or lead, akin to the ‘D’ or ‘I’ components of a PID controller. In large irreversible decisions (e.g. career choice) it may be better to wait a while after one’s credences support a change before changing action; for cases of new moral considerations it may be better to act ‘in advance’ for precautionary principle-esque reasons.

[18] (Owed to Will MacAskill) There’s also a selection effect: of a sample of ‘accurate contrarians’, many of these may be lucky rather than good.

[19] I owe this particular example to Eric Drexler, but similar counter-examples along these lines to Carl Shulman.

[20] Another general worry is these difficult-to-divine considerations offer plenty of fudge factors - both to make modesty get the ‘right answer’ in historical cases, and to fudge present areas of uncertainty to get results that accord with one’s prior judgement.

[21] I owe both this modification and example to discussions with Eric Drexler. There are some costs - one may think there are cases one should defer to an outside view on the web of belief (E.g. Christian apologist: “Sure, I agree with scientific consensus that it’s improbable Jesus rose naturally from the dead, but the key argument is whether Jesus rose supernaturally from the dead. So the consensus for philosophers of religion is the right expert class.”) The balance of merit overall is hard to say, but such a modification still looks like pretty strong modesty.

[22] In conversation I recall a suggestion by Shulman that such a credence should change one’s behaviour regarding EA - maybe one should do theology research in the hope of finding a way to extract infinite value, etc. Yet the expert class for action|Theism gives a highly adverse prior: virtually no actual theists (regardless of theological expertise, within or outside EA) advocate this.

[23] I understand a similar point is raised in economics regarding the EMH and the success of index funds. Someone has to do the price discovery.

[24] I owe this mainly to Ben Pace; Andrew Critch argues similarly.

[25] For obvious reasons I’m reluctant to cite specific examples. I can offer some key words for the sort of topics where I see this problem as endemic: Many-worlds, population ethics, free will, p-zombies, macroeconomics, meta-ethics.

[26] C.f. Augustine, On the Literal Meaning of Genesis:

Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world, about the motion and orbit of the stars and even their size and relative positions, about the predictable eclipses of the sun and moon, the cycles of the years and the seasons, about the kinds of animals, shrubs, stones, and so forth, and this knowledge he holds to as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking nonsense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn.

[27] I’m uncommonly fortunate that for me such domain experts are both nearby and generous with their attention. Yet this obstacle is not insurmountable. An idea (which I owe to Pablo Stafforini) is that a contrarian and a sceptic of the contrarian view could bet on whether a given expert, on exposure to the contrarian view, would change their mind as the contrarian predicts. S may bet with C: “We’ll pay some expert $X to read your work explicating your view; if they change their mind significantly in favour (however we cash this out), I’ll pay the $X; if not, you pay the $X.”

[28] C.f. Askell’s and Page’s remarks on ‘buzz’.

[29] Perhaps unsurprisingly, I would use a more modest ecological metaphor in my own case. In reclaiming extremely inhospitable environments, the initial pioneer organisms die rapidly. Yet their corpses sustain detritivores, and little by little, an initial ecosystem emerges to be succeeded by others. In a similar way, I hope that the detritus I provide will, after a fashion (and a while), become the compost in which an oak tree grows.  

Comments

Thanks so much for the clear and eloquent post. I think a lot of the issues related to lack of expertise and expert bias are stronger than you do, and I think it's rare but not inordinately difficult to adjust for common biases, such that in certain cases a less-informed individual can beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revises her credence to 0.6, but Adam doesn’t. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam’s credence. With larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and in general facing the nigh-intractable challenge of trying to infer cases of double counting from the patterns of ‘all things considered’ evidence.

One can rectify this by distinguishing ‘credence by my lights’ versus ‘credence all things considered’. So one can say “Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc.” Ironically, one’s personal ‘inside view’ of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst one’s all-things-considered modest view is usually for private consumption.

I rarely see any effort to distinguish between the two outside the rationalist/EA communities, which is one reason I think both over-modesty and overconfident backlash against it are common.

My experience is that most reasonable, intelligent people I know have never explicitly thought of the distinction between the two types of credence. I think many of them have an intuition that something would be lost if they stated their "all things considered" credence only, even though it feels "truer" and "more likely to be right," though they haven't formally articulated the problem. And knowing that other people rarely make this distinction, it's hard for everyone to know how to update based on others' views without double-counting, as you note.

It seems like it's intuitive for people to state either their inside view, or their all-things-considered view, but not both. To me, stating "both" > "inside view only" > "outside view only", but I worry that calls for more modest views tend to leak nuance and end up pushing for people to publicly state "outside view only" rather than "both".

Also, I've generally heard people call the "credence by my lights" and "credence all things considered" one's "impressions" and "beliefs," respectively, which I prefer because they are less clunky. Just fyi.

(views my own, not my employer's)

I just thought I'd note that this appears similar to the 'herding' phenomenon in political polling, which reduces aggregate accuracy: http://www.aapor.org/Education-Resources/Election-Polling-Resources/Herding.aspx

I agree that this distinction is important and should be used more frequently. I also think good terminology is very important. Clunky terms are unlikely to be used.

Something along the lines of "impressions" or "seemings" may be good for "credence by my lights" (cf optical illusions, where the way certain matter of facts seem or appear to you differs from your beliefs about them). Another possibility is "private signal".

I don't think inside vs outside view is a good terminology. E.g., I may have a credence by my lights about X partly because I believe that X falls in a certain reference class. Such reasoning is normally called "outside-view"-reasoning, yet it doesn't involve deference to epistemic peers.

Concur that the distinction between "credence by lights" and "credence all things considered" seems very helpful, possibly deserving of its own post.

Thanks for your generous reply, Claire. I agree the 'double counting' issue remains challenging, although my thought was that, since most people, at least in the wider world, are currently pretty immodest, the downsides are not too large in what I take to be common applications, where you are trying to weigh up large groups of people/experts. I agree there's a risk of degrading norms if people mistakenly switch to offering 'outside view' credences publicly.

I regret I hadn't seen the 'impressions' versus 'beliefs' distinction being used before. 'Impression' works very well for 'credence by my lights' (I had toyed with using the term 'image'), but I'm not sure 'belief' translates quite so well for those who haven't seen the way the term is used in the rationalist community. I guess this might just be hard, as there doesn't seem to be a good word (or two) I can find which captures modesty ("being modest, my credence is X", "modestly, I think it's Y", maybe?)

The dichotomy I see the most at MIRI is 'one's inside-view model' v. 'one's belief', where the latter tries to take into account things like model uncertainty, outside-view debiasing for addressing things like the planning fallacy, and deference to epistemic peers. Nate draws this distinction a lot.

I guess you could make a trichotomy:

a) Your inside-view model.

b) Your all-things-considered private signal, where you've added outside-view reasoning, taken model uncertainty into account, etc.

c) Your all-things-considered belief, which also takes the views of your epistemic peers into account.

As one data point, I did not have this association with "impressions" vs. "beliefs", even though I do in fact distinguish between these two kinds of credences and often report both (usually with a long clunky explanation since I don't know of good terminology for it).

The comments on naming beliefs by Hal Finney (2008) appear to be how the consensus around the impressions/beliefs distinction began to form (the commenters include such movers and shakers as Eliezer and Anna Salamon).

Also, impression track records by Katja (September 2017) is a recent blog post/article circulated in the rationalist community that revived the terminology.

Thanks for drawing our attention to that early Overcoming Bias post. But please note that it was written by Hal Finney, not Robin Hanson. It took me a few minutes to realize this, so it seemed worth highlighting lest others fail to appreciate it.

Incidentally, I've been re-reading Finney's posts over the past couple of days and have been very impressed. What a shame that such a fine thinker is no longer with us.

ETA: Though one hopes this is temporary.

Somehow I missed your reply originally; I've updated my comment to correct the author name of the post.

Thanks! By the way,  I found your original comment helpful for writing about the history of the concept of an independent impression.

I'm not sure where I picked it up, though I'm pretty sure it was somewhere in the rationalist community.

E.g. from What epistemic hygiene norms should there be?:

Explicitly separate “individual impressions” (impressions based only on evidence you've verified yourself) from “beliefs” (which include evidence from others’ impressions)

To add to the list of references in this thread, Brian Tomasik talks about this in "Gains from Trade through Compromise" in the section "Epistemic prisoner's dilemma".

Although you are right that modesty (or deference) often outperforms one's own personal judgment, this isn't always the case. Results below are based on Monte Carlo simulations I haven't published yet.

Take the case of a crowd estimating a cow's weight. The members of the crowd announce their guesses sequentially. They adopt a uniform rule of D% deference, so that each person's guess is a weighted average of a sample from a Normal distribution centered on the cow's true weight, and of the current crowd average guess:

Guess_i = D*(Crowd average) + (1-D)*(Direct observation)

Under this rule, as deference increases, the crowd converges more slowly on the cow's true weight: deference is bad for group epistemics. This isn't the same thing as an information cascade, because the crowd will converge eventually unless they are completely deferent. 
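Here is a minimal sketch of the sequential model as I understand it from the description above; the true weight, observation noise, and crowd size are placeholder assumptions rather than the parameters used in the unpublished simulations:

```python
# Sequential guessing with a uniform deference level D:
# guess_i = D * (crowd average so far) + (1 - D) * (noisy private observation)
import numpy as np

def simulate(deference, n_people=200, true_weight=550.0, obs_sd=50.0, seed=0):
    rng = np.random.default_rng(seed)
    guesses = []
    for _ in range(n_people):
        observation = rng.normal(true_weight, obs_sd)
        if not guesses:
            guess = observation  # the first guesser has no crowd to defer to
        else:
            crowd_average = float(np.mean(guesses))
            guess = deference * crowd_average + (1 - deference) * observation
        guesses.append(guess)
    return np.array(guesses)

# Compare how far the final crowd average sits from the truth at different
# deference levels (higher deference typically slows convergence).
for d in (0.0, 0.5, 0.9):
    error = abs(simulate(d).mean() - 550.0)
    print(f"deference={d:.1f}  |crowd average - truth| = {error:.1f}")
```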

Furthermore, the benefits of deference for individual guess accuracy are maximized at about 78% deference except potentially on very long timescales. Beyond this point, the group converges so slowly that it undermines, though doesn't fully cancel out, the individual benefit of adopting the group average.

Finally, when faced with a choice of whether to make a guess according to the group's rule for deference or whether to be 100% deferent and simply guess the current group average, you will actually do better to make a partially deferent guess than a 100% deferent guess if the group is more than about 78% deferent. Below that point, it's better for individual accuracy to 100% defer, which suggests a Prisoner's Dilemma model in which individuals 'defect' on the project of obtaining high crowd accuracy by deferring to the crowd rather than contributing their own independent guess, leading to very high and deeply suboptimal but not maximally bad levels of deference.

These results depend on specific modeling assumptions.

  • We might penalize inaccuracy according to its square. This makes deference less useful for individual accuracy, putting the optimal level closer to 65% rather than 78%.
  • We can also imagine that instead of using the crowd average, the deferent portion of the guess is sampled from a Normal distribution about the crowd average. In this case, the optimal level of deference is closer to 45%, and beyond about 78% deference, it's better to ignore the crowd entirely and just do pure direct observation.
  • I haven't simulated this yet, but I am curious to know what happens if we assume a fixed 1% of guesses are purely independent.
  • We are evaluating the whole timeline of guesses and observations. What if we are thinking about a person joining a mature debate, where the crowd average has had more time to converge?
  • It assumes that making and sharing observations is cost-free. In reality, of course, if every scientist had to redo all the experiments in their field (i.e. never deferred to previous results), science could not progress, and this is true everywhere else as well.
  • Whether or not the cost of further observations is linked to the current group accuracy. If we imagine a scenario where individual accuracy is a heavy driver of the costs of individual observations, then we might want to prioritize deference to keep costs down and permit more or faster guesses. If instead it is crowd accuracy that controls the costs of observations, then we might want to focus on independent observations. 

Overall, I think we need to refocus the debate on epistemic modesty around tradeoffs and modeling assumptions in order to help people make the best choice given their goals:

  • How much to defer seems to depend on a few key factors:
    • The cost of making independent observations vs. deferring, and whether or not these costs are linked to current group or individual accuracy
    • How inaccuracy is penalized 
    • How deferent we think the group is
    • Whether we are prioritizing our own individual accuracy or the speed with which the group converges on the truth
  • Basically all the problems with deference can be eliminated if we are able to track the difference between independent observations and deferent guesses.

My main takeaways are that:

  • Intuition is only good enough to be dangerous for thinking in the abstract about deference
  • Real-world empirical information is crucial for making the right choice of modeling assumptions to decide on how much to defer
  • Deference is a common and important topic in the rationalist and EA communities on a number of subjects, and should motivate us to try and take a lot more guesses
  • It is probably worth trying to figure out how to better track the difference between deferent guesses and independent observations in our discourse

Some evidence that people tend to underuse social information, suggesting they're not by default epistemically modest:


Social information is immensely valuable. Yet we waste it. The information we get from observing other humans and from communicating with them is a cheap and reliable informational resource. It is considered the backbone of human cultural evolution. Theories and models focused on the evolution of social learning show the great adaptive benefits of evolving cognitive tools to process it. In spite of this, human adults in the experimental literature use social information quite inefficiently: they do not take it sufficiently into account. A comprehensive review of the literature on five experimental tasks documented 45 studies showing social information waste, and four studies showing social information being over-used. These studies cover ‘egocentric discounting’ phenomena as studied by social psychology, but also include experimental social learning studies. Social information waste means that human adults fail to give social information its optimal weight. Both proximal explanations and accounts derived from evolutionary theory leave crucial aspects of the phenomenon unaccounted for: egocentric discounting is a pervasive effect that no single unifying explanation fully captures. Cultural evolutionary theory's insistence on the power and benefits of social influence is to be balanced against this phenomenon.

There is a discussion on "the producer-scrounger dilemma for information use" of potential interest:

Social information is only useful when others also gather information asocially. Cultural evolutionary models contain a possible explanation of egocentric discounting. Rogers' influential model [81] showed that social learning may not provide any advantage over individual learning when the environment changes. The advantage of using social learning depends on the frequency of social learners in the population: if those are too numerous, social learning is useless. When there are mostly individual learners, copying is effective, because it saves the costs of individual exploration, and because the probability of copying a correct behaviour is high. However, when there are mostly social learners, the risk of copying an outdated behaviour increases and individual learners are advantaged. This means the advantages of social learning are inversely frequency-dependent: the more other people learn socially, the less efficient it is to learn from them. The same logic is reflected, on a smaller scale, in models of information cascades, where social learning can (with a small probability) become detrimental for an individual when too many other individuals resort to it. More generally, a broad range of models converge upon the view that social information use can be likened, in terms of evolutionary game theory, to a producer–scrounger dynamic [37,77,82]. At equilibrium, these games typically yield a mixed population of producers (individual learners) and scroungers (social learners), where neither type does better than the other [83,84]. Egocentric discounting might emerge from a producer–scrounger dilemma, as a response to the devaluation of social information which may occur when too many other agents rely on social learning.

Note that this seems to assume that people don't use the "credence by my lights" vs. "credence all things considered"-distinction discussed in the comments.

Can you give 5 examples of cases where rationalist/EAs should defer more to experts?

This was a really good read! In addition to being super well-timed.

I don't think there's a disagreement here about ideal in-principle reasoning. I’m guessing that the disagreement is about several different points:

  • In reality, how generally difficult is it to spot important institutions and authorities failing in large ways? Where we might ask subquestions for particular kinds of groups; e.g., maybe you and the anti-modest will turn out to agree about how dysfunctional US national politics is on average, while disagreeing about how dysfunctional academia is on average in the US.

  • In reality, how generally difficult is it to evaluate your own level of object-level accuracy in some domain, the strength of object-level considerations in that domain, your general competence or rationality or meta-rationality, etc.? To what extent should we update strongly on various kinds of data about our reasoning ability, vs. distrusting the data source and penalizing the evidence? (Or looking for ways to not have to gather or analyze data like that at all, e.g., prioritizing finding epistemic norms or policies that work relatively OK without such data.)

  • How strong are various biases, either in general or in our environs? It sounds like you think that arrogance, overconfidence, and excess reliance on inside-view arguments are much bigger problems for core EAs than underconfidence or neglect of inside-view arguments, while Eliezer thinks the opposite.

  • What are the most important and useful debiasing interventions? It sounds like you think these mostly look like attempts to reduce overconfidence in inside views, self-aggrandizing biases, and the like, while Eliezer thinks that it's too easy to overcorrect if you organize your epistemology around that goal. I think the anti-modesty view here is that we should mostly address those biases (and other biases) through more local interventions that are sensitive to the individual's state and situation, rather than through rules akin to "be less confident" or "be more confident".

  • What's the track record for more modesty-like views versus less modesty-like views overall?

  • What's the track record for critics of modesty in particular? I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea. So I assume it would help to discuss the object-level disagreements underlying these diverging generalizations.

Does that match your sense of the disagreement?

Thanks for your helpful reply. I think your bullet points do track the main sources of disagreement, but I venture an even crisper summary:

I think the Eliezer-style 'immodest' view comprises two key claims:

1) There are a reasonably large number of cases where, due to inadequate equilibria or similar, those whom we might take to be expert classes are in fact going to be sufficiently poorly optimised for the truth that a reasonable rationalist or similar could be expected to do better.

2) We can reliably identify these cases.

If they're both true we can license ourselves to 'pick fights' where we make confident bets against expert consensus (or lack thereof) in the knowledge we are more likely than not to be right. If not, then it seems modesty is the better approach: it might be worth acting 'as if' our contra-expert impression is right and doing further work (because we might discover something important), but we should nonetheless defer to the expert consensus.

It seems the best vindication of the immodest view as Eliezer defends it would be a track record of such cases on his part or that of the wider rationalist community. You correctly anticipate that I take the track record here to be highly adverse. For two reasons:

First, when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty Atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state of the art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on be similarly unimpressed with the sophistication of argument offered.

Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (i.e. it seems unlikely to discover convincing proof of physicalism any time soon). So what we observe is both compatible with 'the rationalist community is right and this field is diseased (and so gets it wrong)' and 'the rationalist community is greatly overconfident and the field is on the right track'. That said, I take the sheer number of fields the rationalist community deems sufficiently diseased that it takes itself to do better to be implausible on priors.

The best thing would be a clear track record to judge - single cases, either way, don't give much to go on, as neither modesty nor immodesty would claim they should expect to win every single time. I see the rationalist community having one big win (re. AI), yet little else. That Eliezer's book offers two pretty weak examples (e.g. BoJ, where he got the argument from a recognised authority, and an n=1 medical intervention), and reports one case against (e.g. a big bet on Taubes), doesn't lead me to upgrade my pretty autumnal view of the track record.

Buck

when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty Atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state of the art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on be similarly unimpressed with the sophistication of argument offered.

I would love to see better evidence about this. E.g. it doesn't match my experience of talking to physicists.

I'm pretty Atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state of the art discussion in the field.

This would be a pertinent critique if the aim of LessWrong were to be a skeptics' forum, created to make the most canonical debunkings (serving a societal purpose akin to Snopes). It seems much less relevant if you are trying to understand the world, unless perhaps you have a very strong intuition or evidence that sophistication is highly correlated with truth.

Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (i.e. it seems unlikely to discover convincing proof of physicalism any time soon).

I think a convincing object-level argument could be given; you could potentially show on object-level grounds why the specific arguments or conclusions of various rationalists are off-base, thereby at least settling the issue (or certain sub-issues) to the satisfaction of people who take the relevant kinds of inside-view arguments sufficiently seriously in the first place. I'd be particularly interested to hear reasons you (or experts you defer to) reject the relevant arguments against gods, philosophical zombies, or objective collapse / non-realism views in QM.

If you mean that a convincing expert-consensus argument is likely to be far off, though, then I agree about that. As a start, experts' views and toolkits in general can be slow to change, particularly in areas like philosophy.

I assume one part of the model Eliezer is working with here is that it can take many decades for new conceptual discoveries to come to be widely understood, accepted, and used in a given field, and even longer for these ideas to spill over into other fields. E.g., some but not all philosophers have a deep understanding of Shannon, Solomonoff, and Jaynes' accounts of inductive inference, even though many of the key insights have been around for over fifty years at this point. When ideas spread slowly, consensus across all fields won't instantly snap into a new state that's maximally consistent with all of the world's newest developments, and there can be low-hanging fruit for the philosophers who do help import those ideas into old discussions.

This is why Eliezer doesn't claim uniqueness for his arguments in philosophy; e.g., Gary Drescher used the same methodology and background ideas to arrive largely at the same conclusions largely independently, as far as I know.

I'd consider the big advances in decision theory from Wei Dai and Eliezer to be a key example of this, and another good example of independent discovery of similar ideas by people working with similar methodologies and importing similar ideas into a relatively old and entrenched field. (Though Wei Dai and Eliezer were actively talking to each other and sharing large numbers of ideas, so the independence is much weaker.)

You can find most of the relevant component ideas circulating before that, too; but they were scattered across multiple fields in a way that made them less likely to get spontaneously combined by specialists busy hashing out the standard sub-sub-arguments within old paradigms.

I agree such an object level demonstration would be good evidence (although of course one-sided, for reasons Pablo ably articulates elsewhere). I regret I can't provide this. On many of these topics (QM, p-zombies) I don't pretend any great knowledge; for others (e.g. Theism), I can't exactly find the 'rationalist case for Atheism' crisply presented.

I am naturally hesitant to infer from the (inarguable) point that diffusion of knowledge and ideas within and across fields takes time that the best explanation for disagreement is that rationalists are just ahead of the curve. I enjoyed the small parts of Drescher I read, but I assume many reasonable philosophers are aware of his work and yet are not persuaded. Many things touted in philosophy (and elsewhere) as paradigm-shifting insights transpire to be misguided, and betting on some based on your personal assent on the object level looks unlikely to go well.

I consider the decision theory work a case in point. The view that FDT/UDT/TDT is this great advance on the decision-theoretic state of the art is a view that is very tightly circumscribed to the rationalist community itself. Of course, many decision theorists are simply ignorant of it, given it is expounded outside the academic press. Yet others are not: there were academic decision theorists who attended some MIRI workshops, others who have been shown versions (via Chalmers, I understand), and a few who have looked at MIRI's stuff on arXiv and similar. Yet the prevailing view of these seems to be at best lukewarm, and at worst scathing.

This seems challenging to reconcile with a model of rationalists just getting to the great insights early before everyone else catches up. It could be that the decision theorist community is so diseased that it cannot appreciate the technical breakthrough MIRI-style decision theory promises. Yet I find the alternative hypothesis, in which it is the rationalist community that is diseased, diving down a decision-theoretic dead end without the benefit of much interaction with decision theory experts to correct it, somewhat more compelling.

To be clear, I'm not saying that the story I told above ("here are some cool ideas that I claim haven't sufficiently saturated the philosophy community to cause all the low-hanging fruit to get grabbed, or to produce fieldwide knowledge and acceptance in the cases where it has been grabbed") should persuade arbitrary readers that people like Eliezer or Gary Drescher are on the right track; plenty of false turns and wrong solutions can also claim to be importing neglected ideas, or combining ideas in neglected ways. I'm just gesturing at one reason why I think it's possible at all to reach confident correct beliefs about lots of controversial claims in philosophy, in spite of the fact that philosophy is a large and competitive field whose nominal purpose is to answer these kinds of questions.

I'm also implicitly making a claim about there being similarities between many of the domains you're pointing to that help make it not just a coincidence that one (relatively) new methodology and set of ideas can put you ahead of the curve on multiple issues simultaneously (plus produce multiple discovery and convergence). A framework that's unusually useful for answering questions related to naturalism, determinism, and reflective reasoning can simultaneously have implications for how we should (and shouldn't) be thinking about experience, agency, volition, decision theory, and AI, among other topics. To some extent, all of these cases can be thought of as applications of a particular naturalist/reductionist toolkit (containing concepts and formalisms that aren't widely known among philosophers who endorse naturalism) to new domains.

I'm curious what criticisms you've heard of MIRI's work on decision theory. Is there anything relevant you can link to?

I don't think the account of the relative novelty of the 'LW approach' to philosophy makes a good fit for the available facts; "relatively" new is, I suggest, a pretty relative term.

You can find similar reduction-esque sensibilities among the logical positivists around a century ago, and a very similar approach from Quine about half a century ago. In the case of the logical positivists, they enjoyed a heyday amongst the philosophical community, but gradually fell from favour due to shortcomings other philosophers identified; I suggest Quine is a sufficiently 'big name' in philosophy that his approach was at least widely appreciated by the relevant academic communities.

This is challenging to reconcile with an account of "Rationality's philosophical framework allows one to confidently get to the right answer across a range of hard philosophical problems, and the lack of assent of domain experts is best explained by their not being aware of it". Closely analogous approaches were tried a very long time ago, and haven't been found extraordinarily persuasive (even if we subset to naturalists). It doesn't help that when the 'LW-answer' is expounded (e.g. in the sequences) the argument offered isn't particularly sophisticated (and often turns out to be recapitulating extant literature), nor does it usually deign to address objections raised by dissenting camps.

I suggest a better fit for this data is the rationality approach looks particularly persuasive to people without subject matter expertise.

Re. decision theory: beyond the general social epistemological steers (i.e. the absence of good decision theorists raving about the breakthrough represented by MIRI-style decision theory, despite many of them having come into contact with this work one way or another), remarks I've heard often target 'technical quality': Chalmers noted in a past AMA his disappointment that this theory had not been made rigorous (maybe things have changed since), and I know one decision theorist's view is that the work also isn't rigorous and is a bit sloppy (on Carl's advice, I'm trying to contact more). Not being a decision theorist myself, I haven't delved into the object level considerations.

The "Cheating Death in Damascus" and "Functional Decision Theory" papers came out in March and October, so I recommend sharing those, possibly along with the "Decisions Are For Making Bad Outcomes Inconsistent" conversation notes. I think these are much better introductions than e.g. Eliezer's old "Timeless Decision Theory" paper.

Quineans and logical positivists have some vague attitudes in common with people like Drescher, but the analogy seems loose to me. If you want to ask why other philosophers didn't grab all the low-hanging fruit in areas like decision theory or persuade all their peers in areas like philosophy of mind (which is an interesting set of questions from where I'm standing, and one I'd like to see examined more too), I think a more relevant group to look at will be technically minded philosophers who think in terms of Bayesian epistemology (and information-theoretic models of evidence, etc.) and software analogies. In particular, analogies that are more detailed than just "the mind is like software", though computationalism is an important start. A more specific question might be: "Why didn't E.T. Jaynes' work sweep the philosophical community?"

I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea.

Really? My sense is that the opposite is the case. Eliezer himself acknowledges that he has an "amazing bet-losing capability" and my sense is that he tends to bet against scientific consensus (while Caplan, who almost always takes the consensus view, has won all his bets). Carl Shulman notes that Eliezer's approach "has lead [him] astray repeatedly, but I haven't seen as many successes."

and Carl Shulman notes that his approach "has lead [him] astray repeatedly, but I haven't seen as many successes."

That quote may not convey my view, so I'll add to this. I think Eliezer has had a number of striking successes, but in that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero. E.g. I think he has discharged the burden of due diligence wrt MWI.

If many physicists say X, and many others say Y and Z which seem in conflict with X, then at a high rate there will be some good arguments for X, Y, and Z. If you first see good arguments for X, you should check to see what physicists who buy Y and Z are saying, and whether they (and physicists who buy X) say they have knowledge that you don't understand.

In the case of MWI, the physicists say they don't have key obscure missing arguments (they are public and not esoteric), and that you can sort interpretations into ones that accept the unobserved parts of the wave function in QM as real (MWI, etc), ones that add new physics to pick out part of the wavefunction to be our world, and ones like shut-up-and-calculate that amount to 'don't talk about whether parts of the wave function we don't see are real.'

Physicists working on quantum foundations are mostly mutually aware of one another's arguments, and you can read or listen to them for their explanations of why they respond differently to that evidence, and look to the general success of those habits of mind. E.g. the past success of scientific realism and Copernican moves: distant lands on Earth that were previously unseen by particular communities turned out to be real, other Sun-like stars and planets were found, biological evolution, etc. Finding out that many of the interpretations amount to MWI under another name, or just refusing to answer the question of whether MWI is true, reduces the level of disagreement to be explained, as does the finding that realist/multiverse interpretations have tended to gain ground with time and to do better among those who engage with quantum foundations and cosmology.

In terms of modesty, I would say that generally 'trying to answer the question about external reality' is a good epistemic marker for questions about external reality, as is Copernicanism/not giving humans a special place in physics or drastically penalizing theories on which the world is big/human nature looks different (consistently with past evidence). Regarding new physics for objective collapse, I would also note the failure to show it experimentally and the general opposition to it. That seems sufficient to favor the realist side of the debate among physicists.

In contrast, I hadn't seen anything like such due diligence regarding nutrition, or precedent in common law.

Regarding the OP thesis, you could summarize my stance as that assigning 'epistemic peer' or 'epistemic superior/inferior' status in the context of some question of fact requires a lot of information and understanding when we are not assumed to already have reliable fine-grained knowledge of epistemic status. That often involves descending into the object-level: e.g. if the class of 'scientific realist arguments' has a good track record, then you will need to learn enough about a given question and the debate on it to know if that systemic factor is actually at play in the debate before you can know whether to apply that track record in assessing epistemic status.

In that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero.

I offered that quote to cast doubt on Rob's assertion that Eliezer has "a really strong epistemic track record, and that this is good evidence that modesty is a bad idea." I didn't mean to deny that Eliezer had some successes, or that one shouldn't "look closely and form much better estimates of the likelihood of good invisible reasons" or that "the base rate of dysfunction is anywhere near zero", and I didn't offer the quote to dispute those claims.

Readers can read the original comment and judge for themselves whether the quote was in fact pulled out of context.

Please take my comment as explaining my own views, lest they be misunderstood, not condemning your citation of me.

Okay, thank you for the clarification.

[In the original version, your comment said that the quote was pulled out of context, hence my interpretation.]

I'm pretty late to the party (perhaps even so late that people forgot that there was a party), but just in case someone is still reading this, I'll leave my 2 cents on this post. 

[Context: A few days ago, I released a post that distils a paper by Kenny Easwaran and others, in which they propose a rule for updating on the credences of others. In a (tiny) nutshell, this rule, "Upco", asks you to update on someone's credence in proposition A by multiplying your odds with their odds.]

 1. Using Upco suggests some version of strong epistemic modesty: whenever the product of all the odds of your peers that you have learned is larger than your own odds, then your credence should be dominated by those of others; and if we grant that this is virtually always the case, then strong epistemic modesty follows. 

2. While I agree with some version of strong epistemic modesty, I strongly disagree with what I take to be the method of updating on the credence of peers that is proposed in this post: taking some kind of linear average (from hereon referred to as LA). Here's a summary of reasons why I think Upco is a better updating rule, copied from my post: 

Unfortunately, the LA has some undesirable properties (see section 4 of the paper):

  • Applied in the way sketched above, LA is non-commutative, meaning that LA is sensitive to the order in which you update on the credences of others, and it seems like this should be completely irrelevant to your subsequent beliefs. 
    • This can be avoided by taking the “cumulative average” of the credences of the people you update on, i.e. each time you learn someone's credence in A you average again over all the credences you have ever learned regarding this proposition. However, now the LA has lost its initial appeal; for each proposition you have some credence in, rationality seems to require you to keep track of everyone you have updated on and the weights you assigned to them. This seems clearly intractable once the number of propositions and learned credences grows large. 
    • See Gardiner (2013) for more on this.
  • Relatedly, LA is also sensitive to whether you update on multiple peers at once or sequentially.
  • Also, LA does not commute with Bayesian Updating. There are cases where it matters whether you first update on someone's credence (e.g. regarding the bias of a coin) using the LA and then on “non-psychological” evidence (e.g. the outcome of a coin-flip you observed) using Bayesian Updating or the reverse. 
  • Moreover, LA does not preserve ‘judgments of independence’. That is, if two peers judge two propositions A and B to be independent, i.e. P1(A & B) = P1(A)·P1(B) and P2(A & B) = P2(A)·P2(B), then after updating on each other's credences, independence is not always preserved. This seems intuitively undesirable: if you think that the outcome of (say) a coin flip and a die roll are independent and I think the same - why should updating on my credences change your mind about that?
  • LA does not exhibit what the authors call “synergy”. That is, suppose your credence in A is c1(A) and your peer's is c2(A). Then it is necessarily the case that both updated credences are in the interval [min(c1(A), c2(A)), max(c1(A), c2(A))] if they are both applying LA. In other words, using the LA never allows you to update beyond the credence of the most confident person you’ve updated on (or yourself if you are more confident than everybody else). 

    • At first sight, this might seem like a feature rather than a bug. However, this means that the credence of someone less confident than you can never be positive evidence regarding the issue at hand. Suppose you are 95% sure that A is true. Now, for any credence smaller than 95% LA would demand that you update downwards. Even if someone is perfectly rational, has a 94.9% credence in A and has evidence completely independent from yours, LA tells you that their credence is disconfirming evidence. 


Perhaps most importantly, since Bayesian Updating does not have these properties, LA does not generally produce the same results. Thus, insofar as we regard Bayesian updating as the normative ideal, we should expect LA to be at best an imperfect heuristic and perhaps not even that. 

In sum, LA has a whole host of undesirable properties. It seems like we therefore would want an alternative rule that avoids these pitfalls while retaining the simplicity of LA.   

The EaGlHiVe aims to show that such a rule exists. They call this rule “Upco”, standing for “updating on the credences of others”. Upco is a simple rule that avoids many of the problems of LA: preservation of independence, commutativity, synergy, etc. Moreover, Upco produces the same results as Bayesian Updating under some conditions.
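To make the contrast concrete, here is a toy sketch based on the nutshell description above (multiplying odds); it follows my reading of this comment rather than the paper's full formulation:

```python
# Compare linear averaging (LA) with an odds-multiplication update ("Upco"
# as described in the nutshell above) on a single proposition A.
def odds(p):
    return p / (1 - p)

def from_odds(o):
    return o / (1 + o)

def linear_average(p_mine, p_theirs):
    return (p_mine + p_theirs) / 2

def upco(p_mine, p_theirs):
    return from_odds(odds(p_mine) * odds(p_theirs))

# 'Synergy': two moderately confident peers end up more confident than either.
print(linear_average(0.8, 0.8))  # 0.8   -- LA never leaves the [min, max] interval
print(upco(0.8, 0.8))            # ~0.94 -- agreement is treated as extra evidence
```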

I don't have much to contribute to the normative social epistemology questions raised here, since this is a huge debate within philosophy. People interested in a general summary might read the Philosophy Compass review or the SEP article.

But I did want to question the claim which is made about the descriptive social epistemology of the EA movement, i.e. that:

What occurs instead is agreement approaching fawning obeisance to a small set of people the community anoints as ‘thought leaders’, and so centralizing on one particular eccentric and overconfident view.

I'm not sure this is useful as a general characterisation of the EA community, though certainly at times people are too confident, too deferential etc. What beliefs might be the beneficiaries of this fawning obeisance? There doesn't seem to me to be sufficient uncontroversial agreement about much (even utilitarianism has a number of prominent 'thought leaders' pushing against it saying that we ought to be opening ourselves up to alternatives).

The general characterisation seems in tension with the common idea that EA is highly combative and confrontational (it would be strange, though not impossible, if we had constant disagreement and attempted argumentative one-upmanship, combined with excessive deference to certain thought leaders). Instead what I see is occasional excessive deference to people respected within certain cliques, by members of those circles, but not 'centralization' on any one particular view. Perhaps all Greg has in mind is these kinds of cases where people defer too much to people they shouldn't (perhaps due to a lack of actual experts in EA rather than due to their own vice). But then it's not clear to me what the typical EA-rationalist, who has not made and probably shouldn't make a deep study of many-worlds, free will, or meta-ethics, should do to avoid this problem.

Apropos of which, SEP published an article on disagreement last week, which provides an (even more) up to date survey of philosophical discussion in this area.

(even utilitarianism has a number of prominent 'thought leaders' pushing against it saying that we ought to be opening ourselves up to alternatives).

Also, EA selects for utilitarians in the first place. So you can't say that we're being irrational just because we're disproportionately utilitarian.

This post is one of the best things I've read on this forum. I upvoted it, but didn't feel that was sufficient appreciation for you writing something this thorough in your spare time!

[anonymous]

Hi Greg, thanks for this post, it was very good. I thought it would help future discussion to separate these claims, which leave your argument ambiguous:

  1. You should give equal weight to your own credences and those of epistemic peers on all propositions for which you and they are epistemic peers.
  2. Claims about the nature of the community of epistemic peers and our ability to reliably identify them.

In places, you seem to identify modesty with 1, in others with the conjunction of 1 and a subset of claims in 2. 1 doesn't seem sufficient on its own for modesty, for if 1 is true but I have no epistemic peers or can't reliably identify them, then I should pay lots of attention to my own inside view of an issue. Similarly, if EAs have no epistemic peers or superiors, then they should ignore everyone else. This is compatible with conciliationism but seems immodest. The relevant claim in 2 seems to be that for most people, including EAs, with beliefs about practically important propositions, there are epistemic peers and superiors who can be reliably identified.

This noted, I wonder how different the conjunction of 1 and 2 is to epistemic chauvinism. It seems to me that I could accept 1 and 2, but demote people from my epistemic peer group with respect to a proposition p if they disagree with me about p. If I have read all of the object-level arguments on p and someone else has as well and we disagree on p, then demotion seems appropriate at least in some cases. To give an example, I've read and thought about vagueness less than lots of much cleverer philosophers who hold a view called supervaluationism, which I believe to be extremely implausible. I believe I can explain why they are wrong with the object-level arguments about vagueness. I received the evidence that they disagree. Very good, I reply, they are not my epistemic peers with respect to this question for object level reasons x, y, and z. (Note that my reasons for demoting them are the object-level reasons; they are not that I believe that supervaluationism is false. Generally, the fact that I believe p is usually not my reason to believe that p.) This is entirely compatible with the view that I should be modest with respect to my epistemic peers.

In this spirit, I find Scott Sumner's quote deeply strange. If he thinks that "there is no objective reason to favor my view over Krugman's", then he shouldn't believe his view over Krugman's (even though he (Sumner) does). If I were in Sumner's shoes after reasoning about p and reading the object-level reasons about p, then I would either become agnostic or demote Krugman from my epistemic peer group.

[anonymous]

I thought I'd offer up more object-level examples to try to push against your view. AI risk is a case in which EAs disagree with the consensus among numerous AI researchers and other intelligent people. In my view, a lot of the arguments I've heard from AI researchers have been very weak and haven't shifted my credence all that much. But modesty here seems to push me toward the consensus to a greater extent than the object-level reasons warrant.

With respect to the question of AI risk, it seems to me that I should demote these people from my epistemic peer group because they disagree with me on the subject of AI risk. If you accept this, then it's hard to see what difference there is between immodesty and modesty.

The difference between EAs and AI researchers on many object-level claims, like the probability that there will be an intelligence explosion and so on, is not very large. This survey demonstrated it: https://arxiv.org/abs/1705.08807

AI researchers are just more likely to have an attitude that anything less than ~10% likely to occur should be ignored, or that existential risks are not orders of magnitude more important than other things, or to make similar kinds of judgement calls.

The one major technical issue where EAs might be systematically different from AI researchers would be the validity of current research in addressing the problem.

Is there any data on how likely EAs think explosive progress after HLMI is? I would have thought it more than 10%.

I would also have expected more debate about explosive progress, beyond just the recent Hanson-Yudkowsky flare-up, if there was as much doubt in the community as that survey suggests.

Gregory, thanks for writing this up. Your writing style is charming and I really enjoy reading the many deft turns of phrase.

Moving on to the substance, I think I share JH's worries. What seems missing from your account is why people have the credences they have. Wouldn't it be easiest just to go and assess the object-level reasons people have for their credences? For instance, with your Beatrice and Adam example, one (better?) way to make progress on finding out whether it's an oak or not is to ask them for their reasons, rather than ask them to state their credences and take those on trust. If Beatrice says "I am a tree expert but I've left my glasses at home so can't see the leaves" (or something) whereas Adam gives a terrible explanation ("I decided every fifth tree I see must be an oak tree"), that would tell us quite a lot.

Perhaps we should defer to others either when we don't know what their reasons are but need to make a decision quickly, or when we think they have the same access to the object-level reasons as we do (potential example: two philosophers who've read everything but still disagree).

Hello John (and Michael - never quite sure how to manage these sorts of 'two to one' replies).

I would reject epistemic chauvinism. In the cases where you disagree on P with your epistemic peer, and you take some set of object-level reasons x, y, and z to support P, the right approach is to downgrade your confidence in the strength of these reasons rather than demote your peer from peerhood. I'd want to support that using some set of considerations about [2]: among others, the reference class where you demote people from peerhood (or superiority) on disagreement goes predictably much worse than the 'truly modest' one where you downgrade your confidence in the reasons that lead you to disagree (consider a typical crackpot who thinks the real numbers have the same cardinality as the naturals for whatever reason, and then infers from disagreement that mathematicians are all fools).

For the supervaluation case, I don't know whether it is the majority view on vagueness, but pretend it was a consensus. I'd say the right thing in such a situation is to be a supervaluationist yourself, even if it appears to you to be false. Indicting apparent peers/superiors for object-level disagreement involves entrenchment, and so seems to go poorly.

In the AI case, I'd say you'd have to weigh up (which is tricky) degrees of expertise re. AI. I don't see it as a cost for my view to update towards the more sceptical AI researchers even if you don't think the object-level reasons warrant it, as in plausible reference classes the strategy of going with the experts beats going with non-expert opinion.

In essence, the challenge modesty would make is: "Why do you back yourself to have the right grasp on the object-level reasons?" Returning to a supervaluationist consensus, it seems one needs to offer a story as to why the object-level reasons that convincingly refute the view are not appreciated by the philosophers who specialise in the subject. It could be the case they're all going systematically wrong (and so you should demote them), but it seems more likely that you have mistaken the object-level balance of reason. Using the former as an assumption looks overconfident.

What I take Sumner to be saying is that he does adopt the agnosticism you suggest he should; maybe something like this:

My impression is that my theory is right, but I don't believe my impression is more likely to be right than Paul Krugman's (or others'). So if you put a gun to my head and I had to give my best guess on economics, I would take an intermediate view, and not follow the theory I espouse. In my day-to-day work, though, I use this impression to argue in support of this view, so it can contribute to our mutual knowledge.

Of course, maybe you can investigate the object level reasons, per Michael's example. In the Adam and Beatrice case, Oliver could start talking to them about the reasons, and maybe find one of them isn't an epistemic peer to the other (or to him). Yet in cases where Oliver forms his own view about the object level considerations, he should still be modest across the impressions of Adam, Beatrice, and himself, for parallel reasons to the original case where he was an outsider (suppose we imagine Penelope who is an outsider to this conversation, etc.)

[anonymous]

Hi Greg. So, your view is that it's OK to demote people from my peer group when I not only disagree with them about p but also have an explanation of why they would be biased that doesn't apply to me. And on your view their verdict on p could never be evidence of their bias. This last seems wrong in many cases.

Consider some obvious truth P (e.g. if a, then a; if a or b, then a and b can't both not be true; it's wrong to torture people for fun, etc.). Some other equally intelligent person and I have been thinking about P for an equal amount of time. I learn that she believes that not-P. It seems entirely appropriate for me to demote her in this case. If you deny this, suppose now we are deciding on some proposition Q and I knew only that she had got P wrong. As you would agree, her past performance (on P) is pro tanto reason to demote with respect to Q. How can it then not also be pro tanto reason to demote with respect to P? [aside: the second example of an obvious truth I gave is denied by supervaluationists]. In short, how could epistemic peerhood not be in part determined by performance on the object-level reasons?

In some of these cases, it also seems that in order to justifiably demote, one doesn't need to offer an account of why the other party is biased that is independent of the object-level reasons.

A separate point: it seems that today and historically there are and have been pockets of severe epistemic error, e.g. in the 19th century almost all of the world's most intelligent philosophers thought that idealism was true; a large chunk of political philosophers believe that public reason is true; I'm sure there are lots of examples outside philosophy.

In this context, selective epistemic exceptionalism seems appropriate for a community that has taken lots of steps to debias. There's still very good reason to be aware of what the rest of the epistemic community thinks and why they think it, and this is a (weaker) form of modesty.

Minor point: epistemic peer judgements are independent of whether you disagree with them or not. I'm happy to indict people who are epistemically unvirtuous even if they happen to agree with me.

I generally think one should not use object level disagreement to judge peerhood, given the risk of entrenchment (i.e. everyone else thinks I'm wrong, so I conclude everyone else is wrong and an idiot).

For 'obvious truths' like P, there's usually a lot of tacit peer agreement in background knowledge. So the disagreement is with you and these other people, which provides some evidence for demotion, rather than disagreement with you alone. I find it hard to disentangle intuitions where one removes this rider, and in these cases I'm not so sure about whether steadfastness + demotion is the appropriate response. Demoting supervaluationists as peers re. supervaluationism because they disagree with you about it, for example, seems a bad idea.

In any case, almost by definition it would be extraordinarily rare for people we think prima facie are epistemic peers to disagree on something sufficiently obvious. In real-world cases where it's some contentious topic on which reasonable people disagree, one should not demote people based on their disagreement with you (or, perhaps, in these cases the evidence for demotion is sufficiently trivial that it is heuristically better ignored).

Modest accounts shouldn't be surprised by expert error. Yet being able to determine these instances ex post gives little steer as to what to do ex ante. Random renegade schools of thought assuredly have an even poorer track record. If it were the case that the EA/rationalist community had a good track record of outperforming expert classes in their field, that would be a good reason for epistemic exceptionalism. Yet I don't see it.

To support a claim that this applies in "virtually all" cases, I'd want to see more engagement with pragmatic problems applying modesty, including:

  • Identifying experts is far from free epistemically.
  • Epistemic majoritarianism in practice assumes that no one else is an epistemic majoritarian. Your first guess should be that nearly everyone else is iff you are, in which case you should expect information cascades due to the occasional overconfident person (see the sketch after this list). If other people are not majoritarians because they're too stupid to notice the considerations for it, then it seems a bit silly to defer to them. On the other hand, if they're not majoritarians because they're smarter than you are... well, you mention this, but this objection seems to me to be obviously fatal, and the only thing left is to explain why the wisdom of the majority disagrees with the epistemically modest.
  • The vast majority of information available about other people's opinions does not differentiate clearly between their impressions and their beliefs after adjusting for their knowledge about others' beliefs.
  • People lie to maintain socially desirable opinions.
  • Control over others' opinions is a valuable social commodity, and apparent expertise gives one some control.
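To make the cascade point concrete, here is a minimal toy simulation (a sketch under deliberately crude assumptions; the function and parameter names are purely illustrative): one overconfident early speaker announces their private signal, everyone after them simply copies the running majority of announcements, and the whole group ends up wherever that first speaker happened to land, right or wrong.

```python
import random

def simulate(n_agents=100, signal_accuracy=0.7, seed=0):
    """Toy cascade among epistemic majoritarians (illustrative only).

    The ground truth is True. Each agent privately receives a signal
    that is correct with probability `signal_accuracy`. Agent 0 is
    overconfident and announces their private signal; every later
    agent is a majoritarian who announces whatever the majority of
    previous announcements says, ignoring their own signal.
    """
    rng = random.Random(seed)
    truth = True
    announcements = []
    for i in range(n_agents):
        private_signal = truth if rng.random() < signal_accuracy else not truth
        if i == 0:
            announcement = private_signal  # overconfident early speaker
        else:
            yes_votes = sum(announcements)
            announcement = yes_votes > len(announcements) - yes_votes
        announcements.append(announcement)
    return announcements

if __name__ == "__main__":
    trials = 1000
    wrong = sum(not simulate(seed=s)[-1] for s in range(trials))
    # Despite 100 agents each holding a 70%-accurate private signal,
    # roughly 30% of runs end with everyone asserting the wrong answer,
    # because the first speaker's error propagates through pure deference.
    print(f"{wrong / trials:.0%} of runs converge on a falsehood")
```

A majority vote over the same hundred independent signals would almost never be wrong; the failure comes entirely from everyone deferring before the private signals are pooled.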

In particular, the last two factors (different sorts of dishonesty) are much bigger deals if most uninformed people copy the opinions of apparently informed people instead of saying "I have no idea".

Overall, I agree that when you have a verified-independent, verified-honest opinion from a peer, one should weight it equally to one's own, and defer to one's verified epistemic superiors - but this has little to do with real life, in which we rarely have that opportunity!

Thanks for writing this–as basically everyone else has said, it's really beautifully written.

I share others' (cf. Claire Zabel's comment) gratitude for the distinction you make between publicly reporting one's inside view while privately acting on one's outside view. This seems to raise a serious question about what is public and what is private. For instance, donation decisions may seem like a very private decision (unless declared publicly), but as an organization starts to grow, people will interpret that as a signal of people's views, which can lead to double-counting. I think this is actually something worth worrying about: while I think the most vocal EAs lean too far toward immodesty in expression of attitudes, EAs writ large do seem to act to a serious degree based on others' actions (at least in animal advocacy). The methodological individualism of economics and other fields that guide EAs may cause people to systematically overestimate how private certain decisions are.

Another worry I have is that people may systematically confuse expert consensus as having a wider scope for the following reason: experts who study Y may pronounce an opinion not on Y but on 'Y given X' even though they have not studied X. Economists, for instance, will often make explicit or just-shy-of-explicit claims about whether a policy is good or not, but the goodness of policies typically depends on empirical facts that most economists are equipped to consider and normative claims that economists may not be equipped to consider. It strikes me that we need to have a fine scalpel to see that we should accept economists' consensus on the direction and magnitude of policies' effects but look to political philosophers or ethicists for judgments of those effects.

I think your conclusion is worth being a post on its own, and would potentially get read by more people in a shorter format.

It may also be that the people you'd want to read to the end wouldn't read a post as in-depth as this.

On a separate small point, I think your probability estimate for ESP is too low, for two reasons:

Firstly, it is a taboo topic (like UFOs and the Loch Ness monster), which scientists are therefore far more likely to dismiss from a position of ignorance, or with weakish arguments (e.g. 'it lacks an explanatory mechanism', 'much of the research methodology is flawed', or 'some of the research has been on fraudsters' - hardly disproof). Few skeptics have domain expertise, i.e. experience of having conducted or investigated research in the area.

Secondly, ESP covers quite a range of rather distinct phenomena. Only one has to be right for ESP to be true. And I'm not sure that all would require completely novel scientific principles (e.g. unknown physical forces); and the fact that our understanding of physics has gaps, and our understanding of consciousness certainly does, may well leave room for some form of ESP to be compatible with current science (not that that is essential).

Great post!

I think the question of "how do we make epistemic progress" needs a bit more attention.

Continuing the analogy with the EMH (which I like), I think the usual answer is that there are some $20 bills on the floor, and that some individuals are either specially placed to see that, or have a higher risk tolerance and so are willing to take the risk that it's a prank.

This suggests similar cases in epistemics: some people really are better situated (the experts), but perhaps we should also consider the class of "epistemic risk takers", who hold riskier beliefs in the attempt to get "ahead of the curve". However, as with startup founders, we should take such people with more than a pinch of salt. We may want to "fund" the ecosystem as a whole, because on average the one successful epistemic entrepreneur pays for the rest, but any individual is still likely to be worse than the consensus.

So that suggests that we should encourage people to think riskily, but usually discount their "risky" beliefs when making practical decisions until they have proven themselves. And this actually seems to reflect our behaviour: people are much more epistemically modest when the money is on the table. Being more explicit about which beliefs are "speculative" seems like it would be an improvement, though.

Finally, we also have to ask how people become experts. Again, in the economic analogy, people end up well situated to start businesses often through luck, sometimes through canny positioning, and sometimes through irrational pursuit of an idea. Similarly, to become an expert in X one has to invest a lot of time and effort, but we may want people to speculate in this domain too, and become experts in unlikely things so that we can get good credences on topics that may, with low probability, turn out to be important.

(Meta: I was confused about whether to comment on LW2 or here. Cross-posting comments seems silly... or is it?)

Great article. I'm very late to the party in reading it & commenting, but I hope not too late to be of use!

I have three further reasons for epistemic immodesty in some circumstances. They all involve experts, or those who follow their advice, being overconfident about the experts' relevant knowledge. (Though I note your comments about debunking experts; none of these arguments show an amateur is better than some other, probably small, set of experts who have taken these considerations into account.)

HIDDEN PREFERENCES

You mention that expert views aren't relevant in matters of taste, i.e. preference. However, expert views are often based on non-explicit preferences, which some experts may even be unaware of themselves.

To start with a clear situation where preferences are involved: If I'm looking for a house to buy and trying to decide which one to choose, I may well consult experts in the field, such as an estate agent (realtor), a mortgage advisor, and an architect (if it may need building work). They may advise that I can't afford a house more than $x, or it will cost $y to do up, etc. But even with all their expert advice, this won't necessarily settle the matter of which house to buy, because I also have to *like* the house in question, want to live in that area, etc. So my decision involves both expert factual opinion and my personal preference; and I am the sole expert on the latter.

Now to take a less clear situation, currently topical in the UK: Brexit. Despite years of debate about this, which often includes discussion of experts and whether they should be trusted, I don't think I've heard anyone state clearly that it too mixes expert opinion and preference. Most economists say Brexit will harm the economy, and most voters opposed to Brexit assume this simply entails Brexit is a bad thing. But of course the issue is not only about money - various other considerations are involved (e.g. self-determination) - and the trade-off between these is a matter of preference. Some people with unusual preferences may have coherent reasons to oppose Brexit (e.g. I spoke to someone who voted based on the fact that animal welfare is taken more seriously in the UK than most other EU countries, a consideration she regarded as more important than the economy). So this is an example of a 'semi-hidden' preference - one where many people assume expert opinion is a silver bullet - perhaps including the experts themselves - and overlook the element of preference.

A different example is government guidelines on alcohol consumption. In the UK men are advised by experts to drink no more than (I think) 14 units per week. However, this advice is based on a trade-off between health and pleasure: if you really enjoy alcohol you may be happy to exchange a risk of significantly reduced health or longevity for drinking much more than 14 units. This trade-off is a preference, which the experts have made for you. (And AFAIK the trade-off they chose is arbitrary, not even based on research into, say, average preferences.)

Other topics may include preferences so hidden that even the experts are hardly aware of them. An example in EA would be the use of DALYs and QALYs (disability/quality-adjusted life years) as human welfare metrics in assessing charities & interventions. Some who work with these metrics may overlook, or perhaps be unaware of, their shortcomings. DALYs and QALYs as currently defined assume that no condition is worse than death - which is inconsistent with the existence of suicide and euthanasia. When ordinary people are surveyed, their views on this vary widely - some taking the (perhaps religious) position that nothing is worse than death, and suicide/euthanasia should never be allowed, whereas others have no problem with the idea of suicide/euthanasia to escape prolonged untreatable agony, for example. So the mere use of these units involves tacitly taking a position on this, i.e. a hidden preference. A resulting expert view that X charity or intervention is better than Y is therefore partly objective and partly subjective; the expert themself may overlook this fact, or even (when involving technical philosophical issues) be unaware of it.
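For concreteness, the built-in assumption can be seen from a sketch of the conventional DALY formulation (simplifying the standard Global Burden of Disease definitions):

\[
\text{DALY} = \text{YLL} + \text{YLD}, \qquad \text{YLD} = DW \times L, \qquad DW \in [0, 1],
\]

where $DW$ is the disability weight of a health state ($0$ for full health, $1$ for a state counted as equivalent to death) and $L$ is the years lived in that state. Because $DW$ cannot exceed $1$, a year lived in any condition never adds more burden than a year of life lost, so the metric cannot register any condition as worse than death.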

Other unstated assumptions are widespread in EA, e.g. that saving lives is a good thing (even though the world may be overpopulated), or that the prevention of merely potential future humans by mass extinction is a bad thing (even though contraception is fine).

In such cases, a non-expert who identifies such a hidden preference that they don't share may well have good reason to disregard the expert opinion.

SHAKY FOUNDATIONS

Relatedly, there is the issue of core assumptions that are largely unquestioned within a scientific field. A classic example is induction: physics assumes that just because in the past things seem to have behaved in a regular fashion, they will continue to do so. This is the basis of the belief in physical laws (and other laws of nature). Philosophers have long questioned this assumption; there really may be no reason to assume the sun will rise tomorrow, or that the speed of light was the same yesterday or a million years ago; which undermines all kinds of experiments and models. I expect many physicists are only dimly aware of this, know little of the arguments involved, and perhaps regard it as a quasi-theological debate not worth serious attention.

As with DALYs and QALYs, core assumptions like induction are often shaky, and the shakiness is often only taken seriously (or even known about) by those outside the field, e.g. philosophers. Indeed, articles of faith are often left unquestioned by true believers, lest they turn out to be an Achilles heel, and (mixing more metaphors) the whole edifice is built on sand. To question foundational beliefs may be heresy.

So an amateur outsider may well be more aware of such problems than an expert in the field; and may therefore be justified in using them to dismiss expert opinion, or at least, to take it with a big pinch of salt.

NARROW EXPERTISE

Many experts are only expert in an extremely narrow field, yet may be assumed to have a broader range of expertise (and some experts may also believe this themselves).

Apologies, but the clearest example I can think of is myself! At one time I was one of just a handful of world experts in an extremely narrow field - the music notation software industry. (As I owned a company in this field.) My knowledge was extremely in-depth - I had spent years coding this kind of software, knew endless obscure feature requirements, knew all about the market, wrote manuals and brochures, etc. Yet in other respects I knew less than many amateurs. I had never used (and hardly even seen) any music notation software other than my own company's. I knew even less about other types of music software (e.g. sequencers), used by millions of people, often my own customers. So I was a world expert in a very narrow field, yet an ignoramus both in aspects of my own field, and in very close fields.

The same is presumably true elsewhere. Amateurs may know as much as a world expert who is only slightly outside their very narrow field, or even on topics within their specialism. And at least occasionally, experts are unaware of their ignorance on these things. That is, they may make the same false assumptions as others do about the breadth & depth of their expertise.

(An example: the book The Oxford Companion to the Mind is an encyclopedia edited by the eminent psychologist Richard Gregory. Some of the entries in the original edition are by Gregory himself, despite dealing with philosophy of mind & metaphysics, topics evidently outside his expertise. They are amateurish, making confusions that would embarrass a philosophy undergraduate. Even the blurb on the cover jacket casually conflated 'brain' and 'mind' in ways only an ignoramus would do. When I was a philosophy student I was so astonished by this I almost wrote a letter to Gregory suggesting he get someone with domain expertise to rewrite his entries.)

Another reason to not have too much modesty within society is that it makes expert opinion very appealing to subvert. I wrote a bit about that here.

Note that I don't think my views about the things I believe to be subverted/unmoored would necessarily be correct, but that the first order of business would be to try and build a set of experts with better incentives.
