Comment author: Robert_Wiblin 17 November 2017 12:16:33AM *  8 points [-]

It strikes me as much more prevalent for people to be overconfident in their own idiosyncratic opinions. If you see half of people are 90% confident in X and half of people are 90% confident in not-X, then you know on average they are overconfident. That's how most of the world looks to me.
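
To make the arithmetic concrete, here is a minimal sketch in Python (the population and numbers are illustrative, not from any survey) of why such a split population must be overconfident on average: at most half of them can be right about X, so average accuracy is at most 50% while average stated confidence is 90%.

    # A population split 50/50 between "90% confident in X" and
    # "90% confident in not-X". Exactly one of X / not-X is true,
    # so at most half the population can be correct.
    n = 1000                    # hypothetical population size
    confidence = 0.90           # everyone's stated confidence
    correct = n // 2            # the half that happens to be right

    avg_accuracy = correct / n               # 0.50
    gap = confidence - avg_accuracy          # 0.40

    print(f"average stated confidence: {confidence:.0%}")    # 90%
    print(f"average accuracy:          {avg_accuracy:.0%}")  # 50%
    print(f"average overconfidence:    {gap:.0%}")           # 40%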

But no matter - they probably won't suffer much, because the meek do not inherit the Earth, at least not in this life.

People follow confidence in leaders, generating the pathological start-up founder who is sure they're 100x more likely to succeed than the base rate; someone who portrays themselves as especially competent in a job interview is more likely to be hired than someone who accurately appraises their merits; and I don't imagine deferring to a boring consensus brings more romantic success than elaborating on one's exciting contrarian opinions.

Given all this, it's unsurprising evolution has programmed us to place an astonishingly high weight on our own judgement.

While there are some social downsides to seeming arrogant, people who preach modesty here advocate going well beyond what's required to avoid triggering an anti-dominance reaction in others.

Indeed, even though I think strong modesty is epistemically the correct approach on the basis of reasoned argument, I do not and cannot consistently live and speak that way, because all my personal incentives are lined up in favour of me portraying myself as very confident in my inside view.

In my experience it requires a monastic discipline to do otherwise, a discipline almost none possess.

Comment author: RobBensinger 18 November 2017 10:19:29PM 3 points [-]

Cross-posting a reply from FB:

It strikes me as much more prevalent for people to be overconfident in their own idiosyncratic opinions. If you see half of people are 90% confident in X and half of people are 90% confident in not-X, then you know on average they are overconfident. That's how most of the world looks to me.

This seems consistent with Eliezer's claim that "commenters on the Internet are often overconfident" while EAs and rationalists he interacts with in person are more often underconfident. In Dunning and Kruger's original experiment, the worst performers were (highly) overconfident, but the best performers were underconfident.

Your warnings that overconfidence and power-grabbing are big issues seem right to me. Eliezer's written a lot warning about those problems too. My main thought about this is just that different populations can exhibit different social dynamics and different levels of this or that bias; and these can also change over time. Eliezer's big-picture objection to modesty isn't "overconfidence and power-grabbing are never major problems, and you should never take big steps to try to combat them"; his objection is "biases vary a lot between individuals and groups, and overcorrection in debiasing is commonplace, so it's important that whatever debiasing heuristics you use be sensitive to context rather than generically endorsing 'hit the brakes' or 'hit the accelerator'".

He then makes the further claim that top EAs and rationalists as a group are in fact currently more prone to reflexive deference, underconfidence, fear-of-failure, and not-sticking-their-neck-out than to the biases of overconfident startup founders. At least on Eliezer's view, this should be a claim that we can evaluate empirically, and our observations should then inform how much we push against overconfidence v. underconfidence.

The evolutionary just-so story isn't really necessary for that critique, though it's useful to keep in mind if we were originally thinking that humans only have overactive status-grabbing instincts, and don't also have overactive status-grab-blocking instincts. Overcorrection is already a common problem, but it's particularly likely if there are psychological drives pushing in both directions.

Comment author: JamesDrain 12 November 2017 10:48:26PM *  1 point [-]

Newcomb's problem isn't a challenge to causal decision theory. I can solve Newcomb's problem by committing to one-boxing in any of a number of ways, e.g. signing a contract or building a reputation as a one-boxer. After the boxes have already been placed in front of me, however, I can no longer influence their contents, so it would be good if I two-boxed when the rewards outweighed the penalty, e.g. if it turned out the contract I signed was void, or if I don't care about my one-boxing reputation because I don't think I'm going to play this game again in the future.
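
A minimal sketch of the dominance reasoning here, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff one-boxing was predicted, $1,000 in the transparent box): once the boxes are filled, two-boxing is worth $1,000 more whatever the opaque box contains.

    # Standard (assumed) Newcomb payoffs: the opaque box holds
    # $1,000,000 iff the predictor predicted one-boxing; the
    # transparent box always holds $1,000.
    BIG, SMALL = 1_000_000, 1_000

    # Once the boxes are filled, their contents are fixed, and
    # two-boxing adds SMALL in either case:
    for opaque in (BIG, 0):
        one_box = opaque
        two_box = opaque + SMALL
        print(f"opaque=${opaque:,}: one-box=${one_box:,}, two-box=${two_box:,}")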

The "wishful thinking" hypothesis might just apply to me then. I think it would be super cool if we could spontaneously cooperate with aliens in other universes.

Edit: Wow, ok I remember what I actually meant about wishful thinking. I meant that evidential decision theory literally prescribes wishful thinking. Also, if you made a copy of a purely selfish person and then told them of the fact, I still think it would be rational to defect. Of course, if they could commit to cooperating before being copied, then that would be the right strategy.
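
For the copy case, a small payoff sketch (with assumed prisoner's-dilemma payoffs; the numbers are illustrative) helps locate the crux: defection dominates if the copy's choice is treated as independent of yours, while the mirrored outcomes favour cooperation if an exact copy is guaranteed to choose as you do.

    # Assumed prisoner's-dilemma payoffs (your utility):
    # (my_move, copy_move) -> payoff
    payoff = {("C", "C"): 3, ("C", "D"): 0,
              ("D", "C"): 5, ("D", "D"): 1}

    # Treating the copy's choice as independent, defection dominates:
    for other in ("C", "D"):
        print(f"copy plays {other}: C -> {payoff['C', other]}, "
              f"D -> {payoff['D', other]}")

    # If an exact copy mirrors your choice, only the diagonal is reachable:
    print(f"mirrored: C -> {payoff['C', 'C']}, D -> {payoff['D', 'D']}")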

Comment author: RobBensinger 13 November 2017 03:47:37AM *  1 point [-]

After the boxes have already been placed in front of me, however, I can no longer influence their contents, so it would be good if I two-boxed

You would get more utility if you were willing to one-box even when there's no external penalty or opportunity to bind yourself to the decision. Indeed, functional decision theory can be understood as a formalization of the intuition: "I would be better off if only I could behave in the way I would have precommitted to behave in every circumstance, without actually needing to anticipate each such circumstance in advance." Since the predictor in Newcomb's problem fills the boxes based on your actual action, regardless of the reasoning or contract-writing or other activities that motivate the action, this suffices to always get the higher payout (compared to causal or evidential decision theory).
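
As a sketch of this point, under the same assumed payoffs and with a perfectly accurate predictor: comparing fixed policies rather than actions taken after the boxes are filled, the one-boxing policy always walks away with more.

    # Assumed: a perfectly accurate predictor, so the opaque box
    # contains $1,000,000 exactly when the agent's policy is to one-box.
    BIG, SMALL = 1_000_000, 1_000

    def payout(policy):
        opaque = BIG if policy == "one-box" else 0  # prediction matches policy
        return opaque if policy == "one-box" else opaque + SMALL

    for policy in ("one-box", "two-box"):
        print(f"{policy}: ${payout(policy):,}")
    # one-box: $1,000,000 ; two-box: $1,000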

There are also dilemmas where causal decision theory gets less utility even if it has the opportunity to precommit to the dilemma; e.g., retro blackmail.

For a fuller argument, see the paper "Functional Decision Theory" by Yudkowsky and Soares.

Comment author: Denkenberger 08 November 2017 10:57:45PM 2 points [-]

As for the value of college for non-doctors: what about the study of randomly chosen GI Bill recipients, which found that college did have significant causal benefits (i.e. it was not just correlation from colleges choosing better-qualified people)?

Comment author: RobBensinger 09 November 2017 02:21:17AM *  1 point [-]

I'm not an expert in this area and haven't seen that study, but I believe Eliezer generally defers to Bryan Caplan's analysis on this topic. Caplan's view, discussed in The Case Against Education (which is scheduled to come out in two months), is that something like 80% of the time students spend in school is signaling, and something like 80% of the financial reward students enjoy from school is due to signaling. So the claim isn't that school does nothing to build human capital, just that a very large chunk of schooling is destroying value.

Comment author: Gregory_Lewis 04 November 2017 08:22:29AM 0 points [-]

I don't think the account of the relative novelty of the 'LW approach' to philosophy is a good fit for the available facts; "relatively" new is, I suggest, a pretty relative term.

You can find similar reduction-esque sensibilities among the logical positivists around a century ago, and a very similar approach from Quine about half a century ago. In the case of the logical positivists, they enjoyed a heyday amongst the philosophical community, but gradually fell from favour due to shortcomings other philosophers identified; I suggest Quine is a sufficiently 'big name' in philosophy that his approach was at least widely appreciated by the relevant academic communities.

This is challenging to reconcile with an account of "Rationality's philosophical framework allows one to confidently get to the right answer across a range of hard philosophical problems, and the lack of assent from domain experts is best explained by their not being aware of it". Closely analogous approaches were tried a very long time ago, and haven't been found extraordinarily persuasive (even if we subset to naturalists). It doesn't help that when the 'LW-answer' is expounded (e.g. in the sequences) the argument offered isn't particularly sophisticated (and often turns out to be recapitulating extant literature), nor does it usually deign to address objections raised by dissenting camps.

I suggest a better fit for this data is the rationality approach looks particularly persuasive to people without subject matter expertise.

Re. decision theory. Beyond the general social epistemological steers (i.e. the absence of good decision theorists raving about the breakthrough represented by MIRI-style decision theory, despite many of them having come into contact with this work one way or another), remarks I've heard often target 'technical quality': Chalmers noted in a past AMA his disappointment that this theory had not been made rigorous (maybe things have changed since), and I know one decision theorist's view is that the work also isn't rigorous and is a bit sloppy (on Carl's advice, I'm trying to contact more). Not being a decision theorist myself, I haven't delved into the object-level considerations.

Comment author: RobBensinger 04 November 2017 04:53:28PM *  1 point [-]

The "Cheating Death in Damascus" and "Functional Decision Theory" papers came out in March and October, so I recommend sharing those, possibly along with the "Decisions Are For Making Bad Outcomes Inconsistent" conversation notes. I think these are much better introductions than e.g. Eliezer's old "Timeless Decision Theory" paper.

Quineans and logical positivists have some vague attitudes in common with people like Drescher, but the analogy seems loose to me. If you want to ask why other philosophers didn't grab all the low-hanging fruit in areas like decision theory or persuade all their peers in areas like philosophy of mind (which is an interesting set of questions from where I'm standing, and one I'd like to see examined more too), I think a more relevant group to look at will be technically minded philosophers who think in terms of Bayesian epistemology (and information-theoretic models of evidence, etc.) and software analogies. In particular, analogies that are more detailed than just "the mind is like software", though computationalism is an important start. A more specific question might be: "Why didn't E.T. Jaynes' work sweep the philosophical community?"

Comment author: Gregory_Lewis 02 November 2017 11:59:46PM 1 point [-]

I agree such an object-level demonstration would be good evidence (although of course one-sided, for reasons Pablo ably articulates elsewhere). I regret I can't provide this. On many of these topics (QM, p-zombies) I don't pretend any great knowledge; for others (e.g. theism), I can't exactly find the 'rationalist case for atheism' crisply presented.

I am naturally hesitant to infer, from the (inarguable) point that diffusion of knowledge and ideas within and across fields takes time, that the best explanation for disagreement is that rationalists are just ahead of the curve. I enjoyed the small parts of Drescher I read, but I assume many reasonable philosophers are aware of his work and yet are not persuaded. Many things touted in philosophy (and elsewhere) as paradigm-shifting insights transpire to be misguided, and betting on some of them based on your personal assent at the object level looks unlikely to go well.

I consider the decision theory work a case in point. The view that FDT/UDT/TDT is a great advance on the decision-theoretic state of the art is a view that is very tightly circumscribed to the rationalist community itself. Of course, many decision theorists are simply ignorant of it, given it is expounded outside the academic press. Yet others are not: there were academic decision theorists who attended some MIRI workshops, others who have been shown versions (via Chalmers, I understand), and a few who have looked at MIRI's stuff on Arxiv and similar. Yet the prevailing view of these seems to be at best lukewarm, and at worst scathing.

This seems challenging to reconcile with a model of rationalists just getting to the great insights early before everyone else catches up. It could be that the decision theorist community is so diseased that it cannot appreciate the technical breakthrough MIRI-style decision theory promises. Yet I find the alternative hypothesis, where it is the rationalist community which is diseased and diving down a decision-theory dead end without the benefit of much interaction with decision theory experts to correct them, somewhat more compelling.

Comment author: RobBensinger 04 November 2017 12:38:07AM *  2 points [-]

To be clear, I'm not saying that the story I told above ("here are some cool ideas that I claim haven't sufficiently saturated the philosophy community to cause all the low-hanging fruit to get grabbed, or to produce fieldwide knowledge and acceptance in the cases where it has been grabbed") should persuade arbitrary readers that people like Eliezer or Gary Drescher are on the right track; plenty of false turns and wrong solutions can also claim to be importing neglected ideas, or combining ideas in neglected ways. I'm just gesturing at one reason why I think it's possible at all to reach confident correct beliefs about lots of controversial claims in philosophy, in spite of the fact that philosophy is a large and competitive field whose nominal purpose is to answer these kinds of questions.

I'm also implicitly making a claim about there being similarities between many of the domains you're pointing to that help make it not just a coincidence that one (relatively) new methodology and set of ideas can put you ahead of the curve on multiple issues simultaneously (plus produce multiple discovery and convergence). A framework that's unusually useful for answering questions related to naturalism, determinism, and reflective reasoning can simultaneously have implications for how we should (and shouldn't) be thinking about experience, agency, volition, decision theory, and AI, among other topics. To some extent, all of these cases can be thought of as applications of a particular naturalist/reductionist toolkit (containing concepts and formalisms that aren't widely known among philosophers who endorse naturalism) to new domains.

I'm curious what criticisms you've heard of MIRI's work on decision theory. Is there anything relevant you can link to?

Comment author: Gregory_Lewis 31 October 2017 05:51:16PM *  5 points [-]

Thanks for your helpful reply. I think your bullet points do track the main sources of disagreement, but I venture an even crisper summary:

I think the Eliezer-style 'immodest' view comprises two key claims:

1) There are a reasonably large number of cases where, due to inadequate equilibria or similar, the classes we might take to be experts are in fact going to be sufficiently poorly optimised for the truth that a reasonable rationalist or similar could be expected to do better.

2) We can reliably identify these cases.

If they're both true we can license ourselves to 'pick fights' where we make confident bets against expert consensus (or lack thereof) in the knowledge we are more likely than not to be right. If not, then it seems modesty is the better approach: it might be worth acting 'as if' our contra-expert impression is right and doing further work (because we might discover something important), but we should nonetheless defer to the expert consensus.

It seems the best vindication of the immodest view as Eliezer defends it would be a track record of such cases, on his behalf or that of the wider rationalist community. You correctly anticipate that I would include the track record here as highly adverse, for two reasons:

First, when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on be similarly deeply unimpressed with the sophistication of argument offered.

Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (i.e. it seems unlikely that convincing proof of physicalism will be discovered any time soon). So what we observe is compatible both with 'the rationalist community is right and this field is diseased (and so gets it wrong)' and with 'the rationalist community is greatly overconfident and the field is on the right track'. That said, I take the number of fields which the rationalist community deems sufficiently diseased that it can do better to be implausible on priors.

The best thing would be a clear track record to judge - single cases, either way, don't give much to go on, as neither modesty nor immodesty would claim they should expect to win every single time. I see the rationalist community having one big win (re. AI), yet little else. That Eliezer's book offers two pretty weak examples (e.g. BoJ, where he got the argument from a recognised authority, and an n=1 medical intervention), and reports one case against (e.g. a big bet on Taubes), doesn't lead me to upgrade my pretty autumnal view of the track record.

Comment author: RobBensinger 02 November 2017 09:04:05PM *  1 point [-]

Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (i.e. it seems unlikely that convincing proof of physicalism will be discovered any time soon).

I think a convincing object-level argument could be given; you could potentially show on object-level grounds why the specific arguments or conclusions of various rationalists are off-base, thereby at least settling the issue (or certain sub-issues) to the satisfaction of people who take the relevant kinds of inside-view arguments sufficiently seriously in the first place. I'd be particularly interested to hear reasons you (or experts you defer to) reject the relevant arguments against gods, philosophical zombies, or objective collapse / non-realism views in QM.

If you mean that a convincing expert-consensus argument is likely to be far off, though, then I agree about that. As a start, experts' views and toolkits in general can be slow to change, particularly in areas like philosophy.

I assume one part of the model Eliezer is working with here is that it can take many decades for new conceptual discoveries to come to be widely understood, accepted, and used in a given field, and even longer for these ideas to spill over into other fields. E.g., some but not all philosophers have a deep understanding of Shannon, Solomonoff, and Jaynes' accounts of inductive inference, even though many of the key insights have been around for over fifty years at this point. When ideas spread slowly, consensus across all fields won't instantly snap into a new state that's maximally consistent with all of the world's newest developments, and there can be low-hanging fruit for the philosophers who do help import those ideas into old discussions.

This is why Eliezer doesn't claim uniqueness for his arguments in philosophy; e.g., Gary Drescher used the same methodology and background ideas to arrive at largely the same conclusions, largely independently, as far as I know.

I'd consider the big advances in decision theory from Wei Dai and Eliezer to be a key example of this, and another good example of independent discovery of similar ideas by people working with similar methodologies and importing similar ideas into a relatively old and entrenched field. (Though Wei Dai and Eliezer were actively talking to each other and sharing large numbers of ideas, so the independence is much weaker.)

You can find most of the relevant component ideas circulating before that, too; but they were scattered across multiple fields in a way that made them less likely to get spontaneously combined by specialists busy hashing out the standard sub-sub-arguments within old paradigms.

Comment author: WillPearson 31 October 2017 02:49:53PM 0 points [-]

Ah, it has been a while since I engaged with this stuff. That makes sense. I think we are talking past each other a bit though. I've adopted a moderately modest approach to QM, since I've not touched it in a while and I expect the debate has moved on a bit.

We started from a criticism of a particular position (the copenhagen interpretation) which I think is a fair thing to do for the modest and immodest. The modest person might misunderstand a position and be able to update themselves better if they criticize it and get a better explanation.

The question is what happens when you criticize it and don't get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions or trust the specialists to figure it out?

I'm curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here, which seems to correspond to the QIT interpretation on the wiki page of interpretations, and fits with probabilities being in the mind).

Comment author: RobBensinger 31 October 2017 03:38:42PM *  1 point [-]

I don't think we should describe all instances of deference to any authority, all uses of the outside view, etc. as "modesty". (I don't know whether you're doing that here; I just want to be clear that this at least isn't what the "modesty" debate has traditionally been about.)

The question is what happens when you criticize it and don't get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions or trust the specialists to figure it out?

I don't think there's any general answer to this. The right answer depends on the strength of the object-level arguments; on how much reason you have to think you've understood and gleaned the right take-aways from those arguments; on your model of the physics community and other relevant communities; on the expected information value of looking into the issue more; on how costly it is to seek different kinds of further evidence; etc.

I'm curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here, which seems to correspond to the QIT interpretation on the wiki page of interpretations, and fits with probabilities being in the mind).

In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we'd naively think it is, etc.), then I'd be interested to hear more details. If the idea is that there are ways to talk about the experimental data without committing ourselves to a claim about why the Born rule holds, then I agree with that, though it obviously doesn't answer the question of why the Born rule holds. If the idea is that there are no facts of the matter outside of observers' data, then I feel comfortable dismissing that view even if a non-negligible number of physicists turn out to endorse it.

I also feel comfortable having lower probability in the existence of God than the average physicist does; and "physicists are the wrong kind of authority to defer to about God" isn't the reasoning I go through to reach that conclusion.

Comment author: WillPearson 31 October 2017 12:59:54PM 0 points [-]

and Eliezer hasn't endorsed any solution either, to my knowledge)

Huh, he seemed fairly confident about endorsing MWI in his sequence here

Comment author: RobBensinger 31 October 2017 01:16:17PM *  1 point [-]

He endorses "many worlds" in the sense that he thinks the wave-function formalism corresponds to something real and mind-independent, and that this wave function evolves over time to yield many different macroscopic states like our "classical" world. I've heard this family of views called "(QM) multiverse" views to distinguish this weak claim from the much stronger claim that, e.g., decoherence on its own resolves the whole question of where the Born rule comes from.

From a 2008 post in the MWI sequence:

One serious mystery of decoherence is where the Born probabilities come from, or even what they are probabilities of.

[... W]hat does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? Here's the map—where's the territory?

I don't know. It's an open problem. [...]

This problem is even worse than it looks, because the squared-modulus business is the only non-linear rule in all of quantum mechanics. Everything else—everything else—obeys the linear rule that the evolution of amplitude distribution A, plus the evolution of the amplitude distribution B, equals the evolution of the amplitude distribution A + B.
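
To restate the quoted point in symbols (a standard formulation, not specific to the post): quantum time evolution is linear in the state, while the Born rule is quadratic in the amplitudes, which is what makes it the odd rule out.

    % Linearity of quantum time evolution (U unitary; \psi_A, \psi_B states):
    \[ U(\psi_A + \psi_B) = U\psi_A + U\psi_B \]
    % The Born rule, by contrast, is quadratic in the amplitude:
    \[ P(i) = \bigl|\langle i \mid \Psi \rangle\bigr|^{2} \]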

Comment author: WillPearson 31 October 2017 08:51:24AM 1 point [-]

Concerning QM: I think Eliezer's correct that Copenhagen-associated views like "objective collapse" and "quantum non-realism" are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham's razor. I'm happy to talk more about this too; I think the object-level discussions are important here.

I don't think the modest view (at least as presented by Gregory) would believe in any of the particular interpretations as there is significant debate still.

The informed modest person would go, "You have object-level reasons to dislike these interpretations. Other people have object-level reasons to dislike your interpretations. Call me when you have hashed it out or done an experiment to pick a side". They would go on and do QM without worrying too much about what it all means.

Comment author: RobBensinger 31 October 2017 12:50:21PM *  0 points [-]

Yeah, I'm not making claims about what modest positions think about this issue. I'm also not endorsing a particular solution to the question of where the Born rule comes from (and Eliezer hasn't endorsed any solution either, to my knowledge). I'm making two claims:

  1. QM non-realism and objective collapse aren't true.
  2. As a performative corollary, arguments about QM non-realism and objective collapse are tractable, even for non-specialists; it's possible for non-specialists to reach fairly confident conclusions about those particular propositions.

I don't think either of those claims should be immediately obvious to non-specialists who completely reject "try to ignore object-level arguments"-style modesty, but who haven't looked much into this question. Non-modest people should initially assign at least moderate probability to both 1 and 2 being false, though I'm claiming it doesn't take an inordinate amount of investigation or background knowledge to determine that they're true.

(Edit re Will's question below: In the QM sequence, what Eliezer means by "many worlds" is only that the wave-function formalism corresponds to something real in the external world, and that this wave function evolves over time to yield many different macroscopic states like our "classical" world. I've heard this family of views called "(QM) multiverse" views to distinguish this weak claim from the much stronger claim that, e.g., decoherence on its own resolves the whole question of where the Born rule comes from.)
