Comment author: Pablo_Stafforini 13 November 2017 07:33:48PM 4 points

Thank you for writing this! The images under 'What are you going to search for?' are not loading.

Comment author: vipulnaik 30 October 2017 01:00:23AM 6 points

The comments on "naming beliefs" by Robin Hanson (2008) appear to be how the consensus around the impressions/beliefs distinction began to form (the commenters include such movers and shakers as Eliezer and Anna Salamon).

Also, "impression track records" by Katja (September 2017) is a recent blog post/article, circulated in the rationalist community, that revived the terminology.

Comment author: Pablo_Stafforini 01 November 2017 09:27:56PM * 5 points

Thanks for drawing our attention to that early Overcoming Bias post. But please note that it was written by Hal Finney, not Robin Hanson. It took me a few minutes to realize this, so it seemed worth highlighting lest others fail to appreciate it.

Incidentally, I've been re-reading Finney's posts over the past couple of days and have been very impressed. What a shame that such a fine thinker is no longer with us.

ETA: Though one hopes this is temporary.

Comment author: Carl_Shulman 31 October 2017 09:18:57PM 1 point

Please take my comment as explaining my own views, lest they be misunderstood, not condemning your citation of me.

Comment author: Pablo_Stafforini 31 October 2017 09:28:24PM * 1 point

Okay, thank you for the clarification.

[In the original version, your comment said that the quote was pulled out of context, hence my interpretation.]

Comment author: Carl_Shulman 31 October 2017 07:55:26PM * 8 points

and Carl Shulman notes that his approach "has [led him] astray repeatedly, but I haven't seen as many successes."

That quote may not convey my view, so I'll add to this. I think Eliezer has had a number of striking successes, but in that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero. E.g. I think he has discharged the burden of due diligence wrt MWI.

If many physicists say X, and many others say Y and Z which seem in conflict with X, then at a high rate there will be some good arguments for X, Y, and Z. If you first see good arguments for X, you should check to see what physicists who buy Y and Z are saying, and whether they (and physicists who buy X) say they have knowledge that you don't understand.

In the case of MWI, the physicists say they don't have key obscure missing arguments (they are public and not esoteric), and that you can sort interpretations into ones that accept the unobserved parts of the wave function in QM as real (MWI, etc.), ones that add new physics to pick out part of the wave function to be our world, and ones like shut-up-and-calculate that amount to 'don't talk about whether parts of the wave function we don't see are real.'

Physicists working on quantum foundations are mostly mutually aware of one another's arguments, and you can read or listen to them for their explanations of why they respond differently to that evidence, and look to the general success of those habits of mind. E.g. the past success of scientific realism and Copernican moves: distant lands on Earth that were previously unseen by particular communities turned out to be real, other Sun-like stars and planets were found, biological evolution, etc. Finding out that many of the interpretations amount to MWI under another name, or just refuse to answer the question of whether MWI is true, reduces the level of disagreement to be explained, as does the finding that realist/multiverse interpretations have tended to gain ground with time and to do better among those who engage with quantum foundations and cosmology.

In terms of modesty, I would say that generally 'trying to answer the question about external reality' is a good epistemic marker for questions about external reality, as is Copernicanism: not giving humans a special place in physics, and not drastically penalizing theories on which the world is big or human nature looks different (consistently with past evidence). Regarding new physics for objective collapse, I would also note the failure to show it experimentally and the general opposition to it. That seems sufficient to favor the realist side of the debate among physicists.

In contrast, I hadn't seen anything like such due diligence regarding nutrition, or precedent in common law.

Regarding the OP thesis, you could summarize my stance as follows: assigning 'epistemic peer' or 'epistemic superior/inferior' status in the context of some question of fact requires a lot of information and understanding when we are not assumed to already have reliable fine-grained knowledge of epistemic status. That often involves descending to the object level: e.g. if the class of 'scientific realist arguments' has a good track record, then you will need to learn enough about a given question and the debate on it to know if that systemic factor is actually at play in the debate before you can know whether to apply that track record in assessing epistemic status.

Comment author: Pablo_Stafforini 31 October 2017 08:59:50PM * 1 point

In that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero.

I offered that quote to cast doubt on Rob's assertion that Eliezer has "a really strong epistemic track record, and that this is good evidence that modesty is a bad idea." I didn't mean to deny that Eliezer has had some successes, nor to claim that one shouldn't "look closely and form much better estimates of the likelihood of good invisible reasons" or that "the base rate of dysfunction is anywhere near zero"; I didn't offer the quote to dispute those claims.

Readers can read the original comment and judge for themselves whether the quote was in fact pulled out of context.

Comment author: Benito 31 October 2017 07:37:44PM 3 points

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

This seems correct. I just noticed you could phrase this the other way - why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth? I agree that this disagreement will come down to building detailed models of incentives in human organisations more than building inside views of each field (which is why I didn't find Greg's post particularly persuasive - this isn't a matter of discussing rational Bayesian agents, but of discussing the empirical incentive landscape we are in).

Comment author: Pablo_Stafforini 31 October 2017 08:39:17PM * 0 points

why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?

Maybe because these people have been surprisingly accurate? In addition, it's not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.

Comment author: RobBensinger 31 October 2017 04:04:06AM * 9 points

This was a really good read, in addition to being super well-timed!

I don't think there's a disagreement here about ideal in-principle reasoning. I’m guessing that the disagreement is about several different points:

  • In reality, how generally difficult is it to spot important institutions and authorities failing in large ways? Here we might ask subquestions for particular kinds of groups; e.g., maybe you and the anti-modest will turn out to agree about how dysfunctional US national politics is on average, while disagreeing about how dysfunctional academia is on average in the US.

  • In reality, how generally difficult is it to evaluate your own level of object-level accuracy in some domain, the strength of object-level considerations in that domain, your general competence or rationality or meta-rationality, etc.? To what extent should we update strongly on various kinds of data about our reasoning ability, vs. distrusting the data source and penalizing the evidence? (Or looking for ways to not have to gather or analyze data like that at all, e.g., prioritizing finding epistemic norms or policies that work relatively OK without such data.)

  • How strong are various biases, either in general or in our environs? It sounds like you think that arrogance, overconfidence, and excess reliance on inside-view arguments are much bigger problems for core EAs than underconfidence or neglect of inside-view arguments, while Eliezer thinks the opposite.

  • What are the most important and useful debiasing interventions? It sounds like you think these mostly look like attempts to reduce overconfidence in inside views, self-aggrandizing biases, and the like, while Eliezer thinks that it's too easy to overcorrect if you organize your epistemology around that goal. I think the anti-modesty view here is that we should mostly address those biases (and other biases) through more local interventions that are sensitive to the individual's state and situation, rather than through rules akin to "be less confident" or "be more confident".

  • What's the track record for more modesty-like views versus less modesty-like views overall?

  • What's the track record for critics of modesty in particular? I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea. So I assume it would help to discuss the object-level disagreements underlying these diverging generalizations.

Does that match your sense of the disagreement?

Comment author: Pablo_Stafforini 31 October 2017 06:41:17PM * 3 points

I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea.

Really? My sense is that the opposite is the case. Eliezer himself acknowledges that he has an "amazing bet-losing capability" and my sense is that he tends to bet against scientific consensus (while Caplan, who almost always takes the consensus view, has won virtually all his bets). Carl Shulman notes that Eliezer's approach "has [led him] astray repeatedly, but I haven't seen as many successes."

Comment author: RobBensinger 31 October 2017 03:38:42PM * 1 point

I don't think we should describe all instances of deference to any authority, all uses of the outside view, etc. as "modesty". (I don't know whether you're doing that here; I just want to be clear that this at least isn't what the "modesty" debate has traditionally been about.)

The question is what happens when you criticize it and don't get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions, or trust the specialists to figure it out?

I don't think there's any general answer to this. The right answer depends on the strength of the object-level arguments; on how much reason you have to think you've understood and gleaned the right take-aways from those arguments; on your model of the physics community and other relevant communities; on the expected information value of looking into the issue more; on how costly it is to seek different kinds of further evidence; etc.

I'm curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here and seeming to correspond to the QIT interpretation on the wiki page of interpretations, which fits with probabilities being in the mind).

In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we'd naively think it is, etc.), then I'd be interested to hear more details. If the idea is that there are ways to talk about the experimental data without committing ourselves to a claim about why the Born rule holds, then I agree with that, though it obviously doesn't answer the question of why the Born rule holds. If the idea is that there are no facts of the matter outside of observers' data, then I feel comfortable dismissing that view even if a non-negligible number of physicists turn out to endorse it.
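(For reference, the Born rule here is the standard postulate linking the quantum formalism to experiment: a measurement of a system in state $|\psi\rangle$ yields outcome $i$ with probability $P(i) = |\langle i|\psi\rangle|^2$, i.e., the squared amplitude of the corresponding component of the wave function. The interpretive dispute above is over why measurements obey this rule, not over whether they do.)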

I also feel comfortable having lower probability in the existence of God than the average physicist does; and "physicists are the wrong kind of authority to defer to about God" isn't the reasoning I go through to reach that conclusion.

Comment author: Pablo_Stafforini 31 October 2017 04:09:29PM 1 point

I also feel comfortable having lower probability in the existence of God than the average physicist does; and "physicists are the wrong kind of authority to defer to about God" isn't the reasoning I go through to reach that conclusion.

Out of curiosity, what is the reasoning you would go through to reach that conclusion?

Comment author: RobBensinger 31 October 2017 12:41:35AM * 1 point

Going back to your list:

nutrition, animal consciousness, philosophical zombies, population ethics, and quantum mechanics

I haven't looked much at the nutrition or population ethics discussions, though I understand Eliezer mistakenly endorsed Gary Taubes' theories in the past. If anyone has links, I'd be interested to read more.

AFAIK Eliezer hasn't published why he holds his views about animal consciousness, and I don't know what he's thinking there. I don't have a strong view on whether he's right (or whether he's overconfident).

Concerning zombies: I think Eliezer is correct that the zombie argument can't provide any evidence for the claim that we instantiate mental properties that don't logically supervene on the physical world. Updating on factual evidence is a special case of a causal relationship, and if instantiating some property P is causally impacting our physical brain states and behaviors, then P supervenes on the physical.

I'm happy to talk more about this, and I think questions like this are really relevant to evaluating the track record of anti-modesty positions, so this seems like as good a place as any for discussion. I'm also happy to talk more about meta questions related to this issue, like, "If the argument above is correct, why hasn't it convinced all philosophers of mind?" I don't have super confident views on that question, but there are various obvious possibilities that come to mind.

Concerning QM: I think Eliezer's correct that Copenhagen-associated views like "objective collapse" and "quantum non-realism" are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham's razor. I'm happy to talk more about this too; I think the object-level discussions are important here.

Comment author: Pablo_Stafforini 31 October 2017 02:46:42PM * 2 points

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

Let's consider a concrete example. When challenged to justify his extremely high confidence in MWI, despite the absence of a strong consensus among physicists, Eliezer tells people to "read the QM sequence". But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI despite the fact that it seemed to him that this interpretation was correct, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament as Eliezer was originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.

To persuade me, Greg, and other skeptics, what Eliezer needs to do is to persuade the physicists. Short of that, he could persuade a small random sample of members of this expert class. If, upon being exposed to the relevant sequence, a representative group of quantum physicists changed their views significantly in Eliezer's direction, this would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?

ETA: I just realized that the kind of challenge I'm raising here has been carried out, in the form of a "natural experiment", for Eliezer's views on decision theory. Years ago, David Chalmers spontaneously sent half a dozen leading decision theorists copies of Eliezer's TDT paper. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).

Comment author: RobBensinger 29 October 2017 09:43:57PM * 2 points

Yeah, I wasn't saying that you were making a claim about Eliezer; I just wanted to highlight that he's possibly making an even stronger claim than the one you're warning against when you say "one should generally distrust one's ability to 'beat elite common sense' even if one thinks one can accurately diagnose why members of this reference class are wrong in this particular instance".

If the claim is that we shouldn't give much weight to the views of individuals and institutions that we shouldn't expect to be closely aligned with the truth, this is something that hardly anyone would dispute.

I think the main two factual disagreements here might be "how often, and to what extent, do top institutions and authorities fail in large and easy-to-spot ways?" and "for epistemic and instrumental purposes, to what extent should people like you and Eliezer trust your own inside-view reasoning about your (and authorities') competency, epistemic rationality, meta-rationality, etc.?" I don't know whether you in particular would disagree with Eliezer on those claims, though it sounds like you may.

Nor does this vindicate various confident pronouncements Eliezer has made in the past—about nutrition, animal consciousness, AI timelines, philosophical zombies, population ethics, etc.—unless it is conjoined with an argument for thinking that his skepticism extends to the relevant community of experts in each of those fields.

Yeah, agreed. The "adequacy" level of those fields, and the base adequacy level of civilization as a whole, is one of the most important questions here.

Could you say more about what you have in mind by "confident pronouncements [about] AI timelines"? I usually think of Eliezer as very non-confident about timelines.

Comment author: Pablo_Stafforini 30 October 2017 12:35:07PM * 1 point

I think the main two factual disagreements here might be "how often, and to what extent, do top institutions and authorities fail in large and easy-to-spot ways?" and "for epistemic and instrumental purposes, to what extent should people like you and Eliezer trust your own inside-view reasoning about your (and authorities') competency, epistemic rationality, meta-rationality, etc.?"

Thank you, this is extremely clear, and captures the essence of much of what's going on between Eliezer and his critics in this area.

Could you say more about what you have in mind by "confident pronouncements [about] AI timelines"? I usually think of Eliezer as very non-confident about timelines.

I had in mind forecasts Eliezer made many years ago that didn't come to pass, as well as his most recent bet with Bryan Caplan. But it's a stretch to call these 'confident pronouncements', so I've edited my post and removed 'AI timelines' from the list of examples.

Comment author: RobBensinger 29 October 2017 02:38:46PM * 1 point

Similarly, I think one should generally distrust one's ability to "beat elite common sense" even if one thinks one can accurately diagnose why members of this reference class are wrong in this particular instance.

Note that in Eliezer's example above, he isn't claiming to have any diagnosis at all of what led the Bank of Japan to reach the wrong conclusion. The premise isn't "I have good reason to think the Bank of Japan is biased/mistaken in this particular way in this case," but rather: "It's unsurprising for institutions like the Bank of Japan to be wrong in easy-to-demonstrate ways, so it doesn't take a ton of object-level evidence for me to reach a confident conclusion that they're wrong on the object level, even if I have no idea what particular mistake they're making, what their reasons are, etc. The Bank of Japan just isn't the kind of institution that we should strongly expect to be right or wrong on this kind of issue (even though this issue is basic to its institutional function); so moderate amounts of ordinary object-level evidence can be dispositive all on their own."

From:

[W]hen I read some econbloggers who I’d seen being right about empirical predictions before saying that Japan was being grotesquely silly, and the economic logic seemed to me to check out, as best I could follow it, I wasn’t particularly reluctant to believe them. Standard economic theory, generalized beyond the markets to other facets of society, did not seem to me to predict that the Bank of Japan must act wisely for the good of Japan. It would be no surprise if they were competent, but also not much of a surprise if they were incompetent.

Comment author: Pablo_Stafforini 29 October 2017 08:41:57PM * 1 point

I never claimed that this is what Eliezer was doing in that particular case, or in other cases. (I'm not even sure I understand Eliezer's position.) I was responding to the previous comment, and drawing a parallel between "beating the market" in that and other contexts. I'm sorry if this was unclear.

To address your substantive point: If the claim is that we shouldn't give much weight to the views of individuals and institutions that we shouldn't expect to be good at tracking the truth, despite their status or prominence in society, this is something that hardly any rationalist or EA would dispute. Nor does this vindicate various confident pronouncements Eliezer has made in the past—about nutrition, animal consciousness, philosophical zombies, population ethics, and quantum mechanics, to name a few—that deviate significantly from expert opinion, unless this is conjoined with credible arguments for thinking that warranted skepticism extends to each of those expert communities. To my knowledge, no persuasive arguments of this sort have been provided.
