Comment author: RobBensinger 04 November 2017 12:38:07AM *  2 points [-]

To be clear, I'm not saying that the story I told above ("here are some cool ideas that I claim haven't sufficiently saturated the philosophy community to cause all the low-hanging fruit to get grabbed, or to produce fieldwide knowledge and acceptance in the cases where it has been grabbed") should persuade arbitrary readers that people like Eliezer or Gary Drescher are on the right track; plenty of false turns and wrong solutions can also claim to be importing neglected ideas, or combining ideas in neglected ways. I'm just gesturing at one reason why I think it's possible at all to reach confident correct beliefs about lots of controversial claims in philosophy, in spite of the fact that philosophy is a large and competitive field whose nominal purpose is to answer these kinds of questions.

I'm also implicitly making a claim about there being similarities between many of the domains you're pointing to that help make it not just a coincidence that one (relatively) new methodology and set of ideas can put you ahead of the curve on multiple issues simultaneously (plus produce multiple discovery and convergence). A framework that's unusually useful for answering questions related to naturalism, determinism, and reflective reasoning can simultaneously have implications for how we should (and shouldn't) be thinking about experience, agency, volition, decision theory, and AI, among other topics. To some extent, all of these cases can be thought of as applications of a particular naturalist/reductionist toolkit (containing concepts and formalisms that aren't widely known among philosophers who endorse naturalism) to new domains.

I'm curious what criticisms you've heard of MIRI's work on decision theory. Is there anything relevant you can link to?

Comment author: Gregory_Lewis 04 November 2017 08:22:29AM 2 points [-]

I don't think the account of the relative novelty of the 'LW approach' to philosophy makes a good fit for the available facts; "relatively" new is, I suggest, a pretty relative term.

You can find similar reduction-esque sensibilities among the logical positivists around a century ago, and a very similar approach from Quine about half a century ago. In the case of the logical positivists, they enjoyed a heyday amongst the philosophical community, but gradually fell from favour due to shortcomings other philosophers identified; I suggest Quine is a sufficiently 'big name' in philosophy that his approach was at least widely appreciated by the relevant academic communities.

This is challenging to reconcile with an account of "Rationality's philosophical framework allows one to confidently get to the right answer across a range of hard philosophical problems, and the lack of assent of domain experts is best explained by their not being aware of it". Closely analogous approaches were tried a very long time ago, and haven't been found extraordinarily persuasive (even if we subset to naturalists). It doesn't help that when the 'LW-answer' is expounded (e.g. in the sequences) the argument offered isn't particularly sophisticated (and often turns out to be recapitulating extant literature), nor does it usually deign to address objections raised by dissenting camps.

I suggest a better fit for this data is that the rationality approach looks particularly persuasive to people without subject matter expertise.

Re. decision theory. Beyond the general social epistemological steers (i.e. the absence of good decision theorists raving about the breakthrough represented by MIRI-style decision theory, despite many of them having come into contact with this work one way or another), the remarks I've heard often target 'technical quality': Chalmers noted disappointment in a past AMA that this theory had not been made rigorous (maybe things have changed since), and I know one decision theorist's view is that the work isn't rigorous and is a bit sloppy (on Carl's advice, I'm trying to contact more). Not being a decision theorist myself, I haven't delved into the object level considerations.

Comment author: Halstead 01 November 2017 05:19:39PM 1 point [-]

Hi Greg. So, your view is that it's ok to demote people from my peer group when I not only disagree with them about p but also have an explanation of why they would be biased that doesn't apply to me. And on your view their verdict on p could never be evidence of their bias. This last seems wrong in many cases.

Consider some obvious truth P (e.g. if a, then a; if a or b, then a and b can't both not be true; it's wrong to torture people for fun etc.). Myself and some other equally intelligent person have been thinking about P for an equal amount of time. I learn that she believes that not-P. It seems entirely appropriate for me to demote them in this case. If you deny this, suppose now we are deciding on some proposition Q and I knew only that they had got P wrong. As you would agree, their past performance (on P) is pro tanto reason to demote with respect to Q. How can it then not also be pro tanto reason to demote with respect to P? [aside: the second example of an obvious truth I gave is denied by supervaluationists]. In short, how could epistemic peerhood not be in part determined by performance on the object level reasons?

In some of these cases, it also seems that in order to justifiably demote, one doesn't need to offer an account of why the other party is biased that is independent of the object-level reasons.

A separate point, it seems like today and historically there are and have been pockets of severe epistemic error. e.g. in the 19th century, almost all of the world's most intelligent philosophers thought that idealism is true; a large chunk of political philosophers believe that public reason is true; I'm sure there are lots of examples outside philosophy.

In this context, selective epistemic exceptionalism seems appropriate for a community that has taken lots of steps to debias. There's still very good reason to be aware of what the rest of the epistemic community thinks and why they think it, and this is a (weaker) form of modesty.

Comment author: Gregory_Lewis 03 November 2017 12:19:01AM *  1 point [-]

Minor point: epistemic peer judgements are independent of whether you disagree with them or not. I'm happy to indict people who are epistemically unvirtuous even if they happen to agree with me.

I generally think one should not use object level disagreement to judge peerhood, given the risk of entrenchment (i.e. everyone else thinks I'm wrong, so I conclude everyone else is wrong and an idiot).

For 'obvious truths' like P, there's usually a lot of tacit peer agreement in background knowledge. So the disagreement with you and these other people provides some evidence for demotion, rather than disagreeing with you alone. I find it hard to disentangle intuitions where one removes this rider, and in these cases I'm not so sure whether steadfastness + demotion is the appropriate response. Demoting supervaluationists as peers re. supervaluationism because they disagree with you about it, for example, seems a bad idea.

In any case, it would almost by definition be extraordinarily rare for people we think are prima facie epistemic peers to disagree on something sufficiently obvious. In real world cases where it's some contentious topic on which reasonable people disagree, one should not demote people based on their disagreement with you (or, perhaps, in these cases the evidence for demotion is sufficiently trivial that it is heuristically better ignored).

Modest accounts shouldn't be surprised by expert error. Yet being able to identify these instances ex post gives little steer as to what to do ex ante. Random renegade schools of thought assuredly have an even poorer track record. If it were the case that the EA/rationalist community had a good track record of outperforming expert classes in their fields, that would be a good reason for epistemic exceptionalism. Yet I don't see it.

Comment author: RobBensinger 02 November 2017 09:04:05PM *  1 point [-]

Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (i.e. it seems unlikely to discover convincing proof of physicalism any time soon).

I think a convincing object-level argument could be given; you could potentially show on object-level grounds why the specific arguments or conclusions of various rationalists are off-base, thereby at least settling the issue (or certain sub-issues) to the satisfaction of people who take the relevant kinds of inside-view arguments sufficiently seriously in the first place. I'd be particularly interested to hear reasons you (or experts you defer to) reject the relevant arguments against gods, philosophical zombies, or objective collapse / non-realism views in QM.

If you mean that a convincing expert-consensus argument is likely to be far off, though, then I agree about that. As a start, experts' views and toolkits in general can be slow to change, particularly in areas like philosophy.

I assume one part of the model Eliezer is working with here is that it can take many decades for new conceptual discoveries to come to be widely understood, accepted, and used in a given field, and even longer for these ideas to spill over into other fields. E.g., some but not all philosophers have a deep understanding of Shannon, Solomonoff, and Jaynes' accounts of inductive inference, even though many of the key insights have been around for over fifty years at this point. When ideas spread slowly, consensus across all fields won't instantly snap into a new state that's maximally consistent with all of the world's newest developments, and there can be low-hanging fruit for the philosophers who do help import those ideas into old discussions.

This is why Eliezer doesn't claim uniqueness for his arguments in philosophy; e.g., Gary Drescher used the same methodology and background ideas to arrive largely at the same conclusions largely independently, as far as I know.

I'd consider the big advances in decision theory from Wei Dai and Eliezer to be a key example of this, and another good example of independent discovery of similar ideas by people working with similar methodologies and importing similar ideas into a relatively old and entrenched field. (Though Wei Dai and Eliezer were actively talking to each other and sharing large numbers of ideas, so the independence is much weaker.)

You can find most of the relevant component ideas circulating before that, too; but they were scattered across multiple fields in a way that made them less likely to get spontaneously combined by specialists busy hashing out the standard sub-sub-arguments within old paradigms.

Comment author: Gregory_Lewis 02 November 2017 11:59:46PM 3 points [-]

I agree such an object level demonstration would be good evidence (although of course one-sided, for reasons Pablo ably articulates elsewhere). I regret I can't provide this. On many of these topics (QM, p-zombies) I don't pretend any great knowledge; for others (e.g. Theism), I can't exactly find the 'rationalist case for Atheism' crisply presented.

I am naturally hesitant to infer, from the (inarguable) point that diffusion of knowledge and ideas within and across fields takes time, that the best explanation for disagreement is that rationalists are just ahead of the curve. I enjoyed the small parts of Drescher I read, but I assume many reasonable philosophers are aware of his work and yet are not persuaded. Many things touted in philosophy (and elsewhere) as paradigm-shifting insights transpire to be misguided, and betting on some based on your personal assent on the object level looks unlikely to go well.

I consider the decision theory work a case in point. The view that FDT/UDT/TDT is a great advance on the decision-theoretic state of the art is a view that is very tightly circumscribed to the rationalist community itself. Of course, many decision theorists are simply ignorant of it, given it is expounded outside the academic press. Yet others are not: there were academic decision theorists who attended some MIRI workshops, others who have been shown versions (via Chalmers, I understand), and a few who have looked at MIRI's stuff on Arxiv and similar. Yet the prevailing view of these seems to be at best lukewarm, and at worst scathing.

This seems challenging to reconcile with a model of rationalists just getting to the great insights early before everyone else catches up. It could be that the decision theory community is so diseased that they cannot appreciate the technical breakthrough MIRI-style decision theory promises. Yet I find the alternative hypothesis, where it is the rationalist community which is diseased and heading down a decision theory dead end without the benefit of much interaction with decision theory experts to correct them, somewhat more compelling.

Comment author: MikeJohnson 28 October 2017 03:02:30PM 1 point [-]

Hi Gregory,

We have never interacted before this, at least to my knowledge, and I worry that you may be bringing some external baggage into this interaction (perhaps some poor experience with some cryonics enthusiast...). I find your "let's shut this down before it competes for resources" attitude very puzzling and aggressive, especially since you show zero evidence that you understand what I'm actually attempting to do or gather support for on the object-level. Very possibly we'd disagree on that too, which is fine, but I'm reading your responses as preemptively closed and uncharitable (perhaps veering toward 'aggressively hostile') toward anything that might 'rock the EA boat' as you see it.

I don't think this is good for EA, and I don't think it's working off a reasonable model of the expected value of a new cause area. I.e., you seem to be implying the expected cause area would be at best zero, but more probably negative, due to zero-sum dynamics. On the other hand, I think a successful new cause area would more realistically draw in or internally generate at least as many resources as it would consume, and probably much more -- my intuition is that at the upper bound we may be looking at something as synergistic as a factorial relationship (with three causes, the total 'EA pie' might be 3×2×1=6; with four causes the total 'EA pie' might be 4×3×2×1=24). More realistically, perhaps 4+3+2+1 instead of 3+2+1. This could be and probably is very wrong -- but at the same time I think it's more accurate than a zero-sum model.
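For concreteness, here is a minimal Python sketch of the two scaling intuitions above. It is purely illustrative: the function names and the inputs are assumptions for the sake of the example, not estimates of anything real.

```python
# Toy comparison of two models for how the total 'EA pie' might scale with
# the number of cause areas. Purely illustrative; not an estimate of anything.

def additive_pie(n_causes: int) -> int:
    """Each extra cause adds resources roughly linearly (e.g. 3+2+1, 4+3+2+1)."""
    return sum(range(1, n_causes + 1))

def factorial_pie(n_causes: int) -> int:
    """Strongly synergistic upper bound: causes multiply the pie (e.g. 3*2*1, 4*3*2*1)."""
    total = 1
    for k in range(1, n_causes + 1):
        total *= k
    return total

for n in (3, 4):
    print(f"{n} causes: additive = {additive_pie(n)}, factorial = {factorial_pie(n)}")
# 3 causes: additive = 6, factorial = 6
# 4 causes: additive = 10, factorial = 24
```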

At any rate, I'm skeptical that we can turn this discussion into something that will generate value to either of us or to EA, so unless you have any specific things you'd like to discuss or clarify, I'm going to leave things here. Feel free to PM me questions.

Comment author: Gregory_Lewis 31 October 2017 06:35:12PM *  0 points [-]

I prefer to keep discussion on the object level, rather than offering adverse impressions of one another's behaviour (e.g. uncharitable, aggressive, censorious etc.)[1] with speculative diagnoses as to the root cause of these ("perhaps some poor experience with a cryonics enthusiast").

To recall the dialectical context: the implication upthread was a worry that the EA community (or EA leadership) are improperly neglecting the mental health cause area, perhaps due to (in practice) some anti-weirdness bias. To which my counter-suggestion was that maybe EA generally/leaders thereof have instead made their best guess that this area isn't more promising than the cause areas they already attend to.

I accept that conditional on some recondite moral and empirical matters, mental health interventions look promising. Yet that does not distinguish mental health beyond many other candidate cause areas, e.g.:

  • Life extension/cryonics
  • Pro-life advocacy/natural embryo loss mitigation
  • Immigration reform
  • Improving scientific norms etc.

All generally have a potentially large scale and are sometimes neglected, but have less persuasive tractability. In terms of some hypothetical disaggregated EA resource (e.g. people, money), I'd prefer it to go into one of the 'big three' than any of these other areas, as my impression is that the marginal returns for any of the three are greater than for any of those. In other senses there may not be such zero-sum dynamics (i.e. conditional on Alice only wanting to work in mental health, better that she work in EA-style mental health), yet I aver this doesn't really apply to which topics the movement gives relative prominence to (after all, one might hope that people switch from lower- to higher-impact cause areas, as I have attempted to do).

Of course, there remains value in exploration: if in fact EA writ large is undervaluing mental health, it would want to know about it and change tack. What I hope would happen, if I am wrong in my determination about mental health, is that public discussion would persuade more and more people of the merits of this approach (perhaps I'm incorrigible, hopefully third parties are not), and with momentum from a large enough crowd of interested people it becomes its own thing with similar size and esteem to areas 'within the movement'. Inferring from the fact that this has not yet happened that the EA community is not giving it a fair hearing is not necessarily wise.

[1]: I take particular exception to the accusations of censoriousness (from Plant) and wanting to 'shut down discussion' [from Plant and yourself]. In what possible world is arguing publicly on the internet a censorious act? I don't plot to 'run the mental health guys out of the EA movement', I don't work behind the scenes to talk to moderators to get rid of your contributions, I don't downvote remarks or posts on mental health, and so on and so forth for any remotely plausible 'shutting down discussion' behaviour. I leave adverse remarks I could make to this apophasis.

Comment author: RobBensinger 31 October 2017 04:04:06AM *  9 points [-]

This was a really good read! In addition to being super well-timed.

I don't think there's a disagreement here about ideal in-principle reasoning. I’m guessing that the disagreement is about several different points:

  • In reality, how generally difficult is it to spot important institutions and authorities failing in large ways? Where we might ask subquestions for particular kinds of groups; e.g., maybe you and the anti-modest will turn out to agree about how dysfunctional US national politics is on average, while disagreeing about how dysfunctional academia is on average in the US.

  • In reality, how generally difficult is it to evaluate your own level of object-level accuracy in some domain, the strength of object-level considerations in that domain, your general competence or rationality or meta-rationality, etc.? To what extent should we update strongly on various kinds of data about our reasoning ability, vs. distrusting the data source and penalizing the evidence? (Or looking for ways to not have to gather or analyze data like that at all, e.g., prioritizing finding epistemic norms or policies that work relatively OK without such data.)

  • How strong are various biases, either in general or in our environs? It sounds like you think that arrogance, overconfidence, and excess reliance on inside-view arguments are much bigger problems for core EAs than underconfidence or neglect of inside-view arguments, while Eliezer thinks the opposite.

  • What are the most important and useful debiasing interventions? It sounds like you think these mostly look like attempts to reduce overconfidence in inside views, self-aggrandizing biases, and the like, while Eliezer thinks that it's too easy to overcorrect if you organize your epistemology around that goal. I think the anti-modesty view here is that we should mostly address those biases (and other biases) through more local interventions that are sensitive to the individual's state and situation, rather than through rules akin to "be less confident" or "be more confident".

  • What's the track record for more modesty-like views versus less modesty-like views overall?

  • What's the track record for critics of modesty in particular? I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea. So I assume it would help to discuss the object-level disagreements underlying these diverging generalizations.

Does that match your sense of the disagreement?

Comment author: Gregory_Lewis 31 October 2017 05:51:16PM *  7 points [-]

Thanks for your helpful reply. I think your bullet points do track the main sources of disagreement, but I venture an even crisper summary:

I think the Eliezer-style 'immodest' view comprises two key claims:

1) There are a reasonably large number of cases where, due to inadequate equilibria or similar, those we might take to be expert classes are in fact sufficiently poorly optimised for the truth that a reasonable rationalist or similar could be expected to do better.

2) We can reliably identify these cases.

If they're both true, we can license ourselves to 'pick fights' where we make confident bets against expert consensus (or lack thereof) in the knowledge we are more likely than not to be right. If not, then it seems modesty is the better approach: it might be worth acting 'as if' our contra-expert impression is right and doing further work (because we might discover something important), but we should nonetheless defer to the expert consensus.

It seems the best vindication of the immodest view as Eliezer defends it would be a track record of such cases on his part, or that of the wider rationalist community. You correctly anticipate I would definitely include the track record here as highly adverse, for two reasons:

First, when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on be similarly deeply unimpressed with the sophistication of argument offered.

Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (i.e. it seems unlikely to discover convincing proof of physicalism any time soon). So what we observe is both compatible with 'the rationalist community is right and this field is diseased (and so gets it wrong)' and 'the rationalist community is greatly overconfident and the field is on the right track'. That said, I take the number of fields which the rationalist community deems sufficiently diseased that it can expect to do better to be implausible on priors.

The best thing would be a clear track record to judge - single cases, either way, don't give much to go on, as neither modesty nor immodesty would claim they should expect to win every single time. I see the rationalist community having one big win (re. AI), yet little else. That Eliezer's book offers two pretty weak examples (e.g. BoJ, where he got the argument from a recognised authority, and an n=1 medical intervention), and reports one case against (e.g. a big bet of Taubes), doesn't lead me to upgrade my pretty autumnal view of the track record.

Comment author: Halstead 29 October 2017 11:51:52PM *  6 points [-]

Hi Greg, thanks for this post, it was very good. I thought it would help future discussion to separate these claims, which leave your argument ambiguous:

  1. You should give equal weight to your own credences and those of epistemic peers on all propositions for which you and they are epistemic peers.
  2. Claims about the nature of the community of epistemic peers and our ability to reliably identify them.

In places, you seem to identify modesty with 1, in others with the conjunction of 1 and a subset of the claims in 2. 1 doesn't seem sufficient on its own for modesty, for if 1 is true but I have no epistemic peers or can't reliably identify them, then I should pay lots of attention to my own inside view of an issue. Similarly, if EAs have no epistemic peers or superiors, then they should ignore everyone else. This is compatible with conciliationism but seems immodest. The relevant claim in 2 seems to be that for most people, including EAs, with beliefs about practically important propositions, there are epistemic peers and superiors who can be reliably identified.

This noted, I wonder how different the conjunction of 1 and 2 is to epistemic chauvinism. It seems to me that I could accept 1 and 2, but demote people from my epistemic peer group with respect to a proposition p if they disagree with me about p. If I have read all of the object-level arguments on p and someone else has as well and we disagree on p, then demotion seems appropriate at least in some cases. To give an example, I've read and thought about vagueness less than lots of much cleverer philosophers who hold a view called supervaluationism, which I believe to be extremely implausible. I believe I can explain why they are wrong with the object-level arguments about vagueness. I receive the evidence that they disagree. Very good, I reply, they are not my epistemic peers with respect to this question for object-level reasons x, y, and z. (Note that my reasons for demoting them are the object-level reasons; they are not that I believe that supervaluationism is false. Generally, the fact that I believe p is usually not my reason to believe that p.) This is entirely compatible with the view that I should be modest with respect to my epistemic peers.

In this spirit, I find Scott Sumner's quote deeply strange. If he thinks that "there is no objective reason to favor my view over Krugman's", then he shouldn't believe his view over Krugman's (even though he (Sumner) does). If I were in Sumner's shoes after reasoning about p and reading the object-level reasons about p, then I would EITHER become agnostic or demote Krugman from my epistemic peer group.

Comment author: Gregory_Lewis 30 October 2017 07:14:21PM 1 point [-]

Hello John (and Michael - I'm never quite sure how to manage these sorts of 'two to one' replies)

I would reject epistemic chauvinism. In cases where you disagree on P with an epistemic peer, and you take some set of object-level reasons x, y, and z to support P, the right approach is to downgrade your confidence in the strength of these reasons rather than demote them from epistemic peerhood. I'd want to support that using some set of considerations about [2]: among others, that the reference class where you demote people from peerhood (or superiority) on disagreement predictably does much worse than the 'truly modest' one where you downgrade your confidence in the reasons that led you to disagree (consider a typical crackpot who thinks the real numbers have the same cardinality as the naturals for whatever reason, and then infers from disagreement that mathematicians are all fools).

For the supervaluation case, I don't know whether it is the majority view on vagueness, but pretend it was a consensus. I'd say the right thing in such a situation is to be a supervaluationist yourself, even if it appears to you it is false. Indicting apparent peers/superiors for object-level disagreement involves entrenchment, and so seems to go poorly.

In the AI case, I'd say you'd have to weigh up (which is tricky) degrees of expertise re. AI. I don't see it as a cost for my view to update towards the more sceptical AI researchers even if you don't think the object level reasons warrant it, as in plausible reference classes the strategy of going with the experts beats going with the non-expert opinion.

In essence, the challenge modesty would make is, "Why do you back yourself to have the right grasp on the object level reasons?" Returning to a supervaluation consensus, it seems one needs to offer a story as to why the object level reasons that convincingly refute the view are not appreciated by the philosophers who specialise in the subject. It could be the case they're all going systemically wrong (and so you should demote them), but it seems more likely that you have mistaken the object level balance of reason. Using the former as an assumption looks overconfident.

What I take Sumner to be saying is to take the agnosticism you suggest he should, maybe something like this:

My impression is that my theory is right, but I don't believe my impression is more likely to be right than Paul Krugman's (or others'). So if you put a gun to my head and I had to give my best guess on economics, I would take an intermediate view, and not follow the theory I espouse. In my day-to-day work, though, I use this impression to argue in support of this view, so it can contribute to our mutual knowledge.

Of course, maybe you can investigate the object level reasons, per Michael's example. In the Adam and Beatrice case, Oliver could start talking to them about the reasons, and maybe find one of them isn't an epistemic peer to the other (or to him). Yet in cases where Oliver forms his own view about the object level considerations, he should still be modest across the impressions of Adam, Beatrice, and himself, for parallel reasons to the original case where he was an outsider (suppose we imagine Penelope who is an outsider to this conversation, etc.)

Comment author: ClaireZabel 29 October 2017 10:43:21PM 17 points [-]

Thanks so much for the clear and eloquent post. I think a lot of the issues related to lack of expertise and expert bias are stronger than you think they are, and I think it's both rare and not inordinately difficult to adjust for common biases such that in certain cases a less-informed individual can often beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

Modesty can lead to double-counting, or even groupthink. Suppose in the original example Beatrice does what I suggest and revises her credence to 0.6, but Adam doesn't. Now Charlie forms his own view (say 0.4 as well) and does the same procedure as Beatrice, so Charlie now holds a credence of 0.6 as well. The average should be lower: (0.8+0.4+0.4)/3, not (0.8+0.6+0.4)/3, but the results are distorted by using one-and-a-half helpings of Adam's credence. With larger cases one can imagine people wrongly deferring to hold consensus around a view they should think is implausible, and in general the nigh-intractable challenge of trying to infer cases of double counting from the patterns of 'all things considered' evidence.

One can rectify this by distinguishing 'credence by my lights' versus 'credence all things considered'. So one can say "Well, by my lights the credence of P is 0.8, but my actual credence is 0.6, once I account for the views of my epistemic peers etc." Ironically, one's personal 'inside view' of the evidence is usually the most helpful credence to publicly report (as it helps others modestly aggregate), whilst one's all-things-considered modest view is usually for private consumption.
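To make the arithmetic in the quoted passage concrete, here is a minimal Python sketch. The names and numbers follow the Adam/Beatrice/Charlie example; simple averaging is an illustrative assumption about how one might aggregate, not a claim about the right aggregation rule.

```python
# Each person's private impression ('credence by my lights') on P.
impressions = {"Adam": 0.8, "Beatrice": 0.4, "Charlie": 0.4}

def average(values):
    values = list(values)
    return sum(values) / len(values)

# Correct aggregation: average everyone's impressions.
correct = average(impressions.values())  # (0.8 + 0.4 + 0.4) / 3 = 0.533...

# Double-counting: Beatrice publicly reports her already-aggregated credence
# (the average of her impression and Adam's), so averaging the group's stated
# credences counts Adam's 0.8 one-and-a-half times.
beatrice_reported = average([impressions["Adam"], impressions["Beatrice"]])  # 0.6
distorted = average([impressions["Adam"], beatrice_reported, impressions["Charlie"]])  # 0.6

print(f"correct: {correct:.3f}, distorted by double-counting: {distorted:.3f}")
```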

I rarely see any effort to distinguish between the two outside the rationalist/EA communities, which is one reason I think both over-modesty and overconfident backlash against it are common.

My experience is that most reasonable, intelligent people I know have never explicitly thought of the distinction between the two types of credence. I think many of them have an intuition that something would be lost if they stated their "all things considered" credence only, even though it feels "truer" and "more likely to be right," though they haven't formally articulated the problem. And since other people rarely make this distinction, it's hard for everyone to know how to update based on others' views without double-counting, as you note.

It seems like it's intuitive for people to state either their inside view, or their all-things-considered view, but not both. To me, stating "both" > "inside view only" > "outside view only", but I worry that calls for more modest views tend to leak nuance and end up pushing people to publicly state "outside view only" rather than "both".

Also, I've generally heard people call the "credence by my lights" and "credence all things considered" one's "impressions" and "beliefs," respectively, which I prefer because they are less clunky. Just fyi.

(views my own, not my employer's)

Comment author: Gregory_Lewis 30 October 2017 06:46:48PM 2 points [-]

Thanks for your generous reply, Claire. I agree the 'double counting' issue remains challenging, although my thought was that, since most people, at least in the wider world, are currently pretty immodest, the downsides are not too large in what I take to be common applications where you are trying to weigh up large groups of people/experts. I agree there's a risk of degrading norms if people mistakenly switch to offering 'outside view' credences publicly.

I regret I hadn't seen the 'impressions' versus 'beliefs' distinction being used before. 'Impression' works very well for 'credence by my lights' (I had toyed with using the term 'image'), but I'm not sure 'belief' translates quite so well for those who haven't seen the way the term is used in the rationalist community. I guess this might just be hard, as there doesn't seem to be a good word (or two) I can find which captures modesty ("being modest, my credence is X", "modestly, I think it's Y", maybe?)

Comment author: RobBensinger 29 October 2017 02:38:46PM *  1 point [-]

Similarly, I think one should generally distrust one's ability to "beat elite common sense" even if one thinks one can accurately diagnose why members of this reference class are wrong in this particular instance.

Note that in Eliezer's example above, he isn't claiming to have any diagnosis at all of what led the Bank of Japan to reach the wrong conclusion. The premise isn't "I have good reason to think the Bank of Japan is biased/mistaken in this particular way in this case," but rather: "It's unsurprising for institutions like the Bank of Japan to be wrong in easy-to-demonstrate ways, so it doesn't take a ton of object-level evidence for me to reach a confident conclusion that they're wrong on the object level, even if I have no idea what particular mistake they're making, what their reasons are, etc. The Bank of Japan just isn't the kind of institution that we should strongly expect to be right or wrong on this kind of issue (even though this issue is basic to its institutional function); so moderate amounts of ordinary object-level evidence can be dispositive all on its own."

From:

[W]hen I read some econbloggers who I’d seen being right about empirical predictions before saying that Japan was being grotesquely silly, and the economic logic seemed to me to check out, as best I could follow it, I wasn’t particularly reluctant to believe them. Standard economic theory, generalized beyond the markets to other facets of society, did not seem to me to predict that the Bank of Japan must act wisely for the good of Japan. It would be no surprise if they were competent, but also not much of a surprise if they were incompetent.

Comment author: Gregory_Lewis 29 October 2017 07:58:31PM 3 points [-]

If that is the view, I am unsure what the Bank of Japan example is meant to motivate.

The example is confounded by the fact that Eliezer reports a lot of outside-view information to make the determination that the BoJ is making a bad call. The judgement (and object-level argument) originally came from econbloggers (I gather profs like Sumner) whom Eliezer endorses due to their good track record. In addition, he reports that the argument the econbloggers make does make sense on the object level.

Yet modest approaches can get the same answer without conceding the object-level evidence is dispositive. If the Bank of Japan is debunked as an authority (for whatever reason), then in a dispute of 'them versus economists with a good empirical track record', the outside view favours the latter's determination for standard reasons (it might caution one should look more widely across economic expertise, but bracket this). It also plausibly allows one to assert confidence in the particular argument used to make the determination that the BoJ makes a bad call.

So I think I'd have made a similar judgement to Eliezer in this case whether or not I had any 'object level' evidence to go on: if I didn't know (or couldn't understand) the argument Sumner et al. used, I'd still conclude they're likely right.

It seems one needs to look for cases where 'outside' and 'inside' diverge. So maybe something like, "Eliezer judged from his personal knowledge of economics the BoJ was making a bad call (without inspiration from any plausible epistemic authority), and was right to back himself 'over' the BoJ."

That would be a case where someone would disagree this is the right approach. If all I had to go on was my argument and knowledge of the BoJ's policy (e.g., I couldn't consult economists or econbloggers or whatever), then I suggest one should think that the incentives of the BoJ are probably at least somewhat better than orthogonal to the truth in expectation, and probably better correlated with it than an argument made by an amateur economist. If it transpired the argument was actually right, modesty's failure in a single case is not much of a strike against it, at least without some track record beyond this single case.

In defence of epistemic modesty

This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue, hewing instead to an idealized consensus of experts. I start by better pinning down exactly what is meant by 'epistemic...

Comment author: MikeJohnson 27 October 2017 11:03:22PM 1 point [-]

I worry that you're also using a fully-general argument here, one that would also apply to established EA cause areas.

This stands out at me in particular:

Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights: better to pick winners based on which of the disparate fields shows the greatest progress, such that one forecasts similar marginal returns to the 'big three'.

There's a lot here that I'd challenge. E.g., (1) I think you're implicitly overstating how good the marginal returns on the 'big three' actually are, (2) you seem to be doubling down on the notion that "saving lives is better than improving lives" or that "the current calculus of EA does and should lean toward reduction of mortality, not improving well-being", which I challenged above, (3) I don't think your analogy between cryonics (which, for the record, I'm skeptical on as an EA cause area) and e.g., Enthea's collation of research on psilocybin seems very solid.

I would also push back on how dismissive "Naturally I don't mind if enthusiasts pick some area and give it a go, but appeals to make it a 'new cause area' based on these speculative bets look premature by my lights" sounds. Enthusiasts are the ones that create new cause areas. We wouldn't have any cause areas, save for those 'silly enthusiasts'. Perhaps I'm misreading your intended tone, however.

Comment author: Gregory_Lewis 28 October 2017 09:25:08AM 1 point [-]

Respectfully, I take 'challenging P' to require offering considerations for ¬P. Remarks like "I worry you're using a fully-general argument" (without describing what it is or how my remarks produce it), "I don't think your analogy is very solid" (without offering dis-analogies) don't have much more information than simply "I disagree".

1) I'd suggest astronomical stakes considerations imply that at least one of the 'big three' does have extremely large marginal returns. If one prefers something much more concrete, I'd point to the humane reforms improving quality of life for millions of animals.

2) I don't think the primacy of the big three depends in any important way on recondite issues of disability weights or population ethics. Conditional on a strict person affecting view (which denies the badness of death) I would still think the current margin of global health interventions should offer better yields. I think this based on current best estimates of disability weights in things like the GCPP, and the lack of robust evidence for something better in mental health (we should expect, for example, Enthea's results to regress significantly, perhaps all the way back to the null).

On the general point: I am dismissive of mental health as a cause area insofar as I don't believe it to be a good direction for EA energy to go relative to the other major ones (and especially my own 'best bet' of xrisk). I don't want it to be a cause area as it will plausibly compete for time/attention/etc. with other things I deem more important. I'm no EA leader, but I don't think we need to impute some 'anti-weirdness bias' (which I think is facially implausible given the early embrace of AI stuff etc) to explain why they might think the same.

Naturally, I may be wrong in this determination, and if I am wrong, I want to know about it. Thus having enthusiasts go into more speculative things outside the currently recognised cause areas improves the likelihood of the movement self-correcting and realising mental health should be on a par with (e.g.) animal welfare as a valuable use of EA energy.

Yet anointing mental health as a cause area before this case has been persuasively made would be a bad approach. There are many other candidates for 'cause area No. n+1' which (as I suggested above) have about the same plausibility as mental health. Making them all recognised 'cause areas' seems the wrong approach. Thus the threshold should be higher.
