Comment author: SiebeRozendal 23 July 2018 12:42:07PM 1 point [-]

I just replied to SamDeere's comment above about having multiple types of votes: one indicating agreement and one indicating "helpfulness". You could then sort by either, with the forum sorted by "helpfulness" by default. Do you think this would fix some of your issues with a voting system?

Comment author: RobBensinger 02 August 2018 09:31:21PM *  4 points [-]

Arbital uses a system where you can separately "upvote" things based on how much you like them, and give an estimate of how much probability you assign to claims. I like this system, and have recommended it be added to LW too. Among other things, I think it has a positive effect on people's mindsets if they practice keeping separate mental accounts of those two quantities.
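For concreteness, here is a minimal sketch of the kind of two-track record being discussed; the field names and the Arbital-style probability list are illustrative, not any forum's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Comment:
    text: str
    helpfulness_votes: int = 0          # "this is a useful contribution"
    agreement_votes: int = 0            # "I agree with the claim being made"
    probability_estimates: List[float] = field(default_factory=list)  # Arbital-style, each in [0, 1]

    def mean_probability(self) -> Optional[float]:
        # Average of readers' probability estimates, if any were given.
        if not self.probability_estimates:
            return None
        return sum(self.probability_estimates) / len(self.probability_estimates)

def default_sort(comments: List[Comment]) -> List[Comment]:
    # Sort by helpfulness by default; agreement (or probability) stays a separate, visible axis.
    return sorted(comments, key=lambda c: c.helpfulness_votes, reverse=True)
```

Keeping the axes separate means a comment can rank highly for being useful even when most readers assign its central claim a low probability.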

Comment author: kbog  (EA Profile) 20 July 2018 04:25:24PM *  3 points [-]

If we upvote someone's comments, then we trust them as a better authority, so we should give their votes greater weight in vote totals. It therefore seems straightforward that a weighted vote count is a better estimate of a comment's quality.

The downside is that this can create a feedback loop for a group of people with particular views. Having normal votes go from 1x to 3x over the course of so many thousands of karma seems like too small a change to make this happen. But the scaling of strong votes all the way up to 16x seems very excessive and risky to me.
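As a rough sketch of the kind of karma-weighted scoring being described, assume a weight that grows from about 1x for a new user to about 3x around 10,000 karma, with strong votes scaled up to 16x; the scaling function below is made up to hit those figures and is not the forum's actual formula:

```python
import math

def vote_weight(voter_karma: int, strong: bool = False) -> float:
    # Illustrative only: ~1x at 0 karma, rising to ~3x around 10,000 karma; strong votes up to 16x.
    base = 1 + 2 * min(1.0, math.log10(1 + max(voter_karma, 0)) / 4)
    return base * (16 / 3) if strong else base

def weighted_score(votes) -> float:
    # votes: iterable of (direction, voter_karma, strong) tuples, with direction +1 or -1.
    return sum(d * vote_weight(k, s) for d, k, s in votes)

print(vote_weight(0), vote_weight(10_000), vote_weight(10_000, strong=True))  # 1.0, ~3.0, ~16.0
```

The feedback-loop worry is then easy to state: the more a group's votes raise each other's karma, the more weight that group's future votes carry.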

Another downside is that it may encourage people to post stuff here that is better placed elsewhere, or left unsaid. I think that after switching to this system for a while, we should take a step back and see if there is too much crud on the forums.

Comment author: RobBensinger 02 August 2018 09:25:18PM 3 points [-]

I think the LW mods are considering features that will limit how many strong upvotes users can give out. I think the goal is for strong upvotes to look less like "karma totals get determined strictly by what forum veterans think" and more like "if you're a particularly respected and established contributor, you get the privilege of occasionally getting to 'promote/feature' site content so that a lot more people see it, and getting to dish out occasional super-karma rewards".

Comment author: Alex_Barry 01 April 2018 03:00:57PM *  8 points [-]

I think I agree with the comments on this post that job postings on the EA Forum are not ideal, since if all the different orgs posted them it would significantly clutter the forum.

The existing "Effective Altruism Job Postings" Facebook group and possibly the 80k job board should fulfill this purpose.

Comment author: RobBensinger 07 April 2018 02:05:37PM 2 points [-]

If clutter is the main concern, might it be useful for 80K to post a regular (say, monthly) EA Forum post noting updates to their job board, and to have other job ad posts get removed and centralized to that post? I personally would have an easier time keeping track of what's new vs. old if there were a canonical location that mentioned key job listing updates.

Comment author: Robert_Wiblin 17 November 2017 12:16:33AM *  8 points [-]

It strikes me as much more prevalent for people to be overconfident in their own idiosyncratic opinions. If you see half of people are 90% confident in X and half of people are 90% confident in not-X, then you know on average they are overconfident. That's how most of the world looks to me.
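A quick arithmetic check of that calibration point, with purely illustrative numbers:

```python
# Hypothetical population: half are 90% confident in X, half are 90% confident in not-X.
n = 1000
mean_confidence = 0.9        # everyone reports 90% confidence in their own view
correct = n // 2             # exactly one of X and not-X is true, so exactly half are right
hit_rate = correct / n       # 0.5

print(mean_confidence, hit_rate)  # 0.9 vs 0.5: on average, the group is overconfident
```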

But no matter - they probably won't suffer much, because the meek do not inherit the Earth, at least not in this life.

People follow confidence in leaders, generating the pathological start-up founder who is sure they're 100x more likely to succeed than the base rate; someone who portrays themselves as especially competent in a job interview is more likely to be hired than someone who accurately appraises their merits; and I don't imagine deferring to a boring consensus brings more romantic success than elaborating on one's exciting contrarian opinions.

Given all this, it's unsurprising evolution has programmed us to place an astonishingly high weight on our own judgement.

While there are some social downsides to seeming arrogant, people who preach modesty here advocate going well beyond what's required to avoid triggering an anti-dominance reaction in others.

Indeed, even though I think strong modesty is epistemically the correct approach on the basis of reasoned argument, I do not and can not consistently live and speak that way, because all my personal incentives are lined up in favour of me portraying myself as very confident in my inside view.

In my experience it requires a monastic discipline to do otherwise, a discipline almost none possess.

Comment author: RobBensinger 18 November 2017 10:19:29PM 3 points [-]

Cross-posting a reply from FB:

> It strikes me as much more prevalent for people to be overconfident in their own idiosyncratic opinions. If you see half of people are 90% confident in X and half of people are 90% confident in not-X, then you know on average they are overconfident. That's how most of the world looks to me.

This seems consistent with Eliezer's claim that "commenters on the Internet are often overconfident" while EAs and rationalists he interacts with in person are more often underconfident. In Dunning and Kruger's original experiment, the worst performers were (highly) overconfident, but the best performers were underconfident.

Your warnings that overconfidence and power-grabbing are big issues seem right to me. Eliezer's written a lot warning about those problems too. My main thought about this is just that different populations can exhibit different social dynamics and different levels of this or that bias; and these can also change over time. Eliezer's big-picture objection to modesty isn't "overconfidence and power-grabbing are never major problems, and you should never take big steps to try to combat them"; his objection is "biases vary a lot between individuals and groups, and overcorrection in debiasing is commonplace, so it's important that whatever debiasing heuristics you use be sensitive to context rather than generically endorsing 'hit the brakes' or 'hit the accelerator'".

He then makes the further claim that top EAs and rationalists as a group are in fact currently more prone to reflexive deference, underconfidence, fear-of-failure, and not-sticking-their-neck-out than to the biases of overconfident startup founders. At least on Eliezer's view, this should be a claim that we can evaluate empirically, and our observations should then inform how much we push against overconfidence v. underconfidence.

The evolutionary just-so story isn't really necessary for that critique, though it's useful to keep in mind if we were originally thinking that humans only have overactive status-grabbing instincts, and don't also have overactive status-grab-blocking instincts. Overcorrection is already a common problem, but it's particularly likely if there are psychological drives pushing in both directions.

Comment author: JamesDrain 12 November 2017 10:48:26PM *  1 point [-]

Newcomb's problem isn't a challenge to causal decision theory. I can solve Newcomb's problem by committing to one-boxing in any of a number of ways, e.g. signing a contract or building a reputation as a one-boxer. After the boxes have already been placed in front of me, however, I can no longer influence their contents, so it would be good if I two-boxed, provided the rewards outweighed the penalty: e.g. if it turned out the contract I signed was void, or if I didn't care about my one-boxing reputation because I don't think I'm going to play this game again in the future.

The "wishful thinking" hypothesis might just apply to me then. I think it would be super cool if we could spontaneously cooperate with aliens in other universes.

Edit: Wow, ok I remember what I actually meant about wishful thinking. I meant that evidential decision theory literally prescribes wishful thinking. Also, if you made a copy of a purely selfish person and then told them of the fact, then I still think it would be rational to defect. Of course, if they could commit to cooperating before being copied, then that would be the right strategy.

Comment author: RobBensinger 13 November 2017 03:47:37AM *  2 points [-]

> After the boxes have already been placed in front of me, however, I can no longer influence their contents, so it would be good if I two-boxed

You would get more utility if you were willing to one-box even when there's no external penalty or opportunity to bind yourself to the decision. Indeed, functional decision theory can be understood as a formalization of the intuition: "I would be better off if only I could behave in the way I would have precommitted to behave in every circumstance, without actually needing to anticipate each such circumstance in advance." Since the predictor in Newcomb's problem fills the boxes based on your actual action, regardless of the reasoning or contract-writing or other activities that motivate the action, this suffices to always get the higher payout (compared to causal or evidential decision theory).
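To make that payoff comparison concrete, here is a small worked example with the conventional Newcomb payoffs ($1,000,000 in the opaque box iff one-boxing is predicted, $1,000 always in the transparent box) and an illustrative 99%-accurate predictor; the numbers are standard stipulations rather than anything from the paper:

```python
def payout(action: str, prediction: str) -> int:
    # Opaque box holds $1,000,000 iff one-boxing was predicted; transparent box always holds $1,000.
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if action == "one-box" else opaque + 1_000

def expected_payout(action: str, accuracy: float = 0.99) -> float:
    # The predictor predicts your actual action with the given accuracy.
    other = "two-box" if action == "one-box" else "one-box"
    return accuracy * payout(action, action) + (1 - accuracy) * payout(action, other)

print(expected_payout("one-box"))  # 990000.0
print(expected_payout("two-box"))  # 11000.0
```

An agent that reliably two-boxes therefore predictably walks away with far less, which is the sense in which one-boxing 'wins' here.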

There are also dilemmas where causal decision theory gets less utility even if it has the opportunity to precommit to the dilemma; e.g., retro blackmail.

For a fuller argument, see the paper "Functional Decision Theory" by Yudkowsky and Soares.

Comment author: Denkenberger 08 November 2017 10:57:45PM 2 points [-]

As for the value of college for non-doctors, what about the study of GI Bill recipients who were randomly chosen, which found that college did have significant causal benefits (i.e. it was not just a correlation arising from colleges choosing better-qualified people)?

Comment author: RobBensinger 09 November 2017 02:21:17AM *  1 point [-]

I'm not an expert in this area and haven't seen that study, but I believe Eliezer generally defers to Bryan Caplan's analysis on this topic. Caplan's view, discussed in The Case Against Education (which is scheduled to come out in two months), is that something like 80% of the time students spend in school is signaling, and something like 80% of the financial reward students enjoy from school is due to signaling. So the claim isn't that school does nothing to build human capital, just that a very large chunk of schooling is destroying value.

Comment author: Gregory_Lewis 04 November 2017 08:22:29AM 2 points [-]

I don't think the account of the relative novelty of the 'LW approach' to philosophy fits the available facts well; "relatively" new is, I suggest, a pretty relative term.

You can find similar reduction-esque sensibilities among the logical positivists around a century ago, and a very similar approach from Quine about half a century ago. In the case of the logical positivists, they enjoyed a heyday amongst the philosophical community, but gradually fell from favour due to shortcomings other philosophers identified; I suggest Quine is a sufficiently 'big name' in philosophy that his approach was at least widely appreciated by the relevant academic communities.

This is challenging to reconcile with an account on which "Rationality's philosophical framework allows one to confidently get to the right answer across a range of hard philosophical problems, and the lack of assent from domain experts is best explained by their not being aware of it". Closely analogous approaches were tried a very long time ago, and haven't been found extraordinarily persuasive (even if we subset to naturalists). It doesn't help that when the 'LW-answer' is expounded (e.g. in the sequences) the argument offered isn't particularly sophisticated (and often turns out to be recapitulating extant literature), nor does it usually deign to address objections raised by dissenting camps.

I suggest a better fit for this data is the rationality approach looks particularly persuasive to people without subject matter expertise.

Re. decision theory: beyond the general social epistemological steers (i.e. the absence of good decision theorists raving about the breakthrough represented by MIRI-style decision theory, despite many of them having come into contact with this work one way or another), the remarks I've heard often target 'technical quality': Chalmers noted in a past AMA his disappointment that this theory had not been made rigorous (maybe things have changed since), and I know one decision theorist's view is that the work isn't rigorous and is a bit sloppy (on Carl's advice, I'm trying to contact more). Not being a decision theorist myself, I haven't delved into the object-level considerations.

Comment author: RobBensinger 04 November 2017 04:53:28PM *  2 points [-]

The "Cheating Death in Damascus" and "Functional Decision Theory" papers came out in March and October, so I recommend sharing those, possibly along with the "Decisions Are For Making Bad Outcomes Inconsistent" conversation notes. I think these are much better introductions than e.g. Eliezer's old "Timeless Decision Theory" paper.

Quineans and logical positivists have some vague attitudes in common with people like Drescher, but the analogy seems loose to me. If you want to ask why other philosophers didn't grab all the low-hanging fruit in areas like decision theory or persuade all their peers in areas like philosophy of mind (which is an interesting set of questions from where I'm standing, and one I'd like to see examined more too), I think a more relevant group to look at will be technically minded philosophers who think in terms of Bayesian epistemology (and information-theoretic models of evidence, etc.) and software analogies. In particular, analogies that are more detailed than just "the mind is like software", though computationalism is an important start. A more specific question might be: "Why didn't E.T. Jaynes' work sweep the philosophical community?"

Comment author: Gregory_Lewis 02 November 2017 11:59:46PM 3 points [-]

I agree such an object level demonstration would be good evidence (although of course one-sided, for reasons Pablo ably articulates elsewhere). I regret I can't provide this. On many of these topics (QM, p-zombies) I don't pretend any great knowledge; for others (e.g. Theism), I can't exactly find the 'rationalist case for Atheism' crisply presented.

I am naturally hesitant to infer, from the (inarguable) point that diffusion of knowledge and ideas within and across fields takes time, that the best explanation for disagreement is that rationalists are just ahead of the curve. I enjoyed the small parts of Drescher I read, but I assume many reasonable philosophers are aware of his work and yet are not persuaded. Many things touted in philosophy (and elsewhere) as paradigm-shifting insights transpire to be misguided, and betting on some based on your personal assent on the object level looks unlikely to go well.

I consider the decision theory work a case in point. The view that F/U/TDT is this great advance on the decision-theoretic state of the art is a view very tightly circumscribed to the rationalist community itself. Of course, many decision theorists are simply ignorant of it, given it is expounded outside the academic press. Yet others are not: there were academic decision theorists who attended some MIRI workshops, others who have been shown versions (via Chalmers, I understand), and a few who have looked at MIRI's stuff on arXiv and similar. Yet the prevailing view of these seems to be at best lukewarm, and at worst scathing.

This seems challenging to reconcile with a model of rationalists just getting to the great insights early, before everyone else catches up. It could be that the decision theory community is so diseased that it cannot appreciate the technical breakthrough MIRI-style decision theory promises. Yet I find somewhat more compelling the alternative hypothesis: that it is the rationalist community which is diseased, diving down a decision-theory dead end without the benefit of much interaction with decision theory experts to correct them.

Comment author: RobBensinger 04 November 2017 12:38:07AM *  2 points [-]

To be clear, I'm not saying that the story I told above ("here are some cool ideas that I claim haven't sufficiently saturated the philosophy community to cause all the low-hanging fruit to get grabbed, or to produce fieldwide knowledge and acceptance in the cases where it has been grabbed") should persuade arbitrary readers that people like Eliezer or Gary Drescher are on the right track; plenty of false turns and wrong solutions can also claim to be importing neglected ideas, or combining ideas in neglected ways. I'm just gesturing at one reason why I think it's possible at all to reach confident correct beliefs about lots of controversial claims in philosophy, in spite of the fact that philosophy is a large and competitive field whose nominal purpose is to answer these kinds of questions.

I'm also implicitly making a claim about there being similarities between many of the domains you're pointing to that help make it not just a coincidence that one (relatively) new methodology and set of ideas can put you ahead of the curve on multiple issues simultaneously (plus produce multiple discovery and convergence). A framework that's unusually useful for answering questions related to naturalism, determinism, and reflective reasoning can simultaneously have implications for how we should (and shouldn't) be thinking about experience, agency, volition, decision theory, and AI, among other topics. To some extent, all of these cases can be thought of as applications of a particular naturalist/reductionist toolkit (containing concepts and formalisms that aren't widely known among philosophers who endorse naturalism) to new domains.

I'm curious what criticisms you've heard of MIRI's work on decision theory. Is there anything relevant you can link to?

Comment author: Gregory_Lewis 31 October 2017 05:51:16PM *  7 points [-]

Thanks for your helpful reply. I think your bullet points do track the main sources of disagreement, but I venture an even crisper summary:

I think the Eliezer-style 'immodest' view comprises two key claims:

1) There are a reasonably large number of cases where, due to inadequate equilibria or similar, the classes of people we might take to be experts are in fact sufficiently poorly optimised for the truth that a reasonable rationalist (or similar) could be expected to do better than their views.

2) We can reliably identify these cases.

If they're both true, we can license ourselves to 'pick fights' where we make confident bets against expert consensus (or lack thereof), in the knowledge that we are more likely than not to be right. If not, then it seems modesty is the better approach: it might be worth acting 'as if' our contra-expert impression is right and doing further work (because we might discover something important), but we should nonetheless defer to the expert consensus.

It seems the best vindication of the immodest view, as Eliezer defends it, would be a track record of such cases on his behalf or that of the wider rationalist community. You correctly anticipate that I would regard the track record here as highly adverse, for two reasons:

First, when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty Atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on be similarly unimpressed with the sophistication of the arguments offered.

Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (e.g. it seems unlikely anyone will discover a convincing proof of physicalism any time soon). So what we observe is compatible both with 'the rationalist community is right and this field is diseased (and so gets it wrong)' and with 'the rationalist community is greatly overconfident and the field is on the right track'. That said, I find the number of fields the rationalist community takes to be sufficiently diseased that it thinks it can do better to be implausible on priors.

The best thing would be a clear track record to judge by - single cases, either way, don't give much to go on, as neither modesty nor immodesty would claim they should expect to win every single time. I see the rationalist community having one big win (re. AI), yet little else. That Eliezer's book offers two pretty weak examples (e.g. BoJ, where he got the argument from a recognised authority, and an n=1 medical intervention), and reports one case against (e.g. a big bet on Taubes), doesn't lead me to upgrade my pretty autumnal view of the track record.

Comment author: RobBensinger 02 November 2017 09:04:05PM *  1 point [-]

> Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (e.g. it seems unlikely anyone will discover a convincing proof of physicalism any time soon).

I think a convincing object-level argument could be given; you could potentially show on object-level grounds why the specific arguments or conclusions of various rationalists are off-base, thereby at least settling the issue (or certain sub-issues) to the satisfaction of people who take the relevant kinds of inside-view arguments sufficiently seriously in the first place. I'd be particularly interested to hear reasons you (or experts you defer to) reject the relevant arguments against gods, philosophical zombies, or objective collapse / non-realism views in QM.

If you mean that a convincing expert-consensus argument is likely to be far off, though, then I agree about that. As a start, experts' views and toolkits in general can be slow to change, particularly in areas like philosophy.

I assume one part of the model Eliezer is working with here is that it can take many decades for new conceptual discoveries to come to be widely understood, accepted, and used in a given field, and even longer for these ideas to spill over into other fields. E.g., some but not all philosophers have a deep understanding of Shannon, Solomonoff, and Jaynes' accounts of inductive inference, even though many of the key insights have been around for over fifty years at this point. When ideas spread slowly, consensus across all fields won't instantly snap into a new state that's maximally consistent with all of the world's newest developments, and there can be low-hanging fruit for the philosophers who do help import those ideas into old discussions.

This is why Eliezer doesn't claim uniqueness for his arguments in philosophy; e.g., Gary Drescher used the same methodology and background ideas to arrive at largely the same conclusions, largely independently, as far as I know.

I'd consider the big advances in decision theory from Wei Dai and Eliezer to be a key example of this, and another good example of independent discovery of similar ideas by people working with similar methodologies and importing similar ideas into a relatively old and entrenched field. (Though Wei Dai and Eliezer were actively talking to each other and sharing large numbers of ideas, so the independence is much weaker.)

You can find most of the relevant component ideas circulating before that, too; but they were scattered across multiple fields in a way that made them less likely to get spontaneously combined by specialists busy hashing out the standard sub-sub-arguments within old paradigms.
