Comment author: Jacy_Reese 21 February 2018 11:47:25PM * 5 points

Those considerations make sense. I don't have much more to add for/against than what I said in the post.

On the comparison between different MCE strategies, I'm pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society. I'm comparatively unworried about, for example, far future dystopias where dog-and-cat-like beings (e.g. small, entertaining AIs kept around for companionship) suffer in vast numbers. And environmentalism is typically advocating for non-sentient beings, which I think is quite different from MCE for sentient beings.

I think the better competitors to farmed animal advocacy are advocating broadly for antispeciesism/fundamental rights (e.g. Nonhuman Rights Project) and advocating specifically for digital sentience (e.g. a larger, more sophisticated version of People for the Ethical Treatment of Reinforcement Learners). There are good arguments against these, however, such as that it would be quite difficult for an eager EA to get much traction with a new digital sentience nonprofit. (We considered founding Sentience Institute with a focus on digital sentience; this was a big reason we didn't.) By contrast, given the current excitement in the farmed animal space (e.g. the coming release of "clean meat," real meat grown without animal slaughter), it seems like a fantastic place for gaining traction.

I'm currently not very excited about "Start a petting zoo at DeepMind" (or similar direct outreach strategies) because it seems too adversarial and aggressive, and so would likely produce a ton of backlash. There are additional considerations for/against (e.g. I worry that it'd be difficult to push a niche demographic like AI researchers very far away from the rest of society, or at least away from the rest of their social circles; I also have the same traction concern I raised about advocating for digital sentience), but the backlash concern alone seems quite damning.

The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.

I agree this is a valid argument, but given the other arguments (e.g. those above), I still think it's usually right for EAAs, including Sentience Institute, to focus on farmed animal advocacy, at least for the next year or two.

(FYI for readers, Gregory and I also discussed these things before the post was published when he gave feedback on the draft. So our comments might seem a little rehearsed.)

Comment author: Pablo_Stafforini 22 February 2018 12:31:49PM * 8 points

The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society.

Wild animal advocacy is far more neglected than farmed animal advocacy, and it involves even larger numbers of sentient beings ignored by most of society. If the superiority of farmed animal advocacy over global poverty along these two dimensions is a sufficient reason for not working on global poverty, why isn't the superiority of wild animal advocacy over farmed animal advocacy along those same dimensions also a sufficient reason for not working on farmed animal advocacy?

Comment author: Pablo_Stafforini 29 January 2018 04:49:39PM 3 points

Thanks for creating this. I've added your course to this list.

Comment author: Pablo_Stafforini 13 November 2017 07:33:48PM 4 points

Thank you for writing this! The images under 'What are you going to search for?' are not loading.

Comment author: vipulnaik 30 October 2017 01:00:23AM 6 points

The comments on naming beliefs by Robin Hanson (2008) appear to be where the consensus around the impressions/beliefs distinction began to form (the commenters include such movers and shakers as Eliezer and Anna Salamon).

Also, impression track records by Katja (September 2017) is a recent blog post/article, circulated in the rationalist community, that revived the terminology.

Comment author: Pablo_Stafforini 01 November 2017 09:27:56PM * 6 points

Thanks for drawing our attention to that early Overcoming Bias post. But please note that it was written by Hal Finney, not Robin Hanson. It took me a few minutes to realize this, so it seemed worth highlighting lest others fail to appreciate it.

Incidentally, I've been re-reading Finney's posts over the past couple of days and have been very impressed. What a shame that such a fine thinker is no longer with us.

ETA: Though one hopes this is temporary.

Comment author: Carl_Shulman 31 October 2017 09:18:57PM 2 points

Please take my comment as explaining my own views lest they be misunderstood, not as condemning your citation of me.

Comment author: Pablo_Stafforini 31 October 2017 09:28:24PM * 2 points

Okay, thank you for the clarification.

[In the original version, your comment said that the quote was pulled out of context, hence my interpretation.]

Comment author: Carl_Shulman 31 October 2017 07:55:26PM * 8 points

and Carl Shulman notes that his approach "has led [him] astray repeatedly, but I haven't seen as many successes."

That quote may not convey my view, so I'll add to this. I think Eliezer has had a number of striking successes, but in that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero. E.g. I think he has discharged the burden of due diligence wrt MWI.

If many physicists say X, and many others say Y and Z which seem in conflict with X, then at a high rate there will be some good arguments for X, Y, and Z. If you first see good arguments for X, you should check to see what physicists who buy Y and Z are saying, and whether they (and physicists who buy X) say they have knowledge that you don't understand.

In the case of MWI, the physicists say they don't have key obscure missing arguments (they are public and not esoteric), and that you can sort interpretations into ones that accept the unobserved parts of the wave function in QM as real (MWI, etc.), ones that add new physics to pick out part of the wave function to be our world, and ones like shut-up-and-calculate that amount to 'don't talk about whether parts of the wave function we don't see are real.'

Physicists working on quantum foundations are mostly mutually aware of one another's arguments, and you can read or listen to them for their explanations of why they respond differently to that evidence, and look to the general success of those habits of mind. E.g. the past success of scientific realism and Copernican moves: distant lands on Earth that were previously unseen by particular communities turned out to be real, other Sun-like stars and planets were found, biological evolution, etc. Finding out that many of the interpretations amount to MWI under another name, or just refusing to answer the question of whether MWI is true, reduces the level of disagreement to be explained, as does the finding that realist/multiverse interpretations have tended to gain ground with time and to do better among those who engage with quantum foundations and cosmology.

In terms of modesty, I would say that generally 'trying to answer the question about external reality' is a good epistemic marker for questions about external reality, as is Copernicanism: not giving humans a special place in physics, and not drastically penalizing theories on which the world is big or human nature looks different (consistently with past evidence). Regarding new physics for objective collapse, I would also note the failure to show it experimentally and the general opposition to it. That seems sufficient to favor the realist side of the debate among physicists.

In contrast, I hadn't seen anything like such due diligence regarding nutrition, or precedent in common law.

Regarding the OP thesis, you could summarize my stance as that assigning 'epistemic peer' or 'epistemic superior/inferior' status in the context of some question of fact requires a lot of information and understanding when we are not assumed to already have reliable fine-grained knowledge of epistemic status. That often involves descending into the object-level: e.g. if the class of 'scientific realist arguments' has a good track record, then you will need to learn enough about a given question and the debate on it to know if that systemic factor is actually at play in the debate before you can know whether to apply that track record in assessing epistemic status.

Comment author: Pablo_Stafforini 31 October 2017 08:59:50PM * 1 point

In that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero.

I offered that quote to cast doubt on Rob's assertion that Eliezer has "a really strong epistemic track record" and that "this is good evidence that modesty is a bad idea." I didn't mean to deny that Eliezer had some successes, to suggest that one can't "look closely and form much better estimates of the likelihood of good invisible reasons", or to imply that "the base rate of dysfunction is anywhere near zero"; I didn't offer the quote to dispute those claims.

Readers can read the original comment and judge for themselves whether the quote was in fact pulled out of context.

Comment author: Benito 31 October 2017 07:37:44PM 3 points

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

This seems correct. I just noticed you could phrase this the other way: why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth? I agree that this disagreement will come down to building detailed models of incentives in human organisations more than building inside views of each field (which is why I didn't find Greg's post particularly persuasive: this isn't a matter of discussing rational Bayesian agents, but of discussing the empirical incentive landscape we are in).

Comment author: Pablo_Stafforini 31 October 2017 08:39:17PM * 0 points

why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?

Maybe because these people have been surprisingly accurate? In addition, it's not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.

Comment author: RobBensinger 31 October 2017 04:04:06AM * 9 points

This was a really good read, in addition to being super well-timed!

I don't think there's a disagreement here about ideal in-principle reasoning. I’m guessing that the disagreement is about several different points:

  • In reality, how generally difficult is it to spot important institutions and authorities failing in large ways? We might ask subquestions for particular kinds of groups; e.g., maybe you and the anti-modest will turn out to agree about how dysfunctional US national politics is on average, while disagreeing about how dysfunctional academia is on average in the US.

  • In reality, how generally difficult is it to evaluate your own level of object-level accuracy in some domain, the strength of object-level considerations in that domain, your general competence or rationality or meta-rationality, etc.? To what extent should we update strongly on various kinds of data about our reasoning ability, vs. distrusting the data source and penalizing the evidence? (Or looking for ways to not have to gather or analyze data like that at all, e.g., prioritizing finding epistemic norms or policies that work relatively OK without such data.)

  • How strong are various biases, either in general or in our environs? It sounds like you think that arrogance, overconfidence, and excess reliance on inside-view arguments are much bigger problems for core EAs than underconfidence or neglect of inside-view arguments, while Eliezer thinks the opposite.

  • What are the most important and useful debiasing interventions? It sounds like you think these mostly look like attempts to reduce overconfidence in inside views, self-aggrandizing biases, and the like, while Eliezer thinks that it's too easy to overcorrect if you organize your epistemology around that goal. I think the anti-modesty view here is that we should mostly address those biases (and other biases) through more local interventions that are sensitive to the individual's state and situation, rather than through rules akin to "be less confident" or "be more confident".

  • What's the track record for more modesty-like views versus less modesty-like views overall?

  • What's the track record for critics of modesty in particular? I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea. So I assume it would help to discuss the object-level disagreements underlying these diverging generalizations.

Does that match your sense of the disagreement?

Comment author: Pablo_Stafforini 31 October 2017 06:41:17PM * 3 points

I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea.

Really? My sense is that the opposite is the case. Eliezer himself acknowledges that he has an "amazing bet-losing capability", and my impression is that he tends to bet against scientific consensus (while Caplan, who almost always takes the consensus view, has won virtually all his bets). Carl Shulman notes that Eliezer's approach "has led [him] astray repeatedly, but I haven't seen as many successes."

Comment author: RobBensinger 31 October 2017 03:38:42PM * 1 point

I don't think we should describe all instances of deference to any authority, all uses of the outside view, etc. as "modesty". (I don't know whether you're doing that here; I just want to be clear that this at least isn't what the "modesty" debate has traditionally been about.)

The question is what happens when you criticize it and don't get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions, or trust the specialists to figure it out?

I don't think there's any general answer to this. The right answer depends on the strength of the object-level arguments; on how much reason you have to think you've understood and gleaned the right take-aways from those arguments; on your model of the physics community and other relevant communities; on the expected information value of looking into the issue more; on how costly it is to seek different kinds of further evidence; etc.

I'm curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here and seeming to correspond to the QIT interpretation on the wiki page of interpretations, which fits with probabilities being in the mind).

In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we'd naively think it is, etc.), then I'd be interested to hear more details. If the idea is that there are ways to talk about the experimental data without committing ourselves to a claim about why the Born rule holds, then I agree with that, though it obviously doesn't answer the question of why the Born rule holds. If the idea is that there are no facts of the matter outside of observers' data, then I feel comfortable dismissing that view even if a non-negligible number of physicists turn out to endorse it.
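(For reference, the Born rule is the postulate that links the formalism to observed statistics: measuring a system in state $|\psi\rangle$ in an orthonormal basis $\{|i\rangle\}$ yields outcome $i$ with probability

$$P(i) = \left|\langle i \mid \psi \rangle\right|^2,$$

i.e. the squared amplitude. The open question is why this particular function of the wave function, rather than any other, gives the observed frequencies.)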

I also feel comfortable having lower probability in the existence of God than the average physicist does; and "physicists are the wrong kind of authority to defer to about God" isn't the reasoning I go through to reach that conclusion.

Comment author: Pablo_Stafforini 31 October 2017 04:09:29PM 1 point

I also feel comfortable having lower probability in the existence of God than the average physicist does; and "physicists are the wrong kind of authority to defer to about God" isn't the reasoning I go through to reach that conclusion.

Out of curiosity, what is the reasoning you would go through to reach that conclusion?

Comment author: RobBensinger 31 October 2017 12:41:35AM * 1 point

Going back to your list:

nutrition, animal consciousness, philosophical zombies, population ethics, and quantum mechanics

I haven't looked much at the nutrition or population ethics discussions, though I understand Eliezer mistakenly endorsed Gary Taubes' theories in the past. If anyone has links, I'd be interested to read more.

AFAIK Eliezer hasn't published why he holds his views about animal consciousness, and I don't know what he's thinking there. I don't have a strong view on whether he's right (or whether he's overconfident).

Concerning zombies: I think Eliezer is correct that the zombie argument can't provide any evidence for the claim that we instantiate mental properties that don't logically supervene on the physical world. Updating on factual evidence is a special case of a causal relationship, and if instantiating some property P is causally impacting our physical brain states and behaviors, then P supervenes on the physical.

I'm happy to talk more about this, and I think questions like this are really relevant to evaluating the track record of anti-modesty positions, so this seems like as good a place as any for discussion. I'm also happy to talk more about meta questions related to this issue, like, "If the argument above is correct, why hasn't it convinced all philosophers of mind?" I don't have super confident views on that question, but there are various obvious possibilities that come to mind.

Concerning QM: I think Eliezer's correct that Copenhagen-associated views like "objective collapse" and "quantum non-realism" are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham's razor. I'm happy to talk more about this too; I think the object-level discussions are important here.

Comment author: Pablo_Stafforini 31 October 2017 02:46:42PM * 2 points

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

Let's consider a concrete example. When challenged to justify his extremely high confidence in MWI, despite the absence of a strong consensus among physicists, Eliezer tells people to "read the QM sequence". But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI despite the fact that it seemed to him that this interpretation was correct, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament as Eliezer was in originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.

To persuade me, Greg, and other skeptics, what Eliezer needs to do is to persuade the physicists. Short of that, he can persuade a small random sample of members of this expert class. If, upon being exposed to the relevant sequence, a representative group of quantum physicists change their views significantly in Eliezer’s direction, this would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?

Update (2017-10-28): I just realized that the kind of challenge I'm raising here has been carried out, in the form of a "natural experiment", for Eliezer's views on decision theory. Years ago, David Chalmers spontaneously sent half a dozen leading decision theorists copies of Eliezer's TDT paper. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).

Update (2018-01-20): Note the parallels between what Scott Alexander says here and what I write above (emphasis added):

I admit I don’t know as much about economics as some of you, but I am working off of a poll of the country’s best economists who came down pretty heavily on the side of this not significantly increasing growth. If you want to tell me that it would, your job isn’t to explain Economics 101 theories to me even louder, it’s to explain how the country’s best economists are getting it wrong.
