Comment author: Denise_Melchin 14 May 2018 08:32:20PM 0 points [-]

Interesting! I am trading off accuracy against outside-world manipulation in that argument, since accuracy isn't actually the main end goal I care about; that is 'good done in the world', for which better forecasts of the future would be pretty useful.

Comment author: Pablo_Stafforini 15 May 2018 05:10:08PM 2 points [-]

Feel free to ignore if you don't think this is sufficiently important, but I don't understand the contrast you draw between accuracy and outside world manipulation. I thought manipulation of prediction markets was concerning precisely because it reduces their accuracy. Assuming you accept Robin's point that manipulation increases accuracy on balance, what's your residual concern?

Comment author: Denise_Melchin 13 May 2018 03:37:09PM 2 points [-]

I assumed you didn't mean an internal World Bank prediction market, sorry about that. As I said above, I'm more optimistic about large workplaces employing prediction markets. I don't know how many staff the World Bank employs. Do you agree now that prediction markets are an inferior solution to forecasting problems in small organizations? If yes, what do you think is the minimum staff size of a workplace for a prediction market to be efficient enough to be better than e.g. extremized team forecasting?
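The thread doesn't spell out what "extremized team forecasting" involves. As a hypothetical sketch (the exponent `a` and the simple-mean aggregation step are my assumptions, not anything specified in this discussion), one common variant averages the team's probability forecasts and then pushes the result away from 0.5:

```python
def extremize(probs, a=2.5):
    """Average a team's probability forecasts, then extremize:
    the mean p is mapped to p^a / (p^a + (1 - p)^a), which pushes
    the aggregate away from 0.5 (a > 1 extremizes; a = 1 is a no-op).
    """
    p = sum(probs) / len(probs)  # simple mean of the team's forecasts
    return p ** a / (p ** a + (1 - p) ** a)

team = [0.7, 0.65, 0.8]  # three forecasters, all leaning "yes"
print(round(extremize(team), 3))  # -> 0.911
```

With `a = 1` this reduces to a plain average; the usual motivation for choosing `a > 1` is that pooled forecasts tend to be underconfident.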

Could you link to the accuracy studies you cite showing that prediction markets do better than polling at predicting election results? I don't see any obvious big differences in a quick Google search. The next obvious alternative is asking whether people like Nate Silver did better than prediction markets. In the GJP, individual superforecasters sometimes did better than prediction markets, but team superforecasters consistently did better. Putting Nate Silver and his kin in a room therefore seems to have a good chance of outperforming prediction markets.

You also don't state your opinion on the Intrade incident. Since I cannot see that prediction markets are obviously a lot better than polls or pundits (they didn't call the 2016 surprises either), I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did a bit better than polls or pundits.

Comment author: Pablo_Stafforini 13 May 2018 09:02:20PM *  2 points [-]

I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did a bit better than polls or pundits.

Robin's position is that manipulators can actually improve the accuracy of prediction markets, by increasing the rewards to informed trading. On this view, the possibility of market manipulation is not in itself a consideration that favors non-market alternatives, such as polls or pundits.

Comment author: Jacy_Reese 21 February 2018 11:47:25PM *  5 points [-]

Those considerations make sense. I don't have much more to add for/against than what I said in the post.

On the comparison between different MCE strategies, I'm pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society. I'm not, by comparison, very worried about, for example, far future dystopias where dog-and-cat-like beings (e.g. small, entertaining AIs kept around for companionship) are suffering in vast numbers. And environmentalism typically advocates for non-sentient beings, which I think is quite different from MCE for sentient beings.

I think the better competitors to farmed animal advocacy are advocating broadly for antispeciesism/fundamental rights (e.g. Nonhuman Rights Project) and advocating specifically for digital sentience (e.g. a larger, more sophisticated version of People for the Ethical Treatment of Reinforcement Learners). There are good arguments against these, however, such as that it would be quite difficult for an eager EA to get much traction with a new digital sentience nonprofit. (We considered founding Sentience Institute with a focus on digital sentience. This was a big reason we didn't.) Whereas given the current excitement in the farmed animal space (e.g. the coming release of "clean meat," real meat grown without animal slaughter), the farmed animal space seems like a fantastic place for gaining traction.

I'm currently not very excited about "Start a petting zoo at Deepmind" (or similar direct outreach strategies) because it seems too adversarial and aggressive, and therefore likely to produce a ton of backlash. There are additional considerations for/against (e.g. I worry that it'd be difficult to push a niche demographic like AI researchers very far away from the rest of society, or at least from the rest of their social circles; I also have the same traction concern I have with advocating for digital sentience), but this one just seems quite damning.

The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.

I agree this is a valid argument, but given the other arguments (e.g. those above), I still think it's usually right for EAAs to focus on farmed animal advocacy, including Sentience Institute at least for the next year or two.

(FYI for readers, Gregory and I also discussed these things before the post was published when he gave feedback on the draft. So our comments might seem a little rehearsed.)

Comment author: Pablo_Stafforini 22 February 2018 12:31:49PM *  8 points [-]

The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society.

Wild animal advocacy is far more neglected than farmed animal advocacy, and it involves even larger numbers of sentient beings ignored by most of society. If the superiority of farmed animal advocacy over global poverty along these two dimensions is a sufficient reason for not working on global poverty, why isn't the superiority of wild animal advocacy over farmed animal advocacy along those same dimensions also a sufficient reason for not working on farmed animal advocacy?

Comment author: Pablo_Stafforini 29 January 2018 04:49:39PM 3 points [-]

Thanks for creating this. I've added your course to this list.

Comment author: Pablo_Stafforini 13 November 2017 07:33:48PM 4 points [-]

Thank you for writing this! The images under 'What are you going to search for?' are not loading.

Comment author: vipulnaik 30 October 2017 01:00:23AM 6 points [-]

The comments on naming beliefs by Robin Hanson (2008) appear to be how the consensus around the impressions/beliefs distinction began to form (the commenters include such movers and shakers as Eliezer and Anna Salamon).

Also, impression track records by Katja (September 2017) is a recent blog post/article, circulated in the rationalist community, that revived the terminology.

Comment author: Pablo_Stafforini 01 November 2017 09:27:56PM *  6 points [-]

Thanks for drawing our attention to that early Overcoming Bias post. But please note that it was written by Hal Finney, not Robin Hanson. It took me a few minutes to realize this, so it seemed worth highlighting lest others fail to appreciate it.

Incidentally, I've been re-reading Finney's posts over the past couple of days and have been very impressed. What a shame that such a fine thinker is no longer with us.

ETA: Though one hopes this is temporary.

Comment author: Carl_Shulman 31 October 2017 09:18:57PM 2 points [-]

Please take my comment as explaining my own views, lest they be misunderstood, not condemning your citation of me.

Comment author: Pablo_Stafforini 31 October 2017 09:28:24PM *  2 points [-]

Okay, thank you for the clarification.

[In the original version, your comment said that the quote was pulled out of context, hence my interpretation.]

Comment author: Carl_Shulman 31 October 2017 07:55:26PM *  8 points [-]

and Carl Shulman notes that his approach "has lead [him] astray repeatedly, but I haven't seen as many successes."

That quote may not convey my view, so I'll add to this. I think Eliezer has had a number of striking successes, but in that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero. E.g. I think he has discharged the burden of due diligence wrt MWI.

If many physicists say X, and many others say Y and Z which seem in conflict with X, then at a high rate there will be some good arguments for X, Y, and Z. If you first see good arguments for X, you should check to see what physicists who buy Y and Z are saying, and whether they (and physicists who buy X) say they have knowledge that you don't understand.

In the case of MWI, the physicists say they don't have key obscure missing arguments (they are public and not esoteric), and that you can sort interpretations into ones that accept the unobserved parts of the wave function in QM as real (MWI, etc), ones that add new physics to pick out part of the wavefunction to be our world, and ones like shut-up-and-calculate that amount to 'don't talk about whether parts of the wave function we don't see are real.'

Physicists working on quantum foundations are mostly mutually aware of one another's arguments, and you can read or listen to them for their explanations of why they respond differently to that evidence, and look to the general success of those habits of mind. E.g. the past success of scientific realism and Copernican moves: distant lands on Earth that were previously unseen by particular communities turned out to be real, other Sun-like stars and planets were found, biological evolution, etc. Finding out that many of the interpretations amount to MWI under another name, or just refusing to answer the question of whether MWI is true, reduces the level of disagreement to be explained, as does the finding that realist/multiverse interpretations have tended to gain ground with time and to do better among those who engage with quantum foundations and cosmology.

In terms of modesty, I would say that generally 'trying to answer the question about external reality' is a good epistemic marker for questions about external reality, as is Copernicanism/not giving humans a special place in physics or drastically penalizing theories on which the world is big/human nature looks different (consistently with past evidence). Regarding new physics for objective collapse, I would also note the failure to show it experimentally and the general opposition to it. That seems sufficient to favor the realist side of the debate among physicists.

In contrast, I hadn't seen anything like such due diligence regarding nutrition, or precedent in common law.

Regarding the OP thesis, you could summarize my stance as that assigning 'epistemic peer' or 'epistemic superior/inferior' status in the context of some question of fact requires a lot of information and understanding when we are not assumed to already have reliable fine-grained knowledge of epistemic status. That often involves descending into the object-level: e.g. if the class of 'scientific realist arguments' has a good track record, then you will need to learn enough about a given question and the debate on it to know if that systemic factor is actually at play in the debate before you can know whether to apply that track record in assessing epistemic status.

Comment author: Pablo_Stafforini 31 October 2017 08:59:50PM *  1 point [-]

In that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero.

I offered that quote to cast doubt on Rob's assertion that Eliezer has "a really strong epistemic track record, and that this is good evidence that modesty is a bad idea." I didn't mean to deny that Eliezer had some successes, or that one shouldn't "look closely and form much better estimates of the likelihood of good invisible reasons" or that "the base rate of dysfunction is anywhere near zero", and I didn't offer the quote to dispute those claims.

Readers can read the original comment and judge for themselves whether the quote was in fact pulled out of context.

Comment author: Benito 31 October 2017 07:37:44PM 3 points [-]

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

This seems correct. I just noticed you could phrase this the other way: why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth? I agree that this disagreement will come down to building detailed models of incentives in human organisations more than building inside views of each field (which is why I didn't find Greg's post particularly persuasive: this isn't a matter of discussing rational Bayesian agents, but of discussing the empirical incentive landscape we are in).

Comment author: Pablo_Stafforini 31 October 2017 08:39:17PM *  0 points [-]

why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?

Maybe because these people have been surprisingly accurate? In addition, it's not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.

Comment author: RobBensinger 31 October 2017 04:04:06AM *  9 points [-]

This was a really good read, in addition to being super well-timed!

I don't think there's a disagreement here about ideal in-principle reasoning. I’m guessing that the disagreement is about several different points:

  • In reality, how generally difficult is it to spot important institutions and authorities failing in large ways? Where we might ask subquestions for particular kinds of groups; e.g., maybe you and the anti-modest will turn out to agree about how dysfunctional US national politics is on average, while disagreeing about how dysfunctional academia is on average in the US.

  • In reality, how generally difficult is it to evaluate your own level of object-level accuracy in some domain, the strength of object-level considerations in that domain, your general competence or rationality or meta-rationality, etc.? To what extent should we update strongly on various kinds of data about our reasoning ability, vs. distrusting the data source and penalizing the evidence? (Or looking for ways to not have to gather or analyze data like that at all, e.g., prioritizing finding epistemic norms or policies that work relatively OK without such data.)

  • How strong are various biases, either in general or in our environs? It sounds like you think that arrogance, overconfidence, and excess reliance on inside-view arguments are much bigger problems for core EAs than underconfidence or neglect of inside-view arguments, while Eliezer thinks the opposite.

  • What are the most important and useful debiasing interventions? It sounds like you think these mostly look like attempts to reduce overconfidence in inside views, self-aggrandizing biases, and the like, while Eliezer thinks that it's too easy to overcorrect if you organize your epistemology around that goal. I think the anti-modesty view here is that we should mostly address those biases (and other biases) through more local interventions that are sensitive to the individual's state and situation, rather than through rules akin to "be less confident" or "be more confident".

  • What's the track record for more modesty-like views versus less modesty-like views overall?

  • What's the track record for critics of modesty in particular? I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea. So I assume it would help to discuss the object-level disagreements underlying these diverging generalizations.

Does that match your sense of the disagreement?

Comment author: Pablo_Stafforini 31 October 2017 06:41:17PM *  3 points [-]

I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea.

Really? My sense is that the opposite is the case. Eliezer himself acknowledges that he has an "amazing bet-losing capability" and my sense is that he tends to bet against scientific consensus (while Caplan, who almost always takes the consensus view, has won virtually all his bets). Carl Shulman notes that Eliezer's approach "has lead [him] astray repeatedly, but I haven't seen as many successes."
