Verden

14 karma

Comments (6)

Okay, this is very off-topic, but I just really want more EAs to know about a browser extension that has massively improved my Twitter experience.

https://github.com/insin/tweak-new-twitter/

Tweak New Twitter is a browser extension which removes algorithmic content from Twitter, hides news and trends, lets you control which shared tweets appear on your timeline, and adds other UI improvements.

I should have clarified that I think (or at least I thought so prior to your question; I'm kind of confused now) that Yudkowsky's answer is probably one of those two MIRI responses. Sorry about that.

I recall that you or somebody else at MIRI once wrote something to the effect that most MIRI researchers don't actually believe p(doom) is extremely high, like >90% doom. Then, in the linked post, there is a comment from someone who marked themselves as both a technical safety researcher and a strategy researcher and who gave 0.98 and 0.96 on your questions. The style and content of the comment struck me as something Yudkowsky would have written.

Rob thinking that it's not actually 99.99% is in fact an update for me.

This survey suggests that he was at 96-98% a year ago.

My point has little to do with him being the director of MIRI per se. 

I suppose I could be wrong about this, but my impression is that Nate Soares is among the top 10 most talented/insightful people with an elaborate inside view and years of research experience in AI alignment. He also seems to agree with Yudkowsky on a whole lot of issues and predicts about the same p(doom) for about the same reasons. And I feel that many people don't give enough thought to the fact that while e.g. Paul Christiano has interacted a lot with Yudkowsky and disagreed with him on many key issues (while agreeing on many others), there's also Nate Soares, who broadly agrees with Yudkowsky's models that predict very high p(doom).

Another, more minor point: if someone is bringing up Yudkowsky's track record in the context of his extreme views on AI risk, it seems helpful to talk about Soares' track record as well.

I feel like people are missing one fairly important consideration when discussing how much to defer to Yudkowsky, etc. Namely, I've heard multiple times that Nate Soares, the executive director of MIRI, has models of AI risk that are very similar to Yudkowsky's, and their p(doom) estimates are also roughly the same. My limited impression is that Soares is no less smart or otherwise capable than Yudkowsky. So, when having this kind of discussion focused on Yudkowsky's track record, I think it's good to remember that there's another very smart person, who entered AI safety much later than Yudkowsky, and who holds very similar inside views on AI risk.

This leads to scepticism about the rationality of predictors which show a pattern of ‘steadily moving in one direction’ for a given question [...] Eliezer Yudkowsky has confidently asserted it is an indicator of sub-par Bayesian updating

I'm confused about your interpretation of Eliezer's words. What he seems to be saying is that recent advances in ML shouldn't have caused such a notable update in Metaculus's predictions on AI timelines, since a more rational predictor would have assigned a higher probability to such advances happening in this timeframe than the Metaculus crowd apparently did. Admittedly, he may be wrong about that, but I read what he wrote as a claim about this one particular question, not about steady updates in one direction in general.
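
For background on why predictable drift gets read as a Bayesian red flag at all, here is the standard martingale argument, written as a minimal sketch; this is general context rather than a reconstruction of what Eliezer or the post's authors actually argued. Under Bayesian updating, your credence in an event is a martingale with respect to your own information: by the law of total expectation,

$$
\mathbb{E}\left[p_{t+1} \mid \mathcal{F}_t\right] = \mathbb{E}\left[\mathbb{E}[X \mid \mathcal{F}_{t+1}] \mid \mathcal{F}_t\right] = \mathbb{E}[X \mid \mathcal{F}_t] = p_t,
$$

where $X$ is the indicator of the event and $\mathcal{F}_t$ is the information you have at time $t$. So a directional drift that was predictable in advance suggests miscalibration, while a drift driven by genuinely surprising news is perfectly consistent with rational updating; the disagreement above is essentially about which of these the Metaculus shift was.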