Comment author: itaibn 13 September 2017 01:58:00PM 0 points [-]

Before you get too excited about this idea, I want you to recall your days at school and how well it turned out when the last generation of thinkers tried this.

Comment author: Benito 08 September 2017 09:24:04PM 0 points [-]

"Surely You're Joking Mr Feynman" still shows genuine curiosity, which is rare and valuable. But as I say, it's less about whether I can argue for it, and more about whether the top intellectual contributors in our community found it transformative in their youth. I think many may have read Feynman when young (e.g. it had a big impact on Eliezer).

Comment author: itaibn 09 September 2017 10:16:28AM 0 points [-]

While I couldn't quickly find the source for this, I'm pretty sure Eliezer read the Lectures on Physics as well. Again, I think Surely You're Joking is good, I just think the Lectures on Physics is better. Both are reasonable candidates for the list.

Comment author: itaibn 08 September 2017 01:00:00AM 1 point [-]

The article on machine learning doesn't discuss the possibility that more people pursuing machine learning jobs could have a net negative effect. It's true your venue will generally encourage people who will be more considerate of the long-term and altruistic effects of their research, and so will likely have a more positive effect than the average entrant to the field, but if accelerating the development of strong AI is a net negative, that could outweigh the benefit of the average researcher being more altruistic.

Comment author: Benito 05 September 2017 09:19:44PM 6 points [-]

I don't think the idea Anna suggests is to pick books you think young people should read, but to actually ask the best people what books they read that influenced them a lot.

Things that come to my mind include GEB, HPMOR, The Phantom Tollbooth, Feynman. Also, which surprises me but is empirically true for many people, Sam Harris's "The Moral Landscape" seems to have been the first book a number of top people I know read on their journey to doing useful things.

But either way I'd want more empirical data.

Comment author: itaibn 08 September 2017 12:38:24AM 0 points [-]

What do you mean by Feynman? I endorse his Lectures on Physics as something that had a big effect on my own intellectual development, but I worry many people won't be able to get that much out of it. While his more accessible works are good, I don't rate them as highly.

Comment author: itaibn 30 August 2017 12:01:38PM 2 points [-]

This post is a bait-and-switch: It starts off with a discussion of the Good Judgement Project and what lessons it teaches us about forecasting superintelligence. However, starting with the section "What lessons should we learn?", you switch from a general discussion of these techniques to making a narrow point about which areas of expertise forecasters should rely on, an opinion which I suspect you arrived at through means not strongly motivated by the Good Judgement Project.

While I also suspect the Good Judgement Project could have valuable lessons on superintelligence forecasting, I think that taking verbal descriptions of how superforecasters make good predictions and citing them in arguments about loosely related specific policies is a poor way to do that. As a comparison, I don't think that giving a forecaster this list of suggestions and asking them to make predictions with those suggestions in mind would lead to performance similar to that of a superforecaster. In my opinion, the best way to draw lessons from the Good Judgement Project is to directly rely on existing forecasting teams, or new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

Moreover, among the list of suggestions in the section "What they found to work", you almost entirely focus on the second one, "Looking at a problem from multiple different viewpoints and synthesising them", to make your argument. You can also be said to be relying on the last suggestion, to the extent that it says essentially the same thing: that we should rely on multiple points of view. The one other exception is that you rely on the fifth suggestion, "Striving to distinguish as many degrees of doubt as possible - be as precise in your estimates as you can", when you argue their strategy documents should have more explicit probability estimates. In response to that, keep in mind that these forecasters are specifically tested on giving well-calibrated probabilistic predictions, so I expect their practice overstates the importance of precise probability estimates in other contexts. My hunch is that giving numerically precise subjective probability estimates is useful in discussions among people already trained to have a good subjective impression of what these probabilities mean, but among people without such training the effect of using precise probabilities is neutral or harmful. However, I have no evidence for this hunch.

I disapprove of this bait-and-switch. I think it deceptively builds a case for diversity in intelligence forecasting, and adds confusion to both the topics it discusses.

Comment author: itaibn 12 March 2017 02:34:05PM 3 points [-]

Suggestion: The author should have omitted the "Thoughts" section of this post and put the same content in a comment, and, in general, news posts should avoid subjective commentary in the main post.

Reasoning: The main content of this post is its report of EA-related news. This by itself is enough to make it worth posting. Discussion and opinions of this news can happen in the comments. By adding commentary you are effectively "bundling" a high-quality post with additional content, which grants this extra content undue attention.

Note: This comment was not prompted by any particular objection to the views discussed in this post. I also approve of the way you clearly separated the news from your thoughts on it. I don't think the post goes outside the EA Forum's community norms. Rather, I want to discuss whether shifting those community norms is a good idea.

Comment author: itaibn 08 March 2017 04:25:06AM -1 points [-]

The following is entirely a "local" criticism: It responds only to a single statement you made, and has essentially no effect on the validity of the rest of what you say.

I always run content by (a sample of) the people whose views I am addressing and the people I am directly naming/commenting on... I see essentially no case against this practice.

I found this statement surprising, because it seems to me that this practice has a high cost. It increases the amount of effort it takes to make a criticism. Increasing the cost of making criticisms can also make you less likely to consider making one at all. There is also a fixed cost in making this into a habit.

Given the situation you describe in the rest of your post, and specifically that you put a lot of effort into your comments in any case, I can see this practice working well for you. But that is not "essentially no case" against it, especially for people who aren't public figures.