Comment author: Kaj_Sotala 18 October 2017 03:22:04PM 1 point

There seem to be a lot of leads that could help us figure out the high-value interventions, though: i) knowledge about what causes it and what has contributed to changes in it over time; ii) research directions that could further improve our understanding of what does and doesn't cause it; iii) various interventions which already seem to work in small-scale settings, though it's still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group conversations); iv) psychology in general is full of interesting ideas for improving mental health and well-being that haven't been rigorously tested, which also suggests that v) any meta-work that improved psychology's research practices would be even more valuable than we previously thought.

As for the "pointing out a problem people have been aware of for millennia": well, people have been aware of global poverty for millennia too. Then we got science and randomized controlled trials and all the other stuff that EAs like, and got better at fixing the problem. It's time to start looking at how we can apply our improved understanding of this old problem to fixing it.

Comment author: itaibn 18 October 2017 06:16:54PM *  0 points

First, I consider our knowledge of psychology today to be roughly equivalent to that of alchemists when alchemy was popular. Like with alchemy, our main advantage over previous generations is that we're doing lots of experiments and starting to notice vague patterns, but we still don't have any systematic or reliable knowledge of what is actually going on. It is premature to seriously expect to change human nature.

Improving our knowledge of psychology to the point where we can actually figure things out could have a major positive effect on society. The same could be said for other branches of science. I think basic science is a potentially high-value cause, but I don't see why psychology should be singled out.

Second, this cause is not neglected. It is one of the major issues intellectuals have been grappling with for centuries or more. Framing the issue in terms of "tribalism" may be a novelty, but I don't see it as an improvement.

Finally, I'm not saying that there's nothing the effective altruism community can do about tribalism. I'm saying I don't see how this post is helping.

edit: As an aside, I'm now wondering if I might be expressing the point too rudely, especially the last paragraph. I hope we manage to communicate effectively in spite of any mistakes on my part.

Comment author: itaibn 18 October 2017 11:35:55AM 0 points

I don't see any high-value interventions here. Simply pointing out a problem people have been aware of for millennia will not help anyone.

Comment author: itaibn 14 October 2017 01:33:34PM 1 point

I don't think the people of this forum are qualified to discuss this. Nobody in the post or comments (as of the time I posted my comment, and I am including myself) gives me the impression of having detailed knowledge of the process and trade-offs involved in creating a new government agency, or in any other type of major governmental action on x-risk. As laymen, I believe we should not be proposing or judging any particular policy, but rather recognizing and supporting people with genuine expertise who are interested in existential risk policy.

Comment author: itaibn 13 September 2017 01:58:00PM 0 points

Before you get too excited about this idea, I want you to recall your days at school and how well it turned out when the last generation of thinkers tried this.

Comment author: Benito 08 September 2017 09:24:04PM 0 points

"Surely You're Joking Mr Feynman" still shows genuine curiosity, which is rare and valuable. But as I say, it's less about whether I can argue for it, and more about whether the top intellectual contributors in our community found it transformative in their youth. I think many may have read Feynman when young (e.g. it had a big impact on Eliezer).

Comment author: itaibn 09 September 2017 10:16:28AM 0 points

While I couldn't quickly find the source for this, I'm pretty sure Eliezer read the Lectures on Physics as well. Again, I think Surely You're Joking is good, I just think the Lectures on Physics is better. Both are reasonable candidates for the list.

Comment author: itaibn 08 September 2017 01:00:00AM 1 point

The article on machine learning doesn't discuss the possibility that more people pursuing machine learning jobs could have a net negative effect. It's true that your venue will generally encourage people who are more considerate of the long-term and altruistic effects of their research, and so will likely have a more positive effect than the average entrant to the field. But if accelerating the development of strong AI is a net negative, that could outweigh the benefit of the average researcher being more altruistic.

Comment author: Benito 05 September 2017 09:19:44PM 6 points

I don't think the idea Anna suggests is to pick books you think young people should read, but to actually ask the best people what books they read that influenced them a lot.

Things that come to my mind include GEB, HPMOR, The Phantom Tollbooth, Feynman. Also, and this surprises me but is empirically true for many people, Sam Harris's "The Moral Landscape" seems to have been the first book that a number of top people I know read on their journey to doing useful things.

But either way I'd want more empirical data.

Comment author: itaibn 08 September 2017 12:38:24AM 0 points

What do you mean by Feynman? I endorse his Lectures on Physics as something that had a big effect on my own intellectual development, but I worry many people won't be able to get that much out of it. While his more accessible works are good, I don't rate them as highly.

Comment author: itaibn 30 August 2017 12:01:38PM 2 points

This post is a bait-and-switch: it starts off with a discussion of the Good Judgement Project and what lessons it teaches us about forecasting superintelligence. However, starting with the section "What lessons should we learn?", you switch from a general discussion of these techniques to making a narrow point about which areas of expertise forecasters should rely on, an opinion which I suspect you arrived at by means not strongly motivated by the Good Judgement Project.

While I also suspect the Good Judgement Project could have valuable lessons on superintelligence forecasting, I think that taking verbal descriptions of how superforecasters make good predictions and citing them in arguments about loosely related specific policies is a poor way to do that. As a comparison, I don't think that giving a forecaster this list of suggestions and asking them to make predictions with those suggestions in mind would lead to performance similar to that of a superforecaster. In my opinion, the best way to draw lessons from the Good Judgement Project is to directly rely on existing forecasting teams, or new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

Moreover, among the list of suggestions in the section "What they found to work", you almost entirely focus on the second one, "Looking at a problem from multiple different view points and synthesising them?", to make your argument. You could also be said to be relying on the last suggestion, to the extent that it says essentially the same thing: that we should rely on multiple points of view. The only exception is that you rely on the fifth suggestion, "Striving to distinguish as many degrees of doubt as possible - be as precise in your estimates as you can", when you argue that their strategy documents should have more explicit probability estimates. In response to that, keep in mind that these forecasters are specifically tested on giving well-calibrated probabilistic predictions, so I expect this leads to overestimating the importance of precise probability estimates in other contexts. My hunch is that giving numerically precise subjective probability estimates is useful in discussions among people already trained to have a good subjective impression of what these probabilities mean, but that among people without such training the effect of using precise probabilities is neutral or harmful. However, I have no evidence for this hunch.
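
To make "tested on giving well-calibrated probabilistic predictions" concrete: the Good Judgement Project scored its forecasters with Brier scores, i.e. the mean squared error between the probabilities a forecaster states and what actually happens. Here is a minimal sketch of that scoring; the forecast numbers below are made up purely for illustration.

    def brier_score(forecasts, outcomes):
        # Mean squared error between stated probabilities and binary outcomes.
        # 0.0 is a perfect record; always answering 50% scores 0.25.
        assert len(forecasts) == len(outcomes)
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical forecaster: probabilities assigned to events that did (1)
    # or did not (0) happen.
    forecasts = [0.9, 0.7, 0.2, 0.6, 0.1]
    outcomes = [1, 1, 0, 0, 0]
    print(brier_score(forecasts, outcomes))  # ~0.10, better than chance (0.25)

A forecaster selected for a consistently low score across many questions has a measured track record of exactly this kind, which is why I would rather defer to such forecasters directly than to verbal summaries of their habits.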

I disapprove of this bait-and-switch. I think it deceptively builds a case for diversity in intelligence forecasting, and adds confusion to both the topics it discusses.

Comment author: itaibn 12 March 2017 02:34:05PM 3 points

Suggestion: The author should have omitted the "Thoughts" section of this post and put the same content in a comment, and, in general, news posts should avoid subjective commentary in the main post.

Reasoning: The main content of this post is its report of EA-related news. This by itself is enough to make it worth posting. Discussion of and opinions on this news can happen in the comments. By adding commentary you are effectively "bundling" a high-quality post with additional content, which grants this extra content undue attention.

Note: This comment was not prompted by any particular objection to the views discussed in this post. I also approve of the way you clearly separated the news from your thoughts on it. I don't think the post goes outside the EA Forum's community norms. Rather, I want to discuss whether shifting those community norms is a good idea.

Comment author: itaibn 08 March 2017 04:25:06AM -1 points

The following is entirely a "local" criticism: It responds only to a single statement you made, and has essentially no effect on the validity of the rest of what you say.

"I always run content by (a sample of) the people whose views I am addressing and the people I am directly naming/commenting on... I see essentially no case against this practice."

I found this statement surprising, because it seems to me that this practice has a high cost: it increases the amount of effort it takes to make a criticism, and that higher cost can also make you less likely to consider making a criticism at all. There is also a fixed cost in making this into a habit.

Given the situation you describe in the rest of your post, and specifically that you put a lot of effort into your comments in any case, I can see this practice working well for you. However, these costs mean there is not "no case" against it, especially for people who aren't public figures.
