Comment author: Carl_Shulman 14 December 2017 05:41:40PM *  15 points

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

I'd say our policy should be 'just don't do that.' EA has learned its lesson on this from GiveWell.

Also:

Integrity:

Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive. We also value the reputation of effective altruism, and recognize that our actions reflect on it.

Comment author: itaibn 14 December 2017 06:19:31PM 2 points

Indeed, maybe I should have made the point more harshly. To be clear, that comment is not about something people might do; it's about what's already present in the top post, which I see as breaking the Reddit rules.

I used soft language because I was worried about EA discussions breaking into arguments whenever someone suggests a good thing to do, and was worried that I might have erred too much in the other direction in other contexts. I still don't feel I have a good intuition on how confrontational I should be.

Comment author: itaibn 14 December 2017 04:54:50PM 6 points

I've spent some time thinking and investigating the current state of affairs, and here are my conclusions:

I've been reading through PineappleFund's comments. Many are responses to solicitations for specific charities, which they endorse as possibilities. One of these was for the SENS Foundation. Matthew_Barnett suggested that this is evidence that they particularly care about long-term-future causes, but given the diversity of the other causes they endorsed, I think it is fairly weak evidence.

They haven't yet commented on any of the subthreads specifically discussing EA. However, these subthreads are high up in the Reddit sorting algorithm and have many comments endorsing EA. This is already a good position and one that is difficult to improve: they either like what they see or they don't. It might be better if the top-level comments explicitly described and linked to a specific charity, since that is what they responded well to in other comments, but I am cautious about making such surface-level generalizations, which may reflect the distribution of existing comments rather than PineappleFund's tendencies.

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Comment author: Kaj_Sotala 18 October 2017 03:22:04PM 1 point

There seem to be a lot of leads that could help us figure out the high-value interventions, though:

i) knowledge about what causes it and what has contributed to changes in it over time;

ii) research directions that could help further improve our understanding of what does and doesn't cause it;

iii) various interventions that already seem to work in a small-scale setting, though it's still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group conversations);

iv) and of course psychology in general is full of interesting ideas for improving mental health and well-being that haven't been rigorously tested, which also suggests that

v) any meta-work that would improve psychology's research practices would be even more valuable than we previously thought.

As for the "pointing out a problem people have been aware of for millennia": well, people have been aware of global poverty for millennia too. Then we got science and randomized controlled trials and all the stuff that EAs like, and got better at fixing the problem. It's time to start looking at how we could apply our improved understanding of this old problem to fixing it.

Comment author: itaibn 18 October 2017 06:16:54PM *  0 points

First, I consider our knowledge of psychology today to be roughly equivalent to that of alchemists when alchemy was popular. As with alchemy, our main advantage over previous generations is that we're doing lots of experiments and starting to notice vague patterns, but we still don't have any systematic or reliable knowledge of what is actually going on. It is premature to seriously expect to change human nature.

Improving our knowledge of psychology to the point where we can actually figure things out could have a major positive effect on society. The same could be said for other branches of science. I think basic science is a potentially high-value cause, but I don't see why psychology should be singled out.

Second, this cause is not neglected. It is one of the major issues intellectuals have been grappling with for centuries or more. Framing the issue in terms of "tribalism" may be a novelty, but I don't see it as an improvement.

Finally, I'm not saying that there's nothing the effective altruism community can do about tribalism. I'm saying I don't see how this post is helping.

edit: As an aside, I'm now wondering if I might be expressing the point too rudely, especially the last paragraph. I hope we manage to communicate effectively in spite of any mistakes on my part.

Comment author: itaibn 18 October 2017 11:35:55AM 0 points

I don't see any high-value interventions here. Simply pointing out a problem people have been aware of for millennia will not help anyone.

Comment author: itaibn 14 October 2017 01:33:34PM 1 point

I don't think the people of this forum are qualified to discuss this. Nobody in the post or comments (as of the time I posted my comment, myself included) gives me the impression of having detailed knowledge of the process and trade-offs involved in creating a new government agency, or in any other type of major governmental action on x-risk. As laymen, I believe we should not be proposing or judging any particular policy, but rather recognizing and supporting people with genuine expertise who are interested in existential-risk policy.

Comment author: itaibn 13 September 2017 01:58:00PM 0 points

Before you get too excited about this idea, I want you to recall your days at school and how well it turned out when the last generation of thinkers tried this.

Comment author: Benito 08 September 2017 09:24:04PM 0 points

"Surely You're Joking Mr Feynman" still shows genuine curiosity, which is rare and valuable. But as I say, it's less about whether I can argue for it, and more about whether the top intellectual contributors in our community found it transformative in their youth. I think many may have read Feynman when young (e.g. it had a big impact on Eliezer).

Comment author: itaibn 09 September 2017 10:16:28AM 0 points

While I couldn't quickly find the source for this, I'm pretty sure Eliezer read the Lectures on Physics as well. Again, I think Surely You're Joking is good, I just think the Lectures on Physics is better. Both are reasonable candidates for the list.

Comment author: itaibn 08 September 2017 01:00:00AM 1 point

The article on machine learning doesn't discuss the possibility that more people pursuing machine learning jobs could have a net negative effect. It's true that your venue will generally encourage people who are more considerate of the long-term and altruistic effects of their research, and so will likely have a more positive effect than the average entrant to the field; but if accelerating the development of strong AI is a net negative, that could outweigh the benefit of the average researcher being more altruistic.

Comment author: Benito 05 September 2017 09:19:44PM 6 points

I don't think the idea Anna suggests is to pick books you think young people should read, but to actually ask the best people what books they read that influenced them a lot.

Things that come to my mind include GEB, HPMOR, The Phantom Tollbooth, Feynman. Also, which surprises me but is empirically true for many people, Sam Harris's "The Moral Landscape" seems to have been the first book a number of top people I know read on their journey to doing useful things.

But either way I'd want more empirical data.

Comment author: itaibn 08 September 2017 12:38:24AM 0 points

What do you mean by Feynman? I endorse his Lectures on Physics as something that had a big effect on my own intellectual development, but I worry many people won't be able to get that much out of it. While his more accessible works are good, I don't rate them as highly.

Comment author: itaibn 30 August 2017 12:01:38PM 2 points

This post is a bait-and-switch: it starts off with a discussion of the Good Judgement Project and what lessons it teaches us about forecasting superintelligence. However, starting with the section "What lessons should we learn?", you switch from a general discussion of these techniques to making a narrow point about which areas of expertise forecasters should rely on, an opinion which I suspect the author arrived at through means not strongly motivated by the Good Judgement Project.

While I also suspect the Good Judgement Project could have valuable lessons for superintelligence forecasting, I think that taking verbal descriptions of how superforecasters make good predictions and citing them in arguments about loosely related specific policies is a poor way to do that. As a comparison, I don't think that giving a forecaster this list of suggestions, and asking them to make predictions with those suggestions in mind, would lead to performance similar to that of a superforecaster. In my opinion, the best way to draw lessons from the Good Judgement Project is to directly rely on existing forecasting teams, or on new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

Moreover, among the list of suggestions in the section "What they found to work", you almost entirely focus on the second one, "Looking at a problem from multiple different view points and synthesising them?", to make your argument. You can also be said to be relying on the last suggestion to the extent that it says essentially the same thing: that we should rely on multiple points of view. The only exception is that you rely on the fifth suggestion, "Striving to distinguish as many degrees of doubt as possible - be as precise in your estimates as you can", when you argue their strategy documents should have more explicit probability estimates. In response, keep in mind that these forecasters are specifically tested on giving well-calibrated probabilistic predictions; I therefore expect this to overestimate the importance of precise probability estimates in other contexts. My hunch is that giving numerically precise subjective probability estimates is useful in discussions among people already trained to have a good subjective sense of what these probabilities mean, but that among people without such training the effect of using precise probabilities is neutral or harmful. However, I have no evidence for this hunch.

I disapprove of this bait-and-switch. I think it deceptively builds a case for diversity in intelligence forecasting, and adds confusion to both the topics it discusses.
