Comment author: Buck 26 October 2017 10:46:59PM * 5 points

I appreciate this comment for being specific!

It might be helpful if people tried to drop this identity baggage when discussing diversity issues in EA.

I don't understand what you mean by that; could you clarify?

Comment author: Askell 26 October 2017 11:11:45PM 10 points

So I think that if you identify with or against some group (e.g. 'anti-SJWs'), then anything people say that pattern-matches to something this group would say triggers a reflexive negative reaction. This manifests in various ways: either you're inclined to attribute far more to the person's statements than what they're actually saying, or you set an overly demanding bar for them to "prove" that what they're saying is correct. And I think all of that is pretty bad for discourse.

I also suspect that if we took a detached attitude towards this sort of thing, disagreements about things like how much of a diversity problem EA has, or what is causing it, would be much less prominent than they currently are. These disagreements only affect the benefits we expect to accrue directly from trying to improve things, but the costs of doing these things are usually pretty low and the information value of experimenting with them is really high. So I don't really see many plausible views in this area that would make it rational to take a strong stance against a lot of the easier things people could try that might increase the number of women and minorities who get involved with EA.

Comment author: Askell 26 October 2017 10:42:34PM 25 points

An example of a particular practice that I think might look kind of innocuous but can be quite harmful to women and minorities in EA is what I'm going to call "buzz talk". Buzz talk involves making highly subjective assessments of people's abilities, putting a lot of weight on those assessments, and communicating them to others in the community. Buzz talk can be very powerful, but the beneficiaries of buzz seem to disproportionately be those who conform to a stereotype of brilliance: a white, upper-class male might be "the next big thing" when his black, working-class female counterpart wouldn't even be noticed. These are the sorts of small, unintentional behaviors that I think it can be good for people to try to be conscious of.

I also think it's really unfortunate that there's such a large schism between those involved in the social justice movement and people who largely disagree with this movement (think: SJWs and anti-SJWs). The EA community attracts people from both of these groups, and I think this can cause people to see the whole issue through the lens of whichever group they identify with. It might be helpful if people tried to drop this identity baggage when discussing diversity issues in EA.

Comment author: RyanCarey 27 March 2017 07:07:09PM 1 point

I suspect that the distinctions here are actually less bright than "philosophical analysis" and "concrete research". I can think of theoretical work that is consistent with doing what you call (i) - (iii) and does not involve a lot of guesswork. After all, a lot of theoretical work is empirically informed, even if it's not itself intended to gather new data. And a lot of this theoretical work is quite decision-relevant. A simple example is effective altruism itself: early work in EA was empirically informed theoretical work... I suspect that doing empirically informed theoretical work on these questions would be more fruitful than trying to solve them through empirical means only... So my inclination is to take this on a case-by-case basis... What, then, about pure a priori work like mathematics and conceptual work?

I don't think I'm arguing what you think I'm arguing. To be clear, I wouldn't claim a bright dividing line, nor would I claim that more philosophical work, or pure mathematics, has no use at all. Nor would I claim that we should avoid theory altogether. I agree that there are cases of theoretical work that could be useful. For example, there is AI safety, and there may be some important crossover work to be done in ethics and in understanding human experience and human values. But that doesn't mean we need to throw up our hands and say that everything must be taken on a case-by-case basis, if in fact we have good reasons to say we're overall overinvesting in one kind of research rather than another. The aim has to be to do some overall prioritization.

Another example that's close to my heart is value of information work. There are existing problems in how to identify high and low value of information, when to explore vs. exploit, and so on... If you find good, novel foundational work to do, then it can often bear fruit later. E.g., work in economics and game theory is of this sort, and a lot of concepts from game theory are very useful for analyzing real-world situations. It would have been a shame if this work had been dismissed early on as not decision-relevant.

I agree that thinking about exploration vs. exploitation tradeoffs is both interesting and useful. However, the Gittins index was discovered in 1979, and much of the payoff of this discovery came decades afterward. We have good reasons to have pretty high discount rates, such as i) returns on shaping research communities that are growing at high double-digit percentages, and ii) double-digit chances of human-level AI in the next 15 years.
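As an illustrative aside (not part of the original comment), the explore/exploit tradeoff and the role of discount rates mentioned above can be sketched with a toy epsilon-greedy agent on a two-armed Bernoulli bandit. All the numbers here (arm probabilities, epsilon, discount factor, horizon) are made up purely for illustration; a real Gittins-index computation would be considerably more involved.

```python
import random

def run_bandit(probs, horizon, epsilon, discount, rng):
    """Epsilon-greedy play on a Bernoulli bandit with geometric discounting.

    Returns the discounted total reward and the final per-arm estimates.
    A higher discount rate (lower `discount`) makes early exploitation
    relatively more valuable than late-paying exploration.
    """
    counts = [0] * len(probs)   # pulls per arm
    values = [0.0] * len(probs)  # running mean reward per arm
    total = 0.0
    for t in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(len(probs))              # explore
        else:
            arm = max(range(len(probs)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # update mean
        total += (discount ** t) * reward
    return total, values

rng = random.Random(0)
payoff, estimates = run_bandit([0.3, 0.6], horizon=500,
                               epsilon=0.1, discount=0.98, rng=rng)
```

With a discount factor of 0.98 the maximum achievable discounted payoff over 500 steps is bounded by 1/(1 - 0.98) = 50, which is one way of seeing why heavy discounting shrinks the value of information gathered late.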

There's very little empirical research going into important concrete issues such as how to stage useful policy interventions for risky emerging technologies (Allan Dafoe and Mathias Mass notwithstanding), how to build better consensus among decision-makers, how to get people to start more good projects, how to recruit better, and so on, even though many important decisions of EAs will depend on these. It's tempting to say that many EAs have wholly forgotten what ambitious business plans and literature reviews on future-facing technologies are even supposed to look like! I would love to write that off as hyperbole, but I haven't seen any recent examples. And it seems critical that theory should be feeding into such a process.

I'd be interested to know if people have counter-considerations about what should be a higher priority.

Comment author: Askell 28 March 2017 09:56:44AM * 4 points

There are two different claims here: one is "type x research is not very useful" and the other is "we should be doing more type y research at the margin". In the comment above, you seem to be defending the latter, but your earlier comments support the former. I don't think we necessarily disagree on the latter claim (perhaps on how to divide x from y, and on the optimal proportion of x and y, but not on the core claim). But note that the second claim is somewhat tangential to the original post. If type x research is valuable, then even though we might want more type y research at the margin, this isn't a consideration against a particular instance of type x research. Of course, if type x research is (in general or in this instance) not very useful, then this is of direct relevance to a post that is an instance of type x research.

It seems important not to conflate these, or to move from a defense of the former to a defense of the latter. Above, you acknowledge that type x research can be valuable, so you don't hold the general claim that type x research isn't useful. I think you do hold the view that either this particular instance of research, or this subclass of type x research, is not useful. I think that's fine, but it's important not to frame this as merely a disagreement about what kinds of research should be done at the margin, since this is not the source of the disagreement.

Comment author: RyanCarey 25 March 2017 11:53:25PM * 1 point

For an example of this view, see Nick Beckstead's research advice from back in 2014:

I think most highly abstract philosophical research is unlikely to justify making different decisions. For example, I am skeptical of the “EA upside” of most philosophical work on decision theory, anthropics, normative ethics, disagreement, epistemology, the Fermi paradox, and animal consciousness—despite the fact that I’ve done a decent amount of work in the first few categories. If someone was going to do work in these areas, I’d probably be most interested in seeing a very thorough review of the Fermi Paradox, and second most interested in a detailed critique of arguments for the overwhelming importance of the very long-term future.

I’m also skeptical of developing frameworks for making comparisons across causes right now. Rather than, e.g., trying to come up with some way of trying to trade off IQ increases per person with GDP per capita increases, I would favor learning more about how we could increase IQ and how we could increase GDP per capita. There are some exceptions to this; e.g., I see how someone could make a detailed argument that, from a long-run perspective, human interests are much more instrumentally important than animal interests. But, for the most part, I think it makes more sense to get information about promising causes now, and do this kind of analysis later. Likewise, rather than developing frameworks for choosing between career areas, I’d like to see people just gather information about career paths that look particularly promising at the moment.

Other things being equal, I strongly prefer research that involves less guesswork. This is less because I’m on board with the stuff Holden Karnofsky has said about expected value calculations—though I agree with much of it—and more because I believe we’re in the early days of effective altruism research, and most of our work will be valuable in service of future work. It is therefore important that we do our research in a way that makes it possible for others to build on it later. So far, my experience has been that it’s really hard to build on guesswork. I have much less objection to analysis that involves guesswork if I can be confident that the parts of the analysis that involve guesswork factor in the opinions of the people who are most likely to be informed on the issues.

Comment author: Askell 27 March 2017 08:40:48AM 3 points

I suspect that the distinctions here are actually less bright than "philosophical analysis" and "concrete research". I can think of theoretical work that is consistent with doing what you call (i) - (iii) and does not involve a lot of guesswork. After all, a lot of theoretical work is empirically informed, even if it's not itself intended to gather new data. And a lot of this theoretical work is quite decision-relevant. A simple example is effective altruism itself: early work in EA was empirically informed theoretical work. Another example that's close to my heart is value of information work. There are existing problems in how to identify high and low value of information, when to explore vs. exploit, and so on. I suspect that doing empirically informed theoretical work on these questions would be more fruitful than trying to solve them through empirical means only. So my inclination is to take this on a case-by-case basis. We see radical leaps forward sometimes being generated by theoretical work and sometimes being generated by novel empirical discoveries. It seems odd not to draw from both of these highly successful methods.
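As an illustrative aside (not part of the original comment), the "value of information" idea mentioned above has a simple textbook form: the expected value of perfect information (EVPI) is how much better you'd do, in expectation, if you could resolve your uncertainty before choosing. The interventions, scenarios, and payoff numbers below are entirely hypothetical, chosen only to make the arithmetic easy to follow.

```python
# Two hypothetical interventions, A and B, whose payoffs depend on an
# uncertain scenario ("high" or "low" impact for B). All numbers invented.
p_high = 0.5  # prior probability that the "high" scenario is true
value = {
    ("A", "high"): 10, ("A", "low"): 10,  # A is a safe bet
    ("B", "high"): 30, ("B", "low"): 2,   # B is a gamble
}

def ev(action):
    """Expected value of an action under the prior."""
    return p_high * value[(action, "high")] + (1 - p_high) * value[(action, "low")]

# Deciding now: pick the action with the best expected value.
best_now = max(ev("A"), ev("B"))  # EV(A) = 10, EV(B) = 16

# Deciding with perfect information: in each scenario, pick the best action,
# then average over scenarios.
ev_with_info = (p_high * max(value[("A", "high")], value[("B", "high")])
                + (1 - p_high) * max(value[("A", "low")], value[("B", "low")]))

evpi = ev_with_info - best_now  # upper bound on what learning is worth here
```

Here deciding now yields an expected 16, while perfect information yields 20, so learning the scenario first is worth up to 4 units: a small, concrete version of the claim that cheap experiments can carry high information value.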

What, then, about pure a priori work like mathematics and conceptual work? I think I agree with Owen that this kind of work is important for building solid foundations. But I'd also go further in saying that if you find good, novel foundational work to do, it can often bear fruit later. E.g., work in economics and game theory is of this sort, and a lot of concepts from game theory are very useful for analyzing real-world situations. It would have been a shame if this work had been dismissed early on as not decision-relevant.


Act utilitarianism: criterion of rightness vs. decision procedure

A useful distinction for people thinking about act consequentialism in general and act utilitarianism in particular is the distinction between a criterion of rightness and a decision procedure (which has been discussed by Toby Ord in much more detail). A criterion of rightness tells us what it takes... Read More