Comment author: DavidMoss 30 October 2017 03:14:14AM 5 points [-]

I don't have much to contribute to the normative social epistemology questions raised here, since this is a huge debate within philosophy. People interested in a general summary might read the Philosophy Compass review or the SEP article.

But I did want to question the claim made here about the descriptive social epistemology of the EA movement, i.e. that:

What occurs instead is agreement approaching fawning obeisance to a small set of people the community anoints as ‘thought leaders’, and so centralizing on one particular eccentric and overconfident view.

I'm not sure this is useful as a general characterisation of the EA community, though certainly at times people are too confident, too deferential, etc. What beliefs might be the beneficiaries of this fawning obeisance? There doesn't seem to me to be sufficient uncontroversial agreement about much (even utilitarianism has a number of prominent 'thought leaders' pushing against it, saying that we ought to be opening ourselves up to alternatives).

The general characterisation seems in tension with the common idea that EA is highly combative and confrontational (it would be strange, though not impossible, if we combined constant disagreement and attempted argumentative one-upmanship with excessive deference to certain thought leaders). Instead, what I see is occasional excessive deference to people respected within certain cliques, by members of those circles, but not 'centralization' on any one particular view. Perhaps all Greg has in mind is these kinds of cases where people defer too much to people they shouldn't (perhaps due to a lack of actual experts in EA rather than to their own vice). But then it's not clear to me what the typical EA-rationalist, who has not made and probably shouldn't make a deep study of many-worlds, free will, or meta-ethics, should do to avoid this problem.

Comment author: kbog  (EA Profile) 30 October 2017 07:53:30AM *  2 points [-]

(even utilitarianism has a number of prominent 'thought leaders' pushing against it saying that we ought to be opening ourselves up to alternatives).

Also, EA selects for utilitarians in the first place. So you can't say that we're being irrational just because we're disproportionately utilitarian.

Comment author: Halstead 30 October 2017 12:27:44AM *  7 points [-]

I thought I'd offer up more object-level examples to try to push against your view. AI risk is a case in which EAs disagree with the consensus among numerous AI researchers and other intelligent people. In my view, a lot of the arguments I've heard from AI researchers have been very weak and haven't shifted my credence all that much. But modesty here seems to push me toward the consensus to a greater extent than the object-level reasons warrant.

With respect to the question of AI risk, it seems to me that I should demote these people from my epistemic peer group because they disagree with me on the subject of AI risk. If you accept this, then it's hard to see what difference there is between immodesty and modesty.

Comment author: kbog  (EA Profile) 30 October 2017 07:39:53AM *  2 points [-]

On many object-level claims, like the probability that there will be an intelligence explosion and so on, the difference between EAs and AI researchers is not very large. This survey demonstrated it: https://arxiv.org/abs/1705.08807

AI researchers are just more likely to make judgement calls such as that anything less than ~10% likely to occur should be ignored, or that existential risks are not orders of magnitude more important than other things.

The one major technical issue where EAs might be systematically different from AI researchers would be the validity of current research in addressing the problem.

Comment author: Michael_PJ 27 October 2017 06:45:21PM 6 points [-]

Whether a discussion proceeds as collaborative or combative depends on how the participants interpret what the other parties say. This is all heavily contextual, but as with many things involving conversational implicature, you can often spend some effort to clarify your implicature.

The internet is notoriously bad for conveying the unconscious signals that we usually use to pick up on implicature, and I think this is one of the reasons that internet discussions often turn hostile and combative.

So it's worth putting more signals of your intent into the text itself, since that's all you have.

Comment author: kbog  (EA Profile) 29 October 2017 10:24:55AM *  0 points [-]

The right approach is to only look at actual points being made, and not try to infer implications in the first place.

When someone reacts to an implication, the appropriate response is to say "but I/they didn't say anything about that," ignore their complaints, and move on.

Comment author: MichaelPlant 27 October 2017 10:40:46AM 1 point [-]

I guess I'm basing my subjective judgement of 'conspicuous by its absence' on comparing how much inclusivity gets discussed in wider society vs how much it gets discussed in EA. I don't think a few posts over a few years really cuts the mustard, not when it's not obvious how much is being done on this issue.

Comment author: kbog  (EA Profile) 29 October 2017 10:19:45AM *  1 point [-]

how much inclusivity gets discussed in wider society vs how much it gets discussed in EA.

With the exception of groups which specifically exist for the purpose of promoting inclusivity, I can't think of any groups which discuss it more than EA.

Heck, even groups like that - e.g., BLM or anti-GamerGate groups or other leftist cultural movements - don't spend significantly more time worrying about their own inclusivity than EA does.

Comment author: KevinWatkinson  (EA Profile) 27 October 2017 01:08:56PM 0 points [-]

In reference to moral uncertainty? In this article I'm saying two things which I think have a fairly similar basis. Firstly, that we need to give consideration to different value systems or we risk gravitating to a single value system by default, which is what I argue has generally happened in EAA. So I outline some ways this could be addressed.

In terms of how the issues are negotiated, with reference to this article, I'm not in favour of normative externalism, which in my view represents the main situation of EAA at present (welfare / reducetarianism). My Favourite Theory (simply following the single theory one finds most credible) probably wouldn't work either, because other theories are marginalised in EAA, so it would be disproportionate in such a way that different theories likely wouldn't be heard. Maximising choice-worthiness could work better if frameworks for intervention were more thoroughly applied and there was an improvement in cross-movement communication. The parliamentary model could be a possibility, but again there is an issue of representation, and part of the reason certain moral theories aren't represented is because there isn't space for their inclusion, or they aren't well understood / the drive toward normative externalism has obfuscated them.

There is an issue in relation to how I'm talking about two seemingly different issues of inclusion concurrently, but in my view the idea of 'inclusion' is fairly broad in EA, and there are a number of commonalities which can be applied to being inclusive of different value systems and of people who are marginalised by mainstream society (indeed, sometimes both considerations need to be applied at the same time). This isn't to say we need to include everyone, or all value systems, though I am saying that more consideration needs to be given to systems compatible with Effective Altruism, so that it can better inform the work we do, and that more consideration needs to be given to people who have less privilege. If we are merely truth-seeking within our own value systems, I think this isn't going to be so worthwhile, and I am less certain it really represents what Effective Altruism is about.

As I view it, there is at least some concern around these issues often expressed within Effective Altruism, but not so much agreement in terms of what needs to happen, or indeed about the consequences of the present situation. However, I think there are some things of which many EAs could be persuaded, including the utility of meta-evaluation, and I think this would also provide a stronger foundation for making suggestions about potential changes to address issues of inclusion. This could be grounded in moral uncertainty, but as I suggested, I think there could be some steps before reaching that stage, such as how value systems are represented.

Comment author: kbog  (EA Profile) 27 October 2017 06:33:18PM *  0 points [-]

A quick comment, because I'm busy right now: I'm not sure that any of those accounts of moral uncertainty are mutually exclusive, with the exceptions of MEC vs. MFT and the parliamentary model vs. MFT. The parliamentary model is vaguely defined, and MEC is, IMO, the theoretically best way to construe the parliamentary model.

I think there's a rigid distinction between value systems and utility functions on one hand, and empirical questions of cause effectiveness on the other, and the former can't directly inform the latter - it's like a reverse is-ought gap.

An availability cascade of a moral theory - people assume it's right because other people believe it, and so on - is definitely bad and ought to be avoided.

Comment author: hollymorgan 27 October 2017 06:20:44PM 0 points [-]

In mid-March 2017, EA Wikipedia page views hit 3-4 times as many views as on any day in the preceding or succeeding months...What happened in March?

Comment author: kbog  (EA Profile) 27 October 2017 06:22:54PM *  0 points [-]

I think it was one of those big podcasts, like MacAskill on Joe Rogan's show or something like that.

Comment author: Michael_PJ 27 October 2017 05:41:16PM 2 points [-]

I do think that there should be higher bars for overtly signalling collaborativeness online, because so many other cues are missing.

Comment author: kbog  (EA Profile) 27 October 2017 05:59:02PM *  2 points [-]

I'm confused, you mean people should be expected to explicitly signal that they are being collaborative?

In my view the basic structure of a "combative" debate need not entail any negative connotation of hostility or interpersonal trouble. Point/counterpoint is just a standard, default, acceptable mode of discussion. So ideally, when you see people talking like that, as long as things are reasonably civil then you don't feel a need to worry about it. It's a problem that some people don't see "combative" discussions in this way, but I don't think there is any better solution in the long run. If you try to evolve norms to avoid the uncertainty and negative perceptions then you run along a treadmill - like the story with politically correct terminology. It's okay to have a combative structure as long as you stick within the mainstream window of professional and academic discourse, and I think EA is mostly fine at that.

Comment author: xccf 27 October 2017 11:41:50AM 13 points [-]

The pattern I see is that "organizations" (such as government agencies or Fortune 500 companies) usually turn out OK, whereas "movements" or "communities" (e.g. the atheism movement, or the open source community) often turn out poorly.

Comment author: kbog  (EA Profile) 27 October 2017 11:50:45AM 11 points [-]

Hm, that's a good point. I can't come up with a solid counterexample off the top of my head.

Comment author: Michael_PJ 26 October 2017 11:23:49PM 13 points [-]

I'd like to move towards an inclusive community that doesn't damage the valuable aspects of EA. I think this post mostly did a good job of suggesting things in that vein (I was heartened to see "don't stop being weird" as an item), but I'd like to push on the point a bit more.

For example, I'm hugely in favour of collaborative discussions over combative discussions, but I find it very helpful to have discussions that stylistically appear combative while actually being collaborative. For example: frequent, direct criticism of ideas put forward by other people is a hallmark of combative discussion, but can be fine so long as everyone is on an even footing and "you are not your ideas" is common knowledge. If we ban this, then we make some parts of our discourse worse. Overly zealous pursuit of formalized markers can destroy a lot of value.

Of course, the solution is "don't do that", but the most obvious approach to "have more X" is "pick some formal markers of X and select for them". Doing better is harder, perhaps something like "have workshops/talks on good disagreement", "praise people who're known for being excellent at this" etc.


I agree with others that there are too many suggestions in this post. They're also a bit of a grab bag. I can see a few categories:

  • Miscellaneous criticisms, many of which seem plausible, but aren't obviously any more important for diversity than for their other benefits (collaborative discussions, humility, less hero-worship, better interpersonal interactions etc.).
  • Larger-scale shifts of uncertain effect (head vs heart, jargon, caution over "free speech", etc.). A lot of these are unclear to me, and I think we'd want to take a clear-headed look at the costs and benefits.
  • More specific diversity-boosting measures (female speakers, try to counteract bias, mentor people etc.). These seem clearest to me, and hopefully we can look and see what's worked well in other places vs the costs.

I think the miscellaneous improvements could (and should!) stand on their own; the larger-scale shifts are perhaps best discussed individually; and what I think a diversity criticism is uniquely placed to bring is more of the third kind of thing.

Comment author: kbog  (EA Profile) 27 October 2017 10:27:25AM *  1 point [-]

For example, I'm hugely in favour of collaborative discussions over combative discussions, but I find it very helpful to have discussions that stylistically appear combative while actually being collaborative. For example: frequent, direct criticism of ideas put forward by other people is a hallmark of combative discussion, but can be fine so long as everyone is on an even footing and "you are not your ideas" is common knowledge.

Yeah, we have already gone too far with condemning combativeness on the EA forum, in my opinion. Demanding that everyone stop and rephrase their language in careful, flowery terms is pretty alienating and marginalizing to people who aren't accustomed to that kind of communication, so you're not going to be able to please everyone.

Comment author: xccf 27 October 2017 03:04:41AM *  31 points [-]

Thanks for this post. There's a lot I agree with here. I'm in especially vigorous agreement with your points regarding hero worship and seeing newcomers as a source of fresh ideas/arguments instead of condescending to them.

There are also some points I disagree with. And in the spirit of not considering any arguments above criticism, and disagreement being critical for finding the best answers, I hope you won't mind if I lay my disagreements out. To save time, I'll focus on the differences between your view and mine. So if I don't mention a point you made, you can default to assuming I agree with it.

First, I'm broadly skeptical of the social psychology research you cite. Whenever I read about a study that claims women are more analytical than men, or women are better leaders than men, I ask myself whether I would have heard about it if the experiment had found the opposite result.

I recommend this blog post on the lack of ideological diversity in social psychology. Social psychologists are overwhelmingly liberal, and many openly admit to discriminating against conservatives in hiring. Here is a good post by a Mexican social psychologist that discusses how this plays out. There's also the issue of publication bias at the journal level. I know someone who served on the selection committee of a (minor & unimportant, so perhaps not representative) psychology journal. The committee had an explicit philosophy of only publishing papers they liked, and espousing "problematic" views was a strike against a paper. Anyway, I think to some degree the field functions as a liberal echo chamber on controversial issues.

There's really an entire can of worms here--social psychology is currently experiencing a major reproducibility crisis--but I don't want to get too deep into it, because to defend my position fully, I'd want to share evidence for positions that make people uncomfortable. Suffice it to say that there's a third layer of publication bias at the level of your Facebook feed, and I could show you a different set of research-backed thinkpieces that point to different conclusions. (Suggestion: if you wouldn't want someone on the EA Forum to make arguments for the position not-X, maybe avoid making arguments for the position X. Otherwise you put commenters in an impossible bind.)

But for me this point is really the elephant in the room:

some people in broader society now respond to correctable offenses with a mob mentality and too much readiness for ostracization, but just because some people have swung too far past the mark doesn’t mean we should default to a status quo that falls so short of it.

I would like to see a much deeper examination here. Insofar as I feel resistant to diversity efforts, this feels like most of what I'm trying to resist. If I were confident that pro-diversity people in EA wouldn't spiral towards this, I'd be much more supportive. Relevant fable.

All else equal, increased diversity sounds great, but my issue is I see a pattern of other pro-diversity movements sacrificing all other values in the name of trying to increase diversity. Take a statement like this one:

Some of the most talented and resolute people in this community are here because they are deeply emotionally compelled to help others as much as possible, and we’re currently missing out on many such people by being so cold and calculating. There are ways to be warm and calculating! I can think of a few people in the community who manage this well.

Being warm and calculating sounds great, but what if there's actually a tradeoff here? Just taking myself as an example, I know that as I've become aware of how much suffering exists in the grand scheme of things, I've begun to worry less about random homeless people I see and stuff like that. Even if there's some hack I can use to empathize with homeless people while retaining a global perspective, that hack would require effort on my part--effort I could put towards goals that seem more important.

this particular individual — who is probably a troll in general — was banned from the groups where he repeatedly and unrelentingly said such things, though it’s concerning there was any question about whether this was acceptable behavior.

Again, I think there's a real tradeoff between "free speech" and sensitivity. I view the moderation of online communities as an unsolved problem. I think we benefit from navigating moderation tradeoffs thoughtfully rather than reactively.

Reminding people off the forum to upvote this post, in order to deal with possible hostility, is also a minor red flag from my perspective. This resembles something Gleb Tsipursky once did.

None of this seems very bad in the grand scheme of things, especially not compared to what I've seen from other champions of diversity--I just thought it'd be useful to give concrete examples.

Anyway, here are some ideas of mine, if anyone cares:

  • Phrase guidelines as neutrally as possible, e.g. "don't be a jerk" instead of "don't be a sexist". The nice thing about "don't be a jerk" is it admits the possibility that someone could violate the guideline by e.g. loudly calling out a minor instance of sexism in a way that generates a lot of drama and does more harm than good. Rules should exist to serve everyone, and they should be made difficult to weaponize. If most agree your rules are legitimate, that also makes them easier to enforce.

  • Team-building activities, icebreakers, group singalongs, synchronous movement, sports/group exercise, and so on. The ideal activity is easy for anyone to do and creates a shared EA tribal identity just strong enough to supersede the race/gender/etc. identities we have by default. Kinda like how students at the same university will all cheer for the same sports team.

  • Following the example of the animal-focused EAs: Work towards achieving critical mass of underrepresented groups. Especially if you can saturate particular venues (e.g. a specific EA meetup group). I know that as a white male, I sometimes get uncomfortable in situations where I am the only white person or the only man in a group, even though I know perfectly well that no one is discriminating against me. I think it's a natural response to have when you're in the minority, so in a certain sense there's just a chicken-and-egg problem. Furthermore, injecting high-caliber underrepresented people into EA will help dismantle stereotypes and increase the number of one-on-one conversations people have, which I think are critical for change.

  • Take a proactive, rather than reactive, approach to helping EA men in their interactions with women. Again, I think having more women is playing a big role for animal-focused EAs. More women means the average man has more female friends, better understands how women think, and empathizes with the situations women encounter more readily. In this podcast, Christine Peterson discusses the value of finding a life partner for productivity and mental health. In the same way that CFAR makes EAs more productive through lifehacking, I could imagine someone working covertly to make EAs more productive through solving their dating problems.

  • Invite the best thinkers who have heterodox views on diversity to attend "diversity in EA" events, in order to get a diverse perspective on diversity and stay aware of tradeoffs. Understand their views in enough depth to market diversity initiatives to the movement at large without getting written off.

  • When hiring a Diversity & Inclusion Officer, find someone who's good at managing tradeoffs rather than the person who's most passionate about the role.

Again, I appreciate the effort you put into this post, and I support you working towards these goals in a thoughtful way. Also, I welcome PMs from you or anyone else reading this comment--I spent several hours on it, but I'm sure there is stuff I could have put better and I'd love to get feedback.

Comment author: kbog  (EA Profile) 27 October 2017 10:20:08AM 13 points [-]

All else equal, increased diversity sounds great, but my issue is I see a pattern of other pro-diversity movements sacrificing all other values in the name of trying to increase diversity.

It's not unheard of, but it seems more common than it actually is, because only the movements and initiatives which go too far merit headlines and attention. The average government agency, F500 company, or similar organization piles on all kinds of diversity policies without turning into the Nightmare on Social Justice Street.
