Comment author: Milan_Griffes 29 November 2017 04:33:30AM 3 points [-]

The study of North Korea may produce insight into how dystopian societal attractor points can be averted or what preventive measures (beyond what is present in today’s North Korea) might help people on the inside destabilize them.

This is a great point.

In response to What consequences?
Comment author: kbog  (EA Profile) 28 November 2017 06:10:14AM 0 points [-]

It's worth noting that evaluating long-run consequences doesn't necessarily mean just looking at x-risks. A fully fleshed-out long-run evaluation looks at many factors of civilization quality and safety, and I think it is good enough to dominate other considerations. It's certainly better than allowing x-risk concerns alone to dominate.

But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best.

I don't think this is true. Killing a random baby on the off chance that it might become a dictator is a bad idea. You can do the math on that if you want, or just trust me that the expected consequences of it are hurtful to society.

In response to comment by kbog  (EA Profile) on What consequences?
Comment author: Milan_Griffes 29 November 2017 04:24:05AM 0 points [-]

Intuitively, I completely agree that killing a random baby is socially harmful.

The example is interesting because it's tricky to "do the math" on. (Hard to arrive at a believable long-run cost of a totalitarian dictatorship; hard to arrive at a believable long-run cost of instituting a social norm of infanticide.)

In response to What consequences?
Comment author: JesseClifton 24 November 2017 09:28:01PM 2 points [-]

Thanks for writing this. I think the problem of cluelessness has not received as much attention as it should.

I’d add that, in addition to the brute good and x-risks approaches, there are approaches which attempt to reduce the likelihood of dystopian long-run scenarios. These include suffering-focused AI safety and values-spreading. Cluelessness may still plague these approaches, but one might argue that they are more robust to both empirical and moral uncertainty.

Comment author: Milan_Griffes 29 November 2017 04:19:43AM 0 points [-]

Good point, I was implicitly considering s-risks as a subset of x-risks.

In response to What consequences?
Comment author: MichaelPlant 28 November 2017 09:12:53PM 2 points [-]

A potential objection here is that the Austrian physician could in no way have foreseen that the infant they were called to tend to would later become a terrible dictator, so the physician should have done what seemed best given the information they could uncover. But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best. Assessing only proximate consequences would provide some guidance about what action to take, but this guidance would not necessarily point to the action with the best consequences in the long run.

I think this example undermines, rather than supports, your point. Of course it's possible the baby would have grown up to be Hitler. It's also possible the baby would have grown up to be a great scientist. Hence, from the perspective of the doctor, who is presumably working on expected value and has no reason to think one extreme outcome is more likely than the other, these presumably just cancel out. Hence the doctor looks at the obvious, proximate consequences. This seems like a case of what Greaves calls simple cluelessness.

A couple of general comments. There is already an academic literature on cluelessness, and it's known to some EAs. It would therefore be helpful if you made it clear what you're doing that's novel. I don't mean this in a disparaging way. I simply can't tell whether you're disagreeing with Greaves et al. or not. If you are, that's potentially very interesting, and I want to know exactly what the disagreement is so I can assess it and see if I want to take your side. If you're not presenting a new line of thought, but just summarising or restating what others have said (perhaps in an effort to bring this information to new audiences, or just for your own benefit), you should say that instead so that people can better decide how closely to read it.

Additionally, I think it's unhelpful to (re)invent terminology without a good reason. I can't see a clear difference between proximate, indirect and long-run consequences. I would much have preferred it if you'd explained cluelessness using Greaves' set-up and then progressed from there as appropriate.

Comment author: Milan_Griffes 29 November 2017 04:16:26AM *  1 point [-]

There is already an academic literature of cluelessness and it's known to some EAs. It would be helpful therefore if you make it clear what you're doing that's novel ...

Do you know of worthwhile work on this beyond Greaves 2016? (Please point me to it, if you do!)

Greaves 2016 is the most useful academic work I've come across on this question; I was convinced by their arguments against Lenman 2000.

I stated my goal at the top of the piece.

I would much have preferred it if you'd explained cluelessness using Greaves' set-up and then progressed from there as appropriate.

I don't think Greaves presented an analogous terminology?

"Flow-through effects" & "knock-on effects" have been used previously, but they don't distinguish between temporally near & temporally distant effects. That distinction seems interesting, so I decided to not those terms.

In response to comment by MichaelPlant on What consequences?

Comment author: Milan_Griffes 29 November 2017 04:08:29AM *  0 points [-]

Thanks for the thoughtful comment :-)

This seems like a case of what Greaves calls simple cluelessness.

I'm fuzzy on Greaves' distinction between simple & complex cluelessness. Greaves uses the notion of "systematic tendency" to draw out complex cluelessness from simple, but "[t]his talk of ‘having some reasons’ and ‘systematic tendencies’ is not as precise as one would like" (p. 9 of Greaves 2016).

Perhaps it comes down to symmetry. When we notice that for every imagined consequence, there is an equal & opposite consequence that feels about as likely, we can consider our cluelessness "simple." But when we can't do this, our cluelessness is complex.

This criterion is unsatisfyingly subjective though, because it relies on our assessing the equal-opposite consequence as "about as likely," plus relying on whether we are able to imagine an equal-opposite consequence or not.
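The symmetry criterion above can be made concrete with a toy calculation. In this sketch, all numbers are hypothetical: a tiny probability of an extreme long-run harm (the infant becomes a dictator) is paired with an equally likely, equal-and-opposite benefit (the infant becomes a great benefactor), so the extremes cancel and only the proximate effect moves the expectation. Exact rational arithmetic is used so the cancellation is exact rather than approximate.

```python
from fractions import Fraction

# Toy illustration of simple cluelessness: symmetric, equally likely
# extreme outcomes cancel in expectation. All numbers are hypothetical.
p_extreme = Fraction(1, 1_000_000)  # chance the infant becomes a dictator
harm = Fraction(-1_000_000_000)     # enormous long-run harm in that case
benefit = -harm                     # equal & opposite: equally likely benefactor

proximate_good = Fraction(1)        # the obvious near-term good of treating the infant

expected_value = proximate_good + p_extreme * harm + p_extreme * benefit
print(expected_value)  # → 1: the symmetric extremes cancel, leaving the proximate good
```

Complex cluelessness, on this picture, is exactly the situation where no such equal-and-opposite counterpart can be identified, so the extreme terms don't cancel.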

Comment author: Elizabeth 21 November 2017 02:51:00AM 1 point [-]

Oops, thanks for the correction. Do you have those broken out separately?

Comment author: Milan_Griffes 21 November 2017 05:37:55PM *  1 point [-]

Yes, see rows 4-17 in our model:

https://docs.google.com/spreadsheets/d/1i6aRlYiITg_birU6rW7FuN9O18X6zLucbCsdPRAkR9E/edit?usp=sharing

Best guess is that the ballot initiative costs ~$16 million all-in, whereas the yearly cost of treatment is ~$2.6 billion.

We haven't yet figured out a believable way to separate out the portion of benefit attributable to the ballot initiative compared to the portion of benefit attributable to the treatment itself.
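As a rough sketch of why treatment costs swamp the initiative cost, the two best-guess figures from the comment above can be combined directly (the split itself is just arithmetic; the figures are estimates, not measured values):

```python
# Back-of-the-envelope check of the cost split described above.
# The two cost figures are the best-guess estimates from the comment;
# the calculation itself is purely illustrative.
ballot_initiative_cost = 16e6   # ~$16 million, all-in
yearly_treatment_cost = 2.6e9   # ~$2.6 billion per year

total_first_year = ballot_initiative_cost + yearly_treatment_cost
initiative_share = ballot_initiative_cost / total_first_year

print(f"{initiative_share:.2%}")  # → 0.61%: treatment dominates the all-in cost
```

This is why attributing benefit between the initiative and the treatment matters so much: the initiative is well under 1% of the all-in cost, so the $/DALY figure is extremely sensitive to how the benefit is split.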

Comment author: Milan_Griffes 21 November 2017 01:56:29AM 1 point [-]

Great post; very excited to see more good work in this direction.

Milan Griffes's post on psychedelic legalization on the same forum, in which he estimates the return to lobbying to legalize psychedelics at $52,000-$442,000/DALY.

Probably worth noting that $52,000-$442,000/DALY is the all-in cost which includes costs of treatment, in addition to the cost of a ballot initiative. Treatment costs make up almost the entirety of the total cost, and it's unclear who would bear these costs (funders of the ballot initiative almost certainly wouldn't).

Comment author: xccf 27 October 2017 03:04:41AM *  32 points [-]

Thanks for this post. There's a lot I agree with here. I'm in especially vigorous agreement with your points regarding hero worship and seeing newcomers as a source of fresh ideas/arguments instead of condescending to them.

There are also some points I disagree with. And in the spirit of not considering any arguments above criticism, and disagreement being critical for finding the best answers, I hope you won't mind if I lay my disagreements out. To save time, I'll focus on the differences between your view and mine. So if I don't mention a point you made, you can default to assuming I agree with it.

First, I'm broadly skeptical of the social psychology research you cite. Whenever I read about a study that claims women are more analytical than men, or that women are better leaders than men, I ask myself whether I would be hearing about it if the experiment had found the opposite result.

I recommend this blog post on the lack of ideological diversity in social psychology. Social psychologists are overwhelmingly liberal, and many openly admit to discriminating against conservatives in hiring. Here is a good post by a Mexican social psychologist that discusses how this plays out. There's also the issue of publication bias at the journal level. I know someone who served on the selection committee of a (minor & unimportant, so perhaps not representative) psychology journal. The committee had an explicit philosophy of only publishing papers they liked, and espousing "problematic" views was a strike against a paper. Anyway, I think to some degree the field functions as a liberal echo chamber on controversial issues.

There's really an entire can of worms here--social psychology is currently experiencing a major reproducibility crisis--but I don't want to get too deep into it, because to defend my position fully, I'd want to share evidence for positions that make people uncomfortable. Suffice it to say that there's a third layer of publication bias at the level of your Facebook feed, and I could show you a different set of research-backed thinkpieces that point to different conclusions. (Suggestion: if you wouldn't want someone on the EA Forum to make arguments for the position not-X, maybe avoid making arguments for the position X. Otherwise you put commenters in an impossible bind.)

But for me this point is really the elephant in the room:

some people in broader society now respond to correctable offenses with a mob mentality and too much readiness for ostracization, but just because some people have swung too far past the mark doesn’t mean we should default to a status quo that falls so short of it.

I would like to see a much deeper examination here. Insofar as I feel resistant to diversity efforts, this feels like most of what I'm trying to resist. If I was confident that pro-diversity people in EA won't spiral towards this, I'd be much more supportive. Relevant fable.

All else equal, increased diversity sounds great, but my issue is I see a pattern of other pro-diversity movements sacrificing all other values in the name of trying to increase diversity. Take a statement like this one:

Some of the most talented and resolute people in this community are here because they are deeply emotionally compelled to help others as much as possible, and we’re currently missing out on many such people by being so cold and calculating. There are ways to be warm and calculating! I can think of a few people in the community who manage this well.

Being warm and calculating sounds great, but what if there's actually a tradeoff here? Just taking myself as an example, I know that as I've become aware of how much suffering exists in the grand scheme of things, I've begun to worry less about random homeless people I see and stuff like that. Even if there's some hack I can use to empathize with homeless people while retaining a global perspective, that hack would require effort on my part--effort I could put towards goals that seem more important.

this particular individual — who is probably a troll in general — was banned from the groups where he repeatedly and unrelentingly said such things, though it’s concerning there was any question about whether this was acceptable behavior.

Again, I think there's a real tradeoff between "free speech" and sensitivity. I view the moderation of online communities as an unsolved problem. I think we benefit from navigating moderation tradeoffs thoughtfully rather than reactively.

Reminding people off the forum to upvote this post, in order to deal with possible hostility, is also a minor red flag from my perspective. This resembles something Gleb Tsipursky once did.

None of this seems very bad in the grand scheme of things, especially not compared to what I've seen from other champions of diversity--I just thought it'd be useful to give concrete examples.

Anyway, here are some ideas of mine, if anyone cares:

  • Phrase guidelines as neutrally as possible, e.g. "don't be a jerk" instead of "don't be a sexist". The nice thing about "don't be a jerk" is that it admits the possibility that someone could violate the guideline by e.g. loudly calling out a minor instance of sexism in a way that generates a lot of drama and does more harm than good. Rules should exist to serve everyone, and they should be made difficult to weaponize. If most agree your rules are legitimate, that also makes them easier to enforce.

  • Team-building activities, icebreakers, group singalongs, synchronous movement, sports/group exercise, and so on. The ideal activity is easy for anyone to do and creates a shared EA tribal identity just strong enough to supersede the race/gender/etc. identities we have by default. Kinda like how students at the same university will all cheer for the same sports team.

  • Following the example of the animal-focused EAs: Work towards achieving critical mass of underrepresented groups. Especially if you can saturate particular venues (e.g. a specific EA meetup group). I know that as a white male, I sometimes get uncomfortable in situations where I am the only white person or the only man in a group, even though I know perfectly well that no one is discriminating against me. I think it's a natural response to have when you're in the minority, so in a certain sense there's just a chicken-and-egg problem. Furthermore, injecting high-caliber underrepresented people into EA will help dismantle stereotypes and increase the number of one-on-one conversations people have, which I think are critical for change.

  • Take a proactive, rather than reactive, approach to helping EA men with women. Again, I think having more women is playing a big role for animal-focused EAs. More women means the average man has more female friends, better understands how women think, and empathizes with the situations women encounter more readily. In this podcast, Christine Peterson discusses the value of finding a life partner for productivity and mental health. In the same way that CFAR makes EAs more productive through lifehacking, I could imagine someone working covertly to make EAs more productive through solving their dating problems.

  • Invite the best thinkers who have heterodox views on diversity to attend "diversity in EA" events, in order to get a diverse perspective on diversity and stay aware of tradeoffs. Understand their views in enough depth to market diversity initiatives to the movement at large without getting written off.

  • When hiring a Diversity & Inclusion Officer, find someone who's good at managing tradeoffs rather than the person who's most passionate about the role.

Again, I appreciate the effort you put into this post, and I support you working towards these goals in a thoughtful way. Also, I welcome PMs from you or anyone else reading this comment--I spent several hours on it, but I'm sure there is stuff I could have put better, and I'd love to get feedback.

Comment author: Milan_Griffes 30 October 2017 11:39:28PM 1 point [-]

Christine Peterson's life partner discussion is around 1:17:20 at the above link^^

It's part of a broader discussion about supporting yourself while being altruistic over the long haul (starts around 1:15:00).

Comment author: DominikPeters 26 October 2017 09:54:55PM *  1 point [-]

I've made a feed with Wiblin's top 10 episodes for easy importing into podcast apps.

http://bit.ly/econtalk-wiblin

expands to https://dl.getdropbox.com/s/hjdlhtv6xtklhxv/econtalk-wiblin.xml

Comment author: Milan_Griffes 27 October 2017 01:17:56AM 0 points [-]

Looks cool, but I wasn't able to figure out how to use this after 5 minutes of trying. Can you offer a little guidance?

Comment author: Milan_Griffes 26 October 2017 09:32:58PM 1 point [-]

Great post.

Yes, some people broader society now respond to correctable offenses with a mob mentality and too much readiness for ostracization, but just because some people have swung too far past the mark doesn’t mean we should default to a status quo that falls so short of it.

I think there's an "in" missing between "people" and "broader"
