Comment author: Ben_West  (EA Profile) 29 October 2017 08:36:11PM 5 points

Thanks for sharing this! You've given me some ideas for the Madison group, and I look forward to hearing about your progress.

Comment author: Ben_West  (EA Profile) 26 October 2017 03:49:25PM 26 points

I prefer to play the long game with my own investments in community building, and would rather, for instance, invest in someone reasonably sharp who has a track record of altruism and expresses interest in helping others most effectively than in someone even sharper who reasoned their way into EA and consumed all the jargon but has never really given anything up for other people.

I believe that Toby Ord has talked about how, in the early days of EA, he had thought that it would be really easy to take people who are already altruistic and encourage them to be more concerned about effectiveness, but hard to take effectiveness-minded people and convince them to do significant altruistic things. However, once he actually started talking to people, he found the opposite to be the case.

You mention "playing the long game" – are you suggesting that the "E first, A second" people are easier to get on board in the short run, but less dedicated, and that therefore in the long run "A first, E second" folks are more valuable? Or are you saying that my (possibly misremembered) quote from Toby is entirely wrong?

Comment author: Ben_West  (EA Profile) 26 October 2017 03:39:09PM *  7 points

Thank you for the interesting post, Kelly. I was interested in your comment:

people tend to think that women are more intuitively-driven and less analytical than men, which does not seem to be borne out and in fact the opposite may be more likely

And followed the link through to Forbes. I think the part you are citing is this:

But research shows that women are just as data-driven and analytical as men, if not more so. In a sample of 32 studies that looked at how men and women thought about a problem or made a decision, 12 of the studies found that women adopted an analytical approach more often than men, meaning that women systematically turned to the data, while men were more inclined to go with their gut, hunches, or intuitive reactions. The other 20 studies? They found no difference between men and women’s thinking styles.

Unfortunately, the link there is broken. Do you know what the original source is?

Comment author: Milan_Griffes 18 October 2017 10:18:54PM 1 point

Update: I checked with the study author and he confirmed that "relationships" on p. 5 is the same as "social effects" in Table 5.

Comment author: Ben_West  (EA Profile) 19 October 2017 10:39:02PM 3 points

Thanks Milan! Do you know more about how they defined "relationships" ("altruism")? Given that they think "relationships" and "altruism" are synonymous, it seems possible that the definition they use may not correspond to what people on this forum would call "altruism".

Comment author: Ben_West  (EA Profile) 18 October 2017 01:59:12PM 1 point

Do you know how they measured altruism? It seems like maybe they are using "altruism" as a synonym for the "relationships" questionnaire?

Comment author: Brian_Tomasik 23 July 2017 12:26:54AM *  8 points

Thanks for the post! If lazy solutions reduce suffering by reducing consciousness, they also reduce happiness. So, for example, a future civilization optimizing for very alien values relative to what humans care about might not have much suffering or happiness (if you don't think consciousness is useful for many things; I think it is), and the net balance of welfare would be unclear (even relative to a typical classical-utilitarian evaluation of net welfare).

Personally I find it very likely that the long-run future of Earth-originating intelligence will optimize for values relatively alien to human values. This has been the historical trend whenever one dominant life form replaces another. (Human values are relatively alien to those of our fish ancestors, for example.) The main way out of this conclusion is if humans' abilities for self-understanding and cooperation make our own future evolution an exception to the general trend.

Comment author: Ben_West  (EA Profile) 20 August 2017 05:25:42PM 0 points

Thanks Brian!

I think you are describing two scenarios:

  1. Post-humans will become something completely alien to us (e.g. mindless outsourcers). In this case, arguments that these post-humans will not have negative states equally imply that these post-humans won't have positive states. Therefore, we might expect some (perhaps very strong) regression towards neutral moral value.
  2. Post-humans will have some sort of values which are influenced by current humans’ values. In this case, it seems like these post-humans will have good lives (at least as measured by our current values).

This still seems to me to be asymmetric – as long as you have some positive probability on scenario (2), isn't the expected value greater than zero?

Comment author: Peter_Hurford  (EA Profile) 20 July 2017 05:51:25PM 14 points

One concern might be not malevolence, but misguided benevolence. For just one example, spreading wild animals to other planets could potentially involve at least some otherwise avoidable suffering (within at least some of the species), but might be done anyway out of misguided versions of "conservationist" or "nature-favoring" views.

Comment author: Ben_West  (EA Profile) 16 August 2017 09:27:41PM 0 points

I'm curious if you think that the "reflective equilibrium" position of the average person is net negative?

E.g. many people who would describe themselves as "conservationists" probably also think that suffering is bad. If they moved into reflective equilibrium, would they give up the conservation or the anti-suffering principles (where these conflict)?

In response to comment by Ben_West  (EA Profile) on EAGx Relaunch
Comment author: Roxanne_Heston  (EA Profile) 02 August 2017 12:13:42AM 1 point

In brief, large speaker events and workshops, depending on the needs of a local group. Perhaps self-evidently, large speaker events are best for nascent chapters trying to attract interest; workshops, for augmenting the engagement and/or skill of existing members. There's some information about this in the Organizer FAQ, as well as prompts about this in the EAGx organizer application and on the "Get Involved" tab of effectivealtruism.org.

In response to comment by Roxanne_Heston  (EA Profile) on EAGx Relaunch
Comment author: Ben_West  (EA Profile) 13 August 2017 05:55:46PM 1 point
Comment author: Peter_Hurford  (EA Profile) 20 July 2017 05:51:25PM 14 points

One concern might be not malevolence, but misguided benevolence. For just one example, spreading wild animals to other planets could potentially involve at least some otherwise avoidable suffering (within at least some of the species), but might be done anyway out of misguided versions of "conservationist" or "nature-favoring" views.

Comment author: Ben_West  (EA Profile) 30 July 2017 07:38:53PM 1 point

Yeah, I think the point I'm trying to make is that it would require effort for things to go badly. This is, of course, importantly different from saying that things can't go badly.

In response to comment by LawrenceC on EAGx Relaunch
Comment author: Roxanne_Heston  (EA Profile) 24 July 2017 07:27:35PM *  3 points

I'm curious what prompted this change - did organizers encounter a lot of difficulty converting new conference attendees to more engaged EAs?

They were often stretched so thin from making the main event happen that they didn't have the capacity to ensure that their follow-up events were solid. We think part of the problem will be mitigated if the events themselves are smaller and more targeted towards groups with a specific level of EA understanding.

I'm also curious about what sort of support CEA will be providing to smaller, less-established local groups, given that fewer groups will receive support for EAGx.

Local groups can apply for funding through the EAGx funding application, as well as use the event-organizing resources we generated for EAGx. Depending on the size and nature of the event, they can receive individualized support from different CEA staff working on community development, such as Harri, Amy, Julia, and/or Larissa. If they're running a career or rationality workshop, they may be able to get 80,000 Hours' or CFAR's advice or direct support.

Here are the event-organizing resources, if you'd like to check them out: https://goo.gl/zw8AjW

In response to comment by Roxanne_Heston  (EA Profile) on EAGx Relaunch
Comment author: Ben_West  (EA Profile) 26 July 2017 09:38:20PM 0 points

Depending on the size and nature of the event, they can receive individualized support from different CEA staff working on community development, such as Harri, Amy, Julia, and/or Larissa.

Could you say more about what kind of (smaller, local, non-EAGx) events CEA would like to see/would be interested in providing support for?
