Comment author: Milan_Griffes 18 October 2017 10:18:54PM 1 point [-]

Update: I checked with the study author and he confirmed that "relationships" on p. 5 is the same as "social effects" in Table 5.

Comment author: Ben_West  (EA Profile) 19 October 2017 10:39:02PM 1 point [-]

Thanks Milan! Do you know more about how they defined "relationships" ("altruism")? Given that they treat "relationships" and "altruism" as synonymous, their definition may not correspond to what people on this forum would call "altruism".

Comment author: Ben_West  (EA Profile) 18 October 2017 01:59:12PM 1 point [-]

Do you know how they measured altruism? It seems like maybe they are using "altruism" as a label for the "relationships" questionnaire?

Comment author: Brian_Tomasik 23 July 2017 12:26:54AM *  8 points [-]

Thanks for the post! If lazy solutions reduce suffering by reducing consciousness, they also reduce happiness. So, for example, a future civilization optimizing for very alien values relative to what humans care about might not have much suffering or happiness (if you don't think consciousness is useful for many things; I think it is), and the net balance of welfare would be unclear (even relative to a typical classical-utilitarian evaluation of net welfare).

Personally I find it very likely that the long-run future of Earth-originating intelligence will optimize for values relatively alien to human values. This has been the historical trend whenever one dominant life form replaces another. (Human values are relatively alien to those of our fish ancestors, for example.) The main way out of this conclusion is if humans' abilities for self-understanding and cooperation make our own future evolution an exception to the general trend.

Comment author: Ben_West  (EA Profile) 20 August 2017 05:25:42PM 0 points [-]

Thanks Brian!

I think you are describing two scenarios:

  1. Post-humans will become something completely alien to us (e.g. mindless outsourcers). In this case, arguments that these post-humans will not have negative states equally imply that these post-humans won't have positive states. Therefore, we might expect some (perhaps very strong) regression towards neutral moral value.
  2. Post-humans will have values which are influenced by current humans’ values. In this case, it seems like these post-humans will have good lives (at least as measured by our current values).

This still seems to me to be asymmetric – as long as you have some positive probability on scenario (2), isn't the expected value greater than zero?

Comment author: Peter_Hurford  (EA Profile) 20 July 2017 05:51:25PM 14 points [-]

One concern might be not malevolence, but misguided benevolence. For just one example, spreading wild animals to other planets could potentially involve at least some otherwise avoidable suffering (within at least some of the species), but might be done anyway out of misguided versions of "conservationist" or "nature-favoring" views.

Comment author: Ben_West  (EA Profile) 16 August 2017 09:27:41PM 0 points [-]

I'm curious if you think that the "reflective equilibrium" position of the average person is net negative?

E.g. many people who would describe themselves as "conservationists" probably also think that suffering is bad. If they moved into reflective equilibrium, would they give up the conservation or the anti-suffering principles (where these conflict)?

In response to comment by Ben_West  (EA Profile) on EAGx Relaunch
Comment author: Roxanne_Heston  (EA Profile) 02 August 2017 12:13:42AM 1 point [-]

In brief, large speaker events and workshops, depending on the needs of a local group. Perhaps self-evidently, large speaker events are best for nascent chapters trying to attract interest; workshops for augmenting the engagement and/or skill of existing members. There's some information about this in the Organizer FAQ, as well as prompts in the EAGx organizer application and on the "Get Involved" tab of effectivealtruism.org.

In response to comment by Roxanne_Heston  (EA Profile) on EAGx Relaunch
Comment author: Ben_West  (EA Profile) 13 August 2017 05:55:46PM 1 point [-]
Comment author: Peter_Hurford  (EA Profile) 20 July 2017 05:51:25PM 14 points [-]

One concern might be not malevolence, but misguided benevolence. For just one example, spreading wild animals to other planets could potentially involve at least some otherwise avoidable suffering (within at least some of the species), but might be done anyway out of misguided versions of "conservationist" or "nature-favoring" views.

Comment author: Ben_West  (EA Profile) 30 July 2017 07:38:53PM 1 point [-]

Yeah, I think the point I'm trying to make is that it would require effort for things to go badly. This is, of course, importantly different from saying that things can't go badly.

In response to comment by LawrenceC on EAGx Relaunch
Comment author: Roxanne_Heston  (EA Profile) 24 July 2017 07:27:35PM *  3 points [-]

I'm curious what prompted this change - did organizers encounter a lot of difficulty converting new conference attendees to more engaged EAs?

They were often stretched so thin from making the main event happen that they didn't have the capacity to ensure that their follow-up events were solid. We think part of the problem will be mitigated if the events themselves are smaller and more targeted towards groups with a specific level of EA understanding.

I'm also curious about what sort of support CEA will be providing to smaller, less-established local groups, given that fewer groups will receive support for EAGx.

Local groups can apply for funding through the EAGx funding application, as well as use the event-organizing resources we generated for EAGx. Depending on the size and nature of the event, they can receive individualized support from different CEA staff working on community development, such as Harri, Amy, Julia, and/or Larissa. If they're running a career or rationality workshop they may be able to get 80,000 Hours' or CFAR's advice or direct support.

Here are the event-organizing resources, if you'd like to check them out: https://goo.gl/zw8AjW

In response to comment by Roxanne_Heston  (EA Profile) on EAGx Relaunch
Comment author: Ben_West  (EA Profile) 26 July 2017 09:38:20PM 0 points [-]

Depending on the size and nature of the event, they can receive individualized support from different CEA staff working on community development, such as Harri, Amy, Julia, and/or Larissa.

Could you say more about what kind of (smaller, local, non-EAGx) events CEA would like to see/would be interested in providing support for?

Comment author: Lila 22 July 2017 09:32:48AM 5 points [-]

Humans are generally not evil, just lazy

?

Human history has many examples of systematic unnecessary sadism, such as torture for religious reasons. Modern Western moral values are an anomaly.

Comment author: Ben_West  (EA Profile) 23 July 2017 01:54:18PM *  3 points [-]

Thanks for the response! But is that true? The examples I can think of seem better explained by a desire for power etc. than by suffering as an end goal in itself.

Comment author: WilliamKiely 21 July 2017 12:58:27AM *  2 points [-]

7 - Therefore, the future will contain less net suffering

8 - Therefore, the future will be good

Could this be rewritten as "8. Therefore, the future will be better than the present" or would that change its meaning?

If it would change the meaning, then what do you mean by "good"? (If you're wondering why I ask: it seems to me that 8 does not follow from 7 under the meaning of "good" I usually hear from EAs, something like "net positive utility".)

Comment author: Ben_West  (EA Profile) 21 July 2017 11:08:32PM 3 points [-]

Yeah, it would change the meaning.

My assumption was that, if things monotonically improve, then in the long run (perhaps the very, very long run) we will get to net positive. You are proposing that we might instead asymptote at some negative value, even though we are still always improving?

Comment author: Wei_Dai 20 July 2017 07:50:49PM 15 points [-]

What lazy solutions will look like seems unpredictable to me. Suppose someone in the future wants to realistically roleplay a historical or fantasy character. The lazy solution might be to simulate a game world with conscious NPCs. The universe contains so much potential for computing power (which presumably can be turned into conscious experiences), that even if a very small fraction of people do this (or other things whose lazy solutions happen to involve suffering), that could create an astronomical amount of suffering.

Comment author: Ben_West  (EA Profile) 20 July 2017 11:06:43PM 5 points [-]

Yes, I agree. More generally: the more things consciousness (and particularly suffering) is useful for, the less reasonable point (3) above is.
