In response to comment by LawrenceC on EAGx Relaunch
Comment author: Roxanne_Heston (EA Profile) 24 July 2017 07:27:35PM * 3 points

I'm curious what prompted this change - did organizers encounter a lot of difficulty converting new conference attendees to more engaged EAs?

They were often stretched so thin from making the main event happen that they didn't have the capacity to ensure that their follow-up events were solid. We think part of the problem will be mitigated if the events themselves are smaller and more targeted towards groups with a specific level of EA understanding.

I'm also curious about what sort of support CEA will be providing to smaller, less-established local groups, given that fewer groups will receive support for EAGx.

Local groups can apply for funding through the EAGx funding application, as well as use the event-organizing resources we generated for EAGx. Depending on the size and nature of the event, they can receive individualized support from different CEA staff working on community development, such as Harri, Amy, Julia, and/or Larissa. If they're running a career or rationality workshop they may be able to get 80,000 Hours' or CFAR's advice or direct support.

Here are the event-organizing resources, if you'd like to check them out:

In response to comment by Roxanne_Heston  (EA Profile) on EAGx Relaunch
Comment author: Ben_West (EA Profile) 26 July 2017 09:38:20PM 0 points

Depending on the size and nature of the event, they can receive individualized support from different CEA staff working on community development, such as Harri, Amy, Julia, and/or Larissa.

Could you say more about what kind of (smaller, local, non-EAGx) events CEA would like to see/would be interested in providing support for?

Comment author: Lila 22 July 2017 09:32:48AM 5 points

Humans are generally not evil, just lazy

Human history has many examples of systematic unnecessary sadism, such as torture for religious reasons. Modern Western moral values are an anomaly.

Comment author: Ben_West (EA Profile) 23 July 2017 01:54:18PM * 3 points

Thanks for the response! But is that true? The examples I can think of seem better explained by a desire for power etc. than by suffering as an end goal in itself.

Comment author: WilliamKiely 21 July 2017 12:58:27AM * 2 points

7 - Therefore, the future will contain less net suffering

8 - Therefore, the future will be good

Could this be rewritten as "8. Therefore, the future will be better than the present" or would that change its meaning?

If it would change the meaning, then what do you mean by "good"? (If you're confused about why I'm confused: it seems to me that 8 does not follow from 7 under the meaning of "good" I usually hear from EAs, something like "net positive utility".)

Comment author: Ben_West (EA Profile) 21 July 2017 11:08:32PM 3 points

Yeah, it would change the meaning.

My assumption was that, if things monotonically improve, then in the long run (perhaps the very, very long run) we will get to net positive. You are proposing that we might instead asymptote at some negative value, even though we are still always improving?
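The possibility raised here has a simple mathematical illustration: a quantity can improve monotonically forever while staying bounded below zero. A toy sketch (hypothetical utility function, not anything from the thread):

```python
def net_utility(t):
    """Toy 'net utility of the world' at period t (t >= 1): strictly
    increasing in t, yet bounded above by -1, so it never reaches zero."""
    return -1 - 1 / t

values = [net_utility(t) for t in range(1, 6)]
# Every period is better than the last...
assert all(a < b for a, b in zip(values, values[1:]))
# ...but the sequence asymptotes at -1 and is never net positive.
assert all(v < -1 for v in values)
```

So "things monotonically improve" only implies "eventually net positive" with an extra assumption, e.g. that the improvements don't shrink too quickly.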

Comment author: Wei_Dai 20 July 2017 07:50:49PM 15 points

What lazy solutions will look like seems unpredictable to me. Suppose someone in the future wants to realistically roleplay a historical or fantasy character. The lazy solution might be to simulate a game world with conscious NPCs. The universe contains so much potential for computing power (which presumably can be turned into conscious experiences), that even if a very small fraction of people do this (or other things whose lazy solutions happen to involve suffering), that could create an astronomical amount of suffering.

Comment author: Ben_West (EA Profile) 20 July 2017 11:06:43PM 5 points

Yes, I agree. More generally: the more things consciousness (and particularly suffering) is useful for, the less reasonable point (3) above becomes.

Comment author: Tobias_Baumann 20 July 2017 08:40:43AM * 11 points

Thanks for writing this up! I agree that this is a relevant argument, even though many steps of the argument are (as you say yourself) not airtight. For example, consciousness or suffering may be related to learning, in which case point 3) is much less clear.

Also, the future may contain vastly larger populations (e.g. because of space colonization), which, all else being equal, may imply (vastly) more suffering. Even if your argument is valid and the fraction of suffering decreases, it's not clear whether the absolute amount will be higher or lower (as you claim in 7.).

Finally, I would argue we should focus on the bad scenarios anyway – given sufficient uncertainty – because there's not much to do if the future will "automatically" be good. If s-risks are likely, my actions matter much more.

(This is from a suffering-focused perspective. Other value systems may arrive at different conclusions.)

Comment author: Ben_West (EA Profile) 20 July 2017 02:49:49PM * 5 points

Thanks for the response!

  1. It would be surprising to me if learning required suffering, but I agree that if it does then point (3) is less clear.
  2. Good point! I rewrote it to clarify that there is less net suffering.
  3. Where I disagree with you the most is your statement "there's not much to do if the future will 'automatically' be good." Most obviously, we have the difficult (and perhaps impossible) task of ensuring the future exists at all (maxipok).

An Argument for Why the Future May Be Good

In late 2014, I ate lunch with an EA who prefers to remain anonymous. I had originally been of the opinion that, should humans survive, the future was likely to be bad. He convinced me to change my mind about this. I haven’t seen this argument written up anywhere and...
Comment author: Ben_West (EA Profile) 24 April 2017 09:59:15PM 0 points
  1. Are there blocks of rooms reserved at some hotel?
  2. Are there "informal" events planned for around the official event? (I.e. should everyone plan to land Thursday night and leave Sunday night or would it make sense to leave earlier/stay later?)


In response to EA Funds Beta Launch
Comment author: Ben_West (EA Profile) 13 March 2017 03:48:09PM 4 points

Is it possible to donate through transferring money already donated to a different donor advised fund?

I generally put money into my own DAF at a time which is convenient for tax purposes, and only consider grants later. Mine is through Fidelity, if that's relevant.

Comment author: Peter_Hurford (EA Profile) 26 February 2017 09:35:43PM 1 point

How effective do you think investment at the margin would have been at the end of October 2016? I'm surprised to see only ~$1K of advertising put into it, for example, but maybe there were steeply diminishing returns by that point?

Comment author: Ben_West (EA Profile) 06 March 2017 11:15:11PM 1 point

Yeah, good question. AdWords for terms like "vote swap" had a CPA of 2 to 3 dollars, but generic things like "stop Donald Trump" were ineffective. I don't believe we maxed out spend on the former category, and I think that the latter category probably would've been more effective if we had focused on conversion better. In summary: we probably should have put more advertising dollars and effort into the project.

Comment author: the_jaded_one 02 March 2017 09:43:57PM * 2 points

In 2016, only one candidate had any sort of policy at all about farmed animals, so it didn't require a very extensive policy analysis to figure out who was preferable.

Beware of unintended consequences, though. The path from "Nice things are written about X on a candidate's promotional materials" to "Overall, X improved" is a very circuitous one in human politics.

The same is true for other EA focus areas.

A lot of people in EA seem to assume, without a thorough argument, that direct support for certain political tribes is good for all EA causes. I would like to see some effort put into something like a quasi-realistic simulation of human political processes to back up claims like this. (Not that I am demanding specific evidence before I will believe these claims - just that it would be a good idea.) Real-world human politicking seems to be full of crucial considerations.

I also feel like when we talk about human political issues, we lack an understanding of, or don't bother to think about, the causal dynamics behind how politics works in humans. I am specifically talking about things like signalling.

Comment author: Ben_West (EA Profile) 02 March 2017 11:55:38PM 2 points

In order to think vote trading is a good idea, you have to think that, with some reasonable amount of work, you can predict the better candidate at a rate which outperforms chance.

Humility is important, but there's a difference between "politics is hard to predict perfectly" and "politics is impossible to predict at all".
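The point can be sketched with a toy expected-value calculation (the symmetric stakes are an assumption for illustration, not something claimed in the thread):

```python
def swap_expected_value(p_correct, benefit=1.0, harm=1.0):
    """Expected value of acting on a prediction that is right with
    probability p_correct, under assumed symmetric stakes."""
    return p_correct * benefit - (1 - p_correct) * harm

# At pure chance (p = 0.5) the swap is worthless in expectation;
# any predictive edge above chance makes it positive.
assert swap_expected_value(0.5) == 0.0
assert swap_expected_value(0.6) > 0
```

Under these assumptions, the swapper doesn't need anything close to perfect prediction, just better than a coin flip.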
