Comment author: Peter_Hurford  (EA Profile) 16 January 2017 06:40:32PM 0 points [-]

Thanks Ben, I revised my estimate in light of your comment! Hopefully I also phrased 80K's conclusion more correctly.

Comment author: Elizabeth 16 January 2017 05:53:25PM 2 points [-]

A list of ethical and practical concerns the EA movement has with Intentional Insights: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/.

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

Comment author: Peter_Hurford  (EA Profile) 16 January 2017 04:19:14PM 2 points [-]

Cool, I always love work surfacing an otherwise unknown donation opportunity! I also find your initial framework compelling and think it motivates some of my donations, for example with SHIC.

Under "Reservations about the donation", I think it's worth mentioning the possibility that the threat is misperceived and the Trump administration turns out to not pose any significant risk to the integrity or existence of those datasets.

Comment author: Brian_Tomasik 16 January 2017 09:58:15AM 1 point [-]

Thanks for the summary! Lots of useful info here.

for every functional story about the role of valence, there exist counter-examples.

As a functionalist, I'm not at all troubled by these counter-examples. They merely show that the brain is very complicated, and they reinforce my view that crisp definitions of valence don't work. ;)

As an analogy, suppose you were trying to find the location of "pesticide regulation" in the United States. You might start with the EPA: "Pesticide regulation in the United States is primarily a responsibility of the Environmental Protection Agency." But you might notice that other federal agencies do work related to pesticides (e.g., the USDA). Moreover, some individual states have their own pesticide regulations. Plus, individual schools, golf courses, and homes decide if and how to apply pesticides; in this sense, they also "regulate" pesticide use. We might try to distinguish "legal regulation" from "individual choices" and note that the two can operate differently. We might question what counts as a pesticide. And so on. All this shows is that there's a lot of stuff going on that doesn't cleanly map onto simple constructs.

Actually, your later Barrett (2006) quote says the same thing: “the natural-kind view of emotion may be the result of an error of arbitrary aggregation. That is, our perceptual processes lead us to aggregate emotional processing into categories that do not necessarily reveal the causal structure of the emotional processing.” And you seemed to agree in your conclusion: "valence in the human brain is a complex phenomenon which defies simple description." I'm puzzled how this squares with your attempt to find a crisp definition for valence.

we don’t have a clue as to what properties are necessary or sufficient to make a given brain region a so-called “pleasure center” or “pain center”

Likewise, we can debate the necessary and sufficient properties that make something a "pesticide-regulation center".

by taking a microprocessor [...] and attempting to reverse-engineer it

Interesting. :) This is part of why I don't expect whole-brain emulation to come before de-novo AGI. Reverse-engineering of complex systems is often very difficult.

Comment author: RomeoStevens 16 January 2017 09:09:59AM 1 point [-]

The frustrating inverse point makes me think this is a reflection of the asymmetric payoff structure in the AE.

Comment author: Linch 16 January 2017 06:22:15AM 0 points [-]

UPDATE: I now have my needed number of volunteers, and intend to launch the experiment tomorrow evening. Please email, PM, or otherwise contact me in the next 12 hours if you're interested in participating.

Comment author: Carl_Shulman 15 January 2017 08:12:21PM 4 points [-]

Looks like Tim Telleen-Lawton won, as the first ten digits of the beacon at noon PST were 0CF7565C0F=55689239567. Congratulations to Tim, and to all of the early adopters.
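[Editor's note: the draw above interprets the first ten hex digits of the randomness beacon output as a base-16 integer. A minimal sketch of that conversion, using the digits quoted in the comment:]

```python
def beacon_to_decimal(hex_digits: str) -> int:
    """Interpret a string of hex digits (e.g. from a randomness
    beacon output) as a base-16 integer."""
    return int(hex_digits, 16)

# First ten hex digits of the beacon value quoted above.
first_ten = "0CF7565C0F"
print(beacon_to_decimal(first_ten))  # 55689239567
```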

Comment author: Gina_Stuessy  (EA Profile) 15 January 2017 05:17:49PM 0 points [-]

Is the Boston one different from this? http://www.eagxboston.com/

Comment author: Linch 15 January 2017 08:37:35AM 2 points [-]

I often see spambots in the comments.

Comment author: Peter_Hurford  (EA Profile) 15 January 2017 06:01:21AM 1 point [-]

Comment author: John_Maxwell_IV 15 January 2017 01:29:06AM *  0 points [-]

Less Wrong has a "subscribe" feature that might be importable.

Comment author: John_Maxwell_IV 15 January 2017 01:25:21AM 0 points [-]

Comment author: Peter_Hurford  (EA Profile) 14 January 2017 09:25:30PM 4 points [-]

I think it really depends on who you criticize. I perceive criticizing particular people or organizations as having significant social costs (though I'm not saying whether those costs are merited or not).

Comment author: Daniel_Dewey 14 January 2017 07:26:19PM *  0 points [-]

I agree that if engagement with the critique doesn't follow those words, they're not helpful :) Editing my post to clarify that.

Comment author: jsteinhardt 14 January 2017 07:24:21PM 4 points [-]

In my post, I said

anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

I would expect that conditioned on spending a large amount of time to write the criticism carefully, it would be met with significant praise. (This is backed up at least in upvotes by past examples of my own writing, e.g. Another Critique of Effective Altruism, The Power of Noise, and A Fervent Defense of Frequentist Statistics.)

Comment author: Gleb_T  (EA Profile) 14 January 2017 07:17:22PM -2 points [-]

Let me first clarify that I see the goal of doing the most good as my end goal, and YMMV - no judgment on anyone who cares more about truth than doing good. This is just my value set.

Within that value set, using "insufficient" means to get to EA ends is just as bad as using "excessive" means. In this case, being "too honest" is just as bad as "not being honest enough." The correct course of action is to correctly calibrate one's level of honesty to maximize for positive long-term impact for doing the most good.

Now, the above refers to the ideal-type scenario. IRL, different people are differently calibrated. Some tend to be too oriented toward exaggerating, some too oriented to being humble and understating the case, and in either case, it's a mistake. So one should learn where one's bias is, and push against that bias.

Comment author: Brian_Tomasik 14 January 2017 05:45:08PM 2 points [-]

I'm surprised to hear that people see criticizing EA as incurring social costs. My impression was that many past criticisms of EA have been met with significant praise (e.g., Ben Kuhn's). One approach for dealing with this could be to provide a forum for anonymous posts + comments.

Comment author: Jeff_Kaufman 14 January 2017 04:30:27PM 1 point [-]

We have recently implemented a formal social media policy which encourages ACE staff to respond to comments about our work with great consideration, and in a way that accurately reflects our views (as opposed to those of one staff member).

Is this policy available anywhere? Looking on your site I'm finding only a different Social Media Policy that looks like maybe it's intended for people outside ACE considering posting on ACE's fb wall?
