Comment author: Brian_Tomasik 17 January 2017 10:03:56AM 0 points [-]

One possible explanation why we have nociceptors but not direct pleasure-ceptors is that there's no stimulus that's always fitness-enhancing (or is there?), while flames, skin wounds, etc. are always bad. Sugar receptors usually convey pleasure, but not if you're full, nauseous, etc.

Also, we can't have simple pleasure-ceptors for beautiful images or music because those stimuli require complex processing by visual or auditory cortices; there's no "pleasant music molecule" that can stimulate a pleasure-ceptor neuron the way there are pleasant-tasting gustatory molecules.

Comment author: Brian_Tomasik 17 January 2017 09:46:19AM *  0 points [-]

Anything piped through the complexity of the brain will look complex, regardless of how simple or complex it starts out as.

Fair enough. :) By analogy, even if pesticide regulation looks complex, the molecular structure of a single insecticide molecule is more crisp.

there doesn't seem to be an equivalent document describing what suffering research is if we assume that consciousness should be thought of more as a linguistic confusion than a 'real' thing

Various of my essays mention examples of my intuitions on the topic, and this piece discusses one framework for thinking about the matter. But I envision this project as more like interpreting the themes and imagery of Shakespeare than like a comprehensive scientific program. It's subjective, personal, and dependent on one's emotional whims. Of course, one can choose to make it more formalized if one prefers, like formalized preference utilitarianism does.

Comment author: capybaralet 17 January 2017 05:38:13AM -1 points [-]

I was overall a bit negative on Sarah's post, because it demanded a bit too much attention (e.g., the title) and seemed somewhat polemical. It was definitely interesting, and I learned some things.

I find the most evocative bit to be the idea that EA treats outsiders as "marks".
This strikes me as somewhat true, and sadly short-sighted WRT movement building. I do believe in the ideas of EA, and I think they are compelling enough that they can become mainstream.

Overall, though, I think it's just plain wrong to argue for an unexamined idea of honesty as some unquestionable ideal. I think doing so as a consequentialist, without a very strong justification, itself smacks of disingenuousness and seems motivated by the same phony and manipulative attitude towards PR that Sarah's article attacks.

What would be more interesting to me would be a thoughtful survey of potential EA perspectives on honesty, but an honest treatment of the subject does seem to be risky from a PR standpoint. And it's not clear that it would bring enough benefit to justify the cost. We probably will all just end up agreeing with common moral intuitions.

Comment author: JacobTref 16 January 2017 09:41:51PM 0 points [-]

Yep, good point - that's a biggie.

Comment author: MikeJohnson 16 January 2017 07:54:32PM *  1 point [-]

Interesting. To attempt to restate your notion: it's more important to avoid death than to get an easy meal, so pain & aversion should come more easily than pleasure.

I'd agree with this, but perhaps this is overdetermined, in that both evolution and substrate lead us to "pleasure is centralized & highly contextual, pain is distributed & easily caused".

I.e., I would expect that, given a set of conscious systems with randomized configurations, valence probably doesn't follow a symmetric distribution. Rather, my expectation is that high-valence states will be outnumbered by low-valence states... and so, just as it's easier to destroy value than to create it, it's easier to create negative valence than positive valence. Thus, positive valence requires centralized coordination (hedonic regions) and is easily disrupted by nociceptors (injections of entropy are unlikely to push the system toward positive states, since those are rare).
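
As a toy sketch of that intuition (entirely my own illustration: N signed units stand in, very crudely, for a "configuration", and their mean alignment stands in for valence; none of the specific numbers matter):

```python
import random

N = 100          # units per toy "configuration"
TRIALS = 10_000  # samples / perturbations to try
FLIPS = 5        # size of each random perturbation

def alignment(units):
    """Mean of the units: +1.0 means fully coordinated (the "high valence" proxy here)."""
    return sum(units) / len(units)

# 1) Random configurations almost never land in highly coordinated states.
high = sum(
    1 for _ in range(TRIALS)
    if alignment([random.choice((-1, 1)) for _ in range(N)]) > 0.5
)
print(f"random configs with alignment > 0.5: {high} / {TRIALS}")

# 2) Randomly perturbing a mostly coordinated state nearly always lowers
#    alignment: injections of entropy push toward the far more numerous
#    low-alignment states.
coordinated = [1] * 90 + [-1] * 10
random.shuffle(coordinated)
lowered = 0
for _ in range(TRIALS):
    perturbed = coordinated[:]
    for i in random.sample(range(N), FLIPS):
        perturbed[i] *= -1
    lowered += alignment(perturbed) < alignment(coordinated)
print(f"perturbations that lowered alignment: {lowered} / {TRIALS}")
```

The first count should come out at essentially zero and the second around 99%, which is all this toy is meant to show: coordinated states are rare, and they're fragile under random perturbation.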

Comment author: MikeJohnson 16 January 2017 07:27:06PM *  1 point [-]

Hi Brian,

Thanks for the thoughts & kind words.

Nominally, this post is simply making the point that affective neuroscience doesn't have a good definition of either valence or suffering and, based on its current trajectory, isn't likely to produce one in the foreseeable future. It seems we both agree on that. :) However, you're quite correct that the subtext to this post is that I believe a crisp definition of valence is possible, and you're curious how I square this with the above description of the sad state of affective neuroscience.

Essentially, my model is that valence in the human brain is an incredibly complex phenomenon that defies simple description-- but valence itself is probably a simple property of conscious systems. This seems entirely consistent with the above facts (Section I of my paper), and also very plausible if consciousness is a physical phenomenon. Here are the next few paragraphs of my paper:

II. Clarifying the Problem of Valence

The above section noted that affective neuroscience knows a lot about valence, but its knowledge is very messy and disorganized. If valence is intrinsically a messy, fuzzy property of conscious states, perhaps this really is the best we can do here.

However, I don’t think we live in a universe where valence is a fuzzy, fragile, high-level construction. Instead, I think it’s a crisp thing we can quantify, and the patterns in it only look incredibly messy because we’re looking at it from the wrong level of abstraction.

Brains vs conscious systems:

There are fundamentally two kinds of knowledge about valence: things that are true specifically in brains like ours, and general principles common to all conscious entities. Almost all of what we know about pain and pleasure is of the first type-- essentially, affective neuroscience has been synonymous with making maps of the mammalian brain’s evolved, adaptive affective modules and contingent architectural quirks (“spandrels”).

This paper attempts to chart a viable course for this second type of research: it’s an attempt toward a general theory of valence, a.k.a. universal, substrate-independent principles that apply equally to and are precisely true in all conscious entities, be they humans, non-human animals, aliens, or conscious artificial intelligence (AI).

...

Anything piped through the complexity of the brain will look complex, regardless of how simple or complex it starts out as. Similarly, anything will look irreducibly complex if we're looking at it from the wrong level of abstraction. So just because affective neuroscience is confused about valence, doesn't mean that valence is somehow intrinsically confusing.

In this sense, I see valence research as no different from any other physical science: progress will be made by (1) controlling for the messy complexity added by studying valence in messy systems, and (2) finding levels of abstraction that "carve reality at the joints" better. (For instance, "emotions" are not natural kinds, as Barrett notes, but "valence" may be one.)

The real kicker here is whether there exists a cache of predictive knowledge about consciousness to be discovered (similar to how Faraday & Maxwell discovered a cache of predictive knowledge about electromagnetism) or whether consciousness is a linguistic confusion, to be explained away (similar to how élan vital was a linguistic confusion & improper reification).

Fundamental research about suffering looks very, very different depending on which of these is true. Principia Qualia lays out how it would look in the case of the former, and describes a research program that I expect to bear predictive fruit if we 'turn the crank' on it.

But there doesn't seem to be an equivalent document describing what suffering research is if we assume that consciousness should be thought of more as a linguistic confusion than a 'real' thing, and that suffering is a leaky reification. Explicitly describing what fundamental research about suffering looks like, and predicting what kinds of knowledge are & aren't possible, if we assume functionalism (or perhaps 'computational constructivism' fits your views?) seems like it could be a particularly worthwhile project for FRI.

p.s. Yes, I quite enjoyed that piece on attempting to reverse-engineer a 6502 microprocessor via standard neuroscientific methods. My favorite paper of 2016 actually!

Comment author: Peter_Hurford  (EA Profile) 16 January 2017 06:40:32PM 0 points [-]

Thanks, Ben! I revised my estimate in light of your comment. Hopefully I've also phrased 80K's conclusion more accurately.

Comment author: Elizabeth 16 January 2017 05:53:25PM 3 points [-]

A list of ethical and practical concerns the EA movement has with Intentional Insights: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/ .

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

Comment author: Peter_Hurford  (EA Profile) 16 January 2017 04:19:14PM 4 points [-]

Cool, I always love work surfacing an otherwise unknown donation opportunity! I also find your initial framework compelling and think it motivates some of my donations, for example with SHIC.

Under "Reservations about the donation", I think it's worth mentioning the possibility that the threat is misperceived and the Trump administration turns out to not pose any significant risk to the integrity or existence of those datasets.

Comment author: Brian_Tomasik 16 January 2017 09:58:15AM 2 points [-]

Thanks for the summary! Lots of useful info here.

for every functional story about the role of valence, there exist counter-examples.

As a functionalist, I'm not at all troubled by these counter-examples. They merely show that the brain is very complicated, and they reinforce my view that crisp definitions of valence don't work. ;)

As an analogy, suppose you were trying to find the location of "pesticide regulation" in the United States. You might start with the EPA: "Pesticide regulation in the United States is primarily a responsibility of the Environmental Protection Agency." But you might notice that other federal agencies do work related to pesticides (e.g., the USDA). Moreover, some individual states have their own pesticide regulations. Plus, individual schools, golf courses, and homes decide if and how to apply pesticides; in this sense, they also "regulate" pesticide use. We might try to distinguish "legal regulation" from "individual choices" and note that the two can operate differently. We might question what counts as a pesticide. And so on. All this shows is that there's a lot of stuff going on that doesn't cleanly map onto simple constructs.

Actually, your later Barrett (2006) quote says the same thing: “the natural-kind view of emotion may be the result of an error of arbitrary aggregation. That is, our perceptual processes lead us to aggregate emotional processing into categories that do not necessarily reveal the causal structure of the emotional processing.” And you seemed to agree in your conclusion: "valence in the human brain is a complex phenomenon which defies simple description." I'm puzzled how this squares with your attempt to find a crisp definition for valence.

we don’t have a clue as to what properties are necessary or sufficient to make a given brain region a so-called “pleasure center” or “pain center”

Likewise, we can debate the necessary and sufficient properties that make something a "pesticide-regulation center".

by taking a microprocessor [...] and attempting to reverse-engineer it

Interesting. :) This is part of why I don't expect whole-brain emulation to come before de-novo AGI. Reverse-engineering of complex systems is often very difficult.

Comment author: RomeoStevens 16 January 2017 09:09:59AM 1 point [-]

The frustrating inverse point makes me think this is a reflection of the asymmetric payoff structure in the ancestral environment.

Comment author: Linch 16 January 2017 06:22:15AM 0 points [-]

UPDATE: I now have my needed number of volunteers, and intend to launch the experiment tomorrow evening. Please email, PM, or otherwise contact me in the next 12 hours if you're interested in participating.

Comment author: Carl_Shulman 15 January 2017 08:12:21PM 4 points [-]

Looks like Tim Telleen-Lawton won, as the first ten hex digits of the beacon at noon PST were 0CF7565C0F, i.e. 55689239567 in decimal. Congratulations to Tim, and to all of the early adopters.
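
For anyone checking the arithmetic, the winning number is just that hex prefix read as a base-16 integer:

```python
# Read the first ten hex digits of the beacon value as a base-16 integer
# and print its decimal form:
print(int("0CF7565C0F", 16))  # 55689239567
```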

Comment author: Gina_Stuessy  (EA Profile) 15 January 2017 05:17:49PM 0 points [-]

Is the Boston one different from this? http://www.eagxboston.com/

Comment author: Linch 15 January 2017 08:37:35AM 2 points [-]

I often see spambots in the comments.

Comment author: Peter_Hurford  (EA Profile) 15 January 2017 06:01:21AM 1 point [-]
Comment author: John_Maxwell_IV 15 January 2017 01:29:06AM *  0 points [-]

Less Wrong has a "subscribe" feature that might be importable.

Comment author: John_Maxwell_IV 15 January 2017 01:25:21AM 0 points [-]
