Comment author: Kaj_Sotala 25 July 2017 11:01:35AM * 1 point

Wait, are you equating "functionalism" with "doesn't believe suffering can be meaningfully defined"? I thought your criticism was mostly about the latter; I don't think it's automatically implied by the former. If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.

(You could reasonably argue that it doesn't look likely that functionalism will provide such a theory, but then I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.)

Comment author: MikeJohnson 25 July 2017 05:36:44PM * 0 points

Functionalism seems internally consistent (although perhaps too radically skeptical). However, in my view it also seems to lead to some flavor of moral nihilism; consciousness anti-realism makes suffering realism difficult/complicated.

If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.

I think whether suffering is a 'natural kind' is prior to this analysis: i.e., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.

Part of my reason for writing this critique is to argue that functionalism isn't a useful theory of mind, because it doesn't do what we need theories of mind to do (adjudicate disagreements in a principled way, especially in novel contexts).

If it is a placeholder, then I think the question becomes: "What would 'something better' look like, and what would count as evidence that something is better?" I'd love to get your (and FRI's) input here.

Comment author: kbog 23 July 2017 07:07:12PM * 2 points

much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil.

I'm a little confused here. Where does MIRI or FHI say anything about consciousness, much less assume any particular view?

Comment author: MikeJohnson 23 July 2017 08:57:34PM 3 points

My sense is that MIRI and FHI are fairly strong believers in functionalism, based on reading various pieces on LessWrong, personal conversations with people who work there, and 'revealed preference' research directions. OpenPhil may be more of a stretch to categorize in this way; I'm going off what I recall of Holden's debate on AI risk, some limited personal interactions with people who work there, and Luke Muehlhauser's report (he was up-front about his assumptions on this).

Of course it's harder to pin down what people at these organizations believe than it is in Brian's case, since Brian writes a great deal about his views.

So to my knowledge, this statement is essentially correct, although there may be definitional & epistemological quibbles.

Comment author: Wei_Dai 22 July 2017 10:06:39AM * 6 points

The one view that seems unusually prevalent within FRI, apart from people self-identifying with suffering-focused values, is a particular anti-realist perspective on morality and moral reasoning, where valuing open-ended moral reflection is not always regarded as the "prudent" thing to do by default.

Thanks for pointing this out. I've noticed this myself in some of FRI's writings, and I'd say this, along with the high degree of certainty on various object-level philosophical questions that presumably causes the disvaluing of reflection about them, is what most "turns me off" about FRI. I worry a lot about potential failures of goal preservation (i.e., value drift) too, but because I'm highly uncertain about just about every meta-ethical and normative question, I see no choice but to try to design some sort of reflection procedure that I can trust enough to hand off control to. In other words, I have nothing I'd want to "lock in" at this point, and since I'm by default constantly handing off control to my future self with few safeguards against value drift, doing something better than that default is one of my highest priorities. If other people are also uncertain and place high value on (safe/correct) reflection as a result, that helps with my goal, because we can then pool resources to work out what safe/correct reflection is. So it's regrettable to see FRI people sometimes argue for more certainty than I think is warranted, and especially to see them argue against reflection.

Comment author: MikeJohnson 22 July 2017 02:42:39PM * 0 points

I really enjoyed your linked piece on meta-ethics. Short but insightful. I believe I'd fall into the second bucket.

If you're looking for what (2) might look like in practice, and how we might try to relate it to the human brain's architecture/drives, you might enjoy this: http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/

I'd also agree that designing trustworthy reflection procedures is important. My intuitions here are: (1) value-drift is a big potential problem with FRI's work (even if they "lock in" caring about suffering, if their definition of 'suffering' drifts, their tacit values do too); (2) value-drift will be a problem for any system of ethics that doesn't cleanly 'compile to physics'. (This is a big claim, centering around my Objection 6, above.)

Perhaps we could generalize this latter point as "if information is physical, and value is informational, then value is physical too."

Comment author: Brian_Tomasik 21 July 2017 11:18:45PM * 2 points

I don't want to put words in Carl's mouth, and certainly Carl doesn't necessarily endorse anything I write. Perhaps he'll chime in. :)

For more defenses of anti-realism (i.e., type-A physicalism), here are some other authors. Dennett is the most famous, though some complain that he doesn't use rigorous philosophical arguments/jargon.

Comment author: MikeJohnson 22 July 2017 01:04:42AM 3 points

This may or may not be relevant, but I would definitely say that Brian's views are not 'fringe views' in the philosophy of mind; they're quite widely held in philosophy and elsewhere. I believe Brian sticks out because his writing is so clear, and because he doesn't avoid thinking about and admitting strange implications of his views.

That said I don't know Carl's specific views on the topic.

Comment author: Lukas_Gloor 21 July 2017 08:57:37AM * 10 points

Brian's view is maybe best described as eliminativism about consciousness (which may already seem counterintuitive to many) plus a counterintuitive way to draw boundaries in concept space. Luke Muehlhauser said about Brian's way of assigning non-zero moral relevance to any process that remotely resembles aspects of our concept of consciousness:

"Mr. Tomasik’s view [...] amounts to pansychism about consciousness as an uninformative special case of “pan-everythingism about everything."

See this conversation.

So the disagreement there does not appear to be about questions such as "What produces people's impression of there being a hard problem of consciousness?" but rather whether anything that is "non-infinitely separated in multi-dimensional concept space" still deserves some (tiny) recognition as fitting into the definition. As Luke says here, the concept "consciousness" works more like "life" (= fuzzy) and less like "water" (= H2O), and so if one shares this view, it becomes non-trivial to come up with an all-encompassing definition.
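To make the fuzzy-concept point concrete, here is a toy sketch of what graded membership could look like. To be clear, this is purely my illustration: the feature space, the distance metric, and the exponential decay are made-up assumptions, not Brian's actual formalism.

```python
import math

# Toy illustration (my construction, not Brian's actual formalism):
# graded "consciousness-likeness" as a decaying function of distance
# in a hypothetical multi-dimensional concept space.

def moral_weight(entity_features, prototype_features):
    """Strictly positive for any finite distance -- echoing the idea that
    anything 'non-infinitely separated in concept space' gets some (tiny)
    recognition."""
    distance = math.dist(entity_features, prototype_features)
    return math.exp(-distance)

# Hypothetical feature vectors; the dimensions are purely illustrative.
human = (1.0, 1.0, 1.0)
insect = (0.3, 0.2, 0.1)
thermostat = (0.01, 0.0, 0.0)

for name, entity in [("insect", insect), ("thermostat", thermostat)]:
    print(name, moral_weight(entity, human))  # small but nonzero weights
```

On this kind of picture there is no natural cutoff at which the weight hits zero, which is exactly why drawing hard boundaries becomes a substantive choice rather than something the concept hands you.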

While most researchers at FRI (my impression, anyway, as someone who works there) place highest credence on functionalism and eliminativism, there is more skepticism about Brian's inclination to never draw hard boundaries in concept space.

Comment author: MikeJohnson 22 July 2017 01:00:19AM 2 points

While most researchers at FRI (my impression, anyway, as someone who works there) place highest credence on functionalism and eliminativism, there is more skepticism about Brian's inclination to never draw hard boundaries in concept space.

It would be interesting to see FRI develop what 'suffering-focused ethics, as informed by functionalism/eliminativism, but with hard boundaries in concept space' might look like.

Comment author: Kaj_Sotala 20 July 2017 11:10:53PM * 10 points

This looks sensible to me. I'd just quickly note that I'm not sure if it's quite accurate to describe this as "FRI's metaphysics", exactly - I work for FRI, but haven't been sold on the metaphysics that you're criticizing. In particular, I find myself skeptical of the premise "suffering is impossible to define objectively", which you largely focus on. (Though part of this may be simply because I haven't yet properly read/considered Brian's argument for it, so it's possible that I would change my mind about that.)

But in any case, I've currently got three papers in various stages of review, submission or preparation (that other FRI people have helped me with), and none of those papers presuppose this specific brand of metaphysics. There's a bunch of other work being done, too, which I know of and which I don't think presupposes it. So it doesn't feel quite accurate to me to suggest that the metaphysics would be holding back our progress, though of course there can be some research being carried out that's explicitly committed to this particular metaphysics.

(opinions in this comment purely mine, not an official FRI statement etc.)

Comment author: MikeJohnson 22 July 2017 12:57:54AM 4 points

Hi Kaj- that makes a lot of sense. I would say FRI currently looks very eliminativism-heavy from the outside (see e.g., https://foundational-research.org/research/#consciousness), but it sounds like the inside view is indeed different.

As I noted on FB, I'll look forward to seeing where FRI goes with its research.

Comment author: RomeoStevens 21 July 2017 08:41:12AM * 1 point

that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible

Curious for your take on the premise that ontologies always have tacit telos.

Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

I think this is relevant for the dissonance model of suffering, though I can't fully articulate how yet.

Comment author: MikeJohnson 21 July 2017 07:37:03PM * 2 points

Curious for your take on the premise that ontologies always have tacit telos.

Some ontologies seem to have more of a telos 'baked in'-- e.g., Christianity might be a good example-- whereas other ontologies have zero explicit telos-- e.g., pure mathematics.

But I think you're right that there's always a tacit telos, perhaps based on elegance. When I argue that "consciousness is a physics problem", I'm arguing that it inherits physics' tacit telos, which seems to be elegance-as-operationalized-by-symmetry.

I wonder if "elegance" always captures telos? This would indicate a certain theory-of-effective-social/personal-change...
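To give a concrete (if toy) sense of what "operationalized by symmetry" could mean, here's a minimal sketch: score a state by how invariant it is under mirror reflection. The state vectors and the choice of mirror symmetry are illustrative assumptions on my part; QRI's actual proposals are more involved.

```python
import numpy as np

# Toy sketch of "elegance operationalized as symmetry" (illustrative only):
# score a state vector by its correlation with its own mirror image.
# A perfectly palindromic state scores 1.0; random noise scores near 0.

def mirror_symmetry(state):
    state = np.asarray(state, dtype=float)
    mirrored = state[::-1]
    return float(np.corrcoef(state, mirrored)[0, 1])

symmetric_state = [0.1, 0.5, 0.9, 0.5, 0.1]
rng = np.random.default_rng(42)
noisy_state = rng.random(5)

print(mirror_symmetry(symmetric_state))  # 1.0
print(mirror_symmetry(noisy_state))      # typically much lower
```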

Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

Yeah, it doesn't seem that technology can ever truly be "teleologically neutral".

Comment author: SoerenMind 21 July 2017 12:11:10PM * 10 points

What's the problem if a group of people explores the implications of a well-respected position in philosophy and is (I think) fully aware of those implications? Exploring a different position should be a task for people who actually place more than a tiny bit of credence in it, it seems to me - especially when it comes to a new and speculative hypothesis like the one in Principia Qualia.

This post mostly reads like a contribution to a long-standing philosophical debate to me and would be more appropriately presented as arguing against a philosophical assumption rather than against a research group working under that assumption.

In the cog-sci / neuroscience institute where I currently work, productive work is being done under assumptions similar to Brian's / FRI's, though less explicit ones. This includes relevant work on modelling valence in animals within the reinforcement learning framework.
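As a concrete example of the kind of modelling I mean, here is a minimal sketch of valence operationalized as the reward prediction error in temporal-difference learning. To be clear, this is a generic textbook construction with made-up numbers, not our institute's actual model.

```python
import numpy as np

# Minimal tabular TD(0) sketch in which valence is operationalized as the
# reward prediction error (delta): positive delta ~ "better than expected",
# negative delta ~ "worse than expected" (a crude suffering proxy).
# The chain environment and all numbers are illustrative assumptions.

n_states = 5
values = np.zeros(n_states)   # learned value estimates V(s)
alpha, gamma = 0.1, 0.9       # learning rate, discount factor

def td_step(state, next_state, reward):
    """One TD(0) update; returns the prediction error as a valence proxy."""
    delta = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * delta
    return delta

# Walk a simple chain 0 -> 1 -> ... -> 4; reward arrives on the final step.
for episode in range(200):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0
        valence = td_step(s, s + 1, r)

print(values)  # value propagates backward; prediction errors shrink with learning
```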

I know you disagree with these assumptions, but a post like this can make it seem to outsiders as if you're criticizing a somewhat crazy position, and by extension it can cast a bad light on FRI.

Comment author: MikeJohnson 21 July 2017 06:57:35PM 3 points

Hi Sören- your general point (am I critiquing FRI, or functionalism?) is reasonable. I do note in the piece why I focus on FRI:

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough to criticize; (2) the fact that FRI is particularly clear about what it cares about- suffering- allows a particularly clear critique about what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

I should say too that the purpose of bringing up QRI's work is not to suggest FRI should be focusing on this, but to note that effort spent developing alternatives helps calibrate the field:

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

Comment author: kbog 20 July 2017 10:17:07PM * 1 point

Re: 2, I don't see how we should expect functionalism to resolve disputes over which agents are conscious. Panpsychism does no such thing, nor does physicalism or dualism or any other theory of mind. Any of these theories can inform inquiry about which agents are conscious, in tandem with empirical work, but the connection is tenuous and it seems to me that at least 70% of the work is empirical. Theory of mind mostly gives a theoretical basis for empirical work.

The problem lies more with the specific anti-realist account of sentience that some people at FRI have, which basically boils down to "it's morally relevant suffering if I think it's morally relevant suffering." I suspect that a good functionalist framework need not involve this.

"But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?"

Actually I think the tension would be problematic if we had philosophical debates about tables and edge cases which may or may not be tables.

Comment author: MikeJohnson 20 July 2017 10:31:38PM 0 points

Re: 2, I don't see how we should expect functionalism to resolve disputes over which agents are conscious.

I think analytic functionalism is internally consistent on whether agents are conscious, as is the realist panpsychism approach, and so on. The problem comes in, as you note, when we want to be anti-realist about consciousness yet also care about suffering.

it seems to me that at least 70% of the work is empirical. Theory of mind mostly gives a theoretical basis for empirical work.

In practice, it may be difficult to cleanly distinguish between theoretical work on consciousness, and empirical work on consciousness. At least, we may need to be very careful in how we're defining "consciousness", "empirical", etc.

The problem lies more with the specific anti-realist account of sentience that some people at FRI have, which basically boils down to "it's morally relevant suffering if I think it's morally relevant suffering." I suspect that a good functionalist framework need not involve this.

It's an open question whether this is possible under functionalism-- my argument is that it's not possible to find a functionalist framework which has a clear or privileged definition of what morally relevant suffering is.

Comment author: MichaelPlant 20 July 2017 09:54:22PM 6 points

This was great and I really enjoyed reading it. It's a pleasure to see one EA disagreeing with another with such eloquence, kindness and depth.

What I would say is that, even as someone doing a PhD in Philosophy, I found a bunch of this hard to follow (I don't really do any work on consciousness), particularly objection 7 and when you introduced QRI's own approach. I'll entirely understand if you think making this more accessible is more trouble than it's worth; I just thought I'd let you know.

Comment author: MikeJohnson 20 July 2017 10:22:11PM 1 point

Thanks Michael!

Re: Objection 7, I think Aaronson's point is that, if we actually take seriously the idea that a computer / Turing machine could generate consciousness simply by running the right computer code, we should be prepared for a lot of very, very weird implications.

Re: QRI's approach, yeah, I was trying to balance bringing up my work against not derailing the focus of the critique. I probably should have spent more words on that (I may go back and edit it).
