Comment author: MikeJohnson 14 August 2017 02:53:09PM 2 points [-]

Hi Michael,

This is fantastic work, thanks for all the effort and thought that went into these posts. Your overall case seems solid to me-- or at minimum, I think yours is 'the argument to beat'.

One thought that I had while reading:

Drug policy reform may also allow us to better understand current pain medications and develop new treatments and uses. Your focus here is on decriminalizing existing drugs such as psilocybin, opioids, and MDMA, because you believe (with substantial evidence) that these drugs have nontrivial therapeutic potential, despite their sometimes substantial drawbacks. This seems reasonable, especially in the case of drugs with fairly benign risk profiles (e.g. psilocybin).

I do worry about some of the long-term side effects associated with certain drugs, however, and it seems to me an interesting 'unknown unknown' here is whether it's possible to develop new substances, or novel brain stimulation modalities, that allow us access to the upsides of such drugs without suffering from the downsides.

E.g., in the case of MDMA, the not-uncommon long-term effects of chronic use include heightened anxiety & cognitive impairment, which seem very serious. But at the same time, there doesn't seem to be any 'law of the universe' mandating that the pleasant feelings of love & trust elicited by MDMA that are so therapeutically useful for PTSD must be unavoidably linked to brain damage.

I'm not completely sure how this observation interacts with your arguments, but I suspect it generally supports your case, since decriminalization could lower barriers for research into even better & safer options. Quite possibly, this could be one of the major reasons why decriminalization could lead to a better future.

On the other hand, the sword of innovation cuts both ways, as there seem to be a lot of very dangerous, toxic variants of drugs coming from overseas labs that are even less safe than current options (fentanyl, Captagon, etc.). Perhaps this is a case of "Banning dangerous substances as a precautionary principle can have perverse effects if it causes people to take more dangerous drugs instead," and decriminalization would help mitigate this phenomenon. But I must admit to some uncertainty & worry here as to second-order effects.

Anyway, I think this is worth pursuing further. OpenPhil might be interested? I think probably Nick Beckstead might be a good contact there.

In response to Introducing Enthea
Comment author: MikeJohnson 10 August 2017 11:23:49PM *  5 points [-]

Hi Milan,

I'm glad to see this sort of project. You may enjoy my colleague Andres's summary of the Psychedelic Science 2017 conference. He notes that:

It should not come as a surprise to anyone who has been paying attention that there is a psychedelic renaissance underway. Barring extreme world-wide counter-measures against it, in so far as psychedelic and empathogenic compounds meet the required evidentiary standards of mainstream psychopharmacology as safe and effective treatments for mental illness (and they do), they will be a staple of tomorrow’s tools for mental health. It’s not a difficult gamble: the current studies being made around the world are merely providing the scientific backing of what was already known in the 60s (for psychedelics) and 80s (for MDMA). I.e. That psychedelic medicine (people love to call it that way) in the right set and setting produces outstanding clinically-relevant effect sizes.

In short, it does seem increasingly like psychedelics aren't just for edgy recreational use, but could be part of some useful medical tradition that can measurably and reliably help people. But it does seem like it would be helpful to have answers to the following questions:

  • (1) How do these things work? If we think they do good things, then what's a gears-level account of how they do good?

  • (2) Are there tradeoffs, and what are they? Are there ways of getting the good without the bad?

Anyway, thanks for doing this!

Comment author: Brian_Tomasik 02 August 2017 09:11:56AM 1 point [-]

I only claim that most physical states/processes have only a very limited collection of computational states/processes that it can reasonably be interpreted as[.]

I haven't read most of this paper, but it seems to argue that.

Comment author: MikeJohnson 02 August 2017 10:30:59PM *  1 point [-]

You may also like Towards a computational theory of experience by Fekete and Edelman- here's their setup:

3.4. Counterfactually stable account of implementation

To claim a computational understanding of a system, it is necessary for us to be able to map its instantaneous states and variables to those of a model. Such a mapping is, however, far from sufficient to establish that the system is actually implementing the model: without additional constraints, a large enough conglomerate of objects and events can be mapped so as to realize any arbitrary computation (Chalmers, 1994; Putnam, 1988). A careful analysis of what it means for a physical system to implement an abstract computation (Chalmers, 1994; Maudlin, 1989) suggests that, in addition to specifying a mapping between the respective instantaneous states of the system and the computational model, one needs to spell out the rules that govern the causal transitions between corresponding instantaneous states in a counterfactually resistant manner.

In the case of modeling phenomenal experience, the stakes are actually much higher: one expects a model of qualia to be not merely good (in the sense of the goodness of fit between the model and its object), but true and unique. Given that a multitude of distinct but equally good computational models may exist, why is not the system realizing a multitude of different experiences at a given time? Dodging this question amounts to conceding that computation is not nomologically related to qualia.

Construing computation in terms of causal interactions between instantaneous states and variables of a system has ramifications that may seem problematic for modeling experience. If computations and their implementations are individuated in terms of causal networks, then any given, specific experience or quale is individuated (in part) by the system’s entire space of possible instantaneous states and their causal interrelationships. In other words, the experience that is unfolding now is defined in part by the entire spectrum of possible experiences available to the system.

In subsequent sections, we will show that this explanatory problem is not in fact insurmountable, by outlining a solution for it. Meanwhile, we stress that while computation can be explicated by numbering the instantaneous states of a system and listing rules of transition between these states, it can also be formulated equivalently in dynamical terms, by defining (local) variables and the dynamics that govern their changes over time. For example, in neural-like models computation can be explicated in terms of the instantaneous state of ‘‘representational units’’ and the differential equations that together with present input lead to the unfolding of each unit’s activity over time. Under this description, computational structure results entirely from local physical interactions.

It's a little bit difficult to parse precisely how they believe they solve the multiple realization of computational interpretations of a system, but the key passage seems to be:

Third, because of multiple realizability of computation, one computational process or system can represent another, in that a correspondence can be drawn between certain organizational aspects of one process and those of the other. In the simplest representational scenario, correspondence holds between successive states of the two processes, as well as between their respective timings. In this case, the state-space trajectory of one system unfolds in lockstep with that of the other system, because the dynamics of the two systems are sufficiently close to one another; for example, formal neurons can be wired up into a network whose dynamics would emulate (Grush, 2004) that of the falling rock mentioned above. More interesting are cases in which the correspondence exists on a more abstract level, for instance between a certain similarity structure over some physical variables ‘‘out there’’ in the world (e.g., between objects that fall like a rock and those that drift down like a leaf) and a conceptual structure over certain instances of neural activity, as well as cases in which the system emulates aspects of its own dynamics. Further still, note that once representational mechanisms have been set in place, they can also be used ‘‘offline’’ (Grush, 2004). In all cases, the combinatorics of the world ensures that the correspondence relationship behind instances of representation is highly non-trivial, that is, unlikely to persist purely as a result of a chance configurational alignment between two randomly picked systems (Chalmers, 1994).

My attempt at paraphrasing this: if we can model the evolution of a physical system and the evolution of a computational system with the same phase space for some finite time t, then as t increases we can be increasingly confident the physical system is instantiating this computational system. At the limit (t->∞), this may offer a method for uniquely identifying which computational system a physical system is instantiating.
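
This filtering idea can be sketched in a few lines of code (my own toy illustration, not Fekete and Edelman's machinery; the three-state system and its trajectory are invented): enumerate every deterministic transition function over a small state set, observe a growing prefix of the 'physical' trajectory, and discard candidates that mispredict any step.

```python
import itertools

def consistent(transition, trajectory):
    """True iff `transition` reproduces every observed step of `trajectory`."""
    return all(transition[a] == b for a, b in zip(trajectory, trajectory[1:]))

states = [0, 1, 2]
# All deterministic transition functions on 3 states: 3^3 = 27 candidates.
candidates = [dict(zip(states, image))
              for image in itertools.product(states, repeat=3)]

trajectory = [0, 1, 2, 0, 1, 2, 0]  # observed "physical" evolution
for t in range(1, len(trajectory) + 1):
    survivors = [c for c in candidates if consistent(c, trajectory[:t])]
    print(t, len(survivors))  # 27, then 9, 3, and 1 from t=4 onward
```

The survivor count drops 27 -> 9 -> 3 -> 1 as t grows, which matches the intuition above: longer observation windows pin down the computational interpretation, though only in the limit is it unique.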

My intuition here is that the closer they get to solving the problem of how to 'objectively' determine what computations a physical system is realizing, the further their framework will stray from the Turing paradigm of computation and the closer it will get to a hypercomputation paradigm (which in turn may essentially turn out to be isomorphic to physics). But, I'm sure I'm biased, too. :) Might be worth a look.

Comment author: Brian_Tomasik 02 August 2017 09:11:56AM 1 point [-]

I only claim that most physical states/processes have only a very limited collection of computational states/processes that it can reasonably be interpreted as[.]

I haven't read most of this paper, but it seems to argue that.

Comment author: MikeJohnson 02 August 2017 06:55:02PM *  1 point [-]

The counterfactual response is typically viewed as inadequate in the face of triviality arguments. However, when we count the number of automata permitted under that response, we find it succeeds in limiting token physical systems to realizing at most a vanishingly small fraction of the computational systems they could realize if their causal structure could be ‘repurposed’ as needed. Therefore, the counterfactual response is a prima facie promising reply to triviality arguments. Someone might object this result nonetheless does not effectively handle the metaphysical issues raised by those arguments. Specifically, an ‘absolutist’ regarding the goals of an account of computational realization might hold that any satisfactory response to triviality arguments must reduce the number of possibly-realized computational systems to one, or to some number close to one. While the counterfactual response may eliminate the vast majority of computational systems from consideration, in comparison to any small constant, the number of remaining possibly-realized computational systems is still too high (2^n).

That seems like a useful approach- in particular,

On the other hand, the argument suggests at least some computational hypotheses regarding cognition are empirically substantive: by identifying types of computation characteristic of cognition (e.g., systematicity, perhaps), we limit potential cognitive devices to those whose causal structure includes these types of computation in the sets of possibilities they support.

This does seem to support the idea that progress can be made on this problem! On the other hand, the author's starting assumption is that we can treat a physical system as a computational (digital) automaton, which seems like a pretty big assumption.

I think this assumption may or may not turn out to be ultimately true (Wolfram et al.), but given current theory it seems difficult to reduce actual physical systems to computational automata in practice. In particular, it seems difficult to apply this framework to (1) quantum systems (which all physical systems ultimately are), and (2) biological systems with messy levels of abstraction, such as the brain (which is exactly what we'd need to handle for the purposes of functionalism).

From a physics perspective, I wonder if we could figure out a way to feed in a bounded wavefunction and identify some minimal upper bound on the number of reasonable computational interpretations of the system. My instinct is that David Deutsch might be doing relevant work? But I'm not at all sure of this.

Comment author: AlexMennen 01 August 2017 08:22:51PM 0 points [-]

My critique of analytic functionalism is that it is essentially nothing but an assertion of this vagueness.

That's no reason to believe that analytic functionalism is wrong, only that it is not sufficient by itself to answer very many interesting questions.

Without a bijective mapping between physical states/processes and computational states/processes, I think my point holds.

No, it doesn't. I only claim that most physical states/processes have only a very limited collection of computational states/processes that it can reasonably be interpreted as, not that every physical state/process has exactly one computational state/process that it can reasonably be interpreted as, and certainly not that every computational state/process has exactly one physical state/process that can reasonably be interpreted as it. Those are totally different things.

it feels as though you're pattern-matching me to IIT and channeling Scott Aaronson's critique of Tononi

Kind of. But to clarify, I wasn't trying to argue that there will be problems with the Symmetry Theory of Valence that derive from problems with IIT. And when I heard about IIT, I figured that there were probably trivial counterexamples to the claim that Phi measures consciousness and that perhaps I could come up with one if I thought about the formula enough, before Scott Aaronson wrote the blog post where he demonstrated this. So although I used that critique of IIT as an example, I was mainly going off of intuitions I had prior to it. I can see why this kind of very general criticism from someone who hasn't read the details could be frustrating, but I don't expect I'll look into it enough to say anything much more specific.

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

But people have tried developing alternatives to analytic functionalism.

Comment author: MikeJohnson 01 August 2017 09:07:05PM *  0 points [-]

That's no reason to believe that analytic functionalism is wrong, only that it is not sufficient by itself to answer very many interesting questions.

I think that's being generous to analytic functionalism. As I suggested in Objection 2,

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.


I only claim that most physical states/processes have only a very limited collection of computational states/processes that it can reasonably be interpreted as[.]

I'd like to hear more about this claim; I don't think it's ridiculous on its face (per Brian's and Michael_PJ's comments), but it seems a lot of people have banged their heads against this without progress, and my prior is that formalizing this is a lot harder than it looks (it may be unformalizable). If you could formalize it, that would have a lot of value for a lot of fields.

So although I used that critique of IIT as an example, I was mainly going off of intuitions I had prior to it. I can see why this kind of very general criticism from someone who hasn't read the details could be frustrating, but I don't expect I'll look into it enough to say anything much more specific.

I don't expect you to either. If you're open to a suggestion about how to approach this in the future, though, I'd offer that if you don't feel like reading something but still want to criticize it, instead of venting your intuitions (which could be valuable, but don't seem calibrated to the actual approach I'm taking), you should press for concrete predictions.

The following phrases seem highly anti-scientific to me:

  • "sounds wildly implausible"

  • "These sorts of theories never end up getting empirical support, although their proponents often claim to have empirical support"

  • "I won't be at all surprised if you claim to have found substantial empirical support for your theory, and I still won't take your theory at all seriously if you do, because any evidence you cite will inevitably be highly dubious"

  • "The heuristic that claims that a qualia-related concept is some simple other thing are wrong, and that claims of empirical support for such claims never hold up"

  • "I am almost certain that there are trivial counterexamples to the Symmetry Theory of Valence"

I.e., these statements seem to lack epistemological rigor, and seem to absolutely prevent you from updating in response to any evidence I might offer, even in principle (i.e., they're actively hostile to your improving your beliefs, regardless of whether I am or am not correct).

I don't think your intention is to be closed-minded on this topic, and I'm not saying I'm certain STV is correct. Instead, I'm saying you seem to be overreacting to some stereotype you initially pattern-matched me as, and I'd suggest talking about predictions is probably a much healthier way to move forward if you want to spend more time on this. (Thanks!)

Comment author: AlexMennen 30 July 2017 10:17:36PM *  6 points [-]

Speaking of the metaphysical correctness of claims about qualia sounds confused, and I think precise definitions of qualia-related terms should be judged by how useful they are for generalizing our preferences about central cases. I expect that any precise definition for qualia-related terms that anyone puts forward before making quite a lot of philosophical progress is going to be very wrong when judged by usefulness for describing preferences, and that the vagueness of the analytic functionalism used by FRI is necessary to avoid going far astray.

Regarding the objection that shaking a bag of popcorn can be interpreted as carrying out an arbitrary computation, I'm not convinced that this is actually true, and I suspect it isn't. It seems to me that the interpretation would have to be doing essentially all of the computation itself, and it should be possible to make precise the sense in which brains and computers simulating brains carry out a certain computation that waterfalls and bags of popcorn don't. The defense of this objection that you quote from McCabe is weak; the uncontroversial fact that many slightly different physical systems can carry out the same computation does not establish that an arbitrary physical system can be reasonably interpreted as carrying out an arbitrary computation.

I think the edge cases that you quote Scott Aaronson bringing up are good ones to think about, and I do have a large amount of moral uncertainty about them. But I don't see these as problems specific to analytic functionalism. These are hard problems, and the fact that some more precise theory about qualia may be able to easily answer them is not a point in favor of that theory, since wrong answers are not helpful.

The Symmetry Theory of Valence sounds wildly implausible. There are tons of claims that people put forward, often contradicting other such claims, that some qualia-related concept is actually some other simple thing. For instance, I've heard claims that goodness is complexity and that what humans value is increasing complexity. Complexity and symmetry aren't quite opposites, but they're certainly anti-correlated, and both theories can't be right. These sorts of theories never end up getting empirical support, although their proponents often claim to have empirical support. For example, proponents of Integrated Information Theory often cite the fact that the cerebrum has a higher Phi value than the cerebellum as support for the hypothesis that Phi is a good measure of the amount of consciousness a system has, as if comparing two data points were enough to support such a claim. It has since turned out that large regular rectangular grids of transistors, and the operation of multiplication by a large Vandermonde matrix, both have arbitrarily high Phi values, and yet the claim that Phi measures consciousness still survives and claims empirical support, despite this damning disconfirmation. And I think the “goodness is complexity” people also provided examples of good things that they thought they had established are complex, and bad things that they thought they had established are not.

I know this sounds totally unfair, but I won't be at all surprised if you claim to have found substantial empirical support for your theory, and I still won't take your theory at all seriously if you do, because any evidence you cite will inevitably be highly dubious. The heuristic that claims that a qualia-related concept is some simple other thing are wrong, and that claims of empirical support for such claims never hold up, seems to be pretty well supported.
I am almost certain that there are trivial counterexamples to the Symmetry Theory of Valence, though perhaps you've developed a theory sophisticated enough to avoid the really obvious failure modes, like claiming that a square experiences more pleasure and less suffering than a rectangle because its symmetry group is twice as large.
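
As a quick sanity check on that last bit of arithmetic (a toy count, independent of either theory): the symmetry group of a square (D4) has 8 elements, while that of a non-square rectangle (D2) has 4, so "twice as large" is right. A short brute-force sketch:

```python
def symmetries(vertices):
    """Count the eight candidate planar isometries (4 rotations x optional
    reflection) that map the vertex set onto itself."""
    count = 0
    for quarter_turns in range(4):
        for reflect in (False, True):
            def f(p, q=quarter_turns, r=reflect):
                x, y = p
                if r:
                    x = -x                # reflect across the y-axis
                for _ in range(q):
                    x, y = -y, x          # rotate 90 degrees counterclockwise
                return (x, y)
            if {f(v) for v in vertices} == set(vertices):
                count += 1
    return count

square = {(1, 1), (-1, 1), (-1, -1), (1, -1)}
rectangle = {(2, 1), (-2, 1), (-2, -1), (2, -1)}
print(symmetries(square), symmetries(rectangle))  # 8 4
```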

Comment author: MikeJohnson 31 July 2017 06:34:36PM *  1 point [-]

Speaking of the metaphysical correctness of claims about qualia sounds confused, and I think precise definitions of qualia-related terms should be judged by how useful they are for generalizing our preferences about central cases.

I agree a good theory of qualia should help generalize our preferences about central cases. I disagree that we can get there with the assumption that qualia are intrinsically vague/ineffable. My critique of analytic functionalism is that it is essentially nothing but an assertion of this vagueness.

Regarding the objection that shaking a bag of popcorn can be interpreted as carrying out an arbitrary computation, I'm not convinced that this is actually true, and I suspect it isn't.

Without a bijective mapping between physical states/processes and computational states/processes, I think my point holds. I understand it's counterintuitive, but we should expect that when working in these contexts.
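
To make the worry concrete, here is a minimal Putnam-style construction (my own sketch; the "popcorn" states are invented): absent counterfactual constraints, any sequence of distinct physical states can be mapped post hoc onto any computation's run, because the interpretation mapping does all the work.

```python
import random

# "Physical" history: eight distinct, arbitrary states of a shaken popcorn bag.
random.seed(0)
physical_history = random.sample(range(1000), 8)

# Target computation: one full cycle of a 3-bit counter.
computation_run = [i % 8 for i in range(8)]

# The trivial interpretation: map whichever physical state occurred at tick i
# to whichever computational state the counter occupies at tick i.
interpretation = dict(zip(physical_history, computation_run))

# Under this mapping the popcorn "implements" the counter perfectly...
assert [interpretation[s] for s in physical_history] == computation_run
# ...but the mapping supports no counterfactuals: it is silent about states
# the system never actually visited.
print(interpretation)
```

The counterfactual response discussed elsewhere in this thread amounts to rejecting mappings like this one, since they say nothing about unvisited states.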

I think the edge cases that you quote Scott Aaronson bringing up are good ones to think about, and I do have a large amount of moral uncertainty about them. But I don't see these as problems specific to analytic functionalism. These are hard problems, and the fact that some more precise theory about qualia may be able to easily answer them is not a point in favor of that theory, since wrong answers are not helpful.

Correct; they're the sorts of things a theory of qualia should be able to address- necessary, not sufficient.

Re: your comments on the Symmetry Theory of Valence, I feel I have the advantage here, since you haven't read the work. Specifically, it feels as though you're pattern-matching me to IIT and channeling Scott Aaronson's critique of Tononi, which is a bit ironic, since that forms a significant part of PQ's argument for why an IIT-type approach can't work.

At any rate I'd be happy to address specific criticism of my work. This is obviously a complicated topic and informed external criticism is always helpful. At the same time, I think it's a bit tangential to my critique about FRI's approach: as I noted,

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

Comment author: RomeoStevens 30 July 2017 08:29:49AM 0 points [-]

The quote seems very myopic. Let's say that we have a religion X that has an excellent track record at preventing certain sorts of defections by helping people coordinate on enforcement costs. Suffering in the service of stabilizing this state of affairs may be the best use of resources in a given context.

Comment author: MikeJohnson 31 July 2017 05:32:40PM 0 points [-]

I think that's fair-- beneficial equilibriums could depend on reifying things like this.

On the other hand, I'd suggest that with regard to identifying entities that can suffer, false positives are much less harmful than false negatives but they still often incur a cost. E.g., I don't think corporations can suffer, so in many cases it'll be suboptimal to grant them the sorts of protections we grant humans, apes, dogs, and so on. Arguably, a substantial amount of modern ethical and perhaps even political dysfunction is due to not kicking leaky reifications out of our circle of caring. (This last bit is intended to be provocative and I'm not sure how strongly I'd stand behind it...)

Comment author: Kaj_Sotala 25 July 2017 11:17:19PM *  1 point [-]

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better?"

What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser's)

Comment author: MikeJohnson 27 July 2017 09:37:33PM *  2 points [-]

An additional note on this:

What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like.

I'd propose that if we split the problem of building a theory of consciousness up into subproblems, the task gets a lot easier. This does depend on an elegant problem decomposition. Here are the subproblems I propose: http://opentheory.net/wp-content/uploads/2016/11/Eight-Problems2-1.png

A quick-and-messy version of my framework:

  • (1) figure out what sort of ontology you think can map to both phenomenology (what we're trying to explain) and physics (the world we live in);

  • (2) figure out what subset of that ontology actively contributes to phenomenology;

  • (3) figure out how to determine the boundary of where minds stop, in terms of that-stuff-that-contributes-to-phenomenology;

  • (4) figure out how to turn the information inside that boundary into a mathematical object isomorphic to phenomenology (and what the state space of the object is);

  • (5) figure out how to interpret how properties of this mathematical object map to properties of phenomenology.

The QRI approach is:

  • (1) Choice of core ontology -> physics (since it, or some future refinement such as string theory, maps cleanly to physical reality);

  • (2) Choice of subset of core ontology that actively contributes to phenomenology -> Andres suspects quantum coherence; I'm more agnostic (I think Barrett 2014 makes some good points);

  • (3) Identification of boundary condition -> highly dependent on (2);

  • (4) Translation of information in partition into a structured mathematical object isomorphic to phenomenology -> I like how IIT does this;

  • (5) Interpretation of what the mathematical output means -> Probably, following IIT, the dimensional magnitude of the object could correspond with the degree of consciousness of the system. More interestingly, I think the symmetry of this object may plausibly have an identity relationship with the valence of the experience.

Anyway, certain steps in this may be wrong, but that's what the basic QRI "full stack" approach looks like. I think we should be able to iterate as we go, since we can test parts of (5) (like the Symmetry Hypothesis of Valence) without necessarily having the whole 'stack' figured out.

Comment author: Kaj_Sotala 25 July 2017 11:17:19PM *  1 point [-]

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better?"

What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser's)

Comment author: MikeJohnson 26 July 2017 06:33:54PM 2 points [-]

I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

Ah, the opposite actually- my expectation is that if 'consciousness' isn't real, 'suffering' can't be real either.

What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser's)

Thanks, this is helpful. :)

The following is tangential, but I thought you'd enjoy this Yuval Harari quote on abstraction and suffering:

In terms of power, it’s obvious that this ability [to create abstractions] made Homo sapiens the most powerful animal in the world, and now gives us control of the entire planet. From an ethical perspective, whether it was good or bad, that’s a far more complicated question. The key issue is that because our power depends on collective fictions, we are not good in distinguishing between fiction and reality. Humans find it very difficult to know what is real and what is just a fictional story in their own minds, and this causes a lot of disasters, wars and problems.

The best test to know whether an entity is real or fictional is the test of suffering. A nation cannot suffer, it cannot feel pain, it cannot feel fear, it has no consciousness. Even if it loses a war, the soldier suffers, the civilians suffer, but the nation cannot suffer. Similarly, a corporation cannot suffer, the pound sterling, when it loses its value, it doesn’t suffer. All these things, they’re fictions. If people bear in mind this distinction, it could improve the way we treat one another and the other animals. It’s not such a good idea to cause suffering to real entities in the service of fictional stories.

Comment author: Kaj_Sotala 25 July 2017 11:01:35AM *  3 points [-]

Wait, are you equating "functionalism" with "doesn't believe suffering can be meaningfully defined"? I thought your criticism was mostly about the latter; I don't think it's automatically implied by the former. If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.

(You could reasonably argue that it doesn't look likely that functionalism will provide such a theory, but then I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.)

Comment author: MikeJohnson 25 July 2017 05:36:44PM *  2 points [-]

Functionalism seems internally consistent (although perhaps too radically skeptical). However, in my view it also seems to lead to some flavor of moral nihilism; consciousness anti-realism makes suffering realism difficult/complicated.

If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering.

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.

Part of my reason for writing this critique is to argue that functionalism isn't a useful theory of mind, because it doesn't do what we need theories of mind to do (adjudicate disagreements in a principled way, especially in novel contexts).

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better?" I'd love to get your (and FRI's) input here.
