The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism ("consciousness is the sum-total of the functional properties of our brains") sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

 

I. What is the Foundational Research Institute?

 

The Foundational Research Institute (FRI) is a Berlin-based group that "conducts research on how to best reduce the suffering of sentient beings in the near and far future." Executive Director Max Daniel introduced them at EA Global Boston as “the only EA organization which at an organizational level has the mission of focusing on reducing s-risk.” S-risks are, according to Daniel, “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.”

 

Essentially, FRI wants to become the research arm of suffering-focused ethics, and help prevent artificial general intelligence (AGI) failure-modes which might produce suffering on a cosmic scale.

 

 

What I like about FRI:

While I have serious qualms about FRI’s research framework, I think the people behind FRI deserve a lot of credit- they seem to be serious people, working hard to build something good. In particular, I want to give them a shoutout for three things:

 

  • First, FRI takes suffering seriously, and I think that’s important. When times are good, we tend to forget how tongue-chewingly horrific suffering can be. S-risks seem particularly horrifying.

 

  • Second, FRI isn’t afraid of being weird. FRI has been working on s-risk research for a few years now, and if people are starting to come around to the idea that s-risks are worth thinking about, much of the credit goes to FRI.

 

  • Third, I have great personal respect for Brian Tomasik, one of FRI’s co-founders. I’ve found him highly thoughtful, generous in debates, and unfailingly principled. In particular, he’s always willing to bite the bullet and work ideas out to their logical end, even if it involves repugnant conclusions.

 

What is FRI’s research framework?

FRI believes in analytic functionalism, or what David Chalmers calls “Type-A materialism”. Essentially, what this means is there’s no ‘theoretical essence’ to consciousness; rather, consciousness is the sum-total of the functional properties of our brains. Since ‘functional properties’ are rather vague, this means consciousness itself is rather vague, in the same way words like "life," "justice," and "virtue" are messy and vague.

 

Brian suggests that this vagueness means there’s an inherently subjective, perhaps arbitrary element to how we define consciousness:

Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like "awareness", "happy", etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by "life". Just as there can be room for fuzziness about where exactly to draw the boundaries around "life", different analytic functionalists may have different opinions about where to define the boundaries of "consciousness" and other mental states. This is why consciousness is "up to us to define". There's no hard problem of consciousness for the same reason there's no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn't mean anything in addition to those processes.

 

Finally, Brian argues that the phenomenology of consciousness is identical with the phenomenology of computation:

I know that I'm conscious. I also know, from neuroscience combined with Occam's razor, that my consciousness consists only of material operations in my brain -- probably mostly patterns of neuronal firing that help process inputs, compute intermediate ideas, and produce behavioral outputs. Thus, I can see that consciousness is just the first-person view of certain kinds of computations -- as Eliezer Yudkowsky puts it, "How An Algorithm Feels From Inside". Consciousness is not something separate from or epiphenomenal to these computations. It is these computations, just from their own perspective of trying to think about themselves.

 

In other words, consciousness is what minds compute. Consciousness is the collection of input operations, intermediate processing, and output behaviors that an entity performs.

And if consciousness is all these things, so too is suffering. Which means suffering is computational, yet also inherently fuzzy, and at least a bit arbitrary; a leaky high-level reification impossible to speak about accurately, since there’s no formal, objective “ground truth”.

 

II. Why do I worry about FRI’s research framework?

 

In short, I think FRI has a worthy goal and good people, but its metaphysics actively prevent making progress toward that goal. The following describes why I think that, drawing heavily on Brian’s writings (of FRI’s researchers, Brian seems the most focused on metaphysics):

 

Note: FRI is not the only EA organization which holds functionalist views on consciousness; much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil. I focus on FRI because (1) Brian’s writings on consciousness & functionalism have been hugely influential in the community, and are clear enough *to* criticize; (2) the fact that FRI is particularly clear about what it cares about- suffering- allows a particularly clear critique about what problems it will run into with functionalism; (3) I believe FRI is at the forefront of an important cause area which has not crystallized yet, and I think it’s critically important to get these objections bouncing around this subcommunity.

 

Objection 1: Motte-and-bailey

Brian: “Consciousness is not a thing which exists ‘out there’ or even a separate property of matter; it's a definitional category into which we classify minds. ‘Is this digital mind really conscious?’ is analogous to ‘Is a rock that people use to eat on really a table?’ [However,] That consciousness is a cluster in thingspace rather than a concrete property of the world does not make reducing suffering less important.”

 

The FRI model seems to imply that suffering is ineffable enough such that we can't have an objective definition, yet sufficiently effable that we can coherently talk and care about it. This attempt to have it both ways seems contradictory, or at least in deep tension.

 

Indeed, I’d argue that the degree to which you can care about something is proportional to the degree to which you can define it objectively. E.g., If I say that “gnireffus” is literally the most terrible thing in the cosmos, that we should spread gnireffus-focused ethics, and that minimizing g-risks (far-future scenarios which involve large amounts of gnireffus) is a moral imperative, but also that what is and what isn’t gnireffus is rather subjective with no privileged definition, and it’s impossible to objectively tell if a physical system exhibits gnireffus, you might raise any number of objections. This is not an exact metaphor for FRI’s position, but I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

 

Max Daniel (personal communication) suggests that we’re comfortable with a degree of ineffability in other contexts; “Brian claims that the concept of suffering shares the allegedly problematic properties with the concept of a table. But it seems a stretch to say that the alleged tension is problematic when talking about tables. So why would it be problematic when talking about suffering?” However, if we take the anti-realist view that suffering is ‘merely’ a node in the network of language, we have to live with the consequences of this: that ‘suffering’ will lose meaning as we take it away from the network in which it’s embedded (Wittgenstein). But FRI wants to do exactly this, to speak about suffering in the context of AGIs, simulated brains, even video game characters.

 

We can be anti-realists about suffering (suffering-is-a-node-in-the-network-of-language), or we can argue that we can talk coherently about suffering in novel contexts (AGIs, mind crime, aliens, and so on), but it seems inherently troublesome to claim we can do both at the same time.

 

Objection 2: Intuition duels

Two people can agree on FRI’s position that there is no objective fact of the matter about what suffering is (no privileged definition), but this also means they have no way of coming to any consensus on the object-level question of whether something can suffer. This isn’t just an academic point: Brian has written extensively about how he believes non-human animals can and do suffer extensively, whereas Yudkowsky (who holds computationalist views, like Brian) has written about how he’s confident that animals are not conscious and cannot suffer, due to their lack of higher-order reasoning.

 

And if functionalism is having trouble adjudicating the easy cases of suffering--whether monkeys can suffer, or whether dogs can— it doesn’t have a sliver of a chance at dealing with the upcoming hard cases of suffering: whether a given AGI is suffering, or engaging in mind crime; whether a whole-brain emulation (WBE) or synthetic organism or emergent intelligence that doesn’t have the capacity to tell us how it feels (or that we don’t have the capacity to understand) is suffering; if any aliens that we meet in the future can suffer; whether changing the internal architecture of our qualia reports means we’re also changing our qualia; and so on.

 

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.

 

This is a source of friction in EA today, but it’s mitigated by the sense that

(1) The EA pie is growing, so it’s better to ignore disagreements than pick fights;

(2) Disagreements over the definition of suffering don’t really matter yet, since we haven’t gotten into the business of making morally-relevant synthetic beings (that we know of) that might be unable to vocalize their suffering.

If the perception of one or both of these conditions changes, the lack of a disagreement-adjudicating theory of suffering will matter quite a lot.

 

Objection 3: Convergence requires common truth

Mike: “[W]hat makes one definition of consciousness better than another? How should we evaluate them?”

Brian: “Consilience among our feelings of empathy, principles of non-discrimination, understandings of cognitive science, etc. It's similar to the question of what makes one definition of justice or virtue better than another.”

 

Brian is hoping that affective neuroscience will slowly converge to accurate views on suffering as more and better data about sentience and pain accumulates. But convergence to truth implies something (objective) driving the convergence- in this way, Brian’s framework still seems to require an objective truth of the matter, even though he disclaims most of the benefits of assuming this.

 

 

Objection 4: Assuming that consciousness is a reification produces more confusion, not less

Brian: “Consciousness is not a reified thing; it's not a physical property of the universe that just exists intrinsically. Rather, instances of consciousness are algorithms that are implemented in specific steps. … Consciousness involves specific things that brains do.”

 

Brian argues that we treat consciousness/phenomenology as more 'real' than it actually is. Traditionally, whenever we’ve discovered something is a leaky reification and shouldn’t be treated as ‘too real’, we’ve been able to break it down into more coherent constituent pieces we can treat as real. Life, for instance, wasn’t due to élan vital but to a bundle of self-organizing properties & dynamics which generally co-occur. But carrying out this “de-reification” process on consciousness-- enumerating its coherent constituent pieces-- has proven difficult, especially if we want to preserve some way to speak cogently about suffering.

 

Speaking for myself, the more I stared into the depths of functionalism, the less certain everything about moral value became-- and arguably, I see the same trajectory in Brian’s work and Luke Muehlhauser’s report. Their model uncertainty has seemingly become larger as they’ve looked into techniques to “de-reify” consciousness while preserving some flavor of moral value, not smaller. Brian and Luke seem to interpret this as evidence that moral value is intractably complicated, but this is also consistent with consciousness not being a reification, and instead being a real thing. Trying to “de-reify” something that’s not a reification will produce deep confusion, just as surely as treating a reification as ‘more real’ than it actually is will.

 

Edsger W. Dijkstra famously noted that “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” And so if our ways of talking about moral value fail to ‘carve reality at the joints’- then by all means let’s build better ones, rather than giving up on precision.

 

Objection 5: The Hard Problem of Consciousness is a red herring

Brian spends a lot of time discussing Chalmers’ “Hard Problem of Consciousness”, i.e. the question of why we’re subjectively conscious, and seems to base at least part of his conclusion on not finding this question compelling— he suggests “There's no hard problem of consciousness for the same reason there's no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn't mean anything in addition to those processes.” I.e., no ‘why’ is necessary; when we take consciousness and subtract out the details of the brain, we’re left with an empty set.

 

But I think the “Hard Problem” isn’t helpful as a contrastive centerpiece, since it’s unclear what the problem is, and whether it’s analytic or empirical, a statement about cognition or about physics. At the Qualia Research Institute (QRI), we don’t talk much about the Hard Problem; instead, we talk about Qualia Formalism, or the idea that any phenomenological state can be crisply and precisely represented by some mathematical object. I suspect this would be a better foil for Brian’s work than the Hard Problem.

 

Objection 6: Mapping to reality

Brian argues that consciousness should be defined at the functional/computational level: given a Turing machine, or neural network, the right ‘code’ will produce consciousness. But the problem is that this doesn’t lead to a theory which can ‘compile’ to physics. Consider the following:

 

Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system where you just simulated someone getting tortured, and other interpretations that don't imply that. Did you torture anyone? If you're a computationalist, no clear answer exists- you both did, and did not, torture someone. This sounds like a ridiculous edge-case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing.
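To make this concrete, here is a minimal Python sketch (with toy states and made-up trace labels, not anything from Brian's writings) of the Putnam/Searle-style construction the popcorn example gestures at: for any physical trajectory of distinct states, we can build an interpretation map under which it 'implements' whatever computation we like, and nothing in the physics privileges one map over another.

```python
import random

# An arbitrary "physical" trajectory: the shaken bag of popcorn, recorded as
# a sequence of distinct microstates (toy stand-ins for real physical states).
physical_trajectory = random.sample(range(10_000), k=8)

def make_interpretation(physical_states, computational_trace):
    """Map each physical state to a computational state, so that the physical
    trajectory 'implements' the given computational trace under this map."""
    return dict(zip(physical_states, computational_trace))

# Interpretation 1: the popcorn "computes" a simple counter.
trace_counter = [("count", i) for i in range(8)]
# Interpretation 2: the same popcorn "computes" steps of a torture simulation.
trace_torture = [("pain_level", 10 * i) for i in range(8)]

interp_1 = make_interpretation(physical_trajectory, trace_counter)
interp_2 = make_interpretation(physical_trajectory, trace_torture)

# Both maps are internally consistent readings of the *same* physical process;
# nothing in the trajectory itself picks out which computation "really" ran.
print([interp_1[s] for s in physical_trajectory])
print([interp_2[s] for s in physical_trajectory])
```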

 

I don’t think this is an outlandish view of functionalism; Brian suggests much the same in How to Interpret a Physical System as a Mind: “Physicalist views that directly map from physics to moral value are relatively simple to understand. Functionalism is more complex, because it maps from physics to computations to moral value. Moreover, while physics is real and objective, computations are fictional and ‘observer-relative’ (to use John Searle's terminology). There's no objective meaning to ‘the computation that this physical system is implementing’ (unless you're referring to the specific equations of physics that the system is playing out).”

 

Gordon McCabe (McCabe 2004) provides a more formal argument to this effect— that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible— in the context of simulations. First, McCabe notes that:

[T]here is a one-[to-]many correspondence between the logical states [of a computer] and the exact electronic states of computer memory. Although there are bijective mappings between numbers and the logical states of computer memory, there are no bijective mappings between numbers and the exact electronic states of memory.

This lack of an exact bijective mapping means that subjective interpretation necessarily creeps in, and so a computational simulation of a physical system can’t be ‘about’ that system in any rigorous way:

In a computer simulation, the values of the physical quantities possessed by the simulated system are represented by the combined states of multiple bits in computer memory. However, the combined states of multiple bits in computer memory only represent numbers because they are deemed to do so under a numeric interpretation. There are many different interpretations of the combined states of multiple bits in computer memory. If the numbers represented by a digital computer are interpretation-dependent, they cannot be objective physical properties. Hence, there can be no objective relationship between the changing pattern of multiple bit-states in computer memory, and the changing pattern of quantity-values of a simulated physical system.

McCabe concludes that, metaphysically speaking,

A digital computer simulation of a physical system cannot exist as, (does not possess the properties and relationships of), anything else other than a physical process occurring upon the components of a computer. In the contemporary case of an electronic digital computer, a simulation cannot exist as anything else other than an electronic physical process occurring upon the components and circuitry of a computer.
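McCabe's point about interpretation-dependence shows up even at the level of a single machine word: the very same 32-bit pattern in memory denotes different numbers under different, equally standard, numeric interpretations. A minimal illustration (the byte pattern is arbitrary):

```python
import struct

raw = bytes([0xBF, 0x80, 0x00, 0x00])  # one fixed 32-bit pattern in memory

print(struct.unpack(">I", raw)[0])     # read as an unsigned integer: 3212836864
print(struct.unpack(">i", raw)[0])     # read as a signed integer:   -1082130432
print(struct.unpack(">f", raw)[0])     # read as an IEEE-754 float:  -1.0
```

Which number the bits 'are' is a fact about the reading convention, not about the voltages; this is the sense in which McCabe says the represented quantities are interpretation-dependent.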

 

Where does this leave ethics? In Flavors of Computation Are Flavors of Consciousness, Brian notes that “In some sense all I've proposed here is to think of different flavors of computation as being various flavors of consciousness. But this still leaves the question: Which flavors of computation matter most? Clearly whatever computations happen when a person is in pain are vastly more important than what's happening in a brain on a lazy afternoon. How can we capture that difference?”

 

But if Brian grants the former point- that "There's no objective meaning to ‘the computation that this physical system is implementing’”- then this latter task of figuring out “which flavors of computation matter most” is provably impossible. There will always be multiple computational (and thus ethical) interpretations of a physical system, with no way to figure out what’s “really” happening. No way to figure out if something is suffering or not. No consilience; not now, not ever.

 

Note: despite apparently granting the point above, Brian also remarks that:

I should add a note on terminology: All computations occur within physics, so any computation is a physical process. Conversely, any physical process proceeds from input conditions to output conditions in a regular manner and so is a computation. Hence, the set of computations equals the set of physical processes, and where I say "computations” in this piece, one could just as well substitute "physical processes" instead.

This seems to be (1) incorrect, for the reasons I give above, or (2) taking substantial poetic license with these terms, or (3) referring to hypercomputation (which might be able to salvage the metaphor, but would invalidate many of FRI’s conclusions dealing with the computability of suffering on conventional hardware).

 

This objection may seem esoteric or pedantic, but I think it’s important, and that it ripples through FRI’s theoretical framework with disastrous effects.

 

Objection 7: FRI doesn't fully bite the bullet on computationalism

Brian suggests that “flavors of computation are flavors of consciousness” and that some computations ‘code’ for suffering. But if we do in fact bite the bullet on this metaphor and place suffering within the realm of computational theory, we need to think in “near mode” and accept all the paradoxes that brings. Scott Aaronson, a noted expert on quantum computing, raises the following objections to functionalism:

I’m guessing that many people in this room side with Dennett, and (not coincidentally, I’d say) also with Everett. I certainly have sympathies in that direction too. In fact, I spent seven or eight years of my life as a Dennett/Everett hardcore believer. But, while I don’t want to talk anyone out of the Dennett/Everett view, I’d like to take you on a tour of what I see as some of the extremely interesting questions that that view leaves unanswered. I’m not talking about “deep questions of meaning,” but about something much more straightforward: what exactly does a computational process have to do to qualify as “conscious”?

There’s this old chestnut, what if each person on earth simulated one neuron of your brain, by passing pieces of paper around. It took them several years just to simulate a single second of your thought processes. Would that bring your subjectivity into being? Would you accept it as a replacement for your current body? If so, then what if your brain were simulated, not neuron-by-neuron, but by a gigantic lookup table? That is, what if there were a huge database, much larger than the observable universe (but let’s not worry about that), that hardwired what your brain’s response was to every sequence of stimuli that your sense-organs could possibly receive. Would that bring about your consciousness? Let’s keep pushing: if it would, would it make a difference if anyone actually consulted the lookup table? Why can’t it bring about your consciousness just by sitting there doing nothing?

 

To these standard thought experiments, we can add more. Let’s suppose that, purely for error-correction purposes, the computer that’s simulating your brain runs the code three times, and takes the majority vote of the outcomes. Would that bring three “copies” of your consciousness into being? Does it make a difference if the three copies are widely separated in space or time—say, on different planets, or in different centuries? Is it possible that the massive redundancy taking place in your brain right now is bringing multiple copies of you into being?

...

Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

 

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?

 

When we consider the possibility of a conscious quantum computer, in some sense we inherit all the previous puzzles about conscious classical computers, but then also add a few new ones.  So, let’s say I run a quantum subroutine that simulates your brain, by applying some unitary transformation U.  But then, of course, I want to “uncompute” to get rid of garbage (and thereby enable interference between different branches), so I apply U⁻¹.  Question: when I apply U⁻¹, does your simulated brain experience the same thoughts and feelings a second time?  Is the second experience “the same as” the first, or does it differ somehow, by virtue of being reversed in time? Or, since U⁻¹U is just a convoluted implementation of the identity function, are there no experiences at all here?

 

Here’s a better one: many of you have heard of the Vaidman bomb.  This is a famous thought experiment in quantum mechanics where there’s a package, and we’d like to “query” it to find out whether it contains a bomb—but if we query it and there is a bomb, it will explode, killing everyone in the room.  What’s the solution?  Well, suppose we could go into a superposition of querying the bomb and not querying it, with only ε amplitude on querying the bomb, and √(1-ε²) amplitude on not querying it.  And suppose we repeat this over and over—each time, moving ε amplitude onto the “query the bomb” state if there’s no bomb there, but moving ε² probability onto the “query the bomb” state if there is a bomb (since the explosion decoheres the superposition).  Then after 1/ε repetitions, we’ll have order 1 probability of being in the “query the bomb” state if there’s no bomb.  By contrast, if there is a bomb, then the total probability we’ve ever entered that state is (1/ε)×ε² = ε.  So, either way, we learn whether there’s a bomb, and the probability that we set the bomb off can be made arbitrarily small.  (Incidentally, this is extremely closely related to how Grover’s algorithm works.)

 

OK, now how about the Vaidman brain?  We’ve got a quantum subroutine simulating your brain, and we want to ask it a yes-or-no question.  We do so by querying that subroutine with ε amplitude 1/ε times, in such a way that if your answer is “yes,” then we’ve only ever activated the subroutine with total probability ε.  Yet you still manage to communicate your “yes” answer to the outside world.  So, should we say that you were conscious only in the ε fraction of the wavefunction where the simulation happened, or that the entire system was conscious?  (The answer could matter a lot for anthropic purposes.)
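As an aside, the arithmetic in the Vaidman-bomb passage is easy to check numerically; here is a small sketch (assuming, as in the quote, a rotation of ε per step and full decoherence whenever a live bomb is queried):

```python
import numpy as np

eps = 0.01
N = int(round(1 / eps))            # "1/ε repetitions"

# No bomb: the small rotations accumulate coherently, so after N steps the
# amplitude on the "query the bomb" state is sin(N * eps).
print(np.sin(N * eps) ** 2)        # ~0.71: order-1 probability, as stated

# Bomb present: each step the bomb measures the system, so the per-step query
# probability is sin(eps)^2 ≈ ε², and the total explosion probability over
# N steps is 1 - cos(eps)^(2N) ≈ N * ε² = ε.
print(1 - np.cos(eps) ** (2 * N))  # ~0.01, i.e. ≈ ε
```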

 

To sum up: Brian’s notion that consciousness is the same as computation raises more issues than it solves; in particular, if suffering is computable it may also be ‘un-computable’ (i.e., reversible, as in Aaronson’s U⁻¹ example), which would suggest s-risks aren’t as serious as FRI treats them.

 

Objection 8: Dangerous combination

Three themes which seem to permeate FRI’s research are:

(1) Suffering is the thing that is bad.

(2) It’s critically important to eliminate badness from the universe.

(3) Suffering is impossible to define objectively, and so we each must define what suffering means for ourselves.

 

Taken individually, each of these seems reasonable. Pick two, and you’re still okay. Pick all three, though, and you get A Fully General Justification For Anything, based on what is ultimately a subjective/aesthetic call.

 

Much can be said in FRI’s defense here, and it’s unfair to single them out as risky: in my experience they’ve always brought a very thoughtful, measured, cooperative approach to the table. I would just note that ideas are powerful, and I think theme (3) is especially pernicious if incorrect.

 

III. QRI’s alternative

 

Analytic functionalism is essentially a negative hypothesis about consciousness: it's the argument that there's no order to be found, no rigor to be had. It obscures this with talk of "function", a red herring which it not only fails to define, but admits is undefinable. It makes no positive assertion. Functionalism is skepticism- nothing more, nothing less.

 

But is it right?

 

Ultimately, I think these a priori arguments are much like people in the Middle Ages arguing whether one could ever formalize a Proper System of Alchemy. Such arguments may in many cases hold water, but it's often difficult to tell good arguments apart from ones where we're just cleverly fooling ourselves. In retrospect, the best way to *prove* systematized alchemy was possible was to just go out and *do* it, and invent Chemistry. That's how I see what we're doing at QRI with Qualia Formalism: we're assuming it's possible, and we're busy building the object-level stuff.

 

What we’ve built with QRI’s framework

Note: this is a brief, surface-level tour of our research; it will probably be confusing for readers who haven't dug into our stuff before. Consider this a down-payment on a more substantial introduction.

 

My most notable work is Principia Qualia, in which I lay out my meta-framework for consciousness (a flavor of dual-aspect monism, with a focus on Qualia Formalism) and put forth the Symmetry Theory of Valence (STV). Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry. This implies a bunch of testable predictions and reinterpretations of things like what ‘pleasure centers’ do (Section XI; Section XII). Building on this, I offer the Symmetry Theory of Homeostatic Regulation, which suggests understanding the structure of qualia will translate into knowledge about the structure of human intelligence, and I briefly touch on the idea of Neuroacoustics.

 

Likewise, my colleague Andrés Gomez Emilsson has written about the likely mathematics of phenomenology, including The Hyperbolic Geometry of DMT Experiences, Tyranny of the Intentional Object, and Algorithmic Reduction of Psychedelic States. If I had to suggest one thing to read in all of these links, though, it would be the transcript of his recent talk on Quantifying Bliss, which lays out the world’s first method to objectively measure valence from first principles (via fMRI) using Selen Atasoy’s Connectome Harmonics framework, the Symmetry Theory of Valence, and Andrés’s CDNS model of experience.

 

These are risky predictions and we don’t yet know if they’re right, but we’re confident that if there is some elegant structure intrinsic to consciousness, as there is in many other parts of the natural world, these are the right kind of risks to take.

 

I mention all this because I think analytic functionalism- which is to say radical skepticism/eliminativism, the metaphysics of last resort- only looks as good as it does because nobody’s been building out any alternatives.

 

IV. Closing thoughts

 

FRI is pursuing a certain research agenda, and QRI is pursuing another, and there’s lots of value in independent explorations of the nature of suffering. I’m glad FRI exists, everybody I’ve interacted with at FRI has been great, I’m happy they’re focusing on s-risks, and I look forward to seeing what they produce in the future.

 

On the other hand, I worry that nobody’s pushing back on FRI’s metaphysics, which seem to unavoidably lead to the intractable problems I describe above. FRI seems to believe these problems are part of the territory, unavoidable messes that we just have to make philosophical peace with. But I think that functionalism is a bad map, that the metaphysical messes it leads to are much worse than most people realize (fatal to FRI’s mission), and there are other options that avoid these problems (which, to be fair, is not to say they have no problems).

 

Ultimately, FRI doesn’t owe me a defense of their position. But if they’re open to suggestions on what it would take to convince a skeptic like me that their brand of functionalism is viable, or at least rescuable, I’d offer the following:

 

Re: Objection 1 (motte-and-bailey), I suggest FRI should be as clear and complete as possible in their basic definition of suffering. In which particular ways is it ineffable/fuzzy, and in which particular ways is it precise? What can we definitely say about suffering, and what can we definitely never determine? Preregistering ontological commitments and methodological possibilities would help guard against FRI’s definition of suffering changing based on context.

 

Re: Objection 2 (intuition duels), FRI may want to internally “war game” various future scenarios involving AGI, WBE, etc, with one side arguing that a given synthetic (or even extraterrestrial) organism is suffering, and the other side arguing that it isn’t. I’d expect this would help diagnose what sorts of disagreements future theories of suffering will need to adjudicate, and perhaps illuminate implicit ethical intuitions. Sharing the results of these simulated disagreements would also be helpful in making FRI’s reasoning less opaque to outsiders, although making everything transparent could lead to certain strategic disadvantages.

 

Re: Objection 3 (convergence requires common truth), I’d like FRI to explore exactly what might drive consilience/convergence in theories of suffering, and what precisely makes one theory of suffering better than another, and ideally to evaluate a range of example theories of suffering under these criteria.

 

Re: Objection 4 (assuming that consciousness is a reification produces more confusion, not less), I would love to see a historical treatment of reification: lists of reifications which were later dissolved (e.g., élan vital), vs scattered phenomena that were later unified (e.g., electromagnetism). What patterns do the former have, vs the latter, and why might consciousness fit one of these buckets better than the other?

 

Re: Objection 5 (the Hard Problem of Consciousness is a red herring), I’d like to see a more detailed treatment of the kinds of problems people have interpreted the Hard Problem as, and also more analysis of the prospects of Qualia Formalism (which I think is the maximally-empirical, maximally-charitable interpretation of the Hard Problem). It would be helpful for us, in particular, if FRI preregistered their expectations about QRI’s predictions, and their view of the relative evidence strength of each of our predictions.

 

Re: Objection 6 (mapping to reality), this is perhaps the heart of most of our disagreement. From Brian’s quotes, he seems split on this issue; I’d like clarification about whether he believes we can ever precisely/objectively map specific computations to specific physical systems, and vice-versa. And if so— how? If not, this seems to propagate through FRI’s ethical framework in a disastrous way, since anyone can argue that any physical system does, or does not, ‘code’ for massive suffering, and there’s no principled way to derive any ‘ground truth’, or even to pick between interpretations (e.g. my popcorn example). If this isn’t the case— why not?

 

Brian has suggested that “certain high-level interpretations of physical systems are more ‘natural’ and useful than others” (personal communication); I agree, and would encourage FRI to explore systematizing this.

 

It would be non-trivial to port FRI’s theories and computational intuitions to the framework of “hypercomputation”-- i.e., the understanding that there’s a formal hierarchy of computational systems, and that Turing machines are only one level of many-- but it may have benefits too. Namely, it might be the only way they could avoid Objection 6 (which I think is a fatal objection) while still allowing them to speak about computation & consciousness in the same breath. I think FRI should look at this and see if it makes sense to them.

 

Re: Objection 7 (FRI doesn't fully bite the bullet on computationalism), I’d like to see responses to Aaronson’s aforementioned thought experiments.

 

Re: Objection 8 (dangerous combination), I’d like to see a clarification about why my interpretation is unreasonable (as it very well may be!).

 

---

In conclusion- I think FRI has a critically important goal- reduction of suffering & s-risk. However, I also think FRI has painted itself into a corner by explicitly disallowing a clear, disagreement-mediating definition for what these things are. I look forward to further work in this field.

---

 

Mike Johnson

Qualia Research Institute

 

 

 

Acknowledgements: thanks to Andrés Gomez Emilsson, Brian Tomasik, and Max Daniel for reviewing earlier drafts of this.

 


 

Sources:

 

My sources for FRI’s views on consciousness:

 

Flavors of Computation are Flavors of Consciousness:

https://foundational-research.org/flavors-of-computation-are-flavors-of-consciousness/

 

Is There a Hard Problem of Consciousness?

http://reducing-suffering.org/hard-problem-consciousness/

 

Consciousness Is a Process, Not a Moment

http://reducing-suffering.org/consciousness-is-a-process-not-a-moment/

 

How to Interpret a Physical System as a Mind

http://reducing-suffering.org/interpret-physical-system-mind/

 

Dissolving Confusion about Consciousness

http://reducing-suffering.org/dissolving-confusion-about-consciousness/

 

Debate between Brian & Mike on consciousness:

https://www.facebook.com/groups/effective.altruists/permalink/1333798200009867/?comment_id=1333823816673972&comment_tracking=%7B%22tn%22%3A%22R9%22%7D



Max Daniel’s EA Global Boston 2017 talk on s-risks:

https://www.youtube.com/watch?v=jiZxEJcFExc

 

Multipolar debate between Eliezer Yudkowsky and various rationalists about animal suffering:

https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/

 

The Internet Encyclopedia of Philosophy on functionalism:

http://www.iep.utm.edu/functism/

 

Gordon McCabe on why computation doesn’t map to physics:

http://philsci-archive.pitt.edu/1891/1/UniverseCreationComputer.pdf

 

Toby Ord on hypercomputation, and how it differs from Turing’s work:

https://arxiv.org/abs/math/0209332

 

Luke Muehlhauser’s OpenPhil-funded report on consciousness and moral patienthood:

http://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood

 

Scott Aaronson’s thought experiments on computationalism:

http://www.scottaaronson.com/blog/?p=1951

 

Selen Atasoy on Connectome Harmonics, a new way to understand brain activity:

https://www.nature.com/articles/ncomms10340



My work on formalizing phenomenology:

 

My meta-framework for consciousness, including the Symmetry Theory of Valence:

http://opentheory.net/PrincipiaQualia.pdf

 

My hypothesis of homeostatic regulation, which touches on why we seek out pleasure:

http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/

 

My exploration & parametrization of the ‘neuroacoustics’ metaphor suggested by Atasoy’s work:

http://opentheory.net/2017/06/taking-brain-waves-seriously-neuroacoustics/

 

My colleague Andrés’s work on formalizing phenomenology:

 

A model of DMT-trip-as-hyperbolic-experience:

https://qualiacomputing.com/2017/05/28/eli5-the-hyperbolic-geometry-of-dmt-experiences/

 

June 2017 talk at Consciousness Hacking, describing a theory and experiment to predict people’s valence from fMRI data:

https://qualiacomputing.com/2017/06/18/quantifying-bliss-talk-summary/

 

A parametrization of various psychedelic states as operators in qualia space:

https://qualiacomputing.com/2016/06/20/algorithmic-reduction-of-psychedelic-states/

 

A brief post on valence and the fundamental attribution error:

https://qualiacomputing.com/2016/11/19/the-tyranny-of-the-intentional-object/

 

A summary of some of Selen Atasoy’s current work on Connectome Harmonics:

https://qualiacomputing.com/2017/06/18/connectome-specific-harmonic-waves-on-lsd/



Comments

This was great and I really enjoyed reading it. It's a pleasure to see one EA disagreeing with another with such eloquence, kindness and depth.

What I would say is that, even as someone doing a PhD in Philosophy, I found a bunch of this hard to follow (I don't really do any work on consciousness), particularly objection 7 and when you introduced QRI's own approach. I'll entirely understand if you think making this more accessible is more trouble than it's worth, I just thought I'd let you know.

MikeJohnson:
Thanks Michael! Re: Objection 7, I think Aaronson's point is that, if we actually take seriously the idea that a computer / Turing machine could generate consciousness simply by running the right computer code, we should be prepared for a lot of very, very weird implications. Re: QRI's approach, yeah I was trying to balance bringing up my work, vs not derailing the focus of the critique. I probably should have spent more words on that (I may go back and edit it).

This looks sensible to me. I'd just quickly note that I'm not sure if it's quite accurate to describe this as "FRI's metaphysics", exactly - I work for FRI, but haven't been sold on the metaphysics that you're criticizing. In particular, I find myself skeptical of the premise "suffering is impossible to define objectively", which you largely focus on. (Though part of this may be simply because I haven't yet properly read/considered Brian's argument for it, so it's possible that I would change my mind about that.)

But in any case, I've currently got three papers in various stages of review, submission or preparation (that other FRI people have helped me with), and none of those papers presuppose this specific brand of metaphysics. There's a bunch of other work being done, too, which I know of and which I don't think presupposes it. So it doesn't feel quite accurate to me to suggest that the metaphysics would be holding back our progress, though of course there can be some research being carried out that's explicitly committed to this particular metaphysics.

(opinions in this comment purely mine, not an official FRI statement etc.)

Wei Dai:
What would you say are the philosophical or other premises that FRI does accept (or tends to assume in its work), which distinguishes it from other people/organizations working in a similar space such as MIRI, OpenAI, and QRI? Is it just something like "preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)"? It seems to me that a belief in anti-realism about consciousness explains a lot of Brian's (near) certainty about his values and hence his focus on suffering. People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don't focus on suffering as much. Does this seem right, and if so, can you explain what premises led you to work for FRI?

Is it just something like "preventing suffering is the most important thing to work on (and the disjunction of assumptions that can lead to this conclusion)"?

This sounds right. Before 2016, I would have said that rough value alignment (normatively "suffering-focused") is very-close-to necessary, but we updated away from this condition and for quite some time now hold the view that it is not essential if people are otherwise a good fit. We still have an expectation that researchers think about research-relevant background assumptions in ways that are not completely different from ours on every issue, but single disagreements are practically never a dealbreaker. We've had qualia realists both on the team (part-time) and as interns, and some team members now don't hold strong views on the issue one way or the other. Brian especially is a really strong advocate of epistemic diversity and goes much further with it than I feel most people would go.

People who are not so sure about consciousness anti-realism tend to be less certain about their values as a result, and hence don't focus on suffering as much.

Hm, this does not fit my observations. We had and still ...

Wei Dai:
Thanks for pointing this out. I've noticed this myself in some of FRI's writings, and I'd say this, along with the high amount of certainty on various object-level philosophical questions that presumably cause the disvaluing of reflection about them, are what most "turns me off" about FRI. I worry a lot about potential failures of goal preservation (i.e., value drift) too, but because I'm highly uncertain about just about every meta-ethical and normative question, I see no choice but to try to design some sort of reflection procedure that I can trust enough to hand off control to. In other words, I have nothing I'd want to "lock in" at this point and since I'm by default constantly handing off control to my future self with few safeguards against value drift, doing something better than that default is one of my highest priorities. If other people are also uncertain and place high value on (safe/correct) reflection as a result, that helps with my goal (because we can then pool resources together to work out what safe/correct reflection is), so it's regrettable to see FRI people sometimes argue for more certainty than I think is warranted and especially to see them argue against reflection.
Lukas_Gloor:
That makes sense. I do think as a general policy, valuing reflection is more positive-sum, and if one does not feel like much is "locked in" yet then it becomes very natural too. I'm not saying that people who value reflection more than I do are doing it wrong; I think I would even argue for reflection being very important and recommend it to new people, if I felt more comfortable that they'd end up pursuing things that are beneficial from all/most plausible perspectives. Though what I find regrettable is that the "default" interventions that are said to be good from as many perspectives as possible oftentimes do not seem great from a suffering-focused perspective.
MikeJohnson:
I really enjoyed your linked piece on meta-ethics. Short but insightful. I believe I'd fall into the second bucket. If you're looking for what (2) might look like in practice, and how we might try to relate it to the human brain's architecture/drives, you might enjoy this: http://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/ I'd also agree that designing trustworthy reflection procedures is important. My intuitions here are: (1) value-drift is a big potential problem with FRI's work (even if they "lock in" caring about suffering, if their definition of 'suffering' drifts, their tacit values do too); (2) value-drift will be a problem for any system of ethics that doesn't cleanly 'compile to physics'. (This is a big claim, centering around my Objection 6, above.) Perhaps we could generalize this latter point as "if information is physical, and value is informational, then value is physical too."

Rather than put words in the mouths of other people at FRI, I'd rather let them personally answer which philosophical premises they accept and what motivates them, if they wish.

For me personally, I've just had, for a long time, the intuition that preventing extreme suffering is the most important priority. To the best that I can tell, much of this intuition can be traced to having suffered from depression and general feelings of crushing hopelessness for large parts of my life, and wanting to save anyone else from experiencing a similar (or worse!) magnitude of suffering. I seem to recall that I was less suffering-focused before I started getting depressed for the first time.

Since then, that intuition has been reinforced by reading up on other suffering-focused works; something like tranquilism feels like a sensible theory to me, especially given some of my own experiences with meditation which are generally compatible with the kind of theory of mind implied by tranquilism. That's something that has come later, though.

To clarify, none of this means that I would only value suffering prevention: I'd much rather see a universe-wide flourishing civilization full of minds in various sta...

Brian_Tomasik:
I also don't want to speak for FRI as a whole, but yeah, I think it's safe to say that a main thing that makes FRI unique is its suffering focus. My high confidence in suffering-focused values results from moral anti-realism generally (or, if moral realism is true, then my unconcern for the moral truth). I don't think consciousness anti-realism plays a big role because I would still be suffering-focused even if qualia were "real". My suffering focus is ultimately driven by the visceral feeling that extreme suffering is so severe that nothing else compares in importance. Theoretical arguments take a back seat to this conviction.
kokotajlod:
Interesting. I'm a moral anti-realist who also focuses on suffering, but not to the extent that you do (e.g. not worrying that much about suffering at the level of fundamental physics.) I would have predicted that theoretical arguments were what convinced you to care about fundamental physics suffering, not any sort of visceral feeling.
Brian_Tomasik:
Sorry, I meant that emotion is what makes me care about (extreme) suffering in the first place. With that foundation, one should use arguments to clarify what reducing suffering looks like in practice and what "suffering" even means. Also, there's some blending of rational arguments and emotion. I now care a bit about suffering in fundamental physics on an emotional level because my conception of suffering has been changed by learning more about the world and philosophy of mind. (That said, I still care a lot about animals.)
MikeJohnson:
Hi Kaj- that makes a lot of sense. I would say FRI currently looks very eliminativism-heavy from the outside (see e.g., https://foundational-research.org/research/#consciousness), but it sounds like the inside view is indeed different. As I noted on FB, I'll look forward to seeing where FRI goes with its research.
[anonymous]:

What's the problem if a group of people explores the implications of a well-respected position in philosophy and is (I think) fully aware of the implications? Exploring a different position should be a task for people who actually place more than a tiny bit of credence in it, it seems to me - especially when it comes to a new and speculative hypothesis like Principia Qualia.

This post mostly reads like a contribution to a long-standing philosophical debate to me and would be more appropriately presented as arguing against a philosophical assumption rather than against a research group working under that assumption.

In the cog-sci / neuroscience institute where I currently work, productive work is being done under assumptions similar to (though less explicit than) Brian's / FRI's, including relevant work on modelling valence in animals in the reinforcement learning framework.

I know you disagree with these assumptions but a post like this can make it seem to outsiders as if you're criticizing a somewhat crazy position and by extension cast a bad light on FRI.

MikeJohnson:
Hi Sören- your general point (am I critiquing FRI, or functionalism?) is reasonable, and I do note in the piece why I focus on FRI. I should say too that the purpose of bringing up QRI's work is not to suggest FRI should be focusing on this, but instead that the effort of developing alternatives helps calibrate the field.
[anonymous]:
Makes sense :)
Brian_Tomasik:
Cool. :) I found that article enlightening and discussed it on pp. 20-21 of my RL paper.
[anonymous]:
One of the authors (Peter Dayan) is my supervisor, let me know if you'd like me to ask him anything, he does a lot of RL-style modelling :)
Brian_Tomasik:
Great! It's not super important, but I'd be curious to know his own thoughts on the question of why pleasure and pain feel different and aren't just a single dimension of motivation, given that you can shift all rewards up or down uniformly while keeping behavior unchanged. Here is one possible explanation, which mentions Daw et al. (2002). I'd also be curious to know at what level of complexity / ability of artificial RL systems he would start to grant them ethical consideration.
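As an aside for readers unfamiliar with the reward-shifting point Brian mentions, here is a toy value-iteration sketch (a hypothetical random MDP with made-up numbers, not anything from the linked papers): in an infinite-horizon discounted MDP, adding a constant c to every reward shifts every state's value by c/(1-gamma) and leaves the greedy policy, and hence behavior, unchanged.

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

# P[a, s, :] = transition probabilities for action a in state s;
# R[a, s]    = immediate reward for action a in state s.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.normal(size=(n_actions, n_states))

def solve(R):
    """Plain value iteration; returns state values and the greedy policy."""
    V = np.zeros(n_states)
    for _ in range(2000):
        Q = R + gamma * (P @ V)   # Q[a, s]
        V = Q.max(axis=0)
    return V, Q.argmax(axis=0)

V1, pi1 = solve(R)
V2, pi2 = solve(R + 5.0)          # shift every reward up by c = 5

print(pi1, pi2)                   # identical greedy policies
print(V2 - V1)                    # every value shifted by ~ c / (1 - gamma) = 50
```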
[anonymous]:
I've had a look into Dayan's suggested papers - they imply an interesting theory. I'll put my thoughts here so the discussion can be public. The theory contradicts the one you link above, where the separation between pain and pleasure is a contingency of how our brain works. You've written about another (very intuitive) theory, where the zero-point is where you'd be indifferent between prolonging and ending your life: "This explanation may sound plausible due to its analogy to familiar concepts, but it seems to place undue weight on whether an agent's lifetime is fixed or variable. Yet I would still feel pain and pleasure as being distinct even if I knew exactly when I would die, and a simple RL agent has no concept of death to begin with." Dayan's research suggests that the zero-point will also come up in many circumstances relating to opportunity costs, which would deal with that objection.

To simplify, let's say the agent expects a fixed average rate of return rho for the foreseeable future. It is faced with a problem where it can either act fast (high energy expenditure) or act slowly (high opportunity costs, since it won't get the average return for a while). If rho is negative or zero, there is no need to act quickly at all, because there are no opportunity costs. But the higher the opportunity costs get, the faster the agent will want to get back to earning its average reward, so it will act quickly despite the immediate cost. The speed with which the agent acts is called vigour in Dayan's research. The agent's vigour mathematically implies an average rate of return if the agent is rational. There can be other reasons for low vigour, such as a task that requires patience - they have some experiments here in figure 1. In their experiment the optimal vigour (one over tau*) is proportional to the square root of the average return. A recent paper has confirmed the predictions of this model in humans.

So when is an agent happy according to this model? The model would i
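For what it's worth, the "optimal vigour is proportional to the square root of the average return" relation falls out of a very simple cost model (my paraphrase of the setup, matching the lever example later in this thread rather than the papers' exact formulation): if an action of duration T costs 1/T in energy plus rho*T in forgone average reward, the cost-minimizing duration is T* = 1/sqrt(rho), so the optimal vigour 1/T* equals sqrt(rho). A quick numeric check:

```python
import numpy as np

# Hypothetical cost model: a duration-T action costs 1/T (energy) + rho*T
# (opportunity cost). Minimizing over T gives T* = 1/sqrt(rho).
for rho in [0.25, 1.0, 4.0]:
    T = np.linspace(0.01, 20, 200_000)
    T_star = T[np.argmin(1.0 / T + rho * T)]
    print(rho, 1.0 / T_star, np.sqrt(rho))   # optimal vigour 1/T* ≈ sqrt(rho)
```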
Brian_Tomasik:
Thanks!! Interesting. I haven't read the linked papers, so let me know if I don't understand properly (as I probably don't). I've always thought of simple RL agents as getting a reward at fixed time intervals no matter what they do, in which case they can't act faster or slower. For example, if they skip pressing a lever, they just get a reward of 0 for that time step. Likewise, in an actual animal, the animal's reward neurons don't fire during the time when the lever isn't being pressed, which is equivalent to a reward of 0. Of course, animals would prefer to press the lever more often to get a positive reward rather than a reward of 0, but this would be true whether the lever gave positive reward or merely relief from punishment. For example, maybe the time between lever presses is painful, and the pressed lever is merely less painful. This could be the experience of, e.g., a person after a breakup consuming ice cream scoops at a higher rate than normal to escape her pain: even with the increased rate of ice cream intake, she may still have negative welfare, just less negative. It seems like vigor just says that what you're doing is better than not doing it? For really simple RL agents like those living in Grid World, there is no external clock. Time is sort of defined by when the agent takes its next step. So it's again not clear if a "rate of actions" explanation can help here (but if it helps for more realistic RL agents, that's cool!). This answer says that for a Markov Decision Process, "each action taken is done in a time step." So it seems like a time step is defined as the interval between one action and the next?
[anonymous]:
Thanks for the reply. I think I can clarify the issue about discrete time intervals. I'd be curious on your thoughts on the last sentence of my comment above if you have any.

Discrete time

Yes. But in a semi (or continuous-time) Markov Decision Process (SMDP) (https://en.wikipedia.org/wiki/Markov_decision_process#Continuous-time_Markov_Decision_Process) this is not the case. SMDPs allow temporally extended actions and are commonly used in RL research. Dayan's papers use a continuous SMDP. You can still have RL agents in this formalism, and it tracks our situation more closely. But I don't think the formalism matters for our discussion, because you can arbitrarily approximate any formalism with a standard MDP - I'll explain below.

The continuous-time experiment looks roughly like this: Imagine you're in a room and you have to press a lever to get out - and get back to what you would normally be doing and get an average reward rho per second. However, the lever is hard to press. You can press it hard and fast or lightly and slowly, taking a total time T to complete the press. The total energy cost of pressing is 1/T, so ideally you'd press very slowly, but that would mean you couldn't be outside the room during that time (opportunity costs). In this setting, the 'action' is just the time T that you take to press the lever.

We can easily approximate this with a standard MDP. E.g. you could take action 1, which completely presses the lever in one time step, costing you 1/1=1 reward in energy. Or you could take action 2, which you would have to take twice to complete the press, costing you only 1/2 reward in total (so 1/4 for each time you take action 2). And so forth. Does that make sense?

Zero point

Of course, if you don't like it outside the room at all, you'll never press the lever - so there is a 'zero point' in terms of how much you like it outside. Below that point you'll never press the lever. I'm not entirely sure what you mean, but I'll clarify that acting vigorously doesn't say anything abou
1 · Brian_Tomasik · 7y
Your explanation was clear. :) Yeah, I guess I meant the trivial observation that you act vigorously if you judge that doing so has higher expected total discounted reward than not doing so. But this doesn't speak to whether, after making that vigorous effort, your experiences will be net positive; they might just be less negative. ...assuming that sticking around inside the room is neutral. This gets back to the "unwarranted assumption that the agent is at the zero-point before it presses the lever." Hm. :) I feel like there's a difference between (a) an agent inside the room who hasn't yet pressed the lever to get out and (b) the agent not existing at all. For (a), it seems we ought to be able to give a (qualia and morally nonrealist) answer about whether its experiences are positive or negative or neutral, while for (b), such a question seems misplaced. If it were a human in the room, we could ask that person whether her experiences before lever pressing were net positive or negative. I guess such answers could vary a lot between people based on various cultural, psychological, etc. factors unrelated to the activity level of reward networks. If so, perhaps one position could be that the distinction between positive vs. negative welfare is a pretty anthropomorphic concept that doesn't travel well outside of a cognitive system capable of making these kinds of judgments. Intuitively, I feel like there is more to the sign of one's welfare than these high-level, potentially idiosyncratic evaluations, but it's hard to say what. I suppose another approach could be to say that the person in the room definitely is at welfare 0 (by fiat) based on lack of reward or punishment signals, regardless of how the person evaluates her welfare verbally.
1 · [anonymous] · 7y
Yes, that's probably the right way to think about it. I'm also considering an alternative, though: since we're describing the situation with a simple computational model, we shouldn't assume that there's anything going on that isn't captured by the model. E.g. if the agent in the room is depressed, it will be performing 'mental actions' - imagining depressing scenarios etc. But we may have to assume that away, similar to how high school physics would assume no friction etc. So we're left with an agent that decides initially that it won't do anything at all (not even updating its beliefs) because it doesn't want to be outside of the room, and then remains inactive. The question arises whether that's an agent at all, and whether it's meaningfully different from unconsciousness.
2 · Brian_Tomasik · 7y
Hm. :) Well, what if the agent did do stuff inside the room but still decided not to go out? We still wouldn't be able to tell if it was experiencing net positive, negative, or neutral welfare. Examples:

1. It's winter. The agent is cold indoors and is trying to move to the warm parts of the room. We assume its welfare is net negative. But it doesn't go outside because it's even colder outside.
2. The agent is indoors having a party. We assume it's experiencing net positive welfare. It doesn't want to go outside because the party is inside.

We can reproduce the behavior of these agents with reward/punishment values that are all positive numbers, all negative numbers, or a combination of the two. So if we omit the higher-level thoughts of the agents and just focus on the reward numbers at an abstract level, it doesn't seem like we can meaningfully distinguish positive or negative welfare. Hence, the sign of welfare must come from the richer context that our human-centered knowledge and evaluations bring?

Of course, qualia nonrealists already knew that the sign and magnitude of an organism's welfare are things we make up. But most people can agree upon, e.g., the sign of the welfare of the person at the party. In contrast, there doesn't seem to be a principled way that most people would agree upon for us to attribute a sign of welfare to a simple RL agent that reproduces the high-level behavior of the person at the party.
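To see the point about reward numbers concretely, here is a minimal sketch (a toy example of my own, not from the thread): value iteration on a two-state 'inside/outside' MDP. Shifting every reward by a constant (here, turning all-positive rewards into all-negative ones) leaves the optimal behavior of this infinite-horizon discounted agent unchanged, so the raw numbers alone cannot tell us whether its welfare is positive or negative.

```python
import numpy as np

# Two states: 0 = "inside", 1 = "outside"; two actions: 0 = "stay", 1 = "move".
# Moving flips the state; staying keeps it. Reward depends only on the state
# you end up in (toy numbers chosen for illustration).
GAMMA = 0.9

def optimal_policy(r_inside, r_outside, iters=500):
    """Value iteration; returns the greedy action for each state."""
    reward = np.array([r_inside, r_outside])
    next_state = np.array([[0, 1], [1, 0]])          # next_state[s, a]
    V = np.zeros(2)
    for _ in range(iters):
        Q = reward[next_state] + GAMMA * V[next_state]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# "Party indoors": inside feels better than outside...
print(optimal_policy(r_inside=+2.0, r_outside=+1.0))   # [0 1]: stay inside / move inside
# ...and the same preference ordering expressed with all-negative numbers.
print(optimal_policy(r_inside=-1.0, r_outside=-2.0))   # [0 1]: identical behavior
```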
1 · [anonymous] · 7y
After some clarification, Dayan thinks that vigour is not the thing I was looking for. We discussed this a bit further, and he suggested that the temporal difference error does track pretty closely what we mean by happiness/suffering, at least as far as the zero point is concerned. Here's a paper making the case (but it has limited scope IMO). If that's true, we wouldn't need, e.g., the theory that the zero point exists to keep firing rates close to zero.

The only problem with TD errors seems to be that they don't account for the difference between wanting and liking. But it's currently just unresolved what the function of liking is. So I came away with the impression that liking vs. wanting, and not the zero point, is the central question.

I've seen one paper suggesting that liking is basically the consumption of rewards, which would bring us back to the question of the zero point, though. But we didn't find that theory satisfying. E.g. food is just a proxy for survival. And as the paper I linked shows, happiness can follow TD errors even when no rewards are consumed. Dayan mentioned that liking may even be an epiphenomenon of some things that are going on in the brain when we eat food/have sex etc., similar to how the specific flavour of pleasure we get from listening to music is such an epiphenomenon. I don't know if that would mean that liking has no function. Any thoughts?
2 · Brian_Tomasik · 7y
Interesting. :) Daswani and Leike (2015) also define (p. 4) happiness as the temporal difference error (in an MDP), and for model-based agents, the definition is, in my interpretation, basically the common Internet slogan that "happiness = reality - expectations". However, the authors point out (p. 2) that pleasure = reward != happiness. This still leaves open the issue of what pleasure is. Personally I think pleasure is more morally relevant.

In Tomasik (2014), I wrote (p. 11):

In this post commenting on Daswani and Leike (2015), I said:

----------------------------------------

I'm not sure I understand, but I wrote a quick thing here inspired by this comment. Do you think that's what he meant? If so, may I attribute him/you for the idea? It seems fairly plausible. :) Studying what separates red from blue might help shine light on this topic.
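For readers who haven't met the term, here is a brief sketch of the temporal difference error being equated with happiness above (the formula is the standard textbook one; the scenario and numbers are made up):

```python
# Temporal difference error: delta = r + gamma * V(s') - V(s).
# On the "happiness = TD error" reading, delta > 0 means things went better
# than expected and delta < 0 means they went worse -- "reality - expectations".
gamma = 0.9

def td_error(reward, value_next, value_current):
    return reward + gamma * value_next - value_current

# Expected a so-so evening (V(s) = 1.0) but got a nice surprise (r = 2.0):
print(td_error(2.0, 1.0, 1.0))   # +1.9: positive "happiness" signal
# Same reward, but you had expected something much better (V(s) = 5.0):
print(td_error(2.0, 1.0, 5.0))   # -2.1: negative signal despite a positive reward
```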
3 · kbog · 7y
If the position is wrong then their work is of little use, or possibly harmful. FRI is a nonprofit organization affiliated with EA which uses nontrivial amounts of human and financial capital; of course it's a problem if the work isn't high value. I wouldn't be so quick to assume that the idea that moral status boils down to asking 'which computations do I care about?' is a well-respected position in philosophy. It probably exists, but not in substantial measure.
6 · [anonymous] · 7y
As far as I can see, that's just functionalism/physicalism plus moral anti-realism, which are both well-respected. But as philosophy of mind and moral philosophy are separate fields, you won't see much discussion of the intersection of these views. Completely agreed if you do assume the position is wrong.
6 · kbog · 7y
I think the choice of a metaethical view is less important than you think. Anti-realism is frequently a much richer view than just talking about preferences. It says that our moral statements aren't truth-apt, but just because our statements aren't truth-apt doesn't mean they're merely about preferences. Anti-realists can give accounts of why a rigorous moral theory is justified and is the right one to follow, not much different from how realists can. Conversely, you could even be a moral realist who believes that moral status boils down to which computations you happen to care about. Anyway, the point is that anti-realists can take pretty much any view in normative ethics, and justify those views in mostly the same ways that realists tend to justify their views (i.e. reasons other than personal preference). Just because we're not talking about whether a moral principle is true or not doesn't mean that we can no longer use the same basic reasons and arguments in favor of or against that principle. Those reasons will just have a different meaning. Plus, physicalism is a weaker assertion than the view that consciousness is merely a matter of computation or information processing. Consciousness could be reducible to physical phenomena but without being reducible to computational steps. (eta: this is probably what most physicalists think.)
2 · [anonymous] · 7y
Thanks for the clarification; I can't comment much, as I don't know much about the different flavors of anti-realism. One thing I'd like to point out, and I'm happy to be corrected on this, is that when an anti-realist argues, they will often (always?) base themselves on principles such as consistency. It seems hard to argue anything without referring to any principle. But someone who doesn't support the application of a principle won't be convinced, and that's up to preferences too. (I certainly know people who reject the drowning child argument because they explicitly don't care about consistency.) So you could see debate about ethics as people exploring the implications of principles they happen to share. Agreed on physicalism being a fairly general set of views.
2 · Lukas_Gloor · 7y
I agree with this.
0 · kokotajlod · 7y
SoerenMind: It's wayyy more than just functionalism/physicalism plus moral anti-realism. There are tons of people who hold both views, and only a tiny fraction of them are negative utilitarians or anything close. In fact I'd bet it's somewhat unusual for any sort of moral anti-realist to be any sort of utilitarian.

Brian's view is maybe best described as eliminativism about consciousness (which may already seem counterintuitive to many) plus a counterintuitive way to draw boundaries in concept space. Luke Muehlhauser said about Brian's way of assigning non-zero moral relevance to any process that remotely resembles aspects of our concept of consciousness:

"Mr. Tomasik’s view [...] amounts to pansychism about consciousness as an uninformative special case of “pan-everythingism about everything."

See this conversation.

So the disagreement there does not appear to be about questions such as "What produces people's impression of there being a hard problem of consciousness?," but rather whether anything that is "non-infinitely separated in multi-dimensional concept space" still deserves some (tiny) recognition as fitting into the definition. As Luke says here, the concept "consciousness" works more like "life" (= fuzzy) and less like "water" (= H2O), and so if one shares this view, it becomes non-trivial to come up with an all-encompassing definition.

While most (my impression, anyway, as someone who works there) researchers at FRI place...

0 · MikeJohnson · 7y
It would be interesting to see FRI develop what 'suffering-focused ethics, as informed by functionalism/eliminativism, but with hard boundaries in concept space' might look like.

Speaking of the metaphysical correctness of claims about qualia sounds confused, and I think precise definitions of qualia-related terms should be judged by how useful they are for generalizing our preferences about central cases. I expect that any precise definition for qualia-related terms that anyone puts forward before making quite a lot of philosophical progress is going to be very wrong when judged by usefulness for describing preferences, and that the vagueness of the analytic functionalism used by FRI is necessary to avoid going far astray.

Regarding...

3 · MikeJohnson · 7y
I agree a good theory of qualia should help generalize our preferences about central cases. I disagree that we can get there with the assumption that qualia are intrinsically vague/ineffable. My critique of analytic functionalism is that it is essentially nothing but an assertion of this vagueness. Without a bijective mapping between physical states/processes and computational states/processes, I think my point holds. I understand it's counterintuitive, but we should expect that when working in these contexts.

Correct; they're the sorts of things a theory of qualia should be able to address: necessary, not sufficient.

Re: your comments on the Symmetry Theory of Valence, I feel I have the advantage here since you haven't read the work. Specifically, it feels as though you're pattern-matching me to IIT and channeling Scott Aaronson's critique of Tononi, which is a bit ironic since that forms a significant part of PQ's argument for why an IIT-type approach can't work. At any rate, I'd be happy to address specific criticism of my work. This is obviously a complicated topic, and informed external criticism is always helpful. At the same time, I think it's a bit tangential to my critique of FRI's approach: as I noted,
1 · AlexMennen · 7y
That's no reason to believe that analytic functionalism is wrong, only that it is not sufficient by itself to answer very many interesting questions.

No, it doesn't. I only claim that most physical states/processes have only a very limited collection of computational states/processes that they can reasonably be interpreted as, not that every physical state/process has exactly one computational state/process that it can reasonably be interpreted as, and certainly not that every computational state/process has exactly one physical state/process that can reasonably be interpreted as it. Those are totally different things.

Kind of. But to clarify, I wasn't trying to argue that there will be problems with the Symmetry Theory of Valence that derive from problems with IIT. And when I heard about IIT, I figured that there were probably trivial counterexamples to the claim that Phi measures consciousness and that perhaps I could come up with one if I thought about the formula enough, before Scott Aaronson wrote the blog post where he demonstrated this. So although I used that critique of IIT as an example, I was mainly going off of intuitions I had prior to it. I can see why this kind of very general criticism from someone who hasn't read the details could be frustrating, but I don't expect I'll look into it enough to say anything much more specific. But people have tried developing alternatives to analytic functionalism.
1 · MikeJohnson · 7y
I think that's being generous to analytic functionalism. As I suggested in Objection 2, ...

I'd like to hear more about this claim; I don't think it's ridiculous on its face (per Brian's and Michael_PJ's comments), but it seems a lot of people have banged their heads against this without progress, and my prior is that formalizing this is a lot harder than it looks (it may be unformalizable). If you could formalize it, that would have a lot of value for a lot of fields.

I don't expect you to either. If you're open to a suggestion about how to approach this in the future, though, I'd offer that if you don't feel like reading something but still want to criticize it, then instead of venting your intuitions (which could be valuable, but don't seem calibrated to the actual approach I'm taking), you should press for concrete predictions. The following phrases seem highly anti-scientific to me:

I.e., these statements seem to lack epistemological rigor, and seem to absolutely prevent you from updating in response to any evidence I might offer, even in principle (i.e., they're actively hostile to your improving your beliefs, regardless of whether I am or am not correct). I don't think your intention is to be closed-minded on this topic, and I'm not saying I'm certain STV is correct. Instead, I'm saying you seem to be overreacting to some stereotype you initially pattern-matched me to, and I'd suggest that talking about predictions is probably a much healthier way to move forward if you want to spend more time on this. (Thanks!)
1 · Brian_Tomasik · 7y
I haven't read most of this paper, but it seems to argue that.
2 · MikeJohnson · 6y
Aaronson's "Is 'information is physical' contentful?" also seems relevant to this discussion (though I'm not sure exactly how to apply his arguments): https://www.scottaaronson.com/blog/?p=3327
1 · MikeJohnson · 7y
You may also like Towards a computational theory of experience by Fekete and Edelman - here's their setup:

It's a little bit difficult to parse precisely how they believe they solve the multiple realization of computational interpretations of a system, but the key passage seems to be:

My attempt at paraphrasing this: if we can model the evolution of a physical system and the evolution of a computational system with the same phase space for some finite time t, then as t increases we can be increasingly confident the physical system is instantiating this computational system. At the limit (t -> ∞), this may offer a method for uniquely identifying which computational system a physical system is instantiating.

My intuition here is that the closer they get to solving the problem of how to 'objectively' determine what computations a physical system is realizing, the further their framework will stray from the Turing paradigm of computation and the closer it will get to a hypercomputation paradigm (which in turn may essentially turn out to be isomorphic to physics). But, I'm sure I'm biased, too. :) Might be worth a look.
1 · MikeJohnson · 7y
That seems like a useful approach - in particular,

This does seem to support the idea that progress can be made on this problem! On the other hand, the author's starting assumption is that we can treat a physical system as a computational (digital) automaton, which seems like a pretty big assumption. I think this assumption may or may not turn out to be ultimately true (Wolfram et al.), but given current theory it seems difficult to reduce actual physical systems to computational automata in practice. In particular, it seems difficult to apply this framework to (1) quantum systems (which all physical systems ultimately are), and (2) biological systems which have messy levels of abstraction, such as the brain (which we'd want to be able to do for the purposes of functionalism).

From a physics perspective, I wonder if we could figure out a way to feed in a bounded wavefunction and identify some minimal upper bound on the reasonable computational interpretations of the system. My instinct is that David Deutsch might be doing relevant work? But I'm not at all sure of this.
2 · Brian_Tomasik · 7y
To steelman the popcorn objection, one could say that separating "normal" computations from popcorn shaking requires at least certain sorts of conditions on what counts as a valid interpretation, and such conditions increase the arbitrariness of the theory. Of course, if we adopt a complexity-of-value approach to moral value (as I and probably you think we should), then those conditions on what counts as a computation may be minimal compared with the other forms of arbitrariness we bring to bear. I haven't read Principia Qualia and so can't comment competently, but I agree that symmetry seems like not the kind of thing I'm looking for when assessing the moral importance of a physical system, or at least it's not more than one small part of what I'm looking for. Most of what I care about is at the level of ordinary cognitive science, such as mental representations, behaviors, learning, preferences, introspective abilities, etc. That said, I do think theories like IIT are at least slightly useful insofar as they expand our vocabulary and provide additional metrics that we might care a little bit about.
1 · AlexMennen · 7y
If you expanded on this, I would be interested.
3 · Brian_Tomasik · 7y
I didn't have in mind anything profound. :) The idea is just that "degree of information integration" is one interesting metric along which to compare minds, along with metrics like "number of neurons", "number of synapses", "number of ATP molecules consumed per second", "number of different brain structures", "number of different high-level behaviors exhibited", and a thousand other similar things.

Interesting that you mention the "waterfall"/"bag of popcorn" argument against computationalism in the same article as citing Scott Aaronson, since he actually gives some arguments against it (see section 6 of https://arxiv.org/abs/1108.1791). In particular, he suggests that we can argue that a process P isn't contributing any computation when having a P-oracle doesn't let you solve the problem faster.

I don't think this fully lays to rest the question of what things are performing computations, but I think we can distinguish them in some...

1 · Brian_Tomasik · 7y
Interesting idea. :) Aaronson says (p. 23):

I'm not so sure this is true. There might be clever ways to use the implicit computations of falling water to save computational cost. For example, Fernando and Sojakka (2003) used water waves to help process inputs:

That said, I agree that the computational-complexity test seems like one helpful consideration for identifying which computations a system is performing.
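For readers curious how 'using the implicit computations of water' can be cashed out, here is a hedged sketch of the same idea in simulation: a small echo state network, the standard computational analogue of the Fernando and Sojakka bucket-of-water setup, in which a fixed random 'reservoir' does most of the work and only a linear readout is trained. All sizes and parameters below are illustrative assumptions, not values from their paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 1000

# Fixed random "reservoir" (the stand-in for the water tank's dynamics).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # keep spectral radius < 1

u = rng.uniform(-1, 1, (T, n_in))                   # input stream
target = np.roll(u[:, 0], 3)                        # task: recall the input from 3 steps ago

# Drive the reservoir with the input and record its states.
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W_res @ x)
    states[t] = x

# Only the linear readout is trained (ridge regression on reservoir states).
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
pred = states @ W_out
print("readout mean squared error:", np.mean((pred[10:] - target[10:]) ** 2))
```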

I'm a bit surprised to find that Brian Tomasik attributes his current views on consciousness to his conversations with Carl Shulman, since in my experience Carl is a very careful thinker and the case for accepting anti-realism as the answer to the problem of consciousness seems pretty weak, at least as explained by Brian. I'm very curious to read Carl's own explanation of his views, if he has written one down. I scanned Carl Shulman's list of writings but was unable to find anything that addressed this.

2 · Brian_Tomasik · 7y
I don't want to put words in Carl's mouth, and certainly Carl doesn't necessarily endorse anything I write. Perhaps he'll chime in. :) For more defenses of anti-realism (i.e., type-A physicalism), here are some other authors. Dennett is the most famous, though some complain that he doesn't use rigorous philosophical arguments/jargon.
7 · MikeJohnson · 7y
This may or may not be relevant, but I would definitely say that Brian's views are not 'fringe views' in the philosophy of mind; they're quite widely held in philosophy and elsewhere. I believe Brian sticks out because his writing is so clear, and because he doesn't avoid thinking about and admitting strange implications of his views. That said I don't know Carl's specific views on the topic.

Aside:

Essentially, the STV is an argument that much of the apparent complexity of emotional valence is evolutionarily contingent, and if we consider a mathematical object isomorphic to a phenomenological experience, the mathematical property which corresponds to how pleasant it is to be that experience is the object’s symmetry.

I don't see how this can work, given that (I think) isomorphism is transitive and there are lots of isomorphisms between sets of mathematical objects which will not preserve symmetry.

Toy example. Say we can map the set of all phenomen...

2 · [anonymous] · 7y
Trivial objection, but the y=0 axis also gets transformed so the symmetries are preserved. In maths, symmetries aren't usually thought of as depending on some specific axis. E.g. the symmetry group of a cube is the same as the symmetry group of a rotated version of the cube.
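The point generalizes (a standard fact, stated here only for clarity): an isomorphism carries each symmetry of one object to a symmetry of the other by conjugation, so the symmetry group is preserved up to relabeling even though any particular axis is not.

```latex
% If f : X \to Y is an isomorphism and \sigma is a symmetry of X, then the
% conjugate f \circ \sigma \circ f^{-1} is a symmetry of Y:
\[
\sigma \in \mathrm{Sym}(X)
\;\Longrightarrow\;
f \circ \sigma \circ f^{-1} \in \mathrm{Sym}(Y),
\qquad\text{hence}\qquad
\mathrm{Sym}(Y) = f\,\mathrm{Sym}(X)\,f^{-1} \cong \mathrm{Sym}(X).
\]
```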
2 · Gregory Lewis · 7y
Mea culpa. I was naively thinking of super-imposing the 'previous' axes. I hope the underlying worry still stands given the arbitrarily many sets of mathematical objects which could be reversibly mapped onto phenomenological states, but perhaps this betrays a deeper misunderstanding.
3 · [anonymous] · 7y
I'll assume you meant isomorphically mapped rather than reversibly mapped; otherwise there are indeed a lot of random things onto which you can map anything. I tend to think of isomorphic objects as equivalent in every way that can be mathematically described (and that includes every way I could think of). However, objects can be made of different elements, so the equivalence holds only after stripping away all information about the elements and seeing them as abstract entities that relate to each other in some way. So you could get {Paris, Rome, London} == {1, 2, 3}. What Mike is getting at, though, I think, is that the elements also have to be isomorphic all the way down - and then I can't think of a reason not to see such completely isomorphic objects as the same.
2 · Michael_PJ · 7y
If they're isomorphic, then they really are the same for mathematical purposes. Possibly if you view STV as having a metaphysical component then you incur some dependence on philosophy of mathematics to say what a mathematical structure is, whether isomorphic structures are distinct, etc.

that precisely mapping between physical processes and (Turing-level) computational processes is inherently impossible

Curious for your take on the premise that ontologies always have tacit telos.

Also, we desire to expand the domains of our perception with scientific instrumentation and abstractions. This expansion always generates some mapping (ontology) from the new data to our existing sensory modalities.

I think this is relevant for the dissonance model of suffering, though I can't fully articulate how yet.

3 · MikeJohnson · 7y
Some ontologies seem to have more of a telos 'baked in'-- e.g., Christianity might be a good example-- whereas other ontologies have zero explicit telos-- e.g., pure mathematics. But I think you're right that there's always a tacit telos, perhaps based on elegance. When I argue that "consciousness is a physics problem", I'm arguing that it inherits physics' tacit telos, which seems to be elegance-as-operationalized-by-symmetry. I wonder if "elegance" always captures telos? This would indicate a certain theory-of-effective-social/personal-change... Yeah, it doesn't seem technology can ever truly be "teleologically neutral".
0 · RomeoStevens · 7y
Elegance is probably worth exploring in the same way that moral descriptivism as a field turned up some interesting things. My naive take is something like 'efficient compression of signaling future abundance.' Another frame for the problem: what is mathematical and scientific taste and how does it work? Also, more efficient objection to religion: 'your compression scheme is lossy bro.' :D

much of the following critique would also apply to e.g. MIRI, FHI, and OpenPhil.

I'm a little confused here. Where does MIRI or FHI say anything about consciousness, much less assume any particular view?

1 · MikeJohnson · 7y
My sense is that MIRI and FHI are fairly strong believers in functionalism, based on reading various pieces on LessWrong, personal conversations with people who work there, and 'revealed preference' research directions. OpenPhil may be more of a stretch to categorize in this way; I'm going off what I recall of Holden's debate on AI risk, some limited personal interactions with people who work there, and Luke Muehlhauser's report (he was up-front about his assumptions on this). Of course, it's harder to pin down what people at these organizations believe than it is in Brian's case, since Brian writes a great deal about his views. So to my knowledge, this statement is essentially correct, although there may be definitional & epistemological quibbles.
3 · Kaj_Sotala · 7y
Wait, are you equating "functionalism" with "doesn't believe suffering can be meaningfully defined"? I thought your criticism was mostly about the latter; I don't think it's automatically implied by the former. If you had a precise enough theory about the functional role and source of suffering, then this would be a functionalist theory that specified objective criteria for the presence of suffering. (You could reasonably argue that it doesn't look likely that functionalism will provide such a theory, but then I've always assumed that anyone who has thought seriously about philosophy of mind has acknowledged that functionalism has major deficiencies and is at best our "least wrong" placeholder theory until somebody comes up with something better.)
1 · MikeJohnson · 7y
Functionalism seems internally consistent (although perhaps too radically skeptical). However, in my view it also seems to lead to some flavor of moral nihilism; consciousness anti-realism makes suffering realism difficult/complicated. I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence. Part of my reason for writing this critique is to argue that functionalism isn't a useful theory of mind, because it doesn't do what we need theories of mind to do (adjudicate disagreements in a principled way, especially in novel contexts). If it is a placeholder, then I think the question becomes, "What would 'something better' look like, and what would count as evidence that something is better?" I'd love to get your (and FRI's) input here.
0 · Kaj_Sotala · 7y
I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not? What something better would look like - if I knew that, I'd be busy writing a paper about it. :-) That seems to be a part of the problem - everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least be able to offer some insight into what exactly is conscious, and not have the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (roughly, my desiderata are similar to Luke Muehlhauser's)
4 · Brian_Tomasik · 7y
I don't. :) I see lots of free parameters for what flavor of functionalism to hold and how to rule on the Aaronson-type cases. But functionalism (perhaps combined with some other random criteria I might reserve the right to apply) perfectly captures my preferred way to think about consciousness. I think what is unsatisfactory is that we still know so little about neuroscience and, among other things, what it looks like in the brain when we feel ourselves to have qualia.
4 · MikeJohnson · 7y
Ah, the opposite, actually: my expectation is that if 'consciousness' isn't real, 'suffering' can't be real either. Thanks, this is helpful. :) The following is tangential, but I thought you'd enjoy this Yuval Harari quote on abstraction and suffering:
2 · RomeoStevens · 7y
The quote seems very myopic. Let's say that we have a religion X that has an excellent track record at preventing certain sorts of defections by helping people coordinate on enforcement costs. Suffering in the service of stabilizing this state of affairs may be the best use of resources in a given context.
2 · MikeJohnson · 7y
I think that's fair-- beneficial equilibriums could depend on reifying things like this. On the other hand, I'd suggest that with regard to identifying entities that can suffer, false positives are much less harmful than false negatives but they still often incur a cost. E.g., I don't think corporations can suffer, so in many cases it'll be suboptimal to grant them the sorts of protections we grant humans, apes, dogs, and so on. Arguably, a substantial amount of modern ethical and perhaps even political dysfunction is due to not kicking leaky reifications out of our circle of caring. (This last bit is intended to be provocative and I'm not sure how strongly I'd stand behind it...)
2 · RomeoStevens · 7y
Yeah, S-risk minimizer being trivially exploitable etc.
2 · MikeJohnson · 7y
An additional note on this: I'd propose that if we split the problem of building a theory of consciousness up into subproblems, the task gets a lot easier. This does depend on elegant problem decomposition. Here are the subproblems I propose: http://opentheory.net/wp-content/uploads/2016/11/Eight-Problems2-1.png

A quick-and-messy version of my framework:

* (1) figure out what sort of ontology you think can map to both phenomenology (what we're trying to explain) and physics (the world we live in);
* (2) figure out what subset of that ontology actively contributes to phenomenology;
* (3) figure out how to determine the boundary of where minds stop, in terms of that-stuff-that-contributes-to-phenomenology;
* (4) figure out how to turn the information inside that boundary into a mathematical object isomorphic to phenomenology (and what the state space of the object is);
* (5) figure out how to interpret how properties of this mathematical object map to properties of phenomenology.

The QRI approach is:

* (1) Choice of core ontology -> physics (since it maps to physical reality cleanly, or some future version like string theory will);
* (2) Choice of subset of core ontology that actively contributes to phenomenology -> Andres suspects quantum coherence; I'm more agnostic (I think Barrett 2014 makes some good points);
* (3) Identification of boundary condition -> highly dependent on (2);
* (4) Translation of information in partition into a structured mathematical object isomorphic to phenomenology -> I like how IIT does this;
* (5) Interpretation of what the mathematical output means -> Probably, following IIT, the dimensional magnitude of the object could correspond with the degree of consciousness of the system. More interestingly, I think the symmetry of this object may plausibly have an identity relationship with the valence of the experience.

Anyway, certain steps in this may be wrong, but that's what the basic QRI "full stack" approach looks like.
1 · kbog · 7y
Well, I think there is a big difference between FRI, where the point of view is at the forefront of their work and explicitly stated in research, and MIRI/FHI, where it's secondary to their main work and is only something which is inferred on the basis of what their researchers happen to believe. Plus, as Kaj said, you can be a functionalist without being all subjectivist about it. But Open Phil does seem to have this view now to at least the same extent as FRI does (cf. Muehlhauser's consciousness document).
2 · Brian_Tomasik · 7y
I think a default assumption should be that works by individual authors don't necessarily reflect the views of the organization they're part of. :) Indeed, Luke's report says this explicitly:

Of course, there is nonzero Bayesian evidence in the sense that an organization is unlikely to publish a viewpoint that it finds completely misguided. When FRI put my consciousness pieces on its site, we were planning to add a counterpart article (I think defending type-F monism or something) to have more balance, but that latter article never got written.
1 · kbog · 7y
MIRI/FHI have never published anything which talks about any view of consciousness. There is a huge difference between inferring based on things that people happen to write outside of the organization, and the actual research being published by the organization. In the second case, it's relevant to the research, whether it's an official value of the organization or not. In the first case, it's not obvious why it's relevant at all. Luke affirmed elsewhere that Open Phil really heavily leans towards his view on consciousness and moral status.

Re: 2, I don't see how we should expect functionalism to resolve disputes over which agents are conscious. Panpsychism does no such thing, nor does physicalism or dualism or any other theory of mind. Any of these theories can inform inquiry about which agents are conscious, in tandem with empirical work, but the connection is tenuous, and it seems to me that at least 70% of the work is empirical. Theory of mind mostly gives a theoretical basis for empirical work.

The problem lies more with the specific anti-realist account of sentience that some people at FRI...

0 · MikeJohnson · 7y
I think analytic functionalism is internally consistent on whether agents are conscious, as is the realist panpsychism approach, and so on. The problem comes in, as you note, when we want to be anti-realist about consciousness yet also care about suffering. In practice, it may be difficult to cleanly distinguish between theoretical work on consciousness, and empirical work on consciousness. At least, we may need to be very careful in how we're defining "consciousness", "empirical", etc. It's an open question whether this is possible under functionalism-- my argument is that it's not possible to find a functionalist framework which has a clear or privileged definition of what morally relevant suffering is.

This is a super interesting article, but...

I worry that FRI’s work leans on the intuition that suffering is real and we can speak coherently about it, to a degree greater than its metaphysics formally allow.

To me, it reads like it was written by someone who has never really encountered suffering.

http://www.mattball.org/2014/11/excerpts-from-letter-to-young-matt.html