Comment author: MikeJohnson 02 March 2018 05:15:49AM 0 points

EA forum threads auto-hide so I’m not too worried about clutter.

I don’t think you’re fully accounting for the difference in my two models of meaning. And, I think the objections you raise to consciousness being well-defined would also apply to physics being well-defined, so your arguments seem to prove too much.

To attempt to address your specific question, I find the hypothesis that ‘qualia (and emotional valence) are well-defined across all arrangements of matter’ convincing because (1) it seems to me the alternative is not coherent (as I noted in the piece on computationalism I linked for you) and (2) it seems generative and to lead to novel and plausible predictions I think will be proven true (as noted in the linked piece on quantifying bliss and also in Principia Qualia).

All the details and subarguments can be found in those links.

Will be traveling until Tuesday; probably with spotty internet access until then.

Comment author: itaibn 10 March 2018 11:38:30PM 0 points

I haven't responded to you for so long, first because I felt we had reached the point in the discussion where it's difficult to get across anything new and I wanted to be attentive to what I say, and then because after a while without writing anything I became disinclined to continue. The conversation may close soon.

Some quick points:

  • My whole point in my previous comment is that the conceptual structure of physics is not what you make it out to be, and so your analogy to physics is invalid. If you want to say that my arguments against consciousness apply equally well to physics you will need to explain the analogy.

  • My views on consciousness that I mentioned earlier but did not elaborate on are becoming more relevant. It would be a good idea for me to explain them in more detail.

  • I read your linked piece on quantifying bliss and I am unimpressed. I concur with the last paragraph of this comment.

Comment author: MikeJohnson 28 February 2018 06:05:47PM 0 points

This is an important point and seems to hinge on the notion of reference, or the question of how language works in different contexts. The following may or may not be new to you, but trying to be explicit here helps me think through the argument.

Mostly, words gain meaning from contextual embedding, i.e. they’re meaningful as nodes in a larger network. Wittgenstein observed that philosophical confusion often stems from taking a perfectly good word and trying to use it outside its natural remit. His famous example is the question, “what time is it on the sun?”. As you note, maybe notions about emotional valence are similar: trying to ‘universalize’ valence may be like trying to universalize time zones, an improper move.

But there’s another notable theory of meaning, where parts of language gain meaning through deep structural correspondence with reality. Much of physics fits this description, for instance, and it’s not a type error to universalize the notion of the electromagnetic force (or electroweak force, or whatever the fundamental unification turns out to be). I am essentially asserting that qualia is like this- that we can find universal principles for qualia that are equally and exactly true in humans, dogs, dinosaurs, aliens, conscious AIs, etc. When I note I’m a physicalist, I intend to inherit many of the semantic properties of physics, how meaning in physics ‘works’.

I suspect all conscious experiences have an emotional valence, in much the same way all particles have a charge or spin. I.e. it’s well-defined across all physical possibilities.

Comment author: itaibn 02 March 2018 12:23:36AM 0 points

Do you think we should move the conversation to private messages? I don't want to clutter a discussion thread that's mostly on a different topic, and I'm not sure whether the average reader of the comments benefits or is distracted by long conversations on a narrow subtopic.

Your comment appears to be just reframing the point I just made in your own words, and then affirming that you believe that the notion of qualia generalizes to all possible arrangements of matter. This doesn't answer the question, why do you believe this?

By the way, although there is no evidence for this, physicists commonly speculate that the laws of physics allow multiple metastable vacuum states, that the observable universe occupies only one such vacuum, and that near different vacua there are different fields and forces. If this is true, then the electromagnetic field and other parts of the Standard Model are not much different from my earlier example of the alignment of an ice crystal. One reason this view is considered plausible is simply the fact that it's possible: it's not considered so unusual for a quantum field theory to have multiple vacuum states, and if the entire observable universe is close to one vacuum then none of our experiments give us any evidence about what other vacuum states are like or whether they exist.

This example is meant to illustrate a broader point: I think that making a binary distinction between contextual concepts and universal concepts is oversimplified. Rather, here's how I would put it: many phenomena generalize beyond the context in which they were originally observed. Taking advantage of this, physicists deliberately seek out the phenomena that generalize as far as possible, and over history they have broadened their grasp very far. Nonetheless, they avoid thinking of any concept as "universal"; often when they do think a concept generalizes they have a specific explanation for why it should, and when there's a clear alternative to the concept generalizing they keep an open mind.

So again: Why do you think that qualia and emotional valence generalize to all possible arrangements of matter?

Comment author: MikeJohnson 26 February 2018 03:35:03AM *  0 points

Thanks, this is helpful. My general position on your two questions is indeed "Yes/No".

The question of 'what are reality's natural kinds?' is admittedly complex and there's always room for skepticism. That said, I'd suggest the following alternatives to your framing:

  • Whether the existence of qualia itself is 'crisp' seems prior to whether pain/pleasure are. I call this the 'real problem' of consciousness.

  • I'm generally a little uneasy with discussing pain/pleasure in technically precise contexts; I prefer ‘emotional valence’.

  • Another reframe to consider is to disregard talk about pain/pleasure, and instead focus on whether value is well-defined on physical systems (i.e. the subject of Tegmark's worry here). Conflation of emotional valence & moral value can then be split off as a subargument.

Generally speaking, I think if one accepts that it's possible in principle to talk about qualia in a way that 'carves reality at the joints', it's not much of a stretch to assume that emotional valence is one such natural kind (arguably the 'c. elegans of qualia'). I don't think we're logically forced to assume this, but I think it's prima facie plausible, and paired with some of our other work it gives us a handhold for approaching qualia in a scientific/predictive/falsifiable way.

Essentially, QRI has used this approach to bootstrap the world's first method for quantifying emotional valence in humans from first principles, based on fMRI scans. (It also should work for most non-human animals; it's just harder to validate in that case.) We haven't yet done the legwork on connecting future empirical results here back to the computationalism vs physicalism debate, but it's on our list.

TL;DR: If consciousness is a 'crisp' thing with discoverable structure, we should be able to build/predict useful things with this that cannot be built/predicted otherwise, similar to how discovering the structure of electromagnetism let us build/predict useful things we could not have otherwise. This is probably the best route to solve these metaphysical disagreements.

Comment author: itaibn 27 February 2018 11:05:45PM 0 points

It wasn't clear to me from your comment, but based on your link I am presuming that by "crisp" you mean "amenable to generalizable scientific theories" (rather than "ontologically basic"). I was using "pleasure/pain" as a catch-all term and would not mind substituting "emotional valence".

It's worth emphasizing that just because a particular feature is crisp does not mean it generalizes to any particular domain in any particular way. For example, a single ice crystal has a set of directions in which the molecular bonds are oriented, which is the same throughout the crystal, and this surely qualifies as a "crisp" feature. Nonetheless, when the ice melts, this feature becomes undefined: no direction is distinguished from any other direction in water. When figuring out whether a concept from one domain extends to a new domain, positing that there's a crisp theory describing the concept does not answer the question without some information about what that theory looks like.

So even if there existed a theory describing qualia and emotional valence as they exist on Earth, it need not extend to describing every physically possible arrangement of matter, and I see no reason to expect it to. Since a far-future civilization is likely to approach the physical limits of matter in many ways, we should not assume that it avoids being one such arrangement of matter to which the notion of qualia is inapplicable.

Comment author: MikeJohnson 25 February 2018 08:46:38PM *  0 points

It seems to me your #2 and #4 still imply computationalism and/or are speaking about a straw man version of physicalism. Different physical theories will address your CPT reversal objection differently, but it seems pretty trivial to me.

If I understood you correctly, physicalism as a statement about consciousness is primarily a negative statement, "the computational behavior of a system is not sufficient to determine what sort of conscious activity occurs there", which doesn't by itself tell you what sort of conscious activity occurs.

I would generally agree, but would personally phrase this differently; rather, as noted here, there is no objective fact-of-the-matter as to what the 'computational behavior' of a system is. I.e., no way to objectively derive what computations a physical system is performing. In terms of a positive statement about physicalism & qualia, I'm assuming something on the order of dual-aspect monism / neutral monism. And yes insofar as a formal theory of consciousness which has broad predictive power would depart from folk intuition, I'd definitely go with the formal theory.

Comment author: itaibn 26 February 2018 01:50:05AM *  0 points

Thanks for the link. I didn't think to look at what other posts you have published and now I understand your position better.

As I now see it, there two critical questions for distinguishing the different positions on the table:

  1. Does our intuitive notion of pleasure/suffering have an objective, precisely defined fundamental concept underlying it?
  2. In practice, is it a useful approach to look for computational structures exhibiting pleasure/suffering in the distant future as a means to judge possible outcomes?

Brian Tomasik answers these questions "No/Yes", and a supporter of the Sentience Institute would probably answer "Yes" to the second question. Your answers are "Yes/No", and so you prefer to work on finding the underlying theory for pleasure/suffering. My answers are "No/No", and I am at a loss.

I see two reasons why a person might think that the pleasure/pain of conscious entities is a solid enough concept to answer "Yes" to either of these questions (not counting conservative opinions over what futures are possible for question 2). The first is a confusion caused by subtle implicit assumptions in the way we talk about consciousness, which makes a sort of conscious experience that includes within it pleasure and pain seem more ontologically basic than it really is. I won't elaborate on this in this comment, but for now you can round me off as an eliminativist.

The second is what I was calling "a sort of wishful thinking" in argument #4: These people have moral intuitions that tell them to care about others' pleasure and pain, which implies not fooling themselves about how much pleasure and pain others experience. On the other hand, there are many situations where their intuition does not give them a clear answer, but also tells them that picking an answer arbitrarily is like fooling themselves. They resolve this tension by telling themselves, "there is a 'correct answer' to this dilemma, but I don't know what it is. I should act to best approximate this 'correct answer' with the information I have." People then treat these "correct answers" like other things they are ignorant about, and in particular imagine that a scientific theory might be able to answer these questions in the same way science answered other things we used to be ignorant about.

However, this expectation infers something external, the existence of a certain kind of scientific theory, from evidence that is internal, their own cognitive tensions. This seems fallacious to me.

Comment author: MikeJohnson 25 February 2018 05:30:04AM 1 point

Possibly the biggest unknown in ethics is whether bits matter, or whether atoms matter.

If you assume bits matter, then I think this naturally leads into a concept cluster where speaking about utility functions, preference satisfaction, complexity of value, etc, makes sense. You also get a lot of weird unresolved thought-experiments like homomorphic encryption.
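To see why homomorphic encryption is a puzzling thought experiment here, consider a toy additively homomorphic scheme (a hypothetical illustration only; it is not real cryptography, and actual homomorphic encryption uses far more involved lattice-based constructions). An outside observer watching the machine below sees only uniformly shifted numbers, so whether a meaningful computation "occurs" inside it seems to depend on a key the observer may not have:

```python
# Toy additively homomorphic scheme: ciphertexts can be added by anyone,
# and only the key-holder can decrypt the resulting sum.
# NOT real cryptography (a fixed additive key leaks everything after a
# couple of messages); it only illustrates the structure of the puzzle.

N = 2**32           # work modulo a fixed modulus
KEY = 123456789     # secret key (a real scheme would sample this randomly)

def encrypt(m):
    return (m + KEY) % N

def decrypt_sum(c, num_ciphertexts):
    # Each ciphertext carries one copy of the key, so subtract it once per term.
    return (c - num_ciphertexts * KEY) % N

# A third party adds two ciphertexts without learning the plaintexts...
c = (encrypt(20) + encrypt(22)) % N

# ...and only the key-holder recovers the sum.
assert decrypt_sum(c, 2) == 42
```

If the addition above were instead a simulation of a mind under encryption, the question of whether that mind's experience "happened" on the hardware is exactly the unresolved thought experiment being pointed at.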

If you assume atoms matter, I think this subtly but unavoidably leads to a very different concept cluster-- qualia turns out to be a natural kind instead of a leaky reification, for instance. Talking about the 'unity of value thesis' makes more sense than talking about the 'complexity of value thesis'.

TL;DR: I think you're right that if we assume computationalism/functionalism is true, then pleasure and suffering are inherently ill-defined, not crisp. They do seem well-definable if we assume physicalism is true, though.

Comment author: itaibn 25 February 2018 12:13:38PM 1 point

Thanks for reminding me that I was implicitly assuming computationalism. Nonetheless, I don't think physicalism substantially affects the situation. My arguments #2 and #4 stand unaffected; you have not backed up your claim that qualia is a natural kind under physicalism.

While it's true that physicalism gives clear answers for the value of two identical systems or a system simulated with homomorphic encryption, it may still be possible to have quantum computations involving physically instantiated conscious beings, by isolating the physical environment of such a being and running the CPT reversal of this physical system after an output has been extracted, in order to maintain coherence.

Finally, physicalism adds its own questions, namely: given a bunch of physical systems whose behavior all appears conscious, which ones actually are conscious and which are not? If I understood you correctly, physicalism as a statement about consciousness is primarily a negative statement, "the computational behavior of a system is not sufficient to determine what sort of conscious activity occurs there", which doesn't by itself tell you what sort of conscious activity occurs.

Comment author: itaibn 25 February 2018 12:12:38AM 3 points

My current position is that the amount of pleasure/suffering that conscious entities will experience in a far-future technological civilization will not be well-defined. Some arguments for this:

  1. Generally, utility functions and reward functions are invariant under positive affine transformations (with suitable rescaling of the learning rate in the case of reward functions). Therefore they cannot be compared between different intelligent agents as a measure of pleasure.

  2. The clean separation of our civilization into many different individuals is an artifact of how evolution operates. I don't expect far future civilization to have a similar division of its internal processes into agents. Therefore the method of counting conscious entities with different levels of pleasure is inapplicable.

  3. Theoretical computer science gives many ways to embed one computational process within another so that it is unclear whether or how many times the inner process "occurs", such as running identical copies of the same program, using a quantum computer to run the same program with many inputs in superposition, and homomorphic encryption. Similar methods we don't know about will likely be discovered in the future.

  4. Our notions of pleasure and suffering are mostly defined extensionally, with examples from the present and the past. I see no reason for such an extensionally derived concept to have a natural definition that applies to extremely different situations. Uncharitably, it seems like the main reason people assume this is a sort of wishful thinking, since their normal moral reasoning breaks down if they allow pleasure/suffering to be undefined.
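The invariance in point 1 can be checked directly: applying any positive affine map a*u + b (a > 0) to a utility function leaves every choice the agent makes unchanged, which is a standard fact about von Neumann-Morgenstern utilities. The options and numbers below are invented purely for illustration:

```python
# A toy agent choosing among options by maximizing utility.

def best_option(utilities):
    """Return the option with the highest utility."""
    return max(utilities, key=utilities.get)

u = {"A": 0.2, "B": 1.5, "C": -0.7}

# Apply a positive affine transformation u -> a*u + b with a > 0.
a, b = 3.0, 10.0
u_prime = {option: a * value + b for option, value in u.items()}

# The induced behavior is identical, so the absolute utility numbers
# carry no agent-independent meaning.
assert best_option(u) == best_option(u_prime) == "B"
```

Since the behavior fixes the utility function only up to the choice of a and b, there is no canonical scale on which two different agents' utilities could be compared.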

I'm currently uncertain about how to make decisions relating to the far future in light of the above arguments. My current favorite position is to try to understand the far future well enough until I find something I have strong moral intuitions about.

Comment author: Carl_Shulman 14 December 2017 05:41:40PM *  15 points

    Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

I'd say our policy should be 'just don't do that.' EA has learned its lesson on this from GiveWell.



    Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive. We also value the reputation of effective altruism, and recognize that our actions reflect on it.

Comment author: itaibn 14 December 2017 06:19:31PM 2 points

Indeed, maybe I should have made the point more harshly. To be clear, that comment is not about something people might do; it's about what's already present in the top post, which I see as breaking the Reddit rules.

I used soft language because I was worried about EA discussions breaking into arguments whenever someone suggests a good thing to do, and was worried that I might have erred too much in the other direction in other contexts. I still don't feel I have a good intuition on how confrontational I should be.

Comment author: itaibn 14 December 2017 04:54:50PM 6 points

I've spent some time thinking and investigating what the current state of affairs is, and here are my conclusions:

I've been reading through PineappleFund's comments. Many are responses to solicitations for specific charities with him endorsing them as possibilities. One of these was for SENS foundation. Matthew_Barnett suggested that this is evidence that he particularly cares about long-term future causes, but given the diversity of other causes he endorsed I think it is pretty weak evidence.

They haven't yet commented on any of the subthreads specifically discussing EA. However, these subthreads are high up on the Reddit sorting algorithm and have many comments endorsing EA. This is already a good position and is difficult to improve: They either like what they see or they don't. It may be better if the top-level comments explicitly described and linked to a specific charity since that is what they responded well to in other comments, but I am cautious about making such surface-level generalizations which might have more to do with the distribution of existing comments than PineappleFund's tendencies.

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Comment author: Kaj_Sotala 18 October 2017 03:22:04PM 1 point

There seem to be a lot of leads that could help us figure out the high-value interventions, though:

  • knowledge about what causes it and what has contributed to changes of it over time

  • research directions that could help further improve our understanding of what causes it / what doesn't cause it

  • various interventions which already seem like they work in a small-scale setting, though it's still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group conversations)

  • and of course psychology in general is full of interesting ideas for improving mental health and well-being that haven't been rigorously tested, which also suggests that

  • any meta-work that would improve psychology's research practices would be even more valuable than we previously thought.

As for the "pointing out a problem people have been aware of for millennia", well, people have been aware of global poverty for millennia too. Then we got science and randomized controlled trials and all the stuff that EAs like, and got better at fixing the problem. Time to start looking at how we could apply our improved understanding of this old problem to fixing it.

Comment author: itaibn 18 October 2017 06:16:54PM *  0 points

First, I consider our knowledge of psychology today to be roughly equivalent to that of alchemists when alchemy was popular. Like with alchemy, our main advantage over previous generations is that we're doing lots of experiments and starting to notice vague patterns, but we still don't have any systematic or reliable knowledge of what is actually going on. It is premature to seriously expect to change human nature.

Improving our knowledge of psychology to the point where we can actually figure things out could have a major positive effect on society. The same could be said for other branches of science. I think basic science is a potentially high-value cause, but I don't see why psychology should be singled out.

Second, this cause is not neglected. It is one of the major issues intellectuals have been grappling with for centuries or more. Framing the issue in terms of "tribalism" may be a novelty, but I don't see it as an improvement.

Finally, I'm not saying that there's nothing the effective altruism community can do about tribalism. I'm saying I don't see how this post is helping.

edit: As an aside, I'm now wondering if I might be expressing the point too rudely, especially the last paragraph. I hope we manage to communicate effectively in spite of any mistakes on my part.

Comment author: itaibn 18 October 2017 11:35:55AM 0 points

I don't see any high-value interventions here. Simply pointing out a problem people have been aware of for millennia will not help anyone.
