Comment author: RandomEA 05 August 2017 08:34:38PM 1 point [-]

One possible benefit of blood, kidney, and bone marrow donations is that they could demonstrate that EAs actually do care about other people in their country (which could help with movement building), but such donations can only be associated with EA if they are in fact effective on the margin (which does not seem to be the case with blood donations).

Comment author: Brian_Tomasik 06 August 2017 09:56:53AM 0 points [-]

You could put blood donation into the "relaxation" or "fun social activity" category.

Comment author: MikeJohnson 01 August 2017 09:07:05PM *  0 points [-]

That's no reason to believe that analytic functionalism is wrong, only that it is not sufficient by itself to answer very many interesting questions.

I think that's being generous to analytic functionalism. As I suggested in Objection 2:

In short, FRI’s theory of consciousness isn’t actually a theory of consciousness at all, since it doesn’t do the thing we need a theory of consciousness to do: adjudicate disagreements in a principled way. Instead, it gives up any claim on the sorts of objective facts which could in principle adjudicate disagreements.


I only claim that most physical states/processes have only a very limited collection of computational states/processes that it can reasonably be interpreted as[.]

I'd like to hear more about this claim; I don't think it's ridiculous on its face (per Brian's and Michael_PJ's comments), but it seems that a lot of people have banged their heads against this without progress, and my prior is that formalizing this is a lot harder than it looks (it may be unformalizable). If you could formalize it, that would have a lot of value for a lot of fields.

So although I used that critique of IIT as an example, I was mainly going off of intuitions I had prior to it. I can see why this kind of very general criticism from someone who hasn't read the details could be frustrating, but I don't expect I'll look into it enough to say anything much more specific.

I don't expect you to either. If you're open to a suggestion about how to approach this in the future, though, I'd offer that if you don't feel like reading something but still want to criticize it, instead of venting your intuitions (which could be valuable, but don't seem calibrated to the actual approach I'm taking), you should press for concrete predictions.

The following phrases seem highly anti-scientific to me:

sounds wildly implausible | These sorts of theories never end up getting empirical support, although their proponents often claim to have empirical support | I won't be at all surprised if you claim to have found substantial empirical support for your theory, and I still won't take your theory at all seriously if you do, because any evidence you cite will inevitably be highly dubious | The heuristic that claims of the form "a qualia-related concept is really some simple other thing" are wrong, and that claims of empirical support for such claims never hold up | I am almost certain that there are trivial counterexamples to the Symmetry Theory of Valence

I.e., these statements seem to lack epistemological rigor, and seem to absolutely prevent you from updating in response to any evidence I might offer, even in principle (i.e., they're actively hostile to your improving your beliefs, regardless of whether I am or am not correct).

I don't think your intention is to be closed-minded on this topic, and I'm not saying I'm certain STV is correct. Instead, I'm saying you seem to be overreacting to some stereotype you initially pattern-matched me as, and I'd suggest talking about predictions is probably a much healthier way to move forward if you want to spend more time on this. (Thanks!)

Comment author: Brian_Tomasik 02 August 2017 09:11:56AM 1 point [-]

I only claim that most physical states/processes have only a very limited collection of computational states/processes that it can reasonably be interpreted as[.]

I haven't read most of this paper, but it seems to argue that.

Comment author: Brian_Tomasik 02 August 2017 08:34:46AM *  8 points [-]

I'd be interested in literature on this topic as well, because it seems to bedevil all far-future-aware EA work.

Some articles:

Comment author: AlexMennen 31 July 2017 01:29:48PM 1 point [-]

That said, I do think theories like IIT are at least slightly useful insofar as they expand our vocabulary and provide additional metrics that we might care a little bit about.

If you expanded on this, I would be interested.

Comment author: Brian_Tomasik 01 August 2017 09:31:46AM 0 points [-]

I didn't have in mind anything profound. :) The idea is just that "degree of information integration" is one interesting metric along which to compare minds, along with metrics like "number of neurons", "number of synapses", "number of ATP molecules consumed per second", "number of different brain structures", "number of different high-level behaviors exhibited", and a thousand other similar things.

Comment author: AlexMennen 30 July 2017 10:17:36PM *  6 points [-]

Speaking of the metaphysical correctness of claims about qualia sounds confused, and I think precise definitions of qualia-related terms should be judged by how useful they are for generalizing our preferences about central cases. I expect that any precise definition for qualia-related terms that anyone puts forward before making quite a lot of philosophical progress is going to be very wrong when judged by usefulness for describing preferences, and that the vagueness of the analytic functionalism used by FRI is necessary to avoid going far astray.

Regarding the objection that shaking a bag of popcorn can be interpreted as carrying out an arbitrary computation, I'm not convinced that this is actually true, and I suspect it isn't. It seems to me that the interpretation would have to be doing essentially all of the computation itself, and it should be possible to make precise the sense in which brains and computers simulating brains carry out a certain computation that waterfalls and bags of popcorn don't. The defense of this objection that you quote from McCabe is weak; the uncontroversial fact that many slightly different physical systems can carry out the same computation does not establish that an arbitrary physical system can be reasonably interpreted as carrying out an arbitrary computation.
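To make that worry concrete, here is a minimal Python sketch (an illustration only; the update rule and all numbers are arbitrary choices, not from any cited work). The "interpretation" that reads a computation out of random popcorn states turns out to be a lookup table that has memorized the computation's entire trace:

```python
import random

def run_computation(steps):
    # The "real" computation: iterate a simple update rule and record its trace.
    state = 0
    trace = []
    for _ in range(steps):
        state = (state * 3 + 1) % 17  # arbitrary update rule, for illustration only
        trace.append(state)
    return trace

def popcorn_states(steps):
    # Stand-in for arbitrary physical dynamics (the shaken bag of popcorn).
    return [random.random() for _ in range(steps)]

def build_interpretation(physical, computational):
    # Map each physical state to the computational state it supposedly "represents".
    # The map is nothing but a lookup table storing the entire trace, so all of
    # the computational work lives in the interpretation, not in the popcorn.
    return dict(zip(physical, computational))

trace = run_computation(10)
popcorn = popcorn_states(10)
interpretation = build_interpretation(popcorn, trace)

# "Reading off" the computation from the popcorn succeeds...
decoded = [interpretation[p] for p in popcorn]
assert decoded == trace  # ...but only because the interpretation memorized the trace
```

Making precise the sense in which a brain, unlike the popcorn, does its own computational work (e.g., by requiring the interpretation map to be simple or efficiently computable) is exactly the hard formalization problem gestured at above.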

I think the edge cases that you quote Scott Aaronson bringing up are good ones to think about, and I do have a large amount of moral uncertainty about them. But I don't see these as problems specific to analytic functionalism. These are hard problems, and the fact that some more precise theory about qualia may be able to easily answer them is not a point in favor of that theory, since wrong answers are not helpful.

The Symmetry Theory of Valence sounds wildly implausible. There are tons of claims that people put forward, often contradicting other such claims, that some qualia-related concept is actually some other simple thing. For instance, I've heard claims that goodness is complexity and that what humans value is increasing complexity. Complexity and symmetry aren't quite opposites, but they're certainly anti-correlated, and both theories can't be right.

These sorts of theories never end up getting empirical support, although their proponents often claim to have empirical support. For example, proponents of Integrated Information Theory often cite the fact that the cerebrum has a higher Phi value than the cerebellum as support for the hypothesis that Phi is a good measure of the amount of consciousness a system has, as if comparing two data points were enough to support such a claim. It turns out that large regular rectangular grids of transistors, and the operation of multiplication by a large Vandermonde matrix, both have arbitrarily high Phi values, and yet the claim that Phi measures consciousness survives and continues to claim empirical support despite this damning disconfirmation. And I think the “goodness is complexity” people also provided examples of good things that they thought they had established are complex, and of bad things that they thought they had established are not.

I know this sounds totally unfair, but I won't be at all surprised if you claim to have found substantial empirical support for your theory, and I still won't take your theory at all seriously if you do, because any evidence you cite will inevitably be highly dubious. The heuristic that claims of the form "a qualia-related concept is really some simple other thing" are wrong, and that claims of empirical support for such claims never hold up, seems to be pretty well supported. I am almost certain that there are trivial counterexamples to the Symmetry Theory of Valence, even if you have developed a theory sophisticated enough to avoid the really obvious failure modes, like claiming that a square experiences more pleasure and less suffering than a rectangle because its symmetry group is twice as large.
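For what it's worth, the arithmetic behind that last example is a standard group-theory fact (not something argued in this thread): the symmetry group of a square is the dihedral group \(D_4\) (four rotations and four reflections), while a non-square rectangle has only the identity, the 180° rotation, and two reflections:

\[
|D_4| = 8, \qquad |D_2| = 4, \qquad \frac{|D_4|}{|D_2|} = 2,
\]

so the square's symmetry group is indeed exactly twice as large.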

Comment author: Brian_Tomasik 31 July 2017 05:13:53AM 0 points [-]

To steelman the popcorn objection, one could say that separating "normal" computations from popcorn shaking requires at least certain sorts of conditions on what counts as a valid interpretation, and such conditions increase the arbitrariness of the theory. Of course, if we adopt a complexity-of-value approach to moral value (as I and probably you think we should), then those conditions on what counts as a computation may be minimal compared with the other forms of arbitrariness we bring to bear.

I haven't read Principia Qualia and so can't comment competently, but I agree that symmetry seems like not the kind of thing I'm looking for when assessing the moral importance of a physical system, or at least it's not more than one small part of what I'm looking for. Most of what I care about is at the level of ordinary cognitive science, such as mental representations, behaviors, learning, preferences, introspective abilities, etc.

That said, I do think theories like IIT are at least slightly useful insofar as they expand our vocabulary and provide additional metrics that we might care a little bit about.

Comment author: kbog  (EA Profile) 26 July 2017 12:05:44AM *  2 points [-]

Well, I think there is a big difference between FRI, where this point of view is at the forefront of their work and explicitly stated in their research, and MIRI/FHI, where it's secondary to their main work and is only inferred from what their researchers happen to believe. Plus, as Kaj said, you can be a functionalist without being all subjectivist about it.

But Open Phil does seem to have this view now, to at least the same extent as FRI does (cf. Muehlhauser's consciousness document).

Comment author: Brian_Tomasik 27 July 2017 02:54:58AM *  2 points [-]

I think a default assumption should be that works by individual authors don't necessarily reflect the views of the organization they're part of. :) Indeed, Luke's report says this explicitly:

the rest of this report does not necessarily reflect the intuitions and judgments of the Open Philanthropy Project in general. I explain my views in this report merely so they can serve as one input among many as the Open Philanthropy Project considers how to clarify its values and make its grantmaking choices.

Of course, there is nonzero Bayesian evidence in the sense that an organization is unlikely to publish a viewpoint that it finds completely misguided.

When FRI put my consciousness pieces on its site, we were planning to add a counterpart article (I think defending type-F monism or something) to have more balance, but that latter article never got written.

Comment author: Kaj_Sotala 25 July 2017 11:17:19PM *  1 point [-]

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better?"

As for what something better would look like: if I knew that, I'd be busy writing a paper about it. :-) That seems to be part of the problem: everyone (that I know of) agrees that functionalism is deeply unsatisfactory, but very few people seem to have any clue of what a better theory might look like. Off the top of my head, I'd like such a theory to at least offer some insight into what exactly is conscious, and to avoid the issue where you can hypothesize all kinds of weird computations (like Aaronson did in your quote) and be left confused about which of them are conscious and which are not, and why. (Roughly, my desiderata are similar to Luke Muehlhauser's.)

Comment author: Brian_Tomasik 27 July 2017 02:47:28AM *  4 points [-]

everyone (that I know of) agrees that functionalism is deeply unsatisfactory

I don't. :) I see lots of free parameters for what flavor of functionalism to hold and how to rule on the Aaronson-type cases. But functionalism (perhaps combined with some other random criteria I might reserve the right to apply) perfectly captures my preferred way to think about consciousness.

I think what is unsatisfactory is that we still know so little about neuroscience and, among other things, what it looks like in the brain when we feel ourselves to have qualia.

Comment author: John_Maxwell_IV 25 July 2017 05:05:47AM *  1 point [-]

Not sure if "lazy" is quite the right word. For example, it took work to rebuild chicken housing so that each chicken got even less space. I think "greedy" is a more accurate word.

By the way, does the vegan movement talk about running non-factory farms that sell animal products which are subsidized so they are priced competitively with factory farm products? If farming animals ethically costs a premium, from a purely consequentialist perspective, it doesn't seem like it should matter whether the premium is paid by the customer or by some random person who wants to convert dollars into reduced suffering.

BTW I think this is pretty relevant to the Moloch line of thinking.

Comment author: Brian_Tomasik 25 July 2017 05:29:48PM 3 points [-]

does the vegan movement talk about running non-factory farms that sell animal products which are subsidized so they are priced competitively with factory farm products?

I would guess it'd be much less cost-effective than lobbying for welfare reforms and such.

it doesn't seem like it should matter whether the premium is paid by the customer or by some random person who wants to convert dollars into reduced suffering.

If the altruist spends her money on this, she has less left over to spend on other things. In contrast, most consumers won't spend their savings on highly altruistic causes.
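To put illustrative numbers on this (invented for the sketch, not from the thread): suppose ethical production costs an extra dollar per unit, and compare the counterfactual cost of covering that premium each way.

```python
# Hypothetical numbers: a $1/unit "ethical premium", covered two ways.
premium_per_unit = 1.00

# Route A: the consumer pays the premium at the register. The consumer's
# marginal dollar would otherwise have gone to ordinary consumption, so
# the cost to altruistic causes is roughly zero.
cost_to_altruism_consumer_pays = 0.00

# Route B: an altruist subsidizes the premium. That dollar would otherwise
# have funded whatever the altruist judges most effective, so every unit
# sold consumes a full dollar of high-impact donations.
cost_to_altruism_altruist_pays = premium_per_unit

# The premium is identical either way; the counterfactual cost is not.
assert cost_to_altruism_altruist_pays > cost_to_altruism_consumer_pays
```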

Comment author: JanBrauner 21 July 2017 04:54:38PM *  3 points [-]

Here is another argument for why the future with humanity is likely better than the future without it. Possibly, there are many things of moral weight that are independent of humanity's survival. And if you think that humanity cares more than zero about moral outcomes, then it might be better to have humanity around.

For example, in many scenarios of human extinction, wild animals would continue existing. In your post you assigned farmed animals enough moral weight to determine the moral value of the future, and wild animals should probably have even more moral weight: there are 10x more wild birds than farmed birds and 100-1000x more wild mammals than farmed animals (and of course many, many more fish, or even invertebrates). I am not convinced that wild animals' lives are on average not worth living (i.e., that they contain more suffering than happiness), but even without that, there surely is a huge amount of suffering. If you believe that humanity will have the potential to prevent or alleviate that suffering some time in the future, that seems pretty important.

The same goes for unknown unknowns. I think we know extremely little about what is morally good or bad, and maybe our views will fundamentally change in the (far) future. Maybe there are suffering non-intelligent extraterrestrials, maybe bacteria suffer, maybe there is moral weight in places where we would not have expected it (http://reducing-suffering.org/is-there-suffering-in-fundamental-physics/), maybe something completely different.

Let's see what the future brings, but it might be better to have an intelligent and at least slightly utility-concerned species around, as compared to no intelligent species.

Comment author: Brian_Tomasik 23 July 2017 05:50:20AM *  2 points [-]

For those with a strong suffering focus, there are reasons to worry about an intelligent future even if you think suffering in fundamental physics dominates, because intelligent agents seem to me more likely to want to increase the size or vivacity of physics rather than decrease it, given generally pro-life, pro-sentience sentiments (or, if paperclip maximizers control the future, to increase the number of quasi-paperclips that exist).

Comment author: Brian_Tomasik 23 July 2017 12:26:54AM *  8 points [-]

Thanks for the post! If lazy solutions reduce suffering by reducing consciousness, they also reduce happiness. So, for example, a future civilization optimizing for very alien values relative to what humans care about might not have much suffering or happiness (if you don't think consciousness is useful for many things; I think it is), and the net balance of welfare would be unclear (even relative to a typical classical-utilitarian evaluation of net welfare).

Personally I find it very likely that the long-run future of Earth-originating intelligence will optimize for values relatively alien to human values. This has been the historical trend whenever one dominant life form replaces another. (Human values are relatively alien to those of our fish ancestors, for example.) The main way out of this conclusion is if humans' abilities for self-understanding and cooperation make our own future evolution an exception to the general trend.
