MichaelStJules

If you don't care about where or when duplicate experiences exist, only their number, then not caring about duplicates at all gives you a fanatical wager against the universe having infinitely many moral patients, e.g. by being infinitely large spatially, going on forever in time, or having infinitely many pocket universes.

It would also give you a wager against the many-worlds interpretation of quantum mechanics, because there will be copies of you having identical experiences in branches that are already (at least slightly) physically distinct.

Also, I'd guess most people who value diversity of experience mean that only for positive experiences. I doubt most would mean that repeated bad experiences aren't as bad as diverse bad experiences, all else equal.

If you were being tortured, it seems horrible to create a copy of you being tortured identically (all else equal). I don't see why it would matter any less, let alone somewhat less or, as implied by your post, not at all.

(EDITED) And on this view, if a copy of you were to be tortured in mental states X in the future or elsewhere, then it wouldn't be (additionally) bad for you to be tortured in mental states X here and now. If you're impartial, you'd actually be wrong to disvalue your own torture.

Or you have to count only simultaneous states, or states within some band of time, or discount in some other way. But this doesn't get around simultaneous copies elsewhere.

But under diversity-valuing ethical theories, if we take a reasonable estimate of 10,000 meaningfully distinct shrimp minds at birth times 1 million possible external environmental inputs to those minds, that's only 10 billion distinct shrimp lived experiences.

Why is 10,000 meaningfully distinct shrimp minds at birth a reasonable estimate? Why is 1 million possible external environmental inputs to those minds a reasonable estimate?

Also, the argument doesn't take uncertainty about these numbers into account. You discuss the possibility that we could be away from the ceiling, but not what to do under uncertainty. If there's a 1% chance that nearly all shrimp experiences are meaningfully distinct in practice, then we can just multiply through by 1% to get a lower bound (sketched below).
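A rough sketch of that lower bound, writing $N$ for the total number of shrimp experiences ($N$ is a placeholder here, not a figure from the post):

$$\mathbb{E}[\text{distinct experiences}] \;\ge\; \Pr(\text{nearly all distinct}) \times N \;\approx\; 0.01\,N.$$

So if $N$ were above roughly $10^{12}$, this one term would already exceed the 10 billion ceiling.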

My impression is that Healthier Hens wouldn't have caused confusion for corporations dealing with other major asks, like the cage-free and broiler asks, because HH was planning to work directly with different targets, specifically farms and feed mills, starting in Kenya. Do you mean it would have just been better to further support corporate cage-free and broiler campaigns (about which you've stated skepticism here), or some other ask the movement would consolidate around?

They discuss things that didn't go well for them here: fundraising, feed testing, delays, survey response collection, and their split-feeding trial (which had negative results).

(I don't have much sense of Animal Ask's impact, either how much impact they're having or why. Some of their research looks useful, but I don't know how their work is informing decisions or at what scale.)

I agree there is something a bit weird about it, but I'm not sure I endorse that reaction. This doesn't seem so different from p-zombies, and probably not so different from some moral thought experiments.

I don't think it's true that everything we know about the universe would be equally undermined. Most things wouldn't be undermined at all, or at worst would need to be slightly reinterpreted. Our understanding of physics in our universe could still be about as reliable (depending on the simulation), and so could anything that follows from it. There's just more stuff outside our universe.

I guess you can imagine short simulations where all our understanding of physics is actually just implanted memories and fabricated records. But in doing so, you're throwing away too much of the causal structure that apparently explains our beliefs and makes them reliable. Longer simulations can preserve that causal structure.

Some other arguments that push in favour of functionalism, the consciousness of simulated brains (including the China brain and digital minds), and the consciousness of brains with other artificial neurons:

  1. We may ourselves be simulated in a similar way without knowing it, if our entire reality is also simulated. We wouldn't necessarily have access to what the simulation is run on.
  2. In a simulated brain and the conscious biological brain it simulates, introspection would give the brains the same beliefs about phenomenal properties and qualia, because it's only sensitive to the causal/functional structure at a given level of detail, and those details are by design/assumption preserved under simulation. If the biological brain is phenomenally conscious, but the simulated brain is not, then it's a surprising coincidence that the resulting beliefs about phenomenal consciousness are accurate in the biological brain but not in the simulated brain. Introspection doesn't seem to give the biological brain any more reason to believe in its own phenomenal consciousness than it gives the simulated brain to believe in its own, because introspection is only sensitive to causal/functional details common to both.[1]
  3. It's hard for me to imagine a compelling explanation of our consciousness that doesn't extend to simulated brains, including the China brain and digital minds. Theories out there now don't seem on track to address the hard problem, and this and other reasons (like the above) incline me to dissolve it and accept illusionism about phenomenal properties/consciousness. Illusionism is generally functionalist, and I don't see how an illusionist theory would deny the consciousness of the China brain and digital simulations of brains.
  1. ^

    This is essentially the coincidence argument for illusionism in Chalmers, 2018.

(EDIT: Split this up into two comments, the other here.)

I think that there's probably a minimum level of substrate independence we should accept, e.g. that it doesn't matter exactly what matter a "brain" is made out of, as long as the causal structure is similar enough on a fine enough level. The mere fact that neurons are largely made out of carbon doesn't seem essential. Furthermore, human and (apparently) conscious animal brains are noisy and vary substantially from one another, so exact duplication of the causal structure doesn't seem necessary, as long as the errors don't accumulate so much that the result isn't similar to a plausible state of a conscious biological brain.[1] So, I'm inclined to say that we could replace biological neurons with artificial neurons and retain consciousness, at least in principle, but it could depend on the artificial neurons.

It's worth pointing out that the China brain[2] and a digital mind (or digital simulation of a mind, on computers like today's) aren't really causally isomorphic to biological brains even if you ignore a lot of the details of biological brains. Obviously, you also have to ignore a lot of the details of the China brain and digital minds. But I could imagine that the extra details in the China brain and digital minds make a difference.

  1. In a simulated neuron, in both the China brain and digital minds, there are details to ignore. In the China brain, that's all the stuff happening inside each person simulating a neuron. For a digital mind, there's probably lots of extra hardware stuff going on.
  2. In a digital mind on a computer like today's computers, or even one distributed across hundreds of computers or processing units (CPU cores, GPUs), it seems you must ignore the fact that the digital state transitions are orchestrated centrally "from the outside", today through some kind of loop (e.g. a for-loop or while-loop, or some number of these with some asynchronous distribution; see the sketch after this list). Individual biological neurons act relatively autonomously/asynchronously, just in response to local neural activity (including electrical and chemical signals), without this kind of external central orchestration. Actually, if you were to ignore the centralized orchestration in a digital mind, then depending on how you cash that out, the digital mind might never change states, so maybe the digital mind isn't actually isomorphic to a biological brain at the right level(s) of causal structure for each, at all.
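To make the orchestration point concrete, here's a minimal, hypothetical Python sketch (my illustration, not anything from the post or the literature): every state transition in the simulated "brain" happens only because one external driver loop steps the neurons.

```python
# Hypothetical toy sketch (not from the comment): a digital "brain" whose
# neurons only change state when a central driver loop steps them.
# Biological neurons have no analogue of run_simulation(); they respond
# autonomously to local electrical/chemical signals.

class SimulatedNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0
        self.fired = False

    def step(self, input_signal):
        # Integrate the input and fire if the threshold is crossed.
        self.potential += input_signal
        self.fired = self.potential >= self.threshold
        if self.fired:
            self.potential = 0.0


def run_simulation(neurons, weights, external_input, n_steps):
    # Central, synchronous orchestration "from the outside": every state
    # transition happens only because this loop calls step().
    for t in range(n_steps):
        # Inputs for this tick: external drive plus spikes from the previous tick.
        inputs = [
            external_input(t, i)
            + sum(weights[j][i] for j, pre in enumerate(neurons) if pre.fired)
            for i in range(len(neurons))
        ]
        for neuron, x in zip(neurons, inputs):
            neuron.step(x)


# Tiny 3-neuron chain: constant drive to neuron 0, which feeds neuron 1,
# which feeds neuron 2.
neurons = [SimulatedNeuron() for _ in range(3)]
weights = [[0.0, 0.6, 0.0],
           [0.0, 0.0, 0.6],
           [0.0, 0.0, 0.0]]
run_simulation(neurons, weights, lambda t, i: 0.5 if i == 0 else 0.0, n_steps=10)
print([n.fired for n in neurons])
```

If you delete run_simulation, the neuron objects never change state, which is one way of cashing out the worry that, minus the orchestration, the digital mind might never change states.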

These extra details make me less sure that we should attribute consciousness to the China brain and digital minds, but they don’t seem decisive.

  1. ^

    From footnote 4 of Godfrey-Smith, 2023 (based on the talk he gave):

    At the NYU talk, Chalmers raised a passage from The Conscious Mind (p. 331) where he claims, in relation to replacement scenarios, that "when it comes to duplicating our cognitive capacities, a close approximation is as good as the real thing." His argument is that in biological systems, random "noise" processes play a role (greater than the role of any analogous processes in a computer). When the biological system performs some operation, the outcome is never entirely reliable and will instead fall within a band of possibilities. An artificial duplicate of the biological system only has to give a result somewhere in that band. The duplicate's output might depart from what the biological system actually does, on some occasion, but the biological system could just as well have produced the same output as the duplicate, if noise had played a different role. When a duplicate gives a result within the band, it is doing "as well as the system itself can reliably do."

    In response, it is true that this role for noise is an important micro-functional feature of living systems. In addition, neurons change what they do as a result of their normal operation, they don't respond to the "same" stimulus twice in the same way (see "Mind, Matter, and Metabolism" for references). The "rules" or the "program" being followed are always changing as a result of the activity of the system itself and its embedding in other biological processes. Over time, the effects of these factors will accumulate and compound – a comparison of what a living system and a duplicate might do in a single operation doesn't capture their importance. I see all this not as a "lowering of the bar" that enables us to keep talking in a rough way about functional identity, but another functional difference between living and artificial systems.

  2. ^

    From the Wikipedia page:

    the China brain thought experiment (also known as the Chinese Nation or Chinese Gym) considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?

    (China's population, at 1.4 billion, isn't large enough for each person to simulate only one neuron and thereby simulate a whole human brain with >80 billion neurons, but we could imagine a larger population, or a smaller animal brain being simulated, e.g. that of various mammals or birds.)

Downsides and risks should also be considered. You write:

and even contribute to safe AI governance (e.g. by securing the US AI lead over China)

but it could also accelerate AI capabilities progress, which would leave less time for AI safety work.

There's also the meat-eater problem, i.e. increasing animal product consumption and factory farming, if we help people move to countries where they'll consume more animal products.
