Also, I'd guess most people who value diversity of experience mean that only for positive experiences. I doubt most would say that repeated bad experiences are less bad than diverse bad experiences, all else equal.
If you were being tortured, it seems horrible to create a copy of you being tortured identically (all else equal). I don't see why it would matter any less, let alone somewhat less or, as implied by your post, not at all.
(EDITED) And if a copy of you were to be tortured in mental states X in the future or elsewhere, then on this view it wouldn't be additionally bad for you to be tortured in mental states X here and now. If you're impartial, you'd actually be wrong to disvalue your own torture.
Or, you'd have to count only simultaneous states, or states within some band of time, or discount in some other way. But this doesn't get around simultaneous copies elsewhere.
But under diversity-valuing ethical theories, if we take a reasonable estimate of 10,000 meaningfully distinct shrimp minds at birth times 1 million possible external environmental inputs to those minds, that's only 10 billion distinct shrimp lived experiences.
Why is 10,000 meaningfully distinct shrimp minds at birth a reasonable estimate? Why is 1 million possible external environmental inputs to those minds a reasonable estimate?
Also, the argument doesn't take into account uncertainty about these numbers. You discuss the possibility that we could be far from the ceiling, but not what to do under uncertainty. If there's a 1% chance that nearly all shrimp experiences are meaningfully distinct in practice, then we can just multiply through by 1% as a lower bound.
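The arithmetic in the two points above can be sketched out explicitly. This is just an illustration: the two estimates come from the post being replied to, and the total number of shrimp experiences used for the lower bound is a made-up placeholder, not a real figure.

```python
# The post's estimates (assumed, not independently justified)
distinct_minds_at_birth = 10_000      # meaningfully distinct shrimp minds at birth
environmental_inputs = 1_000_000      # possible external environmental inputs per mind

# Ceiling on distinct shrimp lived experiences under the post's model
distinct_experiences = distinct_minds_at_birth * environmental_inputs
print(distinct_experiences)  # 10,000,000,000, i.e. 10 billion

# Under uncertainty: if there's a 1% chance that nearly all shrimp
# experiences are meaningfully distinct in practice, the expected number
# of distinct experiences is at least 1% of the total number of shrimp
# experiences, whatever that total turns out to be.
p_nearly_all_distinct = 0.01
total_shrimp_experiences = 10**14  # hypothetical placeholder figure
lower_bound = p_nearly_all_distinct * total_shrimp_experiences
print(lower_bound)
```

So even a small credence that duplication is rare in practice can dominate the expected count, if the total number of experiences is much larger than the claimed ceiling.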
My impression is that Healthier Hens wouldn't have caused confusion for corporations dealing with other major asks like cage-free or broiler asks, because HH was planning to work directly with different targets, specifically farms and feed mills, and in Kenya to start. Do you mean it would have just been better to further support corporate cage-free and broiler campaigns (about which you've stated skepticism here), or another ask the movement would consolidate to focus on?
They discuss things that didn't go well for them here: fundraising, feed testing, delays, survey response collection and (negative results in their) split-feeding trial.
(I don't have much sense about the impact of Animal Ask, both how much impact they're having and why. Some of their research looks useful, but I don't know how their work is informing decisions or at what scale.)
I agree there is something a bit weird about it, but I'm not sure I endorse that reaction. This doesn't seem so different from p-zombies, and probably not so different from some moral thought experiments.
I don't think it's true that everything we know about the universe would be equally undermined. Most things wouldn't be undermined at all or at worst would need to be slightly reinterpreted. Our understanding of physics in our universe could still be about as reliable (depending on the simulation), and so would anything that follows from it. There's just more stuff outside our universe.
I guess you can imagine short simulations where all our understanding of physics is actually just implanted memories and fabricated records. But in doing so, you're throwing away too much of the causal structure that apparently explains our beliefs and makes them reliable. Longer simulations can preserve that causal structure.
Some other arguments that push in favour of functionalism and the consciousness of simulated brains (including the China brain and digital minds) and of brains with artificial neurons:
This is essentially the coincidence argument for illusionism in Chalmers (2018).
(EDIT: Split this up into two comments, the other here.)
I think that there's probably a minimum level of substrate independence we should accept, e.g. that it doesn't matter exactly what matter a "brain" is made out of, as long as the causal structure is similar enough on a fine enough level. The mere fact that neurons are largely made out of carbon doesn't seem essential. Furthermore, human and (apparently) conscious animal brains are noisy and vary substantially from one another, so exact duplication of the causal structure doesn't seem necessary, as long as the errors don't accumulate so much that the result isn't similar to a plausible state for a plausible conscious biological brain.[1] So, I'm inclined to say that we could replace biological neurons with artificial neurons and retain consciousness, at least in principle, but it could depend on the artificial neurons.
It's worth pointing out that the China brain[2] and a digital mind (or digital simulation of a mind, on computers like today's) aren't really causally isomorphic to biological brains even if you ignore a lot of the details of biological brains. Obviously, you also have to ignore a lot of the details of the China brain and digital minds. But I could imagine that the extra details in the China brain and digital minds make a difference.
These extra details make me less sure that we should attribute consciousness to the China brain and digital minds, but they don’t seem decisive.
From footnote 4 of Godfrey-Smith (2023) (based on the talk he gave):
At the NYU talk, Chalmers raised a passage from The Conscious Mind (p. 331) where he claims, in relation to replacement scenarios, that "when it comes to duplicating our cognitive capacities, a close approximation is as good as the real thing." His argument is that in biological systems, random "noise" processes play a role (greater than the role of any analogous processes in a computer). When the biological system performs some operation, the outcome is never entirely reliable and will instead fall within a band of possibilities. An artificial duplicate of the biological system only has to give a result somewhere in that band. The duplicate's output might depart from what the biological system actually does, on some occasion, but the biological system could just as well have produced the same output as the duplicate, if noise had played a different role. When a duplicate gives a result within the band, it is doing "as well as the system itself can reliably do."
In response, it is true that this role for noise is an important micro-functional feature of living systems. In addition, neurons change what they do as a result of their normal operation, they don't respond to the "same" stimulus twice in the same way (see "Mind, Matter, and Metabolism" for references). The "rules" or the "program" being followed are always changing as a result of the activity of the system itself and its embedding in other biological processes. Over time, the effects of these factors will accumulate and compound – a comparison of what a living system and a duplicate might do in a single operation doesn't capture their importance. I see all this not as a "lowering of the bar" that enables us to keep talking in a rough way about functional identity, but another functional difference between living and artificial systems.
From the Wikipedia page:
the China brain thought experiment (also known as the Chinese Nation or Chinese Gym) considers what would happen if each member of the Chinese nation were asked to simulate the action of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Would this arrangement have a mind or consciousness in the same way that brains do?
(China's population, at 1.4 billion, isn't large enough for each person to simulate only one neuron and thereby simulate a whole human brain with >80 billion neurons, but we could imagine a larger population, or a smaller animal brain being simulated, e.g. that of various mammals or birds.)
Downsides and risks should also be considered. You write:
and even contribute to safe AI governance (e.g. by securing the US AI lead over China)
but it could also accelerate AI capabilities progress, which would leave less time for AI safety work.
There's also the meat-eater problem, i.e. increasing animal product consumption and factory farming, if we help move people to countries where they'll consume more animal products.
If you don't care about where or when duplicate experiences exist, only their number, then not valuing duplicates at all gives you a fanatical wager against the universe containing infinitely many moral patients, e.g. by being spatially infinite, going on forever in time, or having infinitely many pocket universes.
It would also give you a wager against the many-worlds interpretation of quantum mechanics, because there will be copies of you having identical experiences in branches that are already (at least slightly) physically distinct.