
(Warning: a thought experiment I'm referencing here is a spoiler for a novel called Permutation City by Greg Egan. I've added several lines of dust below to give you a chance to bail out if you don't want to be spoiled.)

.

..

....

..

......

...

..

.

...

.

....

.

"The problem of the dust" is, I think, called "dust theory" in the novel. The idea is that, if you buy that simulations of people run on computers can be conscious, then presumably you think that consciousness is substrate-independent. Also, presumably you identify consciousness with a series of discrete states, and are using some mapping from the physical world to those states (e.g. the mapping from voltages to the states {0, 1} that we use in computers). Presumably also the specific mapping doesn't matter to you — you don't care at what voltage we've decided to call something 0 and at what voltage we've decided to call it 1, for instance.

But if the substrate doesn't matter to you, and neither does the mapping, then what stops me from looking at a cloud of dust floating in space and concocting some extremely contrived mapping which says that the position of the dust particles at time t just so happens to represent the state of your brain as you open your mouth to eat an ice cream sandwich, the position at time t+1 represents your state as you bite down, and so on? Have I now "simulated" you eating an ice cream sandwich?
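To make the contrivance concrete, here's a minimal Python sketch (all states and positions invented): the "mapping" is just a lookup table that pairs whatever the dust happens to be doing with whatever brain states we want it to represent.

```python
import random

# Hypothetical brain states we want the dust to "simulate."
brain_states = ["open_mouth", "bite_down", "chew", "swallow"]

# Random snapshots of a dust cloud at successive times; the particle
# positions have nothing to do with brains.
def dust_snapshot(n_particles=5):
    return tuple(
        (round(random.uniform(0, 100), 2), round(random.uniform(0, 100), 2))
        for _ in range(n_particles)
    )

snapshots = [dust_snapshot() for _ in brain_states]

# The "extremely contrived mapping": declare, by fiat, that each observed
# dust configuration represents the corresponding brain state.
mapping = dict(zip(snapshots, brain_states))

for t, snap in enumerate(snapshots):
    print(f"t={t}: dust {snap} -> {mapping[snap]}")
```

Notice that the lookup table itself contains the entire "simulation"; the dust contributes nothing that couldn't be replaced by any other sequence of distinct states.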

(From what I remember, this is just dust-theory-lite, without the additional idea of messing with the temporal ordering of the states. But I think it's all I need to make the point.)

Bostrom's Simulation Argument asks us to consider a posthuman civilization with an enormous amount of computing power, and whether it would devote some of that power to simulating its ancestors. If such civilizations are likely to exist and to run many ancestor simulations, the argument goes, then we're probably in one of those simulations. But the problem of the dust is that it seems like we should think a very large (or infinite?) number of simulations are happening anyway, and hence that we're probably in one of those. I wouldn't say that "dust theory" refutes the Simulation Argument, but to me it seems to indicate that there's something confused about my concept of "being simulated," and hence I feel inclined to back off arguments that involve it.

I'm curious about solutions to 'the problem of the dust' and/or how people square it with their beliefs about the simulation hypothesis.

(Greg Egan says in his FAQ on the novel that he takes dust theory "[n]ot very seriously, although I have yet to hear a convincing refutation of it on purely logical grounds." He goes on: "I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.")


Answers

Maybe relevant:

Is There Suffering in Fundamental Physics?

By Brian Tomasik


"This essay explores the speculative possibility that fundamental physical operations—atomic movements, electron orbits, photon collisions, etc.—could collectively deserve significant moral weight. While I was initially skeptical of this conclusion, I've since come to embrace it. In practice I might adopt a kind of moral-pluralism approach in which I maintain some concern for animal-like beings even if simple physics-based suffering dominates numerically. I also explore whether, if the multiverse does contain enormous amounts of suffering from fundamental physical operations, there are ways we can change how much of it occurs and what the distribution of "experiences" is. An argument based on vacuum fluctuations during the eternal lifetime of the universe suggests that if we give fundamental physics any nonzero weight, then almost all of our expected impact may come through how intelligence might transform fundamental physics to reduce the amount of suffering it contains. Alas, it's not clear whether negative-leaning consequentialists should actively promote concern for suffering in physics, even if they personally care a lot about it."

Thanks, but I think this is a different topic.

I guess I'd just say: yes, a cloud could be conscious if it reached some level of complexity (which I don't think clouds do). The mapping and substrate are not irrelevant; there has to be some kind of complex network that produces mental states (though I haven't looked into theories of consciousness much). I don't see how this proves the impossibility of simulating consciousness, though.

Comments

This is a very interesting and weird problem. It feels like the solution should have something to do with the computational complexity of the mapping: e.g., is it a mapping that could be computed in polynomial time, or would it take exponential time? If the mapping function is as expensive to compute as just simulating the brain in the first place, then the dust hasn't really done any of the computational work.

Another way of looking at this: if you do take the dust argument seriously, why do you even need the dust at all? The mapping from dust to mental states exists in the space of mathematical functions, but so does the mapping from time straight to mental states, with no dust involved.

I guess the big question here is when does a sentient observer contained inside a mathematical function "exist"? What needs to happen in the physical universe for them to have experiences? That's a really puzzling and interesting question.

Hmm. Thanks for the example of the "pure time" mapping of t --> mental states. It's an interesting one. It reminds me of Max Tegmark's mathematical universe hypothesis at "level 4," where, as far as I understand, all possible mathematical structures are taken to "exist" equally. This isn't my current view, in part because I'm not sure what it would mean to believe this.

I think the physical dust mapping is meaningfully different from the "pure time" mapping. The dust mapping could be defined by the relationships between dust specks. E.g., I identify each pair of dust specks with a different neuron in George Soros's brain, then say: "at time t+1, if a pair of dust specks is farther apart than it was at time t, the associated neuron fires; if a pair is closer together, the associated neuron does not fire."

This could conceivably fail if there aren't enough pairs of dust specks in the universe to make the numbers work out. The "pure time" mapping could never fail to work; it would work (I think) even in an empty universe containing no dust specks. So it feels less grounded, and like an extra leap.
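Here's a minimal Python sketch of that pairwise-distance mapping (toy numbers; the speck positions and neuron labels are invented for illustration):

```python
import math
import random
from itertools import combinations

random.seed(0)
N_SPECKS = 6  # toy value; a brain-sized mapping would need vastly more pairs

# Invented speck positions at times t and t+1.
pos_t = {i: (random.uniform(0, 10), random.uniform(0, 10)) for i in range(N_SPECKS)}
pos_t1 = {i: (random.uniform(0, 10), random.uniform(0, 10)) for i in range(N_SPECKS)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Identify each pair of specks with a (hypothetical) neuron.
neurons = {pair: f"neuron_{k}"
           for k, pair in enumerate(combinations(range(N_SPECKS), 2))}

# The rule above: a pair that drifted apart between t and t+1 means its
# neuron fires; a pair that drew closer means it stays silent.
for (i, j), name in neurons.items():
    fires = dist(pos_t1[i], pos_t1[j]) > dist(pos_t[i], pos_t[j])
    print(f"{name} (specks {i},{j}): {'fires' if fires else 'silent'}")
```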

...

I agree that it seems like there's something here around "how complex is the mapping." I think what we care about is the complexity of the description of the mapping, though, rather than the computational complexity. The George Soros mapping is pretty quick to compute once defined, I think? All the work seems hidden in the definition: how do I know which pairs of dust specks should correspond to which neurons?
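To gesture at where that hidden work lives: even before computing anything, the mapping has to record which speck pair stands for which neuron. A back-of-the-envelope sketch in Python (the neuron count is the commonly cited ~8.6e10 human-brain figure, used loosely):

```python
import math

n = 86_000_000_000  # ~8.6e10 neurons, a commonly cited human-brain figure

# Even if we've already fixed which n speck pairs to use, there are n!
# ways to assign them to neurons, so the assignment alone takes about
# log2(n!) bits to write down. Stirling: log2(n!) ~ n * log2(n / e).
bits = n * math.log2(n / math.e)
print(f"~{bits:.2e} bits")  # roughly 3e12 bits, i.e. a few hundred gigabytes

# The per-step rule (compare two distances per pair) is trivial by
# comparison; nearly all of the complexity sits in the mapping's definition.
```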

This seems similar to Boltzmann brains. Do you think it differs?

It seems related but different. E.g. Boltzmann brains expect to die in the next second, but dust-brains do not.
