
---
title: Cellular P-Zombies
date: 1970-01-01
tags:
  - cellular automatons
  - materialism
  - non-dualism
techne: :wip
episteme: :speculation
slug: ?p=885
---

Cellular automaton implies no green, but green, therefore no mind. Checkmate, materialists!

Ok, that was the short version, now the long version.

The argument is fundamentally very similar to Chalmers' P-Zombie argument. (Wait! I swear I'm not peddling dualism here. I'm not even trolling. (Ok, maybe a little bit.)) However, it has the advantage of being a really simple setup that you can grasp visually. That's neat. But it's also the exact argument in my head that convinced me to be skeptical of materialism, and for some reason, people rarely argue using cellular automatons. They are so neat. I only remember one case of Dennett using them, in *Freedom Evolves*. Let's give 'em a second chance. (The general argument is not at all new, but the presentation in terms of cellular automatons might be.)

GoL

reductionism vs. HashLife

You can build some interesting patterns in GoL:

But wait, there's more! GoL is actually Turing-complete. You can run any kind of computation on the board. Here's an actual Turing machine implementation:

Don't underestimate these little automatons. They are seriously powerful. Anything your PC can do, they can do. (Well, not always as fast, but they can.) They have some cool specific uses in biology, cryptography and so on. They aren't just pretty toys.

So if the materialists are right, then minds are to be identified with brain-states, typically computations done by neurons. This implies that any Turing-complete machine can run a mind, and this mind would be indistinguishable from ours. (Except maybe with regard to performance or resource requirements.)

This makes cellular automatons interesting for philosophical arguments. Fundamentally, there's nothing in GoL that plain materialism can't deal with. In fact, it's so similar to the materialistic ontology that I will soon use it as a substitute to argue against materialism. But before we get there, let's have a closer look at the metaphysics.

What's the ontology in GoL? In less fancy terms, what kind of stuff and what ways to change stuff do we have?

We have a discrete, infinite, 2-dimensional space. Each point in that space has 2 possible states - on or off. We have a discrete, infinite time - a simple counter, really. (We could also look at finite versions, both temporally and spatially. Finite boards are, of course, equivalent to infinite boards that are mostly empty.)
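To make that concrete, here's a minimal sketch of this ontology as data. (The representation and names are mine, nothing canonical: space is integer coordinates, each point is on or off, time is a counter, and a mostly-empty infinite board is just the set of points that are on.)

```python
from typing import Set, Tuple

Point = Tuple[int, int]   # a point in the discrete 2D space
Board = Set[Point]        # the set of points that are currently "on"

# Some finite pattern on the (mostly empty) infinite board - a glider, as it happens.
board: Board = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
t = 0                     # time is just a counter
```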

What neat properties can we see? Well, it's all deterministic. No probabilities involved at all. Furthermore, there are no hidden variables or unique histories. You can just take the board at any point in time you want and start running it from there. You don't need to know anything about its past at all. The computations are all nicely local, both in space and time.

There are also no individual particles. In fact, there are no particles at all. (You could say there are relations between particles, without there being any actual particles.) You only have a space that consists of points that have possible states. That's all. There is no "on particle" traveling over the board. It might look like that, as patterns get propagated, but that's only an additional abstraction you might use to understand what's going on. The board has no movement, only state changes. (Zeno made that point a long time ago.)

Furthermore, one could eliminate time by thinking of the possible states of the whole board as nodes in a directed graph, with each board pointing at its successor under the rule.

If you think about it that way, then there is no objective time and no privileged board. There are just ((in)finitely many) board configurations, and they are causally linked, as determined by the transition rule we decided on. So you can take any board and ask: if I apply this rule, which boards can I reach, and which boards can reach me? (Why would you prefer a timeless setup over a timed one? Because it's algorithmically simpler. You don't have to specify "these are the rules, and only this board and its descendants exist"; you just say "all boards exist". The downside is that you now have all boards flying around, and they require many more resources. But the rules are simpler. It's a trade-off. For our purposes, both approaches are fine. This argument works either way.)
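For a toy illustration of this timeless view, here's a sketch of my own. It assumes a tiny wrap-around board, so that the state space stays finite: every possible configuration is a node, and the transition rule supplies the edges. "Which boards can I reach?" and "which boards can reach me?" become plain lookups.

```python
from itertools import product

N = 3  # a tiny N x N wrap-around board, so there are only 2**(N*N) = 512 configurations

def successor(board):
    """Apply the standard Life rule (B3/S23) once; `board` is a tuple of N*N bools, row-major."""
    nxt = []
    for i in range(N * N):
        x, y = divmod(i, N)
        neighbors = sum(board[((x + dx) % N) * N + (y + dy) % N]
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
        nxt.append(neighbors == 3 or (board[i] and neighbors == 2))
    return tuple(nxt)

# The timeless picture: all boards "exist", the rule merely links them.
edges = {b: successor(b) for b in product((False, True), repeat=N * N)}

# "Which boards can reach me?" is just the reverse lookup.
predecessors = {}
for src, dst in edges.items():
    predecessors.setdefault(dst, []).append(src)
```

(On the infinite board the graph is infinite too, of course; the point is only that nothing beyond the rule is needed to define it.)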

Are we missing anything? No. I can totally run this now. This is literally all I need to know to write a program that runs the Game of Life. I could also run it using a Go board or rocks in the desert. Causally speaking, we're done.
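For what it's worth, here is one such program - again a sketch of my own, representing the infinite board as the set of cells that are on and applying the standard rule to it:

```python
from collections import Counter

def step(live):
    """One generation of Life on the unbounded board.
    `live` is the set of (x, y) cells that are on; every other cell is off."""
    # For every cell adjacent to a live cell, count its live neighbors.
    neighbor_counts = Counter((x + dx, y + dy)
                              for (x, y) in live
                              for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                              if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Run a glider for a few generations.
board = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    board = step(board)
```

HashLife and friends only change how fast you can compute this, not what is being computed.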

Now where's the conscious awareness?

This question might sound a bit inane, even cranky - like someone pointing at a radio and the electronics within it and asking, "Where's the music? Can you show me the music?" But really, think about it. Where's the conscious awareness?

If there are mental phenomena, they must exist somewhere in the ontology. So what candidates do we have?

There's the cells. Personally, I think that's the most natural place to look. But each cell is only connected to 8 neighboring cells. Not more. That's it. They are entirely local. So even if there are mental phenomena involved in cells, there could only be a tiny amount of them. (At most 512, in fact.) So this doesn't get us large-scale phenomena like "green apple".
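The 512, presumably, is just the size of a cell's local state space - its own state plus the states of its 8 neighbors. (That reading is my guess; it's not spelled out above.)

$$2^{1+8} = 2^{9} = 512$$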

Maybe it's in time? Well, time is not fundamental. Time is itself just an artifact of the way we phrased our transition rule. Not a good candidate.

It could be an aspect of the rules. But the rules are extremely simple. A dead cell with exactly 3 live neighbors turns on; a live cell with 2 or 3 live neighbors stays on; every other cell is off. That's all there is. You might include the initial board configuration in the rules, but "initial" isn't all that meaningful in the timeless formulation. And materialists generally believe that conscious minds evolved from non-conscious matter, so at some point mental states would have to emerge. They can't be there in the rules from the beginning. This doesn't work either.

We have one last remaining candidate - all of state-space. The whole board could have mental states. Certainly a plausible guess. But then, wouldn't you expect mental phenomena to always be global? And unless you are a solipsist, you probably think there is more than one mind in the universe. So that's not good either.

It's as if minds were confined to a certain subset of cells, a certain region of the board. But where do these borders come from? They are not in the rules. The cells don't know about them. There would have to be a separate set of rules, additional to everything we know, that determines which states are mental and which aren't. That's property dualism. (Chalmers defends it. Many physicalists are property dualists in denial. I'm not particularly fond of it, personally. I don't like dualisms.)

Or you simply deny mental states. It's the obvious implication, really. If you didn't know that consciousness existed, if you were some computer scientist from a P-Zombie universe without mental phenomena, would you ever suspect any? Probably not. And just as naturally, why not dismiss all this talk about "experience" as confused? Take a thoroughly third-person perspective and get rid of consciousness. (Dennett seems to try this, though I can't make sense of half the stuff he says.)

There's one last possibility. You might say that the mental states are in the computation. It's not the actual machine that matters, it's the causal entanglement in the software that runs on it. But if you take this view, then what do you need the machine for? You really don't. You don't need instances, you don't need worlds at all. You just need raw math, just dependencies. It's all there in the decision theory. And as much sympathy as I have for this position, it's still not physicalism, and certainly not materialism. It's algorithmic idealism.

Here's another way to look at it. Imagine an infinite board filled with a properly random arrangement of cells. Any sub-pattern you can think of occurs somewhere on the board. If (non-eliminative) materialism is right, we should be able to do the following:

We pick a specific location and zoom in. In this snapshot, there is no conscious mind.

But then as we zoom out more (and this is slightly misleading because we would have to zoom out a lot), eventually we would observe a conscious mind.

And as we zoom out even more, other minds would appear, separate from the first one.

What property in the cellular automaton do we use to draw these boundaries? Is there any reason to say the patterns inside these boundaries are conscious, but if we shift them all one cell to the left, they aren't? Excuse me, but I'm invoking the argument from incredulity here.

Now if there were a way to connect certain cells, if they shared a common state, were in some way entangled, then this claim would seem plausible. There would be some internal information we could use to pick out patterns without imposing our own (arbitrary) interpretation on the board. But there is no such shared state in a Turing machine. Sucks, bro.

And if you haven't been screaming "But muflax, you overlooked obvious feature X!" for a couple of paragraphs now (and if you have, please let me know), then I'm done. Case closed.

Abandon materialism all ye who experience green.