Is the mind a machine?

My latest 3QD essay explores the “mind as machine” metaphor, and metaphors in general.

Putting the “cog” in “cognitive”: on the “mind as machine” metaphor

Here’s an excerpt:

People who study the mind and brain often confront the limits of metaphor. In the essay ‘Brain Metaphor and Brain Theory’, the vision scientist John Daugman draws our attention to the fact that thinkers throughout history have used the latest material technology as a model for the mind and body. In the Katha Upanishad (which Daugman doesn’t mention), the body is a chariot and the mind is the reins. For the pre-Socratic Greeks, hydraulic metaphors for the psyche were popular: imbalances in the four humors produced particular moods and dispositions. By the 18th and 19th centuries, mechanical metaphors predominated in western thinking: the mind worked like clockwork. The machine metaphor has remained with us in some form or other since the industrial revolution: for many contemporary scientists and philosophers, the only debate seems to be about what sort of machine the mind really is. Is it an electrical circuit? A cybernetic feedback device? A computing machine that manipulates abstract symbols? Some thinkers are so convinced that the mind is a computer that they invite us to abandon the notion that the idea is a metaphor. Daugman quotes the cognitive scientist Zenon Pylyshyn, who claimed that “there is no reason why computation ought to be treated merely as a metaphor for cognition, as opposed to the literal nature of cognition”.

Daugman reacts to this Whiggish attitude with a confession of incredulity that many of us can relate to: “who among us finds any recognizable strand of their personhood or of their experience of others and of the world and its passions, to be significantly illuminated by, or distilled in, the metaphor of computation?” He concludes his essay with the suggestion that “[w]e should remember that the enthusiastically embraced metaphors of each “new era” can become, like their predecessors, as much the prisonhouse of thought as they at first appeared to represent its liberation.”

Read the rest at 3 Quarks Daily:

Putting the “cog” in “cognitive”: on the “mind as machine” metaphor

Could the brain be a radio for receiving consciousness?

 

Here’s an answer I wrote a while ago to the following question:

Is there any conclusive proof that the brain produces consciousness? What rules out the case that the brain acts as a receptor antenna for consciousness?

This is actually a fun question! Taken in the right spirit, it can be a good way to learn about what science is, and also what the limitations of science are.

What would count as proof that the brain produces consciousness? In the future we might try an experiment like this: we build an artificial brain. Let’s say we can all agree that it exhibits consciousness (leaving aside for now the extremely tricky question of what the word “consciousness” even means). Would this prove that the brain “produced” consciousness? Maybe.

But maybe the brain-as-antenna crowd would claim that their favored hypothesis hasn’t been ruled out. After all, if consciousness is somehow floating in the ether, how could we be sure that our artificial brain wasn’t just tuned to the ‘consciousness frequency’, like a gooey pink radio?

We’d need to construct some kind of cosmic-consciousness-blocking material, and then line the walls of our laboratory with it. Then we’d be able to decide the question one way or the other! If our artificial brain showed no signs of consciousness, the antenna crowd could claim victory, and say “See! You need cosmic consciousness in order to get biological consciousness! Consciousness is like yogurt: if you have some you can always make more.”

Constructing an artificial brain is hard enough. We have no idea if we will ever have enough understanding of neuroscience to do so. But constructing a consciousness-shield is straight out of science fiction, and just sounds absurd.


In any case, there’s actually a much bigger problem facing any scientific approach to consciousness. No one has any idea what consciousness is. Sure, there’s plenty of philosophical speculation and mystical musing, but in my opinion there’s almost nothing solid from a scientific perspective.

Here’s why I think science cannot ever address the subject of consciousness: science studies objectively observable phenomena, whereas the most crucial aspect of consciousness is only subjectively observable. What are objectively observable phenomena? They’re the ones that more than one person can observe and communicate about. Through communication, they can agree on their properties. So the word “inter-subjective” is a pretty good synonym for “objective”. Objectivity is what can be agreed upon by multiple subjective perspectives.

So the sun is a pretty objective feature of reality. We can point to it, talk about it, and make measurements of it that can be corroborated by independent groups of people.

But consciousness is not objective in the same way that the sun is. I do not observe anyone else’s consciousness. All I observe are physical perceptions: the sights and sounds and smells and textures associated with bodies. From these perceptions I build up a picture of the behavior of an organism, and from the behavior I infer things about the organism’s state of mind or consciousness. The only consciousness I have direct experience of is my own. And even my own consciousness is mysterious. I do not necessarily observe my consciousness. I observe with my consciousness. Consciousness is the medium for observation, but it is not necessarily a target of observation.

Clearly all the scientists who claim to study consciousness would disagree with my perspective. Their approach is to take some observable phenomenon — either behavior or some neural signal — and define it as the hallmark of consciousness. There’s nothing wrong with defining consciousness as you see fit, but you can never be completely sure if your explicit definition lines up with all your intuitions about the boundary between conscious and non-conscious.

For example, Integrated Information Theory (IIT) proposes that there is a quantity called phi (which at the current historical juncture appears impossible to compute) that captures the degree of consciousness in a system. Armed with this kind of theory, it is possible to argue* that extended, abstract entities — such as the United States as a whole — are conscious. Some people like this generous approach. Why lock up consciousness in skulls? The proponents of IIT have gone so far as to claim that they are okay with panpsychism: the idea that everything from quarks to quasars is at least a little bit conscious.

If everything is conscious, then the question of whether the brain “produces” consciousness — or the universe “transmits” it — becomes moot. There is no ‘problem of consciousness’, since it’s already everywhere.

Neuroscientists like me will probably still have jobs even if society decides to bite the panpsychist bullet. We have other things to worry about beyond consciousness. In fact many of us are actively uninterested in talking about consciousness — we call it “the c-word”. We’re happy to just study behavior in all its objectively observable glory, and hope to understand how the brain produces that. Whether and where exactly consciousness arises during this process seems like a question we can leave unanswered for a generation or two (while enjoying the various after-work conversations about it, of course!). For now we can focus on how our gooey pink radios give rise to language, or memory, or emotion, or even the basic control of muscles.


Notes

* Philosopher Eric Schwitzgebel wrote a very interesting essay entitled ‘If Materialism Is True, the United States Is Probably Conscious’.

More on the dreaded c-word!

Here are some consciousness-related answers that may be of interest:

How does the brain create consciousness?

What percent chance is there that whole brain emulation or mind uploading to a neural prosthetic will be feasible by 2048? [I’ve posted this one on this blog too.]

What are some of the current neuroscientific theories of consciousness?

What do neuroscientists think of the philosopher David Chalmers?

Is anything real beyond our own perspective?

What is the currently best scientific answer to the psycho-physical (body-mind) question?

“Conscious realism”: a new way to think about reality (or the lack thereof?)

[Image: Venn diagram]

Interesting interview in the Atlantic with cognitive scientist Donald D. Hoffman:

The Case Against Reality

“I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.”

[…]

“Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.”

[…]

“As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.”

I don’t agree with everything in the article (especially the quantum stuff) but I think many people interested in consciousness and metaphysics will find plenty of food for thought here:

The Case Against Reality

Also, “conscious agents all the way down” is exactly the position I was criticizing in a recent 3QD essay:

3quarksdaily: Persons all the way down: On viewing the scientific conception of the self from the inside out

The diagram above is from a science fiction story I was working on, back when I was a callow youth. It’s closely related to the idea of a network of conscious agents. Here’s another ‘version’ of it.

[Image: TriHead]

Not sure why I made it look so morbid. 🙂

Is it possible for the Internet to one day gain consciousness?

A recent Quora answer I wrote:

Sometimes I wonder if Quora bots are conscious! 🙂
I often think about whether the internet could become sentient… and also whether it is already! But the most important question is this: how would we tell one way or the other? Perhaps each of us is like a neuron in the internet’s hive brain.
Neurons and brains are separated by a gulf of scale, structure, and complexity. How could a neuron ‘know’ that the brain it is part of is conscious? How could a brain know if a neuron (or group of neurons) is conscious? It may be an unbridgeable gap. And the same sort of gap may exist between humans and the internet. To paraphrase Wittgenstein, if the internet could talk we would not understand it.
In any case, the internet doesn’t even have a ‘mouth’ or a central communication device. How do we decide what the internet is ‘saying’? I could imagine a future in which ‘analysts’ read into the internet’s dynamic trajectories in the way astrologers read into the stars’ trajectories.
Sometimes I think of consciousness as an irreducibly social phenomenon. Consciousness may be the ‘fire’ produced by the ‘friction’ between different intelligent agents that each have partial knowledge of the world. Perhaps the test of whether the internet is conscious involves encountering an alien internet. Perhaps when civilizations from two different planets interact, their ‘planetary consciousnesses’ (or internets) interact in a way that their inhabitants only have a dim awareness of.

 

Is it possible for the Internet to one day gain consciousness?

Is consciousness complex?

Someone on Quora asked the following question: What’s the correlation between complexity and consciousness?

Here’s my answer:

Depends on who you ask! Both complexity and consciousness are contentious words, and mean different things to different people.

I’ll build my answer around the idea of complexity, since it’s easier to talk about scientifically (or at least mathematically) than consciousness. Half-joking comments about complexity and consciousness are to be found in italics.

I came across a nice list of measures of complexity, compiled by Seth Lloyd, a researcher from MIT, which I will structure my answer around. [pdf]

Lloyd describes measures of complexity as ways to answer three questions we might ask about a system or process:

  1. How hard is it to describe?
  2. How hard is it to create?
  3. What is its degree of organization?

1. Difficulty of description: Some objects are complex because they are difficult for us to describe. We frequently measure this difficulty in binary digits (bits), and also use concepts like entropy (information theory) and Kolmogorov (algorithmic) complexity. I particularly like Kolmogorov complexity. It’s a measure of the computational resources required to specify a string of characters: the size of the smallest algorithm that can generate that string of letters or numbers (all of which can be converted into bits). So if you have a string like “121212121212121212121212”, it has a description in English — “12 repeated 12 times” — that is even shorter than the actual string. Not very complex. But the string “asdh41ubmzzsa4431ncjfa34” may have no description shorter than the string itself, so it will have higher Kolmogorov complexity. This measure of complexity can also give us an interesting way to talk about randomness. Loosely speaking, a random process is one whose simulation is harder to accomplish than simply watching the process unfold! Minimum message length is a related idea that also has practical applications. (It seems Kolmogorov complexity is technically uncomputable!)
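
Since Kolmogorov complexity is uncomputable, people often fall back on ordinary compression as a crude upper bound. Here’s a minimal Python sketch along those lines, using the two strings from the paragraph above; treating zlib’s output size as a proxy for algorithmic complexity is my own simplification for illustration, not part of the formal definition:

```python
import zlib

def compressed_size(s: str) -> int:
    # Size in bytes after zlib compression: a rough upper bound
    # on the string's algorithmic (Kolmogorov) complexity.
    return len(zlib.compress(s.encode("utf-8"), 9))

repetitive = "12" * 12                  # "121212121212121212121212"
irregular = "asdh41ubmzzsa4431ncjfa34"  # same length, no obvious pattern

print(compressed_size(repetitive))  # smaller: the pattern compresses away
print(compressed_size(irregular))   # larger: nothing to exploit
```

On 24-character strings the difference is modest (the compressor’s overhead dominates), but on longer strings it becomes dramatic: megabytes of “12” collapse to a few kilobytes, while random characters barely shrink at all.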

Consciousness is definitely hard to describe. In fact we seem to be stuck at the description stage at the moment. Describing consciousness is so difficult that bringing in bits and algorithms seems a tad premature. (Though as we shall see, some brave scientists beg to differ.)

2. Difficulty of creation: Some objects and processes are seen as complex because they are really hard to make. Kolmogorov complexity could show up here too, since simulating a string can be seen both as an act of description (the code itself) and an act of creation (the output of the code). Lloyd lists the following terms that I am not really familiar with: Time Computational Complexity; Space Computational Complexity; Logical depth; Thermodynamic depth; and “Crypticity” (!?). In addition to computational difficulty, we might add other costs: energetic, monetary, psychological, social, and ecological. But perhaps then we’d be confusing the complex with the cumbersome? 🙂

Since we haven’t created a consciousness yet, and don’t know how nature accomplished it, perhaps we are forced to say that consciousness really is complex from the perspective of artificial synthesis. But if/when we have made an artificial mind — or settled upon a broad definition of consciousness that includes existing machines — then perhaps we’ll think of consciousness as easy! Maybe it’s everywhere already! Why pay for what’s free?

3. Degree of organization: Objects and processes that seem intricately structured are also seen as complex. This type of complexity differs strikingly from computational complexity. A string of random noise is extremely complex from an information-theoretic perspective, because it is virtually incompressible — it cannot be condensed into a simple algorithm. A book consisting of totally random characters contains more information, and is therefore more algorithmically complex, than a meaningful text of the same length. But strings of random characters are typically interpreted as totally lacking in structure, and are therefore in a sense very simple. Some measures that Lloyd associates with organizational complexity include fractal dimension, metric entropy, stochastic complexity, and several more, most of which I confess I had never heard of until today. I suspect that characterizing organizational structure is an ongoing research endeavor. In a sense that’s what mathematics is — the study of abstract structure.
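
To make that contrast concrete, here’s a minimal Python sketch comparing the character-level Shannon entropy of a random string with that of ordinary English text. Single-character entropy is about the crudest probe of structure there is, which is rather the point of the illustration:

```python
import math
import random
import string
from collections import Counter

def char_entropy(s: str) -> float:
    # Shannon entropy in bits per character: H = -sum(p * log2(p)),
    # where p runs over the character frequencies of s.
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
noise = "".join(random.choices(string.ascii_lowercase + " ", k=20_000))
prose = "strings of random characters are typically seen as lacking structure " * 280

print(round(char_entropy(noise), 2))  # near log2(27) ≈ 4.75: maximally "informative"
print(round(char_entropy(prose), 2))  # lower: English letter frequencies are skewed
```

High entropy and high algorithmic complexity go together here, yet the random string is the structurally simpler of the two. That tension is exactly what Lloyd’s third family of measures tries to address.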

Consciousness seems pretty organized, especially if you’re having a good day! But it’s also the framework by which we come to know that organization exists in nature in the first place… so this gets a bit loopy. 🙂

Seth Lloyd ends his list with concepts that are related to complexity, but don’t necessarily have measures. These, I think, are particularly relevant to consciousness, and to the more prosaic world I work in: neural network modeling.

Self-organization
Complex adaptive system
Edge of chaos

Consciousness may or may not be self-organized, but it definitely adapts, and it’s occasionally chaotic.

To Lloyd’s very handy list let me also add self-organized criticality and emergence. Emergence is an interesting concept which has been falsely accused of obscurantism. A property is emergent if it is seen in a system, but not in any constituent of the system. For instance, the thermodynamic gas laws emerge out of kinetic theory, but they make no reference to molecules. The laws governing gases show up when there is a large enough number of particles, and when these laws reveal themselves, microscopic details often become irrelevant. But gases are the least interesting substrates for emergence. Condensed matter physicists talk about phenomena like the emergence of quasiparticles, which are excitations in a solid that behave as if they are independent particles, but depend for this independence, paradoxically, on the physics of the whole object. (Emergence is a fascinating subject in its own right, regardless of its relevance to consciousness. Here’s a paper that proposes a neat formalism for talking about emergence: Emergence is coupled to scope, not level. PW Anderson’s classic paper “More is Different” also talks about a related issue: pdf)

Consciousness may well be an emergent process — we rarely say that a single neuron or a chunk of nervous tissue has a mind of its own. Consciousness is a word that is reserved for the whole organism, typically.

So is consciousness complex? Maybe…but not really in measurable ways. We can’t agree on how to describe it, we haven’t created it artificially yet, and we don’t know how it is organized, or how it emerged!

In my personal opinion many of the concepts people associate with consciousness are far outside the scope of mainstream science. These include qualia, the feeling of what-it-is-like, and intentionality, the observation that mental “objects” always seem to be “about” something.

This doesn’t mean I think these aspects of consciousness are meaningless, only that they are scientifically intractable. Other aspects of consciousness, such as awareness, attention, and emotion, might also be shrouded in mystery, but I think neuroscience has much to say about them — this is because they have some measurable aspects, and these aspects step out of the shadows during neurological disorders, chemical modulation, and other abnormal states of being.

However…

There are famous neuroscientists who might disagree. Giulio Tononi has come up with something called integrated information theory, which comes with a measure of consciousness he christened phi. Phi is supposed to capture the degree of “integratedness” of a network. I remain quite skeptical of this sort of thing — for now it seems to be a metaphor inspired by information theory, rather than a measurable quantity. I can’t imagine how we will be able to relate it to actual experimental data. Information, contrary to popular perception, is not something intrinsic to physical objects. The amount of information in a signal depends on the device receiving the signal. Right now we have no way of knowing how many “bits” are being transmitted between two neurons, let alone between entire regions of the brain. Information theory is best applied when we already know the nature of the message, the communication channel, and the encoding/decoding process. We have only partially characterized these aspects of neural dynamics. Our experimental data seem far too fuzzy for any precise formal approach. [Information may actually be a concept of very limited use in biology, outside of data fitting. See this excellent paper for more: A deflationary account of information in biology. This sums it up: “if information is in the concrete world, it is causality. If it is abstract, it is in the head.”]
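
To make the “how many bits?” worry concrete, here’s a toy Python sketch of my own (not anything from the IIT literature). It estimates the mutual information between two simulated spike trains, and the answer shifts with an arbitrary analysis choice, namely the width of the time bins:

```python
import numpy as np

def mutual_information_bits(x, y):
    # Plug-in estimate of MI between two binary sequences:
    # I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x)p(y))).
    mi = 0.0
    for xv in (0, 1):
        for yv in (0, 1):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
spikes_a = rng.random(100_000) < 0.05  # "presynaptic" train
noise = rng.random(100_000) < 0.02
spikes_b = spikes_a | noise            # "postsynaptic": copies A, plus noise spikes

for width in (1, 5, 20):               # bin width in time steps
    a = spikes_a.reshape(-1, width).any(axis=1).astype(int)
    b = spikes_b.reshape(-1, width).any(axis=1).astype(int)
    print(width, round(mutual_information_bits(a, b), 4))
```

The per-bin estimate rises and falls with the binning, and nothing in the raw spike trains tells us which binning is the “right” one.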

But perhaps this paper will convince me otherwise: Practical Measures of Integrated Information for Time-Series Data. [I very much doubt it though.]

___

I thought I would write a short answer… but I ended up learning a lot as I added more info.

View Answer on Quora

Will we ever be able to upload our minds to a computer?


My answer to a Quora question: What percent chance is there that whole brain emulation or mind uploading to a neural prosthetic will be feasible within 35 years?

This opinion is unlikely to be popular among sci-fi fans, but I think the “chance” of mind uploading happening at any time in the future is zero.  Or better yet, as a scientist I would assign no number at all to this subjective probability. [Also see the note on probability at the end.]

I think the concept is still incoherent from both philosophical and scientific perspectives.

In brief, these are the problems:

  • We don’t know what the mind is from a scientific/technological perspective.
  • We don’t know which processes in the brain (and body!) are essential to subjective mental experience.
  • We don’t have any intuition for what “uploading” means in terms of mental unity and continuity.
  • We have no way of knowing whether an upload has been successful.

You could of course take the position that clever scientists and engineers will figure it out while the silly philosophers debate semantics. But this I think is based on shaky analogies with other examples of scientific progress. You might justifiably ask “What are the chances of faster-than-light travel?” You could argue that our vehicles keep getting faster, so it’s only a matter of time before we have Star Trek style warp drives. But everything we know about physics says that crossing the speed of light is impossible. So the “chance” in this domain is zero. I think that the idea of uploading minds is even more problematic than faster-than-light travel, because the idea does not have any clear or widely accepted scientific meaning, let alone philosophical meaning. Faster-than-light travel is conceivable at least, but mind uploading may not even pass that test!

I’ll now discuss some of these issues in more detail.

The concept of uploading a mind is based on the assumption that mind and body are separate entities that can in principle exist without each other. There is currently no scientific proof of this idea. There is also no philosophical agreement about what the mind is. Mind-body dualism is actually quite controversial among scientists and philosophers these days.

People (including scientists) who make grand claims about mind uploading generally avoid the philosophical questions. They assume that if we have a good model of brain function, and a way to scan the brain in sufficient detail, then we have all the technology we need.

But this idea is full of unquestioned assumptions. Is the mind identical to a particular structural or dynamic pattern? And if software can emulate this pattern, does it mean that the software has a mind? Even if the program “says” it has a mind, should we believe it? It could be a philosophical zombie that lacks subjective experience.

Underlying the idea of mind/brain uploading is the notion of Multiple Realizability — the idea that minds are processes that can be realized in a variety of substrates. But is this true? It is still unclear what sort of process the mind is. There are always properties of a real process that a simulation doesn’t possess. A computer simulation of water can reflect the properties of water (in the simulated ‘world’), but you wouldn’t be able to drink it! 🙂

Even if we had the technology for “perfect” brain scans (though it’s not clear what a “perfect” copy is), we run into another problem: we don’t understand what “uploading” entails. We run into the Ship of Theseus problem. In one variant of this problem/paradox we imagine that Theseus has a ship. He repairs it every once in a while, each time replacing one of the wooden boards. Unbeknownst to him, his rival has been keeping the boards he threw away, and over time has assembled an exact physical replica of Theseus’s ship. Now, which is the real ship of Theseus? His own ship, which is now physically distinct from the one he started with, or the (counterfeit?) copy, which is physically identical to the initial ship? There is no universally accepted answer to this question.

We can now explicitly connect this with the idea of uploading minds. Let’s say the mind is like the original (much repaired) ship of Theseus. Let’s say the computer copy of the brain’s structures and patterns is like the counterfeit ship. For some time there are two copies of the same mind/brain system — the original biological one, and the computer simulation. The very existence of two copies violates a basic notion most people have of the Self — that it must obey a kind of psychophysical unity. The idea that there can be two processes that are both “me” is incoherent (meaning neither wrong nor right). What would that feel like for the person whose mind had been copied?

Suppose in response to this thought experiment you say, “My simulated Self won’t be switched on until after I die, so I don’t have to worry about two Selves — unity is preserved.” In this case another basic notion is violated — continuity. Most people don’t think of the Self as something that can cease to exist and then start existing again. Our biological processes, including neural processes, are always active — even when we’re asleep or in a coma. What reason do we have to assume that when these activities cease, the Self can be recreated?

Let’s go even further: let’s suppose we have a great model of the mind, and a perfect scanner, and we have successfully run a simulated version of your mind on a computer. Does this simulation have a sense of Self? If you ask it, it might say yes. But is this enough? Even currently-existing simulations can be programmed to say “yes” to such questions. How can we be sure that the simulation really has subjective experience?  And how can we be sure that it has your subjective experience? We might have just created a zombie simulation that has access to your memories, but cannot feel anything. Or we might have created a fresh new consciousness that isn’t yours at all! How do we know that a mind without your body will feel like you? [See the link on embodied cognition for more on this very interesting topic.]

And — perhaps most importantly — who on earth would be willing to test out the ‘beta versions’ of these techniques? 🙂

Let me end with a verse from William Blake’s poem “The Tyger”.

What the hammer? what the chain?
In what furnace was thy brain?

~

Further reading:

Will Brains Be Downloaded? Of Course Not!

Personal Identity

Ship of Theseus

Embodied Cognition

~

EDIT: I added this note to deal with some interesting issues to do with “chance” that came up in the Quora comments.

A note on probability and “chance”

Assigning a number to a single unique event such as the discovery of mind-uploading is actually problematic. What exactly does “chance” mean in such contexts? The meaning of probability is still being debated by statisticians, scientists and philosophers. For the purposes of this discussion, there are two basic notions of probability:

(1) Subjective degree of belief. We start with a statement A. The probability p(A) = 0 if I don’t believe A, and p(A) = 1 if I believe A. In other words, if your probability p(A) moves from 0 to 1, your subjective doubt decreases. If A is the statement “God exists” then an atheist’s p(A) is equal to 0, and a theist’s p(A) is 1.

(2) Frequency of a type of repeatable event. In this case the probability p(A) is the number of times event A happens, divided by the total number of events. Alternatively, it is the total number of outcomes that correspond to event A, divided by the total number of possible outcomes. For example, suppose statement A is “the die roll results in a 6”. There are 6 possible outcomes of a die roll, and one of them is 6. So p(A) = 1/6. In other words, if you roll an (ideal) die 600 times, you will see the side with 6 dots on it roughly 100 times.
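
Here’s a minimal Python sketch of that frequentist reading, with a simulated die standing in for a real one:

```python
import random

random.seed(42)
rolls = [random.randint(1, 6) for _ in range(600)]  # 600 fair die rolls
sixes = rolls.count(6)

print(sixes)               # roughly 100 sixes...
print(sixes / len(rolls))  # ...i.e. a relative frequency near 1/6 ≈ 0.167
```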

Clearly, if statement A is “Mind uploading will be discovered in the future”, then we cannot use frequentist notions of probability. We do not have access to a large collection of universes from which to count the ones in which mind uploading has been successfully discovered, and then divide that number by the total number of universes. In other words, statement A does not refer to a statistical ensemble — it is a unique event. For frequentists, the probability of a unique event can only be 0 (hasn’t happened) or 1 (happened). And since mind uploading hasn’t happened yet, the frequency-based probability is 0.

So when a person asks about the “chance” of some unique future event, he or she is implicitly asking for a subjective degree of belief in the feasibility of this event. If you force me to answer the question, I’ll say that my subjective degree of belief in the possibility of mind uploading is zero. But I prefer not to assign any number, because I actually think the concept of mind uploading is incoherent (as opposed to infeasible). The question of its feasibility does not really arise (subjectively), because the idea of mind uploading is almost meaningless to me. Data can be uploaded and downloaded. But is the mind data? I don’t know one way or the other, so how can I believe in some future technology that presupposes that the mind is data?

More on probability theory:

Chances Are — NYT article on probability by Steve Strogatz

Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa)

Interpretations of Probability – Stanford Encyclopedia of Philosophy