Why can most people identify a color without a reference but not a musical note?

[I was asked this on Quora. Here’s a slightly modified version of my answer.]

This is an excellent question! I’m pretty sure there is not yet a definitive answer, but I suspect that the eventual answer will involve two factors:

  1. The visual system in humans is much more highly developed than the auditory system.
  2. Human cultures typically teach color words to all children, but formal musical training — complete with named notes — is relatively rare.

When you look at the brain’s cortical regions, you realize that the primary visual cortex has the most well-defined laminar structure in the whole brain. Primary auditory cortex is less structured. We still don’t know exactly how the brain’s layers contribute to sensory processing, but some theories suggest that the more well-defined cortices are capable of making more fine distinctions.

[See this blog post for more on cortical lamination:
How to navigate on Planet Brain]

However, I don’t think the explanation for the difference between music and color perception is purely neuroscientific. Culture may well play an important role. I think that with training, absolute pitch — the ability to identify the exact note rather than the interval between notes — could become more common. Speakers of tonal languages like Mandarin or Cantonese are more likely to have absolute pitch, especially if they’ve had early musical training. (More on this below.)

Also: when people with no musical training are exposed to tunes they are familiar with, many of them can tell whether the absolute pitch is correct [1]. Similarly, when asked to produce a familiar tune, many people can hit the right pitch [2]. This suggests that at least some humans have the latent ability to use and/or recognize absolute pitch.

Perhaps with early training, note names will become as common as color words.

This article by a UCSD psychologist describes the mystery quite well:

Diana Deutsch – Absolute Pitch.

As someone with absolute pitch, it has always seemed puzzling to me that this ability should be so rare. When we name a color, for example as green, we do not do this by viewing a different color, determining its name, and comparing the relationship between the two colors. Instead, the labeling process is direct and immediate.

She has some fascinating data on music training among tonal language speakers:

“Figure 2. Percentages of subjects who obtained a score of at least 85% correct on the test for absolute pitch. CCOM: students at the Central Conservatory of Music, Beijing, China; all speakers of Mandarin. ESM: students at Eastman School of Music, Rochester, New York; all nontone language speakers.”

Looks like if you speak a tonal language and start learning music early, you are far more likely to have perfect pitch. (Separating causation from correlation may be tricky.)


References:

[1] Memory for the absolute pitch of familiar songs.
[2] Absolute memory for musical pitch: evidence from the production of learned melodies.

Quora: Why can most people identify a color without a reference but not a musical note?

What are the limits of neuroscience?

[My answer to a recent Quora question.]

There are two major problems with neuroscience:

  1. Weak philosophical foundations when dealing with mental concepts
  2. Questionable statistical analyses of experimental results

1. Neuroscience needs a bit of philosophy

Many neuroscientific results are presented without sufficiently nuanced philosophical knowledge. This can lead to cartoonish and potentially harmful conceptions of the brain, and by extension, of human behavior, psychology, and culture. Concepts related to the mind are among the hardest to pin down, and yet some neuroscientists give the impression that there are no issues that require philosophical reflection.

Because of a certain disdain for philosophy (and sometimes even psychology!), some neuroscientists end up drawing inappropriate inferences from their research, or distorting the meaning of their results.

One particularly egregious example is the “double subject fallacy”, which was recently discussed in an important paper:

“Me & my brain”: exposing neuroscience’s closet dualism.

Here’s the abstract of the paper:

Our intuitive concept of the relations between brain and mind is increasingly challenged by the scientific world view. Yet, although few neuroscientists openly endorse Cartesian dualism, careful reading reveals dualistic intuitions in prominent neuroscientific texts. Here, we present the “double-subject fallacy”: treating the brain and the entire person as two independent subjects who can simultaneously occupy divergent psychological states and even have complex interactions with each other — as in “my brain knew before I did.” Although at first, such writing may appear like harmless, or even cute, shorthand, a closer look suggests that it can be seriously misleading. Surprisingly, this confused writing appears in various cognitive-neuroscience texts, from prominent peer-reviewed articles to books intended for lay audience. Far from being merely metaphorical or figurative, this type of writing demonstrates that dualistic intuitions are still deeply rooted in contemporary thought, affecting even the most rigorous practitioners of the neuroscientific method. We discuss the origins of such writing and its effects on the scientific arena as well as demonstrate its relevance to the debate on legal and moral responsibility.

[My answer to the earlier question raises related issues: What are the limits of neuroscience with respect to subjectivity, identity, self-reflection, and choice?]

2. Neuroscience needs higher data analysis standards

On a more practical level, neuroscience is besieged by problems related to bad statistics. The data in neuroscience (and all “complex system” science) are extremely noisy, so increasingly sophisticated statistical techniques are deployed to extract meaning from them. This sophistication means that fewer and fewer neuroscientists actually understand the math behind the statistical methods they employ. This can create a variety of problems, including incorrect inferences. Scientists looking for “sexy” results can use poorly understood methods to show ‘significant’ effects where there really is only a random fluke. (The more methods you use, the more chances you create for finding a random “statistically significant” effect. This kind of thing has been called “torturing the data until it confesses”.)

Chance effects are unreproducible, and this is a major problem for many branches of science. Replication is central to good science, so when it frequently fails to occur, we know there are problems with how research is conducted, reviewed, and published. Many times there is a “flash in the pan” at a laboratory that turns out to be fool’s gold.

See these articles for more:

Bad Stats Plague Neuroscience

Voodoo Correlations in Social Neuroscience

The Dangers of Double Dipping (Voodoo IV)

Erroneous analyses of interactions in neuroscience: a problem of significance.

Fixing Science, Not Just Psychology – Neuroskeptic

The Replication Problem in the Brain Sciences


Quora: What are the limits of neuroscience?

Is neuroscience really ruining the humanities?

For my latest 3QD post, I expanded on my answer to a Quora question: Is neuroscience ruining the humanities?


Here’s an excerpt:

“Neuroscience is ruining the humanities”. This was the provocative title of a recent article by Arthur Krystal in The Chronicle of Higher Education. To me the question was pure clickbait [1], since I am both a neuroscientist and an avid spectator of the drama and intrigue on the other side of the Great Academic Divide [2]. Given the sensational nature of many of the claims made on behalf of the cognitive and neural sciences, I am inclined to assure people in the humanities that they have little to fear. On close inspection, the bold pronouncements of fields like neuro-psychology, neuro-economics and neuro-aesthetics — the sorts of statements that mutate into TED talks and pop science books — often turn out to be wild extrapolations from a limited (and internally inconsistent) data set.

Unlike many of my fellow scientists, I have occasionally grappled with the weighty ideas that emanate from the humanities, even coming to appreciate elements of postmodern thinking. (Postmodern — aporic? — jargon is of course a different matter entirely.) I think the tapestry that is human culture is enriched by the thoughts that emerge from humanities departments, and so I hope the people in these departments can exercise some constructive skepticism when confronted with the latest trendy factoid from neuroscience or evolutionary psychology. Some of my neuroscience-related essays here at 3QD were written with this express purpose [3, 4].

The Chronicle article begins with a 1942 quote from New York intellectual Lionel Trilling: “What gods were to the ancients at war, ideas are to us”. This sets the tone for the mythic narrative that lurks beneath much of the essay, a narrative that can be crudely caricatured as follows. Once upon a time the University was a paradise of creative ferment. Ideas were warring gods, and the sparks that flew off their clashing swords kept the flames of wisdom and liberty alight. The faithful who erected intellectual temples to bear witness to these clashes were granted the boon of enlightened insight. But faith in the great ideas gradually faded, and so the golden age came to an end. The temple-complex of ideas began to decay from within, corroded by doubt. New prophets arose, who claimed that ideas were mere idols to be smashed, and that the temples were metanarrative prisons from which to escape. In this weak and bewildered state, the intellectual paradise was invaded. The worshipers were herded into a shining new temple built from the rubble of the old ones. And into this temple the invaders’ idols were installed: the many-armed goddess of instrumental rationality, the one-eyed god of essentialism, the cold metallic god of materialism…

The over-the-top quality of my little academia myth might give the impression that I think it is a tissue of lies. But perhaps more nuance is called for. As with all myths, I think there are elements of truth in this narrative.


Read the rest at 3 Quarks Daily: Is neuroscience really ruining the humanities?

Does dopamine produce a feeling of bliss? On the chemical self, the social self, and reductionism.

Here’s the intro to my latest blog post at 3 Quarks Daily.


“The osmosis of neuroscience into popular culture is neatly symbolized by a phenomenon I recently chanced upon: neurochemical-inspired jewellery. It appears there is a market for silvery pendants shaped like molecules of dopamine, serotonin, acetylcholine, norepinephrine and other celebrity neurotransmitters. Under pictures of dopamine necklaces, the neuro-jewellers have placed words like “love”, “passion”, or “pleasure”. Under serotonin they write “happiness” and “satisfaction”, and under norepinephrine, “alertness” and “energy”. These associations presumably stem from the view that the brain is a chemical soup in which each ingredient generates a distinct emotion, mood, or feeling. Subjective experience, according to this view, is the sum total of the contributions of each “mood molecule”. If we strip away the modern scientific veneer, the chemical soup idea evokes the four humors of ancient Greek medicine: black bile to make you melancholic, yellow bile to make you choleric, phlegm to make you phlegmatic, and blood to make you sanguine.

“A dopamine pendant worn round the neck as a symbol for bliss is emblematic of modern society’s attitude towards current scientific research. A multifaceted and only partially understood set of experiments is hastily distilled into an easily marketed molecule of folk wisdom. Having filtered out the messy details, we are left with an ornamental nugget of thought that appears both novel and reassuringly commonsensical. But does neuroscience really support this reductionist view of human subjectivity? Can our psychological states be understood in terms of a handful of chemicals? Does neuroscience therefore pose a problem for a more holistic view, in which humans are integrated in social and environmental networks? In other words, are the “chemical self” and the “social self” mutually exclusive concepts?”

– Read the rest at 3QD: The Chemical Self and the Social Self

The holy grail of computational neuroscience: Invariance

There are quite a few problems that computational neuroscientists need to solve in order to achieve a true theoretical understanding of biological intelligence. But I’d like to talk about one problem that I think is the holy grail of computational neuroscience and artificial intelligence: the quest for invariance. From a purely scientific and technological perspective I think this is a far more important and interesting problem than anything to do with the “C-word”: Consciousness. 🙂

Human (and animal) perception has an extraordinary feature that we still can’t fully emulate with artificial devices. Our brains somehow create and/or discover invariances in the world. Let me start with a few examples and then explain what invariance is.

Invariance in vision

Think about squares. You can recognize a square irrespective of its size, color, and position. You can even recognize a square with reasonable accuracy when viewing it from an oblique angle. This ability is something we take for granted, but we haven’t really figured it out yet.

Now think about human faces. You can recognize a familiar face in various lighting conditions, and under changes of facial hair, make-up, age, and context. How does the brain allow you to do things like this?

Invariance in hearing

Think about a musical tune you know well. You will probably be able to recognize it even if it is slowed down, sped up, hummed, whistled, or even sung wordlessly by someone who is tone-deaf. In some special cases, you can even recognize a piece of music from its rhythmic pattern alone, without any melody. How do you manage to do this?

Think about octave equivalence. A sound at a particular frequency sounds like the same note as a sound at double the frequency. In other words, notes an octave apart sound similar. What is happening here?
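Octave equivalence can be made concrete with a small sketch (my own illustration, not from the original post): fold any frequency into a single octave’s worth of pitch classes, and a frequency f and its double 2f land on the same note name.

```python
import math

# Pitch classes in semitone order starting from A (440 Hz reference).
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def pitch_class(freq_hz, ref=440.0):
    """Semitones above the reference frequency, folded into one octave."""
    semitones = 12 * math.log2(freq_hz / ref)
    return NOTE_NAMES[round(semitones) % 12]

print(pitch_class(440.0))   # A
print(pitch_class(880.0))   # A  (doubling the frequency: same pitch class)
print(pitch_class(261.63))  # C  (middle C)
print(pitch_class(523.25))  # C  (an octave above middle C)
```

Doubling the frequency adds exactly 12 semitones, so the modulo-12 fold makes the note name invariant under octave shifts.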

What is invariance?

How does your brain discover similarity in the midst of so much dissimilarity? The answer is that the brain somehow creates invariant representations of objects and patterns. Many computational neuroscientists are working on this problem, but there are no unifying theoretical frameworks yet.

So what does “invariance” mean? It means “immunity to a possible change”. It’s related to the formal concept of symmetry. According to mathematics and theoretical physics, an object has a symmetry if it looks the same even after a change. For example, a square looks exactly the same if you rotate it by 90 degrees around its center. We say it is invariant (or symmetrical) with respect to a 90 degree rotation.
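A minimal sketch of this idea (my own illustration): treat the square as its set of corner points and check that a 90-degree rotation maps the set onto itself, while a 45-degree rotation does not.

```python
import math

# Treat the square as its set of corner points, centred on the origin.
square = {(1, 1), (-1, 1), (-1, -1), (1, -1)}

def rotate(point, deg):
    """Rotate a point counterclockwise about the origin by `deg` degrees."""
    x, y = point
    t = math.radians(deg)
    return (round(x * math.cos(t) - y * math.sin(t), 9),
            round(x * math.sin(t) + y * math.cos(t), 9))

# A 90-degree rotation is a symmetry: the corner set maps onto itself.
print({rotate(p, 90) for p in square} == square)  # True

# A 45-degree rotation is not a symmetry of the square.
print({rotate(p, 45) for p in square} == square)  # False
```

The rounding simply absorbs floating-point noise so that set comparison works; the invariance itself is the set equality.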

Our neural representations of sensory patterns somehow allow us to discover symmetries and use them for recognition and flexible behavior. And we manage to do this implicitly, without any conscious effort. This type of ability is limited and varies from person to person, but all people have it to some extent.

Back to the examples

We can redefine our examples using the language of invariance.


  • The way humans represent squares and other shapes is invariant with respect to rotation, as well as with respect to changes in position, lighting, and even viewing angle.
  • The way humans represent faces is invariant with respect to changes in make-up, facial hair, context, and age. (This ability varies from person to person, of course.)
  • The way humans represent musical tunes is invariant with respect to changes in speed, musical key, and timbre.
  • The way humans represent musical notes is invariant with respect to doubling of frequency (which is equivalent to shifting by an octave).


All these invariances are partial and limited in scope, but they are still extremely useful, and far more sophisticated than anything we can do with artificial systems.

Invariance of thought patterns?

The power of invariance is particularly striking when we enter the domain of abstract ideas — particularly metaphors and analogies.

Consider perceptual metaphors. We can touch a surface and describe it as smooth. But we can also use the word “smooth” to describe sounds. How is it that we can use texture words for things that we do not literally touch?

Now consider analogies, which are the more formal cousins of metaphors. Think of analogy questions in tests like the GRE and the SATs. Here’s an example:

Army: Soldier :: Navy : _____

The answer is “Sailor”.

These questions take the form “A:B::C:D”, which we normally read as “A is to B as C is to D”. The test questions normally ask you to specify what D should be.

To make an analogy more explicit, we can re-write it this way: “R(x,y) holds for (x,y) = (A,B) and for (x,y) = (C,D)”. The relation “R” holds for pairs of words (x,y), in particular for the pairs (A,B) and (C,D).

In this example, the analogical relationship R can be captured in the phrase “is made up of”. An army is made up of soldiers and a navy is made up of sailors. In any analogy, we are able to pick out an abstract relationship between things or concepts.
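As a toy illustration (my own sketch; the word pairs form a hypothetical mini knowledge base, not anything from the post), an analogy solver only needs to check that R holds for (A,B) and then find a D such that R(C,D) holds:

```python
# Pairs for which the relation R = "is made up of" holds
# (a hypothetical mini knowledge base for illustration).
MADE_UP_OF = {
    ("army", "soldier"),
    ("navy", "sailor"),
    ("orchestra", "musician"),
}

def solve_analogy(a, b, c, relation=MADE_UP_OF):
    """Given A:B::C:?, return a D such that R(A,B) and R(C,D) both hold."""
    if (a, b) not in relation:
        return None  # the premise pair doesn't satisfy R
    for (x, y) in relation:
        if x == c:
            return y
    return None

print(solve_analogy("army", "soldier", "navy"))  # sailor
```

The hard part, of course, is what this sketch takes for granted: humans extract the relation R itself from raw experience, without being handed a table of pairs.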

Here’s another example discussed in the Wikipedia page on analogy:

Hand: Palm :: Foot: _____

The answer most people give is “Sole”. What’s interesting about this example is that many people can understand the analogy without necessarily being able to explain the relationship R in words. This is true of various analogies. We can see implicit relationships without necessarily being able to describe them.

We can translate metaphors and analogies into the language of invariance.


  • The way humans represent perceptual experiences allows us to create metaphors that are invariant with respect to changes in sensory modality. So we can perceive smoothness in the modalities of touch, hearing and other senses.
  • The way humans represent abstract relationships allows us to find/create analogies that are invariant with respect to the particular things being spoken about. The validity of the analogy R(x,y) is invariant with respect to replacing the pair (x,y) with (A,B) or (C,D).


The words “metaphor” and “analogy” are essentially synonyms for the word “invariant” in the domains of percepts and concepts. Science, mathematics and philosophy often involve trying to make explicit our implicit analogies and metaphors.

Neuroscience, psychology and cognitive science aim to understand how we form these invariant representations in the first place. In my opinion doing so will revolutionize artificial intelligence.




Further reading:

I’ve only scratched the surface of the topic of invariance and symmetry.

I talk about symmetry and invariance in this answer too:

Mathematics: What are some small but effective theses or ideas in mathematics that you have came across? [Quora link. Sign-up required]

I talk about the importance of metaphors in this blog post:

Metaphor: the Alchemy of Thought

I was introduced to many of these ideas through a book by physicist Joe Rosen called Symmetry Rules: How Science and Nature Are Founded on Symmetry. It’s closer to a textbook than a popular treatment, but for people interested in the mathematics of symmetry and group theory, and how it relates to science, this is an excellent introduction. Here is a summary of the book: [pdf]

Relatively recent techniques such as deep learning have helped artificial systems form invariant representations. This is how the facial recognition software used by Google and Facebook works. But these algorithms still don’t have the accuracy and generality of human skills, and the way they work, despite being inspired by real neural networks, is sufficiently unlike real neural processes that these algorithms may not shed much light on how human intelligence works.



Notes:

This post is a slightly edited form of a Quora answer I wrote recently.

In the comments section someone brought up the idea that some invariants can be easily extracted using Fourier decomposition. This is what I said in response:

Good point. Fourier decomposition is definitely part of the story (for sound at the very least), but it seems there is a lot more.

Some people think that the auditory system is just doing a Fourier transform. But this was actually shown to be partially false a century ago. The idea that pitch corresponds to the frequencies of sinusoids is called Ohm’s acoustic law.

From the wiki page:


For years musicians have been told that the ear is able to separate any complex signal into a series of sinusoidal signals – that it acts as a Fourier analyzer. This quarter-truth, known as Ohm’s Other Law, has served to increase the distrust with which perceptive musicians regard scientists, since it is readily apparent to them that the ear acts in this way only under very restricted conditions.
—W. Dixon Ward (1970)


This web page discusses some of the dimensions other than frequency that contribute to pitch:

Introduction to Psychoacoustics – Module 05

There are interesting aspects of pitch perception that render the Fourier picture problematic. For example, there is the phenomenon of the missing fundamental: “the observation that the pitch of a complex harmonic tone matches the frequency of its fundamental spectral component, even if this component is missing from the tone’s spectrum.”

Evidence suggests that the human auditory system uses both frequency and time/phase coding.

Missing fundamental:  “The brain perceives the pitch of a tone not only by its fundamental frequency, but also by the periodicity of the waveform; we may perceive the same pitch (perhaps with a different timbre) even if the fundamental frequency is missing from a tone.”

This book chapter also covers some of the evidence: [pdf]

“One of the most remarkable properties of the human auditory system is its ability to extract pitch from complex tones. If a group of pure tones, equally spaced in frequency, are presented together, a pitch corresponding to the common frequency distance between the individual components will be heard. For example, if the pure tones with frequencies of 700, 800, and 900 Hz are presented together, the result is a complex sound with an underlying pitch corresponding to that of a 100 Hz tone. Since there is no physical energy at the frequency of 100 Hz in the complex, such a pitch sensation is called residual pitch or virtual pitch (Schouten 1940; Schouten, Ritsma and Cardozo, 1961). Licklider (1954) demonstrated that both the place (spectral) pitch and the residual (virtual) pitch have the same properties and cannot be auditorally differentiated.”
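The 700/800/900 Hz example can be checked numerically (a sketch of mine; the perceptual claim itself comes from the quoted chapter): the residual pitch corresponds to the common spacing of the partials, i.e. their greatest common divisor, and the summed waveform really does repeat every 1/100 of a second even though no partial sits at 100 Hz.

```python
import math
from functools import reduce

# Partials of the complex tone from the quoted example: no energy at 100 Hz.
partials = [700, 800, 900]  # Hz
virtual_pitch = reduce(math.gcd, partials)  # common frequency spacing
print(virtual_pitch)  # 100

# The summed waveform repeats with period 1/100 s, which is what a
# periodicity-based (time-domain) pitch mechanism could pick up.
def waveform(t):
    return sum(math.sin(2 * math.pi * f * t) for f in partials)

period = 1.0 / virtual_pitch
periodic = all(
    abs(waveform(t / 10000.0) - waveform(t / 10000.0 + period)) < 1e-9
    for t in range(100)
)
print(periodic)  # True
```

This is only the easy half of the story: the numbers show why a periodicity cue exists, not how neurons extract it.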

The status of Fourier decomposition in vision might be more controversial. Spatial frequency based models have their adherents, but also plenty of critics. One of my professors says that claiming the visual system does spatial Fourier analysis amounts to confusing the object of study with the tools of study. 🙂 We still don’t know whether and how the brain performs spatial Fourier decomposition.

A very recent paper reviews this issue:

The neural bases of spatial frequency processing during scene perception

“how and where spatial frequencies are processed within the brain remain unresolved questions.”

Vision scientists I know often talk about how the time domain cannot be ignored in visual processing.

A general point to be made is that even if we have mathematical solutions that are invariant, computational neuroscientists haven’t quite figured out how neural networks achieve such invariant representations. The quest for invariance is more about plausible neural implementation than mathematical description per se.


From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

My next 3QD column is out. I speculate about the role of boundaries in life and aesthetic experience. (Dopamine cells make a cameo appearance too.)

This image is a taster:

If you want to know what this diagram might mean, check out the article:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

A group composed of brilliant individuals will not automatically be the most brilliant group

Perhaps the whole can be better than the sum of its parts?

I came across a very interesting study on McGill University’s excellent Brain from Top to Bottom Blog.

In this study of collective intelligence, the researchers performed numerous statistical analyses. The most interesting finding that emerged from them, and that went beyond the debate about just what exactly collective intelligence might represent, was that this factor was not highly correlated with either the average intelligence of the groups’ members or with the intelligence of the group member who had scored the highest on the individual-intelligence test. In other words, a group composed of brilliant individuals will not automatically be the most brilliant group.
The psychologists did find some factors that let them predict whether a given group would be collectively intelligent. But to identify three, they had to look at factors associated with co-operation. The first such factor was the group’s overall social sensitivity—the members’ ability to perceive each other’s emotions. The second factor was equality in taking turns speaking during group decision-making. The third factor was the proportion of women in the group. This last finding is highly consistent with other data showing that women tend to be more socially sensitive than men and to take turns speaking more naturally than men do.



via The Collective Intelligence of Groups

What is a biological model? Here’s a useful categorization system for people interested in neuroscience, cognitive science, and biology

I found an excellent classification of models in a paper on neurogenesis: Using theoretical models to analyse neural development.

I think this should be illuminating for anyone interested in theoretical, mathematical and/or computational approaches in neuroscience, cognitive science, and biology.

There are several ways in which models of biological processes can be classified. 

Formal or informal models

Informal models are expressed in words or diagrams, whereas formal models — which this Review is concerned with — are described in mathematical equations or computer instructions. Using formal language forces a model to be precise and self-consistent. The process of constructing a formal model can therefore identify inconsistencies, hidden assumptions and missing pieces of experimental data. Formal models allow us to deduce the consequences of the postulated interactions among the components of a given system, and thus to test the plausibility of hypothetical mechanisms. Models can generate new hypotheses and make testable predictions, thereby guiding further experimental research. Equally importantly, models can explain and integrate existing data.

Phenomenological or mechanistic models

Most formal models lie on a continuum between two extreme categories: phenomenological and mechanistic. A phenomenological model attempts to replicate the experimental data without requiring the variables, parameters and mathematical relationships in the model to have any direct correspondence in the underlying biology. In a mechanistic model, the mathematical equations directly represent biological elements and their actions. Solving the equations then shows how the system behaves. We understand which processes in the model are mechanistically responsible for the observed behaviour, the variables and parameters have a direct biological meaning and the model lends itself better to testing hypotheses and making predictions. Although mechanistic models are often considered superior, both types of model can be informative. For example, a phenomenological model can be useful as a forerunner to a more mechanistic model in which the variables are given explicit biological interpretations. This is particularly important considering that a complete mechanistic model may be difficult to construct because of the great amount of information it should incorporate. Mechanistic models therefore often focus on exploring the consequences of a selected set of processes, or try to capture the essential aspects of the mechanisms, with a more abstract reference to underlying biological processes. 

Top-down or bottom-up models 

Formal models can be constructed using a top-down or a bottom-up approach. In a top-down approach, a model is created that contains the elements and interactions that enable it to have specific behaviours or properties. In a bottom-up approach, instead of starting with a pre-described, desired behaviour, the properties that arise from the interactions among the elements of the model are investigated. Although it is a strategy and not a type of model, the top-down approach resembles phenomenological modelling because it is generally easier to generate the desired behaviour without all of the elements of the model having a clear biological interpretation. Conversely, the bottom-up approach is related to mechanistic modelling, as it is usual to start with model elements that have a biological meaning. Both approaches have their strengths and weaknesses.

(I removed citation numbers for clarity.)

One point might be relevant here: a model is neither true nor false — ideally it’s an internally consistent mini-world. A theory is the assertion that a model corresponds with reality.
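To make the phenomenological/mechanistic distinction concrete, here is a toy sketch (my own hypothetical example, not from the Review): a mechanistic logistic-growth model whose parameters have direct biological meaning, next to a phenomenological sigmoid whose parameters are mere curve-fitting knobs.

```python
import math

# Mechanistic model: logistic growth, dN/dt = r * N * (1 - N / K).
# Both parameters have direct biological meaning:
#   r = intrinsic growth rate, K = carrying capacity.
def logistic_growth(n0, r, k, dt, steps):
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += dt * r * n * (1 - n / k)  # forward-Euler integration step
        trajectory.append(n)
    return trajectory

# Phenomenological model: a sigmoid with the right shape; a, b, c are
# fitting parameters with no required biological interpretation.
def sigmoid(t, a, b, c):
    return a / (1 + math.exp(-b * (t - c)))

mech = logistic_growth(n0=10, r=0.5, k=1000, dt=0.1, steps=200)
print(950 < mech[-1] < 1000)  # True: the population saturates near K
```

Both curves can fit the same data; only in the mechanistic version does tweaking a parameter correspond to a biological intervention (e.g. lowering the carrying capacity K).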

The Mysterious Power of Naming in Human Cognition

I’ve written a long-form essay for the blog/aggregator site 3 Quarks Daily:

Boundaries and Subtleties: the Mysterious Power of Naming in Human Cognition

Here’s a taster:

I’ve divided up the essay into four parts. Here’s the plan:

  1. We’ll introduce two key motifs — the named and the nameless — with a little help from the Tao Te Ching.
  2. We’ll examine a research problem that crops up in cognitive  psychology, neuroscience and artificial intelligence, and link it with  more Taoist motifs.
  3. We’ll look at how naming might give us power over animals, other people, and even mathematical objects.
  4. We’ll explore the power of names in computer science, which will facilitate some wild cosmic speculation.

Will we ever be able to upload our minds to a computer?


My answer to a Quora question: What percent chance is there that whole brain emulation or mind uploading to a neural prosthetic will be feasible within 35 years?

This opinion is unlikely to be popular among sci-fi fans, but I think the “chance” of mind uploading happening at any time in the future is zero. Or better yet, as a scientist I would assign no number at all to this subjective probability. [Also see the note on probability at the end.]

I think the concept is still incoherent from both philosophical and scientific perspectives.

In brief, these are the problems:

  • We don’t know what the mind is from a scientific/technological perspective.
  • We don’t know which processes in the brain (and body!) are essential to subjective mental experience.
  • We don’t have any intuition for what “uploading” means in terms of mental unity and continuity.
  • We have no way of knowing whether an upload has been successful.

You could of course take the position that clever scientists and engineers will figure it out while the silly philosophers debate semantics. But this I think is based on shaky analogies with other examples of scientific progress. You might justifiably ask “What are the chances of faster-than-light travel?” You could argue that our vehicles keep getting faster, so it’s only a matter of time before we have Star Trek style warp drives. But everything we know about physics says that crossing the speed of light is impossible. So the “chance” in this domain is zero. I think that the idea of uploading minds is even more problematic than faster-than-light travel, because the idea does not have any clear or widely accepted scientific meaning, let alone philosophical meaning. Faster-than-light travel is conceivable at least, but mind uploading may not even pass that test!

I’ll now discuss some of these issues in more detail.

The concept of uploading a mind is based on the assumption that mind and body are separate entities that can in principle exist without each other. There is currently no scientific proof of this idea. There is also no philosophical agreement about what the mind is. Mind-body dualism is actually quite controversial among scientists and philosophers these days.

People (including scientists) who make grand claims about mind uploading generally avoid the philosophical questions. They assume that if we have a good model of brain function, and a way to scan the brain in sufficient detail, then we have all the technology we need.

But this idea is full of unquestioned assumptions. Is the mind identical to a particular structural or dynamic pattern? And if software can emulate this pattern, does it mean that the software has a mind? Even if the program “says” it has a mind, should we believe it? It could be a philosophical zombie that lacks subjective experience.

Underlying the idea of mind/brain uploading is the notion of Multiple Realizability — the idea that minds are processes that can be realized in a variety of substrates. But is this true? It is still unclear what sort of process mind is. There are always properties of a real process that a simulation doesn’t possess. A computer simulation of water can reflect the properties of water (in the simulated ‘world’), but you wouldn’t be able to drink it! 🙂

Even if we had the technology for “perfect” brain scans (though it’s not clear what a “perfect” copy is), we run into another problem: we don’t understand what “uploading” entails. We run into the Ship of Theseus problem. In one variant of this problem/paradox we imagine that Theseus has a ship. He repairs it every once in a while, each time replacing one of the wooden boards. Unbeknownst to him, his rival has been keeping the boards he threw away, and over time the rival constructs an exact physical replica of Theseus’s ship. Now, which is the real ship of Theseus? His own ship, which is now physically distinct from the one he started with, or the (counterfeit?) copy, which is physically identical to the initial ship? There is no universally accepted answer to this question.

We can now explicitly connect this with the idea of uploading minds. Let’s say the mind is like the original (much repaired) ship of Theseus. Let’s say the computer copy of the brain’s structures and patterns is like the counterfeit ship. For some time there are two copies of the same mind/brain system — the original biological one, and the computer simulation. The very existence of two copies violates a basic notion most people have of the Self — that it must obey a kind of psychophysical unity. The idea that there can be two processes that are both “me” is incoherent (meaning neither wrong nor right). What would that feel like for the person whose mind had been copied?

Suppose in response to this thought experiment you say, “My simulated Self won’t be switched on until after I die, so I don’t have to worry about two Selves — unity is preserved.” In this case another basic notion is violated — continuity. Most people don’t think of the Self as something that can cease to exist and then start existing again. Our biological processes, including neural processes, are always active — even when we’re asleep or in a coma. What reason do we have to assume that when these activities cease, the Self can be recreated?

Let’s go even further: let’s suppose we have a great model of the mind, and a perfect scanner, and we have successfully run a simulated version of your mind on a computer. Does this simulation have a sense of Self? If you ask it, it might say yes. But is this enough? Even currently-existing simulations can be programmed to say “yes” to such questions. How can we be sure that the simulation really has subjective experience?  And how can we be sure that it has your subjective experience? We might have just created a zombie simulation that has access to your memories, but cannot feel anything. Or we might have created a fresh new consciousness that isn’t yours at all! How do we know that a mind without your body will feel like you? [See the link on embodied cognition for more on this very interesting topic.]

And — perhaps most importantly — who on earth would be willing to test out the ‘beta versions’ of these techniques? 🙂

Let me end with a verse from William Blake’s poem “The Tyger”.

What the hammer? what the chain?
In what furnace was thy brain?

~

Further reading:

Will Brains Be Downloaded? Of Course Not!

Personal Identity

Ship of Theseus

Embodied Cognition

~

EDIT: I added this note to deal with some interesting issues to do with “chance” that came up in the Quora comments.

A note on probability and “chance”

Assigning a number to a single unique event such as the discovery of mind-uploading is actually problematic. What exactly does “chance” mean in such contexts? The meaning of probability is still being debated by statisticians, scientists and philosophers. For the purposes of this discussion, there are two basic notions of probability:

(1) Subjective degree of belief. We start with a statement A. The probability p(A) = 0 if I am certain A is false, and p(A) = 1 if I am certain A is true; values in between represent intermediate degrees of belief. In other words, as your probability p(A) moves from 0 to 1, your subjective doubt about A decreases. If A is the statement “God exists”, then a committed atheist’s p(A) is 0, and a committed theist’s p(A) is 1.

(2) Frequency of a type of repeatable event. In this case the probability p(A) is the number of times event A happens, divided by the total number of events. Alternatively, it is the total number of outcomes that correspond to event A, divided by the total number of possible outcomes. For example, suppose statement A is “the die roll results in a 6”. There are 6 possible outcomes of a die roll, and one of them is 6. So p(A) = 1/6. In other words, if you roll an (ideal) die 600 times, you will see the side with 6 dots on it roughly 100 times.
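The frequentist idea above is easy to check with a quick simulation. This is just a minimal sketch in Python’s standard library (the seed value is arbitrary, chosen only to make the rolls reproducible):

```python
import random

random.seed(0)  # arbitrary seed, just for reproducibility

rolls = 600
# Count how often event A ("the die roll results in a 6") occurs
sixes = sum(1 for _ in range(rolls) if random.randint(1, 6) == 6)

# The empirical frequency should hover near the theoretical p(A) = 1/6,
# i.e. roughly 100 sixes out of 600 rolls
print(sixes, sixes / rolls)
```

Run it a few times with different seeds and the count wobbles around 100 — which is exactly the sense in which a frequentist probability is a long-run frequency rather than a degree of belief.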

Clearly, if statement A is “Mind uploading will be discovered in the future”, then we cannot use frequentist notions of probability. We do not have access to a large collection of universes from which to count the ones in which mind uploading has been successfully discovered, and then divide that number by the total number of universes. In other words, statement A does not refer to a statistical ensemble — it is a unique event. For frequentists, the probability of a unique event can only be 0 (hasn’t happened) or 1 (happened). And since mind uploading hasn’t happened yet, the frequency-based probability is 0.

So when a person asks about the “chance” of some unique future event, he or she is implicitly asking for a subjective degree of belief in the feasibility of this event. If you force me to answer the question, I’ll say that my subjective degree of belief in the possibility of mind uploading is zero. But I prefer not to assign any number, because I think the concept of mind uploading is incoherent (as opposed to merely infeasible). The question of its feasibility does not really arise (subjectively), because the idea of mind uploading is almost meaningless to me. Data can be uploaded and downloaded. But is the mind data? I don’t know one way or the other, so how can I believe in some future technology that presupposes that the mind is data?

More on probability theory:

Chances Are — NYT article on probability by Steve Strogatz

Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa)

Interpretations of Probability – Stanford Encyclopedia of Philosophy