How to navigate on Planet Brain

I was asked the following question on Quora: “How do you most easily memorize Brodmann’s areas?”. The question details added the following comment: “Brodmann area 7 is honestly where the numbering starts to seem really arbitrary.” Here’s how I responded:

Yup. The Brodmann numbering system for cortical areas is arbitrary. If you find a mnemonic, do let us know!

I’m a computational modeler working in an anatomy lab, so I confront the deficits in my anatomical knowledge on a daily basis! I can barely remember the handful of Brodmann areas relevant to my project, let alone the full list! I have a diagram of the areas taped up next to my monitor. 🙂

Neuroanatomists become familiar with the brain’s geography over years and years of “travel” through the brain. Think of it like this: what they’re doing is like navigating a city that doesn’t have a neat New York-style city block structure with sensibly numbered streets and avenues. Boston, where I live, is largely lacking in regularity, so one really has to use landmarks — like the Charles River, the Citgo sign, or the Prudential Center. The landmarks for neuroanatomists are sulci and gyri. Over time they learn the Brodmann area numbers. Only instead of a 2D city, neuroanatomists are mapping a 3D planet!


Over the years my lab — the Neural Systems Laboratory at Boston University — has developed a structural model that explains cortical areas and their interconnections in terms of cytoarchitectonic features. It doesn’t provide a naming/addressing system, but at least it offers a way to make sense of the forest of areas!

Fig 1. Schematic representation of four broad cortical types. Agranular and dysgranular cortices are of the limbic type. Figure from [1].

The structural model [1,2] is based on the observation that the 6-layer nature of isocortex is not uniform, but varies systematically. The simplest parts of the cortex are the “limbic” cortices, which include posterior orbitofrontal and anterior cingulate cortices. Limbic cortices have around 4 distinct layers. The most differentiated parts of the cortex are the “eulaminate” cortices, which include primary sensory areas, and some (but not all!) parts of the prefrontal cortex, such as dorsolateral prefrontal cortex. Eulaminate cortices have 6 easily distinguished layers. [See Fig 1]. Interestingly, there is some evidence that the simplest cortices are phylogenetically oldest, and that the most differentiated are most recent.

Fig 2. Schematic representation of cortico-cortical projections. Figure from [2].

Every functional cortical hierarchy* consists of a spectrum of cortices from limbic to eulaminate areas. Areas which are similar tend to be more strongly connected to each other, with many layers linking to each other in a way that can be described as “columnar”, “lateral” or “symmetric”. Dissimilar areas are generally more weakly connected, and have an “asymmetric” laminar pattern of connections, in which projections from a less differentiated area to a more differentiated area originate in deep layers (5 and 6), and terminate in superficial layers (1,2 and 3). Projections from a more differentiated area to a less differentiated area have the opposite pattern: they originate in superficial layers (2 and 3), and terminate in deep layers (4,5 and 6). [See Fig 2.]

For more on the details of the model, check out the references [1,2]. My boss, Helen Barbas, just submitted a short review about the structural model. When it is out I will append it to this answer.

To return to the city analogy, the structural model tells us that we can infer the (transportation/social/cultural?) links between pairs of neighborhoods based on what the two neighborhoods look like. If the structural model were true for cities, then neighborhoods that have similar houses and street layouts would be more closely linked than dissimilar neighborhoods. Similar neighborhoods would have one type of linkage (the “symmetric” type), whereas dissimilar neighborhoods would have another (the “asymmetric” type).

References

[1] Dombrowski SM, Hilgetag CC, Barbas H (2001) Quantitative architecture distinguishes prefrontal cortical systems in the rhesus monkey. Cereb Cortex 11: 975-988.

[2] Barbas H, Rempel-Clower N (1997) Cortical structure predicts the pattern of corticocortical connections. Cereb Cortex 7: 635-646.

Notes

* Heterarchy might be a better description than hierarchy.

Here’s a link to the Quora answer: How do you most easily memorize Brodmann’s areas?

Why we can’t anticipate what future science will look like

I was asked the following question on Quora:

What kind of information do we need to discover everything about memory in the brain and its mechanism?

I took the opportunity to recapitulate an excellent point made by Paul Feyerabend in his book Against Method, which I am currently reading.

Here’s my answer:

We’ll need to collect information at the genetic, synaptic, cellular, network, and behavioral levels (and perhaps even environmental and social levels), and integrate them into a single picture of memory in action. In other words neuroscientists are already more or less on the right track. Sometimes we know exactly what we’d like to study experimentally, but we lack the technical ability to do so. (For example, our non-invasive techniques for measuring human neural activity are extremely coarse-grained and indirect.)

But I don’t think it is possible to know in advance what specific kinds of data will prove decisive in the creation of a comprehensive theory of memory. Every new experiment can potentially throw up new theoretical questions. We can’t anticipate the evolution of a scientific research program, because we lack the very thing we are searching for: a theory that tells us what is important and what isn’t. If we already had a perfect theory, it wouldn’t be research.

We typically think of experiments and theories as completely separate entities. So we imagine that science involves a linear process like this:

observation -> theory -> new observation -> new theory ->

…and so on. But this doesn’t really capture how science actually proceeds. Think of it this way. Before we have a good theory, our observations may be contaminated by the old partially-successful theories. A theory — even a half-baked one — comes with its own ontology of what exists and what doesn’t. Experimentalists have their own working models and rules-of-thumb that tell them what is worth recording/analyzing and what isn’t. Some of these models and rules may prove wrong, once a good theory comes along. But before that theory comes along, we can’t say much about them. Theory and experiment are intertwined: each can reinforce (or refute) the other.

Philosophers have pointed out for a while now that experiments are not just true pictures of the world — they are intrinsically theory-laden. Theory goes into both the design and the analysis of experiments. This doesn’t mean they are not to be trusted. It only means that in the periods where there is no obviously successful theory, you cannot say which experiments will prove to be the building blocks of a future theory, and which will eventually prove to be wrong or in need of a fresh interpretation.

To sum this up: if we find ourselves in an unlit room we’ve never entered before, we have no choice but to fumble around in the dark until we find a light switch. We can’t anticipate our trajectory through the room — but after we find the light switch, every stumbling step and bruised toe can be retrospectively explained.

 

Is consciousness complex?

Someone on Quora asked the following question: What’s the correlation between complexity and consciousness?

Here’s my answer:

Depends on who you ask! Both complexity and consciousness are contentious words, and mean different things to different people.

I’ll build my answer around the idea of complexity, since it’s easier to talk about scientifically (or at least mathematically) than consciousness. Half-joking comments about complexity and consciousness are to be found in italics.

I came across a nice list of measures of complexity, compiled by Seth Lloyd, a researcher from MIT, which I will structure my answer around. [pdf]

Lloyd describes measures of complexity as ways to answer three questions we might ask about a system or process:

  1. How hard is it to describe?
  2. How hard is it to create?
  3. What is its degree of organization?

1. Difficulty of description: Some objects are complex because they are difficult for us to describe. We frequently measure this difficulty in binary digits (bits), and also use concepts like entropy (information theory) and Kolmogorov (algorithmic) complexity. I particularly like Kolmogorov complexity. It’s a measure of the computational resources required to specify a string of characters: the size of the smallest algorithm that can generate that string of letters or numbers (all of which can be converted into bits). So if you have a string like “121212121212121212121212”, it has a description in English — “12 repeated 12 times” — that is even shorter than the actual string. Not very complex. But the string “asdh41ubmzzsa4431ncjfa34” may have no description shorter than the string itself, so it will have higher Kolmogorov complexity. This measure of complexity can also give us an interesting way to talk about randomness. Loosely speaking, a random process is one whose simulation is harder to accomplish than simply watching the process unfold! Minimum message length is a related idea that also has practical applications. (It seems Kolmogorov complexity is technically uncomputable!)
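Although Kolmogorov complexity is uncomputable, an off-the-shelf compressor gives a crude, computable upper bound on it: a patterned string squeezes down much further than a random-looking one. A minimal sketch in Python, using the two strings from the example above:

```python
import zlib

def description_length(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude,
    computable upper bound on its Kolmogorov complexity."""
    return len(zlib.compress(s.encode("utf-8"), 9))

patterned = "12" * 12                    # "1212...12": has a short English description
randomish = "asdh41ubmzzsa4431ncjfa34"   # no obvious pattern for the compressor to exploit

# The patterned string compresses well below its raw length;
# the random-looking one barely compresses at all.
print(description_length(patterned), description_length(randomish))
```

This is only a proxy, of course — real compressors exploit a narrow family of regularities — but it captures the spirit of the idea.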

Consciousness is definitely hard to describe. In fact we seem to be stuck at the description stage at the moment. Describing consciousness is so difficult that bringing in bits and algorithms seem a tad premature. (Though as we shall see, some brave scientists beg to differ.)

2. Difficulty of creation: Some objects and processes are seen as complex because they are really hard to make. Kolmogorov complexity could show up here too, since simulating a string can be seen both as an act of description (the code itself) and an act of creation (the output of the code). Lloyd lists the following terms that I am not really familiar with: Time Computational Complexity; Space Computational Complexity; Logical depth; Thermodynamic depth; and “Crypticity” (!?). In addition to computational difficulty, we might add other costs: energetic, monetary, psychological, social, and ecological. But perhaps then we’d be confusing the complex with the cumbersome? 🙂

Since we haven’t created a consciousness yet, and don’t know how nature accomplished it, perhaps we are forced to say that consciousness really is complex from the perspective of artificial synthesis. But if/when we have made an artificial mind — or settled upon a broad definition of consciousness that includes existing machines — then perhaps we’ll think of consciousness as easy! Maybe it’s everywhere already! Why pay for what’s free?

3. Degree of organization: Objects and processes that seem intricately structured are also seen as complex. This type of complexity differs strikingly from computational complexity. A string of random noise is extremely complex from an information-theoretic perspective, because it is virtually incompressible — it cannot be condensed into a simple algorithm. A book consisting of totally random characters contains more information, and is therefore more algorithmically complex, than a meaningful text of the same length. But strings of random characters are typically interpreted as totally lacking in structure, and are therefore in a sense very simple. Some measures that Lloyd associates with organizational complexity include fractal dimension, metric entropy, stochastic complexity, and several more, most of which I confess I had never heard of until today. I suspect that characterizing organizational structure is an ongoing research endeavor. In a sense that’s what mathematics is — the study of abstract structure.
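The simplest of these information-flavored measures to compute is Shannon entropy. Treating a string as a bag of characters, the entropy of its character-frequency distribution is zero for a fully repetitive string and grows as the characters become more uniformly spread out. A quick sketch (the strings here are just illustrative):

```python
from collections import Counter
from math import log2

def char_entropy(s: str) -> float:
    """Shannon entropy (bits per character) of the character
    frequency distribution of s."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

print(char_entropy("aaaaaaaa"))                  # one symbol: 0 bits
print(char_entropy("abababab"))                  # two equiprobable symbols: 1 bit
print(char_entropy("asdh41ubmzzsa4431ncjfa34"))  # many symbols: several bits
```

Note the twist this illustrates: by this measure the random-looking string scores *highest*, even though intuitively it is the least organized — which is exactly why organizational complexity needs measures beyond plain entropy.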

Consciousness seems pretty organized, especially if you’re having a good day! But it’s also the framework by which we come to know that organization exists in nature in the first place…so this gets a bit loopy. 🙂

Seth Lloyd ends his list with concepts that are related to complexity, but don’t necessarily have measures. These I think are particularly relevant to consciousness and to the more prosaic world I work in: neural network modeling.

Self-organization
Complex adaptive system
Edge of chaos

Consciousness may or may not be self-organized, but it definitely adapts, and it’s occasionally chaotic.

To Lloyd’s very handy list let me also add self-organized criticality and emergence. Emergence is an interesting concept which has been falsely accused of being obscurantism. A property is emergent if it is seen in a system, but not in any constituent of the system. For instance, the thermodynamic gas laws emerge out of kinetic theory, but they make no reference to molecules. The laws governing gases show up when there is a large enough number of particles, and when these laws reveal themselves, microscopic details often become irrelevant. But gases are the least interesting substrates for emergence. Condensed matter physicists talk about phenomena like the emergence of quasiparticles, which are excitations in a solid that behave as if they are independent particles, but depend for this independence, paradoxically, on the physics of the whole object. (Emergence is a fascinating subject in its own right, regardless of its relevance to consciousness. Here’s a paper that proposes a neat formalism for talking about emergence: Emergence is coupled to scope, not level. PW Anderson’s classic paper “More is Different” also talks about a related issue: pdf)

Consciousness may well be an emergent process — we rarely say that a single neuron or a chunk of nervous tissue has a mind of its own. Consciousness is a word that is reserved for the whole organism, typically.

So is consciousness complex? Maybe…but not really in measurable ways. We can’t agree on how to describe it, we haven’t created it artificially yet, and we don’t know how it is organized, or how it emerged!

In my personal opinion many of the concepts people associate with consciousness are far outside of the scope of mainstream science. These include qualia, the feeling of what-it-is-like, and intentionality, the observation that mental “objects” always seem to be “about” something.

This doesn’t mean I think these aspects of consciousness are meaningless, only that they are scientifically intractable. Other aspects of consciousness, such as awareness, attention, and emotion might also be shrouded in mystery, but I think neuroscience has much to say about them — this is because they have some measurable aspects, and these aspects step out of the shadows during neurological disorders, chemical modulation, and other abnormal states of being.

However…

There are famous neuroscientists who might disagree. Giulio Tononi has come up with something called integrated information theory, which comes with a measure of consciousness he christened phi. Phi is supposed to capture the degree of “integratedness” of a network. I remain quite skeptical of this sort of thing — for now it seems to be a metaphor inspired by information theory, rather than a measurable quantity. I can’t imagine how we will be able to relate it to actual experimental data. Information, contrary to popular perception, is not something intrinsic to physical objects. The amount of information in a signal depends on the device receiving the signal. Right now we have no way of knowing how many “bits” are being transmitted between two neurons, let alone between entire regions of the brain. Information theory is best applied when we already know the nature of the message, the communication channel, and the encoding/decoding process. We have only partially characterized these aspects of neural dynamics. Our experimental data seem far too fuzzy for any precise formal approach. [Information may actually be a concept of very limited use in biology, outside of data fitting. See this excellent paper for more: A deflationary account of information in biology. This sums it up: “if information is in the concrete world, it is causality. If it is abstract, it is in the head.”]

But perhaps this paper will convince me otherwise: Practical Measures of Integrated Information for Time-Series Data. [I very much doubt it though.]

___

I thought I would write a short answer… but I ended up learning a lot as I added more info.

View Answer on Quora

Is the right-brain/left-brain (emotional/rational) distinction still useful?

[I was asked to answer this question on Quora.]

I think there are actually three different questions here:

1. Is the right-brain/left-brain distinction still useful?
Yes. Marc Ettlinger’s answer on Quora gets into aspects of this. There are many differences between the left and right hemispheres — some of the most unambiguous differences are in the domain of language processing. But the devil is in the details, many of which come from methods that are not always easy to interpret.

2. Is the emotional/rational distinction still useful?
Sort of (but it is often a source of confusion). Emotion and rationality/cognition are no longer seen as enemies. Rather than a dichotomy, there is a symbiotic spectrum of emotional and rational processes.

3. Is the right-brain/left brain distinction the same as the emotional/rational distinction?
Absolutely Not.


Let’s look at each issue in turn.

1. Lateralization: the right-brain/left-brain distinction

There is no doubt that there are differences in brain structure and function between the two hemispheres. Handedness is the best known example — the right side of the brain controls the left side of the body and vice versa, and humans are typically more dexterous with one hand than the other, so we say that one side of the brain is “dominant”. But it is also true that normal behavior requires cooperation between the two hemispheres. Split brain patients aren’t really normal.

However, there are reasons to be cautious about reading too much into the left-right difference data (especially for emotion and cognition). A lot of lateralization research is done using fMRI, which has many methodological and interpretational problems (e.g. Stark & Squire, 2001, Friston et al., 1996). For instance, a statistically significant difference in processing between the two hemispheres might have a small effect size. Also, focusing on differences in activity — as is common in brain scanning studies — obscures the fact that very often both areas are contributing to the behavior in question. In team sports, the observation that one team member is burning more calories than another doesn’t imply the other team members are doing nothing.
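The significance-versus-effect-size point is easy to demonstrate with a toy simulation (the numbers here are entirely hypothetical, not real hemispheric data): given enough samples, a negligible mean difference between two groups yields an enormous t statistic, even though the standardized effect size (Cohen’s d) remains tiny.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # a deliberately huge sample

# Hypothetical "activation" values: the true mean difference is a negligible 0.02 SD.
left = rng.normal(0.00, 1.0, n)
right = rng.normal(0.02, 1.0, n)

diff = right.mean() - left.mean()
pooled_sd = np.sqrt((left.var(ddof=1) + right.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd  # standardized effect size: ~0.02, trivially small

# Welch's t statistic: grows with sqrt(n) even when the effect is tiny.
t_stat = diff / np.sqrt(left.var(ddof=1) / n + right.var(ddof=1) / n)

print(f"Cohen's d = {cohens_d:.3f}, t = {t_stat:.1f}")
```

The lesson: a “significant” hemispheric difference is not necessarily a *substantial* one.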

I have written about some of these fMRI issues in two blog posts: here and here. In this post I come up with some analogies to help (me) understand the statistical methods, and why they might occasionally mislead us.

The problem with fMRI is twofold: (1) We still don’t know what exactly fMRI is measuring, and (2) there is a lot of statistical hocus-pocus in the analysis of the results. See the infamous paper Voodoo Correlations in Social Neuroscience for some of the most heinous examples of statistical mistakes in fMRI research, particularly studies of emotion (Vul et al., 2009). More on the backlash against fMRI can be found in this Mind Hacks post. This critique of fMRI from the perspective of a PET researcher is also worth reading (Fox, 2012). Also see this page for a discussion of the subtraction method and related issues. Some statistical problems apply to other experimental methods too. And there is plenty of fMRI research that is carefully done, avoiding statistical and interpretational mistakes.

Even lesion studies can be problematic, particularly because one cannot easily tease apart the effect of the lesion from the effect of compensatory mechanisms (e.g. Hagemann et al. 2003). The brain’s plasticity is an extraordinary thing, so recovery processes add another level of complexity to the issue.

Also check out this blog post by Bradley Voytek. He says

just remember to ask, “can a person who has a lesion to that brain region not experience that emotion or do that behavior anymore?” If the person still can, then that is not where that behavior is located in the brain. And, in all likelihood, that function can’t be localized to any one region at all.

2. Passion versus Reason: the age-old distinction between emotion and “rationality”

The opposition between emotion and rationality has been exaggerated quite a bit. This may be a hangover of traditional and/or Victorian value systems that esteem self-restraint over displays of emotion. (Stiff upper lip, old chap!) This said, there are disorders that lead to inappropriate emotions that interfere with cognition. But there are also disorders that lead to a lack of emotion, and these disorders also interfere with normal cognition. Antonio Damasio makes this case in his 1994 book Descartes’ Error.

I’m going to copy excerpts from a paper I was co-author of (John et al., 2013):

The debate on the nature of cognition and emotion is a modern scientific manifestation of an age-old dichotomy. “Cognition” has come to refer to an assortment of useful behaviors – such as attention, memory and symbolic reasoning, while “emotion” carries with it the connotation of behavior that is irrational, evolutionarily ancient, and antithetical to efficient rationality. In this paper we outline findings that demonstrate both functional and anatomical overlap between cognitive and emotional processes, and use computational modeling to illustrate how learning processes may make cognitive-emotional interactions adaptive.

[…]
Though reason and emotion have been viewed as opposed processes in popular culture since ancient times, emotions have been treated as adaptive behavioral phenotypes by scientists since the time of Darwin (1872). Treating emotion as an adaptive phenotype fundamentally subverts any reason-emotion antithesis, because it places emotion as another, if distinctive, enabler of “biological rationality” (Damasio, 1994). Animals have a complex array of cognitive operations to draw upon, and an animal is rational if it knows or can learn how to draw upon those operations to maximize its well-being and minimize threats. In recent years, neuroscientists have shown that the parts of the brain that are recruited during episodes with emotion-arousing stimuli are also de-recruited when no emotion arousing stimuli are present, or when an animal learns that formerly emotion-arousing cues can be safely ignored (e.g. LaBar et al., 1998; Sehlmeyer et al., 2009; Bach et al., 2011; Hartley et al., 2011; van Well et al., 2012). Emotion is indeed a highly adaptive behavioral phenotype.
[…]
We have argued that rather than being opposing forces, cognition and emotion can be seen as points on a continuum or gradient of flexible processes required for adaptive categorization of, and response to, changes in the external and internal environment of an organism. While this conceptualization may not capture all the psychological nuances of the terms, it highlights the experimentally tractable facets of “cognition” and “emotion”.

The functional continuum is based on the robust connections between areas associated with cognition and those associated with emotion.

3. Right brain = Emotional? Left brain = Rational?

This is almost certainly not true. Both halves of the brain have corresponding limbic structures that are involved in cognitive-emotional interaction.

“The experience of emotion is not lateralized to one or the other hemisphere; rather, it involves dynamic processes that include interactions between anterior and posterior regions of both hemispheres, as well as between cortical and subcortical structures of the brain.” (Heller et al., 1998)

“results of the studies reviewed herein suggest that a simple left/right dichotomy with respect to hemispheric specialization for the autonomic component of the emotional response is probably untenable.” (Hagemann et al. 2003)

While there may be differences in the signal-processing properties of the left and right hemispheres, these differences cannot be aligned with the rational/emotional dichotomy. A brief scan of Google Scholar should bear this out (such as this or this). Studies seem to suggest that the left and right amygdalae contribute to different aspects of cognitive-emotional interaction (Markowitsch, 1999, Baas et al., 2004), but do not divide along a cognition-emotion faultline. There may not be significant differences between left and right orbitofrontal cortices (Kringelbach, 2005), which are also involved in emotion.

The fMRI results are complex and often mutually contradictory, but the more solid anatomical connectivity data point to a model of cognitive-emotional interactions that is very different from the cartoonish assertion that “the right brain is emotional and the left brain is rational”.

References

Baas, D., Aleman, A., & Kahn, R. S. (2004). Lateralization of amygdala activation: a systematic review of functional neuroimaging studies. Brain Research Reviews, 45(2), 96-103.

Fox, P. T. (2012). The coupling controversy. NeuroImage, 62(2), 594-601.

Friston, K. J., Price, C. J., Fletcher, P., Moore, C., Frackowiak, R. S. J., & Dolan, R. J. (1996). The trouble with cognitive subtraction. NeuroImage, 4(2), 97-104.

Hagemann, D., Waldstein, S., & Thayer, J. (2003). Central and autonomic nervous system integration in emotion Brain and Cognition, 52 (1), 79-87 DOI: 10.1016/S0278-2626(03)00011-3

Heller, W., Nitschke, J. B., & Miller, G. A. (1998). Lateralization in emotion and emotional disorders. Current Directions in Psychological Science.

John, Y. J., Bullock, D., Zikopoulos, B., & Barbas, H. (2013). Anatomy and computational modeling of networks underlying cognitive-emotional interaction. Frontiers in Human Neuroscience, 7.

Kringelbach, M. L. (2005). The human orbitofrontal cortex: linking reward to hedonic experience. Nature Reviews Neuroscience, 6(9), 691-702.

Markowitsch, H. J. (1999). Differential contribution of right and left amygdala to affective information processing. Behavioural Neurology, 11(4), 233-244.

Stark, C. E., & Squire, L. R. (2001). When zero is not zero: the problem of ambiguous baseline conditions in fMRI. Proceedings of the National Academy of Sciences, 98(22), 12760-12766.

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science, 4(3), 274-290.


Will we ever be able to upload our minds to a computer?


My answer to a Quora question: What percent chance is there that whole brain emulation or mind uploading to a neural prosthetic will be feasible within 35 years?

This opinion is unlikely to be popular among sci-fi fans, but I think the “chance” of mind uploading happening at any time in the future is zero. Or better yet, as a scientist I would assign no number at all to this subjective probability. [Also see the note on probability at the end.]

I think the concept is still incoherent from both philosophical and scientific perspectives.

In brief, these are the problems:

  • We don’t know what the mind is from a scientific/technological perspective.
  • We don’t know which processes in the brain (and body!) are essential to subjective mental experience.
  • We don’t have any intuition for what “uploading” means in terms of mental unity and continuity.
  • We have no way of knowing whether an upload has been successful.

You could of course take the position that clever scientists and engineers will figure it out while the silly philosophers debate semantics. But this I think is based on shaky analogies with other examples of scientific progress. You might justifiably ask “What are the chances of faster-than-light travel?” You could argue that our vehicles keep getting faster, so it’s only a matter of time before we have Star Trek style warp drives. But everything we know about physics says that crossing the speed of light is impossible. So the “chance” in this domain is zero. I think that the idea of uploading minds is even more problematic than faster-than-light travel, because the idea does not have any clear or widely accepted scientific meaning, let alone philosophical meaning. Faster-than-light travel is conceivable at least, but mind uploading may not even pass that test!

I’ll now discuss some of these issues in more detail.

The concept of uploading a mind is based on the assumption that mind and body are separate entities that can in principle exist without each other. There is currently no scientific proof of this idea. There is also no philosophical agreement about what the mind is. Mind-body dualism is actually quite controversial among scientists and philosophers these days.

People (including scientists) who make grand claims about mind uploading generally avoid the philosophical questions. They assume that if we have a good model of brain function, and a way to scan the brain in sufficient detail, then we have all the technology we need.

But this idea is full of unquestioned assumptions. Is the mind identical to a particular structural or dynamic pattern? And if software can emulate this pattern, does it mean that the software has a mind? Even if the program “says” it has a mind, should we believe it? It could be a philosophical zombie that lacks subjective experience.

Underlying the idea of mind/brain uploading is the notion of Multiple Realizability — the idea that minds are processes that can be realized in a variety of substrates. But is this true? It is still unclear what sort of process the mind is. There are always properties of a real process that a simulation doesn’t possess. A computer simulation of water can reflect the properties of water (in the simulated ‘world’), but you wouldn’t be able to drink it! 🙂

Even if we had the technology for “perfect” brain scans (though it’s not clear what a “perfect” copy is), we run into another problem: we don’t understand what “uploading” entails. We run into the Ship of Theseus problem. In one variant of this problem/paradox we imagine that Theseus has a ship. He repairs it every once in a while, each time replacing one of the wooden boards. Unbeknownst to him, his rival has been keeping the boards he threw away, and over time he constructed an exact physical replica of Theseus’s ship. Now, which is the real ship of Theseus? His own ship, which is now physically distinct from the one he started with, or the (counterfeit?) copy, which is physically identical to the initial ship? There is no universally accepted answer to this question.

We can now explicitly connect this with the idea of uploading minds. Let’s say the mind is like the original (much repaired) ship of Theseus. Let’s say the computer copy of the brain’s structures and patterns is like the counterfeit ship. For some time there are two copies of the same mind/brain system — the original biological one, and the computer simulation. The very existence of two copies violates a basic notion most people have of the Self — that it must obey a kind of psychophysical unity. The idea that there can be two processes that are both “me” is incoherent (meaning neither wrong nor right). What would that feel like for the person whose mind had been copied?

Suppose in response to this thought experiment you say, “My simulated Self won’t be switched on until after I die, so I don’t have to worry about two Selves — unity is preserved.” In this case another basic notion is violated — continuity. Most people don’t think of the Self as something that can cease to exist and then start existing again. Our biological processes, including neural processes, are always active — even when we’re asleep or in a coma. What reason do we have to assume that when these activities cease, the Self can be recreated?

Let’s go even further: let’s suppose we have a great model of the mind, and a perfect scanner, and we have successfully run a simulated version of your mind on a computer. Does this simulation have a sense of Self? If you ask it, it might say yes. But is this enough? Even currently-existing simulations can be programmed to say “yes” to such questions. How can we be sure that the simulation really has subjective experience?  And how can we be sure that it has your subjective experience? We might have just created a zombie simulation that has access to your memories, but cannot feel anything. Or we might have created a fresh new consciousness that isn’t yours at all! How do we know that a mind without your body will feel like you? [See the link on embodied cognition for more on this very interesting topic.]

And — perhaps most importantly — who on earth would be willing to test out the ‘beta versions’ of these techniques? 🙂

Let me end with a verse from William Blake’s poem “The Tyger”.

What the hammer? what the chain?
In what furnace was thy brain?

~

Further reading:

Will Brains Be Downloaded? Of Course Not!

Personal Identity

Ship of Theseus

Embodied Cognition

~

EDIT: I added this note to deal with some interesting issues to do with “chance” that came up in the Quora comments.

A note on probability and “chance”

Assigning a number to a single unique event such as the discovery of mind-uploading is actually problematic. What exactly does “chance” mean in such contexts? The meaning of probability is still being debated by statisticians, scientists and philosophers. For the purposes of this discussion, there are two basic notions of probability:

(1) Subjective degree of belief. We start with a statement A. The probability p(A) = 0 if I am certain A is false, and p(A) = 1 if I am certain A is true; values in between reflect intermediate degrees of belief. In other words, as your probability p(A) moves from 0 to 1, your subjective doubt decreases. If A is the statement “God exists”, then an atheist’s p(A) is 0, and a theist’s p(A) is 1.

(2) Frequency of a type of repeatable event. In this case the probability p(A) is the number of times event A happens, divided by the total number of events. Alternatively, it is the number of outcomes that correspond to event A, divided by the total number of possible outcomes. For example, suppose statement A is “the die roll results in a 6”. There are 6 possible outcomes of a die roll, and one of them is a 6. So p(A) = 1/6. In other words, if you roll an (ideal) die 600 times, you will see the side with 6 dots on it roughly 100 times.
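The frequentist idea is easy to see in simulation. Here is a minimal sketch (the function name is mine, purely illustrative): estimate p(A) for the die-roll example by counting how often a simulated fair die comes up 6.

```python
import random

def frequentist_estimate(trials=600):
    """Estimate p(roll == 6) as a long-run frequency over repeated die rolls."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
    return hits / trials

# With more trials, the frequency settles near the theoretical value 1/6 ~ 0.167.
print(round(frequentist_estimate(600_000), 2))
```

The point of the sketch is that the frequentist definition only makes sense for events we can (at least in principle) repeat many times — which is exactly what fails for a one-off event like the invention of mind uploading.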

Clearly, if statement A is “Mind uploading will be discovered in the future”, then we cannot use frequentist notions of probability. We do not have access to a large collection of universes from which to count the ones in which mind uploading has been successfully discovered, and then divide that number by the total number of universes. In other words, statement A does not refer to a statistical ensemble — it is a unique event. For frequentists, the probability of a unique event can only be 0 (hasn’t happened) or 1 (happened). And since mind uploading hasn’t happened yet, the frequency-based probability is 0.

So when a person asks about the “chance” of some unique future event, he or she is implicitly asking for a subjective degree of belief in the feasibility of this event. If you force me to answer the question, I’ll say that my subjective degree of belief in the possibility of mind uploading is zero. But I actually prefer not to assign any number, because I actually think the concept of mind uploading is incoherent (as opposed to unfeasible). The concept of its feasibility does not really arise (subjectively), because the idea of mind uploading is almost meaningless to me. Data can be uploaded and downloaded. But is the mind data? I don’t know one way or the other, so how can I believe in some future technology that presupposes that the mind is data?

More on probability theory:

Chances Are — NYT article on probability by Steve Strogatz

Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa)

Interpretations of Probability – Stanford Encyclopedia of Philosophy

Will the study of mind mimic mathematics?

Mathematics and the mind reflect each other. 🙂 

The question: How much will the study of mind mimic the foundations of math and find use of math?

The fields of computational neuroscience and cognitive science both use quite a bit of mathematics.

The answer hangs on what is meant by the word “mimic” in the question. One way to interpret the question is this: Will ideas about cognition mirror ideas about abstract mathematical concepts? From one perspective you could say this is necessarily the case: since human minds produce mathematics, the structures of mathematics must conform to the constraints of the structures of human thought.

Going in the other direction, mathematics is a key tool for understanding the natural world, and so if we study the mind/brain using methodologies derived from physics, chemistry and engineering, we will no doubt find ourselves using mathematical terminology, analogies, and concepts to describe mental processes.

This blog post I wrote gives an overview of the theoretical and mathematical approaches to the mind/brain: The Pentagon of Neuroscience — An Infographic/Listicle for Understanding the Neuroculture

And this book may be of interest in exploring the psychological roots of mathematical concepts: Where Mathematics Comes From

Based on an answer I wrote on Quora.

Image: Drawing Hands by MC Escher.

How accurate is it to say that a brain area is the ‘seat of’ any particular facet of human experience?

A question from Quora:

In neuroscience, how accurate is it to say that a brain area (or circuit) ‘mediates’, ‘evaluates’ or is the ‘seat of’ any particular facet of human experience?

My answer:

A very interesting and important question! Here’s a capsule version of my answer:

Understanding the brain is like understanding the shape of a very complex, dynamic and multifaceted object by looking at the shadows projected by it on a wall. The color of the light source and its orientation with respect to the object (and the wall) will change the nature of the shadow. Different scientific techniques are like different light sources. Different experimental protocols are like different angles. We have to look at the sequence of shadows and imagine what the object’s actual form is — it is not possible to view this object simultaneously from all angles and with all illumination sources. So there is a degree of creativity and freedom in each person’s own conception of the true nature of the object. Nevertheless, the conception that scientists eventually agree on is likely to be one that tempers this freedom with rationality and responsibility.

(Picture from Wikipedia.)

In general, cautious scientists try to avoid stating explicitly that a particular brain region or circuit is solely responsible for a particular subjective experience. Instead, we use phrases like “dopamine seems to be implicated in the processing of discovering novelty” or “emotional processes may be mediated in part by the amygdala”. Having to qualify all our statements with ‘maybe’, ‘perhaps’, and ‘seems’ is part of the reason academic papers can be a drag for both readers and writers. By the time scientific findings enter the popular press, they are simplified for mass consumption, and come to seem more certain.

When we say a region is implicated in, say, emotion, we generally use several converging lines of evidence. So brain scanning techniques like fMRI — which have several widely publicized problems — are not the only ways to infer function. The earliest studies to implicate the temporal lobe in emotion used very coarse lesions. In the 1800s the entire temporal lobe (including the amygdala) was removed in monkeys, and several mood/emotion related disorders ensued. Progressively more precise lesions were conducted, allowing scientists to discover that the amygdala was the key to what is now known as Klüver–Bucy syndrome. Symptoms include docility, hypersexuality, and hyperorality — overeating or inappropriately exploring things orally.

So long before fMRI was invented, people were starting to piece together a picture of brain functioning based on studies in monkeys and rodents.

Every technique has some confound or drawback, so specific claims about functions require integrating a variety of experimental findings. Lesions, for instance, lead to compensatory mechanisms during post-surgical recovery, so one must be careful about attributing a change in behavior to the lesion per se, rather than to the recovery process. The same goes for post-mortem studies of human brains.

A complementary technique that helps us understand the flow of signals in the brain is Neuroanatomy. By tracing how axons travel from region to region, anatomists can infer how signals from the body — sense organs, muscles, glands, viscera etc — travel to the brain and percolate through the brain. The confound here is that form does not completely constrain function — given the complexity of the brain’s connections, multiple theoretical models can be constructed using a common circuit.

Another window on brain dynamics is electrophysiology — the study of electrical signals in cells and tissues — which includes techniques like EEG and recordings from individual neurons. Brain scanning methods such as fMRI and PET, by contrast, measure metabolic and hemodynamic correlates of neural activity. Neuronal firing patterns can be correlated with external stimuli or with bodily processes, so the neural dynamics that occur in parallel with a particular cognitive, emotional or unconscious/autonomic process can be inferred.

So to sum it up, sweeping statements about brain function should be taken with a pinch of salt, but the size of the pinch should be proportional to the newness of the claim. Some ideas about the hypothalamus, for example, have been corroborated by many of the techniques I have mentioned. The intro or review section of a neuroscience paper typically tries to link new findings with older ones.

None of this addresses the far more difficult philosophical questions surrounding neuroscience such as

  1. How do we best integrate information from various species?
  2. How similar are observable animal behaviors to subjective human feelings?
  3. How do we understand the idea of a neural correlate of some mental phenomenon?

This integration and interpretation problem is sometimes described as being underdetermined. But hopefully, the scientific community will work to ensure that their (always provisional!) descriptions of the brain/mind interface harmonize the data with logic and practical usefulness.

View Answer on Quora

Synesthesia — secret passageways in the mansion of memory?

This post is a slightly modified version of my answer to a Quora question:

Is there a link between synesthesia and involuntary memory?

This is a very interesting question. I can add some neuroscientific flesh to the skeleton you have already laid out.

Involuntary memory seems to involve the ability of sensory “triggers” to unlock memories. This may occur via a proposed neural mechanism called Hebbian learning. Through this mechanism, “cells that fire together wire together.” In other words, if two connected neurons happen to fire at the same time, the synaptic connection between them is strengthened.

So to use Proust’s famous example from In Search of Lost Time, let’s say you are eating a piece of cake at teatime in your aunt’s house. A neuron — or group of neurons — linked to the taste of the cake will presumably fire. Let’s call it neuron A. Similarly a neuron that is linked to the sight of your aunt will also fire at the same time — let this be neuron B. Let’s assume that neuron A projects to neuron B. Then according to Hebb’s rule, the strength of the connection between A and B increases. This in turn improves the ability of A to cause B to fire.
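The Proust example can be sketched in a few lines of code. This is a toy illustration of Hebb's rule, not a model from any particular paper; the function name and learning rate are mine.

```python
def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    """Hebb's rule: strengthen the A->B connection only when both neurons fire together."""
    if pre_active and post_active:
        weight += learning_rate
    return weight

# Taste of the cake (neuron A) and sight of the aunt (neuron B) co-occur at teatime.
w = 0.0
for _ in range(5):  # five co-activations during five teatimes
    w = hebbian_update(w, pre_active=True, post_active=True)
print(round(w, 2))  # 0.5 — A is now better able to drive B

# If only A fires (cake without aunt), the connection is unchanged under this rule.
print(hebbian_update(0.2, pre_active=True, post_active=False))  # 0.2
```

Real synaptic plasticity rules also include decay and normalization terms, but even this bare version captures why a later taste of the cake (firing A) can re-evoke the aunt (firing B).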

The more connections there are, the more opportunities for Hebbian learning. So while you are listening to a lecture or reading a book, connections with other experiences and memories are being made, rendering the information easier to access. Synesthesia may give you more “storage space”, and as a bonus, it may give you more ways of accessing that storage space.

The memory system may be like a labyrinthine mansion. Your memories are locked away in the rooms of this vast maze of a building, and to remember something is to find a way to get to the room where it is stored. The memory mansion of someone without synesthesia may be full of rooms that each have only a single entrance. Without knowing the way to the right entrance — the right recollection strategy — the memory may be present but inaccessible. The memory system of a synesthete, by contrast, may be like a mansion whose rooms have several entrances, as well as secret passageways linking wings of the building that are usually far apart. So a synesthete may have more ways to navigate the maze of his or her own memory!

This is of course speculation, and careful experimentation and theory will be needed to come up with a more solid explanation!

There are documented cases of synesthesia co-occurring with exceptional memory:

Savant Memory in a Man with Colour Form-Number Synaesthesia and Asperger

Synesthetic Color Experiences Influence Memory

Some more relevant citations:

Savant Memory for Digits in a Case of Synaesthesia and Asperger Syndrome is Related to Hyperactivity in the Lateral Prefrontal Cortex

Do Synesthetes Have a General Advantage in Visual Search and Episodic Memory? A Case for Group Studies

This paper suggests synesthesia may not always confer an advantage. The authors say “The results indicate that synesthesia per se does not seem to lead to a strong performance advantage. Rather, the superior performance of synesthetes observed in some case-report studies may be due to individual differences, to a selection bias or to a strategic use of synesthesia as a mnemonic.”

So synesthesia may not itself be a memory-enhancing condition, but a basis from which to discover or create improved memory-recovery strategies — a way to build new doors and secret passageways in your memory mansion! 🙂

View Answer on Quora