The Pentagon of Neuroscience — An Infographic/Listicle for Understanding the Neuroculture

Click here to go straight to the infographic. It should open in Firefox and Chrome.

Neuroscience has hit the big time. Every day, popular newspapers, websites and blogs offer up a heady stew of brain-related self-help (neuro-snake oil?) and gee-whiz science reporting (neuro-wow?). Some scientists and journalists — perhaps caught up in the neuro-fervor — throw caution to the wind, promising imminent brain-based answers to the kinds of questions that probably predate civilization itself: What is the nature of mind? Why do we feel the way we do? Does each person have a fundamental essence? How can we avoid pain and suffering, and discover joy, creativity, and interpersonal harmony?


A group composed of brilliant individuals will not automatically be the most brilliant group

Perhaps the whole can be better than the sum of its parts?

I came across a very interesting study on McGill University’s excellent Brain from Top to Bottom Blog.

In this study of collective intelligence, the researchers performed numerous statistical analyses. The most interesting finding that emerged from them, and that went beyond the debate about just what exactly collective intelligence might represent, was that this factor was not highly correlated with either the average intelligence of the groups’ members or with the intelligence of the group member who had scored the highest on the individual-intelligence test. In other words, a group composed of brilliant individuals will not automatically be the most brilliant group.
The psychologists did find some factors that let them predict whether a given group would be collectively intelligent. But to identify three, they had to look at factors associated with co-operation. The first such factor was the group’s overall social sensitivity—the members’ ability to perceive each other’s emotions. The second factor was equality in taking turns speaking during group decision-making. The third factor was the proportion of women in the group. This last finding is highly consistent with other data showing that women tend to be more socially sensitive than men and to take turns speaking more naturally than men do.



via The Collective Intelligence of Groups

What is a biological model? Here’s a useful categorization system for people interested in neuroscience, cognitive science, and biology

I found an excellent classification of models in a paper on neurogenesis: Using theoretical models to analyse neural development.

I think this should be illuminating for anyone interested in theoretical, mathematical and/or computational approaches in neuroscience, cognitive science, and biology.

There are several ways in which models of biological processes can be classified. 

Formal or informal models

Informal models are expressed in words or diagrams, whereas formal models — which this Review is concerned with — are described in mathematical equations or computer instructions. Using formal language forces a model to be precise and self-consistent. The process of constructing a formal model can therefore identify inconsistencies, hidden assumptions and missing pieces of experimental data. Formal models allow us to deduce the consequences of the postulated interactions among the components of a given system, and thus to test the plausibility of hypothetical mechanisms. Models can generate new hypotheses and make testable predictions, thereby guiding further experimental research. Equally importantly, models can explain and integrate existing data.

 Phenomenological or mechanistic models 

Most formal models lie on a continuum between two extreme categories: phenomenological and mechanistic. A phenomenological model attempts to replicate the experimental data without requiring the variables, parameters and mathematical relationships in the model to have any direct correspondence in the underlying biology. In a mechanistic model, the mathematical equations directly represent biological elements and their actions. Solving the equations then shows how the system behaves. We understand which processes in the model are mechanistically responsible for the observed behaviour, the variables and parameters have a direct biological meaning and the model lends itself better to testing hypotheses and making predictions. Although mechanistic models are often considered superior, both types of model can be informative. For example, a phenomenological model can be useful as a forerunner to a more mechanistic model in which the variables are given explicit biological interpretations. This is particularly important considering that a complete mechanistic model may be difficult to construct because of the great amount of information it should incorporate. Mechanistic models therefore often focus on exploring the consequences of a selected set of processes, or try to capture the essential aspects of the mechanisms, with a more abstract reference to underlying biological processes. 

Top-down or bottom-up models 

Formal models can be constructed using a top-down or a bottom-up approach. In a top-down approach, a model is created that contains the elements and interactions that enable it to have specific behaviours or properties. In a bottom-up approach, instead of starting with a pre-described, desired behaviour, the properties that arise from the interactions among the elements of the model are investigated. Although it is a strategy and not a type of model, the top-down approach resembles phenomenological modelling because it is generally easier to generate the desired behaviour without all of the elements of the model having a clear biological interpretation. Conversely, the bottom-up approach is related to mechanistic modelling, as it is usual to start with model elements that have a biological meaning. Both approaches have their strengths and weaknesses.

(I removed citation numbers for clarity.)
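To make the phenomenological/mechanistic distinction concrete, here is a minimal sketch in Python. It is my own toy example, not from the Review: both "models" describe the same saturating growth curve, but only the second assigns a biological meaning to its parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy "data": a population of cells that grows and then saturates.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 20)
counts = 100 / (1 + np.exp(-(t - 5))) + rng.normal(0, 2, t.size)

# Phenomenological model: fit a sigmoid. The parameters a, b and c are
# chosen purely to reproduce the data; they need not map onto biology.
def sigmoid(t, a, b, c):
    return a / (1 + np.exp(-b * (t - c)))

(a, b, c), _ = curve_fit(sigmoid, t, counts, p0=[100, 1, 5])

# Mechanistic model: every symbol has a biological reading. Progenitors
# divide at rate r until a shared resource K is exhausted (logistic growth),
# i.e. dn/dt = r * n * (1 - n / K).
def simulate(r=1.0, K=100.0, n0=1.0, dt=0.01, T=10.0):
    n, trajectory = n0, []
    for _ in np.arange(0, T, dt):
        n += dt * r * n * (1 - n / K)
        trajectory.append(n)
    return np.array(trajectory)

trajectory = simulate()
print(f"fitted plateau a = {a:.1f}; mechanistic carrying capacity K = 100")
```

The fitted sigmoid could serve as a forerunner; the mechanistic version can be interrogated directly (what happens if the division rate r changes?), which is roughly what the Review means when it says mechanistic models lend themselves better to testing hypotheses.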

One point might be relevant here: a model is neither true nor false — ideally it’s an internally consistent mini-world. A theory is the assertion that a model corresponds with reality.

The Mysterious Power of Naming in Human Cognition

I’ve written a long-form essay for the blog/aggregator site 3 Quarks Daily:

Boundaries and Subtleties: the Mysterious Power of Naming in Human Cognition

Here’s a taster:

I’ve divided up the essay into four parts. Here’s the plan:

  1. We’ll introduce two key motifs — the named and the nameless — with a little help from the Tao Te Ching.
  2. We’ll examine a research problem that crops up in cognitive psychology, neuroscience and artificial intelligence, and link it with more Taoist motifs.
  3. We’ll look at how naming might give us power over animals, other people, and even mathematical objects.
  4. We’ll explore the power of names in computer science, which will facilitate some wild cosmic speculation.

Is consciousness complex?

Someone on Quora asked the following question: What’s the correlation between complexity and consciousness?

Here’s my answer:

Depends on who you ask! Both complexity and consciousness are contentious words, and mean different things to different people.

I’ll build my answer around the idea of complexity, since it’s easier to talk about scientifically (or at least mathematically) than consciousness. Half-joking comments about complexity and consciousness are to be found in italics.

I came across a nice list of measures of complexity, compiled by Seth Lloyd, a researcher from MIT, which I will structure my answer around. [pdf]

Lloyd describes measures of complexity as ways to answer three questions we might ask about a system or process:

  1. How hard is it to describe?
  2. How hard is it to create?
  3. What is its degree of organization?

1. Difficulty of description: Some objects are complex because they are difficult for us to describe. We frequently measure this difficulty in binary digits (bits), and also use concepts like entropy (information theory) and Kolmogorov (algorithmic) complexity. I particularly like Kolmogorov complexity. It's a measure of the computational resources required to specify a string of characters: the size of the smallest algorithm that can generate that string of letters or numbers (all of which can be converted into bits). So if you have a string like "121212121212121212121212", it has a description in English — "12 repeated 12 times" — that is even shorter than the actual string. Not very complex. But the string "asdh41ubmzzsa4431ncjfa34" may have no description shorter than the string itself, so it will have higher Kolmogorov complexity. This measure of complexity can also give us an interesting way to talk about randomness. Loosely speaking, a random process is one whose simulation is harder to accomplish than simply watching the process unfold! Minimum message length is a related idea that also has practical applications. (It seems Kolmogorov complexity is technically uncomputable!)
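Kolmogorov complexity itself is uncomputable, but compressed length is a standard rough stand-in, and it is enough to separate the two strings above. A minimal sketch (the numbers are only illustrative for strings this short, since the compressor's fixed overhead dominates):

```python
import zlib

regular = b"12" * 12                      # "121212121212121212121212"
irregular = b"asdh41ubmzzsa4431ncjfa34"   # the "random-looking" string

for label, s in [("regular", regular), ("irregular", irregular)]:
    compressed = zlib.compress(s, level=9)
    # The repetitive string shrinks; the irregular one does not.
    print(f"{label:9s}: {len(s)} bytes raw -> {len(compressed)} bytes compressed")
```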

Consciousness is definitely hard to describe. In fact, we seem to be stuck at the description stage at the moment. Describing consciousness is so difficult that bringing in bits and algorithms seems a tad premature. (Though as we shall see, some brave scientists beg to differ.)

2. Difficulty of creation: Some objects and processes are seen as complex because they are really hard to make. Kolmogorov complexity could show up here too, since simulating a string can be seen both as an act of description (the code itself) and an act of creation (the output of the code). Lloyd lists the following terms that I am not really familiar with: Time Computational Complexity; Space Computational Complexity; Logical depth; Thermodynamic depth; and "Crypticity" (!?). In addition to computational difficulty, we might add other costs: energetic, monetary, psychological, social, and ecological. But perhaps then we'd be confusing the complex with the cumbersome? :)

Since we haven’t created a consciousness yet, and don’t know how nature accomplished it, perhaps we are forced to say that consciousness really is complex from the perspective of artificial synthesis. But if/when we have made an artificial mind — or settled upon a broad definition of consciousness that includes existing machines — then perhaps we’ll think of consciousness as easy! Maybe it’s everywhere already! Why pay for what’s free?

3. Degree of organization: Objects and processes that seem intricately structured are also seen as complex. This type of complexity differs strikingly from computational complexity. A string of random noise is extremely complex from an information-theoretic perspective, because it is virtually incompressible — it cannot be condensed into a simple algorithm. A book consisting of totally random characters contains more information, and is therefore more algorithmically complex, than a meaningful text of the same length. But strings of random characters are typically interpreted as totally lacking in structure, and are therefore in a sense very simple. Some measures that Lloyd associates with organizational complexity include fractal dimension, metric entropy, stochastic complexity and several more, most of which I confess I had never heard of until today. I suspect that characterizing organizational structure is an ongoing research endeavor. In a sense that's what mathematics is — the study of abstract structure.
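A quick way to see how information content and organization come apart is to compare a repetitive English phrase with a random character string of the same length. The random string has higher per-character entropy ("more information"), yet its characters are unpredictable from their neighbours, which is one crude signature of its lack of structure. A rough sketch:

```python
import math
import random
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits per symbol) of a sequence of hashable symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
structured = "the quick brown fox jumps over the lazy dog " * 50
scrambled = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in structured)

# Per-character entropy: the random string packs in more "information"...
print("structured:", round(entropy(structured), 2), "bits/char")
print("scrambled: ", round(entropy(scrambled), 2), "bits/char")

# ...but it is unstructured: its character pairs carry roughly twice the
# single-character entropy (nothing is predictable from context), whereas
# the English-like text's pairs carry much less than twice as much.
def pairs(s):
    return [s[i:i + 2] for i in range(len(s) - 1)]

print("structured pairs:", round(entropy(pairs(structured)), 2), "bits/pair")
print("scrambled pairs: ", round(entropy(pairs(scrambled)), 2), "bits/pair")
```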

Consciousness seems pretty organized, especially if you're having a good day! But it's also the framework by which we come to know that organization exists in nature in the first place… so this gets a bit loopy. :)

Seth Lloyd ends his list with concepts that are related to complexity, but don't necessarily have measures. These, I think, are particularly relevant to consciousness, and also to the more prosaic world I work in: neural network modeling.

Self-organization
Complex adaptive system
Edge of chaos

Consciousness may or may not be self-organized, but it definitely adapts, and it’s occasionally chaotic.

To Lloyd's very handy list let me also add self-organized criticality and emergence. Emergence is an interesting concept which has been falsely accused of obscurantism. A property is emergent if it is seen in a system, but not in any constituent of the system. For instance, the thermodynamic gas laws emerge out of kinetic theory, but they make no reference to molecules. The laws governing gases show up when there is a large enough number of particles, and when these laws reveal themselves, microscopic details often become irrelevant. But gases are the least interesting substrates for emergence. Condensed matter physicists talk about phenomena like the emergence of quasiparticles, which are excitations in a solid that behave as if they are independent particles, but depend for this independence, paradoxically, on the physics of the whole object. (Emergence is a fascinating subject in its own right, regardless of its relevance to consciousness. Here's a paper that proposes a neat formalism for talking about emergence: Emergence is coupled to scope, not level. PW Anderson's classic paper "More is Different" also talks about a related issue: pdf)
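The gas-law example can be made concrete in a few lines of code. Here is a toy sketch with assumed argon-like parameters: sample molecular velocities from the Maxwell-Boltzmann distribution, compute the pressure from kinetic theory, and note that it matches the macroscopic ideal-gas law, which never mentions individual molecules.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K
m = 6.6e-26          # particle mass, kg (roughly an argon atom; assumed)
N = 1_000_000        # number of simulated particles
V = 1e-3             # volume, m^3

rng = np.random.default_rng(0)
# Each velocity component is Gaussian with variance k_B*T/m (Maxwell-Boltzmann).
v = rng.normal(0.0, np.sqrt(k_B * T / m), size=(N, 3))
mean_sq_speed = (v ** 2).sum(axis=1).mean()

p_kinetic = N * m * mean_sq_speed / (3 * V)  # pressure from microscopic motion
p_ideal = N * k_B * T / V                    # macroscopic law: P = N k_B T / V

print(f"kinetic-theory pressure: {p_kinetic:.3e} Pa")
print(f"ideal-gas-law pressure:  {p_ideal:.3e} Pa")
```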

Consciousness may well be an emergent process — we rarely say that a single neuron or a chunk of nervous tissue has a mind of its own. Consciousness is a word that is reserved for the whole organism, typically.

So is consciousness complex? Maybe…but not really in measurable ways. We can’t agree on how to describe it, we haven’t created it artificially yet, and we don’t know how it is organized, or how it emerged!

In my personal opinion, many of the concepts people associate with consciousness are far outside the scope of mainstream science. These include qualia, the feeling of what-it-is-like, and intentionality, the observation that mental "objects" always seem to be "about" something.

This doesn't mean I think these aspects of consciousness are meaningless, only that they are scientifically intractable. Other aspects of consciousness, such as awareness, attention, and emotion, might also be shrouded in mystery, but I think neuroscience has much to say about them — this is because they have some measurable aspects, and these aspects step out of the shadows during neurological disorders, chemical modulation, and other abnormal states of being.

However…

There are famous neuroscientists who might disagree. Giulio Tononi has come up with something called integrated information theory, which comes with a measure of consciousness he christened phi. Phi is supposed to capture the degree of "integratedness" of a network. I remain quite skeptical of this sort of thing — for now it seems to be a metaphor inspired by information theory, rather than a measurable quantity. I can't imagine how we will be able to relate it to actual experimental data. Information, contrary to popular perception, is not something intrinsic to physical objects. The amount of information in a signal depends on the device receiving the signal. Right now we have no way of knowing how many "bits" are being transmitted between two neurons, let alone between entire regions of the brain. Information theory is best applied when we already know the nature of the message, the communication channel, and the encoding/decoding process. We have only partially characterized these aspects of neural dynamics. Our experimental data seem far too fuzzy for any precise formal approach. [Information may actually be a concept of very limited use in biology, outside of data fitting. See this excellent paper for more: A deflationary account of information in biology. This sums it up: "if information is in the concrete world, it is causality. If it is abstract, it is in the head."]

But perhaps this paper will convince me otherwise: Practical Measures of Integrated Information for Time-Series Data. [I very much doubt it though.]
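For what it's worth, here is a toy sketch of what an "integration-style" measure looks like in practice. This is emphatically not Tononi's phi; it just uses mutual information between two binary units as a crude proxy for how "integrated" a little system is, which is roughly the spirit of the simpler practical measures.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Estimate I(X;Y) in bits from paired discrete samples."""
    n = len(x)
    joint = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

rng = np.random.default_rng(0)
T = 10_000

# "Integrated" pair: unit b copies unit a 90% of the time.
a = rng.integers(0, 2, T)
b = np.where(rng.random(T) < 0.9, a, 1 - a)

# "Disintegrated" pair: two independent units.
c = rng.integers(0, 2, T)
d = rng.integers(0, 2, T)

print("coupled units:    ", round(mutual_information(a, b), 3), "bits")
print("independent units:", round(mutual_information(c, d), 3), "bits")
```

The skepticism above still applies: numbers like these are easy to compute on toy data and very hard to justify for real neural recordings, where we don't know what the relevant states are.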

___

I thought I would write a short answer… but I ended up learning a lot as I added more info.

View Answer on Quora

Is the right-brain/left-brain (emotional/rational) distinction still useful?

[I was asked to answer this question on Quora.]

I think there are actually three different questions here:

1. Is the right-brain/left-brain distinction still useful?
Yes. Marc Ettlinger‘s answer on Quora gets into aspects of this. There are many differences between the left and right hemispheres — some of the most unambiguous differences are in the domain of language processing. But the devil is in the details, many of which come from methods that are not always easy to interpret.

2. Is the emotional/rational distinction still useful?
Sort of (but it is often a source of confusion). Emotion and rationality/cognition are no longer seen as enemies. Rather than a dichotomy, there is a symbiotic spectrum of emotional and rational processes.

3. Is the right-brain/left brain distinction the same as the emotional/rational distinction?
Absolutely Not.


Let’s look at each issue in turn.

1. Lateralization: the right-brain/left-brain distinction

There is no doubt that there are differences in brain structure and function between the two hemispheres. Handedness is the best known example — the right side of the brain controls the left side of the body and vice versa, and humans are typically more dexterous with one hand than the other, so we say that one side of the brain is “dominant”. But it is also true that normal behavior requires cooperation between the two hemispheres. Split brain patients aren’t really normal.

However, there are reasons to be cautious about reading too much into the left-right difference data (especially for emotion and cognition). A lot of lateralization research is done using fMRI, which has many methodological and interpretational problems (e.g. Stark & Squire, 2001; Friston et al., 1996). For instance, a statistically significant difference in processing between the two hemispheres might have a small effect size. Also, focusing on differences in activity — as is common in brain scanning studies — obscures the fact that very often both areas are contributing to the behavior in question. In team sports, the observation that one team member is burning more calories than another doesn't imply the other team members are doing nothing.
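To see how "statistically significant" and "large effect" can come apart, here is a small simulated illustration (made-up numbers, not fMRI data): with enough measurements, a difference of a twentieth of a standard deviation between two conditions comes out wildly significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20_000  # many voxels/trials per condition

left = rng.normal(0.00, 1.0, n)    # "left hemisphere" activation
right = rng.normal(0.05, 1.0, n)   # "right hemisphere": a tiny true difference

t_stat, p_value = stats.ttest_ind(left, right)
cohens_d = (right.mean() - left.mean()) / np.sqrt((left.var() + right.var()) / 2)

print(f"p-value:   {p_value:.2g}   (highly 'significant')")
print(f"Cohen's d: {cohens_d:.3f}  (a tiny effect)")
```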

I have written about some of these fMRI issues in two blog posts: here and here. In this post I come up with some analogies to help (me) understand the statistical methods, and why they might occasionally mislead us.

The problem with fMRI is twofold: (1) we still don't know exactly what fMRI is measuring, and (2) there is a lot of statistical hocus-pocus in the analysis of the results. See the infamous paper Voodoo Correlations in Social Neuroscience for some of the most heinous examples of statistical mistakes in fMRI research, particularly studies of emotion (Vul et al., 2009). More on the backlash against fMRI can be found in this Mind Hacks post. This critique of fMRI from the perspective of a PET researcher is also worth reading (Fox, 2012). Also see this page for a discussion of the subtraction method and related issues. Some of these statistical problems apply to other experimental methods too. And there is plenty of fMRI research that is carefully done, avoiding statistical and interpretational mistakes.

Even lesion studies can be problematic, particularly because one cannot easily tease apart the effect of the lesion from the effect of compensatory mechanisms (e.g. Hagemann et al. 2003). The brain’s plasticity is an extraordinary thing, so recovery processes add another level of complexity to the issue.

Also check out this blog post by Bradley Voytek. He says

just remember to ask, “can a person who has a lesion to that brain region not experience that emotion or do that behavior anymore?” If the person still can, then that is not where that behavior is located in the brain. And, in all likelihood, that function can’t be localized to any one region at all.

2. Passion versus Reason: the age-old distinction between emotion and “rationality”

The opposition between emotion and rationality has been exaggerated quite a bit. This may be a hangover of traditional and/or Victorian value systems that esteem self-restraint over displays of emotion. (Stiff upper lip, old chap!) That said, there are disorders that lead to inappropriate emotions that interfere with cognition. But there are also disorders that lead to a lack of emotion, and these disorders also interfere with normal cognition. Antonio Damasio makes this case in his 1994 book Descartes' Error.

I’m going to copy excerpts from a paper I was co-author of (John et al., 2013):

The debate on the nature of cognition and emotion is a modern scientific manifestation of an age-old dichotomy. “Cognition” has come to refer to an assortment of useful behaviors – such as attention, memory and symbolic reasoning, while “emotion” carries with it the connotation of behavior that is irrational, evolutionarily ancient, and antithetical to efficient rationality. In this paper we outline findings that demonstrate both functional and anatomical overlap between cognitive and emotional processes, and use computational modeling to illustrate how learning processes may make cognitive-emotional interactions adaptive.

[...]
Though reason and emotion have been viewed as opposed processes in popular culture since ancient times, emotions have been treated as adaptive behavioral phenotypes by scientists since the time of Darwin (1872). Treating emotion as an adaptive phenotype fundamentally subverts any reason-emotion antithesis, because it places emotion as another, if distinctive, enabler of “biological rationality” (Damasio, 1994). Animals have a complex array of cognitive operations to draw upon, and an animal is rational if it knows or can learn how to draw upon those operations to maximize its well-being and minimize threats. In recent years, neuroscientists have shown that the parts of the brain that are recruited during episodes with emotion-arousing stimuli are also de-recruited when no emotion arousing stimuli are present, or when an animal learns that formerly emotion-arousing cues can be safely ignored (e.g. LaBar et al., 1998; Sehlmeyer et al., 2009; Bach et al., 2011; Hartley et al., 2011; van Well et al., 2012). Emotion is indeed a highly adaptive behavioral phenotype.
[...]
We have argued that rather than being opposing forces, cognition and emotion can be seen as points on a continuum or gradient of flexible processes required for adaptive categorization of, and response to, changes in the external and internal environment of an organism. While this conceptualization may not capture all the psychological nuances of the terms, it highlights the experimentally tractable facets of “cognition” and “emotion”.

The functional continuum is based on the robust connections between areas associated with cognition and those associated with emotion.

3. Right brain = Emotional? Left brain = Rational?

This is almost certainly not true. Both halves of the brain have corresponding limbic structures that are involved in cognitive-emotional interaction.

“The experience of emotion is not lateralized to one or the other hemisphere; rather, it involves dynamic processes that include interactions between anterior and posterior regions of both hemispheres, as well as between cortical and subcortical structures of the brain.” (Heller et al., 1998)

“results of the studies reviewed herein suggest that a simple left/right dichotomy with respect to hemispheric specialization for the autonomic component of the emotional response is probably untenable.” (Hagemann et al. 2003)

While there may be differences in the signal-processing properties of the left and right hemispheres, these differences cannot be aligned with the rational/emotional dichotomy. A brief scan of Google Scholar should bear this out (such as this or this). Studies seem to suggest that the left and right amygdalae contribute to different aspects of cognitive-emotional interaction (Markowitsch, 1999; Baas et al., 2004), but do not divide along a cognition-emotion faultline. There may not be significant differences between left and right orbitofrontal cortices (Kringelbach, 2005), which are also involved in emotion.

The fMRI results are complex and often mutually contradictory, but the more solid anatomical connectivity data point to a model of cognitive-emotional interactions that is very different from the cartoonish assertion that “the right brain is emotional and the left brain is rational”.

References

Baas, D., Aleman, A., & Kahn, R. S. (2004). Lateralization of amygdala activation: a systematic review of functional neuroimaging studies. Brain Research Reviews, 45(2), 96-103.

Fox, P. T. (2012). The coupling controversy. NeuroImage, 62(2), 594-601.

Friston, K. J., Price, C. J., Fletcher, P., Moore, C., Frackowiak, R. S. J., & Dolan, R. J. (1996). The trouble with cognitive subtraction. NeuroImage, 4(2), 97-104.

Hagemann, D., Waldstein, S., & Thayer, J. (2003). Central and autonomic nervous system integration in emotion. Brain and Cognition, 52(1), 79-87. DOI: 10.1016/S0278-2626(03)00011-3

Heller, W., Nitschke, J. B., & Miller, G. A. (1998). Lateralization in emotion and emotional disorders. Current Directions in Psychological Science.

John, Y. J., Bullock, D., Zikopoulos, B., & Barbas, H. (2013). Anatomy and computational modeling of networks underlying cognitive-emotional interaction. Frontiers in Human Neuroscience, 7.

Kringelbach, M. L. (2005). The human orbitofrontal cortex: linking reward to hedonic experience. Nature Reviews Neuroscience, 6(9), 691-702.

Markowitsch, H. J. (1999). Differential contribution of right and left amygdala to affective information processing. Behavioural Neurology, 11(4), 233-244.

Stark, C. E., & Squire, L. R. (2001). When zero is not zero: the problem of ambiguous baseline conditions in fMRI. Proceedings of the National Academy of Sciences, 98(22), 12760-12766.

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science, 4(3), 274-290.


Will we ever be able to upload our minds to a computer?


My answer to a Quora question: What percent chance is there that whole brain emulation or mind uploading to a neural prosthetic will be feasible within 35 years?

This opinion is unlikely to be popular among sci-fi fans, but I think the “chance” of mind uploading happening at any time in the future is zero.  Or better yet, as a scientist I would assign no number at all to this subjective probability. [Also see the note on probability at the end.]

I think the concept is still incoherent from both philosophical and scientific perspectives.

In brief, these are the problems:

  • We don’t know what the mind is from a scientific/technological perspective.
  • We don’t know which processes in the brain (and body!) are essential to subjective mental experience.
  • We don’t have any intuition for what “uploading” means in terms of mental unity and continuity.
  • We have no way of knowing whether an upload has been successful.

You could of course take the position that clever scientists and engineers will figure it out while the silly philosophers debate semantics. But this I think is based on shaky analogies with other examples of scientific progress. You might justifiably ask “What are the chances of faster-than-light travel?” You could argue that our vehicles keep getting faster, so it’s only a matter of time before we have Star Trek style warp drives. But everything we know about physics says that crossing the speed of light is impossible. So the “chance” in this domain is zero. I think that the idea of uploading minds is even more problematic than faster-than-light travel, because the idea does not have any clear or widely accepted scientific meaning, let alone philosophical meaning. Faster-than-light travel is conceivable at least, but mind uploading may not even pass that test!

I’ll now discuss some of these issues in more detail.

The concept of uploading a mind is based on the assumption that mind and body are separate entities that can in principle exist without each other. There is currently no scientific proof of this idea. There is also no philosophical agreement about what the mind is. Mind-body dualism is actually quite controversial among scientists and philosophers these days.

People (including scientists) who make grand claims about mind uploading generally avoid the philosophical questions. They assume that if we have a good model of brain function, and a way to scan the brain in sufficient detail, then we have all the technology we need.

But this idea is full of unquestioned assumptions. Is the mind identical to a particular structural or dynamic pattern? And if software can emulate this pattern, does it mean that the software has a mind? Even if the program “says” it has a mind, should we believe it? It could be a philosophical zombie that lacks subjective experience.

Underlying the idea of mind/brain uploading is the notion of Multiple Realizability — the idea that minds are processes that can be realized in a variety of substrates. But is this true? It is still unclear what sort of process mind is. There are always properties of a real process that a simulation doesn’t possess. A computer simulation of water can reflect the properties of water (in the simulated ‘world’), but you wouldn’t be able to drink it! :)

Even if we had the technology for "perfect" brain scans (though it's not clear what a "perfect" copy is), we run into another problem: we don't understand what "uploading" entails. We run into the Ship of Theseus problem. In one variant of this problem/paradox, we imagine that Theseus has a ship. He repairs it every once in a while, each time replacing one of the wooden boards. Unbeknownst to him, his rival has been keeping the boards he threw away, and over time the rival constructs an exact physical replica of Theseus's ship. Now, which is the real ship of Theseus? His own ship, which is now physically distinct from the one he started with, or the (counterfeit?) copy, which is physically identical to the initial ship? There is no universally accepted answer to this question.

We can now explicitly connect this with the idea of uploading minds. Let’s say the mind is like the original (much repaired) ship of Theseus. Let’s say the computer copy of the brain’s structures and patterns is like the counterfeit ship. For some time there are two copies of the same mind/brain system — the original biological one, and the computer simulation. The very existence of two copies violates a basic notion most people have of the Self — that it must obey a kind of psychophysical unity. The idea that there can be two processes that are both “me” is incoherent (meaning neither wrong nor right). What would that feel like for the person whose mind had been copied?

Suppose in response to this thought experiment you say, “My simulated Self won’t be switched on until after I die, so I don’t have to worry about two Selves — unity is preserved.” In this case another basic notion is violated — continuity. Most people don’t think of the Self as something that can cease to exist and then start existing again. Our biological processes, including neural processes, are always active — even when we’re asleep or in a coma. What reason do we have to assume that when these activities cease, the Self can be recreated?

Let’s go even further: let’s suppose we have a great model of the mind, and a perfect scanner, and we have successfully run a simulated version of your mind on a computer. Does this simulation have a sense of Self? If you ask it, it might say yes. But is this enough? Even currently-existing simulations can be programmed to say “yes” to such questions. How can we be sure that the simulation really has subjective experience?  And how can we be sure that it has your subjective experience? We might have just created a zombie simulation that has access to your memories, but cannot feel anything. Or we might have created a fresh new consciousness that isn’t yours at all! How do we know that a mind without your body will feel like you? [See the link on embodied cognition for more on this very interesting topic.]

And — perhaps most importantly — who on earth would be willing to test out the ‘beta versions’ of these techniques? :)

Let me end with a verse from William Blake's poem "The Tyger".

What the hammer? what the chain?
In what furnace was thy brain?

~

Further reading:

Will Brains Be Downloaded? Of Course Not!

Personal Identity

Ship of Theseus

Embodied Cognition

~

EDIT: I added this note to deal with some interesting issues to do with “chance” that came up in the Quora comments.

A note on probability and “chance”

Assigning a number to a single unique event such as the discovery of mind-uploading is actually problematic. What exactly does "chance" mean in such contexts? The meaning of probability is still being debated by statisticians, scientists and philosophers. For the purposes of this discussion, there are two basic notions of probability:

(1) Subjective degree of belief. We start with a statement A. The probability p(A) = 0 if I don’t believe A, and p(A) = 1 if I believe A. In other words, if your probability p(A) moves from 0 to 1, your subjective doubt decreases. If A is the statement “God exists” then an atheist’s p(A) is equal to 0, and a theist’s p(A) is 1.

(2) Frequency of a type of repeatable event. In this case the probability p(A) is the number of times event A happens, divided by the total number of events. Alternatively, it is the total number of outcomes that correspond to event A, divided by the total number of possible outcomes. For example, suppose statement A is "the die roll results in a 6". There are 6 possible outcomes of a die roll, and one of them is 6. So p(A) = 1/6. In other words, if you roll an (ideal) die 600 times, you will see the side with 6 dots on it roughly 100 times.
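The frequentist reading of the die example is easy to check by simulation. A minimal sketch:

```python
import random

random.seed(0)
n_rolls = 600
# Count how often event A ("the die roll results in a 6") occurs.
sixes = sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 6)

print(f"sixes in {n_rolls} rolls: {sixes} (expect about {n_rolls // 6})")
print(f"empirical frequency: {sixes / n_rolls:.3f} vs p(A) = 1/6 = {1/6:.3f}")
```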

Clearly, if statement A is “Mind uploading will be discovered in the future”, then we cannot use frequentist notions of probability. We do not have access to a large collection of universes from which to count the ones in which mind uploading has been successfully discovered, and then divide that number by the total number of universes. In other words, statement A does not refer to a statistical ensemble — it is a unique event. For frequentists, the probability of a unique event can only be 0 (hasn’t happened) or 1 (happened). And since mind uploading hasn’t happened yet, the frequency-based probability is 0.

So when a person asks about the “chance” of some unique future event, he or she is implicitly asking for a subjective degree of belief in the feasibility of this event. If you force me to answer the question, I’ll say that my subjective degree of belief in the possibility of mind uploading is zero. But I actually prefer not to assign any number, because I actually think the concept of mind uploading is incoherent (as opposed to unfeasible). The concept of its feasibility does not really arise (subjectively), because the idea of mind uploading is almost meaningless to me. Data can be uploaded and downloaded. But is the mind data? I don’t know one way or the other, so how can I believe in some future technology that presupposes that the mind is data?

More on probability theory:

Chances Are — NYT article on probability by Steve Strogatz

Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa)

Interpretations of Probability – Stanford Encyclopedia of Philosophy

Will the study of mind mimic mathematics?

Mathematics and the mind reflect each other. :) 

The question: How much will the study of mind mimic the foundations of math and find use of math?

The fields of computational neuroscience and cognitive science both use quite a bit of mathematics.

The answer hangs on what is meant by the word “mimic” in the question. One way to interpret the question is this: Will ideas about cognition mirror ideas about abstract mathematical concepts? From one perspective you could say this is necessarily the case: since human minds produce mathematics, the structures of mathematics must conform to the constraints of the structures of human thought.

Going in the other direction, mathematics is a key tool for understanding the natural world, and so if we study the mind/brain using methodologies derived from physics, chemistry and engineering, we will no doubt find ourselves using mathematical terminology, analogies, and concepts to describe mental processes.

This blog post I wrote gives an overview of the theoretical and mathematical approaches to the mind/brain: The Pentagon of Neuroscience — An Infographic/Listicle for Understanding the Neuroculture

And this book may be of interest in exploring the psychological roots of mathematical concepts: Where Mathematics Comes From

Based on an answer I wrote on Quora.

Image: Drawing Hands by MC Escher.