The Pentagon of Neuroscience — An Infographic/Listicle for Understanding the Neuroculture

Click here to go straight to the infographic. It should open in Firefox and Chrome.

Neuroscience has hit the big time. Every day, popular newspapers, websites and blogs offer up a heady stew of brain-related self-help (neuro-snake oil?) and gee-whiz science reporting (neuro-wow?). Some scientists and journalists — perhaps caught up in the neuro-fervor — throw caution to the wind, promising imminent brain-based answers to the kinds of questions that probably predate civilization itself: What is the nature of mind? Why do we feel the way we do? Does each person have a fundamental essence? How can we avoid pain and suffering, and discover joy, creativity, and interpersonal harmony?


Does dopamine produce a feeling of bliss? On the chemical self, the social self, and reductionism.

Here’s the intro to my latest blog post at 3 Quarks Daily.


“The osmosis of neuroscience into popular culture is neatly symbolized by a phenomenon I recently chanced upon: neurochemical-inspired jewellery. It appears there is a market for silvery pendants shaped like molecules of dopamine, serotonin, acetylcholine, norepinephrine and other celebrity neurotransmitters. Under pictures of dopamine necklaces, the neuro-jewellers have placed words like “love”, “passion”, or “pleasure”. Under serotonin they write “happiness” and “satisfaction”, and under norepinephrine, “alertness” and “energy”. These associations presumably stem from the view that the brain is a chemical soup in which each ingredient generates a distinct emotion, mood, or feeling. Subjective experience, according to this view, is the sum total of the contributions of each “mood molecule”. If we strip away the modern scientific veneer, the chemical soup idea evokes the four humors of ancient Greek medicine: black bile to make you melancholic, yellow bile to make you choleric, phlegm to make you phlegmatic, and blood to make you sanguine.

“A dopamine pendant worn round the neck as a symbol for bliss is emblematic of modern society’s attitude towards current scientific research. A multifaceted and only partially understood set of experiments is hastily distilled into an easily marketed molecule of folk wisdom. Having filtered out the messy details, we are left with an ornamental nugget of thought that appears both novel and reassuringly commonsensical. But does neuroscience really support this reductionist view of human subjectivity? Can our psychological states be understood in terms of a handful of chemicals? Does neuroscience therefore pose a problem for a more holistic view, in which humans are integrated in social and environmental networks? In other words, are the “chemical self” and the “social self” mutually exclusive concepts?”

- Read the rest at 3QD: The Chemical Self and the Social Self

The holy grail of computational neuroscience: Invariance

There are quite a few problems that computational neuroscientists need to solve in order to achieve a true theoretical understanding of biological intelligence. But I’d like to talk about one problem that I think is the holy grail of computational neuroscience and artificial intelligence: the quest for invariance. From a purely scientific and technological perspective I think this is a far more important and interesting problem than anything to do with the “C-word”: Consciousness. :)

Human (and animal) perception has an extraordinary feature that we still can’t fully emulate with artificial devices. Our brains somehow create and/or discover invariances in the world. Let me start with a few examples and then explain what invariance is.

Invariance in vision

Think about squares. You can recognize a square irrespective of its size, color, and position. You can even recognize a square with reasonable accuracy when viewing it from an oblique angle. This ability is something we take for granted, but we haven’t really figured it out yet.

Now think about human faces. You can recognize a familiar face in various lighting conditions, and under changes of facial hair, make-up, age, and context. How does the brain allow you to do things like this?

Invariance in hearing

Think about a musical tune you know well. You will probably be able to recognize it even if it is slowed down, sped up, hummed, whistled, or even sung wordlessly by someone who is tone-deaf. In some special cases, you can even recognize a piece of music from its rhythmic pattern alone, without any melody. How do you manage to do this?

Think about octave equivalence. A sound at a particular frequency sounds like the same note as a sound at double the frequency. In other words, notes an octave apart sound similar. What is happening here?
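
One way to make octave equivalence concrete (a minimal sketch of my own, not a claim about auditory mechanisms) is to map frequencies to pitch classes on a logarithmic scale; doubling a frequency leaves its pitch class unchanged:

```python
import math

def pitch_class(freq_hz, reference_hz=440.0):
    """Map a frequency to a pitch class in [0, 12); doubling freq_hz leaves it unchanged."""
    return (12 * math.log2(freq_hz / reference_hz)) % 12

print(pitch_class(440.0), pitch_class(880.0), pitch_class(1760.0))   # 0.0 0.0 0.0
print(round(pitch_class(330.0), 2) == round(pitch_class(660.0), 2))  # True: an octave apart
```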

What is invariance?

How does your brain discover similarity in the midst of so much dissimilarity? The answer is that the brain somehow creates invariant representations of objects and patterns. Many computational neuroscientists are working on this problem, but there are no unifying theoretical frameworks yet.

So what does “invariance” mean? It means “immunity to a possible change”. It’s related to the formal concept of symmetry. In mathematics and theoretical physics, an object has a symmetry if it looks the same even after a change. For example, a square looks exactly the same if you rotate it by 90 degrees around its center. We say it is invariant (or symmetrical) with respect to a 90 degree rotation.
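
Here is a tiny sketch of that statement (my own illustration): rotating a square’s corner points by 90 degrees about its center returns exactly the same set of points, while a shape that isn’t a square gets changed.

```python
square = {(1, 1), (1, -1), (-1, -1), (-1, 1)}
skewed = {(2, 1), (1, -1), (-1, -1), (-1, 1)}   # not a square

def rotate_90(points):
    """Rotate each point 90 degrees counterclockwise about the origin."""
    return {(-y, x) for (x, y) in points}

print(rotate_90(square) == square)  # True: the square is invariant under the rotation
print(rotate_90(skewed) == skewed)  # False: the skewed shape is not
```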

Our neural representations of sensory patterns somehow allow us to discover symmetries and use them for recognition and flexible behavior. And we manage to do this implicitly, without any conscious effort. This type of ability is limited and it varies from person to person, but all people have it to some extent.

Back to the examples

We can redefine our examples using the language of invariance.

 

  • The way humans represent squares and other shapes is invariant with respect to rotation, as well as with respect to changes in position, lighting, and even viewing angle.
  • The way humans represent faces is invariant with respect to changes in make-up, facial hair, context, and age. (This ability varies from person to person, of course.)
  • The way humans represent musical tunes is invariant with respect to changes in speed, musical key, and timbre.
  • The way humans represent musical notes is invariant with respect to doubling of frequency (which is equivalent to shifting by an octave).


All these invariances are partial and limited in scope, but they are still extremely useful, and far more sophisticated than anything we can do with artificial systems.
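
To make the musical case concrete, here is a minimal sketch under the simplifying assumption that a tune is just a list of MIDI-style note numbers: the melody’s interval pattern is untouched by transposition into another key.

```python
def intervals(notes):
    """Successive pitch differences in semitones."""
    return [b - a for a, b in zip(notes, notes[1:])]

melody = [60, 62, 64, 60]               # a simple four-note phrase (MIDI note numbers)
transposed = [n + 5 for n in melody]    # the same phrase shifted into another key

print(intervals(melody) == intervals(transposed))  # True: the interval pattern is key-invariant
```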

Invariance of thought patterns?

The power of invariance is particularly striking when we enter the domain of abstract ideas — particularly metaphors and analogies.

Consider perceptual metaphors. We can touch a surface and describe it as smooth. But we can also use the word “smooth” to describe sounds. How is it that we can use texture words for things that we do not literally touch?

Now consider analogies, which are the more formal cousins of metaphors. Think of analogy questions in tests like the GRE and the SATs. Here’s an example:

Army: Soldier :: Navy : _____

The answer is “Sailor”.

These questions take the form “A:B::C:D”, which we normally read as “A is to B as C is to D”. The test questions normally ask you to specify what D should be.

To make an analogy more explicit, we can rewrite it this way: “R(x,y) holds for (x,y) = (A,B) and for (x,y) = (C,D)”. The relation “R” holds for pairs of words (x,y), and in particular for the pair (A,B) as well as the pair (C,D).

In this example, the analogical relationship R can be captured in the phrase “is made up of”. An army is made up of soldiers and a navy is made up of sailors. In any analogy, we are able to pick out an abstract relationship between things or concepts.
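
Here is a toy sketch of that notation (purely illustrative; the relation is hand-written, not learned): an analogy A:B::C:D can be checked by asking whether a single relation R holds for both pairs.

```python
# R = "is made up of": R(x, y) means "an x is made up of ys". The pairs listed
# here are hand-written for illustration only.
IS_MADE_UP_OF = {("army", "soldier"), ("navy", "sailor"), ("orchestra", "musician")}

def analogy_holds(a, b, c, d, relation=IS_MADE_UP_OF):
    """Return True if R(a, b) and R(c, d) both hold, i.e. A:B::C:D under R."""
    return (a, b) in relation and (c, d) in relation

print(analogy_holds("army", "soldier", "navy", "sailor"))   # True
print(analogy_holds("army", "soldier", "navy", "admiral"))  # False
```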

Here’s another example discussed in the Wikipedia page on analogy:

Hand: Palm :: Foot: _____

The answer most people give is “Sole”. What’s interesting about this example is that many people can understand the analogy without necessarily being able to explain the relationship R in words. This is true of various analogies. We can see implicit relationships without necessarily being able to describe them.

We can translate metaphors and analogies into the language of invariance.

 

  • The way humans represent perceptual experiences allows us to create metaphors that are invariant with respect to changes in sensory modality. So we can perceive smoothness in the modalities of touch, hearing and other senses.
  • The way humans represent abstract relationships allows us to find/create analogies that are invariant with respect to the particular things being spoken about. The validity of the analogy R(x,y) is invariant with respect to replacing the pair (x,y) with (A,B) or (C,D).


The words “metaphor” and “analogy” are essentially synonyms for the word “invariant” in the domains of percepts and concepts. Science, mathematics and philosophy often involve trying to make explicit our implicit analogies and metaphors.

Neuroscience, psychology and cognitive science aim to understand how we form these invariant representations in the first place. In my opinion doing so will revolutionize artificial intelligence.

 



Further reading:

I’ve only scratched the surface of the topic of invariance and symmetry.

I talk about symmetry and invariance in this answer too:

Mathematics: What are some small but effective theses or ideas in mathematics that you have came across? [Quora link. Sign-up required]

I talk about the importance of metaphors in this blog post:

Metaphor: the Alchemy of Thought

I was introduced to many of these ideas through a book by physicist Joe Rosen called Symmetry Rules: How Science and Nature Are Founded on Symmetry. It’s closer to a textbook than a popular treatment, but for people interested in the mathematics of symmetry and group theory, and how it relates to science, this is an excellent introduction. Here is a summary of the book: [pdf]

Relatively recent techniques such as deep learning have helped artificial systems form invariant representations. This is how the facial recognition software used by Google and Facebook works. But these algorithms still don’t have the accuracy and generality of human skills, and the way they work, despite being inspired by real neural networks, is sufficiently unlike real neural processes that these algorithms may not shed much light on how human intelligence works.


 

Notes:

This post is a slightly edited form of a Quora answer I wrote recently.

In the comments section someone brought up the idea that some invariants can be easily extracted using Fourier decomposition. This is what I said in response:

Good point. Fourier decomposition is definitely part of the story (for sound at the very least), but it seems there is a lot more.

Some people think that the auditory system is just doing a Fourier transform. But this was actually shown to be partially false a century ago. The idea that pitch corresponds to the frequencies of sinusoids is called Ohm’s acoustic law.

From the wiki page:

 

For years musicians have been told that the ear is able to separate any complex signal into a series of sinusoidal signals – that it acts as a Fourier analyzer. This quarter-truth, known as Ohm’s Other Law, has served to increase the distrust with which perceptive musicians regard scientists, since it is readily apparent to them that the ear acts in this way only under very restricted conditions.
—W. Dixon Ward (1970)


This web page discusses some of the dimensions other than frequency that contribute to pitch:

Introduction to Psychoacoustics – Module 05

There are interesting aspects of pitch perception that render the Fourier picture problematic. For example, there is the phenomenon of the missing fundamental: “the observation that the pitch of a complex harmonic tone matches the frequency of its fundamental spectral component, even if this component is missing from the tone’s spectrum.”

Evidence suggests that the human auditory system uses both frequency and time/phase coding.

Missing fundamental: “The brain perceives the pitch of a tone not only by its fundamental frequency, but also by the periodicity of the waveform; we may perceive the same pitch (perhaps with a different timbre) even if the fundamental frequency is missing from a tone.”

This book chapter also covers some of the evidence: [pdf]

“One of the most remarkable properties of the human auditory system is its ability to extract pitch from complex tones. If a group of pure tones, equally spaced in frequency, are presented together, a pitch corresponding to the common frequency distance between the individual components will be heard. For example, if the pure tones with frequencies of 700, 800, and 900 Hz are presented together, the result is a complex sound with an underlying pitch corresponding to that of a 100 Hz tone. Since there is no physical energy at the frequency of 100 Hz in the complex, such a pitch sensation is called residual pitch or virtual pitch (Schouten 1940; Schouten, Ritsma and Cardozo, 1961). Licklider (1954) demonstrated that both the place (spectral) pitch and the residual (virtual) pitch have the same properties and cannot be auditorally differentiated.”
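
The 700 + 800 + 900 Hz example in this quote is easy to verify numerically. Here is a hedged sketch (assuming NumPy is available): the complex tone has no energy at 100 Hz, yet its waveform repeats every 10 ms, and a simple autocorrelation recovers that period, matching the 100 Hz residual pitch.

```python
import numpy as np

fs = 44100                          # sample rate in Hz
t = np.arange(0, 0.1, 1 / fs)       # 100 ms of signal
signal = sum(np.sin(2 * np.pi * f * t) for f in (700, 800, 900))

# Autocorrelation; keep non-negative lags only.
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
lag = np.argmax(ac[100:]) + 100     # skip the trivial peak near zero lag
print(f"estimated period: {1000 * lag / fs:.1f} ms (~{fs / lag:.0f} Hz pitch)")
# -> estimated period: 10.0 ms (~100 Hz pitch), even though the spectrum has no 100 Hz component
```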

The status of Fourier decomposition in vision might be more controversial. Spatial frequency based models have their adherents, but also plenty of critics. One of my professors says that claiming the visual system does spatial Fourier analysis amounts to confusing the object of study with the tools of study. :) We still don’t know whether and how the brain performs spatial Fourier decomposition.

A very recent paper reviews this issue:

The neural bases of spatial frequency processing during scene perception

“how and where spatial frequencies are processed within the brain remain unresolved questions.”

Vision scientists I know often talk about how the time domain cannot be ignored in visual processing.

A general point to be made is that even if we have mathematical solutions that are invariant, computational neuroscientists haven’t quite figured out how neural networks achieve such invariant representations. The quest for invariance is more about plausible neural implementation than mathematical description per se.

 

From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

My next 3QD column is out. I speculate about the role of boundaries in life and aesthetic experience. (Dopamine cells make a cameo appearance too.)

This image is a taster:

If you want to know what this diagram might mean, check out the article:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

A group composed of brilliant individuals will not automatically be the most brilliant group

Perhaps the whole can be better than the sum of its parts?

I came across a very interesting study on McGill University’s excellent Brain from Top to Bottom Blog.

In this study of collective intelligence, the researchers performed numerous statistical analyses. The most interesting finding that emerged from them, and that went beyond the debate about just what exactly collective intelligence might represent, was that this factor was not highly correlated with either the average intelligence of the groups’ members or with the intelligence of the group member who had scored the highest on the individual-intelligence test. In other words, a group composed of brilliant individuals will not automatically be the most brilliant group.
The psychologists did find some factors that let them predict whether a given group would be collectively intelligent. But to identify three, they had to look at factors associated with co-operation. The first such factor was the group’s overall social sensitivity—the members’ ability to perceive each other’s emotions. The second factor was equality in taking turns speaking during group decision-making. The third factor was the proportion of women in the group. This last finding is highly consistent with other data showing that women tend to be more socially sensitive than men and to take turns speaking more naturally than men do.



via The Collective Intelligence of Groups

What is a biological model? Here’s a useful categorization system for people interested in neuroscience, cognitive science, and biology

I found an excellent classification of models in a paper on neurogenesis: Using theoretical models to analyse neural development.

I think this should be illuminating for anyone interested in theoretical, mathematical and/or computational approaches in neuroscience, cognitive science, and biology.

There are several ways in which models of biological processes can be classified. 

Formal or informal models

Informal models are expressed in words or diagrams, whereas formal models — which this Review is concerned with — are described in mathematical equations or computer instructions. Using formal language forces a model to be precise and self-consistent. The process of constructing a formal model can therefore identify inconsistencies, hidden assumptions and missing pieces of experimental data. Formal models allow us to deduce the consequences of the postulated interactions among the components of a given system, and thus to test the plausibility of hypothetical mechanisms. Models can generate new hypotheses and make testable predictions, thereby guiding further experimental research. Equally importantly, models can explain and integrate existing data.

 Phenomenological or mechanistic models 

Most formal models lie on a continuum between two extreme categories: phenomenological and mechanistic. A phenomenological model attempts to replicate the experimental data without requiring the variables, parameters and mathematical relationships in the model to have any direct correspondence in the underlying biology. In a mechanistic model, the mathematical equations directly represent biological elements and their actions. Solving the equations then shows how the system behaves. We understand which processes in the model are mechanistically responsible for the observed behaviour, the variables and parameters have a direct biological meaning and the model lends itself better to testing hypotheses and making predictions. Although mechanistic models are often considered superior, both types of model can be informative. For example, a phenomenological model can be useful as a forerunner to a more mechanistic model in which the variables are given explicit biological interpretations. This is particularly important considering that a complete mechanistic model may be difficult to construct because of the great amount of information it should incorporate. Mechanistic models therefore often focus on exploring the consequences of a selected set of processes, or try to capture the essential aspects of the mechanisms, with a more abstract reference to underlying biological processes. 

Top-down or bottom-up models 

Formal models can be constructed using a top-down or a bottom-up approach. In a top-down approach, a model is created that contains the elements and interactions that enable it to have specific behaviours or properties. In a bottom-up approach, instead of starting with a pre-described, desired behaviour, the properties that arise from the interactions among the elements of the model are investigated. Although it is a strategy and not a type of model, the top-down approach resembles phenomenological modelling because it is generally easier to generate the desired behaviour without all of the elements of the model having a clear biological interpretation. Conversely, the bottom-up approach is related to mechanistic modelling, as it is usual to start with model elements that have a biological meaning. Both approaches have their strengths and weaknesses.

(I removed citation numbers for clarity.)
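
To make the phenomenological/mechanistic distinction concrete, here is a toy sketch of my own (not from the Review, and assuming NumPy): both models reproduce exponential growth of a cell population, but only in the second do the quantities map onto a biological process (a per-cell division rate).

```python
import numpy as np

t = np.linspace(0, 10, 50)            # hours
data = 100 * np.exp(0.3 * t)          # stand-in for measured cell counts

# Phenomenological: fit a curve that reproduces the data; the fitted
# coefficients need not correspond to anything biological.
slope, intercept = np.polyfit(t, np.log(data), 1)
fit = np.exp(intercept + slope * t)

# Mechanistic: assume each cell divides at `division_rate` per hour, so dN/dt = r * N.
division_rate = 0.3
N = [100.0]
for dt in np.diff(t):
    N.append(N[-1] + division_rate * N[-1] * dt)   # forward Euler step

print(np.allclose(fit, data))         # True: the fit reproduces the data
print(round(N[-1]), round(data[-1]))  # crude Euler estimate vs. the exact final count
```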

One point might be relevant here: a model is neither true nor false — ideally it’s an internally consistent mini-world. A theory is the assertion that a model corresponds with reality.

The Mysterious Power of Naming in Human Cognition

I’ve written a long-form essay for the blog/aggregator site 3 Quarks Daily:

Boundaries and Subtleties: the Mysterious Power of Naming in Human Cognition

Here’s a taster:

I’ve divided up the essay into four parts. Here’s the plan:

  1. We’ll introduce two key motifs — the named and the nameless — with a little help from the Tao Te Ching.
  2. We’ll examine a research problem that crops up in cognitive psychology, neuroscience and artificial intelligence, and link it with more Taoist motifs.
  3. We’ll look at how naming might give us power over animals, other people, and even mathematical objects.
  4. We’ll explore the power of names in computer science, which will facilitate some wild cosmic speculation.

Is consciousness complex?

Someone on Quora asked the following question: What’s the correlation between complexity and consciousness?

Here’s my answer:

Depends on who you ask! Both complexity and consciousness are contentious words, and mean different things to different people.

I’ll build my answer around the idea of complexity, since it’s easier to talk about scientifically (or at least mathematically) than consciousness. Half-joking comments about complexity and consciousness are to be found in italics.

I came across a nice list of measures of complexity, compiled by Seth Lloyd, a researcher from MIT, which I will structure my answer around. [pdf]

Lloyd describes measures of complexity as ways to answer three questions we might ask about a system or process:

  1. How hard is it to describe?
  2. How hard is it to create?
  3. What is its degree of organization?

1. Difficulty of description: Some objects are complex because they are difficult for us to describe. We frequently measure this difficulty in binary digits (bits), and also use concepts like entropy (information theory) and Kolmogorov (algorithmic) complexity. I particularly like Kolmogorov complexity. It’s a measure of the computational resources required to specify a string of characters. It’s the size of the smallest algorithm that can generate that string of letters or numbers (all of which can be converted into bits). So if you have a string like “121212121212121212121212”, it has a description in English — “12 repeated 12 times” — that is even shorter than the actual string. Not very complex. But the string “asdh41ubmzzsa4431ncjfa34” may have no description shorter than the string itself, so it will have higher Kolmogorov complexity. This measure of complexity can also give us an interesting way to talk about randomness. Loosely speaking, a random process is one whose simulation is harder to accomplish than simply watching the process unfold! Minimum message length is a related idea that also has practical applications. (It seems Kolmogorov complexity is technically uncomputable!)
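
A rough way to see this in practice (only a sketch: Kolmogorov complexity is uncomputable, and a general-purpose compressor merely gives a crude upper bound on description length) is to compare how well a patterned string and a random string compress:

```python
import os
import zlib

patterned = b"12" * 1000          # "121212...", scaled up so compressor overhead is negligible
random_ish = os.urandom(2000)     # 2000 random bytes: almost certainly incompressible

print(len(patterned), "->", len(zlib.compress(patterned, 9)))    # 2000 -> a few dozen bytes
print(len(random_ish), "->", len(zlib.compress(random_ish, 9)))  # 2000 -> roughly 2000 bytes
```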

Consciousness is definitely hard to describe. In fact we seem to be stuck at the description stage at the moment. Describing consciousness is so difficult that bringing in bits and algorithms seems a tad premature. (Though as we shall see, some brave scientists beg to differ.)

2. Difficulty of creation: Some objects and processes are seen as complex because they are really hard to make. Kolmogorov complexity could show up here too, since simulating a string can be seen both as an act of description (the code itself) and an act of creation (the output of the code). Lloyd lists the following terms that I am not really familiar with: Time Computational Complexity; Space Computational Complexity; Logical depth; Thermodynamic depth; and “Crypticity” (!?). In addition to computational difficulty, we might add other costs: energetic, monetary, psychological, social, and ecological. But perhaps then we’d be confusing the complex with the cumbersome? :)

Since we haven’t created a consciousness yet, and don’t know how nature accomplished it, perhaps we are forced to say that consciousness really is complex from the perspective of artificial synthesis. But if/when we have made an artificial mind — or settled upon a broad definition of consciousness that includes existing machines — then perhaps we’ll think of consciousness as easy! Maybe it’s everywhere already! Why pay for what’s free?

3. Degree of organization: Objects and processes that seem intricately structured are also seen as complex. This type of complexity differs strikingly from computational complexity. A string of random noise is extremely complex from an information-theoretic perspective, because it is virtually incompressible — it cannot be condensed into a simple algorithm. A book consisting of totally random characters contains more information, and is therefore more algorithmically complex, than a meaningful text of the same length. But strings of random characters are typically interpreted as totally lacking in structure, and are therefore in a sense very simple. Some measures that Lloyd associates with organizational complexity include: fractal dimension, metric entropy, stochastic complexity, and several more, most of which I confess I had never heard of until today. I suspect that characterizing organizational structure is an ongoing research endeavor. In a sense that’s what mathematics is — the study of abstract structure.

Consciousness seems pretty organized, especially if you’re having a good day! But it’s also the framework by which we come to know that organization exists in nature in the first place… so this gets a bit loopy. :)

Seth Lloyd ends his list with concepts that are related to complexity, but don’t necessarily have measures. These, I think, are particularly relevant to consciousness and to the more prosaic world I work in: neural network modeling.

Self-organization
Complex adaptive system
Edge of chaos

Consciousness may or may not be self-organized, but it definitely adapts, and it’s occasionally chaotic.

To Lloyd’s very handy list let me also add self-organized criticality and emergence. Emergence is an interesting concept which has been falsely accused of being obscurantist. A property is emergent if it is seen in a system, but not in any constituent of the system. For instance, the thermodynamic gas laws emerge out of kinetic theory, but they make no reference to molecules. The laws governing gases show up when there is a large enough number of particles, and when these laws reveal themselves, microscopic details often become irrelevant. But gases are the least interesting substrates for emergence. Condensed matter physicists talk about phenomena like the emergence of quasiparticles, which are excitations in a solid that behave as if they are independent particles, but depend for this independence, paradoxically, on the physics of the whole object. (Emergence is a fascinating subject in its own right, regardless of its relevance to consciousness. Here’s a paper that proposes a neat formalism for talking about emergence: Emergence is coupled to scope, not level. PW Anderson’s classic paper “More is Different” also talks about a related issue: pdf)

Consciousness may well be an emergent process — we rarely say that a single neuron or a chunk of nervous tissue has a mind of its own. Consciousness is a word that is reserved for the whole organism, typically.

So is consciousness complex? Maybe…but not really in measurable ways. We can’t agree on how to describe it, we haven’t created it artificially yet, and we don’t know how it is organized, or how it emerged!

In my personal opinion many of the concepts people associate with consciousness are far outside of the scope of mainstream science. These include qualia, the feeling of what-it-is-like, and intentionality, the observation that mental “objects” always seem to be “about” something.

This doesn’t mean I think these aspects of consciousness are meaningless, only that they are scientifically intractable. Other aspects of consciousness, such as awareness, attention, and emotion might also be shrouded in mystery, but I think neuroscience has much to say about them — this is because they have some measurable aspects, and these aspects step out of the shadows during neurological disorders, chemical modulation, and other abnormal states of being.

However…

There are famous neuroscientists who might disagree. Giulio Tononi has come up with something called integrated information theory, which comes with a measure of consciousness he christened phi. Phi is supposed to capture the degree of “integratedness” of a network. I remain quite skeptical of this sort of thing — for now it seems to be a metaphor inspired by information theory, rather than a measurable quantity. I can’t imagine how we will be able to relate it to actual experimental data. Information, contrary to popular perception, is not something intrinsic to physical objects. The amount of information in a signal depends on the device receiving the signal. Right now we have no way of knowing how many “bits” are being transmitted between two neurons, let alone between entire regions of the brain. Information theory is best applied when we already know the nature of the message, the communication channel, and the encoding/decoding process. We have only partially characterized these aspects of neural dynamics. Our experimental data seem far too fuzzy for any precise formal approach. [Information may actually be a concept of very limited use in biology, outside of data fitting. See this excellent paper for more: A deflationary account of information in biology. This sums it up: “if information is in the concrete world, it is causality. If it is abstract, it is in the head.”]

But perhaps this paper will convince me otherwise: Practical Measures of Integrated Information for Time-Series Data. [I very much doubt it though.]

___

I thought I would write a short answer… but I ended up learning a lot as I added more info.

View Answer on Quora