Can science account for taste?

I was asked the question “From a scientific point of view, how are our tastes created?” Here’s my answer.

“There’s no accounting for taste!”

Typically we explain taste — in food, music, movies, art —  in terms of culture, upbringing, and sheer chance. In recent years there have been several attempts to explain taste from biological perspectives: either neuroscience or evolutionary psychology. In my opinion these types of explanations are vague enough to always sound true, but they rarely contain enough detail to account for the specific tastes of individuals or groups. Still, there’s much food for thought in these scientific proto-theories of taste and aesthetics.

[An early aesthete?]

Let’s look at the evolutionary approach first. An evolutionary explanation of taste assumes that human preferences arise from natural selection. We like salt and sugar and fat, according to this logic, because it was beneficial for our ancestors to seek out foods with these tastes. We like landscape scenes involving greenery and water bodies because such landscapes were promising environments for our wandering ancestors. This line of thinking is true as far as it goes, but it doesn’t go that far. After all, there are plenty of people who don’t much care for deep-fried salty-sweet foods. And many people who take art seriously quickly tire of clichéd landscape paintings.

[Are you a Homo sapiens? Then you must love this. 😉]

Evolutionary psychology can provide broad explanations for why humans as a species tend to like certain things more than others, but it really provides us with no map for navigating differences in taste between individuals and groups. (These obvious, glaring limitations of evolutionary psychology have not prevented the emergence of a cottage industry of pop science books that explain everything humans do as consequences of the incidents and accidents that befell our progenitor apes on the savannahs of Africa.)

Explanations involving the neural and cognitive sciences get closer to what we are really after — an explanation of differences in taste — but not by much. Neuroscientific explanations are essentially half way between cultural theories and evolutionary theories. We like things because the ‘pleasure centers’ in our brains ‘light up’ when we encounter them. And the pleasure centers are shaped by experience (on the time scale of a person’s life), and by natural selection (on the time scale of the species). Whatever we inherit because of natural selection is presumably common to all humans, so differences in taste must be traced to differences in experience, which become manifest in the brain as differences in neural connectivity and activity. If your parents played the Beatles for you as a child, and conveyed their pleasure to you, then associative learning might cause the synapses in your brain that link sound patterns with emotional reactions to be gradually modified, so that playing ‘Hey Jude’ now triggers a cascade of neural events that generate the subjective feeling of enjoyment.

[What’s not to love about the Beatles?]

But there is so much more to the story of enjoyment. Not everyone likes their parents’ music. In English-speaking countries there is a decades-old stereotype of the teenager who seeks out music to piss off his or her parents. And many of us have a friend who insists on listening to music that no one else seems to have heard of. What is the neural basis of this fascinating phenomenon?

We must now enter extremely speculative territory. One of the most thought-provoking ‘theories’ of aesthetics that I have come across was proposed by a machine learning researcher named Jürgen Schmidhuber. He has a provocative way of summing up his theory: Interestingness is the first derivative of beauty.

What he means is that we are not simply drawn to things that are beautiful or pleasurable. We are also drawn to things that are interesting: things that somehow intrigue us and capture our attention. These things, according to Schmidhuber, entice us with the possibility of enhancing our categories of experience. In his framework, humans and animals are constantly seeking to understand the environment, and in order to do this, they must be drawn to the edge of what they already know. Experiences that are already fully understood offer no opportunity for new learning. Experiences that are completely beyond comprehension are similarly useless. Experiences in the sweet spot of interestingness, by contrast, are neither boringly familiar nor bafflingly alien. By seeking out experiences in this ‘border territory’, we expand our horizons, gaining a new understanding of the world.
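To make the “first derivative” idea concrete, here is a minimal Python sketch. It is my own toy illustration rather than Schmidhuber’s actual formulation: interestingness is modelled as the drop in prediction error between exposures to a stimulus, i.e. how fast the stimulus is becoming more compressible. The stimuli and the error values are invented for illustration.

```python
# Toy illustration of "interestingness as learning progress": what matters is not
# how well a stimulus is predicted right now, but how quickly prediction improves.
# The stimuli and error values below are hypothetical.

def learning_progress(error_history):
    """Interestingness as the drop in prediction error between the last two exposures."""
    if len(error_history) < 2:
        return 0.0
    return error_history[-2] - error_history[-1]

# Hypothetical prediction errors over repeated exposures to three kinds of stimulus.
stimuli = {
    "fully familiar song": [0.05, 0.05, 0.05],  # already compressed: no progress, boring
    "Beatlesque new song": [0.60, 0.35, 0.20],  # learnable: large progress, interesting
    "white noise":         [0.95, 0.95, 0.94],  # incompressible: ~no progress, baffling
}

for name, errors in stimuli.items():
    print(f"{name:20s} interestingness = {learning_progress(errors):.2f}")
```

On this cartoon picture, the familiar and the incomprehensible both score near zero, while the learnable-but-new stimulus scores highest.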

For example, I’m a Beatles fan, but I don’t listen to the Beatles that often. I am, however, intrigued by music that is ‘Beatlesque’: such music can lead me in new directions, and also reflect back on the Beatles, giving me a deeper appreciation of their music.

The basic intuition of this theory is well-supported by research in animals and humans. Animals all have some baseline level of curiosity. Lab rats will thoroughly investigate a new object introduced into their cages. Novelty seems to have a gravitational pull for organisms.

But again, there are differences even in this tendency. Some people are perfectly content to eat the same foods over and over again, or listen to the same songs or artists. At the other extreme we find the freaks, the hipsters, the critics, the obsessives, and all the assorted avant garde seekers of “the Shock of the New”.

Linking back to evolutionary speculation, all we can really say is that even the desire for novelty is a variable trait in human populations. (Actually it’s multiple traits: I am far more adventurous when it comes to music than food.) Perhaps a healthy society needs its ‘conservatives’ and its ‘progressives’ in the domain of taste and aesthetic experience. Group selection  — natural selection operating on tribes, societies and cultures — is still somewhat controversial in mainstream evolutionary biology, so to go any further in our theories of taste we have to be willing to wander on the wild fringes of scientific thought…

… those fringes are, after all, where everything interesting happens! 🙂

For more speculation on interestingness, beauty, and the pull of the not-completely-familiar, see this essay I wrote, in which I go into more detail about Schmidhuber’s theory of interestingness:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

This has nothing to do with science, but I find this David Mitchell video on taste very funny:

After writing this answer I realized that the questioner was most probably asking about gustation — meaning, the sense of taste. Oh well.

Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

Here’s an excerpt:
What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.
[…]
A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscienceʼs Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!
Read the whole thing at 3QD:

Fifty terms to avoid in psychology and psychiatry?

The excellent blog Mind Hacks shared a recent Frontiers in Psychology paper entitled “Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

As mentioned in the Mind Hacks post, the advice in this article may not always be spot-on, but it’s still worth reading. Here are some excerpts:

“(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public […] Nevertheless, the evidence for the chemical imbalance model is at best slim […] There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels.”

“(9) Genetically determined. Few if any psychological capacities are genetically “determined”; at most, they are genetically influenced. Even schizophrenia, which is among the most heritable of all mental disorders, appears to have a heritability of between 70 and 90% as estimated by twin designs”

“(12) Hard-wired. The term “hard-wired” has become enormously popular in press accounts and academic writings in reference to human psychological capacities that are presumed by some scholars to be partially innate, such as religion, cognitive biases, prejudice, or aggression. For example, one author team reported that males are more sensitive than females to negative news stories and conjectured that males may be “hard wired for negative news” […] Nevertheless, growing data on neural plasticity suggest that, with the possible exception of inborn reflexes, remarkably few psychological capacities in humans are genuinely hard-wired, that is, inflexible in their behavioral expression”

“(27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” Most often, they describe this method as a hypothetical-deductive recipe, in which scientists begin with an overarching theory, deduce hypotheses (predictions) from that theory, test these hypotheses, and examine the fit between data and theory. If the data are inconsistent with the theory, the theory is modified or abandoned. It’s a nice story, but it rarely works this way”

“(42) Personality type. Although typologies have a lengthy history in personality psychology harkening back to the writings of the Roman physician Galen and later, Swiss psychiatrist Carl Jung, the assertion that personality traits fall into distinct categories (e.g., introvert vs. extravert) has received minimal scientific support. Taxometric studies consistently suggest that normal-range personality traits, such as extraversion and impulsivity, are underpinned by dimensions rather than taxa, that is, categories in nature”

Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6, 1100.

Why can most people identify a color without a reference but not a musical note?

[I was asked this on Quora. Here’s a slightly modified version of my answer.]

This is an excellent question! I’m pretty sure there is not yet a definitive answer, but I suspect that the eventual answer will involve two factors:

  1. The visual system in humans is much more highly developed than the auditory system.
  2. Human cultures typically teach color words to all children, but formal musical training — complete with named notes — is relatively rare.

When you look at the brain’s cortical regions, you realize that the primary visual cortex has the most well-defined laminar structure in the whole brain. Primary auditory cortex is less structured. We still don’t know exactly how the brain’s layers contribute to sensory processing, but some theories suggest that the more well-defined cortices are capable of making finer distinctions.

[See this blog post for more on cortical lamination:
How to navigate on Planet Brain]

However, I don’t think the explanation for the difference between music and color perception is purely neuroscientific. Culture may well play an important role. I think that with training, absolute pitch — the ability to identify the exact note rather than the interval between notes — could become more common. Speakers of tonal languages like Mandarin or Cantonese are more likely to have absolute pitch, especially if they’ve had early musical training. (More on this below.)

Also: when people with no musical training are exposed to tunes they are familiar with, many of them can tell if the absolute pitch is correct or not [1]. Similarly, when asked to produce a familiar tune, many people can hit the right pitch [2]. This suggests that at least some humans have the latent ability to use and/or recognize absolute pitch.

Perhaps with early training, note names will become as common as color words.

This article by a UCSD psychologist describes the mystery quite well:

Diana Deutsch – Absolute Pitch.

As someone with absolute pitch, it has always seemed puzzling to me that this ability should be so rare. When we name a color, for example as green, we do not do this by viewing a different color, determining its name, and comparing the relationship between the two colors. Instead, the labeling process is direct and immediate.

She has some fascinating data on music training among tonal language speakers:

“Figure 2. Percentages of subjects who obtained a score of at least 85% correct on the test for absolute pitch. CCOM: students at the Central Conservatory of Music, Beijing, China; all speakers of Mandarin. ESM: students at Eastman School of Music, Rochester, New York; all nontone language speakers.”

Looks like if you speak a tonal language and start learning music early, you are far more likely to have perfect pitch. (Separating causation from correlation may be tricky.)


References:

[1] Memory for the absolute pitch of familiar songs.
[2] Absolute memory for musical pitch: evidence from the production of learned melodies.

Quora: Why can most people identify a color without a reference but not a musical note?

Does dopamine produce a feeling of bliss? On the chemical self, the social self, and reductionism.

Here’s the intro to my latest blog post at 3 Quarks Daily.


“The osmosis of neuroscience into popular culture is neatly symbolized by a phenomenon I recently chanced upon: neurochemical-inspired jewellery. It appears there is a market for silvery pendants shaped like molecules of dopamine, serotonin, acetylcholine, norepinephrine and other celebrity neurotransmitters. Under pictures of dopamine necklaces, the neuro-jewellers have placed words like “love”, “passion”, or “pleasure”. Under serotonin they write “happiness” and “satisfaction”, and under norepinephrine, “alertness” and “energy”. These associations presumably stem from the view that the brain is a chemical soup in which each ingredient generates a distinct emotion, mood, or feeling. Subjective experience, according to this view, is the sum total of the contributions of each “mood molecule”. If we strip away the modern scientific veneer, the chemical soup idea evokes the four humors of ancient Greek medicine: black bile to make you melancholic, yellow bile to make you choleric, phlegm to make you phlegmatic, and blood to make you sanguine.

“A dopamine pendant worn round the neck as a symbol for bliss is emblematic of modern society’s attitude towards current scientific research. A multifaceted and only partially understood set of experiments is hastily distilled into an easily marketed molecule of folk wisdom. Having filtered out the messy details, we are left with an ornamental nugget of thought that appears both novel and reassuringly commonsensical. But does neuroscience really support this reductionist view of human subjectivity? Can our psychological states be understood in terms of a handful of chemicals? Does neuroscience therefore pose a problem for a more holistic view, in which humans are integrated in social and environmental networks? In other words, are the “chemical self” and the “social self” mutually exclusive concepts?”

– Read the rest at 3QD: The Chemical Self and the Social Self

The holy grail of computational neuroscience: Invariance

There are quite a few problems that computational neuroscientists need to solve in order to achieve a true theoretical understanding of biological intelligence.  But I’d like to talk about one problem that I think is the holy grail of computational neuroscience and artificial intelligence: the quest for invariance. From a purely scientific and technological perspective I think this is a far more important and interesting problem than anything to do with the “C-word”: Consciousness. 🙂

Human (and animal) perception has an extraordinary feature that we still can’t fully emulate with artificial devices. Our brains somehow create and/or discover invariances in the world. Let me start with a few examples and then explain what invariance is.

Invariance in vision

Think about squares. You can recognize a square irrespective of its size, color, and position. You can even recognize a square with reasonable accuracy when viewing it from an oblique angle. This ability is something we take for granted, but we haven’t really figured it out yet.

Now think about human faces. You can recognize a familiar face in various lighting conditions, and under changes of facial hair, make-up, age, and context. How does the brain allow you to do things like this?

Invariance in hearing

Think about a musical tune you know well. You will probably be able to recognize it even if it is slowed down, sped up, hummed, whistled, or even sung wordlessly by someone who is tone-deaf. In some special cases, you can even recognize a piece of music from its rhythmic pattern alone, without any melody. How do you manage to do this?

Think about octave equivalence. A sound at a particular frequency sounds like the same note as a sound at double the frequency. In other words, notes an octave apart sound similar. What is happening here?
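Octave equivalence can itself be phrased as an invariance: map each frequency to one of twelve pitch classes, and doubling the frequency leaves the pitch class unchanged. Here is a small Python sketch of that mapping; the A = 440 Hz reference and the equal-tempered rounding are simplifying assumptions for the demo.

```python
import math

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def pitch_class(freq_hz, ref_hz=440.0):
    """Map a frequency to a pitch class; doubling the frequency
    (an octave shift) leaves the pitch class unchanged."""
    semitones = round(12 * math.log2(freq_hz / ref_hz))
    return NOTE_NAMES[semitones % 12]

print(pitch_class(220.0), pitch_class(440.0), pitch_class(880.0))  # A A A
```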

What is invariance?

How does your brain discover similarity in the midst of so much dissimilarity? The answer is that the brain somehow creates invariant representations of objects and patterns. Many computational neuroscientists are working on this problem, but there are no unifying theoretical frameworks yet.

So what does “invariance” mean? It means “immunity to a possible change”. It’s related to the formal concept of symmetry. According to mathematics and theoretical physics, an object has a symmetry if it looks the same even after a change. For example, a square looks exactly the same if you rotate it by 90 degrees around the center. We say it is invariant (or symmetrical) with respect to a 90 degree rotation.
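As a tiny illustration of the square example, here is a Python sketch: the set of a square’s vertices is unchanged when every vertex is rotated by 90 degrees about the center.

```python
# A square centered at the origin, represented as a set of vertices.
square = {(1, 1), (1, -1), (-1, -1), (-1, 1)}

def rotate_90(points):
    """Rotate each point (x, y) by 90 degrees about the origin: (x, y) -> (-y, x)."""
    return {(-y, x) for (x, y) in points}

# The rotated vertex set equals the original: the square is invariant
# (symmetric) with respect to a 90-degree rotation.
print(rotate_90(square) == square)  # True
```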

Our neural representations of sensory patterns somehow allow us to discover symmetries and to use them for recognition and flexible behavior. And we manage to do this implicitly, without any conscious effort. This type of ability is limited and it varies from person to person, but all people have it to some extent.

Back to the examples

We can redefine our examples using the language of invariance.

 

  • The way humans represent squares and other shapes is invariant with respect to rotation, as well as with respect to changes in position, lighting, and even viewing angle.
  • The way humans represent faces is invariant with respect to changes in make-up, facial hair, context, and age. (This ability varies from person to person, of course.)
  • The way humans represent musical tunes is invariant with respect to changes in speed, musical key, and timbre.
  • The way humans represent musical notes is invariant with respect to doubling of frequency (which is equivalent to shifting by an octave).


All these invariances are partial and limited in scope, but they are still extremely useful, and far more sophisticated than anything we can do with artificial systems.

Invariance of thought patterns?

The power of invariance is particularly striking when we enter the domain of abstract ideas — especially metaphors and analogies.

Consider perceptual metaphors. We can touch a surface and describe it as smooth. But we can also use the word “smooth” to describe sounds. How is it that we can use texture words for things that we do not literally touch?

Now consider analogies, which are the more formal cousins of metaphors. Think of analogy questions in tests like the GRE and the SATs. Here’s an example:

Army: Soldier :: Navy : _____

The answer is “Sailor”.

These questions take the form “A:B::C:D”, which we normally read as “A is to B as C is to D”. The test questions normally ask you to specify what D should be.

To make an analogy more explicit, we can re-write it this way: “R(x,y) holds for (x,y) = (A,B) or (C,D)”. The relation R holds for pairs of words (x,y), and in particular for the pairs (A,B) and (C,D).

In this example, the analogical relationship R can be captured in the phrase “is made up of”. An army is made up of soldiers and a navy is made up of sailors. In any analogy, we are able to pick out an abstract relationship between things or concepts.
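As a toy illustration of the R(x,y) notation, the sketch below represents the relation “is made up of” as a set of word pairs and checks whether an analogy holds for two pairs. The word pairs are hypothetical stand-ins; solving analogies in general is of course far harder than a table lookup.

```python
# Hypothetical relation R = "is made up of", represented as a set of word pairs.
is_made_up_of = {
    ("army", "soldier"),
    ("navy", "sailor"),
    ("forest", "tree"),
}

def analogy_holds(a, b, c, d, relation):
    """A : B :: C : D is valid if the same relation R holds for both pairs."""
    return (a, b) in relation and (c, d) in relation

print(analogy_holds("army", "soldier", "navy", "sailor", is_made_up_of))  # True
print(analogy_holds("army", "soldier", "navy", "tree", is_made_up_of))    # False
```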

Here’s another example discussed in the Wikipedia page on analogy:

Hand: Palm :: Foot: _____

The answer most people give is “Sole”. What’s interesting about this example is that many people can understand the analogy without necessarily being able to explain the relationship R in words. This is true of various analogies. We can see implicit relationships without necessarily being able to describe them.

We can translate metaphors and analogies into the language of invariance.

 

  • The way humans represent perceptual experiences allows us to create metaphors that are invariant with respect to changes in sensory modality. So we can perceive smoothness in the modalities of touch, hearing and other senses.
  • The way humans represent abstract relationships allows us to find/create analogies that are invariant with respect to the particular things being spoken about. The validity of the analogy R(x,y) is invariant with respect to replacing the pair (x,y) with (A,B) or (C,D).


The words “metaphor” and “analogy” are essentially synonyms for the word “invariant” in the domains of percepts and concepts. Science, mathematics and philosophy often involve trying to make explicit our implicit analogies and metaphors.

Neuroscience, psychology and cognitive science aim to understand how we form these invariant representations in the first place. In my opinion doing so will revolutionize artificial intelligence.

 



Further reading:

I’ve only scratched the surface of the topic of invariance and symmetry.

I talk about symmetry and invariance in this answer too:

Mathematics: What are some small but effective theses or ideas in mathematics that you have came across? [Quora link. Sign-up required]

I talk about the importance of metaphors in this blog post:

Metaphor: the Alchemy of Thought

I was introduced to many of these ideas through a book by physicist Joe Rosen called Symmetry Rules: How Science and Nature Are Founded on Symmetry. It’s closer to a textbook than a popular treatment, but for people interested in the mathematics of symmetry and group theory, and how it relates to science, this is an excellent introduction. Here is a summary of the book: [pdf]

Relatively recent techniques such as deep learning have helped artificial systems form invariant representations. This is how the facial recognition software used by Google and Facebook works. But these algorithms still don’t have the accuracy and generality of human skills, and the way they work, despite being inspired by real neural networks, is sufficiently unlike real neural processes that they may not shed much light on how human intelligence works.
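To give a flavor of one ingredient behind this, here is a small NumPy sketch of how pooling produces (approximate) translation invariance: a template is correlated with a signal at every position, and taking the maximum over positions discards where the pattern occurred. This is a cartoon of one mechanism used in convolutional networks, not a description of any particular production system; the template and signals are made up.

```python
import numpy as np

template = np.array([1.0, -1.0, 1.0])

def pooled_response(signal, template):
    """Correlate the template at every position, then max-pool over positions."""
    scores = [np.dot(signal[i:i + len(template)], template)
              for i in range(len(signal) - len(template) + 1)]
    return max(scores)  # the max discards the pattern's location

pattern_left = np.array([1.0, -1.0, 1.0, 0.0, 0.0, 0.0])
pattern_right = np.array([0.0, 0.0, 0.0, 1.0, -1.0, 1.0])

# Same response for the shifted input: translation-invariant detection.
print(pooled_response(pattern_left, template))   # 3.0
print(pooled_response(pattern_right, template))  # 3.0
```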


 

Notes:

This post is a slightly edited form of a Quora answer I wrote recently.

In the comments section someone brought up the idea that some invariants can be easily extracted using Fourier decomposition. This is what I said in response:

Good point. Fourier decomposition is definitely part of the story (for sound at the very least), but it seems there is a lot more.

Some people think that the auditory system is just doing a Fourier transform. But this was actually shown to be partially false a century ago. The idea that pitch corresponds to the frequencies of sinusoids is called Ohm’s acoustic law.

From the wiki page:

 

For years musicians have been told that the ear is able to separate any complex signal into a series of sinusoidal signals – that it acts as a Fourier analyzer. This quarter-truth, known as Ohm’s Other Law, has served to increase the distrust with which perceptive musicians regard scientists, since it is readily apparent to them that the ear acts in this way only under very restricted conditions.
—W. Dixon Ward (1970)


This web page discusses some of the dimensions other than frequency that contribute to pitch:

Introduction to Psychoacoustics – Module 05

There are interesting aspects of pitch perception that render the Fourier picture problematic. For example, there is the phenomenon of the missing fundamental: “the observation that the pitch of a complex harmonic tone matches the frequency of its fundamental spectral component, even if this component is missing from the tone’s spectrum.”

Evidence suggests that the human auditory system uses both frequency and time/phase coding.

Missing fundamental:  “The brain perceives the pitch of a tone not only by its fundamental frequency, but also by the periodicity of the waveform; we may perceive the same pitch (perhaps with a different timbre) even if the fundamental frequency is missing from a tone.”

This book chapter also covers some of the evidence: [pdf]

“One of the most remarkable properties of the human auditory system is its ability to extract pitch from complex tones. If a group of pure tones, equally spaced in frequency, are presented together, a pitch corresponding to the common frequency distance between the individual components will be heard. For example, if the pure tones with frequencies of 700, 800, and 900 Hz are presented together, the result is a complex sound with an underlying pitch corresponding to that of a 100 Hz tone. Since there is no physical energy at the frequency of 100 Hz in the complex, such a pitch sensation is called residual pitch or virtual pitch (Schouten 1940; Schouten, Ritsma and Cardozo, 1961). Licklider (1954) demonstrated that both the place (spectral) pitch and the residual (virtual) pitch have the same properties and cannot be auditorally differentiated.”
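The 700/800/900 Hz example in this excerpt is easy to reproduce numerically. The NumPy sketch below builds the three-tone complex and estimates its pitch from the periodicity of the waveform (via autocorrelation); it recovers roughly 100 Hz even though the signal contains no energy at 100 Hz. The sampling rate, duration, and lag cutoff are arbitrary choices for the demo.

```python
import numpy as np

fs = 44100                     # sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)  # 200 ms of signal
# Complex tone: 700, 800 and 900 Hz components, no energy at 100 Hz.
signal = sum(np.sin(2 * np.pi * f * t) for f in (700.0, 800.0, 900.0))

# Periodicity analysis: autocorrelation, searching lags above 1 ms
# (i.e. candidate pitches below 1000 Hz).
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
min_lag = int(fs / 1000)
best_lag = min_lag + int(np.argmax(ac[min_lag:]))
print(f"estimated pitch = {fs / best_lag:.1f} Hz")  # ~100 Hz: the missing fundamental
```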

The status of Fourier decomposition in vision might be more controversial. Spatial frequency based models have their adherents, but also plenty of critics. One of my professors says that claiming the visual system does spatial Fourier amounts to confusing the object of study with the tools of study. 🙂 We still don’t know whether and how the brain performs spatial Fourier decomposition.

A very recent paper reviews this issue:

The neural bases of spatial frequency processing during scene perception

“how and where spatial frequencies are processed within the brain remain unresolved questions.”

Vision scientists I know often talk about how the time domain cannot be ignored in visual processing.

A general point to be made is that even if we have mathematical solutions that are invariant, computational neuroscientists haven’t quite figured out how neural networks achieve such invariant representations. The quest for invariance is more about plausible neural implementation than mathematical description per se.

 

A group composed of brilliant individuals will not automatically be the most brilliant group

Perhaps the whole can be better than the sum of its parts?

I came across a very interesting study on McGill University’s excellent Brain from Top to Bottom Blog.

In this study of collective intelligence, the researchers performed numerous statistical analyses. The most interesting finding that emerged from them, and that went beyond the debate about just what exactly collective intelligence might represent, was that this factor was not highly correlated with either the average intelligence of the groups’ members or with the intelligence of the group member who had scored the highest on the individual-intelligence test. In other words, a group composed of brilliant individuals will not automatically be the most brilliant group.
The psychologists did find some factors that let them predict whether a given group would be collectively intelligent. But to identify these three factors, they had to look at factors associated with co-operation. The first such factor was the group’s overall social sensitivity—the members’ ability to perceive each other’s emotions. The second factor was equality in taking turns speaking during group decision-making. The third factor was the proportion of women in the group. This last finding is highly consistent with other data showing that women tend to be more socially sensitive than men and to take turns speaking more naturally than men do.



via The Collective Intelligence of Groups