What does the frontal lobe have to do with planning and decision-making?

I was asked the following question on Quora:

Why is planning & decision making situated in the frontal lobe?

Here’s my answer:

Why not? 🙂

I suppose the most obvious answer is the fact that the motor cortex resides in the frontal lobe.

Now you may justifiably ask: what does the motor cortex have to do with planning and decision-making?

The connection is this: to a large extent, the purpose of the brain is to control the body. So planning and decision-making, at the simplest possible level, involves determining when and how to move the body.

From this perspective, it is interesting to speculate that all thoughts derive from the process of virtual or simulated movement. Thought arises in the ‘gap’ between perception and action.

The way an organism interacts with its environment can be understood in terms of the perception-action loop.

Stimuli from the outside world enter the brain through the sensory organs and percolate through the various brain regions, allowing the organism to form neural ‘representations’ or ‘maps’ of the world. Signals originating inside the body (such as from the stomach or the lungs) allow for similar maps of the inner world of the organism.

By using memory to compare past experience with present conditions, an organism can anticipate the future to meet its current needs: either by acting in the present, or by planning an action for some future time.
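The perception-action loop described above can be caricatured in a few lines of Python. This is purely a toy sketch: the function and variable names (`perception_action_loop`, the `'map'` dictionary standing in for neural representations) are illustrative inventions, not a model from the neuroscience literature.

```python
# A toy sketch of the perception-action loop: sense, consult an internal
# 'map' built from past experience, act, and update the map (memory).
# All names and the trivial decision rule are illustrative placeholders.

def perception_action_loop(world, agent, steps=3):
    """Run a minimal perception-action loop for a fixed number of steps."""
    for _ in range(steps):
        stimulus = world["state"]                   # sensory input from the environment
        percept = agent["map"].get(stimulus, 0)     # internal representation of the world
        action = 1 if percept > 0 else -1           # decision: simplest possible policy
        world["state"] += action                    # acting changes the environment
        agent["map"][world["state"]] = percept + 1  # memory: update the internal map
    return world["state"]
```

The point of the sketch is only the shape of the loop: perception feeds an internal map, the map (plus memory) drives action, and action feeds back into what is perceived next.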

The signals that control our voluntary muscles emanate from the motor cortex: the neurons in this part of the brain are the ‘switches’, ‘levers’ and ‘buttons’ that allow us to change our body position and configuration.

So going back one more step, the signals that influence the motor cortex constitute the ‘proximal’ decision signal. Much of the input to motor cortex comes from premotor and prefrontal areas, which are nearby in the frontal lobe. The thalamus also sends important signals to motor cortex, as do the neuromodulatory systems (which include the dopamine, acetylcholine, norepinephrine and serotonin systems).

Ultimately you can keep going ‘back’ to see how every part of the brain influences the ultimate decision: sensation, memory and emotion all play a role. But the prefrontal and premotor areas constitute the most easily identifiable decision areas.

As to why these brain areas are located in the frontal lobe at all: this is a much more difficult question. The short answer is evolution by natural selection. But the long answer is still incomplete. Brains are soft tissue, so they don’t leave fossils.

Is the mind a machine?

My latest 3QD essay explores the “mind as machine” metaphor, and metaphors in general.

Putting the “cog” in “cognitive”: on the “mind as machine” metaphor

Here’s an excerpt:

People who study the mind and brain often confront the limits of metaphor. In the essay ‘Brain Metaphor and Brain Theory’, the vision scientist John Daugman draws our attention to the fact that thinkers throughout history have used the latest material technology as a model for the mind and body. In the Katha Upanishad (which Daugman doesn’t mention), the body is a chariot and the mind is the reins. For the pre-Socratic Greeks, hydraulic metaphors for the psyche were popular: imbalances in the four humors produced particular moods and dispositions. By the 18th and 19th centuries, mechanical metaphors predominated in western thinking: the mind worked like clockwork. The machine metaphor has remained with us in some form or other since the industrial revolution: for many contemporary scientists and philosophers, the only debate seems to be about what sort of machine the mind really is. Is it an electrical circuit? A cybernetic feedback device? A computing machine that manipulates abstract symbols? Some thinkers are so convinced that the mind is a computer that they invite us to abandon the notion that the idea is a metaphor. Daugman quotes the cognitive scientist Zenon Pylyshyn, who claimed that “there is no reason why computation ought to be treated merely as a metaphor for cognition, as opposed to the literal nature of cognition”.

Daugman reacts to this Whiggish attitude with a confession of incredulity that many of us can relate to: “who among us finds any recognizable strand of their personhood or of their experience of others and of the world and its passions, to be significantly illuminated by, or distilled in, the metaphor of computation?” He concludes his essay with the suggestion that “[w]e should remember that the enthusiastically embraced metaphors of each “new era” can become, like their predecessors, as much the prisonhouse of thought as they at first appeared to represent its liberation.”

Read the rest at 3 Quarks Daily:

Putting the “cog” in “cognitive”: on the “mind as machine” metaphor

Can science account for taste?

I was asked the question “From a scientific point of view, how are our tastes created?” Here’s my answer.

“There’s no accounting for taste!”

Typically we explain taste — in food, music, movies, art — in terms of culture, upbringing, and sheer chance. In recent years there have been several attempts to explain taste from biological perspectives: either neuroscience or evolutionary psychology. In my opinion these types of explanations are vague enough to always sound true, but they rarely contain enough detail to account for the specific tastes of individuals or groups. Still, there’s much food for thought in these scientific proto-theories of taste and aesthetics.

[An early aesthete?]

Let’s look at the evolutionary approach first. An evolutionary explanation of taste assumes that human preferences arise from natural selection. We like salt and sugar and fat, according to this logic, because it was beneficial for our ancestors to seek out foods with these tastes. We like landscape scenes involving greenery and water bodies because such landscapes were promising environments for our wandering ancestors. This line of thinking is true as far as it goes, but it doesn’t go that far. After all, there are plenty of people who don’t much care for deep-fried salty-sweet foods. And many people who take art seriously quickly tire of clichĂ©d landscape paintings.

[Are you a Homo sapiens? Then you must love this. 😉 ]

Evolutionary psychology can provide broad explanations for why humans as a species tend to like certain things more than others, but it really provides us with no map for navigating differences in taste between individuals and groups. (These obvious, glaring limitations of evolutionary psychology have not prevented the emergence of a cottage industry of pop science books that explain everything humans do as consequences of the incidents and accidents that befell our progenitor apes on the savannahs of Africa.)

Explanations involving the neural and cognitive sciences get closer to what we are really after — an explanation of differences in taste — but not by much. Neuroscientific explanations are essentially halfway between cultural theories and evolutionary theories. We like things because the ‘pleasure centers’ in our brains ‘light up’ when we encounter them. And the pleasure centers are shaped by experience (on the time scale of a person’s life), and by natural selection (on the time scale of the species). Whatever we inherit because of natural selection is presumably common to all humans, so differences in taste must be traced to differences in experience, which become manifest in the brain as differences in neural connectivity and activity. If your parents played the Beatles for you as a child, and conveyed their pleasure to you, then associative learning might cause the synapses in your brain that link sound patterns with emotional reactions to be gradually modified, so that playing ‘Hey Jude’ now triggers a cascade of neural events that generate the subjective feeling of enjoyment.
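The associative-learning story — repeated pairings gradually strengthening a synaptic link between a sound pattern and an emotional response — can be illustrated with a toy Hebbian update rule. The function name, learning rate, and activity values below are all invented for illustration; this is a cartoon of plasticity, not a model of real synapses.

```python
# A toy Hebbian-style update: a single scalar 'synaptic weight' linking
# a sound pattern (pre-synaptic activity) to an emotional response
# (post-synaptic activity). All numbers are illustrative.

def hebbian_update(weight, pre, post, lr=0.1):
    """Strengthen the connection when pre- and post-synaptic activity co-occur."""
    return weight + lr * pre * post

# Repeated pairings of a song (pre=1.0) with parental enjoyment (post=1.0)
# gradually strengthen the association.
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
# After 10 pairings, w has grown from 0.0 to roughly 1.0: in the cartoon,
# the song alone now drives a positive response.
```

The design choice here mirrors the prose: learning is gradual, driven by co-occurrence, and leaves its trace as a changed connection strength rather than a stored "fact".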

[What’s not to love about the Beatles?]

But there is so much more to the story of enjoyment. Not everyone likes their parents’ music. In English-speaking countries there is a decades-old stereotype of the teenager who seeks out music to piss off his or her parents. And many of us have a friend who insists on listening to music that no one else seems to have heard of. What is the neural basis of this fascinating phenomenon?

We must now enter extremely speculative territory. One of the most thought-provoking ‘theories’ of aesthetics that I have come across was proposed by a machine learning researcher named Jürgen Schmidhuber. He has a provocative way of summing up his theory: Interestingness is the first derivative of beauty.

What he means is that we are not simply drawn to things that are beautiful or pleasurable. We are also drawn to things that are interesting: things that somehow intrigue us and capture our attention. These things, according to Schmidhuber, entice us with the possibility of enhancing our categories of experience. In his framework, humans and animals are constantly seeking to understand the environment, and in order to do this, they must be drawn to the edge of what they already know. Experiences that are already fully understood offer no opportunity for new learning. Experiences that are completely beyond comprehension are similarly useless. But experiences in the sweet spot of interestingness are neither boringly familiar nor bafflingly alien. By seeking out experiences in this ‘border territory’, we expand our horizons, gaining a new understanding of the world.
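Schmidhuber’s slogan can be made concrete with a toy calculation: if ‘beauty’ is a time series of subjective compressibility scores, interestingness is its first difference — the rate at which our compression of an experience improves. The numbers below are made up purely to illustrate the idea.

```python
# Interestingness as the first derivative of beauty: what matters is not
# how compressible (familiar) an experience is, but how fast its
# compressibility improves as we learn. All scores here are invented.

def interestingness(beauty_over_time):
    """First differences of a 'subjective compressibility' time series."""
    return [b1 - b0 for b0, b1 in zip(beauty_over_time, beauty_over_time[1:])]

fully_familiar = [9, 9, 9, 9]   # already compressed: nothing left to learn
fully_alien    = [0, 0, 0, 0]   # incompressible: no learning progress either
sweet_spot     = [2, 4, 6, 8]   # steady improvement: maximally interesting

print(interestingness(fully_familiar))  # [0, 0, 0]
print(interestingness(sweet_spot))      # [2, 2, 2]
```

Note that the fully familiar and the fully alien cases come out identical: both have zero derivative, which is exactly the theory’s point about the ‘border territory’.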

For example, I’m a Beatles fan, but I don’t listen to the Beatles that often. I am, however, intrigued by music that is ‘Beatlesque’: such music can lead me in new directions, and also reflect back on the Beatles, giving me a deeper appreciation of their music.

The basic intuition of this theory is well-supported by research in animals and humans. Animals all have some baseline level of curiosity. Lab rats will thoroughly investigate a new object introduced into their cages. Novelty seems to have a gravitational pull for organisms.

But again, there are differences even in this tendency. Some people are perfectly content to eat the same foods over and over again, or listen to the same songs or artists. At the other extreme we find the freaks, the hipsters, the critics, the obsessives, and all the assorted avant garde seekers of “the Shock of the New”.

Linking back to evolutionary speculation, all we can really say is that even the desire for novelty is a variable trait in human populations. (Actually it’s multiple traits: I am far more adventurous when it comes to music than food.) Perhaps a healthy society needs its ‘conservatives’ and its ‘progressives’ in the domain of taste and aesthetic experience. Group selection — natural selection operating on tribes, societies and cultures — is still somewhat controversial in mainstream evolutionary biology, so to go any further in our theories of taste we have to be willing to wander on the wild fringes of scientific thought…

… those fringes are, after all, where everything interesting happens! 🙂

For more speculation on interestingness, beauty, and the pull of the not-completely-familiar, see this essay I wrote, in which I go into more detail about Schmidhuber’s theory of interestingness:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

This has nothing to do with science, but I find this David Mitchell video on taste very funny:

After writing this answer I realized that the questioner was most probably asking about gustation — meaning, the sense of taste. Oh well.

Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

Here’s an excerpt:
What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.
A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscience’s Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!
Read the whole thing at 3QD:

Fifty terms to avoid in psychology and psychiatry?

The excellent blog Mind Hacks shared a recent Frontiers in Psychology paper entitled “Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

As mentioned in the Mind Hacks post, the advice in this article may not always be spot-on, but it’s still worth reading. Here are some excerpts:

“(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public […] Nevertheless, the evidence for the chemical imbalance model is at best slim […] There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels.”

“(9) Genetically determined. Few if any psychological capacities are genetically “determined”; at most, they are genetically influenced. Even schizophrenia, which is among the most heritable of all mental disorders, appears to have a heritability of between 70 and 90% as estimated by twin designs”

“(12) Hard-wired. The term “hard-wired” has become enormously popular in press accounts and academic writings in reference to human psychological capacities that are presumed by some scholars to be partially innate, such as religion, cognitive biases, prejudice, or aggression. For example, one author team reported that males are more sensitive than females to negative news stories and conjectured that males may be “hard wired for negative news” […] Nevertheless, growing data on neural plasticity suggest that, with the possible exception of inborn reflexes, remarkably few psychological capacities in humans are genuinely hard-wired, that is, inflexible in their behavioral expression”

“(27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” Most often, they describe this method as a hypothetical-deductive recipe, in which scientists begin with an overarching theory, deduce hypotheses (predictions) from that theory, test these hypotheses, and examine the fit between data and theory. If the data are inconsistent with the theory, the theory is modified or abandoned. It’s a nice story, but it rarely works this way”

“(42) Personality type. Although typologies have a lengthy history in personality psychology harkening back to the writings of the Roman physician Galen and later, Swiss psychiatrist Carl Jung, the assertion that personality traits fall into distinct categories (e.g., introvert vs. extravert) has received minimal scientific support. Taxometric studies consistently suggest that normal-range personality traits, such as extraversion and impulsivity, are underpinned by dimensions rather than taxa, that is, categories in nature”

Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6, 1100.
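As an aside on point (9): twin-design heritability estimates like the 70–90% figure quoted above are, in their simplest form, obtained via Falconer’s formula, which compares trait correlations in identical (MZ) and fraternal (DZ) twins. The correlations in this sketch are illustrative numbers, not real schizophrenia data.

```python
# Falconer's formula, a classic (and rough) twin-design estimate of
# heritability: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the
# trait correlations in identical and fraternal twin pairs.
# The correlations below are illustrative, not from any real study.

def falconer_heritability(r_mz, r_dz):
    """Heritability estimate from MZ and DZ twin-pair correlations."""
    return 2.0 * (r_mz - r_dz)

h2 = falconer_heritability(r_mz=0.85, r_dz=0.45)
print(round(h2, 2))  # 0.8 -> an estimate of 80%, within the quoted 70-90% range
```

The formula’s logic: MZ twins share essentially all their genes, DZ twins about half, so doubling the difference in their correlations isolates (crudely) the genetic contribution — which is exactly why such estimates speak of genetic *influence* rather than determination.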

Why can most people identify a color without a reference but not a musical note?

[I was asked this on Quora. Here’s a slightly modified version of my answer.]

This is an excellent question! I’m pretty sure there is not yet a definitive answer, but I suspect that the eventual answer will involve two factors:

  1. The visual system in humans is much more highly developed than the auditory system.
  2. Human cultures typically teach color words to all children, but formal musical training — complete with named notes — is relatively rare.

When you look at the brain’s cortical regions, you notice that the primary visual cortex has the most well-defined laminar structure in the whole brain. Primary auditory cortex is less structured. We still don’t know exactly how the brain’s layers contribute to sensory processing, but some theories suggest that the more well-defined cortices are capable of making finer distinctions.

[See this blog post for more on cortical lamination:
How to navigate on Planet Brain]

However, I don’t think the explanation for the difference between music and color perception is purely neuroscientific. Culture may well play an important role. I think that with training, absolute pitch — the ability to identify the exact note rather than the interval between notes — could become more common. Speakers of tonal languages like Mandarin or Cantonese are more likely to have absolute pitch, especially if they’ve had early musical training. (More on this below.)

Also: when people with no musical training are exposed to tunes they are familiar with, many of them can tell if the absolute pitch is correct or not [1]. Similarly, when asked to produce a familiar tune, many people can hit the right pitch [2]. This suggests that at least some humans have the latent ability to use and/or recognize absolute pitch.

Perhaps with early training, note names will become as common as color words.

This article by a UCSD psychologist describes the mystery quite well:

Diana Deutsch – Absolute Pitch.

As someone with absolute pitch, it has always seemed puzzling to me that this ability should be so rare. When we name a color, for example as green, we do not do this by viewing a different color, determining its name, and comparing the relationship between the two colors. Instead, the labeling process is direct and immediate.

She has some fascinating data on music training among tonal language speakers:

“Figure 2. Percentages of subjects who obtained a score of at least 85% correct on the test for absolute pitch. CCOM: students at the Central Conservatory of Music, Beijing, China; all speakers of Mandarin. ESM: students at Eastman School of Music, Rochester, New York; all nontone language speakers.”

Looks like if you speak a tonal language and start learning music early, you are far more likely to have perfect pitch. (Separating causation from correlation may be tricky.)


[1] Memory for the absolute pitch of familiar songs.
[2] Absolute memory for musical pitch: evidence from the production of learned melodies.

Quora: Why can most people identify a color without a reference but not a musical note?

Does dopamine produce a feeling of bliss? On the chemical self, the social self, and reductionism.

Here’s the intro to my latest blog post at 3 Quarks Daily.

“The osmosis of neuroscience into popular culture is neatly symbolized by a phenomenon I recently chanced upon: neurochemical-inspired jewellery. It appears there is a market for silvery pendants shaped like molecules of dopamine, serotonin, acetylcholine, norepinephrine and other celebrity neurotransmitters. Under pictures of dopamine necklaces, the neuro-jewellers have placed words like “love”, “passion”, or “pleasure”. Under serotonin they write “happiness” and “satisfaction”, and under norepinephrine, “alertness” and “energy”. These associations presumably stem from the view that the brain is a chemical soup in which each ingredient generates a distinct emotion, mood, or feeling. Subjective experience, according to this view, is the sum total of the contributions of each “mood molecule”. If we strip away the modern scientific veneer, the chemical soup idea evokes the four humors of ancient Greek medicine: black bile to make you melancholic, yellow bile to make you choleric, phlegm to make you phlegmatic, and blood to make you sanguine.

“A dopamine pendant worn round the neck as a symbol for bliss is emblematic of modern society’s attitude towards current scientific research. A multifaceted — and only partially understood — set of experiments is hastily distilled into an easily marketed molecule of folk wisdom. Having filtered out the messy details, we are left with an ornamental nugget of thought that appears both novel and reassuringly commonsensical. But does neuroscience really support this reductionist view of human subjectivity? Can our psychological states be understood in terms of a handful of chemicals? Does neuroscience therefore pose a problem for a more holistic view, in which humans are integrated in social and environmental networks? In other words, are the “chemical self” and the “social self” mutually exclusive concepts?”

– Read the rest at 3QD: The Chemical Self and the Social Self