The Pentagon of Neuroscience — An Infographic/Listicle for Understanding the Neuroculture

Click here to go straight to the infographic. It should open in Firefox and Chrome.

Neuroscience has hit the big time. Every day, popular newspapers, websites and blogs offer up a heady stew of brain-related self-help (neuro-snake oil?) and gee-whiz science reporting (neuro-wow?). Some scientists and journalists — perhaps caught up in the neuro-fervor — throw caution to the wind, promising imminent brain-based answers to the kinds of questions that probably predate civilization itself: What is the nature of mind? Why do we feel the way we do? Does each person have a fundamental essence? How can we avoid pain and suffering, and discover joy, creativity, and interpersonal harmony?


Fifty terms to avoid in psychology and psychiatry?

The excellent blog Mind Hacks shared a recent Frontiers in Psychology paper entitled “Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

As mentioned in the Mind Hacks post, the advice in this article may not always be spot-on, but it’s still worth reading. Here are some excerpts:

“(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public […] Nevertheless, the evidence for the chemical imbalance model is at best slim […] There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels.”

“(9) Genetically determined. Few if any psychological capacities are genetically “determined”; at most, they are genetically influenced. Even schizophrenia, which is among the most heritable of all mental disorders, appears to have a heritability of between 70 and 90% as estimated by twin designs”

“(12) Hard-wired. The term “hard-wired” has become enormously popular in press accounts and academic writings in reference to human psychological capacities that are presumed by some scholars to be partially innate, such as religion, cognitive biases, prejudice, or aggression. For example, one author team reported that males are more sensitive than females to negative news stories and conjectured that males may be “hard wired for negative news” […] Nevertheless, growing data on neural plasticity suggest that, with the possible exception of inborn reflexes, remarkably few psychological capacities in humans are genuinely hard-wired, that is, inflexible in their behavioral expression”

“(27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” Most often, they describe this method as a hypothetical-deductive recipe, in which scientists begin with an overarching theory, deduce hypotheses (predictions) from that theory, test these hypotheses, and examine the fit between data and theory. If the data are inconsistent with the theory, the theory is modified or abandoned. It’s a nice story, but it rarely works this way”

“(42) Personality type. Although typologies have a lengthy history in personality psychology harkening back to the writings of the Roman physician Galen and later, Swiss psychiatrist Carl Jung, the assertion that personality traits fall into distinct categories (e.g., introvert vs. extravert) has received minimal scientific support. Taxometric studies consistently suggest that normal-range personality traits, such as extraversion and impulsivity, are underpinned by dimensions rather than taxa, that is, categories in nature”

Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6, 1100.

The Neural Citadel — a wildly speculative metaphor for how the brain works

My latest 3QD essay is a bit of a wild one. I start by talking about Goodhart’s Law, a quirk of economics that I think has implications elsewhere. I try to link it with neuroscience, but in order to do so I first construct an analogy between the brain and an economy. We might not understand economic networks any better than we do neural networks, but this analogy is a fun way to re-frame matters of neuroscience and society.

Plan of a Citadel (from Wikipedia)

Here’s an excerpt:

The Neural Citadel

Nowadays we routinely encounter descriptions of the brain as a computer, especially in the pop science world. Just like computers, brains accept inputs (sensations from the world) and produce outputs (speech, action, and influence on internal organs). Within the world of neuroscience there is a widespread belief that the computer metaphor becomes unhelpful very quickly, and that new analogies must be sought. So you can also come across conceptions of the brain as a dynamical system, or as a network. One of the purposes of a metaphor is to link things we understand (like computers) with things we are still stymied by (like brains). Since the educated public has plenty of experience with computers, but at best nebulous conceptions of dynamical systems and networks, it makes sense that the computer metaphor is the most popular one. In fact, outside of a relatively small group of mathematically-minded thinkers, even scientists often feel most comfortable thinking of the brain as an elaborate biological computer. [3]

However, there is another metaphor for the brain that most human beings will be able to relate to. The brain can be thought of as an economy: as a biological social network, in which the manufacturers, marketers, consumers, government officials and investors are neurons. Before going any further, let me declare up front that this analogy has a fundamental flaw. The purpose of metaphor is to understand the unknown — in this case the brain — in terms of the known. But with all due respect to economists and other social scientists, we still don’t actually understand socio-economic networks all that well. Not nearly as well as computer scientists understand computers. Nevertheless, we are all embedded in economies and social networks, and therefore have intuitions, suspicions, ideologies, and conspiracy theories about how they work.

Because of its fundamental flaw, the brain-as-economy metaphor isn’t really going to make my fellow neuroscientists’ jobs any easier, which is why I am writing about it on 3 Quarks Daily rather than in a peer-reviewed academic journal. What the brain-as-economy metaphor does do is allow us to translate neural or mental phenomena into the language of social cooperation and competition, and vice versa. Even though brains and economies seem equally mysterious and unpredictable, perhaps in attempting to bridge the two domains something can be gained in translation. If nothing else, we can expect some amusing raw material for armchair philosophizing about life, the universe, and everything. [4]

So let’s paint a picture of the neural economy. Imagine that the brain is a city — the capital of the vast country that is the body. The neural citadel is a fortress; the blood-brain barrier serves as its defensive wall, protecting it from the goings-on in the countryside, and only allowing certain raw materials through its heavily guarded gates — oxygen and nutrients, for the most part. Fuel for the crucial work carried out by the city’s residents: the neurons and their helper cells. The citadel needs all this fuel to deal with its main task: the industrial-scale transformation of raw data into refined information. The unprocessed data pours into the citadel through the various axonal highways. The trucks carrying the data are dispatched by the nervous system’s network of spies and informants. Their job is to inform the citadel of the goings-on outside its walls. The external sense organs — the eyes, ears, nose, tongue and skin — are the body’s border patrols, coast guards, observatories, and foreign intelligence agencies. The muscles and internal organs, meanwhile, are monitored by the home ministry’s police and bureaucrats, always on the look-out for any domestic turbulence. (The stomach, for instance, is known to be a hotbed of labor unrest.)

The neural citadel enables an information economy — a marketplace of ideas, as it were. Most of this information is manufactured within the brain and internally traded, but some of it — perhaps the most important information — is exported from the brain in the form of executive orders, requests and the occasional plaintive plea from the citadel to the sense organs, muscles, glands and viscera. The purpose of the brain is definitely subject to debate — even within the citadel — but one thing most people can agree on is that it must serve as an effective and just ruler of the body: a government that marries a harmonious domestic policy — unstressed stomach cells, unblackened lung cells, radiant skin cells and resilient muscle cells — with a peaceful and profitable foreign policy. (The country is frustratingly dependent on foreign countries, over which it has limited control, for its energy and construction material.)

The citadel is divided into various neighborhoods, according to the types of information being processed. There are neighborhoods subject to strict zoning requirements that process only one sort of information: visions, sounds, smells, tastes, or textures. Then there are mixed-use neighborhoods where different kinds of information are assembled into more complex packages, endlessly remixed and recontextualized. These neighborhoods are not arranged in a strict hierarchy. Allegiances can form and dissolve. Each is trying to do something useful with the information that is fed to it: to use older information to predict future trends, or to stay on the look-out for a particular pattern that might arise in the body, the outside world, or some other part of the citadel. Each neighborhood has an assortment of manufacturing strategies, polling systems, research groups, and experimental start-up incubators. Though they are all working for the welfare of the country, they sometimes compete for the privilege of contributing to governmental policies. These policies seem to be formulated at the centers of planning and coordination in the prefrontal cortex — an ivory tower (or a corporate skyscraper, if you prefer merchant princes to philosopher kings) that has a panoramic view of the citadel. The prefrontal tower then dispatches its decisions to the motor control areas of the citadel, which notify the body of governmental marching orders.

~

The essay is not just about the metaphor though. There are bits about dopamine, and addiction, and also some wide-eyed idealism. :) Check the whole thing out at 3 Quarks Daily.

For the record, there is a major problem with personifying neurons. It doesn’t actually explain anything, since we are just as baffled by persons as we are by neurons. Personifying neurons creates billions of microscopic homunculi. The Neural Citadel metaphor was devised in a spirit of play, rather than as serious science or philosophy.

Why does dreaming sometimes produce Inception-style time distortions?

I answered the following question on Quora:

Last night I slept for 8.5 hours and had a dream that lasted for a month. It was full of incredible landscapes, animals, and interesting interactions with people. I woke to my alarm at 9:30, silenced it, then went back to my dream for a week before waking up at 10:30. What was happening in my brain?

Looks like no one has mentioned hippocampal replay yet!

I’m not a big fan of the movie Inception, but there is some tentative neuroscientific evidence that the time distortions experienced during dreaming have measurable neural correlates. (Experiential time distortions are of course purely subjective, so no one can tell you that you didn’t experience them. All experiences are real experiences.)

There are neurons in the hippocampus called place cells that tend to fire when an animal is in a particular location. (Incidentally, the discoverers of place cells and grid cells won the 2014 Nobel Prize in Physiology or Medicine.)

Let’s say a rat is navigating through a maze. When it reaches point A, a particular cell (or group of cells) fires. When it reaches point B, another cell fires. So there is a sequence of neuronal firing patterns that corresponds to the sequence of locations that the animal has experienced. In the classic figures from these experiments, each place cell’s firing is drawn in a different color, and each place cell covers a region of the maze or track.
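
To make the sequence idea concrete, here is a minimal toy sketch in Python (all numbers are invented for illustration, not fit to any real data), assuming each place cell has a Gaussian tuning curve along a one-dimensional track:

```python
import numpy as np

# Toy 1D track: each place cell fires most near its preferred location.
track_length = 100.0                    # track length in cm
n_cells = 10
centers = np.linspace(5, 95, n_cells)   # preferred locations of the cells
width = 8.0                             # width of each place field, in cm

def firing_rates(position):
    """Relative firing rate of every place cell at a given position."""
    return np.exp(-0.5 * ((position - centers) / width) ** 2)

# As the animal runs down the track, the identity of the most
# active cell changes in an orderly sequence: 0, 1, 2, ...
for pos in np.linspace(0.0, track_length, 11):
    print(f"position {pos:5.1f} cm -> most active cell: {np.argmax(firing_rates(pos))}")
```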

So what does any of this have to do with dreaming? Well, when the animal is in REM (dreaming) sleep, or is quietly resting, the place cells that were recently active become reactivated. These reactivations are typically much faster than actual experience. They can also run backwards relative to prior waking experience, and can even be jumbled.
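
Here is an equally cartoonish sketch of what “replay” means, assuming a roughly 20x time compression (a figure in the ballpark of what rodent studies report, used here purely for illustration):

```python
import numpy as np

# Cartoon of waking experience: cells 0..9 fire in order,
# one spike per second, as the animal traverses a track.
waking_cells = np.arange(10)
waking_times = np.arange(10.0)          # spike times in seconds

# During rest or sleep, the same sequence reactivates,
# compressed in time and sometimes in reverse order.
compression = 20.0
replay_times = waking_times / compression

print("waking:         cells", waking_cells, "over", waking_times[-1], "s")
print("forward replay: cells", waking_cells, "over", replay_times[-1], "s")
print("reverse replay: cells", waking_cells[::-1], "over", replay_times[-1], "s")
```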

Of course, your experiences in dreams are more than a sequence of places. To extend the insights from rodent place cells into the study of actual human dreaming, we have to make a few speculative leaps. Perhaps in humans, there are ‘experience cells’ or ‘episode cells’ that encode broad categories of perception and cognition. Many neuroscientists refer to the set of cells that participate in such categorization as a cognitive map. Sleep seems to involve a free-form exploration of the cognitive map.

Hippocampal replay is widely seen as crucial for consolidating memories, and for learning. If you’ve been doing something during the day, unconscious neural processes during subsequent sleep or rest help you extract useful information from the experience, so that your performance can improve the next day.

Most of the data on hippocampal processing come from animals. It’s worth remembering that we can’t know if animals have the kinds of dreams that humans do. Nevertheless, the picture of sleep emerging from various lines of inquiry suggests that dreaming may be a subjective ‘side-effect’ of various sleep-related neural processes such as hippocampal replay. (Nowadays most neuroscientists would refrain from claiming that the dreaming experience per se is the purpose of dreaming.)

Many neuroscientists think that one of the purposes of sleep is to replay the past, and thereby discover new possibilities for the future. Perhaps speeding up the process is an efficient way to cycle through multiple ‘angles’ on the past, or on the wider space of possibilities. No one has any idea why this process should have subjective experiential correlates — and we are even more in the dark about why these experiences tend to be so vivid and bizarre.

Quora: Why does dreaming sometimes produce Inception-style time distortions?

Why can most people identify a color without a reference but not a musical note?

[I was asked this on Quora. Here’s a slightly modified version of my answer.]

This is an excellent question! I’m pretty sure there is not yet a definitive answer, but I suspect that the eventual answer will involve two factors:

  1. The visual system in humans is much more highly developed than the auditory system.
  2. Human cultures typically teach color words to all children, but formal musical training — complete with named notes — is relatively rare.

When you look at the brain’s cortical regions, you realize that the primary visual cortex has the most well-defined laminar structure in the whole brain. Primary auditory cortex is less structured. We still don’t know exactly how the brain’s layers contribute to sensory processing, but some theories suggest that the more well-defined cortices are capable of making finer distinctions.

[See this blog post for more on cortical lamination:
How to navigate on Planet Brain]

However, I don’t think the explanation for the difference between music and color perception is purely neuroscientific. Culture may well play an important role. I think that with training, absolute pitch — the ability to identify the exact note rather than the interval between notes — could become more common. Speakers of tonal languages like Mandarin or Cantonese are more likely to have absolute pitch, especially if they’ve had early musical training. (More on this below.)

Also: when people with no musical training are exposed to tunes they are familiar with, many of them can tell if the absolute pitch is correct or not [1]. Similarly, when asked to produce a familiar tune, many people can hit the right pitch [2]. This suggests that at least some humans have the latent ability to use and/or recognize absolute pitch.

Perhaps with early training, note names will become as common as color words.

This article by a UCSD psychologist describes the mystery quite well:

Diana Deutsch – Absolute Pitch.

As someone with absolute pitch, it has always seemed puzzling to me that this ability should be so rare. When we name a color, for example as green, we do not do this by viewing a different color, determining its name, and comparing the relationship between the two colors. Instead, the labeling process is direct and immediate.

She has some fascinating data on music training among tonal language speakers:

“Figure 2. Percentages of subjects who obtained a score of at least 85% correct on the test for absolute pitch. CCOM: students at the Central Conservatory of Music, Beijing, China; all speakers of Mandarin. ESM: students at Eastman School of Music, Rochester, New York; all nontone language speakers.”

Looks like if you speak a tonal language and start learning music early, you are far more likely to have perfect pitch. (Separating causation from correlation may be tricky.)


References:

[1] Memory for the absolute pitch of familiar songs.
[2] Absolute memory for musical pitch: evidence from the production of learned melodies.

Quora: Why can most people identify a color without a reference but not a musical note?

What are the limits of neuroscience?

[My answer to a recent Quora question.]

There are two major problems with neuroscience:

  1. Weak philosophical foundations when dealing with mental concepts
  2. Questionable statistical analyses of experimental results

1. Neuroscience needs a bit of philosophy

Many neuroscientific results are presented without sufficient philosophical nuance. This can lead to cartoonish and potentially harmful conceptions of the brain, and by extension, of human behavior, psychology, and culture. Concepts related to the mind are among the hardest to pin down, and yet some neuroscientists give the impression that there are no issues that require philosophical reflection.

Because of a certain disdain for philosophy (and sometimes even psychology!), some neuroscientists end up drawing inappropriate inferences from their research, or distorting the meaning of their results.

One particularly egregious example is the “double subject fallacy”, which was recently discussed in an important paper:

“Me & my brain”: exposing neuroscience’s closet dualism.

Here’s the abstract of the paper:

Our intuitive concept of the relations between brain and mind is increasingly challenged by the scientific world view. Yet, although few neuroscientists openly endorse Cartesian dualism, careful reading reveals dualistic intuitions in prominent neuroscientific texts. Here, we present the “double-subject fallacy”: treating the brain and the entire person as two independent subjects who can simultaneously occupy divergent psychological states and even have complex interactions with each other — as in “my brain knew before I did.” Although at first, such writing may appear like harmless, or even cute, shorthand, a closer look suggests that it can be seriously misleading. Surprisingly, this confused writing appears in various cognitive-neuroscience texts, from prominent peer-reviewed articles to books intended for lay audience. Far from being merely metaphorical or figurative, this type of writing demonstrates that dualistic intuitions are still deeply rooted in contemporary thought, affecting even the most rigorous practitioners of the neuroscientific method. We discuss the origins of such writing and its effects on the scientific arena as well as demonstrate its relevance to the debate on legal and moral responsibility.

[My answer to the earlier question raises related issues: What are the limits of neuroscience with respect to subjectivity, identity, self-reflection, and choice?]

2. Neuroscience needs higher data analysis standards

On a more practical level, neuroscience is besieged by problems related to bad statistics. The data in neuroscience (and all “complex system” science) are extremely noisy, so increasingly sophisticated statistical techniques are deployed to extract meaning from them. This sophistication means that fewer and fewer neuroscientists actually understand the math behind the statistical methods they employ. This can create a variety of problems, including incorrect inferences. Scientists looking for “sexy” results can use poorly understood methods to show ‘significant’ effects where there really is only a random fluke. (The more methods you use, the more chances you create for finding a random “statistically significant” effect. This kind of thing has been called “torturing the data until it confesses”.)
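
The multiple-comparisons trap is easy to demonstrate. Here is a small simulation (hypothetical numbers, and nothing neural about the data) in which two groups “differ” in roughly 5% of brain regions even though no real effect exists anywhere:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pure noise: 100 "brain regions", two groups of 15 subjects,
# and no genuine group difference anywhere.
n_regions, n_subjects = 100, 15
group_a = rng.normal(size=(n_regions, n_subjects))
group_b = rng.normal(size=(n_regions, n_subjects))

# Test every region separately at the usual p < 0.05 threshold.
_, p_values = stats.ttest_ind(group_a, group_b, axis=1)
false_hits = int(np.sum(p_values < 0.05))

# Roughly 5 regions will look "significant" by chance alone.
print(f"{false_hits} of {n_regions} regions pass p < 0.05")
```

Run enough uncorrected tests and something will always “confess”; this is exactly why corrections for multiple comparisons, and replication, matter.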

Chance effects are unreproducible, and this is a major problem for many branches of science. Replication is central to good science, so when it frequently fails to occur, then we know there are problems with research and with how it is reviewed and published. Many times there is a “flash in the pan” at a laboratory that turns out to be fool’s gold.

See these articles for more:

Bad Stats Plague Neuroscience

Voodoo Correlations in Social Neuroscience

The Dangers of Double Dipping (Voodoo IV)

Erroneous analyses of interactions in neuroscience: a problem of significance.

Fixing Science, Not Just Psychology – Neuroskeptic

The Replication Problem in the Brain Sciences


Quora: What are the limits of neuroscience?

Is neuroscience really ruining the humanities?

For my latest 3QD post, I expanded on my answer to a Quora question: Is neuroscience ruining the humanities?


Here’s an excerpt:

“Neuroscience is ruining the humanities”. This was the provocative title of a recent article by Arthur Krystal in The Chronicle of Higher Education. To me the question was pure clickbait [1], since I am both a neuroscientist and an avid spectator of the drama and intrigue on the other side of the Great Academic Divide [2]. Given the sensational nature of many of the claims made on behalf of the cognitive and neural sciences, I am inclined to assure people in the humanities that they have little to fear. On close inspection, the bold pronouncements of fields like neuro-psychology, neuro-economics and neuro-aesthetics — the sorts of statements that mutate into TED talks and pop science books — often turn out to be wild extrapolations from a limited (and internally inconsistent) data set.

Unlike many of my fellow scientists, I have occasionally grappled with the weighty ideas that emanate from the humanities, even coming to appreciate elements of postmodern thinking. (Postmodern — aporic? — jargon is of course a different matter entirely.) I think the tapestry that is human culture is enriched by the thoughts that emerge from humanities departments, and so I hope the people in these departments can exercise some constructive skepticism when confronted with the latest trendy factoid from neuroscience or evolutionary psychology. Some of my neuroscience-related essays here at 3QD were written with this express purpose [3, 4].

The Chronicle article begins with a 1942 quote from New York intellectual Lionel Trilling: “What gods were to the ancients at war, ideas are to us”. This sets the tone for the mythic narrative that lurks beneath much of the essay, a narrative that can be crudely caricatured as follows. Once upon a time the University was a paradise of creative ferment. Ideas were warring gods, and the sparks that flew off their clashing swords kept the flames of wisdom and liberty alight. The faithful who erected intellectual temples to bear witness to these clashes were granted the boon of enlightened insight. But faith in the great ideas gradually faded, and so the golden age came to an end. The temple-complex of ideas began to decay from within, corroded by doubt. New prophets arose, who claimed that ideas were mere idols to be smashed, and that the temples were metanarrative prisons from which to escape. In this weak and bewildered state, the intellectual paradise was invaded. The worshipers were herded into a shining new temple built from the rubble of the old ones. And into this temple the invaders’ idols were installed: the many-armed goddess of instrumental rationality, the one-eyed god of essentialism, the cold metallic god of materialism…

The over-the-top quality of my little academia myth might give the impression that I think it is a tissue of lies. But perhaps more nuance is called for. As with all myths, I think there are elements of truth in this narrative.


Read the rest at 3 Quarks Daily: Is neuroscience really ruining the humanities?

How to navigate on Planet Brain

I was asked the following question on Quora: “How do you most easily memorize Brodmann’s areas?”. The question details added the following comment: “Brodmann area 7 is honestly where the numbering starts to seem really arbitrary.” Here’s how I responded:

Yup. The Brodmann numbering system for cortical areas is arbitrary. If you find a mnemonic, do let us know!

I’m a computational modeler working in an anatomy lab, so I confront the deficits in my anatomical knowledge on a daily basis! I can barely remember the handful of Brodmann areas relevant to my project, let alone the full list! I have a diagram of the areas taped up next to my monitor. :)

Neuroanatomists become familiar with the brain’s geography over years and years of “travel” through the brain. Think of it like this: what they’re doing is like navigating a city that doesn’t have a neat New York-style city block structure with sensibly numbered streets and avenues. Boston, where I live, is largely lacking in regularity, so one really has to use landmarks — like the Charles River, the Citgo sign, or the Prudential Center. The landmarks for neuroanatomists are sulci and gyri. Over time they learn the Brodmann area numbers. Only instead of a 2D city, neuroanatomists are mapping a 3D planet!


Over the years my lab — the Neural Systems Laboratory at Boston University — has developed a structural model that explains cortical areas and their interconnections in terms of cytoarchitectonic features. The model doesn’t supply a naming/addressing system, but it does provide a way to make sense of the forest of areas!

Fig 1. Schematic representation of four broad cortical types. Agranular and dysgranular cortices are of the limbic type. Figure from [1].

The structural model [1,2] is based on the observation that the six-layered structure of the isocortex is not uniform, but varies systematically. The simplest parts of the cortex are the “limbic” cortices, which include posterior orbitofrontal and anterior cingulate cortices. Limbic cortices have around 4 distinct layers. The most differentiated parts of the cortex are the “eulaminate” cortices, which include primary sensory areas, and some (but not all!) parts of the prefrontal cortex, such as dorsolateral prefrontal cortex. Eulaminate cortices have 6 easily distinguished layers. [See Fig 1.] Interestingly, there is some evidence that the simplest cortices are phylogenetically oldest, and that the most differentiated are the most recent.

Fig 2. Schematic representation of cortico-cortical projections. Figure from [2].

Every functional cortical hierarchy* consists of a spectrum of cortices from limbic to eulaminate areas. Areas which are similar tend to be more strongly connected to each other, with many layers linking to each other in a way that can be described as “columnar”, “lateral” or “symmetric”. Dissimilar areas are generally more weakly connected, and have an “asymmetric” laminar pattern of connections, in which projections from a less differentiated area to a more differentiated area originate in deep layers (5 and 6), and terminate in superficial layers (1,2 and 3). Projections from a more differentiated area to a less differentiated area have the opposite pattern: they originate in superficial layers (2 and 3), and terminate in deep layers (4,5 and 6). [See Fig 2.]
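
As a toy illustration of the rule (my own sketch, not code from the papers), one could rank areas by laminar differentiation and read off the predicted projection pattern:

```python
def predicted_projection(origin_rank, target_rank):
    """Toy encoding of the structural model's prediction.

    Ranks are integers standing for degree of laminar differentiation,
    e.g. 1 = limbic (agranular/dysgranular) ... 4 = eulaminate.
    """
    if origin_rank == target_rank:
        return "similar areas: strong, 'symmetric' (columnar/lateral) connections"
    elif origin_rank < target_rank:
        # less differentiated -> more differentiated
        return "origins in deep layers (5-6), terminations in superficial layers (1-3)"
    else:
        # more differentiated -> less differentiated
        return "origins in superficial layers (2-3), terminations in deep layers (4-6)"

# e.g. a limbic area projecting to a eulaminate area:
print(predicted_projection(1, 4))
```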

For more on the details of the model, check out the references [1,2]. My boss, Helen Barbas, just submitted a short review about the structural model. When it is out I will append it to this answer.

To return to the city analogy, the structural model tells us that we can infer the (transportation/social/cultural?) links between pairs of neighborhoods based on what the two neighborhoods look like. If the structural model were true for cities, then neighborhoods that have similar houses and street layouts would be more closely linked than dissimilar neighborhoods. Similar neighborhoods would have one type of linkage (the “symmetric” type), whereas dissimilar neighborhoods would have another (the “asymmetric” type).

References

[1] Dombrowski SM, Hilgetag CC, Barbas H (2001) Quantitative architecture distinguishes prefrontal cortical systems in the rhesus monkey. Cereb Cortex 11: 975-988.

[2] Barbas H, Rempel-Clower N (1997) Cortical structure predicts the pattern of corticocortical connections. Cereb Cortex 7: 635-646.

Notes

* Heterarchy might be a better description than hierarchy.

Here’s a link to the Quora answer: How do you most easily memorize Brodmann’s areas?