What neuroscience too often neglects: Behavior

A Quora conversation led me to a recent paper in Neuron that highlights a very important problem with a lot of neuroscience research: insufficient attention is paid to the careful analysis of behavior. The paper is not quite a call to return to behaviorism, but it is an invitation to consider that the pendulum has swung too far in the opposite direction, towards ‘blind’ searches for neural correlates. The paper is a wonderful big-picture critique, so I’d like to just share some excerpts.


“Neuroscience is replete with cases that illustrate the fundamental epistemological difficulty of deriving processes from processors. For example, in the case of the roundworm (Caenorhabditis elegans), we know the genome, the cell types, and the connectome—every cell and its connections (Bargmann, 1998; White et al., 1986). Despite this wealth of knowledge, our understanding of how all this structure maps onto the worm’s behavior remains frustratingly incomplete.”

“New technologies have enabled the acquisition of massive and intricate datasets, and the means to analyze them have become concomitantly more complex. This in turn has led to a need for experts in computation and data analysis, with a reduced emphasis on organismal-level thinkers who develop detailed functional analyses of behavior, its developmental trajectory, and its evolutionary basis. Deep and thorny questions like ‘‘what would even count as an explanation in this context,’’ ‘‘what is a mechanism for the behavior we are trying to understand,’’ and ‘‘what does it mean to understand the brain’’ get sidelined. The emphasis in neuroscience has transitioned from these larger scope questions to the development of technologies, model systems, and the approaches needed to analyze the deluge of data they produce. Technique-driven neuroscience could be considered an example of what is known as the substitution bias: ‘‘[…] when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution’’ (Kahneman, 2011, p. 12).”

This next excerpt raises an important issue with interpretations of mirror neuron studies. (I also have my own little rant about mirror neuron “theory”.)
“Interpretation then has the following logic: as neurons can be decoded for intention in the first person, and these same neurons decoded for the same intention in the third person, then activation of the mirror neurons can be interpreted as meaning that the primate has understood the intention of the primate it is watching. The problem with this attempt to posit an algorithm for ‘‘understanding’’ based on neuronal responses is that no independent behavioral experiment is done to show evidence that any kind of understanding is actually occurring, understanding that could then be correlated with the mirror neurons. This is a key error in our view: behavior is used to drive neuronal activity but no either/or behavioral hypothesis is being tested per se. Thus, an interpretation is being mistaken for a result; namely, that the mirror neurons understand the other individual.”

Here the authors talk about the importance of emergence as a bridge between neurons and behavior:

“The phenomenon at issue here, when making a case for recording from populations of neurons or characterizing whole networks, is emergence—neurons in their aggregate organization cause effects that are not apparent in any single neuron. Following this logic, however, leads to the conclusion that behavior itself is emergent from aggregated neural circuits and therefore should also be studied in its own right. An example of an emergent behavior that can only be understood at the algorithmic level, which in turn can only be determined by studying the emergent behavior itself, is flocking in birds. First one has to observe the behavior and then one can begin to test simple rules that will lead to reproduction of the behavior, in this case best done through simulation. The rules are simple—for example, one of them is ‘‘steer to average heading of neighbors’’ (Reynolds, 1987). Clearly, observing or dissecting an individual bird, or even several birds could never derive such a rule. Substitute flocking with a behavior like reaching, and birds for neurons, and it becomes clear how adopting an overly reductionist approach can hinder understanding.”
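The alignment rule quoted above (“steer to average heading of neighbors”) is simple enough to sketch in a few lines of Python. This is a minimal illustration of that one rule, not the authors’ code or Reynolds’ original boids implementation; the function name, neighbor radius, and turn rate are my own arbitrary choices. The point it demonstrates is the one the authors make: alignment is a property of the simulated flock, not of any single simulated bird.

```python
import math
import random

def alignment_step(positions, headings, neighbor_radius, turn_rate=0.5):
    """One update of the alignment rule: each boid turns partway
    toward the average heading of the boids within neighbor_radius."""
    new_headings = []
    for i, (x, y) in enumerate(positions):
        # Headings of all boids within the neighborhood (including self)
        nearby = [headings[j] for j, (nx, ny) in enumerate(positions)
                  if math.hypot(nx - x, ny - y) <= neighbor_radius]
        # Circular mean: average unit vectors to handle angle wrap-around
        avg = math.atan2(sum(math.sin(h) for h in nearby),
                         sum(math.cos(h) for h in nearby))
        # Steer a fraction of the way toward the average, along the shortest arc
        diff = math.atan2(math.sin(avg - headings[i]),
                          math.cos(avg - headings[i]))
        new_headings.append(headings[i] + turn_rate * diff)
    return new_headings

# With a radius large enough that every boid sees every other,
# initially random headings align within a few dozen steps.
random.seed(1)
positions = [(random.random(), random.random()) for _ in range(10)]
headings = [random.uniform(-math.pi, math.pi) for _ in range(10)]
for _ in range(50):
    headings = alignment_step(positions, headings, neighbor_radius=10.0)
```

Running the loop and inspecting the headings shows them converging toward a common direction, even though no single boid’s update rule mentions the flock as a whole. A full boids model adds separation and cohesion rules of the same flavor.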

Krakauer, John W., Asif A. Ghazanfar, Alex Gomez-Marin, Malcolm A. MacIver, and David Poeppel. “Neuroscience needs behavior: correcting a reductionist bias.” Neuron 93, no. 3 (2017): 480-490. [Paywalled]

My main criticism of this paper is that their proposed solution — a separation between analysis of behavior and analysis of neural data, with behavioral analysis ideally happening first — might be too rigid, and might also leave the behavioral analysis somewhat underconstrained. It is definitely important to have a clear behavioral hypothesis if you are running a behavioral study, even if it is part of a larger study of neural data. But the actual process of understanding might require a lot more ‘cross-talk’. It may not always be useful to come up with a high-level analysis of behavior in isolation from neural data. There is no guarantee that the high-level analysis will be accurate: there may be multiple high-level models that correspond to the same behavioral data. So we might need to add a third circle to their Figure 1: one for the space of behavioral explanations.

“Conscious realism”: a new way to think about reality (or the lack thereof?)


An interesting interview in The Atlantic with cognitive scientist Donald D. Hoffman:

The Case Against Reality

“I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.”


“Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.”


“As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.”

I don’t agree with everything in the article (especially the quantum stuff) but I think many people interested in consciousness and metaphysics will find plenty of food for thought here:

The Case Against Reality

Also, the “conscious agents all the way down” idea is the exact position I was criticizing in a recent 3QD essay:

3quarksdaily: Persons all the way down: On viewing the scientific conception of the self from the inside out

The diagram above is from a science fiction story I was working on, back when I was a callow youth. It is closely related to the idea of a network of conscious agents. Here’s another ‘version’ of it.


Not sure why I made it look so morbid. 🙂

The Emotional Gatekeeper — a computational model of emotional attention

My paper is finally out in PLoS Computational Biology. It’s an open access journal, so everyone can read it:

The Emotional Gatekeeper: A Computational Model of Attentional Selection and Suppression through the Pathway from the Amygdala to the Inhibitory Thalamic Reticular Nucleus

Here’s the Author Summary, which is a simplified version of the abstract:

“Emotional experiences grab our attention. Information about the emotional significance of events helps individuals weigh opportunities and dangers to guide goal-directed behavior, but may also lead to irrational decisions when the stakes are perceived to be high. Which neural circuits underlie these contrasting outcomes? A recently discovered pathway links the amygdala—a key center of the emotional system—with the inhibitory thalamic reticular nucleus (TRN) that filters information between the thalamus and cortex. We developed a neural network model—the Emotional Gatekeeper—that demonstrates how the newly discovered pathway from the amygdala to TRN highlights relevant information to help assess threats and opportunities. The model also shows how the amygdala-TRN pathway can lead normal individuals to discount neutral but useful information in highly charged emotional situations, and predicts that disruption of specific nodes in this circuit underlies distinct psychiatric disorders.”


Here’s the full citation:

John YJ, Zikopoulos B, Bullock D, Barbas H (2016) The Emotional Gatekeeper: A Computational Model of Attentional Selection and Suppression through the Pathway from the Amygdala to the Inhibitory Thalamic Reticular Nucleus. PLoS Comput Biol 12(2): e1004722. doi:10.1371/journal.pcbi.1004722

Perception is a creative act: On the connection between creativity and pattern recognition

An answer I wrote to the Quora question Does the human brain work solely by pattern recognition?:

Great question! Broadly speaking, the brain does two things: it processes ‘inputs’ from the world and from the body, and generates ‘outputs’ to the muscles and internal organs.

Pattern recognition shows up most clearly during the processing of inputs. Recognition allows us to navigate the world, seeking beneficial/pleasurable experiences and avoiding harmful/negative experiences.* So pattern recognition must also be supplemented by associative learning: humans and animals must learn how patterns relate to each other, and to their positive and negative consequences.

And patterns must not simply be recognized: they must also be categorized. We are bombarded by patterns all the time. The only way to make sense of them is to categorize them into classes that can all be treated similarly. We have one big category for ‘snake’, even though the sensory patterns produced by specific snakes can be quite different. Pattern recognition and classification are closely intertwined, so in what follows I’m really talking about both.

Creativity does have a connection with pattern recognition. One of the most complex and fascinating manifestations of pattern recognition is the process of analogy and metaphor. People often draw analogies between seemingly disparate topics: this requires creative use of the faculty of pattern recognition. Flexible intelligence depends on the ability to recognize patterns of similarity between phenomena. This is a particularly useful skill for scientists, teachers, artists, writers, poets and public thinkers, but it shows up all over the place. Many internet memes, for example, involve drawing analogies: seeing the structural connections between unrelated things.

One of my favourites is a meme on twitter called #sameguy. It started as a game of uploading pictures of two celebrities that resemble each other, followed by the hashtag #sameguy. But it evolved to include abstract ideas and phenomena that are the “same” in some respect. Making cultural metaphors like this requires creativity, as does understanding them. One has to free one’s mind of literal-mindedness in order to temporarily ignore the ever-present differences between things and focus on the similarities.

Here’s a blog that collects #sameguy submissions: Same Guy

On twitter you sometimes come across more imaginative, analogical #sameguy posts: #sameguy – Twitter Search

The topic of metaphor and analogy is one of the most fascinating aspects of intelligence, in my opinion. I think it’s far more important than coming up with theories about ‘consciousness’. 🙂 Check out this answer:

Why are metaphors and allusions used while writing?
(This Quora answer is a cross-post of a blog post I wrote: Metaphor: the Alchemy of Thought)

In one sense metaphor and analogy are central to scientific research. I’ve written about this here:

What are some of the most important problems in computational neuroscience?

Science: the Quest for Symmetry

This essay is tangentially related to the topic of creativity and patterns:

From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

* The brain’s outputs — commands to muscles and glands — are closely linked with pattern recognition too. What you choose to do depends on what you can do given your intentions, circumstances, and bodily configuration. The state that you and the universe happen to be in constrains what you can do, and so it is useful for the brain to recognize and categorize the state in order to mediate decision-making, or even non-conscious behavior. When you’re walking on a busy street, you rapidly process the pathways that are available to you. Even if you stumble, you can quickly and unconsciously act to minimize damage to yourself and others. Abilities of this sort suggest that pattern recognition is not purely a way to create an ‘image’ of the world, but also a central part of our ability to navigate it.


Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

Here’s an excerpt:
What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.
A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscience’s Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!
Read the whole thing at 3QD.

Fifty terms to avoid in psychology and psychiatry?

The excellent blog Mind Hacks shared a recent Frontiers in Psychology paper entitled “Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

As mentioned in the Mind Hacks post, the advice in this article may not always be spot-on, but it’s still worth reading. Here are some excerpts:

“(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public […] Nevertheless, the evidence for the chemical imbalance model is at best slim […] There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels.”

“(9) Genetically determined. Few if any psychological capacities are genetically “determined”; at most, they are genetically influenced. Even schizophrenia, which is among the most heritable of all mental disorders, appears to have a heritability of between 70 and 90% as estimated by twin designs”

“(12) Hard-wired. The term “hard-wired” has become enormously popular in press accounts and academic writings in reference to human psychological capacities that are presumed by some scholars to be partially innate, such as religion, cognitive biases, prejudice, or aggression. For example, one author team reported that males are more sensitive than females to negative news stories and conjectured that males may be “hard wired for negative news” […] Nevertheless, growing data on neural plasticity suggest that, with the possible exception of inborn reflexes, remarkably few psychological capacities in humans are genuinely hard-wired, that is, inflexible in their behavioral expression”

“(27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” Most often, they describe this method as a hypothetical-deductive recipe, in which scientists begin with an overarching theory, deduce hypotheses (predictions) from that theory, test these hypotheses, and examine the fit between data and theory. If the data are inconsistent with the theory, the theory is modified or abandoned. It’s a nice story, but it rarely works this way”

“(42) Personality type. Although typologies have a lengthy history in personality psychology harkening back to the writings of the Roman physician Galen and later, Swiss psychiatrist Carl Jung, the assertion that personality traits fall into distinct categories (e.g., introvert vs. extravert) has received minimal scientific support. Taxometric studies consistently suggest that normal-range personality traits, such as extraversion and impulsivity, are underpinned by dimensions rather than taxa, that is, categories in nature”

Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6, 1100.

Is neuroscience really ruining the humanities?

For my latest 3QD post, I expanded on my answer to a Quora question: Is neuroscience ruining the humanities?

Here’s an excerpt:

“Neuroscience is ruining the humanities”. This was the provocative title of a recent article by Arthur Krystal in The Chronicle of Higher Education. To me the question was pure clickbait [1], since I am both a neuroscientist and an avid spectator of the drama and intrigue on the other side of the Great Academic Divide [2]. Given the sensational nature of many of the claims made on behalf of the cognitive and neural sciences, I am inclined to assure people in the humanities that they have little to fear. On close inspection, the bold pronouncements of fields like neuro-psychology, neuro-economics and neuro-aesthetics — the sorts of statements that mutate into TED talks and pop science books — often turn out to be wild extrapolations from a limited (and internally inconsistent) data set.

Unlike many of my fellow scientists, I have occasionally grappled with the weighty ideas that emanate from the humanities, even coming to appreciate elements of postmodern thinking. (Postmodern — aporic? — jargon is of course a different matter entirely.) I think the tapestry that is human culture is enriched by the thoughts that emerge from humanities departments, and so I hope the people in these departments can exercise some constructive skepticism when confronted with the latest trendy factoid from neuroscience or evolutionary psychology. Some of my neuroscience-related essays here at 3QD were written with this express purpose [3, 4].

The Chronicle article begins with a 1942 quote from New York intellectual Lionel Trilling: “What gods were to the ancients at war, ideas are to us”. This sets the tone for the mythic narrative that lurks beneath much of the essay, a narrative that can be crudely caricatured as follows. Once upon a time the University was a paradise of creative ferment. Ideas were warring gods, and the sparks that flew off their clashing swords kept the flames of wisdom and liberty alight. The faithful who erected intellectual temples to bear witness to these clashes were granted the boon of enlightened insight. But faith in the great ideas gradually faded, and so the golden age came to an end. The temple-complex of ideas began to decay from within, corroded by doubt. New prophets arose, who claimed that ideas were mere idols to be smashed, and that the temples were metanarrative prisons from which to escape. In this weak and bewildered state, the intellectual paradise was invaded. The worshipers were herded into a shining new temple built from the rubble of the old ones. And into this temple the invaders’ idols were installed: the many-armed goddess of instrumental rationality, the one-eyed god of essentialism, the cold metallic god of materialism…

The over-the-top quality of my little academia myth might give the impression that I think it is a tissue of lies. But perhaps more nuance is called for. As with all myths, I think there are elements of truth in this narrative.

Read the rest at 3 Quarks Daily: Is neuroscience really ruining the humanities?