Is there a ‘multi-dimensional universe’ in the brain? A case study in neurobabble

I was asked a question on Quora about a recent study that talked about high-dimensional ‘structures’ in the brain. It has been receiving an inordinate amount of hype, partly as a result of the journal’s own blog. Their headline reads:

‘Blue Brain Team Discovers a Multi-Dimensional Universe in Brain Networks’

As if the reference to a ‘universe’ weren’t bad enough, the last author, Henry Markram, says the following:

“We found a world that we had never imagined”.

The following passage in the blog post doubles down on the conflation:

“If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend.”

As will soon be clear, using words like ‘universe’ and ‘world’ in conjunction with the word ‘dimension’ creates a false impression that these researchers are dealing with spatial dimensions and/or how the brain represents them. This is simply not the case.

This is the question I was asked:

What exactly are the recently discovered multidimensional geometrical objects in the neuronal networks of the brain?

Here is what I wrote:

In this particular case the hype has gotten so out of control that the truth may already be irretrievably buried in mindless nonsense.

The key message is this: the word ‘dimension’ in this paper has nothing to do with the dimensions of space.

Here’s the paper that is receiving all the hype about higher dimensions:

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function

It came out recently in the journal Frontiers in Computational Neuroscience. The authors employ somewhat complex ideas from graph theory to analyze connectivity among neurons in a small segment of rat neocortex.

This is where the authors of the paper talk about ‘dimension’:

“Networks are often analyzed in terms of groups of nodes that are all-to-all connected, known as cliques. The number of neurons in a clique determines its size, or more formally, its dimension.” [Italics in original.]

So dimension here just refers to the number of neurons that are connected in an all-to-all network. In the area of rat neocortex they studied, they found that 11-neuron cliques were common.
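To make the terminology concrete, here is a minimal sketch of clique-finding — my own toy example, not the paper's code, and using an undirected graph where the paper works with *directed* cliques. The point is just that a "clique" is an all-to-all connected group, and its "dimension" tracks how many nodes it contains.

```python
from itertools import combinations

# Toy undirected graph: nodes 0-3 form a 4-clique; node 4 hangs off node 0.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 4)}
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def is_clique(nodes, adj):
    """True if every pair in `nodes` is connected (all-to-all)."""
    return all(v in adj[u] for u, v in combinations(nodes, 2))

def maximal_cliques(adj):
    """Brute force over all subsets; fine for toy graphs only
    (real analyses use algorithms like Bron-Kerbosch)."""
    nodes = sorted(adj)
    cliques = [set(c) for r in range(2, len(nodes) + 1)
               for c in combinations(nodes, r) if is_clique(c, adj)]
    # Keep only cliques not contained in a larger clique.
    return [c for c in cliques if not any(c < d for d in cliques)]

for c in maximal_cliques(adj):
    print(sorted(c), "-> clique of size", len(c))
```

In the paper's usage, the 11-neuron cliques they report are simply groups like the 4-clique above, but with 11 all-to-all connected neurons.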

The other concept they talk about is ‘cavities’:

“The manner in which directed cliques bind together can be represented geometrically. When directed cliques bind appropriately by sharing neurons, and without forming a larger clique due to missing connections, they form cavities (“holes,” “voids”) in this geometric representation, with high-dimensional cavities forming when high-dimensional (large) cliques bind together.”
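The paper's cavities are computed with tools from algebraic topology (homology), which is beyond a quick sketch, but the one-dimensional case has a simple graph-theoretic analogue that I can illustrate: the number of independent loops ("holes") in an undirected graph is |E| − |V| + (number of connected components). This toy version, my own and not the authors' method, covers only 1-D holes:

```python
# Toy analogue of a "cavity": the number of independent loops (1-D holes)
# in an undirected graph is |E| - |V| + (number of connected components).

def components(nodes, adj):
    """Count connected components via depth-first search."""
    seen, count = set(), 0
    for start in nodes:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj.get(u, ()))
    return count

def independent_cycles(edges):
    nodes = {n for e in edges for n in e}
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return len(edges) - len(nodes) + components(nodes, adj)

square = [(0, 1), (1, 2), (2, 3), (3, 0)]  # one loop -> one "hole"
tree   = [(0, 1), (1, 2), (1, 3)]          # no loops -> no holes
print(independent_cycles(square), independent_cycles(tree))
```

The paper's high-dimensional cavities generalize this loop-counting to voids enclosed by cliques of many neurons.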

The word dimension has a variety of meanings in mathematics and science. The idea of spatial dimension is most common: when we refer to 3D movies, this is the kind of dimension we are thinking of. The space we are familiar with has 3 dimensions, which we can think of in terms of X, Y, and Z coordinates, or in terms of up-down, left-right, and front-back directions.

The dimension of a network has nothing to do with spatial dimension. Instead, it has more to do with the number of degrees of freedom in a system. In physics, the number of degrees of freedom is the number of independent parameters or quantities that uniquely define a system.

So really, this paper is just talking about the statistics of local neuronal connectivity. They describe their findings as surprising when compared to other statistical models. All this means is that certain connectivity patterns are more common than one might expect under certain ‘random’ models. That’s what they mean here when they compare their results to ‘null models’:

“The numbers of high-dimensional cliques and cavities found in the reconstruction are also far higher than in null models, even in those closely resembling the biology-based reconstructed microcircuit, but with some of the biological constraints released. We verified the existence of high-dimensional directed simplices in actual neocortical tissue.”
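The null-model comparison boils down to a simple recipe: count a feature in the real network, count it in many random networks matched on some statistic (here, edge density), and see how far the real count sits from the random distribution. A hedged toy sketch of that recipe — my own illustration with an Erdős–Rényi null and triangle (3-clique) counts, not the paper's far more elaborate models:

```python
import random
from itertools import combinations

def random_graph(n, p, rng):
    """Erdos-Renyi null model: each possible edge present with probability p."""
    return {e for e in combinations(range(n), 2) if rng.random() < p}

def count_triangles(n, edges):
    """Triangles = 3-cliques, the smallest nontrivial all-to-all groups."""
    return sum(1 for a, b, c in combinations(range(n), 3)
               if {(a, b), (a, c), (b, c)} <= edges)

rng = random.Random(0)
n = 12

# A 'structured' toy network: two densely wired clusters plus one bridge edge.
cluster1 = set(combinations(range(0, 6), 2))
cluster2 = set(combinations(range(6, 12), 2))
observed = cluster1 | cluster2 | {(5, 6)}

p = len(observed) / (n * (n - 1) / 2)   # match the overall edge density
null_counts = [count_triangles(n, random_graph(n, p, rng)) for _ in range(200)]

print("observed triangles:", count_triangles(n, observed))
print("null-model mean:", sum(null_counts) / len(null_counts))
```

The structured network has far more triangles than density-matched random graphs, which is the shape of the paper's claim: cliques are over-represented relative to the null.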

Outside of the narrow community of computational neuroscientists who use graph theory, these results are interesting but hardly ground-breaking. Moreover, as far as I can tell these findings have no definitive functional implications. (There are some implications for network synchrony, but in my opinion synchrony has itself not been clearly linked with higher-level concepts of function.)

Neural network modelers assume all kinds of connectivity patterns that deviate from pure ‘randomness’, so such findings aren’t particularly surprising.

So I am baffled by the hype this research is getting. It strikes me that extremely lazy science journalism has collided with opportunistic PR practices.

Given what I’ve explained, I hope it’s clear that these kinds of headlines are profoundly — almost maliciously — misleading.

In fact, given that the brain contains around 80–100 billion neurons, we might consider 11 dimensions to be rather low for sub-networks, if we remind ourselves that dimension in this case simply means the number of neurons that are connected to each other.

What neuroscience too often neglects: Behavior

A Quora conversation led me to a recent paper in Neuron that highlights a very important problem with a lot of neuroscience research: there is insufficient attention paid to the careful analysis of behavior. The paper is not quite a call to return to behaviorism, but it is an invitation to consider that the pendulum has swung too far in the opposite direction, towards ‘blind’ searches for neural correlates. The paper is a wonderful big picture critique, so I’d like to just share some excerpts.


“Neuroscience is replete with cases that illustrate the fundamental epistemological difficulty of deriving processes from processors. For example, in the case of the roundworm (Caenorhabditis elegans), we know the genome, the cell types, and the connectome—every cell and its connections (Bargmann, 1998; White et al., 1986). Despite this wealth of knowledge, our understanding of how all this structure maps onto the worm’s behavior remains frustratingly incomplete.”

“New technologies have enabled the acquisition of massive and intricate datasets, and the means to analyze them have become concomitantly more complex. This in turn has led to a need for experts in computation and data analysis, with a reduced emphasis on organismal-level thinkers who develop detailed functional analyses of behavior, its developmental trajectory, and its evolutionary basis. Deep and thorny questions like “what would even count as an explanation in this context,” “what is a mechanism for the behavior we are trying to understand,” and “what does it mean to understand the brain” get sidelined. The emphasis in neuroscience has transitioned from these larger scope questions to the development of technologies, model systems, and the approaches needed to analyze the deluge of data they produce. Technique-driven neuroscience could be considered an example of what is known as the substitution bias: “[…] when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution” (Kahneman, 2011, p. 12).”

This next excerpt raises an important issue with interpretations of mirror neuron studies. (I also have my own little rant about mirror neuron “theory”.)

“Interpretation then has the following logic: as neurons can be decoded for intention in the first person, and these same neurons decoded for the same intention in the third person, then activation of the mirror neurons can be interpreted as meaning that the primate has understood the intention of the primate it is watching. The problem with this attempt to posit an algorithm for “understanding” based on neuronal responses is that no independent behavioral experiment is done to show evidence that any kind of understanding is actually occurring, understanding that could then be correlated with the mirror neurons. This is a key error in our view: behavior is used to drive neuronal activity but no either/or behavioral hypothesis is being tested per se. Thus, an interpretation is being mistaken for a result; namely, that the mirror neurons understand the other individual.”

Here the authors talk about the importance of emergence as a bridge between neurons and behavior:

“The phenomenon at issue here, when making a case for recording from populations of neurons or characterizing whole networks, is emergence—neurons in their aggregate organization cause effects that are not apparent in any single neuron. Following this logic, however, leads to the conclusion that behavior itself is emergent from aggregated neural circuits and therefore should also be studied in its own right. An example of an emergent behavior that can only be understood at the algorithmic level, which in turn can only be determined by studying the emergent behavior itself, is flocking in birds. First one has to observe the behavior and then one can begin to test simple rules that will lead to reproduction of the behavior, in this case best done through simulation. The rules are simple—for example, one of them is “steer to average heading of neighbors” (Reynolds, 1987). Clearly, observing or dissecting an individual bird, or even several birds, could never derive such a rule. Substitute flocking with a behavior like reaching, and birds for neurons, and it becomes clear how adopting an overly reductionist approach can hinder understanding.”
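The flocking rule quoted above really is only a few lines of code. Here is a minimal sketch of the alignment rule — my own toy version, not Reynolds’ original boids implementation, with the neighbor structure and update rate made up for illustration:

```python
import math

def align(headings, neighbors_of, rate=0.5):
    """One step of the alignment rule: each bird turns partway
    toward the average heading (in radians) of its neighbors."""
    new = []
    for i, h in enumerate(headings):
        nbrs = neighbors_of[i]
        if not nbrs:
            new.append(h)
            continue
        # Average angles via unit vectors so wrap-around is handled correctly.
        x = sum(math.cos(headings[j]) for j in nbrs)
        y = sum(math.sin(headings[j]) for j in nbrs)
        target = math.atan2(y, x)
        # Turn a fraction of the (signed, wrapped) angular difference.
        new.append(h + rate * math.atan2(math.sin(target - h),
                                         math.cos(target - h)))
    return new

# Three birds heading in different directions; everyone neighbors everyone.
headings = [0.0, 1.0, 2.0]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(20):
    headings = align(headings, neighbors)
print(headings)  # the headings converge toward a common direction
```

No individual bird contains "flock in a common direction" anywhere in its rule; coherence emerges from the repeated local updates — which is exactly the authors' point about studying behavior at its own level.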

Krakauer, John W., Asif A. Ghazanfar, Alex Gomez-Marin, Malcolm A. MacIver, and David Poeppel. “Neuroscience needs behavior: correcting a reductionist Bias.” Neuron 93, no. 3 (2017): 480-490. [Paywalled]

My main criticism of this paper might be that their proposed solution — a separation between analysis of behavior and analysis of neural data, with behavioral analysis ideally happening first — might be too rigid, and also might leave the behavioral analysis somewhat underconstrained. It is definitely important to have a clear behavioral hypothesis if you are running a behavioral study, even if it is part of a larger study of neural data. But the actual process of understanding might require a lot more ‘cross-talk’. It may not always be useful to come up with a high-level analysis of behavior in isolation from neural data. There is no guarantee that the high-level analysis will be accurate: there may be multiple high-level models that correspond to the same behavioral data. So we might need to add a third circle to their Figure 1: one for the space of behavioral explanations.

“Conscious realism”: a new way to think about reality (or the lack thereof?)


Interesting interview in the Atlantic with cognitive scientist Donald D. Hoffman:

The Case Against Reality

“I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.”


“Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.”


“As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.”

I don’t agree with everything in the article (especially the quantum stuff) but I think many people interested in consciousness and metaphysics will find plenty of food for thought here:

The Case Against Reality

Also, the “conscious agents all the way down” is the exact position I was criticizing in a recent 3QD essay:

3quarksdaily: Persons all the way down: On viewing the scientific conception of the self from the inside out

The diagram above is from a science fiction story I was working on, back when I was a callow youth. It is closely related to the idea of a network of conscious agents. Here’s another ‘version’ of it.


Not sure why I made it look so morbid. 🙂

The Emotional Gatekeeper — a computational model of emotional attention

My paper is finally out in PLoS Computational Biology. It’s an open access journal, so everyone can read it:

The Emotional Gatekeeper: A Computational Model of Attentional Selection and Suppression through the Pathway from the Amygdala to the Inhibitory Thalamic Reticular Nucleus

Here’s the Author Summary, which is a simplified version of the abstract:

“Emotional experiences grab our attention. Information about the emotional significance of events helps individuals weigh opportunities and dangers to guide goal-directed behavior, but may also lead to irrational decisions when the stakes are perceived to be high. Which neural circuits underlie these contrasting outcomes? A recently discovered pathway links the amygdala—a key center of the emotional system—with the inhibitory thalamic reticular nucleus (TRN) that filters information between the thalamus and cortex. We developed a neural network model—the Emotional Gatekeeper—that demonstrates how the newly discovered pathway from the amygdala to TRN highlights relevant information to help assess threats and opportunities. The model also shows how the amygdala-TRN pathway can lead normal individuals to discount neutral but useful information in highly charged emotional situations, and predicts that disruption of specific nodes in this circuit underlies distinct psychiatric disorders.”
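The gating idea in that summary can be caricatured in a few lines. To be clear, this is my own back-of-the-envelope sketch of the general concept — an inhibitory gate that attenuates sensory channels unless a salience signal releases the inhibition — and not the published model, whose architecture and dynamics are far richer (see the paper for the real thing):

```python
def gatekeeper(thalamic_inputs, salience, base_inhibition=0.8):
    """Caricature of inhibitory gating: a TRN-like gate attenuates each
    channel; an amygdala-like salience signal (0..1 per channel) lifts
    the gate for emotionally relevant channels, letting them through.
    Toy illustration only -- not the published model."""
    relayed = []
    for x, s in zip(thalamic_inputs, salience):
        inhibition = base_inhibition * (1.0 - s)  # salience releases inhibition
        relayed.append(x * (1.0 - inhibition))
    return relayed

# Two equally strong sensory inputs; only the second is emotionally salient.
print(gatekeeper([1.0, 1.0], [0.0, 0.9]))
```

Even in this cartoon you can see the double edge described in the summary: the same mechanism that highlights threats also suppresses the neutral-but-useful channel.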


Here’s the full citation:

John YJ, Zikopoulos B, Bullock D, Barbas H (2016) The Emotional Gatekeeper: A Computational Model of Attentional Selection and Suppression through the Pathway from the Amygdala to the Inhibitory Thalamic Reticular Nucleus. PLoS Comput Biol 12(2): e1004722. doi:10.1371/journal.pcbi.1004722

Perception is a creative act: On the connection between creativity and pattern recognition

An answer I wrote to the Quora question Does the human brain work solely by pattern recognition?:

Great question! Broadly speaking, the brain does two things: it processes ‘inputs’ from the world and from the body, and generates ‘outputs’ to the muscles and internal organs.

Pattern recognition shows up most clearly during the processing of inputs. Recognition allows us to navigate the world, seeking beneficial/pleasurable experiences and avoiding harmful/negative experiences.* So pattern recognition must also be supplemented by associative learning: humans and animals must learn how patterns relate to each other, and to their positive and negative consequences.

And patterns must not simply be recognized: they must also be categorized. We are bombarded by patterns all the time. The only way to make sense of them is to categorize them into classes that can all be treated similarly. We have one big category for ‘snake’, even though the sensory patterns produced by specific snakes can be quite different. Pattern recognition and classification are closely intertwined, so in what follows I’m really talking about both.
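The collapsing of varied patterns into one category can be illustrated with the simplest possible classifier — my own toy, with entirely made-up feature vectors: assign a new pattern to whichever category ‘prototype’ (average) it lies closest to.

```python
import math

def nearest_prototype(pattern, prototypes):
    """Assign a feature vector to the category with the closest
    prototype -- a toy stand-in for pattern classification."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(pattern, prototypes[label]))

# Hypothetical 2-D features, e.g. (elongation, limb count) -- invented
# purely for illustration.
prototypes = {"snake": (9.0, 0.0), "dog": (3.0, 4.0)}

print(nearest_prototype((8.0, 0.5), prototypes))  # a cobra-ish pattern
print(nearest_prototype((7.0, 1.0), prototypes))  # a python-ish pattern
```

Quite different sensory patterns land in the same ‘snake’ bin, which is the point: recognition and categorization work together to let dissimilar inputs be treated similarly.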

Creativity does have a connection with pattern recognition. One of the most complex and fascinating manifestations of pattern recognition is the process of analogy and metaphor. People often draw analogies between seemingly disparate topics: this requires creative use of the faculty of pattern recognition. Flexible intelligence depends on the ability to recognize patterns of similarity between phenomena. This is a particularly useful skill for scientists, teachers, artists, writers, poets and public thinkers, but it shows up all over the place. Many internet memes, for example, involve drawing analogies: seeing the structural connections between unrelated things.

One of my favourites is a meme on twitter called #sameguy. It started as a game of uploading pictures of two celebrities that resemble each other, followed by the hashtag #sameguy. But it evolved to include abstract ideas and phenomena that are the “same” in some respect. Making cultural metaphors like this requires creativity, as does understanding them. One has to free one’s mind of literal-mindedness in order to temporarily ignore the ever-present differences between things and focus on the similarities.

Here’s a blog that collects #sameguy submissions: Same Guy

On twitter you sometimes come across more imaginative, analogical #sameguy posts: #sameguy – Twitter Search

The topic of metaphor and analogy is one of the most fascinating aspects of intelligence, in my opinion. I think it’s far more important than coming up with theories about ‘consciousness’. 🙂 Check out this answer:

Why are metaphors and allusions used while writing?
(This Quora answer is a cross-post of a blog post I wrote: Metaphor: the Alchemy of Thought)

In one sense metaphor and analogy are central to scientific research. I’ve written about this here:

What are some of the most important problems in computational neuroscience?

Science: the Quest for Symmetry

This essay is tangentially related to the topic of creativity and patterns:

From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

* The brain’s outputs — commands to muscles and glands — are closely linked with pattern recognition too. What you choose to do depends on what you can do given your intentions, circumstances, and bodily configuration. The state that you and the universe happen to be in constrains what you can do, and so it is useful for the brain to recognize and categorize the state in order to mediate decision-making, or even non-conscious behavior. When you’re walking on a busy street, you rapidly process the pathways that are available to you. Even if you stumble, you can quickly and unconsciously act to minimize damage to yourself and others. Abilities of this sort suggest that pattern recognition is not purely a way to create an ‘image’ of the world, but also a central part of our ability to navigate it.


Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

Here’s an excerpt:
What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.

A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscienceʼs Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!
Read the whole thing at 3QD:

Fifty terms to avoid in psychology and psychiatry?

The excellent blog Mind Hacks shared a recent Frontiers in Psychology paper entitled “Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

As mentioned in the Mind Hacks post, the advice in this article may not always be spot-on, but it’s still worth reading. Here are some excerpts:

“(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public […] Nevertheless, the evidence for the chemical imbalance model is at best slim […] There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels.”

“(9) Genetically determined. Few if any psychological capacities are genetically “determined”; at most, they are genetically influenced. Even schizophrenia, which is among the most heritable of all mental disorders, appears to have a heritability of between 70 and 90% as estimated by twin designs”

“(12) Hard-wired. The term “hard-wired” has become enormously popular in press accounts and academic writings in reference to human psychological capacities that are presumed by some scholars to be partially innate, such as religion, cognitive biases, prejudice, or aggression. For example, one author team reported that males are more sensitive than females to negative news stories and conjectured that males may be “hard wired for negative news” […] Nevertheless, growing data on neural plasticity suggest that, with the possible exception of inborn reflexes, remarkably few psychological capacities in humans are genuinely hard-wired, that is, inflexible in their behavioral expression”

“(27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” Most often, they describe this method as a hypothetical-deductive recipe, in which scientists begin with an overarching theory, deduce hypotheses (predictions) from that theory, test these hypotheses, and examine the fit between data and theory. If the data are inconsistent with the theory, the theory is modified or abandoned. It’s a nice story, but it rarely works this way”

“(42) Personality type. Although typologies have a lengthy history in personality psychology harkening back to the writings of the Roman physician Galen and later, Swiss psychiatrist Carl Jung, the assertion that personality traits fall into distinct categories (e.g., introvert vs. extravert) has received minimal scientific support. Taxometric studies consistently suggest that normal-range personality traits, such as extraversion and impulsivity, are underpinned by dimensions rather than taxa, that is, categories in nature”

Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6, 1100.