The Emotional Gatekeeper — a computational model of emotional attention

My paper is finally out in PLoS Computational Biology. It’s an open access journal, so everyone can read it:

The Emotional Gatekeeper: A Computational Model of Attentional Selection and Suppression through the Pathway from the Amygdala to the Inhibitory Thalamic Reticular Nucleus

Here’s the Author Summary, which is a simplified version of the abstract:

“Emotional experiences grab our attention. Information about the emotional significance of events helps individuals weigh opportunities and dangers to guide goal-directed behavior, but may also lead to irrational decisions when the stakes are perceived to be high. Which neural circuits underlie these contrasting outcomes? A recently discovered pathway links the amygdala—a key center of the emotional system—with the inhibitory thalamic reticular nucleus (TRN) that filters information between the thalamus and cortex. We developed a neural network model—the Emotional Gatekeeper—that demonstrates how the newly discovered pathway from the amygdala to TRN highlights relevant information to help assess threats and opportunities. The model also shows how the amygdala-TRN pathway can lead normal individuals to discount neutral but useful information in highly charged emotional situations, and predicts that disruption of specific nodes in this circuit underlies distinct psychiatric disorders.”

[Figure 1 from the paper]

Here’s the full citation:

John YJ, Zikopoulos B, Bullock D, Barbas H (2016) The Emotional Gatekeeper: A Computational Model of Attentional Selection and Suppression through the Pathway from the Amygdala to the Inhibitory Thalamic Reticular Nucleus. PLoS Comput Biol 12(2): e1004722. doi:10.1371/journal.pcbi.1004722
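The published model is a full neural network simulation, but the gist of the gating idea can be caricatured in a few lines. The sketch below is my own toy under made-up parameters — the channel names, the arousal signal, and the bias term are all my assumptions, not the paper's: amygdala-driven arousal adds TRN inhibition to a neutral thalamic channel, so the emotionally salient channel wins access to cortex.

```python
# A toy caricature of inhibitory gating, written for illustration only --
# this is NOT the model published in the paper, and all parameters are made up.
# Idea: each thalamic relay channel is inhibited by a paired TRN unit, and an
# 'amygdala' arousal signal adds extra TRN inhibition to the neutral channel,
# so emotionally salient input reaches 'cortex' while neutral input is filtered.

def relay_to_cortex(salient_input, neutral_input, arousal,
                    baseline_inhibition=0.2, amygdala_bias=0.8):
    """Toy steady-state output of two thalamic relay channels (rectified linear)."""
    trn_salient = baseline_inhibition
    trn_neutral = baseline_inhibition + amygdala_bias * arousal
    to_cortex_salient = max(0.0, salient_input - trn_salient)
    to_cortex_neutral = max(0.0, neutral_input - trn_neutral)
    return to_cortex_salient, to_cortex_neutral

# Equally strong salient and neutral inputs, at increasing emotional arousal:
for arousal in (0.0, 0.5, 1.0):
    s, n = relay_to_cortex(salient_input=1.0, neutral_input=1.0, arousal=arousal)
    print(f"arousal={arousal:.1f}  salient->cortex={s:.2f}  neutral->cortex={n:.2f}")
# At low arousal both channels get through; at high arousal the neutral channel
# is suppressed -- the 'discounting of neutral but useful information' that the
# Author Summary describes.
```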

Is it possible for the Internet to one day gain consciousness?

A recent Quora answer I wrote:

Sometimes I wonder if Quora bots are conscious! 🙂
I often think about whether the internet could become sentient… and also whether it is already! But the most important question is this: how would we tell one way or the other? Perhaps each of us is like a neuron in the internet’s hive brain.
Neurons and brains are separated by a gulf of scale, structure, and complexity. How could a neuron ‘know’ that the brain it is part of is conscious? How could a brain know if a neuron (or group of neurons) is conscious? It may be an unbridgeable gap. And the same sort of gap may exist between humans and the internet. To paraphrase Wittgenstein, if the internet could talk we would not understand it.
In any case, the internet doesn’t even have a ‘mouth’ or a central communication device. How do we decide what the internet is ‘saying’? I could imagine a future in which ‘analysts’ read into the internet’s dynamic trajectories in the way astrologers read into the stars’ trajectories.
Sometimes I think of consciousness as an irreducibly social phenomenon. Consciousness may be the ‘fire’ produced by the ‘friction’ between different intelligent agents that each have partial knowledge of the world. Perhaps the test of whether the internet is conscious involves encountering an alien internet. Perhaps when civilizations from two different planets interact, their ‘planetary consciousnesses’ (or internets) interact in a way that their inhabitants only have a dim awareness of.


Is it possible for the Internet to one day gain consciousness?

Can science account for taste?

I was asked the question “From a scientific point of view, how are our tastes created?” Here’s my answer.

“There’s no accounting for taste!”

Typically we explain taste — in food, music, movies, art —  in terms of culture, upbringing, and sheer chance. In recent years there have been several attempts to explain taste from biological perspectives: either neuroscience or evolutionary psychology. In my opinion these types of explanations are vague enough to always sound true, but they rarely contain enough detail to account for the specific tastes of individuals or groups. Still, there’s much food for thought in these scientific proto-theories of taste and aesthetics.

[An early aesthete?]

Let’s look at the evolutionary approach first. An evolutionary explanation of taste assumes that human preferences arise from natural selection. We like salt and sugar and fat, according to this logic, because it was beneficial for our ancestors to seek out foods with these tastes. We like landscape scenes involving greenery and water bodies because such landscapes were promising environments for our wandering ancestors. This line of thinking is true as far as it goes, but it doesn’t go that far. After all, there are plenty of people who don’t much care for deep-fried salty-sweet foods. And many people who take art seriously quickly tire of clichéd landscape paintings.

[Are you a Homo sapiens? Then you must love this. 😉 ]

Evolutionary psychology can provide broad explanations for why humans as a species tend to like certain things more than others, but it really provides us with no map for navigating differences in taste between individuals and groups. (These obvious, glaring limitations of evolutionary psychology have not prevented the emergence of a cottage industry of pop science books that explain everything humans do as consequences of the incidents and accidents that befell our progenitor apes on the savannahs of Africa.)

Explanations involving the neural and cognitive sciences get closer to what we are really after — an explanation of differences in taste — but not by much. Neuroscientific explanations are essentially halfway between cultural theories and evolutionary theories. We like things because the ‘pleasure centers’ in our brains ‘light up’ when we encounter them. And the pleasure centers are shaped by experience (on the time scale of a person’s life), and by natural selection (on the time scale of the species). Whatever we inherit because of natural selection is presumably common to all humans, so differences in taste must be traced to differences in experience, which become manifest in the brain as differences in neural connectivity and activity. If your parents played the Beatles for you as a child, and conveyed their pleasure to you, then associative learning might cause the synapses in your brain that link sound patterns with emotional reactions to be gradually modified, so that playing ‘Hey Jude’ now triggers a cascade of neural events that generate the subjective feeling of enjoyment.

[What’s not to love about the Beatles?]
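As a toy illustration of that associative-learning story — a textbook delta-rule (Rescorla–Wagner-style) update, not the brain's actual learning rule; the numbers and names are made up — here is how repeated pairings of a song with a positive emotional signal could gradually strengthen an association:

```python
# A hedged toy of associative learning: a simple delta-rule update, not a model
# of real synapses. Repeated pairings of a song with a positive emotional
# signal strengthen an association weight, so the song alone later evokes a
# positive response.
def update(weight, stimulus_present, reward, lr=0.2):
    """Delta rule: move the weight toward the observed outcome when the stimulus is present."""
    if stimulus_present:
        prediction = weight
        weight += lr * (reward - prediction)
    return weight

w_hey_jude = 0.0
for trial in range(20):
    # Childhood: 'Hey Jude' is repeatedly paired with the parents' evident pleasure.
    w_hey_jude = update(w_hey_jude, stimulus_present=True, reward=1.0)

print(f"learned emotional response to 'Hey Jude': {w_hey_jude:.2f}")  # approaches 1.0
```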

But there is so much more to the story of enjoyment. Not everyone likes their parents’ music. In English-speaking countries there is a decades-old stereotype of the teenager who seeks out music to piss off his or her parents. And many of us have a friend who insists on listening to music that no one else seems to have heard of. What is the neural basis of this fascinating phenomenon?

We must now enter extremely speculative territory. One of the most thought-provoking ‘theories’ of aesthetics that I have come across was proposed by a machine learning researcher named Jürgen Schmidhuber. He has a provocative way of summing up his theory: Interestingness is the first derivative of beauty.

What he means is that we are not simply drawn to things that are beautiful or pleasurable. We are also drawn to things that are interesting: things that somehow intrigue us and capture our attention. These things, according to Schmidhuber, entice us with the possibility of enhancing our categories of experience. In his framework, humans and animals are constantly seeking to understand the environment, and in order to do this, they must be drawn to the edge of what they already know. Experiences that are already fully understood offer no opportunity for new learning. Experiences that are completely beyond comprehension are similarly useless. But experiences that are in the sweet spot of interestingness are neither boringly familiar nor bafflingly alien. By seeking out experiences in this ‘border territory’, we expand our horizons, gaining a new understanding of the world.

For example, I’m a Beatles fan, but I don’t listen to the Beatles that often. I am, however, intrigued by music that is ‘Beatlesque’: such music can lead me in new directions, and also reflect back on the Beatles, giving me a deeper appreciation of their music.
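Here is a deliberately crude sketch of the learning-progress idea — my own toy, not Schmidhuber's actual compression-progress formalism: a one-parameter predictor, already adapted to a familiar kind of signal, rates each new stimulus by how much its prediction error drops when it learns from it.

```python
# Toy "interestingness as learning progress": all signal choices and names are
# illustrative, and the predictor is as simple as possible (one weight).
import random

random.seed(1)

def ar1(a, n=300, noise=0.3):
    """Generate an AR(1) sequence x[t] = a*x[t-1] + gaussian noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + random.gauss(0, noise)
        out.append(x)
    return out

def mse(seq, w):
    """Mean squared error of the one-step predictor x_hat[t] = w * x[t-1]."""
    errs = [(seq[t] - w * seq[t - 1]) ** 2 for t in range(1, len(seq))]
    return sum(errs) / len(errs)

def train(seq, w, lr=0.05, epochs=5):
    """Fit the predictor weight by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for t in range(1, len(seq)):
            w += lr * (seq[t] - w * seq[t - 1]) * seq[t - 1]
    return w

# The predictor's "familiar world": a smoothly correlated signal it has already learned.
w = train(ar1(0.9), w=0.0)

stimuli = {
    "familiar (same old structure)": ar1(0.9),
    "novel but learnable structure": ar1(-0.9),   # an oscillating signal
    "incomprehensible noise": [random.gauss(0, 0.7) for _ in range(300)],
}

for name, seq in stimuli.items():
    before = mse(seq, w)                  # prediction error with the current model
    after = mse(seq, train(seq, w))       # prediction error after learning from the stimulus
    print(f"{name:30s}  learning progress = {before - after:.2f}")

# Typical outcome: the novel-but-learnable signal yields the largest error drop,
# the familiar signal yields almost none, and pure noise allows only a modest
# one-off improvement before hitting its irreducible floor.
```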

The basic intuition of this theory is well-supported by research in animals and humans. Animals all have some baseline level of curiosity. Lab rats will thoroughly investigate a new object introduced into their cages. Novelty seems to have a gravitational pull for organisms.

But again, there are differences even in this tendency. Some people are perfectly content to eat the same foods over and over again, or listen to the same songs or artists. At the other extreme we find the freaks, the hipsters, the critics, the obsessives, and all the assorted avant garde seekers of “the Shock of the New”.

Linking back to evolutionary speculation, all we can really say is that even the desire for novelty is a variable trait in human populations. (Actually it’s multiple traits: I am far more adventurous when it comes to music than food.) Perhaps a healthy society needs its ‘conservatives’ and its ‘progressives’ in the domain of taste and aesthetic experience. Group selection  — natural selection operating on tribes, societies and cultures — is still somewhat controversial in mainstream evolutionary biology, so to go any further in our theories of taste we have to be willing to wander on the wild fringes of scientific thought…

… those fringes are, after all, where everything interesting happens! 🙂

For more speculation on interestingness, beauty, and the pull of the not-completely-familiar, see this essay I wrote, where I go into more detail about Schmidhuber’s theory of interestingness:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

This has nothing to do with science, but I find this David Mitchell video on taste very funny:

After writing this answer I realized that the questioner was most probably asking about gustation — meaning, the sense of taste. Oh well.

Where do thoughts come from?

Here’s my answer to a recent Quora question: Where do our thoughts come from?

Thoughts come from nowhere! And from everywhere! I think both answers contain an element of truth.

Subjectively, our thoughts come from nowhere: they just pop into our heads, or emerge in the form of words leaving our mouths.

Objectively, we can say that thoughts emerge from neural processes, and that neural processes come from everywhere. What I mean by this is that the forms and dynamics of thought are influenced by everything that has a causal connection with you, your society, and your species.

We don’t know exactly how thoughts emerge from the activity of neurons — or even how to define what a thought is in biological terms (!)— but there is plenty of indirect evidence to support the general claim that the brain is where thoughts emerge.

The neuronal patterns that mediate and enable thought and behavior have proximal and distal causes.

The proximal causes are the stimuli and circumstances we experience. These experiences have causal impacts on our bodies, and are also partly caused by our bodies. The forces inside and outside the body become manifest in the brain as ‘clouds’ of information. In the right circumstances these nebulous patterns can condense into streams of thought. We can add to these identifiable causes the mysterious element of randomness: that seemingly ever-present “ghost in the machine” that makes complex processes such as life fundamentally unpredictable. Perhaps randomness is what provides the ‘seeds’ around which the condensation of thoughts can occur.

The distal causes are our experiential history and our evolutionary pre-history. Our experiential history consists of the things we’ve learned, consciously and unconsciously, and the various events that have shaped our bodies and our neural connections in large and small ways. Our evolutionary pre-history is essentially the experiential history of our species, and more generally of life itself, going back all the way to the first single-celled organism. The traits of a species are a sort of historical record of successes and failures. And going even further, life ultimately takes its particular forms because of the possibilities inherent in matter — and this takes us all the way to the formation of stars and planets.

Perception is a creative act: On the connection between creativity and pattern recognition

An answer I wrote to the Quora question Does the human brain work solely by pattern recognition?:

Great question! Broadly speaking, the brain does two things: it processes ‘inputs’ from the world and from the body, and generates ‘outputs’ to the muscles and internal organs.

Pattern recognition shows up most clearly during the processing of inputs. Recognition allows us to navigate the world, seeking beneficial/pleasurable experiences and avoiding harmful/negative experiences.* But recognizing a pattern is only useful if we also know what it predicts, so pattern recognition must be supplemented by associative learning: humans and animals must learn how patterns relate to each other, and to their positive and negative consequences.

And patterns must not simply be recognized: they must also be categorized. We are bombarded by patterns all the time. The only way to make sense of them is to categorize them into classes that can all be treated similarly. We have one big category for ‘snake’, even though the sensory patterns produced by specific snakes can be quite different. Pattern recognition and classification are closely intertwined, so in what follows I’m really talking about both.
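To make the recognition-plus-categorization point concrete, here is a deliberately simple, hypothetical sketch — a nearest-prototype classifier, not a claim about how the brain does it — in which quite different ‘snake’ patterns all land in the single category ‘snake’.

```python
# A hypothetical nearest-prototype toy: different input patterns map to the
# same category if they sit closest to that category's stored prototype.

PROTOTYPES = {
    "snake": [1.0, 0.0, 0.9],   # made-up feature dimensions, e.g. elongation, legs, slither
    "cat":   [0.3, 1.0, 0.2],
}

def categorize(pattern):
    """Return the category whose prototype is nearest in squared distance."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(PROTOTYPES, key=lambda name: dist(pattern, PROTOTYPES[name]))

# Three rather different 'snakes' still map onto the single category 'snake'.
for snake in ([1.1, 0.0, 0.8], [0.8, 0.1, 1.0], [1.3, 0.0, 0.7]):
    print(categorize(snake))
```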

Creativity does have a connection with pattern recognition. One of the most complex and fascinating manifestations of pattern recognition is the process of analogy and metaphor. People often draw analogies between seemingly disparate topics: this requires creative use of the faculty of pattern recognition. Flexible intelligence depends on the ability to recognize patterns of similarity between phenomena. This is a particularly useful skill for scientists, teachers, artists, writers, poets and public thinkers, but it shows up all over the place. Many internet memes, for example, involve drawing analogies: seeing the structural connections between unrelated things.

One of my favourites is a meme on twitter called #sameguy. It started as a game of uploading pictures of two celebrities that resemble each other, followed by the hashtag #sameguy. But it evolved to include abstract ideas and phenomena that are the “same” in some respect. Making cultural metaphors like this requires creativity, as does understanding them. One has to free one’s mind of literal-mindedness in order to temporarily ignore the ever-present differences between things and focus on the similarities.

Here’s a blog that collects #sameguy submissions: Same Guy

On twitter you sometimes come across more imaginative, analogical #sameguy posts: #sameguy – Twitter Search


The topic of metaphor and analogy is one of the most fascinating aspects of intelligence, in my opinion. I think it’s far more important than coming up with theories about ‘consciousness’. 🙂 Check out this answer:

Why are metaphors and allusions used while writing?
(This Quora answer is a cross-post of a blog post I wrote: Metaphor: the Alchemy of Thought)

In one sense metaphor and analogy are central to scientific research. I’ve written about this here:

What are some of the most important problems in computational neuroscience?

Science: the Quest for Symmetry

This essay is tangentially related to the topic of creativity and patterns:

From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art


* The brain’s outputs — commands to muscles and glands — are closely linked with pattern recognition too. What you choose to do depends on what you can do given your intentions, circumstances, and bodily configuration. The state that you and the universe happen to be in constrains what you can do, and so it is useful for the brain to recognize and categorize that state in order to mediate decision-making, or even non-conscious behavior. When you’re walking on a busy street, you rapidly process the pathways that are available to you. Even if you stumble, you can quickly and unconsciously act to minimize damage to yourself and others. Abilities of this sort suggest that pattern recognition is not purely a way to create an ‘image’ of the world, but also a central part of our ability to navigate it.

Does the human brain work solely by pattern recognition?

Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

Here’s an excerpt:
What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.
[…]
A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscienceʼs Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!
Read the whole thing at 3QD:

Fifty terms to avoid in psychology and psychiatry?

The excellent blog Mind Hacks shared a recent Frontiers in Psychology paper entitled “Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

As mentioned in the Mind Hacks post, the advice in this article may not always be spot-on, but it’s still worth reading. Here are some excerpts:

“(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public […] Nevertheless, the evidence for the chemical imbalance model is at best slim […] There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels.”

“(9) Genetically determined. Few if any psychological capacities are genetically “determined”; at most, they are genetically influenced. Even schizophrenia, which is among the most heritable of all mental disorders, appears to have a heritability of between 70 and 90% as estimated by twin designs”

“(12) Hard-wired. The term “hard-wired” has become enormously popular in press accounts and academic writings in reference to human psychological capacities that are presumed by some scholars to be partially innate, such as religion, cognitive biases, prejudice, or aggression. For example, one author team reported that males are more sensitive than females to negative news stories and conjectured that males may be “hard wired for negative news” […] Nevertheless, growing data on neural plasticity suggest that, with the possible exception of inborn reflexes, remarkably few psychological capacities in humans are genuinely hard-wired, that is, inflexible in their behavioral expression”

“(27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” Most often, they describe this method as a hypothetical-deductive recipe, in which scientists begin with an overarching theory, deduce hypotheses (predictions) from that theory, test these hypotheses, and examine the fit between data and theory. If the data are inconsistent with the theory, the theory is modified or abandoned. It’s a nice story, but it rarely works this way”

“(42) Personality type. Although typologies have a lengthy history in personality psychology harkening back to the writings of the Roman physician Galen and later, Swiss psychiatrist Carl Jung, the assertion that personality traits fall into distinct categories (e.g., introvert vs. extravert) has received minimal scientific support. Taxometric studies consistently suggest that normal-range personality traits, such as extraversion and impulsivity, are underpinned by dimensions rather than taxa, that is, categories in nature”

Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6, 1100.