Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

Here’s an excerpt:
What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.
[…]
A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscienceʼs Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!
Read the whole thing at 3QD:

Fifty terms to avoid in psychology and psychiatry?

The excellent blog Mind Hacks shared a recent Frontiers in Psychology paper entitled “Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases”.

As mentioned in the Mind Hacks post, the advice in this article may not always be spot-on, but it’s still worth reading. Here are some excerpts:

“(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public […] Nevertheless, the evidence for the chemical imbalance model is at best slim […] There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels.”

“(9) Genetically determined. Few if any psychological capacities are genetically “determined”; at most, they are genetically influenced. Even schizophrenia, which is among the most heritable of all mental disorders, appears to have a heritability of between 70 and 90% as estimated by twin designs”

“(12) Hard-wired. The term “hard-wired” has become enormously popular in press accounts and academic writings in reference to human psychological capacities that are presumed by some scholars to be partially innate, such as religion, cognitive biases, prejudice, or aggression. For example, one author team reported that males are more sensitive than females to negative news stories and conjectured that males may be “hard wired for negative news” […] Nevertheless, growing data on neural plasticity suggest that, with the possible exception of inborn reflexes, remarkably few psychological capacities in humans are genuinely hard-wired, that is, inflexible in their behavioral expression”

“(27) The scientific method. Many science textbooks, including those in psychology, present science as a monolithic “method.” Most often, they describe this method as a hypothetical-deductive recipe, in which scientists begin with an overarching theory, deduce hypotheses (predictions) from that theory, test these hypotheses, and examine the fit between data and theory. If the data are inconsistent with the theory, the theory is modified or abandoned. It’s a nice story, but it rarely works this way”

“(42) Personality type. Although typologies have a lengthy history in personality psychology harkening back to the writings of the Roman physician Galen and later, Swiss psychiatrist Carl Jung, the assertion that personality traits fall into distinct categories (e.g., introvert vs. extravert) has received minimal scientific support. Taxometric studies consistently suggest that normal-range personality traits, such as extraversion and impulsivity, are underpinned by dimensions rather than taxa, that is, categories in nature”

Lilienfeld, S. O., Sauvigné, K. C., Lynn, S. J., Cautin, R. L., Latzman, R. D., & Waldman, I. D. (2015). Fifty psychological and psychiatric terms to avoid: a list of inaccurate, misleading, misused, ambiguous, and logically confused words and phrases. Frontiers in Psychology, 6, 1100.

Is neuroscience really ruining the humanities?

For my latest 3QD post, I expanded on my answer to a Quora question: Is neuroscience ruining the humanities?


Here’s an excerpt:

“Neuroscience is ruining the humanities”. This was the provocative title of a recent article by Arthur Krystal in The Chronicle of Higher Education. To me the question was pure clickbait [1], since I am both a neuroscientist and an avid spectator of the drama and intrigue on the other side of the Great Academic Divide [2]. Given the sensational nature of many of the claims made on behalf of the cognitive and neural sciences, I am inclined to assure people in the humanities that they have little to fear. On close inspection, the bold pronouncements of fields like neuro-psychology, neuro-economics and neuro-aesthetics — the sorts of statements that mutate into TED talks and pop science books — often turn out to be wild extrapolations from a limited (and internally inconsistent) data set.

Unlike many of my fellow scientists, I have occasionally grappled with the weighty ideas that emanate from the humanities, even coming to appreciate elements of postmodern thinking. (Postmodern — aporic? — jargon is of course a different matter entirely.) I think the tapestry that is human culture is enriched by the thoughts that emerge from humanities departments, and so I hope the people in these departments can exercise some constructive skepticism when confronted with the latest trendy factoid from neuroscience or evolutionary psychology. Some of my neuroscience-related essays here at 3QD were written with this express purpose [3, 4].

The Chronicle article begins with a 1942 quote from New York intellectual Lionel Trilling: “What gods were to the ancients at war, ideas are to us”. This sets the tone for the mythic narrative that lurks beneath much of the essay, a narrative that can be crudely caricatured as follows. Once upon a time the University was a paradise of creative ferment. Ideas were warring gods, and the sparks that flew off their clashing swords kept the flames of wisdom and liberty alight. The faithful who erected intellectual temples to bear witness to these clashes were granted the boon of enlightened insight. But faith in the great ideas gradually faded, and so the golden age came to an end. The temple-complex of ideas began to decay from within, corroded by doubt. New prophets arose, who claimed that ideas were mere idols to be smashed, and that the temples were metanarrative prisons from which to escape. In this weak and bewildered state, the intellectual paradise was invaded. The worshipers were herded into a shining new temple built from the rubble of the old ones. And into this temple the invaders’ idols were installed: the many-armed goddess of instrumental rationality, the one-eyed god of essentialism, the cold metallic god of materialism…

The over-the-top quality of my little academia myth might give the impression that I think it is a tissue of lies. But perhaps more nuance is called for. As with all myths, I think there are elements of truth in this narrative.


Read the rest at 3 Quarks Daily: Is neuroscience really ruining the humanities?

Why we can’t anticipate what future science will look like

I was asked the following question on Quora:

What kind of information do we need to discover everything about memory in the brain and its mechanism?

I took the opportunity to recapitulate an excellent point made by Paul Feyerabend in his book Against Method, which I am currently reading.

Here’s my answer:

We’ll need to collect information at the genetic, synaptic, cellular, network, and behavioral levels (and perhaps even environmental and social levels), and integrate it all into a single picture of memory in action. In other words, neuroscientists are already more or less on the right track. Sometimes we know exactly what we’d like to study experimentally, but we lack the technical ability to do so. (For example, our non-invasive techniques for measuring human neural activity are extremely coarse-grained and indirect.)

But I don’t think it is possible to know in advance what specific kinds of data will prove decisive in the creation of a comprehensive theory of memory. Every new experiment can potentially throw up new theoretical questions. We can’t anticipate the evolution of a scientific research program, because we lack the very thing we are searching for: a theory that tells us what is important and what isn’t. If we already had a perfect theory, it wouldn’t be research.

We typically think of experiments and theories as completely separate entities. So we imagine that science involves a linear process like this:

observation -> theory -> new observation -> new theory ->

…and so on. But this doesn’t really capture how science actually proceeds. Think of it this way. Before we have a good theory, our observations may be contaminated by the old partially-successful theories. A theory — even a half-baked one — comes with its own ontology of what exists and what doesn’t. Experimentalists have their own working models and rules-of-thumb that tell them what is worth recording/analyzing and what isn’t. Some of these models and rules may prove wrong, once a good theory comes along. But before that theory comes along, we can’t say much about them. Theory and experiment are intertwined – each can reinforce (or refute) the other.

Philosophers have pointed out for a while now that experiments are not just true pictures of the world — they are intrinsically theory-laden. Theory goes into both the design and the analysis of experiments. This doesn’t mean they are not to be trusted. It only means that in the periods where there is no obviously successful theory, you cannot say which experiments will prove to be the building blocks of a future theory, and which will eventually prove to be wrong or in need of a fresh interpretation.

To sum this up: if we find ourselves in an unlit room we’ve never entered before, we have no choice but to fumble around in the dark until we find a light switch. We can’t anticipate our trajectory through the room — but after we find the light switch, every stumbling step and bruised toe can be retrospectively explained.

 

Does dopamine produce a feeling of bliss? On the chemical self, the social self, and reductionism.

Here’s the intro to my latest blog post at 3 Quarks Daily.


“The osmosis of neuroscience into popular culture is neatly symbolized by a phenomenon I recently chanced upon: neurochemical-inspired jewellery. It appears there is a market for silvery pendants shaped like molecules of dopamine, serotonin, acetylcholine, norepinephrine and other celebrity neurotransmitters. Under pictures of dopamine necklaces, the neuro-jewellers have placed words like “love”, “passion”, or “pleasure”. Under serotonin they write “happiness” and “satisfaction”, and under norepinephrine, “alertness” and “energy”. These associations presumably stem from the view that the brain is a chemical soup in which each ingredient generates a distinct emotion, mood, or feeling. Subjective experience, according to this view, is the sum total of the contributions of each “mood molecule”. If we strip away the modern scientific veneer, the chemical soup idea evokes the four humors of ancient Greek medicine: black bile to make you melancholic, yellow bile to make you choleric, phlegm to make you phlegmatic, and blood to make you sanguine.

“A dopamine pendant worn round the neck as a symbol for bliss is emblematic of modern society’s attitude towards current scientific research. A multifaceted and only partially understood set of experiments is hastily distilled into an easily marketed molecule of folk wisdom. Having filtered out the messy details, we are left with an ornamental nugget of thought that appears both novel and reassuringly commonsensical. But does neuroscience really support this reductionist view of human subjectivity? Can our psychological states be understood in terms of a handful of chemicals? Does neuroscience therefore pose a problem for a more holistic view, in which humans are integrated in social and environmental networks? In other words, are the “chemical self” and the “social self” mutually exclusive concepts?”

– Read the rest at 3QD: The Chemical Self and the Social Self

The holy grail of computational neuroscience: Invariance

There are quite a few problems that computational neuroscientists need to solve in order to achieve a true theoretical understanding of biological intelligence. But I’d like to talk about one problem that I think is the holy grail of computational neuroscience and artificial intelligence: the quest for invariance. From a purely scientific and technological perspective I think this is a far more important and interesting problem than anything to do with the “C-word”: Consciousness. 🙂

Human (and animal) perception has an extraordinary feature that we still can’t fully emulate with artificial devices. Our brains somehow create and/or discover invariances in the world. Let me start with a few examples and then explain what invariance is.

Invariance in vision

Think about squares. You can recognize a square irrespective of its size, color, and position. You can even recognize a square with reasonable accuracy when viewing it from an oblique angle. This ability is something we take for granted, but we haven’t really figured it out yet.
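To make this concrete, here is a minimal sketch (my own toy illustration, not something from the research literature) of how an artificial system can capture a small slice of this invariance: normalize away position and size before comparing shapes. Note that it says nothing about color, rotation, or oblique viewing angles, which are much harder.

```python
import numpy as np

def normalize_shape(points):
    """Strip away position and size: center the points, then scale to unit spread."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                 # remove position (translation)
    scale = np.sqrt((pts ** 2).sum())
    return pts / scale if scale > 0 else pts     # remove size (scale)

# Two squares with different positions and sizes, corners listed in the same order.
small_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
big_square = [(10, 10), (14, 10), (14, 14), (10, 14)]

# After normalization the two representations coincide: recognition can ignore
# where the square is and how big it is.
print(np.allclose(normalize_shape(small_square), normalize_shape(big_square)))  # True
```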

Now think about human faces. You can recognize a familiar face in various lighting conditions, and under changes of facial hair, make-up, age, and context. How does the brain allow you to do things like this?

Invariance in hearing

Think about a musical tune you know well. You will probably be able to recognize it even if it is slowed down, sped up, hummed, whistled, or even sung wordlessly by someone who is tone-deaf. In some special cases, you can even recognize a piece of music from its rhythmic pattern alone, without any melody. How do you manage to do this?

Think about octave equivalence. A sound at a particular frequency sounds like the same note as a sound at double the frequency. In other words, notes an octave apart sound similar. What is happening here?
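Octave equivalence is one of the rare invariances that is easy to write down explicitly. As a rough sketch (my own illustration, not a claim about how the auditory system computes it), a “pitch class” that ignores the octave can be defined as the fractional part of the log-frequency; doubling the frequency leaves it unchanged.

```python
import math

def pitch_class(freq_hz, reference_hz=440.0):
    """Position within the octave, in [0, 1). Doubling the frequency leaves it unchanged."""
    return math.log2(freq_hz / reference_hz) % 1.0

print(pitch_class(440.0))   # 0.0    (A4)
print(pitch_class(880.0))   # 0.0    (A5: an octave up, same pitch class)
print(pitch_class(660.0))   # ~0.585 (a different note)
```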

What is invariance?

How does your brain discover similarity in the midst of so much dissimilarity? The answer is that the brain somehow creates invariant representations of objects and patterns. Many computational neuroscientists are working on this problem, but there are no unifying theoretical frameworks yet.

So what does “invariance” mean? It means “immunity to a possible change”. It’s related to the formal concept of symmetry. According to mathematics and theoretical physics, an object has a symmetry if it looks the same even after a change. For example, a square looks exactly the same if you rotate it by 90 degrees around its center. We say it is invariant (or symmetrical) with respect to a 90 degree rotation.
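Here is a small numerical check of that statement (again a toy sketch of my own): represent the square by its four corners and verify that a 90-degree rotation maps the corner set onto itself, while a 45-degree rotation does not.

```python
import numpy as np

def rotate(points, degrees):
    """Rotate 2-D points about the origin."""
    t = np.radians(degrees)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return points @ R.T

def same_point_set(a, b):
    """Compare two collections of points as unordered sets (rotation permutes the corners)."""
    key = lambda pts: sorted(map(tuple, np.round(pts, 6)))
    return key(a) == key(b)

square = np.array([(1, 1), (-1, 1), (-1, -1), (1, -1)], dtype=float)  # centered on the origin

print(same_point_set(square, rotate(square, 90)))  # True: 90 degrees is a symmetry of the square
print(same_point_set(square, rotate(square, 45)))  # False: 45 degrees is not
```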

Our neural representations of sensory patterns somehow allow us to discover symmetries and use them for recognition and flexible behavior. And we manage to do this implicitly, without any conscious effort. This type of ability is limited and it varies from person to person, but all people have it to some extent.

Back to the examples

We can redefine our examples using the language of invariance.

 

  • The way humans represent squares and other shapes is invariant with respect to rotation, as well as with respect to changes in position, lighting, and even viewing angle.
  • The way humans represent faces is invariant with respect to changes in make-up, facial hair, context, and age. (This ability varies from person to person, of course.)
  • The way humans represent musical tunes is invariant with respect to changes in speed, musical key, and timbre.
  • The way humans represent musical notes is invariant with respect to doubling of frequency (which is equivalent to shifting by an octave).


All these invariances are partial and limited in scope, but they are still extremely useful, and far more sophisticated than anything we can do with artificial systems.

Invariance of thought patterns?

The power of invariance is particularly striking when we enter the domain of abstract ideas — particularly metaphors and analogies.

Consider perceptual metaphors. We can touch a surface and describe it as smooth. But we can also use the word “smooth” to describe sounds. How is it that we can use texture words for things that we do not literally touch?

Now consider analogies, which are the more formal cousins of metaphors. Think of analogy questions in tests like the GRE and the SATs. Here’s an example:

Army: Soldier :: Navy : _____

The answer is “Sailor”.

These questions take the form “A:B::C:D”, which we normally read as “A is to B as C is to D”. The test questions normally ask you to specify what D should be.

To make an analogy more explicit, we can re-write it this way: “R(x, y) holds for (x, y) = (A, B) and for (x, y) = (C, D)”. The relation “R” holds for pairs of words (x, y), and in particular, for the pair (A, B) as well as the pair (C, D).

In this example, the analogical relationship R can be captured in the phrase “is made up of”. An army is made up of soldiers and a navy is made up of sailors. In any analogy, we are able to pick out an abstract relationship between things or concepts.
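The R(x, y) formulation can be made concrete with a toy program (my own sketch; the names and the tiny knowledge base below are invented for illustration). Here the relation R, “is made up of”, is stored explicitly, and solving the analogy amounts to finding a D that stands in the same relation to C.

```python
# Hypothetical toy knowledge base: pairs (x, y) for which R(x, y) = "x is made up of y" holds.
made_up_of = {
    ("army", "soldier"),
    ("navy", "sailor"),
    ("choir", "singer"),
    ("forest", "tree"),
}

def solve_analogy(a, b, c, relation):
    """A : B :: C : D -- find every d such that the relation linking (a, b) also links (c, d)."""
    if (a, b) not in relation:
        return set()          # we don't know a relation tying A to B
    return {d for (x, d) in relation if x == c}

print(solve_analogy("army", "soldier", "navy", made_up_of))  # {'sailor'}
```

Of course, the interesting part is everything this sketch leaves out: people solve analogies without ever being handed an explicit R, which is exactly the implicit ability discussed next.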

Here’s another example discussed in the Wikipedia page on analogy:

Hand: Palm :: Foot: _____

The answer most people give is “Sole”. What’s interesting about this example is that many people can understand the analogy without necessarily being able to explain the relationship R in words. This is true of various analogies. We can see implicit relationships without necessarily being able to describe them.

We can translate metaphors and analogies into the language of invariance.

 

  • The way humans represent perceptual experiences allows us to create metaphors that are invariant with respect to changes in sensory modality. So we can perceive smoothness in the modalities of touch, hearing and other senses.
  • The way humans represent abstract relationships allows us to find/create analogies that are invariant with respect to the particular things being spoken about. The validity of the analogy R(x,y) is invariant with respect to replacing the pair (x,y) with (A,B) or (C,D).


The words “metaphor” and “analogy” are essentially synonyms for the word “invariant” in the domains of percepts and concepts. Science, mathematics and philosophy often involve trying to make explicit our implicit analogies and metaphors.

Neuroscience, psychology and cognitive science aim to understand how we form these invariant representations in the first place. In my opinion doing so will revolutionize artificial intelligence.

 



Further reading:

I’ve only scratched the surface of the topic of invariance and symmetry.

I talk about symmetry and invariance in this answer too:

Mathematics: What are some small but effective theses or ideas in mathematics that you have came across? [Quora link. Sign-up required]

I talk about the importance of metaphors in this blog post:

Metaphor: the Alchemy of Thought

I was introduced to many of these ideas through a book by physicist Joe Rosen called Symmetry Rules: How Science and Nature Are Founded on Symmetry. It’s closer to a textbook than a popular treatment, but for people interested in the mathematics of symmetry and group theory, and how it relates to science, this is an excellent introduction. Here is a summary of the book: [pdf]

Relatively recent techniques such as deep learning have helped artificial systems form invariant representations. This is how the facial recognition software used by Google and Facebook works. But these algorithms still don’t have the accuracy and generality of human skills, and the way they work, despite being inspired by real neural networks, is sufficiently unlike real neural processes that these algorithms may not shed much light on how human intelligence works.
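As a rough illustration of one ingredient behind this (my own minimal numpy sketch, not how any production system actually works): pooling a feature map down to a summary statistic discards the location of a feature, giving a representation that is partially invariant to translation.

```python
import numpy as np

def detect_vertical_edge(image):
    """A tiny 'convolutional layer': slide a 1x2 edge detector [-1, 1] across each row."""
    h, w = image.shape
    out = np.zeros((h, w - 1))
    for i in range(h):
        for j in range(w - 1):
            out[i, j] = image[i, j + 1] - image[i, j]
    return out

def global_max_pool(feature_map):
    """Keep only the strongest response, wherever it occurs."""
    return feature_map.max()

# The same vertical edge, at two different horizontal positions.
img_a = np.zeros((4, 8)); img_a[:, 2:] = 1.0
img_b = np.zeros((4, 8)); img_b[:, 5:] = 1.0

print(np.array_equal(detect_vertical_edge(img_a), detect_vertical_edge(img_b)))  # False: responses differ by location
print(global_max_pool(detect_vertical_edge(img_a)) ==
      global_max_pool(detect_vertical_edge(img_b)))                              # True: the pooled summary ignores position
```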


 

Notes:

This post is a slightly edited form of a Quora answer I wrote recently.

In the comments section someone brought up the idea that some invariants can be easily extracted using Fourier decomposition. This is what I said in response:

Good point. Fourier decomposition is definitely part of the story (for sound at the very least), but it seems there is a lot more.
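For sound, the textbook example is that the magnitude of the Fourier transform throws away phase, so it is unchanged when the waveform is shifted in time. Here is a quick numerical check (my own sketch using numpy):

```python
import numpy as np

fs = 1000                                     # sampling rate (Hz)
t = np.arange(fs) / fs                        # one second of samples
signal = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 250 * t)
shifted = np.roll(signal, 137)                # the same sound, started at a different moment (circular shift)

print(np.allclose(signal, shifted))           # False: the waveforms differ sample by sample
print(np.allclose(np.abs(np.fft.rfft(signal)),
                  np.abs(np.fft.rfft(shifted))))  # True: the magnitude spectrum is invariant to the shift
```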

Some people think that the auditory system is just doing a Fourier transform. But this was actually shown to be partially false a century ago. The idea that pitch corresponds to the frequencies of sinusoids is called Ohm’s acoustic law.

From the wiki page:

 

For years musicians have been told that the ear is able to separate any complex signal into a series of sinusoidal signals – that it acts as a Fourier analyzer. This quarter-truth, known as Ohm’s Other Law, has served to increase the distrust with which perceptive musicians regard scientists, since it is readily apparent to them that the ear acts in this way only under very restricted conditions.
—W. Dixon Ward (1970)


This web page discusses some of the dimensions other than frequency that contribute to pitch:

Introduction to Psychoacoustics – Module 05

There are interesting aspects of pitch perception that render the Fourier picture problematic. For example, there is the phenomenon of the missing fundamental: “the observation that the pitch of a complex harmonic tone matches the frequency of its fundamental spectral component, even if this component is missing from the tone’s spectrum.”

Evidence suggests that the human auditory system uses both frequency and time/phase coding.

Missing fundamental: “The brain perceives the pitch of a tone not only by its fundamental frequency, but also by the periodicity of the waveform; we may perceive the same pitch (perhaps with a different timbre) even if the fundamental frequency is missing from a tone.”

This book chapter also covers some of the evidence: [pdf]

” One of the most remarkable properties of the human auditory system is its ability to extract pitch from complex tones. If a group of pure tones, equally spaced in freque ncy are presented together, a pitch corresponding to the common frequency distance between the individual components will be heard. For example, if the pure tones with frequencies of 700, 800, and 900 Hz ar e presented together, the result is a complex sound with an underlying pitch corresponding to that of a 100 Hz tone. Since there is no physical energy at the frequency of 100 Hz in the complex, such a pitch sensation is called residual pitch or virtual pitch (Schouten 1940; Schouten, Ritsma and Cardozo, 1961). Licklider (1954) demonstrated that both the plac e (spectral) pitch and the residual (virtual) pitch have the same properties and cannot be auditorally differentiated.”

The status of Fourier decomposition in vision might be more controversial. Spatial frequency based models have their adherents, but also plenty of critics. One of my professors says that claiming the visual system does spatial Fourier analysis amounts to confusing the object of study with the tools of study. 🙂 We still don’t know whether and how the brain performs spatial Fourier decomposition.

A very recent paper reviews this issue:

The neural bases of spatial frequency processing during scene perception

“how and where spatial frequencies are processed within the brain remain unresolved questions.”

Vision scientists I know often talk about how the time domain cannot be ignored in visual processing.

A general point to be made is that even if we have mathematical solutions that are invariant, computational neuroscientists haven’t quite figured out how neural networks achieve such invariant representations. The quest for invariance is more about plausible neural implementation than mathematical description per se.

 

From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

My next 3QD column is out. I speculate about the role of boundaries in life and aesthetic experience. (Dopamine cells make a cameo appearance too.)

This image is a taster:

If you want to know what this diagram might mean, check out the article:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art