“Are thoughts just a bunch of electrical and chemical signals being tossed around inside the brain, or is there more to it than that?”

“In our world,” said Eustace, “a star is a huge ball of flaming gas.”

“Even in your world, my son, that is not what a star is but only what it is made of…”

The Voyage of the Dawn Treader, CS Lewis

I really like the quote above, which is from the Chronicles of Narnia. It raises a neat little metaphysical question:

Why do we assume that what a thing is made up of is what a thing is?

As a neuroscientist I have to point out that no one really knows what a thought is from a scientific perspective. This means that we don’t know what we would need to measure in order to ‘decode’ a person’s thoughts. For the foreseeable future, I cannot look at a brain scan and say, “This person is definitely thinking about pineapples!”

Of course, thoughts seem to be closely linked with neural patterns in the brain, and those patterns are clearly linked with electro-chemical signaling. Tinkering with the signaling clearly tinkers with the thinking. Otherwise the effects of drugs such as alcohol and coffee on thought would be a mystery. Perhaps some day we will have a scanner that tells us what a person is thinking of.


Matter and form

While I admit that electro-chemical signals being tossed about is a necessary precondition for thinking — no phenomenon that lacks such tossing will be unanimously labeled as thinking — I think that material constituency is a less than stellar guide for thinking about what something is.

Consider charcoal, diamonds, graphite, and graphene. These are all made up of carbon. But is that all there is to the story of what they are? I hope the answer is an emphatic no, since they have radically different properties. Charcoal is black and relatively soft. Diamonds are transparent and exceptionally hard. Graphite and graphene conduct electricity, whereas diamond does not.

What explains the differences between the various allotropes of carbon? Clearly it isn’t what they are made of — it’s the same stuff in each case.

Eight allotropes of carbon: a) diamond, b) graphite, c) lonsdaleite, d) C60 buckminsterfullerene, e) C540 (fullerite), f) C70, g) amorphous carbon, and h) single-walled carbon nanotube. Source: Wikipedia

~

What differs among the allotropes is the arrangement of the carbon atoms. In other words, form is as important as ‘content’. Depending on how you arrange carbon atoms, you will end up with something soft and opaque or something hard and transparent. Clearly the properties of the substances are not to be found in the properties of the individual atoms.

This is generally true for most complex and interesting objects and processes. You can boil them down to some set of elements (subatomic particles, atoms, molecules, genes, cells, neurotransmitters), but some defining feature of the overarching process will be missing from the constituent parts considered in isolation, just as transparency or opacity is missing from individual carbon atoms.

In complex systems theory and in condensed matter physics, the word emergence is often used to describe the phenomenon by which collections of matter acquire new properties as a result of arrangement or sheer scale. Chemistry is full of examples. Oxygen is a gas. Hydrogen is a gas. But when they combine in the right way, they produce water, which is a liquid, and one with all kinds of properties that can’t be predicted from first principles by analyzing the constituent parts in isolation.


Are thoughts emergent?

Since we don’t know exactly what thoughts are, we cannot say for sure whether they are emergent phenomena or not. But we can indirectly infer that they are by considering properties of thoughts and comparing them with properties of chemicals being tossed around in the brain.

A hallmark of thoughts is that they are about things. When you are thinking about a pineapple, there is an “aboutness” relation between the thought and the pineapple. Thoughts refer to things — which may be real things in the world, or imaginary things like dragons. This is a distinctive feature of mental phenomena, and the philosophers call it intentionality. (Note that intentionality has nothing to do with intentions or motivations — it’s not the best term, but that’s where you’ll find the relevant writings.)

This “aboutness” or “intentionality” is not a feature of chemical tossing patterns. A pattern is a pattern is a pattern, and isn’t intrinsically about any other pattern. At the very least, we can say that modern physics and chemistry have had no reason to invent an “aboutness” concept so far. In other words, there is no purely physical theory of reference.

So it seems reasonable to at least consider the possibility that the property of “aboutness” emerges when matter is arranged in just the right way.


“Is there more to it?”

This admittedly abstract concept is not really going to satisfy people who were hoping that thoughts were actually composed of “magic dust”, as Sam Moss quite rightly termed it. Thoughts are not “made up” of some special secret sauce. If you look at a brain (or any other tissue) under a microscope, all you see are cells. And cells are made up of atoms: mostly carbon, hydrogen and oxygen, with some crucial cameos by nitrogen, calcium, phosphorus, sulfur, sodium, potassium, magnesium and chloride.

So is that all a brain or a body is? A stew of a dozen elements? If you followed the story with carbon, then you’ll know that the answer is no. The arrangement of the atoms makes all the difference in the world.

But does this mean that “there is more to it”? If “more” implies a substance of which thoughts are made, then the answer is most likely no.

In any case, given that matter makes up everything, saying that something is “just” matter seems a bit unfair to matter — it’s about as magical a dust as you could possibly hope for!

Matter gives you the universe and you ask if there is more to it?! 😉

If you like, you can call arrangement or form the special “something more”. Arrangement is the “something more” that distinguishes charcoal from diamonds, and thought from nonsense.

But arrangement will not fulfill all the duties of magical dust. It is not the same as the traditional notion of a soul. A soul can live on without a body. But a form has no meaning without the constituent matter that is arranged.

So here’s the compromise: thoughts are made up of electro-chemical signals tossing around, but that is not what they are, since this definition does not distinguish in any useful way between thoughts and perceptions, feelings, moods, emotions or sensations — or even unconscious neural processes for that matter — all of which are also made up of electro-chemical signals.

So saying thoughts are electro-chemical signals is about as useful as saying diamonds are carbon. It’s true, but not in an especially interesting or informative sense.


Notes

If you’ve made it this far, well done! I guess I was having a slow Friday evening! 🙂

I know that what I’ve written is quite abstract, but that goes with the territory if you are thinking about thoughts.


____

This post was originally a Quora answer.


What neuroscience too often neglects: Behavior

A Quora conversation led me to a recent paper in Neuron that highlights a very important problem with a lot of neuroscience research: there is insufficient attention paid to the careful analysis of behavior. The paper is not quite a call to return to behaviorism, but it is an invitation to consider that the pendulum has swung too far in the opposite direction, towards ‘blind’ searches for neural correlates. The paper is a wonderful big-picture critique, so I’d like to just share some excerpts.

[pdf: Krakauer et al. (2017), Neuron]


Is consciousness complex?

Someone on Quora asked the following question: What’s the correlation between complexity and consciousness?

Here’s my answer:

Depends on who you ask! Both complexity and consciousness are contentious words, and mean different things to different people.

I’ll build my answer around the idea of complexity, since it’s easier to talk about scientifically (or at least mathematically) than consciousness. Half-joking comments about complexity and consciousness are to be found in italics.

I came across a nice list of measures of complexity, compiled by Seth Lloyd, a researcher at MIT; I will structure my answer around it. [pdf]

Lloyd describes measures of complexity as ways to answer three questions we might ask about a system or process:

  1. How hard is it to describe?
  2. How hard is it to create?
  3. What is its degree of organization?

1. Difficulty of description: Some objects are complex because they are difficult for us to describe. We frequently measure this difficulty in binary digits (bits), and also use concepts like entropy (information theory) and Kolmogorov (algorithmic) complexity. I particularly like Kolmogorov complexity. It’s a measure of the computational resources required to specify a string of characters: the size of the smallest algorithm that can generate that string of letters or numbers (all of which can be converted into bits). So if you have a string like “121212121212121212121212”, it has a description in English (“12 repeated 12 times”) that is even shorter than the actual string. Not very complex. But the string “asdh41ubmzzsa4431ncjfa34” may have no description shorter than the string itself, so it will have higher Kolmogorov complexity. This measure of complexity can also give us an interesting way to talk about randomness. Loosely speaking, a random process is one whose simulation is harder to accomplish than simply watching the process unfold! Minimum message length is a related idea that also has practical applications. (It seems Kolmogorov complexity is technically uncomputable! A crude compression-based proxy is sketched below.)

Consciousness is definitely hard to describe. In fact we seem to be stuck at the description stage at the moment. Describing consciousness is so difficult that bringing in bits and algorithms seem a tad premature. (Though as we shall see, some brave scientists beg to differ.)
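As a toy illustration of the description-difficulty idea, here is a minimal Python sketch of my own (not from Lloyd) that uses the length of a zlib-compressed string as a crude stand-in for Kolmogorov complexity. Since Kolmogorov complexity is uncomputable, compression only gives a loose upper bound, and the exact byte counts depend on the compressor, but the contrast between a patterned string and a scrambled one still shows up clearly:

    # Compressed length as a rough upper-bound proxy for Kolmogorov complexity.
    # Only the contrast between the two strings matters, not the exact numbers.
    import random
    import string
    import zlib

    random.seed(0)

    patterned = "12" * 5000  # a long version of "12 repeated 12 times"
    scrambled = "".join(random.choices(string.ascii_lowercase + string.digits, k=10000))

    for label, text in [("patterned", patterned), ("scrambled", scrambled)]:
        packed = zlib.compress(text.encode("utf-8"), 9)
        print(f"{label}: {len(text)} characters -> {len(packed)} bytes after compression")

    # The patterned string compresses down to a few dozen bytes; the scrambled one
    # can only be squeezed modestly, because it has no description much shorter
    # than itself.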

2. Difficulty of creation: Some objects and processes are seen as complex because they are really hard to make. Kolmogorov complexity could show up here too, since simulating a string can be seen both as an act of description (the code itself) and an act of creation (the output of the code). Lloyd lists the following terms that I am not really familiar with: Time Computational Complexity; Space Computational Complexity; Logical depth; Thermodynamic depth; and “Crypticity” (!?). In addition to computational difficulty, we might add other costs: energetic, monetary, psychological, social, and ecological. But perhaps then we’d be confusing the complex with the cumbersome? 🙂

Since we haven’t created a consciousness yet, and don’t know how nature accomplished it, perhaps we are forced to say that consciousness really is complex from the perspective of artificial synthesis. But if/when we have made an artificial mind — or settled upon a broad definition of consciousness that includes existing machines — then perhaps we’ll think of consciousness as easy! Maybe it’s everywhere already! Why pay for what’s free?

3. Degree of organization: Objects and processes that seem intricately structured are also seen as complex. This type of complexity differs strikingly from computational complexity. A string of random noise is extremely complex from an information-theoretic perspective, because it is virtually incompressible: it cannot be condensed into a simple algorithm. A book consisting of totally random characters contains more information, and is therefore more algorithmically complex, than a meaningful text of the same length. But strings of random characters are typically interpreted as totally lacking in structure, and are therefore in a sense very simple. Some measures that Lloyd associates with organizational complexity include fractal dimension, metric entropy, stochastic complexity and several more, most of which I confess I had never heard of until today. I suspect that characterizing organizational structure is an ongoing research endeavor. In a sense that’s what mathematics is: the study of abstract structure. (A toy illustration of how organization differs from raw information content is sketched below.)

Consciousness seems pretty organized, especially if you’re having a good day! But it’s also the framework by which we come to know that organization exists in nature in the first place… so this gets a bit loopy. 🙂
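To make the contrast between information content and organization a bit more concrete, here is a small Python sketch of my own. It estimates the mutual information between adjacent characters, which is only a very loose stand-in for the organizational measures Lloyd lists (it is not metric entropy proper): patterned text shows strong dependence between neighbouring symbols, while a random string, despite being nearly incompressible, shows essentially none:

    # Adjacent-character mutual information as a toy measure of "organization".
    # A random string is information-rich per character yet nearly structureless;
    # patterned text has strong dependencies between neighbouring symbols.
    import math
    import random
    import string
    from collections import Counter

    def adjacent_mutual_information(text):
        """Plug-in estimate of I(X_i; X_{i+1}) in bits from empirical pair counts."""
        pairs = Counter(zip(text, text[1:]))
        total = sum(pairs.values())
        left, right = Counter(), Counter()
        for (a, b), count in pairs.items():
            left[a] += count
            right[b] += count
        mi = 0.0
        for (a, b), count in pairs.items():
            p_ab = count / total
            mi += p_ab * math.log2(p_ab / ((left[a] / total) * (right[b] / total)))
        return mi

    random.seed(1)
    patterned = "the cat sat on the mat. " * 400
    scrambled = "".join(random.choices(string.ascii_lowercase + " .", k=len(patterned)))

    print("patterned:", round(adjacent_mutual_information(patterned), 3), "bits")
    print("scrambled:", round(adjacent_mutual_information(scrambled), 3), "bits")
    # Expect a substantial value for the patterned text and a near-zero value for
    # the scrambled one (a small positive bias comes from finite sampling).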

Seth Lloyd ends his list with concepts that are related to complexity, but don’t necessarily have measures. These, I think, are particularly relevant to consciousness and to the more prosaic world I work in: neural network modeling.

Self-organization
Complex adaptive system
Edge of chaos

Consciousness may or may not be self-organized, but it definitely adapts, and it’s occasionally chaotic.

To Lloyd’s very handy list let me also add self-organized criticality and emergence. Emergence is an interesting concept which has been falsely accused of being obscurantist. A property is emergent if it is seen in a system, but not in any constituent of the system. For instance, the thermodynamic gas laws emerge out of kinetic theory, but they make no reference to molecules. The laws governing gases show up when there is a large enough number of particles, and when these laws reveal themselves, microscopic details often become irrelevant. But gases are the least interesting substrates for emergence. Condensed matter physicists talk about phenomena like the emergence of quasiparticles, which are excitations in a solid that behave as if they are independent particles, but depend for this independence, paradoxically, on the physics of the whole object. (Emergence is a fascinating subject in its own right, regardless of its relevance to consciousness. Here’s a paper that proposes a neat formalism for talking about emergence: Emergence is coupled to scope, not level. PW Anderson’s classic paper “More is Different” also talks about a related issue: pdf)
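The gas-law example can be made concrete in a few lines of code. Here is a minimal numerical sketch of my own (a toy, not taken from the papers linked above): non-interacting particles with Maxwell-Boltzmann velocities, where the wall pressure computed from microscopic momentum transfer matches the macroscopic ideal gas law, even though no single particle has a pressure:

    # Kinetic-theory toy: recover the ideal gas law from microscopic statistics.
    # Pressure on a wall from momentum transfer is P = (N/V) * m * <vx^2>;
    # with Maxwell-Boltzmann velocities, <vx^2> = k_B*T/m, so P = N*k_B*T/V.
    import numpy as np

    rng = np.random.default_rng(42)

    k_B = 1.380649e-23   # Boltzmann constant (J/K)
    T = 300.0            # temperature (K)
    m = 6.6e-26          # particle mass (kg), roughly an argon atom
    N = 1_000_000        # number of sampled particles
    V = 1e-3             # box volume (m^3)

    # Each velocity component is Gaussian with variance k_B*T/m.
    vx = rng.normal(0.0, np.sqrt(k_B * T / m), size=N)

    pressure_micro = (N / V) * m * np.mean(vx ** 2)  # from particle-level statistics
    pressure_macro = N * k_B * T / V                 # the emergent macroscopic law

    print(f"kinetic-theory estimate: {pressure_micro:.4e} Pa")
    print(f"ideal gas law:           {pressure_macro:.4e} Pa")
    # The two agree to within sampling noise; the law lives at the level of the
    # ensemble, not of any individual particle.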

Consciousness may well be an emergent process — we rarely say that a single neuron or a chunk of nervous tissue has a mind of its own. Consciousness is a word that is reserved for the whole organism, typically.

So is consciousness complex? Maybe…but not really in measurable ways. We can’t agree on how to describe it, we haven’t created it artificially yet, and we don’t know how it is organized, or how it emerged!

In my personal opinion many of the concepts people associate with consciousness are far outside the scope of mainstream science. These include qualia, the feeling of what-it-is-like, and intentionality, the observation that mental “objects” always seem to be “about” something.

This doesn’t mean I think these aspects of consciousness are meaningless, only that they are scientifically intractable. Other aspects of consciousness, such as awareness, attention, and emotion might also be shrouded in mystery, but I think neuroscience has much to say about them — this is because they have some measurable aspects, and these aspects step out of the shadows during neurological disorders, chemical modulation, and other abnormal states of being.

However…

There are famous neuroscientists who might disagree. Giulio Tononi has come up with something called integrated information theory, which comes with a measure of consciousness he christened phi. Phi is supposed to capture the degree of “integratedness” of a network. I remain quite skeptical of this sort of thing — for now it seems to be a metaphor inspired by information theory, rather than a measurable quantity. I can’t imagine how we will be able to relate it to actual experimental data. Information, contrary to popular perception, is not something intrinsic to physical objects. The amount of information in a signal depends on the device receiving the signal. Right now we have no way of knowing how many “bits” are being transmitted between two neurons, let alone between entire regions of the brain. Information theory is best applied when we already know the nature of the message, the communication channel, and the encoding/decoding process. We have only partially characterized these aspects of neural dynamics. Our experimental data seem far too fuzzy for any precise formal approach. [Information may actually be a concept of very limited use in biology, outside of data fitting. See this excellent paper for more: A deflationary account of information in biology. This sums it up: “if information is in the concrete world, it is causality. If it is abstract, it is in the head.”]
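One way to see the receiver-dependence point is with a deliberately artificial example. The sketch below (all numbers invented for illustration, not a model of any real neuron or of phi) feeds the same simulated spike counts to two different readers, an exact-count reader and a crude thresholding reader, and estimates how many bits each recovers about the stimulus:

    # The same response carries different amounts of stimulus information
    # depending on how the downstream reader decodes it. Toy numbers only.
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(7)

    n_trials = 200_000
    stimulus = rng.integers(0, 2, size=n_trials)      # two equally likely stimuli
    mean_rate = np.where(stimulus == 0, 2.0, 6.0)     # mean spike count per stimulus
    spike_counts = rng.poisson(mean_rate)

    def mutual_information_bits(x, y):
        """Plug-in estimate of I(X; Y) in bits from paired samples."""
        x, y = x.tolist(), y.tolist()
        n = len(x)
        joint = Counter(zip(x, y))
        px, py = Counter(x), Counter(y)
        mi = 0.0
        for (a, b), count in joint.items():
            p_ab = count / n
            mi += p_ab * np.log2(p_ab / ((px[a] / n) * (py[b] / n)))
        return mi

    exact_reader = spike_counts                       # sees the full spike count
    coarse_reader = (spike_counts > 4).astype(int)    # only sees "many" vs "few"

    print("exact-count reader:", round(mutual_information_bits(stimulus, exact_reader), 3), "bits")
    print("thresholded reader:", round(mutual_information_bits(stimulus, coarse_reader), 3), "bits")
    # The thresholded reader recovers fewer bits from the very same spike train:
    # "how many bits" is not a property of the signal alone.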

But perhaps this paper will convince me otherwise: Practical Measures of Integrated Information for Time-Series Data. [I very much doubt it though.]

___

I thought I would write a short answer… but I ended up learning a lot as I added more info.

View Answer on Quora