Here’s an answer I wrote a while ago to the following question:
This is actually a fun question! Taken in the right spirit, it can be a good way to learn about what science is, and also what the limitations of science are.
I just came across a nice article explaining why the metaphor of organism as machine is misleading and unhelpful.
The machine conception of the organism in development and evolution: A critical analysis
This excerpt makes a key point:
“Although both organisms and machines operate towards the attainment of particular ends (that is, both are purposive systems), the former are intrinsically purposive whereas the latter are extrinsically purposive. A machine is extrinsically purposive in the sense that it works towards an end that is external to itself; that is, it does not serve its own interests but those of its maker or user. An organism, on the other hand, is intrinsically purposive in the sense that its activities are directed towards the maintenance of its own organization; that is, it acts on its own behalf.”
In this section the author explains how the software/hardware idea found its way into developmental biology.
“The situation changed considerably in the mid-twentieth century with the advent of modern computing and the introduction of the conceptual distinction between software and hardware. This theoretical innovation enabled the construction of a new kind of machine, the computer, which contains algorithmic sequences of coded instructions or programs that are executed by a central processing unit. In a computer, the software is totally independent from the hardware that runs it. A program can be transferred from one computer and run in another. Moreover, the execution of a program is always carried out in exactly the same fashion, regardless of the number of times it is run and of the hardware that runs it. The computer is thus a machine with Cartesian and Laplacian overtones. It is Cartesian because the software/hardware distinction echoes the soul/body dualism: the computer has an immaterial ‘soul’ (the software) that governs the operations of a material ‘body’ (the hardware). And it is Laplacian because the execution of a program is completely deterministic and fully predictable, at least in principle. These and other features made the computer a very attractive theoretical model for those concerned with elucidating the role of genes in development in the early days of molecular biology.”
The machine conception of the organism in development and evolution: A critical analysis
I’ve actually criticized the genetic program metaphor myself, in the following 3QD essay:
3quarksdaily: How informative is the concept of biological information?
____
Image source: Digesting Duck – Wikipedia

Interesting interview in The Atlantic with cognitive scientist Donald D. Hoffman:
“I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.”
[…]
“Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.”
[…]
“As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.”
I don’t agree with everything in the article (especially the quantum stuff), but I think many people interested in consciousness and metaphysics will find plenty of food for thought here:
Also, the “conscious agents all the way down” idea is exactly the position I was criticizing in a recent 3QD essay:
The diagram above is from a science fiction story I was working on, back when I was a callow youth. It is closely related to the idea of a network of conscious agents. Here’s another ‘version’ of it.

Not sure why I made it look so morbid. 🙂
I was asked this question on Quora recently:
This is an excellent set of questions!
Short(ish) answers:
1. Why does the brain have waves?
There is no consensus on their functional role, but some researchers think oscillations facilitate flexible coordination and communication among neurons.
2. The brain has neurons which can approximately be understood as performing function evaluations on their inputs, though the actual mechanisms are more complex. But how would this cause brainwaves of different frequencies to exist?
There are several possible mechanisms by which the input-output transformations of neurons, once the neurons are connected in networks, can give rise to oscillations. Furthermore, neurons themselves often have intrinsic oscillatory properties (other than basic spiking), such as rebound excitation following inhibition.
3. Is there some sort of “controller” that synchronizes the firing?
There is no single controller causing synchronization or oscillation — there are actually several local and meso-scale mechanisms, often involving inhibitory interneurons*. (Note that synchronization and oscillation are distinct phenomena. You can have synchronized non-oscillatory processes, and oscillations that are not synchronized.)
Long answers:
What are brain waves?
Even though we use the generic term ‘brain waves’, there are actually a variety of distinct mechanisms at work that lead to rhythmic behavior in different frequency bands. And in many cases we still don’t know which mechanism causes a particular brain rhythm, or can’t decide among multiple plausible mechanisms.
Even though I’m a computational neuroscientist, I find it hard to conceptually integrate all the different perspectives on neural activity. But I’ve been thinking about this a lot lately, so here goes!
The most fine-grained perspective involves measuring the voltage of an individual neuron. This voltage can change in various ways. The most well known is the spike, or action potential. Spikes travel efficiently down the axon, and typically cause vesicle release in the synapse, which allows neurotransmitters to affect the post-synaptic neuron. But spiking is not the only kind of voltage fluctuation in a neuron. There are also ‘sub-threshold’ fluctuations, which are often oscillatory. These oscillations may represent oscillatory and/or synchronized inputs to the neuron that are insufficient to cause spiking. Single-electrode and multi-electrode recordings of cell activities can pick up both the sub-threshold fluctuations and the supra-threshold spiking.
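To make this concrete, here is a minimal, purely illustrative sketch (in Python, with made-up parameter values) of a leaky integrate-and-fire neuron receiving a weak rhythmic drive: most of the time the voltage just traces out a sub-threshold oscillation, and spikes occur mainly when noise pushes it over threshold near the oscillation’s peaks.

```python
import numpy as np

# Illustrative leaky integrate-and-fire neuron with a weak rhythmic drive.
# All parameter values are made up for the sketch, not fitted to real cells.
dt, T = 0.1, 1000.0            # time step and total duration (ms)
tau = 20.0                     # membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -54.0, -70.0   # mV
drive = 15.0                   # constant depolarization keeping the cell near threshold (mV)
noise_sigma = 0.5              # strength of unstructured background input

t = np.arange(0, T, dt)
osc = 3.0 * np.sin(2 * np.pi * 8.0 * t / 1000.0)  # 8 Hz sub-threshold drive (mV)

v = np.full(len(t), v_rest)
spike_times = []
rng = np.random.default_rng(0)
for i in range(1, len(t)):
    dv = (-(v[i - 1] - v_rest) + drive + osc[i]) * dt / tau
    v[i] = v[i - 1] + dv + noise_sigma * np.sqrt(dt) * rng.standard_normal()
    if v[i] >= v_thresh:       # supra-threshold: emit a spike and reset
        spike_times.append(t[i])
        v[i] = v_reset

# The voltage mostly shows a sub-threshold 8 Hz ripple; spikes cluster
# near the peaks of that ripple, where the cell is closest to threshold.
print(f"{len(spike_times)} spikes in {T / 1000:.0f} s of simulated time")
```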
Brain waves were first discovered through electroencephalography (EEG), which has been around for a century or so. Unlike electrode-based recording, EEG is non-invasive (meaning we don’t have to poke any sharp objects into anyone). But the main drawback of EEG is that it is a coarse-grained measure of neural activity. It measures the cumulative electrical activity of very large numbers of neurons.
It’s also important to realize that EEG essentially measures the synchronized inputs to a brain area, rather than the firing outputs. This has to do with the biophysics of the technique, which you can read about in more detail in my answer to the question “Are EEG voltages related to the average action potential firing rate of the cortical neurons near the electrode or are the voltages the average of low frequency voltage oscillations of the neurons?” EEG effectively measures the degree of synchronization of neurons that send inputs to the brain region directly underneath the EEG electrode.
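A quick way to see why a summed signal like the EEG mainly reflects synchrony: if you add up many oscillating sources, aligned phases produce a large aggregate signal, while scattered phases mostly cancel. The toy numbers below are just for illustration, not a model of EEG biophysics.

```python
import numpy as np

# Toy illustration: the same oscillating sources sum to a large signal when
# their phases are aligned, and mostly cancel when their phases are scattered.
rng = np.random.default_rng(0)
n_sources = 1000
t = np.arange(0.0, 1.0, 0.001)         # 1 s sampled at 1 kHz
freq = 10.0                            # 10 Hz sources

def summed_signal(phase_spread):
    """Sum of n_sources sinusoids with phases spread over [0, phase_spread)."""
    phases = rng.uniform(0.0, phase_spread, n_sources)
    return np.sum([np.sin(2 * np.pi * freq * t + p) for p in phases], axis=0)

synchronized = summed_signal(phase_spread=1e-6)          # nearly identical phases
desynchronized = summed_signal(phase_spread=2 * np.pi)   # phases all over the cycle

print("synchronized amplitude  :", round(np.abs(synchronized).max(), 1))    # ~ n_sources
print("desynchronized amplitude:", round(np.abs(desynchronized).max(), 1))  # ~ sqrt(n_sources)
```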
Other techniques that can pick up oscillatory activity include magnetoencephalography (MEG) and electrocorticography (ECoG).
Brain rhythms tend to be grouped into frequency bands. The most well-studied bands have been assigned Greek letters that reflect the order of their discovery. The approximate ranges, in hertz (Hz), are:
* Delta: below 4 Hz
* Theta: 4-8 Hz
* Alpha: 8-12 Hz
* Beta: 12-30 Hz
* Gamma: above 30 Hz
(The exact boundaries vary somewhat from study to study.)
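If you have a recorded trace and want to see how much power falls in each band, one standard approach is Welch’s method. Here is a rough sketch; the sampling rate and the random stand-in signal are assumptions, not real data.

```python
import numpy as np
from scipy.signal import welch

# Rough sketch: estimate power in the conventional frequency bands from a
# 1-D voltage trace. Here `eeg` is just random noise standing in for data.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 100)}

fs = 250.0                                   # assumed sampling rate (Hz)
eeg = np.random.randn(int(60 * fs))          # 60 s stand-in for a real recording

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))   # power spectral density
df = freqs[1] - freqs[0]
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = psd[mask].sum() * df        # integrate the PSD over the band
    print(f"{name:5s} ({lo}-{hi} Hz): {band_power:.4f}")
```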
What are the mechanisms that cause brain waves?
Through a combination of experimental techniques and theoretical approaches, neuroscientists have come up with several mechanisms that can explain oscillations in various frequency bands. In many cases it is not clear which mechanism is in fact at work.
I can’t really go through all the theoretical mechanisms that have been proposed, but I can talk about one that is very intuitive to understand: the PING model of Gamma activity. This model, developed by Nancy Kopell, has been widely corroborated by experimental work. (There may be different types of Gamma, however, and not all of them are covered by this model.)
PING stands for pyramidal-interneuronal network gamma, and involves interaction between excitatory cells and inhibitory cells, resulting in the creation of a nonlinear oscillator. The following diagram [1] illustrates the mechanism quite nicely:
The E(xcitatory) cells excite the I(nhibitory) cells, which in turn inhibit the same excitatory cells. This results in an oscillation whose frequency is determined by the rate of integration of the I-cells. Fast I-cells can produce fast rhythms.
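For a feel of how a rhythm can fall out of this kind of E-I loop, here is a rate-model caricature (the classic Wilson-Cowan oscillator with a textbook limit-cycle parameter set), rather than Kopell’s actual spiking PING model; the time constant and initial conditions are illustrative.

```python
import numpy as np

# Rate-model caricature of the E-I feedback loop: excitation recruits
# inhibition, inhibition shuts the E population down, then decays, and the
# cycle repeats. Parameters follow the classic Wilson-Cowan limit-cycle set.
def S(x, a, theta):
    """Sigmoid response function, shifted so that S(0) = 0."""
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0    # E->E, I->E, E->I, I->I coupling
a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
P = 1.25                                   # external drive to the E population
tau = 10.0                                 # ms; sets the overall time scale (illustrative)
dt = 0.05                                  # ms

E, I = 0.25, 0.15
E_trace = []
for _ in range(int(1000 / dt)):            # simulate 1 second
    dE = (-E + (1 - E) * S(c1 * E - c2 * I + P, a_e, th_e)) * dt / tau
    dI = (-I + (1 - I) * S(c3 * E - c4 * I, a_i, th_i)) * dt / tau
    E, I = E + dE, I + dI
    E_trace.append(E)

# Estimate the rhythm's frequency from upward crossings of the mean E rate
r = np.array(E_trace) - np.mean(E_trace)
cycles = int(np.sum((r[:-1] < 0) & (r[1:] >= 0)))
print(f"The E-I loop settles into a rhythm of roughly {cycles} Hz")
```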
There are plenty of other mechanisms for oscillations, but the PING model will give you a flavor of how they can be constructed. Arousal levels can be incorporated into models via factors such as neurotransmitter level fluctuations, and the effects of such fluctuations on the sub-threshold and/or firing properties of neurons. Changes in behavioral state can also change the inputs to various brain areas from the body and from the outside world, which in turn will affect the network activity mode. This is a vast topic for theory and computational modeling.
What is the purpose of brain waves?
This is actually still a very contentious issue. The field of neuroscience can be broadly divided into researchers who care about oscillations, and researchers who don’t. People who don’t care about oscillations are more interested in the firing activities and the input-output transformations of neurons and networks. Some of these researchers even go so far as to claim that oscillations are ‘epiphenomena’ — mere side-effects of the ‘main’ neural processes, such as integration, contrast enhancement, switching, resetting, and so on. I was broadly in the ‘skeptics’ category for many years, but I’ve started to realize that oscillations can’t be ignored. The power in various frequency bands often correlates strongly with behavioral measures, so oscillations are at the very least telling us something important about how the brain works.
One theory of brain waves that is becoming popular is the idea of coordination, or “communication through coherence”. The idea is that neurons which are in the same sub-threshold oscillatory state are more likely to be able to communicate spikes with each other. This is shown diagrammatically below [2]. The black neuron is out of phase with the blue neuron, so it communicates with the blue neuron less effectively than the red neuron does.
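A cartoon version of the same idea in numbers (all values made up): identical input volleys arrive at the receiving cell either at the peak or at the trough of its sub-threshold oscillation, and only the in-phase volley gets through.

```python
import numpy as np

# Toy sketch of "communication through coherence": the same-sized input is
# more effective when it arrives at the peak of the receiver's sub-threshold
# oscillation than at the trough. All numbers are illustrative.
v_rest = -70.0          # mV
v_thresh = -55.0        # mV
osc_amp = 5.0           # sub-threshold oscillation amplitude (mV)
epsp = 12.0             # depolarization caused by a volley of input spikes (mV)

def receiver_voltage(phase):
    """Membrane potential of the receiver at a given oscillation phase."""
    return v_rest + osc_amp * np.cos(phase)

for label, phase in [("in phase (peak)", 0.0), ("anti-phase (trough)", np.pi)]:
    v_at_arrival = receiver_voltage(phase) + epsp
    fires = v_at_arrival >= v_thresh
    print(f"input arriving {label:>20s}: v = {v_at_arrival:.1f} mV -> "
          f"{'spike' if fires else 'no spike'}")
```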
As I mentioned earlier, synchrony and rhythmicity are completely distinct. Presumably you can get coherence without any rhythms, just by synchronizing groups of neurons. But maybe rhythmic behavior is easier to control.
I’ve only scratched the surface of this topic. There is definitely a lot more to the story of brain waves, and in the coming decades hopefully researchers will work towards an integrated theory.
Images from:
[1] Cortical enlightenment: are attentional gamma oscillations driven by ING or PING? | pdf
[2] A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. | pdf
Further reading
* What do inhibitory neurons do in the brain? by Yohan John on Neurologism
What changes occur in the brain when we close our eyes?
Medical Imaging: How strong is the correlation between a fMRI and an EEG?
My paper is finally out in PLoS Computational Biology. It’s an open access journal, so everyone can read it:
Here’s the Author Summary, which is a simplified version of the abstract:
“Emotional experiences grab our attention. Information about the emotional significance of events helps individuals weigh opportunities and dangers to guide goal-directed behavior, but may also lead to irrational decisions when the stakes are perceived to be high. Which neural circuits underlie these contrasting outcomes? A recently discovered pathway links the amygdala—a key center of the emotional system—with the inhibitory thalamic reticular nucleus (TRN) that filters information between the thalamus and cortex. We developed a neural network model—the Emotional Gatekeeper—that demonstrates how the newly discovered pathway from the amygdala to TRN highlights relevant information to help assess threats and opportunities. The model also shows how the amygdala-TRN pathway can lead normal individuals to discount neutral but useful information in highly charged emotional situations, and predicts that disruption of specific nodes in this circuit underlies distinct psychiatric disorders.”

Here’s the full citation:
John YJ, Zikopoulos B, Bullock D, Barbas H (2016) The Emotional Gatekeeper: A Computational Model of Attentional Selection and Suppression through the Pathway from the Amygdala to the Inhibitory Thalamic Reticular Nucleus. PLoS Comput Biol 12(2): e1004722. doi:10.1371/journal.pcbi.1004722
A recent Quora answer I wrote:
Is it possible for the Internet to one day gain consciousness?
I was asked the question “From a scientific point of view, how are our tastes created?” Here’s my answer.
“There’s no accounting for taste!”
Typically we explain taste — in food, music, movies, art — in terms of culture, upbringing, and sheer chance. In recent years there have been several attempts to explain taste from biological perspectives: either neuroscience or evolutionary psychology. In my opinion these types of explanations are vague enough to always sound true, but they rarely contain enough detail to account for the specific tastes of individuals or groups. Still, there’s much food for thought in these scientific proto-theories of taste and aesthetics.
[An early aesthete?]
Let’s look at the evolutionary approach first. An evolutionary explanation of taste assumes that human preferences arise from natural selection. We like salt and sugar and fat, according to this logic, because it was beneficial for our ancestors to seek out foods with these tastes. We like landscape scenes involving greenery and water bodies because such landscapes were promising environments for our wandering ancestors. This line of thinking is true as far as it goes, but it doesn’t go that far. After all, there are plenty of people who don’t much care for deep-fried salty-sweet foods. And many people who take art seriously quickly tire of clichéd landscape paintings.
[Are you a Homo sapiens? Then you must love this. 😉 ]
Evolutionary psychology can provide broad explanations for why humans as a species tend to like certain things more than others, but it really provides us with no map for navigating differences in taste between individuals and groups. (These obvious, glaring limitations of evolutionary psychology have not prevented the emergence of a cottage industry of pop science books that explain everything humans do as consequences of the incidents and accidents that befell our progenitor apes on the savannahs of Africa.)
Explanations involving the neural and cognitive sciences get closer to what we are really after — an explanation of differences in taste — but not by much. Neuroscientific explanations are essentially half way between cultural theories and evolutionary theories. We like things because the ‘pleasure centers’ in our brains ‘light up’ when we encounter them. And the pleasure centers are shaped by experience (on the time scale of a person’s life), and by natural selection (on the time scale of the species). Whatever we inherit because of natural selection is presumably common to all humans, so differences in taste must be traced to differences in experience, which become manifest in the brain as differences in neural connectivity and activity. If your parents played the Beatles for you as a child, and conveyed their pleasure to you, then associative learning might cause the synapses in your brain that link sound patterns with emotional reactions to be gradually modified, so that playing ‘Hey Jude’ now triggers a cascade of neural events that generate the subjective feeling of enjoyment.
[What’s not to love about the Beatles?]
But there is so much more to the story of enjoyment. Not everyone likes their parents’ music. In English-speaking countries there is a decades-old stereotype of the teenager who seeks out music to piss off his or her parents. And many of us have a friend who insists on listening to music that no one else seems to have heard of. What is the neural basis of this fascinating phenomenon?
We must now enter extremely speculative territory. One of the most thought-provoking ‘theories’ of aesthetics that I have come across was proposed by a machine learning researcher named Jürgen Schmidhuber. He has a provocative way of summing up his theory: Interestingness is the first derivative of beauty.
What he means is that we are not simply drawn to things that are beautiful or pleasurable. We are also drawn to things that are interesting: things that somehow intrigue us and capture our attention. These things, according to Schmidhuber, entice us with the possibility of enhancing our categories of experience. In his framework, humans and animals are constantly seeking to understand the environment, and in order to do this, they must be drawn to the edge of what they already know. Experiences that are already fully understood offer no opportunity for new learning. Experiences that are completely beyond comprehension are similarly useless. But experiences in the sweet spot of interestingness are neither boringly familiar nor bafflingly alien. By seeking out experiences in this ‘border territory’, we expand our horizons, gaining a new understanding of the world.
For example, I’m a Beatles fan, but I don’t listen to the Beatles that often. I am, however, intrigued by music that is ‘Beatlesque’: such music can lead me in new directions, and also reflect back on the Beatles, giving me a deeper appreciation of their music.
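Here is one way to cash out that intuition as a toy calculation; it is my own illustrative sketch in the spirit of Schmidhuber’s “learning progress” formulation, not his actual algorithm. The stimulus that scores highest is not the easiest or the hardest to predict, but the one the observer is currently getting better at predicting.

```python
import numpy as np

# Toy sketch in the spirit of Schmidhuber's "learning progress" idea:
# interestingness is the rate at which prediction error is improving,
# not the error itself. The error curves below are invented for illustration.
rng = np.random.default_rng(1)
steps = np.arange(50)

prediction_error = {
    "fully familiar":          0.05 * np.ones(50),                       # already learned
    "learnable (interesting)": np.exp(-steps / 10.0),                    # error shrinking
    "incomprehensible noise":  1.0 + 0.02 * rng.standard_normal(50),     # never improves
}

for name, err in prediction_error.items():
    progress = err[:-1] - err[1:]          # drop in error per step = learning progress
    print(f"{name:25s} average learning progress: {progress.mean():+.4f}")

# The "learnable" stimulus wins: it sits in the sweet spot where the observer
# is neither bored (nothing left to learn) nor overwhelmed (nothing learnable).
```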
The basic intuition of this theory is well-supported by research in animals and humans. Animals all have some baseline level of curiosity. Lab rats will thoroughly investigate a new object introduced into their cages. Novelty seems to have a gravitational pull for organisms.
But again, there are differences even in this tendency. Some people are perfectly content to eat the same foods over and over again, or listen to the same songs or artists. At the other extreme we find the freaks, the hipsters, the critics, the obsessives, and all the assorted avant garde seekers of “the Shock of the New”.
Linking back to evolutionary speculation, all we can really say is that even the desire for novelty is a variable trait in human populations. (Actually it’s multiple traits: I am far more adventurous when it comes to music than food.) Perhaps a healthy society needs its ‘conservatives’ and its ‘progressives’ in the domain of taste and aesthetic experience. Group selection — natural selection operating on tribes, societies and cultures — is still somewhat controversial in mainstream evolutionary biology, so to go any further in our theories of taste we have to be willing to wander on the wild fringes of scientific thought…
… those fringes are, after all, where everything interesting happens! 🙂

For more speculation on interestingness, beauty, and the pull of the not-completely-familiar, see this essay I wrote, where I go into more detail about Schmidhuber’s theory of interestingness:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art
This has nothing to do with science, but I find this David Mitchell video on taste very funny:
After writing this answer I realized that the questioner was most probably asking about gustation — meaning, the sense of taste. Oh well.
Here’s my answer to a recent Quora question: Where do our thoughts come from?
Thoughts come from nowhere! And from everywhere! I think both answers contain an element of truth.
Subjectively, our thoughts come from nowhere: they just pop into our heads, or emerge in the form of words leaving our mouths.
Objectively, we can say that thoughts emerge from neural processes, and that neural processes come from everywhere. What I mean by this is that the forms and dynamics of thought are influenced by everything that has a causal connection with you, your society, and your species.
We don’t know exactly how thoughts emerge from the activity of neurons — or even how to define what a thought is in biological terms (!)— but there is plenty of indirect evidence to support the general claim that the brain is where thoughts emerge.
The neuronal patterns that mediate and enable thought and behavior have proximal and distal causes.
The proximal causes are the stimuli and circumstances we experience. These experiences have causal impacts on our bodies, and are also partly caused by our bodies. The forces inside and outside the body become manifest in the brain as ‘clouds’ of information. In the right circumstances these nebulous patterns can condense into streams of thought. We can add to these identifiable causes the mysterious element of randomness: that seemingly ever-present “ghost in the machine” that makes complex processes such as life fundamentally unpredictable. Perhaps randomness is what provides the ‘seeds’ around which the condensation of thoughts can occur.
The distal causes are our experiential history and our evolutionary pre-history. Our experiential history consists of the things we’ve learned, consciously and unconsciously, and the various events that have shaped our bodies and our neural connections in large and small ways. Our evolutionary pre-history is essentially the experiential history of our species, and more generally of life itself, going back all the way to the first single-celled organism. The traits of a species are a sort of historical record of successes and failures. And going even further, life ultimately takes its particular forms because of the possibilities inherent in matter — and this takes us all the way to the formation of stars and planets.
An answer I wrote to the Quora question Does the human brain work solely by pattern recognition?:
Great question! Broadly speaking, the brain does two things: it processes ‘inputs’ from the world and from the body, and generates ‘outputs’ to the muscles and internal organs.
Pattern recognition shows up most clearly during the processing of inputs. Recognition allows us to navigate the world, seeking beneficial/pleasurable experiences and avoiding harmful/negative experiences.* But recognition by itself doesn’t tell us which experiences are beneficial and which are harmful, so pattern recognition must be supplemented by associative learning: humans and animals must learn how patterns relate to each other, and to their positive and negative consequences.
And patterns must not simply be recognized: they must also be categorized. We are bombarded by patterns all the time. The only way to make sense of them is to categorize them into classes that can all be treated similarly. We have one big category for ‘snake’, even though the sensory patterns produced by specific snakes can be quite different. Pattern recognition and classification are closely intertwined, so in what follows I’m really talking about both.
Creativity does have a connection with pattern recognition. One of the most complex and fascinating manifestations of pattern recognition is the process of analogy and metaphor. People often draw analogies between seemingly disparate topics: this requires creative use of the faculty of pattern recognition. Flexible intelligence depends on the ability to recognize patterns of similarity between phenomena. This is a particularly useful skill for scientists, teachers, artists, writers, poets and public thinkers, but it shows up all over the place. Many internet memes, for example, involve drawing analogies: seeing the structural connections between unrelated things.
One of my favourites is a meme on twitter called #sameguy. It started as a game of uploading pictures of two celebrities that resemble each other, followed by the hashtag #sameguy. But it evolved to include abstract ideas and phenomena that are the “same” in some respect. Making cultural metaphors like this requires creativity, as does understanding them. One has to free one’s mind of literal-mindedness in order to temporarily ignore the ever-present differences between things and focus on the similarities.
Here’s a blog that collects #sameguy submissions: Same Guy
On twitter you sometimes come across more imaginative, analogical #sameguy posts: #sameguy – Twitter Search
The topic of metaphor and analogy is one of the most fascinating aspects of intelligence, in my opinion. I think it’s far more important than coming up with theories about ‘consciousness’. 🙂 Check out this answer:
Why are metaphors and allusions used while writing?
(This Quora answer is a cross-post of a blog post I wrote: Metaphor: the Alchemy of Thought)
In one sense metaphor and analogy are central to scientific research. I’ve written about this here:
What are some of the most important problems in computational neuroscience?
Science: the Quest for Symmetry
This essay is tangentially related to the topic of creativity and patterns:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art
* The brain’s outputs — commands to muscles and glands — are closely linked with pattern recognition too. What you choose to do depends on what you can do given your intentions, circumstances, and bodily configuration. The state that you and the universe happen to be in constrains what you can do, and so it is useful for the brain to recognize and categorize the state in order to mediate decision-making, or even non-conscious behavior. When you’re walking on a busy street, you rapidly process the pathways that are available to you. Even if you stumble, you can quickly and unconsciously act to minimize damage to yourself and others. Abilities of this sort suggest that pattern recognition is not purely a way to create an ‘image’ of the world, but also a central part of our ability to navigate it.
My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self
What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.

[…]

A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscienceʼs Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!