Why can’t we perceive cells? Or atoms?

I was asked the following question on Quora:

Why can’t we see, touch, hear and smell on a cellular level? And what happens if we can?

Here’s what I wrote:

Essentially, we perceive the visible world in the way that we do because of our overall size, the shape of our eyes, and the sizes of objects in the world that are relevant to our voluntary behavior.

This question might seem silly, but a closely related question can serve as a springboard for us to think very deeply about physical scale, and how it relates to biological life and to the very concept of a scientific law.


But first, let's deal with the basic question.

Let’s pull out a section from the question comment, which helps clarify the question:

“And if I had a microscopic glasses(as if) since child seeing on cellular level, would that be too much for the capacity of the brain?”

When you perceive something or measure it with a device, there is almost always a trade-off between scope and resolution. We all experience this regularly. When you zoom in on Google maps, you get more detail, but for a smaller region. When you zoom out, you get a wider view, but everything is less detailed.

This is also true of measuring devices like microscopes and telescopes. If you want both detail (high res) and a wide view (large scope), then you typically have to piece together multiple narrow high-res ‘views’.

For both humans and machines, the need for piecing together multiple views or ‘snapshots’ of the world relates to a limit on the amount of information that can be processed simultaneously.

I can give you a super-high-res map of your city that is as big as a room, but because of your size and the size and shape of your eyes, only a certain amount of this map can be visible at a given time. This is a physical constraint. You can get around this by standing far away, or by using special lenses.
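If you like numbers, here's a minimal back-of-envelope sketch of that trade-off. The pixel budget and resolutions are made-up, illustrative figures (not measured properties of the eye or of any map): with a fixed number of pixels per view, every tenfold gain in detail shrinks the area you can cover a hundredfold.

```python
# A minimal sketch of the scope-vs-resolution trade-off, using made-up numbers.
PIXEL_BUDGET = 1_000_000  # hypothetical: about one megapixel available per "view"

for metres_per_pixel in (0.0001, 0.001, 0.01, 0.1):
    # Side length of the square region that one view can cover at this resolution.
    side_m = metres_per_pixel * PIXEL_BUDGET ** 0.5
    print(f"{metres_per_pixel * 1000:>6.1f} mm per pixel -> one view covers "
          f"{side_m:,.1f} m x {side_m:,.1f} m")
```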

But there are additional constraints within our visual system.

The human eye possesses a small region of high acuity called the fovea centralis. We can only process detailed visual information if it arrives at this part of the eye. The foveal region covers around 2 degrees of arc, which is roughly the width of your thumb when held at arm’s length. Everything outside this region is perceived rather poorly. So it is by moving our eyes that we create a picture of the world.
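(As a quick sanity check on that 2-degree figure, here's a little calculation; the thumb and arm measurements are just assumed ballpark values.)

```python
import math

# Quick check that a thumb held at arm's length spans roughly 2 degrees of visual
# angle. The body measurements below are assumed, ballpark values.
thumb_width_cm = 2.0   # assumed typical thumb width
arm_length_cm = 60.0   # assumed typical arm length

visual_angle = math.degrees(2 * math.atan((thumb_width_cm / 2) / arm_length_cm))
print(f"Thumb at arm's length subtends about {visual_angle:.1f} degrees")
# -> about 1.9 degrees, i.e. roughly the size of the 2-degree foveal window
```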

Now imagine if you had to navigate the world but saw things in your fovea at the resolution of maybe ten or a hundred micrometres, which is the scale of biological cells. All other things being equal — such as body size and movement speed — you would have to look around for minutes or perhaps hours before you got any sense of where you were or what was in front of you. By the end of this period you might even forget what you saw initially!
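That "minutes or perhaps hours" figure is easy to sanity-check with a rough calculation. The numbers below (the patch covered per fixation, the saccade rate) are assumed, purely illustrative values; only the orders of magnitude matter.

```python
# Back-of-envelope estimate of how long it would take to survey a scene if each
# foveal "snapshot" covered only a cellular-scale patch. All numbers are assumed,
# illustrative values chosen for the order of magnitude, not measurements.

patch_mm = 1.0            # assume each fixation reveals a patch ~1 mm across
                          # (about a hundred 10-micrometre cells side by side)
fixations_per_second = 3  # roughly the rate of human saccadic eye movements

def seconds_to_scan(width_mm, height_mm):
    """Time needed to tile a width x height region with 1 mm snapshots."""
    n_fixations = (width_mm / patch_mm) * (height_mm / patch_mm)
    return n_fixations / fixations_per_second

print(f"One line across a 1 m doorway:  {seconds_to_scan(1000, 1) / 60:.0f} minutes")
print(f"A 10 cm x 10 cm patch (a face): {seconds_to_scan(100, 100) / 3600:.1f} hours")
print(f"The full 1 m x 1 m doorway:     {seconds_to_scan(1000, 1000) / 86400:.1f} days")
```

Scanning a single strip already takes minutes; anything two-dimensional quickly runs into hours or days.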

This kind of visual resolution amounts to navigating the world with a microscope attached to your eyes! You’d have to be extremely still, since any slight movement would completely change what is in your field of view. And random breezes would bring dust particles and fluff into your field of view. It would be mind-numbingly irritating and laborious! 🙂

Perhaps an alternative would be to look at things from thousands of miles away. But that would be pretty pointless for an organism that is meter-scaled, not mile-scaled! By the time you reach your destination, it might not even exist any more!

Here’s another way to think about this: the objects that humans are interested in — food, mates, predators, shelter, clothing — exist on scales far larger than cells. If we ate food one cell at a time, we’d be searching for food all the time, and would probably never even fill our stomachs!

If we were amoeba-sized, we’d be interested in objects that exist at that scale. But we would not see such objects in any normal sense, since as far as we know, vision requires the complex multi-cellular apparatus of the eye and the nervous system.


A trickier issue

There is actually a subtle philosophical point that can be gleaned from this line of thinking. It’s probably best to proceed further only if you’ve already understood what I’ve written above, particularly the parts about the fovea and dust.

The physicist Erwin Schrödinger once asked a question that seems equally bizarre, in his book What Is Life?

Why are atoms so small?

What a strange question! Schrödinger refines it by asking why atoms are so small compared to us. His answer, essentially, is that organisms with the kind of perceptual apparatus we possess could only arise if the world in which they live contains law-like regularities that are relatively predictable, and therefore useful for survival. But atoms are governed by quantum mechanics and are unpredictable.

So it would be impossible to live and perceive at the level of atoms. You need to move upwards in scale until regularities start to emerge from the wild world of quantum noise. The world in which regularities become visible is the classical world! Intriguingly, life seems to emerge at precisely the scale at which quantum effects become less common — in fact life may even straddle the worlds of quantum and classical. Photosynthesis involves quantum mechanical processes, and some quantum effects may even occur in proteins and DNA. But multi-cellular life tries to find safe neighborhoods of scale, far away from quantum mischief!

Anyway, if you’re curious, here’s the relevant excerpt from Schrödinger’s book What is Life?

Why are atoms so small?

A good method of developing ‘the naive physicist’s ideas’ is to start from the odd, almost ludicrous, question: Why are atoms so small? To begin with, they are very small indeed. Every little piece of matter handled in everyday life contains an enormous number of them.

[…]

Now, why are atoms so small? Clearly, the question is an evasion. For it is not really aimed at the size of the atoms. It is concerned with the size of organisms, more particularly with the size of our own corporeal selves. Indeed, the atom is small, when referred to our civic unit of length, say the yard or the metre. In atomic physics one is accustomed to use the so-called Ångström (abbr. Å), which is the 10^10th part of a metre, or in decimal notation 0.0000000001 metre. Atomic diameters range between 1 and 2 Å.

[…]

Why must our bodies be so large compared with the atom? I can imagine that many a keen student of physics or chemistry may have deplored the fact that every one of our sense organs, forming a more or less substantial part of our body and hence (in view of the magnitude of the said ratio) being itself composed of innumerable atoms, is much too coarse to be affected by the impact of a single atom. We cannot see or feel or hear the single atoms. Our hypotheses with regard to them differ widely from the immediate findings of our gross sense organs and cannot be put to the test of direct inspection. Must that be so? Is there an intrinsic reason for it? Can we trace back this state of affairs to some kind of first principle, in order to ascertain and to understand why nothing else is compatible with the very laws of Nature? Now this, for once, is a problem which the physicist is able to clear up completely. The answer to all the queries is in the affirmative.

The working of an organism requires exact physical laws

If it were not so, if we were organisms so sensitive that a single atom, or even a few atoms, could make a perceptible impression on our senses — Heavens, what would life be like! To stress one point: an organism of that kind would most certainly not be capable of developing the kind of orderly thought which, after passing through a long sequence of earlier stages, ultimately results in forming, among many other ideas, the idea of an atom.

[…]

Physical laws rest on atomic statistics and are therefore only approximate

And why could all this not be fulfilled in the case of an organism composed of a moderate number of atoms only and sensitive already to the impact of one or a few atoms only? Because we know all atoms to perform all the time a completely disorderly heat motion, which, so to speak, opposes itself to their orderly behaviour and does not allow the events that happen between a small number of atoms to enroll themselves according to any recognizable laws. Only in the cooperation of an enormously large number of atoms do statistical laws begin to operate and control the behaviour of these assemblies with an accuracy increasing as the number of atoms involved increases. It is in that way that the events acquire truly orderly features.
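Schrödinger's argument here is often summarized as the "square root of n" rule: in a collection of n atoms, relative fluctuations fall off roughly as 1/√n, so law-like regularity only appears when n is enormous. Here is a toy simulation of that idea (my own illustration, not from the book; it assumes numpy is available).

```python
import numpy as np

# A toy illustration (not from Schrödinger's text) of the "square root of n" point:
# statistical regularities sharpen as the number of participating atoms grows.
# Each "atom" independently does something random (say, drifts left rather than
# right with probability 1/2); we measure how much the aggregate count fluctuates
# relative to its mean. The 1/sqrt(n) law: a hundredfold more atoms gives roughly
# tenfold less relative noise.

rng = np.random.default_rng(0)
trials = 2000  # number of repeated "measurements" of the aggregate

for n_atoms in (100, 10_000, 1_000_000):
    counts = rng.binomial(n_atoms, 0.5, size=trials)  # atoms drifting "left" per trial
    rel_fluct = counts.std() / counts.mean()
    print(f"N = {n_atoms:>9,}: relative fluctuation ~ {rel_fluct:.4f} "
          f"(1/sqrt(N) = {n_atoms ** -0.5:.4f})")
```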

You can read the complete passage, which is in Chapter 1 of the book, online here: [What is Life – pdf]


Is the mind a machine?

My latest 3QD essay explores the “mind as machine” metaphor, and metaphors in general.

Putting the “cog” in “cognitive”: on the “mind as machine” metaphor

Here’s an excerpt:

People who study the mind and brain often confront the limits of metaphor. In the essay ‘Brain Metaphor and Brain Theory’, the vision scientist John Daugman draws our attention to the fact that thinkers throughout history have used the latest material technology as a model for the mind and body. In the Katha Upanishad (which Daugman doesn’t mention), the body is a chariot and the mind is the reins. For the pre-Socratic Greeks, hydraulic metaphors for the psyche were popular: imbalances in the four humors produced particular moods and dispositions. By the 18th and 19th centuries, mechanical metaphors predominated in western thinking: the mind worked like clockwork. The machine metaphor has remained with us in some form or other since the industrial revolution: for many contemporary scientists and philosophers, the only debate seems to be about what sort of machine the mind really is. Is it an electrical circuit? A cybernetic feedback device? A computing machine that manipulates abstract symbols? Some thinkers are so convinced that the mind is a computer that they invite us to abandon the notion that the idea is a metaphor. Daugman quotes the cognitive scientist Zenon Pylyshyn, who claimed that “there is no reason why computation ought to be treated merely as a metaphor for cognition, as opposed to the literal nature of cognition”.

Daugman reacts to this Whiggish attitude with a confession of incredulity that many of us can relate to: “who among us finds any recognizable strand of their personhood or of their experience of others and of the world and its passions, to be significantly illuminated by, or distilled in, the metaphor of computation?” He concludes his essay with the suggestion that “[w]e should remember that the enthusiastically embraced metaphors of each “new era” can become, like their predecessors, as much the prisonhouse of thought as they at first appeared to represent its liberation.”

Read the rest at 3 Quarks Daily:

Putting the “cog” in “cognitive”: on the “mind as machine” metaphor

Why an organism is not a “machine”

I just came across a nice article explaining why the metaphor of organism as machine is misleading and unhelpful.

The machine conception of the organism in development and evolution: A critical analysis

This excerpt makes a key point:

“Although both organisms and machines operate towards the attainment of particular ends – that is, both are purposive systems – the former are intrinsically purposive whereas the latter are extrinsically purposive. A machine is extrinsically purposive in the sense that it works towards an end that is external to itself; that is, it does not serve its own interests but those of its maker or user. An organism, on the other hand, is intrinsically purposive in the sense that its activities are directed towards the maintenance of its own organization; that is, it acts on its own behalf.”

In this section the author explains how the software/hardware idea found its way into developmental biology.

“The situation changed considerably in the mid-twentieth century with the advent of modern computing and the introduction of the conceptual distinction between software and hardware. This theoretical innovation enabled the construction of a new kind of machine, the computer, which contains algorithmic sequences of coded instructions or programs that are executed by a central processing unit. In a computer, the software is totally independent from the hardware that runs it. A program can be transferred from one computer and run in another. Moreover, the execution of a program is always carried out in exactly the same fashion, regardless of the number of times it is run and of the hardware that runs it. The computer is thus a machine with Cartesian and Laplacian overtones. It is Cartesian because the software/hardware distinction echoes the soul/body dualism: the computer has an immaterial ‘soul’ (the software) that governs the operations of a material ‘body’ (the hardware). And it is Laplacian because the execution of a program is completely deterministic and fully predictable, at least in principle. These and other features made the computer a very attractive theoretical model for those concerned with elucidating the role of genes in development in the early days of molecular biology.”

The machine conception of the organism in development and evolution: A critical analysis

I’ve actually criticized the genetic program metaphor myself, in the following 3QD essay:

3quarksdaily: How informative is the concept of biological information?

____

Image source: Digesting Duck – Wikipedia

“Conscious realism”: a new way to think about reality (or the lack thereof?)

[Venn diagram]

Interesting interview in the Atlantic with cognitive scientist Donald D. Hoffman:

The Case Against Reality

“I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.”

[…]

Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.

[…]

“As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.”

I don’t agree with everything in the article (especially the quantum stuff) but I think many people interested in consciousness and metaphysics will find plenty of food for thought here:

The Case Against Reality

Also, the “conscious agents all the way down” idea is the exact position I was criticizing in a recent 3QD essay:

3quarksdaily: Persons all the way down: On viewing the scientific conception of the self from the inside out

The diagram above is from a science fiction story I was working on, back when I was a callow youth. It is closely related to the idea of a network of conscious agents. Here’s another ‘version’ of it.

[TriHead diagram]

Not sure why I made it look so morbid. 🙂

Where do thoughts come from?

Here’s my answer to a recent Quora question: Where do our thoughts come from?

Thoughts come from nowhere! And from everywhere! I think both answers contain an element of truth.

Subjectively, our thoughts come from nowhere: they just pop into our heads, or emerge in the form of words leaving our mouths.

Objectively, we can say that thoughts emerge from neural processes, and that neural processes come from everywhere. What I mean by this is that the forms and dynamics of thought are influenced by everything that has a causal connection with you, your society, and your species.

We don’t know exactly how thoughts emerge from the activity of neurons — or even how to define what a thought is in biological terms (!)— but there is plenty of indirect evidence to support the general claim that the brain is where thoughts emerge.

The neuronal patterns that mediate and enable thought and behavior have proximal and distal causes.

The proximal causes are the stimuli and circumstances we experience. These experiences have causal impacts on our bodies, and are also partly caused by our bodies. The forces inside and outside the body become manifest in the brain as ‘clouds’ of information. In the right circumstances these nebulous patterns can condense into streams of thought. We can add to these identifiable causes the mysterious element of randomness: that seemingly ever-present “ghost in the machine” that makes complex processes such as life fundamentally unpredictable. Perhaps randomness is what provides the ‘seeds’ around which the condensation of thoughts can occur.

The distal causes are our experiential history and our evolutionary pre-history. Our experiential history consists of the things we’ve learned, consciously and unconsciously, and the various events that have shaped our bodies and our neural connections in large and small ways. Our evolutionary pre-history is essentially the experiential history of our species, and more generally of life itself, going back all the way to the first single-celled organism. The traits of a species are a sort of historical record of successes and failures. And going even further, life ultimately takes its particular forms because of the possibilities inherent in matter — and this takes us all the way to the formation of stars and planets.

Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

Here’s an excerpt:

What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.

[…]

A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscience’s Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!

Read the whole thing at 3QD:

The Neural Citadel — a wildly speculative metaphor for how the brain works

My latest 3QD essay is a bit of a wild one. I start by talking about Goodhart’s Law, a quirk of economics that I think has implications elsewhere. I try to link it with neuroscience, but in order to do so I first construct an analogy between the brain and an economy. We might not understand economic networks any better than we do neural networks, but this analogy is a fun way to re-frame matters of neuroscience and society.

Plan of a Citadel (from Wikipedia)

Here’s an excerpt:

The Neural Citadel

Nowadays we routinely encounter descriptions of the brain as a computer, especially in the pop science world. Just like computers, brains accept inputs (sensations from the world) and produce outputs (speech, action, and influence on internal organs). Within the world of neuroscience there is a widespread belief that the computer metaphor becomes unhelpful very quickly, and that new analogies must be sought. So you can also come across conceptions of the brain as a dynamical system, or as a network. One of the purposes of a metaphor is to link things we understand (like computers) with things we are still stymied by (like brains). Since the educated public has plenty of experience with computers, but at best nebulous conceptions of dynamical systems and networks, it makes sense that the computer metaphor is the most popular one. In fact, outside of a relatively small group of mathematically-minded thinkers, even scientists often feel most comfortable thinking of the brain as an elaborate biological computer. [3]

However, there is another metaphor for the brain that most human beings will be able to relate to. The brain can be thought of as an economy: as a biological social network, in which the manufacturers, marketers, consumers, government officials and investors are neurons. Before going any further, let me declare up front that this analogy has a fundamental flaw. The purpose of metaphor is to understand the unknown — in this case the brain — in terms of the known. But with all due respect to economists and other social scientists, we still don’t actually understand socio-economic networks all that well. Not nearly as well as computer scientists understand computers. Nevertheless, we are all embedded in economies and social networks, and therefore have intuitions, suspicions, ideologies, and conspiracy theories about how they work.

Because of its fundamental flaw, the brain-as-economy metaphor isn’t really going to make my fellow neuroscientists’ jobs any easier, which is why I am writing about it on 3 Quarks Daily rather than in a peer-reviewed academic journal. What the brain-as-economy metaphor does do is allow us to translate neural or mental phenomena into the language of social cooperation and competition, and vice versa. Even though brains and economies seem equally mysterious and unpredictable, perhaps in attempting to bridge the two domains something can be gained in translation. If nothing else, we can expect some amusing raw material for armchair philosophizing about life, the universe, and everything. [4]

So let’s paint a picture of the neural economy. Imagine that the brain is a city — the capital of the vast country that is the body. The neural citadel is a fortress; the blood-brain barrier serves as its defensive wall, protecting it from the goings-on in the countryside, and only allowing certain raw materials through its heavily guarded gates — oxygen and nutrients, for the most part. Fuel for the crucial work carried out by the city’s residents: the neurons and their helper cells. The citadel needs all this fuel to deal with its main task: the industrial scale transformation of raw data into refined information. The unprocessed data pours into the citadel through the various axonal highways.  The trucks carrying the data are dispatched by the nervous system’s network of spies and informants. Their job is to inform the citadel of the goings-on outside its walls. The external sense organs — the eyes, ears, nose, tongue and skin — are the body’s border patrols, coast guards, observatories, and foreign intelligence agencies. The muscles and internal organs, meanwhile, are monitored by the home ministry’s police and bureaucrats, always on the look-out for any domestic turbulence. (The stomach, for instance, is known to be a hotbed of labor unrest.)

The neural citadel enables an information economy — a marketplace of ideas, as it were. Most of this information is manufactured within the brain and internally traded, but some of it — perhaps the most important information — is exported from the brain in the form of executive orders, requests and the occasional plaintive plea from the citadel to the sense organs, muscles, glands and viscera. The purpose of the brain is definitely subject to debate — even within the citadel — but one thing most people can agree on is that it must serve as an effective and just ruler of the body: a government that marries a harmonious domestic policy — unstressed stomach cells, unblackened lung cells, radiant skin cells and resilient muscle cells — with a peaceful and profitable foreign policy. (The country is frustratingly dependent on foreign countries, over which it has limited control, for its energy and construction material.)

The citadel is divided into various neighborhoods, according to the types of information being processed. There are neighborhoods subject to strict zoning requirements that process only one sort of information: visions, sounds, smells, tastes, or textures. Then there are mixed use neighborhoods where different kinds of information are assembled into more complex packages, endlessly remixed and recontextualized. These neighborhoods are not arranged in a strict hierarchy. Allegiances can form and dissolve. Each is trying to do something useful with the information that is fed to it: to use older information to predict future trends, or to stay on the look-out for a particular pattern that might arise in the body, the outside world, or some other part of the citadel.  Each neighborhood has an assortment of manufacturing strategies, polling systems, research groups, and experimental start-up incubators. Though they are all working for the welfare of the country, they sometimes compete for the privilege of contributing to governmental policies. These policies seem to be formulated at the centers of planning and coordination in the prefrontal cortex — an ivory tower (or a corporate skyscraper, if you prefer merchant princes to philosopher kings) that has a panoramic view of the citadel. The prefrontal tower then dispatches its decisions to the motor control areas of the citadel, which notify the body of governmental marching orders.

~

The essay is not just about the metaphor though. There are bits about dopamine, and addiction, and also some wide-eyed idealism. 🙂 Check the whole thing out at 3 Quarks Daily.

For the record, there is a major problem with personifying neurons. It doesn’t actually explain anything, since we are just as baffled by persons as we are by neurons. Personifying neurons creates billions of microscopic homunculi. The Neural Citadel metaphor was devised in a spirit of play, rather than as serious science or philosophy.