The Pentagon of Neuroscience — An Infographic/Listicle for Understanding the Neuroculture

Click here to go straight to the infographic. It should open in Firefox and Chrome.

Neuroscience has hit the big time. Every day, popular newspapers, websites and blogs offer up a heady stew of brain-related self-help (neuro-snake oil?) and gee-whiz science reporting (neuro-wow?). Some scientists and journalists — perhaps caught up in the neuro-fervor — throw caution to the wind, promising imminent brain-based answers to the kinds of questions that probably predate civilization itself: What is the nature of mind? Why do we feel the way we do? Does each person have a fundamental essence? How can we avoid pain and suffering, and discover joy, creativity, and interpersonal harmony?



Why can’t we perceive cells? Or atoms?

I was asked the following question on Quora:

Why can’t we see, touch, hear and smell on a cellular level? And what happens if we can?

Here’s what I wrote:

Essentially, we perceive the visible world in the way that we do because of our overall size, the shape of our eyes, and the sizes of objects in the world that are relevant to our voluntary behavior.

This question might seem silly, but a closely related question can serve as a springboard for us to think very deeply about physical scale, and how it relates to biological life and to the very concept of a scientific law.


But first, let's deal with the basic question.

Let’s pull out a section from the question comment, which helps clarify the question:

“And if I had a microscopic glasses(as if) since child seeing on cellular level, would that be too much for the capacity of the brain?”

When you perceive something or measure it with a device, there is almost always a trade-off between scope and resolution. We all experience this regularly. When you zoom in on Google maps, you get more detail, but for a smaller region. When you zoom out, you get a wider view, but everything is less detailed.

This is also true of measuring devices like microscopes and telescopes. If you want both detail (high res) and a wide view (large scope), then you typically have to piece together multiple narrow high-res ‘views’.
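The scope–resolution trade-off can be made concrete with a toy calculation. The sketch below assumes a hypothetical fixed-size sensor (the numbers are made up for illustration): with a fixed budget of pixels, zooming out means each pixel must cover more of the world, so detail is lost.

```python
# Fixed information budget: a sensor with a set number of pixels can
# trade field of view (scope) against detail (resolution).
PIXELS = 1_000_000  # hypothetical 1-megapixel sensor
PIXELS_ACROSS = int(PIXELS ** 0.5)  # a 1000 x 1000 grid

def resolution_mm_per_pixel(scene_width_mm, pixels_across):
    """Smallest distinguishable feature when the whole scene fills the sensor."""
    return scene_width_mm / pixels_across

# Zoomed out: a 10-metre-wide scene -> each pixel covers 10 mm
wide = resolution_mm_per_pixel(10_000, PIXELS_ACROSS)
# Zoomed in: a 10-cm-wide scene -> each pixel covers 0.1 mm
narrow = resolution_mm_per_pixel(100, PIXELS_ACROSS)

print(wide, narrow)  # 10.0 0.1
```

The same arithmetic explains why a microscope or telescope must stitch together many narrow high-resolution views to cover a wide field.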

For both humans and machines, the need for piecing together multiple views or ‘snapshots’ of the world relates to a limit on the amount of information that can be processed simultaneously.

I can give you a super-high-res map of your city that is as big as a room, but because of your size and the size and shape of your eyes, only a certain amount of this map can be visible at a given time. This is a physical constraint. You can get around this by standing far away, or by using special lenses.

But there are additional constraints within our visual system.

The human eye possesses a region of high sensitivity called the fovea centralis. We can only process detailed visual information if it arrives at this part of the eye. The foveal region is around 2 degrees of arc, which is roughly the width of your thumb when held at arm’s length. Everything outside this region is perceived rather poorly. So it is by moving our eyes that we create a picture of the world.
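The "width of your thumb at arm's length" claim is easy to check with basic trigonometry. A minimal sketch, assuming an arm's length of about 60 cm (an illustrative figure, not from the original):

```python
import math

# The fovea spans roughly 2 degrees of visual angle. How wide is a
# 2-degree patch at arm's length (assumed here to be ~60 cm)?
arm_length_cm = 60.0
fovea_deg = 2.0

# Half-angle geometry: width = 2 * d * tan(theta / 2)
width_cm = 2 * arm_length_cm * math.tan(math.radians(fovea_deg / 2))
print(round(width_cm, 1))  # 2.1 -- about the width of a thumb
```

So the region of detailed vision really is only a couple of centimetres wide at reading distance; everything else is filled in by moving the eyes.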

Now imagine if you had to navigate the world but saw things in your fovea at the resolution of maybe ten or a hundred micrometres, which is the scale of biological cells. All other things being equal — such as body size and movement speed — you would have to look around for minutes or perhaps hours before you got any sense of where you were or what was in front of you. By the end of this period you might even forget what you saw initially!

This kind of visual resolution amounts to navigating the world with a microscope attached to your eyes! You’d have to be extremely still, since any slight movement would completely change what is in your field of view. And random breezes would bring dust particles and fluff into your field of view. It would be mind-numbingly irritating and laborious! 🙂

Perhaps an alternative would be to look at things from thousands of miles away. But that would be pretty pointless for an organism that is meter-scaled, not mile-scaled! By the time you reach your destination, it might not even exist any more!

Here’s another way to think about this: the objects that humans are interested in — food, mates, predators, shelter, clothing — exist on scales far larger than cells. If we ate food one cell at a time, we’d be searching for food all the time, and would probably never even fill our stomachs!

If we were amoeba-sized, we’d be interested in objects that exist at that scale. But we would not see such objects in any normal sense, since as far as we know, vision requires the complex multi-cellular apparatus of the eye and the nervous system.


A trickier issue

There is actually a subtle philosophical point that can be gleaned from this line of thinking. It’s probably best to proceed further only if you’ve already understood what I’ve written above, particularly the parts about the fovea and dust.

The physicist Erwin Schrödinger once asked a question that seems equally bizarre, in his book What Is Life?

Why are atoms so small?

What a strange question! Schrödinger refines it by asking why atoms are so small compared to us. His answer, essentially, is that organisms with the kind of perceptual apparatus we possess could only arise if the world in which they live contains law-like regularities that are relatively predictable, and therefore useful for survival. But atoms are governed by quantum mechanics and are unpredictable.

So it would be impossible to live and perceive at the level of atoms. You need to move upwards in scale until regularities start to emerge from the wild world of quantum noise. The world in which regularities become visible is the classical world! Intriguingly, life seems to emerge at precisely the scale at which quantum effects become less common — in fact life may even straddle the worlds of quantum and classical. Photosynthesis involves quantum mechanical processes, and some quantum effects may even occur in proteins and DNA. But multi-cellular life tries to find safe neighborhoods of scale, far away from quantum mischief!

Anyway, if you’re curious, here’s the relevant excerpt from Schrödinger’s book What is Life?

Why are atoms so small? A good method of developing ‘the naive physicist’s ideas’ is to start from the odd, almost ludicrous, question: Why are atoms so small? To begin with, they are very small indeed. Every little piece of matter handled in everyday life contains an enormous number of them.

[…]

Now, why are atoms so small? Clearly, the question is an evasion. For it is not really aimed at the size of the atoms. It is concerned with the size of organisms, more particularly with the size of our own corporeal selves. Indeed, the atom is small, when referred to our civic unit of length, say the yard or the metre. In atomic physics one is accustomed to use the so-called Angstrom (abbr. A), which is the 10^10th part of a metre, or in decimal notation 0.0000000001 metre. Atomic diameters range between 1 and 2A.

[…]

Why must our bodies be so large compared with the atom? I can imagine that many a keen student of physics or chemistry may have deplored the fact that everyone of our sense organs, forming a more or less substantial part of our body and hence (in view of the magnitude of the said ratio) being itself composed of innumerable atoms, is much too coarse to be affected by the impact of a single atom. We cannot see or feel or hear the single atoms. Our hypotheses with regard to them differ widely from the immediate findings of our gross sense organs and cannot be put to the test of direct inspection. Must that be so? Is there an intrinsic reason for it? Can we trace back this state of affairs to some kind of first principle, in order to ascertain and to understand why nothing else is compatible with the very laws of Nature? Now this, for once, is a problem which the physicist is able to clear up completely. The answer to all the queries is in the affirmative.

The working of an organism requires exact physical laws

If it were not so, if we were organisms so sensitive that a single atom, or even a few atoms, could make a perceptible impression on our senses — Heavens, what would life be like! To stress one point: an organism of that kind would most certainly not be capable of developing the kind of orderly thought which, after passing through a long sequence of earlier stages, ultimately results in forming, among many other ideas, the idea of an atom.

[…]

Physical laws rest on atomic statistics and are therefore only approximate

And why could all this not be fulfilled in the case of an organism composed of a moderate number of atoms only and sensitive already to the impact of one or a few atoms only? Because we know all atoms to perform all the time a completely disorderly heat motion, which, so to speak, opposes itself to their orderly behaviour and does not allow the events that happen between a small number of atoms to enroll themselves according to any recognizable laws. Only in the cooperation of an enormously large number of atoms do statistical laws begin to operate and control the behaviour of these assemblies with an accuracy increasing as the number of atoms involved increases. It is in that way that the events acquire truly orderly features.

You can read the complete passage, which is in Chapter 1 of the book, online here: [What is Life – pdf]
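Schrödinger's claim that "statistical laws begin to operate ... with an accuracy increasing as the number of atoms involved increases" is the familiar 1/√N law. A minimal simulation (illustrative only, modelling each atom's contribution as an independent coin flip) shows the relative fluctuation shrinking as N grows:

```python
import random

# Relative fluctuations shrink roughly as 1/sqrt(N): an "organism"
# sensitive to 100 atoms experiences a far noisier world than one
# averaging over 10,000 of them.
random.seed(0)

def relative_fluctuation(n_atoms, trials=200):
    """Average relative deviation from the expected count, over many trials."""
    devs = []
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n_atoms))
        devs.append(abs(heads - n_atoms / 2) / n_atoms)
    return sum(devs) / trials

for n in (100, 10_000):
    print(n, relative_fluctuation(n))
```

Running this, the 10,000-atom system fluctuates about ten times less, in relative terms, than the 100-atom one: that factor-of-√100 improvement is exactly why law-like regularity only emerges at large scales.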

The kind of brain training that actually works!

Just saw this on twitter:

Here’s the link to the (thus far unreviewed!) study:

How much does education improve intelligence? A meta-analysis

“… we found consistent evidence for beneficial effects of education on cognitive abilities, of approximately 1 to 5 IQ points for an additional year of education. Moderator analyses indicated that the effects persisted across the lifespan, and were present on all broad categories of cognitive ability studied. Education appears to be the most consistent, robust, and durable method yet to be identified for raising intelligence.”

Why human memory is not a bit like a computer’s

My latest 3QD essay is about the mystery of human memory, and why it is not at all like computer memory. I discuss the quirks of human memory formation and recall, and the concept of “content-addressable memory”.

3quarksdaily: Why human memory is not a bit like a computer’s

Here is an excerpt:

Decades of experience with electronics has led many people to think of memory as a matter of placing digital files in memory slots. It then seems natural to wonder about storage and deletion, capacity in bytes, and whether we can download information into the brain ‘directly’, as in the Matrix movies.

The computer metaphor may seem cutting edge, but its essence may be as old as civilization: it is the latest iteration of the “inscription metaphor”. Plato, for example, described memory in terms of impressions on wax-tablets — the hard drives of the era. According to the inscription metaphor, when we remember something, we etch a representation of it in a physical medium — like carvings on rock or ink on paper. Each memory is then understood as a discrete entity with a specific location in space. In the case of human beings, this space is between the ears. Some memory researchers even use the term “engram” to refer to the neural counterpart of a specific memory, further reifying the engraving metaphor.

Before getting to the problems with the inscription metaphor, I should say that at a sufficiently fuzzy level of abstraction, it is not entirely useless. There is plenty of neuroscientific evidence that memories are tied to particular brain regions; damage to these regions can weaken or eliminate specific memories. So the general concepts of physical storage and localizability are the least controversial aspect of the inscription metaphor (at least at first glance).

The issue with the inscription metaphor is that it leaves out the aspects of human memory that are arguably the most interesting and mysterious — how we acquire memories and how we evoke them. When we look more closely at how humans form and recall memories, we may even find that the storage and localizability ideas need to be revised.
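The contrast between location-based storage and "content-addressable memory" mentioned above can be sketched in a few lines of code. This is a crude illustration of the distinction only (the essay argues human memory fits neither model well), with made-up data:

```python
# Location-addressed retrieval, computer style: you need the exact
# address (slot index) to get anything back.
ram = ["grocery list", "a summer in Rome", "a friend's phone number"]
print(ram[1])  # retrieval requires knowing it lives in slot 1

# Content-addressed retrieval: a partial cue pulls out the
# best-matching stored item, no address required.
def recall(cue, memories):
    """Return the stored memory sharing the most words with the cue."""
    cue_words = set(cue.split())
    return max(memories, key=lambda m: len(cue_words & set(m.split())))

print(recall("that summer trip", ram))  # -> "a summer in Rome"
```

In the content-addressed case, a fragment of the memory itself acts as the retrieval key, which is closer to how a scent or a song cues a human recollection than any slot number is.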

Read the whole piece here:

3quarksdaily: Why human memory is not a bit like a computer’s


“Authentic bio-gibberish”

It turns out that one of the 2017 Nobel Laureates is quite a character!

“Jeffrey Hall, a retired professor at Brandeis University, shared the 2017 Nobel Prize in medicine for discoveries elucidating how our internal body clock works. He was honored along with Michael Young and his close collaborator Michael Rosbash. Hall said in an interview from his home in rural Maine that he collaborated with Rosbash because they shared common interests in “sports, rock and roll, beautiful substances and stuff.”

“About half of Hall’s professional career, starting in the 1980s, was spent trying to unravel the mysteries of the biological clock. When he left science some 10 years ago, he was not in such a jolly mood. In a lengthy 2008 interview with the journal Current Biology, he brought up some serious issues with how research funding is allocated and how biases creep into scientific publications.”

I highly recommend watching this video, where he comes up with the term “authentic bio-gibberish” to describe the overly technical jargon used by scientists.

What does the frontal lobe have to do with planning and decision-making?

I was asked the following question on Quora:

Why is planning & decision making situated in the frontal lobe?

Here’s my answer:

Why not? 🙂

I suppose the most obvious answer is the fact that the motor cortex resides in the frontal cortex.

Now you may justifiably ask: what does the motor cortex have to do with planning and decision-making?

The connection is this: to a large extent, the purpose of the brain is to control the body. So planning and decision-making, at the simplest possible level, involves determining when and how to move the body.

From this perspective, it is interesting to speculate that all thoughts derive from the process of virtual or simulated movement. Thought arises in the ‘gap’ between perception and action.

The way an organism interacts with its environment can be understood in terms of the perception-action loop.

Stimuli from the outside world enter the brain through the sensory organs and percolate through the various brain regions, allowing the organism to form neural ‘representations’ or ‘maps’ of the world. Signals originating inside the body (such as from the stomach or the lungs) allow for similar maps of the inner world of the organism.

By using memory to compare past experience with present conditions, an organism can anticipate the future to meet its current needs: either by acting in the present, or by planning an action for some future time.

The signals that control our voluntary muscles emanate from the motor cortex: the neurons in this part of the brain are the ‘switches’, ‘levers’ and ‘buttons’ that allow us to change our body position and configuration.

So going back one more step, the signals that influence the motor cortex constitute the ‘proximal’ decision signal. Much of the input to motor cortex comes from premotor and prefrontal areas, which are nearby in the frontal lobe. The thalamus also sends important signals to motor cortex, as do the neuromodulatory systems (which include the dopamine, acetylcholine, norepinephrine and serotonin systems).

Ultimately you can keep going ‘back’ to see how every part of the brain influences the ultimate decision: sensation, memory and emotion all play a role. But the prefrontal and premotor areas constitute the most easily identifiable decision areas.
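The perception-action loop described above can be sketched as a toy program. All the names and rules here are illustrative inventions, not a model of any real brain circuit; the point is only the shape of the loop: sense, map, compare with memory, act.

```python
# A toy perception-action loop: stimuli form an internal 'map', which is
# compared with memory and current needs to select an action.

def perceive(world_state):
    """Form a crude internal 'map' from raw sensory input."""
    return {"food_visible": world_state.get("food", 0) > 0}

def decide(internal_map, memory, hungry):
    """Compare present conditions with past experience to pick an action."""
    if hungry and internal_map["food_visible"]:
        return "approach food"
    if hungry and memory.get("last_food_location"):
        return "go to " + memory["last_food_location"]
    return "explore"

memory = {"last_food_location": "the riverbank"}
print(decide(perceive({"food": 0}), memory, hungry=True))
# -> "go to the riverbank"
```

In this sketch the `decide` step stands in for the prefrontal/premotor stage, and its return value for the 'proximal' signal sent to motor cortex; everything upstream (sensation, memory, internal state) shapes the decision without issuing the final command.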

As to why these brain areas are located in the frontal lobe at all… this is a much more difficult question. The short answer is evolution by natural selection. But the long answer is still incomplete. Brains are soft tissue, so they don’t leave fossils.

Is the mind a machine?

My latest 3QD essay explores the “mind as machine” metaphor, and metaphors in general.

Putting the “cog” in “cognitive”: on the “mind as machine” metaphor

Here’s an excerpt:

People who study the mind and brain often confront the limits of metaphor. In the essay ‘Brain Metaphor and Brain Theory‘, the vision scientist John Daugman draws our attention to the fact that thinkers throughout history have used the latest material technology as a model for the mind and body. In the Katha Upanishad (which Daugman doesn’t mention), the body is a chariot and the mind is the reins. For the pre-Socratic Greeks, hydraulic metaphors for the psyche were popular: imbalances in the four humors produced particular moods and dispositions. By the 18th and 19th centuries, mechanical metaphors predominated in western thinking: the mind worked like clockwork. The machine metaphor has remained with us in some form or the other since the industrial revolution: for many contemporary scientists and philosophers, the only debate seems to be about what sort of machine the mind really is. Is it an electrical circuit? A cybernetic feedback device? A computing machine that manipulates abstract symbols? Some thinkers are so convinced that the mind is a computer that they invite us to abandon the notion that the idea is a metaphor. Daugman quotes the cognitive scientist Zenon Pylyshyn, who claimed that “there is no reason why computation ought to be treated merely as a metaphor for cognition, as opposed to the literal nature of cognition”.

Daugman reacts to this Whiggish attitude with a confession of incredulity that many of us can relate to: “who among us finds any recognizable strand of their personhood or of their experience of others and of the world and its passions, to be significantly illuminated by, or distilled in, the metaphor of computation?” He concludes his essay with the suggestion that “[w]e should remember that the enthusiastically embraced metaphors of each “new era” can become, like their predecessors, as much the prisonhouse of thought as they at first appeared to represent its liberation.”

Read the rest at 3 Quarks Daily:

Putting the “cog” in “cognitive”: on the “mind as machine” metaphor