Neuroscience has hit the big time. Every day, popular newspapers, websites and blogs offer up a heady stew of brain-related self-help (neuro-snake oil?) and gee-whiz science reporting (neuro-wow?). Some scientists and journalists — perhaps caught up in the neuro-fervor — throw caution to the wind, promising imminent brain-based answers to the kinds of questions that probably predate civilization itself: What is the nature of mind? Why do we feel the way we do? Does each person have a fundamental essence? How can we avoid pain and suffering, and discover joy, creativity, and interpersonal harmony?
This post, which mostly provides useful citations, was originally written as an answer to the following Quora question:
It really depends on what you mean by “objective”, but the answer is “probably not”.
Since we do not understand the underlying causes of ADHD — or any major psychiatric disorder — we diagnose them based on clusters of symptoms.
In the United States and several other countries, most psychiatrists use a book called the Diagnostic and Statistical Manual of Mental Disorders (DSM).
The DSM (now in its fifth edition, DSM-5) essentially uses a system of checklists to enable a clinician to assess whether a person has a given disorder. It is a controversial book for various reasons, but for now it is what most psychiatrists use.
Instead of the complex philosophical question of ‘objectivity’, the usefulness of DSM can be assessed using statistical measures of ‘reliability’.
If one clinician uses DSM-5 to diagnose a patient with ADHD, how likely is a second clinician, independently applying DSM-5, to arrive at the same diagnosis? Measures of “test-retest reliability” capture this probability.
Here is a paper that explains the statistical measurement of reliability in some detail:
There are conflicting reports on the reliability of DSM-5, but here is one paper that reports statistical assessments:
“There were a total of 15 adult and eight child/adolescent diagnoses for which adequate sample sizes were obtained to report adequately precise estimates of the intraclass kappa. Overall, five diagnoses were in the very good range (kappa = 0.60–0.79), nine in the good range (kappa = 0.40–0.59), six in the questionable range (kappa = 0.20–0.39), and three in the unacceptable range (kappa values <0.20). Eight diagnoses had insufficient sample sizes to generate precise kappa estimates at any site.”
“Two were in the very good (kappa=0.60–0.79) range: autism spectrum disorder and ADHD.”
For more on the quantity reported here, kappa, see this paper:
The quantity kappa typically ranges from 0 to 1. Zero means that agreement between raters (clinicians in this case) was no better than chance, and 1 means there was perfect agreement. (Negative values, indicating worse-than-chance agreement, are possible but rare in practice.)
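To make the statistic concrete, here is a minimal sketch of plain Cohen’s kappa for two raters, with invented toy data (the field-trial paper actually reports the intraclass kappa, a related but distinct statistic, so treat this purely as an illustration of the chance-correction idea):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Fraction of cases on which the two raters agree outright.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters independently pick each category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    chance = sum(counts_a[c] * counts_b[c]
                 for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - chance) / (1 - chance)

# Two hypothetical clinicians diagnosing ten patients (toy data).
a = ["ADHD", "ADHD", "none", "ADHD", "none", "none", "ADHD", "none", "ADHD", "none"]
b = ["ADHD", "ADHD", "none", "none", "none", "none", "ADHD", "ADHD", "ADHD", "none"]
print(round(cohens_kappa(a, b), 2))  # → 0.6
```

Here the clinicians agree on 8 of 10 patients (80%), but because each diagnoses ADHD half the time, they would agree 50% of the time by pure chance — so kappa comes out at 0.6, in the “very good” band the field-trial paper uses.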
As I said before, the DSM is controversial — and not just because of reliability issues. Here is a sampling of papers and popular articles on the general topic:
“However, the standards for evaluating κ-statistics have relaxed substantially over time. In the early days of systematic reliability research, Spitzer and Fleiss  suggested that in psychiatric research κ-values ≥0.90 are excellent; values between 0.70 and 0.90 are good, while values ≤0.70 are unacceptable. In 1977, Landis and Koch  proposed the frequently used thresholds: values ≥0.75 are excellent; values between 0.40 and 0.75 indicate fair to good reliability, and values ≤0.40 indicate poor reliability. More recently, Baer and Blais  suggested that κ-values >0.70 are excellent; values between 0.60 and 0.70 are good; values between 0.41 and 0.59 are questionable, and values ≤0.40 are poor. Considering these standards, the norms used in the DSM-5 field trial are unacceptably generous.”
“Today, 26 years later, did the DSM system succeed in improving the reliability of psychiatric diagnoses? Two answers exist. The DSM did improve the reliability of psychiatric diagnoses at the research level. If a researcher or a clinician can afford to spend 2 to 3 hours per patient using the DSM criteria and a structured interview or a rating scale, the reliability would improve.  For psychiatrists and clinicians, who live in a world without hours to spare, the reliability of psychiatric diagnoses is still poor. [2,3]”
“The fifth revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) was the most controversial in the manual’s history. This review selectively surveys some of the most important changes in DSM-5, including structural/organizational changes, modifications of diagnostic criteria, and newly introduced categories. It analyzes why these changes led to such heated controversies, which included objections to the revision’s process, its goals, and the content of altered criteria and new categories. The central focus is on disputes concerning the false positives problem of setting a valid boundary between disorder and normal variation. Finally, this review highlights key problems and issues that currently remain unresolved and need to be addressed in the future, including systematically identifying false positive weaknesses in criteria, distinguishing risk from disorder, including context in diagnostic criteria, clarifying how to handle fuzzy boundaries, and improving the guidelines for “other specified” diagnosis.”
“You will need to display fewer and fewer symptoms to get labeled with certain disorders, for example Attention Deficit Disorder and Generalized Anxiety Disorder. Children will have more and more mental disorder labels available to pin on them. These are clearly boons to the mental health industry but are they legitimate additions to the manual that mental health professionals use to diagnose their clients?”
“This is the saddest moment in my 45 year career of studying, practicing, and teaching psychiatry. The Board of Trustees of the American Psychiatric Association has given its final approval to a deeply flawed DSM 5 containing many changes that seem clearly unsafe and scientifically unsound. My best advice to clinicians, to the press, and to the general public – be skeptical and don’t follow DSM 5 blindly down a road likely to lead to massive over-diagnosis and harmful over-medication. Just ignore the ten changes that make no sense.”
“Among the flashpoints: Asperger’s disorder will be folded into autism spectrum disorder; grief will no longer exempt someone from a diagnosis of depression; irritable children who throw frequent temper tantrums can be diagnosed with disruptive mood dysregulation disorder.
“One prominent critic has been Allen Frances, a professor emeritus of psychiatry at Duke University who chaired the DSM-IV task force.
“Frances charges that through a combination of new disorders and lowered thresholds, the DSM-5 is expanding the boundaries of psychiatry to encompass many whom he describes as the “worried well.””
I was asked the following question on Quora some time ago:
Here’s my answer:
People who have had a western-style education generally point to their head when they are talking about their mind, and to their chest when they are talking about their heart, which is the “metaphorical container” for many if not most emotions.
Around the world, people have always associated the heart with intense emotions — anger, love, fear and so on. This may be because these emotions are actually felt in the heart and lungs. When you are aroused by strong anger, love, or fear, you may feel that your chest is pounding — and it often is! Your emotional state can affect your heart-rate and your breathing.
So using the heart as the metaphorical container for emotion is quite understandable.
What is harder for us to understand is why some cultures — such as Ancient Egypt — use(d) the heart as their metaphorical container for all mental concepts, including intelligence.
Didn’t Ancient Egyptians know that head injuries affected behavior and intelligence? The answer is yes they did know*, but for some reason this knowledge wasn’t prominent in their literary culture, which was happy to stick with the heart metaphor.
We can speculate that certain cultures — both ancient and modern — identify the agent or person with the seat of emotion, rather than the seat of seeing, hearing, smelling and tasting (which is easily identified with the head). Personhood is a complex concept, and to this day no one fully understands what it is. There is nothing intrinsically wrong with identifying the Self with emotions, and then assigning the heart as the symbol for emotional experience.
Our language and gestures are symbolic, and ultimately any symbol will suffice to communicate a basic idea. You might wonder why we don’t point to some specific part of the head, for example, when we talk about a particular aspect of cognition — after all, we have rough scientific conceptions of neural processes now. The answer is that it doesn’t matter all that much for the purpose of communication.
Having said that, it’s helpful for understanding (and it’s also aesthetically pleasing!) if our symbols partially reflect the underlying biological process, which is why using the heart as a metaphor for the seat of emotion is still quite acceptable. In the same way, we might say that a surgeon or pianist has “good hands”, even though neuroscientists tend to agree that dexterity is largely achieved by neural connections in the head, not the hands.
One of my favorite books is The Origin of Consciousness in the Breakdown of the Bicameral Mind, by Julian Jaynes. It’s a deeply strange book, so you have to take all its conclusions with a hefty pinch of salt. But the main reason I love it is that Jaynes asks questions that many people simply neglect to ask in the first place.
For example, Jaynes asks how exactly body parts became “metaphorical containers” for abstract qualities. How did courage become associated with the gut, or emotion with the heart, or life-force with the air in the lungs (from which the word ‘psyche’ ultimately derives)?
Jaynes’s answer — which you don’t have to believe, of course — is that humans discovered the associations between body parts and attributes through violence and death. He thinks that ancient battlefields might have taught people where abstract attributes were ‘located’. A stomach injury might make a person decidedly less brave. And when a person exhales for the last time their life-force seems to leave the body.
This way of thinking can seem quite primitive, but the way we conduct neuroscience now is basically an outgrowth of the same logic. We see what function is lost when a particular brain area breaks down, and then we label that brain area as the seat of that function.
* See this answer for some quotes that show that at least some Ancient Egyptians were well aware of the importance of the head:
Just came across a nice little article by Ed Yong on how the two major phases of sleep — REM and slow-wave sleep — might contribute to creativity. These ideas have been floating around for a while, but it’s nice to see them in a pop sci article.
(This is a cross-post of a 3 Quarks Daily article I wrote last year.)
A few months ago I attended a rather peculiar seminar at MIT’s Department of Brain and Cognitive Sciences. A neuroscientist colleague of mine named Robert Ajemian had invited an unusual speaker: a man named Jim Karol, who was billed as having the world’s best memory. According to his website, his abilities include “knowing over 80,000 zip codes, thousands of digits of Pi, the Scrabble dictionary, sports almanacs, MEDICAL journals, and thousands of other facts.” He has memorized the day of the week for every date stretching back to 1 AD. And his abilities are not simply a matter of superhuman willingness to spend hours memorizing lists. He can add new items to his memory rapidly, on the fly. After a quick look at a deck of cards, he can recall perfectly the order in which they were shuffled. I witnessed him do this last ‘trick’, as well as a few others, so I can testify that his abilities are truly extraordinary.
I was asked this question on Quora:
Here’s my answer:
It’s very difficult to define what exactly the will is in neuroscientific terms. It seems best to reserve this word for the whole organism, rather than some part of it.
Some philosophers would say that attributing will to some sub-component of an organism is an example of the “mereological fallacy”. It’s like saying the stomach eats, or the brain thinks, or the legs walk. We use these kinds of phrases as a kind of poetic shorthand, but only a complete organism can be said to eat, think, or walk.
In the case of the two hemispheres, we also know that the left brain/right brain story — that one is “rational” and the other “holistic/artistic” — is wildly misguided. Some neural processes are indeed lateralized, but most normal tasks that humans perform require close integration and communication between the hemispheres.
But we do have to make sense of a common experience — being “in two minds” about something. Most people know what it is like to be in a conflicted state — multiple goals or biases seem to be tugging at us. Clearly decision-making involves a sort of “parliament” in the brain, in which multiple vested interests vie to enact legislation that suits them. 🙂
But the parliament metaphor should not be taken too seriously. There is little to be gained in anthropomorphizing neurons or groups of neurons. Neural ensembles might sometimes seem to behave as if they have a will, but that idea will not really help us understand decision-making, or the subjective feeling of having a will.
So brain areas don’t have likes or dislikes — organisms do, and brain areas mediate the processes by which these likes and dislikes become manifest.
For more on the problems with anthropomorphizing neural processes, see these two essays I wrote:
(This essay is partly a gentle critique of the Pixar movie Inside Out.)
(This essay explores the tendency of people, including neuroscientists, to think of the brain as a separate agent from the person as a whole.)
I admit that it is often fun to anthropomorphize neurons, which is what I do in the essay below. I paint a picture of a neural city and a neural economy, complete with start-ups and investors. 🙂
I was asked this question on Quora:
Here’s how I responded:
Here’s a question: in a system composed of purely mechanical parts — gears, say — is there a “code”?
I ask this because I find that the “code” metaphor is often misleading when thinking about biology. Codes are composed of symbols. But it is not clear that neurons communicate using symbols.
The way a neuron affects other neurons is more like how a gear affects other gears. There is no code — there is causality. An active neuron releases some neurotransmitter, and this in turn makes other neurons more active. It’s like a complex network of dominoes.
Does the idea of a “code” help us understand how one domino affects the next one in the chain?
I admit that by the time a human is thinking in terms of words and symbols, “code” is probably a useful metaphor for what is going on. But the origin of coding schemes remains a great mystery in neuroscience, cognitive science, and artificial intelligence. So I recommend starting with a much less loaded metaphor, such as clockwork or dominoes. Thinking in mechanical terms helps us realize what exactly neuroscience and AI research are trying to achieve.
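The “dominoes, not codes” picture can be sketched in a few lines. Below is a toy cascade model — entirely my own illustration, not anything from the neuroscience literature — in which each unit simply becomes active when enough of its inputs are active. Nothing in it encodes or decodes anything; activity just propagates causally:

```python
# Toy "domino" network: a unit fires when enough of its inputs have fired.
# There are no symbols here — only thresholds and causal propagation.

def cascade(connections, thresholds, initially_active):
    """Propagate activity until no new unit crosses its threshold."""
    active = set(initially_active)
    changed = True
    while changed:
        changed = False
        for unit, inputs in connections.items():
            if unit not in active:
                drive = sum(1 for i in inputs if i in active)
                if drive >= thresholds[unit]:
                    active.add(unit)
                    changed = True
    return active

# A little chain: retina -> A -> B -> C, where C needs two active inputs.
connections = {"A": ["retina"], "B": ["A"], "C": ["A", "B"]}
thresholds = {"A": 1, "B": 1, "C": 2}
print(sorted(cascade(connections, thresholds, {"retina"})))
# → ['A', 'B', 'C', 'retina']
```

Note that asking “what does unit B’s activity *mean*?” adds nothing to our understanding of this system — the full story is just which dominoes knocked over which others. That is the sense in which the mechanical metaphor is less loaded than the code metaphor.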
For now, there is a fascinating gap in our understanding of what exactly codes are in the first place.
Anyway, if you are interested in the causal “domino effect” that starts at the retina, have a look at this answer:
“In our world,” said Eustace, “a star is a huge ball of flaming gas.”
“Even in your world, my son, that is not what a star is but only what it is made of…”
The Voyage of the Dawn Treader, CS Lewis
I really like the quote above, which is from the Chronicles of Narnia. It raises a neat little metaphysical question:
Why do we assume that what a thing is made up of is what a thing is?
As a neuroscientist I have to point out that no one really knows what a thought is from a scientific perspective. This means that we don’t know what we would need to measure in order to ‘decode’ a person’s thoughts. For the foreseeable future, I cannot look at a brain scan and say, “This person is definitely thinking about pineapples!”
Of course, thoughts seem to be closely linked with neural patterns in the brain, and those patterns are clearly linked with electro-chemical signaling. Tinkering with the signaling clearly tinkers with the thinking. Otherwise the effects of drugs such as alcohol and coffee on thought would be a mystery. Perhaps some day we will have a scanner that tells us what a person is thinking of.
Matter and form
While I admit that electro-chemical signals being tossed about is a necessary precondition for thinking — no phenomenon that lacks such tossing will be unanimously labeled as thinking — I think that material constituency is a less than stellar guide for thinking about what something is.
Consider charcoal, diamonds, graphite, and graphene. These are made up of carbon. But is that all there is to the story of what they are? I hope the answer is an emphatic no, since they all have radically different properties. Charcoal is black and relatively soft. Diamonds are transparent and exceptionally hard. Graphene and graphite conduct electricity whereas other forms do not.
What explains the differences between the various allotropes of carbon? Clearly it isn’t what they are made of — it’s the same stuff in each case.
Eight allotropes of carbon: a) diamond, b) graphite, c) lonsdaleite, d) C60 buckminsterfullerene, e) C540 fullerite, f) C70, g) amorphous carbon, and h) single-walled carbon nanotube. Source: Wikipedia
What differs among the allotropes is the arrangements of carbon atoms. In other words, form is as important as ‘content’. Depending on how you arrange carbon atoms, you will end up with something soft and opaque or hard and transparent. Clearly the properties of the substances are not to be found in the properties of the atoms.
This is generally true for most complex and interesting objects and processes. You can boil them down to some set of elements — and these elements may be subatomic particles, atoms, molecules, genes, cells, neurotransmitters — but some defining feature of the overarching process will be missing from the constituent parts considered in isolation, just as transparency or opacity is missing from individual carbon atoms.
In complex systems theory and in condensed matter physics, the word emergence is often used to describe the phenomenon by which collections of matter acquire new properties as a result of arrangement or sheer scale. Chemistry is full of examples. Oxygen is a gas. Hydrogen is a gas. But when they combine in the right way, they produce water, which is a liquid — and one with all kinds of properties that can’t be predicted from first principles by analyzing the constituent parts.
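A standard toy model of emergence — my illustration here, not something from the chemistry examples above — is Conway’s Game of Life. Every cell obeys the same trivial local rule, yet certain arrangements of cells acquire collective properties, like a “glider” that travels across the grid, that no single cell possesses:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells.
    A cell is alive next step if it has exactly 3 live neighbours,
    or if it is currently alive and has exactly 2."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic five-cell "glider" pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)
# After four generations, the same shape reappears one cell away diagonally.
print(after4 == {(x + 1, y + 1) for (x, y) in glider})  # → True
```

“Moving diagonally” is not a property of any cell or of the update rule; it exists only at the level of the arrangement — which is the point the carbon-allotrope story makes about matter and form.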
Are thoughts emergent?
Since we don’t know exactly what thoughts are, we cannot say for sure whether they are emergent phenomena or not. But we can indirectly infer that they are by considering properties of thoughts and comparing them with properties of chemicals being tossed around in the brain.
A hallmark of thoughts is that they are about things. When you are thinking about a pineapple, there is an “aboutness” relation between the thought and the pineapple. Thoughts refer to things — which may be real things in the world, or imaginary things like dragons. This is a distinctive feature of mental phenomena, and the philosophers call it intentionality. (Note that intentionality has nothing to do with intentions or motivations — it’s not the best term, but that’s where you’ll find the relevant writings.)
This “aboutness” or “intentionality” is not a feature of chemical tossing patterns. A pattern is a pattern is a pattern, and isn’t intrinsically about any other pattern. At the very least, we can say that modern physics and chemistry have had no reason to invent an “aboutness” concept so far. In other words, there is no purely physical theory of reference.
So it seems reasonable to at least consider the possibility that the property of “aboutness” emerges when matter is arranged in just the right way.
“Is there more to it?”
This admittedly abstract concept is not really going to satisfy people who were hoping that thoughts were actually composed of “magic dust”, as Sam Moss quite rightly termed it. Thoughts are not “made up” of some special secret sauce. If you look at a brain — or any other tissue — under a microscope, all you see are cells. And cells are made up of atoms — mostly carbon, hydrogen and oxygen, with some crucial cameos by nitrogen, calcium, phosphorus, sulfur, sodium, potassium, magnesium and chloride.
So is that all a brain or a body is? A stew of a dozen elements? If you followed the story with carbon, then you’ll know that the answer is no. The arrangement of the atoms makes all the difference in the world.
But does this mean that “there is more to it”? If “more” implies a substance of which thoughts are made, then the answer is most likely no.
In any case, given that matter makes up everything, saying that something is “just” matter seems a bit unfair to matter — it’s about as magical a dust as you could possibly hope for!
Matter gives you the universe and you ask if there is more to it?! 😉
If you like, you can call arrangement or form the special “something more”. Arrangement is the “something more” that distinguishes charcoal from diamonds, and thought from nonsense.
But arrangement will not fulfill all the duties of magical dust. It is not the same as the traditional notion of a soul. A soul can live on without a body. But a form has no meaning without the constituent matter that is arranged.
So here’s the compromise: thoughts are made up of electro-chemical signals tossing around, but that is not what they are, since this definition does not distinguish in any useful way between thoughts and perceptions, feelings, moods, emotions or sensations — or even unconscious neural processes for that matter — all of which are also made up of electro-chemical signals.
So saying thoughts are electro-chemical signals is about as useful as saying diamonds are carbon. It’s true, but not in an especially interesting or informative sense.
If you’ve made it this far, well done! I guess I was having a slow Friday evening! 🙂
I know that what I’ve written is quite abstract, but that goes with the territory if you are thinking about thoughts.
Here are some answers that may be of interest:
- Yohan John’s answer to What is thought?
- Yohan John’s answer to Complexity: Can emergent phenomena be described mathematically?
This post was originally a Quora answer.