Neuroscience has hit the big time. Every day, popular newspapers, websites and blogs offer up a heady stew of brain-related self-help (neuro-snake oil?) and gee-whiz science reporting (neuro-wow?). Some scientists and journalists — perhaps caught up in the neuro-fervor — throw caution to the wind, promising imminent brain-based answers to the kinds of questions that probably predate civilization itself: What is the nature of mind? Why do we feel the way we do? Does each person have a fundamental essence? How can we avoid pain and suffering, and discover joy, creativity, and interpersonal harmony?
A very important point about dopamine (DA) and reward, from a recent review paper:
“The DA hypothesis of reward is a ubiquitous feature of the scientific literature, as well as popular media, the internet, and film. Yet, despite the almost automatic tendency of some to explain virtually any aspect of DA function as somehow being dependent on reward, there are critical theoretical and empirical problems with this, many of which have been reviewed in detail elsewhere (Salamone et al., 1997, 2007; Salamone and Correa, 2002, 2012; Floresco, 2015; Nicola, 2016). First and foremost, the term reward has no consistent scientific meaning (Salamone et al., 2005; Salamone and Correa, 2012), and, depending upon the paper, or even the paragraph, this term is used variously to refer to subjective pleasure or hedonic reactivity, appetite, preference, and even reinforcement learning. Given the slippery and imprecise nature of this term, it is wholly inadequate to attribute specific effects in experiments simply to reward without any qualification or explication.” [Emphasis added]
Salamone, J. D., Correa, M., Ferrigno, S., Yang, J. H., Rotolo, R. A., & Presby, R. E. (2018). The psychopharmacology of effort-related decision making: Dopamine, adenosine, and insights into the neurochemistry of motivation. Pharmacological Reviews, 70(4), 747-762.
This post, which mostly provides useful citations, was originally written as an answer to the following Quora question:
It really depends on what you mean by “objective”, but the answer is “probably not”.
Since we do not understand the underlying causes of ADHD — or any major psychiatric disorder — we diagnose them based on clusters of symptoms.
In the United States and several other countries, a large number of psychiatrists use a book called Diagnostic and Statistical Manual of Mental Disorders (DSM).
The DSM (now in its fifth edition, DSM-5) essentially uses a system of checklists to enable a clinician to assess whether a person has a given disorder. This is a controversial book for various reasons, but for now it is what most psychiatrists use.
Instead of the complex philosophical question of ‘objectivity’, the usefulness of DSM can be assessed using statistical measures of ‘reliability’.
If one clinician uses the DSM-5 to diagnose a patient with ADHD, how likely is a second clinician, also using the DSM-5, to arrive at the same diagnosis? Measures of “test-retest reliability” capture this probability.
Here is a paper that explains the statistical measurement of reliability in some detail:
There are conflicting reports on the reliability of DSM-5, but here is one paper that reports statistical assessments:
“There were a total of 15 adult and eight child/adolescent diagnoses for which adequate sample sizes were obtained to report adequately precise estimates of the intraclass kappa. Overall, five diagnoses were in the very good range (kappa = 0.60–0.79), nine in the good range (kappa = 0.40–0.59), six in the questionable range (kappa = 0.20–0.39), and three in the unacceptable range (kappa < 0.20). Eight diagnoses had insufficient sample sizes to generate precise kappa estimates at any site.”
“Two were in the very good (kappa=0.60–0.79) range: autism spectrum disorder and ADHD.”
For more on the quantity reported here, kappa, see this paper:
The quantity kappa typically ranges from 0 to 1 (negative values are possible but rare). Zero means that agreement between raters (clinicians in this case) was no better than chance, and 1 means there was perfect agreement.
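To make the statistic concrete, here is a minimal sketch of Cohen’s kappa for two raters, computed as kappa = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance. The diagnoses below are entirely hypothetical data, invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of each
    # rater's marginal frequency for that label.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[lab] / n) * (counts_b[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Two clinicians diagnosing the same ten patients (hypothetical):
a = ["ADHD", "ADHD", "none", "none", "ADHD", "none", "ADHD", "none", "none", "ADHD"]
b = ["ADHD", "ADHD", "none", "ADHD", "ADHD", "none", "ADHD", "none", "none", "none"]
print(round(cohens_kappa(a, b), 2))  # 0.6: raw agreement is 0.8, chance is 0.5
```

Note that even 80% raw agreement yields a kappa of only 0.6 here, because the two labels are common enough that clinicians would agree half the time by chance alone.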
As I said before, the DSM is controversial — and not just because of reliability issues. Here is a sampling of papers and popular articles on the general topic:
“However, the standards for evaluating κ-statistics have relaxed substantially over time. In the early days of systematic reliability research, Spitzer and Fleiss suggested that in psychiatric research κ-values ≥0.90 are excellent; values between 0.70 and 0.90 are good, while values ≤0.70 are unacceptable. In 1977, Landis and Koch proposed the frequently used thresholds: values ≥0.75 are excellent; values between 0.40 and 0.75 indicate fair to good reliability, and values ≤0.40 indicate poor reliability. More recently, Baer and Blais suggested that κ-values >0.70 are excellent; values between 0.60 and 0.70 are good; values between 0.41 and 0.59 are questionable, and values ≤0.40 are poor. Considering these standards, the norms used in the DSM-5 field trial are unacceptably generous.”
“Today, 26 years later, did the DSM system succeed in improving the reliability of psychiatric diagnoses? Two answers exist. The DSM did improve the reliability of psychiatric diagnoses at the research level. If a researcher or a clinician can afford to spend 2 to 3 hours per patient using the DSM criteria and a structured interview or a rating scale, the reliability would improve. For psychiatrists and clinicians, who live in a world without hours to spare, the reliability of psychiatric diagnoses is still poor. [2,3]”
“The fifth revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) was the most controversial in the manual’s history. This review selectively surveys some of the most important changes in DSM-5, including structural/organizational changes, modifications of diagnostic criteria, and newly introduced categories. It analyzes why these changes led to such heated controversies, which included objections to the revision’s process, its goals, and the content of altered criteria and new categories. The central focus is on disputes concerning the false positives problem of setting a valid boundary between disorder and normal variation. Finally, this review highlights key problems and issues that currently remain unresolved and need to be addressed in the future, including systematically identifying false positive weaknesses in criteria, distinguishing risk from disorder, including context in diagnostic criteria, clarifying how to handle fuzzy boundaries, and improving the guidelines for “other specified” diagnosis.”
“You will need to display fewer and fewer symptoms to get labeled with certain disorders, for example Attention Deficit Disorder and Generalized Anxiety Disorder. Children will have more and more mental disorder labels available to pin on them. These are clearly boons to the mental health industry but are they legitimate additions to the manual that mental health professionals use to diagnose their clients?”
“This is the saddest moment in my 45 year career of studying, practicing, and teaching psychiatry. The Board of Trustees of the American Psychiatric Association has given its final approval to a deeply flawed DSM 5 containing many changes that seem clearly unsafe and scientifically unsound. My best advice to clinicians, to the press, and to the general public – be skeptical and don’t follow DSM 5 blindly down a road likely to lead to massive over-diagnosis and harmful over-medication. Just ignore the ten changes that make no sense.”
“Among the flashpoints: Asperger’s disorder will be folded into autism spectrum disorder; grief will no longer exempt someone from a diagnosis of depression; irritable children who throw frequent temper tantrums can be diagnosed with disruptive mood dysregulation disorder.
“One prominent critic has been Allen Frances, a professor emeritus of psychiatry at Duke University who chaired the DSM-IV task force.
“Frances charges that through a combination of new disorders and lowered thresholds, the DSM-5 is expanding the boundaries of psychiatry to encompass many whom he describes as the “worried well.””
I was asked the following question on Quora some time ago:
Here’s my answer:
People who have had a western-style education generally point to their head when they are talking about their mind, and to their chest when they are talking about their heart, which is the “metaphorical container” for many if not most emotions.
Around the world, people have always associated the heart with intense emotions — anger, love, fear and so on. This may be because these emotions are actually felt in the heart and lungs. When you are aroused by strong anger, love, or fear, you may feel that your chest is pounding — and it often is! Your emotional state can affect your heart-rate and your breathing.
So using the heart as the metaphorical container for emotion is quite understandable.
What is harder for us to understand is why some cultures — such as Ancient Egypt — use(d) the heart as their metaphorical container for all mental concepts, including intelligence.
Didn’t Ancient Egyptians know that head injuries affected behavior and intelligence? The answer is yes they did know*, but for some reason this knowledge wasn’t prominent in their literary culture, which was happy to stick with the heart metaphor.
We can speculate that certain cultures — both ancient and modern — identify the agent or person with the seat of emotion, rather than the seat of seeing, hearing, smelling and tasting (which is easily identified with the head). Personhood is a complex concept, and to this day no one fully understands what it is. There is nothing intrinsically wrong with identifying the Self with emotions, and then assigning the heart as the symbol for emotional experience.
Our language and gestures are symbolic, and ultimately any symbol will suffice to communicate a basic idea. You might wonder why we don’t point to some specific part of the head, for example, when we talk about a particular aspect of cognition — after all, we have rough scientific conceptions of neural processes now. The answer is that it doesn’t matter all that much for the purpose of communication.
Having said that, it’s helpful for understanding (and it’s also aesthetically pleasing!) if our symbols partially reflect the underlying biological process, which is why using the heart as a metaphor for the seat of emotion is still quite acceptable. In the same way, we might say that a surgeon or pianist has “good hands”, even though neuroscientists tend to agree that dexterity is largely achieved by neural connections in the head, not the hands.
One of my favorite books is The Origin of Consciousness in the Breakdown of the Bicameral Mind, by Julian Jaynes. It’s a deeply strange book, so you have to take all its conclusions with a hefty pinch of salt. But the main reason I love it is that Jaynes asks questions that many people simply neglect to ask in the first place.
For example, Jaynes asks how exactly body parts became “metaphorical containers” for abstract qualities. How did courage become associated with the gut, or emotion with the heart, or life-force with the air in the lungs (from which the word ‘psyche’ ultimately derives)?
Jaynes’s answer — which you don’t have to believe, of course — is that humans discovered the associations between body parts and attributes through violence and death. He thinks that ancient battlefields might have taught people where abstract attributes were ‘located’. A stomach injury might make a person decidedly less brave. And when a person exhales for the last time, their life-force seems to leave the body.
This way of thinking can seem quite primitive, but the way we conduct neuroscience now is basically an outgrowth of the same logic. We see what function is lost when a particular brain area breaks down, and then we label that brain area as the seat of that function.
* See this answer for some quotes that show that at least some Ancient Egyptians were well aware of the importance of the head:
Just came across a nice little article by Ed Yong on how the two major phases of sleep — REM and slow-wave sleep — might contribute to creativity. These ideas have been floating around for a while, but it’s nice to see them in a pop sci article.
(This is a cross-post of a 3 Quarks Daily article I wrote last year.)
A few months ago I attended a rather peculiar seminar at MIT’s Department of Brain and Cognitive Sciences. A neuroscientist colleague of mine named Robert Ajemian had invited an unusual speaker: a man named Jim Karol, who was billed as having the world’s best memory. According to his website, his abilities include “knowing over 80,000 zip codes, thousands of digits of Pi, the Scrabble dictionary, sports almanacs, MEDICAL journals, and thousands of other facts.” He has memorized the day of the week for every date stretching back to 1AD. And his abilities are not simply matter of superhuman willingness to spend hours memorizing lists. He can add new items to his memory rapidly, on the fly. After a quick look at a deck of cards, he can recall perfectly the order in which they were shuffled. I witnessed him do this last ‘trick’, as well as a few others, so I can testify that his abilities are truly extraordinary .
I was asked this question on Quora:
Here’s my answer:
It’s very difficult to define what exactly the will is in neuroscientific terms. It seems best to reserve this word for the whole organism, rather than some part of it.
Some philosophers would say that attributing will to some sub-component of an organism is an example of the “mereological fallacy”. It’s like saying the stomach eats, or the brain thinks, or the legs walk. We use these kinds of phrases as a kind of poetic shorthand, but only a complete organism can be said to eat, think, or walk.
In the case of the two hemispheres, we also know that the left-brain/right-brain story — that one is “rational” and the other “holistic/artistic” — is wildly misguided. Some neural processes are lateralized, but most normal tasks that humans perform require close integration and communication between the hemispheres.
But we do have to make sense of a common experience — being “in two minds” about something. Most people know what it is like to be in a conflicted state — multiple goals or biases seem to be tugging at us. Clearly decision-making involves a sort of “parliament” in the brain, in which multiple vested interests vie to enact legislation that suits them. 🙂
But the parliament metaphor should not be taken too seriously. There is little to be gained in anthropomorphizing neurons or groups of neurons. Neural ensembles might sometimes seem to behave as if they have a will, but that idea will not really help us understand decision-making, or the subjective feeling of having a will.
So brain areas don’t have likes or dislikes — organisms do, and brain areas mediate the processes by which these likes and dislikes become manifest.
For more on the problems with anthropomorphizing neural processes, see these two essays I wrote:
(This essay is partly a gentle critique of the Pixar movie Inside Out.)
(This essay explores the tendency of people, including neuroscientists, to think of the brain as a separate agent from the person as a whole.)
I admit that it is often fun to anthropomorphize neurons, which is what I do in the essay below. I paint a picture of a neural city and a neural economy, complete with start-ups and investors. 🙂
I was asked this question on Quora:
Here’s how I responded:
Here’s a question: in a system composed of dominoes, is there a “code”?
I ask this because I find that the “code” metaphor is often misleading when thinking about biology. Codes are composed of symbols. But it is not clear that neurons communicate using symbols.
The way a neuron affects other neurons is more like how a gear affects other gears. There is no code — there is causality. An active neuron releases some neurotransmitter, and this in turn makes other neurons more active. It’s like a complex network of dominoes.
Does the idea of a “code” help us understand how one domino affects the next one in the chain?
I admit that by the time a human is thinking in terms of words and symbols, “code” is probably a useful metaphor for what is going on. But the origin of coding schemes remains a great mystery in neuroscience, cognitive science, and artificial intelligence. So I recommend starting with a much less loaded metaphor, such as clockwork or dominoes. Thinking in mechanical terms helps us realize what exactly neuroscience and AI research are trying to achieve.
For now, there is a fascinating gap in our understanding of what exactly codes are in the first place.
Anyway, if you are interested in the causal “domino effect” that starts at the retina, have a look at this answer: