This is a slightly edited version of an answer I wrote on Quora: What is the best way to understand consciousness?
This emoticon captures my attitude towards questions about consciousness:
Consciousness seems to be invisible to scientific methods. The only way to incorporate consciousness into science is to change the definition of science so it includes subjective experience. I am not really comfortable with doing this, because it waters down science to the point where it becomes indistinguishable from introspective philosophy. Science is useful because of what makes it different from philosophy: it makes predictions that can be tested objectively. Truly subjective experiences seem to be ruled out by definition.
My favorite way to think about consciousness is inspired by Indian philosophy, though the general idea crops up all over the place:
Consciousness is not a phenomenon: it is the precondition for the appearance of phenomena. The mind is not a thing to be observed, but the medium by which things are observed.
This is emphatically not a scientific statement, but that’s okay: science is only one of many ways to look at the universe.
So let’s examine the scientific approach to consciousness in detail!
I. On the ‘real’ problem of consciousness
The questioner mentioned an Aeon magazine article on the ‘real’ problem of consciousness. It’s interesting as a survey of research, but I have several issues with it.
- It makes no mention of qualia, the actual subjective experiences that most people are talking about when they use the word “consciousness”.
- It does not mention the many criticisms of integrated information theory, such as those from a paper that I summarized here: “You have a theory of something, I am just not sure what it is” — An interesting critique of the Integrated Information Theory (IIT) of consciousness
- It mentions binocular rivalry as a ‘signature’ of conscious experience, but a recent study indicates that even anesthetized (and therefore presumably non-conscious?) monkeys show neural patterns that correlate with binocular rivalry: Rivalry-Like Neural Activity in Primary Visual Cortex in Anesthetized Monkeys
This last point gets to the heart of the problem with finding neural correlates of consciousness (NCC). Here is the general approach, which seems pretty ‘objective’:
1. You identify an ostensibly subjective phenomenon like binocular rivalry, which occurs when a different image is presented to each eye. The conscious percept subjectively shifts between the two images. Let’s call the shifting conscious percept ‘Phenomenon A’.
2. You correlate the subjective shifts with objectively measurable neural patterns, such as activity in the visual cortex. We’ll call the measurable pattern ‘Phenomenon B’.
3. When there’s a strong correlation between Phenomenon A and Phenomenon B, we say that B is a neural correlate of A.
4. We look for B, and then infer A. In other words, when we see an NCC, we infer a particular conscious state.
But in order to make the inference in step 4, we need more than a correlation: we need to be sure that A happens only when B happens, and vice versa. Given the complexity of neural connectivity, it is entirely possible that the conscious state A can occur as a result of some unrelated neural phenomenon C. Further, we might even find situations where the neural pattern B occurs without any accompanying conscious percept A. That is exactly what the paper on Rivalry-Like Neural Activity in Primary Visual Cortex in Anesthetized Monkeys suggests.
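To see why the final inference can fail, here is a toy simulation. All the event names and probabilities are invented purely for illustration: in this made-up world, the measured pattern B can be driven either by the percept A or by an unrelated cause C, so “A implies B” holds perfectly in the data, yet inferring A from B produces false positives.

```python
import random

random.seed(0)

def run_trial():
    """One observation in a hypothetical world.

    A = subjective percept (e.g. a rivalry shift)
    C = an unrelated neural event (e.g. anesthesia-driven activity)
    B = the measurable neural pattern, which fires for A *or* C
    """
    A = random.random() < 0.5               # percept present half the time
    C = (not A) and random.random() < 0.3   # unrelated cause, sometimes
    B = A or C                              # the 'correlate' fires for both
    return A, B

trials = [run_trial() for _ in range(10_000)]

# A => B holds perfectly in this world: every percept shows the pattern...
assert all(B for A, B in trials if A)

# ...but inferring A from B yields false positives whenever C caused B.
false_positives = sum(1 for A, B in trials if B and not A)
print(f"trials where B occurred without A: {false_positives}")
```

In other words, a perfect one-way correlation in the lab is fully compatible with the reverse inference being unreliable in the wild.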
II. The logic of consciousness science?
Let’s explore what ‘consciousness science’ might look like if expressed in logical form.
Assume for the sake of argument that some subjective conscious state A correlates perfectly with B in studies where A is induced and B is recorded. This assumption is expressed in a semi-formal way as follows:
Axiom: A => B (Read this as “A implies B.”)
What does logic allow us to infer from this? Only the contrapositive:
Inference: ~B => ~A (“Not B implies not A.”)
So we can infer that if there is no neural pattern B, then there is no subjective state A. We cannot claim the following:
(Wrong!) Inference: B => A
If we come across the neural state B, we cannot infer that the conscious state A is in fact being experienced. That would be like saying that because a wood fire (A) always produces heat (B), heat must always be the result of a wood fire! It’s a basic logical error: affirming the consequent.
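The distinction between the valid contrapositive and the invalid converse can be checked mechanically with a truth table. A minimal sketch:

```python
from itertools import product

def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false.
    return (not p) or q

# All four combinations of truth values for A and B.
rows = list(product([False, True], repeat=2))

# A => B is logically equivalent to its contrapositive ~B => ~A...
assert all(implies(A, B) == implies(not B, not A) for A, B in rows)

# ...but not to its converse B => A: at A=False, B=True the implication
# holds while the converse fails, so inferring A from B is invalid.
bad_rows = [(A, B) for A, B in rows if implies(A, B) and not implies(B, A)]
print("rows where A => B holds but B => A fails:", bad_rows)
```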
You might think that scientists and philosophers are immune to making such basic logical errors, but they seem to make statements that skirt dangerously close to them. For example, consider the following statement, variations of which I have come across in the scientific world:
“Novelty is inherently rewarding because dopamine cells fire during novel experiences.”
It is important to understand that neuroscientists and philosophers routinely say things like this, especially when theorizing about NCCs. These kinds of statements are problematic, to put it mildly.
When you give an animal a food reward (Phenomenon A), some of its dopamine cells fire (Phenomenon B). This led people to the (perfectly reasonable) hypothesis that dopamine cell firing is the objective neural correlate of subjective reward and/or pleasure.
But then people found out that dopamine cells also fire in situations involving no reward at all, but instead some kind of unexpected and novel event (Phenomenon C). So we have:
Axiom: A => B (“Reward implies dopamine firing.”)
Axiom: C => B (“Novelty implies dopamine firing.”)
(Wrong!) Inference: C => A (“Novelty implies reward.”)
This is wildly incorrect reasoning. And we are so prone to it because the conclusion flatters our intuitions and biases. I get a kick out of novel experiences, so yeah, it must have something to do with dopamine cells, right?
Wrong. All we have learned is that dopamine cells can be fired by different kinds of situations, including reward and novelty. No inference can be made about the subjective experience of novelty, because we have not established the key additional axiom:
B => A (“Dopamine firing implies subjective reward.”)
If we have A => B and separately find out that B => A, then we can say the following:
A <=> B (“A occurs if and only if B occurs.”)
In neuroscience it is virtually impossible to establish these kinds of if-and-only-if relations. We cannot explore all possible rewarding experiences in order to establish that dopamine cells fire in all of them. And that is not all we would have to do: we would also have to explore non-rewarding situations in order to establish that dopamine cells do not fire in any of them. Luckily, one counterexample is all we need to falsify the perfect bidirectional link between dopamine cell firing and reward/pleasure. As many studies have shown, dopamine cells also fire in subjectively painful situations. So the link between dopamine per se and reward/pleasure is quite broken.
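This asymmetry can be made concrete with a toy dataset. The observations below are invented purely to mirror the logic of the findings described above; they are not real data:

```python
# Hypothetical observations, each a tuple of
# (situation, dopamine_fired, subjectively_rewarding):
observations = [
    ("food reward",    True,  True),
    ("novel stimulus", True,  False),
    ("painful shock",  True,  False),
    ("neutral tone",   False, False),
]

# A => B survives: every rewarding situation shows dopamine firing.
assert all(dopamine for _, dopamine, rewarding in observations if rewarding)

# But B => A is falsified by a single counterexample:
# dopamine firing without subjective reward.
counterexamples = [situation for situation, dopamine, rewarding in observations
                   if dopamine and not rewarding]
print("counterexamples to 'dopamine implies reward':", counterexamples)
```

One such counterexample is enough to break the biconditional, no matter how many confirming cases we pile up on the other side.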
But note that this does not disprove the claim that novelty is subjectively rewarding. All we have shown is that the inference using dopamine firing was faulty. Novelty may well be rewarding for entirely different reasons.
Perhaps you can now see why finding neural correlates of anything is so difficult. Neural correlates of consciousness are even trickier, since we can really only go on what people say about their subjective states. Everything else is about finding the neural correlates of behavior, rather than consciousness.
III. The problem of inferring conscious states in animals
We can’t ask lab animals what their subjective states are, so we infer them from behavior. It might seem obvious what an animal is feeling from its reactions and body language, but we have reason to be suspicious of our own body-language ‘literacy’, particularly given how prone we are to anthropomorphizing animals.
Consider the statements routinely made about drugs of abuse. Early studies of dopaminergic drugs like cocaine and methamphetamine were done in rats and mice. The animals would often self-administer the drugs at the cost of all else, including feeding themselves. They would starve themselves to death even though they had access to unlimited food.
A first-pass analysis of this behavior led to the following inference: dopaminergic drugs are so pleasurable that the animals could not help but take them.
But were the drugs really pleasurable, or was something else going on? It seems as if some prejudices about human drug users infected our analysis of the rodent behavior. The Rat Park experiments, though controversial, suggested another possibility. The ‘do drugs till they die’ rats were housed in isolated cages and had no alternative sources of stimulation. They had nothing to do except eat food and self-administer the drug. But rats are highly social creatures, just like humans. When the rats were housed in ‘enriched’ environments — with plenty of playmates and opportunities for fun and exploration — they didn’t become starving junkies. They still ingested the drugs occasionally, but didn’t become addicted: they were integrated into rat society, and had other things to do. (Check out this webcomic about Rat Park to learn more.)
So pleasure is not an intrinsic quality that inheres in the chemicals themselves. Both pleasure and the motivation to take drugs seem to depend on other factors including social and environmental cues.
A lack of imagination might also prevent us from viewing neuroscientific experiments with the proper circumspection. Neuroscientists now have other reasons to doubt the “if and only if” style linkage between dopamine and pleasure. Many neuroscientists now make a distinction between “liking” and “wanting”. It seems as if the dopamine system is not really involved in the subjective experience of “liking” or pleasure, but instead has more to do with the purely behavioral phenomenon of wanting, or motivation. Drugs of addiction may produce some pleasure initially, but the addiction may turn out to be the result of a hijacking of the motivational system, rather than the pleasure system.
And motivation need not correlate with subjective pleasure. Just ask someone who takes Adderall as a cognitive enhancer (which is technically illegal, but extremely common). Adderall increases dopamine signaling, but this is not guaranteed to produce anything like euphoria or pleasure. Often what happens is quite the reverse: the emotional system is subdued and the person becomes more ‘robotic’. They become more motivated to complete tasks, but not for reasons of pleasure per se.
IV. “Unknown unknowns”
So finding neural correlates of consciousness is in theory feasible if we can find one-to-one “if and only if” style mappings between subjective states and neural/bodily states, but in practice this is almost impossible. Whenever we think we’ve found an NCC, we find that things are way more complicated.
And this even applies to the most basic question about consciousness: whether it is even present or not.
We typically say that when a person is in REM sleep, they are consciously experiencing a dream. We also say that when a person is in deep sleep, they are unconscious. The inference goes as follows: when you wake a person from REM sleep, they can report what their dreams were. When you wake a person from deep sleep, they report no memories of subjective experiences.
But note that this only shows that deep sleep leaves no trace in conscious short-term memory. It seems logically conceivable that deep sleep does actually have a conscious component — a “what it is like” — but that it leaves no trace in memory. The same may be true of some coma states.
In other words, when it comes to consciousness: “absence of evidence is not evidence of absence”.
So… now what do you think is the best approach to studying consciousness?
For a broader critique of the scientific study of consciousness, see this essay I wrote for 3 Quarks Daily: Why some neuroscientists call consciousness “the c-word”