Why an organism is not a “machine”

I just came across a nice article explaining why the metaphor of organism as machine is misleading and unhelpful.

The machine conception of the organism in development and evolution: A critical analysis

This excerpt makes a key point:

“Although both organisms and machines operate towards the attainment of particular ends (that is, both are purposive systems), the former are intrinsically purposive whereas the latter are extrinsically purposive. A machine is extrinsically purposive in the sense that it works towards an end that is external to itself; that is, it does not serve its own interests but those of its maker or user. An organism, on the other hand, is intrinsically purposive in the sense that its activities are directed towards the maintenance of its own organization; that is, it acts on its own behalf.”

In this section the author explains how the software/hardware idea found its way into developmental biology.

“The situation changed considerably in the mid-twentieth century with the advent of modern computing and the introduction of the conceptual distinction between software and hardware. This theoretical innovation enabled the construction of a new kind of machine, the computer, which contains algorithmic sequences of coded instructions or programs that are executed by a central processing unit. In a computer, the software is totally independent from the hardware that runs it. A program can be transferred from one computer and run in another. Moreover, the execution of a program is always carried out in exactly the same fashion, regardless of the number of times it is run and of the hardware that runs it. The computer is thus a machine with Cartesian and Laplacian overtones. It is Cartesian because the software/hardware distinction echoes the soul/body dualism: the computer has an immaterial ‘soul’ (the software) that governs the operations of a material ‘body’ (the hardware). And it is Laplacian because the execution of a program is completely deterministic and fully predictable, at least in principle. These and other features made the computer a very attractive theoretical model for those concerned with elucidating the role of genes in development in the early days of molecular biology.”

The machine conception of the organism in development and evolution: A critical analysis

I’ve actually criticized the genetic program metaphor myself, in the following 3QD essay:

3quarksdaily: How informative is the concept of biological information?

____

Image source: Digesting Duck – Wikipedia

“Conscious realism”: a new way to think about reality (or the lack thereof?)

Venn

Interesting interview in the Atlantic with cognitive scientist Donald D. Hoffman:

The Case Against Reality

“I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.”

[…]

“Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.”

[…]

“As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life—my real feeling of a headache, my real taste of chocolate—that really is the ultimate nature of reality.”

I don’t agree with everything in the article (especially the quantum stuff) but I think many people interested in consciousness and metaphysics will find plenty of food for thought here:

The Case Against Reality

Also, “conscious agents all the way down” is exactly the position I was criticizing in a recent 3QD essay:

3quarksdaily: Persons all the way down: On viewing the scientific conception of the self from the inside out

The diagram above is from a science fiction story I was working on, back when I was a callow youth. It is closely related to the idea of a network of conscious agents. Here’s another ‘version’ of it.

TriHead

Not sure why I made it look so morbid. 🙂

Where do thoughts come from?

Here’s my answer to a recent Quora question: Where do our thoughts come from?

Thoughts come from nowhere! And from everywhere! I think both answers contain an element of truth.

Subjectively, our thoughts come from nowhere: they just pop into our heads, or emerge in the form of words leaving our mouths.

Objectively, we can say that thoughts emerge from neural processes, and that neural processes come from everywhere. What I mean by this is that the forms and dynamics of thought are influenced by everything that has a causal connection with you, your society, and your species.

We don’t know exactly how thoughts emerge from the activity of neurons — or even how to define what a thought is in biological terms (!) — but there is plenty of indirect evidence to support the general claim that the brain is where thoughts emerge.

The neuronal patterns that mediate and enable thought and behavior have proximal and distal causes.

The proximal causes are the stimuli and circumstances we experience. These experiences have causal impacts on our bodies, and are also partly caused by our bodies. The forces inside and outside the body become manifest in the brain as ‘clouds’ of information. In the right circumstances these nebulous patterns can condense into streams of thought. We can add to these identifiable causes the mysterious element of randomness: that seemingly ever-present “ghost in the machine” that makes complex processes such as life fundamentally unpredictable. Perhaps randomness is what provides the ‘seeds’ around which the condensation of thoughts can occur.

The distal causes are our experiential history and our evolutionary pre-history. Our experiential history consists of the things we’ve learned, consciously and unconsciously, and the various events that have shaped our bodies and our neural connections in large and small ways. Our evolutionary pre-history is essentially the experiential history of our species, and more generally of life itself, going back all the way to the first single-celled organism. The traits of a species are a sort of historical record of successes and failures. And going even further, life ultimately takes its particular forms because of the possibilities inherent in matter — and this takes us all the way to the formation of stars and planets.

Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

My latest essay for 3 Quarks Daily is up: Me and My Brain: What the “Double-Subject Fallacy” reveals about contemporary conceptions of the Self

Here’s an excerpt:
What is a person? Does each of us have some fundamental essence? Is it the body? Is it the mind? Is it something else entirely? Versions of this question seem always to have animated human thought. In the aftermath of the scientific revolution, it seems as if one category of answer — the dualist idea that the essence of a person is an incorporeal soul that inhabits a material body — must be ruled out. But as it turns out, internalizing a non-dualist conception of the self is actually rather challenging for most people, including neuroscientists.
[…]
A recent paper in the Journal of Cognitive Neuroscience suggests that even experts in the sciences of mind and brain find it difficult to shake off dualistic intuitions. Liad Mudrik and Uri Maoz, in their paper “Me & My Brain”: Exposing Neuroscienceʼs Closet Dualism, argue that not only do neuroscientists frequently lapse into dualistic thinking, they also attribute high-level mental states to the brain, treating these states as distinct from the mental states of the person as a whole. They call this the double-subject fallacy. (I will refer to the fallacy as “dub-sub”, and the process of engaging in it as “dub-subbing”.) Dub-subbing is going on in constructions like “my brain knew before I did” or “my brain is hiding information from me”. In addition to the traditional subject — “me”, the self, the mind — there is a second subject, the brain, which is described in anthropomorphic terms such as ‘knowing’ or ‘hiding’. But ‘knowing’ and ‘hiding’ are precisely the sorts of things that we look to neuroscience to explain; when we fall prey to the double-subject fallacy we are actually doing the opposite of what we set out to do as materialists. Rather than explaining “me” in terms of physical brain processes, dub-subbing induces us to describe the brain in terms of an obscure second “me”. Instead of dispelling those pesky spirits, we allow them to proliferate!
Read the whole thing at 3QD:

The Neural Citadel — a wildly speculative metaphor for how the brain works

My latest 3QD essay is a bit of a wild one. I start by talking about Goodhart’s Law, a quirk of economics that I think has implications elsewhere. I try to link it with neuroscience, but in order to do so I first construct an analogy between the brain and an economy. We might not understand economic networks any better than we do neural networks, but this analogy is a fun way to re-frame matters of neuroscience and society.

Plan of a Citadel (from Wikipedia)

Here’s an excerpt:

The Neural Citadel

Nowadays we routinely encounter descriptions of the brain as a computer, especially in the pop science world. Just like computers, brains accept inputs (sensations from the world) and produce outputs (speech, action, and influence on internal organs). Within the world of neuroscience there is a widespread belief that the computer metaphor becomes unhelpful very quickly, and that new analogies must be sought. So you can also come across conceptions of the brain as a dynamical system, or as a network. One of the purposes of a metaphor is to link things we understand (like computers) with things we are still stymied by (like brains). Since the educated public has plenty of experience with computers, but at best nebulous conceptions of dynamical systems and networks, it makes sense that the computer metaphor is the most popular one. In fact, outside of a relatively small group of mathematically-minded thinkers, even scientists often feel most comfortable thinking of the brain as an elaborate biological computer. [3]

However, there is another metaphor for the brain that most human beings will be able to relate to. The brain can be thought of as an economy: as a biological social network, in which the manufacturers, marketers, consumers, government officials and investors are neurons. Before going any further, let me declare up front that this analogy has a fundamental flaw. The purpose of metaphor is to understand the unknown — in this case the brain — in terms of the known. But with all due respect to economists and other social scientists, we still don’t actually understand socio-economic networks all that well. Not nearly as well as computer scientists understand computers. Nevertheless, we are all embedded in economies and social networks, and therefore have intuitions, suspicions, ideologies, and conspiracy theories about how they work.

Because of its fundamental flaw, the brain-as-economy metaphor isn’t really going to make my fellow neuroscientists’ jobs any easier, which is why I am writing about it on 3 Quarks Daily rather than in a peer-reviewed academic journal. What the brain-as-economy metaphor does do is allow us to translate neural or mental phenomena into the language of social cooperation and competition, and vice versa. Even though brains and economies seem equally mysterious and unpredictable, perhaps in attempting to bridge the two domains something can be gained in translation. If nothing else, we can expect some amusing raw material for armchair philosophizing about life, the universe, and everything. [4]

So let’s paint a picture of the neural economy. Imagine that the brain is a city — the capital of the vast country that is the body. The neural citadel is a fortress; the blood-brain barrier serves as its defensive wall, protecting it from the goings-on in the countryside, and only allowing certain raw materials through its heavily guarded gates — oxygen and nutrients, for the most part. Fuel for the crucial work carried out by the city’s residents: the neurons and their helper cells. The citadel needs all this fuel to deal with its main task: the industrial scale transformation of raw data into refined information. The unprocessed data pours into the citadel through the various axonal highways.  The trucks carrying the data are dispatched by the nervous system’s network of spies and informants. Their job is to inform the citadel of the goings-on outside its walls. The external sense organs — the eyes, ears, nose, tongue and skin — are the body’s border patrols, coast guards, observatories, and foreign intelligence agencies. The muscles and internal organs, meanwhile, are monitored by the home ministry’s police and bureaucrats, always on the look-out for any domestic turbulence. (The stomach, for instance, is known to be a hotbed of labor unrest.)

The neural citadel enables an information economy — a marketplace of ideas, as it were. Most of this information is manufactured within the brain and internally traded, but some of it — perhaps the most important information — is exported from the brain in the form of executive orders, requests and the occasional plaintive plea from the citadel to the sense organs, muscles, glands and viscera. The purpose of the brain is definitely subject to debate — even within the citadel — but one thing most people can agree on is that it must serve as an effective and just ruler of the body: a government that marries a harmonious domestic policy — unstressed stomach cells, unblackened lung cells, radiant skin cells and resilient muscle cells — with a peaceful and profitable foreign policy. (The country is frustratingly dependent on foreign countries, over which it has limited control, for its energy and construction material.)

The citadel is divided into various neighborhoods, according to the types of information being processed. There are neighborhoods subject to strict zoning requirements that process only one sort of information: visions, sounds, smells, tastes, or textures. Then there are mixed use neighborhoods where different kinds of information are assembled into more complex packages, endlessly remixed and recontextualized. These neighborhoods are not arranged in a strict hierarchy. Allegiances can form and dissolve. Each is trying to do something useful with the information that is fed to it: to use older information to predict future trends, or to stay on the look-out for a particular pattern that might arise in the body, the outside world, or some other part of the citadel.  Each neighborhood has an assortment of manufacturing strategies, polling systems, research groups, and experimental start-up incubators. Though they are all working for the welfare of the country, they sometimes compete for the privilege of contributing to governmental policies. These policies seem to be formulated at the centers of planning and coordination in the prefrontal cortex — an ivory tower (or a corporate skyscraper, if you prefer merchant princes to philosopher kings) that has a panoramic view of the citadel. The prefrontal tower then dispatches its decisions to the motor control areas of the citadel, which notify the body of governmental marching orders.

~

The essay is not just about the metaphor though. There are bits about dopamine, and addiction, and also some wide-eyed idealism. 🙂 Check the whole thing out at 3 Quarks Daily.

For the record, there is a major problem with personifying neurons. It doesn’t actually explain anything, since we are just as baffled by persons as we are by neurons. Personifying neurons creates billions of microscopic homunculi. The Neural Citadel metaphor was devised in a spirit of play, rather than as serious science or philosophy.

What are the limits of neuroscience?

[My answer to a recent Quora question.]

There are two major problems with neuroscience:

  1. Weak philosophical foundations when dealing with mental concepts
  2. Questionable statistical analyses of experimental results

1. Neuroscience needs a bit of philosophy

Many neuroscientific results are presented without sufficient philosophical nuance. This can lead to cartoonish and potentially harmful conceptions of the brain, and by extension, of human behavior, psychology, and culture. Concepts related to the mind are among the hardest to pin down, and yet some neuroscientists give the impression that there are no issues requiring philosophical reflection.

Because of a certain disdain for philosophy (and sometimes even psychology!), some neuroscientists end up drawing inappropriate inferences from their research, or distorting the meaning of their results.

One particularly egregious example is the “double subject fallacy”, which was recently discussed in an important paper:

“Me & my brain”: exposing neuroscience’s closet dualism.

Here’s the abstract of the paper:

Our intuitive concept of the relations between brain and mind is increasingly challenged by the scientific world view. Yet, although few neuroscientists openly endorse Cartesian dualism, careful reading reveals dualistic intuitions in prominent neuroscientific texts. Here, we present the “double-subject fallacy”: treating the brain and the entire person as two independent subjects who can simultaneously occupy divergent psychological states and even have complex interactions with each other — as in “my brain knew before I did.” Although at first, such writing may appear like harmless, or even cute, shorthand, a closer look suggests that it can be seriously misleading. Surprisingly, this confused writing appears in various cognitive-neuroscience texts, from prominent peer-reviewed articles to books intended for lay audience. Far from being merely metaphorical or figurative, this type of writing demonstrates that dualistic intuitions are still deeply rooted in contemporary thought, affecting even the most rigorous practitioners of the neuroscientific method. We discuss the origins of such writing and its effects on the scientific arena as well as demonstrate its relevance to the debate on legal and moral responsibility.

[My answer to the earlier question raises related issues: What are the limits of neuroscience with respect to subjectivity, identity, self-reflection, and choice?]

2. Neuroscience needs higher data analysis standards

On a more practical level, neuroscience is besieged by problems related to bad statistics. The data in neuroscience (and all “complex system” science) are extremely noisy, so increasingly sophisticated statistical techniques are deployed to extract meaning from them. This sophistication means that fewer and fewer neuroscientists actually understand the math behind the statistical methods they employ. This can create a variety of problems, including incorrect inferences. Scientists looking for “sexy” results can use poorly understood methods to show ‘significant’ effects where there is really only a random fluke. (The more methods you use, the more chances you create for finding a random “statistically significant” effect. This kind of thing has been called “torturing the data until it confesses”.)
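To make the multiple-comparisons point concrete, here is a toy simulation (purely illustrative; the numbers are not from any real study). Under the null hypothesis a p-value is uniformly distributed on [0, 1], so running many uncorrected analyses on pure noise makes a spurious “significant” result almost inevitable:

```python
import random

random.seed(42)
ALPHA = 0.05  # the conventional significance threshold

def any_false_positive(n_analyses):
    """Simulate one 'study' that runs n_analyses independent tests on
    pure noise. Under the null, each p-value is uniform on [0, 1], so
    each test 'succeeds' by chance with probability ALPHA."""
    return any(random.random() < ALPHA for _ in range(n_analyses))

n_studies = 10_000
for n_analyses in (1, 10, 20):
    rate = sum(any_false_positive(n_analyses) for _ in range(n_studies)) / n_studies
    # theoretical rate: 1 - (1 - ALPHA) ** n_analyses
    print(f"{n_analyses:2d} analyses -> chance of a fluke 'effect' ~ {rate:.2f}")
```

With 20 uncorrected analyses the chance of at least one fluke “effect” approaches 1 − 0.95^20 ≈ 0.64, which is exactly the “torturing the data” problem described above.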

Chance effects are unreproducible, and this is a major problem for many branches of science. Replication is central to good science, so when it frequently fails, we know there are problems with how research is conducted, reviewed, and published. Many times there is a “flash in the pan” at a laboratory that turns out to be fool’s gold.

See these articles for more:

Bad Stats Plague Neuroscience

Voodoo Correlations in Social Neuroscience

The Dangers of Double Dipping (Voodoo IV)

Erroneous analyses of interactions in neuroscience: a problem of significance.

Fixing Science, Not Just Psychology – Neuroskeptic

The Replication Problem in the Brain Sciences


Quora: What are the limits of neuroscience?

Does dopamine produce a feeling of bliss? On the chemical self, the social self, and reductionism.

Here’s the intro to my latest blog post at 3 Quarks Daily.


“The osmosis of neuroscience into popular culture is neatly symbolized by a phenomenon I recently chanced upon: neurochemical-inspired jewellery. It appears there is a market for silvery pendants shaped like molecules of dopamine, serotonin, acetylcholine, norepinephrine and other celebrity neurotransmitters. Under pictures of dopamine necklaces, the neuro-jewellers have placed words like “love”, “passion”, or “pleasure”. Under serotonin they write “happiness” and “satisfaction”, and under norepinephrine, “alertness” and “energy”. These associations presumably stem from the view that the brain is a chemical soup in which each ingredient generates a distinct emotion, mood, or feeling. Subjective experience, according to this view, is the sum total of the contributions of each “mood molecule”. If we strip away the modern scientific veneer, the chemical soup idea evokes the four humors of ancient Greek medicine: black bile to make you melancholic, yellow bile to make you choleric, phlegm to make you phlegmatic, and blood to make you sanguine.

“A dopamine pendant worn round the neck as a symbol for bliss is emblematic of modern society’s attitude towards current scientific research. A multifaceted and only partially understood set of experiments is hastily distilled into an easily marketed molecule of folk wisdom. Having filtered out the messy details, we are left with an ornamental nugget of thought that appears both novel and reassuringly commonsensical. But does neuroscience really support this reductionist view of human subjectivity? Can our psychological states be understood in terms of a handful of chemicals? Does neuroscience therefore pose a problem for a more holistic view, in which humans are integrated in social and environmental networks? In other words, are the “chemical self” and the “social self” mutually exclusive concepts?”

– Read the rest at 3QD: The Chemical Self and the Social Self

From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

My next 3QD column is out. I speculate about the role of boundaries in life and aesthetic experience. (Dopamine cells make a cameo appearance too.)

This image is a taster:

If you want to know what this diagram might mean, check out the article:
From Cell Membranes to Computational Aesthetics: On the Importance of Boundaries in Life and Art

The Mysterious Power of Naming in Human Cognition

I’ve written a long-form essay for the blog/aggregator site 3 Quarks Daily:

Boundaries and Subtleties: the Mysterious Power of Naming in Human Cognition

Here’s a taster:

I’ve divided up the essay into four parts. Here’s the plan:

  1. We’ll introduce two key motifs — the named and the nameless — with a little help from the Tao Te Ching.
  2. We’ll examine a research problem that crops up in cognitive  psychology, neuroscience and artificial intelligence, and link it with  more Taoist motifs.
  3. We’ll look at how naming might give us power over animals, other people, and even mathematical objects.
  4. We’ll explore the power of names in computer science, which will facilitate some wild cosmic speculation.

Will we ever be able to upload our minds to a computer?


My answer to a Quora question: What percent chance is there that whole brain emulation or mind uploading to a neural prosthetic will be feasible within 35 years?

This opinion is unlikely to be popular among sci-fi fans, but I think the “chance” of mind uploading happening at any time in the future is zero.  Or better yet, as a scientist I would assign no number at all to this subjective probability. [Also see the note on probability at the end.]

I think the concept is still incoherent from both philosophical and scientific perspectives.

In brief, these are the problems:

  • We don’t know what the mind is from a scientific/technological perspective.
  • We don’t know which processes in the brain (and body!) are essential to subjective mental experience.
  • We don’t have any intuition for what “uploading” means in terms of mental unity and continuity.
  • We have no way of knowing whether an upload has been successful.

You could of course take the position that clever scientists and engineers will figure it out while the silly philosophers debate semantics. But this I think is based on shaky analogies with other examples of scientific progress. You might justifiably ask “What are the chances of faster-than-light travel?” You could argue that our vehicles keep getting faster, so it’s only a matter of time before we have Star Trek style warp drives. But everything we know about physics says that crossing the speed of light is impossible. So the “chance” in this domain is zero. I think that the idea of uploading minds is even more problematic than faster-than-light travel, because the idea does not have any clear or widely accepted scientific meaning, let alone philosophical meaning. Faster-than-light travel is conceivable at least, but mind uploading may not even pass that test!

I’ll now discuss some of these issues in more detail.

The concept of uploading a mind is based on the assumption that mind and body are separate entities that can in principle exist without each other. There is currently no scientific proof of this idea. There is also no philosophical agreement about what the mind is. Mind-body dualism is actually quite controversial among scientists and philosophers these days.

People (including scientists) who make grand claims about mind uploading generally avoid the philosophical questions. They assume that if we have a good model of brain function, and a way to scan the brain in sufficient detail, then we have all the technology we need.

But this idea is full of unquestioned assumptions. Is the mind identical to a particular structural or dynamic pattern? And if software can emulate this pattern, does it mean that the software has a mind? Even if the program “says” it has a mind, should we believe it? It could be a philosophical zombie that lacks subjective experience.

Underlying the idea of mind/brain uploading is the notion of Multiple Realizability — the idea that minds are processes that can be realized in a variety of substrates. But is this true? It is still unclear what sort of process mind is. There are always properties of a real process that a simulation doesn’t possess. A computer simulation of water can reflect the properties of water (in the simulated ‘world’), but you wouldn’t be able to drink it! 🙂

Even if we had the technology for “perfect” brain scans (though it’s not clear what a “perfect” copy is), we run into another problem: we don’t understand what “uploading” entails. We run into the Ship of Theseus problem. In one variant of this problem/paradox we imagine that Theseus has a ship. He repairs it every once in a while, each time replacing one of the wooden boards. Unbeknownst to him, his rival has been keeping the boards he threw away, and over time he constructed an exact physical replica of Theseus’s ship. Now, which is the real ship of Theseus? His own ship, which is now physically distinct from the one he started with, or the (counterfeit?) copy, which is physically identical to the initial ship? There is no universally accepted answer to this question.

We can now explicitly connect this with the idea of uploading minds. Let’s say the mind is like the original (much repaired) ship of Theseus. Let’s say the computer copy of the brain’s structures and patterns is like the counterfeit ship. For some time there are two copies of the same mind/brain system — the original biological one, and the computer simulation. The very existence of two copies violates a basic notion most people have of the Self — that it must obey a kind of psychophysical unity. The idea that there can be two processes that are both “me” is incoherent (meaning neither wrong nor right). What would that feel like for the person whose mind had been copied?

Suppose in response to this thought experiment you say, “My simulated Self won’t be switched on until after I die, so I don’t have to worry about two Selves — unity is preserved.” In this case another basic notion is violated — continuity. Most people don’t think of the Self as something that can cease to exist and then start existing again. Our biological processes, including neural processes, are always active — even when we’re asleep or in a coma. What reason do we have to assume that when these activities cease, the Self can be recreated?

Let’s go even further: let’s suppose we have a great model of the mind, and a perfect scanner, and we have successfully run a simulated version of your mind on a computer. Does this simulation have a sense of Self? If you ask it, it might say yes. But is this enough? Even currently-existing simulations can be programmed to say “yes” to such questions. How can we be sure that the simulation really has subjective experience?  And how can we be sure that it has your subjective experience? We might have just created a zombie simulation that has access to your memories, but cannot feel anything. Or we might have created a fresh new consciousness that isn’t yours at all! How do we know that a mind without your body will feel like you? [See the link on embodied cognition for more on this very interesting topic.]

And — perhaps most importantly — who on earth would be willing to test out the ‘beta versions’ of these techniques? 🙂

Let me end with a verse from William Blake’s poem “The Tyger”.

What the hammer? what the chain?
In what furnace was thy brain?

~

Further reading:

Will Brains Be Downloaded? Of Course Not!

Personal Identity

Ship of Theseus

Embodied Cognition

~

EDIT: I added this note to deal with some interesting issues to do with “chance” that came up in the Quora comments.

A note on probability and “chance”

Assigning a number to a single unique event such as the discovery of mind-uploading is actually problematic. What exactly does “chance” mean in such contexts? The meaning of probability is still being debated by statisticians, scientists and philosophers. For the purposes of this discussion, there are two basic notions of probability:

(1) Subjective degree of belief. We start with a statement A. Here p(A) = 0 means I am certain that A is false, and p(A) = 1 means I am certain that A is true. In other words, as your probability p(A) moves from 0 to 1, your subjective doubt decreases. If A is the statement “God exists”, then a committed atheist’s p(A) is 0, and a committed theist’s p(A) is 1.

(2) Frequency of a type of repeatable event. In this case the probability p(A) is the number of times event A happens, divided by the total number of events. Alternatively, it is the number of outcomes that correspond to event A, divided by the total number of possible outcomes. For example, suppose statement A is “the die roll results in a 6”. There are 6 possible outcomes of a die roll, and one of them is a 6, so p(A) = 1/6. In other words, if you roll an (ideal) die 600 times, you will see the side with 6 dots on it roughly 100 times.
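The frequentist reading of the die example is easy to check empirically. A quick sketch (the 600 rolls match the example above; the seed is arbitrary, chosen only to make the run reproducible):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Roll a fair six-sided die 600 times and count the sixes.
rolls = [random.randint(1, 6) for _ in range(600)]
sixes = rolls.count(6)

print(sixes)               # roughly 100, as the frequentist reading predicts
print(sixes / len(rolls))  # empirical estimate of p(A), close to 1/6 ≈ 0.167
```

The empirical frequency fluctuates around 1/6 from run to run, and converges to it as the number of rolls grows; that convergence is precisely what the frequentist notion of probability relies on, and what a unique event like mind uploading cannot supply.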

Clearly, if statement A is “Mind uploading will be discovered in the future”, then we cannot use frequentist notions of probability. We do not have access to a large collection of universes from which to count the ones in which mind uploading has been successfully discovered, and then divide that number by the total number of universes. In other words, statement A does not refer to a statistical ensemble — it is a unique event. For frequentists, the probability of a unique event can only be 0 (hasn’t happened) or 1 (happened). And since mind uploading hasn’t happened yet, the frequency-based probability is 0.

So when a person asks about the “chance” of some unique future event, he or she is implicitly asking for a subjective degree of belief in the feasibility of this event. If you force me to answer the question, I’ll say that my subjective degree of belief in the possibility of mind uploading is zero. But I actually prefer not to assign any number, because I actually think the concept of mind uploading is incoherent (as opposed to unfeasible). The concept of its feasibility does not really arise (subjectively), because the idea of mind uploading is almost meaningless to me. Data can be uploaded and downloaded. But is the mind data? I don’t know one way or the other, so how can I believe in some future technology that presupposes that the mind is data?

More on probability theory:

Chances Are — NYT article on probability by Steve Strogatz

Why the distinction between single-event probabilities and frequencies is important for psychology (and vice versa)

Interpretations of Probability – Stanford Encyclopedia of Philosophy