
Mind-brain identity theory


Identity theory is a kind of materialism developed as a reaction to work in psychology and the physical sciences in the mid-20th century. It essentially boils down to these statements:

1) Minds are identical to brains
2) Mental states are identical to brain states
3) The realm of the mental is a subset of the realm of the physical

Proponents of this theory hold that the claim is a contingent fact about the nature of the mind and the brain; it makes no attempt to explain the meaning of mental terms, and so, unlike logical behaviourism, it isn't a semantic thesis (it does not claim, for example, that 'pain' means physiological state X).

One identity theorist, J. J. C. Smart, claimed that it ought to be a strict identity statement. By this he meant that 'mind' and 'brain' refer to exactly the same thing, i.e. if minds are identical to brains, then brains are identical to minds. Strict identity statements are therefore logically symmetrical. An "un-strict" identity statement would be asymmetrical, for example: rain is identical to bad weather, but bad weather could be rain, sleet, snow, etc.


Identity theory is deemed to be contingent because of the history of the theory. Generally, necessary facts are deemed to be knowable a priori, and so discoverable through the study of language, e.g. "one plus one equals two", or "a bachelor is an unmarried man". But because identity theory came from scientific discoveries, the thought is that it cannot be necessary. Yet if two things are identical, must that fact not be necessary? One cannot say that the brains of conscious beings on earth are identical to their minds, but that things might have been otherwise, since that contradicts the identity claim itself.

One can however point out that the strength of gravity at the earth's surface is a necessary fact, and yet we only discovered that after thousands of years of scientific investigation. Necessary facts needn't be a priori facts accessible through analysis of language.


Type-type or token-token?

One problem that arises from identity theory is just how identical these two states, mental and brain, are. There are two kinds of identity theory one can subscribe to:

1) Individual mental states are identical to individual brain states (token-token identity theory)
2) Types of mental states are identical to types of brain states (type-type identity theory)

Type-type identity theory would hold that if my mental state for seeing red, X, were identical to my brain state Y, then whenever I saw red I would always have mental state X and therefore brain state Y. In other words, conscious experiences can be categorised into types, each with its characteristic mental and brain states.

Objection: because the brain is labile (open to change), type-type identity theory seems difficult to maintain. For example, if one part of my brain is damaged, my brain will often route around the problem, resulting in two different brain states for ostensibly the same mental state. The context of a mental state also seems important, since an experience of red may result in an entirely different brain state if I am feeling hot or cold (not least because of the related symbolism).

This becomes especially problematic between different brains. If I share the belief: "the capital of Italy is Rome", with another person, must our brain states be the same? What do we mean by brain states here? An analogy with a hard drive might be useful, since a hard drive may store the data in any number of ways across the surface of the disc, whilst still retaining the same information. Are all these many combinations of possible storage states to be considered as a single information state? If so, then the many ways in which a mental state may be represented in brain states may all be considered to be the selfsame brain state.

But this begins to sound more like token-token identity theory, in which each mental state is said to be identical to an individual brain state, allowing for no generalising categorisation. Even then, it remains unexplained how one can have a logically symmetrical identity statement when one single mental state can have many different brain states, unless one claims that each different brain state means that there is an entirely different mental state.
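
As an illustrative aside, here is a minimal sketch of the hard drive analogy above, written in Python; the names and layouts are purely my own invention, not anything from the identity-theory literature. It shows how one piece of information can be realised in physically very different storage arrangements while remaining, at the level of information, the selfsame state.

    # A toy illustration of the hard-drive analogy: the same piece of
    # information can be realised in physically very different storage
    # layouts, yet every layout counts as the same information state.

    belief = "the capital of Italy is Rome"

    # Layout 1: one contiguous block of bytes.
    contiguous = belief.encode("utf-8")

    # Layout 2: fragments stored out of order, with an index recording
    # where each fragment belongs.
    fragments = {2: b"Italy is Rome", 0: b"the capit", 1: b"al of "}

    # Layout 3: a mapping from byte offset to individual byte value.
    scattered = {i: b for i, b in enumerate(belief.encode("utf-8"))}

    def read_contiguous(block):
        return block.decode("utf-8")

    def read_fragments(frags):
        return b"".join(frags[i] for i in sorted(frags)).decode("utf-8")

    def read_scattered(cells):
        return bytes(cells[i] for i in sorted(cells)).decode("utf-8")

    # Physically the three layouts have nothing in common, but at the
    # level of information they are indistinguishable.
    assert (read_contiguous(contiguous)
            == read_fragments(fragments)
            == read_scattered(scattered)
            == belief)

Whether all such physical variants should count as one "brain state" or as many is exactly the question the type-type theorist has to answer.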


Realisability and species problems

Obviously, humans aren't the only species that can realise consciousness and therefore mental states. Given that this is the case, type-type identity theory must either be rejected or become species-specific. For example:

Human mental state X is identical to human brain state Y
Gorilla mental state X is identical to gorilla brain state Z
Human brain state Y is not identical to gorilla brain state Z
etc.

This one retreat for type-type identity theorists opens up a can of worms, though, since it again raises the question: should one not also take into account the physical nature of individual brains, the contexts of mental and brain states, the age of the subjects, and so on? In that case, one either draws up an extremely long, comprehensive list of possible mental-brain state pairings, or one admits token-token identity theory.


A final objection

There is one thing identity theory still has little to say on, however, and that is: what is it to have a particular conscious experience? To answer what it is to be in pain, an identity theorist might look pain up in a table matching mental states to physical properties of the brain, or simply point to the physiology, and tell you that to be in pain is to have a particular physiological state. But does that tell us all there is to know about pain? It somehow seems unsatisfactory.


References:
http://www.seop.leeds.ac.uk/entries/mind-identity/index.html


What follows is my essay on this topic:



Is the thesis of Central State Materialism true, false, or of doubtful intelligibility?

Central state materialism, the dominant branch of identity theory, grew out of a desire in philosophy to explain mental phenomena in terms of the material sciences, which in turn was a result of the growing potential of the neurosciences in the mid-20th century. Its central claim is that all mental phenomena are physical phenomena, identifying thoughts, beliefs, dispositions and other mental states and processes with events and processes in the brain that the neurosciences can study. But although it may at first seem like common sense, central state materialism fails to provide a framework by which we can understand most mental phenomena, and when scrutinised in this light it is clear that it also makes next to no sense at all.

First, we need to understand the theory itself. Learning from behaviourism and its objections, U. T. Place made the extraordinary claim that mental processes can be identified with brain processes in the same way that lightning can be identified with an electrical discharge; when we speak of a thought and of a particular brain state, we speak of exactly the same thing but in different terms. Because this identity theory is usually proposed by materialists, and because what philosophers call the 'brain', neuroscientists call the 'central nervous system' (CNS), the theory is often called central state materialism (CSM). This means that all mental states are identical to CNS states, so that if one were to say "I can see an ostrich", one could identify that state with a particular state in the CNS and say that that thought is a particular neurological state and nothing more. To put this in perspective, since the idea that a thought might be reduced to a few neurons doesn't seem too odd, a central state materialist would have to hold that the phenomenon known as 'loving one's parents' can be similarly reduced to certain material phenomena in the CNS.

Note that CSM says nothing about what it means to love one's parents, nor what it is like to have that experience; it is simply a statement of fact, that that is the ontology of the experience. This in itself seems like a shortcoming, since if we are to reduce all mental concepts to physical concepts and in doing so demonstrate that they are identical, surely we must be able to provide physical explanations or descriptions for concepts like 'what it is like to love one's parents'? A central state materialist might reply that what it is like can again be explained in terms of the CNS states and processes that are identified with the sensations caused by the experience; the fact that an explanation of what it is like to experience mental state X differs from an account of what mental state X is needn't invalidate CSM, since we can find parallels in everyday language that we find unobjectionable. For example, I might refer to "the table", by which I mean an object with legs on which I can rest other objects; I might also refer to "that lump of wood", by which I mean an object created out of pieces of wood that sits in my room. Though each statement appears to refer to different physical concepts, they each refer to the selfsame object, and so a central state materialist can claim that one can reduce all mental concepts to material concepts, even if this seems counter-intuitive.

A general problem that any counter-intuitive theory brings up is that it must be contingent, since if it were necessary, we should have been able to discover it through analysis of language rather than through scientific discovery. Its contingency suggests that it is tied to a particular understanding of physics, in this case a materialist account of reality. Smart, one of the leading proponents of identity theory, thought this didn't matter: "there can be contingent statements of the form 'A is identical to B', and a person may well know that something is an A without knowing that it is a B" (Smart, p. 58). He further defended the materialist conception, stating that "there does seem to be, so far as science is concerned, nothing in the world but increasingly complex arrangements of physical constituents" (Smart, p. 53). By physical, of course, he means material, since if physics were to posit the existence of immaterial substances, they would become physical. But this is not a good reason, because it is based upon a field of study that conceptually limits itself to the physical (neuroscience), and so will inevitably provide a physicalistic view of the cosmos. That we can provide physicalistic explanations of things does not mean that those things are wholly physical, just as explaining the exchange of information over a network in terms of bits and processes doesn't mean that there is nothing more to it than that, since if we look we can also explain it in terms of sub-atomic particles in cables and circuits. What it does suggest is that we don't need purely mental concepts to explain mental phenomena, and so it makes sense to stop at material explanations rather than speculate about immaterial concepts for which we have no empirical evidence.

Or so Smart and colleagues thought back in the 1960s. If we were to start from scratch and write an account of the mind/brain problem framed in terms of contemporary science, it might look quite different. For example, since the introduction and development of string theory from the 1960s onwards, we can now, within the confines of acceptable theoretical science, suggest that reality is composed of 26 dimensions, 22 of which are so tightly wrapped up in our four-dimensional spatio-temporal reality that they needn't have spatial or temporal properties. This allows effects like gravity, and quantum effects like action at a distance, to be described in terms of the interaction of spatial and non-spatial 'things', and so conceivably the mind could be an immaterial thing wrapped up in our four dimensions and in some way closely connected to our CNS. In this case, physicalist identity theorists must commit themselves to the possibility of dualism, leaving central state materialists cohering only with the larger, more proven subset of scientific theory.

A materialist identity theory also introduces the question of how we locate thoughts in our CNS; though we never imply or apply any spatial properties when using mental terms, according to CSM "we must begin to locate thoughts in the head". But how can we do this? According to Malcolm, Smart's criterion for strict identity is that "if x occurs in a certain place at a certain time, then y is strictly identical with x only if y occurs in the same place at the same time". But this is impossible to test empirically, since if we locate a particular CNS process in the CNS, how do we separately locate the mental process in the exact same location? Not only is it impossible, it is also unintelligible (Malcolm, p. 174); therefore we must conclude that identity theory can be nothing more than a hypothesis, an irony considering identity theory's emphasis on the material sciences.

Furthermore, it is questionable whether or not the material sciences are compatible with identity theory. When I have a sudden thought, that mental process has what Wittgenstein called 'surroundings', the circumstances that contributed to the formation of the thought. To borrow Malcolm's example, my thought that I need to take the milk bottles out may be tied to the circumstance that the milkman is about to arrive. According to Smart's strict identity theory, CNS processes must also have these circumstantial properties, but these cannot be explained in terms of physics; one couldn't explain the milkman's arriving soon in terms of physical properties of the CNS.

Further problems with CSM arise when we analyse the two varieties of the theory: type-type CSM and token-token CSM. According to type-type CSM, types of mental states are identical to types of CNS states, e.g. every time I think 'that is a table', I have a mental state X that is identical to CNS state Y, and that state is always the same, every time I have that thought. This is quite obviously ridiculous, not least because of context, but also because of the lability of the CNS. We can say, for example, that one time I think 'that is a table' I am looking to lay some cutlery, whilst another time I have a thought with the selfsame description in language, I may be looking to buy a chair in a furniture store. Each time the thought seems the same, and yet it makes sense that the CNS state would be quite different because of the context. And what if the part of my CNS that deals with that thought were slightly damaged? Neuroscience tells us that the CNS will often be able to route around the damage, creating an entirely new CNS state for the same mental state.

Type-type CSM creates further problems when you consider the differences in individuals' brains, and the fact that other species can have conscious mental experiences with completely different CNSs. For the theory to work and avoid species chauvinism, we would have to create a complex table listing, for each type of CNS, its characteristic types of CNS states, and then identify those types of states with types of mental states, which seems extraordinarily complicated. In fact, it begins to resemble token-token CSM, which holds that though every event or process may be different, for each token mental state there is an identical CNS state. This resolves all of the problems associated with type-type CSM, and can be explained well by analogy. We can store information in many different ways, be it as a bit travelling over a network cable, as a magnetic signature on a hard disc, an indent on a CD or an ink mark on paper. According to token-token CSM, as each form of storage is entirely different, so the information in each case must be different, such that the information stored in the ink mark is identical to that ink mark, and the information stored as an indent on a CD is identical to that indent, but those two bits of information are slightly different, each perhaps containing the nuances of context and the state of the CNS at the time.

But even accepting this resolution, we still find ourselves unable to provide a scientifically intelligible account of that CNS state, since we are unable to make reference to the nuances of context. And even if we could, we would not be providing any reason to accept CSM, except that ontologically it doesn't posit anything that we don't know about empirically, and so is the simplest theory. Perhaps a completed neuroscience will be able to provide material explanations for all mental concepts, and the problems in physics will be resolved, returning us to a purely materialistic reality. But then CSM relies entirely on a weighted possibility, and at the same time is fraught with conceptual and ontological problems. We have no reason to believe that neuroscience will be able to tell us 'what it is like to love one's parents' nor 'what it means to love one's parents'; trying to show that something as complex as a thought with surroundings can be reduced to something explicable in terms of physics seems wrong; suggesting we can categorise mental experiences seems chauvinistic and naive; and positing the identity of token mental states with token CNS states seems to do no more than highlight the close relation between the two. In other words, CSM does little to enhance our understanding of the relation between the mind and the CNS, and is so riddled with problems that it quite simply must be wrong.


Bibliography

A. Gefter, 'Throwing Einstein for a Loop', Scientific American, December 2002

N. Malcolm, 'Scientific Materialism and the Identity Theory', in C. V. Borst, The Mind/Brain Identity Theory, 1970

J. J. C. Smart, 'Sensations and Brain Processes', in C. V. Borst, The Mind/Brain Identity Theory, 1970