
Fodor's intentional realism, and the representational theory of mind


Commonsense psychology developed out of functionalism, partly in a move to explain how beings that appear functionally equivalent (e.g. me at age 2 and me at age 20) might exhibit different kinds of behaviour, and why some beings might be considered conscious whilst others wouldn't, despite seeming to be functionally or behaviourally equivalent. Central to this philosophy is Jerry Fodor's Representational Theory of Mind (RTM), which postulates a system of symbols that function as mental representations, and which together constitute a language of thought.

What differentiates conscious and unconscious beings? A functionalist would say that conscious beings have beliefs and desires that cause behaviour, whilst an unconscious being acts upon "hardware" reflexes. According to adherents of the RTM, appealing to beliefs and desires in this way is the only means we have of explaining and predicting behaviour; neuroscience can't say anything on the subject (at the moment).


Intentional realism and RTM

The central question here is: how can mental phenomena be "about" something? The sentence "Aristotle is dead" is about Aristotle, and so intends 'Aristotle' in some way. How can mental phenomena and symbols be about things, particularly if one assumes materialism, and more so if the thing the phenomena and symbols are about doesn't actually exist, e.g. a unicorn? It would seem that if we are to grant mental phenomena any intentionality, we must explain how they can relate to other things that may themselves be mental or non-mental phenomena.

According to Jerry Fodor, if I believe A, then the representational content of the belief is A, which is nothing more than a sentence. Intentional mental states are therefore relations that organisms stand in to sentences. We can only understand intentional states if we accept that beliefs and desires are real, and that they represent intentional mental states in the form of sentences in what Fodor called the language of thought.

To extend the computer analogy, the language of thought is like the binary machine code understood by the hardware. It sits between the hardware and the software, representing the machine's states in a form, or syntax, that can be systematically manipulated and translated into more human-readable languages. Thus my forming a belief that "it is cold" is a matter of my body 'feeling' the cold, a sentence in the language of thought being formed that acquires the appropriate functional role, and then, possibly, that belief being translated into the English sentence "it is cold".

The language of thought is a system of representation, and is said by its proponents to be the only way we can understand how a mind, conceived as a higher-level entity, could systematically correspond with bodily functions; hence the term "commonsense". It provides both semantic (substance) and syntactic (form) parallels between thoughts and "human language" sentences (e.g. English, French, etc.) that cannot otherwise be posited.

The language provides us with a semantic engine: a tool that can operate on the syntax of thoughts and of human languages without regard to their semantics (substance/meaning) and represent them appropriately, the semantics being preserved, in the same way that a program might manipulate data based purely on the syntax of the code, without needing to know the content of that data. This semantic engine needn't understand the information it processes - it is intelligent only in the sense that it performs operations or functions - but it contributes to understanding in the mind, in that it enables the higher-level entity to work with representations of bodily matters and with beliefs in a common syntax.

John Searle, by means of a thought experiment known as the Chinese Room, argued against this picture: a system that merely manipulates symbols according to formal rules understands nothing, any more than a man following rule-books for shuffling Chinese characters thereby understands Chinese. If that is right, a machine could pass the Turing test without being intelligent or conscious in any meaningful sense, with immense consequences for our understanding of consciousness. RTM functionalists have generally replied that the argument begs the question, and have worked around or rejected it.
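
To make the idea of a semantic engine concrete, here is a minimal sketch in Python - my own illustration, not Fodor's formalism - in which a rule operates on symbol structures purely by their shape. The atomic symbols 'COLD', 'HUNGRY' and 'BANANA' are invented for the example and carry no meaning the code can see.

    # A minimal sketch of purely syntactic manipulation: the rule below derives
    # the first conjunct from any conjunction, regardless of what the atomic
    # symbols actually mean or refer to.

    def simplify(thought):
        # 'thought' is a nested tuple such as ("AND", "COLD", "HUNGRY")
        if isinstance(thought, tuple) and thought[0] == "AND":
            return thought[1]          # syntactic rule: from (AND p q), infer p
        return thought                 # no rule applies; leave it as it is

    print(simplify(("AND", "COLD", "HUNGRY")))    # -> COLD
    print(simplify(("AND", "BANANA", "YELLOW")))  # -> BANANA; same rule, any content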


A further question arises from the semantically neutral syntax of the language of thought: what gives sentences in the language of thought their meaning? If we stick to the computing analogy, an object in code could equally refer to, for example, a banana or an apple, thanks to its semantic neutrality. But at some point a particular symbol (object) in the language of thought must have a meaning; it must take on semantics. The RTM assumes that symbols do not have intrinsic meanings, and that their meaning depends upon how they are deployed. But this suggests that symbols in our language of thought only have meanings insofar as we assign them meanings, and if that is true, the language of thought cannot explain how we can have meaningful thoughts, since the assignment of meanings presupposes meaningful thought.
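
The worry can be put in code as well. In the toy sketch below - again my own, with the symbol name 'FRUIT_37' invented for the purpose - the symbol is inert on its own; whatever "meaning" it has is supplied from outside by an interpretation, and nothing in the sketch says where that interpretation itself comes from.

    # The symbol itself carries no meaning; its "meaning" lives entirely in an
    # interpretation supplied from outside.

    symbol = "FRUIT_37"   # an arbitrary token in the hypothetical inner code

    interpretation_a = {"FRUIT_37": "banana"}
    interpretation_b = {"FRUIT_37": "apple"}

    print(interpretation_a[symbol])  # banana
    print(interpretation_b[symbol])  # apple - same symbol, different assigned meaning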

If meaning is neither intrinsic nor interpretively assigned by the thinker, where does it enter this picture of the mind? The only remaining answer, liberal though it is, is to posit that symbols owe their semantics to their relation to the mind's surroundings; a symbol 'banana', for example, takes on meaning through its relation to bananas the thinker has come across. Thoughts, therefore, are composed of symbols in the language of thought together with those symbols' relations to the external world.


According to Noam Chomsky, if language is both productive and creative, then it can have no theoretical limit, and therefore neither can thought. This is a startling syntactic parallel, suggesting that our 'hardwired' language of thought adapts and evolves within us, and with the species' evolution. But if this is the case, then by a late age we must each have quantitatively different languages of thought that we each attempt to translate into our native tongues. How then do we communicate so well, if our mental representations of our surroundings can be so qualitatively different? I would suggest that the differences between our languages of thought are in fact slight when compared to the unity between people's languages, and that these differences manifest themselves in subtle qualitative differences in our representations of our surroundings: different enough to allow for conflicts of opinion and misunderstandings, but similar enough to allow for common reference points. Perhaps this evolution of languages of thought also relates to culture, and to cultural relativism?

This raises a further question about native tongues: could the language of thought in fact be English, Spanish, Latin, or any other 'human language'? Fodor thought not; it ought rather to be labelled mentalese, a universal language of thought. Chomsky agreed that it must be universal, since different cultures with no contact share such commonalities in their behaviour and mental functioning that they must share a common mental frame of reference.


The RTM can help us understand learning and understanding. Concept learning can be understood as a process of forming hypotheses, testing them, representing the test results and drawing conclusions. This process is most naturally performed and described with language, which suggests that there is nothing more to learning and understanding than our commonsense perception of these processes. When we hypothesise, we are forming connections between mental sentences, creating structures from objects that can then be tested against our surroundings, and so on. As we learn and understand, different mental objects take on different functional states in the process.
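
As a rough sketch of that picture - a toy example of my own, with the features, examples and the concept "banana" invented for the purpose - concept learning might look like candidate hypotheses, built from symbols, being tested against observed cases and discarded when they fail.

    # Toy concept learning as hypothesis testing over symbolic representations.
    # Each example pairs a set of observed features with whether it falls under
    # the concept being learned.

    examples = [
        ({"yellow", "curved"}, True),    # a banana
        ({"red", "round"}, False),       # an apple
        ({"yellow", "round"}, False),    # a lemon, say
    ]

    # Candidate hypotheses about the concept, expressed as required features.
    hypotheses = [{"yellow"}, {"curved"}, {"yellow", "curved"}]

    def consistent(hypothesis):
        # A hypothesis must apply to every positive example and to no negative one.
        return all((hypothesis <= features) == label for features, label in examples)

    print([h for h in hypotheses if consistent(h)])  # hypotheses that survive the tests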


Objection: Need the language of thought be a language? Can it not be composed of other kinds of data structures, for example images? Since we take in information in many different forms (visual, aural, tactile, etc.), doesn't it make sense that our minds process each form of input in a different way, and store and represent these processes and states in different ways? It is possible that all data is broken down into linguistic structures, just as computers break all data, be it music, art or writing, down into binary code, but such structures would no longer have obvious linguistic qualities. We could therefore only understand the language of thought as something similar or analogous to binary machine code, which universalises all data for computers.
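
By way of analogy only - the byte values below are made up for the example - this is how very different kinds of input end up in a computer as the same kind of thing, sequences of bytes, with none of their surface qualities remaining.

    # Text, sound and image all reduce to the same underlying code.

    text  = "hello there".encode("utf-8")      # a sentence
    sound = bytes([127, 201, 88, 13])          # four made-up audio samples
    image = bytes([255, 0, 0, 0, 255, 0])      # two made-up RGB pixels

    for data in (text, sound, image):
        print(list(data)[:6])   # just numbers; nothing "textual", "aural" or "visual" left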

It certainly seems more sensible to posit a linguistic mental system than a visual or aural one. To represent all data visually, the mind would need to be able to break down all information into a common visual 'format' that could not be confused with any immediate visual data; it would need to break down the sentence "hello there", the noise of a cat's meow and Monet's water lilies into one single system of visual representation that didn't resemble raw visual data we might otherwise see in the world. Though that may be possible, it is harder to imagine than a language that can do the same, and to come up with such a visual system of representation we would most likely use language of some form (even if it were mathematics) to work it out, suggesting that we intuitively turn to language for sophisticated manipulation of data.


According to followers of Wittgenstein, there are many more objections to the RTM:

Objection 1: Thoughts are neither simple nor complex; they cannot be broken down in the same way as sentences
Objection 2: Thinking isn't a process; talking and using sentences are
Objection 3: Thoughts can be entirely instantiated in an instant; sentences take time to use
Objection 4: Talking is a form of behaviour; thinking isn't

Broadly speaking, they contend that Fodor and his followers misrepresent thoughts and language, falsely asserting similarities. Followers of RTM reply that thoughts do take time, for example when surfacing from the unconscious, or when being formed as the mind learns of new ideas and forms new mental connections that can over time be understood as one thought.


Tropisms also pose a problem for functionalists in general, and therefore also for followers of the RTM. A tropism is a being whose behaviour is genetic, or hardwired, with no flexibility and no thought, but which, in terms of its behaviour and all other publicly observable phenomena, appears intelligent. The distinguishing difference between a conscious being and a tropism, or unconscious being, seems to be the ability to process new information and adapt. Fodor suggested that to be conscious, a being must have the ability to represent states of affairs and to act upon these representations. This answer reinforces the importance of representation in an understanding of our minds.


Finally, one can ask: even if the RTM is correct, how could we ever come to know the language of thought, given that it underlies all the human languages we can actually study? It would be a matter of reverse-engineering a code from human behaviour and the goings-on in the brain: either a daunting or an impossible task.