Mental representation is a kind of representation posited by cognitive scientists to account for various aspects of human and animal cognition (see Von Eckardt 1993 for a more detailed account). To better understand the nature of mental representation, it is useful to begin with a consideration of representation in general. Following Peirce (Hartshorne, Weiss, and Burks 1931-1958), we can say that any representation has four essential aspects: it is realized by a representation bearer, it has content or represents one or more objects, its representation relations are "grounded" somehow, and it is interpretable by (that is, it will function as a representation for) some interpreter.

If we take one of the foundational assumptions of cognitive science to be that the mind/brain is a computational device (see COMPUTATIONAL THEORY OF MIND), the representation bearers of mental representations will be computational structures or states. The specific nature of these structures or states depends on what kind of computer the mind/brain is hypothesized to be. To date, cognitive science research has focused on two kinds: conventional (or von Neumann, symbolic, or rule-based) computers and connectionist (or parallel distributed processing) computers (see COGNITIVE MODELING, SYMBOLIC and COGNITIVE MODELING, CONNECTIONIST). If the mind/brain is a conventional computer, then the representation bearers of our mental representations will be data structures. Kosslyn's (1980) work on mental IMAGERY provides a nice illustration. If the mind/brain is a connectionist computer, then the representation bearers of representations of occurrent mental states will be activation states of connectionist nodes or sets of nodes. In the first case, representation is considered to be "local"; in the second case, "distributed" (see DISTRIBUTED VS LOCAL REPRESENTATION and McClelland, Rumelhart, and Hinton 1986). There may, in addition, be implicit representation (storage of information) in the connections themselves, a form of representation appropriate for dispositional mental states.

While individual claims about what our representations are about are frequently made in the cognitive science literature, we do not know enough about mental representation as a system to theorize about its semantics in the sense in which linguistics provides us with the formal SEMANTICS of natural language (as in POSSIBLE WORLD SEMANTICS or DYNAMIC SEMANTICS). However, if we reflect on what it is that mental representations are hypothesized to explain -- namely, certain features of our cognitive capacities -- we can plausibly infer that the semantics of our mental representation system must have certain characteristics. To be more specific: pre-theoretically conceived, the human cognitive capacities have the following three properties: (a) each capacity is intentional; that is, it involves states that have content or are "about" something; (b) virtually all of the capacities are pragmatically evaluable; that is, they can be exercised with varying degrees of success; and (c) most of the capacities are productive; that is, once a person has the capacity in question, he or she is typically in a position to manifest it in a practically unlimited number of novel ways. To account for these features, we must posit mental representations that are able to represent specific objects; to represent many different kinds of objects -- concrete objects, sets, properties, events, and states of affairs in this world, in possible worlds, and in fictional worlds as well as abstract objects such as universals and numbers; to represent both an object (tout court) and an aspect of that object (or both extension and intension); and to represent both correctly and incorrectly. In addition, if we take seriously the productivity of our cognitive capacities, we must posit representations with constituent structure and a compositional semantics. (Fodor and Pylyshyn 1988 use this fact to argue that our mental representation system cannot be connectionist. See CONNECTIONISM, PHILOSOPHICAL ISSUES.)
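The productivity argument can be illustrated with a toy sketch (not from the article; the lexicon and world model below are invented for illustration). A finite stock of atomic representations plus a few composition rules suffices to assign contents to indefinitely many novel combinations, which is what constituent structure and compositional semantics buy us:

```python
# A toy "language of thought": expressions are atoms (strings) or tuples
# built from an operator and constituent expressions. The atoms and their
# truth values are hypothetical, chosen only for illustration.

WORLD = {"snow_is_white": True, "grass_is_red": False}

def meaning(expr):
    """Compute the content of an expression from the contents of its parts
    (compositional semantics over constituent structure)."""
    if isinstance(expr, str):          # atomic representation
        return WORLD[expr]
    op, *parts = expr                  # complex representation
    if op == "not":
        return not meaning(parts[0])
    if op == "and":
        return meaning(parts[0]) and meaning(parts[1])
    if op == "or":
        return meaning(parts[0]) or meaning(parts[1])
    raise ValueError(f"unknown operator: {op}")

# A novel combination never listed anywhere gets a content automatically:
print(meaning(("and", "snow_is_white", ("not", "grass_is_red"))))  # True
```

The point is structural: nothing in `WORLD` mentions the complex expression, yet its content is fully determined by the contents of its parts and their mode of combination.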

Cognitive scientists study not only the contents of mental representations but also where this content comes from -- that is, what it is about the mind/brain that makes a mental representation of a tree be about a tree. Theories of what determines content are often referred to as this-or-that kind of "semantics." Note, however, that it is important to distinguish such "theories of content determination" (Von Eckardt 1993) from the kind of semantics that systematically describes the content being determined (i.e., the kind referred to in the previous paragraph).

There are currently five types of proposals regarding how mental representational content is grounded. Two are discussed elsewhere -- FUNCTIONAL ROLE SEMANTICS and INFORMATIONAL SEMANTICS. The remaining three are characterized below.

1. The structural isomorphism approach takes a representation to be "some sort of model of the thing (or things) it represents" (Palmer 1978). The representation (or more precisely, the representation bearer) represents aspects of the represented object by means of aspects of itself. Palmer (1978) treats both the representation bearer and the represented object as relational systems; that is, sets of constituent objects and sets of relations defined over these objects. A representation bearer then represents a represented object under some aspect if there exists a set G of relations which constitute the representation bearer and a set D of relations which constitute the object such that G is isomorphic to D.
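Palmer's definition can be made concrete with a small sketch (assumptions of mine, not Palmer's formalism: relational systems are modeled as finite object lists with binary relations given as sets of pairs, and isomorphism is checked by brute force over bijections):

```python
from itertools import permutations

def represents(bearer_objs, G, world_objs, D):
    """Return a bijection f from bearer constituents to world constituents
    under which relation G on the bearer mirrors relation D on the world,
    or None if no such mapping exists (i.e., G and D are not isomorphic)."""
    if len(bearer_objs) != len(world_objs):
        return None
    for perm in permutations(world_objs):
        f = dict(zip(bearer_objs, perm))
        if {(f[a], f[b]) for (a, b) in G} == set(D):
            return f
    return None

# Hypothetical example: three mental tokens ordered by relative "activation"
# represent three cities ordered by population, in virtue of isomorphism.
tokens = ["t1", "t2", "t3"]
G = {("t1", "t2"), ("t2", "t3"), ("t1", "t3")}            # order on tokens
cities = ["York", "Leeds", "London"]
D = {("London", "Leeds"), ("Leeds", "York"), ("London", "York")}
print(represents(tokens, G, cities, D))
```

On this approach the bearer represents the cities only under the population aspect; the same tokens could equally represent anything else whose relational structure matches G.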

2. The causal historical approach (Devitt 1981; Sterelny 1990) is intended to apply only to the mental analogues of designational expressions. The view is that a token designational "expression" in the LANGUAGE OF THOUGHT designates an object if there is a certain sort of causal chain connecting the representation bearer with the object. Such causal chains include perceiving the object, designating the object in natural language, and borrowing a designating expression from another person (see REFERENCE, THEORIES OF).

3. According to the biological function approach (Millikan 1984), mental representations, like animal communication signals, are "intentional icons," a form of representation which is "articulate" (has constituent structure and a compositional semantics) and mediates between producer mechanisms and interpreter mechanisms. The content of any given representation bearer, on this view, will be determined by two things -- the systematic natural associations that exist between the family of intentional icons to which the representation bearer belongs and some set of representational objects, and the biological functions of the interpreter device. More specifically, a representation bearer will represent an object if the existence of a mapping from the representation bearer-family to the object-family is a condition of the interpreter device successfully performing its biological functions. For example, there is an association between bee dances and the location of nectar relative to the hive. The interpreter device for bee dances consists of the gatherer bees, and among their biological functions are functions adapted to specific bee dances, e.g. finding nectar 120 ft. to the north of the hive in response to, say, bee dance #23. Bee dance #23 then expresses the proposition that the nectar is 120 ft. to the north of the hive if the interpreter device can successfully perform its function only if bee dance #23 is in fact associated with the nectar's being at that location.
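The structure of the bee-dance example can be sketched schematically (a minimal sketch of mine, not Millikan's formalism; the dance-numbering convention and the locations are invented, chosen only to echo the article's example):

```python
# Hypothetical icon-family -> object-family mapping: suppose dance #n is
# naturally associated with nectar n*10 ft north of the hive.
def icon_to_location(dance_id):
    return dance_id * 10   # feet north of the hive (assumed convention)

def gatherer_response(dance_id):
    """Interpreter device: the gatherer bees fly to the location mapped
    from the dance they observe."""
    return icon_to_location(dance_id)

def foraging_succeeds(dance_id, actual_nectar_location):
    """The interpreter performs its biological function (finding nectar)
    only if the dance is in fact associated with the nectar's location --
    the condition that, on this view, fixes the dance's content."""
    return gatherer_response(dance_id) == actual_nectar_location

print(foraging_succeeds(12, 120))   # mapping holds: function performed
print(foraging_succeeds(12, 200))   # mapping fails: function not performed
```

The content of dance #12 is fixed not by any resemblance or causal chain but by the fact that the gatherers' success is conditional on the mapping holding.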

It can be argued that for a mental entity or state to be a representation, it must not only have content, it must also be significant for the subject who has it. According to Peirce, a representation that has such significance has the power to produce an "interpretant" state or process in the subject and this state or process is related to both the representation and the subject in such a way that, by means of the interpretant, what the representation represents can make a difference to the internal states and behavior of the subject. Although this aspect of mental representation has received little explicit attention, and its existence or importance might even be disputed by some, many cognitive scientists seem to assume that the interpretant of a mental representation, for a given subject, consists of all the possible (token) computational consequences, including both the processes and the results of these processes, contingent upon the subject's actively "entertaining" that representation.

Cognitive scientists engaged in the process of modeling or devising empirical theories of specific cognitive capacities (or specific features of those capacities) often posit particular kinds of mental representations. For pedagogical purposes, Thagard (1996) categorizes representations into six main kinds, each of which is typically associated with certain types of computational processes. They are: sentences or well-formed formulas of a logical system (see LOGICAL REASONING SYSTEMS), rules (as in PRODUCTION SYSTEMS -- see NEWELL), representations of concepts such as frames, SCHEMATA, scripts (see CATEGORIZATION), analogies (see ANALOGY), images, and connectionist representations. Another popular distinction is between symbolic representation (found in "conventional" computational devices) and sub-symbolic representation (found in connectionist devices). There is unfortunately no conceptually tidy taxonomy of representational kinds. Sometimes such kinds are distinguished by their computational or formal characteristics -- for example, local vs. distributed representation in connectionist systems. Sometimes they are distinguished in terms of what they represent -- for example, phonological, lexical, syntactic, and semantic representation in linguistics and psycholinguistics. And, sometimes, both form and content play a role. For example, Paivio's (1986) dual-coding theory claims that there are two basic modes of representation -- imagistic and propositional. According to Eysenck and Keane (1995), imagistic representations are modality-specific, non-discrete, implicit, and involve loose combination rules whereas propositional representations are amodal, discrete, explicit, and involve strong combination rules. The first contrast, modality-specific vs. amodal, refers to the aspect under which the object is represented, hence, to content. The other three contrasts all concern form.

Not all philosophers interested in cognitive science regard the positing of mental representations as being necessary or, even, unproblematic. For example, Stich (1983) argues that if one compares the strengths and weaknesses of a "Syntactic Theory of Mind" (STM), according to which mental states are treated as relations to purely syntactic mental sentence tokens and generalizations are framed in purely formal/computational terms, with representational approaches, STM will win. Representational approaches, on his view, necessarily encounter difficulties in explaining the cognition of young children, "primitive" folk, and the mentally and neurally impaired -- cases that STM handles without difficulty. Furthermore, it is not clear that cognitive science ought to aim at explaining the sorts of intentional phenomena (capacities or behavior) that mental representations are typically posited to explain.

Even more damning critiques of mental representation can be found in Judge (1985) and Horst (1996). Judge accepts the Peircian tripartite conception of representation according to which a representation involves a representation bearer R, an object represented O, and an interpretant I, but takes the interpretant to require an agent performing an intentional act such as understanding R to represent O. The latter condition, however, causes problems for mental representation, on her view. For understanding R to represent O itself necessitates that the agent have non-mediated access to O but, on the assumption that all cognition is mediated by mental representation, this is impossible. (Another problem with this view of the interpretant, not discussed by Judge, is that it leads to an infinite regress of mental representations.)

Horst (1996) also believes that cognitive science's attempt to explain INTENTIONALITY by positing mental representations is fundamentally confused. Mental representations are usually taken to be symbols. But a symbol, in the standard semantic sense, involves conventions, both with respect to its meaning and with respect to its syntactic type. And since conventions themselves involve intentionality, intentionality cannot be explained by positing mental representations. An alternative is to treat 'mental symbol' as a technical term. But Horst argues that, viewed in this technical way, the positing of mental representations also fails to be explanatory. Furthermore, even if such an alternative approach were to work, cognitive science would still be saddled with the conventionality of mental syntax.

See also

CONCEPTS
KNOWLEDGE REPRESENTATION
MENTAL MODELS


-- Barbara Von Eckardt


REFERENCES



Devitt, M. (1981). Designation. New York: Columbia University Press.

Eysenck, M. W. and M. T. Keane. (1995). Cognitive Psychology: A Student's Handbook. Hillsdale, NJ: Erlbaum.

Fodor, J.A. and Z. W. Pylyshyn. (1988). Connectionism and cognitive architecture: a critical analysis. In S. Pinker and J. Mehler (Eds.), Connections and Symbols. Cambridge, MA: MIT Press, pp. 3-71.

Hartshorne, C., P. Weiss and A. Burks (Eds.). (1931-1958). Collected Papers of Charles Sanders Peirce. Cambridge, MA: Belknap Press of Harvard University Press.

Horst, S. W. (1996). Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. Berkeley: University of California Press.

Judge, B. (1985). Thinking About Things: A Philosophical Study of Representation. Edinburgh: Scottish Academic Press.

Kosslyn, S. M. (1980). Image and Mind. Cambridge, MA: Harvard University Press.

McClelland, J. L., D. E. Rumelhart and G. E. Hinton (Eds.). (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vols. 1 and 2. Cambridge, MA: MIT Press.

Millikan, R. (1984). Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press.

Paivio, A. (1986). Mental Representations: A Dual Coding Approach. Oxford: Oxford University Press.

Sterelny, K. (1990). The Representational Theory of Mind: An Introduction. Oxford: Blackwell.

Stich, S. P. (1983). From Folk Psychology to Cognitive Science. Cambridge, MA: MIT Press.

Thagard, P. (1996). Mind: Introduction to Cognitive Science. Cambridge, MA: MIT Press.

Von Eckardt, B. (1993). What is Cognitive Science? Cambridge, MA: MIT Press.


Further Readings

Anderson, J.R. (1983). The Architecture of Cognition. Cambridge, MA: Harvard University Press.

Anderson, J.R. (1993). Rules of the Mind. Hillsdale, N.J.: Erlbaum.

Bechtel, W. and A. Abrahamsen. (1991). Connectionism and the Mind: An Introduction to Parallel Processing in Networks. Oxford: Blackwell.

Block, N. (1986). Advertisement for a semantics for psychology. In P.A. French, T.E. Uehling, Jr. and H.K. Wettstein (Eds.), Midwest Studies in Philosophy, Studies in the Philosophy of Mind, vol. 10. Minneapolis: University of Minnesota Press, pp. 615-678.

Cummins, R. (1989). Meaning and Mental Representation. Cambridge, MA: MIT Press.

Devitt, M. and K. Sterelny. (1987). Language and Reality. Cambridge, MA: MIT Press.

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.

Fodor, J. (1975). The Language of Thought. New York: Crowell.

Fodor, J. (1981). Representations. Cambridge, MA: MIT Press.

Fodor, J. (1987). Psychosemantics. Cambridge, MA: MIT Press.

Fodor, J. (1990). A Theory of Content and Other Essays. Cambridge, MA: MIT Press.

Genesereth, M. R. and N. J. Nilsson. (1987). Logical Foundations of Artificial Intelligence. Los Altos, CA: Morgan Kaufmann.

Hall, R. (1989). Computational approaches to analogical reasoning: A comparative analysis. Artificial Intelligence 39: 39-120.

Holyoak, K.J. and P.Thagard. (1995). Mental Leaps: Analogy in Creative Thought. Cambridge, MA: MIT Press.

Johnson-Laird, P.N. (1983). Mental Models. Cambridge, MA: Harvard University Press.

Kosslyn, S.M. (1994). Image and Brain: The Resolution of the Imagery Debate. Cambridge, MA: MIT Press.

Lloyd, D. (1987). Mental representation from the bottom up. Synthese 70: 23-78.

Loar, B. (1981). Mind and Meaning. Cambridge: Cambridge University Press.

Millikan, R. (1989). Biosemantics. Journal of Philosophy 86: 281-297.

Minsky, M. (1975). A framework for representing knowledge. In P.H. Winston (Ed.), The Psychology of Computer Vision. New York: McGraw-Hill, pp. 211-277.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. Spiro, B. Bruce and W. Brewer (Eds.), Theoretical Issues in Reading Comprehension. Hillsdale, NJ: Erlbaum, pp. 33-58.

Schank, R.C. and R. P. Abelson. (1977). Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale, NJ: Erlbaum.

Searle, J. R. (1983). Intentionality. Cambridge: Cambridge University Press.

Smith, E.E. and D.L. Medin. (1981). Categories and Concepts. Cambridge, MA: Harvard University Press.