(2) the contents of thoughts are determined by their construction out of concepts; and
(3) the contents of concepts are determined by their `functional role' in a person's psychology, where
(4) functional role is conceived nonsolipsistically as involving relations to things in the world, including things in the past and future.
I include the parenthetical modifier `(nonsolipsistic)' in the phrase `(nonsolipsistic) conceptual role semantics' to contrast this approach with that of some recent authors (Field, 1977; Fodor, 1980; Loar, 1981) who think of conceptual role solipsistically as a completely internal matter. I put parentheses around `nonsolipsistic' because, as I will argue below, the term is redundant: conceptual role must be conceived nonsolipsistically.
Commenting on Harman (1982), Loewer (1982) takes this nonsolipsistic aspect to be an important revision in an earlier solipsistic theory. This is not so. Conceptual role semantics derives from nonsolipsistic behaviorism. It was originally and has until recently been a nonsolipsistic theory (e.g. Sellars, 1954; Harman, 1973). This is discussed further below.
(Nonsolipsistic) conceptual role semantics represents one thing that might be meant by the slogan `meaning is use'. But a proper appreciation of the point requires distinguishing (at least) two uses of symbols: their use in calculation, as in adding a column of figures, and their use in communication, as in telling someone the result.
(Nonsolipsistic) conceptual role semantics may be seen as a version of the theory that meaning is use, where the basic use of symbols is taken to be in calculation, not in communication, and where concepts are treated as symbols in a `language of thought'. Clearly, the relevant use of such `symbols', the use of which determines their content, is their use in thought and calculation rather than in communication. If thought is like talking to yourself, it is the sort of talking involved in figuring something out, not the sort of talking involved in communication. Thinking is not communicating with yourself.
However, it would be more accurate to say content is use than to say meaning is use; strictly speaking, thoughts and concepts have content, not meaning.
Grice proposes to analyze expression meaning in terms of speaker meaning, and he proposes, more controversially, to analyze speaker meaning in terms of a speaker's intentions to communicate something. This last proposal appears to overlook the meaningful use of symbols in calculation. You might invent a special notation in order to work out a certain sort of problem. It would be quite proper to say that by a given symbol you meant so-and-so, even though you have no intentions to use these symbols in any sort of communication.
There does seem to be some sort of connection between speaker or user meaning and the speaker's or user's intentions. Suppose you use your special notation to work out a specific problem. You formulate the assumptions of the problem in your notation, do some calculating, and end up with a meaningful result in that notation. It would be correct to say of you that, when you wrote down a particular assumption in your notation, you meant such and such by what you wrote; but it would be incorrect to say of you that, when you wrote the conclusion you reached in your notation, you meant so and so by what you wrote. This seems connected with the fact that, in formulating the assumption as you did in your notation, you intended to express such and such an assumption; whereas, in writing down the conclusion you reached in your notation, your intention was not to express such and such a conclusion but rather to reach whatever conclusion in your notation followed from earlier steps by the rules of your calculus. This suggests that you mean that so and so in using certain symbols if and only if you use those symbols to express the thought that so and so, with the intention of expressing such a thought.
Unexpressed thoughts (beliefs, fears, desires, etc.) do not have meaning. We would not ordinarily say that in thinking as you did you meant that so and so. If thoughts are in a language of thought, they are not (normally) also expressed in that language.
I say `normally' because sometimes one has thoughts in English or some other real language. Indeed, I am inclined to think a language, properly so called, is a symbol system that is used both for communication and thought. If one cannot think in a language, one has not yet mastered it. A symbol system used only for communication, like Morse code, is not a language.
Concepts and other aspects of mental representation have content but not (normally) meaning (unless they are also expressed in a language used in communication). We would not normally say that your concept of redness meant anything in the way that the word `red' in English means something. Nor would we say that you meant anything by that concept on a particular occasion of its exercise.
Similarly, it is concepts that have uses or functions or roles in thought, not the possible attitudes in which those concepts occur. There are indefinitely many possible attitudes. Most possible attitudes are never taken by anyone, and most attitudes that are at some point taken by someone are taken by someone only once. Possible beliefs, desires, and other attitudes do not normally have regular uses or functions or roles that make them the possible attitudes they are. Consider, for example, what use or role or function there might be for the possible belief of yours that you have bathed in Coca-Cola. This belief would have a certain content, but no obvious use or role or function. The content of a belief does not derive from its own role or function but rather from the uses of the concepts it exercises.
Loar (1983a) objects to this. He supposes it implies that thoughts literally contain (tokens of) concepts as parts, so that, for example, all conjunctive beliefs share a constituent representing conjunction, and similarly for other concepts. Loar rejects this view, arguing that our ordinary conception of belief and other attitudes does not require it. He agrees we must suppose that all conjunctive beliefs have something in common, and similarly that all negative beliefs have something in common, but this he says is not to suppose that all conjunctive or negative beliefs have a constituent in common. He says that conjunctive or negative beliefs might have certain `second order properties in common' without having any `first order, structural properties in common'.
I find the issue here quite obscure, like the question of whether the prime factors of a number are constituents of the number or not. All numbers that have three as a prime factor have something in common. Do they have a first order structural property in common or only a second order property? This does not strike me as a well defined issue, and I feel the same way about Loar's issue. Just as it is useful for certain purposes to think of the prime factors of a number as its constituents, it is also useful for certain purposes to think of attitudes as having concepts as constituents. But it is hard to know what is meant by the question whether concepts are literally constituents of thoughts.
It may seem that Loar is thinking along the following lines. A relation of negation might hold between two beliefs without there being anything that determines which belief is the negative one, and similarly for other concepts. In the case of conjunction, let us say that P has the `relation of conjunction' to Q and R if and only if Q and R together obviously imply and are obviously implied by P. Belief P might have the relation of conjunction in this sense to Q and R without anything distinguishing the case in which P has the structure `Q and R' from that in which P does not have that structure but instead Q and R have the respective structures, `P or S' and `P or not S'. Similarly for other concepts.
This may seem a promising way to understand a denial of the claim that thoughts are constructed out of concepts, until one realizes that what is being imagined is simply that equivalent beliefs cannot be distinguished, so that `not not P' cannot be distinguished from `P', `P and Q' cannot be distinguished from `neither not P nor not Q', and so on. But it is easy to reject this possibility. If the issue is whether there can be two different but logically equivalent beliefs, the answer is obviously `Yes'. So this cannot be how Loar intends the issue to be understood.
Loar (1983a) says that the issue is which of the following possibilities is correct: is it (1) that the structures attitudes have help to explain functional relations among attitudes, or is it (2) that the functional relations among the attitudes help explain why we assign sentence-like structures to the attitudes? But (1) and (2) are not exclusive alternatives. Any reasonable theory will allow that people often accept conclusions because the conclusions are instances of generalizations they accept. This is to allow that beliefs can have the quantified structure of generalizations and that this can explain certain functional relations among beliefs. And any reasonable theory will allow that we determine what structures attitudes have by considering functional relations among attitudes. So any reasonable theory will accept both (1) and (2).
In a reply to this, Loar (1983b) says to consider what it would be like if a conjunctive belief C(P,Q) were internally represented as a single unstructured symbol R that was linked by particular rules to its conjuncts P and Q, so that R obviously implied and was obviously implied by P and Q although one's recognition of the implication did not depend on thinking of R as the conjunction of P and Q. In other words, the reason why one immediately recognized the implication from R to P would in this case be different from the reason why one immediately recognized the implication from U to S, where U was the conjunction of S and T. The fact that R and U were conjunctions would not be part of the explanation of one's recognition of these implications. And similarly for other logical constants.
But this sort of example is precisely not compatible with our ordinary thinking about belief, since (as I have already observed) we do suppose people often recognize implications because of the relevant formal properties; for example, we think people often accept certain conclusions because the conclusions are instances of generalizations they accept, which is to think that beliefs have a certain internal structure. So this example does not support Loar's scepticism.
In the case of concepts of shape and number, inferential connections play a larger role. Perceptual connections are still relevant; to some extent your concept of a triangle involves your notion of what a triangle looks like, and your concept of various natural numbers is connected with your ability to count objects you perceive. But the role these notions play in inference looms larger.
The concept expressed by the word `because' plays an important role in one's understanding of phenomena and has (I believe) a central role in inference, since inference is often inference to the best explanation. This role makes the concept expressed by `because' the concept it is, I believe. Is perception relevant at all here? Perhaps. It may be that you sometimes directly perceive causality or certain explanatory relations, and it may be that this helps to determine the content of the concept you express with the word `because'. Or perhaps not. Maybe the perception of causality and other explanatory relations is always mediated by inference.
Logical words like `and', `not', `every', and `some' express concepts whose function in inference seems clearly quite important to their content, which is why it seems plausible to say that these words do not mean in intuitionistic logic and quantum logic what they mean in so-called classical logic, although even here there may be crucial perceptual functions. It may, for example, be central to your concept of negation that you can sometimes perceive that certain things are not so, as when you perceive that Pierre is not in the cafe. It may be central to your concept of generality or universal quantification that you can sometimes perceive that everything of a certain sort is so and so, for instance that everyone in the room is wearing a hat.
It is possible that there are certain sorts of theoretical term, like `quark', that play no role in perception at all, so that the content of the concepts they express is determined entirely by inferential role. (But maybe it is important to the concept of a quark that the concept should play a role in the perception of certain pictures or diagrams!)
There is as yet no substantial theory of inference or reasoning. To be sure, logic is well developed; but logic is not a theory of inference or reasoning. Logic is a theory of implication and inconsistency.
Logic is relevant to reasoning because implication and inconsistency are. Implication and inconsistency are relevant to reasoning because implication is an explanatory relation and because inconsistency is a kind of incoherence, and in reasoning you try among other things to increase the explanatory coherence of your view (Harman, 1986a). Particularly relevant are relations of immediate or obvious psychological implication and immediate or obvious psychological inconsistency.
These notions, of immediate implication and inconsistency for a person S, might be partly explained as follows. If P and R immediately imply Q for S, then, if S accepts P and R and considers whether Q, S is strongly disposed to accept Q too, unless S comes to reject P or R. If U and V are immediately inconsistent for S, S is strongly disposed not to accept both, so that, if S accepts the one, S is strongly disposed not to accept the other without giving up the first.
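The two dispositional conditions just stated can be rendered as a toy computational model. This is only an illustrative sketch, not anything in the text: the representation of beliefs as strings, the `Subject` class, and the stipulated tables of immediate implications and inconsistencies are all my assumptions, and the model ignores the point (made in the next paragraph) that these dispositions can be overridden.

```python
# Toy model of the dispositional conditions on immediate implication
# and immediate inconsistency for a subject S. Beliefs are plain
# strings; which implications and inconsistencies count as
# "immediate" for S is simply stipulated.

class Subject:
    def __init__(self, immediate_implications, immediate_inconsistencies):
        # {frozenset of premises: conclusion} -- immediate implications for S
        self.implications = immediate_implications
        # set of frozensets {U, V} that are immediately inconsistent for S
        self.inconsistencies = immediate_inconsistencies
        self.accepted = set()

    def accept(self, p):
        # Inconsistency condition: S is strongly disposed not to accept
        # both members of an immediately inconsistent pair.
        for q in self.accepted:
            if frozenset({p, q}) in self.inconsistencies:
                raise ValueError(f"{p!r} is immediately inconsistent with {q!r}")
        self.accepted.add(p)

    def consider(self, q):
        # Implication condition: if S accepts P and R, which immediately
        # imply Q, and considers whether Q, S is strongly disposed to
        # accept Q too.
        for premises, conclusion in self.implications.items():
            if conclusion == q and premises <= self.accepted:
                self.accepted.add(q)
                return True
        return False

s = Subject(
    immediate_implications={frozenset({"P", "R"}): "Q"},
    immediate_inconsistencies={frozenset({"Q", "not-Q"})},
)
s.accept("P")
s.accept("R")
s.consider("Q")            # Q is now accepted
print("Q" in s.accepted)   # True
```

As the next paragraph notes, these conditions are at best necessary conditions, so the model's rules should be read as tendencies rather than definitions.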
I should stress that these dispositions are only dispositions or tendencies which might be overridden by other factors. Sometimes one has to continue to believe things one knows are inconsistent, because one does not know of any good way to resolve the inconsistency. Furthermore, the conditions I have stated are at best only necessary conditions. For example, as Scott Soames has pointed out to me, U and `I do not believe U' satisfy the last condition without being inconsistent. Soames has also observed that the principles for implication presuppose there is not a purely probabilistic rule of acceptance for belief. Otherwise one might accept P and Q without accepting their conjunction, which they obviously imply, on the grounds that the conjuncts can have a high probability without the conjunction having such a high probability. I have elsewhere argued against such a purely probabilistic rule (Harman, 1986a).
Now, if logical concepts are entirely fixed by their functions in reasoning, a concept C expresses logical conjunction if it serves to combine two thoughts P and Q to form a third thought C(P,Q), where the role of C can be characterized in terms of the principles of `conjunction introduction' and `conjunction elimination'. In other words, P and Q obviously and immediately imply, and are immediate obvious psychological implications of, C(P,Q). Similarly, a concept N expresses logical negation if it applies to a thought P to form a second thought N(P) and the role of N can be characterized as follows: N(P) is obviously inconsistent with P and is immediately implied by anything else that is obviously inconsistent with P and vice versa, that is, anything obviously inconsistent with N(P) immediately implies P. (I am indebted to Scott Soames for pointing out that this last clause is needed.) In the same way, concepts express one or another type of logical quantification if their function can be specified by relevant principles of generalization and instantiation. To repeat, this holds only on the assumption that logical concepts are determined entirely by their role in reasoning and that any role in perception they might have is not essential or derives from role in reasoning.
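The two characterizations can be displayed compactly. The notation here is introduced for convenience and is not Harman's: `$\Rightarrow$' stands for immediate, obvious psychological implication, and `$\#$' for obvious inconsistency.

```latex
% Role of a conjunction concept C (introduction and elimination):
P,\ Q \Rightarrow C(P,Q), \qquad C(P,Q) \Rightarrow P, \qquad C(P,Q) \Rightarrow Q
% Role of a negation concept N:
N(P)\ \#\ P; \qquad
\text{if } R\ \#\ P \text{ then } R \Rightarrow N(P); \qquad
\text{if } R\ \#\ N(P) \text{ then } R \Rightarrow P
```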
Presumably, such accounts of logical form should be relevant to or perhaps even part of a grammatical analysis of the relevant sentences. Here is an area where there may be useful interaction between what philosophers do and what linguists do. However, as Chomsky (1980) observes, distinctions that are important for linguistics may not coincide with the distinctions that are important for philosophers. Or, to put the point in another way, the factors that determine relations of implication may not all be of the same sort. Some may be aspects of what Chomsky calls `sentence grammar', others may not. And some aspects of `sentence grammar' that function syntactically like logical features may not be directly connected with implication. For example, Chomsky suggests that the rules of grammar that determine how quantifiers are understood, which are of course crucial in determining what the logical implications of the sentence are, may be the same as the rules that determine such things as the `focus' of a sentence, something which seems not to affect the logical implications of a sentence but only its `conversational implicatures'.
Quine (1960) argues plausibly that, even given all possible evidence about a language, the evidence may not decide between various locally incompatible `analytical hypotheses', where by `analytical hypotheses' he means hypotheses about logical or grammatical form of the sort just mentioned. There has been considerable dispute as to exactly how Quine's claim should be interpreted, whether it is true, and what the implications of its truth might be. It has been said (falsely) that all Quine's thesis amounts to is the claim that a theory is underdetermined by the evidence. It has also been said (correctly, I believe) that whatever valid point Quine may be making, it does not involve any significant difference between the `hard sciences', like physics, and the study of language.
One issue suggested by Quine's argument is this. Suppose you have a theory, of physical reality or of language, which you think is true. Even though you think the theory is true, you can go on to consider what aspects of the theory correspond to reality and what aspects are instead mere artifacts of the notation in which the theory is presented. A true geographical description of the Earth will mention longitudes as well as cities and mountains, but longitudes do not have geographical reality in the way that cities and mountains do. It is true that Greenwich, England, is at zero degrees longitude, but this truth is an artifact of our way of describing the Earth, since there are other equally true ways of describing the geography of the Earth that would assign Greenwich other longitudes. Similarly, there are various true physical descriptions of the world, which assign a given space-time point different coordinates. It may be true that under a particular description a particular point has the special coordinates (0,0,0,0) but that is an artifact of the description which, by itself, does not correspond to anything in reality, and the same is true as regards grammars and theories of logical form. Even if a given account of grammar or logical form is true, there is still a question what aspects of the account correspond to reality and what aspects are merely artifacts of that particular description. It is quite possible that several different locally incompatible accounts might all be true, just as several different locally incompatible assignments of longitudes and latitudes to places on Earth might all be true.
This might be put in another way. Reality is what is invariant among true complete theories. Geographical reality is what is invariant in different true complete geographical descriptions of the world. Physical reality is what is invariant in different true complete physical descriptions of the universe. What worries Quine is that he has a fairly good sense of physical and geographical reality but little or no sense of grammatical reality or of the reality described by accounts of logical form. Indeed, Quine is inclined to think that there are only two relevant levels of reality here:
(1) physical reality, and
(2) behavioral reality, including dispositions to behave in various ways.
The possibility of indeterminacy allows an interpretation of the question raised by Loar (1983a) and discussed above as to whether thoughts really contain concepts as parts. Suppose there are many different sets of `analytical hypotheses' that account for the facts. On one set of hypotheses, Q would be a simpler belief than P, and P would be the explicit negation of Q. On a different set of analytical hypotheses things would be reversed and Q would be the explicit negation of P. Nothing would determine which belief really contained the explicit negation independently of one or another set of hypotheses. If this should prove to be so, which I doubt, it would show that a particular assignment of structure to a given thought was an artifact of a given way of describing thoughts. It would of course also be true that, relative to a given set of analytical hypotheses, a given thought would truly consist in a particular structure of concepts.
There are well known difficulties with the view that a theory of truth might provide even part of a theory of meaning (e.g., Foster, 1976). For one thing, how are the truth conditions to be specified? There seem to be two possibilities. The first is that truth conditions are assigned to beliefs by virtue of the theory's implying clauses of the form, `Belief b is true if and only if C'. Then the problem is that the same theory will also imply indefinitely many results of the form `C if and only if D', where `C' and `D' are not synonymous, so the theory will imply indefinitely many `incorrect' clauses of the form `Belief b is true if and only if D'. The problem is, in other words, that specifying truth conditions in this way does not distinguish among beliefs that are equivalent in relation to the principles of the theory.
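The difficulty can be displayed schematically; the labels are mine, not Foster's or part of the text.

```latex
% (i)   implied by the theory:
\mathrm{True}(b) \leftrightarrow C
% (ii)  also implied by the theory, with C and D not synonymous:
C \leftrightarrow D
% hence (iii), an `incorrect' clause the theory equally implies:
\mathrm{True}(b) \leftrightarrow D
```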
The other possibility is to allow that equivalent beliefs might have different truth conditions. The trouble with this possibility is that it treats truth conditions as very much like meanings or contents which are no longer specifiable by the usual Tarski-type theories of truth. It is unclear how this appeal to truth conditions might offer any benefit to the theory of content beyond the tautology that a theory of content must include an account of content.
This is not to deny that attempts to develop theories of truth adequate for certain aspects of natural language may well shed light on meaning. Examples might include the truth functional analysis of `and', `not', and `or'; the Frege-Tarski analysis of quantification; Davidson's analysis of action sentences; and the possible worlds account of modality. But in all these cases the analyses help specify implications among sentences. Their bearing on meaning may be due entirely to that, apart from anything further necessary to having a theory of truth, although this, of course, allows that there may also be a heuristic point to attempting to develop theories of truth (Harman, 1972, 1974).
For the most part you have to accept propositions in an all or nothing way. Conservatism is important. You should continue to believe as you do in the absence of any special reason to doubt your views, and in reasoning you should try to minimize change in your initial opinions in attaining other goals of reasoning. Such other goals include explanatory coherence and, of course, practical success in satisfying your needs and desires. But these points are vague and do not take us very far. Furthermore, something ultimately needs to be said about practical reasoning (Harman, 1986a, b).
In Harman (1973) I emphasize how the appeal to a background of normality figures in all identifications of representational states, even those in an artifact such as a radar aimer. A radar aimer interprets data from radar and calculates where to fire guns in order to shoot down enemy planes. We can describe the device as representing the present and future locations of the planes because radar waves reflected from the planes are received and interpreted to form a representation of the future locations of the planes, and that representation is used in the aiming and firing of the guns that shoot down the planes. We can describe the device as representing the locations of planes even if something goes wrong and the guns miss, because we have a conception of the normal operation of the radar aimer. We can even treat the device as representing enemy planes when it is being tested in the laboratory, unconnected to radar and guns, since in our testing we envision it as connected in the right way. However, given a different conception of normal context, we could not describe the device as representing planes at all.
The moral is that (nonsolipsistic) conceptual role semantics does not involve a `solipsistic' theory of the content of thoughts. There is no suggestion that content depends only on functional relations among thoughts and concepts, such as the role a particular concept plays in inference. Of primary importance are functional relations to the external world in connection with perception, on the one hand, and action, on the other. The functions of logical concepts can be specified `solipsistically', in terms of the inner workings of one's conceptual system, without reference to things in the `external world'. But this is not true for the functions of other concepts.
Concepts include individual concepts and general concepts, where an individual concept functions in certain contexts to pick out a particular object in the external world to which a general concept applies, in the simplest case to enable one to handle the object as a thing of the appropriate sort (cf. Strawson, 1974, pp. 42-51). To repeat an earlier example, it is an important function of the concept of food that in certain circumstances one can recognize particular stuff as food, this recognition enabling one to treat that thing appropriately as food by eating it (Dennett, 1969, p. 73). What makes something the concept red is in part the way in which the concept is involved in the perception of red objects in the external world. What makes something the concept of danger is in part the way in which the concept is involved in thoughts that affect action in certain ways.
(Nonsolipsistic) conceptual role semantics asserts that an account of the content of thoughts is more basic than an account of communicated meaning and the significance of speech acts. In this view, the content of linguistic expressions derives from the contents of thoughts they can be used to express. However, allowance must also be made for cases in which the content of your thoughts depends in part on the content of certain words, such as `oak' and `elm'.
Of course, in this case, there are other people who can recognize oaks and distinguish them from elms and who know various distinguishing properties of the trees. These other people may have a concept of an oak tree which has functional roles that are sufficient to make it the concept of an oak tree apart from any relations the concept has with the word `oak'. It is plausible
(1) that these experts' concept of an oak tree has the content it has because of its functional role,
(2) that the word `oak' as they use it has the meaning it has because of its connection with their concept of an oak tree
(3) that the word `oak' as used by a more ignorant person can have the same meaning by virtue of connections between that person's ignorant use of the word and the expert's use, and
(4) that the content of the more ignorant person's concept of an oak tree derives from its connection to his or her use of the word and its meaning as he or she uses it.
(1)-(4) would still allow one to say that the meanings of words derive ultimately from the contents of concepts the words are used to express, where the contents of these concepts do not themselves derive from the meanings of words; however, the meanings of a particular person's words may not derive in this way from the contents of that person's concepts.
This suggests an interesting question. Is there any word for which there is a real division of linguistic labor, so that no single person has a corresponding concept whose content is functionally determined apart from its relation to the person's use of that word? It is certainly imaginable that this should be so in connection with some sort of group investigation. Different people might investigate different aspects of a phenomenon which each might identify as `whatever it is we are all investigating and which has such and such effects when investigated in the way in which I have investigated it'. Even in such a case, the meaning of the word would derive from the role the corresponding concept plays in thought, although different aspects of that role would be fulfilled by different people's instances of the concept.
Now, comparing Earth in 1750 (before the micro-structure of water has been investigated) with Twin Earth at the corresponding time, we find that the English word `water' means something different in the two places, simply because the word is used on Earth to refer to what is in fact H2O and is used on Twin Earth to refer to what is in fact XYZ. Similarly, whereas Earthlings think about H2O, Twin Earthlings think about XYZ. In 1750 this difference is not reflected in any difference in dispositions to react to various perceptual situations, in any difference in inferences that people in the respective places would make, or in any difference in the actions which people undertake as the result of thoughts involving the relevant concept.
The difference is also not simply a difference in context of utterance or context of thought. Suppose an Earthling were to travel by spaceship and land on an ocean of XYZ on Twin Earth. Looking around, the Earthling comes to believe there is water all around. This belief is false, since the Earthling's concept of water is a concept of something that is in fact H2O. The Earthling's concept of water remains a concept of the same thing that is referred to by `water' on Earth even though the Earthling is now located in a different context. The context of the thoughts of the Earthling and the context of the thoughts of the Twin Earthlings are now the same; but the Twin Earthlings' thoughts are about XYZ whereas the Earthling's are still about water. So this difference in the content of the thoughts of Earthlings and Twin Earthlings cannot be simply a difference in the context in which they have their thoughts.
The difference is due rather to the fact that the content of a person's concept is determined by its functional role in some normal context. The normal context for an Earthling's thoughts about what he or she calls `water' is here on Earth, while the normal context for a Twin Earthling's thoughts about what he or she calls `water' is on Twin Earth.
The normal context can change. If the traveler from Earth to Twin Earth stays on, after a while the normal context for the concepts he or she uses will be properly taken to be the Twin Earth contexts. Thoughts about what he or she calls `water' will be properly considered thoughts about XYZ rather than H2O. There is, of course, a certain amount of arbitrariness in any decision about when this change has occurred. It will sometimes be possible with equal justice to consider a given thought a thought about H2O or a thought about XYZ.
A similar arbitrariness would arise concerning a person created spontaneously in outer space as the improbable result of random events at the quantum level, supposing the person were saved from space death by a fortuitously passing space ship, and supposing the person spoke something that sounded very much like English. Suppose, indeed, that this person is a duplicate of you and also (of course) of your Twin Earth counterpart. When the person has thoughts that he or she would express using the term `water', are these thoughts about water (H2O) or thoughts about XYZ? If we interpret this person's thoughts against a normal background on Earth, we will interpret the relevant thoughts as thoughts about water. If we take the normal background to be Twin Earth, they are thoughts about XYZ. Clearly it is quite arbitrary what we say here.
One argument for this is that it is possible to imagine a person whose spectrum was inverted with respect to yours, so that the quality of experience you have in seeing something red is the quality this other person has in seeing something green, the quality of experience you have in seeing something blue is the quality this other person has in seeing something orange, and similarly for other colors, although in all relevant respects your color experiences function similarly, so that each of you is just as good as the other in applying the public color words to colored objects. According to this argument, the two of you would have different concepts which you would express using the word `red', although it might be difficult or even impossible to discover this difference, since it is not a functional difference.
I speak of an `argument' here, although (as Lewis (1980) observes in a similar context), the `argument' really comes down simply to denying the functionalist account of the content of concepts and thoughts, without actually offering any reason for that denial. This makes the `argument' difficult to answer. All one can do is look more closely at a functionalist account of the content of color concepts in order to bring out the way in which, according to functionalism, this content does not depend on the intrinsic character of experiences of color.
How could you imagine someone whose spectrum was inverted with respect to yours? One way would be to imagine this happening to yourself. Suppose there were color-inverting contact lenses. You put on a pair of lenses and the colors things seem to have are reversed. The sky now looks orange rather than blue, ripe apples look green, unripe apples look red, and so on. Suppose you keep these lenses on and adapt your behavior. You learn to say `green' rather than `red' when you see something that looks the way red things used to look; you learn to treat what you used to consider a green appearance of apples as a sign of ripeness, and so on. The years pass and your adaptation becomes habitual. Would not this be an intelligible case in which someone, the imagined future you, has a notion of what it is like to have the experience of seeing something to which the term `red' applies, where the notion functions in exactly the way in which your notion of what such an experience is like functions, although your notions are different? The functionalist must deny this and say that the imagined you associates the same concept with the word `red' as the actual you does now and indeed sees the world as you now do.
Consider an analogous case. There actually exist lenses that are spatially inverting. With these lenses on, things that are up look down and vice versa. At first it is very difficult to get around if you are wearing such lenses, since things are not where they seem to be. But after a while you begin to adapt. If you want to grab something that looks high, you reach low, and vice versa. If you want to look directly at something that appears in the bottom of your visual field you raise your head, and so on. Eventually, such adaptation becomes more or less habitual.
Now functionalism implies that if you become perfectly adapted to such space-inverting lenses, then your experience will be the same as that of someone who is not wearing the inverting lenses (who has adapted to not wearing them if necessary), because now the normal context in relation to which your concepts function will have become a context in which you are wearing the inverting lenses. And in fact, people who have worn such lenses do say that, as they adapt to the lenses, the world tends to look right side up again (Taylor, 1962; Pitcher, 1971; Thomas, 1978).
Similarly, functionalism implies that if you become perfectly adapted to color-inverting lenses, the world will come to look to you as it looked before, in the sense that given such perfect adaptation, the normal context in which your color concepts function will be a context in which you are wearing the color-inverting lenses. According to functionalism, the way things look to you is a relational characteristic of your experience, not part of its intrinsic character.
In order to get a feel for this aspect of (nonsolipsistic) conceptual role semantics, it may be useful to consider certain further cases. Consider Inverted Earth, a world just like ours, with duplicates of us, with the sole difference that there the actual colors of objects are the opposite of what they are here. The sky is orange, ripe apples are green, etc. The inhabitants of Inverted Earth speak something that sounds like English, except that they say the sky is `blue', they call ripe apples `red', and so on. Question: what color does their sky look to them? Answer: it looks orange. The concept they express with the word `blue' plays a relevantly special role in the normal perception of things that are actually orange.
Suppose there is a distinctive physical basis for each different color experience. Suppose also that the physical basis for the experience of red is the same for all normal people not adapted to color-inverting lenses, and similarly for the other colors. According to (nonsolipsistic) conceptual role semantics this fact is irrelevant. The person who has perfectly adapted to color-inverting lenses will be different from everyone else as regards the physical basis of his or her experience of red, but that will not affect the quality of his or her experience.
Consider someone on Inverted Earth who perfectly adapts to color-inverting lenses. Looking at the sky of Inverted Earth, this person has an experience of color whose physical basis is the same as that of a normal person on Earth looking at Earth's sky. But the sky looks orange to the person on Inverted Earth and blue to normal people on Earth. What makes an experience the experience of something's looking the color it looks is not its intrinsic character and/or physical basis but rather its functional characteristics within an assumed normal context.
Consider a brain spontaneously created in space as the improbable result of random events at the quantum level. The physical events in the brain happen to be the same as those in you on Earth looking at the sky on a clear day and also the same as those in a person adapted to color-inverting spectacles looking at the sky of Inverted Earth. What is it like for the brain? Is it having an experience of orange or of blue? According to (nonsolipsistic) conceptual role semantics, there is no nonarbitrary way to answer this question; it depends on what is taken as the normal context for assessing the functional role of events in that brain. If the normal context is taken to be the normal context for perception of color on Earth, the brain is having an experience of blue. If the normal context is taken to be the normal context for a wearer of color-inverting spectacles on Inverted Earth, the brain is having an experience of orange.
But this distinction is unmotivated and the suggestion is unworkable. The distinction is unmotivated because there is no natural border between inner and outer. Should the inner realm be taken to include everything in the local environment that can be perceived, or should it stop at the skin, the nerve ends, the central nervous system, the brain, the central part of the brain, or what? The suggestion is unworkable because, for most concepts, inner conceptual role can only be specified in terms of conceptual role in a wider sense, namely the function a concept has in certain contexts in relation to things in the so-called `external world' (Harman, 1983, pp. 62-65).
To be sure, there are cases of illusion in which one mistakes something else for food. From a solipsistic point of view, these cases may be quite similar to veridical cases, but clearly the cases of mistake are not cases that bring out the relevant function of the concept of food. They are cases of misfunctioning. We can see these as cases of mistake precisely because the function of the concept of food is specified with reference to real and not just apparent food.
Mental states and processes are functional states and processes, that is, they are complex relational or dispositional states and processes, and it is useful to consider simpler dispositions, like fragility or solubility. Water solubility cannot be identified with possession of a particular molecular structure, because (a) different sorts of molecular structure underlie the water solubility of different substances and, more importantly, (b) attributions of water solubility are relative to a choice of background or normal context. Rate of dissolving is affected by such things as the presence or absence of electrical, magnetic, or gravitational fields, the amount of vibration, varying degrees of atmospheric pressure, the purity and temperature of the water, and so forth. Whether it is proper to say a given substance is water soluble will depend on what the normal set of conditions for mixing the substance with water is taken to be. A substance is soluble in water if it would dissolve at a fast enough rate if mixed with water in a certain way under certain conditions. Solubility is a relational state of a substance, relating it to potential external things--water and various conditions of mixture and the process of dissolving under those conditions.
Notice that we cannot say that for a substance to be water soluble is for it to be such that, if it receives certain `stimuli' at its surface, it reacts in a certain way. We must also mention water and various external conditions. There is a moral here for Quine's (1960) account of language in terms of `stimulus meaning' and for the related later attempts I have been discussing to develop a purely solipsistic notion of conceptual role.
We are led to attribute beliefs, desires, and so on to a creature only because the creature is able to attain what we take to be its goals by being able to detect aspects of its environment. In the first instance, we study its capacity for mental representation by discovering which aspects of the environment it is sensitive to. Only after that can we investigate the sorts of mistakes it is capable of that might lead to inappropriate behavior. This gives us further evidence about the content of its concepts. But we could never even guess at this without considering how the creature's mental states are connected with things in the outside world.
But the point is not merely one of evidence, since concepts have the content they have because of the way they function in the normal case in relation to an external world. If there were no external constraints, we could conceive of anything as instantiating any system of purely solipsistic `functional' relations and processes. We could think of a pane of glass or a pencil point as instantiating Albert Einstein or George Miller, solipsistically conceived. But that does not count. Concepts really must be capable of functioning appropriately. No one has ever described a way of explaining what beliefs, desires, and other mental states are except in terms of actual or possible relations to things in the external world (Dennett, 1969, pp. 72-82; Harman, 1973, pp. 62-65; Bennett, 1976, pp. 36-110).
The most primitive psychological notions are not belief and desire but rather knowledge and successful intentional action. Belief and intention are generalizations of knowledge and success that allow for mistake and failure. We conceive a creature as believing things by conceiving it as at least potentially knowing things, and similarly for intention.
Does this show a theory of truth plays a role in semantics? To be sure, in my view the content of a concept is determined by the way in which the concept functions in paradigm or standard cases in which nothing goes wrong. In such cases, one has true beliefs that lead one to act appropriately in the light of one's needs and other ends. But an account of correct functioning is not itself a full account of the truth of beliefs, since beliefs can be true by accident in cases where there is some misfunctioning. And there are no serious prospects for a theory of content that significantly involves an account of truth conditions.
However, it may be that such words can be analyzed as expressing a combination of concepts which individually have contents that are connected with distinctive conceptual roles. For example, perhaps `Hello' means something like `I acknowledge your presence', or sometimes maybe `let us talk', and analogously for `Good-bye' and other words and phrases of this sort. If so, the issue becomes whether the aspect of meaning expressed by the imperative in `let us talk' and the performative aspect of the meaning of `I acknowledge your presence' derive irreducibly from the use of words in speech acts or ultimately derive instead from the use of language to express concepts whose content is determined by their role in calculation and thought. These are complex issues which we must consider in a moment. As for the question whether this is the right way to analyze `Hello', `Good-bye', and so on, I am not very sure what to think. The analyses I have suggested seem to leave something out, but this might be accommodated by better analyses.
Again, in this case, it might be argued that in this use `Please' means the same as some longer expression, like `if you please', i.e., `if it pleases you to do so', and `Thank you' means something like `I hereby thank you'. If so, the issue would become whether the meaning of the term `you' derives entirely from its use to express concepts whose content is determined by their distinctive function in calculation or thought, an issue we need to discuss. There is, of course, also again here the question whether the performative element, in `I hereby thank you', carries a meaning which at least in part is irreducibly connected with speech acts, another issue we will be coming to in a moment. As for the question whether such polite phrases can always be analyzed in this sort of way, I am not sure what to say.
Similar remarks apply to the interrogative mood. Indeed, questions are not unlike requests for information, so that the interrogative mood is plausibly analyzed in terms of the imperative mood. In any event, questions obviously have a function in thinking. You pose a problem to yourself and work out the answer, perhaps by posing various sub-questions and answering them.
On the other hand, each of the words in a sentence like `I promise to be there' has a meaning which expresses a concept whose content is arguably determined by its functional role in calculation and thought. And it is possible that the meaning of the whole sentence, including whatever gives the sentence its performative function of being appropriate for actually promising, arises from the meaning of the words used in a regular way. Given what the words in the sentence mean and given the way these words are put together, it may be predictable that the sentence has a performative use (Bach, 1975).
Suppose we adopted the convention that promises have to be made in some special way, for example by writing down the content of the promise in purple chalk on a special promise board that is not used for any other purpose. The convention would be that nothing else is to count as a promise. In such a case, the words `I promise to be there' could not be used to promise you will be there. Would this be a way to pry off the performative meaning of `I promise' from that aspect of its meaning that derives from its use to express concepts whose content is determined by their functional role in thought? Not obviously. For one thing, this might change the concept of promising in a significant way. The word `promise' might not mean what it means when a promise can be made by saying `I promise'. It could be argued that if `I promise' means what it ordinarily means, then it follows from the concepts expressed by the words `I promise' that these words can be used to promise.
Alternatively, it might be said that, even if the word `promise' would retain (enough of) its usual meaning when promising was restricted by such a convention, the example is like one in which a special convention is adopted that an utterance of the sentence `The sky is blue' is not to be interpreted as an assertion that the sky is blue but rather as a question asking whether it rained last week. This would not show that there is any aspect of the meaning of an ordinary assertion of the sentence `The sky is blue', as we use it now without such a bizarre convention, which does not derive from the way the words in the sentence are used to express concepts that have the content they have because of their functional role in thought.
However, some of these phenomena can occur in thinking to oneself, where they are presumably not due to conversational implicature. Calculation and reasoning often involve various presuppositions. One will normally want descriptions used in reasoning to relate events in an orderly way, so the same phenomenon with the word `and' may occur. On the other hand, it is doubtful that use of an `either ... or' proposition in thought normally carries the implication that one does not know which alternative is the case; so this phenomenon may really occur only at the level of conversation.
The content of a concept depends on its role in inference and sometimes in perception. Particularly important for inference are a term's implications. Implication is relevant to inference and therefore to meaning, because implication is explanatory and inference aims at explanatory coherence. Accounts of truth conditions can shed light on meaning to the extent that they bring out implications; it is doubtful whether such accounts have any further bearing on meaning, although they may have heuristic value for studies of logical form. Probabilistic semantics does not provide an adequate conceptual role semantics, because people do not and cannot make much use of probabilistic reasoning.
Allowance must be made for various connections between concepts and the external world. Some concepts have the content they have because of the words they are associated with, although (according to conceptual role semantics) this content ultimately always derives from someone's use of concepts. The content of concepts is often relative to a choice of a normal context of functioning. This is true of color concepts, despite the unargued view of some philosophers that these concepts depend on the intrinsic character of experience.
Finally, it is not clear whether any aspects of meaning derive directly from the use of language in speech acts in a way not reducible to the expression of concepts whose content is independently determined. In any event, many phenomena often taken to be particularly connected with speech acts and conversation also occur in calculation and thought.
Bach, K., "Performatives Are Statements Too". Philosophical Studies, 1975, 28, 229-236.
Bennett, J., Linguistic Behaviour. Cambridge: Cambridge University Press, 1976.
Block, N. & Fodor, J. A., "What Psychological States Are Not". Philosophical Review, 1972, 81, 159-181.
Chomsky, N., Reflections on Language. New York: Columbia University Press, 1975.
Davidson, D., "Truth and Meaning". Synthese, 1967, 17, 304-323.
Dennett, D. C., Content and Consciousness. London: Routledge and Kegan Paul, 1969.
Field, H., "Probabilistic Semantics". Journal of Philosophy, 1977, 74, 379-409.
Fodor, J. A., "Methodological Solipsism as a Research Strategy in Psychology". Behavioral and Brain Sciences, 1980, 3, 63-73.
Foster, J. A.,"Meaning and Truth Theory". In G. Evans & J. McDowell (Eds), Truth and Meaning: Essays in Semantics. Oxford: Oxford University Press, 1976.
Grice, H. P., "Meaning". Philosophical Review, 1957, 66, 377-388.
Grice, H. P., "The Causal Theory of Perception". Proceedings of the Aristotelian Society, 1961, Suppl. Vol. 35.
Grice, H. P., "Logic and Conversation". In D. Davidson & G. Harman (Eds), The Logic of Grammar. Encino, CA: Dickenson, 1975.
Harman, G., "Logical Form". Foundations of Language, 1972, 9, 38-65.
Harman, G., Thought. Princeton, NJ: Princeton University Press, 1973.
Harman, G., "Meaning and Semantics". In M. K. Munitz & P. Unger (Eds), Semantics and Philosophy. New York: New York University Press, 1974.
Harman, G., "Conceptual Role Semantics". Notre Dame Journal of Formal Logic, 1982, 23, 242-256.
Harman, G., "Problems with Probabilistic Semantics". In A. Orenstein et al. (Eds), Developments in Semantics. New York: Haven, 1985.
Harman, G., Change in View. Cambridge, MA: MIT Press, 1986a.
Harman, G., "Willing and Intending". In R. Grandy & R. Warner (Eds), Philosophical Grounds of Rationality. Oxford: Oxford University Press, 1986b.
Harman, G., "Quine's Grammar". In P. Schilpp (Ed), The Philosophy of W. V. Quine. La Salle, IL: Open Court, 1986c.
Kripke, S., "Naming and Necessity". In D. Davidson & G. Harman (Eds), Semantics of Natural Language. Dordrecht: Reidel, 1972.
Lewis, D., "General Semantics". In D. Davidson & G. Harman (Eds), Semantics of Natural Language. Dordrecht: Reidel, 1972.
Lewis, D., "Mad Pain and Martian Pain". In N. Block (Ed), Readings in Philosophy of Psychology. Cambridge, MA: Harvard University Press, 1980.
Loar, B., Mind and Meaning. Cambridge: Cambridge University Press, 1981.
Loar, B., "Must Beliefs Be Sentences?" In P. D. Asquith & T. Nickles (Eds), PSA 1982, Vol. 2. East Lansing, MI: Philosophy of Science Association, 1983a.
Loar, B., "Reply to Fodor and Harman". In P. D. Asquith & T. Nickles (Eds), PSA 1982, Vol. 2. East Lansing, MI: Philosophy of Science Association, 1983b.
Loewer, B., "The Role of `Conceptual Role Semantics'". Notre Dame Journal of Formal Logic, 1982, 23, 305-332.
Nagel, T., "What Is It Like to Be a Bat?". Philosophical Review, 1974, 83, 435-450.
Pitcher, G., A Theory of Perception. Princeton, NJ: Princeton University Press, 1971.
Putnam, H., "The Meaning of Meaning". In H. Putnam, Mind, Language, and Reality: Philosophical Papers, Vol. 2. Cambridge: Cambridge University Press, 1975.
Quine, W. V., Word and Object. Cambridge, MA: MIT Press, 1960.
Ryle, G., "Ordinary Language". Philosophical Review, 1953, 62. (Reprinted in Ryle (1971))
Ryle, G., "Use, Usage, and Meaning". Proceedings of the Aristotelian Society, 1961. Suppl. Vol. 35 (Reprinted in Ryle (1971))
Ryle, G., Collected Papers, Vol. II. London: Hutchinson, 1971.
Sellars, W., "Some Reflections on Language Games". Philosophy of Science, 1954, 21, 204-228.
Strawson, P. F., Subject and Predicate in Logic and Grammar. London: Methuen, 1974.
Taylor, J. G., The Behavioral Basis of Perception. New Haven, CT: Yale University Press, 1962.
Thomas, S., The Formal Mechanics of Mind. Ithaca, NY: Cornell University Press, 1978.