CHAPTER 5

There Are No Recognitional Concepts, Not Even RED.

Part 2: The Plot Thickens

 

 

Introduction:

The story till now.

Some of the nastiest problems in philosophy and cognitive science are either versions of, or live nearby, what I'll call question Q:

Q: What are the essential (/constitutive) properties of a linguistic expression qua linguistic?

Here are some currently live issues to which I suppose (and to which I suppose I suppose untendentiously) an answer to Q would provide the key:

-What do you have to learn (/know/master) to learn (/know/master) a linguistic expression (/concept)? Variant: What are the `possession conditions' for a linguistic expression (/concept)?1

-What is the principle of individuation for linguistic expressions?

-What makes two linguistic tokens tokens of the same linguistic type?

-Suppose G is the grammar of language L and E is a lexical expression in L (roughly, a word or morpheme). What sort of information about E should the lexicon of G contain?

-What's the difference between linguistic and `encyclopaedic' knowledge?

-What belongs to the `mental representation' of a linguistic expression as opposed to the mental representation of its denotation?

-Assume that some of the inferences in which a lexical item is involved are constitutive. What distinguishes these constitutive inferences from the rest?

-Which of the inferences that a lexical item is involved in are analytic?

-Assume that some lexical expressions have perceptual criteria of application (roughly equivalent: Assume that some lexical items express `recognitional' concepts.) Which expressions are these? Under what conditions is a `way of telling' whether an expression applies constitutive of the identity of the expression?

These are all interesting and important questions, and you will be unsurprised to hear that I don't know how to answer them. I do, however, have a constraint to offer which, I'll argue, does a fair amount of work; in particular, it excludes many of the proposed answers that are currently popular in philosophy and cognitive science, thereby drastically narrowing the field. Here's the constraint: Nothing is constitutive of the content of a primitive linguistic expression except what it contributes to the content of the complex expressions that are its hosts; and nothing is constitutive of the content of a complex expression except what it inherits from (either its syntax or) the lexical expressions that are its parts.

Short form (Principle P): The essential properties of a linguistic expression qua linguistic include only its compositional properties.

Principle P can't, of course, be a sufficient condition for content constitutivity. Suppose, for example, that all cows are gentle. Then, all brown cows are gentle a fortiori, so `cow' contributes gentle to its (nonmodal) hosts, and `brown cow' inherits gentle from its constituents. It doesn't follow ---and, presumably, it isn't true--- that gentle is constitutive of the content of either `brown', `cow' or `brown cow'. The situation doesn't change appreciably if you include modal hosts since not all necessary truths are analytic. I.e. they're not all constitutive of the content of the expressions that enter into them. `Two' contributes `prime' to `two cows'; necessarily, two cows is a prime number of cows. But I suppose that `prime' isn't part of the lexical meaning of `two'. Not, anyhow, if concept possession requires the mastery of whatever inferences are content constitutive.

Still, P is a serious constraint on a theory of content, or so I maintain. For example: If P is true, then probabilistic generalizations can't be part of lexical content. Suppose that if something is a cow, then it is probably gentle. It doesn't follow, of course, that if something is a brown cow, then it is probably gentle, so `cow' doesn't contribute probably gentle to its hosts. Mutatis mutandis for other probabilistic generalizations; so probabilistic generalizations aren't part of lexical content.
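
A toy numerical illustration may help (the population and the numbers are invented and carry no weight; this is only a sketch of the logical point): a conditional probability that is high given `cow' can collapse given `brown cow', so the generalization isn't something that `cow' contributes to its hosts.

```python
# Toy illustration (all numbers invented): a probabilistic generalization
# about `cow' need not hold of `brown cow', so it isn't something that
# `cow' contributes to its hosts.
population = (
    [{"cow": True, "brown": False, "gentle": True}] * 90   # gentle, non-brown cows
  + [{"cow": True, "brown": True,  "gentle": False}] * 10  # ornery brown cows
)

def prob(pred, given):
    sample = [x for x in population if given(x)]
    return sum(pred(x) for x in sample) / len(sample)

# "If something is a cow, it is probably gentle" ...
print(prob(lambda x: x["gentle"], lambda x: x["cow"]))                 # 0.9
# ... but it does not follow that brown cows are probably gentle.
print(prob(lambda x: x["gentle"], lambda x: x["cow"] and x["brown"]))  # 0.0
```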

Likewise, Chapter 4 argued that if you grant principle P it follows, on weak empirical assumptions, that the epistemic properties of lexical expressions (eg. the criteria for telling whether they apply) can't be among their essential properties.(fn) The argument went like this: Suppose that for modifier A, W is the way of telling whether A applies; that AN is a complex expression containing the head N with A as its modifier; and that W* is the way of telling whether AN applies. Then, by principle P, W must be part of W*; the way of telling whether A applies must be part of the way of telling whether AN applies.2 Arguably this works fine for words like, eg. `triangle' with respect to hosts like, eg. `red triangle', since if counting the sides is a good way of telling whether something is a triangle, it's likewise a good way of telling whether it's a red triangle. Nothing is either a triangle or a red triangle unless it's got three sides. But the suggestion that hosts inherit ways of telling from their constituents, though it's arguably OK for `triangle' in `red triangle,' doesn't work, even prima facie, in the general case; eg. it doesn't work for `fish' in `pet fish'.3 Finding out whether it lives in a stream or lake or ocean is a good way of telling whether something is a fish; but it's a rotten way of telling whether it's a pet fish. Pet fish generally live in bowls. It follows, if principle P is true, that being a way of telling whether an expression applies is not an essential property of that expression. All forms of semantic empiricism to the contrary notwithstanding.

All this, I continued to argue in the earlier paper, should strike you as unsurprising. A way of telling whether an expression E applies to an object O is, at best, a technique (procedure, skill, or whatever) which allows one to tell whether E applies to O given that O is a good instance of E and the circumstances are favorable. This must be right if there's to be any hope that ways of telling are possession conditions. I guess I know what `triangle' means. But it's certainly not the case that I can tell, for an arbitrary object in an arbitrary situation, whether `triangle' applies to it. (Consider triangles outside my light cone). At best, my knowing what `triangle' means is (or requires) my knowing how to apply it to a good instance of triangles in circumstances that are favorable for triangle recognition. I don't think anybody disputes this. And, I don't think that anybody should.

But now, the property of being a good instance doesn't itself compose. What's a good instance of a fish needn't be a good instance of a pet fish, or vice versa. For that matter, what's a good instance of a triangle needn't be a good instance of a red triangle, or vice versa. That goodinstancehood doesn't compose is, I think, ineliminably the fly in the empiricist's ointment.

Notice, crucially, that goodinstancehood's not composing does not mean that `pet fish' or `red triangle' aren't themselves compositional. To the contrary, O is a pet fish iff it's a pet and a fish, and O is a red triangle iff it's red and a triangle; `pet fish' and `red triangle' are thus as compositional as anything can get. What it means, rather, is that the epistemic properties of lexical items aren't essential to their identity qua linguistic. Not, anyhow, if principle P is true and the essential properties of an expression are its compositional properties.

The essential properties of an expression include only the ones it contributes to its hosts. But, in the general case, expressions don't contribute their good instances to their hosts (being a good instance of a fish isn't necessary for being a good instance of a pet fish). Since `criteria' (and the like) are ways of recognizing good instances, it follows that criteria (and the like) aren't essential properties of linguistic expressions. I think that's a pretty damned good argument that the epistemic properties of lexical items aren't essential to their identity qua linguistic. However, I have shown this argument to several philosophical friends who disagree. They think, rather, that it's a pretty damned good argument that principle P can't be true.4 In particular, so the reply goes, epistemic properties like having the criteria of application that they do are essential to morphosyntactically primitive linguistic expressions (like `red' and `fish') but not to their hosts (like `red triangle' and `pet fish') even in cases where the hosts are semantically compositional.5 If this reply is right then, a fortiori, the essential properties of a linguistic expression can't be identical to the ones that its hosts inherit.

I was, for reasons that the earlier paper elaborated, surprised to hear this suggestion so widely endorsed. In particular, I argued like this: Consider any property P that is essential to E but not inherited from E by its hosts; I'll call such a property an `extra'. If E has such an extra property, then, presumably, it would be possible for a speaker to satisfy the possession conditions for a complex expression containing E without satisfying the possession conditions for E itself: Somebody who has learned the linguistically essential properties of `pet fish,' for example, need not have learned the linguistically essential properties of `pet' or `fish'. For, by assumption, the mastery of `pet' and `fish' requires an ability to recognize good instances of each in favorable circumstances; whereas, again by assumption, the mastery of `pet fish' requires neither of these abilities (not even if it does require an ability to recognize good instances of pet fish in favorable circumstances.)6

But, I supposed, it is something like true by definition that mastering a complex expression requires mastering its constituents, since, after all, constituents are by definition parts of their hosts. So, to say that you could master `pet fish' without mastering `pet' (or, mutatis mutandis, `red triangle' without mastering `red') is tantamount to saying that `pet' isn't really a constituent of `pet fish' after all; which is, in turn, tantamount to saying that `pet fish' is an idiom. Which, however, `pet fish' patently is not. So I win.

Now, there is a reply to this reply. (`Dialectics,' this sort of thing is called.) One could just stipulate that recognitional capacities (or, indeed, any other sort of extra that you're fond of) are to count as essential to the primitive expressions that they attach to even though they are not inherited by the hosts of which such primitives are constituents. To which reply there is a reply once again; namely, that explanation is better than stipulation. Whereas Principle P explains why meeting the possession conditions for a complex expression almost always7 requires meeting the possession conditions for its constituents, the proposed stipulation just stipulates that it does; as does any account of constituency that allows primitive expressions to have extras.

To which there is again a reply: Namely, that you can't expect a theory to explain everything; and, given a forced choice between Empiricism and the identification of the essential properties of an expression with its compositional properties, one should hold on to the Empiricism and give up principle P. If that requires a revision of the notion of constituency, so be it. That can be stipulated too, along these lines: A (syntactic) part of a complex host expression is a constituent of the expression only if it contributes all of its semantic properties to the host except (possibly) its epistemic ones.

Thus far has the World Spirit progressed. I don't, myself, think well of philosophy by stipulation; but if you don't mind it, so be it. In this paper, I want to float another kind of defense for principle P; roughly, that the learnability of the lexicon demands it. I take it that these two lines of argument are mutually compatible; indeed, that they are mutually reinforcing.

Compositionality and learnability:

My argument will be that, given the usual assumptions, learnability requires that primitive expressions (lexical items) must have no extras. A fortiori, it can't be that criteria, recognitional capacities, etc. are constitutive of primitive linguistic expressions but not inherited by their hosts.

The `usual assumptions' about learnability are these:

i) The languages whose learnability we are concerned with are semantically infinite (/productive); i.e. they contain infinitely many semantically distinct expressions.

ii) A theory of the learnability of a language is a theory of the learnability of the grammar of that language. Equivalently, a learnability theory has the form of a computable function (a `learning algorithm') which takes any adequate, finite sample of L onto a correct grammar G of L. I assume, for convenience, that G is unique.

iii) A learning algorithm for L is `adequate' only if the grammar of L that it delivers is tenable. A grammar G of L is tenable iff L contains not more than finitely many expressions that are counterexamples to G (and all of these finitely many counterexamples are idioms. See fn.7).
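
For readers who like their assumptions displayed, here is a schematic restatement of (i)-(iii). The symbols (the learning algorithm, the data sample, and the grammar) are introduced here for convenience only; they add nothing to the prose versions above.

```latex
% A schematic restatement of assumptions (i)-(iii); the symbols are
% introduced for convenience only and are not part of the text.
\begin{align*}
\text{(i)}   &\quad L \text{ contains infinitely many semantically distinct expressions.} \\
\text{(ii)}  &\quad \mathcal{A} : D \longmapsto G_L, \quad
               D \text{ a finite, adequate sample of } L,\ G_L \text{ a correct grammar of } L. \\
\text{(iii)} &\quad \mathcal{A} \text{ is adequate only if } G_L \text{ is tenable, i.e. }
               |\{\, e \in L : e \text{ is a counterexample to } G_L \,\}| < \infty .
\end{align*}
```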

Comments:

-Assumption (i) is inessential. As usual, systematicity would do as well as productivity for any serious polemical purposes. (See Fodor and Pylyshyn, 1988).

-Assumption (ii) is inessential. It's convenient for the exposition to assume that learning a language is learning a theory of the language, and that the relevant theory is a grammar of the language. But nothing turns on this. If, for example, you hold that language learning is `learning how' rather than `learning that', that's perfectly ok for present purposes; the kind of argument I'm going to run has an obvious reformulation that accommodates this view. I am myself committed to the idea that learning the semantics of the lexicon is neither `learning that' nor `learning how'; it's becoming causally (/nomologically) connected, in a certain information-engendering way, to the things that the item applies to. That too is perfectly ok with the present line of argument, I'm glad to report.

-Though I've borrowed the style of i-iii from learnability theory, it's important to see that the intuition they incorporate is really quite plausible independent of any particular theoretical commitment. Suppose that, as a result of his linguistic experience, a child were to arrive at the following view (subdoxastic or explicit) of English: the only well-formed English sentence is `Burbank is damp'. Surely something has gone wrong; but what exactly? Well, the theory the child has learned isn't tenable. English actually offers infinitely many counter-examples to the theory the child has arrived at. For, not only is `Burbank is damp' a sentence of English, but so too are `Burbank is damp and dull', `The cat is on the mat', `That's a rock', and so on indefinitely. The core of i-iii is the idea that a learning algorithm that permits this sort of situation is ipso facto inadequate. A technique for learning L has, at a minimum, to deliver an adequate representation of the productive part of L; and to a correct representation of the productive part of a language there cannot be more than finitely many counterexamples among the expressions that the language contains.

-Assumptions (i-iii) constrain language learning rather than concept learning. I've set things up this way because, while I'm sure that lexicons are learned, I'm not at all sure that concepts are. What I'm about to offer is, in effect, a transcendental argument from the premise that a lexicon is learnable to the conclusion that none of the properties of its lexical items are extras. Now, historically, transcendental arguments from the possibility of language learning are themselves typically vitiated by Empiricist assumptions: In effect, they take for granted that word learning involves concept acquisition (that, for example, the questions `how does one learn (the word) `red'?' and `how does one learn (the concept) RED?' get much the same answer). This begs the question against all forms of conceptual nativism; whereas I'm inclined to think that some or other sort of conceptual nativism is probably true. (For discussion of the methodological situation, see Fodor and Lepore, 1992).

`Transcendental arguments from language learning are typically vitiated by Empiricist assumptions' might do well as the epitaph of analytical philosophy. I propose, therefore, to consider seriously only transcendental conditions on language learning which persist even if it's assumed that concept acquisition is prior to and independent of word learning. That is, I will accept only such transcendental constraints as continue to hold even on the assumption that learning the lexicon is just connecting words with previously available concepts. I take it that constraints of that sort really must be enforced. Even people like me who think that RED is innate agree that it has to be learned that `red' expresses RED.

So, here's the argument at last.

Consider, to begin with, `pet fish'; and suppose that `good instances of fish are R' is part of the lexical entry for `fish': According to this lexicon, being R is part of the content that the word `fish' expresses (it's, if you prefer, part of the concept FISH). Finally, suppose that a specification of R is not among the contributions of `fish' to `pet fish' (being R is not part of the concept PET FISH), so R is an extra within the meaning of the act. Then, on the one hand, the lexicon says that it's constitutive of `fish' that being R is part of what tokenings of `fish' convey; but, on the other hand, the tokenings of `fish' in `pet fish' do not convey this. So `pet fish' is a counterexample to this lexicon.
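
A minimal sketch may make the structure of the case vivid. The particular clauses (habitats, furriness, and the rest) are invented for illustration and carry no theoretical weight; the point is only that if the good-instance clause is an extra, the composed host doesn't inherit it.

```python
# Toy lexicon (all clauses invented for illustration). Denotations are modeled
# as predicates on objects; the 'good_instance' clause is the purported extra.
fish = {
    "applies_to":    lambda o: o.get("is_fish", False),
    "good_instance": lambda o: o.get("habitat") in {"lake", "stream", "ocean"},
}
pet = {
    "applies_to":    lambda o: o.get("is_pet", False),
    "good_instance": lambda o: o.get("furry", False),
}

def compose(modifier, head):
    # Denotation composes: something is a pet fish iff it's a pet and a fish.
    # The good-instance clauses of the constituents are NOT inherited; on the
    # view under discussion, they are extras.
    return {"applies_to": lambda o: modifier["applies_to"](o) and head["applies_to"](o)}

pet_fish = compose(pet, fish)

goldfish_in_bowl = {"is_fish": True, "is_pet": True, "habitat": "bowl"}
print(pet_fish["applies_to"](goldfish_in_bowl))   # True: it is a pet fish
print(fish["good_instance"](goldfish_in_bowl))    # False: not a good instance of `fish'
```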

Notice that the learnability of `fish' is not impugned by these assumptions so far; not even if principle P is true.8 Tenability requires that English should offer not more than finitely many exceptions to the lexical representation of its primitive expressions. But, so far, all we've got is that English offers one exception to the lexicon which says that `fish' conveys typically R; viz `pet fish,' which typically does not convey this. The most we're entitled to conclude is that, on the present assumptions, the child couldn't learn `fish' from data about `pet fish'. Which maybe is true (though, also, maybe it's not.)

However, an infinity of trouble is on the way. For, it's not just the case that good instances of fish needn't be good instances of pets; it's also the case that good instances of big pet fish needn't be (for all I know, typically aren't) good instances of pets or of fish or of pet fish. Likewise, good instances of big pet fish that are owned by people who also own cats needn't be (for all I know, typically aren't) good examples of fish, or of pet fish, or of big pet fish, or of big pet fish that are owned by people who also own cats and live in Chicago.

And so on, forever and forever, what with the linguistic form (A* N)N9 being productive in English. The sum and substance is this: The good instances of the nth expression in the sequence A* N are not, in general, inherited from the good instances of the n-1th expression in that sequence; and they are not, in general, contributed to the good instances of the n+1th expression in that sequence. Informally: you can be a good instance of N but not a good instance of AN; `pet fish' shows this. But, much worse, you can be a good instance of AN but not a good instance of A(AN)N; and you can be a good instance of A(AN)N but not a good instance of AN. Good instances don't, as I'll sometimes say, distribute from modifiers to heads or from heads to modifiers, or from modifiers to one another. And their failure to do so is iterative; it generates an infinity of counterexamples to the compositionality of goodinstancehood.10 Nothing in this depends on the specifics of the example, nor is the construction `A* N' semantically eccentric. Adverbs, relatives, prepositional phrases and, indeed, all the other productive forms of modification, exhibit the same behavior mutatis mutandis. Thus, good instances of triangle needn't be good instances of red triangle; and good instances of red triangle needn't be good instances of red triangle I saw last week; and good instances of red triangles I saw last week in Kansas needn't be good instances of things I saw last week, or of triangles, or of red triangles, or of things I saw last week in Kansas.... Etc. The hard fact: Goodinstancehood doesn't distribute and is therefore not productive.
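
Here, purely by way of illustration (the particular modifiers are invented, and nothing turns on them), is a toy model of the iteration: the schema generates an unbounded family of hosts for `fish', and goodinstancehood is settled independently at every step.

```python
from itertools import islice

# A toy model (modifiers invented) of the productive schema (A* N)N:
# iterated modification yields an unbounded family of hosts for `fish'.
modifiers = ["pet", "big", "expensive", "Chicago-owned"]  # ... and so on

def hosts(head):
    phrase, i = head, 0
    while True:                      # the schema never runs out of hosts
        yield phrase
        phrase = f"{modifiers[i % len(modifiers)]} {phrase}"
        i += 1

for phrase in islice(hosts("fish"), 5):
    print(phrase)
# fish
# pet fish
# big pet fish
# expensive big pet fish
# Chicago-owned expensive big pet fish
#
# Which things count as good instances at each line is an independent,
# empirical matter: it is neither inherited from the line above nor passed
# on to the line below. So a lexicon on which goodinstancehood is partly
# constitutive of `fish' faces not one counterexample but an open-ended
# family of them.
```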

Every primitive expression has an infinity of hosts to which it fails to contribute its good instances (mutatis mutandis its stereotype, its prototype, etc.) So, a grammar that takes goodinstancehood to be an essential property of primitive items faces an infinity of counterexamples. As far as I know, this point has been missed just about entirely in the discussion of the `pet fish problem' in the cognitive science literature. In consequence there's been a tendency to think of the semantics of `pet fish', `male nurse' and the like as idiosyncratic. It's even been suggested, from time to time, that such expressions are idioms after all. But, in fact, the pet fish phenomenon is completely general. That typical pet fish aren't typical pets or typical fish is a fact not different in kind from the fact that expensive pet fish are, practically by definition, not typical pet fish; or that very, very large red triangles aren't typical red triangles (or typical red things; or typical triangles). To repeat the point: it's not just that there are occasional exceptions to the compositionality of goodinstancehood. It's that there is an infinity of exceptions. Which, of course, assumption (iii) does not permit.

Short form of the argument: The evidence from which a language is learned is, surely, the tokenings of its expressions. So, if a lexical expression is to be learnable, its tokenings must reliably manifest its essential properties;11 that's just a way of saying what tenability demands of learnability. But tokenings of A(AN) do not, in general, reliably manifest the essential properties of AN on the assumption that the epistemic properties of A (or of N) are constitutive. Nothing about tokenings of `pet fish' reliably manifests the properties of good instances of fish (or of pets) tout court; nothing about triangles I saw in Kansas reliably manifests the properties of good instances of triangles tout court. And so on for infinitely many cases. So either tenability doesn't hold or the epistemic properties of lexical expressions aren't constitutive. QED.

Very short form of the argument: Assume that learning that they typically live in lakes and streams is part of learning `fish'. Since typical pet fish live in bowls, typical utterances of `pet fish' are counterexamples to the lexical entry for `fish'. This argument iterates (eg. from `pet fish' to `big pet fish' etc.). So a lexicon that makes learning where they typically live part of learning `fish' isn't tenable. Since the corresponding argument can be run for any choice of a `way of telling' (indeed, for any `extra' that you choose) the moral is that an Empiricist lexicon is ipso facto not tenable. QED again.

So, maybe we should give up on tenability? I really don't advise it. Can one, after all, even make sense of the idea that there are infinitely many sentences of English (/infinitely many sentences generated by the correct grammar/theory of English) which, nevertheless, are counter-instances to the mental representation of English that its speakers acquire? What, then, would their being sentences of English consist in? Deep metaphysical issues (about `Platonism' and such) live around here; I don't propose to broach them now. Suffice it that if you think the truth makers for claims about the grammars of natural languages are psychological (as, indeed, you should), then surely you can't but think that the grammars that good learning algorithms choose have to be tenable.

But I prefer to avoid a priorism whenever I can, so let's just assume that being a tenable theory of L and being a well formed expression of L are not actually interdefined. On that assumption, I'm prepared to admit that there is a coherent story about learning English according to which recognitional capacities for fish are constitutive of the mastery of `fish' but not of the mastery of `pet fish' or of any other expression in the sequence A* fish. I.e. there's a coherent story about learnability that doesn't require tenability, contrary to assumption (iii) above.

Here's the story: The child learns `fish' (and the like) only from atomic sentences (`that's a fish') or from such complex host expressions as happen to inherit the criteria for fish. (Presumably these include such expressions as: `good instance of a fish', `perfectly standard example of a fish', and the like). Occurrences of `fish' in any other complex expressions are ignored. So the right lexicon is untenable (there are infinitely many occurrences of `fish' in complex expressions to which `fish' does not contribute its epistemic properties). But it doesn't follow that the right lexicon is unlearnable. Rather, the lexical entry for `fish' is responsive to the use of `fish' in atomic sentences and to its use in those complex expressions which do inherit its epistemic properties (and only to those).12

But while I'm prepared to admit that this theory is coherent, I wouldn't have thought that there was any chance of its being true. I'm pretty sure that tenability is de facto a condition for a learning algorithm for a productive language. Here are my reasons.

I just said that, in principle, you could imagine some data about complex A* N expressions constraining the lexical entry for N (and A) even if the lexicon isn't required to be tenable. The criteria of application for `perfectly standard example of a fish' are presumably inherited from the criteria of application for `fish'. But, though this is so, it's no use to the child since he has no way to pick out the complex expression tokens that are relevant from the ones that aren't. The criteria for `perfectly standard fish' constrain the lexical entry for `fish'; the criteria for `pet fish' don't. But you have no way to know this unless you already know a lot about English; which, of course, the child can't be supposed to do.

What it really comes down to is that if the lexical representations that the learning procedure chooses aren't required to be tenable, then it must be that the procedure treats as irrelevant all data except those that derive from the behavior of atomic sentences. These are the only ones that can be relied on not to provide counter-examples to the correct entry. If that they typically live in streams or oceans is specified in the lexical entry for `fish', that has to be learned from tokenings of sentences which reliably manifest that typical fish live in streams or oceans. And, if you don't know the semantics, so all you've got to go on is form, `a is a fish' is the only kind of sentence that can be relied on to do so. So it's the only kind of sentence whose behavior the child can allow to constrain his lexical entry for `fish'.

You might put it like this: Compositionality works `up' from the semantic properties of constituents to the semantic properties of their hosts. But learnability works `down' from the semantic properties of hosts to those of their constituents. If the learning procedure for L is required to choose a tenable theory of L, then, with only a finite number of (idiomatic) exceptions, every host can (potentially) provide data for each of its lexical constituents. If, however, the procedure is not required to choose tenable grammars, then it has to filter out (ignore) the data about any of infinitely many (actual and possible) host constructions. The only way it can do this reliably is by a constraint on the syntax of the data sentences since, by assumption, their semantics isn't given; it's what the algorithm is trying to learn. And since, qua `extra', the property of being a good instance doesn't distribute from heads to modifiers, the only syntactic constraint that an Empiricist learning algorithm can rely on is to ignore everything but atomic sentences.

So then: If a representation of goodinstancehood is an extra, English couldn't be learned from a corpus of data that includes no syntactically atomic sentences. Notice that this is quite different from, and much stronger than, the claim that the child couldn't learn his language from a corpus that includes no demonstrative sentences. I would have thought, for example, that routine employments of the method of differences might enable the learning procedure to sort out the semantics of `red' given lots and lots of examples like `this is a red rock', `this is a red cow', `this is a red crayon' .... etc., even if a child had no examples of `this is red' tout court. But not so if mastering `red' involves having a recognitional capacity for red things. For even if, as a matter of fact, red rocks, red cows and red crayons are good instances of red things, the child has no way of knowing that this is so. A fortiori, he cannot assume that typical utterances of `red rock', `red cow' etc. are germane to determining the epistemic clauses in the lexical entry for `red'.
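
Here is a minimal sketch of the method-of-differences idea, with invented feature sets standing in for whatever the child actually has to go on: intersection recovers the denotational contribution of `red' from its hosts, but nothing analogous recovers its purported epistemic clause.

```python
# A minimal sketch (feature sets invented for illustration) of the method of
# differences: the denotational contribution of `red' can be isolated by
# intersecting what its hosts have in common, even with no `this is red' data.
red_rock   = {"red", "hard", "mineral"}
red_cow    = {"red", "animal", "four-legged"}
red_crayon = {"red", "waxy", "artifact"}

print(red_rock & red_cow & red_crayon)   # {'red'}

# But no comparable operation over the data recovers what a GOOD INSTANCE of
# red is like: whether red rocks, red cows, or red crayons happen to be good
# instances of red things is an empirical fact the learner cannot read off
# the form of the sentences.
```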

Here's the bottom line. I've previously argued that an Empiricist can't explain why everybody who understands `AN' also understands `A'; he can, at best, stipulate that this is so. The present argument is that, since epistemic properties turn on the notion of a good instance, and since being a good instance doesn't distribute from modifiers to heads or vice versa, if you insist on epistemic properties being specified in the lexicon you will have to treat them as extras. But the learnability of the lexicon is now in jeopardy; for the learning procedure for a lexical item has, somehow, to ignore infinitely many expressions that contain it. In practice this means ignoring everything except atomic expressions, so the Empiricist is committed to an empirical claim which is, in fact, very implausible: You can only learn a language from a corpus that contains syntactically atomic sentences. There is, to repeat, no reason in the world to suppose that this consequence is true. Do you really want to be bound to a theory of lexical content which entails it? And, anyhow, what business has a metaphysical theory about meaning got legislating on this sort of issue?

However, I suppose most of the readers of this stuff will be philosophers, and philosophers like demonstrative arguments. And the argument I've given isn't demonstrative since, as it stands, it's compatible with a learnability theory which assumes that there are (adequately many) atomic sentences in the corpora from which a language is learned. Well, I can't give you a demonstrative argument; but I can give you a diagnosis, one that I think should satisfy even a philosopher. To wit:

`Being able to tell' whether an expression applies is being able to recognize good instances. But being able to recognize a good instance of N doesn't mean being able to recognize a good instance of AN. That's because your ability to recognize a good instance of N may depend on good instances of N being R; and, since goodinstancehood doesn't compose, there's no guarantee that good instances of AN are also R. That is, as I remarked above, the Empiricist's bane. Well, but just why doesn't goodinstancehood compose? Not, I think, because of some strange or wonderful fact about semantic properties that lexical items fail to share with their hosts, but rather because of a boring and obvious fact about typicality. What makes something a typical member of the set of Xs needn't be, and generally isn't, what makes something a typical member of some arbitrary sub (or super) set of the Xs. And even when it is, it's generally a contingent fact that it is; a fortiori, it isn't a necessary truth that it is; a fortiori, it isn't a linguistic truth that it is since, I suppose, linguistic truths are necessary whatever else they are. Whether being red makes a triangle typical of the kind of triangles they have in Kansas depends on brute facts about what kinds of triangles they have in Kansas; if you want to know, you have to go and look. So it's perfectly possible to know what makes something a typical X, and to know your language, and none the less not to have a clue what makes something a typical member of some sub- (or super-) set. What makes a such and such a good example of a such and such just isn't a question of linguistics; all those criteriologists, and all those paradigm case arguments, to the contrary notwithstanding.

It looks alright to have recognitional (/epistemic) features that are constitutive of terms (/concepts) until you notice that being a good instance isn't compositional, whereas the essential (a fortiori, the semantic) properties of linguistic expressions have to be. There is, in short, an inherent tension between the idea that there are recognitional terms/concepts and the idea that the semantics of language/thought is productive. That this sort of problem should arise for empiricists in semantics is perhaps not very surprising: Their motivation has always had a lot more to do with refuting skeptics and/or metaphysicians than with solving problems about linguistic or conceptual content. Empiricist semanticists took their empiricism much more seriously than they took their semantics; it was really justification that they cared about, not language or thought. Now all that playing fast and loose with the intentional has caught up with them. Good! It's about time!

I'm afraid this has been a long, hard slog. So I'll say what the morals are in case you decided to skip the arguments and start at the end.

The methodological moral is: compositionality, learnability, and lexical content need to take in one another's wash. Considerations of learnability powerfully constrain how we answer the galaxy of questions around Q. And considerations of compositionality powerfully constrain the conditions for learnability.

The substantive moral is: All versions of empiricist semantics are false. There are no recognitional concepts; not even RED.

Notes to Chapter 5

 

1. For all the cases that will concern us, I'll assume that concepts are what linguistic expressions express. So I'll move back and forth between talking about concepts and talking about words as ease of exposition suggests.

2. Of course, there will generally be more than one way of telling whether an expression applies; but there are obvious ways of modifying the point in the text to accommodate that.

3. It could be, of course, that this is because `red' and `triangle' express bona fide recognitional concepts but `pet' and `fish' do not. Notoriously, there isn't a lot of philosophical consensus about what words (/concepts) are good candidates for being recognitional. But, in fact, it doesn't matter. As will presently become clear, however the recognitional primitives are chosen, each one has infinitely many hosts to which it doesn't contribute its proprietary recognition procedure.

4. Philosophical friends who have endorsed this conclusion in one or another form, and with more or less enthusiasm, include: Ned Block, Paul Horwich, Brian Loar, Chris Peacocke, and Stephen Schiffer. That's a daunting contingent to be sure, and I am grateful to them for conversation and criticism. But they're wrong.

5. A variant of this proposal I've heard frequently is that, whereas the criterial and denotational properties of primitive expressions are both constitutive, only the denotational properties of primitive expressions are compositional; i.e. these are the only semantic properties that primitive expressions contribute to their hosts.

It's more or less common ground that denotation (instancehood, as opposed to goodinstancehood) does compose. The (actual and possible) red triangles are the intersection of the (actual and possible) things `red' denotes and the (actual and possible) things `triangle' denotes; the pet fish are the intersection of the things `pet' denotes with the things that `fish' denotes. And so on. It's thus on the cards that if, as I argue, compositionality rules out epistemic properties as linguistically constitutive, the default theory is that the constitutive semantic properties of words (and concepts) are their satisfaction conditions.

6. Notice that the argument now unfolding is neutral on the parenthetical caveat. I.e. it is not committed either way with respect to what Schiffer calls the `Agglomerative Principle'. See Part 1.

7. I.e. always excepting idioms. This caveat isn't question-begging, by the way, since there are lots of independent tests for whether an expression is an idiom (eg. resistance to syntactic transformation; to say nothing of entailing and being entailed by its constituents.)

8. On the other hand, if P is true, then this lexicon implies that `pet fish' is an idiom.

9. `(A* N)N' is a schema for the infinite sequence of (phrasal) nouns: (A1 N)N, (A2 N)N ... etc.

10. Notice that the point I'm making does not depend on modal or "counterfactive" adjectives (`probably', `fake' etc.), where it's well known that AN can be compatible with, or entail, not-N.

Notice, likewise, that though for convenience I talk of goodinstancehood being not compositional, exactly parallel considerations show that constraints on goodinstancehood aren't. The requirements for being a good instance of N aren't, in general, inherited by the requirements for being a good instance of AN; and so forth. This is a special case of the intertranslatability of stereotype-like theories of concepts (/lexical meanings) with exemplar-like theories.

11. It isn't good enough, of course, that the data from which an expression is learned should merely be compatible with its having the essential properties that it does. That `pet fish' is typically said of things that live in bowls is compatible with `fish' being typically said of things that don't; for it's contingent, and neither a priori nor linguistic, that people don't generally keep good instances of fish as pets.

12. Notice that the present proposal is not that the child learns that only primitive expressions exhibit the constitutive epistemic properties. `Typical triangle' and the like show that that's not true. The relevant constraint is (not on what the child learns but) on what data he learns it from.