Some Concepts of Consciousness[1]

 

Ned Block

NYU

 

Abstract

Consciousness is a mongrel concept: there are a number of very different "consciousnesses".  Phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state.  The mark of access-consciousness, by contrast, is availability for use in reasoning and rationally guiding speech and action.  These concepts are often partly or totally conflated, with bad results. 

 

The concept of consciousness is a hybrid, or better, a mongrel concept: the word `consciousness' connotes a number of different concepts and denotes a number of different phenomena.  We reason about "consciousness" using some premises that apply to one of the phenomena that fall under "consciousness", other premises that apply to other "consciousnesses", and we end up with trouble.  There are many parallels in the history of science.  Aristotle used `velocity' sometimes to mean average velocity and sometimes to mean instantaneous velocity; his failure to see the distinction caused confusion (Kuhn, 1964).  The Florentine Experimenters of the 17th Century used a single word (roughly translatable as "degree of heat") for temperature and for heat, generating paradoxes (Block and Dworkin, 1974).  For example, when they measured "degree of heat" by whether various heat sources could melt paraffin, heat source A came out hotter than B, but when they measured "degree of heat" by how much ice a heat source could melt in a given time, B was hotter than A (Wiser and Carey, 1983).  These are very different cases, but they share one feature with the case of `consciousness': very different concepts are treated as a single concept.  I think we all have some tendency to make this mistake in the case of "consciousness".

 

 

Phenomenal Consciousness

First, consider phenomenal consciousness, or P-consciousness, as I will call it.  Phenomenal consciousness is experience; what makes a state phenomenally conscious is that there is something “it is like” (Nagel, 1974) to be in that state.  Let me acknowledge at the outset that I cannot define P-consciousness in any remotely non-circular way.  I don't consider this an embarrassment.  The history of reductive definitions in philosophy should lead one not to expect a reductive definition of anything.  But the best one can do for P-consciousness is in some respects worse than for many other things because really all one can do is point to the phenomenon (cf.  Goldman, 1993a).  Nonetheless, it is important to point properly. John Searle, acknowledging that consciousness cannot be defined non-circularly, defines it as follows:

 

“By consciousness I simply mean those subjective states of awareness or sentience that begin when one wakes in the morning and continue throughout the period that one is awake until one falls into a dreamless sleep, into a coma, or dies or is otherwise, as they say, unconscious.”  [This comes from Searle, 1990; there is a much longer attempt along the same lines in his 1992, p. 83ff.]

 

I will argue that this sort of pointing is flawed because it points to too many things, too many different consciousnesses.

 

So how should we point to P-consciousness?  Well, one way is via rough synonyms.  As I said, P-consciousness is experience.  P-conscious properties are experiential properties.  P-conscious states are experiential states; that is, a state is P-conscious just in case it has experiential properties.  The totality of the experiential properties of a state is “what it is like” to have it.  Moving from synonyms to examples, we have P-conscious states when we see, hear, smell, taste and have pains.  P-conscious properties include the experiential properties of sensations, feelings and perceptions, but I would also include thoughts, wants and emotions.[2]  An important feature of P-consciousness is that differences in intentional content often make a P-conscious difference.  What it is like to hear a sound as coming from the left differs from what it is like to hear a sound as coming from the right.  Further, P-conscious differences often make an intentional difference.  And this is partially explained by the fact that P-consciousness is often--perhaps even always--representational.  (See Jackendoff, 1987; van Gulick, 1989; McGinn, 1991, Ch 2; Flanagan, 1992, Ch 4; Goldman, 1993b.)  So far, I don't take myself to have said anything terribly controversial.  The controversial part is that I take P-conscious properties to be distinct from any cognitive, intentional, or functional property.  At least, no such reduction of P-consciousness to the cognitive, intentional or functional can be known in the armchair manner of recent deflationist approaches.  (Cognitive = essentially involving thought; intentional properties = properties in virtue of which a representation or state is about something; functional properties = e.g. properties definable in terms of a computer program.  See Searle, 1983, on intentionality; see Block, 1980, 1994, for better characterizations of a functional property.)  But I am trying hard to limit the controversiality of my assumptions.  Though I will be assuming that functionalism about P-consciousness is false, I will be pointing out that limited versions of many of the points I will be making can be acceptable to the functionalist.[3]

 

By way of homing in on P-consciousness, it is useful to appeal to what may be a contingent property of it, namely the famous "explanatory gap".  To quote T.H. Huxley (1866), "How it is that anything so remarkable as a state of consciousness comes about as a result of irritating nervous tissue, is just as unaccountable as the appearance of Djin when Aladdin rubbed his lamp."  Consider a famous neurophysiological theory of P-consciousness offered by Francis Crick and Christof Koch (1990): namely, that a synchronized 35-75 hertz neural oscillation in the sensory areas of the cortex is at the heart of phenomenal consciousness.  Assuming for the moment that such neural oscillations are the neural basis of sensory consciousness, no one has produced the concepts that would allow us to explain why such oscillations are the neural basis of one phenomenally conscious state rather than another, or why the oscillations are the neural basis of a phenomenally conscious state rather than a phenomenally unconscious state. 

 

However, Crick and Koch have offered a sketch of an account of how the 35-75 hertz oscillation might contribute to a solution to the "binding problem".  Suppose one simultaneously sees a red square moving to the right and a blue circle moving to the left.  Different areas of the visual cortex are differentially sensitive to color, shape, motion, etc. so what binds together redness, squareness and rightward motion?  That is, why don't you see redness and blueness without seeing them as belonging with particular shapes and particular motions?  And why aren't the colors normally seen as bound to the wrong shapes and motions?  Representations of colors, shapes and motions of a single object are supposed to involve oscillations that are in phase with one another but not with representations of other objects.  But even if the oscillation hypothesis deals with the informational aspect of the binding problem (and there is some evidence against it), how does it explain what it is like to see something as red in the first place--or for that matter, as square or as moving to the right?  Why couldn't there be brains functionally or physiologically just like ours, including oscillation patterns, whose owners' experience was different from ours or who had no experience at all? (Note that I don't say that there could be such brains.  I just want to know why not.) No one has a clue how to answer these questions.
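To make the phase-binding idea concrete, here is a minimal toy sketch, bearing only on the informational aspect of the binding problem (it says nothing, of course, about the explanatory gap).  The feature set, the 40 hertz figure and the phase tolerance are illustrative assumptions of mine, not Crick and Koch's: features of a single object are represented by in-phase oscillations, and a downstream consumer treats features as bound just in case their phases match.

```python
import math

# Toy sketch of binding-by-synchrony -- NOT Crick and Koch's actual model.
# Features of one object are given in-phase oscillations; a downstream
# "consumer" binds features whose phase difference stays small.

FREQ_HZ = 40.0  # within the 35-75 hertz band discussed above

# Hypothetical stimulus: a red square moving right, a blue circle moving left.
# Each feature gets a phase offset; same object => same offset.
offset = {
    "red": 0.0, "square": 0.0, "rightward": 0.0,              # object 1
    "blue": math.pi, "circle": math.pi, "leftward": math.pi,  # object 2
}

def phase(feature: str, t: float) -> float:
    """Instantaneous phase of the oscillation representing `feature`."""
    return 2 * math.pi * FREQ_HZ * t + offset[feature]

def bound(f1: str, f2: str, t: float = 0.0, tol: float = 0.1) -> bool:
    """Treat two features as bound iff their oscillations are (nearly) in phase."""
    diff = abs(phase(f1, t) - phase(f2, t)) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff) < tol

print(bound("red", "square"))     # True: features of the same object
print(bound("red", "leftward"))   # False: the mis-binding is avoided
```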

 

The explanatory gap in the case of P-consciousness contrasts with our better (though still not very good) understanding of the scientific basis of cognition.  We have two serious research programs into the nature of cognition, the classical “language of thought” paradigm and the connectionist research program.  Both assume that the scientific basis of cognition is computational.  If this idea is right--and it seems increasingly promising--it gives us a better grip on why the neural basis of a thought state is the neural basis of that thought rather than some other thought or none at all than we have on the analogous issue for consciousness.

 

What I've been saying about P-consciousness is of course controversial in a variety of ways, both for some advocates and some opponents of some notion of P-consciousness.  I have tried to steer clear of some controversies, e.g. controversies over inverted and absent qualia; over Jackson's (1986) Mary, the woman who is raised in a black and white room, learning all the physiological and functional facts about the brain and color vision, but nonetheless discovers a new fact when she goes outside the room for the first time and learns what it is like to see red; and even Nagel's view that we cannot know what it is like to be a bat.[4]  Even if you think that P-consciousness as I have described it is an incoherent notion, you may be able to agree with the main point of this paper, which is that a great deal of confusion arises as a result of confusing P-consciousness with something else.  Not even the concept of what time it is now on the sun is so confused that it cannot itself be confused with something else.

 

Access-Consciousness

I now turn to the non-phenomenal notion of consciousness that is most easily and dangerously conflated with P-consciousness: access-consciousness.  I will characterize access-consciousness, give some examples of how it makes sense for someone to have access-consciousness without phenomenal consciousness and vice versa, and then go on to the main theme of the paper, the damage done by conflating the two.

 

A-consciousness is access-consciousness.  A representation is A-conscious if it is broadcast for free use in reasoning and for direct “rational” control of action (including reporting).  An A-state is one that consists in having an A-representation.  I see A-consciousness as a cluster concept in which reportability is the element of the cluster that has the smallest weight, even though it is often the best practical guide to A-consciousness.

 

The ‘rational’ is meant to rule out the kind of automatic control that obtains in blindsight.  (Blindsight is a syndrome involving patients who have brain damage in the first stage of visual processing, primary visual cortex.  These patients seem to have “holes” in their visual fields.  If the experimenter flashes stimuli in these holes and asks the patient what was flashed, the patient claims to see nothing but can often guess at high levels of accuracy, choosing between two locations or directions or whether what was flashed was an ‘X’ or an ‘O’.)

 

I will suggest that A-consciousness plays a deep role in our ordinary `consciousness' talk and thought.  However, I must admit at the outset that this role allows for substantial indeterminacy in the concept itself.  In addition, there are some loose ends in the characterization of the concept which cannot be tied up without deciding about certain controversial issues, to be mentioned below.[5] My guide in making precise the notion of A-consciousness is to formulate an information processing correlate of P-consciousness that is not ad hoc and mirrors P-consciousness as well as a non-ad hoc information processing notion can. 

 

In the original version of this paper, I defined ‘A-consciousness’ as (roughly) ‘poised for control of speech, reasoning and action’.[6]  In a comment on the original version of this paper, David Chalmers (1997) suggested defining ‘A-consciousness’ instead as ‘directly available for global control’.  Chalmers’ definition has the advantage of avoiding enumerating the kinds of control.  That makes the notion more general, applying to creatures who have kinds of control that differ from ours.  But it has the disadvantage of that advantage, counting simple organisms as having A-consciousness if they have representations that are directly available for global control of whatever resources they happen to have.  If the idea of A-consciousness is to be an information processing image of P-consciousness, it would not do to count a slug as having A-conscious states simply because there is some machinery of control of the resources that a slug happens to command.

 

As I noted, my goal in precisifying the ordinary notion of access as it is used in thinking about consciousness is to formulate a non-ad hoc notion that is close to an information processing image of P-consciousness.  A flaw in both my definition and Chalmers’ definition is that they make A-consciousness dispositional whereas P-consciousness is occurrent.  As noted in the critique by Atkinson and Davies (1995), that makes the relation between P-consciousness and A-consciousness the relation between the ground of a disposition and the disposition itself.  (See also Burge, 1997.)  This has long been one ground of criticism of both functionalism and behaviorism (Block and Fodor, 1972), but there is no real need for an information-processing notion of consciousness to be saddled with a category mistake of this sort.  I have dealt with the issue here by using the term ‘broadcast’, as in Baars’ (1988) theory that conscious representations are ones that are broadcast in a global workspace.  A-consciousness is similar to that notion and to Dennett’s (1993) notion of consciousness as cerebral celebrity.[7]

 

The interest in the A/P distinction arises from the battle between two different conceptions of the mind, the biological and the computational.  The computational approach supposes that all of the mind (including consciousness) can be captured with notions of information processing, computation and function in a system.  According to this view (often called functionalism by philosophers), the level of abstraction for understanding the mind is one that allows multiple realizations, just as one computer can be realized electrically or hydraulically.  The computationalists' bet is that the different realizations don't matter to the mind generally, and to consciousness specifically.  The biological approach bets that the realization does matter.  If P = A, the information processing side is right.  But if the biological nature of experience is crucial, then realizations do matter, and we can expect that P and A will diverge.[8]

 

Although I make a distinction between A-consciousness and P-consciousness, I also want to insist that they interact.  For example, what perceptual information is being accessed can change figure to ground and conversely, and a figure-ground switch can affect one's phenomenal state.  Attending to the feel of the shirt on your neck, accessing those perceptual contents, switches what was in the background to the foreground, thereby changing one's phenomenal state.  (See Hill, 1991, 118-126; Searle, 1992.)

Of course, there are notions of access in which the blindsight patient's guesses count as access. There is no right or wrong here. Access comes in various degrees and kinds, and my choice here is mainly determined by the desideratum of finding a notion of A-consciousness that mirrors P-consciousness.  If the blindsight patient’s perceptual representations are not P-conscious, it would not do to count them as A-conscious.  (I also happen to think that the notion I characterize is more or less one that plays a big role in our thought, but that won't be a major factor here.)

 

I will mention three main differences between P-consciousness and A-consciousness.  The first point, put crudely, is that P-conscious content is phenomenal, whereas A-conscious content is representational.  It is of the essence of A-conscious content to play a role in reasoning, and only representational content can figure in reasoning.  The reason this way of putting the point is crude is that many (perhaps even all) phenomenal contents are also representational.  And some of the representational contents of a P-conscious state may be intrinsic to those P-contents.[9]

 

(In the last paragraph, I used the notion of P-conscious content.  The P-conscious content of a state is the totality of the state's experiential properties, what it is like to be in that state.  One can think of the P-conscious content of a state as the state's experiential "value", by analogy with its representational content, the state's representational "value".  In my view, the content of an experience can be both P-conscious and A-conscious; the former in virtue of its phenomenal feel and the latter in virtue of its representational properties.)

 

A closely related point: A-conscious states are necessarily transitive: A-conscious states must always be states of consciousness of. P-conscious states, by contrast, sometimes are and sometimes are not transitive.  P-consciousness, as such, is not consciousness of.  (I'll return to this point in a few paragraphs.)

 

Second, A-consciousness is a functional notion, and so A-conscious content is system-relative: what makes a state A-conscious is what a representation of its content does in a system.  P-consciousness is not a functional notion.[10]  In terms of Schacter's model of the mind (see the original version of this paper, Block, 1995), content gets to be P-conscious because of what happens inside the P-consciousness module.  But what makes content A-conscious is not anything that could go on inside a module, but rather informational relations among modules.  Content is A-conscious in virtue of (a representation with that content) reaching the Executive system, the system that is in charge of rational control of action and speech, and to that extent, we could regard the Executive module as the A-consciousness module.  But to regard anything as an A-consciousness module is misleading, because what makes a typical A-conscious representation A-conscious is what getting to the Executive module sets it up to do, namely affect reasoning and action.

 

A third difference is that there is such a thing as a P-conscious type or kind of state.  For example, the feel of pain is a P-conscious type--every pain must have that feel.  But any particular token thought that is A-conscious at a given time could fail to be accessible at some other time, just as my car is accessible now, but will not be later when my wife has it.  A state whose content is informationally promiscuous now may not be so later.

 

The paradigm P-conscious states are sensations, whereas the paradigm A-conscious states are "propositional attitude" states like thoughts, beliefs and desires, states with representational content expressed by "that" clauses.  (E.g. the thought that grass is green.)  What, then, gets broadcast when a P-conscious state is also A-conscious?  The most straightforward answer is: the P-content itself.  However, exactly what this comes to depends on what exactly P-content is.  If P-content is non-conceptual, it may be said that P-contents are not the right sort of thing to play a role in inference and guiding action.  However, even with non-humans, pain plays a rational role in guiding action.  Different actions are appropriate responses to pains in different locations.  Since the contents of pain do in fact play a rational role, either their contents are conceptualized enough, or else non-conceptual or not very conceptual content can play a rational role. 

 

There is a familiar distinction, alluded to above, between `consciousness’ in the sense in which we speak of a state as being a conscious state (intransitive consciousness) and consciousness of something (transitive consciousness).  (The transitive/intransitive terminology seems to have appeared first in Malcolm (1984), but see also Rosenthal (1997).  Humphrey (1992) mentions that the intransitive usage is much more recent, only 200 years old.)  It is easy to fall into an identification of P-consciousness with intransitive consciousness and a corresponding identification of access-consciousness with transitive consciousness.  Such an identification is oversimple.  As I mentioned earlier, P-conscious contents can be representational.  Consider a perceptual state of seeing a square.  This state has a P-conscious content that represents something, a square, and thus it is a state of P-consciousness of the square.  It is a state of P-consciousness of the square even if it doesn't represent the square as a square, as would be the case if the perceptual state is a state of an animal that doesn't have the concept of a square.  Since there can be P-consciousness of something, P-consciousness is not to be identified with intransitive consciousness.

 

Here is a second reason why the transitive/intransitive distinction cannot be identified with the P-consciousness/A-consciousness distinction: The of-ness required for transitivity does not guarantee that a content be utilizable by a consuming system, a system that uses the representations for reasoning or planning or control of action at the level required for A-consciousness.  For example, a perceptual state of a brain-damaged creature might be a state of P-consciousness of, say, motion, even though connections to reasoning and rational control of action are damaged so that the state is not A-conscious.  In sum, P-consciousness can be consciousness of, and consciousness of need not be A-consciousness.

 

Those who are uncomfortable with P-consciousness should pay close attention to A-consciousness because it is a good candidate for a reductionist identification with P-consciousness.[11]

 

Many of my critics (Searle, 1992; Burge, 1997) have noted that if there can be “zombies”, cases of A without P, they are not conscious in any sense of the term.  I am sympathetic, but I don’t agree with the conclusion that some have drawn that the A-sense is not a sense of “consciousness” and that A is not a kind of consciousness.  A-consciousness can be a kind of consciousness even if it is parasitic on a core notion of P-consciousness.  A parquet floor is a floor even though it requires another floor beneath it.  A-consciousness can come and go against a background of P-consciousness.

 

The rationale for calling A-consciousness a kind of consciousness is first that it fits a certain kind of quasi-ordinary usage.  Suppose one has a vivid mental image that is repressed.  Repression need not make the image go away or make it non-phenomenal.  One might realize after psychoanalysis that one had the image all along, but that one could not cope with it.  It is “unconscious” in the Freudian sense--which is A-unconsciousness.  Second, A-consciousness is typically the kind of consciousness that is relevant to the use of words like “conscious” and “aware” in cognitive neuroscience.  This point is made in detail in my comment on a special issue of the journal Cognition (Block, 2001).  That issue summarizes the “state of the art”, and some of the writers are clearly talking about A-consciousness (or one or another version of monitoring consciousness--see below) whereas others are usually talking about P-consciousness.  The A notion of consciousness is the most prominent one in the discussion in that issue and in much of the rest of cognitive neuroscience.  (See the article by Dehaene and Naccache in that volume, which is very explicit about the use of A-consciousness.)  Finally, recall that my purpose in framing the notion of A-consciousness is to get a functional notion of consciousness that is not ad hoc and comes as close to matching P-consciousness as a purely functional notion can.  I hope to show that nonetheless there are cracks between P and A.  In this context, I prefer to be liberal with terminology, allowing that A is a form of consciousness but not identical to phenomenal consciousness.

 

A-Consciousness Without P-Consciousness

The main point of this paper is that these two concepts of consciousness are distinct and quite likely have different extensions yet are easily confused.  Let us consider conceptually possible cases of one without the other.  Actual cases will be more controversial.

 

First, I will give some putative examples of A-consciousness without P-consciousness.  If there could be a full-fledged phenomenal zombie, say a robot computationally identical to a person, but whose silicon brain did not support P-consciousness, that would do the trick.  I think such cases conceptually possible, but this is very controversial. (See Shoemaker, 1975, 1981)

 

But there is a less controversial kind of case, a very limited sort of partial zombie.  Consider the blindsight patient who "guesses" that there is an `X' rather than an `O' in his blind field.  Taking his word for it (for the moment), I am assuming that he has no P-consciousness of the `X'.  The blindsight patient also has no `X'-representing A-conscious content, because although the information that there is an `X' affects his "guess", it is not available as a premise in reasoning (until he has the quite distinct state of hearing and believing his own guess), or for rational control of action or speech.  Marcel (1986) points out that the thirsty blindsight patient would not reach for a glass of water in the blind field. So the blindsight patient's perceptual or quasi-perceptual state is unconscious in the phenomenal and access senses (and in the monitoring senses to be mentioned below too).

 

Now imagine something that may not exist, what we might call superblindsight.  A real blindsight patient can only guess when given a choice from a small set of alternatives (`X'/`O'; horizontal/vertical, etc.).  But suppose--interestingly, apparently contrary to fact--that a blindsight patient could be trained to prompt himself at will, guessing what is in the blind field without being told to guess.  The superblindsighter spontaneously says "Now I know that there is a horizontal line in my blind field even though I don't actually see it."  Visual information of a certain limited sort (excluding color and complicated shapes) from his blind field simply pops into his thoughts in the way that solutions to problems we've been worrying about pop into our thoughts, or in the way some people just know the time or which way is North without having any perceptual experience of it.  He knows there is an ‘X’ in his blind field, but he doesn’t know the type font of the ‘X’.  The superblindsighter himself contrasts what it is like to know visually about an `X' in his blind field and an `X' in his sighted field.  There is something it is like to experience the latter, but not the former, he says.  It is the difference between just knowing and knowing via a visual experience.  Taking his word for it, here is the point: the perceptual content that there is an `X' in his visual field is A-conscious but not P-conscious.  The superblindsight case is a very limited partial zombie.

 

Of course, the superblindsighter has a thought that there is an ‘X’ in his blind field that is both A-conscious and P-conscious.  But I am not talking about the thought.  Rather, I am talking about the state of his perceptual system that gives rise to the thought.  It is this state that is A-conscious without being P-conscious.[12]

 

The (apparent) non-existence of superblindsight is a striking fact, one that a number of writers have noticed, more or less.  What Marcel was in effect pointing out was that the blindsight patients, in not reaching for a glass of water, are not superblindsighters.  (See also Farah, 1994.)  Blind perception is never superblind perception.[13]

 

Notice that the superblindsighter I have described is just a little bit different (though in a crucial way) from the ordinary blindsight patient. In particular, I am not relying on what might be thought of as a full-fledged quasi-zombie, a super-duper-blindsighter whose blindsight is every bit as good, functionally speaking, as his sight. In the case of the super-duper blindsighter, the only difference between vision in the blind and sighted fields, functionally speaking, is that the quasi-zombie himself regards them differently.  Such an example will be regarded by some (though not me) as incoherent--see Dennett, 1991, for example.  But we can avoid disagreement about the super-duper-blindsighter by illustrating the idea of A-consciousness without P-consciousness by appealing only to the superblindsighter. Functionalists may want to know why the superblindsight case counts as A-conscious without P-consciousness.  After all, they may say, if we have really high quality access in mind, the superblindsighter that I have described does not have it, so he lacks both P-consciousness and really high quality A-consciousness.  The super-duper-blindsighter, on the other hand, has both, according to the functionalist, so in neither case, according to the objection, is there A-consciousness without P-consciousness.

 

One could put the point by distinguishing three types of access: (1) really high quality access, (2) medium access and (3) poor access.  The actual blindsight patient has poor access (he has to be prompted to guess), the superblindsight patient has medium access and the super-duper blindsight patient--as well as most of us--has really high quality access.  The functionalist objector I am talking about identifies P-consciousness with A-consciousness of the really high quality kind, whereas I am allowing A-consciousness with only medium access.  (We agree in excluding low quality access.)  The issue, then, is whether the functionalist can get away with restricting access to high quality access.  I think not.  I believe that in some cases, normal phenomenal vision involves only medium access.  The easiest case to see for yourself is peripheral vision.  If you wave a colored object near your ear, you will find that in the right location you can see the movement without having the kind of rich access that you have in foveal vision.  For example, your ability to recover shape and color is poor.

 

Why isn’t peripheral vision a case of A without P?  In peripheral vision, we are both A and P conscious of the same features—e.g. motion but not color.  But in superblindsight—so the story goes—there is no P consciousness of the horizontal line.  (He just knows.)  I conclude that A without P is conceptually possible even if not actual.

 

P-Consciousness Without A-Consciousness

Consider an animal that you are happy to think of as having P-consciousness in which brain damage has destroyed centers of reasoning and rational control of action, thus preventing A-consciousness.  It certainly seems conceptually possible that the neural bases of P-consciousness systems and A-consciousness systems be distinct, and if they are distinct, then it is possible, at least conceptually possible, for one to be damaged while the other is working well.  Evidence has been accumulating for twenty-five years that the primate visual system has distinct dorsal and ventral subsystems.  Though there is much disagreement about the specializations of the two systems, it does appear that much of the information in the ventral system is much more closely connected to P-consciousness than information in the dorsal system (Goodale and Milner, 1992).  So it may actually be possible to damage A-consciousness without damaging P-consciousness, and perhaps even conversely.[14]

 

Further, one might suppose (Rey, 1983, 1988; White, 1987) that some of our own subsystems--say each of the two hemispheres of the brain--might themselves be separately P-conscious.  Some of these subsystems might also be A-conscious, but other subsystems might not have sufficient machinery for reasoning or reporting or rational control of action to allow their P-conscious states to be A-conscious; so if those states are not accessible to another system that does have adequate machinery, they will be P-conscious but not A-conscious.

 

Here is another reason to believe in P-consciousness without A-consciousness: Suppose that you are engaged in intense conversation when suddenly at noon you realize that right outside your window, there is--and has been for some time--a pneumatic drill digging up the street.  You were aware of the noise all along, one might say, but only at noon are you consciously aware of it.  That is, you were P-conscious of the noise all along, but at noon you are both P-conscious and A-conscious of it.  Of course, there is a very similar string of events in which the crucial event at noon is a bit more intellectual.  In this alternative scenario, at noon you realize not just that there is and has been a noise, but also that you are now and have been hearing the noise.  In this alternative scenario, you get "higher order thought" as well as A-consciousness at noon.  So on the first scenario, the belief that is acquired at noon is that there is and has been a noise, and on the second scenario, the beliefs that are acquired at noon are the first one plus the belief that you are and have been hearing the noise.  But it is the first scenario, not the second, that interests me.  It is a good case of P-consciousness without A-consciousness.  Only at noon is the content of your representation of the drill broadcast for use in rational control of action and speech.  (Note that A-consciousness requires being broadcast, not merely being available for use.)

 

In addition, this case involves a natural use of `conscious' and `aware' for A-consciousness and P-consciousness.  `Conscious' and `aware' are more or less synonymous, so when we have one of them we might think of it as awareness, but when we have both it is natural to call that conscious awareness.  This case of P-consciousness without A-consciousness exploits what William James (1890) called "secondary consciousness" (at least I think it does; James scholars may know better), a category that he may have meant to include cases of P-consciousness without attention.

 

I have found that the argument of the last paragraph makes those who are distrustful of introspection uncomfortable.  I agree that introspection is not the last word, but it is the first word, when it comes to P-consciousness.  The example shows the conceptual distinctness of P-consciousness from A-consciousness and it also puts the burden of proof on anyone who would argue that as a matter of empirical fact they come to the same thing.

 

A-consciousness and P-consciousness very often occur together.  When one or the other is missing, we can often speak of unconscious states (when the context is right).  Thus, in virtue of missing A-consciousness, we think of Freudian states as unconscious.  And in virtue of missing P-consciousness, it is natural to describe the superblindsighter or the unfeeling robot or computer as unconscious.  Lack of monitoring-consciousness in the presence of A and P is also sometimes described as unconsciousness.  Thus Julian Jaynes describes the Greeks as becoming conscious when--in between the time of the Iliad and the Odyssey--they became more reflective.

 

Flanagan (1992) criticizes my notion of A-consciousness, suggesting that we replace it with a more liberal notion of informational sensitivity that counts the blindsight patient as having access-consciousness of the stimuli in his blind field.  The idea is that the blindsight patient has some access to the information about the stimuli in the blind field, and that amount of access is enough for access consciousness.  Of course, as I keep saying, the notion of A-consciousness that I have framed is just one of a family of access notions.  But there is more than a verbal issue here.  The real question is: what good is A-consciousness as I have framed it in relation to the blindsight issue?  The answer is that in blindsight, the patient is supposed to lack "consciousness" of the stimuli in the blind field.  My point is that the blindsight patient lacks both P-consciousness and a kind of access (both medium and high level access in the terminology used earlier), and that these are easily confused.  This point is not challenged by pointing out that the blindsight patient also has a lower level of access to this information.

 

The kind of access that I have built into A-consciousness plays a role in theory outside of this issue and in daily life.  Consider the Freudian unconscious.  Suppose I have a Freudian unconscious desire to kill my father and marry my mother.  Nothing in Freudian theory requires that this desire be P-unconscious; for all Freudians should care, it might be P-conscious.  The key to the desire's being Freudianly unconscious is that it comes out in slips, dreams, and the like, but is not freely available as a premise in reasoning (in virtue of having the unconscious desire) and not freely available to guide action and reporting.  Coming out in slips and dreams makes it conscious in Flanagan's sense, so that sense of access is no good for capturing the Freudian idea.  But it is unconscious in my A-sense.  If I can just tell you that I have a desire to kill my father and marry my mother (and not as a result of therapy), then it isn't an unconscious state in either Freud's sense or my A-sense.  Similar points can be made about a number of the syndromes that are often regarded as disorders of consciousness.  For example, consider prosopagnosia, a syndrome in which someone who can see noses, eyes, etc., cannot recognize faces.  Prosopagnosia is a disorder of A-consciousness, not P-consciousness and not Flanagan's informational sensitivity.  We count someone as a prosopagnosic even when they are able to guess at better than a chance level who the face belongs to, so that excludes Flanagan's notion.  Further, P-consciousness is irrelevant, and that excludes P-consciousness as a criterion.  It isn't the presence or absence of a feeling of familiarity that defines prosopagnosia, but rather the patient not knowing who the person is whose face he is seeing or whether he knows that person.

 

I am finished sketching the contrast between P-consciousness and A-consciousness.  In the remainder of this section, I will briefly discuss two cognitive notions of consciousness, so that they are firmly distinguished from both P-consciousness and A-consciousness. 

 

Self-consciousness

By this term, I mean the possession of the concept of the self and the ability to use this concept in thinking about oneself.  A number of higher primates show signs of recognizing that they see themselves in mirrors.  They display interest in correspondences between their own actions and the movements of their mirror images.  By contrast, dogs treat their mirror images as strangers at first, slowly habituating.  In one experimental paradigm, experimenters painted colored spots on the foreheads and ears of anesthetized primates, watching what happened.  Chimps between ages 7 and 15 usually try to wipe the spot off (Povinelli, 1994; Gallup, 1982).  Monkeys do not do this, according to published reports as of 1994.  (Since then, Hauser et al., 1995, have shown that monkeys can pass the test if the mark is salient enough.)  Human babies don't show similar behavior until the last half of their second year.  Perhaps this is a test for self-consciousness.  (Or perhaps it is only a test for understanding mirrors; but what is involved in understanding mirrors if not that it is oneself one is seeing?)  But even if monkeys and dogs have no self-consciousness, no one should deny that they have P-conscious pains, or that there is something it is like for them to see their reflections in the mirror.  P-conscious states often seem to have a "me-ishness" about them; the phenomenal content often represents the state as a state of me.  But this fact does not at all suggest that we can reduce P-consciousness to self-consciousness, since such "me-ishness" is the same in states whose P-conscious content is different.  For example, the experience as of red is the same as the experience as of green in self-orientation, but the two states are different in phenomenal feel.[15]

 

Monitoring-consciousness

The idea of consciousness as some sort of internal monitoring takes many forms.  One notion is that of some sort of inner perception.  This could be a form of P-consciousness, namely P-consciousness of one's own states or of the self.  Another notion is often put in information-processing terms: internal scanning.  And a third, metacognitive notion, is that of a conscious state as one that is accompanied by a thought to the effect that one is in that state.[16] Let us lump these together as one or another form of monitoring-consciousness.  Given my liberal terminological policy, I have no objection to monitoring-consciousness as a notion of consciousness.  Where I balk is at the idea that P-consciousness just is one or another form of monitoring-consciousness.

 

To identify P-consciousness with internal scanning is just to grease the slide to eliminativism about P-consciousness.  Indeed, as Georges Rey (1983) has pointed out, ordinary laptop computers are capable of various types of self-scanning, but as he also points out, no one would think of their laptop computer as "conscious" (using the term in the ordinary way, without making any of the distinctions I've introduced).  Since, according to Rey, internal scanning is essential to consciousness, he concludes that the concept of consciousness is incoherent.  If one regards the various elements of the mongrel concept that I have been delineating as elements of a single concept, then that concept is indeed incoherent and needs repair by making distinctions along the lines I have been suggesting.  I doubt that the ordinary concept of consciousness is sufficiently determinate for it to be incoherent, though whether or not this is so is an empirical question about how people use words that it is not my job to decide.  However that inquiry turns out,  Rey’s mistake is to trumpet the putative incoherence of the concept of consciousness as if it showed the incoherence of the concept of phenomenal consciousness.[17]

 

Rosenthal (1997) defines reflexive consciousness as follows: S is a reflexively conscious state of mine if and only if S is accompanied by a thought--arrived at non-inferentially and non-observationally--to the effect that I am in S.  He offers this “higher order thought” (HOT) theory as a theory of phenomenal consciousness.  It is obvious that phenomenal consciousness without HOT and HOT without phenomenal consciousness are both conceptually possible.  For example, perhaps dogs and infants have phenomenally conscious pains without higher order thoughts about them.  For the converse case, imagine that by bio-feedback and imaging techniques of the distant future, I learn to detect the state in myself of having the Freudian unconscious thought that it would be nice to kill my father and marry my mother.  I could come to know--non-inferentially and non-observationally--that I have this Freudian thought even though the thought is not phenomenally conscious. 

 

Rosenthal sometimes talks as if it is supposed to be a basic law of nature that phenomenal states and HOTs about them co-occur.  That is a very adventurous claim.  But even if it is true, there must be a mechanism that explains the correlation, as the fact that both heat and electricity are carried by free electrons explains the correlation of electrical and thermal conductivity.  But any mechanism breaks down under extreme conditions, as does the correlation of electrical and thermal conductivity at extremely high temperatures.  So the correlation between phenomenality and HOT would break down too, showing that higher order thought does not yield the basic scientific nature of phenomenality.

 

Rosenthal’s definition of his version of monitoring-consciousness has a number of ad hoc features.  “Non-observationally” is required to rule out (e.g.) a case in which I know about a thought I have repressed by observing my own behavior.  “Non-inferentially” is needed to avoid a somewhat different case in which I appreciate (non-observationally) my own pain and infer a repressed thought from it.  Further, Rosenthal’s definition involves a stipulation that the possessor of the monitoring-conscious state is the same as the thinker of the thought--otherwise my thinking about your pain would make it a conscious pain.  All these ad hoc features can be eliminated by moving to the following definition of monitoring-consciousness: S is a monitoring-conscious state if and only if S is phenomenally presented in a thought about S.  This definition uses the notion of phenomenality, but this is no disadvantage unless one holds that there is no such thing apart from monitoring itself.  The new definition, requiring phenomenality as it does, has the additional advantage of making it clear why monitoring-consciousness is a kind of consciousness. 
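Set out schematically (this is just a restatement of the two definitions above; the predicate abbreviations and logical notation are mine), the contrast is:

```latex
% Rosenthal's HOT definition versus the revised definition of
% monitoring-consciousness; abbreviations are mine.
\begin{align*}
\textbf{(HOT)}\quad & \mathrm{ReflCon}(S) \leftrightarrow \exists T\,[\,T \text{ is \emph{my} thought that I am in } S\\
& \qquad \wedge\ T \text{ was arrived at non-inferentially and non-observationally}\,]\\[4pt]
\textbf{(Revised)}\quad & \mathrm{MonCon}(S) \leftrightarrow S \text{ is phenomenally presented in a thought about } S
\end{align*}
```

The riders in the first definition are exactly the ad hoc features just enumerated; the second trades all of them for the single appeal to phenomenality.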

 

There is an element of plausibility to the collapse of P-consciousness into monitoring-consciousness. Consider two dogs, one of which has a perceptual state whereas the other has a similar perceptual state plus a representation of it.  Surely the latter dog has a conscious state even if the former dog does not.  Quite right, because consciousness of plausibly brings consciousness with it. (I'm only endorsing the plausibility of this idea, not its truth.) But the converse is more problematic.  If I am conscious of a pain or a thought, then, plausibly, that pain or thought has some P-conscious aspect.  But even if consciousness of entails P-consciousness, that gives us no reason to believe that P-consciousness entails consciousness of, and it is the implausibility of this converse proposition that is pointed to by the dog problem.  The first dog can have a P-conscious state too, even if it is not conscious of it.

 

Perhaps you are wondering why I am being so terminologically liberal, counting P-consciousness, A-consciousness, monitoring consciousness and self-consciousness all as types of consciousness.  Oddly, I find that many critics wonder why I would count phenomenal consciousness as consciousness, whereas many others wonder why I would count access or monitoring or self consciousness as consciousness.  In fact two reviewers of this paper complained about my terminological liberalism, but for incompatible reasons.  One reviewer said: "While what he uses ["P-consciousness"] to refer to--the "what it is like" aspect of mentality--seems to me interesting and important, I suspect that the discussion of it under the heading "consciousness" is a source of confusion...he is right to distinguish access-consciousness (which is what I think deserves the name "consciousness") from this."  Another reviewer said: "I really still can't see why access is called...access-consciousness?  Why isn't access just...a purely information processing (functionalist) analysis?"  This is not a merely verbal matter.  In my view, all of us, despite our explicit verbal preferences, have some tendency to use `conscious' and related words in both ways, and our failure to see this causes a good deal of difficulty in thinking about "consciousness". 

 

I've been talking about different concepts of "consciousness" and I've also said that the concept of consciousness is a mongrel concept.  Perhaps, you are thinking, I should make up my mind.  My view is that `consciousness' is actually an ambiguous word, though the ambiguity I have in mind is not one that I've found in any dictionary.  I started the paper with an analogy between `consciousness' and `velocity', and I think there is an important similarity.  One important difference, however, is that in the case of `velocity', it is easy to get rid of the temptation to conflate the two senses, even though for many purposes the distinction is not very useful.  With `consciousness', there is a tendency towards "now you see it, now you don't."  I think the main reason for this is that P-consciousness presents itself to us in a way that makes it hard to imagine how a conscious state could fail to be accessible and self-reflective, so it is easy to fall into habits of thought that do not distinguish these concepts.[18]

 

The chief alternative to the ambiguity hypothesis is that there is a single concept of consciousness that is a cluster concept.  For example, a prototypical religion involves belief in supernatural beings, sacred and profane objects, rituals, a moral code, religious feelings, prayer, a worldview, an organization of life based on the worldview and a social group bound together by the previous items (Alston, 1967).  But for all of these items, there are actual or possible religions that lack them.  For example, some forms of Buddhism do not involve belief in a supreme being and Quakers have no sacred objects.  It is convenient for us to use a concept of religion that binds together a number of disparate concepts whose referents are often found together.

 

The distinction between ambiguity and cluster concept can be drawn in a number of equally legitimate ways that classify some cases differently. That is, there is some indeterminacy in the distinction.  Some might even say that velocity is a cluster concept because for many purposes it is convenient to group average and instantaneous velocity together.  I favor tying the distinction to the clear and present danger of conflation, especially in the form of equivocation in an argument.  Of course, this is no analysis, since equivocation is definable in terms of ambiguity.  My point, rather, is that one can make up one's mind about whether there is ambiguity by finding equivocation hard to deny.  In Block (1995), the longer paper from which this paper derives, I give some examples of conflations.

 

When I called consciousness a mongrel concept I was not declaring allegiance to the cluster theory.  Rather, what I had in mind was that an ambiguous word often corresponds to an ambiguous mental representation, one that functions in thought as a unitary entity and thereby misleads.  These are mongrels.  I would also describe velocity and degree of heat (as used by the Florentine Experimenters of the 17th Century) as mongrel concepts.  This is the grain of truth in the cluster-concept theory.

 

Note the distinction between the claim that the concept of consciousness is a mongrel concept and the claim that consciousness is not a natural kind (Churchland, 1983, 1986). The former is a claim about the concept, one that can be verified by reflection alone.  The latter is like the claim that dirt or cancer are not natural kinds, claims that require empirical investigation.[19]

 

 

REFERENCES

 

Alston, W. (1967) Religion.  In The Encyclopedia of Philosophy. Macmillan/Free Press, 140-145.

 

Armstrong, D.  M.  (1968) A Materialist Theory of Mind.  Humanities Press

 

--(1981) What is consciousness? In The Nature of Mind.  Cornell University Press

 

Atkinson, A. and Davies, M. (1995) “Consciousness without Conflation”.  The Behavioral and Brain Sciences 18, 2: 248-249

 

Baars, B.J.  (1988) A Cognitive Theory of Consciousness.  Cambridge University Press

 

Block, N.  (1980) What is functionalism? In N.  Block (ed) Readings in the Philosophy of Psychology vol 1.  Harvard University Press

 

--(1990) Consciousness and accessibility.  Behavioral and Brain Sciences 13: 596-598

 

--(1991) Evidence against epiphenomenalism.  Behavioral and Brain Sciences 14 (4):670-672

 

--(1992) Begging the question against phenomenal consciousness. Behavioral and Brain Sciences

 

--(1993) Review of D. Dennett, Consciousness Explained.  The Journal of Philosophy XC, 4: 181-193

 

--(1994) "Consciousness", "Functionalism", "Qualia".  In S.  Guttenplan (ed) A Companion to Philosophy of Mind. Blackwell.

 

--(1995) “On a Confusion about a Function of Consciousness” in The Behavioral and Brain Sciences 18, 2, 1995

 

--(2001) “Paradox and cross purposes in recent work on consciousness”.  Cognition 79, 1-2: 197-219

 

Block, N. and Dworkin, G. (1974) "IQ, Heritability and Inequality," Part I.  Philosophy and Public Affairs 3, 4: 331-409

 

Burge, T. (1997) Two kinds of consciousness.  In N. Block, O. Flanagan & G. Guzeldere (eds) The Nature of Consciousness: Philosophical Debates.  MIT Press

Carruthers, P.  (1989) Brute experience.  Journal of Philosophy 86

 

--(1992) Consciousness and concepts.  Proceedings of the Aristotelian Society, Supplementary Volume LXVI: 40-59

 

Chalmers, D.J.  (1997) “Availability: The Cognitive Basis of Experience?”  In N. Block, O. Flanagan & G. Guzeldere (eds) The Nature of Consciousness: Philosophical Debates.  MIT Press

 

Churchland, P.S.  (1983) Consciousness: the transmutation of a concept.  Pacific Philosophical Quarterly 64: 80-93

 

Crick, F.  and Koch, C.  (1990) Towards a neurobiological theory of consciousness.  Seminars in the Neurosciences 2: 263-275

 

Crick, F. (1994).  The Astonishing Hypothesis. Scribners

 

Davies, M.  & Humphreys, G.  (1993a) Consciousness.  Blackwell

 

Davies, M.  & Humphreys, G.  (1993b) Introduction.  In Davies and Humphreys (1993a)

 

Dennett, D.  (1991) Consciousness Explained.  Little Brown.

 

--(1993) The message is: there is no medium.  In Philosophy and Phenomenological Research III, 4

--(2001) “Are we explaining consciousness yet?”  Cognition 79, 1-2: 221-237

 

Dennett, D.  & Kinsbourne, M.  (1992a) Time and the observer: the where and when of consciousness in the brain.  Behavioral and Brain Sciences 15: 183-200

 

Dennett, D.  & Kinsbourne, M.  (1992b) Escape from the Cartesian theater.  Behavioral and Brain Sciences 15: 234-248

 

Farah, M. (1994) Visual perception and visual awareness after brain damage: a tutorial overview.  In Umilta and Moscovitch, 1994

 

Flanagan, O.  (1992) Consciousness Reconsidered MIT Press

 

Gallup, G.  (1982) Self-awareness and the emergence of mind in primates. American Journal of Primatology 2: 237-248

 

 

Goldman, A.  (1993a) The psychology of folk psychology.  The Behavioral and Brain Sciences 16, 1: 15-28

 

Goldman, A.  (1993b) Consciousness, folk psychology and cognitive science.  Consciousness and Cognition II, 3

 

Goodale, M.  and Milner, D.  (1992) Separate visual pathways for perception and action.  Trends in Neurosciences 15: 20-25

Hauser, M. D., Kralik, J., Botto, C., Garrett, M., and Oser, J. (1995). Self-recognition in primates: Phylogeny and the salience of species-typical traits. Proc. Nat. Acad. Sci.  92,  10811-10814.

Hill, C. (1991) Sensations: A Defense of Type Materialism.  Cambridge University Press

 

Humphrey, N.  (1992) A History of the Mind.  Simon & Schuster

 

Huxley, T.H. (1866) Lessons in Elementary Physiology 8, p. 210.  Quoted in Humphrey, 1992.

 

Jackendoff, R.  (1987) Consciousness and the Computational Mind.  MIT Press

 

Jackson, F.  (1986) What Mary didn't know.  Journal of Philosophy 83: 291-95

 

Kirk, R.  (1992) Consciousness and concepts. Proceedings of the Aristotelian Society, Supplementary Volume LXVI: 23-40

 

Kuhn, T.  (1964) A function for thought experiments.  In Melanges Alexandre Koyre, Vol 1.  Hermann: 307-334

 

Levine, J.  (1994) Review of Owen Flanagan's Consciousness Reconsidered In The Philosophical Review

 

Loar, B.  (1990) Phenomenal properties.  In J. Tomberlin (ed) Philosophical Perspectives: Action Theory and Philosophy of Mind.  Ridgeview. 

 

Lormand, E. (forthcoming) What qualitative consciousness is like. Manuscript.

 

Lycan, W.  (1987) Consciousness.  MIT Press

 

McGinn, C.  (1991) The Problem of Consciousness.  Blackwell

 

Malcolm, N. (1984), “Consciousness and Causality” in Armstrong, D.M. and Malcolm, N. Consciousness and Causality, Blackwell: Oxford.

 

Marcel, A.  J.  (1986) Consciousness and processing: choosing and testing a null hypothesis.  The Behavioral and Brain Sciences 9: 40-41

 

Nagel, T.  (1974) What is it like to be a bat? Philosophical Review 83: 435-450

--(1979) Mortal Questions.  Cambridge University Press

 

Natsoulas, T.  (1993) What is wrong with the appendage theory of consciousness? Philosophical Psychology VI,2: 137-154

 

Nelkin, N.  (1993) The connection between intentionality and consciousness.  In Davies and Humphreys (1993a)

 

Povinelli, D. (1994) What chimpanzees know about the mind.  In Behavioral Diversity in Chimpanzees.  Harvard University Press

 

Rey, G.  (1983) A reason for doubting the existence of consciousness.  In Consciousness and Self-Regulation, vol 3.  R.  Davidson, G.  Schwartz, D. Shapiro (eds).  Plenum

 

--(1988) A question about consciousness.  In Perspectives on Mind, H. Otto & J. Tuedio (eds).  Reidel

 

Rosenthal, David (1986) Two concepts of consciousness.  Philosophical Studies 49: 329-359

 

--(1997) “A Theory of Consciousness”, in Block, Ned, Flanagan, Owen & Guzeldere, Guven, 1997: The Nature of Consciousness: Philosophical Debates, Cambridge: MIT Press

 

Schacter, D.  (1989) On the relation between memory and consciousness: dissociable interactions and conscious experience.  In: H.  Roediger & F. Craik (eds), Varieties of Memory and Consciousness: Essays in Honour of Endel Tulving Erlbaum

 

Searle, J.  (1983) Intentionality Cambridge

 

--(1990) Who is computing with the brain? Behavioral and Brain Sciences 13:4: 632-642

 

--(1992) The Rediscovery of the Mind MIT Press

 

Shoemaker, S.  (1975) Functionalism and qualia.  Philosophical Studies27: 291-315. 

 

--(1981) The inverted spectrum.  The Journal of Philosophy 74,7:357-381

 

Stich, S.  (1978) Autonomous psychology and the belief-desire thesis.  The Monist 61

 

Van Gulick (1989) What difference does consciousness make? Philosophical Topics 17,1: 211-230

 

Van Gulick (1993) Understanding the phenomenal mind: are we all just armadillos? In Davies and Humphreys (1993a)

 

Weiskrantz, L.  (1988) Some contributions of neuropsychology of vision and memory to the problem of consciousness.  In Marcel and Bisiach (1988).

 

White, S.  L.  (1987) What is it like to be an homunculus.  Pacific Philosophical Quarterly 68: 148-174

 

Wiser, M. & Carey, S. (1983) When heat and temperature were one.  In D. Gentner and A. Stevens (eds) Mental Models  Lawrence Erlbaum

 



[1] Abridged (with changes by the author) from “On a Confusion about a Function of Consciousness” in The Behavioral and Brain Sciences 18, 2, 1995 with permission of the author and Cambridge University Press.  I have changed only what seems mistaken even from the point of view of my former position.  No attempt has been made to systematically update the references.

[2] But what is it about thoughts that makes them P-conscious?  One possibility is that it is just a series of mental images or subvocalizations that makes thoughts P-conscious.  Another possibility is that the contents themselves have a P-conscious aspect independently of their vehicles.  See Lormand, forthcoming, and Burge, 1997.

[3] My view is that although P-conscious content cannot be reduced to or identified with intentional content (at least not on relatively a priori grounds), P-conscious contents often, maybe always, have an intentional aspect, representing in a primitive non-intentional way.

[4] I know some will think that I invoked inverted and absent qualia a few paragraphs above when I described the explanatory gap as involving the question of why a creature whose brain has a physiological and functional nature like ours couldn't have different experiences, or none at all.  But the spirit of the question as I asked it allows for an answer that explains why such creatures cannot exist, and thus there is no presupposition that these are real possibilities.

[5] I have been using the P-consciousness/A-consciousness distinction in my lectures for many years, but it only found its way into print in my "Consciousness and Accessibility" (1990) and in my (1991, 1992, 1993).  My claims about the distinction have been criticized in Searle (1990, 1992) and Flanagan (1992)--I reply to Flanagan below--and there is an illuminating discussion in Davies and Humphreys (1993b), one point of which will be taken up in a later footnote.  See also Levine's (1994) review of Flanagan, which discusses Flanagan's critique of the distinction, and Kirk (1992) for an identification of P-consciousness with something like A-consciousness.

[6] The full definition was: A state is access-conscious if, in virtue of one’s having the state, a representation of its content is (1) inferentially promiscuous, that is, poised for use as a premise in reasoning, (2) poised for rational control of action, and (3) poised for rational control of speech.

[7] Dennett (1991) and Dennett and Kinsbourne (1992a) advocate the “multiple drafts” account of consciousness.  Dennett switched to the cerebral celebrity view in his 1993 paper.

[8] See Dennett (2001) and Block (2001) for a more sophisticated treatment of this dialectic.

[9] Some may say that only fully conceptualized content can play a role in reasoning, be reportable, and rationally control action.  Such a view should not be adopted in isolation from views about which contents are personal and which are sub-personal.

[10] The concept of P-consciousness is not a functional concept; however, I acknowledge the empirical possibility that the scientific nature of P-consciousness has something to do with information processing.  We can ill afford to close off empirical possibilities, given the difficulty of solving the mystery of P-consciousness.

[11] The distinction has some similarity to the sensation/perception distinction; I won't take the space to lay out the differences.  See Humphrey (1992) for an interesting discussion of the latter distinction.

[12] If you are tempted to deny the existence of these states of the perceptual system, you should think back to the total zombie just mentioned.  Putting aside the issue of the possibility of this zombie, note that on a computational notion of cognition, the zombie has all the same A-conscious contents that you have (if he is your computational duplicate).  A-consciousness is an informational notion.  The states of the superblindsighter's perceptual system are A-conscious for the same reason as the zombie's.

[13] Farah claims that blindsight is more degraded than sight.  But Weiskrantz (1988) notes that his patient DB had better acuity in some areas of the blind field (in some circumstances) than in his sighted field.  Her "degraded" would be better understood in terms of lack of access.

[14] Thus, there is a conflict between this physiological claim and the Schacter model, which dictates that destroying the P-consciousness module will prevent A-consciousness.

[15] See White (1987) for an account of why self-consciousness should be firmly distinguished from P-consciousness, and why self-consciousness is more relevant to certain issues of value.

[16] The pioneer of these ideas in the philosophical literature is David Armstrong (1968, 1980).  William Lycan (1987) has energetically pursued self-scanning, and David Rosenthal (1986, 1993), Peter Carruthers (1989, 1992) and Norton Nelkin (1993) have championed higher order thought.  See also Natsoulas (1993).  Lormand (forthcoming) makes some powerful criticisms of Rosenthal.

[17] To be fair to Rey, his argument is more like a dilemma: for any supposed feature of consciousness, either a laptop of the sort we have today has it or else you can't be sure you have it yourself. In the case of P-consciousness, laptops don't have it, and we are sure we do, so once we make these distinctions, his argument loses plausibility.

[18] This represents a change of view from Block, 1994, wherein I said that `consciousness' ought to be ambiguous rather than saying it is now ambiguous.

[19] I would like to thank Tyler Burge, Susan Carey, David Chalmers, Martin Davies, Wayne Davis, Bert Dreyfus, Guven Guzeldere, Paul Horwich, Jerry Katz, Leonard Katz, Joe Levine, David Rosenthal, Jerome Schaffer, Sydney Shoemaker, Stephen White and Andrew Young for their very helpful comments on earlier versions of this paper.  I have been giving this paper at colloquia and meetings since the fall of 1990, and I am grateful to the many audiences that have made interesting and useful comments, especially the audience at the conference on my work at the University of Barcelona in June 1993.