How an experiment about consciousness in rhesus monkeys may imply two kinds of cognitive processing
David L. Boyer
Author’s c.v.: David L. Boyer. B.A. 1968, Yale College. M.S. 1970, Pacific Lutheran University. Ph.D. 1977, Boston University. Taught in the Philosophy Department at St. Cloud State University, 1976-2006.
Abstract: M.S. Ben-Haim et al. have shown that rhesus monkeys, like humans, can be visually conscious: both species guess differently, left or right, about where a target image will appear depending on whether a hint was subliminal. This finding is of interest to the philosophy of psychology and the philosophy of mind, both carried out in large part by psychologists. Based on information as found in nature I offer definitions—none very original, and briefly developed—of cognition, representations and consciousness. Laypeople register these routinely in themselves and in fellow humans. Information in nature has causal, propositional and inferential features that cognitive systems inherit. I derive a possibly testable speculation about how unconscious and conscious processing may have worked differently in the experiment: in the latter, the humans and monkeys took a cognitive step back and made inferences based in part on the character of their (single or series) visual representations of the cue lights, thereby comparing them to the targets.
The research
Researchers at Yale and other universities have shown that rhesus monkeys are able, like humans, to be conscious of some of what they see; they have devised a method for detecting this. A popular report describes their work:
In a series of experiments, they had monkeys and humans guess whether a target image would appear on the left or right side of a screen. Before the target appeared, participants received a visual cue—a small star—on the side opposite of where the target would subsequently appear. The researchers varied whether the cue was presented supraliminally [long enough to be noticed] or subliminally. When the cue was presented for a few seconds, human participants successfully learned that the target would appear in the opposite location from the cue. But when the cue was presented subliminally—quickly enough that it escaped people’s conscious perception—participants showed a different pattern of performance; they continued to choose the side that was subliminally cued, failing to learn the rule that the cue predicted the opposite side.
Surprisingly, the researchers found that monkeys showed exactly the same response patterns as the people did….
The researchers conclude that some perception by rhesus monkeys is conscious and some unconscious; and that these utilize two different levels of processing. It is that point about levels of processing that I’ll try to enlarge on by the end.
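The contingency the subjects faced can be made concrete with a toy simulation (hypothetical Python of my own devising, not the authors’ code or data): since the target always appears opposite the cue, a process that learns from past cue-target pairings converges on the opposite-side rule, while a process that simply orients toward the cued side is wrong on every trial.

```python
import random

def run_trials(n_trials, strategy):
    """Simulate the experiment's contingency: the target always
    appears on the side opposite the cue. Returns accuracy."""
    correct = 0
    history = []  # past (cue_side, target_side) pairs
    for _ in range(n_trials):
        cue = random.choice(["left", "right"])
        target = "right" if cue == "left" else "left"
        guess = strategy(cue, history)
        history.append((cue, target))
        if guess == target:
            correct += 1
    return correct / n_trials

def cued_side(cue, history):
    # The pattern seen with subliminal cues: orient toward the cue
    # itself, which here is always wrong.
    return cue

def learn_rule(cue, history):
    # The pattern seen with supraliminal cues: once every past trial
    # shows cue and target on opposite sides, infer the rule.
    if history and all(c != t for c, t in history):
        return "right" if cue == "left" else "left"
    return random.choice(["left", "right"])

print(run_trials(1000, cued_side))   # 0.0: the cued side never wins
print(run_trials(1000, learn_rule))  # near-perfect after the first trial
```

The point of the sketch is only to display the structure of the task, not to model the cognition; the two strategy functions stand in for the two response patterns the researchers observed.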
I will suggest, broadly, what kind of distinctive processing may be at work on the conscious side. Along the way I’ll sketch one connection between the philosophy of consciousness and psychological research.
To do so I’ll sneak up on a conclusion about consciousness in rhesus monkeys, starting from a sketch of philosophy in general and working toward philosophical accounts of consciousness. I’ll unavoidably start a lot of hares, that is, raise collateral philosophical problems I won’t address, in part because doing so won’t advance my purposes here and in part because other philosophers know many times more than I do about those in-house controversies. Some definitions I’ll just wave at in passing. I’ll spring my conclusion, specifically about conscious and unconscious processing in monkeys and humans, right at the end.
You’ll also find that my key proposals lack much argumentative support, because, again, convincing you I am right doesn’t advance my aim of showing more generally how a line could be drawn from philosophy to psychology. If I don’t get it right, others may. And philosophers won’t find anything novel here; the things I’ll say are in the air, even if not all philosophers will be on board with them.
Besides saying too little I may also seem to say too much. The reader may ask, “Why waste my time on the obvious?” But on reflection I see that as a strength. A lot of philosophy consists in articulating the obvious because (a) it’s intellectually productive to do so; (b) it may not be so obvious after all; (c) it’s pertinent to the basic questions we need to ask; and (d) the more obvious an answer to a basic conceptual question sounds, the more apt it is to be a right answer.
Starting here, if you’re passingly familiar with today’s philosophical Zeitgeist you can skim to the paydirt at the end, under “At last, rhesus monkeys: a suggestion about conscious processing.” Not all philosophers will agree with all my claims between here and there, but even if I’m wrong about some of this I hope to have shown one way in which philosophy and experimental psychology can augment one another.
What is philosophy?
That in itself is controversial, but here’s a good working definition: Philosophy is the attempt to think through answers to basic questions. I won’t define “basic question” here except to observe that these tend to be broad in scope and their answers, if correct, underlie lots of other things. But as you can see, “basic” and “underlie” are two versions of the same metaphor. So that’s not much help, is it?
The word “attempt” identifies philosophy as a non-success term, which is right because there can be (and there are) lots of mistakes and even incoherence in philosophy. But there are lots of productive results as well. I’m no believer in “It’s all just questions.”
And “think through” restricts philosophy’s proper scope to questions that don’t need scientific, observational confirmation. It’s not really a restriction, more a shading off.
“The philosophy of X” identifies X as a specialized subject area. There exists, for example, the philosophy of sport: What is sport? What is it good for? We all ask basic questions, so academic philosophers don’t have a monopoly. Specialists in area X frequently engage in the best philosophy of X and often collaborate with academic philosophers. Chapter 0 of, let’s say, an intro text in biology might identify living things as metabolizing, reproducing etc., thereby addressing the basic “What is it?” question about that science.
The nature of psychological concepts, including cognition, belief and consciousness
The experiment with rhesus monkeys has implications for the philosophy of psychology and the philosophy of mind, which are both carried out in large part by psychologists as they devise experiments and assess the significance of outcomes.
By the essence of X I mean what it is to be X. But I don’t mean this as in “To be a spy is to sleep with one eye open and trust no one.” I mean it as in “to be a spy is to ferret out secrets from an adversary.” A statement of an essence is of course a certain sort of definition. Sometimes variant definitions can capture the same essence. (Consider definitions of “square.”)
The kind of concepts I have in mind for this essay are not like bachelor, whose definition trips off the tongue. Nor are they like gold, whose essence was a scientific discovery. People were able to identify gold for millennia before we learned that to be an atom of gold is to have 79 protons in one’s nucleus.
In this essay I want to aim at a kind of essence that lies midway between bachelors and gold. Consider a knotted loop as in mathematical knot theory. One generally gets the idea right away but has trouble saying what distinguishes these knots from a simple loop, the unknot. But once one reads that a knot can’t be laid flat on a table it’s easy to see that that captures the property one was responding to, or registering, all along.
I would call this kind of essence an unarticulate concept. Yes, I made that word up. For small children square is an unarticulate concept; they just know them when they see them, thereby registering that their familiar essence is instantiated.
I propose that cognition, belief and consciousness are unarticulate concepts for grownups. You land on Mars, say, and become familiar with a mushroom-shaped life form. Eventually you become sufficiently attuned to the mushrooms’ behavior to register that they possess cognition and even consciousness. You have no trouble citing your evidence for these attributions but find it hard to say why that evidence should count. You are responding to the mushrooms as instantiating a couple of essences without knowing quite how.
But a proposed definition of conscious cognition might clarify things for you up there on Mars. That’s my parsing of how philosophical definitions can, if done right, provide genuine, novel substance to familiar subject matter. Far from making empirical claims about, say, conscious cognition, such definitions spell out the lingua franca that scientists and lay people use to attribute—wrongly or rightly—conscious cognition, appraise its everyday consequences and advance psychological theories about how it works.
In philosophy we call such things theories or accounts of cognition, consciousness and so on. They are not to be confused with further theories about how the human nervous system realizes cognition; how, why or when it evolved; or how to get robots to simulate it. Same for monkeys. Whatever we cook up has got to work for Martians and future robots just as well.
Information…
…is widely assumed by psychologists and philosophers today to be the foundation and substance of cognition. When we register cognition in our fellow primates we are sensing the organizing power of interlocking pieces of information.
I don’t mean “information” as in mathematical information theory; that’s a different sense of the term. And so is information in the sense in which people inform one another of things. There are connections there to my sense, but they aren’t the same thing. My ideas here were inspired by a theory from the late Fred Dretske, although you won’t find them collected together in one place in the works I am citing here.
Information occurs in nature, even outside cognitive systems. A footprint in the earth registers the information that a cougar passed by here since the last rain, even if nobody happens by and reads off that information, pace the common wisdom that information exists only in the mind of the interpreter. The information is, rather, simply there. It’s a kind of fact.
Information is causal in character. The presence of the footprint was caused by the cougar’s passing by, and the presence of the footprint would enable one to infer to the cougar’s passing by (even if nobody actually makes the connection). That’s how the footprint bears the information about the cougar. In a variant on that causal pattern, wet sidewalks out the window bear the information that the newly tilled garden will be too muddy to walk in; the two have a common prior cause (a recent rain), and the wet sidewalks enable one to infer back to the recent rain and thence forward to the muddy garden.
Information is intentional in the sense that it takes sentential complements, either in the form of “that” clauses (“…that a cougar passed by here”) or in a thin disguise: “the presence of the footprint,” “the cougar’s passing by.” The intentionality of causality, inherited by information, is what makes it possible for cognition to develop in the natural world.
Lots of information is lodged in our bodies that is not cognitively informational. A tattoo might register information about the manufacturer of its ink. The same holds for old injuries we were unaware of at the time. So we need to tell a tale about what it is for some networks of information, and not others, to underlie cognitive systems. The cognitive layer that we recognize in one another as supervening on information is generally a conceptually winnowed-out and bounded selection from an embedding informational hatchwork that is richer and more diffuse. But to stick with this essay’s purposes I need to slide by this entire problematic.
Information can be inferential.
Suppose the earth that holds the footprint is too dry and hard to take a fresh imprint. The hardened mud, then, carries the information that it’s been at least several days since the last rain. Taken together, the two pieces of information entail that a cougar came by here at least several days ago. I will call this pattern informational inference; it’s surely the basis of inferences made by cognitive subjects.
In the cognitive systems of living things (unlike the dried mud) an inferential conclusion is typically lodged, or realized, within a locus that is distinct from those of its informational premises. (Same for anthropomorphic scifi robots.)
And not all its premises will have been registered as prior information. A cognitively endowed creature C typically has ingrained, unlearned patterns of informational inference in place of some of the premises a logician would look for. That’s generally because C’s species, or a still older ancestral line, has registered or “learned” those premises through the cleverness of natural selection.
Information can be false or poorly grounded.
In most theoretical usage this is a contradiction. But in popular usage, false information is perfectly comprehensible. And false information as we find it in nature can underlie false or unreliable belief in cognitive individuals. Suppose the footprint was really made by a human with a cougar paw stamp. To do this she had to climb through dangerous terrain most people could not traverse. But she did. In that case I’d say a medium (this mud) that normally can veridically register the recent passage of a cougar can, under abnormal circumstances, falsely register the same thing.
Or suppose that several small rocks fall in the mud in the exact configuration of a cougar’s pawprint. The mud hardens and rats passing by knock the rocks loose. Again, an abnormal circumstance causes the mud to falsely register information that normally could only be true.
I need some vocabulary to proceed.
Here’s where it helps me to draw certain lines. Imagine one of today’s robotic vacuum cleaners bumping against a door, moving over a few inches and repeating the cycle. You say, “Oh, it thinks it can get into the kitchen that way.” But isn’t that anthropomorphic, and literally mistaken? I’d like to refer to the vacuum cleaner as having a proto-belief about the kitchen: a more primitive belief-like state. (“Proto-belief” has been used in other ways, none relevant here.)
Cognition is in common scientific usage a broad concept which, I believe, is marked by having at least proto-beliefs, and possibly beliefs tout court. (I prefer to define proto-belief as upwards-inclusive, but I will sidestep that point here. It’s easy enough to refer to “mere proto-belief” as needed.) By analogy we might say that a bartering system can be a proto-economy, but a true economy is a proto-economy that also involves money. (Economists might draw those lines differently. I am braced for being corrected.)
I’ve convinced myself that cognitive creatures with beliefs proper have minds, as laypeople use that term. Features of the mind that don’t look very informational, such as emotions, can, I think, be conceptually treated as derivative.
If I’m right, all of these, with and without the “proto‑”s, are unarticulate concepts: people would rarely think to invoke a structured informational theory to say how they can tell that a person has a certain belief or that the vacuum cleaner has a belief-like state; and the same for being able to attribute cognition without explicitly ringing in informational networks. But on my account that is precisely what we are registering when we routinely attribute belief, mere proto-belief and cognition. Being able to make those attributions without usually knowing how we do so may be part of our adaptation as a social species. I’ll shortly imply that consciousness is a similarly unarticulate concept.
Even cognition, as broad a concept as it is, has its lower limits. Thermostats and certain bacteria use information to guide what they do, but their informational systems, such as they are, lack the richness and holistic focus that generally qualify animals, I’d guess from arthropods on up, as having cognition. (Here I would include the mapping models of Roombas.) I suspect that various worms and mollusks lie on the sub-cognitive side of the divide, octopuses being a breathtaking exception; they plainly have minds. But a biologist might correct me on some of this.
The most basic representations in a proto-cognitive system are, I’d say, the ones that register the content of a proto-belief that P. They are sentential in content, if not in format. Thus they represent the fact, or would-be fact, that P. There are plenty of derivative types of representations. Some preserve the content of wishes, plans or Plan Bs. Some memorialize aspects or parts of propositions, like colors and objects. A map in your head holistically represents a whole raft of information, a propositional patty melt. Others keep track of proto-actions underway. But it will suit my purposes here to stick to the contents of proto-beliefs, including perceptual proto-beliefs.
Can you feel the text speeding up? Philosophers do dally on foundational questions. By the time I get back to rhesus monkeys, you will barely notice my main conclusion streaking by.
As with representations, consciousness is a multifarious concept. We can do things consciously, we can be generally conscious as opposed to being unconscious, and we can consciously consider the content of a belief. But I’ll stick with the one that I think is logically prior to all these: simply paying attention to X, which I find to be the same thing as being conscious of X. We call preconscious the things we aren’t paying attention to but could, whether by casual effort, reminders, lengthy effort, luck, journaling old memories or psychotherapy (assuming that would work).
Consider everyday comments like these: “From here your house looks like a blur on the hillside, behind the neighbor’s tree”; “As best I can recall my appointment is on Tuesday”; and “The map in my head says that the Dairy Queen is up this road.” In each case I seem to hear an oblique reference to a cognitive representation of something propositional (perhaps inter alia): representations of the juxtaposition of a house and a tree, the day of an appointment and the way to drive to Dairy Queen. We don’t generally have a full, clear idea of our representations and how they work, but we do have a dim notion that some of them are perceptual, some are memories and some are, well, a sort of map. Note also that in each case we derive conclusions about what is represented from attention to how it is represented. That kind of derivation is far from universal within the workings of cognitive systems. It’s comparatively rare among animals and nowhere near ubiquitous in human cognition; traditional empiricist pictures of the mind get that wrong.
That’s my nomination for what it is to pay attention to X. (In the rhesus monkeys X might be one of the tiny lights as seen, or the series of lights and target images as recalled.) Let R(X) be X’s representation in the subject’s cognitive system (or one of them, anyhow). I’d say that paying attention to X essentially and conceptually consists in deriving proto-beliefs about X from a combination of the cognitive content of R(X) as it represents X in the ordinary way, and from some aspect of the character of R(X) itself in its role as X’s representation. For the R(X)s I mentioned at the outset of the previous paragraph, such aspects might include such things as R(X)’s representing a house that is blurred and partially obscured by a tree; R(X)’s being recalled; and R(X)’s being what we sometimes call a map.
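The structure of that proposal can be restated as a schematic sketch (hypothetical Python; the names `Representation` and `attend` are mine, not the author’s, and nothing here is a model of real cognition): a representation R(X) carries ordinary content about X plus facts about its own character, and paying attention to X means deriving further proto-beliefs from both.

```python
from dataclasses import dataclass, field

@dataclass
class Representation:
    """A stand-in for R(X): what it says about X, plus facts about
    R(X) itself in its role as X's representation."""
    content: str                                   # first-order content about X
    character: dict = field(default_factory=dict)  # aspects of R(X) itself

def attend(rep):
    """Paying attention to X, on the proposed account: derive
    proto-beliefs from the content of R(X) *and* from aspects of
    the representation's own character."""
    beliefs = [rep.content]  # what R(X) represents, in the ordinary way
    for aspect, value in rep.character.items():
        # conclusions drawn from *how* X is represented
        beliefs.append(f"my representation of this is {aspect}: {value}")
    return beliefs

# The blurred-house example from the text, recast in this scheme:
house = Representation(
    content="the house is on the hillside behind the neighbor's tree",
    character={"mode": "a blurred perception, partially obscured by a tree"},
)
for belief in attend(house):
    print(belief)
```

The sketch is only meant to display the two sources the derivation draws on, the ordinary content and the character of R(X); it makes no claim about how a nervous system realizes either.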
Where are we so far?
Here’s a sweeping look back. Starting with a causal account of information as we find it in nature, complete with its propositional, factual content, we locate a coherent, collected, bounded cluster of potential streams of information, with an inward end (proto-perception and perhaps other entry points for information) and an outward end (proto-action). We narrow that cluster down, winnowing out content, so as to identify and demarcate a cognitive individual. From that basis we define a certain basic class of propositional representations that underlie (at least proto‑)belief, and a foundational sense of paying conscious attention to X by deriving further proto-beliefs about X, in part by taking a step back to get oversight over X along with a representation of X.
At last, rhesus monkeys: a suggestion about conscious processing
By now it’s almost a corollary to propose a manner in which humans and monkeys might process differently the two modes of reacting to the tiny lights that cue the opposite side, left or right, on which to expect a target image. Here is where I’ll transition from the analysis of concepts to a factual speculation about how humans and monkeys work. Here the philosophy of mind shakes hands with experimental psychology.
First let’s consider unconscious processing based on subliminal cues. Here the cognitive systems of both the humans and the monkeys took in the cue and somehow arrived at a choice as to which side to guess for the target, namely always the same side as the preceding light. I can’t speculate about what combination of hard wiring and learning since childhood may have led to that hard-unconscious conclusion. It may work on the model of unconscious priming—however that may work cognitively. On the account above, that processing did not extract and use any information about the character, in its own right, of the cognitive representations of the tiny stars as they were seen, in the manner that conscious processing would have done (second paragraph down).
(Now, the star-shaped cue lights were too brief to be brought to conscious attention; they were hard-unconscious, i.e. not even preconscious. That’s how the experimenters knew the resultant processing was unconscious. But I suspect that being hard-unconscious is not part of the explanation for the consistently wrong guesses; that is, I’m betting that preconscious-yet-somehow-unconscious cognition would also have been consistently wrong. To test that hypothesis we’d need to let the cue be consciously noticed but somehow detect resultant choosing that ended up proceeding non-consciously. In everyday life people often notice things without reasoning out their consequences. On the other hand, that would be a much more complex experiment, requiring still more clever ways to detect types of processing.)
Second, let’s consider conscious processing based on supraliminal cues. On the account above, that processing did extract and use facts about the character of cognitive representations of the tiny stars as seen (both each in turn and the series) in conjunction with information about the target and thereby arrived at a choice. Over a number of trials the subjects learned the opposite-side rule. It’s easy to see how that learning could have—in fact, surely must have—used those representations. We can paraphrase the processing as if it were explicit inference, and at least in the human subjects it may indeed have been silently spoken: “In every previous trial the target has appeared on the side opposite from where I saw the little star. I’ll bet it will always be that way.”
That speculation about processing provides two benefits. First, it might suggest further research strategies aimed at somehow detecting the second-level use of representations (in this case visual ones). Second, I hope it shows how theorizing about everyday psychological concepts like cognition and consciousness can help reveal the implications of research results.
 Moshe Shay Ben-Haim, Olga Dal Monte, Nicholas A. Fagan, Yarrow Dunham, Ran R. Hassin, Steve W. C. Chang, Laurie R. Santos (2021) Disentangling perceptual awareness from nonconscious processing in rhesus monkeys (Macaca mulatta), Proceedings of the National Academy of Sciences of the United States of America (PNAS) 118 (15) e2017543118.
 Hathaway, Bill (2021) Monkeys experience the visual world the same way people do, Yale News, March 29, 2021.
 Ben-Haim et al. (2021) passim, esp. in their introduction, Significance, Discussion and Conclusions.
 Kripke, Saul (1980) Naming and Necessity, Cambridge: Harvard University Press. (I’m appropriating his example for my own purposes here.)
 Dretske, Fred (1981) Knowledge and the Flow of Information, Cambridge: MIT.
 Dretske, Fred (1983) Précis of Knowledge and the Flow of Information, The Behavioral and Brain Sciences 6: 55-90, https://web.csulb.edu/~cwallis/382/readings/680/dretske.precis knowledge flow of info.pdf.
 Clearly information, and the cognition it underlies, are at home in the material world, and in fact this kind of philosophy of mind is meant, among other things, to show how matter can straightforwardly underlie cognition. That’s what I believe is actually the case. Still, there are dualist (soul-and-body) theories of mind. In order to be coherent, on my view, such a dualist theory would need to allow for information to be registered in souls, and for those souls to have causal traffic with the material bodies and environments of their owners. If so, fine. You won’t find that an informational theory of mind provides an across-the-board knockout against any sort of dualism. Arguments against soul theories must lie elsewhere.
 You sometimes hear claims that we and other animals are born knowing some things and thus (on my view) born with some information already registered. I can’t try to address that here. But it’s at least thinkable, so an account of the general concept of cognition had better allow for the idea. You sometimes encounter readymade knowledge in science fiction, and those stories seem perfectly coherent.