Acting from the Gut

Responsibility Without Awareness

Chandra Sripada


1. Introduction

It is widely agreed that to be morally responsible for an action, a person must satisfy certain epistemic requirements, and it is useful to divide these requirements into two parts. The first part addresses the question of what must be known (or at least sincerely believed; following the usage in this literature, I use these terms interchangeably). For example, one might insist a person needs to know what she is doing, her reasons for doing it, the consequences of what she does, and so on. Nearly all the philosophical attention — what little has been devoted to the epistemic requirement at all — has been devoted to this question. But, however we answer this question, there is an additional question that needs to be addressed: in what way must these things be known? A person can know something consciously: she can be aware of it at the time of acting. Or she can know something without awareness: the relevant mental states might be non-conscious.

Neil Levy’s impressive Consciousness and Moral Responsibility is among the few book-length treatments of this second question. This is an important book. It will be widely discussed and will set the agenda in this area for a long time to come.

Levy offers an extended defence of what he calls the consciousness thesis, which says that to be morally responsible for her action a person must be consciously aware of the features that make her action morally significant. In this commentary, I want to suggest that Levy’s thesis is too strong. A person can be morally responsible for an action even if some of the features that make the action morally significant are known non-consciously, so long as these non-conscious states play the right aetiological role in her action.

Most of my attention will be directed at a specific case, the case of Mark Twain’s Huckleberry Finn as interpreted and elaborated in recent philosophical work. Huck, a boy with a good heart, spontaneously tells a lie to save the runaway slave Jim from slave hunters. Huck acts from the gut; he does not know the reasons for which he acts or the properties in virtue of which his act has the moral significance it does. Yet he seems morally responsible and indeed praiseworthy for what he does.

Levy acknowledges this case presents a strong challenge to the consciousness thesis. I want to do three things in what follows. First, I clarify the case, filling in some details about the psychological aetiology of Huck’s action that will help us better understand how it stands with respect to the consciousness thesis. Next, I turn to two objections, drawn from Levy’s discussion, to the claim that Huck is morally responsible for what he does. The first says that conscious awareness of morally significant features is needed for an action to be intelligent and rationally guided. The second is based on Levy’s discussion of real self views of moral responsibility: Levy argues that non-conscious attitudes are too ‘thin’ to ground responsibility.

2. Clarifying the Huckleberry Finn Case

In Mark Twain’s Adventures of Huckleberry Finn, there is a remarkable scene involving Huck and his companion, the slave Jim. This scene has been elaborated and interpreted by several philosophers including Jonathan Bennett, Timothy Schroeder, and Nomy Arpaly.[1] Here I focus on the interpretation of the case offered by Arpaly in Unprincipled Virtue (see Arpaly, 2003, especially pp. 75–8).

Growing up in rural Missouri, Huck is inculcated with the belief that black slaves are property — a slave rightfully belongs to whoever owns him. Huck has absconded with the slave Jim and afterwards comes to think that he should turn Jim in. Doing otherwise would be wrong, Huck reasons. It would be a kind of stealing. Not too much later, two slave hunters approach Huck asking who is on his raft. This is the perfect opportunity for Huck to do what he thinks is right, and what he had resolved earlier to do, and turn Jim in. Huck, however, simply can’t bring himself to do this. Instead, he finds himself spontaneously telling a lie — at considerable risk to himself given the real potential of being caught — that gets the slave hunters to leave. He later curses himself for what he has done, saying he lacks the ‘spunk of a rabbit’.

Arpaly proposes that Huck’s spontaneous action is rooted in a non-conscious belief that Jim is a person and friend; Huck is responding to Jim’s humanity. Given Huck’s upbringing, how did he come to have these attitudes towards Jim? Arpaly suggests through extensive day-to-day personal interactions. Huck recognizes that he and Jim share the same idiom and superstitions. He talks to Jim about his hopes and fears. In this way, Huck is constantly presented with data that Jim is a person just like him. As a result, Huck experiences what Arpaly calls a ‘perceptual shift’; on some level, Huck sees Jim as a person and indeed as his friend, and therefore as someone deserving of certain kinds of respect and positive treatment.

These changes in Huck, Arpaly argues, aren’t a product of explicit reasoning. She says that Huck never makes a conscious inference along the following lines: ‘Jim acts in all ways like a human being, therefore there is no reason to treat him as inferior, and thus what all adults in my life think about blacks is wrong’ (ibid., p. 77). Rather, the registering of these data about Jim and the ensuing perceptual shift all occur at a non-conscious level. It is these non-conscious beliefs about Jim, Arpaly argues, that are expressed when Huck can’t turn Jim in and instead tells a lie.

Both common sense intuition and philosophical opinion appear to agree that Huck is morally responsible and praiseworthy for what he does. Levy’s consciousness thesis delivers a contrary verdict. To see why, let us examine in more detail Huck’s conscious and non-conscious beliefs as he stands before the slave hunters. Huck thinks (consciously) that turning Jim in is the right thing to do and indeed has previously resolved to do so. But Huck doesn’t do this and instead tells a lie. The features that give Huck’s action its moral significance plausibly consist of facts such as these: Jim is a person; Jim is a friend; Jim doesn’t deserve certain kinds of negative treatment. By stipulation, however, Huck is not consciously aware of these things. Now, Huck does believe these things — as a result of the perceptual shift described earlier, Huck holds all these beliefs about Jim non-consciously. And, except for being non-conscious, these beliefs otherwise play the action-theoretic role that one’s evaluative beliefs usually do in shaping one’s actions. But because these beliefs about the features that give his action its moral significance lie outside of conscious awareness, Levy’s consciousness thesis says Huck is not morally responsible for what he does.

I believe the Huck Finn case presents a clear prima facie counterexample to the consciousness thesis. Based on my reading of Levy there are, broadly speaking, two lines of argument that he invokes to show that Huck Finn-like agents, despite first impressions, aren’t after all morally responsible for what they do. I take these up in the next two sections.

3. Consciousness and Integration

Levy argues, convincingly in my view, that conscious awareness serves an integrative function. As evidence, he discusses various conditions and states in which conscious awareness is compromised: sleepwalking, persistent vegetative states, states involving elevated cognitive load, etc. These are all associated with limited, fixed, stereotyped responses. Levy writes: ‘The integration that consciousness provides allows for flexible, reasons-responsive online adjustment of behaviors. Without such integration, behaviors are stimulus driven rather than intelligent responses to situations…’ (Levy, 2014, p. 39).

Call the set of things about one’s action that one is aware of around the time of acting the ‘awareness set’. How populated must the awareness set be, how rich and detailed must its contents be, for the person to exhibit intelligent, rationally guided action? One might suppose the awareness set must be crammed quite full. Indeed, Levy’s consciousness thesis seems to suggest as much: if a person fails to know all the morally significant features of her action, this is supposed to compromise her agency in a way that undermines moral responsibility.

I disagree; I claim that intelligent, rationally guided action in fact requires substantially less information about one’s action to be conscious than one might suppose. That is, the contents of the awareness set can in fact be quite meagre. To see this, we will need to step back a bit and consider two contrasting approaches to how rational, integrative action might be produced.

Consider a mind with multiple motivational modules,[2] each specialized to handle a specific domain.[3] For example, there is a ‘fear’ system that stores information about important threats, performs rapid inferences about when a situation constitutes a threat, maintains a store of action plans for what to do about frequently encountered threats, and so on. There are a number of other systems as well, systems specialized for such things as food and nourishment, family and kinship, hierarchy and social status, norms and obligations, and many others.

In order to achieve rational, integrated action, these motivational modules must somehow be able to communicate with each other and generate an overall ‘verdict’ on what to do. If integration fails to occur — if only one module plays a role in producing action — then the result will be simple, stereotyped action that is responsive to just one motivational perspective rather than the integration of multiple varied perspectives.

Let us contrast two ways that the process of rational integration might operate. First, each separate motivational module might globally broadcast[4] all the propositional information stored within the module that is relevant to action selection. As a consequence, the person is consciously aware of all this information. For example, the fear system might send out information such as: ‘Approaching the bear is dangerous because there is evidence that bears unpredictably maul people’ or ‘Pressing the left button is dangerous because the last four times I pressed that button, it resulted in an electric shock’, and so on. There would of course need to be some central mechanism that takes all this information and integrates it for the purposes of finding the overall best action. Call this the total information model because it relies on comprehensive broadcast of all the propositional information within each motivational module.

The second model is what I shall call the valenced signal model. In this model, a motivational module doesn’t broadcast all its internal, proprietary information pertaining to whether to undertake an action. It instead broadcasts only the single most important piece of information needed for rational action guidance: a valenced signal that represents the module’s bottom-line take on the goodness or badness of undertaking the prospective action. That is, the signal has the structure of a scalar and it represents the overall ‘to be doneness’ of the action from the perspective of the considerations that are proprietary to the module. Applying this model to the case of the fear motivational module, the module doesn’t broadcast such things as ‘Approaching the bear is dangerous because there is evidence that bears unpredictably maul people’. Rather, the module uses this propositional information internally to compute a valenced signal about how good or bad — from the perspective of the module — it is to undertake the action under consideration. It is the valenced signal by itself, not the full set of propositional information that justifies setting the sign and strength of the signal, that is broadcast widely and of which the person is thus consciously aware. Here too, just as with the total information model, there would need to be a downstream mechanism that integrates the information broadcast from the separate modules to arrive at an overall motivational verdict. For example, a natural way this integration might work is that the downstream aggregation mechanism performs a simple addition operation: for each prospective action, it sums the valenced signals arising from the various motivational modules and selects the highest-scoring action.
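The aggregation step just described can be made concrete in a short sketch. The module names, signal values, and the simple summation rule below are illustrative assumptions of mine, not part of any formal specification of the model; the point is only that each module contributes a bare scalar, never its internal reasons:

```python
# A minimal sketch of the valenced signal model's downstream aggregation
# mechanism. Module names and signal magnitudes are hypothetical examples.

def select_action(signals_by_action):
    """Sum each candidate action's valenced signals across all modules
    and return the highest-scoring action."""
    totals = {
        action: sum(module_signals.values())
        for action, module_signals in signals_by_action.items()
    }
    return max(totals, key=totals.get)

# Each module broadcasts only a scalar: its bottom-line verdict on the
# 'to be doneness' of the action, with the justifying propositional
# information kept internal to the module.
signals = {
    "approach_bear": {"fear": -0.9, "curiosity": +0.3},  # total: -0.6
    "retreat":       {"fear": +0.6, "curiosity": -0.1},  # total: +0.5
}

print(select_action(signals))  # prints: retreat
```

The person, on this picture, consciously registers only the valenced signals themselves (and the resulting choice), which is why the awareness set stays meagre even though many considerations were indirectly brought to bear.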

It bears emphasis that, though the quantity of information that is consciously broadcast is quite limited, the valenced signal model still achieves extremely high levels of rational integration for the purposes of action guidance. This is because a wide variety of motivational modules, each separately sensitive to a diversity of considerations, evaluate the prospective action and output valenced signals. When these valenced signals are aggregated, the considerations that led to the assignment of the signals are, in effect, indirectly (and collectively) brought to bear in settling the question of whether to perform the action. Indeed, the degree of rational, integrative guidance achieved in the valenced signal model plausibly approximates that of the total information model, but the computational resources required are far less.

Which is more plausible as a model of how human minds achieve rational, integrative guidance of action, the total information model or the valenced signal model? There are two considerations that I believe strongly favour the latter. First, the valenced signal model has strong neurobiological support. A view broadly along the lines of this model was proposed by Antonio Damasio as part of his somatic marker hypothesis (see Damasio, 1994). Damasio’s view begins with the observation that, in our day-to-day transactions with the world, various different kinds of emotions, associated with different affective systems, are routinely elicited by the situations we encounter. He proposes that markers of these emotions, and in particular representations of the bodily states associated with them, are stored in a specific region of the brain, the ventromedial prefrontal cortex. During deliberation, when a person mentally ‘tries out’ candidate actions that are expected to lead to certain outcomes, these somatic markers previously associated with the outcomes are called up and replayed. Importantly, somatic markers function much like the valenced signals just described because all of the extraneous information about the situation that led to the occurrence of the original emotion — details about how this situation was understood, processed, and ultimately interpreted — is stripped away and only the information that represents the goodness and badness of the situation is retained.

The second consideration favouring the valenced signal model is that there is a basic tension at the heart of the total information model. Like many other theorists, Levy supports a modular account of mind because, among other reasons, modularity helps to solve a problem of information overload that would otherwise arise. A modular architecture allows separate specialized systems, each with its own proprietary information stores and inference mechanisms, to deal with distinct problem domains. If a single central cognition system had to assemble and evaluate all the information needed for all the various problem domains, it would quickly become overwhelmed. Of course, in solving the problem of information overload, modularity introduces another problem, the problem of how modules communicate in order to produce rational, integrative action, which is precisely the problem we were considering earlier.

The problem with the total information model is that it simply undoes whatever benefits modularity was supposed to bring and places us right back where we started. The total information model solves the problem of rational integration by having the separate modules broadcast all of their relevant internal information to some other downstream processor that is charged with assembling, evaluating, and integrating this information. But this just recreates the problem of information overload, exactly the problem that modularity was introduced to solve.

The valenced signal model thus has strong neurobiological and theoretical support. Let us assume the model is in fact correct (i.e. as a description of how human minds actually work), and turn to a different question: what implications does the model have for Levy’s consciousness thesis?

I believe the valenced signal model creates serious problems for the consciousness thesis. It should be clear that when prospective action is guided by valenced signals, the quantity of information about the action that the person is consciously aware of will be minimal, i.e. the contents of the awareness set will be meagre. That is, as a person considers what to do, he will consciously experience various valenced signals, e.g. a quick glow of pleasure or a sting of distress and the like, associated with considering each candidate action, and these signals will help guide his choice. But, much like Huck Finn, the person will (frequently[5]) be unable to say what considerations justify the selected action and give it its moral significance.

Importantly, on the valenced signal model, though the person is often not consciously aware of the justifications for why he does what he does, it is not the case that these justifications don’t exist. There are various considerations that play a rationalizing role in the aetiology of the action — they play this role by entering into the module-level calculation of valenced signals. But the person is not consciously aware of these considerations since it is only the valenced signals by themselves, and not the considerations that go into calculating them, that are broadcast.

Let me sum up this section. One way that Levy might show that Huckleberry Finn is not after all morally responsible for what he does is by arguing that conscious awareness of the features that make one’s action morally significant is needed for the intelligence and rational guidance of that action. I have suggested that the valenced signal model provides a plausible account of how modular human minds achieve rationally integrated action guidance. If this model is right, however, agents engaged in flexible reasons-responsive actions will nonetheless routinely fail to be consciously aware of the considerations that shape those actions, and this presents a serious problem for the consciousness thesis.

4. Non-conscious Attitudes and the Real Self

If Huckleberry Finn is morally responsible and praiseworthy for what he does, as common sense and philosophical opinion seem to think, what is the basis for this? What theory of moral responsibility can best explain this result? An attractive answer is provided by real self views. These views say a person is morally responsible for an action if that action expresses (at least some part of) the person’s evaluative point of view. Huck plausibly cares for Jim — even if he doesn’t consciously know he cares — and his spontaneously telling a lie to the slave hunters is expressive of this caring attitude. Since one’s cares are plausibly constituents of one’s evaluative point of view (I say more about this a bit later), real self views seem well positioned to explain why Huck is morally responsible and praiseworthy for what he does.

Levy disagrees. In Chapter 5, Levy argues that agents such as Huck who aren’t consciously aware of the morally significant features of their actions can’t be morally responsible for what they do according to the criteria offered by real self views. Much of Levy’s defence of this claim is occupied by a discussion of what social psychologists call ‘implicit attitudes’, so I need to say a little bit about what these are.

Implicit attitudes are evaluative attitudes typically directed at members of stigmatized groups. They operate below the level of conscious awareness and exert subtle effects on one’s thinking.[6] They can be measured with a number of special experimental procedures. One of the most widely used, the Implicit Association Test (IAT), measures differences in the time it takes to make stereotypical associations (e.g. European-American and intelligent or African-American and violent) versus counter-stereotypical associations. The magnitude of this difference, typically on the order of 50–100 milliseconds, is a measure of subjects’ non-conscious tendency to implicitly ‘believe’ the stereotypical associations.
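At its simplest, the measure just described is a difference of mean response latencies between the two pairing conditions. The reaction-time values below are invented for illustration, and actual IAT scoring algorithms (such as the widely used D-score) add further normalization steps not shown here:

```python
# A rough sketch of the basic IAT latency-difference measure.
# All reaction times (in milliseconds) are hypothetical.

def iat_effect(stereotypical_rts, counter_stereotypical_rts):
    """Mean latency difference between conditions: slower responses on
    counter-stereotypical pairings yield a positive effect, read as an
    implicit tendency toward the stereotypical association."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(counter_stereotypical_rts) - mean(stereotypical_rts)

stereotypical_rts = [642, 655, 630, 648]          # e.g. stereotypical pairings
counter_stereotypical_rts = [710, 725, 698, 731]  # the reversed pairings

print(iat_effect(stereotypical_rts, counter_stereotypical_rts))  # prints: 72.25
```

A positive difference in roughly the 50–100 millisecond range, as with these invented values, is the sort of effect magnitude the text mentions.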

I will refer to the kinds of implicit attitudes studied in social psychology using procedures such as the IAT as ‘SP-implicit attitudes’ to more clearly separate them from the larger category of non-conscious attitudes, the other members of which may or may not resemble SP-implicit attitudes.

There is considerable controversy about how best to understand SP-implicit attitudes. One view says they constitute full-fledged beliefs with propositional content and exhibit a full suite of belief-like functional roles, while another view says they are non-propositional in structure and represent little more than spontaneous associations, something like the way that thoughts of pepper spontaneously trigger thoughts of salt. Levy argues for the latter position, and on this basis says SP-implicit attitudes aren’t the right sort of thing to ground moral responsibility according to the criteria offered by real self theorists:

[An implicit attitude’s] content consists in the associations it activates, and the related content it primes, nothing more and nothing more coherent than that… Because implicit attitudes have this kind of thin, and morally empty, content, they can’t play the roles that contemporary real self theorists require of them. In expressing these attitudes, we do not express anything that is an apt target of moral condemnation: the fact that I associate X and Y, nonconsciously, is no basis for holding me responsible. (Levy, 2014, p. 102)

For the present purposes, I want to grant Levy’s claim that implicit attitudes — keep in mind Levy is talking about SP-implicit attitudes — are too ‘thin’ to be the basis of moral responsibility on real self views. Now, this would be a major problem for real self views if the following were also true:

(All) All non-conscious attitudes are like SP-implicit attitudes in the ways relevant for moral responsibility.

As far as I can tell, Levy doesn’t provide an argument for All. It is plausible, moreover, that All is in fact false. Return to the case of Huckleberry Finn, and consider Huck’s cluster of non-conscious attitudes towards Jim, which includes his seeing Jim as a friend and the like. These attitudes seem to differ from SP-implicit attitudes in a number of ways. First, Huck’s non-conscious attitudes have a substantial and reliable effect on his actions. Not only did Huck spontaneously tell a lie to the slave hunters, he did this at considerable risk to himself — there was a clear possibility of being found out and punished. Moreover, this robust effect on motivation is likely to be pervasive across situations and contexts. Huck’s non-conscious attitudes towards Jim also exert wide-ranging cognitive effects. If Jim is suddenly threatened, Huck’s attention will instantly fixate on the threat and his thoughts will turn to how to help. They also have effects on emotions, such as spontaneous reactions of sympathy or, if Jim were hauled away by the slave hunters, sadness.

So Huck’s non-conscious attitudes towards Jim involve a coherent syndrome of motivational, cognitive, and emotional effects that are manifested pervasively across times and situations. Contrast this with SP-implicit attitudes. These have small, subtle, highly context-specific effects, and the existence of even these modest effects is sometimes questioned.[7] It seems, then, that the principal thing Huck’s attitudes towards Jim share with SP-implicit attitudes is that both are non-conscious. Otherwise, the functional roles of these respective attitudes seem quite different.

Moreover, these differences seem potentially quite relevant for moral responsibility. Elsewhere, I offer a care-based account of one’s real self and a functional role account of caring.[8] The relevant functional roles that make something a care, on my view, are precisely the sorts of motivational, cognitive, and emotional dispositions that differentiate Huck’s attitudes towards Jim from SP-implicit attitudes. I won’t try to defend these accounts of the real self and of cares here. I hope, however, I have made it at least plausible that real self theories have the resources to explain why Huck’s non-conscious Jim-directed attitudes can ground moral responsibility even if non-conscious SP-implicit attitudes cannot.[9]

5. Conclusion

Levy offers an impressive defence of the consciousness thesis, the thesis that moral responsibility requires that a person be consciously aware of the morally significant features of her actions. In this commentary, I have considered the case of Huckleberry Finn in some detail and tried to show that this case presents stubborn problems for the consciousness thesis. If my arguments are on the right track, then it is possible the thesis needs some refinement.


Arpaly, N. (2003) Unprincipled Virtue: An Inquiry into Moral Agency, Oxford: Oxford University Press.

Arpaly, N. & Schroeder, T. (1999) Praise, blame and the whole self, Philosophical Studies, 93, pp. 161–188.

Bennett, J. (1974) The conscience of Huckleberry Finn, Philosophy, 49, pp. 123–134.

Carruthers, P. (2006) The Architecture of the Mind, Oxford: Oxford University Press.

Damasio, A. (1994) Descartes’ Error: Emotion, Reason, and the Human Brain, New York: Avon Books.

Greenwald, A.G. & Banaji, M.R. (1995) Implicit social cognition: Attitudes, self-esteem, and stereotypes, Psychological Review, 102, pp. 4–27.

Greenwald, A.G. & Krieger, L.H. (2006) Implicit bias: Scientific foundations, California Law Review, 94, p. 945.

King, M. & Carruthers, P. (2012) Moral responsibility and consciousness, Journal of Moral Philosophy, 9, pp. 200–228.

Levy, N. (2014) Consciousness and Moral Responsibility, New York: Oxford University Press.

Oswald, F.L., Mitchell, G., Blanton, H., Jaccard, J. & Tetlock, P.E. (2013) Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies, Journal of Personality and Social Psychology, 105, pp. 171–192.

Sripada, C. (under review) Self-expression: A deep self theory of moral responsibility.

[1]      See Bennett (1974) and Arpaly and Schroeder (1999).

[2]      For more on motivational modules, see Carruthers (2006, Section 3.6).

[3]      Of note, Levy endorses a broadly modular picture of mind in Consciousness and Moral Responsibility.

[4]      Peter Carruthers argues persuasively that only quasi-perceptual representations can be globally broadcast. The present proposal should thus be understood as saying propositional attitudes gain global access only indirectly by first being formulated in a quasi-perceptual form such as imagery or inner speech. See Carruthers (2006).

[5]      If conditions are favourable and if a person is sufficiently motivated, she might be able to piece together or reconstruct the nature of the considerations operating at the modular level that led to the assignment of consciously experienced valenced signals. But conditions aren’t always favourable and people often aren’t so motivated.

[6]      For reviews, see Greenwald and Krieger (2006) and Greenwald and Banaji (1995).

[7]      See for example Oswald and colleagues (2013).

[8]      See Sripada (under review). Of note, there I use the terminology ‘deep self’ rather than ‘real self’.

[9]      Matt King and Peter Carruthers offer a quite different argument against real self views that also appeals to considerations regarding consciousness (see King and Carruthers, 2012). Their critique applies, however, to versions of real self views (such as Harry Frankfurt’s) that have an agentially-demanding endorsement-based approach to the real self, and it does not apply to the care-based approach to the real self that I and others have proposed.