The Elephant & the Rider
A Myth Masquerading as a Misunderstood Metaphor
In my spare time—that is, when I’m not making dozens of dollars per year writing snarky Substack posts—I moonlight as a gainfully tenured Professor of Philosophy. So, you know, I prep and teach classes, go to meetings-that-could-have-been-emails, occasionally serve as Interim Dean, start Substacks when I should be grading, and write and publish things for academic audiences.
Sometimes my academic research and writing is grant-supported. Right now, for example, I’m working with a small team that includes a developmental psychologist, a clinical psychologist, a theologian, and a psychology doctoral student. We’re doing research on questions about the role of emotions in Christians’ engagement with scientific technologies (think: vaccines, cloning, AI, that sort of thing).
In a recent meeting, we were discussing how to “message” some background research stemming from Jonathan Haidt’s work on moral psychology. We needed to allude to his image of the “elephant and rider”.
Folks, I’ve got beef with this metaphor.

Before we get to my beef, we need to get clear about what the image is meant to represent. The image illustrates Haidt’s picture of the relationship between two forms of cognition, what he calls “intuition” and “reason”. The idea is that one of these—intuition—is really running the show, and reason is there to give intuition a little (mostly facile) feedback and, far more importantly, to serve as a kind of mouthpiece or public advocate for intuition.
Haidt calls this view “social intuitionism”. It’s “intuitionism” because intuition is running the thing. And it’s “social” because reason developed on top of intuition to advocate for intuition within communities of other agents. In The Righteous Mind, Haidt summarizes his view in a section called, helpfully, “IN SUM”:
People reason and people have moral intuitions (including moral emotions), but what is the relationship among these processes? Plato believed that reason could and should be the master; Jefferson believed that the two processes were equal partners (head and heart) ruling a divided empire; Hume believed that reason was (and was only fit to be) the servant of the passions. In this chapter I tried to show that Hume was right:
The mind is divided into parts, like a rider (controlled processes) on an elephant (automatic processes). The rider evolved to serve the elephant.
You can see the rider serving the elephant when people are morally dumbfounded. They have strong gut feelings about what is right and wrong, and they struggle to construct post hoc justifications for those feelings. Even when the servant (reasoning) comes back empty-handed, the master (intuition) doesn’t change his judgment.
The social intuitionist model starts with Hume’s model and makes it more social. Moral reasoning is part of our lifelong struggle to win friends and influence people. That’s why I say that “intuitions come first, strategic reasoning second.” You’ll misunderstand moral reasoning if you think about it as something people do by themselves in order to figure out the truth.
Therefore, if you want to change someone’s mind about a moral or political issue, talk to the elephant first. If you ask people to believe something that violates their intuitions, they will devote their efforts to finding an escape hatch—a reason to doubt your argument or conclusion. They will almost always succeed. (pp. 58-9)
To summarize “IN SUM”: elephants = intuition, riders = reason, and riders mostly just rationalize what elephants were gonna do anyway.
Haidt’s work has been profoundly influential. The elephant-and-rider metaphor gets trotted out a lot. I’ve come to you today to rain on this parade.
We’ll be returning to emotions and their role in our reasoning in future posts, but I want to focus here on two issues in the more public conversations stemming from Haidt’s work.
First, Haidt is misunderstood. The misunderstanding is understandable. When most of us think of “intuition” and especially “emotion”, we think of mere feeling. We think of an experience devoid of real content or information, of something that is simply a bodily excitation of one kind or another. Intuitions are not robustly cognitive on this sort of view. Here is one example of someone reading Haidt through this sort of understanding:
The rider is the conscious mind with its rational functions and volitional power. But the elephant is everything else: all the internal presuppositions, genetic inclinations, subconscious motives, and layers upon layers of uninterrogated, raw experience.
The elephant is quarantined from “rational functions”. It is not “conscious”. Those presuppositions are evidently as uninterrogated as the “raw” experience.
That is not Haidt’s picture. Haidt thinks of the elephant as a kind of cognition. It’s the human mind that is “divided into parts”. Indeed, Haidt thinks the rider can, at least in the long run, “direct” the elephant to some degree. More explicitly, social intuitionism makes room for feedback from reasoning to judgment.
The real questions concern the length of the rider’s reins, whether she’s wearing spurs, how big the bit is between the elephant’s teeth. Conversations about Haidt’s work tend to insist that Haidt and those whose work he relies on have shown that moral reasoning is merely post hoc rationalization. That we only play at reasoning from moral reasons or evidence to moral judgments. That the rider doesn’t really have reins at all, that she is just the PR rep of the elephant. Strictly speaking, this just isn’t true of Haidt’s view.
On the other hand, one can be forgiven for thinking this. Because Haidt, as much as his more careful presentations of his view say otherwise, says so himself:
Once human beings developed language and began to use it to gossip about each other, it became extremely valuable for elephants to carry around on their backs a full-time public relations firm. (p. 54)
Indeed, Haidt spends very little time talking about the feedback from reasoning to judgment and action and even less time on how we might encourage or strengthen those connections. The sort of evidence he presents is meant, in fact, to display the farcicality of just that sort of “feedback”. It’s very easy to get the impression that, at the end of the day, the rider is merely a PR firm for the elephant.
This brings us to the second issue: Haidt claims more for his data than it can support. Perhaps I’ll have occasion to get into more detail down the road, but for now, I’ll just consider one study Haidt cites, an appeal that is indicative of the pattern he uses to support social intuitionism:
[Howard Margolis discusses] logic problems such as the Wason 4-card task, in which you are shown four cards on a table. You know that each card comes from a deck in which all cards have a letter on one side and a number on the other. Your task is to choose the smallest number of cards…that you must turn over to decide whether this rule is true: “If there is a vowel on one side, then there is an even number on the other side.”
Everyone immediately sees that you have to turn over the E, but many people also say you need to turn over the 4. They seem to be doing simple-minded pattern matching: There was a vowel and an even number in the question, so let’s turn over the vowel and the even number. Many people resist the explanation of the simple logic behind the task: turning over the 4 and finding a B on the other side would not invalidate the rule, whereas turning over the 7 and finding a U would do it, so you need to turn over the E and the 7.
When people are told up front what the answer is and asked to explain why that answer is correct, they can do it. But amazingly, they are just as able to offer an explanation, and just as confident in their reasoning, whether they are told the right answer (E and 7) or the popular but wrong answer (E and 4). (p. 49)
The conclusion Haidt asks us to draw from this is the one he claims Margolis draws: “rationales…are only ex post rationalizations” (p. 50).1 This is meant to be no less true for those who were reasoning well than for those who were reasoning poorly!
Wason’s experiment, however, shows nothing of the sort. It doesn’t even suggest it. What it shows is that way too many people are bad at logic, and that the not-good-at-logic folks are equally bad at using good reasons to arrive at their conclusions, and that they’re really confident of their very bad reasons. But the study just doesn’t show that people who are good at logic have constructed ex post rationalizations of the (obviously true) logical principles one needs to understand to do the puzzle correctly.
The idea that the resistance of people who are bad at logic to logical instruction could show that people who are good at logic aren’t thinking logically is, frankly, bizarre.
More importantly, people can get better at this sort of thing! Studying logic can make people better at logic-requiring tasks, less likely to make the same sorts of mistakes. There are, after all, at least some true logical principles, and humans can grasp them (even if with difficulty), and when they do, they can use them to make reasoned judgments about particular cases.
Logical intuition is unlike moral intuition in some ways, of course. I don’t want to suggest otherwise. My point is that Haidt’s underlying rationale for what we might call the Public Relations View of Moral Reasoning is exactly analogous to the rationale he attributes to Margolis, which arrives at the view that logical reasoning is post hoc rationalization. From the fact that we rely on moral intuitions and are all equally adept at justifying our intuition-based conclusions come what may, we’re meant to conclude that moral reasoning is PR rationalization on behalf of the elephant.
But intuitions in the moral space, like intuitions in the logical space, are subject to reasoned evaluation. Our logical intuitions can be systematized into patterns of thought, and those patterns can then be carefully considered. In the moment of judgment, we don’t often take the big step back from those intuitions to consider what patterns they are suggesting. Such back-stepping is nevertheless possible, and important, and even helpful. But in the moment, the intuitions are enough.
They are enough because logical intuitions give us reasons to believe that there is a good logical pattern of the sort we’re deploying in our judgements. The intuition isn’t itself the pattern, but it is meant to be responsive to the pattern. That is, when our minds are working well, logical intuitions that an inference is appropriate happen when and only when that inference follows a pattern that is rationally appropriate.
Logical intuitions, in this sense, give us reason to believe, or at least to act as if, there are reasons to draw that inference.
Something like this is, in my view, what moral emotions do for moral reasoning. Indeed, emotions in general play this sort of role in judgement. Fear, for example, gives us reason to believe there’s a reason we’re in danger, even if we’re not quite sure what it is that’s dangerous.2 Indeed, most of the things in one’s environment can be perfectly safe and fear can nevertheless be an apt response. It only takes one lion lurking in the bushes! (I’ll come back to this shortly.)
Moral emotions are rightly described as intuitions because they give us reason to believe that there are reasons for a particular moral judgment. It’s perfectly legitimate to go ahead and make that judgment even if you haven’t hunted for the first-order reasons. The emotions, in many cases, are good enough on their own. Our reasoning is meant to reflect a more careful, discursive, systematic, communicable pattern of mind that, in ideal cases, was already encoded in the intuition.
Such intuitions are, in this sense, like the sense that you left your keys somewhere in the kitchen. You know the keys are [gesticulating wildly] around here, even if you can’t say where, exactly, they are. Likewise, you might know the right moral principle is around here without knowing where, exactly, that moral principle is.
If that’s right, we can explain the resilience of intuition-based beliefs despite repeated failures to find the reasons to which intuitions point. The belief that the keys are in the kitchen survives even in the face of a series of failed attempts to identify their precise location. That the keys aren’t on the counter next to the kettle is no reason to give up the sense that the keys are in the kitchen, even if you initially thought next to the kettle was where they’d be in light of your intuition. “Not next to the kettle? Not under the mail? Not in the pen drawer? Ahhh, right, here they are in the hip pack I used yesterday…”
The point of all this is twofold. Not only is Haidt’s view often misrepresented as more radical than it actually is; the arguments garnered in its favor are also incapable of supporting it.3
Thing is, all this matters. Haidt’s work has been, in my view, disruptive of public and not-so-public interchange. Consider the end of the “IN SUM” section, the close of the opening chapter of The Righteous Mind:
I have tried to use intuitionism while writing this book. My goal is to change the way a diverse group of readers—liberal and conservative, secular and religious—think about morality, politics, religion, and each other. I knew that I had to take things slowly and address myself more to elephants than to riders. I couldn’t just lay out the theory in chapter 1 and then ask readers to reserve judgment until I had presented all the supporting evidence. Rather, I decided to weave together the history of moral psychology and my own personal story to create a sense of movement from rationalism to intuitionism. I threw in historical anecdotes, quotations from the ancients, and praise of a few visionaries. I set up metaphors (such as the rider and the elephant) that will recur throughout the book. I did these things in order to “tune up” your intuitions about moral psychology. If I have failed and you have a visceral dislike of intuitionism or of me, then no amount of evidence I could present will convince you that intuitionism is correct. But if you now feel an intuitive sense that intuitionism might be true, then let’s keep going. (pp. 59-60, emphasis mine)
Notice the implication: if you’re persuaded that intuitionism is false, then you must have a visceral dislike of it or of Haidt. It couldn’t be because you think the view is unsupported or problematic for various reasons. Much less could it be that your dislike is signaling that there are reasons to think the view false, whether you have an easy time identifying them or not.
This implication is unhelpful. Haidt is explaining away resistance to his ideas by suggesting that resistance isn’t intellectually serious. This sort of maneuver has become, in my view, endemic.
It’s a plague.
“You only believe that because…” ends conversations. There’s nowhere to go from there, at least not if you’re trying to get at the truth by doing the best that humans can do in their hunt for truth: think through reasons and evidence.
It’s (no doubt unintentionally) dehumanizing to those with whom one disagrees. It treats them as sub-rational. Even if it’s true that most of us most of the time are slaves to our passions, we should nevertheless treat people with the dignity and respect due to those who are at least capable of honest, deliberate reasoning. We should ourselves strive to align our thinking with our feeling, and both with the realities of things outside of us. Our communities can help us do this, by challenging our thoughts and our feelings, by offering us alternatives, and by working with us to do better.
And if I’m right that emotions are, at minimum, reasons to believe that there are reasons, then even emotions need to be taken into the fold as upstanding citizens of the realm of reason. The question with emotions is, as it is with any of reason’s citizens, whether they are in a particular case misleading or not.
And very often, emotions point us to the truth directly. So let’s take them seriously, but let’s also take seriously the fact that they are subject to evaluation just like any other item in the realm of reason.
The elephant and the rider is a misleading image because it suggests that we’re made up of two fundamentally different kinds of things, one of which is incapable of discursive, deliberate reasoning. But a human person isn’t built up from separate things that are merely artificially joined, doomed to always struggle to communicate across an unbridgeable divide. A human person is fundamentally one, a unified thing, with interconnected yet fallen faculties.
There isn’t a rider on an elephant, there’s just the rider. And she’s kind of a mess.
1. In Patterns, Thinking, and Cognition, Margolis is fairly careful about all this and is much more measured in drawing inferences from the data. The passage Haidt quotes from Margolis is Margolis’s characterization of others’ conclusions. He thinks these experimental results pose a sort of puzzle (p. 21). He is also careful to point to a continuity between the two sorts of cognition between which Haidt seems to draw a bright line, and to offer an alternative way to understand the social function of reason (see, e.g., pp. 106ff.).
2. For those familiar with the philosophical literature on the epistemology of emotions: I’m disagreeing with some prominent views here, like those of Robert C. Roberts, and I’m doing so without any real arguments. Can’t do everything, know what I mean? If you’re curious to know why I disagree with those views, and to see some hints that point in the direction I’m suggesting, Michael Brady’s Emotional Insight is where I’d go first.
3. To be clear: I don’t take myself to have demonstrated this; I’ve only gestured in the direction of how I’d go about demonstrating it!