“We’ll always have Paris.” Really?

One of the most memorable lines in film comes from the 1942 classic Casablanca, when the cynical expat Rick tells his former lover Ilsa: “We’ll always have Paris.” Rick is referring to their brief romance on the eve of World War II—a courtship that ended abruptly with the Nazi invasion of France. When he speaks those words to Ilsa in the movie’s final scene, Rick has accepted that he and Ilsa will never be reunited. There are higher causes and historical forces at play, but at least they will always have that one cherished memory.

Or will they? When the jaded anti-hero Rick speaks those tender words, he is really stating his theory of human memory—and assessing his own. Humans spend a lot of time theorizing about memory. We may not even notice that we’re doing it, but we’re constantly thinking about the effectiveness of memory—what might improve it and whether or not to trust it. Every time we make a grocery list or put in another hour of studying—or talk about forever—we are making a prediction about memory in general and our own memory in particular.

But how reliable are our beliefs about human memory? And how good are we at judging our own memory abilities? Can we really predict what we’ll recall and what we’ll forget in the future? Williams College psychological scientist Nate Kornell is skeptical about memory beliefs and judgments. Human judgment in general is subject to so many irrational cognitive biases that our assessments of memory seem likely to be imperfect as well. Kornell decided to explore two well-known mental quirks, to see if they do indeed skew predictions of future memory in important ways.

The first is called the “stability bias.” This is the tendency to discount any future learning—to believe that we will continue to know exactly what we know right now. The second is a bias in favor of “cognitive fluency”—or ease-of-processing: If information, for whatever reason, seems fluid and easy to digest, we tend to think it’s more memorable. Neither of these biases is rational, yet each is potent. Kornell decided to see what happens when they interact in skewing our beliefs about memory.

He conducted an on-line experiment, for which he recruited a group of volunteers ranging in age from 17 to 67. The experiment was straightforward. The volunteers studied a list of common nouns—mustard, tooth, and so forth—some of which were printed in small type (like you’re reading right now) and others in much larger type. The type size was a manipulation of cognitive fluency, with the larger words being easier to process. The volunteers also learned, as they studied each word, that they would have a second chance to study half of the words again before being tested; the other half they would not see again. Based on this information, they estimated the likelihood that they would recall each word later on. The volunteers then restudied half the words, and finally took the test—recalling as many of the words as they could.

The volunteers’ predictions were “breathtakingly inaccurate,” Kornell writes in the on-line version of the journal Psychological Science. They did predict that a second study opportunity would enhance their learning somewhat, but the actual effect of further study on learning was dramatic, dwarfing the predicted effect. The volunteers also predicted that they would recall the larger words better—but in fact there was no difference in memory later on. So they were doubly wrong: They believed (wrongly) that cognitive fluency would influence memory, and they believed (wrongly) that future study would not.

The results were so dramatic that Kornell wanted to re-run the experiment. In a second version, he told the volunteers that they would have three additional chances to study some words—four altogether—to give them every opportunity to recognize and factor in the value of future study. Yet they did not. They again predicted that the additional learning time would help with memory—but not nearly as much as it did in fact. Indeed, the volunteers thought that type size would be more important to learning than even the intensified study. In other words, they greatly underestimated the value of studying, and greatly overestimated the influence of the (irrelevant) type size.

Here’s the really interesting part. In a third version of the study, Kornell focused on people’s beliefs about human memory in general—as opposed to assessments of their own memory. Not surprisingly, people actually do believe that type size affects learning and remembering—even though it has no effect whatsoever. But oddly, they also believe that more study improves learning and memory—even though they don’t take that into account when predicting their own performance. It appears that people make these predictions based on their own immediate subjective experience, rather than factoring in beliefs about something that has not yet taken place.

Don’t get me wrong. Rick and Ilsa may indeed always have Paris, since it was such an emotional and singular experience. But the core finding here—that our predictions about memory ability are wildly inaccurate—has implications that reach way beyond memorizing word lists in the lab, into classrooms, courtrooms, and—yes—even romantic relationships.

Wray Herbert’s book, On Second Thought, is an in-depth look at cognitive biases in everyday life. Excerpts from his two blogs—“We’re Only Human” and “Full Frontal Psychology”—appear regularly in The Huffington Post and in Scientific American Mind.

