Our ability to imagine is an awesome power. But because imagination uses the same brain machinery as other thoughts and perceptions, and because we can remember what we imagine, we face a serious problem: How do we tell the difference between memories of things that happened and memories of things we simply imagined?
Distinguishing memories of things that really happened from those that did not is a mental process known as reality monitoring. When we look at something in the environment, powerful signals from the eyes make their way up through the visual cortex, leading to recognition in higher-order parts of the brain. During imagination, the information comes from the other direction: Higher-order areas of the visual cortex are activated first. Because imagination is often deliberate, we also see more recruitment of the frontal cortex, important for cognitive control.
These distinctions are key when it comes to determining the source of memories, a task in which the anterior medial prefrontal cortex (thought to be critical to attention, and to working, spatial, and long-term memory) seems to play an important role. When this part of the brain is doing its job, we are pretty good at distinguishing memories of what we saw from memories of what we imagined.
But it doesn’t always work. There are now decades of research on false memory examining how people sometimes mistake remembered imaginings for remembered real experiences. The phenomenon, first demonstrated in the 1990s by the work of Elizabeth Loftus, has plagued everything from eyewitness testimony to talk therapy. But can people somehow recategorize these false memories, even ones they are convinced really happened, to their correct source: imagined rather than real?
A recent study by University College Dublin psychologist Ciara Greene and colleagues replicated Loftus’ early work by intentionally giving study participants a false memory (of getting lost at the mall as a young child). Some 52 percent of participants believed the fabricated incident had actually happened to them. I talked to Greene about what might be happening in this process: “There’s a lot of evidence suggesting that true memories tend to have more sensory detail like smells and sounds—and tend to have more emotion,” she said. The more vividly you imagine the memory, the more like real life it seems.
Greene and her colleagues wanted to see whether simply explaining to people that their memory was false would cause them to change their minds. Two to four weeks after researchers gave participants the false memory, attempting to trick their reality monitoring network, participants were fully debriefed and told that the incident they’d been led to believe was true was in fact made up. In a survey three days later, only 8 percent of participants said they still believed the false memory had actually happened.
If we judge the reality of a memory by its vividness, why would debriefing work? It doesn’t make the memory any less vivid. The answer is that reality monitoring can sort “memories” into the false category in two primary ways. The first is an assessment of the richness of the memory: If a memory seems too just-the-facts, without enriching sensory and emotional impressions, it seems less likely to have truly happened to us. The second is inference: We reason, at some level, that a memory must originally have been imagined. If we have a vivid memory of flying through the air with arms outstretched, we can conclude that we must have dreamed or imagined it, because we know people cannot fly.
It appears that, even though participants’ memories of the false incident were as vivid as before and the incident was entirely plausible, hearing that the memory was false was enough for most of them to stop believing it had actually happened. Being told something is false delivers a message to the executive control part of the brain, where reality monitoring also lives: It gives the inference route a reason to refile the memory.
The psychological literature abounds with findings of human irrationality and distortions of perception. What makes work like Greene’s so important is that it shows how using our rational faculties can sometimes overcome our default conclusions—and how we might shape our beliefs to be more accurate.
Participants of course didn’t have the lost-in-the-mall incident wiped clean from their memory. But most did refile it as something they’d generated internally rather than something rooted in real experience.
We rely on our memory to understand the reality we live in. But our ability to simulate the world in our heads—doing copious mental cartwheels of remembering and imagining, and remembering what we imagined—introduces a problem the mind has to solve, which it does imperfectly. It’s nice to know that simple logic and hearing the truth can offer a quick fix, at least in this kind of experimental setup. Though we all know distinguishing reality outside the lab can be a little more complicated.