Your Brain Has Two Filing Cabinets. Use Both.
You Already Know This Works. You Just Don't Know Why.
Think about the best teacher you ever had. The one whose lessons actually stuck. Odds are, they didn't just talk at you. They drew on the board. They showed you pictures. They told stories that painted scenes in your mind while they explained abstract concepts with words.
Now think about the worst teacher you had. The one who droned through text-heavy slides for an hour while you desperately tried to stay conscious. All words. No images. No stories. Nothing for your mind's eye to grab onto.
You intuitively know that the first approach works better. Everyone does. But here's what most people don't know: there's a specific, well-tested theory in cognitive psychology that explains exactly why it works better. And the explanation reveals something genuinely surprising about how your brain organizes information.
Your brain doesn't have one filing system. It has two. And they operate almost independently.
Allan Paivio's Big Idea
In 1971, a Canadian psychologist named Allan Paivio published a book called Imagery and Verbal Processes that would fundamentally reshape how scientists understand human memory. His central claim was bold for the time: the brain maintains two distinct representational systems for knowledge.
The verbal system processes and stores information as language. Words, sentences, narratives, verbal labels. When you remember that the capital of France is Paris, that's your verbal system. It operates sequentially, one word or concept at a time, much like speech itself.
The imagistic system (Paivio called it the "nonverbal" or "imagery" system) processes and stores information as mental pictures, spatial relationships, and sensory impressions. When you remember what your childhood bedroom looked like, that's your imagistic system. It operates simultaneously and holistically, capturing entire scenes at once rather than processing them piece by piece.
These two systems are separate but connected. They have their own storage, their own processing rules, and their own way of representing the world. But they can talk to each other. When you hear the word "dog," your verbal system activates the linguistic representation, and it also pings your imagistic system, which might conjure an image of a golden retriever.
This cross-talk between the two systems is where the learning magic happens.
The Experiment That Proved It
Paivio didn't just theorize. He ran hundreds of experiments, and the results were startlingly consistent.
In one classic paradigm, participants were shown lists of words to memorize. Some words were concrete nouns, things you can picture: "apple," "bicycle," "elephant." Others were abstract nouns, things that are hard to visualize: "justice," "frequency," "obligation."
The results were dramatic. People remembered concrete words roughly twice as well as abstract words of matched frequency and length. Not 10% better. Not 20% better. Twice as well.
Paivio's explanation was straightforward. Concrete words automatically activate both systems. When you see the word "elephant," your verbal system encodes the word and your imagistic system generates a mental image of an elephant. You get two memory traces for the price of one. Abstract words, by contrast, primarily activate only the verbal system. One trace. Half the retrieval pathways.
This became known as the concreteness effect, and it's one of the most replicated findings in all of memory research. It shows up in children and adults, across languages, and in both recall and recognition tests.
But the really interesting question wasn't whether concrete words are easier to remember. It was whether you could make abstract information behave like concrete information by deliberately adding visual encoding.
The answer is yes. And that's what makes dual coding theory so practically powerful.
Two Traces Are Better Than One
The core mechanism is simple and elegant. Every piece of information you learn creates a memory trace, a pattern of neural connections that represents that information. If you encode something through only one system (verbal or visual), you create one trace. If you encode it through both systems, you create two independent traces.
When you try to recall that information later, you have twice as many routes to find it. If the verbal trace has faded, you might still reach the visual trace. If you can't remember the exact words someone said, you might remember the diagram they drew. Two chances to retrieve instead of one.
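The arithmetic behind "two chances instead of one" can be made concrete with a toy model. The sketch below is mine, not Paivio's: it makes an independence assumption purely for illustration, treating each trace as surviving until retrieval with some probability p.

```python
# Toy model (not from Paivio): assume each memory trace independently
# survives until retrieval time with probability p.
def recall_probability(p: float, n_traces: int) -> float:
    """Chance that at least one of n independent traces can be retrieved."""
    return 1 - (1 - p) ** n_traces

single = recall_probability(0.5, 1)  # one fading trace: 0.50
dual = recall_probability(0.5, 2)    # verbal + imagistic: 0.75
```

Under this admittedly simplistic model, a second independent trace lifts a coin-flip memory to three-in-four odds. Real traces are correlated (they encode the same material), so the true benefit is smaller, but the direction of the effect is the same.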
This isn't just metaphor. Neuroimaging and EEG studies have shown that successfully recalled dual-coded memories produce activation patterns in both the language-processing regions (left frontal and temporal cortex) and the visual-processing regions (occipital and parietal cortex). Single-coded memories engage predominantly one network. The two traces are neurologically real.
Here's something most articles about dual coding don't mention. The two memory traces aren't just separate copies of the same information. They're different kinds of representations. The verbal trace stores relationships sequentially (A causes B, which leads to C). The imagistic trace stores relationships spatially (A is above B, which is next to C). This means dual-coded memories aren't just more durable. They're richer. They contain both the narrative structure and the spatial structure of the information, giving you more ways to reason about what you've learned.
What's Actually Happening in Your Brain
Paivio developed dual coding theory in the 1970s, before modern neuroimaging existed. He was working from behavioral data: reaction times and recall scores. But modern neuroscience has confirmed his model in remarkable detail.
The verbal system maps primarily to the left hemisphere, particularly Broca's area in the left frontal lobe (language production) and Wernicke's area in the left temporal lobe (language comprehension). Damage to these areas produces specific language deficits: aphasia, word-finding difficulties, inability to comprehend speech.
The imagistic system maps primarily to the right hemisphere and the occipital-parietal network. Visual imagery activates many of the same brain regions as actual seeing. When you imagine a beach, your visual cortex fires in patterns remarkably similar to when you look at an actual beach. The difference is that the signal comes from the top down (prefrontal cortex telling visual cortex what to generate) rather than from the bottom up (eyes sending visual information to visual cortex).
EEG recordings show distinct signatures for verbal and visual processing. Verbal encoding tends to produce increased left-lateralized beta activity (13-30 Hz) in frontal and temporal regions. Visual encoding tends to produce increased alpha suppression over the occipital cortex (the visual cortex "wakes up" and alpha drops) along with increased theta activity in parietal regions.
When both systems activate together during dual coding, you can literally see it in the EEG. There's bilateral activation, engagement across both hemispheres, with simultaneous changes in multiple frequency bands. The brain is working harder, engaging more cortical real estate, creating a richer and more distributed memory trace.
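The band signatures described above can be estimated from a raw signal with a few lines of spectral analysis. The sketch below (Python with NumPy and SciPy, run on a synthetic trace rather than real EEG) computes average power per frequency band using Welch's method; the band edges and the `band_powers` helper are illustrative choices, not a standard EEG API.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sample rate in Hz, typical for consumer EEG headsets

# The frequency bands named in the text
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs=FS):
    """Average spectral power per band, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic stand-in for a "verbal encoding" trace: a strong 20 Hz (beta)
# oscillation buried in broadband noise
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
verbal_like = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(t.size)

powers = band_powers(verbal_like)
assert max(powers, key=powers.get) == "beta"  # beta dominates, as expected
```

With real recordings, the same computation run separately on left-frontal and occipital channels is what lets researchers distinguish the verbal signature (frontal beta) from the visual one (occipital alpha suppression).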
How to Actually Use This
Knowing that dual coding works is one thing. Using it effectively is another. Here are the principles backed by research.
Combine, Don't Replace
The most common mistake is treating visuals as a replacement for words. "A picture is worth a thousand words" is one of those sayings that's true in spirit but misleading in practice. A picture alone isn't dual coding. Neither is text alone. Dual coding requires both, simultaneously.
When studying a concept, read the text and look at a diagram. Better yet, read the text, then create your own diagram. The act of translating verbal information into visual form forces both systems to engage.
Generate, Don't Just Receive
Passive dual coding (looking at a textbook with pictures next to text) works, but active dual coding (creating your own visuals while reading) works much better. This is because generating an image requires deeper processing than simply viewing one.
When you read that "the hippocampus is a seahorse-shaped structure deep in the temporal lobe," you might form a faint mental image. But if you pause and actually sketch a rough seahorse shape in the margin, labeling it with its location, you've engaged your motor cortex, your spatial processing, and your verbal labeling system. Three encoding pathways instead of two.
Make Abstract Concepts Concrete
This is where dual coding becomes a superpower for difficult material. Abstract concepts, things like "opportunity cost," "statistical regression," or "cognitive load," are hard to remember precisely because they don't naturally activate the imagistic system. You have to do the work of creating a visual representation.

A concept map showing how "opportunity cost" connects to related ideas like "trade-offs," "scarcity," and "decision-making" turns an abstract concept into a spatial arrangement. A timeline diagram turns a sequence of historical events from a verbal list into a visual landscape. A simple sketch of a balanced scale turns the concept of "trade-off" into something your imagistic system can grab onto.
The consistent research finding: the weirder or more vivid the image, the better it sticks. If you're trying to remember that cortisol is a stress hormone produced by the adrenal glands, imagining a tiny factory on top of your kidneys (where the adrenals sit) pumping out alarm bells is going to be more memorable than a bland anatomical diagram.
Keep Them Together, Not Apart
Richard Mayer, who built his multimedia learning theory on Paivio's dual coding foundation, discovered something important about how to present combined information. Words and images that explain the same concept should be physically close together. When a diagram is on one page and its text explanation is on another page, the learner has to hold one in memory while processing the other, which actually increases cognitive load rather than decreasing it.
This is called the spatial contiguity principle, and it's one of the strongest findings in multimedia learning research. Labels should be placed directly on the diagram. Captions should be adjacent to figures. Narration should be synchronized with animation. When the two channels are temporally and spatially aligned, dual coding works best.
Why Does the Brain Have Two Systems?
There's a deeper question lurking here. Why does the brain have two separate systems in the first place? Why not one general-purpose memory system?
The answer probably lies in evolution. For most of our species' history, survival depended on two fundamentally different kinds of knowledge: knowing where things are (spatial, visual) and knowing what things mean (conceptual, eventually linguistic). A forager needs to remember what a poisonous berry looks like (visual system) and remember being told it's dangerous (verbal system). These are different computational problems that benefit from specialized hardware.
The imagistic system is older, evolutionarily speaking. Animals without language still have excellent spatial memory and visual recognition. The verbal system is a relatively recent addition, appearing with the evolution of language in Homo sapiens over the past 100,000 to 200,000 years.
This evolutionary history explains a curious asymmetry in dual coding. The picture superiority effect shows that pictures are remembered better than words, even when exposure time is matched. In one striking study, participants viewed 10,000 photographs for five seconds each over several days. Their recognition accuracy afterward was about 83%. No comparable feat is possible with 10,000 words.
The imagistic system isn't just older. It's more capacious. It evolved to represent the rich, high-dimensional complexity of the physical world. The verbal system, by contrast, evolved to represent a compressed, sequential, symbolically encoded version of reality. Both are essential. But the imagistic system appears to have a deeper well.
Dual Coding and the Spacing Effect
Here's where dual coding theory intersects with another major finding in learning science: spaced repetition.
When you review material at increasing intervals over time (the spacing effect, which has its own fascinating story), dual coding amplifies the benefit. Each time you revisit a concept, you can encode it through a different combination of verbal and visual channels. First encounter: read the text. Second encounter: study the diagram. Third encounter: draw the diagram from memory while explaining it aloud. Each review creates new associative connections between the two systems.
Research by Mayer and others suggests that spaced, dual-coded review can produce retention rates roughly 50-60% higher than massed, single-coded review. That's not a small effect. In practical terms, it can mean the difference between remembering something for a week and remembering it for months.
What Dual Coding Looks Like in the Brain: An EEG Perspective
If you were wearing an EEG device while studying using dual coding strategies, what would the data show?
During verbal processing (reading text, listening to a lecture), you'd see increased beta power (13-30 Hz) over the left frontal and temporal regions, reflecting language processing in Broca's and Wernicke's areas. Theta power (4-8 Hz) at frontal midline sites (around Fz) would indicate working memory engagement.
During visual processing (studying a diagram, creating a mental image), you'd see alpha suppression over the occipital cortex (positions like O1, O2, PO3, PO4), meaning the visual cortex is actively processing. You'd also see increased gamma activity (30+ Hz) in parietal regions as the brain integrates spatial information.
During combined dual coding (reading text while studying a related diagram), both patterns appear simultaneously. The EEG shows broader cortical activation, more distributed processing, and often increased theta-gamma coupling, a neural signature that researchers associate with successful memory encoding. The brain is, in a very literal sense, working harder and encoding more richly.
This is measurable in real-time with consumer EEG devices. The Neurosity Crown, with channels at positions including F5, F6 (frontal), C3, C4 (central), and PO3, PO4 (parieto-occipital), captures activity across the key regions involved in both verbal and visual processing. It's the kind of data that lets you actually see whether your brain is engaging one system or both.
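The theta-gamma coupling mentioned above can be quantified with a phase-amplitude coupling measure. The sketch below is a minimal illustration in Python (NumPy/SciPy) of the mean-vector-length approach: extract theta phase and gamma amplitude with bandpass filters and the Hilbert transform, then measure how strongly gamma amplitude tracks theta phase. The signals are synthetic and the `coupling_strength` helper is an assumption of mine, not a function from any EEG library.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 256  # sample rate in Hz

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth bandpass filter."""
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def coupling_strength(x, fs=FS):
    """Mean-vector-length estimate of theta-gamma phase-amplitude coupling:
    how strongly gamma-band amplitude tracks theta-band phase."""
    theta_phase = np.angle(hilbert(bandpass(x, 4, 8, fs)))
    gamma_amp = np.abs(hilbert(bandpass(x, 30, 45, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

rng = np.random.default_rng(1)
t = np.arange(0, 20, 1 / FS)
theta = np.sin(2 * np.pi * 6 * t)         # 6 Hz theta rhythm
gamma = 0.5 * np.sin(2 * np.pi * 40 * t)  # 40 Hz gamma rhythm
noise = 0.1 * rng.standard_normal(t.size)

coupled = theta + (1 + theta) * gamma + noise  # gamma bursts ride the theta cycle
uncoupled = theta + gamma + noise              # same rhythms, no coupling

assert coupling_strength(coupled) > coupling_strength(uncoupled)
```

On real data, you would run this per channel (the parieto-occipital positions for the visual side, the frontal positions for the verbal side) and watch for the coupling estimate to rise during combined verbal-plus-visual study.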
Beyond Textbooks: Dual Coding in the Real World
Dual coding isn't just for students cramming for exams. The principle applies anywhere humans need to process and remember complex information.
Software development. Why are architecture diagrams so much more useful than written specifications alone? Dual coding. The diagram activates the spatial system while the spec activates the verbal system. Together, they create a richer mental model of the system than either could alone.
Medical training. Medical students who study anatomy with labeled 3D models alongside textbook descriptions consistently outperform those who study with text alone. The spatial relationships between organs, vessels, and nerves are inherently visual information that the verbal system encodes poorly.
Data presentation. A chart with clear labels and a brief text summary is dual coding. A spreadsheet of numbers is single coding. This is why data visualization is powerful: it translates verbal/numerical information into the visual system's language.
Music and memory. Why do song lyrics stick in your head so much better than prose? Partial dual coding. Music engages auditory-spatial processing (melody, rhythm, harmony) while the lyrics engage the verbal system. The combination creates an extraordinarily durable memory trace, which is why you can still remember the words to songs you haven't heard in decades.
What Are the Limits of Dual Coding?
Dual coding theory is not a magic bullet, and it's important to understand where it breaks down.
Cognitive overload. If the visual and verbal information are too complex or too unrelated, trying to process both simultaneously can overwhelm working memory. This is why badly designed PowerPoint slides, the ones with a wall of text AND a complex diagram AND the presenter talking about something different, actually impair learning. The three channels compete rather than complement.
Irrelevant imagery. Decorative images that don't directly relate to the content provide no dual coding benefit and can actually hurt learning by distracting from the relevant information. A stock photo of a smiling person next to a paragraph about cellular biology isn't dual coding. It's noise.
Individual differences. A small percentage of people have aphantasia, the inability to generate voluntary mental imagery. For these individuals, the imagistic system works differently, and standard dual coding strategies may need to be adapted. Research on aphantasia and dual coding is still in its early stages.
Two Systems, One Mind
Allan Paivio spent his career studying something that seems obvious in hindsight: pictures and words are different. But the implications of that simple observation run deeper than most people realize.
Your brain didn't evolve a single, general-purpose knowledge system. It evolved specialized hardware for different kinds of information. And the richest, most durable, most retrievable memories are the ones that light up both systems at once.
Every effective teacher, every great communicator, every memorable presentation you've ever experienced was probably using dual coding, whether they knew the theory or not. They showed you something while they told you something. They painted a picture with words while they explained a concept with logic.
Now you know why it works. And the next time you're trying to learn something genuinely difficult, something that resists sticking in your memory, the prescription is simple. Don't just read about it. See it. Draw it. Map it. Give your brain's two filing systems something to work with.
Both cabinets open at once. That's when things get interesting.

