Event-Related Potentials (ERPs)
Your Brain Replies to Everything. These Are the Replies.
Right now, as you read this sentence, your brain is doing something extraordinary. Every time your eyes land on a new word, a tiny electrical signal fires in your cortex. It happens about 170 milliseconds after the word appears, a negative voltage deflection over the back of your skull, as your visual system recognizes that yes, this cluster of lines and curves is a word you know.
If the next word I write makes no sense (the word "banana" would do it here), a different signal fires about 400 milliseconds later. This one is larger, more negative, and it means your brain just noticed something semantically unexpected.
These signals are absurdly small. We're talking about 5 to 10 microvolts. For scale, the static shock you get from a doorknob is about 20,000 volts. These brain signals are billions of times weaker than that. They're buried under the constant hum of background brain activity, muscle artifacts, blinking, and electrical noise from every device around you.
And yet, for some seventy years, neuroscientists have been extracting these signals with remarkable precision. The technique is called event-related potentials, or ERPs. And the story of how we pull these whisper-quiet neural replies out of the chaos of raw EEG is one of the cleverest tricks in all of neuroscience.
What EEG Sees vs. What ERPs Reveal
Before we get into ERPs, it helps to understand what they are not. If you've read anything about brainwaves, you've probably encountered terms like alpha, beta, theta, and gamma. These are oscillatory rhythms, repeating patterns of electrical activity that reflect the brain's ongoing state. Alpha rhythms (8-13 Hz) dominate when you close your eyes and relax. Beta rhythms (13-30 Hz) increase when you're focused and alert. These oscillations are the background music of your brain.
ERPs are something different entirely. They're not about the brain's ongoing state. They're about its responses.
Think of it this way. If oscillatory EEG is like listening to the ambient noise in a restaurant, ERPs are like recording the specific thing the waiter says every time he delivers a plate. The ambient noise is always there, complex, layered, and interesting in its own right. But the waiter's words are time-locked to a specific event: the plate hitting the table. If you could somehow record a hundred plate deliveries and average all the sound together, the ambient noise would cancel itself out (because it's different every time), and you'd be left with a crystal-clear recording of just the waiter's voice.
That is, quite literally, how ERPs work.
The Averaging Trick: How to Hear a Whisper in a Hurricane
The fundamental problem of ERP research is signal-to-noise ratio. The brain's response to a single stimulus, a flash of light, a beep, a word on a screen, is genuinely tiny. Somewhere between 1 and 30 microvolts, depending on the component. The background EEG is typically 10 to 100 microvolts. So the signal you're looking for is often smaller than the noise it's swimming in.
The solution, first demonstrated by Dawson in 1954, is elegant. You present the same stimulus many times. Dozens, sometimes hundreds of times. Each time, you record a short chunk of EEG starting just before the stimulus and extending a second or so afterward. These chunks are called epochs.
Then you average all the epochs together.
Here's why this works. The background EEG is, from the perspective of any particular stimulus, effectively random. Sometimes it's positive at 200 milliseconds after the beep, sometimes it's negative. When you average, the random fluctuations cancel each other out, converging toward zero. But the brain's actual response to the stimulus is not random. It happens at roughly the same time and with roughly the same shape on every trial. So when you average, the consistent response adds up while the noise washes away.
Signal-to-noise ratio in ERP averaging improves by the square root of the number of trials. So 100 trials gives you 10x better signal-to-noise than a single trial. This is why ERP experiments involve many repetitions. It is also why a component like the P300 (relatively large at 10-20 microvolts) needs fewer trials than the N400 (often 3-5 microvolts). The bigger the signal, the less averaging you need.
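A quick simulation makes the square-root law concrete. This minimal NumPy sketch (synthetic data, not a real recording) buries a fixed "ERP" in noise ten times its size and watches the residual noise shrink as trials are averaged:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate (Hz)
t = np.arange(0, 0.8, 1 / fs)  # one 800 ms epoch

# A synthetic "P300": 5 µV positive bump peaking near 300 ms
erp = 5 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def simulate_average(n_trials, noise_sd=50):
    """Average n_trials epochs of signal + Gaussian noise (µV)."""
    epochs = erp + rng.normal(0, noise_sd, size=(n_trials, t.size))
    return epochs.mean(axis=0)

for n in (1, 25, 100, 400):
    residual = simulate_average(n) - erp   # what's left is pure noise
    print(f"{n:4d} trials -> residual noise ~ {residual.std():5.1f} µV")
```

Each quadrupling of trials roughly halves the residual noise, exactly as the square-root law predicts.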
The result of this averaging is a waveform. A clean, time-locked voltage trace that shows the brain's consistent response to the stimulus. And the peaks and valleys of this waveform, each one appearing at a specific time after the stimulus, tell you something different about what the brain is doing.
Scientists name these peaks and valleys by their polarity and their timing. P means positive. N means negative. The number indicates the typical latency in milliseconds. So the P300 is a positive peak at about 300 milliseconds. The N400 is a negative peak at about 400 milliseconds. Simple enough. But the stories these components tell are anything but simple.
The ERP Components: A Field Guide to Your Brain's Replies
P300: The Signal That Says "I Noticed That"
The P300 is the rock star of ERP research. Discovered by Sutton and colleagues in 1965, it's one of the most studied signals in all of cognitive neuroscience, and for good reason. It tells you something fundamental about attention and awareness.
Here's the classic experiment. You play a series of beeps for someone. Most beeps are the same tone (say, 1000 Hz). But every so often, maybe 20% of the time, you sneak in a different tone (say, 2000 Hz). You tell the person to count the rare ones.
When you average the EEG time-locked to the rare tones, you see a large positive wave peaking at about 300 milliseconds. Time-lock to the frequent tones, and it's absent or much smaller.
The P300 reflects several cognitive operations happening at once: attention allocation, context updating (your brain's model of "what's happening" just changed), and working memory engagement. Its amplitude (how big it is) reflects how surprising or task-relevant the stimulus was. Its latency (how late it peaks) reflects how long it took your brain to evaluate the stimulus.
But here's where the P300 gets really interesting. Because it reliably appears when a person detects something rare or meaningful, it has become the foundation of one of the most practical brain-computer interfaces ever built.
The P300 Speller: Typing With Your Brain
In 1988, researchers Farwell and Donchin built the first P300 speller. The concept is brilliant. Display a grid of letters on a screen. Flash rows and columns randomly. The letter the user wants to type will flash sometimes as part of a row, sometimes as part of a column. Each time the target letter flashes, the user's brain generates a P300 because that's the letter they're attending to. Each time a non-target letter flashes, no P300 (or a much smaller one).
Average a few flashes, detect which row and which column consistently produce a P300, and their intersection is the intended letter.
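In code, that selection step reduces to an argmax over averaged epochs. The sketch below uses synthetic single-channel data, and the 250-450 ms scoring window, flash counts, and amplitudes are illustrative assumptions, not a fixed standard:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 0.8, 1 / fs)
p300 = 8 * np.exp(-((t - 0.3) ** 2) / (2 * 0.06 ** 2))  # target response (µV)

GRID = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
        list("STUVWX"), list("YZ1234"), list("56789_")]
target_row, target_col = 2, 3   # the user is attending to "P"

def flashes(is_target, n=15, noise_sd=20):
    """Simulated epochs for one row or column across n flashes (µV)."""
    base = p300 if is_target else np.zeros_like(t)
    return base + rng.normal(0, noise_sd, size=(n, t.size))

# Average the epochs per row and per column, then score a 250-450 ms window
win = (t >= 0.25) & (t <= 0.45)
row_scores = [flashes(i == target_row).mean(axis=0)[win].mean() for i in range(6)]
col_scores = [flashes(j == target_col).mean(axis=0)[win].mean() for j in range(6)]

letter = GRID[int(np.argmax(row_scores))][int(np.argmax(col_scores))]
print("decoded letter:", letter)   # with enough flashes, recovers "P"
```

Real spellers replace the simple window average with a trained classifier, but the structure is the same: the row and column whose averages carry the P300 intersect at the intended letter.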
People who are completely paralyzed, unable to move any muscle, have used P300 spellers to communicate. The system works because attention is enough. You don't need to move, speak, or do anything except pay attention to the letter you want. Your brain's P300 does the rest.
Modern P300 spellers, combined with better signal processing and machine learning, can achieve typing speeds of 5-10 characters per minute. That's slow by keyboard standards but extraordinary for someone who has no other means of communication.
N200: The Conflict Detector
The N200 appears about 200 milliseconds after a stimulus, and it's especially prominent when there's a conflict between what you expected and what actually happened, or between a response you want to make and one you need to suppress.
In "go/no-go" tasks, where you press a button for most stimuli but must withhold your response for a specific one, the N200 is larger on no-go trials. Your brain is detecting the conflict between the prepotent motor response ("press the button!") and the task instruction ("not this time").
This makes the N200 valuable for studying impulse control, attention disorders like ADHD brain patterns (where the N200 is often reduced), and the neural mechanisms of self-regulation.
N400: The Meaning Processor
The N400 is one of the most elegant discoveries in cognitive neuroscience. It was found by Marta Kutas and Steven Hillyard in 1980, and it responds to semantic violations, moments when meaning goes off the rails.
Their original experiment presented sentences like "He spread the warm bread with socks." The word "socks" (compared to "butter") produced a large negative deflection peaking around 400 milliseconds.
The N400 is not about grammar. It's about meaning. It appears for any stimulus that is semantically unexpected or hard to integrate with the current context. A word that doesn't fit a sentence. A picture that doesn't match a spoken word. Even a chord that doesn't belong in a musical phrase.
This makes the N400 a window into how the brain constructs meaning in real time. It has been used to study language comprehension, bilingualism, semantic memory in Alzheimer's disease, and even humor (the punchline of a joke elicits a specific N400 pattern followed by a late positive wave, as your brain processes the incongruity and then "gets it").
P600: The Grammar Police
If the N400 responds to meaning violations, the P600 responds to structural ones. Present a sentence like "The cat was chasing by the dog" and you'll see a positive wave at about 600 milliseconds. Your brain noticed the syntactic error, even if you weren't consciously looking for one.
The P600 is generated by a distributed network that includes left inferior frontal cortex (Broca's area) and posterior temporal regions (Wernicke's area). It peaks later than the N400 because structural repair takes time: your brain has to parse the structure of the sentence, discover it doesn't work, and attempt a reanalysis.
Together, the N400 and P600 have helped settle long-standing debates in linguistics about whether the brain processes meaning and grammar through the same or separate systems. The answer: separate systems, with distinct temporal signatures.
MMN: Your Brain's Change Detector (Even When You're Not Listening)
The mismatch negativity, or MMN, might be the most surprising ERP component of all. It appears when an auditory stimulus violates an established pattern, and here's the remarkable part: it happens even when you're not paying attention to the sounds.
Play a steady sequence of identical tones (beep, beep, beep, beep) while someone reads a book. Slip in a deviant tone (beep, beep, boop, beep). Even though they're ignoring the sounds entirely, their brain generates a negative deflection about 100-250 milliseconds after the deviant. The auditory cortex noticed the change, processed it, and registered a prediction error, all without conscious awareness.
The MMN is evidence that your brain is constantly building predictive models of the environment, even for sensory streams you're ignoring. It's not waiting for you to pay attention before it starts processing. It's always processing, always comparing incoming data against its predictions.
Clinically, the MMN is used to assess auditory processing in newborns, track cognitive decline in dementia, and evaluate states of consciousness in comatose patients. If a comatose patient still generates an MMN, it suggests their auditory cortex is still processing, which carries prognostic significance for recovery.
ERN: The "Oh No" Signal
The error-related negativity, or ERN, appears within 100 milliseconds of making a mistake, often before you're consciously aware you've made one. It originates in the anterior cingulate cortex, the brain region that monitors for conflicts between intended and actual outcomes.
In a fast-paced task where you occasionally press the wrong button, your EEG shows a sharp negative deflection right at the moment of the incorrect response. Your brain knew it was wrong before you did.
The ERN is larger in people with anxiety disorders, particularly obsessive-compulsive disorder. Their error-monitoring system is hypersensitive, detecting threats and mistakes everywhere, which maps onto the subjective experience of constantly feeling like something is wrong.
| ERP Component | Polarity/Latency | What It Reflects | Key Application |
|---|---|---|---|
| P300 | Positive, ~300ms | Attention, surprise, context updating | P300 speller BCIs, attention assessment |
| N200 | Negative, ~200ms | Conflict detection, response inhibition | ADHD research, impulse control studies |
| N400 | Negative, ~400ms | Semantic processing, meaning integration | Language research, Alzheimer's screening |
| P600 | Positive, ~600ms | Syntactic processing, structural reanalysis | Grammar processing, bilingual studies |
| MMN | Negative, ~100-250ms | Automatic change detection, prediction error | Consciousness assessment, infant audiology |
| ERN | Negative, ~0-100ms post-error | Error monitoring, performance adjustment | Anxiety/OCD research, quality control |

From Lab to Life: What ERPs Are Used For
Clinical Diagnostics
ERPs have become valuable clinical tools precisely because they're objective. A patient doesn't need to report their symptoms. Their brain's electrical responses tell the story.
Cognitive decline. The P300 latency increases and amplitude decreases in early Alzheimer's disease, often before behavioral symptoms are apparent. Clinicians use P300 assessments as one tool in the early detection toolkit, tracking changes over time to monitor progression.
ADHD. Children and adults with ADHD consistently show reduced P300 amplitude and altered N200 patterns during attention tasks. These ERP markers can help differentiate ADHD from other conditions that present with similar behavioral symptoms.
Disorders of consciousness. When a patient is in a coma or vegetative state, behavioral assessment has limits. But if an oddball paradigm produces an MMN or P300, it suggests cognitive processing is occurring beneath the surface. This has changed clinical decisions about care and prognosis.
The Curious Case of ERP-Based Lie Detection
Here's an application that sounds like science fiction. The P300 can detect whether someone recognizes information they're trying to hide.
The logic is straightforward. If you committed a crime at a specific location, and a researcher flashes images of various locations (including the crime scene) while recording your EEG, the crime scene image will elicit a larger P300 because your brain recognizes it. You can try to suppress your reaction, keep a poker face, control your breathing. But you cannot voluntarily suppress your P300. It's an automatic response.
This technique, called the "P300 concealed information test," has been used in criminal investigations and has shown accuracy rates around 90% in controlled laboratory studies. Real-world application is more complicated (legal admissibility varies by jurisdiction, and the technique requires that only the guilty person would recognize the concealed information), but the underlying science is solid.
Your P300 is, in a very real sense, an honesty signal that you cannot fake.
Language and the Brain
ERPs transformed the study of language processing because they offer something no other technique can: real-time tracking of how the brain processes each word as it arrives.
With fMRI, you can see which brain regions activate during language tasks, but the temporal resolution is seconds, not milliseconds. Language unfolds word by word, each one processed in a fraction of a second. ERPs can track that millisecond-by-millisecond unfolding.
The N400 revealed that the brain begins accessing word meaning within 300-400 milliseconds. The P600 showed that syntactic parsing follows shortly after. The left anterior negativity (LAN) suggested that grammatical processing begins even earlier, within 200-500 milliseconds. Together, these components painted a detailed picture of the temporal cascade that turns sounds or squiggles on a page into understanding.
Extracting ERPs: The Practical Challenge
If the theory behind ERPs is elegant, the practice is finicky. Getting clean ERPs requires attention to a stack of technical details, each one capable of ruining your data if you get it wrong.
Stimulus timing precision. Your epoch is time-locked to the stimulus onset. If your stimulus timing is off by even 10-20 milliseconds (entirely possible with poorly configured software), your ERPs will smear and lose definition. The presentation system needs to deliver stimuli with millisecond precision and send a time-synchronized trigger to the EEG recording system.
Artifact rejection. Eye blinks produce voltage deflections of 100-200 microvolts, ten to fifty times larger than most ERPs. A single blink in an epoch can obliterate the signal. Researchers use a combination of artifact rejection (throwing out contaminated epochs) and artifact correction (using algorithms like independent component analysis to remove the blink component while preserving the brain signal).
Baseline correction. Before comparing the post-stimulus voltage to anything, you need a baseline. Typically, you take the average voltage in a short window before the stimulus (say, 200 milliseconds pre-stimulus) and subtract it from the entire epoch. This removes slow voltage drifts that would otherwise distort your measurements.
Filtering. ERP researchers typically bandpass filter their data between 0.1 and 30 Hz. The low-frequency cutoff removes slow drift, the high-frequency cutoff removes muscle artifacts and line noise. But filtering is tricky. Aggressive filtering can distort the shape and timing of ERP components. The exact filter settings are a perpetual debate in the field.
Number of trials. More trials means better signal-to-noise. But more trials also means a longer experiment, and participant fatigue degrades data quality. The sweet spot depends on the component you're targeting. A strong P300 might emerge from 30-50 trials per condition. A subtle N400 effect might need 80-200.
- Record continuous EEG with event markers (triggers) at each stimulus onset
- Filter the raw data (typically 0.1-30 Hz bandpass)
- Segment into epochs around each trigger (e.g., -200ms to +800ms)
- Baseline correct using the pre-stimulus interval
- Reject or correct artifacts (blinks, muscle activity, drift)
- Average all clean epochs per condition
- Measure component amplitudes and latencies at relevant electrode sites
- Compare between conditions using statistical tests
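The steps above fit in a short script. Here is a minimal sketch over synthetic single-channel data; the filter settings, epoch window, and ±100 µV rejection threshold are common choices, not universal ones:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(2)
fs = 250
raw = rng.normal(0, 30, size=fs * 120)   # 2 minutes of fake continuous EEG (µV)

# Event markers every 1.5 s; each "stimulus" adds a 10 µV P300-like bump
events = np.arange(fs, raw.size - fs, int(1.5 * fs))
bump_t = np.arange(0, 0.8, 1 / fs)
bump = 10 * np.exp(-((bump_t - 0.3) ** 2) / (2 * 0.05 ** 2))
for e in events:
    raw[e:e + bump.size] += bump

# 0.1-30 Hz band-pass (zero-phase, so component latencies aren't shifted)
sos = butter(2, [0.1, 30], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, raw)

# Epoch from -200 ms to +800 ms around each event marker
pre, post = int(0.2 * fs), int(0.8 * fs)
epochs = np.stack([filtered[e - pre:e + post] for e in events])

# Baseline-correct using the pre-stimulus interval
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)

# Reject epochs with blink-sized deflections
clean = epochs[np.abs(epochs).max(axis=1) < 100]

# Average to get the ERP and find its peak latency
erp = clean.mean(axis=0)
peak_ms = (np.argmax(erp) - pre) / fs * 1000
print(f"{len(clean)} clean epochs, positive peak near {peak_ms:.0f} ms")
```

With real multi-channel data you would run this per electrode (libraries like MNE-Python wrap the whole pipeline), but every step maps directly onto the list above.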
Consumer EEG and ERPs: What's Actually Possible
For decades, ERP research required laboratory-grade EEG systems with 32, 64, or even 256 channels, each filled with conductive gel, connected to research-grade amplifiers, all confined to a shielded room. The setup alone could take 45 minutes. The equipment cost tens of thousands of dollars.
That landscape is changing. Consumer EEG devices now offer sample rates and signal quality that make certain ERP research feasible outside the lab.
The key factors that determine whether a device can capture ERPs are:
Sampling rate. A rate of at least 250 Hz is the usual benchmark for ERP work. The components themselves mostly live below 30 Hz, but lower rates leave you with a coarse time grid (and risk aliasing unfiltered high-frequency noise into your band of interest), smearing peaks and making accurate latency measurement difficult.
Electrode placement. Where you put the sensors determines which ERPs you can detect. The P300 is largest at central and parietal midline sites (Cz, Pz). The N400 is prominent over centro-parietal regions. The MMN is strongest over frontal and central sites.
Raw data access. Many consumer devices only provide processed metrics (like "attention scores" or frequency band power). To do ERP work, you need access to the raw voltage data, sample by sample, so you can perform your own epoching, averaging, and analysis.
Trigger synchronization. You need a way to mark the exact moment each stimulus occurred in the EEG data stream. This requires either hardware triggers or software-based event marking with known, consistent latency.
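With software-based marking, the essential operation is mapping each stimulus timestamp onto the nearest EEG sample index using a clock shared by the presentation and recording systems. A minimal sketch (the shared-clock setup and names here are illustrative assumptions):

```python
import numpy as np

fs = 256           # EEG sampling rate (Hz)
rec_start = 1000.0  # recording start time on the shared clock (s)

def event_to_sample(stim_time, fs=fs, rec_start=rec_start):
    """Convert a stimulus timestamp (shared clock, seconds) to an EEG sample index."""
    return int(round((stim_time - rec_start) * fs))

# Three stimuli presented 2.0, 3.5, and 5.25 s into the recording
stim_times = [1002.0, 1003.5, 1005.25]
indices = [event_to_sample(s) for s in stim_times]
print(indices)   # [512, 896, 1344]

# Worst-case rounding error is half a sample period: ~1.95 ms at 256 Hz
print(f"max quantization error: {1000 / (2 * fs):.2f} ms")
```

Hardware triggers and LSL do essentially this for you, with the added work of keeping the two clocks aligned; any uncorrected offset between them shifts every epoch by the same amount, which is why consistent latency matters more than zero latency.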
The Neurosity Crown checks several of these boxes in interesting ways. Its 8 channels are positioned at CP3, C3, F5, PO3, PO4, F6, C4, and CP4. That's frontal, central, and parietal coverage across both hemispheres. The sampling rate is 256 Hz, which meets the minimum threshold for ERP work. And critically, the Crown provides raw EEG data at the full 256 Hz through both its JavaScript and Python SDKs.
The Crown's electrode positions cover key areas for several ERP components. C3/C4 and CP3/CP4 sit over central and centro-parietal cortex, where the P300 is typically largest. F5/F6 cover lateral frontal areas relevant to the MMN and N200. PO3/PO4 cover parieto-occipital regions where early visual ERPs like the P1 and N170 are prominent. While it's not a 64-channel research cap, this layout captures many of the most commonly studied components.
For developers, the Crown's SDK ecosystem opens up ERP experimentation in ways that weren't possible before. You can write a stimulus presentation program in JavaScript, record raw EEG through the Neurosity SDK, and perform epoch extraction and averaging in real time. The Crown's BrainFlow integration provides access to the data through a standardized API that's compatible with established EEG analysis pipelines. And Lab Streaming Layer (LSL) support enables precise synchronization between stimulus events and EEG recordings.
The Crown's N3 chipset handles on-device signal processing with hardware-level encryption, which means you can run ERP experiments without raw brain data ever leaving the device until you explicitly export it. For researchers working with sensitive populations or in environments where data privacy matters, this is a meaningful feature.
Could you run a publication-quality N400 language study with 8 channels? That would be a stretch. But a P300 speller demo, an attention monitoring system, or a classroom experiment exploring how the brain detects change? Those are well within reach. The field is moving toward ecological validity anyway, studying the brain in real-world contexts rather than artificial laboratory settings. Portable, accessible EEG is a prerequisite for that shift.
ERPs, BCIs, and the Future of Thinking at Machines
The P300 speller was just the beginning. ERPs are now central to a growing family of brain-computer interfaces that use specific neural responses as control signals.
Error-related potentials in BCI. Here's a clever one. When a BCI makes a mistake (selects the wrong letter, moves a cursor the wrong way), the user's brain generates an error-related potential, a variant of the ERN. Researchers have built systems that detect this error signal in real time and automatically correct the BCI's output. Your brain detects the error, the BCI reads the detection, and the system self-corrects, all without the user pressing a button or saying a word.
Hybrid BCIs. Modern systems increasingly combine ERP-based control with other EEG features. A user might use motor imagery (imagining hand movements, which produces distinct oscillatory patterns) for one type of command, and a P300 paradigm for another. The combination allows more flexible and faster control than either approach alone.
Rapid serial visual presentation (RSVP). Instead of flashing letters in a grid, RSVP systems stream images rapidly (10 per second) and detect the P300 when the target image appears. Military researchers have explored this for rapid image triage, using soldiers' P300 responses to flag target images faster than the soldiers could consciously report them. Your brain detects the target before you know you've seen it, and the BCI catches the detection.
The throughput of ERP-based BCIs is still modest compared to typing on a keyboard. But for people who have lost all motor function, these systems represent something that no other technology offers: a direct communication channel from brain to computer, built on signals the brain produces automatically when it encounters something meaningful.
And as consumer EEG devices get better, as machine learning improves single-trial ERP detection, and as the hardware shrinks from laboratory racks to things you wear on your head, the gap between research-grade and consumer-grade ERP capabilities will keep narrowing.
What Your Brain's Timestamps Are Really Telling You
Step back for a moment and consider what ERPs actually represent. Every component, the P300, the N400, the MMN, the ERN, is a timestamp. A precise record of when your brain performed a specific computation.
The P300 says: "At 300 milliseconds, I recognized this as important." The N400 says: "At 400 milliseconds, I tried to access the meaning and hit a wall." The MMN says: "At 150 milliseconds, I noticed the world changed, even though nobody asked me to pay attention." The ERN says: "At the exact moment of the error, before consciousness caught up, I already knew."
These timestamps are astonishing when you think about them. Your brain is making decisions, detecting errors, processing language, modeling the world, all within fractions of a second, and leaving electrical traces of each computation on the surface of your scalp.
For most of human history, these signals were invisible. Now they're accessible. A research student can stream raw EEG from a device that fits in a backpack, present stimuli from a laptop, and extract ERPs that reveal the timing of cognitive processes. A developer can build an application that reads the P300 and turns attention into a control signal. A clinician can track the P300 latency of a patient over months and detect cognitive changes before any behavioral test would catch them.
The brain has always been timestamping its work. We just couldn't read the timestamps before. Now we can, and the applications we build with that information are limited only by how well we understand the signals and how creatively we use them.
Your brain is composing a continuous stream of replies to the world. Tiny, precise, exquisitely timed. The question is no longer whether we can read those replies. It's what we'll do with them.

