
How Does EEG Actually Work?

By AJ Keller, CEO at Neurosity  •  January 2026
EEG detects the synchronized electrical activity of millions of neurons firing together, captured through electrodes on your scalp, amplified, filtered, and digitized into data.
Every thought, every feeling, every flicker of attention produces electrical signals in your brain. EEG is the technology that turns those invisible signals into something you can see, measure, and act on. Here's the full picture of how it happens, from individual neurons to the data on your screen.

Right Now, Your Brain Is Doing Something Extraordinary. And You Can't Feel It.

As you read this sentence, roughly 86 billion neurons in your brain are doing something that would make any electrical engineer lose sleep. They're generating electricity. Not metaphorically. Actual, measurable electrical current, produced by ions flooding in and out of cell membranes through channels that open and close in patterns so complex that we've been studying them for nearly a century and still haven't fully mapped them.

Your skull is, right now, humming with electrical activity. Voltage fluctuations ripple across your cortex in waves. Some of those waves cycle 4 times per second. Others cycle 40 times per second. The pattern they form at this exact moment is different from the pattern they formed three seconds ago, and it will be different again three seconds from now. This electrical symphony is you. Your thoughts, your attention, your emotions, your sense of being a conscious person reading words on a screen. All of it shows up in the electricity.

And here's the part that still blows my mind: you can read that electricity from the outside. You can stick electrodes on someone's scalp, through their hair, through their skin, through their skull, and pick up the electrical whisper of their brain thinking.

That's EEG. Electroencephalography. And the fact that it works at all is one of the most underappreciated marvels of modern science.

It Starts With a Single Neuron (But a Single Neuron Isn't Enough)

If you want to understand how EEG works, you need to start where the signal starts: at the neuron.

A neuron communicates using two types of electrical signals. The first is the action potential, a brief, all-or-nothing voltage spike that travels down the neuron's axon like a lit fuse racing down a wire. Action potentials are fast (they last about 1 millisecond) and they're the primary way neurons send long-distance messages.

The second type is the postsynaptic potential (PSP). When a neuron receives a signal from another neuron across a synapse, ions flow through channels in the receiving neuron's membrane. This creates a small, local change in voltage that lasts tens of milliseconds. It's slower and messier than an action potential. And, somewhat counterintuitively, it's the postsynaptic potential that EEG actually measures.

Why? Because of geometry.

The neurons that matter most for EEG are called pyramidal neurons, and they're named for their shape. Each one has a long, straight trunk (called an apical dendrite) that extends perpendicular to the cortical surface, like a tree growing straight up from the ground. When a postsynaptic potential occurs at the top of this dendrite, positive ions flow in at the top and create a relative negativity at the bottom. This separation of charge creates a tiny electrical dipole, a miniature battery, oriented vertically.

Now here's the critical point. One of these dipoles is unimaginably small. A single pyramidal neuron's postsynaptic potential generates a voltage so faint it would be completely invisible to any electrode on the scalp. You would need equipment sensitive enough to detect the gravitational pull of a feather from across a football field. It can't be done.

So how does EEG work at all?

The Secret Is Synchrony: Millions of Tiny Voices Singing in Unison

The answer is one of those facts that, once you hear it, rearranges how you think about the brain.

EEG doesn't detect individual neurons. It detects what happens when approximately 10,000 to 50,000 neurons fire at the same time, in the same direction, in the same rhythm.

Think about it this way. Imagine you're standing outside a stadium. If one person inside the stadium claps, you hear nothing. If a hundred people clap at random times, you hear a faint, indistinct murmur. But if 50,000 people clap in unison, in rhythm, the sound is thunderous. You can hear it from the parking lot.

That's what happens in your cortex. Pyramidal neurons are arranged in parallel, like trees in an orchard, all oriented perpendicular to the brain's surface. When thousands of them receive postsynaptic input at the same time, their individual tiny dipoles add up. The currents sum. The voltages sum. And the combined electrical field becomes strong enough to pass through the meninges, through the cerebrospinal fluid, through the skull, through the scalp, and reach an electrode sitting on the surface.

This is why EEG is fundamentally a measurement of synchronized population activity. It tells you when large groups of neurons are working together in time. And this turns out to be incredibly informative, because synchronization is how different brain regions coordinate with each other. When you focus your attention, neurons in your frontal and parietal cortex synchronize. When you relax, different populations synchronize at different frequencies. The rhythms change, and EEG catches those changes.
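The stadium intuition can be made concrete with a toy simulation (illustrative numbers only, not a biophysical model; the `population_amplitude` helper is my own): each neuron contributes a unit-amplitude 10 Hz oscillation, and we compare the summed peak when phases are locked versus random.

```python
import math
import random

def population_amplitude(n_neurons, synchronized, seed=0):
    """Peak amplitude of n_neurons summed 10 Hz oscillations.

    Each neuron contributes a unit-amplitude sinusoid. Synchronized
    neurons share one phase; unsynchronized ones get random phases.
    """
    rng = random.Random(seed)
    phases = [0.0] * n_neurons if synchronized else [
        rng.uniform(0, 2 * math.pi) for _ in range(n_neurons)
    ]
    # Sample one 10 Hz cycle in 1 ms steps and take the summed peak.
    peak = 0.0
    for step in range(100):
        t = step / 1000.0
        total = sum(math.sin(2 * math.pi * 10 * t + p) for p in phases)
        peak = max(peak, abs(total))
    return peak

sync = population_amplitude(10_000, synchronized=True)
async_ = population_amplitude(10_000, synchronized=False)
print(f"synchronized: {sync:.0f}, unsynchronized: {async_:.0f}")
# Coherent amplitudes add linearly (~N); random phases grow only ~sqrt(N).
```

With 10,000 neurons, the synchronized population peaks near 10,000 units while the desynchronized one stays around a hundred, roughly the square-root scaling you'd expect from a random walk. That two-orders-of-magnitude gap is the whole game for scalp EEG.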

The Synchrony Threshold

For an EEG signal to be detectable at the scalp, roughly 10,000 to 50,000 neurons need to be firing with synchronized postsynaptic potentials within the same cortical patch, typically spanning about 6 square centimeters. This means EEG is blind to neural activity that is highly localized or temporally scattered. What it excels at detecting is large-scale, coordinated brain dynamics, exactly the kind of activity that underlies cognitive states like attention, relaxation, and drowsiness.

Volume Conduction: The Signal's Obstacle Course

So the neurons fire in sync and produce a detectable electrical field. But that field doesn't teleport to the electrode. It has to travel through tissue. And that journey changes the signal in important ways.

Between a firing neuron in your cortex and an electrode on your scalp, the electrical signal passes through:

  1. Cerebrospinal fluid (CSF) surrounding the brain. CSF is a good conductor, which means current spreads easily through it, and that spreading blurs the spatial precision of the signal.
  2. The skull. Bone is a poor conductor, roughly 80 times more resistive than CSF. This is the biggest obstacle. The skull dramatically attenuates and spatially smears the signal. A focused electrical source in the cortex gets spread out over a wide area of the scalp.
  3. The scalp. Another layer of tissue with its own conductivity properties. By the time the signal arrives here, it's been blurred significantly.

This process is called volume conduction, and it's the reason EEG has relatively low spatial resolution compared to something like fMRI. A single EEG electrode doesn't "see" a pinpoint in the brain. It sees a blurred average of electrical activity from a broad cortical area beneath it. The technical term for this blurring challenge is the inverse problem: given a voltage pattern on the scalp, it's mathematically impossible to determine a unique source configuration in the brain without making simplifying assumptions.

Here's an analogy. Imagine you're standing outside a building, pressing your ear against the wall. You can tell that there's a party going on inside. You can even tell whether the music is loud or quiet, fast or slow. But you can't pinpoint which room the music is coming from, and you definitely can't make out individual conversations. That's roughly what EEG is doing. It gives you the big picture of what's happening across the brain, with extraordinary timing precision, but it trades spatial detail for that speed.

And that trade-off is actually what makes EEG special.

What EEG Loses in Space, It Wins in Time

Here's where the comparison with other brain imaging technologies gets interesting.

fMRI (functional magnetic resonance imaging) measures blood flow changes in the brain. It gives you beautiful, detailed spatial maps. You can see which brain region is active with millimeter precision. But blood flow is slow. It takes 1 to 2 seconds for blood to rush to an active brain region. So fMRI's temporal resolution is measured in seconds.

EEG measures electrical activity directly. The signal travels at nearly the speed of light through tissue. When a population of neurons fires, EEG detects it within milliseconds. The temporal resolution of EEG is typically 1 to 4 milliseconds, depending on the sampling rate.

To put that in perspective: in the time it takes fMRI to register one brain event, EEG could have captured a thousand.

| Feature | EEG | fMRI |
|---|---|---|
| What it measures | Electrical activity (postsynaptic potentials) | Blood oxygenation changes (BOLD signal) |
| Temporal resolution | 1-4 milliseconds | 1-2 seconds |
| Spatial resolution | ~1-2 cm (scalp level) | ~1-2 mm |
| Portability | Highly portable, wearable | Requires massive MRI scanner |
| Cost | $100 - $10,000 (consumer to research) | $1M+ for the machine, $500+ per session |
| Real-time capable | Yes | Limited |
| Movement tolerance | Moderate (with artifact management) | Almost none (must lie still) |

This is why EEG remains the dominant technology for real-time brain monitoring, brain-computer interfaces, and any application where you need to know what the brain is doing right now, not what it was doing two seconds ago.

Electrode Placement: The 10-20 System and Why Position Matters

If you're going to listen to the brain's electrical activity from the scalp, the obvious question is: where do you put the electrodes?

In 1958, neurologist Herbert Jasper proposed a standardized system that the field still uses today. It's called the 10-20 system, named because electrodes are placed at intervals of 10% and 20% of the distance between standard skull landmarks.

Each electrode position gets a label. The letter tells you which brain region is underneath:

  • F = Frontal
  • C = Central
  • P = Parietal
  • T = Temporal
  • O = Occipital
  • Fp = Frontopolar

The number tells you which hemisphere. Odd numbers (1, 3, 5, 7) are on the left. Even numbers (2, 4, 6, 8) are on the right. "z" means midline.

So F3 is the left frontal area. C4 is right central. Pz is parietal midline.
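The labeling scheme is mechanical enough to express in a few lines. Here's a sketch in Python (the `describe_electrode` helper and region table are my own names, covering only the classic single-letter 10-20 sites, not combined positions like CP3):

```python
# Region letters from the 10-20 system
REGIONS = {
    "Fp": "frontopolar", "F": "frontal", "C": "central",
    "P": "parietal", "T": "temporal", "O": "occipital",
}

def describe_electrode(label):
    """Decode a 10-20 label like 'F3' into (region, hemisphere)."""
    letters = "".join(ch for ch in label if ch.isalpha() and ch != "z")
    region = REGIONS[letters]
    if label.endswith("z"):
        hemisphere = "midline"  # 'z' replaces the number on the midline
    else:
        number = int("".join(ch for ch in label if ch.isdigit()))
        hemisphere = "left" if number % 2 == 1 else "right"
    return region, hemisphere

print(describe_electrode("F3"))   # ('frontal', 'left')
print(describe_electrode("C4"))   # ('central', 'right')
print(describe_electrode("Pz"))   # ('parietal', 'midline')
```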

Why does position matter so much? Because the brain is not one uniform organ. Different regions handle different functions. Frontal electrodes pick up activity related to executive function, decision-making, and emotional regulation. Parietal electrodes catch activity related to spatial processing and attention. Occipital electrodes detect visual processing. Central electrodes sit over the motor and sensory cortex.

A clinical EEG setup typically uses 19 to 21 electrodes. Research systems may use 64, 128, or even 256. More electrodes means finer spatial sampling, but also more setup time, more data to manage, and more opportunities for noisy connections.

Consumer EEG devices make a different trade-off. They use fewer electrodes placed at strategically chosen positions to capture the most informative signals with minimal setup friction.


From Scalp to Screen: Amplification, Filtering, and Digitization

The raw electrical signal that reaches a scalp electrode is tiny. We're talking about 1 to 100 microvolts. For comparison, a standard AA battery produces 1.5 volts. The EEG signal is roughly 15,000 to 1,500,000 times weaker than a AA battery. Picking up this signal is like trying to hear someone whisper from across a construction site.

Getting from that whisper to usable data involves three critical stages.

Stage 1: Amplification

The first thing any EEG system does is amplify the signal. Specialized differential amplifiers boost the voltage by a factor of 1,000 to 100,000. But here's the trick: the amplifier doesn't just amplify the brain signal. It amplifies everything, including noise from power lines, muscle activity, and electromagnetic interference from nearby electronics.

This is where differential amplification earns its name. Each EEG channel actually uses two electrodes and measures the difference in voltage between them. Any noise that affects both electrodes equally (like the 50/60 Hz hum from power lines) gets subtracted out. Only signals that differ between the two electrode positions, which is where brain activity shows up, get amplified. This technique is called common-mode rejection, and it's the reason EEG works outside of a Faraday cage.
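A toy numerical sketch makes common-mode rejection tangible (the amplitudes and the `differential_record` helper are illustrative, not real hardware behavior): both electrodes pick up a large 60 Hz hum, but only the active site carries the brain signal, so subtraction keeps the signal and drops the hum.

```python
import math

def differential_record(t, gain=10_000):
    """Toy differential channel: both electrodes share 60 Hz mains hum,
    but only the active site carries the 10 Hz brain signal (in µV)."""
    hum = 50.0 * math.sin(2 * math.pi * 60 * t)    # common-mode noise
    brain = 10.0 * math.sin(2 * math.pi * 10 * t)  # signal at active site
    active = brain + hum
    reference = hum
    # The hum appears identically at both inputs and cancels; only the
    # between-electrode difference (the brain signal) gets amplified.
    return gain * (active - reference)

# At t = 25 ms the 10 Hz signal is at its +10 µV peak:
out = differential_record(0.025)
print(out)  # ~100,000: the 10 µV brain signal amplified 10,000x, hum gone
```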

Stage 2: Filtering

After amplification, the signal passes through filters that strip away frequencies outside the range of interest.

A typical EEG recording cares about frequencies between about 0.1 Hz and 100 Hz. Anything below 0.1 Hz is likely slow electrode drift or sweat artifact. Anything above 100 Hz is likely muscle noise or electromagnetic interference.

Filters come in several types:

  • High-pass filters remove frequencies below a threshold (cutting the slow drift)
  • Low-pass filters remove frequencies above a threshold (cutting high-frequency noise)
  • Notch filters remove a specific frequency band, typically 50 Hz (in Europe) or 60 Hz (in the US) to eliminate power line interference
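To see what "cutting the slow drift" means in code, here's a minimal sketch using first-order filters (far simpler than the multi-pole analog and digital filters real EEG hardware uses; the function names are mine): a high-pass filter can be built as "the signal minus its own slow-moving average."

```python
def low_pass(samples, alpha=0.1):
    """First-order IIR low-pass: each output leans toward the new sample."""
    out, y = [], samples[0]
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

def high_pass(samples, alpha=0.1):
    """High-pass as 'signal minus its low-passed (slow) component'."""
    slow = low_pass(samples, alpha)
    return [x - s for x, s in zip(samples, slow)]

# A constant offset (think slow electrode drift) is removed entirely:
drifted = [100.0] * 50
cleaned = high_pass(drifted)
print(cleaned[-1])  # 0.0 -- the drift is gone
```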

The band that remains after filtering contains the brainwaves you've probably heard of:

| Band | Frequency | Associated With |
|---|---|---|
| Delta | 0.5 - 4 Hz | Deep sleep, unconsciousness |
| Theta | 4 - 8 Hz | Drowsiness, meditation, memory encoding |
| Alpha | 8 - 13 Hz | Relaxed wakefulness, eyes closed, calm focus |
| Beta | 13 - 30 Hz | Active thinking, concentration, alertness |
| Gamma | 30 - 100+ Hz | Higher cognitive functions, perception binding, focus |
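Mapping a frequency to its band is a simple lookup. A sketch using these ranges (boundary handling is a convention; here each lower bound is inclusive, and `classify` is my own helper):

```python
# (band, low Hz, high Hz) from the standard brainwave bands
BANDS = [
    ("delta", 0.5, 4), ("theta", 4, 8), ("alpha", 8, 13),
    ("beta", 13, 30), ("gamma", 30, 100),
]

def classify(freq_hz):
    """Name the band a frequency falls in (lower bound inclusive)."""
    for name, low, high in BANDS:
        if low <= freq_hz < high:
            return name
    return "out of range"

print(classify(10))   # alpha (the classic eyes-closed rhythm)
print(classify(6))    # theta
print(classify(40))   # gamma
```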

Stage 3: Digitization

Finally, the amplified, filtered analog signal gets converted into digital data through an analog-to-digital converter (ADC). This is where the sampling rate comes in.

The sampling rate determines how many voltage measurements the system takes per second. A 256 Hz sampling rate means the system captures 256 snapshots of the brain's electrical state every second. A 512 Hz system captures 512.

How fast is fast enough? There's a principle in signal processing called the Nyquist theorem that gives a clean answer: to accurately capture a signal at a given frequency, you need to sample at least twice that frequency. So if you want to capture brain activity up to 100 Hz (which covers delta through gamma), you need a sampling rate of at least 200 Hz.

This is why 256 Hz has become the sweet spot for consumer and many research EEG systems. It comfortably captures the full range of relevant brainwave frequencies while keeping data manageable. At 256 Hz with 8 channels, you're generating about 2,048 data points per second. That's a rich, detailed picture of the brain's electrical landscape, streaming in real-time.
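The arithmetic here is worth making explicit (the helper names are illustrative):

```python
def min_sampling_rate(max_freq_hz):
    """Nyquist: sample at least twice the highest frequency of interest."""
    return 2 * max_freq_hz

def data_rate(sampling_hz, channels):
    """Samples generated per second across all channels."""
    return sampling_hz * channels

# Capturing gamma up to 100 Hz requires at least 200 Hz sampling:
assert min_sampling_rate(100) == 200

# 8 channels at 256 Hz, as in the text:
print(data_rate(256, 8))  # 2048 samples per second
```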

Artifacts: The Enemies of Clean Data (And How to Fight Them)

Here's an uncomfortable truth about EEG: your brain isn't the only thing producing electrical signals in your head.

Your eyes generate massive voltages when they move or blink (the cornea carries a positive charge relative to the retina). Your jaw muscles fire when you clench or chew. Your forehead muscles tense when you concentrate or frown. Even your heartbeat produces electrical artifacts that propagate to the scalp. And then there's the external environment: power lines, fluorescent lights, Wi-Fi routers, and phones, all radiating electromagnetic noise.

These non-brain signals are called artifacts, and they're often 10 to 100 times larger than the brain signals you're trying to measure. An eye blink can produce a voltage spike of 100 to 200 microvolts, dwarfing the 1 to 20 microvolt brain signals underneath.
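That size mismatch itself gives you a crude first-line detector: amplitude thresholding. A minimal sketch (the 100 µV threshold and `flag_artifacts` helper are illustrative; real systems layer on the more sophisticated methods described next):

```python
def flag_artifacts(samples_uv, threshold_uv=100.0):
    """Return indices whose absolute amplitude exceeds the threshold.

    Scalp EEG rarely exceeds a few tens of microvolts, so excursions
    past ~100 µV are usually blinks or muscle activity, not brain.
    """
    return [i for i, v in enumerate(samples_uv) if abs(v) > threshold_uv]

# 20 µV of ongoing EEG with a 150-180 µV blink spliced into the middle:
recording = [20.0] * 10 + [150.0, 180.0, 150.0] + [20.0] * 10
print(flag_artifacts(recording))  # [10, 11, 12]
```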

Managing artifacts is arguably the single most important challenge in EEG engineering. The solutions come in layers:

Hardware solutions. Better electrode contact reduces impedance, which reduces noise susceptibility. Shielded cables and on-device processing minimize electromagnetic interference. Differential amplification rejects common-mode noise.

Real-time filtering. Notch filters remove power line interference. High-pass filters remove slow drift from sweat and electrode movement. Adaptive filters can track and subtract known noise sources.

Algorithmic solutions. Independent Component Analysis (ICA) is a mathematical technique that separates mixed signals into independent sources. Applied to EEG, ICA can identify components that look like eye blinks, muscle activity, or heartbeats and remove them while preserving brain signals. More recent approaches use machine learning trained on labeled artifact data to detect and clean contamination in real-time.

On-device processing. This is where modern consumer EEG has made the biggest leap. Rather than streaming raw, artifact-contaminated data to a computer for later cleaning, advanced devices process the signal on the device itself. This means artifact rejection happens at the source, before the data ever leaves the hardware.

Why On-Device Processing Matters

Traditional EEG systems send raw data to an external computer, where artifacts are removed after recording. This introduces latency and requires the user to stay tethered. On-device processing, like the Neurosity Crown's N3 chipset, handles signal conditioning, artifact rejection, and feature extraction directly on the hardware. This means lower latency, better privacy (raw brain data never needs to leave the device), and a wearable form factor that works outside the lab.

The "I Had No Idea" Moment: Your Brain Is Transparent to Itself

Here's something that stops most people in their tracks when they first hear it.

The electrical signals that EEG measures aren't just a byproduct of brain activity. They're not exhaust fumes from the computational engine. Many neuroscientists now believe that the oscillating electrical fields detected by EEG are functionally important. They're part of how the brain computes.

The theory, supported by a growing body of research, is that brain oscillations serve as a synchronization mechanism. When the frontal cortex needs to communicate with the visual cortex during a task that requires both attention and perception, they synchronize their oscillations. They "tune in" to the same frequency, like two walkie-talkies switching to the same channel. This is called neural coherence, and it may be one of the brain's primary mechanisms for routing information between regions.

What this means is striking: when you look at an EEG recording and see alpha brainwaves spreading across the parietal cortex, you're not just seeing evidence that the brain is relaxed. You may be seeing the brain's actual communication protocol in action. The oscillations aren't a shadow of the computation. They might be the computation, or at least an essential part of it.

This reframes EEG entirely. You're not just passively listening to the brain's electrical noise. You're eavesdropping on the brain's own networking protocol.

From Laboratory Curiosity to Wearable Technology

Hans Berger recorded the first human EEG in 1924. He used crude galvanometers and silver foil electrodes inserted under his patients' scalps. The equipment filled a room. The recordings were scratchy lines on paper. But Berger saw something in those wiggly lines that changed neuroscience: the alpha rhythm, a clear 10 Hz oscillation that appeared when subjects closed their eyes and vanished when they opened them. The brain had a measurable, predictable electrical signature. It was real.

For the next 80 years, EEG remained a clinical and research tool. The equipment was expensive, fragile, and required trained technicians to operate. Getting an EEG meant going to a hospital or university lab, sitting in a specially shielded room, and having a technician spend 30 to 45 minutes gluing electrodes to your head with conductive paste.

The transformation started with dry electrodes (no gel required), miniaturized amplifiers, Bluetooth connectivity, and on-device computing. These four advances made something possible that Berger couldn't have imagined: putting a functional EEG on your head like a pair of headphones and streaming brainwave data to your phone.

The Neurosity Crown represents what this trajectory leads to. Eight EEG channels at positions CP3, C3, F5, PO3, PO4, F6, C4, and CP4, covering frontal, central, parietal, and occipital regions. Sampling at 256 Hz. All signal processing, artifact rejection, and feature extraction handled on-device by the N3 chipset. No gel. No wires. No technician. And the data streams directly to applications through JavaScript and Python SDKs.

This isn't a watered-down version of clinical EEG. Those 8 channels, strategically positioned, capture the core brainwave dynamics that matter for real-time brain-computer interaction: frontal alpha asymmetry (emotional regulation), frontal beta (active cognition), parietal alpha (relaxed attention), and cross-regional coherence (inter-region communication). The 256 Hz sampling rate satisfies the Nyquist criterion for all standard brainwave bands through gamma.

What 8 Channels at 256 Hz Actually Gets You

With the Crown's sensor configuration, you get real-time access to:

  • Raw EEG at 256 samples per second across all 8 channels
  • Power spectral density showing the strength of each frequency band at each electrode
  • Raw FFT frequency-domain data for custom spectral analysis
  • Focus and calm scores derived from validated brainwave patterns
  • Signal quality indicators so you know when the data is clean
  • Accelerometer data for movement artifact flagging
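Band power of the kind listed above can be computed from raw samples with a discrete Fourier transform. Here's a naive, dependency-free sketch (not the Crown's actual pipeline; `band_power` is my own helper, and an O(N²) DFT is fine for short windows where a real system would use an FFT):

```python
import math

def band_power(samples, fs, low_hz, high_hz):
    """Summed DFT power of the frequency bins inside [low_hz, high_hz]."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n  # frequency of bin k in Hz
        if low_hz <= freq <= high_hz:
            re = sum(samples[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = sum(samples[t] * math.sin(2 * math.pi * k * t / n)
                     for t in range(n))
            power += (re * re + im * im) / n
    return power

# One second of a pure 10 Hz "alpha" wave sampled at 256 Hz:
fs = 256
wave = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(wave, fs, 8, 13)
beta = band_power(wave, fs, 13, 30)
print(alpha > 10 * beta)  # True -- the power concentrates in the alpha band
```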

For developers, this data is accessible through the Neurosity SDK in JavaScript and Python. For AI applications, the Crown integrates with Claude, ChatGPT, and other tools through the Model Context Protocol (MCP), meaning your brain data can directly inform AI agents and workflows.

Why EEG Still Matters More Than Ever

In an era of fMRI, MEG, fNIRS, and invasive brain implants, you might wonder whether EEG is becoming obsolete. The opposite is true.

EEG is the only brain measurement technology that combines millisecond temporal resolution, portability, affordability, and real-time capability. fMRI gives better pictures but requires a million-dollar scanner and a motionless patient. MEG costs even more. Invasive BCIs require surgery. fNIRS is portable but measures blood flow, not electrical activity, and has second-scale latency.

For brain-computer interfaces, neurofeedback, cognitive state monitoring, and any application where you need to know what the brain is doing right now, in the real world, on a moving human, EEG is it. The physics hasn't changed since Berger's day. What's changed is our ability to build smaller, smarter, more capable hardware and the software to make sense of the data.

We are, for the first time in human history, at a point where an individual person can measure their own brain's electrical activity in real-time, from their own home, and pipe that data into applications and AI systems that can act on it. The neurons are still doing the same thing they've always done: generating postsynaptic potentials, synchronizing in oscillatory patterns, producing fields that propagate through tissue to the scalp. The extraordinary part is that now, you can actually see it happening.

You've been carrying the most complex electrical system in the known universe inside your skull your entire life. EEG is how you finally get to listen to it.

Frequently Asked Questions
How does EEG work?
EEG works by placing electrodes on the scalp that detect the tiny electrical voltages produced when large populations of neurons fire in synchrony. These signals, typically between 1 and 100 microvolts, are amplified by a factor of 1,000 to 100,000, filtered to remove noise, and digitized into numerical data that can be analyzed by software. The result is a real-time recording of your brain's electrical activity.
What do EEG electrodes actually measure?
EEG electrodes primarily measure postsynaptic potentials from pyramidal neurons in the cerebral cortex. These are not individual neuron firings (action potentials) but the summed electrical activity of millions of neurons firing in sync. The signal must pass through cerebrospinal fluid, skull bone, and scalp tissue before reaching the electrode, which weakens and blurs it through a process called volume conduction.
Why does EEG have lower spatial resolution than fMRI?
EEG has lower spatial resolution because electrical signals from the brain must pass through multiple layers of tissue (meninges, cerebrospinal fluid, skull, scalp) before reaching electrodes. Each layer smears and distorts the signal, making it difficult to pinpoint the exact source location. This is called the inverse problem. However, EEG has far superior temporal resolution, capturing changes in milliseconds rather than seconds.
What is the 10-20 system in EEG?
The 10-20 system is the international standard for EEG electrode placement. Electrodes are positioned at intervals of 10% and 20% of the distance between skull landmarks (nasion, inion, and preauricular points). Each position is labeled with a letter indicating the brain region (F for frontal, C for central, P for parietal, O for occipital, T for temporal) and a number indicating hemisphere (odd for left, even for right, z for midline).
Can EEG read your thoughts?
EEG cannot read specific thoughts or decode the content of what you're thinking. What it can detect are patterns of brain activity associated with different cognitive states, such as focused attention, relaxation, mental effort, and motor imagery. With machine learning, EEG data can classify these states with increasing accuracy, enabling brain-computer interfaces that respond to broad mental commands and states rather than specific thoughts.
What causes artifacts in EEG recordings?
Common EEG artifacts include eye blinks and movements (which produce large electrical signals from the eye muscles), jaw clenching, head movement, muscle tension in the scalp and forehead, poor electrode contact, and external electromagnetic interference from power lines or electronic devices. Modern EEG systems use filtering, independent component analysis, and on-device processing to identify and remove these artifacts in real-time.
Copyright © 2026 Neurosity, Inc. All rights reserved.