Source Localization in EEG
The Crime Scene Problem
Imagine you're standing outside a building. You can hear sounds coming through the walls, but they're muffled. Distorted. You can tell something loud is happening on the left side of the building, and something quieter is happening on the right. But you can't tell what floor it's on. You can't tell if it's one source or three. And the walls themselves are reshaping the sound as it passes through, so what you hear outside doesn't perfectly match what's happening inside.
That's EEG.
Every time you place an electrode on someone's scalp and record a brainwave signal, you're listening through walls. The signal you see is not a clean readout of neural activity. It's the smeared, blurred, volume-conducted aftermath of something that happened centimeters beneath the surface, filtered through cerebrospinal fluid, bone, and skin.
For decades, this was considered good enough. You could measure what was happening on the scalp, assign it to the nearest brain region, and move on. But some researchers looked at those messy scalp signals and asked a much more ambitious question: can we reverse-engineer the original source?
Can we take the muffled sounds outside the building and figure out exactly which room they came from?
That question launched an entire subfield of neuroscience. It's called source localization, and it involves one of the most fascinating mathematical puzzles you've never heard of.
Why Your Scalp Is a Terrible Window Into Your Brain
Before we get into the solution, you need to understand why the problem is so hard. And that starts with a phenomenon called volume conduction.
When neurons fire, they produce tiny electrical currents. These currents don't stay put. They spread outward through the surrounding brain tissue, then through the cerebrospinal fluid that bathes the brain, then through the skull, and finally through the scalp. By the time an EEG electrode picks up the signal, it has traveled through multiple layers of tissue, each with different electrical conductivity.
Here's the key insight: the skull is a terrible electrical conductor. It has roughly 80 times the resistivity of the brain tissue beneath it. When electrical currents hit the skull, they don't pass through cleanly. They spread out laterally, like water spreading across a flat rock. A focal point source inside the brain produces a broad, blurry pattern on the scalp.
This is called spatial smearing, and it's the fundamental reason EEG has poor spatial resolution compared to techniques like fMRI. A cortical patch only a few square centimeters in area might produce a detectable signal across 10 or more centimeters of scalp. And if two sources are close together, their smeared scalp patterns overlap and blend into something that looks like a single source.
Think of it like looking at city lights from an airplane on a foggy night. You can see bright patches of light below, but you can't tell if a bright patch is one big building or five small ones clustered together. The fog has blurred everything.
Source localization is the attempt to see through that fog.
The Inverse Problem: One of the Hardest Problems in Physics
Here's where things get genuinely interesting.
There are two versions of the problem, and they're radically different in difficulty.
The forward problem asks: if I know there's an electrical source at a specific location inside the brain with a specific orientation and strength, what pattern will it produce on the scalp? This is solvable. You build a mathematical model of the head (the layers of brain, fluid, skull, and skin) and use Maxwell's equations to calculate how the electrical field propagates outward. It's computationally intensive but not conceptually tricky. One input, one output.
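The forward problem's "one input, one output" structure can be sketched as a single matrix multiply. Everything below is a stand-in: real lead field matrices are computed by solving Maxwell's equations in a layered head model, while this one is random noise, but the linear-algebra shape of the computation is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sources = 32, 5000

# "Lead field" stand-in: column j = scalp pattern produced by a unit
# current at candidate source location j. Real lead fields come from a
# head model, not a random matrix.
L = rng.standard_normal((n_channels, n_sources))

# Forward problem: known sources -> one unique scalp pattern.
s = np.zeros(n_sources)
s[1234] = 1.0                    # a single active dipole (arbitrary units)
v = L @ s                        # predicted voltage at each electrode
print(v.shape)                   # (32,)
```

The forward map is just `L @ s`: deterministic and unambiguous, which is exactly why the forward problem is the easy direction.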
The inverse problem asks the opposite: given the pattern I observe on the scalp, where is the source inside the brain? This is where the math gets uncomfortable.
The inverse problem is ill-posed. That's a technical term from mathematics, and it means the problem doesn't have a unique solution. There are infinitely many different arrangements of sources inside the brain that could produce the exact same pattern on the scalp.
Let that sink in. Not "many" possible solutions. Not "thousands." Infinitely many.
This isn't a failure of our instruments or our algorithms. It's a fundamental mathematical property of the electromagnetic inverse problem, proven rigorously by Hermann von Helmholtz in the 1850s. No matter how perfect your EEG data is, no matter how many channels you have, the scalp pattern alone cannot uniquely determine the sources that generated it.
Helmholtz showed that you can always find a distribution of sources on any closed surface inside the head that produces the identical potential on the scalp as the true source configuration. This means the inverse problem is fundamentally underdetermined. Every method of source localization deals with this by adding assumptions, or constraints, that narrow down the infinite possibilities to a single best estimate. The choice of constraints is what distinguishes the different methods.
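The non-uniqueness is easy to demonstrate numerically. With more candidate sources than channels, the lead field matrix has a nonempty null space: any source pattern drawn from it is electrically "silent" at the scalp, so it can be added to the true pattern without changing the data at all. The toy sizes below (8 channels, 20 sources) are just for illustration.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
L = rng.standard_normal((8, 20))   # 8 electrodes, 20 candidate sources
s_true = rng.standard_normal(20)
v = L @ s_true                     # "observed" scalp data

# Any source pattern in the null space of L produces zero scalp signal.
# With 20 sources and 8 channels there are 12 such silent dimensions,
# so infinitely many source patterns reproduce the data exactly.
N = null_space(L)                  # orthonormal basis, shape (20, 12)
s_alt = s_true + 5.0 * N[:, 0]     # a very different source pattern
print(np.allclose(L @ s_alt, v))   # True: identical scalp pattern
```

Every constraint a localization method adds is, in effect, a rule for choosing one point out of that silent subspace.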
So if the problem has no unique solution, how does anyone do source localization at all?
The answer: by cheating. Strategically and brilliantly.
How Scientists Cheat (Brilliantly)
Every source localization method works by adding extra information, assumptions about the brain, to constrain the infinite solution space down to something manageable. Different methods make different assumptions, and those assumptions determine what the method is good at and where it fails.
There are three major families of approaches, and each one represents a fundamentally different philosophy about what's happening inside the brain.
Dipole Fitting: The Spotlight Approach
The oldest and simplest approach models brain activity as coming from a small number of "equivalent current dipoles." A dipole is just a tiny arrow representing the direction and magnitude of current flow at a single point. The idea is that sometimes, a large population of neurons firing in synchrony can be approximated as a single point source.
Here's how it works. You pick a number of dipoles (say, two), place them at a starting position inside a head model, and use the forward model to calculate what scalp pattern those dipoles would produce. Then you compare that predicted pattern to your actual EEG data. If they don't match, you move the dipoles, adjust their orientations and strengths, and try again. This iterative process, guided by optimization algorithms, continues until the predicted pattern matches the observed data as closely as possible.
It's like playing a game of "warmer, colder" in three dimensions.
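That warmer-colder loop can be sketched in a few lines. Everything here is a toy: the electrodes are random points on a unit sphere, and the forward model is a dipole in an infinite homogeneous medium rather than a layered head model. But the two-step structure is the real one: the moment enters the equations linearly (solve it directly), while the position enters nonlinearly (search for it iteratively).

```python
import numpy as np
from scipy.optimize import minimize

SIGMA = 0.33  # conductivity (S/m); geometry in arbitrary units for this sketch

def dipole_potential(r0, p, elec):
    """Potential at each electrode from a current dipole at r0 with moment p,
    in an infinite homogeneous medium (toy forward model)."""
    d = elec - r0
    return (d @ p) / (4 * np.pi * SIGMA * np.linalg.norm(d, axis=1) ** 3)

def fit_moment(r0, v, elec):
    """For a fixed position the moment is linear in the data, so it can be
    solved by least squares from the three unit-moment patterns."""
    G = np.stack([dipole_potential(r0, np.eye(3)[k], elec) for k in range(3)], axis=1)
    return np.linalg.lstsq(G, v, rcond=None)[0]

# 32 electrodes on a unit sphere, one simulated "true" dipole inside.
rng = np.random.default_rng(2)
elec = rng.standard_normal((32, 3))
elec /= np.linalg.norm(elec, axis=1, keepdims=True)
r_true = np.array([0.2, 0.1, 0.4])
v_obs = dipole_potential(r_true, np.array([0.0, 0.0, 1.0]), elec)

# "Warmer, colder": move the dipole until predicted matches observed.
def misfit(r0):
    p = fit_moment(r0, v_obs, elec)
    return np.sum((dipole_potential(r0, p, elec) - v_obs) ** 2)

res = minimize(misfit, x0=np.array([0.0, 0.0, 0.2]), method="Nelder-Mead")
print(np.round(res.x, 2))  # recovered position, near (0.2, 0.1, 0.4)
```

With noiseless single-dipole data this converges cleanly; with real, noisy data and a wrong guess at the number of dipoles, the same loop will happily converge to something misleading, which is the weakness discussed below.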
Dipole fitting works beautifully when the brain activity you're studying really does come from a small number of focal sources. Epileptic spikes, for example, often originate from a single region, and dipole fitting can localize them with impressive accuracy. Early sensory responses, like the brain's reaction to a sudden sound or flash of light, also tend to come from well-defined cortical areas.
But dipole fitting has a critical weakness: you have to decide in advance how many dipoles to use. Pick too few, and you miss real sources. Pick too many, and the algorithm will happily fit noise. There's no objective way to determine the right number without prior knowledge.
LORETA and Distributed Source Methods: The Floodlight Approach
In the 1990s, Roberto Pascual-Marqui introduced a method called LORETA (Low Resolution Electromagnetic Tomography) that took a completely different approach. Instead of modeling a few point sources, LORETA estimates activity across the entire brain simultaneously.
The method divides the brain volume into thousands of small elements (called voxels), places a candidate current dipole at each one, and then asks: what's the smoothest possible distribution of activity across all these voxels that explains the observed scalp data?
That word "smoothest" is the key constraint. LORETA assumes that neighboring brain regions tend to have similar activity levels. This makes biological sense, since neurons that are physically close tend to be wired together and fire in correlated patterns. The smoothness assumption is what makes the infinite solution space collapse to a unique answer.
The tradeoff is right there in the name: Low Resolution. LORETA's solutions are inherently blurry. They'll tell you that activity is concentrated in the general vicinity of the right temporal lobe, but they won't pinpoint a specific gyrus with millimeter precision. Improved versions (sLORETA, eLORETA) have better mathematical properties and localization accuracy, but all distributed source methods trade spatial precision for robustness.
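The regularization structure behind this family of methods fits in a few lines. The sketch below is the plain minimum-norm estimate (the simplest distributed method): of all source patterns that explain the data, pick the one with the least total energy. LORETA proper swaps that L2 penalty for a discrete spatial Laplacian so that *non-smoothness* between neighboring voxels is what gets penalized, but the closed-form shape of the solution is analogous. The lead field is again a random stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_voxels = 32, 500
L = rng.standard_normal((n_channels, n_voxels))   # lead field stand-in
v = rng.standard_normal(n_channels)               # one sample of scalp data

# Minimum-norm estimate: s_hat = L^T (L L^T + lam*I)^{-1} v.
# The (32 x 32) system is cheap to solve even with thousands of voxels,
# because the regularized inverse lives in channel space.
lam = 0.1                                         # regularization strength
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_channels), v)
print(s_hat.shape)                                # (500,)
```

Note what the regularizer buys: the collapse from infinitely many exact solutions to one specific, blurry estimate.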
| Approach | Core idea | Best for | Main weakness |
|---|---|---|---|
| Dipole fitting | Models a small number of point sources | Focal activity like epileptic spikes or early sensory responses | Requires knowing the number of sources in advance; high precision when the assumptions are correct, wildly wrong when they're not |
| Distributed methods (LORETA, sLORETA, eLORETA, minimum norm) | Estimate activity across the entire brain volume under biologically plausible smoothness constraints | Situations where you don't know what to expect; no need to specify the number of sources | Lower spatial resolution |
| Beamforming (LCMV, SAM) | Adaptive spatial filters scan brain locations one at a time, suppressing interference from everywhere else | Tracking activity that changes over time | Sensitive to correlated sources |
Beamforming: The Directional Microphone
Beamforming comes from radar and sonar technology, and the analogy is perfect. Imagine you're in a noisy room and you have a directional microphone. You point it at one corner, and it picks up the conversation happening there while suppressing everything else. Then you point it at another corner and do the same.
In EEG beamforming, the "directional microphone" is a mathematical spatial filter. For each location in the brain, the algorithm designs a filter that maximizes sensitivity to activity at that specific point while minimizing contributions from all other locations. It then applies this filter to the multichannel EEG data and produces a time course of estimated activity at that location.
The most common variant is called LCMV (Linearly Constrained Minimum Variance) beamforming. It's particularly good for tracking how brain activity moves through different regions over time, because it produces a continuous time series for each brain location, not just a static image.
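The LCMV filter for a single scanned location has a compact closed form: minimize the filter's output variance subject to unit gain at the target, which gives w = C⁻¹l / (lᵀC⁻¹l), where C is the data covariance and l is the target location's scalp pattern. The data and lead-field column below are random stand-ins, but the filter math is the standard LCMV expression.

```python
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_times = 32, 1000
X = rng.standard_normal((n_channels, n_times))    # multichannel EEG stand-in
l = rng.standard_normal(n_channels)               # scalp pattern of target spot

# LCMV weights: minimum output variance with unit gain at the target.
C = np.cov(X)                                     # (32, 32) data covariance
Ci_l = np.linalg.solve(C, l)
w = Ci_l / (l @ Ci_l)                             # w = C^-1 l / (l^T C^-1 l)

print(np.isclose(w @ l, 1.0))                     # True: unit gain at target
y = w @ X                                         # estimated source time course
print(y.shape)                                    # (1000,)
```

The output `y` is a full time series, not a static map, which is exactly why beamforming suits questions about how activity evolves over time.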
Beamforming's Achilles heel is correlated sources. If two brain regions are active at the same time with similar timing patterns, the spatial filters break down because the algorithm can't separate the two. This is a real problem because many cognitive processes involve multiple brain regions working in tight synchrony.
The Channel Count Question
Here's the practical question that brings all of this back to earth: how many EEG channels do you actually need to do source localization?
The answer depends on what you mean by "source localization."
| Channel Count | Source Localization Capability | Typical Use Case |
|---|---|---|
| 1-4 channels | No source localization possible. Regional inference only based on electrode position. | Neurofeedback training at a specific scalp site |
| 8 channels | Basic regional source estimation. Can distinguish frontal vs. parietal, left vs. right hemisphere. Crude dipole fitting possible for single dominant sources. | Consumer BCIs, cognitive state monitoring, basic research |
| 19-32 channels | Functional source localization. Dipole fitting with 2-3 sources. LORETA produces meaningful images. Basic beamforming. | Clinical EEG, routine research, moderate-precision source imaging |
| 64 channels | Good source localization. Multiple dipole fitting. High-quality distributed source imaging. Strong beamforming. | Standard research-grade source imaging |
| 128-256 channels | High-density source localization. Millimeter-precision dipole fitting. Detailed cortical current density maps. State-of-the-art beamforming. | Advanced research, presurgical epilepsy mapping, brain mapping studies |
The math behind this is straightforward. Each EEG channel gives you one data point: a voltage value at a specific scalp location at a specific moment in time. Source localization algorithms need to estimate three position coordinates plus a three-component moment vector (orientation and strength) for each dipole, which is six unknowns per source. With 8 channels, you have 8 equations. That's enough to constrain a single simple source; add a second dipole and the system is already underdetermined.
With 64 channels, you have 64 equations. That's enough to constrain a much more detailed model of brain activity. With 256 channels, you can start to resolve activity patterns with spatial detail approaching the limits of what EEG physics allows.
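A back-of-the-envelope version of this counting argument, assuming each dipole costs six unknowns (three for position, three for the moment vector) and each channel contributes one equation per time sample:

```python
# Crude upper bound: ignores noise, temporal structure, and correlations,
# all of which make the practical limit lower than this count suggests.
UNKNOWNS_PER_DIPOLE = 6
for n_channels in (8, 19, 32, 64, 128, 256):
    max_dipoles = n_channels // UNKNOWNS_PER_DIPOLE
    print(f"{n_channels:3d} channels -> at most {max_dipoles} fully constrained dipoles")
```

In practice the usable number is smaller still, but the trend is the point: channels buy equations, and equations buy constrained sources.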
This is why high-density EEG systems exist. It's not because researchers enjoy spending $200,000 on equipment. It's because source localization is hungry for data points, and each channel on the scalp is another equation constraining the infinite solution space.

What Source Localization Actually Reveals
When source localization works well, the results are remarkable. Here are some of the things it has shown us.
The Timing of Thought
One area where EEG source localization genuinely outshines other brain imaging techniques is temporal resolution. fMRI can localize brain activity with millimeter precision, but it's slow, capturing snapshots every 1-2 seconds. EEG captures data at millisecond resolution. When you combine that temporal precision with source localization, you can track the flow of neural activity through the brain in near-real-time.
Studies using EEG source localization have shown that when you see a face, visual cortex activates within 100 milliseconds, then activity cascades forward to the fusiform face area by 170 milliseconds, and finally reaches prefrontal decision-making regions by 300 milliseconds. That entire processing sequence happens faster than an eye blink, and EEG source localization is one of the few tools that can resolve it.
Epilepsy Surgery Planning
This is the clinical application where source localization has the most life-changing impact. For patients with drug-resistant epilepsy, surgery to remove the seizure-producing region is sometimes the only option. But you need to know exactly where the seizures start.
High-density EEG with source localization can identify the seizure onset zone non-invasively, sometimes with enough precision to guide surgery without implanting electrodes directly into the brain. This doesn't replace invasive monitoring in all cases, but it narrows the search area and can spare some patients from additional surgical procedures.
Mapping Cognitive Networks
Source localization has revealed that even simple cognitive tasks involve coordinated activity across widely distributed brain networks. Reading a single word, for example, involves visual cortex (seeing the letters), angular gyrus (converting letters to sounds), Wernicke's area (accessing meaning), and prefrontal cortex (holding the word in working memory). All of this happens within half a second.
Without source localization, EEG researchers could only talk about "the signal at electrode Fz" or "the pattern at electrode Pz." With source localization, they can talk about activity in specific brain structures and track how information flows between them.
The Honest Limitations
Source localization is impressive, but it's important to be clear about what it can't do.
It can't see deep structures well. The hippocampus, amygdala, thalamus, and other subcortical structures sit deep inside the brain, far from the scalp. Their electrical fields are weak and heavily attenuated by the time they reach EEG electrodes. Source localization methods can sometimes detect strong hippocampal activity (like seizure onset), but for normal cognitive processing, these deep structures are essentially invisible to scalp EEG.
It depends on the head model. Every source localization algorithm needs a model of the head's geometry and electrical properties. Generic head models introduce errors because everyone's skull thickness, tissue conductivity, and brain anatomy are slightly different. MRI-based individual head models improve accuracy significantly, but most people don't have a spare MRI lying around.
The smoothness assumptions aren't always right. LORETA assumes neighboring brain regions have similar activity levels. This is generally true, but not always. When two adjacent brain regions are doing very different things (which happens), the smoothness assumption forces the algorithm to blur across them.
Correlated sources are problematic for everyone. When multiple brain regions activate simultaneously with similar timing, which happens constantly during normal cognition, most algorithms struggle to separate them. This is a fundamental limitation, not a bug that future methods will fix.
Spatial resolution has a ceiling. Even with 256 channels and a perfect head model, EEG source localization can't match the spatial resolution of fMRI. The physics of volume conduction impose a fundamental limit. You can get to roughly 1-2 centimeter resolution with high-density EEG under ideal conditions. fMRI routinely achieves 1-3 millimeters.
The 8-Channel Reality Check
Let's be straightforward about what this means for consumer EEG devices.
The Neurosity Crown has 8 channels. That's the highest channel count among consumer brain-computer interfaces, and those 8 channels are strategically placed across all four cortical lobes. For what the Crown is designed to do, which is real-time cognitive state monitoring, focus and calm detection, neurofeedback, and BCI applications, 8 channels is genuinely excellent. You don't need source localization to measure your attention levels or train your brainwave patterns.
But if your goal is clinical-grade source localization, pinpointing the cortical generator of a specific EEG component to a particular gyrus, you need more channels. That's not a limitation of the Crown's engineering. It's a limitation of the physics. Eight spatial samples on the scalp simply don't provide enough constraint to uniquely solve a 3D inverse problem with high precision.
What 8 channels can do is basic regional source estimation. With the Crown's coverage of frontal (F5, F6), central (C3, C4), centroparietal (CP3, CP4), and parieto-occipital (PO3, PO4) regions, you can meaningfully distinguish frontal versus posterior activity and left versus right lateralization. For developers building applications with the Neurosity SDK, this regional information is powerful enough to drive real-time brain-aware software.
The honest picture: source localization is a spectrum, not a binary. And 8 well-placed channels put you meaningfully on that spectrum, just not at the research-imaging end of it.
Where This Is All Going
Source localization is getting better, and the improvements are coming from unexpected directions.
Machine learning is starting to replace the traditional physics-based inverse solvers. Instead of assuming smoothness or specifying dipole counts, neural networks trained on simulated EEG data can learn to map scalp patterns to source configurations directly. Early results show that these data-driven approaches can outperform classical methods, especially in situations where the traditional assumptions break down.
There's also exciting work on combining EEG with other modalities. fMRI provides spatial precision but poor temporal resolution. EEG provides temporal precision but poor spatial resolution. Simultaneous EEG-fMRI recordings, constrained by each other's strengths, can achieve both. This combined approach is becoming the gold standard for brain mapping research.
And channel counts keep climbing. High-density EEG caps with 256 channels are now common in research labs, and systems with 512 or even 1,024 channels are being developed. Each additional channel adds another equation to constrain the inverse problem, slowly tightening the noose around the true solution.
But here's the thing that sticks with me. Even with all these advances, even if someone builds a 10,000-channel EEG system with a perfect head model and an infinitely powerful computer, the inverse problem remains fundamentally ill-posed. Helmholtz's proof doesn't expire. The math doesn't change.
Every source localization solution will always be an estimate. A very good estimate, an incredibly useful estimate, but an estimate nonetheless. The brain keeps some of its secrets even from our most sophisticated mathematics.
And somehow, that makes the whole enterprise more fascinating, not less. We're working at the edge of what physics allows us to know about the electrical life happening inside our own skulls. Every improvement in source localization is a small victory against a mathematically impossible problem.
Your brain is the most complex electrical system in the known universe, and we're learning to read it from the outside. One channel at a time.

