Neurosity
Guide

Source Reconstruction in EEG

By AJ Keller, CEO at Neurosity  •  February 2026
Source reconstruction uses mathematical models to estimate the three-dimensional brain regions generating the electrical patterns you see on the scalp.
Every EEG signal recorded on the scalp is a distorted echo of neural activity happening deeper inside the brain. Source reconstruction is the collection of computational techniques that attempts to reverse that distortion, working backward from surface measurements to build a three-dimensional map of which brain regions are actually producing the signal. The math is beautiful, the physics is stubborn, and the results are reshaping how we study the living brain.

The Puzzle of Hearing Through Walls

Picture yourself standing outside a concert hall. You can hear the orchestra playing through the thick stone walls, but the sound is muffled and muddled. You can tell the brass section is somewhere to the left. The percussion is probably at the back. But you can't pinpoint the first violin. You can't separate the oboe from the clarinet. The walls have blurred everything together.

Now imagine someone hands you the building's architectural blueprints and a set of physics equations describing how sound travels through stone. They tell you: "Using these tools, can you figure out exactly which instruments are playing and where each one is sitting?"

That, in a nutshell, is source reconstruction in EEG.

Every time an EEG electrode records activity on your scalp, it's picking up a blurred, smeared version of something happening centimeters deeper inside your brain. The signal has traveled through brain tissue, through the cerebrospinal fluid that cushions the brain, through bone, and through skin. Each of these layers distorts and spreads the signal. What arrives at the electrode is nothing like a clean readout of the original neural activity.

Source reconstruction is the set of techniques that tries to undo all that distortion. It takes the muddled signals from the scalp and mathematically works backward to estimate what's actually happening inside the three-dimensional brain.

And here's what makes it genuinely wild: this problem is, in a strict mathematical sense, impossible to solve perfectly. Yet researchers solve it every day, well enough to guide brain surgery and reveal how thoughts flow through your cortex in real time.

How? That's the interesting part.

What Your Scalp Actually Measures (And Why It's a Mess)

To understand why source reconstruction is hard, you need to understand what happens to a brain signal on its way to the scalp. The story starts with a phenomenon called volume conduction, and it's the single biggest reason EEG has a reputation for poor spatial resolution.

When a population of neurons fires in synchrony, they produce a tiny electrical current. This current doesn't stay local. It radiates outward in all directions, spreading through the surrounding tissue like ripples in a pond. First it moves through the highly conductive brain tissue. Then it hits the cerebrospinal fluid, which conducts electricity even better than brain tissue. Then it reaches the skull.

And the skull is where things fall apart.

Bone is a terrible electrical conductor. The human skull has roughly 80 times the resistivity of the brain tissue beneath it. When the spreading electrical current hits this wall of bone, it doesn't pass through cleanly. It fans out sideways, dispersing across a wide area of skull before finally seeping through to the scalp.

The result is spatial smearing. A neural source that's only a couple of square centimeters in the brain might produce a detectable voltage across 10 or more centimeters of scalp. If two sources are close together inside the brain, their smeared fields overlap on the scalp, blending into what looks like a single broad patch of activity.

Think of it as a privacy screen working in reverse. The brain's fine spatial details get obliterated on their way out. By the time EEG electrodes pick up the signal, the rich three-dimensional electrical landscape of the cortex has been flattened and blurred into a two-dimensional smudge.

Source reconstruction is the attempt to reverse this degradation, to take the smudge and computationally reconstruct the painting underneath.
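A full demonstration of skull smearing requires a multilayer head model, but the widening of scalp patterns with source depth shows up even in the crudest possible setup. The sketch below is a toy, not a real head model: it assumes an infinite homogeneous conductor (no skull, no cerebrospinal fluid), a made-up conductivity, and an invented line of "electrodes", using the textbook point-dipole potential. Its only point is that a deeper source paints a much broader, weaker pattern on the surface:

```python
import numpy as np

def dipole_potential(src, moment, electrodes, sigma=0.33):
    """Potential of a point current dipole in an infinite homogeneous
    conductor: V(r) = p . (r - src) / (4 * pi * sigma * |r - src|^3).
    sigma = 0.33 S/m is a typical value quoted for brain tissue."""
    r = electrodes - src
    d = np.linalg.norm(r, axis=1)
    return (r @ moment) / (4 * np.pi * sigma * d**3)

# A line of surface "electrodes" at z = 9 cm, spaced 1 mm apart.
x = np.linspace(-0.1, 0.1, 201)
electrodes = np.column_stack([x, np.zeros_like(x), np.full_like(x, 0.09)])

p = np.array([0.0, 0.0, 10e-9])   # 10 nA*m, radially oriented dipole

v_shallow = dipole_potential(np.array([0, 0, 0.07]), p, electrodes)  # 2 cm deep
v_deep = dipole_potential(np.array([0, 0, 0.03]), p, electrodes)     # 6 cm deep

def halfmax_width_mm(v):
    """Width of the surface pattern at half its peak amplitude
    (count of 1-mm-spaced points above half max)."""
    return int(np.sum(np.abs(v) >= 0.5 * np.abs(v).max()))

print(halfmax_width_mm(v_shallow), halfmax_width_mm(v_deep))
```

The deep source's half-maximum footprint is several times wider than the shallow one's, and its peak amplitude is far smaller. Add the skull's poor conductivity on top of this geometric spreading and you get the smearing described above.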

The Forward Problem: Building a Virtual Head

Before you can work backward from scalp data to brain sources (the hard part), you need to be able to work forward from brain sources to scalp data (the tractable part). This forward direction is the foundation that every source reconstruction method stands on.

The forward problem asks a simple question: if I know there's an electrical source at a specific location inside the brain, with a specific orientation and strength, what voltage pattern would it produce on the scalp?

This is solvable. You build a mathematical model of the head, a representation of the geometry and electrical properties of each tissue layer, and then use the physics of electromagnetism to calculate how the current propagates outward. One source in, one scalp pattern out.
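In practice the forward calculation is packaged as a "lead field" (or gain) matrix: one column per candidate source component, one row per electrode, so the entire forward problem collapses to a single matrix multiply. The sketch below builds such a matrix under toy assumptions (infinite homogeneous conductor instead of a real head model, an invented 9-sensor layout, a small made-up source grid):

```python
import numpy as np

def unit_leadfield(src, electrodes, sigma=0.33):
    """Scalp patterns of the three unit dipoles (x, y, z directions) at one
    source location, in an infinite homogeneous conductor (toy head model)."""
    r = electrodes - src
    d = np.linalg.norm(r, axis=1, keepdims=True)
    return r / (4 * np.pi * sigma * d**3)          # shape (n_electrodes, 3)

electrodes = np.array([[x, y, 0.09]
                       for x in (-0.06, 0.0, 0.06)
                       for y in (-0.06, 0.0, 0.06)])   # 9 toy sensors

# Candidate source grid: stacking the per-source lead fields column-wise
# turns the whole forward problem into one linear map: scalp = L @ s.
grid = [np.array([x, y, z])
        for x in (-0.02, 0.0, 0.02)
        for y in (-0.02, 0.0, 0.02)
        for z in (0.03, 0.05)]
L = np.hstack([unit_leadfield(g, electrodes) for g in grid])  # (9, 54)

# One source in, one scalp pattern out: activate a single moment component.
s = np.zeros(L.shape[1])
s[2] = 10e-9                  # z-component of the first grid source, 10 nA*m
scalp_pattern = L @ s         # predicted voltages at the 9 sensors
```

Real pipelines compute these columns with BEM or FEM solvers rather than a closed-form formula, but the structure is the same: every inverse method discussed below works with some matrix L like this one.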

The head model, also called the forward model, is the virtual head that the algorithm uses for its calculations. It comes in several flavors.

Spherical head models approximate the head as a set of concentric spheres, one each for brain, cerebrospinal fluid, skull, and scalp. This is the simplest option. The math has elegant closed-form solutions, and the computations are fast. But your head is not a sphere. The frontal bone of the skull is thicker than the temporal bone. The brain's folded surface (gyri and sulci) creates sources at all kinds of orientations. Spherical models miss all of this.

Boundary element models (BEM) use realistic head shapes extracted from MRI scans. They model the boundaries between tissue layers as surfaces made of thousands of triangles, computing how currents cross each boundary. BEM models capture individual skull thickness variations, the shape of the brain's folds, and the actual geometry of the cerebrospinal fluid layer. They're more accurate than spheres but require an MRI of the specific person being studied.

Finite element models (FEM) go even further, dividing the entire head volume into millions of tiny elements and assigning electrical conductivity values to each one. FEM models can account for skull holes, anisotropic (direction-dependent) conductivity in white matter, and even local variations in tissue properties. They're the most accurate but also the most computationally expensive.

Why the Head Model Matters So Much

The forward model is the lens through which source reconstruction algorithms see the brain. Every inaccuracy in the head model introduces errors in the source estimate. Using a spherical model when the person has an unusually thick frontal bone, or missing a skull defect from prior surgery, can shift the estimated source location by centimeters. This is why researchers who need precise source reconstruction invest in individual MRI-based head models rather than relying on generic templates.

Here's a useful way to frame it. The forward model is like having the blueprints to the concert hall. The better you understand how the building (the head) transforms sound (electrical signals) as they pass through its walls (tissues), the better your chances of figuring out what's happening on the stage (the brain) from what you hear outside.

The Inverse Problem: Where Math Gets Uncomfortable

Now the hard part. You have your forward model. You have your scalp data. And you want to run the process in reverse: given this pattern of voltages on the scalp, which sources inside the brain produced it?

This is called the inverse problem, and it's one of those beautiful problems in science where the difficulty is not a matter of needing better computers or smarter people. The difficulty is fundamental. Proven. Permanent.

Hermann von Helmholtz demonstrated in the 1850s that the electromagnetic inverse problem is ill-posed. That's a precise mathematical term meaning the problem does not have a unique solution. There are infinitely many different arrangements of electrical sources inside the brain that could produce the exact same pattern of voltages on the scalp.

Not "lots of" possible solutions. Not "millions." Infinitely many.

Here's an intuition for why. Imagine you measure the temperature at 32 points on the outside of a box, and you're told there are heat sources inside the box. Can you figure out exactly where they are? One big source in the middle could produce the same temperature profile as three smaller sources near the edges. A particular arrangement of sources near the surface could mimic a deeper source. The outside measurements constrain the answer, but they don't determine it uniquely.

The same is true for EEG. The scalp voltage pattern constrains which source configurations are possible, but it can never nail down the one true answer. This isn't a limitation of 2026 technology. It's a property of the underlying physics that no future technology will change.

So how does anyone do source reconstruction?

By making assumptions. Every source reconstruction method works by adding extra information, beliefs about how the brain probably behaves, that collapses the infinite solution space down to a single "best" estimate. The choice of assumptions is what distinguishes the different methods, and it's what determines when each method works brilliantly versus when it fails.

The Methods: Four Ways to Solve the Impossible

Source reconstruction methods fall into a few major families, and each one represents a genuinely different philosophy about what's happening inside the brain.

Dipole Fitting: Find the Spotlight

The oldest approach models brain activity as coming from a small number of "equivalent current dipoles." A dipole is a tiny arrow representing the direction and strength of current flow at a single point in the brain. The assumption is that a large population of synchronized neurons can sometimes be approximated as one point source.

The algorithm works by trial and improvement. You place one or two dipoles at starting positions inside your head model, use the forward model to predict what scalp pattern they'd produce, compare that prediction to your actual EEG data, and then nudge the dipoles (adjusting position, orientation, and strength) to reduce the mismatch. Repeat until the prediction is as close as possible.

It's like playing Marco Polo in three dimensions, but with math instead of shouting.
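The nudge-and-compare loop can be sketched numerically. For simplicity, the toy code below replaces iterative nudging with an exhaustive scan over a grid of candidate positions; at each position the best-fitting moment is a linear least-squares subproblem, and the position with the smallest residual wins. Everything here is invented for illustration (infinite-medium forward model, 9 made-up sensors); real tools use nonlinear optimizers and realistic head models:

```python
import numpy as np

def leadfield(pos, electrodes, sigma=0.33):
    """Unit-dipole lead field at one candidate position (toy infinite-medium
    model; a real pipeline would use a BEM or FEM forward solution)."""
    r = electrodes - pos
    d = np.linalg.norm(r, axis=1, keepdims=True)
    return r / (4 * np.pi * sigma * d**3)          # (n_electrodes, 3)

rng = np.random.default_rng(0)
electrodes = np.array([[x, y, 0.09]
                       for x in (-0.05, 0.0, 0.05)
                       for y in (-0.05, 0.0, 0.05)])

# Ground truth we will try to recover from the "scalp" data.
true_pos = np.array([0.01, -0.02, 0.05])
true_moment = np.array([0.0, 5e-9, 8e-9])
data = leadfield(true_pos, electrodes) @ true_moment
data = data + 1e-9 * rng.standard_normal(data.shape)   # sensor noise

def mismatch(pos):
    """Residual after fitting the best moment (linear subproblem) at pos."""
    L = leadfield(pos, electrodes)
    moment, *_ = np.linalg.lstsq(L, data, rcond=None)
    return np.linalg.norm(L @ moment - data)

# "Adjust and compare" done exhaustively: scan candidate positions, keep
# the one whose predicted scalp pattern best matches the recorded data.
grid = [np.array([x, y, z])
        for x in np.linspace(-0.03, 0.03, 7)
        for y in np.linspace(-0.03, 0.03, 7)
        for z in np.linspace(0.03, 0.07, 5)]
best_pos = min(grid, key=mismatch)
print(best_pos)
```

With clean data and a single true source, the scan lands on the correct position, which is exactly the regime where dipole fitting excels.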

Dipole fitting shines when the brain activity really does come from a small number of focal sources. Epileptic spikes often originate from a single cortical region, and dipole fitting can localize them with remarkable precision (within millimeters, if you have enough channels and a good head model). Early sensory responses, like the brain's reaction to a sudden sound or flash of light, also tend to activate compact cortical patches that dipole fitting handles well.

The weakness is obvious: you have to decide in advance how many dipoles to use. Pick too few and you miss real sources. Pick too many and the algorithm happily fits noise, producing phantom sources that don't exist. There's no purely data-driven way to know the right number.

LORETA and Minimum Norm: Spread the Floodlights

In the 1990s, Roberto Pascual-Marqui introduced LORETA (Low Resolution Electromagnetic Tomography), which took a fundamentally different approach. Instead of hunting for a few point sources, LORETA estimates activity everywhere in the brain simultaneously.

The method divides the brain into thousands of small volume elements called voxels, places a potential source at each one, and asks: what's the smoothest distribution of activity across all these voxels that explains the observed scalp data?

That smoothness constraint is the key assumption. LORETA assumes that if neurons in one spot are active, their neighbors are probably active too. This is biologically plausible because nearby neurons tend to be interconnected and fire in correlated patterns. A single isolated voxel blazing while everything around it sits quiet would be neurologically unusual.

Minimum norm estimation (MNE) is a related approach that instead minimizes the total amount of source activity needed to explain the data. Where LORETA says "find the smoothest answer," MNE says "find the simplest answer." Both collapse the infinite solution space to a unique solution, but they produce subtly different images.

The tradeoff is spelled out in LORETA's name: Low Resolution. Its estimates are inherently blurred. You'll learn that activity is concentrated somewhere in the right temporal region, but you won't get pinpoint accuracy to a specific fold of cortex. Improved variants (sLORETA and eLORETA) have better mathematical properties and zero localization bias for point sources, but all distributed methods trade sharpness for robustness.
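The minimum norm estimate actually has a one-line closed form. Given a lead field matrix L and scalp data y, the Tikhonov-regularized solution is s_hat = L.T (L L.T + lambda I)^(-1) y. The sketch below uses a random toy lead field (a real one would come from the forward model) just to show both properties at once: the truly active sources dominate the estimate, and the estimate is smeared across many locations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_sources = 64, 200

# Toy lead field: each column is the scalp pattern of one candidate source.
# (Random columns keep the demo self-contained; real columns come from
# the forward model.)
L = rng.standard_normal((n_channels, n_sources))

# Two active sources generate the scalp data.
s_true = np.zeros(n_sources)
s_true[40], s_true[150] = 1.0, -1.0
y = L @ s_true + 0.1 * rng.standard_normal(n_channels)

# Minimum norm: of all source vectors consistent with the data, take the
# one with least total energy (Tikhonov-regularized closed form):
#   s_hat = L.T @ inv(L @ L.T + lam * I) @ y
lam = 1.0
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_channels), y)

# The estimate is nonzero almost everywhere (the "low resolution" price),
# but the truly active locations carry far more amplitude than the rest.
print(np.argsort(np.abs(s_hat))[-2:])
```

Swapping the minimum-energy penalty for a spatial-smoothness penalty on the same equation is, in essence, the step from MNE to LORETA.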

Beamforming: The Directional Microphone

Beamforming was borrowed from radar and sonar engineering, and the analogy is perfect. Imagine you're at a cocktail party with a highly directional microphone. You point it at one person, and it picks up their voice while suppressing everyone else's. Then you swing it to the next person and do the same thing.

In EEG beamforming, the "directional microphone" is a mathematical spatial filter. For each location in the brain, the algorithm designs a filter that maximizes sensitivity to that specific spot while minimizing contributions from everywhere else. It applies this filter to your multichannel EEG data to produce a time series of estimated activity at that location.

The most common variant, LCMV (Linearly Constrained Minimum Variance) beamforming, is particularly good at tracking how activity shifts between brain regions over time. Because it produces a continuous signal for each brain location rather than a static snapshot, it's the method of choice for studying dynamic brain networks.
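The LCMV filter for a target location also has a compact form: w = C^(-1) l / (l.T C^(-1) l), where l is the target's scalp pattern and C is the sensor covariance. The constraint l.T w = 1 passes the target with unit gain while minimizing total output variance, which suppresses everything else. A self-contained toy (invented scalp patterns, simulated time courses, not real EEG):

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_times = 32, 2000

# Toy scalp patterns (lead field columns) for two brain locations, A and B.
l_a = rng.standard_normal(n_channels)
l_b = rng.standard_normal(n_channels)

# Independent source time courses mixed into the sensors, plus noise.
t = np.arange(n_times) / 250.0                   # 250 Hz sampling
s_a = np.sin(2 * np.pi * 10 * t)                 # 10 Hz rhythm at location A
s_b = rng.standard_normal(n_times)               # broadband activity at B
X = (np.outer(l_a, s_a) + np.outer(l_b, s_b)
     + 0.5 * rng.standard_normal((n_channels, n_times)))

# LCMV spatial filter aimed at location A:
#   w = C^(-1) l_a / (l_a.T C^(-1) l_a)
C = np.cov(X)                                    # sensor covariance (32 x 32)
C = C + 1e-3 * (np.trace(C) / n_channels) * np.eye(n_channels)  # regularize
Ci_la = np.linalg.solve(C, l_a)
w = Ci_la / (l_a @ Ci_la)

s_hat = w @ X    # reconstructed time course at location A
print(np.corrcoef(s_hat, s_a)[0, 1])
```

Because the sources here are uncorrelated, the filter recovers the 10 Hz rhythm cleanly; make s_b a copy of s_a and the recovery degrades sharply, which is exactly the correlated-source weakness described below.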

Beamforming's weakness is correlated sources. If two brain regions activate simultaneously with similar timing, the spatial filters get confused because they can't distinguish the two. This isn't rare. Many cognitive tasks involve multiple brain areas working in tight synchrony, which is precisely the situation where beamformers struggle.

Bayesian Methods: Let the Brain Tell You What's Plausible

The newest family of approaches uses Bayesian inference, a framework from statistics that combines observed data with prior knowledge to produce the most probable explanation.

In Bayesian source reconstruction, you encode what you already know about the brain, such as which regions tend to activate together, what typical source strengths look like, and anatomical constraints from MRI data, as a "prior" probability distribution. The algorithm then combines this prior with the actual EEG data to produce a "posterior" estimate of the most likely source configuration.

The appeal is flexibility. Instead of choosing one rigid assumption (smoothness, minimum energy, point sources), Bayesian methods let you incorporate multiple types of prior knowledge simultaneously. You can tell the algorithm "I expect sources to be smooth AND to follow the brain's cortical anatomy AND to be sparse." The result is often more accurate than any single-assumption method.

The catch is computational cost. Bayesian source reconstruction can be orders of magnitude slower than classical approaches. And the results are only as good as the priors. Feed the algorithm bad assumptions about the brain, and you'll get confidently wrong answers.
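The simplest Bayesian variant is a Gaussian prior over source amplitudes, whose posterior mean has a closed form. The toy sketch below (random lead field, invented prior, one simulated source) shows the core idea: encoding a belief about which patch is likely active into the prior covariance pulls the estimate toward the right answer, compared with a flat prior:

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_sources = 32, 100
L = rng.standard_normal((n_channels, n_sources))   # toy lead field

# One active source, noisy scalp data.
s_true = np.zeros(n_sources)
s_true[20] = 1.0
noise_var = 0.5
y = L @ s_true + np.sqrt(noise_var) * rng.standard_normal(n_channels)

def posterior_mean(prior_var):
    """Gaussian model: s ~ N(0, diag(prior_var)), y = L s + N(0, noise_var I).
    Posterior mean: R L.T (L R L.T + noise_var I)^(-1) y, with R the prior."""
    R = np.diag(prior_var)
    G = L @ R @ L.T + noise_var * np.eye(n_channels)
    return R @ L.T @ np.linalg.solve(G, y)

flat_prior = np.full(n_sources, 0.1)        # no anatomical knowledge
informed_prior = flat_prior.copy()
informed_prior[15:25] *= 20                 # belief: this patch is likely active

s_flat = posterior_mean(flat_prior)
s_informed = posterior_mean(informed_prior)
print(s_flat[20], s_informed[20])
```

Note that a flat prior makes this identical to regularized minimum norm, which is why Bayesian methods are often described as a generalization of the classical approaches. It also makes the failure mode concrete: boost the wrong patch in the prior and the posterior confidently follows it.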

Source Reconstruction Methods at a Glance

Dipole Fitting models a few point sources. Best for focal activity like epileptic spikes or early sensory responses. Requires specifying the number of sources. Precise when assumptions hold, unreliable when they don't.

Distributed Methods (LORETA, MNE, sLORETA, eLORETA) estimate activity at all brain locations simultaneously. No need to specify source count. Lower resolution but strong across diverse scenarios.

Beamforming (LCMV, SAM, DICS) uses adaptive spatial filters to isolate activity at each brain location. Excellent temporal resolution. Sensitive to correlated sources.

Bayesian Methods (MSP, Champagne, sparse Bayesian learning) combine prior anatomical and functional knowledge with observed data. Flexible and potentially more accurate. Computationally expensive.

The Comparison Table

Here's how the major methods stack up across the dimensions that matter most.

| Method | Assumption | Best For | Minimum Channels | Key Limitation |
|---|---|---|---|---|
| Dipole fitting | Few point sources | Focal activity, epileptic spikes, early ERPs | 19-32 | Must specify number of sources in advance |
| LORETA / sLORETA / eLORETA | Smooth distributed activity | Exploratory analysis, unknown source count | 19-32 | Inherently low spatial resolution |
| Minimum norm (MNE) | Minimum total source energy | General-purpose distributed imaging | 32-64 | Biased toward superficial sources |
| LCMV beamforming | Uncorrelated sources | Dynamic network tracking, oscillatory activity | 32-64 | Fails with correlated sources |
| Bayesian (MSP) | Multiple sparse priors | Complex multi-source scenarios | 32-128 | Computationally expensive, prior-dependent |

No method wins in every situation. Experienced researchers often run multiple methods on the same data and look for convergence. If dipole fitting, LORETA, and beamforming all point to the same region, you can be significantly more confident than any single method would justify on its own.


The "I Had No Idea" Moment: Your Brain Creates Phantom Sources

Here's something about source reconstruction that genuinely surprised me when I first learned about it, and it changes how you think about every EEG recording you've ever seen.

Your brain doesn't just produce signals that get blurred on the way out. It produces signals that cancel each other out before they ever reach the scalp.

Consider this: the cerebral cortex is a folded sheet. When neurons on opposite walls of a cortical fold (called a sulcus) fire at the same time with similar strength, their electrical fields point in opposite directions. They cancel. An EEG electrode on the scalp sees nothing, even though both populations are perfectly active.
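This cancellation is easy to verify numerically. The toy sketch below (infinite homogeneous conductor, invented 9-sensor grid, not a real head model) places two anti-parallel dipoles 1 mm apart, standing in for patches on opposite walls of a sulcus, and compares their combined surface signature to what a single wall alone would produce:

```python
import numpy as np

def dipole_potential(src, moment, electrodes, sigma=0.33):
    """Point-dipole potential in an infinite homogeneous conductor (toy model)."""
    r = electrodes - src
    d = np.linalg.norm(r, axis=1)
    return (r @ moment) / (4 * np.pi * sigma * d**3)

electrodes = np.array([[x, y, 0.09]
                       for x in (-0.06, 0.0, 0.06)
                       for y in (-0.06, 0.0, 0.06)])

# Two patches on opposite walls of a sulcus, 1 mm apart. Each dipole points
# out of its own cortical surface, so the two moments are anti-parallel.
p = 10e-9
wall_1 = dipole_potential(np.array([-0.0005, 0, 0.05]),
                          np.array([p, 0, 0]), electrodes)
wall_2 = dipole_potential(np.array([0.0005, 0, 0.05]),
                          np.array([-p, 0, 0]), electrodes)

closed_field = wall_1 + wall_2    # what the surface actually sees
single_wall = wall_1              # what one wall alone would produce

print(np.abs(closed_field).max() / np.abs(single_wall).max())
```

In this configuration the pair's peak surface amplitude is a small fraction of a single wall's: most of the activity simply never makes it out, no matter how many electrodes are listening.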

Neuroscientists call these "closed-field" configurations, and they're invisible to EEG. Some estimates suggest that a large majority of cortical activity, on the order of 80%, may be undetectable on the scalp because of geometric cancellation. Let that number sit for a moment. The vast majority of what your cortex does might never show up in an EEG recording at all.

This means source reconstruction isn't just solving an underdetermined problem (infinitely many solutions). It's solving an underdetermined problem with missing data. Some sources physically cannot contribute to the scalp signal, no matter how many electrodes you use.

It's like trying to reconstruct a conversation from an echo, except some speakers are standing in a corner that produces no echo whatsoever. You can't reconstruct what you never received.

Every source reconstruction result should come with this invisible asterisk: this is our best estimate of the sources that are visible to EEG. The brain may be doing considerably more than what we can reconstruct.

How Many Channels Do You Actually Need?

This is the practical question that brings everything back to real hardware. And the answer depends on what you're trying to reconstruct.

Each EEG channel gives you one measurement: a voltage at a specific point on the scalp at a specific moment. Source reconstruction algorithms need to estimate the activity at hundreds or thousands of brain locations from those measurements. The more channels you have, the more equations constraining the solution, and the less the algorithms have to rely on assumptions to fill the gaps.

| Channel Count | Source Reconstruction Capability | Typical Context |
|---|---|---|
| 4-8 channels | No meaningful source reconstruction. Regional estimation based on electrode position only. | Consumer BCIs, basic neurofeedback |
| 19-32 channels | Basic source reconstruction possible. Dipole fitting for 1-2 focal sources. LORETA provides rough images. | Clinical EEG, introductory research |
| 64 channels | Good source reconstruction. Multiple source dipole fitting. Detailed distributed imaging. Strong beamforming. | Standard research applications |
| 128-256 channels | High-resolution source reconstruction. Millimeter-range dipole accuracy. Detailed cortical current density maps. | Advanced research, presurgical mapping |
| Over 256 channels | State-of-the-art precision approaching the physical limits of EEG. | Specialized research laboratories |

The math is straightforward. Each dipole source has six unknowns: three spatial coordinates, two orientation angles, and one strength value. With 8 channels, you have 8 data points. That's enough to constrain maybe one simple dipole, and even then the solution is shaky. With 64 channels, you have 64 data points, enough to support much more sophisticated models. With 256 channels, you're approaching the resolution ceiling imposed by volume conduction physics.

This is why high-density EEG systems exist and cost tens of thousands of dollars. Source reconstruction is data-hungry. Every additional electrode is another piece of the puzzle.

Where the Neurosity Crown Fits (Honestly)

Let's be direct about this. The Neurosity Crown has 8 EEG channels at positions CP3, C3, F5, PO3, PO4, F6, C4, and CP4. That's the highest channel count among consumer brain-computer interfaces, and those positions are strategically spread across frontal, central, centroparietal, and parieto-occipital regions of both hemispheres.

For what the Crown is designed to do, 8 channels is genuinely excellent. Real-time focus and calm scoring, neurofeedback training, brainwave-driven applications, frequency analysis across brain regions, all of these work well with 8 well-placed channels. The on-device N3 chipset processes your data at 256Hz without it ever leaving the device, and the open JavaScript and Python SDKs let developers build whatever they can imagine.

But 8 channels is not enough for detailed source reconstruction. That's not a design oversight. It's the physics. Eight spatial samples on the scalp don't provide sufficient constraint to uniquely solve a 3D inverse problem involving thousands of potential source locations. You'd need at least 32 channels for basic source reconstruction, and ideally 64 or more for anything approaching research quality.

What 8 channels can do is broad regional source estimation. The Crown's coverage of frontal (F5, F6), central (C3, C4), centroparietal (CP3, CP4), and parieto-occipital (PO3, PO4) sites lets you meaningfully distinguish frontal versus posterior activity and left versus right lateralization. For most practical applications of consumer EEG, from cognitive monitoring to BCI development, this regional information is exactly what you need.

The honest picture: source reconstruction and consumer EEG serve different purposes. Source reconstruction is the research microscope. Consumer EEG is the daily instrument. Both are valuable. They're just answering different questions.

The Limitations Nobody Should Gloss Over

Source reconstruction has come a long way since the first dipole fitting experiments in the 1980s. But there are hard limits that no amount of algorithmic cleverness can fully overcome.

Deep sources are nearly invisible. The hippocampus, amygdala, thalamus, and other subcortical structures sit deep inside the brain, far from the scalp. Their electrical fields attenuate dramatically as they pass through centimeters of tissue. Source reconstruction can sometimes detect strong deep activity (like seizure onset in the hippocampus), but normal-level processing in these structures is essentially invisible to scalp EEG. If you want to study the hippocampus, you need fMRI or intracranial electrodes.

Head model errors propagate into source errors. If your model of the skull is wrong by a few millimeters of thickness, your source estimate shifts. If the tissue conductivities in your model don't match the real values (which vary between individuals), everything moves. Using a generic template head model instead of an individual MRI-based model can introduce centimeters of localization error.

The non-uniqueness never goes away. This deserves emphasis. No matter how many channels you add, no matter how perfect your head model, the inverse problem remains fundamentally ill-posed. Every source reconstruction solution is an estimate shaped by assumptions. It's our best guess, informed by physics and constrained by data, but a guess nonetheless.

Temporal smoothing can mask dynamics. Many source reconstruction pipelines average data across time windows to improve the signal-to-noise ratio. But the brain changes rapidly. A 100-millisecond averaging window can blur together activity patterns that are genuinely distinct. The tradeoff between temporal resolution and source accuracy is constant.

Correlated sources confuse almost everything. When two brain regions activate simultaneously (which they do all the time during normal cognition), most algorithms struggle to separate them. Dipole fitting may merge them into a phantom source between the two. Beamformers may suppress one or both. Distributed methods may smear the estimate across the space between them. This isn't a solvable engineering problem. It's a fundamental constraint of the measurement geometry.

Where Source Reconstruction Is Going

Three developments are converging to push source reconstruction beyond its current limits.

Machine learning is replacing physics-based solvers. Instead of hand-crafting assumptions about smoothness or sparsity, researchers are training neural networks on simulated EEG data where the ground truth source configuration is known. These networks learn to map scalp patterns to source configurations directly, bypassing the traditional mathematical framework entirely. Early results suggest they can outperform classical methods, especially when the standard assumptions break down. The catch is interpretability: neural networks find answers, but they don't explain why those answers are right.

Multimodal fusion is getting practical. fMRI gives you millimeter spatial resolution but captures activity every 1-2 seconds. EEG gives you millisecond temporal resolution but centimeter spatial accuracy at best. Combining them, using fMRI's spatial precision to constrain EEG's source reconstruction and EEG's temporal precision to fill in fMRI's timing gaps, produces results better than either modality alone. Simultaneous EEG-fMRI is becoming the gold standard for brain mapping research.

Individual head models are becoming routine. As MRI access expands and automated segmentation software improves, creating a person-specific head model is getting faster and cheaper. Some research groups are even working on head models derived from structural information in the EEG itself, eliminating the MRI requirement altogether. Better head models mean better forward models, which mean better source estimates.

Why This Matters Beyond the Lab

Step back from the algorithms and head models for a moment. Consider what source reconstruction actually represents.

For most of the history of EEG, since Hans Berger recorded the first human electroencephalogram in 1924, researchers could only talk about brain activity in terms of where on the scalp it appeared. "There's alpha activity at electrode O1." "There's a large P300 at Pz." These were descriptions of shadows on a wall, not the things casting them.

Source reconstruction turned EEG from a two-dimensional surface technique into a three-dimensional brain imaging method. Not as spatially precise as fMRI, certainly. But with something fMRI can never offer: the ability to track neural activity as it unfolds millisecond by millisecond, watching a thought propagate from visual cortex to temporal association areas to prefrontal decision-making regions in the span of an eye blink.

That combination of depth and speed is unique. No other non-invasive technique provides it.

And it all works by embracing, rather than fighting, the fundamental impossibility at its core. Source reconstruction doesn't solve the inverse problem. It makes peace with the inverse problem. It says: I know there are infinitely many answers. But with the right assumptions, the right physics, and enough data, I can find the one answer that's most likely to be true.

There's something deeply satisfying about that. The most complex electrical system in the known universe, the human brain, keeps most of its spatial secrets hidden behind a wall of bone. And yet, through a combination of physics, mathematics, and sheer stubbornness, we've found ways to peer through that wall. Imperfectly. Approximately. But with enough clarity to transform how we study the living mind.

Your brain generates electrical patterns right now, as you read this sentence. Those patterns are ricocheting through your skull, smearing and distorting and canceling as they go. By the time they reach the surface, they're shadows of shadows. And we've built the math to trace those shadows back to their source.

Not perfectly. Not yet. But closer every year.

Frequently Asked Questions
What is source reconstruction in EEG?
Source reconstruction is a set of computational techniques that estimate where inside the three-dimensional brain an EEG signal originated. EEG electrodes sit on the scalp and measure electrical activity that has been blurred and distorted by cerebrospinal fluid, skull, and skin. Source reconstruction algorithms use mathematical models of the head, called forward models, combined with inverse solution methods to work backward from the scalp data and estimate the location, orientation, and strength of the underlying neural generators.
What is the inverse problem in EEG?
The inverse problem is the challenge of determining which internal brain sources produced a given pattern of voltages on the scalp. It is mathematically ill-posed, meaning infinitely many different source configurations inside the brain can produce the exact same scalp pattern. This was proven by Helmholtz in the 1850s and is a fundamental property of electromagnetic physics, not a limitation of current technology. Every source reconstruction method deals with this by adding constraints or assumptions that narrow the infinite solution space to a single best estimate.
What is a forward model in EEG source reconstruction?
A forward model, also called a head model, is a mathematical representation of how electrical currents travel from sources inside the brain through the layers of brain tissue, cerebrospinal fluid, skull, and scalp to reach the EEG electrodes. It predicts what scalp pattern a known brain source would produce. Forward models range from simple concentric spheres to realistic models based on individual MRI scans. The forward model is an essential building block because every inverse method uses it to test whether a proposed source configuration matches the observed data.
How many EEG channels are needed for source reconstruction?
Meaningful source reconstruction typically requires at least 32 channels, with 64 or 128 channels preferred for research applications. Each channel provides one spatial data point, and the algorithms need enough data points to constrain the solution. Consumer devices with 8 channels, like the Neurosity Crown, can distinguish activity across broad brain regions such as frontal versus parietal or left versus right hemisphere, but cannot perform the fine-grained source reconstruction that high-density arrays enable.
What is the difference between dipole fitting and distributed source methods?
Dipole fitting models brain activity as a small number of discrete point sources and iteratively adjusts their positions until the predicted scalp pattern matches the observed data. It works well for focal activity but requires you to specify how many sources to look for. Distributed source methods like LORETA and minimum norm estimation place potential sources throughout the entire brain volume and estimate activity at all locations simultaneously, using mathematical constraints like smoothness to find a unique solution. Distributed methods are more flexible but produce lower-resolution images.
Can consumer EEG devices perform source reconstruction?
Consumer EEG devices with 8 to 14 channels can perform basic regional source estimation, identifying whether activity is stronger in frontal versus posterior or left versus right regions. However, detailed source reconstruction that maps activity to specific cortical structures requires higher channel counts, accurate head models, and precise electrode positioning. The Neurosity Crown's 8 channels across four cortical regions provide excellent real-time cognitive state monitoring and regional brain activity tracking, but not research-grade source imaging.
Copyright © 2026 Neurosity, Inc. All rights reserved.