The Brain's Directional Microphone
A Cocktail Party Inside Your Skull
You're at a loud party. Music thumping. Thirty conversations happening at once. Someone across the room is telling a story you desperately want to hear, but their voice is buried in a wall of noise.
Now imagine you had a magical microphone. You point it at that person, and suddenly their voice comes through crystal clear. Every other sound in the room just... vanishes. The music, the laughter, the guy next to you arguing about whether a hot dog is a sandwich. Gone. You hear only the voice you're aimed at.
That's beamforming.
And it turns out this trick, born in the world of radar arrays and submarine sonar, is one of the most powerful tools neuroscientists have for figuring out what's happening inside your brain.
Here's the problem it solves. When you record EEG or MEG data, every sensor on your head picks up signals from the entire brain simultaneously. Electrode Fz doesn't just hear frontal cortex. It hears frontal cortex plus parietal cortex plus temporal cortex plus motor cortex plus everything else, all blurred together and overlapping. The situation is exactly like the cocktail party. You've got dozens of neural conversations happening at once, and your sensors are picking up all of them at the same time.
For decades, neuroscientists mostly just accepted this. They'd look at the signal at each electrode and try to make inferences about what brain region might be responsible. But starting in the 1990s, researchers borrowed an idea from electrical engineering that changed the game entirely.
They asked: what if we could build a mathematical microphone that points at one location inside the brain and suppresses everything else?
The Old Ways of Finding Brain Sources (and Why They Weren't Enough)
Before we get to how beamforming works, you need to understand why neuroscientists were desperate enough to borrow techniques from radar engineers.
The fundamental challenge in EEG and MEG is called the inverse problem. You've got signals measured on the outside of the head, and you want to figure out where inside the brain those signals came from. This is fiendishly difficult because many different combinations of brain sources can produce the exact same pattern on the scalp. It's mathematically proven that the problem has no unique solution.
Before beamforming, there were two main approaches.
Dipole fitting says: "I bet the brain activity comes from one or two point sources. Let me find the locations and orientations that best explain the data." This works great when the brain really is doing something focal, like an epileptic spike. But you have to guess how many sources exist before you start. Guess wrong, and your results are meaningless.
Distributed source methods like LORETA say: "I won't guess the number of sources. Instead, I'll estimate activity everywhere in the brain at once, with the constraint that the solution should be smooth." This avoids the guessing problem, but the smoothness assumption blurs everything. You get a fuzzy cloud of activity rather than a precise answer.
Both approaches require you to make strong assumptions about the brain before you even look at the data. Beamforming takes a fundamentally different path. It doesn't assume anything about how many sources there are. It doesn't require the solution to be smooth. Instead, it asks a much more modest question, one brain location at a time:
What is happening right here?
How Does Beamforming Actually Work?
This is where it gets genuinely beautiful. The core idea is simple, but the implementation is one of those things that makes you appreciate how clever signal processing engineers really are.
Step 1: Pick a Spot in the Brain
You choose a single location inside the brain. Let's call it point P. Maybe it's in the left motor cortex. Maybe it's in the visual cortex. Doesn't matter yet.
Step 2: Calculate the Lead Field for That Point
The lead field is a mathematical description of how a signal generated at point P would look by the time it reaches each of your sensors. Think of it as a fingerprint. If a tiny electrical source lit up at point P and nowhere else, what pattern would your EEG or MEG sensors see?
Computing the lead field requires a forward model of the head: the geometry of the brain, skull, and scalp, and the electrical conductivity of each tissue layer. For EEG, the forward model has to account for how the skull smears and distorts electrical signals. For MEG, it's simpler, because magnetic fields pass through the skull largely unaffected.
The lead field for point P is a vector, one value per channel, that describes the expected sensor pattern for a unit-strength source at that location.
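To make that concrete, here is a minimal sketch of how you might pull a lead field vector out of a forward model in MNE-Python. It assumes you already have an `info` object from your recording, a source space `src`, a BEM head model `bem`, and a coregistration file from earlier pipeline steps; those names (and the file name) are placeholders, not something this snippet creates.

```python
import mne

# A sketch, not a complete pipeline: info, src, bem, and the -trans.fif file
# are assumed to exist from earlier preprocessing and head-modeling steps.
fwd = mne.make_forward_solution(
    info,                        # sensor geometry and channel info
    trans="subject-trans.fif",   # MRI <-> head coregistration (placeholder name)
    src=src,                     # the 3D grid of candidate source points
    bem=bem,                     # boundary element head model
)

# Fix one orientation per grid point so each location has a single lead field vector.
fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True)

leadfield = fwd["sol"]["data"]   # shape: (n_channels, n_source_points)
L_p = leadfield[:, 1234]         # lead field vector for one grid point "P"
```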
Step 3: Compute the Data Covariance Matrix
Here's where the data itself enters the picture. The data covariance matrix captures the statistical relationships between all pairs of channels in your recording. If channels 1 and 7 tend to go up together, that shows up as a positive covariance. If channel 3 does its own thing while everyone else is correlated, that shows up too.
The covariance matrix is a square matrix with dimensions equal to the number of channels. For a 64-channel EEG, it's 64 by 64. For a 306-sensor MEG, it's 306 by 306. Every entry encodes how two sensors co-vary over time.
This matrix contains everything the beamformer needs to know about the noise and interference structure of your data.
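In code, this step is nearly a one-liner. A minimal sketch with random numbers standing in for a real recording (only the array shapes matter here):

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_samples = 64, 10_000                    # e.g. a 64-channel EEG recording
data = rng.standard_normal((n_channels, n_samples))   # stand-in for real sensor data

# Covariance between every pair of channels, computed across time.
C = np.cov(data)
print(C.shape)                                        # (64, 64)
```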
Step 4: Calculate the Weights
This is the magical step. The beamformer combines the lead field (what a source at P should look like) with the data covariance (what the noise and interference actually look like) to calculate a set of spatial filter weights, one weight per channel.
The weights are designed to satisfy two constraints simultaneously:
- Pass the signal of interest. When the weights are applied to the sensor data, activity genuinely coming from point P should pass through with unit gain. The filter should faithfully reproduce whatever is happening at that location.
- Minimize everything else. The total output power of the filter should be as small as possible, subject to constraint one. Since the only thing guaranteed to pass through is the signal from point P, minimizing total output means minimizing the contribution of every other source in the brain.
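Written out as an optimization problem (this is the standard LCMV statement, nothing specific to any one toolbox), those two constraints become:

minimize wᵀCw (total output power), subject to wᵀL = 1 (unit gain for a source at point P)

Solving this constrained minimization with a Lagrange multiplier gives a closed-form answer.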
The mathematical formula for the weights (in the classic LCMV beamformer) is:

w = C⁻¹L / (LᵀC⁻¹L)

where C is the data covariance matrix and L is the lead field vector for location P.
The inverse of the covariance matrix is the secret sauce. It effectively "whitens" the data, down-weighting channels that carry a lot of interference and up-weighting channels that carry independent information. This is why beamforming is called an adaptive spatial filter. It doesn't use fixed, predetermined weights. It adapts to the actual noise and interference structure of your specific recording. Two recordings from the same person on different days would produce different weight vectors because the noise environment changed.
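Here is what Step 4 looks like as code. This is a bare-bones sketch of the classic LCMV weight formula using synthetic stand-ins; a real analysis would use a lead field from a proper forward model, and the diagonal loading shown is just one common way to regularize the covariance before inverting it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 64, 10_000

# Stand-ins: random sensor data and a random lead field vector for point P.
# In a real analysis these come from your recording and your forward model.
data = rng.standard_normal((n_channels, n_samples))
L = rng.standard_normal(n_channels)

C = np.cov(data)
C += 0.05 * np.trace(C) / n_channels * np.eye(n_channels)  # diagonal loading (regularization)

C_inv = np.linalg.inv(C)
w = C_inv @ L / (L @ C_inv @ L)   # classic LCMV weights for point P

print(np.isclose(w @ L, 1.0))     # unit-gain constraint: activity from P passes through unchanged
```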
Step 5: Apply the Weights and Listen
Once you have the weight vector for point P, you multiply it by the multichannel sensor data at each time point. The result is a single time series representing the estimated neural activity at point P.
Then you move to the next point. Recalculate the lead field for the new location. Recompute the weights. Apply them. Get another time series. Repeat this for every location on a 3D grid covering the brain, typically 5,000 to 10,000 points, and you've got a complete picture of estimated brain activity everywhere.
Each brain location gets its own custom-designed spatial filter. The beamformer builds thousands of virtual sensors, each one tuned to listen to a different spot inside the skull.
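Putting Steps 2 through 5 together, the scanning loop is just the weight formula applied once per grid point. Again a sketch with synthetic stand-ins; in practice the lead field matrix comes from the forward model and the grid has thousands of points rather than a few hundred:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples, n_points = 64, 10_000, 500

data = rng.standard_normal((n_channels, n_samples))        # sensor-level recording (stand-in)
leadfields = rng.standard_normal((n_channels, n_points))   # one lead field column per grid point (stand-in)

C = np.cov(data)
C += 0.05 * np.trace(C) / n_channels * np.eye(n_channels)  # regularized covariance
C_inv = np.linalg.inv(C)

virtual_sensors = np.empty((n_points, n_samples))
for p in range(n_points):
    L = leadfields[:, p]
    w = C_inv @ L / (L @ C_inv @ L)   # custom spatial filter for this grid point
    virtual_sensors[p] = w @ data     # estimated time series at this location

print(virtual_sensors.shape)          # (n_points, n_samples): one virtual sensor per point
```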
The Two Big Flavors: LCMV and DICS
Not all beamformers are the same. The two you'll encounter most often in the literature are LCMV and DICS, and they approach the problem from different angles.
LCMV: The Time-Domain Workhorse
LCMV stands for Linearly Constrained Minimum Variance. It's the beamformer we just described. It works in the time domain, meaning it takes your raw multichannel time series data, computes a covariance matrix across time, and produces a reconstructed time series at each brain location.
LCMV is your go-to when you care about when things happen. Event-related responses. Transient bursts of activity. The precise timing of neural processing. Because LCMV outputs a time series, you can look at how activity at each brain location unfolds millisecond by millisecond.
The pioneering work came from Van Veen and colleagues in 1997, who adapted the classic Capon beamformer from array signal processing to neuroscience. Their paper showed that spatial filtering of EEG and MEG data could recover source activity with impressive spatial resolution, outperforming dipole fitting in some scenarios.
DICS: The Frequency-Domain Specialist
DICS stands for Dynamic Imaging of Coherent Sources, introduced by Gross and colleagues in 2001. Instead of working with the time-domain covariance matrix, DICS works with the cross-spectral density matrix at a specific frequency.
Think of it this way. LCMV asks "what is happening at this brain location over time?" DICS asks "how much power does this brain location have at 10 Hz?" or "how coherent is the activity at 10 Hz between this location and that location?"
This makes DICS exceptionally good at studying oscillatory brain activity, the rhythmic waves (alpha, beta, gamma, theta) that are the bread and butter of EEG and MEG research. If you want to know where in the brain alpha power is strongest during meditation, or where gamma oscillations increase during a working memory task, DICS is your tool.
DICS has another superpower: it can measure functional connectivity between brain regions at specific frequencies. You can ask "how synchronized are the gamma oscillations in left prefrontal cortex and right parietal cortex during this cognitive task?" and get a meaningful answer with proper source-level spatial resolution, not just the smeared sensor-level connectivity that raw EEG gives you.
| Feature | LCMV Beamformer | DICS Beamformer |
|---|---|---|
| Domain | Time domain | Frequency domain |
| Input matrix | Data covariance (time-based) | Cross-spectral density (frequency-based) |
| Output | Time series at each brain location | Power or coherence maps at a target frequency |
| Best for | Event-related activity, transient responses, timing analysis | Oscillatory activity, spectral power mapping, functional connectivity |
| Frequency specificity | Broadband (all frequencies at once) | Narrowband (one frequency or band at a time) |
| Connectivity analysis | Limited (requires post-hoc spectral analysis) | Native (coherence between locations built in) |
| Introduced by | Van Veen et al., 1997 | Gross et al., 2001 |
| Common software | MNE-Python, FieldTrip, Brainstorm | FieldTrip, MNE-Python, DAiSS |
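If you work in MNE-Python, both flavors come down to a few calls, sketched below. The snippet assumes you already have `epochs`, `evoked`, and `forward` objects from earlier pipeline steps; those objects and the specific parameter values (time windows, regularization, target frequency) are illustrative placeholders, not a recipe.

```python
import mne
from mne.beamformer import make_lcmv, apply_lcmv, make_dics, apply_dics_csd
from mne.time_frequency import csd_morlet

# Assumed to exist from earlier steps: epochs, evoked, forward.

# --- LCMV: time-domain beamformer --------------------------------------
data_cov = mne.compute_covariance(epochs, tmin=0.0, tmax=0.5)
lcmv_filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
                         pick_ori="max-power")
stc_lcmv = apply_lcmv(evoked, lcmv_filters)    # time series at each source point

# --- DICS: frequency-domain beamformer (e.g. alpha, around 10 Hz) -------
csd = csd_morlet(epochs, frequencies=[10.0])
dics_filters = make_dics(epochs.info, forward, csd, reg=0.05)
stc_dics, freqs = apply_dics_csd(csd, dics_filters)   # power map at 10 Hz
```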
Why Beamforming Beats the Alternatives (Sometimes)
Here's the thing that makes beamforming genuinely compelling compared to the older source localization methods.
No source count required. Unlike dipole fitting, you never have to specify how many sources are active. The beamformer scans every location independently. If there are two active sources, it finds two peaks. If there are twelve, it finds twelve. You don't have to know the answer before you ask the question.
No smoothness assumption. Unlike LORETA and related distributed methods, beamforming doesn't assume that neighboring brain regions should have similar activity levels. If motor cortex is blazing while the region 1 centimeter away is silent, the beamformer can represent that sharp boundary. LORETA would blur across it.
Continuous time courses. LCMV beamforming produces a full time series at each brain location. This means you can do all the same temporal analyses (event-related averaging, time-frequency decomposition, connectivity analysis) that you'd normally do on sensor data, but now at the source level. You're analyzing what the brain is doing, not what the scalp looks like.
Adaptive to your data. Because the spatial filter weights depend on the data covariance matrix, beamforming automatically adapts to the noise characteristics of your specific recording. If one channel is noisy, the beamformer down-weights it. If your data has strong environmental interference, the beamformer learns its spatial pattern and suppresses it. This self-tuning property is why beamformers often outperform other methods in practice, even when the theoretical spatial resolution should be similar.

The Achilles Heel: Correlated Sources
If beamforming sounds too good to be true, that's because there's one brutal weakness. And it's not a minor footnote. It's a fundamental limitation that has kept the beamforming community busy for three decades.
Beamforming fails when two brain sources are correlated.
Here's why. The spatial filter weights are computed from the data covariance matrix. That matrix captures the statistical relationships between channels. When two sources in different brain locations fire with similar timing, their contributions to the covariance matrix become entangled. The beamformer can't tell them apart. It treats correlated activity from two locations as if it came from one, and the spatial filter designed for one location ends up partially canceling the other.
The result: correlated sources get suppressed, distorted, or smeared together.
This would be a minor issue if correlated brain sources were rare. But they're not. They're everywhere. The two hemispheres of your brain are connected by the corpus callosum, which synchronizes activity between homologous regions constantly. Language processing involves tight coordination between temporal and frontal regions. Visual processing synchronizes occipital and parietal cortex. The default mode network links medial prefrontal and posterior cingulate with precise temporal coordination.
In other words, the brain's normal mode of operation, coordinated activity across distributed networks, is exactly the situation where beamforming struggles most.
Researchers have developed partial workarounds. Dual-source beamformers model pairs of correlated sources explicitly. Nulling beamformers add constraints to suppress known interfering sources. And some clever approaches use separate time windows or frequency bands where the sources happen to be less correlated.
But there's no clean, general solution. Correlated sources remain beamforming's hardest problem.
The Channel Count Reality
Here's where honesty matters.
Beamforming is hungry for channels. The technique constructs spatial filters from the data covariance matrix, and the quality of those filters depends directly on how much spatial information you feed in.
With 306-sensor MEG, beamforming achieves its best performance. The large sensor count provides rich spatial information, and magnetic fields pass through the skull without distortion, giving the forward model high accuracy. This is the gold standard.
With 64 or more EEG channels, beamforming produces research-grade results. Enough spatial samples exist to construct effective spatial filters, though EEG's forward model is more complex due to skull conductivity effects.
With 32 EEG channels, basic beamforming is feasible but the spatial filters have limited selectivity. You can identify major source regions, but fine spatial distinctions between nearby areas become unreliable.
With fewer than 32 channels, beamforming is not recommended. The covariance matrix simply doesn't carry enough spatial information, and the spatial filters can't adequately separate nearby sources. The virtual sensors become broad and leaky, picking up activity from wide swaths of the brain rather than focused points.
The Neurosity Crown's 8 channels are optimized for a completely different job. The Crown's strength is real-time cognitive state monitoring: focus detection, calm scoring, neurofeedback, and powering brain-computer interface applications through the Neurosity SDK. These applications rely on spectral analysis and machine learning across strategically placed electrodes, not source-level beamforming. Eight channels spread across four cortical lobes give you excellent coverage for tracking the broad patterns of neural activity that correlate with cognitive states.
Beamforming and consumer BCI are solving different problems. Beamforming asks "where exactly in the brain is this signal originating?" Consumer BCI asks "what is this brain doing right now, and how can I respond to it in real time?" You don't need a directional microphone to know the party is loud. You need one to isolate a single voice in the crowd.
The "I Had No Idea" Part: Beamforming Creates Virtual Sensors That Don't Physically Exist
Here's the thing that made me stop and re-read the original papers twice.
When a beamformer processes your MEG data and outputs a time series for, say, a point in left auditory cortex, that time series doesn't correspond to any physical sensor. No sensor was at that location. No probe was inserted. The beamformer mathematically constructed a virtual sensor, a measurement device that exists only as a set of weights applied to the real sensors.
And that virtual sensor can be more informative than any of the real sensors.
Think about what's happening. Each physical sensor sees a blurred mixture of signals from the whole brain. But the virtual sensor at left auditory cortex sees activity only from left auditory cortex (in theory), with everything else suppressed. It's as if you could reach inside the skull, place a tiny electrode at exactly the location you care about, and listen. Without touching the brain. Without surgery. Without any physical sensor at all.
The same 306 physical sensors in an MEG system can be transformed into 10,000 virtual sensors, each tuned to a different brain location. You go from 306 measurements on the surface to 10,000 measurements distributed throughout the brain volume. The beamformer creates spatial resolution from thin air, or more precisely, from linear algebra.
This is why beamforming felt like magic when it first arrived in neuroscience. You don't add sensors. You add math. And the math gives you something the physical hardware never could: a direct view into the interior of a living brain.
What Beamforming Has Actually Shown Us
The technique isn't just theoretically elegant. It has produced genuinely important scientific results.
Oscillatory dynamics during cognition. DICS beamforming revealed that different brain regions communicate at different frequencies during different cognitive tasks. Gamma-band synchronization (30 to 100 Hz) between prefrontal and parietal regions increases during working memory. Alpha-band power (8 to 12 Hz) increases in task-irrelevant regions, suggesting active suppression. Beta-band desynchronization (15 to 30 Hz) in motor cortex precedes voluntary movements. All of these discoveries required source-level frequency analysis, exactly what DICS provides.
Pre-surgical mapping. For patients with brain tumors or epilepsy, knowing the precise location of critical brain functions (language, motor control, sensation) before surgery can be the difference between a successful outcome and a devastating one. Beamforming applied to MEG data can map these functions non-invasively, helping surgeons plan their approach.
Resting-state brain networks. Beamforming applied to MEG data was instrumental in showing that the brain's resting-state networks, the default mode network, the dorsal attention network, the salience network, have distinct oscillatory signatures that can be detected at the source level. This was previously only known from fMRI studies, which can't capture the oscillatory dynamics.
Infant and pediatric research. Because MEG is completely passive and non-invasive (it just listens to magnetic fields), and because beamforming can work even when the subject moves slightly, the combination has become a powerful tool for studying brain development in infants and young children who can't hold still in an fMRI scanner.
Where Beamforming Is Going Next
The field isn't standing still.
Wearable MEG with OPMs. The newest generation of MEG sensors, called optically pumped magnetometers (OPMs), don't need liquid helium cooling. They can sit directly on the scalp in a flexible cap. This changes everything for beamforming because the sensors move with the head, eliminating motion artifacts, and they sit closer to the brain, increasing signal strength. OPM-MEG systems are still expensive and limited in channel count, but they're evolving fast.
Machine learning meets beamforming. Instead of using the classical LCMV formula to compute weights, researchers are training neural networks to learn optimal spatial filters directly from data. These learned beamformers can potentially handle correlated sources better than classical approaches because they're not constrained by the same mathematical assumptions.
Real-time beamforming. As computing power increases, it's becoming feasible to run beamforming in real time during an experiment or clinical procedure. Imagine a surgeon seeing a live, source-localized map of brain activity updating multiple times per second during an operation. This is already being prototyped in a few labs.
Integration with brain stimulation. Combining beamforming-based source localization with transcranial magnetic stimulation (TMS) or transcranial electrical stimulation allows researchers to both observe and manipulate brain activity at specific locations. You localize the source with beamforming, then stimulate that exact spot with TMS, and watch what happens.
The Bigger Picture
Beamforming is one of those rare technical achievements that makes you reconsider what's possible. The idea that you can take a jumbled mess of sensor readings from outside the skull and mathematically reconstruct what's happening at specific locations inside a living brain, without ever touching it, is genuinely astonishing. It's computational alchemy: turning surface noise into interior maps.
But it also reminds you how much spatial information matters. The whole technique hinges on having enough sensors to build good spatial filters. Fewer sensors means worse filters, which means blurrier maps, until eventually the virtual sensors are so broad they're not telling you much about specific locations anymore.
This is the tradeoff that defines the spectrum from research instruments to consumer devices. A 306-sensor MEG in a shielded room can construct 10,000 virtual sensors across the brain. A 64-channel research EEG can manage respectable beamforming. An 8-channel consumer device like the Crown excels at an entirely different task: detecting the broad neural signatures that map to cognitive states, running in real time, on your head, connected to applications you build yourself.
The brain reveals different secrets depending on how you ask. Beamforming asks with surgical spatial precision: "what is this specific cubic centimeter of cortex doing at this exact moment?" Consumer BCI asks with temporal immediacy: "what is this brain's state right now, and what should happen next?"
Both questions matter. Both are worth asking. And as sensor technology gets cheaper, smaller, and more powerful, the gap between them will keep shrinking. Someday, beamforming-quality source localization might run on something you wear like headphones.
We're not there yet. But the math is ready whenever the hardware catches up.

