Power Spectral vs. Connectivity Analysis
One Recording. Two Completely Different Brains.
Here's a thought experiment that should make you uncomfortable about everything you thought you knew about EEG analysis.
Imagine two people sitting in the same room, both wearing identical EEG headsets, both staring at the same fixation cross on a screen. Person A is in a state of deep, absorbed focus. Person B is in the grip of anxious rumination, their mind spinning through worst-case scenarios.
You pull up their power spectral analysis. Both show elevated frontal beta power. Both show suppressed posterior alpha. By every standard spectral metric, their brains look almost identical.
But they're not. Not even close.
If you look at the connectivity between their brain regions, the picture flips entirely. Person A shows strong, stable coherence between frontal and parietal electrodes in the beta band. Their attention network is locked in, coordinating smoothly across distant cortical regions. Person B shows fragmented, unstable connectivity across the same regions. High beta everywhere, but none of it is synchronized. The frontal cortex is screaming, but nobody else is listening.
Same power. Same frequencies. Same topographic heat map. Completely different brain states. And you'd never know the difference if you only looked at one type of analysis.
This is the fork in EEG analysis that most people never learn about. There are two fundamentally different ways to interrogate the same raw brainwave data, and they answer fundamentally different questions about what your brain is doing.
The Trunk: What EEG Data Actually Contains
Before we split into two analytical paths, we need to understand what we're working with.
An EEG electrode on your scalp picks up voltage fluctuations produced by large populations of cortical neurons firing in synchrony. These aren't individual neuron signals. They're the summed electrical activity of millions of pyramidal cells in the cortex beneath the electrode. The signal is tiny, on the order of 10 to 100 microvolts, and it's a messy, chaotic-looking squiggle that changes constantly.
But buried inside that squiggle are patterns. Lots of them. Overlapping oscillations at different frequencies, statistical relationships between what's happening at different locations, timing correlations that shift from millisecond to millisecond. The raw data is the same no matter what you do with it. The analysis you choose determines which patterns you extract.
Think of it like an audio recording of a symphony. The waveform is the waveform. But you can analyze it two completely different ways. You can ask: "How loud is each instrument?" That's a power question, and it gives you a volume profile of the orchestra. Or you can ask: "Are the violins and cellos playing in sync?" That's a connectivity question, and it gives you a coordination profile. Same recording. Different questions. Different answers. Both real.
EEG analysis works exactly the same way.
Power Spectral Analysis: How Loud Is Each Frequency?
Power spectral analysis is the workhorse of EEG. It's been the default analytical approach since Hans Berger published the first human EEG recordings in 1929 and noticed that the signal oscillates at regular rhythms. And for good reason: it's intuitive, computationally simple, and it works.
The core idea is straightforward. Take a window of raw EEG data from a single electrode. Apply the Fast Fourier Transform (FFT), a mathematical operation that decomposes any complex signal into its component sine waves. The output tells you exactly how much energy (power) exists at each frequency in that window of data.
The result is a power spectral density (PSD) plot: frequency on the x-axis, power on the y-axis. Peaks in the PSD tell you which oscillatory rhythms are strongest at that brain location. A big peak around 10 Hz means strong alpha activity. A broad elevation from 15 to 25 Hz means elevated beta.
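As a minimal sketch of that pipeline, here is a PSD computed with Welch's method (averaged FFTs over overlapping windows) on a synthetic single channel; the 10 Hz sine stands in for an alpha rhythm, and real data would come from the device's raw feed:

```python
# Sketch: PSD of one synthetic EEG channel via Welch's method.
import numpy as np
from scipy import signal

np.random.seed(0)
fs = 256                                  # samples per second
t = np.arange(0, 4, 1 / fs)               # 4 seconds of data
# Synthetic "EEG": a 10 Hz alpha-like rhythm buried in noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# 1-second windows give 1 Hz frequency resolution
freqs, psd = signal.welch(eeg, fs=fs, nperseg=fs)

peak = freqs[np.argmax(psd)]
print(f"dominant frequency: {peak:.1f} Hz")   # the 10 Hz "alpha" peak
```

The peak of the PSD lands on the synthetic alpha rhythm, exactly the kind of feature the band table below is built on.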
Neuroscientists have carved the EEG frequency spectrum into standard bands, each associated with different brain states:
| Band | Frequency Range | Associated States | What Power Tells You |
|---|---|---|---|
| Delta | 0.5-4 Hz | Deep sleep, unconscious processing | High delta while awake can indicate brain injury or pathology |
| Theta | 4-8 Hz | Memory encoding, drowsiness, meditation | Elevated frontal theta tracks working memory load and internal focus |
| Alpha | 8-13 Hz | Relaxed wakefulness, sensory idling | Strong alpha in a region means that region is idling, not actively processing |
| Beta | 13-30 Hz | Active thinking, motor planning, alertness | Elevated beta at frontal sites correlates with active cognitive engagement |
| Gamma | 30-100+ Hz | Perceptual binding, higher cognition | Gamma bursts link to conscious awareness and information integration |
When you run PSD on every electrode, you get a topographic map of spectral power across the scalp. You can visualize exactly which brain regions are producing which frequencies, and how that distribution shifts as a person goes from eyes-closed rest to focused work to meditation.
This is what the Neurosity Crown computes natively on its N3 chipset. You don't need to write your own FFT code. The device delivers PSD data directly through its SDK, giving you band power values for each of its 8 channels without any signal processing on your end.
What PSD Is Great At
Power spectral analysis shines when you need to know what's happening at a specific location. Clinical neurofeedback practitioners use it to identify regions with abnormal power distributions, like the elevated theta/beta ratio at frontal sites that's a well-studied marker for ADHD brain patterns. Sleep researchers use it to stage sleep, because the dominant frequency band shifts predictably across sleep stages. Meditation apps use alpha power as a proxy for relaxed awareness.
It's fast, too. Computing a PSD takes milliseconds. You can update it in real time, hundreds of times per second if you want to. That speed is why it's the backbone of most consumer EEG applications and neurofeedback protocols.
What PSD Cannot Tell You
But PSD has a blind spot, and it's a big one.
Power spectral analysis treats every electrode as an independent measurement. It asks, "What is happening here?" without ever asking, "Is what's happening here related to what's happening over there?" It measures the activity at each location in isolation.
This is like assessing a company by measuring how busy each employee is, without ever checking whether they're actually working on the same project. You could have a room full of people working furiously on completely unrelated tasks, and a power analysis would say, "High activity, everything looks great." The fact that nobody is coordinating with anyone else wouldn't show up.
Your brain faces the same problem. High frontal beta could mean focused attention. Or anxious rumination. Or motor planning. Or all three at once. Without knowing how that frontal activity relates to activity elsewhere, you're interpreting a signal that's fundamentally ambiguous.
Connectivity Analysis: Who's Talking to Whom?
Connectivity analysis starts from a completely different premise. Instead of asking "how much energy is this location producing at each frequency," it asks "are the signals at these two locations related?"
The raw data is identical. The same EEG channels, the same voltage traces, the same time series you'd use for PSD. But instead of analyzing each channel independently, you analyze them in pairs. And instead of looking for power, you look for statistical relationships.
There are several ways to quantify these relationships, each capturing a slightly different aspect of inter-regional communication.
Coherence: The Frequency-Domain Handshake
Coherence is the most established connectivity metric. It's essentially a correlation coefficient computed in the frequency domain. For any pair of electrodes and any frequency band, coherence tells you: how consistently do these two signals share a common oscillatory pattern over time?
Coherence ranges from 0 (the two signals have nothing in common at that frequency) to 1 (they're perfectly synchronized). High alpha coherence between frontal and parietal electrodes suggests those regions are coordinating their alpha rhythms, possibly maintaining a shared attentional state. High beta coherence between left and right motor regions suggests bilateral motor preparation.
The mathematics are a natural extension of the same Fourier analysis used for PSD. You compute the cross-spectral density between two channels (how their frequency components relate), normalize by the power spectral density of each channel, and the result is coherence. If you can compute a PSD, you're most of the way to computing coherence.
Phase Synchrony: The Timing Question
Phase-locking value (PLV) takes a different angle. Two brain regions can oscillate at the same frequency but with constantly shifting timing. PLV asks: is the phase relationship between these two signals stable?
Imagine two runners on a track, both running at the same speed. If Runner A is always exactly half a lap ahead of Runner B, that's a stable phase relationship, their strides are locked. If Runner A's lead constantly fluctuates randomly, there's no phase lock.
When two brain regions maintain a stable phase relationship, it means they've established a consistent temporal framework for exchanging information. Neurons communicate through precisely timed spikes, and a stable phase relationship between two regions creates reliable windows for that communication.
PLV strips out amplitude entirely. It doesn't care how loud the signals are. It only cares whether they're temporally locked. This makes it robust to amplitude fluctuations, but it also means it captures something different from coherence: two regions could have low coherence (amplitudes varying independently) but high phase synchrony (timing locked), or vice versa.
Granger Causality: Which Direction Is Information Flowing?
Here's where it gets really interesting. Coherence and PLV tell you that two regions are connected, but they don't tell you which one is driving the conversation. Granger causality does.
The logic is elegant. If the past activity at Location A helps predict the current activity at Location B (above and beyond what Location B's own past activity predicts), then there's directed information flow from A to B. This isn't true causation in the philosophical sense. But it reveals the direction of influence between brain regions.
During a top-down attention task, you'd expect to see Granger causality flowing from frontal to parietal regions (the frontal cortex directing attention to the parietal cortex's sensory processing). During bottom-up perceptual processing, you'd see the reverse: parietal-to-frontal flow as sensory information propagates forward.
- Coherence: How correlated are two signals at each frequency? Ranges from 0 to 1. Easy to compute. Well established in clinical EEG. Sensitive to volume conduction artifacts from shared sources.
- Phase-Locking Value (PLV): How stable is the timing relationship between two signals? Ignores amplitude. Captures pure temporal coordination. Less affected by differences in signal strength.
- Granger Causality: Does one signal predict the other? Reveals direction of information flow. Computationally heavier. Assumes linear relationships. Powerful when those assumptions hold.
- Phase Lag Index (PLI): Like PLV, but only counts non-zero-lag relationships. Filters out volume conduction artifacts that create artificial zero-lag correlations. More conservative but more trustworthy.
The Head-to-Head: What Each Method Actually Reveals
Now that both approaches are on the table, let's get specific about how they compare across real scenarios.
| Dimension | Power Spectral Analysis | Connectivity Analysis |
|---|---|---|
| Core question | How much energy at each frequency, per location? | Are these two locations communicating? |
| Input data | Single channel at a time | Pairs of channels (minimum 2) |
| Mathematical basis | Fast Fourier Transform (FFT) | Cross-spectral density, phase statistics, or predictive modeling |
| Output | Power spectral density plot per channel | Connectivity matrix between all channel pairs |
| Computation speed | Milliseconds per channel | Seconds for full matrix (depends on method) |
| Spatial information | What is happening at this location | How two locations relate to each other |
| Temporal sensitivity | Excellent (short windows are fine) | Requires longer windows for stable estimates |
| Volume conduction | Minimal concern (single channel) | Major concern (shared sources create false connections) |
| Clinical use | ADHD screening, sleep staging, neurofeedback | Network disorders, early neurodegeneration, connectivity-based biomarkers |
| Minimum channels needed | 1 | 2 (but more channels yield richer network maps) |
| Interpretation | Relatively straightforward | Requires understanding of network concepts |
| Consumer device readiness | Fully mature, available natively on devices like the Crown | Emerging, requires custom analysis code |
There's a pattern here worth calling out. Power spectral analysis is simpler, faster, and more mature. Connectivity analysis is richer, more specific, and more revealing. They're not competing methods. They're complementary layers of the same data.
The "I Had No Idea" Moment: Your Brain's Dark Energy
Here's a finding that genuinely changes how you think about spectral power.
In 2007, neuroscientist Marcus Raichle (the same researcher who discovered the default mode network) published a paper pointing out something strange about the brain's energy budget. The brain consumes about 20% of the body's total energy. That's well known. What's less well known is how that energy breaks down.
Only about 5% of the brain's total energy consumption is driven by the kind of task-related neural activity that shows up clearly in power spectral analysis. The shifts in alpha and beta power you see when someone starts a cognitive task represent a tiny fraction of the brain's metabolic expenditure.
The other 95% goes to something Raichle called the brain's "dark energy": the ongoing, intrinsic activity that maintains the brain's vast network of functional connections. The coherence between regions. The phase synchronization. The connectivity architecture.
Let that land for a second. The signals that power spectral analysis measures (the task-related power changes) represent roughly 5% of what the brain is spending its energy on. The signals that connectivity analysis measures (the ongoing inter-regional coordination) represent the other 95%.
We've been studying the tip of the iceberg and calling it the whole thing.
This doesn't mean power spectral analysis is useless. That 5% contains critical information. But it does mean that if you only look at spectral power, you're ignoring the vast majority of what the brain is actually doing with its energy. The network maintenance, the inter-regional coordination, the constant reconfiguration of functional connections. All of that lives in the connectivity layer.

When Power Spectral Analysis Is the Right Tool
Despite what I just said about dark energy, there are plenty of scenarios where PSD is exactly what you need and connectivity analysis would be overkill.
Real-Time Neurofeedback
Neurofeedback protocols are built on spectral power. Train up your sensorimotor rhythm (12-15 Hz beta at central sites). Reduce your theta/beta ratio. Increase frontal alpha for relaxation. These are power-based targets, and they work. The speed of PSD computation (milliseconds) makes it ideal for the tight feedback loops that neurofeedback requires. You need to update the display or audio feedback many times per second, and PSD can keep up.
Sleep Staging
The gold standard for sleep staging, polysomnographic scoring, is fundamentally a spectral classification task. Stage N1 shows theta onset. Stage N2 shows sleep spindles (12-14 Hz bursts) and K-complexes. Stage N3 shows high-amplitude delta. REM shows mixed-frequency, low-amplitude activity resembling wakefulness. All of these are spectral power signatures. Connectivity adds nuance, but the core classification works fine with PSD alone.
Quick Clinical Screening
A 5-minute resting state recording analyzed with PSD can flag dozens of potential issues: abnormal alpha peak frequency, elevated slow-wave activity suggesting encephalopathy, focal power asymmetries suggesting structural lesions, the elevated theta/beta ratio associated with ADHD. This is fast, cheap, standardized, and validated across decades of research.
Consumer Brain Monitoring
This is where devices like the Crown live day-to-day. Focus scores, calm scores, power-by-band visualization. The on-device PSD processing gives users immediate, interpretable feedback about their brain state without requiring any knowledge of signal processing. For most consumer use cases, this is exactly the right level of analysis.
When Connectivity Analysis Changes Everything
But there are scenarios where PSD alone misses the signal entirely, and connectivity is the only way to see what's actually happening.
Distinguishing States That Look Identical in Power
Remember our two people from the opening? Focused attention and anxious rumination can produce nearly identical power spectra. So can creative flow and unfocused mind-wandering (both show elevated frontal theta). So can engaged meditation and drowsy pre-sleep states (both show prominent alpha with theta). Power spectral analysis struggles to distinguish these pairs because the distinguishing feature isn't how much power is present. It's how that power is organized across the network.
Connectivity pulls these states apart cleanly. Focused attention shows coherent fronto-parietal beta. Anxiety shows fragmented frontal beta with decoupled parietal activity. Creative flow shows enhanced long-range theta coherence. Mind-wandering shows high theta power with weak inter-regional coupling.
If you're new to connectivity analysis, start with a single comparison. Record yourself during focused work and during deliberate mind-wandering. Compute coherence between a frontal channel (F5 or F6 on the Crown) and a parietal channel (CP3 or CP4) in the beta band (13-30 Hz). In nearly every person, fronto-parietal beta coherence is significantly higher during focus. That one metric tells you something PSD alone never could.
Early Detection of Neurological Decline
This is where the clinical stakes get serious. Alzheimer's disease shows characteristic spectral slowing (shift from alpha to theta/delta dominance) in PSD. But that slowing only becomes reliably detectable after significant cognitive decline has already occurred.
Connectivity analysis tells a different story. Years before the first clinical symptoms, the brain's network topology begins to degrade. The "small-world" architecture that allows efficient information routing breaks down. Hub regions in the posterior cortex lose their central network position. The overall connectivity becomes more random and less organized.
A 2020 meta-analysis in Clinical Neurophysiology found that EEG-based connectivity measures could differentiate mild cognitive impairment from healthy aging with higher sensitivity and specificity than spectral power measures alone. The connectivity signatures appeared, on average, 2 to 3 years before spectral changes became diagnostically reliable.
Brain-Computer Interface Performance
BCI systems that rely purely on spectral power for classification (detecting motor imagery through alpha/beta changes at motor sites, for example) typically hit accuracy ceilings around 70-80%. Adding connectivity features, particularly inter-hemispheric coherence during motor imagery and fronto-parietal phase synchrony during attention tasks, consistently pushes accuracy into the 85-95% range in published studies. The connectivity features capture coordination patterns that power alone misses.
Tracking Network-Level Disorders
Conditions like schizophrenia, autism spectrum disorders, and traumatic brain injury are increasingly understood as disorders of connectivity, not disorders of regional activation. A brain region might be perfectly active (normal power spectrum) while being functionally disconnected from the regions it needs to coordinate with. PSD would say "this brain looks fine." Connectivity would reveal the broken links.
The Computational Divide
I should be honest about the practical gap between these two methods.
Power spectral analysis is computationally trivial. If you have access to the Crown's SDK, you already have PSD data. It's computed on-device and delivered to your application. Even if you wanted to compute it yourself from raw data, an FFT on a 1-second window of single-channel EEG takes a few milliseconds on any modern processor.
Connectivity analysis is harder. Not impossibly hard. But meaningfully harder.
Power Spectral Analysis:
- Computation per update: milliseconds
- Signal processing knowledge: basic (FFT)
- Code complexity: 10-20 lines with any scientific computing library
- Native on the Crown: yes, PSD data available through SDK
- Real-time capable: absolutely
Coherence (Basic Connectivity):
- Computation per update: tens of milliseconds
- Signal processing knowledge: intermediate (cross-spectral density)
- Code complexity: 30-60 lines
- Native on the Crown: no, computed from raw EEG via SDK
- Real-time capable: yes, with appropriate windowing
Full Network Analysis (Graph Theory):
- Computation per update: hundreds of milliseconds to seconds
- Signal processing knowledge: advanced (connectivity matrix, graph metrics, null models)
- Code complexity: 100-300 lines plus a graph library
- Native on the Crown: no, requires custom pipeline
- Real-time capable: possible with optimized code and longer update intervals
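As a rough sketch of that third tier, here is what turning a connectivity matrix into graph metrics looks like with NetworkX; the matrix is random stand-in data and the 0.5 threshold is arbitrary, since real pipelines use coherence or PLV matrices and compare metrics against null models rather than fixed cutoffs:

```python
# Sketch: from a connectivity matrix to basic graph metrics with NetworkX.
import numpy as np
import networkx as nx

np.random.seed(5)
n_channels = 8                                   # e.g. the Crown's 8 electrodes

raw = np.random.rand(n_channels, n_channels)
conn = (raw + raw.T) / 2                         # symmetric stand-in "connectivity"
np.fill_diagonal(conn, 0)

adjacency = (conn > 0.5).astype(int)             # binarize: edge = "connected"
G = nx.from_numpy_array(adjacency)

clustering = nx.average_clustering(G)            # local interconnectedness
print(f"mean clustering: {clustering:.2f}")
if nx.is_connected(G):
    # characteristic path length: hops between typical region pairs
    print(f"path length: {nx.average_shortest_path_length(G):.2f}")
```

Clustering and path length together are the ingredients of the "small-world" measures mentioned earlier: real brain networks combine high clustering with short paths, and that balance degrades in network disorders.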
The gap is real, but it's closing. Open-source tools like MNE-Python and the Brain Connectivity Toolbox have turned what used to be weeks of custom MATLAB scripting into a few function calls. And the Crown's raw EEG access at 256 Hz through its JavaScript and Python SDKs gives you everything you need as input.
The Convergence: Why You Want Both
Here's where the story arrives at its natural conclusion.
The strongest EEG analysis pipelines in both research and clinical practice don't choose between power spectral and connectivity analysis. They use both, because the two methods answer different questions about the same underlying neural reality.
Consider a focus-tracking application. PSD gives you the fast, real-time indicator. Frontal beta is up, posterior alpha is down, the user is probably focusing. That's your first pass, and it updates in milliseconds. But then connectivity gives you the confirmation. Is that elevated frontal beta actually coordinated with parietal activity? Is the attention network genuinely engaged, or is the frontal cortex just revving up on its own? That's your second pass, and it takes a bit longer to compute, but it catches the false positives that PSD alone would miss.
This two-layer approach, spectral power for speed and connectivity for specificity, is how the next generation of brain-responsive applications will work. The Crown already delivers the first layer natively. The second layer lives in the same raw data, waiting for developers to extract it.
And with the Crown's MCP integration connecting brain data directly to AI systems like Claude, there's an even more interesting possibility. An AI system that receives both spectral and connectivity features could learn to interpret your brain state with a nuance that neither layer achieves alone. It could learn that for your specific brain, focus looks like a particular combination of frontal beta power and fronto-parietal coherence, while creative flow looks like a completely different combination of theta power and long-range phase synchrony.
Personalized, multi-layer brain state interpretation. Built on two analytical methods that are nearly a century old, applied to data from a device that weighs 228 grams and sits on your head while you work.
The Signal Was Always There
Here's what I keep coming back to.
Every EEG recording ever made contained both types of information. Every clinical recording, every research dataset, every consumer brain monitoring session. The spectral power was there. The connectivity was there. Both layers, encoded in the same voltage fluctuations, present in every single data point.
For most of the history of EEG, we only looked at one layer. Not because the other didn't exist, but because we didn't have the tools, the computational power, or, honestly, the conceptual framework to ask the right questions.
Now we do.
And here's the part that gets genuinely strange. Your brain is generating both layers simultaneously, at this exact moment: every region of your cortex is producing oscillatory activity at specific power levels (the spectral layer) while coordinating that activity with dozens of other regions through precise timing relationships (the connectivity layer). You're not just a collection of brain regions doing their own thing. You're a network, a constantly reconfiguring web of synchronized activity that somehow produces the unified experience of reading these words and understanding them.
We spent a century measuring the volume of the instruments. We're finally starting to listen to the music.
The question is: what will you hear?

