Your Brain Is a Network. Here's How to Map It.
The Most Connected Object in the Known Universe Has a Map Problem
Here's a number that should make you pause: 86 billion neurons. Each capable of forming up to 10,000 connections with other neurons. That gives the human brain somewhere in the neighborhood of 100 trillion synaptic connections, a number so large it's genuinely hard to think about.
For most of neuroscience's history, researchers dealt with this absurd complexity by studying brain regions one at a time. They'd ask questions like "What does the hippocampus do?" or "Which area handles language?" And they made real progress. We got beautiful maps of brain anatomy, functional specialization, lesion studies that told us what breaks when you damage specific areas.
But there was a problem. The brain doesn't actually work that way.
No single brain region operates in isolation. The hippocampus doesn't just "do memory" by itself. It coordinates with the prefrontal cortex, the entorhinal cortex, the thalamus, and dozens of other regions in precisely timed patterns. Language isn't in Broca's area. Language is a dynamic process that ripples across a distributed network of regions, shifting configuration depending on whether you're speaking, listening, reading, or thinking in words.
The brain, it turns out, is not a collection of specialized parts. It's a network. And if you want to understand a network, you need a branch of mathematics that was invented almost 300 years ago to solve a puzzle about bridges.
Seven Bridges and 86 Billion Neurons
In 1736, the mathematician Leonhard Euler was thinking about the city of Königsberg (now Kaliningrad, Russia). The city was built on a river with two islands, connected by seven bridges. The citizens had a puzzle they couldn't solve: could you walk through the city, crossing each of the seven bridges exactly once?
Euler proved it was impossible. But more importantly, he invented an entirely new way of thinking about the problem. Instead of worrying about geography, distances, or the physical layout of the bridges, he stripped the problem down to its essence. The landmasses became nodes (also called vertices). The bridges became edges (also called links). The specific locations didn't matter. What mattered was the pattern of connections.
Graph theory was born.
For nearly three centuries, graph theory remained the domain of pure mathematics and (later) computer science. It's the math behind social networks, internet routing, airline flight paths, and epidemiological models. Any time you have a set of things and a set of relationships between those things, graph theory gives you the tools to analyze the structure.
And then, in the late 1990s, neuroscientists had a realization that changed the field. The brain is a network. Neurons are nodes. Synapses are edges. And graph theory, this 260-year-old mathematical framework, might be exactly what they needed to crack the brain's organizational code.
EEG Turns Your Skull Into a Graph
Here's where EEG enters the picture, and where graph theory goes from abstract math to something you can compute from data recorded off your own head.
When you place EEG electrodes on the scalp, each electrode picks up the aggregate electrical activity of millions of neurons beneath it. In graph-theoretic terms, each electrode becomes a node. But you need more than nodes to have a graph. You need edges, connections between the nodes.
This is where connectivity analysis comes in. Two EEG channels aren't "connected" by a physical wire. They're connected by the statistical relationship between their signals. If channel F5 and channel C3 consistently show correlated activity, synchronized phase relationships, or predictive patterns, there's an edge between them. The strength of that statistical relationship becomes the edge's weight.
The specific methods for calculating these relationships are worth understanding, because the choice of connectivity measure fundamentally shapes the graph you get:
| Connectivity Measure | What It Captures | Best For |
|---|---|---|
| Coherence | Shared frequency content between two channels | Identifying which regions oscillate together in the same band |
| Phase-locking value (PLV) | Consistency of phase relationship over time | Detecting synchronized timing between regions, independent of amplitude |
| Mutual information | Total shared statistical information | Capturing both linear and nonlinear relationships |
| Granger causality | Whether one signal predicts another's future | Inferring directionality: which region drives which |
| Weighted phase lag index (wPLI) | Phase consistency corrected for volume conduction | Reducing false connectivity from shared electrical fields |
That last one, the weighted phase lag index, deserves a moment. One of the tricky things about EEG connectivity is volume conduction: electrical signals spread through the skull and scalp, so two nearby electrodes can look correlated simply because they're picking up the same underlying source. It's like two microphones placed close together at a concert. They'll record very similar signals, but not because they're "connected." They're just hearing the same thing. The wPLI and similar measures were specifically designed to filter out this artifact, giving you a cleaner picture of genuine region-to-region communication.
Once you've computed connectivity between every pair of electrodes, you have a connectivity matrix: a table where each cell contains the connection strength between two nodes. This matrix IS the graph. And now you can unleash the entire toolkit of graph theory on it.
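As a minimal sketch of how one cell of that matrix gets filled in, here is a numpy-only phase-locking value computation. The three "channels" are synthetic phase time series built for illustration, not real EEG:

```python
import numpy as np

def plv_matrix(phases):
    """Pairwise phase-locking value (PLV) from instantaneous phases.

    phases: (n_channels, n_samples) array of phase angles in radians.
    Returns a symmetric (n_channels, n_channels) matrix of values in [0, 1].
    """
    n = phases.shape[0]
    plv = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # PLV = |mean over time of exp(i * (phi_i - phi_j))|
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
    return plv

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 512)
base = 2 * np.pi * 10 * t                 # a 10 Hz oscillation
phases = np.vstack([
    base,                                 # channel 0
    base + 0.5,                           # channel 1: constant lag, so PLV near 1
    rng.uniform(0, 2 * np.pi, t.size),    # channel 2: random phase, so PLV near 0
])
W = plv_matrix(phases)
print(np.round(W, 2))
```

A constant phase lag yields a PLV of 1 even though the signals never line up in amplitude, which is exactly why PLV is described above as independent of amplitude.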
The Metrics That Reveal How Your Brain Is Wired
A graph by itself is just a picture. What makes graph theory powerful is the set of metrics you can compute from that picture, each revealing a different aspect of the network's architecture. Here are the ones that matter most for brain networks.
Degree: Who's the Most Popular?
The simplest graph metric is degree: the number of edges connected to a node. In an EEG graph, a node with high degree is an electrode site that shows strong connectivity with many other sites. That region of the brain is a connector, participating in communication with widespread areas.
In a weighted graph (where edges have different strengths), you compute strength instead: the sum of all edge weights for a node. A node can have moderate degree but extremely high strength if its connections are all very strong.
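In NetworkX, degree and strength come from the same call; weight-summing is just a keyword. The four-node graph below is hypothetical, with made-up edge weights standing in for connectivity strengths:

```python
import networkx as nx

# Hypothetical weighted graph: edge weights stand in for connectivity strengths
G = nx.Graph()
G.add_weighted_edges_from([
    ("F5", "C3", 0.8), ("F5", "CP3", 0.3),
    ("C3", "CP3", 0.6), ("C3", "PO3", 0.2),
])
degree = dict(G.degree())                   # number of edges per node
strength = dict(G.degree(weight="weight"))  # sum of edge weights per node
print(degree["C3"], strength["C3"])         # C3: 3 edges, total weight 1.6
```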
Clustering Coefficient: Are Your Friends Also Friends With Each Other?
The clustering coefficient measures how interconnected a node's neighbors are. If electrode A is connected to both B and C, the clustering coefficient asks: are B and C also connected to each other?
A high clustering coefficient means the node sits inside a tight-knit cluster where neighboring regions all communicate with each other. This indicates local specialization, brain areas working together as a functional unit.
For the whole brain, you can average the clustering coefficients across all nodes to get the global clustering coefficient. Healthy brains show high global clustering, meaning the network naturally organizes into specialized local communities.
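A tiny worked example, again with illustrative electrode labels: a triangle of mutually connected nodes plus one dangling node shows how local and global clustering diverge.

```python
import networkx as nx

# A triangle (F5, C3, CP3) plus a dangling node PO3 hanging off C3
G = nx.Graph([("F5", "C3"), ("F5", "CP3"), ("C3", "CP3"), ("C3", "PO3")])
local = nx.clustering(G)          # per node: fraction of neighbor pairs that are linked
print(local["F5"])                # neighbors C3 and CP3 are linked: 1.0
print(local["C3"])                # one of three neighbor pairs linked: 1/3
print(nx.average_clustering(G))   # global clustering coefficient: mean of the above
```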
Path Length: How Many Hops Between Any Two Regions?
Characteristic path length is the average shortest-path distance between every pair of nodes in the graph: how many edges, on average, you must traverse to get from any node to any other. Think of it as a measure of global communication efficiency. Short path lengths mean information can travel between any two brain regions quickly, through just a few intermediary steps.
This metric is where things get clinically interesting. In Alzheimer's disease, characteristic path length increases. The brain's network becomes harder to traverse. Information that used to zip across the network in two or three hops now requires five or six. The highways are breaking down.
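The "highways breaking down" intuition is easy to see in miniature. In this sketch, a ring of 8 nodes forces long hops; adding two long-range edges slashes the characteristic path length:

```python
import networkx as nx

# A ring of 8 "electrodes": information must hop around the circle
ring = nx.cycle_graph(8)
L_ring = nx.average_shortest_path_length(ring)

# Add two long-range shortcut edges and the average hop count falls
shortcut = ring.copy()
shortcut.add_edges_from([(0, 4), (2, 6)])
L_short = nx.average_shortest_path_length(shortcut)
print(L_ring, L_short)
```

Run in reverse, this is the Alzheimer's pattern: remove long-range edges and the average path length climbs.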
Modularity: Does Your Brain Have Departments?
Modularity measures how neatly the network divides into distinct communities or modules, groups of nodes that are densely connected internally but sparsely connected to nodes outside the group.
A brain with high modularity has clear functional departments. The visual processing nodes cluster together. The motor control nodes cluster together. The language nodes cluster together. These modules correspond, remarkably well, to known functional brain systems.
But here's the twist: modularity isn't fixed. It changes depending on what you're doing. During rest, the brain's network is highly modular, with distinct communities doing their own thing. During complex tasks that require integrating information across domains, modularity decreases. The boundaries between departments blur. The network reconfigures to allow cross-module communication.
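Community detection makes this concrete. A sketch using NetworkX's greedy modularity maximization on a toy graph of two dense "departments" joined by a single bridge edge:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Two dense "departments" (triangles) joined by a single bridge edge
G = nx.Graph([(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)])
comms = greedy_modularity_communities(G)
print([sorted(c) for c in comms])        # recovers the two triangles
print(round(modularity(G, comms), 3))    # partition quality; higher = more modular
```

Greedy modularity maximization is one of several community-detection algorithms; on real EEG graphs the Louvain method is a common alternative.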
Hub Identification: Finding Your Brain's Power Brokers
Some nodes matter more than others. Hubs are nodes with disproportionately high connectivity, centrality, or influence in the network. They're the power brokers, the nodes that hold the network together.
There are several ways to identify hubs:
- Degree centrality: which node has the most connections
- Betweenness centrality: which node lies on the greatest number of shortest paths between other nodes (the ultimate middleman)
- Eigenvector centrality: which node is connected to other highly connected nodes (it's not just about having connections, it's about having the right connections)
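All three measures are one-liners in NetworkX. The toy graph below is constructed so that a single node dominates every measure; real brain graphs are rarely this clean:

```python
import networkx as nx

# Node 0 is a deliberate hub; node 5 hangs off the periphery through node 4
G = nx.Graph([(0, 1), (0, 2), (0, 3), (0, 4), (3, 4), (4, 5)])
deg = nx.degree_centrality(G)            # fraction of possible connections held
btw = nx.betweenness_centrality(G)       # share of shortest paths passing through
eig = nx.eigenvector_centrality(G, max_iter=1000)  # connected to the well-connected
hub = max(deg, key=deg.get)
print(hub)                               # node 0 tops all three measures here
```

In practice the three rankings can disagree, which is why hub studies often require a node to score highly on several centrality measures at once.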
In the brain, hubs tend to cluster in regions of association cortex, areas that integrate information across sensory modalities: the posterior parietal cortex, the prefrontal cortex, the precuneus. These are the same regions that are metabolically expensive, develop late in childhood, and are disproportionately affected in neurodegenerative disease.
That last point is not a coincidence. Hubs carry more traffic, consume more energy, and are more vulnerable to damage. Research by Olaf Sporns and others has shown that targeted damage to hub nodes causes far more network disruption than equivalent damage to peripheral nodes. This "hub vulnerability" hypothesis may explain why Alzheimer's disease preferentially attacks the brain's most connected regions.
Your brain's network follows the same mathematical rules as airport systems. Just as a few major airports (hubs like Atlanta and Chicago O'Hare) handle a disproportionate share of all air traffic, a few brain regions handle a disproportionate share of all neural communication. And just as canceling flights at a hub airport cascades into delays across the entire system, damage to a brain hub cascades into dysfunction far beyond the local region. The brain's network topology isn't just interesting math. It explains why certain types of brain damage are catastrophically worse than others, even when the physical size of the lesion is the same.
Small-World Topology: The Brain's Architectural Sweet Spot
In 1998, mathematicians Duncan Watts and Steven Strogatz published a paper that introduced the concept of small-world networks. These are networks that combine two properties that seem contradictory: high clustering (like a regular lattice, where everyone knows their neighbors) and short path lengths (like a random network, where any node can reach any other in just a few hops).
The analogy that works best: think of a small town where everyone knows everyone locally (high clustering), but there are a few people who have connections to faraway places (creating shortcuts that slash path lengths). Those few long-range connections transform a provincial, slow network into one where information can travel across the entire structure in just a few steps.
When researchers computed these metrics on brain networks, constructed from EEG, fMRI, and anatomical data, the result was striking. The brain is a small-world network. Not approximately. Not sort of. The small-world properties of neural networks are among the strongest and most consistently replicated findings in network neuroscience.
Why does this matter? Because small-world architecture is optimally efficient. It supports both local specialization (high clustering lets nearby regions work together on specialized tasks) and global integration (short path lengths let distant regions share information rapidly). And it does this with far less wiring than a fully connected network would require.
The small-world index (sigma) compares your network's clustering and path length against a random network with the same number of nodes and edges.
Sigma = (C/C_random) / (L/L_random)
Where C is the clustering coefficient and L is the characteristic path length. If sigma is significantly greater than 1, the network has small-world properties. Healthy brains typically show sigma values between 1.5 and 3.0 for EEG-derived graphs. A sigma near 1.0 suggests the network has become randomized, losing its organized structure.
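The sigma formula can be computed directly. The sketch below compares a graph against Erdős-Rényi random graphs matched on node and edge count (one simple choice of null model; degree-preserving rewiring is a stricter alternative, and NetworkX also ships a built-in `nx.sigma`):

```python
import networkx as nx

def small_world_sigma(G, n_random=10):
    """sigma = (C / C_rand) / (L / L_rand), versus size-matched random graphs."""
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    Cr = Lr = 0.0
    used = 0
    for seed in range(100):
        R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed)
        if not nx.is_connected(R):
            continue                 # path length is undefined on disconnected graphs
        Cr += nx.average_clustering(R)
        Lr += nx.average_shortest_path_length(R)
        used += 1
        if used == n_random:
            break
    return (C / (Cr / used)) / (L / (Lr / used))

# A Watts-Strogatz graph: ring lattice plus a sprinkling of rewired shortcuts
G = nx.connected_watts_strogatz_graph(30, 6, 0.1, seed=1)
sigma = small_world_sigma(G)
print(round(sigma, 2))               # well above 1: small-world organization
```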
The small-world property is not just a curiosity. It appears to be a fundamental design constraint of biological neural networks. It shows up in C. elegans (a worm with exactly 302 neurons), in cat cortex, in macaque visual systems, and in the human brain at every scale from local microcircuits to whole-brain connectivity. Evolution seems to have converged on this architecture repeatedly, in very different organisms, suggesting it solves a fundamental computational problem: how to process information both locally and globally without drowning in wiring costs.
When Networks Break: Clinical Applications of Graph Theory in EEG
Here's where the abstract math hits the clinic. If healthy brains have characteristic network properties, then deviations from those properties might serve as biomarkers for neurological and psychiatric conditions. And that's exactly what researchers have found.

Alzheimer's Disease
Alzheimer's patients show a consistent pattern of graph-theoretic disruption in EEG studies. Clustering coefficient drops. Path length increases. Small-world index degrades toward randomness. Hub structure deteriorates, with the most connected nodes losing their disproportionate connectivity. The brain's network is literally becoming more random, less organized, less efficient.
Critically, these changes appear before clinical symptoms become obvious. Several studies have shown that graph-theoretic EEG metrics can distinguish between healthy aging, mild cognitive impairment, and early Alzheimer's with accuracy rates above 85%. The network starts fraying before the person notices they're forgetting things.
Epilepsy
Epileptic networks show a different pattern. Instead of becoming random, they become excessively ordered, too regular, too synchronized. During a seizure, the brain's normal small-world architecture collapses into something closer to a lattice: extremely high clustering, extremely high path length, and a loss of the long-range shortcuts that enable efficient global communication.
Between seizures, the epileptic focus (the region where seizures originate) often shows abnormal hub properties, as if that region is accumulating connectivity until it crosses a threshold and fires uncontrollably. Graph-theoretic analysis of EEG has been used to identify seizure foci with enough accuracy to guide surgical planning.
ADHD
EEG graph studies of ADHD reveal reduced global efficiency, lower small-world properties, and altered modularity, particularly in frontal networks. The prefrontal hubs that coordinate sustained attention show weaker connectivity with other regions. The network's ability to reconfigure itself when switching between rest and task states is impaired. This aligns with what we know about ADHD clinically: it's not that the brain lacks the hardware for attention. It's that the network coordination required for sustained, directed attention is compromised.
Depression
Depression shows up in EEG graphs as altered frontal connectivity, increased modularity (the network becomes more siloed, with less cross-module communication), and shifts in hub structure. The default mode network, which is already known to be overactive in depression, shows excessive internal connectivity at the expense of communication with task-positive networks. The brain gets stuck in a self-referential loop, and graph theory can quantify exactly how stuck.
| Condition | Clustering | Path Length | Small-World Index | Notable Pattern |
|---|---|---|---|---|
| Healthy brain | High | Short | 1.5 to 3.0 | Balanced local and global efficiency |
| Alzheimer's disease | Decreased | Increased | Approaches 1.0 | Network randomization, hub degradation |
| Epilepsy | Very high | Very high | Altered | Excessive regularity, abnormal hubs at seizure focus |
| ADHD | Reduced | Variable | Reduced | Frontal hub weakness, impaired reconfiguration |
| Depression | Altered | Increased | Reduced | DMN hyper-modularity, frontal connectivity shifts |
8 Nodes, 28 Edges: What a Consumer EEG Can Actually Tell You
Let's be honest about scale. Research-grade EEG systems use 64, 128, or even 256 channels. That gives you a graph with 64 to 256 nodes and thousands of edges. The resolution is extraordinary.
The Neurosity Crown has 8 channels: CP3, C3, F5, PO3, PO4, F6, C4, and CP4. That gives you 8 nodes and a maximum of 28 edges. This is not a high-density brain graph. It's more like a sketch than a photograph.
But a sketch drawn by someone who knows what to look for can still tell you a lot.
Eight channels covering frontal, central, and parietal-occipital regions give you representation across the major functional zones. You can compute meaningful connectivity between frontal and posterior sites (which tracks attention and executive function), between left and right hemispheres (which tracks interhemispheric coordination), and within local clusters (which tracks regional processing efficiency).
Here's what's realistic with 8 nodes:
- Global clustering coefficient and characteristic path length are computable and meaningful, though with wider confidence intervals than high-density systems
- Small-world index can be calculated, and the direction of change (increasing vs. decreasing sigma) is informative even if the absolute value is less precise
- Frontal-posterior connectivity tracks the long-range connections most relevant to attention, working memory, and executive function
- Hemispheric asymmetry in graph metrics correlates with emotional regulation and approach/withdrawal motivation
- State-dependent network changes are detectable: your 8-node graph looks measurably different during focused work versus relaxed mind-wandering
What 8 channels cannot do is resolve fine-grained modularity, identify precise hub locations within cortical subregions, or compute the kind of source-localized connectivity maps that 256-channel systems produce. That's real. Pretending otherwise would be dishonest.
But consider this: one of the three real networks analyzed in the Watts-Strogatz paper that launched the entire field was the neural wiring of C. elegans, a graph of just 282 nodes. You don't always need more nodes. Sometimes you need smarter analysis of the nodes you have.
With the Crown's JavaScript or Python SDK, you have access to raw EEG at 256Hz and power spectral density data from all 8 channels. To build a connectivity graph, you'd compute pairwise coherence or phase-locking values between all 28 channel pairs in a sliding time window (typically 2 to 4 seconds). Apply a statistical threshold to keep only significant edges. The result: a dynamic, real-time brain graph that updates every few seconds, reflecting your brain's shifting network state. Researchers using BrainFlow or Lab Streaming Layer (LSL) integration can pipe this data into Python for graph-theoretic analysis using libraries like NetworkX or the Brain Connectivity Toolbox.
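A rough sketch of that sliding-window pipeline, demonstrated on synthetic data. The numpy-based Hilbert transform, the 0.5 threshold, and the toy signal are all illustrative choices, not Neurosity SDK calls, and a real pipeline would band-pass filter each window first:

```python
import numpy as np
import networkx as nx

FS = 256                 # Crown sampling rate, per the SDK
WIN = 2 * FS             # one 2-second window
CHANNELS = ["CP3", "C3", "F5", "PO3", "PO4", "F6", "C4", "CP4"]

def window_graph(window, threshold=0.5):
    """Build one thresholded PLV graph from an (8, WIN) window of raw EEG."""
    n_ch, n_s = window.shape
    # Analytic signal via the frequency domain (a numpy-only Hilbert transform;
    # assumes an even window length)
    h = np.zeros(n_s)
    h[0] = h[n_s // 2] = 1
    h[1:n_s // 2] = 2
    phases = np.angle(np.fft.ifft(np.fft.fft(window, axis=1) * h, axis=1))
    G = nx.Graph()
    G.add_nodes_from(CHANNELS)
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            plv = np.abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
            if plv >= threshold:     # keep only strong edges
                G.add_edge(CHANNELS[i], CHANNELS[j], weight=plv)
    return G

# Synthetic demo: 8 channels sharing one 10 Hz rhythm plus independent noise
rng = np.random.default_rng(7)
t = np.arange(WIN) / FS
data = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal((8, WIN))
G = window_graph(data)
print(G.number_of_nodes(), G.number_of_edges())
```

Slide the window forward every second or two, re-run `window_graph`, and feed each graph to NetworkX metrics, and you have the dynamic brain graph the paragraph describes.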
The Network Perspective Changes Everything
The most important thing about graph theory applied to EEG isn't any single metric. It's the conceptual shift.
For decades, EEG analysis meant looking at individual channels: how much alpha is there at O1? What's the theta/beta ratio at Fz? How big is the P300 at Pz? This is like analyzing a city by studying each building individually. You learn a lot about individual structures, but you miss the roads, the traffic patterns, the neighborhoods, the way the city actually functions as a system.
Graph theory forces you to think about relationships. Not "what is this brain region doing?" but "how is this brain region talking to other regions, and what does the pattern of that conversation look like?" This shift from node-level to network-level analysis has revealed properties of brain organization that were invisible under the old paradigm.
Consider consciousness itself. Giulio Tononi's Integrated Information Theory (IIT) proposes that consciousness arises from a system's ability to integrate information across its parts. The key mathematical quantity in IIT, called phi, is essentially a graph-theoretic measure of how much a network is more than the sum of its parts. When you lose consciousness (anesthesia, deep sleep, severe brain injury), phi drops. The network becomes fragmented. The integration that constitutes conscious experience dissolves.
This is a genuinely strange idea. It means consciousness isn't produced by any particular brain region or chemical. It's a property of the network's topology, its pattern of connectivity, its graph structure. If that doesn't qualify as an "I had no idea" moment, I don't know what does.
Where This Is All Going
We're in the early days of brain graph analysis, which means the low-hanging fruit is still being picked. Here's what's coming.
Dynamic graph theory. Most current studies analyze static graphs, averaged over a recording period. But the brain's network reconfigures itself constantly, moment by moment. Dynamic graph theory tracks how network properties change over time, revealing the brain's flexibility, its ability to rapidly assemble and disassemble functional networks as task demands shift. Early work shows that network flexibility itself is a meaningful metric: people with higher network flexibility show better cognitive performance and more creative thinking.
Personalized graph baselines. Your brain's network topology is as individual as your fingerprint. Two healthy people can have very different clustering coefficients, path lengths, and hub distributions, all within the normal range. The future of brain network analysis is personalized baselines: tracking YOUR graph metrics over days, weeks, and months, detecting deviations from YOUR normal that might signal fatigue, cognitive decline, or the early stages of a neurological condition.
AI-powered graph analysis. Graph neural networks, a class of machine learning models designed specifically for graph-structured data, are increasingly being applied to EEG connectivity matrices. These models can learn to classify brain states, detect anomalies, and predict cognitive outcomes from graph structure with accuracy that hand-crafted metrics can't match. Combine this with real-time EEG from a device you actually wear, and you're looking at continuous, AI-interpreted brain network monitoring.
A few hundred years ago, Euler looked at seven bridges and invented a new branch of mathematics. He couldn't have imagined that his framework for analyzing connections between riverbanks would one day be applied to connections between brain regions, revealing the architecture of thought itself.
Your brain has been running its network your entire life. Every moment of focus, every creative insight, every time you zoned out and snapped back, your nodes were forming edges, your clusters were computing locally, and your hubs were routing information across short paths through a small-world architecture that evolution spent hundreds of millions of years optimizing.
The graph was always there. Now you can start reading it.

