Two Windows Into the Same Brain
Your Brain Has Two Telltale Signs, and They Operate on Completely Different Timescales
Right now, as you read this sentence, two things are happening inside your skull simultaneously.
First, billions of neurons are firing electrical impulses. Tiny voltage spikes cascade across networks of brain cells, rippling through your cortex in synchronized waves. This electrical chatter is constant, fast, and astonishingly complex. It happens on the scale of milliseconds.
Second, blood is rushing to the parts of your brain doing the heaviest lifting. Neurons that just fired are hungry. They burned through oxygen and glucose, and your vascular system responds by flooding those regions with fresh, oxygenated blood. This hemodynamic response is slower, more deliberate. It takes seconds, not milliseconds.
These two phenomena, the electrical and the hemodynamic, are both real signatures of your brain at work. And here's the thing that makes the EEG vs. fNIRS question so interesting: each technology is tuned to detect exactly one of them while being almost completely blind to the other.
EEG listens to the electricity. fNIRS watches the blood.
That single distinction explains almost every practical difference between the two. The speed. The spatial resolution. The noise sources. The cost. The kinds of applications each one is good at. Once you understand what each technology is actually measuring, the tradeoffs stop being confusing and start being obvious.
The Physics: Voltage vs. Photons
Let's start at the most fundamental level. What is physically happening when each device sits on your head?
How EEG Works: Eavesdropping on Electrical Conversations
EEG stands for electroencephalography, which is a mouthful that translates to "writing down the electrical activity of the brain." The technique has been around since 1924, when a German psychiatrist named Hans Berger stuck electrodes to a patient's scalp and recorded the first human brain waves. Nearly a century later, the basic principle hasn't changed.
When a large group of neurons fires in synchrony, they produce electrical fields strong enough to detect through the skull, the cerebrospinal fluid, and the skin. EEG electrodes sitting on your scalp pick up these voltage fluctuations, typically in the range of 10 to 100 microvolts. That's millionths of a volt. For comparison, a AA battery produces 1.5 volts, which is roughly 15,000 to 150,000 times stronger than the signals EEG is trying to detect.
The fact that EEG works at all is kind of miraculous. You're trying to listen to whispers through a wall, and the wall is your skull. But the signal is there, and modern amplifiers are sensitive enough to capture it cleanly.
What EEG actually sees: the summed electrical activity of large populations of cortical neurons, particularly pyramidal neurons oriented perpendicular to the scalp surface. It detects these signals with a temporal resolution of about 1 millisecond, meaning it can track the moment-to-moment dynamics of brain activity in near real-time.
How fNIRS Works: Shining a Light Through Your Head
fNIRS stands for functional near-infrared spectroscopy. It was developed in the late 1970s, and its operating principle is beautifully simple once you hear it.
Near-infrared light, at wavelengths between roughly 700 and 900 nanometers, has a special property: it can pass through skin, bone, and brain tissue. Not perfectly, not deeply, but enough. When you shine near-infrared light into someone's head, some of that light scatters through the outer layers of the cortex before bouncing back out. Detectors placed a few centimeters from the light source can pick up this returning light.
Here's the clever part. Oxygenated hemoglobin and deoxygenated hemoglobin absorb near-infrared light at different wavelengths. By using two or more wavelengths and measuring how much light comes back, fNIRS can calculate changes in the concentration of each type of hemoglobin in the tissue between the source and detector.
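That two-wavelength calculation is known as the modified Beer-Lambert law, and it reduces to solving a small linear system. Here's a minimal sketch; the extinction coefficients, intensities, path-length factor, and source-detector distance are all illustrative placeholders, not calibrated constants from any real instrument:

```python
import numpy as np

# Illustrative extinction coefficients for [HbO2, HbR] at two wavelengths.
# Placeholder values for demonstration only -- real systems use published,
# calibrated coefficients for their exact wavelengths.
E = np.array([[1.49, 3.84],   # 760 nm
              [2.53, 1.80]])  # 850 nm

def delta_concentrations(I0, I, dpf, distance_cm):
    """Modified Beer-Lambert law: recover [dHbO2, dHbR] from light intensity."""
    delta_od = -np.log(I / I0)               # change in optical density per wavelength
    scaled = delta_od / (dpf * distance_cm)  # correct for the scattered path length
    return np.linalg.solve(E, scaled)        # invert the 2x2 absorption system

# Simulated detector readings: slightly dimmer during activation
I0 = np.array([1.00, 1.00])   # baseline intensity at each wavelength
I  = np.array([0.97, 0.95])   # during activation: more light absorbed
d_hbo2, d_hbr = delta_concentrations(I0, I, dpf=6.0, distance_cm=3.0)
```

With more absorption at both wavelengths, the solve attributes most of the change to an increase in oxygenated hemoglobin, which is exactly the activation signature fNIRS looks for.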
When a brain region becomes active, it consumes oxygen, and the vascular system overcompensates by flooding the area with more oxygenated blood than was actually needed. This is the same hemodynamic response that fMRI measures (using magnetic fields instead of light). fNIRS detects this change optically, from outside the skull, without any magnets, without any radiation, and without requiring the person to lie motionless in a giant tube.
What fNIRS actually sees: changes in oxygenated and deoxygenated hemoglobin concentration in the cortical tissue beneath the sensor, reflecting regional changes in blood flow that correlate with neural activity. The signal peaks about 4 to 6 seconds after the neural event that triggered it.
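That 4-to-6-second lag is usually modeled with a canonical "double-gamma" hemodynamic response function. The sketch below uses common SPM-style shape parameters as an assumption; it's a standard model of the response, not a measured curve from any particular device:

```python
import numpy as np
from scipy.stats import gamma

# Canonical double-gamma hemodynamic response function.
t = np.arange(0.0, 20.0, 0.1)                       # seconds after the neural event
hrf = gamma.pdf(t, a=6) - 0.35 * gamma.pdf(t, a=16)  # rise minus late undershoot
hrf /= hrf.max()                                     # normalize peak to 1

peak_time = t[np.argmax(hrf)]                        # lands near 5 s
```

The peak falls inside the 4-to-6-second window cited above, and the negative lobe after ~10 seconds models the post-stimulus undershoot seen in real hemodynamic data.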
The fundamental difference is not about "better" or "worse." EEG measures the cause (electrical firing) with incredible time precision but poor spatial precision. fNIRS measures the consequence (blood flow change) with better spatial precision but an inherent multi-second delay. They are looking at the same brain activity through two completely different lenses.
The Temporal Resolution Gap: Milliseconds vs. Seconds
This is where the practical implications start to hit hard.
EEG's temporal resolution is on the order of 1 to 2 milliseconds. When your neurons fire, EEG sees it almost instantaneously. If you blink, if a sound startles you, if you shift your attention from one task to another, EEG captures the electrical signature of that event within milliseconds of it happening. This is why EEG has been the backbone of brain-computer interfaces, neurofeedback, and real-time cognitive monitoring for decades.
fNIRS operates on a fundamentally different timescale. The hemodynamic response function, the biological process by which blood flow increases to an active brain region, takes about 4 to 6 seconds to reach its peak. No amount of hardware improvement can speed this up. It's not a limitation of the technology. It's a limitation of biology. Blood vessels don't dilate instantaneously. Oxygenated blood doesn't teleport to hungry neurons. The plumbing takes time.
Think of it this way. Imagine you're watching a concert. EEG is like having a microphone on stage, picking up every note the moment it's played. fNIRS is like measuring the heat signature of the crowd, watching which sections of the audience get excited. Both tell you something real about the concert. But the microphone tells you what's happening right now. The heat map tells you what happened a few seconds ago.
For applications that need real-time responsiveness (neurofeedback, focus tracking, meditation monitoring, brain-controlled interfaces), this distinction is everything. You can't give someone meaningful real-time feedback on their brain state if the signal you're reading is 5 seconds old. By the time fNIRS registers that you lost focus, you've already been distracted for the length of a commercial break.
| Property | EEG | fNIRS |
|---|---|---|
| What it measures | Electrical fields from neural firing | Blood oxygenation changes (hemodynamic response) |
| Temporal resolution | 1-2 milliseconds | 4-6 seconds (hemodynamic delay) |
| Spatial resolution | 1-3 cm (scalp level) | 1-2 cm (cortical surface) |
| Depth of measurement | Primarily cortical surface | Outer 1-3 cm of cortex |
| Signal origin | Direct neural activity | Metabolic consequence of neural activity |
| Susceptible to motion artifacts | Yes (especially muscle/eye) | Moderate (less than EEG) |
| Susceptible to electrical noise | Yes (power lines, other electronics) | No (optical measurement) |
| Hair interference | Significant (requires good contact) | Moderate (light must reach scalp) |
| Typical sample rate | 256-1024 Hz | 1-10 Hz |
| Consumer portability | Excellent (wireless, lightweight) | Improving (still bulkier) |
| Approximate cost (consumer) | $500-$1,500 | $1,000-$5,000+ |
| Approximate cost (research) | $5,000-$50,000 | $15,000-$150,000 |
Spatial Resolution: Where fNIRS Pulls Ahead
Here's where fNIRS gets its moment to shine (pun intended, and I apologize for nothing).
EEG has a well-known spatial resolution problem called volume conduction. The electrical signals generated by your neurons don't travel in neat, straight lines from the cortex to the scalp. They spread out through the conductive tissue of the brain, the cerebrospinal fluid, and the skull, smearing and blending as they go. By the time they reach the scalp electrodes, what you're recording is a blurry mixture of signals from a wide area. Trying to pinpoint exactly where in the brain a signal originated from scalp EEG data is a notoriously difficult inverse problem. It can be done with mathematical source localization techniques, but it's never as precise as you'd like.
fNIRS has a natural advantage here. Because it measures the change in light absorption in the tissue directly beneath each source-detector pair, it inherently provides better spatial specificity. The "banana-shaped" path that photons travel between a source and detector defines a relatively localized measurement volume. With a well-designed sensor array, fNIRS can achieve a spatial resolution of about 1 to 2 centimeters on the cortical surface.
That's still not great by medical imaging standards. An fMRI machine can resolve structures down to 1 to 2 millimeters. But compared to EEG's 1 to 3 centimeter blurry smear, fNIRS offers a meaningful improvement in knowing where in the cortex something is happening.
This matters for research questions like: "Which specific region of the prefrontal cortex activates during this task?" or "Does the left motor cortex respond differently than the right?" EEG can sometimes answer these questions, but fNIRS does it more naturally.
The Noise Problem: Different Enemies for Different Technologies
Every brain measurement technology has to fight noise. But EEG and fNIRS face completely different adversaries, and understanding those differences matters enormously in practice.
EEG's Enemies
Muscle artifacts. Every time you clench your jaw, furrow your brow, or tense your neck, the electrical signals from your muscles drown out your brain signals. Muscle activity produces voltages that are orders of magnitude larger than cortical EEG. This is why EEG research traditionally required subjects to sit perfectly still, which is not exactly how people live their lives.
Eye movement artifacts. Your eyeballs are electrically polarized (the cornea is positive relative to the retina). Every blink and eye movement produces a large voltage change that propagates across the scalp. Experienced EEG researchers can spot blink artifacts in raw data from across the room.
Electrical interference. EEG operates in the microvolt range, which means it's vulnerable to any ambient electrical noise. The 50/60 Hz hum from power lines is a constant adversary. Other electronic devices, fluorescent lights, and even the static charge in your chair can contaminate the signal.
Electrode impedance. If an EEG electrode doesn't make good contact with your scalp (hair is the main culprit), the impedance rises and the signal quality plummets. Getting reliable electrode contact through thick hair has been one of the enduring challenges of consumer EEG.

fNIRS's Enemies
Ambient light. Because fNIRS is an optical measurement, any stray light that leaks into the detectors corrupts the signal. Direct sunlight is the worst offender, but even overhead lighting can be problematic. Most fNIRS systems use light-blocking caps or headbands, and the detectors use filters to reject wavelengths outside the near-infrared range.
Systemic physiology. This is fNIRS's sneakiest problem. The signal it measures, changes in blood oxygenation, doesn't come exclusively from the brain. Heart rate changes, breathing patterns, blood pressure fluctuations, and even scalp blood flow all affect the optical signal. Separating the cerebral hemodynamic response from these systemic physiological changes is a significant signal processing challenge. Some fNIRS systems use "short-separation channels" that measure only scalp blood flow, allowing researchers to subtract it from the deeper signal.
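That short-separation subtraction is, at its core, a least-squares regression: fit the scalp-only channel to the long channel, then remove the fitted component. A toy sketch with simulated signals (the oscillation, block design, and noise levels are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 600
t = np.arange(n)                              # samples at roughly 1 Hz

# Hypothetical signals: a slow systemic oscillation seen by both channels,
# plus a block-design cerebral response seen only by the long channel.
systemic = np.sin(2 * np.pi * t / 60)
cerebral = np.where((t % 120) < 30, 1.0, 0.0)

short_ch = systemic + 0.05 * rng.standard_normal(n)              # scalp only
long_ch = 0.8 * systemic + 0.5 * cerebral + 0.05 * rng.standard_normal(n)

# Regress the short channel out of the long channel (least squares)
beta = (short_ch @ long_ch) / (short_ch @ short_ch)
estimate = long_ch - beta * short_ch
```

After the subtraction, the residual is orthogonal to the scalp signal by construction, and what remains tracks the cerebral response far more cleanly than the raw long channel did.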
Motion artifacts. When a sensor moves relative to the skin, the optical coupling changes and the signal jumps. fNIRS is generally more robust to motion than EEG (no electrode impedance to worry about), but it's not immune, especially during vigorous movement.
Melanin and hair. Near-infrared light has to pass through the skin and hair to reach the brain. Darker skin absorbs more light, reducing the signal-to-noise ratio. Thick, dark hair can block the light path entirely. This has been a significant equity issue in fNIRS research, and manufacturers are actively working on sensor designs that perform more consistently across diverse populations.
EEG fights electrical and muscular contamination. fNIRS fights optical and physiological contamination. Neither is "noisier" in absolute terms. They just have different vulnerabilities, which means different environments and use cases favor different technologies.
Real-Time Brain-Computer Interfaces: Why Milliseconds Win
Here's where the rubber meets the road for anyone who wants to actually do something with their brain data, not just study it.
Brain-computer interfaces need three things: speed, reliability, and responsiveness. A BCI that takes 5 seconds to register a change in your mental state isn't a [brain-computer interface](/guides/what-is-bci-brain-computer-interface). It's a brain-computer suggestion box.
This is why EEG dominates the BCI landscape. When you shift from unfocused mind-wandering to deep concentration, your brain's electrical signature changes within hundreds of milliseconds. Alpha waves (8-13 Hz) are suppressed. Beta activity (13-30 Hz) increases. Gamma oscillations may spike. EEG sees all of this happening in real time, fast enough to trigger an immediate response: adjust the music, send an alert, change the lighting, log a focus session, or control a cursor on a screen.
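The alpha-down, beta-up pattern is typically quantified as band power from a power spectral density estimate. Here's a minimal sketch using Welch's method on simulated data; the amplitudes and the beta/alpha ratio are an illustrative toy, not the algorithm behind any commercial focus score:

```python
import numpy as np
from scipy.signal import welch

fs = 256.0                                    # Hz, a typical EEG sample rate
t = np.arange(0, 4, 1 / fs)

# Simulated "focused" EEG: beta (20 Hz) power dominates alpha (10 Hz) power
eeg = 5e-6 * np.sin(2 * np.pi * 10 * t) + 15e-6 * np.sin(2 * np.pi * 20 * t)

freqs, psd = welch(eeg, fs=fs, nperseg=512)   # power spectral density estimate

def band_power(lo, hi):
    """Integrate PSD over a frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

alpha_power = band_power(8, 13)
beta_power = band_power(13, 30)
focus_ratio = beta_power / alpha_power        # crude proxy; not Neurosity's metric
```

A new estimate like this can be computed on every fresh second of data, which is what makes millisecond-scale electrical recording usable for real-time feedback loops.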
fNIRS-based BCIs exist. Researchers have built systems that classify mental states using hemodynamic signals. But the inherent 4-6 second delay limits the interaction paradigm. You can use fNIRS to detect sustained cognitive states over longer windows (minutes, not seconds), and some creative researchers have built binary "yes/no" communication systems for patients who can't move or speak. These are remarkable achievements. But for the everyday use case of real-time neurofeedback, focus tracking, or thought-controlled applications, EEG's speed advantage is not just incremental. It's structural.
The Neurosity Crown, for example, processes EEG data on-device using the N3 chipset, delivering focus scores, calm scores, and raw brainwave data to applications in real time. That kind of responsiveness simply isn't possible with a hemodynamic measurement. It's not that fNIRS hardware is too slow. It's that blood is too slow.
Cost and Accessibility: The Practical Calculus
Let's talk money, because technology that nobody can afford or access is just a science experiment.
Consumer EEG has come a long way. Devices range from under $200 for basic single-channel headbands to $500-$1,500 for multi-channel systems like the Neurosity Crown (8 channels, on-device processing, open SDK access). Research-grade EEG systems with 32, 64, or 128 channels run from $5,000 to $50,000, depending on channel count and amplifier quality.
Consumer fNIRS is a smaller and younger market. Dedicated consumer fNIRS devices are rare and typically cost $1,000 to $5,000. Research-grade fNIRS systems, with their multiple light sources, detectors, and sophisticated optode arrays, can run $15,000 to $150,000. The optical components (LEDs, photodetectors, fiber optics) and the engineering required to maintain consistent light coupling add cost that EEG doesn't have to worry about.
There's also the ecosystem factor. EEG has decades of open-source software, established file formats (EDF, BDF), community tools (MNE-Python, EEGLAB, BrainFlow), and a massive body of published research that newcomers can build on. fNIRS tooling is growing (Homer3, MNE-NIRS) but the ecosystem is smaller and less mature. If you're a developer who wants to build applications using brain data, EEG gives you more tools, more documentation, and more community support to work with today.
The Multimodal Future: Why "vs." Is the Wrong Framing
Here's the "I had no idea" moment of this guide, and it's the reason I've been carefully avoiding declaring a winner.
EEG and fNIRS are not competing technologies. They're complementary ones. And the most exciting work in non-invasive brain measurement right now involves using both simultaneously.
Think about what you get when you combine them. EEG tells you when something happened in the brain, with millisecond precision. fNIRS tells you where blood flow changed, with centimeter-level localization. Together, they answer both questions at once: this specific region of the cortex (fNIRS) activated at this precise moment (EEG).
Researchers call this multimodal neuroimaging, and it's not just theoretical. Studies combining EEG and fNIRS have shown improved classification accuracy for brain-computer interfaces, better localization of seizure foci in epilepsy patients, and richer characterization of cognitive states during complex tasks. A 2022 meta-analysis found that EEG-fNIRS hybrid systems achieved BCI classification accuracies 5-15% higher than either modality alone.
The two technologies are even physically compatible. EEG electrodes can sit on the scalp right next to fNIRS optodes without interfering with each other. EEG uses electrical measurement. fNIRS uses optical measurement. They operate in completely different physical domains, which means they don't crosstalk.
This is unusually elegant. Most multimodal imaging combinations involve painful tradeoffs. You can't easily combine EEG with fMRI (the magnetic field wreaks havoc on the electrical recordings). You can't put a PET scanner in someone's living room. But EEG plus fNIRS? You could, in principle, build that into a single wearable device. Some research labs already have.
The next generation of non-invasive brain-sensing hardware will likely integrate multiple modalities into a single device. Today, the Neurosity Crown captures the electrical side of the equation with 8-channel EEG. As optical sensor technology shrinks and costs drop, the dream of a single wearable that captures both electrical and hemodynamic brain signals is getting closer to reality.
Choosing the Right Tool: A Decision Framework
So when does each technology make sense? Here's a practical breakdown.
Choose EEG when you need:
- Real-time feedback (neurofeedback, focus tracking, meditation monitoring)
- Millisecond temporal precision (event-related potentials, BCI control)
- Frequency-band analysis (alpha, beta, gamma power tracking)
- Consumer-grade portability and price
- A mature developer ecosystem (SDKs, open-source tools, community)
- Sleep staging and overnight monitoring
Choose fNIRS when you need:
- Better spatial localization of cortical activity
- Robustness to electrical noise (industrial environments, near heavy machinery)
- Measurement during tasks involving significant facial or jaw movement
- Prefrontal cortex monitoring in scenarios where EEG electrode contact is difficult
- Compatibility with metallic implants (metal rules out fMRI, but neither fNIRS nor EEG is affected by it)
Consider combining both when you need:
- Maximum information about both timing and location of brain activity
- Higher BCI classification accuracy than either modality alone
- Research applications where the scientific question demands both temporal and spatial resolution
- Validation of findings across independent measurement modalities
For most people reading this guide, people interested in brain-computer interfaces, neurofeedback, cognitive performance tracking, or building applications that respond to brain states, EEG is the right starting point. It's faster, more affordable, more portable, better supported by software tools, and it measures the thing you most often care about: what your brain is doing right now, this millisecond, not what it was doing 5 seconds ago.
Your Brain Is Already Talking. The Question Is How You Listen.
We started this guide with two physical phenomena: electricity and blood flow. Two signals produced by the same brain, operating on different timescales, visible through different physics, each revealing something the other can't.
EEG has been listening to the brain's electrical voice since Hans Berger put electrodes on a patient's head in 1924. A century of refinement has turned that technique from a laboratory curiosity into something you can wear on your head while you work, meditate, or build software that responds to your thoughts. The Neurosity Crown represents the current peak of that trajectory: 8 channels, 256 Hz sampling, on-device processing, and open APIs that let developers build on top of raw brain data.
fNIRS is younger, still maturing, and already proving its value in research contexts where spatial information matters more than speed. Its future is bright, especially as optical sensor technology continues to shrink.
But here's the thought that sticks with me. A hundred years from now, the idea that we had to choose between measuring electricity and measuring blood flow will seem quaint. Like choosing between a camera and a microphone. Obviously you want both. Obviously the richest picture of the brain comes from combining every modality we have.
We're not there yet. But we're closer than most people realize. And in the meantime, your brain is generating both signals right now, every millisecond of every day. The electricity is there to be read. The question isn't whether the technology exists to capture it. It does. The question is what you'll do once you can see your own mind thinking.