What Developers Are Building with the Neurosity SDK
A JavaScript API That Reads Your Mind (Almost Literally)
Five years ago, if you wanted to write software that responded to brain activity, you had three options. You could spend $20,000 on a research-grade EEG amplifier, learn MATLAB, and pray your impedances stayed below 10 kilohms. You could hack together a consumer headband that gave you one channel of questionable data and a proprietary SDK with no documentation. Or you could give up and go build another CRUD app.
None of those options produced the kind of software that gets interesting.
Then something happened that the developer community is still catching up to. Neurosity shipped the Crown, an 8-channel EEG headset with an open JavaScript SDK. Not a REST API that returns yesterday's averages. A real-time data stream. Raw microvolts at 256Hz. Frequency-band power. Focus and calm scores. Mental commands. All of it, flowing through a WebSocket connection into your Node.js server, your React app, your Python notebook. The same kind of data that used to require institutional review boards and six figures of grant funding, now accessible with npm install @neurosity/sdk.
This isn't a theoretical capability. Developers are building with it right now, and the projects range from practical productivity tools to things that feel like they fell out of a science fiction novel. Here's what's possible, how to build it, and where the most interesting opportunities are hiding.
The Building Blocks: What the SDK Actually Exposes
Before we get into use cases, you need to understand what you're working with. The Neurosity SDK isn't a single data feed. It's a layered system that lets you choose your level of abstraction.
| Data Stream | SDK Method | What You Get | Update Rate |
|---|---|---|---|
| Raw EEG | .brainwaves('raw') | Microvolt readings from 8 channels (CP3, C3, F5, PO3, PO4, F6, C4, CP4) | 256Hz |
| Frequency Bands (FFT) | .brainwaves('powerByBand') | Power in delta, theta, alpha, beta, gamma for each channel | ~4Hz |
| Power Spectral Density | .brainwaves('psd') | Full spectral decomposition across all frequencies | ~4Hz |
| Focus Score | .focus() | Single value from 0 to 1 representing cognitive focus | ~4Hz |
| Calm Score | .calm() | Single value from 0 to 1 representing mental calm | ~4Hz |
| Kinesis | .kinesis() | Mental command events with label and confidence | Event-driven |
| Signal Quality | .signalQuality() | Contact quality per channel (good, ok, bad, no contact) | ~1Hz |
| Accelerometer | .accelerometer() | Head tilt and movement in x, y, z axes | ~50Hz |
Think of these as floors in a building. The ground floor is raw EEG, the electrical signals your neurons produce. Each floor above adds processing and abstraction. The top floor is the focus and calm scores, each a single number between 0 and 1 that distills a complex cognitive state down to something any application can use.
The use case you're building determines which floor you live on.
Building a consumer productivity app? Start with .focus() and .calm(). You'll ship faster and the scores are already tuned to be meaningful. Building a research tool or custom classifier? You want .brainwaves('raw') or .brainwaves('psd'). Building something in between, like a neurofeedback protocol? The frequency bands from .brainwaves('powerByBand') give you the best balance of interpretability and control.
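To make the floors concrete, here's a minimal connection sketch. The device ID and credentials are placeholders, and the payload shapes follow the SDK's documented examples, so verify them against your SDK version. The snippets later in this article build on this connected neurosity instance.

```js
import { Neurosity } from "@neurosity/sdk";

const neurosity = new Neurosity({ deviceId: "YOUR_DEVICE_ID" });

await neurosity.login({
  email: process.env.NEUROSITY_EMAIL,
  password: process.env.NEUROSITY_PASSWORD
});

// Top floor: single tuned scores between 0 and 1.
neurosity.focus().subscribe(({ probability }) => {
  console.log("focus", probability.toFixed(2));
});

// Middle floor: power per frequency band, per channel.
neurosity.brainwaves("powerByBand").subscribe(({ data }) => {
  console.log("alpha per channel", data.alpha);
});
```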
Now, the use cases. I've ordered these from most practical to most experimental, but honestly, some of the "experimental" ones are closer to production than you'd expect.
1. Real-Time Focus Dashboards
Complexity: Low | Primary SDK methods: .focus(), .calm(), .brainwaves('powerByBand')
This is the "hello world" of brain-computer interface development, but don't let that fool you. A well-built focus dashboard is genuinely useful, and it's the use case that makes most developers realize brain data is real and actionable, not just a novelty.
The concept is simple. Subscribe to the focus stream, pipe the scores into a visualization, and watch the line move as your concentration shifts. When you're locked into a coding session, the score climbs. When someone messages you on Slack, you can literally watch it drop. There's something unsettling and thrilling about seeing your own attention quantified in real time.
The interesting engineering starts when you add context. Log focus scores alongside your calendar events, and suddenly you have data showing which meetings drain your cognitive resources and which ones energize you. Overlay focus data on your Git commit history, and you can see which hours produce your best code. Combine focus and calm scores to distinguish "stressed productivity" (high focus, low calm) from "flow state" (high focus, high calm).
Technical approach: A React app subscribing to .focus() and .calm() with a charting library like Recharts or D3.js. Store time-series data in a lightweight database (SQLite for local, Supabase for cloud). Add calendar API integration for context.
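A minimal logging sketch, building on the connected neurosity instance from the setup above. better-sqlite3 is one reasonable choice for the local store; any database works.

```js
import Database from "better-sqlite3";

// One table of timestamped metric samples, ready for charting later.
const db = new Database("brain-metrics.db");
db.exec("CREATE TABLE IF NOT EXISTS metrics (ts INTEGER, metric TEXT, value REAL)");
const insert = db.prepare("INSERT INTO metrics VALUES (?, ?, ?)");

neurosity.focus().subscribe(({ probability }) =>
  insert.run(Date.now(), "focus", probability)
);
neurosity.calm().subscribe(({ probability }) =>
  insert.run(Date.now(), "calm", probability)
);
```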
2. Neurofeedback Training Apps
Complexity: Medium | Primary SDK methods: .brainwaves('powerByBand'), .focus(), .calm()
Neurofeedback is the practice of showing your brain its own activity and letting it learn to self-regulate. It's been used clinically for decades for ADHD, anxiety, and insomnia. The protocols are well-documented. What's been missing is a way for developers to build neurofeedback experiences without a $5,000 clinical system.
The Crown changes that equation completely.
Here's how a basic neurofeedback protocol works in code. You subscribe to .brainwaves('powerByBand') and extract the ratio of beta power to theta power at frontal channels (F5, F6). This beta/theta ratio is one of the most studied markers of sustained attention. When the ratio is above the user's personal baseline, you provide a reward: a sound, a visual change, a point on the scoreboard. When it drops below baseline, the reward stops.
That's it. The brain does the rest. Over repeated sessions, the brain learns to produce more of the pattern that earns rewards. This isn't mysticism. It's operant conditioning applied to cortical oscillations, and there are hundreds of peer-reviewed studies backing it.
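Here's what that protocol might look like in code, using the connected neurosity instance from earlier. The channel indices assume the ordering in the table above (CP3, C3, F5, PO3, PO4, F6, C4, CP4), and the baseline is a placeholder you'd measure per user in a calibration session.

```js
// Reward when frontal beta/theta exceeds a personal baseline.
const F5 = 2, F6 = 5;   // indices per the channel table above
const BASELINE = 1.8;   // placeholder: calibrate per user

const startReward = () => { /* play a tone, brighten a visual, add a point */ };
const stopReward = () => { /* fade the reward out */ };

neurosity.brainwaves("powerByBand").subscribe(({ data }) => {
  const beta = (data.beta[F5] + data.beta[F6]) / 2;
  const theta = (data.theta[F5] + data.theta[F6]) / 2;
  if (beta / theta > BASELINE) {
    startReward();
  } else {
    stopReward();
  }
});
```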
The developer opportunity is in the experience layer. Clinical neurofeedback is boring. You sit in a chair and watch a bar graph. But there's no reason the reward mechanism can't be a beautiful game, an immersive soundscape, or a generative art piece. The protocol is the same. The experience is where a good developer makes all the difference.
There are dozens of established neurofeedback protocols, each targeting different cognitive states. Alpha enhancement for relaxation. SMR (sensorimotor rhythm) training for calm focus. Theta suppression for attention. Alpha/theta crossover training for creativity and insight. Each protocol maps to specific frequency bands at specific channel locations, and the Crown's 8 channels cover the positions these protocols require. A developer who builds a well-designed protocol library with the Neurosity SDK is building something clinicians and consumers both want.
3. Brain-Controlled Interfaces with Kinesis
Complexity: Medium-High | Primary SDK methods: .kinesis(), .signalQuality()
This is the one that makes people's eyes go wide. Kinesis is the Neurosity SDK's mental command system. You train it to recognize specific mental intentions, and then those intentions become events in your code.
The training process works like this: you put on the Crown, open the training interface, and repeatedly imagine a specific action (pushing something left, for example) while the system records your brain's electrical pattern. After enough training samples, the classifier learns to distinguish that pattern from your baseline. From then on, when you imagine that same action, the SDK fires a kinesis event with a label and a confidence score.
Here's where it gets interesting. Kinesis events are just events. They plug into the same event-driven architecture you use for clicks, keypresses, and touch gestures. Want a drone to turn left when you think "left"? Subscribe to .kinesis(), filter for the label, and send the command over the drone's control API. Want a web page to scroll when you imagine pushing something forward? Same pattern. Want a wheelchair to respond to thought? Same architecture, higher stakes, bigger impact.
Technical approach: Train 2-3 mental commands using the Neurosity app. Subscribe to .kinesis() events in your code. Map commands to actions. Always check .signalQuality() first, because kinesis accuracy depends heavily on good electrode contact.
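A sketch of that pattern, with the caveat that the exact signal-quality status strings and the kinesis payload can vary by SDK version. The command label must match one you've already trained in the Neurosity app.

```js
let contactOk = false;

// Gate commands on electrode contact. Adjust the check to match what
// signalQuality actually emits in your SDK version.
neurosity.signalQuality().subscribe((channels) => {
  contactOk = channels.every(
    (ch) => ch.status !== "noContact" && ch.status !== "bad"
  );
});

// "leftArm" must match a command trained in the Neurosity app.
neurosity.kinesis("leftArm").subscribe((intent) => {
  if (!contactOk) return;
  console.log("mental command recognized", intent);
  // e.g. send a turn-left command to your drone's control API here
});
```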
The "I had no idea" moment: Kinesis accuracy improves with each user over time. Your brain gets better at producing distinctive patterns, and the classifier gets better at recognizing them. Developers who've used the system for months report that mental commands feel almost as natural as pressing a key. Not instant, but reliable enough for real applications.
4. AI Integration Through MCP
Complexity: Low | Primary integration: Neurosity MCP Server
This is the use case that didn't exist two years ago and might be the most important one on this list.
The Neurosity MCP (Model Context Protocol) server connects your Crown to AI assistants like Claude and ChatGPT. Your real-time brain state (focus scores, calm scores, frequency-band power) flows directly into the AI's context. The AI doesn't just know what you're typing. It knows how your brain is doing while you type it.
Think about what this means for developer tools. You're pair programming with Claude, working through a complex system design. Your focus score is high, your beta/gamma ratio suggests deep engagement, so Claude keeps the conversation technical and detailed. Then your focus drops. Your theta power increases, which is a marker of cognitive fatigue. Claude notices and shifts: simpler explanations, shorter code blocks, maybe a suggestion to take a break and come back to the hard part later.
This isn't Claude being polite. It's Claude adapting to measurable changes in your cognitive state. The same input (your question) produces a different output because the AI has access to a data channel that didn't exist before: your brain.
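As a rough illustration of the plumbing (a hypothetical bridge, not the official MCP server), you could hold the latest brain state in memory and expose it over local HTTP for an AI tool or agent to poll:

```js
import { createServer } from "node:http";

const state = { focus: null, calm: null, updatedAt: null };

neurosity.focus().subscribe(({ probability }) => {
  state.focus = probability;
  state.updatedAt = Date.now();
});
neurosity.calm().subscribe(({ probability }) => {
  state.calm = probability;
  state.updatedAt = Date.now();
});

// Any agent, script, or tool can now GET http://localhost:3333
createServer((req, res) => {
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify(state));
}).listen(3333);
```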
Other AI integration patterns: Feed brain data into a custom GPT for personalized cognitive coaching. Use focus scores as a feature in ML models that predict task completion time. Build a RAG system that weights document relevance by your cognitive state when you first read them.
5. Accessibility Tools
Complexity: Medium | Primary SDK methods: .kinesis(), .focus(), .accelerometer()
Brain-computer interfaces were invented for accessibility. The first BCI systems in the 1970s were designed to help people with locked-in syndrome communicate. That mission hasn't changed, but the technology has gotten dramatically more accessible to developers.
With the Neurosity SDK, you can build communication interfaces for people who can't use keyboards or touchscreens. A kinesis-driven speller that lets someone type by thinking. A focus-based yes/no system where sustained attention means "yes" and relaxation means "no." An accelerometer-based head mouse that works alongside kinesis for click events. These are applications that directly improve quality of life.
Technical approach: For communication tools, combine .kinesis() for selection events with .accelerometer() for cursor movement. Use .signalQuality() to validate that the device is properly seated before relying on signals. Build in calibration flows that adapt to each user's unique neural patterns.
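A head-mouse sketch under those assumptions. The accelerometer field names (pitch, roll) and the moveCursorBy/click helpers are placeholders; you'd wire the helpers to your platform's pointer API and verify the payload shape against your SDK version.

```js
const SENSITIVITY = 4; // pixels of cursor travel per degree of tilt
const DEADZONE = 2;    // degrees to ignore, rejecting tremor and noise

// Placeholders: wire these to your platform's pointer API (robotjs, etc.)
const moveCursorBy = (dx, dy) => { /* move the system cursor */ };
const click = () => { /* synthesize a click event */ };

neurosity.accelerometer().subscribe(({ pitch, roll }) => {
  const dx = Math.abs(roll) > DEADZONE ? roll * SENSITIVITY : 0;
  const dy = Math.abs(pitch) > DEADZONE ? pitch * SENSITIVITY : 0;
  if (dx || dy) moveCursorBy(dx, dy);
});

neurosity.kinesis("push").subscribe(() => click()); // trained "click" command
```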
The accessibility use case is particularly interesting for developers because the technical requirements (reliable classification, low latency, clear feedback) push you to write better BCI code than any other application domain.
6. Generative Music from Brain State
Complexity: Medium-High | Primary SDK methods: .brainwaves('powerByBand'), .calm(), .brainwaves('psd')
Your brain produces oscillations. Music is oscillations. The mapping between them is more natural than it sounds.
The simplest version: subscribe to .brainwaves('powerByBand') and map each frequency band to a musical parameter. Alpha power (8-13Hz, dominant during relaxed wakefulness) controls a pad synthesizer's filter cutoff. Theta power (4-8Hz, associated with daydreaming and creativity) triggers melodic phrases. Gamma bursts (30Hz+, linked to active information processing) add percussive hits. Your brain state becomes a control surface for sound.
But the more sophisticated approach involves the PSD data. Instead of 5 frequency bands, you get the full spectral decomposition, dozens of frequency bins that can drive dozens of musical parameters simultaneously. Map this to a granular synthesizer, and the texture of the sound literally follows the texture of your brain's electrical activity. Composers have described the results as "hearing your own thinking," which is poetically accurate and also technically precise.
Integration paths: Node.js SDK to MIDI (via midi npm package) to Ableton Live. WebSocket to Max/MSP or Pure Data. Python SDK to SuperCollider via OSC. Web Audio API for browser-based sonification.
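For example, a minimal Node-to-MIDI sketch using the midi npm package, mapping posterior alpha power to a MIDI CC (74 is conventionally filter cutoff). The scaling constant is a placeholder you'd tune to your own signal range.

```js
import midi from "midi";

const output = new midi.Output();
output.openVirtualPort("Crown Brainwaves"); // shows up as a MIDI device

const PO3 = 3, PO4 = 4;       // indices per the channel table above
const CC_FILTER_CUTOFF = 74;  // conventional CC for filter cutoff

neurosity.brainwaves("powerByBand").subscribe(({ data }) => {
  const alpha = (data.alpha[PO3] + data.alpha[PO4]) / 2;
  // Placeholder scaling: squeeze band power into the 0-127 MIDI range.
  const value = Math.min(127, Math.round(alpha * 10));
  output.sendMessage([0xb0, CC_FILTER_CUTOFF, value]); // CC on channel 1
});
```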
7. Meditation and Mindfulness-Based Stress Reduction Apps
Complexity: Low-Medium | Primary SDK methods: .calm(), .focus(), .brainwaves('powerByBand')
The meditation app market is enormous and growing, but almost every meditation app has the same fundamental problem: it can't tell whether you're actually meditating. You press play, close your eyes, and the app assumes you're following along. You might be. You might be mentally composing a grocery list. The app has no idea.
The Crown changes this. The .calm() score tracks your mental state in real time. Alpha power at posterior channels (PO3, PO4) increases during genuine meditative states. The data doesn't lie, and it gives your meditation app something no audio-only app can offer: actual feedback.
What developers are building: Guided sessions that adapt pacing based on calm scores. Meditation timers that don't end until you've hit a target calm state (not just sat there for 10 minutes). Progress tracking that shows real neurological changes over weeks and months of practice. Gamified meditation where your brain data drives the experience, like a garden that grows when you're genuinely calm and wilts when your mind wanders.
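The adaptive timer is a few lines. A sketch, with the threshold and duration as placeholders to tune per user:

```js
const TARGET_CALM = 0.6;      // placeholder threshold
const REQUIRED_MS = 60_000;   // must hold the state for a full minute
let calmSince = null;

const session = neurosity.calm().subscribe(({ probability }) => {
  if (probability >= TARGET_CALM) {
    calmSince = calmSince ?? Date.now();
    if (Date.now() - calmSince >= REQUIRED_MS) {
      console.log("Session complete: sustained calm reached.");
      session.unsubscribe();
    }
  } else {
    calmSince = null; // calm dipped; restart the sustained window
  }
});
```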
8. Research and Experiment Tools
Complexity: Medium-High | Primary SDK methods: .brainwaves('raw'), .brainwaves('psd'), .signalQuality()
Neuroscience researchers are some of the Crown's most active SDK users, and for good reason. The device costs a fraction of research-grade systems, ships with open APIs instead of proprietary lock-in, and integrates with the tools researchers already use through BrainFlow and Lab Streaming Layer (LSL).
What's possible with the SDK: Event-related potential (ERP) studies using raw EEG and precise timestamping. Cognitive load measurement during task performance. Sleep staging with frequency-band analysis. Longitudinal studies tracking brain changes over weeks or months, something that's impractical with lab-only equipment but trivial when participants can wear the Crown at home.
Integration with research tools: BrainFlow gives you access to Crown data in Python with built-in signal processing (bandpass filters, wavelet denoising, artifact removal). LSL provides sub-millisecond time synchronization across multiple data streams. MNE-Python, the standard library for EEG analysis, works with both.
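A minimal raw-EEG logger sketch for offline analysis, assuming the raw payload carries one array of samples per channel under data plus timing under info (verify against your SDK version). The resulting CSV can be loaded into MNE-Python or any analysis pipeline.

```js
import { appendFileSync } from "node:fs";

neurosity.brainwaves("raw").subscribe(({ data, info }) => {
  const { startTime, samplingRate } = info;
  const samplesPerChannel = data[0].length;
  for (let i = 0; i < samplesPerChannel; i++) {
    // Reconstruct a per-sample timestamp from the epoch start time.
    const ts = startTime + (i / samplingRate) * 1000;
    const row = [ts, ...data.map((channel) => channel[i])].join(",");
    appendFileSync("raw-eeg.csv", row + "\n"); // fine for a sketch; buffer in production
  }
});
```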
Most EEG research suffers from a fundamental problem: the lab environment is nothing like real life. Participants sit in a shielded room, stare at a monitor, and press buttons. Their brainwaves reflect that artificial context, not their natural cognitive patterns. The Crown lets participants collect data at their desk, in their office, during their actual work. For ecological validity research, this is not a minor improvement. It's a different category of study design.
9. Smart Home and IoT Integration
Complexity: Medium | Primary SDK methods: .focus(), .calm(), Node.js SDK
Your brain state is a signal. Your smart home is a system that responds to signals. Connecting them is surprisingly straightforward.
The pattern looks like this: run the Neurosity SDK in a Node.js process, subscribe to .focus() or .calm(), define thresholds, and fire webhooks or MQTT messages when those thresholds are crossed. On the receiving end, Home Assistant, IFTTT, or Apple HomeKit (via homebridge) picks up the signal and executes the action.
Practical examples that developers have built: Desk lights that shift to warm tones when focus exceeds 0.7. A "do not disturb" indicator outside the office door that activates during deep focus. Ambient sound that starts automatically when calm drops below 0.3 (stress detected). A coffee machine trigger that fires when your calm score spikes after a long focus block (the "I just finished deep work" pattern).
The key design insight: brain-state automation works best with hysteresis. Don't trigger on single readings. Wait for a sustained state (30+ seconds above or below a threshold) before firing actions. Brains are noisy. Smart thresholds prevent your lights from flickering.
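Here's the hysteresis pattern in a sketch. The threshold, sustain window, and webhook URL are placeholders for your own setup:

```js
const THRESHOLD = 0.7;
const SUSTAIN_MS = 30_000; // require 30 sustained seconds above threshold
let aboveSince = null;
let fired = false;

neurosity.focus().subscribe(({ probability }) => {
  if (probability >= THRESHOLD) {
    aboveSince = aboveSince ?? Date.now();
    if (!fired && Date.now() - aboveSince >= SUSTAIN_MS) {
      fired = true;
      fetch("http://homeassistant.local:8123/api/webhook/deep-focus", {
        method: "POST"
      }).catch(console.error);
    }
  } else {
    aboveSince = null;
    fired = false; // re-arm once focus drops back down
  }
});
```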
10. Gaming and Interactive Art
Complexity: High | Primary SDK methods: .kinesis(), .focus(), .calm(), .brainwaves('powerByBand')
Brain-controlled gaming sounds like a gimmick until you play a game that does it well. The difference between a gimmick and a genuine mechanic is whether the brain data adds something a controller can't.
Consider: a horror game that reads your stress markers (elevated beta, suppressed alpha) and dials up the intensity when you're genuinely scared. A controller can't do that, because the game can't tell from your button inputs whether you're terrified or bored. A puzzle game that adjusts difficulty based on cognitive load (the theta/beta ratio), getting harder when you're breezing through and easier when you're struggling, without you ever opening a settings menu. A cooperative game where two players' focus scores must synchronize to unlock abilities, creating a genuine shared mental state as a game mechanic.
Integration approach: The Crown streams data via WebSocket. Unity reads WebSocket data through libraries like NativeWebSocket. Map brain metrics to game variables in your update loop. The bridge between brain and game engine takes an afternoon to build. The game design is the hard part, and the interesting part.
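The bridge itself can be as simple as a Node process re-broadcasting metrics over a local WebSocket, sketched here with the ws npm package:

```js
import { WebSocket, WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });

// Send the latest metric to every connected game client.
const broadcast = (payload) => {
  const message = JSON.stringify(payload);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(message);
  }
};

neurosity.focus().subscribe(({ probability }) =>
  broadcast({ metric: "focus", value: probability })
);
neurosity.calm().subscribe(({ probability }) =>
  broadcast({ metric: "calm", value: probability })
);
```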
For interactive art: The PSD data is particularly rich for generative visuals. Each frequency bin becomes a control parameter for particle systems, color palettes, geometry, and physics. Installations where the audience's brain activity shapes the visual environment are technically straightforward with the Crown SDK. The artistic challenge is making the mapping legible and beautiful.
11. Health and Wellness Monitoring Dashboards
Complexity: Medium | Primary SDK methods: .brainwaves('powerByBand'), .focus(), .calm(), .signalQuality()
This is not clinical diagnosis, and the Crown is not a medical device. But there's a vast, underserved space between "FDA-cleared diagnostic tool" and "useless novelty." The Crown sits right in the productive middle.
Developers are building personal wellness dashboards that track brain metrics over time. Daily trends in alpha power, which correlates with relaxed wakefulness. Weekly patterns in focus scores, showing which days and times produce the best cognitive performance. Monthly baselines that reveal whether lifestyle changes (sleep, exercise, meditation) are actually affecting brain function, not just how you feel about them, but what the electricity in your head is actually doing.
Technical approach: Node.js SDK running as a background service, logging to a time-series database (InfluxDB or TimescaleDB). A Grafana dashboard or custom React frontend for visualization. Anomaly detection using simple statistical methods (z-scores against personal baselines) to flag unusual patterns.
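A sketch of the anomaly-detection piece: a rolling z-score against your own recent baseline, with the window size and cutoff as placeholders to tune.

```js
const WINDOW = 1000; // recent history, sized to the stream's update rate
const samples = [];

neurosity.focus().subscribe(({ probability }) => {
  samples.push(probability);
  if (samples.length > WINDOW) samples.shift();
  if (samples.length < 30) return; // wait for a minimal baseline

  const mean = samples.reduce((a, b) => a + b) / samples.length;
  const sd = Math.sqrt(
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length
  );
  if (sd === 0) return; // flat baseline; z-score undefined

  const z = (probability - mean) / sd;
  if (Math.abs(z) > 3) {
    console.log(`Unusual focus reading: z = ${z.toFixed(2)}`);
  }
});
```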
The long-term data is where it gets fascinating. A single focus score reading tells you very little. Six months of daily readings, correlated with sleep data, exercise logs, and work patterns? That's a personal neuroscience dataset that nobody had access to before the Crown existed.
Every use case on this list combines three elements: a data stream from the SDK, a processing layer, and an output. The magic is in choosing the right combination. High-level scores (.focus(), .calm()) pair best with consumer-facing apps where simplicity matters. Frequency-band data (.brainwaves('powerByBand')) gives you enough control for neurofeedback protocols and creative applications. Raw EEG and PSD are for research, custom classifiers, and applications where you need the full signal. The SDK gives you all of them simultaneously. You pick the level that fits your use case.
The Use Case That Doesn't Have a Name Yet
Here's what's quietly remarkable about this moment. Every use case I've described fits into a category that already exists. Dashboards. Neurofeedback. Accessibility. Music. Gaming. Health tracking. We're taking existing software categories and adding a neural input channel.
But the most important application of the Neurosity SDK probably doesn't fit into any existing category. It's the thing that only makes sense when real-time brain data is a given, the way ride-sharing only made sense when GPS-equipped phones were a given. The way Stories only made sense when cameras became a default input device.
What's the software category that only exists because a JavaScript API can read brainwaves?
Nobody knows yet. That's not uncertainty. That's opportunity with a 256Hz sample rate.
The SDK is open. The data streams are documented. The community is building. And the history of platform shifts tells us that the most valuable applications are never obvious from the specifications. They're obvious only in retrospect, after some developer connects two capabilities that nobody thought to connect, and suddenly everyone wonders how they ever lived without it.
Your brain produces about 2,048 data points per second across 8 channels. Each one is a piece of information that no software could access before. The question isn't whether those data points are valuable. The question is what you build when you can finally read them.