How Motor Imagery Lets You Control Technology With Imagined Movement
Close Your Eyes and Imagine Squeezing Your Right Hand. You Just Sent a Signal.
Here is something that should genuinely blow your mind. Right now, without moving a single muscle, you can change the electrical output of a specific region of your brain, on demand, in a way that a computer can detect and act on.
Don't take my word for it. Try this. Close your eyes and vividly imagine squeezing a tennis ball in your right hand. Feel the resistance. Feel your fingers curling. Feel the seams of the ball pressing into your palm.
You didn't move. Nothing in the external world changed. But inside your skull, something very specific just happened. A cluster of neurons in your left motor cortex, the ones that would normally orchestrate the actual squeezing of your right hand, fired in a pattern nearly identical to the one they produce during real movement. And that firing was strong enough to alter the electrical field measurable on the surface of your scalp.
This phenomenon is called motor imagery, and it is the foundation of one of the most important paradigms in brain-computer interface research. Not because it is the newest or flashiest approach. But because it is the most self-generated, the most trainable, and the most natural way for a human brain to talk to a machine.
You don't need flashing screens. You don't need external stimuli. You just need to think about moving.
Why Your Brain Can't Tell the Difference (Almost)
The fact that imagined movement activates the motor cortex is not a curiosity or a side effect. It reflects something fundamental about how your brain plans and executes actions.
When neuroscientists mapped the motor cortex using fMRI in the early 2000s, they discovered that the overlap between real and imagined movement activation is about 60-80%, depending on the individual and the task. The primary motor cortex (M1), the premotor cortex, and the supplementary motor area all light up during both actual and imagined movement. The main difference is that during imagery, the signal to your spinal cord gets inhibited at the last stage, so the command fires but the body doesn't move.
Think of it like writing an email and hovering over the send button. The email is composed. The recipient is selected. Your finger is on the button. The only difference between "sent" and "imagined sending" is that final click. In motor imagery, the brain goes through every step of movement planning except the final execution command. And every step before that last one produces measurable electrical activity.
This is why motor imagery works for BCIs. The brain doesn't produce some vague, fuzzy echo of movement when you imagine it. It produces a structured, patterned electrical event that follows the same spatial rules as real movement. Imagine your right hand, and the left hemisphere activates. Imagine your left hand, and the right hemisphere activates. Imagine your foot, and the midline vertex area activates.
These spatial patterns are what make motor imagery classifiable. And the specific electrical signal that BCIs latch onto has a name: event-related desynchronization.
The Mu Rhythm: Your Motor Cortex's Idle Signal
To understand event-related desynchronization (ERD), you first need to understand what it is desynchronizing from.
When your motor cortex is at rest, not planning or executing any movement, its neurons tend to fire in a synchronized, rhythmic pattern at about 8-12 Hz. This is the mu rhythm, sometimes called the sensorimotor rhythm or the Rolandic rhythm (named after the Rolandic fissure, the central sulcus that marks the posterior border of the motor cortex).
The mu rhythm is essentially your motor cortex humming to itself. It is the default idle state. Millions of motor neurons oscillating together in a steady 8-12 Hz pulse, like an orchestra tuning up before a performance, everyone playing roughly the same note.
When you decide to move, or even just imagine moving, something dramatic happens. The synchronized humming breaks apart. The mu rhythm amplitude drops, sometimes by 30-50%, as the neurons transition from their idle synchrony into the complex, task-specific firing patterns needed for movement planning. Researchers call this event-related desynchronization because the previously synchronized oscillation desynchronizes in response to an event (the imagined or real movement).
Here is the critical detail: ERD is lateralized. If you imagine moving your right hand, mu desynchronization is strongest at electrode position C3, which sits directly over the left motor cortex hand area. If you imagine moving your left hand, the desynchronization shifts to C4 over the right hemisphere. Foot imagery produces desynchronization at Cz, the midline electrode over the foot region of the motor cortex.
The lateralized nature of mu desynchronization is what makes binary motor imagery classification possible. A BCI doesn't need to "understand" what you're imagining. It just needs to detect whether mu power decreased more on the left or the right. That asymmetry is the signal. It's like a light switch: the system doesn't care why the light went off, only which side of the room went dark.
This is not subtle. In a well-trained user, the difference in mu power between left-hand and right-hand imagery can be statistically significant within a single two-second trial. That is fast enough for real-time control.
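To make that concrete, here is a minimal sketch of how ERD is usually quantified: compare mu-band power during an imagery window against a resting baseline, separately at C3 and C4. The sampling rate, window length, and epoch arrays are illustrative assumptions, not the output of any particular device.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz)

def mu_power(epoch, fs=FS, band=(8, 12)):
    """Mean power spectral density in the mu band for one channel."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(baseline, imagery, fs=FS):
    """ERD as a percent drop from baseline: positive means power fell
    during imagery (desynchronization), negative means it rose (ERS)."""
    p_base = mu_power(baseline, fs)
    p_task = mu_power(imagery, fs)
    return 100.0 * (p_base - p_task) / p_base

# Hypothetical 2-second, single-channel epochs from C3 and C4:
# erd_c3 = erd_percent(baseline_c3, imagery_c3)
# erd_c4 = erd_percent(baseline_c4, imagery_c4)
# Right-hand imagery should push erd_c3 well above erd_c4.
```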
What a Motor Imagery BCI Actually Measures
Now that you know what ERD looks like, here is how a BCI turns it into a command.
The system records EEG from electrodes positioned over the motor cortex. At minimum, you need C3 and C4, the left and right motor cortex hand areas. More electrodes give you more spatial resolution. The Neurosity Crown's 8-channel layout includes both C3 and C4 along with CP3 and CP4 (slightly behind the motor cortex, over the somatosensory area), giving it coverage of the full sensorimotor strip.
From the raw EEG, the system extracts power in the mu band (8-12 Hz) and often the beta band (13-30 Hz) as well, since beta also shows task-related desynchronization during motor imagery. The power values from each channel become the feature vector: a set of numbers that describes the current state of your motor cortex.
A machine learning classifier, trained on examples of your specific brain patterns, takes that feature vector and outputs a prediction. "Left hand." "Right hand." "Neither." That prediction becomes a command.
The whole pipeline from thought to output runs in well under a second in modern systems. Some research platforms achieve classification latencies of 250 milliseconds or less.
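Here is what that pipeline can look like in code: a minimal Python sketch using SciPy for band-power features and scikit-learn's linear discriminant analysis as the classifier. The channel layout, epoch shapes, and choice of LDA are assumptions for illustration; production systems differ in the details.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # assumed sampling rate (Hz)
BANDS = {"mu": (8, 12), "beta": (13, 30)}

def band_power_features(epoch, fs=FS):
    """Feature vector: log band power per channel per band.
    epoch has shape (n_channels, n_samples), e.g. C3 and C4 rows."""
    feats = []
    for channel in epoch:
        freqs, psd = welch(channel, fs=fs, nperseg=fs)
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs <= hi)
            feats.append(np.log(psd[mask].mean()))
    return np.array(feats)

def train_classifier(epochs, labels):
    """epochs: (n_trials, n_channels, n_samples); labels: 0=left, 1=right."""
    X = np.array([band_power_features(e) for e in epochs])
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf

def classify(clf, epoch):
    """One prediction per epoch: this becomes the command."""
    return clf.predict(band_power_features(epoch)[None, :])[0]
```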
| Signal Feature | What It Reflects | Electrode Location |
|---|---|---|
| Mu ERD (8-12 Hz) | Motor cortex activation for imagined or real movement | C3 (left hand area), C4 (right hand area), Cz (foot area) |
| Beta ERD (13-30 Hz) | Motor planning and imagery engagement | C3, C4, and surrounding central electrodes |
| Beta ERS (rebound) | Post-imagery motor cortex recovery, appears 500-1000 ms after imagery stops | Same as beta ERD, shifted in time |
| Laterality index | Asymmetry of mu/beta power between hemispheres | Computed from C3 vs C4 power ratio |
| Common Spatial Patterns (CSP) | Optimized spatial filters maximizing class separability | Derived from all available motor cortex channels |
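The last row of that table, CSP, deserves a closer look, because it is the workhorse of classical motor imagery decoding. CSP finds spatial filters, weighted combinations of channels, whose output variance is as large as possible for one class and as small as possible for the other. Here is a compact sketch, assuming trials arrive as NumPy arrays:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common Spatial Pattern filters for two classes.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        # Trace-normalized covariance, averaged across trials.
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem ca w = lambda (ca + cb) w: eigenvectors
    # at the extremes maximize variance for one class, minimize the other.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T  # shape: (2 * n_pairs, n_channels)

def csp_features(trial, filters):
    """Log-variance of the spatially filtered signal: the standard CSP feature."""
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

The log-variance features these filters produce typically feed the same kind of linear classifier shown in the pipeline sketch above.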
The Training Loop: Teaching Both Sides of the Interface
Here is something about motor imagery BCIs that surprises most people: you are not just training the computer. The computer is also training you.
Motor imagery BCI performance depends on a feedback loop between two adaptive systems. On one side, the machine learning classifier is learning your brain patterns. On the other side, your brain is learning to produce clearer, more consistent patterns. This co-adaptation is one of the most fascinating aspects of BCI research, and it is the reason motor imagery control improves so dramatically with practice.
How a typical training session works
During calibration, the system presents you with a series of cues. "Imagine moving your left hand." "Imagine moving your right hand." "Relax." Each cue lasts a few seconds, and the system records the EEG throughout. After collecting enough examples (usually 20-40 per class), the classifier trains on the labeled data.
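In code, that calibration flow is simple enough to sketch. Assume a hypothetical `record_epoch(cue)` that returns one EEG epoch per cue and a feature extractor like the band-power function from the earlier pipeline sketch; both are placeholders, not a real acquisition API.

```python
import random
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

CUES = {"left": 0, "right": 1}
TRIALS_PER_CLASS = 30  # within the typical 20-40 range

def run_calibration(record_epoch, extract_features):
    """Collect labeled epochs, then train and evaluate a classifier."""
    cues = list(CUES) * TRIALS_PER_CLASS
    random.shuffle(cues)  # randomized cue order avoids ordering effects
    X, y = [], []
    for cue in cues:
        print(f"Imagine moving your {cue} hand...")
        epoch = record_epoch(cue)          # a few seconds of EEG
        X.append(extract_features(epoch))  # e.g. log band powers
        y.append(CUES[cue])
    X, y = np.array(X), np.array(y)
    clf = LinearDiscriminantAnalysis()
    # Cross-validated accuracy previews online performance before
    # committing to the trained model.
    accuracy = cross_val_score(clf, X, y, cv=5).mean()
    clf.fit(X, y)
    return clf, accuracy
```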
In early sessions, the classifier might achieve only 60-65% accuracy, not far above the 50% chance level for a two-class problem. That sounds discouraging. But something interesting happens over the next several sessions.
Your brain starts to figure out what the computer is looking for. Not consciously, not through any deliberate effort, but through the feedback loop. When you produce a clear motor imagery pattern and the system correctly classifies it, you get visual or auditory feedback. A cursor moves in the right direction. A bar fills up. A sound plays. Your brain registers the success and subtly reinforces the neural pattern that produced it. This is operant conditioning at the neural level.
After 5-10 sessions, many users see accuracy jump from 65% to 80% or higher. The best performers in BCI competitions have achieved binary classification accuracies above 95%, though these are outliers after extensive training.
About 15-20% of people initially cannot produce classifiable motor imagery signals. The BCI research community calls this "BCI illiteracy," though the term is somewhat misleading, because it implies a permanent condition.
In reality, most "BCI-illiterate" users simply need more time, better feedback, or a different training approach. A 2019 meta-analysis in the Journal of Neural Engineering found that extended training with adaptive classifiers reduced the illiteracy rate to under 10%. Some researchers have had success with strategies like starting with actual movement (which produces stronger signals), then gradually transitioning to pure imagery, or using kinesthetic imagery instruction ("feel the movement in your muscles") rather than visual imagery ("picture your hand moving").
The takeaway: if motor imagery doesn't click for you immediately, that doesn't mean it won't work. Your brain is plastic. Give it feedback, give it time, and the signal will emerge.
What makes a good motor imagery strategy
Not all imagined movements are created equal. Research consistently shows that kinesthetic imagery, actually feeling the sensation of movement rather than just visualizing it, produces stronger and more classifiable ERD. If you're imagining squeezing your hand, don't just picture your hand closing. Feel the effort in your forearm. Feel the pressure in your fingertips.
The choice of movement also matters. Hand opening and closing produces strong ERD over the hand knob area of M1. Foot dorsiflexion (pulling your toes up toward your shin) produces midline ERD that is spatially distinct from hand imagery. Tongue movement produces ERD in the lateral portion of the motor cortex. Each of these is neuroanatomically separated enough that a classifier with decent spatial resolution can distinguish them.
How Accurate Can Motor Imagery BCIs Get?
Let's be honest about the numbers. Motor imagery is not the most accurate BCI paradigm. P300 spellers and SSVEP systems regularly achieve classification rates above 95%, because they rely on stimulus-evoked responses that are strong and stereotyped. Motor imagery is self-generated, variable, and requires the user to actively maintain a mental state. That makes it harder to classify.
But "harder" does not mean "bad." Here is where the field stands today:
| Paradigm | Typical Accuracy | Advantages | Limitations |
|---|---|---|---|
| Motor Imagery (2-class) | 70-85% | No external stimulus needed, self-paced, natural control metaphor | Requires training, user-dependent performance, lower accuracy than evoked paradigms |
| Motor Imagery (4-class) | 55-75% | More commands available, richer interaction | Lower accuracy with more classes, needs more channels |
| P300 Speller | 90-98% | High accuracy, minimal training needed | Requires visual attention to flashing grid, not self-paced |
| SSVEP | 90-99% | Very high accuracy, fast command selection | Requires external flickering stimuli, can cause visual fatigue |
| Hybrid (MI + P300) | 85-95% | Combines advantages of both paradigms | More complex setup, higher cognitive load |
The accuracy gap is real, but motor imagery has a massive advantage that the numbers don't capture: it is asynchronous. You don't need to stare at a screen. You don't need to wait for a flash. You initiate the command whenever you want, from any mental state, in any context. That freedom makes motor imagery the most practical paradigm for everyday BCI use, even if its raw accuracy lags behind stimulus-driven approaches.
And the gap is closing. Modern deep learning architectures, particularly convolutional neural networks designed for EEG data, have pushed motor imagery classification accuracy into ranges that were unthinkable a decade ago. Transfer learning techniques allow models pre-trained on large datasets to adapt to new users with minimal calibration data. The trend line is clear and heading upward.
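For flavor, here is a minimal EEGNet-inspired model in PyTorch: a temporal convolution that learns frequency-selective filters, followed by a spatial convolution across electrodes that plays the role CSP plays in classical pipelines. The layer sizes are illustrative assumptions, not a reproduction of any published architecture.

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Minimal EEGNet-style CNN for 2-class motor imagery.
    Input shape: (batch, 1, n_channels, n_samples),
    e.g. 8 channels x 2 seconds at 256 Hz."""

    def __init__(self, n_channels=8, n_samples=512, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: learns band-pass-like filters.
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            # Depthwise spatial convolution over all electrodes:
            # learned spatial filters, analogous to CSP.
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n_feats = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feats, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```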

From Lab to Living Room: The Crown's Kinesis Feature
For decades, motor imagery BCIs lived almost exclusively in research labs. The setups required 64-channel caps, conductive gel, a quiet shielded room, and a PhD student to run the software. The idea of someone casually using motor imagery to control their computer at home was, to put it politely, ambitious.
The Neurosity Crown changed that equation. And it did it by solving three problems that had kept motor imagery out of everyday life.
Problem 1: You can't ask people to wear a gel cap
The Crown uses dry, flexible rubber electrodes. No preparation. No gel. No cleanup. You put it on like a pair of headphones. The electrodes at C3, C4, CP3, and CP4 sit directly over the sensorimotor cortex, which is exactly where you need coverage for motor imagery detection. With 8 channels in total, the device also captures frontal and parietal activity, giving the classifier additional spatial context that improves discrimination accuracy.
Problem 2: You can't ship a research lab's compute stack
The Crown runs an N3 chipset that handles signal processing on the device itself. Filtering, artifact rejection, feature extraction, and classification all happen on your head, not in the cloud, not on a laptop. This means two things. First, latency stays low because the data doesn't have to travel anywhere. Second, your raw brain data never leaves the device unless you explicitly grant access through the SDK. For something as personal as motor cortex activity, that privacy architecture matters.
Problem 3: Training needs to be quick and painless
Research BCI calibration sessions can take 30-60 minutes of sustained mental effort. Nobody is going to do that before checking their email. The Crown's kinesis training is designed to be short, typically under 15 minutes, and iterative. You do a brief calibration. The model trains. You use it. The model refines itself based on your ongoing use. Over multiple sessions, the classifier adapts to your evolving brain patterns, and your brain adapts to the feedback. The co-adaptation loop kicks in without you having to think about it.
What you can actually do with kinesis
Kinesis fires a digital event when it detects your trained motor imagery pattern. That event is accessible through the Neurosity JavaScript SDK and the Python SDK. Which means anything a computer can do in response to an event, kinesis can trigger.
Developers have built applications that use kinesis to control smart home devices, navigate presentations, trigger keyboard shortcuts, send messages, and interact with AI assistants. Because the SDK is open, the command layer is entirely customizable. The Crown handles the brain signal classification. You decide what happens when the signal fires.
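As a concrete sketch, subscribing to a kinesis event from the Python SDK looks roughly like the following. The method names follow the SDK's documented callback pattern, but treat the specifics, including the "leftArm" label and the credential handling, as assumptions to verify against the current Neurosity docs.

```python
import os
from neurosity import NeurositySDK

# Credentials via environment variables (an assumption; use your own scheme).
neurosity = NeurositySDK({"device_id": os.environ["NEUROSITY_DEVICE_ID"]})
neurosity.login({
    "email": os.environ["NEUROSITY_EMAIL"],
    "password": os.environ["NEUROSITY_PASSWORD"],
})

def on_kinesis(event):
    # Fires when the Crown detects your trained motor imagery pattern.
    # What happens next is up to you: toggle a light, send a keystroke.
    print("Kinesis event:", event)

# "leftArm" is one example of a trained thought label.
unsubscribe = neurosity.kinesis("leftArm", on_kinesis)
```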
Here is what makes this genuinely remarkable. The underlying science is the same mu desynchronization that researchers discovered in the 1990s. The EEG signal hasn't changed. What changed is that the entire pipeline, from electrode to classified output, now fits in a device that weighs 228 grams and charges in 30 minutes.
What Is the Neuroscience of Getting Better at Motor Imagery?
One of the most fascinating aspects of motor imagery training is what it does to your brain over time. This is not just about the classifier getting better at reading you. Your brain physically changes.
A 2013 study in NeuroImage tracked motor imagery BCI users over several weeks of training and found measurable increases in gray matter volume in the premotor and parietal cortices. The neural pathways involved in motor imagery became more efficient, producing stronger and more consistent ERD patterns. This is neuroplasticity in action, your brain literally rewiring itself to become a better communicator with the machine.
Other studies have found that experienced motor imagery users show more focal, less diffuse ERD patterns. Beginners tend to produce broad, sloppy desynchronization that spreads across the motor cortex. Experts produce tight, targeted ERD confined to the specific cortical area corresponding to the imagined movement. This is the neural equivalent of going from shouting in a crowded room to whispering directly into someone's ear.
The practical implication is that motor imagery BCI performance has a much higher ceiling than most people realize. The 70-85% accuracy range commonly cited in the literature reflects a mixture of beginners, intermediates, and experts. Well-trained individuals working with adaptive classifiers routinely achieve 85-95% binary classification accuracy. The system gets better because both sides of the interface are learning.
What Motor Imagery Can't Do (Yet)
It would be dishonest to talk about motor imagery without acknowledging its current limitations. This technology is real and it works, but it is not telepathy.
You can't think arbitrary thoughts and have a computer understand them. Motor imagery BCIs detect specific, trained patterns of motor cortex activation. They can distinguish "imagined left hand" from "imagined right hand." They cannot decode "open my email" or "call mom."
The number of distinct commands is limited. With current non-invasive technology, most systems reliably distinguish 2-4 motor imagery classes. Some researchers have pushed to 6 or 8 classes by adding foot, tongue, and compound imagery tasks, but accuracy drops with each additional class.
Performance varies between people and between sessions. Your mental state, fatigue level, caffeine intake, and even time of day affect motor imagery signals. A classifier trained on Monday morning data might perform worse on Friday afternoon. Adaptive algorithms help, but session-to-session variability remains a real challenge.
It requires mental effort. Sustained motor imagery is cognitively demanding. Using it continuously for hours is not realistic in the way that typing on a keyboard is. Motor imagery is better suited for discrete commands (trigger an action, confirm a selection) than for continuous control (steering a cursor smoothly).
These limitations are real. They're also shrinking. Every year brings better classifiers, better electrode technology, better training protocols, and better understanding of the neural mechanisms involved. The trajectory is clear even if the destination is still some distance away.
The Bigger Picture: Why Motor Imagery Matters Beyond BCIs
Here is the part that most articles about motor imagery BCIs completely miss. The ability to detect imagined movement from outside the skull has implications that go far beyond controlling computers.
Stroke rehabilitation. Motor imagery training, combined with real-time EEG feedback, has shown significant promise in helping stroke patients recover movement. A 2015 randomized controlled trial in The Lancet found that BCI-assisted motor imagery training improved upper limb function in chronic stroke patients who had not responded to conventional therapy. The feedback loop works because it reconnects the brain's motor planning system with actual outcomes, essentially rebuilding the neural pathways disrupted by the stroke.
Prosthetic control. For amputees, motor imagery of the missing limb still produces detectable ERD over the corresponding motor cortex area. This means non-invasive BCIs could provide a control interface for prosthetic limbs without any surgery. The imagined movements of a phantom hand become the control signals for a robotic one.
Cognitive assessment. The quality and speed of motor imagery ERD correlate with motor cortex health. Researchers are investigating whether motor imagery EEG could serve as an early biomarker for neurodegenerative conditions like Parkinson's disease and ALS, where motor cortex function degrades before obvious physical symptoms appear.
Athletic training. Elite athletes have used motor imagery (often called "visualization") for decades. Now, EEG gives them objective feedback on the quality of their mental rehearsal. An athlete can see whether their motor cortex is actually engaging during visualization or whether they're just daydreaming. That feedback makes practice more effective.
Motor imagery is not just a BCI trick. It is a window into how your motor cortex works, what it is doing when you are not moving, and what it reveals about the health and capability of your brain. The ability to measure it in real time, outside a hospital, with a device you can wear while working, is a genuinely new thing in the world.
Where This Is Going
The motor imagery BCI field is evolving fast. Here are the threads worth watching.
Few-shot calibration. Current systems need 10-20 minutes of training data per user. Researchers are developing transfer learning models that can generalize from large databases of motor imagery EEG, needing only a few minutes (or even zero minutes) of new user data to produce a working classifier. When this matures, "put on device, start using" becomes reality.
Continuous decoding. Instead of classifying discrete events ("was that left hand or right hand?"), next-generation systems aim to decode continuous motor intention: the speed, direction, and force of imagined movements, in real time. This would enable smooth, analog control rather than binary switching.
Multimodal fusion. Combining motor imagery EEG with eye tracking, EMG (muscle signals), and even fNIRS (blood oxygen measurement) gives the classifier multiple streams of evidence about user intent. Hybrid systems consistently outperform any single modality alone.
Always-on ambient BCI. The endgame is a motor imagery system that runs continuously in the background, detecting your movement intentions and cognitive state without you having to actively "use" it. The Crown's kinesis feature is an early step in this direction, running on-device and always ready to detect trained patterns.
Every one of these developments brings us closer to the original promise of brain-computer interfaces: a direct, natural channel between your thoughts and the digital world. Motor imagery is the paradigm best positioned to deliver on that promise, because it is the one that starts with you. No stimuli. No screens. Just your brain, doing what it already knows how to do, and a machine that is finally learning to listen.
Your motor cortex has been broadcasting since before you were born. Now, for the first time, you get to decide what happens when it speaks.

