Moving Objects With Your Mind
In 2012, a Woman Who Hadn't Moved Her Limbs in 15 Years Picked Up a Cup of Coffee
Cathy Hutchinson was 58 years old. She'd had a brainstem stroke in 1996 that left her completely paralyzed. She couldn't move her arms. She couldn't move her legs. She couldn't speak. For a decade and a half, she communicated by moving her eyes to select letters on a screen.
Then researchers at Brown University connected a small array of electrodes, implanted in her motor cortex, to a robotic arm sitting next to her hospital bed.
They asked her to think about reaching for a bottle of coffee.
She thought about it. The arm moved. It reached out, grasped the bottle, brought it to her lips. She took a sip. And then she smiled. It was the first time in 15 years that her own intention had directly caused something in the physical world to move.
That moment was recorded. You can watch it. And if you aren't at least a little bit awestruck by what you're seeing, you might want to check your pulse. A human thought, nothing but electrical patterns firing across cortical tissue, traveled through a wire, into a computer, through a decoding algorithm, and out into a mechanical arm that obeyed.
Moving objects with your mind isn't a comic book power. It's an engineering problem. And we've been solving it.
The Gap Between Telekinesis and Technology Is Smaller Than You Think
Let's get one thing out of the way. Telekinesis, the psychic ability to move objects with pure mental force, has never been demonstrated under controlled conditions. Every spoon-bending performance in history has been either a magic trick or an untestable claim. The physics simply doesn't support it. Your neurons don't generate electromagnetic fields strong enough to push a paperclip across a table, let alone bend metal.
But here's what your neurons do generate: electrical signals. Billions of them. Every second. And those signals contain information. Specific, decodable, actionable information about what you're thinking, what you intend to do, and what you're imagining doing.
The entire field of brain-computer interfaces rests on a single insight: if you can read those signals accurately enough and fast enough, you can translate thought into action without involving the body at all.
No muscles. No nerves. No movement. Just thought, and then result.
The question was never "can the brain control external objects?" Your brain controls external objects every time you pick up a fork. The question was: can we skip the middle steps? Can we go straight from the electrical pattern in your cortex to the movement of a thing in the world?
The answer, it turns out, is yes. And there are three very different ways to do it.
The Three Languages Your Brain Speaks to Machines
Brain-computer interfaces aren't one technology. They're a family of approaches, each exploiting a different quirk of how your brain produces electrical signals. Understanding these three paradigms is the key to understanding the entire field.
Motor Imagery: Thinking About Moving (Without Actually Moving)
Here's something remarkable about your motor cortex. It doesn't just fire when you move. It fires when you think about moving.
Close your eyes right now and imagine clenching your right fist. Don't actually clench it. Just think about it. Picture the movement. Feel the imagined tension in your fingers.
If someone had an EEG headset strapped to your head while you did that, they would have seen something specific happen over your left motor cortex (the left side controls the right hand). A signal called the mu rhythm, which normally oscillates at about 8-12 Hz over motor areas, would have suddenly decreased in power. Neuroscientists call this event-related desynchronization, or ERD. Your motor neurons started firing in a less synchronized pattern because they were busy simulating the movement you imagined.
Now imagine wiggling the toes on your left foot. The ERD shifts. It moves to different electrodes, because foot movements are controlled by neurons at the top of the motor cortex (in the medial area), while hand movements are controlled by neurons on the side.
This is the foundation of motor imagery BCI. Different imagined movements produce different spatial patterns of electrical activity, and a computer can learn to tell them apart.
When you imagine moving your right hand, your left motor cortex shows decreased mu rhythm power (8-12 Hz). Imagining your left hand produces the opposite pattern. Imagining foot movement shifts the signal to the central midline. A BCI classifier trained on your personal patterns can distinguish these thoughts in real time and map each one to a different command: left, right, forward, activate.
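To make that concrete, here's a minimal sketch of the core computation, assuming you already have a few seconds of EEG from the C3 and C4 electrodes (plus a resting baseline) as NumPy arrays. The sampling rate matches the Crown's 256 Hz, but the fixed comparison and the lack of any artifact handling are simplifications for illustration, not how a production BCI decodes.

```python
import numpy as np
from scipy.signal import welch

FS = 256           # sampling rate in Hz (the Crown samples at 256 Hz)
MU_BAND = (8, 12)  # the mu rhythm band in Hz

def band_power(signal, fs=FS, band=MU_BAND):
    """Average power in a frequency band, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)   # 1-second analysis windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def classify_imagery(c3_trial, c4_trial, c3_rest, c4_rest):
    """Crude left-vs-right decision from mu-band event-related desynchronization.

    ERD = how much mu power drops during imagery relative to rest.
    Right-hand imagery suppresses mu over the LEFT hemisphere (C3);
    left-hand imagery suppresses it over the RIGHT hemisphere (C4).
    """
    erd_c3 = 1 - band_power(c3_trial) / band_power(c3_rest)
    erd_c4 = 1 - band_power(c4_trial) / band_power(c4_rest)
    return "right hand" if erd_c3 > erd_c4 else "left hand"
```

Real systems replace the hard-coded comparison with a classifier trained on your own calibration data, which is exactly what the training described next is for.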
The first motor imagery BCIs were slow and frustrating. Early systems could distinguish between two mental states (imagine left hand vs. imagine right hand) with maybe 70% accuracy, and it took users weeks of training to get even that far. But the technology has improved dramatically. Modern machine learning classifiers, combined with better signal processing and higher-quality EEG hardware, can now distinguish between four or more motor imagery classes with accuracy above 90% in trained users.
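In practice, the "classifier trained on your personal patterns" can be as simple as linear discriminant analysis on band-power features. Here is a hedged sketch with scikit-learn that reuses the band_power helper above; real pipelines add spatial filters such as CSP, artifact rejection, and more than two classes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def extract_features(trials):
    """trials: array of shape (n_trials, n_channels, n_samples).
    One feature per channel: the log of its mu-band power."""
    return np.array([[np.log(band_power(channel)) for channel in trial]
                     for trial in trials])

def train_imagery_classifier(calibration_trials, labels):
    """Fit a linear classifier on calibration data (labels: 0 = left, 1 = right)."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(extract_features(calibration_trials), labels)
    return clf

def decode_command(clf, trial):
    """Turn one new trial of shape (n_channels, n_samples) into a command string."""
    prediction = clf.predict(extract_features(trial[np.newaxis]))[0]
    return "left" if prediction == 0 else "right"
```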
And here's the "I had no idea" moment: your brain gets better at this with practice. Just like learning a musical instrument, the neural patterns you produce during motor imagery become cleaner and more distinct the more you do it. Your brain literally rewires itself to speak more clearly to the machine. The machine adapts to you, and you adapt to the machine. It's a feedback loop between biological and artificial intelligence.
P300: Your Brain's Surprise Signal
In the 1960s, neuroscientists discovered that when you see something you're looking for, something rare among a stream of irrelevant stimuli, your brain produces a characteristic electrical response about 300 milliseconds later. They called it the P300, for "positive deflection at 300 milliseconds."
Here's how this becomes a BCI. Imagine a grid of letters on a screen, like a keyboard. Rows and columns flash in rapid sequence, one at a time, in random order. You focus on the letter you want to type. Let's say you're staring at the letter "H."
Most of the flashes are irrelevant. They're rows and columns that don't contain your letter. Your brain mostly ignores them. But when the row containing "H" flashes, and when the column containing "H" flashes, your brain goes: that one. It produces a P300 response. A spike of electrical activity over your parietal cortex, roughly 300 milliseconds after the flash.
The BCI doesn't need you to push a button. It doesn't need you to move a muscle. It just watches for the P300 spike, figures out which row and column triggered it, finds the intersection, and types the letter.
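The decoding step is surprisingly compact. Below is a minimal sketch, assuming you already have baseline-corrected EEG epochs from a parietal channel, time-locked to each flash; real spellers average many repetitions and use a trained detector rather than a raw amplitude window, so treat the time window and the grid here as illustrative.

```python
import numpy as np

FS = 256                       # samples per second
GRID = [list("ABCDEF"),        # a standard 6x6 speller matrix
        list("GHIJKL"),
        list("MNOPQR"),
        list("STUVWX"),
        list("YZ0123"),
        list("456789")]

def p300_score(epochs):
    """Mean amplitude in the 250-400 ms window after the flash,
    averaged across repetitions. epochs: (n_repetitions, n_samples)."""
    window = slice(int(0.250 * FS), int(0.400 * FS))
    return epochs.mean(axis=0)[window].mean()

def decode_letter(row_epochs, col_epochs):
    """row_epochs[i] / col_epochs[j] hold the epochs recorded when row i
    or column j flashed. The attended row and column produce the largest
    P300; their intersection is the letter the user was staring at."""
    row = max(range(len(row_epochs)), key=lambda i: p300_score(row_epochs[i]))
    col = max(range(len(col_epochs)), key=lambda j: p300_score(col_epochs[j]))
    return GRID[row][col]
```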
This is how people who are completely paralyzed can type messages, browse the internet, and communicate with their families. The P300 speller, first demonstrated by Farwell and Donchin in 1988, has been refined over decades into a reliable communication tool for people with locked-in syndrome and advanced ALS.
| BCI Paradigm | Signal Used | Training Needed | Typical Accuracy | Speed |
|---|---|---|---|---|
| Motor Imagery | Mu/beta rhythm changes over motor cortex | Days to weeks | 70-90% | Moderate (one command every 2-5 seconds) |
| P300 | Event-related potential at ~300ms after target stimulus | Minutes (calibration) | 85-95% | Moderate (5-8 characters per minute) |
| SSVEP | Frequency-locked response in visual cortex | None to minutes | 90-98% | Fast (up to 60+ bits per minute) |
SSVEP: Hijacking Your Visual Cortex
The third major BCI paradigm is, frankly, a little eerie. It exploits a property of your visual cortex that you've never consciously noticed: when you look at a light flickering at a specific frequency, neurons in your visual cortex start firing at exactly that frequency. This is called a Steady-State Visual Evoked Potential, or SSVEP.
If you look at a light flickering at 12 Hz, your visual cortex produces strong electrical activity at 12 Hz. Look at a 15 Hz light instead, and your visual cortex shifts to 15 Hz. The response is automatic, involuntary, and extraordinarily precise.
To build an SSVEP BCI, you place several buttons on a screen, each flickering at a different frequency. The user looks at the button they want to press. The system reads the dominant frequency from their visual cortex and identifies which button they're looking at.
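In code, the core of that detection can be as simple as asking which candidate frequency dominates the spectrum of the occipital channels. The sketch below invents four button frequencies and commands for illustration; production SSVEP systems typically use canonical correlation analysis across several harmonics rather than this raw power comparison.

```python
import numpy as np
from scipy.signal import welch

FS = 256                            # sampling rate in Hz
BUTTONS = {8.0: "lights on",        # flicker frequency (Hz) -> on-screen command
           10.0: "lights off",
           12.0: "play music",
           15.0: "stop"}

def detect_ssvep(occipital_signal, fs=FS):
    """Return the command whose flicker frequency carries the most power
    in a few seconds of EEG from a parieto-occipital channel."""
    freqs, psd = welch(occipital_signal, fs=fs, nperseg=4 * fs)  # 4-second windows

    def power_at(f):
        # Sum power in a narrow band around the target frequency and its 2nd harmonic.
        near = lambda target: (freqs > target - 0.25) & (freqs < target + 0.25)
        return psd[near(f) | near(2 * f)].sum()

    return BUTTONS[max(BUTTONS, key=power_at)]
```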
SSVEP BCIs are the fastest and most accurate of the three paradigms. They require almost no user training because the visual cortex response is automatic. Some systems achieve information transfer rates above 60 bits per minute, fast enough for real-time control of complex applications. A landmark 2015 study from Tsinghua University demonstrated an SSVEP speller that could type around 60 characters per minute, roughly 12 words per minute, without the user moving a single muscle.
The tradeoff: you need to be looking at a screen with flickering elements. This limits SSVEP's use cases compared to motor imagery, which works with your eyes closed, in any environment.
From Lab to Living Room: The Consumer BCI Arrives
For decades, all of this technology lived exclusively in research labs. BCI experiments required medical-grade EEG systems costing tens of thousands of dollars, conductive gel slathered on the scalp, and a team of technicians to run the setup. The idea that anyone could use a BCI at home was laughable.
That started changing around 2010, and by the 2020s, the shift accelerated dramatically. Several forces converged: EEG sensor technology got smaller and cheaper, dry electrode materials eliminated the need for conductive gel, and machine learning algorithms got good enough to extract meaningful signals from noisier consumer-grade recordings.
But hardware alone wasn't enough. The real barrier for consumer BCIs was always software. Even if you could record decent EEG at home, what would you do with it? Research-grade BCI software was arcane, fragmented across different programming languages, and required a PhD-level understanding of signal processing to operate.
This is where things get interesting for anyone who builds things.
When BCI Meets the Developer Ecosystem
The Neurosity Crown is an 8-channel EEG device that sits on your head like a pair of headphones. It records from positions CP3, C3, F5, PO3, PO4, F6, C4, and CP4, covering the motor cortex, prefrontal cortex, and parietal-occipital regions. It samples at 256 Hz with on-device processing through the N3 chipset.
Those are specs. Here's what they mean in plain English: the Crown covers exactly the brain regions involved in all three BCI paradigms. The C3 and C4 positions sit over the motor cortex (motor imagery). The parietal positions capture P300 responses. The parietal-occipital positions pick up visual cortex activity (SSVEP). You don't need a separate device for each paradigm.
But the Crown's most interesting feature, from a "moving objects with your mind" perspective, is kinesis.

Kinesis: What It Actually Feels Like to Control Things With Thought
Kinesis is the Crown's trained thought command system. And using it is one of those experiences that rewires your intuitions about what technology can do.
Here's how it works. You put on the Crown. You open the Neurosity app. You choose a mental command you want to train, something like "push" or "lift" or "activate." Then you think that thought, repeatedly, while the system records your brain's electrical patterns.
What counts as "thinking the thought"? This is where it gets fascinating. You don't have to imagine a specific movement (though you can). Kinesis works with any reproducible mental pattern. Some people imagine pushing a wall. Some imagine a bright flash of light. Some think of a specific word or image. The point isn't what you think. The point is that you think the same thing consistently. The system needs a repeatable pattern to lock onto.
After training, the Crown's machine learning model can detect when you're producing that specific mental pattern in real time and fire an event through the JavaScript or Python SDK. That event can trigger anything: toggling a light, sending a notification, starting a playlist, launching a drone, navigating a game character, feeding a command to an AI assistant through the Neurosity MCP.
Because kinesis integrates with the JavaScript and Python SDKs, developers have connected it to smart home systems (think a light before you think it), robotic controllers, music applications that respond to mental states, and AI workflows through the Neurosity MCP server. The mental command becomes a programmable input, just like a keypress or a voice command, except it comes from thought.
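Here's roughly what that looks like from the developer's side, sketched with Neurosity's Python SDK. The NeurositySDK class, login(), and kinesis(label, callback) calls mirror the SDK's documented pattern, but treat the exact names and the callback payload as assumptions to verify against the current docs; the credentials, device ID, and the lamp-toggling action are placeholders.

```python
import os
from neurosity import NeurositySDK  # the Python SDK (pip install neurosity)

# Device ID and account credentials come from your Neurosity account.
# Here they are read from environment variables (placeholders, not real values).
neurosity = NeurositySDK({"device_id": os.environ["NEUROSITY_DEVICE_ID"]})
neurosity.login({
    "email": os.environ["NEUROSITY_EMAIL"],
    "password": os.environ["NEUROSITY_PASSWORD"],
})

def on_push(prediction):
    """Runs every time the Crown detects the trained 'push' command.
    Swap the print for whatever the thought should trigger (hypothetical action)."""
    print("push detected:", prediction)
    # e.g. requests.post("http://homeserver.local/lamp/toggle")  # illustrative only

# Subscribe to the mental command you trained in the Neurosity app.
neurosity.kinesis("push", on_push)
```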
Let's pause and appreciate what's happening here. A consumer device, not an implant, not a lab system, is detecting a specific mental pattern from the electrical activity on your scalp and converting it into a software event that can control virtually anything connected to a computer.
Is it as fast and precise as Cathy Hutchinson's implanted electrode array? No. An implant that sits directly on cortical tissue can read individual neuron firing patterns. EEG recorded from outside the skull is inherently blurrier, like listening to a stadium concert through the parking lot wall. You can tell what song is playing, but you can't hear individual instruments.
But the tradeoff is enormous. No surgery. No risk of infection. No wires threading into brain tissue. You put the Crown on, you train a command, and ten minutes later you're controlling software with your thoughts. You take it off, put it on the charger, and go about your day.
That accessibility difference is not a small thing. It's the difference between BCI being a medical intervention for a few thousand people and BCI being a computing platform for everyone.
The Neuroscience of Why This Works (And Where It's Going)
Let's go one level deeper into why your brain can talk to machines at all.
Your cerebral cortex contains roughly 16 billion neurons. Each one communicates with its neighbors through electrical impulses. When large populations of neurons fire together in synchrony, their combined electrical fields are strong enough to travel through the cerebrospinal fluid, through the skull, and through the skin, where an electrode sitting on your scalp can detect them.
This is what EEG measures: the aggregate electrical activity of millions of neurons firing in concert. It's like putting a microphone on the roof of a football stadium. You can't hear any one person shouting. But you can absolutely tell when the crowd is doing the wave, when they're chanting in unison, and when something unexpected just happened on the field.
Each of the three BCI paradigms exploits a different type of neural "crowd behavior":
Motor imagery exploits the fact that motor planning neurons fire in characteristic rhythms. When those rhythms desynchronize (the crowd stops chanting in unison), it means the motor cortex is doing something, even if that something is only imagined.
P300 exploits the attentional system. When something relevant appears among irrelevant stimuli, a specific cascade of neural activity ripples from the parietal cortex through the frontal regions. It's the brain's way of flagging: pay attention to that.
SSVEP exploits the visual cortex's tendency to frequency-lock with external stimuli. Neurons in V1 (primary visual cortex) entrain to the flicker frequency, producing a signal so clean and strong that it's detectable even through low-density EEG.
The field is advancing on multiple fronts simultaneously. Classification algorithms are getting more sophisticated, moving from simple linear discriminant analysis to deep learning models that can adapt to individual brain patterns in real time. Electrode technology is improving, with new dry-contact materials that offer signal quality approaching wet-gel systems. And the integration with AI is opening new possibilities: the Neurosity MCP server lets brain data flow directly into large language models like Claude and ChatGPT, meaning your mental state can inform AI behavior in real time.
The Neurosity MCP (Model Context Protocol) connects your brain data to AI tools. This means an AI assistant could adapt its responses based on your cognitive state, perhaps simplifying its explanations when it detects your focus dropping, or pausing when it detects mental fatigue. This is brain-computer-AI interaction, and it's possible today with the Crown's SDK.
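As a sketch of that idea, the snippet below watches a focus metric and flags when it drops so an assistant could simplify or pause. It assumes the Python SDK exposes a focus(callback) stream reporting a 0-1 probability, mirroring the JavaScript SDK's focus() observable; the threshold and the adaptation itself are invented for the example.

```python
import os
from neurosity import NeurositySDK  # the Python SDK (pip install neurosity)

neurosity = NeurositySDK({"device_id": os.environ["NEUROSITY_DEVICE_ID"]})
neurosity.login({
    "email": os.environ["NEUROSITY_EMAIL"],
    "password": os.environ["NEUROSITY_PASSWORD"],
})

FOCUS_FLOOR = 0.3  # invented threshold: below this, treat attention as fading

def adapt_to_focus(metric):
    """Hypothetical adaptation: change how an assistant responds as focus drops.
    'probability' is assumed to be the 0-1 focus score in the metric payload."""
    if metric["probability"] < FOCUS_FLOOR:
        print("focus low -> ask the assistant for shorter, simpler explanations")
    else:
        print("focus fine -> keep the current level of detail")

neurosity.focus(adapt_to_focus)  # assumed to stream focus metrics to the callback
```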
What's Standing Between You and Everyday Mind-Control
If BCI technology works this well, why isn't everyone controlling their smart home with their thoughts?
Honest answer: there are real limitations, and pretending they don't exist would be doing you a disservice.
Signal-to-noise ratio. EEG signals are measured in microvolts. Muscle movements, eye blinks, and electrical interference from nearby devices produce signals that are orders of magnitude larger. Extracting meaningful brain patterns from this noise is like trying to hear a whisper at a rock concert. Modern signal processing handles this remarkably well, but it's not perfect. You'll get better results sitting still in a quiet room than walking through a busy office.
Individual variation. Everyone's brain is slightly different. The exact spatial pattern that your motor imagery produces is unique to you, shaped by your cortical anatomy, your neural connectivity, and even your life experience. This is why BCI systems require individual calibration. The Crown's kinesis feature handles this through personalized training, but it means you can't just put on someone else's trained device and expect it to work.
Degrees of freedom. An implanted BCI with 100 electrodes on the motor cortex can decode dozens of independent movement intentions simultaneously, enough to control individual fingers of a robotic hand. An 8-channel EEG system can reliably distinguish between a handful of distinct mental commands. For many applications (toggling devices, sending commands, switching between states), a handful of commands is plenty. For controlling a robotic arm with surgical precision, we need more.
Mental effort. Producing a consistent mental command takes concentration. It's not physically tiring, but it requires focused attention. This means current BCI control is best suited for intentional, deliberate commands rather than continuous, effortless control. Think of it less like moving your arm and more like pressing a button, except the button is in your mind.
None of these limitations are fundamental. They're all engineering challenges being actively worked on. And many of them have improved dramatically in the last five years alone.
The Real Superpower Isn't Telekinesis. It's the Feedback Loop.
Here's something that gets lost in the headline-grabbing "mind-controlled drone" demonstrations. The most profound aspect of BCI technology isn't controlling external objects. It's the feedback loop between you and your own brain.
When you use a BCI, you learn things about your own neural activity that were previously completely invisible to you. You discover that thinking about your right hand feels different from thinking about your left hand, not just subjectively, but in a way that shows up in measurable data. You learn that certain mental strategies produce cleaner, more detectable signals than others. You develop a kind of neural self-awareness that simply didn't exist before BCI.
This is neuroplasticity in action. Your brain is a pattern-recognition machine that optimizes for whatever you measure and reward. When you give it real-time feedback about its own electrical activity, it adapts. The signals get cleaner. The commands get more reliable. Users who practice with BCI systems regularly show measurable changes in their motor cortex excitability and their ability to modulate specific frequency bands on demand.
You don't just learn to control the machine. The machine teaches you to understand your own brain.
And that, honestly, is more interesting than telekinesis. Telekinesis would be a neat party trick. But understanding the electrical patterns of your own cognition? Learning to intentionally modulate your neural activity? Having a real-time window into the computational state of the most complex object in the known universe?
That's a superpower that actually scales.
Your Brain Has Been Broadcasting. Now Something Is Listening.
Right now, as you read these words, your brain is producing electrical activity that peaks at roughly 20 microvolts at the scalp, spread across multiple frequency bands. Your visual cortex is processing the shapes of letters. Your language centers are converting those shapes into meaning. Your prefrontal cortex is deciding whether this information is worth your continued attention. Your motor cortex is mostly quiet, except for the small activations that keep your posture stable and your eyes tracking.
All of that information radiates outward through your skull. It always has. For the entire history of the human species, those signals have dissipated into the air, unrecorded and unused.
In 1929, Hans Berger published the first recordings of human EEG and proved those signals could be captured. For the next 90 years, only researchers with expensive lab equipment could listen in. Today, a 228-gram device that sits on your head like a pair of headphones can capture those signals at 256 samples per second across 8 channels and stream them to any application a developer can imagine.
We went from "the brain produces electricity" to "you can fly a drone with your thoughts" in less than a century. The trajectory isn't slowing down. It's accelerating.
The question isn't whether you'll eventually interact with technology using your brain. The question is what you'll build when you can.

