
Who Owns Your Thoughts?

By AJ Keller, CEO at Neurosity  •  February 2026
Neuroethics is the field that examines the ethical, legal, and social implications of neuroscience and neurotechnology, from brain data privacy to cognitive liberty to the very definition of personal identity.
We've spent centuries building legal frameworks around physical property, speech, and bodily autonomy. But what about mental autonomy? As brain-computer interfaces become consumer products and neural data becomes a commodity, neuroethics asks the questions that will define the next era of human rights. The answers we arrive at now will shape what it means to have a mind in the 21st century.

The Question Nobody Is Asking Loudly Enough

Sometime in the next decade, and probably sooner than you think, your employer might ask you to wear a brain-sensing device at work. Not to read your thoughts, they'll say. Just to measure focus. Optimize productivity. Maybe adjust the lighting when your attention drifts. Perfectly benign. Totally voluntary.

Now ask yourself: how voluntary is "voluntary" when your performance review depends on it?

Or consider this: a health insurance company offers you a 20% discount if you share your neural data from a consumer EEG device. They just want to verify that you're managing your stress levels, practicing mindfulness-based stress reduction, maintaining cognitive health. What's the harm in that?

Or this: a social media company develops a BCI headband that lets you scroll feeds with thought alone. The device is free. The app is free. You pay with your brain data, which the company uses to train algorithms that predict what will make you angry, anxious, or excited, then serves you content calibrated to those predictions.

These aren't science fiction scenarios. The technology for each one either exists today or is under active development. And the ethical frameworks we'd need to evaluate them? They're still being written.

This is the domain of neuroethics. And it might be the most important conversation in technology that almost nobody is having.

What Neuroethics Actually Is

Neuroethics emerged as a formal field in 2002, when William Safire coined the term at a Dana Foundation conference. But the questions it addresses are much older than the name suggests. Whenever humans have developed tools to study or alter the mind (from lobotomies in the 1940s to Prozac in the 1980s to consumer BCIs today), ethical questions have followed.

The field breaks into two complementary branches.

The Ethics of Neuroscience

This branch asks: How should we conduct brain research? What are the limits of acceptable experimentation? When is it ethical to alter someone's brain?

These questions have a long, uncomfortable history. The lobotomy era, when tens of thousands of people had their frontal lobes surgically disconnected to "treat" mental illness, stands as a permanent warning about what happens when neuroscience advances faster than ethics. Walter Freeman, who popularized the procedure, performed lobotomies using crude instruments in his office rather than an operating room. He won acclaim for it. His patients won lifelong disability.

More recent ethical challenges include deep brain stimulation for psychiatric conditions (when does treatment become personality modification?), the use of cognition-enhancing drugs by healthy individuals (is it unfair? Is it coercive?), brain imaging in the courtroom (can we really detect deception or criminal intent?), and neuroimaging studies of vulnerable populations who may not be able to give fully informed consent.

The Neuroscience of Ethics

The second branch turns the lens around. Instead of asking "what is ethical in neuroscience," it asks "what can neuroscience tell us about ethics itself?"

Research in this area has produced genuinely unsettling findings. Joshua Greene at Harvard used fMRI to show that different types of moral dilemmas activate different brain circuits. Personal moral dilemmas (pushing someone off a bridge to save five others) activate emotional circuits centered on the amygdala and vmPFC. Impersonal moral dilemmas (flipping a switch to divert a trolley) activate cognitive circuits in the dorsolateral PFC.

This means our moral judgments aren't computed by a single, rational "morality module." They emerge from the competition between emotional and cognitive systems, and which system wins depends on how the problem is framed, how much time you have to decide, and even your current stress level.

Some philosophers find this deeply troubling. If moral intuitions are products of neural architecture rather than access to moral truth, what grounds do we have for trusting them?

What Are the Five Pillars of Neuroethics?

As neurotechnology has matured from research tools to consumer products, the ethical landscape has crystallized around five core issues. Understanding each one is essential for anyone who wants to think clearly about where brain technology is heading.

1. Neural Data Privacy

Neural data is different from any other type of personal data, and the differences matter enormously for privacy.

When a company collects your search history, they learn what you're interested in. When they collect your location data, they learn where you go. When they collect your neural data, they learn what you think. Not precisely, not yet, but the trajectory is clear and the resolution is improving every year.

Here's the part that should keep you up at night. EEG data contains far more information than the user typically intends to share. You might put on a brain-sensing headband to measure your focus while working. But that same EEG signal contains correlates of your emotional state, your cognitive workload, your response to stimuli, your fatigue level, and potentially even markers associated with neurological and psychiatric conditions.

A 2012 study by Martinovic et al. demonstrated that EEG data collected from a consumer gaming headset could be used to infer private information the user never intended to disclose, including which bank they used and the area where they lived. The technique exploited a brain signal called the P300, an event-related potential that fires when you encounter something personally significant or surprising. By flashing stimuli (bank logos, maps, faces) on screen and measuring P300 responses, researchers could extract private information without the user's knowledge.

The implications are staggering. Neural data isn't just data about what you've done. It's data about what you think, feel, and recognize. And once it's collected, it's permanent. You can change your password. You can't change your P300 response.
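To make the mechanics concrete, here is a minimal, purely synthetic sketch of the averaging step behind a P300 probe: simulated single-channel EEG gets a small positive deflection roughly 300 ms after "personally significant" stimuli, and time-locked epoch averaging recovers it from the noise. All signal parameters here are illustrative assumptions, not values from the Martinovic et al. study.

```python
import numpy as np

FS = 256                      # sample rate (Hz)
EPOCH = FS                    # 1-second epoch after each stimulus
rng = np.random.default_rng(0)

def simulate_stream(onsets, significant, n_samples):
    """EEG-like noise, plus a P300-like bump after significant stimuli."""
    eeg = rng.normal(0.0, 10.0, n_samples)          # background noise (µV)
    bump_t = np.arange(EPOCH) / FS
    # Gaussian deflection centered ~300 ms post-stimulus (illustrative shape)
    bump = 5.0 * np.exp(-((bump_t - 0.3) ** 2) / (2 * 0.05 ** 2))
    for onset, sig in zip(onsets, significant):
        if sig:
            eeg[onset:onset + EPOCH] += bump
    return eeg

def average_epochs(eeg, onsets):
    """Average fixed-length epochs time-locked to stimulus onsets."""
    return np.mean([eeg[o:o + EPOCH] for o in onsets], axis=0)

# 40 stimuli, one every 2 seconds; only the "personally significant" ones
# (say, the user's own bank logo) evoke the P300-like response.
onsets = np.arange(40) * 2 * FS
significant = np.array([i % 4 == 0 for i in range(40)])
eeg = simulate_stream(onsets, significant, int(onsets[-1]) + 2 * FS)

erp_sig = average_epochs(eeg, onsets[significant])
erp_other = average_epochs(eeg, onsets[~significant])

# The averaged response ~300 ms post-stimulus separates the two classes,
# which is exactly what lets a probe infer what the user recognizes.
window = slice(int(0.25 * FS), int(0.35 * FS))
print(f"significant: {erp_sig[window].mean():.1f} µV, "
      f"other: {erp_other[window].mean():.1f} µV")
```

A single epoch is buried in noise; the attack works only because averaging many time-locked epochs cancels the noise while the stimulus-locked response survives.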

The Re-identification Problem

Even "anonymized" neural data may not be truly anonymous. A 2019 study showed that individual EEG patterns are as unique as fingerprints. Just 12 seconds of EEG recording was sufficient to identify individuals from a database with over 95% accuracy. This means that even if identifying information is stripped from a neural data set, the brain data itself can serve as a biometric identifier, potentially linking "anonymous" neural data back to specific individuals.
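A toy sketch of why EEG can act as a biometric, assuming (for illustration only) that each person has a stable spectral signature: enroll a band-power feature vector per subject, then match a fresh short recording to the database by nearest-neighbor distance. The subject model and feature choice are simplifications of my own, not the method of the cited study.

```python
import numpy as np

FS = 256
rng = np.random.default_rng(7)

def record(profile, seconds):
    """Synthesize EEG as a sum of band-limited oscillations plus noise."""
    t = np.arange(int(seconds * FS)) / FS
    freqs = [6, 10, 20, 35]                  # theta/alpha/beta/gamma (Hz)
    x = sum(a * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
            for a, f in zip(profile, freqs))
    return x + rng.normal(0, 0.5, t.size)

def band_powers(x):
    """Relative power in four fixed bands, via the FFT."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(x.size, 1 / FS)
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
    p = np.array([psd[(f >= lo) & (f < hi)].sum() for lo, hi in bands])
    return p / p.sum()

# Enroll 5 subjects, each with a random but stable spectral profile.
profiles = rng.uniform(0.5, 2.0, size=(5, 4))
enrolled = np.array([band_powers(record(p, 60)) for p in profiles])

# Identify subject 3 from a fresh 12-second recording.
probe = band_powers(record(profiles[3], 12))
identified = int(np.argmin(np.linalg.norm(enrolled - probe, axis=1)))
print(identified)
```

The point of the sketch: nothing in the matching step needs a name or an account. If the signature is stable, the signal itself is the identifier, which is why stripping metadata does not anonymize neural data.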

2. Cognitive Liberty

Cognitive liberty is the proposed right to mental self-determination. It has three components:

The right to mental privacy. No one should be able to access or infer your mental states without your informed consent. This seems obvious, but existing law provides almost no protection. In most countries, there is no legal prohibition against an employer requiring EEG monitoring as a condition of employment, or a school requiring students to wear attention-tracking headbands.

The right to cognitive self-determination. You should have the right to alter your own consciousness as you see fit, whether through meditation, neurofeedback, or other means, without state interference (beyond the usual limits that apply to actions that harm others).

The right to freedom from unauthorized cognitive manipulation. No one should be able to use neurotechnology to influence your thoughts, emotions, or decisions without your knowledge and consent.

Marcello Ienca and Roberto Andorno argued in a landmark 2017 paper that cognitive liberty and a cluster of related "neurorights" should be recognized as fundamental human rights. Their argument: just as the physical body is protected from unauthorized interference (through laws against assault, battery, and nonconsensual medical treatment), the mind deserves equivalent protection. And as neurotechnology makes the mind increasingly accessible, that protection is no longer philosophical. It's practical.

3. Identity and Authenticity

Deep brain stimulation (DBS) for Parkinson's disease and treatment-resistant depression has produced some of the most thought-provoking case studies in neuroethics.


Some DBS patients report that the device changes their personality. Not dramatically, not in ways that make them unrecognizable. But in ways that raise profound questions. One patient, described in a 2006 paper by Schüpbach et al., said after DBS activation: "I feel like an electrical doll." Another said: "I don't know whether I'm happy because of the stimulation or because I'm really happy."

When a neurotechnology changes how you think, feel, or behave, is the resulting person still "you"? If you become more impulsive, more creative, or less anxious because of a neural device, are those changes authentic expressions of your personality, or are they artifacts of the technology?

This isn't an abstract question for DBS patients. And it won't be abstract for the much larger population of people who will use consumer neurotechnology for cognitive enhancement in the coming decades. If a neurofeedback protocol makes you measurably calmer, is that "real" calm? Does the distinction even matter?

Philosopher Walter Glannon argues that it depends on whether the change is consistent with the person's own values and goals. If you want to be calmer and you use a tool to achieve that, the resulting calm is authentically yours. If a technology changes your desires themselves, that's a different, more troubling situation.

4. Enhancement and Equity

As neurotechnology becomes more effective at improving cognitive performance, a familiar equity question arises: who gets access?

If a consumer EEG device can genuinely improve focus and productivity through neurofeedback, and if that improvement translates to professional advantage, then access to the technology becomes an equity issue. Students with neurofeedback tools might outperform those without. Knowledge workers with real-time cognitive monitoring might be more productive than those flying blind.

This isn't hypothetical. Studies have shown that neurofeedback training can improve attention, working memory, and executive function. If these improvements are real and reliable, they represent a competitive advantage, and competitive advantages tend to accrue to those who can afford them.

The neuroethical response to this challenge isn't to restrict the technology. It's to democratize it. Making neurotechnology affordable, open-source, and accessible is itself an ethical imperative. The alternative, a world where cognitive enhancement is available only to the wealthy, would exacerbate existing inequalities in ways that could become self-reinforcing and permanent.

5. Responsibility and Agency

If your brain activity can be monitored and analyzed, what happens to the concept of personal responsibility?

Consider this scenario: a person commits a crime. Brain scans reveal abnormal activity in their prefrontal cortex, specifically in the circuits responsible for impulse control. Their defense attorney argues that the neural abnormality diminished their capacity for self-control, and therefore their moral responsibility.

This isn't hypothetical either. Brain scans have been introduced as evidence in criminal cases, with varying degrees of success. In the US, the 2010 Grady Nelson case admitted quantitative EEG (qEEG) evidence of brain damage during the sentencing phase of a murder trial. The jury voted against the death penalty, though whether the brain evidence was the deciding factor is impossible to know.

The deeper issue is what neuroscience does to the concept of free will. If every decision you make is the product of neural activity, and that neural activity is shaped by genetics, development, and experience, then in what sense are you "free" to choose otherwise? And if you're not truly free, what does that mean for moral responsibility, criminal justice, and the social contract?

Neuroethics doesn't resolve these questions. But it insists we take them seriously, especially as neurotechnology gives us increasingly precise windows into the neural machinery of decision-making.

The Regulatory Landscape: Who's Protecting Your Brain?

| Jurisdiction | Protection | Year | Scope |
| --- | --- | --- | --- |
| Chile | Constitutional amendment protecting neural data and mental integrity | 2021 | Broadest: covers all neurotechnology |
| EU (GDPR) | Neural data classified as biometric/health data with enhanced protections | 2018 (applied) | Requires explicit consent for processing |
| Colorado (US) | Neural data classified as sensitive personal data | 2024 | Consumer privacy protections |
| Spain | Proposed constitutional amendment (pending) | 2023 | Modeled on Chile's approach |
| Brazil | Neural rights bill under consideration | 2025 | Comprehensive framework proposed |
| Federal US | No specific neural data protections | N/A | Regulatory gap |

Chile's 2021 constitutional amendment is the most significant legal development in neuroethics to date. It added "neuroprotection" to the constitutional right to mental integrity, giving Chileans the explicit right to control their own neural data and be free from technologies that could alter their brain activity without consent.

The Chilean model is being watched closely by other nations, but progress is slow. In the United States, there is no federal law specifically protecting neural data. HIPAA covers brain data collected in medical contexts but not consumer settings. The patchwork of state biometric privacy laws (like Illinois's BIPA) may apply to neural data but weren't written with it in mind.

This regulatory vacuum is concerning because the technology is advancing faster than the law. By the time comprehensive neural data protections are enacted in most countries, billions of data points will already have been collected, stored, and potentially sold.

The Privacy-First Architecture: An Ethical Imperative

The neuroethics challenges above paint a concerning picture. But they also illuminate a clear path forward: the architecture of neurotechnology itself must be ethical by design, not by afterthought.

This is a principle the Neurosity team took seriously from the beginning. The Crown's N3 chipset processes EEG data on the device itself. Raw brain data doesn't get transmitted to cloud servers for processing. There's no backend database accumulating your neural patterns. Hardware-level encryption ensures that even if the device were physically compromised, the data would be unreadable.

When you use the Crown's JavaScript or Python SDK, you're accessing data that's computed locally. The focus and calm scores, the power spectral density, the raw EEG at 256Hz, all of this is generated on the device and stays on the device until you, the user, explicitly choose to send it somewhere. You have full control. Not because of a privacy policy that could change next quarter, but because of a hardware architecture that can't.
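As one illustration of what "computed locally" can mean, here is a sketch of deriving a focus-like score from raw samples without the signal ever leaving the machine. The beta/theta power ratio used here is a common attention proxy from the neurofeedback literature; Neurosity's actual focus algorithm is not public, so treat this as an assumption-laden stand-in, not the Crown's implementation.

```python
import numpy as np

FS = 256  # Crown sample rate (Hz)

def band_power(x, lo, hi):
    """Power in [lo, hi) Hz via the FFT (single channel)."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(x.size, 1 / FS)
    return psd[(f >= lo) & (f < hi)].sum()

def focus_score(window):
    """Map the beta/theta power ratio into [0, 1)."""
    beta = band_power(window, 13.0, 30.0)
    theta = band_power(window, 4.0, 8.0)
    ratio = beta / (theta + 1e-12)
    return ratio / (1.0 + ratio)   # squash to 0..1

# Simulate 4 seconds of "focused" EEG: strong beta, faint theta.
t = np.arange(4 * FS) / FS
rng = np.random.default_rng(1)
eeg = (2.0 * np.sin(2 * np.pi * 20 * t)      # beta oscillation
       + 0.5 * np.sin(2 * np.pi * 6 * t)     # faint theta
       + rng.normal(0, 0.3, t.size))

score = focus_score(eeg)
print(round(score, 2))  # closer to 1.0 means more beta-dominated
```

The privacy property falls out of the data flow: only the single derived number needs to cross the device boundary, while the raw 256 Hz signal, and everything else inferable from it, stays local.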

The Neurosity MCP integration, which allows the Crown to feed brain state data to AI tools like Claude, follows the same principle. The user initiates the connection. The user controls what data flows and where. The AI tool receives only what the user chooses to share, in real-time, with no persistent storage on the AI side.

This is what privacy-first neurotechnology looks like. Not "we promise to be careful with your data." Instead: "Your data physically cannot leave without your active choice." The distinction matters enormously, and it's a distinction that the neuroethics community has been calling for.

The Conversation We Need to Have

Here's the uncomfortable truth about neuroethics: the people building neurotechnology are generally moving faster than the people thinking about its implications. This isn't because the builders are careless. It's because the default mode of technology development, in any field, is to solve technical problems first and ethical problems later.

But brain technology is different from other technologies. Your credit card number can be reissued. Your social security number can be monitored for fraud. Your password can be changed. Your neural data is permanent, intimate, and uniquely identifying. A brain data breach isn't like a financial data breach. There is no "new account" for your brainwaves.

The questions neuroethics raises aren't theoretical. They're questions that consumers of brain-computer interfaces need to ask right now:

Who has access to my neural data? Not just today, but in the terms of service I agreed to, who could have access tomorrow?

Where is my brain data processed? On the device I own, or on a server I don't control?

What can be inferred from my data beyond what I intended to share? If I'm using a device for focus training, could the same data reveal my emotional state, my health status, my cognitive vulnerabilities?

What happens to my neural data if the company that made the device goes bankrupt, gets acquired, or changes its privacy policy?

These are the questions of our era. Not because brain-reading technology is coming. Because it's here.

The future of the mind will be shaped by the decisions we make in the next few years about who has the right to access, analyze, and act on neural data. Neuroethics isn't an academic discipline separate from the technology. It's the foundation the technology must be built on.

The brain is the last private space. Whether it stays that way depends on whether we have the wisdom to protect it with the same vigor we once applied to protecting our homes, our bodies, and our speech.

Your thoughts are your own. The question is whether the technology you invite into your mind will respect that, or exploit it. And that question isn't answered by promises. It's answered by architecture.

Frequently Asked Questions
What is neuroethics?
Neuroethics is an interdisciplinary field that examines the ethical, legal, social, and philosophical implications of neuroscience research and neurotechnology. It covers two broad domains: the ethics of neuroscience (how brain research should be conducted) and the neuroscience of ethics (what the brain can tell us about moral decision-making). As neurotechnology becomes more powerful and accessible, neuroethics increasingly focuses on issues like neural data privacy, cognitive liberty, mental autonomy, and the social implications of brain-reading devices.
Who owns brain data?
This is one of the most unsettled questions in neuroethics and law. In most jurisdictions, brain data falls into a legal gray area. It isn't consistently classified as health data (which has strong protections under laws like HIPAA) or as biometric data (which some state laws protect), and general data-protection regimes like the GDPR were not written with neural data in mind. Some legal scholars argue that neural data deserves a new category of protection given its intimate connection to thoughts, emotions, and identity. Chile became the first country to constitutionally protect neural data in 2021.
What is cognitive liberty?
Cognitive liberty is the proposed right to mental self-determination, including the freedom to control one's own consciousness, use neurotechnologies by choice, and be free from unauthorized mental surveillance or manipulation. Advocates argue that cognitive liberty should be considered a fundamental human right, alongside physical liberty and freedom of speech, especially as neurotechnology makes it increasingly possible to infer mental states from brain data.
Is brain data different from other personal data?
Yes, in several critical ways. Brain data is continuous (your brain never stops generating data), intimate (it correlates with thoughts, emotions, and cognitive states), inferential (machine learning can extract information the user didn't intend to share), and irrevocable (you can't change your brain's electrical patterns the way you can change a password). These properties mean that a brain data breach has implications fundamentally different from the leak of a credit card number.
How does the Neurosity Crown handle neural data privacy?
The Crown processes brain data on-device through its N3 chipset with hardware-level encryption. Raw EEG data never leaves the device unless the user explicitly chooses to share it through the SDK. There is no cloud processing of raw brain data, no third-party access, and no advertising based on neural patterns. Users maintain full ownership and control of their brain data at all times.
What countries have laws protecting brain data?
Chile led the world by amending its constitution in 2021 to protect neural data as part of the right to mental integrity. Spain has proposed similar constitutional amendments. The EU's GDPR provides some protection through its biometric data provisions. In the US, Colorado passed a law in 2024 classifying neural data as sensitive personal data. Several other countries and US states are considering neural data protection legislation as of 2026.
Copyright © 2026 Neurosity, Inc. All rights reserved.