What Is the NeuroRights Foundation?
The Neuroscientist Who Rewrote a Constitution
In October 2021, the Chilean Senate voted 37 to 0 to amend the country's constitution. Not for taxes. Not for elections. For brains.
Specifically, they voted to add protections for "brain activity and the information derived from it" to Chile's constitutional framework. This made Chile the first nation in human history to enshrine the protection of neural data in its highest legal document.
The story of how this happened starts not in Santiago but in a neuroscience lab at Columbia University in New York, with a Spanish-born professor named Rafael Yuste who had spent decades studying how the brain produces the thing we call "mind." Somewhere along the way, he realized that understanding the brain wasn't enough. Someone also needed to protect it.
The organization he built to do that is called the NeuroRights Foundation. And its story is one of the most consequential, and least known, developments in the intersection of science, technology, and human rights.
The BRAIN Initiative and a Moment of Clarity
To understand why Rafael Yuste created the NeuroRights Foundation, you need to understand what he helped create first.
In 2012, Yuste co-authored a paper in the journal Neuron proposing something audacious: a large-scale federal project to map the activity of every neuron in the human brain. Not the structure (the Human Genome Project's cousin, the Human Connectome Project, was already working on that). The activity. The real-time, dynamic, electrical conversation between billions of neurons that produces consciousness.
The paper caught the attention of the Obama White House. In 2013, President Obama announced the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), a multibillion-dollar federal research program that Yuste's proposal had helped inspire. The goal was to develop new tools for recording and manipulating brain activity at unprecedented resolution.
Yuste was at the center of it. And as the tools got better, as researchers gained the ability to record from thousands, then hundreds of thousands of neurons simultaneously, something started to bother him.
The technology was advancing on a trajectory that would eventually allow detailed reading and writing of neural activity. Reading, in the sense of decoding mental states, intentions, and perhaps even specific thoughts from brain signals. Writing, in the sense of stimulating specific neural populations to create experiences, alter moods, or change behavior. And the legal infrastructure for dealing with these capabilities was essentially nonexistent.
"We were building the most powerful technology in human history," Yuste has said in interviews, "and nobody was thinking about the human rights implications."
Five Rights for the Age of Neurotechnology
In 2017, Yuste and a group of colleagues from neuroscience, ethics, and law published a commentary in Nature titled "Four ethical priorities for neurotechnologies and AI." The paper argued that the rapid development of brain-reading and brain-stimulating technologies required new human rights frameworks, not just extensions of existing privacy law.
This paper became the intellectual foundation for the NeuroRights Foundation, which grew out of the NeuroRights Initiative Yuste launched at Columbia University. The Foundation proposed five specific neurorights that it argues should be recognized as fundamental human rights:
1. The Right to Mental Privacy
No person or organization should be able to access, collect, or use an individual's brain data without their explicit, informed, and ongoing consent. This goes beyond standard data privacy by recognizing that brain data is categorically different from other personal information. It reveals the contents of consciousness itself.
Mental privacy means more than just protecting brain data from hackers. It means establishing that brain data cannot be compelled by employers, demanded by insurers, subpoenaed without extraordinary cause, or collected as a condition of using a service.
2. The Right to Personal Identity
Neurotechnology should not be used to alter an individual's sense of self without their knowledge and consent. This right addresses something that existing human rights frameworks never anticipated: the possibility that technology could change who you are.
This isn't science fiction. Deep brain stimulation (DBS) patients have reported changes in personality, interests, and sense of identity as a result of their neural implants. If a technology alters your preferences, your emotional responses, or your personality, does the "you" that consented to the treatment still exist? The right to personal identity says that these changes require specific, informed consent, and that individuals must be told when a neurotechnology could affect their sense of self.
3. The Right to Free Will
No technology should be able to override an individual's ability to make autonomous decisions. This is the most philosophically charged of the five neurorights, because it touches on the ancient question of whether free will exists at all.
The Foundation sidesteps the metaphysical debate. Regardless of whether free will is philosophically "real," the practical concern is this: brain stimulation technologies can influence decision-making. Neurofeedback can condition brain states. AI systems acting on neural data can nudge behavior. The right to free will establishes that these capabilities must never be deployed to override a person's autonomous decision-making without their knowledge and consent.
4. The Right to Fair Access to Cognitive Enhancement
If neurotechnology can enhance cognitive abilities (memory, focus, learning speed, emotional regulation), those enhancements should be accessible to everyone, not just those who can afford them. This right addresses the possibility of a "neurodivide," a world where cognitive enhancement technologies create a new axis of inequality between the neurologically enhanced and the neurologically unmodified.
This is one of the most debated neurorights. Critics argue that we don't guarantee equal access to other enhancement technologies (private tutoring, good nutrition, selective universities). Proponents counter that cognitive enhancement is different because it affects the fundamental machinery of thought itself. If some people can literally upgrade their brains and others can't, the resulting inequality would be unlike anything in human history.
5. The Right to Protection from Algorithmic Bias
When AI systems analyze neural data, they must be free from biases that could discriminate against individuals based on their neurological characteristics. This right is a response to a documented problem: machine learning models trained on brain data can inherit and amplify biases from their training data.
If a cognitive assessment algorithm was trained primarily on data from neurotypical individuals, it might systematically mischaracterize neurodivergent brain patterns as "deficient" rather than "different." If a brain-based hiring tool was trained on data from employees that a biased manager rated highly, it would learn to replicate that bias in neural terms. This right demands transparency, auditing, and accountability for AI systems that process brain data.
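To make that auditing demand concrete, here's a minimal sketch, in Python, of the kind of group-wise error check an auditor might run on a model that scores brain data. Everything in it (the record layout, the toy predictor, the 5% tolerance) is an illustrative assumption, not a description of any real product's API or dataset:

```python
# Minimal bias-audit sketch: compare a model's error rate across
# neurotype groups. All names and data are illustrative assumptions.
from collections import defaultdict

def group_error_rates(records, predict):
    """records: iterable of (features, true_label, group) tuples;
    predict: callable mapping features -> predicted label."""
    errors, totals = defaultdict(int), defaultdict(int)
    for features, true_label, group in records:
        totals[group] += 1
        if predict(features) != true_label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy stand-in for a model that was tuned on neurotypical data only.
def toy_predict(features):
    return "pass" if features["focus_index"] > 0.5 else "fail"

records = [
    ({"focus_index": 0.8}, "pass", "neurotypical"),
    ({"focus_index": 0.3}, "fail", "neurotypical"),
    ({"focus_index": 0.4}, "pass", "neurodivergent"),  # misread as "fail"
    ({"focus_index": 0.2}, "fail", "neurodivergent"),
]

rates = group_error_rates(records, toy_predict)
disparity = max(rates.values()) - min(rates.values())
print(rates)  # {'neurotypical': 0.0, 'neurodivergent': 0.5}
if disparity > 0.05:  # the tolerance is a policy choice, not a constant
    print(f"Audit flag: group error disparity {disparity:.2%}")
```

The point of the sketch is that this check is cheap to run and impossible to run without access to the model and labeled data from both groups, which is exactly why the right demands transparency and auditability rather than trusting vendors' assurances.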
From Paper to Constitution: The Chile Story
Publishing an academic paper about neurorights is one thing. Getting a country to actually rewrite its constitution is another. Here's how it happened.
In 2019, Yuste was invited to present his neurorights framework to the Chilean Senate's Future Challenges Committee. Chile might seem like an unexpected first mover, but the country has a strong tradition of constitutional rights protection and a legislative culture that takes scientific advice seriously.
Yuste's presentation was, by all accounts, electrifying. He didn't talk about abstract philosophy. He showed the senators what neurotechnology could already do and where it was heading. He demonstrated how brain data could be used to infer emotional states. He explained how neurofeedback could condition behavior. He walked them through the regulatory vacuum.
Senator Guido Girardi, who chaired the committee, became the bill's champion. Girardi had a background in public health and immediately grasped the parallel to genetic privacy. Just as the mapping of the human genome created the need for genetic data protections, the mapping of brain activity created the need for neural data protections.
The constitutional amendment passed the Senate unanimously and was promulgated in October 2021. A companion law, providing specific implementation details, followed in 2024.
The Chilean law establishes that:
- Brain data is a special category of personal data requiring enhanced protection
- Non-consensual collection of neural data is prohibited
- Neural data cannot be used to discriminate in employment, insurance, or education
- Individuals have the right to know how their neural data is being processed and to request its deletion
- Technologies that could alter personal identity require specific informed consent
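What might honoring those obligations look like inside a product? Here's a deliberately simplified sketch (hypothetical Python, not any real system's API) of a neural data record that treats consent, purpose disclosure, and deletion as first-class operations rather than afterthoughts:

```python
# Hypothetical sketch of neural data handled as a special category:
# consent is explicit, processing purposes are disclosed on request,
# and deletion is a supported operation. Not a real API.
from dataclasses import dataclass, field

@dataclass
class NeuralDataRecord:
    user_id: str
    samples: list = field(default_factory=list)
    consented_purposes: set = field(default_factory=set)
    deleted: bool = False

    def process(self, purpose: str):
        """Refuse any use the user has not explicitly consented to."""
        if self.deleted:
            raise PermissionError("Record was deleted at the user's request")
        if purpose not in self.consented_purposes:
            raise PermissionError(f"No consent on file for purpose: {purpose!r}")
        return self.samples  # hand off to the consented pipeline only

    def disclose_purposes(self) -> set:
        """The right to know how the neural data is being processed."""
        return set(self.consented_purposes)

    def erase(self):
        """The right to request deletion."""
        self.samples.clear()
        self.deleted = True

record = NeuralDataRecord("user-123", samples=[0.12, 0.09],
                          consented_purposes={"focus_metrics"})
record.process("focus_metrics")    # allowed: consent on file
record.erase()
# record.process("focus_metrics")  # would now raise PermissionError
```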

The Ripple Effect: What's Happening Elsewhere
Chile was the first domino. It hasn't been the last.
Spain
In 2021, Spain adopted a Charter of Digital Rights that includes provisions on the use of neurotechnologies and the protection of neural data. While not as comprehensive as Chile's constitutional approach, it signals that one of the EU's largest economies takes neurorights seriously. Spain's active role in EU policy discussions means these provisions are likely to influence broader European frameworks.
Brazil
Brazil's Senate began considering a constitutional amendment on neurorights (PEC 29/2023) in 2023, modeled closely on Chile's approach. Given Brazil's population of over 200 million, passage would represent by far the largest-scale implementation of neurorights protections.
Mexico
Mexico's Chamber of Deputies introduced neurorights legislation in 2023, focused on the right to mental privacy and protection from non-consensual neural data collection. The bill is still in committee as of early 2026.
The European Union
The EU AI Act, which entered into force in 2024 with obligations phasing in through 2026, classifies many AI systems that process biometric data (a category that arguably extends to neural data) as "high-risk," subjecting them to stringent requirements for transparency, human oversight, and accuracy. The European Parliament has also commissioned studies on whether neural data requires protections beyond what GDPR currently provides.
The United States
The US lags behind at the federal level: there is no federal neurorights legislation and no indication that any is imminent. State-level activity, however, is picking up. In 2024, Colorado became the first state to protect neural data, amending the Colorado Privacy Act to treat it as sensitive data subject to consent requirements. California followed later that year, amending its privacy framework (CCPA/CPRA) to classify neural data as sensitive personal information.
The Critics Have a Point (But Not the One They Think)
The neurorights movement isn't without critics, and some of their objections are worth taking seriously.
"The technology isn't there yet." This is the most common criticism. Current consumer EEG devices can detect broad mental states, not read specific thoughts. Nobody's brainwaves are being decoded into sentences. Why write constitutional amendments for capabilities that don't exist?
The Foundation's response is compelling: that's exactly when you should write them. Privacy protections established after a technology is ubiquitous are reactive, incomplete, and riddled with grandfathered exemptions. Protections established before widespread deployment can shape the technology's development from the start. The time to build the guardrails is before the highway exists, not after traffic is already moving at full speed.
"These rights are too vague to enforce." Legal specificity is a real challenge. How do you measure whether someone's "personal identity" has been altered? How do you prove that an AI system's bias affected a neural data analysis? These are hard questions. But they're not harder than the enforcement challenges that accompanied earlier generations of rights (how do you prove employment discrimination? how do you measure "reasonable expectation of privacy"?). The law figures these things out through case law, regulatory guidance, and iterative refinement.
"This will stifle neurotechnology innovation." Perhaps the most important objection. If companies face strict regulations on brain data, will they invest in developing brain-computer interfaces? Yuste has a clear answer to this: the opposite is true. Without trust, consumers won't adopt neurotechnology at all. Clear, strong privacy protections build the trust that allows a market to develop. The countries and companies that lead on neurorights will be the ones that build the products people actually feel safe using.
What the Foundation Gets Right About the Future
The NeuroRights Foundation's most important insight isn't about any specific right. It's about timing.
Every major technology platform of the last two decades was built during a regulatory vacuum. Social media collected behavioral data for years before privacy laws caught up. AI systems were trained on copyrighted material and personal data before anyone established rules about it. The consequences of this "innovate first, regulate later" approach are well documented: surveillance capitalism, algorithmic bias, misinformation at scale, and a constant game of legal catch-up.
The Foundation is making a bet that neurotechnology can be different. That if the legal, ethical, and technical frameworks are established before the technology reaches mass adoption, we can build a neurotech ecosystem that respects human rights by default rather than by afterthought.
This is why companies building consumer brain-computer interfaces right now have an outsized role in this story. The technical decisions they make today, whether to process data on-device or in the cloud, whether to encrypt at the hardware level, whether to give users genuine control over their brain data, aren't just engineering choices. They're choices about what kind of future neurotechnology creates.
The Neurosity Crown was designed with these principles as architectural commitments, not marketing copy. On-device processing via the N3 chipset. Hardware-level encryption. No third-party access to raw brain data. These decisions align with every one of the Foundation's five neurorights, not because we were trying to check boxes, but because building a [brain-computer interface](/guides/what-is-bci-brain-computer-interface) any other way would betray the people who trust us with their brain data.
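To make the on-device distinction concrete, here's a schematic sketch, in hypothetical Python rather than the Crown's actual firmware or SDK, of the architectural principle: raw signal is reduced to a derived metric locally, and only that metric is ever eligible to leave the device:

```python
# Schematic contrast between cloud-first and on-device-first designs.
# Hypothetical code illustrating the architectural principle only,
# not Neurosity's actual firmware, chipset, or SDK.
import statistics

def extract_focus_metric(raw_eeg_window: list[float]) -> float:
    """Stand-in for on-device feature extraction: reduce a raw signal
    window to one derived score. A real pipeline would band-filter,
    compute spectral power, run a model, and so on."""
    return statistics.fmean(abs(s) for s in raw_eeg_window)

def on_device_pipeline(raw_eeg_window: list[float]) -> dict:
    """Raw samples never leave this function; only the derived metric
    is eligible for transmission."""
    metric = extract_focus_metric(raw_eeg_window)
    del raw_eeg_window  # drop the local reference; nothing below
                        # can touch the raw samples
    return {"focus": round(metric, 3)}  # the only payload that leaves

window = [0.12, -0.08, 0.33, -0.21, 0.05]  # fake EEG samples
payload = on_device_pipeline(window)
print(payload)  # {'focus': 0.158}
```

The design choice the sketch illustrates is that privacy is enforced by architecture rather than by policy: a server that never receives raw neural data cannot leak it, sell it, or be subpoenaed for it.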
The Moral of the Story Is Also the Practical One
Rafael Yuste is fond of saying that neurorights are not a luxury for the future. They're a necessity for the present. And he's right. Not because brain-reading technology is about to crack open our skulls and expose our innermost thoughts. But because the precedents being set right now, in boardrooms, in legislatures, and in terms-of-service agreements that nobody reads, will determine whether the next generation of brain-computer interfaces is built on a foundation of consent and privacy or on a foundation of extraction and surveillance.
The NeuroRights Foundation has given the world a framework. Chile has proven it can be implemented. The question now is whether the rest of the world catches up before the window of opportunity closes.
The most remarkable thing about this story might be the simplest. One neuroscientist looked at the technology he was helping create, realized that the legal system wasn't prepared for it, and decided to do something about it. He didn't wait for a crisis. He didn't wait for a scandal. He didn't wait for a data breach that exposed millions of people's neural patterns.
He built the guardrails first.
That's not just admirable ethics. It's good engineering. You don't add brakes to a car after the first crash. You design them in from the start.

