Your Brain Thinks in Probabilities, Not Certainties

By AJ Keller, CEO at Neurosity • February 2026
The Bayesian brain hypothesis proposes that the brain represents the world not as a fixed picture but as a set of probability distributions. It continuously updates these probabilities using Bayes' theorem, combining prior beliefs with new sensory evidence to compute the most likely interpretation of reality.
This isn't just a theory about how the brain might work. It's a mathematical framework that explains why optical illusions fool you, how you catch a ball without calculus, why experts have better intuition than novices, and how mental illness can emerge from perfectly logical machinery running on bad assumptions.

You've Never Been Certain About Anything (And That's the Point)

Quick experiment. Look at the object nearest to you. A coffee mug, a lamp, your phone. You see it clearly. It's definitely there. You are certain about what it is.

Except you're not.

What's actually happening is that photons are bouncing off that object, hitting your retina, and generating a cascade of electrical signals that arrive at your visual cortex as a noisy, ambiguous, two-dimensional smear of data. From this smear, your brain is computing the most probable three-dimensional object that could have produced these signals. It's running a probability calculation, and the answer it returns with the highest probability is what you "see."

You never see the object. You see the probability.

This is the core of the Bayesian brain hypothesis: the proposal that your brain is, at its deepest level, a probability engine. It doesn't represent the world in certainties. It represents the world in distributions of possibilities, weighted by how likely each possibility is given the available evidence.

It's named after the Reverend Thomas Bayes, an 18th-century English minister who worked out the mathematics of how to update beliefs in light of new evidence. He probably never imagined that his theorem would become a leading candidate for how the most complex object in the universe computes reality.

The Reverend's Theorem (In Human Terms)

Before we can understand the Bayesian brain, we need to understand Bayes' theorem. Don't worry. The math is simpler than it looks, and the intuition behind it is something your brain already does.

Bayes' theorem answers a simple question: How should I update my beliefs when I get new evidence?

Imagine you hear a scratching sound behind a closed door. Before you opened the door, you believed there was a 90% chance your cat was on the other side (because she's usually there) and a 10% chance it was something else. That's your prior belief.

The scratching sound is evidence. How likely is scratching if the cat is there? Very likely: maybe 80%. How likely is scratching if it's something else? Less likely: maybe 10%.

Bayes' theorem takes your prior belief and updates it using the evidence. After hearing the scratching, your belief that the cat is behind the door rises from 90% to something like 98%. The evidence was exactly what you'd expect if your prior were true, so the prior gets strengthened.

Now imagine you hear barking instead. Your prior was "cat" at 90%, but barking is very unlikely from a cat. Bayes' theorem would dramatically revise your belief, dropping "cat" and elevating "dog" even though your prior for "dog" was low.

That's it. That's the whole framework. Prior beliefs + new evidence = updated beliefs. Your brain is doing this constantly, for everything, all the time.
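
If you want to see the arithmetic, here's a minimal sketch in Python. The scratching numbers are the ones from the example above; the barking likelihoods are invented for illustration.

```python
# A minimal Bayes update for a binary hypothesis ("cat" vs. "not cat").

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) given a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    total_evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / total_evidence

# Scratching is expected if the cat is there, so the prior strengthens.
print(bayes_update(prior=0.9, p_evidence_if_true=0.8,
                   p_evidence_if_false=0.1))   # ~0.986, up from 0.90

# Barking is very unlikely from a cat (likelihoods invented), so the
# prior collapses despite starting at 90%.
print(bayes_update(prior=0.9, p_evidence_if_true=0.01,
                   p_evidence_if_false=0.6))   # ~0.13, down from 0.90
```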

Your Brain Didn't Read Bayes, But It Does Bayes

Here's the genuinely remarkable thing: no one taught your brain Bayes' theorem. No neurons attend statistics lectures. Yet decades of behavioral and neural research show that the brain performs computations that are, at minimum, approximately Bayesian.

The evidence comes from multiple directions.

Visual Perception Is Bayesian

Consider this: you're looking at a two-dimensional image on your retina, but you perceive a three-dimensional world. How? There are infinitely many 3D arrangements that could produce any given 2D retinal image. Your brain has to pick one, and it consistently picks the most probable one given its priors about how the world usually works.

A 1999 study by Daniel Kersten at the University of Minnesota showed that visual perception closely follows Bayesian predictions. When he manipulated the ambiguity of visual stimuli, subjects' perceptions shifted the way Bayes' theorem predicts: more ambiguous input led to more prior-dominated perception, while clearer input led to more evidence-dominated perception.

Optical illusions are Bayesian priors exposed. The Müller-Lyer illusion (the arrows that make two identical lines look different lengths) works because your brain has a prior about perspective: a line with outward-flaring fins resembles the inside corner of a room, which is farther away, and an object that is farther away yet casts the same retinal image must be physically larger. The prior biases the interpretation. You "see" the line as longer not because the evidence says so, but because the prior pulls your perception toward the most probable interpretation in a 3D world.
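
Here's a sketch of that prior-versus-evidence trade-off, assuming the textbook setup of a Gaussian prior and a Gaussian likelihood (not any particular study's stimuli). The posterior mean is an average weighted by precision, i.e. one over the variance.

```python
# Gaussian prior x Gaussian likelihood: the posterior mean is a
# precision-weighted average. Noisy evidence -> the prior dominates;
# clean evidence -> the evidence dominates. All numbers are invented.

def posterior_mean(prior_mean, prior_sd, observed, obs_sd):
    w_prior = 1 / prior_sd**2   # precision (confidence) of the prior
    w_obs = 1 / obs_sd**2       # precision (clarity) of the evidence
    return (w_prior * prior_mean + w_obs * observed) / (w_prior + w_obs)

# The prior expects 0; the sensory evidence says 10.
print(posterior_mean(0.0, prior_sd=1.0, observed=10.0, obs_sd=5.0))
# ~0.4: ambiguous input, the prior wins
print(posterior_mean(0.0, prior_sd=1.0, observed=10.0, obs_sd=0.2))
# ~9.6: clear input, the evidence wins
```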

Motor Control Is Bayesian

When you reach for a cup of coffee, your brain doesn't have perfect information about where the cup is or where your hand is. Sensory signals are noisy and delayed. Yet you grab the cup with remarkable accuracy. How?

In 2004, Konrad Kording and Daniel Wolpert at University College London showed that the brain solves this kind of problem using Bayesian integration. Their subjects learned a prior distribution over how a cursor would be displaced from the hand, then combined that prior with visual feedback about the displacement, weighting each source by its reliability. When the visual feedback was artificially blurred, subjects shifted toward relying on the learned prior, exactly as a Bayesian model would predict.

The same reliability-weighted logic, known as optimal cue integration, has been demonstrated between the senses: in touch, hearing, balance, and even the integration of information across seconds of time. The brain doesn't just average its sensory channels. It weights them by reliability. That's Bayesian.
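
The same precision-weighting again, now between two senses. This is a sketch assuming Gaussian noise on each cue; the positions and noise levels are invented.

```python
# Reliability-weighted fusion of two noisy position estimates (cm).

def fuse(cue_a, var_a, cue_b, var_b):
    """Precision-weighted combination of two cues, plus the fused variance."""
    w_a, w_b = 1 / var_a, 1 / var_b
    estimate = (w_a * cue_a + w_b * cue_b) / (w_a + w_b)
    variance = 1 / (w_a + w_b)  # always lower than either cue alone
    return estimate, variance

vision, proprioception = 10.0, 12.0   # hypothetical estimates of cup position
print(fuse(vision, 1.0, proprioception, 4.0))  # sharp vision: ~10.4, trust the eyes
print(fuse(vision, 9.0, proprioception, 4.0))  # degraded vision: ~11.4, lean on the hand
```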

Decision-Making Is (Approximately) Bayesian

Even high-level decision-making shows Bayesian signatures. In experiments where subjects accumulate evidence before making a decision (like judging whether a cloud of moving dots is drifting left or right), the brain integrates evidence over time in a way that closely matches Bayesian ideal observer models.

This evidence accumulation shows up on EEG as a buildup of activity in parietal and frontal regions, ramping up until it crosses a threshold and a decision is made. The rate of buildup correlates with the strength of the evidence. Strong evidence produces fast, steep ramps. Weak evidence produces slow, gradual ramps. The brain is computing posterior probability and committing to a decision when it crosses a confidence threshold.
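
One way to see the logic is the sequential probability ratio test, the Bayesian ideal observer for a two-choice task. This is a cartoon with invented drift and threshold values, not a fit to any dataset: strong evidence crosses the bound in a few steps, weak evidence ramps slowly, just like the EEG buildup described above.

```python
# Accumulate noisy evidence samples until the running total crosses a bound.

import random

def decide(drift, noise_sd=1.0, threshold=3.0, seed=1):
    """Return (choice, number of samples) for one simulated decision."""
    rng = random.Random(seed)
    total, steps = 0.0, 0
    while abs(total) < threshold:
        total += rng.gauss(drift, noise_sd)  # one noisy glimpse of the dots
        steps += 1
    return ("right" if total > 0 else "left"), steps

print(decide(drift=0.8))  # strong evidence: fast, steep ramp
print(decide(drift=0.1))  # weak evidence: slow, shallow ramp
```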

The Core Insight

The Bayesian brain hypothesis doesn't claim that neurons literally compute Bayes' theorem. It claims that the brain's computations approximate Bayesian inference closely enough that the math accurately predicts perception, motor control, and decision-making. Whether neurons are literally "doing Bayes" or doing something that produces Bayesian results is a deep open question. The behavioral predictions hold either way.

Priors: Your Brain's Accumulated Wisdom (and Bias)

The most powerful concept in the Bayesian brain framework is the prior. Your priors are everything your brain brings to the table before new evidence arrives. They're the sum total of your experience, your learning, your evolutionary heritage, all encoded as probability distributions over possible states of the world.

Priors are why an experienced radiologist spots a tumor that a medical student misses. The radiologist has stronger, more refined priors about what tumors look like, which means their brain assigns higher probability to "tumor" when ambiguous evidence appears.

Priors are why a native English speaker can read text with missing letters ("Th_ c_t s_t on th_ m_t") while someone who doesn't know English cannot. The prior model of English fills in the gaps.

Priors are why your grandmother's cooking smells like comfort. Your brain has a strong prior linking that specific olfactory pattern to positive emotional states, built over decades of association.

But priors are also why prejudice exists. If your brain has been trained, through media, through culture, through limited exposure, to associate certain faces or names with threat, that prior will bias your perception of ambiguous behavior. A Bayesian brain that has absorbed biased training data will produce biased inferences, not because it's malfunctioning, but because it's doing exactly what it's supposed to do with the priors it has.

This is uncomfortable but important. The Bayesian framework reveals that bias isn't a failure of rationality. It's a feature of a system designed to make fast inferences from incomplete data using prior experience. The solution isn't to stop using priors (you can't). It's to deliberately update them with diverse, representative evidence.

What Bayesian Inference Looks Like on EEG

If the brain really computes Bayesian inference, there should be electrical signatures we can measure. And there are.

The Mismatch Negativity: The Likelihood Signal

The mismatch negativity (MMN) appears on EEG about 150-250 milliseconds after an unexpected stimulus. Play a series of identical tones and then change the pitch, and the MMN fires. In Bayesian terms, the MMN reflects the likelihood term: how surprising the sensory evidence is, that is, how improbable it is under the current prior model.

The amplitude of the MMN scales with how much the stimulus violates the prior. A big deviation from the expected tone produces a big MMN. A small deviation produces a small one. This is exactly what a Bayesian system should do: generate larger error signals when the evidence more strongly contradicts the prior.

The P300: The Belief Update Signal

The P300 component, a positive wave about 300 milliseconds after a stimulus, reflects a different part of the Bayesian computation. While the MMN signals surprise at the sensory level, the P300 signals belief updating at a higher cognitive level.

In oddball paradigms where subjects must count or respond to rare stimuli, the P300 amplitude correlates with how much the subject's beliefs need updating. Very rare events produce large P300s. Moderately rare events produce moderate P300s. Fully expected events produce little or no P300.

In Bayesian terms, the P300 reflects the magnitude of the posterior update. It's the brain's electrical signature of changing its mind.

Frontal Theta: The Uncertainty Signal

When the brain is uncertain, when priors are weak and evidence is ambiguous, frontal theta oscillations (4-8 Hz) increase. This has been shown in gambling tasks, conflict monitoring, and any situation where the brain doesn't have a confident prediction.

In Bayesian terms, frontal theta may reflect the computational cost of inference under high uncertainty. When priors are strong, inference is cheap: the prior dominates and the calculation is fast. When priors are weak, the brain has to rely more heavily on noisy evidence, requiring more computation and generating more theta.
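
Information theory gives each of these signals a candidate quantity. The mapping below is the interpretation discussed above, not a settled identity; the toy numbers reuse the cat-behind-the-door example.

```python
# A toy two-hypothesis belief, before and after evidence, in bits.

import math

prior = [0.9, 0.1]        # P(cat), P(other)
likelihood = [0.8, 0.1]   # P(scratching | each hypothesis)

p_evidence = sum(l * p for l, p in zip(likelihood, prior))
posterior = [l * p / p_evidence for l, p in zip(likelihood, prior)]

# MMN-like: surprise, how improbable the input was under the model.
surprise = -math.log2(p_evidence)

# P300-like: KL(posterior || prior), how far beliefs had to move.
update = sum(q * math.log2(q / p) for q, p in zip(posterior, prior))

# Theta-like: entropy of the prior, how uncertain the system was going in.
uncertainty = -sum(p * math.log2(p) for p in prior)

print(round(surprise, 3), round(update, 3), round(uncertainty, 3))
```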

EEG Component | Timing | Bayesian Role
Mismatch Negativity (MMN) | 150-250 ms | Sensory surprise signal (likelihood)
P300 | 250-500 ms | Belief updating signal (posterior revision)
Frontal theta increase | Sustained | Uncertainty and computational load
Repetition suppression | Progressive | Prior strengthening through confirmation
N400 | ~400 ms | Semantic prediction error

The Neural Code for Probability

So the brain behaves as if it's doing Bayesian inference. But how? Neurons fire or don't fire. They don't obviously compute probability distributions. Where's the math happening?

This is one of the hottest questions in computational neuroscience, and several competing answers are on the table.

Population Coding

The most widely accepted proposal is probabilistic population coding. The idea is that a single neuron doesn't represent a probability. A population of neurons does. The pattern of activity across a group of neurons encodes an entire probability distribution over possible states.

Imagine 100 neurons that respond to different orientations of a visual edge. Each neuron fires most strongly for its "preferred" orientation. When the brain processes an ambiguous edge, the population doesn't settle on one answer. Instead, many neurons fire at moderate rates, with the distribution of activity across the population encoding the brain's uncertainty about the true orientation.

Alexandre Pouget and colleagues at the University of Geneva have shown that neural populations can represent both the best estimate (the mean of the distribution) and the confidence (the width of the distribution) simultaneously. This is exactly what Bayesian inference requires.
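
Here's a sketch of the idea, assuming the textbook model of Poisson-spiking neurons with Gaussian tuning curves (all parameters invented). The point is that the estimate is read out from the whole population at once, and the spread of the resulting scores encodes the confidence.

```python
# Decode a log-posterior over edge orientation from population spike counts.

import math

PREFERRED = list(range(0, 180, 18))   # 10 neurons' preferred orientations (deg)

def tuning(theta, preferred, gain=10.0, width=20.0):
    """Mean firing rate of one neuron when the true orientation is theta."""
    return gain * math.exp(-((theta - preferred) ** 2) / (2 * width**2))

def log_posterior(spike_counts):
    """Poisson log-likelihood of every candidate orientation (flat prior).

    Orientation wrap-around at 0/180 degrees is ignored for simplicity."""
    return {theta: sum(k * math.log(tuning(theta, p)) - tuning(theta, p)
                       for k, p in zip(spike_counts, PREFERRED))
            for theta in range(0, 180, 2)}

# Hypothetical spike counts, peaking for neurons tuned near 54-72 degrees:
counts = [0, 1, 3, 8, 9, 4, 1, 0, 0, 0]
scores = log_posterior(counts)
print(max(scores, key=scores.get))    # the population's best guess (~60 deg)
```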

Sampling-Based Inference

An alternative proposal, championed by researchers like Wolfgang Maass, is that neurons represent probabilities through sampling. Rather than encoding a full distribution at once, neural circuits generate samples from the posterior distribution over time. The fluctuations in neural activity that look like noise might actually be the brain drawing samples from its probability distributions.

This would explain something puzzling about neural variability. Neurons are noisy. Their firing rates fluctuate even when the stimulus is constant. Under the sampling hypothesis, this "noise" is signal. It's the brain exploring the probability space, drawing different samples from its posterior distribution to represent uncertainty.
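
A cartoon of the sampling idea, assuming a simple Metropolis sampler over a one-dimensional posterior (no claim that real circuits do this literally). The jittery trajectory is the representation: its average is the estimate, and its spread is the uncertainty.

```python
# Draw samples from an unnormalized posterior:
# prior N(0, 1) x likelihood N(2, 0.5^2).

import math
import random

def posterior(x):
    return math.exp(-x**2 / 2) * math.exp(-((x - 2.0) ** 2) / (2 * 0.5**2))

def sample(n=5000, step=0.5, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        proposal = x + rng.gauss(0, step)          # jitter the current guess
        if rng.random() < posterior(proposal) / posterior(x):
            x = proposal                           # sometimes accept worse guesses
        samples.append(x)
    return samples

s = sample()
print(sum(s) / len(s))   # ~1.6: between prior mean 0 and evidence mean 2
```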

Predictive Coding

The third proposal connects directly to predictive processing. In this framework, the prior is encoded in the top-down predictions sent from higher to lower cortical areas. The likelihood is encoded in the bottom-up prediction errors sent from lower to higher areas. And the posterior, the updated belief, emerges from the combination of the two.

This is mathematically elegant because, in the standard Gaussian case, the Bayesian update to an estimate is proportional to the precision-weighted prediction error: the gap between what the prior predicted and what the evidence says. Minimizing prediction error is, in effect, computing the posterior.
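
Here's a minimal sketch of one such unit, assuming the textbook Gaussian formulation with invented precisions. The belief settles on the precision-weighted Bayesian answer purely by chipping away at two prediction errors.

```python
# One predictive-coding unit relaxing toward the posterior.

prior_mean, prior_precision = 0.0, 1.0    # top-down prediction and its confidence
observation, obs_precision = 10.0, 4.0    # bottom-up input and its reliability

belief = prior_mean   # current estimate
rate = 0.05           # integration step size

for _ in range(200):
    error_top = prior_precision * (belief - prior_mean)     # belief violates the prior
    error_bottom = obs_precision * (observation - belief)   # input left unexplained
    belief += rate * (error_bottom - error_top)             # descend on the total error

print(belief)   # ~8.0 = (1 * 0 + 4 * 10) / (1 + 4), the Bayesian posterior mean
```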

Three Proposals for How Neurons Do Bayes

Population coding: Groups of neurons represent probability distributions through their collective firing patterns. The shape of the distribution across the population encodes both the best guess and the uncertainty.

Neural sampling: Individual neurons draw samples from probability distributions over time. What looks like noise in neural firing is actually the brain exploring possible interpretations.

Predictive coding: Top-down predictions carry the prior, bottom-up errors carry the likelihood, and the posterior emerges from their interaction across cortical layers.

These proposals are not mutually exclusive. The brain may use different mechanisms for different computations.

When the Probability Engine Breaks

The Bayesian framework becomes especially powerful when you use it to understand what goes wrong in mental illness.

If the brain is a Bayesian machine, then mental illness can be understood as pathological inference. Not irrational thinking in a loose sense, but mathematically specifiable failures in the Bayesian machinery.

Anxiety is what happens when threat-related priors are too strong and too precise. The brain assigns high probability to danger even when sensory evidence is ambiguous or benign. The system isn't irrational. It's doing correct Bayesian inference with pathologically calibrated priors. If your prior probability for "this social situation will go badly" is 90%, then even neutral evidence will produce a posterior that's heavily weighted toward "bad." The math works perfectly. The priors are the problem.
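
To see it numerically, here's the same binary update from earlier with invented likelihoods: the evidence is mildly reassuring (half as likely under "this is going badly" as under "this is fine"), yet the anxious prior barely moves.

```python
# Same evidence, same math, different priors.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

calibrated = bayes_update(0.10, 0.3, 0.6)  # typical prior for "this goes badly"
anxious = bayes_update(0.90, 0.3, 0.6)     # anxious prior for the same belief
print(calibrated, anxious)                 # ~0.05 vs ~0.82 after identical evidence
```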

Chronic pain presents a similar picture: it often persists even after the original injury has healed. In a Bayesian framework, the brain's prior for "this body region is damaged and producing pain" has become so strong and precise that actual sensory evidence from the healed tissue can't override it. The prior dominates the posterior, and the patient continues to experience pain that reflects the model, not the body.

Psychosis involves the opposite problem. Here, the precision of sensory evidence is reduced, so priors run unchecked. The brain generates internal predictions (voices, patterns, conspiracies) and, because sensory prediction errors are weakened, these internal predictions aren't corrected. The result is hallucinations and delusions that feel utterly real because, within the brain's Bayesian computation, they have the highest posterior probability.

You're a Bayesian Machine Reading About Bayesian Machines

Let me point out something strange about this moment.

Right now, your brain is using Bayesian inference to understand an article about Bayesian inference. It has priors about what kind of article this is (educational, neuroscience-related), what the next sentence will probably say, and whether the claims are credible. As each sentence arrives, your brain computes prediction errors, updates its model, and revises its beliefs.

If you started this article skeptical of the Bayesian brain hypothesis, your prior was "this probably isn't right." Each piece of evidence, each study, each example, generated a prediction error that may have gradually shifted your posterior toward "maybe this is right." Or maybe your priors were too strong and the evidence wasn't sufficient. In which case, you're still skeptical, and Bayes' theorem would say: that's perfectly rational given your priors.

This self-referential quality is part of what makes the Bayesian brain hypothesis so compelling. It's not just a theory about perception or motor control. It's a theory about thinking itself. About belief formation. About what it means to change your mind.

The Neurosity Crown captures the EEG signatures of this process. The mismatch negativity when your brain detects unexpected input. The P300 when it updates a belief. The frontal theta when it's uncertain. These aren't abstract measurements. They're readouts of your personal probability engine at work, visible in the electrical fluctuations of your cortex, sampled 256 times per second across eight channels.

The 18th-century reverend who worked out the math could never have imagined that his theorem would describe the organ producing human thought. But two and a half centuries later, that appears to be exactly what it does.

Your brain is not a certainty machine. It's a doubt machine that converts uncertainty into useful action through the elegant mathematics of probability. Every perception is a bet. Every decision is a wager. Every moment of conscious experience is the posterior distribution that your 86 billion neurons computed as their best guess about what's actually out there.

You've never been certain about anything. You've only ever been very, very probably right. And that, it turns out, is more than enough.

Frequently Asked Questions
What is the Bayesian brain hypothesis in simple terms?
The Bayesian brain hypothesis says that your brain works like a probability calculator. Instead of creating a single fixed picture of the world, it maintains a range of possible interpretations, each with an assigned probability. When new sensory evidence arrives, the brain updates these probabilities using Bayes' theorem, a mathematical rule for revising beliefs in light of new data. The interpretation with the highest probability after updating becomes your perception, your conscious experience of reality.
What is Bayes' theorem and how does the brain use it?
Bayes' theorem is a mathematical formula: P(A|B) = P(B|A) × P(A) / P(B). In brain terms, P(A) is the prior belief about what is happening in the world. P(B|A) is how likely the sensory evidence would be if that belief were true. P(A|B) is the updated belief after considering the evidence. The brain does not literally compute this formula. Instead, neural circuits implement approximate Bayesian inference through mechanisms like population coding, where groups of neurons represent probability distributions.
What are priors in the context of the Bayesian brain?
Priors are the brain's existing beliefs or expectations before new evidence arrives. They are built from a lifetime of experience, from evolutionary history encoded in brain architecture, and from recent context. Strong priors are beliefs held with high confidence based on extensive experience. Weak priors are uncertain beliefs. Priors shape perception by biasing interpretation toward likely outcomes. This is why you read ambiguous handwriting correctly, why optical illusions work, and why experts see patterns that novices miss.
How is the Bayesian brain different from predictive processing?
The Bayesian brain hypothesis is the broad idea that the brain performs probabilistic inference using Bayes' theorem. Predictive processing is a specific theory about how the brain implements this inference: through a hierarchy of cortical layers that send predictions downward and prediction errors upward. Think of the Bayesian brain as the principle (the brain computes probabilities) and predictive processing as the proposed mechanism (it does so through hierarchical prediction error minimization). They are complementary, not competing, ideas.
Can you see Bayesian inference on EEG?
Yes, several EEG components reflect Bayesian processing. The mismatch negativity (MMN) at 150-250 ms reflects sensory prediction errors, which are the likelihood signals in Bayesian inference. The P300 at 250-500 ms reflects the updating of beliefs when surprising evidence arrives. Repetition suppression, where EEG responses decrease for repeated stimuli, reflects priors becoming stronger with each confirmation. Frontal theta oscillations (4-8 Hz) increase during uncertain conditions when priors are weak and the brain must rely more heavily on incoming evidence.
Does the Bayesian brain hypothesis explain mental illness?
Several mental health conditions can be understood as disorders of Bayesian inference. In anxiety, priors about threat are too strong, causing the brain to interpret ambiguous evidence as dangerous. In psychosis, the precision of sensory evidence (likelihoods) is reduced, allowing priors to generate perception unchecked by reality, producing hallucinations and delusions. In autism, precision on sensory evidence may be unusually high, making the world feel unpredictable and overwhelming because the brain trusts its priors less than neurotypical brains do.