
The Number That Predicts (Almost) Everything

By AJ Keller, CEO at Neurosity  •  February 2026
The g-factor, or general intelligence, is a statistically derived variable that captures the shared variance across all cognitive tests. People who score well on one type of mental task tend to score well on others, and g represents what all those abilities have in common.
It's psychology's most powerful predictor and its most heated debate. The g-factor has predicted academic success, job performance, income, health, and even lifespan across thousands of studies. But what is it? Is it a real property of the brain or a statistical artifact? Neuroscience is finally answering that question.

The Discovery That Made Psychologists Uncomfortable

In 1904, a British psychologist named Charles Spearman noticed something he couldn't explain away.

Spearman was analyzing the test scores of schoolchildren across different subjects. You'd expect some correlation: a kid who's good at math might also be decent at science, since both involve numerical reasoning. But what Spearman found was much stranger than that.

Performance on every test correlated positively with performance on every other test. Not just math and science. Math and Latin. Latin and music. Music and physical coordination. Every single cognitive ability he measured showed a positive correlation with every other ability. Kids who were good at one thing tended to be at least somewhat good at everything.

This wasn't supposed to happen. If intelligence were a collection of completely independent abilities, as many people assumed, then being good at spatial reasoning should tell you nothing about someone's verbal ability. But it did. Consistently.

Spearman used a statistical technique called factor analysis to extract what these diverse abilities had in common. He called it g, for "general factor." It was a single variable that accounted for roughly 40-50% of the variance across all cognitive tests. People who scored high on g scored high on most tests. People who scored low on g scored low on most tests.

And that finding, that one number could capture so much about human cognitive ability, made a lot of people very uncomfortable. It still does.

What g Actually Is (And Why It's So Easy to Misunderstand)

Let's be precise about this, because the g-factor is one of the most misunderstood concepts in all of psychology.

g is not a test score. You can't take a "g test." It's a statistical extraction, the common thread running through performance on many different cognitive tasks.

Think of it like this. Imagine you measured 50 different physical abilities in a thousand people: sprinting speed, grip strength, flexibility, endurance, vertical jump, swimming speed, reaction time, and so on. If you ran a factor analysis on those measurements, you'd probably find a general factor too. People who are physically capable in one domain tend to be physically capable in others. That general physical fitness factor wouldn't be the same as any single measurement. It would be what all of them share.

That's what g is for cognitive abilities. It's the shared variance. The thing all mental tests have in common.
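The positive manifold and the extraction of a general factor can be reproduced in a few lines. This is a toy simulation, not real test data: the number of tests, the loadings, and the noise levels below are invented for illustration, and PCA on the correlation matrix stands in for Spearman's factor analysis.

```python
# Toy simulation of the positive manifold and a general factor.
# All parameters (loadings, noise) are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_people, n_tests = 1000, 8

# Each person has a latent "g"; each test taps g plus test-specific noise.
g = rng.normal(size=n_people)
loadings = rng.uniform(0.5, 0.8, size=n_tests)  # how strongly each test taps g
noise = rng.normal(size=(n_people, n_tests))
scores = g[:, None] * loadings + noise

# 1) Positive manifold: every test correlates positively with every other.
corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_tests, dtype=bool)]
print("all pairwise correlations positive:", bool((off_diag > 0).all()))

# 2) A "general factor": share of total variance carried by the first
#    principal component of the correlation matrix.
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
print(f"first factor explains {eigenvalues[0] / n_tests:.0%} of the variance")
```

Because every test loads on the same latent variable, the correlation matrix is uniformly positive and a single factor soaks up a large share of the variance, exactly the pattern Spearman observed in 1904.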

Here's what makes g remarkable: it's strong. It shows up in every battery of diverse cognitive tests ever administered to a large enough sample. It doesn't matter which specific tests you use, as long as they're cognitively diverse. You always find the same positive manifold (the fancy term for "everything correlates with everything"), and you always extract roughly the same g factor.

And its predictive power is extraordinary.

| Outcome | Correlation with g | Context |
| --- | --- | --- |
| Academic performance | 0.5-0.7 | Strongest for complex subjects like physics and philosophy |
| Job performance (all occupations) | 0.3-0.5 | Increases with job complexity |
| Job performance (complex jobs) | 0.5-0.6 | Managers, engineers, scientists |
| Income | 0.3-0.4 | After controlling for education and occupation |
| Health behaviors | 0.2-0.3 | Exercise, diet, medication compliance |
| Longevity | 0.2-0.3 | One standard deviation in g predicts 20% reduced mortality risk |
| Criminal behavior (inverse) | -0.2 | Lower g associated with higher incidence |

A single variable that predicts academic success, job performance, income, health, and even how long you live? That's either the most important finding in psychology or the most dangerous oversimplification in science. The argument over which it is has been running for 120 years.

The Positive Manifold: Why Everything Correlates With Everything

The fact that sits at the foundation of g is this: if you give someone a large battery of cognitive tests, their scores on every test will correlate positively with their scores on every other test. This is called the positive manifold, and it's one of the most replicated findings in all of behavioral science.

Good vocabulary? You'll also tend to have good spatial reasoning. Fast processing speed? You'll also tend to have high working memory capacity. Strong logical reasoning? You'll also tend to be good at recognizing patterns.

Why?

This is where the debate starts. There are broadly three schools of thought.

The g-as-real-thing camp argues that g reflects a genuine, unitary property of the brain. Something about overall neural architecture, whether it's white matter integrity, neural processing efficiency, or network connectivity, creates a general capacity that influences all cognitive operations. g is real in the same way that height is real: a single measurement that captures something meaningful about an organism.

The g-as-sampling camp argues that g emerges because all cognitive tests sample from the same pool of basic neural processes: processing speed, working memory, attention control. Tests correlate because they all draw on these shared building blocks, not because there's some overarching "general intelligence" module in the brain. g is real in the same way that "general fitness" is real: a useful summary of correlated abilities, but not a thing in itself.

The mutualism camp argues that g isn't a common cause at all, but rather an emergent property of cognitive abilities that develop together and reinforce each other. A child with slightly better processing speed reads faster. Reading faster builds vocabulary. Better vocabulary improves abstract reasoning. The abilities bootstrap each other, creating correlations without any underlying general factor. g, in this model, is a statistical pattern that emerges from a developmental feedback loop.

Here's the "I had no idea" part: after decades of debate, neuroscience is increasingly suggesting that all three camps might be partially right, and the full picture is richer than any single theory.

Jensen's Clock

Arthur Jensen, one of g's most prominent and controversial advocates, demonstrated a striking relationship between g and simple reaction time. People with higher g scores respond to a flashing light milliseconds faster than people with lower g scores. The effect is small for any single reaction, but it's consistent and well-replicated. Jensen argued this proves g is about neural processing efficiency, not learned knowledge. Critics argued it just proves that reaction time tests are themselves cognitive tests. The debate continues, but the fact that g predicts something as simple as how fast you hit a button when a light turns on is genuinely remarkable.

What Neuroscience Found When It Opened the Hood

For a century, g was purely a statistical construct. Factor analysis could tell you it existed, but it couldn't tell you what it was made of. You can't do a brain scan of a mathematical abstraction.

Then neuroimaging arrived.

Starting in the 1990s and accelerating dramatically in the 2000s, researchers began correlating g scores with brain measures. And g, the statistical ghost, turned out to have a surprisingly clear biological signature.

Brain volume. Total brain size correlates with g at about 0.3 to 0.4. This is a weak-to-moderate correlation, meaning plenty of smart people have small brains and plenty of people with large brains score average on cognitive tests. But it's consistent, and it's real. Roughly 9-16% of the variance in g (the square of that correlation) can be explained by brain volume alone.

White matter integrity. This is where things get more interesting. White matter is the brain's wiring, the myelinated axons that connect different regions. Diffusion tensor imaging (DTI) has consistently shown that people with higher g scores have more intact, better-organized white matter. The signals travel faster. The connections are cleaner. The brain's information highways have fewer potholes.

Cortical thickness. Higher g correlates with greater cortical thickness in specific regions: the lateral prefrontal cortex, the parietal cortex, and the temporal cortex. These are the same regions identified by the Parieto-Frontal Integration Theory (P-FIT) as the core intelligence network. It's not that smarter brains are bigger everywhere. They're thicker in the regions that matter most for reasoning and integration.

Neural efficiency. This is the most elegant finding. When performing cognitive tasks, people with higher g scores show less neural activation, not more. Their brains use fewer resources to achieve the same or better results. It's like comparing a Formula 1 engine to a lawn mower engine: the F1 engine produces vastly more power with better fuel efficiency because it's more precisely engineered.

Multiple EEG studies have captured this efficiency effect in real-time. Higher g is associated with more focused, less diffuse patterns of cortical activation during problem-solving. The brain doesn't light up like a Christmas tree. It activates precisely the regions it needs and keeps the rest quiet.

The Neurosity Crown gives you real-time access to your own brainwave data across 8 EEG channels at 256Hz, with on-device processing and open SDKs.

The Network Model: Intelligence as Information Flow

The most current neuroscience of g has moved beyond asking "which brain regions matter?" to asking "how do brain regions communicate?"

The answer is revealing. g appears to be most closely related to the global efficiency of brain networks, a measure of how quickly and accurately information can travel between any two points in the brain.

Your brain is a network of roughly 86 billion neurons connected by trillions of synapses. Some connections are local (within a single brain region), and some are long-range (connecting distant regions). Network neuroscience uses graph theory to quantify the organizational properties of this network, and the metric that most consistently predicts g is global efficiency: the average of the inverse shortest path lengths between all pairs of brain regions, so that shorter paths mean higher efficiency.

High global efficiency means the brain can get information from point A to point B with fewer intermediate steps. It means the network is well-integrated, with strong hubs that facilitate rapid communication. It's the neural equivalent of a well-designed transit system where you can get from any station to any other station with minimal transfers.
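The transit-system intuition is easy to make concrete. The sketch below uses NetworkX's `global_efficiency` (the average inverse shortest-path length; 1.0 means every node is one hop from every other) on two toy graphs: a ring of purely local connections, and the same ring with two long-range shortcuts added. The graphs are arbitrary illustrations, not brain data.

```python
# Global efficiency on toy graphs: local-only wiring vs. the same wiring
# plus two long-range shortcuts. Illustrative only, not real connectomes.
import networkx as nx

ring = nx.cycle_graph(20)                    # only neighbor-to-neighbor links
shortcut = ring.copy()
shortcut.add_edges_from([(0, 10), (5, 15)])  # two long-range "hub" links

print(f"ring efficiency:      {nx.global_efficiency(ring):.3f}")
print(f"with two shortcuts:   {nx.global_efficiency(shortcut):.3f}")
# Just two long-range edges raise global efficiency: information reaches
# distant nodes with fewer intermediate hops.
```

The same logic at brain scale is why a handful of strong long-range tracts (the "hubs" of the connectome) matter so much for integrated cognition.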

This explains why g predicts performance across such diverse cognitive tasks. Every complex mental activity requires multiple brain regions to share information. Whether you're solving a math problem (frontal and parietal), understanding a metaphor (temporal and frontal), or navigating a new city (parietal and hippocampal), the speed and fidelity of inter-regional communication matters. g, in the network model, is essentially a measure of how well your brain's communication infrastructure works.

The Small-World Architecture

Human brain networks have what's called "small-world" topology: most connections are local (like a regular lattice), but there are also long-range shortcuts (like a random network). This architecture is optimal for balancing efficiency with wiring cost. People with higher g scores show brains that are slightly more "small-world" than average, with stronger long-range connections and more efficient hub structures. Their brains have better internet.
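The small-world tradeoff can be demonstrated with the classic Watts-Strogatz model: start from a lattice, rewire a small fraction of edges at random, and compare clustering (local wiring) against average path length (communication cost). The node counts and rewiring probabilities below are arbitrary toy values, not fitted to any brain network.

```python
# Watts-Strogatz sketch of small-world topology. Parameters are toy values.
import networkx as nx

n, k = 200, 10
lattice    = nx.connected_watts_strogatz_graph(n, k, p=0.0, seed=1)  # pure lattice
smallworld = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)  # a few shortcuts
random_net = nx.connected_watts_strogatz_graph(n, k, p=1.0, seed=1)  # fully rewired

for name, G in [("lattice", lattice), ("small-world", smallworld),
                ("random", random_net)]:
    print(f"{name:12s} clustering={nx.average_clustering(G):.2f}  "
          f"avg path length={nx.average_shortest_path_length(G):.1f}")
# The small-world graph keeps near-lattice clustering while its path length
# drops toward the random graph's: efficient communication at low wiring cost.
```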

The Genetics of g: Nature's Strongest Cognitive Effect

Twin studies have consistently found that g is substantially heritable. Identical twins raised apart show IQ correlations of about 0.75. Adoption studies consistently show that adopted children's adult IQs correlate more strongly with those of their biological parents than with those of their adoptive parents. The heritability of g is estimated at 50-80% in adulthood, making it one of the most heritable behavioral traits ever measured.

But here's where the genetics story gets complicated, and genuinely fascinating.

The era of genome-wide association studies (GWAS) has revealed that g isn't controlled by a few "intelligence genes." Instead, thousands of genetic variants, each with a tiny effect, collectively influence cognitive ability. The largest GWAS of intelligence to date (2018, with over 269,000 participants) identified 205 genomic loci associated with intelligence, and even all of them together explained only about 5% of the variance.

This means g is massively polygenic. It's influenced by so many genes, each contributing such a tiny amount, that there's no such thing as a "smart gene." It's the combined effect of your entire genome's influence on brain development, neural myelination, synaptic efficiency, neurotransmitter function, and a hundred other biological processes.
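What "massively polygenic" means is easier to feel with a simulation. In the toy model below, a trait is the sum of thousands of variants with random tiny effects plus environmental noise; no single variant explains more than a sliver of the variance even though the variants collectively explain about half. Every number here (variant count, allele frequency, effect sizes) is invented, not drawn from any real GWAS.

```python
# Toy polygenic model: thousands of tiny effects, no single "smart gene".
# All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_variants = 5000, 2000

# Genotypes: 0/1/2 copies of each variant; each variant gets a tiny effect.
genotypes = rng.binomial(2, 0.5, size=(n_people, n_variants)).astype(float)
effects = rng.normal(0, 1, size=n_variants)
genetic = genotypes @ effects
# Environmental noise sized so genes explain roughly half the trait variance.
trait = genetic + rng.normal(0, genetic.std(), n_people)

# Variance explained by the single strongest variant vs. all variants together.
best = max(np.corrcoef(genotypes[:, j], trait)[0, 1] ** 2
           for j in range(n_variants))
total = np.corrcoef(genetic, trait)[0, 1] ** 2
print(f"best single variant explains {best:.2%} of trait variance")
print(f"all variants together explain {total:.1%}")
```

The gap between those two numbers is the GWAS story in miniature: a highly heritable trait whose heritability is smeared across the whole genome.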

The environmental 20-50% matters too. Childhood nutrition, educational opportunity, exposure to neurotoxins (like lead), stress levels, and cognitive stimulation all influence where within your genetic range your g lands. The interaction between genes and environment isn't additive. It's multiplicative. A genetic predisposition for efficient neural processing means very little if malnutrition during critical developmental periods impairs brain growth.

The Dark History (And Why It Matters for the Science)

Any honest discussion of g has to acknowledge why it makes people nervous.

The concept of general intelligence has been weaponized. In the early 20th century, IQ tests were used to justify forced sterilization programs in the United States. Over 60,000 Americans were forcibly sterilized under laws that used intelligence testing as a criterion. Immigration restrictions in the 1920s were explicitly based on claims about the "inferior intelligence" of certain ethnic groups. The eugenics movement, which led directly to the horrors of Nazi racial science, drew heavily on intelligence research.

This history is not an argument against the science of g. The statistical phenomenon is strong and well-replicated. The biological correlates are real. The predictive validity is among the strongest in behavioral science. But the history is an argument for extreme care in how the science is communicated and applied.

The key scientific points that the historical misuse ignored:

g is about population-level statistics, not individual fate. A high g doesn't guarantee success, and a low g doesn't guarantee failure.

g is substantially influenced by environment. Group differences in average test scores are fully consistent with environmental explanations, including nutritional disparities, educational inequality, and the well-documented effects of discrimination itself on cognitive development.

g is not the totality of intelligence. Practical intelligence, creativity, emotional intelligence, social skills, and domain expertise all contribute to real-world outcomes and are not captured by g.

The science of g is sound. The application of that science has sometimes been monstrous. Separating the two requires both intellectual honesty and moral seriousness.

The EEG Window Into General Cognitive Processing

The brainwave correlates of g have been studied extensively, and they reveal something important about what general intelligence looks like in real-time neural activity.

Alpha power and efficiency. Individuals with higher g show more prominent, more organized alpha rhythms (8-12 Hz) during both rest and cognitive tasks. Alpha is increasingly understood as the brain's "idle" rhythm, a sign that cortical regions are in a ready-but-quiet state. More prominent alpha during a task means the brain is better at suppressing irrelevant activity, keeping uncommitted regions in standby mode rather than wasting energy on noise.

Faster P300 latency. The P300 event-related potential reflects the speed at which the brain evaluates incoming stimuli. Higher g is associated with shorter P300 latency, meaning faster stimulus evaluation. This is consistent with the neural efficiency model: information is processed more quickly and with less wasted effort.

Frontal theta during working memory. When the brain is actively holding and manipulating information, frontal midline theta (4-8 Hz) increases. The strength and coherence of frontal theta during working memory tasks correlates with both working memory capacity and g. More efficient theta generation suggests a more organized prefrontal cortex.

Long-range coherence. EEG coherence between frontal and parietal electrodes during reasoning tasks is consistently higher in individuals with higher g scores. This maps directly onto the network efficiency model: the brain regions that need to collaborate for complex cognition are better synchronized.
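The band-power measures above are straightforward to compute from raw EEG. The sketch below runs a Welch power spectral density on a synthetic 256 Hz signal with a strong 10 Hz (alpha) component, then integrates the PSD over the theta and alpha bands. The signal itself is fabricated for illustration; with real data you would feed in an EEG channel instead.

```python
# Band power from a Welch PSD on a synthetic 256 Hz "EEG" signal.
# The signal (10 Hz sine + white noise) is invented for illustration.
import numpy as np
from scipy.signal import welch

fs = 256                              # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)          # 30 seconds of data
rng = np.random.default_rng(7)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

# Welch PSD with 2-second segments gives 0.5 Hz frequency resolution.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    """Integrate the PSD over [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta = band_power(4, 8)
alpha = band_power(8, 12)
print(f"theta power: {theta:.3f}, alpha power: {alpha:.3f}")
print("alpha dominant:", alpha > theta)
```

The same pipeline, applied per channel and per band, is the building block behind the efficiency measures discussed above: organized alpha at rest, frontal theta under working-memory load.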

These patterns are measurable with consumer-grade EEG technology. The Neurosity Crown's electrode positions at F5, F6 (frontal), C3, C4 (central), CP3, CP4 (centroparietal), and PO3, PO4 (parieto-occipital) span the exact network that P-FIT identifies as the biological basis of g. The 256Hz sampling rate captures both the slow oscillations (theta, alpha) and the faster dynamics (beta, gamma, ERPs) that index cognitive processing efficiency.

Through the Neurosity SDK, developers and researchers can access these signals programmatically. The JavaScript and Python SDKs expose raw EEG data, power spectral density, and frequency-band power, the building blocks of the neural efficiency measures that correlate with g. The MCP integration allows this brain-state data to flow into AI workflows, opening the door to tools that adapt to your cognitive processing patterns.

What g Tells Us (And What It Doesn't)

Here's the clearest way to think about g.

It's real. It reflects genuine properties of brain organization. It predicts important outcomes. Denying this doesn't serve anyone.

It's not everything. It misses creativity, emotional intelligence, practical wisdom, social skill, and the domain-specific expertise that drives most real-world accomplishment. Treating it as the complete picture of human cognitive ability is both scientifically wrong and morally reckless.

It's not fixed. While substantially heritable, it's influenced by environment, and its expression can be supported or undermined by factors within your control: sleep, exercise, nutrition, cognitive engagement, and freedom from chronic stress.

The most productive way to think about g is as a foundation, not a ceiling. It tells you something about the efficiency of your brain's information-processing infrastructure. But what you build on that foundation, the skills you develop, the knowledge you acquire, the emotional and social capacities you cultivate, that's a much bigger story.

Spearman discovered that all cognitive abilities share something in common. A century later, neuroscience has identified what that something is: the efficiency with which your brain's networks communicate. That's worth knowing. What you do with that knowledge is entirely up to you.

And that's the part no test can measure.

Frequently Asked Questions
What is the g-factor in psychology?
The g-factor (general factor of intelligence) is a statistical construct discovered by Charles Spearman in 1904. It represents the common variance shared across all cognitive ability tests. When you give people many different mental tests, performance on all of them tends to correlate positively. The g-factor captures this overlap. It's the single best predictor of cognitive performance across diverse tasks, though it doesn't capture the full picture of any specific ability.
Is the g-factor real or just a statistical artifact?
Neuroscience evidence increasingly supports g as reflecting a real property of the brain, not merely a mathematical abstraction. The g-factor correlates with measurable brain properties: white matter integrity, cortical thickness in specific regions, neural efficiency (how much energy the brain uses during cognitive tasks), and the strength of frontoparietal network connectivity. These biological correlates suggest g represents something physically real about brain organization.
How is g-factor different from IQ?
IQ is a test score; g is a statistical factor. IQ scores are influenced by the specific tests used, the scoring method, and cultural factors. The g-factor is extracted from the correlations between multiple tests and represents only the shared variance, stripping away test-specific abilities. In practice, IQ and g correlate very highly (around 0.9), but they're conceptually different. g is what all cognitive tests have in common; IQ is a particular way of scoring a particular set of tests.
Can you increase your g-factor?
This is heavily debated. The g-factor is substantially heritable (50-80% genetic influence), and no training program has convincingly demonstrated lasting increases in g. Working memory training can improve working memory, but transfer to g remains unproven. However, environmental factors like education, nutrition, and cognitive engagement affect where you land within your genetic range. Physical exercise, adequate sleep, and avoiding neurotoxins can help maintain your existing g-level.
What brain features correlate with higher general intelligence?
Higher g correlates with greater white matter integrity (faster neural communication), more efficient neural processing (less brain activation for the same task), larger total brain volume (weak but consistent correlation of about 0.3), greater cortical thickness in prefrontal and parietal regions, and stronger functional connectivity within the frontoparietal network. The strongest neural correlate is overall brain network efficiency, not the size or activity of any single region.
Why is the g-factor controversial?
The g-factor is controversial for both scientific and societal reasons. Scientifically, some researchers argue it's a statistical artifact of how tests are constructed rather than a real cognitive entity. Societally, historical misuse of intelligence testing for eugenics, racial discrimination, and immigration restriction has made any claims about 'general intelligence' politically charged. The scientific consensus is that g is statistically strong and biologically grounded, but public debate often conflates the science with its historical misuse.
Copyright © 2026 Neurosity, Inc. All rights reserved.