2503.01830
# From Language to Cognition: How LLMs Outgrow the Human Language Network
**Authors**:
- Badr AlKhamissi, Greta Tuckute, Yingtian Tang, Taha Binhuraib, Antoine Bosselut, Martin Schrimpf (EPFL · MIT · Georgia Institute of Technology)
Abstract
Large language models (LLMs) exhibit remarkable similarity to neural activity in the human language network. However, the key properties of language underlying this alignment, and how brain-like representations emerge and change across training, remain unclear. We here benchmark 34 training checkpoints spanning 300B tokens across 8 different model sizes to analyze how brain alignment relates to linguistic competence. Specifically, we find that brain alignment tracks the development of formal linguistic competence (i.e., knowledge of linguistic rules) more closely than functional linguistic competence. While functional competence, which involves world knowledge and reasoning, continues to develop throughout training, its relationship with brain alignment is weaker, suggesting that the human language network primarily encodes formal linguistic structure rather than broader cognitive functions. Notably, we find that the correlation between next-word prediction, behavioral alignment, and brain alignment fades once models surpass human language proficiency. We further show that model size is not a reliable predictor of brain alignment when controlling for the number of features. Finally, using the largest set of rigorous neural language benchmarks to date, we show that language brain alignment benchmarks remain unsaturated, highlighting opportunities for improving future models. Taken together, our findings suggest that the human language network is best modeled by formal, rather than functional, aspects of language. Project Page: language-to-cognition.epfl.ch
Badr AlKhamissi 1 Greta Tuckute 2 Yingtian Tang 1 Taha Binhuraib 3 Antoine Bosselut †,1 Martin Schrimpf †,1 1 EPFL 2 MIT 3 Georgia Institute of Technology
† Equal Supervision
1 Introduction
<details>
<summary>figures/brain-score-llms-main-final-final.drawio-4.png Details</summary>

### Visual Description
## Multi-Metric Model Performance Analysis
### Overview
The image presents three comparative graphs analyzing model performance across different sizes (410M to 6.9B parameters) against three metrics: Brain Alignment, Formal Competence, and Functional Competence. Each graph tracks performance against token count (0-286B) with distinct color-coded model size indicators.
### Components/Axes
1. **Brain Alignment Graph**
- X-axis: Number of Tokens (0-286B)
- Y-axis: Brain Alignment (0-0.6 scale)
- Legend: Model sizes (410M, 1B, 1.4B, 2.8B, 6.9B) with color gradients
- R² values: 0.65 (left) and 0.36 (right)
- Notable annotation: "94.4% of training time" at 100B tokens
2. **Formal Competence Graph**
- X-axis: Number of Tokens (0-286B)
- Y-axis: Formal Competence (0-0.7 scale)
- Legend: Model sizes with line style variations
- Key threshold: Vertical line at 100B tokens
3. **Functional Competence Graph**
- X-axis: Number of Tokens (0-286B)
- Y-axis: Functional Competence (0-0.3 scale)
- Legend: Model sizes with blue gradient
- Vertical threshold line at 100B tokens
### Detailed Analysis
**Brain Alignment Trends**
- 410M (light green): Gradual increase with plateau at ~0.45
- 1B (green): Steeper rise to ~0.55
- 1.4B (dark green): Peaks at 0.58 before declining
- 2.8B (dotted green): Stable ~0.52
- 6.9B (solid green): Sharp peak at 100B tokens (0.58), then decline
**Formal Competence Trends**
- All models show sigmoidal growth
- 410M: Reaches 0.65 at 100B tokens
- 6.9B: Fastest ascent, plateauing at 0.72
- Post-100B tokens: Minimal improvement across models
**Functional Competence Trends**
- Linear progression for all models
- 410M: Max 0.28 at 286B tokens
- 6.9B: Reaches 0.30 at 200B tokens
- Post-100B tokens: Accelerated gains for larger models
### Key Observations
1. **Model Size Correlation**: Larger models consistently outperform smaller ones across all metrics
2. **Token Threshold Effect**: Significant performance shifts occur around 100B tokens
3. **Brain Alignment Paradox**: 6.9B model shows peak at 100B tokens followed by decline
4. **R² Discrepancy**: Brain Alignment has higher correlation (0.65) than Functional Competence (0.36)
5. **Divergent Scaling**: Formal Competence plateaus faster than Functional Competence
### Interpretation
The data suggests diminishing returns in Brain Alignment for the largest model (6.9B) beyond 100B tokens, contrasting with sustained gains in Functional Competence. This implies architectural limitations in neural alignment mechanisms at extreme scale. The 94.4% training time marker at 100B tokens indicates a critical inflection point where models achieve ~95% of their potential performance. The R² values reveal that Brain Alignment metrics better capture model capabilities than Functional Competence measures, possibly due to more direct neural correlation metrics. The consistent performance across all metrics for models above 1.4B suggests diminishing returns in parameter scaling beyond this threshold.
</details>
Figure 1: Model Alignment with the Human Language Network is Primarily Driven by Formal Rather than Functional Linguistic Competence. (a) Average brain alignment across five Pythia models and five brain recording datasets, normalized by cross-subject consistency, throughout training. (b) Average normalized accuracy of the same models on formal linguistic competence benchmarks (two benchmarks). (c) Average normalized accuracy on functional linguistic competence benchmarks (six benchmarks). The x-axis is logarithmically spaced up to 16B tokens, capturing early training dynamics, and then evenly spaced every 20B tokens from 20B to ~300B tokens.
Deciphering the brain's algorithms underlying our ability to process language and communicate is a core goal in neuroscience. Human language processing is supported by the brain's language network (LN), a set of left-lateralized fronto-temporal regions in the brain (Binder et al., 1997; Bates et al., 2003; Gorno-Tempini et al., 2004; Price, 2010; Fedorenko, 2014; Hagoort, 2019) that respond robustly and selectively to linguistic input (Fedorenko et al., 2024a). Driven by recent advances in machine learning, large language models (LLMs) trained via next-word prediction on large corpora of text are now a particularly promising model family to capture the internal processes of the LN. In particular, when these models are exposed to the same linguistic stimuli (e.g., sentences or narratives) as human participants during neuroimaging and electrophysiology experiments, they account for a substantial portion of neural response variance (Schrimpf et al., 2021; Caucheteux and King, 2022; Goldstein et al., 2022; Pasquiou et al., 2022; Aw et al., 2023; Tuckute et al., 2024a; AlKhamissi et al., 2025; Rathi et al., 2025).
1.1 Key Questions and Contributions
This work investigates four key questions, all aimed at distilling why LLMs align with brain responses. Specifically, we investigate the full model development cycle as a combination of model architecture (structural priors) and how linguistic competence emerges across training (developmental experience). We ask: (1) What drives brain alignment in untrained models? (2) Is brain alignment primarily linked to formal or functional linguistic competence (Mahowald et al., 2024)? (3) Do language models diverge from humans as they surpass human-level prediction? (4) Do current LLMs fully account for the explained variance in brain alignment benchmarks? To answer these questions, we introduce a rigorous brain-scoring framework to conduct a controlled and large-scale analysis of LLM brain alignment.
Our findings reveal that the initial brain alignment of models with untrained parameters is driven by context integration. During training, alignment primarily correlates with formal linguistic competence: tasks that probe mastery of grammar, syntax, and compositional rules, such as identifying subject-verb agreement, parsing nested syntactic structures, or completing well-formed sentences. This competence saturates relatively early in training ($\sim$4B tokens), consistent with a plateauing of model-to-brain alignment. Functional linguistic competence, in contrast, concerns how language is used in context to convey meaning, intent, and social/pragmatic content, for example in tasks involving discourse coherence, reference resolution, inference about speaker meaning, or interpreting figurative language. Functional competence emerges later in training, tracks brain alignment less strongly, and continues to grow even after alignment with the language network has saturated.
This disconnect later in training is further exemplified by a fading of the correlation between modelsâ brain alignment and their next-word-prediction performance, as well as their behavioral alignment. Further, we show that model size is not a reliable predictor of brain alignment when controlling for the number of features, challenging the assumption that larger models necessarily resemble the brain more. Finally, we demonstrate that current brain alignment benchmarks remain unsaturated, indicating that LLMs can still be improved to model human language processing.
2 Preliminaries & Related Work
A Primer on Language in the Human Brain
The human language network (LN) is a set of left-lateralized frontal and temporal brain regions supporting language. These regions are functionally defined by contrasting responses to language inputs over perceptually matched controls (e.g., lists of non-words) (Fedorenko et al., 2010). The language network exhibits remarkable selectivity for language processing compared to various non-linguistic inputs and tasks, such as music perception (Fedorenko et al., 2012; Chen et al., 2023) or arithmetic computation (Fedorenko et al., 2011; Monti et al., 2012) (for review, see Fedorenko et al. (2024a)). Moreover, the language network shows only weak responses when participants comprehend or articulate meaningless non-words (Fedorenko et al., 2010; Hu et al., 2023). This selectivity profile is supported by extensive neuroimaging research and further corroborated by behavioral evidence from aphasia studies: when brain damage is confined to language areas, individuals lose their linguistic abilities while retaining other skills, such as mathematics (Benn et al., 2013; Varley et al., 2005), general reasoning (Varley and Siegal, 2000), and theory of mind (Siegal and Varley, 2006).
Model-to-Brain Alignment
Prior work has shown that the internal representations of certain artificial neural networks resemble those in the brain. This alignment was initially observed in the domain of vision (Yamins et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Cichy et al., 2016; Schrimpf et al., 2018, 2020; Cadena et al., 2019; Kubilius et al., 2019; Zhuang et al., 2021) and has more recently been extended to auditory processing (Kell et al., 2018; Tuckute et al., 2023; Koumura et al., 2023) and language processing (Schrimpf et al., 2021; Caucheteux and King, 2022; Goldstein et al., 2022; Kauf et al., 2023; Hosseini et al., 2024; Aw et al., 2023; AlKhamissi et al., 2025; Tuckute et al., 2024b; Rathi et al., 2025).
Untrained Models
Recent work in vision neuroscience has shown that untrained convolutional networks can yield high brain alignment to recordings in the visual ventral stream without the need for training (Geiger et al., 2022; Kazemian et al., 2024). Other works have investigated the inductive biases in different architectures and initializations in models of visual processing (Cichy et al., 2016; Cadena et al., 2019; Geiger et al., 2022), speech perception (Millet and King, 2021; Tuckute et al., 2023), and language (Schrimpf et al., 2021; Pasquiou et al., 2022; Hosseini et al., 2024), highlighting that randomly initialized networks are not random functions (Teney et al., 2024).
3 Methods
3.1 Benchmarks for Brain Alignment
Neuroimaging & Behavioral Datasets
The neuroimaging datasets used in this work can be categorized along three dimensions: the imaging modality, the context length of the experimental materials, and the modality through which the language stimulus was presented to human participants (auditory or visual). Table 1 in Appendix A provides an overview of all datasets in this study. To focus specifically on language, we consider neural units (electrodes, voxels, or regions) associated with the brain's language network, as localized by the original dataset authors using the method described in Section 3.2 and implemented in Brain-Score (Schrimpf et al., 2020, 2021) (however, see Appendix J for control brain regions). An exception is the Narratives dataset, which lacks functional localization. We here approximate the language regions using a probabilistic atlas of the human language network (Lipkin et al., 2022), extracting the top-10% language-selective voxels (from the probabilistic atlas) within anatomically defined language parcels, in line with the functional localization procedure used in the other datasets. In an additional analysis, we investigate model alignment with language behavior using the Futrell et al. (2018) dataset, which contains self-paced, per-word human reading times. See Appendix A for details of each dataset. To the best of our knowledge, this study examines the largest number of benchmarks compared to previous work, providing a more comprehensive and reliable foundation for identifying the properties that drive brain alignment in LLMs. The diversity of datasets ensures that our conclusions generalize beyond specific experimental stimuli and paradigms.
Brain-Alignment Metrics
Following standard practice in measuring brain alignment, we train a ridge regression model to predict brain activity from model representations, using the same linguistic stimuli presented to human participants in neuroimaging studies (Schrimpf et al., 2020, 2021). We then measure the Pearson correlation between the predicted brain activations and the actual brain activations of human participants on a held-out set that covers entirely different stories or topics (see Section 4). This process is repeated over $k$ cross-validation splits, and we report the average (mean) Pearson correlation as our final result. We refer to this metric as Linear Predictivity. In Section 5.1, we demonstrate why other metrics such as Centered Kernel Alignment (CKA; Kornblith et al., 2019) and Representational Similarity Analysis (RSA; Kriegeskorte et al., 2008) are not suitable measures for brain alignment on current language datasets.
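As a concrete sketch, the Linear Predictivity metric can be approximated with closed-form ridge regression and cross-validated Pearson correlation. The snippet below is a minimal NumPy illustration; the fold count, regularization strength `alpha`, and array shapes are illustrative assumptions, not the Brain-Score implementation.

```python
import numpy as np

def linear_predictivity(X, Y, k=5, alpha=1.0, seed=0):
    """Cross-validated ridge regression from model features X (n_stimuli x
    n_features) to brain responses Y (n_stimuli x n_units). Returns the
    Pearson r between predicted and held-out responses, averaged over
    neural units and folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        Xtr, Ytr, Xte, Yte = X[train], Y[train], X[test], Y[test]
        # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
        W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]),
                            Xtr.T @ Ytr)
        pred = Xte @ W
        # Pearson r per neural unit, then averaged across units
        pc = pred - pred.mean(0)
        yc = Yte - Yte.mean(0)
        r = (pc * yc).sum(0) / (np.linalg.norm(pc, axis=0)
                                * np.linalg.norm(yc, axis=0) + 1e-12)
        scores.append(r.mean())
    return float(np.mean(scores))
```

In practice, `X` would hold the localized model-unit activations for each stimulus and `Y` the corresponding voxel/electrode responses, with the regularization strength tuned on the training folds.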
Estimation of Cross-Subject Consistency
To assess the reliability of our datasets and account for the inherent noise in brain recordings, we compute a cross-subject consistency score (Feather et al., 2025), also referred to as the noise ceiling (Schrimpf et al., 2021). The consistency score is estimated by predicting the brain activity of a held-out subject using data from all other subjects, through 10-fold cross-validation over subjects. To obtain a conservative ceiling estimate, we extrapolate over subject pool sizes and report the final value based on extrapolation to infinitely many subjects. For Tuckute2024 we use the theoretical estimate provided by Tuckute et al. (2024b). Consistency scores are provided in Appendix K. To aggregate scores across benchmarks, we normalize each model's Pearson correlation ($r$) score for Linear Predictivity by the cross-subject consistency estimate, using the formula $\textnormal{normalized score}=\frac{\textnormal{raw score}}{\textnormal{consistency}}$. The final alignment score for each model is reported as the average across all benchmarks. When reporting raw alignment instead, we compute the mean Pearson correlation across datasets without normalization.
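The normalization and aggregation step amounts to dividing each benchmark's raw score by its ceiling and averaging; a minimal sketch (the benchmark values in the usage note are hypothetical):

```python
def aggregate_alignment(raw_scores, ceilings):
    """Normalize each benchmark's raw Pearson r by its cross-subject
    consistency (noise ceiling), then average across benchmarks."""
    assert len(raw_scores) == len(ceilings)
    normalized = [r / c for r, c in zip(raw_scores, ceilings)]
    return sum(normalized) / len(normalized)
```

For example, raw scores of 0.3 and 0.2 against ceilings of 0.6 and 0.5 give normalized scores of 0.5 and 0.4, and a final aggregate of 0.45.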
3.2 Functional Localization
The human language network (LN) is defined functionally, which means that units are chosen according to a "localizer" experiment (Saxe et al., 2006). Specifically, the LN is the set of neural units (e.g., voxels/electrodes) that are more selective for sentences than for a perceptually matched control condition (Fedorenko et al., 2010). When selecting units from artificial models for comparison against LN units, previous work selected output units from an entire Transformer block based on brain alignment scores (Schrimpf et al., 2021). However, LLMs learn diverse concepts and behaviors during their considerable pretraining, not all of which are necessarily related to language processing, e.g., storage of knowledge (AlKhamissi et al., 2022) and the ability to perform complex reasoning (Huang and Chang, 2023). Therefore, we here follow the method proposed by AlKhamissi et al. (2025), which identifies language units in LLMs using functional localization, as is already standard in neuroscience. This approach offers a key advantage: it enables direct comparisons across models by selecting a fixed set of units, identified through the independent localizer experiment. In this work, we localize 128 units for all models unless otherwise specified, and we show in Appendix H that the results hold when selecting a different number of units.
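A simplified version of this localization step might look as follows. The selectivity statistic here is a t-like contrast between sentence and non-word activations, used as an illustrative stand-in for the exact localizer of AlKhamissi et al. (2025):

```python
import numpy as np

def localize_language_units(acts_sentences, acts_nonwords, k=128):
    """Identify the k most language-selective model units by contrasting
    mean activation on sentences vs. perceptually matched non-word strings,
    mirroring the neuroscience localizer contrast. Inputs are
    (n_stimuli x n_units) activation arrays pooled over candidate layers."""
    # t-like selectivity: difference of means scaled by a pooled std
    diff = acts_sentences.mean(0) - acts_nonwords.mean(0)
    pooled = np.sqrt(acts_sentences.var(0) + acts_nonwords.var(0)) + 1e-12
    selectivity = diff / pooled
    return np.argsort(selectivity)[::-1][:k]  # indices of the top-k units
```

The returned indices define a fixed set of "language units" that is then held constant across all downstream brain-alignment comparisons.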
<details>
<summary>figures/brain-score-llms-untrained-greens.drawio.png Details</summary>

### Visual Description
## Composite Visualization: Brain Alignment and Model Performance Analysis
### Overview
The image contains four panels (a-d) analyzing neural architecture performance across brain alignment metrics and task accuracy. Panels (a) and (b) use bar charts to compare architectures, (c) illustrates a neural network architecture, and (d) compares task performance.
### Components/Axes
#### Panel (a): Architecture Comparison
- **X-axis**: Architectures (MLP, GRU, LSTM, MLP+Mean, Transformer-v1, Transformer-v2)
- **Y-axis**: Brain Alignment (0.0–0.4)
- **Legend**: Right-aligned, color-coded architectures
- **Error Bars**: Present for all bars (±0.02–0.03)
#### Panel (b): Model Configuration Comparison
- **X-axis**: Brain Alignment (0.0–0.6)
- **Y-axis**: Configurations (Tokens, Pos+MLP, MLP, Pos+Attn, Attn, Attn+MLP, Pos+Attn+MLP)
- **Legend**: Implicit via bar colors
- **Error Bars**: Smaller (±0.02)
#### Panel (c): Neural Network Architecture
- **Components**:
- Input: Tokens + Position Embeddings
- Layers: Multihead Attention → LayerNorm → MLP → LayerNorm
- **Flow**: Bottom-up processing with bidirectional connections
#### Panel (d): Task Performance
- **X-axis**: Task Type (Formal, Functional)
- **Y-axis**: Normalized Accuracy (0.0–0.2)
- **Error Bars**: Large for Formal (±0.05), minimal for Functional (±0.01)
### Detailed Analysis
#### Panel (a)
- **Transformer-v2**: Highest alignment (0.38 ± 0.03)
- **Transformer-v1**: 0.25 ± 0.02
- **MLP+Mean**: 0.23 ± 0.02
- **LSTM**: 0.22 ± 0.02
- **GRU**: 0.18 ± 0.02
- **MLP**: 0.12 ± 0.02
#### Panel (b)
- **Pos+Attn+MLP**: 0.55 ± 0.02 (highest)
- **Pos+Attn**: 0.50 ± 0.02
- **Attn+MLP/Attn**: 0.35 ± 0.02
- **Pos+MLP**: 0.38 ± 0.02
- **MLP**: 0.25 ± 0.02
- **Tokens**: 0.10 ± 0.02
#### Panel (d)
- **Formal**: 0.15 ± 0.05
- **Functional**: 0.00 ± 0.01
### Key Observations
1. **Transformer Dominance**: Transformer variants (v1/v2) outperform classical architectures (MLP, GRU, LSTM) by 50–100% in brain alignment.
2. **Attention Impact**: Adding attention mechanisms (Pos+Attn, Attn+MLP) increases alignment by 20–30% over base models.
3. **Task Disparity**: Formal tasks show 15x higher accuracy than Functional, but with 5x greater variability.
### Interpretation
The data demonstrates that Transformer architectures with attention mechanisms achieve superior brain alignment, likely due to their ability to model long-range dependencies. Panel (c) reveals that the combination of multihead attention and MLP layers (as seen in high-performing configurations in panel b) creates a robust representation learning pipeline. The stark contrast in task performance (panel d) suggests Formal tasks may rely more on syntactic patterns captured by these architectures, while Functional tasks require different cognitive processing not yet optimized by current models. The error bars indicate significant variability in Formal task performance, potentially reflecting dataset heterogeneity or task complexity differences.
</details>
Figure 2: Context Integration Drives Brain Alignment of Untrained Models. (a) Sequence-based models (GRU, LSTM, Transformers, and mean pooling) achieve higher brain alignment than models that rely solely on the last-token representation (Linear, MLP), highlighting the importance of temporal integration. Error bars reflect five random initializations in all subplots. (b) Ablation study of architectural components in a single untrained Transformer-v2 block, demonstrating that attention mechanisms combined with positional encoding yield the highest brain alignment. (c) Diagram of the Transformer block architecture used in (b), with components grouped into attention (lower box) and MLP (upper box). (d) Average performance of five Pythia models with untrained parameters on formal and functional linguistic competence benchmarks, showing that formal competence exceeds chance level even in models with untrained parameters.
3.3 Benchmarks for Linguistic Competence
There is substantial evidence in neuroscience research that formal and functional linguistic competence are governed by distinct neural mechanisms Mahowald et al. (2024); Fedorenko et al. (2024a, b). Formal linguistic competence pertains to the knowledge of linguistic rules and patterns, while functional linguistic competence involves using language to interpret and interact with the world. Therefore, to accurately track the evolution of each type of competence during training, we focus on benchmarks that specifically target these cognitive capacities in LLMs.
Formal Linguistic Competence
To assess formal linguistic competence, we use two benchmarks: BLiMP (Warstadt et al., 2019) and SyntaxGym (Gauthier et al., 2020). BLiMP evaluates key grammatical phenomena in English through 67 tasks, each containing 1,000 minimal pairs designed to test specific contrasts in syntax, morphology, and semantics. Complementing this, SyntaxGym consists of 31 tasks that systematically measure the syntactic knowledge of language models. Together, these benchmarks provide a robust framework for evaluating how well LLMs acquire and apply linguistic rules.
Functional Linguistic Competence
Functional competence extends beyond linguistic rules, engaging a broader set of cognitive mechanisms. To assess this, we use six benchmarks covering world knowledge (ARC-Easy, ARC-Challenge (Clark et al., 2018)), social reasoning (Social IQa (Sap et al., 2019)), physical reasoning (PIQA (Bisk et al., 2019)), and commonsense reasoning (WinoGrande (Sakaguchi et al., 2019), HellaSwag (Zellers et al., 2019)). Together, these benchmarks provide a comprehensive evaluation of an LLMâs ability to reason, infer implicit knowledge, and navigate real-world contexts.
Metrics
In line with prior work, we evaluate all benchmarks in a zero-shot setting, using surprisal as the evaluation metric, where the model's prediction is determined by selecting the most probable candidate, as packaged in the language model evaluation harness (Gao et al., 2024). We report accuracy normalized by chance performance, where 0% indicates performance at the random chance level.
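Concretely, this evaluation reduces to picking the candidate with the lowest surprisal (highest log-probability) and rescaling accuracy so that chance maps to zero; a minimal sketch (the function names are our own, not the harness API):

```python
def pick_by_surprisal(candidate_logprobs):
    """Zero-shot choice: select the candidate continuation with the lowest
    surprisal, i.e., the highest total log-probability under the model."""
    return max(range(len(candidate_logprobs)),
               key=lambda i: candidate_logprobs[i])

def normalized_accuracy(raw_acc, chance):
    """Rescale accuracy so that 0 is chance level and 1 is perfect:
    (raw - chance) / (1 - chance)."""
    return (raw_acc - chance) / (1.0 - chance)
```

For a four-way multiple-choice benchmark, chance is 0.25, so a raw accuracy of 0.25 maps to a normalized accuracy of 0.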
Benchmark for Language Modeling
We use a subset of FineWebEdu (Penedo et al., 2024) to evaluate the perplexity of the models on a held-out set. Specifically, we use a maximum sequence length of 2048 and evaluate on the first 1000 documents of the CC-MAIN-2024-10 subset.
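Perplexity on the held-out set is the exponentiated mean negative log-likelihood per token; a minimal sketch, assuming per-token log-probabilities have already been computed by the model:

```python
import math

def perplexity(token_logprobs):
    """Corpus perplexity: exp of the mean negative log-likelihood
    per token, given natural-log probabilities for each token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)
```

A model that assigns uniform probability over a 100-token vocabulary has a perplexity of exactly 100.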
3.4 Large Language Models (LLMs)
Throughout this work, we use eight models from the Pythia model suite (Biderman et al., 2023), spanning a range of sizes: {14M, 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B}. Each model is evaluated across 34 training checkpoints, spanning approximately 300B tokens. These checkpoints include the untrained model, the final trained model, and 16 intermediate checkpoints that are logarithmically spaced up to 128B tokens. The remaining 14 checkpoints are evenly spaced every 20B tokens from 20B to 280B tokens, ensuring a comprehensive analysis of alignment trends throughout training. Since smaller models fail to surpass chance performance on many functional benchmarks, we exclude the 14M, 70M, and 160M models from analyses that compare brain alignment with functional performance.
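The checkpoint schedule described above can be reconstructed approximately as follows; this is a hypothetical sketch (the ~2M starting point and exact log spacing are assumptions inferred from the figures, the true Pythia revision steps may differ, and the list need not total exactly 34 entries):

```python
def checkpoint_tokens_billion():
    """Approximate checkpoint schedule in billions of tokens: 0 tokens
    (untrained), log2-spaced counts from ~2M up to ~128B, then evenly
    spaced every 20B up to 280B; deduplicated and sorted."""
    log_spaced = [0.002 * 2 ** k for k in range(17)]   # ~2M ... ~131B
    evenly_spaced = [20.0 * i for i in range(1, 15)]   # 20B ... 280B
    return sorted(set([0.0] + log_spaced + evenly_spaced))
```

The early log spacing oversamples the first few billion tokens, which is where (per Figure 3) most of the brain-alignment dynamics occur.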
4 Rigorous Brain-Scoring
While substantial progress has been made in measuring alignment between LLM representations and neural activity, there is no standard for comparing brain alignment across datasets and conditions. Therefore, to ensure we perform meaningful inferences, we propose two criteria: (1) alignment should reflect stimulus-driven responses, dropping for random token sequences; and (2) models should generalize to new linguistic contexts. We justify our metrics and cross-validation choices accordingly. For all benchmarks, we identify language-selective units to ensure fair model comparisons, consistent with neural site selection in neuroscience (AlKhamissi et al., 2025).
4.1 Robust Metrics and Generalization Tests
Measuring Stimulus-Driven Responses
We first ask if the alignment procedure is meaningful, i.e., whether the encoding models capture meaningful linguistic information and generalize to new linguistic contexts. Figure 6 (a) in Appendix B shows average brain alignment across all brain datasets under three conditions: (1) a pretrained model processing original stimuli, (2) a pretrained model processing random token sequences, and (3) an untrained model processing original stimuli. To evaluate metric reliability, we expect random sequences to yield significantly lower alignment than real stimuli. However, CKA fails this criterion, assigning similar alignment scores to both, and even untrained models surpass pretrained ones. In contrast, linear predictivity differentiates between real and random stimuli, more so than RSA.
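For reference, linear CKA (Kornblith et al., 2019) compares two representations without fitting any mapping, which is one reason it can behave differently from linear predictivity on these datasets; a minimal NumPy sketch:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two (n_stimuli x features)
    representations. Invariant to isotropic scaling and orthogonal
    transforms; unlike linear predictivity, no regression is fit."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    num = np.linalg.norm(X.T @ Y, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return float(num / den)
```

Because CKA scores whole representations symmetrically rather than predicting held-out responses, it cannot be evaluated under the generalization splits described next, which is part of why it fails the stimulus-driven criterion here.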
Generalization and Contextualization
The second criterion we propose is that LLMs with high brain alignment should generalize to held-out stimuli, ideally far outside the stimuli used for mapping the model to brain activity. A key factor in designing a corresponding cross-validation scheme is contextualization, i.e., how the data is split into train and test sets (Feghhi et al., 2024). The Pereira2018 dataset consists of 24 topics composed of multi-sentence passages, and sentences are presented in their original order to both humans and models. A random sentence split (contextualization) allows sentences from the same topic to appear in both train and test sets, and is thus less demanding of generalization. A stronger generalization test holds out entire topics, preventing models from leveraging shared context. Figure 6 (b) shows that contextualization makes it easier for the model to predict brain activity. In contrast, topic-based splits halve the raw alignment score for pretrained models. The score of untrained models is reduced even more strongly when enforcing generalization across topics, suggesting that much of their alignment is context-dependent. Nonetheless, untrained models retain significant alignment (about 50% of that of pretrained models) even with strong generalization requirements.
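The two splitting schemes differ only in what is grouped when forming folds; a minimal sketch of the stricter, topic-held-out scheme (the fold count and topic labels are illustrative):

```python
import numpy as np

def topic_splits(topics, k=5, seed=0):
    """Yield (train, test) index lists for k cross-validation folds that
    hold out entire topics, so train and test sentences never share
    passage context. Contrast with a random sentence-level split, where
    sentences from the same topic can land on both sides."""
    rng = np.random.default_rng(seed)
    unique = list(dict.fromkeys(topics))  # unique topics, order preserved
    rng.shuffle(unique)
    folds = np.array_split(np.array(unique, dtype=object), k)
    for held_out in folds:
        held = set(held_out)
        test = [i for i, t in enumerate(topics) if t in held]
        train = [i for i, t in enumerate(topics) if t not in held]
        yield train, test
```

For Pereira2018, `topics` would carry the 24 topic labels, one per sentence, and each fold holds out a disjoint subset of whole topics.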
<details>
<summary>figures/brain-score-llms-brain-alignment-final.drawio.png Details</summary>

### Visual Description
## Line Graphs: Brain Alignment Across Pythia Model Sizes
### Overview
Three line graphs compare brain alignment metrics across different Pythia language model sizes (1.4B, 2.8B, 6.9B) against token counts. Each graph shows multiple datasets with shaded confidence intervals, suggesting variability in measurements.
### Components/Axes
- **X-axis**: "Number of Tokens" (0 to 266B, with markers at 2M, 4M, 8M, 16M, 32M, 64M, 128M, 256M, 512M, 1B, 2B, 4B, 8B, 16B, 32B, 64B, 128B, 256B)
- **Y-axis**: "Brain Alignment" (0.0 to 1.2, in 0.2 increments)
- **Legend**: Located at bottom, with:
- **Pereira2018**: Light green circles
- **Blank2014**: Light green crosses
- **Fedorenko2016**: Dark green squares
- **Tuckute2024**: Light green plus signs
- **Narratives**: Dark green diamonds
- **Average**: Dark green stars
### Detailed Analysis
1. **Pythia-1.4B**:
- Brain alignment starts near 0.2 at 2M tokens, peaks at ~0.6 at 128M tokens, then declines to ~0.4 at 266B tokens.
- Shaded regions (confidence intervals) widen significantly after 128M tokens.
2. **Pythia-2.8B**:
- Initial alignment ~0.3 at 2M tokens, rises to ~0.7 at 128M tokens, then stabilizes near 0.6 at 266B tokens.
- Confidence intervals remain narrower than Pythia-1.4B.
3. **Pythia-6.9B**:
- Starts at ~0.25 at 2M tokens, surges to ~0.9 at 128M tokens, then plateaus at ~0.7 at 266B tokens.
- Largest confidence intervals, especially between 128M and 256B tokens.
### Key Observations
- **Model Size Correlation**: Larger models (6.9B > 2.8B > 1.4B) show higher peak brain alignment, particularly at 128M tokens.
- **Dataset Variability**:
- **Fedorenko2016** (squares) consistently shows highest alignment across all models.
- **Pereira2018** (circles) and **Blank2014** (crosses) exhibit lower alignment values.
- **Tuckute2024** (plus signs) and **Narratives** (diamonds) fall between these extremes.
- **Token Threshold**: All models show a notable inflection point at ~128M tokens, where alignment sharply increases before plateauing.
### Interpretation
The data suggests that:
1. **Model Capacity Matters**: Larger Pythia models achieve higher brain alignment, likely due to improved contextual understanding.
2. **Dataset-Specific Performance**: Fedorenko2016 and Tuckute2024 datasets demonstrate stronger alignment, possibly reflecting better-suited training data or task design.
3. **Token Efficiency**: The 128M token threshold appears critical, where models transition from shallow to deeper processing.
4. **Uncertainty Patterns**: Wider confidence intervals in larger models (especially 6.9B) indicate greater variability in alignment measurements at scale.
The shaded regions highlight measurement uncertainty, with Pythia-6.9B showing the most variability, suggesting challenges in maintaining consistent alignment at extreme scales.
</details>
Figure 3: Brain Alignment Saturates Early on in Training. Plots indicate the brain alignment scores of three models from the Pythia model suite with varying sizes (log x-axis up to 16B tokens, uneven spacing after black line). Scores are normalized by their cross-subject consistency scores. Alignment quickly peaks around 2–8B tokens before saturating or declining, regardless of model size (see Appendix D and F for more models).
5 Results
The following sections progressively unpack the emergence and limits of brain alignment with the human language network in LLMs. Section 5.1 establishes the foundation by showing that untrained models already exhibit modest brain alignment, pointing to the role of architectural priors. Building on this, Section 5.2 tracks how alignment evolves with training and reveals that it strongly correlates with the early acquisition of formal linguistic competence, but less so with functional abilities. Section 5.3 then shows that as models exceed human-level performance in next-word prediction, their brain and behavioral alignment begins to diverge, suggesting that at this point, LLMs outgrow their initial alignment with human language processing.
5.1 Brain Alignment of Untrained Models
In Figure 6 we show that untrained models, despite achieving lower alignment scores than their pretrained counterparts ($\sim 50\%$), still achieve relatively strong alignment and surpass that of models evaluated on random token sequences. Therefore, we here ask: what are the main drivers of this surprising alignment?
Inductive Biases of Untrained Models
We evaluate the brain alignment of various LLMs with untrained parameters to determine which architecture exhibits the strongest inductive bias toward the human language network. Figure 2 (a) presents the average alignment across five different random initializations for six different untrained models. Each model consists of a stack of two building blocks from its respective architecture, with a hidden state size of $1024$. To ensure a fair comparison, we apply the localizer to the output representations of the last token in the sequence from these two blocks, extracting 128 units to predict brain activity. Our findings reveal two key insights. First, sequence-based models, such as GRU, LSTM, Transformers, and even a simple mean operation over token representations, exhibit higher brain alignment than models that rely solely on the last token's representation, such as Linear or MLP. In other words, context or temporal integration is a crucial factor in achieving high alignment. Second, we observe a notable difference between Transformer-v1 and Transformer-v2. While Transformer-v2 applies static positional embeddings by directly adding them to token embeddings, Transformer-v1 uses rotary position encoding. Our results suggest that static positional encoding enables models to capture intrinsic temporal dynamics in sentences, possibly by tracking evolving word positions, providing further evidence that temporal integration is critical for brain-like language representations.
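The advantage of context integration can be illustrated with a toy numpy sketch (the dimensions and the synthetic "brain responses" below are illustrative assumptions, not the paper's setup): when the target signal depends on the whole sentence, mean-pooled token features predict it far better than last-token-only features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sents, seq_len, d = 200, 10, 32

# Random "token embeddings" per sentence (stand-in for an untrained model).
tokens = rng.normal(size=(n_sents, seq_len, d))

# Synthetic "brain response": depends on the whole sentence, plus noise.
w_true = rng.normal(size=d)
brain = tokens.mean(axis=1) @ w_true + 0.1 * rng.normal(size=n_sents)

def ridge_r(x, y, lam=1.0):
    """Fit ridge regression on the first half, return Pearson r on the second."""
    half = len(y) // 2
    xtr, ytr, xte, yte = x[:half], y[:half], x[half:], y[half:]
    w = np.linalg.solve(xtr.T @ xtr + lam * np.eye(x.shape[1]), xtr.T @ ytr)
    return np.corrcoef(xte @ w, yte)[0, 1]

r_mean = ridge_r(tokens.mean(axis=1), brain)   # context-integrating features
r_last = ridge_r(tokens[:, -1, :], brain)      # last-token-only features
print(f"mean-pooled r = {r_mean:.2f}, last-token r = {r_last:.2f}")
```

Because the target is a function of the full sequence, the mean-pooled features recover it almost perfectly, while the last token alone carries only a fraction of the signal.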
<details>
<summary>figures/brain-score-llms-lineplot-correlations.drawio.png Details</summary>

### Visual Description
Eight line graphs compare brain alignment (green) against formal competence (top row, blue) and functional competence (bottom row, dark blue) over training, for the average of five Pythia models and for Pythia-1B, 2.8B, and 6.9B individually. The x-axis is the number of training tokens (log scale, 0.01B to 100B); the left y-axis is brain alignment (0.2 to 0.7) and the right y-axis is competence (0.0 to 0.7). R² values per panel: (a) 0.65, (b) 0.82, (c) 0.51, (d) 0.67 for formal competence; (e) 0.36, (f) 0.80, (g) 0.40, (h) 0.51 for functional competence. In every column, formal competence tracks brain alignment more closely than functional competence, which lags behind brain alignment across all models.
</details>
Figure 4: Formal Competence Tracks Brain Alignment More Closely Than Functional Competence. Each column compares how the evolution of formal competence (top) and functional competence (bottom) tracks the evolution of brain alignment during training. The $R^{2}$ values quantify the strength of this relationship, with higher values for formal competence suggesting it is the key driver of the observed brain alignment. (a): Data averaged across models of five different sizes. (b-d): The same comparison as in (a), but for individual models from the Pythia suite at three different sizes.
Key Components of Transformers
To further isolate the key elements responsible for brain alignment in models with untrained parameters, we perform an ablation study on the architectural components of Transformer-v2 using a single block (Figure 2 (c)). By focusing on the untrained model, we isolate the effect of architecture alone, without confounding influences from training. The architectural components analyzed are labeled to the left of each bar in Figure 2 (b). Attn refers to all components inside the lower box in Figure 2 (c), including the first layer norm, multi-head attention, and the residual connection that follows. MLP corresponds to the components in the upper box, comprising the post-attention layer norm, the MLP, and the subsequent residual connection. Pos represents the addition of positional embeddings to token embeddings. Tokens means the model directly returns the raw token embeddings without further processing. This systematic ablation helps pinpoint the components that contribute most to brain alignment. Once again, we observe that integration across tokens, via attention mechanisms and positional encoding, yields the highest brain alignment. Further, we find that untrained models perform better than chance on formal competence benchmarks, mirroring their non-zero brain alignment, whereas functional competence benchmarks remain at chance level for untrained models (see Figure 2 (d)). This further supports the finding that brain alignment is primarily driven by formal, rather than functional, linguistic competence.
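As a rough sketch of how such an ablation can be set up (a minimal untrained single-head block in numpy; the variant names, dimensions, and random initialization are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 12, 64
tok = rng.normal(size=(seq_len, d))          # token embeddings
pos = rng.normal(size=(seq_len, d)) * 0.1    # static positional embeddings

def layer_norm(x):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-5)

def attn_block(x):
    # Single-head self-attention with random (untrained) projections.
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    h = layer_norm(x)
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(d)
    a = np.exp(scores - scores.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)             # softmax over keys
    return x + a @ v                          # residual connection

def mlp_block(x):
    W1 = rng.normal(size=(d, 4 * d)) / np.sqrt(d)
    W2 = rng.normal(size=(4 * d, d)) / np.sqrt(4 * d)
    h = layer_norm(x)
    return x + np.maximum(h @ W1, 0) @ W2     # residual connection

# Ablation variants: each returns the last-token representation,
# which would then be fed to the brain-predictivity pipeline.
variants = {
    "Tokens":          tok[-1],
    "+Pos":            (tok + pos)[-1],
    "+Pos +Attn":      attn_block(tok + pos)[-1],
    "+Pos +Attn +MLP": mlp_block(attn_block(tok + pos))[-1],
}
for name, feat in variants.items():
    print(name, feat.shape)
```

Each variant adds one component on top of the previous one, so comparing their downstream alignment scores isolates that component's contribution.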
<details>
<summary>figures/brain-score-llms-correlation-ppl-behavior.drawio.png Details</summary>

### Visual Description
Eight scatter plots relate brain alignment (y-axis, 0.15 to 0.55) to next-word-prediction perplexity (top row, log scale) and to behavioral alignment (bottom row, 0.36 to 0.46) for Pythia-70M, Pythia-160M, Pythia-2.8B, and the average of 8 Pythia models. Points are split into early (blue circles) and late (orange squares) training stages, with shaded confidence bands. Early-stage correlations are strong (e.g., r = 0.92 for Pythia-70M perplexity; r = 0.97 for Pythia-70M behavioral alignment), while late-stage correlations weaken, become non-significant, or even turn negative (r = -0.54 for Pythia-2.8B behavioral alignment).
</details>
Figure 5: NWP and Behavioral Alignment Correlate with Brain Alignment Only in Early Training. (Top Row): The correlation between brain alignment and language modeling loss is strong and significant during early training (up to 2B tokens), but weakens in later stages (up to ~300B tokens). Results are shown for three models and the average of all 8 models (last column). (Bottom Row): The same analysis for the correlation between brain alignment and behavioral alignment reveals a similar trend: a strong correlation early in training, but no significant relationship as models surpass human proficiency.
5.2 Brain Alignment Over Training
Having established the architectural components that make an untrained model brain-aligned in the previous section, we now investigate how brain alignment evolves during training. To do so, we use the Pythia model suite (Biderman et al., 2023), which consists of models of various sizes, all trained on the same $\sim$300B tokens, with publicly available intermediate checkpoints. We report results for a model from a different family, SmolLM2-360M (Allal et al., 2025), which provides checkpoints at 250B-token intervals, in Appendix F.
Figure 3 illustrates the brain alignment of six Pythia models across five brain recording datasets at 34 training checkpoints, spanning approximately 300B tokens. Each panel presents checkpoints that are logarithmically spaced up to the vertical line, emphasizing the early-stage increase in brain alignment, which occurs within the first 5.6% of training time. Beyond this point, the panels display the remaining training period, where brain alignment stabilizes. More specifically, we observe the following trend: (1) Brain alignment is similar to the untrained model until approximately 128M tokens. (2) A sharp increase follows, peaking around 8B tokens. (3) Brain alignment then saturates for the remainder of training. Despite the vast difference in model sizes shown in Figure 3, the trajectory of brain alignment is remarkably similar.
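The ceiling normalization used in Figure 3 amounts to dividing each raw alignment score by the cross-subject consistency of the benchmark; with hypothetical numbers (illustrative, not the paper's data), locating the alignment peak looks like:

```python
import numpy as np

# Hypothetical per-checkpoint raw alignment scores and a cross-subject
# consistency ceiling (all values illustrative).
tokens_seen = np.array([0.06, 0.12, 0.5, 2.0, 8.0, 33.0, 131.0, 300.0])  # billions
raw_score   = np.array([0.08, 0.09, 0.18, 0.30, 0.34, 0.33, 0.32, 0.31])
ceiling     = 0.52   # inter-subject consistency on the same benchmark

normalized = raw_score / ceiling           # score as fraction of the ceiling
peak = tokens_seen[np.argmax(normalized)]
print(f"peak alignment {normalized.max():.2f} at ~{peak:.0f}B tokens")
```

Dividing by the ceiling makes scores comparable across datasets with different intrinsic noise levels.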
Alignment Tracks Formal Competence
Following the observation that brain alignment plateaus early in training, we next investigate how this relates to the emergence of formal and functional linguistic competence in LLMs. Figure 4 displays the average brain alignment alongside the average performance on formal competence benchmarks (top row) and functional competence benchmarks (bottom row). This is shown for three Pythia models (1B, 2.8B, and 6.9B parameters) and the average of five Pythia models (first column) across the training process. To quantify this relationship, we train a ridge regression model (with a single scalar weight) to predict brain alignment scores from benchmark scores using 10-fold cross-validation. The average $R^2$ value across these folds serves as our metric for comparing the relationship between formal/functional linguistic competence and brain alignment; these values are shown in each panel of Figure 4. Finally, a Wilcoxon signed-rank test on the distributions of $R^2$ values reveals that formal linguistic competence is significantly more strongly correlated with brain alignment than functional competence (W = 0.0, p $<$ 0.002). One possible explanation for why brain alignment emerges before formal linguistic competence is that existing LLM benchmarks assess performance using discrete accuracy thresholds (hard metrics), rather than capturing the gradual progression of competence through more nuanced, continuous measures (soft metrics) (Schaeffer et al., 2023). We show the individual benchmark scores across all checkpoints in Figure 8 in Appendix E.
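A minimal sketch of this analysis on synthetic data (the intercept term, regularization strength, and fold assignment are our assumptions; the paper specifies only a single scalar ridge weight with 10-fold cross-validation):

```python
import numpy as np

def cv_r2_single_weight(benchmark, brain, lam=1e-3, k=10, seed=0):
    """10-fold CV R^2 for predicting brain alignment from one benchmark
    score via a single ridge-regularized scalar weight (plus intercept)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(brain))
    r2s = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        x, y = benchmark[tr], brain[tr]
        xm, ym = x.mean(), y.mean()
        w = ((x - xm) @ (y - ym)) / ((x - xm) @ (x - xm) + lam)  # ridge scalar
        pred = ym + w * (benchmark[fold] - xm)
        ss_res = ((brain[fold] - pred) ** 2).sum()
        ss_tot = ((brain[fold] - brain[fold].mean()) ** 2).sum()
        r2s.append(1 - ss_res / ss_tot)
    return float(np.mean(r2s))

# Synthetic 34-checkpoint trajectories: formal competence tracks alignment
# closely, functional competence is mostly noise relative to it.
rng = np.random.default_rng(1)
align = np.linspace(0.2, 0.6, 34) + 0.02 * rng.normal(size=34)
formal = 0.9 * align + 0.02 * rng.normal(size=34)
functional = np.linspace(0.0, 0.25, 34) ** 2 + 0.05 * rng.normal(size=34)
r2_formal = cv_r2_single_weight(formal, align)
r2_functional = cv_r2_single_weight(functional, align)
print(f"formal R2 = {r2_formal:.2f}, functional R2 = {r2_functional:.2f}")
```

On data constructed this way, the formal-competence predictor yields a much higher held-out $R^2$, mirroring the pattern reported in Figure 4.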
5.3 LLMs Lose Behavioral Alignment
Do language models that improve at next-word prediction remain aligned with human behavioral and neural responses, or do they diverge as they surpass human proficiency? To answer this question, we use the Futrell2018 benchmark, which has been widely used in previous research to measure linguistic behavior (Futrell et al., 2018; Schrimpf et al., 2021; Aw et al., 2023). This dataset consists of self-paced reading times for naturalistic story materials from 180 participants. Per-word reading times provide a measure of incremental comprehension difficulty, a cornerstone of psycholinguistic research for testing theories of sentence comprehension (Gibson, 1998; Smith and Levy, 2013; Brothers and Kuperberg, 2021; Shain et al., 2024). We measure alignment by calculating the Pearson correlation between a model's cross-entropy loss for a specific token in the sequence and the average human per-word reading time. For words that comprise multiple tokens, the token losses are summed before computing the correlation.
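This metric can be sketched as follows (toy inputs; the token-to-word mapping format is our assumption):

```python
import numpy as np

def behavioral_alignment(token_losses, token_to_word, reading_times):
    """Pearson r between per-word model loss and human reading times.
    Losses of subword tokens belonging to the same word are summed first."""
    word_loss = np.zeros(len(reading_times))
    for loss, w in zip(token_losses, token_to_word):
        word_loss[w] += loss          # sum losses over a word's subtokens
    return float(np.corrcoef(word_loss, reading_times)[0, 1])

# Toy example: 5 words, the 3rd split into two subword tokens.
losses = [2.1, 0.8, 1.5, 1.2, 3.0, 0.5]   # cross-entropy loss per token
mapping = [0, 1, 2, 2, 3, 4]              # token index -> word index
rts = [310, 255, 340, 395, 250]           # mean reading time per word (ms)
print(f"r = {behavioral_alignment(losses, mapping, rts):.2f}")
```

Here higher-loss (more surprising) words coincide with longer reading times, so the correlation is strongly positive.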
Early in training, LLMs align with this pattern, but as they surpass human proficiency (Shlegeris et al., 2022), their perplexity drops and they begin encoding statistical regularities that diverge from human intuition (Oh and Schuler, 2023; Steuer et al., 2023). This shift coincides with a decline in behavioral alignment, suggesting that superhuman models rely on different mechanisms than those underlying human language comprehension. Figure 5 shows that brain alignment initially correlates with perplexity and behavioral alignment, but only during the early stages of training (up to ~2B tokens). Beyond this point, these correlations diminish; in larger models, we even observe a negative correlation between brain alignment and behavioral alignment in the later stages of training. This trend reinforces that early training aligns LLMs with human-like processing, while in later stages their language mechanisms diverge from humans.
6 Conclusion
In this work, we investigate how brain alignment in LLMs evolves throughout training, revealing different learning processes at play. We demonstrate that alignment with the human language network (LN) primarily correlates with formal linguistic competence (Mahowald et al., 2024), peaking and saturating early in training. In contrast, functional linguistic competence, which involves world knowledge and reasoning, continues to grow beyond this stage. These findings suggest that the LN primarily encodes syntactic and compositional structure, in line with the language neuroscience literature (Fedorenko et al., 2024a), while broader linguistic functions may rely on other cognitive systems beyond the LN. This developmental approach reveals when brain-like representations emerge, offering a dynamic perspective compared to prior work focused on fully trained models. For example, Oota et al. (2023) demonstrated that syntactic structure contributes to alignment by selectively removing specific properties from already trained models. In contrast, we show that formal linguistic competence actively drives brain alignment during the early phases of training. Similarly, Hosseini et al. (2024) reported that models achieve strong alignment with limited data; we identify why: brain-like representations emerge as soon as core formal linguistic knowledge is acquired. Further, their study evaluated only four training checkpoints and two models on a single dataset (Pereira2018). Our study evaluated eight models (14M–6.9B parameters) across 34 checkpoints spanning 300B tokens, and used five neural benchmarks within a rigorous brain-scoring framework. This extensive design enabled fine-grained correlations with both formal and functional linguistic benchmarks and ensured our results are robust and generalizable.
We also show that model size is not a reliable predictor of brain alignment when controlling for the number of features (see Appendix I). Instead, alignment is shaped by architectural inductive biases, token integration mechanisms, and training dynamics. Our standardized brain-scoring framework eliminates contextualization biases from previous work, ensuring more rigorous evaluations. Finally, we demonstrate that current brain alignment benchmarks are not saturated, indicating that LLMs can still be improved in modeling human language processing. Together, these findings challenge prior assumptions about how alignment emerges in LLMs and provide new insights into the relationship between artificial and biological language processing.
Limitations
While this study offers a comprehensive analysis of brain alignment in LLMs, several open questions remain. If functional competence extends beyond the language network, future work should explore which additional brain regions LLMs align with as they develop reasoning and world knowledge, particularly in other cognitive networks like the multiple demand (Duncan and Owen, 2000) or theory of mind network (Saxe and Kanwisher, 2003; Saxe and Powell, 2006). Our findings suggest that LLM brain alignment studies should be broadened from the LN to downstream representations underlying other parts of cognition. This raises the question of whether specific transformer units specialize in formal vs. functional linguistic competence (AlKhamissi et al., 2025).
One other limitation of our study is that we rely exclusively on brain data collected from experiments conducted with English stimuli. As such, we do not explore whether our findings generalize across languages. This remains an open question and warrants further investigation. That said, evidence from cross-linguistic neuroscience research studying 45 languages from 12 language families (Malik-Moraleda et al., 2022) suggests the existence of a universal language network in the brain that is robust across languages and language families, both in topography and core functional properties.
Finally, a key question remains: Does LLM alignment evolution mirror human language acquisition? Comparing LLM representations to developmental data could reveal insights into learning trajectories and help differentiate formal from functional language learning. Expanding brain-scoring benchmarks and incorporating multimodal models will help address these questions, further bridging the gap between artificial and biological intelligence and deepening our understanding of how both systems process and represent language.
Ethical Statement
This research relies on previously published neuroimaging (fMRI, ECoG) and behavioral datasets, collected by the original research groups under their institutional ethical guidelines with informed consent and IRB/ethics approval. Our work involved only secondary analysis of de-identified data, with no new data collection or direct participant interaction, and we remain committed to using such data responsibly and respectfully.
Acknowledgments
We thank the members of the EPFL NeuroAI and NLP labs for their valuable feedback and insightful suggestions. We also gratefully acknowledge the support of the Swiss National Science Foundation (No. 215390), Innosuisse (PFFS-21-29), the EPFL Center for Imaging, Sony Group Corporation, and a Meta LLM Evaluation Research Grant.
References
- AlKhamissi et al. (2022) Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona T. Diab, and Marjan Ghazvininejad. 2022. A review on language models as knowledge bases. ArXiv, abs/2204.06031.
- AlKhamissi et al. (2025) Badr AlKhamissi, Greta Tuckute, Antoine Bosselut, and Martin Schrimpf. 2025. The LLM language network: A neuroscientific approach for identifying causally task-relevant units. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 10887–10911, Albuquerque, New Mexico. Association for Computational Linguistics.
- Allal et al. (2025) Loubna Ben Allal, Anton Lozhkov, Elie Bakouch, Gabriel Martín Blázquez, Guilherme Penedo, Lewis Tunstall, Andrés Marafioti, Hynek Kydlíček, Agustín Piqueres Lajarín, Vaibhav Srivastav, and 1 others. 2025. SmolLM2: When smol goes big -- data-centric training of a small language model. arXiv preprint arXiv:2502.02737.
- Aw et al. (2023) Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, and Antoine Bosselut. 2023. Instruction-tuning aligns llms to the human brain.
- Bates et al. (2003) Elizabeth Bates, Stephen M. Wilson, Ayse Pinar Saygin, Frederic Dick, Martin I. Sereno, Robert T. Knight, and Nina F. Dronkers. 2003. Voxel-based lesion–symptom mapping. Nature Neuroscience, 6(5):448–450.
- Benn et al. (2013) Yael Benn, Iain D. Wilkinson, Ying Zheng, Kathrin Cohen Kadosh, Charles A.J. Romanowski, Michael Siegal, and Rosemary Varley. 2013. Differentiating core and co-opted mechanisms in calculation: The neuroimaging of calculation in aphasia. Brain and Cognition, 82(3):254–264.
- Biderman et al. (2023) Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar Van Der Wal. 2023. Pythia: a suite for analyzing large language models across training and scaling. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.
- Binder et al. (1997) Jeffrey R. Binder, Julie A. Frost, Thomas A. Hammeke, Robert W. Cox, Stephen M. Rao, and Thomas Prieto. 1997. Human brain language areas identified by functional magnetic resonance imaging. The Journal of Neuroscience, 17(1):353–362.
- Bisk et al. (2019) Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2019. Piqa: Reasoning about physical commonsense in natural language. In AAAI Conference on Artificial Intelligence.
- Blank et al. (2014) Idan Blank, Nancy Kanwisher, and Evelina Fedorenko. 2014. A functional dissociation between language and multiple-demand systems revealed in patterns of BOLD signal fluctuations. Journal of Neurophysiology, 112(5):1105–1118.
- Brothers and Kuperberg (2021) Trevor Brothers and Gina R Kuperberg. 2021. Word predictability effects are linear, not logarithmic: Implications for probabilistic models of sentence comprehension. Journal of Memory and Language, 116:104174.
- Cadena et al. (2019) Santiago A Cadena, George H Denfield, Edgar Y Walker, Leon A Gatys, Andreas S Tolias, Matthias Bethge, and Alexander S Ecker. 2019. Deep convolutional models improve predictions of macaque v1 responses to natural images. PLoS computational biology, 15(4):e1006897.
- Caucheteux and King (2022) Charlotte Caucheteux and Jean-Rémi King. 2022. Brains and algorithms partially converge in natural language processing. Communications biology, 5(1):134.
- Chen et al. (2023) Xuanyi Chen, Josef Affourtit, Rachel Ryskin, Tamar I Regev, Samuel Norman-Haignere, Olessia Jouravlev, Saima Malik-Moraleda, Hope Kean, Rosemary Varley, and Evelina Fedorenko. 2023. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cerebral Cortex, 33(12):7904–7929.
- Cichy et al. (2016) Radoslaw Martin Cichy, Aditya Khosla, Dimitrios Pantazis, Antonio Torralba, and Aude Oliva. 2016. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Scientific reports, 6(1):27755.
- Clark et al. (2018) Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv, abs/1803.05457.
- Duncan and Owen (2000) John Duncan and Adrian M Owen. 2000. Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences, 23(10):475–483.
- Feather et al. (2025) Jenelle Feather, Meenakshi Khosla, N. Apurva, Ratan Murty, and Aran Nayebi. 2025. Brain-model evaluations need the neuroai turing test.
- Fedorenko (2014) Evelina Fedorenko. 2014. The role of domain-general cognitive control in language comprehension. Frontiers in Psychology, 5.
- Fedorenko et al. (2011) Evelina Fedorenko, Michael K Behr, and Nancy Kanwisher. 2011. Functional specificity for high-level linguistic processing in the human brain. Proceedings of the National Academy of Sciences, 108(39):16428–16433.
- Fedorenko et al. (2010) Evelina Fedorenko, Po-Jang Hsieh, Alfonso Nieto-Castanon, Susan L. Whitfield-Gabrieli, and Nancy G. Kanwisher. 2010. New method for fMRI investigations of language: defining ROIs functionally in individual subjects. Journal of Neurophysiology, 104(2):1177–94.
- Fedorenko et al. (2024a) Evelina Fedorenko, Anna A. Ivanova, and Tamar I. Regev. 2024a. The language network as a natural kind within the broader landscape of the human brain. Nature Reviews Neuroscience, 25(5):289–312.
- Fedorenko et al. (2012) Evelina Fedorenko, Josh H. McDermott, Sam Norman-Haignere, and Nancy Kanwisher. 2012. Sensitivity to musical structure in the human brain. Journal of Neurophysiology, 108(12):3289–3300.
- Fedorenko et al. (2024b) Evelina Fedorenko, Steven T. Piantadosi, and Edward A. F. Gibson. 2024b. Language is primarily a tool for communication rather than thought. Nature, 630(8017):575–586.
- Fedorenko et al. (2016) Evelina Fedorenko, Terri L. Scott, Peter Brunner, William G. Coon, Brianna Pritchett, Gerwin Schalk, and Nancy Kanwisher. 2016. Neural correlate of the construction of sentence meaning. Proceedings of the National Academy of Sciences, 113(41):E6256–E6262.
- Feghhi et al. (2024) Ebrahim Feghhi, Nima Hadidi, Bryan Song, Idan A. Blank, and Jonathan C. Kao. 2024. What are large language models mapping to in the brain? a case against over-reliance on brain scores.
- Futrell et al. (2018) Richard Futrell, Edward Gibson, Harry J. Tily, Idan Blank, Anastasia Vishnevetsky, Steven Piantadosi, and Evelina Fedorenko. 2018. The natural stories corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
- Gao et al. (2024) Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, and 5 others. 2024. A framework for few-shot language model evaluation.
- Gauthier et al. (2020) Jon Gauthier, Jennifer Hu, Ethan Wilcox, Peng Qian, and Roger Levy. 2020. SyntaxGym: An online platform for targeted evaluation of language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 70–76, Online. Association for Computational Linguistics.
- Geiger et al. (2022) Franziska Geiger, Martin Schrimpf, Tiago Marques, and James J DiCarlo. 2022. Wiring up vision: Minimizing supervised synaptic updates needed to produce a primate ventral stream. In International Conference on Learning Representations 2022 Spotlight.
- Gibson (1998) Edward Gibson. 1998. Linguistic complexity: locality of syntactic dependencies. Cognition, 68(1):1–76.
- Goldstein et al. (2022) Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A. Nastase, Amir Feder, Dotan Emanuel, Alon Cohen, Aren Jansen, Harshvardhan Gazula, Gina Choe, Aditi Rao, Catherine Kim, Colton Casto, Lora Fanda, Werner Doyle, Daniel Friedman, and 13 others. 2022. Shared computational principles for language processing in humans and deep language models. Nature Neuroscience, 25(3):369–380.
- Gorno-Tempini et al. (2004) Maria Luisa Gorno-Tempini, Nina F. Dronkers, Katherine P. Rankin, Jennifer M. Ogar, La Phengrasamy, Howard J. Rosen, Julene K. Johnson, Michael W. Weiner, and Bruce L. Miller. 2004. Cognition and anatomy in three variants of primary progressive aphasia. Annals of Neurology, 55(3):335–346.
- Hagoort (2019) Peter Hagoort. 2019. The neurobiology of language beyond single-word processing. Science, 366(6461):55–58.
- Harvey et al. (2023) Sarah E Harvey, Brett W. Larsen, and Alex H Williams. 2023. Duality of bures and shape distances with implications for comparing neural representations. In UniReps: the First Workshop on Unifying Representations in Neural Models.
- Hosseini et al. (2024) Eghbal A Hosseini, Martin Schrimpf, Yian Zhang, Samuel Bowman, Noga Zaslavsky, and Evelina Fedorenko. 2024. Artificial neural network language models predict human brain responses to language even after a developmentally realistic amount of training. Neurobiology of Language, pages 1–21.
- Hu et al. (2023) Jennifer Hu, Hannah Small, Hope Kean, Atsushi Takahashi, Leo Zekelman, Daniel Kleinman, Elizabeth Ryan, Alfonso Nieto-Castañón, Victor Ferreira, and Evelina Fedorenko. 2023. Precision fMRI reveals that the language-selective network supports both phrase-structure building and lexical access during language production. Cerebral Cortex, 33(8):4384–4404.
- Huang and Chang (2023) Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1049–1065, Toronto, Canada. Association for Computational Linguistics.
- Kauf et al. (2023) Carina Kauf, Greta Tuckute, Roger Levy, Jacob Andreas, and Evelina Fedorenko. 2023. Lexical-Semantic Content, Not Syntactic Structure, Is the Main Contributor to ANN-Brain Similarity of fMRI Responses in the Language Network. Neurobiology of Language, pages 1–36.
- Kazemian et al. (2024) Atlas Kazemian, Eric Elmoznino, and Michael F. Bonner. 2024. Convolutional architectures are cortex-aligned de novo. bioRxiv.
- Kell et al. (2018) Alexander JE Kell, Daniel LK Yamins, Erica N Shook, Sam V Norman-Haignere, and Josh H McDermott. 2018. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron, 98(3):630–644.
- Khaligh-Razavi and Kriegeskorte (2014) Seyed Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. 2014. Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Computational Biology, 10(11).
- Kornblith et al. (2019) Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. In International Conference on Machine Learning, pages 3519–3529. PMLR.
- Koumura et al. (2023) Takuya Koumura, Hiroki Terashima, and Shigeto Furukawa. 2023. Human-like modulation sensitivity emerging through optimization to natural sound recognition. Journal of Neuroscience, 43(21):3876–3894.
- Kriegeskorte et al. (2008) Nikolaus Kriegeskorte, Marieke Mur, and Peter Bandettini. 2008. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2.
- Kubilius et al. (2019) Jonas Kubilius, Martin Schrimpf, Kohitij Kar, Rishi Rajalingham, Ha Hong, Najib Majaj, Elias Issa, Pouya Bashivan, Jonathan Prescott-Roy, Kailyn Schmidt, Aran Nayebi, Daniel Bear, Daniel L Yamins, and James J DiCarlo. 2019. Brain-like object recognition with high-performing shallow recurrent ANNs. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
- Lipkin et al. (2022) Benjamin Lipkin, Greta Tuckute, Josef Affourtit, Hannah Small, Zachary Mineroff, Hope Kean, Olessia Jouravlev, Lara Rakocevic, Brianna Pritchett, Matthew Siegelman, Caitlyn Hoeflin, Alvincé Pongos, Idan A. Blank, Melissa Kline Struhl, Anna Ivanova, Steven Shannon, Aalok Sathe, Malte Hoffmann, Alfonso Nieto-Castañón, and Evelina Fedorenko. 2022. Probabilistic atlas for the language network based on precision fMRI data from >800 individuals. Scientific Data, 9(1).
- Mahowald et al. (2024) Kyle Mahowald, Anna A Ivanova, Idan A Blank, Nancy Kanwisher, Joshua B Tenenbaum, and Evelina Fedorenko. 2024. Dissociating language and thought in large language models. Trends in Cognitive Sciences.
- Malik-Moraleda et al. (2022) Saima Malik-Moraleda, Dima Ayyash, Jeanne Gallée, Josef Affourtit, Malte Hoffmann, Zachary Mineroff, Olessia Jouravlev, and Evelina Fedorenko. 2022. An investigation across 45 languages and 12 language families reveals a universal language network. Nature Neuroscience, 25(8):1014–1019.
- Millet and King (2021) Juliette Millet and Jean-Rémi King. 2021. Inductive biases, pretraining and fine-tuning jointly account for brain responses to speech. ArXiv, abs/2103.01032.
- Monti et al. (2012) Martin M Monti, Lawrence M Parsons, and Daniel N Osherson. 2012. Thought beyond language: neural dissociation of algebra and natural language. Psychological Science, 23(8):914–922.
- Nastase et al. (2021) Samuel A. Nastase, Yun-Fei Liu, Hanna Hillman, Asieh Zadbood, Liat Hasenfratz, Neggin Keshavarzian, Janice Chen, Christopher J. Honey, Yaara Yeshurun, Mor Regev, et al. 2021. The "Narratives" fMRI dataset for evaluating models of naturalistic language comprehension. Scientific Data, 8(1).
- Oh and Schuler (2023) Byung-Doh Oh and William Schuler. 2023. Why does surprisal from larger transformer-based language models provide a poorer fit to human reading times? Transactions of the Association for Computational Linguistics, 11:336–350.
- Oota et al. (2023) Subba Reddy Oota, Manish Gupta, and Mariya Toneva. 2023. Joint processing of linguistic properties in brains and language models. Preprint, arXiv:2212.08094.
- Pasquiou et al. (2022) Alexandre Pasquiou, Yair Lakretz, John Hale, Bertrand Thirion, and Christophe Pallier. 2022. Neural language models are not born equal to fit brain data, but training helps. Preprint, arXiv:2207.03380.
- Penedo et al. (2024) Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, and Thomas Wolf. 2024. The FineWeb datasets: Decanting the web for the finest text data at scale. In the Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
- Pereira et al. (2018) Francisco Pereira, Bin Lou, Brianna Pritchett, Samuel Ritter, Samuel J. Gershman, Nancy Kanwisher, Matthew Botvinick, and Evelina Fedorenko. 2018. Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9(1):963.
- Price (2010) Cathy J. Price. 2010. The anatomy of language: a review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences, 1191(1):62–88.
- Rathi et al. (2025) Neil Rathi, Johannes Mehrer, Badr AlKhamissi, Taha Binhuraib, Nicholas M. Blauch, and Martin Schrimpf. 2025. TopoLM: Brain-like spatio-functional organization in a topographic language model. In International Conference on Learning Representations (ICLR).
- Sakaguchi et al. (2019) Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. WinoGrande: An adversarial Winograd schema challenge at scale. Communications of the ACM, 64:99–106.
- Sap et al. (2019) Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463–4473, Hong Kong, China. Association for Computational Linguistics.
- Saxe and Kanwisher (2003) R Saxe and N Kanwisher. 2003. People thinking about thinking people: The role of the temporo-parietal junction in "theory of mind". NeuroImage, 19(4):1835–1842.
- Saxe et al. (2006) Rebecca Saxe, Matthew Brett, and Nancy Kanwisher. 2006. Divide and conquer: a defense of functional localizers. Neuroimage, 30(4):1088–1096.
- Saxe and Powell (2006) Rebecca Saxe and Lindsey J. Powell. 2006. It's the thought that counts: Specific brain regions for one component of theory of mind. Psychological Science, 17(8):692–699.
- Schaeffer et al. (2023) Rylan Schaeffer, Brando Miranda, and Oluwasanmi Koyejo. 2023. Are emergent abilities of large language models a mirage? ArXiv, abs/2304.15004.
- Schrimpf et al. (2021) Martin Schrimpf, Idan Asher Blank, Greta Tuckute, Carina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. 2021. The neural architecture of language: Integrative modeling converges on predictive processing. Proceedings of the National Academy of Sciences, 118(45):e2105646118.
- Schrimpf et al. (2018) Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Franziska Geiger, Kailyn Schmidt, Daniel L. K. Yamins, and James J. DiCarlo. 2018. Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like? bioRxiv preprint.
- Schrimpf et al. (2020) Martin Schrimpf, Jonas Kubilius, Michael J. Lee, N. Apurva Ratan Murty, Robert Ajemian, and James J. DiCarlo. 2020. Integrative benchmarking to advance neurally mechanistic models of human intelligence. Neuron, 108(3):413–423.
- Shain et al. (2024) Cory Shain, Clara Meister, Tiago Pimentel, Ryan Cotterell, and Roger Levy. 2024. Large-scale evidence for logarithmic effects of word predictability on reading time. Proceedings of the National Academy of Sciences, 121(10):e2307876121.
- Shlegeris et al. (2022) Buck Shlegeris, Fabien Roger, Lawrence Chan, and Euan McLean. 2022. Language models are better than humans at next-token prediction. ArXiv, abs/2212.11281.
- Siegal and Varley (2006) Michael Siegal and Rosemary Varley. 2006. Aphasia, language, and theory of mind. Social Neuroscience, 1(3–4):167–174.
- Smith and Levy (2013) Nathaniel J. Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302–319.
- Steuer et al. (2023) Julius Steuer, Marius Mosbach, and Dietrich Klakow. 2023. Large gpt-like models are bad babies: A closer look at the relationship between linguistic competence and psycholinguistic measures. arXiv preprint arXiv:2311.04547.
- Teney et al. (2024) Damien Teney, Armand Nicolicioiu, Valentin Hartmann, and Ehsan Abbasnejad. 2024. Neural redshift: Random networks are not random functions. Preprint, arXiv:2403.02241.
- Tuckute et al. (2023) Greta Tuckute, Jenelle Feather, Dana Boebinger, and Josh H. McDermott. 2023. Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions. PLOS Biology, 21(12):1–70.
- Tuckute et al. (2024a) Greta Tuckute, Nancy Kanwisher, and Evelina Fedorenko. 2024a. Language in brains, minds, and machines. Annual Review of Neuroscience, 47.
- Tuckute et al. (2024b) Greta Tuckute, Aalok Sathe, Shashank Srikant, Maya Taliaferro, Mingye Wang, Martin Schrimpf, Kendrick Kay, and Evelina Fedorenko. 2024b. Driving and suppressing the human language network using large language models. Nature Human Behaviour, pages 1–18.
- Varley and Siegal (2000) Rosemary Varley and Michael Siegal. 2000. Evidence for cognition without grammar from causal reasoning and "theory of mind" in an agrammatic aphasic patient. Current Biology, 10(12):723–726.
- Varley et al. (2005) Rosemary A. Varley, Nicolai J. C. Klessinger, Charles A. J. Romanowski, and Michael Siegal. 2005. Agrammatic but numerate. Proceedings of the National Academy of Sciences, 102(9):3519–3524.
- Warstadt et al. (2019) Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377–392.
- Wolf et al. (2019) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
- Yamins et al. (2014) Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. 2014. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624.
- Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Annual Meeting of the Association for Computational Linguistics.
- Zhuang et al. (2021) Chengxu Zhuang, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L.K. Yamins. 2021. Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences (PNAS), 118(3).
| Dataset | Modality | Presentation | Example Stimulus |
| --- | --- | --- | --- |
| Pereira2018 | fMRI | Reading | Accordions produce sound with bellows … |
| Blank2014 | fMRI | Listening | A clear and joyous day it was and out on the wide … |
| Fedorenko2016 | ECoG | Reading | "ALEX", "WAS", "TIRED", "SO", "HE", "TOOK", … |
| Tuckute2024 | fMRI | Reading | The judge spoke, breaking the silence. |
| Narratives | fMRI | Listening | Okay so getting back to our story about uh Lucy … |
| Futrell2018 | Reading Times | Reading | A clear and joyous day it was and out on the wide … |
Table 1: Datasets Used for Evaluating Model Alignment. Neuroimaging datasets were collected via either functional magnetic resonance imaging (fMRI) or electrocorticography (ECoG). Stimuli range from short sentences (Fedorenko2016, Tuckute2024) to paragraphs (Pereira2018) and entire stories (Blank2014, Narratives, Futrell2018) and were presented either visually or auditorily. Futrell2018 is a behavioral dataset.
Figure 6: Evaluating Brain Alignment with Linear Predictivity and No Contextualization is Most Stringent. (a) Average brain alignment across 8 Pythia models under three conditions: (1) a pretrained model processing the original stimuli, (2) a pretrained model processing random sequences of the same length (averaged over five random seeds) as a control condition, and (3) the model with untrained parameters processing the original stimuli. The linear predictivity metric differentiates between meaningful and random stimuli most strongly, while RSA and CKA overestimate alignment. (b) Brain alignment on the Pereira2018 dataset under two cross-validation schemes: with contextualization (random sentence split) and without contextualization (story-based split).
Appendix
Appendix A Neuroimaging & Behavioral Datasets
Table 1 shows the different neuroimaging and behavioral datasets used in this work, along with the dataset modality, presentation mode, and a stimulus example.
A.1 Neuroimaging Datasets
Pereira et al. (2018)
This dataset consists of fMRI activations (blood-oxygen-level-dependent; BOLD responses) recorded as participants read short passages presented one sentence at a time for 4 s. The dataset is composed of two distinct experiments: one with 9 subjects presented with 384 sentences, and another with 6 subjects presented with 243 sentences each. The passages in each experiment spanned 24 different topics. The results reported for this dataset are the average alignment across both experiments after normalizing with their respective cross-subject consistency estimates.
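Scores throughout this work are reported after normalization by cross-subject consistency. As a minimal illustration of this convention (the function name and the example values are ours, not from the paper's code):

```python
def ceiling_normalize(raw_alignment: float, cross_subject_consistency: float) -> float:
    """Express a model-to-brain correlation as a fraction of the estimated
    noise ceiling, i.e., how well subjects predict one another."""
    if cross_subject_consistency <= 0:
        raise ValueError("the consistency ceiling must be positive")
    return raw_alignment / cross_subject_consistency

# e.g., a raw Pearson r of 0.32 against a ceiling of 0.40
score = ceiling_normalize(0.32, 0.40)
```

Normalized scores can slightly exceed 1 when a model predicts held-out responses better than the cross-subject estimate, as in some entries of Table 2.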
Blank et al. (2014)
This dataset also involves fMRI signals, but recorded from only 12 functional regions of interest (fROIs) instead of the higher-resolution signal used by Pereira et al. (2018). The data was collected from 5 participants as they listened to 8 long naturalistic stories adapted from existing fairy tales and short stories (Futrell et al., 2018). Each story was approximately 5 minutes long and contained up to 165 sentences, providing a much longer context than the other neuroimaging datasets. When measuring brain alignment, we use the input stimuli of the last 32 TRs as the model's context.
Fedorenko et al. (2016)
This dataset captures ECoG signals from 5 participants as they read 8-word sentences presented one word at a time for 450 or 700 ms. Following Schrimpf et al. (2021), we select the 52 of the 80 sentences that were presented to all participants.
Tuckute et al. (2024b)
In this dataset, 5 participants read 1000 6-word sentences presented one sentence at a time for 2 s. BOLD responses from voxels in the language network were averaged within each participant and then across participants to yield an overall average language network response to each sentence. The stimuli span a large part of the linguistic space, enabling model-brain comparisons across a wide range of single sentences. Sentence presentation order was randomized across participants. In combination with the diversity of the linguistic materials, this makes it a particularly challenging dataset for model evaluation.
Narratives Dataset (Nastase et al., 2021)
This dataset consists of fMRI data collected while human subjects listened to 27 diverse spoken story stimuli. The collection includes 345 subjects, 891 functional scans, and approximately 4.6 hours of unique audio stimuli. For our story-based analysis, we focused on 5 participants who each listened to both the Lucy and Tunnel stories. Since functional localization was not performed in the Narratives dataset, we approximated language regions by extracting the top 10% of voxels from each anatomically defined language region according to a probabilistic atlas of the human language system (Lipkin et al., 2022). Due to the limited corpus of two stories, traditional 10-fold cross-validation was not feasible. To implement topic-based splitting, we instead partitioned each story into $n$ distinct contiguous segments, treating each segment as an independent narrative unit. This segmentation prevents contextual information from leaking between splits.
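The contiguous-segment splitting used for the Narratives stories can be sketched as follows (a simplified illustration; the actual segment count $n$ and TR bookkeeping in the paper may differ):

```python
import numpy as np

def contiguous_story_splits(n_trs: int, n_segments: int):
    """Partition a story's TR indices into contiguous segments and yield
    (train, test) index pairs, so that held-out test TRs never share
    local narrative context with training TRs."""
    boundaries = np.linspace(0, n_trs, n_segments + 1, dtype=int)
    segments = [np.arange(boundaries[i], boundaries[i + 1])
                for i in range(n_segments)]
    for test_idx in segments:
        train_idx = np.concatenate(
            [seg for seg in segments if seg[0] != test_idx[0]])
        yield train_idx, test_idx
```

Each fold holds out one contiguous block of the story, in contrast to a random TR split, where test TRs would be interleaved with (and contextually dependent on) training TRs.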
A.2 Behavioral Dataset
Futrell et al. (2018)
This dataset consists of self-paced reading times for each word from 180 participants. The stimuli include 10 stories from the Natural Stories Corpus (Futrell et al., 2018), similar to Blank2014. Each participant read between 5 and all 10 stories.
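Behavioral alignment with such reading-time data is typically computed from per-word surprisal, which relates logarithmically to reading time (Smith and Levy, 2013). A minimal sketch, where `prob_fn` is a hypothetical stand-in for a language model's next-word probability:

```python
import math

def word_surprisals(words, prob_fn):
    """Surprisal in bits for each word: -log2 p(word | preceding words).
    `prob_fn(word, context)` is a stand-in for an LM's conditional
    next-word probability, not a real API."""
    surprisals, context = [], []
    for word in words:
        p = prob_fn(word, tuple(context))
        surprisals.append(-math.log2(p))
        context.append(word)
    return surprisals
```

The resulting per-word surprisals can then be correlated with, or regressed against, the self-paced reading times.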
Appendix B Rigorous Brain-Scoring
Despite progress in linking LLMs to neural activity, there is no standard for comparing brain alignment across datasets and conditions. Here, we aim to establish a set of desiderata for evaluating brain alignment. For a model to be considered truly brain-aligned, two key criteria must be met. First, high alignment scores should indicate that the model captures stimulus-driven responses: when presented with a random sequence of tokens, alignment should drop significantly relative to the original linguistic stimuli. Second, a brain-aligned model should generalize to new linguistic contexts rather than overfitting to specific examples. We address these two points in Section 4 to justify our choice of metric and cross-validation scheme for each dataset (see Figure 6). For all benchmarks, we localize language-selective units, which is consistent with neural site selection in neuroscience experiments and allows for fair comparisons across models irrespective of model size (AlKhamissi et al., 2025). A key limitation of previous methods is their reliance on the raw hidden-state dimensions, which inherently favors larger models by providing a greater feature space and artificially inflating alignment scores.
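The first criterion can be checked directly: compute cross-validated linear predictivity once for features of the original stimuli and once for features of length-matched random token sequences, and verify that the score collapses in the latter case. A minimal numpy-only sketch (the ridge penalty and fold scheme are illustrative, not the paper's exact settings):

```python
import numpy as np

def ridge_fit_predict(X_tr, Y_tr, X_te, alpha=1.0):
    """Closed-form ridge regression from model features to voxel responses."""
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ Y_tr)
    return X_te @ W

def linear_predictivity(features, voxels, alpha=1.0, n_folds=5):
    """Cross-validated Pearson r between predicted and measured responses,
    averaged over voxels and folds. Folds are contiguous, mirroring a
    story-based (no-contextualization) split."""
    folds = np.array_split(np.arange(len(features)), n_folds)
    scores = []
    for test in folds:
        train = np.setdiff1d(np.arange(len(features)), test)
        pred = ridge_fit_predict(features[train], voxels[train],
                                 features[test], alpha)
        rs = [np.corrcoef(pred[:, v], voxels[test][:, v])[0, 1]
              for v in range(voxels.shape[1])]
        scores.append(np.nanmean(rs))
    return float(np.mean(scores))
```

Running this once with original-stimulus features and once with random-sequence features should reproduce the large gap seen in Figure 6a for a brain-aligned model.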
| Checkpoint | Pereira2018 | Blank2014 | Tuckute2024 | Fedorenko2016 | Narratives | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| 250B | 1.00 | 0.19 | 0.47 | 0.78 | 0.04 | 0.50 |
| 500B | 0.97 | 0.08 | 0.51 | 0.87 | 0.04 | 0.49 |
| 750B | 0.99 | 0.08 | 0.52 | 0.78 | 0.04 | 0.48 |
| 1T | 1.07 | 0.12 | 0.55 | 0.84 | 0.04 | 0.52 |
| 1.25T | 1.00 | 0.12 | 0.50 | 0.82 | 0.03 | 0.49 |
| 1.5T | 1.00 | 0.12 | 0.52 | 0.79 | 0.03 | 0.49 |
| 1.75T | 0.96 | 0.13 | 0.48 | 0.79 | 0.04 | 0.48 |
| 2T | 1.05 | 0.15 | 0.56 | 0.84 | 0.04 | 0.53 |
| 2.25T | 1.08 | 0.16 | 0.55 | 0.75 | 0.04 | 0.51 |
| 2.5T | 1.12 | 0.17 | 0.52 | 0.72 | 0.01 | 0.51 |
| 2.75T | 1.13 | 0.12 | 0.49 | 0.75 | 0.04 | 0.49 |
| 3T | 1.03 | 0.26 | 0.51 | 0.55 | 0.01 | 0.47 |
| 3.25T | 1.02 | 0.13 | 0.52 | 0.68 | 0.02 | 0.47 |
| 3.5T | 1.04 | 0.14 | 0.52 | 0.72 | 0.04 | 0.49 |
| 3.75T | 1.14 | 0.06 | 0.57 | 0.84 | 0.03 | 0.53 |
| 4T | 1.05 | 0.13 | 0.63 | 0.82 | 0.05 | 0.54 |
Table 2: Brain Alignment Performance of SmolLM2-360M Across Training Checkpoints. Reported scores correspond to normalized correlations with neural responses from five benchmark datasets (Pereira2018, Blank2014, Tuckute2024, Fedorenko2016, Narratives), along with their average (Avg). These results assess the extent to which the modelâs internal representations align with activity in the human language network.
Appendix C Brain-Score Using Additional Metrics
Centered Kernel Alignment (CKA)
Kornblith et al. (2019) introduced CKA as a substitute for Canonical Correlation Analysis (CCA) for assessing the similarity between neural network representations. Unlike linear predictivity, it is a non-parametric metric and therefore requires no additional training. CKA is particularly effective with high-dimensional representations and reliably identifies correspondences between representations of networks trained from different initializations (Kornblith et al., 2019).
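The linear variant of CKA can be written in a few lines; the following is the standard formulation from Kornblith et al. (2019), not the paper's specific implementation:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices with one row per
    stimulus; invariant to isotropic scaling and orthogonal transforms
    of either representation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, "fro")
                   * np.linalg.norm(Y.T @ Y, "fro"))
    return numerator / denominator
```

Because no mapping is fitted, the score is symmetric in its arguments and needs no cross-validation, which is precisely why it can overestimate alignment relative to linear predictivity (Figure 6a).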
Representational Similarity Analysis (RSA)
Kriegeskorte et al. (2008) introduced representational dissimilarity matrices (RDMs) as a solution to the challenge of integrating brain-activity measurements, behavioral observations, and computational models in systems neuroscience. RDMs are part of a broader analytical framework referred to as representational similarity analysis (RSA). In practical terms, to compare an $N$-dimensional network's responses to $M$ different stimuli against brain activity, an $M \times M$ matrix of distances between all pairs of evoked responses is computed for both the brain recordings and the language model's activations (Harvey et al., 2023). The correlation between these two matrices is then used as the measure of brain alignment.
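The RSA computation can be sketched as follows, using correlation distance for the RDMs and Pearson correlation between their upper triangles (distance and comparison measures vary across studies; these particular choices are illustrative):

```python
import numpy as np

def correlation_rdm(acts):
    """M x M representational dissimilarity matrix: 1 - Pearson r between
    every pair of stimulus activation vectors (one row per stimulus)."""
    centered = acts - acts.mean(axis=1, keepdims=True)
    centered = centered / np.linalg.norm(centered, axis=1, keepdims=True)
    return 1.0 - centered @ centered.T

def rsa_score(model_acts, brain_acts):
    """Correlate the upper triangles of the model and brain RDMs."""
    iu = np.triu_indices(len(model_acts), k=1)
    return float(np.corrcoef(correlation_rdm(model_acts)[iu],
                             correlation_rdm(brain_acts)[iu])[0, 1])
```

Like CKA, this requires no fitted mapping between the two representational spaces, only that both were evoked by the same $M$ stimuli.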
| Checkpoint | BLiMP | SyntaxGym | Avg Formal | ARC-Easy | ARC-Challenge | Social-IQA | PIQA | WinoGrande | HellaSwag | Avg Functional |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 250B | 0.81 | 0.80 | 0.81 | 0.33 | 0.66 | 0.35 | 0.70 | 0.55 | 0.47 | 0.52 |
| 500B | 0.80 | 0.78 | 0.79 | 0.78 | 0.66 | 0.35 | 0.70 | 0.56 | 0.49 | 0.53 |
| 750B | 0.80 | 0.82 | 0.81 | 0.69 | 0.69 | 0.34 | 0.71 | 0.57 | 0.50 | 0.53 |
| 1T | 0.81 | 0.78 | 0.80 | 0.69 | 0.69 | 0.35 | 0.71 | 0.57 | 0.50 | 0.54 |
| 1.25T | 0.81 | 0.78 | 0.79 | 0.68 | 0.68 | 0.35 | 0.71 | 0.57 | 0.51 | 0.54 |
| 1.5T | 0.81 | 0.80 | 0.80 | 0.69 | 0.68 | 0.35 | 0.72 | 0.56 | 0.51 | 0.54 |
| 1.75T | 0.80 | 0.79 | 0.79 | 0.68 | 0.68 | 0.36 | 0.72 | 0.59 | 0.51 | 0.54 |
| 2T | 0.81 | 0.81 | 0.81 | 0.69 | 0.69 | 0.35 | 0.72 | 0.59 | 0.52 | 0.54 |
| 2.25T | 0.81 | 0.82 | 0.81 | 0.68 | 0.68 | 0.35 | 0.71 | 0.59 | 0.51 | 0.54 |
| 2.5T | 0.81 | 0.82 | 0.82 | 0.68 | 0.68 | 0.36 | 0.70 | 0.56 | 0.52 | 0.54 |
| 2.75T | 0.81 | 0.82 | 0.81 | 0.25 | 0.23 | 0.35 | 0.50 | 0.57 | 0.50 | 0.50 |
| 3T | 0.81 | 0.81 | 0.81 | 0.25 | 0.23 | 0.35 | 0.50 | 0.57 | 0.50 | 0.50 |
| 3.25T | 0.81 | 0.77 | 0.79 | 0.67 | 0.67 | 0.34 | 0.67 | 0.57 | 0.51 | 0.52 |
| 3.5T | 0.81 | 0.79 | 0.80 | 0.71 | 0.71 | 0.38 | 0.72 | 0.58 | 0.53 | 0.55 |
| 3.75T | 0.80 | 0.78 | 0.79 | 0.72 | 0.72 | 0.58 | 0.58 | 0.54 | 0.56 | 0.56 |
| 4T | 0.81 | 0.79 | 0.80 | 0.73 | 0.73 | 0.39 | 0.74 | 0.61 | 0.56 | 0.57 |
Table 3: Performance of SmolLM2-360M on Formal and Functional Linguistic Benchmarks Across Training Checkpoints. Formal competence is measured using BLiMP and SyntaxGym (with averages reported as Avg Formal). Functional competence is measured using ARC-Easy, ARC-Challenge, Social-IQA, PIQA, WinoGrande, and HellaSwag (with averages reported as Avg Functional). Together, these results characterize the relationship between training progression and the development of different aspects of linguistic ability.
Appendix D Brain Alignment Over Training
Figure 7: Brain Alignment Saturates Early on in Training. Plots complementing Figure 3 showing the brain alignment scores of three other models from the Pythia model suite with varying sizes (log x-axis up to 16B tokens, uneven spacing after black line). Scores are normalized by their cross-subject consistency scores. Alignment quickly peaks around 2â8B tokens before saturating or declining, regardless of model size.
Figure 7 complements Figure 3 in the main paper, illustrating that brain alignment saturates early on in training for all models analyzed in this work.
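The normalization described in the caption can be sketched as follows. This is a minimal illustration, assuming normalization is a simple division of the raw Pearson r by the dataset's cross-subject consistency ceiling; the checkpoint token counts and raw scores below are illustrative, not values from the paper.

```python
import numpy as np

def normalized_alignment(raw_r, ceiling):
    """Normalize a raw Pearson r by the cross-subject consistency ceiling."""
    return raw_r / ceiling

# Illustrative training trajectory (token counts in billions)
tokens = np.array([1, 2, 4, 8, 16, 100, 268])
raw_r = np.array([0.02, 0.05, 0.09, 0.11, 0.11, 0.12, 0.10])
ceiling = 0.144  # e.g., Pereira2018 (Exp 3) from Table 4

norm_scores = normalized_alignment(raw_r, ceiling)
peak = tokens[np.argmax(norm_scores)]
print(f"peak normalized alignment at ~{peak}B tokens")
```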
Appendix E Formal & Functional Scores
<details>
<summary>figures/brain-score-llms-formal-competence.drawio.png Details</summary>

### Visual Description
## Line Graphs: Model Performance Across Formal and Functional Competence
### Overview
The image contains four grouped line graphs comparing model performance across formal and functional competence tasks. Each graph represents a different Pythia model size (1B, 2.8B, 6.9B, and 5 Models) with two subplots per graph: formal competence (top) and functional competence (bottom). The graphs show normalized accuracy against the number of tokens, with shaded regions indicating 95% confidence intervals.
### Components/Axes
- **X-axis**: Number of Tokens in billions (0 to 256, logarithmic scale)
- **Y-axis**: Normalized Accuracy (-0.1 to 0.8)
- **Legends**:
- **Formal Competence**:
- BLiMP (light blue circles)
- SyntaxGym (light blue crosses)
- **Functional Competence**:
- ARC-Easy (dark blue circles)
- PIQA (dark blue crosses)
- Social-IQA (dark blue squares)
- ARC Challenge (dark blue diamonds)
- HellaSwag (dark blue stars)
- WinoGrande (dark blue plus signs)
- **Shading**: Light blue regions represent 95% confidence intervals.
### Detailed Analysis
#### Formal Competence (Top Subplots)
- **Pythia-1B (a)**:
- BLiMP and SyntaxGym start at ~0.1 accuracy, plateauing at ~0.7-0.8 after ~100B tokens.
- Confidence intervals narrow significantly after 100B tokens.
- **Pythia-2.8B (b)**:
- Similar trend to 1B but with slightly higher initial accuracy (~0.2 vs. 0.1).
- Plateaus at ~0.7-0.8 with tighter confidence intervals.
- **Pythia-6.9B (c)**:
- Rapid rise to ~0.7 accuracy by ~50B tokens, plateauing at ~0.8.
- Confidence intervals remain narrow throughout.
- **Pythia (5 Models) (d)**:
- Combines results from multiple models, showing ~0.7 accuracy by ~100B tokens.
- Confidence intervals widen slightly compared to single-model graphs.
#### Functional Competence (Bottom Subplots)
- **Pythia-1B (a)**:
- ARC-Easy and PIQA start near 0, peaking at ~0.3-0.4 after ~200B tokens.
- Social-IQA and ARC Challenge show negative accuracy (-0.1 to 0.1) initially.
- HellaSwag and WinoGrande plateau at ~0.2-0.3.
- **Pythia-2.8B (b)**:
- ARC-Easy and PIQA reach ~0.4-0.5, with Social-IQA and ARC Challenge improving to ~0.1-0.2.
- HellaSwag and WinoGrande plateau at ~0.3-0.4.
- **Pythia-6.9B (c)**:
- ARC-Easy and PIQA peak at ~0.5-0.6, with Social-IQA and ARC Challenge reaching ~0.3-0.4.
- HellaSwag and WinoGrande plateau at ~0.4-0.5.
- **Pythia (5 Models) (d)**:
- Combines results, showing ~0.4-0.5 accuracy for most tasks.
- Confidence intervals are wider, indicating higher variability.
### Key Observations
1. **Formal vs. Functional Tasks**:
- Formal tasks (BLiMP, SyntaxGym) consistently outperform functional tasks across all models.
- Functional tasks show greater variability, with some models (e.g., Social-IQA, ARC Challenge) performing poorly in smaller models.
2. **Model Size Impact**:
- Larger models (6.9B) achieve higher accuracy in both task types compared to smaller models (1B, 2.8B).
- The "5 Models" graph (d) suggests ensemble approaches improve functional task performance but with increased computational cost (5.6% training time noted).
3. **Confidence Intervals**:
- Functional tasks exhibit wider confidence intervals, indicating less reliable performance estimates.
- Formal tasks show tighter intervals, suggesting more stable results.
### Interpretation
The data demonstrate that larger language models (e.g., Pythia-6.9B) excel in formal linguistic tasks (e.g., syntax, grammar) while maturing more slowly on functional reasoning (e.g., commonsense, logic). Functional tasks require more training tokens to reach stable performance, and smaller models often fail to achieve meaningful accuracy on them. The "5 Models" panel (d) indicates that these trends hold on average across model sizes. The shaded confidence intervals emphasize the greater uncertainty in functional task evaluations, suggesting these tasks may require more robust evaluation frameworks.
</details>
Figure 8: Individual Benchmark Scores for Formal and Functional Competence. (a-c): each column shows the evolution of individual benchmark scores for formal competence (top) and functional competence (bottom) during training. Data is presented for Pythia models of three different sizes. (d): the same as (a-c), with data averaged across models of five different sizes.
Figure 8 presents the individual benchmark scores for both formal and functional linguistic competence across training. Formal benchmarks peak early, mirroring the trajectory of brain alignment, and remain saturated throughout training. In contrast, functional benchmarks continue to improve, reflecting the models' increasing ability to acquire factual knowledge and reasoning skills as they are trained on significantly more tokens using next-word prediction.
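A plausible reading of the "Normalized Accuracy" axis in Figure 8 is accuracy rescaled by each benchmark's chance level, which would also explain the slightly negative scores early in training. The exact normalization the authors use is an assumption here; the sketch below only illustrates the idea.

```python
def chance_normalized(acc, chance):
    """Rescale raw accuracy so chance maps to 0 and perfect accuracy to 1.

    Scores below chance come out negative, matching the sub-zero values
    visible early in training in Figure 8.
    """
    return (acc - chance) / (1.0 - chance)

# e.g., a binary-choice benchmark such as BLiMP has chance = 0.5
print(chance_normalized(0.85, 0.5))  # roughly 0.7 on the normalized scale
```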
Appendix F Results on SmolLM2-360M
To assess the generalizability of our findings, we replicated our experiments using a model from a different model family. Specifically, we evaluated multiple training checkpoints of SmolLM2-360M on the brain alignment, formal, and functional linguistic competence benchmarks. Since SmolLM2 only provides checkpoints at intervals of 250B tokens, we cannot capture the gradual emergence of brain alignment and formal competence, both of which typically saturate around 4B–8B tokens. Given this limitation, our hypothesis was that brain alignment and formal competence would remain largely stable across these checkpoints, while functional competence would continue to improve. The results are consistent with this hypothesis, as shown in Tables 2 and 3.
Appendix G Role of Weight Initialization
<details>
<summary>figures/untrained_init_range_comparison_nunits=128.png Details</summary>

### Visual Description
## Line Graph: Brain Alignment vs. Initialization Standard Deviation
### Overview
The image depicts a line graph illustrating the relationship between **Brain Alignment (Pearson's r)** and **Initialization Standard Deviation**. The graph includes a green line representing the trend, shaded green areas indicating variability, and scattered green data points. The x-axis (logarithmic scale) ranges from 10⁻³ to 10⁰, while the y-axis (linear scale) spans 0.06 to 0.12.
---
### Components/Axes
- **X-axis (Initialization Standard Deviation)**: Logarithmic scale with markers at 10⁻³, 10⁻², 10⁻¹, and 10⁰.
- **Y-axis (Brain Alignment)**: Linear scale with markers at 0.06, 0.08, 0.10, and 0.12.
- **Legend**: Located in the top-right corner, labeled "Brain Alignment (Pearson's r)" with a green line and shaded area.
- **Data Points**: Green circles with error bars (not explicitly labeled but implied by shaded regions).
---
### Detailed Analysis
1. **Trend Line**:
- **10⁻³**: Brain Alignment ≈ 0.10 (data points cluster around this value).
- **10⁻²**: Peak at ≈ 0.11 (highest alignment, shaded area widest here).
- **10⁻¹**: Sharp decline to ≈ 0.08 (data points spread between 0.07–0.09).
- **10⁰**: Further drop to ≈ 0.07 (data points range from 0.065–0.075).
2. **Shaded Area (Variability)**:
- Narrowest at 10⁻³ and 10⁰, widest at 10⁻², suggesting higher uncertainty in alignment at the peak.
3. **Data Points**:
- All points align with the trend line, though some scatter exists (e.g., at 10⁻¹, points range from 0.065–0.09).
---
### Key Observations
- **Peak Alignment**: Maximum Brain Alignment (≈0.11) occurs at an Initialization Standard Deviation of 10⁻².
- **Decline at Higher Deviations**: Alignment drops sharply as deviation increases beyond 10⁻².
- **Variability Pattern**: Uncertainty (shaded area) is highest at the peak (10⁻²) and lowest at the extremes (10⁻³ and 10⁰).
---
### Interpretation
The data suggests an **optimal initialization standard deviation** of ~10⁻² for maximizing Brain Alignment. However, the sharp decline at higher deviations implies that excessively large initializations may degrade performance. The increased variability at the peak (10⁻²) could indicate sensitivity to initialization parameters in this range. This trend might reflect a trade-off between alignment and stability, where moderate deviations yield the best balance. Further investigation into initialization strategies could leverage this relationship to improve model robustness.
</details>
Figure 9: Role of Weight Initialization on Brain Alignment in Untrained Models. The default initialization standard deviation in the HuggingFace library (sd = 0.02) yields the highest brain alignment for untrained models, suggesting that initialization choices play a crucial role in shaping alignment even before training begins.
Figure 9 examines the effect of weight initialization variance on brain alignment in untrained models. We systematically vary the initialization standard deviation (sd) and find that the default HuggingFace (Wolf et al., 2019) initialization (sd = 0.02) achieves the highest alignment across datasets. This shows that even before training begins, the choice of initialization can significantly influence how well a model's representations align with neural activity, underscoring the role of architectural inductive biases. It also raises an intriguing hypothesis: could brain alignment, a computationally inexpensive metric, serve as a useful heuristic for selecting initialization parameters? If so, it could help models learn tasks more efficiently and converge faster, reducing the need for extensive trial-and-error when training from scratch.
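The sweep behind Figure 9 can be sketched as below. Both `init_features` and `score_alignment` are hypothetical placeholders for re-initializing an untrained model and for the full localization-plus-encoding pipeline; note that this toy scorer z-scores its inputs and is therefore insensitive to scale, whereas in a real network the nonlinearities make the initialization standard deviation matter.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_features(n_stimuli, n_units, sd):
    """Stand-in for activations of an untrained model re-initialized with N(0, sd)."""
    return rng.normal(0.0, sd, size=(n_stimuli, n_units))

def score_alignment(features, brain_data):
    """Placeholder score: mean |correlation| between unit activations and voxels."""
    f = (features - features.mean(0)) / (features.std(0) + 1e-8)
    b = (brain_data - brain_data.mean(0)) / (brain_data.std(0) + 1e-8)
    r = (f.T @ b) / len(f)  # unit-by-voxel correlation matrix
    return float(np.abs(r).mean())

brain = rng.normal(size=(200, 50))   # synthetic voxel responses
for sd in [1e-3, 1e-2, 1e-1, 1.0]:  # sweep mirroring Figure 9's x-axis
    feats = init_features(200, 128, sd)
    print(sd, round(score_alignment(feats, brain), 4))
```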
Appendix H Effect of Number of Units on Brain Alignment
<details>
<summary>figures/pretrained_num_units_model_size.png Details</summary>

### Visual Description
## Bar Chart: Brain Alignment (Pearson's r) Across Model Sizes and Units
### Overview
The chart compares brain alignment (measured by Pearson's r) across different neural network model sizes (14M to 6.9B parameters) and three unit counts (128, 1024, 4096). Each model size is represented by a distinct color, with error bars indicating measurement variability.
### Components/Axes
- **X-axis**: "Number of Units" with categories: 128, 1024, 4096.
- **Y-axis**: "Brain Alignment (Pearson's r)" scaled from 0.00 to 0.20.
- **Legend**: Located on the right, mapping colors to model sizes:
- Purple: 14M
- Dark Blue: 70M
- Teal: 160M
- Darker Teal: 410M
- Green: 1B
- Lighter Green: 1.4B
- Yellow: 2.8B
- Light Yellow: 6.9B
- **Error Bars**: Black vertical lines atop each bar, representing standard deviation.
### Detailed Analysis
1. **128 Units Group**:
- **14M (Purple)**: ~0.16 (highest alignment)
- **70M (Dark Blue)**: ~0.15
- **160M (Teal)**: ~0.14
- **410M (Darker Teal)**: ~0.13
- **1B (Green)**: ~0.12
- **1.4B (Lighter Green)**: ~0.11
- **2.8B (Yellow)**: ~0.10
- **6.9B (Light Yellow)**: ~0.09
2. **1024 Units Group**:
- **14M (Purple)**: ~0.18
- **70M (Dark Blue)**: ~0.17
- **160M (Teal)**: ~0.16
- **410M (Darker Teal)**: ~0.15
- **1B (Green)**: ~0.14
- **1.4B (Lighter Green)**: ~0.13
- **2.8B (Yellow)**: ~0.12
- **6.9B (Light Yellow)**: ~0.11
3. **4096 Units Group**:
- **14M (Purple)**: ~0.19
- **70M (Dark Blue)**: ~0.18
- **160M (Teal)**: ~0.17
- **410M (Darker Teal)**: ~0.16
- **1B (Green)**: ~0.15
- **1.4B (Lighter Green)**: ~0.14
- **2.8B (Yellow)**: ~0.13
- **6.9B (Light Yellow)**: ~0.12
**Error Bar Variability**: Larger models (e.g., 6.9B) show longer error bars, indicating higher measurement uncertainty. Smaller models (e.g., 14M) have shorter error bars, suggesting more consistent results.
### Key Observations
1. **Inverse Relationship Between Model Size and Alignment**: Larger models (6.9B) consistently show lower alignment than smaller models (14M) within the same unit group.
2. **Unit Count Impact**: Alignment increases with more units (e.g., 128 → 4096 units), but the effect diminishes for larger models.
3. **Error Bar Trends**: Larger models exhibit greater variability in alignment scores, as seen in longer error bars.
### Interpretation
The data suggest that smaller models (e.g., 14M) achieve brain alignment comparable to or higher than larger models when the same number of units is extracted from each. Alignment rises modestly as more units are localized, but the relative ranking of models is largely preserved, so greater capacity does not translate into better alignment here. The longer error bars for larger models indicate greater variability in their alignment measurements, possibly due to architectural complexity or training instability.
</details>
Figure 10: The Effect of the Number of Localized Units on Final Brain Alignment. Brain alignment is evaluated after localizing 128, 1024, and 4096 units. While increasing the number of units slightly affects overall alignment, the relative ranking of models remains largely unchanged, indicating that model comparisons are robust to the choice of unit count.
Figure 10 illustrates the impact of localizing more units on final brain alignment across the eight Pythia models used in this study. We find that increasing the number of units has minimal impact on the relative ranking of models, with only a slight increase in average alignment. Additionally, model size does not influence brain alignment once the number of units is controlled, reinforcing the idea that alignment is driven by feature selection rather than scale.
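The functional localization that underlies this fixed-unit comparison can be sketched on synthetic data as follows, assuming a simple Welch-style t contrast between sentence and control activations; the paper's actual localizer may differ in detail.

```python
import numpy as np

def localize_units(sent_acts, ctrl_acts, k):
    """Return the indices of the k most sentence-selective units."""
    diff = sent_acts.mean(0) - ctrl_acts.mean(0)
    se = np.sqrt(sent_acts.var(0) / len(sent_acts)
                 + ctrl_acts.var(0) / len(ctrl_acts)) + 1e-8
    t = diff / se  # Welch-style t statistic per unit
    return np.argsort(t)[::-1][:k]

rng = np.random.default_rng(1)
sent = rng.normal(size=(100, 512))  # activations to sentences
ctrl = rng.normal(size=(100, 512))  # activations to a control condition
sent[:, :10] += 2.0                 # plant 10 "language-selective" units

units = localize_units(sent, ctrl, k=128)
# The planted units (indices 0-9) rank at the top regardless of k,
# which is why the fixed-k comparison is robust across unit counts.
```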
<details>
<summary>figures/brain-score-llms-brain-alignment-v1.drawio.png Details</summary>

### Visual Description
## Line Charts: Brain Alignment Across Model Sizes
### Overview
Three line charts compare brain alignment (Pearson's r) between "Language Network" (green circles) and "V1" (purple crosses) across three model sizes: 14M, 70M, and 160M. Each chart tracks alignment as tokens increase from 0 to 2868M, with a vertical reference line at 512M tokens.
---
### Components/Axes
- **X-axis**: "Number of Tokens" (0, 2M, 4M, ..., 2868M)
- **Y-axis**: "Brain Alignment (Pearson's r)" (-0.025 to 0.150)
- **Legend**: Located at bottom center, with:
- Green circles: Language Network
- Purple crosses: V1
- **Vertical Line**: At 512M tokens in all charts
---
### Detailed Analysis
#### 14M Model
- **Language Network**:
- Starts at ~0.055 (0 tokens)
- Peaks at ~0.125 (2868M tokens)
- Steady upward trend with minor fluctuations
- **V1**:
- Starts at ~0.01 (0 tokens)
- Peaks at ~0.03 (2868M tokens)
- Fluctuates between 0.01 and 0.03
#### 70M Model
- **Language Network**:
- Starts at ~0.05 (0 tokens)
- Peaks at ~0.12 (2868M tokens)
- Consistent upward slope with slight dips
- **V1**:
- Starts at ~0.015 (0 tokens)
- Peaks at ~0.035 (2868M tokens)
- More variability than 14M model
#### 160M Model
- **Language Network**:
- Starts at ~0.05 (0 tokens)
- Peaks at ~0.12 (2868M tokens)
- Stable increase with minor noise
- **V1**:
- Starts at ~0.02 (0 tokens)
- Peaks at ~0.04 (2868M tokens)
- Smoother trend than smaller models
---
### Key Observations
1. **Model Size Correlation**: Larger models (160M > 70M > 14M) show consistently higher brain alignment for Language Network.
2. **Token Count Impact**: Alignment improves for both metrics as token count increases, with sharper gains after 512M tokens.
3. **V1 Variability**: V1 shows more fluctuation in smaller models (14M) but stabilizes in larger models (160M).
4. **Shaded Regions**: Confidence intervals widen with token count, indicating increased measurement uncertainty at higher token volumes.
---
### Interpretation
- **Language Network Dominance**: The green line (Language Network) consistently outperforms V1 across all model sizes, suggesting it better captures brain-related patterns.
- **Scaling Benefits**: Larger models (160M) achieve higher alignment with fewer tokens compared to smaller models, indicating improved efficiency.
- **V1 as Baseline**: V1's lower alignment values and higher variability suggest it represents a less optimized or smaller-scale baseline.
- **512M Threshold**: The vertical line at 512M tokens may mark a critical point where model performance stabilizes or diverges significantly.
The data implies that model size and token processing capacity directly influence brain alignment, with larger models achieving stronger, more stable correlations.
</details>
Figure 11: Brain Alignment with the Language Network vs. V1 Across Training. Raw brain alignment scores (Pearson's r) of three Pythia models of varying sizes are shown on the Pereira2018 dataset. The x-axis (log-scaled up to 16B tokens; then evenly spaced after the black line every 20B tokens) represents training progress. Alignment with V1, an early visual region, remains stable throughout training, while alignment with the language network (LN) increases around 4B tokens before plateauing.
Appendix I Model Size Does Not Predict Alignment
<details>
<summary>figures/brain-score-llms-model-size-greens.drawio.png Details</summary>

### Visual Description
## Line Chart: Brain Alignment Across Pythia Model Sizes
### Overview
The chart visualizes brain alignment scores for multiple datasets across varying Pythia model sizes (14M to 6.9B parameters). It includes six datasets (Pereira2018, Fedorenko2016, Tuckute2024, Narratives, Blank2014, and Average) with shaded regions indicating variability/confidence intervals.
### Components/Axes
- **X-axis**: Pythia Model Size (14M, 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B)
- **Y-axis**: Brain Alignment (0.0 to 1.4)
- **Legend**: Located in the top-right corner, mapping datasets to symbols/colors:
- Pereira2018: Light green circles
- Fedorenko2016: Dark green squares
- Tuckute2024: Green plus signs
- Narratives: Dark green diamonds
- Blank2014: Light green crosses
- Average: Dark green stars
### Detailed Analysis
1. **Pereira2018** (light green circles):
- Starts at ~1.15 (14M), peaks at ~1.2 (70M), then declines to ~0.75 (6.9B)
- Shaded region widens significantly between 160M and 1.4B
2. **Fedorenko2016** (dark green squares):
- Stable between 0.8–0.85 across all sizes
- Sharp dip to ~0.2 at 160M, then recovers to ~0.8 (6.9B)
3. **Tuckute2024** (green plus signs):
- Peaks at ~0.6 (70M), drops to ~0.2 (160M), then fluctuates between 0.4–0.5
- Shaded region narrows at 160M, suggesting lower confidence
4. **Narratives** (dark green diamonds):
- Consistently low (~0.1–0.15) across all sizes
- Minimal variability (narrow shaded region)
5. **Blank2014** (light green crosses):
- Lowest alignment (~0.05–0.1) across all sizes
- Shaded region remains narrow
6. **Average** (dark green stars):
- Weighted mean trends downward from ~0.55 (14M) to ~0.45 (6.9B)
- Shaded region widens at 160M and 1.4B
### Key Observations
- **Model Size vs. Alignment**: Larger models (1B–6.9B) generally show lower alignment than smaller models (14M–70M), contradicting the "bigger is better" hypothesis.
- **Pereira2018 Anomaly**: The sharp decline after 70M suggests potential overfitting or task-specific limitations.
- **Tuckute2024 Dip**: The 160M model's drastic drop may indicate architectural instability or dataset incompatibility.
- **Narratives Consistency**: Low but stable alignment implies this dataset may represent a baseline or control group.
- **Average Trend**: The overall decline in alignment with model size challenges assumptions about model efficacy.
### Interpretation
The data suggests that increasing model size does not universally improve brain alignment, with some datasets (e.g., Pereira2018) showing inverted U-shaped relationships. The Average line's downward trend implies that larger models may introduce noise or inefficiencies for this specific task. The shaded regions highlight uncertainty, particularly for Pereira2018 and Tuckute2024 at mid-sized models, suggesting methodological variability or dataset-specific challenges. These findings could inform debates about optimal model scaling strategies in neuroimaging applications.
</details>
Figure 12: Model Size Does Not Predict Brain Alignment when localizing a fixed set of language units. Brain alignment across model sizes in the Pythia suite, measured at their final training checkpoints. Brain alignment is shown for each dataset, along with the average score across datasets, for eight models of varying sizes.
Figure 12 presents the brain alignment for each dataset, along with the average alignment across datasets, for eight models of varying sizes from the Pythia model suite (final checkpoint). Contrary to the assumption that larger models exhibit higher brain alignment (Aw et al., 2023), we observe a decline in average alignment starting from 1B parameters up to 6.9B parameters, when controlling for feature size. This analysis is made possible by functional localization, which allows us to extract a fixed number of units from each model, rather than relying on hidden state dimensions, as done in previous studies. This approach ensures a fairer comparison among models. We show in Appendix H that increasing the number of localized units has minimal impact on the relative ranking of the models. Additionally, these findings align with expectations in the neuroscience language community, where it is widely believed that human language processing does not require superhuman-scale models to capture neural activity in the brain's language network.
Appendix J Alignment with Other Brain Regions
As a control, we also examine alignment with non-language brain regions. Specifically, Figure 11 shows the brain alignment of three Pythia models with both the language network (LN) and V1, an early visual cortex region, on the Pereira2018 dataset. While alignment with the LN increases early in training (around 4B tokens) and then saturates, alignment with V1 remains largely unchanged throughout training. This divergence highlights a key aspect of LLM representations: they do not appear to encode low-level perceptual features, such as those processed in early visual areas. If models were learning perceptual structure from the stimuli, we would expect alignment with V1 to increase alongside LN alignment. Instead, the stability of V1 alignment across training suggests that language models selectively develop internal representations that align with higher-order linguistic processing rather than general sensory processing.
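Scores like those in Figure 11 come from encoding-model analyses of roughly this form. The sketch below uses synthetic data, a closed-form ridge solution, and a single train/test split; the paper's exact regression and cross-validation scheme may differ.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two 1-D arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def ridge_fit_predict(X_tr, y_tr, X_te, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    d = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ y_tr)
    return X_te @ w

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 32))                                  # model features per stimulus
y = X @ rng.normal(size=32) + rng.normal(scale=0.5, size=120)   # one "voxel"

pred = ridge_fit_predict(X[:80], y[:80], X[80:])  # fit on train, predict held-out
score = pearson_r(pred, y[80:])                   # raw brain-alignment score
```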
One reason for not measuring alignment against other higher-level cognitive brain regions, such as the default mode network (DMN), the multiple demand network (MD), or the theory of mind network (ToM), is a major limitation in current neuroimaging datasets: the linguistic stimuli used in studies with publicly available datasets (e.g., Pereira2018) do not reliably engage these higher-level cognitive regions, leading to substantial variability across individuals and thus much lower cross-subject consistency scores. Simply "looking" for alignment in the DMN or MD is therefore insufficient. Instead, we need new datasets that deliberately activate non-language networks and record item-level neural responses. For example, most MD studies rely on blocked fMRI designs (e.g., hard vs. easy math), yielding one activation estimate per condition rather than per stimulus. Such coarse measurements limit their utility for evaluating model-to-brain correspondence at the granularity of individual items. We expect alignment with the MD network, a brain region involved in logical reasoning, to track functional linguistic competence more than formal competence as models improve on relevant benchmarks. We leave this investigation for future work, pending the availability of suitable datasets.
Appendix K Cross-Subject Consistency Scores
| Benchmark | Cross-Subject Consistency |
| --- | --- |
| Pereira2018 (Exp 2) † | 0.086 |
| Pereira2018 (Exp 3) | 0.144 |
| Blank2014 | 0.178 |
| Fedorenko2016 | 0.222 |
| Tuckute2024 | 0.559 |
| Narratives | 0.181 |
| Futrell2018 | 0.858 |
Table 4: Cross-Subject Consistency Scores. The values used to normalize the raw Pearson correlation. † Pereira2018 (Exp 2) was computed without extrapolation.
Table 4 shows the cross-subject consistency scores computed with extrapolation for the different benchmarks used in this work.