arXiv:2503.01830
# From Language to Cognition: How LLMs Outgrow the Human Language Network
**Authors**:
- Badr AlKhamissi, Greta Tuckute, Yingtian Tang, Taha Binhuraib, Antoine Bosselut, Martin Schrimpf (EPFL, MIT, Georgia Institute of Technology)
Abstract
Large language models (LLMs) exhibit remarkable similarity to neural activity in the human language network. However, the key properties of language underlying this alignment, and how brain-like representations emerge and change across training, remain unclear. We here benchmark 34 training checkpoints spanning 300B tokens across 8 different model sizes to analyze how brain alignment relates to linguistic competence. Specifically, we find that brain alignment tracks the development of formal linguistic competence (i.e., knowledge of linguistic rules) more closely than functional linguistic competence. While functional competence, which involves world knowledge and reasoning, continues to develop throughout training, its relationship with brain alignment is weaker, suggesting that the human language network primarily encodes formal linguistic structure rather than broader cognitive functions. Notably, we find that the correlation between next-word prediction, behavioral alignment, and brain alignment fades once models surpass human language proficiency. We further show that model size is not a reliable predictor of brain alignment when controlling for the number of features. Finally, using the largest set of rigorous neural language benchmarks to date, we show that language brain alignment benchmarks remain unsaturated, highlighting opportunities for improving future models. Taken together, our findings suggest that the human language network is best modeled by formal, rather than functional, aspects of language. Project Page: language-to-cognition.epfl.ch
Badr AlKhamissi 1, Greta Tuckute 2, Yingtian Tang 1, Taha Binhuraib 3, Antoine Bosselut †,1, Martin Schrimpf †,1 (1 EPFL, 2 MIT, 3 Georgia Institute of Technology)
† Equal supervision
1 Introduction
<details>
<summary>figures/brain-score-llms-main-final-final.drawio-4.png Details</summary>

### Visual Description
Three line graphs plot (a) Brain Alignment, (b) Formal Competence, and (c) Functional Competence against the number of training tokens (0 to 286B) for five Pythia model sizes (410M, 1B, 1.4B, 2.8B, 6.9B). A vertical line marks 94.4% of training time; R^2 = 0.65 is reported for brain alignment and R^2 = 0.36 for functional competence. All three metrics rise sharply early in training (up to roughly 16B tokens). Brain alignment then fluctuates between about 0.4 and 0.6 with no clear ordering by model size; formal competence plateaus near 0.7 for all model sizes; functional competence continues to grow after this point, with larger models reaching higher values (from ~0.17 for 410M up to ~0.3 for 6.9B).
</details>
Figure 1: Model Alignment with the Human Language Network is Primarily Driven by Formal Rather than Functional Linguistic Competence. (a) Average brain alignment across five Pythia models and five brain recording datasets, normalized by cross-subject consistency, throughout training. (b) Average normalized accuracy of the same models on formal linguistic competence benchmarks (two benchmarks). (c) Average normalized accuracy on functional linguistic competence benchmarks (six benchmarks). The x-axis is logarithmically spaced up to 16B tokens, capturing early training dynamics, and then evenly spaced every 20B tokens from 20B to ~300B tokens.
Deciphering the brain's algorithms underlying our ability to process language and communicate is a core goal in neuroscience. Human language processing is supported by the brain's language network (LN), a set of left-lateralized fronto-temporal regions in the brain (Binder et al., 1997; Bates et al., 2003; Gorno-Tempini et al., 2004; Price, 2010; Fedorenko, 2014; Hagoort, 2019) that respond robustly and selectively to linguistic input (Fedorenko et al., 2024a). Driven by recent advances in machine learning, large language models (LLMs) trained via next-word prediction on large corpora of text are now a particularly promising model family to capture the internal processes of the LN. In particular, when these models are exposed to the same linguistic stimuli (e.g., sentences or narratives) as human participants during neuroimaging and electrophysiology experiments, they account for a substantial portion of neural response variance (Schrimpf et al., 2021; Caucheteux and King, 2022; Goldstein et al., 2022; Pasquiou et al., 2022; Aw et al., 2023; Tuckute et al., 2024a; AlKhamissi et al., 2025; Rathi et al., 2025).
1.1 Key Questions and Contributions
This work investigates four key questions, all aimed at distilling why LLMs align with brain responses. Specifically, we investigate the full model development cycle as a combination of model architecture (structural priors) and how linguistic competence emerges across training (developmental experience). We ask: (1) What drives brain alignment in untrained models? (2) Is brain alignment primarily linked to formal or functional linguistic competence (Mahowald et al., 2024)? (3) Do language models diverge from humans as they surpass human-level prediction? (4) Do current LLMs fully account for the explainable variance in brain alignment benchmarks? To answer these questions, we introduce a rigorous brain-scoring framework to conduct a controlled and large-scale analysis of LLM brain alignment.
Our findings reveal that the initial brain alignment of models with untrained parameters is driven by context integration. During training, alignment primarily correlates with formal linguistic competence: tasks that probe mastery of grammar, syntax, and compositional rules, such as identifying subject-verb agreement, parsing nested syntactic structures, or completing well-formed sentences. This competence saturates relatively early in training ($\sim$4B tokens), consistent with a plateauing of model-to-brain alignment. Functional linguistic competence, in contrast, concerns how language is used in context to convey meaning, intent, and social/pragmatic content: for example, tasks involving discourse coherence, reference resolution, inference about speaker meaning, or interpreting figurative language. Functional competence emerges later in training, tracks brain alignment less strongly, and continues to grow even after alignment with the language network has saturated.
This disconnect later in training is further exemplified by a fading of the correlation between models' brain alignment and their next-word-prediction performance, as well as their behavioral alignment. Further, we show that model size is not a reliable predictor of brain alignment when controlling for the number of features, challenging the assumption that larger models necessarily resemble the brain more. Finally, we demonstrate that current brain alignment benchmarks remain unsaturated, indicating that LLMs can still be improved as models of human language processing.
2 Preliminaries & Related Work
A Primer on Language in the Human Brain
The human language network (LN) is a set of left-lateralized frontal and temporal brain regions supporting language. These regions are functionally defined by contrasting responses to language inputs over perceptually matched controls (e.g., lists of non-words) (Fedorenko et al., 2010). The language network exhibits remarkable selectivity for language processing compared to various non-linguistic inputs and tasks, such as music perception (Fedorenko et al., 2012; Chen et al., 2023) or arithmetic computation (Fedorenko et al., 2011; Monti et al., 2012) (for review, see Fedorenko et al. (2024a)), and it shows only weak responses when participants comprehend or articulate meaningless non-words (Fedorenko et al., 2010; Hu et al., 2023). This selectivity profile is supported by extensive neuroimaging research and further corroborated by behavioral evidence from aphasia studies: when brain damage is confined to language areas, individuals lose their linguistic abilities while retaining other skills, such as mathematics (Benn et al., 2013; Varley et al., 2005), general reasoning (Varley and Siegal, 2000), and theory of mind (Siegal and Varley, 2006).
Model-to-Brain Alignment
Prior work has shown that the internal representations of certain artificial neural networks resemble those in the brain. This alignment was initially observed in the domain of vision (Yamins et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Cichy et al., 2016; Schrimpf et al., 2018, 2020; Cadena et al., 2019; Kubilius et al., 2019; Zhuang et al., 2021) and has more recently been extended to auditory processing (Kell et al., 2018; Tuckute et al., 2023; Koumura et al., 2023) and language processing (Schrimpf et al., 2021; Caucheteux and King, 2022; Goldstein et al., 2022; Kauf et al., 2023; Hosseini et al., 2024; Aw et al., 2023; AlKhamissi et al., 2025; Tuckute et al., 2024b; Rathi et al., 2025).
Untrained Models
Recent work in vision neuroscience has shown that untrained convolutional networks can yield high brain alignment to recordings in the visual ventral stream without the need for training (Geiger et al., 2022; Kazemian et al., 2024). Other works have investigated the inductive biases in different architectures and initializations in models of visual processing (Cichy et al., 2016; Cadena et al., 2019; Geiger et al., 2022), speech perception (Millet and King, 2021; Tuckute et al., 2023), and language (Schrimpf et al., 2021; Pasquiou et al., 2022; Hosseini et al., 2024), highlighting that randomly initialized networks are not random functions (Teney et al., 2024).
3 Methods
3.1 Benchmarks for Brain Alignment
Neuroimaging & Behavioral Datasets
The neuroimaging datasets used in this work can be categorized along three dimensions: the imaging modality, the context length of the experimental materials, and the modality through which the language stimulus was presented to human participants (auditory or visual). Table 1 in Appendix A provides an overview of all datasets in this study. To focus specifically on language, we consider neural units (electrodes, voxels, or regions) associated with the brain's language network, as localized by the original dataset authors using the method described in Section 3.2 and implemented in Brain-Score (Schrimpf et al., 2020, 2021) (however, see Appendix J for control brain regions). An exception is the Narratives dataset, which lacks functional localization. We here approximate the language regions using a probabilistic atlas of the human language network (Lipkin et al., 2022), extracting the top 10% language-selective voxels (from the probabilistic atlas) within anatomically defined language parcels, in line with the functional localization procedure used for the other datasets. In an additional analysis, we investigate model alignment with language behavior using the Futrell et al. (2018) dataset, which contains self-paced, per-word human reading times. See Appendix A for details of each dataset. To the best of our knowledge, this study examines the largest number of benchmarks compared to previous work, providing a more comprehensive and reliable foundation for identifying the properties that drive brain alignment in LLMs. The diversity of datasets ensures that our conclusions generalize beyond specific experimental stimuli and paradigms.
Brain-Alignment Metrics
Following standard practice in measuring brain alignment, we train a ridge regression model to predict brain activity from model representations, using the same linguistic stimuli presented to human participants in neuroimaging studies (Schrimpf et al., 2020, 2021). We then measure the Pearson correlation between the predicted and the measured brain activations of human participants on a held-out set that covers entirely different stories or topics (see Section 4). This process is repeated over $k$ cross-validation splits, and we report the mean Pearson correlation as our final result. We refer to this metric as Linear Predictivity. In Section 5.1, we demonstrate why other metrics, such as Centered Kernel Alignment (CKA; Kornblith et al., 2019) and Representational Similarity Analysis (RSA; Kriegeskorte et al., 2008), are not suitable measures of brain alignment on current language datasets.
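As an illustration of this metric, the following is a minimal NumPy sketch of linear predictivity: closed-form ridge regression with a fixed penalty and random cross-validation folds. It is a simplified stand-in, not the Brain-Score implementation, which tunes regularization and uses the dataset-specific splits discussed in Section 4.

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    # Closed-form ridge regression: W = (X^T X + lam I)^{-1} X^T Y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def pearson_per_unit(A, B):
    # Pearson r between matching columns (neural units) of A and B
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    return (A * B).sum(axis=0) / np.sqrt((A**2).sum(axis=0) * (B**2).sum(axis=0))

def linear_predictivity(X, Y, k=5, lam=1.0):
    """Mean held-out Pearson r between ridge-predicted and measured
    brain responses (Y), from model features (X), over k CV folds."""
    n = X.shape[0]
    folds = np.array_split(np.random.permutation(n), k)
    scores = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        W = ridge_fit(X[train_idx], Y[train_idx], lam)
        r = pearson_per_unit(X[test_idx] @ W, Y[test_idx])
        scores.append(np.nanmean(r))
    return float(np.mean(scores))
```

In practice, `X` holds model representations per stimulus and `Y` the corresponding neural responses per unit; the held-out folds would group entire stories or topics rather than random stimuli.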
Estimation of Cross-Subject Consistency
To assess the reliability of our datasets and account for the inherent noise in brain recordings, we compute a cross-subject consistency score (Feather et al., 2025), also referred to as the noise ceiling (Schrimpf et al., 2021). The consistency score is estimated by predicting the brain activity of a held-out subject from the data of all other subjects, via 10-fold cross-validation across subjects. To obtain a conservative ceiling estimate, we extrapolate across subject pool sizes and report the value extrapolated to infinitely many subjects. For Tuckute2024, we use the theoretical estimate provided by Tuckute et al. (2024b). Consistency scores are provided in Appendix K. To aggregate scores across benchmarks, we normalize each model's Pearson correlation ($r$) score for Linear Predictivity by the cross-subject consistency estimate: $\textnormal{normalized score}=\frac{\textnormal{raw score}}{\textnormal{consistency}}$. The final alignment score for each model is reported as the average across all benchmarks. When reporting raw alignment, we instead compute the mean Pearson correlation across datasets without normalization.
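The aggregation then reduces to dividing each benchmark's raw score by its ceiling and averaging; a minimal sketch (input values hypothetical):

```python
def normalized_alignment(raw_scores, consistencies):
    """Normalize each benchmark's raw Pearson r by its cross-subject
    consistency (noise ceiling), then average across benchmarks."""
    normalized = [r / c for r, c in zip(raw_scores, consistencies)]
    return sum(normalized) / len(normalized)
```

For example, raw scores of 0.30 and 0.20 against ceilings of 0.60 and 0.50 yield normalized scores of 0.50 and 0.40, averaging to 0.45.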
3.2 Functional Localization
The human language network (LN) is defined functionally, meaning that units are chosen according to a "localizer" experiment (Saxe et al., 2006). Specifically, the LN is the set of neural units (e.g., voxels/electrodes) that are more selective for sentences than for a perceptually-matched control condition (Fedorenko et al., 2010). When selecting units from artificial models for comparison against LN units, previous work selected output units from an entire Transformer block based on brain alignment scores (Schrimpf et al., 2021). However, LLMs learn diverse concepts and behaviors during their considerable pretraining, not all of which are necessarily related to language processing, e.g., storage of knowledge (AlKhamissi et al., 2022) and the ability to perform complex reasoning (Huang and Chang, 2023). Therefore, we here follow the method proposed by AlKhamissi et al. (2025), which identifies language units in LLMs using functional localization, as is already standard in neuroscience. This approach offers a key advantage: it enables direct comparisons across models by selecting a fixed set of units identified through the independent localizer experiment. In this work, we localize 128 units for all models unless otherwise specified, and we show in Appendix H that the results hold when selecting a different number of units.
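As a rough sketch of this localization logic, one can rank model units by a sentences-versus-non-words selectivity contrast and keep the top 128. The Welch t-statistic used below is an illustrative choice of selectivity measure, not necessarily the exact criterion of AlKhamissi et al. (2025).

```python
import numpy as np

def localize_language_units(acts_sentences, acts_nonwords, k=128):
    """Return indices of the k units most selective for sentences over
    non-words, given two (stimuli x units) activation matrices.
    Selectivity is scored with a per-unit Welch t-statistic
    (an assumption for illustration)."""
    m_s, m_n = acts_sentences.mean(axis=0), acts_nonwords.mean(axis=0)
    v_s = acts_sentences.var(axis=0, ddof=1) / acts_sentences.shape[0]
    v_n = acts_nonwords.var(axis=0, ddof=1) / acts_nonwords.shape[0]
    t = (m_s - m_n) / np.sqrt(v_s + v_n)
    return np.argsort(t)[::-1][:k]  # top-k language-selective units
```

Because the localizer stimuli are independent of the brain-recording stimuli, the same fixed unit set can be reused across all benchmarks, enabling direct cross-model comparisons.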
<details>
<summary>figures/brain-score-llms-untrained-greens.drawio.png Details</summary>

### Visual Description
Four sub-figures. (a) Bar chart of brain alignment (0.0 to 0.4) by untrained architecture: MLP ~0.10, GRU ~0.14, LSTM ~0.17, MLP+Mean ~0.20, Transformer-v1 ~0.24, Transformer-v2 ~0.38. (b) Bar chart of brain alignment (0.0 to 0.6) for ablated Transformer components: Pos+Attn+MLP ~0.55, Pos+Attn ~0.50, Attn+MLP ~0.35, Attn ~0.30, Pos+MLP ~0.25, MLP ~0.20, Tokens ~0.10. (c) Diagram of the Transformer block: token and positional embeddings pass through LayerNorm, Multihead Attention, LayerNorm, and MLP, with residual (summation) connections. (d) Bar chart of normalized accuracy for untrained models: Formal ~0.15 versus Functional ~0.01. Architectures combining attention with positional encoding achieve the highest brain alignment.
</details>
Figure 2: Context Integration Drives Brain Alignment of Untrained Models. (a) Sequence-based models (GRU, LSTM, Transformers, and mean pooling) achieve higher brain alignment than models that rely solely on the last token representation (Linear, MLP), highlighting the importance of temporal integration. Error bars report five random initializations in all subplots. (b) Ablation study of architectural components in a single untrained Transformer-v2 block, demonstrating that attention mechanisms combined with positional encoding yield the highest brain alignment. (c) Diagram of the Transformer block architecture used in (b), with components grouped into attention (lower box) and MLP (upper box). (d) Average performance of five Pythia models with untrained parameters on formal and functional linguistic competence benchmarks, showing that formal competence exceeds chance level even with untrained parameters.
3.3 Benchmarks for Linguistic Competence
There is substantial evidence in neuroscience research that formal and functional linguistic competence are governed by distinct neural mechanisms (Mahowald et al., 2024; Fedorenko et al., 2024a,b). Formal linguistic competence pertains to the knowledge of linguistic rules and patterns, while functional linguistic competence involves using language to interpret and interact with the world. Therefore, to accurately track the evolution of each type of competence during training, we focus on benchmarks that specifically target these cognitive capacities in LLMs.
Formal Linguistic Competence
To assess formal linguistic competence, we use two benchmarks: BLiMP (Warstadt et al., 2019) and SyntaxGym (Gauthier et al., 2020). BLiMP evaluates key grammatical phenomena in English through 67 tasks, each containing 1,000 minimal pairs designed to test specific contrasts in syntax, morphology, and semantics. Complementing this, SyntaxGym consists of 31 tasks that systematically measure the syntactic knowledge of language models. Together, these benchmarks provide a robust framework for evaluating how well LLMs acquire and apply linguistic rules.
Functional Linguistic Competence
Functional competence extends beyond linguistic rules, engaging a broader set of cognitive mechanisms. To assess this, we use six benchmarks covering world knowledge (ARC-Easy, ARC-Challenge (Clark et al., 2018)), social reasoning (Social IQa (Sap et al., 2019)), physical reasoning (PIQA (Bisk et al., 2019)), and commonsense reasoning (WinoGrande (Sakaguchi et al., 2019), HellaSwag (Zellers et al., 2019)). Together, these benchmarks provide a comprehensive evaluation of an LLMās ability to reason, infer implicit knowledge, and navigate real-world contexts.
Metrics
In line with prior work, we evaluate all benchmarks in a zero-shot setting, using surprisal as the evaluation metric, where the model's prediction is determined by selecting the most probable candidate, as packaged in the language model evaluation harness (Gao et al., 2024). We report accuracy normalized by chance performance, where 0% indicates performance at the random-chance level.
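One standard way to compute chance-normalized accuracy (stated here as an assumption about the exact formula) rescales raw accuracy linearly so that chance maps to 0 and perfect performance to 1:

```python
def chance_normalized_accuracy(accuracy, n_choices):
    """Rescale raw accuracy so that 0.0 is random chance and 1.0 is
    perfect. For a 4-way multiple-choice task, chance accuracy is 0.25."""
    chance = 1.0 / n_choices
    return (accuracy - chance) / (1.0 - chance)
```

Under this formula, 62.5% raw accuracy on a 4-way task corresponds to 50% normalized accuracy.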
Benchmark for Language Modeling
We use a subset of FineWebEdu (Penedo et al., 2024) to evaluate the perplexity of the models on a held-out set. Specifically, we use a maximum sequence length of 2048 and evaluate on the first 1000 documents of the CC-MAIN-2024-10 subset.
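Given per-token log-probabilities from a model on the held-out documents, perplexity is the exponential of the mean negative log-likelihood; a minimal sketch:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)
```

For intuition, a model that assigns every token probability 1/100 has perplexity 100.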
3.4 Large Language Models (LLMs)
Throughout this work, we use eight models from the Pythia model suite (Biderman et al., 2023), spanning a range of sizes: {14M, 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B}. Each model is evaluated across 34 training checkpoints, spanning approximately 300B tokens. These checkpoints include the untrained model, the final trained model, and 16 intermediate checkpoints that are logarithmically spaced up to 128B tokens. The remaining 14 checkpoints are evenly spaced every 20B tokens from 20B to 280B tokens, ensuring a comprehensive analysis of alignment trends throughout training. Since smaller models fail to surpass chance performance on many functional benchmarks, we exclude 14M, 70M, 160M from analyses that compare brain alignment with functional performance.
4 Rigorous Brain-Scoring
While substantial progress has been made in measuring alignment between LLM representations and neural activity, there is no standard for comparing brain alignment across datasets and conditions. Therefore, to ensure meaningful inferences, we propose two criteria: (1) alignment should reflect stimulus-driven responses, dropping for random token sequences; and (2) models should generalize to new linguistic contexts. We justify our metrics and cross-validation choices accordingly. For all benchmarks, we identify language-selective units to ensure fair model comparisons, consistent with neural site selection in neuroscience (AlKhamissi et al., 2025).
4.1 Robust Metrics and Generalization Tests
Measuring Stimulus-Driven Responses
We first ask whether the alignment procedure is meaningful, i.e., whether the encoding models capture linguistic information and generalize to new linguistic contexts. Figure 6 (a) in Appendix B shows average brain alignment across all brain datasets under three conditions: (1) a pretrained model processing the original stimuli, (2) a pretrained model processing random token sequences, and (3) an untrained model processing the original stimuli. For a metric to be reliable, we expect random sequences to yield significantly lower alignment than real stimuli. CKA fails this criterion, assigning similar alignment scores to both conditions, with untrained models even surpassing pretrained ones. In contrast, linear predictivity differentiates between real and random stimuli, and does so more strongly than RSA.
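For context, linear CKA (Kornblith et al., 2019) compares two representation matrices directly, without fitting a mapping, which helps explain why it can behave so differently from linear predictivity here; a minimal sketch:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two (stimuli x features) matrices X and Y."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(num / den)
```

The score is 1.0 when comparing a matrix with itself and is invariant to orthogonal transformations and isotropic scaling of either matrix, which means it does not reward the kind of stimulus-specific predictive mapping that linear predictivity tests.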
Generalization and Contextualization
The second criterion we propose is that LLMs with high brain alignment should generalize to held-out stimuli, ideally stimuli far outside those used for mapping the model to brain activity. A key factor in designing a corresponding cross-validation scheme is contextualization: how the data is split into train and test sets (Feghhi et al., 2024). The Pereira2018 dataset consists of 24 topics composed of multi-sentence passages, and sentences are presented in their original order to both humans and models. A random sentence split (contextualization) allows sentences from the same topic to appear in both train and test sets, and is thus less demanding of generalization. A stronger generalization test holds out entire topics, preventing models from leveraging shared context. Figure 6 (b) shows that contextualization makes it easier for the model to predict brain activity: topic-based splits halve the raw alignment score for pretrained models. The score of untrained models is reduced even more strongly when enforcing generalization across topics, suggesting that much of their alignment is context-dependent. Nonetheless, untrained models retain significant alignment (about 50% of pretrained models) even under strong generalization requirements.
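The topic-level split can be sketched as follows: entire topics are assigned to folds, so no passage context is shared between train and test. The helper below is illustrative, not the exact splitting code used here.

```python
import numpy as np

def topic_split(topic_ids, n_folds=5, seed=0):
    """Yield (train_idx, test_idx) pairs where each test fold holds out
    entire topics, preventing context leakage across the split."""
    topics = np.unique(topic_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(topics)
    for held_out in np.array_split(topics, n_folds):
        is_test = np.isin(topic_ids, held_out)
        yield np.where(~is_test)[0], np.where(is_test)[0]
```

A random sentence split, by contrast, would shuffle individual stimulus indices, letting sentences from the same passage land on both sides of the split.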
<details>
<summary>figures/brain-score-llms-brain-alignment-final.drawio.png Details</summary>

### Visual Description
Three line charts (Pythia-1.4B, Pythia-2.8B, Pythia-6.9B) plot brain alignment (y-axis, roughly 0 to 1.2) against the number of training tokens (non-linear x-axis from 0 to 286B) for five datasets, Pereira2018, Blank2014, Fedorenko2016, Tuckute2024, and Narratives, plus their average (with a shaded variance band). In all three models, Pereira2018 and Fedorenko2016 rise sharply around 64M tokens and then plateau with fluctuations at the highest scores; Blank2014 and Tuckute2024 remain comparatively flat; Narratives and the average climb until roughly 16M tokens and then level off. A vertical line between 8B and 16B tokens marks the change in checkpoint spacing, and increasing model size does not substantially change these trends.
</details>
Figure 3: Brain Alignment Saturates Early on in Training. Plots indicate the brain alignment scores of three models from the Pythia model suite with varying sizes (log x-axis up to 16B tokens, uneven spacing after black line). Scores are normalized by their cross-subject consistency scores. Alignment quickly peaks around 2ā8B tokens before saturating or declining, regardless of model size (see Appendix D and F for more models).
5 Results
The following sections progressively unpack the emergence and limits of brain alignment with the human language network in LLMs. Section 5.1 establishes the foundation by showing that untrained models already exhibit modest brain alignment, pointing to the role of architectural priors. Building on this, Section 5.2 tracks how alignment evolves with training and reveals that it strongly correlates with the early acquisition of formal linguistic competence, but less so with functional abilities. Section 5.3 then shows that as models exceed human-level performance in next-word prediction, their brain and behavioral alignment begins to diverge, suggesting that at this point, LLMs outgrow their initial alignment with human language processing.
5.1 Brain Alignment of Untrained Models
In Figure 6 we show that untrained models, despite achieving lower alignment scores than their pretrained counterparts ( $\sim 50\%$ ), still attain substantial alignment and surpass models evaluated on random token sequences. We therefore ask what drives this surprising alignment.
Inductive Biases of Untrained Models
We evaluate the brain alignment of various LLMs with untrained parameters to determine which architecture exhibits the strongest inductive bias toward the human language network. Figure 2 (a) presents the average alignment across five random initializations for six untrained models. Each model consists of a stack of two building blocks from its respective architecture, with a hidden-state size of $1024$ . To ensure a fair comparison, we apply the localizer to the output representations of the last token in the sequence from these two blocks, extracting 128 units to predict brain activity. Our findings reveal two key insights. First, sequence-based models, such as GRU, LSTM, Transformers, and even a simple mean operation over token representations, exhibit higher brain alignment than models that rely solely on the last token's representation, such as Linear or MLP. In other words, context or temporal integration is a crucial factor in achieving high alignment. Second, we observe a notable difference between Transformer-v1 and Transformer-v2. While Transformer-v2 applies static positional embeddings by directly adding them to token embeddings, Transformer-v1 uses rotary position embeddings. Our results suggest that static positional embeddings enable models to capture intrinsic temporal dynamics in sentences, possibly by tracking evolving word positions, providing further evidence that temporal integration is critical for brain-like language representations.
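The unit-localization step above (extracting 128 units per model) can be sketched as a functional-localizer contrast: rank units by their mean response to sentences minus non-word strings and keep the top-k. This is a simplified sketch under our own names; the thresholds and statistics of the actual localization procedure may differ.

```python
import numpy as np

def localize_language_units(act_sentences, act_nonwords, k=128):
    """Functional-localizer sketch: keep indices of the k units with
    the largest sentences-minus-nonwords activation contrast
    (illustrative, not the paper's exact localizer)."""
    contrast = act_sentences.mean(axis=0) - act_nonwords.mean(axis=0)
    return np.argsort(contrast)[::-1][:k]

# Toy usage: 1024 units, with the first 10 made "language-selective"
# by boosting their sentence responses.
rng = np.random.default_rng(1)
n_units = 1024
sent = rng.normal(size=(40, n_units))
nonw = rng.normal(size=(40, n_units))
sent[:, :10] += 3.0
top = localize_language_units(sent, nonw, k=10)
assert set(int(i) for i in top) == set(range(10))
```

The selected units are then the only features passed to the encoding model, which keeps the feature count comparable across architectures.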
<details>
<summary>figures/brain-score-llms-lineplot-correlations.drawio.png Details</summary>

### Visual Description
Eight line charts in two rows compare brain alignment (green, left y-axis) with formal competence (light blue, top row) or functional competence (dark blue, bottom row) as a function of training tokens (0.01B to 100B, log scale), for the average over five Pythia model sizes and for Pythia-1B, Pythia-2.8B, and Pythia-6.9B individually. Brain alignment rises from roughly 0.2-0.35 to 0.5-0.6, and both competence measures increase sharply after about 1B tokens. Each panel reports the R² of the fit between the competence curve and brain alignment: 0.65, 0.82, 0.51, and 0.67 for formal competence (panels a-d) versus 0.36, 0.80, 0.40, and 0.51 for functional competence (panels e-h), so formal competence tracks brain alignment more closely in every matched pair of panels.
</details>
Figure 4: Formal Competence Tracks Brain Alignment More Closely Than Functional Competence. Each column compares how the evolution of formal competence (top) and functional competence (bottom) tracks the evolution of brain alignment during training. The $R^{2}$ values quantify the strength of this relationship, with higher values for formal competence suggesting it as the key driver of the observed brain alignment. (a): The data averaged across models of five different sizes. (b-d): the same comparison as in (a), but for individual models from the Pythia suite at three different sizes.
Key Components of Transformers
To further isolate the key elements responsible for brain alignment in untrained models, we perform an ablation study on the architectural components of Transformer-v2 using a single block (Figure 2 (c)). By focusing on the untrained model, we isolate the effect of architecture alone, without confounding influences from training. The analyzed components are labeled to the left of each bar in Figure 2 (b): *Attn* refers to all components inside the lower box in Figure 2 (c), including the first layer norm, multi-head attention, and the residual connection that follows; *MLP* corresponds to the components in the upper box, comprising the post-attention layer norm, MLP, and the subsequent residual connection; *Pos* represents the addition of positional embeddings to token embeddings; and *Tokens* means the model directly returns the raw token embeddings without further processing. This systematic ablation helps pinpoint the components that contribute most to brain alignment. Once again, we observe that integration across tokens, via attention mechanisms and positional encoding, yields the highest brain alignment. Further, we found that untrained models perform above chance on formal competence benchmarks, mirroring their non-zero brain alignment, whereas functional competence benchmarks remain at chance level (Figure 2 (d)). This further supports the finding that brain alignment is primarily driven by formal, rather than functional, linguistic competence.
<details>
<summary>figures/brain-score-llms-correlation-ppl-behavior.drawio.png Details</summary>

### Visual Description
A 2x4 grid of scatter plots relates brain alignment (y-axis) to log next-word-prediction perplexity (top row) and to behavioral alignment (bottom row) for Pythia-70M, Pythia-160M, Pythia-2.8B, and the pooled 8 models, with points colored by training stage (early vs. late) and each panel showing a regression line with confidence band and a Pearson r with significance markers. Early-stage correlations are strong and significant in every panel (top row: r = 0.92, 0.89, 0.63, 0.81; bottom row: r = 0.97, 0.90, 0.89, 0.84). Late-stage correlations are weak or non-significant, with late checkpoints clustered at low perplexity, and Pythia-2.8B shows a significant negative late-stage correlation between behavioral and brain alignment (r = -0.54).
</details>
Figure 5: NWP and Behavioral Alignment Correlate with Brain Alignment Only in Early Training. (Top Row): The correlation between brain alignment and language modeling loss shows a strong, significant relationship during early training (up to 2B tokens), which weakens in later stages (up to ~300B tokens). Results are shown for three models and the average of all 8 models (last column). (Bottom Row): The same analysis for the correlation between brain alignment and behavioral alignment, revealing a similar trend: a strong correlation early in training, but no significant relationship once models surpass human proficiency.
5.2 Brain Alignment Over Training
Having established the architectural components that make an untrained model brain-aligned in the previous section, we now investigate how brain alignment evolves during training. To do so, we use the Pythia model suite (Biderman et al., 2023), which consists of models of various sizes, all trained on the same $\sim$ 300B tokens, with publicly available intermediate checkpoints. We report results for a model from a different family, SmolLM2-360M (Allal et al., 2025), which provides checkpoints at 250B-token intervals, in Appendix F.
Figure 3 illustrates the brain alignment of three Pythia models across five brain recording datasets at 34 training checkpoints, spanning approximately 300B tokens. Each panel presents checkpoints that are logarithmically spaced up to the vertical line, emphasizing the early-stage increase in brain alignment, which occurs within the first 5.6% of training; beyond this point, the panels display the remaining training period, where brain alignment stabilizes. More specifically, we observe the following trend: (1) brain alignment remains similar to that of the untrained model until approximately 128M tokens; (2) a sharp increase follows, peaking around 8B tokens; (3) brain alignment then saturates for the remainder of training. Despite the vast difference in model sizes shown in Figure 3, the trajectory of brain alignment is remarkably similar.
Alignment Tracks Formal Competence
Following the observation that brain alignment plateaus early in training, we next investigate how this relates to the emergence of formal and functional linguistic competence in LLMs. Figure 4 displays the average brain alignment alongside the average performance on formal competence benchmarks (top row) and functional competence benchmarks (bottom row). This is shown for three Pythia models (1B, 2.8B, and 6.9B parameters) and the average of five Pythia models (first column) across the training process. To quantify this relationship, we train a ridge regression model (with a single scalar weight) to predict brain alignment scores from benchmark scores using 10-fold cross-validation. The average R-squared value across these folds serves as our metric for comparing the relationship between formal/functional linguistic competence and brain alignment. These R-squared values are shown in each panel of Figure 4. Finally, we perform a Wilcoxon signed-rank test on the distributions of R-squared values. This test reveals that formal linguistic competence is significantly more strongly correlated with brain alignment than functional competence (W = 0.0, p $<$ 0.002). One possible explanation for why brain alignment emerges before formal linguistic competence is that existing LLM benchmarks assess performance using discrete accuracy thresholds (hard metrics), rather than capturing the gradual progression of competence through more nuanced, continuous measures (soft metrics) (Schaeffer et al., 2023). We show the individual benchmark scores across all checkpoints in Figure 8 in Appendix E.
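The R-squared comparison above reduces to a one-weight ridge fit evaluated out of fold. A minimal re-implementation, assuming interleaved fold assignment and a small regularizer of our choosing (not necessarily the paper's settings), looks like this:

```python
import numpy as np

def cv_r2_scalar_ridge(x, y, n_folds=10, alpha=1e-3):
    """Mean held-out R^2 of a single-scalar-weight ridge model y ~ w*x
    across k folds (illustrative sketch; fold assignment and alpha are
    our assumptions)."""
    idx = np.arange(len(x))
    r2s = []
    for f in range(n_folds):
        te = idx[f::n_folds]                       # every n_folds-th point
        tr = np.setdiff1d(idx, te)
        # Closed-form scalar ridge: w = sum(x*y) / (sum(x^2) + alpha)
        w = (x[tr] @ y[tr]) / (x[tr] @ x[tr] + alpha)
        resid = y[te] - w * x[te]
        sst = (y[te] - y[te].mean()) @ (y[te] - y[te].mean())
        r2s.append(1 - (resid @ resid) / sst)
    return float(np.mean(r2s))

# Toy check: a competence score that linearly tracks brain alignment
# yields a higher cross-validated R^2 than an unrelated one.
rng = np.random.default_rng(2)
brain = rng.normal(size=100)
formal = 0.9 * brain + 0.1 * rng.normal(size=100)
functional = rng.normal(size=100)
assert cv_r2_scalar_ridge(formal, brain) > cv_r2_scalar_ridge(functional, brain)
```

Running this per model and comparing the two resulting R² distributions is what the Wilcoxon signed-rank test in the text then formalizes.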
5.3 LLMs Lose Behavioral Alignment
Do language models that improve in next-word prediction remain aligned with human behavioral and neural responses, or do they diverge as they surpass human proficiency? To answer this question, we use the Futrell2018 benchmark, which has been widely used in previous research to measure linguistic behavior (Futrell et al., 2018; Schrimpf et al., 2021; Aw et al., 2023). This dataset consists of self-paced reading times for naturalistic story materials from 180 participants. Per-word reading times provide a measure of incremental comprehension difficulty, a cornerstone of psycholinguistic research for testing theories of sentence comprehension (Gibson, 1998; Smith and Levy, 2013; Brothers and Kuperberg, 2021; Shain et al., 2024). We measure alignment by computing the Pearson correlation between a model's cross-entropy loss for each token in the sequence and the average human per-word reading time. The losses of tokens that together comprise a single word are summed before computing the correlation.
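The metric just described can be sketched directly: sum subword losses into per-word surprisal, then correlate with reading times. Function name and toy data below are ours, for illustration only.

```python
import math

def behavioral_alignment(token_losses, token_to_word, reading_times):
    """Pearson r between per-word model loss and mean human reading
    times; losses of subword tokens of the same word are summed first
    (sketch of the metric, not the benchmark's reference code)."""
    word_loss = [0.0] * len(reading_times)
    for loss, w in zip(token_losses, token_to_word):
        word_loss[w] += loss
    mx = sum(word_loss) / len(word_loss)
    my = sum(reading_times) / len(reading_times)
    cov = sum((x - mx) * (y - my) for x, y in zip(word_loss, reading_times))
    sx = math.sqrt(sum((x - mx) ** 2 for x in word_loss))
    sy = math.sqrt(sum((y - my) ** 2 for y in reading_times))
    return cov / (sx * sy)

# Toy usage: 4 words, the third split into two subword tokens;
# the highest-surprisal word is also the slowest-read one.
losses = [2.0, 1.0, 1.5, 1.5, 0.5]
t2w = [0, 1, 2, 2, 3]
rts = [350.0, 300.0, 420.0, 280.0]
r = behavioral_alignment(losses, t2w, rts)
assert r > 0.9
```

A high r means the model's word-by-word difficulty profile matches human incremental comprehension difficulty, which is the behavioral alignment tracked in Figure 5.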
Early in training, LLMs align with this pattern, but as they surpass human proficiency (Shlegeris et al., 2022), their perplexity drops and they begin encoding statistical regularities that diverge from human intuition (Oh and Schuler, 2023; Steuer et al., 2023). This shift correlates with a decline in behavioral alignment, suggesting that superhuman models rely on different mechanisms than those underlying human language comprehension. Figure 5 shows that brain alignment initially correlates with perplexity and behavioral alignment, but only during the early stages of training (up to ~2B tokens); beyond this point, these correlations diminish. In larger models, we even observe a negative correlation between brain alignment and behavioral alignment in the later stages of training. This trend reinforces that early training aligns LLMs with human-like processing, while in later stages their language mechanisms diverge from those of humans.
6 Conclusion
In this work, we investigate how brain alignment in LLMs evolves throughout training, revealing different learning processes at play. We demonstrate that alignment with the human language network (LN) primarily correlates with formal linguistic competence (Mahowald et al., 2024), peaking and saturating early in training. In contrast, functional linguistic competence, which involves world knowledge and reasoning, continues to grow beyond this stage. These findings suggest that the LN primarily encodes syntactic and compositional structure, in line with the language neuroscience literature (Fedorenko et al., 2024a), while broader linguistic functions may rely on other cognitive systems beyond the LN. This developmental approach reveals when brain-like representations emerge, offering a dynamic perspective compared to prior work focused on fully trained models. For example, Oota et al. (2023) demonstrated that syntactic structure contributes to alignment by selectively removing specific properties from already trained models. In contrast, we show that formal linguistic competence actively drives brain alignment during the early phases of training. Similarly, Hosseini et al. (2024) reported that models achieve strong alignment with limited data; we identify why: brain-like representations emerge as soon as core formal linguistic knowledge is acquired. Further, their study evaluated only four training checkpoints and two models on a single dataset (Pereira2018). Our study evaluated eight models (14M to 6.9B parameters) across 34 checkpoints spanning 300B tokens, and used five neural benchmarks within a rigorous brain-scoring framework. This extensive design enabled fine-grained correlations with both formal and functional linguistic benchmarks and ensured our results are robust and generalizable.
We also show that model size is not a reliable predictor of brain alignment when controlling for the number of features (see Appendix I). Instead, alignment is shaped by architectural inductive biases, token integration mechanisms, and training dynamics. Our standardized brain-scoring framework eliminates contextualization biases from previous work, ensuring more rigorous evaluations. Finally, we demonstrate that current brain alignment benchmarks are not saturated, indicating that LLMs can still be improved in modeling human language processing. Together, these findings challenge prior assumptions about how alignment emerges in LLMs and provide new insights into the relationship between artificial and biological language processing.
Limitations
While this study offers a comprehensive analysis of brain alignment in LLMs, several open questions remain. If functional competence extends beyond the language network, future work should explore which additional brain regions LLMs align with as they develop reasoning and world knowledge, particularly other cognitive networks such as the multiple demand network (Duncan and Owen, 2000) or the theory of mind network (Saxe and Kanwisher, 2003; Saxe and Powell, 2006). Our findings suggest that LLM brain alignment studies should be broadened from the LN to downstream representations underlying other parts of cognition. This raises the question of whether specific transformer units specialize in formal vs. functional linguistic competence (AlKhamissi et al., 2025).
One other limitation of our study is that we rely exclusively on brain data collected from experiments conducted with English stimuli. As such, we do not explore whether our findings generalize across languages. This remains an open question and warrants further investigation. That said, evidence from cross-linguistic neuroscience research studying 45 languages from 12 language families (Malik-Moraleda et al., 2022) suggests the existence of a universal language network in the brain that is robust across languages and language families, both in topography and core functional properties.
Finally, a key question remains: Does LLM alignment evolution mirror human language acquisition? Comparing LLM representations to developmental data could reveal insights into learning trajectories and help differentiate formal from functional language learning. Expanding brain-scoring benchmarks and incorporating multimodal models will help address these questions, further bridging the gap between artificial and biological intelligence and deepening our understanding of how both systems process and represent language.
Ethical Statement
This research relies on previously published neuroimaging (fMRI, ECoG) and behavioral datasets, collected by the original research groups under their institutional ethical guidelines with informed consent and IRB/ethics approval. Our work involved only secondary analysis of de-identified data, with no new data collection or direct participant interaction, and we remain committed to using such data responsibly and respectfully.
Acknowledgments
We thank the members of the EPFL NeuroAI and NLP labs for their valuable feedback and insightful suggestions. We also gratefully acknowledge the support of the Swiss National Science Foundation (No. 215390), Innosuisse (PFFS-21-29), the EPFL Center for Imaging, Sony Group Corporation, and a Meta LLM Evaluation Research Grant.
References
- AlKhamissi et al. (2022) Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona T. Diab, and Marjan Ghazvininejad. 2022. A review on language models as knowledge bases. ArXiv, abs/2204.06031.
- AlKhamissi et al. (2025) Badr AlKhamissi, Greta Tuckute, Antoine Bosselut, and Martin Schrimpf. 2025. The LLM language network: A neuroscientific approach for identifying causally task-relevant units. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 10887–10911, Albuquerque, New Mexico. Association for Computational Linguistics.
- Allal et al. (2025) Loubna Ben Allal, Anton Lozhkov, Elie Bakouch, Gabriel Martín Blázquez, Guilherme Penedo, Lewis Tunstall, Andrés Marafioti, Hynek Kydlíček, Agustín Piqueres Lajarín, Vaibhav Srivastav, and 1 others. 2025. SmolLM2: When smol goes big – data-centric training of a small language model. arXiv preprint arXiv:2502.02737.
- Aw et al. (2023) Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, and Antoine Bosselut. 2023. Instruction-tuning aligns LLMs to the human brain.
- Bates et al. (2003) Elizabeth Bates, Stephen M. Wilson, Ayse Pinar Saygin, Frederic Dick, Martin I. Sereno, Robert T. Knight, and Nina F. Dronkers. 2003. Voxel-based lesion–symptom mapping. Nature Neuroscience, 6(5):448–450.
- Benn et al. (2013) Yael Benn, Iain D. Wilkinson, Ying Zheng, Kathrin Cohen Kadosh, Charles A.J. Romanowski, Michael Siegal, and Rosemary Varley. 2013. Differentiating core and co-opted mechanisms in calculation: The neuroimaging of calculation in aphasia. Brain and Cognition, 82(3):254–264.
- Biderman et al. (2023) Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar Van Der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.
- Binder et al. (1997) Jeffrey R. Binder, Julie A. Frost, Thomas A. Hammeke, Robert W. Cox, Stephen M. Rao, and Thomas Prieto. 1997. Human brain language areas identified by functional magnetic resonance imaging. The Journal of Neuroscience, 17(1):353–362.
- Bisk et al. (2019) Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2019. Piqa: Reasoning about physical commonsense in natural language. In AAAI Conference on Artificial Intelligence.
- Blank et al. (2014) Idan Blank, Nancy Kanwisher, and Evelina Fedorenko. 2014. A functional dissociation between language and multiple-demand systems revealed in patterns of BOLD signal fluctuations. Journal of Neurophysiology, 112(5):1105–1118.
- Brothers and Kuperberg (2021) Trevor Brothers and Gina R Kuperberg. 2021. Word predictability effects are linear, not logarithmic: Implications for probabilistic models of sentence comprehension. Journal of Memory and Language, 116:104174.
- Cadena et al. (2019) Santiago A Cadena, George H Denfield, Edgar Y Walker, Leon A Gatys, Andreas S Tolias, Matthias Bethge, and Alexander S Ecker. 2019. Deep convolutional models improve predictions of macaque v1 responses to natural images. PLoS computational biology, 15(4):e1006897.
- Caucheteux and King (2022) Charlotte Caucheteux and Jean-RƩmi King. 2022. Brains and algorithms partially converge in natural language processing. Communications biology, 5(1):134.
- Chen et al. (2023) Xuanyi Chen, Josef Affourtit, Rachel Ryskin, Tamar I Regev, Samuel Norman-Haignere, Olessia Jouravlev, Saima Malik-Moraleda, Hope Kean, Rosemary Varley, and Evelina Fedorenko. 2023. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cerebral Cortex, 33(12):7904–7929.
- Cichy et al. (2016) Radoslaw Martin Cichy, Aditya Khosla, Dimitrios Pantazis, Antonio Torralba, and Aude Oliva. 2016. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Scientific reports, 6(1):27755.
- Clark et al. (2018) Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv, abs/1803.05457.
- Duncan and Owen (2000) John Duncan and Adrian M Owen. 2000. Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences, 23(10):475–483.
- Feather et al. (2025) Jenelle Feather, Meenakshi Khosla, N. Apurva Ratan Murty, and Aran Nayebi. 2025. Brain-model evaluations need the NeuroAI Turing test.
- Fedorenko (2014) Evelina Fedorenko. 2014. The role of domain-general cognitive control in language comprehension. Frontiers in Psychology, 5.
- Fedorenko et al. (2011) Evelina Fedorenko, Michael K Behr, and Nancy Kanwisher. 2011. Functional specificity for high-level linguistic processing in the human brain. Proceedings of the National Academy of Sciences, 108(39):16428–16433.
- Fedorenko et al. (2010) Evelina Fedorenko, Po-Jang Hsieh, Alfonso Nieto-Castanon, Susan L. Whitfield-Gabrieli, and Nancy G. Kanwisher. 2010. New method for fMRI investigations of language: Defining ROIs functionally in individual subjects. Journal of Neurophysiology, 104(2):1177–1194.
- Fedorenko et al. (2024a) Evelina Fedorenko, Anna A. Ivanova, and Tamar I. Regev. 2024a. The language network as a natural kind within the broader landscape of the human brain. Nature Reviews Neuroscience, 25(5):289–312.
- Fedorenko et al. (2012) Evelina Fedorenko, Josh H. McDermott, Sam Norman-Haignere, and Nancy Kanwisher. 2012. Sensitivity to musical structure in the human brain. Journal of Neurophysiology, 108(12):3289–3300.
- Fedorenko et al. (2024b) Evelina Fedorenko, Steven T. Piantadosi, and Edward A. F. Gibson. 2024b. Language is primarily a tool for communication rather than thought. Nature, 630(8017):575–586.
- Fedorenko et al. (2016) Evelina Fedorenko, Terri L. Scott, Peter Brunner, William G. Coon, Brianna Pritchett, Gerwin Schalk, and Nancy Kanwisher. 2016. Neural correlate of the construction of sentence meaning. Proceedings of the National Academy of Sciences, 113(41):E6256–E6262.
- Feghhi et al. (2024) Ebrahim Feghhi, Nima Hadidi, Bryan Song, Idan A. Blank, and Jonathan C. Kao. 2024. What are large language models mapping to in the brain? a case against over-reliance on brain scores.
- Futrell et al. (2018) Richard Futrell, Edward Gibson, Harry J. Tily, Idan Blank, Anastasia Vishnevetsky, Steven Piantadosi, and Evelina Fedorenko. 2018. The natural stories corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
- Gao et al. (2024) Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, and 5 others. 2024. A framework for few-shot language model evaluation.
- Gauthier et al. (2020) Jon Gauthier, Jennifer Hu, Ethan Wilcox, Peng Qian, and Roger Levy. 2020. SyntaxGym: An online platform for targeted evaluation of language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 70–76, Online. Association for Computational Linguistics.
- Geiger et al. (2022) Franziska Geiger, Martin Schrimpf, Tiago Marques, and James J DiCarlo. 2022. Wiring up vision: Minimizing supervised synaptic updates needed to produce a primate ventral stream. In International Conference on Learning Representations 2022 Spotlight.
- Gibson (1998) Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1):1–76.
- Goldstein et al. (2022) Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A. Nastase, Amir Feder, Dotan Emanuel, Alon Cohen, Aren Jansen, Harshvardhan Gazula, Gina Choe, Aditi Rao, Catherine Kim, Colton Casto, Lora Fanda, Werner Doyle, Daniel Friedman, and 13 others. 2022. Shared computational principles for language processing in humans and deep language models. Nature Neuroscience, 25(3):369–380.
- Gorno-Tempini et al. (2004) Maria Luisa Gorno-Tempini, Nina F. Dronkers, Katherine P. Rankin, Jennifer M. Ogar, La Phengrasamy, Howard J. Rosen, Julene K. Johnson, Michael W. Weiner, and Bruce L. Miller. 2004. Cognition and anatomy in three variants of primary progressive aphasia. Annals of Neurology, 55(3):335–346.
- Hagoort (2019) Peter Hagoort. 2019. The neurobiology of language beyond single-word processing. Science, 366(6461):55–58.
- Harvey et al. (2023) Sarah E Harvey, Brett W. Larsen, and Alex H Williams. 2023. Duality of bures and shape distances with implications for comparing neural representations. In UniReps: the First Workshop on Unifying Representations in Neural Models.
- Hosseini et al. (2024) Eghbal A Hosseini, Martin Schrimpf, Yian Zhang, Samuel Bowman, Noga Zaslavsky, and Evelina Fedorenko. 2024. Artificial neural network language models predict human brain responses to language even after a developmentally realistic amount of training. Neurobiology of Language, pages 1–21.
- Hu et al. (2023) Jennifer Hu, Hannah Small, Hope Kean, Atsushi Takahashi, Leo Zekelman, Daniel Kleinman, Elizabeth Ryan, Alfonso Nieto-Castañón, Victor Ferreira, and Evelina Fedorenko. 2023. Precision fMRI reveals that the language-selective network supports both phrase-structure building and lexical access during language production. Cerebral Cortex, 33(8):4384–4404.
- Huang and Chang (2023) Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1049–1065, Toronto, Canada. Association for Computational Linguistics.
- Kauf et al. (2023) Carina Kauf, Greta Tuckute, Roger Levy, Jacob Andreas, and Evelina Fedorenko. 2023. Lexical-Semantic Content, Not Syntactic Structure, Is the Main Contributor to ANN-Brain Similarity of fMRI Responses in the Language Network. Neurobiology of Language, pages 1–36.
- Kazemian et al. (2024) Atlas Kazemian, Eric Elmoznino, and Michael F. Bonner. 2024. Convolutional architectures are cortex-aligned de novo. bioRxiv.
- Kell et al. (2018) Alexander JE Kell, Daniel LK Yamins, Erica N Shook, Sam V Norman-Haignere, and Josh H McDermott. 2018. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron, 98(3):630–644.
- Khaligh-Razavi and Kriegeskorte (2014) Seyed Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. 2014. Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation. PLoS Computational Biology, 10(11).
- Kornblith et al. (2019) Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. In International Conference on Machine Learning, pages 3519–3529. PMLR.
- Koumura et al. (2023) Takuya Koumura, Hiroki Terashima, and Shigeto Furukawa. 2023. Human-like modulation sensitivity emerging through optimization to natural sound recognition. Journal of Neuroscience, 43(21):3876–3894.
- Kriegeskorte et al. (2008) Nikolaus Kriegeskorte, Marieke Mur, and Peter Bandettini. 2008. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2.
- Kubilius et al. (2019) Jonas Kubilius, Martin Schrimpf, Kohitij Kar, Rishi Rajalingham, Ha Hong, Najib Majaj, Elias Issa, Pouya Bashivan, Jonathan Prescott-Roy, Kailyn Schmidt, Aran Nayebi, Daniel Bear, Daniel L Yamins, and James J DiCarlo. 2019. Brain-like object recognition with high-performing shallow recurrent ANNs. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
- Lipkin et al. (2022) Benjamin Lipkin, Greta Tuckute, Josef Affourtit, Hannah Small, Zachary Mineroff, Hope Kean, Olessia Jouravlev, Lara Rakocevic, Brianna Pritchett, Matthew Siegelman, Caitlyn Hoeflin, Alvincé Pongos, Idan A. Blank, Melissa Kline Struhl, Anna Ivanova, Steven Shannon, Aalok Sathe, Malte Hoffmann, Alfonso Nieto-Castañón, and Evelina Fedorenko. 2022. Probabilistic atlas for the language network based on precision fMRI data from >800 individuals. Scientific Data, 9(1).
- Mahowald et al. (2024) Kyle Mahowald, Anna A Ivanova, Idan A Blank, Nancy Kanwisher, Joshua B Tenenbaum, and Evelina Fedorenko. 2024. Dissociating language and thought in large language models. Trends in Cognitive Sciences.
- Malik-Moraleda et al. (2022) Saima Malik-Moraleda, Dima Ayyash, Jeanne Gallée, Josef Affourtit, Malte Hoffmann, Zachary Mineroff, Olessia Jouravlev, and Evelina Fedorenko. 2022. An investigation across 45 languages and 12 language families reveals a universal language network. Nature Neuroscience, 25(8):1014–1019.
- Millet and King (2021) Juliette Millet and Jean-RƩmi King. 2021. Inductive biases, pretraining and fine-tuning jointly account for brain responses to speech. ArXiv, abs/2103.01032.
- Monti et al. (2012) Martin M Monti, Lawrence M Parsons, and Daniel N Osherson. 2012. Thought beyond language: Neural dissociation of algebra and natural language. Psychological Science, 23(8):914–922.
- Nastase et al. (2021) Samuel A. Nastase, Yun-Fei Liu, Hanna Hillman, Asieh Zadbood, Liat Hasenfratz, Neggin Keshavarzian, Janice Chen, Christopher J. Honey, Yaara Yeshurun, Mor Regev, and others. 2021. The "Narratives" fMRI dataset for evaluating models of naturalistic language comprehension. Scientific Data, 8(1).
- Oh and Schuler (2023) Byung-Doh Oh and William Schuler. 2023. Why does surprisal from larger transformer-based language models provide a poorer fit to human reading times? Transactions of the Association for Computational Linguistics, 11:336–350.
- Oota et al. (2023) Subba Reddy Oota, Manish Gupta, and Mariya Toneva. 2023. Joint processing of linguistic properties in brains and language models. Preprint, arXiv:2212.08094.
- Pasquiou et al. (2022) Alexandre Pasquiou, Yair Lakretz, John Hale, Bertrand Thirion, and Christophe Pallier. 2022. Neural language models are not born equal to fit brain data, but training helps. Preprint, arXiv:2207.03380.
- Penedo et al. (2024) Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, and Thomas Wolf. 2024. The FineWeb datasets: Decanting the web for the finest text data at scale. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
- Pereira et al. (2018) Francisco Pereira, Bin Lou, Brianna Pritchett, Samuel Ritter, Samuel J. Gershman, Nancy Kanwisher, Matthew Botvinick, and Evelina Fedorenko. 2018. Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9(1):963.
- Price (2010) Cathy J. Price. 2010. The anatomy of language: A review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences, 1191(1):62–88.
- Rathi et al. (2025) Neil Rathi, Johannes Mehrer, Badr AlKhamissi, Taha Binhuraib, Nicholas M. Blauch, and Martin Schrimpf. 2025. TopoLM: Brain-like spatio-functional organization in a topographic language model. In International Conference on Learning Representations (ICLR).
- Sakaguchi et al. (2019) Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande. Communications of the ACM, 64:99–106.
- Sap et al. (2019) Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463–4473, Hong Kong, China. Association for Computational Linguistics.
- Saxe and Kanwisher (2003) R Saxe and N Kanwisher. 2003. People thinking about thinking people: The role of the temporo-parietal junction in "theory of mind". NeuroImage, 19(4):1835–1842.
- Saxe et al. (2006) Rebecca Saxe, Matthew Brett, and Nancy Kanwisher. 2006. Divide and conquer: A defense of functional localizers. Neuroimage, 30(4):1088–1096.
- Saxe and Powell (2006) Rebecca Saxe and Lindsey J. Powell. 2006. It's the thought that counts: Specific brain regions for one component of theory of mind. Psychological Science, 17(8):692–699.
- Schaeffer et al. (2023) Rylan Schaeffer, Brando Miranda, and Oluwasanmi Koyejo. 2023. Are emergent abilities of large language models a mirage? ArXiv, abs/2304.15004.
- Schrimpf et al. (2021) Martin Schrimpf, Idan Asher Blank, Greta Tuckute, Carina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. 2021. The neural architecture of language: Integrative modeling converges on predictive processing. Proceedings of the National Academy of Sciences, 118(45):e2105646118.
- Schrimpf et al. (2018) Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Franziska Geiger, Kailyn Schmidt, Daniel L. K. Yamins, and James J. DiCarlo. 2018. Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like? bioRxiv preprint.
- Schrimpf et al. (2020) Martin Schrimpf, Jonas Kubilius, Michael J. Lee, N. Apurva Ratan Murty, Robert Ajemian, and James J. DiCarlo. 2020. Integrative benchmarking to advance neurally mechanistic models of human intelligence. Neuron, 108(3):413–423.
- Shain et al. (2024) Cory Shain, Clara Meister, Tiago Pimentel, Ryan Cotterell, and Roger Levy. 2024. Large-scale evidence for logarithmic effects of word predictability on reading time. Proceedings of the National Academy of Sciences, 121(10):e2307876121.
- Shlegeris et al. (2022) Buck Shlegeris, Fabien Roger, Lawrence Chan, and Euan McLean. 2022. Language models are better than humans at next-token prediction. ArXiv, abs/2212.11281.
- Siegal and Varley (2006) Michael Siegal and Rosemary Varley. 2006. Aphasia, language, and theory of mind. Social Neuroscience, 1(3–4):167–174.
- Smith and Levy (2013) Nathaniel J. Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302–319.
- Steuer et al. (2023) Julius Steuer, Marius Mosbach, and Dietrich Klakow. 2023. Large gpt-like models are bad babies: A closer look at the relationship between linguistic competence and psycholinguistic measures. arXiv preprint arXiv:2311.04547.
- Teney et al. (2024) Damien Teney, Armand Nicolicioiu, Valentin Hartmann, and Ehsan Abbasnejad. 2024. Neural redshift: Random networks are not random functions. Preprint, arXiv:2403.02241.
- Tuckute et al. (2023) Greta Tuckute, Jenelle Feather, Dana Boebinger, and Josh H. McDermott. 2023. Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions. PLOS Biology, 21(12):1–70.
- Tuckute et al. (2024a) Greta Tuckute, Nancy Kanwisher, and Evelina Fedorenko. 2024a. Language in brains, minds, and machines. Annual Review of Neuroscience, 47.
- Tuckute et al. (2024b) Greta Tuckute, Aalok Sathe, Shashank Srikant, Maya Taliaferro, Mingye Wang, Martin Schrimpf, Kendrick Kay, and Evelina Fedorenko. 2024b. Driving and suppressing the human language network using large language models. Nature Human Behaviour, pages 1–18.
- Varley and Siegal (2000) Rosemary Varley and Michael Siegal. 2000. Evidence for cognition without grammar from causal reasoning and "theory of mind" in an agrammatic aphasic patient. Current Biology, 10(12):723–726.
- Varley et al. (2005) Rosemary A. Varley, Nicolai J. C. Klessinger, Charles A. J. Romanowski, and Michael Siegal. 2005. Agrammatic but numerate. Proceedings of the National Academy of Sciences, 102(9):3519–3524.
- Warstadt et al. (2019) Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377–392.
- Wolf et al. (2019) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
- Yamins et al. (2014) Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. 2014. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624.
- Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In Annual Meeting of the Association for Computational Linguistics.
- Zhuang et al. (2021) Chengxu Zhuang, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L.K. Yamins. 2021. Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences (PNAS), 118(3).
| Dataset | Modality | Presentation | Example Stimulus |
| --- | --- | --- | --- |
| Pereira2018 | fMRI | Reading | Accordions produce sound with bellows … |
| Blank2014 | fMRI | Listening | A clear and joyous day it was and out on the wide ⦠|
| Fedorenko2016 | ECoG | Reading | āALEXā, āWASā, āTIREDā, āSOā, āHEā, āTOOKā, ⦠|
| Tuckute2024 | fMRI | Reading | The judge spoke, breaking the silence. |
| Narratives | fMRI | Listening | Okay so getting back to our story about uh Lucy ⦠|
| Futrell2018 | Reading Times | Reading | A clear and joyous day it was and out on the wide ⦠|
Table 1: Datasets Used for Evaluating Model Alignment. Neuroimaging datasets were collected via either functional magnetic resonance imaging (fMRI) or electrocorticography (ECoG). Stimuli range from short sentences (Fedorenko2016, Tuckute2024) to paragraphs (Pereira2018) and entire stories (Blank2014, Narratives, Futrell2018) and were presented either visually or auditorily. Futrell2018 is a behavioral dataset.
Figure 6: Evaluating Brain Alignment with Linear Predictivity and No Contextualization is Most Stringent. (a) Average brain alignment across 8 Pythia models under three conditions: (1) a pretrained model processing the original stimuli, (2) a pretrained model processing random sequences of the same length (averaged over five random seeds) as a control condition, and (3) the model with untrained parameters processing the original stimuli. The linear predictivity metric differentiates between meaningful and random stimuli most strongly, while RSA and CKA overestimate alignment. (b) Brain alignment on the Pereira2018 dataset under two cross-validation schemes: with contextualization (random sentence split) and without contextualization (story-based split).
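The three metrics compared in panel (a) can be sketched as follows (a minimal illustration, assuming stimulus-by-feature matrices `X` for model activations and `Y` for brain responses; the function names are ours): linear predictivity fits a ridge map from model features to voxel responses, linear CKA (Kornblith et al., 2019) compares centered Gram matrices, and RSA (Kriegeskorte et al., 2008) correlates representational dissimilarity matrices.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr, spearmanr
from sklearn.linear_model import Ridge

def linear_predictivity(X_train, Y_train, X_test, Y_test, alpha=1.0):
    """Ridge-regress brain responses on model features; score as the
    mean per-voxel Pearson r on held-out stimuli."""
    pred = Ridge(alpha=alpha).fit(X_train, Y_train).predict(X_test)
    return float(np.mean([pearsonr(pred[:, v], Y_test[:, v])[0]
                          for v in range(Y_test.shape[1])]))

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    return (np.linalg.norm(Y.T @ X, "fro") ** 2
            / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")))

def rsa(X, Y):
    """Spearman correlation between the two representational
    dissimilarity matrices (pairwise correlation distances)."""
    return spearmanr(pdist(X, "correlation"), pdist(Y, "correlation"))[0]
```

Note that CKA and RSA involve no fitting on held-out data, which is one reason they can assign high scores even to untrained models, as panel (a) shows.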
Appendix
Appendix A Neuroimaging & Behavioral Datasets
Table 1 shows the different neuroimaging and behavioral datasets used in this work, along with the dataset modality, presentation mode, and a stimulus example.
A.1 Neuroimaging Datasets
Pereira et al. (2018)
This dataset consists of fMRI activations (blood-oxygen-level-dependent; BOLD responses) recorded as participants read short passages presented one sentence at a time for 4 s. The dataset is composed of two distinct experiments: one with 9 subjects presented with 384 sentences, and another with 6 subjects presented with 243 sentences each. The passages in each experiment spanned 24 different topics. The results reported for this dataset are the average alignment across both experiments after normalizing with their respective cross-subject consistency estimates.
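A common way to estimate such a cross-subject consistency ceiling (a sketch of the general leave-one-subject-out approach, not necessarily the exact estimator used here) is to correlate each subject's responses with the average of the remaining subjects, then express raw model scores as a fraction of that estimate:

```python
import numpy as np

def cross_subject_consistency(responses: np.ndarray) -> float:
    """responses: (n_subjects, n_stimuli). Leave-one-subject-out:
    correlate each subject with the mean of the others, then average."""
    rs = []
    for s in range(responses.shape[0]):
        rest = np.delete(responses, s, axis=0).mean(axis=0)
        rs.append(np.corrcoef(responses[s], rest)[0, 1])
    return float(np.mean(rs))

def normalize(raw_model_score: float, ceiling: float) -> float:
    """Express a raw alignment score as a fraction of the noise ceiling."""
    return raw_model_score / ceiling
```

The reported Pereira2018 score is then the average of the two experiments' ceiling-normalized scores.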
Blank et al. (2014)
This dataset also involves fMRI signals, but recorded from only 12 functional regions of interest (fROIs) instead of the higher-resolution signal used by Pereira et al. (2018). The data were collected from 5 participants as they listened to 8 long naturalistic stories adapted from existing fairy tales and short stories (Futrell et al., 2018). Each story was approximately 5 minutes long and comprised around 165 sentences, providing a much longer context length than the other neuroimaging datasets. When measuring brain alignment, we use the input stimuli of the last 32 TRs as the model's context.
Fedorenko et al. (2016)
This dataset captures ECoG signals from 5 participants as they read 8-word-long sentences presented one word at a time for 450 or 700 ms. Following Schrimpf et al. (2021), we select the 52 of the 80 sentences that were presented to all participants.
Tuckute et al. (2024b)
In this dataset, 5 participants read 1000 6-word sentences presented one sentence at a time for 2 s. BOLD responses from voxels in the language network were averaged within each participant and then across participants to yield an overall average language network response to each sentence. The stimuli span a large part of the linguistic space, enabling model-brain comparisons across a wide range of single sentences. Sentence presentation order was randomized across participants. In combination with the diversity of the linguistic materials, this makes it a particularly challenging dataset for model evaluation.
Narratives Dataset (Nastase et al., 2021)
This dataset consists of fMRI data collected while human subjects listened to 27 diverse spoken story stimuli. The collection includes 345 subjects, 891 functional scans, and approximately 4.6 hours of unique audio stimuli. For our story-based analysis, we focused on 5 participants who each listened to both the Lucy and Tunnel stories. Since functional localization was not performed in the Narratives dataset, we approximated language regions by extracting the top-10% of voxels from each anatomically defined language region according to a probabilistic atlas for the human language system (Lipkin et al., 2022). Because the corpus is limited to two stories, traditional 10-fold cross-validation was not feasible. Instead, we partitioned each story into $n$ contiguous segments, each functioning as an independent narrative unit, which prevents contextual information from leaking across splits.
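One way to implement the contiguous story segmentation described above (a sketch under our own naming; the actual segment boundaries may differ):

```python
def contiguous_segments(n_timepoints, n_segments):
    """Split a story's timepoints into contiguous, non-overlapping
    segments so that no narrative context leaks between CV folds."""
    bounds = [round(i * n_timepoints / n_segments) for i in range(n_segments + 1)]
    return [list(range(bounds[i], bounds[i + 1])) for i in range(n_segments)]
```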
A.2 Behavioral Dataset
Futrell et al. (2018)
This dataset consists of self-paced reading times for each word from 180 participants. The stimuli include 10 stories from the Natural Stories Corpus (Futrell et al., 2018), similar to Blank2014. Each participant read between 5 and all 10 stories.
Appendix B Rigorous Brain-Scoring
Despite progress in linking LLMs to neural activity, there is no standard for comparing brain alignment across datasets and conditions. Here, we aim to establish a set of desiderata for evaluating brain alignment. For a model to be considered truly brain-aligned, two key criteria must be met. First, high alignment scores should indicate that the model captures stimulus-driven responses: when presented with a random sequence of tokens, alignment should drop significantly compared to the original linguistic stimuli. Second, a brain-aligned model should generalize to new linguistic contexts rather than overfitting to specific examples. We address these two points in Section 4 to justify our choice of metric and cross-validation scheme for each dataset (see Figure 6). For all benchmarks, we localize language-selective units, which is consistent with neural site selection in neuroscience experiments and allows for fair comparisons across models irrespective of model size (AlKhamissi et al., 2025). A key limitation of previous methods is their reliance on the raw hidden-state dimensions, which inherently favors larger models by providing a greater feature space and artificially inflating alignment scores.
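The unit-localization step can be sketched as a simple contrast between model responses to sentences and to a control condition (e.g., non-word strings); the names and fixed unit count below are illustrative, not the exact procedure:

```python
import numpy as np

def localize_language_units(acts_sentences, acts_nonwords, k=128):
    """Select the k units (columns) most selective for sentences over
    non-words, so every model contributes the same feature count."""
    contrast = acts_sentences.mean(axis=0) - acts_nonwords.mean(axis=0)
    return np.argsort(contrast)[::-1][:k]
```

Fixing `k` across models removes the advantage that larger hidden states would otherwise confer.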
| Tokens | Pereira2018 | Blank2014 | Tuckute2024 | Fedorenko2016 | Narratives | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| 250B | 1.00 | 0.19 | 0.47 | 0.78 | 0.04 | 0.50 |
| 500B | 0.97 | 0.08 | 0.51 | 0.87 | 0.04 | 0.49 |
| 750B | 0.99 | 0.08 | 0.52 | 0.78 | 0.04 | 0.48 |
| 1T | 1.07 | 0.12 | 0.55 | 0.84 | 0.04 | 0.52 |
| 1.25T | 1.00 | 0.12 | 0.50 | 0.82 | 0.03 | 0.49 |
| 1.5T | 1.00 | 0.12 | 0.52 | 0.79 | 0.03 | 0.49 |
| 1.75T | 0.96 | 0.13 | 0.48 | 0.79 | 0.04 | 0.48 |
| 2T | 1.05 | 0.15 | 0.56 | 0.84 | 0.04 | 0.53 |
| 2.25T | 1.08 | 0.16 | 0.55 | 0.75 | 0.04 | 0.51 |
| 2.5T | 1.12 | 0.17 | 0.52 | 0.72 | 0.01 | 0.51 |
| 2.75T | 1.13 | 0.12 | 0.49 | 0.75 | 0.04 | 0.49 |
| 3T | 1.03 | 0.26 | 0.51 | 0.55 | 0.01 | 0.47 |
| 3.25T | 1.02 | 0.13 | 0.52 | 0.68 | 0.02 | 0.47 |
| 3.5T | 1.04 | 0.14 | 0.52 | 0.72 | 0.04 | 0.49 |
| 3.75T | 1.14 | 0.06 | 0.57 | 0.84 | 0.03 | 0.53 |
| 4T | 1.05 | 0.13 | 0.63 | 0.82 | 0.05 | 0.54 |
Table 2: Brain Alignment Performance of SmolLM2-360M Across Training Checkpoints. Reported scores correspond to normalized correlations with neural responses from five benchmark datasets (Pereira2018, Blank2014, Tuckute2024, Fedorenko2016, Narratives), along with their average (Avg). These results assess the extent to which the model's internal representations align with activity in the human language network.
Appendix C Brain-Score Using Additional Metrics
Centered Kernel Alignment (CKA)
Kornblith et al. (2019) introduced CKA as a substitute for Canonical Correlation Analysis (CCA) for assessing the similarity between neural network representations. Unlike linear predictivity, it is a non-parametric metric and therefore does not require any additional training. CKA is particularly effective with high-dimensional representations and reliably identifies correspondences between representations of networks trained from different initializations (Kornblith et al., 2019).
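Linear CKA between two stimulus-by-feature matrices can be computed in a few lines; this follows the standard formulation from Kornblith et al. (2019):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (M x d1) and Y (M x d2),
    rows = stimuli. Returns a similarity score in [0, 1]."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```

Because linear CKA is invariant to isotropic scaling and orthogonal transforms, no regression fitting is needed, which is what makes it "training-free" relative to linear predictivity.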
Representational Similarity Analysis (RSA)
Kriegeskorte et al. (2008) introduced representational dissimilarity matrices (RDMs) as a solution to the challenge of integrating brain-activity measurements, behavioral observations, and computational models in systems neuroscience. RDMs are part of a broader analytical framework referred to as representational similarity analysis (RSA). In practical terms, to compute the dissimilarity matrix for an $N$-dimensional network's responses to $M$ different stimuli, an $M \times M$ matrix of distances between all pairs of evoked responses is generated for both the brain activity and the language model's activations (Harvey et al., 2023). The correlation between these two matrices is then used as a measure of brain alignment.
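A minimal RSA sketch, assuming correlation-distance RDMs compared with a Spearman correlation (the distance and correlation metrics vary across studies):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_acts, brain_acts):
    """Compare two M x N response matrices via their M x M dissimilarity
    structure; returns the Spearman correlation of the two RDMs
    (pdist yields the condensed upper triangle of each RDM)."""
    rdm_model = pdist(model_acts, metric="correlation")
    rdm_brain = pdist(brain_acts, metric="correlation")
    rho, _ = spearmanr(rdm_model, rdm_brain)
    return rho
```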
| Tokens | BLiMP | SyntaxGym | Avg Formal | ARC-Easy | ARC-Challenge | Social-IQA | PIQA | WinoGrande | HellaSwag | Avg Functional |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 250B | 0.81 | 0.80 | 0.81 | 0.33 | 0.66 | 0.35 | 0.70 | 0.55 | 0.47 | 0.52 |
| 500B | 0.80 | 0.78 | 0.79 | 0.78 | 0.66 | 0.35 | 0.70 | 0.56 | 0.49 | 0.53 |
| 750B | 0.80 | 0.82 | 0.81 | 0.69 | 0.69 | 0.34 | 0.71 | 0.57 | 0.50 | 0.53 |
| 1T | 0.81 | 0.78 | 0.80 | 0.69 | 0.69 | 0.35 | 0.71 | 0.57 | 0.50 | 0.54 |
| 1.25T | 0.81 | 0.78 | 0.79 | 0.68 | 0.68 | 0.35 | 0.71 | 0.57 | 0.51 | 0.54 |
| 1.5T | 0.81 | 0.80 | 0.80 | 0.69 | 0.68 | 0.35 | 0.72 | 0.56 | 0.51 | 0.54 |
| 1.75T | 0.80 | 0.79 | 0.79 | 0.68 | 0.68 | 0.36 | 0.72 | 0.59 | 0.51 | 0.54 |
| 2T | 0.81 | 0.81 | 0.81 | 0.69 | 0.69 | 0.35 | 0.72 | 0.59 | 0.52 | 0.54 |
| 2.25T | 0.81 | 0.82 | 0.81 | 0.68 | 0.68 | 0.35 | 0.71 | 0.59 | 0.51 | 0.54 |
| 2.5T | 0.81 | 0.82 | 0.82 | 0.68 | 0.68 | 0.36 | 0.70 | 0.56 | 0.52 | 0.54 |
| 2.75T | 0.81 | 0.82 | 0.81 | 0.25 | 0.23 | 0.35 | 0.50 | 0.57 | 0.50 | 0.50 |
| 3T | 0.81 | 0.81 | 0.81 | 0.25 | 0.23 | 0.35 | 0.50 | 0.57 | 0.50 | 0.50 |
| 3.25T | 0.81 | 0.77 | 0.79 | 0.67 | 0.67 | 0.34 | 0.67 | 0.57 | 0.51 | 0.52 |
| 3.5T | 0.81 | 0.79 | 0.80 | 0.71 | 0.71 | 0.38 | 0.72 | 0.58 | 0.53 | 0.55 |
| 3.75T | 0.80 | 0.78 | 0.79 | 0.72 | 0.72 | 0.58 | 0.58 | 0.54 | 0.56 | 0.56 |
| 4T | 0.81 | 0.79 | 0.80 | 0.73 | 0.73 | 0.39 | 0.74 | 0.61 | 0.56 | 0.57 |
Table 3: Performance of SmolLM2-360M on Formal and Functional Linguistic Benchmarks Across Training Checkpoints. Formal competence is measured using BLiMP and SyntaxGym (with averages reported as Avg Formal). Functional competence is measured using ARC-Easy, ARC-Challenge, Social-IQA, PIQA, WinoGrande, and HellaSwag (with averages reported as Avg Functional). Together, these results characterize the relationship between training progression and the development of different aspects of linguistic ability.
Appendix D Brain Alignment Over Training
Figure 7: Brain Alignment Saturates Early on in Training. Plots complementing Figure 3 showing the brain alignment scores of three other models from the Pythia model suite with varying sizes (log x-axis up to 16B tokens, uneven spacing after black line). Scores are normalized by their cross-subject consistency scores. Alignment quickly peaks around 2-8B tokens before saturating or declining, regardless of model size.
Figure 7 complements Figure 3 in the main paper, illustrating that brain alignment saturates early on in training for all models analyzed in this work.
Appendix E Formal & Functional Scores
Figure 8: Individual Benchmark Scores for Formal and Functional Competence. (a-c): each column shows the evolution of individual benchmark scores for formal competence (top) and functional competence (bottom) during training. Data is presented for Pythia models of three different sizes. (d): the same as (a-c), with data averaged across models of five different sizes.
Figure 8 presents the individual benchmark scores for both formal and functional linguistic competence across training. Formal benchmarks peak early, mirroring the trajectory of brain alignment, and remain saturated throughout training. In contrast, functional benchmarks continue to improve, reflecting the models' increasing ability to acquire factual knowledge and reasoning skills as they are trained on significantly more tokens using next-word prediction.
Appendix F Results on SmolLM2-360M
To assess the generalizability of our findings, we replicated our experiments using a model from a different model family. Specifically, we evaluated multiple training checkpoints of SmolLM2-360M on the brain alignment, formal, and functional linguistic competence benchmarks. Since SmolLM2 only provides checkpoints at intervals of 250B tokens, we cannot capture the gradual emergence of brain alignment and formal competence, both of which typically saturate around 4B-8B tokens. Given this limitation, our hypothesis was that brain alignment and formal competence would remain largely stable across these checkpoints, while functional competence would continue to improve. As shown in Tables 2 and 3, the results are consistent with this hypothesis.
Appendix G Role of Weight Initialization
Figure 9: Role of Weight Initialization on Brain Alignment in Untrained Models. The default initialization standard deviation in the HuggingFace library (sd = 0.02) yields the highest brain alignment for untrained models, suggesting that initialization choices play a crucial role in shaping alignment even before training begins.
Figure 9 examines the effect of weight initialization variance on brain alignment in untrained models. We systematically vary the initialization standard deviation (sd) and find that the default HuggingFace (Wolf et al., 2019) initialization (sd = 0.02) achieves the highest alignment across datasets. This suggests that even before training begins, the choice of initialization can significantly influence how well a model's representations align with neural activity. This finding raises an intriguing hypothesis: could brain alignment, a computationally inexpensive metric, serve as a useful heuristic for selecting initialization parameters? If so, it could help models learn tasks more efficiently and converge faster, reducing the need for extensive trial-and-error when training from scratch. The results also highlight the importance of architectural inductive biases.
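The sweep can be sketched as follows; `score_brain_alignment` and the layer shapes are placeholders for the evaluation pipeline, not functions or settings from this codebase:

```python
import numpy as np

def reinit_weights(shapes, std, seed=0):
    """Draw fresh Gaussian weights with a given standard deviation,
    mimicking a transformer's init (HuggingFace default: std = 0.02)."""
    rng = np.random.default_rng(seed)
    return [rng.normal(0.0, std, size=s) for s in shapes]

# hypothetical sweep over log-spaced init scales, as in Figure 9
for std in [1e-3, 1e-2, 1e-1, 1.0]:
    weights = reinit_weights([(256, 256), (256, 1024)], std)
    # score = score_brain_alignment(untrained_model_with(weights))  # placeholder
```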
Appendix H Effect of Number of Units on Brain Alignment
Figure 10: The Effect of the Number of Localized Units on Final Brain Alignment. Brain alignment is evaluated after localizing 128, 1024, and 4096 units. While increasing the number of units slightly affects overall alignment, the relative ranking of models remains largely unchanged, indicating that model comparisons are robust to the choice of unit count.
Figure 10 illustrates the impact of localizing more units on final brain alignment across the eight Pythia models used in this study. We find that increasing the number of units has minimal impact on the relative ranking of models, with only a slight increase in average alignment. Additionally, model size does not influence brain alignment once the number of units is controlled, reinforcing the idea that alignment is driven by feature selection rather than scale.
<details>
<summary>figures/brain-score-llms-brain-alignment-v1.drawio.png Details</summary>

### Visual Description
## Line Charts: Brain Alignment vs. Number of Tokens for Different Model Sizes
### Overview
The image presents three line charts comparing brain alignment (Pearson's r) against the number of tokens processed by different language models. Each chart corresponds to a model size (14M, 70M, and 160M). The charts display the brain alignment for two regions: the Language Network and V1, as the number of tokens increases. Shaded regions around each line represent the uncertainty or variability in the data.
### Components/Axes
* **Title:** Each chart is titled with the model size: "14M", "70M", and "160M".
* **Y-axis:** "Brain Alignment (Pearson's r)". The scale ranges from -0.025 to 0.150 with increments of 0.025.
* **X-axis:** "Number of Tokens". The scale is non-linear and includes values like 0, 2M, 4M, 8M, 16M, 32M, 64M, 128M, 256M, 512M, 1B, 2B, 4B, 8B, 16B, 20B, 40B, 60B, 80B, 100B, 120B, 140B, 160B, 180B, 200B, 220B, 240B, 260B, 280B, 286B.
* **Legend:** Located at the bottom of the image.
* **Language Network:** Represented by a green line with circular markers and a light green shaded area.
* **V1:** Represented by a purple line with 'x' markers and a light purple shaded area.
### Detailed Analysis
#### 14M Model
* **Language Network (Green):** The brain alignment starts at approximately 0.06 (±0.01) and remains relatively stable until around 64M tokens. After 64M, the alignment increases sharply, reaching approximately 0.12 (±0.01) around 128M tokens, and then plateaus around 0.12-0.13 (±0.01) for the rest of the token range.
* **V1 (Purple):** The brain alignment starts near 0.01 (±0.01) and fluctuates between 0.01 and 0.03 (±0.01) across the entire range of tokens, with no clear increasing or decreasing trend.
#### 70M Model
* **Language Network (Green):** Similar to the 14M model, the brain alignment starts around 0.05 (±0.01) and remains stable until approximately 64M tokens. It then increases sharply, reaching approximately 0.12 (±0.01) around 128M tokens, and plateaus around 0.12-0.13 (±0.01) for the rest of the token range.
* **V1 (Purple):** The brain alignment starts near 0.00 (±0.01) and fluctuates between 0.00 and 0.03 (±0.01) across the entire range of tokens, with no clear increasing or decreasing trend.
#### 160M Model
* **Language Network (Green):** The brain alignment starts around 0.06 (±0.01) and remains relatively stable until approximately 64M tokens. It then increases sharply, reaching approximately 0.11 (±0.01) around 128M tokens, and plateaus around 0.11-0.12 (±0.01) for the rest of the token range.
* **V1 (Purple):** The brain alignment starts near 0.02 (±0.01) and fluctuates between 0.00 and 0.03 (±0.01) across the entire range of tokens, with no clear increasing or decreasing trend.
### Key Observations
* **Language Network Improvement:** The Language Network shows a significant increase in brain alignment after a certain number of tokens (around 64M), regardless of the model size.
* **V1 Stability:** The V1 region shows relatively stable and low brain alignment across all model sizes and token ranges.
* **Model Size Impact:** The 14M and 70M models show a slightly higher plateau in brain alignment for the Language Network compared to the 160M model.
### Interpretation
The data suggests that increasing the number of tokens processed by a language model leads to improved brain alignment in the Language Network region, but only after a certain threshold (around 64M tokens). The V1 region does not show a similar improvement, indicating that the Language Network is more sensitive to the amount of training data. The slight difference in plateau levels between the models suggests that there might be an optimal model size or that other factors beyond the number of tokens influence brain alignment. The shaded regions indicate the variability in the data, which could be due to individual differences or other experimental factors.
</details>
Figure 11: Brain Alignment with the Language Network vs. V1 Across Training. Raw brain alignment scores (Pearson's r) of three Pythia models of varying sizes are shown on the Pereira2018 dataset. The x-axis represents training progress: log-scaled up to 16B tokens, then evenly spaced every 20B tokens after the black line. Alignment with V1, an early visual region, remains stable throughout training, while alignment with the language network (LN) increases around 4B tokens before plateauing.
Appendix I Model Size Does Not Predict Alignment
<details>
<summary>figures/brain-score-llms-model-size-greens.drawio.png Details</summary>

### Visual Description
## Line Chart: Brain Alignment vs. Pythia Model Size
### Overview
The image is a line chart comparing brain alignment scores across different datasets as a function of Pythia model size. The chart displays six datasets: Pereira2018, Fedorenko2016, Average, Tuckute2024, Narratives, and Blank2014. The x-axis represents the Pythia model size, and the y-axis represents the brain alignment score. Each dataset is represented by a line with a specific color and marker. Shaded regions around each line indicate the uncertainty or variability in the data.
### Components/Axes
* **Title:** None
* **X-axis:** Pythia Model Size
* Scale: 14M, 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B
* **Y-axis:** Brain Alignment
* Scale: 0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4
* **Legend:** Located on the top-right of the chart.
* Pereira2018 (light green, circle marker)
* Fedorenko2016 (green, square marker)
* Average (dark green, diamond marker)
* Tuckute2024 (light green, plus marker)
* Narratives (dark green, diamond marker)
* Blank2014 (light green, x marker)
### Detailed Analysis
* **Pereira2018 (light green, circle marker):** The line starts at approximately 1.15 at 14M, increases to approximately 1.25 at 70M, decreases to approximately 1.1 at 160M, remains relatively stable at approximately 1.13 at 410M and 1B, decreases to approximately 0.9 at 2.8B, and further decreases to approximately 0.7 at 6.9B.
* **Fedorenko2016 (green, square marker):** The line starts at approximately 0.8 at 14M, remains relatively stable at approximately 0.8 at 70M, decreases slightly to approximately 0.78 at 160M, remains relatively stable at approximately 0.78 at 410M and 1B, increases slightly to approximately 0.84 at 2.8B, and decreases to approximately 0.75 at 6.9B.
* **Average (dark green, diamond marker):** The line starts at approximately 0.55 at 14M, increases slightly to approximately 0.58 at 70M and 410M, decreases to approximately 0.48 at 1.4B, remains relatively stable at approximately 0.49 at 2.8B, and decreases to approximately 0.42 at 6.9B. This line is thicker than the others.
* **Tuckute2024 (light green, plus marker):** The line starts at approximately 0.48 at 14M, increases slightly to approximately 0.5 at 70M and 410M, decreases to approximately 0.3 at 1B, remains relatively stable at approximately 0.32 at 1.4B, decreases to approximately 0.18 at 2.8B, and remains relatively stable at approximately 0.18 at 6.9B.
* **Narratives (dark green, diamond marker):** The line starts at approximately 0.13 at 14M, increases slightly to approximately 0.14 at 70M, remains relatively stable at approximately 0.13 at 160M, decreases slightly to approximately 0.12 at 410M, remains relatively stable at approximately 0.12 at 1B, increases slightly to approximately 0.17 at 2.8B, and remains relatively stable at approximately 0.17 at 6.9B.
* **Blank2014 (light green, x marker):** The line starts at approximately 0.08 at 14M, increases slightly to approximately 0.12 at 70M, remains relatively stable at approximately 0.1 at 160M, increases slightly to approximately 0.11 at 410M, remains relatively stable at approximately 0.09 at 1B, increases slightly to approximately 0.1 at 1.4B, remains relatively stable at approximately 0.1 at 2.8B, and remains relatively stable at approximately 0.08 at 6.9B.
### Key Observations
* The Pereira2018 dataset has the highest brain alignment scores across all model sizes.
* The Blank2014 dataset has the lowest brain alignment scores across all model sizes.
* The "Average" line (the mean across datasets) is thicker than the other lines, making it visually distinct.
* Most datasets show a decrease in brain alignment as the Pythia model size increases beyond 1B.
* The shaded regions around each line indicate variability in the data, with some datasets showing more variability than others.
### Interpretation
The chart suggests that brain alignment scores vary significantly across datasets when evaluated against Pythia models of varying sizes. The Pereira2018 dataset consistently shows the highest alignment, indicating it may be better captured by these models, while the Blank2014 dataset shows the lowest. Most datasets show a decrease in brain alignment as model size increases beyond 1B, which could indicate diminishing returns for larger models. The "Average" line provides a consolidated view of overall performance, and the shaded regions indicate that the alignment scores carry variability that may be influenced by factors not represented in the chart.
</details>
Figure 12: Model Size Does Not Predict Brain Alignment when localizing a fixed set of language units. Brain alignment across model sizes in the Pythia suite, measured at their final training checkpoints. Brain alignment is shown for each dataset, along with the average score across datasets, for eight models of varying sizes.
Figure 12 presents the brain alignment for each dataset, along with the average alignment across datasets, for eight models of varying sizes from the Pythia model suite (final checkpoint). Contrary to the assumption that larger models exhibit higher brain alignment (Aw et al., 2023), we observe a decline in average alignment from 1B up to 6.9B parameters when controlling for feature size. This analysis is made possible by functional localization, which lets us extract a fixed number of units from each model rather than relying on hidden-state dimensions, as done in previous studies, ensuring a fairer comparison among models. We show in Appendix H that increasing the number of localized units has minimal impact on the relative ranking of the models. These findings also align with expectations in the language-neuroscience community, where it is widely believed that capturing neural activity in the brain's language network does not require superhuman-scale models.
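For concreteness, the alignment metric used throughout (cross-validated Pearson's r between linear predictions from localized model units and voxel responses) can be sketched as below. This is a minimal illustration assuming closed-form ridge regression; the regularization strength and cross-validation scheme are placeholders, not the paper's exact setup.

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge weights: (X^T X + alpha * I)^-1 X^T Y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

def brain_alignment(feats, voxels, alpha=1.0, n_splits=5):
    """Cross-validated Pearson's r between ridge predictions and voxel
    responses, averaged over voxels and folds."""
    idx = np.arange(len(feats))
    scores = []
    for test in np.array_split(idx, n_splits):
        train = np.setdiff1d(idx, test)
        W = ridge_fit(feats[train], voxels[train], alpha)
        pred, true = feats[test] @ W, voxels[test]
        # Pearson r per voxel on the held-out split, then average.
        pc, tc = pred - pred.mean(0), true - true.mean(0)
        r = (pc * tc).sum(0) / np.sqrt((pc ** 2).sum(0) * (tc ** 2).sum(0))
        scores.append(np.nanmean(r))
    return float(np.mean(scores))

# Toy data: 200 stimuli, 128 localized units, 30 voxels whose responses
# are a noisy linear readout of the unit activations.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 128))
Y = X @ rng.normal(size=(128, 30)) + 0.5 * rng.normal(size=(200, 30))
score = brain_alignment(X, Y)
```

Because the feature matrix `X` always has the same number of columns (the localized units), the regression problem has the same capacity for every model, which is what makes the cross-model comparison in Figure 12 fair.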
Appendix J Alignment with Other Brain Regions
As a control, we also examine alignment with non-language brain regions. Specifically, Figure 11 shows the brain alignment of three Pythia models with both the language network (LN) and V1, an early visual cortex region, on the Pereira2018 dataset. While alignment with the LN increases early in training (around 4B tokens) and then saturates, alignment with V1 remains largely unchanged throughout training. This divergence highlights a key aspect of LLM representations: they do not appear to encode low-level perceptual features, such as those processed in early visual areas. If models were learning perceptual structure from the stimuli, we would expect alignment with V1 to increase alongside LN alignment. Instead, the stability of V1 alignment across training suggests that language models selectively develop internal representations that align with higher-order linguistic processing rather than general sensory processing.
One reason we do not measure alignment against other higher-level cognitive brain regions, such as the default mode network (DMN), the multiple demand network (MD), or the theory of mind network (ToM), is a major limitation of current neuroimaging datasets: the linguistic stimuli used in studies with publicly available data (e.g., Pereira2018) do not reliably engage these higher-level cognitive regions, leading to substantial variability across individuals and thus much lower cross-subject consistency scores. Simply "looking" for alignment in the DMN or MD is therefore insufficient. Instead, we need new datasets that deliberately activate non-language networks and record item-level neural responses. For example, most MD studies rely on blocked fMRI designs (e.g., hard vs. easy math), yielding one activation estimate per condition rather than per stimulus. Such coarse measurements limit their utility for evaluating model-to-brain correspondence at the granularity of individual items. We expect alignment with the MD network, which is involved in logical reasoning, to track functional linguistic competence more than formal competence as models improve on relevant benchmarks. We leave this investigation for future work, pending the availability of suitable datasets.
Appendix K Cross-Subject Consistency Scores
| Benchmark | Consistency |
| --- | --- |
| Pereira2018 (Exp 2) † | 0.086 |
| Pereira2018 (Exp 3) | 0.144 |
| Blank2014 | 0.178 |
| Fedorenko2016 | 0.222 |
| Tuckute2024 | 0.559 |
| Narratives | 0.181 |
| Futrell2018 | 0.858 |

Table 4: Cross-Subject Consistency Scores. The values used to normalize the raw Pearson correlations. † Pereira2018 (Exp 2) was computed without extrapolation.
Table 4 shows the cross-subject consistency scores, computed with extrapolation (except where noted), for the different benchmarks used in this work.
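The normalization these scores serve can be sketched as follows, using the values from Table 4. This is an illustrative sketch in our own naming, not the paper's code: dividing a raw Pearson correlation by the cross-subject consistency ceiling yields a normalized alignment score, where 1.0 means the model predicts held-out brain data as well as other human subjects do.

```python
# Cross-subject consistency ceilings from Table 4.
CEILINGS = {
    "Pereira2018 (Exp 2)": 0.086,
    "Pereira2018 (Exp 3)": 0.144,
    "Blank2014": 0.178,
    "Fedorenko2016": 0.222,
    "Tuckute2024": 0.559,
    "Narratives": 0.181,
    "Futrell2018": 0.858,
}

def normalized_alignment(raw_r, benchmark):
    """Divide a raw Pearson correlation by the cross-subject ceiling, so
    1.0 means the model matches the consistency of human subjects."""
    return raw_r / CEILINGS[benchmark]

# e.g., a raw r of 0.12 on Blank2014:
score = normalized_alignment(0.12, "Blank2014")  # ~0.674
```

Because the ceilings are well below 1, normalized scores can exceed 1.0 when a model predicts a subject's responses better than other subjects do, which is consistent with Figure 12's y-axis extending beyond 1.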