# Inferring Functionality of Attention Heads from their Parameters
**Authors**:
- Amit Elhelo, Mor Geva (Blavatnik School of Computer Science, Tel Aviv University)
## Abstract
Attention heads are one of the building blocks of large language models (LLMs). Prior work investigating their operation has mostly focused on analyzing their behavior during inference for specific circuits or tasks. In this work, we seek a comprehensive mapping of the operations they implement in a model. We propose MAPS (Mapping Attention head ParameterS), an efficient framework that infers the functionality of attention heads from their parameters, without any model training or inference. We showcase the utility of MAPS for answering two types of questions: (a) given a predefined operation, mapping how strongly heads across the model implement it, and (b) given an attention head, inferring its salient functionality. Evaluating MAPS on 20 operations across 6 popular LLMs shows that its estimations correlate with the heads’ outputs during inference and are causally linked to the model’s predictions. Moreover, its mappings reveal attention heads of certain operations that were overlooked in previous studies, along with valuable insights into function universality and architecture biases in LLMs. Next, we present an automatic pipeline and analysis that leverage MAPS to characterize the salient operations of a given head. Our pipeline produces plausible operation descriptions for most heads, as assessed by human judgment, while revealing diverse operations. We release our code and mappings at https://github.com/amitelhelo/MAPS.
## 1 Introduction
Figure 1: Illustration of MAPS, a framework for inferring the functionality of attention heads in LLMs from their parameters. MAPS casts the head as a matrix $M$ which assigns a score for every pair of tokens in the model’s vocabulary. Then, it considers groups of token pairs (sub-matrices in $M$ ) to measure how strongly the head implements a given operation (A) and to inspect the head’s salient operations (B).
Attention heads play a key role in modern large language models (LLMs) (Vaswani et al., 2017; Zhou et al., 2024; Olsson et al., 2022). Numerous studies (Zheng et al., 2024; Ferrando et al., 2024) have explored their functionality, typically by analyzing their attention patterns or outputs during inference for certain inputs or tasks.
However, relying on the model’s behavior for certain inputs has drawbacks. First, this approach may overlook some of the functions implemented by the head, as heads can exhibit different behaviors for different inputs (Gould et al., 2024; Merullo et al., 2024a; Olsson et al., 2022; Kissane et al., 2024). Second, a comprehensive analysis of the head’s operation would require executing the model over numerous inputs, potentially the whole training corpus, which involves a high computational cost and could be impossible when the data is unavailable. Last, analyzing the examples that activate the head is often non-trivial and could be misleading (Bolukbasi et al., 2021; Gao et al., 2024; Kissane et al., 2024).
In this work, we consider a different approach to this problem, where our goal is to infer the functionality of attention heads directly from their parameters and without executing the model. To this end, we leverage the approach of interpreting model parameters in the vocabulary space (Geva et al., 2021, 2022; Katz et al., 2024). Specifically, we build on the formulation by Elhage et al. (2021); Dar et al. (2023), who cast the attention head as a matrix $M$ , where each entry is a mapping score between two tokens. While this approach has been shown effective in identifying heads with certain operations, so far its usage has been limited to studying specific heads in detected circuits (Wang et al., 2023; McDougall et al., 2024) or a single operation (Gould et al., 2024).
Here, we scale this interpretation approach into a general framework, called MAPS (Mapping Attention head ParameterS), which enables answering two types of basic questions: (a) given a predefined operation, mapping how strongly different heads across the model implement it, and (b) given an attention head, inferring its prominent operations. This is done by considering patterns across groups of mappings in $M$ , as illustrated in Figure 1. Predefined relations signify groups of mappings expressing a certain relation (e.g., the capital city of a country or pronoun resolution). Salient operations consist of the subsets of mappings for which the head induces the most prominent effect. In addition, analyzing simple statistics of these mappings provides insights into how global or specific the head’s operation is.
We evaluate our framework on 6 popular LLMs and 20 predefined relations of 4 categories – knowledge, language, algorithmic, and translation. Experiments show that estimations by MAPS strongly correlate with the head outputs during inference. Moreover, causally removing all the heads implementing a certain operation substantially impairs the model’s ability to answer queries requiring this operation, compared to removing other heads.
Analysis of the obtained mappings shows that, across all models, MAPS detects relation heads mostly in the middle and upper layers, while revealing universality patterns for several relations. Moreover, it demonstrates how the model’s architecture introduces biases in function encoding. Smaller models tend to encode higher numbers of relations on a single head, and in Llama-3.1 models, which use grouped-query attention, grouped attention heads often implement the same or similar relations. Notably, MAPS successfully detected previously identified heads of specific operations, while discovering additional heads of similar operations not reported before.
Next, we demonstrate the utility of MAPS for inferring the prominent operations of a given head. We consider the head’s salient mappings in $M$ and use GPT-4o (Hurst et al., 2024) to automatically describe the functionality they exhibit. Applying this procedure to GPT-2 xl and Pythia 6.9B, we map the prominent operations of 62% of their heads, and of 60%-96% of the heads in the middle and upper layers. Qualitative analysis shows semantic, linguistic, and algorithmic operations and reveals novel operations, such as the extension of time periods (day → month, month → year). A human study shows that our automated pipeline performs reasonably well and that GPT-4o reliably detects observable operations.
To conclude, we introduce MAPS, an efficient framework for inferring attention heads’ functionality from their parameters. We showcase the utility of MAPS in systematically mapping a certain functionality across the model and automatically characterizing the salient operations of a given head. Estimations by MAPS correlate with the head’s outputs and are faithful to the model’s behavior, and provide valuable insights on architecture biases and universality of head operations in LLMs.
## 2 Preliminaries and Notation
We assume a transformer-based LM with a hidden dimension $d$ , $L$ layers, $H$ attention heads per layer, a vocabulary $\mathcal{V}$ , an embedding matrix $E\in\mathbb{R}^{|\mathcal{V}|\times d}$ , and an unembedding matrix $U\in\mathbb{R}^{d\times|\mathcal{V}|}$ .
#### Attention heads as interaction matrices
We use the formulation by Elhage et al. (2021) and view an attention head as two “interaction” matrices $W_{QK},W_{VO}\in\mathbb{R}^{d\times d}$ . Given a sequence of $n$ hidden states $X\in\mathbb{R}^{n\times d}$ , the matrix $W_{QK}$ computes the query-key scores to produce an attention weights matrix $A\in\mathbb{R}^{n\times n}$ :
$$
A=\text{softmax}\Bigg{(}\frac{X(W_{QK})X^{T}}{\sqrt{d/H}}\Bigg{)}
$$
The matrix $W_{VO}$ operates on the contextualized hidden states according to $A$ , namely $\tilde{X}=AX$ , and produces the head’s output $Y\in\mathbb{R}^{n\times d}$ :
$$
Y=\tilde{X}W_{VO} \tag{1}
$$
The matrix $W_{QK}$ can be viewed as “reading” from the residual stream, and $W_{VO}$ can be viewed as the “writing” component. Notably, this formulation omits the bias terms of the head.
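For concreteness, this two-matrix view of a head can be sketched in a few lines. The sketch below follows the equations as written, i.e. without the bias terms, and omits the causal mask for brevity:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def head_forward(X, W_QK, W_VO, H):
    """One attention head via its interaction matrices (Eq. 1).

    X: (n, d) hidden states; W_QK, W_VO: (d, d); H: heads per layer.
    Bias terms and the causal mask are omitted, as in the formulation above.
    """
    n, d = X.shape
    A = softmax(X @ W_QK @ X.T / np.sqrt(d / H))  # (n, n) attention weights
    Y = (A @ X) @ W_VO                            # (n, d) head output
    return A, Y
```

Note that $W_{QK}$ and $W_{VO}$ are the low-rank products of the per-head query/key and value/output projections, so this sketch treats them as single $d\times d$ matrices.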
#### Interpreting attention heads in embedding space
Recent works have analyzed the operation of different components in transformers through projection to the model’s vocabulary space (nostalgebraist, 2020; Geva et al., 2021, 2022; Dar et al., 2023; Katz et al., 2024). Specifically, Elhage et al. (2021); Dar et al. (2023) interpret each of the attention head matrices – $W_{QK}$ and $W_{VO}$ – as a matrix that maps between pairs of tokens from the vocabulary. Considering $W_{VO}$ , it is interpreted via multiplication from both sides with the model’s embedding matrix: ${\tilde{M}=E(W_{VO})E^{T}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}}$ . Each entry in $\tilde{M}$ is viewed as a mapping score between source and target tokens ${s,t\in\mathcal{V}}$ based on $W_{VO}$ , which signifies how strongly the head promotes it in its outputs. Elhage et al. (2021) suggested that when the weights of $E$ and $U$ are not tied, a more faithful interpretation can be obtained by:
$$
M=E(W_{VO})U
$$
Other notable variations include applying the model’s first MLP layer to the embedding matrix $E$ (Gould et al., 2024) and the final layer norm on rows of $E(W_{VO})$ (Wang et al., 2023).
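The projection itself is a pair of matrix products. A toy sketch with made-up dimensions (real vocabularies are far larger) showing both the tied and untied interpretations:

```python
import numpy as np

# Toy dimensions; real vocabularies are ~50k-130k tokens.
V, d = 100, 16
rng = np.random.default_rng(0)
E = rng.normal(size=(V, d))   # embedding matrix, (|V|, d)
U = rng.normal(size=(d, V))   # unembedding matrix, (d, |V|)
W_VO = rng.normal(size=(d, d))

# Tied-embedding interpretation: M~ = E W_VO E^T
M_tied = E @ W_VO @ E.T       # (|V|, |V|)
# Untied interpretation (when E and U are separate): M = E W_VO U
M = E @ W_VO @ U              # (|V|, |V|)
# M[s, t] scores how strongly the head maps source token s to target token t.
```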
## 3 MAPS
Based on the above view, we propose a general framework, called MAPS, for inferring the functionality of attention heads in LLMs directly from their parameters. We focus on analyzing the $W_{VO}$ component of the head, which produces the head’s output to the residual stream, and make the following observations. First, the $i$ -th row of $M$ provides the scores for mappings from the $i$ -th token to any token in $\mathcal{V}$ . Similarly, the $j$ -th column of $M$ provides scores for mappings from any token in $\mathcal{V}$ to the $j$ -th token. Therefore, considering the scores of certain submatrices of $M$ may reveal how the attention head operates on different sets of inputs. For example, analyzing the rows corresponding to tokens representing countries may reveal general knowledge-related operations implemented by the head, and attention heads that copy certain tokens should have diagonal-like submatrices in $M$ .
An important question that arises is which parts of $M$ to consider in order to identify the head’s functionality. In principle, there are $2^{|\mathcal{V}|}$ different subsets of rows that can be considered, which would be infeasible to traverse with $|\mathcal{V}|=\mathcal{O}(10K)$ in typical LLMs. Here, we propose two complementary ways to approach this, described next.
### 3.1 Predefined Relations
One intuitive approach is to define a set of possible operations that can be realized through pairs of tokens, and then measure the extent to which the head implements each operation. For example, the operation of mapping a country to its capital can be realized through a set of token pairs expressing that relation, e.g. (France, Paris) or (Egypt, Cairo). Similarly, mapping between synonyms can be realized via pairs such as (talk, speak) and (fast, quick). Such operations can be viewed as an implementation of relations between tokens.
Let $R$ be a predefined relation and $\mathcal{D}_{R}$ a dataset of token pairs expressing $R$ . Also, denote by $\mathbf{m}_{i}\in\mathbb{R}^{|\mathcal{V}|}$ the $i$ -th row of $M$ (corresponding to the mapping scores of the $i$ -th token), and by $\texttt{topk}(\mathbf{m}_{i})$ the $k$ tokens with the highest scores in $\mathbf{m}_{i}$ . The extent to which an attention head, interpreted as the matrix $M$ , implements $R$ can be measured as the portion of pairs $(s,t)\in\mathcal{D}_{R}$ where $t$ is in the top-scoring tokens in $\mathbf{m}_{s}$ :
$$
\phi_{R}(M):=\frac{1}{|\mathcal{D}_{R}|}\sum_{(s,t)\in\mathcal{D}_{R}}\mathds{1}[t\in\texttt{topk}(\mathbf{m}_{s})] \tag{2}
$$
For instance, the score for $R=$ ‘‘country to capital’’ reflects how often the head promotes the capital city of a country in its output when operating on an input representation of that country.
Notably, our formulation also supports suppression operations observed in previous work (Wang et al., 2023; Gould et al., 2024; McDougall et al., 2024), where certain attention heads suppress certain concepts or outputs during inference. Representing a suppressive relation is done by defining the pairs $(s,t)$ as before and considering the top-scoring tokens in $-\mathbf{m}_{s}$ instead of $\mathbf{m}_{s}$ .
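Eq. (2) and its suppressive variant reduce to a top-$k$ lookup per row of $M$. A minimal sketch (the function name and toy token IDs are our own, for illustration):

```python
import numpy as np

def relation_score(M, pairs, k, suppressive=False):
    """Eq. (2): fraction of (s, t) pairs whose target t is among the
    top-k scoring tokens in row s of M. With suppressive=True, the
    top-k of -M is used instead, scoring demotion of t."""
    scores = -M if suppressive else M
    hits = 0
    for s, t in pairs:
        topk = np.argpartition(scores[s], -k)[-k:]  # indices of the k largest entries
        hits += int(t in topk)
    return hits / len(pairs)
```

For example, a head whose row for "France" places "Paris" in its top-$k$ contributes a hit to the country-to-capital score.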
### 3.2 Salient Operations
The main limitation of the above approach is that it could miss certain relations that heads implement. A complementary approach would be to characterize the head’s functionality from prominent mappings appearing in $M$ . Dar et al. (2023) tackled this by considering the top-scoring mappings in $M$ . However, we recognize two drawbacks in this method: (a) the scores in $M$ are influenced by the token embedding norms, which could bias the top scores towards mappings of tokens with high embedding norms, and (b) the top entries in $M$ may cover mapping from a small number of tokens (e.g., from a single row), thus describing the head’s functionality for only a few tokens.
Here, we propose a more holistic approach to identify salient mappings in $M$: we first identify the tokens on which the head’s operation is most prominent, and then consider the top-scoring mappings for these tokens. We measure how prominent the head’s operation is on a token $t\in\mathcal{V}$ via the ratio of the token’s embedding norm after multiplication by $W_{VO}$ to its norm before this transformation:
$$
\sigma_{t}(W_{VO}):=\frac{||\mathbf{e}_{t}W_{VO}||}{||\mathbf{e}_{t}||} \tag{3}
$$
Comparing the sets of top-scoring versus salient mappings indeed shows substantial differences: the average Jaccard similarity between the two sets for heads in GPT-2 xl is 0.01. In the next sections, we experiment with both approaches, showing their effectiveness in inferring attention head functionality in multiple LLMs.
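A sketch of Eq. (3) followed by the second step described above, reading off the top mappings of the most-amplified tokens. Function names and the number of tokens/mappings kept are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def saliency_ratios(E, W_VO):
    """Eq. (3): sigma_t = ||e_t W_VO|| / ||e_t|| for every token t."""
    return np.linalg.norm(E @ W_VO, axis=1) / np.linalg.norm(E, axis=1)

def salient_mappings(E, W_VO, U, n_tokens=5, k=3):
    """Pick the tokens the head amplifies most, then their top-k targets."""
    sigma = saliency_ratios(E, W_VO)
    tokens = np.argsort(sigma)[-n_tokens:]      # most-amplified source tokens
    rows = (E[tokens] @ W_VO) @ U               # compute only the needed rows of M
    targets = np.argsort(rows, axis=1)[:, -k:]  # top-k target tokens per source
    return tokens, targets
```

Because only the selected rows of $M$ are materialized, this avoids forming the full $|\mathcal{V}|\times|\mathcal{V}|$ matrix.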
## 4 Mapping Predefined Relations
In this section, we utilize MAPS to map how strongly attention heads implement various operations in multiple LLMs (§ 4.1). We assess the correctness and generalization of these estimations via correlative and causal experiments (§ 4.2, § 4.3) and analyze prominent trends (§ 4.4).
### 4.1 Experimental Setup
#### Datasets
We construct datasets for 20 relations of four categories: algorithmic (e.g., word to first letter), knowledge (e.g., country to capital), linguistic (e.g., adjective to comparative), and translation (English to French/Spanish), over 3 vocabularies of widely-used model families. For every relation, we collect pairs of strings expressing it. For instance, possible pairs for the relation word-to-compound are (hot, hotdog) and (wall, wallpaper). Data is obtained from previously published datasets and online sources and further augmented by querying ChatGPT to generate example pairs, which we (the authors) manually validated. Then, we tokenize the pairs with each of the tokenizers of Llama-3.1 (Dubey et al., 2024), Pythia (Biderman et al., 2023), GPT-2 (Radford et al., 2019), and Phi-2 (Javaheripi and Bubeck, 2023), keeping only cases where the resulting mapping is between single tokens. Experimenting with different tokenizers is important, as MAPS leverages the model’s vocabulary: Llama-3.1’s vocabulary has $\sim$ 130k tokens, compared to $\sim$ 50k tokens for GPT-2, Phi-2, and Pythia. For more details on the collection, dataset statistics, and examples, see § A.
#### Models
We analyze models of various sizes from different families: Llama-3.1 8B and 70B (Dubey et al., 2024), Pythia 6.9B and 12B (Biderman et al., 2023), Phi-2 (Javaheripi and Bubeck, 2023), and GPT-2 xl (Radford et al., 2019). These models have varying numbers of layers and attention heads, from 32 layers and 32 heads in Pythia 6.9B to 80 layers and 64 heads in Llama-3.1 70B. Additionally, Llama-3.1 uses grouped-query attention (Ainslie et al., 2023), whereas the other models use multi-head attention (Vaswani et al., 2017).
#### Measuring predefined relations
For every attention head and relation $R$ , we derive the matrix $M$ and calculate the relation score $\phi_{R}(M)$ (Eq. 2). We also compute the score for the suppressive variant $\bar{R}$ of every relation $R$ . For example, the suppressive variant of $R=\texttt{country to capital}$ corresponds to the operation of suppressing the capital of a given country.
We follow previous works (Dar et al., 2023; Geva et al., 2021, 2022) and set low $k$ values to reflect strong prioritization of the target token in the head’s output. For Pythia, Phi-2, and GPT-2, we use $k=1$ for the copying and name-copying relations and $k=10$ for other relations. For the Llama-3.1 models, we set $k=3$ for copying and name-copying and $k=25$ for other relations. The larger values for Llama-3.1 are due to its larger vocabulary, which allows expressing a concept with more tokens; the smaller values for the copying relations measure them more strictly. For further discussion on this selection, see § A.
To classify whether a head “implements” a relation $R$, we apply a threshold $\tau$ to $\phi_{R}(M)$: if $t$ appears in the top-$k$ mappings of $s$ for at least $\tau$ percent of the pairs $(s,t)\in\mathcal{D}_{R}$, we consider the head as implementing $R$. We choose a threshold of $\tau=15\%$ after experimenting with different thresholds and comparing against randomly initialized heads (see § A for details).
### 4.2 Evaluation of Functionality Estimation
We evaluate whether the functionality estimations by MAPS faithfully describe the operations of the heads during inference. Our experiments show that the estimated operation of a head strongly correlates with its outputs and demonstrates the expected causal effect on the model’s generation.
#### Experiment 1: Correlation with head outputs
For every relation $R$ and source-target pair $(s,t)\in\mathcal{D}_{R}$, we evaluate the model using four prompt templates (provided in § B.1). One representative template is:
$$
\mathcal{P}_{s}:=\texttt{``This is a document about $\langle$s$\rangle$''}
$$
where $\langle\texttt{s}\rangle$ is the string of the source token $s$. For example, for the pair (England, London), we would have ‘‘This is a document about England’’. We do not simply feed in $s$ as the input, to avoid potential biases from the attention sink phenomenon (Xiao et al., 2024). Next, we obtain the output $\mathbf{y}_{s}\in\mathbb{R}^{d}$ of every attention head at the last position (corresponding to $s$); here, the head outputs include the bias term of $W_{V}$ (see § B.1). We project this output to the model’s vocabulary space, i.e. $\mathbf{y}_{s}U\in\mathbb{R}^{|\mathcal{V}|}$. The top-scoring tokens in the resulting vector are those promoted by the head given the prompt $\mathcal{P}_{s}$ (Geva et al., 2022). To check whether the head implements the relation $R$, namely promotes $t$ when given $s$ in the input, we test for every pair $(s,t)$ whether $t$ appears in the top $k$ tokens in $\mathbf{y}_{s}U$. We use the same $k$ values specified in § 4.1. Concretely, for every head $h$ we compute the following score, which represents how strongly the head implements $R$ during inference:
$$
\phi^{*}_{R}(h):=\frac{1}{|\mathcal{D}_{R}|}\sum_{(s,t)\in\mathcal{D}_{R}}\mathds{1}[t\in\texttt{topk}(\mathbf{y}_{s}U)] \tag{4}
$$
We measure the correlation between the static score $\phi_{R}(M)$ inferred by our method and the dynamic score $\phi^{*}_{R}(h)$, computed separately for each of the four templates. As a baseline, we compute $\phi^{*}_{R}(h)$ while restricting the attention in $h$ from $s$ to be only to itself. This emulates the head’s operation as if it fully attended to the representation of $s$.
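The comparison then reduces to correlating the two per-head score vectors. A minimal sketch, with Pearson correlation shown as an assumption about the coefficient used, and toy values for illustration only:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two equal-length score vectors."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# static[h]: phi_R(M) per head (Eq. 2); dynamic[h]: phi*_R(h) per head (Eq. 4).
# These values are made up, solely to show the call.
static  = [0.00, 0.10, 0.80, 0.30]
dynamic = [0.05, 0.10, 0.70, 0.35]
r = pearson(static, dynamic)
```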
#### Results
Table 1 shows the results for Llama-3.1 8B. For the vast majority of relations, we observe a strong to very strong correlation of 0.71–0.95 (Schober et al., 2018) when the query’s subject is not contextualized. This high correlation often remains or even increases when considering the head’s outputs for contextualized inputs, showing that MAPS estimates the head’s behavior well for task-related inputs. Still, for some relations (e.g., word to compound and word to last letter), the correlation is lower for contextualized inputs, demonstrating that in some cases the head may switch its operation depending on the context. This agrees with the observation that heads often implement multiple operations (§ 4.4). Results for other models are in § B.1 and generally exhibit similar trends, though with occasional larger drops in the contextualized setting for Pythia and GPT-2 xl.
| Category | Relation | Correlation w/o context. | Correlation w/ context. |
| --- | --- | --- | --- |
| Algorithmic | Copying | 0.76 | 0.73 |
| | Name copying | 0.95 | 0.95 |
| | Word to first letter | 0.90 | 0.78 |
| | Word to last letter | 0.67 | 0.36 |
| Knowledge | Country to capital | 0.85 | 0.85 |
| | Country to language | 0.76 | 0.62 |
| | Object to superclass | 0.74 | 0.73 |
| | Product by company | 0.46 | 0.49 |
| | Work to location | 0.44 | 0.45 |
| Linguistic | Word to antonym | 0.90 | 0.86 |
| | Adj to comparative | 0.85 | 0.86 |
| | Adj to superlative | 0.87 | 0.89 |
| | Noun to pronoun | 0.89 | 0.79 |
| | Verb to past tense | 0.91 | 0.86 |
| | Word to compound | 0.78 | 0.62 |
| | Word to homophone | 0.85 | 0.75 |
| | Word to synonym | 0.79 | 0.69 |
| Translation | English to French | 0.71 | 0.68 |
| | English to Spanish | 0.82 | 0.81 |
Table 1: Correlation between the relation score of a head and the head’s outputs in Llama-3.1 8B, with and without head contextualization. Results are statistically significant with p-values $\leq$ 3.9e-128 (see § B.1).
| Relation | TR tasks: Base | TR tasks: - TR | TR tasks: - RND | CTR tasks: Base | CTR tasks: - TR |
| --- | --- | --- | --- | --- | --- |
| Adj to comparative | 0.91 | 0.20 | 0.82 | 0.92 | 0.63 |
| Copying | 1.00 | 0.68 | 1.00 | 0.95 | 0.88 |
| Country to capital | 0.97 | 0.00 | 0.95 | 0.89 | 0.90 |
| Country to language | 1.00 | 0.08 | 0.96 | 0.89 | 0.89 |
| Name copying | 1.00 | 0.24 | 1.00 | 0.90 | 0.92 |
| Noun to pronoun | 0.88 | 0.46 | 0.86 | 0.90 | 0.88 |
| Object to superclass | 0.78 | 0.39 | 0.68 | 0.90 | 0.87 |
| Verb to past tense | 0.22 | 0.04 | 0.26 | 0.03 | 0.02 |
| Word to first letter | 0.91 | 0.34 | 0.87 | 0.91 | 0.74 |
| Year to following | 0.92 | 0.00 | 0.87 | 0.83 | 0.79 |
Table 2: Accuracy of Pythia 12B on tasks for a target relation (TR) versus on control (CTR) tasks, when removing heads implementing the relation compared to when removing random heads (RND). Results for RND heads are averaged over 5 experiments. We omit standard deviation for brevity and report it in § B.2.
#### Experiment 2: Causal effect on model outputs
For a given relation $R$ , we evaluate the model’s performance on queries that require applying $R$ , when removing the heads classified by MAPS as implementing $R$ versus when removing random heads from the model. We choose a diverse set of 13 relations and construct a test set $\tilde{\mathcal{D}}_{R}$ for every relation $R$ as follows. First, we craft a task prompt that requires the model to apply $R$ . For example, a prompt for the country to capital relation could be ‘‘The capital of $\langle s\rangle$ is’’, with $\langle s\rangle$ being a placeholder for a country. Then, for each pair $(s,t)\in\mathcal{D}_{R}$ we instantiate the prompt with $s$ to create an input $\tilde{\mathcal{P}}_{s}$ and a test example $(\tilde{\mathcal{P}}_{s},t)\in\tilde{\mathcal{D}}_{R}$ .
Let $\mathcal{H}_{R}^{i}$ be the subset of $i$ attention heads with the highest scores for $\phi_{R}(M)$. We evaluate the models on $\tilde{\mathcal{D}}_{R}$ while running each input $n$ times, where in the $i$-th run we cancel (by setting to zero) the outputs of the attention heads $\mathcal{H}_{R}^{i}$ and obtain the model’s prediction with greedy decoding. We set $n$ as the minimum between the number of heads in the model with $\phi_{R}(M)>0$ and a fixed boundary: 150 for GPT-2 xl, Pythia 6.9B, Pythia 12B, and Llama-3.1 8B, and 250 for Llama-3.1 70B. In cases where the accuracy drops to 0 after ablating $i<n$ heads, we report results obtained up to $i$.
We compare the above intervention against a baseline where $i$ randomly sampled heads that are not in $\mathcal{H}_{R}^{i}$ are ablated, repeating this experiment 5 times and reporting the average accuracy. Additionally, to establish that relation heads are important specifically for tasks involving $R$ , we remove the relation heads as above and measure the model’s performance on up to five control tasks for other relations. We choose the relations such that $<$ 15% of the target relation heads are also control relation heads, and the absolute difference between the baseline accuracy on the control task and the target task is $\leq$ 20%.
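The intervention itself amounts to zeroing the chosen heads' outputs before they are summed into the residual stream. A toy sketch of that step (the decomposition into per-head outputs is the standard one; names are illustrative):

```python
import numpy as np

def layer_contribution(head_outputs, ablate=()):
    """Sum a layer's head outputs, canceling the ablated heads.

    head_outputs: (H, d) array of per-head outputs at one position;
    ablate: indices of heads whose outputs are set to zero.
    """
    out = head_outputs.copy()
    if len(ablate) > 0:
        out[list(ablate)] = 0.0  # cancel the selected heads
    return out.sum(axis=0)
```

In practice this would be applied during the forward pass (e.g., via hooks on the attention modules) rather than on precomputed outputs.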
#### Results
Results for Pythia 12B are presented in Table 2, excluding relations where the base accuracy was $<$ 0.1. For all relations, removing the relation heads identified by MAPS causes a major accuracy drop of $\geq$ 32% compared to $\leq$ 13% when removing random heads. Moreover, while the accuracy drop for the control tasks is considerable in some cases (at most 33%), it is significantly smaller than the relative drop on the target relation task. Results for the other models are generally similar (see § B.2). Notable differences are that the accuracy drops in Llama-3.1 are often smaller, but in 9 out of 11 relations they are larger than those obtained for the random and control baselines.
### 4.3 Generalization to Multi-Token Entities
A natural question that arises is how well the estimations by MAPS generalize to contextualized inputs representing multiple tokens. Namely, if we infer the head’s ability to perform country-to-capital mappings from country names tokenized as a single token, will we observe the same behavior for countries tokenized as multiple tokens?
To test this, we apply the data collection process from § 4.1 to create new datasets for 11 relations of source-target pairs $(s,t)$ where $s$ has multiple tokens. Then, we repeat the correlative experiment in § 4.2 for GPT-2 xl, Pythia 6.9B and Pythia 12B using this data and the prompt template ‘‘This is a document about $\langle$ s $\rangle$ ’’.
We observe that the estimated operations generalize to multi-token representations. For 53 out of the 64 model-relation combinations (with and without contextualization), the correlation between the relation score and the head’s output in the multi-token setting is similar ( $\leq$ 0.05 difference) or higher than the single-token setting. In the remaining cases, there is a slightly bigger drop ( $\leq$ 0.13), but the correlations remain $\geq$ 0.63. The full results are provided in § C.
### 4.4 Analysis
#### Function distribution
Figure 2 shows category-level classification results of all heads in GPT-2 xl, Phi-2, Pythia 12B, and Llama-3.1 70B. A head is assigned to a certain category if it implements at least one relation from it or its suppressive variant. Considering prominent trends across all models, we first observe that MAPS identified relations from all categories, with classified heads mostly being located in the middle and upper layers. This may suggest that early layers perform operations that cannot be represented in the model’s output vocabulary space. Interestingly, we observe a “side effect” of the grouped attention structure in Llama-3.1 models, where grouped heads often implement the same relations or their suppressive variants.
In addition, heads often implement multiple relations from the same or different categories. The portion of multi-category heads (out of all classified heads) generally decreases with model size: 38% in GPT-2 xl, 29% in Phi-2, 20% in Pythia 6.9B and Pythia 12B, and 11% in Llama-3.1 70B. An exception to this trend is Llama-3.1 8B, also with 11% multi-category heads, which may be caused by its grouped-query attention structure. Also, 20%–36% of the classified heads implement at least one suppression relation.
<details>
<summary>x2.png Details</summary>

### Visual Description
## Heatmap: Functionality Categories Across Model Layers and Heads
### Overview
Four heatmaps, one per model (GPT-2 xl, Phi-2, Pythia 12B, Llama-3.1 70B), showing the functionality category assigned to each attention head, with layers on the x-axis and head indices on the y-axis. A legend maps colors to categories: gray for unclassified heads, orange for knowledge, red for translation, green for linguistic, and distinct colors for heads implementing relations from 2, 3, or 4 categories. Across all models, classified heads concentrate in the middle and upper layers, while early layers are dominated by unclassified (gray) heads.
</details>
Figure 2: Functionality mapping by MAPS for 20 relations of 4 categories — algorithmic, knowledge, linguistic, translation — across all attention heads in GPT-2 xl, Phi-2, Pythia 12B, Llama-3.1 70B. A head is marked as a specific category if it implements at least one relation from this category.
#### Function universality
Figure 3 presents the distributions of relation scores for several representative relations in multiple models, revealing two interesting trends. First, despite architecture and training data differences, models encode relations in their heads to similar degrees, as observed by the similar highest scores per relation. This observation supports the “universality hypothesis” Li et al. (2015) that different networks learn similar features and circuits, and extends recent findings on universality in LLMs Gould et al. (2024); Arditi et al. (2024); Tigges et al. (2024). Second, the scores for a given relation are diverse, with different heads implementing the relation to varying degrees, as opposed to a small set of heads with high relation scores. This has implications for research concerning localization and editing: certain concepts or associations are encoded in a large number of model components at varying degrees.
#### Comparison with known head functionalities
Wang et al. (2023) identified “Name Mover” and “Anti Name Mover” heads in a circuit for indirect object identification in GPT-2 small, which copy or suppress copying specific names in the context, and Merullo et al. (2024a) identified “Mover” and “Capital” heads in GPT-2 medium. MAPS successfully identified all these heads as name copiers or country-to-capital mappers (which agrees with a similar analysis conducted by Wang et al., 2023). In addition, it discovered 25 heads in GPT-2 small and 46 in GPT-2 medium that implement similar operations but were not recognized in prior analyses. While the additional heads may not participate in the specific circuits discovered, they may be triggered for circuits of similar or related tasks that were overlooked in previous analyses.
Notably, for all the heads identified in previous works, MAPS reveals various additional functionalities. These observations extend the findings by Merullo et al. (2024a) of heads that implement multiple functionalities.
Taken together, these results demonstrate the effectiveness of MAPS in comprehensively mapping the implementation of a certain operation by attention heads across the model. A more detailed comparison is in § D.
<details>
<summary>x3.png Details</summary>

### Visual Description
## Scatter Plot: Relation Scores per Head Across Models
### Overview
A scatter plot of relation scores (x-axis, 0.0–1.0) for all attention heads of four models, for five relations listed on the y-axis: adjective to antonym, word to homophone, word to synonym, work to location, and country to capital. Points are color-coded by model: blue for GPT-2 xl, orange for Pythia 6.9B, green for Phi-2, and red for Llama-3.1 70B. For each relation, the highest scores reached are similar across the four models, while within each model the scores spread over a wide range rather than clustering at a small set of high-scoring heads.
</details>
Figure 3: Relation scores for all heads of Llama-3.1 70B, Pythia 6.9B, Phi-2, GPT-2 xl for several relations. We observe that heads from all models implement these relations to similar degrees.
## 5 Inspecting Salient Operations
We saw that given an operation realized as a relation between pairs of tokens, we can map how strongly it is implemented by attention heads across the model. Here, we use MAPS to tackle a complementary problem of inferring the prominent operations of a given attention head. We introduce an automatic pipeline for interpreting salient mappings in attention heads (§ 5.1) and use it to broadly infer the functionalities in Pythia 6.9B and GPT-2 xl (§ 5.2). In § F, we extend our analysis to show that the skewness of saliency scores can indicate how global or specific the head’s functionality is.
### 5.1 Automatic Functionality Inference
We propose the following steps for inferring the functionality of an attention head:
1. Use the saliency score (Eq. 3) to identify the top $k$ tokens for which the head’s transformation is most prominent.
2. For each salient token $s$ , collect the top $n$ tokens it is mapped to according to $M$ , namely, the tokens corresponding to the top entries in $\mathbf{m}_{s}$ . This step could be extended to also collect suppressed tokens for better coverage.
3. Infer the head’s salient operations by querying an LLM about prominent patterns in the list of salient tokens and their top mappings. Notably, we instruct the model to indicate that there is no pattern when none is clearly observed across the mappings. For the exact prompt used, see § E.
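The steps above can be sketched as follows. This is a simplified illustration: the saliency scores (Eq. 3) and the vocabulary-space mapping matrix $M$ are assumed to be precomputed arrays passed in from upstream, and the LLM query of step 3 is reduced to prompt construction.

```python
import numpy as np

def salient_mappings(saliency, M, vocab, k=30, n=5):
    """Sketch of steps 1-2: `saliency` holds a precomputed per-token
    saliency score, `M` the head's vocabulary-space mapping matrix, and
    `vocab` maps token ids to strings. Returns, for each of the top-k
    salient tokens, the n tokens it is mapped to most strongly."""
    top_tokens = np.argsort(saliency)[::-1][:k]        # step 1
    mappings = {}
    for s in top_tokens:                               # step 2
        top_targets = np.argsort(M[s])[::-1][:n]       # top entries of m_s
        mappings[vocab[s]] = [vocab[t] for t in top_targets]
    return mappings

def build_pattern_prompt(mappings):
    """Step 3 (stubbed): build the query about prominent patterns; in
    practice this prompt would be sent to an LLM such as GPT-4o."""
    lines = [f"{src} -> {', '.join(tgts)}" for src, tgts in mappings.items()]
    return "Describe a common pattern, or say 'no pattern':\n" + "\n".join(lines)
```

With the identity matrix as a toy `M`, each salient token is simply mapped back to itself, which makes the pipeline's output easy to verify by hand.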
We run this pipeline on a total of 2,224 attention heads in GPT-2 xl and Pythia 6.9B, while setting $k=30$ (step 1) and $n=5$ (step 2) and using GPT-4o (Hurst et al., 2024) (step 3). We analyze how often GPT-4o was able to recognize a prominent functionality and measure the quality of its descriptions compared to human judgment.
### 5.2 Results
<details>
<summary>x4.png Details</summary>

### Visual Description
## Line Graphs: Interpretable Heads per Layer in Pythia 6.9B and GPT-2 xl
### Overview
Two side-by-side line graphs showing, for Pythia 6.9B (left) and GPT-2 xl (right), the percentage of heads in each layer for which a prominent pattern was detected. The y-axis runs from 0% to 100% and the x-axis spans the model's layers. In both models the rate is low in the early and final layers and peaks in the middle and upper layers, with noticeable layer-to-layer fluctuations.
</details>
Figure 4: Portion of heads where GPT-4o identified a prominent pattern across the head’s salient mappings.
Figure 4 shows the percentage of heads per layer in GPT-2 xl and Pythia 6.9B for which GPT-4o described a pattern. In both models, we observe a high rate of 60%-96% interpretable heads in the middle and upper layers, compared to a lower rate of 20%-60% in the early and last layers. These trends are consistent with those observed for predefined relations (§ 4), suggesting that early-layer heads are less interpretable in the vocabulary space. Qualitative analysis of 107 heads with identified patterns shows diverse operations: 38% semantic (e.g., extension of time periods: day->month; month->year; year->decade), 36% algorithmic (e.g., capitalization: water->Water), and 26% linguistic (e.g., completion of sub-words: inhib->inhibition; resil->resilience). Examples of salient mappings and their interpretations are provided in § E.
#### Interpretation quality
We conduct a human study to assess the plausibility of the generated descriptions, finding that GPT-4o correctly identifies the presence or absence of a pattern in 80% of the cases and reliably detects observable patterns. This shows that our automatic pipeline is reasonable and demonstrates promising trends in automatically interpreting attention heads with MAPS. For more details on this study and its results, see § E.
## 6 Related Work
Prior studies of attention heads in LLMs mostly focused on analyzing their attention patterns Voita et al. (2019); Clark et al. (2019); Vig and Belinkov (2019), training probes and sparse auto-encoders Kissane et al. (2024), studying head outputs, and performing causal interventions (see survey by Zheng et al., 2024). Unlike these methods, MAPS infers the functionality of attention heads from their parameters, without any training or inference.
Vocabulary projections of attention head parameters have been used for analyzing certain attention head operations in LLMs Wang et al. (2023); McDougall et al. (2024); Kim et al. (2024); García-Carrasco et al. (2024); Elhage et al. (2021). However, they have been used mostly as a validation tool for operations inferred by other methods and were applied to specific relations and heads, typically in the scope of specific circuits. Gould et al. (2024) studied a single relation across all heads of multiple LLMs. Our work proposes a general framework that uses vocabulary projections as its primary tool for inferring attention head functionality.
Millidge and Black (2022) utilized an LLM to interpret the vocabulary projections of singular vectors of attention heads and MLP matrices, but their approach does not consider input-output mappings which are essential for estimating head functionality. More recently, Merullo et al. (2024b) used parameter similarities of heads at different layers to study their “communication channels”. Lastly, Hernandez et al. (2024) showed that relation operations of attention heads can be well-approximated by linear functions. Our work further shows that some of these relations are implemented by mappings encoded in head parameters.
## 7 Conclusion
We present MAPS, an efficient framework for analyzing the functionality of attention heads from their parameters. The utility of MAPS is twofold: it allows mapping how strongly a given operation is implemented across the heads of a model, and inferring the salient operations of a given head. Experiments show that the estimations by MAPS correlate with head outputs during inference and causally relate to the model’s behavior. Moreover, strong LLMs can automatically interpret the resulting mappings, often aligning with human judgment. Our analysis provides insights into architecture biases on function encoding and function universality in LLMs.
## Limitations
MAPS primarily focuses on analyzing the part of the head’s computation that writes the output to the residual stream, i.e., the matrix $W_{VO}$ . In other words, we use single-token mappings to analyze the operation of the output part of the head on contextualized representations $\tilde{X}$ . While our experiments in § 4.3 show that these estimations generalize to multi-token inputs, it is still valuable to examine the head’s computation responsible for contextualization and for creating $\tilde{X}$ , i.e., the matrix $W_{QK}$ .
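As a rough illustration of the quantity MAPS analyzes, the sketch below builds a toy $W_{VO}$ and reads off a single-token vocabulary mapping. The weights and sizes here are random placeholders, not actual model parameters; in a real model, $W_V$ and $W_O$ would be per-head slices of the attention value and output projections, and $E$ the token embedding matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head, n_vocab = 64, 16, 100   # toy sizes; real models are far larger

# Placeholder per-head weights and embeddings.
W_V = rng.normal(size=(d_model, d_head))
W_O = rng.normal(size=(d_head, d_model))
E = rng.normal(size=(n_vocab, d_model))

W_VO = W_V @ W_O   # the per-head interaction (OV) matrix analyzed by MAPS

def token_mapping_row(s):
    """One row of M = E W_VO E^T: how strongly the head promotes each
    vocabulary token when reading source token s (bias terms ignored)."""
    return E[s] @ W_VO @ E.T   # shape: (n_vocab,)

# Top-5 tokens the head maps token 3 to:
top_targets = np.argsort(token_mapping_row(3))[::-1][:5]
```

Computing rows of $M$ on demand, as above, avoids materializing the full vocabulary-by-vocabulary matrix, which would be prohibitively large for real vocabularies.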
Another limitation of MAPS is that its expressivity is bounded by the model’s vocabulary. Namely, it can only map operations that can be expressed via pairs of tokens. While this formulation can effectively describe and capture various features, as demonstrated by our experiments in § 4 and § 5, there are likely to be operations that this framework would overlook, such as idioms and positional features. A related challenge is the lower coverage of MAPS in early layers, where the model may not yet operate in the output vocabulary space, but instead computes general-purpose features to be used by later layers. Extending MAPS to support other types of representations is a promising direction to overcome these limitations, as well as exploring methods such as linear mappings Yom Din et al. (2024) and patching Ghandeharioun et al. (2024) to improve the performance on early layers.
Lastly, MAPS relies on the formulation of attention heads as interaction matrices (§ 2), which ignores the bias terms of $W_{V},W_{O}$ . While our experiments show a strong correlation between the estimations by MAPS and head outputs, these terms may still influence the outputs. Incorporating the bias terms into the analysis is an interesting direction, which we leave for future work.
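To illustrate how the ignored bias terms enter the computation, the following toy check (assuming a GPT-2-style parameterization with biases on the value and output projections, and random placeholder weights) shows that, because attention weights are row-stochastic, the biases add a token-independent offset to the head's output:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_model, d_head = 4, 8, 2   # toy sizes

X = rng.normal(size=(T, d_model))      # contextualized inputs
A = np.full((T, T), 1.0 / T)           # attention weights; each row sums to 1
W_V = rng.normal(size=(d_model, d_head)); b_V = rng.normal(size=d_head)
W_O = rng.normal(size=(d_head, d_model)); b_O = rng.normal(size=d_model)

# Full head output: the value projection adds b_V, the output projection b_O.
full = (A @ (X @ W_V + b_V)) @ W_O + b_O
# Interaction-matrix view (bias-free), as analyzed by MAPS.
via_WVO = A @ X @ (W_V @ W_O)
# Since each row of A sums to 1, the difference is a constant per-position
# offset b_V W_O + b_O that does not depend on the input tokens.
offset = b_V @ W_O + b_O
assert np.allclose(full, via_WVO + offset)
```

Under these assumptions the bias contribution shifts all token mappings uniformly, which is consistent with the strong correlations observed despite dropping the bias terms.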
## Acknowledgments
We thank Guy Dar, Daniela Gottesman, Ohav Barbi, Ori Yoran, Yoav Gur-Arieh and Samuel Amouyal who helped with analysis and provided useful feedback. This research was supported in part by The Israel Science Foundation grant 1083/24.
## References
- Ainslie et al. (2023) Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebron, and Sumit Sanghai. 2023. GQA: Training generalized multi-query transformer models from multi-head checkpoints. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4895–4901, Singapore. Association for Computational Linguistics.
- Arditi et al. (2024) Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, and Neel Nanda. 2024. Refusal in language models is mediated by a single direction. ArXiv preprint, abs/2406.11717.
- Azaria and Mitchell (2023) Amos Azaria and Tom Mitchell. 2023. The internal state of an LLM knows when it‘s lying. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 967–976, Singapore. Association for Computational Linguistics.
- Biderman et al. (2023) Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 2397–2430. PMLR.
- Bohnet et al. (2022) Bernd Bohnet, Vinh Q Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Massimiliano Ciaramita, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, et al. 2022. Attributed question answering: Evaluation and modeling for attributed large language models. arXiv preprint arXiv:2212.08037.
- Bolukbasi et al. (2021) Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, and Martin Wattenberg. 2021. An interpretability illusion for bert. ArXiv preprint, abs/2104.07143.
- Clark et al. (2019) Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics.
- Dar et al. (2023) Guy Dar, Mor Geva, Ankit Gupta, and Jonathan Berant. 2023. Analyzing transformers in embedding space. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16124–16170, Toronto, Canada. Association for Computational Linguistics.
- Dubey et al. (2024) Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. ArXiv preprint, abs/2407.21783.
- Elhage et al. (2021) Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, et al. 2021. A mathematical framework for transformer circuits. Transformer Circuits Thread, 1(1):12.
- Ferrando et al. (2024) Javier Ferrando, Gabriele Sarti, Arianna Bisazza, and Marta R Costa-jussà. 2024. A primer on the inner workings of transformer-based language models. ArXiv preprint, abs/2405.00208.
- Gao et al. (2024) Leo Gao, Tom Dupré la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. 2024. Scaling and evaluating sparse autoencoders. ArXiv preprint, abs/2406.04093.
- García-Carrasco et al. (2024) Jorge García-Carrasco, Alejandro Maté, and Juan C. Trujillo. 2024. How does GPT-2 predict acronyms? extracting and understanding a circuit via mechanistic interpretability. In International Conference on Artificial Intelligence and Statistics, 2-4 May 2024, Palau de Congressos, Valencia, Spain, volume 238 of Proceedings of Machine Learning Research, pages 3322–3330. PMLR.
- Geva et al. (2022) Mor Geva, Avi Caciularu, Kevin Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 30–45, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Geva et al. (2021) Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5484–5495, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Ghandeharioun et al. (2024) Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, and Mor Geva. 2024. Patchscopes: A unifying framework for inspecting hidden representations of language models. In Forty-first International Conference on Machine Learning.
- Gould et al. (2024) Rhys Gould, Euan Ong, George Ogden, and Arthur Conmy. 2024. Successor heads: Recurring, interpretable attention heads in the wild. In The Twelfth International Conference on Learning Representations.
- Gur-Arieh et al. (2025) Yoav Gur-Arieh, Roy Mayan, Chen Agassy, Atticus Geiger, and Mor Geva. 2025. Enhancing automated interpretability with output-centric feature descriptions. ArXiv preprint, abs/2501.08319.
- Hernandez et al. (2024) Evan Hernandez, Arnab Sen Sharma, Tal Haklay, Kevin Meng, Martin Wattenberg, Jacob Andreas, Yonatan Belinkov, and David Bau. 2024. Linearity of relation decoding in transformer language models. In The Twelfth International Conference on Learning Representations.
- Hurst et al. (2024) Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. Gpt-4o system card. ArXiv preprint, abs/2410.21276.
- Javaheripi and Bubeck (2023) Mojan Javaheripi and Sébastien Bubeck. 2023. Phi-2: The surprising power of small language models.
- Katz et al. (2024) Shahar Katz, Yonatan Belinkov, Mor Geva, and Lior Wolf. 2024. Backward lens: Projecting language model gradients into the vocabulary space. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2390–2422, Miami, Florida, USA. Association for Computational Linguistics.
- Kim et al. (2024) Geonhee Kim, Marco Valentino, and André Freitas. 2024. A mechanistic interpretation of syllogistic reasoning in auto-regressive language models. ArXiv preprint, abs/2408.08590.
- Kissane et al. (2024) Connor Kissane, Robert Krzyzanowski, Joseph Isaac Bloom, Arthur Conmy, and Neel Nanda. 2024. Interpreting attention layer outputs with sparse autoencoders. In ICML 2024 Workshop on Mechanistic Interpretability.
- Kuhn et al. (2023) Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. In The Eleventh International Conference on Learning Representations.
- Li et al. (2015) Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. 2015. Convergent learning: Do different neural networks learn the same representations? In Proceedings of the 1st International Workshop on Feature Extraction: Modern Questions and Challenges at NIPS 2015, volume 44 of Proceedings of Machine Learning Research, pages 196–212, Montreal, Canada. PMLR.
- Loper and Bird (2002) Edward Loper and Steven Bird. 2002. NLTK: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics, pages 63–70, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
- McDougall et al. (2024) Callum Stuart McDougall, Arthur Conmy, Cody Rushing, Thomas McGrath, and Neel Nanda. 2024. Copy suppression: Comprehensively understanding a motif in language model attention heads. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 337–363, Miami, Florida, US. Association for Computational Linguistics.
- Meng et al. (2022) Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in GPT. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.
- Merullo et al. (2024a) Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. 2024a. Circuit component reuse across tasks in transformer language models. In The Twelfth International Conference on Learning Representations.
- Merullo et al. (2024b) Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. 2024b. Talking heads: Understanding inter-layer communication in transformer language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
- Millidge and Black (2022) Beren Millidge and Sid Black. 2022. The singular value decompositions of transformer weight matrices are highly interpretable.
- Nanda and Bloom (2022) Neel Nanda and Joseph Bloom. 2022. Transformerlens. https://github.com/TransformerLensOrg/TransformerLens.
- nostalgebraist (2020) nostalgebraist. 2020. Interpreting gpt: the logit lens.
- Olsson et al. (2022) Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. 2022. In-context learning and induction heads. ArXiv preprint, abs/2209.11895.
- Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
- Schober et al. (2018) Patrick Schober, Christa Boer, and Lothar A. Schwarte. 2018. Correlation coefficients: Appropriate use and interpretation. Anesthesia & Analgesia, 126:1763–1768.
- Tigges et al. (2024) Curt Tigges, Michael Hanna, Qinan Yu, and Stella Biderman. 2024. LLM circuit analyses are consistent across training and scale. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008.
- Vig and Belinkov (2019) Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63–76, Florence, Italy. Association for Computational Linguistics.
- Voita et al. (2019) Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy. Association for Computational Linguistics.
- Vrandečić and Krötzsch (2014) Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78–85.
- Wang et al. (2023) Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. 2023. Interpretability in the wild: a circuit for indirect object identification in GPT-2 small. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
- Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
- Xiao et al. (2024) Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2024. Efficient streaming language models with attention sinks. In The Twelfth International Conference on Learning Representations.
- Yom Din et al. (2024) Alexander Yom Din, Taelin Karidi, Leshem Choshen, and Mor Geva. 2024. Jump to conclusions: Short-cutting transformers with linear transformations. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 9615–9625, Torino, Italia. ELRA and ICCL.
- Yu et al. (2024) Lei Yu, Meng Cao, Jackie CK Cheung, and Yue Dong. 2024. Mechanistic understanding and mitigation of language model non-factual hallucinations. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7943–7956, Miami, Florida, USA. Association for Computational Linguistics.
- Zheng et al. (2024) Zifan Zheng, Yezhaohui Wang, Yuxin Huang, Shichao Song, Bo Tang, Feiyu Xiong, and Zhiyu Li. 2024. Attention heads of large language models: A survey. ArXiv preprint, abs/2409.03752.
- Zhou et al. (2024) Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Kun Wang, Yang Liu, Junfeng Fang, and Yongbin Li. 2024. On the role of attention heads in large language model safety. ArXiv preprint, abs/2410.13708.
## Appendix A Mapping Predefined Relations – Additional Details and Results
In § 4, we showed how MAPS can be utilized to map all heads that implement a predefined relation across a language model. Here we offer further details on the datasets and implementation, as well as supplementary results.
### A.1 Datasets
| Category | Relation | Example mappings | Llama-3.1 | Pythia | GPT-2 / Phi-2 |
| --- | --- | --- | --- | --- | --- |
| Algorithmic | Copying | (ottawa, ottawa), (say, say) | 450 | 432 | 436 |
| | Name copying | (Mallory, Mallory), (Walt, Walt) | 134 | 113 | 132 |
| | Word to first letter | (bend, b), (past, p) | 238 | 237 | 238 |
| | Word to last letter | (bend, d), (past, t) | 238 | 237 | 238 |
| | Year to following | (1728, 1729), (1958, 1959) | | 147 | 133 |
| Knowledge | Country to capital | (Bulgaria, Sofia), (Chile, Santiago) | 45 | 32 | 43 |
| | Country to language | (Laos, Lao), (Denmark, Danish) | 51 | 37 | 48 |
| | Object to superclass | (tiger, animal), (carp, fish) | 62 | 46 | 65 |
| | Product by company | (Xbox, Microsoft), (Bravia, Sony) | 39 | | 40 |
| | Work to location | (farmer, farm), (chef, kitchen) | 48 | 34 | 45 |
| Linguistic | Adj to comparative | (big, bigger), (high, higher) | 47 | 44 | 48 |
| | Adj to superlative | (angry, angriest), (high, highest) | 39 | | 41 |
| | Noun to pronoun | (viewers, they), (Anna, she) | 257 | 238 | 253 |
| | Verb to past tense | (ask, asked), (eat, ate) | 110 | 112 | 112 |
| | Word to antonym | (love, hate), (right, wrong) | 91 | 88 | 92 |
| | Word to compound | (hot, hotdog), (wall, wallpaper) | 38 | | 36 |
| | Word to homophone | (steal, steel), (sea, see) | 103 | 88 | 91 |
| | Word to synonym | (vague, obscure), (ill, sick) | 154 | 142 | 154 |
| Translation | English to French | (cat, chat), (love, amour) | 32 | | |
| | English to Spanish | (cat, gato), (love, amor) | 34 | | |
Table 3: Datasets used for inspecting predefined operations in models with different tokenizers. Each model column lists the dataset sizes for that model; differences between tokenizers lead to differences between datasets. We discard datasets that were left with $\leq 30$ single-token mappings after tokenization.
Table 4: Sources for constructing per-relation datasets used in § 4.
We list the categories and relations used to map predefined relations (§ 4), alongside the dataset sizes and example relation pairs, in Table 3.
#### Data collection
We obtained the relation pairs from the following sources: WikiData Vrandečić and Krötzsch (2014); the “English Word Frequency List” Kaggle dataset (https://www.kaggle.com/datasets/wheelercode/english-word-frequency-list), which is based on Google Books Ngram Viewer Exports, version 3, exported on Feb 17, 2020 (https://storage.googleapis.com/books/ngrams/books/datasetsv3.html); the datasets used by Hernandez et al. (2024), which are based on CounterFact Meng et al. (2022) and WikiData; and ChatGPT (https://chatgpt.com/). We also used the nltk package Loper and Bird (2002) to validate several relation datasets. Except for the translation and year-to-following datasets, all datasets are in English. Which source was used to compose which relation is detailed in Table 4.
In the datasets for the relations work to location, verb to past tense, product by company, object to superclass, adj to superlative, adj to comparative, and word to antonym, we filtered out pairs whose source token also appeared as the source token of another pair. Additional relation pairs were filtered out from the various datasets to ensure their correctness.
#### Data processing
For every model, we tokenized the various datasets using the model’s tokenizer. To maximize the number of words mapped to single tokens, we added a leading space before every word. For example, if the relation source word was "Don", we tokenized the string " Don" instead. Finally, we filtered out relation pairs where at least one of the words was mapped to more than one token.
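The processing above can be sketched as follows. `tokenize` is a stand-in for a real tokenizer's encode method (e.g. from Hugging Face `transformers`); here it is replaced by a toy vocabulary so the sketch stays self-contained.

```python
def filter_single_token_pairs(pairs, tokenize):
    """Keep only relation pairs whose words each map to a single token.

    A leading space is prepended to every word before tokenization, since
    BPE-style tokenizers often encode " word" as a single token where
    "word" alone would be split.
    """
    kept = []
    for src, tgt in pairs:
        if len(tokenize(" " + src)) == 1 and len(tokenize(" " + tgt)) == 1:
            kept.append((src, tgt))
    return kept

# Toy stand-in tokenizer: known words are single tokens, others split to characters.
vocab = {" cat", " chat", " love"}
def toy_tokenize(s):
    return [s] if s in vocab else list(s)

pairs = [("cat", "chat"), ("love", "amour")]
print(filter_single_token_pairs(pairs, toy_tokenize))
# ("love", "amour") is dropped: " amour" is not a single toy token
```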
### A.2 Implementation Details
#### Applying the first MLP
For every model except Llama-3.1 70B, and similarly to Wang et al. (2023); Gould et al. (2024), we first applied the model’s first MLP to the token embeddings. Notably, we did not apply the first MLP when analyzing heads from the models’ first layers (layer 0), since the first attention layer precedes the first MLP in the computation. To adjust the embeddings to the first MLP’s input distribution, we also applied the layer norm that precedes it. For Llama-3.1 70B, we observed better results without applying the first MLP.
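A minimal NumPy sketch of this preprocessing, assuming a pre-LN GPT-style block where a LayerNorm (gain `ln_g`, bias `ln_b`) precedes an MLP of the form `W_in` → GELU → `W_out`, added residually; all parameter names and shapes are illustrative:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def apply_first_mlp(E, ln_g, ln_b, W_in, b_in, W_out, b_out, eps=1e-5):
    """Enrich raw token embeddings E (n, d) with the model's first MLP.

    The layer norm preceding the MLP is applied first, to match the MLP's
    input distribution; the MLP output is added to E residually.
    """
    h = (E - E.mean(-1, keepdims=True)) / np.sqrt(E.var(-1, keepdims=True) + eps)
    h = h * ln_g + ln_b
    return E + gelu(h @ W_in + b_in) @ W_out + b_out

rng = np.random.default_rng(0)
d, d_mlp, n = 16, 64, 5
E = rng.normal(size=(n, d))
out = apply_first_mlp(E, np.ones(d), np.zeros(d),
                      rng.normal(size=(d, d_mlp)), np.zeros(d_mlp),
                      rng.normal(size=(d_mlp, d)) * 0.1, np.zeros(d))
print(out.shape)  # (5, 16)
```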
#### Selection of $k$
To calculate a head’s relation score $\phi_{R}(M)$ , we obtain the top- $k$ tokens in $\mathbf{m}_{s}$ for every source token $s$ . For Pythia, GPT-2 and Phi-2 we set $k=1$ for copying and name-copying relations and $k=10$ for other relations. For the Llama-3.1 models we set $k=3$ for copying and name-copying and $k=25$ for other relations. Table 5 – which presents the tokenization applied to several base words by the tokenizers of Llama-3.1, GPT-2 and Pythia – demonstrates the need to set larger $k$ values for Llama-3.1. The larger vocabulary size allows Llama-3.1’s tokenizer to express the same concept with more tokens.
| Word | Llama-3.1 | Pythia | GPT-2 |
| --- | --- | --- | --- |
| Hello | >Hello, Hello, _hello, Ġhello, hello, ĠHello, Hallo, Bonjour, Hola | Hello, Ġhello, hello, ĠHello | hello, ĠHello, Ġhello, Hello |
| Please | Please, Ġplease, please, ĠPLEASE, ĠPlease, .Please, PLEASE, >Please, Bitte, ĠBITTE, ĠBitte, Ġbitte | Please, please, Ġplease, ĠPlease | Please, Ġplease, ĠPlease, ĠPLEASE, please |
| Love | ĠLOVE, love, loven, Ġlove, Love, ĠLove, ĠLiebe, Ġliebe, Ġamour, Ġamore, Ġamor | love, ĠLOVE, Love, Ġlove, ĠLove | Ġlove, love, ĠLove, Love, ĠLOVE |
| Water | -water, _WATER, ĠWater, _water, water, Ġwater, Water, ĠWATER, .water, ĠWasser, ’eau, agua, Ġagua | Water, Ġwater, water, ĠWater, agua | Water, water, Ġwater, ewater, ĠWater |
| School | ĠSCHOOL, -school, schools, Ġschool, _school, school, ĠSchool, .school, School | School, Ġschool, school, ĠSchool | ĠSchool, Ġschool, school, ĠSCHOOL, School |
Table 5: Different tokenizations for base words by the tokenizers of Llama-3.1, Pythia and GPT-2. The “Ġ” symbol represents a leading space. We observe that Llama-3.1’s larger vocabulary allows expressing every base word with more tokens.
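The relation-score computation described above can be sketched as follows, with toy one-hot embeddings standing in for real token embeddings so that the example is deterministic (names are illustrative):

```python
import numpy as np

def relation_score(E, W_VO, pairs, k):
    """Fraction of relation pairs (s, t) whose target token t appears among
    the top-k tokens of m_s = e_s W_VO E^T (a sketch of the score phi_R)."""
    logits = E @ W_VO @ E.T            # row s: scores of all vocabulary tokens
    hits = sum(t in np.argsort(-logits[s])[:k] for s, t in pairs)
    return hits / len(pairs)

V = 50
E = np.eye(V)                            # toy one-hot token embeddings
W_succ = np.roll(np.eye(V), -1, axis=0)  # maps e_s to e_{s+1}
pairs = [(i, i + 1) for i in range(V - 1)]
print(relation_score(E, W_succ, pairs, k=1))     # 1.0: a perfect successor head
print(relation_score(E, np.eye(V), pairs, k=1))  # 0.0: identity only copies
```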
### A.3 Random Baselines
A concern that may arise from choosing a relatively small relation score threshold is that the results obtained by MAPS capture the similarity of token embeddings, rather than a functionality implemented by the attention head’s weights. To study this, we applied MAPS to random matrices sampled from the empirical distribution of the model. Concretely, for every layer in the original model, we sampled $H$ random matrices (with the same shape as $W_{VO}$) from a normal distribution whose mean and standard deviation match those of the $W_{VO}$ matrices in that layer. We applied our predefined relation analysis (described in § 4.1) to these matrices and counted how many of them were classified as “functional attention heads”.
For Phi-2, Pythia 6.9B, Pythia 12B, Llama-3.1 8B and Llama-3.1 70B, no random matrices were classified as relation heads. For GPT-2 xl, 5 matrices were classified as such, compared to 250 relation heads among the 1200 heads of the trained model. This demonstrates that the choice of $\tau=15\%$ is meaningful for separating the functionality of trained attention heads from that of random ones. While smaller thresholds would also have been justified by this experiment, we chose $\tau=15\%$ to ensure that classified heads encode a substantial fraction of the relation pairs.
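The baseline construction can be sketched as follows, assuming the layer's $W_{VO}$ matrices are stacked into a single array of shape $(H, d, d)$; shapes are toy values chosen for illustration:

```python
import numpy as np

def random_baseline_layer(W_VO_stack, rng):
    """Sample H random matrices with the same shape as the layer's W_VO
    matrices, from a normal distribution whose mean and standard deviation
    match the empirical mean and std of the layer's W_VO weights."""
    mu, sigma = W_VO_stack.mean(), W_VO_stack.std()
    return rng.normal(mu, sigma, size=W_VO_stack.shape)

rng = np.random.default_rng(0)
layer = rng.normal(0.02, 0.5, size=(12, 64, 64))  # toy layer: H=12 heads
fake = random_baseline_layer(layer, rng)
print(fake.shape)  # (12, 64, 64)
```

The fake matrices can then be scored with the same relation analysis as the trained heads, to check how many clear the threshold by chance.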
### A.4 Additional Results
In Figure 5 we display all heads classified in Llama-3.1 70B, Llama-3.1 8B, Pythia 12B, Pythia 6.9B, Phi-2 and GPT-2 xl, divided into four categories. In Tables 6 and 7 we present the number of relation heads (and suppression relation heads) discovered in the same models, broken down by relation. We observe that several relations (name copying, adj to comparative, word to first letter) are implemented by a relatively large number of heads in at least five out of six models. On the other hand, several relations (e.g., word to homophone, word to last letter) are implemented by only a small number of heads across all models.
| Category | Relation | GPT-2 xl | Phi-2 | Pythia 6.9B | Pythia 12B | Llama-3.1 8B | Llama-3.1 70B |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Algorithmic | Copying | 35 | 15 | 11 | 9 | 2 | 1 |
| | Name copying | 71 | 25 | 27 | 23 | 3 | 14 |
| | Word to first letter | 4 | 5 | 13 | 13 | 15 | 19 |
| | Word to last letter | 0 | 1 | 2 | 1 | 2 | 2 |
| | Year to following | 47 | 16 | 14 | 22 | | |
| Knowledge | Country to capital | 60 | 17 | 26 | 31 | 5 | 26 |
| | Country to language | 50 | 23 | 24 | 30 | 5 | 28 |
| | Object to superclass | 17 | 12 | 11 | 19 | 0 | 13 |
| | Product by company | 24 | 4 | | | 1 | 3 |
| | Work to location | 10 | 6 | 6 | 8 | 0 | 5 |
| Linguistic | Adj to comparative | 45 | 47 | 27 | 28 | 8 | 25 |
| | Adj to superlative | 23 | 23 | | | 10 | 21 |
| | Noun to pronoun | 14 | 13 | 13 | 16 | 8 | 12 |
| | Verb to past tense | 15 | 27 | 17 | 28 | 8 | 18 |
| | Word to antonym | 12 | 15 | 11 | 15 | 5 | 11 |
| | Word to compound | 1 | 1 | | | 2 | 5 |
| | Word to homophone | 0 | 0 | 0 | 0 | 0 | 2 |
| | Word to synonym | 7 | 7 | 3 | 7 | 1 | 2 |
| Translation | English to French | | | | | 0 | 2 |
| | English to Spanish | | | | | 3 | 10 |
Table 6: Number of heads implementing each of the relations across different models.
| Category | Relation | GPT-2 xl | Phi-2 | Pythia 6.9B | Pythia 12B | Llama-3.1 8B | Llama-3.1 70B |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Algorithmic | Copying | 8 | 7 | 5 | 7 | 0 | 2 |
| | Name copying | 23 | 9 | 9 | 7 | 3 | 8 |
| | Word to first letter | 0 | 2 | 2 | 0 | 9 | 11 |
| | Word to last letter | 0 | 0 | 2 | 2 | 1 | 3 |
| | Year to following | 5 | 2 | 1 | 0 | | |
| Knowledge | Country to capital | 19 | 8 | 5 | 5 | 1 | 10 |
| | Country to language | 26 | 12 | 9 | 11 | 3 | 9 |
| | Object to superclass | 2 | 5 | 3 | 6 | 0 | 4 |
| | Product by company | 7 | 0 | | | 0 | 3 |
| | Work to location | 2 | 3 | 1 | 1 | 0 | 2 |
| Linguistic | Adj to comparative | 11 | 29 | 15 | 19 | 5 | 13 |
| | Adj to superlative | 6 | 13 | | | 5 | 10 |
| | Noun to pronoun | 1 | 2 | 2 | 4 | 4 | 7 |
| | Verb to past tense | 2 | 21 | 8 | 7 | 5 | 10 |
| | Word to antonym | 0 | 4 | 3 | 4 | 2 | 3 |
| | Word to compound | 0 | 1 | | | 2 | 3 |
| | Word to homophone | 0 | 0 | 0 | 0 | 1 | 1 |
| | Word to synonym | 0 | 2 | 0 | 1 | 0 | 1 |
| Translation | English to French | | | | | 0 | 0 |
| | English to Spanish | | | | | 2 | 7 |
Table 7: Number of suppression heads implementing each of the relations across different models.
<details>
<summary>x5.png Details</summary>



Scatter plots of the functionality mapping for Llama-3.1 70B; x-axis: layer (0–80), y-axis: head index (0–60). The main panel ("All Categories") color-codes each classified head by category — Algorithmic (blue), Knowledge (orange), Linguistic (green), Translation (red), heads implementing 2/3/4 categories (purple/brown/pink), unclassified heads (gray) — and four sub-panels isolate each category.
</details>
(a) Functionality mapping by MAPS for relations of 4 categories — algorithmic, knowledge, linguistic, translation — across all attention heads in Llama-3.1 70B. A head is marked for a specific category if it implements (possibly in a suppression variant) at least one relation from that category.
<details>
<summary>x6.png Details</summary>



Scatter plots of the functionality mapping for Llama-3.1 8B; x-axis: layer (0–30), y-axis: head index (0–30). The main panel ("All Categories") color-codes each classified head by category — Algorithmic (blue), Knowledge (orange), Linguistic (green), Translation (red), heads implementing 2/3/4 categories (purple/brown/pink), unclassified heads (gray) — and four sub-panels isolate each category.
</details>
(b) Functionality mapping by MAPS for Llama-3.1 8B.
<details>
<summary>x7.png Details</summary>



Scatter plots of the functionality mapping for Pythia 12B; x-axis: layer (0–35), y-axis: head index (0–40). The main panel ("All Categories") color-codes each classified head — Algorithmic (blue), Knowledge (orange), Linguistic (green), heads implementing 2/3 categories (purple/brown), unclassified heads (gray) — and three sub-panels isolate each category.
</details>
(c) Functionality mapping by MAPS for Pythia 12B.
<details>
<summary>x8.png Details</summary>



Scatter plots of the functionality mapping for Pythia 6.9B; x-axis: layer (0–30), y-axis: head index (0–30). The main panel ("All Categories") color-codes each classified head — Algorithmic (blue), Knowledge (orange), Linguistic (green), heads implementing 2/3 categories (purple/brown), unclassified heads (gray) — and three sub-panels isolate each category.
</details>
(d) Functionality mapping by MAPS for Pythia 6.9B.
<details>
<summary>x9.png Details</summary>



Scatter plots of the functionality mapping for Phi-2; x-axis: layer (0–30), y-axis: head index (0–30). The main panel ("All Categories") color-codes each classified head — Algorithmic (blue), Knowledge (orange), Linguistic (green), heads implementing 2/3 categories (purple/brown), unclassified heads (gray) — and three sub-panels isolate each category.
</details>
(e) Functionality mapping by MAPS for Phi-2.
<details>
<summary>x10.png Details</summary>



Scatter plots of the functionality mapping for GPT-2 xl; x-axis: layer (0–45), y-axis: head index (0–25). The main panel ("All Categories") color-codes each classified head — Algorithmic (blue), Knowledge (orange), Linguistic (green), heads implementing 2/3 categories (purple/brown), unclassified heads (gray) — and three sub-panels isolate each category.
</details>
(f) Functionality mapping by MAPS for GPT-2 xl.
Figure 5: Functionality mapping by MAPS.
## Appendix B Additional Details on Evaluation Experiment
### B.1 Correlative Experiment
In § 4.2 we conducted an experiment that computes the correlation between MAPS's estimations and the heads' outputs during inference.
#### Implementation details
Recall that the attention head formulation we used, $Y=\tilde{X}W_{VO}$, omits the bias terms of $W_{V}$ and $W_{O}$ (§ 2). To account for the bias term of $W_{V}$ in the correlative experiment, where we compute the attention head’s output dynamically, we combine the original attention head definition of Vaswani et al. (2017) with the formulation of Elhage et al. (2021), which we have followed so far. First, following Vaswani et al. (2017), we obtain the head’s intermediate output $\hat{y}\in\mathbb{R}^{n\times d_{\text{head}}}$, where $d_{\text{head}}$ is the inner dimension of the head, often fixed to $\frac{d}{H}$. Notably, this output already incorporates the bias term of $W_{V}$; in Vaswani et al. (2017), $\hat{y}$ is viewed as the head’s final output. Then, following Elhage et al. (2021), we multiply this intermediate output by $W_{O}\in\mathbb{R}^{d_{\text{head}}\times d}$ to obtain the head’s final output.
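The two-step computation above can be sketched as follows. This is a minimal, framework-free illustration with tiny made-up matrices (no real model weights): step 1 forms the intermediate output $\hat{y} = A(XW_V + b_V)$, which includes the bias of $W_V$, and step 2 projects it back with $W_O$.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def head_output(A, X, W_V, b_V, W_O):
    """Two-step attention head output.

    A:   (n x n) attention weights (assumed already computed via softmax)
    X:   (n x d) input hidden states
    W_V: (d x d_head), b_V: (d_head,), W_O: (d_head x d)
    """
    # Step 1 (Vaswani et al., 2017): intermediate output y_hat = A (X W_V + b_V),
    # which already incorporates the bias term of W_V.
    XWv = matmul(X, W_V)
    XWv_b = [[v + b for v, b in zip(row, b_V)] for row in XWv]
    y_hat = matmul(A, XWv_b)
    # Step 2 (Elhage et al., 2021): project back to the residual stream with W_O.
    return matmul(y_hat, W_O)
```

With identity attention and an identity output projection, the result is simply $XW_V + b_V$, which makes the role of the bias term explicit.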
We use the following templates: ‘‘This is a document about $\langle s\rangle$’’, ‘‘No $\langle s\rangle$ means no’’, ‘‘The story of $\langle s\rangle$ contains’’, ‘‘When I think about $\langle s\rangle$ I think about’’.
#### Additional results
Tables 8, 9, 10, 11, 12 present the correlation between the static score $\phi_{R}(h)$ inferred by our method and the score $\phi^{*}_{R}(h)$ observed dynamically (both with and without contextualization), for Llama-3.1 70B, Llama-3.1 8B, Pythia 12B, Pythia 6.9B, and GPT-2 xl. We also report the p-values and the maximum relation score obtained by any head in the model for the given relation. Notably, some of the lower correlations occur for relations that are not fully implemented by the model’s attention heads, as indicated by their small maximum relation scores. Tables 13, 14, 15, 16, 17 present the corresponding results (in the same format) for the suppression relation scores.
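The per-relation correlation in the tables below can be sketched as follows. This assumes a Pearson correlation over heads (see § 4.2 for the exact statistic); the score lists are made up for illustration and do not come from any model.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-head scores for one relation: the static score phi_R(h)
# inferred from parameters, and the dynamic score phi*_R(h) observed during
# inference (illustrative values only).
static_scores = [0.84, 0.10, 0.55, 0.03, 0.91, 0.20]
dynamic_scores = [0.80, 0.15, 0.50, 0.05, 0.88, 0.25]

r = pearson(static_scores, dynamic_scores)  # close to 1 for these similar lists
```

A high value of `r` for a relation indicates that heads ranked highly by the static score also produce the corresponding mapping at inference time.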
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.84 | 0.81 | 0.22 |
| | Name copying | 0.94 | 0.89 | 0.83 |
| | Word to first letter | 0.88 | 0.78 | 0.95 |
| | Word to last letter | 0.66 | 0.39 | 0.16 |
| Knowledge | Country to capital | 0.93 | 0.88 | 0.87 |
| | Country to language | 0.94 | 0.88 | 0.67 |
| | Object to superclass | 0.75 | 0.76 | 0.52 |
| | Product by company | 0.69 | 0.65 | 0.36 |
| | Work to location | 0.58 | 0.58 | 0.31 |
| Linguistic | Adj to comparative | 0.90 | 0.88 | 0.57 |
| | Adj to superlative | 0.90 | 0.84 | 0.67 |
| | Noun to pronoun | 0.57 | 0.41 | 0.33 |
| | Verb to past tense | 0.90 | 0.80 | 0.81 |
| | Word to antonym | 0.93 | 0.91 | 0.62 |
| | Word to compound | 0.85 | 0.82 | 0.39 |
| | Word to homophone | 0.87 | 0.80 | 0.16 |
| | Word to synonym | 0.84 | 0.79 | 0.27 |
| Translation | English to French | 0.71 | 0.68 | 0.22 |
| | English to Spanish | 0.85 | 0.83 | 0.47 |
Table 8: Correlation between the relation score of a head and the head’s output in Llama-3.1 70B, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are 0.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.76 | 0.73 | 0.18 |
| | Name copying | 0.95 | 0.95 | 0.71 |
| | Word to first letter | 0.90 | 0.78 | 0.89 |
| | Word to last letter | 0.67 | 0.36 | 0.27 |
| Knowledge | Country to capital | 0.85 | 0.85 | 0.49 |
| | Country to language | 0.76 | 0.62 | 0.31 |
| | Object to superclass | 0.74 | 0.73 | 0.15 |
| | Product by company | 0.46 | 0.49 | 0.18 |
| | Work to location | 0.44 | 0.45 | 0.10 |
| Linguistic | Adj to comparative | 0.85 | 0.86 | 0.60 |
| | Adj to superlative | 0.87 | 0.89 | 0.59 |
| | Noun to pronoun | 0.89 | 0.79 | 0.57 |
| | Verb to past tense | 0.91 | 0.86 | 0.73 |
| | Word to antonym | 0.90 | 0.86 | 0.37 |
| | Word to compound | 0.78 | 0.62 | 0.21 |
| | Word to homophone | 0.85 | 0.75 | 0.08 |
| | Word to synonym | 0.79 | 0.69 | 0.17 |
| Translation | English to French | 0.71 | 0.68 | 0.12 |
| | English to Spanish | 0.82 | 0.81 | 0.29 |
Table 9: Correlation between the relation score of a head and the head’s output in Llama-3.1 8B, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are $\leq$ 3.9e-128.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.89 | 0.60 | 0.42 |
| | Name copying | 0.86 | 0.57 | 0.65 |
| | Word to first letter | 0.84 | 0.62 | 0.75 |
| | Word to last letter | 0.36 | 0.17 | 0.16 |
| | Year to following | 0.90 | 0.78 | 1.00 |
| Knowledge | Country to capital | 0.93 | 0.89 | 0.97 |
| | Country to language | 0.94 | 0.89 | 0.86 |
| | Object to superclass | 0.88 | 0.87 | 0.74 |
| | Work to location | 0.75 | 0.64 | 0.29 |
| Linguistic | Adj to comparative | 0.92 | 0.80 | 0.95 |
| | Noun to pronoun | 0.85 | 0.74 | 0.50 |
| | Verb to past tense | 0.89 | 0.71 | 0.54 |
| | Word to antonym | 0.92 | 0.85 | 0.60 |
| | Word to homophone | 0.67 | 0.43 | 0.07 |
| | Word to synonym | 0.90 | 0.67 | 0.35 |
Table 10: Correlation between the relation score of a head and the head’s output in Pythia 12B, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are $\leq$ 5.7e-40.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.88 | 0.45 | 0.53 |
| | Name copying | 0.94 | 0.62 | 0.96 |
| | Word to first letter | 0.87 | 0.64 | 0.67 |
| | Word to last letter | 0.44 | 0.43 | 0.27 |
| | Year to following | 0.94 | 0.79 | 0.99 |
| Knowledge | Country to capital | 0.95 | 0.91 | 0.97 |
| | Country to language | 0.91 | 0.86 | 0.84 |
| | Object to superclass | 0.88 | 0.88 | 0.72 |
| | Work to location | 0.76 | 0.68 | 0.29 |
| Linguistic | Adj to comparative | 0.91 | 0.76 | 0.77 |
| | Noun to pronoun | 0.89 | 0.67 | 0.63 |
| | Verb to past tense | 0.91 | 0.70 | 0.81 |
| | Word to antonym | 0.93 | 0.87 | 0.64 |
| | Word to homophone | 0.70 | 0.38 | 0.05 |
| | Word to synonym | 0.93 | 0.64 | 0.36 |
Table 11: Correlation between the relation score of a head and the head’s output in Pythia 6.9B, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are $\leq$ 1.7e-139.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.95 | 0.65 | 0.52 |
| | Name copying | 0.97 | 0.70 | 0.92 |
| | Word to first letter | 0.91 | 0.69 | 0.32 |
| | Word to last letter | 0.61 | 0.20 | 0.05 |
| | Year to following | 0.94 | 0.74 | 0.95 |
| Knowledge | Country to capital | 0.98 | 0.88 | 0.98 |
| | Country to language | 0.96 | 0.84 | 0.75 |
| | Object to superclass | 0.94 | 0.81 | 0.43 |
| | Product by company | 0.96 | 0.91 | 0.65 |
| | Work to location | 0.88 | 0.73 | 0.31 |
| Linguistic | Adj to comparative | 0.95 | 0.78 | 0.88 |
| | Adj to superlative | 0.94 | 0.73 | 0.54 |
| | Noun to pronoun | 0.96 | 0.68 | 0.58 |
| | Verb to past tense | 0.93 | 0.76 | 0.28 |
| | Word to antonym | 0.96 | 0.85 | 0.38 |
| | Word to compound | 0.80 | 0.65 | 0.17 |
| | Word to homophone | 0.46 | 0.38 | 0.02 |
| | Word to synonym | 0.95 | 0.79 | 0.21 |
Table 12: Correlation between the relation score of a head and the head’s output in GPT-2 xl, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are $\leq$ 1.1e-45.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.88 | 0.85 | 0.18 |
| | Name copying | 0.95 | 0.83 | 0.66 |
| | Word to first letter | 0.86 | 0.72 | 0.56 |
| | Word to last letter | 0.56 | 0.42 | 0.33 |
| Knowledge | Country to capital | 0.91 | 0.90 | 0.84 |
| | Country to language | 0.89 | 0.89 | 0.49 |
| | Object to superclass | 0.81 | 0.83 | 0.39 |
| | Product by company | 0.81 | 0.78 | 0.31 |
| | Work to location | 0.70 | 0.70 | 0.21 |
| Linguistic | Adj to comparative | 0.91 | 0.88 | 0.72 |
| | Adj to superlative | 0.90 | 0.87 | 0.56 |
| | Noun to pronoun | 0.33 | 0.30 | 0.46 |
| | Verb to past tense | 0.91 | 0.80 | 0.54 |
| | Word to antonym | 0.91 | 0.80 | 0.35 |
| | Word to compound | 0.86 | 0.82 | 0.24 |
| | Word to homophone | 0.91 | 0.81 | 0.31 |
| | Word to synonym | 0.83 | 0.77 | 0.21 |
| Translation | English to French | 0.61 | 0.59 | 0.09 |
| | English to Spanish | 0.86 | 0.83 | 0.35 |
Table 13: Correlation between the suppression relation score of a head and the head’s output in Llama-3.1 70B, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are 0.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.77 | 0.74 | 0.11 |
| | Name copying | 0.99 | 0.95 | 0.72 |
| | Word to first letter | 0.78 | 0.41 | 0.61 |
| | Word to last letter | 0.77 | 0.31 | 0.25 |
| Knowledge | Country to capital | 0.90 | 0.87 | 0.18 |
| | Country to language | 0.76 | 0.74 | 0.20 |
| | Object to superclass | 0.61 | 0.63 | 0.08 |
| | Product by company | 0.44 | 0.38 | 0.08 |
| | Work to location | 0.40 | 0.32 | 0.12 |
| Linguistic | Adj to comparative | 0.81 | 0.91 | 0.81 |
| | Adj to superlative | 0.87 | 0.93 | 0.62 |
| | Noun to pronoun | 0.80 | 0.57 | 0.40 |
| | Verb to past tense | 0.90 | 0.85 | 0.46 |
| | Word to antonym | 0.81 | 0.70 | 0.29 |
| | Word to compound | 0.84 | 0.76 | 0.24 |
| | Word to homophone | 0.89 | 0.61 | 0.17 |
| | Word to synonym | 0.75 | 0.65 | 0.09 |
| Translation | English to French | 0.74 | 0.65 | 0.06 |
| | English to Spanish | 0.84 | 0.81 | 0.26 |
Table 14: Correlation between the suppression relation score of a head and the head’s output in Llama-3.1 8B, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are $\leq$ 2.6e-89.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.91 | 0.78 | 0.31 |
| | Name copying | 0.99 | 0.72 | 1.00 |
| | Word to first letter | 0.48 | 0.18 | 0.11 |
| | Word to last letter | 0.59 | 0.23 | 0.19 |
| | Year to following | 0.39 | 0.59 | 0.12 |
| Knowledge | Country to capital | 0.63 | 0.62 | 0.56 |
| | Country to language | 0.84 | 0.70 | 0.46 |
| | Object to superclass | 0.79 | 0.77 | 0.41 |
| | Work to location | 0.61 | 0.64 | 0.24 |
| Linguistic | Adj to comparative | 0.93 | 0.74 | 0.73 |
| | Noun to pronoun | 0.68 | 0.29 | 0.28 |
| | Verb to past tense | 0.96 | 0.75 | 0.73 |
| | Word to antonym | 0.90 | 0.77 | 0.32 |
| | Word to homophone | 0.61 | 0.39 | 0.03 |
| | Word to synonym | 0.82 | 0.63 | 0.16 |
Table 15: Correlation between the suppression relation score of a head and the head’s output in Pythia 12B, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are $\leq$ 2.2e-45.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.88 | 0.81 | 0.41 |
| | Name copying | 0.98 | 0.79 | 0.96 |
| | Word to first letter | 0.81 | 0.37 | 0.31 |
| | Word to last letter | 0.30 | 0.08 | 0.24 |
| | Year to following | 0.45 | 0.80 | 0.33 |
| Knowledge | Country to capital | 0.92 | 0.91 | 0.66 |
| | Country to language | 0.89 | 0.81 | 0.51 |
| | Object to superclass | 0.86 | 0.78 | 0.33 |
| | Work to location | 0.73 | 0.58 | 0.21 |
| Linguistic | Adj to comparative | 0.95 | 0.83 | 0.59 |
| | Noun to pronoun | 0.86 | 0.51 | 0.56 |
| | Verb to past tense | 0.94 | 0.80 | 0.82 |
| | Word to antonym | 0.91 | 0.78 | 0.30 |
| | Word to homophone | 0.49 | 0.31 | 0.02 |
| | Word to synonym | 0.87 | 0.73 | 0.13 |
Table 16: Correlation between the suppression relation score of a head and the head’s output in Pythia 6.9B, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are $\leq$ 3.6e-7.
| Category | Relation | Correlation w/o context | Correlation w/ context | Max relation score (over heads) |
| --- | --- | --- | --- | --- |
| Algorithmic | Copying | 0.97 | 0.71 | 0.29 |
| | Name copying | 0.99 | 0.72 | 0.97 |
| | Word to first letter | 0.78 | 0.52 | 0.04 |
| | Word to last letter | 0.78 | 0.54 | 0.06 |
| | Year to following | 0.75 | 0.52 | 0.32 |
| Knowledge | Country to capital | 0.94 | 0.80 | 0.72 |
| | Country to language | 0.96 | 0.78 | 0.50 |
| | Object to superclass | 0.89 | 0.82 | 0.23 |
| | Product by company | 0.88 | 0.77 | 0.33 |
| | Work to location | 0.83 | 0.62 | 0.18 |
| Linguistic | Adj to comparative | 0.86 | 0.60 | 0.38 |
| | Adj to superlative | 0.81 | 0.59 | 0.27 |
| | Noun to pronoun | 0.92 | 0.34 | 0.40 |
| | Verb to past tense | 0.84 | 0.64 | 0.17 |
| | Word to antonym | 0.53 | 0.37 | 0.05 |
| | Word to compound | 0.80 | 0.58 | 0.14 |
| | Word to homophone | 0.10 | 0.04 | 0.01 |
| | Word to synonym | 0.81 | 0.59 | 0.08 |
Table 17: Correlation between the suppression relation score of a head and the head’s output in GPT-2 xl, with and without head contextualization. The “max relation score” is the highest relation score achieved by a head in the model. All p-values observed are $\leq$ 2.3e-3.
| Relation | Prompt |
| --- | --- |
| Adj to comparative | lovely-> lovelier; edgy-> edgier; <s>-> |
| Copying | walk-> walk; cat-> cat; water-> water; <s>-> |
| Country to capital | The capital of <s> is |
| Country to language | The official language of <s> is |
| English to Spanish | apartment-> departamento; computer-> computadora; tribe-> tribu; <s>-> |
| Name copying | John-> John; Donna-> Donna; <s>-> |
| Noun to pronoun | mother-> she; father-> he; tribe-> they; actress-> she; apartment-> it; <s>-> |
| Object to superclass | A <s> is a kind of |
| Product by company | Nesquik is made by Nestlé; Mustang is made by Ford; <s> is made by |
| Verb to past tense | hike->hiked; purchase-> purchased; <s>-> |
| Word to first letter | word-> w, o, r, d; cat-> c, a, t; <s>-> |
| Word to last letter | word-> d, r, o, w; cat-> t, a, c; <s>-> |
| Year to following | 1300-> 1301; 1000-> 1001; <s>-> |
Table 18: Relations and prompts used in the causal experiment. The <s> string is replaced with the relation’s source tokens.
### B.2 Causal Experiment
In § 4.2 we measured the causal effect of removing the heads that implement a specific operation on the model’s performance in handling queries that depend on that operation.
#### Implementation details
We evaluate models on tasks for 13 relations. For each model, we filter out relations where (a) the base accuracy is very low ($<$ 0.1) or (b) there is no dataset for the relation (see § A). The task prompts used for the different relations are presented in Table 18. Notably, when ablating an attention head, we remove its output only from the last position of the prompt.
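The ablation step described above can be sketched in a framework-free way. This is a hypothetical data layout (nested lists of shape `[seq][n_heads][d_head]` holding per-head outputs before they are summed into the residual stream); a real implementation would hook the model's attention module instead.

```python
def ablate_head_last_position(head_outputs, head_idx):
    """Zero one head's contribution at the last position of the prompt only.

    head_outputs: nested list [seq][n_heads][d_head] of per-head outputs
    (a hypothetical layout for illustration). Returns a new structure with
    head `head_idx` zeroed at the final position; earlier positions and all
    other heads are left untouched.
    """
    ablated = [[vec[:] for vec in pos] for pos in head_outputs]  # deep copy
    ablated[-1][head_idx] = [0.0] * len(ablated[-1][head_idx])
    return ablated
```

Restricting the intervention to the last position removes the head's direct contribution to the prediction while leaving its effect on earlier context intact.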
#### Additional results
In Tables 19, 20, 21, 22, 23 we present the extended experiment results for Llama-3.1 70B, Llama-3.1 8B, Pythia 12B, Pythia 6.9B, GPT-2 xl.
| Relation name | # heads removed | TR: Base | TR: -TR | TR: -RND | # CTR tasks | CTR: Base | CTR: -TR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Adj to comparative | 175 | 0.98 | $\downarrow$ 13% 0.85 | $\downarrow$ 0% 0.98 $\pm$ 0.00 | 5 | 0.94 $\pm$ 0.05 | $\downarrow$ 3% 0.92 $\pm$ 0.08 |
| Copying | 250 | 0.97 | $\downarrow$ 30% 0.68 | $\downarrow$ 0% 0.97 $\pm$ 0.01 | 3 | 0.97 $\pm$ 0.03 | $\downarrow$ 23% 0.75 $\pm$ 0.34 |
| Country to capital | 118 | 0.84 | $\downarrow$ 66% 0.29 | $\uparrow$ 1% 0.85 $\pm$ 0.09 | 5 | 0.93 $\pm$ 0.08 | $\uparrow$ 0% 0.94 $\pm$ 0.09 |
| Country to language | 133 | 0.96 | $\downarrow$ 6% 0.90 | $\downarrow$ 0% 0.96 $\pm$ 0.00 | 4 | 0.92 $\pm$ 0.08 | $\downarrow$ 1% 0.92 $\pm$ 0.10 |
| English to Spanish | 175 | 0.91 | $\downarrow$ 6% 0.85 | $\uparrow$ 0% 0.91 $\pm$ 0.00 | 4 | 0.97 $\pm$ 0.03 | $\uparrow$ 0% 0.97 $\pm$ 0.03 |
| Name copying | 205 | 0.99 | $\downarrow$ 95% 0.05 | $\uparrow$ 1% 1.00 $\pm$ 0.00 | 3 | 0.97 $\pm$ 0.03 | $\downarrow$ 15% 0.83 $\pm$ 0.23 |
| Noun to pronoun | 154 | 0.98 | $\uparrow$ 0% 0.98 | $\uparrow$ 0% 0.98 $\pm$ 0.00 | 5 | 0.93 $\pm$ 0.08 | $\downarrow$ 1% 0.92 $\pm$ 0.09 |
| Object to superclass | 119 | 0.79 | $\downarrow$ 4% 0.76 | $\downarrow$ 2% 0.77 $\pm$ 0.02 | 5 | 0.88 $\pm$ 0.11 | $\downarrow$ 3% 0.85 $\pm$ 0.15 |
| Product by company | 59 | 0.67 | $\downarrow$ 4% 0.64 | $\downarrow$ 0% 0.67 $\pm$ 0.00 | 1 | 0.79 $\pm$ 0.00 | $\downarrow$ 2% 0.77 $\pm$ 0.00 |
| Word to first letter | 250 | 1.00 | $\downarrow$ 8% 0.92 | $\downarrow$ 0% 1.00 $\pm$ 0.00 | 5 | 0.94 $\pm$ 0.05 | $\downarrow$ 5% 0.89 $\pm$ 0.14 |
| Word to last letter | 250 | 0.92 | $\downarrow$ 18% 0.76 | $\uparrow$ 1% 0.93 $\pm$ 0.01 | 5 | 0.94 $\pm$ 0.05 | $\uparrow$ 1% 0.95 $\pm$ 0.04 |
Table 19: Accuracy of Llama-3.1 70B on tasks for a target relation (TR) versus on control (CTR) tasks, when removing heads implementing the relation compared to when removing random heads (RND). Results for RND heads are averaged over 5 experiments.
| Relation name | # heads removed | TR: Base | TR: -TR | TR: -RND | # CTR tasks | CTR: Base | CTR: -TR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Adj to comparative | 69 | 0.98 | $\downarrow$ 7% 0.91 | $\downarrow$ 3% 0.95 $\pm$ 0.05 | 4 | 0.96 $\pm$ 0.04 | $\uparrow$ 0% 0.96 $\pm$ 0.04 |
| Copying | 150 | 1.00 | $\downarrow$ 94% 0.06 | $\downarrow$ 0% 1.00 $\pm$ 0.00 | 3 | 0.95 $\pm$ 0.04 | $\downarrow$ 5% 0.91 $\pm$ 0.05 |
| Country to capital | 19 | 0.89 | $\downarrow$ 75% 0.22 | $\uparrow$ 2% 0.91 $\pm$ 0.03 | 5 | 0.87 $\pm$ 0.12 | $\uparrow$ 1% 0.87 $\pm$ 0.12 |
| Country to language | 30 | 0.98 | $\downarrow$ 50% 0.49 | $\uparrow$ 1% 0.99 $\pm$ 0.01 | 5 | 0.98 $\pm$ 0.02 | $\downarrow$ 0% 0.98 $\pm$ 0.02 |
| English to Spanish | 54 | 0.94 | $\uparrow$ 3% 0.97 | $\downarrow$ 1% 0.93 $\pm$ 0.01 | 3 | 0.95 $\pm$ 0.04 | $\uparrow$ 2% 0.97 $\pm$ 0.02 |
| Name copying | 70 | 1.00 | $\downarrow$ 87% 0.13 | $\downarrow$ 0% 1.00 $\pm$ 0.00 | 2 | 0.94 $\pm$ 0.05 | $\downarrow$ 4% 0.90 $\pm$ 0.08 |
| Noun to pronoun | 35 | 0.98 | $\downarrow$ 0% 0.98 | $\uparrow$ 0% 0.99 $\pm$ 0.00 | 5 | 0.97 $\pm$ 0.04 | $\uparrow$ 1% 0.98 $\pm$ 0.03 |
| Object to superclass | 34 | 0.74 | $\downarrow$ 11% 0.66 | $\uparrow$ 1% 0.75 $\pm$ 0.01 | 2 | 0.79 $\pm$ 0.09 | $\downarrow$ 3% 0.77 $\pm$ 0.07 |
| Product by company | 12 | 0.54 | $\downarrow$ 5% 0.51 | $\uparrow$ 4% 0.56 $\pm$ 0.01 | 1 | 0.70 $\pm$ 0.00 | $\downarrow$ 1% 0.69 $\pm$ 0.00 |
| Verb to past tense | 113 | 0.70 | $\downarrow$ 61% 0.27 | $\downarrow$ 7% 0.65 $\pm$ 0.10 | 2 | 0.71 $\pm$ 0.18 | $\downarrow$ 1% 0.70 $\pm$ 0.14 |
| Word to first letter | 150 | 1.00 | $\downarrow$ 98% 0.02 | $\downarrow$ 0% 1.00 $\pm$ 0.00 | 5 | 0.96 $\pm$ 0.04 | $\downarrow$ 30% 0.67 $\pm$ 0.33 |
Table 20: Accuracy of Llama-3.1 8B on tasks for a target relation (TR) versus on control (CTR) tasks, when removing heads implementing the relation compared to when removing random heads (RND). Results for RND heads are averaged over 5 experiments.
| Relation name | # heads removed | TR: Base | TR: -TR | TR: -RND | # CTR tasks | CTR: Base | CTR: -TR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Adj to comparative | 150 | 0.91 | $\downarrow$ 77% 0.20 | $\downarrow$ 10% 0.82 $\pm$ 0.07 | 3 | 0.92 $\pm$ 0.04 | $\downarrow$ 32% 0.63 $\pm$ 0.18 |
| Copying | 150 | 1.00 | $\downarrow$ 32% 0.68 | $\downarrow$ 0% 1.00 $\pm$ 0.00 | 3 | 0.95 $\pm$ 0.05 | $\downarrow$ 7% 0.88 $\pm$ 0.11 |
| Country to capital | 75 | 0.97 | $\downarrow$ 100% 0.00 | $\downarrow$ 2% 0.95 $\pm$ 0.02 | 2 | 0.89 $\pm$ 0.02 | $\uparrow$ 0% 0.90 $\pm$ 0.01 |
| Country to language | 94 | 1.00 | $\downarrow$ 92% 0.08 | $\downarrow$ 4% 0.96 $\pm$ 0.01 | 2 | 0.89 $\pm$ 0.01 | $\downarrow$ 0% 0.89 $\pm$ 0.01 |
| Name copying | 150 | 1.00 | $\downarrow$ 76% 0.24 | $\downarrow$ 0% 1.00 $\pm$ 0.00 | 2 | 0.90 $\pm$ 0.02 | $\uparrow$ 2% 0.92 $\pm$ 0.05 |
| Noun to pronoun | 105 | 0.88 | $\downarrow$ 48% 0.46 | $\downarrow$ 2% 0.86 $\pm$ 0.03 | 5 | 0.90 $\pm$ 0.07 | $\downarrow$ 3% 0.88 $\pm$ 0.08 |
| Object to superclass | 75 | 0.78 | $\downarrow$ 50% 0.39 | $\downarrow$ 13% 0.68 $\pm$ 0.03 | 2 | 0.90 $\pm$ 0.02 | $\downarrow$ 3% 0.87 $\pm$ 0.09 |
| Verb to past tense | 150 | 0.22 | $\downarrow$ 84% 0.04 | $\uparrow$ 17% 0.26 $\pm$ 0.11 | 1 | 0.03 $\pm$ 0.00 | $\downarrow$ 33% 0.02 $\pm$ 0.00 |
| Word to first letter | 150 | 0.91 | $\downarrow$ 63% 0.34 | $\downarrow$ 4% 0.87 $\pm$ 0.04 | 5 | 0.91 $\pm$ 0.08 | $\downarrow$ 19% 0.74 $\pm$ 0.30 |
| Year to following | 56 | 0.92 | $\downarrow$ 100% 0.00 | $\downarrow$ 5% 0.87 $\pm$ 0.07 | 2 | 0.83 $\pm$ 0.05 | $\downarrow$ 5% 0.79 $\pm$ 0.03 |
Table 21: Accuracy of Pythia 12B on tasks for a target relation (TR) versus its accuracy on control (CTR) tasks, when removing heads implementing the relation compared to when removing random heads (RND). Results for RND heads are averaged over 5 experiments.
| Relation name | # heads removed | TR: Base | TR: -TR | TR: -RND | # CTR tasks | CTR: Base | CTR: -TR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Adj to comparative | 124 | 0.52 | $\downarrow$ 100% 0.00 | $\downarrow$ 51% 0.25 $\pm$ 0.18 | 1 | 0.68 $\pm$ 0.00 | $\downarrow$ 25% 0.51 $\pm$ 0.00 |
| Copying | 150 | 1.00 | $\downarrow$ 93% 0.07 | $\downarrow$ 1% 0.99 $\pm$ 0.01 | 0 | | |
| Country to capital | 45 | 0.97 | $\downarrow$ 100% 0.00 | $\downarrow$ 1% 0.96 $\pm$ 0.02 | 1 | 1.00 $\pm$ 0.00 | $\downarrow$ 0% 1.00 $\pm$ 0.00 |
| Country to language | 74 | 0.97 | $\downarrow$ 92% 0.08 | $\uparrow$ 1% 0.98 $\pm$ 0.01 | 0 | | |
| Name copying | 143 | 1.00 | $\downarrow$ 97% 0.03 | $\downarrow$ 1% 0.99 $\pm$ 0.01 | 0 | | |
| Noun to pronoun | 102 | 0.68 | $\downarrow$ 46% 0.37 | $\uparrow$ 13% 0.77 $\pm$ 0.09 | 3 | 0.68 $\pm$ 0.11 | $\downarrow$ 25% 0.51 $\pm$ 0.22 |
| Object to superclass | 67 | 0.78 | $\downarrow$ 53% 0.37 | $\downarrow$ 4% 0.75 $\pm$ 0.02 | 2 | 0.71 $\pm$ 0.03 | $\uparrow$ 1% 0.71 $\pm$ 0.18 |
| Verb to past tense | 150 | 0.43 | $\downarrow$ 94% 0.03 | $\downarrow$ 16% 0.36 $\pm$ 0.07 | 0 | | |
| Word to first letter | 66 | 1.00 | $\downarrow$ 100% 0.00 | $\downarrow$ 0% 1.00 $\pm$ 0.00 | 2 | 0.97 $\pm$ 0.00 | $\downarrow$ 13% 0.85 $\pm$ 0.13 |
| Year to following | 52 | 0.73 | $\downarrow$ 100% 0.00 | $\uparrow$ 5% 0.77 $\pm$ 0.07 | 2 | 0.73 $\pm$ 0.05 | $\downarrow$ 2% 0.71 $\pm$ 0.05 |
Table 22: Accuracy of Pythia 6.9B on tasks for a target relation (TR) versus its accuracy on control (CTR) tasks, when removing heads implementing the relation compared to when removing random heads (RND). Results for RND heads are averaged over 5 experiments.
| Relation name | # heads removed | TR: Base | TR: -TR | TR: -RND | # CTR tasks | CTR: Base | CTR: -TR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Copying | 150 | 0.99 | $\downarrow$ 30% 0.69 | $\downarrow$ 0% 0.99 $\pm$ 0.00 | 0 | | |
| Country to capital | 38 | 0.88 | $\downarrow$ 100% 0.00 | $\downarrow$ 3% 0.86 $\pm$ 0.05 | 1 | 0.71 $\pm$ 0.00 | $\uparrow$ 2% 0.72 $\pm$ 0.00 |
| Country to language | 148 | 0.96 | $\downarrow$ 91% 0.08 | $\downarrow$ 2% 0.94 $\pm$ 0.01 | 0 | | |
| Name copying | 133 | 0.76 | $\downarrow$ 100% 0.00 | $\downarrow$ 15% 0.65 $\pm$ 0.08 | 1 | 0.71 $\pm$ 0.00 | $\downarrow$ 15% 0.60 $\pm$ 0.00 |
| Noun to pronoun | 27 | 0.71 | $\downarrow$ 26% 0.53 | $\downarrow$ 2% 0.69 $\pm$ 0.04 | 4 | 0.72 $\pm$ 0.13 | $\downarrow$ 3% 0.69 $\pm$ 0.16 |
| Object to superclass | 99 | 0.71 | $\downarrow$ 54% 0.32 | $\downarrow$ 1% 0.70 $\pm$ 0.02 | 1 | 0.71 $\pm$ 0.00 | $\downarrow$ 42% 0.41 $\pm$ 0.00 |
| Product by company | 73 | 0.40 | $\downarrow$ 81% 0.08 | $\downarrow$ 0% 0.40 $\pm$ 0.00 | 1 | 0.40 $\pm$ 0.00 | $\uparrow$ 2% 0.41 $\pm$ 0.00 |
| Verb to past tense | 150 | 0.40 | $\downarrow$ 56% 0.18 | $\downarrow$ 4% 0.38 $\pm$ 0.18 | 0 | | |
| Word to first letter | 62 | 0.18 | $\downarrow$ 16% 0.16 | $\downarrow$ 1% 0.18 $\pm$ 0.02 | 1 | 0.04 $\pm$ 0.00 | $\uparrow$ 250% 0.15 $\pm$ 0.00 |
| Year to following | 54 | 0.53 | $\downarrow$ 100% 0.00 | $\downarrow$ 5% 0.50 $\pm$ 0.03 | 1 | 0.71 $\pm$ 0.00 | $\downarrow$ 36% 0.45 $\pm$ 0.00 |
Table 23: Accuracy of GPT-2 xl on tasks for a target relation (TR) versus its accuracy on control (CTR) tasks, when removing heads implementing the relation compared to when removing random heads (RND). Results for RND heads are averaged over 5 experiments.
## Appendix C Generalization to Multi-Token Entities – Additional Results
In § 4.3 we conducted an experiment that evaluates how well the classifications by MAPS generalize to contextualized inputs. Table 24 shows the full results of this experiment. We omit the correlations for GPT-2 xl and the relation word to last letter, as all static scores are very small ( $\leq$ 0.05).
| Model | Relation | # samples | W/o context: single-token | W/o context: multi-token | W/ context: single-token | W/ context: multi-token |
| --- | --- | --- | --- | --- | --- | --- |
| Pythia 12B | Copying | 283 | 0.91 | 0.85 | 0.48 | 0.44 |
| | Country to capital | 30 | 0.94 | 0.93 | 0.85 | 0.87 |
| | Country to language | 70 | 0.94 | 0.90 | 0.88 | 0.83 |
| | Name copying | 83 | 0.87 | 0.76 | 0.38 | 0.33 |
| | Noun to pronoun | 174 | 0.84 | 0.85 | 0.78 | 0.79 |
| | Object to superclass | 91 | 0.88 | 0.89 | 0.84 | 0.86 |
| | Word to first letter | 77 | 0.83 | 0.73 | 0.56 | 0.64 |
| | Word to last letter | 77 | 0.34 | 0.50 | 0.11 | 0.09 |
| | Word to synonym | 71 | 0.92 | 0.86 | 0.61 | 0.58 |
| | Work to location | 65 | 0.77 | 0.72 | 0.74 | 0.70 |
| | Year to following | 65 | 0.90 | 0.84 | 0.64 | 0.60 |
| Pythia 6.9B | Copying | 283 | 0.90 | 0.87 | 0.34 | 0.32 |
| | Country to capital | 30 | 0.95 | 0.93 | 0.89 | 0.89 |
| | Country to language | 70 | 0.92 | 0.88 | 0.85 | 0.83 |
| | Name copying | 83 | 0.94 | 0.92 | 0.47 | 0.47 |
| | Noun to pronoun | 174 | 0.89 | 0.85 | 0.69 | 0.70 |
| | Object to superclass | 91 | 0.88 | 0.90 | 0.86 | 0.82 |
| | Word to first letter | 77 | 0.89 | 0.79 | 0.59 | 0.66 |
| | Word to last letter | 77 | 0.45 | 0.70 | 0.44 | 0.44 |
| | Word to synonym | 71 | 0.94 | 0.91 | 0.62 | 0.62 |
| | Work to location | 65 | 0.79 | 0.76 | 0.71 | 0.75 |
| | Year to following | 65 | 0.94 | 0.87 | 0.72 | 0.67 |
| GPT-2 xl | Copying | 301 | 0.95 | 0.88 | 0.68 | 0.64 |
| | Country to capital | 34 | 0.98 | 0.97 | 0.87 | 0.86 |
| | Country to language | 70 | 0.96 | 0.91 | 0.82 | 0.80 |
| | Name copying | 91 | 0.97 | 0.93 | 0.60 | 0.58 |
| | Noun to pronoun | 154 | 0.97 | 0.95 | 0.47 | 0.56 |
| | Object to superclass | 97 | 0.93 | 0.89 | 0.83 | 0.82 |
| | Word to first letter | 78 | 0.92 | 0.89 | 0.53 | 0.72 |
| | Word to synonym | 79 | 0.95 | 0.89 | 0.79 | 0.76 |
| | Work to location | 67 | 0.89 | 0.80 | 0.74 | 0.76 |
| | Year to following | 90 | 0.95 | 0.82 | 0.74 | 0.63 |
Table 24: Extended results for the multi-token experiment, presented in Section 4.3. All p-values observed are $\leq$ 9.3e-4.
## Appendix D Comparison to Head Operations Identified in Prior Works
#### Name-mover heads in GPT-2 small
Wang et al. (2023) studied the Indirect Object Identification circuit in GPT-2 small. Analyzing the operations of the circuit’s heads, they defined heads that copy names as Name-Mover heads and heads that suppress names as Negative Name-Mover heads. They also classified heads that contribute to these tasks when the original mover heads are ablated as “backup” mover heads.
Using MAPS, we classified all three name-mover heads as implementing the name copying relation, and the two negative name-mover heads as implementing the suppression variant of name copying. We note that a similar analysis was performed by Wang et al. (2023) as well. However, by applying MAPS to all heads in the model, and not just the heads in the discovered circuit, we identified 21 additional name-copying heads, 6 of which were identified by Wang et al. (2023) as “backup” heads. One backup mover head and one backup negative mover head identified by Wang et al. (2023) were not identified by MAPS. Moreover, we find that each of the five identified name-mover heads implements a variety of other relations. In Figure 6(a) we present the name copying relation scores for all heads in GPT-2 small, along with the heads classified by Wang et al. (2023).
We further examined the name copying heads not classified by Wang et al. (2023), to study whether their omission was mostly due to limited involvement in the specific task they studied, or instead a consequence of inaccurate estimations by MAPS. These heads show a strong correlation (0.94, p-value of $2.5e{-7}$) between their static and dynamic name copying relation scores (for the prompt This is a document about $\langle s\rangle$, see § 4.2) when attention is restricted to the name position, suggesting that they indeed copy names when they attend to them. However, the attention weight assigned to the name token may change depending on the context. For example, head 8.11 in GPT-2 small has a static relation score of 0.88. Its dynamic relation score is 0.23 for the prompt This is a document about $\langle s\rangle$, but increases substantially to 0.92 for the prompt “John-> John; Donna-> Donna; $\langle s\rangle$->”. We anticipate that other relation heads would demonstrate the name-copying functionality for other prompts or interventions. Crafting prompts that steer heads to exhibit a specific functionality over another (for example, by adapting MAPS to the $W_{QK}$ matrix) is an interesting direction for future work.
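The attention restriction used above can be sketched as forcing the head's attention distribution to be one-hot on the name position and taking the resulting weighted sum of value vectors. The hook point and exact mechanism here are hypothetical; see § 4.2 for the experiment's definition.

```python
def restrict_attention_to(position, seq_len):
    """Build a one-hot attention distribution focused on `position`.

    Illustrative stand-in for the intervention that restricts a head's
    attention to the name (source token) position.
    """
    return [1.0 if i == position else 0.0 for i in range(seq_len)]

def attend(attention, values):
    """Weighted sum of value vectors under the given attention distribution."""
    d = len(values[0])
    return [sum(a * v[j] for a, v in zip(attention, values)) for j in range(d)]
```

With a one-hot distribution, the head's output reduces to the (projected) value vector of the name token, so any mapping it applies to that token is exposed directly.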
#### Mover heads in GPT-2 medium
Merullo et al. (2024a) studied the Indirect Object Identification (IOI) and Colored Objects circuits in GPT-2 medium. They discovered two sets of attention heads implementing certain functions, both called “Mover” heads. Heads from the first set copy names (in IOI), and heads from the second set copy colors (in the Colored Objects task). The authors also point out a significant overlap between the two sets.
Using MAPS, we classified all mover heads as implementing the name copying relation. We find that many of these heads also implement the relations year to following, country to language, country to capital, and copying. Lastly, we identify 31 other name-copying heads. Notably, in our count we omit heads 14.5, 17.10, 16.0, 18.12, and 21.7, which are labeled as Mover heads in Figure 2 of Merullo et al. (2024a); to the best of our understanding, the paper does not explain why they are classified as such, while describing other heads as more important.
#### Capital heads in GPT-2 medium
Merullo et al. (2024a) also studied a circuit for resolving the capital city of a country (in their Appendix I). MAPS identified all attention heads classified in that study, along with 15 others. In Figure 6(b) we present the name copying and country to capital relation scores for all heads in GPT-2 medium, alongside the heads classified by Merullo et al. (2024a).
<details>
<summary>x11.png Details</summary>

### Visual Description
## Heatmaps: GPT-2 Name-Copying Heads and Suppression Analysis
### Overview
Two heatmaps compare name-copying behavior in GPT-2's attention heads across layers (0-11) and heads (0-11). The left heatmap shows raw name-copying scores, while the right heatmap visualizes suppression effects. Color intensity represents score magnitude, with X marks and dots indicating specific head classifications.
### Components/Axes
- **X-axis**: Layer (0-11)
- **Y-axis**: Head (0-11)
- **Color Scale**:
- Left: Name Copying Score (0.0-1.0)
- Right: Suppression (Negative) Name Copying Score (-0.0-1.0)
- **Legend**:
- `X`: Name-Mover Heads
- `•`: Backup Name-Mover Heads
- Labels:
- "Interp. in the Wild" classifications
- "Name-Mover Heads"
- "Backup Name-Mover Heads"
### Detailed Analysis
#### Left Heatmap (Name-Copying Scores)
- **High-Score Regions**:
- Layer 6-7, Heads 6-7: Yellow cells (score ~0.8-1.0)
- Layer 8-9, Heads 8-9: Yellow cells (score ~0.8-1.0)
- Layer 10, Head 10: Yellow cell (score ~0.8)
- **X Marks** (Name-Mover Heads):
- Layer 8, Head 9
- Layer 9, Head 9
- Layer 10, Head 10
- Layer 11, Head 11
- **Dots** (Backup Name-Mover Heads):
- Layer 7, Head 2
- Layer 8, Head 3
- Layer 9, Head 4
- Layer 10, Head 5
#### Right Heatmap (Suppression Effects)
- **Negative Scores**:
- Layer 9, Head 10: Blue cell (score ~-0.2)
- Layer 10, Head 10: Yellow cell (score ~0.8)
- **X Marks** (Negative Name-Mover Heads):
- Layer 10, Head 10
- **Dots** (Negative Backup Heads):
- Layer 9, Head 10
### Key Observations
1. **Concentration of High Scores**: Name-mover heads cluster in layers 6-11, with strongest activity in layers 8-10 (Heads 8-10).
2. **Suppression Paradox**: The suppression heatmap shows a single high-score cell (Layer 10, Head 10) despite negative labeling, suggesting possible measurement inconsistency.
3. **Backup Head Pattern**: Backup name-mover heads appear diagonally shifted (Head = Layer + 2) in layers 7-10.
4. **Layer-Head Correlation**: Name-mover heads align with layer numbers (Head ≈ Layer) in higher layers.
### Interpretation
The data reveals a hierarchical organization of name-copying mechanisms:
1. **Primary Processing**: Name-mover heads dominate in later layers (8-11), suggesting progressive refinement of name representations.
2. **Redundancy Mechanism**: Backup heads follow a predictable pattern, potentially serving as fail-safes or alternative pathways.
3. **Suppression Anomaly**: The suppression heatmap's high score at Layer 10, Head 10 contradicts its negative labeling, indicating either:
- Measurement error
- Non-linear suppression effects
- Context-dependent activation
This pattern implies GPT-2 employs specialized, layer-dependent mechanisms for name processing, with built-in redundancy and potential for dynamic suppression that may require further investigation.
</details>
(a) Comparison between “Name-Mover” heads discovered by Wang et al. (2023) and heads which implement the name copying relation, discovered by MAPS.
<details>
<summary>x12.png Details</summary>

### Visual Description
## Heatmaps: GPT-2 Medium - Name Copying and Country to Capital Heads
### Overview
The image contains two side-by-side heatmaps comparing attention head activity in GPT-2 medium for two tasks: "Name Copying" (left) and "Country to Capital" (right). Both heatmaps use a color gradient (purple to yellow) to represent scores from 0 to 1, with black crosses marking specific data points. A text box labeled "Circuits Components Reused" appears in both charts, with "Mover Heads" and "Capital Heads" annotations.
---
### Components/Axes
- **X-axis (Layer)**: Labeled "Layer" with values 0–22 (integers).
- **Y-axis (Head)**: Labeled "Head" with values 15–22 (integers).
- **Color Scale**:
- Left heatmap: "Name Copying score" (0–1, purple to yellow).
- Right heatmap: "Country to capital score" (0–1, purple to yellow).
- **Annotations**:
- Text box: "Circuits Components Reused" (positioned near bottom-left of both heatmaps).
- Black crosses:
- Left: Labeled "Mover Heads" (e.g., Layer 14, Head 16; Layer 18, Head 19).
- Right: Labeled "Capital Heads" (e.g., Layer 16, Head 17; Layer 20, Head 21).
---
### Detailed Analysis
#### Left Heatmap (Name Copying)
- **Color Distribution**:
- High scores (yellow/green) cluster in:
- Upper-right quadrant (Layers 16–22, Heads 17–22).
- Lower-left quadrant (Layers 0–8, Heads 15–18).
- Low scores (purple) dominate the central region (Layers 8–16, Heads 15–18).
- **Black Crosses**:
- Located at:
- Layer 14, Head 16 (score ~0.8).
- Layer 18, Head 19 (score ~0.7).
- Layer 20, Head 21 (score ~0.6).
#### Right Heatmap (Country to Capital)
- **Color Distribution**:
- High scores (yellow/green) cluster in:
- Upper-right quadrant (Layers 16–22, Heads 17–22).
- Lower-left quadrant (Layers 0–8, Heads 15–18).
- Low scores (purple) dominate the central region (Layers 8–16, Heads 15–18).
- **Black Crosses**:
- Located at:
- Layer 16, Head 17 (score ~0.8).
- Layer 20, Head 21 (score ~0.7).
- Layer 22, Head 22 (score ~0.6).
---
### Key Observations
1. **Similar Patterns**: Both heatmaps show high scores in the upper-right and lower-left quadrants, suggesting shared mechanisms for these tasks.
2. **Black Crosses**: Marked heads in both charts align with high-scoring regions, indicating these heads are critical for their respective tasks.
3. **Text Box**: The "Circuits Components Reused" annotation implies overlapping functional components across tasks.
---
### Interpretation
- **Task-Specific Heads**: The black crosses ("Mover Heads" and "Capital Heads") likely represent specialized attention mechanisms for name copying and country-capital mapping.
- **Reused Circuits**: The overlapping high-score regions suggest GPT-2 medium repurposes similar attention patterns for structurally analogous tasks (e.g., mapping entities to their attributes).
- **Layer Dependency**: High scores in upper layers (16–22) may reflect hierarchical processing, where later layers refine task-specific representations.
The data highlights how transformer models leverage modular attention mechanisms, with certain heads specializing in specific tasks while sharing broader functional components.
</details>
(b) Comparison between “Name-Mover” and “Capital” heads discovered by Merullo et al. (2024a) and heads which implement the name copying and the country to capital relations discovered in our work.
Figure 6: Comparison between relation heads discovered by MAPS and heads classified in prior works.
## Appendix E Automatic Mapping of Salient Head Operations
### E.1 Automatic Functionality Inference
In § 5.1 we showed that GPT-4o can be utilized to interpret attention heads’ salient operations. Here, we provide additional implementation details and present an evaluation of the interpretation quality.
#### Implementation details
We found that GPT-4o sometimes describes in words that the pattern is unclear, rather than outputting only the word “Unclear” as requested. To handle these cases, we classify every head for which GPT-4o’s response contains the string “clear” as a head where no pattern was detected. We view this as an upper bound on the true ratio of heads with undetected patterns. Also, for some heads, GPT-4o would stop generating a description mid-generation. We hypothesize that this is caused by strings in the salient mappings that are interpreted as special GPT-4o tokens; we resolved the issue by querying GPT-4o again with other random seeds. We note that in several mappings the salient tokens were decoded as an unreadable character. This could be addressed by alternating between the decoding functions of the Transformers package (Wolf et al., 2020).
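The “clear”-substring rule above can be sketched as a one-line classifier (the helper name is ours; lower-casing the response is a small generalization to also catch capitalized variants):

```python
def pattern_detected(gpt4o_response: str) -> bool:
    # Heads whose response contains the string "clear" -- covering the
    # literal "Unclear" as well as verbal hedges like "no clear pattern" --
    # are classified as heads where no pattern was detected.
    return "clear" not in gpt4o_response.lower()
```

As noted, this over-triggers on responses that happen to mention the word “clear”, so it upper-bounds the true ratio of heads with undetected patterns.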
#### Prompt format
We present the prompt used to query GPT-4o in Table 26.
| Head | Salient mappings | GPT-4o description |
| --- | --- | --- |
| Pythia 6.9B 15.3 | osevelt: 1943, 1941, 1940, 1930, 1936 Roosevelt: 1943, 1941, 1936, 1940, 1930 FDR: 1943, 1942, 1941, 1938, 1936 Napole: 1800, 1800, 18,18, 1840 oslov: 1968, 1970, 1960, 1964, 1965 Napoleon: 1800, 1800,18, 18, Napoleon taire: 1840, 1850,1800, Pruss, 1830 afka: 1908, 1912, 1916, 1903, 1911 lantern: 1870, 1880, 1930, Depression, railroad Edison: 1920,1920,1900, 1908, 1880 Confederate: 1863, 1864, 1861, 1862, 1870 1861: 1861, 1863, 1860, 1864, 1870 | The input strings are partial or full names of historical figures as well as years and terms relating to historical events. The mappings associate each input with years or terms relevant to their historical significance, reflecting events or periods related to the input entity. |
| Pythia 6.9B 16.1 | inhib: inhibition, inhib, Inhib, inhibiting, inhibit resil: resilience, resistance,Resp, res,resistance toler: toler, tolerance, tolerate, tolerated, tolerant aggrav: aggrav, exacerb, help, assistance : response, responses, responding, inhibiting destructive: destructive, destruction, destroying salvage: saving, save,saving,save, saves reluct: reluctance, resistance, resisting, resist prophyl: protection, protective, Protection Relief: relief, Relief, relie, relieved, relieve surv: survival, Survival, protection, surviv | The input strings are truncated forms of words, often found in contexts related to protection, resistance, or functionality. The mappings primarily expand truncated forms into semantically related words, often the full form of the input string or related words. |
| Pythia 6.9B 16.11 | weeks: months, month, summer, season, year months: year,year, Year,Year, yearly month: year, Year,year,Year, yearly Month: year, Year,year,Year, years weeks: month, months,month,months, summer months: year, Year,year,Year, yearly Week: months, month,months,month, Month week: month, months,month,months, season month: year, Year,year,Year, yearly overnight: month, week, weeks,acci, months years: decade, decades, aging, century, life | The input strings are related to time periods such as weeks, months, and years. Mappings are connecting input strings to related or hierarchical time concepts, often extending them into longer periods like months to years and weeks to months. |
| Pythia 6.9B 22.13 | periodontal: dental, Dental, dentist, dent, periodontal mandibular: dental, Dental, mandibular, teeth, dentist odontic: dental, Dental, dentist, teeth, tooth psori: skin, Skin,skin, dermat, skins retinal: eye, ophthal, retinal, ocular, eyes echocardiography: cardiac, Card, hearts,Card, Cardi scalp: brain, Brain,brain, brains, scalp hippocampal: hippocampal, Brain, brain,brain, hippocampus ocardi: cardiac, Card, hearts, Heart, heart ACL: knee, knees, thigh, Hip, ankle caries: dental, Dental, dentist, dent, Dent | The input strings seem to relate to various medical and anatomical terms, including parts of the body, diseases, and medical procedures. The mappings primarily associate anatomical or medical terms (input strings) with related medical terminology, such as conditions, associated body parts, or broader medical categories. |
| GPT-2 xl 26.2 | Jedi: lightsaber, Jedi, Kenobi, droid, Skywalker lightsaber: lightsaber, Jedi, Kenobi, Skywalker, Sith galactic: Galactic, galactic, starship, galaxy, droid Starfleet: galactic, Starfleet, starship, Galactic, interstellar Klingon: starship, Starfleet, Klingon, Trek, Starship starship: starship, Galactic, galactic, interstellar, Planetary Skyrim: Skyrim, Magicka, Bethesda, Elven, Hearth Darth: Jedi, lightsaber, Kenobi, Darth, Sith galaxy: Galactic, galactic, starship, galaxy, droid | The input strings are terms related to popular science fiction and fantasy franchises such as Star Wars, Star Trek, Pokémon, Elder Scrolls, Harry Potter, and general fantastical terms. The pattern observed is that each mapping takes an input term from a science fiction or fantasy context and maps it to other terms that are often from the same or related fictional universe. |
Table 25: Example salient operations of attention heads in Pythia 6.9B and GPT-2 xl and their corresponding descriptions by GPT-4o.
| Below you are given a list of input strings, and a list of mappings: each mapping is between an input string and a list of 5 strings. |
| --- |
| Mappings are provided in the format "s: t1, t2, t3, t4, t5" where each of s, t1, t2, t3, t4, t5 is a short string, typically corresponding to a single word or a sub-word. |
| Your goal is to describe shortly and simply the inputs and the function that produces these mappings. To perform the task, look for semantic and textual patterns. |
| For example, input tokens ’water’,’ice’,’freeze’ are water-related, and a mapping (’fire’:’f’) is from a word to its first letter. |
| As a final response, suggest the most clear patterns observed or indicate that no clear pattern is visible (write only the word "Unclear"). |
| Your response should be a valid json, with the following keys: |
| "Reasoning": your reasoning. |
| "Input strings": One sentence describing the input strings (or "Unclear"). |
| "Observed pattern": One sentence describing the most clear patterns observed (or "Unclear"). |
| The input strings are: |
| <input strings> |
| The mappings are: |
| <mapping strings> |
Table 26: The prompt used to query GPT-4o. The salient tokens and mappings (§ 3.2), which are unique for every head, are plugged instead of <input strings> and <mapping strings>.
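For illustration, a head’s salient mappings can be serialized into the `s: t1, t2, t3, t4, t5` format and plugged into the template; a sketch in which the template is abbreviated and the helper names are ours:

```python
# Abbreviated stand-in for the full prompt of Table 26.
PROMPT_TEMPLATE = (
    "Below you are given a list of input strings, and a list of mappings...\n"
    "The input strings are:\n{input_strings}\n"
    "The mappings are:\n{mapping_strings}"
)

def build_prompt(salient_mappings):
    # salient_mappings: dict from an input string s to its top-5 target strings.
    inputs = ", ".join(salient_mappings)
    mappings = "\n".join(
        f"{s}: {', '.join(targets)}" for s, targets in salient_mappings.items()
    )
    return PROMPT_TEMPLATE.format(input_strings=inputs, mapping_strings=mappings)
```

Each head yields one such prompt, since the salient tokens and mappings are unique per head.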
#### Examples
Table 25 provides examples of salient mappings and the patterns described by GPT-4o for attention heads in Pythia 6.9B and GPT-2 xl.
### E.2 Interpretation Quality
To assess the accuracy and plausibility of the model-generated descriptions, we let human annotators — five graduate students who are fluent English speakers — evaluate GPT-4o’s responses in terms of (a) whether GPT-4o correctly recognized the existence of a pattern in the mappings, (b) the quality of the generated descriptions, and (c) the category of the recognized patterns. We conduct this study on a random sample of 138 (13.5%) heads in Pythia 6.9B and 134 (11.2%) heads in GPT-2 xl.
#### Annotation instructions
We present the instructions given to the human annotators in Figures 7, 8.
<details>
<summary>x13.png Details</summary>

### Visual Description
## Text Document: Instructions for Evaluating GPT4 Pattern Inference
### Overview
This document outlines a task to verify the correctness of GPT4 in inferring relations or functions from input strings and their mappings. It includes instructions, a multi-step task, and three assessment questions (Q1–Q3) to evaluate GPT4's pattern recognition and description accuracy.
---
### Components/Axes
1. **Instructions Section**
- Goal: Verify GPT4's ability to infer relations/functions from demonstrations.
- Input Format:
- A list of 30 demonstrations mapping an input string `s` to 5 strings (`t1, t2, t3, t4, t5`).
- `s` is an input string; `t1–t5` are output strings (typically single words or sub-words).
- Output: A description of patterns identified across mappings.
2. **Task Section**
- **Part a**: Analyze input strings and their mappings to identify prominent patterns (semantic, language-related, general, or unnatural).
- Expectations: Most cases will exhibit one pattern or none.
- **Part b**: Answer multi-choice questions to assess alignment between GPT4's description and the actual patterns.
3. **Questions (Q1–Q3)**
- **Q1**: Did GPT4 correctly identify the presence/absence of a pattern?
- Options:
1. No pattern (GPT4 said no pattern).
2. No pattern (GPT4 described a pattern).
3. Observable pattern (GPT4 said no pattern).
4. Observable pattern (GPT4 described a pattern).
- **Q2** (if Q1=4): How precise is GPT4's description?
- Options:
- **Correct and accurate**: Accurate description without errors.
- **Correct but inaccurate**: Correct overall but too general/abstract or overly specific.
- **Partially correct**: Correct to some degree but includes errors.
- **Poor**: No description of the pattern.
- **Q3** (if Q1=3 or 4): Categorize the most prominent pattern.
- Options: Semantic, Language, General, Unnatural.
---
### Content Details
- **Input String Format**: `"s: t1, t2, t3, t4, t5"` (e.g., `s` mapped to 5 strings).
- **Pattern Types**: Semantic, language-related, general, or unnatural.
- **Assessment Criteria**:
- **Correctness**: Alignment between GPT4's description and actual patterns.
- **Precision**: Specificity and accuracy of the description.
---
### Key Observations
1. The task emphasizes identifying **prominent patterns** in mappings, with an expectation that most cases will have one pattern or none.
2. Q1 evaluates GPT4's ability to detect patterns, while Q2 and Q3 assess the quality and categorization of those patterns.
3. Answer options for Q2 and Q3 are conditional on prior responses (e.g., Q2 only applies if Q1=4).
---
### Interpretation
This task is designed to rigorously evaluate GPT4's capacity to:
1. **Infer patterns** from structured input-output mappings.
2. **Describe patterns accurately** (Q2) and **categorize them** (Q3).
3. The conditional nature of Q2 and Q3 suggests a focus on progressive assessment, where deeper analysis depends on initial correctness (Q1).
The emphasis on "prominent patterns" and the inclusion of "unnatural" as a category implies a focus on both natural language understanding and anomaly detection. The structured format of input strings (`s: t1–t5`) ensures consistency in evaluating GPT4's ability to generalize across demonstrations.
</details>
Figure 7: First part of human annotation instructions.
<details>
<summary>x14.png Details</summary>

### Visual Description
## Text Document: Guidelines for Pattern Recognition and Mapping Evaluation
### Overview
The document outlines critical guidelines for evaluating mappings between input and output strings, focusing on pattern recognition, labeling accuracy, and categorization of mappings. It emphasizes consistency in identifying patterns, defining "correct" vs. "inaccurate" descriptions, and classifying mappings into semantic, linguistic, general, or unnatural categories.
### Components/Axes
- **Headings**:
- "Important guidelines" (main section)
- Subheadings for each guideline (e.g., "In Q1...", "In Q2...", etc.)
- **Bullet Points**:
- Structured as hierarchical rules with subpoints (e.g., "o The functionality...").
- **Highlighted Text**:
- "suggesting a pattern related to breaking down or rearranging the input string" (bolded and underlined).
### Detailed Analysis
1. **Guideline 1 (Q1)**:
- Defines "GPT4 indicates there is no pattern" if the response includes "Unclear" or explicitly states no pattern.
2. **Guideline 2**:
- Excludes mappings with suggestive commentary about hidden motivations unless explicitly explained.
3. **Guideline 3**:
- Requires patterns to appear in **at least 20 mappings** to be considered recognizable.
- Subpoints:
- Functionality behind mappings must be visible (e.g., word-to-first-letter mappings).
- Destination strings must be highly related (e.g., all source strings mapped to numbers).
4. **Guideline 4**:
- Mutual patterns **only in source strings** are not considered recognizable.
5. **Guideline 5 (Q2)**:
- Labels descriptions as "correct" (accurate, no incorrect parts) or "inaccurate" (too general/specific).
6. **Guideline 6 (Q3)**:
- Mapping categories:
- **Semantic**: Associations requiring knowledge (e.g., countries → capitals).
- **Language**: Requires linguistic expertise (e.g., word → prefix mappings).
- **General**: Applies broadly (e.g., string → itself).
- **Unnatural**: Does not encode a recognizable function (e.g., non-linguistic relationships).
7. **Guideline 7**:
- Instructs using the "Notes" column for additional context or issues.
### Key Observations
- **Pattern Recognition Threshold**: A pattern must appear in ≥20 mappings to be deemed recognizable.
- **Labeling Criteria**: "Correct" descriptions are precise; "inaccurate" ones are overly broad or narrow.
- **Mapping Categories**: Four distinct types are defined, with "Unnatural" explicitly excluding non-linguistic relationships.
- **Highlighted Pattern**: Breaking down/rearranging input strings is flagged as a critical pattern to identify.
### Interpretation
The guidelines aim to standardize evaluations of mappings in tasks like natural language processing or data transformation. By requiring patterns to appear frequently (≥20 mappings) and defining strict criteria for "correctness," the document ensures consistency in identifying meaningful relationships. The categorization of mappings (semantic, linguistic, etc.) helps contextualize their utility, while the emphasis on visible functionality and destination string relationships prevents overfitting to trivial or ambiguous patterns. The highlighted text underscores the importance of structural transformations (e.g., splitting/rearranging strings) as a key pattern to prioritize.
</details>
Figure 8: Second part of human annotation instructions.
#### Human study results
The overall results per question and the distribution of responses across models and layers are presented in Figure 9 (Question 1), Figure 10 (Question 2), and Figure 11 (Question 3). In 80% of the cases, GPT-4o correctly identifies the presence or absence of a pattern. In most of the failure cases (87%), the model described a pattern that is not visible in the mappings. We also find that lower layers exhibit fewer patterns, and that these are harder to parse: they show higher rates of unnatural patterns and inaccurate descriptions, in agreement with our findings in § 4. When a pattern is observable, GPT-4o almost always identifies it: for 95% of heads with observable patterns, GPT-4o described a pattern, and $<$ 2% of the descriptions were labeled “poor”. Overall, this analysis shows that the quality of our automatic annotation pipeline is reasonable and demonstrates promising trends in automatically interpreting attention heads with MAPS. We leave further improvements to the pipeline for future work. In particular, addressing model hallucinations could involve aggregating multiple model responses to estimate the model’s confidence (Kuhn et al., 2023), using intrinsic classifiers for hallucinations (e.g., Azaria and Mitchell, 2023; Yu et al., 2024), employing a strong LLM to judge whether the generated pattern matches the mappings (Gur-Arieh et al., 2025), using an NLI model (Bohnet et al., 2022), or applying similarity-based heuristics.
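The headline Question 1 numbers reduce to simple counts over the four annotation outcomes; a sketch with hypothetical label keys of our own choosing:

```python
from collections import Counter

def q1_rates(labels):
    # labels: per-head Q1 outcomes, encoded with hypothetical keys for the
    # four options (pattern present / absent, crossed with whether GPT-4o
    # described a pattern).
    c = Counter(labels)
    n = sum(c.values())
    correct = c["pattern_described"] + c["no_pattern_none"]
    wrong = c["no_pattern_described"] + c["pattern_none"]
    accuracy = correct / n
    # Share of failures that are false positives: a pattern was described
    # even though none is visible in the mappings.
    false_positive_share = c["no_pattern_described"] / wrong if wrong else 0.0
    return accuracy, false_positive_share
```

On the annotated sample, this yields the 80% overall accuracy and the 87% false-positive share among failures reported above.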
<details>
<summary>x15.png Details</summary>

### Visual Description
## Pie Chart: Q1 - Did GPT4 correctly identify the presence or lack of a pattern?
### Overview
The chart evaluates GPT4's accuracy in identifying patterns, segmented into four categories based on pattern presence and GPT4's responses. The largest segment (46.3%) represents correct pattern identification, followed by correct non-pattern identification (33.5%). Errors include false positives (17.6%) and false negatives (2.6%).
### Components/Axes
- **Legend**: Positioned at the top, with four color-coded categories:
1. **Dark Green**: "There is an observable pattern, and GPT4 described a pattern." (46.3%)
2. **Light Green**: "There is no observable pattern, and GPT4 indicated there is no pattern." (33.5%)
3. **Red**: "There is no observable pattern, but GPT4 described a pattern." (17.6%)
4. **Dark Red**: "There is an observable pattern, and GPT4 indicated there is no pattern." (2.6%)
- **Pie Chart**: Circular visualization with segments proportional to percentages. Segments are ordered clockwise starting with dark green (largest), followed by light green, red, and dark red (smallest).
### Detailed Analysis
- **Correct Pattern Identification**: Dark green segment (46.3%) dominates, indicating GPT4 accurately detected patterns in nearly half of cases.
- **Correct Non-Pattern Identification**: Light green segment (33.5%) shows GPT4 correctly identified absence of patterns in over a third of cases.
- **False Positives**: Red segment (17.6%) highlights instances where GPT4 incorrectly described patterns where none existed.
- **False Negatives**: Dark red segment (2.6%) represents cases where GPT4 failed to detect existing patterns.
### Key Observations
- **Majority Accuracy**: Combined correct identifications (79.8%) suggest GPT4 performs well overall.
- **Error Distribution**: False positives (17.6%) outnumber false negatives (2.6%), indicating a bias toward over-identifying patterns.
- **Smallest Segment**: Dark red (2.6%) is visually distinct as the smallest slice, emphasizing rare failures to detect patterns.
### Interpretation
The data suggests GPT4 has strong pattern recognition capabilities but exhibits a tendency to over-identify patterns in ambiguous cases (false positives). The low false negative rate (2.6%) implies it is more reliable at confirming patterns when they exist. However, the 17.6% false positive rate raises questions about its threshold for pattern detection—potentially prioritizing sensitivity over specificity. This could be critical in applications where false alarms are costly (e.g., medical diagnostics). The chart underscores the need for context-aware adjustments to GPT4's pattern recognition parameters.
</details>
(a) Human annotation distribution for Question 1.
<details>
<summary>x16.png Details</summary>

### Visual Description
## Bar Chart: GPT-2 xl Head Distribution by Layer Bin
### Overview
The chart visualizes the distribution of attention heads across four layer bins ([0,12], [12,24], [24,36], [36,48]) in the GPT-2 xl model. Each bar is segmented into three categories: "Heads" (green), "Heads (merged)" (dark green), and "Heads (unmerged)" (red). The y-axis represents the number of heads, with percentages indicating proportional distribution within each bin.
### Components/Axes
- **X-axis**: Layer bins labeled as [0,12], [12,24], [24,36], [36,48]
- **Y-axis**: "# heads" (0–35)
- **Legend**:
- Green: "Heads" (66.7% in [0,12], decreasing to 26.5% in [36,48])
- Dark green: "Heads (merged)" (21.2% in [0,12], peaking at 79.4% in [24,36])
- Red: "Heads (unmerged)" (12.1% in [0,12], increasing to 17.6% in [36,48])
### Detailed Analysis
1. **Layer Bin [0,12]**:
- Heads: 66.7% (21.2 heads)
- Heads (merged): 21.2% (6.7 heads)
- Heads (unmerged): 12.1% (3.8 heads)
2. **Layer Bin [12,24]**:
- Heads: 48.5% (15.5 heads)
- Heads (merged): 36.4% (11.6 heads)
- Heads (unmerged): 15.1% (4.8 heads)
3. **Layer Bin [24,36]**:
- Heads: 8.8% (2.8 heads)
- Heads (merged): 79.4% (25.4 heads)
- Heads (unmerged): 2.9% (0.9 heads)
4. **Layer Bin [36,48]**:
- Heads: 26.5% (8.5 heads)
- Heads (merged): 55.9% (18.0 heads)
- Heads (unmerged): 17.6% (5.6 heads)
### Key Observations
- **Merged Heads Dominance**: The dark green segment ("Heads (merged)") peaks at 79.4% in the [24,36] bin, suggesting a concentration of merged heads in mid-layers.
- **Unmerged Heads Increase**: The red segment ("Heads (unmerged)") grows steadily from 12.1% to 17.6%, indicating rising fragmentation in later layers.
- **Heads Reduction**: The green segment ("Heads") declines from 66.7% to 8.8%, showing fewer total heads in higher layers.
### Interpretation
The data reveals a structural shift in attention mechanisms across GPT-2 xl layers:
1. **Early Layers ([0,12])**: High proportion of individual heads (66.7%) with moderate merging (21.2%) and fragmentation (12.1%).
2. **Mid-Layers ([24,36])**: Dominance of merged heads (79.4%) suggests optimized attention patterns, potentially improving computational efficiency.
3. **Late Layers ([36,48])**: Resurgence of unmerged heads (17.6%) alongside reduced total heads (26.5%) may indicate architectural adjustments to balance expressiveness and efficiency.
This pattern aligns with transformer design principles, where mid-layers often consolidate attention for hierarchical processing, while later layers reintroduce granularity for fine-grained tasks. The absence of unmerged heads in the [24,36] bin highlights a critical transition point in the model's attention architecture.
</details>
(b) Human annotation distribution for Question 1 across layers (GPT-2 xl).
<details>
<summary>x17.png Details</summary>

### Visual Description
## Bar Chart: Pythia 6.9B Layer Head Distribution
### Overview
The chart visualizes the distribution of attention heads across four layer ranges in the Pythia 6.9B model. Each bar is segmented into three colors (green, dark green, red) representing different proportions of attention heads, with percentages labeled on each segment.
### Components/Axes
- **X-axis (Layer)**: Categorized into four ranges: [0, 8], [8, 16], [16, 24], [24, 32].
- **Y-axis (# heads)**: Scaled from 0 to 40, representing the number of attention heads.
- **Legend**: Located on the right, associating colors with percentages:
- Green: 31.4%
- Dark Green: 40.5%
- Red: 14.3%
### Detailed Analysis
1. **Layer [0, 8]**:
- Green (48.6%): Largest segment, occupying nearly half the bar.
- Dark Green (31.4%): Middle segment.
- Red (14.3%): Smallest segment at the top.
2. **Layer [8, 16]**:
- Green (21.6%): Smallest segment.
- Dark Green (40.5%): Middle segment.
- Red (35.1%): Largest segment, dominating the bar.
3. **Layer [16, 24]**:
- Green (12.0%): Smallest segment.
- Dark Green (68.0%): Largest segment, occupying over two-thirds of the bar.
- Red (20.0%): Middle segment.
4. **Layer [24, 32]**:
- Green (31.7%): Middle segment.
- Dark Green (43.9%): Largest segment.
- Red (19.5%): Smallest segment.
### Key Observations
- **Dominance of Dark Green**: The dark green segment (40.5% in legend) is consistently the largest in [8, 16] and [16, 24] layers, peaking at 68.0% in [16, 24].
- **Green Segment Decline**: The green segment (31.4% in legend) decreases sharply from 48.6% in [0, 8] to 12.0% in [16, 24], then rebounds to 31.7% in [24, 32].
- **Red Segment Peaks**: The red segment (14.3% in legend) is largest in [8, 16] (35.1%) but drops to 19.5% in [24, 32].
### Interpretation
The chart reveals a layered hierarchy in attention head distribution:
1. **Early Layers ([0, 8])**: Green segments dominate, suggesting simpler or foundational processing mechanisms.
2. **Middle Layers ([8, 16] and [16, 24])**: Dark green segments (likely representing core attention mechanisms) become dominant, with [16, 24] showing the highest concentration (68.0%). This may indicate increased complexity or specialization in these layers.
3. **Late Layers ([24, 32])**: A balance between dark green and green segments, with red segments shrinking, possibly reflecting refined or optimized processing.
**Notable Anomalies**:
- The legend's percentages (31.4%, 40.5%, 14.3%) do not align with the actual segment values in the bars. For example, the green segment in [0, 8] is 48.6%, far exceeding the legend's 31.4%. This discrepancy suggests the legend may represent average values across layers rather than per-layer distributions.
- The red segment's peak in [8, 16] (35.1%) contrasts with its decline in later layers, hinting at a shift in attention mechanisms as depth increases.
This distribution implies that Pythia 6.9B allocates attention heads differently across layers, with middle layers prioritizing complex interactions (dark green) and early/late layers balancing simpler and refined mechanisms.
</details>
(c) Human annotation distribution for Question 1 across layers (Pythia 6.9B).
Figure 9: Quality of GPT-4o interpretation (§ E) - Human annotation distribution for Question 1.
<details>
<summary>x18.png Details</summary>

### Visual Description
## Pie Chart: How accurate is the description of GPT4?
### Overview
The chart visualizes responses to a survey question about the accuracy of GPT4's description. It is conditional on the respondent having answered "4" to a prior question (Q1). The data is divided into four categories, with percentages indicating distribution across response options.
### Components/Axes
- **Legend**: Positioned on the right side of the chart.
- **Green**: "Correct and accurate" (66.4%)
- **Yellow**: "Partially correct" (16.8%)
- **Orange**: "Correct but inaccurate" (15.2%)
- **Red**: "Poor" (1.6%)
- **Title**: "Q2 (answer only if your answer to Q1 is 4) How accurate is the description of GPT4?"
- **Data Segments**: Four proportional slices representing response categories.
### Detailed Analysis
1. **Correct and accurate** (Green, 66.4%):
- Largest segment, occupying ~2/3 of the chart.
- Positioned at the top-left quadrant.
2. **Partially correct** (Yellow, 16.8%):
- Second-largest segment, adjacent to the green slice.
3. **Correct but inaccurate** (Orange, 15.2%):
- Third-largest segment, adjacent to the yellow slice.
4. **Poor** (Red, 1.6%):
- Smallest segment, occupying ~1.6% of the chart.
- Positioned at the bottom-right edge.
### Key Observations
- **Dominance of "Correct and accurate"**: Roughly two-thirds of respondents (66.4%) rated GPT4's description as fully accurate.
- **Minority "Poor" responses**: Only 1.6% found the description unsatisfactory.
- **Middle-ground responses**: 32% (16.8% + 15.2%) indicated partial or mixed accuracy.
- **Color coding**: Green (positive), red (negative), yellow/orange (neutral/mixed).
### Interpretation
The data indicates a strong consensus that the description is largely accurate: 66.4% of responses endorse it fully, only 1.6% rate it as poor, and 32% give partial or mixed ratings. Because Question 2 was shown only to annotators who answered "4" to Question 1, the chart reflects that subset of responses, and the sizable middle-ground share suggests room for improvement in the descriptions' precision.
</details>
(a) Human annotation distribution for Question 2.
<details>
<summary>x19.png Details</summary>

### Visual Description
## Stacked Bar Chart: Human Annotations for Question 2 Across Layers (GPT-2 xl)
### Overview
A stacked bar chart showing, for each quartile of GPT-2 xl's layers, how human annotators answered Question 2 (accuracy of the generated description). The y-axis counts annotated heads (0 to 25); the x-axis groups layers into ranges [0,12), [12,24), [24,36), and [36,48). The colors presumably correspond to the Question 2 answer categories from the accompanying pie chart (green: "Correct and accurate", yellow: "Partially correct", orange: "Correct but inaccurate", red: "Poor").
### Per-Layer Percentages
1. **[0,12)**: green 28.6%, yellow 42.9%, orange 28.6%, red 0.0%
2. **[12,24)**: green 58.3%, yellow 25.0%, orange 8.3%, red 8.3%
3. **[24,36)**: green 74.1%, yellow 7.4%, orange 18.5%, red 0.0%
4. **[36,48)**: green 78.9%, yellow 10.5%, orange 10.5%, red 0.0%
### Key Observations
- The share of "Correct and accurate" annotations grows monotonically with depth, from 28.6% to 78.9%.
- "Partially correct" declines from 42.9% to 10.5% across layers.
- "Poor" appears only in the [12,24) range (8.3%).
</details>
(b) Human annotation distribution for Question 2 across layers (GPT-2 xl).
<details>
<summary>x20.png Details</summary>

### Visual Description
## Stacked Bar Chart: Human Annotations for Question 2 Across Layers (Pythia 6.9B)
### Overview
A stacked bar chart showing, for each quarter of Pythia 6.9B's layers, how human annotators answered Question 2. The y-axis counts annotated heads (0 to 15); the x-axis groups layers into ranges [0,8), [8,16), [16,24), and [24,32). The colors presumably correspond to the Question 2 answer categories (green: "Correct and accurate", yellow: "Partially correct", orange: "Correct but inaccurate", red: "Poor").
### Per-Layer Percentages
1. **[0,8)**: green 45.5%, orange 27.3%, yellow 27.3%, red 0.0%
2. **[8,16)**: green 46.7%, orange 33.3%, yellow 20.0%, red 0.0%
3. **[16,24)**: green 75.0%, orange 12.5%, yellow 6.2%, red 6.2%
4. **[24,32)**: green 83.3%, yellow 11.1%, orange 5.6%, red 0.0%
### Key Observations
- The "Correct and accurate" share increases monotonically with depth, from 45.5% to 83.3%.
- "Correct but inaccurate" declines from 27.3% to 5.6% across layers.
- "Poor" appears only in the [16,24) range (6.2%).
</details>
(c) Human annotation distribution for Question 2 across layers (Pythia 6.9B).
Figure 10: Quality of GPT-4o interpretation (§ E) - Human annotation distribution for Question 2.
<details>
<summary>x21.png Details</summary>

### Visual Description
## Pie Chart: Q3 Categorization of Prominent Patterns
### Overview
The image is a pie chart titled "Q3" with a conditional instruction: "(answer only if your answer to Q1 is 3 or 4)". It categorizes responses to the question "How would you categorise the most prominent pattern?" into four segments with approximate percentages.
### Components/Axes
- **Title**: "Q3" (top center).
- **Legend**: Located on the left side, with four color-coded categories:
- **1: Semantic** (green, 31.1%).
- **2: Language** (yellow, 21.2%).
- **3: General** (blue, 28.8%).
- **4: Unnatural** (gray, 18.9%).
- **Segments**: Four labeled wedges representing percentages of total responses.
- **Conditional Note**: Text below the title states: "(answer only if your answer to Q1 is 3 or 4)".
### Detailed Analysis
- **Segment Values**:
- **Semantic (1)**: 31.1% (largest segment, green).
- **General (3)**: 28.8% (second-largest, blue).
- **Language (2)**: 21.2% (yellow).
- **Unnatural (4)**: 18.9% (smallest, gray).
- **Color Consistency**: Legend colors match segment colors exactly (e.g., green for Semantic, blue for General).
- **Textual Elements**:
- Labels for each segment are placed adjacent to their respective wedges.
- Percentages are rounded to one decimal place (e.g., 31.1%, 28.8%).
### Key Observations
1. **Dominant Category**: "Semantic" (31.1%) is the most prominent pattern.
2. **Secondary Categories**: "General" (28.8%) and "Language" (21.2%) follow, with "Unnatural" (18.9%) being the least frequent.
3. **Conditional Context**: The chart’s relevance depends on prior responses to Q1 (only valid if Q1 was 3 or 4).
### Interpretation
- "Semantic" patterns (31.1%) are identified most often, with "General" (28.8%) close behind; their proximity may reflect overlap or ambiguity in how respondents distinguished the two categories.
- "Language" (21.2%) and "Unnatural" (18.9%) account for the remaining responses, with unnatural patterns being the rarest.
- The conditional instruction means this categorization covers only heads whose pattern annotators rated 3 or 4 on Question 1.
*Note: Percentages sum to 100% (31.1 + 28.8 + 21.2 + 18.9 = 100.0).*
</details>
(a) Human annotation distribution for Question 3.
<details>
<summary>x22.png Details</summary>

### Visual Description
## Stacked Bar Chart: Human Annotations for Question 3 Across Layers (GPT-2 xl)
### Overview
A stacked bar chart showing, for each quartile of GPT-2 xl's layers, how annotators categorized the most prominent pattern (Question 3). The y-axis counts annotated heads (0 to 25); the x-axis groups layers into ranges [0,12), [12,24), [24,36), and [36,48). The colors presumably match the Question 3 categories from the accompanying pie chart (green: "Semantic", yellow: "Language", blue: "General", gray: "Unnatural").
### Per-Layer Percentages
1. **[0,12)**: gray 50.0%, yellow 33.3%, blue 16.7%
2. **[12,24)**: yellow 53.8%, gray 30.8%, blue 15.4%
3. **[24,36)**: yellow 46.4%, blue 28.6%, green 21.4%
4. **[36,48)**: green 47.4%, yellow 31.6%, blue 10.5%, gray 10.5%
### Key Observations
- "Unnatural" (gray) annotations drop from 50.0% in the first quartile to 10.5% in the last.
- "Semantic" (green) annotations appear only in the two deepest quartiles, peaking at 47.4% in [36,48).
- "Language" (yellow) peaks in [12,24) (53.8%) and remains prominent throughout.
- More heads received a Question 3 annotation in deeper layers.
</details>
(b) Human annotation distribution for Question 3 across layers (GPT-2 xl).
<details>
<summary>x23.png Details</summary>

### Visual Description
## Stacked Bar Chart: Human Annotations for Question 3 Across Layers (Pythia 6.9B)
### Overview
A stacked bar chart showing, for each quarter of Pythia 6.9B's layers, how annotators categorized the most prominent pattern (Question 3). The y-axis counts annotated heads (0 to 20); the x-axis groups layers into ranges [0,8), [8,16), [16,24), and [24,32). The colors presumably match the Question 3 categories (green: "Semantic", yellow: "Language", blue: "General", gray: "Unnatural").
### Per-Layer Percentages
1. **[0,8)**: gray 46.2%, green 30.8%, blue 15.4%, yellow 7.7%
2. **[8,16)**: green 31.2%, yellow 31.2%, blue 25.0%, gray 12.5%
3. **[16,24)**: green 41.2%, blue 23.5%, yellow 23.5%, gray 11.8%
4. **[24,32)**: blue 35.0%, green 25.0%, gray 25.0%, yellow 15.0%
### Key Observations
- "Unnatural" (gray) is largest in the first layer range (46.2%) and shrinks to 25.0% by the last.
- "Semantic" (green) peaks in [16,24) (41.2%), while "General" (blue) grows steadily from 15.4% to 35.0%.
- "Language" (yellow) peaks in [8,16) (31.2%) and falls to 15.0% in the final range.
</details>
(c) Human annotation distribution for Question 3 across layers (Pythia 6.9B).
Figure 11: Quality of GPT-4o interpretation (§ E) - Human annotation distribution for Question 3.
## Appendix F Analysis of Global Versus Specific Functionality
We observe that the mappings in $M$ provide a broad view of a head’s functionality, particularly how global the head’s operation is. For example, a head that maps any token to an end-of-sequence token has global functionality, whereas heads that map countries to their capitals, colors to their complementary pairs, and so on, implement specific operations. In this section, we use properties of $M$ to analyze how global the functionalities of attention heads in LLMs are.
#### Analysis
We estimate how global the functionality of a given head is using two metrics: input skewness, which captures how skewed the head’s operation is towards specific inputs, and output space size, which estimates the number of tokens the head tends to output. For input skewness, we obtain the saliency scores $\sigma_{t}(W_{VO})\;\forall t\in\mathcal{V}$ according to the head (see § 3.2), and calculate the skewness of their distribution. For output space size, we compute for every token $s\in\mathcal{V}$ the highest-scoring token $t$ it is mapped to according to $M$: $t=\arg\max(\mathbf{m}_{s})$. We then define the output space size as the fraction of the vocabulary covered by the set of unique output tokens. For instance, we expect the output space of a head that only maps strings to their first letters to be a small set of letter tokens. As with the normalization of the saliency scores by the embedding norms applied in § 3.2, here, when calculating $M$, we normalize the unembeddings ($U$’s columns).
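The two metrics can be sketched in a few lines. The following is a minimal illustration, not the paper's exact pipeline: `E`, `W_VO`, and `U` stand for the embedding matrix, a head's OV matrix, and the unembedding matrix, and `saliency` for the per-token scores $\sigma_{t}(W_{VO})$, all assumed to be given as NumPy arrays:

```python
import numpy as np

def vocab_mapping(E: np.ndarray, W_VO: np.ndarray, U: np.ndarray) -> np.ndarray:
    """Project the head's OV circuit to vocabulary space to obtain M.
    U's columns are L2-normalized, mirroring the normalization above."""
    U = U / np.linalg.norm(U, axis=0, keepdims=True)
    return E @ W_VO @ U  # shape: |V| x |V|

def input_skewness(saliency: np.ndarray) -> float:
    """Fisher-Pearson skewness of the per-token saliency distribution:
    higher values mean the head's operation targets fewer inputs."""
    s = np.asarray(saliency, dtype=float)
    d = s - s.mean()
    return float((d ** 3).mean() / s.std() ** 3)

def output_space_size(M: np.ndarray) -> float:
    """For every source token s, take its highest-scoring output token
    t = argmax(m_s); return the fraction of the vocabulary covered by
    the set of unique output tokens."""
    top_outputs = M.argmax(axis=1)  # one output token per source token
    return len(np.unique(top_outputs)) / M.shape[1]
```

Under this sketch, a head that maps every token to a single output yields an output space size of $1/|\mathcal{V}|$, while an identity-like mapping approaches 1.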
<details>
<summary>x24.png Details</summary>

### Visual Description
## Line Graphs: Input Skewness and Output Space Size Across Layers for GPT-2 XL and Pythia 6.9b
### Overview
The image contains two side-by-side line graphs comparing the **input skewness** (blue) and **output space size** (orange) across layers for two language models: **GPT-2 XL** (left) and **Pythia 6.9b** (right). Each graph includes annotations for "Global head" (horizontal blue line) and "Specific head" (horizontal orange line), with shaded regions indicating variability.
---
### Components/Axes
1. **X-Axis (Layers)**:
- **GPT-2 XL**: Labeled "layer," ranging from 0 to 40.
- **Pythia 6.9b**: Labeled "layer," ranging from 0 to 30.
2. **Y-Axes**:
- **Left Y-Axis (Input Skewness)**: Ranges from 0.0 to 2.0 for both graphs.
- **Right Y-Axis (Output Space Size)**: Ranges from 0.0 to 0.4 for both graphs.
3. **Legends**:
- **Blue**: Input skewness.
- **Orange**: Output space size.
4. **Annotations**:
- "Global head" (horizontal blue line at ~0.5 input skewness).
- "Specific head" (horizontal orange line at ~0.1 output space size).
---
### Detailed Analysis
#### GPT-2 XL (Left Graph)
- **Input Skewness (Blue)**:
- Starts at ~1.5 (layer 0), drops sharply to ~0.5 by layer 10, then fluctuates between ~0.5 and ~1.0 up to layer 40.
- Notable peaks at layers 5 (~1.2) and 30 (~1.1).
- **Output Space Size (Orange)**:
- Begins at ~0.1 (layer 0), rises to ~0.3 by layer 10, then declines to ~0.1 by layer 40.
- Peaks at layer 15 (~0.35).
#### Pythia 6.9b (Right Graph)
- **Input Skewness (Blue)**:
- Starts at ~1.5 (layer 0), drops sharply to ~0.2 by layer 5, then stabilizes between ~0.1 and ~0.3 up to layer 30.
- Sharp decline in early layers (layers 0–5).
- **Output Space Size (Orange)**:
- Begins at ~0.3 (layer 0), drops to ~0.1 by layer 5, then fluctuates between ~0.1 and ~0.2 up to layer 30.
- Sharp decline in early layers (layers 0–5).
---
### Key Observations
1. **Early Layer Dominance**:
- Both models show **high input skewness** and **output space size** in early layers (layers 0–10), suggesting these layers handle more variable or complex information.
2. **Convergence**:
- By later layers (layers 20–30/40), input skewness and output space size stabilize near the "Global head" and "Specific head" thresholds, indicating reduced variability.
3. **Model Differences**:
- **GPT-2 XL** retains higher input skewness in later layers compared to **Pythia 6.9b**, which stabilizes earlier.
- **Pythia 6.9b** has a more pronounced drop in output space size in early layers.
---
### Interpretation
- **Input skewness**: The rise and sharp decline in early layers suggests that heads in shallow layers concentrate their salient effect on a narrower set of inputs, while heads in later layers act more uniformly across the vocabulary (lower skewness corresponds to a larger input space).
- **Output space size**: The general decline across layers, with a slight increase near the end, indicates that head outputs converge to smaller token subsets with depth.
- **Baselines**: The "specific head" and "global head" lines mark the output space sizes of a head mapping the whole vocabulary to a single token and to a large fixed subset, respectively, so each layer's heads can be placed between fully specific and fully global behavior.
---
### Conclusion
Early layers exhibit more input-specific behavior, while deeper layers map a larger set of inputs onto smaller output sets. The "Global head" and "Specific head" annotations provide reference points for reading layer-specific behavior off the curves.
</details>
Figure 12: Input skewness versus output space size for all attention heads per layer in Pythia 6.9B and GPT-2 xl, compared to baseline heads of global and specific functionalities. Lower input skewness indicates a larger input space.
Additionally, we present two baselines. The first, dubbed “specific head”, represents the output space size of a head that maps the entire vocabulary to a single token (e.g., a head that always outputs the end-of-sequence token). The second, called “global head”, represents the output space size of a head that maps the entire vocabulary to capitalized tokens with leading spaces, a subset covering 25% of the vocabulary of GPT-2 xl and 16% of the vocabulary of Pythia 6.9B. An example of such a “global head” is one that maps every English word (or sub-word) to its capitalized version, and all other tokens to a single specific token.
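A rough numeric sketch of the two baselines follows; the vocabulary size and subset fraction are illustrative assumptions based on the GPT-2 xl figures above, and the "global head" outputs are simulated with random choices rather than a real tokenizer:

```python
import numpy as np

vocab_size = 50_257  # GPT-2 xl vocabulary size (illustrative)

# "Specific head": every token maps to one fixed token (e.g. the EOS
# token), so its output space covers exactly 1/|V| of the vocabulary.
specific_outputs = np.zeros(vocab_size, dtype=int)
specific_baseline = len(np.unique(specific_outputs)) / vocab_size

# "Global head": every token maps into the subset of capitalized tokens
# with leading spaces (~25% of the GPT-2 xl vocabulary), so its output
# space size is bounded by that fraction.
subset = np.arange(int(0.25 * vocab_size))
rng = np.random.default_rng(0)
global_outputs = rng.choice(subset, size=vocab_size)
global_baseline = len(np.unique(global_outputs)) / vocab_size

print(specific_baseline)  # ~2e-5
print(global_baseline)    # close to, and at most, 0.25
```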
#### Results
Figure 12 shows the input skewness and output space sizes for all heads in Pythia 6.9B and GPT-2 xl. In both models, the input skewness rises and then sharply decreases in the early layers, after which it stabilizes. This implies that attention heads in shallower layers concentrate their salient effect on a more specific set of inputs than heads in later layers. In contrast, the output space size generally decreases across layers, with a slight increase in the final layers, suggesting that head outputs converge to smaller token subsets with depth. Taken together, we hypothesize that early-layer heads demonstrate their functionality on fewer inputs than deeper heads, which in turn map a larger set of possible inputs to a small set of outputs.
## Appendix G Resources and Packages
In our experiments, we used models and code from the transformers Wolf et al. (2020) and TransformerLens Nanda and Bloom (2022) packages, as well as nanoGPT (https://github.com/karpathy/nanoGPT). All experiments were conducted on a single A100 80GB or H100 80GB GPU, except those studying Llama-3.1 70B, which used nodes with 8 such GPUs.