# NeuroLogic: From Neural Representations to Interpretable Logic Rules
**Authors**: Chuqin Geng, Anqi Xing, Li Zhang, Ziyu Zhao, Yuhe Jiang, Xujie Si
Abstract
Rule-based explanation methods offer rigorous and globally interpretable insights into neural network behavior. However, existing approaches are mostly limited to small fully connected networks and depend on costly layer-wise rule extraction and substitution processes. These limitations hinder their generalization to more complex architectures such as Transformers. Moreover, existing methods produce shallow, decision-tree-like rules that fail to capture rich, high-level abstractions in complex domains like computer vision and natural language processing. To address these challenges, we propose NeuroLogic, a novel framework that extracts interpretable logical rules directly from deep neural networks. Unlike previous methods, NeuroLogic can construct logic rules over hidden predicates derived from neural representations at any chosen layer, in contrast to costly layer-wise extraction and rewriting. This flexibility enables broader architectural compatibility and improved scalability. Furthermore, NeuroLogic supports richer logical constructs and can incorporate human prior knowledge to ground hidden predicates back to the input space, enhancing interpretability. We validate NeuroLogic on Transformer-based sentiment analysis, demonstrating its ability to extract meaningful, interpretable logic rules and provide deeper insights, a task where existing methods struggle to scale.
Introduction
In recent years, deep neural networks have made remarkable progress across various domains, including computer vision (Krizhevsky, Sutskever, and Hinton 2012; He et al. 2016) and natural language processing (Sutskever, Vinyals, and Le 2014). As AI advances, the demand for interpretability has become increasingly urgent, especially in high-stakes and regulated domains where understanding model decisions is critical (Lipton 2016; Doshi-Velez and Kim 2017; Guidotti et al. 2018).
Among various types of explanations for deep neural networks, such as attributions (Selvaraju et al. 2017) and hidden semantics (Bau et al. 2017), rule-based methods that generate global logic rules over input sets, rather than local rules for individual samples, offer stronger interpretability and are highly preferred (Pedreschi et al. 2019). However, most existing rule-based explanation methods (Cohen 1995; Zilke, Loza Mencía, and Janssen 2016; Zarlenga, Shams, and Jamnik 2021a; Hemker, Shams, and Jamnik 2023) suffer from several limitations. We highlight three key issues, as illustrated in Figure 1: (1) they mostly rely on layer-by-layer rule extraction and rewriting to derive final rules, which introduces scalability limitations; (2) they are primarily tailored to fully connected networks (FCNs) and fail to generalize to modern deep neural network (DNN) architectures such as convolutional neural networks and Transformers; (3) the rules they produce are often shallow and decision-tree-like, lacking the ability to capture high-level abstractions, which limits their effectiveness in complex domains.
To this end, we introduce NeuroLogic, a modern rule-based framework designed to address architectural dependence, limited scalability, and the shallow nature of existing decision rules. Our approach is inspired by Neural Activation Patterns (NAPs) (Geng et al. 2023, 2024), which are subsets of neurons that consistently activate for inputs belonging to the same class. Specifically, for any given layer, we identify salient neurons for each class and determine their optimal activation thresholds, converting these neurons into hidden predicates. These predicates represent high-level features learned by the model, where a true value indicates the presence of the corresponding feature in a given input. Based on these predicates, NeuroLogic constructs first-order logic (FOL) rules in a fully data-driven manner to approximate the internal behavior of neural networks.
<details>
<summary>x1.png Details</summary>

### Visual Description
## Diagram: Comparison of Neural Network Architectures and Rule Extraction
### Overview
The image presents a comparison of different neural network architectures (FCNs, CNNs, Transformers) and their ability to be represented by shallow decision-tree-like rules. It highlights the limitations of extracting simple rules from modern architectures.
### Components/Axes
* **Top-Left:** FCNs (Fully Connected Networks)
* Input: X
* Output: ŷ
* Process: Multiple layers of interconnected nodes.
* "ExtractRules" labels indicate rule extraction at different layers.
* **Top-Right:** Shallow Decision-Tree-Like Rules
* Logic:
```
IF f0 > 0.5 AND f1 <= 0.49
OR f0 <= 0.50 AND f1 > 0.49
THEN 1 ELSE 0
```
* "Rewrite" arrow connects FCNs to the rules.
* **Bottom-Left:** CNNs (Convolutional Neural Networks)
* Input: 3@224x224
* Process: Convolution, Max pool, Dense layers.
* Intermediate Layers: 8@64x64, 24@16x16, 1x128
* **Bottom-Center:** Transformers
* Input: Nx
* Process: Input Emb, Multi-Head Attention, MLP (Multi-Layer Perceptron).
* **Bottom-Right:** Image and Token Inputs
* Image Inputs: (224, 224, 3) - An image of a cat is shown.
* Token Inputs: (50000, 768)
* Example Tokens and Embedded Token Vectors:
* '<s>' -> [0.1150, -0.1438, 0.0555, ...]
* '<pad>' -> [0.1149, -0.1438, 0.0547, ...]
* '</s>' -> [0.0010, -0.0922, 0.1025, ...]
* '<unk>' -> [0.1149, -0.1439, 0.0548, ...]
* 'the' -> [-0.0340, 0.0068, -0.0844, ...]
* 'to' -> [-0.0439, -0.0201, 0.0189, ...]
* **Annotations:**
* Red "X" with text: "(a) Cannot apply to modern architectures" - positioned between FCNs and CNNs/Transformers.
* Red "X" with text: "(b) Cannot describe high-level abstraction in complex domains" - positioned between Shallow Decision-Tree-Like Rules and Image/Token Inputs.
### Detailed Analysis or Content Details
* **FCNs:** The diagram shows a standard fully connected neural network with multiple layers. The "ExtractRules" labels suggest an attempt to extract rules from the network's internal representations.
* **Shallow Decision-Tree-Like Rules:** The rules are simple logical statements based on two features (f0 and f1).
* **CNNs:** The CNN architecture is depicted with convolutional, max pooling, and dense layers, processing an input image of size 3@224x224.
* **Transformers:** The Transformer architecture includes input embedding, multi-head attention, and an MLP, processing an input of size Nx.
* **Image/Token Inputs:** The image input is a 224x224 pixel image with 3 color channels. The token inputs consist of 50000 tokens, each represented by a 768-dimensional vector.
### Key Observations
* FCNs are presented as a model from which rules can be extracted.
* Modern architectures like CNNs and Transformers are difficult to represent with simple rules.
* The diagram highlights the trade-off between model complexity and interpretability.
### Interpretation
The diagram illustrates the challenge of interpreting complex neural networks. While FCNs can be approximated by simple decision rules, modern architectures like CNNs and Transformers are too complex for such representations. This suggests that while these models achieve high performance, understanding their decision-making process is difficult. The annotations emphasize the limitations of applying simple rule extraction techniques to modern architectures and the inability of such rules to capture high-level abstractions in complex domains.
</details>
Figure 1: Existing rule-based methods fail to generalize to modern DNNs and their associated complex input domains.
The remaining challenge is to ground these hidden predicates in the original input space to ensure interpretability. Unlike existing approaches that can only produce shallow, decision-tree-like rules, NeuroLogic features a flexible design that supports a wide range of interpretable surrogate methods, such as program synthesis, to learn rules with richer and more expressive structures. It can also incorporate human prior knowledge as high-level abstractions of complex input domains to enable more efficient and meaningful grounding. To demonstrate its capabilities, we apply NeuroLogic to extract logic rules from Transformer-based sentiment analysis, a setting where traditional rule-extraction methods struggle to scale. To the best of our knowledge, this is the first approach capable of extracting global logic rules from modern, complex architectures such as Transformers. We believe NeuroLogic represents a promising step toward opening the black box of deep neural networks. Our contributions are summarized as follows:
- We propose NeuroLogic, a novel framework for extracting interpretable global logic rules from deep neural networks. By abandoning the costly layer-wise rule extraction and substitution paradigm, NeuroLogic achieves greater scalability and broad architectural compatibility.
- The decoupled design of NeuroLogic enables flexible grounding, allowing the generation of more abstract and interpretable rules that transcend the limitations of shallow, decision-tree-based explanations.
- Experimental results on small-scale benchmarks demonstrate that NeuroLogic produces more compact rules with higher efficiency than state-of-the-art methods, while maintaining strong fidelity and predictive accuracy.
- We further showcase the practical feasibility of NeuroLogic in extracting meaningful logic rules and providing insights into the internal mechanisms of Transformers, an area where existing approaches struggle to scale effectively.
Preliminaries
Neural Networks for Classification Tasks
We consider a general deep neural network $N$ used for classification. Let $z^{l}_{i}(x)$ denote the value of the $i$ -th neuron at layer $l$ for a given input $x$ . We do not assume any specific form for the transformation between layers, that is, the mapping from $z^{l}$ to $z^{l+1}$ can be arbitrary. This abstraction allows our analysis to be broadly applied across architectures.
The network $N$ as a whole functions as
$$
\displaystyle\mathbf{F}^{<N>}:X\to\mathbb{R}^{|C|}, \tag{1}
$$
mapping an input $x \in X$ from the dataset to a score vector over the class set $C$. The predicted class is then given by
$$
\displaystyle\hat{c}=\arg\max_{c\in C}\mathbf{F}^{<N>}_{c}(x). \tag{2}
$$
First-Order Logic
First-Order Logic (FOL) is a formal language for stating interpretable rules about objects and their relations/attributes. It extends propositional logic by introducing quantifiers such as:
- Universal quantifier ($\forall$): meaning "for all", e.g., $\forall x\,p(x)$ means $p(x)$ holds for every $x$.
- Existential quantifier ($\exists$): meaning "there exists", e.g., $\exists x\,p(x)$ means there exists at least one $x$ for which $p(x)$ holds.
We focus on FOL rules in Disjunctive Normal Form (DNF), which are disjunctions (ORs) of conjunctions (ANDs) of predicates.
- A predicate is a simple condition or property on the input, e.g., $p_{i}(x)$ .
- A clause is a conjunction (AND) of predicates, such as $p_{1}(x)\land p_{2}(x)\land\neg p_{3}(x)$ .
A DNF rule looks like a logical OR of multiple clauses:
$$
\displaystyle\forall x,\quad\left(p_{1}(x)\land p_{2}(x)\right)\lor\left(p_{3}(x)\land\neg p_{4}(x)\right)\Rightarrow\textit{Label}(x)=c, \tag{3}
$$
meaning that for every input $x$ , if any clause is satisfied, it is assigned to class $c$ . This structured form makes the rules easy to interpret and understand.
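To make this semantics concrete, a DNF rule such as Eq. (3) can be evaluated mechanically as an OR of ANDs. The predicates below are hypothetical threshold tests invented purely for illustration, not ones extracted from a real network:

```python
# Hypothetical predicates over a 2-feature input x (illustration only).
def p1(x): return x[0] > 0.5
def p2(x): return x[1] > 0.5
def p3(x): return x[0] <= 0.5
def p4(x): return x[1] <= 0.5

def dnf_holds(x):
    """True iff any clause (conjunction of literals) is satisfied:
    (p1 ∧ p2) ∨ (p3 ∧ ¬p4), mirroring the shape of Eq. (3)."""
    return (p1(x) and p2(x)) or (p3(x) and not p4(x))

print(dnf_holds([0.9, 0.8]))  # first clause fires -> True
print(dnf_holds([0.2, 0.1]))  # neither clause fires -> False
```

An input is assigned to class $c$ as soon as a single clause is satisfied, which is what makes each clause individually readable as a sufficient condition.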
The NeuroLogic Framework
In this section, we introduce NeuroLogic, a novel approach for extracting interpretable logic rules from DNNs. For clarity, we divide the NeuroLogic framework into three subtasks. An overview is illustrated in Figure 2.
<details>
<summary>x2.png Details</summary>

### Visual Description
## Diagram: Predicate Identification and Grounding
### Overview
The image is a diagram illustrating a three-step process for identifying hidden predicates, determining logical rules, and grounding those predicates to an input space. It appears to be related to explainable AI, specifically how to extract logical rules from a trained neural network.
### Components/Axes
* **Panel 1: (1) Identify Hidden Predicates**
* A neural network diagram with input *X* and output *ƶ*.
* A series of colored circles (predicates) below the network, some red and some blue.
* Text: "Find the predicates for each class via purity metric"
* **Panel 2: (2) Determine the Logical Rules**
* A table showing the relationship between input variables *x<sub>i</sub>* and predicates *p<sub>j</sub>*.
* Arrow pointing downwards from the table to logical rules.
* Table Rows: *x<sub>1</sub>*, *x<sub>2</sub>*, *x<sub>3</sub>* (highlighted in light red), *x<sub>4</sub>*, *x<sub>5</sub>*, *x<sub>6</sub>* (highlighted in light blue)
* Table Columns: *p<sub>1</sub>*, *p<sub>2</sub>*, *p<sub>3</sub>*, *p<sub>4</sub>*
* Logical Rules:
* ∀x, (p<sub>1</sub> ∧ p<sub>3</sub>) ∨ (¬p<sub>1</sub> ∧ p<sub>2</sub> ∧ p<sub>3</sub> ∧ ¬p<sub>4</sub>) ⇒ C<sub>1</sub>
* ∀x, (p<sub>1</sub> ∧ p<sub>2</sub> ∧ ¬p<sub>3</sub> ∧ p<sub>4</sub>) ∨ (p<sub>1</sub> ∧ p<sub>4</sub>) ⇒ C<sub>2</sub>
* **Panel 3: (3) Ground Predicates to Input Space**
* A set containing predicates *p<sub>3</sub>*, ..., *p<sub>4</sub>*.
* Text: "Predicate Set P"
* Text: "Learn mapping from P to X via interpretable surrogate model"
* A trapezoidal shape representing "Input Domain X" with two colored regions inside (red and blue).
### Detailed Analysis or ### Content Details
**Panel 1: Identify Hidden Predicates**
* The neural network takes an input *X* and produces an output *ŷ*.
* The colored circles represent predicates. Red circles likely correspond to one class, and blue circles to another.
* The text indicates that a "purity metric" is used to find these predicates.
**Panel 2: Determine the Logical Rules**
* The table shows the relationship between input variables *x<sub>i</sub>* and predicates *p<sub>j</sub>*.
* *x<sub>1</sub>*: p<sub>1</sub>=1, p<sub>2</sub>=0, p<sub>3</sub>=1, p<sub>4</sub>=0
* *x<sub>2</sub>*: p<sub>1</sub>=0, p<sub>2</sub>=1, p<sub>3</sub>=1, p<sub>4</sub>=0
* *x<sub>3</sub>*: p<sub>1</sub>=0, p<sub>2</sub>=1, p<sub>3</sub>=1, p<sub>4</sub>=0
* *x<sub>4</sub>*: p<sub>1</sub>=1, p<sub>2</sub>=1, p<sub>3</sub>=0, p<sub>4</sub>=1
* *x<sub>5</sub>*: p<sub>1</sub>=1, p<sub>2</sub>=0, p<sub>3</sub>=0, p<sub>4</sub>=1
* *x<sub>6</sub>*: p<sub>1</sub>=1, p<sub>2</sub>=0, p<sub>3</sub>=0, p<sub>4</sub>=1
* The logical rules are derived from the table and represent conditions for classes C<sub>1</sub> and C<sub>2</sub>.
**Panel 3: Ground Predicates to Input Space**
* The predicates are mapped to the input domain *X*.
* An "interpretable surrogate model" is used to learn this mapping.
* The colored regions within the input domain likely represent areas where specific predicates are active.
### Key Observations
* The diagram outlines a process for extracting logical rules from a neural network.
* Predicates are identified and then grounded to the input space.
* The goal is to create an interpretable model that explains the network's behavior.
### Interpretation
The diagram illustrates a method for making neural networks more transparent and understandable. By identifying hidden predicates and grounding them to the input space, it becomes possible to extract logical rules that govern the network's decision-making process. This approach can be valuable for debugging, verifying, and explaining AI systems. The use of a "purity metric" and an "interpretable surrogate model" suggests a focus on both accuracy and interpretability. The logical rules provide a human-readable representation of the network's internal logic.
</details>
Figure 2: Overview of the NeuroLogic Framework.
Identifying Hidden Predicates
For a given layer $l$, we aim to identify a subset of neurons that are highly indicative of a particular class $c \in C$. These neurons form what are known as Neural Activation Patterns (NAPs) (Geng et al. 2023, 2024). A neuron is considered part of the NAP for class $c$ if its activation is consistently higher for inputs from class $c$ compared to inputs from other classes. This behavior suggests that such neurons encode class-specific latent features at layer $l$, as discussed in (Geng et al. 2024).
To identify the NAP for a specific class $c$ , we evaluate how selectively each neuron responds to class $c$ versus other classes. Since each neuronâs activation is a scalar value, we can assess its discriminative power by learning a threshold $t$ . This threshold separates inputs from class $c$ and those from other classes based on activation values.
Formally, we consider a neuron to support class $c$ if its activation $z_{j}^{l}(x)$ for input $x$ satisfies $z_{j}^{l}(x) \geq t$. If this condition holds, we classify $x$ as belonging to class $c$; otherwise, it is classified as not belonging to $c$. To quantify the effectiveness of a threshold $t$, we use the purity metric, defined as:
$$
\text{Purity}(t)=\frac{\left|\left\{x\in X_{c}:z_{j}^{l}(x)\geq t\right\}\right|}{|X_{c}|}+\frac{\left|\left\{x\in X_{\neg c}:z_{j}^{l}(x)<t\right\}\right|}{|X_{\neg c}|} \tag{4}
$$
Here, $X_{c}$ denotes the set of inputs from class $c$, while $X_{\neg c}$ denotes inputs from all other classes. A high purity value means the neuron cleanly separates class $c$ from others, whereas a low value suggests ambiguous or overlapping activation responses. We conduct a linear search over candidate thresholds to determine the optimal $t$, and record the purity achieved at that threshold as the neuron's final purity.
In our implementation, for each neuron, we compute its purity with respect to each class to determine its class preference. Then, for each class, we rank the neurons by that purity and keep the top- $k$ . These selected neurons are referred to as hidden predicates, denoted as $P$ , as they capture discriminative features that are highly specific to each class within the input space.
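The selection step above can be sketched in a few lines of NumPy, under the assumption that activations are given as matrices with one column per neuron; the function names (`purity`, `best_threshold`, `top_k_predicates`) are ours, not the paper's:

```python
import numpy as np

def purity(acts_c, acts_not_c, t):
    """Purity(t) from Eq. (4): fraction of class-c activations at or above t,
    plus fraction of non-class-c activations below t (maximum value 2.0)."""
    return float(np.mean(acts_c >= t) + np.mean(acts_not_c < t))

def best_threshold(acts_c, acts_not_c):
    """Linear search over candidate thresholds; here we simply try every
    observed activation value as a candidate."""
    candidates = np.concatenate([acts_c, acts_not_c])
    scores = [purity(acts_c, acts_not_c, t) for t in candidates]
    i = int(np.argmax(scores))
    return candidates[i], scores[i]

def top_k_predicates(Z_c, Z_not_c, k):
    """Rank neurons (columns) by their best purity for class c and keep the
    top-k. Returns a list of (neuron_index, threshold, purity) triples."""
    results = []
    for j in range(Z_c.shape[1]):
        t, s = best_threshold(Z_c[:, j], Z_not_c[:, j])
        results.append((j, t, s))
    results.sort(key=lambda r: -r[2])
    return results[:k]
```

Each retained triple directly yields a hidden predicate: neuron $j$ with threshold $t$ becomes the test $z_{j}^{l}(x) \geq t$.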
Determining the Logical Rules
Formally, a predicate $p_{j}$ at layer $l$, together with its corresponding threshold $t_{j}$, is defined as $p_{j}(x):=\mathbb{I}[z_{j}^{l}(x)\geq t_{j}]$. In this context, a True (1) assignment indicates the presence of the specific latent feature of class $c$ for input $x$, while a False (0) assignment signifies its absence. Intuitively, the more predicates that fire, the stronger the evidence that $x$ belongs to class $c$. However, this raises the question: to what extent should we believe that $x$ belongs to class $c$ based on the pattern of predicate activations?
We address this question using a data-driven approach. Let $P_{c}^{(l)}=\{p_{1},...,p_{m}\}$ be the $m$ predicates retained for class $c$. Evaluating $P_{c}^{(l)}$ on every class example $x \in X_{c}$ gives a multiset of binary vectors $p(x) \in \{0,1\}^{m}$. Each distinct vector can be treated as a clause, and the union of all clauses forms a DNF rule:
$$
\forall x,\bigl(\bigvee_{v\in\mathcal{V}_{c}}\bigl(\bigwedge_{i:v_{i}=1}p_{i}(x)\wedge\bigwedge_{i:v_{i}=0}\neg p_{i}(x)\bigr)\bigr)\implies Label(x)=c
$$
where $\mathcal{V}_{c}$ is the set of unique activation vectors for $X_{c}$ . For instance, suppose we have four predicates $p_{1}(x),p_{2}(x),p_{3}(x),p_{4}(x)$ (we will omit $x$ when the context is clear), and five distinct inputs yield the following patterns: $(1,1,1,1)$ , $(1,1,1,0)$ , $(1,1,1,0)$ , $(1,1,0,1)$ , and $(1,1,0,1)$ . We can then construct a disjunctive normal form (DNF) expression to derive a rule:
$$
\displaystyle\forall x\quad( \displaystyle(p_{1}\land p_{2}\land p_{3}\land p_{4})\lor(p_{1}\land p_{2}\land p_{3}\land\neg p_{4}) \displaystyle\lor(p_{1}\land p_{2}\land\neg p_{3}\land p_{4}))\Rightarrow\textit{Label}(x)=c. \tag{5}
$$
In practice, these predicates behave as soft switches: their purity is imperfect, sometimes firing on inputs from $X_{\neg c}$. Consequently, the resulting DNF is best viewed as a comprehensive description and may include many predicates that are less relevant to the model's actual classification behavior.
To address this, we apply a decision tree learner to distill a more compact and representative version of the rule, which will serve as the (discriminative) rule-based model.
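The clause-construction step can be sketched as follows; the five patterns below reproduce the running example leading to Eq. (5), and the decision-tree distillation step is omitted:

```python
def dnf_clauses(vectors, names):
    """One clause per distinct predicate-activation vector (the set V_c):
    a 1 contributes the literal p_i, a 0 contributes ¬p_i."""
    clauses = []
    for v in sorted(set(map(tuple, vectors)), reverse=True):
        lits = [n if b else "¬" + n for n, b in zip(names, v)]
        clauses.append("(" + " ∧ ".join(lits) + ")")
    return clauses

# The five observed patterns from the running example; duplicates collapse,
# leaving the three distinct clauses of Eq. (5).
V = [(1, 1, 1, 1), (1, 1, 1, 0), (1, 1, 1, 0), (1, 1, 0, 1), (1, 1, 0, 1)]
rule = " ∨ ".join(dnf_clauses(V, ["p1", "p2", "p3", "p4"]))
print(rule, "⇒ Label(x) = c")
```

Deduplicating the activation vectors is what turns a multiset of observations into a finite, readable rule.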
Grounding Predicates to the Input Feature Space
The final step is to ground these hidden predicates in the input space to make them human-interpretable. We adopt the definition of interpretability as the ability to explain model decisions in terms understandable to humans (Doshi-Velez and Kim 2017). Since "understandability" is task and audience dependent, NeuroLogic is designed in a decoupled fashion where any grounding method can be plugged in, allowing injection of domain knowledge.
This design also allows users to incorporate domain-specific knowledge where appropriate. Within the scope of this work, we present simple approaches for grounding predicates in simple input domains, as well as in the complex input domain of large vocabulary spaces for Transformers.
Exploring general grounding strategies for diverse tasks and models remains a challenge, and we believe it requires collective efforts from the whole research community.
Grounding Predicates in Simple Input Domains
For deep neural networks (DNNs) applied to tasks with simple input domains (e.g., tabular data), we aim to ground each predicate $p_{j}$ directly in the raw input space. This enables more transparent and interpretable logic rules.
We reframe the grounding task as a supervised classification problem. For a given predicate $p_{j}$ , we collect input examples where the predicate is activated versus deactivated, and then learn a symbolic function that approximates this distinction.
Formally, for a target class $c$ and predicate $p_{j}$ , we define the activation set and deactivation set, respectively, as
$$
D_{1}^{(j)}=\{x\in X_{c}\mid p_{j}(x)=1\},\qquad D_{0}^{(j)}=\{x\in X_{c}\mid p_{j}(x)=0\}. \tag{6}
$$
These are combined into a labeled dataset
$$
\displaystyle D^{(j)}=\{(x,y)\mid x\in D_{1}^{(j)}\cup D_{0}^{(j)},\ y=p_{j}(x)\}. \tag{8}
$$
Then, to obtain expressive, compositional and human-readable logic rules as explanations, we employ program synthesis to learn a symbolic expression $\phi_{j}$ from a domain-specific language (DSL) $\mathcal{L}$ . Unlike traditional decision-tree-like rules, the symbolic language $\mathcal{L}$ is richer: a composable grammar over input features that supports not only logical and comparison operators but also linear combinations and nonlinear functions. Specifically, the language includes:
- Atomic abstractions formed by applying threshold comparisons to linear or nonlinear functions of the input features, for example,
$$
\displaystyle a:=f(x)\leq\theta\quad\text{or}\quad f(x)>\theta, \tag{9}
$$
where $f(x)$ can be any linear or nonlinear transformation, such as polynomials, trigonometric functions, or other basis expansions.
- Logical operators to combine these atomic abstractions into complex expressions:
$$
\displaystyle\phi::=a\mid\neg\phi\mid\phi_{1}\land\phi_{2}\mid\phi_{1}\lor\phi_{2}. \tag{10}
$$
The synthesis objective is to find an expression $\phi_{j} \in \mathcal{L}$ that minimizes a combination of classification loss and complexity, formally:
$$
\displaystyle\phi_{j}\in\arg\min_{\phi\in\mathcal{L}}\left[\mathcal{L}_{\mathrm{cls}}(\phi;D^{(j)})+\lambda\cdot\Omega(\phi)\right], \tag{11}
$$
where $\mathcal{L}_{\mathrm{cls}}$ measures how well $\phi$ approximates the predicate activations in $D^{(j)}$ , $\Omega(\phi)$ penalizes the complexity of the expression (e.g., number of literals or tree depth), and $\lambda$ balances the trade-off between accuracy and interpretability.
This grounding approach also supports decision-tree-like rules, which are commonly used in existing methods. In this context, such rules can be viewed as a special case of the above atomic abstractions, where $f(x)$ corresponds to individual features.
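To make the objective in Eq. (11) concrete, here is a deliberately naive enumerative synthesizer over a tiny hypothetical DSL: two raw features, a few fixed transforms, and at most two atoms per expression. Everything here (the `TRANSFORMS` table, the two-atom limit) is our own illustrative assumption; a practical synthesis engine would search far more cleverly:

```python
import itertools

# Hypothetical mini-DSL: atoms are threshold tests on feature transforms,
# expressions are single atoms or ∧/∨ combinations of two atoms.
TRANSFORMS = {
    "x0": lambda x: x[0],
    "x1": lambda x: x[1],
    "x0+x1": lambda x: x[0] + x[1],
    "x0*x1": lambda x: x[0] * x[1],
}

def atoms(D):
    """Enumerate atomic abstractions f(x) <= θ / f(x) > θ, taking candidate
    thresholds from the observed values of f on the dataset D."""
    for name, f in TRANSFORMS.items():
        for theta in sorted({round(f(x), 4) for x, _ in D}):
            yield (f"{name} <= {theta}", lambda x, f=f, t=theta: f(x) <= t)
            yield (f"{name} > {theta}", lambda x, f=f, t=theta: f(x) > t)

def loss(expr, D, size, lam=0.01):
    """Classification loss plus λ-weighted complexity penalty, as in Eq. (11)."""
    errs = sum(expr(x) != bool(y) for x, y in D)
    return errs / len(D) + lam * size

def synthesize(D):
    """Return the best single atom or pairwise combination under the objective."""
    pool = list(atoms(D))
    best = min(((loss(e, D, 1), s) for s, e in pool), key=lambda r: r[0])
    for (s1, e1), (s2, e2) in itertools.combinations(pool, 2):
        for op, sym in ((lambda x: e1(x) and e2(x), "∧"),
                        (lambda x: e1(x) or e2(x), "∨")):
            cand = (loss(op, D, 2), f"({s1}) {sym} ({s2})")
            if cand[0] < best[0]:
                best = cand
    return best[1]
```

Because the penalty $\lambda\cdot\Omega(\phi)$ charges every extra atom, a perfect single-atom explanation is preferred over an equally accurate two-atom one, which is exactly the accuracy-interpretability trade-off the objective encodes.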
A simpler alternative is to leverage off-the-shelf decision tree algorithms: we train a decision tree classifier $f_{j}^{\mathrm{DT}}$ such that
$$
\displaystyle f_{j}^{\mathrm{DT}}(x)\approx p_{j}(x),\quad\forall x\in X_{c}. \tag{12}
$$
The resulting decision tree provides a simpler rule-based approximation of predicate activations, effectively grounding $p_{j}$ in the input space in an interpretable manner.
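As a library-free stand-in for the off-the-shelf decision tree in Eq. (12), the sketch below fits a depth-1 tree (a stump) to the predicate labels; `fit_stump` and the toy grounding set are our own illustration, not the paper's implementation:

```python
def fit_stump(X, y):
    """Pick the (feature, threshold, polarity) minimizing disagreement with
    the predicate labels y ∈ {0, 1}; polarity True predicts 1 when x[f] > t."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for pol in (True, False):
                preds = [int((x[f] > t) == pol) for x in X]
                errs = sum(p != yi for p, yi in zip(preds, y))
                if best is None or errs < best[0]:
                    best = (errs, f, t, pol)
    _, f, t, pol = best
    return lambda x: int((x[f] > t) == pol), (f, t, pol)

# Toy grounding set D^(j): the predicate fires when the first feature is large.
X = [[0.9, 0.2], [0.7, 0.8], [0.2, 0.9], [0.1, 0.1]]
y = [1, 1, 0, 0]
stump, (f, t, pol) = fit_stump(X, y)
print(f, t, pol)  # splits on feature 0 at threshold 0.2
```

A deeper tree (or any tree library) slots in the same way: the only requirement is a classifier whose structure can be read off as rules over input features.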
Grounding predicates in the vocabulary space
The input space in NLP domains (i.e., vocabulary spaces) is typically extremely large, making it difficult to ground rules onto raw feature vectors. In such domains, it is more effective to incorporate human prior knowledge, such as words, tokens, or linguistic structures, which are more semantically meaningful and ultimately guide the predictions made by Transformer-based models (Tenney, Das, and Pavlick 2019a). In light of this, we define a set of atomic abstractions over the vocabulary space. Each atomic abstraction corresponds to a template specifying keywords along with their associated lexical structures. To ground the learned hidden predicates to this domain knowledge, we leverage causal inference (Zarlenga, Shams, and Jamnik 2021b; Vig et al. 2020).
Formally, let $\mathcal{A}=\{a_{1},a_{2},...,a_{k}\}$ be the set of atomic abstractions derived from domain knowledge (e.g., keywords or lexical patterns), let $p_{j}$ be a learned hidden predicate extracted from the model's internal representations, and let $x$ be an input instance (e.g., a text sample).
We define a causal intervention $do(\neg a_{i})$ as flipping the truth value of atomic abstraction $a_{i}$ in the input $x$ (e.g., masking the keyword associated with $a_{i}$ ). The grounding procedure tests whether flipping $a_{i}$ changes the truth of the hidden predicate $p_{j}$ :
$$
\displaystyle\textit{If}\quad p_{j}(x)=\textit{True}\quad\textit{and}\quad p_{j}\bigl(do(\neg a_{i})(x)\bigr)=\textit{False}, \tag{13}
$$
then we infer a causal dependence of $p_{j}$ on $a_{i}$ , grounding $p_{j}$ to the atomic abstraction $a_{i}$ .
By iterating over all atomic abstractions $a_{i} \in \mathcal{A}$, we establish a mapping:
$$
\displaystyle G:p_{j}\mapsto\{a_{i}\in\mathcal{A}\mid\textit{flipping }a_{i}\textit{ negates }p_{j}\}, \tag{14}
$$
which grounds the hidden predicate $p_{j}$ in terms of semantically meaningful domain knowledge.
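The intervention loop of Eqs. (13) and (14) can be sketched as follows. The keyword table, the masking-based `intervene`, and the toy `predicate` are all illustrative assumptions; in the real pipeline $p_j$ thresholds a neuron activation of the Transformer rather than testing the raw text:

```python
# Hypothetical atomic abstractions: each maps to a keyword template.
ABSTRACTIONS = {"a_great": "great", "a_terrible": "terrible"}

def intervene(text, keyword):
    """do(¬a_i): flip the abstraction by masking its keyword in the input."""
    return text.replace(keyword, "<mask>")

def ground(predicate, x, abstractions):
    """Map p_j to the abstractions whose flipping negates it (Eq. 14):
    keep a_i iff p_j(x) is True but p_j(do(¬a_i)(x)) is False."""
    return {
        name for name, kw in abstractions.items()
        if predicate(x) and not predicate(intervene(x, kw))
    }

# Toy stand-in for p_j: "fires when a positive sentiment word is present".
predicate = lambda s: "great" in s
print(ground(predicate, "a great movie", ABSTRACTIONS))  # {'a_great'}
```

Iterating this test over $\mathcal{A}$ yields the grounding map $G$: each hidden predicate is explained by exactly the keywords whose removal causally switches it off.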
Evaluation
| Dataset | Method | Accuracy (%) | Fidelity (%) | Runtime (s) | Number of Clauses | Avg Clause Length |
| --- | --- | --- | --- | --- | --- | --- |
| XOR | C5.0 | 52.6 ± 0.2 | 53.0 ± 0.2 | 0.1 ± 0.0 | 1 ± 0 | 1 ± 0 |
| | ECLAIRE | 91.8 ± 1.0 | 91.4 ± 2.4 | 6.2 ± 0.4 | 87.0 ± 16.2 | 263.0 ± 49.1 |
| | CGXPLAIN | 96.7 ± 1.7 | 92.4 ± 1.1 | 9.1 ± 1.8 | 3.6 ± 1.8 | 10.4 ± 7.2 |
| | NeuroLogic | 89.6 ± 1.9 | 90.3 ± 1.6 | 1.2 ± 0.3 | 10.8 ± 3.5 | 6.8 ± 2.0 |
| MB-ER | C5.0 | 92.7 ± 0.9 | 89.3 ± 1.0 | 20.3 ± 0.8 | 21.8 ± 3 | 72.4 ± 14.5 |
| | ECLAIRE | 94.1 ± 1.6 | 94.7 ± 0.2 | 123.5 ± 36.8 | 48.3 ± 15.3 | 137.6 ± 24.7 |
| | CGXPLAIN | 92.4 ± 0.7 | 94.7 ± 0.9 | 462.7 ± 34.0 | 5.9 ± 1.1 | 21.8 ± 3.4 |
| | NeuroLogic | 92.8 ± 0.9 | 92.7 ± 1.4 | 6.0 ± 1.2 | 5.8 ± 1.0 | 3.7 ± 0.2 |
| MB-HIST | C5.0 | 87.9 ± 0.9 | 89.3 ± 1.0 | 16.06 ± 0.64 | 12.8 ± 3.1 | 35.2 ± 11.3 |
| | ECLAIRE | 88.9 ± 2.3 | 89.4 ± 1.8 | 174.5 ± 73.2 | 30.0 ± 12.4 | 74.7 ± 15.7 |
| | CGXPLAIN | 89.4 ± 2.5 | 89.1 ± 3.6 | 285.3 ± 10.3 | 5.2 ± 1.9 | 27.8 ± 7.6 |
| | NeuroLogic | 90.7 ± 0.9 | 92.0 ± 3.5 | 2.3 ± 0.2 | 3.6 ± 1.6 | 2.7 ± 0.3 |
| MAGIC | C5.0 | 82.8 ± 0.9 | 85.4 ± 2.5 | 1.9 ± 0.1 | 57.8 ± 4.5 | 208.7 ± 37.6 |
| | ECLAIRE | 84.6 ± 0.5 | 87.4 ± 1.2 | 240.0 ± 35.9 | 392.2 ± 73.9 | 1513.4 ± 317.8 |
| | CGXPLAIN | 84.4 ± 0.8 | 91.5 ± 1.3 | 44.6 ± 2.9 | 7.4 ± 0.8 | 11.6 ± 1.9 |
| | NeuroLogic | 84.6 ± 0.5 | 90.8 ± 0.7 | 17.0 ± 1.5 | 6.0 ± 0.0 | 3.6 ± 0.1 |
Table 1: Comparison of rule-based explanation methods across different benchmarks.
In this section, we evaluate our approach, NeuroLogic, in two settings: (1) small-scale benchmarks and (2) transformer-based sentiment analysis. The former involves comparisons with baseline methods to assess NeuroLogic in terms of accuracy, efficiency, and interpretability. The latter focuses on a challenging, large-scale, real-world scenario where existing methods fail to scale, highlighting the practical viability and scalability of NeuroLogic.
Small-Scale Benchmarks
Setup and Baselines
We evaluate NeuroLogic against popular rule-based explanation methods: C5.0 (the C5.0 decision tree algorithm used to learn rules in an end-to-end manner), ECLAIRE (Zarlenga, Shams, and Jamnik 2021a), and CGXPLAIN (Hemker, Shams, and Jamnik 2023) on four standard interpretability benchmarks: XOR, MB-ER, MB-HIST, and MAGIC. For each baseline, we use the original implementation and follow the authors' recommended hyperparameters. We evaluate all methods using five metrics: accuracy, fidelity (agreement with the original model), runtime, number of clauses, and average clause length, to assess both interpretability and performance. Further details on the experimental setup are provided in the Appendix.
As shown in Table 1, NeuroLogic consistently produces the most concise explanations, as reflected by both the number of clauses and the average clause length. In particular, it generates rule sets with substantially shorter average clause lengths; for example, on MB-HIST, it achieves 2.7 ± 0.3 compared to 27.8 ± 7.6 by the previous state-of-the-art, CGXPLAIN. This conciseness, along with fewer clauses, directly enhances interpretability and readability by reducing overall rule complexity. These results highlight a key advantage of NeuroLogic and align with our design goal of improving interpretability.
By avoiding the costly layer-wise rule extraction and substitution paradigm employed by ECLAIRE and CGXPLAIN, NeuroLogic achieves significantly higher efficiency. Although C5.0 can be faster in some cases by directly extracting rules from DNNs, it often suffers from lower fidelity, reduced accuracy, or the generation of overly complex rule sets. For example, while C5.0 can complete rule extraction on XOR in just 0.1 seconds, its accuracy is only around 52%. In contrast, NeuroLogic consistently achieves strong performance in both fidelity and accuracy across all benchmarks. These results demonstrate that NeuroLogic strikes a favorable balance by effectively combining interpretability, computational efficiency, and faithfulness, outperforming existing rule-based methods.
<details>
<summary>x3.png Details</summary>

### Visual Description
Line chart comparing rule-set accuracy (left y-axis, 0.4–0.8) with the average purity of the Anger, Joy, Optimism, and Sadness predicates (right y-axis, 1.1–1.8) across Transformer layers 1–6. All five series increase with depth: rule-set accuracy rises from ~0.39 at layer 1 to ~0.80 at layer 6, while average purity climbs from ~1.1 to ~1.8 over the same range. Joy is consistently the purest class and Sadness the least pure, indicating that deeper layers yield purer predicates alongside higher rule-set accuracy.
</details>
Figure 3: The (average) purity of predicates correlates with the rule model accuracy as layers go deeper.
<details>
<summary>x4.png Details</summary>

### Visual Description
Line chart of rule-model accuracy (y-axis, 0.0–0.8) versus the number of top-k predicates (x-axis, 1–40), with one series per Transformer layer (1–6). Every layer climbs steeply from near 0.0 at k=1 to its peak region by k=5–10. Layers 5 and 6 reach ~0.80 by k=10 and remain stable through k=40; layers 1–4 peak around k=10–20 (e.g., layer 1 at ~0.40, layer 3 at ~0.67) and then decline as additional predicates introduce noise (layer 1 drops to ~0.27 at k=40).
</details>
Figure 4: The impact of the number of predicates on the rule model's accuracy.
| Class | Layer 1 (38.92%) | Layer 2 (43.70%) | Layer 3 (62.84%) | Layer 4 (66.92%) | Layer 5 (78.89%) | Layer 6 (80.58%) |
| --- | --- | --- | --- | --- | --- |
| Anger | 70 / 4.29 | 91 / 4.27 | 91 / 4.82 | 75 / 4.51 | 42 / 4.19 | 33 / 4.15 |
| Joy | 58 / 3.62 | 50 / 4.18 | 58 / 4.81 | 48 / 4.98 | 35 / 5.14 | 20 / 4.25 |
| Optimism | 34 / 4.88 | 32 / 4.38 | 47 / 5.60 | 49 / 5.65 | 26 / 4.65 | 23 / 5.57 |
| Sadness | 78 / 4.76 | 53 / 4.36 | 84 / 5.25 | 73 / 3.78 | 46 / 4.72 | 38 / 4.92 |
Table 2: Number of clauses and average clause length (each cell reports # Clauses / Length) for each emotion class across Transformer layers after pruning. Per-layer rule-set accuracy is shown in parentheses following the layer number.
| Keyword (L1) | Pattern (L1) | Keyword (L2) | Pattern (L2) | Keyword (L3) | Pattern (L3) | Keyword (L4) | Pattern (L4) | Keyword (L5) | Pattern (L5) | Keyword (L6) | Pattern (L6) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| the | at_end | i | at_start | it | after_verb | sad | at_end | sad | at_end | sad | at_end |
| i | at_end | i | before_verb | you | after_verb | in | after_subject | sad | after_verb | lost | after_subject |
| of | at_end | user | at_start | so | after_subject | sad | after_verb | depression | at_end | depression | at_end |
| when | at_start | user | before_subject | but | at_start | sad | at_start | me | after_subject | sad | after_verb |
| and | after_subject | a | after_subject | on | after_verb | the | before_verb | at | after_verb | sad | after_subject |
| at | after_verb | i | after_verb | a | at_end | think | after_subject | sad | after_subject | sad | at_start |
| be | after_subject | a | after_verb | of | at_start | sad | after_subject | sad | at_start | sadness | at_end |
| to | at_end | is | after_subject | you | after_subject | depression | at_end | sadness | at_end | depression | at_start |
| was | after_verb | to | after_verb | it | after_subject | in | at_start | depression | after_verb | depressing | at_end |
| sad | at_end | i | after_subject | just | at_start | by | after_verb | be | after_verb | depressing | after_subject |
| when | before_subject | user | before_verb | so | after_verb | user | after_verb | am | after_subject | lost | at_start |
| like | after_subject | it | at_start | can | after_subject | be | after_subject | was | before_verb | nightmare | at_end |
| when | before_verb | is | at_start | of | before_verb | think | at_start | at | after_subject | sadness | after_verb |
| like | after_verb | and | after_verb | can | before_verb | with | after_subject | at | at_start | lost | at_end |
| are | after_subject | my | after_verb | sad | after_verb | really | after_subject | depressing | at_end | anxiety | after_verb |
Table 3: Top-15 (keyword, linguistic pattern) pairs for class Sadness learned by the top DNF rule across layers 1–6.
| EmoLex | NeuroLogic |
| --- | --- |
| depression | sad |
| bad | depression |
| lost | lost |
| terrorism | depressing |
| sadness | sadness |
| awful | sadly |
| anxiety | mourn |
| depressed | nightmare |
| feeling | anxiety |
| offended | never |
| F1: 0.297 | F1: 0.499 |
Table 4: Top-10 words for the Sadness class from EmoLex and NeuroLogic. The bottom row reports the F1 scores.
Transformer-based Sentiment Analysis
Setup and Baselines
We evaluate NeuroLogic on the Emotion task from the TweetEval benchmark, which contains approximately 5,000 Twitter posts, each labeled with one of four emotions: Anger, Joy, Optimism, or Sadness (Barbieri et al. 2020). All experiments use a pretrained 6-layer DistilBERT fine-tuned on the same TweetEval splits (Schmid 2024; Sanh et al. 2020), which achieves a test accuracy of 80.59%. The model contains approximately 66 million parameters, and we empirically validate that existing methods fail to scale efficiently to this level of complexity. For rule grounding, we approximate predicate-level interventions by masking the tokens that instantiate an atomic abstraction $a_{i}$; if masking flips an active DNF clause to False, we identify $a_{i}$ as its causal grounder. In our study, each $a_{i}$ is defined as a (keyword, linguistic pattern) pair, where the linguistic pattern may include structures such as at_start. We benchmark the grounded rules produced by NeuroLogic against a classical, purely lexical baseline, since, to the best of our knowledge, no existing rule-extraction baseline is available for this task: EmoLex (Mohammad and Turney 2013) tags a tweet as Sadness whenever it contains any word from its emotion dictionary, relying on isolated keyword matching and ignoring syntactic and other linguistic patterns. Additional details are provided in the Appendix.
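The masking-based causal test described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `model_predicts_sadness` is a hypothetical stand-in for the fine-tuned DistilBERT forward pass so the example stays self-contained.

```python
# Sketch of masking-based causal grounding: a token is a causal grounder
# if masking it flips the model's (here, a toy stand-in's) prediction.
MASK = "[MASK]"

def model_predicts_sadness(tokens):
    # Hypothetical stand-in for the DistilBERT classifier: fires on
    # explicit sadness cues so the example is runnable.
    return any(t in {"sad", "depression", "lost"} for t in tokens)

def causal_grounders(tokens):
    """Return tokens whose masking flips the prediction to False."""
    if not model_predicts_sadness(tokens):
        return []
    causal = []
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        if not model_predicts_sadness(masked):  # prediction flipped
            causal.append(tok)
    return causal

print(causal_grounders("i feel so sad today".split()))  # ['sad']
```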
Identifying Predicates
We first extract hidden predicates from all six Transformer layers and observe that, as layers deepen, the predicates tend to exhibit higher purity, rising from an average of 1.1 to 1.8. This trend correlates with the test accuracy of our rule-based model, which improves from around 40% to 80%, as illustrated in Figure 3. These results suggest that deeper layers capture more essential and task-relevant decision-making patterns, consistent with prior findings (Geng et al. 2023, 2024). Another notable observation is that, surprisingly, a small number of predicates (specifically, the top five) is often sufficient to explain the model's behavior. As shown in Figure 4, including more predicates beyond this point can even reduce accuracy, particularly in shallower layers (Layers 1 and 2). Middle layers (Layers 3 and 4) are less affected, while deeper layers (Layers 5 and 6) remain relatively stable. Upon closer inspection, we find that this decline occurs because the added predicates are noisier and less semantically meaningful, introducing spurious patterns that degrade rule quality.
Constructing Rules
Based on Figure 4, we select the top-15 predicates to construct the DNF rules, meaning that each clause initially consists of 15 predicates. After distillation, however, fewer than five predicates are retained on average, as reported in Table 2. As a stand-alone classifier, the rule set distilled from Layer 6 achieves an accuracy of 80.58%, on par with the neural model's accuracy (80.59%). Notably, the distilled DNF rule sets consist primarily of positive predicates, with negations rarely appearing. This indicates that the underlying neuron activations function more like selective filters, each tuned to respond to specific input patterns rather than suppressing irrelevant ones. This aligns with the intuition that deeper Transformer layers develop specialized units that favor and reinforce certain semantic or structural patterns, making the logic rules not only more compact but also more interpretable and faithful to the model's decision boundaries.
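To make the rule-as-classifier reading concrete, the sketch below evaluates a DNF over boolean hidden predicates. The predicate names and the clause set are invented for illustration; they are not the paper's learned rules.

```python
# Evaluate a DNF rule: it fires if any clause has all literals satisfied.
# Each literal is (predicate_name, polarity); polarity False = negation.
def eval_dnf(clauses, predicates):
    return any(
        all(predicates.get(name, False) == polarity
            for name, polarity in clause)
        for clause in clauses
    )

# A mostly-positive clause set, mirroring the observation that
# negations rarely survive distillation. Names are hypothetical.
sadness_rule = [
    [("p_sad_token", True), ("p_at_end", True)],
    [("p_depression_token", True)],
]

print(eval_dnf(sadness_rule, {"p_sad_token": True, "p_at_end": True}))  # True
print(eval_dnf(sadness_rule, {"p_at_end": True}))                       # False
```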
Grounding Rules
To simplify our analysis, we focus on the Sadness class and the highest-scoring DNF rule per layer in Table 3. We claim this is empirically justified: Figure 5 (Appendix) shows that the class accuracy at each layer is explained largely by the top DNF rule, which effectively "decides" whether an example is labeled Sadness while the remaining rules handle outliers and more nuanced examples. In the early layers 1–2, high-frequency function keywords such as the, i, of, and at mostly describe surface positions (e.g., at_end). These words carry no Sadness-related emotional content but instead provide syntactic cues such as subject boundaries and sentence structure. This observation mirrors earlier probing studies on Transformer layers (Tenney, Das, and Pavlick 2019b; Peters et al. 2018). In the mid-layers 3–4, explicit Sadness keywords (sad, depression) start to mix in with anchors like in and you. This indicates a gradual transition in which emotional content begins to be attended to, while linguistic patterns that encode local syntax are still required for rules to fire. Finally, in the deep layers 5–6, the top rule fires almost exclusively on keywords that convey Sadness (sad, lost, depression, nightmare, anxiety, bad). Each keyword appears numerous times paired with different linguistic patterns, and certain keywords are refined and pushed up (lost, sadness, sad, depression). We also observe a pattern collapse in later layers, where many of the same keywords appear with multiple patterns. Together, these trends show that deeper predicates become less about local syntax and more about whether a salient semantic token is present anywhere in the input, an observation shared by many other studies (de Vries, van Cranenburgh, and Nissim 2020; Peters et al. 2018).
Table 4 compares the top-10 token cues for the class Sadness extracted by each method. NeuroLogic's top-10 list preserves core sadness cues such as sad, depression, sadness, depressing, sadly, mourn, and anxiety while promoting unique contextual hits such as nightmare and never in place of noisier terms like terrorism or feeling. Concretely, our method lifts the F1 score from 0.297 to 0.499 by stripping out noisy cross-class terms without losing coverage.
Related Work
Interpreting neural networks with logic rules has been explored since before the deep learning era. These approaches are typically categorized into two groups: pedagogical and decompositional methods (Zhang et al. 2021; Craven and Shavlik 1994). Pedagogical approaches approximate the network in an end-to-end manner. For example, classic decision tree algorithms such as CART (Breiman et al. 1984) and C4.5 (Quinlan 1993) have been adapted to extract decision trees from trained neural networks (Craven and Shavlik 1995; Krishnan, Sivakumar, and Bhattacharya 1999; Boz 2002). In contrast, decompositional methods leverage internal network information, such as structure and learned weights, to extract rules by analyzing the model's internal connections. A core challenge in rule extraction lies in identifying layerwise value ranges through these connections and mapping them back to input features. While recent works have explored more efficient search strategies (Zilke, Loza Mencía, and Janssen 2016; Zarlenga, Shams, and Jamnik 2021a; Hemker, Shams, and Jamnik 2023), these methods typically scale only to very small networks due to the exponential growth of the search space with the number of attributes. Our proposed method, NeuroLogic, combines the efficiency of pedagogical approaches with the faithfulness of decompositional ones, making it scalable to modern DNN models. Its flexible design also enables the generation of more abstract and interpretable rules, moving beyond the limitations of shallow, decision-tree-style explanations.
Conclusion
In this work, we introduce NeuroLogic, a novel framework for extracting interpretable logic rules from modern deep neural networks. NeuroLogic abandons the costly paradigm of layer-wise rule extraction and substitution, enabling greater scalability and architectural compatibility. Its decoupled design allows for flexible grounding, supporting the generation of more abstract and interpretable rules. We demonstrate the practical feasibility of NeuroLogic in extracting meaningful logic rules and providing deeper insights into the inner workings of Transformers.
References
- Barbieri et al. (2020) Barbieri, F.; Camacho-Collados, J.; Espinosa Anke, L.; and Neves, L. 2020. TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification. In Cohn, T.; He, Y.; and Liu, Y., eds., Findings of the Association for Computational Linguistics: EMNLP 2020, 1644–1650. Online: Association for Computational Linguistics.
- Bau et al. (2017) Bau, D.; Zhou, B.; Khosla, A.; Oliva, A.; and Torralba, A. 2017. Network Dissection: Quantifying Interpretability of Deep Visual Representations. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, 3319–3327. IEEE Computer Society.
- Bock et al. (2004) Bock, R. K.; Chilingarian, A.; Gaug, M.; Hakl, F.; Hengstebeck, T.; Jiřina, M.; Klaschka, J.; Kotrč, E.; Savický, P.; Towers, S.; et al. 2004. Methods for multidimensional event classification: a case study using images from a Cherenkov gamma-ray telescope. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 516(2-3): 511–528.
- Boz (2002) Boz, O. 2002. Extracting decision trees from trained neural networks. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, July 23-26, 2002, Edmonton, Alberta, Canada, 456–461. ACM.
- Breiman et al. (1984) Breiman, L.; Friedman, J. H.; Olshen, R. A.; and Stone, C. J. 1984. Classification and Regression Trees. Wadsworth. ISBN 0-534-98053-8.
- Cohen (1995) Cohen, W. W. 1995. Fast Effective Rule Induction. In Prieditis, A.; and Russell, S., eds., Machine Learning, Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, California, USA, July 9-12, 1995, 115–123. Morgan Kaufmann.
- Craven and Shavlik (1994) Craven, M. W.; and Shavlik, J. W. 1994. Using Sampling and Queries to Extract Rules from Trained Neural Networks. In Cohen, W. W.; and Hirsh, H., eds., Machine Learning, Proceedings of the Eleventh International Conference, Rutgers University, New Brunswick, NJ, USA, July 10-13, 1994, 37–45. Morgan Kaufmann.
- Craven and Shavlik (1995) Craven, M. W.; and Shavlik, J. W. 1995. Extracting Tree-Structured Representations of Trained Networks. In Touretzky, D. S.; Mozer, M.; and Hasselmo, M. E., eds., Advances in Neural Information Processing Systems 8, NIPS, Denver, CO, USA, November 27-30, 1995, 24–30. MIT Press.
- de Vries, van Cranenburgh, and Nissim (2020) de Vries, W.; van Cranenburgh, A.; and Nissim, M. 2020. What's so special about BERT's layers? A closer look at the NLP pipeline in monolingual and multilingual models. In Cohn, T.; He, Y.; and Liu, Y., eds., Findings of the Association for Computational Linguistics: EMNLP 2020, 4339–4350. Online: Association for Computational Linguistics.
- Doshi-Velez and Kim (2017) Doshi-Velez, F.; and Kim, B. 2017. Towards A Rigorous Science of Interpretable Machine Learning. arXiv: Machine Learning.
- Geng et al. (2023) Geng, C.; Le, N.; Xu, X.; Wang, Z.; Gurfinkel, A.; and Si, X. 2023. Towards Reliable Neural Specifications. In Krause, A.; Brunskill, E.; Cho, K.; Engelhardt, B.; Sabato, S.; and Scarlett, J., eds., International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, 11196–11212. PMLR.
- Geng et al. (2024) Geng, C.; Wang, Z.; Ye, H.; Liao, S.; and Si, X. 2024. Learning Minimal NAP Specifications for Neural Network Verification. arXiv preprint arXiv:2404.04662.
- Guidotti et al. (2018) Guidotti, R.; Monreale, A.; Turini, F.; Pedreschi, D.; and Giannotti, F. 2018. A Survey Of Methods For Explaining Black Box Models. CoRR, abs/1802.01933.
- He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 770–778. IEEE Computer Society.
- Hemker, Shams, and Jamnik (2023) Hemker, K.; Shams, Z.; and Jamnik, M. 2023. CGXplain: Rule-Based Deep Neural Network Explanations Using Dual Linear Programs. In Chen, H.; and Luo, L., eds., Trustworthy Machine Learning for Healthcare - First International Workshop, TML4H 2023, Virtual Event, May 4, 2023, Proceedings, volume 13932 of Lecture Notes in Computer Science, 60–72. Springer.
- Krishnan, Sivakumar, and Bhattacharya (1999) Krishnan, R.; Sivakumar, G.; and Bhattacharya, P. 1999. Extracting decision trees from trained neural networks. Pattern Recognit., 32(12): 1999–2009.
- Krizhevsky, Sutskever, and Hinton (2012) Krizhevsky, A.; Sutskever, I.; and Hinton, G. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Bartlett, P. L.; Pereira, F. C. N.; Burges, C. J. C.; Bottou, L.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States, 1106–1114.
- Lipton (2016) Lipton, Z. C. 2016. The Mythos of Model Interpretability. CoRR, abs/1606.03490.
- Mohammad and Turney (2013) Mohammad, S. M.; and Turney, P. D. 2013. Crowdsourcing a Word–Emotion Association Lexicon. Computational Intelligence, 29(3): 436–465.
- Pedreschi et al. (2019) Pedreschi, D.; Giannotti, F.; Guidotti, R.; Monreale, A.; Ruggieri, S.; and Turini, F. 2019. Meaningful Explanations of Black Box AI Decision Systems. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, 9780–9784. AAAI Press.
- Pereira et al. (2016) Pereira, B.; Chin, S.-F.; Rueda, O. M.; Vollan, H.-K. M.; Provenzano, E.; Bardwell, H. A.; Pugh, M.; Jones, L.; Russell, R.; Sammut, S.-J.; et al. 2016. The somatic mutation profiles of 2,433 breast cancers refine their genomic and transcriptomic landscapes. Nature communications, 7(1): 11479.
- Peters et al. (2018) Peters, M. E.; Neumann, M.; Zettlemoyer, L.; and Yih, W.-t. 2018. Dissecting Contextual Word Embeddings: Architecture and Representation. In Riloff, E.; Chiang, D.; Hockenmaier, J.; and Tsujii, J., eds., Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 1499–1509. Brussels, Belgium: Association for Computational Linguistics.
- Quinlan (1993) Quinlan, J. R. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann. ISBN 1-55860-238-0.
- Sanh et al. (2020) Sanh, V.; Debut, L.; Chaumond, J.; and Wolf, T. 2020. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv:1910.01108.
- Schmid (2024) Schmid, P. 2024. philschmid/DistilBERT-tweet-eval-emotion. https://huggingface.co/philschmid/DistilBERT-tweet-eval-emotion. Hugging Face model card, version accessed 31 Jul 2025.
- Selvaraju et al. (2017) Selvaraju, R. R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; and Batra, D. 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In 2017 IEEE International Conference on Computer Vision (ICCV), 618–626.
- Sutskever, Vinyals, and Le (2014) Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to Sequence Learning with Neural Networks. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, 3104–3112.
- Tenney, Das, and Pavlick (2019a) Tenney, I.; Das, D.; and Pavlick, E. 2019a. BERT Rediscovers the Classical NLP Pipeline. In Korhonen, A.; Traum, D. R.; and Màrquez, L., eds., Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, 4593–4601. Association for Computational Linguistics.
- Tenney, Das, and Pavlick (2019b) Tenney, I.; Das, D.; and Pavlick, E. 2019b. BERT Rediscovers the Classical NLP Pipeline. In Korhonen, A.; Traum, D.; and Màrquez, L., eds., Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4593–4601. Florence, Italy: Association for Computational Linguistics.
- Vig et al. (2020) Vig, J.; Gehrmann, S.; Belinkov, Y.; Qian, S.; Nevo, D.; Singer, Y.; and Shieber, S. M. 2020. Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias. CoRR, abs/2004.12265.
- Zarlenga, Shams, and Jamnik (2021a) Zarlenga, M. E.; Shams, Z.; and Jamnik, M. 2021a. Efficient Decompositional Rule Extraction for Deep Neural Networks. CoRR, abs/2111.12628.
- Zarlenga, Shams, and Jamnik (2021b) Zarlenga, M. E.; Shams, Z.; and Jamnik, M. 2021b. Efficient decompositional rule extraction for deep neural networks. arXiv preprint arXiv:2111.12628.
- Zhang et al. (2021) Zhang, Y.; Tiňo, P.; Leonardis, A.; and Tang, K. 2021. A Survey on Neural Network Interpretability. IEEE Trans. Emerg. Top. Comput. Intell., 5(5): 726–742.
- Zilke, Loza Mencía, and Janssen (2016) Zilke, J. R.; Loza Mencía, E.; and Janssen, F. 2016. DeepRED – Rule Extraction from Deep Neural Networks. In Discovery Science: 19th International Conference, DS 2016, Bari, Italy, October 19–21, 2016, Proceedings 19, 457–473. Springer.
Appendix A Additional Details on Small-Scale Benchmarks
All experiments were conducted on a desktop equipped with a 2GHz Intel i7 processor and 32 GB of RAM. For each baseline, we used the original implementation and followed the authors' recommended hyperparameters to ensure a fair comparison. We performed all experiments across five different random folds to initialize the train-test splits, the random initialization of the DNN, and the random inputs for the baselines. Regarding the metric of average clause length, there appears to be a discrepancy in how it is computed in (Zarlenga, Shams, and Jamnik 2021a) and (Hemker, Shams, and Jamnik 2023). Specifically, (Zarlenga, Shams, and Jamnik 2021a) seems to underestimate the average clause length. To ensure consistency and accuracy, we adopt the computation method used in (Hemker, Shams, and Jamnik 2023).
To maintain consistency, we used the same DNN topology (i.e., number and depth of layers) as in the experiments reported by (Zarlenga, Shams, and Jamnik 2021a). For NeuroLogic, we applied it to the last hidden layer and used the C5.0 decision tree as the grounding method for optimal efficiency. Below is a detailed description of each dataset:
MAGIC.
The MAGIC dataset simulates the detection of high-energy gamma particles versus background cosmic hadrons using imaging signals captured by a ground-based atmospheric Cherenkov telescope (Bock et al. 2004). It consists of 19,020 samples with 10 handcrafted features extracted from the telescopeâs âshower images.â The dataset is moderately imbalanced, with approximately 35% of instances belonging to the minority (gamma) class.
Metabric-ER.
This biomedical dataset is constructed from the METABRIC cohort and focuses on predicting Estrogen Receptor (ER) statusâa key immunohistochemical marker for breast cancerâbased on 1,000 features, including tumor characteristics, gene expression levels, clinical variables, and survival indicators. Of the 1,980 patients, roughly 24% are ER-positive, indicating the presence of hormone receptors that influence tumor growth.
Metabric-Hist.
Also derived from the METABRIC cohort (Pereira et al. 2016), this dataset uses the mRNA expression profiles of 1,694 patients (spanning 1,004 genes) to classify tumors into two major histological subtypes: Invasive Lobular Carcinoma (ILC) and Invasive Ductal Carcinoma (IDC). Positive diagnoses (ILC) account for only 8.7% of all samples, resulting in a highly imbalanced classification setting.
XOR.
A synthetic dataset commonly used as a benchmark for rule-based models. Each instance $\mathbf{x}^{(i)} \in [0,1]^{10}$ is sampled independently from a uniform distribution. Labels are assigned according to a non-linear XOR logic over the first two dimensions:
$$
y^{(i)}=\text{round}(x^{(i)}_{1})\oplus\text{round}(x^{(i)}_{2}),
$$
where $\oplus$ denotes the logical XOR operation. The dataset contains 1,000 instances.
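The construction above can be reproduced in a few lines; the random seed is arbitrary and only the first two dimensions carry label signal.

```python
import numpy as np

# Generate the synthetic XOR benchmark as defined above:
# each x^(i) ~ U[0,1]^10, y^(i) = round(x1) XOR round(x2).
rng = np.random.default_rng(0)  # arbitrary seed
X = rng.uniform(0.0, 1.0, size=(1000, 10))
y = np.round(X[:, 0]).astype(int) ^ np.round(X[:, 1]).astype(int)

print(X.shape, sorted(np.unique(y).tolist()))  # (1000, 10) [0, 1]
```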
Appendix B Additional Details on Transformer-Based Sentiment Analysis
All experiments are conducted on a machine running Ubuntu 22.04 LTS, equipped with an NVIDIA A100 GPU (40 GB VRAM), 85 GB of RAM, and dual Intel Xeon CPUs.
EmoLex.
We use the NRC Word-Emotion Association Lexicon (Mohammad and Turney 2013). Tweets are lower-cased and split into alphabetic word fragments using a regex. A tweet is assigned emotion $e$ iff any of its words appears in the EmoLex list for $e$. No lemmatisation, emoji handling, or other heuristics are applied.
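A minimal sketch of this baseline, with a tiny illustrative lexicon standing in for the full NRC dictionary (which has thousands of entries):

```python
import re

# EmoLex-style tagging: lower-case the tweet, split into alphabetic
# fragments with a regex, and tag the emotion iff any fragment appears
# in that emotion's word list. The lexicon below is illustrative only.
SADNESS_LEXICON = {"sad", "depression", "lost", "mourn"}

def emolex_tags(tweet, lexicon=SADNESS_LEXICON):
    words = re.findall(r"[a-z]+", tweet.lower())
    return any(w in lexicon for w in words)

print(emolex_tags("Feeling so SAD about the news..."))  # True
print(emolex_tags("What a lovely sunny day!"))          # False
```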
Grounding Rule Templates Procedure
Given the DNFs extracted in § Identifying Predicates, we ground each DNF to lexical templates of the underlying text. Our implementation (`causal_word_lexical_batched`) does the following:
Implementation
We use spaCy 3.7 (en_core_web_sm) for sentence segmentation, POS tags, and dependency arcs. 1) Causal test. For every neuron predicate in the learned DNF, we mask one candidate word at a time. If the forward pass flips the DNF class prediction (i.e., any predicate in the DNF flips), the word is deemed causal. We then fit this word into the possible templates. 2) Template types. Once a word is deemed causal, we map it to the first matching template in the following order:
1. is_hashtag: word starts with '#'.
2. at_start / at_end: word's index within its sentence falls in the first or last 20% of tokens ($\alpha=0.20$).
3. before/after_subject: using spaCy, locate the first nsubj/nsubjpass; the word is before or after if it appears within a ±6-token window of that subject.
4. before/after_verb: same window logic around the first main VERB.
5. exists: general fallback template, applied to all remaining causal words.
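The ordering above can be sketched as a first-match cascade (the function name and index arguments are ours; subject and verb indices would come from the spaCy parse):

```python
def assign_template(word, idx, n_tokens, subj_idx=None, verb_idx=None,
                    alpha=0.20, window=6):
    # Return the first matching template, in the priority order listed above.
    if word.startswith("#"):
        return "is_hashtag"
    if idx < alpha * n_tokens:           # first 20% of the sentence
        return "at_start"
    if idx >= (1 - alpha) * n_tokens:    # last 20% of the sentence
        return "at_end"
    if subj_idx is not None and abs(idx - subj_idx) <= window:
        return "before_subject" if idx < subj_idx else "after_subject"
    if verb_idx is not None and abs(idx - verb_idx) <= window:
        return "before_verb" if idx < verb_idx else "after_verb"
    return "exists"                      # general fallback
```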
This assignment yields the (word, template) pair that forms the grounded rules. 3) Scoring & ordering. For every (word, template) rule, we compute a support score
$$
s=\operatorname{idf}(w)\,\frac{\texttt{flips}(w,t)}{\texttt{total}(w,t)},\quad\operatorname{idf}(w)=\log\!\frac{N_{\text{docs}}+1}{\text{df}(w)+1}.
$$
Templates with $s \ge \tau$ ($\tau=0.03$) are kept. The final rule list for each class is sorted in descending $s$ so the highest score appears first.
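The score and threshold can be computed directly from the flip counts and document frequencies (a sketch; the argument names are ours):

```python
import math

def support_score(flips: int, total: int, df: int, n_docs: int) -> float:
    # idf(w) = log((N_docs + 1) / (df(w) + 1))
    idf = math.log((n_docs + 1) / (df + 1))
    # s = idf(w) * flips(w, t) / total(w, t)
    return idf * flips / total

def keep_rule(flips: int, total: int, df: int, n_docs: int,
              tau: float = 0.03) -> bool:
    # A (word, template) rule is kept iff its support score reaches tau.
    return support_score(flips, total, df, n_docs) >= tau
```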
Top DNF rule accuracy for each class
We report the class-wise accuracy achieved by the top DNF rule at each layer in Figure 5. The results show that each layer's behavior can be effectively and consistently explained by its corresponding top DNF rule, demonstrating a strong alignment between the rule and the model's internal representations.
<details>
<summary>x5.png Details</summary>

### Visual Description
## Line Chart: Top-rule accuracy vs. Layer for different emotions
### Overview
The image is a line chart comparing the "Top-rule accuracy" across different "Layers" (1 to 6) for four emotions: Anger, Joy, Optimism, and Sadness. The chart shows how the accuracy of predicting these emotions changes as the layer increases.
### Components/Axes
* **X-axis (Horizontal):** "Layer", with integer values from 1 to 6.
* **Y-axis (Vertical):** "Top-rule accuracy", ranging from 0.3 to 0.9.
* **Legend (Top-Left):**
* Blue line with circle markers: "Anger"
* Orange line with square markers: "Joy"
* Green line with triangle markers: "Optimism"
* Red line with diamond markers: "Sadness"
### Detailed Analysis
* **Anger (Blue):**
* Trend: Generally increasing with layer.
* Data Points:
* Layer 1: ~0.37
* Layer 2: ~0.34
* Layer 3: ~0.42
* Layer 4: ~0.49
* Layer 5: ~0.73
* Layer 6: ~0.86
* **Joy (Orange):**
* Trend: Increasing with layer.
* Data Points:
* Layer 1: ~0.37
* Layer 2: ~0.48
* Layer 3: ~0.46
* Layer 4: ~0.61
* Layer 5: ~0.78
* Layer 6: ~0.85
* **Optimism (Green):**
* Trend: Initial increase, then decrease, then increase.
* Data Points:
* Layer 1: ~0.49
* Layer 2: ~0.52
* Layer 3: ~0.45
* Layer 4: ~0.43
* Layer 5: ~0.59
* Layer 6: ~0.71
* **Sadness (Red):**
* Trend: Generally increasing with layer.
* Data Points:
* Layer 1: ~0.32
* Layer 2: ~0.39
* Layer 3: ~0.41
* Layer 4: ~0.42
* Layer 5: ~0.67
* Layer 6: ~0.77
### Key Observations
* Joy has the highest accuracy across most layers (2 through 5).
* Anger and Sadness have similar accuracy trends.
* Optimism shows a different trend, with an initial increase followed by a decrease before increasing again.
* All emotions show a significant increase in accuracy from Layer 4 to Layer 6.
### Interpretation
The chart suggests that the model's ability to predict emotions improves as the layer increases, particularly after Layer 4. Joy is the easiest emotion for the model to predict, while Optimism presents more complexity. The performance differences between emotions may be due to the nature of the emotions themselves or the way they are represented in the data. The significant jump in accuracy after Layer 4 could indicate a critical point in the model's architecture or training process where it begins to learn more effectively.
</details>
Figure 5: Top DNF rule accuracy for each class by layer.
Code
All code used for our experiments is available in the following GitHub repository: github.com/NeuroLogic2026/NeuroLogic.
Sample code for purity-based predicate extraction:

```python
import torch
from typing import Dict, List, Tuple

def purity_rules(
    z_cls: torch.Tensor,  # (N, H) CLS activations
    y: torch.Tensor,      # (N,) integer class labels
    k: int = 15,          # top-k neurons per class
) -> Tuple[
    Dict[int, List[Tuple[int, float, int]]],  # rules[c] = [(neuron, tau, support)]
    Tuple[int, int, float, float, int],       # best (class, neuron, tau, purity, support)
]:
    num_samples, hidden_size = z_cls.shape
    num_classes = int(y.max().item()) + 1

    purity = torch.empty(num_classes, hidden_size)
    thr_mat = torch.empty(num_classes, hidden_size)
    supp_mat = torch.empty(num_classes, hidden_size, dtype=torch.long)

    class_counts = torch.bincount(y, minlength=num_classes)

    for j in range(hidden_size):
        a = z_cls[:, j]
        idx = torch.argsort(a, descending=True)
        a_sorted, y_sorted = a[idx], y[idx]

        # Running per-class counts as the threshold sweeps down the sorted activations.
        one_hot = torch.nn.functional.one_hot(
            y_sorted, num_classes=num_classes
        ).cumsum(0)
        total_seen = torch.arange(1, num_samples + 1)

        for c in range(num_classes):
            tp = one_hot[:, c]
            fp = total_seen - tp
            tn = (num_samples - class_counts[c]) - fp

            tp_rate = tp.float() / class_counts[c].clamp_min(1)
            tn_rate = tn.float() / (num_samples - class_counts[c]).clamp_min(1)
            p_scores = tp_rate + tn_rate

            best = torch.argmax(p_scores)
            purity[c, j] = p_scores[best]
            thr_mat[c, j] = a_sorted[best].item()
            supp_mat[c, j] = total_seen[best].item()

    rules: Dict[int, List[Tuple[int, float, int]]] = {
        c: [
            (j, thr_mat[c, j].item(), supp_mat[c, j].item())
            for j in torch.topk(purity[c], k=min(k, hidden_size)).indices.tolist()
        ]
        for c in range(num_classes)
    }

    best_c, best_j = divmod(purity.argmax().item(), hidden_size)
    best_neuron = (
        best_c,
        best_j,
        thr_mat[best_c, best_j].item(),
        purity[best_c, best_j].item(),
        supp_mat[best_c, best_j].item(),
    )
    return rules, best_neuron
```
Token Position Analysis
Figures 6, 7, 8, and 9 present results for the classes Anger, Sadness, Optimism, and Joy, respectively. We identify causal tokens, i.e., words whose masking flips the activation of at least one class-specific predicate neuron. These words are grouped into 10 buckets based on their relative position within the input.
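The bucketing is a straightforward map from relative token position to a decile index (a sketch, assuming 10 equal-width buckets; the function name is ours):

```python
def position_bucket(idx: int, n_tokens: int, n_buckets: int = 10) -> int:
    # Map a token's relative position idx / n_tokens to a bucket in [0, n_buckets).
    # The min() clamp keeps the final token inside the last bucket.
    return min(int(idx / n_tokens * n_buckets), n_buckets - 1)
```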
<details>
<summary>figures/LLM/heat_map_anger.png Details</summary>

### Visual Description
## Heatmap: Token Flip Rate by Position Bucket
### Overview
The image is a heatmap visualizing the "flip rate" of various tokens (words) across different "position buckets." The heatmap uses a color gradient to represent the flip rate, ranging from dark purple (0.0) to bright yellow (1.0). The tokens are listed on the vertical axis, and the position buckets (0-9) are on the horizontal axis.
### Components/Axes
* **Vertical Axis (Token):** Lists individual tokens (words).
* Tokens: india, raging, growl, concern, fury, irritate, state, revolting, terrorist, anger, irate, media, offense, threaten, furious, bit, grudge, trump, insult, both, terror, fox, shocking, pout, next, pakistan, terrorism, outrage, hold, boiling, rabid, rage, words, comes, please, hillary, lies, gotta, bitch, also
* **Horizontal Axis (Position Bucket):** Represents the position of the token within a sequence or context, divided into 10 buckets (0-9).
* **Color Scale (Flip Rate):** Represents the rate at which the token's meaning or sentiment "flips" or changes.
* Dark Purple: 0.0
* Light Blue: 0.2
* Green: 0.4
* Yellow-Green: 0.6
* Yellow: 0.8
* Bright Yellow: 1.0
### Detailed Analysis
Here's a breakdown of the flip rate for selected tokens across the position buckets. Note that the values are approximate due to the visual nature of the heatmap.
* **india:** Flip rate is high (approximately 0.8-1.0) in buckets 0, 2, 3, 4, 5, 6, 7, 8, and 9. It is low (approximately 0.0) in bucket 1.
* **raging:** Flip rate is high (approximately 0.8-1.0) in buckets 0, 1, 3, 4, 5, 6, 7, 8, and 9. It is low (approximately 0.0) in bucket 2.
* **growl:** Flip rate is high (approximately 0.8-1.0) in buckets 0, 1, 2, 3, 4, 5, 7, 8, and 9. It is low (approximately 0.0) in bucket 6.
* **concern:** Flip rate is high (approximately 0.8-1.0) in buckets 1, 2, 3, 4, 5, 6, 7, 8, and 9. It is low (approximately 0.0) in bucket 0.
* **fury:** Flip rate is low (approximately 0.0) in buckets 0, 1, 2, 4, 5, 6, 7, 8, and 9. It is medium (approximately 0.4) in bucket 3.
* **irritate:** Flip rate is low (approximately 0.0) in buckets 0, 1, 2, 4, 5, 6, 7, 8, and 9. It is medium (approximately 0.4) in bucket 3.
* All remaining tokens (state, revolting, terrorist, anger, irate, media, offense, threaten, furious, bit, grudge, trump, insult, both, terror, fox, shocking, pout, next, pakistan, terrorism, outrage, hold, boiling, rabid, rage, words, comes, please, hillary, lies, gotta, bitch, also) show a high flip rate (approximately 0.8-1.0) across all position buckets 0 through 9.
### Key Observations
* Some tokens, like "india," "raging," and "growl," show variability in their flip rate across different position buckets.
* Other tokens, like "state," "revolting," and "terrorist," consistently have a high flip rate across all position buckets.
* The flip rate for "fury" and "irritate" is low across most position buckets, with a slight increase in bucket 3.
### Interpretation
The heatmap visualizes how the "flip rate" of different tokens varies depending on their position within a sequence or context. A high flip rate suggests that the token's meaning or sentiment is highly context-dependent and changes frequently. Conversely, a low flip rate indicates that the token's meaning is more stable and consistent across different contexts.
The variability in flip rates across different tokens and position buckets suggests that some words are more sensitive to their surrounding context than others. For example, the tokens "india," "raging," and "growl" may have different meanings or connotations depending on where they appear in a sentence or conversation. On the other hand, tokens like "state," "revolting," and "terrorist" may have a more consistent and negative connotation regardless of their position.
The slight increase in flip rate for "fury" and "irritate" in bucket 3 could indicate a specific context or pattern where these words are more likely to have a different meaning or sentiment. Further analysis would be needed to understand the specific factors driving these variations in flip rate.
</details>
Figure 6: Heat map of keywords by positional bucket for class Anger.
<details>
<summary>figures/LLM/heat_map_sadness.png Details</summary>

### Visual Description
## Heatmap: Token Flip Rate by Position Bucket
### Overview
The image is a heatmap visualizing the "flip rate" of different tokens (words) across various "position buckets." The heatmap uses a color gradient from dark purple (0.0) to bright yellow (1.0) to represent the flip rate, with higher flip rates indicated by brighter colors. The tokens are listed on the y-axis, and the position buckets are on the x-axis.
### Components/Axes
* **Y-axis:** "token" - Lists the tokens being analyzed. The tokens are: sadly, depressing, gloomy, nervous, mourn, despair, depress, dread, nightmare, bored, worry, dull, lost, heart, sick, dark, 12, leave, sad, ever, depression, sadness, crying, couldn, shy, broken, where, unhappy, wish, mood, cry, again, week, stayed, left, life, nno, old, feeling, anxiety.
* **X-axis:** "position bucket" - Represents the position buckets, numbered from 0 to 9.
* **Colorbar (Right):** "flip rate" - Indicates the mapping between color and flip rate, ranging from 0.0 (dark purple) to 1.0 (bright yellow) in increments of 0.2.
### Detailed Analysis
The heatmap shows the flip rate for each token at each position bucket. Here's a breakdown of some notable observations:
* **sadly:** High flip rate (yellow) in position buckets 5, 6, 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 3, and 4.
* **depressing:** High flip rate (yellow) in position buckets 6, 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 3, 4, and 5.
* **gloomy:** High flip rate (yellow) in position buckets 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 3, 4, 5, and 6.
* **nervous:** High flip rate (yellow) in position buckets 8 and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 3, 4, 5, 6, and 7.
* **mourn:** High flip rate (yellow) in position buckets 6, 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 3, 4, and 5.
* **despair:** High flip rate (yellow) in position buckets 3, 6, 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 4, and 5.
* **depress:** High flip rate (yellow) in position buckets 3, 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 4, 5, and 6.
* **dread:** High flip rate (yellow) in position buckets 3, 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 4, 5, and 6.
* **nightmare:** High flip rate (yellow) in position buckets 3, 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 2, 4, 5, and 6.
* **bored:** High flip rate (yellow) in position buckets 1, 2, 3, 4, 5, 6, 7, 8, and 9. Low flip rate (dark purple) in position bucket 0.
* **Consistently high flip rate (yellow) across all position buckets 0-9:** worry, dull, lost, heart, sick, dark, 12, leave, sad, ever, depression, sadness, shy, where, week, left, life, nno, old, feeling, anxiety.
* **Consistently low flip rate (dark purple) across all position buckets 0-9:** crying, couldn, broken, unhappy, wish, mood, cry, again, stayed.
### Key Observations
* The flip rate tends to be higher in the later position buckets (6-9) for tokens like "sadly," "depressing," "gloomy," "nervous," "mourn," "despair," "depress," "dread," and "nightmare."
* Tokens like "bored," "worry," "dull," "lost," "heart," "sick," "dark," "12," "leave," "sad," "ever," "depression," "sadness," "shy," "where," "week," "left," "life," "nno," "old," "feeling," and "anxiety" have consistently high flip rates across all position buckets.
* Tokens like "crying," "couldn," "broken," "unhappy," "wish," "mood," "cry," "again," and "stayed" have consistently low flip rates across all position buckets.
### Interpretation
The heatmap suggests that certain tokens related to negative emotions (e.g., "sadly," "depressing," "gloomy") are more likely to be "flipped" or altered in later positions within a sequence or context. This could indicate that these emotions are more sensitive to context or are more likely to be modified as a narrative or situation progresses.
The tokens with consistently high flip rates across all positions might be more versatile or common words that are easily substituted or rephrased without significantly changing the meaning. Conversely, the tokens with consistently low flip rates might be more specific or less likely to be altered due to their unique semantic content.
The "position bucket" likely represents a segment or division of a larger text or sequence. The flip rate could be measuring the frequency with which a token is replaced or modified within that bucket. This analysis could be useful in understanding how language models or text processing algorithms handle different types of words in various contexts.
</details>
Figure 7: Heat map of keywords by positional bucket for class Sadness.
<details>
<summary>figures/LLM/heat_map_optimism.png Details</summary>

### Visual Description
## Heatmap: Token Flip Rate by Position Bucket
### Overview
The image is a heatmap visualizing the "flip rate" of different tokens (words) across various "position buckets." The heatmap uses a color gradient from dark purple (0.0) to bright yellow (1.0) to represent the flip rate, with intermediate values shown in shades of blue and green. The tokens are listed vertically on the left, and the position buckets are listed horizontally along the bottom.
### Components/Axes
* **Y-axis (Token):** Lists individual tokens (words). The tokens are: optimism, milk, monster, there, o, president, optimist, pessimist, serious, would, ll, know, art, too, christ, every, silence, worry, advice, his, always, about, anxiety, do, our, let, relentless, hope, head, test, between, have, one, an, sting, start, stayed, but, life, way.
* **X-axis (Position Bucket):** Represents the position of the token within a sequence or context, divided into buckets labeled 0 through 9.
* **Color Scale (Flip Rate):** A vertical color bar on the right side of the heatmap indicates the flip rate, ranging from 0.0 (dark purple) to 1.0 (bright yellow). The scale is marked with values 0.0, 0.2, 0.4, 0.6, 0.8, and 1.0.
### Detailed Analysis or Content Details
The heatmap displays the flip rate for each token at each position bucket. A higher flip rate (yellow) indicates that the token is more likely to be "flipped" or changed at that position, while a lower flip rate (purple) indicates it is less likely to be changed.
Here's a breakdown of some tokens and their flip rate trends:
* **optimism:** High flip rate (yellow) in position buckets 0, 4, 5, 6, 7, 8, 9. Low flip rate (purple) in position buckets 1, 2, 3.
* **milk:** High flip rate (yellow) in position buckets 1, 2, 3, 4, 5, 6, 7, 8, 9. Low flip rate (purple) in position bucket 0.
* **monster:** High flip rate (yellow) in position buckets 1, 2, 3, 4, 5, 6, 7, 8, 9. Low flip rate (purple) in position bucket 0.
* **there:** High flip rate (yellow) in position buckets 0, 2, 3, 4, 5, 6, 7, 8, 9. Low flip rate (purple) in position bucket 1.
* **ll:** Low flip rate (purple) across all position buckets 0-9.
* All remaining tokens (o, president, optimist, pessimist, serious, would, know, art, too, christ, every, silence, worry, advice, his, always, about, anxiety, do, our, let, relentless, hope, head, test, between, have, one, an, sting, start, stayed, but, life, way): high flip rate (yellow) across all position buckets 0-9.
### Key Observations
* Some tokens, like "ll", consistently show a low flip rate across all position buckets.
* Other tokens, like "optimism", show a high flip rate in most position buckets, but a low flip rate in others.
* The majority of tokens have a high flip rate across all position buckets.
### Interpretation
The heatmap visualizes how likely a token is to be "flipped" or changed depending on its position in a sequence. A high flip rate suggests that the token is more context-dependent and its presence is more sensitive to changes in the surrounding text. Conversely, a low flip rate suggests that the token is more stable and less likely to be altered regardless of its position.
The data suggests that the token "ll" is very stable and unlikely to be changed, while tokens like "optimism" are more context-dependent. The high flip rate for the majority of tokens suggests that they are highly sensitive to changes in the surrounding text.
Further analysis would be needed to understand the specific reasons for these differences in flip rates, such as the semantic properties of the tokens or the nature of the sequences they appear in.
</details>
Figure 8: Heat map of keywords by positional bucket for class Optimism.
<details>
<summary>figures/LLM/heat_map_joy.png Details</summary>

### Visual Description
## Heatmap: Token Flip Rate by Position Bucket
### Overview
The image is a heatmap visualizing the "flip rate" of different tokens (words) across various "position buckets." The heatmap uses a color gradient to represent the flip rate, ranging from dark purple (0.0) to bright yellow (1.0). The tokens are listed on the y-axis, and the position buckets (0-9) are on the x-axis.
### Components/Axes
* **Y-axis:** "token" - Lists the tokens: excited, heyday, joyful, laughing, funny, cheering, elated, fun, cheer, playful, hilarious, shake, happy, glee, pleasing, night, breezy, exhilarating, tumblr, video, well, exhilaration, delight, oh, thanks, ll, amazing, christmas, laughter, bring, rejoice, terrific, mirth, beautiful, did, bright, friday, here, hilarity, animated.
* **X-axis:** "position bucket" - Numerical values from 0 to 9.
* **Colorbar (Right):** "flip rate" - Ranges from 0.0 (dark purple) to 1.0 (bright yellow), with increments of 0.2.
### Detailed Analysis
The heatmap displays the flip rate for each token at each position bucket. A higher flip rate (yellow) indicates a greater tendency for the token to be "flipped" or changed at that position. A lower flip rate (dark purple) indicates a lower tendency for the token to be flipped.
Here's a breakdown of some tokens and their flip rates across position buckets:
* **excited:** High flip rate (yellow) in position buckets 2, 7, 8, and 9. Low flip rate (dark purple) in position buckets 0, 1, 3, 4, 5, and 6.
* **joyful, elated:** High flip rate (yellow) in position buckets 2 through 9. Low flip rate (dark purple) in position buckets 0 and 1.
* All remaining tokens (heyday, laughing, funny, cheering, fun, cheer, playful, hilarious, shake, happy, glee, pleasing, night, breezy, exhilarating, tumblr, video, well, exhilaration, delight, oh, thanks, ll, amazing, christmas, laughter, bring, rejoice, terrific, mirth, beautiful, did, bright, friday, here, hilarity, animated): high flip rate (yellow) in position buckets 0 and 2 through 9, low flip rate (dark purple) in position bucket 1.
### Key Observations
* Position bucket 1 generally has a low flip rate across all tokens.
* Position buckets 0, 2, 3, 4, 5, 6, 7, 8, and 9 generally have a high flip rate across all tokens.
* Some tokens, like "oh" and "thanks," show a moderate flip rate (blue/green) in position bucket 1, while most others are dark purple.
### Interpretation
The heatmap suggests that a token's position within the sequence strongly influences its likelihood of being flipped. Position bucket 1 appears to be a stable position where tokens are rarely flipped, while the remaining buckets show much higher variability. The precise definitions depend on the experimental setup: plausibly, the flip rate is the fraction of perturbations of a keyword at a given position that change the model's predicted class, and a position bucket is a range of token positions within the sequence.
</details>
Figure 9: Heat map of keywords by positional bucket for class Joy.
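Under this reading, each heatmap cell reduces to a simple aggregation over perturbation trials. The sketch below is illustrative only, assuming each trial records a keyword, its relative position in the sequence, and whether the prediction flipped; none of these names come from the paper.

```python
from collections import defaultdict

def flip_rate_heatmap(trials, n_buckets=10):
    """Aggregate perturbation trials into a (keyword, position-bucket) ->
    flip-rate mapping. Each trial is (keyword, position_fraction, flipped),
    where `flipped` is True if the model's predicted class changed.
    All field names here are hypothetical."""
    counts = defaultdict(lambda: [0, 0])  # (keyword, bucket) -> [flips, total]
    for keyword, pos_frac, flipped in trials:
        # Map the relative position (0.0-1.0) to one of n_buckets buckets.
        bucket = min(int(pos_frac * n_buckets), n_buckets - 1)
        cell = counts[(keyword, bucket)]
        cell[0] += int(flipped)
        cell[1] += 1
    return {key: flips / total for key, (flips, total) in counts.items()}

# Toy trials: "happy" flips at the edges but is stable in bucket 1.
trials = [
    ("happy", 0.05, True),
    ("happy", 0.15, False),
    ("happy", 0.15, False),
    ("happy", 0.95, True),
]
rates = flip_rate_heatmap(trials)
# rates[("happy", 0)] == 1.0, rates[("happy", 1)] == 0.0
```

A heatmap like Figure 9 would then plot `rates` with keywords on one axis and buckets on the other.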
Top grounded rules
Tables 5-10 present the top five grounded rules, i.e., (Keyword, Template) pairs, associated with the top five neurons in layers 1 through 6, respectively.
| Neuron | Class | Purity | Keyword | Template | Flips | Rate |
| --- | --- | --- | --- | --- | --- | --- |
| 438 | optimism | 1.2304 | is | before_verb | 14 | 1.00 |
| | | | a | exists | 14 | 1.00 |
| | | | you | before_verb | 32 | 1.00 |
| | | | you | at_start | 25 | 1.00 |
| | | | you | after_subject | 22 | 1.00 |
| 683 | optimism | 1.2237 | is | before_verb | 17 | 1.00 |
| | | | a | exists | 13 | 1.00 |
| | | | you | before_verb | 21 | 1.00 |
| | | | you | at_start | 18 | 1.00 |
| | | | you | after_subject | 16 | 1.00 |
| 568 | optimism | 1.2081 | is | before_verb | 20 | 1.00 |
| | | | a | exists | 17 | 1.00 |
| | | | you | before_verb | 22 | 1.00 |
| | | | you | at_start | 22 | 1.00 |
| | | | you | after_subject | 19 | 1.00 |
| 389 | anger | 1.2037 | my | at_end | 12 | 1.00 |
| | | | it | at_start | 74 | 1.00 |
| | | | it | exists | 55 | 1.00 |
| | | | it | before_verb | 52 | 1.00 |
| | | | it | at_end | 43 | 1.00 |
| 757 | optimism | 1.2006 | is | before_verb | 15 | 1.00 |
| | | | a | exists | 13 | 1.00 |
| | | | you | before_verb | 26 | 1.00 |
| | | | you | at_start | 23 | 1.00 |
| | | | you | after_subject | 18 | 1.00 |
Table 5: Top-5 grounded rules in layer 1.
| Neuron | Class | Purity | Keyword | Template | Flips | Rate |
| --- | --- | --- | --- | --- | --- | --- |
| 734 | anger | 1.1921 | it | at_start | 61 | 1.00 |
| | | | it | after_verb | 73 | 1.00 |
| | | | s | after_subject | 62 | 1.00 |
| | | | that | after_subject | 84 | 1.00 |
| | | | that | after_verb | 87 | 1.00 |
| 110 | anger | 1.1875 | it | at_start | 6 | 1.00 |
| | | | it | after_verb | 6 | 1.00 |
| | | | s | after_subject | 5 | 1.00 |
| | | | that | after_subject | 3 | 1.00 |
| | | | that | after_verb | 1 | 1.00 |
| 756 | anger | 1.1739 | it | at_start | 52 | 1.00 |
| | | | it | after_verb | 63 | 1.00 |
| | | | s | after_subject | 61 | 1.00 |
| | | | that | after_subject | 64 | 1.00 |
| | | | that | after_verb | 70 | 1.00 |
| 635 | anger | 1.1722 | it | at_start | 53 | 1.00 |
| | | | it | after_verb | 60 | 1.00 |
| | | | s | after_subject | 65 | 1.00 |
| | | | that | after_subject | 69 | 1.00 |
| | | | that | after_verb | 78 | 1.00 |
| 453 | anger | 1.1628 | it | at_start | 52 | 1.00 |
| | | | it | after_verb | 62 | 1.00 |
| | | | s | after_subject | 64 | 1.00 |
| | | | that | after_subject | 69 | 1.00 |
| | | | that | after_verb | 74 | 1.00 |
Table 6: Top-5 grounded rules in layer 2.
| Neuron | Class | Purity | Keyword | Template | Flips | Rate |
| --- | --- | --- | --- | --- | --- | --- |
| 509 | anger | 1.3988 | it | at_end | 16 | 1.00 |
| | | | that | before_subject | 8 | 1.00 |
| | | | he | before_verb | 23 | 1.00 |
| | | | he | at_start | 19 | 1.00 |
| | | | user | after_subject | 16 | 1.00 |
| 495 | anger | 1.3886 | it | at_end | 10 | 1.00 |
| | | | that | before_subject | 14 | 1.00 |
| | | | he | before_verb | 10 | 1.00 |
| | | | he | at_start | 12 | 1.00 |
| | | | user | after_subject | 13 | 1.00 |
| 652 | joy | 1.3743 | amazing | after_verb | 28 | 1.00 |
| | | | live | after_verb | 27 | 1.00 |
| | | | ly | before_verb | 27 | 1.00 |
| | | | ly | at_start | 27 | 1.00 |
| | | | musically | at_end | 27 | 1.00 |
| 734 | anger | 1.3660 | it | at_end | 6 | 1.00 |
| | | | that | before_subject | 2 | 1.00 |
| | | | he | before_verb | 1 | 1.00 |
| | | | he | at_start | 1 | 1.00 |
| | | | user | after_subject | 3 | 1.00 |
Table 7: Top-5 grounded rules in layer 3.
| Neuron | Class | Purity | Keyword | Template | Flips | Rate |
| --- | --- | --- | --- | --- | --- | --- |
| 597 | joy | 1.5896 | is | before_verb | 1 | 1.00 |
| 232 | joy | 1.5464 | is | before_verb | 4 | 1.00 |
| | | | amazing | after_verb | 1 | 1.00 |
| | | | i | after_subject | 6 | 0.96 |
| | | | i | after_verb | 8 | 0.96 |
| | | | this | after_verb | 5 | 0.96 |
| 66 | joy | 1.5405 | is | before_verb | 2 | 1.00 |
| | | | this | after_verb | 1 | 0.96 |
| 399 | anger | 1.5323 | s | exists | 2 | 1.00 |
| | | | fucking | after_subject | 4 | 1.00 |
| | | | but | before_subject | 1 | 1.00 |
| | | | but | at_start | 1 | 1.00 |
| | | | but | before_verb | 1 | 1.00 |
| 71 | joy | 1.5262 | is | before_verb | 8 | 1.00 |
| | | | amazing | after_verb | 1 | 1.00 |
| | | | i | after_subject | 20 | 0.96 |
| | | | i | after_verb | 17 | 0.96 |
| | | | this | after_verb | 9 | 0.96 |
Table 8: Top-5 grounded rules in layer 4.
| Neuron | Class | Purity | Keyword | Template | Flips | Rate |
| --- | --- | --- | --- | --- | --- | --- |
| 499 | joy | 1.8198 | today | at_start | 1 | 1.00 |
| 258 | joy | 1.7694 | heyday | after_verb | 1 | 1.00 |
| 698 | joy | 1.7653 | heyday | after_verb | 1 | 1.00 |
| | | | today | at_start | 3 | 1.00 |
| 221 | joy | 1.7426 | today | at_start | 1 | 1.00 |
| 535 | joy | 1.7384 | heyday | after_verb | 2 | 1.00 |
| | | | glee | before_subject | 6 | 1.00 |
| | | | today | at_start | 2 | 1.00 |
Table 9: Top-5 grounded rules in layer 5.
| Neuron | Class | Purity | Keyword | Template | Flips | Rate |
| --- | --- | --- | --- | --- | --- | --- |
| 122 | joy | 1.8478 | laughter | after_subject | 2 | 1.00 |
| | | | hilarious | after_subject | 1 | 0.88 |
| 344 | joy | 1.8359 | playful | exists | 1 | 1.00 |
| | | | smiling | after_subject | 2 | 1.00 |
| | | | laughter | at_end | 1 | 1.00 |
| | | | laughter | after_subject | 1 | 1.00 |
| | | | hilarious | after_subject | 2 | 0.88 |
| 497 | joy | 1.8342 | laughter | after_subject | 1 | 1.00 |
| | | | hilarious | after_subject | 1 | 0.88 |
| 212 | joy | 1.8330 | playful | exists | 1 | 1.00 |
| | | | omg | before_subject | 1 | 1.00 |
| | | | smiling | after_subject | 2 | 1.00 |
| | | | laughter | at_end | 1 | 1.00 |
| | | | laughter | after_subject | 3 | 1.00 |
| 452 | joy | 1.8261 | laughter | after_subject | 1 | 1.00 |
| | | | hilarious | after_subject | 1 | 0.88 |
Table 10: Top-5 grounded rules in layer 6.
Top three rules per class for every layer
Tables 11 and 12 present the top three grounded rules for each class across all layers.
| Class | Keyword | Template | Flips | Total | Rate | Neuron |
| --- | --- | --- | --- | --- | --- | --- |
| Layer 1 | | | | | | |
| anger | terrorism | after_subject | 19 | 19 | 1.00 | 125 |
| anger | am | at_start | 19 | 19 | 1.00 | 125 |
| anger | terrorism | after_subject | 17 | 19 | 1.00 | 695 |
| joy | ly | before_verb | 27 | 27 | 1.00 | 505 |
| joy | ly | at_start | 27 | 27 | 1.00 | 505 |
| joy | musically | at_end | 27 | 27 | 1.00 | 505 |
| optimism | can | after_subject | 17 | 19 | 1.00 | 563 |
| optimism | your | after_verb | 18 | 21 | 1.00 | 438 |
| optimism | your | after_verb | 18 | 21 | 1.00 | 563 |
| sadness | want | after_subject | 17 | 19 | 1.00 | 52 |
| sadness | lost | after_subject | 24 | 30 | 1.00 | 52 |
| sadness | want | after_subject | 16 | 19 | 1.00 | 679 |
| Layer 2 | | | | | | |
| anger | have | after_subject | 67 | 86 | 1.00 | 298 |
| anger | with | after_verb | 67 | 84 | 1.00 | 298 |
| anger | have | after_subject | 65 | 86 | 1.00 | 136 |
| sadness | my | after_verb | 83 | 94 | 1.00 | 698 |
| sadness | is | after_subject | 96 | 110 | 1.00 | 712 |
| sadness | my | after_verb | 72 | 94 | 1.00 | 712 |
| Layer 3 | | | | | | |
| anger | people | before_verb | 27 | 32 | 1.00 | 189 |
| anger | why | before_verb | 23 | 30 | 1.00 | 189 |
| anger | why | before_subject | 23 | 31 | 1.00 | 189 |
| joy | ly | before_verb | 27 | 27 | 1.00 | 652 |
| joy | ly | before_verb | 27 | 27 | 1.00 | 28 |
| joy | ly | at_start | 27 | 27 | 1.00 | 652 |
| optimism | be | after_subject | 22 | 26 | 1.00 | 459 |
| optimism | be | after_subject | 22 | 26 | 1.00 | 416 |
| optimism | be | after_subject | 19 | 26 | 1.00 | 157 |
| sadness | sad | at_end | 25 | 27 | 1.00 | 316 |
| sadness | sad | at_end | 24 | 27 | 1.00 | 498 |
| sadness | sad | after_verb | 23 | 26 | 1.00 | 305 |
Table 11: Top-3 grounded rules per class (Layers 1 to 3).
| Class | Keyword | Template | Flips | Total | Rate | Neuron |
| --- | --- | --- | --- | --- | --- | --- |
| Layer 4 | | | | | | |
| anger | fucking | after_subject | 20 | 20 | 1.00 | 434 |
| anger | anger | after_verb | 18 | 19 | 1.00 | 92 |
| anger | terrorism | after_subject | 17 | 19 | 1.00 | 92 |
| joy | amazing | after_verb | 29 | 31 | 1.00 | 95 |
| joy | this | after_verb | 41 | 48 | 0.96 | 95 |
| joy | is | before_verb | 19 | 25 | 1.00 | 95 |
| optimism | be | after_subject | 22 | 26 | 1.00 | 416 |
| optimism | you | at_start | 26 | 34 | 0.97 | 416 |
| optimism | you | before_verb | 29 | 38 | 1.00 | 416 |
| sadness | sad | at_end | 19 | 27 | 1.00 | 305 |
| sadness | sad | at_end | 19 | 27 | 1.00 | 23 |
| sadness | sad | at_end | 17 | 27 | 1.00 | 296 |
| Layer 5 | | | | | | |
| anger | awful | at_end | 15 | 19 | 0.84 | 92 |
| anger | angry | after_subject | 22 | 27 | 0.93 | 92 |
| anger | angry | after_verb | 21 | 27 | 0.89 | 92 |
| optimism | it | at_start | 9 | 24 | 0.83 | 430 |
| optimism | it | before_verb | 6 | 21 | 0.81 | 430 |
| optimism | is | at_start | 8 | 29 | 0.76 | 459 |
| sadness | sad | at_end | 23 | 27 | 0.93 | 246 |
| sadness | sadness | exists | 17 | 23 | 0.83 | 433 |
| sadness | sad | at_end | 22 | 27 | 0.93 | 433 |
| Layer 6 | | | | | | |
| anger | anger | after_verb | 14 | 19 | 0.89 | 531 |
| anger | awful | at_end | 13 | 19 | 0.74 | 15 |
| anger | anger | after_verb | 13 | 19 | 0.89 | 603 |
| optimism | not | after_subject | 14 | 19 | 0.74 | 142 |
| optimism | s | after_subject | 13 | 20 | 0.75 | 142 |
| optimism | user | exists | 15 | 20 | 0.80 | 142 |
| sadness | sadness | exists | 17 | 23 | 0.78 | 298 |
| sadness | sad | exists | 25 | 31 | 0.81 | 298 |
| sadness | sad | at_end | 21 | 27 | 0.89 | 242 |
Table 12: Top-3 grounded rules per class (Layers 4 to 6).
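The per-class rankings in Tables 11 and 12 amount to a group-and-sort over grounded-rule records. A minimal sketch, assuming each rule is a flat record and that rules are ranked by flip count within each class; the paper's actual ranking criterion may differ.

```python
from itertools import groupby

def top_rules_per_class(rules, k=3):
    """Select the top-k grounded rules per class, ranked here by flip
    count (an assumed criterion). Each rule is a dict with keys such as
    class, keyword, template, flips, and neuron."""
    out = {}
    by_class = sorted(rules, key=lambda r: r["class"])
    for cls, group in groupby(by_class, key=lambda r: r["class"]):
        ranked = sorted(group, key=lambda r: r["flips"], reverse=True)
        out[cls] = ranked[:k]
    return out

# A few Layer 1 rows from Table 11 as example records.
rules = [
    {"class": "anger", "keyword": "terrorism", "template": "after_subject",
     "flips": 19, "neuron": 125},
    {"class": "anger", "keyword": "terrorism", "template": "after_subject",
     "flips": 17, "neuron": 695},
    {"class": "joy", "keyword": "ly", "template": "before_verb",
     "flips": 27, "neuron": 505},
]
top = top_rules_per_class(rules, k=1)
# top["anger"][0]["neuron"] == 125, top["joy"][0]["keyword"] == "ly"
```

Running this over all layers' rule records would reproduce the layout of Tables 11 and 12, one group of rows per class per layer.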