# Hallucination Detection in LLMs Using Spectral Features of Attention Maps
**Authors**:
- Jakub Binkowski¹, Denis Janiak¹, Albert Sawczyn¹, Bogdan Gabrys², Tomasz Kajdanowicz¹
- ¹ Wroclaw University of Science and Technology, ² University of Technology Sydney
- Correspondence: jakub.binkowski@pwr.edu.pl

## Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across various tasks but remain prone to hallucinations. Detecting hallucinations is essential for safety-critical applications, and recent methods leverage attention map properties to this end, though their effectiveness remains limited. In this work, we investigate the spectral features of attention maps by interpreting them as adjacency matrices of graph structures. We propose the $\operatorname{LapEigvals}$ method, which utilizes the top-$k$ eigenvalues of the Laplacian matrix derived from the attention maps as an input to hallucination detection probes. Empirical evaluations demonstrate that our approach achieves state-of-the-art hallucination detection performance among attention-based methods. Extensive ablation studies further highlight the robustness and generalization of $\operatorname{LapEigvals}$, paving the way for future advancements in the hallucination detection domain.
## 1 Introduction
The recent surge of interest in Large Language Models (LLMs), driven by their impressive performance across various tasks, has led to significant advancements in their training, fine-tuning, and application to real-world problems. Despite progress, many challenges remain unresolved, particularly in safety-critical applications with a high cost of errors. A significant issue is that LLMs are prone to hallucinations, i.e. generating "content that is nonsensical or unfaithful to the provided source content" (Farquhar et al., 2024; Huang et al., 2023). Since eliminating hallucinations is impossible (Lee, 2023; Xu et al., 2024), there is a pressing need for methods to detect when a model produces hallucinations. In addition, examining the internal behavior of LLMs in the context of hallucinations may yield important insights into their characteristics and support further advancements in the field. Recent studies have shown that hallucinations can be detected using internal states of the model, e.g., hidden states (Chen et al., 2024) or attention maps (Chuang et al., 2024a), and that LLMs can internally "know when they do not know" (Azaria and Mitchell, 2023; Orgad et al., 2025). We show that spectral features of attention maps coincide with hallucinations and, building on this observation, propose a novel method for their detection.
As highlighted by Barbero et al. (2024), attention maps can be viewed as weighted adjacency matrices of graphs. Building on this perspective, we performed a statistical analysis and demonstrated that the eigenvalues of a Laplacian matrix derived from attention maps serve as good predictors of hallucinations. We propose the $\operatorname{LapEigvals}$ method, which uses the top-$k$ eigenvalues of this Laplacian as input features of a probing model to detect hallucinations. We share the full implementation in a public repository: https://github.com/graphml-lab-pwr/lapeigvals.
We summarize our contributions as follows:
1. We perform a statistical analysis of the Laplacian matrix derived from attention maps and show that it serves as a better predictor of hallucinations than the previous method relying on the log-determinant of the maps.
1. Building on that analysis and on advances in the graph-processing domain, we propose leveraging the top-$k$ eigenvalues of the Laplacian matrix as features for hallucination detection probes and empirically show that this achieves state-of-the-art performance among attention-based approaches.
1. Through extensive ablation studies, we demonstrate the properties, robustness, and generalization of $\operatorname{LapEigvals}$ and suggest promising directions for further development.
## 2 Motivation
Figure 1: Visualization of $p$ -values from the two-sided Mann-Whitney U test for all layers and heads of Llama-3.1-8B across two feature types: $\operatorname{AttentionScore}$ and the $k{=}10$ Laplacian eigenvalues. These features were derived from attention maps collected when the LLM answered questions from the TriviaQA dataset. Higher $p$ -values indicate no significant difference in feature values between hallucinated and non-hallucinated examples. For $\operatorname{AttentionScore}$ , $80\%$ of heads have $p<0.05$ , while for Laplacian eigenvalues, this percentage is $91\%$ . Therefore, Laplacian eigenvalues may be better predictors of hallucinations, as feature values across more heads exhibit statistically significant differences between hallucinated and non-hallucinated examples.
Considering the attention matrix as an adjacency matrix representing a set of Markov chains, each corresponding to one layer of an LLM (Wu et al., 2024) (see Figure 2), we can leverage its spectral properties, as was done in many successful graph-based methods (Mohar, 1997; von Luxburg, 2007; Bruna et al., 2013; Topping et al., 2022). In particular, it was shown that the graph Laplacian might help to describe several graph properties, like the presence of bottlenecks (Topping et al., 2022; Black et al., 2023). We hypothesize that hallucinations may arise from disruptions in information flow, such as bottlenecks, which could be detected through the graph Laplacian.
To assess whether our hypothesis holds, we computed graph spectral features and verified if they provide a stronger coincidence with hallucinations than the previous attention-based method - $\operatorname{AttentionScore}$ (Sriramanan et al., 2024). We prompted an LLM with questions from the TriviaQA dataset (Joshi et al., 2017) and extracted attention maps, differentiating by layers and heads. We then computed the spectral features, i.e., the 10 largest eigenvalues of the Laplacian matrix from each head and layer. Further, we conducted a two-sided Mann-Whitney U test (Mann and Whitney, 1947) to compare whether Laplacian eigenvalues and the values of $\operatorname{AttentionScore}$ are different between hallucinated and non-hallucinated examples. Figure 1 shows $p$ -values for all layers and heads, indicating that $\operatorname{AttentionScore}$ often results in higher $p$ -values compared to Laplacian eigenvalues. Overall, we studied 7 datasets and 5 LLMs and found similar results (see Appendix A). Based on these findings, we propose leveraging top- $k$ Laplacian eigenvalues as features for a hallucination probe.
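The per-head significance test described above can be sketched as follows. This is a minimal illustration with synthetic features; the `pvalue_map` helper and the `(n_examples, n_layers, n_heads)` array layout are assumptions of this sketch, not the paper's released code:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def pvalue_map(feat_hall, feat_ok):
    """Two-sided Mann-Whitney U p-value per (layer, head).

    feat_hall, feat_ok: arrays of shape (n_examples, n_layers, n_heads),
    holding one scalar feature per head (e.g. a Laplacian eigenvalue).
    """
    _, n_layers, n_heads = feat_hall.shape
    pvals = np.empty((n_layers, n_heads))
    for l in range(n_layers):
        for h in range(n_heads):
            _, pvals[l, h] = mannwhitneyu(
                feat_hall[:, l, h], feat_ok[:, l, h], alternative="two-sided"
            )
    return pvals

# toy stand-ins for per-head features of hallucinated vs. correct answers
rng = np.random.default_rng(0)
feat_hall = rng.normal(0.45, 0.1, size=(200, 4, 4))
feat_ok = rng.normal(0.55, 0.1, size=(200, 4, 4))
pvals = pvalue_map(feat_hall, feat_ok)
print(f"{(pvals < 0.05).mean():.0%} of heads differ significantly")
```

A heatmap of `pvals` over layers and heads then yields a plot analogous to Figure 1.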
Figure 2: The autoregressive inference process in an LLM depicted as a graph for a single attention head $h$ (as introduced by Vaswani et al., 2017) and three generated tokens ($\hat{x}_{1},\hat{x}_{2},\hat{x}_{3}$). Here, $\mathbf{h}^{(l)}_{i}$ represents the hidden state at layer $l$ for input token $i$, while $a^{(l,h)}_{i,j}$ denotes the scalar attention score between tokens $i$ and $j$ at layer $l$ and attention head $h$. Arrow directions indicate the flow of information during inference.
## 3 Method
Figure 3: Overview of the methodology used in this work. Solid lines indicate the test-time pipeline, while dashed lines represent additional pipeline steps for generating labels for training the hallucination probe (logistic regression). The primary contribution of this work is leveraging the top- $k$ eigenvalues of the Laplacian as features for the hallucination probe, highlighted with a bold box on the diagram.
In our method, we train a hallucination probe using only attention maps extracted during LLM inference, as illustrated in Figure 2. The attention map is a matrix of attention scores over all tokens processed during inference, and the hallucination probe is a logistic regression model that takes features derived from the attention maps as input. This work's core contribution is using the top-$k$ eigenvalues of the Laplacian matrix as these input features, which we detail below.
Denote $\mathbf{A}^{(l,h)}∈\mathbb{R}^{T× T}$ as the attention map matrix for layer $l∈\{1,\dots,L\}$ and attention head $h∈\{1,\dots,H\}$, where $T$ is the total number of tokens processed by the LLM (including input tokens), $L$ is the number of layers (transformer blocks), and $H$ is the number of attention heads. The attention matrix is row-stochastic, meaning each row sums to 1 ($\sum_{j} a^{(l,h)}_{ij}=1$ for all $i$). It is also lower triangular ($a^{(l,h)}_{ij}=0$ for all $j>i$) and non-negative ($a^{(l,h)}_{ij}≥ 0$ for all $i,j$). We can view $\mathbf{A}^{(l,h)}$ as a weighted adjacency matrix of a directed graph, where each node represents a processed token and each directed edge from token $i$ to token $j$ is weighted by the corresponding attention score, as depicted in Figure 2.
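These three properties can be verified on a toy causal-attention map. The masked-softmax construction below is a stand-in for a real model's attention outputs, not part of the method itself:

```python
import numpy as np

# toy causal self-attention map for T = 5 tokens: row-wise softmax over
# scores with future positions masked out, mimicking a decoder-only LLM head
rng = np.random.default_rng(0)
T = 5
scores = rng.normal(size=(T, T))
scores[np.triu_indices(T, k=1)] = -np.inf         # causal mask: no future tokens
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)

assert np.allclose(A.sum(axis=1), 1.0)            # row-stochastic
assert np.allclose(A, np.tril(A))                 # lower triangular
assert (A >= 0).all()                             # non-negative
```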
Then, we define the Laplacian of a layer $l$ and attention head $h$ as:
$$
\mathbf{L}^{(l,h)}=\mathbf{D}^{(l,h)}-\mathbf{A}^{(l,h)}, \tag{1}
$$
where $\mathbf{D}^{(l,h)}$ is a diagonal degree matrix. Since the attention map defines a directed graph, we distinguish between the in-degree and out-degree matrices. The in-degree is computed as the sum of attention scores from preceding tokens, and due to the softmax normalization, it is uniformly 1. Therefore, we define $\mathbf{D}^{(l,h)}$ as the out-degree matrix, which quantifies the total attention a token receives from tokens that follow it. To ensure these values remain independent of the sequence length, we normalize them by the number of subsequent tokens (i.e., the number of outgoing edges):
$$
d^{(l,h)}_{ii}=\frac{\sum_{u}{a^{(l,h)}_{ui}}}{T-i}, \tag{2}
$$
where $i,u∈\{0,\dots,T-1\}$ denote token indices. The Laplacian defined this way is bounded, i.e., $\mathbf{L}^{(l,h)}_{ij}∈\left[-1,1\right]$ (see Appendix B for proofs). Intuitively, the diagonal entry of the resulting Laplacian for each processed token is the average attention that token receives from subsequent tokens, reduced by its self-attention score. As eigenvalues of the Laplacian can summarize information flow in a graph (von Luxburg, 2007; Topping et al., 2022), we take the eigenvalues of $\mathbf{L}^{(l,h)}$, which coincide with its diagonal entries because the matrix is lower triangular, and sort them in ascending order:
$$
\tilde{z}^{(l,h)}=\operatorname{sort}\left(\operatorname{diag}\left(\mathbf{L}^{(l,h)}\right)\right) \tag{3}
$$
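A minimal NumPy sketch of Eqs. (1)–(3) for a single head follows; the function name and toy attention map are illustrative assumptions, not the released implementation:

```python
import numpy as np

def laplacian_eigvals(A):
    """Sorted eigenvalues of L = D - A for one attention head (Eqs. 1-3).

    A: (T, T) lower-triangular, row-stochastic attention map. D is the
    normalised out-degree matrix of Eq. 2; since L is lower triangular,
    its eigenvalues are exactly its diagonal entries.
    """
    T = A.shape[0]
    d = A.sum(axis=0) / (T - np.arange(T))   # Eq. 2: column sums / # later tokens
    return np.sort(d - np.diag(A))           # diag(L) = d_ii - a_ii, ascending

# toy causal attention map: row-wise softmax over lower-triangular scores
rng = np.random.default_rng(0)
T = 6
scores = rng.normal(size=(T, T))
scores[np.triu_indices(T, k=1)] = -np.inf
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)

z = laplacian_eigvals(A)
print(z)  # all values lie in [-1, 1], consistent with the bound above
```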
Recently, Zhu et al. (2024) found that features from the entire token sequence, rather than from a single token, improve hallucination detection. Similarly, Kim et al. (2024) demonstrated that using information from all layers, instead of a single layer in isolation, yields better results on this task. Motivated by these findings, our method uses features from all tokens and all layers as input to the probe. Therefore, we take the top-$k$ largest eigenvalues from each head and layer and concatenate them into a single feature vector $z$, where $k$ is a hyperparameter of our method:
$$
z=\mathop{\big\Vert}_{l=1}^{L}\,\mathop{\big\Vert}_{h=1}^{H}\left[\tilde{z}^{(l,h)}_{T},\tilde{z}^{(l,h)}_{T-1},\dotsc,\tilde{z}^{(l,h)}_{T-k+1}\right] \tag{4}
$$
Since LLMs contain dozens of layers and heads, the probe input vector $z∈\mathbb{R}^{L· H· k}$ can still be high-dimensional. Thus, we project it to a lower dimensionality using PCA (Jolliffe and Cadima, 2016). We call our approach $\operatorname{LapEigvals}$ .
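Putting Eqs. (2)–(4) and the PCA projection together, the feature-extraction step might look as follows. The `(n_layers, n_heads, T, T)` array layout, function name, and toy data are assumptions of this sketch (Hugging Face models, for instance, return one attention tensor per layer), not the released implementation:

```python
import numpy as np
from sklearn.decomposition import PCA

def lapeigvals_features(attn, k):
    """LapEigvals feature vector (Eq. 4) for one example.

    attn: (n_layers, n_heads, T, T) attention maps. Returns the k largest
    Laplacian eigenvalues of every head, concatenated into a vector of
    length n_layers * n_heads * k.
    """
    T = attn.shape[-1]
    d = attn.sum(axis=2) / (T - np.arange(T))             # Eq. 2, per (l, h)
    eig = np.sort(d - np.diagonal(attn, axis1=2, axis2=3), axis=-1)
    return eig[..., -k:][..., ::-1].reshape(-1)           # k largest, descending

# toy causal attention maps for 50 examples, 3 layers, 2 heads, T = 8 tokens
rng = np.random.default_rng(0)
n, L, H, T, k = 50, 3, 2, 8, 5
scores = rng.normal(size=(n, L, H, T, T))
mask = np.triu(np.ones((T, T), dtype=bool), k=1)
scores[..., mask] = -np.inf                               # causal mask
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

X = np.stack([lapeigvals_features(a, k) for a in attn])   # (50, L*H*k)
X_red = PCA(n_components=10).fit_transform(X)             # 512 dims in the paper
print(X.shape, X_red.shape)
```

The PCA is fit on the whole training set of feature vectors; each test example is then projected with the same fitted transform.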
## 4 Experimental setup
The overview of the methodology used in this work is presented in Figure 3. Next, we describe each step of the pipeline in detail.
### 4.1 Dataset construction
We use annotated QA datasets to construct the hallucination detection datasets and label incorrect LLM answers as hallucinations. To assess the correctness of generated answers, we followed prior work (Orgad et al., 2025) and adopted the llm-as-judge approach (Zheng et al., 2023), with the exception of one dataset where exact match evaluation against ground-truth answers was possible. For llm-as-judge, we prompted a large LLM to classify each response as either hallucination, non-hallucination, or rejected, where rejected indicates that it was unclear whether the answer was correct, e.g., the model refused to answer due to insufficient knowledge. Based on the manual qualitative inspection of several LLMs, we employed gpt-4o-mini (OpenAI et al., 2024) as the judge model since it provides the best trade-off between accuracy and cost. To confirm the reliability of the labels, we additionally verified agreement with the larger model, gpt-4.1, on Llama-3.1-8B and found that the agreement between models falls within the acceptable range widely adopted in the literature (see Appendix F).
For the experiments, we selected 7 QA datasets previously utilized in the context of hallucination detection (Chen et al., 2024; Kossen et al., 2024; Chuang et al., 2024b; Mitra et al., 2024). Specifically, we used the validation set of NQ-Open (Kwiatkowski et al., 2019), comprising $3{,}610$ question-answer pairs, and the validation set of TriviaQA (Joshi et al., 2017), containing $7{,}983$ pairs. To evaluate our method on longer inputs, we employed the development set of CoQA (Reddy et al., 2019) and the rc.nocontext portion of SQuADv2 (Rajpurkar et al., 2018), with $5{,}928$ and $9{,}960$ examples, respectively. Additionally, we incorporated the QA part of the HaluEvalQA (Li et al., 2023) dataset, containing $10{,}000$ examples, and the generation part of the TruthfulQA (Lin et al., 2022) benchmark with $817$ examples. Finally, we used the test split of the GSM8k dataset (Cobbe et al., 2021) with $1{,}319$ grade school math problems, evaluated by exact match against the labels. For TriviaQA, CoQA, and SQuADv2, we followed the same preprocessing procedure as Chen et al. (2024).
We generate answers using 5 open-source LLMs: Llama-3.1-8B (hf.co/meta-llama/Llama-3.1-8B-Instruct) and Llama-3.2-3B (hf.co/meta-llama/Llama-3.2-3B-Instruct) (Grattafiori et al., 2024), Phi-3.5 (hf.co/microsoft/Phi-3.5-mini-instruct) (Abdin et al., 2024), Mistral-Nemo (hf.co/mistralai/Mistral-Nemo-Instruct-2407) (Mistral AI Team and NVIDIA, 2024), and Mistral-Small-24B (hf.co/mistralai/Mistral-Small-24B-Instruct-2501) (Mistral AI Team, 2025). We use two softmax temperatures for each LLM when decoding ($temp∈\{0.1,1.0\}$) and a single prompt: the one in Listing 3 for all datasets except GSM8k, which uses the prompt in Listing 5. Overall, we evaluated hallucination detection probes on 10 LLM configurations and 7 QA datasets. We present the class frequencies of answers for each configuration in Figure 9 (Appendix E).
### 4.2 Hallucination Probe
As the hallucination probe, we use a logistic regression model, taking the implementation from scikit-learn (Pedregosa et al., 2011) with default parameters, except for `max_iter=2000` and `class_weight="balanced"`. For the top-$k$ eigenvalues, we tested 5 values of $k∈\{5,10,20,50,100\}$ and selected the result with the highest efficacy; for datasets whose examples have fewer than 100 tokens, we stopped at $k{=}50$. All eigenvalues are projected with PCA onto 512 dimensions, except in per-layer experiments where there may be fewer than 512 features. In these cases, we apply a PCA projection matching the input feature dimensionality, i.e., only decorrelating the features. As the evaluation metric, we use AUROC on the test split (additional results reporting Precision and Recall are given in Appendix G.1).
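A minimal sketch of this probe setup, using synthetic stand-ins for the eigenvalue features and judge labels; only the `max_iter` and `class_weight` settings come from the description above, and the data, dimensions, and split are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# toy stand-ins: 600 examples of 64-dim features with a few informative dims
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))
X[:, :4] *= 3.0  # give the informative features most of the variance
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=600) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
probe = make_pipeline(
    PCA(n_components=32),  # the paper projects to 512 dimensions
    LogisticRegression(max_iter=2000, class_weight="balanced"),
)
probe.fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
print(f"test AUROC: {auroc:.3f}")
```

Wrapping PCA and the classifier in one pipeline ensures the projection is fit only on training data, avoiding leakage into the test AUROC.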
### 4.3 Baselines
Our method is a supervised approach that detects hallucinations using only attention maps. For a fair comparison, we adapt the unsupervised $\operatorname{AttentionScore}$ (Sriramanan et al., 2024) into a supervised baseline, $\operatorname{AttnLogDet}$, which uses the log-determinant of each head's attention map as a separate feature instead of summing them; we also include the original $\operatorname{AttentionScore}$, computed as the sum of log-determinants over heads, for reference. To evaluate the effectiveness of the proposed Laplacian eigenvalues, we compare them to the eigenvalues of raw attention maps, denoted $\operatorname{AttnEigvals}$. Extended per-layer results for each approach are provided in Appendix G.2, while Appendix G.4 presents a comparison with a method based on hidden states. Implementation and hardware details are provided in Appendix C.
## 5 Results
Table 1: Test AUROC for $\operatorname{LapEigvals}$ and several baseline methods. AUROC values were obtained in a single run of logistic regression training on features from a dataset generated with $temp{=}1.0$ . We mark results for $\operatorname{AttentionScore}$ in gray as it is an unsupervised approach, not directly comparable to the others. In bold, we highlight the best performance individually for each dataset and LLM. See Appendix G for extended results.
| LLM | Method | CoQA | GSM8k | HaluEvalQA | NQ-Open | SQuADv2 | TriviaQA | TruthfulQA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama3.1-8B | $\operatorname{AttentionScore}$ | 0.493 | 0.720 | 0.589 | 0.556 | 0.538 | 0.532 | 0.541 |
| Llama3.1-8B | $\operatorname{AttnLogDet}$ | 0.769 | 0.826 | 0.827 | 0.793 | 0.748 | 0.842 | 0.814 |
| Llama3.1-8B | $\operatorname{AttnEigvals}$ | 0.782 | 0.838 | 0.819 | 0.790 | 0.768 | 0.843 | 0.833 |
| Llama3.1-8B | $\operatorname{LapEigvals}$ | 0.830 | 0.872 | 0.874 | 0.827 | 0.791 | 0.889 | 0.829 |
| Llama3.2-3B | $\operatorname{AttentionScore}$ | 0.509 | 0.717 | 0.588 | 0.546 | 0.530 | 0.515 | 0.581 |
| Llama3.2-3B | $\operatorname{AttnLogDet}$ | 0.700 | 0.851 | 0.801 | 0.690 | 0.734 | 0.789 | 0.795 |
| Llama3.2-3B | $\operatorname{AttnEigvals}$ | 0.724 | 0.768 | 0.819 | 0.694 | 0.749 | 0.804 | 0.723 |
| Llama3.2-3B | $\operatorname{LapEigvals}$ | 0.812 | 0.870 | 0.828 | 0.693 | 0.757 | 0.832 | 0.787 |
| Phi3.5 | $\operatorname{AttentionScore}$ | 0.520 | 0.666 | 0.541 | 0.594 | 0.504 | 0.540 | 0.554 |
| Phi3.5 | $\operatorname{AttnLogDet}$ | 0.745 | 0.842 | 0.818 | 0.815 | 0.769 | 0.848 | 0.755 |
| Phi3.5 | $\operatorname{AttnEigvals}$ | 0.771 | 0.794 | 0.829 | 0.798 | 0.782 | 0.850 | 0.802 |
| Phi3.5 | $\operatorname{LapEigvals}$ | 0.821 | 0.885 | 0.836 | 0.826 | 0.795 | 0.872 | 0.777 |
| Mistral-Nemo | $\operatorname{AttentionScore}$ | 0.493 | 0.630 | 0.531 | 0.529 | 0.510 | 0.532 | 0.494 |
| Mistral-Nemo | $\operatorname{AttnLogDet}$ | 0.728 | 0.856 | 0.798 | 0.769 | 0.772 | 0.812 | 0.852 |
| Mistral-Nemo | $\operatorname{AttnEigvals}$ | 0.778 | 0.842 | 0.781 | 0.761 | 0.758 | 0.821 | 0.802 |
| Mistral-Nemo | $\operatorname{LapEigvals}$ | 0.835 | 0.890 | 0.833 | 0.795 | 0.812 | 0.865 | 0.828 |
| Mistral-Small-24B | $\operatorname{AttentionScore}$ | 0.516 | 0.576 | 0.504 | 0.462 | 0.455 | 0.463 | 0.451 |
| Mistral-Small-24B | $\operatorname{AttnLogDet}$ | 0.766 | 0.853 | 0.842 | 0.747 | 0.753 | 0.833 | 0.735 |
| Mistral-Small-24B | $\operatorname{AttnEigvals}$ | 0.805 | 0.856 | 0.848 | 0.751 | 0.760 | 0.844 | 0.765 |
| Mistral-Small-24B | $\operatorname{LapEigvals}$ | 0.861 | 0.925 | 0.882 | 0.791 | 0.820 | 0.876 | 0.748 |
Table 1 presents the results of our method compared to the baselines. $\operatorname{LapEigvals}$ achieved the best performance among all tested methods on 6 out of 7 datasets. Moreover, our method consistently performs well across all 5 LLM architectures, ranging from 3 up to 24 billion parameters. TruthfulQA was the only exception, where $\operatorname{LapEigvals}$ was the second-best approach, which might stem from the small size of the dataset or its severe class imbalance (depicted in Figure 9). In contrast, using eigenvalues of vanilla attention maps in $\operatorname{AttnEigvals}$ leads to worse performance, which suggests that the transformation to the Laplacian is the crucial step in uncovering latent features of an LLM that correspond to hallucinations. In Appendix G, we show that $\operatorname{LapEigvals}$ also consistently exhibits a smaller generalization gap, i.e., the difference between training and test performance is smaller for our method. While the $\operatorname{AttentionScore}$ method performed poorly, it is fully unsupervised and should not be directly compared to the other approaches. However, its supervised counterpart, $\operatorname{AttnLogDet}$, remains inferior to the methods based on spectral features, namely $\operatorname{AttnEigvals}$ and $\operatorname{LapEigvals}$. Table 6 in Appendix G.2 presents extended results, including per-layer and all-layer breakdowns, the two temperatures used during answer generation, and a comparison between training and test AUROC. Moreover, compared to probes based on hidden states, our method performs best in most of the tested settings, as shown in Appendix G.4.
## 6 Ablation studies
To better understand the behavior of our method under different conditions, we conduct a comprehensive ablation study. This analysis provides valuable insights into the factors driving the $\operatorname{LapEigvals}$ performance and highlights the robustness of our approach across various scenarios. In order to ensure reliable results, we perform all studies on the TriviaQA dataset, which has a moderate input size and number of examples.
### 6.1 How does the number of eigenvalues influence performance?
First, we verify how the number of eigenvalues influences the performance of the hallucination probe and present results for Mistral-Small-24B in Figure 4 (results for all models are shown in Figure 10 in Appendix H). Generally, using more eigenvalues improves performance, but performance varies less across values of $k$ for $\operatorname{LapEigvals}$ than for the baselines. Moreover, $\operatorname{LapEigvals}$ achieves significantly better performance with smaller input sizes: $\operatorname{AttnEigvals}$ with the largest $k{=}100$ fails to surpass $\operatorname{LapEigvals}$ at $k{=}5$. These results confirm that spectral features derived from the Laplacian carry a robust signal indicating the presence of hallucinations and highlight the strength of our method.
Figure 4: Probe performance across different top-$k$ eigenvalues: $k \in \{5,10,25,50,100\}$ for the TriviaQA dataset with $temp{=}1.0$ and Mistral-Small-24B LLM.
6.2 Does using all layers at once improve performance?
Second, we demonstrate that using all layers of an LLM instead of a single one improves performance. In Figure 5, we compare per-layer and all-layer efficacy for Mistral-Small-24B (results for all models are showcased in Figure 11 in Appendix H). For the per-layer approach, better performance is generally achieved with deeper LLM layers. Notably, the peak-performing layer varies across LLMs, requiring an additional search for each new LLM. In contrast, the all-layer probes consistently outperform the best per-layer probes across all LLMs. This finding suggests that information indicating hallucinations is spread across many layers of the LLM, and considering them in isolation limits detection accuracy. Further, Table 6 in Appendix G summarises outcomes for the two variants across all datasets and LLM configurations examined in this work.
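The all-layer probe in this comparison amounts to a linear classifier over the concatenation of every layer's features. A minimal scikit-learn sketch under that assumption (the array names, sizes, and synthetic features are illustrative, not the paper's code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_examples, n_layers, n_heads, k = 200, 4, 2, 5

# Synthetic stand-in for per-layer spectral features: one (n_heads * k)
# vector per layer per example; labels 1 = hallucination.
per_layer = rng.normal(size=(n_examples, n_layers, n_heads * k))
labels = rng.integers(0, 2, size=n_examples)

# Per-layer probe: trained on a single layer's features only.
single_probe = LogisticRegression(max_iter=1000).fit(per_layer[:, -1, :], labels)

# All-layer probe: concatenate every layer's features into one vector.
all_feats = per_layer.reshape(n_examples, -1)
all_probe = LogisticRegression(max_iter=1000).fit(all_feats, labels)

scores = all_probe.predict_proba(all_feats)[:, 1]
auroc = roc_auc_score(labels, scores)
```

With real features, the concatenated input lets the probe weight evidence from every layer at once instead of committing to one layer up front.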
<details>
<summary>x5.png Details</summary>

Line chart of Test AUROC across layer indices (0-38) for per-layer AttnEigvals, LapEigvals, and AttnLogDet probes, with horizontal reference lines marking the corresponding all-layer probes. Per-layer performance starts around 0.62-0.65 at the earliest layers and generally improves with depth, peaking in the mid-to-late layers at roughly 0.76-0.78, while every per-layer probe remains below the all-layer reference lines (approximately 0.84-0.85).
</details>
Figure 5: Analysis of model performance across different layers for Mistral-Small-24B and TriviaQA dataset with $temp{=}1.0$ and $k{=}100$ top eigenvalues (results for models operating on all layers provided for reference).
6.3 Does sampling temperature influence results?
Here, we compare $\operatorname{LapEigvals}$ to the baselines on hallucination datasets in which each dataset contains answers generated at a specific decoding temperature. Higher temperatures typically produce more hallucinated examples (Lee, 2023; Renze, 2024), leading to dataset imbalance. To mitigate this, for each temperature we sample a balanced subset of $1{,}000$ hallucinated and $1{,}000$ non-hallucinated examples $10$ times and train hallucination probes on each draw. Interestingly, in Figure 6, we observe that all models improve at higher temperatures, but $\operatorname{LapEigvals}$ consistently achieves the best performance across all considered temperature values. The correlation of efficacy with temperature may be attributed to differences in the characteristics of hallucinations at higher temperatures compared to lower ones (Renze, 2024). Hallucination detection might also be facilitated at higher temperatures due to underlying properties of the softmax function (Veličković et al., 2024); further exploration of this direction is left for future work.
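The balanced-sampling protocol described above can be sketched as follows. This is a simplified sketch: `is_hallucination` is a hypothetical label array for one temperature-specific dataset, and the per-class count is shrunk from the paper's 1,000 to keep the example small.

```python
import numpy as np

def balanced_indices(is_hallucination: np.ndarray, n_per_class: int,
                     rng: np.random.Generator) -> np.ndarray:
    """Sample n_per_class indices from each class without replacement."""
    pos = np.flatnonzero(is_hallucination == 1)
    neg = np.flatnonzero(is_hallucination == 0)
    idx = np.concatenate([rng.choice(pos, n_per_class, replace=False),
                          rng.choice(neg, n_per_class, replace=False)])
    rng.shuffle(idx)
    return idx

# Repeat the balanced draw 10 times, as in the paper; a probe would be
# trained on each subset and the scores averaged.
rng = np.random.default_rng(0)
is_hallucination = (rng.random(500) < 0.6).astype(int)  # imbalanced labels
subsets = [balanced_indices(is_hallucination, 50, rng) for _ in range(10)]
```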
<details>
<summary>x6.png Details</summary>

Line chart of Test AUROC versus sampling temperature (0.1, 0.5, 1.0, 2.0), with error bars, for AttnLogDet, AttnEigvals, and LapEigvals. All methods improve at higher temperatures (from roughly 0.79-0.85 at $temp{=}0.1$ to roughly 0.87-0.91 at $temp{=}2.0$), and LapEigvals achieves the highest AUROC at every temperature.
</details>
Figure 6: Test AUROC for different sampling $temp$ values during answer decoding on the TriviaQA dataset, using $k{=}100$ eigenvalues for $\operatorname{LapEigvals}$ and $\operatorname{AttnEigvals}$ with the Llama-3.1-8B LLM. Error bars indicate the standard deviation over 10 balanced samples containing $N=1000$ examples per class.
6.4 How does $\operatorname{LapEigvals}$ generalize?
To check whether our method generalizes across datasets, we trained the hallucination probe on features from the training split of one QA dataset and evaluated it on features from the test split of a different QA dataset. Due to space limitations, we present results for selected datasets and provide extended results and absolute performance values in Appendix I. Figure 7 showcases the percent drop in Test AUROC when training on a different dataset compared to training and testing on the same QA dataset. We observe that $\operatorname{LapEigvals}$ exhibits a performance drop comparable to the baselines and, in several cases, generalizes best. Interestingly, all methods exhibit poor generalization on TruthfulQA and GSM8K. We hypothesize that the weak performance on TruthfulQA arises from its limited size and class imbalance, whereas the difficulty on GSM8K likely reflects its distinct domain, which has been shown to hinder hallucination detection (Orgad et al., 2025). Additionally, in Appendix I, we show that $\operatorname{LapEigvals}$ achieves the highest test performance in all scenarios except TruthfulQA.
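The generalization metric reported in Figure 7 is a percent drop in Test AUROC. A sketch, assuming a relative drop with respect to the in-domain score (the function name is ours):

```python
def auroc_drop_pct(auroc_in_domain: float, auroc_cross: float) -> float:
    """Percent drop in Test AUROC when the probe is tested on a dataset
    different from the one it was trained on (lower is better)."""
    return 100.0 * (auroc_in_domain - auroc_cross) / auroc_in_domain

# E.g., an in-domain AUROC of 0.88 falling to 0.792 cross-dataset
# corresponds to a 10% drop.
drop = auroc_drop_pct(0.88, 0.792)
```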
<details>
<summary>x7.png Details</summary>

Grouped bar charts of the percent drop in Test AUROC (y-axis: 0-50%) for AttnLogDet, AttnEigvals, and LapEigvals probes trained on one dataset (sub-chart titles: SQuADv2, NQOpen, HaluEvalQA, CoQA) and evaluated on the others (x-axis: TriviaQA, NQOpen, HaluEvalQA, GSM8K, CoQA, SQuADv2, TruthfulQA). LapEigvals shows drops comparable to, and in several cases smaller than, the baselines, while all three methods degrade most on TruthfulQA and GSM8K.
</details>
Figure 7: Generalization across datasets measured as a percent performance drop in Test AUROC (less is better) when trained on one dataset and tested on the other. Training datasets are indicated in the plot titles, while test datasets are shown on the $x$ -axis. Results computed on Llama-3.1-8B with $k{=}100$ top eigenvalues and $temp{=}1.0$ . Results for all datasets are presented in Appendix I.
6.5 How does performance vary across prompts?
Lastly, to assess the stability of our method across different prompts used for answer generation, we compared hallucination probes trained on features derived from four distinct prompts, the contents of which are included in Appendix M. As shown in Table 2, $\operatorname{LapEigvals}$ consistently outperforms all baselines across all four prompts. While performance varies across prompts, $\operatorname{LapEigvals}$ demonstrates the lowest standard deviation ($0.005$) compared to $\operatorname{AttnLogDet}$ ($0.007$) and $\operatorname{AttnEigvals}$ ($0.016$), indicating its greater robustness.
Table 2: Test AUROC across four different prompts for answers on the TriviaQA dataset using Llama-3.1-8B with $temp{=}1.0$ and $k{=}50$ (some prompts led to answers of fewer than 100 tokens). Prompt $\boldsymbol{p_{3}}$ was the main one used to compare our method to the baselines, as presented in Table 1.
| | $p_1$ | $p_2$ | $p_3$ | $p_4$ |
| --- | --- | --- | --- | --- |
| $\operatorname{AttnLogDet}$ | 0.847 | 0.855 | 0.842 | 0.860 |
| $\operatorname{AttnEigvals}$ | 0.840 | 0.870 | 0.842 | 0.875 |
| $\operatorname{LapEigvals}$ | 0.882 | 0.890 | 0.888 | 0.895 |
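The robustness comparison can be recomputed directly from the Table 2 rows (population standard deviation across the four prompts):

```python
import numpy as np

# Test AUROC per prompt, one row per method, values from Table 2.
table2 = {
    "AttnLogDet":  [0.847, 0.855, 0.842, 0.860],
    "AttnEigvals": [0.840, 0.870, 0.842, 0.875],
    "LapEigvals":  [0.882, 0.890, 0.888, 0.895],
}
# np.std uses ddof=0 (population standard deviation) by default.
stds = {name: float(np.std(vals)) for name, vals in table2.items()}
# LapEigvals has both the highest mean AUROC and the smallest spread.
```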
7 Related Work
Hallucinations in LLMs have been proven to be inevitable (Xu et al., 2024), and to detect them, one can leverage either black-box or white-box approaches. The former uses only the outputs of an LLM, while the latter uses hidden states, attention maps, or logits corresponding to generated tokens.
Black-box approaches focus on the text generated by LLMs. For instance, Li et al. (2024) verified the truthfulness of factual statements using external knowledge sources, though this approach relies on the availability of additional resources. Alternatively, SelfCheckGPT (Manakul et al., 2023) generates multiple responses to the same prompt and evaluates their consistency, with low consistency indicating potential hallucination.
White-box methods have emerged as a promising approach for detecting hallucinations (Farquhar et al., 2024; Azaria and Mitchell, 2023; Arteaga et al., 2024; Orgad et al., 2025). These methods are universal across all LLMs and do not require additional domain adaptation compared to black-box ones (Farquhar et al., 2024). They draw inspiration from seminal works on analyzing the internal states of simple neural networks (Alain and Bengio, 2016), which introduced linear classifier probes – models operating on the internal states of neural networks. Linear probes have been widely applied to the internal states of LLMs, notably for detecting hallucinations.
One of the first such probes was SAPLMA (Azaria and Mitchell, 2023), which demonstrated that the correctness of generated text can be predicted directly from an LLM's hidden states. Further, the INSIDE method (Chen et al., 2024) tackled hallucination detection by sampling multiple responses from an LLM and evaluating consistency between their hidden states using a normalized sum of the eigenvalues of their covariance matrix. Farquhar et al. (2024) proposed a complementary probabilistic approach, employing entropy to quantify the model's intrinsic uncertainty. Their method involves generating multiple responses, clustering them by semantic similarity, and calculating Semantic Entropy using an appropriate estimator. To address concerns regarding the validity of LLM probes, Marks and Tegmark (2024) introduced a high-quality QA dataset with simple true/false answers and causally demonstrated that the truthfulness of such statements is linearly represented in LLMs, which supports the use of probes for short texts.
Self-consistency methods (Liang et al., 2024), like INSIDE or Semantic Entropy, require multiple runs of an LLM for each input example, which substantially limits their applicability. Motivated by this limitation, Kossen et al. (2024) proposed the Semantic Entropy Probe, a small model trained to predict the expensive Semantic Entropy (Farquhar et al., 2024) from an LLM's hidden states. Notably, Orgad et al. (2025) explored how LLMs encode information about truthfulness and hallucinations. First, they revealed that truthfulness is concentrated in specific tokens. Second, they found that probing classifiers on LLM representations do not generalize well across datasets, especially datasets requiring different skills, which we confirmed in Section 6.4. Lastly, they showed that probes can select the correct answer from multiple generated answers with reasonable accuracy, meaning LLMs make mistakes at the decoding stage despite internally encoding the correct answer.
Recent studies have started to explore hallucination detection exclusively from attention maps. Chuang et al. (2024a) introduced the lookback ratio, which measures how much attention LLMs allocate to relevant input parts when answering questions based on the provided context. The work most closely related to ours is that of Sriramanan et al. (2024), which introduces the $\operatorname{AttentionScore}$ method. Although their procedure is unsupervised and computationally efficient, the authors note that its performance can depend heavily on the specific layer from which the score is extracted. Compared to $\operatorname{AttentionScore}$, our method is fully supervised and grounded in graph theory, as we interpret inference in an LLM as a graph. While $\operatorname{AttentionScore}$ aggregates only the attention diagonal to compute its log-determinant, we instead derive features from the graph Laplacian, which captures all attention scores (see Eq. (1) and (2)). Additionally, we utilize all layers for detecting hallucinations rather than a single one, demonstrating the effectiveness of this approach. We also show that $\operatorname{AttentionScore}$ performs poorly on the datasets we evaluated. Nonetheless, we drew inspiration from their approach, particularly in using the lower triangular structure of matrices when constructing features for the hallucination probe.
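The contrast with the diagonal-only baseline can be made concrete: since only the attention diagonal is aggregated, the log-determinant of the resulting diagonal matrix reduces to a sum of log diagonal entries. This is an illustrative simplification of Sriramanan et al.'s $\operatorname{AttentionScore}$, not their exact implementation:

```python
import numpy as np

def attention_log_det(attn: np.ndarray, eps: float = 1e-12) -> float:
    """Log-determinant of the diagonal of an attention map:
    log det(diag(A)) = sum_i log A_ii (illustrative simplification)."""
    return float(np.sum(np.log(np.diag(attn) + eps)))

rng = np.random.default_rng(0)
attn = np.tril(rng.random((8, 8)))
attn /= attn.sum(axis=1, keepdims=True)  # row-stochastic attention map
score = attention_log_det(attn)
```

A Laplacian spectrum, by contrast, is a function of every entry of the map, not just its diagonal.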
8 Conclusions
In this work, we demonstrated that the spectral features of LLMs’ attention maps, specifically the eigenvalues of the Laplacian matrix, carry a signal capable of detecting hallucinations. Specifically, we proposed the $\operatorname{LapEigvals}$ method, which employs the top- $k$ eigenvalues of the Laplacian as input to the hallucination detection probe. Through extensive evaluations, we empirically showed that our method consistently achieves state-of-the-art performance among all tested approaches. Furthermore, multiple ablation studies demonstrated that our method remains stable across varying numbers of eigenvalues, diverse prompts, and generation temperatures while offering reasonable generalization.
In addition, we hypothesize that self-supervised learning (Balestriero et al., 2023) could yield a more robust and generalizable approach while uncovering non-trivial intrinsic features of attention maps. Notably, results such as those in Section 6.3 suggest intriguing connections to recent advancements in LLM research (Veličković et al., 2024; Barbero et al., 2024), highlighting promising directions for future investigation.
Limitations
**Supervised method** In our approach, one must provide labelled hallucinated and non-hallucinated examples to train the hallucination probe. While labelling can be handled by an LLM-as-judge, it might introduce noise or pose a risk of overfitting.

**Limited generalization across LLM architectures** The probe is incompatible with LLMs having different head and layer configurations. Developing architecture-agnostic hallucination probes is left for future work.

**Minimum length requirement** Computing the top-$k$ Laplacian eigenvalues requires attention maps spanning at least $k$ tokens (e.g., $k{=}100$ requires 100 tokens).

**Open LLMs** Our method requires access to the internal states of the LLM; thus, it cannot be applied to closed LLMs.

**Risks** Please note that the proposed method was tested on selected LLMs and English data, so applying it to untested domains and tasks carries a considerable risk without additional validation.
Acknowledgements
We sincerely thank Piotr Bielak for his valuable review and insightful feedback, which helped improve this work. This work was funded by the European Union under the Horizon Europe grant OMINO – Overcoming Multilevel INformation Overload (grant number 101086321, https://ominoproject.eu/). Views and opinions expressed are those of the authors alone and do not necessarily reflect those of the European Union or the European Research Executive Agency. Neither the European Union nor the European Research Executive Agency can be held responsible for them. It was also co-financed with funds from the Polish Ministry of Education and Science under the programme entitled International Co-Financed Projects, grant no. 573977. We gratefully acknowledge the Wroclaw Centre for Networking and Supercomputing for providing the computational resources used in this work. This work was co-funded by the National Science Centre, Poland under CHIST-ERA Open & Re-usable Research Data & Software (grant number 2022/04/Y/ST6/00183). The authors used ChatGPT to improve the clarity and readability of the manuscript.
References
- Abdin et al. (2024) Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Qin Cai, Vishrav Chaudhary, Dong Chen, Dongdong Chen, Weizhu Chen, Yen-Chun Chen, Yi-Ling Chen, Hao Cheng, Parul Chopra, Xiyang Dai, Matthew Dixon, Ronen Eldan, Victor Fragoso, Jianfeng Gao, Mei Gao, Min Gao, Amit Garg, Allie Del Giorno, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Wenxiang Hu, Jamie Huynh, Dan Iter, Sam Ade Jacobs, Mojan Javaheripi, Xin Jin, Nikos Karampatziakis, Piero Kauffmann, Mahoud Khademi, Dongwoo Kim, Young Jin Kim, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Yunsheng Li, Chen Liang, Lars Liden, Xihui Lin, Zeqi Lin, Ce Liu, Liyuan Liu, Mengchen Liu, Weishung Liu, Xiaodong Liu, Chong Luo, Piyush Madan, Ali Mahmoudzadeh, David Majercak, Matt Mazzola, Caio César Teodoro Mendes, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Liliang Ren, Gustavo de Rosa, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Yelong Shen, Swadheen Shukla, Xia Song, Masahiro Tanaka, Andrea Tupini, Praneetha Vaddamanu, Chunyu Wang, Guanhua Wang, Lijuan Wang, Shuohang Wang, Xin Wang, Yu Wang, Rachel Ward, Wen Wen, Philipp Witte, Haiping Wu, Xiaoxia Wu, Michael Wyatt, Bin Xiao, Can Xu, Jiahang Xu, Weijian Xu, Jilong Xue, Sonali Yadav, Fan Yang, Jianwei Yang, Yifan Yang, Ziyi Yang, Donghan Yu, Lu Yuan, Chenruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, and Xiren Zhou. 2024. Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone. arXiv preprint. ArXiv:2404.14219 [cs].
- Alain and Bengio (2016) Guillaume Alain and Yoshua Bengio. 2016. Understanding intermediate layers using linear classifier probes.
- Ansel et al. (2024) Jason Ansel, Edward Yang, Horace He, Natalia Gimelshein, Animesh Jain, Michael Voznesensky, Bin Bao, Peter Bell, David Berard, Evgeni Burovski, Geeta Chauhan, Anjali Chourdia, Will Constable, Alban Desmaison, Zachary DeVito, Elias Ellison, Will Feng, Jiong Gong, Michael Gschwind, Brian Hirsh, Sherlock Huang, Kshiteej Kalambarkar, Laurent Kirsch, Michael Lazos, Mario Lezcano, Yanbo Liang, Jason Liang, Yinghai Lu, CK Luk, Bert Maher, Yunjie Pan, Christian Puhrsch, Matthias Reso, Mark Saroufim, Marcos Yukio Siraichi, Helen Suk, Michael Suo, Phil Tillet, Eikan Wang, Xiaodong Wang, William Wen, Shunting Zhang, Xu Zhao, Keren Zhou, Richard Zou, Ajit Mathews, Gregory Chanan, Peng Wu, and Soumith Chintala. 2024. PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation. In 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (ASPLOS ’24). ACM.
- Arteaga et al. (2024) Gabriel Y. Arteaga, Thomas B. Schön, and Nicolas Pielawski. 2024. Hallucination Detection in LLMs: Fast and Memory-Efficient Finetuned Models. In Northern Lights Deep Learning Conference 2025.
- Azaria and Mitchell (2023) Amos Azaria and Tom Mitchell. 2023. The Internal State of an LLM Knows When It‘s Lying. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 967–976, Singapore. Association for Computational Linguistics.
- Balestriero et al. (2023) Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, and Micah Goldblum. 2023. A Cookbook of Self-Supervised Learning. arXiv preprint. ArXiv:2304.12210 [cs].
- Barbero et al. (2024) Federico Barbero, Andrea Banino, Steven Kapturowski, Dharshan Kumaran, João G. M. Araújo, Alex Vitvitskyi, Razvan Pascanu, and Petar Veličković. 2024. Transformers need glasses! Information over-squashing in language tasks. arXiv preprint. ArXiv:2406.04267 [cs].
- Black et al. (2023) Mitchell Black, Zhengchao Wan, Amir Nayyeri, and Yusu Wang. 2023. Understanding Oversquashing in GNNs through the Lens of Effective Resistance. In International Conference on Machine Learning, pages 2528–2547. PMLR. ArXiv:2302.06835 [cs].
- Bruna et al. (2013) Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2013. Spectral Networks and Locally Connected Networks on Graphs. CoRR.
- Chen et al. (2024) Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. 2024. INSIDE: LLMs’ Internal States Retain the Power of Hallucination Detection. In The Twelfth International Conference on Learning Representations.
- Chuang et al. (2024a) Yung-Sung Chuang, Linlu Qiu, Cheng-Yu Hsieh, Ranjay Krishna, Yoon Kim, and James R. Glass. 2024a. Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1419–1436, Miami, Florida, USA. Association for Computational Linguistics.
- Chuang et al. (2024b) Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James R. Glass, and Pengcheng He. 2024b. DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models. In The Twelfth International Conference on Learning Representations.
- Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
- Dao et al. (2022) Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Proceedings of the 36th International Conference on Neural Information Processing Systems (NeurIPS '22), New Orleans, LA, USA. Curran Associates Inc.
- Farquhar et al. (2024) Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and Yarin Gal. 2024. Detecting hallucinations in large language models using semantic entropy. Nature, 630(8017):625–630. Publisher: Nature Publishing Group.
- Grattafiori et al. (2024) Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Govind Thattai, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Karthik Prasad, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Kushal Lakhotia, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Maria 
Tsimpoukelli, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kambadur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Ning Zhang, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sahana Chennabasappa, Sanjay Singh, Sean Bell, Seohyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sydney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Virginie Do, Vish Vogeti, Vítor Albiero, Vladan Petrovic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whitney Meers, Xavier Martinet, Xiaodong Wang, Xiaofang Wang, Xiaoqing Ellen Tan, Xide Xia, Xinfeng Xie, Xuchao Jia, Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aayushi Srivastava, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay Menon, Ajay Sharma, Alex Boesenberg, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit Sangani, Amos Teo, Anam Yunus, Andrei Lupu, Andres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchandani, Annie Dong, Annie 
Franco, Anuj Goyal, Aparajita Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Ce Liu, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching-Hsiang Chu, Chris Cai, Chris Tindal, Christoph Feichtenhofer, Cynthia Gao, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Eric-Tuan Le, Erik Brinkman, Esteban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Filippos Kokkinos, Firat Ozgenel, Francesco Caggioni, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hakan Inan, Hamid Shojanazeri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry Aspegren, Hunter Goldman, Hongyuan Zhan, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Ilias Leontiadis, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Janice Lam, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jennifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Kartikay Khandelwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Kiran Jagadeesh, Kun Huang, Kunal Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng 
Yu, Liron Moshkovich, Luca Wehrstedt, Madian Khabsa, Manav Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Keneally, Miao Liu, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikhil Mehta, Nikolay Pavlovich Laptev, Ning Dong, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Rangaprabhu Parthasarathy, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Russ Howes, Ruty Rinott, Sachin Mehta, Sachin Siby, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Mahajan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lindsay, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shishir Patil, Shiva Shankar, Shuqiang Zhang, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Summer Deng, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Koehler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, 
Xiaojian Wu, Xiaolan Wang, Xilun Wu, Xinbo Gao, Yaniv Kleinman, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yu Zhao, Yuchen Hao, Yundi Qian, Yunlu Li, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, and Zhiyu Ma. 2024. The Llama 3 Herd of Models. arXiv preprint. ArXiv:2407.21783 [cs].
- Huang et al. (2023) Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. arXiv preprint. ArXiv:2311.05232 [cs].
- Jolliffe and Cadima (2016) Ian T. Jolliffe and Jorge Cadima. 2016. Principal component analysis: a review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065):20150202. Publisher: Royal Society.
- Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.
- Kim et al. (2024) Hazel Kim, Adel Bibi, Philip Torr, and Yarin Gal. 2024. Detecting LLM Hallucination Through Layer-wise Information Deficiency: Analysis of Unanswerable Questions and Ambiguous Prompts. arXiv preprint. ArXiv:2412.10246 [cs].
- Kossen et al. (2024) Jannik Kossen, Jiatong Han, Muhammed Razzak, Lisa Schut, Shreshth Malik, and Yarin Gal. 2024. Semantic Entropy Probes: Robust and Cheap Hallucination Detection in LLMs. arXiv preprint. ArXiv:2406.15927 [cs].
- Kuprieiev et al. (2025) Ruslan Kuprieiev, skshetry, Peter Rowland, Dmitry Petrov, Pawel Redzynski, Casper da Costa-Luis, David de la Iglesia Castro, Alexander Schepanovski, Ivan Shcheklein, Gao, Batuhan Taskaya, Jorge Orpinel, Fábio Santos, Daniele, Ronan Lamy, Aman Sharma, Zhanibek Kaimuldenov, Dani Hodovic, Nikita Kodenko, Andrew Grigorev, Earl, Nabanita Dash, George Vyshnya, Dave Berenbaum, maykulkarni, Max Hora, Vera, and Sanidhya Mangal. 2025. DVC: Data Version Control - Git for Data & Models.
- Kwiatkowski et al. (2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Computational Linguistics, 7:452–466. Place: Cambridge, MA Publisher: MIT Press.
- Lee (2023) Minhyeok Lee. 2023. A Mathematical Investigation of Hallucination and Creativity in GPT Models. Mathematics, 11(10):2320.
- Li et al. (2024) Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2024. The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10879–10899, Bangkok, Thailand. Association for Computational Linguistics.
- Li et al. (2023) Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023. HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models. arXiv preprint. ArXiv:2305.11747 [cs].
- Liang et al. (2024) Xun Liang, Shichao Song, Zifan Zheng, Hanyu Wang, Qingchen Yu, Xunkai Li, Rong-Hua Li, Feiyu Xiong, and Zhiyu Li. 2024. Internal Consistency and Self-Feedback in Large Language Models: A Survey. CoRR, abs/2407.14507.
- Lin et al. (2022) Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring How Models Mimic Human Falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland. Association for Computational Linguistics.
- Manakul et al. (2023) Potsawee Manakul, Adian Liusie, and Mark Gales. 2023. SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9004–9017, Singapore. Association for Computational Linguistics.
- Mann and Whitney (1947) Henry B Mann and Donald R Whitney. 1947. On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics, pages 50–60. Publisher: JSTOR.
- Marks and Tegmark (2024) Samuel Marks and Max Tegmark. 2024. The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets. In First Conference on Language Modeling.
- Mistral AI Team (2025) Mistral AI Team. 2025. Mistral-small-24B-instruct-2501.
- Mistral AI Team and NVIDIA (2024) Mistral AI Team and NVIDIA. 2024. Mistral-nemo-instruct-2407.
- Mitra et al. (2024) Kushan Mitra, Dan Zhang, Sajjadur Rahman, and Estevam Hruschka. 2024. FactLens: Benchmarking Fine-Grained Fact Verification. arXiv preprint. ArXiv:2411.05980 [cs].
- Mohar (1997) Bojan Mohar. 1997. Some applications of Laplace eigenvalues of graphs. In Geňa Hahn and Gert Sabidussi, editors, Graph Symmetry, pages 225–275. Springer Netherlands, Dordrecht.
- OpenAI et al. (2024) OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel 
Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, C. J. 
Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. 2024. GPT-4 Technical Report. arXiv preprint. ArXiv:2303.08774 [cs].
- Orgad et al. (2025) Hadas Orgad, Michael Toker, Zorik Gekhman, Roi Reichart, Idan Szpektor, Hadas Kotek, and Yonatan Belinkov. 2025. LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations. In The Thirteenth International Conference on Learning Representations.
- Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825–2830.
- Rajpurkar et al. (2018) Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don‘t Know: Unanswerable Questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, Melbourne, Australia. Association for Computational Linguistics.
- Reddy et al. (2019) Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A Conversational Question Answering Challenge. Transactions of the Association for Computational Linguistics, 7:249–266. Place: Cambridge, MA Publisher: MIT Press.
- Renze (2024) Matthew Renze. 2024. The Effect of Sampling Temperature on Problem Solving in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7346–7356, Miami, Florida, USA. Association for Computational Linguistics.
- Sriramanan et al. (2024) Gaurang Sriramanan, Siddhant Bharti, Vinu Sankar Sadasivan, Shoumik Saha, Priyatham Kattakinda, and Soheil Feizi. 2024. LLM-Check: Investigating Detection of Hallucinations in Large Language Models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
- team (2020) The pandas development team. 2020. pandas-dev/pandas: Pandas.
- Topping et al. (2022) Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. 2022. Understanding over-squashing and bottlenecks on graphs via curvature. In International Conference on Learning Representations.
- Vaswani (2017) Ashish Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems.
- Veličković et al. (2024) Petar Veličković, Christos Perivolaropoulos, Federico Barbero, and Razvan Pascanu. 2024. softmax is not enough (for sharp out-of-distribution). arXiv preprint. ArXiv:2410.01104 [cs].
- Virtanen et al. (2020) Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, C J Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. 2020. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods, 17:261–272.
- von Luxburg (2007) Ulrike von Luxburg. 2007. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416.
- Waskom (2021) Michael L. Waskom. 2021. seaborn: statistical data visualization. Journal of Open Source Software, 6(60):3021. Publisher: The Open Journal.
- Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.
- Wu et al. (2024) Xinyi Wu, Amir Ajorlou, Yifei Wang, Stefanie Jegelka, and Ali Jadbabaie. 2024. On the role of attention masks and LayerNorm in transformers. In Advances in neural information processing systems, volume 37, pages 14774–14809. Curran Associates, Inc.
- Xu et al. (2024) Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. 2024. Hallucination is Inevitable: An Innate Limitation of Large Language Models. arXiv preprint. ArXiv:2401.11817.
- Zheng et al. (2023) Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and Chatbot Arena. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS ’23, Red Hook, NY, USA. Curran Associates Inc. Event-place: New Orleans, LA, USA.
- Zhu et al. (2024) Derui Zhu, Dingfan Chen, Qing Li, Zongxiong Chen, Lei Ma, Jens Grossklags, and Mario Fritz. 2024. PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4737–4751, Mexico City, Mexico. Association for Computational Linguistics.
Appendix A Details of motivational study
We present a detailed description of the procedure used to obtain the results presented in Section 2, along with additional results for other datasets and LLMs.
Our goal was to test whether $\operatorname{AttentionScore}$ and the eigenvalues of the Laplacian matrix (used by our $\operatorname{LapEigvals}$) differ significantly when examples are split into hallucinated and non-hallucinated groups. To this end, we used 7 datasets (Section 4.1) and ran inference with 5 LLMs (Section 4.1) using $temp{=}0.1$. From the extracted attention maps, we computed $\operatorname{AttentionScore}$ (Sriramanan et al., 2024), defined as the log-determinant of the attention matrices. Unlike the original work, we did not aggregate scores across heads but instead analyzed them at the single-head level. For $\operatorname{LapEigvals}$, we constructed the Laplacian as defined in Section 3, extracted the 10 largest eigenvalues per head, and applied the same single-head analysis as for $\operatorname{AttentionScore}$. Finally, we performed the Mann–Whitney U test (Mann and Whitney, 1947) using the SciPy implementation (Virtanen et al., 2020) and collected the resulting $p$-values.
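For illustration, the per-head test can be sketched with NumPy and SciPy. The snippet below uses random causal (lower-triangular, row-stochastic) matrices as stand-ins for real attention maps; the `random_attention` helper and its `sharpness` knob are hypothetical, introduced only to produce two distinguishable groups:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def random_attention(T, rng, sharpness=1.0):
    # Row-stochastic lower-triangular matrix; higher sharpness -> more peaked rows.
    A = np.tril(rng.random((T, T)) ** sharpness)
    return A / A.sum(axis=1, keepdims=True)

def attention_score(attn, eps=1e-8):
    # Log-determinant of a causal attention map: for a lower-triangular
    # matrix, the determinant is the product of the diagonal entries.
    return float(np.sum(np.log(np.diag(attn) + eps)))

rng = np.random.default_rng(0)
# Two synthetic groups of examples for one head (stand-ins for
# hallucinated vs. non-hallucinated generations).
group_a = [attention_score(random_attention(32, rng, 1.0)) for _ in range(50)]
group_b = [attention_score(random_attention(32, rng, 3.0)) for _ in range(50)]
_, p_value = mannwhitneyu(group_a, group_b)
print(f"p = {p_value:.3g}")
```

In the actual study, `group_a` and `group_b` would hold the per-head feature values for hallucinated and non-hallucinated generations of one model on one dataset.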
Table 3 presents the percentage of heads having a statistically significant difference in feature values between hallucinated and non-hallucinated examples, as indicated by $p<0.05$ from the Mann-Whitney U test. These results show that the Laplacian eigenvalues better distinguish between the two classes for almost all considered LLMs and datasets.
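The head-level aggregation behind Table 3 then amounts to counting the fraction of heads whose $p$-value falls below 0.05. A minimal sketch, assuming a flat array with one $p$-value per head:

```python
import numpy as np

def pct_significant(p_values, alpha=0.05):
    # Percentage of heads with a statistically significant difference
    # between hallucinated and non-hallucinated examples.
    p = np.asarray(p_values, dtype=float)
    return 100.0 * float(np.mean(p < alpha))

# Hypothetical per-head p-values from the Mann-Whitney U test.
print(pct_significant([0.001, 0.2, 0.03, 0.6]))  # -> 50.0
```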
Table 3: Percentage of heads having a statistically significant difference in feature values between hallucinated and non-hallucinated examples, as indicated by $p<0.05$ from the Mann–Whitney U test. Results were obtained for $\operatorname{AttentionScore}$ and the 10 largest Laplacian eigenvalues on 7 datasets and 5 LLMs.
| Model | Dataset | AttnScore | Laplacian eigvals |
| --- | --- | --- | --- |
| Llama3.1-8B | CoQA | 40 | 87 |
| Llama3.1-8B | GSM8K | 83 | 70 |
| Llama3.1-8B | HaluevalQA | 91 | 93 |
| Llama3.1-8B | NQOpen | 78 | 83 |
| Llama3.1-8B | SQuADv2 | 70 | 81 |
| Llama3.1-8B | TriviaQA | 80 | 91 |
| Llama3.1-8B | TruthfulQA | 40 | 60 |
| Llama3.2-3B | CoQA | 50 | 79 |
| Llama3.2-3B | GSM8K | 74 | 67 |
| Llama3.2-3B | HaluevalQA | 91 | 93 |
| Llama3.2-3B | NQOpen | 81 | 84 |
| Llama3.2-3B | SQuADv2 | 69 | 74 |
| Llama3.2-3B | TriviaQA | 81 | 87 |
| Llama3.2-3B | TruthfulQA | 40 | 62 |
| Phi3.5 | CoQA | 45 | 81 |
| Phi3.5 | GSM8K | 67 | 69 |
| Phi3.5 | HaluevalQA | 80 | 86 |
| Phi3.5 | NQOpen | 73 | 80 |
| Phi3.5 | SQuADv2 | 81 | 82 |
| Phi3.5 | TriviaQA | 86 | 92 |
| Phi3.5 | TruthfulQA | 41 | 53 |
| Mistral-Nemo | CoQA | 35 | 78 |
| Mistral-Nemo | GSM8K | 90 | 71 |
| Mistral-Nemo | HaluevalQA | 78 | 82 |
| Mistral-Nemo | NQOpen | 64 | 57 |
| Mistral-Nemo | SQuADv2 | 54 | 56 |
| Mistral-Nemo | TriviaQA | 71 | 74 |
| Mistral-Nemo | TruthfulQA | 40 | 50 |
| Mistral-Small-24B | CoQA | 28 | 78 |
| Mistral-Small-24B | GSM8K | 75 | 72 |
| Mistral-Small-24B | HaluevalQA | 68 | 70 |
| Mistral-Small-24B | NQOpen | 45 | 51 |
| Mistral-Small-24B | SQuADv2 | 75 | 82 |
| Mistral-Small-24B | TriviaQA | 65 | 70 |
| Mistral-Small-24B | TruthfulQA | 43 | 52 |
Appendix B Bounds of the Laplacian
In the following section, we prove that the Laplacian defined in Section 3 is bounded and has at least one zero eigenvalue. We denote eigenvalues as $\lambda_{i}$ and provide the derivation for a single layer and head; it also holds after stacking them together into a single graph (a set of per-layer graphs). For clarity, we omit the superscript ${(l,h)}$ indicating layer and head.
**Lemma 1**
*The Laplacian eigenvalues are bounded: $-1≤\lambda_{i}≤ 1$ .*
* Proof*
Due to the lower-triangular structure of the Laplacian, its eigenvalues lie on the diagonal and are given by:
$$
\lambda_{i}=\mathbf{L}_{ii}=d_{ii}-a_{ii}
$$
The out-degree is defined as:
$$
d_{ii}=\frac{\sum_{u}{a_{ui}}}{T-i},
$$
Since $0≤ a_{ui}≤ 1$, the sum in the numerator is upper bounded by $T-i$; therefore $d_{ii}≤ 1$, and consequently $\lambda_{i}=\mathbf{L}_{ii}≤ 1$, which concludes the upper-bound part of the proof. Recall that the eigenvalues lie on the main diagonal of the Laplacian, hence $\lambda_{i}=\frac{\sum_{u}{a_{ui}}}{T-i}-a_{ii}$. To find the lower bound of $\lambda_{i}$, we need to minimize $X=\frac{\sum_{u}{a_{ui}}}{T-i}$ and maximize $Y=a_{ii}$. First, we note that $X$'s denominator is always positive, $T-i>0$, since $i∈\{0...(T-1)\}$ (as defined by Eq. (2)). For the numerator, we recall that $0≤ a_{ui}≤ 1$; therefore, the sum attains its minimum at 0, hence $X≥ 0$. Second, $Y=a_{ii}$ attains its maximum of $1$, since $0≤ a_{ii}≤ 1$. Finally, the minimum of $X-Y$ is $-1$; consequently $\mathbf{L}_{ii}≥-1$, which concludes the lower-bound part of the proof. ∎
**Lemma 2**
*For every Laplacian $\mathbf{L}$, there exists at least one zero eigenvalue, and it corresponds to the last token $T$, i.e., $\lambda_{T}=0$.*
* Proof*
Recall that the eigenvalues lie on the main diagonal of the Laplacian, hence $\lambda_{i}=\frac{\sum_{u}{a_{ui}}}{T-i}-a_{ii}$. Consider the last token: the sum in the numerator reduces to $\sum_{u}{a_{ui}}=a_{TT}$, and the denominator becomes $T-i=T-(T-1)=1$; thus $\lambda_{T}=\frac{a_{TT}}{1}-a_{TT}=0$. ∎
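Both lemmas can be checked numerically. Below is a small sketch (not part of the released code) that builds the Laplacian diagonal from a random causal attention map following the construction above, using zero-based indexing so the last token is $i=T-1$:

```python
import numpy as np

def laplacian_diagonal(attn):
    """Eigenvalues of the Laplacian L = D - A for a lower-triangular
    attention map A, where d_ii = (sum_u a_ui) / (T - i).
    L is lower triangular, so its eigenvalues lie on the diagonal."""
    T = attn.shape[0]
    d = attn.sum(axis=0) / (T - np.arange(T))
    return d - np.diag(attn)

rng = np.random.default_rng(0)
T = 16
A = np.tril(rng.random((T, T)))
A = A / A.sum(axis=1, keepdims=True)  # rows sum to 1, entries in [0, 1]
lam = laplacian_diagonal(A)
print(np.all((lam >= -1) & (lam <= 1)))  # Lemma 1: bounded eigenvalues
print(np.isclose(lam[-1], 0.0))          # Lemma 2: zero eigenvalue at the last token
```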
Appendix C Implementation details
In our experiments, we used HuggingFace Transformers (Wolf et al., 2020), PyTorch (Ansel et al., 2024), and scikit-learn (Pedregosa et al., 2011). We utilized Pandas (team, 2020) and Seaborn (Waskom, 2021) for visualizations and analysis. To version data, we employed DVC (Kuprieiev et al., 2025). The Cursor IDE was employed to assist with code development. We performed LLM inference and acquired attention maps using a single Nvidia A40 with 40GB VRAM, except for Mistral-Small-24B, for which we used an Nvidia H100 with 96GB VRAM. Training the hallucination probe was done on CPU only. To compute labels using the LLM-as-judge approach, we leveraged the gpt-4o-mini model available through the OpenAI API. Detailed hyperparameter settings and code to reproduce the experiments are available in the public GitHub repository: https://github.com/graphml-lab-pwr/lapeigvals.
Appendix D Details of QA datasets
We used 7 open and publicly available question answering datasets: NQ-Open (Kwiatkowski et al., 2019) (CC-BY-SA-3.0 license), SQuADv2 (Rajpurkar et al., 2018) (CC-BY-SA-4.0 license), TruthfulQA (Lin et al., 2022) (Apache-2.0 license), HaluEvalQA (Li et al., 2023) (MIT license), CoQA (Reddy et al., 2019) (domain-dependent licensing, detailed on https://stanfordnlp.github.io/coqa/), TriviaQA (Joshi et al., 2017) (Apache-2.0 license), and GSM8K (Cobbe et al., 2021) (MIT license). Research purposes fall within the intended use of these datasets. To preprocess and filter TriviaQA, CoQA, and SQuADv2, we utilized the open-source code of Chen et al. (2024), https://github.com/alibaba/eigenscore (MIT license), which also borrows from Farquhar et al. (2024), https://github.com/lorenzkuhn/semantic_uncertainty (MIT license). In Figure 8, we provide histograms of the number of tokens in the *question* and *answer* of each dataset, computed with the meta-llama/Llama-3.1-8B-Instruct tokenizer.
<details>
<summary>x8.png Details</summary>

Histograms of question and answer token counts (x-axis: #Tokens, linear for questions and log scale for answers; y-axis: frequency, log scale). Questions peak around 400 tokens; most answers fall between 10 and 100 tokens.
</details>
(a) CoQA
<details>
<summary>x9.png Details</summary>

Two side-by-side histograms with a log-scaled frequency axis (10⁰-10³) over token counts (5-25). Question lengths peak around 10-12 tokens and answer lengths around 5-7 tokens; both distributions decay rapidly beyond 15 tokens.
</details>
(b) NQ-Open
<details>
<summary>x10.png Details</summary>

Two side-by-side histograms with a log-scaled frequency axis (10⁰-10³) over token counts (0-125). Question lengths peak around 10-15 tokens and answer lengths below 5 tokens; question frequencies become negligible beyond 100 tokens and answer frequencies beyond 50.
</details>
(c) HaluEvalQA
<details>
<summary>x11.png Details</summary>

Two side-by-side histograms with a log-scaled frequency axis (10⁰-10³) over token counts (0-40). Question lengths cluster around 10-15 tokens and answer lengths around 1-5 tokens, with a sparse tail of answers extending to 40 tokens.
</details>
(d) SQuADv2
<details>
<summary>x12.png Details</summary>

Two side-by-side histograms with a log-scaled frequency axis (10⁰-10⁴) over token counts (0-150). Question lengths peak around 10 tokens and decline gradually, while answers are heavily concentrated at very short lengths; only a handful of examples in either panel reach 150 tokens.
</details>
(e) TriviaQA
<details>
<summary>x13.png Details</summary>

Two side-by-side histograms with a log-scaled frequency axis (10⁰-10²) over token counts (0-60). Both question and answer lengths peak around 10 tokens and decay rapidly beyond 20 tokens, with answer frequencies tapering off somewhat earlier than question frequencies.
</details>
(f) TruthfulQA
<details>
<summary>x14.png Details</summary>

Two side-by-side histograms with a log-scaled frequency axis (10⁻¹-10¹) over token counts (0-300). Question lengths peak around 50 tokens, while answers are longer, peaking around 150 tokens and extending to roughly 250 tokens.
</details>
(g) GSM8K
Figure 8: Token count histograms for the datasets used in our experiments. Token counts were computed separately for each example's $question$ (left) and gold $answer$ (right) using the meta-llama/Llama-3.1-8B-Instruct tokenizer. For examples with multiple gold answers, all answers were flattened into a single sequence before counting.
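The counting procedure behind these histograms can be sketched as follows. This is a minimal illustration: `count_tokens` and `token_histogram` are our names, and the whitespace tokenizer is a stand-in assumption, since the actual counts use the meta-llama/Llama-3.1-8B-Instruct tokenizer (e.g., loaded via Hugging Face `transformers.AutoTokenizer`).

```python
from collections import Counter

def count_tokens(text, tokenize):
    """Number of tokens produced by the given tokenizer function."""
    return len(tokenize(text))

def token_histogram(examples, tokenize):
    """Token-count histograms for questions and (flattened) gold answers.

    Each example is a dict with a "question" string and an "answers" list;
    multiple gold answers are flattened into one sequence before counting,
    as in Figure 8.
    """
    q_counts = Counter(count_tokens(ex["question"], tokenize) for ex in examples)
    a_counts = Counter(
        count_tokens(" ".join(ex["answers"]), tokenize) for ex in examples
    )
    return q_counts, a_counts

# Whitespace splitting as a stand-in tokenizer; in practice one would load
# the meta-llama/Llama-3.1-8B-Instruct tokenizer instead.
tokenize = str.split

examples = [
    {"question": "Who wrote Hamlet?", "answers": ["William Shakespeare"]},
    {"question": "Largest planet?", "answers": ["Jupiter", "the planet Jupiter"]},
]
q_hist, a_hist = token_histogram(examples, tokenize)
```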
Appendix E Hallucination dataset sizes
Figure 9 shows the number of examples per label, determined using exact match for GSM8K and the llm-as-judge heuristic for the other datasets. It is worth noting that different generation configurations result in different splits, as LLMs might produce different answers. All examples classified as $Rejected$ were discarded from the hallucination probe training and evaluation. We observe that most datasets are imbalanced, typically underrepresenting non-hallucinated examples, with the exception of TriviaQA and GSM8K. We split each dataset into 80% training examples and 20% test examples. Splits were stratified according to hallucination labels.
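The 80/20 label-stratified split described above can be sketched with a small stdlib implementation; the function name and seed are our own illustrative choices, and the actual pipeline may use a library splitter instead.

```python
import random
from collections import defaultdict

def stratified_split(labels, train_frac=0.8, seed=42):
    """Split example indices into train/test, preserving label proportions.

    Examples labelled Rejected are assumed to be filtered out beforehand,
    leaving only Hallucination / Non-Hallucination labels.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)
    train, test = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)  # randomize within each label class
        cut = int(round(train_frac * len(idxs)))
        train.extend(idxs[:cut])
        test.extend(idxs[cut:])
    return sorted(train), sorted(test)

# Toy imbalanced dataset: 40 hallucinated vs 60 truthful examples.
labels = ["hallucination"] * 40 + ["non-hallucination"] * 60
train_idx, test_idx = stratified_split(labels)
```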
<details>
<summary>x15.png Details</summary>

A grid of bar charts with one row per dataset (GSM8K, TruthfulQA, CoQA, SQuADv2, TriviaQA, HaluevalQA, NQOpen) and one column per model, showing the number of $Hallucination$ (red), $Non{-}Hallucination$ (green), and $Rejected$ (gray) examples at sampling temperatures 0.1 and 1.0. Most datasets show a visible imbalance between the two label classes, and the counts shift moderately between the two temperatures.
</details>
Figure 9: Number of examples per label in the generated datasets ($Hallucination$ - number of hallucinated examples, $Non{-}Hallucination$ - number of truthful examples, $Rejected$ - number of examples that could not be evaluated).
Appendix F LLM-as-Judge agreement
To ensure the high quality of labels generated using the llm-as-judge approach, we complemented manual evaluation of random examples with a second judge LLM and measured agreement between the models. We assume that higher agreement among LLMs indicates better label quality. The reduced performance of $\operatorname{LapEigvals}$ on TriviaQA may be attributed to the lower agreement, as well as the dataset’s size and class imbalance discussed earlier.
Table 4: Agreement between LLM judges labeling hallucinations (gpt-4o-mini, gpt-4.1), measured with Cohen’s Kappa.
| Dataset | Cohen's Kappa |
| --- | --- |
| CoQA | 0.876 |
| HaluevalQA | 0.946 |
| NQOpen | 0.883 |
| SquadV2 | 0.854 |
| TriviaQA | 0.939 |
| TruthfulQA | 0.714 |
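Cohen's Kappa, used above, corrects the judges' raw agreement rate for the agreement expected by chance from their label marginals. A minimal sketch (the function name is ours; a library implementation such as scikit-learn's could be used instead):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's Kappa between two annotators' label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both judges label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: expected overlap of two independent annotators
    # with the same per-class label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Toy example with two hypothetical judges over five answers.
judge_1 = ["hallucination", "ok", "ok", "hallucination", "ok"]
judge_2 = ["hallucination", "ok", "hallucination", "hallucination", "ok"]
kappa = cohens_kappa(judge_1, judge_2)
```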
Appendix G Extended results
G.1 Precision and Recall analysis
To provide insights relevant for potential practical usage, we analyze the Precision and Recall of our method. While it has not yet been fully evaluated in production settings, this analysis illustrates the trade-offs between these metrics and informs how the method might behave in real-world applications. Metrics were computed using the default threshold of 0.5, as reported in Table 5. Although trade-off patterns vary across datasets, they are consistent across all evaluated LLMs. Specifically, we observe higher recall on CoQA, GSM8K, and TriviaQA, whereas HaluEvalQA, NQ-Open, SQuADv2, and TruthfulQA exhibit higher precision. These insights can guide threshold adjustments to balance precision and recall for different production scenarios.
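The values in Table 5 follow the standard precision/recall definitions applied to the probe's probability outputs at the default 0.5 threshold. A minimal sketch (names are ours, for illustration):

```python
def precision_recall(probs, labels, threshold=0.5):
    """Precision and recall for the positive (hallucination) class.

    probs  -- probe probabilities that each answer is hallucinated
    labels -- 1 for hallucinated answers, 0 for truthful ones
    """
    preds = [int(p >= threshold) for p in probs]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Raising the threshold trades recall for precision, which is how the
# balance could be tuned for a given production scenario.
probs = [0.9, 0.8, 0.6, 0.4, 0.3]
labels = [1, 1, 0, 1, 0]
p, r = precision_recall(probs, labels)
```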
Table 5: Precision and Recall values for the $\operatorname{LapEigvals}$ method, complementary to AUROC presented in Table 1. Values are presented as Precision / Recall for each dataset and model combination.
| Model | CoQA | GSM8K | HaluEvalQA | NQ-Open | SQuADv2 | TriviaQA | TruthfulQA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Llama3.1-8B | 0.583 / 0.710 | 0.644 / 0.729 | 0.895 / 0.785 | 0.859 / 0.740 | 0.896 / 0.720 | 0.719 / 0.812 | 0.872 / 0.781 |
| Llama3.2-3B | 0.679 / 0.728 | 0.718 / 0.699 | 0.912 / 0.788 | 0.894 / 0.662 | 0.924 / 0.720 | 0.787 / 0.729 | 0.910 / 0.746 |
| Phi3.5 | 0.560 / 0.703 | 0.600 / 0.739 | 0.899 / 0.768 | 0.910 / 0.785 | 0.906 / 0.731 | 0.787 / 0.785 | 0.829 / 0.798 |
| Mistral-Nemo | 0.646 / 0.714 | 0.594 / 0.809 | 0.873 / 0.760 | 0.875 / 0.751 | 0.920 / 0.756 | 0.707 / 0.769 | 0.892 / 0.825 |
| Mistral-Small-24B | 0.610 / 0.779 | 0.561 / 0.852 | 0.811 / 0.801 | 0.700 / 0.750 | 0.784 / 0.789 | 0.575 / 0.787 | 0.679 / 0.655 |
G.2 Extended method comparison
In Tables 6 and 7, we present the extended results corresponding to those summarized in Table 1 in the main part of this paper. The extended results cover probes trained in both the all-layers and per-layer variants across all models, as well as both sampling temperatures ($temp \in \{0.1, 1.0\}$). In almost all cases, the all-layers variant outperforms the per-layer variant, suggesting that hallucination-related information is distributed across multiple layers. Additionally, we observe a smaller generalization gap (measured as the difference between test and training performance) for the $\operatorname{LapEigvals}$ method, indicating more robust features present in the Laplacian eigenvalues. Finally, as demonstrated in Section 6, increasing the temperature during answer generation improves probe performance, which is also evident in Table 6, where probes trained on answers generated with $temp{=}1.0$ generally outperform those trained on data generated with $temp{=}0.1$.
Table 6: (Part I) Performance comparison of methods on an extended set of configurations. We mark results for $\operatorname{AttentionScore}$ in gray as it is an unsupervised approach, not directly comparable to the others. In bold, we highlight the best performance on the test split of data, individually for each dataset, LLM, and temperature.
| Llama3.1-8B | 0.1 | $\operatorname{AttentionScore}$ | | ✓ | 0.509 | 0.683 | 0.667 | 0.607 | 0.556 | 0.567 | 0.563 | 0.541 | 0.764 | 0.653 | 0.631 | 0.575 | 0.571 | 0.650 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama3.1-8B | 0.1 | $\operatorname{AttentionScore}$ | ✓ | | 0.494 | 0.677 | 0.614 | 0.568 | 0.522 | 0.522 | 0.489 | 0.504 | 0.708 | 0.587 | 0.558 | 0.521 | 0.511 | 0.537 |
| Llama3.1-8B | 0.1 | $\operatorname{AttnLogDet}$ | | ✓ | 0.574 | 0.810 | 0.776 | 0.702 | 0.688 | 0.739 | 0.709 | 0.606 | 0.840 | 0.770 | 0.713 | 0.708 | 0.741 | 0.777 |
| Llama3.1-8B | 0.1 | $\operatorname{AttnLogDet}$ | ✓ | | 0.843 | 0.977 | 0.884 | 0.851 | 0.839 | 0.861 | 0.913 | 0.770 | 0.833 | 0.837 | 0.768 | 0.758 | 0.827 | 0.820 |
| Llama3.1-8B | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 0.764 | 0.879 | 0.828 | 0.713 | 0.742 | 0.793 | 0.680 | 0.729 | 0.798 | 0.799 | 0.728 | 0.749 | 0.773 | 0.790 |
| Llama3.1-8B | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 0.861 | 0.992 | 0.895 | 0.878 | 0.858 | 0.867 | 0.979 | 0.776 | 0.841 | 0.838 | 0.755 | 0.781 | 0.822 | 0.819 |
| Llama3.1-8B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.758 | 0.777 | 0.817 | 0.698 | 0.707 | 0.781 | 0.708 | 0.757 | 0.844 | 0.793 | 0.711 | 0.733 | 0.780 | 0.764 |
| Llama3.1-8B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.869 | 0.928 | 0.901 | 0.864 | 0.855 | 0.896 | 0.903 | 0.836 | 0.887 | 0.867 | 0.793 | 0.782 | 0.872 | 0.822 |
| Llama3.1-8B | 1.0 | $\operatorname{AttentionScore}$ | | ✓ | 0.514 | 0.705 | 0.640 | 0.607 | 0.558 | 0.578 | 0.533 | 0.525 | 0.731 | 0.642 | 0.607 | 0.572 | 0.602 | 0.629 |
| Llama3.1-8B | 1.0 | $\operatorname{AttentionScore}$ | ✓ | | 0.507 | 0.710 | 0.602 | 0.580 | 0.534 | 0.535 | 0.546 | 0.493 | 0.720 | 0.589 | 0.556 | 0.538 | 0.532 | 0.541 |
| Llama3.1-8B | 1.0 | $\operatorname{AttnLogDet}$ | | ✓ | 0.596 | 0.791 | 0.755 | 0.704 | 0.697 | 0.750 | 0.757 | 0.597 | 0.828 | 0.763 | 0.757 | 0.686 | 0.754 | 0.771 |
| Llama3.1-8B | 1.0 | $\operatorname{AttnLogDet}$ | ✓ | | 0.848 | 0.973 | 0.882 | 0.856 | 0.846 | 0.867 | 0.930 | 0.769 | 0.826 | 0.827 | 0.793 | 0.748 | 0.842 | 0.814 |
| Llama3.1-8B | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 0.762 | 0.864 | 0.820 | 0.758 | 0.754 | 0.800 | 0.796 | 0.723 | 0.812 | 0.784 | 0.732 | 0.728 | 0.796 | 0.770 |
| Llama3.1-8B | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 0.867 | 0.995 | 0.889 | 0.873 | 0.867 | 0.876 | 0.972 | 0.782 | 0.838 | 0.819 | 0.790 | 0.768 | 0.843 | 0.833 |
| Llama3.1-8B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.760 | 0.873 | 0.803 | 0.732 | 0.722 | 0.795 | 0.751 | 0.743 | 0.833 | 0.789 | 0.725 | 0.724 | 0.794 | 0.764 |
| Llama3.1-8B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.879 | 0.936 | 0.896 | 0.866 | 0.857 | 0.901 | 0.918 | 0.830 | 0.872 | 0.874 | 0.827 | 0.791 | 0.889 | 0.829 |
| Llama3.2-3B | 0.1 | $\operatorname{AttentionScore}$ | | ✓ | 0.526 | 0.662 | 0.697 | 0.592 | 0.570 | 0.570 | 0.569 | 0.547 | 0.640 | 0.714 | 0.643 | 0.582 | 0.551 | 0.564 |
| Llama3.2-3B | 0.1 | $\operatorname{AttentionScore}$ | ✓ | | 0.506 | 0.638 | 0.635 | 0.523 | 0.515 | 0.534 | 0.473 | 0.519 | 0.609 | 0.644 | 0.573 | 0.561 | 0.510 | 0.489 |
| Llama3.2-3B | 0.1 | $\operatorname{AttnLogDet}$ | | ✓ | 0.573 | 0.774 | 0.762 | 0.692 | 0.682 | 0.719 | 0.725 | 0.579 | 0.794 | 0.774 | 0.735 | 0.698 | 0.711 | 0.674 |
| Llama3.2-3B | 0.1 | $\operatorname{AttnLogDet}$ | ✓ | | 0.782 | 0.946 | 0.868 | 0.845 | 0.827 | 0.824 | 0.918 | 0.695 | 0.841 | 0.843 | 0.763 | 0.749 | 0.796 | 0.678 |
| Llama3.2-3B | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 0.675 | 0.784 | 0.782 | 0.750 | 0.725 | 0.755 | 0.727 | 0.626 | 0.761 | 0.792 | 0.734 | 0.695 | 0.724 | 0.720 |
| Llama3.2-3B | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 0.814 | 0.977 | 0.873 | 0.872 | 0.852 | 0.842 | 0.963 | 0.723 | 0.808 | 0.844 | 0.772 | 0.744 | 0.788 | 0.688 |
| Llama3.2-3B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.681 | 0.763 | 0.774 | 0.733 | 0.708 | 0.733 | 0.722 | 0.676 | 0.835 | 0.781 | 0.736 | 0.697 | 0.732 | 0.690 |
| Llama3.2-3B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.831 | 0.889 | 0.875 | 0.837 | 0.832 | 0.852 | 0.895 | 0.801 | 0.852 | 0.857 | 0.779 | 0.736 | 0.826 | 0.743 |
| Llama3.2-3B | 1.0 | $\operatorname{AttentionScore}$ | | ✓ | 0.532 | 0.674 | 0.668 | 0.588 | 0.578 | 0.553 | 0.555 | 0.557 | 0.753 | 0.637 | 0.592 | 0.593 | 0.558 | 0.675 |
| Llama3.2-3B | 1.0 | $\operatorname{AttentionScore}$ | ✓ | | 0.512 | 0.648 | 0.606 | 0.554 | 0.529 | 0.517 | 0.484 | 0.509 | 0.717 | 0.588 | 0.546 | 0.530 | 0.515 | 0.581 |
| Llama3.2-3B | 1.0 | $\operatorname{AttnLogDet}$ | | ✓ | 0.578 | 0.807 | 0.738 | 0.677 | 0.720 | 0.716 | 0.739 | 0.597 | 0.816 | 0.724 | 0.678 | 0.707 | 0.711 | 0.742 |
| Llama3.2-3B | 1.0 | $\operatorname{AttnLogDet}$ | ✓ | | 0.784 | 0.951 | 0.869 | 0.816 | 0.839 | 0.831 | 0.924 | 0.700 | 0.851 | 0.801 | 0.690 | 0.734 | 0.789 | 0.795 |
| Llama3.2-3B | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 0.642 | 0.807 | 0.777 | 0.716 | 0.747 | 0.763 | 0.735 | 0.641 | 0.817 | 0.756 | 0.696 | 0.703 | 0.746 | 0.748 |
| Llama3.2-3B | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 0.819 | 0.973 | 0.878 | 0.847 | 0.876 | 0.847 | 0.978 | 0.724 | 0.768 | 0.819 | 0.694 | 0.749 | 0.804 | 0.723 |
| Llama3.2-3B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.695 | 0.781 | 0.764 | 0.683 | 0.719 | 0.727 | 0.682 | 0.715 | 0.815 | 0.754 | 0.671 | 0.711 | 0.738 | 0.767 |
| Llama3.2-3B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.842 | 0.894 | 0.885 | 0.803 | 0.850 | 0.863 | 0.911 | 0.812 | 0.870 | 0.828 | 0.693 | 0.757 | 0.832 | 0.787 |
| Phi3.5 | 0.1 | $\operatorname{AttentionScore}$ | | ✓ | 0.517 | 0.723 | 0.559 | 0.565 | 0.606 | 0.625 | 0.601 | 0.528 | 0.682 | 0.551 | 0.637 | 0.621 | 0.628 | 0.637 |
| Phi3.5 | 0.1 | $\operatorname{AttentionScore}$ | ✓ | | 0.499 | 0.632 | 0.538 | 0.532 | 0.473 | 0.539 | 0.522 | 0.505 | 0.605 | 0.511 | 0.578 | 0.458 | 0.534 | 0.554 |
| Phi3.5 | 0.1 | $\operatorname{AttnLogDet}$ | | ✓ | 0.583 | 0.805 | 0.732 | 0.741 | 0.711 | 0.757 | 0.720 | 0.585 | 0.749 | 0.726 | 0.785 | 0.726 | 0.772 | 0.765 |
| Phi3.5 | 0.1 | $\operatorname{AttnLogDet}$ | ✓ | | 0.845 | 0.995 | 0.863 | 0.905 | 0.852 | 0.875 | 0.981 | 0.723 | 0.752 | 0.802 | 0.802 | 0.759 | 0.842 | 0.716 |
| Phi3.5 | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 0.760 | 0.882 | 0.781 | 0.793 | 0.745 | 0.802 | 0.854 | 0.678 | 0.764 | 0.764 | 0.790 | 0.747 | 0.791 | 0.774 |
| Phi3.5 | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 0.862 | 1.000 | 0.867 | 0.904 | 0.861 | 0.881 | 0.999 | 0.728 | 0.732 | 0.802 | 0.787 | 0.740 | 0.838 | 0.761 |
| Phi3.5 | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.734 | 0.713 | 0.758 | 0.737 | 0.704 | 0.775 | 0.759 | 0.716 | 0.753 | 0.757 | 0.761 | 0.732 | 0.768 | 0.741 |
| Phi3.5 | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.856 | 0.946 | 0.860 | 0.897 | 0.841 | 0.884 | 0.965 | 0.810 | 0.785 | 0.819 | 0.815 | 0.791 | 0.858 | 0.717 |
| Phi3.5 | 1.0 | $\operatorname{AttentionScore}$ | | ✓ | 0.499 | 0.699 | 0.567 | 0.615 | 0.626 | 0.637 | 0.618 | 0.533 | 0.722 | 0.581 | 0.630 | 0.645 | 0.642 | 0.626 |
| Phi3.5 | 1.0 | $\operatorname{AttentionScore}$ | ✓ | | 0.489 | 0.640 | 0.540 | 0.566 | 0.469 | 0.553 | 0.541 | 0.520 | 0.666 | 0.541 | 0.594 | 0.504 | 0.540 | 0.554 |
| Phi3.5 | 1.0 | $\operatorname{AttnLogDet}$ | | ✓ | 0.587 | 0.831 | 0.733 | 0.773 | 0.722 | 0.766 | 0.753 | 0.557 | 0.842 | 0.762 | 0.784 | 0.736 | 0.772 | 0.763 |
| Phi3.5 | 1.0 | $\operatorname{AttnLogDet}$ | ✓ | | 0.842 | 0.993 | 0.868 | 0.921 | 0.859 | 0.879 | 0.971 | 0.745 | 0.842 | 0.818 | 0.815 | 0.769 | 0.848 | 0.755 |
| Phi3.5 | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 0.755 | 0.852 | 0.794 | 0.820 | 0.790 | 0.809 | 0.864 | 0.710 | 0.809 | 0.795 | 0.787 | 0.752 | 0.799 | 0.747 |
| Phi3.5 | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 0.858 | 1.000 | 0.871 | 0.924 | 0.876 | 0.887 | 0.998 | 0.771 | 0.794 | 0.829 | 0.798 | 0.782 | 0.850 | 0.802 |
| Phi3.5 | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.733 | 0.771 | 0.755 | 0.755 | 0.718 | 0.779 | 0.713 | 0.723 | 0.816 | 0.769 | 0.755 | 0.732 | 0.792 | 0.732 |
| Phi3.5 | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.856 | 0.937 | 0.863 | 0.911 | 0.849 | 0.889 | 0.961 | 0.821 | 0.885 | 0.836 | 0.826 | 0.795 | 0.872 | 0.777 |
Table 7: (Part II) Performance comparison of methods on an extended set of configurations. We mark results for $\operatorname{AttentionScore}$ in gray as it is an unsupervised approach, not directly comparable to the others. In bold, we highlight the best performance on the test split of data, individually for each dataset, LLM, and temperature.
| Model | Temp | Method | All layers | Per layer | CoQA (train) | GSM8K (train) | HaluevalQA (train) | NQOpen (train) | SQuADv2 (train) | TriviaQA (train) | TruthfulQA (train) | CoQA (test) | GSM8K (test) | HaluevalQA (test) | NQOpen (test) | SQuADv2 (test) | TriviaQA (test) | TruthfulQA (test) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mistral-Nemo | 0.1 | $\operatorname{AttentionScore}$ | | ✓ | 0.504 | 0.727 | 0.574 | 0.591 | 0.509 | 0.550 | 0.546 | 0.515 | 0.697 | 0.559 | 0.587 | 0.527 | 0.545 | 0.681 |
| Mistral-Nemo | 0.1 | $\operatorname{AttentionScore}$ | ✓ | | 0.508 | 0.707 | 0.536 | 0.537 | 0.507 | 0.520 | 0.535 | 0.484 | 0.667 | 0.523 | 0.533 | 0.495 | 0.505 | 0.631 |
| Mistral-Nemo | 0.1 | $\operatorname{AttnLogDet}$ | | ✓ | 0.584 | 0.801 | 0.716 | 0.702 | 0.675 | 0.689 | 0.744 | 0.583 | 0.807 | 0.723 | 0.688 | 0.668 | 0.722 | 0.731 |
| Mistral-Nemo | 0.1 | $\operatorname{AttnLogDet}$ | ✓ | | 0.828 | 0.993 | 0.842 | 0.861 | 0.858 | 0.854 | 0.963 | 0.734 | 0.820 | 0.786 | 0.752 | 0.709 | 0.822 | 0.776 |
| Mistral-Nemo | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 0.708 | 0.865 | 0.751 | 0.749 | 0.749 | 0.747 | 0.797 | 0.672 | 0.795 | 0.740 | 0.701 | 0.704 | 0.738 | 0.717 |
| Mistral-Nemo | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 0.845 | 1.000 | 0.842 | 0.878 | 0.864 | 0.859 | 0.996 | 0.768 | 0.771 | 0.789 | 0.743 | 0.716 | 0.809 | 0.752 |
| Mistral-Nemo | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.763 | 0.777 | 0.772 | 0.732 | 0.723 | 0.781 | 0.725 | 0.759 | 0.751 | 0.760 | 0.697 | 0.696 | 0.769 | 0.710 |
| Mistral-Nemo | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.868 | 0.969 | 0.862 | 0.875 | 0.869 | 0.886 | 0.977 | 0.823 | 0.805 | 0.821 | 0.755 | 0.767 | 0.858 | 0.737 |
| Mistral-Nemo | 1.0 | $\operatorname{AttentionScore}$ | | ✓ | 0.502 | 0.656 | 0.586 | 0.606 | 0.546 | 0.553 | 0.570 | 0.525 | 0.670 | 0.587 | 0.588 | 0.564 | 0.570 | 0.632 |
| Mistral-Nemo | 1.0 | $\operatorname{AttentionScore}$ | ✓ | | 0.493 | 0.675 | 0.541 | 0.552 | 0.503 | 0.521 | 0.531 | 0.493 | 0.630 | 0.531 | 0.529 | 0.510 | 0.532 | 0.494 |
| Mistral-Nemo | 1.0 | $\operatorname{AttnLogDet}$ | | ✓ | 0.591 | 0.790 | 0.723 | 0.716 | 0.717 | 0.717 | 0.741 | 0.581 | 0.782 | 0.730 | 0.703 | 0.711 | 0.707 | 0.801 |
| Mistral-Nemo | 1.0 | $\operatorname{AttnLogDet}$ | ✓ | | 0.829 | 0.994 | 0.851 | 0.870 | 0.860 | 0.857 | 0.963 | 0.728 | 0.856 | 0.798 | 0.769 | 0.772 | 0.812 | 0.852 |
| Mistral-Nemo | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 0.704 | 0.845 | 0.762 | 0.742 | 0.757 | 0.752 | 0.806 | 0.670 | 0.781 | 0.749 | 0.742 | 0.719 | 0.737 | 0.804 |
| Mistral-Nemo | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 0.844 | 1.000 | 0.851 | 0.893 | 0.864 | 0.862 | 0.996 | 0.778 | 0.842 | 0.781 | 0.761 | 0.758 | 0.821 | 0.802 |
| Mistral-Nemo | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.765 | 0.820 | 0.790 | 0.749 | 0.740 | 0.804 | 0.779 | 0.738 | 0.808 | 0.763 | 0.708 | 0.723 | 0.785 | 0.818 |
| Mistral-Nemo | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.876 | 0.965 | 0.877 | 0.884 | 0.881 | 0.901 | 0.978 | 0.835 | 0.890 | 0.833 | 0.795 | 0.812 | 0.865 | 0.828 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttentionScore}$ | | ✓ | 0.520 | 0.759 | 0.538 | 0.517 | 0.577 | 0.535 | 0.571 | 0.525 | 0.685 | 0.552 | 0.592 | 0.625 | 0.533 | 0.724 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttentionScore}$ | ✓ | | 0.520 | 0.668 | 0.472 | 0.449 | 0.510 | 0.449 | 0.491 | 0.493 | 0.578 | 0.493 | 0.467 | 0.556 | 0.461 | 0.645 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttnLogDet}$ | | ✓ | 0.585 | 0.834 | 0.674 | 0.659 | 0.724 | 0.685 | 0.698 | 0.586 | 0.809 | 0.684 | 0.695 | 0.752 | 0.682 | 0.721 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttnLogDet}$ | ✓ | | 0.851 | 0.990 | 0.817 | 0.799 | 0.820 | 0.861 | 0.898 | 0.762 | 0.896 | 0.760 | 0.725 | 0.763 | 0.778 | 0.767 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 0.734 | 0.863 | 0.722 | 0.667 | 0.745 | 0.757 | 0.732 | 0.720 | 0.837 | 0.707 | 0.697 | 0.773 | 0.758 | 0.765 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 0.872 | 0.999 | 0.873 | 0.923 | 0.903 | 0.899 | 0.993 | 0.793 | 0.896 | 0.771 | 0.731 | 0.803 | 0.809 | 0.796 |
| Mistral-Small-24B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.802 | 0.781 | 0.720 | 0.646 | 0.714 | 0.742 | 0.694 | 0.800 | 0.850 | 0.719 | 0.674 | 0.784 | 0.757 | 0.827 |
| Mistral-Small-24B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.887 | 0.985 | 0.870 | 0.901 | 0.887 | 0.905 | 0.979 | 0.852 | 0.881 | 0.808 | 0.722 | 0.821 | 0.831 | 0.757 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttentionScore}$ | | ✓ | 0.511 | 0.706 | 0.555 | 0.582 | 0.561 | 0.562 | 0.542 | 0.535 | 0.713 | 0.566 | 0.576 | 0.567 | 0.574 | 0.606 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttentionScore}$ | ✓ | | 0.497 | 0.595 | 0.503 | 0.463 | 0.519 | 0.451 | 0.493 | 0.516 | 0.576 | 0.504 | 0.462 | 0.455 | 0.463 | 0.451 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttnLogDet}$ | | ✓ | 0.591 | 0.824 | 0.727 | 0.710 | 0.732 | 0.720 | 0.677 | 0.600 | 0.869 | 0.771 | 0.714 | 0.726 | 0.734 | 0.687 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttnLogDet}$ | ✓ | | 0.850 | 0.989 | 0.847 | 0.827 | 0.856 | 0.853 | 0.877 | 0.766 | 0.853 | 0.842 | 0.747 | 0.753 | 0.833 | 0.735 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 0.757 | 0.920 | 0.743 | 0.728 | 0.764 | 0.779 | 0.741 | 0.723 | 0.868 | 0.780 | 0.733 | 0.734 | 0.780 | 0.718 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 0.877 | 1.000 | 0.878 | 0.923 | 0.911 | 0.895 | 0.997 | 0.805 | 0.846 | 0.848 | 0.751 | 0.760 | 0.844 | 0.765 |
| Mistral-Small-24B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.814 | 0.860 | 0.762 | 0.733 | 0.790 | 0.766 | 0.703 | 0.805 | 0.897 | 0.790 | 0.712 | 0.781 | 0.779 | 0.725 |
| Mistral-Small-24B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.895 | 0.980 | 0.890 | 0.898 | 0.910 | 0.907 | 0.965 | 0.861 | 0.925 | 0.882 | 0.791 | 0.820 | 0.876 | 0.748 |
G.3 Best found hyperparameters
We present the hyperparameter values corresponding to the results in Table 1 and Table 6. Table 8 shows the optimal hyperparameter $k$ for selecting the top- $k$ eigenvalues from either the attention maps in $\operatorname{AttnEigvals}$ or the Laplacian matrix in $\operatorname{LapEigvals}$ . While fewer eigenvalues were sufficient for optimal performance in some cases, the best results were generally achieved with the highest tested value, $k{=}100$ .
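As an illustration of the feature-extraction step, the following minimal sketch computes a top- $k$ Laplacian eigenvalue vector from a single attention map. The symmetrization step and the combinatorial Laplacian $L = D - A$ are simplifying assumptions of this sketch, not necessarily the exact construction used in our implementation.

```python
import numpy as np

def lap_eigvals_features(attn: np.ndarray, k: int = 100) -> np.ndarray:
    """Top-k eigenvalues of the graph Laplacian of one attention map.

    `attn` is an (n, n) attention map interpreted as a weighted adjacency
    matrix; it is symmetrized here so that the spectrum is real (assumption).
    """
    a = (attn + attn.T) / 2.0                  # symmetrized adjacency
    lap = np.diag(a.sum(axis=1)) - a           # combinatorial Laplacian L = D - A
    eigvals = np.linalg.eigvalsh(lap)          # real eigenvalues, ascending order
    top = eigvals[::-1][:k]                    # keep the k largest
    if top.size < k:                           # zero-pad short sequences
        top = np.pad(top, (0, k - top.size))
    return top

# Toy example: a causal (lower-triangular), row-stochastic attention map
rng = np.random.default_rng(0)
attn = np.tril(rng.random((12, 12)))
attn /= attn.sum(axis=1, keepdims=True)
feats = lap_eigvals_features(attn, k=10)       # 10-dimensional probe input
```

Since the symmetrized adjacency has non-negative weights, the Laplacian is positive semi-definite, so the resulting features are non-negative and sorted in descending order.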
Table 9 reports the layer indices that yielded the highest performance for the per-layer models. Performance typically peaked in layers above the 10th, especially for Llama-3.1-8B, where attention maps from the final layers more often led to better hallucination detection. Interestingly, the first layer’s attention maps also produced strong performance in a few cases. Overall, no clear pattern emerges regarding the optimal layer, and as noted in prior work, selecting the best layer in the per-layer setup often requires a search.
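The per-layer search described above can be sketched as follows. The array shapes and probe are illustrative stand-ins, and `select_best_layer` is a hypothetical helper rather than part of our released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def select_best_layer(feats_per_layer, y_train, y_val, n_train):
    """Train one probe per layer and return the layer with the best val AUROC.

    `feats_per_layer` is a dict {layer_index: (n_samples, d) array}; the
    first `n_train` samples are used for training, the rest for validation.
    """
    scores = {}
    for layer, X in feats_per_layer.items():
        probe = LogisticRegression(max_iter=1000).fit(X[:n_train], y_train)
        scores[layer] = roc_auc_score(y_val, probe.predict_proba(X[n_train:])[:, 1])
    return max(scores, key=scores.get), scores

# Stand-in features for 4 layers and 120 samples (not the paper's data)
rng = np.random.default_rng(0)
y = np.tile([0, 1], 60)                        # 1 = hallucination, 0 = faithful
feats = {layer: rng.normal(size=(120, 16)) for layer in range(4)}
best, scores = select_best_layer(feats, y[:80], y[80:], n_train=80)
```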
Table 8: Values of the hyperparameter $k$, denoting how many of the largest eigenvalues are taken from the attention maps ($\operatorname{AttnEigvals}$) or the Laplacian matrix ($\operatorname{LapEigvals}$), corresponding to the best results in Table 1 and Table 6.
| Model | Temp | Method | All layers | Per layer | CoQA | GSM8K | HaluevalQA | NQOpen | SQuADv2 | TriviaQA | TruthfulQA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama3.1-8B | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 50 | 100 | 100 | 25 | 100 | 100 | 10 |
| Llama3.1-8B | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 100 | 100 | 100 | 100 | 50 | 100 |
| Llama3.1-8B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 50 | 50 | 100 | 10 | 100 | 100 | 100 |
| Llama3.1-8B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 10 | 100 | 100 | 100 | 100 | 100 | 100 |
| Llama3.1-8B | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Llama3.1-8B | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Llama3.1-8B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 50 | 100 | 100 | 100 | 100 | 100 |
| Llama3.1-8B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 100 | 100 | 25 | 100 | 100 | 100 | 100 |
| Llama3.2-3B | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 100 | 100 | 100 | 100 | 100 | 10 |
| Llama3.2-3B | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 100 | 25 | 100 | 100 | 100 | 100 |
| Llama3.2-3B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 25 | 100 | 100 | 100 | 50 | 5 |
| Llama3.2-3B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 25 | 100 | 100 | 100 | 100 | 100 | 100 |
| Llama3.2-3B | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 100 | 100 | 100 | 100 | 100 | 50 |
| Llama3.2-3B | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 50 | 100 | 100 | 100 | 100 | 100 |
| Llama3.2-3B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 50 | 100 | 10 | 100 | 100 | 25 |
| Llama3.2-3B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 25 | 100 | 100 | 100 | 100 | 100 | 100 |
| Phi3.5 | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Phi3.5 | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 25 | 10 | 10 | 25 | 100 | 50 |
| Phi3.5 | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 10 | 100 | 100 | 100 | 100 | 100 |
| Phi3.5 | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 10 | 100 | 50 | 100 | 100 | 100 | 100 |
| Phi3.5 | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Phi3.5 | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 100 | 100 | 10 | 100 | 100 | 50 |
| Phi3.5 | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 25 | 100 | 100 | 100 | 100 | 50 |
| Phi3.5 | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 10 | 25 | 100 | 100 | 100 | 100 | 100 |
| Mistral-Nemo | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 50 | 100 | 100 | 100 | 100 | 100 |
| Mistral-Nemo | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 50 | 100 | 100 | 100 | 100 | 100 |
| Mistral-Nemo | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 25 | 100 | 100 | 100 | 100 | 10 |
| Mistral-Nemo | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 10 | 100 | 25 | 100 | 50 | 100 | 100 |
| Mistral-Nemo | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Mistral-Nemo | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 100 | 100 | 100 | 100 | 50 | 100 |
| Mistral-Nemo | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 100 | 100 | 50 | 100 | 100 | 100 |
| Mistral-Nemo | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 10 | 100 | 50 | 100 | 100 | 100 | 100 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 100 | 100 | 10 | 100 | 50 | 25 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 100 | 100 | 100 | 100 | 100 | 25 |
| Mistral-Small-24B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 50 | 100 | 50 | 100 | 100 | 10 |
| Mistral-Small-24B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 25 | 100 | 100 | 100 | 100 | 10 | 100 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttnEigvals}$ | | ✓ | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttnEigvals}$ | ✓ | | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Mistral-Small-24B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 100 | 100 | 100 | 100 | 50 | 100 | 50 |
| Mistral-Small-24B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 10 | 100 | 50 | 10 | 10 | 100 | 50 |
Table 9: Layer indices (numbered from 0) corresponding to the best results for the per-layer models in Table 6.
| Model | Temp | Method | CoQA | GSM8K | HaluevalQA | NQOpen | SQuADv2 | TriviaQA | TruthfulQA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama3.1-8B | 0.1 | $\operatorname{AttentionScore}$ | 13 | 28 | 10 | 0 | 0 | 0 | 28 |
| Llama3.1-8B | 0.1 | $\operatorname{AttnLogDet}$ | 7 | 31 | 13 | 16 | 11 | 29 | 21 |
| Llama3.1-8B | 0.1 | $\operatorname{AttnEigvals}$ | 22 | 31 | 31 | 26 | 31 | 31 | 7 |
| Llama3.1-8B | 0.1 | $\operatorname{LapEigvals}$ | 15 | 25 | 14 | 20 | 29 | 31 | 20 |
| Llama3.1-8B | 1.0 | $\operatorname{AttentionScore}$ | 29 | 3 | 10 | 0 | 0 | 0 | 23 |
| Llama3.1-8B | 1.0 | $\operatorname{AttnLogDet}$ | 17 | 16 | 11 | 13 | 29 | 29 | 30 |
| Llama3.1-8B | 1.0 | $\operatorname{AttnEigvals}$ | 22 | 28 | 31 | 31 | 31 | 31 | 31 |
| Llama3.1-8B | 1.0 | $\operatorname{LapEigvals}$ | 15 | 11 | 14 | 31 | 29 | 29 | 29 |
| Llama3.2-3B | 0.1 | $\operatorname{AttentionScore}$ | 15 | 17 | 12 | 12 | 12 | 21 | 14 |
| Llama3.2-3B | 0.1 | $\operatorname{AttnLogDet}$ | 12 | 18 | 13 | 24 | 10 | 25 | 14 |
| Llama3.2-3B | 0.1 | $\operatorname{AttnEigvals}$ | 27 | 14 | 14 | 14 | 25 | 27 | 17 |
| Llama3.2-3B | 0.1 | $\operatorname{LapEigvals}$ | 11 | 24 | 8 | 12 | 25 | 12 | 14 |
| Llama3.2-3B | 1.0 | $\operatorname{AttentionScore}$ | 24 | 25 | 12 | 0 | 24 | 21 | 14 |
| Llama3.2-3B | 1.0 | $\operatorname{AttnLogDet}$ | 12 | 18 | 26 | 23 | 25 | 25 | 12 |
| Llama3.2-3B | 1.0 | $\operatorname{AttnEigvals}$ | 11 | 14 | 27 | 25 | 25 | 27 | 10 |
| Llama3.2-3B | 1.0 | $\operatorname{LapEigvals}$ | 11 | 10 | 18 | 12 | 25 | 25 | 11 |
| Phi3.5 | 0.1 | $\operatorname{AttentionScore}$ | 7 | 1 | 15 | 0 | 0 | 0 | 19 |
| Phi3.5 | 0.1 | $\operatorname{AttnLogDet}$ | 20 | 19 | 18 | 16 | 17 | 13 | 23 |
| Phi3.5 | 0.1 | $\operatorname{AttnEigvals}$ | 18 | 18 | 19 | 15 | 19 | 18 | 28 |
| Phi3.5 | 0.1 | $\operatorname{LapEigvals}$ | 18 | 23 | 28 | 28 | 19 | 31 | 28 |
| Phi3.5 | 1.0 | $\operatorname{AttentionScore}$ | 19 | 1 | 0 | 1 | 0 | 0 | 19 |
| Phi3.5 | 1.0 | $\operatorname{AttnLogDet}$ | 12 | 19 | 29 | 14 | 19 | 13 | 14 |
| Phi3.5 | 1.0 | $\operatorname{AttnEigvals}$ | 18 | 1 | 30 | 17 | 31 | 31 | 31 |
| Phi3.5 | 1.0 | $\operatorname{LapEigvals}$ | 18 | 16 | 28 | 15 | 19 | 31 | 31 |
| Mistral-Nemo | 0.1 | $\operatorname{AttentionScore}$ | 2 | 27 | 18 | 35 | 0 | 30 | 35 |
| Mistral-Nemo | 0.1 | $\operatorname{AttnLogDet}$ | 37 | 20 | 17 | 15 | 38 | 38 | 33 |
| Mistral-Nemo | 0.1 | $\operatorname{AttnEigvals}$ | 38 | 37 | 38 | 18 | 18 | 15 | 31 |
| Mistral-Nemo | 0.1 | $\operatorname{LapEigvals}$ | 16 | 38 | 37 | 37 | 18 | 37 | 8 |
| Mistral-Nemo | 1.0 | $\operatorname{AttentionScore}$ | 10 | 2 | 16 | 28 | 14 | 30 | 21 |
| Mistral-Nemo | 1.0 | $\operatorname{AttnLogDet}$ | 18 | 17 | 20 | 18 | 18 | 15 | 18 |
| Mistral-Nemo | 1.0 | $\operatorname{AttnEigvals}$ | 38 | 30 | 39 | 39 | 18 | 15 | 18 |
| Mistral-Nemo | 1.0 | $\operatorname{LapEigvals}$ | 16 | 39 | 37 | 37 | 18 | 37 | 18 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttentionScore}$ | 14 | 1 | 39 | 33 | 35 | 0 | 30 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttnLogDet}$ | 16 | 29 | 38 | 18 | 16 | 38 | 11 |
| Mistral-Small-24B | 0.1 | $\operatorname{AttnEigvals}$ | 36 | 27 | 36 | 19 | 16 | 38 | 20 |
| Mistral-Small-24B | 0.1 | $\operatorname{LapEigvals}$ | 21 | 3 | 35 | 24 | 36 | 35 | 34 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttentionScore}$ | 15 | 1 | 1 | 0 | 1 | 0 | 30 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttnLogDet}$ | 14 | 24 | 27 | 17 | 24 | 38 | 34 |
| Mistral-Small-24B | 1.0 | $\operatorname{AttnEigvals}$ | 36 | 39 | 27 | 21 | 24 | 36 | 23 |
| Mistral-Small-24B | 1.0 | $\operatorname{LapEigvals}$ | 21 | 39 | 36 | 16 | 21 | 35 | 34 |
G.4 Comparison with hidden-states-based baselines
We adopt the approach considered in previous works (Azaria and Mitchell, 2023; Orgad et al., 2025), aligned with our evaluation protocol. Specifically, we train a logistic regression classifier on PCA-projected hidden states to predict whether the model is hallucinating. To this end, we select the hidden states at the last token of the answer. While we also tested the last token of the prompt, we observed significantly lower performance, which aligns with the results presented by Orgad et al. (2025). We consider hidden states either from all layers or from a single layer at the selected token. In the all-layers scenario, we use the concatenation of the hidden states of all layers; in the per-layer scenario, we use the hidden states of each layer separately and select the best-performing layer.
Table 10 reports the obtained results. The all-layers hidden-states probe is consistently worse than our $\operatorname{LapEigvals}$, which further confirms the strength of the proposed method. Our work is among the first to detect hallucinations solely from attention maps, providing an important insight into the behavior of LLMs and motivating further theoretical research on information-flow patterns inside these models.
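The baseline probe described above can be sketched as follows. The data here are random stand-ins, and the PCA dimensionality and solver settings are assumptions for illustration, not the exact configuration used in our experiments.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# Stand-in hidden states of the answer's last token (n_samples, hidden_dim);
# in the all-layers variant this would be the concatenation over all layers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 256))
y = np.tile([0, 1], 100)                       # 1 = hallucination, 0 = faithful

probe = make_pipeline(PCA(n_components=32),    # PCA projection (dimension assumed)
                      LogisticRegression(max_iter=1000))
probe.fit(X[:150], y[:150])
auroc = roc_auc_score(y[150:], probe.predict_proba(X[150:])[:, 1])
```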
Table 10: Results of the probe trained on the hidden state features from the last generated token.
| Model | Temp | Method | Per layer | All layers | CoQA | GSM8K | HaluevalQA | NQOpen | SQuADv2 | TriviaQA | TruthfulQA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama3.1-8B | 0.1 | $\operatorname{HiddenStates}$ | ✓ | | 0.835 | 0.799 | 0.840 | 0.766 | 0.736 | 0.820 | 0.834 |
| Llama3.1-8B | 0.1 | $\operatorname{HiddenStates}$ | | ✓ | 0.821 | 0.765 | 0.825 | 0.728 | 0.723 | 0.791 | 0.785 |
| Llama3.1-8B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.757 | 0.844 | 0.793 | 0.711 | 0.733 | 0.780 | 0.764 |
| Llama3.1-8B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.836 | 0.887 | 0.867 | 0.793 | 0.782 | 0.872 | 0.822 |
| Llama3.1-8B | 1.0 | $\operatorname{HiddenStates}$ | ✓ | | 0.836 | 0.816 | 0.850 | 0.786 | 0.754 | 0.850 | 0.823 |
| Llama3.1-8B | 1.0 | $\operatorname{HiddenStates}$ | | ✓ | 0.835 | 0.759 | 0.847 | 0.757 | 0.749 | 0.838 | 0.808 |
| Llama3.1-8B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.743 | 0.833 | 0.789 | 0.725 | 0.724 | 0.794 | 0.764 |
| Llama3.1-8B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.830 | 0.872 | 0.874 | 0.827 | 0.791 | 0.889 | 0.829 |
| Llama3.2-3B | 0.1 | $\operatorname{HiddenStates}$ | ✓ | | 0.800 | 0.826 | 0.808 | 0.732 | 0.750 | 0.782 | 0.760 |
| Llama3.2-3B | 0.1 | $\operatorname{HiddenStates}$ | | ✓ | 0.790 | 0.802 | 0.784 | 0.709 | 0.721 | 0.760 | 0.770 |
| Llama3.2-3B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.676 | 0.835 | 0.774 | 0.730 | 0.727 | 0.712 | 0.690 |
| Llama3.2-3B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.801 | 0.852 | 0.844 | 0.771 | 0.778 | 0.821 | 0.743 |
| Llama3.2-3B | 1.0 | $\operatorname{HiddenStates}$ | ✓ | | 0.778 | 0.727 | 0.758 | 0.679 | 0.719 | 0.773 | 0.716 |
| Llama3.2-3B | 1.0 | $\operatorname{HiddenStates}$ | | ✓ | 0.773 | 0.652 | 0.753 | 0.657 | 0.681 | 0.761 | 0.618 |
| Llama3.2-3B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.715 | 0.815 | 0.765 | 0.696 | 0.696 | 0.738 | 0.767 |
| Llama3.2-3B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.812 | 0.870 | 0.857 | 0.798 | 0.751 | 0.836 | 0.787 |
| Phi3.5 | 0.1 | $\operatorname{HiddenStates}$ | ✓ | | 0.841 | 0.773 | 0.845 | 0.813 | 0.781 | 0.886 | 0.737 |
| Phi3.5 | 0.1 | $\operatorname{HiddenStates}$ | | ✓ | 0.833 | 0.696 | 0.840 | 0.806 | 0.774 | 0.878 | 0.689 |
| Phi3.5 | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.716 | 0.753 | 0.757 | 0.761 | 0.732 | 0.768 | 0.741 |
| Phi3.5 | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.810 | 0.785 | 0.819 | 0.815 | 0.791 | 0.858 | 0.717 |
| Phi3.5 | 1.0 | $\operatorname{HiddenStates}$ | ✓ | | 0.872 | 0.784 | 0.850 | 0.821 | 0.806 | 0.891 | 0.822 |
| Phi3.5 | 1.0 | $\operatorname{HiddenStates}$ | | ✓ | 0.853 | 0.686 | 0.844 | 0.804 | 0.790 | 0.887 | 0.752 |
| Phi3.5 | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.723 | 0.816 | 0.769 | 0.755 | 0.732 | 0.792 | 0.732 |
| Phi3.5 | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.821 | 0.885 | 0.836 | 0.826 | 0.795 | 0.872 | 0.777 |
| Mistral-Nemo | 0.1 | $\operatorname{HiddenStates}$ | ✓ | | 0.818 | 0.757 | 0.814 | 0.734 | 0.731 | 0.821 | 0.792 |
| Mistral-Nemo | 0.1 | $\operatorname{HiddenStates}$ | | ✓ | 0.805 | 0.741 | 0.784 | 0.722 | 0.730 | 0.793 | 0.699 |
| Mistral-Nemo | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.759 | 0.751 | 0.760 | 0.697 | 0.696 | 0.769 | 0.710 |
| Mistral-Nemo | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.823 | 0.805 | 0.821 | 0.755 | 0.767 | 0.858 | 0.737 |
| Mistral-Nemo | 1.0 | $\operatorname{HiddenStates}$ | ✓ | | 0.793 | 0.832 | 0.777 | 0.738 | 0.719 | 0.783 | 0.722 |
| Mistral-Nemo | 1.0 | $\operatorname{HiddenStates}$ | | ✓ | 0.771 | 0.834 | 0.771 | 0.706 | 0.685 | 0.779 | 0.644 |
| Mistral-Nemo | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.738 | 0.808 | 0.763 | 0.708 | 0.723 | 0.785 | 0.818 |
| Mistral-Nemo | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.835 | 0.890 | 0.833 | 0.795 | 0.812 | 0.865 | 0.828 |
| Mistral-Small-24B | 0.1 | $\operatorname{HiddenStates}$ | ✓ | | 0.838 | 0.872 | 0.744 | 0.680 | 0.700 | 0.749 | 0.735 |
| Mistral-Small-24B | 0.1 | $\operatorname{HiddenStates}$ | | ✓ | 0.815 | 0.812 | 0.703 | 0.632 | 0.629 | 0.726 | 0.589 |
| Mistral-Small-24B | 0.1 | $\operatorname{LapEigvals}$ | ✓ | | 0.800 | 0.850 | 0.719 | 0.674 | 0.784 | 0.757 | 0.827 |
| Mistral-Small-24B | 0.1 | $\operatorname{LapEigvals}$ | | ✓ | 0.852 | 0.881 | 0.808 | 0.722 | 0.821 | 0.831 | 0.757 |
| Mistral-Small-24B | 1.0 | $\operatorname{HiddenStates}$ | ✓ | | 0.801 | 0.879 | 0.720 | 0.665 | 0.603 | 0.684 | 0.581 |
| Mistral-Small-24B | 1.0 | $\operatorname{HiddenStates}$ | | ✓ | 0.770 | 0.760 | 0.703 | 0.617 | 0.575 | 0.659 | 0.485 |
| Mistral-Small-24B | 1.0 | $\operatorname{LapEigvals}$ | ✓ | | 0.805 | 0.897 | 0.790 | 0.712 | 0.781 | 0.779 | 0.725 |
| Mistral-Small-24B | 1.0 | $\operatorname{LapEigvals}$ | | ✓ | 0.861 | 0.925 | 0.882 | 0.791 | 0.820 | 0.876 | 0.748 |
Appendix H Extended results of ablations
In this section, we extend the ablation results presented in Section 6.1 and Section 6.2. Figure 10 compares probe performance across different numbers of top- $k$ eigenvalues for all five LLMs. Figure 11 presents a layer-wise performance comparison for each model.
<details>
<summary>x16.png Details</summary>

Multi-panel line chart of Test AUROC versus the number of top eigenvalues ($k∈\{5,10,25,50,100\}$), with one panel per model (Llama3.1-8B, Llama3.2-3B, Mistral-Nemo, Mistral-Small-24B, Phi3.5) and three methods per panel: $\operatorname{AttnEigvals}$ (blue dashed), $\operatorname{LapEigvals}$ (orange dash-dot), and the $\operatorname{AttnLogDet}$ reference (green solid, horizontal). $\operatorname{LapEigvals}$ attains the highest AUROC in every panel (roughly 0.83–0.89) and is largely insensitive to $k$, whereas $\operatorname{AttnEigvals}$ improves as $k$ grows, approaching the $\operatorname{AttnLogDet}$ baseline at $k{=}100$.
</details>
Figure 10: Probe performance across different top-$k$ eigenvalues, $k \in \{5,10,25,50,100\}$, for the TriviaQA dataset with $temp{=}1.0$ and the five considered LLMs.
<details>
<summary>x17.png Details</summary>

### Visual Description
## Line Chart: Test AUROC Performance Across Model Layers
### Overview
The chart compares test AUROC (Area Under the ROC Curve) performance across different neural network layers for five models: Llama3.1-8B, Llama3.2-3B, Mistral-Nemo, Mistral-Small-24B, and Phi3.5. Two metrics are tracked per model: "AttnEigval (all layers)" (solid lines) and "LapEigval (all layers)" (dashed lines), with distinct color coding for each model.
### Components/Axes
- **X-axis**: Layer Index (0–38), representing neural network layers.
- **Y-axis**: Test AUROC (0.60–0.90), normalized performance metric.
- **Legend**: Located at top-left, mapping models to colors and line styles:
- Llama3.1-8B: Blue (AttnEigval), Orange (LapEigval)
- Llama3.2-3B: Green (AttnEigval), Orange (LapEigval)
- Mistral-Nemo: Blue (AttnEigval), Orange (LapEigval)
- Mistral-Small-24B: Green (AttnEigval), Orange (LapEigval)
- Phi3.5: Blue (AttnEigval), Orange (LapEigval)
### Detailed Analysis
1. **Llama3.1-8B**:
- AttnEigval (solid blue): Starts at ~0.62, peaks at ~0.78 (layer 16), ends at ~0.74.
- LapEigval (dashed orange): Starts at ~0.60, peaks at ~0.79 (layer 16), ends at ~0.73.
2. **Llama3.2-3B**:
- AttnEigval (solid green): Starts at ~0.63, peaks at ~0.76 (layer 16), ends at ~0.72.
- LapEigval (dashed orange): Starts at ~0.61, peaks at ~0.77 (layer 16), ends at ~0.71.
3. **Mistral-Nemo**:
- AttnEigval (solid blue): Starts at ~0.60, peaks at ~0.77 (layer 16), ends at ~0.73.
- LapEigval (dashed orange): Starts at ~0.62, peaks at ~0.78 (layer 16), ends at ~0.74.
4. **Mistral-Small-24B**:
- AttnEigval (solid green): Starts at ~0.61, peaks at ~0.76 (layer 16), ends at ~0.72.
- LapEigval (dashed orange): Starts at ~0.63, peaks at ~0.77 (layer 16), ends at ~0.73.
5. **Phi3.5**:
- AttnEigval (solid blue): Starts at ~0.65, peaks at ~0.80 (layer 24), ends at ~0.76.
- LapEigval (dashed orange): Starts at ~0.64, peaks at ~0.79 (layer 24), ends at ~0.75.
### Key Observations
- **Consistent Peaks**: Most models peak around layer 16 (Phi3.5 around layer 24), suggesting mid-depth layers are most informative for AUROC.
- **LapEigval Superiority**: Dashed orange lines (LapEigval) generally outperform solid lines (AttnEigval) across most models, with differences up to 0.03 AUROC.
- **Phi3.5 Anomaly**: Exhibits the highest absolute performance (up to 0.80 AUROC) and the most pronounced layer-specific variation.
- **Layer 0 Baseline**: All models start near 0.60–0.65 AUROC, indicating minimal performance in initial layers.
### Interpretation
The data suggests that Laplacian eigenvalues (LapEigval) consistently contribute more to model performance than attention eigenvalues (AttnEigval) across all tested architectures. The layer 16 peak implies a critical point in network depth where discriminative power is maximized. Phi3.5's superior performance and variability highlight its architectural efficiency. The convergence of trends across models indicates that layer-specific eigenvalue distributions are a robust predictor of AUROC, with LapEigval serving as a more stable performance indicator. This analysis could guide layer-wise optimization strategies in transformer-based models.
</details>
Figure 11: Analysis of probe performance across different layers for the five considered LLMs on the TriviaQA dataset with $temp{=}1.0$ and $k{=}100$ top eigenvalues (results for probes operating on all layers provided for reference).
Appendix I Extended results of generalization study
We present the complete results of the generalization ablation discussed in Section 6.4 of the main paper. Table 11 reports the absolute Test AUROC values for each method and test dataset. Except for TruthfulQA, $\operatorname{LapEigvals}$ achieves the highest performance across all configurations. Notably, some methods perform close to random, whereas $\operatorname{LapEigvals}$ consistently outperforms this baseline. Regarding relative performance drop (Figure 12), $\operatorname{LapEigvals}$ remains competitive, exhibiting the lowest drop in nearly half of the scenarios. These results indicate that our method is robust but warrants further investigation across more datasets, particularly with a deeper analysis of TruthfulQA.
Table 11: Full results of the generalization study. Gray denotes results obtained on the test split of the same QA dataset as the training split; all other results are from the test split of a different QA dataset. We highlight the best performance in bold.
| Method | Train dataset | CoQA | GSM8K | HaluevalQA | NQOpen | SQuADv2 | TriviaQA | TruthfulQA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\operatorname{AttnLogDet}$ | CoQA | 0.758 | 0.518 | 0.687 | 0.644 | 0.646 | 0.640 | 0.587 |
| $\operatorname{AttnEigvals}$ | CoQA | 0.782 | 0.426 | 0.726 | 0.696 | 0.659 | 0.702 | 0.560 |
| $\operatorname{LapEigvals}$ | CoQA | 0.830 | 0.555 | 0.790 | 0.748 | 0.743 | 0.786 | 0.629 |
| $\operatorname{AttnLogDet}$ | GSM8K | 0.515 | 0.828 | 0.513 | 0.502 | 0.555 | 0.503 | 0.586 |
| $\operatorname{AttnEigvals}$ | GSM8K | 0.510 | 0.838 | 0.563 | 0.545 | 0.549 | 0.579 | 0.557 |
| $\operatorname{LapEigvals}$ | GSM8K | 0.568 | 0.872 | 0.648 | 0.596 | 0.611 | 0.610 | 0.538 |
| $\operatorname{AttnLogDet}$ | HaluevalQA | 0.580 | 0.500 | 0.823 | 0.750 | 0.727 | 0.787 | 0.668 |
| $\operatorname{AttnEigvals}$ | HaluevalQA | 0.579 | 0.569 | 0.819 | 0.792 | 0.743 | 0.803 | 0.688 |
| $\operatorname{LapEigvals}$ | HaluevalQA | 0.685 | 0.448 | 0.873 | 0.796 | 0.778 | 0.848 | 0.595 |
| $\operatorname{AttnLogDet}$ | NQOpen | 0.552 | 0.594 | 0.720 | 0.794 | 0.717 | 0.766 | 0.597 |
| $\operatorname{AttnEigvals}$ | NQOpen | 0.546 | 0.633 | 0.725 | 0.790 | 0.714 | 0.770 | 0.618 |
| $\operatorname{LapEigvals}$ | NQOpen | 0.656 | 0.676 | 0.792 | 0.827 | 0.748 | 0.843 | 0.564 |
| $\operatorname{AttnLogDet}$ | SQuADv2 | 0.553 | 0.695 | 0.716 | 0.774 | 0.746 | 0.757 | 0.658 |
| $\operatorname{AttnEigvals}$ | SQuADv2 | 0.576 | 0.723 | 0.730 | 0.737 | 0.768 | 0.760 | 0.711 |
| $\operatorname{LapEigvals}$ | SQuADv2 | 0.673 | 0.754 | 0.801 | 0.806 | 0.791 | 0.841 | 0.625 |
| $\operatorname{AttnLogDet}$ | TriviaQA | 0.565 | 0.618 | 0.761 | 0.793 | 0.736 | 0.838 | 0.572 |
| $\operatorname{AttnEigvals}$ | TriviaQA | 0.577 | 0.667 | 0.770 | 0.786 | 0.742 | 0.843 | 0.616 |
| $\operatorname{LapEigvals}$ | TriviaQA | 0.702 | 0.612 | 0.813 | 0.818 | 0.773 | 0.889 | 0.522 |
| $\operatorname{AttnLogDet}$ | TruthfulQA | 0.550 | 0.706 | 0.597 | 0.603 | 0.604 | 0.662 | 0.811 |
| $\operatorname{AttnEigvals}$ | TruthfulQA | 0.538 | 0.579 | 0.600 | 0.595 | 0.646 | 0.685 | 0.833 |
| $\operatorname{LapEigvals}$ | TruthfulQA | 0.590 | 0.722 | 0.552 | 0.529 | 0.569 | 0.631 | 0.829 |
<details>
<summary>x18.png Details</summary>

### Visual Description
## Bar Chart: Model Performance Drop (% of AUROC) Across Datasets
### Overview
The image displays a grouped bar chart comparing the performance drop (as a percentage of AUROC) for three models—**AttnLog**, **AttnEig**, and **LapEig**—across seven question-answering datasets: **TriviaQA**, **NQOpen**, **GSM8K**, **HaluevalQA**, **CoQA**, **SQuADv2**, and **TruthfulQA**. Each dataset is represented by a cluster of three bars (one per model), with colors mapped to models via a legend at the top.
### Components/Axes
- **X-axis**: Datasets (TriviaQA, NQOpen, GSM8K, HaluevalQA, CoQA, SQuADv2, TruthfulQA), ordered left to right.
- **Y-axis**: Drop (% of AUROC), ranging from 0% to 50% in increments of 10%.
- **Legend**:
- Green: AttnLog (all layers)
- Blue: AttnEig (all layers)
- Orange: LapEig (all layers)
- **Bar Groups**: Each dataset has three adjacent bars (green, blue, orange) representing the three models.
### Detailed Analysis
1. **TriviaQA**:
- AttnLog: ~25% drop
- AttnEig: ~20% drop
- LapEig: ~30% drop
2. **NQOpen**:
- AttnLog: ~10% drop
- AttnEig: ~15% drop
- LapEig: ~25% drop
3. **GSM8K**:
- AttnLog: ~35% drop
- AttnEig: ~30% drop
- LapEig: ~35% drop
4. **HaluevalQA**:
- AttnLog: ~40% drop
- AttnEig: ~35% drop
- LapEig: ~45% drop
5. **CoQA**:
- AttnLog: ~5% drop
- AttnEig: ~10% drop
- LapEig: ~15% drop
6. **SQuADv2**:
- AttnLog: ~20% drop
- AttnEig: ~25% drop
- LapEig: ~30% drop
7. **TruthfulQA**:
- AttnLog: ~30% drop
- AttnEig: ~35% drop
- LapEig: ~40% drop
### Key Observations
- **LapEig** consistently shows the highest drop across most datasets (e.g., 45% in HaluevalQA, 40% in TruthfulQA), suggesting it is the least robust model.
- **AttnLog** performs best (lowest drop) in **CoQA** (~5%) and **NQOpen** (~10%), while **AttnEig** excels in **CoQA** (~10%).
- **GSM8K** and **HaluevalQA** exhibit the largest drops overall, with LapEig reaching ~45% in HaluevalQA.
- **SQuADv2** and **TruthfulQA** show moderate drops, with LapEig again leading in decline.
### Interpretation
The data indicates that **LapEig** is the least robust model, with the highest performance drops across nearly all datasets. **AttnLog** and **AttnEig** demonstrate more variability, with AttnLog outperforming others in specific datasets like CoQA and NQOpen. The largest drops in GSM8K and HaluevalQA suggest these datasets are more challenging for all models, potentially due to their complexity or ambiguity. The consistent underperformance of LapEig highlights its sensitivity to dataset-specific features, while AttnLog’s lower drops in CoQA and NQOpen may reflect better adaptability to structured or fact-based questions. These trends underscore the importance of model architecture design for robustness in QA tasks.
</details>
Figure 12: Generalization across datasets measured as a percent performance drop in Test AUROC (less is better) when trained on one dataset and tested on the other. Training datasets are indicated in the plot titles, while test datasets are shown on the $x$ -axis. Results computed on Llama-3.1-8B with $k{=}100$ top eigenvalues and $temp{=}1.0$ .
Appendix J Influence of dataset size
One limitation of $\operatorname{LapEigvals}$ is that it is a supervised method and thus requires labelled hallucination data. To check whether it requires a large volume of data, we conducted an additional study in which we trained $\operatorname{LapEigvals}$ on only a stratified fraction of the available examples for each hallucination dataset (using a dataset created from Llama-3.1-8B outputs) and evaluated on the full test split. The AUROC scores are presented in Table 12. As shown, $\operatorname{LapEigvals}$ maintains reasonable performance even when trained on only a few hundred examples. Additionally, we emphasise that labelling can be efficiently automated and scaled using the llm-as-judge paradigm.
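The stratified-fraction protocol above can be sketched as follows (a minimal NumPy illustration with synthetic labels; the helper name and setup are ours, not the paper's):

```python
import numpy as np

def stratified_fraction(labels: np.ndarray, frac: float, seed: int = 0) -> np.ndarray:
    """Return indices of a stratified subsample keeping `frac` of each class."""
    rng = np.random.default_rng(seed)
    keep = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        n_keep = max(1, int(round(frac * idx.size)))
        keep.append(rng.choice(idx, size=n_keep, replace=False))
    return np.sort(np.concatenate(keep))

# Synthetic hallucination labels: 300 positives among 1000 examples.
y = (np.arange(1000) % 10 < 3).astype(int)
idx = stratified_fraction(y, frac=0.1)
# The 10% subsample preserves the 30/70 class ratio.
```

Training the probe on `X[idx], y[idx]` and evaluating on the full test split then mirrors the table's setup.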
Table 12: Impact of training dataset size on performance. Test AUROC scores are reported for different fractions of the training data. The study uses a dataset derived from Llama-3.1-8B answers with $temp{=}1.0$ and $k{=}100$ top eigenvalues, with absolute dataset sizes shown in parentheses.
Appendix K Reliability of spectral features
Our method relies on ordered spectral features, which may be sensitive to perturbations and thus of limited robustness. In our setup, both attention weights and extracted features were stored in the bfloat16 type, which has lower precision than float32. This reduced precision acts as a form of regularization: minor fluctuations are rounded off, making the method more robust to small perturbations that might otherwise affect the eigenvalue ordering.
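The rounding effect can be illustrated with float16 (NumPy has no bfloat16 type, but the mechanism is the same: coarser precision collapses near-ties that could otherwise flip the eigenvalue ordering):

```python
import numpy as np

# Two near-tied eigenvalues: distinct at float32 precision, identical
# after rounding to half precision, so the ambiguous ordering disappears.
x = np.array([0.500004, 0.500001], dtype=np.float32)
x16 = x.astype(np.float16)  # both round to 0.5
```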
To further investigate sensitivity to perturbations, we conducted a controlled analysis on one model by adding Gaussian noise to randomly selected input feature dimensions before the eigenvalue-sorting step. We varied both the noise standard deviation and the fraction of perturbed dimensions (ranging from 0.5 to 1.0). Perturbations were applied consistently to both the training and test sets. In Table 13 we report the mean and standard deviation of performance across 5 runs on hallucination data generated by Llama-3.1-8B on the TriviaQA dataset with $temp{=}1.0$, along with the percentage change relative to the unperturbed baseline ($\sigma{=}0.0$ indicates no perturbation applied). We observe that small perturbations have a negligible impact on performance, further confirming the robustness of our method.
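A sketch of this perturbation procedure, under the assumption that each input feature vector is a row of sorted eigenvalues (function name ours, not from the paper):

```python
import numpy as np

def perturb_features(X: np.ndarray, sigma: float, frac: float, seed: int = 0) -> np.ndarray:
    """Add N(0, sigma^2) noise to a random `frac` of feature dimensions,
    then re-sort every row in descending order (the eigenvalue-sorting step)."""
    rng = np.random.default_rng(seed)
    n_dims = int(round(frac * X.shape[1]))
    dims = rng.choice(X.shape[1], size=n_dims, replace=False)
    Xp = X.copy()
    Xp[:, dims] += rng.normal(0.0, sigma, size=(X.shape[0], n_dims))
    return np.sort(Xp, axis=1)[:, ::-1]

# Toy features: 8 examples, 100 sorted "eigenvalues" each.
X = np.sort(np.random.default_rng(1).random((8, 100)), axis=1)[:, ::-1]
Xp = perturb_features(X, sigma=1e-3, frac=0.5)
```

Applying the same `perturb_features` call to both the training and test features matches the consistency requirement stated above.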
Table 13: Impact of Gaussian noise perturbations on input features for different top-$k$ eigenvalues and noise standard deviations $\sigma$. Results are averaged over five perturbations, with mean and standard deviation reported; relative percentage drops are shown in parentheses. Results were obtained for Llama-3.1-8B with $temp{=}1.0$ on the TriviaQA dataset.
| $k$ | $\sigma{=}0.0$ | $\sigma_{1}$ | $\sigma_{2}$ | $\sigma_{3}$ | $\sigma_{4}$ | $\sigma_{5}$ |
| --- | --- | --- | --- | --- | --- | --- |
| 5 | 0.867 ± 0.0 (0.0%) | 0.867 ± 0.0 (0.0%) | 0.867 ± 0.0 (0.0%) | 0.867 ± 0.0 (-0.01%) | 0.859 ± 0.003 (0.86%) | 0.573 ± 0.017 (33.84%) |
| 10 | 0.867 ± 0.0 (0.0%) | 0.867 ± 0.0 (0.0%) | 0.867 ± 0.0 (0.0%) | 0.867 ± 0.0 (0.03%) | 0.861 ± 0.002 (0.78%) | 0.579 ± 0.01 (33.3%) |
| 20 | 0.869 ± 0.0 (0.0%) | 0.869 ± 0.0 (0.0%) | 0.869 ± 0.0 (0.0%) | 0.869 ± 0.0 (0.0%) | 0.862 ± 0.002 (0.84%) | 0.584 ± 0.018 (32.76%) |
| 50 | 0.870 ± 0.0 (0.0%) | 0.870 ± 0.0 (0.0%) | 0.870 ± 0.0 (0.0%) | 0.869 ± 0.0 (0.02%) | 0.864 ± 0.002 (0.66%) | 0.606 ± 0.014 (30.31%) |
| 100 | 0.872 ± 0.0 (0.0%) | 0.872 ± 0.0 (0.0%) | 0.872 ± 0.0 (0.01%) | 0.872 ± 0.0 (-0.0%) | 0.866 ± 0.001 (0.66%) | 0.640 ± 0.007 (26.64%) |
Appendix L Cost and time analysis
Providing precise cost and time measurements is nontrivial due to the multi-stage nature of our method, as it involves external services (e.g., OpenAI API for labelling), and the runtime and cost can vary depending on the hardware and platform used. Nonetheless, we present an overview of the costs and complexity as follows.
1. Inference with the LLM (preparing the hallucination dataset) – introduces no additional cost beyond regular LLM inference; however, it may preclude certain optimizations (e.g., FlashAttention (Dao et al., 2022)), since the full attention matrix must be materialized in memory.
2. Automated labelling with llm-as-judge using the OpenAI API – we estimate labelling costs using the tiktoken library and OpenAI API pricing ($0.60 per 1M output tokens). These estimates exclude caching effects and could be reduced using the Batch API. Table 14 reports total and per-item hallucination labelling costs across all datasets (covering 5 LLMs and 2 temperature settings). No estimate is given for the GSM8K dataset, as its outputs are evaluated by exact match.
3. Computing spectral features – since causal masking makes the attention matrix, and hence the Laplacian, lower-triangular, the eigenvalues of the Laplacian lie on its diagonal, so the complexity is dominated by computing the out-degree matrix, which in turn reduces to averaging over the rows of the attention matrix. This takes $O(n^{2})$ time, where $n$ is the number of tokens; sorting the eigenvalues then takes $O(n\log n)$ time. The overall complexity is multiplied by the number of layers and heads of the particular LLM. In practice, we fused feature computation with LLM inference, since we observed a memory bottleneck compared to storing raw attention matrices on disk.
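The eigenvalue extraction in step 3 can be sketched as follows (a minimal NumPy illustration for a single head, not the fused implementation; we assume out-degrees are taken as row sums of the attention map):

```python
import numpy as np

def lap_eigvals(attn: np.ndarray, k: int) -> np.ndarray:
    """Top-k Laplacian eigenvalues of a causal (lower-triangular) attention map.

    Causal masking makes `attn` lower-triangular, so L = D - A is triangular
    and its eigenvalues are its diagonal entries: no eigendecomposition needed.
    """
    deg = attn.sum(axis=1)             # out-degrees (row sums), O(n^2)
    lap_diag = deg - np.diag(attn)     # diagonal of L = D - A
    top = np.sort(lap_diag)[::-1][:k]  # descending sort, O(n log n)
    if k > top.size:                   # pad short sequences to a fixed size
        top = np.pad(top, (0, k - top.size))
    return top

# Toy causal attention map over 4 tokens (rows sum to 1).
rng = np.random.default_rng(0)
A = np.tril(rng.random((4, 4)))
A /= A.sum(axis=1, keepdims=True)
feats = lap_eigvals(A, k=3)  # per-head feature vector for the probe
```

Repeating this per layer and head, then concatenating, yields the probe input described in the paper.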
Table 14: Estimation of costs regarding llm-as-judge labelling with OpenAI API.
| Dataset | Input tokens | Output tokens | Avg. input tokens | Avg. output tokens | Input cost ($) | Output cost ($) | Total cost ($) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CoQA | 52,194,357 | 320,613 | 653.82 | 4.02 | 7.83 | 0.19 | 8.02 |
| NQOpen | 11,853,621 | 150,782 | 328.36 | 4.18 | 1.78 | 0.09 | 1.87 |
| HaluEvalQA | 33,511,346 | 421,572 | 335.11 | 4.22 | 5.03 | 0.25 | 5.28 |
| SQuADv2 | 19,601,322 | 251,264 | 330.66 | 4.24 | 2.94 | 0.15 | 3.09 |
| TriviaQA | 41,114,137 | 408,067 | 412.79 | 4.10 | 6.17 | 0.24 | 6.41 |
| TruthfulQA | 2,908,183 | 33,836 | 355.96 | 4.14 | 0.44 | 0.02 | 0.46 |
| Total | 158,242,166 | 1,575,134 | 402.62 | 4.15 | 24.19 | 0.94 | 25.13 |
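As a sanity check on Table 14, the CoQA cost row can be reproduced from its token counts, assuming the stated $0.60 per 1M output tokens and (our assumption) $0.15 per 1M input tokens:

```python
# Rough cost check for the CoQA row of Table 14.
in_tokens, out_tokens = 52_194_357, 320_613
in_cost = in_tokens / 1e6 * 0.15    # assumed input-token price
out_cost = out_tokens / 1e6 * 0.60  # stated output-token price
total = in_cost + out_cost          # matches the table's total column
```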
Appendix M QA prompts
Below, we describe all QA prompts used to obtain the results presented in this work:
- prompt $p_{1}$ – medium-length one-shot prompt with a single QA example (Listing 1),
- prompt $p_{2}$ – medium-length zero-shot prompt without examples (Listing 2),
- prompt $p_{3}$ – long few-shot prompt, the main prompt used in this work; a modification of the prompt used by Kossen et al. (2024) (Listing 3),
- prompt $p_{4}$ – short zero-shot prompt without examples (Listing 4),
- prompt $gsm8k$ – short prompt for the GSM8K dataset with an output-format instruction (Listing 5).
Listing 1: One-shot QA (prompt $p_{1}$ )
Deliver a succinct and straightforward answer to the question below. Focus on being brief while maintaining essential information. Keep extra details to a minimum.
Here is an example:
Question: What is the Riemann hypothesis?
Answer: All non-trivial zeros of the Riemann zeta function have real part 1/2
Question: {question}
Answer:
Listing 2: Zero-shot QA (prompt $p_{2}$ ).
Please provide a concise and direct response to the following question, keeping your answer as brief and to-the-point as possible while ensuring clarity. Avoid any unnecessary elaboration or additional details.
Question: {question}
Answer:
Listing 3: Few-shot QA prompt (prompt $p_{3}$ ), modified version of prompt used by (Kossen et al., 2024).
Answer the following question as briefly as possible.
Here are several examples:
Question: What is the capital of France?
Answer: Paris
Question: Who wrote *Romeo and Juliet*?
Answer: William Shakespeare
Question: What is the boiling point of water in Celsius?
Answer: 100°C
Question: How many continents are there on Earth?
Answer: Seven
Question: What is the fastest land animal?
Answer: Cheetah
Question: {question}
Answer:
Listing 4: Zero-shot short QA prompt (prompt $p_{4}$ ).
Answer the following question as briefly as possible.
Question: {question}
Answer:
Listing 5: Zero-shot QA prompt for GSM8K dataset.
Given the following problem, reason and give a final answer to the problem.
Problem: {question}
Your response should end with "The final answer is [answer]" where [answer] is the response to the problem.
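All of the listings above use a `{question}` placeholder, filled per example with Python's `str.format`; a minimal sketch using the zero-shot short prompt ($p_{4}$):

```python
# Template mirroring Listing 4 (prompt p4); {question} is the placeholder
# convention used throughout the listings.
PROMPT_P4 = (
    "Answer the following question as briefly as possible.\n"
    "Question: {question}\n"
    "Answer:"
)
filled = PROMPT_P4.format(question="What is the capital of France?")
```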
Appendix N LLM-as-Judge prompt
During hallucination dataset construction, we leveraged the llm-as-judge approach to label answers generated by the LLMs. To this end, we utilized gpt-4o-mini with the prompt in Listing 6, an adapted version of the prompt used by (Orgad et al., 2025).
Listing 6: Prompt used in llm-as-judge approach for determining hallucination labels. Prompt is a modified version of the one used by (Orgad et al., 2025).
You will evaluate answers to questions. For each question, I will provide a model's answer and one or more correct reference answers.
You would have to determine if the model answer is correct, incorrect, or model refused to answer. The model answer to be correct has to match from one to all of the possible correct answers.
If the model answer is correct, write 'correct' and if it is not correct, write 'incorrect'. If the Model Answer is a refusal, stating that they don't have enough information, write 'refuse'.
For example:
Question: who is the young guitarist who played with buddy guy?
Ground Truth: [Quinn Sullivan, Eric Gales]
Model Answer: Ronnie Earl
Correctness: incorrect
Question: What is the name of the actor who plays Iron Man in the Marvel movies?
Ground Truth: [Robert Downey Jr.]
Model Answer: Robert Downey Jr. played the role of Tony Stark / Iron Man in the Marvel Cinematic Universe films.
Correctness: correct
Question: what is the capital of France?
Ground Truth: [Paris]
Model Answer: I don't have enough information to answer this question.
Correctness: refuse
Question: who was the first person to walk on the moon?
Ground Truth: [Neil Armstrong]
Model Answer: I apologize, but I cannot provide an answer without verifying the historical facts.
Correctness: refuse
Question: {{question}}
Ground Truth: {{gold_answer}}
Model Answer: {{predicted_answer}}
Correctness:
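The judge's completion must then be mapped back to one of the three labels; a hypothetical post-processing helper (not described in the paper) might look like:

```python
def parse_judge(output: str) -> str:
    """Map a raw judge completion to 'correct' / 'incorrect' / 'refuse'.

    Hypothetical helper: 'incorrect' is checked before 'correct' because
    the former contains the latter as a prefix-overlapping substring.
    """
    text = output.strip().lower()
    if text.startswith("incorrect"):
        return "incorrect"
    if text.startswith("correct"):
        return "correct"
    if text.startswith("refuse"):
        return "refuse"
    return "unknown"
```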