arXiv:2404.15993
# Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach
**Authors**: Linyu Liu, Yu Pan, Xiaocheng Li, Guanting Chen
(University of North Carolina, Tsinghua University, HKUST(GZ), Imperial College London)
Abstract
In this paper, we study the problem of uncertainty estimation and calibration for LLMs. We begin by formulating the uncertainty estimation problem, a relevant yet underexplored area in existing literature. We then propose a supervised approach that leverages labeled datasets to estimate the uncertainty in LLMsâ responses. Based on the formulation, we illustrate the difference between the uncertainty estimation for LLMs and that for standard ML models and explain why the hidden neurons of the LLMs may contain uncertainty information. Our designed approach demonstrates the benefits of utilizing hidden activations to enhance uncertainty estimation across various tasks and shows robust transferability in out-of-distribution settings. We distinguish the uncertainty estimation task from the uncertainty calibration task and show that better uncertainty estimation leads to better calibration performance. Furthermore, our method is easy to implement and adaptable to different levels of model accessibility including black box, grey box, and white box. footnotetext: Equal contribution. footnotetext: Email address: linyuliu@unc.edu, yupan@hkust-gz.edu.cn, xiaocheng.li@imperial.ac.uk, guanting@unc.edu.
1 Introduction
Large language models (LLMs) have marked a significant milestone in the advancement of natural language processing (Radford et al., 2019; Brown et al., 2020; Ouyang et al., 2022; Bubeck et al., 2023), showcasing remarkable capabilities in understanding and generating human-like text. However, their tendency to produce hallucinations (misleading or fabricated information) raises concerns about their reliability and trustworthiness (Rawte et al., 2023). The problem of whether we should trust the response from machine learning models is critical in machine-assisted decision applications, such as self-driving cars (Ramos et al., 2017), medical diagnosis (Esteva et al., 2017), and loan approval processes (Burrell, 2016), where errors can lead to significant loss.
This issue becomes even more pressing in the era of generative AI, as the outputs of these models are random variables sampled from a distribution, meaning incorrect responses can still be produced with positive probability. Due to this inherent randomness, the need to address uncertainty estimation in generative AI is even greater than that in other machine learning models (Gal and Ghahramani, 2016; Lakshminarayanan et al., 2017; Guo et al., 2017; Minderer et al., 2021), and yet there has been limited research in this area (Kuhn et al., 2023; Manakul et al., 2023; Tian et al., 2023).
(Figure 1, described: the user's question "What's the capital of France?" is fed to the LLM, which randomly generates answers: "It's Paris" w.p. 0.5, "Paris" w.p. 0.4, and "London" w.p. 0.1. An uncertainty estimation module receives the input, the LLM's activations, and the output, and assigns confidence scores: 0.999 to "It's Paris", 0.999 to "Paris", and 0.1 to "London".)
Figure 1: An example to illustrate the uncertainty estimation task. The LLM randomly generates an answer to the question (It's Paris, Paris, or London). The goal of uncertainty estimation is to assign a confidence score to the question-answer pair, where a higher score indicates higher confidence in the correctness of the answer.
In this work, we aim to formally define the problem of uncertainty estimation for LLMs and propose methods to address it. As shown in Figure 1, uncertainty estimation for LLMs can be broadly defined as the task of predicting the quality of the generated response based on the input. In this context, "quality" typically refers to aspects such as confidence, truthfulness, and uncertainty. Assuming access to a universal metric for evaluating the confidence of the output, the goal of uncertainty estimation is to produce a confidence score that closely aligns with this metric. Given the inherent randomness in LLMs, where incorrect responses can still be generated with positive probability, uncertainty estimation serves as a crucial safeguard. It helps assess the reliability of responses, enhance the trustworthiness of the model, and guide users on when to trust or question the output.
It is also worth noting that calibration is closely related and can be viewed as a subclass of uncertainty estimation, where the metric corresponds to the conditional probability at the individual level. Most studies on uncertainty estimation or calibration in language models focus on fixed-dimensional prediction tasks (i.e., the LLM's output is a single token from a finite set), such as sentiment analysis, natural language inference, and commonsense reasoning (Zhou et al., 2023; Si et al., 2022; Xiao et al., 2022; Desai and Durrett, 2020). However, given the structural differences in how modern LLMs are used, alongside their proven capability to handle complex, free-form tasks with variable-length outputs, there is a growing need to address uncertainty estimation and calibration specifically for general language tasks in the domain of LLMs.
This work explores a simple supervised method motivated by two ideas in the existing literature on LLMs. First, prior work on uncertainty estimation for LLMs primarily focused on designing uncertainty metrics in an unsupervised way by examining aspects like the generated outputs' consistency, similarity, entropy, and other relevant characteristics (Lin et al., 2023; Manakul et al., 2023; Kuhn et al., 2023; Hou et al., 2023; Lin et al., 2022; Chen et al., 2024). Since these metrics do not require knowledge of the model's weights, they can be applied to some black-box or grey-box models. Second, a growing stream of literature argues that the hidden layers' activation values within LLMs offer insights into the LLMs' knowledge and confidence (Slobodkin et al., 2023; Ahdritz et al., 2024; Duan et al., 2024). This idea has shown success in other areas of LLM research, such as hallucination detection (CH-Wang et al., 2023; Azaria and Mitchell, 2023; Ahdritz et al., 2024). Based on this argument, white-box LLMs, which allow access to more of the LLM's internal states, such as logits and hidden layers, are believed to have the capacity to offer a more nuanced understanding and improved uncertainty estimation results (Verma et al., 2023; Chen et al., 2024; Plaut et al., 2024).
Both of the above approaches, however, have key limitations. For the unsupervised metrics, given the complexity of LLMs' underlying architectures, semantic information may be diluted when processed through self-attention mechanisms and during token encoding/decoding. For the second idea, the requirement of hidden-layer features precludes its application to closed-source/black-box LLMs. In this paper, we combine the strengths of these two ideas by proposing a general supervised learning method and pipeline design that address these limitations. Specifically, to incorporate more features (e.g., hidden layers) in estimating the uncertainty, we train an external uncertainty estimation model in a supervised way to estimate the uncertainty/confidence of the response generated from an LLM (target LLM). As the quality of the response reveals to what extent we should believe the response is correct, we formulate this supervised uncertainty estimation problem as a regression task and prepare the labels in the training dataset by measuring the response's quality. To extend our method to black-box LLMs, we allow the semantic features of the question-response pair to come from another language model (tool LLM). The overall pipeline of this method is shown in Figure 2.
(Figure 2, described: the query $\bm{x}$ "What's the capital of France?" is fed to the target LLM, which generates the response $\bm{y}$ "It's Paris.". The response is compared to the reference response "Paris" via a quality metric (ROUGE-L/BLEU) to produce the score $s(\bm{y},\bm{y}_{\text{true}})$. The query and response are also fed to the tool LLM, whose hidden-layer activations and probability/entropy features, together with the score as a label, are used to train the uncertainty estimator, which then predicts the uncertainty of the target LLM's responses.)
Figure 2: Illustration of our proposed supervised method. The tool LLM is an open-source LLM and can be different from the target LLM. In the training phase, where the reference response is available, we train the uncertainty estimator using the quality of the response as the label. In the test phase, the uncertainty estimator predicts the quality of the generated response to obtain an uncertainty score.
Our contributions are four-fold:
- First, we formally define the task of uncertainty estimation, whereas some of the existing literature either does not distinguish uncertainty estimation from uncertainty calibration or conflates the terminology of uncertainty and hallucination.
- Second, we adopt a supervised method for uncertainty estimation that is intuitive, easy to implement, and executable even on black-box LLMs. By leveraging supervised labels from the uncertainty metric, our approach serves as an upper bound on the performance achievable by unsupervised methods.
- Third, we systematically discuss the relationship and the differences between uncertainty estimation for traditional deep learning models and for LLMs. Formally, we explain why methods designed for traditional deep learning models may fail for LLMs, and why the hidden layers are useful for estimating uncertainty in our context.
- Finally, numerical experiments on various natural language processing tasks demonstrate the superiority of our methods over existing benchmarks. The results also reveal several insightful observations, including the role of neural nodes in representing uncertainty, and the transferability of our trained uncertainty estimation model.
1.1 Related literature
The uncertainty estimation and calibration of traditional machine learning models is relatively well-studied (Abdar et al., 2021; Gawlikowski et al., 2023). However, with the rapid development of LLMs, there is a pressing need to better understand the uncertainty of LLMs' responses, and measuring the uncertainty of sentences instead of a fixed-dimensional output is more challenging. One stream of work has been focusing on unsupervised methods that leverage entropy (Malinin and Gales, 2021), similarity (Fomicheva et al., 2020; Lin et al., 2022), semantics (Kuhn et al., 2023; Duan et al., 2023), or logit and hidden states' information (Kadavath et al., 2022; Chen et al., 2024; Su et al., 2024; Plaut et al., 2024) to craft an uncertainty metric that helps to quantify uncertainty. For black-box models, some of the metrics can be computed based on multiple sampled outputs of the LLMs (Malinin and Gales, 2021; Lin et al., 2023; Manakul et al., 2023; Chen and Mueller, 2023); for white-box models, more information, such as the output's distribution and the values of the logits and hidden layers, makes computing the uncertainty metric easier. We also refer to Desai and Durrett (2020); Zhang et al. (2021); Ye and Durrett (2021); Si et al. (2022); Quach et al. (2023); Kumar et al. (2023); Mohri and Hashimoto (2024) for other related uncertainty estimation methods such as conformal prediction. We defer more discussions on related literature, in particular on the topics of hallucination detection and information in hidden layers of LLMs, to Appendix A.
2 Problem Setup
Consider the following environment where one interacts with LLMs through prompts and responses: An LLM is given an input prompt $\bm{x}=(x_{1},x_{2},...,x_{k})\in\mathcal{X}$ with $x_{i}\in\mathcal{V}$ representing the $i$-th token of the prompt. Here $\mathcal{V}$ denotes the vocabulary for all the tokens. Then the LLM randomly generates its response $\bm{y}=(y_{1},y_{2},...,y_{m})\in\mathcal{Y}$ following the probability distribution
$$
y_{j}\sim p_{\theta}(\cdot|\bm{x},y_{1},y_{2},...,y_{j-1}).
$$
Here the probability distribution $p_{\theta}$ denotes the distribution (over vocabulary $\mathcal{V}$) of the LLM's output, and $\theta$ encapsulates all the parameters of the LLM. The conditional part includes the prompt $\bm{x}$ and all the tokens $y_{1},y_{2},...,y_{j-1}$ generated preceding the current position.
We consider using the LLM for downstream NLP tasks such as question answering, multiple choice, and machine translation. Such a task usually comes with an evaluation/scoring function $s(\cdot,\cdot):\mathcal{Y}\times\mathcal{Y}\to[0,1]$ that evaluates the quality of the generated response. For each pair $(\bm{x},\bm{y})$, the evaluation function rates the response $\bm{y}$ with the score $z\coloneqq s(\bm{y},\bm{y}_{\text{true}})$, where $\bm{y}_{\text{true}}$ is the true response for the prompt $\bm{x}$. The true response $\bm{y}_{\text{true}}$ is usually decided by factual truth, humans, or domain experts, and we can assume it follows a distribution conditional on the prompt $\bm{x}$. It does not hurt to assume a larger score represents a better answer; $z=1$ indicates a perfect answer, while $z=0$ says the response $\bm{y}$ is off the target.
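As a concrete illustration, the evaluation function $s$ can be any match-based metric normalized to $[0,1]$. The sketch below uses a unigram-F1 overlap as a stand-in for the ROUGE-L or BLEU metrics used in our experiments (the choice of metric is a design decision, not part of the method):

```python
def quality_score(response: str, reference: str) -> float:
    """Toy quality metric s(y, y_true) in [0, 1]: unigram F1 overlap.

    Stands in for ROUGE-L/BLEU; a score of 1 means the response matches
    the reference perfectly at the (lowercased) token level.
    """
    resp_tokens = set(response.lower().split())
    ref_tokens = set(reference.lower().split())
    if not resp_tokens or not ref_tokens:
        return 0.0
    common = len(resp_tokens & ref_tokens)
    if common == 0:
        return 0.0
    precision = common / len(resp_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

With this metric, an exact match scores 1, a disjoint answer scores 0, and the pair ("It's Paris", "Paris") receives a partial score of 2/3.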
We define the task of uncertainty estimation for LLMs as the learning of a function $g$ that predicts the score
$$
g(\bm{x},\bm{y})\approx\mathbb{E}\left[s(\bm{y},\bm{y}_{\text{true}})|\bm{x},\bm{y}\right] \tag{1}
$$
where the expectation on the right-hand side is taken with respect to the (possible) randomness of the true response $\bm{y}_{\text{true}}$ , and for notational clarity, we omit the dependence of $\bm{y}_{\text{true}}$ on $\bm{x}$ . We emphasize two points on this task definition: The uncertainty function $g$ takes the prompt $\bm{x}$ and $\bm{y}$ as its inputs. This implies (i) the true and predicted uncertainty score can and should depend on the specific realization of the response $\bm{y}$ , not just $\bm{x}$ (Zhang et al., 2021; Kuhn et al., 2023), and (ii) the uncertainty function $g$ does not require the true response $\bm{y}_{\text{true}}$ as the input.
We note that a significant body of literature explores uncertainty estimation and calibration in language models (Zhou et al., 2023; Si et al., 2022; Xiao et al., 2022; Desai and Durrett, 2020). They primarily focus on classification tasks where outputs are limited to a finite set of tokens (i.e., $\bm{y}$ contains only one element). In contrast, our work extends this to allow free-form responses, and the ability to handle variable-length outputs aligns more closely with current advancements in LLMs.
3 Uncertainty Estimation via Supervised Learning
3.1 Overview of supervised uncertainty estimation
We consider a supervised approach to learning the uncertainty function $g:\mathcal{X}\times\mathcal{Y}\to[0,1]$, which is similar to the standard setting of uncertainty quantification for ML/deep learning models. First, we start with a raw dataset of $n$ samples
$$
\mathcal{D}_{\text{raw}}=\left\{(\bm{x}_{i},\bm{y}_{i},\bm{y}_{i,\text{true}},s(\bm{y}_{i},\bm{y}_{i,\text{true}}))\right\}_{i=1}^{n}.
$$
$\mathcal{D}_{\text{raw}}$ can be generated based on a labeled dataset for the tasks we consider. Here $\bm{x}_{i}=(x_{i,1},...,x_{i,k_{i}})$ and $\bm{y}_{i}=(y_{i,1},...,y_{i,m_{i}})$ denote the prompt and the corresponding LLM's response, respectively. $\bm{y}_{i,\text{true}}$ denotes the true response (that comes from the labeled dataset) of $\bm{x}_{i}$, and $s(\bm{y}_{i},\bm{y}_{i,\text{true}})$ assigns a score for the response $\bm{y}_{i}$ based on the true answer $\bm{y}_{i,\text{true}}$.
The next is to formulate a supervised learning task based on $\mathcal{D}_{\text{raw}}$ . Specifically, we construct
$$
\mathcal{D}_{\text{sl}}=\left\{(\bm{v}_{i},z_{i})\right\}_{i=1}^{n}
$$
where $z_{i}\coloneqq s(\bm{y}_{i},\bm{y}_{i,\text{true}})\in[0,1]$ denotes the target score to be predicted. The vector $\bm{v}_{i}$ summarizes useful features for the $i$-th sample based on $(\bm{x}_{i},\bm{y}_{i})$. With this design, a supervised learning task on the dataset $\mathcal{D}_{\text{sl}}$ coincides exactly with learning the uncertainty estimation task defined in (1).
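Constructing $\mathcal{D}_{\text{sl}}$ from $\mathcal{D}_{\text{raw}}$ is then mechanical; a minimal sketch in which `featurize` and `score` are hypothetical placeholders for the tool-LLM feature extractor and the metric $s$:

```python
def build_sl_dataset(raw, featurize, score):
    """Turn D_raw = [(x, y, y_true), ...] into D_sl = [(v, z), ...].

    `featurize` maps a prompt-response pair (x, y) to a feature vector v,
    and `score` computes the label z = s(y, y_true); both are placeholders
    for the components described in the text.
    """
    return [(featurize(x, y), score(y, y_true)) for (x, y, y_true) in raw]
```

In the actual pipeline, `featurize` would query the tool LLM for hidden activations and entropy features, and `score` would be ROUGE-L or BLEU.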
Getting Features. When constructing $\bm{v}_{i}$, a natural implementation is to use the features of $(\bm{x},\bm{y})$ extracted from the LLM (denoted as target LLM) that generates the response $\bm{y}$, as done in Duan et al. (2024) for hallucination detection and Burns et al. (2022) for discovering latent knowledge. This method functions effectively with white-box LLMs where hidden activations are accessible. We note that obtaining hidden layers' activations merely requires an LLM and the prompt-response pair $(\bm{x},\bm{y})$, and the extra knowledge of uncertainty can come from the hidden layers of any white-box LLM that takes as input the $(\bm{x},\bm{y})$ pair, not necessarily from the target LLM.
Another note is that our goal is to measure the uncertainty of the input-output pair $(\bm{x},\bm{y})$ using the given metric, which is independent of the target LLM that generates the output from input $\bm{x}$ . Therefore, due to the unique structure of LLMs, any white-box LLM can take $(\bm{x},\bm{y})$ together as input, allowing us to extract features from this white-box LLM (referred to as the tool LLM).
This observation has two implications: First, if the target LLM is a black-box one, we can rely on a white-box tool LLM to extract features; second, even if the target LLM is a white-box one, we can still adopt a more powerful white-box tool LLM that could potentially generate more useful features. In Algorithm 1, we present the algorithm of our pipeline that is applicable to target LLMs of any type, and we provide an illustration of the algorithm pipeline in Figure 2.
Algorithm 1 Supervised uncertainty estimation
1: Target LLM $p_{\theta}$ (the uncertainty of which is to be estimated), tool LLM $q_{\theta}$ (used for uncertainty estimation), a labeled training dataset $\mathcal{D}$ , a test sample with prompt $\bm{x}$
2: %% Training phase:
3: Use $p_{\theta}$ to generate responses for the samples in $\mathcal{D}$ and construct the dataset $\mathcal{D}_{\text{raw}}$
4: For each sample $(\bm{x}_{i},\bm{y}_{i})\in\mathcal{D}_{\text{raw}}$, extract features (hidden-layer activations, entropy- and probability-related features) using the LLM $q_{\theta}$, and then construct the dataset $\mathcal{D}_{\text{sl}}$
5: Train a supervised learning model $\hat{g}$ that predicts $z_{i}$ with $\bm{v}_{i}$ based on the dataset $\mathcal{D}_{\text{sl}}$
6: %% Test phase:
7: Generate the response $\bm{y}$ for the test prompt $\bm{x}$
8: Extract features $\bm{v}$ using $q_{\theta}$
9: Associate the response $\bm{y}$ with the uncertainty score $\hat{g}(\bm{v})$
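Step 5 of Algorithm 1 fits an off-the-shelf regressor on $\mathcal{D}_{\text{sl}}$. As a self-contained sketch, the estimator one would use in practice (e.g., a random forest or MLP) is swapped here for a tiny logistic-link linear model trained by stochastic gradient descent:

```python
import math

def train_uncertainty_estimator(features, labels, lr=0.1, epochs=500):
    """Fit g(v) = sigmoid(w.v + b) to quality scores z in [0, 1] by SGD
    on squared error.  `features` are the vectors v_i and `labels` the
    scores z_i from D_sl; a stand-in for a stronger regressor."""
    d = len(features[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for v, z in zip(features, labels):
            a = b + sum(wi * vi for wi, vi in zip(w, v))
            pred = 1.0 / (1.0 + math.exp(-a))
            # derivative of (pred - z)^2 w.r.t. the pre-activation a
            grad = 2.0 * (pred - z) * pred * (1.0 - pred)
            for i in range(d):
                w[i] -= lr * grad * v[i]
            b -= lr * grad

    def g(v):
        a = b + sum(wi * vi for wi, vi in zip(w, v))
        return 1.0 / (1.0 + math.exp(-a))
    return g
```

Any regressor mapping feature vectors to $[0,1]$ can fill this role; the point is only that the estimator is trained with the quality scores as supervised labels.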
3.2 Features for uncertainty estimation
The literature suggests that a number of features extractable from an LLM relate to the measurement of uncertainty. Here we categorize these features into two types based on their sources:
White-box features: LLM's hidden-layer activations. We feed $(\bm{x}_{i},\bm{y}_{i})$ as input into the tool LLM and extract the corresponding hidden layers' activations of the LLM.
Grey-box features: Entropy- or probability-related outputs. The entropy of a discrete distribution $p$ over the vocabulary $\mathcal{V}$ is defined by $H(p)\coloneqq-\sum_{v\in\mathcal{V}}p(v)\log\left(p(v)\right).$ For a prompt-response pair $(\bm{x},\bm{y})=(x_{1},...,x_{k},y_{1},...,y_{m})$, we consider as features the entropy at each token, such as $H(q_{\theta}(\cdot|x_{1},...,x_{j-1}))$ and $H(q_{\theta}(\cdot|\bm{x},y_{1},...,y_{j-1}))$, where $q_{\theta}$ denotes the tool LLM. We defer the detailed discussions on feature construction to Appendix D.
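The entropy features above reduce to a softmax-plus-entropy computation on the tool LLM's raw logits at each position. A minimal, numerically stable sketch (the logits would in practice come from $q_{\theta}$):

```python
import math

def token_entropy(logits):
    """Entropy H(p) of the next-token distribution given raw logits.

    p is a numerically stable softmax over the vocabulary; subtracting
    the max logit avoids overflow in exp()."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)
```

Collecting such entropies over the prompt and response positions (possibly summarized by statistics such as mean and max) yields the grey-box part of the feature vector $\bm{v}$.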
There can be other useful features, such as asking the LLM "how certain it is about the response" (Tian et al., 2023). We do not try to exhaust all the possibilities; the aim of our paper is more about formulating uncertainty estimation for LLMs as a supervised task and understanding how the internal states of the LLM encode uncertainty. To the best of our knowledge, our paper is the first one to do so. Specifically, the above formulation aims for the following two outcomes: (i) an uncertainty model $\hat{g}(\bm{v}_{i})$ that predicts $z_{i}$ and (ii) knowing whether the hidden layers carry the uncertainty information.
3.3 Three regimes of supervised uncertainty estimation
In Section 3.1, we present that our supervised uncertainty estimation method can be extended to a black-box LLM by separating the target LLM and tool LLM. Next, we formally present our method for white-box, grey-box, and black-box target LLMs.
White-box supervised uncertainty estimation (Wb-S): This Wb-S approach is applicable to a white-box LLM where the tool LLM coincides with the target LLM (i.e., $p_{\theta}=q_{\theta}$ ).
Grey-box supervised uncertainty estimation (Gb-S): This Gb-S regime also uses the same target and tool LLMs ( $p_{\theta}=q_{\theta}$ ) and constructs the features only from the grey-box source, that is, those features relying on the probability and the entropy (such as those in Table 5 in Appendix D), but it ignores the hidden-layer activations.
Black-box supervised uncertainty estimation (Bb-S): The Bb-S regime does not assume the knowledge of the parameters of $p_{\theta}$ but still aims to estimate its uncertainty. To achieve this, it considers another open-source LLM denoted by $q_{\theta}$ . The original data $\mathcal{D}_{\text{raw}}$ is generated by $p_{\theta}$ but then the uncertainty estimation data $\mathcal{D}_{\text{sl}}$ is constructed based on $q_{\theta}$ from $\mathcal{D}_{\text{raw}}$ as illustrated in the following diagram
$$
\mathcal{D}_{\text{raw}}\overset{q_{\theta}}{\longrightarrow}\mathcal{D}_{\text{sl}}.
$$
For example, for a prompt $\bm{x}$ , a black-box LLM $p_{\theta}$ generates the response $\bm{y}.$ We utilize the open-source LLM $q_{\theta}$ to treat $(\bm{x},\bm{y})$ jointly as a sequence of (prompt) tokens and extract the features of hidden activations and entropy as in Section 3.2. In this way, we use $q_{\theta}$ together with the learned uncertainty model from $\mathcal{D}_{\text{sl}}$ to estimate the uncertainty of responses generated from $p_{\theta}$ which we do not have any knowledge about.
4 Insights for the algorithm design
4.1 Uncertainty estimation vs. uncertainty calibration
So far in this paper, we have focused on the uncertainty estimation task, which aims to predict the quality of the response to reveal whether the LLM makes mistakes in its response. There is a different but related task known as the uncertainty calibration problem. In comparison, uncertainty calibration aims to ensure that the output from the uncertainty estimation model in (1) conveys a probabilistic meaning. That is, $g(\bm{x},\bm{y})$ is defined as the probability that $\bm{y}$ is true. This is compatible with our method by replacing the quality $s(\bm{y},\bm{y}_{\text{true}})$ with $1\left\{\bm{y}\in\mathcal{Y}_{\text{true}}\right\}$, where $\mathcal{Y}_{\text{true}}$ is a set containing all the possible true responses. Another aspect of the relation between our uncertainty estimation method and uncertainty calibration is that our method can be followed by any recalibration method for ML models to form a pipeline for calibration. Intuitively, a better uncertainty estimation/prediction will lead to a better-calibrated uncertainty model, which is also verified in our numerical experiments in Appendix C.
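Whether the resulting pipeline is well-calibrated can be quantified with, for example, the expected calibration error (ECE). The sketch below, using equal-width confidence bins and binary correctness labels, is a standard choice rather than something prescribed by our method:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: average |accuracy - mean confidence| over equal-width
    confidence bins, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # clamp c = 1.0 into last bin
        bins[idx].append((c, y))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        ece += len(b) / n * abs(avg_conf - acc)
    return ece
```

A perfectly calibrated estimator attains ECE 0; an overconfident one (high scores, low accuracy) attains a large ECE.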
4.2 Why hidden layers as features?
In this subsection, we provide a simple theoretical explanation for why the hidden activations of the LLM can be useful in uncertainty estimation. Consider a binary classification task where the features $\bm{X}\in\mathbb{R}^{d}$ and the label $Y\in\{0,1\}$ are drawn from a distribution $\mathcal{P}.$ We aim to learn a model $f:\mathbb{R}^{d}\to[0,1]$ that predicts the label $Y$ from the feature vector $\bm{X}$, and the learning of the model employs a loss function $l(\cdot,\cdot):[0,1]\times[0,1]\to\mathbb{R}$.
**Proposition 4.1**
*Let $\mathcal{F}$ be the class of measurable functions that map from $\mathbb{R}^{d}$ to $[0,1]$. Under the cross-entropy loss $l(y,\hat{y})=-\left(y\log(\hat{y})+(1-y)\log(1-\hat{y})\right)$, the function $f^{*}$ that minimizes the loss
$$
f^{*}=\operatorname*{arg\,min}_{f\in\mathcal{F}}\mathbb{E}\left[l(Y,f(\bm{X}))\right]
$$
is the Bayes optimal classifier $f^{*}(\bm{x})=\mathbb{P}(Y=1|\bm{X}=\bm{x})$ where the expectation and the probability are taken with respect to $(\bm{X},Y)\sim\mathcal{P}.$ Moreover, the following conditional independence holds
$$
Y\perp\bm{X}\ |\ f^{*}(\bm{X}).
$$*
The proposition is not technical, and it can be easily proved by using the structure of $f^{*}(\bm{X})$, so we refer to Berger (2013) for the proof. It states a nice property of the cross-entropy loss: the function learned under the cross-entropy loss coincides with the Bayes optimal classifier. Note that this is contingent on two requirements. First, the function class $\mathcal{F}$ is the class of all measurable functions. Second, it requires the function $f^{*}$ to be learned through the population loss rather than the empirical loss/risk. The proposition also goes one step further with the conditional independence $Y\perp\bm{X}\ |\ f^{*}(\bm{X})$. This means all the information related to the label $Y$ that is contained in $\bm{X}$ is summarized in the prediction function $f^{*}.$ This intuition suggests that for classic uncertainty estimation problems, when a prediction model $\hat{f}:\mathbb{R}^{d}\to[0,1]$ is well-trained, the predicted score $\hat{f}(\bm{X})$ should capture all the information about the true label $Y$ contained in the features $\bm{X}$, without further reference to the raw features $\bm{X}$. This indeed explains why the classic uncertainty estimation and calibration methods work only with the predicted score $\hat{f}(\bm{X})$ for re-calibration, including Platt scaling (Platt et al., 1999), isotonic regression (Zadrozny and Elkan, 2002), temperature scaling (Guo et al., 2017), etc.
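The first claim of Proposition 4.1 can be checked numerically for a fixed feature value $\bm{x}$ with $p=\mathbb{P}(Y=1|\bm{X}=\bm{x})$: minimizing the conditional cross-entropy loss over a grid of candidate predictions recovers $p$ itself. A small sketch:

```python
import math

def pointwise_ce_minimizer(p, grid_size=10001):
    """Minimize E[l(Y, yhat) | X = x] = -(p*log(yhat) + (1-p)*log(1-yhat))
    over a grid of yhat in (0, 1).  The minimizer should equal the Bayes
    probability p itself, as Proposition 4.1 states."""
    best, best_loss = None, float("inf")
    for i in range(1, grid_size - 1):  # avoid log(0) at the endpoints
        yhat = i / (grid_size - 1)
        loss = -(p * math.log(yhat) + (1 - p) * math.log(1 - yhat))
        if loss < best_loss:
            best, best_loss = yhat, loss
    return best
```

Since the conditional loss is strictly convex in $\hat{y}$, the grid minimizer sits at the grid point closest to $p$.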
When it comes to uncertainty estimation for LLMs, the situation differs: the task is distinct from calibration, and the LLM's structure is far more complex. The conditional independence no longer holds, and additional procedures are required to retrieve more information about $Y$. The following corollary states that when the underlying loss function $\tilde{l}$ does not possess this property of the cross-entropy loss (that the Bayes classifier minimizes the loss point-wise), the conditional independence collapses.
**Corollary 4.2**
*Suppose the loss function $\tilde{l}$ satisfies
$$
\mathbb{P}\left(f^{*}(\bm{x})\neq\operatorname*{arg\,min}_{\tilde{y}\in[0,1]}\mathbb{E}\left[\tilde{l}(Y,\tilde{y})|\bm{X}=\bm{x}\right]\right)>0,
$$
where $f^{*}$ is defined as in Proposition 4.1. Then for the function $\tilde{f}=\operatorname*{arg\,min}_{f\in\mathcal{F}}\mathbb{E}\left[\tilde{l}(Y,f(\bm{X}))\right]$, where the expectation is with respect to $(\bm{X},Y)\sim\mathcal{P}$, there exists a distribution $\mathcal{P}$ such that the conditional independence no longer holds
$$
Y\not\perp\bm{X}\ |\ \tilde{f}(\bm{X}).
$$*
Proposition 4.1 and Corollary 4.2 together illustrate the difference between uncertainty estimation for a traditional ML model and that for LLMs. In this task, the output $\tilde{f}(\bm{X})$ of the model (traditional ML model or LLM) is restricted to $[0,1]$ to indicate the confidence that $Y=1$. For traditional ML models, the cross-entropy loss, which is commonly used for training, is aligned with the uncertainty calibration objective. For LLMs, the objective can differ from calibration, and the models are often pretrained with other loss functions (for example, the negative log-likelihood loss for next-token prediction) on diverse language tasks beyond binary classification. These factors cause a misalignment between the model pre-training and the uncertainty estimation task. Consequently, the original features (e.g., the output logits) may, and in theory should, contain information about the uncertainty score $Y$ that cannot be fully captured by $\tilde{f}(\bm{X})$. This justifies why we formulate the uncertainty estimation task as in the previous subsection and take the hidden-layer activations as features to predict the uncertainty score; it also explains why similar treatments are rare in the mainstream uncertainty estimation literature (Kuhn et al., 2023; Manakul et al., 2023; Tian et al., 2023).
5 Numerical Experiments and Findings
In this section, we provide a systematic evaluation of the proposed supervised approach for estimating the uncertainty of the LLMs. All code used in our experiments is available at https://github.com/LoveCatc/supervised-llm-uncertainty-estimation.
5.1 LLMs, tasks, benchmarks, and performance metrics
Here we outline the general setup of the numerical experiments. Certain tasks may deviate from the general setup, and we will detail the specific adjustments as needed.
LLMs. For our numerical experiments, we mainly consider three open-source LLMs, LLaMA2-7B (Touvron et al., 2023), LLaMA3-8B (AI@Meta, 2024), and Gemma-7B (Gemma Team et al., 2024), as the model $p_{\theta}$ defined in Section 2. For certain experiments, we also employ LLaMA2-13B and Gemma-2B. We use their respective tokenizers as provided by Hugging Face and do not change the parameters/weights $\theta$ of these LLMs.
Tasks and Datasets. We mainly consider three tasks for uncertainty estimation: question answering, multiple choice, and machine translation. All the labeled datasets for these tasks are in the form of $\{(\bm{x}_{i},\bm{y}_{i,\text{true}})\}_{i=1}^{n}$ where $\bm{x}_{i}$ can be viewed as the prompt for the $i$-th sample and $\bm{y}_{i,\text{true}}$ the true response. We adopt few-shot prompting when generating the LLM's response $\bm{y}_{i}$, using 5 examples in the prompt for the multiple-choice task and 3 examples for the remaining natural language generation tasks. This leverages the LLM's in-context learning ability (Radford et al., 2019; Zhang et al., 2023) and ensures the LLM's responses are in a desirable format. We defer more details of the few-shot prompting to Appendix D.1. The three tasks are:
- Question answering. We follow Kuhn et al. (2023) and use the CoQA and TriviaQA (Joshi et al., 2017) datasets. The CoQA task requires the LLM to answer questions by understanding the provided text, while TriviaQA requires the LLM to answer questions based on its pre-training knowledge. We adopt Rouge-1 (Lin and Och, 2004a) as the scoring function $s(·,·)$ and label a response $\bm{y}_{i}$ as correct if $s(\bm{y}_{i},\bm{y}_{i,\text{true}})\geq 0.3$ and incorrect otherwise.
- Multiple choice. We consider the Massive Multitask Language Understanding (MMLU) dataset (Hendrycks et al., 2020), a collection of 15,858 questions covering 57 subjects across STEM, the humanities, the social sciences, and more. Due to the structure of the dataset, both the generated output $\bm{y}_{i}$ and the correct answer $\bm{y}_{i,\text{true}}$ belong to $\{\text{A, B, C, D}\}$. Therefore, this task can also be regarded as a classification problem in which the LLM answers each question with one of the four candidate choices.
- Machine translation. We consider the WMT 2014 dataset (Bojar et al., 2014) for estimating the LLM's uncertainty on the machine translation task. The scoring function $s(·,·)$ is chosen to be the BLEU score (Papineni et al., 2002; Lin and Och, 2004b), and the generated answer $\bm{y}_{i}$ is labeled as correct if $s(\bm{y}_{i},\bm{y}_{i,\text{true}})>0.3$ and incorrect otherwise.
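The labeling rule above can be sketched in a few lines. The Rouge-1 implementation below is a simplified whitespace-token stand-in for the actual scorer used in the paper, included only to make the correct/incorrect thresholding concrete:

```python
from collections import Counter

def rouge1(candidate: str, reference: str) -> float:
    """Simplified Rouge-1 F-measure on lowercased whitespace tokens
    (a stand-in for the scoring function s(., .); the paper's exact
    implementation may differ in tokenization)."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    if not cand or not ref:
        return 0.0
    overlap = sum((Counter(cand) & Counter(ref)).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def label_response(y_generated: str, y_true: str, threshold: float = 0.3) -> int:
    """Binary correctness label used to supervise the uncertainty model."""
    return int(rouge1(y_generated, y_true) >= threshold)
```

For the machine-translation task, the same thresholding applies with BLEU in place of Rouge-1 and a strict `>` comparison.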
Benchmarks. We compare our approach with a number of state-of-the-art benchmarks for the problem. Manakul et al. (2023) give a comprehensive survey of existing methods and compare four distinct measures for predicting sentence-generation uncertainty. The measures are based on either the maximum or average values of entropy or probability across the sentence: Max Likelihood, Avg Likelihood, Max Ent, and Avg Ent, as defined in Table 5. Each of these measures can be applied as a single uncertainty estimator, and all are applied in an unsupervised manner that requires no additional supervised training. In particular, when applying these measures to the MMLU dataset, since the answer contains only one token from $\{\text{A, B, C, D}\}$, we use the probability and the entropy (over these four tokens) as the benchmarks, representing the probability of the most likely choice and the entropy over all choices, respectively. Kuhn et al. (2023) generate multiple answers, compute their entropy in a semantic sense, and define the quantity as semantic entropy. This semantic-entropy uncertainty (SU) can thus be used as an uncertainty estimator for the LLM's responses. Tian et al. (2023) propose asking the LLM for its confidence (denoted as A4C), which directly obtains the uncertainty score from the LLM itself.
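Each of the four grey-box measures reduces a generated sentence to a single number. A minimal sketch, following our reading of the max/avg definitions (see the paper's Table 5 for the exact forms):

```python
import numpy as np

def greybox_scores(token_logprobs, token_entropies):
    """Four single-number uncertainty measures over one generated sentence,
    in the spirit of the max/avg likelihood and entropy measures of
    Manakul et al. (2023). Higher values indicate more uncertainty."""
    nll = -np.asarray(token_logprobs)   # per-token negative log-likelihood
    ent = np.asarray(token_entropies)   # per-token predictive entropy
    return {
        "MaxL": float(nll.max()),       # the single most surprising token
        "AvgL": float(nll.mean()),      # average surprise over the sentence
        "MaxE": float(ent.max()),
        "AvgE": float(ent.mean()),
    }
```

Each returned number can serve directly as an unsupervised uncertainty estimator, which is exactly how the benchmark columns of Table 1 are produced.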
Our methods. We follow the discussions in Section 3.3 and implement three versions of our proposed supervised approach: black-box supervised (Bb-S), grey-box supervised (Gb-S), and white-box supervised (Wb-S). These share the same pipeline for training the uncertainty estimation model; the difference lies only in the level of access to the LLM. For the Bb-S method, we use Gemma-7B as the model $q_{\theta}$ to evaluate the uncertainty of LLaMA2-7B/LLaMA3-8B as $p_{\theta}$ (treated as a black box), and conversely use LLaMA2-7B to evaluate Gemma-7B. The supervised uncertainty model $\hat{g}$ is a random forest (Breiman, 2001). Details on the feature construction and the training of the random forest are deferred to Appendix D.2.
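The Wb-S training step can be sketched as follows; synthetic activations and a made-up label rule stand in for the real feature construction of Appendix D.2:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder "hidden activations": 500 responses, 64-dim features.
rng = np.random.default_rng(0)
n, d = 500, 64
hidden_feats = rng.normal(size=(n, d))
# Synthetic correctness labels with signal in the first coordinate.
labels = (hidden_feats[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

# g_hat: the supervised uncertainty model, here a random forest as in the paper.
g_hat = RandomForestClassifier(n_estimators=100, random_state=0)
g_hat.fit(hidden_feats[:400], labels[:400])          # train split
confidence = g_hat.predict_proba(hidden_feats[400:])[:, 1]  # estimated P(correct)
```

The Gb-S variant replaces `hidden_feats` with probability/entropy features, and Bb-S computes the same features from a second, accessible LLM evaluating the black-box model's responses.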
Performance metrics. For model evaluation, we follow Filos et al. (2019) and Kuhn et al. (2023) and compare the performance of our methods against the benchmarks by using the generated uncertainty score to predict whether the answer is correct. We employ the area under the receiver operating characteristic curve (AUROC) to measure the performance of the uncertainty estimation. As discussed in Section 4.1, AUROC is a good metric for the uncertainty estimation task; for the uncertainty calibration task, we follow the more standard calibration metrics and present the results in Section C.
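Concretely, AUROC is the probability that a randomly chosen correct answer receives a higher confidence score than a randomly chosen incorrect one (1.0 is a perfect ranking, 0.5 is chance). A toy computation with made-up numbers:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 0, 1, 0, 0])                # 1 = answer was correct
scores = np.array([0.9, 0.8, 0.35, 0.6, 0.4, 0.1])   # estimated confidence
# Every correct answer outranks every incorrect one, so AUROC = 1.0.
auroc = roc_auc_score(labels, scores)
```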
5.2 Performance of uncertainty estimation
Now we present the performance on the uncertainty estimation task.
5.2.1 Question answering and machine translation
The question answering and machine translation tasks can both be viewed as natural language generation tasks, so we present their results together. Table 1 compares the three versions of our proposed supervised method against the existing benchmarks in terms of AUROC.
| Dataset | Model | MaxL | AvgL | MaxE | AvgE | SU | A4C | Bb-S | Gb-S | Wb-S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TriviaQA | G-7B | 0.857 | 0.862 | 0.849 | 0.854 | 0.847 | 0.534 | 0.879 | 0.866 | 0.882 |
| | L-7B | 0.565 | 0.761 | 0.761 | 0.773 | 0.678 | 0.526 | 0.925 | 0.811 | 0.897 |
| | L-8B | 0.838 | 0.851 | 0.849 | 0.853 | 0.826 | 0.571 | 0.843 | 0.861 | 0.874 |
| CoQA | G-7B | 0.710 | 0.708 | 0.725 | 0.708 | 0.674 | 0.515 | 0.737 | 0.737 | 0.762 |
| | L-7B | 0.535 | 0.600 | 0.603 | 0.580 | 0.541 | 0.502 | 0.848 | 0.667 | 0.807 |
| | L-8B | 0.692 | 0.697 | 0.716 | 0.699 | 0.684 | 0.506 | 0.745 | 0.737 | 0.769 |
| WMT-14 | G-7B | 0.668 | 0.589 | 0.637 | 0.811 | 0.572 | 0.596 | 0.863 | 0.829 | 0.855 |
| | L-7B | 0.606 | 0.712 | 0.583 | 0.711 | 0.513 | 0.506 | 0.792 | 0.724 | 0.779 |
| | L-8B | 0.554 | 0.685 | 0.616 | 0.729 | 0.510 | 0.502 | 0.700 | 0.724 | 0.745 |
Table 1: Out-of-sample AUROC performance for benchmarks and our methods on natural language generation tasks. G-7B, L-7B, and L-8B represent Gemma-7B, LLaMA2-7B, and LLaMA3-8B, respectively. The columns MaxL, AvgL, MaxE, and AvgE all come from Manakul et al. (2023). The column SU implements the semantic uncertainty estimation by Kuhn et al. (2023), and the column A4C implements the ask-for-confidence method by Tian et al. (2023). The columns Bb-S, Gb-S, and Wb-S represent respectively the three regimes (black-box supervised, grey-box supervised, and white-box supervised) of our supervised method with details in Section 3.3.
We make several remarks on the numerical results. First, our methods generally perform better than the existing benchmarks. Note that the existing benchmarks are mainly unsupervised and based on a single score, while our method follows the most standard pipeline for supervised training of an uncertainty estimation model. The advantage of our method should thus be attributed to its supervised nature and the labeled dataset. While the unsupervised benchmark methods can operate beyond these NLP tasks (though they have not been extensively tested on open questions yet), our methods rely on the labeled dataset. Beyond the better numbers, the experiment results show the potential of labeled datasets for understanding the uncertainty in the LLM's responses. In particular, our Gb-S method uses features that include the benchmark measures, and it shows that modest supervised training can substantially improve upon ad-hoc uncertainty estimation based on a single score such as MaxL or MaxE.
Second, our Wb-S method has a clear advantage over our Gb-S method. These two methods differ in that Wb-S uses the hidden activations while Gb-S uses only probability-related (and entropy-related) features. This implies that the hidden activations do contain uncertainty information, which we investigate further in Appendix B. We also note from the table that no single unsupervised grey-box method (under the Benchmarks columns) consistently surpasses the others across different datasets/NLP tasks. For example, among the unsupervised benchmark methods for grey-box LLMs, AvgE is a top performer for the Gemma-7B model on the machine translation task, but it is the poorest performer for the same model on the question-answering CoQA dataset. This inconsistency highlights some caveats in using the unsupervised approach for uncertainty estimation of LLMs.
Lastly, we note that the Bb-S method performs similarly to, or even better than, the Wb-S method. As discussed in Section 3.3, the performance of uncertainty estimation relies on the LLM that we use to evaluate the prompt-response pair. It is therefore not surprising that in the question-answering task, for answers generated by LLaMA2-7B, Bb-S delivers better uncertainty estimation than Wb-S, possibly because Gemma-7B, used as the "tool LLM" in Algorithm 1, encodes better knowledge about the uncertainty of the answers than LLaMA2-7B. The performance of Bb-S is not always as good as Wb-S, and we hypothesize that this is because the LLMs' output distributions differ, which can amount to evaluating the uncertainty of different answers. Despite these inconsistencies, the performance of Bb-S remains strong, and these results point to a potential future avenue for estimating the uncertainty of closed-source LLMs.
5.2.2 Multiple choice (MMLU)
Table 2 presents the performance of our methods against the benchmark methods on the MMLU dataset. For this multiple-choice task, the output comes from $\{\text{A, B, C, D}\}$ and bears no semantic meaning, so we do not include the semantic uncertainty (SU) as in Table 1. The results show the advantage of our proposed supervised approach, consistent with the findings in Table 1.
| Model | Probability | Entropy | A4C | Bb-S | Gb-S | Wb-S |
| --- | --- | --- | --- | --- | --- | --- |
| Gemma-7B | 0.712 | 0.742 | 0.582 | 0.765 | 0.776 | 0.833 |
| LLaMA2-7B | 0.698 | 0.693 | 0.514 | 0.732 | 0.698 | 0.719 |
| LLaMA3-8B | 0.781 | 0.791 | 0.516 | 0.766 | 0.793 | 0.830 |
Table 2: Out-of-sample AUROC performance for benchmarks and our methods on the MMLU dataset. The columns Probability and Entropy come from Manakul et al. (2023), and the column A4C implements the ask-for-confidence method by Tian et al. (2023). The columns Bb-S, Gb-S, and Wb-S represent respectively the three regimes (black-box supervised, grey-box supervised, and white-box supervised) of our supervised method with details in Section 3.3.
We defer more numerical experiments and visualization to Appendices B and C where we investigate more on (i) the effect of the choice of layers; (ii) the scale of the LLMs used; (iii) the uncertainty neurons of the LLMs; and (iv) the calibration performance.
5.3 Transferability
In this subsection, we evaluate the robustness of our methods under the OOD setting.
Setup for the OOD multiple-choice task. We split the MMLU dataset into two groups based on subject: Group 1 contains questions from the first 40 subjects and Group 2 the remaining 17 subjects, so that the test dataset size of each group is similar (around 600 questions). These 57 subjects span a diverse range of topics, which means the training and test sets can differ substantially. To test OOD robustness, we train the proposed methods on one group and evaluate on the other.
Setup for the OOD question-answering task. For the QA task, since we have two datasets (CoQA and TriviaQA), we train the supervised model on one dataset and evaluate its performance on the other. While both datasets serve question-answering purposes, they diverge notably in two key aspects: (i) CoQA assesses the LLM's comprehension by requiring it to discern correct responses within extensive contextual passages, while TriviaQA evaluates the model's recall of factual knowledge; (ii) TriviaQA answers typically comprise single words or short phrases, while CoQA includes responses of varying lengths, from shorter to more extensive answers.
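The OOD protocol for both setups follows the same train-on-one-group, test-on-the-other pattern, sketched below with synthetic features: the two "groups" share a signal dimension but have shifted marginals, loosely mimicking a distribution shift between subject groups or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def transfer_auroc(X_train, y_train, X_test, y_test):
    """Train the uncertainty model on one group and score it on the other."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

rng = np.random.default_rng(0)

def make_group(shift):
    # Shared signal in dimension 0; `shift` moves the feature marginals.
    X = rng.normal(loc=shift, size=(300, 16))
    y = (X[:, 0] - shift > 0).astype(int)
    return X, y

X_a, y_a = make_group(0.0)   # e.g. MMLU Group 1 or TriviaQA
X_b, y_b = make_group(0.5)   # e.g. MMLU Group 2 or CoQA
ood_auc = transfer_auroc(X_a, y_a, X_b, y_b)   # out-of-distribution AUROC
```

Comparing `ood_auc` against `transfer_auroc(X_b, y_b, X_b, y_b)`-style in-distribution numbers gives the gap reported in the parentheses of Table 3.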
| LLMs | Test data | Bb-S | Gb-S | Wb-S | Best GB | Best BB |
| --- | --- | --- | --- | --- | --- | --- |
| Transferability in MMLU | | | | | | |
| G-7B | Group 1 | 0.756 (0.768) | 0.793 (0.799) | 0.846 (0.854) | 0.765 | 0.538 |
| | Group 2 | 0.738 (0.760) | 0.755 (0.754) | 0.804 (0.807) | 0.721 | 0.616 |
| L-7B | Group 1 | 0.733 (0.749) | 0.715 (0.713) | 0.726 (0.751) | 0.719 | 0.504 |
| | Group 2 | 0.700 (0.714) | 0.676 (0.677) | 0.685 (0.692) | 0.679 | 0.529 |
| L-8B | Group 1 | 0.763 (0.773) | 0.796 (0.795) | 0.836 (0.839) | 0.799 | 0.524 |
| | Group 2 | 0.729 (0.761) | 0.786 (0.785) | 0.794 (0.818) | 0.782 | 0.507 |
| Transferability in Question-Answering Datasets | | | | | | |
| G-7B | TriviaQA | 0.842 (0.879) | 0.861 (0.866) | 0.861 (0.882) | 0.862 | 0.847 |
| | CoQA | 0.702 (0.737) | 0.722 (0.737) | 0.730 (0.762) | 0.725 | 0.674 |
| L-7B | TriviaQA | 0.917 (0.925) | 0.801 (0.811) | 0.881 (0.897) | 0.773 | 0.678 |
| | CoQA | 0.825 (0.848) | 0.623 (0.667) | 0.764 (0.807) | 0.603 | 0.541 |
| L-8B | TriviaQA | 0.813 (0.843) | 0.859 (0.861) | 0.863 (0.874) | 0.853 | 0.826 |
| | CoQA | 0.710 (0.745) | 0.714 (0.737) | 0.725 (0.769) | 0.716 | 0.684 |
Table 3: Transferability of the trained uncertainty estimation model across different groups of subjects in MMLU and question-answering datasets. For our proposed Bb-S, Gb-S, and Wb-S methods, values within the parentheses $(·)$ represent the AUROCs where the uncertainty estimation model is trained and tested on the same group of subjects or dataset, while values outside the parentheses represent models trained on another group of subjects or dataset. The Best GB and Best BB columns refer to the best AUROC achieved by the unsupervised grey-box baselines and black-box baselines (fully listed in Table 1 and Table 2), respectively.
Table 3 summarizes the performance of these OOD experiments. As expected, all methods show a slight drop in performance compared to the in-distribution setting (reported in parentheses in the table). We make the following observations. First, the performance gap between in-distribution and OOD evaluation shows that incorporating white-box features such as hidden activations makes the model more susceptible to performance decreases on OOD tasks, but these features also enhance the uncertainty estimation model's overall capacity, and the benefits outweigh the drawbacks. It is also noteworthy that even in these OOD scenarios, our Wb-S and Bb-S methods almost consistently outperform the corresponding baselines. Overall, the robustness of our methods suggests that the hidden-layer activations within the LLM encode uncertainty information in similar patterns across distributions, at least to some extent. The performance drop (from in-distribution to OOD) on the MMLU dataset is notably smaller than on the question-answering datasets, which may stem from the larger disparity between the CoQA and TriviaQA datasets compared to that between two groups of subjects within the same MMLU dataset. This suggests that under significant distribution shifts, re-training or re-calibrating the uncertainty estimation model using test data may be helpful.
6 Conclusions
In this paper, we study the problem of uncertainty estimation and calibration for LLMs. We follow a simple and standard supervised idea and use labeled NLP datasets to train an uncertainty estimation model for LLMs. Our findings are: first, the proposed supervised methods perform better than the existing unsupervised methods; second, the hidden activations of the LLMs contain uncertainty information about the LLMs' responses; third, the black-box regime of our approach (Bb-S) provides a new way to estimate the uncertainty of closed-source LLMs. Lastly, we distinguish the task of uncertainty estimation from uncertainty calibration and show that a better uncertainty estimation model leads to better calibration performance. One limitation of our proposed supervised method is that it critically relies on labeled data. For the scope of this paper, we restrict the discussion to NLP tasks and datasets. One future direction is to utilize human-annotated data on LLMs' responses to train a supervised uncertainty estimation model for open-question prompts. We believe the findings that the supervised method performs better and that the hidden activations contain uncertainty information will persist.
References
- Abdar et al. (2021) Abdar, Moloud, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. 2021. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion 76 243–297.
- Ahdritz et al. (2024) Ahdritz, Gustaf, Tian Qin, Nikhil Vyas, Boaz Barak, Benjamin L Edelman. 2024. Distinguishing the knowable from the unknowable with language models. arXiv preprint arXiv:2402.03563 .
- AI@Meta (2024) AI@Meta. 2024. Llama 3 model card URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md.
- Azaria and Mitchell (2023) Azaria, Amos, Tom Mitchell. 2023. The internal state of an LLM knows when it's lying. arXiv preprint arXiv:2304.13734.
- Berger (2013) Berger, J.O. 2013. Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics, Springer New York. URL https://books.google.nl/books?id=1CDaBwAAQBAJ.
- Bojar et al. (2014) Bojar, Ondřej, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. 2014. Findings of the 2014 workshop on statistical machine translation. Proceedings of the ninth workshop on statistical machine translation. 12–58.
- Breiman (2001) Breiman, Leo. 2001. Random forests. Machine Learning 45 5–32.
- Brown et al. (2020) Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 1877–1901.
- Bubeck et al. (2023) Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712 .
- Burns et al. (2022) Burns, Collin, Haotian Ye, Dan Klein, Jacob Steinhardt. 2022. Discovering latent knowledge in language models without supervision. arXiv preprint arXiv:2212.03827 .
- Burrell (2016) Burrell, J. 2016. How the machine "thinks": Understanding opacity in machine learning algorithms. Big Data & Society.
- CH-Wang et al. (2023) CH-Wang, Sky, Benjamin Van Durme, Jason Eisner, Chris Kedzie. 2023. Do androids know they're only dreaming of electric sheep? arXiv preprint arXiv:2312.17249.
- Chen et al. (2024) Chen, Chao, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, Jieping Ye. 2024. Inside: LLMs' internal states retain the power of hallucination detection. arXiv preprint arXiv:2402.03744.
- Chen and Mueller (2023) Chen, Jiuhai, Jonas Mueller. 2023. Quantifying uncertainty in answers from any language model and enhancing their trustworthiness .
- Desai and Durrett (2020) Desai, Shrey, Greg Durrett. 2020. Calibration of pre-trained transformers. arXiv preprint arXiv:2003.07892 .
- Duan et al. (2024) Duan, Hanyu, Yi Yang, Kar Yan Tam. 2024. Do LLMs know about hallucination? An empirical investigation of LLM's hidden states. arXiv preprint arXiv:2402.09733.
- Duan et al. (2023) Duan, Jinhao, Hao Cheng, Shiqi Wang, Chenan Wang, Alex Zavalny, Renjing Xu, Bhavya Kailkhura, Kaidi Xu. 2023. Shifting attention to relevance: Towards the uncertainty estimation of large language models. arXiv preprint arXiv:2307.01379 .
- Esteva et al. (2017) Esteva, Andre, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, Sebastian Thrun. 2017. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (7639) 115–118.
- Filos et al. (2019) Filos, Angelos, Sebastian Farquhar, Aidan N Gomez, Tim GJ Rudner, Zachary Kenton, Lewis Smith, Milad Alizadeh, Arnoud de Kroon, Yarin Gal. 2019. Benchmarking bayesian deep learning with diabetic retinopathy diagnosis. arXiv preprint arXiv:1912.10481.
- Fomicheva et al. (2020) Fomicheva, Marina, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. Transactions of the Association for Computational Linguistics 8 539–555.
- Gal and Ghahramani (2016) Gal, Yarin, Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. International conference on machine learning. PMLR, 1050–1059.
- Gawlikowski et al. (2023) Gawlikowski, Jakob, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, et al. 2023. A survey of uncertainty in deep neural networks. Artificial Intelligence Review 56 (Suppl 1) 1513–1589.
- Gemma Team et al. (2024) Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Laurent Sifre, Morgane RiviÚre, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, et al. 2024. Gemma doi: 10.34740/KAGGLE/M/3301. URL https://www.kaggle.com/m/3301.
- Guo et al. (2017) Guo, Chuan, Geoff Pleiss, Yu Sun, Kilian Q Weinberger. 2017. On calibration of modern neural networks. International conference on machine learning. PMLR, 1321–1330.
- Hendrycks et al. (2020) Hendrycks, Dan, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 .
- Hou et al. (2023) Hou, Bairu, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, Yang Zhang. 2023. Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 .
- Joshi et al. (2017) Joshi, Mandar, Eunsol Choi, Daniel S Weld, Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551 .
- Kadavath et al. (2022) Kadavath, Saurav, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221 .
- Kuhn et al. (2023) Kuhn, Lorenz, Yarin Gal, Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. arXiv preprint arXiv:2302.09664 .
- Kumar et al. (2023) Kumar, Bhawesh, Charlie Lu, Gauri Gupta, Anil Palepu, David Bellamy, Ramesh Raskar, Andrew Beam. 2023. Conformal prediction with large language models for multi-choice question answering. arXiv preprint arXiv:2305.18404 .
- Lakshminarayanan et al. (2017) Lakshminarayanan, Balaji, Alexander Pritzel, Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems 30.
- Li et al. (2024) Li, Kenneth, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg. 2024. Inference-time intervention: Eliciting truthful answers from a language model. Advances in Neural Information Processing Systems 36.
- Lin and Och (2004a) Lin, Chin-Yew, Franz Josef Och. 2004a. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. Proceedings of the 42nd annual meeting of the association for computational linguistics (ACL-04). 605–612.
- Lin and Och (2004b) Lin, Chin-Yew, Franz Josef Och. 2004b. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics. COLING, Geneva, Switzerland, 501–507. URL https://www.aclweb.org/anthology/C04-1072.
- Lin et al. (2023) Lin, Zhen, Shubhendu Trivedi, Jimeng Sun. 2023. Generating with confidence: Uncertainty quantification for black-box large language models. arXiv preprint arXiv:2305.19187 .
- Lin et al. (2022) Lin, Zi, Jeremiah Zhe Liu, Jingbo Shang. 2022. Towards collaborative neural-symbolic graph semantic parsing via uncertainty. Findings of the Association for Computational Linguistics: ACL 2022 .
- Liu et al. (2023) Liu, Kevin, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas. 2023. Cognitive dissonance: Why do language model outputs disagree with internal representations of truthfulness? arXiv preprint arXiv:2312.03729 .
- Malinin and Gales (2021) Malinin, Andrey, Mark Gales. 2021. Uncertainty estimation in autoregressive structured prediction. International Conference on Learning Representations. URL https://openreview.net/forum?id=jN5y-zb5Q7m.
- Manakul et al. (2023) Manakul, Potsawee, Adian Liusie, Mark JF Gales. 2023. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896 .
- Mielke et al. (2022) Mielke, Sabrina J, Arthur Szlam, Emily Dinan, Y-Lan Boureau. 2022. Reducing conversational agents' overconfidence through linguistic calibration. Transactions of the Association for Computational Linguistics 10 857–872.
- Minderer et al. (2021) Minderer, Matthias, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, Mario Lucic. 2021. Revisiting the calibration of modern neural networks. Advances in Neural Information Processing Systems 34 15682–15694.
- Mohri and Hashimoto (2024) Mohri, Christopher, Tatsunori Hashimoto. 2024. Language models with conformal factuality guarantees. arXiv preprint arXiv:2402.10978 .
- Ouyang et al. (2022) Ouyang, Long, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems 35 27730–27744.
- Papineni et al. (2002) Papineni, Kishore, Salim Roukos, Todd Ward, Wei jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 311–318.
- Pedregosa et al. (2011) Pedregosa, Fabian, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12 2825–2830.
- Platt et al. (1999) Platt, John, et al. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers 10 (3) 61–74.
- Plaut et al. (2024) Plaut, Benjamin, Khanh Nguyen, Tu Trinh. 2024. Softmax probabilities (mostly) predict large language model correctness on multiple-choice q&a. arXiv preprint arXiv:2402.13213 .
- Quach et al. (2023) Quach, Victor, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S Jaakkola, Regina Barzilay. 2023. Conformal language modeling. arXiv preprint arXiv:2306.10193 .
- Radford et al. (2019) Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1 (8) 9.
- Ramos et al. (2017) Ramos, Sebastian, Stefan Gehrig, Peter Pinggera, Uwe Franke, Carsten Rother. 2017. Detecting unexpected obstacles for self-driving cars: Fusing deep learning and geometric modeling. 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 1025–1032.
- Rawte et al. (2023) Rawte, Vipula, Amit Sheth, Amitava Das. 2023. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922 .
- Si et al. (2022) Si, Chenglei, Chen Zhao, Sewon Min, Jordan Boyd-Graber. 2022. Re-examining calibration: The case of question answering. arXiv preprint arXiv:2205.12507 .
- Slobodkin et al. (2023) Slobodkin, Aviv, Omer Goldman, Avi Caciularu, Ido Dagan, Shauli Ravfogel. 2023. The curious case of hallucinatory (un) answerability: Finding truths in the hidden states of over-confident large language models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 3607â3625.
- Su et al. (2024) Su, Weihang, Changyue Wang, Qingyao Ai, Yiran Hu, Zhijing Wu, Yujia Zhou, Yiqun Liu. 2024. Unsupervised real-time hallucination detection based on the internal states of large language models. arXiv preprint arXiv:2403.06448 .
- Tian et al. (2023) Tian, Katherine, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, Christopher D Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975 .
- Touvron et al. (2023) Touvron, Hugo, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 .
- Verma et al. (2023) Verma, Shreyas, Kien Tran, Yusuf Ali, Guangyu Min. 2023. Reducing llm hallucinations using epistemic neural networks. arXiv preprint arXiv:2312.15576 .
- Xiao et al. (2022) Xiao, Yuxin, Paul Pu Liang, Umang Bhatt, Willie Neiswanger, Ruslan Salakhutdinov, Louis-Philippe Morency. 2022. Uncertainty quantification with pre-trained language models: A large-scale empirical analysis. arXiv preprint arXiv:2210.04714 .
- Xu et al. (2024) Xu, Ziwei, Sanjay Jain, Mohan Kankanhalli. 2024. Hallucination is inevitable: An innate limitation of large language models. arXiv preprint arXiv:2401.11817 .
- Ye and Durrett (2021) Ye, Xi, Greg Durrett. 2021. Can explanations be useful for calibrating black box models? arXiv preprint arXiv:2110.07586 .
- Zadrozny and Elkan (2002) Zadrozny, Bianca, Charles Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining. 694â699.
- Zhang et al. (2023) Zhang, Hanlin, Yi-Fan Zhang, Yaodong Yu, Dhruv Madeka, Dean Foster, Eric Xing, Hima Lakkaraju, Sham Kakade. 2023. A study on the calibration of in-context learning. arXiv preprint arXiv:2312.04021 .
- Zhang et al. (2021) Zhang, Shujian, Chengyue Gong, Eunsol Choi. 2021. Knowing more about questions can help: Improving calibration in question answering. arXiv preprint arXiv:2106.01494 .
- Zhou et al. (2023) Zhou, Han, Xingchen Wan, Lev Proleev, Diana Mincu, Jilin Chen, Katherine Heller, Subhrajit Roy. 2023. Batch calibration: Rethinking calibration for in-context learning and prompt engineering. arXiv preprint arXiv:2309.17249 .
Appendix A More Related Literature
Hallucination detection.
Recently, there has been a trend of adopting uncertainty estimation approaches for hallucination detection. The rationale is that the logits and the hidden states encode some of the LLMs' beliefs about the trustworthiness of their generated outputs. Taking the activations of hidden layers as input, Azaria and Mitchell (2023) train a classifier to predict hallucinations, and Verma et al. (2023) develop epistemic neural networks aimed at reducing hallucinations. Slobodkin et al. (2023) demonstrate that information from the hidden layers of an LLM can indicate the answerability of an input query, providing indirect insights into hallucination occurrences. Chen et al. (2024) develop an unsupervised metric that leverages the internal states of LLMs to perform hallucination detection. More related works on hallucination detection can be found in CH-Wang et al. (2023); Duan et al. (2024); Xu et al. (2024). While hallucination lacks a rigorous definition and its usage varies across this literature, the uncertainty estimation problem can be well defined, and our results on uncertainty estimation can also help the task of hallucination detection.
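The shared recipe in these supervised works (featurize the hidden activations, then fit a simple probe to predict response correctness) can be sketched as follows. This is a toy, self-contained illustration: synthetic Gaussian vectors stand in for real hidden activations, a least-squares linear probe stands in for the classifiers used in the cited papers, and the planted signal in the first five coordinates is our own assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden activations (n samples, d dims) and
# correctness labels; a few planted "neurons" carry signal about correctness.
n, d = 2000, 64
y = rng.integers(0, 2, size=n)          # 1 = response correct
X = rng.normal(size=(n, d))
X[:, :5] += 0.8 * y[:, None]            # planted informative coordinates

X_tr, y_tr = X[:1500], y[:1500]
X_te, y_te = X[1500:], y[1500:]

# Linear probe fit by least squares (a simplified stand-in for the
# logistic-regression-style classifiers used in the supervised works above).
w, *_ = np.linalg.lstsq(X_tr, y_tr.astype(float), rcond=None)
score = X_te @ w

# AUROC = probability that a correct response outscores an incorrect one.
pos, neg = score[y_te == 1], score[y_te == 0]
auroc = (pos[:, None] > neg[None, :]).mean()
print(f"AUROC: {auroc:.3f}")
```

With real data, `X` would be hidden-layer activations extracted from the LLM and `y` would come from grading the generated answers against references.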
Leveraging LLMs' hidden activations.
The hidden states of LLMs have been explored to better understand LLM behavior. Mielke et al. (2022) improve the linguistic calibration of a controllable chit-chat model by fine-tuning it with a calibrator trained on the hidden states, and Burns et al. (2022) utilize hidden activations in an unsupervised way to represent knowledge about the truthfulness of model outputs. Liu et al. (2023) show that LLMs' linguistic outputs and their internal states can offer conflicting information about truthfulness, and that which of the two is the more reliable source often varies from one scenario to another. Taking the activations of hidden layers as input, Ahdritz et al. (2024) employ a linear probe to show that hidden-layer information from LLMs can be used to differentiate between epistemic and aleatoric uncertainty. Duan et al. (2024) experimentally reveal variations in hidden-layer activations when LLMs generate true versus false responses in their hallucination detection task. Lastly, Li et al. (2024) enhance the truthfulness of LLMs at inference time by shifting the hidden activations in specific directions.
We also remark on the following two aspects:
- Fine-tuning: For all the numerical experiments in this paper, we do not perform any fine-tuning of the underlying LLMs. While fine-tuning generally boosts an LLM's performance on a downstream task, our methods can still be applied to a fine-tuned LLM, which we leave as future work.
- Hallucination: The hallucination problem has been widely studied in the LLM literature. Yet, as mentioned earlier, there seems to be no consensus on a rigorous definition of what hallucination refers to in the context of LLMs. For example, when an image classifier wrongly classifies a cat image as a dog, we do not say the classifier hallucinates; why or when, then, should we say that LLMs hallucinate when they make a mistake? Comparatively, the uncertainty estimation problem is better defined, and we provide a mathematical formulation of the uncertainty estimation task for LLMs. We also believe our results on uncertainty estimation can contribute to a better understanding of the hallucination phenomenon and of tasks such as hallucination detection.
Appendix B Interpreting the Uncertainty Estimation
Now we use some visualizations to provide insights into the working mechanism of the uncertainty estimation procedure for LLMs and to better understand the experiment results in the previous subsection.
B.1 Layer comparison
For general LLMs, each token is associated with a relatively large number of hidden layers (32 layers for LLaMA2-7B, for example), each represented by a high-dimensional vector (4096 dimensions for LLaMA2-7B). Due to this dimensionality, it is generally not good practice to incorporate all hidden layers as features for uncertainty estimation. Previous works find that the middle-layer and last-layer activations of the LLM's last token contain the most useful features for supervised learning (Burns et al., 2022; Chen et al., 2024; Ahdritz et al., 2024; Azaria and Mitchell, 2023). To investigate the layer-wise effect on uncertainty estimation, we implement our Wb-S method with features differing in two aspects: (i) the layer within the LLM architecture, specifically the middle versus the last layer (e.g., LLaMA2-7B and LLaMA3-8B: the 16th and 32nd of 32 layers, with 4096 dimensions; Gemma-7B: the 14th and 28th of 28 layers, with 3072 dimensions); and (ii) the token position, either averaging the hidden activations over all the prompt/answer tokens or using the hidden activation of the last token. The second aspect is meaningful only when the output contains more than one token, so we conduct this experiment on the natural language generation tasks only. Figure 3 visualizes the comparison. While these feature extraction choices perform quite similarly across tasks and LLMs, activation features from the middle layer generally perform better than those from the last layer. This may stem from the fact that the last layer focuses more on generating the next token than on summarizing the whole sentence, as discussed by Azaria and Mitchell (2023).
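The four feature variants compared above can be written down directly once the per-layer, per-token hidden states are available. The sketch below uses a random numpy array in place of real hidden states; the array layout (layers first, with one extra slot for the embedding output, a common convention) and the toy sizes matching LLaMA2-7B are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the per-layer, per-token hidden states of one prompt/answer:
# shape (n_layers + 1, n_tokens, hidden_dim); the +1 slot models the
# embedding-layer output. Sizes here mirror LLaMA2-7B but the values are random.
n_layers, n_tokens, dim = 32, 10, 4096
hidden = rng.normal(size=(n_layers + 1, n_tokens, dim))

mid, last = n_layers // 2, n_layers   # e.g. the 16th and 32nd layers

features = {
    "avg_token_mid_layer":   hidden[mid].mean(axis=0),   # average over tokens
    "avg_token_last_layer":  hidden[last].mean(axis=0),
    "last_token_mid_layer":  hidden[mid, -1],            # last token only
    "last_token_last_layer": hidden[last, -1],
}
for name, f in features.items():
    print(name, f.shape)   # each variant is a single 4096-dim feature vector
```

Each variant yields one fixed-length vector per example, which is what the downstream Wb-S probe consumes.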
<details>
<summary>x3.png Details</summary>

Bar charts of Wb-S AUROC (roughly 0.75 to 0.90) for Gemma-7B, LLaMA2-7B, and LLaMA3-8B on TriviaQA, CoQA, and WMT-14, comparing four feature choices: average-token versus last-token activations, each taken from the middle versus the last layer. The differences across feature choices are small relative to the differences across tasks, with mid-layer features slightly ahead of last-layer features; CoQA is the hardest task for all three models.
</details>
Figure 3: Performance comparison of using hidden activations from different tokens and layers as features in the Wb-S method. The bars filled with "/" and "." represent the activations averaged over the answer tokens and the hidden activation of the last token, respectively, and the green and orange bars denote the activations from the middle and the last layer, respectively.
B.2 Scaling effect
In Figure 4, we investigate whether larger LLMs' hidden activations enhance our uncertainty estimation method. For a fair comparison, we fix the target LLM that generates the output in Algorithm 1 and vary the tool LLM used for analysis. For example, in the left plot of Figure 4, we use Gemma-7B to generate the outputs, and LLaMA2-7B, LLaMA2-13B, and Gemma-7B to perform uncertainty estimation.
<details>
<summary>x4.png Details</summary>

Bar charts of AUROC (roughly 0.70 to 1.00) for three transfer settings: LLaMA2 (7B and 13B) predicting Gemma-7B, Gemma (2B and 7B) predicting LLaMA2-7B, and Gemma (2B and 7B) predicting LLaMA3-8B, on MMLU, TriviaQA, CoQA, and WMT-14, alongside the target model's own Wb-S and Gb-S scores. The larger tool LLM generally achieves higher AUROC than the smaller one, and in the Gemma-to-LLaMA2-7B setting even Gemma-2B exceeds the Gb-S baseline on some tasks.
</details>
Figure 4: (Left) Using the hidden activations of LLaMA2-7B and LLaMA2-13B to estimate the uncertainty of the answer provided by Gemma-7B. (Middle) Using the hidden activations of Gemma-2B and Gemma-7B to estimate the uncertainty of the answer provided by LLaMA2-7B. (Right) Using the hidden activations of Gemma-2B and Gemma-7B to estimate the uncertainty of the answer provided by LLaMA3-8B.
We find that larger LLMs do encode better knowledge about uncertainty, which we attribute to their improved knowledge for answering the questions. We also note that, in the case of using Gemma to predict LLaMA2-7B, even a small tool LLM (Gemma-2B) is capable of outperforming Gb-S, which only uses the entropy- and probability-related features from the target LLM. This result underscores the benefit of using internal states to estimate uncertainty, even when they come from an LLM different from the one generating the answers.
B.3 Histogram of correlations
Figure 5 plots histograms of the pairwise correlations between the neuron activations and the labels (whether the LLM's response is correct). We make two observations. First, for all three LLMs, some neurons have a significantly positive (or negative) correlation with the label. We can interpret these as uncertainty neurons for the corresponding task: when they are activated, the LLM is uncertain about its response. Second, Gemma-7B and LLaMA3-8B have more significant neurons than LLaMA2-7B, which is consistent with the better performance of Gemma-7B and LLaMA3-8B in Table 1 and Table 2. This also reinforces that the hidden activations of LLMs contain uncertainty information about the LLM's output.
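The correlation computation behind such histograms can be sketched as follows: for each neuron, compute the Pearson correlation between its activation across samples and the binary correctness label, then inspect the neurons with the largest absolute correlation. The data here is synthetic, with three planted negatively correlated neurons as our own illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activations (n samples, d neurons) and correctness labels;
# three planted "uncertainty neurons" correlate negatively with correctness.
n, d = 1000, 256
y = rng.integers(0, 2, size=n).astype(float)
acts = rng.normal(size=(n, d))
acts[:, :3] -= 0.5 * y[:, None]

# Pearson correlation of each neuron's activation with the binary label.
yc = y - y.mean()
ac = acts - acts.mean(axis=0)
corr = (ac * yc[:, None]).sum(axis=0) / (
    np.sqrt((ac**2).sum(axis=0)) * np.sqrt((yc**2).sum())
)

# Neurons with the largest |correlation| are candidate uncertainty neurons.
top = np.argsort(-np.abs(corr))[:5]
print("most correlated neurons:", top, corr[top].round(3))
```

Plotting a histogram of `corr` recovers the shape shown in Figure 5: a bulk of near-zero correlations with a tail of strongly correlated neurons.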
<details>
<summary>x5.png Details</summary>

Histograms of the per-neuron correlations (x-axis, roughly -0.2 to 0.2) between mid-layer activations and the correctness label for LLaMA2-7B (blue), LLaMA3-8B (red), and Gemma-7B (green). All three distributions are centered at zero; Gemma-7B's is the most spread out, indicating more strongly correlated neurons.
</details>
<details>
<summary>x6.png Details</summary>

Histograms of the per-neuron correlations between last-layer activations and the correctness label for the same three models. LLaMA2-7B and LLaMA3-8B remain tightly concentrated around zero, while Gemma-7B's correlations are spread much more uniformly across the range.
</details>
Figure 5: The histograms of the pairwise correlations on the TriviaQA task between the neuron activations and the labels (whether the LLM's response is correct), where the neuron values are the last-token hidden activations of the answers from the middle layer (upper) and the last layer (lower) of the three models.
Figure 6 plots the activations of some example neurons, selected as those with the largest absolute correlations in Figure 5. More neurons from the last layer can be found in Figure 7. Individually, these neurons exhibit different distributional patterns when the response is correct versus incorrect, and thus reflect the uncertainty of the LLM's responses.
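This per-neuron comparison amounts to splitting one neuron's activations by response correctness and histogramming the two groups. Below is a minimal synthetic sketch; the 0.6 location shift for incorrect responses is our own assumption for illustration, not a measured value.

```python
import numpy as np

rng = np.random.default_rng(0)

# One synthetic "uncertainty neuron": its activation is drawn from a
# shifted distribution when the response is incorrect.
correct = rng.integers(0, 2, size=2000).astype(bool)
act = rng.normal(loc=np.where(correct, 0.0, 0.6), scale=1.0)

# Histogram the activations separately for correct / incorrect responses,
# mirroring the blue ("true answer") / red ("false answer") split.
bins = np.linspace(-4, 4, 41)
hist_true, _ = np.histogram(act[correct], bins=bins)
hist_false, _ = np.histogram(act[~correct], bins=bins)
print("mean act | correct:", round(act[correct].mean(), 2),
      "| incorrect:", round(act[~correct].mean(), 2))
```

A neuron whose two histograms separate like this is informative about correctness on its own, before any probe is trained.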
<details>
<summary>x7.png Details</summary>

A 3x4 grid of histograms, one row per model (LLaMA2-7B, LLaMA3-8B, Gemma-7B) and one column per selected mid-layer neuron, showing each neuron's activation distribution separated into correct ("true answer", blue) and incorrect ("false answer", red) responses. For the selected neurons, the two distributions visibly differ in location and spread.
</details>
Figure 6: Distributions of values from particular mid-layer neurons on the TriviaQA dataset.
<details>
<summary>x8.png Details</summary>

### Visual Description
## Histogram Grid: Neuron Activation Distributions
### Overview
The image presents a grid of 12 histograms, arranged in a 3x4 matrix. Each histogram visualizes the distribution of neuron activations for a specific neuron in different language models (LLaMA-2-7B, LLaMA-3-8B, and Gemma-7B). The histograms compare the activation distributions for "true answer" and "false answer" scenarios, represented by blue and red bars, respectively.
### Components/Axes
* **Title:** None explicitly provided for the entire figure, but each subplot has a title indicating the neuron number and "neuron act." (neuron activation).
* **X-axis:** Represents the neuron activation value. The range varies across subplots, but generally spans a range of approximately -20 to +20.
* **Y-axis:** Represents the number of samples. The scale is consistent across all three rows, ranging from 0 to 1000.
* **Legend:** Located at the top of the image.
* Blue: "true answer"
* Red: "false answer"
* **Y-axis Label:** "# Samples / LLaMA-2-7B" for the first row, "# Samples / LLaMA-3-8B" for the second row, and "# Samples / Gemma-7B" for the third row.
### Detailed Analysis
Here's a breakdown of each subplot, including the neuron number, model, and a description of the distributions:
**Row 1: LLaMA-2-7B**
* **Plot 1:** 2021-th neuron act.
* X-axis: Approximately -10 to 10.
* True answer (blue): Peaks around -8, with a sharp drop-off towards 0 and a long tail to the right. Max value ~700.
* False answer (red): Peaks around -5, with a broader distribution extending to the right. Max value ~300.
* **Plot 2:** 149-th neuron act.
* X-axis: Approximately -10 to 10.
* True answer (blue): Peaks around 5, with a long tail to the left. Max value ~500.
* False answer (red): Peaks around 5, with a smaller distribution. Max value ~200.
* **Plot 3:** 3556-th neuron act.
* X-axis: Approximately -40 to 20.
* True answer (blue): Peaks around -25, with a sharp drop-off towards 0 and a long tail to the right. Max value ~900.
* False answer (red): Peaks around -20, with a broader distribution extending to the right. Max value ~300.
* **Plot 4:** 2672-th neuron act.
* X-axis: Approximately -2.5 to 5.0.
* True answer (blue): Peaks around 2.5, with a long tail to the left. Max value ~500.
* False answer (red): Peaks around 2.5, with a smaller distribution. Max value ~200.
**Row 2: LLaMA-3-8B**
* **Plot 5:** 1917-th neuron act.
* X-axis: Approximately -20 to 20.
* True answer (blue): Peaks around -10, with a long tail to the right. Max value ~400.
* False answer (red): Peaks around -5, with a broader distribution extending to the right. Max value ~200.
* **Plot 6:** 4055-th neuron act.
* X-axis: Approximately -20 to 0.
* True answer (blue): Peaks around -15, with a long tail to the right. Max value ~400.
* False answer (red): Peaks around -10, with a broader distribution extending to the right. Max value ~200.
* **Plot 7:** 3795-th neuron act.
* X-axis: Approximately -15 to 5.
* True answer (blue): Peaks around -8, with a long tail to the right. Max value ~500.
* False answer (red): Peaks around -5, with a broader distribution extending to the right. Max value ~200.
* **Plot 8:** 3939-th neuron act.
* X-axis: Approximately -10 to 10.
* True answer (blue): Peaks around -5, with a long tail to the right. Max value ~400.
* False answer (red): Peaks around -2, with a broader distribution extending to the right. Max value ~200.
**Row 3: Gemma-7B**
* **Plot 9:** 2944-th neuron act.
* X-axis: Approximately -5 to 5.
* True answer (blue): Peaks around -2, with a long tail to the right. Max value ~400.
* False answer (red): Peaks around -1, with a broader distribution extending to the right. Max value ~200.
* **Plot 10:** 96-th neuron act.
* X-axis: Approximately -10 to 5.
* True answer (blue): Peaks around -5, with a long tail to the right. Max value ~400.
* False answer (red): Peaks around -2, with a broader distribution extending to the right. Max value ~200.
* **Plot 11:** 156-th neuron act.
* X-axis: Approximately -5 to 5.
* True answer (blue): Peaks around 2, with a long tail to the left. Max value ~400.
* False answer (red): Peaks around 3, with a broader distribution extending to the left. Max value ~200.
* **Plot 12:** 23-th neuron act.
* X-axis: Approximately -5 to 5.
* True answer (blue): Peaks around 2, with a long tail to the left. Max value ~400.
* False answer (red): Peaks around 3, with a broader distribution extending to the left. Max value ~200.
### Key Observations
* The distributions of neuron activations differ significantly between "true answer" and "false answer" scenarios.
* The "true answer" distributions tend to have sharper peaks, while the "false answer" distributions are broader.
* The activation ranges vary across different neurons.
* The LLaMA-2-7B model seems to have a wider range of activation values compared to LLaMA-3-8B and Gemma-7B.
* The y-axis scale is consistent across all plots, allowing for direct comparison of the distributions.
### Interpretation
The histograms provide insights into how different neurons in the language models respond to "true" and "false" answers. The distinct distributions suggest that these neurons play a role in distinguishing between correct and incorrect responses. The sharper peaks in the "true answer" distributions may indicate a more focused and specific activation pattern when the model is providing a correct answer. The broader "false answer" distributions could reflect a more diffuse or less certain activation pattern when the model is making a mistake. The differences in activation ranges and distributions across different neurons highlight the diverse roles that individual neurons play in the overall functioning of the language models. Comparing the distributions across the three models (LLaMA-2-7B, LLaMA-3-8B, and Gemma-7B) could reveal differences in their internal representations and processing strategies.
</details>
Figure 7: More distributions of values from specific neurons of the last layers on the TriviaQA dataset. The plots are obtained in the same way as Figure 6.
B.4 Proof of Proposition 4.1
The proof of Proposition 4.1 follows from the definition of $f^{*}$.
Appendix C Calibration performance
In Section 4.1, we distinguish the two tasks of uncertainty estimation and uncertainty calibration. Throughout the paper, we have focused on improving performance on the task of uncertainty estimation, i.e., predicting when the LLM is uncertain about its response. Generally, a better uncertainty estimation model leads to better calibration performance. Indeed, the calibration (or recalibration) of the uncertainty estimation model reduces to the classic ML setting and does not involve the LLM. Table 4 gives the calibration performance, and our supervised methods show an advantage over the benchmark methods, consistent with the AUROC performance in Table 1. We adopt the histogram binning method here because we find that temperature scaling and Platt scaling concentrate all predicted scores within a small range such as $[0.2,0.6]$; we also do not exclude the possibility that other calibration methods give even better performance. The point to make here is that uncertainty estimation and uncertainty calibration are two closely related tasks. Note that (i) a better uncertainty estimation model leads to better calibration performance, and (ii) since LLMs are pretrained and not designed for these NLP tasks in the first place (see Section 4.2), there is no uncertainty score readily available (unlike the predicted probabilities of image classifiers). This underscores the importance of an extra uncertainty estimation procedure, such as our supervised one, to extract the uncertainty information from the inside of the LLMs.
| Metric | Dataset | Model |  |  |  |  |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NLL | TriviaQA | G-7B | 0.478 | 0.500 | 0.428 | 0.472 | 0.739 | 8.710 | 0.414 | 0.467 | 0.392 |
|  |  | L-7B | 1.155 | 0.551 | 0.575 | 0.600 | 1.481 | 21.119 | 0.338 | 0.580 | 0.388 |
|  |  | L-8B | 0.483 | 0.407 | 0.383 | 0.401 | 0.719 | 8.515 | 0.423 | 0.467 | 0.365 |
|  | CoQA | G-7B | 0.778 | 0.474 | 0.469 | 0.476 | 0.632 | 8.106 | 0.474 | 0.497 | 0.457 |
|  |  | L-7B | 1.047 | 0.620 | 0.637 | 0.649 | 1.358 | 11.708 | 0.417 | 0.607 | 0.457 |
|  |  | L-8B | 0.823 | 0.502 | 0.508 | 0.499 | 0.762 | 8.007 | 0.551 | 0.535 | 0.507 |
|  | WMT-14 | G-7B | 9.674 | 1.266 | 0.809 | 0.618 | 0.701 | 17.933 | 0.454 | 0.463 | 0.449 |
|  |  | L-7B | 1.204 | 1.150 | 0.718 | 0.809 | 0.796 | 16.913 | 0.553 | 0.622 | 0.583 |
|  |  | L-8B | 1.490 | 0.752 | 0.652 | 0.676 | 0.722 | 21.340 | 0.649 | 0.673 | 0.612 |
| ECE | TriviaQA | G-7B | 0.152 | 0.138 | 0.066 | 0.115 | 0.275 | 0.253 | 0.056 | 0.075 | 0.067 |
|  |  | L-7B | 0.437 | 0.068 | 0.048 | 0.146 | 0.188 | 0.616 | 0.043 | 0.087 | 0.049 |
|  |  | L-8B | 0.171 | 0.082 | 0.046 | 0.081 | 0.196 | 0.283 | 0.107 | 0.087 | 0.075 |
|  | CoQA | G-7B | 0.356 | 0.054 | 0.112 | 0.064 | 0.221 | 0.237 | 0.121 | 0.129 | 0.113 |
|  |  | L-7B | 0.397 | 0.065 | 0.105 | 0.073 | 0.174 | 0.494 | 0.052 | 0.071 | 0.038 |
|  |  | L-8B | 0.339 | 0.031 | 0.071 | 0.033 | 0.196 | 0.312 | 0.156 | 0.110 | 0.122 |
|  | WMT-14 | G-7B | 0.499 | 0.464 | 0.234 | 0.197 | 0.072 | 0.521 | 0.097 | 0.063 | 0.073 |
|  |  | L-7B | 0.164 | 0.389 | 0.065 | 0.269 | 0.127 | 0.491 | 0.045 | 0.090 | 0.101 |
|  |  | L-8B | 0.318 | 0.192 | 0.051 | 0.142 | 0.029 | 0.618 | 0.145 | 0.201 | 0.137 |
| Brier | TriviaQA | G-7B | 0.282 | 0.221 | 0.224 | 0.215 | 0.344 | 0.279 | 0.266 | 0.288 | 0.282 |
|  |  | L-7B | 0.431 | 0.241 | 0.271 | 0.259 | 0.322 | 0.645 | 0.334 | 0.322 | 0.315 |
|  |  | L-8B | 0.262 | 0.192 | 0.204 | 0.188 | 0.291 | 0.373 | 0.258 | 0.265 | 0.255 |
|  | CoQA | G-7B | 0.318 | 0.174 | 0.188 | 0.171 | 0.232 | 0.241 | 0.207 | 0.218 | 0.212 |
|  |  | L-7B | 0.395 | 0.233 | 0.242 | 0.230 | 0.265 | 0.464 | 0.296 | 0.256 | 0.276 |
|  |  | L-8B | 0.338 | 0.197 | 0.201 | 0.191 | 0.255 | 0.359 | 0.258 | 0.242 | 0.248 |
|  | WMT-14 | G-7B | 0.505 | 0.454 | 0.330 | 0.319 | 0.247 | 0.606 | 0.327 | 0.287 | 0.309 |
|  |  | L-7B | 0.313 | 0.413 | 0.271 | 0.334 | 0.275 | 0.502 | 0.296 | 0.277 | 0.288 |
|  |  | L-8B | 0.343 | 0.279 | 0.250 | 0.263 | 0.246 | 0.620 | 0.282 | 0.300 | 0.284 |
Table 4: Calibration performance on natural language generation tasks after histogram binning. The base models are from Table 1. The original uncertainty scores from the base models are first scaled into $[0,1]$ and then a histogram binning is performed with 20 bins of equal length.
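To make the recalibration step above concrete, the following is a minimal sketch (not the authors' exact implementation; the function names are ours) of histogram binning with equal-length bins on $[0,1]$, together with the ECE metric reported in Table 4.

```python
import numpy as np

def histogram_binning(scores_cal, labels_cal, n_bins=20):
    """Fit histogram binning: map each of n_bins equal-length bins on [0, 1]
    to the empirical accuracy of the calibration samples falling into it."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(scores_cal, edges) - 1, 0, n_bins - 1)
    bin_acc = np.full(n_bins, labels_cal.mean())  # fallback for empty bins
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            bin_acc[b] = labels_cal[mask].mean()

    def transform(scores):
        ids = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
        return bin_acc[ids]

    return transform

def expected_calibration_error(scores, labels, n_bins=20):
    """ECE: bin-weight-averaged gap between mean confidence and accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ids = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = ids == b
        if mask.any():
            ece += mask.mean() * abs(scores[mask].mean() - labels[mask].mean())
    return ece
```

The calibrated score of a new response is simply the historical accuracy of its bin, which is why better-separated uncertainty scores (better AUROC) translate into lower calibration error after binning.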
Appendix D Details for the Numerical Experiments
We ran all of our experiments on an AMD EPYC 7452 128-core processor with 4 × 48GB NVIDIA A6000 GPUs.
D.1 Dataset preparation
In the following we provide more information for the three tasks considered in our numerical experiments.
- Question answering. We follow Kuhn et al. (2023) and use the CoQA and TriviaQA (Joshi et al., 2017) datasets. The CoQA task requires the LLM to answer questions by understanding the provided text, while TriviaQA requires the LLM to answer questions based on its pre-training knowledge. We adopt Rouge-1 (Lin and Och, 2004a) as the scoring function $s(\cdot,\cdot)$ and label a response $\bm{y}_{i}$ as correct if $s(\bm{y}_{i},\bm{y}_{i,\text{true}})\geq 0.3$ and incorrect otherwise.
- Multiple choice. We consider the Massive Multitask Language Understanding (MMLU) dataset (Hendrycks et al., 2020), a collection of 15,858 questions covering 57 subjects across STEM, the humanities, the social sciences, and more. Due to the special structure of the dataset, the generated output $\bm{y}_{i}$ and the correct answer $\bm{y}_{\text{true},i}$ both belong to $\{\text{A, B, C, D}\}$. Therefore, this task can also be regarded as a classification problem in which the LLM answers the question with one of the four candidate choices.
- Machine translation. We consider the WMT 2014 dataset (Bojar et al., 2014) for estimating the LLM's uncertainty on the machine translation task. The scoring function $s(\cdot,\cdot)$ is chosen to be the BLEU score (Papineni et al., 2002; Lin and Och, 2004b), and the generated answer $\bm{y}_{i}$ is labeled as correct if $s(\bm{y}_{i},\bm{y}_{i,\text{true}})>0.3$ and incorrect otherwise.
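As an illustration of the labeling rule for question answering, here is a minimal sketch of a unigram-overlap (Rouge-1) F-measure together with the thresholding step. The exact Rouge-1 variant (precision, recall, or F) and the tokenization used are assumptions on our part.

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap F-measure between a generated answer and the reference,
    using simple whitespace tokenization (an assumption)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def label_response(generated: str, reference: str, threshold: float = 0.3) -> int:
    """Label 1 (correct) if the Rouge-1 score meets the threshold, else 0."""
    return int(rouge1_f(generated, reference) >= threshold)
```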
Prompt dataset generation. For all the tasks studied in this paper, we adopt few-shot prompting for the LLM. Specifically, the prompt provides $r$ examples so that the LLM learns the format of the response, as illustrated below. For the question-answering task, we construct the prompts without reusing any question-answering sample in the original dataset. For example, Prompt 1 includes the 1st to $r$-th question-answering samples in the original dataset as the examples and the $(r+1)$-th sample as the target question-answering pair for the LLM; Prompt 2 then uses the $(r+2)$-th to $(2r+1)$-th samples as the examples and the $(2r+2)$-th sample as the target question-answering pair. However, as the test datasets of MMLU and WMT used for evaluation are not sufficiently large, we generate their prompts in a convolution-like (sliding-window) manner: Prompt 2 includes the 2nd to $(r+1)$-th question-answering samples as the examples and the $(r+2)$-th sample as the target question-answering pair.
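The two prompt-construction schemes above can be sketched as follows; the function names and the (examples, target) pair representation are ours, for illustration only.

```python
def build_prompts_disjoint(samples, r):
    """Disjoint windows: prompt k uses samples [k*(r+1), k*(r+1)+r) as the
    few-shot examples and sample k*(r+1)+r as the target, so no sample is
    reused across prompts."""
    prompts = []
    for start in range(0, len(samples) - r, r + 1):
        prompts.append((samples[start:start + r], samples[start + r]))
    return prompts

def build_prompts_sliding(samples, r):
    """Convolution-like windows: prompt k uses samples [k, k+r) as the
    examples and sample k+r as the target, so consecutive prompts overlap;
    this yields more prompts from a small dataset."""
    return [(samples[k:k + r], samples[k + r]) for k in range(len(samples) - r)]
```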
Dataset split. After generating the prompt-answering dataset, we split it into two parts: one for training the calibration model and one for evaluation/test. For the MMLU and WMT datasets, we take the dataset generated from the original validation/test dataset. For the question-answering task, as the answers in the original TriviaQA test dataset are not provided, we take the first 2000 generated prompt-answering pairs from the training dataset as the test dataset and use the remaining pairs for training.
Prompting format. Here we give the prompting templates used for the different tasks. We use few-shot prompting, and the templates can roughly be divided into four parts: introduction (empty only for WMT), examples, question, and answer, where the examples are $r$ distinct question-answer pairs in the same form as the question and answer parts. We feed the model the template string, excluding the reference answer, as input.
COQA Reading the passage and answer given questions accordingly. Passage: {a passage in COQA} Examples: {r distinct QA pairs related to the given passage} Q: {a new question related to the given passage} A: {reference answer}
TriviaQA Answer the question as following examples. Examples: {r distinct QA pairs} Q: {a new question} A: {reference answer}
MMLU You would be given a multiple-choice question paired with 4 choices (A-D). Choose one of them using letter A, B, C, or D as the correct answer to the question. Here are some examples: {r distinct QA pairs} Now answer the question: {a new question} A: {answer sentence A} B: {answer sentence B} C: {answer sentence C} D: {answer sentence D} Answer: {reference answer (a letter)}
WMT {r distinct QA pairs} Q: What is the English translation of the following sentence? {a French sentence} A: {reference answer (an English sentence)}
D.2 Details of the training procedure
For the three regimes of our supervised approach presented in Section 3.3, the details of the supervised training procedure are as below:
Gb-S. For the natural language generation tasks (question-answering and machine-translation), we train a random forest model with the input features listed in Table 5 (20 features in total). For the multiple-choice task, as the answer has only one token from {A, B, C, D}, we take the output logits of these 4 tokens (denoted as $\alpha_{\text{A}}$ , $\alpha_{\text{B}}$ , $\alpha_{\text{C}}$ , and $\alpha_{\text{D}}$ ) after inputting the question prompt $\bm{x}$ to the LLM. Then, we get the probability of each choice as follows:
$$
p_{\theta}(y|\bm{x})=\frac{\exp(\alpha_{y})}{\sum_{y^{\prime}\in\{\text{A},\text{B},\text{C},\text{D}\}}\exp(\alpha_{y^{\prime}})},\quad\forall y\in\{\text{A},\text{B},\text{C},\text{D}\}.
$$
Then we use 5 features as the input to Gb-S: the entropy of this distribution and the four probability values sorted in descending order.
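A minimal sketch of computing these 5 features from the four choice logits (softmax probabilities, their entropy, and the sorted probabilities); the function name is ours.

```python
import math

def mmlu_features(logits):
    """Return the 5 Gb-S features for the multiple-choice task from the output
    logits of the tokens A, B, C, D: the entropy of the softmax distribution,
    followed by the 4 choice probabilities sorted in descending order."""
    m = max(logits)
    exps = [math.exp(a - m) for a in logits]  # shift logits for stability
    z = sum(exps)
    probs = [e / z for e in exps]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return [entropy] + sorted(probs, reverse=True)
```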
| Feature | Answer | Question |
| --- | --- | --- |
| Max Ent | $\max_{j\in\{1,\dots,m\}} H(p_{\theta}(\cdot|\bm{x},\bm{y}_{1:j-1}))$ | $\max_{j\in\{1,\dots,n\}} H(p_{\theta}(\cdot|\bm{x}_{1:j-1}))$ |
| Min Ent | $\min_{j\in\{1,\dots,m\}} H(p_{\theta}(\cdot|\bm{x},\bm{y}_{1:j-1}))$ | $\min_{j\in\{1,\dots,n\}} H(p_{\theta}(\cdot|\bm{x}_{1:j-1}))$ |
| Avg Ent | $\frac{1}{m}\sum_{j=1}^{m}H(p_{\theta}(\cdot|\bm{x},\bm{y}_{1:j-1}))$ | $\frac{1}{n}\sum_{j=1}^{n}H(p_{\theta}(\cdot|\bm{x}_{1:j-1}))$ |
| Std Ent | $\sqrt{\frac{\sum_{j=1}^{m}\left(H(p_{\theta}(\cdot|\bm{x},\bm{y}_{1:j-1}))-\text{Avg Ent}\right)^{2}}{m-1}}$ | $\sqrt{\frac{\sum_{j=1}^{n}\left(H(p_{\theta}(\cdot|\bm{x}_{1:j-1}))-\text{Avg Ent}\right)^{2}}{n-1}}$ |
| Max Likelihood | $\max_{j\in\{1,\dots,m\}} -\log p_{\theta}(y_{j}|\bm{x},\bm{y}_{1:j-1})$ | $\max_{j\in\{1,\dots,n\}} -\log p_{\theta}(x_{j}|\bm{x}_{1:j-1})$ |
| Min Likelihood | $\min_{j\in\{1,\dots,m\}} -\log p_{\theta}(y_{j}|\bm{x},\bm{y}_{1:j-1})$ | $\min_{j\in\{1,\dots,n\}} -\log p_{\theta}(x_{j}|\bm{x}_{1:j-1})$ |
| Avg Likelihood | $\frac{1}{m}\sum_{j=1}^{m}-\log p_{\theta}(y_{j}|\bm{x},\bm{y}_{1:j-1})$ | $\frac{1}{n}\sum_{j=1}^{n}-\log p_{\theta}(x_{j}|\bm{x}_{1:j-1})$ |
| Std Likelihood | $\sqrt{\frac{\sum_{j=1}^{m}\left(-\log p_{\theta}(y_{j}|\bm{x},\bm{y}_{1:j-1})-\text{Avg Likelihood}\right)^{2}}{m-1}}$ | $\sqrt{\frac{\sum_{j=1}^{n}\left(-\log p_{\theta}(x_{j}|\bm{x}_{1:j-1})-\text{Avg Likelihood}\right)^{2}}{n-1}}$ |
| Avg Prob | $\frac{1}{m}\sum_{j=1}^{m}p_{\theta}(y_{j}|\bm{x},\bm{y}_{1:j-1})$ | $\frac{1}{n}\sum_{j=1}^{n}p_{\theta}(x_{j}|\bm{x}_{1:j-1})$ |
| Std Prob | $\sqrt{\frac{\sum_{j=1}^{m}\left(p_{\theta}(y_{j}|\bm{x},\bm{y}_{1:j-1})-\text{Avg Prob}\right)^{2}}{m-1}}$ | $\sqrt{\frac{\sum_{j=1}^{n}\left(p_{\theta}(x_{j}|\bm{x}_{1:j-1})-\text{Avg Prob}\right)^{2}}{n-1}}$ |
Table 5: Grey-box features used for the supervised task of uncertainty estimation for LLMs.
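Assuming access to the per-token predictive entropies and the log-probabilities of the realized tokens, the 10 features in either column of Table 5 can be computed as below (a sketch; the function name is ours). Applying the same function to both the answer tokens and the question tokens yields the full set of 20 grey-box features.

```python
import numpy as np

def greybox_features(entropies, logprobs):
    """Compute the 10 per-sequence grey-box features of Table 5 from the
    token-level predictive entropies H(p_theta(.|context)) and the log-probs
    log p_theta(token|context) of a sequence with at least two tokens.
    ddof=1 matches the (m-1) denominator in the Std rows of Table 5."""
    ent = np.asarray(entropies, dtype=float)
    nll = -np.asarray(logprobs, dtype=float)   # negative log-likelihoods
    prob = np.exp(np.asarray(logprobs, dtype=float))
    return {
        "Max Ent": ent.max(), "Min Ent": ent.min(),
        "Avg Ent": ent.mean(), "Std Ent": ent.std(ddof=1),
        "Max Likelihood": nll.max(), "Min Likelihood": nll.min(),
        "Avg Likelihood": nll.mean(), "Std Likelihood": nll.std(ddof=1),
        "Avg Prob": prob.mean(), "Std Prob": prob.std(ddof=1),
    }
```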
Wb-S. The dimension of a hidden layer of an LLM is typically high (e.g., 4096 for LLaMA2-7B), which may prevent the calibration model from capturing the effective uncertainty information revealed by the activations, especially with limited training samples. Thus, before training a model, we perform feature selection. We keep all the features used in Gb-S and select another 300 features (neural nodes): (i) we train a Lasso model on all the features and select the 100 neural nodes with the highest absolute coefficient values; (ii) by calculating the mutual information between each neural node and the label (correct or not), we select another 100 features with the highest mutual information; (iii) we select another 100 features with the highest absolute Pearson correlation coefficient. After the feature selection, we train a random forest model to predict whether the response is correct based on the selected features.
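A sketch of the three-criteria feature selection, using scikit-learn's Lasso and mutual information. The Lasso penalty strength and how overlapping selections across the three criteria are handled are not specified in the text, so they are assumptions here (we simply concatenate the index sets).

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.feature_selection import mutual_info_classif

def select_features(X, y, k=100, alpha=0.01):
    """Rank hidden-activation features by three criteria and take the top-k of
    each: (i) absolute Lasso coefficients, (ii) mutual information with the
    correctness label, (iii) absolute Pearson correlation with the label."""
    # (i) Lasso coefficients
    lasso = Lasso(alpha=alpha).fit(X, y)
    idx_lasso = np.argsort(-np.abs(lasso.coef_))[:k]
    # (ii) mutual information with the binary label
    mi = mutual_info_classif(X, y, random_state=0)
    idx_mi = np.argsort(-mi)[:k]
    # (iii) absolute Pearson correlation
    centered = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(centered, axis=0) * np.linalg.norm(yc) + 1e-12
    corr = (centered.T @ yc) / denom
    idx_corr = np.argsort(-np.abs(corr))[:k]
    return np.concatenate([idx_lasso, idx_mi, idx_corr])
```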
In the experiment section of the main text, the features in the Wb-S for natural language generation tasks include (i) all the features used in the Gb-S, (ii) the hidden activations of the last token of the question from the middle layer (LLaMA2-7B or LLaMA3-8B: 16th layer; Gemma-7B: 14th layer), and (iii) the hidden activations of the last token of the answer from the middle layer. Therefore, in these natural language generation tasks, the dimension is 8212 for LLaMA2-7B/LLaMA3-8B and 6164 for Gemma-7B.
The features in the Wb-S for the multiple-choice task include (i) all the features used in the Gb-S and (ii) the hidden activations of the last token of the answer (letter A, B, C, or D) from the middle layer. The dimension is 4101 for LLaMA2-7B/LLaMA3-8B and 3077 for Gemma-7B.
Notably, there are many choices of the hidden activations employed in the Wb-S. Besides what has been shown in Section B, we provide further discussion in Section E.
Bb-S. The idea of building a supervised calibration model for a black-box LLM is to use the hidden layers and output distributions of another open-source LLM, obtained by feeding it the question and the provided response. The features available for the Wb-S are thus also available from the open-source LLM, so the Bb-S simply takes the corresponding features from the open-source LLM. Hence, in the natural language generation tasks, the input dimension of the calibration model is 4116 (the hidden activations of the question and answer plus 20 entropy- and likelihood-related features, $2\times 2048+20$) for Gemma-2B, 6164 for Gemma-7B, 8212 for LLaMA2-7B/LLaMA3-8B, and 10260 for LLaMA2-13B. In the multiple-choice task, the dimension is 2053 for Gemma-2B (the hidden activations of the answer plus the 5 entropy- and probability-related features used in Gb-S), 3077 for Gemma-7B, 4101 for LLaMA2-7B/LLaMA3-8B, and 5125 for LLaMA2-13B.
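The input dimensions above all follow the same arithmetic; a small helper (ours, for illustration) makes the pattern explicit.

```python
def bb_s_input_dim(hidden_size: int, multiple_choice: bool = False) -> int:
    """Input dimension of the Bb-S calibration model given the open-source
    LLM's hidden size: question + answer activations plus the 20 grey-box
    features for generation tasks; only the answer activation plus the
    5 grey-box features for multiple choice."""
    if multiple_choice:
        return hidden_size + 5
    return 2 * hidden_size + 20
```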
For all these methods, we employ a random forest (Breiman, 2001), using the implementation from the scikit-learn package (Pedregosa et al., 2011), to estimate the uncertainty. The hyperparameters are set as `n_estimators=150, random_state=0, max_depth=8, verbose=2, max_features=45` if the number of selected features is at least 100, and `n_estimators=100, random_state=0, max_depth=4, verbose=2` otherwise.
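The stated hyperparameter choice can be sketched as a small factory function (the function name is ours); after fitting on the labeled features, the uncertainty score would be, e.g., `model.predict_proba(X)[:, 1]`.

```python
from sklearn.ensemble import RandomForestClassifier

def make_uncertainty_model(n_features: int) -> RandomForestClassifier:
    """Instantiate the random forest with the hyperparameters reported above,
    chosen according to the number of selected input features."""
    if n_features >= 100:
        return RandomForestClassifier(n_estimators=150, random_state=0,
                                      max_depth=8, verbose=2, max_features=45)
    return RandomForestClassifier(n_estimators=100, random_state=0,
                                  max_depth=4, verbose=2)
```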
Appendix E Additional results and visualizations
In Section B, we show the advantage of utilizing the hidden activations of the answer from the middle layer of the LLM to estimate the uncertainty in Wb-S. In this section, we further discuss the impact of employing the hidden activations from the question in the Wb-S.
The motivation stems from the following observation: within the transformer architecture, the hidden activation of the question's last token (referred to as the question's activation) is forwarded to produce the hidden activation of the answer's last token (referred to as the answer's activation), so the answer's activation already incorporates information from the question's activation. Nevertheless, it has been found that concatenating the question's activation with the answer's activation offers additional insight into the answer's uncertainty (Duan et al., 2024). We would like to further investigate the effectiveness of incorporating the question's activation along with the answer's activation into the supervised setting.
We experiment with three feature combinations in our supervised setting: (i) Question: we use the hidden activation of the last token of the question from the middle layer, together with the entropy- or probability-related features of the question (the 10 features listed in the right column of Table 5) for the natural language generation tasks, or with all the features in Gb-S otherwise; (ii) Answer: we use the hidden activation of the last token of the answer from the middle layer together with all the features used in Gb-S; (iii) Question-Answer: we use the last-token hidden activations of both the question and the answer from the middle layer together with all the features in Gb-S. We compare their performance with Gb-S in Figure 8 and present the following observations.
Question itself cannot capture enough uncertainty information. From Figure 8, we observe that the method Answer consistently outperforms Question across all these tasks. This implies that features relating to the question alone cannot provide enough information about the uncertainty of the answer. This aligns with the inferior performance of the sample-based method (Kuhn et al., 2023) tested in the earlier sections, whose uncertainty score estimates the language model's uncertainty about the question. The result suggests that the language model cannot capture the uncertainty from the question alone, without generating the answer.
Question's hidden activation does not add more uncertainty information. Again from Figure 8, by comparing the performance of Answer and Question-Answer, we find that including the question's activation has little impact on performance. This suggests that the uncertainty from the question is already well encoded in the last-token activation of the answer.
<details>
<summary>x9.png Details</summary>

### Visual Description
## Bar Chart: Features from Gemma-7B, LLAMA2-7B, and LLAMA3-8B
### Overview
The image presents three bar charts comparing the AUROC scores of different features extracted from three language models: Gemma-7B, LLAMA2-7B, and LLAMA3-8B. Each chart displays the AUROC scores for four tasks (MMLU, TriviaQA, CoQA, and WMT-14) across three feature types: Gb-S, Question, and Question-Answer.
### Components/Axes
* **Title:** Features from Gemma-7B (left), Features from LLAMA2-7B (center), Features from LLAMA3-8B (right)
* **Y-axis:** AUROC, ranging from 0.60 to 0.90 in increments of 0.05.
* **X-axis:** Categorical, representing the tasks: MMLU, TriviaQA, CoQA, WMT-14.
* **Legend:** Located at the bottom of the chart.
* Gb-S: White bars
* Question: Light green bars
* Answer: White bars with black diagonal stripes
* Question-Answer: Light green bars with black diagonal stripes
### Detailed Analysis
**Chart 1: Features from Gemma-7B**
* **MMLU:**
* Gb-S: ~0.78
* Question: ~0.76
* Question-Answer: ~0.83
* **TriviaQA:**
* Gb-S: ~0.87
* Question: ~0.72
* Question-Answer: ~0.89
* **CoQA:**
* Gb-S: ~0.74
* Question: ~0.76
* Question-Answer: ~0.77
* **WMT-14:**
* Gb-S: ~0.66
* Question: ~0.85
* Question-Answer: ~0.86
**Chart 2: Features from LLAMA2-7B**
* **MMLU:**
* Gb-S: ~0.72
* Question: ~0.70
* Question-Answer: ~0.73
* **TriviaQA:**
* Gb-S: ~0.81
* Question: ~0.77
* Question-Answer: ~0.90
* **CoQA:**
* Gb-S: ~0.62
* Question: ~0.67
* Question-Answer: ~0.79
* **WMT-14:**
* Gb-S: ~0.78
* Question: ~0.78
* Question-Answer: ~0.80
**Chart 3: Features from LLAMA3-8B**
* **MMLU:**
* Gb-S: ~0.78
* Question: ~0.79
* Question-Answer: ~0.83
* **TriviaQA:**
* Gb-S: ~0.86
* Question: ~0.72
* Question-Answer: ~0.88
* **CoQA:**
* Gb-S: ~0.72
* Question: ~0.77
* Question-Answer: ~0.78
* **WMT-14:**
* Gb-S: ~0.73
* Question: ~0.75
* Question-Answer: ~0.75
### Key Observations
* For all three models, TriviaQA generally yields the highest AUROC scores, especially for the Question-Answer feature.
* CoQA tends to have the lowest AUROC scores across all models and feature types.
* The Question-Answer feature generally performs better than the Question feature, and both are often better than Gb-S.
* LLAMA2-7B shows a particularly low AUROC score for CoQA across all feature types.
### Interpretation
The charts compare the performance of different features extracted from three language models on various tasks, as measured by AUROC. The results suggest that the type of feature used significantly impacts performance, with Question-Answer features generally outperforming Gb-S and Question features. The models also exhibit varying levels of success across different tasks, indicating that some tasks are inherently more challenging or better suited to the models' capabilities. The relatively low performance on CoQA across all models suggests that this task may require different or more sophisticated features. The high performance on TriviaQA, especially with Question-Answer features, indicates that these models are particularly adept at answering trivia questions when provided with both the question and answer information.
</details>
Figure 8: Performance comparison of using last-token middle layer hidden activations of the answer (Answer) or the concatenation of the question and answer (Question-Answer) as features in the Wb-S, where the features in Gb-S are also included in Wb-S. In the natural language generation tasks, the dimensions of Gb-S, Question, Answer, and Question-Answer for Gemma-7B are 20, 3082, 3092, and 6164, while for LLaMA2-7B or LLaMA3-8B they are 20, 4106, 4116, and 8212, respectively. In the MMLU task, for Gemma-7B they are 5, 3077, 3077, and 6149, while for LLaMA2-7B or LLaMA3-8B, they are 5, 4101, 4101, and 8197, respectively.
The middle layer is still better than the last layer. In Section B, Figure 3 shows that when using the hidden activation of the answer in the Wb-S, the middle layer of the LLM is a better choice than the last layer. The next question is: Does this conclusion still hold for using the concatenated hidden activations of the question and answer? We depict the experiment result in Figure 9, which is consistent with the conclusion drawn from Figure 3.
<details>
<summary>x10.png Details</summary>

### Visual Description
## Bar Chart: Feature Comparison of Language Models
### Overview
The image presents a series of bar charts comparing the AUROC (Area Under the Receiver Operating Characteristic curve) scores of different feature extraction methods from three language models: Gemma-7B, LLaMA2-7B, and LLaMA3-8B. The charts compare the performance on three tasks: TriviaQA, CoQA, and WMT-14. The feature extraction methods are "Avg token, mid layer", "Avg token, last layer", "Last token, mid layer", and "Last token, last layer".
### Components/Axes
* **Title:** The image is composed of three separate bar charts, each titled:
* "Features from Gemma-7B" (top-left)
* "Features from LLaMA2-7B" (top-center)
* "Features from LLaMA3-8B" (top-right)
* **Y-axis:** Labeled "AUROC" with a scale from approximately 0.74 to 0.91.
* **X-axis:** Categorical, representing the tasks: TriviaQA, CoQA, and WMT-14.
* **Legend:** Located at the bottom of the image, associating colors and patterns with feature extraction methods:
* Green with diagonal lines: "Avg token, mid layer"
* Red with diagonal lines: "Avg token, last layer"
* Green with circles: "Last token, mid layer"
* Red with circles: "Last token, last layer"
### Detailed Analysis
#### Features from Gemma-7B
* **TriviaQA:**
* Avg token, mid layer (green, diagonal lines): ~0.87
* Avg token, last layer (red, diagonal lines): ~0.86
* Last token, mid layer (green, circles): ~0.87
* Last token, last layer (red, circles): ~0.86
* **CoQA:**
* Avg token, mid layer (green, diagonal lines): ~0.76
* Avg token, last layer (red, diagonal lines): ~0.75
* Last token, mid layer (green, circles): ~0.76
* Last token, last layer (red, circles): ~0.75
* **WMT-14:**
* Avg token, mid layer (green, diagonal lines): ~0.86
* Avg token, last layer (red, diagonal lines): ~0.85
* Last token, mid layer (green, circles): ~0.86
* Last token, last layer (red, circles): ~0.85
#### Features from LLaMA2-7B
* **TriviaQA:**
* Avg token, mid layer (green, diagonal lines): ~0.89
* Avg token, last layer (red, diagonal lines): ~0.89
* Last token, mid layer (green, circles): ~0.90
* Last token, last layer (red, circles): ~0.89
* **CoQA:**
* Avg token, mid layer (green, diagonal lines): ~0.80
* Avg token, last layer (red, diagonal lines): ~0.80
* Last token, mid layer (green, circles): ~0.80
* Last token, last layer (red, circles): ~0.80
* **WMT-14:**
* Avg token, mid layer (green, diagonal lines): ~0.76
* Avg token, last layer (red, diagonal lines): ~0.76
* Last token, mid layer (green, circles): ~0.78
* Last token, last layer (red, circles): ~0.78
#### Features from LLaMA3-8B
* **TriviaQA:**
* Avg token, mid layer (green, diagonal lines): ~0.85
* Avg token, last layer (red, diagonal lines): ~0.85
* Last token, mid layer (green, circles): ~0.85
* Last token, last layer (red, circles): ~0.85
* **CoQA:**
* Avg token, mid layer (green, diagonal lines): ~0.76
* Avg token, last layer (red, diagonal lines): ~0.76
* Last token, mid layer (green, circles): ~0.76
* Last token, last layer (red, circles): ~0.76
* **WMT-14:**
* Avg token, mid layer (green, diagonal lines): ~0.73
* Avg token, last layer (red, diagonal lines): ~0.73
* Last token, mid layer (green, circles): ~0.74
* Last token, last layer (red, circles): ~0.74
### Key Observations
* For all three models, performance on TriviaQA is generally higher than on CoQA and WMT-14.
* The choice of layer (mid vs. last) and token aggregation (avg vs. last) has a relatively small impact on AUROC scores within each task and model.
* LLaMA2-7B generally shows the highest AUROC scores, especially on TriviaQA.
* LLaMA3-8B shows the lowest AUROC scores on WMT-14.
### Interpretation
The bar charts provide a comparative analysis of feature extraction methods from different language models based on their AUROC scores on various tasks. The data suggests that the LLaMA2-7B model performs slightly better overall compared to Gemma-7B and LLaMA3-8B. The performance differences between using the average token versus the last token, and the mid-layer versus the last layer, are relatively minor, indicating that the choice of feature extraction method is not as critical as the choice of the language model itself. The lower scores on CoQA and WMT-14 across all models suggest that these tasks are more challenging for the models compared to TriviaQA.
</details>
Figure 9: Performance comparison of using question-answer concatenated hidden activations from different tokens and layers as features in the Wb-S method. Scores are normalized in [0,1], where a lower value indicates larger uncertainty. For Gemma-7B, the dimension of the Wb-S input is 6164 (3072 from the question, 3072 from the answer, and 20 from the grey-box features). For LLaMA2-7B/LLaMA3-8B, it is 8212.
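The input assembly described in the caption can be sketched as a plain concatenation of the three feature groups; the dimensions below follow the caption, but the code is an illustration rather than the paper's implementation.

```python
# Sketch of assembling the Wb-S input from Figure 9: question hidden
# activations + answer hidden activations + grey-box features.
# Dimensions follow the caption; the plain-list representation is an
# assumption for illustration.

def build_wbs_input(question_hidden, answer_hidden, grey_box_features):
    """Concatenate question/answer hidden activations with grey-box features."""
    return list(question_hidden) + list(answer_hidden) + list(grey_box_features)

# Gemma-7B: 3072 + 3072 + 20 = 6164 input dimensions.
x_gemma = build_wbs_input([0.0] * 3072, [0.0] * 3072, [0.0] * 20)
assert len(x_gemma) == 6164

# LLaMA2-7B / LLaMA3-8B: 4096 + 4096 + 20 = 8212 input dimensions.
x_llama = build_wbs_input([0.0] * 4096, [0.0] * 4096, [0.0] * 20)
assert len(x_llama) == 8212
```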
Our method better characterizes the uncertainty. We find that the grey-box and white-box features enhance the ability to characterize the dataset, so that the distribution of the generated output's uncertainty score correlates better with the output's correctness. In Figure 10, we observe that with black-box features, the uncertainty score distributions for true and false answers are not clearly distinguishable, and the true answers' distribution is close to uniform. With grey-box and white-box features, the uncertainty score distributions for true and false answers are more clearly separated. These results show that the supervised learning approach not only achieves a better AUROC but also learns to better separate the distributions of the uncertainty scores.
<details>
<summary>x11.png Details</summary>

### Visual Description
## Histogram: Distribution of US Values for True and False Answers
### Overview
The image consists of four histograms displayed side-by-side. Each histogram shows the distribution of "US" values for "true answer" and "false answer" categories. The x-axis represents the "US" value, ranging from 0.0 to 1.0, and the y-axis represents the number of samples. The histograms are distinguished by different "US of" categories: Entropy, Bb-S, Gb-S, and Wb-S. The "true answer" data is represented in blue, and the "false answer" data is represented in red.
### Components/Axes
* **Y-axis:** "# Samples" ranging from 0 to 150, with increments of 25.
* **X-axis:** "US of [Category]" ranging from 0.0 to 1.0, with increments of 0.2.
* **Legend (Top):**
* Blue: "true answer"
* Red: "false answer"
* **Histograms (Left to Right):**
1. "US of Entropy"
2. "US of Bb-S"
3. "US of Gb-S"
4. "US of Wb-S"
### Detailed Analysis
**1. US of Entropy**
* **True Answer (Blue):** The distribution is bimodal. There is a peak around US = 0.5 with approximately 25 samples, and another peak near US = 0.9 with approximately 40 samples.
* **False Answer (Red):** The distribution is skewed to the left, with a peak around US = 0.1 with approximately 50 samples.
**2. US of Bb-S**
* **True Answer (Blue):** The distribution is bimodal. There is a peak around US = 0.8 with approximately 30 samples, and another peak near US = 1.0 with approximately 35 samples.
* **False Answer (Red):** The distribution is skewed to the left, with a peak around US = 0.1 with approximately 90 samples.
**3. US of Gb-S**
* **True Answer (Blue):** The distribution is skewed to the right, with a peak around US = 0.9 with approximately 30 samples.
* **False Answer (Red):** The distribution is skewed to the left, with a peak around US = 0.1 with approximately 100 samples.
**4. US of Wb-S**
* **True Answer (Blue):** The distribution is skewed to the right, with a peak around US = 0.9 with approximately 40 samples.
* **False Answer (Red):** The distribution is heavily skewed to the left, with a sharp peak around US = 0.1 with approximately 160 samples.
### Key Observations
* For "Entropy", the "true answer" distribution has two distinct peaks, while the "false answer" distribution is skewed towards lower US values.
* For "Bb-S", "Gb-S", and "Wb-S", the "false answer" distributions are heavily concentrated at low US values (around 0.1), while the "true answer" distributions are concentrated at high US values (around 0.8-1.0).
* The "Wb-S" histogram shows the most pronounced separation between "true answer" and "false answer" distributions.
### Interpretation
The histograms suggest that the "US" values for "Bb-S", "Gb-S", and "Wb-S" are good indicators of whether an answer is true or false. Low US values are strongly associated with false answers, while high US values are associated with true answers. The "Entropy" US value is less discriminatory, as the "true answer" distribution has significant overlap with the "false answer" distribution. The "Wb-S" metric appears to be the most effective at distinguishing between true and false answers, given the clear separation of the distributions. This information could be used to improve the accuracy of a system that predicts the correctness of answers based on these "US" values.
</details>
Figure 10: Uncertainty scores of different methods on the MMLU dataset for answers provided by the Gemma-7B model, where scores are normalized in [0,1], and US is short for uncertainty score. False answer refers to the sample where the choice assigned with maximum probability by the LLM is false, while true answer represents the sample answered correctly.
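The two quantities discussed around Figure 10, min-max normalization of scores into [0,1] and AUROC as a measure of how well the scores separate true from false answers, can be sketched as follows; this is an illustration, not the paper's implementation.

```python
# Illustrative sketch of score normalization and AUROC computation.
# Convention from the caption: scores lie in [0, 1], and a lower value
# indicates larger uncertainty.

def minmax_normalize(scores):
    """Rescale raw scores into [0, 1] via min-max normalization."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def auroc(scores, labels):
    """Probability that a random true answer (label 1) outscores a random
    false answer (label 0), with ties counting 0.5; 1.0 means perfect
    separation of the two distributions, 0.5 means no separation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Well-separated distributions, as in the Wb-S panel, yield AUROC near 1.
assert auroc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]) == 1.0
```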
Appendix F Examples
In this section, we show examples of wrong answers generated by the LLM and explore how different methods capture the LLM's uncertainty. The wrong answers are selected from the samples where the LLM makes wrong predictions.
Since we let the LLM output the greedy answer, which may be wrong, we expect an ideal uncertainty estimation model to output a high confidence score when the LLM generates the correct answer and a low confidence score when it outputs a wrong answer. Examining different wrong answers generated by the LLM, we note that although our approach sometimes gives a high confidence score to a wrong answer, at other times it shows desirable properties, such as assigning higher scores to better answers and low scores when the LLM does not know the answer.
Our illustrative examples are generated as follows: for questions where the LLM's greedy response is incorrect, we also extract the correct answer from the dataset, along with additional answers randomly generated by the LLM with lower probabilities than the greedy answer. For each of these answers, we compute the corresponding metrics and features so that we can observe how they behave across different outputs. We conduct this experiment on the test set of TriviaQA, where both questions and answers are short. We summarize the ways that our uncertainty estimation model behaves as follows:
- Confidently support a wrong answer. The LLM is confident that the wrong greedy answer is true and receives a high confidence score, while the correct answer receives a low score, suggesting a lack of knowledge about these questions. We give examples from LLaMA2-7B and Gemma-7B in Figures 11 and 12. Note that in both examples, our method assigns a low uncertainty score to the correct answer and a much higher uncertainty score to the wrong answer. In contrast, the unsupervised grey-box methods assign higher uncertainty scores to the correct answer.
- Confidently reject a wrong answer. We give examples from LLaMA2-7B and Gemma-7B in Figures 13 and 14. The uncertainty estimation model assigns a higher score to the true answer, or to answers that are better than the wrong greedy answer. This means that for these questions, our model knows which answer is better and can assign uncertainty scores accordingly. In contrast, the unsupervised methods tend to assign much higher uncertainty scores to the greedy (wrong) answer.
- Unconfident about any answer. Due to a lack of knowledge, the LLM may not know the true answer; we show examples in Figures 15 and 16. In these examples, the model assigns nearly identical uncertainty scores to all the generated answers, including the true answer. In this scenario, the uncertainty estimation model is uncertain about the correctness of any answer. Interestingly, the unsupervised methods exhibit similar behavior, also assigning nearly identical scores across answers, albeit at much higher levels. This differs from the previous two cases, where the unsupervised methods behaved differently from our uncertainty estimation model.
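The three behaviors above can be sketched as a simple rule over the scores assigned to the greedy and reference answers. The rule and the threshold `low` are hypothetical, not from the paper; the example values are the Wb-S scores reported in Figures 11, 13, and 15.

```python
# Hypothetical rule for sorting examples into the three behaviors above.
# Scores follow the paper's convention: higher value = more confident.
# The threshold `low` is illustrative, not taken from the paper.

def classify_behavior(score_greedy, score_ref, low=0.3):
    if score_greedy < low and score_ref < low:
        return "unconfident about any answer"        # e.g. Figures 15-16
    if score_greedy > score_ref:
        return "confidently support a wrong answer"  # e.g. Figures 11-12
    return "confidently reject a wrong answer"       # e.g. Figures 13-14

# Wb-S scores (greedy, reference) from Figures 11, 13, and 15:
assert classify_behavior(0.83, 0.31) == "confidently support a wrong answer"
assert classify_behavior(0.14, 0.33) == "confidently reject a wrong answer"
assert classify_behavior(0.09, 0.09) == "unconfident about any answer"
```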
<details>
<summary>x12.png Details</summary>

### Visual Description
## Chart/Diagram Type: Performance Comparison Table
### Overview
The image presents a performance comparison table for a Language Model (LLaMA2-7B) when answering the question: "Who had a 70s No 1 hit with Billy, Don't Be A Hero?". The table compares the reference answer, the greedy answer from the model, and three other possible answers generated by the model. The comparison is based on several metrics, including Rouge-1, Max Prob, Avg Prob, Max Ent, Avg Ent, Gb-S, Wb-S, Bb-S, SU, and Ask4-conf.
### Components/Axes
* **Title:** An example of a confidently wrong answer (LM: LLaMA2-7B)
* **Question:** Who had a 70s No 1 hit with Billy, Don't Be A Hero?
* **Ref answer:** Bo Donaldson & The Heywoods
* **Greedy answer:** Paper Lace
* **Answer 1:** Bo Donaldson
* **Answer 2:** Paperchaser
* **Answer 3:** Paper Moon
* **Columns:**
* Rouge-1
* Max Prob
* Avg Prob
* Max Ent
* Avg Ent
* Gb-S
* Wb-S
* Bb-S
* SU
* Ask4-conf
* **Rows:**
* Ref answer
* Greedy answer
* Answer 1
* Answer 2
* Answer 3
### Detailed Analysis or ### Content Details
The table contains the following data:
| | Rouge-1 | Max Prob | Avg Prob | Max Ent | Avg Ent | Gb-S | Wb-S | Bb-S | SU | Ask4-conf |
| :-------------------- | :------ | :------- | :------- | :------ | :------ | :--- | :--- | :--- | :--- | :-------- |
| **Ref answer** | 1 | 0.13 | 0.94 | 0.82 | 0.94 | 0.21 | 0.31 | | | |
| **Greedy answer** | 0 | 0.79 | 0.99 | 0.86 | 0.94 | 0.82 | 0.83 | 0.72 | 0.31 | 0 |
| **Answer 1** | 0.67 | 0.13 | 0.9 | 0.82 | 0.9 | 0.1 | 0.25 | | | |
| **Answer 2** | 0 | 0 | 0.81 | 0.7 | 0.82 | 0.08 | 0.12 | | | |
| **Answer 3** | 0 | 0 | 0.82 | 0.86 | 0.89 | 0.1 | 0.2 | | | |
* **Ref answer:** Rouge-1 score is 1, Max Prob is 0.13, Avg Prob is 0.94, Max Ent is 0.82, Avg Ent is 0.94, Gb-S is 0.21, and Wb-S is 0.31.
* **Greedy answer:** Rouge-1 score is 0, Max Prob is 0.79, Avg Prob is 0.99, Max Ent is 0.86, Avg Ent is 0.94, Gb-S is 0.82, Wb-S is 0.83, Bb-S is 0.72, SU is 0.31, and Ask4-conf is 0.
* **Answer 1:** Rouge-1 score is 0.67, Max Prob is 0.13, Avg Prob is 0.9, Max Ent is 0.82, Avg Ent is 0.9, Gb-S is 0.1, and Wb-S is 0.25.
* **Answer 2:** Rouge-1 score is 0, Max Prob is 0, Avg Prob is 0.81, Max Ent is 0.7, Avg Ent is 0.82, Gb-S is 0.08, and Wb-S is 0.12.
* **Answer 3:** Rouge-1 score is 0, Max Prob is 0, Avg Prob is 0.82, Max Ent is 0.86, Avg Ent is 0.89, Gb-S is 0.1, and Wb-S is 0.2.
### Key Observations
* The "Ref answer" has the highest Rouge-1 score (1), indicating the best match with the reference.
* The "Greedy answer" has a high Max Prob (0.79) and Avg Prob (0.99), but a Rouge-1 score of 0, suggesting it's confidently incorrect.
* "Answer 1" has a relatively high Rouge-1 score (0.67) compared to "Answer 2" and "Answer 3".
* "Answer 2" and "Answer 3" have Max Prob values of 0.
### Interpretation
The data demonstrates a scenario where the language model (LLaMA2-7B) provides a "confidently wrong" answer. The "Greedy answer" has high probability scores (Max Prob and Avg Prob) but fails to match the reference answer (Rouge-1 score of 0). This suggests the model is confident in its incorrect answer. The other generated answers ("Answer 1", "Answer 2", "Answer 3") also show varying degrees of accuracy, with "Answer 1" being the closest to the reference based on the Rouge-1 score. The table highlights the importance of evaluating language models not only on probability scores but also on the accuracy of their responses.
</details>
Figure 11: An example where LLaMA2-7B confidently supports a wrong answer on the TriviaQA dataset. Scores are normalized in $[0,1]$, where a lower value indicates larger uncertainty. Every uncertainty estimation method assigns the greedy answer a higher score than the true answer, yet the greedy answer is incorrect: the UK band Paper Lace did release a version of "Billy, Don't Be A Hero" in 1974, but it was the version by Bo Donaldson & The Heywoods (a U.S. band), released the same year, that topped the charts as a No. 1 hit.
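The example tables report per-answer likelihood features such as Max Prob and Avg Prob. This appendix does not reproduce their exact definitions, so the sketch below is one plausible reading, offered as an assumption for illustration: a joint sequence probability and its length-normalized (geometric-mean) counterpart, which stays high for long answers whose individual tokens are each likely.

```python
import math

# Hedged sketch of likelihood-based answer features; both definitions are
# assumptions for illustration, not necessarily the paper's exact metrics.

def joint_prob(token_probs):
    """Joint probability of the generated answer: product of token probabilities."""
    return math.prod(token_probs)

def avg_prob(token_probs):
    """Length-normalized (geometric-mean) token probability."""
    return joint_prob(token_probs) ** (1.0 / len(token_probs))

# A long answer of individually likely tokens: low joint prob, high avg prob.
probs = [0.94] * 33
assert joint_prob(probs) < 0.15 and avg_prob(probs) > 0.9
```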
<details>
<summary>x13.png Details</summary>

### Visual Description
## Chart/Diagram Type: Data Table with Question/Answer Context
### Overview
The image presents an example of a confidently wrong answer generated by a language model (LM: Gemma-7B). It includes a question, the reference answer, the model's "greedy" answer, and two additional answers. A table provides various metrics for each answer, including Rouge-1 score, maximum probability (Max Prob), average probability (Avg Prob), maximum entropy (Max Ent), average entropy (Avg Ent), and several other metrics (Gb-S, Wb-S, Bb-S, SU, Ask4-conf).
### Components/Axes
* **Title:** An example of a confidently wrong answer (LM: Gemma-7B)
* **Question:** Which sitcom starred Leonard Rossiter in the role of a supermarket manager?
* **Ref answer:** Tripper's Day
* **Greedy answer:** Rising Damp
* **Answer 1:** Rising Damp.
* **Answer 2:** The Rise and Fall of Reginald Perrin
* **Table Headers:**
* Rouge-1
* Max Prob
* Avg Prob
* Max Ent
* Avg Ent
* Gb-S
* Wb-S
* Bb-S
* SU
* Ask4-conf
* **Table Rows:**
* Ref answer
* Greedy answer
* Answer 1
* Answer 2
### Detailed Analysis or ### Content Details
The table presents the following data:
| | Rouge-1 | Max Prob | Avg Prob | Max Ent | Avg Ent | Gb-S | Wb-S | Bb-S | SU | Ask4-conf |
| :-------------------- | ------: | -------: | -------: | ------: | ------: | ---: | ---: | ---: | ---: | --------: |
| **Ref answer** | 1 | 0.00 | 0.66 | 0.70 | 0.74 | 0.14 | 0.15 | 0.24 | | |
| **Greedy answer** | 0 | 0.76 | 0.99 | 0.90 | 0.94 | 0.93 | 0.86 | 0.89 | 0.46 | 1 |
| **Answer 1** | 0 | 0.02 | 0.87 | 0.81 | 0.88 | 0.60 | 0.40 | 0.86 | | |
| **Answer 2** | 0 | 0.05 | 0.91 | 0.89 | 0.93 | 0.68 | 0.46 | 0.64 | | |
### Key Observations
* The "Ref answer" has a Rouge-1 score of 1, indicating it's the reference.
* The "Greedy answer" has a high average probability (0.99) and a high Ask4-conf score of 1, suggesting the model is very confident in this (incorrect) answer.
* "Answer 1" and "Answer 2" have lower maximum probabilities but relatively high average probabilities.
### Interpretation
The data demonstrates a scenario where a language model confidently provides an incorrect answer. The high "Avg Prob" and "Ask4-conf" values for the "Greedy answer" indicate that the model is highly certain about its response, despite it being wrong. This highlights a potential issue with language models: they can be confidently incorrect. The other metrics provide further insight into the characteristics of the different answers, such as their entropy and similarity to the reference answer. The Rouge-1 score confirms that only the reference answer matches the expected response.
</details>
Figure 12: An example where Gemma-7B assigns a high confidence score to a wrong answer. Leonard Rossiter starred in "Rising Damp" as a landlord, not as a supermarket manager.
<details>
<summary>x14.png Details</summary>

### Visual Description
## Table: LM Answer Identification Example
### Overview
The image presents an example of how a Language Model (LM), specifically LLaMA2-7B, identifies the better answer to a question. It includes the question, a reference answer, a greedy answer, and two other possible answers. A table provides metrics for each answer, including Rouge-1 score, maximum probability, average probability, maximum entropy, average entropy, Gb-S, Wb-S, Bb-S, SU, and Ask4-conf.
### Components/Axes
* **Title:** An example that the LM identifies the better answer (LM: LLaMA2-7B)
* **Question:** Which musical featured the songs A Secretary is Not A Toy, and The Company Way?
* **Answers:**
* Ref answer: How to Succeed in Business Without Really Trying
* Greedy answer: The Pajama Game
* Answer 1: How to Succeed In Business Without Really Trying
* Answer 2: The Company Way
* **Table Headers:**
* Rouge-1
* Max Prob
* Avg Prob
* Max Ent
* Avg Ent
* Gb-S
* Wb-S
* Bb-S
* SU
* Ask4-conf
* **Table Rows:**
* Ref answer
* Greedy answer
* Answer 1
* Answer 2
### Detailed Analysis or ### Content Details
The table presents the following data:
| | Rouge-1 | Max Prob | Avg Prob | Max Ent | Avg Ent | Gb-S | Wb-S | Bb-S | SU | Ask4-conf |
| :-------------------- | :------ | :------- | :------- | :------ | :------ | :--- | :--- | :--- | :--- | :-------- |
| Ref answer | 1 | 0.12 | 0.96 | 0.43 | 0.93 | 0.23 | 0.33 | | | |
| Greedy answer | 0 | 0.12 | 0.9 | 0.37 | 0.82 | 0.09 | 0.14 | 0.33 | 0.08 | 0 |
| Answer 1 | 1 | 0.08 | 0.93 | 0.43 | 0.94 | 0.14 | 0.22 | | | |
| Answer 2 | 0 | 0.01 | 0.78 | 0.37 | 0.6 | 0.08 | 0.13 | | | |
### Key Observations
* The "Ref answer" and "Answer 1" have the highest Rouge-1 scores (1), indicating they are the closest to the reference answer based on the Rouge-1 metric.
* The "Ref answer" has the highest average probability (0.96).
* The "Greedy answer" has the lowest Ask4-conf score (0).
* "Answer 2" has the lowest Max Prob (0.01) and Avg Prob (0.78)
### Interpretation
The table provides a quantitative comparison of different answers generated by the LM against a reference answer. The metrics suggest that the LM identifies "Ref answer" and "Answer 1" as better answers, as indicated by their higher Rouge-1 scores and average probabilities. The "Greedy answer" and "Answer 2" perform worse according to these metrics. The data demonstrates how different metrics can be used to evaluate the quality of answers generated by a language model.
</details>
Figure 13: An example that LLaMA2-7B can successfully identify the better answer (by attaching a higher score). Scores are normalized in [0,1], where a lower value indicates larger uncertainty.
<details>
<summary>x15.png Details</summary>

### Visual Description
## Chart/Diagram Type: Table with Question/Answer Context
### Overview
The image presents a table comparing the performance of different answers generated by a Language Model (LM), Gemma-7B, against a reference answer. The context is a question about the science related to the behavior of sound in rooms and concert halls. The table provides various metrics for each answer, including ROUGE-1 score, maximum probability (Max Prob), average probability (Avg Prob), maximum entropy (Max Ent), average entropy (Avg Ent), and several other metrics denoted by abbreviations (Gb-S, Wb-S, Bb-S, SU, Ask4-conf).
### Components/Axes
* **Header:** "An example that the LM identifies the better answer (LM: Gemma-7B)"
* **Question Context:**
* Question: "The behavior of sound in rooms and concert halls is a separate science. what is its name?"
* Ref answer: Acoustics
* Greedy answer: Acoustical
* Answer 1: Acoustical Engineering
* Answer 2: Acoustiics
* **Table Columns (Metrics):**
* Rouge-1
* Max Prob
* Avg Prob
* Max Ent
* Avg Ent
* Gb-S
* Wb-S
* Bb-S
* SU
* Ask4-conf
* **Table Rows (Answers):**
* Ref answer
* Greedy answer
* Answer 1
* Answer 2
### Detailed Analysis or ### Content Details
The table contains the following data:
| | Rouge-1 | Max Prob | Avg Prob | Max Ent | Avg Ent | Gb-S | Wb-S | Bb-S | SU | Ask4-conf |
| :-------------------- | :------ | :------- | :------- | :------ | :------ | :--- | :--- | :--- | :--- | :-------- |
| Ref answer | 1 | 0.45 | 0.96 | 0.86 | 0.88 | 0.64 | 0.73 | 0.93 | | |
| Greedy answer | 0 | 0.41 | 0.95 | 0.79 | 0.84 | 0.50 | 0.51 | 0.29 | 0.28 | 1 |
| Answer 1 | 0 | 0.28 | 0.94 | 0.79 | 0.83 | 0.39 | 0.44 | 0.33 | | |
| Answer 2 | 0 | 0.04 | 0.86 | 0.69 | 0.80 | 0.16 | 0.25 | 0.39 | | |
* **Ref answer (Acoustics):**
* Rouge-1: 1
* Max Prob: 0.45
* Avg Prob: 0.96
* Max Ent: 0.86
* Avg Ent: 0.88
* Gb-S: 0.64
* Wb-S: 0.73
* Bb-S: 0.93
* **Greedy answer (Acoustical):**
* Rouge-1: 0
* Max Prob: 0.41
* Avg Prob: 0.95
* Max Ent: 0.79
* Avg Ent: 0.84
* Gb-S: 0.50
* Wb-S: 0.51
* Bb-S: 0.29
* SU: 0.28
* Ask4-conf: 1
* **Answer 1 (Acoustical Engineering):**
* Rouge-1: 0
* Max Prob: 0.28
* Avg Prob: 0.94
* Max Ent: 0.79
* Avg Ent: 0.83
* Gb-S: 0.39
* Wb-S: 0.44
* Bb-S: 0.33
* **Answer 2 (Acoustiics):**
* Rouge-1: 0
* Max Prob: 0.04
* Avg Prob: 0.86
* Max Ent: 0.69
* Avg Ent: 0.80
* Gb-S: 0.16
* Wb-S: 0.25
* Bb-S: 0.39
### Key Observations
* The "Ref answer" (Acoustics) has a ROUGE-1 score of 1, indicating a perfect match with the reference.
* The "Greedy answer" (Acoustical) has a high average probability (0.95) but a ROUGE-1 score of 0.
* "Answer 2" (Acoustiics) has the lowest maximum probability (0.04) among all answers.
* The "Ref answer" has the highest Bb-S score (0.93).
### Interpretation
The table demonstrates a comparison of different answers generated by the LM (Gemma-7B) to a specific question. The metrics provide insights into the quality and relevance of each answer. The ROUGE-1 score highlights the exact match of the "Ref answer," while other metrics like "Max Prob" and "Avg Prob" indicate the model's confidence in its generated answers. The "Greedy answer," although having a high average probability, fails to match the reference answer, as indicated by its ROUGE-1 score of 0. This suggests that the model might be generating answers that are probable but not necessarily correct. The other answers ("Answer 1" and "Answer 2") also have ROUGE-1 scores of 0, indicating they are incorrect. The Ask4-conf value of 1 for the Greedy answer is interesting, and its meaning is not clear from the context.
</details>
Figure 14: An example that Gemma-7B can successfully identify the better answer (by attaching a higher score). Scores are normalized in [0,1], where a lower value indicates larger uncertainty.
<details>
<summary>x16.png Details</summary>

### Visual Description
## Example Analysis: Language Model Answer Evaluation
### Overview
The image presents an example where a Language Model (LM), specifically LLaMA2-7B, fails to provide the correct answer to a question. It includes the question, the reference answer, the LM's greedy answer, and two other possible answers. A table provides various metrics for each answer, including Rouge-1 score, maximum probability, average probability, maximum entropy, average entropy, and other statistical measures.
### Components/Axes
* **Title:** "An example that the LM does not know the answer (LM: LLaMA2-7B)"
* **Question:** "Who played Sandy Richardson in the British tv series 'Crossroads'?"
* **Reference Answer:** "Roger Tonge"
* **Greedy Answer:** "Noel Clarke"
* **Answer 1:** "Mike Pratt"
* **Answer 2:** "Lucy Carless"
* **Table Headers:**
* Rouge-1
* Max Prob
* Avg Prob
* Max Ent
* Avg Ent
* Gb-S
* Wb-S
* Bb-S
* SU
* Ask4-conf
* **Table Rows:**
* Ref answer
* Greedy answer
* Answer 1
* Answer 2
### Detailed Analysis or ### Content Details
**Table Data:**
| Metric | Ref answer | Greedy answer | Answer 1 | Answer 2 |
| ----------- | ---------- | ------------- | -------- | -------- |
| Rouge-1 | 1 | 0 | 0 | 0 |
| Max Prob | 0.01 | 0.16 | 0.01 | 0 |
| Avg Prob | 0.78 | 0.89 | 0.82 | 0.71 |
| Max Ent | 0.28 | 0.28 | 0.28 | 0.28 |
| Avg Ent | 0.71 | 0.75 | 0.73 | 0.63 |
| Gb-S | 0.08 | 0.08 | 0.08 | 0.08 |
| Wb-S | 0.09 | 0.09 | 0.09 | 0.08 |
| Bb-S | N/A | 0.23 | N/A | N/A |
| SU | N/A | 0 | N/A | N/A |
| Ask4-conf | N/A | 0 | N/A | N/A |
* **Rouge-1:** The reference answer has a perfect score of 1, while all other answers have a score of 0.
* **Max Prob:** The greedy answer has the highest maximum probability at 0.16. The reference answer and Answer 1 both have a max probability of 0.01, while Answer 2 has a max probability of 0.
* **Avg Prob:** The greedy answer has the highest average probability at 0.89. Answer 1 has an average probability of 0.82, the reference answer has 0.78, and Answer 2 has 0.71.
* **Max Ent:** All answers have the same maximum entropy of 0.28.
* **Avg Ent:** The greedy answer has the highest average entropy at 0.75. Answer 1 has an average entropy of 0.73, the reference answer has 0.71, and Answer 2 has 0.63.
* **Gb-S:** All answers have the same Gb-S score of 0.08.
* **Wb-S:** The reference answer, greedy answer, and Answer 1 all have a Wb-S score of 0.09, while Answer 2 has a score of 0.08.
* **Bb-S:** The greedy answer has a Bb-S score of 0.23.
* **SU:** The greedy answer has an SU score of 0.
* **Ask4-conf:** The greedy answer has an Ask4-conf score of 0.
### Key Observations
* The LM's "greedy answer" (Noel Clarke) has a Rouge-1 score of 0, indicating it's completely incorrect.
* The "greedy answer" has the highest Max Prob and Avg Prob, suggesting the LM was most confident in this incorrect answer.
* The reference answer has a perfect Rouge-1 score of 1, as expected.
### Interpretation
The data demonstrates a failure case for the LLaMA2-7B model. Despite having relatively high average and maximum probabilities for its "greedy answer," the model failed to provide the correct answer to the question. This highlights the limitations of relying solely on probability scores for evaluating the correctness of LM-generated answers. The Rouge-1 score accurately reflects the correctness of the reference answer and the incorrectness of the other answers. The other metrics (entropy, Gb-S, Wb-S, Bb-S, SU, Ask4-conf) provide additional information about the characteristics of the answers, but the Rouge-1 score is the most direct indicator of accuracy in this case.
</details>
Figure 15: An example where LLaMA2-7B does not know the true answer. Scores are normalized in [0,1], where a lower value indicates larger uncertainty. The LM does not know the true answer and guesses by generating different names, all with low confidence scores; the score remains low even for the true answer.
<details>
<summary>x17.png Details</summary>

### Visual Description
## Table: Language Model Uncertainty Estimation Failure Example
### Overview
The image presents a table that exemplifies a failure case in estimating uncertainty using the Gemma-7B language model. It shows the model's responses to a question about a film, comparing a reference answer with a greedy answer and two other generated answers. The table provides various metrics for each answer, including Rouge-1 score, maximum probability, average probability, maximum entropy, average entropy, and other scores (Gb-S, Wb-S, Bb-S, SU, Ask4-conf).
### Components/Axes
* **Title:** An example of the failure in estimating the uncertainty (LM: Gemma-7B)
* **Question:** What is the name of the colliery in the 1939 film 'The Stars Look Down'?
* **Answers:**
* Ref answer: Neptune Colliery
* Greedy answer: The Black Diamond
* Answer 1: Oakwood Colliery
* Answer 2: Northmoor Colliery
* **Table Headers (Columns):**
* Rouge-1
* Max Prob
* Avg Prob
* Max Ent
* Avg Ent
* Gb-S
* Wb-S
* Bb-S
* SU
* Ask4-conf
* **Table Rows:**
* Ref answer
* Greedy answer
* Answer 1
* Answer 2
### Detailed Analysis or ### Content Details
The table presents the following data:
| | Rouge-1 | Max Prob | Avg Prob | Max Ent | Avg Ent | Gb-S | Wb-S | Bb-S | SU | Ask4-conf |
| :-------------------- | :------ | :------- | :------- | :------ | :------ | :---- | :---- | :---- | :---- | :-------- |
| **Ref answer** | 1 | 0 | 0.62 | 0.19 | 0.65 | 0.10 | 0.13 | 0.23 | N/A | N/A |
| **Greedy answer** | 0 | 0.02 | 0.72 | 0.18 | 0.20 | 0.10 | 0.10 | 0.12 | 0 | 1 |
| **Answer 1** | 0 | 0 | 0.73 | 0.18 | 0.57 | 0.10 | 0.11 | 0.18 | N/A | N/A |
| **Answer 2** | 0 | 0 | 0.73 | 0.18 | 0.53 | 0.10 | 0.12 | 0.19 | N/A | N/A |
* **Ref answer:**
* Rouge-1: 1
* Max Prob: 0
* Avg Prob: 0.62
* Max Ent: 0.19
* Avg Ent: 0.65
* Gb-S: 0.10
* Wb-S: 0.13
* Bb-S: 0.23
* **Greedy answer:**
* Rouge-1: 0
* Max Prob: 0.02
* Avg Prob: 0.72
* Max Ent: 0.18
* Avg Ent: 0.20
* Gb-S: 0.10
* Wb-S: 0.10
* Bb-S: 0.12
* SU: 0
* Ask4-conf: 1
* **Answer 1:**
* Rouge-1: 0
* Max Prob: 0
* Avg Prob: 0.73
* Max Ent: 0.18
* Avg Ent: 0.57
* Gb-S: 0.10
* Wb-S: 0.11
* Bb-S: 0.18
* **Answer 2:**
* Rouge-1: 0
* Max Prob: 0
* Avg Prob: 0.73
* Max Ent: 0.18
* Avg Ent: 0.53
* Gb-S: 0.10
* Wb-S: 0.12
* Bb-S: 0.19
### Key Observations
* The "Ref answer" has a Rouge-1 score of 1, indicating it's the correct answer.
* The "Greedy answer" has a higher average probability (0.72) than the "Ref answer" (0.62), yet it's incorrect (Rouge-1 score of 0).
* "Answer 1" and "Answer 2" have the highest average probability (0.73), but are also incorrect.
* The maximum entropy is relatively consistent across all answers.
* Gb-S is the same for all answers.
### Interpretation
The data suggests that the Gemma-7B language model, in this specific instance, fails to accurately estimate uncertainty. Despite the "Greedy answer," "Answer 1," and "Answer 2" having higher average probabilities than the correct "Ref answer," they are incorrect. This highlights a potential issue where the model's confidence (as reflected by average probability) doesn't align with the actual correctness of the answer. The model is more "certain" about the wrong answers. The consistent Gb-S score across all answers suggests this metric might not be useful in distinguishing correct from incorrect answers in this scenario. The Ask4-conf score is only present for the Greedy answer, and its meaning is not clear from the context.
</details>
Figure 16: An example that Gemma-7B does not know the true answer. Scores are normalized in [0,1], where a lower value indicates larger uncertainty.