2412.05563v2
# A Survey on Uncertainty Quantification of Large Language Models: Taxonomy, Open Research Challenges, and Future Directions
**Authors**: Ola Shorinwa, Zhiting Mei, Justin Lidard, Allen Z. Ren, Anirudha Majumdar
> Princeton University, Princeton, NJ, USA
## Abstract
The remarkable performance of large language models (LLMs) in content generation, coding, and common-sense reasoning has spurred their widespread integration into many facets of society. However, this integration raises valid questions about their reliability and trustworthiness, given their propensity to generate hallucinations: plausible, factually-incorrect responses expressed with striking confidence. Previous work has shown that hallucinations and other non-factual responses generated by LLMs can be detected by examining the uncertainty of the LLM in its response to the pertinent prompt, driving significant research efforts devoted to quantifying the uncertainty of LLMs. This survey seeks to provide an extensive review of existing uncertainty quantification methods for LLMs, identifying their salient features, along with their strengths and weaknesses. We present existing methods within a relevant taxonomy, unifying ostensibly disparate methods to aid understanding of the state of the art. Furthermore, we highlight applications of uncertainty quantification methods for LLMs, ranging from chatbot and textual applications to embodied artificial-intelligence applications in robotics. We conclude with open research challenges in uncertainty quantification of LLMs, seeking to motivate future research.
**Keywords**: Uncertainty Quantification; Large Language Models (LLMs); Confidence Estimation.

**CCS Concepts**: Computing methodologies → Artificial intelligence; Natural language processing; Natural language generation.
## 1. Introduction
Large language models have demonstrated remarkable language generation capabilities, surpassing average human performance on many benchmarks including math, reasoning, and coding (Achiam et al., 2023; Anthropic, 2024; Brown, 2020; Touvron et al., 2023; Dubey et al., 2024; Chiang et al., 2023). For example, recent (multi-modal) large language models were shown to achieve impressive scores, e.g., in the 90th percentile, on simulated Law School Admission Test (LSAT) exams, the American Mathematics Competition (AMC) contests, the Multistate Bar Exam, and the Graduate Record Exam (GRE) General Test, outperforming a majority of test takers (Katz et al., 2024; Achiam et al., 2023; Anthropic, 2024). Likewise, LLMs have advanced the state of the art in machine translation, text summarization, and question-and-answer tasks. However, LLMs also tend to produce plausible, factually-incorrect responses to their input prompts, termed hallucinations (Lee et al., 2018). In some scenarios, the hallucinated response is overtly incorrect; however, in many cases, the factuality of the LLM response is harder to discern, posing significant risk as a user might falsely assume factuality of the response, which can result in devastating consequences, especially when safety is of paramount importance. As a result, hallucinations pose a notable danger to the safe, widespread adoption of LLMs.
Figure 1. A user asks an LLM the question: What is the lowest-ever temperature recorded in Antarctica?; in response, the LLM answers definitively. Afterwards, the user asks the LLM how confident the LLM is. Although the LLM states that it is "100% confident," the LLM's response fails to pass a fact-check test. Confidence scores provided by LLMs are generally miscalibrated. UQ methods seek to provide calibrated estimates of the confidence of LLMs in their interaction with users.
To ensure the trustworthiness of LLMs, substantial research has been devoted to examining the mechanisms behind hallucinations in LLMs (Lee et al., 2018; Chen et al., 2023; Azamfirei et al., 2023; Xu et al., 2024a; Ji et al., 2023), detecting their occurrence, identifying potential causes, and proposing mitigating actions. However, even in the absence of hallucinations, LLMs are susceptible to doubt when given prompts at the boundary of their knowledge base. In these situations, prior work has shown that LLMs fail to accurately convey their uncertainty to a user, either implicitly or explicitly, unlike typical humans (Liu et al., 2023c; Alkaissi and McFarlane, 2023). In fact, LLMs tend to be overconfident even when they should be uncertain about the factuality of their response (Xiong et al., 2023; Groot and Valdenegro-Toro, 2024). We provide an example in Figure 1, where an LLM is asked: "What is the lowest-ever temperature recorded in Antarctica?", to which the LLM responds definitively. Even when prompted for its confidence in its answer, the LLM claims that it is "100% confident." However, the LLM's answer fails to pass a fact-check test. Knowing how much to trust an LLM-generated response is critical for users (Kim et al., 2024a), helping inform the development of contingency strategies commensurate with the degree of uncertainty of the LLM in its response. For example, an LLM-equipped robot could seek human guidance (Ren et al., 2023a), or an LLM-generated legal opinion could be flagged for further review in judicial practice (Delacroix, 2024). Uncertainty quantification (UQ) methods for LLMs seek to address this challenge by providing users with an estimate of an LLM's confidence in its response to a given prompt. Indeed, uncertainty quantification can also play an important role in factuality analysis (Huang et al., 2023a).
The rapid adoption of LLMs in many applications has contributed to the fast-paced development of UQ methods for LLMs, aimed at promoting their safe integration into a wide range of applications. However, the sheer volume of UQ methods for LLMs has made it particularly challenging to ascertain the research scope and guarantees provided by existing UQ methods, complicating the identification of useful UQ methods for practitioners seeking to leverage them in their application areas, as well as the identification of impactful future directions for research. We claim that this challenge arises from the lack of a taxonomy that unifies related existing methods and presents an organized view of existing work in this research area.
Through this survey, we seek not only to enumerate existing work in UQ for LLMs, but also to provide a useful taxonomy of UQ methods for LLMs to aid understanding of the state of the art in this research area. We reiterate that the introduction of an effective taxonomy for these methods can facilitate their adoption in wide-ranging applications, such as factuality analysis, hallucination detection, and robotics. We categorize existing uncertainty quantification methods for LLMs into four main classes: (1) token-level uncertainty quantification methods; (2) self-verbalized uncertainty quantification methods; (3) semantic-similarity uncertainty quantification methods; and (4) mechanistic interpretability methods. These categories encompass uncertainty quantification of multi-claim, multi-sentence LLM responses. We elaborate on each category in this survey, identifying the key features shared by methods within each category. Moreover, we identify open research challenges and provide directions for future research, hoping to inspire future effort in advancing the state of the art.
### Comparison to Other Surveys
A number of surveys on hallucinations in LLMs exist, e.g., (Rawte et al., 2023; Huang et al., 2023b; Tonmoy et al., 2024; Liu et al., 2024e; Bai et al., 2024). These surveys discuss hallucinations in detail, introducing the notion of hallucinations (Rawte et al., 2023), identifying their types and potential causes (Huang et al., 2023b), and presenting mitigation techniques (Tonmoy et al., 2024). However, these papers provide little to no discussion on uncertainty quantification methods for LLMs, as this research area lies outside the scope of these surveys. In contrast, only two surveys on uncertainty quantification methods for LLMs exist, to the best of our knowledge. The first survey (Geng et al., 2024) categorizes confidence estimation and calibration methods into two broad classes: methods for generation tasks and methods for classification tasks, defined by the application domain. The survey in (Geng et al., 2024) focuses more heavily on calibration methods, with a less extensive discussion on confidence estimation methods. In contrast, our paper provides an extensive survey of uncertainty quantification methods with a brief discussion on calibration of uncertainty estimates. For example, whereas (Geng et al., 2024) lacks a detailed discussion on the emerging field of mechanistic interpretability, our survey presents this field in detail, along with potential applications to uncertainty quantification. Moreover, our survey discusses a broad range of applications of uncertainty quantification methods for LLMs, e.g., embodied applications such as in robotics, beyond those discussed in (Geng et al., 2024). A concurrent survey (Huang et al., 2024) on uncertainty quantification of LLMs categorizes existing uncertainty quantification methods within more traditional classes, which do not consider the unique architecture and characteristics of LLMs.
In contrast, our survey categorizes existing work within the lens of LLMs, considering the underlying transformer architecture of LLMs and the autoregressive token-based procedure utilized in language generation.
### Organization
In Section 2, we begin with a review of essential concepts that are necessary for understanding the salient components of uncertainty quantification of LLMs. We discuss the general notion of uncertainty and introduce the main categories of uncertainty quantification methods within the broader field of deep learning. Subsequently, we identify the relevant metrics utilized by a majority of uncertainty quantification methods for LLMs. In Sections 3, 4, 5, and 6, we discuss the four main categories of uncertainty quantification methods for LLMs, highlighting the key ideas leveraged by the methods in each category. In Section 7, we provide a brief discussion of calibration techniques for uncertainty quantification, with applications to uncertainty quantification of LLMs. In Section 8, we summarize the existing datasets and benchmarks for uncertainty quantification of LLMs and present applications of uncertainty quantification methods for LLMs in Section 9. We highlight open challenges in Section 10 and suggest directions for future research. Lastly, we provide concluding remarks in Section 11. Figure 2 summarizes the organization of this survey, highlighting the key details presented therein.
Uncertainty Quantification for LLMs:
- **Taxonomy**
  - Token-Level UQ (Xiao and Wang, 2021; Kadavath et al., 2022; Bakman et al., 2024; Ling et al., 2024; Vazhentsev et al., 2024; Fadeeva et al., 2024; Ren et al., 2023b)
  - Self-Verbalized UQ (Mielke et al., 2022; Lin et al., 2022; Stengel-Eskin et al., 2024; Yang et al., 2024b; Xu et al., 2024b; Tao et al., 2024; Band et al., 2024)
  - Semantic-Similarity UQ (Kuhn et al., 2023; Chen and Mueller, 2023; Lin et al., 2023; Kossen et al., 2024; Wang et al., 2024a; Qiu and Miikkulainen, 2024; Ao et al., 2024)
  - Mechanistic Interpretability (Ahdritz et al., 2024)
- **Datasets and Benchmarks**
  - Reading Comprehension (Joshi et al., 2017; Reddy et al., 2019; Lebret et al., 2016)
  - Mathematics (Lin et al., 2022)
  - Multi-Hop Reasoning (Yang et al., 2018; Geva et al., 2021)
  - Factuality Analysis (Lin et al., 2021; Li et al., 2023; Thorne et al., 2018)
- **Applications**
  - Chatbot and Textual (Zhang et al., 2023a; Yadkori et al., 2024; Mohri and Hashimoto, 2024; Pacchiardi et al., 2023; Tai et al., 2024; Kolagar and Zarcone, 2024; Steindl et al., 2024)
  - Robotics (Tsai et al., 2024; Ren et al., 2023a; Wang et al., 2023b; Liang et al., 2024; Mullen Jr and Manocha, 2024; Wang et al., 2024c; Zheng et al., 2024)
- **Open Challenges and Future Directions**
  - Consistency and Factuality
  - Entropy and Factuality
  - Multi-Episode UQ for Interactive Agents
  - Mechanistic Interpretability and UQ
  - Datasets and Benchmarks
Figure 2. The overview of this survey, including a taxonomy of uncertainty quantification methods for LLMs, relevant datasets and benchmarks, applications, and open challenges and directions for future research.
## 2. Background
We review fundamental concepts that are crucial to understanding uncertainty quantification of LLMs. We assume basic familiarity with deep learning and build upon this foundation to introduce more specific concepts, describing the notion of uncertainty, the inner workings of LLMs, and the development of metrics and probes to illuminate the uncertainty of LLMs in their response to a user's prompt.
### 2.1. Uncertainty
Uncertainty is a widely-known, yet vaguely-defined concept. For example, people generally associate uncertainty with doubt or a lack of understanding, knowledge, or control, but cannot generally provide a precise definition, especially a mathematical one. This general ambiguity applies to the field of LLMs (Keeling and Street, 2024). For example, a subset of the LLM research field considers the uncertainty of a model to be distinct from its level of confidence in a response generated by the model (Lin et al., 2023), stating that confidence scores are associated with a prompt (input) and a prediction by the model, whereas uncertainty is independent of the model's prediction. However, a large subset of the field considers uncertainty and the lack of confidence to be mostly-related, generally-interchangeable concepts. In this section, for simplicity, we consider uncertainty and confidence to be mostly interchangeable.
When prompted, LLMs tend to hallucinate when uncertainty about the correct answer exists, e.g., when a lack of understanding or a lack of knowledge exists (see Figures 3 and 4). In Figures 3 and 4, we ask GPT-4o mini to name the best cooking book written by a (likely) fictional person, Jamie Feldman. GPT-4o mini provides a confident response: "The Ultimate Guide to Cooking for One." However, based on an internet search, this cookbook does not exist (although many similar ones do). Moreover, when prompted about its confidence, GPT-4o mini apologizes before providing yet another confident, but factually-incorrect response: "The Jewish Cookbook." This book is authored by Leah Koenig, not Jamie Feldman. Uncertainty quantification (UQ) methods aim to provide a more rigorous estimate of the model's confidence in its response, e.g., from the entropy of the distribution from which the tokens are sampled. Before discussing UQ techniques for LLMs, we identify the types of uncertainty and the methods suitable for characterizing uncertainty in deep-learned models, more broadly.
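To make the entropy-based estimate mentioned above concrete, consider the following minimal sketch (the token probabilities are invented for illustration): a peaked next-token distribution yields low entropy (a confident model), while a flat distribution yields high entropy (an uncertain model).

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Hypothetical next-token distributions over a 4-token vocabulary.
confident = [0.97, 0.01, 0.01, 0.01]   # mass concentrated on one token
uncertain = [0.25, 0.25, 0.25, 0.25]   # mass spread evenly (maximal entropy)

low = token_entropy(confident)
high = token_entropy(uncertain)        # equals log(4) for the uniform case
```

Token-level UQ methods discussed later in this survey build on exactly this kind of quantity, aggregated over the tokens of a generated response.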
Figure 3. Hallucination in LLMs: When asked for information about a possibly fictional person, LLMs tend to fabricate a response that sounds coherent but is entirely false.
Figure 4. Hallucination in LLMs: When asked about its confidence, the LLM apologizes before hallucinating another response. The Jewish Cookbook is authored by Leah Koenig, not Jamie Feldman.
### 2.2. Types of Uncertainty
Uncertainty can be broadly categorized into two classes, namely: aleatoric uncertainty and epistemic uncertainty. When considered collectively, the resulting uncertainty is referred to as predictive uncertainty, without a distinction between the two components.
#### 2.2.1. Aleatoric Uncertainty
Aleatoric uncertainty encompasses the lack of definiteness of the outcome of an event due to the inherent randomness in the process which determines the outcome of the event. For example, a model cannot predict with certainty the outcome of an unbiased coin toss due to the random effects in the coin toss, regardless of the complexity of the model or the size of the training dataset used in training the model. This irreducible uncertainty is referred to as aleatoric uncertainty. For example, in the case of LLMs, aleatoric uncertainty can arise when there is inherent randomness in the ground-truth response, e.g., when prompted with âWhat will the temperature be tomorrow?â, the uncertainty associated with the LLMâs output can be characterized as aleatoric uncertainty, which is entirely due to the random effects associated with daily weather conditions. In essence, daily weather conditions cannot be predicted with absolute certainty, irrespective of the amount of training data available.
#### 2.2.2. Epistemic Uncertainty
In contrast to aleatoric uncertainty, epistemic uncertainty characterizes the doubt associated with a certain outcome (prediction) due to a lack of knowledge or "ignorance" on the part of the model, often due to limited training data. For example, when prompted to provide the digit in the 7th decimal place of the square root of $2$, GPT-4o mini responds with the answer $6$. However, this answer is wrong: the digit in the 7th decimal place is $5$. The uncertainty in the LLM's output can be characterized as epistemic uncertainty, which can be eliminated by training the LLM on more data specific to this prompt. In other words, epistemic uncertainty describes reducible uncertainty, i.e., epistemic uncertainty should reduce when there is more knowledge about the state on which the decision is being made, e.g., via choosing the right model to use for learning, using more training data, or incorporating prior knowledge. The uncertainty associated with the response in Figure 4 is entirely epistemic and stems from missing training data. If we train the LLM on more data, including the fact that Jamie Feldman did not write a cookbook, we can eliminate the uncertainty associated with the model's response. Before concluding, we note that prior work has explored decomposing predictive uncertainty into epistemic and aleatoric components (Hou et al., 2023).
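One standard way of writing such a decomposition, sketched here in the Bayesian setting with parameters $\theta$, posterior $p(\theta|D)$, input $x$, and output $y$ (the exact formulation varies across the works cited above), splits the predictive entropy into an expected entropy term and a mutual-information term:

```latex
\underbrace{\mathcal{H}\bigl[\mathbb{E}_{p(\theta|D)}\, p(y \mid x, \theta)\bigr]}_{\text{predictive uncertainty}}
\;=\;
\underbrace{\mathbb{E}_{p(\theta|D)}\,\mathcal{H}\bigl[p(y \mid x, \theta)\bigr]}_{\text{aleatoric}}
\;+\;
\underbrace{\mathcal{I}\bigl[y;\, \theta \mid x, D\bigr]}_{\text{epistemic}}
```

Intuitively, the mutual-information term is large only when different plausible parameter settings disagree about $y$, i.e., when the uncertainty could be reduced with more knowledge.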
### 2.3. Uncertainty Quantification in Deep Learning
Broadly, uncertainty quantification for deep learning lies along a spectrum between two extremes: training-based and training-free methods, illustrated in Figure 5. Whereas training-based methods assume partial or complete visibility and access to the internal structure of the neural network, modifying it to probe its uncertainty, training-free methods use auxiliary models or additional data to quantify the uncertainty of the model post-hoc.
- **Training-based**: BNNs (Jospin et al., 2022); MCMC (Hastings, 1970); Variational Inference (Posch et al., 2019); MC-Dropout (Gal and Ghahramani, 2016; Gal et al., 2017); Deep Ensembles (Lakshminarayanan et al., 2017; Guo et al., 2018; Cavalcanti et al., 2016; Martinez-Munoz et al., 2008; Buciluǎ et al., 2006; Hinton, 2015); ENNs (Osband et al., 2023; Wang and Ji, 2024)
- **Training-free**: Test-time Data Augmentation (Lee and AlRegib, 2020; Ayhan and Berens, 2018; Wu and Williamson, 2024; Bahat and Shakhnarovich, 2020); Dropout Injection (Loquercio et al., 2020; Ledda et al., 2023); Gradient-based (Lee and AlRegib, 2020; Huang et al., 2021; Igoe et al., 2022)
Figure 5. Uncertainty quantification methods in deep learning span the spectrum from training-based methods to training-free methods.
#### 2.3.1. Training-Based Methods
Training-based uncertainty quantification methods span Bayesian neural networks, Monte-Carlo dropout methods, and deep ensembles, which we review in the subsequent discussion. Instead of training a set of parameters to predict a single outcome, a Bayesian neural network (BNN) (Jospin et al., 2022) learns a distribution over the model's weights $\theta$. Specifically, a BNN learns a posterior distribution over the parameters, $p(\theta|D)$, given a dataset $D$, with its prediction consisting of two parts: a maximum a posteriori estimate $\hat{y}$ and the uncertainty associated with it, defined by the covariance of the prediction, $\Sigma_{\hat{y}|x,D}$.
Despite being statistically principled, BNNs are rarely employed directly because of their prohibitive computational costs. To train BNNs, a variety of methods have been proposed, the most popular being Markov-chain Monte-Carlo (MCMC) (Hastings, 1970) and variational inference (Posch et al., 2019). The former samples from the exact posterior distribution, while the latter learns to approximate the posterior with a variational distribution, $q_{\varphi}$. Because it does not require access to large numbers of samples, variational inference has been more widely used, with Monte-Carlo dropout (Gal and Ghahramani, 2016; Gal et al., 2017) and deep ensembles (Lakshminarayanan et al., 2017) being representative methods. More recently, epistemic neural networks (ENNs) (Osband et al., 2023; Wang and Ji, 2024) have been introduced to reduce the computational challenges associated with BNNs. To make ensemble methods more efficient, e.g., in out-of-distribution detection (Vyas et al., 2018), pruning methods (Guo et al., 2018; Cavalcanti et al., 2016; Martinez-Munoz et al., 2008), which reduce redundancy among ensemble members, and distillation methods (Buciluǎ et al., 2006; Hinton, 2015), which reduce the ensemble to a single network trained to represent the knowledge of the group, have been introduced. While these methods are easy to implement and require much less computation than regular BNNs or MCMC, they only approximate the true posterior distribution. In fact, a model's uncertainty predictions can be worse when data augmentation, ensembling, and post-processing calibration are used together (Rahaman et al., 2021).
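The ensemble idea above can be sketched in a few lines of NumPy. In this toy example (all names and weights are hypothetical; random linear classifiers stand in for independently trained networks), the ensemble averages its members' class distributions, and the variance across members serves as an epistemic-uncertainty signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-ins for M independently trained networks: random linear classifiers.
M, n_features, n_classes = 5, 4, 3
members = [rng.normal(size=(n_features, n_classes)) for _ in range(M)]

def ensemble_predict(x):
    """Average the members' class distributions; their disagreement
    (variance across members) signals epistemic uncertainty."""
    preds = np.stack([softmax(x @ W) for W in members])  # shape (M, n_classes)
    return preds.mean(axis=0), preds.var(axis=0)

mean_probs, disagreement = ensemble_predict(rng.normal(size=n_features))
```

In a real deep ensemble, each member would be a full network trained from a different random initialization; the averaging and disagreement computation are unchanged.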
#### 2.3.2. Training-Free Methods
Training-free methods for estimating uncertainty have become popular due to their ease of implementation. Since neither the network architecture nor the training process needs to be revised, training-free methods work well with large-scale foundation models that are costly to train or fine-tune. In (Ayhan and Berens, 2018; Lee and AlRegib, 2020; Wu and Williamson, 2024; Bahat and Shakhnarovich, 2020), the authors perform data augmentation at test time to generate a predictive distribution, quantifying the model's uncertainty. Similarly, dropout injection (Loquercio et al., 2020; Ledda et al., 2023) extends MC-dropout to the training-free domain by performing dropout only at inference time to estimate epistemic uncertainty. In (Mi et al., 2022), the authors estimate uncertainty for regression using similar perturbation techniques. Lastly, gradient-based uncertainty quantification methods (Lee and AlRegib, 2020) generate gradients at test time, e.g., by constructing confounding labels, which provide a signal for epistemic uncertainty and for out-of-distribution (OOD) detection (Huang et al., 2021; Igoe et al., 2022).
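Dropout injection can be illustrated with a minimal NumPy sketch (a toy, hypothetical two-layer regression network with random weights standing in for an already-trained model): dropout masks are applied only at inference time, and the spread of repeated stochastic forward passes is read as an epistemic-uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed, already-"trained" two-layer regression network (toy weights).
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(x, p_drop=0.2, stochastic=True):
    h = np.maximum(0.0, x @ W1 + b1)          # ReLU hidden layer
    if stochastic:                            # dropout injected at test time only
        mask = rng.random(h.shape) >= p_drop
        h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return (h @ W2 + b2).item()

x = rng.normal(size=4)
samples = [forward(x) for _ in range(200)]    # T stochastic forward passes
mean, std = float(np.mean(samples)), float(np.std(samples))
```

Note that, unlike MC-dropout, no dropout is used during training here; the injected noise level `p_drop` becomes a tunable post-hoc parameter, which is precisely what the dropout-injection literature studies.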
### 2.4. Uncertainty Quantification for LLMs
The introduction of the transformer (Vaswani, 2017) for sequence-to-sequence machine translation tasks spurred the development of large language models. However, as noted in the preceding discussion, LLMs have gained some notoriety for their tendency to hallucinate when uncertain about a response to a specified prompt. Here, we review the general architecture of LLMs and provide some motivation for the development of LLM-specific metrics for quantifying uncertainty.
#### 2.4.1. LLM Architecture
LLMs use the transformer architecture to provide free-form responses to input prompts specified in natural language. The transformer architecture consists of an encoder, which processes the input to the model, and a decoder, which generates the model's outputs auto-regressively, i.e., previously generated outputs are fed back into the model to generate future outputs. Given an input prompt, the words (elements) of the prompt are tokenized (i.e., the natural-language sentences/phrases are decomposed into simple units referred to as tokens) and transformed into input embeddings using a learned model. The encoder takes in the input embeddings, augmented with positional encodings to incorporate positional context, and generates a sequence of latent embeddings, which serves as an input to the decoder, using a stack of $N$ multi-head attention sub-blocks and fully-connected feedforward networks. The decoder takes in the embeddings associated with its previous outputs, preceded by a start token, and computes an output embedding using a similar stack of multi-head attention sub-blocks and feedforward networks. The resulting output embeddings are passed through a linear layer followed by a softmax output layer, which converts the decoder embeddings into a probability distribution over the tokens in the model's dictionary. In subsequent discussion, we denote the probability of the $j$-th token in the $i$-th sentence of an LLM's output as $p_{ij}$. The output token is selected from this probability distribution, e.g., by greedily taking the token with the maximum probability mass, and the resulting output is passed back into the decoder for auto-regressive text generation.
Figure 6. Many state-of-the-art LLMs are decoder-only transformers, with $N$ multi-head attention sub-blocks, for auto-regressive output generation.
While early LLMs utilized encoder-only or encoder-decoder transformer architectures, state-of-the-art LLMs now generally utilize a decoder-only architecture. For example, the GPT family of models, such as GPT-4 (Achiam et al., 2023), and the Llama family of models, such as Llama 3 (Dubey et al., 2024), are decoder-only transformers. In Figure 6, we show a decoder-only transformer model. These state-of-the-art models leverage advances in transformers to improve computational efficiency, given their huge size: Llama 3 has 8B parameters in its small variant and 70B parameters in its large variant, while GPT-4 is rumored to have over one trillion parameters. Llama 3 uses rotary positional embeddings (RoPE) (Su et al., 2024a) instead of absolute positional embeddings, which have been shown to be more effective than alternative embedding schemes. For a more detailed review of LLMs, we refer readers to (Minaee et al., 2024). Before presenting the metrics utilized by UQ methods for LLMs, we discuss natural-language inference, an important component of many UQ methods for LLMs.
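The auto-regressive decoding loop described above can be sketched with a toy stand-in for the decoder. In this minimal sketch, `logits_fn`, the 5-token vocabulary, and the toy model are illustrative assumptions, not part of any real LLM:

```python
import numpy as np

def softmax(z):
    """Convert decoder logits to a probability distribution over tokens."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def greedy_decode(logits_fn, start_token, max_len=10, end_token=None):
    """Auto-regressively pick the highest-probability token at each step.

    `logits_fn` stands in for the decoder stack plus linear layer: it maps
    the sequence generated so far to a logit vector over the vocabulary.
    """
    tokens = [start_token]
    for _ in range(max_len):
        p = softmax(logits_fn(tokens))   # distribution p_ij over the vocabulary
        nxt = int(np.argmax(p))          # greedy selection of the next token
        tokens.append(nxt)
        if nxt == end_token:
            break
    return tokens

# Toy "model": always prefers the token after the last one, modulo vocab size 5.
toy = lambda toks: np.eye(5)[(toks[-1] + 1) % 5] * 5.0
print(greedy_decode(toy, start_token=0, max_len=4))  # prints [0, 1, 2, 3, 4]
```

Replacing the `argmax` with sampling from `p` (at a chosen temperature) yields the stochastic generations that many UQ methods later exploit.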
#### 2.4.2. Natural-Language Inference
Natural-language inference (NLI) refers to the task of characterizing the relationship between two text fragments, where one text fragment represents a premise (i.e., a statement that is believed to be true) while the other fragment represents a hypothesis (i.e., a statement whose veracity we seek to evaluate based on the premise) (Williams et al., 2017; Dagan et al., 2005; Fyodorov et al., 2000). Given a premise and a hypothesis, we can classify the relation between the text pair as: an entailment, if one can infer that the hypothesis is most likely true given the premise; a contradiction, if one can infer that the hypothesis is most likely false given the premise; or a neutral label, if one cannot infer the truthfulness of the hypothesis from the premise (MacCartney and Manning, 2008; Condoravdi et al., 2003; Monz and de Rijke, 2001). In Figure 7, we provide some examples of text pairs that exhibit entailment, contradiction, or neutrality. In the first example, the premise indicates that the student presented a research paper at a conference (i.e., the student did not skip the conference), hence, the contradiction. In the second example, the premise indicates that the orchestra enjoyed the concert, but does not state whether the orchestra performed at the concert (or just observed the event), hence the neutral label. In the third example, we can infer that the hypothesis is true, since the premise indicates that the team was on vacation, hence, not in the office.
Figure 7. Natural-language inference models characterize the relationship between a pair of texts, namely: a premise and a hypothesis. The possible relations include: (1) an entailment where the hypothesis can be inferred from the premise; (2) a contradiction, where the hypothesis is more likely false given the premise; and (3) a neutral relation, where the veracity of the hypothesis cannot be determined from the premise.
NLI methods play an important role in uncertainty quantification of LLMs. Many UQ methods for LLMs rely on characterization of the semantic relationship between multiple realizations of the LLM's responses to a given input prompt to determine the confidence of the model. Many of these methods rely on learned models for natural-language inference, such as BERT (Devlin, 2018), which utilizes a transformer-based architecture to learn useful language representations that are crucial in natural-language tasks such as question answering and natural-language inference. Unlike many standard language models, e.g., the Generative Pre-trained Transformer (GPT) (Radford and Narasimhan, 2018), which impose a unidirectionality constraint where every token can only attend to previous tokens, BERT employs a bidirectional approach where each token can attend to any token regardless of its relative position, using a masked language model, potentially enabling the model to capture broader context, especially in sentence-level tasks. In (Liu, 2019), the authors demonstrate that the performance of BERT is limited by inadequate pre-training and propose an improved model, RoBERTa (Liu, 2019), which retains the same architecture as BERT but is trained for longer, with larger mini-batches of data and longer sequences. DeBERTa (He et al., 2020) further improves upon RoBERTa by introducing a disentangled attention mechanism and an enhanced mask decoder.
#### 2.4.3. Metrics for Uncertainty Quantification for LLMs.
Uncertainty quantification in the LLM community has largely eschewed traditional UQ methods for learned models due to the notable computational cost of running inference on LLMs (Balabanov and Linander, 2024), although a few UQ methods for LLMs utilize deep ensembles, e.g., (Wang et al., 2023a; Balabanov and Linander, 2024; Zhang et al., 2024; Arteaga et al., 2024), generally based on low-rank adaptation (LoRA) (Hu et al., 2021). Consequently, many UQ methods in this space have introduced less computationally intensive approximate quantification methods that directly harness the unique architecture of LLMs to assess their uncertainty. In some cases, these methods retain the high-level idea of ensemble methods, quantifying the uncertainty of the model on a given prompt using the outputs of a set of individual models or a collection of outputs sampled from the same model at a nonzero temperature to introduce stochasticity in the generated tokens. UQ methods for LLMs can be broadly categorized into white-box and black-box methods (Liu et al., 2024d; Vashurin et al., 2024), illustrated in Figure 8 and Figure 9, respectively.
White-Box UQ Methods
White-box UQ methods assume that the underlying architecture of the model is partially or completely visible and accessible (hence the term white-box), taking advantage of access to the intermediate outputs of the underlying model, such as the probability distribution over the generated tokens or the activations at the inner layers of the model, to assess the model's uncertainty (Kuhn et al., 2023; Azaria and Mitchell, 2023; Fadeeva et al., 2024). We provide some metrics utilized by white-box UQ methods for LLMs, where $p_{ij}$ denotes the conditional probability of token $j$ (conditioned on all preceding tokens) in sentence $i$:
</details>
Figure 8. White-box uncertainty quantification methods utilize an LLM's internal information, e.g., the probability the model assigns to each output token.
Figure 9. Black-box uncertainty quantification methods do not access the internal states or probabilities computed by the model, quantifying the model's uncertainty entirely from its natural-language response.
1. Average Token Log-Probability. The average of the negative log-probability of the tokens, which captures the average confidence of the model (Manakul et al., 2023), is given by: ${\mathrm{Average}(p)=-\frac{1}{L_{i}}\sum_{j}\log(p_{ij}),}$ where sentence $i$ consists of $L_{i}$ tokens. Note that the value of this metric increases as the conditional probability of each token decreases, signifying a decrease in the model's confidence. Through the logarithm, this metric is a length-normalized transformation of the product of the token probabilities.
1. Perplexity. The perplexity of a modelâs prediction represents the exponential of the average of the negative log-probability of the tokens which comprise the sentence (response) generated by the LLM (Fadeeva et al., 2024). Perplexity is given by: ${\mathrm{Perplexity}(p)=\exp\left(-\frac{1}{L_{i}}\sum_{j}\log(p_{ij})\right).}$
1. Maximum Token Log-Probability. The maximum token log-probability captures the token with the lowest conditional probability, which is given by: ${\mathrm{Maximum}(p)=\max_{j}-\log(p_{ij}).}$
1. Response Improbability. This metric computes the probability of a given sentence as the product of its tokens' conditional probabilities, each conditioned on the preceding tokens, and subtracts the result from one (Fadeeva et al., 2024). The uncertainty metric is defined as: ${\mathrm{Improb.}=1-\prod_{j}p_{ij}.}$
1. Entropy. The maximum entropy of the probability distributions associated with the tokens can be utilized as a metric for UQ, given by: ${\mathrm{Entropy}=\max_{j}\mathcal{H}(p_{j}),}$ where $\mathcal{H}$ represents the entropy of the probability distribution $p_{j}$ of token $j$. Prior work reports that this metric can outperform perplexity (Fadeeva et al., 2024). Similarly, the predictive entropy (Malinin and Gales, 2020) at input $x$ with output $y$ is defined as: ${\mathcal{H}(Y\mid x)=-\int p(y\mid x)\ln p(y\mid x)dy.}$ In the discrete case, the entropy associated with the output distribution of token $j$ in sentence $i$ is defined by: ${\mathcal{H}_{ij}=-\sum_{w\in\mathcal{D}}p_{ij}(w)\log p_{ij}(w),}$ where $\mathcal{D}$ denotes the dictionary of all possible words in the model and $w$ represents a word in $\mathcal{D}$.
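As a concrete sketch, the metrics above can be computed directly from the per-token probabilities $p_{ij}$ and, for the entropy, from a full next-token distribution. The token probabilities below are hypothetical, and `whitebox_metrics`/`token_entropy` are illustrative names, not an established API:

```python
import numpy as np

def whitebox_metrics(token_probs):
    """White-box UQ metrics for one sentence, given the conditional
    probabilities p_ij the model assigned to its generated tokens."""
    p = np.asarray(token_probs, dtype=float)
    logs = np.log(p)
    avg_nll = -logs.mean()                      # average token log-probability
    return {
        "avg_nll": float(avg_nll),
        "perplexity": float(np.exp(avg_nll)),   # exp of the average NLL
        "max_nll": float((-logs).max()),        # NLL of the least likely token
        "improbability": float(1.0 - p.prod()), # 1 - product of token probs
    }

def token_entropy(dist):
    """Entropy H of a full next-token distribution over the dictionary."""
    q = np.asarray(dist, dtype=float)
    q = q[q > 0.0]                              # treat 0 * log 0 as 0
    return float(-(q * np.log(q)).sum())

# Hypothetical two-token sentence in which each token had probability 0.5:
m = whitebox_metrics([0.5, 0.5])
# avg_nll = ln 2, perplexity = 2, max_nll = ln 2, improbability = 0.75
```

Note how perplexity is simply the exponential of the average NLL, so the two metrics always rank responses identically.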
Black-Box UQ Methods
In contrast, black-box methods assume that the model's internal outputs cannot be accessed externally (Manakul et al., 2023; Chen and Mueller, 2023). Hence, these methods quantify the uncertainty of the model entirely from the model's response to an input prompt. Prior work has discussed the pros and cons of both categories of UQ methods (Lin et al., 2023). Concisely, white-box methods generally require access to the underlying architecture and intermediate outputs of an LLM, which is increasingly difficult to obtain given that many LLMs have become closed-source, posing a significant limitation. In contrast, black-box methods enable UQ of closed-source models such as OpenAI's GPT-4 (Achiam et al., 2023) and Anthropic's Claude (Anthropic, 2024), which do not provide complete access to the model. In general, black-box UQ methods for LLMs evaluate the similarity between multiple responses generated by an LLM, or an ensemble of LLMs, on the same or similar prompts to quantify the uncertainty of the LLM on a given input prompt. Other black-box UQ methods, such as self-verbalized UQ methods, train the model to directly provide a natural-language estimate of its confidence. Here, we identify some prominent techniques for measuring the similarity between a pair of text fragments:
1. NLI Scores. As described in Section 2.4.2, NLI models, such as RoBERTa (Liu, 2019) and DeBERTa (He et al., 2020), classify the relationship between a pair of text fragments as either an entailment, a contradiction, or a neutral relation. Many black-box methods utilize the probabilities (or logits) predicted by the NLI model for these three classes as a measure of the similarity between the two text fragments, which is ultimately used to quantify the uncertainty of the LLM. For example, given the probability $p_{\mathrm{entail}}$ predicted by an NLI model that a text fragment $t_{1}$ entails another text fragment $t_{2}$, we can define the similarity between both text fragments as: ${\mathrm{sim}(t_{1},t_{2})=p_{\mathrm{entail}}}$. Conversely, given the probability of contradiction $p_{\mathrm{contradict}}$, we can define the similarity between $t_{1}$ and $t_{2}$ as: ${\mathrm{sim}(t_{1},t_{2})=1-p_{\mathrm{contradict}}}$.
1. Jaccard Index. The Jaccard index, also referred to as Intersection-over-Union, measures the similarity between two sets as the ratio of the size of their intersection to the size of their union. Hence, the Jaccard index $J$ between two sets $\mathcal{T}_{1}$ and $\mathcal{T}_{2}$, where each set consists of the words that make up its associated text fragment, is given by: ${J(\mathcal{T}_{1},\mathcal{T}_{2})=\frac{|\mathcal{T}_{1}\cap\mathcal{T}_{2}|}{|\mathcal{T}_{1}\cup\mathcal{T}_{2}|}.}$ Although the Jaccard index always lies between $0$ and $1$, making it a suitable metric (Pilehvar et al., 2013; Cronin et al., 2017; Qurashi et al., 2020), it does not consider the context of the text fragments, which is important in evaluating their similarity.
1. Sentence-Embedding-Based Similarity. The similarity between two text fragments can also be determined by computing the cosine-similarity between the sentence embeddings associated with each text fragment. Sentence-embedding models transform natural-language inputs (or tokens) into a vector space, enabling direct computation of the similarity between two sentences (phrases). For example, Sentence-BERT (SBERT) (Reimers, 2019) builds upon the pretrained BERT architecture to train a model that computes semantically-relevant sentence embeddings. Other similar models include LaBSE (Feng et al., 2020) and SONAR (Duquenne et al., 2023). Since the sentence embeddings capture the context of the text fragment, this approach is less susceptible to the challenges faced by the Jaccard index, such as those that arise with negated words.
1. BERTScore. The BERTScore (Zhang et al., 2019) measures the similarity between two sentences by computing the cosine-similarity between the contextual embedding of each token (word) in the reference sentence $t_{r}$ and the contextual embedding of the associated token in the candidate sentence $t_{c}$ . The token embeddings are generated from NLI models to capture the context of the sentence. As a result, a given word might have different embeddings, depending on the context of the sentence in which it is used, addressing the challenges faced by the Jaccard similarity metric and word-embedding-based metrics. The BERTScore is composed of a precision $P_{\mathrm{BS}}$ , recall $R_{\mathrm{BS}}$ , and F1 $F_{\mathrm{BS}}$ score, given by:
$$
P_{\mathrm{BS}}=\frac{1}{|t_{c}|}\sum_{\hat{w}_{j}\in t_{c}}\max_{w_{i}\in t_{r}}w_{i}^{\top}\hat{w}_{j},\enspace R_{\mathrm{BS}}=\frac{1}{|t_{r}|}\sum_{w_{i}\in t_{r}}\max_{\hat{w}_{j}\in t_{c}}w_{i}^{\top}\hat{w}_{j},\enspace F_{\mathrm{BS}}=2\,\frac{P_{\mathrm{BS}}\cdot R_{\mathrm{BS}}}{P_{\mathrm{BS}}+R_{\mathrm{BS}}}, \tag{1}
$$
where each token in the candidate sentence is matched to its most similar token in the reference sentence. The BERTScore is obtained by computing the cosine-similarity between matched pairs. Since each token embedding is normalized, the cosine-similarity between a pair of embeddings simplifies to the inner-product. The recall score is related to the ROUGE metric (Lin, 2004) used in evaluating text summaries and more broadly to the BARTScore (Yuan et al., 2021). However, the ROUGE metric utilizes human-provided summaries as the reference.
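Two of the similarity measures above are easy to sketch directly. The code below is a minimal illustration: it assumes whitespace tokenization for the Jaccard index, and precomputed unit-norm token embeddings for the BERTScore (in practice these come from a contextual model such as BERT, not from the toy vectors used here):

```python
import numpy as np

def jaccard(t1, t2):
    """Jaccard index between the word sets of two text fragments."""
    s1, s2 = set(t1.lower().split()), set(t2.lower().split())
    return len(s1 & s2) / len(s1 | s2)

def bertscore(cand, ref):
    """BERTScore precision, recall, and F1 (Eq. 1) from token embeddings.

    `cand` and `ref` have shape (num_tokens, dim); with unit-norm embeddings,
    the cosine similarity reduces to the inner product w_i^T w_j.
    """
    sim = cand @ ref.T                 # pairwise token similarities
    p = sim.max(axis=1).mean()         # each candidate token -> best match in ref
    r = sim.max(axis=0).mean()         # each reference token -> best match in cand
    return p, r, 2 * p * r / (p + r)

print(jaccard("the team was on vacation", "the team was in the office"))
```

With identical candidate and reference embeddings, precision, recall, and F1 all equal one, matching the intuition that BERTScore measures how well each token is covered by its best match on the other side.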
In the following sections, we describe the main categories of UQ methods for LLMs in detail, namely: (1) Token-Level UQ Methods; (2) Self-Verbalized UQ Methods; (3) Semantic-Similarity UQ Methods; and (4) Mechanistic Interpretability, outlined in Figure 2. Although mechanistic interpretability has not been widely applied to uncertainty quantification, we believe that insights from mechanistic interpretability can be more extensively applied to the uncertainty quantification of LLMs; hence, we include these methods within our taxonomy.
## 3. Token-Level UQ
We recall that the outputs of an LLM are generated by sampling from a probability distribution over the tokens that make up the outputs, conditioned on the preceding tokens (see Section 2.4). Token-level UQ methods leverage the probability distribution over each token to estimate the probability of generating a given response from an LLM. Although a high predicted probability for a particular generation does not guarantee that it is more correct than another, direct quantification of the model's generating distribution can lead to a better understanding of the stochasticity of its generations. Token-level UQ methods utilize the white-box UQ metrics discussed in Section 2.4.3 to estimate the randomness in the probability distribution associated with an LLM's response. For example, some token-level UQ methods compute the entropy of the underlying probability distribution over the tokens (Xiao and Wang, 2021; Ling et al., 2024) or over semantic clusters (Kuhn et al., 2023) (referred to as semantic entropy) to estimate the LLM's confidence. Likewise, a variant of SelfCheckGPT (Manakul et al., 2023) trains an $n$-gram model using multiple samples of the LLM's response to a given query, including its main response. Subsequently, SelfCheckGPT estimates the LLM's uncertainty by computing the average of the log-probabilities assigned by the $n$-gram model to the tokens of the original LLM response. Moreover, SelfCheckGPT proposes using the maximum of the negative log-probability to estimate the LLM's uncertainty.
Token-based UQ methods generally perform poorly on long-form responses, because the product of the token probabilities decreases with response length, even when a long response is semantically equivalent to a shorter one. To address this limitation, token-based UQ methods employ a length-normalized scoring function (Thomas and Joy, 2006; Malinin and Gales, 2020) to reduce the dependence of the UQ metrics on the sequence length, given by the geometric mean of the token probabilities: $\bar{p}_{i}=\prod_{j=1}^{L_{i}}p_{ij}^{\frac{1}{L_{i}}}$, where $L_{i}$ denotes the length of sentence $i$, and $p_{ij}$ is the conditional probability of token $j$ in sentence $i$, given all preceding tokens. The work in (Bakman et al., 2024) introduces Meaning-Aware Response Scoring (MARS) as an alternative to length-normalized scoring. MARS utilizes an importance function to assign a weight to each token based on its contribution to the meaning of the sentence, determined using BEM (Bulian et al., 2022), a question-answer evaluation model. Taking a different approach, Claim-Conditioned Probability (CCP) (Fadeeva et al., 2024) decomposes the output of an LLM into a set of claims and computes the token-level uncertainty of each claim from its constituent tokens, utilizing the OpenAI Chat API (Brown, 2020; Achiam et al., 2023) to identify the main claims in a given response. By examining the component claims, CCP provides finer-grained uncertainty quantification than other UQ methods for LLMs.
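The length-normalized score and a MARS-style weighted variant differ only in the per-token weights. A small illustration, where the importance weights are supplied by hand rather than by BEM:

```python
import math

def length_normalized_score(token_probs):
    # Geometric mean of token probabilities: prod_j p_ij^(1/L_i).
    L = len(token_probs)
    return math.exp(sum(math.log(p) for p in token_probs) / L)

def weighted_score(token_probs, weights):
    """MARS-style scoring sketch: weights w_j (summing to 1) reflect each
    token's contribution to the sentence meaning. The weights here are
    caller-supplied; the MARS paper derives them from a QA evaluation
    model (BEM). Reduces to the length-normalized score when all weights
    equal 1/L.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    return math.exp(sum(w * math.log(p) for w, p in zip(weights, token_probs)))

probs = [0.9, 0.8, 0.4]
uniform = weighted_score(probs, [1 / 3] * 3)
# Up-weighting the low-probability content token lowers the score.
focused = weighted_score(probs, [0.1, 0.1, 0.8])
```

With uniform weights the two scores coincide; concentrating weight on a low-probability, meaning-bearing token lowers the score, as intended by MARS.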
As described, token-level UQ methods estimate the uncertainty of an LLM from the conditional distribution associated with each token. Although this approach is generally effective, the conditional distributions can be misleading in certain scenarios, especially when an initial token is incorrect but all succeeding tokens are highly probable given that initial token. Trainable attention-based dependency (TAD) (Vazhentsev et al., 2024) trains a regression model on the conditional dependence between the tokens and applies the predicted factors to improve the estimated uncertainty of the LLM. Lastly, we present token-level UQ methods that use specific prompting strategies to estimate the model's confidence. The work in (Kadavath et al., 2022) shows that token-based UQ methods can be particularly effective when the model is prompted to select an option from a multiple-choice question. Specifically, the authors show that the model's probability distribution over the options in the prompt is well-calibrated for multiple-choice and True/False problems. Further, the authors fine-tune an LLM with a value head to predict, at each token, the probability that the model knows the answer to a given question; the probability associated with the final token is taken as the LLM's confidence in its response to the given prompt. The results demonstrate that these predicted probabilities are well-calibrated, with calibration improving for larger models. Other follow-on work leveraging multiple-choice problems to estimate the uncertainty of LLMs includes (Ren et al., 2023b).
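For multiple-choice prompts, the confidence studied in (Kadavath et al., 2022) reduces to a softmax over the logits the model assigns to the option tokens. A minimal sketch, with illustrative logit values:

```python
import math

def option_probabilities(option_logits):
    """Distribution over multiple-choice options from the logits the model
    assigns to each option token (e.g., "A", "B", "C", "D").

    Computes a numerically stable softmax restricted to the option tokens;
    (Kadavath et al., 2022) report that such distributions are often
    well-calibrated on multiple-choice and True/False problems.
    """
    m = max(option_logits.values())
    exps = {k: math.exp(v - m) for k, v in option_logits.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

# Illustrative logits for a four-option question.
probs = option_probabilities({"A": 2.0, "B": 0.5, "C": 0.1, "D": -1.0})
confidence = max(probs.values())  # probability of the selected option
```

The maximum option probability serves directly as the model's confidence in its selected answer.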
## 4. Self-Verbalized UQ
Self-verbalized uncertainty quantification methods seek to harness the impressive learning and reasoning capabilities of LLMs to enable an LLM to express its confidence in a given response in natural language. Self-verbalized uncertainty estimates (e.g., expressed as probabilities) are more easily interpretable to humans, especially when the estimates are provided using widely-used epistemic uncertainty markers (Tang et al., 2024; Yona et al., 2024), e.g., phrases like "I am not sure…" or "This response might be…". Figure 10 illustrates the use of epistemic markers by an LLM to convey its uncertainty when asked which team won the 2022 NBA Finals. The response of the LLM is actually incorrect; however, because the LLM expresses its uncertainty, a user may be more inclined to verify the factuality of the response. Prior work has shown that LLMs typically fail to accurately express their confidence in a given response, often using decisive words that suggest confidence even when they are unsure of the accuracy of their response. Empirical studies (Krause et al., 2023) have shown that the poor calibration of LLMs' self-verbalized confidence estimates is more pronounced in low-resource language settings, e.g., Hindi and Amharic.
<details>
<summary>extracted/6588127/images/self_verbalized_uq/epistemic_uncertainty_markers.png Details</summary>

Flowchart: a user asks "Which team won the 2022 NBA Finals?", and the LLM produces two responses. With epistemic markers: "I think the Milwaukee Bucks won the 2022 NBA Finals, but I am not sure." Without epistemic markers: "The Milwaukee Bucks won the 2022 NBA Finals." The uncertainty phrases "I think" and "not sure" are highlighted in red.
</details>
Figure 10. The LLM provides an incorrect response, but communicates its uncertainty using epistemic markers, e.g., "I think."
<details>
<summary>extracted/6588127/images/self_verbalized_uq/self_verbalized_uncertainty.png Details</summary>

Chat interface: a user asks "What is the capital of Luxembourg?"; the LLM answers "Luxembourg is the capital of Luxembourg." and adds, in a separate bubble, "Confidence: I am 98% confident."
</details>
Figure 11. LLMs can be trained or fine-tuned to provide numeric estimates of their confidence in the factuality of their response.
To address this challenge, prior work in (Mielke et al., 2022) trains a calibrator model that predicts the probability that an LLM's response to a given prompt is correct, given the input prompt, the response, and the LLM's internal representations of both. The output of the calibrator and the LLM's original response are subsequently used to fine-tune a generative model (Smith et al., 2020) to produce a linguistically calibrated response, aligning the verbal expression of the LLM's confidence with its probability of factual correctness. However, the resulting verbalized uncertainty lacks a numerical value, making it difficult for users to assess the relative confidence of the LLM. Follow-on work in (Lin et al., 2022) introduces the notion of verbalized probability, providing a definite numerical value of the model's confidence, e.g., as in Figure 11, or a scaled characterization of the model's confidence in words, e.g., low, medium, or high confidence. The authors of (Lin et al., 2022) fine-tune GPT-3 on their proposed CalibratedMath benchmark dataset using supervised learning, demonstrating that the verbalized probability generalizes well, although the best performance is achieved in in-distribution scenarios.
More recent work has investigated other training approaches for fine-tuning LLMs to accurately express their confidence verbally. LACIE (Stengel-Eskin et al., 2024) introduces a two-agent speaker-listener architecture to generate training data for fine-tuning an LLM, where the reward signal is a function of the ground-truth answer and the listener's perceived confidence in the speaker's response. In essence, LACIE aims to fine-tune an LLM to produce responses whose epistemic markers are aligned with the model's confidence in the correctness of its response. Likewise, the work in (Yang et al., 2024b) proposes a knowledge-transfer training architecture in which the knowledge of a larger LLM (the teacher), e.g., GPT-4 (Achiam et al., 2023), is distilled into a smaller LLM (the student), e.g., Vicuna-7B (Chiang et al., 2023), using chain-of-thought reasoning. The student LLM is fine-tuned to provide its confidence (expressed as a value between $0$ and $100$) along with its response to an input prompt. A line of existing work (Xu et al., 2024b; Tao et al., 2024) utilizes reinforcement learning to fine-tune an LLM to improve the alignment of its expressed confidence estimates with its factual accuracy. While SaySelf (Xu et al., 2024b) relies on self-reflective rationales to improve the calibration of the verbalized confidence, the work in (Tao et al., 2024) uses reinforcement learning from human feedback (RLHF) to define a reward function consisting of a quality component in addition to an alignment component. Similarly, the work in (Band et al., 2024) fine-tunes Llama 2 (Touvron et al., 2023) using supervised learning and reinforcement learning to produce calibrated verbalized confidence estimates that enable a user to make informed decisions on related questions.
Lastly, other recent work, e.g., (Yang et al., 2023a; Feng et al., 2024), seeks to fine-tune LLMs to abstain from providing an answer to a question when faced with doubt (Tomani et al., 2024), which is illustrated in Figure 12.
<details>
<summary>extracted/6588127/images/self_verbalized_uq/abstain_when_uncertain.png Details</summary>

Two panels for the query "How deep is the Pacific Ocean?". Default: the LLM answers "The Pacific Ocean is 3500 meters (11483 feet) deep." Abstain when Uncertain: the LLM responds "Sorry, I am unsure about the correct answer, so I humbly decline to provide an answer."
</details>
Figure 12. Some self-verbalized UQ methods fine-tune an LLM to refrain from answering when it is uncertain about the answer.
Despite these efforts, LLMs still often fail to accurately express their confidence verbally (Xiong et al., 2023; Groot and Valdenegro-Toro, 2024), typically exhibiting overconfidence, with confidence values primarily between 80% and 100%, often in multiples of $5$, mimicking the way humans express confidence. This weakness diminishes with increasing model size; nonetheless, large-scale LLMs still display overconfidence, albeit at a lower rate, and effective prompting strategies exist to reduce the calibration error of these models. Although verbalized confidence estimates are better calibrated than raw conditional token probabilities (Tian et al., 2023), existing empirical studies (Ni et al., 2024) suggest that token-based UQ methods generally yield better-calibrated uncertainty estimates than their self-verbalized counterparts.
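The calibration of verbalized confidences is commonly measured with the expected calibration error (ECE), which compares stated confidence to empirical accuracy within confidence bins. A minimal sketch, with illustrative data:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error (ECE) for verbalized confidences.

    `confidences`: self-reported confidence in [0, 1] per response;
    `correct`: 1 if the response was factually correct, else 0.
    ECE is the bin-weighted average gap between mean confidence and
    accuracy within each bin; overconfident models show confidence
    exceeding accuracy.
    """
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)
        bins[idx].append((c, y))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - acc)
    return ece

# Overconfident model: states 85-100% but is right only half the time.
ece = expected_calibration_error([0.95, 0.9, 1.0, 0.85], [1, 0, 1, 0])
```

A perfectly calibrated model attains an ECE of zero; the overconfident toy example above yields a large gap.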
## 5. Semantic-Similarity UQ
<details>
<summary>extracted/6588127/images/semantic_similarity_uq/semantically_similar_responses.png Details</summary>

Flowchart: the query "Where is Buckingham Palace in the United Kingdom?" is passed to an LLM, which generates randomly-sampled, semantically-similar responses, e.g., "Buckingham Palace is located in London." and "London is home to Buckingham Palace."
</details>
Figure 13. When prompted to answer a question, e.g., "Where is Buckingham Palace in the United Kingdom?", an LLM might generate many variations of the same sentence. Although the form of each response may differ at the token level, the semantic meaning of the sentences remains consistent. Semantic-similarity UQ techniques exploit semantic clustering to derive UQ methods that are robust to these variations in the form of the responses.
Semantic-similarity uncertainty quantification methods examine the similarity between multiple responses of an LLM to the same query (Kuhn et al., 2023; Chen and Mueller, 2023; Lin et al., 2023) by focusing on the meaning (i.e., the semantic content of a generated sentence) rather than the form (i.e., the string of tokens that the model predicts) (Kuhn et al., 2023) of the responses. For example, consider the prompt (question) given to an LLM: Where is Buckingham Palace in the United Kingdom? Standard sampling from an LLM can produce many variations of the same answer when prompted with this question, as illustrated in Figure 13. However, while an LLM may be uncertain about which sequence the user may prefer, most variations do not alter the meaning of the sentence. This difference in the ordering of the tokens in each response may lead to different token probabilities, which in turn may negatively impact the accuracy of other uncertainty quantification methods, such as token-level UQ methods.
Since semantic similarity is a relative metric, its outputs are in general model-dependent, posing a central challenge. A recent line of work uses NLI models, such as RoBERTa (Liu, 2019) and DeBERTa (He et al., 2020) (discussed in Section 2.4.2), to compute entailment probabilities. The work in (Aichberger et al., 2024) proposes upweighting tokens with large gradients with respect to the NLI model, maximizing the probability of contradiction in order to generate semantically-varied responses. In addition, the method in (Tanneru et al., 2024) proposes a chain-of-thought agreement (CoTA) metric that uses entailment probabilities to evaluate the agreement between CoT generations, concluding that CoTA semantic uncertainty leads to more robust model-faithfulness estimates than either self-verbalized or token-level uncertainty estimates. The authors of (Chen and Mueller, 2023) propose a combined measure of confidence that incorporates entailment probabilities along with a verbalized confidence score, and select the generation with the highest confidence. The UQ method in (Becker and Soatto, 2024) proposes generating multiple explanations for each plausible response and then summing the entailment probabilities. Another work (Kossen et al., 2024) introduces semantic entropy probes, wherein semantic clusters are grown iteratively using entailment probabilities: each new generation is either added to an existing cluster if entailment holds, or placed in a new cluster, and a linear classifier is then trained to predict high-entropy prompts. Furthermore, the method in (Martín et al., 2022) uses a database of user-verified false statements to build a semi-automated fact-checking system that uses entailment probabilities with database queries as a metric for confidence in a statement's falseness.
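Bidirectional-entailment clustering of the kind used by semantic entropy (Kuhn et al., 2023) can be sketched as below; the `toy_entails` predicate here is a crude stand-in for a real NLI model such as DeBERTa:

```python
import math

def semantic_clusters(responses, entails):
    """Greedy bidirectional-entailment clustering of sampled responses.

    `entails(a, b)` should return True if sentence a entails sentence b;
    in practice this is an NLI model, here it is any caller-supplied
    predicate. Two responses share a cluster iff each entails the other
    (checked against the cluster's first member as its representative).
    """
    clusters = []
    for r in responses:
        for c in clusters:
            rep = c[0]
            if entails(rep, r) and entails(r, rep):
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

def semantic_entropy(clusters, total):
    # Entropy over cluster frequencies, treating each sample as equiprobable.
    ps = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in ps)

# Toy stand-in for an NLI model: mutual entailment = equal bag of words.
toy_entails = lambda a, b: set(a.lower().split()) == set(b.lower().split())
resps = ["London is home", "home is London", "Paris"]
cl = semantic_clusters(resps, toy_entails)
```

The entropy over cluster frequencies is low when most samples land in one semantic cluster (the model is consistent) and high when the samples scatter across many meanings.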
In addition to using NLI models to evaluate factual similarities between responses, some methods use language embeddings (Petukhova et al., 2024) to cluster responses based on their semantic similarity and reason about uncertainty over the clusters, e.g., semantic density in (Qiu and Miikkulainen, 2024). First, several reference responses are generated by sampling the model. Then, the overall uncertainty per response is computed using the entailment scores, taking values in the set $\{0,0.5,1\}$. The semantic density is then used to accept or reject a target response based on its similarity to the reference responses. The supervised approach in (He and Li, 2024) utilizes the K-means algorithm to first cluster synonyms, which are attended by the LLM during training. The work in (Hu et al., 2024) introduces a method to achieve semantically-aligned item identification embeddings based on item descriptions, which aid in aligning LLM-based recommender systems with semantically-similar generations when item descriptions are sparse. Further, the method in (Wang et al., 2024a) prompts an LLM to generate concepts (effectively semantic clusters) and uses an NLI-based concept scorer along with the entropy over the concepts to quantify the overall uncertainty of the LLM. ClusterLLM (Zhang et al., 2023b) uses a frozen instruction-trained LLM to guide clustering based on triplet queries (e.g., does A match B better than C?), achieving more semantically-aligned embeddings.
However, assigning responses to a single cluster precludes assignment to another, when in reality a response may belong to more than one class, limiting the effectiveness of clustering-based semantic-similarity UQ methods. To address this challenge, another line of work extends clustering-based methods to graphs, which can express the complex relationships between responses more explicitly. The work in (Ao et al., 2024) proposes Contrastive Semantic Similarity, which uses responses as vertices and CLIP cosine similarities as edges. The overall uncertainty is computed from the eigenvalues of the graph Laplacian, and the eigenvectors can be used to assign clusters more effectively. Similarly, the approach in (Da et al., 2024) uses edge weights determined directly from NLI models and extends the graph-Laplacian-based uncertainty metric to include additional semantic-similarity measures, such as the Jaccard similarity. The authors of (Jiang et al., 2024) introduce a claim-and-response graph structure wherein an edge is added between a claim and a response if the response entails the claim; centrality metrics are used to estimate per-claim uncertainty, and low-uncertainty claims are integrated into further generations. In addition, Kernel Language Entropy (Nikitin et al., 2024) clusters responses to construct a kernelized graph Laplacian, which is used to estimate fine-grained differences between responses within a cluster.
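The graph-Laplacian view can be sketched as follows; the similarity matrix below is hand-made, standing in for NLI entailment probabilities or embedding cosine similarities between sampled responses:

```python
import numpy as np

def laplacian_uncertainty(similarity):
    """Graph-based semantic uncertainty from a pairwise similarity matrix.

    `similarity[i, j]` in [0, 1] measures the semantic similarity between
    responses i and j. The multiplicity of the (near-)zero eigenvalues of
    the normalized graph Laplacian counts connected components (semantic
    clusters), and the eigenvalue sum sum_k max(0, 1 - lambda_k) serves as
    a continuous confidence score (higher = more agreement).
    """
    W = (similarity + similarity.T) / 2  # symmetrize
    d = W.sum(axis=1)
    # Symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    eigvals = np.linalg.eigvalsh(L)
    n_clusters = int(np.sum(eigvals < 1e-8))
    confidence = float(np.sum(np.maximum(0.0, 1.0 - eigvals)))
    return n_clusters, confidence

# Two agreeing responses and one outlier form two components.
S = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
k, conf = laplacian_uncertainty(S)
```

Unlike hard clustering, the spectrum degrades gracefully for responses that partially overlap several clusters, since the eigenvalues vary continuously with the edge weights.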
A few works have also been proposed that learn to estimate semantic meaning with supervised approaches, without NLI models. In (Liu et al., 2024d), the authors use an auxiliary tool LLM to compute a similarity score between the target LLM's generation and the tool LLM's generation, and learn an uncertainty estimation function that predicts this similarity score. In (Jung et al., 2024), the authors propose a cascading chain of increasingly complex LLM judges, where each judge evaluates its predecessor's preference between two generations. A calibration dataset is used to learn a threshold that determines each judge's minimum confidence level; the confidence thresholds are tuned to guarantee that the appropriate judge is selected to generate a satisfactory response.
## 6. Mechanistic Interpretability
Mechanistic interpretability (MI) aims to understand the inner workings of LLMs to pinpoint the potential sources of uncertainty, by uncovering causal relationships (Bereska and Gavves, 2024). Several survey papers have provided a taxonomy of mechanistic interpretability in the field of transformer-based language models (Rai et al., 2024), focused on AI safety (Bereska and Gavves, 2024) or interpretability of language models in general (Zhao et al., 2024a).
<details>
<summary>extracted/6588127/images/mechanistic_interpretability/MITaxonomy.png Details</summary>

Diagram: hypotheses (Superposition, Universality) inform the fundamental objects (Features, Circuits), which in turn connect to the methods (SAEs, Probing, Logit Lens).
</details>
Figure 14. Taxonomy of Mechanistic Interpretability (Rai et al., 2024).
We start by discussing a few key concepts of mechanistic interpretability, summarized in Figure 14. Features are the units for encoding knowledge in a neural network. For example, a neuron or set of neurons consistently activating for the Golden Gate Bridge can be interpreted as the "Golden Gate Bridge" feature (Templeton et al., 2024). Superposition (Elhage et al., 2022) is often a key hypothesis in mechanistic interpretability (Bereska and Gavves, 2024), since the same neuron seems to activate in multiple, distinct contexts, a phenomenon known as polysemanticity (Cunningham et al., 2023). The superposition hypothesis claims that a set of $N$ neurons encodes $M>N$ features by allocating each feature to a linear combination of neurons in almost-orthogonal directions, yielding an overcomplete basis. On the other hand, the work in (Engels et al., 2024) suggests that there exist circular features corresponding to days of the week and months of the year, breaking the assumption that high-level features are linearly represented in the activation space. Circuits, another fundamental concept, refer to sub-graphs of the network consisting of features and the weights connecting them. Recent research has aimed to perform comprehensive circuit analysis on LLMs in order to construct a full mapping from specific circuits to functionalities of the language model (Lieberum et al., 2024; Dunefsky et al., 2024). The hypothesis of universality, related to both features and circuits, claims that similar features and circuits exist across different LLMs.
Methods in MI can be broadly classified into the following categories: logit lens, probing, and sparse auto-encoder methods, each discussed briefly below. Logit lens methods project the activations from various layers of the LLM back into the vocabulary space, allowing for interpreting intermediate predictions and the information encoded in activations (Geva et al., 2020; Lieberum et al., 2023). Probing methods aim to find which intermediate activations encode specific information (e.g., syntactic, semantic, or factual knowledge), by training a linear classifier as a probe to predict the existence of a certain feature (Belinkov, 2022; Gurnee et al., 2023). Despite being simple and successful, probing methods only reveal correlations instead of causal relations, limiting their use in MI. Sparse auto-encoders (SAEs) represent a popular architecture applied in MI to directly identify meaningful feature activations within LLMs and the causal relations between them. SAEs map the feature vectors onto a much higher-dimensional space with strong sparsity, in order to disentangle the features that were in superposition. In these methods, an encoder-decoder pair $(z,\hat{x})$ is trained to map $\hat{x}(z(x))$ back to the model's activation $x$, given by: ${z=\sigma(W_{\text{enc}}x+b_{\text{enc}})}$, ${\hat{x}=W_{\text{dec}}z+b_{\text{dec}}}$. The specific implementation of the activation function can vary, with a common choice given by the ReLU (Dunefsky et al., 2024; Cunningham et al., 2023). In (Gao et al., 2024), ${\sigma=\mathrm{TopK}}$ is used to keep only the $k$-largest latents, simplifying tuning and outperforming ReLU. In (Lieberum et al., 2024), ${\sigma=\mathrm{JumpReLU}}$ is chosen due to its slightly better performance and its ability to allow for a variable number of active latents at different tokens.
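The encoder-decoder pair above can be sketched as follows. This is a minimal illustration with random (untrained) weights and hypothetical dimensions, not a faithful reimplementation of any of the cited SAEs; in practice the weights are trained with a reconstruction loss plus a sparsity penalty (or the TopK constraint) on $z$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d_sae > d_model, so the latent basis is overcomplete.
d_model, d_sae = 16, 64
W_enc = rng.normal(0.0, 0.1, (d_sae, d_model))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0.0, 0.1, (d_model, d_sae))
b_dec = np.zeros(d_model)

def sae_forward(x, k=None):
    """z = sigma(W_enc x + b_enc); x_hat = W_dec z + b_dec.
    With k set, sigma composes TopK with ReLU: only the k largest
    latents stay active, in the spirit of (Gao et al., 2024)."""
    z = np.maximum(W_enc @ x + b_enc, 0.0)  # ReLU activation
    if k is not None:
        z[np.argsort(z)[:-k]] = 0.0  # zero all but the k largest latents
    x_hat = W_dec @ z + b_dec
    return z, x_hat

x = rng.normal(size=d_model)  # stand-in for an LLM activation vector
z, x_hat = sae_forward(x, k=8)
```

Training would minimize $\lVert x-\hat{x}\rVert^{2}$, plus an $\ell_1$ penalty on $z$ when ReLU is used without the TopK constraint.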
In (Dunefsky et al., 2024), the authors train the architecture differently with transcoders, where the faithfulness term in the loss function measures the error between the output and the original MLP sub-layer output, instead of the original input. In (Yun et al., 2021), the authors hypothesize that contextualized word embeddings are linear superpositions of transformer factors. For example, the word "apple" can be decomposed into: ${\mathrm{apple}=0.09\,\mathrm{dessert}+0.11\,\mathrm{organism}+0.16\,\mathrm{fruit}+0.22\,\mathrm{mobile\&IT}+0.42\,\mathrm{other}}$. The authors aim to learn a comprehensive dictionary of word factors. In doing so, they distinguish between low-, mid-, and high-level factors by looking at the change in the importance score across layers. Low-level factors correspond to word-level polysemy disambiguation; mid-level factors capture sentence-level pattern formation; and high-level factors correspond to long-range dependency. High-level factors must be distinguished from mid-level factors manually, although this could also be done with black-box interpretation algorithms. In (Tamkin et al., 2023), the authors quantize features into sparse "codebook" features, providing the capability to control the network behavior.
Prior work has employed techniques from mechanistic interpretability to track the progress of models during training (Nanda et al., 2023), to explain the outputs of models (Schwab and Karlen, 2019), and to improve the accuracy of LLMs (Burns et al., 2022). The work in (Burns et al., 2022) demonstrates that the accuracy of the latent knowledge of LLMs is less sensitive to the input prompts, with its accuracy remaining relatively constant even when the LLM is prompted to generate incorrect responses. Likewise, ReDeEP (Sun et al., 2024a) examines the latent knowledge of an LLM to decouple the effects of external knowledge from knowledge bases and the internal knowledge in the model on hallucinations in retrieval-augmented generation. Further, prior work has examined hallucinations in LLMs through the lens of mechanistic interpretability (Yu et al., 2024; Wang et al., 2024d). The work in (Yu et al., 2024) investigates the role of an LLM's hidden states in contributing to hallucinations, quantifying the contributions of lower-layer and upper-layer MLPs and attention heads to factual errors. In addition, the method in (Ferrando et al., 2024) leverages mechanistic interpretability to identify the boundaries of an LLM's internal knowledge of its own capabilities, which could be used to prevent a model from answering questions on certain subjects (i.e., in safeguarding the model) or to prevent hallucinations when the model does not know about certain subjects. Lastly, the work in (Ahdritz et al., 2024) trains small classifiers (linear and non-linear MLPs) on the activations of a small LLM to predict the uncertainty level of a larger LLM, demonstrating that the classifiers generalize to unseen distributions. Although there is an inextricable link between understanding the inner workings of LLMs and quantifying their uncertainty when prompted by a user, the connections between mechanistic interpretability and uncertainty quantification have not been extensively explored.
For example, certain neural activation patterns in LLMs might be associated with the expression of uncertainty by the model. In addition, when faced with doubt, an LLM might utilize certain features (words/concepts), that could be detected from its neural activations. Identifying the specific intermediate activations and features of an LLM that are relevant for uncertainty quantification remains an open research challenge. We describe this open challenge in Section 10.4.
## 7. Calibration of Uncertainty
In many cases, the confidence estimates computed by the UQ methods presented in the preceding sections are not well-calibrated, i.e., not aligned with the observed frequencies of the responses (the accuracy of the model). However, the reliability of the confidence estimates of an LLM's output remains crucial to the safe deployment of LLMs. As a result, we would like the confidence estimates to be calibrated. Formally, for a perfectly-calibrated confidence estimate $p$, we have that, ${\forall p\in[0,1]}$:
$$
\mathbb{P}[Y=\hat{Y}\mid\hat{P}=p]=p, \tag{2}
$$
where $Y$ and $\hat{Y}$ represent random variables denoting the ground-truth and predicted outputs from the model, respectively, and $\hat{P}$ represents a random variable denoting the confidence associated with the predicted output $\hat{Y}$ (Guo et al., 2017). In Figure 15, we show poorly-calibrated confidence estimates on the left, where the estimated confidence of the model is not well-aligned with the observed accuracy of the model. The dashed line illustrates perfect alignment between the estimated confidence of the model and its accuracy. In this example, confidence estimates of the model above $0.5$ tend to be overconfident, exceeding the accuracy of the model. Conversely, confidence estimates that are less than $0.5$ tend to be underconfident. Calibration techniques improve the alignment of the estimated confidence of the model with the observed accuracy, with the estimated confidence more closely following the dashed line, as shown on the right in Figure 15. We review some metrics for quantifying the calibration of a model's confidence estimates.
<details>
<summary>extracted/6588127/images/calibration/calibrated_confidence.png Details</summary>

### Visual Description
## Bar Charts: Accuracy vs. Confidence Before and After Calibration
### Overview
The image contains two side-by-side bar charts comparing accuracy and confidence levels before and after a calibration process. Each chart uses vertical bars to represent accuracy at specific confidence thresholds (0, 0.5, 1) and includes a dashed trend line illustrating the ideal relationship between confidence and accuracy.
### Components/Axes
- **X-Axis (Confidence)**: Labeled "Confidence," scaled from 0 to 1 in increments of 0.5.
- **Y-Axis (Accuracy)**: Labeled "Accuracy," scaled from 0 to 1 in increments of 0.2.
- **Legend**: No explicit legend is present, but colors are inferred:
- **Blue bars**: "Before Calibration" (left chart).
- **Green bars**: "After Calibration" (right chart).
- **Dashed lines**: Red (before) and purple (after), representing the ideal 1:1 relationship between confidence and accuracy.
### Detailed Analysis
#### Before Calibration (Left Chart)
- **Bars**:
- Confidence 0: Accuracy ≈ 0.1.
- Confidence 0.5: Accuracy ≈ 0.4.
- Confidence 1: Accuracy ≈ 0.8.
- **Trend Line**: Red dashed line slopes from (0,0) to (1,1), indicating the ideal scenario where accuracy equals confidence. Actual bars deviate significantly below this line except at confidence 1.
#### After Calibration (Right Chart)
- **Bars**:
- Confidence 0: Accuracy ≈ 0.1.
- Confidence 0.5: Accuracy ≈ 0.3.
- Confidence 1: Accuracy ≈ 0.8.
- **Trend Line**: Purple dashed line identical to the "Before" chart, spanning (0,0) to (1,1). Bars show improved alignment with the trend line at confidence 1 but diverge at confidence 0.5.
### Key Observations
1. **Improvement at High Confidence**: Accuracy at confidence 1 remains unchanged (0.8) before and after calibration, but the trend line suggests an ideal value of 1.0, indicating persistent underperformance.
2. **Decline at Mid-Confidence**: Accuracy at confidence 0.5 decreases from 0.4 (before) to 0.3 (after), contradicting the expectation of calibration improving performance.
3. **Baseline Consistency**: Accuracy at confidence 0 remains unchanged (0.1) in both scenarios, suggesting no improvement in low-confidence predictions.
### Interpretation
The calibration process appears to have mixed effects:
- **Positive Impact**: At high confidence (1), the systemâs accuracy aligns more closely with the ideal trend line after calibration, though it still underperforms.
- **Negative Impact**: At mid-confidence (0.5), calibration reduces accuracy, raising concerns about unintended consequences or overfitting to specific confidence thresholds.
- **Baseline Limitations**: The systemâs inability to improve low-confidence predictions (confidence 0) suggests fundamental limitations in the modelâs foundational assumptions or data quality.
The trend lines highlight a persistent gap between ideal performance (1:1 relationship) and actual results, emphasizing the need for further refinement in calibration methodology or model architecture.
</details>
Figure 15. The confidence estimates provided by many UQ methods are not always calibrated, i.e., the observed frequencies do not match the estimates. Calibration techniques correct these confidence estimates for better alignment with the observed accuracy.
#### Expected Calibration Error (ECE)
The Expected Calibration Error (ECE) measures the expected deviation between the left-hand side and right-hand side of (2), with: ${\mathbb{E}_{\hat{P}}\left[\left\lvert\mathbb{P}[Y=\hat{Y}\mid\hat{P}=p]-p \right\rvert\right],}$ where the expectation is taken over the random variable $\hat{P}$. Computing the expectation in the ECE is intractable in general. Hence, the work in (Naeini et al., 2015) introduces an approximation of the ECE, which partitions the confidence estimates into equal-width bins and computes the difference bin-wise, with: ${\mathrm{ECE}=\sum_{m=1}^{M}\frac{\lvert B_{m}\rvert}{n}\left\lvert\mathrm{acc}(B_{m})-\mathrm{conf}(B_{m})\right\rvert,}$ where the confidence estimates are divided into $M$ bins with the $m$-th bin denoted by $B_{m}$, and $\mathrm{acc}$ and $\mathrm{conf}$ denote the average accuracy and confidence of the samples in a bin.
#### Maximum Calibration Error (MCE)
Alternatively, we may seek to quantify the maximum deviation between the left-hand and right-hand sides of (2), representing the worst-case error, which is often useful in safety-critical applications. The Maximum Calibration Error (MCE) is given by: ${\max_{p\in[0,1]}\left\lvert\mathbb{P}[Y=\hat{Y}\mid\hat{P}=p]-p\right\rvert,}$ which is also challenging to compute exactly, like the ECE. As a result, we can estimate an upper bound, given by: ${\mathrm{MCE}=\max_{m\in\{1,\ldots,M\}}\left\lvert\mathrm{acc}(B_{m})-\mathrm{ conf}(B_{m})\right\rvert,}$ as introduced in (Naeini et al., 2015). Metrics for quantifying the calibration error of confidence estimates are further discussed in (Guo et al., 2017; Niculescu-Mizil and Caruana, 2005; Nixon et al., 2019).
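The binned ECE and MCE estimates above can be computed directly. The sketch below assumes equal-width, right-inclusive bins and arrays of per-sample confidences and correctness indicators; it is an illustration of the binned estimators, not the reference implementation from the cited works.

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """Binned estimates of ECE and MCE (in the style of Naeini et al., 2015).
    confidences: per-sample confidence estimates in [0, 1].
    correct: per-sample indicators (1 if the prediction matched the label)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece, mce = 0.0, 0.0
    for m in range(n_bins):
        # Right-inclusive bins; the first bin also contains confidence 0.
        in_bin = (confidences > edges[m]) & (confidences <= edges[m + 1])
        if m == 0:
            in_bin |= confidences == 0.0
        if not in_bin.any():
            continue
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += (in_bin.sum() / n) * gap  # |B_m|/n * |acc(B_m) - conf(B_m)|
        mce = max(mce, gap)              # worst-case bin gap
    return ece, mce
```

For example, two predictions made with confidence 0.9 of which only one is correct give ECE = MCE = 0.4, since the lone occupied bin has accuracy 0.5 against confidence 0.9.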
We can categorize calibration techniques for uncertainty estimation as either training-based or training-free calibration methods. Training-based calibration methods comprise supervised techniques that modify the networkâs weights and various types of self-verbalization, where the model qualifies and refines its outputs based on its own reasoning or feedback about uncertainty. In contrast, training-free calibration methods include statistical techniques that operate on a frozen learned model.
### 7.1. Training-Free Calibration Methods
Training-free calibration methods do not modify the weights of the model to produce calibrated predictions, e.g., Platt scaling (Platt et al., 1999), isotonic regression (Zadrozny and Elkan, 2001, 2002), and conformal prediction (Shafer and Vovk, 2008). Here, we discuss conformal prediction in greater detail. Conformal prediction (CP) is a powerful technique used to quantify the uncertainty of a model's predictions by providing prediction sets that are guaranteed to contain the true outcome with a specified probability. Given a prediction model $f$ and a calibration dataset $\mathcal{D}_{\text{cal}}=\{(x_{i},y_{i})\}_{i=1}^{N}$, conformal prediction aims to compute a set of nonconformity scores $\mathcal{S}=\{s_{i}\}_{i=1}^{N}$ over $\mathcal{D}_{\text{cal}}$, which reflect how closely each prediction $f(x_{i})$ (such as the confidence estimate provided by the aforementioned UQ methods) aligns with the true label $y_{i}$. Given a coverage level $\hat{\varepsilon}$ (effectively a budget for incorrect predictions) and $\mathcal{S}$, CP aims to construct a prediction set $C(x_{n+1})$ for a new test data point $x_{n+1}$: ${C(x_{n+1})=\left\{y:f(x_{n+1})\leq q_{1-\hat{\varepsilon}}(s_{1},s_{2},\dots, s_{n})\right\},}$ along with the probabilistic guarantee: ${\mathbb{P}(x_{n+1}\in C(x_{n+1})\mid\mathcal{D}_{\text{cal}})\geq 1-\varepsilon(\delta),}$ where $q_{1-\hat{\varepsilon}}$ is the $(1-\hat{\varepsilon})$-quantile of the nonconformity scores from the calibration set and $\delta$ is a tunable failure probability associated with the randomness in sampling $\mathcal{D}_{\text{cal}}$. By applying a Hoeffding-style argument (Shafer and Vovk, 2008), one can show that $\varepsilon$ can be selected, e.g., using the cumulative distribution function of the Beta distribution: ${\varepsilon:=\text{Beta}^{-1}_{N+1-v,v}(\delta),\quad v:=\lfloor(N+1)\hat{\varepsilon}\rfloor,}$ where $\hat{\varepsilon}$ is the target coverage level.
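For intuition, a split conformal procedure over a discrete label space can be sketched as follows. The nonconformity score $s=1-f(x)[y]$ and the finite-sample $(N+1)$ quantile correction are standard illustrative choices, not the specific construction of any single cited work; the calibration scores below are hypothetical.

```python
import numpy as np

def conformal_quantile(cal_scores, eps_hat):
    """(1 - eps_hat)-quantile of the calibration nonconformity scores,
    with the usual (N + 1) finite-sample correction."""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1.0 - eps_hat)) / n, 1.0)
    return np.quantile(cal_scores, level, method="higher")

def prediction_set(label_probs, q_hat):
    """All labels whose nonconformity score 1 - p stays within the quantile."""
    return [i for i, p in enumerate(label_probs) if 1.0 - p <= q_hat]

# Hypothetical calibration scores s_i = 1 - f(x_i)[y_i]: mostly confident.
cal_scores = np.linspace(0.01, 0.20, 20)
q_hat = conformal_quantile(cal_scores, eps_hat=0.1)
labels = prediction_set([0.9, 0.05, 0.05], q_hat)
```

Here only the label with predicted probability 0.9 clears the calibration quantile, so the prediction set is a singleton while still honoring the 90% coverage budget on the calibration data.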
Provided that the nonconformity scores represent the true conditional probabilities, conformal prediction produces the tightest prediction set that minimizes the number of false positives (i.e., maximizes the discriminative power) among all set-valued predictors such that the user-specified coverage level holds (Sadinle et al., 2019, Theorem 1). As a result, LLMs that are calibrated with conformal prediction will have the smallest prediction sets on average, and therefore the least ambiguity in their responses. A number of papers employ conformal prediction for uncertainty quantification of LLMs, e.g., for semantic uncertainty quantification (Wang et al., 2024b) and calibration (Liu and Wu, 2024). In addition to conformal prediction, information-theoretic approaches have been developed to manage and calibrate uncertainty in sequential decision-making processes (Zhao et al., 2022), e.g., entropy-rate control and multicalibration (Detommaso et al., 2024), which involves grouping data points into subgroups and ensuring the model is calibrated with respect to each of these subgroups. A model can also be calibrated to control a heuristic estimate of risk, such as human agreement (Jung et al., 2024) or Pareto-optimality of the response correctness (Zhao et al., 2024b).
### 7.2. Training-Based Calibration Methods
We can group training-based calibration techniques into ensemble-based calibration methods, few-shot calibration methods, and supervised calibration methods.
#### 7.2.1. Ensemble-Based Calibration
Ensemble-based calibration (model ensembling) seeks to estimate uncertainty by querying many similar models (for example, the same architecture trained with different random seeds) and comparing their outputs. Prompt ensembles enhance calibration by combining the outputs of multiple prompts (Jiang et al., 2023a). One common and effective ensembling strategy involves utilizing the majority vote. Given $K$ models predicting a response $l_{i}$, the majority vote is selected as: ${P_{\text{acc}}(\hat{y}=l_{i})=\sum_{k=1}^{K}P_{k}(\hat{y}_{k}=l_{i})\mathbb{I}(\hat{y}_{k}=l_{i}).}$ The ensemble vote is then the response $l_{i}$ with the highest aggregate confidence. Another class of ensemble-based methods evaluates overall (rather than per-choice) uncertainty, e.g., binning the model's responses into semantic categories and computing the entropy (Bakman et al., 2024; Ulmer et al., 2024). An ensemble-like effect can also be realized by varying the in-context examples provided to the LLM (Li et al., 2024b).
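The confidence-weighted majority vote above can be sketched as follows: each ensemble member contributes its confidence only to the label it predicted, per the indicator $\mathbb{I}(\hat{y}_{k}=l_{i})$. The labels and confidences in the example are hypothetical.

```python
from collections import defaultdict

def majority_vote(predictions):
    """predictions: (label, confidence) pairs, one per ensemble member.
    Returns the label with the highest aggregate confidence,
    P_acc(y = l_i) = sum_k P_k(y_k = l_i) * I(y_k = l_i)."""
    totals = defaultdict(float)
    for label, confidence in predictions:
        totals[label] += confidence  # confidence counts only for own label
    return max(totals, key=totals.get)

# Three moderately-confident votes outweigh one very confident dissent.
winner = majority_vote([("Paris", 0.6), ("Paris", 0.5),
                        ("Paris", 0.4), ("Lyon", 0.9)])
```

The aggregate confidence for "Paris" (1.5) exceeds that for "Lyon" (0.9), so the ensemble returns "Paris" even though the single most confident member disagrees.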
#### 7.2.2. Few-Shot Calibration
Few-shot calibration techniques employ several queries to the same model and benefit from sequential reasoning as the model evaluates its intermediate generations. For instance, prompting models to begin their responses with a fact and justification for the fact has been shown to improve calibration versus other types of linear reasoning, such as tree-of-thought (Zhao et al., 2024c; Wei et al., 2022). In the domain of code generation, calibration techniques have also been applied to improve the reliability of generated code (Spiess et al., 2024). Furthermore, inferring human preferences with in-context learning has been explored as a means to calibrate models in alignment with human judgments (Liu et al., 2023b).
#### 7.2.3. Supervised Calibration
Supervised calibration approaches, which mainly involve modifying the LLM's weights via additional losses, auxiliary models, or additional data, are also crucial in enhancing model calibration. In supervised methods, learning to classify generated responses as correct (i.e., via a cross-entropy loss) can result in better calibration than non-learning-based approaches and can help to combat overconfidence (Chen et al., 2022; Zhu et al., 2023; Johnson et al., 2024). In fact, some existing work argues that fine-tuning is necessary for the calibration of uncertainty estimates of LLMs (Kapoor et al., 2024). Given a language generator $\hat{f}$, score model (confidence) $\hat{P}$, and a dataset $\mathcal{D}:=\{(x,y)_{i}\}_{i=1}^{N}$ of data-label pairs, the token-level cross-entropy loss measures the uncertainty of the predicted labels $\hat{f}(x)$, on average, over the dataset: ${L_{\text{CE}}=-\mathbb{E}_{(x,y)\sim\mathcal{D}}[\log\hat{P}(y=\hat{f}(x))],}$ to improve the calibration of the confidence estimates of the model. While LLMs exhibit high-quality text generations ($\hat{f}$), their confidences ($\hat{P}$) may be improved by fine-tuning the model with a cross-entropy loss on the full dataset or a subset. Besides the cross-entropy function, other proper scoring rules can also be used for achieving calibration (Gneiting et al., 2007; Gneiting and Raftery, 2007). Reinforcement learning (with human feedback in some applications) may be used to fine-tune a model to produce realistic confidence estimates, e.g., (Band et al., 2024; Mao et al., 2024). Techniques such as learning to rationalize predictions with generative adversarial networks (Sha et al., 2021), applying regularization (Kong et al., 2020), and biasing token logits (Liu et al., 2024b; Zhao et al., 2021) have also been explored. Finally, sequence-level likelihood calibration has been proposed to improve the quality of LLM generations (Zhao et al., 2022).
Instead of modifying the model's weights, another class of supervised calibration methods seeks to modify model hyperparameters in a post-hoc manner. These include temperature tuning (Desai and Durrett, 2020) and methods involving entropy and logit differences (Lyu et al., 2024).
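As an illustration of post-hoc temperature tuning, a grid search over the temperature on held-out logits can be sketched as follows; typical implementations instead fit $T$ by gradient descent on the NLL, and the held-out data here are hypothetical. Dividing the logits by $T>1$ softens overconfident probabilities without changing the argmax prediction.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fit_temperature(logits_list, labels, grid=np.linspace(0.5, 5.0, 46)):
    """Pick the temperature minimizing held-out negative log-likelihood."""
    def nll(T):
        return -sum(np.log(softmax(l, T)[y])
                    for l, y in zip(logits_list, labels))
    return min(grid, key=nll)

# An overconfident two-class model: ~98% confidence but 75% accuracy,
# so the fitted temperature should exceed 1 to soften the probabilities.
held_out_logits = [[4.0, 0.0]] * 4
held_out_labels = [0, 0, 0, 1]
T_star = fit_temperature(held_out_logits, held_out_labels)
```

Since scaling all logits by the same positive constant preserves their ordering, the model's predictions are unchanged; only the confidence attached to them moves toward the observed accuracy.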
## 8. Datasets and Benchmarks
Here, we present useful benchmarks in uncertainty quantification for LLMs. The rapid development of highly-capable LLMs has led to the introduction of a slate of benchmarks for measuring advances on the different capabilities of these models. Some examples of these datasets include: GPQA (Rein et al., 2023), a domain-specific dataset with multiple-choice questions in the physical sciences; MMLU (Hendrycks et al., 2020), a multi-task dataset for evaluating the breadth of knowledge of LLMs across a wide range of subjects, e.g., the humanities and sciences; HellaSwag (Zellers et al., 2019), a dataset for evaluating an LLM's common-sense reasoning capability in sentence-completion tasks; RACE (Lai et al., 2017), a dataset for reading-comprehension evaluation; GSM8K (Cobbe et al., 2021), a dataset for evaluating the grade-school, math-solving capability of LLMs; and APPS (Hendrycks et al., 2021), a code-generation benchmark for LLMs. There has been a related line of work in developing datasets with inherent ambiguities (Kamath et al., 2024; Min et al., 2020; Liu et al., 2023a; Tamkin et al., 2022), e.g., "the cat was lost after leaving the house" meaning either that the cat was unable to find its way or that the cat was unable to be found (Min et al., 2020, Fig. 1), as well as datasets modeling clarifying questions in multi-turn conversations (Aliannejadi et al., 2021). However, experimental results associated with these datasets do not necessarily incorporate uncertainty evaluation beyond answering accuracy.
Although many of the aforementioned benchmarks have not been widely adopted in research on uncertainty quantification, a few benchmarks in natural-language processing have proven highly amenable to research in uncertainty quantification of LLMs, e.g., TriviaQA (Joshi et al., 2017), a dataset which consists of 95K question-answer pairs for evaluating an LLM's reading-comprehension skill. TriviaQA (Joshi et al., 2017) has been widely utilized in evaluating many methods for uncertainty quantification of LLMs (Kuhn et al., 2023; Mielke et al., 2022; Stengel-Eskin et al., 2024). Likewise, other methods have employed CoQA (Reddy et al., 2019), a dataset containing conversational question-answer pairs, and WikiBio (Lebret et al., 2016), a dataset containing biographies from Wikipedia, in evaluating the performance of UQ methods for LLMs. The CalibratedMath benchmark was introduced in (Lin et al., 2022) for examining the ability of LLMs to verbally express their confidence in solving arithmetic tasks. Moreover, datasets for evaluating the consistency of LLMs exist, e.g., ParaRel (Elazar et al., 2021), which consists of 328 paraphrases, generated by altering a set of prompts while keeping the semantic meaning of the prompts the same. Furthermore, HotpotQA (Yang et al., 2018) and StrategyQA (Geva et al., 2021) represent question-answering benchmarks consisting of question-answer pairs generated from Wikipedia, specifically designed to assess the ability of LLMs to perform multi-hop reasoning. Similarly, TruthfulQA (Lin et al., 2021) represents a factuality-oriented dataset, designed to evaluate the ability of LLMs to generate factual responses to questions that some humans might answer wrongly based on misconceptions.
Noting the connection between hallucination and uncertainty quantification, uncertainty quantification methods can leverage benchmarks for hallucination detection, e.g., HaluEval (Li et al., 2023), and datasets for factuality analysis and claim verification, e.g., FEVER (Thorne et al., 2018). Lastly, we note that there has been some work that aims to standardize the tasks for evaluating the performance of LLMs by explicitly accounting for the uncertainty of LLMs in specific tasks, e.g., based on selective classification and generation (Vashurin et al., 2024) or conformal prediction (Ye et al., 2024).
## 9. Applications
We highlight a few application areas of uncertainty quantification of LLMs, including its applications to chatbots and other textual use-cases and robotics.
### 9.1. Chatbot and Textual Applications
Given that LLMs are prone to hallucinate, existing work examines the integration of uncertainty quantification techniques in LLM-enabled chatbots. For example, recent work leverages uncertainty quantification techniques for LLMs in hallucination detection (Zhang et al., 2023a; Yadkori et al., 2024; Kossen et al., 2024; Tomani et al., 2024) and content and factuality analysis (Tai et al., 2024; Pacchiardi et al., 2023). Semantic entropy probes (SEPs) (Zhang et al., 2023a) utilize linear logistic models to predict semantic entropy from the hidden states of an LLM, demonstrating their effectiveness in detecting hallucinations on a variety of tasks. The approach in (Yadkori et al., 2024) introduces an information-theoretic metric for hallucination detection by estimating both the aleatoric and epistemic uncertainty of the LLM, with the premise that large epistemic uncertainty corresponds to hallucinations. Other downstream applications leverage hallucination detection to estimate the confidence of the LLM on the factuality of its response (Mahaut et al., 2024) or to actively improve the factuality of LLMs during the token-generation step (Chang et al., 2024).
In Figure 17, we illustrate an application of uncertainty quantification to detect hallucinations in LLMs. When asked for the smallest country in Asia by land area, the LLM provides a confident response. However, the low token-level confidence estimate reveals the uncertainty of the LLM, indicating a high likelihood of hallucination by the LLM. Drawing upon the association between factuality analysis and uncertainty quantification, the work in (Mohri and Hashimoto, 2024) employs conformal prediction to actively generate outputs that have a high probability of being facts. Further, the work in (Pacchiardi et al., 2023) trains a logistic regression classifier to detect outright lies in LLMs (i.e., false information provided by the LLM when the factual answer is known, as opposed to hallucinations where the LLM does not know the factual answer), by asking the LLM follow-up questions unrelated to the original prompt. Applications in sentiment analysis (Maltoudoglou et al., 2020) and content analysis (Xiao et al., 2023; Dai et al., 2023; Chew et al., 2023) utilize LLMs in characterizing the sentiments or opinions implied in text sources and in deductive coding to aid the identification of relevant themes across highly-varied documents, respectively. However, noting that LLMs are not necessarily consistent in their outputs, the LLMq method (Tai et al., 2024) examines the LLM's outputs for the presence of epistemic linguistic uncertainty markers and the consistency of the LLM's outputs to identify the thematic codes associated with the text. Further applications arise in text summarization (Kolagar and Zarcone, 2024), examining the alignment of uncertainty markers in the original source document and the LLM-generated summary.
Uncertainty quantification has also been explored within the context of jailbreaking LLMs. For example, the work in (Steindl et al., 2024) examines the connections between predictive entropy and jailbreak prompts, showing that the entropy of the LLM's tokens increases when an LLM is given jailbreak prompts. However, the LLM's uncertainty can be directly manipulated during the jailbreaking attempt (Zeng et al., 2024). In addition, the evaluation study in (Liu et al., 2024a) highlights that safeguard models for LLMs often show notable miscalibration in jailbreaking attempts. Further, existing work employs uncertainty quantification techniques to improve LLMs via fine-tuning (Osband et al., 2022; Niu et al., 2024; Yang et al., 2023b, 2024a). Other applications have explored uncertainty quantification in multi-step interaction and chain-of-thought prompting settings (Zhao et al., 2024d; Han et al., 2024), where the final output of an LLM depends on intermediate responses. To account for the influence of preceding responses, these methods propagate the LLM's uncertainty at each interaction phase. Similar uncertainty propagation techniques have been applied to sequential labeling problems (He et al., 2023). In other applications, uncertainty quantification methods for LLMs have been utilized in retrieval-augmented generation (Rouzrokh et al., 2024; Li et al., 2024a), using the framework of conformal prediction to provide provable guarantees. Moreover, some existing work utilizes conformal prediction in response generation from an LLM to identify prediction sets that are likely to contain the ground-truth with some guarantees (Quach et al., 2023; Kumar et al., 2023). Although prior work employing conformal prediction generally assumes access to the LLM's logits, conformal prediction can also be utilized with black-box LLMs, e.g., (Su et al., 2024b). Lastly, techniques and results from mechanistic interpretability can be used to predict the performance of LLMs at test time.
In (Schwab and Karlen, 2019), the authors train a causal explanation model to estimate model performance using sensitivity to input features. In (Nanda et al., 2023), the authors find that sudden emergent qualitative changes in LLMs can be predicted by reverse engineering the model. Further, recent works (Zimmermann et al., 2024) have shown that scaling up LLMs in terms of model size or dataset does not improve interpretability as previously believed, by surveying human participants.
<details>
<summary>extracted/6588127/images/applications/hallucination_detection.png Details</summary>

### Visual Description
## Flowchart Diagram: Confidence and Hallucination Analysis of AI Response
### Overview
The diagram illustrates a three-stage process analyzing an AI's response to a factual question. It shows a question about the smallest country in Asia by land area, followed by the AI's answer, confidence metrics, and hallucination risk assessment. The visual flow moves from top to bottom, with color-coded components indicating different stages of analysis.
### Components/Axes
1. **Top Box (Blue)**
- Text: "What is the most smallest country in Asia, by land area?"
- Contains a user icon in the top-right corner
- Represents the input question
2. **Middle Box (Pink)**
- Text: "Token-Level Confidence Estimate: 13%"
- Positioned directly below the blue box with a downward arrow
- Represents confidence assessment
3. **Bottom Box (Orange)**
- Text: "Hallucination Score: 80%"
- Positioned below the pink box with a downward arrow
- Represents risk assessment
4. **Connecting Elements**
- Black downward arrows between components
- Color-coded boxes (blue → pink → orange) creating visual hierarchy
### Detailed Analysis
- **Question/Answer Pair**
- Question: "What is the most smallest country in Asia, by land area?"
- Answer: "Nepal is the smallest country in Asia, by land area."
- Spatial relationship: Answer appears in the same box as the question
- **Confidence Metrics**
- Token-Level Confidence: 13% (pink box)
- Position: Directly below question/answer box
- Visual weight: Medium-sized box with bold text
- **Risk Assessment**
- Hallucination Score: 80% (orange box)
- Position: Bottom-most component
- Visual emphasis: Largest box with highest numerical value
### Key Observations
1. **Confidence-Hallucination Inversion**
- Despite high hallucination risk (80%), confidence remains low (13%)
- Suggests model uncertainty about its own response
2. **Geographical Inaccuracy**
- Nepal's actual land area: 147,181 km²
- Smallest Asian country by land area: Maldives (298 km²)
- Model's answer contains factual error
3. **Visual Hierarchy**
- Color progression (blue → pink → orange) creates descending importance
- Arrows establish clear causal flow from question to analysis
### Interpretation
The diagram reveals critical limitations in AI fact-checking capabilities:
1. **Model Uncertainty**: The 13% confidence score indicates the system recognizes its response as unreliable
2. **Hallucination Paradox**: High hallucination score (80%) suggests the model generated content not grounded in training data
3. **Factual Error**: The response contains incorrect information about Asian geography
4. **Design Implications**: The color-coded flow effectively communicates risk levels but fails to prevent misinformation
This analysis demonstrates the challenges in balancing confidence metrics with factual accuracy in AI systems, particularly for geographical knowledge where precise data exists but may not be properly weighted in the model's architecture.
</details>
Figure 16. Uncertainty quantification methods for LLMs have been employed in hallucination detection. LLMs tend to be less confident when hallucinating (measured via token-based metrics), although their responses may sound overly confident. In this example, although the LLM provides a confident response to the prompt, a token-level UQ method indicates that the LLM is uncertain, enabling hallucination detection.
<details>
<summary>extracted/6588127/images/applications/uq_llm_robotics.png Details</summary>

### Visual Description
## Screenshot: Robotics Task Interface
### Overview
The image depicts a robotics interface showing a robotic arm interacting with a table containing a ceramic bowl and a metal bowl. A text box displays a user query ("How do I clear the table?") and a system response listing four actions with associated probabilities. The interface includes a legend-like structure with colored numerical values and action descriptions.
### Components/Axes
- **Primary Elements**:
- **Robot Arm**: Positioned above a gray cabinet with a tabletop.
- **Table Setup**: Contains a ceramic bowl (left) and a metal bowl (right).
- **Text Boxes**:
1. **User Query**: "How do I clear the table?" (blue background, white text).
2. **System Response**: A list of four actions with probabilities (blue background, white text).
- **Legend-like Structure**:
- **Colors**: Green (0.51), Orange (0.33), Pink (0.10), Red (0.06).
- **Actions**:
- "Move the ceramic bowl into the top drawer." (green)
- "Move the metal bowl into the middle drawer." (orange)
- "Move the table to the living room." (pink)
- "Remove the top drawer." (red)
### Detailed Analysis
- **User Query**:
- Text: "How do I clear the table?"
- Position: Centered below the robot arm image.
- **System Response**:
- **Action 1**: "Move the ceramic bowl into the top drawer."
- Probability: 0.51 (green)
- **Action 2**: "Move the metal bowl into the middle drawer."
- Probability: 0.33 (orange)
- **Action 3**: "Move the table to the living room."
- Probability: 0.10 (pink)
- **Action 4**: "Remove the top drawer."
- Probability: 0.06 (red)
- **Legend-like Structure**:
- Colors are directly embedded in the text box, with no separate legend.
- Spatial grounding: Probabilities are listed in descending order (0.51 > 0.33 > 0.10 > 0.06).
### Key Observations
1. **Probability Distribution**:
- The highest probability (0.51) is assigned to moving the ceramic bowl, suggesting it is the most likely or prioritized action.
- The second-highest probability (0.33) is for moving the metal bowl, indicating a secondary priority.
- Lower probabilities (0.10 and 0.06) suggest less likely actions, such as relocating the table or removing a drawer.
2. **Robot Arm Position**:
- The arm is positioned over the table, implying readiness to execute the highest-probability action (moving the ceramic bowl).
3. **Ambiguity in Action 3**:
- "Move the table to the living room" is an unusual instruction, as tables are typically stationary. This may indicate a fallback or error in the system's logic.
### Interpretation
The interface demonstrates a probabilistic task-planning system for a robot. The system prioritizes actions based on likelihood, with the ceramic bowl being the most probable target. The inclusion of "Move the table to the living room" as a low-probability action raises questions about the system's training data or contextual understanding. The robot's positioning aligns with the highest-probability action, suggesting real-time decision-making. However, the low probability of removing the top drawer (0.06) may indicate a design flaw or edge case not fully addressed by the system.
## Notes
- No non-English text is present.
- All textual elements are transcribed directly from the image.
- The legend-like structure is embedded within the text box, with no separate visual legend.
- The robot arm's interaction with the table is implied but not explicitly labeled.
</details>
Figure 17. Robotics applications utilize UQ methods to estimate the LLM's confidence in the sub-tasks proposed by the LLM, to determine when human assistance is required.
### 9.2. Robotics
Endowing LLMs with an embodiment (physical form), as in robotics, presents unique challenges. Such embodiment essentially empowers LLMs to be agents of physical change, which can lead to potentially disastrous outcomes if the outputs of the LLMs are not reliable or trustworthy. Although LLMs (and vision-language models) have found widespread applications in robotics, e.g., robotic manipulation (Ahn et al., 2022; Brohan et al., 2022, 2023; Kim et al., 2024b), robotic navigation and exploration (Shah et al., 2023; Dorbala et al., 2023; Ren et al., 2024), and multi-robot collaboration (Kannan et al., 2023; Chen et al., 2024; Mandi et al., 2024), only a few of these applications explicitly consider the uncertainty of the LLMs to ensure safety, although other existing work (Wang et al., 2024c) utilizes LLMs to assess the success of a task without explicitly reasoning about the confidence of the LLM.
The work in (Tsai et al., 2024) fine-tunes the Mistral-7B LLM (Jiang et al., 2023b) to generate possible next actions for a decision-making agent and trains a neural point-wise dependency estimator to predict the compatibility score between a user-provided prompt and all generated actions. Subsequently, the authors employ conformal prediction to identify the more likely actions for a given prompt, which are presented to the user, who selects the next action. A collection of LLM-based task-planning work for robots examines the confidence an LLM assigns to its generated next-step plans to determine when human assistance or verification is required, illustrated in Figure 17. To determine when an LLM requires clarification from a human, KnowNo (Ren et al., 2023a) utilizes a token-based UQ approach to estimate the uncertainty of the LLM in generating possible next steps for a robot given a task, by examining the token probability assigned to each option in the list of possible next steps. Further, KnowNo employs conformal prediction to generate prediction sets over the possible next steps, with provable theoretical guarantees, prompting the human for help if the prediction set consists of more than one possible action. HERACLEs (Wang et al., 2023b) presents a similar pipeline within a Linear Temporal Logic framework, with multiple high-level sub-goals.
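A minimal sketch of this kind of pipeline (option-token probabilities, a split-conformal threshold, and a human-help trigger when the prediction set is not a singleton) is given below. This is not the implementation of any cited method; the calibration scores and option probabilities are hypothetical placeholders:

```python
import math

def calibrate_qhat(calib_scores, alpha=0.1):
    """Split-conformal quantile. calib_scores[i] = 1 - p(correct option)
    on held-out calibration example i; alpha is the miscoverage level."""
    n = len(calib_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # conformal rank
    return sorted(calib_scores)[min(k, n) - 1]

def conformal_prediction_set(option_probs, qhat):
    """Keep every candidate action whose option-token probability is at
    least 1 - qhat; under exchangeability, the set contains the correct
    action with probability at least 1 - alpha."""
    return {a for a, p in option_probs.items() if p >= 1.0 - qhat}

# Hypothetical option probabilities, mirroring Figure 17:
probs = {"move ceramic bowl to top drawer": 0.51,
         "move metal bowl to middle drawer": 0.33,
         "move table to living room": 0.10,
         "remove top drawer": 0.06}
# Hypothetical calibration scores (1 - p of the correct option):
qhat = calibrate_qhat([0.2, 0.5, 0.7, 0.3, 0.4, 0.1, 0.6, 0.8, 0.35],
                      alpha=0.2)
pred_set = conformal_prediction_set(probs, qhat)
if len(pred_set) > 1:
    print("Ambiguous; ask the human to choose among:", sorted(pred_set))
```

Here two actions clear the calibrated threshold, so the planner defers to the human rather than executing the top-ranked action.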
IntroPlan (Liang et al., 2024) extends KnowNo (Ren et al., 2023a) through introspective planning, where, given a task, the LLM retrieves the most relevant instance from a knowledge base constructed from few-shot, human-provided examples and reasons about the feasibility of the possible next actions. Introspective planning enables IntroPlan to generate prediction sets with tighter confidence bounds, minimizing human intervention. LAP (Mullen Jr and Manocha, 2024) further introduces an action-feasibility metric to improve the alignment of the LLM's confidence estimate with the probability of success, resulting in fewer clarification queries. S-ATLAS (Wang et al., 2024c) extends KnowNo to LLM-based multi-robot task planning, where a team of robots collaborates to complete a task. In addition, KnowLoop (Zheng et al., 2024) utilizes a multi-modal large language model (MLLM), e.g., LLaVa (Liu et al., 2024c) or ChatGPT-4V, for failure detection in LLM-based task planning. The MLLM evaluates the success of the task, given images of the environment at each stage, providing its feedback along with its estimated confidence, using either a self-verbalized approach or a token-level UQ method. KnowLoop (Zheng et al., 2024) demonstrates that token-level UQ approaches yield better-aligned uncertainty estimates compared to a self-verbalized UQ approach. Lastly, TrustNavGPT (Sun et al., 2024b) employs a similar architecture to evaluate the trustworthiness of human commands to an LLM in LLM-based, audio-guided robot navigation.
## 10. Open Research Challenges
We enumerate a number of open research challenges, hoping to drive future research to address these challenges.
### 10.1. Consistency is not Factuality
Many uncertainty quantification methods for LLMs rely on evaluating the consistency between multiple realizations of the response generated by LLMs. This approach faces fundamental limitations, since consistency is not necessarily aligned with factuality. For example, in Figure 18, when prompted to provide a response to the question: "What happened to Google in June 2007, in a single sentence?" GPT-4 claims that Google announced its mobile operating system Android in June 2007, which is incorrect, given that Android was launched in November 2007. In fact, when creating the set of responses for uncertainty quantification, multiple queries to GPT-4 generate the same incorrect response, which can lead to a miscalibrated confidence estimate. Notably, black-box methods that rely entirely on consistency are the most susceptible to this challenge.
Nonetheless, consistency is often a good predictor of factuality, especially given a sufficiently large number of samples. However, many existing methods do not rigorously examine the number of samples required to define a reliable set of responses when evaluating the consistency of an LLM on a given prompt, which constitutes a critical component for any guarantee on the estimated confidence of the model or the factuality of the model's response. Moreover, this challenge might be mitigated by a principled selection of the temperature parameter of an LLM to increase the randomness of the model; however, the effectiveness of this strategy is quite limited, as excessive randomness in the LLM's outputs defeats the purpose of examining the confidence of the model on a given prompt.
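The consistency-based estimation this challenge concerns can be sketched as follows; exact string matching stands in for the semantic-equivalence checks (e.g., entailment between responses) that actual methods use, and the sampled responses are invented:

```python
from collections import Counter

def consistency_confidence(responses):
    """Estimate confidence as the fraction of sampled responses that
    agree with the most frequent answer. Real methods replace exact
    matching with semantic equivalence; this sketch uses exact matching
    for brevity.
    """
    counts = Counter(responses)
    answer, freq = counts.most_common(1)[0]
    return answer, freq / len(responses)

# The failure mode of Figure 18: the LLM is consistently wrong, so the
# consistency-based estimate is high even though the claim is false.
samples = ["Google announced Android in June 2007."] * 5 \
        + ["Android was launched in November 2007."]
answer, conf = consistency_confidence(samples)
print(answer, round(conf, 2))  # high confidence for a false claim
```

The estimate reflects agreement among samples, not agreement with the ground truth, which is exactly why consistency is not factuality.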
<details>
<summary>extracted/6588127/images/open_research_challenges/consistency_and_factuality.png Details</summary>

### Visual Description
## Flowchart: LLM Response Generation and Fact-Checking Process
### Overview
The image depicts a flowchart illustrating the process of an LLM (Large Language Model) generating responses to a user query about historical events related to Google in June 2007, followed by a consistency estimate and fact-check validation.
### Components/Axes
1. **User Query**:
- Text box labeled: *"What happened to Google in June 2007?"*
- Positioned at the top of the flowchart, connected to the LLM via a downward arrow.
2. **LLM Component**:
- Green square labeled *"LLM"* with a circular logo (three interlocked lines).
- Receives input from the user query and generates responses.
3. **Randomly-Generated Responses**:
- Light blue rectangle containing two example responses:
- *"In June 2007, Google introduced Android, its mobile operating system."*
- *"Google launched its open-source mobile operating system Android in June 2007."*
- Connected to the LLM via a downward arrow.
4. **Consistency Estimate**:
- Pink rectangle labeled *"Consistency Estimate: 99%"*
- Positioned below the response examples, connected via a downward arrow.
5. **Fact-Check Validation**:
- Final section with a red "X" symbol and text:
- *"Fact-Check: False"*
- Positioned at the bottom of the flowchart.
### Detailed Analysis
- **User Query**: Explicitly asks about Google's activities in June 2007.
- **LLM Output**: Generates two nearly identical responses about Android's introduction in June 2007.
- **Consistency Estimate**: High confidence (99%) in the generated responses.
- **Fact-Check**: Explicitly marked as false, contradicting the LLM's output.
### Key Observations
1. The LLM produces responses with high internal consistency (99%) but fails fact-checking.
2. Both generated responses are factually incorrect (Android was launched in November 2007, not June).
3. The flowchart highlights a critical limitation: LLMs may generate confident but inaccurate outputs.
### Interpretation
This flowchart demonstrates a common challenge in AI systems: **confidence ≠ accuracy**. The LLM's high consistency estimate (99%) suggests strong internal coherence in its responses, but the fact-check reveals a factual error. This underscores the need for external validation mechanisms when deploying LLMs for factual tasks. The discrepancy between the model's confidence and the ground-truth fact-check highlights risks in relying solely on AI-generated information without verification.
</details>
Figure 18. Consistency is not factuality. Semantic-similarity UQ methods for LLMs might provide misleading confidence estimates, e.g., when multiple random responses from the LLM are consistent but false. In this example, the LLM consistently claims that Google introduced Android in June 2007, which is incorrect, given that Android was introduced in November 2007.
<details>
<summary>extracted/6588127/images/open_research_challenges/entropy_and_factuality.png Details</summary>

### Visual Description
## Screenshot: ChatInterface with Population Query and Fact-Check
### Overview
The image depicts a chat interface where a user asks, "What is the most populous country in the world in 2024?" The response claims the United States of America has a population of **345,426,571**, accompanied by a confidence estimate of **91%** from token probability. A subsequent fact-check explicitly labels this claim as **False**, marked with a red "X".
---
### Components/Axes
1. **Chat Messages**:
- **User Query (Top-Right)**:
- Text: *"What is the most populous country in the world in 2024?"*
- Visual: Blue speech bubble with white text; silhouette icon (anonymous user) in the top-right corner.
- **AI Response (Bottom-Left)**:
- Text: *"The United States of America with a population of 345,426,571."*
- Visual: Blue speech bubble with white text; green vertical bars (likely representing population data) and one yellow bar (smaller than green bars).
- **Confidence Estimate**:
- Text: *"Confidence Estimate from Token Probability: 91%"* in a pink rectangle.
- **Fact-Check**:
- Text: *"Fact-Check: False"* with a red "X" symbol.
2. **Visual Elements**:
- **Green Bars**: 8 vertical bars of varying heights (likely representing population data for multiple countries, though unlabeled).
- **Yellow Bar**: 1 shorter bar (possibly indicating a secondary data point, e.g., a different country or metric).
- **Red "X"**: Overlays the fact-check section, signaling incorrectness.
---
### Detailed Analysis
- **Population Claim**: The AI asserts the U.S. population is **345,426,571** (exact numerical value provided).
- **Confidence Estimate**: The model assigns a **91% probability** to this answer, suggesting high internal confidence in the token sequence.
- **Fact-Check Discrepancy**: The fact-check explicitly contradicts the AI's response, indicating the claim is **False**.
---
### Key Observations
1. **Incorrect Answer**: The AI's response is factually wrong. The most populous country in 2024 is **India** (estimated ~1.428 billion), not the U.S. (~339 million in 2023, with minimal growth projected for 2024).
2. **High Confidence, Low Accuracy**: The 91% confidence estimate highlights a potential flaw in the model's calibration: high token probability does not guarantee factual correctness.
3. **Visual Ambiguity**: The green/yellow bars lack labels, making it impossible to verify their intended representation (e.g., country populations, growth rates).
---
### Interpretation
- **Model Limitations**: The AI's response demonstrates that language models can generate plausible-sounding but incorrect answers, even with high confidence scores. This underscores the need for external fact-checking in critical applications.
- **User Interface Design**: The use of color-coded bars (green/yellow) and a red "X" provides visual cues for data validity, but the absence of labels reduces interpretability.
- **Ethical Implications**: Deploying such systems without robust fact-checking mechanisms risks spreading misinformation, particularly in high-stakes domains like demographics, healthcare, or policy.
---
### Conclusion
This screenshot illustrates the tension between model confidence and factual accuracy. While the AI's response is visually structured and numerically specific, the explicit fact-check reveals its inaccuracy. The image serves as a cautionary example of the importance of integrating verification systems into AI-driven information pipelines.
</details>
Figure 19. Using the conditional distribution of tokens for uncertainty quantification (e.g., in token-level UQ methods) can lead to misleading uncertainty estimates. In this example, the uncertainty of the LLM is notably low, since the succeeding tokens are highly likely given the preceding tokens. However, the claim is incorrect. The most populous country in the world in 2024 is India, not the United States of America. The bars denote the probability of each token.
### 10.2. Entropy is not Factuality
Entropy and other token-based UQ metrics of the token probability distribution in an LLM's output are not necessarily aligned with the factuality of the model's output, although entropy and factuality are often aligned. In particular, the distribution over the tokens is a function of the size of the LLM (including its dictionary of tokens) and the diversity and size of the training data, which can influence the alignment of entropy and factuality. Hence, token-based UQ methods might produce highly miscalibrated confidence estimates for a given prompt, when these estimates are computed entirely from the distribution over the tokens. For example, in a worst-case scenario where the training data is corrupted or insufficient, an LLM might assign most of its probability to an incorrect answer (token) which is most closely related to the training data, leading to a miscalibrated estimate of its confidence. Moreover, reinforcement learning with human feedback (RLHF), which is utilized in fine-tuning LLMs, generally reduces the calibration of the LLM's confidence estimates (Achiam et al., 2023). Further, the conditional distribution of each token might not be indicative of the factuality of an LLM's response at the claim level (sentence level), i.e., although each generated token might be highly likely given the preceding tokens, the overall claim expressed by the LLM might not be correct (Vazhentsev et al., 2024), as illustrated in Figure 19.
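This pitfall can be made concrete with a toy computation of a common token-level confidence score; the per-token probabilities below are invented for illustration, not taken from any model:

```python
import math

def sequence_confidence(token_logprobs):
    """Length-normalized sequence probability (the geometric mean of the
    per-token probabilities), a common token-level confidence score."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Invented per-token probabilities for a fluent but false claim like the
# one in Figure 19: each token is highly likely given the preceding
# tokens, so the sequence-level confidence is high regardless of truth.
logprobs = [math.log(p) for p in [0.95, 0.90, 0.92, 0.88, 0.91]]
print(round(sequence_confidence(logprobs), 3))  # ≈ 0.91
```

The score measures how predictable the token sequence is under the model, not whether the claim it expresses is factual.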
Future research should explore aligning the entropy of tokens with the factuality of the claims expressed by LLMs, and should examine augmentation strategies that account for the influence of the training data on the probability distribution over the generated tokens, ultimately improving the alignment of entropy and other token-based measures of uncertainty with factuality. Moreover, the probability distributions over the tokens of an LLM can be manipulated in jailbreaking attacks, leading to misleading confidence estimates and, in some cases, non-factual responses (Zeng et al., 2024). Future research should seek to improve the robustness of token-level uncertainty quantification methods to such adversarial attacks. Further, few existing methods explore uncertainty quantification of LLMs in text summarization, which is critical to the preservation of factual records, constituting an important direction for future research.
### 10.3. Applications in Interactive LLM-Enabled Agents
Although some existing applications explore uncertainty quantification in LLM-enabled agents, e.g., see Section 9.2, many of these applications only estimate the LLM's uncertainty at each episode without considering the history of the agent's interaction with the LLM. However, many practical applications require multi-episode interactions, where the LLM generates successive responses based on information from preceding episodes with the agent. For example, in the scenario depicted in Figure 17, the robot may be asked to prepare a meal for a user, which would require multi-episode interactions, where each episode corresponds to a given sub-task, such as dicing some vegetables before sautéing them. Note that utilizing many existing techniques for uncertainty quantification would require the assumption that the LLM's uncertainty at each episode is independent of its prior interaction history, an assumption that is generally not satisfied in real-world applications. Rigorous uncertainty quantification of the LLM's outputs requires consideration of the history of the agent's interaction with the LLM and, in the case of VLMs, its observations (e.g., camera images). This yet-unexplored research area constitutes an exciting direction for future research.
### 10.4. Applications of Mechanistic Interpretability to Uncertainty Quantification
The connections between interpretability of LLMs and uncertainty quantification remain relatively unexplored, despite the intuitive relationship between the two concepts. Mechanistic interpretability holds notable potential for exploiting the synergy between both areas to derive solutions to some of the aforementioned research challenges. For example, the work in (Ahdritz et al., 2024) predicts the token-level confidence of large LLMs using small linear probes (models) trained on the embeddings of frozen pretrained models, suggesting the existence of a relationship between the internal states of LLMs and their confidence. The authors further indicate that information on the internal state of an LLM could be utilized to distinguish the epistemic uncertainty of the model from its aleatoric uncertainty. However, this research area is relatively unexplored, presenting a potentially fruitful direction for future research.
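The linear-probing idea can be sketched as follows, with synthetic "hidden states" standing in for actual LLM embeddings and binary correctness labels standing in for the probe targets; none of this reproduces the cited work's setup:

```python
import numpy as np

def train_linear_probe(H, y, lr=0.1, steps=500):
    """Train a logistic-regression probe w on frozen hidden states H
    (n_samples x d) to predict correctness labels y in {0, 1}.

    The probe itself is tiny, and the underlying model (here, the
    synthetic embedding generator) is never updated.
    """
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-H @ w))   # predicted P(correct)
        w -= lr * H.T @ (p - y) / len(y)   # gradient step on log-loss
    return w

# Synthetic "hidden states": one direction weakly encodes correctness.
rng = np.random.default_rng(0)
n, d = 400, 8
H = rng.normal(size=(n, d))
y = (H[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(float)
w = train_linear_probe(H, y)
acc = float(np.mean(((H @ w) > 0) == (y > 0.5)))
print(f"probe accuracy on training data: {acc:.2f}")
```

If a probe this small recovers the label well, the information was already linearly available in the internal states, which is the kind of evidence the cited work builds on.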
### 10.5. Datasets and Benchmarks
Although a number of datasets and benchmarks for uncertainty quantification exist (Joshi et al., 2017; Reddy et al., 2019; Yang et al., 2018; Lin et al., 2021), to the best of our knowledge, no dataset exists for uncertainty quantification of LLMs in multi-episode interaction scenarios. Future research should examine the creation of versatile, standardized datasets that aid research on uncertainty quantification of LLMs, taking into consideration the history of interaction between a user and an LLM. Moreover, benchmarks on uncertainty quantification of LLMs can help inform researchers of the relative performance of their proposed methods. Unfortunately, widely-accepted benchmarks for uncertainty quantification of LLMs do not exist, although some work has been devoted to developing such benchmarks. Future work should seek to create suitable benchmarks for this purpose, especially benchmarks that evaluate the calibration, tightness (conservativeness), and interpretability of uncertainty quantification methods. However, benchmarks can also introduce other challenges by disconnecting research from practical concerns, reducing the assessment of research advances to outperforming existing work on some metric defined in a benchmark. Hence, care must be taken to ensure that benchmarks remain relevant to practical effectiveness.
## 11. Conclusion
In this survey, we provide a comprehensive review of existing uncertainty quantification methods for LLMs, including relevant background information necessary for readers. We categorize UQ methods for LLMs into four broad classes based on the underlying technique employed by these methods, namely: token-based UQ methods, self-verbalized UQ methods, semantic-similarity-based methods, and mechanistic interpretability. Token-based UQ methods rely on access to an LLM's intermediate outputs or architecture to estimate the confidence of the LLM, whereas in self-verbalized UQ methods, the LLM provides its estimated confidence in natural language. Many semantic-similarity-based methods are black-box methods which only require access to the model's natural-language output, relying on consistency metrics to estimate the LLM's confidence. In contrast, mechanistic interpretability requires access to the LLM's internal activations to identify latent features that explain its activation patterns. Furthermore, we identify relevant datasets and applications for uncertainty quantification of LLMs and highlight open research challenges to inspire future research.
Acknowledgements. We would like to acknowledge Apurva S. Badithela and David Snyder for their contributions. This work was partially supported by the NSF CAREER Award [#2044149], the Office of Naval Research [N00014-23-1-2148], and the Sloan Fellowship. Justin Lidard was supported by a National Science Foundation Graduate Research Fellowship.
## References
- Achiam et al. (2023) Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
- Ahdritz et al. (2024) Gustaf Ahdritz, Tian Qin, Nikhil Vyas, Boaz Barak, and Benjamin L Edelman. 2024. Distinguishing the knowable from the unknowable with language models. arXiv preprint arXiv:2402.03563 (2024).
- Ahn et al. (2022) Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. 2022. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691 (2022).
- Aichberger et al. (2024) Lukas Aichberger, Kajetan Schweighofer, Mykyta Ielanskyi, and Sepp Hochreiter. 2024. Semantically Diverse Language Generation for Uncertainty Estimation in Language Models. arXiv preprint arXiv:2406.04306 (2024).
- Aliannejadi et al. (2021) Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeffrey Dalton, and Mikhail Burtsev. 2021. Building and evaluating open-domain dialogue corpora with clarifying questions. arXiv preprint arXiv:2109.05794 (2021).
- Alkaissi and McFarlane (2023) Hussam Alkaissi and Samy I McFarlane. 2023. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus 15, 2 (2023).
- Anthropic (2024) AI Anthropic. 2024. The Claude 3 model family: Opus, Sonnet, Haiku. Claude-3 Model Card 1 (2024).
- Ao et al. (2024) Shuang Ao, Stefan Rueger, and Advaith Siddharthan. 2024. CSS: Contrastive Semantic Similarity for Uncertainty Quantification of LLMs. arXiv preprint arXiv:2406.03158 (2024).
- Arteaga et al. (2024) Gabriel Y Arteaga, Thomas B Schön, and Nicolas Pielawski. 2024. Hallucination Detection in LLMs: Fast and Memory-Efficient Finetuned Models. arXiv preprint arXiv:2409.02976 (2024).
- Ayhan and Berens (2018) Murat Seckin Ayhan and Philipp Berens. 2018. Test-time data augmentation for estimation of heteroscedastic aleatoric uncertainty in deep neural networks. In Medical Imaging with Deep Learning.
- Azamfirei et al. (2023) Razvan Azamfirei, Sapna R Kudchadkar, and James Fackler. 2023. Large language models and the perils of their hallucinations. Critical Care 27, 1 (2023), 120.
- Azaria and Mitchell (2023) Amos Azaria and Tom Mitchell. 2023. The internal state of an LLM knows when it's lying. arXiv preprint arXiv:2304.13734 (2023).
- Bahat and Shakhnarovich (2020) Yuval Bahat and Gregory Shakhnarovich. 2020. Classification confidence estimation with test-time data-augmentation. arXiv preprint arXiv:2006.16705 (2020).
- Bai et al. (2024) Zechen Bai, Pichao Wang, Tianjun Xiao, Tong He, Zongbo Han, Zheng Zhang, and Mike Zheng Shou. 2024. Hallucination of multimodal large language models: A survey. arXiv preprint arXiv:2404.18930 (2024).
- Bakman et al. (2024) Yavuz Faruk Bakman, Duygu Nur Yaldiz, Baturalp Buyukates, Chenyang Tao, Dimitrios Dimitriadis, and Salman Avestimehr. 2024. MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs. arXiv preprint arXiv:2402.11756 (2024).
- Balabanov and Linander (2024) Oleksandr Balabanov and Hampus Linander. 2024. Uncertainty quantification in fine-tuned LLMs using LoRA ensembles. arXiv preprint arXiv:2402.12264 (2024).
- Band et al. (2024) Neil Band, Xuechen Li, Tengyu Ma, and Tatsunori Hashimoto. 2024. Linguistic Calibration of Long-Form Generations. In Forty-first International Conference on Machine Learning.
- Becker and Soatto (2024) Evan Becker and Stefano Soatto. 2024. Cycles of Thought: Measuring LLM Confidence through Stable Explanations. arXiv preprint arXiv:2406.03441 (2024).
- Belinkov (2022) Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics 48, 1 (2022), 207–219.
- Bereska and Gavves (2024) Leonard Bereska and Efstratios Gavves. 2024. Mechanistic Interpretability for AI Safety – A Review. arXiv preprint arXiv:2404.14082 (2024).
- Brohan et al. (2023) Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al. 2023. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818 (2023).
- Brohan et al. (2022) Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. 2022. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817 (2022).
- Brown (2020) Tom B Brown. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020).
- Buciluǎ et al. (2006) Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. 535–541.
- Bulian et al. (2022) Jannis Bulian, Christian Buck, Wojciech Gajewski, Benjamin Boerschinger, and Tal Schuster. 2022. Tomayto, tomahto. beyond token-level answer equivalence for question answering evaluation. arXiv preprint arXiv:2202.07654 (2022).
- Burns et al. (2022) Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. 2022. Discovering latent knowledge in language models without supervision. arXiv preprint arXiv:2212.03827 (2022).
- Cavalcanti et al. (2016) George DC Cavalcanti, Luiz S Oliveira, Thiago JM Moura, and Guilherme V Carvalho. 2016. Combining diversity measures for ensemble pruning. Pattern Recognition Letters 74 (2016), 38–45.
- Chang et al. (2024) Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, and Tagyoung Chung. 2024. REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy. arXiv preprint arXiv:2406.07735 (2024).
- Chen and Mueller (2023) Jiuhai Chen and Jonas Mueller. 2023. Quantifying uncertainty in answers from any language model via intrinsic and extrinsic confidence assessment. arXiv preprint arXiv:2308.16175 (2023).
- Chen et al. (2024) Yongchao Chen, Jacob Arkin, Yang Zhang, Nicholas Roy, and Chuchu Fan. 2024. Scalable multi-robot collaboration with large language models: Centralized or decentralized systems?. In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 4311–4317.
- Chen et al. (2023) Yuyan Chen, Qiang Fu, Yichen Yuan, Zhihao Wen, Ge Fan, Dayiheng Liu, Dongmei Zhang, Zhixu Li, and Yanghua Xiao. 2023. Hallucination detection: Robustly discerning reliable answers in large language models. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 245–255.
- Chen et al. (2022) Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, and Heng Ji. 2022. A close look into the calibration of pre-trained language models. arXiv preprint arXiv:2211.00151 (2022).
- Chew et al. (2023) Robert Chew, John Bollenbacher, Michael Wenger, Jessica Speer, and Annice Kim. 2023. LLM-assisted content analysis: Using large language models to support deductive coding. arXiv preprint arXiv:2306.14924 (2023).
- Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. https://vicuna.lmsys.org (accessed 14 April 2023).
- Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 (2021).
- Condoravdi et al. (2003) Cleo Condoravdi, Dick Crouch, Valeria De Paiva, Reinhard Stolle, and Daniel Bobrow. 2003. Entailment, intensionality and text understanding. In Proceedings of the HLT-NAACL 2003 workshop on Text meaning. 38–45.
- Cronin et al. (2017) Robert M Cronin, Daniel Fabbri, Joshua C Denny, S Trent Rosenbloom, and Gretchen Purcell Jackson. 2017. A comparison of rule-based and machine learning approaches for classifying patient portal messages. International journal of medical informatics 105 (2017), 110–120.
- Cunningham et al. (2023) Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. 2023. Sparse autoencoders find highly interpretable features in language models. arXiv preprint arXiv:2309.08600 (2023).
- Da et al. (2024) Longchao Da, Tiejin Chen, Lu Cheng, and Hua Wei. 2024. LLM Uncertainty Quantification through Directional Entailment Graph and Claim Level Response Augmentation. arXiv preprint arXiv:2407.00994 (2024).
- Dagan et al. (2005) Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Machine learning challenges workshop. Springer, 177–190.
- Dai et al. (2023) Shih-Chieh Dai, Aiping Xiong, and Lun-Wei Ku. 2023. LLM-in-the-loop: Leveraging large language model for thematic analysis. arXiv preprint arXiv:2310.15100 (2023).
- Delacroix (2024) Sylvie Delacroix. 2024. Augmenting judicial practices with LLMs: re-thinking LLMs' uncertainty communication features in light of systemic risks. Available at SSRN (2024).
- Desai and Durrett (2020) Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. arXiv preprint arXiv:2003.07892 (2020).
- Detommaso et al. (2024) Gianluca Detommaso, Martin Bertran, Riccardo Fogliato, and Aaron Roth. 2024. Multicalibration for confidence scoring in LLMs. arXiv preprint arXiv:2404.04689 (2024).
- Devlin (2018) Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
- Dorbala et al. (2023) Vishnu Sashank Dorbala, James F Mullen Jr, and Dinesh Manocha. 2023. Can an embodied agent find your "cat-shaped mug"? LLM-based zero-shot object navigation. IEEE Robotics and Automation Letters (2023).
- Dubey et al. (2024) Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 (2024).
- Dunefsky et al. (2024) Jacob Dunefsky, Philippe Chlenski, and Neel Nanda. 2024. Transcoders Find Interpretable LLM Feature Circuits. arXiv preprint arXiv:2406.11944 (2024).
- Duquenne et al. (2023) Paul-Ambroise Duquenne, Holger Schwenk, and Benoît Sagot. 2023. SONAR: sentence-level multimodal and language-agnostic representations. arXiv e-prints (2023), arXiv–2308.
- Elazar et al. (2021) Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. Transactions of the Association for Computational Linguistics 9 (2021), 1012–1031.
- Elhage et al. (2022) Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, et al. 2022. Toy models of superposition. arXiv preprint arXiv:2209.10652 (2022).
- Engels et al. (2024) Joshua Engels, Isaac Liao, Eric J Michaud, Wes Gurnee, and Max Tegmark. 2024. Not All Language Model Features Are Linear. arXiv preprint arXiv:2405.14860 (2024).
- Fadeeva et al. (2024) Ekaterina Fadeeva, Aleksandr Rubashevskii, Artem Shelmanov, Sergey Petrakov, Haonan Li, Hamdy Mubarak, Evgenii Tsymbalov, Gleb Kuzmin, Alexander Panchenko, Timothy Baldwin, et al. 2024. Fact-checking the output of large language models via token-level uncertainty quantification. arXiv preprint arXiv:2403.04696 (2024).
- Feng et al. (2020) Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic BERT sentence embedding. arXiv preprint arXiv:2007.01852 (2020).
- Feng et al. (2024) Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, and Yulia Tsvetkov. 2024. Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration. arXiv preprint arXiv:2402.00367 (2024).
- Ferrando et al. (2024) Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan, and Neel Nanda. 2024. Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models. arXiv preprint arXiv:2411.14257 (2024).
- Fyodorov et al. (2000) Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. 2000. A natural logic inference system. In Proceedings of the 2nd workshop on inference in computational semantics (ICoS-2).
- Gal and Ghahramani (2016) Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning. PMLR, 1050–1059.
- Gal et al. (2017) Yarin Gal, Jiri Hron, and Alex Kendall. 2017. Concrete dropout. Advances in neural information processing systems 30 (2017).
- Gao et al. (2024) Leo Gao, Tom Dupré la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. 2024. Scaling and evaluating sparse autoencoders. arXiv preprint arXiv:2406.04093 (2024).
- Geng et al. (2024) Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, and Iryna Gurevych. 2024. A Survey of Confidence Estimation and Calibration in Large Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 6577–6595.
- Geva et al. (2021) Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics 9 (2021), 346–361.
- Geva et al. (2020) Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2020. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913 (2020).
- Gneiting et al. (2007) Tilmann Gneiting, Fadoua Balabdaoui, and Adrian E Raftery. 2007. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society Series B: Statistical Methodology 69, 2 (2007), 243–268.
- Gneiting and Raftery (2007) Tilmann Gneiting and Adrian E Raftery. 2007. Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association 102, 477 (2007), 359–378.
- Groot and Valdenegro-Toro (2024) Tobias Groot and Matias Valdenegro-Toro. 2024. Overconfidence is Key: Verbalized Uncertainty Evaluation in Large Language and Vision-Language Models. arXiv preprint arXiv:2405.02917 (2024).
- Guo et al. (2017) Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International conference on machine learning. PMLR, 1321–1330.
- Guo et al. (2018) Huaping Guo, Hongbing Liu, Ran Li, Changan Wu, Yibo Guo, and Mingliang Xu. 2018. Margin & diversity based ordering ensemble pruning. Neurocomputing 275 (2018), 237–246.
- Gurnee et al. (2023) Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. 2023. Finding neurons in a haystack: Case studies with sparse probing. arXiv preprint arXiv:2305.01610 (2023).
- Han et al. (2024) Jiuzhou Han, Wray Buntine, and Ehsan Shareghi. 2024. Towards uncertainty-aware language agent. arXiv preprint arXiv:2401.14016 (2024).
- Hastings (1970) W Keith Hastings. 1970. Monte Carlo sampling methods using Markov chains and their applications. Oxford University Press.
- He et al. (2023) Jianfeng He, Linlin Yu, Shuo Lei, Chang-Tien Lu, and Feng Chen. 2023. Uncertainty estimation on sequential labeling via uncertainty transmission. arXiv preprint arXiv:2311.08726 (2023).
- He and Li (2024) Lin He and Keqin Li. 2024. Mitigating Hallucinations in LLM Using K-means Clustering of Synonym Semantic Relevance. Authorea Preprints (2024).
- He et al. (2020) Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654 (2020).
- Hendrycks et al. (2021) Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938 (2021).
- Hendrycks et al. (2020) Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 (2020).
- Hinton (2015) Geoffrey Hinton. 2015. Distilling the Knowledge in a Neural Network. arXiv preprint arXiv:1503.02531 (2015).
- Hou et al. (2023) Bairu Hou, Yujian Liu, Kaizhi Qian, Jacob Andreas, Shiyu Chang, and Yang Zhang. 2023. Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023).
- Hu et al. (2021) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021).
- Hu et al. (2024) Jun Hu, Wenwen Xia, Xiaolu Zhang, Chilin Fu, Weichang Wu, Zhaoxin Huan, Ang Li, Zuoli Tang, and Jun Zhou. 2024. Enhancing sequential recommendation via llm-based semantic embedding learning. In Companion Proceedings of the ACM on Web Conference 2024. 103–111.
- Huang et al. (2024) Hsiu-Yuan Huang, Yutong Yang, Zhaoxi Zhang, Sanwoo Lee, and Yunfang Wu. 2024. A Survey of Uncertainty Estimation in LLMs: Theory Meets Practice. arXiv preprint arXiv:2410.15326 (2024).
- Huang et al. (2023b) Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. 2023b. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. arXiv preprint arXiv:2311.05232 (2023).
- Huang et al. (2021) Rui Huang, Andrew Geng, and Yixuan Li. 2021. On the importance of gradients for detecting distributional shifts in the wild. Advances in Neural Information Processing Systems 34 (2021), 677–689.
- Huang et al. (2023a) Yuheng Huang, Jiayang Song, Zhijie Wang, Shengming Zhao, Huaming Chen, Felix Juefei-Xu, and Lei Ma. 2023a. Look before you leap: An exploratory study of uncertainty measurement for large language models. arXiv preprint arXiv:2307.10236 (2023).
- Igoe et al. (2022) Conor Igoe, Youngseog Chung, Ian Char, and Jeff Schneider. 2022. How useful are gradients for ood detection really? arXiv preprint arXiv:2205.10439 (2022).
- Ji et al. (2023) Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. Comput. Surveys 55, 12 (2023), 1–38.
- Jiang et al. (2023b) Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023b. Mistral 7B. arXiv preprint arXiv:2310.06825 (2023).
- Jiang et al. (2023a) Mingjian Jiang, Yangjun Ruan, Sicong Huang, Saifei Liao, Silviu Pitis, Roger Baker Grosse, and Jimmy Ba. 2023a. Calibrating language models via augmented prompt ensembles. (2023).
- Jiang et al. (2024) Mingjian Jiang, Yangjun Ruan, Prasanna Sattigeri, Salim Roukos, and Tatsunori Hashimoto. 2024. Graph-based Uncertainty Metrics for Long-form Language Model Outputs. arXiv preprint arXiv:2410.20783 (2024).
- Johnson et al. (2024) Daniel D Johnson, Daniel Tarlow, David Duvenaud, and Chris J Maddison. 2024. Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs. arXiv preprint arXiv:2402.08733 (2024).
- Joshi et al. (2017) Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551 (2017).
- Jospin et al. (2022) Laurent Valentin Jospin, Hamid Laga, Farid Boussaid, Wray Buntine, and Mohammed Bennamoun. 2022. Hands-on Bayesian neural networks – A tutorial for deep learning users. IEEE Computational Intelligence Magazine 17, 2 (2022), 29–48.
- Jung et al. (2024) Jaehun Jung, Faeze Brahman, and Yejin Choi. 2024. Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement. arXiv preprint arXiv:2407.18370 (2024).
- Kadavath et al. (2022) Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221 (2022).
- Kamath et al. (2024) Gaurav Kamath, Sebastian Schuster, Sowmya Vajjala, and Siva Reddy. 2024. Scope Ambiguities in Large Language Models. Transactions of the Association for Computational Linguistics 12 (2024), 738–754.
- Kannan et al. (2023) Shyam Sundar Kannan, Vishnunandan LN Venkatesh, and Byung-Cheol Min. 2023. Smart-llm: Smart multi-agent robot task planning using large language models. arXiv preprint arXiv:2309.10062 (2023).
- Kapoor et al. (2024) Sanyam Kapoor, Nate Gruver, Manley Roberts, Katherine Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, and Andrew Gordon Wilson. 2024. Large Language Models Must Be Taught to Know What They Don't Know. arXiv preprint arXiv:2406.08391 (2024).
- Katz et al. (2024) Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. 2024. Gpt-4 passes the bar exam. Philosophical Transactions of the Royal Society A 382, 2270 (2024), 20230254.
- Keeling and Street (2024) Geoff Keeling and Winnie Street. 2024. On the attribution of confidence to large language models. arXiv preprint arXiv:2407.08388 (2024).
- Kim et al. (2024b) Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. 2024b. OpenVLA: An Open-Source Vision-Language-Action Model. arXiv preprint arXiv:2406.09246 (2024).
- Kim et al. (2024a) Sunnie SY Kim, Q Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, and Jennifer Wortman Vaughan. 2024a. "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. In The 2024 ACM Conference on Fairness, Accountability, and Transparency. 822–835.
- Kolagar and Zarcone (2024) Zahra Kolagar and Alessandra Zarcone. 2024. Aligning Uncertainty: Leveraging LLMs to Analyze Uncertainty Transfer in Text Summarization. In Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024). 41–61.
- Kong et al. (2020) Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, and Chao Zhang. 2020. Calibrated language model fine-tuning for in-and out-of-distribution data. arXiv preprint arXiv:2010.11506 (2020).
- Kossen et al. (2024) Jannik Kossen, Jiatong Han, Muhammed Razzak, Lisa Schut, Shreshth Malik, and Yarin Gal. 2024. Semantic entropy probes: Robust and cheap hallucination detection in llms. arXiv preprint arXiv:2406.15927 (2024).
- Krause et al. (2023) Lea Krause, Wondimagegnhue Tufa, Selene Báez Santamaría, Angel Daza, Urja Khurana, and Piek Vossen. 2023. Confidently wrong: exploring the calibration and expression of (Un) certainty of large language models in a multilingual setting. In Proceedings of the workshop on multimodal, multilingual natural language generation and multilingual WebNLG Challenge (MM-NLG 2023). 1–9.
- Kuhn et al. (2023) Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. arXiv preprint arXiv:2302.09664 (2023).
- Kumar et al. (2023) Bhawesh Kumar, Charlie Lu, Gauri Gupta, Anil Palepu, David Bellamy, Ramesh Raskar, and Andrew Beam. 2023. Conformal prediction with large language models for multi-choice question answering. arXiv preprint arXiv:2305.18404 (2023).
- Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683 (2017).
- Lakshminarayanan et al. (2017) Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems 30 (2017).
- Lebret et al. (2016) Rémi Lebret, David Grangier, and Michael Auli. 2016. Generating text from structured data with application to the biography domain. CoRR, abs/1603.07771 (2016).
- Ledda et al. (2023) Emanuele Ledda, Giorgio Fumera, and Fabio Roli. 2023. Dropout injection at test time for post hoc uncertainty quantification in neural networks. Information Sciences 645 (2023), 119356.
- Lee and AlRegib (2020) Jinsol Lee and Ghassan AlRegib. 2020. Gradients as a measure of uncertainty in neural networks. In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2416–2420.
- Lee et al. (2018) Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. (2018).
- Li et al. (2024b) Chengzu Li, Han Zhou, Goran Glavaš, Anna Korhonen, and Ivan Vulić. 2024b. Can Large Language Models Achieve Calibration with In-Context Learning?. In ICLR 2024 Workshop on Reliable and Responsible Foundation Models.
- Li et al. (2023) Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023. Halueval: A large-scale hallucination evaluation benchmark for large language models. arXiv preprint arXiv:2305.11747 (2023).
- Li et al. (2024a) Shuo Li, Sangdon Park, Insup Lee, and Osbert Bastani. 2024a. TRAQ: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 3799–3821.
- Liang et al. (2024) Kaiqu Liang, Zixu Zhang, and Jaime Fernández Fisac. 2024. Introspective Planning: Guiding Language-Enabled Agents to Refine Their Own Uncertainty. arXiv preprint arXiv:2402.06529 (2024).
- Lieberum et al. (2023) Tom Lieberum, Matthew Rahtz, János Kramár, Neel Nanda, Geoffrey Irving, Rohin Shah, and Vladimir Mikulik. 2023. Does circuit analysis interpretability scale? evidence from multiple choice capabilities in chinchilla. arXiv preprint arXiv:2307.09458 (2023).
- Lieberum et al. (2024) Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, János Kramár, Anca Dragan, Rohin Shah, and Neel Nanda. 2024. Gemma scope: Open sparse autoencoders everywhere all at once on gemma 2. arXiv preprint arXiv:2408.05147 (2024).
- Lin (2004) Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out. 74–81.
- Lin et al. (2021) Stephanie Lin, Jacob Hilton, and Owain Evans. 2021. Truthfulqa: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958 (2021).
- Lin et al. (2022) Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334 (2022).
- Lin et al. (2023) Zhen Lin, Shubhendu Trivedi, and Jimeng Sun. 2023. Generating with confidence: Uncertainty quantification for black-box large language models. arXiv preprint arXiv:2305.19187 (2023).
- Ling et al. (2024) Chen Ling, Xujiang Zhao, Wei Cheng, Yanchi Liu, Yiyou Sun, Xuchao Zhang, Mika Oishi, Takao Osaki, Katsushi Matsuda, Jie Ji, et al. 2024. Uncertainty decomposition and quantification for in-context learning of large language models. arXiv preprint arXiv:2402.10189 (2024).
- Liu et al. (2023a) Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A Smith, and Yejin Choi. 2023a. We're afraid language models aren't modeling ambiguity. arXiv preprint arXiv:2304.14399 (2023).
- Liu et al. (2024a) Hongfu Liu, Hengguan Huang, Hao Wang, Xiangming Gu, and Ye Wang. 2024a. On Calibration of LLM-based Guard Models for Reliable Content Moderation. arXiv preprint arXiv:2410.10414 (2024).
- Liu et al. (2024c) Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2024c. Visual instruction tuning. Advances in neural information processing systems 36 (2024).
- Liu et al. (2024e) Hanchao Liu, Wenyuan Xue, Yifei Chen, Dapeng Chen, Xiutian Zhao, Ke Wang, Liping Hou, Rongjun Li, and Wei Peng. 2024e. A survey on hallucination in large vision-language models. arXiv preprint arXiv:2402.00253 (2024).
- Liu et al. (2024d) Linyu Liu, Yu Pan, Xiaocheng Li, and Guanting Chen. 2024d. Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach. arXiv preprint arXiv:2404.15993 (2024).
- Liu and Wu (2024) Terrance Liu and Zhiwei Steven Wu. 2024. Multi-group Uncertainty Quantification for Long-form Text Generation. arXiv preprint arXiv:2407.21057 (2024).
- Liu et al. (2024b) Xin Liu, Muhammad Khalifa, and Lu Wang. 2024b. LitCab: Lightweight Language Model Calibration over Short-and Long-form Responses. In The Twelfth International Conference on Learning Representations.
- Liu (2019) Yinhan Liu. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
- Liu et al. (2023b) Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang. 2023b. Calibrating llm-based evaluator. arXiv preprint arXiv:2309.13308 (2023).
- Liu et al. (2023c) Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, and Hang Li. 2023c. Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment. arXiv preprint arXiv:2308.05374 (2023).
- Loquercio et al. (2020) Antonio Loquercio, Mattia Segu, and Davide Scaramuzza. 2020. A general framework for uncertainty estimation in deep learning. IEEE Robotics and Automation Letters 5, 2 (2020), 3153–3160.
- Lyu et al. (2024) Qing Lyu, Kumar Shridhar, Chaitanya Malaviya, Li Zhang, Yanai Elazar, Niket Tandon, Marianna Apidianaki, Mrinmaya Sachan, and Chris Callison-Burch. 2024. Calibrating large language models with sample consistency. arXiv preprint arXiv:2402.13904 (2024).
- MacCartney and Manning (2008) Bill MacCartney and Christopher D Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008). 521–528.
- Mahaut et al. (2024) Matéo Mahaut, Laura Aina, Paula Czarnowska, Momchil Hardalov, Thomas Müller, and Lluís Màrquez. 2024. Factual Confidence of LLMs: on Reliability and Robustness of Current Estimators. arXiv preprint arXiv:2406.13415 (2024).
- Malinin and Gales (2020) Andrey Malinin and Mark Gales. 2020. Uncertainty estimation in autoregressive structured prediction. arXiv preprint arXiv:2002.07650 (2020).
- Maltoudoglou et al. (2020) Lysimachos Maltoudoglou, Andreas Paisios, and Harris Papadopoulos. 2020. BERT-based conformal predictor for sentiment analysis. In Conformal and Probabilistic Prediction and Applications. PMLR, 269–284.
- Manakul et al. (2023) Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896 (2023).
- Mandi et al. (2024) Zhao Mandi, Shreeya Jain, and Shuran Song. 2024. Roco: Dialectic multi-robot collaboration with large language models. In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 286–299.
- Mao et al. (2024) Xin Mao, Feng-Lin Li, Huimin Xu, Wei Zhang, and Anh Tuan Luu. 2024. Don't Forget Your Reward Values: Language Model Alignment via Value-based Calibration. arXiv preprint arXiv:2402.16030 (2024).
- Martín et al. (2022) Alejandro Martín, Javier Huertas-Tato, Álvaro Huertas-García, Guillermo Villar-Rodríguez, and David Camacho. 2022. FacTeR-Check: Semi-automated fact-checking through semantic similarity and natural language inference. Knowledge-based systems 251 (2022), 109265.
- Martinez-Munoz et al. (2008) Gonzalo Martinez-Munoz, Daniel Hernández-Lobato, and Alberto Suárez. 2008. An analysis of ensemble pruning techniques based on ordered aggregation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31, 2 (2008), 245–259.
- Mi et al. (2022) Lu Mi, Hao Wang, Yonglong Tian, Hao He, and Nir N Shavit. 2022. Training-free uncertainty estimation for dense regression: Sensitivity as a surrogate. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36. 10042–10050.
- Mielke et al. (2022) Sabrina J Mielke, Arthur Szlam, Emily Dinan, and Y-Lan Boureau. 2022. Reducing conversational agents' overconfidence through linguistic calibration. Transactions of the Association for Computational Linguistics 10 (2022), 857–872.
- Min et al. (2020) Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. arXiv preprint arXiv:2004.10645 (2020).
- Minaee et al. (2024) Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, and Jianfeng Gao. 2024. Large language models: A survey. arXiv preprint arXiv:2402.06196 (2024).
- Mohri and Hashimoto (2024) Christopher Mohri and Tatsunori Hashimoto. 2024. Language models with conformal factuality guarantees. arXiv preprint arXiv:2402.10978 (2024).
- Monz and de Rijke (2001) Christof Monz and Maarten de Rijke. 2001. Light-weight entailment checking for computational semantics. In Proc. of the third workshop on inference in computational semantics (ICoS-3).
- Mullen Jr and Manocha (2024) James F Mullen Jr and Dinesh Manocha. 2024. Towards Robots That Know When They Need Help: Affordance-Based Uncertainty for Large Language Model Planners. arXiv preprint arXiv:2403.13198 (2024).
- Naeini et al. (2015) Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the AAAI conference on artificial intelligence, Vol. 29.
- Nanda et al. (2023) Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. 2023. Progress measures for grokking via mechanistic interpretability. arXiv preprint arXiv:2301.05217 (2023).
- Ni et al. (2024) Shiyu Ni, Keping Bi, Lulu Yu, and Jiafeng Guo. 2024. Are Large Language Models More Honest in Their Probabilistic or Verbalized Confidence? arXiv preprint arXiv:2408.09773 (2024).
- Niculescu-Mizil and Caruana (2005) Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In Proceedings of the 22nd international conference on Machine learning. 625–632.
- Nikitin et al. (2024) Alexander Nikitin, Jannik Kossen, Yarin Gal, and Pekka Marttinen. 2024. Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities. arXiv preprint arXiv:2405.20003 (2024).
- Niu et al. (2024) Ruijia Niu, Dongxia Wu, Rose Yu, and Yi-An Ma. 2024. Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs. arXiv preprint arXiv:2410.06431 (2024).
- Nixon et al. (2019) Jeremy Nixon, Michael W Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. 2019. Measuring Calibration in Deep Learning.. In CVPR workshops, Vol. 2.
- Osband et al. (2022) Ian Osband, Seyed Mohammad Asghari, Benjamin Van Roy, Nat McAleese, John Aslanides, and Geoffrey Irving. 2022. Fine-tuning language models via epistemic neural networks. arXiv preprint arXiv:2211.01568 (2022).
- Osband et al. (2023) Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi, Xiuyuan Lu, and Benjamin Van Roy. 2023. Epistemic neural networks. Advances in Neural Information Processing Systems 36 (2023), 2795–2823.
- Pacchiardi et al. (2023) Lorenzo Pacchiardi, Alex J Chan, Sören Mindermann, Ilan Moscovitz, Alexa Y Pan, Yarin Gal, Owain Evans, and Jan Brauner. 2023. How to catch an ai liar: Lie detection in black-box llms by asking unrelated questions. arXiv preprint arXiv:2309.15840 (2023).
- Petukhova et al. (2024) Alina Petukhova, Joao P Matos-Carvalho, and Nuno Fachada. 2024. Text clustering with LLM embeddings. arXiv preprint arXiv:2403.15112 (2024).
- Pilehvar et al. (2013) Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, disambiguate and walk: A unified approach for measuring semantic similarity. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 1341–1351.
- Platt et al. (1999) John Platt et al. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers 10, 3 (1999), 61–74.
- Posch et al. (2019) Konstantin Posch, Jan Steinbrener, and Jürgen Pilz. 2019. Variational Inference to Measure Model Uncertainty in Deep Neural Networks. arXiv preprint arXiv:1902.10189 (2019). arXiv:1902.10189 [stat.ML] https://arxiv.org/abs/1902.10189
- Qiu and Miikkulainen (2024) Xin Qiu and Risto Miikkulainen. 2024. Semantic Density: Uncertainty Quantification in Semantic Space for Large Language Models. arXiv preprint arXiv:2405.13845 (2024).
- Quach et al. (2023) Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S Jaakkola, and Regina Barzilay. 2023. Conformal language modeling. arXiv preprint arXiv:2306.10193 (2023).
- Qurashi et al. (2020) Abdul Wahab Qurashi, Violeta Holmes, and Anju P Johnson. 2020. Document processing: Methods for semantic text similarity analysis. In 2020 international conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 1–6.
- Radford and Narasimhan (2018) Alec Radford and Karthik Narasimhan. 2018. Improving Language Understanding by Generative Pre-Training.
- Rahaman et al. (2021) Rahul Rahaman et al. 2021. Uncertainty quantification and deep ensembles. Advances in neural information processing systems 34 (2021), 20063â20075.
- Rai et al. (2024) Daking Rai, Yilun Zhou, Shi Feng, Abulhair Saparov, and Ziyu Yao. 2024. A practical review of mechanistic interpretability for transformer-based language models. arXiv preprint arXiv:2407.02646 (2024).
- Rawte et al. (2023) Vipula Rawte, Amit Sheth, and Amitava Das. 2023. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922 (2023).
- Reddy et al. (2019) Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics 7 (2019), 249–266.
- Reimers (2019) Nils Reimers. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint arXiv:1908.10084 (2019).
- Rein et al. (2023) David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. 2023. Gpqa: A graduate-level google-proof q&a benchmark. arXiv preprint arXiv:2311.12022 (2023).
- Ren et al. (2024) Allen Z Ren, Jaden Clark, Anushri Dixit, Masha Itkina, Anirudha Majumdar, and Dorsa Sadigh. 2024. Explore until Confident: Efficient Exploration for Embodied Question Answering. arXiv preprint arXiv:2403.15941 (2024).
- Ren et al. (2023a) Allen Z Ren, Anushri Dixit, Alexandra Bodrova, Sumeet Singh, Stephen Tu, Noah Brown, Peng Xu, Leila Takayama, Fei Xia, Jake Varley, et al. 2023a. Robots that ask for help: Uncertainty alignment for large language model planners. arXiv preprint arXiv:2307.01928 (2023).
- Ren et al. (2023b) Jie Ren, Yao Zhao, Tu Vu, Peter J. Liu, and Balaji Lakshminarayanan. 2023b. Self-Evaluation Improves Selective Generation in Large Language Models. In Proceedings on "I Can't Believe It's Not Better: Failure Modes in the Age of Foundation Models" at NeurIPS 2023 Workshops (Proceedings of Machine Learning Research, Vol. 239), Javier Antorán, Arno Blaas, Kelly Buchanan, Fan Feng, Vincent Fortuin, Sahra Ghalebikesabi, Andreas Kriegler, Ian Mason, David Rohde, Francisco J. R. Ruiz, Tobias Uelwer, Yubin Xie, and Rui Yang (Eds.). PMLR, 49–64.
- Rouzrokh et al. (2024) Pouria Rouzrokh, Shahriar Faghani, Cooper U Gamble, Moein Shariatnia, and Bradley J Erickson. 2024. CONFLARE: CONFormal LArge language model REtrieval. arXiv preprint arXiv:2404.04287 (2024).
- Sadinle et al. (2019) Mauricio Sadinle, Jing Lei, and Larry Wasserman. 2019. Least ambiguous set-valued classifiers with bounded error levels. J. Amer. Statist. Assoc. 114, 525 (2019), 223–234.
- Schwab and Karlen (2019) Patrick Schwab and Walter Karlen. 2019. Cxplain: Causal explanations for model interpretation under uncertainty. Advances in neural information processing systems 32 (2019).
- Sha et al. (2021) Lei Sha, Oana-Maria Camburu, and Thomas Lukasiewicz. 2021. Learning from the best: Rationalizing predictions by adversarial information calibration. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 13771–13779.
- Shafer and Vovk (2008) Glenn Shafer and Vladimir Vovk. 2008. A tutorial on conformal prediction. Journal of Machine Learning Research 9, 3 (2008).
- Shah et al. (2023) Dhruv Shah, Błażej Osiński, Sergey Levine, et al. 2023. LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action. In Conference on robot learning. PMLR, 492–504.
- Smith et al. (2020) Eric Michael Smith, Diana Gonzalez-Rico, Emily Dinan, and Y-Lan Boureau. 2020. Controlling style in generated dialogue. arXiv preprint arXiv:2009.10855 (2020).
- Spiess et al. (2024) Claudio Spiess, David Gros, Kunal Suresh Pai, Michael Pradel, Md Rafiqul Islam Rabin, Amin Alipour, Susmit Jha, Prem Devanbu, and Toufique Ahmed. 2024. Calibration and correctness of language models for code. arXiv preprint arXiv:2402.02047 (2024).
- Steindl et al. (2024) Sebastian Steindl, Ulrich Schäfer, Bernd Ludwig, and Patrick Levi. 2024. Linguistic Obfuscation Attacks and Large Language Model Uncertainty. In Proceedings of the 1st Workshop on Uncertainty-Aware NLP (UncertaiNLP 2024). 35–40.
- Stengel-Eskin et al. (2024) Elias Stengel-Eskin, Peter Hase, and Mohit Bansal. 2024. LACIE: Listener-Aware Finetuning for Confidence Calibration in Large Language Models. arXiv preprint arXiv:2405.21028 (2024).
- Su et al. (2024a) Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024a. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing 568 (2024), 127063.
- Su et al. (2024b) Jiayuan Su, Jing Luo, Hongwei Wang, and Lu Cheng. 2024b. Api is enough: Conformal prediction for large language models without logit-access. arXiv preprint arXiv:2403.01216 (2024).
- Sun et al. (2024b) Xingpeng Sun, Yiran Zhang, Xindi Tang, Amrit Singh Bedi, and Aniket Bera. 2024b. TrustNavGPT: Modeling Uncertainty to Improve Trustworthiness of Audio-Guided LLM-Based Robot Navigation. arXiv preprint arXiv:2408.01867 (2024).
- Sun et al. (2024a) Zhongxiang Sun, Xiaoxue Zang, Kai Zheng, Yang Song, Jun Xu, Xiao Zhang, Weijie Yu, and Han Li. 2024a. ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability. arXiv preprint arXiv:2410.11414 (2024).
- Tai et al. (2024) Robert H Tai, Lillian R Bentley, Xin Xia, Jason M Sitt, Sarah C Fankhauser, Ana M Chicas-Mosier, and Barnas G Monteith. 2024. An examination of the use of large language models to aid analysis of textual data. International Journal of Qualitative Methods 23 (2024), 16094069241231168.
- Tamkin et al. (2022) Alex Tamkin, Kunal Handa, Avash Shrestha, and Noah Goodman. 2022. Task ambiguity in humans and language models. arXiv preprint arXiv:2212.10711 (2022).
- Tamkin et al. (2023) Alex Tamkin, Mohammad Taufeeque, and Noah D Goodman. 2023. Codebook features: Sparse and discrete interpretability for neural networks. arXiv preprint arXiv:2310.17230 (2023).
- Tang et al. (2024) Zhisheng Tang, Ke Shen, and Mayank Kejriwal. 2024. An Evaluation of Estimative Uncertainty in Large Language Models. arXiv preprint arXiv:2405.15185 (2024).
- Tanneru et al. (2024) Sree Harsha Tanneru, Chirag Agarwal, and Himabindu Lakkaraju. 2024. Quantifying uncertainty in natural language explanations of large language models. In International Conference on Artificial Intelligence and Statistics. PMLR, 1072â1080.
- Tao et al. (2024) Shuchang Tao, Liuyi Yao, Hanxing Ding, Yuexiang Xie, Qi Cao, Fei Sun, Jinyang Gao, Huawei Shen, and Bolin Ding. 2024. When to Trust LLMs: Aligning Confidence with Response Quality. arXiv preprint arXiv:2404.17287 (2024).
- Templeton et al. (2024) Adly Templeton, Tom Conerly, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, et al. 2024. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
- Thomas and Joy (2006) Thomas M. Cover and Joy A. Thomas. 2006. Elements of information theory. Wiley-Interscience.
- Thorne et al. (2018) James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. arXiv preprint arXiv:1803.05355 (2018).
- Tian et al. (2023) Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975 (2023).
- Tomani et al. (2024) Christian Tomani, Kamalika Chaudhuri, Ivan Evtimov, Daniel Cremers, and Mark Ibrahim. 2024. Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations. arXiv preprint arXiv:2404.10960 (2024).
- Tonmoy et al. (2024) SM Tonmoy, SM Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. 2024. A comprehensive survey of hallucination mitigation techniques in large language models. arXiv preprint arXiv:2401.01313 (2024).
- Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023).
- Tsai et al. (2024) Yao-Hung Hubert Tsai, Walter Talbott, and Jian Zhang. 2024. Efficient Non-Parametric Uncertainty Quantification for Black-Box Large Language Models and Decision Planning. arXiv preprint arXiv:2402.00251 (2024).
- Ulmer et al. (2024) Dennis Ulmer, Martin Gubri, Hwaran Lee, Sangdoo Yun, and Seong Joon Oh. 2024. Calibrating Large Language Models Using Their Generations Only. arXiv preprint arXiv:2403.05973 (2024).
- Vashurin et al. (2024) Roman Vashurin, Ekaterina Fadeeva, Artem Vazhentsev, Akim Tsvigun, Daniil Vasilev, Rui Xing, Abdelrahman Boda Sadallah, Lyudmila Rvanova, Sergey Petrakov, Alexander Panchenko, et al. 2024. Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph. arXiv preprint arXiv:2406.15627 (2024).
- Vaswani (2017) Ashish Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems (2017).
- Vazhentsev et al. (2024) Artem Vazhentsev, Ekaterina Fadeeva, Rui Xing, Alexander Panchenko, Preslav Nakov, Timothy Baldwin, Maxim Panov, and Artem Shelmanov. 2024. Unconditional Truthfulness: Learning Conditional Dependency for Uncertainty Quantification of Large Language Models. arXiv preprint arXiv:2408.10692 (2024).
- Vyas et al. (2018) Apoorv Vyas, Nataraj Jammalamadaka, Xia Zhu, Dipankar Das, Bharat Kaul, and Theodore L Willke. 2018. Out-of-distribution detection using an ensemble of self supervised leave-out classifiers. In Proceedings of the European conference on computer vision (ECCV). 550–564.
- Wang and Ji (2024) Hanjing Wang and Qiang Ji. 2024. Epistemic Uncertainty Quantification For Pre-Trained Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 11052–11061.
- Wang et al. (2024c) Jun Wang, Guocheng He, and Yiannis Kantaros. 2024c. Safe Task Planning for Language-Instructed Multi-Robot Systems using Conformal Prediction. arXiv preprint arXiv:2402.15368 (2024).
- Wang et al. (2023b) J. Wang, Jiaming Tong, Kai Liang Tan, Yevgeniy Vorobeychik, and Yiannis Kantaros. 2023b. Conformal Temporal Logic Planning using Large Language Models: Knowing When to Do What and When to Ask for Help. arXiv preprint arXiv:2309.10092 (2023).
- Wang et al. (2023a) Xi Wang, Laurence Aitchison, and Maja Rudolph. 2023a. LoRA ensembles for large language model fine-tuning. arXiv preprint arXiv:2310.00035 (2023).
- Wang et al. (2024d) Yiming Wang, Pei Zhang, Baosong Yang, Derek F Wong, and Rui Wang. 2024d. Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation. arXiv preprint arXiv:2410.13640 (2024).
- Wang et al. (2024a) Yu-Hsiang Wang, Andrew Bai, Che-Ping Tsai, and Cho-Jui Hsieh. 2024a. CLUE: Concept-Level Uncertainty Estimation for Large Language Models. arXiv preprint arXiv:2409.03021 (2024).
- Wang et al. (2024b) Zhiyuan Wang, Jinhao Duan, Lu Cheng, Yue Zhang, Qingni Wang, Hengtao Shen, Xiaofeng Zhu, Xiaoshuang Shi, and Kaidi Xu. 2024b. ConU: Conformal Uncertainty in Large Language Models with Correctness Coverage Guarantees. arXiv preprint arXiv:2407.00499 (2024).
- Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems 35 (2022), 24824–24837.
- Williams et al. (2017) Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426 (2017).
- Wu and Williamson (2024) Luhuan Wu and Sinead A Williamson. 2024. Posterior Uncertainty Quantification in Neural Networks using Data Augmentation. In International Conference on Artificial Intelligence and Statistics. PMLR, 3376–3384.
- Xiao and Wang (2021) Yijun Xiao and William Yang Wang. 2021. On hallucination and predictive uncertainty in conditional language generation. arXiv preprint arXiv:2103.15025 (2021).
- Xiao et al. (2023) Ziang Xiao, Xingdi Yuan, Q Vera Liao, Rania Abdelghani, and Pierre-Yves Oudeyer. 2023. Supporting qualitative analysis with large language models: Combining codebook with GPT-3 for deductive coding. In Companion proceedings of the 28th international conference on intelligent user interfaces. 75–78.
- Xiong et al. (2023) Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. 2023. Can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms. arXiv preprint arXiv:2306.13063 (2023).
- Xu et al. (2024b) Tianyang Xu, Shujin Wu, Shizhe Diao, Xiaoze Liu, Xingyao Wang, Yangyi Chen, and Jing Gao. 2024b. SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales. arXiv preprint arXiv:2405.20974 (2024).
- Xu et al. (2024a) Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. 2024a. Hallucination is inevitable: An innate limitation of large language models. arXiv preprint arXiv:2401.11817 (2024).
- Yadkori et al. (2024) Yasin Abbasi Yadkori, Ilja Kuzborskij, András György, and Csaba Szepesvári. 2024. To Believe or Not to Believe Your LLM. arXiv preprint arXiv:2406.02543 (2024).
- Yang et al. (2024a) Adam X. Yang, Maxime Robeyns, Xi Wang, and Laurence Aitchison. 2024a. Bayesian Low-rank Adaptation for Large Language Models. arXiv preprint arXiv:2308.13111 (2024). arXiv:2308.13111 [cs.LG] https://arxiv.org/abs/2308.13111
- Yang et al. (2024b) Haoyan Yang, Yixuan Wang, Xingyin Xu, Hanyuan Zhang, and Yirong Bian. 2024b. Can We Trust LLMs? Mitigate Overconfidence Bias in LLMs through Knowledge Transfer. arXiv preprint arXiv:2405.16856 (2024).
- Yang et al. (2023a) Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, and Pengfei Liu. 2023a. Alignment for honesty. arXiv preprint arXiv:2312.07000 (2023).
- Yang et al. (2023b) Yuchen Yang, Houqiang Li, Yanfeng Wang, and Yu Wang. 2023b. Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning. arXiv preprint arXiv:2310.04782 (2023).
- Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600 (2018).
- Ye et al. (2024) Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F Wong, Emine Yilmaz, Shuming Shi, and Zhaopeng Tu. 2024. Benchmarking llms via uncertainty quantification. arXiv preprint arXiv:2401.12794 (2024).
- Yona et al. (2024) Gal Yona, Roee Aharoni, and Mor Geva. 2024. Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words? arXiv preprint arXiv:2405.16908 (2024).
- Yu et al. (2024) Lei Yu, Meng Cao, Jackie Chi Kit Cheung, and Yue Dong. 2024. Mechanisms of non-factual hallucinations in language models. arXiv preprint arXiv:2403.18167 (2024).
- Yuan et al. (2021) Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. BARTScore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems 34 (2021), 27263–27277.
- Yun et al. (2021) Zeyu Yun, Yubei Chen, Bruno A Olshausen, and Yann LeCun. 2021. Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors. arXiv preprint arXiv:2103.15949 (2021).
- Zadrozny and Elkan (2001) Bianca Zadrozny and Charles Elkan. 2001. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In ICML, Vol. 1. 609–616.
- Zadrozny and Elkan (2002) Bianca Zadrozny and Charles Elkan. 2002. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining. 694–699.
- Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830 (2019).
- Zeng et al. (2024) Qingcheng Zeng, Mingyu Jin, Qinkai Yu, Zhenting Wang, Wenyue Hua, Zihao Zhou, Guangyan Sun, Yanda Meng, Shiqing Ma, Qifan Wang, et al. 2024. Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models. arXiv preprint arXiv:2407.11282 (2024).
- Zhang et al. (2024) Caiqi Zhang, Fangyu Liu, Marco Basaldella, and Nigel Collier. 2024. LUQ: Long-text Uncertainty Quantification for LLMs. arXiv preprint arXiv:2403.20279 (2024).
- Zhang et al. (2019) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675 (2019).
- Zhang et al. (2023a) Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng Zhang, Chenghu Zhou, Xinbing Wang, and Luoyi Fu. 2023a. Enhancing uncertainty-based hallucination detection with stronger focus. arXiv preprint arXiv:2311.13230 (2023).
- Zhang et al. (2023b) Yuwei Zhang, Zihan Wang, and Jingbo Shang. 2023b. Clusterllm: Large language models as a guide for text clustering. arXiv preprint arXiv:2305.14871 (2023).
- Zhao et al. (2024a) Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, and Mengnan Du. 2024a. Explainability for large language models: A survey. ACM Transactions on Intelligent Systems and Technology 15, 2 (2024), 1–38.
- Zhao et al. (2024d) Qiwei Zhao, Xujiang Zhao, Yanchi Liu, Wei Cheng, Yiyou Sun, Mika Oishi, Takao Osaki, Katsushi Matsuda, Huaxiu Yao, and Haifeng Chen. 2024d. SAUP: Situation Awareness Uncertainty Propagation on LLM Agent. arXiv preprint arXiv:2412.01033 (2024).
- Zhao et al. (2024b) Theodore Zhao, Mu Wei, J Preston, and Hoifung Poon. 2024b. Pareto Optimal Learning for Estimating Large Language Model Errors. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 10513–10529.
- Zhao et al. (2024c) Xinran Zhao, Hongming Zhang, Xiaoman Pan, Wenlin Yao, Dong Yu, Tongshuang Wu, and Jianshu Chen. 2024c. Fact-and-Reflection (FaR) Improves Confidence Calibration of Large Language Models. arXiv preprint arXiv:2402.17124 (2024).
- Zhao et al. (2022) Yao Zhao, Mikhail Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, and Peter J Liu. 2022. Calibrating sequence likelihood improves conditional language generation. In The eleventh international conference on learning representations.
- Zhao et al. (2021) Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In International conference on machine learning. PMLR, 12697–12706.
- Zheng et al. (2024) Zhi Zheng, Qian Feng, Hang Li, Alois Knoll, and Jianxiang Feng. 2024. Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM Planners. arXiv preprint arXiv:2406.00430 (2024).
- Zhu et al. (2023) Chiwei Zhu, Benfeng Xu, Quan Wang, Yongdong Zhang, and Zhendong Mao. 2023. On the calibration of large language models and alignment. arXiv preprint arXiv:2311.13240 (2023).
- Zimmermann et al. (2024) Roland S Zimmermann, Thomas Klein, and Wieland Brendel. 2024. Scale alone does not improve mechanistic interpretability in vision models. Advances in Neural Information Processing Systems 36 (2024).