# Can LLMs Correct Themselves? A Benchmark of Self-Correction in LLMs
> Equal contributions. Corresponding authors: Pan Zhou and Lixing Chen.
Abstract
Self-correction of large language models (LLMs) emerges as a critical component for enhancing their reasoning performance. Although various self-correction methods have been proposed, a comprehensive evaluation of these methods remains largely unexplored, and the question of whether LLMs can truly correct themselves is a matter of significant interest and concern. In this study, we introduce CorrectBench, a benchmark developed to evaluate the effectiveness of self-correction strategies, including intrinsic, external, and fine-tuned approaches, across three tasks: commonsense reasoning, mathematical reasoning, and code generation. Our findings reveal that: 1) Self-correction methods can improve accuracy, especially for complex reasoning tasks; 2) Mixing different self-correction strategies yields further improvements, though it reduces efficiency; 3) Reasoning LLMs gain little from additional self-correction methods while incurring high time costs. Interestingly, a comparatively simple chain-of-thought (CoT) baseline demonstrates competitive accuracy and efficiency. These results underscore the potential of self-correction to enhance LLMs' reasoning performance while highlighting the ongoing challenge of improving their efficiency. Consequently, we advocate for further research focused on optimizing the balance between reasoning capabilities and operational efficiency. Project Page: https://correctbench.github.io/
1 Introduction
The rapid advancement of large language models (LLMs), exemplified by GPT-3.5 Ye2023ACC and LLaMA 3 Dubey2024TheL3 , has precipitated a transformative shift in artificial intelligence (AI), yielding state-of-the-art performance across diverse tasks tie2025survey . Specifically, these tasks include content generation Agossah2023LLMBasedIF , natural language understanding Kuang2024NaturalLU , and complex decision-making Yang2024EvaluatingWM , all of which have been revolutionized by the extensive pretraining and sophisticated architectures of LLMs. Notably, the introduction of frameworks like Chain-of-Thought (CoT) Wei2022ChainOT has further expanded LLMs' capacity for multi-step reasoning, enabling them to tackle more intricate tasks.
Despite these advancements, ensuring the reliability and accuracy of model outputs, especially for reasoning-intensive tasks, remains a formidable challenge. In response, recent works have focused on self-correction strategies aimed at refining LLMs' decision-making processes Kamoi2024WhenCL ; Lin2024CriticBenchBL through iterative revision. Pioneering approaches such as RARR Gao2022RARRRA , Refiner Paul2023REFINERRF , and CRITIC Gou2023CRITICLL illustrate the potential of integrating feedback loops and corrective components into model architectures. However, these approaches often yield inconsistent gains across different tasks, prompting deeper questions about their capability of correction and generalizability. This observation motivates the central question: Can LLMs truly correct themselves? Moreover, it remains unclear whether more intricate self-correction schemes necessarily translate into superior overall performance.
To address these issues, this paper presents CorrectBench, a systematic benchmark for investigating how self-correction methods affect LLM performance across multiple tasks. Building on a recent survey of self-correction approaches Kamoi2024WhenCL , we categorize such methods into three primary categories (i.e., intrinsic correction, external correction, and fine-tuned correction), and select 11 representative methods from them. Additionally, we establish comparative baselines using both the widely adopted CoT Wei2022ChainOT and a standard prompting strategy denoted as "Base" (detailed in Appendix G.1).
[Figure: three radar charts ("Base (Baseline)", "S1 (Intrinsic Correction)", and "S2 (External Correction)") plotting scores (0.0-0.8) of eleven LLMs (LLaMA3.1-8B/70B-Instruct, Qwen2.5-7B/72B-Instruct, Claude3.5-Sonnet, GPT-3.5, GPT-4o, QWQ-32B-Instruct, DeepSeek-V3, DeepSeek-R1, and o3-mini) on CS-QA, GSM8K, HotpotQA, AQUA, HumanEval, and MATH; S1 and S2 generally lift scores over Base, with S2 slightly ahead.]
Figure 1: Comparison of different LLMs across various self-correction types and task scenarios.
For a rigorous and comprehensive assessment, we construct two curated datasets: CorrectBench-base, which integrates diverse subtasks with 3,825 question-answer pairs drawn from 7 distinct subdatasets, and CorrectBench-test, a curated collection of question-answer pairs specifically tailored for correction-oriented experiments. These subdatasets cover three principal tasks: commonsense reasoning Putri2024CanLG , mathematical reasoning Hendrycks2021MeasuringMP , and code generation Chen2021EvaluatingLL . We then apply these subdatasets to both instruction-based LLMs (e.g., LLaMA 3.1-8B-Instruct meta2024llama3 , Qwen 2.5-7B-Instruct Yang2024Qwen25TR , GPT-4o hurst2024gpt , Claude 3.5-Sonnet AhtropicClaude ) and reasoning LLMs (e.g., DeepSeek-V3 DeepSeekAI2024DeepSeekV3TR ); this paper defines "reasoning LLMs" as models specifically enhanced with complex reasoning capabilities through a post-training optimization process. Figure 1 compares the baseline ("Base") performance with the mean performance of intrinsic and external correction methods separately, revealing that self-correction bolsters overall accuracy (detailed in Figure 8).
Key insights. First, self-correction methods substantially enhance accuracy, particularly in complex reasoning tasks. Second, mixing multiple methods improves accuracy further but incurs higher computational costs and reduced efficiency. Third, for reasoning LLMs, these methods offer only marginal gains at increased time cost. Interestingly, the CoT Wei2022ChainOT strategy demonstrates a favorable trade-off between operational efficiency and overall accuracy, challenging the prevailing assumption that more sophisticated correction frameworks inherently produce superior outcomes.
To summarize, our work provides three key contributions:
- A Comprehensive Benchmark. We propose CorrectBench, the first benchmark devised to systematically evaluate the impact of self-correction on LLM inference. Spanning multiple tasks and model categories, CorrectBench offers a robust, reproducible platform for methodological comparisons.
- Two Datasets. We present CorrectBench-base and CorrectBench-test, both meticulously constructed to encompass a broad range of question-answer formats and reasoning scenarios, facilitating thorough assessments of different correction methods.
- Insights and Implications. Our empirical findings show that self-correction substantially advances LLMs' performance, especially on tasks demanding extensive reasoning. However, the increased computational load of mixing multiple correction strategies must be weighed against potential accuracy gains. Moreover, for reasoning LLMs, additional correction methods provide limited improvements, emphasizing critical cost-efficiency concerns for practical applications.
2 CorrectBench: A Benchmark of Self-Correction in LLMs
CorrectBench is a systematically designed benchmark that quantifies the extent to which various correction methods improve model outputs in reasoning-intensive scenarios. As illustrated in Figure 2, CorrectBench characterizes self-correction along three principal dimensions: Task Scenario, Self-Correction Type, and LLM Type. The evaluation pipeline begins with selecting a specific task scenario and dataset, followed by applying a chosen correction method, and concludes with assessing the model's iterative self-correction process across diverse LLMs.
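In code, this pipeline reduces to a short loop. The sketch below is a minimal illustration rather than CorrectBench's actual API; the model, correction method, dataset, and metric are all passed in as assumed callables:

```python
def run_benchmark(model, correct, dataset, metric):
    """Score one (task dataset, correction method, model) triple.

    model:   callable prompt -> response              (assumed interface)
    correct: callable (model, question, response) -> revised response
    dataset: iterable of (question, reference_answer) pairs
    metric:  callable (response, reference) -> score in [0, 1]
    """
    scores = []
    for question, answer in dataset:
        response = model(question)                     # initial response
        response = correct(model, question, response)  # self-correction pass
        scores.append(metric(response, answer))
    return sum(scores) / len(scores)                   # task-level score
```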
[Figure: schematic of CorrectBench showing task scenarios (T1 commonsense reasoning, T2 mathematical reasoning, T3 code generation), correction strategies (S1 intrinsic, S2 external with search-engine and code-executor tools, S3 fine-tuned), each drawn as a feedback loop from a base LLM's response through generated feedback to a refined LLM, and model types (M1 instruction-based LLMs, M2 reasoning LLMs).]
Figure 2: An overview of the CorrectBench framework.
Iterative Self-Correction Paradigm. In a standard LLM evaluation, the model generates an initial response $r_{0}$ to a question $q$ given an initial prompt $p_{0}$ , formally $r_{0}=\mathcal{M}(q,p_{0})$ , where $\mathcal{M}$ denotes the LLM. In the self-correction paradigm, this process becomes iterative: at the $k$ -th iteration, $p_{k}=p_{k-1}\cup r_{k-1}$ and $r_{k}=\mathcal{M}(q,p_{k})$ , where $p_{k}$ is the updated prompt that includes the previous response $r_{k-1}$ . After $K$ iterations, the final output $r_{K}$ reflects the model's corrected response. This iterative mechanism enables the model to continually refine its output based on newly revealed errors or inconsistencies.
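This paradigm maps directly to code. The following is a minimal sketch assuming only a generic `model(prompt)` callable (not any specific API); the prompt update $p_{k}=p_{k-1}\cup r_{k-1}$ is realized as string concatenation:

```python
def iterative_self_correction(model, question, p0, K=3):
    """Compute r_0 = M(q, p_0), then iterate p_k = p_{k-1} U r_{k-1},
    r_k = M(q, p_k), returning r_K after K correction rounds.

    `model` is any callable mapping a prompt string to a response string.
    """
    prompt = p0
    response = model(f"{prompt}\n\nQuestion: {question}")  # r_0
    for _ in range(K):
        # p_k = p_{k-1} U r_{k-1}: fold the previous response into the prompt
        prompt = (f"{prompt}\n\nPrevious answer:\n{response}\n"
                  "Review the previous answer, identify any errors, "
                  "and provide a corrected answer.")
        response = model(f"{prompt}\n\nQuestion: {question}")  # r_k
    return response  # r_K, the final corrected response
```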
[Figure: a question yields an initial response, which is passed through a chain of correction methods; each method's output is folded back into the prompt for the next iteration until the final response is produced.]
Figure 3: Mixture of different correction methods.
Mixture Framework. While individual self-correction methods can improve model responses, it is plausible that integrating multiple methods may yield further improvements in accuracy. To examine these potential synergies, we propose the mixture framework, illustrated in Figure 3. The response of one correction method serves as input to the next, forming a dynamic pipeline of iterative refinements. This setup enables us to analyze how distinct self-correction methods interact, thereby guiding the development of optimal configurations for improving LLM performance.
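Viewed this way, the mixture framework is simply composition over correction methods. Below is a minimal sketch under the same assumed `model(prompt)` interface as above, where each method is a callable that refines the previous response:

```python
def mixture(model, question, p0, methods):
    """Chain correction methods: each one receives the previous response
    as auxiliary input and returns a refined response.

    `methods` is an ordered list of callables with the assumed signature
    (model, question, base_prompt, prior_response) -> response.
    """
    response = model(f"{p0}\n\nQuestion: {question}")  # Base response
    for method in methods:
        # The previous response serves as input to the next method
        response = method(model, question, p0, response)
    return response

# e.g., the F1 configuration of Section 4.4 ("Base -> S1 -> S2") would be
# mixture(model, q, p0, [intrinsic_correct, external_correct]).
```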
2.1 Self-Correction Method
CorrectBench comprehensively evaluates three distinct categories of self-correction methods:
S1: Intrinsic Correction. This category focuses on the LLM's capacity to internally identify and correct errors without external tools. Methods such as RCI Kim2023LanguageMC , Self-Refine Madaan2023SelfRefineIR , CoVe Dhuliawala2023ChainofVerificationRH , and Reflexion Shinn2023ReflexionLA enable the LLM to re-evaluate its prior reasoning steps and resolve inconsistencies based on its internal knowledge.
S2: External Correction. In contrast to S1, S2 (e.g., Reflexion Shinn2023ReflexionLA , RARR Gao2022RARRRA , RATT Zhang2024RATTAT , CRITIC Gou2023CRITICLL ) leverages external resources, such as knowledge bases or Google search tools, to address gaps in the internal representation. This external support can correct factual inaccuracies or logical oversights, though it may constrain the model's capacity for divergent reasoning.
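To make S2's external feedback concrete, consider code generation: rather than asking the model to judge its own output, the candidate program can be executed against its test cases and the resulting error trace fed back into the next correction round. A minimal sketch follows (the tool interface is illustrative, not taken from any particular method):

```python
import subprocess
import sys

def execution_feedback(code: str, tests: str, timeout: int = 10) -> str:
    """Run a generated program together with its test cases and return
    the outcome as external feedback for the next correction round."""
    proc = subprocess.run(
        [sys.executable, "-c", code + "\n" + tests],
        capture_output=True, text=True, timeout=timeout,
    )
    if proc.returncode == 0:
        return "All tests passed."
    return f"Execution failed:\n{proc.stderr}"  # appended to the next prompt
```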
S3: Fine-tuned Correction. Fine-tuned correction methods including DCoT Puerto2024FineTuningWD , SCORE Zhang2024SmallLM , and SuperCorrect Yang2024SuperCorrectSA enhance LLMsâ self-correction performance through targeted fine-tuning. Although potentially effective, these methods require substantial training and are often limited by the scope and quality of the fine-tuning data.
2.2 Task Scenario
CorrectBench investigates self-correction methods across three representative task scenarios:
T1: Commonsense Reasoning. This scenario probes the model's capacity to address factual or logical inconsistencies within everyday knowledge domains. Relevant datasets include HotpotQA Yang2018HotpotQAAD , CommonsenseQA Putri2024CanLG (hereafter CS-QA), and the more challenging GPQA Rein2023GPQAAG , which emphasizes complex reasoning.
T2: Mathematical Reasoning. Datasets in this task scenario emphasize the detection and correction of errors in mathematical derivations, algebraic manipulations, and multi-step reasoning. Representative datasets include GSM8K Cobbe2021TrainingVT , AQUA Chen2019AQuAAA , and MATH Hendrycks2021MeasuringMP .
T3: Code Generation. This scenario assesses the LLM's ability to generate functionally correct and logically consistent code from natural language prompts. For instance, HumanEval Chen2021EvaluatingLL measures the LLM's ability to detect and correct syntax errors, logical flaws, and other coding mistakes.
2.3 LLM Type
To ensure a broad and realistic appraisal, CorrectBench encompasses two categories of LLMs:
M1: Instruction-Based LLMs. These LLMs are designed to follow user-provided instructions to generate relevant outputs, making them highly versatile across diverse tasks. This category includes both open-source and closed-source LLMs, distinguished by their accessibility and design paradigms. Open-source LLMs, such as Qwen 2.5-7B/72B-Instruct Yang2024Qwen25TR and LLaMA 3.1-8B/70B-Instruct meta2024llama3 , offer transparency and flexibility for modification, enabling detailed analysis and fine-tuning. Conversely, closed-source LLMs, including OpenAI's GPT-3.5 ouyang2022training , GPT-4o hurst2024gpt , and Anthropic's Claude 3.5-Sonnet AhtropicClaude , excel in real-world tasks due to proprietary optimizations but restrict direct access and customization for research purposes.
M2: Reasoning LLMs. Reasoning LLMs are models specifically enhanced with advanced reasoning capabilities through targeted post-training optimization. These models are designed to excel in tasks requiring multi-step logical reasoning, often incorporating integrated self-correction mechanisms to refine their outputs. Representative examples include QWQ-32B-Instruct qwen2 , o3-mini, and DeepSeek-R1 DeepSeekAI2024DeepSeekV3TR . DeepSeek-V3 DeepSeekAI2024DeepSeekV3TR , in particular, adopts an innovative approach to distilling reasoning capabilities from long chain-of-thought models, leveraging its predecessor DeepSeek-R1. By integrating verification and reflection patterns from R1, DeepSeek-V3 achieves substantial improvements in reasoning accuracy while maintaining precise control over output style and length.
2.4 Research Question
This study aims to elucidate the effectiveness of different self-correction methods in enhancing LLMs' performance, addressing the following core research questions:
[RQ1] To what extent can LLMs achieve accurate results by leveraging intrinsic (S1) and external (S2) self-correction methods without requiring further intervention? (S3 is analyzed separately due to dataset-specific constraints.)
[RQ2] How does mixing multiple self-correction methods influence model accuracy and robustness, and what are the associated computational trade-offs?
[RQ3] For reasoning LLMs with built-in correction mechanisms, to what extent can the above self-correction methods provide additional benefits?
3 Experiment Settings
Dataset Preparation. To ensure consistency and reproducibility, we employ CorrectBench-test for experimental evaluations. For each dataset within CorrectBench-test, we randomly select 100 samples and subsequently refine this selection by excluding a small number of outliers or irrelevant instances, thereby ensuring a more precise representation of error patterns. Comprehensive details regarding the datasets and preprocessing steps are provided in Appendix B.1, promoting transparency and reproducibility for subsequent research.
Task and Model Selection. CorrectBench adopts a hierarchical strategy to evaluate self-correction across diverse task scenarios and LLM types. Specific datasets are selected to match the characteristics and objectives of each self-correction method, ensuring that the benchmark captures comprehensive error types and correction challenges. Further specifications regarding the selection of datasets and LLMs are included in Appendix B.2, ensuring full reproducibility of the evaluation methodology.
Evaluation Metrics. We employ both task-specific and judgment-based metrics to evaluate self-correction capabilities across diverse tasks: 1) Task-Specific Metrics. These metrics are tailored to evaluate model performance on each task. For T1, accuracy is computed as $\mathrm{ACC}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{I}(\hat{y}_{i}=y_{i})$ , where $N$ is the total number of samples, $y_{i}$ is the ground truth, $\hat{y}_{i}$ is the predicted answer, and $\mathbb{I}(\cdot)$ is the indicator function. For T2, the solve rate is the percentage of problems correctly solved by the model out of the total number of problems. For T3, pass@k evaluates whether at least one of the $k$ generated solutions for a problem passes all test cases; the final pass@k score is the average pass rate across all problems. 2) Judgment-Based Metrics. In cases where the model's response is ambiguous or incomplete, we conduct human evaluations, in which evaluators apply stringent criteria to ensure an impartial and thorough assessment. Additionally, we employ GPT-4o under an LLM-as-a-Judge paradigm, providing an automated yet consistent scoring mechanism for large-scale experimental runs.
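Both task-specific metrics follow directly from their definitions. A minimal sketch (the data layouts are assumed for illustration):

```python
def accuracy(preds, labels):
    """T1 metric: ACC = (1/N) * sum_i 1[y_hat_i == y_i]."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def pass_at_k(per_problem_results):
    """T3 metric as described above: a problem counts as solved if at
    least one of its k generated solutions passes all test cases.

    `per_problem_results`: list of length-k boolean lists, one per
    problem (True = that solution passed all tests) -- assumed layout.
    """
    solved = [any(results) for results in per_problem_results]
    return sum(solved) / len(solved)

# accuracy(["C", "A"], ["C", "B"])            -> 0.5
# pass_at_k([[False, True], [False, False]])  -> 0.5
```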
4 Empirical Results and Analysis
Table 1: Main results on CorrectBench, averaged over multiple LLMs. Values in parentheses indicate the change from the baseline. Blue signifies improvements, and orange indicates declines, with darker shades reflecting larger magnitudes. Further details are given in Appendix H.
| Type | Method | HotpotQA($\uparrow$) | CS-QA($\uparrow$) | GPQA($\uparrow$) | GSM8K($\uparrow$) | AQUA($\uparrow$) | MATH($\uparrow$) | HumanEval($\uparrow$) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| - | Base | 80.76 | 79.96 | 18.56 | 86.46 | 61.23 | 75.12 | 72.71 |
| - | CoT | 83.29 (+2.53) | 78.03 (-1.93) | 16.52 (-2.04) | 91.96 (+5.50) | 60.24 (-0.99) | 72.59 (-2.53) | 60.10 (-12.61) |
| S1 | RCI | 79.67 (-1.09) | 76.29 (-3.67) | 19.98 (+1.42) | 87.00 (+0.54) | 67.12 (+5.89) | 74.92 (-0.20) | 67.46 (-5.25) |
| S1 | CoVe | 83.04 (+2.28) | 78.54 (-1.42) | 37.41 (+18.85) | 92.23 (+5.77) | 71.12 (+9.89) | 79.30 (+4.18) | 76.96 (+4.25) |
| S1 | Self-Refine | 85.49 (+4.73) | 81.06 (+1.10) | 40.69 (+22.13) | 91.74 (+5.28) | 69.46 (+8.23) | 81.77 (+6.65) | - |
| S1 | Reflexion-v1 | 69.52 (-11.24) | 63.89 (-16.07) | 19.25 (+0.69) | 67.64 (-18.82) | 48.33 (-12.90) | 65.01 (-10.11) | - |
| S2 | Reflexion-v2 | 87.98 (+7.22) | 82.21 (+2.25) | 26.85 (+8.29) | 89.87 (+3.41) | 68.23 (+7.00) | 81.36 (+6.24) | - |
| S2 | RARR | 85.47 (+4.71) | 80.57 (+0.61) | 36.82 (+18.26) | 88.92 (+2.46) | 66.81 (+5.58) | 82.78 (+7.66) | 77.35 (+4.64) |
| S2 | RATT | 79.59 (-1.17) | 80.81 (+0.85) | 25.90 (+7.34) | 88.08 (+1.62) | 68.06 (+6.83) | 80.74 (+5.62) | 73.44 (+0.73) |
| S2 | CRITIC | - | 81.77 (+1.81) | - | 77.46 (-9.00) | - | - | - |
| - | Average | 83.54 (+2.78) | 80.18 (+0.22) | 31.28 (+12.72) | 85.04 (-1.42) | 68.47 (+7.24) | 80.15 (+5.03) | 73.80 (+1.09) |
4.1 Main Results
Table 1 summarizes the average performance improvements attained by various self-correction methods over the "Base". The results reveal that each self-correction method improves upon the "Base" to varying degrees, with particularly pronounced gains on more complex tasks such as GPQA and MATH. For instance, CoVe from S1 yields an improvement of +23.24% on GPQA. However, simpler tasks like GSM8K exhibit more modest gains (e.g., +5.28% for CoVe). By contrast, external correction methods (S2) generally achieve higher average gains than S1. For example, Reflexion-v1 (Reflexion without external tools) experiences declines on tasks such as HotpotQA (-11.13%) and AQUA (-12.90%), whereas Reflexion-v2 (Reflexion with external tools) increases its effectiveness, yielding improvements of +7.33% on HotpotQA and +7.00% on AQUA. We attribute this to Reflexion Shinn2023ReflexionLA having been designed to leverage external tools for enhanced correction: Reflexion-v1, stripped of these tools, lacks this optimization, leaving weaker LLMs prone to getting "stuck," producing persistent incorrect outputs and lowering the average score. Finally, S3 generally lags behind these methods, likely due to smaller model sizes and narrower training objectives (details in Section 4.5).
[Figure: scatter plot of per-model performance change Δ (%) under S1 across HotpotQA, CS-QA, GPQA, AQUA, GSM8K, MATH, and HumanEval, with a dashed Δ=0 baseline; markers cover LLaMA3.1-8B/70B, Qwen2.5-7B/72B, Claude3.5, GPT-3.5, GPT-4o, QWQ-32B, and DeepSeek-V3, plus open-, closed-, and reasoning-LLM aggregates.]
Figure 4: Average performance improvements achieved by S1 across multiple LLMs.
[Figure: scatter plot of per-model performance change Δ (%) under S2 across the same datasets and models as Figure 4, with a dashed Δ=0 baseline; gains are largest on GPQA, GSM8K, and MATH.]
Figure 5: Average performance improvements achieved by S2 across multiple LLMs.
4.2 Results of Intrinsic Correction
Figure 4 illustrates the mean performance gains realized by all S1 methods across nine LLMs and multiple datasets, where y=0 denotes the baseline. For detailed results of each method, refer to Appendix E.1. Although S1 improves accuracy overall, the degree of improvement varies across instruction-based and reasoning LLMs. 1) Instruction-based LLMs. Open-source LLMs exhibit uneven performance gains. For instance, LLaMA3.1-8B-Instruct shows significant declines on AQUA and HumanEval, whereas Qwen2.5-7B-Instruct demonstrates modest gains. These discrepancies likely stem from smaller parameter sizes and weaker instruction-following capabilities. In contrast, closed-source LLMs offer more consistent and stable performance improvements. Notably, GPT-4o shows substantial gains on GPQA, and Claude3.5 achieves similar enhancements, highlighting the robust instruction-following adaptability of closed-source architectures. 2) Reasoning LLMs. DeepSeek-V3's performance remains close to the baseline across most tasks. While it demonstrates slight improvements on datasets such as CS-QA, GPQA, and MATH, it exhibits marginal declines on others. To further investigate this phenomenon, we conducted additional experiments (see Section 4.8), revealing that DeepSeek-V3's built-in correction mechanism delivers a strong baseline performance. This high initial performance likely limits the impact of other correction methods. Conversely, QWQ attains considerable improvements on most datasets except CS-QA and MATH, likely reflecting the constraints posed by its smaller parameter size.
4.3 Results of External Correction
Figure 5 illustrates the average performance improvements achieved by each LLM utilizing external correction methods (S2) across various datasets. The trends observed align closely with those depicted in Figure 4, indicating a consistent enhancement in overall performance. Notably, external correction methods demonstrate greater stability, which can be attributed to their reliance on authoritative external resources. By referencing these resources, S2 effectively mitigates the occurrence of incorrect responses. However, this reliance on external inputs may also limit the LLM's capacity for divergent or creative reasoning, resulting in steadier but less flexible performance compared to intrinsic correction methods. For comprehensive results for each method, please refer to Appendix E.2.
[RQ1] To what extent can LLMs achieve accurate results by leveraging intrinsic (S1) and external (S2) self-correction methods without requiring further intervention?
Conclusion: Both S1 and S2 enable significant performance gains, particularly for complex tasks requiring multi-step reasoning or domain-specific knowledge. By iteratively refining responses, LLMs can effectively correct themselves even without additional fine-tuning.
[Figure: four stacked bar charts, a) GPQA (F1), b) MATH (F1), c) GPQA (F2), and d) MATH (F2), decomposing each model's score (%) into Base, intrinsic-correction, and external-correction contributions; corrections contribute most on GPQA, while MATH scores are dominated by the Base component.]
Figure 6: Comparison of different LLMs with mixture methods.
4.4 Results of Mixture Method
We further explored whether mixing multiple self-correction methods yields additive or synergistic performance improvements. Specifically, the responses generated by the baseline method (Base) are utilized as auxiliary prompts for an intrinsic method (S1), an external method (S2), or a mixture of both (S1 + S2). As shown in Figure 6, we evaluated two representative configurations (F1: "Base → S1 → S2" and F2: "Base → S2 → S1") on two benchmark tasks: GPQA and MATH. The results reveal two key findings. First, applying one or more correction methods consistently improves model performance to varying degrees. Second, S2 generally yields larger performance gains than S1. Notably, complex reasoning tasks, such as GPQA, benefit the most from these mixed methods. However, these mixtures often introduce additional computational overhead. To address the associated trade-off between efficiency and accuracy, we conducted a complementary analysis of response times under different correction methods and model configurations, as detailed in Section 4.7.
4.5 Results of Fine-tuned Correction
Table 2: Performance of fine-tuned methods on selected datasets.
| Fine-tuned LLM | Method | CS-QA( $\uparrow$ ) | GSM8K( $\uparrow$ ) |
| --- | --- | --- | --- |
| - | Base | 31.40 | 56.75 |
| LLaMA2-7B-hf | DCoT | 29.65 (-1.75) | 41.20 (-15.55) |
| Gemma-7B-it | SCORE | 43.26 (+11.86) | 75.30 (+18.55) |
| LLaMA2-13B-chat | SCORE | 41.45 (+10.05) | 72.10 (+15.35) |
| Qwen2.5-Math-7B-Instruct | SuperCorrect | 46.25 (+14.85) | 84.30 (+27.55) |

| Fine-tuned LLM | Method | MATH( $\uparrow$ ) | HumanEval( $\uparrow$ ) |
| --- | --- | --- | --- |
| - | Base | 41.71 | 26.25 |
| Qwen2.5-Math-7B-Instruct | SuperCorrect | 70.16 (+28.45) | 39.30 (+13.05) |
Table 2 summarizes the performance of fine-tuned correction methods (S3), revealing two main observations. First, S3 (e.g., DCoT) often exhibits inconsistent outcomes across diverse tasks. This variability stems from their reliance on fine-tuning with narrowly focused datasets, which restricts their broader applicability. Second, domain-specific fine-tuning proves especially promising for models tailored to specialized tasks. For example, SuperCorrect, fine-tuned on Qwen2.5-Math-7B-Instruct, demonstrates marked improvements in mathematical reasoning (e.g., on GSM8K and MATH), outperforming other methods by a wide margin. This underscores the effectiveness of leveraging task-aligned models, particularly when fine-tuning objectives closely align with the target domain requirements.
Table 3: Correction rate (CR) and misjudgment rate (MR) of CoVe and RARR on Claude3.5-Sonnet.
| Method | Metric | GPQA | AQUA | HotpotQA | Overall |
| --- | --- | --- | --- | --- | --- |
| CoVe | CR | 31.6 | 36.0 | 52.1 | 40.8 |
| | MR | 8.1 | 8.0 | 6.7 | 7.5 |
| RARR | CR | 30.7 | 49.3 | 51.3 | 47.1 |
| | MR | 5.5 | 4.3 | 4.5 | 4.5 |
4.6 Results of Correction and Misjudgment
To further assess correction ability, we split the responses of different models on the three most challenging tasks (GPQA, AQUA, and HotpotQA) into an error-based dataset and a correction-based dataset, corresponding to the wrong and correct question-response pairs, respectively. We select CoVe from S1 and RARR from S2 and evaluate them on Claude3.5-Sonnet. We define two new metrics: Correction Rate (CR), the proportion of initially incorrect responses that are successfully corrected, and Misjudgment Rate (MR), the proportion of initially correct responses that are mistakenly judged as wrong and revised into errors. The results in Table 3 show that both methods achieve high CRs and low MRs, indicating that self-correction methods can effectively fix wrong examples while rarely corrupting correct ones.
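Given these definitions, CR and MR reduce to simple counts over the two datasets. The sketch below assumes each record carries a post-correction correctness flag; the flags and toy numbers are illustrative and not taken from Table 3.

```python
# Minimal computation of Correction Rate (CR) and Misjudgment Rate (MR).
# Each list holds the post-correction correctness flag of one response.

def correction_rate(error_set: list[bool]) -> float:
    """CR: share of initially wrong responses that the method fixed."""
    return 100.0 * sum(error_set) / len(error_set)

def misjudgment_rate(correct_set: list[bool]) -> float:
    """MR: share of initially correct responses that the method broke."""
    return 100.0 * sum(not ok for ok in correct_set) / len(correct_set)

# Toy example: 4 of 10 wrong answers fixed -> CR = 40.0;
# 1 of 20 right answers overwritten with an error -> MR = 5.0.
print(correction_rate([True] * 4 + [False] * 6))   # 40.0
print(misjudgment_rate([True] * 19 + [False]))     # 5.0
```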
<details>
<summary>x7.png Details</summary>

### Visual Description
## Bar Chart: Model Performance on Self-Correction Methods
### Overview
The image is a bar chart comparing the average response time, in seconds, of three language models (LLaMA3.1-70B, GPT-4o, and DeepSeek-V3) across various self-correction methods.
### Components/Axes
* **Y-axis:** "Time (s)" with a scale from 0 to 400 in increments of 100.
* **X-axis:** "Self-correction Methods" with the following categories: Base, CoT, RCI, CoVe, Self-Refine, Reflexion, RARR, RATT.
* **Legend:** Located at the top-left of the chart.
* LLaMA3.1-70B (light green)
* GPT-4o (light red)
* DeepSeek-V3 (light blue)
### Detailed Analysis
The chart displays the time taken by each model to perform each self-correction method.
* **Base:**
* LLaMA3.1-70B: ~5s
* GPT-4o: ~5s
* DeepSeek-V3: ~8s
* **CoT:**
* LLaMA3.1-70B: ~8s
* GPT-4o: ~8s
* DeepSeek-V3: ~15s
* **RCI:**
* LLaMA3.1-70B: ~15s
* GPT-4o: ~15s
* DeepSeek-V3: ~60s
* **CoVe:**
* LLaMA3.1-70B: ~28s
* GPT-4o: ~20s
* DeepSeek-V3: ~145s
* **Self-Refine:**
* LLaMA3.1-70B: ~70s
* GPT-4o: ~45s
* DeepSeek-V3: ~130s
* **Reflexion:**
* LLaMA3.1-70B: ~70s
* GPT-4o: ~50s
* DeepSeek-V3: ~105s
* **RARR:**
* LLaMA3.1-70B: ~10s
* GPT-4o: ~10s
* DeepSeek-V3: ~55s
* **RATT:**
* LLaMA3.1-70B: ~430s
* GPT-4o: ~290s
* DeepSeek-V3: ~430s
### Key Observations
* For most self-correction methods (Base, CoT, RCI, CoVe, Self-Refine, Reflexion, RARR), LLaMA3.1-70B and GPT-4o take similar time, while DeepSeek-V3 generally takes more.
* The RATT method shows a significant increase in time for all models, with LLaMA3.1-70B and DeepSeek-V3 taking approximately the same amount of time, and GPT-4o taking slightly less.
* DeepSeek-V3 shows the highest time consumption for CoVe, Self-Refine, and Reflexion.
### Interpretation
The bar chart illustrates the computational cost (time) associated with different self-correction methods for three language models. The data suggests that the choice of self-correction method can significantly affect inference cost; RATT, in particular, is computationally expensive for all models. The relative cost of the models varies with the specific self-correction method, indicating that the efficiency of these methods is model-dependent. DeepSeek-V3 generally takes more time than LLaMA3.1-70B and GPT-4o, except for RATT, where its time is comparable to LLaMA3.1-70B's.
</details>
Figure 7: Average response times for LLaMA3.1-70B, GPT-4o, and DeepSeek-V3. Intrinsic (S1) and external (S2) methods generally increase inference duration relative to Base and CoT.
4.7 Results of Response Time
Figure 7 compares the average response times across representative models (e.g., LLaMA3.1-70B and GPT-4o from M1, DeepSeek-V3 from M2) under various correction methods. In general, both intrinsic and external methods extend inference times relative to the baseline approaches (Base and CoT), largely due to increased reasoning complexity or reliance on external services (especially for RATT). Moreover, reasoning LLMs, such as DeepSeek-V3, exhibit notably longer execution times than instruction-based models, likely attributable to their built-in correction mechanisms. In contrast, the baseline CoT method achieves notably shorter response times while maintaining reasonable accuracy (see Table 1). This observation underscores that more complex correction strategies do not always yield superior outcomes, highlighting the critical trade-off between model accuracy and computational efficiency. As shown in Table 4, RARR offers a balanced trade-off between efficiency and accuracy, with only 533 tokens and 2 API calls. Reflexion-v2 and RATT achieve the highest accuracies, reflecting the benefit of external retrieval or code execution, while their overhead remains manageable (below 15% additional search tokens). Bootstrap-based confidence intervals for these results are reported in Appendix B.3.
[RQ2] How does mixing multiple self-correction methods influence model accuracy and robustness, and what are the associated computational trade-offs?
Conclusion: Mixing self-correction methods typically results in accuracy improvements, though at the cost of increased computational overhead. Such mixtures are particularly beneficial for high-precision tasks where the trade-off of more runtime is justifiable.
Table 4: Comprehensive resource cost analysis on the MATH dataset (150 samples). Values in green indicate the best trade-off between cost and accuracy, while red highlights the least efficient results. "Efficiency Rank" represents the ratio Accuracy / (Token Count × API Calls).
| Method | Type | Token Count | API Calls | Search Tokens | Accuracy (%) | Efficiency Rank |
| --- | --- | --- | --- | --- | --- | --- |
| Base | - | 791 | 1.0 | 0 (0%) | 68.5 | 0.0866 |
| CoT | - | 1804 | 1.0 | 0 (0%) | 69.5 | 0.0385 |
| CoVe | S1 | 2019 | 1.0 | 0 (0%) | 75.0 | 0.0371 |
| RCI | S1 | 1780 | 1.2 | 0 (0%) | 70.2 | 0.0328 |
| Reflexion-v1 | S1 | 1460 | 3.5 | 0 (0%) | 72.8 | 0.0143 |
| Reflexion-v2 | S2 | 1712 | 4.0 | 154 (8.25%) | 74.5 | 0.0109 |
| RARR | S2 | 533 | 2.0 | 89 (14.31%) | 76.3 | 0.0716 |
| RATT | S2 | 2185 | 3.0 | 162 (6.9%) | 78.7 | 0.0120 |
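As a sanity check on the "Efficiency Rank" column, the snippet below recomputes Accuracy / (Token Count × API Calls) from the figures in Table 4; the values are copied verbatim from the table, and the recomputed ranks match it up to rounding in the last digit.

```python
# Recomputing Table 4's "Efficiency Rank": Accuracy / (Token Count x API Calls).
rows = {
    "Base":         (791,  1.0, 68.5),
    "CoT":          (1804, 1.0, 69.5),
    "CoVe":         (2019, 1.0, 75.0),
    "RCI":          (1780, 1.2, 70.2),
    "Reflexion-v1": (1460, 3.5, 72.8),
    "Reflexion-v2": (1712, 4.0, 74.5),
    "RARR":         (533,  2.0, 76.3),
    "RATT":         (2185, 3.0, 78.7),
}
for method, (tokens, calls, acc) in rows.items():
    print(f"{method:<12}  {acc / (tokens * calls):.4f}")
# RARR (0.0716) ranks second only to Base (0.0866); RATT has the highest
# accuracy (78.7) but one of the lowest efficiency ranks (0.0120).
```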
4.8 Results of Reasoning LLMs
Table 5: Comparison of baseline performance among instruction-based and reasoning LLMs. Per-column maxima are bolded; per-column minima are underlined. Per-row maxima are highlighted with blue; per-row minima are highlighted with orange.
| Type | Model | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Open-source | LLaMA3.1-8B-Instruct | 75.80 | 76.16 | 9.74 | 81.55 | 53.88 | 74.37 | 73.44 |
| | LLaMA3.1-70B-Instruct | 81.28 | 81.88 | 15.62 | 90.63 | 62.65 | 78.21 | 62.18 |
| | Qwen2.5-7B-Instruct | 74.05 | 74.75 | 7.53 | 90.23 | 47.50 | 74.28 | 79.11 |
| | Qwen2.5-72B-Instruct | 83.63 | 81.92 | 9.85 | 91.11 | 57.58 | 66.91 | 86.13 |
| Closed-source | Claude3.5-Sonnet | 88.29 | 80.25 | 32.34 | 95.81 | 81.26 | 83.51 | 84.69 |
| | GPT-3.5 | 82.94 | 77.92 | 27.29 | 79.14 | 55.15 | 70.44 | 80.29 |
| | GPT-4o | 89.16 | 80.65 | 22.49 | 91.15 | 65.82 | 69.54 | 77.04 |
| Reasoning | QWQ-32B | 62.43 | 82.78 | 10.85 | 63.41 | 52.42 | 73.78 | 19.86 |
| | DeepSeek-V3 | 89.29 | 83.35 | 31.35 | 95.12 | 74.79 | 85.02 | 91.67 |
| | DeepSeek-R1 | 88.92 | 79.93 | 41.15 | 92.63 | 80.23 | 84.21 | 89.06 |
| | o3-mini | 81.24 | 74.28 | 27.17 | 92.45 | 78.26 | 67.97 | 85.75 |
To further investigate why DeepSeek-V3 shows limited improvement from self-correction, we compare instruction-based LLMs and reasoning LLMs under the Base setting alone. Table 5 shows that DeepSeek-V3 consistently achieves top-2 or even top-1 performance across all datasets. Combined with the results of Section 4.2, we find that this may be because reasoning LLMs already incorporate robust intrinsic correction mechanisms, limiting additional gains from external correction steps. In particular, DeepSeek-V3 integrates advanced reflection modules and comprehensive error-detection routines distilled from the R1 series, enabling thorough multi-step reasoning at the outset. This high baseline leaves little scope for further improvement through additional self-correction. Consequently, attempts to augment DeepSeek-V3 with further self-correction methods produce minimal net gains while incurring additional computational overhead.
[RQ3] For reasoning LLMs with built-in correction mechanisms, to what extent can the above self-correction methods provide additional benefits?
Conclusion: Reasoning LLMs (e.g., DeepSeek-V3) already embed sophisticated error-detection and correction processes. As a result, additional self-correction methods confer only marginal gains and may increase computational overhead, highlighting a performance ceiling for strong reasoning LLMs.
4.9 Failure Mode Taxonomy and Case Analysis
To better understand why different self-correction strategies succeed or fail, we conducted a supplemental failure-mode analysis on the GPQA (250 samples) and MATH (500 samples) datasets. Six major categories of failure were identified, alongside a residual "Other" category, as summarized in Table 6. Logical Oversight (32.9%) and Factual Inaccuracy (22.0%) dominate, implying that intrinsic corrections (S1) are suitable for reasoning-related errors, while external corrections (S2) excel at factual validation. These findings motivate our adaptive correction controller (Sec. 6), which dynamically selects correction strategies based on detected failure types; a minimal routing sketch follows Table 6.
Table 6: Error taxonomy of LLM self-correction failures across GPQA and MATH. Logical and factual errors dominate, suggesting distinct correction strategies (S1 vs. S2).
| Failure Type | Proportion (%) | Description | Suggested Correction |
| --- | --- | --- | --- |
| Logical Oversight | 32.9 | Reasoning step errors (e.g., misapplied formula) | S1: CoT, RCI |
| Factual Inaccuracy | 22.0 | Outdated or incorrect retrieved evidence | S2: RARR, RATT |
| Over-Reliance on Tools | 14.6 | Excessive external API calls causing inefficiency | S2 (bounded) |
| Ambiguous Output | 14.2 | Incomplete or vague final answer | S1 refinement |
| Contextual Misunderstanding | 10.8 | Misinterpreted question or missing context | S1+S2 hybrid |
| Computational Error | 3.5 | Faulty code execution or symbolic computation | S2 verification |
| Other | 2.0 | Miscellaneous or formatting issues | - |
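To illustrate how the adaptive controller mentioned above might consume this taxonomy, here is a minimal routing sketch in Python; the upstream failure classifier is assumed to exist elsewhere, and the strategy strings simply mirror the last column of Table 6.

```python
# Hypothetical routing from a detected failure type to correction strategies,
# following Table 6. The upstream failure classifier is assumed given.

STRATEGY = {
    "logical_oversight":           ["S1:CoT", "S1:RCI"],
    "factual_inaccuracy":          ["S2:RARR", "S2:RATT"],
    "over_reliance_on_tools":      ["S2:bounded"],        # cap external calls
    "ambiguous_output":            ["S1:refine"],
    "contextual_misunderstanding": ["S1+S2:hybrid"],
    "computational_error":         ["S2:verify"],         # re-run the code
}

def route(failure_type: str) -> list[str]:
    """Return the correction strategies suggested for a failure type."""
    return STRATEGY.get(failure_type, ["S1:refine"])  # fallback for "Other"

print(route("factual_inaccuracy"))  # ['S2:RARR', 'S2:RATT']
```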
5 Related Work
Self-Correction Methods. With the continuous development of self-correction techniques wei2025unsupervised ; xu2024llava ; fan2025missing ; li2025vision ; wei2025advancing , researchers have proposed various approaches to enhance the performance of large language models. Intrinsic methods, such as CoVe Dhuliawala2023ChainofVerificationRH and RCI Kim2023LanguageMC , improve the precision and consistency of generated content through self-supervised mechanisms within the model. At the same time, extrinsic methods, such as CRITIC Gou2023CRITICLL , RATT Zhang2024RATTAT , and RARR Gao2022RARRRA , rely on an external tool to evaluate and provide feedback on the generated outputs, guiding the model towards optimization. Fine-tuned methods, such as DCoT Puerto2024FineTuningWD , SuperCorrect Yang2024SuperCorrectSA , and SCORE Zhang2024SmallLM , further enhance the performance of the model by fine-tuning it for specific tasks, enabling more accurate and efficient handling of complex tasks. The continuous evolution of these methods provides diverse options and techniques for self-correction. Detailed discussions on the related work are provided in Appendix C.
Correction Benchmarks. Benchmarking the self-correction ability of LLMs tie2025mmmr ; duan2024gtbench ; zheng2024thinking ; duan2024reta ; cao2025survey ; zhou2024comprehensive ; shi2024optimization ; liu2025agentic has prompted the development of specialized benchmarks for different tasks. For instance, CriticBench Lin2024CriticBenchBL evaluates critique ability using discrimination results, but it struggles with task-specific fine-grained metrics and relies on costly human annotations or potentially biased GPT-4 outputs. In the realm of vision-language models, VISCO Wu2024VISCOBF focuses on self-correction in multimodal tasks, while Beyond Correctness Zheng2024BeyondCB specifically targets self-correction in large models for code generation. Our CorrectBench focuses on striking a trade-off between reasoning capability and efficiency, proposing more generalized and nuanced evaluation methods for complex reasoning tasks.
6 Future Improvements in Self-Correction
Looking ahead, several promising directions can further enhance the robustness and adaptability of self-correction in large language models. (1) Dynamic Adjustment: reinforcement learning or meta-controller mechanisms could dynamically select among correction strategies (S1–S3) based on task complexity or confidence levels, reducing redundant computation through early stopping. (2) Task-Specific Optimization: domain-oriented fine-tuning, such as the 10.2% improvement achieved by SuperCorrect on MATH, suggests the value of adaptive pipelines that align with domain reasoning depth and structure. (3) Human-in-the-Loop Integration: in sensitive fields like medicine or law, coupling automated correction with limited expert feedback could improve factual reliability and ensure accountable model behavior. (4) Meta-Controller Framework: developing a lightweight controller to detect and correct intermediate reasoning inconsistencies may help refine the chain-of-thought process and prevent logical drift. Further analysis and discussions of these future directions are provided in Appendix D.
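As a minimal illustration of direction (1), the sketch below wraps a correction chain with confidence-based early stopping; `answer_with_confidence` is a hypothetical callable returning a revised answer together with a self-reported confidence in [0, 1], and the threshold is an arbitrary example value.

```python
# Confidence-based early stopping over a chain of correction rounds.
# `answer_with_confidence` is a hypothetical (question, draft) -> (answer, conf)
# callable; stopping early avoids the redundant rounds noted above.

def correct_with_early_stop(question, answer_with_confidence,
                            max_rounds: int = 3, threshold: float = 0.9):
    answer, conf = answer_with_confidence(question, draft=None)
    for _ in range(max_rounds):
        if conf >= threshold:  # confident enough: skip the remaining rounds
            break
        answer, conf = answer_with_confidence(question, draft=answer)
    return answer, conf
```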
7 Conclusion
This paper presents CorrectBench, a comprehensive and extensible benchmark for evaluating the self-correction capabilities of large language models (LLMs) across diverse reasoning-intensive tasks, including commonsense inference, mathematical problem-solving, and code generation. Through systematic evaluation, we demonstrate that modern LLMs are increasingly capable of genuine self-correction, with reasoning-oriented models such as DeepSeek-R1 achieving substantial baseline accuracy and showing consistent improvement through iterative refinement. CorrectBench not only reveals the effectiveness of various correction paradigms (S1, S2, S3) but also exposes critical limitations, such as diminishing returns in deeper correction chains and resource inefficiencies in web-augmented methods. These insights emphasize the necessity for adaptive, cost-aware correction mechanisms that balance efficiency and reasoning depth. Overall, this study provides a unified foundation for understanding and benchmarking LLM self-correction. We hope this work serves as a stepping stone toward more trustworthy and self-improving language models.
Acknowledgments
This work is supported by National Natural Science Foundation of China (NSFC) under grant No. 62476107.
References
- [1] Alexandre Agossah, Frédérique Krupa, Matthieu Perreira da Silva, and Patrick Le Callet. Llm-based interaction for content generation: A case study on the perception of employees in an it department. Proceedings of the 2023 ACM International Conference on Interactive Media Experiences, 2023.
- [2] Anthropic. Claude. [Online]. Available: https://www.anthropic.com/claude, 2024.
- [3] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, et al. Constitutional ai: Harmlessness from ai feedback. ArXiv, abs/2212.08073, 2022.
- [4] Su Lin Blodgett, Solon Barocas, Hal Daumé, and Hanna M. Wallach. Language (technology) is power: A critical survey of "bias" in nlp. ArXiv, abs/2005.14050, 2020.
- [5] Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip Yu, and Lichao Sun. A survey of ai-generated content (aigc). ACM Computing Surveys, 57(5):1–38, 2025.
- [6] Canyu Chen and Kai Shu. Can llm-generated misinformation be detected? ArXiv, abs/2309.13788, 2023.
- [7] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, et al. Evaluating large language models trained on code. ArXiv, abs/2107.03374, 2021.
- [8] Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, and Doug Downey. Aqua: An adversarially authored question-answer dataset for common sense. 2019.
- [9] Yuxuan Chen, Rongpeng Li, Xiaoxue Yu, Zhifeng Zhao, and Honggang Zhang. Adaptive layer splitting for wireless llm inference in edge computing: A model-based reinforcement learning approach. arXiv preprint arXiv:2406.02616, 2024.
- [10] Ethan Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, and Pengfei Liu. Factool: Factuality detection in generative ai - a tool augmented framework for multi-task and multi-domain scenarios. ArXiv, abs/2307.13528, 2023.
- [11] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. ArXiv, abs/2110.14168, 2021.
- [12] DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bing-Li Wang, Bochao Wu, et al. Deepseek-v3 technical report. 2024.
- [13] Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston. Chain-of-verification reduces hallucination in large language models. ArXiv, abs/2309.11495, 2023.
- [14] Jinhao Duan, Shiqi Wang, James Diffenderfer, Lichao Sun, Tianlong Chen, Bhavya Kailkhura, and Kaidi Xu. Reta: Recursively thinking ahead to improve the strategic reasoning of large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2232–2246, 2024.
- [15] Jinhao Duan, Renming Zhang, James Diffenderfer, Bhavya Kailkhura, Lichao Sun, Elias Stengel-Eskin, Mohit Bansal, Tianlong Chen, and Kaidi Xu. Gtbench: Uncovering the strategic reasoning capabilities of llms via game-theoretic evaluations. Advances in Neural Information Processing Systems, 37:28219–28253, 2024.
- [16] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, et al. The llama 3 herd of models. ArXiv, abs/2407.21783, 2024.
- [17] Esin Durmus, He He, and Mona T. Diab. Feqa: A question answering evaluation framework for faithfulness assessment in abstractive summarization. ArXiv, abs/2005.03754, 2020.
- [18] Chenrui Fan, Ming Li, Lichao Sun, and Tianyi Zhou. Missing premise exacerbates overthinking: Are reasoning models losing critical thinking skill? arXiv preprint arXiv:2504.06514, 2025.
- [19] Shangbin Feng, Chan Young Park, Yuhan Liu, and Yulia Tsvetkov. From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair nlp models. In Annual Meeting of the Association for Computational Linguistics, 2023.
- [20] Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, N. Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. Rarr: Researching and revising what language models say, using language models. In Annual Meeting of the Association for Computational Linguistics, 2022.
- [21] Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. Critic: Large language models can self-correct with tool-interactive critiquing. ArXiv, abs/2305.11738, 2023.
- [22] Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alexa Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, A. Doucet, Orhan Firat, and Nando de Freitas. Reinforced self-training (rest) for language modeling. ArXiv, abs/2308.08998, 2023.
- [23] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. ArXiv, abs/2103.03874, 2021.
- [24] Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. Large language models can self-improve. ArXiv, abs/2210.11610, 2022.
- [25] Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. Large language models cannot self-correct reasoning yet. ArXiv, abs/2310.01798, 2023.
- [26] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.
- [27] Dongwei Jiang, Jingyu Zhang, Orion Weller, Nathaniel Weir, Benjamin Van Durme, and Daniel Khashabi. Self-[in]correct: Llms struggle with discriminating self-generated responses. 2024.
- [28] Ryo Kamoi, Sarkar Snigdha Sarathi Das, Renze Lou, Jihyun Janice Ahn, Yilun Zhao, Xiaoxin Lu, Nan Zhang, Yusen Zhang, Ranran Haoran Zhang, Sujeeth Reddy Vummanthala, Salika Dave, Shaobo Qin, Arman Cohan, Wenpeng Yin, and Rui Zhang. Evaluating llms at detecting errors in llm responses. ArXiv, abs/2404.03602, 2024.
- [29] Ryo Kamoi, Yusen Zhang, Nan Zhang, Jiawei Han, and Rui Zhang. When can llms actually correct their own mistakes? a critical survey of self-correction of llms. Transactions of the Association for Computational Linguistics, 12:1417–1440, 2024.
- [30] Geunwoo Kim, Pierre Baldi, and Stephen Marcus McAleer. Language models can solve computer tasks. ArXiv, abs/2303.17491, 2023.
- [31] Jiayi Kuang, Jingyou Xie, Haohao Luo, Ronghao Li, Zhe Xu, Xianfeng Cheng, Yinghui Li, Xika Lin, and Ying Shen. Natural language understanding and inference with mllm in visual question answering: A survey. ArXiv, abs/2411.17558, 2024.
- [32] Yuting Li, Lai Wei, Kaipeng Zheng, Jingyuan Huang, Linghe Kong, Lichao Sun, and Weiran Huang. Vision matters: Simple visual perturbations can boost multimodal math reasoning. arXiv preprint arXiv:2506.09736, 2025.
- [33] Bill Yuchen Lin, Yicheng Fu, Karina Yang, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Prithviraj Ammanabrolu, Yejin Choi, and Xiang Ren. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. Advances in Neural Information Processing Systems, 36, 2024.
- [34] Zicheng Lin, Zhibin Gou, Tian Liang, Ruilin Luo, Haowei Liu, and Yujiu Yang. Criticbench: Benchmarking llms for critique-correct reasoning. In Annual Meeting of the Association for Computational Linguistics, 2024.
- [35] Dancheng Liu, Amir Nassereldine, Ziming Yang, Chenhui Xu, Yuting Hu, Jiajie Li, Utkarsh Kumar, Changjae Lee, and Jinjun Xiong. Large language models have intrinsic self-correction ability. ArXiv, abs/2406.15673, 2024.
- [36] Guang-Da Liu, Haitao Mao, Bochuan Cao, Zhiyu Xue, Kristen Marie Johnson, Jiliang Tang, and Rongrong Wang. On the intrinsic self-correction capability of llms: Uncertainty and latent concept. ArXiv, abs/2406.02378, 2024.
- [37] Yixin Liu, Yonghui Wu, Denghui Zhang, and Lichao Sun. Agentic autosurvey: Let llms survey llms. arXiv preprint arXiv:2509.18661, 2025.
- [38] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. ArXiv, abs/2303.17651, 2023.
- [39] Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. Generating training data with language models: Towards zero-shot language understanding. ArXiv, abs/2202.04538, 2022.
- [40] Meta. Meta Llama 3. https://llama.meta.com/docs/model-cards-andprompt-formats/meta-llama-3/, 2024.
- [41] Abhika Mishra, Akari Asai, Vidhisha Balachandran, Yizhong Wang, Graham Neubig, Yulia Tsvetkov, and Hannaneh Hajishirzi. Fine-grained hallucination detection and editing for language models. ArXiv, abs/2401.06855, 2024.
- [42] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730–27744, 2022.
- [43] Jing-Cheng Pang, Pengyuan Wang, Kaiyuan Li, Xiong-Hui Chen, Jiacheng Xu, Zongzhang Zhang, and Yang Yu. Language model self-improvement by reinforcement learning contemplation. ArXiv, abs/2305.14483, 2023.
- [44] Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi Faltings. Refiner: Reasoning feedback on intermediate representations. In Conference of the European Chapter of the Association for Computational Linguistics, 2023.
- [45] Haritz Puerto, Tilek Chubakov, Xiaodan Zhu, Harish Tayyar Madabushi, and Iryna Gurevych. Fine-tuning with divergent chains of thought boosts reasoning through self-correction in language models. ArXiv, abs/2407.03181, 2024.
- [46] Zac Pullar-Strecker, Katharina Dost, Eibe Frank, and Jörg Wicker. Hitting the target: stopping active learning at the cost-based optimum. Machine Learning, 113(4):1529–1547, 2024.
- [47] Rifki Afina Putri, Faiz Ghifari Haznitrama, Dea Adhista, and Alice Oh. Can llm generate culturally relevant commonsense qa data? case study in indonesian and sundanese. In Conference on Empirical Methods in Natural Language Processing, 2024.
- [48] Biqing Qi, Xinquan Chen, Junqi Gao, Dong Li, Jianxing Liu, Ligang Wu, and Bowen Zhou. Interactive continual learning: Fast and slow thinking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12882–12892, 2024.
- [49] Maribeth Rauh, John F. J. Mellor, Jonathan Uesato, Po-Sen Huang, Johannes Welbl, Laura Weidinger, Sumanth Dathathri, Amelia Glaese, Geoffrey Irving, Iason Gabriel, William S. Isaac, and Lisa Anne Hendricks. Characteristics of harmful text: Towards rigorous benchmarking of language models. ArXiv, abs/2206.08325, 2022.
- [50] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. Gpqa: A graduate-level google-proof q&a benchmark. ArXiv, abs/2311.12022, 2023.
- [51] Carl Orge Retzlaff, Srijita Das, Christabel Wayllace, Payam Mousavi, Mohammad Afshari, Tianpei Yang, Anna Saranti, Alessa Angerschmid, Matthew E Taylor, and Andreas Holzinger. Human-in-the-loop reinforcement learning: A survey and position on requirements, challenges, and opportunities. Journal of Artificial Intelligence Research, 79:359–415, 2024.
- [52] Thomas Scialom, Paul-Alexis Dray, Patrick Gallinari, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, and Alex Wang. Questeval: Summarization asks for fact-based evaluation. In Conference on Empirical Methods in Natural Language Processing, 2021.
- [53] Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, and Neil Zhenqiang Gong. Optimization-based prompt injection attack to llm-as-a-judge. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, pages 660–674, 2024.
- [54] Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: language agents with verbal reinforcement learning. In Neural Information Processing Systems, 2023.
- [55] Chuanneng Sun, Songjun Huang, and Dario Pompili. Llm-based multi-agent reinforcement learning: Current and future directions. arXiv preprint arXiv:2405.11106, 2024.
- [56] Guiyao Tie, Zeli Zhao, Dingjie Song, Fuyang Wei, Rong Zhou, Yurou Dai, Wen Yin, Zhejian Yang, Jiangyue Yan, Yao Su, et al. A survey on post-training of large language models. arXiv e-prints, 2025.
- [57] Guiyao Tie, Xueyang Zhou, Tianhe Gu, Ruihang Zhang, Chaoran Hu, Sizhe Zhang, Mengqu Sun, Yan Zhang, Pan Zhou, and Lichao Sun. Mmmr: Benchmarking massive multi-modal reasoning tasks. arXiv preprint arXiv:2505.16459, 2025.
- [58] Gladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor Carbune. Llms cannot find reasoning errors, but can correct them! In Annual Meeting of the Association for Computational Linguistics, 2023.
- [59] Alex Wang, Kyunghyun Cho, and Mike Lewis. Asking and answering questions to evaluate the factual consistency of summaries. ArXiv, abs/2004.04228, 2020.
- [60] Yifei Wang, Yuyang Wu, Zeming Wei, Stefanie Jegelka, and Yisen Wang. A theoretical understanding of self-correction through in-context alignment. ArXiv, abs/2405.18634, 2024.
- [61] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed H. Chi, F. Xia, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903, 2022.
- [62] Lai Wei, Yuting Li, Chen Wang, Yue Wang, Linghe Kong, Weiran Huang, and Lichao Sun. Unsupervised post-training for multi-modal llm reasoning via grpo. arXiv preprint arXiv:2505.22453, 2025.
- [63] Lai Wei, Yuting Li, Kaipeng Zheng, Chen Wang, Yue Wang, Linghe Kong, Lichao Sun, and Weiran Huang. Advancing multimodal reasoning via reinforcement learning with cold start. arXiv preprint arXiv:2505.22334, 2025.
- [64] Xueqing Wu, Yuheng Ding, Bingxuan Li, Pan Lu, Da Yin, Kai-Wei Chang, and Nanyun Peng. Visco: Benchmarking fine-grained critique and correction towards self-improvement in visual reasoning. ArXiv, abs/2412.02172, 2024.
- [65] Guowei Xu, Peng Jin, Ziang Wu, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step. arXiv preprint arXiv:2411.10440, 2024.
- [66] An Yang, Baosong Yang, Binyuan Hui, et al. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024.
- [67] Chang Yang, Xinrun Wang, Junzhe Jiang, Qinggang Zhang, and Xiao Huang. Evaluating world models with llm for decision making. ArXiv, abs/2411.08794, 2024.
- [68] Hanqing Yang, Marie Siew, and Carlee Joe-Wong. An llm-based digital twin for optimizing human-in-the loop systems. arXiv preprint arXiv:2403.16809, 2024.
- [69] Ling Yang, Zhaochen Yu, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez, Bin Cui, and Shuicheng Yan. Supercorrect: Supervising and correcting language models with error-driven insights. ArXiv, abs/2410.09008, 2024.
- [70] Qwen An Yang, Baosong Yang, Beichen Zhang, et al. Qwen2.5 technical report. 2024.
- [71] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing, 2018.
- [72] Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang Shen, Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, and Xuanjing Huang. A comprehensive capability analysis of gpt-3 and gpt-3.5 series models. ArXiv, abs/2303.10420, 2023.
- [73] E. Zelikman, Yuhuai Wu, and Noah D. Goodman. Star: Bootstrapping reasoning with reasoning. 2022.
- [74] Jinghan Zhang, Xiting Wang, Weijieying Ren, Lu Jiang, Dongjie Wang, and Kunpeng Liu. Ratt: A thought structure for coherent and correct llm reasoning. ArXiv, abs/2406.02746, 2024.
- [75] Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A. Smith. How language model hallucinations can snowball. ArXiv, abs/2305.13534, 2023.
- [76] Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, and Lu Wang. Small language models need strong verifiers to self-correct reasoning. In Annual Meeting of the Association for Computational Linguistics, 2024.
- [77] Haojie Zheng, Tianyang Xu, Hanchi Sun, Shu Pu, Ruoxi Chen, and Lichao Sun. Thinking before looking: Improving multimodal llm reasoning via mitigating visual hallucination. arXiv preprint arXiv:2411.12591, 2024.
- [78] Jiasheng Zheng, Boxi Cao, Zhengzhao Ma, Ruotong Pan, Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. Beyond correctness: Benchmarking multi-dimensional code generation for large language models. ArXiv, abs/2407.11470, 2024.
- [79] Ce Zhou, Qian Li, Chen Li, Jun Yu, Yixin Liu, Guangjing Wang, Kai Zhang, Cheng Ji, Qiben Yan, Lifang He, et al. A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. International Journal of Machine Learning and Cybernetics, pages 1–65, 2024.
NeurIPS Paper Checklist
1. Claims
1. Question: Do the main claims made in the abstract and introduction accurately reflect the paperâs contributions and scope?
1. Answer: [Yes]
1. Justification: The abstract and introduction clearly state the main contributions of the paper, including evaluating the performance of three types of self-correction methods on three types of tasks, exploring the effects of mixture-based methods, and testing the correction performance of the reasoning model. These contributions are supported by the theoretical analysis and experimental results in the main text. In order to avoid over-generalization, the scope and limitations of the study are also discussed in the text.
1. Guidelines:
- The answer NA means that the abstract and introduction do not include the claims made in the paper.
- The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
- The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
- It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.
1. Limitations
1. Question: Does the paper discuss the limitations of the work performed by the authors?
1. Answer: [Yes]
1. Justification: Section 6 discusses the limitations of our approach and future directions for scalability, including adaptive correction pipelines, integration in agents, and human-in-the-loop correction.
1. Guidelines:
- The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
- The authors are encouraged to create a separate "Limitations" section in their paper.
- The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
- The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
- The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
- The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
- If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
- While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.
1. Theory assumptions and proofs
1. Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
1. Answer: [N/A]
1. Justification: This paper does not include theoretical results.
1. Guidelines:
- The answer NA means that the paper does not include theoretical results.
- All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
- All assumptions should be clearly stated or referenced in the statement of any theorems.
- The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
- Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
- Theorems and Lemmas that the proof relies upon should be properly referenced.
1. Experimental result reproducibility
1. Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
1. Answer: [Yes]
1. Justification: We provide all necessary details to reproduce the main experimental results. This includes complete descriptions of datasets, models, evaluation metrics, and prompt implementation details in the Appendix (in particular Appendix G).
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
- If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
- Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
- While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example
1. If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
1. If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
1. If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
1. We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.
1. Open access to data and code
1. Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
1. Answer: [Yes]
1. Justification: We provide anonymized access to key code and data used in the experiments, with detailed instructions for reproducing the main results, including environment setup, running commands, and datasets.
1. Guidelines:
- The answer NA means that paper does not include experiments requiring code.
- Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
- The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
- The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
- The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
- At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
- Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
1. Experimental setting/details
1. Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
1. Answer: [Yes]
1. Justification: We specify all necessary experimental details to understand and interpret our results. This includes the data splits and model configuration.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
- The full details can be provided either with the code, in appendix, or as supplemental material.
1. Experiment statistical significance
1. Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
1. Answer: [No]
1. Justification: While we provide quantitative results for all experiments, we do not include error bars or statistical significance tests.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
- The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
- The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
- The assumptions made should be given (e.g., Normally distributed errors).
- It should be clear whether the error bar is the standard deviation or the standard error of the mean.
- It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
- For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
- If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.
1. Experiments compute resources
1. Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
1. Answer: [Yes]
1. Justification: We report the response time cost of calling API by different methods in Section 4.7.
1. Guidelines:
- The answer NA means that the paper does not include experiments.
- The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
- The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
- The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).
1. Code of ethics
1. Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
1. Answer: [Yes]
1. Justification: We have carefully reviewed the NeurIPS Code of Ethics and confirm that our research complies with all relevant ethical guidelines. Our work does not involve human subjects, private or sensitive data, or potentially harmful applications.
1. Guidelines:
- The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
- If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.
- The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).
1. Broader impacts
1. Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
1. Answer: [Yes]
1. Justification: We discuss the potential broader societal impacts of our work in the Impact Statement section. On the positive side, our method can help researchers understand the correction ability of LLMs better.
1. Guidelines:
- The answer NA means that there is no societal impact of the work performed.
- If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
- Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
- The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.
- The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
- If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).
1. Safeguards
1. Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
1. Answer: [No]
1. Justification: The paper does not introduce or release any models or datasets that pose significant risks of misuse. Therefore, no specific safeguards are necessary.
1. Guidelines:
- The answer NA means that the paper poses no such risks.
- Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
- Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
- We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.
1. Licenses for existing assets
1. Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
1. Answer: [Yes]
1. Justification: We make use of publicly available datasets and code assets, all of which are properly cited in the main paper. For each asset, we explicitly state the license (e.g., MIT, Apache 2.0, CC-BY 4.0) and ensure our use complies with the terms. Version information and source URLs are also provided where applicable.
1. Guidelines:
- The answer NA means that the paper does not use existing assets.
- The authors should cite the original paper that produced the code package or dataset.
- The authors should state which version of the asset is used and, if possible, include a URL.
- The name of the license (e.g., CC-BY 4.0) should be included for each asset.
- For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
- If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
- For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
- If this information is not available online, the authors are encouraged to reach out to the assetâs creators.
1. New assets
1. Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
1. Answer: [Yes]
1. Justification: We introduce new assets in the paper, including the collected datasets for evaluating self-correction of LLMs. We provide complete documentation alongside these assets, including descriptions of their structure, usage instructions, licensing terms, known limitations, and guidelines for responsible use. All release materials are anonymized and hosted in accordance with NeurIPS submission policies.
1. Guidelines:
- The answer NA means that the paper does not release new assets.
- Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
- The paper should discuss whether and how consent was obtained from people whose asset is used.
- At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.
1. Crowdsourcing and research with human subjects
1. Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
1. Answer: [N/A]
1. Justification: The paper does not involve crowdsourcing or research with human subjects. All experiments are performed using synthetic or publicly available machine-generated datasets.
1. Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
- According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.
1. Institutional review board (IRB) approvals or equivalent for research with human subjects
1. Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
1. Answer: [N/A]
1. Justification: The paper does not involve crowdsourcing or research with human subjects, and thus IRB approval is not applicable.
1. Guidelines:
- The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
- Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
- We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
- For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
1. Declaration of LLM usage
1. Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
1. Answer: [Yes]
1. Justification: This work involves the use of LLMs as a core component of our methodology. Specifically, we use instruction-based LLMs and reasoning-based LLMs for evaluations. The role of the LLM in our pipeline is described in detail in Section 2.3.
1. Guidelines:
- The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
- Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
Appendix A Overview of Correction Performance
Figure 8 compares the baseline performance with the average performance of the intrinsic correction and external correction methods across different tasks and LLMs. It can be observed that both intrinsic correction and external correction outperform the baseline on most models and tasks.
[Figure 8 image: a 3×3 grid of radar charts, one per model (LLaMA3.1-8B-Instruct, LLaMA3.1-70B-Instruct, Qwen2.5-7B-Instruct, Qwen2.5-72B-Instruct, Claude3.5-Sonnet, GPT-3.5, GPT-4o, QWQ-32B-Instruct, DeepSeek-V3). Each chart plots accuracy (0.0–0.8) on seven axes (GPQA, CS-QA, HotpotQA, HumanEval, MATH, AQUA, GSM8K) for Base (blue), S1 intrinsic correction (orange), and S2 external correction (green). S2 generally encloses the largest area; GSM8K scores are consistently the highest and MATH scores the lowest.]
Figure 8: Comparative performance of different LLMs across various self-correction types and task scenarios.
Appendix B Dataset
B.1 Data Setting
This benchmark includes datasets from various domains as shown in Table 7: GSM8K, AQuA, and MATH for mathematical reasoning, HotpotQA, CommonsenseQA, and GPQA for commonsense reasoning, and HumanEval for code generation. GSM8K and AQuA feature high school-level math and quantitative reasoning problems, while MATH provides a broader set of mathematical challenges. HotpotQA and CommonsenseQA (CS-QA) test multi-hop and commonsense reasoning, respectively, with GPQA expanding on the latter by including more diverse questions. HumanEval consists of programming problems to assess code generation abilities.
Table 7: Statistics of the datasets used in CorrectBench.
| Type | Dataset | Samples | License |
| --- | --- | --- | --- |
| Commonsense | HotpotQA | 300 | CC BY-SA 4.0 |
| Commonsense | CommonsenseQA | 300 | MIT License |
| Commonsense | GPQA | 250 | Apache License 2.0 |
| Math | GSM8K | 250 | MIT License |
| Math | AQuA | 254 | Apache License 2.0 |
| Math | MATH | 500 | MIT License |
| Coding | HumanEval | 164 | MIT License |
| All | - | 2018 | - |
To ensure consistency and reproducibility, each dataset is sampled to include 100 examples, selected through a randomized process. To refine the dataset quality, we remove outliers or irrelevant samples, ensuring a more accurate representation of typical error patterns. For mathematical reasoning datasets such as GSM8K, AQuA, and MATH, we ensure that selected problems span diverse difficulty levels to capture a comprehensive assessment of model performance. Similarly, commonsense reasoning datasets (HotpotQA, CS-QA, and GPQA) are curated to include a balanced mix of multi-hop and diverse reasoning tasks. For HumanEval, programming problems are filtered to maintain relevance to standard coding scenarios while avoiding overly specialized or ambiguous cases.
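As a rough sketch of this subsampling step (the seed value, record format, and filtering predicate below are illustrative assumptions, not our exact pipeline):

```python
import random

def subsample(dataset, n=100, seed=42, keep=lambda ex: True):
    """Draw a fixed-seed random subsample of n examples that pass the keep filter."""
    pool = [ex for ex in dataset if keep(ex)]    # drop outliers / irrelevant samples
    rng = random.Random(seed)                    # fixed seed for reproducible selection
    return rng.sample(pool, k=min(n, len(pool)))

# Toy stand-in for a loaded dataset; real entries would come from, e.g., GSM8K.
dataset = [{"question": f"Q{i}", "answer": str(i)} for i in range(250)]
subset = subsample(dataset, n=100, keep=lambda ex: ex["question"].strip() != "")
print(len(subset))  # 100
```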
B.2 Dataset and LLM Selection
Table 8 summarizes the experimental evaluation of various self-correction methods across multiple datasets. A "✓" indicates that the corresponding method is evaluated on the dataset, whereas a "-" indicates that no experiments were conducted.
Table 8: Selection of different datasets and LLMs for all self-correction methods.
| Type | Methods | HotpotQA | CS-QA | GPQA | GSM8K | AQUA | MATH | HumanEval | M1 | M2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S1 | RCI | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| S1 | CoVe | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| S1 | Self-Refine | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ |
| S1 | Reflexion-v1 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ |
| S2 | Reflexion-v2 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | - | ✓ | ✓ |
| S2 | RARR | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| S2 | RATT | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| S2 | CRITIC | - | ✓ | - | ✓ | - | - | - | ✓ | ✓ |
| S3 | DCoT | - | ✓ | - | ✓ | - | - | - | - | - |
| S3 | SCORE | - | ✓ | - | ✓ | - | - | - | - | - |
| S3 | Supercorrect | - | ✓ | - | ✓ | - | ✓ | ✓ | - | - |
B.3 Bootstrap Confidence Intervals for Resource Cost Results
To validate the robustness of the trade-offs in Table 4, we report 95% bootstrap confidence intervals for each methodâs accuracy across five random subsamples. The mean accuracy difference between S1 and S2 methods is statistically significant ( $p<0.05$ ), confirming that cost-normalized accuracy scales with correction depth.
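For concreteness, a minimal sketch of the percentile-bootstrap procedure is shown below; the subsample accuracies are hypothetical placeholders, not our measured values.

```python
import numpy as np

def bootstrap_ci(values, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of `values`."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    # Resample with replacement and record the mean of each resample.
    means = np.array([rng.choice(values, size=len(values), replace=True).mean()
                      for _ in range(n_resamples)])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return values.mean(), (lo, hi)

# Hypothetical per-subsample accuracies for an S1 and an S2 method.
s1_acc = [0.62, 0.65, 0.61, 0.64, 0.63]
s2_acc = [0.68, 0.71, 0.69, 0.72, 0.70]
for name, acc in [("S1", s1_acc), ("S2", s2_acc)]:
    mean, (lo, hi) = bootstrap_ci(acc)
    print(f"{name}: mean={mean:.3f}, 95% CI=[{lo:.3f}, {hi:.3f}]")
```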
Appendix C Comprehensive Related Works
C.1 Self-Correction Methods
Theoretical Perspectives. Recent research has delved into the theoretical foundations of self-correction in large language models (LLMs), particularly examining how these models refine their outputs through iterative self-examination. Key transformer components, such as softmax attention and multi-head attention, have been identified as central mechanisms in enabling this self-correction process [60]. Several studies have highlighted the limitations of intrinsic self-correction. For instance, LLMs often encounter challenges when attempting to self-correct reasoning tasks without external feedback, resulting in degraded performance in specific scenarios [25]. Nevertheless, other research has demonstrated that intrinsic self-correction can be effective under certain conditions, such as employing zero-temperature settings and fair prompts. These conditions help LLMs enhance accuracy across various tasks by providing a more structured and deterministic framework for response refinement [35]. Further investigations reveal that intrinsic self-correction processes can converge over multiple iterations, yielding stable performance improvements, particularly in iterative and complex tasks [36]. However, some studies challenge the notion that LLMs can consistently enhance their outputs through self-correction alone. These findings suggest that LLMs often struggle to differentiate between previously generated alternatives, which limits the effectiveness of their self-correction mechanisms [27]. Additionally, innovative approaches such as the Divergent CoT (DCoT) method have been proposed. By generating and comparing multiple divergent reasoning chains, this method improves reasoning accuracy and facilitates more effective self-correction during complex reasoning tasks [45].
Self-Detection of Mistakes. Self-detection of mistakes in LLM responses, often with the aid of external information, has been widely explored across several domains. In misinformation detection, numerous studies have investigated how LLMs can identify and correct errors in the information they generate [75, 10, 6, 41]. Similarly, context-faithfulness, which examines whether LLMs maintain consistency with the context in which they are deployed, has also been a focal point in recent research [59, 17, 52]. Other works have concentrated on harmful content detection, where LLMs are tasked with identifying potentially harmful or offensive outputs [49], as well as bias detection, which aims to identify and mitigate biases in LLM responses [4, 19]. Despite significant progress, recent studies have shown that even state-of-the-art LLMs struggle to reliably detect their own mistakes across a variety of tasks [58, 28]. For instance, research demonstrates that LLMs often fail to identify errors in their outputs, even when performing complex reasoning or content generation tasks. These findings highlight a crucial gap in the current self-correction capabilities of LLMs, underscoring the need for further research into more robust error detection and correction mechanisms.
Fine-tuning Methods. Self-training, or self-improvement, involves models utilizing their own responses to enhance performance. Several studies have explored the use of self-evaluation or self-correction for generating training data. For example, [3] and [22] leverage self-correction as a means to create training datasets, while [43] employ self-evaluation as a training signal to improve model performance. Another direction within self-training focuses on improving reasoning in LLMs by selecting high-quality generated outputs. [73] enhance reasoning by selecting outputs based on ground-truth final answers, whereas [24] emphasize self-consistency as a method for refining reasoning. [39] adopt a different approach by using high-confidence sentences generated by LLMs to train classifiers, demonstrating the potential of leveraging model confidence in improving task performance.
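As a minimal sketch of self-consistency-based selection in the spirit of [24] (the sampler interface and majority-vote answer extraction are simplifying assumptions):

```python
import random
from collections import Counter

def self_consistency_answer(sample_fn, prompt, n_samples=8):
    """Sample several reasoning chains and keep the majority final answer;
    the agreeing chains could then be retained as self-training data."""
    answers = [sample_fn(prompt) for _ in range(n_samples)]
    final, votes = Counter(answers).most_common(1)[0]
    return final, votes / n_samples  # answer plus an agreement score

# Dummy sampler standing in for an LLM that returns a final answer string.
def fake_sampler(prompt):
    return random.choice(["4", "4", "4", "5"])

print(self_consistency_answer(fake_sampler, "What is 2 + 2?"))
```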
C.2 Correction Benchmarks
Benchmarking the self-correction ability of LLMs has prompted the development of specialized benchmarks for different tasks. For instance, CriticBench [34] evaluates critique ability using discrimination results, but it lacks task-specific fine-grained metrics and relies on costly human annotations or potentially biased GPT-4 outputs. In the realm of vision-language models, VISCO [64] focuses on self-correction in multimodal tasks, while Beyond Correctness [78] specifically targets self-correction in large models for code generation. Our CorrectBench focuses on striking a balance between reasoning capability and efficiency, proposing more generalized and nuanced evaluation methods for complex reasoning tasks.
Appendix D Limitations and Future Directions
Adaptive Correction Pipelines. The iterative nature of self-correction in LLMs presents an opportunity to develop adaptive correction pipelines that dynamically determine when and how to refine model outputs. Current self-correction methods often employ a fixed number of refinement steps, which may not be optimal for all tasks or inputs. By investigating optimal stopping criteria [46], researchers can design systems that allocate computational resources more efficiently, thereby balancing accuracy and efficiency. Techniques such as reinforcement learning and meta-learning could be leveraged to train models that autonomously decide the appropriate number of correction iterations based on the complexity and confidence of their responses [9]. Furthermore, adaptive pipelines can incorporate uncertainty estimation to identify instances where additional refinement is necessary, potentially reducing unnecessary computation for straightforward queries while allocating more resources to complex or ambiguous cases [48].
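The sketch below illustrates one possible confidence-based stopping criterion; the generate/refine callables and the confidence threshold are hypothetical placeholders rather than an existing API.

```python
from typing import Callable, Tuple

def adaptive_refine(
    generate: Callable[[str], Tuple[str, float]],
    refine: Callable[[str, str], Tuple[str, float]],
    prompt: str,
    max_iters: int = 5,
    conf_threshold: float = 0.9,
) -> str:
    """Iteratively refine an answer, stopping once the model's self-reported
    confidence clears a threshold or the iteration budget is exhausted."""
    answer, confidence = generate(prompt)
    for _ in range(max_iters):
        if confidence >= conf_threshold:  # easy queries exit early, saving compute
            break
        answer, confidence = refine(prompt, answer)
    return answer

# Dummy stand-ins so the sketch runs; a real pipeline would call an LLM here.
def fake_generate(prompt): return "draft answer", 0.5
def fake_refine(prompt, answer): return answer + " (revised)", 0.95

print(adaptive_refine(fake_generate, fake_refine, "What is 2 + 2?"))
```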
Integration for Agents. Incorporating self-correction mechanisms into autonomous LLM-based agents can significantly enhance their functionality beyond static conversational roles. Agents equipped with self-correction capabilities are better suited to perform complex, multi-step tasks that require continuous adaptation and error mitigation. This integration can enable agents to engage in more sophisticated interactions, such as dynamic problem-solving, real-time data analysis, and interactive decision-making in diverse domains [55, 33]. By embedding self-correction within the agentâs operational framework, these systems can achieve higher levels of autonomy and reliability, making them more effective in real-world applications. Additionally, the ability to self-correct allows agents to better handle unforeseen scenarios and maintain performance consistency across varying contexts, thereby broadening their applicability and utility.
Human-in-the-Loop Correction. While automated self-correction methods offer significant improvements in model accuracy and reliability, integrating human feedback can further enhance these outcomes, especially in high-stakes or sensitive applications. Human-in-the-loop (HITL) correction involves leveraging expert knowledge to validate and refine model outputs, ensuring that the responses meet stringent quality and safety standards [51]. Effective HITL systems can combine the strengths of automated refinement with the nuanced understanding of human experts, thereby addressing limitations inherent in purely algorithmic approaches. For instance, in domains such as medical diagnostics, legal reasoning, or financial analysis, expert oversight can prevent critical errors and ensure that the model adheres to ethical guidelines and regulatory requirements. Future research should focus on developing seamless interfaces for human-AI collaboration, optimizing the balance between automation and manual intervention, and exploring scalable methods for incorporating diverse expert inputs without compromising efficiency [68].
Appendix E Additional Experiments for Performance Improvement
This section evaluates the performance gains from self-correction methods across various LLMs and datasets. Figure 9 shows the performance gains of the CoT method across models and datasets. Most models surpass the baseline (the y=0 line, corresponding to Base), though some fall short. For instance, LLaMA3.1-8B-Instruct performs poorly overall, and no model achieves improvements on the HumanEval dataset.
E.1 Performance Gains for Intrinsic Correction methods
Performance Gains for RCI. Figure 10 illustrates the performance gains from the RCI method across all LLMs on the evaluated datasets. Over half the data points surpass the baseline, demonstrating its effectiveness. Notably, for GPT-4o, nearly all data points exceed the baseline, highlighting significant improvements.
Performance Gains for CoVe. As depicted in Figure 11, the CoVe method delivers significant performance enhancements across all LLMs on the evaluated datasets. The majority of data points surpass the baseline, with substantial improvement magnitudes, underscoring its effectiveness.
Performance Gains for Self-Refine. Figure 12 demonstrates the performance gains achieved by the Self-Refine method across all LLMs on the selected datasets. Nearly all data points lie above the baseline; in particular, on the GPQA dataset, all LLMs exhibit significant performance improvements.
Performance Changes for Reflexion-v1. Figure 13 depicts the performance outcomes of the Reflexion-v1 method without tools across all LLMs on the evaluated datasets. In this scenario, nearly all data points fall below the baseline, indicating a performance decline across most models and datasets.
E.2 Performance Gains for External Correction methods
Performance Gains for Reflexion-v2. In contrast to the results without tools, Figure 14 highlights the performance improvements achieved by Reflexion-v2 with tools. The majority of data points surpass the baseline, demonstrating the effectiveness of tool integration in enhancing performance.
Performance Gains for RARR. Figure 15 illustrates the performance improvements resulting from the RARR method across all LLMs on the evaluated datasets. Nearly all data points exceed the baseline. Specifically, the GPQA dataset shows significant performance enhancements across all models.
Performance Gains for RATT. Figure 16 showcases the performance gains achieved by the RATT method across various LLMs on the evaluated datasets. Most data points lie above the baseline, reflecting the positive impact of the RATT method in improving model performance.
[Figure 9 image: scatter plot of Δ (%) relative to the Base prompt (y-axis, −30 to 30; dashed baseline at Δ=0) across the seven datasets (HotpotQA, CS-QA, AQUA, GSM8K, MATH, GPQA, HumanEval) for all evaluated LLMs and the Open/Close/Reasoning LLM group averages; most points lie above the baseline, with the sharpest declines on HumanEval.]
Figure 9: Performance Gains for CoT.
[Figure 10 image: Δ (%) vs. dataset scatter plot in the same format as Figure 9, for the RCI method.]
Figure 10: Performance Gains for RCI.
[Figure 11 image: Δ (%) vs. dataset scatter plot in the same format as Figure 9, for the CoVe method; gains are largest on AQUA and GPQA.]
Figure 11: Performance Gains for CoVe.
[Figure 12 image: Δ (%) vs. dataset scatter plot in the same format as Figure 9, for the Self-Refine method.]
Figure 12: Performance Gains for Self-Refine.
[Figure 13 image: Δ (%) vs. dataset scatter plot in the same format as Figure 9, for Reflexion-v1; most points fall below the baseline.]
Figure 13: Performance Gains for Reflexion-v1.
[Figure 14 image: Δ (%) vs. dataset scatter plot in the same format as Figure 9, for Reflexion-v2.]
Figure 14: Performance Gains for Reflexion-v2.
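The Δ (%) values plotted in these gain figures are accuracy differences from a baseline strategy (assumed here to be the Base method). A minimal sketch of how such per-dataset gains could be computed; the accuracy numbers below are placeholders, not results from the paper:

```python
# Per-dataset performance gain (Delta, in percentage points) of a
# self-correction method over the baseline strategy.
# All accuracy values here are illustrative placeholders.
base_acc = {"GSM8K": 91.0, "MATH": 62.0, "GPQA": 16.0}
method_acc = {"GSM8K": 93.0, "MATH": 72.0, "GPQA": 36.0}

def performance_gains(base: dict, method: dict) -> dict:
    """Delta (%) = method accuracy minus baseline accuracy, per dataset."""
    return {ds: method[ds] - base[ds] for ds in base}

for ds, delta in performance_gains(base_acc, method_acc).items():
    print(f"{ds}: Delta = {delta:+.1f}%")
```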
<details>
<summary>x15.png Details</summary>

### Visual Description
## Scatter Plot: LLM Performance on Various Datasets
### Overview
The image is a scatter plot comparing the performance of various Large Language Models (LLMs) on different datasets. The y-axis represents the percentage difference (Δ (%)), and the x-axis represents the datasets. Each LLM is represented by a unique color and marker. A horizontal dashed line indicates the baseline performance (Δ = 0).
### Components/Axes
* **X-axis:** Datasets: HotpotQA, CS-QA, AQUA, GSM8K, MATH, GPQA, HumanEval.
* **Y-axis:** Δ (%) - Percentage difference, ranging from -30 to 30 with increments of 10.
* **Legend:** Located on the right side of the plot, associating each LLM with a specific color and marker.
* LLaMA3.1-8B (light teal circle)
* LLaMA3.1-70B (light yellow circle)
* Qwen2.5-7B (light purple circle)
* Qwen2.5-72B (red circle)
* Claude3.5 (teal pentagon)
* GPT-3.5 (orange pentagon)
* GPT-4o (light green pentagon)
* QWQ-32B (light teal diamond)
* DeepSeek-V3 (purple diamond)
* Open LLM (white circle)
* Close LLM (white pentagon)
* Reasoning LLM (white diamond)
* Baseline (light green dashed line)
### Detailed Analysis
**Dataset: HotpotQA**
* LLaMA3.1-8B (light teal circle): ~5%
* LLaMA3.1-70B (light yellow circle): ~3%
* Qwen2.5-7B (light purple circle): ~-1%
* Qwen2.5-72B (red circle): ~5%
* Claude3.5 (teal pentagon): ~6%
* GPT-3.5 (orange pentagon): ~1%
* GPT-4o (light green pentagon): ~5%
* QWQ-32B (light teal diamond): ~2%
* DeepSeek-V3 (purple diamond): ~-1%
**Dataset: CS-QA**
* LLaMA3.1-8B (light teal circle): ~-1%
* LLaMA3.1-70B (light yellow circle): ~-1%
* Qwen2.5-7B (light purple circle): ~-2%
* Qwen2.5-72B (red circle): ~3%
* Claude3.5 (teal pentagon): ~2%
* GPT-3.5 (orange pentagon): ~-1%
* GPT-4o (light green pentagon): ~1%
* QWQ-32B (light teal diamond): ~1%
* DeepSeek-V3 (purple diamond): ~-1%
**Dataset: AQUA**
* LLaMA3.1-8B (light teal circle): ~5%
* LLaMA3.1-70B (light yellow circle): ~6%
* Qwen2.5-7B (light purple circle): ~-1%
* Qwen2.5-72B (red circle): ~-1%
* Claude3.5 (teal pentagon): ~7%
* GPT-3.5 (orange pentagon): ~5%
* GPT-4o (light green pentagon): ~6%
* QWQ-32B (light teal diamond): ~1%
* DeepSeek-V3 (purple diamond): ~2%
**Dataset: GSM8K**
* LLaMA3.1-8B (light teal circle): ~-1%
* LLaMA3.1-70B (light yellow circle): ~-1%
* Qwen2.5-7B (light purple circle): ~-4%
* Qwen2.5-72B (red circle): ~-3%
* Claude3.5 (teal pentagon): ~-1%
* GPT-3.5 (orange pentagon): ~-5%
* GPT-4o (light green pentagon): ~0%
* QWQ-32B (light teal diamond): ~0%
* DeepSeek-V3 (purple diamond): ~-4%
**Dataset: MATH**
* LLaMA3.1-8B (light teal circle): ~10%
* LLaMA3.1-70B (light yellow circle): ~11%
* Qwen2.5-7B (light purple circle): ~6%
* Qwen2.5-72B (red circle): ~10%
* Claude3.5 (teal pentagon): ~7%
* GPT-3.5 (orange pentagon): ~4%
* GPT-4o (light green pentagon): ~10%
* QWQ-32B (light teal diamond): ~11%
* DeepSeek-V3 (purple diamond): ~3%
**Dataset: GPQA**
* LLaMA3.1-8B (light teal circle): ~20%
* LLaMA3.1-70B (light yellow circle): ~22%
* Qwen2.5-7B (light purple circle): ~18%
* Qwen2.5-72B (red circle): ~29%
* Claude3.5 (teal pentagon): ~2%
* GPT-3.5 (orange pentagon): ~-3%
* GPT-4o (light green pentagon): ~14%
* QWQ-32B (light teal diamond): ~27%
* DeepSeek-V3 (purple diamond): ~3%
**Dataset: HumanEval**
* LLaMA3.1-8B (light teal circle): ~-6%
* LLaMA3.1-70B (light yellow circle): ~-1%
* Qwen2.5-7B (light purple circle): ~-2%
* Qwen2.5-72B (red circle): ~5%
* Claude3.5 (teal pentagon): ~1%
* GPT-3.5 (orange pentagon): ~-3%
* GPT-4o (light green pentagon): ~1%
* QWQ-32B (light teal diamond): ~16%
* DeepSeek-V3 (purple diamond): ~-9%
### Key Observations
* The performance of the LLMs varies significantly across different datasets.
* Some models (e.g., QWQ-32B) show high variance in performance, excelling on some datasets (GPQA) while hovering near the baseline on others (GSM8K).
* The baseline (Δ = 0) serves as a reference point, with some models consistently outperforming it while others fluctuate around it.
* MATH and GPQA datasets seem to be more challenging, with a wider range of performance differences among the models.
* GSM8K shows generally negative performance differences for most models.
### Interpretation
The scatter plot provides a comparative analysis of LLM performance across various datasets. The data suggests that the choice of LLM can significantly impact performance depending on the specific task or dataset. The variability in performance highlights the importance of selecting the appropriate model for a given application. The positive and negative percentage differences indicate whether a model performs better or worse than the baseline strategy. The datasets themselves represent different types of tasks or challenges, which explains the varying performance of the LLMs. The plot also reveals potential strengths and weaknesses of each model, which can inform future development and optimization efforts.
</details>
Figure 15: Performance Gains for RARR.
<details>
<summary>x16.png Details</summary>

### Visual Description
## Scatter Plot: LLM Performance Across Datasets
### Overview
The image is a scatter plot comparing the performance of various Large Language Models (LLMs) across different datasets. The y-axis represents the percentage difference (Δ (%)), and the x-axis represents the datasets. Each LLM is represented by a unique color and marker shape, as indicated in the legend on the right. A baseline at Δ = 0 is shown as a dashed green line.
### Components/Axes
* **X-axis:** "Dataset" with categories: HotpotQA, CS-QA, AQUA, GSM8K, MATH, GPQA, HumanEval.
* **Y-axis:** "Δ (%)" ranging from -30 to 30 with tick marks at -30, -20, -10, 0, 10, 20, and 30.
* **Legend (Right side):**
* Light Green Circle: LLAMA3.1-8B
* Yellow Circle: LLAMA3.1-70B
* Light Purple Circle: Qwen2.5-7B
* Red Circle: Qwen2.5-72B
* Teal Pentagon: Claude3.5
* Orange Pentagon: GPT-3.5
* Light Green Pentagon: GPT-4o
* Light Blue Diamond: QWQ-32B
* Dark Purple Diamond: DeepSeek-V3
* White Circle: Open LLM
* Gray Pentagon: Close LLM
* White Diamond: Reasoning LLM
* Dashed Light Green Line: Baseline (Δ=0)
### Detailed Analysis
Here's a breakdown of the approximate performance of each model on each dataset:
* **HotpotQA:**
* LLAMA3.1-8B (Light Green Circle): ~1%
* LLAMA3.1-70B (Yellow Circle): ~1%
* Qwen2.5-7B (Light Purple Circle): ~1%
* Qwen2.5-72B (Red Circle): ~1%
* Claude3.5 (Teal Pentagon): ~1%
* GPT-3.5 (Orange Pentagon): ~-7%
* GPT-4o (Light Green Pentagon): ~2%
* QWQ-32B (Light Blue Diamond): ~-7%
* DeepSeek-V3 (Dark Purple Diamond): ~-5%
* Open LLM (White Circle): ~1%
* Close LLM (Gray Pentagon): ~-6%
* Reasoning LLM (White Diamond): ~-2%
* **CS-QA:**
* LLAMA3.1-8B (Light Green Circle): ~-2%
* LLAMA3.1-70B (Yellow Circle): ~1%
* Qwen2.5-7B (Light Purple Circle): ~-1%
* Qwen2.5-72B (Red Circle): ~2%
* Claude3.5 (Teal Pentagon): ~-1%
* GPT-3.5 (Orange Pentagon): ~-6%
* GPT-4o (Light Green Pentagon): ~1%
* QWQ-32B (Light Blue Diamond): ~-10%
* DeepSeek-V3 (Dark Purple Diamond): ~-4%
* Open LLM (White Circle): ~-2%
* Close LLM (Gray Pentagon): ~-7%
* Reasoning LLM (White Diamond): ~2%
* **AQUA:**
* LLAMA3.1-8B (Light Green Circle): ~10%
* LLAMA3.1-70B (Yellow Circle): ~17%
* Qwen2.5-7B (Light Purple Circle): ~4%
* Qwen2.5-72B (Red Circle): ~11%
* Claude3.5 (Teal Pentagon): ~12%
* GPT-3.5 (Orange Pentagon): ~8%
* GPT-4o (Light Green Pentagon): ~10%
* QWQ-32B (Light Blue Diamond): ~-1%
* DeepSeek-V3 (Dark Purple Diamond): ~5%
* Open LLM (White Circle): ~18%
* Close LLM (Gray Pentagon): ~13%
* Reasoning LLM (White Diamond): ~2%
* **GSM8K:**
* LLAMA3.1-8B (Light Green Circle): ~-8%
* LLAMA3.1-70B (Yellow Circle): ~1%
* Qwen2.5-7B (Light Purple Circle): ~2%
* Qwen2.5-72B (Red Circle): ~9%
* Claude3.5 (Teal Pentagon): ~1%
* GPT-3.5 (Orange Pentagon): ~1%
* GPT-4o (Light Green Pentagon): ~10%
* QWQ-32B (Light Blue Diamond): ~-1%
* DeepSeek-V3 (Dark Purple Diamond): ~-4%
* Open LLM (White Circle): ~-8%
* Close LLM (Gray Pentagon): ~-2%
* Reasoning LLM (White Diamond): ~2%
* **MATH:**
* LLAMA3.1-8B (Light Green Circle): ~1%
* LLAMA3.1-70B (Yellow Circle): ~11%
* Qwen2.5-7B (Light Purple Circle): ~4%
* Qwen2.5-72B (Red Circle): ~11%
* Claude3.5 (Teal Pentagon): ~11%
* GPT-3.5 (Orange Pentagon): ~7%
* GPT-4o (Light Green Pentagon): ~9%
* QWQ-32B (Light Blue Diamond): ~-1%
* DeepSeek-V3 (Dark Purple Diamond): ~-5%
* Open LLM (White Circle): ~1%
* Close LLM (Gray Pentagon): ~-1%
* Reasoning LLM (White Diamond): ~2%
* **GPQA:**
* LLAMA3.1-8B (Light Green Circle): ~-11%
* LLAMA3.1-70B (Yellow Circle): ~6%
* Qwen2.5-7B (Light Purple Circle): ~4%
* Qwen2.5-72B (Red Circle): ~15%
* Claude3.5 (Teal Pentagon): ~4%
* GPT-3.5 (Orange Pentagon): ~11%
* GPT-4o (Light Green Pentagon): ~10%
* QWQ-32B (Light Blue Diamond): ~-1%
* DeepSeek-V3 (Dark Purple Diamond): ~-5%
* Open LLM (White Circle): ~-1%
* Close LLM (Gray Pentagon): ~-2%
* Reasoning LLM (White Diamond): ~2%
* **HumanEval:**
* LLAMA3.1-8B (Light Green Circle): ~1%
* LLAMA3.1-70B (Yellow Circle): ~2%
* Qwen2.5-7B (Light Purple Circle): ~1%
* Qwen2.5-72B (Red Circle): ~5%
* Claude3.5 (Teal Pentagon): ~-2%
* GPT-3.5 (Orange Pentagon): ~-3%
* GPT-4o (Light Green Pentagon): ~1%
* QWQ-32B (Light Blue Diamond): ~-11%
* DeepSeek-V3 (Dark Purple Diamond): ~2%
* Open LLM (White Circle): ~1%
* Close LLM (Gray Pentagon): ~-2%
* Reasoning LLM (White Diamond): ~2%
### Key Observations
* The performance of the LLMs varies significantly across different datasets.
* Some models consistently outperform others on specific datasets.
* There is a noticeable spread in performance, indicating that no single model is universally superior.
* The "Reasoning LLM" (White Diamond) consistently hovers around the baseline (Î=0) across all datasets.
* GPT-3.5 (Orange Pentagon) tends to underperform compared to other models, often showing negative Î values.
* QWQ-32B (Light Blue Diamond) shows significant underperformance on CS-QA and HumanEval.
* LLAMA3.1-70B (Yellow Circle) and Qwen2.5-72B (Red Circle) often achieve higher Î values compared to other models, particularly on AQUA, MATH, and GPQA datasets.
### Interpretation
The scatter plot provides a comparative analysis of various LLMs across a range of datasets, highlighting their strengths and weaknesses. The data suggests that the choice of LLM should be tailored to the specific task or dataset, as performance varies significantly. The consistent performance of the "Reasoning LLM" near the baseline might indicate a more general-purpose model, while others are optimized for specific types of questions or data. The underperformance of GPT-3.5 on several datasets is a notable outlier, suggesting potential limitations in its architecture or training data. The superior performance of LLAMA3.1-70B and Qwen2.5-72B on certain datasets indicates their potential suitability for tasks involving those specific types of data or reasoning. The plot underscores the importance of benchmarking LLMs on diverse datasets to gain a comprehensive understanding of their capabilities.
</details>
Figure 16: Performance Gains for RATT.
Appendix F Contrast Experiments for Diverse LLMs
In this section, we compare the performance of various models across multiple datasets using different methods. The HotpotQA, GSM8K, and GPQA datasets are selected to represent commonsense reasoning, mathematical reasoning, and complex reasoning, respectively. Each figure depicts the performance of 9 LLMs on the three datasets under a specific method. For each figure, the first subgraph compares the performance of open-source LLMs, identifying the best-performing one. The second subgraph evaluates the best open-source LLM against closed-source LLMs, and the third subgraph summarizes the performance of the best open-source LLM, the best closed-source model, and reasoning models.
The performance of different models across the three datasets under various methods is summarized in Figures 17 to 25. Figure 17 presents results for the Base method, while Figures 18 to 25 illustrate performance for CoT, RCI, CoVe, Self-Refine, Reflexion-v1, Reflexion-v2, RARR, and RATT, respectively.
Among the evaluated LLMs, Qwen2.5-72B-Instruct and GPT-4o consistently demonstrate superior performance as the best open-source and closed-source LLMs, respectively, across most methods. Based on these results, it is evident that closed-source LLMs generally outperform open-source LLMs. Furthermore, reasoning LLMs (e.g., DeepSeek-V3) exhibit the best overall performance, excelling particularly in tasks requiring complex reasoning capabilities, as demonstrated by their consistent dominance across all datasets and methods.
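Concretely, the three subgraphs in each figure encode a simple best-per-group selection. A minimal sketch of that selection, using approximate Base-method scores read off Figure 17 (the numbers are chart estimates for illustration, not the paper's exact results):

```python
# Approximate Base-method scores estimated from Figure 17 (illustrative only).
scores = {
    "LLaMA3.1-70B": {"HotpotQA": 82, "GSM8k": 91, "GPQA": 15},
    "Qwen2.5-72B":  {"HotpotQA": 84, "GSM8k": 91, "GPQA": 10},
    "GPT-4o":       {"HotpotQA": 89, "GSM8k": 94, "GPQA": 28},
    "DeepSeek-V3":  {"HotpotQA": 91, "GSM8k": 95, "GPQA": 32},
}

def best_model(candidates):
    """Pick the candidate with the highest mean score across the datasets."""
    return max(candidates, key=lambda m: sum(scores[m].values()) / len(scores[m]))

print(best_model(["LLaMA3.1-70B", "Qwen2.5-72B"]))  # subgraph 1: best open-source
print(best_model(["Qwen2.5-72B", "GPT-4o"]))        # subgraph 2: open vs. closed
print(best_model(["GPT-4o", "DeepSeek-V3"]))        # subgraph 3: vs. reasoning LLMs
```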
<details>
<summary>x17.png Details</summary>

### Visual Description
## Bar Chart: Comparison of LLMs
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) on three datasets: HotpotQA, GSM8k, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents scores, ranging from 0 to 100.
### Components/Axes
**Chart 1: Comparison of Open-source LLMs**
* **Title:** Comparison of Open-source LLMs
* **Y-axis:** Scores (0 to 100, incrementing by 20)
* **X-axis:** Datasets (HotpotQA, GSM8k, GPQA)
* **Legend (Top-Right):**
* Llama3.1-8B (Teal)
* Llama3.1-70B (Yellow)
* Qwen2.5-7B (Light Purple)
* Qwen2.5-72B (Salmon)
**Chart 2: Comparison of Closed-source LLMs**
* **Title:** Comparison of Closed-source LLMs
* **Y-axis:** Scores (0 to 100, incrementing by 20)
* **X-axis:** Datasets (HotpotQA, GSM8k, GPQA)
* **Legend (Top-Right):**
* Qwen2.5-72B (Salmon)
* Claude3.5 (Light Blue)
* GPT-3.5 (Orange)
* GPT-4o (Light Green)
**Chart 3: Instruction-based vs. Reasoning LLMs**
* **Title:** Instruction-based vs. Reasoning LLMs
* **Y-axis:** Scores (0 to 100, incrementing by 20)
* **X-axis:** Datasets (HotpotQA, GSM8k, GPQA)
* **Legend (Top-Right):**
* Qwen2.5-72B (Salmon)
* GPT-4o (Light Green)
* QWQ-32B (Pink)
* DeepSeek-V3 (Purple)
### Detailed Analysis
**Chart 1: Open-source LLMs**
* **Llama3.1-8B (Teal):**
* HotpotQA: ~76
* GSM8k: ~82
* GPQA: ~10
* **Llama3.1-70B (Yellow):**
* HotpotQA: ~82
* GSM8k: ~91
* GPQA: ~15
* **Qwen2.5-7B (Light Purple):**
* HotpotQA: ~74
* GSM8k: ~89
* GPQA: ~8
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~84
* GSM8k: ~91
* GPQA: ~10
**Chart 2: Closed-source LLMs**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~84
* GSM8k: ~91
* GPQA: ~10
* **Claude3.5 (Light Blue):**
* HotpotQA: ~86
* GSM8k: ~93
* GPQA: ~32
* **GPT-3.5 (Orange):**
* HotpotQA: ~82
* GSM8k: ~79
* GPQA: ~25
* **GPT-4o (Light Green):**
* HotpotQA: ~89
* GSM8k: ~94
* GPQA: ~28
**Chart 3: Instruction-based vs. Reasoning LLMs**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~84
* GSM8k: ~91
* GPQA: ~10
* **GPT-4o (Light Green):**
* HotpotQA: ~89
* GSM8k: ~94
* GPQA: ~22
* **QWQ-32B (Pink):**
* HotpotQA: ~89
* GSM8k: ~93
* GPQA: ~12
* **DeepSeek-V3 (Purple):**
* HotpotQA: ~91
* GSM8k: ~95
* GPQA: ~32
### Key Observations
* All models perform significantly better on HotpotQA and GSM8k datasets compared to GPQA.
* Within Open-source LLMs, Llama3.1-70B and Qwen2.5-72B generally achieve higher scores than Llama3.1-8B and Qwen2.5-7B.
* Within Closed-source LLMs, GPT-4o and Claude3.5 generally outperform GPT-3.5 and Qwen2.5-72B.
* DeepSeek-V3 shows the highest performance on GSM8k.
* GPQA scores are consistently low across all models, indicating this dataset is more challenging.
### Interpretation
The charts provide a comparative analysis of different LLMs across various datasets. The data suggests that:
* **Model Size Matters:** Larger models (e.g., Llama3.1-70B vs. Llama3.1-8B) tend to perform better, especially on HotpotQA and GSM8k.
* **Closed-source Models Lead:** Closed-source models like GPT-4o and Claude3.5 generally outperform open-source models on these datasets.
* **GPQA is a Bottleneck:** The consistently low scores on GPQA indicate that this dataset poses a significant challenge for all models, regardless of their architecture or training. This could be due to the nature of the questions, the complexity of the reasoning required, or the format of the data.
* **Instruction-based vs. Reasoning Specialization:** The "Instruction-based vs. Reasoning LLMs" chart highlights models potentially optimized for different tasks. DeepSeek-V3 shows a strong performance, suggesting it may excel in reasoning-based tasks, while others might be more focused on instruction following.
The relationship between the elements is clear: the charts compare the performance of different LLMs on the same datasets, allowing for a direct comparison of their capabilities. The low GPQA scores are a notable outlier, suggesting a potential area for improvement in LLM design and training.
</details>
Figure 17: Performance Comparison of Base
<details>
<summary>x18.png Details</summary>

### Visual Description
## Bar Charts: LLM Performance Comparison
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) on three datasets: HotpotQA, GSM8k, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents scores, presumably accuracy or a similar performance metric.
### Components/Axes
**General Chart Elements:**
* **Title (Left Chart):** Comparison of Open-source LLMs
* **Title (Middle Chart):** Comparison of Closed-source LLMs
* **Title (Right Chart):** Instruction-based vs. Reasoning LLMs
* **Y-axis Label:** Scores
* **Y-axis Scale:** 0 to 100, with tick marks at 0, 20, 40, 60, 80, and 100.
* **X-axis Label:** Datasets
* **X-axis Categories:** HotpotQA, GSM8k, GPQA
**Legends (Top-Right of each chart):**
* **Left Chart (Open-source):**
* Light Teal: LLaMA3.1-8B
* Yellow: LLaMA3.1-70B
* Light Purple: Qwen2.5-7B
* Salmon: Qwen2.5-72B
* **Middle Chart (Closed-source):**
* Salmon: Qwen2.5-72B
* Sky Blue: Claude3.5
* Orange: GPT-3.5
* Light Green: GPT-4o
* **Right Chart (Instruction-based vs. Reasoning):**
* Salmon: Qwen2.5-72B
* Light Green: GPT-4o
* Pink: QWQ-32B
* Purple: DeepSeek-V3
### Detailed Analysis
**Chart 1: Comparison of Open-source LLMs**
* **HotpotQA Dataset:**
* LLaMA3.1-8B (Light Teal): ~72
* LLaMA3.1-70B (Yellow): ~86
* Qwen2.5-7B (Light Purple): ~72
* Qwen2.5-72B (Salmon): ~90
* **GSM8k Dataset:**
* LLaMA3.1-8B (Light Teal): ~78
* LLaMA3.1-70B (Yellow): ~96
* Qwen2.5-7B (Light Purple): ~87
* Qwen2.5-72B (Salmon): ~94
* **GPQA Dataset:**
* LLaMA3.1-8B (Light Teal): ~12
* LLaMA3.1-70B (Yellow): ~24
* Qwen2.5-7B (Light Purple): ~14
* Qwen2.5-72B (Salmon): ~18
**Chart 2: Comparison of Closed-source LLMs**
* **HotpotQA Dataset:**
* Qwen2.5-72B (Salmon): ~92
* Claude3.5 (Sky Blue): ~92
* GPT-3.5 (Orange): ~92
* GPT-4o (Light Green): ~93
* **GSM8k Dataset:**
* Qwen2.5-72B (Salmon): ~97
* Claude3.5 (Sky Blue): ~97
* GPT-3.5 (Orange): ~98
* GPT-4o (Light Green): ~99
* **GPQA Dataset:**
* Qwen2.5-72B (Salmon): ~16
* Claude3.5 (Sky Blue): ~16
* GPT-3.5 (Orange): ~17
* GPT-4o (Light Green): ~13
**Chart 3: Instruction-based vs. Reasoning LLMs**
* **HotpotQA Dataset:**
* Qwen2.5-72B (Salmon): ~92
* GPT-4o (Light Green): ~93
* QWQ-32B (Pink): ~88
* DeepSeek-V3 (Purple): ~92
* **GSM8k Dataset:**
* Qwen2.5-72B (Salmon): ~97
* GPT-4o (Light Green): ~99
* QWQ-32B (Pink): ~94
* DeepSeek-V3 (Purple): ~97
* **GPQA Dataset:**
* Qwen2.5-72B (Salmon): ~16
* GPT-4o (Light Green): ~13
* QWQ-32B (Pink): ~8
* DeepSeek-V3 (Purple): ~22
### Key Observations
* **General Trend:** All models perform significantly better on HotpotQA and GSM8k datasets compared to GPQA.
* **Open-source Models:** LLaMA3.1-70B and Qwen2.5-72B generally outperform LLaMA3.1-8B and Qwen2.5-7B.
* **Closed-source Models:** Performance is very similar across all closed-source models on HotpotQA and GSM8k. GPT-4o shows a slight edge on GSM8k.
* **Instruction-based vs. Reasoning Models:** GPT-4o and DeepSeek-V3 generally perform well, while QWQ-32B shows the lowest scores, especially on GPQA.
* **GPQA Challenge:** All models struggle with the GPQA dataset, indicating it is a more challenging benchmark.
### Interpretation
The charts provide a comparative analysis of LLM performance across different model architectures and datasets. The data suggests that:
* **Model Size Matters:** Larger open-source models (70B/72B) tend to outperform smaller ones (8B/7B).
* **Closed-source Models are Highly Optimized:** The closed-source models exhibit very similar and high performance on HotpotQA and GSM8k, suggesting they are well-optimized for these tasks.
* **GPQA Highlights Reasoning Gaps:** The poor performance on GPQA across all models indicates that this dataset requires more advanced reasoning capabilities that are not fully captured by current LLMs.
* **Instruction-based vs. Reasoning Trade-offs:** The performance differences between instruction-based and reasoning models on GPQA suggest potential trade-offs in model design. DeepSeek-V3 shows a relatively better performance on GPQA compared to QWQ-32B, indicating better reasoning capabilities.
</details>
Figure 18: Performance Comparison of CoT
<details>
<summary>x19.png Details</summary>

### Visual Description
## Bar Chart: LLM Performance Comparison
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) across three datasets: HotpotQA, GSM8k, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents scores, ranging from 0 to 100.
### Components/Axes
* **Titles:**
* Left Chart: "Comparison of Open-source LLMs"
* Middle Chart: "Comparison of Closed-source LLMs"
* Right Chart: "Instruction-based vs. Reasoning LLMs"
* **Y-axis:**
* Label: "Scores"
* Scale: 0 to 100, with tick marks at 0, 20, 40, 60, 80, and 100.
* **X-axis:**
* Label: "Datasets"
* Categories: HotpotQA, GSM8k, GPQA
* **Legends:**
* Left Chart (Open-source LLMs):
* Light Green: LLaMA3.1-8B
* Yellow: LLaMA3.1-70B
* Light Purple: Qwen2.5-7B
* Salmon: Qwen2.5-72B
* Middle Chart (Closed-source LLMs):
* Salmon: Qwen2.5-72B
* Light Blue: Claude3.5
* Orange: GPT-3.5
* Light Green: GPT-4o
* Right Chart (Instruction-based vs. Reasoning LLMs):
* Salmon: Qwen2.5-72B
* Light Green: GPT-4o
* Pink: QWQ-32B
* Purple: DeepSeek-V3
### Detailed Analysis
**Left Chart: Comparison of Open-source LLMs**
* **LLaMA3.1-8B (Light Green):**
* HotpotQA: ~60
* GSM8k: ~78
* GPQA: ~22
* **LLaMA3.1-70B (Yellow):**
* HotpotQA: ~78
* GSM8k: ~87
* GPQA: ~24
* **Qwen2.5-7B (Light Purple):**
* HotpotQA: ~67
* GSM8k: ~94
* GPQA: ~22
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~84
* GSM8k: ~94
* GPQA: ~23
**Middle Chart: Comparison of Closed-source LLMs**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~86
* GSM8k: ~94
* GPQA: ~16
* **Claude3.5 (Light Blue):**
* HotpotQA: ~84
* GSM8k: ~93
* GPQA: ~22
* **GPT-3.5 (Orange):**
* HotpotQA: ~88
* GSM8k: ~94
* GPQA: ~24
* **GPT-4o (Light Green):**
* HotpotQA: ~92
* GSM8k: ~95
* GPQA: ~23
**Right Chart: Instruction-based vs. Reasoning LLMs**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~84
* GSM8k: ~94
* GPQA: ~16
* **GPT-4o (Light Green):**
* HotpotQA: ~93
* GSM8k: ~95
* GPQA: ~22
* **QWQ-32B (Pink):**
* HotpotQA: ~80
* GSM8k: ~94
* GPQA: ~18
* **DeepSeek-V3 (Purple):**
* HotpotQA: ~84
* GSM8k: ~94
* GPQA: ~28
### Key Observations
* **GSM8k Performance:** All models perform exceptionally well on the GSM8k dataset, with scores consistently above 90.
* **GPQA Performance:** All models struggle with the GPQA dataset, with scores generally below 30.
* **Open-source vs. Closed-source:** Closed-source models generally outperform open-source models on the HotpotQA dataset.
* **Instruction-based vs. Reasoning:** GPT-4o shows a slight edge on HotpotQA and GSM8k compared to other models in this category. DeepSeek-V3 shows a higher score on GPQA compared to the other models.
### Interpretation
The charts provide a comparative analysis of LLM performance across different datasets and model types. The high scores on GSM8k suggest that all models are proficient in tasks related to this dataset, while the low scores on GPQA indicate a common weakness in handling the complexities of that dataset. The comparison between open-source and closed-source models highlights the performance advantages of closed-source models in certain areas. The instruction-based vs. reasoning comparison shows the relative strengths and weaknesses of different models in these categories. The data suggests that model selection should be tailored to the specific task and dataset, as different models exhibit varying levels of proficiency.
</details>
Figure 19: Performance Comparison of RCI
<details>
<summary>x20.png Details</summary>

### Visual Description
## Bar Chart: LLM Performance Comparison
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) on three datasets: HotpotQA, GSM8k, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents scores, presumably accuracy or a similar performance metric.
### Components/Axes
**General Chart Elements:**
* **Title (Left Chart):** Comparison of Open-source LLMs
* **Title (Middle Chart):** Comparison of Closed-source LLMs
* **Title (Right Chart):** Instruction-based vs. Reasoning LLMs
* **Y-axis Label:** Scores
* **Y-axis Scale:** 0 to 100, with tick marks at intervals of 20.
* **X-axis Label:** Datasets
* **X-axis Categories:** HotpotQA, GSM8k, GPQA
**Legend (Left Chart):** Located in the top-right corner of the left chart.
* Light Teal: LLaMA3.1-8B
* Light Yellow: LLaMA3.1-70B
* Light Purple: Qwen2.5-7B
* Light Red: Qwen2.5-72B
**Legend (Middle Chart):** Located in the top-right corner of the middle chart.
* Light Red: Qwen2.5-72B
* Light Blue: Claude3.5
* Light Orange: GPT-3.5
* Light Green: GPT-4o
**Legend (Right Chart):** Located in the top-right corner of the right chart.
* Light Red: Qwen2.5-72B
* Light Green: GPT-4o
* Light Pink: QWQ-32B
* Light Purple: DeepSeek-V3
### Detailed Analysis
**Left Chart: Open-source LLMs**
* **LLaMA3.1-8B (Light Teal):**
* HotpotQA: ~68
* GSM8k: ~81
* GPQA: ~35
* **LLaMA3.1-70B (Light Yellow):**
* HotpotQA: ~89
* GSM8k: ~87
* GPQA: ~38
* **Qwen2.5-7B (Light Purple):**
* HotpotQA: ~73
* GSM8k: ~91
* GPQA: ~36
* **Qwen2.5-72B (Light Red):**
* HotpotQA: ~89
* GSM8k: ~94
* GPQA: ~37
**Middle Chart: Closed-source LLMs**
* **Qwen2.5-72B (Light Red):**
* HotpotQA: ~91
* GSM8k: ~95
* GPQA: ~30
* **Claude3.5 (Light Blue):**
* HotpotQA: ~88
* GSM8k: ~96
* GPQA: ~40
* **GPT-3.5 (Light Orange):**
* HotpotQA: ~90
* GSM8k: ~96
* GPQA: ~44
* **GPT-4o (Light Green):**
* HotpotQA: ~92
* GSM8k: ~97
* GPQA: ~46
**Right Chart: Instruction-based vs. Reasoning LLMs**
* **Qwen2.5-72B (Light Red):**
* HotpotQA: ~91
* GSM8k: ~95
* GPQA: ~24
* **GPT-4o (Light Green):**
* HotpotQA: ~92
* GSM8k: ~97
* GPQA: ~40
* **QWQ-32B (Light Pink):**
* HotpotQA: ~88
* GSM8k: ~95
* GPQA: ~28
* **DeepSeek-V3 (Light Purple):**
* HotpotQA: ~90
* GSM8k: ~95
* GPQA: ~50
### Key Observations
* **Dataset Performance:** All models generally perform better on GSM8k and HotpotQA compared to GPQA.
* **Open-source vs. Closed-source:** Closed-source models (GPT-4o, GPT-3.5, Claude3.5) generally achieve higher scores on HotpotQA and GSM8k than the open-source models.
* **GPQA Challenge:** GPQA appears to be a more challenging dataset for all models, with significantly lower scores across the board.
* **Qwen2.5-72B Consistency:** Qwen2.5-72B is present in all three charts, providing a point of comparison across different model categories.
* **Instruction-based vs. Reasoning:** The right chart highlights the performance difference between models optimized for instruction following versus reasoning tasks, particularly on the GPQA dataset.
### Interpretation
The charts provide a comparative analysis of LLM performance across different model architectures and datasets. The data suggests that:
* Closed-source models generally outperform open-source models on HotpotQA and GSM8k, indicating potential advantages in training data, model size, or architecture.
* The GPQA dataset poses a significant challenge for all models, suggesting it requires more advanced reasoning capabilities or a different type of knowledge.
* The Instruction-based vs. Reasoning chart highlights the trade-offs between models optimized for different tasks. DeepSeek-V3 and GPT-4o show better performance on GPQA compared to Qwen2.5-72B and QWQ-32B, suggesting they may be better suited for reasoning-intensive tasks.
* The consistent presence of Qwen2.5-72B across all charts allows for a direct comparison of its performance relative to different model categories. It performs competitively on HotpotQA and GSM8k but struggles on GPQA, similar to other models.
The data underscores the importance of dataset selection in evaluating LLM performance and highlights the varying strengths and weaknesses of different model architectures. Further investigation into the specific characteristics of the GPQA dataset could provide valuable insights into the limitations of current LLMs and guide future research efforts.
</details>
Figure 20: Performance Comparison of CoVe
<details>
<summary>x21.png Details</summary>

### Visual Description
## Bar Charts: LLM Performance Comparison
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) on three datasets: HotpotQA, GSM8k, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents scores, and the x-axis represents the datasets.
### Components/Axes
**General Chart Elements:**
* **Title (Left Chart):** Comparison of Open-source LLMs
* **Title (Middle Chart):** Comparison of Closed-source LLMs
* **Title (Right Chart):** Instruction-based vs. Reasoning LLMs
* **Y-axis Label:** Scores
* **Y-axis Scale:** 0 to 100, with tick marks at 20, 40, 60, 80, and 100.
* **X-axis Label:** Datasets
* **X-axis Categories:** HotpotQA, GSM8k, GPQA
**Legends:**
* **Left Chart (Open-source LLMs):** Located in the top-right corner of the chart.
* Light Green: LLaMA3.1-8B
* Yellow: LLaMA3.1-70B
* Lavender: Qwen2.5-7B
* Salmon: Qwen2.5-72B
* **Middle Chart (Closed-source LLMs):** Located in the top-right corner of the chart.
* Salmon: Qwen2.5-72B
* Light Blue: Claude3.5
* Orange: GPT-3.5
* Green: GPT-4o
* **Right Chart (Instruction-based vs. Reasoning LLMs):** Located in the top-right corner of the chart.
* Salmon: Qwen2.5-72B
* Green: GPT-4o
* Pink: QWQ-32B
* Purple: DeepSeek-V3
### Detailed Analysis
**1. Comparison of Open-source LLMs:**
* **LLaMA3.1-8B (Light Green):**
* HotpotQA: ~62
* GSM8k: ~77
* GPQA: ~20
* **LLaMA3.1-70B (Yellow):**
* HotpotQA: ~85
* GSM8k: ~92
* GPQA: ~25
* **Qwen2.5-7B (Lavender):**
* HotpotQA: ~75
* GSM8k: ~88
* GPQA: ~28
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~92
* GSM8k: ~94
* GPQA: ~30
**2. Comparison of Closed-source LLMs:**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~92
* GSM8k: ~95
* GPQA: ~30
* **Claude3.5 (Light Blue):**
* HotpotQA: ~93
* GSM8k: ~97
* GPQA: ~58
* **GPT-3.5 (Orange):**
* HotpotQA: ~95
* GSM8k: ~98
* GPQA: ~52
* **GPT-4o (Green):**
* HotpotQA: ~97
* GSM8k: ~98
* GPQA: ~52
**3. Instruction-based vs. Reasoning LLMs:**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~92
* GSM8k: ~95
* GPQA: ~30
* **GPT-4o (Green):**
* HotpotQA: ~97
* GSM8k: ~98
* GPQA: ~52
* **QWQ-32B (Pink):**
* HotpotQA: ~90
* GSM8k: ~92
* GPQA: ~28
* **DeepSeek-V3 (Purple):**
* HotpotQA: ~92
* GSM8k: ~94
* GPQA: ~32
### Key Observations
* **Dataset Difficulty:** All models generally perform best on GSM8k and HotpotQA, and significantly worse on GPQA.
* **Open-source Performance:** The 72B parameter version of Qwen2.5 consistently outperforms the other open-source models across all datasets. LLaMA3.1-8B performs the worst.
* **Closed-source Performance:** GPT-4o and GPT-3.5 show very high performance on HotpotQA and GSM8k, with GPT-4o slightly edging out GPT-3.5. Claude3.5 also performs well.
* **Instruction vs. Reasoning:** GPT-4o generally outperforms Qwen2.5-72B, QWQ-32B, and DeepSeek-V3, especially on GPQA.
### Interpretation
The data suggests that model size (parameter count) is a significant factor in performance for open-source models, as evidenced by the difference between LLaMA3.1-8B and LLaMA3.1-70B. Closed-source models generally outperform open-source models, particularly on the GPQA dataset, which may indicate better reasoning capabilities. The performance differences between instruction-based and reasoning LLMs on GPQA suggest that some models are better suited for complex reasoning tasks. The consistently low scores on GPQA across all model types indicate that this dataset is particularly challenging.
</details>
Figure 21: Performance Comparison of Self-Refine
<details>
<summary>x22.png Details</summary>

### Visual Description
## Bar Charts: LLM Performance Comparison
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) on three datasets: HotpotQA, GSM8K, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents "Scores," ranging from 0 to 80. The x-axis represents the "Datasets."
### Components/Axes
* **Y-axis:** "Scores," ranging from 0 to 80 in increments of 20.
* **X-axis:** "Datasets," with categories: HotpotQA, GSM8K, GPQA.
* **Chart 1: Comparison of Open-source LLMs**
* **Legend (top-right):**
* Light Green: LLaMA3.1-8B
* Yellow: LLaMA3.1-70B
* Light Purple: Qwen2.5-7B
* Salmon: Qwen2.5-72B
* **Chart 2: Comparison of Closed-source LLMs**
* **Legend (top-right):**
* Salmon: Qwen2.5-72B
* Light Blue: Claude3.5
* Orange: GPT-3.5
* Green: GPT-4o
* **Chart 3: Instruction-based vs. Reasoning LLMs**
* **Legend (top-right):**
* Salmon: Qwen2.5-72B
* Light Green: GPT-4o
* Pink: QWQ-32B
* Purple: DeepSeek-V3
### Detailed Analysis
**Chart 1: Comparison of Open-source LLMs**
* **HotpotQA:**
* LLaMA3.1-8B (Light Green): ~72
* LLaMA3.1-70B (Yellow): ~69
* Qwen2.5-7B (Light Purple): ~61
* Qwen2.5-72B (Salmon): ~70
* **GSM8K:**
* LLaMA3.1-8B (Light Green): ~59
* LLaMA3.1-70B (Yellow): ~64
* Qwen2.5-7B (Light Purple): ~61
* Qwen2.5-72B (Salmon): ~72
* **GPQA:**
* LLaMA3.1-8B (Light Green): ~6
* LLaMA3.1-70B (Yellow): ~16
* Qwen2.5-7B (Light Purple): ~10
* Qwen2.5-72B (Salmon): ~12
**Chart 2: Comparison of Closed-source LLMs**
* **HotpotQA:**
* Qwen2.5-72B (Salmon): ~70
* Claude3.5 (Light Blue): ~82
* GPT-3.5 (Orange): ~72
* GPT-4o (Green): ~73
* **GSM8K:**
* Qwen2.5-72B (Salmon): ~72
* Claude3.5 (Light Blue): ~78
* GPT-3.5 (Orange): ~73
* GPT-4o (Green): ~80
* **GPQA:**
* Qwen2.5-72B (Salmon): ~11
* Claude3.5 (Light Blue): ~35
* GPT-3.5 (Orange): ~22
* GPT-4o (Green): ~16
**Chart 3: Instruction-based vs. Reasoning LLMs**
* **HotpotQA:**
* Qwen2.5-72B (Salmon): ~70
* GPT-4o (Light Green): ~72
* QWQ-32B (Pink): ~61
* DeepSeek-V3 (Purple): ~73
* **GSM8K:**
* Qwen2.5-72B (Salmon): ~72
* GPT-4o (Light Green): ~80
* QWQ-32B (Pink): ~65
* DeepSeek-V3 (Purple): ~78
* **GPQA:**
* Qwen2.5-72B (Salmon): ~11
* GPT-4o (Light Green): ~15
* QWQ-32B (Pink): ~22
* DeepSeek-V3 (Purple): ~27
### Key Observations
* **Open-source LLMs:** Qwen2.5-72B generally performs competitively with LLaMA3.1-70B on HotpotQA and GSM8K, but all open-source models struggle on GPQA.
* **Closed-source LLMs:** Claude3.5 and GPT-4o consistently outperform Qwen2.5-72B and GPT-3.5 across all datasets. GPQA remains a challenge, but the scores are significantly higher than for open-source models.
* **Instruction-based vs. Reasoning LLMs:** GPT-4o and DeepSeek-V3 show strong performance on GSM8K, suggesting good reasoning capabilities. QWQ-32B generally scores lower than the other models in this category.
### Interpretation
The charts provide a comparative analysis of LLM performance across different model architectures (open-source vs. closed-source) and task types (HotpotQA, GSM8K, GPQA). The data suggests that closed-source models like Claude3.5 and GPT-4o generally achieve higher scores, particularly on the more challenging GPQA dataset. This could indicate superior reasoning or knowledge integration capabilities. The open-source models, while competitive on some tasks, appear to struggle with the complexities of GPQA. The Instruction-based vs. Reasoning LLMs chart highlights the varying strengths of different models in this category, with GPT-4o and DeepSeek-V3 showing promise in reasoning tasks. The low scores on GPQA across all model types suggest that this dataset poses a significant challenge for current LLMs.
</details>
Figure 22: Performance Comparison of Reflexion-v1
<details>
<summary>x23.png Details</summary>

### Visual Description
## Bar Charts: LLM Performance Comparison
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) on three datasets: HotpotQA, GSM8k, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents scores, ranging from 0 to 100.
### Components/Axes
**General:**
* **Y-axis Title:** Scores
* **Y-axis Scale:** 0, 20, 40, 60, 80, 100
* **X-axis Title:** Datasets
* **X-axis Categories:** HotpotQA, GSM8k, GPQA
**Chart 1: Comparison of Open-source LLMs**
* **Title:** Comparison of Open-source LLMs
* **Legend (Top-Right):**
* Light Blue: LLaMA3.1-8B
* Yellow: LLaMA3.1-70B
* Purple: Qwen2.5-7B
* Salmon: Qwen2.5-72B
**Chart 2: Comparison of Closed-source LLMs**
* **Title:** Comparison of Closed-source LLMs
* **Legend (Top-Right):**
* Salmon: Qwen2.5-72B
* Light Blue: Claude3.5
* Orange: GPT-3.5
* Green: GPT-4o
**Chart 3: Instruction-based vs. Reasoning LLMs**
* **Title:** Instruction-based vs. Reasoning LLMs
* **Legend (Top-Right):**
* Salmon: Qwen2.5-72B
* Green: GPT-4o
* Pink: QWQ-32B
* Purple: DeepSeek-V3
### Detailed Analysis
**Chart 1: Open-source LLMs**
* **LLaMA3.1-8B (Light Blue):**
* HotpotQA: ~88
* GSM8k: ~84
* GPQA: ~24
* **LLaMA3.1-70B (Yellow):**
* HotpotQA: ~87
* GSM8k: ~82
* GPQA: ~26
* **Qwen2.5-7B (Purple):**
* HotpotQA: ~83
* GSM8k: ~89
* GPQA: ~28
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~83
* GSM8k: ~93
* GPQA: ~27
**Chart 2: Closed-source LLMs**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~83
* GSM8k: ~93
* GPQA: ~15
* **Claude3.5 (Light Blue):**
* HotpotQA: ~93
* GSM8k: ~93
* GPQA: ~54
* **GPT-3.5 (Orange):**
* HotpotQA: ~91
* GSM8k: ~93
* GPQA: ~32
* **GPT-4o (Green):**
* HotpotQA: ~93
* GSM8k: ~94
* GPQA: ~23
**Chart 3: Instruction-based vs. Reasoning LLMs**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~83
* GSM8k: ~93
* GPQA: ~15
* **GPT-4o (Green):**
* HotpotQA: ~91
* GSM8k: ~94
* GPQA: ~23
* **QWQ-32B (Pink):**
* HotpotQA: ~84
* GSM8k: ~93
* GPQA: ~19
* **DeepSeek-V3 (Purple):**
* HotpotQA: ~87
* GSM8k: ~94
* GPQA: ~28
### Key Observations
* **Open-source LLMs:** Qwen2.5-72B generally performs well on GSM8k, while all models struggle on GPQA.
* **Closed-source LLMs:** GPT-4o and Claude3.5 show high performance on HotpotQA and GSM8k. Claude3.5 has a relatively higher score on GPQA compared to other closed-source models.
* **Instruction-based vs. Reasoning LLMs:** All models perform well on GSM8k, but GPQA scores are significantly lower.
### Interpretation
The charts provide a comparative analysis of LLM performance across different model types and datasets. The data suggests that:
* **Dataset Difficulty:** GPQA is a more challenging dataset for all models compared to HotpotQA and GSM8k.
* **Model Specialization:** Some models (e.g., GPT-4o, Claude3.5) excel in specific tasks or datasets, indicating potential specialization in their training.
* **Open vs. Closed Source:** Closed-source models generally outperform open-source models on HotpotQA, but the performance is more comparable on GSM8k.
* **Reasoning vs. Instruction:** The "Instruction-based vs. Reasoning" chart highlights the varying capabilities of models designed for different types of tasks, with reasoning-focused models (DeepSeek-V3) showing slightly better performance on GPQA compared to instruction-based models (Qwen2.5-72B).
* **Outliers:** Claude3.5's relatively high score on GPQA in the Closed-source LLMs chart is a notable outlier, suggesting it may have a stronger capability in this specific area compared to other closed-source models.
</details>
Figure 23: Performance Comparison of Reflexion-v2
<details>
<summary>x24.png Details</summary>

### Visual Description
## Bar Charts: LLM Performance Comparison
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) on three datasets: HotpotQA, GSM8k, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents scores, ranging from 0 to 100.
### Components/Axes
**General Chart Elements:**
* **Title (Left Chart):** Comparison of Open-source LLMs
* **Title (Middle Chart):** Comparison of Closed-source LLMs
* **Title (Right Chart):** Instruction-based vs. Reasoning LLMs
* **Y-axis Label:** Scores
* **Y-axis Scale:** 0, 20, 40, 60, 80, 100
* **X-axis Label:** Datasets
* **X-axis Categories:** HotpotQA, GSM8k, GPQA
**Left Chart (Open-source LLMs) Legend:**
* **Light Green:** LLaMA3.1-8B
* **Yellow:** LLaMA3.1-70B
* **Lavender:** Qwen2.5-7B
* **Salmon:** Qwen2.5-72B
**Middle Chart (Closed-source LLMs) Legend:**
* **Salmon:** Qwen2.5-72B
* **Orange:** Claude3.5
* **Teal:** GPT-3.5
* **Green:** GPT-4o
**Right Chart (Instruction-based vs. Reasoning LLMs) Legend:**
* **Salmon:** Qwen2.5-72B
* **Green:** GPT-4o
* **Pink:** QWQ-32B
* **Purple:** DeepSeek-V3
### Detailed Analysis
**Left Chart (Open-source LLMs):**
* **LLaMA3.1-8B (Light Green):**
* HotpotQA: ~82
* GSM8k: ~79
* GPQA: ~34
* **LLaMA3.1-70B (Yellow):**
* HotpotQA: ~91
* GSM8k: ~88
* GPQA: ~35
* **Qwen2.5-7B (Lavender):**
* HotpotQA: ~72
* GSM8k: ~85
* GPQA: ~34
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~88
* GSM8k: ~86
* GPQA: ~40
**Middle Chart (Closed-source LLMs):**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~87
* GSM8k: ~86
* GPQA: ~40
* **Claude3.5 (Orange):**
* HotpotQA: ~89
* GSM8k: ~95
* GPQA: ~28
* **GPT-3.5 (Teal):**
* HotpotQA: ~87
* GSM8k: ~91
* GPQA: ~49
* **GPT-4o (Green):**
* HotpotQA: ~92
* GSM8k: ~97
* GPQA: ~36
**Right Chart (Instruction-based vs. Reasoning LLMs):**
* **Qwen2.5-72B (Salmon):**
* HotpotQA: ~88
* GSM8k: ~96
* GPQA: ~30
* **GPT-4o (Green):**
* HotpotQA: ~92
* GSM8k: ~98
* GPQA: ~32
* **QWQ-32B (Pink):**
* HotpotQA: ~89
* GSM8k: ~93
* GPQA: ~28
* **DeepSeek-V3 (Purple):**
* HotpotQA: ~89
* GSM8k: ~95
* GPQA: ~31
### Key Observations
* Across all charts, performance on GPQA is significantly lower than on HotpotQA and GSM8k.
* In the Open-source LLMs chart, LLaMA3.1-70B (Yellow) generally performs better than LLaMA3.1-8B (Light Green).
* In the Closed-source LLMs chart, GPT-4o (Green) and Claude3.5 (Orange) show high performance on HotpotQA and GSM8k.
* In the Instruction-based vs. Reasoning LLMs chart, GPT-4o (Green) consistently scores high on all datasets.
### Interpretation
The charts provide a comparative analysis of LLM performance across different model types and datasets. The lower scores on GPQA suggest that all models struggle with this particular dataset, possibly indicating a higher level of complexity or a different type of reasoning required. The Open-source LLM comparison shows the impact of model size (70B vs. 8B parameters) on performance. The Closed-source and Instruction-based/Reasoning charts highlight the strengths of models like GPT-4o and Claude3.5 in specific tasks. The data suggests that model architecture and training data play a significant role in determining LLM performance on different benchmarks.
</details>
Figure 24: Performance Comparison of RARR
<details>
<summary>x25.png Details</summary>

### Visual Description
## Bar Charts: LLM Performance Comparison
### Overview
The image presents three bar charts comparing the performance of different Large Language Models (LLMs) on three datasets: HotpotQA, GSM8k, and GPQA. The charts are grouped by LLM type: Open-source, Closed-source, and Instruction-based vs. Reasoning. The y-axis represents scores, presumably accuracy or a similar metric, ranging from 0 to 100.
### Components/Axes
* **Titles:**
* Left Chart: "Comparison of Open-source LLMs"
* Middle Chart: "Comparison of Closed-source LLMs"
* Right Chart: "Instruction-based vs. Reasoning LLMs"
* **Y-axis:**
* Label: "Scores"
* Scale: 0 to 100, with tick marks at 20, 40, 60, 80, and 100.
* **X-axis:**
* Label: "Datasets"
* Categories: HotpotQA, GSM8k, GPQA
* **Legends:**
* **Left Chart (Open-source):** Located at the top-right of the chart.
* Light Green: "LLaMA3.1-8B"
* Yellow: "LLaMA3.1-70B"
* Lavender: "Qwen2.5-7B"
* Salmon: "Qwen2.5-72B"
* **Middle Chart (Closed-source):** Located at the top-right of the chart.
* Salmon: "Qwen2.5-72B"
* Light Blue: "Claude3.5"
* Orange: "GPT-3.5"
* Light Green: "GPT-4o"
* **Right Chart (Instruction-based vs. Reasoning):** Located at the top-right of the chart.
* Salmon: "Qwen2.5-72B"
* Light Green: "GPT-4o"
* Pink: "QWQ-32B"
* Lavender: "DeepSeek-V3"
### Detailed Analysis
**Left Chart: Comparison of Open-source LLMs**
* **HotpotQA Dataset:**
* LLaMA3.1-8B (Light Green): Approximately 77
* LLaMA3.1-70B (Yellow): Approximately 82
* Qwen2.5-7B (Lavender): Approximately 73
* Qwen2.5-72B (Salmon): Approximately 85
* **GSM8k Dataset:**
* LLaMA3.1-8B (Light Green): Approximately 73
* LLaMA3.1-70B (Yellow): Approximately 87
* Qwen2.5-7B (Lavender): Approximately 95
* Qwen2.5-72B (Salmon): Approximately 91
* **GPQA Dataset:**
* LLaMA3.1-8B (Light Green): Approximately 10
* LLaMA3.1-70B (Yellow): Approximately 25
* Qwen2.5-7B (Lavender): Approximately 10
* Qwen2.5-72B (Salmon): Approximately 25
**Middle Chart: Comparison of Closed-source LLMs**
* **HotpotQA Dataset:**
* Qwen2.5-72B (Salmon): Approximately 85
* Claude3.5 (Light Blue): Approximately 82
* GPT-3.5 (Orange): Approximately 91
* GPT-4o (Light Green): Approximately 92
* **GSM8k Dataset:**
* Qwen2.5-72B (Salmon): Approximately 98
* Claude3.5 (Light Blue): Approximately 92
* GPT-3.5 (Orange): Approximately 98
* GPT-4o (Light Green): Approximately 98
* **GPQA Dataset:**
* Qwen2.5-72B (Salmon): Approximately 25
* Claude3.5 (Light Blue): Approximately 37
* GPT-3.5 (Orange): Approximately 25
* GPT-4o (Light Green): Approximately 25
**Right Chart: Instruction-based vs. Reasoning LLMs**
* **HotpotQA Dataset:**
* Qwen2.5-72B (Salmon): Approximately 85
* GPT-4o (Light Green): Approximately 92
* QWQ-32B (Pink): Approximately 82
* DeepSeek-V3 (Lavender): Approximately 85
* **GSM8k Dataset:**
* Qwen2.5-72B (Salmon): Approximately 98
* GPT-4o (Light Green): Approximately 98
* QWQ-32B (Pink): Approximately 92
* DeepSeek-V3 (Lavender): Approximately 95
* **GPQA Dataset:**
* Qwen2.5-72B (Salmon): Approximately 25
* GPT-4o (Light Green): Approximately 25
* QWQ-32B (Pink): Approximately 15
* DeepSeek-V3 (Lavender): Approximately 32
### Key Observations
* **General Performance:** All models perform significantly better on HotpotQA and GSM8k datasets compared to GPQA.
* **Open-source Models:** Qwen2.5-72B generally outperforms LLaMA3.1 models across all datasets.
* **Closed-source Models:** GPT-4o and GPT-3.5 show very high performance on HotpotQA and GSM8k.
* **Instruction-based vs. Reasoning:** GPT-4o and Qwen2.5-72B show similar performance, while QWQ-32B and DeepSeek-V3 show slightly lower performance.
* **GPQA Challenge:** All models struggle with the GPQA dataset, indicating it is a more challenging benchmark.
### Interpretation
The charts provide a comparative analysis of various LLMs across different datasets. The data suggests that:
* **Dataset Difficulty:** GPQA is a significantly more challenging dataset for all models compared to HotpotQA and GSM8k. This could be due to the nature of the questions or the complexity of reasoning required.
* **Model Strengths:** Closed-source models like GPT-4o and GPT-3.5 demonstrate superior performance on HotpotQA and GSM8k, suggesting they are well-optimized for these types of tasks.
* **Open-source Advancements:** The Qwen2.5-72B model shows competitive performance among open-source models, indicating progress in open-source LLM development.
* **Reasoning vs. Instruction:** The Instruction-based vs. Reasoning chart suggests that while GPT-4o and Qwen2.5-72B perform similarly, other models like QWQ-32B and DeepSeek-V3 may have different strengths or weaknesses in instruction following or reasoning capabilities.
The low scores on GPQA across all model categories highlight the need for further research and development in areas such as complex reasoning and knowledge integration.
</details>
Figure 25: Performance Comparison of RATT
Appendix G Prompt and Result Example
G.1 Base
Prompt Example. Base's prompt example is as follows:
Base's prompt example
Zero-shot Prompt: Your final answer in the form boxed{answer}, at the end of your response. A: Critique Prompt: Review your previous answer and find problems with your answer. Improve Prompt: Based on the problems you found, improve your answer. Please reiterate your answer, with your final answer a single numerical number, In the form boxed{answer}.
Result Example on GSM8K. Base's result example on GSM8K dataset is as follows:
Base's result example on GSM8K dataset
$\blacktriangleright$ question: Q: Josh decides to try flipping a house. He buys a house for $80,000 and then puts in $50,000 in repairs. This increased the value of the house by 150%. How much profit did he make? Please solve the question above, then store the final answer in boxed{answer}. output: To determine Josh's profit from flipping the house, let's break down the costs and the increased value step by step. 1. **Initial Purchase Cost**: Josh buys the house for **$80,000**. 2. **Repair Costs**: He spends **$50,000** on repairs. 3. **Total Investment**: Total cost = Purchase cost + Repair costs Total cost = $80,000 + $50,000 = **$130,000**. 4. **Increased Value**: The repairs increase the value of the house by **150%**. To find the new value: New value = Original value + (150% of original value) New value = $80,000 + (1.5 * $80,000) New value = $80,000 + $120,000 = **$200,000**. 5. **Profit Calculation**: Profit = New value - Total investment Profit = $200,000 - $130,000 = **$70,000**. Thus, Josh's profit is: boxed{70000}. final_answer: 70000 correct answer: 70000 correct: true
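The final_answer field in result examples like the one above is recovered by parsing the boxed{...} expression from the model output. A minimal sketch of such an extractor (a hypothetical helper for illustration, not code from the benchmark):

```python
import re

def extract_boxed_answer(output: str):
    """Return the content of the last boxed{...} expression, or None."""
    # Match both LaTeX-style \boxed{...} and the plain boxed{...} used here.
    matches = re.findall(r"\\?boxed\{([^{}]*)\}", output)
    return matches[-1].strip() if matches else None

response = "Thus, Josh's profit is: boxed{70000}."
print(extract_boxed_answer(response))  # -> 70000
```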
G.2 CoT
Prompt Example. CoT's prompt example is as follows:
CoT's prompt example
prompt_based Prompt: Please understand and solve the question above step by step, then store the final answer in boxed{answer}. Zero-shot-cot Prompt: Let's think step by step. In the form boxed{answer}, at the end of your response. Critique Prompt: Review your previous answer and find problems with your answer. Improve Prompt: Based on the problems you found, improve your answer. Please reiterate your answer, with your final answer a single numerical number, In the form boxed{answer}.
Result Example on MATH. CoT's result example on MATH dataset is as follows:
CoT's result example on MATH dataset
$\blacktriangleright$ question: Q: What is the sum of the values of $x$ that satisfy the equation x^2-5x+5=9? Please understand and solve the question above step by step, then store the final answer in boxed{answer}. output: To solve the equation ( x^2 - 5x + 5 = 9 ), follow these steps: 1. **Move all terms to one side to set the equation to zero:** [ x^2 - 5x + 5 - 9 = 0 ] Simplify: [ x^2 - 5x - 4 = 0 ] 2. **Identify the quadratic equation:** The equation is in the form ( ax^2 + bx + c = 0 ), where: - ( a = 1 ) - ( b = -5 ) - ( c = -4 ) 3. **Find the sum of the roots of the quadratic equation:** For a quadratic equation ( ax^2 + bx + c = 0 ), the sum of the roots is given by: [ \text{Sum of roots} = -\frac{b}{a} ] Substitute the values of ( a ) and ( b ): [ \text{Sum of roots} = -\frac{-5}{1} = 5 ] Therefore, the sum of the values of ( x ) that satisfy the equation is: [ boxed{5} ] final_answer: 5 correct answer: 3 correct: false error: Final_answer and answer do not match
G.3 RCI
Prompt Example. RCI's prompt example is as follows:
RCI's prompt example
Initial Prompt: Let's think step by step. In the form boxed{answer}, at the end of your response. Critique Prompt: Review your previous answer and find problems with your answer. Improve Prompt: Based on the problems you found, improve your answer. Please reiterate your answer, with your final answer a single numerical number, In the form boxed{answer}.
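The three prompts above form a generate-critique-improve loop. A minimal sketch of how they could be chained, assuming a generic llm(prompt) completion function (an assumption for illustration, not the benchmark's implementation):

```python
def rci_answer(llm, question: str, rounds: int = 1) -> str:
    """Recursive critique-and-improvement: answer, critique, then revise."""
    init_prompt = (f"{question}\nLet's think step by step. "
                   "In the form boxed{answer}, at the end of your response.")
    answer = llm(init_prompt)
    for _ in range(rounds):
        # Critique the previous answer using the Critique Prompt verbatim.
        critique = llm(f"{question}\n{answer}\n"
                       "Review your previous answer and find problems with your answer.")
        # Revise using the Improve Prompt verbatim.
        answer = llm(f"{question}\n{answer}\n{critique}\n"
                     "Based on the problems you found, improve your answer. "
                     "Please reiterate your answer, with your final answer a single "
                     "numerical number, In the form boxed{answer}.")
    return answer
```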
Result Example on MATH. RCI's result example on MATH dataset is as follows:
RCI's result example on MATH dataset
$\blacktriangleright$ Question: What is the degree of the polynomial $(4+5x^{3}+100+2\pi x^{4}+\sqrt{10}x^{4}+9)$ ? Correct Answer: 4 Round 0: The degree of a polynomial is defined as the highest exponent of the variable, which is the variable with the highest degree. We have a polynomial in the variable $x$ , which includes the terms $4$ , $5x^{3}$ , $100$ , $2\pi x^{4}$ , $\sqrt{10}x^{4}$ , and $9$ . The term $5x^{3}$ has an exponent of $3$ , the terms $2\pi x^{4}$ and $\sqrt{10}x^{4}$ have an exponent of $4$ , and all of the other terms have an exponent of $0$ . Therefore, the degree of the polynomial is boxed{4}. Round 1 - Critique: No, I do not see any problems with my answer. I think I have already provided a clear and complete answer. Final Answer: The final answer is boxed{4}. The degree of a polynomial is defined as the highest exponent of the variable, which is the variable with the highest degree. In this case, $x^{4}$ is the term with the highest power of $x$ . Thus, the degree of the polynomial is $4$ . Round 1 - Output: I think I have already provided a clear and complete answer. To confirm this answer, we can analyze the terms of the polynomial. The polynomial contains the terms $4$ , $5x^{3}$ , $100$ , $2\pi x^{4}$ , $\sqrt{10}x^{4}$ , and $9$ . The term $5x^{3}$ has an exponent of $3$ , the terms $2\pi x^{4}$ and $\sqrt{10}x^{4}$ have an exponent of $4$ , and all of the other terms have an exponent of $0$ . In this case, the variable is $x$ , and the term with the highest degree is $x^{4}$ . Therefore, the degree of the polynomial is boxed{4}. Final Answer: 4 Correct: True
G.4 CoVe
Prompt Example. CoVe's prompt example is as follows:
CoVe's prompt example
Baseline prompt: Please generate a response to the following question. Answer directly and concisely. Question: question Verification prompt: The following is a baseline response to a question. Please generate a set of verification questions to check the accuracy of each fact in the response. List each question on a new line, prefixed with a number and a dot (e.g., 1., 2.). Baseline Response: Baseline response Execute prompt: Please answer the following question independently. Ensure your answer is not influenced by any previous responses. Final prompt: The following is a baseline response and its verification results. Please generate a final response by correcting any errors in the baseline response based on the verification results. Initial Question: question. Baseline Response: Baseline response. Verification Results: Your final answer should be in the form boxed{answer}, at the end of your response.
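The four prompts above define a four-stage pipeline: draft a baseline answer, derive verification questions, answer each independently, and revise. A minimal sketch under the same generic llm(prompt) assumption as before:

```python
import re

def cove_answer(llm, question: str) -> str:
    """Chain-of-Verification: baseline -> plan verifications -> execute -> revise."""
    baseline = llm("Please generate a response to the following question. "
                   f"Answer directly and concisely. Question: {question}")
    plan = llm("The following is a baseline response to a question. Please generate "
               "a set of verification questions to check the accuracy of each fact "
               "in the response. List each question on a new line, prefixed with a "
               f"number and a dot (e.g., 1., 2.). Baseline Response: {baseline}")
    # Parse the numbered verification questions from the plan.
    questions = re.findall(r"^\d+\.\s*(.+)$", plan, flags=re.MULTILINE)
    # Answer each verification question independently of the baseline.
    results = [llm("Please answer the following question independently. Ensure your "
                   f"answer is not influenced by any previous responses. {q}")
               for q in questions]
    return llm("The following is a baseline response and its verification results. "
               "Please generate a final response by correcting any errors in the "
               "baseline response based on the verification results. "
               f"Initial Question: {question}. Baseline Response: {baseline}. "
               f"Verification Results: {' '.join(results)} Your final answer should "
               "be in the form boxed{answer}, at the end of your response.")
```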
Result Example on MATH. CoVe's result example on MATH dataset is as follows:
CoVe's result example on MATH dataset
$\blacktriangleright$ Question: Berengere and her American foreign-exchange student Emily are at a bakery in Paris that accepts both euros and American dollars. They want to buy a cake, but neither of them has enough money. If the cake costs 6 euros and Emily has an American five-dollar bill, how many euros does Berengere need to contribute to the cost of the cake if 1 euro = 1.25 USD? Baseline Response: Emily's $5 is equivalent to 4 euros (since $5 / 1.25 = 4 euros). The cake costs 6 euros, so Berengere needs to contribute **2 euros** to cover the remaining cost. 5 verifications with 1 shown: Is the cost of the cake confirmed to be 6 euros? To confirm whether the cost of the cake is 6 euros, you would need to refer to the specific context or source where the price was mentioned. Without additional information or context, I cannot independently verify the cost of the cake. If you have a specific reference or details, please provide them for a more accurate answer. Final Verified Response: Based on the verification results, the baseline response correctly calculates Berengere's contribution under the assumption that the exchange rate is 1 euro = 1.25 USD, the cake costs 6 euros, and there are no additional fees or factors affecting the total cost. Here's the corrected and verified response: Emily's $5 is equivalent to 4 euros (since $5 / 1.25 = 4 euros). The cake costs 6 euros, so Berengere needs to contribute **2 euros** to cover the remaining cost. boxed{2 text{euros}}. Predicted Answer: 2 text{euros} Correct Answer: 2 text{euros} Correct: True
G.5 Self-Refine
Prompt Example. Self-Refine's prompt example is as follows:
9-Shot init Prompts with 2 Shots Shown Below
$\blacktriangleright$ # Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? # solution using Python: def solution(): """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?""" jason_lollipops_initial = 20 jason_lollipops_after = 12 denny_lollipops = jason_lollipops_initial - jason_lollipops_after result = denny_lollipops return result $\blacktriangleright$ # Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? # solution using Python: def solution(): """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?""" trees_initial = 15 trees_after = 21 trees_added = trees_after - trees_initial result = trees_added return result
4-Shot feedback Prompts with 1 Shot Shown Below
def solution(): """Twenty dozen cups cost $1200 less than the total cost of half a dozen plates sold at $6000 each. Calculate the total cost of buying each cup.""" plates = 6 plate_cost = 6000 cups = 12 * 20 cup_cost = plate_cost result = cup_cost return result # There is an error in the code above because of lack of understanding of the question. What is the error? To find the error, go through semantically complete blocks of the code, and check if everything looks good. # Let us go through the error and check step-by-step plates = 6 plate_cost = 6000 # looks good # Let's check the other parts cups = 12 * 20 cup_cost = plate_cost # wrong! The cost of a cup is not the same as the cost of a plate. The cost of a cup is $1200 less than the total cost of half a dozen plates sold at $6000 each. So we need to calculate the cost of a cup first (total cost of half a dozen plates sold at $6000 each - $1200) and use that. Okay! Here is the rewrite: def solution(): """Twenty dozen cups cost $1200 less than the total cost of half a dozen plates sold at $6000 each. Calculate the total cost of buying each cup.""" plates = 6 plate_cost = 6000 cups = 12 * 20 total_cup_cost = (plate_cost * plates) - 1200 cost_per_cup = total_cup_cost / cups return cost_per_cup ### END ###
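The init and feedback prompts above alternate in a loop: generate, self-critique, rewrite, until the feedback finds nothing to fix or an iteration budget runs out. A minimal sketch of this loop follows; `llm`, the stop phrase, and the iteration budget are illustrative assumptions, not the paper's exact implementation.

```python
def llm(prompt: str) -> str:
    """Hypothetical chat-completion call; replace with any LLM client."""
    raise NotImplementedError

def self_refine(question: str, init_shots: str, feedback_shots: str,
                max_iters: int = 4) -> str:
    # Initial attempt, conditioned on the few-shot init prompts above.
    solution = llm(f"{init_shots}\n# Q: {question}\n# solution using Python:\n")
    for _ in range(max_iters):
        # Self-feedback: walk through the code block by block looking for errors.
        feedback = llm(f"{feedback_shots}\n{solution}\n# There is an error in the "
                       "code above because of lack of understanding of the "
                       "question. What is the error?")
        # Illustrative stop test: the feedback reports nothing to fix.
        if "looks good" in feedback.lower():
            break
        # Refine: regenerate the solution guided by the feedback.
        solution = llm(f"{feedback}\nOkay! Here is the rewrite:\n")
    return solution
```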
Result Example on MATH. Self-Refine's result example on MATH dataset is as follows:
Self-Refine's result example on MATH dataset
$\blacktriangleright$ question: What is the degree of the polynomial $(4+5x^{3}+100+2\pi x^{4}+\sqrt{10}x^{4}+9)$ ? $\blacktriangleright$ 8 attempt examples with 1 shown below: solution_curr: def solution(): # Parking Lot Cars cars_initial = 3 cars_arrived = 2 total_cars = cars_initial + cars_arrived return total_cars This code is correct and matches the solution I described. It correctly calculates the total number of cars in the parking lot by adding the initial number of cars and the number of cars that arrived. If you'd like me to review the other problems or provide feedback on the code, please let me know! solution_fixed: def solution(): # Parking Lot Cars cars_initial = 3 cars_arrived = 2 total_cars = cars_initial + cars_arrived return total_cars Upon reviewing the code, I notice that it is actually correct and matches the solution you described. It correctly calculates the total number of cars in the parking lot by adding the initial number of cars and the number of cars that arrived. Therefore, there is no error in the code. It is a simple and straightforward calculation that produces the correct result. If you'd like me to review any other code or provide feedback, please let me know! feedback: Let's go through the code step by step to identify the error. The code is calculating the total number of cars in a parking lot by adding the initial number of cars and the number of cars that arrived. Here's the code: ```python answer: 5 correct_answer: 4 final_answer: 5 correct: False
G.6 Reflexion
Prompt Example. Reflexion's prompt example is as follows:
Reflexion's prompt example
$\blacktriangleright$ (reflect_prompt) You are an advanced reasoning agent that can improve based on self reflection. You will be given a previous reasoning trial in which you were given access to a Docstore API environment and a question to answer. You were unsuccessful in answering the question either because you guessed the wrong answer with Finish[<answer>], or you used up your set number of reasoning steps. In a few sentences, diagnose a possible reason for failure and devise a new, concise, high-level plan that aims to mitigate the same failure. Use complete sentences. Here is an example: {examples} Previous trial: Question: {question} {scratchpad} Reflection: $\blacktriangleright$ (react_agent_prompt) Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types: (1) Search[entity], which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search. (2) Lookup[keyword], which returns the next sentence containing keyword in the last passage successfully found by Search. (3) Finish[answer], which returns the answer and finishes the task. You may take as many steps as necessary. Here are some examples: {examples} (END OF EXAMPLES) Question: {question} {scratchpad} $\blacktriangleright$ (react_reflect_agent_prompt) Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types: (1) Search[entity], which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search. (2) Lookup[keyword], which returns the next sentence containing keyword in the last passage successfully found by Search. (3) Finish[answer], which returns the answer and finishes the task. You may take as many steps as necessary. Here are some examples: {examples} (END OF EXAMPLES) {reflections} Question: {question} {scratchpad} $\blacktriangleright$ (reflection_header) You have attempted to answer the following question before and failed. The following reflection(s) give a plan to avoid failing to answer the question in the same way you did previously. Use them to improve your strategy of correctly answering the given question.
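These prompts interact across trials: the ReAct-style agent attempts the task, and after a failed trial the reflect prompt produces a short diagnosis that is prepended (via the reflection header) to the next attempt. The sketch below shows this outer loop with the prompts abbreviated to their opening clauses; `llm`, `is_correct`, and the elided Thought/Action/Observation inner loop are assumptions for illustration.

```python
from typing import Callable

def llm(prompt: str) -> str:
    """Hypothetical chat-completion call; replace with any LLM client."""
    raise NotImplementedError

def reflexion(question: str, examples: str,
              is_correct: Callable[[str], bool], max_trials: int = 3) -> str:
    reflections = []   # verbal "lessons" accumulated across failed trials
    scratchpad = ""    # Thought/Action/Observation trace (agent inner loop elided)
    answer = ""
    for _ in range(max_trials):
        # Act: run the ReAct-style agent, prepending past reflections if any.
        header = ""
        if reflections:
            header = ("You have attempted to answer the following question "
                      "before and failed. The following reflection(s) give a "
                      "plan to avoid failing in the same way. ...\n"
                      + "\n".join(reflections))
        answer = llm("Solve a question answering task with interleaving Thought, "
                     f"Action, Observation steps. ... {examples}\n{header}\n"
                     f"Question: {question}\n{scratchpad}")
        if is_correct(answer):
            break
        # Reflect: diagnose the failed trial and store a plan for the next one.
        reflections.append(llm(
            "You are an advanced reasoning agent that can improve based on self "
            "reflection. ... Previous trial:\n"
            f"Question: {question}\n{scratchpad}\nReflection:"))
    return answer
```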
Result Example on AQUA. Reflexion's result example on AQUA dataset is as follows:
Reflexion's result example on AQUA dataset
$\blacktriangleright$ Question: A travel company wants to charter a plane to the Bahamas. Chartering the plane costs $5,000. So far, 12 people have signed up for the trip. If the company charges $200 per ticket, how many more passengers must sign up for the trip before the company can make any profit on the charter? $\blacktriangleright$ options: A:7 B:9 C:13 D:27 E:45 Trial 1: Scratchpad: Thought 1: To determine how many more passengers need to sign up for the trip before the company can make a profit, we need to calculate the total revenue required to cover the charter cost and then find out how many additional tickets need to be sold. Action 1: The cost of chartering the plane is $5,000. Each ticket is sold for $200. To break even, the company needs to sell enough tickets to cover the $5,000 cost. Observation 1: Invalid Action. Valid Actions are Lookup[topic] Search[topic] and Finish[answer]. Thought 2: To find out how many tickets need to be sold to cover the $5,000 cost, we divide the total cost by the price per ticket. Then, we subtract the number of tickets already sold to find out how many more are needed. Action 2: The company needs to sell enough tickets to cover the $5,000 cost. Each ticket is $200, so the number of tickets needed to break even is $5,000 / $200 = 25 tickets. Since 12 tickets have already been sold, we need to find out how many more are needed to reach 25 tickets. Observation 2: Invalid Action. Valid Actions are Lookup[topic] Search[topic] and Finish[answer]. Thought 3: To find out how many more passengers need to sign up, we calculate the total number of tickets needed to break even and subtract the number of tickets already sold. The company needs to sell 25 tickets to cover the $5,000 cost. Since 12 tickets have already been sold, the company needs to sell 25 - 12 = 13 more tickets to break even. Therefore, to make a profit, they need to sell at least 13 more tickets. Action 3: Finish[13] Observation 3: Answer is CORRECT $\blacktriangleright$ Correctness: True correct_answer: c final_answer: c correct: True
G.7 RARR
Prompt Example. RARR's prompt example is as follows:
RARR's prompt example
$\blacktriangleright$ (Prompts for generating hallucinations.) EVIDENCE_HALLUCINATION = Generate a paragraph that answers the question. $\blacktriangleright$ Question: What is New York-Style pizza? Text: New York-style pizza has slices that are large and wide with a thin crust that is foldable yet crispy. It is traditionally topped with tomato sauce and mozzarella cheese, with any extra toppings placed on top of the cheese. $\blacktriangleright$ Question: When did the first McDonald's open? Text: The McDonald's brothers opened their first McDonald's restaurant in 1940 in San Bernardino, California. Originally, a carhop drive-in system was used to serve customers. The initial menu items were centered around barbecue and the first name the brothers called their business was "McDonald's Famous Barbecue." $\blacktriangleright$ Question: {query} $\triangleright$ (6 Prompts for RARR with 1 Prompt Shown Below.) QGEN_PROMPT I will check things you said and ask questions. $\blacktriangleright$ You said: Your nose switches back and forth between nostrils. When you sleep, you switch about every 45 minutes. This is to prevent a buildup of mucus. It's called the nasal cycle. To verify it, 1. I googled: Does your nose switch between nostrils? 2. I googled: How often do your nostrils switch? 3. I googled: Why does your nostril switch? 4. I googled: What is the nasal cycle? $\blacktriangleright$ You said: The Stanford Prison Experiment was conducted in the basement of Encina Hall, Stanford's psychology building. To verify it, 1. I googled: Where was the Stanford Prison Experiment conducted? $\blacktriangleright$ You said: The Havel-Hakimi algorithm is an algorithm for converting the adjacency matrix of a graph into its adjacency list. It is named after Vaclav Havel and Samih Hakimi. To verify it, 1. I googled: What does the Havel-Hakimi algorithm do? 2. I googled: Who is the Havel-Hakimi algorithm named after? $\blacktriangleright$ You said: "Time of My Life" is a song by American singer-songwriter Bill Medley from the soundtrack of the 1987 film Dirty Dancing. The song was produced by Michael Lloyd. To verify it, 1. I googled: Who sings the song "Time of My Life"? 2. I googled: Which film is the song "Time of My Life" from? 3. I googled: Who produced the song "Time of My Life"? $\blacktriangleright$ You said: Kelvin Hopins was suspended from the Labor Party due to his membership in the Conservative Party. To verify it, 1. I googled: Why was Kelvin Hopins suspended from the Labor Party? $\blacktriangleright$ You said: Social work is a profession that is based in the philosophical tradition of humanism. It is an intellectual discipline that has its roots in the 1800s. To verify it, 1. I googled: What philosophical tradition is social work based on? 2. I googled: What year does social work have its roots in? $\blacktriangleright$ You said: {claim} To verify it,
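Assembled into a pipeline, RARR derives search queries from a claim (the QGEN prompt above), retrieves evidence for each query, and then minimally edits the claim to agree with that evidence. The sketch below shows this shape; `llm` and `search` are hypothetical helpers, and the revision prompt is paraphrased because it is one of the five RARR prompts not reproduced above.

```python
import re

def llm(prompt: str) -> str:
    """Hypothetical chat-completion call; replace with any LLM client."""
    raise NotImplementedError

def search(query: str) -> str:
    """Hypothetical retrieval call, e.g., a web-search wrapper."""
    raise NotImplementedError

def rarr(claim: str, qgen_prompt: str) -> str:
    # Step 1 (QGEN_PROMPT above): derive verification queries for the claim.
    plan = llm(f"{qgen_prompt}\nYou said: {claim} To verify it,")
    queries = re.findall(r"I googled: (.+)", plan)

    # Step 2: retrieve evidence for every query.
    evidence = "\n".join(search(q) for q in queries)

    # Step 3 (paraphrased revision prompt, not shown in the excerpt above):
    # edit the claim minimally so that it agrees with the evidence.
    return llm("Revise the following text so that it is fully supported by the "
               "evidence, changing as little of the original wording as "
               f"possible.\nText: {claim}\nEvidence: {evidence}")
```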
Result Example on GSM8K. RARR's result example on GSM8K dataset is as follows:
RARR's result example on GSM8K
$\blacktriangleright$ question: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? text: Let me help solve this word problem step by step: 1. Blue fiber needed: 2 bolts 2. White fiber needed: Half of blue = 2 / 2 = 1 bolt 3. Total bolts = Blue + White = 2 + 1 = 3 bolts Therefore, it takes 3 bolts in total to make the robe. revisions: original_text: Let me help solve this word problem step by step: 1. Blue fiber needed: 2 bolts 2. White fiber needed: Half of blue = 2 / 2 = 1 bolt 3. Total bolts = Blue + White = 2 + 1 = 3 bolts Therefore, it takes 3 bolts in total to make the robe. revised_text: Default revision text final_answer: 3 correct_answer: 3 correct: True
G.8 RATT
Prompt Example. RATT's prompt example is as follows:
RATT's prompt example
$\blacktriangleright$ (Query_prompt) Question: question. Content: previous_answer. Instruction: I want to verify the content correctness of the given question, especially the last sentences. Please summarize the content with the corresponding question. This summarization will be used as a query to search with the Bing search engine. The query should be short but needs to be specific to promise Bing can find related knowledge or pages. You can also use search syntax to make the query short and clear enough for the search engine to find relevant language data. Try to make the query as relevant as possible to the last few sentences in the content. **IMPORTANT** Just output the query directly. DO NOT add additional explanations or introduction in the answer unless you are asked to. $\blacktriangleright$ (Filter_prompt) Text: content. Question: question. Please read the following text and extract only the sections that are relevant to the given question. Organize the extracted information coherently, maintaining the structure of multiple paragraphs with subtitles, and split the paragraphs with "\n\n". Question: question. Text to Filter: content. Instruction: Extract only the relevant information related to the question. Keep the structure clear with multiple paragraphs and subtitles. Provide the filtered information directly without additional explanations or commentary. $\blacktriangleright$ (Draft_prompt) Question: question. IMPORTANT: Try to answer this question/instruction with step-by-step thoughts and make the answer more structural. Use "\n\n" to split the answer into several paragraphs. Just respond to the instruction directly. DO NOT add additional explanations or introduction in the answer unless you are asked to. If you have got the final answer, put it in the form \boxed{answer} at the end of your response. $\blacktriangleright$ (Revise_prompt) Existing Text in Wiki Web: content. Question: question. Answer: answer. I want to revise the answer according to the retrieved related text of the question in WIKI pages. You need to check whether the answer is correct. If you find some errors in the answer, revise the answer to make it better. If you find some necessary details are ignored, add them to make the answer more plausible according to the related text. If you find that a part of the answer is correct and does not require any additional details, maintain that part of the answer unchanged. Directly output the original content of that part without any modifications. **IMPORTANT** Try to keep the structure (multiple paragraphs with subtitles) in the revised answer and make it more structural for understanding. Split the paragraphs with "\n\n" characters. Just output the revised answer directly. DO NOT add additional explanations or announcements in the revised answer unless you are asked to. If you have got the final answer, put it in the form \boxed{answer} at the end of your response. $\blacktriangleright$ (Refine_prompt) Agent_drafts: agent_drafts. Referencing the answers provided by all agents, synthesize a more detailed and comprehensive response by integrating all relevant details from these answers. Ensure logical coherence and provide ONLY THE MERGED ANSWER AS THE OUTPUT, omitting any discussion of the comparison process or analytical thoughts. If you have got the final answer, put it in the form \boxed{answer} at the end of your response.
$\blacktriangleright$ (Refine_prompt) Final_prompt: Based on the original answer and an additional supplementary answer, generate a response that is richer in detail and logically coherent. Review the original answer: 1. If any part of the answer is correct and requires no further details, retain that portion unchanged and output it directly as it is. 2. For parts that may be improved or lack necessary details, enhance them by integrating information from the supplementary answer to make the response more comprehensive and accurate. 3. If you identify any errors within the answers, correct these errors while ensuring that the revised content remains logically coherent. Original Answer: previous_answer. Supplementary Answer: final_draft_raw. **IMPORTANT** Ensure the revised answer maintains a structured format (multiple paragraphs with subtitles) for better clarity. Separate the paragraphs with "\n\n" characters. Output only the enhanced answer directly, without any extra explanations or announcements unless specifically requested. If you have got the final answer, put it in the form \boxed{answer} at the end of your response. $\blacktriangleright$ (Conclude_prompt (optional)) Final_draft: final_answers. Referencing the answers provided by each step, synthesize a more detailed and comprehensive response by integrating all relevant details from these answers. Ensure logical coherence and provide ONLY THE MERGED ANSWER AS THE OUTPUT, omitting any discussion of the comparison process or analytical thoughts. If you have got the final answer, put it in the form \boxed{answer} at the end of your response.
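Chained together, these prompts define RATT's per-draft pipeline: draft a step-by-step answer, compress it into a search query, filter the retrieved page, revise the draft against the filtered context, and finally merge all drafts. The following sketch compresses each prompt to its opening clause; `llm`, `search`, and the number of drafts are illustrative assumptions.

```python
def llm(prompt: str) -> str:
    """Hypothetical chat-completion call; replace with any LLM client."""
    raise NotImplementedError

def search(query: str) -> str:
    """Hypothetical retrieval call, e.g., a Bing-search wrapper."""
    raise NotImplementedError

def ratt(question: str, n_drafts: int = 3) -> str:
    revised = []
    for _ in range(n_drafts):
        # Draft_prompt: produce a structured, step-by-step draft answer.
        draft = llm(f"Question: {question}. IMPORTANT: Try to answer this "
                    "question/instruction with step-by-step thoughts and make "
                    "the answer more structural. ...")
        # Query_prompt: compress the draft into a short search-engine query.
        query = llm(f"Question: {question}. Content: {draft}. Please summarize "
                    "the content with the corresponding question as a short, "
                    "specific search query. ...")
        # Filter_prompt: keep only retrieved text relevant to the question.
        context = llm(f"Text: {search(query)}. Question: {question}. Extract "
                      "only the relevant information related to the question. ...")
        # Revise_prompt: check the draft against the retrieved context and
        # correct or extend it where needed.
        revised.append(llm(f"Existing Text in Wiki Web: {context}. Question: "
                           f"{question}. Answer: {draft}. I want to revise the "
                           "answer according to the retrieved related text. ..."))
    # Refine_prompt: merge all revised drafts into one coherent final answer.
    drafts = "\n\n".join(revised)
    return llm(f"Agent_drafts: {drafts}. Referencing the answers provided by "
               "all agents, synthesize a more detailed and comprehensive "
               "response by integrating all relevant details from these answers.")
```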
Result Example on MATH. RATT's result example on MATH dataset is as follows:
RATT's result example on MATH dataset
$\blacktriangleright$ question: What is the value of $(2x + 5)^{2}$ when $x = 3$? draft_cot: To find the value of $(2x + 5)^{2}$ when $x = 3$, follow these steps: First, substitute $x = 3$ into the expression: $(2(3) + 5)^{2}$. Next, simplify inside the parentheses: $(6 + 5)^{2} = (11)^{2}$. Finally, calculate the square: $11^{2} = 121$. The value of $(2x + 5)^{2}$ when $x = 3$ is $\boxed{121}$. final_answer: 121 correct_answer: 121 is_correct: True
G.9 CRITIC
Prompt Example. CRITIC's prompt example is as follows:
8-Shot CoT Prompts with 2 Shots Shown Below
$\blacktriangleright$ Question: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? # Python code, return answer total_eggs = 16 eaten_eggs = 3 baked_eggs = 4 sold_eggs = total_eggs - eaten_eggs - baked_eggs dollars_per_egg = 2 answer = sold_eggs * dollars_per_egg $\blacktriangleright$ Question: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? # Python code, return answer bolts_of_blue_fiber = 2 bolts_of_white_fiber = bolts_of_blue_fiber / 2 answer = bolts_of_blue_fiber + bolts_of_white_fiber
4 Critique Prompts with 1 Shot Shown Below:
$\blacktriangleright$ Question: A farmer has twice as many pigs as cows, and 4 more cows than goats. If the farmer has 56 animals total, how many pigs does he have? ```python num_of_pigs = 2 * num_of_cows num_of_cows = number_of_goats + 4 num_of_animals = 56 number_of_goats = (num_of_animals - 4) / 3 num_of_cows = (number_of_goats + 4) * 2 answer = num_of_cows ``` $\blacktriangleright$ Execution: NameError("name 'num_of_cows' is not defined") $\blacktriangleright$ Output: answer = None $\triangleright$ What's the problem with the above code? 1. The above code causes the "NameError" because it uses the variable 'num_of_cows' before it is defined. 2. The variable names in the code are a little bit confusing, because both 'num_of_pigs' and 'number_of_goats' are used. Let's analyze the problem: we know that the total number of animals is 56, but we don't know the number of pigs, cows, or goats, and we can't get any of them directly, so we can build equations to solve for the number of pigs. $\triangleright$ Here's a better solution: ```python # let's define the number of goats as x # then the number of cows is 4+x # then the number of pigs is 2(4+x) # so, total animals is x + (4+x) + 2(4+x) = 56 # Now we can solve for "x": # x + 4 + x + 2x + 8 = 56 # 4x + 12 = 56 # 4x = 44 # x = 11 # So the farmer has 11 goats. num_of_goats = 11 num_of_cows = num_of_goats + 4 # calculate the answer given the number of goats num_of_pigs = 2 * num_of_cows answer = num_of_pigs ```
6 Shot CoT Prompts with 2 Shots Shown Below (Question Answering):
$\blacktriangleright$ Q: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into? A: Let's think step by step. The eastern sector of Colorado orogeny extends into the High Plains. High Plains rise in elevation from around 1,800 to 7,000 ft. So the answer is: 1,800 to 7,000 ft. $\blacktriangleright$ Q: Musician and satirist Allie Goertz wrote a song about the "The Simpsons" character Milhouse, who Matt Groening named after who? A: Let's think step by step. Milhouse was named after U.S. president Richard Nixon. So the answer is: Richard Nixon.
6 Critique Prompts with 1 Shot Shown Below (Question Answering):
$\blacktriangleright$ Question: Are John H. Auer and Jerome Robbins both directors? Proposed Answer: Let's think step by step. John H. Auer is a director, producer, and screenwriter. Jerome Robbins is a director, producer, choreographer, and dancer. So the answer is: Jerome Robbins. $\triangleright$ What's the problem with the above answer? $\triangleright$ 1. Plausibility: This is a yes/no question, the proposed answer is "Jerome Robbins", which does not clearly answer whether they are both directors. The answer should be a clear "yes" or "no". $\triangleright$ 2. Truthfulness: Let's search the question in google: $\blacktriangleright$ Search Query: Are John H. Auer and Jerome Robbins both directors? $\blacktriangleright$ Evidence: [A history of Jerome Robbins at PNB - Pacific Northwest Ballet] Robbins retained the title of associate artistic director until approximately 1963, ... Ballets: USA, from the late 1950s to the late 1960s. The evidence suggests Jerome Robbins is a director. We then need to verify if John H. Auer is a director. $\blacktriangleright$ Search Query: Is John H. Auer a director? site: wikipedia.org $\blacktriangleright$ Evidence: [John H. Auer - Wikipedia] Auer was a Hungarian-born child actor who, on coming to the Americas in 1928, became a movie director and producer, initially in Mexico but, from the early 1930s, in Hollywood. The evidence suggests that John H. Auer is an actor, director and producer. Therefore, both John H. Auer and Jerome Robbins are directors. $\blacktriangleright$ Question: Are John H. Auer and Jerome Robbins both directors? Here's the most possible answer: Let's think step by step. John H. Auer is an actor, director and producer. Jerome Robbins is a director. Therefore, both John H. Auer and Jerome Robbins are directors. So the answer is: Yes.
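Both task variants follow the same generate-critique-correct loop: produce an answer, verify it with an external tool (a Python interpreter for math, web search for QA), and regenerate conditioned on the tool feedback. Below is a minimal sketch of the program-synthesis variant; `llm` is a hypothetical client and `run_python` is a deliberately simplified, unsandboxed executor used only for illustration.

```python
def llm(prompt: str) -> str:
    """Hypothetical chat-completion call; replace with any LLM client."""
    raise NotImplementedError

def run_python(code: str) -> str:
    """Toy executor: run the generated snippet and report the 'answer' variable
    or the raised exception. A real system would sandbox this call."""
    scope = {}
    try:
        exec(code, scope)  # illustrative only; never exec untrusted code
        return f"Output: answer = {scope.get('answer')}"
    except Exception as err:  # e.g., the NameError in the critique shot above
        return f"Execution: {err!r}"

def critic_math(question: str, cot_shots: str, critique_shots: str,
                max_rounds: int = 2) -> str:
    # Round 0: few-shot program-of-thought generation.
    code = llm(f"{cot_shots}\nQuestion: {question}\n# Python code, return answer\n")
    for _ in range(max_rounds):
        # Tool feedback: execute the program and capture the result or error.
        result = run_python(code)
        # Critique-and-correct: the model inspects code plus execution feedback
        # and emits a (possibly unchanged) better solution.
        code = llm(f"{critique_shots}\nQuestion: {question}\n{code}\n{result}\n"
                   "What's the problem with the above code? "
                   "Here's a better solution:")
    return code
```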
Result Example on MATH. CRITIC's Mathematical Program Synthesis task result example on MATH dataset is as follows:
CRITIC's Mathematical Program Synthesis task result example on MATH dataset
$\blacktriangleright$ Question: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? Correct Answer: 18 Round 0: \n total_eggs = 16\n eaten_eggs = 3\n baked_eggs = 4\n sold_eggs = total_eggs - eaten_eggs - baked_eggs\n dollars_per_egg = 2\n answer = sold_eggs * dollars_per_egg\n print(answer) Round 1 - Critique (through Python interpreter): \n total_eggs = 16\n eaten_eggs = 3\n baked_eggs = 4\n sold_eggs = total_eggs - eaten_eggs - baked_eggs\n dollars_per_egg = 2\n answer = sold_eggs * dollars_per_egg\n print(answer) Round 1 - Output: 18 Final Answer: 18 Correct: True
Result Example on HotPotQA. CRITIC's Question Answering task result example on HotPotQA dataset is as follows:
CRITIC's Question Answering task result example on HotPotQA dataset
$\blacktriangleright$ Question: Were Scott Derrickson and Ed Wood of the same nationality? Correct Answer: yes Round 0: Let's think step by step. Scott Derrickson is an American film director, screenwriter, and producer. Ed Wood was an American filmmaker, actor, and writer. So Scott Derrickson and Ed Wood are of the same nationality. So the answer is: Yes. Round 1 - Critique (through Google search): supporting_facts: "title": ["Scott Derrickson", "Ed Wood"] Round 1 - Output: Yes. This answer is plausible and truthful, and the evidence supports the claim that both Scott Derrickson and Ed Wood are American. Final Answer: Yes Correct: True
Appendix H All Performance Results
H.1 All Performance Results for Intrinsic Correction
Table 9: Performance of Intrinsic Correction Methods on Different LLMs
H.2 All Performance Results for External Correction
Table 10: Performance of External Correction Methods on Different LLMs
| LLaMA3.1-8B | Base | 75.8 | 76.16 | 53.88 | 81.55 | 74.37 | 9.74 | 73.44 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CoT | 72.14 (-3.66) | 73.85 (-2.31) | 44.55 (-9.33) | 78.28 (-3.27) | 77.65 (+3.28) | 13.11 (+3.37) | 51.83 (-21.61) | |
| Reflexion-v2 | 89.02 (+13.22) | 78.66 (+2.5) | 59.77 (+5.89) | 83.9 (+2.35) | 78.14 (+3.77) | 22.14 (+12.4) | - | |
| RARR | 81.31 (+5.51) | 74.6 (-1.56) | 56.54 (+2.66) | 79.31 (-2.24) | 84.43 (+10.06) | 34.58 (+24.84) | 67.58 (-5.86) | |
| RATT | 76.82 (+1.02) | 76.72 (+0.56) | 54.94 (+1.06) | 72.34 (-9.21) | 81.16 (+6.79) | 13.83 (+4.09) | 62.33 (-11.11) | |
| CRITIC | 69.33 (-6.47) | - | - | 69.12 (-12.43) | - | - | - | |
| LLaMA3.1-70B | Base | 81.28 | 81.88 | 62.65 | 90.63 | 78.21 | 15.62 | 62.18 |
| CoT | 86.13 (+4.85) | 82.84 (+0.96) | 62.93 (+0.28) | 95.92 (+5.29) | 60.11 (-18.1) | 23.74 (+8.12) | 46.39 (-15.79) | |
| Reflexion-v2 | 88.78 (+7.5) | 86.17 (+4.29) | 62.45 (-0.2) | 82.11 (-8.52) | 84.82 (+6.61) | 24.62 (+9) | - | |
| RARR | 90.49 (+9.21) | 79.93 (-1.95) | 67.29 (+4.64) | 88.5 (-2.13) | 87.8 (+9.59) | 35.79 (+20.17) | 78.96 (+16.78) | |
| RATT | 82.37 (+1.09) | 82.08 (+0.2) | 68.22 (+5.57) | 86.15 (-4.48) | 85.79 (+7.58) | 26.09 (+10.47) | 66.05 (+3.87) | |
| CRITIC | 85.57 (+4.29) | - | - | 95.24 (+4.61) | - | - | - | |
| Qwen2.5-7B | Base | 74.05 | 74.75 | 47.5 | 90.23 | 74.28 | 7.53 | 79.11 |
| CoT | 72.07 (-1.98) | 56.19 (-18.56) | 50.75 (+3.25) | 87.01 (-3.22) | 68.61 (-5.67) | 12.17 (+4.64) | 63.24 (-15.87) | |
| Reflexion-v2 | 87.21 (+13.16) | 67.95 (-6.8) | 54.35 (+6.85) | 89.31 (-0.92) | 85.63 (+11.35) | 26.18 (+18.65) | - | |
| RARR | 71.67 (-2.38) | 75.52 (+0.77) | 45.61 (-1.89) | 85.22 (-5.01) | 79.44 (+5.16) | 34.44 (+26.91) | 85.71 (+6.6) | |
| RATT | 72.4 (-1.65) | 84.55 (+9.8) | 65.17 (+17.67) | 94.92 (+4.69) | 84.61 (+10.33) | 14.61 (+7.08) | 80.69 (+1.58) | |
| CRITIC | 69.89 (-4.16) | - | - | 74.42 (-15.81) | - | - | - | |
| Qwen2.5-72B | Base | 83.63 | 81.92 | 57.58 | 91.11 | 66.91 | 9.85 | 86.13 |
| CoT | 89.87 (+6.24) | 88.44 (+6.52) | 57.34 (-0.24) | 94.14 (+3.03) | 63.71 (-3.2) | 16.19 (+6.34) | 62.45 (-23.68) | |
| Reflexion-v2 | 83.69 (+0.06) | 82.33 (+0.41) | 57.46 (-0.12) | 92.64 (+1.53) | 77.46 (+10.55) | 14.72 (+4.87) | - | |
| RARR | 87.77 (+4.14) | 86.16 (+4.24) | 57.23 (-0.35) | 86.4 (-4.71) | 73.66 (+6.75) | 40.13 (+30.28) | 87.63 (+1.5) | |
| RATT | 84.68 (+1.05) | 84.12 (+2.2) | 64.41 (+6.83) | 90.64 (-0.47) | 77.94 (+11.03) | 25.74 (+15.89) | 84.72 (-1.41) | |
| CRITIC | 84.78 (+1.15) | - | - | 79.35 (-11.76) | - | - | - | |
| Claude3.5 | Base | 88.29 | 80.25 | 81.26 | 95.81 | 83.51 | 32.34 | 84.69 |
| CoT | 92.3 (+4.01) | 82.48 (+2.23) | 64.51 (-16.75) | 97.55 (+1.74) | 85.48 (+1.97) | 16.67 (-15.67) | 80.53 (-4.16) | |
| Reflexion-v2 | 93.62 (+5.33) | 91.77 (+11.52) | 84.83 (+3.57) | 94.1 (-1.71) | 92.56 (+9.05) | 53.21 (+20.87) | - | |
| RARR | 87.44 (-0.85) | 80.79 (+0.54) | 83.52 (+2.26) | 94.36 (-1.45) | 94.18 (+10.67) | 50.34 (+18.0) | 86.27 (+1.58) | |
| RATT | 82.13 (-6.16) | 77.62 (-2.63) | 88.51 (+7.25) | 96.39 (+0.58) | 97.31 (+13.8) | 38.28 (+5.94) | 85.29 (+0.6) | |
| CRITIC | 95.16 (+6.87) | - | - | 94.85 (-0.96) | - | - | - | |
| GPT-3.5 | Base | 82.94 | 77.92 | 55.15 | 79.14 | 70.44 | 27.29 | 80.29 |
| CoT | 89.34 (+6.4) | 81.47 (+3.55) | 65.56 (+10.41) | 97.41 (+18.27) | 74.91 (+4.47) | 17.84 (-9.45) | 76.77 (-3.52) | |
| Reflexion-v2 | 90.31 (+7.37) | 82.72 (+4.8) | 76.22 (+21.07) | 92.62 (+13.48) | 79.84 (+9.4) | 30.51 (+3.22) | - | |
| RARR | 82.37 (-0.57) | 74.04 (-3.88) | 62.04 (+6.89) | 89.93 (+10.79) | 73.36 (+2.92) | 28.42 (+1.13) | 78.58 (-1.71) | |
| RATT | 75.92 (-7.02) | 78.39 (+0.47) | 64.99 (+9.84) | 88.27 (+9.13) | 73.66 (+3.22) | 37.87 (+10.58) | 80.56 (+0.27) | |
| CRITIC | 82.49 (-0.45) | - | - | 82.72 (+3.58) | - | - | - | |
| GPT-4o | Base | 89.16 | 80.65 | 65.82 | 91.15 | 69.54 | 22.49 | 77.04 |
| CoT | 91.86 (+2.7) | 81.68 (+1.03) | 61.45 (-4.37) | 97.81 (+6.66) | 73.46 (+3.92) | 13.75 (-8.74) | 64.58 (-12.46) | |
| Reflexion-v2 | 91.17 (+2.01) | 84.23 (+3.58) | 77.01 (+11.19) | 94.13 (+2.98) | 77.65 (+8.11) | 19.47 (-3.02) | - | |
| RARR | 90.89 (+1.73) | 84.87 (+4.22) | 82.13 (+16.31) | 95.34 (+4.19) | 81.61 (+12.07) | 35.69 (+13.2) | 94.09 (+17.05) | |
| RATT | 90.84 (+1.68) | 80.49 (-0.16) | 77.23 (+11.41) | 98.73 (+7.58) | 73.48 (+3.94) | 24.95 (+2.46) | 86.04 (+9.0) | |
| CRITIC | 91.08 (+1.92) | - | - | 97.44 (+6.29) | - | - | - | |
| QWQ-32B | Base | 62.43 | 82.78 | 52.42 | 63.41 | 73.78 | 10.85 | 19.86 |
| CoT | 65.86 (+3.43) | 75.23 (-7.55) | 62.43 (+10.01) | 81.41 (+18.0) | 75.62 (+1.84) | 11.31 (+0.46) | 14.19 (-5.67) | |
| Reflexion-v2 | 83.18 (+20.75) | 81.71 (-1.07) | 73.55 (+21.13) | 83.75 (+20.34) | 73.56 (-0.22) | 23.12 (+12.27) | - | |
| RARR | 88.28 (+25.85) | 85.03 (+2.25) | 72.67 (+20.25) | 86.33 (+22.92) | 82.97 (+9.19) | 31.66 (+20.81) | 34.92 (+15.06) | |
| RATT | 66.19 (+3.76) | 76.78 (-6.0) | 50.88 (-1.54) | 72.85 (+9.44) | 66.12 (-7.66) | 15.62 (+4.77) | 22.71 (+2.85) | |
| CRITIC | 79.22 (+16.79) | - | - | 90.83 (+27.42) | - | - | - | |
| DeepSeek-V3 | Base | 89.29 | 83.35 | 74.79 | 95.12 | 85.02 | 31.35 | 91.67 |
| CoT | 90.08 (+0.79) | 80.08 (-3.27) | 72.67 (-2.12) | 98.13 (+3.01) | 73.73 (-11.29) | 23.91 (-7.44) | 80.92 (-10.75) | |
| Reflexion-v2 | 84.84 (-4.45) | 84.38 (+1.03) | 68.47 (-6.32) | 96.28 (+1.16) | 82.62 (-2.4) | 27.72 (-3.63) | - | |
| RARR | 88.99 (-0.3) | 84.15 (+0.8) | 74.28 (-0.51) | 94.93 (-0.19) | 87.55 (+2.53) | 40.31 (+8.96) | 82.41 (-9.26) | |
| RATT | 84.93 (-4.36) | 86.53 (+3.18) | 78.19 (+3.4) | 92.47 (-2.65) | 86.57 (+1.55) | 36.14 (+4.79) | 92.55 (+0.88) | |
| CRITIC | 91.72 (+2.43) | - | - | 94.37 (-0.75) | - | - | - | |