# Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
Abstract
We introduce Buffer of Thoughts (BoT), a novel and versatile thought-augmented reasoning approach for enhancing the accuracy, efficiency and robustness of large language models (LLMs). Specifically, we propose a meta-buffer to store a series of informative high-level thoughts, namely thought-templates, distilled from problem-solving processes across various tasks. Then, for each problem, we retrieve a relevant thought-template and adaptively instantiate it with specific reasoning structures to conduct efficient reasoning. To guarantee scalability and stability, we further propose a buffer-manager to dynamically update the meta-buffer, thus enhancing its capacity as more tasks are solved. We conduct extensive experiments on 10 challenging reasoning-intensive tasks and achieve significant performance improvements over previous SOTA methods: 11% on Game of 24, 20% on Geometric Shapes and 51% on Checkmate-in-One. Further analysis demonstrates the superior generalization ability and model robustness of our BoT, while requiring only 12% of the cost of multi-query prompting methods (e.g., tree/graph of thoughts) on average. Notably, we find that our Llama3-8B + BoT has the potential to surpass the Llama3-70B model. Our project is available at https://github.com/YangLing0818/buffer-of-thought-llm
1 Introduction
A series of Large Language Models (LLMs) [1, 2, 3, 4, 5] like GPT-4 [3], PaLM [2] and LLaMA [6, 7] have showcased impressive performance on various reasoning tasks. In addition to scaling up the model size to improve reasoning performance, there are more effective prompting methods that further enhance the functionality and performance of LLMs. We divide these methods into two categories: (i) single-query reasoning: these methods [8, 9, 10] usually focus on prompt engineering, and their reasoning process can be finished within a single query, such as CoT [8], which appends 'Let's think step by step' to the input query to produce rationales that increase reasoning accuracy, and Few-shot Prompting [11, 12, 9, 13], which provides task-relevant exemplars to assist answer generation; (ii) multi-query reasoning: these methods [14, 15] focus on leveraging multiple LLM queries to elicit different plausible reasoning paths, thus decomposing a complex problem into a series of simpler sub-problems, such as Least-to-Most [16], ToT [14] and GoT [17].
However, both kinds of methods face some limitations: (1) single-query reasoning usually requires prior assumptions or relevant exemplars of the reasoning process, which are impractical to design manually task by task, and thus lacks universality and generalization; (2) due to the recursive expansion of reasoning paths, multi-query reasoning is usually computationally intensive when finding a unique intrinsic structure underlying the reasoning process of each specific task; (3) both single-query and multi-query reasoning processes are limited by their designed exemplars and reasoning structures, and they neglect to derive general, high-level guidelines or thoughts from previously completed tasks, which are informative for improving efficiency and accuracy when solving similar problems.
<details>
<summary>x1.png Details</summary>

### Visual Description
## Diagram: LLM Reasoning Strategies
### Overview
The image presents a comparative diagram illustrating three different strategies for Large Language Model (LLM) reasoning: Single-Query, Multi-Query (N-hop Iteration), and Buffer of Thoughts (BoT). Each strategy outlines a distinct approach to processing input queries and generating outputs, with varying implications for accuracy and efficiency.
### Components/Axes
**1. Single-Query:**
* **Input query:** Represents the initial prompt provided to the LLM.
* **LLM:** Denotes the Large Language Model processing the input.
* **Manual Prompt for Specific Task (e.g., CoT, Few-shot Prompting):** Describes the prompting technique used.
* **Reasoning:** Represents the reasoning process performed by the LLM.
* **Output:** The final result generated by the LLM.
* **Accuracy:** Indicates the level of correctness of the output, shown with a downward arrow and a sad face, suggesting lower accuracy.
**2. Multi-Query (N-hop Iteration):**
* **Input query:** Represents the initial prompt provided to the LLM.
* **LLM:** Denotes the Large Language Model processing the input.
* **Multi-Query:** Indicates the use of multiple queries.
* **Pre-defined Query Structure (e.g., ToT, GoT):** Describes the structure of the queries.
* **Thought Expansion:** Represents the process of expanding on initial thoughts.
* **Reasoning:** Represents the reasoning process performed by the LLM.
* **Output:** The final result generated by the LLM.
* **Efficiency:** Indicates the level of efficiency of the output, shown with a downward arrow and a sad face, suggesting lower efficiency.
**3. Buffer of Thoughts (BoT):**
* **Input query:** Represents the initial prompt provided to the LLM.
* **LLM:** Denotes the Large Language Model processing the input.
* **Problem Distiller:** Represents the component responsible for distilling the problem.
* **Instantiated Reasoning:** Represents the reasoning process performed by the LLM.
* **Output:** The final result generated by the LLM.
* **Thought Distillation and Update:** Represents the process of refining and updating thoughts.
* **Meta Buffer:** A buffer for storing meta-level thoughts.
* **High-level Thoughts:** Thoughts generated during the process.
* **Thought Retrieval:** The process of retrieving thoughts.
* **Accuracy & Efficiency:** Indicates the level of accuracy and efficiency of the output, shown with an upward arrow and a happy face, suggesting higher accuracy and efficiency.
### Detailed Analysis
**1. Single-Query:**
* The process starts with an input query fed into the LLM.
* The LLM uses a manual prompt tailored for a specific task, such as Chain-of-Thought (CoT) or Few-shot Prompting.
* The LLM performs reasoning based on the prompt and generates an output.
* The accuracy of this method is indicated to be lower.
**2. Multi-Query (N-hop Iteration):**
* The process starts with an input query fed into the LLM.
* The LLM uses a pre-defined query structure, such as Tree of Thoughts (ToT) or Graph of Thoughts (GoT).
* The LLM expands on the initial thoughts and performs reasoning iteratively.
* The output is generated after multiple iterations.
* The efficiency of this method is indicated to be lower.
**3. Buffer of Thoughts (BoT):**
* The process starts with an input query fed into the LLM.
* The LLM uses a Problem Distiller to refine the problem.
* Instantiated Reasoning is performed based on the distilled problem.
* High-level thoughts are generated and stored.
* Thought Retrieval is used to retrieve relevant thoughts.
* Thought Distillation and Update are performed to refine the thoughts.
* The Meta Buffer stores meta-level thoughts.
* The output is generated after the distillation and update process.
* The accuracy and efficiency of this method are indicated to be higher.
### Key Observations
* The Single-Query method is straightforward but may suffer from lower accuracy.
* The Multi-Query method involves iterative reasoning but may be less efficient.
* The Buffer of Thoughts method aims to improve both accuracy and efficiency through thought distillation and updating.
### Interpretation
The diagram illustrates a progression in LLM reasoning strategies, moving from simple, single-query approaches to more complex, iterative, and refined methods. The Single-Query method represents a basic approach, while the Multi-Query method attempts to improve reasoning through iteration. The Buffer of Thoughts (BoT) method represents a more advanced strategy that incorporates thought distillation and updating to enhance both accuracy and efficiency. The diagram suggests that more sophisticated methods like BoT are designed to overcome the limitations of simpler approaches, leading to better performance in complex tasks.
</details>
Figure 1: Comparison between (a) single-query [8, 11], (b) multi-query [14, 17], and (c) our BoT reasoning methods.
To address these limitations, we propose Buffer of Thoughts (BoT), a novel and versatile thought-augmented reasoning framework aimed at enhancing the reasoning accuracy, efficiency and robustness of LLMs across various tasks. Specifically, we design the meta-buffer, a lightweight library housing a series of universal high-level thoughts (thought-templates), which are distilled from different problem-solving processes and can be shared across tasks. Then, for each problem, we retrieve a relevant thought-template and instantiate it with specific reasoning structures for efficient thought-augmented reasoning. To guarantee the scalability and stability of our BoT, we further propose the buffer-manager to dynamically update the meta-buffer, which effectively enhances the capacity of the meta-buffer as more tasks are solved.
Our method has three critical advantages: (i) Accuracy Improvement: with the shared thought-templates, we can adaptively instantiate high-level thoughts to address different tasks, eliminating the need to build reasoning structures from scratch and thereby improving reasoning accuracy. (ii) Reasoning Efficiency: our thought-augmented reasoning directly leverages informative historical reasoning structures without complex multi-query processes, thus improving reasoning efficiency. (iii) Model Robustness: the procedure from thought retrieval to thought instantiation mirrors the human thought process, enabling LLMs to address similar problems in a consistent way and thus significantly enhancing model robustness. Our empirical studies demonstrate that Buffer of Thoughts significantly improves precision, efficiency, and robustness over a diverse array of tasks. Here, we summarize our contributions as follows:
1. We propose a novel thought-augmented reasoning framework Buffer of Thoughts (BoT) for improving the accuracy, efficiency and robustness of LLM-based reasoning.
1. We propose a meta-buffer to store informative high-level thoughts distilled from different problems, and adaptively instantiate each thought-template to address each specific task.
1. We design a buffer-manager to distill thought-templates from various solutions, which continually improves the capacity of the meta-buffer as more tasks are solved.
1. We conduct extensive experiments on 10 challenging reasoning-intensive tasks. Our BoT achieves significant performance improvements over previous SOTA methods: 11% on Game of 24, 20% on Geometric Shapes and 51% on Checkmate-in-One, while requiring only 12% of the cost of multi-query prompting methods on average.
2 Related Work and Discussions
Retrieval-Augmented Language Models
The retrieval-augmented (large) language model is introduced as a solution to mitigate hallucination and enhance the output quality of language models [18, 19, 20, 21, 22]. When presented with an input question, the retrieval-augmented LLM first queries an external database containing billions of tokens [23] to retrieve a subset of the text corpus that helps generate the final answer. Notably, the retrieval-augmented LLM achieves superior question-answering performance using fewer parameters compared to conventional LLMs [19], and it has found application across various downstream tasks [24, 25, 26], including multi-modal generation [24, 22, 23, 25] and biomedical applications [26, 27]. In this paper, we construct a novel category of retrieval database, termed meta-buffer, which contains a series of high-level thoughts rather than specific instances, aiming to universally address various tasks in LLM-based reasoning.
Prompt-based Reasoning with Large Language Models
Prompting techniques have significantly enhanced the arithmetic and commonsense reasoning capabilities of LLMs. Chain-of-Thought (CoT) prompting [8] and its variants [28, 29, 30], such as Least-to-Most [16], Decomposed Prompting [31], and Auto-CoT [13], prompt LLMs to break down complex questions into simpler subtasks and systematically solve them before summarizing a final answer. Numerous studies [32, 33, 34, 35, 36, 37] have demonstrated the effectiveness of these prompting methods across a wide range of tasks and benchmarks. Innovations like Tree-of-Thought [14] and Graph-of-Thought [17] have further advanced this field by exploring dynamic, non-linear reasoning pathways to expand the heuristic capabilities of LLMs [38, 39]. However, they suffer from increased resource demands and greater time complexity, depend on manual prompt crafting, and are often tailored to specific task types. Recent meta-prompting methods [15, 40] utilize the same task-agnostic form of prompting for various tasks and recursively guide a single LLM to adaptively address different input queries. Nevertheless, such a long meta prompt may require a considerable context window, and these methods fail to leverage historical informative guidelines or thoughts for potentially similar tasks.
Analogical Reasoning
Analogical reasoning is a useful technique for natural language reasoning [41, 42, 43, 44, 45]. Recent works demonstrate that LLMs can perform analogical reasoning much like humans [46, 47, 12, 48, 49]. For example, Analogical Prompting [12] and Thought Propagation [48] prompt LLMs to self-generate a set of analogous problems, and then utilize the results of those analogous problems to produce a solution for the input problem. However, the specific solutions for self-explored problems may introduce additional noise and cause error accumulation. The recent Thought-Retriever [49] uses the intermediate thoughts generated when solving past user queries to address analogous queries, but it only focuses on textual comprehension/generation rather than general reasoning problems. Thus, a more high-level and general analogical approach for LLM-based complex reasoning is still lacking.
3 Buffer of Thoughts
Overview of Buffer of Thoughts
In this section, we introduce our Buffer of Thoughts in detail and illustrate the core thought-augmented reasoning process in Figure 2. Given a specific task, we utilize our problem-distiller (Section 3.1) to extract critical task-specific information along with relevant constraints. Based on the distilled information, we search the meta-buffer (Section 3.2), which contains a series of high-level thoughts (thought-templates), and retrieve the most relevant thought-template for the task. Subsequently, we instantiate the retrieved thought-template with more task-specific reasoning structures and conduct the reasoning process. Finally, we employ a buffer-manager (Section 3.3) to summarize the whole problem-solving process and distill high-level thoughts that increase the capacity of the meta-buffer.
<details>
<summary>x2.png Details</summary>

### Visual Description
## Problem Solving Strategies for Price Reduction
### Overview
The image presents a problem related to price reduction in a retail setting, along with several approaches to solving it. It includes the problem statement, and four different solution strategies: Chain-of-Thought, Plan-and-Solve, Thought Templates, and Instantiated Reasoning. Each strategy attempts to determine the optimal price reduction for a shirt to achieve a target daily profit.
### Components/Axes
* **Input Problem:** The problem statement describes a shopping mall selling branded shirts with an average daily sales of 20 pieces and a profit of 40 yuan per piece. The goal is to determine the price reduction needed to achieve a daily profit of 1200 yuan, given that for every 1 yuan decrease in price, 2 more shirts are sold per day.
* **Chain-of-Thought:** A step-by-step calculation approach.
* **Plan-and-Solve:** A structured approach to solving the problem, breaking it down into steps.
* **Thought Template T1:** A template for solving quadratic equations, focusing on calculating the discriminant and determining the nature of the roots.
* **Thought Template TN:** A template defining functions for processing elements, combining elements, checking conditions, and solving problems.
* **Problem Distillation & Thought Retrieval:** A process of refining the problem and retrieving relevant knowledge.
* **Meta Buffer:** A storage area for general knowledge and problem-solving strategies.
* **Instantiated Reasoning:** A solution using variables and a quadratic equation, following the steps outlined in the Thought Template.
### Detailed Analysis or ### Content Details
**Input Problem:**
* A shopping mall sells branded shirts.
* Average daily sales: 20 pieces.
* Profit per piece: 40 yuan.
* Goal: Achieve a daily profit of 1200 yuan.
* For every 1 yuan price decrease, 2 more shirts are sold per day.
**Chain-of-Thought:**
1. Calculate the current daily profit: 20 * 40 = 800 yuan
2. Calculate additional daily profit from selling more shirts: 2 * 40 = 80 yuan
3. Calculate Price reduction: Price reduction needed = 320 / 2 = 160 yuan
* Conclusion: Each shirt should be reduced by 160 yuan to make an average profit of 1200 yuan per day.
* This solution is marked with a red "X", indicating it is incorrect.
**Plan-and-Solve:**
1. Determine the current daily profit: Calculate the current daily profit using the average daily sales and profit per shirt.
2. Set up the equation for the desired daily profit: Create an equation that relates new profit per shirt, increased number of shirts sold, and desired daily profit.
3. Solve the equation:
* Expand the equation and collect like terms: 1200 = (20 * 40) + (2p * 40) - (20 * p) - (2p * p)
* Divide by 2 to simplify: p² - 30p - 200 = 0
* p ≈ 35.6155
* This solution is marked with a red "X", indicating it is incorrect.
**Thought Template T1:**
* To solve any quadratic equation of the form ax² + bx + c = 0.
* Step 1: Calculate the Discriminant: D = b² - 4ac
* Step 2: Determine the Nature of the Roots:
* If D ≥ 0, the equation has two distinct real roots.
* If D = 0, the equation has exactly one real root (also known as a repeated or double root).
* If D < 0, the equation has two complex roots.
* Step 3: Compute the Roots:
* For D ≥ 0, x = (-b ± √D) / 2a
* For D < 0, x = (-b ± √-Di) / 2a, where i is the imaginary unit.
**Thought Template TN:**
* `def process_element(element):` # Define how to process each individual element.
* `def combine_elements(element1, element2):` # Define how to combine elements.
* `def check_condition(accumulated_result):` # Define the condition that the accumulated result must meet.
* `def solve_problem(input_list):`
**Instantiated Reasoning:**
* Solution:
* p = original profit per shirt (40 yuan)
* x = amount by which the price of each shirt is reduced
* q = original quantity of shirts sold daily (20 pieces)
* r = rate at which the quantity of shirts sold increases
* The mall wants to maintain an average daily profit of 1200 yuan.
* After simplification: x² - 30x + 200 = 0
* Now we can solve this quadratic equation using the steps outlined in the template:
* Step 1: Calculate the Discriminant: D = b² - 4ac = (-30)² - 4 * 1 * 200 = 100
* Step 2: Determine the Nature of the Roots: Since D >= 0, the equation has two distinct real roots.
* Step 3: Compute the Roots: x = (-b ± √D) / 2a = (30 ± 10) / 2
* The two possible solutions are: x = 20 or x = 10
* Due to reducing inventory as soon as possible, x = 20 is taken.
* This solution is marked with a green checkmark, indicating it is correct.
### Key Observations
* The Chain-of-Thought and Plan-and-Solve approaches provide incorrect solutions.
* The Instantiated Reasoning approach, guided by the Thought Template, arrives at the correct solution.
* The correct price reduction is 20 yuan.
### Interpretation
The image demonstrates different problem-solving strategies applied to a business scenario. The Chain-of-Thought and Plan-and-Solve methods, while intuitive, fail to account for the quadratic relationship between price reduction and increased sales, leading to incorrect results. The Instantiated Reasoning approach, leveraging a structured template for solving quadratic equations, successfully determines the optimal price reduction. This highlights the importance of using appropriate mathematical models and structured problem-solving techniques to address complex business problems. The fact that reducing inventory is a priority suggests there may be storage costs or other factors influencing the decision to choose x=20 over x=10.
</details>
Figure 2: Illustration of different reasoning processes. Buffer of Thoughts enables large language models to tackle complex reasoning tasks through our thought-augmented reasoning process. The thought-template is marked in orange and the instantiated thought is marked in blue.
3.1 Problem Distiller
Most complex tasks contain implicit constraints, complex object relationships, and intricate variables and parameters within their contexts. Consequently, during the reasoning stage, LLMs need to overcome three main challenges: extracting vital information, recognizing potential constraints, and performing accurate reasoning. These challenges impose a significant burden on a single LLM. Therefore, we separate the extraction and comprehension of task information from the final reasoning stage by prepending a problem-distiller to the reasoning process. More concretely, we design a meta prompt $\phi$ to first distill and formalize the task information. The distilled task information is denoted as:
$$
x_{d}=LLM(\phi(x)), \tag{1}
$$
where $x$ is the task statement. Due to the page limit, we put the detailed meta prompt for problem-distiller in Section A.2.
Problem Condensation and Translation
We use the problem-distiller to extract key elements from input tasks, focusing on: (1) essential parameters and variables for problem-solving; (2) the objectives of the input tasks and their corresponding constraints. We then re-organize this distilled information into a clear, comprehensible format for the subsequent reasoning stage, and translate the specific problems into high-level concepts and structures. This translation procedure decomposes complex real-world problems, like intricate mathematical application scenarios, into simpler, multi-step calculations, making it easier to later retrieve high-level thoughts, as sketched below.
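To make the distillation step concrete, here is a minimal sketch of Eq. (1); `llm_call` is a generic chat-completion stub and `PROBLEM_DISTILLER_PROMPT` is a hypothetical abbreviation of the meta prompt $\phi$ described in Section A.2, not the exact released prompt.

```python
# Minimal sketch of Eq. (1): x_d = LLM(phi(x)).
# `llm_call` and PROBLEM_DISTILLER_PROMPT are hypothetical placeholders; the
# actual meta prompt phi is the one given in Section A.2.

PROBLEM_DISTILLER_PROMPT = (
    "Extract the key variables, parameters, objectives, and constraints from the "
    "following problem, then restate it in terms of high-level concepts and structures:\n\n{task}"
)

def llm_call(prompt: str) -> str:
    """Placeholder for a chat-completion call to the base LLM (e.g., GPT-4)."""
    raise NotImplementedError

def distill_problem(task_statement: str) -> str:
    """Return the distilled task information x_d for a raw task statement x."""
    return llm_call(PROBLEM_DISTILLER_PROMPT.format(task=task_statement))
```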
3.2 Thought-Augmented Reasoning with Meta Buffer
Motivation
Humans often summarize and induce higher-level guidelines when solving problems, and then apply them to related problems. Motivated by this, we propose the meta-buffer, a lightweight library that contains a series of high-level thoughts (thought-templates) for addressing various types of problems. Unlike traditional methods [11, 46, 12, 36, 9] that require specific instructions or exemplars, our high-level thought-templates can be adaptively instantiated when solving different problems, endowing LLMs with superior precision and flexibility.
Thought Template
As a kind of high-level guideline, each thought-template is stored in the meta-buffer and is obtained from various problem-solving processes by our buffer-manager. The details of acquiring thought-templates are introduced in Section 3.3. Since our BoT aims to provide a general reasoning approach for various tasks, we correspondingly classify the thought-templates into six categories: Text Comprehension, Creative Language Generation, Common Sense Reasoning, Mathematical Reasoning, Code Programming and Application Scheduling. We provide some example thought-templates in Section A.1. This classification of thought-templates facilitates template retrieval for finding the most suitable solutions to different problems. We denote a thought-template, its description and its corresponding category as $(T_{i},D_{T_{i}},C_{k})$, where $i$ denotes the index of the thought-template, $k\in\mathbb{Z}^{+}$ and $1\leq k\leq 6$ (i.e., $C_{k}$ is one of the six categories), and $D_{T_{i}}$ is the description of the thought-template.
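As an illustration only, a meta-buffer entry $(T_{i},D_{T_{i}},C_{k})$ could be represented with a small data structure such as the following sketch; the field names and the in-memory list are our assumptions rather than part of the released implementation.

```python
from dataclasses import dataclass

# The six template categories used by BoT.
CATEGORIES = (
    "Text Comprehension", "Creative Language Generation", "Common Sense Reasoning",
    "Mathematical Reasoning", "Code Programming", "Application Scheduling",
)

@dataclass
class ThoughtTemplate:
    """One meta-buffer entry (T_i, D_{T_i}, C_k); field names are our own."""
    template: str     # T_i: the high-level, reusable solution guideline
    description: str  # D_{T_i}: short description used for retrieval
    category: str     # C_k: one of the six categories above

meta_buffer: list[ThoughtTemplate] = []  # a minimal in-memory meta-buffer
```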
Template Retrieval
For each task, our BoT retrieves the thought-template $T_{j}$ that is most similar to the distilled problem $x_{d}$ by calculating the embedding similarity between each description $D_{T_{i}}$ and $x_{d}$. The retrieval process can be formulated as:
$$
j=\text{argmax}_{i}\,\text{Sim}\big(f(x_{d}),\{f(D_{T_{i}})\}_{i=1}^{N}\big),\quad\text{where}\quad\max_{i}\,\text{Sim}\big(f(x_{d}),f(D_{T_{i}})\big)\geq\delta, \tag{2}
$$
where $N$ is the size of the meta-buffer, $f(\cdot)$ is a standard text-embedding model, and $T_{j}$ denotes the retrieved thought-template. We set a threshold $\delta$ (values between 0.5 and 0.7 are recommended) to determine whether the current task is new: if $\max_{i}\,\text{Sim}(f(x_{d}),f(D_{T_{i}}))<\delta$, we identify the task $x$ as a new task.
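A minimal retrieval sketch following Eq. (2) is given below; the hashed bag-of-words `embed` is only a toy stand-in for the text-embedding model $f(\cdot)$, and the threshold value and helper names are illustrative assumptions.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for the text-embedding model f(.): hashed bag-of-words.
    In practice this would be a real sentence-embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieve_template(x_d: str, meta_buffer, delta: float = 0.6):
    """Return the most similar thought-template T_j, or None if the task is new (Eq. (2))."""
    if not meta_buffer:
        return None
    query = embed(x_d)
    sims = [cosine_sim(query, embed(t.description)) for t in meta_buffer]
    j = int(np.argmax(sims))
    return meta_buffer[j] if sims[j] >= delta else None  # below delta => new task
```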
Instantiated Reasoning
For each specific task, we consider two situations for instantiated reasoning, depending on whether the current task is new. The first situation is that we successfully retrieve a thought-template $T_{j}$ for the task. In this case, as presented in Figure 2, our thought-augmented reasoning is adaptively instantiated into suitable reasoning structures with our designed instantiation prompt (Section A.3). For example, in a Checkmate-in-One problem, we instantiate a template for updating the chess board state to solve the problem step by step. We then conduct instantiated reasoning for task $x$ using the distilled information $x_{d}$ and the retrieved template $T_{j}$, and produce its solution $S_{x}$ as:
$$
S_{x}=LLM_{\text{instantiation}}(x_{d},T_{j}), \tag{3}
$$
where $LLM_{\text{instantiation}}$ denotes the instantiated reasoner with an LLM.
In the second situation, the task is identified as a new task. To enable proper instantiated reasoning, we prepare three general coarse-grained thought-templates for utilization. Based on the distilled task information $x_{d}$, our BoT automatically assigns a suitable thought-template to the reasoning process. The detailed pre-defined thought-templates are included in Section A.3.
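The two cases could be wired together roughly as in the sketch below, reusing the hypothetical helpers from the earlier sketches; `INSTANTIATION_PROMPT` abbreviates the instantiation prompt of Section A.3, and the single fallback template stands in for the three coarse-grained templates.

```python
# Sketch of Eq. (3): S_x = LLM_instantiation(x_d, T_j), with a fallback to a
# general coarse-grained template when the task is identified as new.
# INSTANTIATION_PROMPT and FALLBACK_TEMPLATE are illustrative placeholders.

INSTANTIATION_PROMPT = (
    "Distilled problem:\n{x_d}\n\n"
    "Thought-template:\n{template}\n\n"
    "Instantiate the template into a concrete reasoning structure and solve the problem."
)

FALLBACK_TEMPLATE = "Analyze the problem, plan the solution steps, then execute and verify them."

def instantiated_reasoning(x_d: str, retrieved_template, llm_call) -> str:
    """Produce the solution S_x from the distilled information and the retrieved template."""
    template_text = (retrieved_template.template if retrieved_template is not None
                     else FALLBACK_TEMPLATE)
    return llm_call(INSTANTIATION_PROMPT.format(x_d=x_d, template=template_text))
```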
3.3 Buffer Manager
We propose buffer-manager to summarize the high-level guidelines and thoughts that are gained from each problem-solving process. It can generalize each specific solution to more problems, storing the critical distilled knowledge in the form of thought-templates within the meta buffer. In contrast to methods that temporarily generate exemplars or instructions for each problem, our buffer-manager can ensure permanent advancements in accuracy, efficiency, and robustness for LLM-based reasoning.
Template Distillation
To extract a general thought-template, we propose a three-step approach: (1) Core task summarization: identifying and describing the basic type and core challenges of the problem; (2) Solution steps description: summarizing the general steps for solving the problem; (3) General answering template: based on the above analysis, proposing a solution template or approach that can be widely applied to similar problems. Additionally, to boost the generalization ability and stability of template distillation, we carefully design two types of in-context examples for generating thought-templates: in-task and cross-task examples. Cross-task means we choose a template distilled from one task to tackle problems from other tasks, such as addressing a mathematical problem with a code-related thought-template. The new template distilled from input task $x$ can be denoted as:
$$
T_{new}=LLM_{\text{distill}}(x_{d},S_{x}), \tag{4}
$$
where $LLM_{\text{distill}}$ is the LLM-based template distiller initialized with the following prompt:
**Prompt for Template Distillation:**

User: [Problem Description] + [Solution Steps or Code]

To extract and summarize the high-level paradigms and general approaches for solving such problems, please follow these steps in your response:

1. Core task summarization: Identify and describe the basic type and core challenges of the problem, such as classifying it as a mathematical problem (e.g., solving a quadratic equation), a data structure problem (e.g., array sorting), an algorithm problem (e.g., search algorithms), etc. And analyze the most efficient way to solve the problem.
1. Solution Steps Description: Outline the general solution steps, including how to define the problem, determine variables, list key equations or constraints, choose appropriate solving strategies and methods, and how to verify the correctness of the results.
1. General Answer Template: Based on the above analysis, propose a template or approach that can be widely applied to this type of problem, including possible variables, functions, class definitions, etc. If it is a programming problem, provide a set of base classes and interfaces that can be used to construct solutions to specific problems.

Please ensure that your response is highly concise and structured, so that specific solutions can be transformed into generalizable methods.

[Optional] Here are some exemplars of the thought-template: (Choose cross-task or in-task exemplars based on the analysis of the Core task summarization.)
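In code, the distillation step of Eq. (4) might look roughly like the sketch below; `TEMPLATE_DISTILLATION_PROMPT` abbreviates the prompt above, and the way the raw output is split into template, description, and category is our assumption.

```python
# Sketch of Eq. (4): T_new = LLM_distill(x_d, S_x).
# TEMPLATE_DISTILLATION_PROMPT abbreviates the prompt shown above; how the raw
# output is split into template/description/category is an assumption of this sketch.

TEMPLATE_DISTILLATION_PROMPT = (
    "[Problem Description]\n{x_d}\n\n[Solution Steps or Code]\n{solution}\n\n"
    "Summarize (1) the core task, (2) the general solution steps, and "
    "(3) a general answer template applicable to this type of problem."
)

def distill_template(x_d: str, solution: str, llm_call, category: str = "Mathematical Reasoning"):
    """Return a new ThoughtTemplate distilled from one problem-solving process."""
    raw = llm_call(TEMPLATE_DISTILLATION_PROMPT.format(x_d=x_d, solution=solution))
    # Here we simply reuse the distilled problem as the template description.
    return ThoughtTemplate(template=raw, description=x_d, category=category)
```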
Dynamic Update of Meta-Buffer
After template distillation, we need to decide whether the distilled template should be added to the meta-buffer. If we initialize an empty meta-buffer or encounter a problem without a proper thought-template, the distilled thought-template is directly stored in the meta-buffer. If we solve a problem with a retrieved thought-template, new insights may arise during the instantiation of that thought-template. Therefore, to avoid redundancy in the meta-buffer while retaining newly generated informative thoughts, we calculate the similarity between the embedding vectors of $D_{T_{new}}$ and $\{D_{T_{i}}\}_{i=1}^{N}$, and update the meta-buffer only if the following condition holds:
$$
\max_{i}\,\text{Sim}\big(f(D_{T_{new}}),f(D_{T_{i}})\big)<\delta. \tag{5}
$$
Otherwise, the meta-buffer already possesses the knowledge necessary to solve this kind of task, and no update is performed. Our dynamic update strategy effectively reduces the computational burden of template retrieval while keeping the meta-buffer lightweight. We further conduct an ablation study of this strategy in Section 6.
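A minimal sketch of the update rule in Eq. (5), reusing the hypothetical `embed` and `cosine_sim` helpers from the retrieval sketch:

```python
def maybe_update_meta_buffer(new_template, meta_buffer, delta: float = 0.6) -> bool:
    """Insert T_new only if no stored template is sufficiently similar (Eq. (5))."""
    if meta_buffer:
        new_vec = embed(new_template.description)
        max_sim = max(cosine_sim(new_vec, embed(t.description)) for t in meta_buffer)
        if max_sim >= delta:
            return False  # the meta-buffer already covers this kind of task
    meta_buffer.append(new_template)
    return True
```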
4 Experiments
Datasets and Tasks
To evaluate the efficacy of our proposed Buffer of Thoughts and compare it with previous methods, we consider a diverse set of tasks and datasets that require varying degrees of mathematical and algorithmic reasoning, domain-specific knowledge, and literary creativity: (a) the Game of 24 from ToT [14], where the objective is to form an arithmetic expression that equals 24 using each of four given numbers exactly once; (b) three BIG-Bench Hard (BBH) [35] tasks: Geometric Shapes, Multi-Step Arithmetic Two, and Word Sorting; (c) three reasoning tasks obtained directly from the BIG-Bench suite [50]: Checkmate-in-One, Penguins (answering questions about penguins' attributes based on a given table and additional natural language information), and Date Understanding (inferring dates from natural language descriptions, performing arithmetic operations on dates, and utilizing global knowledge such as the number of days in February); (d) Python Programming Puzzles (P3) [51, 52], a collection of challenging programming puzzles written in Python with varying difficulty levels; (e) Multilingual Grade School Math (MGSM) [33], a multilingual version of the GSM8K dataset [53] featuring translations of a subset of examples into ten typologically diverse languages, including Bengali, Japanese, and Swahili; (f) Shakespearean Sonnet Writing from meta-prompting [15], a novel task where the goal is to write a sonnet following the strict rhyme scheme "ABAB CDCD EFEF GG" and incorporating three provided words verbatim.
Implementation and Baselines
For fair comparison with previous methods, we use GPT-4 as the base model of our BoT in both the main experiments and the ablation study (Section 6). We also use Llama3-8B and Llama3-70B in our analysis (Section 5), running on an NVIDIA A100-PCIE-40GB GPU. We compare our Buffer of Thoughts with the following prompting methods: 1. Standard Prompting: This is our most basic baseline, where an LLM is asked to generate a response directly from the input query, without any specific guiding input-output examples or additional instructions beyond the task description included in the query.
2. Single-query Method: This includes Zero-shot CoT [8] and PAL [10], which use the LLM to analyze natural language problems and generate intermediate reasoning steps. We also include Expert Prompting [9], which creates an expert identity tailored to the specific context of the input query, and then integrates this expert profile into the input to generate a well-informed response.
3. Multi-query Method: This includes ToT [14] and GoT [17], which enable LLMs to make deliberate decisions by considering multiple reasoning paths and self-evaluating choices to determine the next course of action. These methods also allow for looking ahead or backtracking when necessary to make global decisions. Additionally, we include Meta Prompting [15], which employs an effective scaffolding technique designed to enhance the functionality of LLMs.
Table 1: Comparing BoT with previous methods across various tasks. We denote the best score in blue, and the second-best score in green. Our BoT significantly outperforms other methods on all tasks, especially on general reasoning problems.
| Task | GPT-4 [3] | GPT-4+CoT [8] | Expert-Prompting [9] | PAL [10] | ToT [14] | GoT [17] | Meta-Prompting [15] | BoT (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Game of 24 | 3.0 | 11.0 | 3.0 | 64.0 | 74.0 | 73.2 | 67.0 | 82.4 |
| MGSM (avg) | 84.4 | 85.5 | 85.0 | 72.0 | 86.4 | 87.0 | 84.8 | 89.2 |
| Multi-Step Arithmetic | 84.0 | 83.2 | 83.2 | 87.4 | 88.2 | 89.2 | 90.0 | 99.8 |
| Word Sorting | 80.4 | 83.6 | 85.2 | 93.2 | 96.4 | 98.4 | 99.6 | 100.0 |
| Python Puzzles | 31.1 | 36.3 | 33.8 | 47.3 | 43.5 | 41.9 | 45.8 | 52.4 |
| Geometric Shapes | 52.6 | 69.2 | 55.2 | 51.2 | 56.8 | 54.2 | 78.2 | 93.6 |
| Checkmate-in-One | 36.4 | 32.8 | 39.6 | 10.8 | 49.2 | 51.4 | 57.2 | 86.4 |
| Date Understanding | 68.4 | 69.6 | 68.4 | 76.2 | 78.6 | 77.4 | 79.2 | 88.2 |
| Penguins | 71.1 | 73.6 | 75.8 | 93.3 | 84.2 | 85.4 | 88.6 | 94.7 |
| Sonnet Writing | 62.0 | 71.2 | 74.0 | 36.2 | 68.4 | 62.8 | 79.6 | 80.0 |
4.1 BoT Achieves Better Accuracy, Efficiency and Robustness
Reasoning Accuracy
As shown in Table 1, our BoT consistently outperforms all previous prompting methods across multiple kinds of challenging benchmarks, particularly on complicated reasoning tasks such as Game of 24 and Checkmate-in-One. Taking GPT-4 as a baseline, our method achieves an astonishing 79.4% accuracy improvement on Game of 24, and compared to ToT, which performs well on this task, we still achieve an 8.4% accuracy improvement. Moreover, compared to the recent Meta-Prompting method [15], we see significant accuracy improvements: 23% on Game of 24, 20% on Geometric Shapes and 51% on Checkmate-in-One. Existing methods need complex, iterative, and heuristic search strategies to address these problems on a case-by-case basis. Conversely, our BoT leverages the historical insights and informative guidelines from thought-templates, and further adaptively instantiates a more optimal reasoning structure for addressing these complex problems.
<details>
<summary>x3.png Details</summary>

### Visual Description
## Bar Chart: Comparison of the inference time
### Overview
The image is a bar chart comparing the logarithmic inference time (in seconds) of different methods ("Expert", "PAL", "ToT", "Meta-prompting", and "Ours") across three tasks: "Game of 24", "MGSM", and "Checkmate-in-One".
### Components/Axes
* **Title:** Comparison of the inference time
* **X-axis:** Categorical axis with three categories: "Game of 24", "MGSM", and "Checkmate-in-One".
* **Y-axis:** Numerical axis labeled "Logarithmic time (s)", ranging from 0 to 10, with tick marks at every integer value.
* **Legend:** Located at the top of the chart, indicating the color-coded methods:
* Blue: "Expert"
* Orange: "PAL"
* Gray: "ToT"
* Yellow: "Meta-prompting"
* Light Blue: "Ours"
### Detailed Analysis
Here's a breakdown of the inference time for each method and task:
* **Game of 24:**
* Expert (Blue): 4.64 s
* PAL (Orange): 5.5 s
* ToT (Gray): 8.73 s
* Meta-prompting (Yellow): 8.47 s
* Ours (Light Blue): 5.17 s
* **MGSM:**
* Expert (Blue): 4.16 s
* PAL (Orange): 4.81 s
* ToT (Gray): 8.34 s
* Meta-prompting (Yellow): 8.04 s
* Ours (Light Blue): 5 s
* **Checkmate-in-One:**
* Expert (Blue): 5 s
* PAL (Orange): 5.21 s
* ToT (Gray): 9.03 s
* Meta-prompting (Yellow): 8.43 s
* Ours (Light Blue): 6.39 s
### Key Observations
* "ToT" (Gray) and "Meta-prompting" (Yellow) consistently exhibit the highest inference times across all three tasks.
* "Expert" (Blue) generally has the lowest inference time for "Game of 24" and "MGSM", but "PAL" (Orange) is slightly lower for "Checkmate-in-One".
* "Ours" (Light Blue) shows a moderate inference time, generally lower than "ToT" and "Meta-prompting" but higher than "Expert" and "PAL".
### Interpretation
The bar chart provides a comparative analysis of the inference times for different methods across three tasks. The data suggests that "ToT" and "Meta-prompting" are computationally more expensive than "Expert" and "PAL". The "Ours" method appears to offer a compromise between the two extremes. The specific task also influences the inference time, as evidenced by the varying performance of each method across "Game of 24", "MGSM", and "Checkmate-in-One". The chart highlights the trade-offs between different approaches in terms of computational efficiency.
</details>
Figure 3: Comparison of logarithmic inference time between our Buffer of Thoughts and Expert-prompting [9], PAL [10], ToT [14], and Meta-prompting [15] across different benchmarks.
<details>
<summary>x4.png Details</summary>

### Visual Description
## Bar Chart: Success Rate
### Overview
The image is a bar chart comparing the success rates of five different methods (GPT4, Expert, PAL, ToT, and Ours) across three tasks (Game of 24, MGSM, and Checkmate-in-One) and their average. The y-axis represents the average accuracy in percentage, ranging from 0 to 100.
### Components/Axes
* **Title:** Success rate
* **Y-axis:** Average accuracy (%) with scale from 0 to 100 in increments of 10.
* **X-axis:** Categories: Game of 24, MGSM, Checkmate-in-One, and Average.
* **Legend:** Located at the top-right of the chart.
* GPT4 (Blue)
* Expert (Orange)
* PAL (Gray)
* ToT (Yellow)
* Ours (Light Blue)
### Detailed Analysis
Here's a breakdown of the success rates for each method across the tasks:
* **Game of 24:**
* GPT4 (Blue): 27%
* Expert (Orange): 36%
* PAL (Gray): 61%
* ToT (Yellow): 71%
* Ours (Light Blue): 98%
* **MGSM:**
* GPT4 (Blue): 85%
* Expert (Orange): 76%
* PAL (Gray): 87%
* ToT (Yellow): 84%
* Ours (Light Blue): 96.8%
* **Checkmate-in-One:**
* GPT4 (Blue): 48.2%
* Expert (Orange): 53.4%
* PAL (Gray): 36.4%
* ToT (Yellow): 78.4%
* Ours (Light Blue): 93.4%
* **Average:**
* GPT4 (Blue): 67.13%
* Expert (Orange): 71.82%
* PAL (Gray): 70.12%
* ToT (Yellow): 84.57%
* Ours (Light Blue): 95.15%
### Key Observations
* "Ours" consistently achieves the highest success rates across all tasks and the average.
* GPT4 performs the worst on "Game of 24" and "Checkmate-in-One" but shows improvement on "MGSM".
* The "ToT" method shows a strong performance, consistently ranking among the top performers.
* The "Expert" method shows a relatively consistent performance across all tasks.
* The "PAL" method shows a relatively consistent performance across all tasks.
### Interpretation
The chart demonstrates a comparative analysis of different methods in terms of success rate across various tasks. The "Ours" method significantly outperforms the other methods, suggesting its superior effectiveness in these tasks. The performance variation across tasks highlights the strengths and weaknesses of each method in different problem-solving scenarios. The average success rates provide an overall performance indicator, further emphasizing the superiority of the "Ours" method.
</details>
Figure 4: Comparison of reasoning robustness between our Buffer of Thoughts and GPT-4 [3], Expert-prompting [9], PAL [10], and ToT [14] across different benchmarks.
Reasoning Efficiency
In addition to the significant improvements in accuracy, our BoT, though a multi-query method, achieves reasoning time comparable to single-query methods across various tasks, while remaining considerably faster than conventional multi-query methods like ToT [14], as shown in Figure 3. For example, in Game of 24, both single-query and multi-query methods necessitate iterative and heuristic searches to identify feasible solutions. This process is particularly time-consuming and inefficient, especially for multi-query methods, which involve multi-query search and backtracking phases. In contrast, our BoT directly retrieves a thought-template in code format, from which a program is instantiated to traverse combinations of numbers and operators, eliminating the need to build the reasoning structure from scratch. This allows the problem to be solved with just one query after invoking the problem-distiller, significantly reducing the time required for complex reasoning. Notably, our BoT requires only 12% of the cost of multi-query methods (e.g., Tree of Thoughts and meta-prompting) on average.
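For intuition, an instantiated code-format template for Game of 24 could reduce to a short exhaustive search over numbers, operators, and bracketings, similar to the sketch below (our illustration of the idea, not the paper's exact template):

```python
from itertools import permutations, product

def solve_game_of_24(nums, target=24.0, eps=1e-6):
    """Exhaustively search orderings, operators, and bracketings of four numbers."""
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product("+-*/", repeat=3):
            # All five bracketings of a o1 b o2 c o3 d.
            candidates = (
                f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                f"({a}{o1}({b}{o2}{c})){o3}{d}",
                f"({a}{o1}{b}){o2}({c}{o3}{d})",
                f"{a}{o1}(({b}{o2}{c}){o3}{d})",
                f"{a}{o1}({b}{o2}({c}{o3}{d}))",
            )
            for expr in candidates:
                try:
                    if abs(eval(expr) - target) < eps:
                        return expr
                except ZeroDivisionError:
                    continue
    return None

print(solve_game_of_24([4, 7, 8, 8]))  # prints an expression that evaluates to 24
```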
Reasoning Robustness
To better evaluate our BoT, we devise a new evaluation metric, success rate, to assess reasoning robustness. We randomly sample 1000 examples from various benchmarks as a test subset and evaluate different methods on this subset. As shown in Figure 4, we repeat this evaluation process 10 times and take the average accuracy as the success rate of each method on each benchmark. Compared with other methods, our BoT consistently maintains a higher success rate across various tasks, surpassing the second-best method by 10% in average success rate. We attribute this outstanding robustness to the strong generalization ability of our distilled thought-templates during reasoning across different tasks. By offering high-level thoughts from suitable thought-templates, the stability of our method across different tasks is greatly enhanced.
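Concretely, the success rate could be computed as in the small sketch below; the subset size, number of rounds, and the `solve_fn` evaluation interface are placeholders following the description above.

```python
import random

def success_rate(benchmark, solve_fn, n_examples=1000, n_rounds=10):
    """Average accuracy over repeated evaluations on a fixed random subset.
    `benchmark` is assumed to be a list of {"question": ..., "answer": ...} dicts."""
    subset = random.sample(benchmark, n_examples)
    accuracies = []
    for _ in range(n_rounds):
        correct = sum(solve_fn(ex["question"]) == ex["answer"] for ex in subset)
        accuracies.append(correct / len(subset))
    return sum(accuracies) / len(accuracies)
```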
5 Model Analysis
Distribution Analysis of Thought-Templates
As depicted in the left panel of Figure 5, we select six different benchmarks and sample 100 distinct tasks from each. We update the meta-buffer from scratch and, after completing all sampled tasks, report the number of derived thought-templates. We observe that our BoT generates a greater number of thought-templates for MGSM, whose tasks contain more diverse scenarios. For tasks with relatively simple and uniform requirements, such as Checkmate-in-One and Penguins, BoT produces a smaller number of fixed thought-templates tailored to those specific problems. This distribution of templates indicates that our BoT can effectively discover appropriate thought-templates for different benchmarks.
<details>
<summary>x5.png Details</summary>

### Visual Description
## Chart Type: Pie Chart
### Overview
The image is a pie chart titled "Template distribution across different tasks". It shows the distribution of templates across six different tasks: Checkmate-in-One, Python Puzzles, Sonnet writing, Table of Penguins, Date understanding, and MGSM. Each slice of the pie represents the proportion of templates used for a specific task, with the corresponding numerical value displayed next to the task name.
### Components/Axes
* **Title:** Template distribution across different tasks
* **Categories:**
* Checkmate-in-One
* Python Puzzles
* Sonnet writing
* Table of Penguins
* Date understanding
* MGSM
* **Legend:** Located in the bottom-right corner, the legend maps each task to a specific color in the pie chart.
* Sonnet writing (Dark Blue)
* Table of Penguins (Light Green)
* Date understanding (Yellow)
* MGSM (Red)
* Python Puzzles (Light Blue)
* Checkmate-in-One (Dark Green)
### Detailed Analysis
The pie chart is divided into six slices, each representing a different task. The size of each slice corresponds to the proportion of templates used for that task. The numerical values associated with each task are as follows:
* **MGSM (Red):** 78
* **Python Puzzles (Light Blue):** 37
* **Sonnet writing (Dark Blue):** 37
* **Date understanding (Yellow):** 14
* **Table of Penguins (Light Green):** 8
* **Checkmate-in-One (Dark Green):** 4
### Key Observations
* MGSM has the largest share of the template distribution, with a value of 78.
* Python Puzzles and Sonnet writing have the same distribution, with a value of 37 each.
* Checkmate-in-One has the smallest share of the template distribution, with a value of 4.
### Interpretation
The pie chart illustrates the distribution of templates across different tasks. MGSM utilizes the most templates, suggesting it may be a more complex or frequently performed task. Checkmate-in-One uses the fewest templates, potentially indicating it is a simpler or less common task. The equal distribution between Python Puzzles and Sonnet writing suggests a similar level of template usage for these two tasks. The data provides insights into the relative complexity or frequency of different tasks based on template usage.
</details>
<details>
<summary>x6.png Details</summary>

### Visual Description
## Pie Chart: Average Time Distribution for Each Part of our BoT
### Overview
The image is a pie chart illustrating the average time distribution for different components of a "BoT". The chart is divided into four sections, each representing a component: problem-distiller, reasoner, meta-buffer, and buffer-manager. The size of each slice corresponds to the percentage of time spent on that component.
### Components/Axes
* **Title:** Average time distribution for each part of our BoT
* **Slices:**
* **problem-distiller:** Blue, 15.6
* **reasoner:** Green, 52.7
* **meta-buffer:** Yellow, 8.9
* **buffer-manager:** Red, 21.3
* **Legend:** Located in the bottom-right corner, mapping colors to components.
* Blue: problem-distiller
* Green: reasoner
* Yellow: meta-buffer
* Red: buffer-manager
### Detailed Analysis
The pie chart shows the proportion of time spent on each component of the BoT. The "reasoner" component takes up the largest portion of time (52.7%), followed by "buffer-manager" (21.3%), "problem-distiller" (15.6%), and "meta-buffer" (8.9%).
* **problem-distiller:** 15.6, represented by the blue slice in the top-right quadrant.
* **reasoner:** 52.7, represented by the green slice, occupying the largest portion of the pie chart in the bottom-right quadrant.
* **meta-buffer:** 8.9, represented by the yellow slice in the bottom-left quadrant.
* **buffer-manager:** 21.3, represented by the red slice in the top-left quadrant.
### Key Observations
* The "reasoner" component consumes the majority of the time, accounting for over half of the total time distribution.
* The "meta-buffer" component consumes the least amount of time.
* The "buffer-manager" consumes a significant portion of time, more than the "problem-distiller".
### Interpretation
The pie chart provides a clear visualization of the time allocation among different components of the BoT. The dominance of the "reasoner" component suggests that the BoT spends a significant amount of time on reasoning tasks. The relatively small proportion of time spent on the "meta-buffer" component might indicate that this component is highly efficient or less frequently used. The data suggests that optimizing the "reasoner" component could lead to the most significant improvements in overall BoT performance. The "buffer-manager" also represents a significant portion of time, so optimizing this component could also yield improvements.
</details>
Figure 5: Distribution analysis of thought-templates and time cost. Left: distribution of thought-templates across tasks. Right: distribution of time cost across BoT components.
Distribution Analysis of Time Cost
As illustrated in the right panel of Figure 5, we measure the average time cost of each component of BoT's reasoning framework across different tasks. The time required for distilling task information and for template retrieval is relatively short, whereas instantiated reasoning takes longer. Overall, considering the complexity of the different components, our BoT achieves a relatively balanced distribution of time cost, demonstrating the efficiency of our framework.
Better Trade-off between Model Size and Performance
As depicted in Figure 6, on Game of 24, word list sorting and Checkmate-in-One, the Llama3-8B and Llama3-70B models [6] alone may produce poor results. However, equipped with our BoT, both models demonstrate a substantial accuracy improvement. Notably, BoT+Llama3-8B has the potential to surpass the standalone Llama3-70B model. Our BoT enables smaller models to exhibit capabilities that approximate or even surpass those of larger models, significantly bridging the gap between their reasoning abilities. Furthermore, it greatly diminishes the inference cost required by large language models when tackling complex problems.
<details>
<summary>x7.png Details</summary>

### Visual Description
## Bar Chart: Trade-off between model size and performance
### Overview
The image is a horizontal bar chart comparing the accuracy (%) of four different language models (BoT+Llama-3-70B, BoT+Llama-3-8B, Llama3-70B, and Llama3-8B) across three tasks: Checkmate-in-One, Word list sorting, and Game of 24. The x-axis represents accuracy (%), ranging from 0 to 100. The y-axis represents the tasks.
### Components/Axes
* **Title:** Trade-off between model size and performance
* **X-axis:** Accuracy (%) with scale from 0 to 100, incrementing by 10.
* **Y-axis:** Tasks (Checkmate-in-One, Word list sorting, Game of 24)
* **Legend:** Located at the top of the chart.
* Yellow: BoT+Llama-3-70B
* Gray: BoT+Llama-3-8B
* Orange: Llama3-70B
* Blue: Llama3-8B
### Detailed Analysis
The chart presents accuracy scores for each model on each task.
* **Checkmate-in-One:**
* BoT+Llama-3-70B (Yellow): 75.6%
* BoT+Llama-3-8B (Gray): 56.7%
* Llama3-70B (Orange): 15%
* Llama3-8B (Blue): 0.8%
* **Word list sorting:**
* BoT+Llama-3-70B (Yellow): 92.3%
* BoT+Llama-3-8B (Gray): 73.4%
* Llama3-70B (Orange): 79%
* Llama3-8B (Blue): 48.4%
* **Game of 24:**
* BoT+Llama-3-70B (Yellow): 78.4%
* BoT+Llama-3-8B (Gray): 73.4%
* Llama3-70B (Orange): 2.4%
* Llama3-8B (Blue): 1.2%
### Key Observations
* BoT+Llama-3-70B (Yellow) consistently achieves the highest accuracy across all three tasks.
* Llama3-8B (Blue) generally has the lowest accuracy, especially on Checkmate-in-One and Game of 24.
* The performance difference between models is most pronounced on the Checkmate-in-One task.
### Interpretation
The chart illustrates the trade-off between model size and performance. The BoT+Llama-3-70B model, presumably the largest, consistently outperforms the other models in terms of accuracy. The Llama3-8B model, likely the smallest, generally exhibits the lowest accuracy. This suggests that increasing model size (and potentially complexity) leads to improved performance on these tasks. However, the specific architecture (BoT+Llama vs. Llama3) also plays a significant role, as evidenced by the differences between the 70B and 8B versions of each architecture. The Checkmate-in-One task appears to be particularly challenging, highlighting the performance differences between the models. The data suggests that for tasks like Checkmate-in-One, model architecture and size are critical for achieving high accuracy.
</details>
Figure 6: We evaluate the trade-off between model size and performance with Llama3-8B and Llama3-70B models on three challenging benchmarks.
6 Ablation Study
Impact of Problem-Distiller
As illustrated in Figure 7, when the problem-distiller is disabled, both Llama3-70B and GPT-4 experience a certain degree of accuracy decline. More complex problems, such as Game of 24 and Checkmate-in-One, show a more significant accuracy reduction, whereas relatively simpler problems like word list sorting and MGSM exhibit smaller decreases. This is because LLMs can more easily extract key information in simpler tasks, making the impact of the problem-distiller less noticeable. In contrast, extracting key information and potential constraints in complex problems is more challenging, making the role of our problem-distiller more prominent, thereby explaining the differences depicted in the figure.
<details>
<summary>x8.png Details</summary>

### Visual Description
## Bar Chart: Ablation study of problem-distiller
### Overview
The image is a bar chart comparing the accuracy (%) of different models (BoT+Llama-3-70B with/without problem-distiller and BoT+GPT-4 with/without problem-distiller) across four tasks: Game of 24, Word list sorting, Checkmate-in-One, and MGSM. The chart visually represents the performance of each model on each task, allowing for a direct comparison of their effectiveness.
### Components/Axes
* **Title:** Ablation study of problem-distiller
* **Y-axis:**
* Label: Accuracy (%)
* Scale: 0 to 100, with tick marks at intervals of 10.
* **X-axis:**
* Categories: Game of 24, Word list sorting, Checkmate-in-One, MGSM
* **Legend:** Located at the top of the chart.
* Blue: BoT+Llama-3-70B (w/o problem-distiller)
* Orange: BoT+Llama-3-70B
* Gray: BoT+GPT-4 (w/o problem-distiller)
* Yellow: BoT+GPT-4
### Detailed Analysis
**Game of 24:**
* BoT+Llama-3-70B (w/o problem-distiller) (Blue): 71.2%
* BoT+Llama-3-70B (Orange): 78.4%
* BoT+GPT-4 (w/o problem-distiller) (Gray): 76.5%
* BoT+GPT-4 (Yellow): 82.4%
**Word list sorting:**
* BoT+Llama-3-70B (w/o problem-distiller) (Blue): 89.5%
* BoT+Llama-3-70B (Orange): 92.3%
* BoT+GPT-4 (w/o problem-distiller) (Gray): 97.3%
* BoT+GPT-4 (Yellow): 99.6%
**Checkmate-in-One:**
* BoT+Llama-3-70B (w/o problem-distiller) (Blue): 64.3%
* BoT+Llama-3-70B (Orange): 75.6%
* BoT+GPT-4 (w/o problem-distiller) (Gray): 78.9%
* BoT+GPT-4 (Yellow): 86.4%
**MGSM:**
* BoT+Llama-3-70B (w/o problem-distiller) (Blue): 85.6%
* BoT+Llama-3-70B (Orange): 86.8%
* BoT+GPT-4 (w/o problem-distiller) (Gray): 87.4%
* BoT+GPT-4 (Yellow): 89.2%
### Key Observations
* The "Word list sorting" task has the highest accuracy across all models.
* The "Checkmate-in-One" task generally has the lowest accuracy across all models.
* BoT+GPT-4 (Yellow) generally outperforms BoT+Llama-3-70B (Blue) on all tasks.
* The problem-distiller seems to improve performance in most cases, as the orange and yellow bars are generally higher than the blue and gray bars, respectively.
### Interpretation
The chart presents an ablation study, which aims to understand the impact of removing a specific component (the "problem-distiller") from the models. The data suggests that the problem-distiller generally improves the accuracy of both BoT+Llama-3-70B and BoT+GPT-4 models across the tested tasks. The BoT+GPT-4 model consistently achieves higher accuracy compared to BoT+Llama-3-70B, indicating that GPT-4 is a more effective base model for these tasks. The "Word list sorting" task appears to be relatively easier for these models, while "Checkmate-in-One" poses a greater challenge. The performance difference between models with and without the problem-distiller highlights the importance of this component in achieving optimal accuracy.
</details>
Figure 7: We conduct ablation study on problem-distiller across four benchmarks, employing Llama3-70B and GPT-4 as the base models.
Impact of Meta-Buffer
As illustrated in Figure 8, when the meta-buffer is disabled, both Llama3-70B and GPT-4 models exhibit a noticeable decline in performance, particularly in benchmarks requiring complex reasoning, such as Game of 24 and Checkmate-in-One. This further underscores the superiority of our meta-buffer in addressing complex problems.
<details>
<summary>x9.png Details</summary>

### Visual Description
## Bar Chart: Ablation study of meta-buffer
### Overview
The image is a bar chart displaying the accuracy (%) of different models on four tasks: Game of 24, Word list sorting, Checkmate-in-One, and MGSM. The models compared are BoT + Llama-3-70B (with and without meta-buffer) and BoT + GPT-4 (with and without meta-buffer).
### Components/Axes
* **Title:** Ablation study of meta-buffer
* **X-axis:** Categorical axis representing the tasks: Game of 24, Word list sorting, Checkmate-in-One, MGSM.
* **Y-axis:** Numerical axis labeled "Accuracy (%)", ranging from 0 to 100 with increments of 10.
* **Legend:** Located at the top of the chart.
* Blue: BoT + Llama-3-70B (w/o meta-buffer)
* Orange: BoT + Llama-3-70B
* Gray: BoT + GPT-4 (w/o meta-buffer)
* Yellow: BoT + GPT-4
### Detailed Analysis
Here's a breakdown of the accuracy for each model on each task:
* **Game of 24:**
* BoT + Llama-3-70B (w/o meta-buffer) (Blue): 65.6%
* BoT + Llama-3-70B (Orange): 78.4%
* BoT + GPT-4 (w/o meta-buffer) (Gray): 75.2%
* BoT + GPT-4 (Yellow): 82.4%
* **Word list sorting:**
* BoT + Llama-3-70B (w/o meta-buffer) (Blue): 81.7%
* BoT + Llama-3-70B (Orange): 92.3%
* BoT + GPT-4 (w/o meta-buffer) (Gray): 95.4%
* BoT + GPT-4 (Yellow): 99.6%
* **Checkmate-in-One:**
* BoT + Llama-3-70B (w/o meta-buffer) (Blue): 27.4%
* BoT + Llama-3-70B (Orange): 75.6%
* BoT + GPT-4 (w/o meta-buffer) (Gray): 56.7%
* BoT + GPT-4 (Yellow): 86.4%
* **MGSM:**
* BoT + Llama-3-70B (w/o meta-buffer) (Blue): 79.6%
* BoT + Llama-3-70B (Orange): 86.8%
* BoT + GPT-4 (w/o meta-buffer) (Gray): 85.4%
* BoT + GPT-4 (Yellow): 89.2%
### Key Observations
* The "Word list sorting" task consistently shows the highest accuracy across all models.
* The "Checkmate-in-One" task has the lowest accuracy for BoT + Llama-3-70B (w/o meta-buffer) compared to other tasks and models.
* For all tasks, the models *with* meta-buffer (orange and yellow) outperform their counterparts *without* meta-buffer (blue and gray).
* BoT+GPT-4 (yellow) generally achieves the highest accuracy among the four models.
### Interpretation
The chart illustrates the impact of the meta-buffer on the performance of BoT models with different language models (Llama-3-70B and GPT-4) across various tasks. The consistent improvement in accuracy when using the meta-buffer suggests its effectiveness in enhancing the models' capabilities. The "Checkmate-in-One" task appears to be particularly challenging for the BoT + Llama-3-70B model without the meta-buffer, indicating a potential area for improvement. The superior performance of BoT + GPT-4 suggests that GPT-4 may be better suited for these tasks or that it benefits more from the meta-buffer.
</details>
Figure 8: We conduct ablation study on meta-buffer across four benchmarks, employing Llama3-70B and GPT-4 as the base models.
<details>
<summary>x10.png Details</summary>

### Visual Description
## Chart: Ablation study of buffer-manager -- Accuracy
### Overview
The image is a line chart comparing the accuracy of two models, "BoT+GPT4" and "BoT+GPT4 (w/o buffer-manager)", across four rounds. The y-axis represents accuracy in percentage, ranging from 0 to 100. The x-axis represents the rounds, labeled from Round 1 to Round 4.
### Components/Axes
* **Title:** Ablation study of buffer-manager -- Accuracy
* **X-axis:**
* Label: Round
* Categories: Round 1, Round 2, Round 3, Round 4
* **Y-axis:**
* Label: Accuracy (%)
* Scale: 0 to 100, with increments of 10.
* **Legend:** Located at the bottom of the chart.
* Blue line: BoT+GPT4
* Orange line: BoT+GPT4 (w/o buffer-manager)
### Detailed Analysis
* **BoT+GPT4 (Blue Line):**
* Trend: The accuracy increases from Round 1 to Round 3, then plateaus from Round 3 to Round 4.
* Round 1: 56.8%
* Round 2: 78.5%
* Round 3: 87.4%
* Round 4: 88.5%
* **BoT+GPT4 (w/o buffer-manager) (Orange Line):**
* Trend: The accuracy is relatively flat, with a slight increase from Round 1 to Round 3, then a slight decrease from Round 3 to Round 4.
* Round 1: 52.8%
* Round 2: 53.6%
* Round 3: 57.4%
* Round 4: 54.1%
### Key Observations
* The "BoT+GPT4" model consistently outperforms the "BoT+GPT4 (w/o buffer-manager)" model in terms of accuracy across all rounds.
* The "BoT+GPT4" model shows a significant improvement in accuracy from Round 1 to Round 3, indicating that the model benefits from the rounds.
* The "BoT+GPT4 (w/o buffer-manager)" model shows minimal improvement across the rounds, suggesting that the buffer-manager plays a crucial role in the performance improvement of the "BoT+GPT4" model.
### Interpretation
The data suggests that the buffer-manager component significantly contributes to the accuracy of the "BoT+GPT4" model. The ablation study, by removing the buffer-manager, demonstrates a clear performance decrease. The "BoT+GPT4" model's accuracy increases substantially over the rounds, while the model without the buffer-manager remains relatively stable, indicating that the buffer-manager is essential for leveraging the iterative rounds to improve performance. The small fluctuation in the orange line is most likely sampling variance across rounds; the buffer-manager is clearly the dominant factor.
</details>
Figure 9: We conduct ablation study on buffer-manager regarding reasoning accuracy across four tasks, employing Llama3-70B and GPT-4 as the base models.
Impact of Buffer-Manager
In this ablation study, we divide the entire process into four rounds. In each round, we randomly sample 50 questions from each benchmark and conduct reasoning; in the subsequent round, we randomly sample another 50 questions from each benchmark. As depicted in Figure 9, as the number of rounds increases, the model with the buffer-manager continually expands the meta-buffer while also utilizing the thought-templates obtained from previously solved problems to help address subsequent similar problems. Consequently, the accuracy of BoT steadily improves with each round, whereas the model without the buffer-manager fails to exhibit an upward trend. We also measure the reasoning time, as depicted in Figure 10: as the number of rounds increases, the model with the buffer-manager shows a continual improvement in reasoning efficiency. This is because, with the continual expansion of the meta-buffer, the likelihood of retrieving a suitable thought-template also increases, so the model can avoid constructing reasoning structures from scratch, thereby enhancing inference efficiency.
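To make the protocol concrete, the sketch below shows one way the round-based evaluation and the growing meta-buffer could be wired together; the callables `sample`, `retrieve`, `solve`, and `distill` are hypothetical placeholders, not part of the released BoT codebase.

```python
# A minimal sketch of the four-round buffer-manager ablation, assuming the
# caller supplies sample/retrieve/solve/distill (hypothetical placeholders,
# not the authors' actual API).
def run_buffer_ablation(benchmarks, sample, retrieve, solve, distill,
                        rounds=4, per_round=50, use_buffer_manager=True):
    meta_buffer = []            # accumulated thought-templates
    accuracy_per_round = []
    for _ in range(rounds):
        correct = total = 0
        for bench in benchmarks:
            for question in sample(bench, per_round):       # 50 fresh questions per benchmark
                template = retrieve(meta_buffer, question)   # may return None early on
                answer, solved = solve(question, template)
                correct += int(solved)
                total += 1
                if use_buffer_manager:
                    # the buffer-manager distills a thought-template from the
                    # solved problem and stores it for later retrieval
                    meta_buffer.append(distill(question, answer))
        accuracy_per_round.append(100.0 * correct / total)
    return accuracy_per_round
```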
<details>
<summary>x11.png Details</summary>

### Visual Description
## Line Chart: Ablation study of buffer-manager -- Time
### Overview
The image is a line chart comparing the average inference time per problem (in seconds) over four rounds for two configurations: "BoT+GPT4" and "BoT+GPT4 (w/o buffer-manager)". The x-axis represents the round number (1 to 4), and the y-axis represents the average inference time in seconds, ranging from 0 to 350. The chart aims to show the impact of the buffer-manager on the inference time.
### Components/Axes
* **Title:** Ablation study of buffer-manager -- Time
* **X-axis:**
* Label: Round
* Ticks: Round 1, Round 2, Round 3, Round 4
* **Y-axis:**
* Label: Average inference time per problem (s)
* Scale: 0 to 350, with increments of 50 (0, 50, 100, 150, 200, 250, 300, 350)
* **Legend:** Located at the bottom of the chart.
* Blue line: BoT+GPT4
* Orange line: BoT+GPT4 (w/o buffer-manager)
### Detailed Analysis
* **BoT+GPT4 (Blue Line):**
* Trend: The average inference time decreases over the four rounds.
* Round 1: 297 seconds
* Round 2: 205 seconds
* Round 3: 128 seconds
* Round 4: 78.5 seconds
* **BoT+GPT4 (w/o buffer-manager) (Orange Line):**
* Trend: The average inference time starts high, increases slightly, then decreases slightly, and then increases slightly again.
* Round 1: 308 seconds
* Round 2: 317 seconds
* Round 3: 304 seconds
* Round 4: 306 seconds
### Key Observations
* The "BoT+GPT4" configuration consistently has a lower average inference time compared to the "BoT+GPT4 (w/o buffer-manager)" configuration across all rounds.
* The "BoT+GPT4" configuration shows a significant decrease in inference time from Round 1 to Round 4, indicating a learning or optimization effect.
* The "BoT+GPT4 (w/o buffer-manager)" configuration shows a relatively stable, but higher, inference time across all rounds.
### Interpretation
The data suggests that the buffer-manager significantly reduces the inference time of the "BoT+GPT4" configuration, especially as the number of rounds increases. The decreasing inference time for "BoT+GPT4" indicates that the buffer-manager keeps accumulating reusable thought-templates over the rounds. The relatively constant inference time for "BoT+GPT4 (w/o buffer-manager)" confirms that the buffer-manager drives this speed-up: by retrieving stored thought-templates, the model avoids constructing reasoning structures from scratch and thus skips redundant computation.
</details>
Figure 10: We conduct ablation study on buffer-manager regarding reasoning efficiency across four tasks, employing Llama3-70B and GPT-4 as the base models.
7 Discussion
Limitations and Future Directions
Despite our method's significant improvement in accuracy while maintaining reasoning efficiency and robustness, its gains are limited on problems that require human-like creativity, since such problems often do not rely on a specific thought-template. Besides, if BoT initializes the meta-buffer with a weaker model, the quality of the derived thought-templates may be suboptimal due to the weaker model's limited reasoning ability and instruction-following capability. Overall, our BoT opens up a set of future directions: 1. integrating external resources with BoT to build an open-domain system like agent models [54, 55]; 2. making the distillation of thought-templates optimizable, which may significantly enhance their quality for more complex tasks.
Conclusion
In this work, we introduce Buffer of Thoughts, a novel buffered reasoning framework that enables LLMs to leverage pre-accumulated experiences and methodologies from prior tasks as thought-templates stored within a meta-buffer. We further design a buffer-manager to continuously refine the problem-solving process and dynamically distill thought-templates, thereby progressively raising the LLM's reasoning capacity. Our BoT demonstrates SOTA performance on 10 challenging tasks and offers promising prospects for future research and applications.
References
- [1] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., “Language models are few-shot learners,” Advances in neural information processing systems, vol. 33, pp. 1877–1901, 2020.
- [2] R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, et al., “Palm 2 technical report,” arXiv preprint arXiv:2305.10403, 2023.
- [3] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., “Gpt-4 technical report,” arXiv preprint arXiv:2303.08774, 2023.
- [4] Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, and J. Tang, “Glm: General language model pretraining with autoregressive blank infilling,” in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320–335, 2022.
- [5] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. d. l. Casas, E. B. Hanna, F. Bressand, et al., “Mixtral of experts,” arXiv preprint arXiv:2401.04088, 2024.
- [6] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al., “Llama: Open and efficient foundation language models,” arXiv preprint arXiv:2302.13971, 2023.
- [7] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288, 2023.
- [8] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al., “Chain-of-thought prompting elicits reasoning in large language models,” Advances in neural information processing systems, vol. 35, pp. 24824–24837, 2022.
- [9] B. Xu, A. Yang, J. Lin, Q. Wang, C. Zhou, Y. Zhang, and Z. Mao, “Expertprompting: Instructing large language models to be distinguished experts,” arXiv preprint arXiv:2305.14688, 2023.
- [10] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig, “Pal: Program-aided language models,” in International Conference on Machine Learning, pp. 10764–10799, PMLR, 2023.
- [11] X. Wang, J. Wei, D. Schuurmans, Q. V. Le, E. H. Chi, S. Narang, A. Chowdhery, and D. Zhou, “Self-consistency improves chain of thought reasoning in language models,” in The Eleventh International Conference on Learning Representations, 2022.
- [12] M. Yasunaga, X. Chen, Y. Li, P. Pasupat, J. Leskovec, P. Liang, E. H. Chi, and D. Zhou, “Large language models as analogical reasoners,” International Conference on Learning Representations, 2024.
- [13] Z. Zhang, A. Zhang, M. Li, and A. Smola, “Automatic chain of thought prompting in large language models,” in The Eleventh International Conference on Learning Representations, 2022.
- [14] S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao, and K. Narasimhan, “Tree of thoughts: Deliberate problem solving with large language models,” Advances in Neural Information Processing Systems, vol. 36, 2024.
- [15] M. Suzgun and A. T. Kalai, “Meta-prompting: Enhancing language models with task-agnostic scaffolding,” arXiv preprint arXiv:2401.12954, 2024.
- [16] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. V. Le, et al., “Least-to-most prompting enables complex reasoning in large language models,” in The Eleventh International Conference on Learning Representations, 2022.
- [17] M. Besta, N. Blach, A. Kubicek, R. Gerstenberger, M. Podstawski, L. Gianinazzi, J. Gajda, T. Lehmann, H. Niewiadomski, P. Nyczyk, et al., “Graph of thoughts: Solving elaborate problems with large language models,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pp. 17682–17690, 2024.
- [18] A. Asai, S. Min, Z. Zhong, and D. Chen, “Retrieval-based language models and applications,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts), pp. 41–46, 2023.
- [19] G. Mialon, R. Dessi, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Roziere, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz, et al., “Augmented language models: a survey,” Transactions on Machine Learning Research, 2023.
- [20] W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W.-t. Yih, “Replug: Retrieval-augmented black-box language models,” arXiv preprint arXiv:2301.12652, 2023.
- [21] Y. Gao, Y. Xiong, X. Gao, K. Jia, J. Pan, Y. Bi, Y. Dai, J. Sun, and H. Wang, “Retrieval-augmented generation for large language models: A survey,” arXiv preprint arXiv:2312.10997, 2023.
- [22] P. Zhao, H. Zhang, Q. Yu, Z. Wang, Y. Geng, F. Fu, L. Yang, W. Zhang, and B. Cui, “Retrieval-augmented generation for ai-generated content: A survey,” arXiv preprint arXiv:2402.19473, 2024.
- [23] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driessche, J.-B. Lespiau, B. Damoc, A. Clark, et al., “Improving language models by retrieving from trillions of tokens,” in International conference on machine learning, pp. 2206–2240, PMLR, 2022.
- [24] M. Yasunaga, A. Aghajanyan, W. Shi, R. James, J. Leskovec, P. Liang, M. Lewis, L. Zettlemoyer, and W.-T. Yih, “Retrieval-augmented multimodal language modeling,” in International Conference on Machine Learning, pp. 39755–39769, PMLR, 2023.
- [25] G. Izacard, P. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. Dwivedi-Yu, A. Joulin, S. Riedel, and E. Grave, “Atlas: Few-shot learning with retrieval augmented language models,” Journal of Machine Learning Research, vol. 24, no. 251, pp. 1–43, 2023.
- [26] Z. Wang, W. Nie, Z. Qiao, C. Xiao, R. Baraniuk, and A. Anandkumar, “Retrieval-based controllable molecule generation,” in The Eleventh International Conference on Learning Representations, 2022.
- [27] L. Yang, Z. Huang, X. Zhou, M. Xu, W. Zhang, Y. Wang, X. Zheng, W. Yang, R. O. Dror, S. Hong, et al., “Prompt-based 3d molecular diffusion models for structure-based drug design,” 2023.
- [28] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa, “Large language models are zero-shot reasoners,” Advances in neural information processing systems, vol. 35, pp. 22199–22213, 2022.
- [29] O. Press, M. Zhang, S. Min, L. Schmidt, N. A. Smith, and M. Lewis, “Measuring and narrowing the compositionality gap in language models,” in Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 5687–5711, 2023.
- [30] S. Arora, A. Narayan, M. F. Chen, L. Orr, N. Guha, K. Bhatia, I. Chami, and C. Re, “Ask me anything: A simple strategy for prompting language models,” in The Eleventh International Conference on Learning Representations, 2022.
- [31] T. Khot, H. Trivedi, M. Finlayson, Y. Fu, K. Richardson, P. Clark, and A. Sabharwal, “Decomposed prompting: A modular approach for solving complex tasks,” in The Eleventh International Conference on Learning Representations, 2022.
- [32] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, et al., “Emergent abilities of large language models,” Transactions on Machine Learning Research, 2022.
- [33] F. Shi, M. Suzgun, M. Freitag, X. Wang, S. Srivats, S. Vosoughi, H. W. Chung, Y. Tay, S. Ruder, D. Zhou, et al., “Language models are multilingual chain-of-thought reasoners,” in The Eleventh International Conference on Learning Representations, 2022.
- [34] Y. Fu, H. Peng, A. Sabharwal, P. Clark, and T. Khot, “Complexity-based prompting for multi-step reasoning,” in The Eleventh International Conference on Learning Representations, 2022.
- [35] M. Suzgun, N. Scales, N. Schärli, S. Gehrmann, Y. Tay, H. W. Chung, A. Chowdhery, Q. Le, E. Chi, D. Zhou, et al., “Challenging big-bench tasks and whether chain-of-thought can solve them,” in Findings of the Association for Computational Linguistics: ACL 2023, pp. 13003–13051, 2023.
- [36] H. S. Zheng, S. Mishra, X. Chen, H.-T. Cheng, E. H. Chi, Q. V. Le, and D. Zhou, “Take a step back: Evoking reasoning via abstraction in large language models,” arXiv preprint arXiv:2310.06117, 2023.
- [37] P. Zhou, J. Pujara, X. Ren, X. Chen, H.-T. Cheng, Q. V. Le, E. H. Chi, D. Zhou, S. Mishra, and H. S. Zheng, “Self-discover: Large language models self-compose reasoning structures,” arXiv preprint arXiv:2402.03620, 2024.
- [38] W. Chen, X. Ma, X. Wang, and W. W. Cohen, “Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks,” Transactions on Machine Learning Research, 2023.
- [39] X. Ning, Z. Lin, Z. Zhou, Z. Wang, H. Yang, and Y. Wang, “Skeleton-of-thought: Large language models can do parallel decoding,” in The Twelfth International Conference on Learning Representations, 2023.
- [40] Y. Zhang, “Meta prompting for agi systems,” arXiv preprint arXiv:2311.11482, 2023.
- [41] J. Chen, R. Xu, Z. Fu, W. Shi, Z. Li, X. Zhang, C. Sun, L. Li, Y. Xiao, and H. Zhou, “E-kar: A benchmark for rationalizing natural language analogical reasoning,” in Findings of the Association for Computational Linguistics: ACL 2022, pp. 3941–3955, 2022.
- [42] O. Sultan and D. Shahaf, “Life is a circus and we are the clowns: Automatically finding analogies between situations and processes,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3547–3562, 2022.
- [43] N. Zhang, L. Li, X. Chen, X. Liang, S. Deng, and H. Chen, “Multimodal analogical reasoning over knowledge graphs,” in The Eleventh International Conference on Learning Representations, 2022.
- [44] B. Bhavya, J. Xiong, and C. Zhai, “Analogy generation by prompting large language models: A case study of instructgpt,” in Proceedings of the 15th International Conference on Natural Language Generation, pp. 298–312, 2022.
- [45] B. Bhavya, J. Xiong, and C. Zhai, “Cam: A large language model-based creative analogy mining framework,” in Proceedings of the ACM Web Conference 2023, pp. 3903–3914, 2023.
- [46] Z. Zhang, A. Zhang, M. Li, and A. Smola, “Automatic chain of thought prompting in large language models,” in The Eleventh International Conference on Learning Representations, 2022.
- [47] T. Webb, K. J. Holyoak, and H. Lu, “Emergent analogical reasoning in large language models,” Nature Human Behaviour, vol. 7, no. 9, pp. 1526–1541, 2023.
- [48] J. Yu, R. He, and Z. Ying, “Thought propagation: An analogical approach to complex reasoning with large language models,” in International Conference on Learning Representations, 2024.
- [49] T. Feng, P. Han, G. Lin, G. Liu, and J. You, “Thought-retriever: Don’t just retrieve raw data, retrieve thoughts,” in ICLR 2024 Workshop: How Far Are We From AGI.
- [50] BIG-bench authors, “Beyond the imitation game: Quantifying and extrapolating the capabilities of language models,” Transactions on Machine Learning Research, 2023.
- [51] T. Schuster, A. Kalyan, A. Polozov, and A. T. Kalai, “Programming puzzles,” in Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2021.
- [52] P. Haluptzok, M. Bowers, and A. T. Kalai, “Language models can teach themselves to program better,” in The Eleventh International Conference on Learning Representations (ICLR), 2023.
- [53] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and J. Schulman, “Training verifiers to solve math word problems,” arXiv preprint arXiv:2110.14168, 2021.
- [54] G. Chen, S. Dong, Y. Shu, G. Zhang, J. Sesay, B. F. Karlsson, J. Fu, and Y. Shi, “Autoagents: A framework for automatic agent generation,” arXiv preprint arXiv:2309.17288, 2023.
- [55] Q. Wu, G. Bansal, J. Zhang, Y. Wu, S. Zhang, E. Zhu, B. Li, L. Jiang, X. Zhang, and C. Wang, “Autogen: Enabling next-gen llm applications via multi-agent conversation framework,” arXiv preprint arXiv:2308.08155, 2023.
Appendix A Additional Method Details
A.1 Detailed Thought-Templates
Here we show six example thought-templates in six different categories:
A.1.1 Text Comprehension
Task Description: The task involves analyzing a table with various attributes of penguins, such as name, age, height, and weight, and answering questions about these attributes. The table may be updated with new entries, and additional context or comparisons may be provided in natural language.
Solution Description: To accurately answer questions about the penguins' attributes, one must be able to interpret the data presented in tabular form, understand any additional information provided in natural language, and apply logical reasoning to identify the correct attribute based on the question asked.

Thought Template:
- Step 1: Parse the initial table, extracting the header information and each penguin's attributes into a structured format (e.g., a list of dictionaries).
- Step 2: Read and integrate any additional natural language information that updates or adds to the table, ensuring the data remains consistent.
- Step 3: Identify the attribute in question (e.g., oldest penguin, heaviest penguin) and the corresponding column in the table.
- Step 4: Apply logical reasoning to compare the relevant attribute across all entries to find the correct answer (e.g., the highest age for the oldest penguin).
- Step 5: Select the answer from the provided options that matches the result of the logical comparison.
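As a concrete illustration of Steps 1, 3, and 4, the minimal sketch below parses a small comma-separated penguin table into a list of dictionaries and answers superlative questions about it; the table contents and helper names are illustrative examples, not taken verbatim from the benchmark.

```python
# A minimal sketch of the table-parsing template; data and names are illustrative.
def parse_penguin_table(table_text):
    lines = [line.strip() for line in table_text.strip().splitlines() if line.strip()]
    header = lines[0].split(",")
    penguins = []
    for row in lines[1:]:
        entry = dict(zip(header, row.split(",")))
        # cast numeric attributes so they can be compared directly
        for key in ("age", "height (cm)", "weight (kg)"):
            if key in entry:
                entry[key] = float(entry[key])
        penguins.append(entry)
    return penguins

def answer_superlative(penguins, attribute, largest=True):
    pick = max if largest else min
    return pick(penguins, key=lambda p: p[attribute])["name"]

table = """name,age,height (cm),weight (kg)
Louis,7,50,11
Bernard,5,80,13
Vincent,9,60,11
Gwen,8,70,15"""

penguins = parse_penguin_table(table)
print(answer_superlative(penguins, "age"))          # oldest penguin -> Vincent
print(answer_superlative(penguins, "weight (kg)"))  # heaviest penguin -> Gwen
```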
A.1.2 Creative Language Generation
Task Description: The task is to generate a sonnet that adheres to the traditional English sonnet rhyme scheme of "ABAB CDCD EFEF GG" and includes three specific words verbatim in the text.
Solution Description: Writing a sonnet involves crafting 14 lines of poetry that follow a specific rhyme pattern. The lines are typically in iambic pentameter, though flexibility in rhythm can be allowed for creative reasons. The given rhyme scheme dictates the end sounds of each line, ensuring a structured poetic form. Incorporating the three provided words verbatim requires strategic placement within the lines to maintain the poem's coherence and thematic unity.

Thought Template:
- Step 1: Identify the three words that must be included in the sonnet.
- Step 2: Understand the rhyme scheme "ABAB CDCD EFEF GG" and prepare a list of rhyming words that could be used.
- Step 3: Develop a theme or story for the sonnet that can naturally incorporate the three provided words.
- Step 4: Begin drafting the sonnet by writing the first quatrain (four lines) following the "ABAB" rhyme scheme, ensuring one or more of the provided words are included.
- Step 5: Continue with the second quatrain "CDCD," the third quatrain "EFEF," and finally the closing couplet "GG," each time incorporating the provided words as needed.
- Step 6: Review the sonnet for coherence, flow, and adherence to the rhyme scheme, making adjustments as necessary.
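A small sketch of the mechanical checks implied by Steps 1 and 6 is given below; it only verifies line count and verbatim word inclusion (the required words here are hypothetical), since judging rhyme-scheme adherence still requires human or phonetic analysis.

```python
# A minimal sketch of the mechanical constraints: 14 lines and verbatim word
# inclusion. Rhyme-scheme ("ABAB CDCD EFEF GG") adherence is not checked here.
def check_sonnet(sonnet_text, required_words):
    lines = [line for line in sonnet_text.strip().splitlines() if line.strip()]
    text_lower = sonnet_text.lower()
    missing = [w for w in required_words if w.lower() not in text_lower]
    return {"has_14_lines": len(lines) == 14, "missing_words": missing}

# Example usage with a placeholder draft and hypothetical required words:
draft = "\n".join(f"line {i + 1} ..." for i in range(14))
print(check_sonnet(draft, ["lantern", "harvest", "river"]))
```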
A.1.3 Common Sense Reasoning
Task Description: Given a specific date and an event, such as a holiday or historical event, determine the following date.
Solution Description: To determine the next date, we need to consider the structure of the calendar, the number of days in each month, and whether it is a leap year. The number of days in a month is fixed, except that February varies due to leap years. The next date is usually the given date increased by one day; if it is the end of the month, the next date is the first day of the following month, and at the end of the year it is January 1st of the following year.

Thought Template:
- Step 1: Identify the given date's month and day number.
- Step 2: Check whether it is the end of the month; if so, determine the start date of the next month.
- Step 3: If it is not the end of the month, simply add one to the day number.
- Step 4: Pay special attention to the end of the year, ensuring the year increments.
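The calendar bookkeeping in these steps can also be delegated to Python's standard library, as in the minimal sketch below (the example dates are purely illustrative):

```python
# A minimal sketch of the date-increment logic using the standard library;
# timedelta handles month ends, year ends, and leap years automatically.
from datetime import date, timedelta

def next_date(year, month, day):
    return date(year, month, day) + timedelta(days=1)

print(next_date(2024, 2, 28))   # 2024-02-29 (leap year)
print(next_date(2023, 12, 31))  # 2024-01-01 (year rollover)
```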
A.1.4 Mathematical Reasoning
Task Description: Solve a quadratic equation of the form $ax^{2}+bx+c=0$, covering all possible cases.
Solution Description: To solve any quadratic equation of the form $ax^{2}+bx+c=0$, we can follow a general approach based on the method described. Here is the structured template for solving such equations:

Thought Template:
- Step 1: Calculate the Discriminant. Compute the discriminant $D$ using the formula $D=b^{2}-4ac$.
- Step 2: Determine the Nature of the Roots.
  - If $D>0$, the equation has two distinct real roots.
  - If $D=0$, the equation has exactly one real root (also known as a repeated or double root).
  - If $D<0$, the equation has two complex roots.
- Step 3: Compute the Roots.
  - For $D\geq 0$, calculate the roots using the formula $x=\frac{-b\pm\sqrt{D}}{2a}$.
  - For $D<0$, calculate the real and imaginary parts of the complex roots using the formula $x=\frac{-b}{2a}\pm\frac{\sqrt{-D}}{2a}i$, where $i$ is the imaginary unit.
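Below is a minimal Python sketch that follows the template's three discriminant cases; it is an illustration of the procedure, not the authors' implementation.

```python
# A minimal sketch of the discriminant-based procedure above.
import math

def solve_quadratic(a, b, c):
    d = b * b - 4 * a * c                      # Step 1: discriminant
    if d > 0:                                  # two distinct real roots
        r = math.sqrt(d)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))
    if d == 0:                                 # one repeated real root
        return (-b / (2 * a),)
    real = -b / (2 * a)                        # two complex conjugate roots
    imag = math.sqrt(-d) / (2 * a)
    return (complex(real, imag), complex(real, -imag))

print(solve_quadratic(1, -3, 2))   # (2.0, 1.0)
print(solve_quadratic(1, 2, 5))    # ((-1+2j), (-1-2j))
```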
A.1.5 Code Programming
Task Description: Given a list of numbers, use the four basic arithmetic operations (+, -, *, /) to reach a target number.
Thought Template:

Listing 1: Python template

```python
from itertools import permutations, product

def perform_operation(a, b, operation):
    # Define the operation logic (e.g., addition, subtraction, etc.).
    pass

def evaluate_sequence(sequence, operations):
    # Apply operations to the sequence and check if the result meets the criteria.
    pass

def generate_combinations(elements, operations):
    # Generate all possible combinations of elements and operations.
    pass

def format_solution(sequence, operations):
    # Format the sequence and operations into a human-readable string.
    pass

def find_solution(input_elements, target_result):
    # Data Input Handling
    # Validate and preprocess input data if necessary.
    operations = ['+', '-', '*', '/']  # candidate operators for the search
    # Core Algorithm Logic
    for sequence in permutations(input_elements):
        for operation_combination in generate_combinations(sequence, operations):
            try:
                if evaluate_sequence(sequence, operation_combination) == target_result:
                    # Data Output Formatting
                    return format_solution(sequence, operation_combination)
            except Exception:
                # Error Handling
                # Handle specific exceptions that may occur during evaluation.
                continue
    # If no solution is found after all iterations, return a default message.
    return "No solution found"

# Example usage:
input_elements = [1, 7, 10, 3]
target_result = 24
print(find_solution(input_elements, target_result))
```
A.1.6 Application Scheduling
Task Description: Given a sequence of chess moves in SAN, update the chess board state.
Listing 2: Python template

```python
import chess

def find_checkmate_move(moves_san):
    # Initialize a new chess board
    board = chess.Board()
    # Apply the moves to the board
    for move_san in moves_san:
        # Remove move numbers and periods (e.g., "1." or "2.")
        if len(move_san.split('. ')) > 1:
            move_san = move_san.split('. ')[1]
        # Skip empty strings resulting from the removal
        if move_san:
            # Apply each move in SAN format to the board
            move = board.parse_san(move_san)
            board.push(move)
    # Generate all possible legal moves from the current position
    for move in board.legal_moves:
        # Make the move on a copy of the board to test the result
        board_copy = board.copy()
        board_copy.push(move)
        # Check if the move results in a checkmate
        if board_copy.is_checkmate():
            # Return the move that results in checkmate in SAN format
            return board.san(move)
    # Return None if no checkmate-in-one move is found
    return None

# Example usage:
input_moves = '......'
# Check the input format and transform it into a list of SAN moves,
# removing move numbers and periods (e.g., "1." or "2.").
moves_san = input_moves.split()
checkmate_move = find_checkmate_move(moves_san)
print(checkmate_move)
```
A.2 Prompt for Problem Distiller
[Problem Distiller]: As a highly professional and intelligent expert in information distillation, you excel at extracting the essential information needed to solve a problem from the user's input query, and you adeptly transform this extracted information into a format suited to the respective type of issue. Please categorize and extract the crucial information required to solve the problem from the user's input query; the distilled information should include:
1. Key information: Values and information of key variables extracted from the user input, which will be handed over to the respective expert for task resolution, ensuring that all essential information required to solve the problem is provided.
2. Restrictions: The objective of the problem and its corresponding constraints.
3. Distilled task: Extend the problem based on 1 and 2, and summarize a meta-problem that can address the user query and handle more input and output variations. Incorporate the real-world scenario of the extended problem along with the types of key variables and the information constraints of the original problem to restrict the key variables in the extended problem. Then, use the key information from the user's query as input to solve the problem as an example.
A.3 Prompt for Instantiated Reasoning
[Meta Reasoner]: You are a Meta Reasoner who is extremely knowledgeable in all kinds of fields, including Computer Science, Math, Physics, Literature, History, Chemistry, Logical Reasoning, Culture, Language, and more. You are also able to identify a suitable high-level thought for each task. Here are three reasoning structures:
i) Prompt-based structure: performs well on problems such as Common Sense Reasoning and Application Scheduling.
ii) Procedure-based structure: performs well on creative tasks such as Creative Language Generation and Text Comprehension.
iii) Programming-based structure: performs well on Mathematical Reasoning and Code Programming; it can also transform real-world problems into programming problems that can be solved efficiently.
(Reasoning instantiation) Your task is:
1. Deliberately consider the context and the problem within the distilled response from the problem distiller, and use your understanding of the question to find a domain expert who is suitable to solve the problem.
2. Considering the distilled information, choose one reasoning structure for the problem.
3. If a thought-template is provided, directly follow the thought-template to instantiate it for the given problem.