## MATH-SHEPHERD: VERIFY AND REINFORCE LLMS STEP-BY-STEP WITHOUT HUMAN ANNOTATIONS
Peiyi Wang¹† Lei Li³ Zhihong Shao⁴ R.X. Xu² Damai Dai¹ Yifei Li⁵
Deli Chen² Y. Wu² Zhifang Sui¹
1 National Key Laboratory for Multimedia Information Processing, Peking University
2 DeepSeek-AI 3 The University of Hong Kong
4 Tsinghua University 5 The Ohio State University
{wangpeiyi9979, nlp.lilei}@gmail.com, li.14042@osu.edu, szf@pku.edu.cn
Project Page: MATH-SHEPHERD
## ABSTRACT
In this paper, we present MATH-SHEPHERD, an innovative process reward model for mathematical reasoning that assigns a reward score to each step of a math problem solution. MATH-SHEPHERD is trained using automatically constructed process-wise supervision data, breaking the bottleneck of heavy reliance on manual annotation in existing work. We explore the effectiveness of MATH-SHEPHERD in two scenarios: 1) Verification: MATH-SHEPHERD is utilized for reranking multiple outputs generated by Large Language Models (LLMs); 2) Reinforcement Learning: MATH-SHEPHERD is employed to reinforce LLMs with step-by-step Proximal Policy Optimization (PPO). With MATH-SHEPHERD, a series of open-source LLMs demonstrates exceptional performance. For instance, step-by-step PPO with MATH-SHEPHERD significantly improves the accuracy of Mistral-7B (77.9% → 84.1% on GSM8K and 28.6% → 33.0% on MATH). The accuracy can be further enhanced to 89.1% on GSM8K and 43.5% on MATH with the verification of MATH-SHEPHERD. We believe that automatic process supervision holds significant potential for the future evolution of LLMs.
Figure 1: We evaluate the performance of various LLMs with MATH-SHEPHERD on the GSM8K and MATH datasets. All base models are fine-tuned on the MetaMath dataset (Yu et al., 2023b). The +SHEPHERD results are obtained by selecting the best one from 256 candidates using MATH-SHEPHERD. We observe that MATH-SHEPHERD is compatible with different LLMs. The results of GPT-4 (early) are from Bubeck et al. (2023).
<details>
<summary>Image 2 Details</summary>

### Visual Description
Bar chart: GSM8K accuracy (%) of fine-tuned LLMs (blue) versus the same models with +SHEPHERD best-of-256 selection (orange). LLaMA2-70B MAmmoTH: 72.4; LLaMA2-70B WizardMATH: 81.6; LLaMA2-70B MetaMATH: 80.4 → 93.2; LLemma-34B MetaMATH*: 75.8 → 90.9; DeepSeek-67B MetaMATH*: 82.8 → 93.3. Horizontal reference lines mark GPT-4 (early) at 92.0 and GPT-4-0613* at 94.4. +SHEPHERD consistently improves accuracy, with the enhanced models approaching or exceeding the GPT-4 baselines.
</details>
<details>
<summary>Image 3 Details</summary>

### Visual Description
Bar chart: MATH accuracy (%) of fine-tuned LLMs (blue) versus the same models with +SHEPHERD best-of-256 selection (orange). LLaMA2-70B MAmmoTH: 21.1; LLaMA2-70B WizardMATH: 22.7; LLaMA2-70B MetaMATH: 29.8 → 45.2; LLemma-34B MetaMATH*: 34.8 → 47.3; DeepSeek-67B MetaMATH*: 36.8 → 48.1. Horizontal reference lines mark GPT-4 (early) at 42.5 and GPT-4-0613* at 56.2. +SHEPHERD consistently improves accuracy; DeepSeek-67B MetaMATH* with +SHEPHERD performs best among the tested models, surpassing GPT-4 (early) but remaining below GPT-4-0613*.
</details>
†Contribution during internship at DeepSeek-AI.
## 1 INTRODUCTION
Large language models (LLMs) have demonstrated remarkable capabilities across various tasks (Park et al., 2023; Kaddour et al., 2023; Song et al.; Li et al., 2023a; Wang et al., 2023a; Chen et al., 2023; Zheng et al., 2023; Wang et al., 2023c). However, even the most advanced LLMs face challenges in complex multi-step mathematical reasoning problems (Lightman et al., 2023; Huang et al., 2023). To address this issue, prior research has explored different methodologies, such as pre-training (Azerbayev et al., 2023), fine-tuning (Luo et al., 2023; Yu et al., 2023b; Wang et al., 2023b), prompting (Wei et al., 2022; Fu et al., 2022), and verification (Wang et al., 2023d; Li et al., 2023b; Zhu et al., 2023; Leviathan et al., 2023). Among these techniques, verification has recently emerged as a favored method. The motivation behind verification is that relying solely on the top-1 result may not always produce reliable outcomes. A verification model can rerank candidate responses, ensuring higher accuracy and consistency in the outputs of LLMs. In addition, a good verification model can also offer invaluable feedback for the further improvement of LLMs (Uesato et al., 2022; Wang et al., 2023b; Pan et al., 2023).
Verification models generally fall into two categories: the outcome reward model (ORM) (Cobbe et al., 2021; Yu et al., 2023a) and the process reward model (PRM) (Li et al., 2023b; Uesato et al., 2022; Lightman et al., 2023; Ma et al., 2023). The ORM assigns a confidence score based on the entire generated sequence, whereas the PRM evaluates the reasoning path step by step. The PRM is advantageous for several compelling reasons. One major benefit is its ability to offer precise feedback by identifying the specific location of any errors that arise, which is a valuable signal in reinforcement learning and automatic correction. Besides, the PRM exhibits similarities to human behavior when assessing a reasoning problem: if any step contains an error, the final result is more likely to be incorrect, mirroring the way human judgment works. However, gathering data to train a PRM can be an arduous process. Uesato et al. (2022) and Lightman et al. (2023) utilize human annotators to provide process supervision annotations, enhancing the performance of the PRM. Nevertheless, human annotation, particularly for intricate multi-step reasoning tasks that require advanced annotator skills, can be quite costly, which hinders the advancement and practical application of the PRM.
To tackle this problem, in this paper, we propose an automatic process annotation framework. Inspired by Monte Carlo Tree Search (Kocsis & Szepesvári, 2006; Coulom, 2006; Silver et al., 2016; Świechowski et al., 2023), we define the quality of an intermediate step as its potential to deduce the correct final answer. By leveraging the correctness of the answer, we can automatically gather step-wise supervision. Specifically, given a math problem with a golden answer and a step-by-step solution, to obtain the label of a specific step, we utilize a fine-tuned LLM to decode multiple subsequent reasoning paths from this step. We then validate whether each decoded final answer matches the golden answer. If a reasoning step can deduce more correct answers than another, it is assigned a higher correctness score.
We use this automatic approach to construct the training data for MATH-SHEPHERD, and verify our ideas on two widely used mathematical benchmarks, GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021). We explore the effectiveness of MATH-SHEPHERD in two scenarios: 1) verification: MATH-SHEPHERD is utilized for reranking multiple outputs generated by LLMs; 2) reinforcement learning: MATH-SHEPHERD is employed to reinforce LLMs with step-by-step Proximal Policy Optimization (PPO). With the verification of MATH-SHEPHERD, a series of open-source LLMs from 7B to 70B demonstrates exceptional performance. For instance, step-by-step PPO with MATH-SHEPHERD significantly improves the accuracy of Mistral-7B (77.9% → 84.1% on GSM8K and 28.6% → 33.0% on MATH). The accuracy can be further enhanced to 89.1% and 43.5% on GSM8K and MATH with verification. DeepSeek-67B (DeepSeek, 2023) achieves accuracy rates of 93.3% on GSM8K and 48.1% on MATH with the verification of MATH-SHEPHERD. To the best of our knowledge, these results are unprecedented for open-source models that do not rely on additional tools.
Our main contributions are as follows:
1) We propose a framework to automatically construct process supervision datasets without human annotations for math reasoning tasks.
2) We evaluate our method in both step-by-step verification and reinforcement learning scenarios. Extensive experiments on two widely used mathematical benchmarks, GSM8K and MATH, across a series of LLMs ranging from 7B to 70B, demonstrate the effectiveness of our method.
3) We empirically analyze the key factors for training high-performing process reward models, shedding light on future directions toward improving reasoning capability with automatic step-by-step verification and supervision.
## 2 RELATED WORKS
Improving and eliciting mathematical reasoning abilities of LLMs. Mathematical reasoning tasks are among the most challenging tasks for LLMs. Researchers have proposed various methods to improve or elicit the mathematical reasoning ability of LLMs, which can be broadly divided into three groups: 1) pre-training: pre-training methods (OpenAI, 2023; Anil et al., 2023; Touvron et al., 2023; Azerbayev et al., 2023) pre-train LLMs on vast corpora related to math problems, such as Proof-Pile and ArXiv (Azerbayev et al., 2023), with a simple next-token prediction objective; 2) fine-tuning: fine-tuning methods (Yu et al., 2023b; Luo et al., 2023; Yue et al., 2023; Wang et al., 2023b; Gou et al., 2023) can also enhance the mathematical reasoning ability of LLMs. The core of fine-tuning usually lies in constructing high-quality question-response pair datasets with chain-of-thought reasoning processes; and 3) prompting: prompting methods (Wei et al., 2022; Zhang et al., 2023; Fu et al., 2022; Bi et al., 2023) aim to elicit the mathematical reasoning ability of LLMs by designing prompting strategies without updating model parameters, which is convenient and practical.
Mathematical reasoning verification for LLMs. Beyond directly improving and eliciting the mathematical reasoning potential of LLMs, reasoning results can be boosted via an extra verifier that selects the best answer from multiple decoded candidates. There are two primary types of verifiers: the Outcome Reward Model (ORM) and the Process Reward Model (PRM). The ORM allocates a score to the entire solution, while the PRM assigns a score to each individual step in the reasoning process. Recent findings by Lightman et al. (2023) suggest that the PRM outperforms the ORM. In addition to verification, reward models can offer invaluable feedback for further training of generators (Uesato et al., 2022; Pan et al., 2023). Compared to the ORM, the PRM provides more detailed feedback, demonstrating greater potential to enhance the generator (Wu et al., 2023). However, training a PRM requires access to expensive human-annotated datasets (Uesato et al., 2022; Lightman et al., 2023), which hinders the advancement and practical application of the PRM. Therefore, in this paper, we aim to build a PRM for mathematical reasoning without human annotation, and we explore the effectiveness of the automatic PRM in both verification and reinforcement learning scenarios.
## 3 METHODOLOGY
In this section, we first present our task formulation for evaluating the performance of reward models (§3.1). Subsequently, we outline two typical categories of reward models, the ORM and the PRM (§3.2). Then, we introduce our methodology to automatically build the training dataset for the PRM (§3.3), breaking the bottleneck of heavy reliance on manual annotation in existing work (Uesato et al., 2022; Lightman et al., 2023).
## 3.1 TASK FORMULATION
We evaluate the performance of the reward model in two scenarios:
Verification Following Lightman et al. (2023), we consider a best-of-N selection evaluation paradigm. Specifically, given a problem $p$ in the test set, we sample $N$ candidate solutions from a generator. These candidates are then scored by a reward model, and the highest-scoring solution is selected as the final answer. An enhanced reward model raises the likelihood of selecting the solution containing the correct answer, consequently raising the success rate of LLMs in solving mathematical problems.
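The best-of-N paradigm above can be sketched in a few lines. This is an illustrative toy example, not the paper's implementation: `score` stands in for any reward model, and the candidate strings and scores are made up.

```python
# Best-of-N selection: score each candidate with a reward model and keep the best.
def best_of_n(candidates, score):
    """Return the candidate solution with the highest reward-model score."""
    return max(candidates, key=score)

# Hypothetical candidate solutions with toy reward-model scores.
solutions = ["step A ... answer 20", "step B ... answer 24", "step C ... answer 24"]
toy_scores = {solutions[0]: 0.3, solutions[1]: 0.9, solutions[2]: 0.7}
best = best_of_n(solutions, toy_scores.get)
```

A stronger reward model makes it more likely that `best` contains the correct answer, which is exactly what the best-of-N accuracy metric measures.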
Reinforcement learning We also use the automatically constructed PRM to supervise LLMs with step-by-step PPO. In this scenario, we evaluate the accuracy of the LLMs' greedy decoding output. An enhanced reward model is instrumental in training higher-performing LLMs.
## 3.2 REWARD MODELS FOR MATHEMATICAL PROBLEM
ORM Given a mathematical problem $p$ and its solution $s$, the ORM ($\mathcal{P} \times \mathcal{S} \rightarrow \mathbb{R}$) assigns a single real value to $s$ to indicate whether $s$ is correct. The ORM is usually trained with a cross-entropy loss (Cobbe et al., 2021; Li et al., 2023b):
$$\mathcal{L}_{ORM} = y_s \log r_s + (1 - y_s) \log (1 - r_s)$$
where $y_s$ is the golden label of solution $s$: $y_s = 1$ if $s$ is correct, otherwise $y_s = 0$; $r_s$ is the sigmoid score of $s$ assigned by the ORM. The success of the reward model hinges on the effective construction of a high-quality training dataset. As a math problem usually has a definite answer, we can automatically construct the ORM training set in two steps: 1) sampling some candidate solutions for a problem from a generator; 2) assigning a label to each sampled solution by checking whether its answer is correct. Although false-positive solutions that reach the correct answer with incorrect reasoning will be misgraded, previous studies have shown that this is still effective for training a good ORM (Lightman et al., 2023; Yu et al., 2023a).
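The two-step ORM data construction can be sketched as follows. This is a minimal sketch, not the paper's pipeline: `extract_answer` is a hypothetical parser for the solution's final answer, and the candidate strings are invented.

```python
# Automatic ORM training-set construction: sample candidate solutions for a
# problem, then label each one by whether its final answer matches the golden
# answer (1 = correct, 0 = incorrect).
def build_orm_dataset(problem, candidate_solutions, golden_answer, extract_answer):
    return [
        (problem, sol, 1 if extract_answer(sol) == golden_answer else 0)
        for sol in candidate_solutions
    ]

# Hypothetical answer parser: take the text after the last "Answer:" marker.
extract = lambda sol: sol.rsplit("Answer:", 1)[-1].strip()
cands = ["... Answer: 24", "... Answer: 20", "... Answer: 24"]
data = build_orm_dataset("p(0) + p(4)?", cands, "24", extract)
```

Note that the second candidate is labeled 0 even if some of its intermediate steps are sound, and a solution reaching "24" by flawed reasoning would be labeled 1; this is the false-positive noise discussed above.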
PRM Taking a step further, the PRM ($\mathcal{P} \times \mathcal{S} \rightarrow \mathbb{R}^{+}$) assigns a score to each reasoning step of $s$; it is usually trained with:
$$\mathcal{L}_{PRM} = \sum_{i=1}^{K} y_{s_i} \log r_{s_i} + (1 - y_{s_i}) \log (1 - r_{s_i})$$
where $y_{s_i}$ is the golden label of $s_i$ (the $i$-th step of $s$), $r_{s_i}$ is the sigmoid score of $s_i$ assigned by the PRM, and $K$ is the number of reasoning steps of $s$. Lightman et al. (2023) also conceptualize PRM training as a three-class classification problem, in which each step is classified as 'good', 'neutral', or 'bad'. In this paper, we found little difference between the binary and three-class formulations, and we regard PRM training as binary classification. Compared to the ORM, the PRM can provide more detailed and reliable feedback (Lightman et al., 2023). However, there are currently no automated methods available for constructing high-quality PRM training datasets. Previous works (Uesato et al., 2022; Lightman et al., 2023) typically resort to costly human annotation. While the PRM manages to outperform the ORM (Lightman et al., 2023), the annotation cost invariably impedes both the development and application of the PRM.
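The step-wise objective can be sketched numerically. This is a minimal sketch, not the training code: it computes the standard binary cross-entropy to minimize (the negative of the log-likelihood sum written above) for one solution's steps.

```python
import math

# Step-wise binary cross-entropy for a PRM: y_i is the {0, 1} label of step i
# and r_i its sigmoid score; the loss sums over the K steps of one solution.
def prm_loss(step_labels, step_scores):
    assert len(step_labels) == len(step_scores)
    return -sum(
        y * math.log(r) + (1 - y) * math.log(1 - r)
        for y, r in zip(step_labels, step_scores)
    )
```

For a two-step solution with labels `[1, 0]` and scores `[0.9, 0.1]`, both steps are scored confidently and correctly, so the loss is small (`-2 * log(0.9)`, about 0.21).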
## 3.3 AUTOMATIC PROCESS ANNOTATION
In this section, we propose an automatic process annotation framework to mitigate the annotation cost issues associated with PRM. We first define the quality of a reasoning step, followed by the introduction of our solution that obviates the necessity for human annotation.
## 3.3.1 DEFINITION
Inspired by Monte Carlo Tree Search (Kocsis & Szepesvári, 2006; Coulom, 2006; Silver et al., 2016; Świechowski et al., 2023), we define the quality of a reasoning step as its potential to deduce the correct answer. This criterion stems from the primary objective of the reasoning process, which is essentially a cognitive procedure aiding humans or intelligent agents in reaching a well-founded outcome (Huang & Chang, 2023). Therefore, a step that has the potential to deduce a well-founded result can be considered a good reasoning step. Analogous to the ORM, this definition also introduces some degree of noise. Nevertheless, we find that it is beneficial for effectively training a good PRM.
## 3.3.2 SOLUTION
Completion To quantify and estimate the potential of a given reasoning step $s_i$, as shown in Figure 2, we use a 'completer' to finalize $N$ subsequent reasoning processes from this step: $\{(s_{i+1,j}, \cdots, s_{K_j,j}, a_j)\}_{j=1}^{N}$, where $a_j$ and $K_j$ are the decoded answer and the total number of steps of the $j$-th finalized solution, respectively. Then, we estimate the potential of this step based on the correctness of all decoded answers $\mathcal{A} = \{a_j\}_{j=1}^{N}$.
Figure 2: Comparison between previous automatic outcome annotation and our automatic process annotation. (a): automatic outcome annotation assigns a label to the entire solution $S$, dependent on the correctness of the answer; (b): automatic process annotation employs a 'completer' to finalize $N$ reasoning processes ($N=3$ in this figure) for an intermediate step ($s_1$ in this figure), and subsequently uses hard estimation (HE) and soft estimation (SE) to annotate this step based on all decoded answers.
<details>
<summary>Image 4 Details</summary>

### Visual Description
Diagram contrasting automatic outcome annotation and automatic process annotation on an example problem ("Let p(x) be a monic polynomial of degree 4. Three of the roots of p(x) are 1, 2, and 3. Find p(0) + p(4)."; golden answer: 24). (a) Outcome annotation: the full solution S = s1, s2, ..., sK ends with "Answer: 20", marked incorrect, so y_s = 0. (b) Process annotation: from step s1 ("Since three of the roots of p(x) are 1, 2, and 3, we can write: p(x) = (x-1)(x-2)(x-3)(x-r)."), a completer finalizes three parallel paths; two reach the correct answer 24 and one reaches 20, giving y^SE_{s1} = 2/3 and y^HE_{s1} = 1. Legend: s_i is the i-th step of solution S; s_{i,j} is the i-th step of the j-th finalized solution.
</details>
Estimation In this paper, we use two methods to estimate the quality $y_{s_i}$ of step $s_i$: hard estimation (HE) and soft estimation (SE). HE supposes that a reasoning step is good as long as it can reach the correct answer $a^*$:
$$y_{s_i}^{HE} = \begin{cases} 1 & \text{if } \exists\, a_j \in \mathcal{A},\ a_j = a^* \\ 0 & \text{otherwise} \end{cases}$$
SE assumes the quality of a step as the frequency with which it reaches the correct answer:
$$y_{s_i}^{SE} = \frac{\sum_{j=1}^{N} \mathbb{I}(a_j = a^*)}{N}$$
Once we gather the label of each step, we can train the PRM with the cross-entropy loss. In conclusion, our automatic process annotation framework defines the quality of a step as its potential to deduce the correct answer, and obtains the label of each step through completion and estimation.
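The two estimators are simple functions of the decoded answers. The sketch below uses the Figure 2 example (N = 3 completions, two reaching the golden answer); the string representation of answers is an illustrative choice.

```python
# Hard estimation (HE): a step is labeled 1 if at least one of its N
# completions reaches the golden answer a*, else 0.
def hard_estimate(decoded_answers, golden):
    return 1 if golden in decoded_answers else 0

# Soft estimation (SE): a step's label is the fraction of its N completions
# that reach the golden answer.
def soft_estimate(decoded_answers, golden):
    return sum(a == golden for a in decoded_answers) / len(decoded_answers)

# Figure 2 example: three completions from step s1, golden answer "24".
answers = ["24", "24", "20"]
```

Here `hard_estimate(answers, "24")` gives 1 and `soft_estimate(answers, "24")` gives 2/3, matching the y^HE and y^SE annotations in Figure 2.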
## 3.4 RANKING FOR VERIFICATION
Following Lightman et al. (2023), we use the minimum score across all steps as the final score of a solution assigned by the PRM. We also explore combining self-consistency with reward models, following Li et al. (2023b). In this context, we first classify solutions into distinct groups according to their final answers. We then compute the aggregate score for each group. Formally, the final predicted answer based on $N$ candidate solutions is:
$$\hat{a} = \arg\max_{a} \sum_{i=1}^{N} \mathbb{I}(a_i = a) \cdot RM(p, S_i)$$
where $RM(p, S_i)$ is the score of the $i$-th solution assigned by the ORM or PRM for problem $p$, and $a_i$ is the final answer of the $i$-th solution.
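The grouped aggregation can be sketched as follows. This is an illustrative toy example: the answers and reward scores are made up, and `sc_plus_rm` is a hypothetical helper name.

```python
from collections import defaultdict

# Self-consistency + reward model: group candidate solutions by final answer,
# sum the reward-model scores within each group, and return the answer whose
# group has the highest aggregate score.
def sc_plus_rm(answers, scores):
    group_score = defaultdict(float)
    for a, s in zip(answers, scores):
        group_score[a] += s
    return max(group_score, key=group_score.get)

# Toy example: "20" wins a plain majority vote (3 of 5 candidates), but the
# reward-weighted aggregate prefers "24" (1.7 vs. 0.8).
pred = sc_plus_rm(["24", "20", "24", "20", "20"], [0.9, 0.2, 0.8, 0.3, 0.3])
```

Setting every score to 1 recovers plain self-consistency (majority voting), so the reward model acts as a per-vote weight.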
## 3.5 REINFORCEMENT LEARNING WITH PROCESS SUPERVISION
Upon obtaining the PRM, we employ reinforcement learning to train LLMs. We implement Proximal Policy Optimization (PPO) in a step-by-step manner. This method differs from the conventional strategy of using PPO with the ORM, which only offers a reward at the end of the response. In contrast, our step-by-step PPO offers a reward at the end of each reasoning step.
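The reward placement difference can be sketched at the token level. This is a minimal sketch under stated assumptions, not the paper's PPO pipeline: step boundaries and scores are given as toy inputs, and the KL penalty that a real PPO implementation adds per token is omitted.

```python
# Step-by-step reward placement: instead of a single ORM reward on the final
# token, attach the PRM score of each reasoning step at that step's last
# token; all other token positions receive zero reward.
def step_rewards(num_tokens, step_end_positions, step_scores):
    rewards = [0.0] * num_tokens
    for pos, score in zip(step_end_positions, step_scores):
        rewards[pos] = score
    return rewards

# Toy response of 10 tokens with three steps ending at tokens 3, 6, and 9.
r = step_rewards(10, [3, 6, 9], [0.8, 0.9, 1.0])
```

ORM-style PPO would correspond to a single nonzero entry at the last position; the denser per-step signal is what lets the policy learn which intermediate step went wrong.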
Table 1: Performance of different LLMs on GSM8K and MATH with different verification strategies. The reward models are trained based on LLaMA2-70B and LLemma-34B for GSM8K and MATH, respectively. The verification is based on 256 outputs.
| Models | Verifiers | GSM8K | MATH500 |
|------------------------|-----------------------------------------|---------|-----------|
| LLaMA2-70B: MetaMATH   | Self-Consistency                        | 88.0    | 39.4      |
| LLaMA2-70B: MetaMATH   | ORM                                     | 91.8    | 40.4      |
| LLaMA2-70B: MetaMATH   | Self-Consistency + ORM                  | 92.0    | 42.0      |
| LLaMA2-70B: MetaMATH   | MATH-SHEPHERD (Ours)                    | 93.2    | 44.5      |
| LLaMA2-70B: MetaMATH   | Self-Consistency + MATH-SHEPHERD (Ours) | 92.4    | 45.2      |
| LLemma-34B: MetaMATH   | Self-Consistency                        | 82.6    | 44.2      |
| LLemma-34B: MetaMATH   | ORM                                     | 90.0    | 43.7      |
| LLemma-34B: MetaMATH   | Self-Consistency + ORM                  | 89.6    | 45.4      |
| LLemma-34B: MetaMATH   | MATH-SHEPHERD (Ours)                    | 90.9    | 46.0      |
| LLemma-34B: MetaMATH   | Self-Consistency + MATH-SHEPHERD (Ours) | 89.7    | 47.3      |
| DeepSeek-67B: MetaMATH | Self-Consistency                        | 88.2    | 45.4      |
| DeepSeek-67B: MetaMATH | ORM                                     | 92.6    | 45.3      |
| DeepSeek-67B: MetaMATH | Self-Consistency + ORM                  | 92.4    | 47.0      |
| DeepSeek-67B: MetaMATH | MATH-SHEPHERD (Ours)                    | 93.3    | 47.0      |
| DeepSeek-67B: MetaMATH | Self-Consistency + MATH-SHEPHERD (Ours) | 92.5    | 48.1      |
## 4 EXPERIMENTS
Datasets We conduct our experiments on two widely used math reasoning datasets, GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021). For the GSM8K dataset, we use the whole test set in both the verification and reinforcement learning scenarios. For the MATH dataset, in the verification scenario, due to the computational cost, we employ a subset, MATH500, identical to the test set of Lightman et al. (2023). The subset consists of 500 representative problems, and we find that the subset evaluation produces results similar to those of the full-set evaluation. To assess different verification methods, we generate 256 candidate solutions for each test problem. We report the mean accuracy of 3 groups of sampling results. In the reinforcement learning scenario, we use the whole test set to evaluate model performance. We train LLMs with MetaMATH (Yu et al., 2023b).
Parameter Setting Our experiments are based on a series of large language models: LLaMA2-7B/13B/70B (Touvron et al., 2023), LLemma-7B/34B (Azerbayev et al., 2023), Mistral-7B (Jiang et al., 2023), and DeepSeek-67B (DeepSeek, 2023). We train the generator and completer for 3 epochs on MetaMATH. We train Mistral-7B with a learning rate of 5e-6. For the other models, the learning rates are set to 2e-5, 1e-5, and 6e-6 for the 7B/13B, 34B, and 67B/70B LLMs, respectively. To construct the training datasets of the ORM and PRM, we train 7B and 13B models for a single epoch on the GSM8K and MATH training sets. Subsequently, we sample 15 solutions per problem from each model for the training set. Following this, we eliminate duplicate solutions and annotate the solutions at each step. We use LLemma-7B as the completer with the number of decodings N=8. Consequently, we obtain around 170k solutions for GSM8K and 270k solutions for MATH. For verification, we choose LLaMA2-70B and LLemma-34B as the base models to train reward models for GSM8K and MATH, respectively. For reinforcement learning, we choose Mistral-7B as the base model to train reward models and use it to supervise LLaMA2-7B and Mistral-7B generators. The reward model is trained for 1 epoch with a learning rate of 1e-6. For convenience, we train the PRM using the hard-estimation version, as it allows us to utilize a standard language modeling pipeline by selecting two special tokens to represent the 'has potential' and 'no potential' labels, thereby eliminating the need for any model-specific adjustments. In reinforcement learning, the learning rate is 4e-7 and 1e-7 for LLaMA2-7B and Mistral-7B, respectively. The Kullback-Leibler coefficient is set to 0.04. We implement a cosine learning rate scheduler with a minimal learning rate of 1e-8. We use the 3D parallelism provided by hfai 1 to train all models with a maximum sequence length of 512.
1 https://doc.hfai.high-flyer.cn/index.html
Table 2: Performance of different 7B models on GSM8K and MATH with greedy decoding. We use the questions in MetaMATH for RFT and PPO training. Both LLaMA2-7B and Mistral-7B are supervised by the Mistral-7B-based ORM and MATH-SHEPHERD reward models.
| Models | GSM8K | MATH |
|-----------------------------------------|---------|--------|
| LLaMA2-7B: MetaMATH                     | 66.6    | 19.2   |
| + RFT                                   | 68.5    | 19.9   |
| + ORM-PPO                               | 70.8    | 20.8   |
| + MATH-SHEPHERD-step-by-step-PPO (Ours) | 73.2    | 21.6   |
| Mistral-7B: MetaMATH                    | 77.9    | 28.6   |
| + RFT                                   | 79.0    | 29.9   |
| + ORM-PPO                               | 81.8    | 31.3   |
| + MATH-SHEPHERD-step-by-step-PPO (Ours) | 84.1    | 33.0   |
Baselines and Metrics In the verification scenario, following Lightman et al. (2023), we evaluate the performance of our reward model by comparing it against self-consistency (majority voting) and the outcome reward model. The accuracy of the best-of-N solution is the evaluation metric. For the PRM, the minimum score across all steps is adopted as the final score of a solution. In the reinforcement learning scenario, we compare our step-by-step supervision with the outcome supervision provided by the ORM and with Rejection Sampling Fine-tuning (RFT) (Yuan et al., 2023); for RFT, we sample 8 responses for each question in MetaMATH. We use the accuracy of the LLMs' greedy decoding output to assess performance.
## 4.1 MAIN RESULTS
MATH-SHEPHERD as verifier Table 1 presents the performance comparison of various methods on GSM8K and MATH. We find that: 1) as a verifier, MATH-SHEPHERD consistently outperforms self-consistency and the ORM on both datasets with all generators. Specifically, enhanced by MATH-SHEPHERD, DeepSeek-67B achieves 93.3% and 48.1% accuracy on GSM8K and MATH; 2) compared to GSM8K, the PRM achieves a greater advantage over the ORM on the more challenging MATH dataset. This outcome aligns with the findings of Uesato et al. (2022) and Lightman et al. (2023): the former finds that the PRM and ORM yield similar results on GSM8K, whereas the latter shows that the PRM significantly outperforms the ORM on the MATH dataset. This could be attributed to the relative simplicity of GSM8K compared to MATH, i.e., the GSM8K dataset necessitates fewer steps for problem-solving, so the ORM operates effectively on this dataset; 3) on GSM8K, combining with self-consistency causes a drop in performance, whereas on MATH, performance improves. These results indicate that if the reward model is sufficiently powerful for a task, combining it with self-consistency may harm verification performance.
MATH-SHEPHERD as reward model in reinforcement learning Table 2 presents the greedy decoding performance of different LLMs. As shown: 1) step-by-step PPO significantly improves the performance of both supervised fine-tuned models; for example, Mistral-7B with step-by-step PPO achieves 84.1% and 33.0% on GSM8K and MATH, respectively; 2) RFT only slightly improves performance, presumably because MetaMATH already incorporates RFT-like data augmentation strategies; 3) vanilla PPO with the ORM also enhances performance, but not as much as step-by-step PPO supervised by MATH-SHEPHERD, demonstrating the potential of step-by-step supervision.
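The step-by-step reward signal can be illustrated schematically: instead of a single reward at the end of the response (as with the ORM), the PRM scores the partial solution at every step boundary, giving a denser training signal. This is a simplified sketch, not the actual training code; `prm_score` is a hypothetical stand-in for the trained MATH-SHEPHERD model:

```python
def step_rewards(response, prm_score):
    """Assign a PRM reward at the end of each reasoning step.

    `response` is a newline-separated chain of reasoning steps;
    `prm_score` maps a partial solution (prefix) to a scalar reward.
    Each step boundary receives the PRM's score for the solution so
    far, in contrast to outcome supervision, which rewards only the
    final token.
    """
    steps = [s for s in response.split("\n") if s.strip()]
    rewards = []
    prefix = ""
    for step in steps:
        prefix += step + "\n"
        rewards.append(prm_score(prefix))
    return rewards

# Hypothetical scorer: counts completed steps, scaled down.
fake_prm = lambda prefix: prefix.count("\n") / 10
print(step_rewards("Step 1: d = -3\nStep 2: the 13th term is -18", fake_prm))
```

In actual PPO training these per-step rewards would be attached to the tokens that close each step; the sketch only shows how the reward sequence is derived.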
MATH-SHEPHERD as both reward model and verifier We also combine reinforcement learning with verification. As shown in Table 3: 1) reinforcement learning and verification are complementary. For example, on MATH, step-by-step PPO Mistral-7B outperforms the supervised fine-tuned Mistral-7B by 7.2% accuracy with self-consistency as the verifier; this gap is even larger than that of the greedy decoding results, i.e., 4.4%; 2) after reinforcement learning, vanilla verification with only a reward model is inferior to self-consistency. We think the
Table 3: Results of combining reinforcement learning with verification. The reward models are trained based on Mistral-7B. Verification is based on 256 outputs.
| Models | Verifiers | GSM8K | MATH500 |
|--------------------------|-----------------------------------------|---------|-----------|
| Mistral-7B: MetaMATH | Self-Consistency | 83.9 | 35.1 |
| Mistral-7B: MetaMATH | ORM | 86.2 | 36.4 |
| Mistral-7B: MetaMATH | Self-Consistency+ORM | 86.6 | 38.0 |
| Mistral-7B: MetaMATH | MATH-SHEPHERD (Ours) | 87.1 | 37.3 |
| Mistral-7B: MetaMATH | Self-Consistency + MATH-SHEPHERD (Ours) | 86.3 | 38.3 |
| +step-by-step PPO (Ours) | Self-Consistency | 87.4 | 42.3 |
| +step-by-step PPO (Ours) | ORM | 87.6 | 41.3 |
| +step-by-step PPO (Ours) | Self-Consistency+ORM | 89.0 | 43.1 |
| +step-by-step PPO (Ours) | MATH-SHEPHERD (Ours) | 88.4 | 41.1 |
| +step-by-step PPO (Ours) | Self-Consistency + MATH-SHEPHERD (Ours) | 89.1 | 43.5 |
reason is that the initial reward model is not sufficient to supervise the stronger model obtained after PPO. These results also point to the potential of iterative reinforcement learning, which we leave for future work.
## 5 ANALYSIS
## 5.1 PERFORMANCE WITH DIFFERENT NUMBERS OF CANDIDATE SOLUTIONS
Figure 3 compares various strategies across different numbers of candidates, ranging from 1 to 256, on the two benchmarks. The key observations are as follows: 1) The PRM consistently outperforms both the ORM and majority voting, and its advantage becomes more pronounced as N increases. 2) On MATH, our automatically annotated dataset outperforms the human-annotated PRM800K (Lightman et al., 2023). We attribute this superiority to the distribution gap and the data quantity. Specifically, PRM800K is annotated on outputs from GPT-4, so a discrepancy arises for the outputs of open-source LLaMA models fine-tuned on MetaMATH. In terms of quantity, our automated reward-model data is highly scalable at a much lower labeling cost; consequently, our dataset is four times larger than PRM800K. Overall, these results further underscore the effectiveness and potential of our method.
## 5.2 QUALITY OF THE AUTOMATIC PROCESS ANNOTATIONS
In this section, we examine the quality of our automatic PRM dataset. To this end, we manually annotate 160 steps sampled from the GSM8K training set and use different completers to decode from each step and derive its label. We find that:
Automatic process annotation exhibits satisfactory quality. Figure 4(a) shows that, with LLaMA2-70B trained on MetaMATH as the completer, the accuracy of hard estimation (HE) reaches 86% at N = 4. This suggests that our automatically constructed dataset is of high quality. However, accuracy declines as N increases further; our analysis indicates that larger values of N may introduce false positives, since even a flawed step can occasionally be completed to the correct answer.
Figure 4(b) shows the cross-entropy loss of SE and HE labels against the human-annotated distribution: as N increases, SE progressively aligns with the gold distribution, whereas HE does not exhibit similar behavior. Note that at N = 4, HE already achieves 86% accuracy; in theory, SE could yield data exceeding that accuracy. In practice, however, we find no substantial difference in verifier performance whether it is trained with SE or HE, likely because the HE annotations are already of high quality.
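To recap the two label estimators compared here (defined earlier in the paper): hard estimation (HE) marks a step correct if any of the N completions decoded from it reaches the gold answer, while soft estimation (SE) uses the fraction of completions that do. A minimal sketch with hypothetical completion answers:

```python
def annotate_step(completion_answers, gold_answer):
    """Derive hard- and soft-estimation labels for one reasoning step.

    From a given step, a completer decodes N subsequent reasoning
    paths, each yielding a final answer. HE labels the step correct
    (1) if any path reaches the gold answer; SE uses the fraction of
    paths that do.
    """
    hits = sum(ans == gold_answer for ans in completion_answers)
    hard = 1 if hits > 0 else 0
    soft = hits / len(completion_answers)
    return hard, soft

# N = 4 hypothetical completions decoded from one step:
print(annotate_step(["18", "-18", "-18", "-18"], "-18"))  # -> (1, 0.75)
```

The sketch also makes the false-positive risk discussed above visible: a single lucky completion flips HE to 1, and the probability of such a hit grows with N.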
Furthermore, we examine other automatic process annotation methods. For instance, Li et al. (2023b) employ a natural language inference (NLI) model and a string-match rule to annotate a
Figure 3: Performance of LLaMA2-70B using different verification strategies across different numbers of solution candidates on GSM8K and MATH.
Figure 4: Quality of process annotation on GSM8K. (a): Accuracy of the process annotation using different completer; (b): Loss of the process annotation using different completer; (c): Loss of the process annotation using the same completer with different training data.
given step. The NLI-based method annotates a step as correct if it is entailed by any step in the reference solutions. The rule-based method annotates a step as correct if its supporting numbers exactly match those of some step in the reference solutions. As shown in Table 4, our annotation strategy substantially outperforms both approaches.
The ability of the LLM completer plays an important role in data quality. Since we employ a completer to decode multiple subsequent reasoning processes for a given step, we investigate the impact of the completer's capability.
Figure 4(b) presents the cross-entropy loss across completers of different sizes trained on MetaMATH. The results indicate that a larger completer generates higher-quality data. Figure 4(c) depicts the cross-entropy loss of LLaMA2-70B trained on different datasets: 'Normal' denotes the original GSM8K training set; 'Weak' denotes the Normal set excluding examples whose questions appear in our 160-step evaluation set; and 'Augmented' denotes MetaMATH, an augmented version of the Normal set.
The findings suggest that high-quality training sets allow the model to operate more proficiently as a completer. Notably, the 'Weak' set exhibits a markedly larger loss than the other datasets, which suggests that LLMs benefit from having seen the questions during training when acting as completers. We further conjecture that a stronger base model, coupled with better training data, could further improve the quality of automatic annotation.
## 5.3 INFLUENCE OF THE PRE-TRAINED BASE MODELS
To evaluate MATH-SHEPHERD's effectiveness comprehensively, we conduct experiments across model sizes of 7B, 13B, and 70B.
Table 4: The comparison between NLI/Rule-based automatic process annotation methods from Li et al. (2023b) and our method.
| Methods | Models | Accuracy (%) | Loss |
|---------------------------------|---------------------------|----------------|--------|
| DIVERSE-NLI (Li et al., 2023b) | DeBERTa (He et al., 2020) | 61.3 | 5.43 |
| DIVERSE-NLI (Li et al., 2023b) | LLaMA2-13B | 75.6 | 3.27 |
| DIVERSE-Rule (Li et al., 2023b) | - | 75.0 | 3.43 |
| MATH-SHEPHERD | LLaMA2-13B (N = 4) | 85.0 | 2.05 |
Figure 5: Performance of different verification strategies on different sizes of generators and verifiers.
Figures 5(a), 5(b), and 3(a) display the results of the 7B, 13B, and 70B generators paired with equal-sized reward models, respectively. The PRM clearly outperforms self-consistency and the ORM across all base model sizes. Moreover, larger reward models prove more robust: the accuracy of the 70B reward model increases as the number of candidate solutions rises, whereas the 7B reward model shows a decreasing trend.
Figures 5(c) and 5(d) present the performance of 7B and 70B generators paired with different-sized reward models. The findings show that using a larger reward model to verify the output of a smaller generator significantly enhances performance. Conversely, when a smaller reward model verifies the output of a larger generator, verification harms performance relative to self-consistency. These results indicate that we should employ a more capable reward model to verify or supervise the generator.
## 5.4 INFLUENCE OF THE AMOUNT OF TRAINING DATA
We further analyze the PRM and ORM when trained with varying quantities of data. As depicted in Figure 6(a), the PRM is clearly more data-efficient: it outperforms the ORM by approximately 4% accuracy with a modestly sized training set (i.e., 10k instances). Furthermore, the PRM appears to have a higher performance ceiling than the ORM. These observations highlight the efficacy of the PRM for verification.
## 5.5 OUT-OF-DISTRIBUTION PERFORMANCE
To further demonstrate the effectiveness of our method, we conduct an out-of-distribution evaluation on the Hungarian national final exam 2 , which consists of 33 questions with a total score of 100. We use LLemma-34B trained on MetaMATH as the generator, producing 256 candidate solutions per question, and use LLemma-34B-ORM and LLemma-34B-PRM to select a solution for each question. As shown in Figure 6(b): 1) both LLemma-34B-ORM and LLemma-34B-PRM outperform the original LLemma-34B, showing that the reward models generalize to other domains; 2) the PRM outperforms the ORM by 9 points, further demonstrating the superiority of the PRM.
2 https://huggingface.co/datasets/keirp/hungarian_national_hs_finals_exam
Figure 6: (a): Performance of different reward models using different numbers of training data; (b) performance of different verification strategies on the out-of-distribution Hungarian national exam.
Table 5: A case study from the Hungarian national exam. Red text denotes the mistake that ORM fails to detect.
| Question | The first term of an arithmetic sequence is 18. The sum of the first six terms of the sequence is equal to the sum of the first seven terms. Show that the sum of the first 13 terms of the sequence is 0 and calculate the 13th term of the sequence. |
|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| MATH-SHEPHERD | Step 1: Let the common difference of the arithmetic sequence be d. MATH-SHEPHERD: 0.99. Step 2: The sum of the first six terms of the sequence is 18 + (18 + d) + (18 + 2d) + (18 + 3d) + (18 + 4d) + (18 + 5d) = 108 + 30d. MATH-SHEPHERD: 0.85. ...... Step 6: Therefore, d = -3. MATH-SHEPHERD: 0.85. ...... Step 8: The 13th term of the sequence is 18 - 36 = -18. MATH-SHEPHERD: 0.93. ORM Score: 0.73. |
| ORM | Step 1: Let the common difference of the arithmetic sequence be d. MATH-SHEPHERD: 0.99. Step 2: The sum of the first six terms is 18 + (18 + d) + (18 + 2d) + (18 + 3d) + (18 + 4d) + (18 + 5d) = 90 + 15d. MATH-SHEPHERD: 0.53. ...... Step 6: Dividing by -6, we find that d = -2. MATH-SHEPHERD: 0.38. ...... Step 9: The 13th term of the sequence is 18 - 26 = -8. MATH-SHEPHERD: 0.38. ORM Score: 0.84. |
We also conduct a case study to intuitively demonstrate the effectiveness of MATH-SHEPHERD. As outlined in Table 5, given a question from the Hungarian national final exam, MATH-SHEPHERD accurately selects the correct solution from a pool of 256 candidates, which the ORM fails to do. Moreover, MATH-SHEPHERD precisely identifies the incorrect steps within the solution selected by the ORM: it recognizes the errors in Step 2, Step 6, and Step 9, and assigns them lower scores than those of the corresponding steps in the correct solution.
## 6 LIMITATIONS
Our paper has some limitations, which we leave for future work:
The computational cost of the completion process. To determine the label of each reasoning step, we utilize a 'completer' to decode N subsequent reasoning processes. We observe that the quality of the automatic annotations improves as N increases; however, the completion process demands substantial computing resources, which may limit the applicability of our method. Despite this, the cost remains far lower than that of human annotation. Furthermore, we are optimistic that advances in efficient inference techniques such as speculative decoding (Xia et al., 2022; Leviathan et al., 2023) and vLLM (Kwon et al., 2023) can mitigate this limitation.
The automatic process annotation contains noise. Like automatic outcome annotation, our automatic process annotation is noisy. Despite this, our experiments verify the efficacy of our method for training a PRM; in particular, the PRM trained on our dataset outperforms one trained on the human-annotated PRM800K dataset. However, a noticeable distribution gap remains between PRM800K and the candidate responses generated by the open-source models used in this study, which may undermine the effectiveness of PRM800K in our setting. As a result, the impact of annotation noise on PRM performance remains undetermined, and we leave a comprehensive comparison between human and automatic annotations to future work. Furthermore, we believe that combining human and automatic process annotations could play a vital role in constructing robust and efficient process supervision.
## 7 CONCLUSION
In this paper, we introduce a process-oriented math verifier called MATH-SHEPHERD, which assigns a reward score to each step of an LLM's outputs on math problems. The training of MATH-SHEPHERD uses automatically constructed process-wise supervision data, thereby eliminating the need for labor-intensive human annotation. Remarkably, these automatic annotations correlate strongly with human ones. Extensive experiments in both verification and reinforcement learning scenarios demonstrate the effectiveness of our method.
## REFERENCES
- Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403 , 2023.
- Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for mathematics. arXiv preprint arXiv:2310.10631 , 2023.
- Zhen Bi, Ningyu Zhang, Yinuo Jiang, Shumin Deng, Guozhou Zheng, and Huajun Chen. When do program-of-thoughts work for reasoning? arXiv preprint arXiv:2308.15452 , 2023.
- Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712 , 2023.
- Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao, Zefan Cai, Yuchi Wang, Peiyi Wang, Tianyu Liu, and Baobao Chang. Towards end-to-end embodied decision making via multi-modal large language model: Explorations with gpt4-vision and beyond. arXiv preprint arXiv:2310.02071 , 2023.
- Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 , 2021.
- Rémi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In International conference on computers and games , pp. 72-83. Springer, 2006.
- DeepSeek. Deepseek llm: Let there be answers. https://github.com/deepseek-ai/DeepSeek-LLM , 2023.
- Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting for multi-step reasoning. arXiv preprint arXiv:2210.00720 , 2022.
- Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint arXiv:2309.17452 , 2023.
- Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654 , 2020.
- Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874 , 2021.
- Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for Computational Linguistics: ACL 2023 , pp. 1049-1065, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.67. URL https://aclanthology.org/2023.findings-acl.67 .
- Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. Large language models cannot self-correct reasoning yet. arXiv preprint arXiv:2310.01798 , 2023.
- Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023.
- Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy. Challenges and applications of large language models. arXiv preprint arXiv:2307.10169 , 2023.
- Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on machine learning , pp. 282-293. Springer, 2006.
- Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles , pp. 611-626, 2023.
- Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning , pp. 19274-19286. PMLR, 2023.
- Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, et al. M3it: A large-scale dataset towards multi-modal multilingual instruction tuning. arXiv preprint arXiv:2306.04387 , 2023a.
- Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making language models better reasoners with step-aware verifier. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pp. 5315-5333, Toronto, Canada, July 2023b. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.291. URL https://aclanthology.org/2023.acl-long.291 .
- Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050 , 2023.
- Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583 , 2023.
- Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia Yang. Let's reward step by step: Step-level reward model as the navigators for reasoning. arXiv preprint arXiv:2310.10080 , 2023.
- OpenAI. GPT-4 technical report. CoRR , abs/2303.08774, 2023. doi: 10.48550/arXiv.2303.08774. URL https://doi.org/10.48550/arXiv.2303.08774 .
- Sarah Pan, Vladislav Lialin, Sherin Muckatira, and Anna Rumshisky. Let's reinforce step by step. arXiv preprint arXiv:2311.05821 , 2023.
- Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology , pp. 1-22, 2023.
- David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature , 529(7587):484-489, 2016.
- Yifan Song, Weimin Xiong, Dawei Zhu, Cheng Li, Ke Wang, Ye Tian, and Sujian Li. Restgpt: Connecting large language models with real-world applications via restful apis. arXiv preprint arXiv:2306.06624 , 2023.
- Maciej Świechowski, Konrad Godlewski, Bartosz Sawicki, and Jacek Mańdziuk. Monte carlo tree search: A review of recent modifications and applications. Artificial Intelligence Review , 56(3): 2497-2562, 2023.
- Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023.
- Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275 , 2022.
- Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291 , 2023a.
- Peiyi Wang, Lei Li, Liang Chen, Feifan Song, Binghuai Lin, Yunbo Cao, Tianyu Liu, and Zhifang Sui. Making large language models better reasoners with alignment. arXiv preprint arXiv:2309.02144 , 2023b.
- Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926 , 2023c.
- Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023 . OpenReview.net, 2023d. URL https://openreview.net/pdf?id=1PL1NIMMrw .
- Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS , 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html .
- Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language model training. arXiv preprint arXiv:2306.01693 , 2023.
- Heming Xia, Tao Ge, Furu Wei, and Zhifang Sui. Lossless speedup of autoregressive translation with generalized aggressive decoding. arXiv preprint arXiv:2203.16487 , 2022.
- Fei Yu, Anningzhe Gao, and Benyou Wang. Outcome-supervised verifiers for planning in mathematical reasoning. arXiv preprint arXiv:2311.09724 , 2023a.
- Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284 , 2023b.
- Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825 , 2023.
- Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653 , 2023.
- Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew Chi-Chih Yao. Cumulative reasoning with large language models. arXiv preprint arXiv:2308.04371 , 2023.
- Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685 , 2023.
- Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang. Solving math word problems via cooperative reasoning induced language models. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pp. 4471-4485, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.245. URL https://aclanthology.org/2023.acl-long.245 .