# Process Reward Models That Think
Abstract
Step-by-step verifiers, also known as process reward models (PRMs), are a key ingredient for test-time scaling, but training them requires expensive step-level supervision. This work aims to build data-efficient PRMs as verbalized step-wise reward models that verify every step in the solution by generating a verification chain-of-thought (CoT). We propose ThinkPRM, a long CoT verifier fine-tuned on orders of magnitude fewer process labels than those required by discriminative PRMs. Our approach capitalizes on the inherent reasoning abilities of long CoT models, and outperforms LLM-as-a-Judge and discriminative verifiers, using only 1% of the process labels in PRM800K, across several challenging benchmarks. Specifically, ThinkPRM beats the baselines on ProcessBench, MATH-500, and AIME '24 under best-of-N selection and reward-guided search. In an out-of-domain evaluation over subsets of GPQA-Diamond and LiveCodeBench, our PRM surpasses discriminative verifiers trained with the full PRM800K by 8% and 4.5%, respectively. Lastly, under the same token budget, ThinkPRM scales up verification compute more effectively compared to LLM-as-a-Judge, outperforming it by 7.2% on a subset of ProcessBench. This work highlights the value of generative, long CoT PRMs that can scale test-time compute for verification while requiring minimal supervision for training. Our code, data, and models are released at https://github.com/mukhal/thinkprm.
Figure 1: Left: Verifier F1-score on ProcessBench (Zheng et al., 2024). ThinkPRM-14B, trained on 8K process labels or 1K synthetic examples, outperforms discriminative PRMs trained on about 100x more data. Right: Verifier-guided search accuracy on MATH-500 with Llama-3.2-3B-Instruct as generator. ThinkPRM-1.5B, trained using the same 8K labels, outperforms LLM-as-a-judge and discriminative verifiers in reward-guided search on MATH-500. The LLM-as-a-judge in both figures uses the same base model as ThinkPRM.
1 Introduction
Reasoning with large language models (LLMs) can substantially benefit from utilizing more test-time compute (Jaech et al., 2024; Guo et al., 2025; Akyürek et al., 2024). This typically depends on a high-quality process reward model (PRM), also known as a process verifier, that scores (partial) solutions for selecting promising paths for search or ranking (Cobbe et al., 2021; Li et al., 2023; Wu et al., 2024; Brown et al., 2024). PRMs have typically assumed the form of discriminative classifiers, trained to discern correct from incorrect reasoning (Uesato et al., 2022; Zhang et al., 2025). However, training discriminative PRMs requires access to process labels, i.e., step-level annotations, which either require extensive human annotation (Lightman et al., 2023; Zheng et al., 2024), gold step-by-step solutions (Khalifa et al., 2023), or compute-intensive rollouts (Luo et al., 2024; Chen et al., 2024a). For instance, training reasonably performing math PRMs requires hundreds of thousands of step-level annotations (Lightman et al., 2023; Wang et al., 2023b).
Generative verification, either via LLM-as-a-judge (Wang et al., 2023a; Liu et al., 2023b; Zheng et al., 2023) or GenRM (Zhang et al., 2024a), treats verification as a generation problem: the model produces a rationale followed by a decision. However, LLM-as-a-judge is known to perform poorly compared to specialized reward models (Lambert et al., 2024; Zhang et al., 2024b; Chen et al., 2024c), as general-purpose LLMs frequently fail to recognize reasoning errors (Huang et al., 2023; Zhang et al., 2024a; Ye et al., 2024). Moreover, GenRM is restricted to outcome verification via short chains-of-thought (CoTs), fundamentally limiting its ability to scale at test time.
Figure 2: ThinkPRM enables scaling verification compute with more CoT tokens.
This paper builds on the insight that generative step-by-step verification can greatly benefit from scaling up the verifier's inference compute, specifically by enabling it to think through a CoT. To this end, we repurpose open-weight large reasoning models (LRMs) as the foundation for generative PRMs through lightweight training. This training uses synthetic data (Kim et al., 2023; Zhu et al., 2023; Wang et al., 2024), utilizing as few as 8K step labels, and yields ThinkPRM, a PRM that not only surpasses LLM-as-a-judge but also outperforms discriminative PRMs trained on two orders of magnitude more data across a variety of test-time scaling scenarios.
We obtain ThinkPRM by training four reasoning models, namely R1-Distill-Qwen-{1.5B, 7B, 14B} (Guo et al., 2025) and QwQ-32B-Preview (Team, 2024), and extensively evaluate them both as standalone verifiers on ProcessBench (Zheng et al., 2024) and combined with a generator under best-of-N selection and verifier-guided beam search. ThinkPRM-14B outperforms a discriminative PRM based on the same base model in terms of accuracy while using far less supervision, as shown in Fig. 1 (left). In addition, ThinkPRM-1.5B demonstrates strong performance on MATH-500 (Hendrycks et al., 2021) under guided beam search, as shown in Fig. 1 (right). Lastly, as shown in Fig. 2, ThinkPRM can utilize more verification compute than LLM-as-a-judge by forcing it to think for more tokens. All these results are obtained while training on only 8K step labels.
Our work highlights the promise of long CoT PRMs that verify reasoning with reasoning, effectively scaling both generator and verifier compute. Our main findings are as follows. First, ThinkPRM outperforms strong PRM baselines in best-of-N and guided-search setups on two math reasoning benchmarks, MATH-500 and AIME 2024, and surpasses LLM-as-a-judge baselines under the same base model by thinking longer during verification (section 4). Second, ThinkPRM generalizes under two types of domain shift: it outperforms baselines on out-of-domain tasks such as scientific reasoning and code generation, and, despite being trained only on short solutions, it generalizes to long-form reasoning without explicit step delimiters (section 5.3). Third, ThinkPRM outperforms self-consistency (Wang et al., 2022) when using the same compute budget, especially under high sampling regimes (section 5.4). Finally, fine-grained filtering of synthetic data based on step supervision is crucial for training high-quality PRMs (section 5.7).
2 Background and Related Work
Discriminative PRMs.
Discriminative PRMs are trained as classifiers that directly predict numerical correctness scores for each solution step, and typically rely on extensive step-level annotations (Uesato et al., 2022; Lightman et al., 2023; Zhang et al., 2025). Given a solution prefix, discriminative PRMs encode the solution text and employ a classification head to produce step-level scores, usually optimized with binary cross-entropy. An overall correctness score for a solution is obtained by aggregating these step-level scores (Beeching et al., ). PRMs are effective and straightforward but they do not utilize the language-modeling head of the base language model, making training expensive and labor-intensive (Yuan et al., 2024). Additionally, they offer limited interpretability and utilize fixed compute, restricting their dynamic scalability at test-time (Zhang et al., 2024a; Mahan et al., 2024). Thus, there is a need for data-efficient PRMs that can scale with more test-time compute.
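As a concrete illustration of the aggregation step described above, the sketch below collapses per-step classifier probabilities into one solution score. The function name and the choice between minimum and product aggregation are illustrative assumptions, not the paper's code:

```python
import math

def aggregate_solution_score(step_scores, method="min"):
    """Collapse per-step correctness probabilities into one solution score.

    step_scores: list of P(step is correct) values, as would come from a
    discriminative PRM's classification head (sigmoid outputs).
    Common aggregation choices are the minimum step score or the product.
    """
    if not step_scores:
        raise ValueError("need at least one step score")
    if method == "min":
        return min(step_scores)
    if method == "prod":
        return math.prod(step_scores)
    raise ValueError(f"unknown method: {method}")

# A solution whose third step looks wrong gets a low overall score.
print(aggregate_solution_score([0.95, 0.90, 0.20], method="min"))  # 0.2
```

Both aggregators penalize a single bad step heavily, which matches the intuition that one wrong step invalidates a solution.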
Generative Verification.
Generative verification (Zheng et al., 2023; Zhu et al., 2023; Zhang et al., 2024a) frames verification as a language-generation task, producing step-level decisions as tokens (e.g., "correct" or "incorrect"), typically accompanied by a chain-of-thought (CoT). One can train generative verifiers using the standard language modeling objective on verification rationales rather than on binary labels. This approach leverages the strengths of LLMs in text generation, making generative verifiers inherently interpretable and scalable (Zhang et al., 2024a; Mahan et al., 2024; Wang et al., 2023a; Ankner et al., 2024). However, prior work on generative verifiers has relied mainly on short verification CoTs (e.g., a few hundred tokens) (Zhang et al., 2024a), which severely limits their scalability. Thus, there is a need for verifiers that can "think" longer through verification, utilizing test-time compute effectively. While LLM-as-a-Judge has been employed for step-level verification (Zheng et al., 2024), it tends to be sensitive to prompt phrasing and prone to invalid outputs, such as infinite looping or excessive overthinking (Bavaresco et al., 2024), issues we further confirm in this work. Prior results with reasoning models like QwQ-32B-Preview (Team, 2024) show promise, but their practical utility in test-time scaling remains limited without additional training (Zheng et al., 2024).
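To make the decisions-as-tokens idea concrete, here is a minimal sketch of extracting step verdicts from a generated verification CoT, assuming a per-step `\boxed{correct}`/`\boxed{incorrect}` label format like the one we use for training data later; the function name is hypothetical:

```python
import re

def extract_step_verdicts(cot: str):
    """Pull per-step decisions out of a verification CoT.

    Assumes the verifier emits one \\boxed{correct} or \\boxed{incorrect}
    per step. Returns a list of booleans (True = step judged correct),
    or None if no extractable label is found (an "invalid judgment").
    """
    labels = re.findall(r"\\boxed\{(correct|incorrect)\}", cot)
    if not labels:
        return None
    return [label == "correct" for label in labels]

cot = ("Step 1 checks out: \\boxed{correct}\n"
       "Step 2 misapplies the identity: \\boxed{incorrect}")
print(extract_step_verdicts(cot))  # [True, False]
```

The `None` case is exactly the failure mode discussed above: a chain with no extractable decision cannot be used as a verdict at all.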
Test-Time Scaling with PRMs.
Test-time scaling techniques, such as Best-of-N selection (Charniak & Johnson, 2005) and tree-based search (Yao et al., 2023; Chen et al., 2024c; Wan et al., 2024), leverage additional inference-time compute to improve reasoning performance. Central to these approaches is the quality of the verifier used to score and select solutions. A major advantage of generative PRMs is that they uniquely support simultaneous scaling of both generator and verifier compute (Zhang et al., 2024a; Kalra & Tang, 2025). In particular, our work shows that generative PRMs trained based on long CoT models (Jaech et al., 2024; Guo et al., 2025) enable both parallel and sequential scaling of verifier compute.
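As a minimal illustration of best-of-N selection with a verifier, the sketch below samples N candidates and keeps the highest-scoring one. The `generate` and `score` callables are stand-ins for the generator LLM and the PRM; all names are illustrative:

```python
from itertools import count

def best_of_n(problem, generate, score, n=8):
    """Best-of-N selection: sample n candidate solutions and return the
    one the verifier scores highest.

    generate(problem) -> a candidate solution (stand-in for an LLM sampler)
    score(problem, solution) -> scalar verifier score (e.g., a PRM score)
    """
    candidates = [generate(problem) for _ in range(n)]
    return max(candidates, key=lambda sol: score(problem, sol))

# Toy usage with canned candidates; the stub verifier prefers "answer=4".
canned = ["answer=3", "answer=4", "answer=5"]
counter = count()
stub_generate = lambda p: canned[next(counter) % len(canned)]
stub_score = lambda p, s: 1.0 if s == "answer=4" else 0.0
print(best_of_n("2+2?", stub_generate, stub_score, n=3))  # answer=4
```

Scaling generator compute means increasing `n`; a generative PRM additionally lets `score` itself spend more compute per candidate.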
3 ThinkPRM
Figure 3: Collecting verification chains for finetuning. First, we prompt a reasoning model, in our case QwQ-32B-Preview, to critique a given solution to a problem. Then, we sample multiple verification chains, which we judge against gold process labels from PRM800K, keeping only chains that match the gold process labels.
Our goal is a verbalized PRM that, given a problem-solution pair, verifies every step in the solution via an extended chain-of-thought (CoT), such as the one shown in Fig. 44 in App. G. This section introduces issues with LLM-as-a-judge verification and proposes a data collection process (shown in Fig. 3) to curate high-quality synthetic verification CoTs for training such a PRM. The rest of the paper addresses the following research questions:
- RQ1: How well do LRMs perform under LLM-as-a-judge for process-level verification? (Section 3.1)
- RQ2: Can lightweight finetuning on synthetic verification CoTs improve the reliability and effectiveness of these models as process verifiers? (Section 3.2)
- RQ3: How does a finetuned verbalized PRM (ThinkPRM) compare to discriminative PRMs and LLM-as-a-Judge baselines under different test-time scaling scenarios? (Section 4)
3.1 LLM-as-a-judge PRMs are suboptimal
This section highlights limitations we observe when using off-the-shelf reasoning models as process verifiers, suggesting the need for finetuning. For evaluation, we use ProcessBench (Zheng et al., 2024), which includes problem-solution pairs with problems sourced from existing math benchmarks, along with ground-truth correctness labels. We report the binary F1-score by instructing models to verify full solutions and judge whether there exists a mistake. We use the two most challenging subsets of ProcessBench: OlympiadBench (He et al., 2024) and OmniMath (Gao et al., 2024), each comprising 1K problem-prefix pairs. For LLM-as-a-judge, we use the same prompt template as in Zheng et al. (2024), shown in Fig. 42, which we found to work best overall. Table 3 shows LLM-as-a-judge F1 scores, and a sample output by QwQ-32B-Preview is displayed in Fig. 41 in App. F.
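For reference, a standard binary F1 over such full-solution verdicts can be computed as below. This is a generic sketch that takes "solution contains an error" as the positive class, which is an assumption of this example rather than ProcessBench's exact aggregation:

```python
def binary_f1(preds, golds, positive=True):
    """Binary F1 over full-solution verdicts.

    preds/golds: lists of booleans; here True means "solution contains
    an error" (the positive class is a choice of this sketch).
    """
    tp = sum(p == positive and g == positive for p, g in zip(preds, golds))
    fp = sum(p == positive and g != positive for p, g in zip(preds, golds))
    fn = sum(p != positive and g == positive for p, g in zip(preds, golds))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(binary_f1([True, True, False, False], [True, False, False, True]))  # 0.5
```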
We observe several issues with LLM-as-a-judge verification. First, verification quality is highly sensitive to the instruction wording: a slight change in the instruction can affect the F1-score by up to 3-4 points. Second, a substantial number of the generated chains include invalid judgments, i.e., chains without an extractable overall label, as is clear in Fig. 10. These invalid judgments have several causes. In some cases, the final decision is in a different format than instructed; e.g., the model tries to solve the problem rather than verify the given solution, a behavior likely stemming from the model's training. We also noted multiple instances of overthinking (Chen et al., 2024b; Cuadron et al., 2025), which prevents the model from terminating within the token budget, and of infinite looping/repetition, where the model gets stuck trying alternative techniques to verify the solution.
Fig. 4 (left) shows a histogram of verification CoT lengths generated by R1-Qwen-14B in the LLM-as-a-judge setting. Accurate CoTs tend to be shorter, typically under 3K tokens, while inaccurate CoTs are more evenly distributed and spike sharply around 7K-8K tokens, highlighting the prevalence of overthinking and looping in long chains. We show examples of these behaviors in App. B. In the next section, we largely fix these issues via lightweight finetuning on synthetic verification CoTs.
3.2 Finetuning on synthetic data boosts LLM-as-a-judge verification
Inspired by recent work on reducing overthinking in long CoT models via training (Yu et al., 2024; Kang et al., 2024), we aim to improve LLM-as-a-judge performance via finetuning on high-quality verification data. Collecting real data would be expensive, so we rely on filtered synthetic data (Zelikman et al., 2022; Singh et al., 2023; Dong et al., 2023; Zhang et al., 2024b; Wang et al., 2024), an approach also known as rejection-sampling finetuning. To keep our approach simple, we refrain from more expensive training techniques, such as reinforcement learning or preference-based learning.
Synthetic data collection.
As training data, we sample synthetic verification CoTs from QwQ-32B-Preview, prompting it to verify each step in a solution prefix, using the instruction shown in Fig. 21. The problems and corresponding step-by-step solutions come from the PRM800K dataset (Lightman et al., 2023), which provides both model-generated solutions and human-verified step-level labels.
The sampling process continues until we obtain 1K verification CoTs, which correspond to 8K step labels in total. For data filtering, we use the following criteria: (i) the CoT must follow the expected format (i.e., include an extractable decision label for each step inside \boxed{}, as shown in Fig. 20); (ii) the generated step judgments must match the gold step labels from PRM800K; and (iii) the CoT length must be within a maximum budget, to avoid the excessive overthinking behavior we observed in Fig. 4 (left). This filtering ensures our training data is of sufficient quality. Note that process-based filtering is crucial for the performance of the resulting PRM, as we show in Section 5.7. Data collection is illustrated in Fig. 3, data statistics are in Section A.1, and a training example is in Fig. 20.
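The three filtering criteria above can be sketched as a single predicate. The helper name, the whitespace-token length proxy, and the exact regex are illustrative assumptions of this sketch, not the paper's code:

```python
import re

def keep_verification_cot(cot, gold_step_labels, max_tokens=4096):
    """Decide whether a sampled verification CoT passes the filters:

    (i)   format: one extractable \\boxed{correct|incorrect} per step,
    (ii)  correctness: step judgments match the gold process labels,
    (iii) length: the CoT stays within a maximum budget.
    """
    # (iii) crude length check, using whitespace tokens as a proxy.
    if len(cot.split()) > max_tokens:
        return False
    # (i) extract one decision label per step.
    preds = re.findall(r"\\boxed\{(correct|incorrect)\}", cot)
    if len(preds) != len(gold_step_labels):
        return False
    # (ii) every step judgment must match the gold process label.
    preds = [p == "correct" for p in preds]
    return preds == list(gold_step_labels)

good = "Step 1: \\boxed{correct} Step 2: \\boxed{incorrect}"
print(keep_verification_cot(good, [True, False]))  # True
print(keep_verification_cot(good, [True, True]))   # False
```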
Notably, our filtering relies only on step-level annotations, not on gold verification rationales or CoTs, making this pipeline scalable and low-overhead. In the absence of gold step-level annotations, one can obtain silver labels via Monte Carlo rollouts (Wang et al., 2023b; Chen et al., 2024a). While we train only on math data, the resulting PRM remains robust in other domains, such as science QA and code generation, as we show in Section 4.2. We then train our models on the 1K collected chains. Training is very lightweight; finetuning QwQ-32B-Preview takes only 4.5 hours on a single A100 80GB GPU. Refer to Section C.1 for training details.
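A Monte Carlo rollout labeler of the kind referenced above can be sketched as follows. The `rollout` and `is_correct_final` hooks stand in for the generator and answer checker; the names and the any-rollout-succeeds threshold are illustrative assumptions:

```python
def mc_step_label(prefix_steps, rollout, is_correct_final,
                  n_rollouts=8, threshold=0.0):
    """Estimate a silver correctness label for the last step of
    prefix_steps via Monte Carlo rollouts.

    rollout(prefix_steps) -> a completed solution continuing the prefix
    is_correct_final(solution) -> True if its final answer is correct

    The step is labeled correct if the fraction of rollouts reaching a
    correct final answer exceeds the threshold (here: any success).
    """
    hits = sum(is_correct_final(rollout(prefix_steps))
               for _ in range(n_rollouts))
    return hits / n_rollouts > threshold

# Stub usage: a prefix from which 2 of 4 rollouts succeed.
outcomes = iter([True, False, True, False])
label = mc_step_label(["step 1", "step 2"],
                      rollout=lambda p: next(outcomes),
                      is_correct_final=lambda s: s,
                      n_rollouts=4)
print(label)  # True
```

This is the compute-intensive alternative to human process labels mentioned in the introduction: each step label costs several full generations.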
Figure 4: Verifier performance on ProcessBench in light of CoT lengths. On the left, LLM-as-a-judge produces excessively long chains involving repetition, infinite looping, and overthinking, leading to worse verifier performance since the output never terminates. Training on the collected synthetic data substantially reduces these issues, as shown in the ThinkPRM plot on the right.
Finetuning on synthetic verification CoTs substantially improves the verifier.
We train ThinkPRM on the 1K chains, evaluate it on ProcessBench, and compare it to LLM-as-a-judge under the same base model. Fig. 10 shows verifier accuracy of the different models before and after our finetuning. We note a substantial boost in F1 across all models, with the 1.5B model gaining the most (over 70 F1 points) and the 14B model performing best. Looking at the ratio of invalid judgments in Fig. 10, we also note a significant reduction in invalid labels for all models except QwQ, where it slightly increases. Lastly, the reduction in overthinking and infinite looping discussed in the previous section is evident in Fig. 4 (right), where ThinkPRM generations maintain a reasonable length (1K-5K tokens) while being substantially more accurate.
<details>
<summary>x5.png Details</summary>

### Visual Description
## Bar Chart: CoTs without a valid label on ProcessBench
### Overview
This bar chart visualizes the percentage of total CoTs (Chain of Thoughts) without a valid label on the ProcessBench dataset, for different model configurations. Two evaluation methods, "ThinkPRM" and "LLM-as-a-judge", are compared across four model versions: QwQ-32B-preview, R1-Qwen-14B, R1-Qwen-7B, and R1-Qwen-1.5B. The y-axis represents the percentage of total CoTs, ranging from 0% to 60%.
### Components/Axes
* **Title:** "CoTs without a valid label on ProcessBench" (Top-center)
* **X-axis Label:** Model Configurations (Bottom-center)
* Categories: QwQ-32B-preview, R1-Qwen-14B, R1-Qwen-7B, R1-Qwen-1.5B
* **Y-axis Label:** "Percentage of total (%)" (Left-center)
* Scale: 0%, 10%, 20%, 30%, 40%, 50%, 60%
* **Legend:** (Bottom-left)
* "ThinkPRM" - Orange
* "LLM-as-a-judge" - Blue
### Detailed Analysis
The chart consists of paired bars for each model configuration, representing the results from "ThinkPRM" and "LLM-as-a-judge".
* **QwQ-32B-preview:**
* ThinkPRM: Approximately 11.5% (Orange bar)
* LLM-as-a-judge: Approximately 9.4% (Blue bar)
* **R1-Qwen-14B:**
* ThinkPRM: Approximately 2.3% (Orange bar)
* LLM-as-a-judge: Approximately 16.0% (Blue bar)
* **R1-Qwen-7B:**
* ThinkPRM: Approximately 1.2% (Orange bar)
* LLM-as-a-judge: Approximately 19.5% (Blue bar)
* **R1-Qwen-1.5B:**
* ThinkPRM: Approximately 1.9% (Orange bar)
* LLM-as-a-judge: Approximately 53.2% (Blue bar)
The "LLM-as-a-judge" bars generally increase in height from left to right, with a particularly large jump for R1-Qwen-1.5B. The "ThinkPRM" bars remain relatively low and consistent across all model configurations.
### Key Observations
* The percentage of CoTs without a valid label is significantly higher when evaluated using "LLM-as-a-judge", especially for the R1-Qwen-1.5B model.
* "ThinkPRM" consistently reports a low percentage of invalid labels across all models.
* There is a clear trend of increasing invalid labels with "LLM-as-a-judge" as the model size decreases (from QwQ-32B-preview to R1-Qwen-1.5B).
### Interpretation
The data suggests a discrepancy in how "ThinkPRM" and "LLM-as-a-judge" evaluate the validity of labels in CoTs on the ProcessBench dataset. "LLM-as-a-judge" appears to be more sensitive to label issues, or perhaps more critical in its assessment, leading to a higher percentage of flagged invalid labels. The increasing trend of invalid labels for "LLM-as-a-judge" with smaller models could indicate that smaller models generate CoTs with less consistent or accurate labeling, which are then more readily identified as invalid by the LLM judge. Alternatively, it could be that the LLM judge is more prone to false positives when evaluating the output of smaller models. The consistently low invalid label rate reported by "ThinkPRM" suggests it may be less effective at detecting these issues, or that it uses a different criteria for label validity. This difference in evaluation methods highlights the importance of considering the evaluation metric when assessing the performance of CoT generation models.
</details>
Figure 7: LLM-as-a-judge suffers from a significant ratio of verification CoTs that do not terminate with a parsable label, i.e., \boxed{yes} or \boxed{no}. Our finetuning process, which yields ThinkPRM, substantially mitigates this issue. Both verifiers are based on R1-Distill-Qwen-14B.
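The invalid-label failure mode above can be quantified with a small parser. A minimal sketch in Python, assuming each verification CoT is expected to end in a LaTeX-style \boxed{yes} or \boxed{no} verdict (the function names are illustrative, not from the paper's code):

```python
import re

def parse_verdict(cot: str):
    """Extract the final \\boxed{yes}/\\boxed{no} verdict from a verification CoT.

    Returns True for yes, False for no, and None when the chain never
    terminates with a parsable label (the failure mode shown in Figure 7).
    """
    matches = re.findall(r"\\boxed\{(yes|no)\}", cot)
    if not matches:
        return None
    return matches[-1] == "yes"  # the last boxed label is the final verdict

def invalid_label_rate(cots):
    """Fraction of CoTs with no parsable final label."""
    return sum(parse_verdict(c) is None for c in cots) / len(cots)
```

Taking the last match, rather than the first, tolerates chains that revise an earlier tentative verdict before committing to a final one.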
<details>
<summary>x6.png Details</summary>

### Visual Description
## Bar Chart: Verifier Performance on ProcessBench
### Overview
This bar chart displays the F1-score performance of two verifiers, "ThinkPRM" and "LLMs-as-a-judge", across four different models: QwQ-32B-preview, R1-Qwen-14B, R1-Qwen-7B, and R1-Qwen-1.5B. A horizontal dashed line indicates the performance of a "random" verifier, serving as a baseline.
### Components/Axes
* **Title:** "Verifier performance on ProcessBench" (top-center)
* **Y-axis:** "F1-score" (left-side, ranging from 0 to 100, with tick marks at 0, 20, 40, 60, 80, and 100)
* **X-axis:** Model names: "QwQ-32B-preview", "R1-Qwen-14B", "R1-Qwen-7B", "R1-Qwen-1.5B" (bottom-center)
* **Legend:** Located at the bottom-center.
* Orange: "ThinkPRM"
* Blue: "LLMs-as-a-judge"
* **Baseline:** A horizontal dashed line labeled "random" at an F1-score of approximately 40.
### Detailed Analysis
The chart consists of paired bars for each model, representing the F1-scores of "ThinkPRM" and "LLMs-as-a-judge".
* **QwQ-32B-preview:**
* ThinkPRM: Approximately 73.2 F1-score. (Orange bar)
* LLMs-as-a-judge: Approximately 53.0 F1-score. (Blue bar)
* **R1-Qwen-14B:**
* ThinkPRM: Approximately 86.5 F1-score. (Orange bar) - Highest score for ThinkPRM.
* LLMs-as-a-judge: Approximately 70.3 F1-score. (Blue bar)
* **R1-Qwen-7B:**
* ThinkPRM: Approximately 73.7 F1-score. (Orange bar)
* LLMs-as-a-judge: Approximately 45.2 F1-score. (Blue bar)
* **R1-Qwen-1.5B:**
* ThinkPRM: Approximately 76.0 F1-score. (Orange bar)
* LLMs-as-a-judge: Approximately 5.2 F1-score. (Blue bar) - Lowest score for LLMs-as-a-judge.
The "random" baseline is a horizontal dashed line at approximately 40 F1-score.
### Key Observations
* "ThinkPRM" consistently outperforms "LLMs-as-a-judge" across all models.
* The highest performance for "ThinkPRM" is achieved with the "R1-Qwen-14B" model (86.5 F1-score).
* "LLMs-as-a-judge" shows a significant drop in performance with the "R1-Qwen-1.5B" model (5.2 F1-score), falling well below the random baseline.
* Both verifiers perform above the random baseline for the QwQ-32B-preview, R1-Qwen-14B, and R1-Qwen-7B models.
### Interpretation
The data suggests that "ThinkPRM" is a more reliable verifier than "LLMs-as-a-judge" on the ProcessBench dataset. The performance of "LLMs-as-a-judge" is particularly sensitive to the underlying model, with a drastic decrease in F1-score when used with "R1-Qwen-1.5B". This could indicate that "LLMs-as-a-judge" requires larger or more complex models to achieve acceptable performance. The random baseline provides a crucial point of reference, highlighting that both verifiers offer value beyond random chance, except for the "LLMs-as-a-judge" with the "R1-Qwen-1.5B" model. The consistent outperformance of "ThinkPRM" suggests it may be a more robust and generalizable verifier across different model architectures. The chart demonstrates the importance of evaluating verifiers on a variety of models to understand their limitations and strengths.
</details>
Figure 10: Verification accuracy on 2K question-solution pairs from the two most challenging subsets of ProcessBench: OlympiadBench and OmniMath. ThinkPRM, obtained by finetuning the corresponding model on only 1K verification chains, performs better.
4 Test-time Scaling Experiments
This section answers RQ3, introduced in Section 3, by comparing ThinkPRM to baselines under different scaling scenarios. We study how ThinkPRM performs under different generation budgets with (i) best-of-N selection (Wu et al., 2024; Brown et al., 2020) and (ii) guided beam search (Snell et al., 2024; Beeching et al., ). We also explore how ThinkPRM performs when verifier compute is scaled either in parallel, by aggregating decisions over multiple verification CoTs, or sequentially, through longer CoTs obtained by forcing the model to double-check or self-correct its verification.
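To make the second scenario concrete, the following is a minimal sketch of verifier-guided beam search in Python. Here `expand` and `score` are hypothetical stand-ins for the step generator and the PRM, not the paper's implementation:

```python
def guided_beam_search(expand, score, question, num_beams=4, max_steps=10):
    """Sketch of verifier-guided beam search.

    expand(question, partial) proposes candidate next steps (an empty list
    means the partial solution is complete); score(question, partial) is
    the verifier's score for the partial solution. Both are hypothetical
    stand-ins for the generator and the PRM.
    """
    beams = [""]
    for _ in range(max_steps):
        candidates = []
        for beam in beams:
            steps = expand(question, beam)
            if steps:
                candidates.extend(beam + step for step in steps)
            else:
                candidates.append(beam)  # finished beams carry over unchanged
        candidates.sort(key=lambda c: score(question, c), reverse=True)
        if candidates == beams:  # every beam is finished: nothing changed
            break
        beams = candidates[:num_beams]
    return beams[0]  # highest-scoring solution found
```

At each step the PRM prunes the frontier to the `num_beams` highest-scoring partial solutions, which is what makes the search "guided" rather than exhaustive.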
4.1 Experimental Setup
In the remainder of the paper, we mainly use our finetuned verifiers based on R1-Distill-Qwen-1.5B and R1-Distill-Qwen-14B, as these provide the best tradeoff between size and performance. We refer to these as ThinkPRM-1.5B and ThinkPRM-14B, respectively.
Baselines.
We compare ThinkPRM to DiscPRM, which uses the same base model as ThinkPRM but is finetuned with binary cross-entropy on the entire PRM800K dataset, totaling 712K process labels, i.e., two orders of magnitude more than our training data. Details on finetuning DiscPRMs are in Section C.2. We also compare to unweighted majority voting, which simply selects the most frequent answer across the samples (Wang et al., 2022), and to LLM-as-a-Judge using the same base model as ThinkPRM, prompted as in Section 3.1.
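The unweighted majority-voting baseline reduces to a frequency count over final answers; a minimal sketch in Python:

```python
from collections import Counter

def majority_vote(final_answers):
    """Unweighted majority voting (self-consistency): return the most
    frequent final answer across the sampled solutions."""
    return Counter(final_answers).most_common(1)[0][0]
```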
Tasks and Models.
We show results on two math reasoning tasks, namely 100 problems from MATH-500 (Hendrycks et al., 2021) covering all difficulty levels (see Section E.5 for more details), and American Invitational Mathematics Examination (AIME) problems for 2024. Since ThinkPRM was finetuned only on math data, we also study out-of-domain generalization on two tasks: scientific reasoning and code generation. For scientific reasoning, we use the physics subset of GPQA-Diamond (Rein et al., 2024), consisting of 86 PhD-level multiple-choice questions. For code generation, we use a 200-problem subset from the v5 release of LiveCodeBench (Jain et al., 2024).
On MATH-500, we show results with ThinkPRM-1.5B and ThinkPRM-14B using two different generator models: Qwen-2.5-14B and Llama-3.2-3B-Instruct. The former is used for best-of-N and the latter for beam search, since search is compute-intensive. Showing results with different generators ensures that our conclusions are not specific to a certain model family or size. For the more challenging tasks, namely AIME '24 and GPQA, we use a more capable generator, Qwen-2.5-32B-Instruct. For code generation, we use Qwen-2.5-Coder-7B (Hui et al., 2024). Implementation and hyperparameter details on how we select the final answer with best-of-N and beam search are in App. E.
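For reference, best-of-N selection with a verifier reduces to an argmax over verifier scores. A minimal sketch, assuming each sampled solution has already been assigned a scalar score (e.g., a PRM's aggregated step scores):

```python
def best_of_n(solutions, scores):
    """Return the sampled solution with the highest verifier score."""
    best_idx = max(range(len(solutions)), key=scores.__getitem__)
    return solutions[best_idx]
```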
<details>
<summary>x7.png Details</summary>

### Visual Description
## Line Chart: Model Accuracy vs. Number of Solutions
### Overview
The image presents two line charts comparing the accuracy of different Large Language Models (LLMs) on two mathematical reasoning benchmarks: AIME '24 and MATH-500. The charts show how accuracy changes as the number of solutions generated by the models increases.
### Components/Axes
**Chart 1 (AIME '24):**
* **Title:** Best-of-N: AIME '24
* **Generator:** Qwen2.5-32B-Instruct
* **X-axis:** Number of solutions (labeled as 2⁰, 2¹, 2², 2³), representing 1, 2, 4, and 8 solutions respectively.
* **Y-axis:** Accuracy (%), ranging from approximately 20% to 33%.
* **Legend:**
* ThinkPRM-14B (Orange)
* DiscPRM-14B (Teal)
* LLM-as-a-judge (Gray)
* Majority (Peach)
**Chart 2 (MATH-500):**
* **Title:** Best-of-N: MATH-500
* **Generator:** Qwen2.5-14B
* **X-axis:** Number of solutions (labeled as 2¹, 2², 2³, 2⁴, 2⁵, 2⁶), representing 2, 4, 8, 16, 32, and 64 solutions respectively.
* **Y-axis:** Accuracy (%), ranging from approximately 50% to 85%.
* **Legend:**
* ThinkPRM-14B (Orange)
* DiscPRM-14B (Teal)
* LLM-as-a-judge (Gray)
* Majority (Peach)
The legend is positioned at the bottom-center of the image, spanning both charts.
### Detailed Analysis or Content Details
**Chart 1 (AIME '24):**
* **ThinkPRM-14B (Orange):** Starts at approximately 21% accuracy with 1 solution (2⁰), increases to roughly 26% with 2 solutions (2¹), then rises to about 29% with 4 solutions (2²), and finally reaches approximately 32.5% with 8 solutions (2³). The line slopes upward consistently.
* **DiscPRM-14B (Teal):** Begins at approximately 21% accuracy with 1 solution (2⁰), increases to around 25% with 2 solutions (2¹), then rises to about 29% with 4 solutions (2²), and reaches approximately 30.5% with 8 solutions (2³). The line slopes upward, but less steeply than ThinkPRM-14B.
* **LLM-as-a-judge (Gray):** Remains relatively flat, starting at approximately 21% with 1 solution (2⁰) and ending at around 23% with 8 solutions (2³).
* **Majority (Peach):** Starts at approximately 21% accuracy with 1 solution (2⁰) and remains relatively flat, fluctuating around 21-22% throughout.
**Chart 2 (MATH-500):**
* **ThinkPRM-14B (Orange):** Starts at approximately 51% accuracy with 2 solutions (2¹), increases sharply to around 74% with 4 solutions (2²), then rises to about 77% with 8 solutions (2³), plateaus around 77-78% with 16 solutions (2⁴), and finally reaches approximately 83% with 64 solutions (2⁶).
* **DiscPRM-14B (Teal):** Begins at approximately 51% accuracy with 2 solutions (2¹), increases to around 72% with 4 solutions (2²), then rises to about 75% with 8 solutions (2³), plateaus around 75-76% with 16 solutions (2⁴), and reaches approximately 79% with 64 solutions (2⁶).
* **LLM-as-a-judge (Gray):** Starts at approximately 51% accuracy with 2 solutions (2¹), increases to around 64% with 4 solutions (2²), then rises to about 73% with 8 solutions (2³), plateaus around 73-74% with 16 solutions (2⁴), and reaches approximately 75% with 64 solutions (2⁶).
* **Majority (Peach):** Starts at approximately 51% accuracy with 2 solutions (2¹), increases to around 68% with 4 solutions (2²), then rises to about 75% with 8 solutions (2³), plateaus around 75-76% with 16 solutions (2⁴), and reaches approximately 78% with 64 solutions (2⁶).
### Key Observations
* In both charts, increasing the number of solutions generally improves accuracy, but with diminishing returns.
* ThinkPRM-14B consistently outperforms the other methods on both benchmarks, especially as the number of solutions increases.
* LLM-as-a-judge and Majority perform similarly, but their accuracy gains are less pronounced than those of ThinkPRM-14B and DiscPRM-14B.
* The AIME '24 benchmark shows lower overall accuracy compared to the MATH-500 benchmark.
* The accuracy of all models on AIME '24 plateaus at a lower level than on MATH-500.
### Interpretation
These charts demonstrate the effectiveness of generating multiple solutions to mathematical problems and then selecting the best one. The "Best-of-N" approach, where N represents the number of solutions, significantly improves accuracy compared to relying on a single solution. The Qwen2.5 models used as generators appear to be effective in producing diverse solutions.
The superior performance of ThinkPRM-14B suggests that its prompting strategy or architecture is particularly well-suited for mathematical reasoning. The relatively flat performance of LLM-as-a-judge indicates that simply using an LLM to judge the correctness of solutions is not as effective as more sophisticated methods like ThinkPRM-14B or DiscPRM-14B.
The diminishing returns observed as the number of solutions increases suggest that there is a point at which generating additional solutions provides only marginal improvements in accuracy. This highlights the importance of balancing computational cost with performance gains. The difference in overall accuracy between the two benchmarks (AIME '24 and MATH-500) suggests that AIME '24 is a more challenging benchmark, requiring more sophisticated reasoning abilities.
</details>
Figure 11: Best-of-N on AIME '24 and MATH-500. Compared to LLM-as-a-judge, DiscPRM, and (unweighted) majority vote, ThinkPRM-14B exhibits the best accuracy scaling curve.
Scaling verifier compute.
Compared to DiscPRMs, generative reward models enable an extra dimension of scaling to squeeze out more performance: scaling verifier compute. Specifically, ThinkPRM allows for two types of scaling. First, we use parallel scaling (Mahan et al., 2024; Brown et al., 2024) by sampling $K$ independent CoTs and averaging their scores; we denote this scaling with "@K" throughout the rest of the paper. Second, and more specific to long reasoning models, we use sequential scaling, e.g., by enabling the model to double-check its initial verification (Xiong et al., 2025; Kumar et al., 2024; Ye et al., 2024). Inspired by Muennighoff et al. (2025), we use a trigger phrase such as "Let's verify again" to elicit self-correction of the earlier verification. See Section E.4 for more details.
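Both scaling modes can be sketched in a few lines of Python. Here `verify` and `generate` are hypothetical callables (a CoT-sampling verifier that returns a solution-level score, and a text-continuation function), and the trigger phrase follows the budget-forcing recipe described above:

```python
def parallel_score(verify, question, solution, k=4):
    """Parallel scaling ("@K"): sample K independent verification CoTs
    and average their solution-level scores."""
    return sum(verify(question, solution) for _ in range(k)) / k

def sequential_verify(generate, prompt, rounds=2, trigger="Let's verify again."):
    """Sequential scaling: append a trigger phrase after each finished
    verification pass to elicit re-checking of the earlier verification."""
    cot = generate(prompt)
    for _ in range(rounds - 1):
        prompt = prompt + cot + "\n" + trigger + "\n"
        cot = generate(prompt)
    return cot
```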
<details>
<summary>x8.png Details</summary>

### Visual Description
## Line Chart: Guided Beam Search - MATH-500 Accuracy
### Overview
This line chart illustrates the relationship between the number of beams used in a guided beam search and the resulting accuracy on the MATH-500 dataset. The chart compares the performance of several different models: ThinkPRM-1.5B, ThinkPRM-1.5B@4, DiscPRM-1.5B, MathShepherd-7B, and RLHFflow-8B-Deepseek. The generator used for all models is Llama-3.2-3B-Instruct.
### Components/Axes
* **Title:** "Guided beam search: MATH-500"
* **Subtitle:** "Generator: Llama-3.2-3B-Instruct" (positioned centrally below the title)
* **X-axis:** "Number of beams" with markers at 2ā°, 2¹, 2², 2³, and 2ā“.
* **Y-axis:** "Accuracy (%)" with a scale ranging from approximately 55.0% to 72.5%.
* **Legend:** Located at the bottom of the chart, horizontally aligned. It contains the following labels and corresponding colors:
* ThinkPRM-1.5B (Orange)
* ThinkPRM-1.5B@4 (Dark Orange, dashed line)
* DiscPRM-1.5B (Teal)
* MathShepherd-7B (Gray)
* RLHFflow-8B-Deepseek (Purple)
### Detailed Analysis
The chart displays five distinct lines, each representing a different model's accuracy as the number of beams increases.
* **ThinkPRM-1.5B (Orange):** The line slopes upward consistently.
* At 2⁰ beams: Approximately 62.2% accuracy.
* At 2¹ beams: Approximately 63.5% accuracy.
* At 2² beams: Approximately 65.0% accuracy.
* At 2³ beams: Approximately 67.0% accuracy.
* At 2⁴ beams: Approximately 68.2% accuracy.
* **ThinkPRM-1.5B@4 (Dark Orange, dashed):** This line shows the most significant upward trend.
* At 2⁰ beams: Approximately 65.0% accuracy.
* At 2¹ beams: Approximately 66.5% accuracy.
* At 2² beams: Approximately 68.5% accuracy.
* At 2³ beams: Approximately 70.0% accuracy.
* At 2⁴ beams: Approximately 72.5% accuracy.
* **DiscPRM-1.5B (Teal):** The line shows a moderate upward trend, with a slight flattening at the higher beam counts.
* At 2⁰ beams: Approximately 57.5% accuracy.
* At 2¹ beams: Approximately 58.5% accuracy.
* At 2² beams: Approximately 62.0% accuracy.
* At 2³ beams: Approximately 63.0% accuracy.
* At 2⁴ beams: Approximately 63.5% accuracy.
* **MathShepherd-7B (Gray):** The line shows a relatively flat trend, with a slight increase.
* At 2⁰ beams: Approximately 55.0% accuracy.
* At 2¹ beams: Approximately 56.0% accuracy.
* At 2² beams: Approximately 57.5% accuracy.
* At 2³ beams: Approximately 57.5% accuracy.
* At 2⁴ beams: Approximately 61.0% accuracy.
* **RLHFflow-8B-Deepseek (Purple):** The line shows an upward trend, but less pronounced than ThinkPRM-1.5B@4.
* At 2⁰ beams: Approximately 55.5% accuracy.
* At 2¹ beams: Approximately 57.0% accuracy.
* At 2² beams: Approximately 58.5% accuracy.
* At 2³ beams: Approximately 60.5% accuracy.
* At 2⁴ beams: Approximately 62.5% accuracy.
### Key Observations
* ThinkPRM-1.5B@4 consistently outperforms all other models across all beam counts.
* The performance gap between ThinkPRM-1.5B@4 and the other models widens as the number of beams increases.
* MathShepherd-7B exhibits the lowest overall accuracy and the flattest trend.
* Increasing the number of beams generally improves accuracy for all models, but the rate of improvement varies.
### Interpretation
The data suggests that guided beam search is an effective technique for improving the accuracy of language models on the MATH-500 dataset. The model ThinkPRM-1.5B@4, in particular, demonstrates a strong ability to leverage the benefits of increasing the number of beams. The relatively poor performance of MathShepherd-7B may indicate that it is less well-suited to this particular task or that it requires different optimization strategies. The consistent upward trends across most models suggest that there is still potential for further accuracy gains by exploring even larger beam counts. The difference in performance between ThinkPRM-1.5B and ThinkPRM-1.5B@4 suggests that the "@4" modification significantly improves the model's ability to utilize beam search. The use of Llama-3.2-3B-Instruct as the generator provides a common baseline for comparing the performance of these different models.
</details>
Figure 14: Comparison to off-the-shelf PRMs trained on far more step labels than ThinkPRM. $@K$ denotes parallel scaling by averaging scores over $K$ CoTs.
<details>
<summary>x9.png Details</summary>

### Visual Description
## Line Chart: Filtering based on Process vs. Outcome
### Overview
This line chart compares the accuracy of two filtering methods ā process-based (labeled "ours") and outcome-based (GenRM) ā as a function of the number of beams used. A baseline accuracy score from an LLM-as-a-judge is also presented for comparison. The generator used is Llama-3.2-3B-Instruct.
### Components/Axes
* **Title:** Filtering based on Process vs. Outcome
* **Subtitle:** Generator: Llama-3.2-3B-Instruct
* **X-axis:** Number of beams. Markers are at 2⁰, 2¹, 2², 2³, and 2⁴.
* **Y-axis:** Accuracy (%) with a scale ranging from approximately 56% to 68%.
* **Legend:**
* Process-based (ours) - Solid orange line with star markers.
* Outcome-based (GenRM) - Solid yellow line with star markers.
* LLM-as-a-judge - Dashed blue line.
### Detailed Analysis
* **Process-based (ours):** The orange line shows an upward trend.
* At 2⁰ (1 beam): Approximately 62% accuracy.
* At 2¹ (2 beams): Approximately 63% accuracy.
* At 2² (4 beams): Approximately 64% accuracy.
* At 2³ (8 beams): Approximately 66% accuracy.
* At 2⁴ (16 beams): Approximately 68% accuracy.
* **Outcome-based (GenRM):** The yellow line shows a downward trend initially, then a slight increase.
* At 2⁰ (1 beam): Approximately 61% accuracy.
* At 2¹ (2 beams): Approximately 58% accuracy.
* At 2² (4 beams): Approximately 56% accuracy.
* At 2³ (8 beams): Approximately 57% accuracy.
* At 2⁴ (16 beams): Approximately 59% accuracy.
* **LLM-as-a-judge:** The blue dashed line is approximately horizontal, indicating a relatively constant accuracy.
* Accuracy is consistently around 62%.
### Key Observations
* The process-based filtering method consistently outperforms the outcome-based method across all beam numbers.
* The process-based method shows a clear positive correlation between the number of beams and accuracy. Increasing the number of beams leads to improved accuracy.
* The outcome-based method initially decreases in accuracy as the number of beams increases, then shows a slight recovery.
* The LLM-as-a-judge provides a stable baseline for comparison.
### Interpretation
The data suggests that filtering based on the *process* of generation (the "ours" method) is more effective than filtering based on the *outcome* (GenRM) when using the Llama-3.2-3B-Instruct generator. The increasing accuracy of the process-based method with more beams indicates that a more thorough evaluation of the generation process leads to better results. The relatively flat line for the LLM-as-a-judge suggests that its accuracy doesn't significantly change with the number of beams, implying it's a consistent but potentially less sensitive metric. The initial drop in accuracy for the outcome-based method could indicate that simply evaluating the final output is less reliable, and may be susceptible to noise or superficial characteristics. The slight recovery at higher beam numbers might suggest that with more samples, the outcome-based method can identify some improvements, but it still doesn't reach the performance of the process-based approach. This implies that understanding *how* a model generates text is more valuable than simply assessing *what* it generates.
</details>
Figure 17: Ablating the data filtering mechanism, where our process-based filtering yields better PRMs. LLM-as-a-judge is shown with number of beams = 16.
4.2 Results
ThinkPRM outperforms DiscPRM and LLM-as-a-Judge.
Under best-of-N selection on MATH-500, shown in Fig. 11 (right), ThinkPRM yields reasoning accuracy higher than or comparable to DiscPRM under all sampling budgets. The trend holds on the more challenging AIME '24, shown in Fig. 11 (left). Additionally, Fig. 1 (right) shows beam search results on MATH-500, with ThinkPRM-1.5B surpassing DiscPRM and LLM-as-a-Judge.
ThinkPRM surpasses off-the-shelf PRMs.
We compare ThinkPRM-1.5B to two strong off-the-shelf PRMs, namely RLHFFlow-Deepseek-PRM (Xiong et al., 2024) and MATH-Shepherd-PRM (Wang et al., 2023b). These PRMs are trained on even more data than PRM800K and are larger than 1.5B parameters. We show results under verifier-guided search on MATH-500 in Fig. 14, with ThinkPRM-1.5B's scaling curve surpassing all baselines and outperforming RLHFFlow-Deepseek-PRM, the best off-the-shelf PRM among those we tested, by more than 7% across all beam sizes.
ThinkPRM excels on out-of-domain tasks.
As for OOD performance on GPQA-physics (Fig. 18, left), ThinkPRM scales better than DiscPRM, which drops substantially at N=32, outperforming it by 8%. On LiveCodeBench (Fig. 18, right), ThinkPRM also outperforms DiscPRM, by 4.5%. On LiveCodeBench, Qwen2.5-7B-Math-PRM (Zhang et al., 2025), a discriminative PRM trained on a substantial amount of process labels obtained from LLM-as-a-judge data and Monte Carlo rollouts, struggles when applied out-of-domain. Our results shed light on the fragility of discriminative PRMs under domain shift, in contrast with generative PRMs.
Scaling ThinkPRM compute boosts performance.
Under verifier-guided search (shown in Fig. 14), parallel scaling with ThinkPRM-1.5B@4 boosts accuracy by more than 5 percentage points and yields the best accuracy on MATH-500. In addition, parallel scaling with ThinkPRM-14B@4 and ThinkPRM-14B@8 boosts best-of-N performance on MATH-500, as shown in Fig. 31 in Section E.6. We now move to sequential scaling of verifier compute by forcing ThinkPRM to recheck its own verification. Since this can be compute-intensive, we run it only on 200 problems from the OmniMath subset of ProcessBench and observe how verification F1 improves as we force the model to think for longer, as shown in Fig. 2. ThinkPRM exhibits better scaling behavior than LLM-as-a-judge, whose performance drops after 16K tokens, and outperforms DiscPRM-14B by 15 F1 points. In summary, ThinkPRM is consistently better than LLM-as-a-judge under both parallel and sequential scaling.
Parallel scaling vs. sequential scaling.
Is it preferable to scale verifier compute in parallel or sequentially? We investigate this by comparing the two modes of scaling under the same token budget. Fig. 32 in Section E.6 shows best-of-N performance with Qwen-2.5-14B under parallel and sequential scaling with $K=2,4$. Overall, the performance of the two methods is fairly close, but we observe a slight advantage for parallel scaling under certain budgets.
<details>
<summary>x10.png Details</summary>

### Visual Description
## Charts: Best-of-N Performance Comparison
### Overview
The image presents two line charts comparing the performance of different language models on two out-of-distribution (OOD) datasets: GPQA-physics and LiveCodeBench. The charts plot performance metrics (Accuracy for GPQA-physics and Pass Rate for LiveCodeBench) against the number of solutions generated by the models.
### Components/Axes
Both charts share the following components:
* **X-axis:** "Number of solutions" with markers at 2ā°, 2¹, 2², 2³, 2ā“, and 2āµ.
* **Y-axis:** The left chart displays "Accuracy (%)", ranging from approximately 54% to 72%. The right chart displays "Pass rate (%)", ranging from approximately 55% to 67%.
* **Legend:** Located at the bottom of each chart, identifying the different models/strategies:
* ThinkPRM-14B (Orange, dashed line)
* DiscPRM-14B (Green, solid line)
* Qwen2.5-7B-Math-PRM (Gray, dashed-dotted line)
* Majority (Gray, solid line)
* **Title:** Each chart has a title indicating the dataset being evaluated:
* Left Chart: "Best-of-N: GPQA-physics (OOD)"
* Right Chart: "Best-of-N: LiveCodeBench (OOD)"
* **Generator:** Each chart also indicates the generator used:
* Left Chart: "Generator: Qwen-2.5-32B-Instruct"
* Right Chart: "Generator: Qwen-2.5-Coder-7B"
### Detailed Analysis or Content Details
**Chart 1: GPQA-physics (OOD)**
* **ThinkPRM-14B (Orange):** Starts at approximately 54.5% at 2⁰, increases steadily to around 68% at 2⁴, and peaks at approximately 71.5% at 2⁵.
* **DiscPRM-14B (Green):** Begins at approximately 56% at 2⁰, rises to around 64% at 2³, then declines to approximately 62% at 2⁵.
* **Qwen2.5-7B-Math-PRM (Gray, dashed-dotted):** Starts at approximately 55% at 2⁰, increases to around 61% at 2⁴, and remains relatively stable at approximately 61% at 2⁵.
* **Majority (Gray, solid):** Starts at approximately 54% at 2⁰, increases to around 58% at 2³, and remains relatively stable at approximately 58% at 2⁵.
**Chart 2: LiveCodeBench (OOD)**
* **ThinkPRM-14B (Orange):** Starts at approximately 57.5% at 2⁰, increases to around 66% at 2⁴, and declines slightly to approximately 65% at 2⁵.
* **DiscPRM-14B (Green):** Begins at approximately 59% at 2⁰, rises to around 64% at 2³, and declines to approximately 62% at 2⁵.
* **Qwen2.5-7B-Math-PRM (Gray, dashed-dotted):** Starts at approximately 57.5% at 2⁰, increases to around 61% at 2³, and declines to approximately 58% at 2⁵.
* **Majority (Gray, solid):** Starts at approximately 55% at 2⁰, increases to around 59% at 2³, and remains relatively stable at approximately 58% at 2⁵.
### Key Observations
* In both charts, ThinkPRM-14B generally outperforms the other models, especially at higher numbers of solutions (2⁴ and 2⁵).
* DiscPRM-14B shows an initial increase in performance but then plateaus or declines.
* Qwen2.5-7B-Math-PRM and Majority consistently perform at a lower level than ThinkPRM-14B and DiscPRM-14B.
* The performance gap between the models tends to widen as the number of solutions increases.
### Interpretation
The data suggests that increasing the number of solutions generated by the models can improve performance on both GPQA-physics and LiveCodeBench datasets. ThinkPRM-14B appears to be the most effective strategy, benefiting significantly from generating more solutions. The decline in performance for DiscPRM-14B after 2³ might indicate a point of diminishing returns or potential overfitting to the generated solutions. The relatively stable performance of the Majority baseline suggests that simply selecting the most frequent solution doesn't yield the same benefits as more sophisticated generation and selection strategies. The difference in performance between the two generators (Qwen-2.5-32B-Instruct for GPQA-physics and Qwen-2.5-Coder-7B for LiveCodeBench) suggests that the choice of generator is also important and may be task-dependent. The "Best-of-N" approach, where multiple solutions are generated and the best one is selected, is a promising technique for improving the performance of language models on challenging tasks. The OOD nature of the datasets highlights the importance of evaluating models on data that differs from their training distribution.
</details>
Figure 18: Best-of-N on two out-of-domain tasks: science QA (GPQA-Physics) and code generation (LiveCodeBench). Although ThinkPRM was only finetuned on math, it exhibits superior OOD performance than the baselines, especially at larger sampling budgets, where the baselines fall short. Discriminative PRMs struggle despite being trained on orders of magnitude more process labels.
5 Analysis and Discussion
5.1 Training data efficiency
A major strength of ThinkPRM is its training data efficiency compared to discriminative verifiers. Here, we study the training scaling behavior of ThinkPRM-14B by training it on 500 and 1K total examples collected using the pipeline in Section 3.2, which roughly correspond to 4K and 8K process labels from PRM800K. We compare to DiscPRM-14B trained with 1K, 10K, 50K, and 98K examples, where 98K corresponds to training on the full PRM800K train set, which includes 712K step labels. Fig. 1 (left) contrasts the training data scaling behavior of ThinkPRM-14B with that of DiscPRM-14B: ThinkPRM-14B's performance scales substantially better with two orders of magnitude fewer process labels. This primarily stems from ThinkPRM's use of the text generation and reasoning abilities of the underlying models.
While we train ThinkPRM using only 1K data points, we investigate whether it benefits from training on more data. Using the same pipeline, we collect and filter additional verification CoTs, obtaining a total of 65K chains. We then finetune R1-Distill-Qwen-1.5B and R1-Distill-Qwen-14B on these for a single epoch while keeping all other training hyperparameters fixed, and compare the resulting models to the 1K-trained version of ThinkPRM under best-of-N selection on MATH-500. Fig. 38 in Section E.7 shows a performance boost from training on the 65K examples compared to only 1K, suggesting that ThinkPRM can utilize more training data when available.
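The best-of-N selection used in these evaluations scores each sampled solution with the verifier and returns the top-scoring one. A minimal sketch in Python, assuming per-step scores have already been extracted from the verification CoT (product aggregation is used here; taking the minimum is a common alternative):

```python
from math import prod

def solution_score(step_scores):
    """Aggregate per-step verifier scores into one solution-level score.
    We use the product here; the minimum is a common alternative."""
    return prod(step_scores)

def best_of_n(candidates):
    """candidates: list of (solution_text, step_scores) pairs.
    Returns the solution with the highest aggregated verifier score."""
    return max(candidates, key=lambda c: solution_score(c[1]))[0]
```

For example, `best_of_n([("A", [0.9, 0.9]), ("B", [0.99, 0.2])])` returns `"A"`, since a single low step score drags down the product for `"B"`.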
5.2 Effect of Verification CoT Length on PRM Quality
We study whether the length of verification chains of thought affects the quality of the resulting generative verifier. Specifically, we compare ThinkPRM trained on the full, long synthetic CoTs with a variant trained on short, compressed versions of the same 1K CoTs. To obtain the short CoTs, we instruct gpt-4o-mini to rewrite each original CoT into a concise version that preserves only the essential reasoning. We then train R1-Qwen-1.5B and R1-Qwen-14B on these short CoTs and evaluate verification F1 on ProcessBench. Table 1 reports the comparison.
| Model | Long CoTs | Short CoTs |
| --- | --- | --- |
| R1-Qwen-1.5B | 87.3 / 75.7 | 64.8 / 66.7 |
| R1-Qwen-14B | 87.3 / 85.7 | 55.3 / 60.8 |

Table 1: Verification F1 (two ProcessBench subsets, shown as a / b) when training R1 models on long versus short CoTs.
The substantial performance drop when training on short CoTs underscores how much ThinkPRM benefits from extended reasoning. Since verification is a complex task, allocating more reasoning tokens to it at inference time improves performance. These results support the value of long verification CoTs for training.
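The compression step can be approximated with a simple rewriting prompt. The exact instruction given to gpt-4o-mini is not specified here, so the wording below is an illustrative assumption:

```python
def build_compression_prompt(verification_cot: str) -> str:
    """Build a rewriting prompt that asks a model to shorten a verification
    CoT while preserving the per-step verdicts. The phrasing is hypothetical,
    in the spirit of the compression described in Section 5.2."""
    return (
        "Rewrite the following step-by-step verification into a concise "
        "version that preserves only the essential reasoning and the "
        "correctness verdict for each step. Remove backtracking, "
        "restatements, and exploratory tangents.\n\n"
        f"Verification:\n{verification_cot}\n\nConcise rewrite:"
    )
```

The returned string would then be sent to the rewriting model; the short CoTs it produces replace the originals as training targets.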
5.3 Reasoning traces without clear step boundaries
So far, we have used ThinkPRM to verify short CoTs with clear step delimiters. Here, we investigate whether ThinkPRM can still verify long CoTs that involve extended reasoning, backtracking, and self-correction. As a generator, we use Qwen3-1.7B (Yang et al., 2025) with thinking mode. Although ThinkPRM was only trained on short solutions from PRM800K, it can still verify long CoTs and outperforms the baselines, as shown in Fig. 19 (left). Inspecting ThinkPRM's outputs, we found that it extracts and verifies the individual steps embedded in the long CoT; an example is in Fig. 45.
5.4 Compute-matched comparison to self-consistency
Under a fixed test-time compute budget for best-of-N, how does ThinkPRM compare to simply sampling more solutions from the generator and applying majority voting? To investigate this, we conduct a compute-matched analysis on MATH-500 and GPQA-Physics. Fig. 19 (middle and right) plots solution accuracy as a function of sampling FLOPs for MATH-500 and GPQA-Physics. At low sampling budgets, best-of-N with ThinkPRM performs comparably to self-consistency, but as the compute budget increases, ThinkPRM gains a clear advantage. These findings agree with recent work on outcome reward models (Singhi et al., 2025).
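The compute matching can be reproduced with the standard back-of-the-envelope estimate of roughly 2 x parameters FLOPs per generated token. A sketch, where the token counts and model sizes are illustrative rather than the paper's exact accounting:

```python
def gen_flops(n_params: float, n_tokens: int) -> float:
    """Rule-of-thumb inference cost: ~2 * parameters FLOPs per token."""
    return 2.0 * n_params * n_tokens

def total_flops(n_solutions, sol_tokens, gen_params,
                verif_tokens=0, verif_params=0.0):
    """FLOPs to sample n_solutions solutions, plus (for best-of-N with a
    generative verifier) one verification CoT per solution."""
    flops = n_solutions * gen_flops(gen_params, sol_tokens)
    flops += n_solutions * gen_flops(verif_params, verif_tokens)
    return flops

# Self-consistency spends the whole budget on solutions; best-of-N splits
# the same budget between solutions and verification chains. With equal
# solution/verification lengths and model sizes, 16 majority-vote samples
# cost as much as best-of-8 plus 8 verification CoTs.
sc = total_flops(16, 1000, 14e9)
bon = total_flops(8, 1000, 14e9, verif_tokens=1000, verif_params=14e9)
```

Comparing accuracy at equal `total_flops` values yields the compute-matched curves in Fig. 19.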
5.5 ThinkPRM with Monte Carlo step labels
To train ThinkPRM, we have relied on manual step labels from PRM800K. Since automatic labels, e.g., via Monte Carlo rollouts (Luo et al., 2024), are cheaper, we test whether ThinkPRM can be trained using them. We train ThinkPRM-1.5B using 1K synthetic chains based on labels from the Math-Shepherd dataset (Wang et al., 2023b). Performance on ProcessBench is shown in Table 4: training ThinkPRM with automatic labels yields performance comparable to training with manual labels, showing that our training pipeline is agnostic to the step-labeling strategy.
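Monte Carlo step labeling in the style of Math-Shepherd can be sketched as follows. `rollout_fn` stands in for the generator completing a solution from a prefix, and the any-rollout-succeeds rule is one common variant, not necessarily the exact one used to build the dataset:

```python
def mc_step_label(prefix_steps, rollout_fn, gold_answer, n_rollouts=8):
    """Label the last step of `prefix_steps` by completing the solution
    n_rollouts times and checking the final answer (Math-Shepherd style).
    Returns (label, score): score is the empirical success rate; the step
    is labeled correct if at least one rollout reaches the gold answer."""
    hits = sum(rollout_fn(prefix_steps) == gold_answer
               for _ in range(n_rollouts))
    score = hits / n_rollouts
    return score > 0, score
```

In practice `rollout_fn` samples continuations from the generator; a prefix containing a genuine error rarely recovers, so its rollouts miss the gold answer and the step is labeled incorrect.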
<details>
<summary>x11.png Details</summary>

### Visual Description
## Line Chart: Best-of-N: MATH-500
### Overview
This line chart displays the accuracy of different models on the MATH-500 dataset as a function of the number of samples (N) used in a "Best-of-N" approach. The x-axis represents the number of samples, expressed as powers of 2 (2<sup>1</sup> to 2<sup>4</sup>), and the y-axis represents the accuracy in percentage. The chart compares the performance of four models: ThinkPRM-1.5B, ThinkPRM-1.5B@4, Majority, and DiscPRM-1.5B.
### Components/Axes
* **Title:** Best-of-N: MATH-500
* **Subtitle:** Generator: Qwen3-1.7B-thinking
* **X-axis Label:** Number of samples (N)
* **X-axis Markers:** 2<sup>1</sup>, 2<sup>2</sup>, 2<sup>3</sup>, 2<sup>4</sup>
* **Y-axis Label:** Accuracy (%)
* **Legend:**
* ThinkPRM-1.5B (Orange, dashed line with triangle markers)
* ThinkPRM-1.5B@4 (Dark Orange, dashed line with square markers)
* Majority (Purple, solid line with circle markers)
* DiscPRM-1.5B (Teal, solid line with diamond markers)
### Detailed Analysis
The chart shows four lines representing the accuracy of each model as the number of samples increases.
* **ThinkPRM-1.5B (Orange):** The line slopes upward, indicating increasing accuracy with more samples.
* At 2<sup>1</sup> (N=2): Approximately 84.7% accuracy.
* At 2<sup>2</sup> (N=4): Approximately 86.2% accuracy.
* At 2<sup>3</sup> (N=8): Approximately 87.2% accuracy.
* At 2<sup>4</sup> (N=16): Approximately 89.1% accuracy.
* **ThinkPRM-1.5B@4 (Dark Orange):** The line also slopes upward, generally above ThinkPRM-1.5B.
* At 2<sup>1</sup> (N=2): Approximately 85.2% accuracy.
* At 2<sup>2</sup> (N=4): Approximately 86.7% accuracy.
* At 2<sup>3</sup> (N=8): Approximately 87.8% accuracy.
* At 2<sup>4</sup> (N=16): Approximately 89.4% accuracy.
* **Majority (Purple):** The line slopes upward, starting lower than the ThinkPRM models but converging towards the higher values.
* At 2<sup>1</sup> (N=2): Approximately 82.5% accuracy.
* At 2<sup>2</sup> (N=4): Approximately 84.2% accuracy.
* At 2<sup>3</sup> (N=8): Approximately 86.2% accuracy.
* At 2<sup>4</sup> (N=16): Approximately 88.8% accuracy.
* **DiscPRM-1.5B (Teal):** The line slopes upward, starting at the lowest accuracy and consistently increasing with more samples.
* At 2<sup>1</sup> (N=2): Approximately 81.2% accuracy.
* At 2<sup>2</sup> (N=4): Approximately 83.2% accuracy.
* At 2<sup>3</sup> (N=8): Approximately 85.2% accuracy.
* At 2<sup>4</sup> (N=16): Approximately 88.2% accuracy.
### Key Observations
* All models show improved accuracy as the number of samples increases.
* ThinkPRM-1.5B@4 consistently outperforms ThinkPRM-1.5B.
* The "Majority" model starts with lower accuracy but shows a significant improvement with more samples, approaching the performance of the ThinkPRM models.
* DiscPRM-1.5B consistently has the lowest accuracy across all sample sizes.
* The differences in accuracy between the models become less pronounced at higher sample sizes (N=16).
### Interpretation
Best-of-N selection improves accuracy for every method as N grows. ThinkPRM-1.5B@4 is consistently the strongest, DiscPRM-1.5B the weakest, and majority voting closes much of the gap at larger N. Although the gaps narrow at N=16, verifier-based selection with ThinkPRM retains an edge over both baselines on long reasoning traces from the Qwen3-1.7B-thinking generator.
</details>
<details>
<summary>x12.png Details</summary>

### Visual Description
## Line Chart: Compute-matched analysis: MATH-500
### Overview
This image presents a line chart comparing the accuracy of two methods, "ThinkPRM-14B" and "Majority voting", as a function of estimated FLOPs (total floating-point operations) on the MATH-500 dataset. The chart is designed to show how performance scales with computational resources.
### Components/Axes
* **Title:** Compute-matched analysis: MATH-500
* **Subtitle:** Generator: Qwen2.5-14B
* **X-axis Label:** Estimated FLOPS (log scale)
* **X-axis Scale:** Logarithmic, ranging from approximately 1 x 10<sup>15</sup> to 1 x 10<sup>17</sup>. Markers are at 1 x 10<sup>15</sup>, 5 x 10<sup>15</sup>, 1 x 10<sup>16</sup>, 5 x 10<sup>16</sup>, and 1 x 10<sup>17</sup>.
* **Y-axis Label:** Accuracy (%)
* **Y-axis Scale:** Linear, ranging from 50% to 85%. Markers are at 50%, 55%, 60%, 65%, 70%, 75%, 80%, and 85%.
* **Legend:** Located in the bottom-right corner.
* "ThinkPRM-14B" ā Represented by a solid orange line with circular markers.
* "Majority voting" ā Represented by a dashed orange line with circular markers.
### Detailed Analysis
**ThinkPRM-14B (Solid Orange Line):**
The line slopes upward, indicating increasing accuracy with increasing FLOPS.
* At 1 x 10<sup>15</sup> FLOPS: Approximately 51% accuracy.
* At 5 x 10<sup>15</sup> FLOPS: Approximately 62% accuracy.
* At 1 x 10<sup>16</sup> FLOPS: Approximately 72% accuracy.
* At 5 x 10<sup>16</sup> FLOPS: Approximately 78% accuracy.
* At 1 x 10<sup>17</sup> FLOPS: Approximately 85% accuracy.
**Majority Voting (Dashed Orange Line):**
The line also slopes upward, but with a different trajectory than ThinkPRM-14B.
* At 1 x 10<sup>15</sup> FLOPS: Approximately 51% accuracy.
* At 5 x 10<sup>15</sup> FLOPS: Approximately 52% accuracy.
* At 1 x 10<sup>16</sup> FLOPS: Approximately 75% accuracy.
* At 5 x 10<sup>16</sup> FLOPS: Approximately 78% accuracy.
* At 1 x 10<sup>17</sup> FLOPS: Approximately 79% accuracy.
### Key Observations
* Both methods show an increase in accuracy with increasing FLOPS.
* ThinkPRM-14B consistently outperforms Majority voting across all FLOPS levels.
* The performance gap between the two methods widens as FLOPS increase, particularly between 1 x 10<sup>15</sup> and 1 x 10<sup>16</sup> FLOPS.
* The accuracy of Majority voting plateaus at higher FLOPS levels (above 1 x 10<sup>16</sup> FLOPS).
### Interpretation
ThinkPRM-14B matches majority voting at the lowest compute budgets but scales better: its accuracy keeps rising with additional FLOPs, while majority voting plateaus near 79% above 10<sup>16</sup> FLOPs. This suggests the voting mechanism itself, rather than compute, becomes the bottleneck for the baseline, whereas verifier-based best-of-N continues to convert extra compute into accuracy.
</details>
<details>
<summary>x13.png Details</summary>

### Visual Description
## Line Chart: Compute-matched analysis: GPQA-Physics
### Overview
This image presents a line chart illustrating the relationship between Estimated FLOPS (on a logarithmic scale) and Accuracy (%) for two different methods: ThinkPRM-14B and Majority voting. The chart focuses on the GPQA-Physics dataset and uses data generated by Qwen2.5-32B-Instruct.
### Components/Axes
* **Title:** Compute-matched analysis: GPQA-Physics
* **Subtitle:** Generator: Qwen2.5-32B-Instruct
* **X-axis:** Estimated FLOPS (log scale). Markers are at 2 x 10<sup>15</sup>, 5 x 10<sup>15</sup>, 1 x 10<sup>16</sup>, 2 x 10<sup>16</sup>, and 5 x 10<sup>16</sup>.
* **Y-axis:** Accuracy (%). Scale ranges from approximately 54% to 72%.
* **Legend:** Located in the bottom-right corner.
* ThinkPRM-14B (represented by a solid orange line)
* Majority voting (represented by a dashed gray line)
### Detailed Analysis
**ThinkPRM-14B (Orange Line):** The line generally slopes upward, indicating increasing accuracy with increasing FLOPS.
* At 2 x 10<sup>15</sup> FLOPS, accuracy is approximately 55%.
* At 5 x 10<sup>15</sup> FLOPS, accuracy dips to approximately 53%.
* At 1 x 10<sup>16</sup> FLOPS, accuracy rises to approximately 57%.
* At 2 x 10<sup>16</sup> FLOPS, accuracy is approximately 65%.
* At 5 x 10<sup>16</sup> FLOPS, accuracy reaches approximately 71%.
**Majority Voting (Gray Dashed Line):** The line shows a more moderate increase in accuracy with increasing FLOPS.
* At 2 x 10<sup>15</sup> FLOPS, accuracy is approximately 55%.
* At 5 x 10<sup>15</sup> FLOPS, accuracy decreases to approximately 52%.
* At 1 x 10<sup>16</sup> FLOPS, accuracy rises to approximately 55%.
* At 2 x 10<sup>16</sup> FLOPS, accuracy is approximately 62%.
* At 5 x 10<sup>16</sup> FLOPS, accuracy is approximately 62%.
### Key Observations
* ThinkPRM-14B consistently outperforms Majority voting across all FLOPS levels.
* Both methods show a dip in accuracy at 5 x 10<sup>15</sup> FLOPS.
* The accuracy of Majority voting plateaus at approximately 62% after 2 x 10<sup>16</sup> FLOPS, while ThinkPRM-14B continues to improve.
* The largest gains in accuracy for ThinkPRM-14B occur between 2 x 10<sup>16</sup> and 5 x 10<sup>16</sup> FLOPS.
### Interpretation
ThinkPRM-14B shows a clear positive relationship between compute and accuracy on GPQA-Physics, with the largest gains between 2 x 10<sup>16</sup> and 5 x 10<sup>16</sup> FLOPs, while majority voting plateaus near 62%. The dip both methods show at 5 x 10<sup>15</sup> FLOPs may reflect noise at small sampling budgets. Overall, ThinkPRM-14B is the more scalable approach on this task.
</details>
Figure 19: Left: Best-of-N with Qwen3-1.7B on the full MATH-500 test set, showing that ThinkPRM generalizes well to verifying long reasoning traces. Middle and right: Compute-matched comparison between best-of-N with ThinkPRM and self-consistency (majority voting).
5.6 ThinkPRM helps with difficult reasoning problems
ThinkPRM's reasoning ability should enable it to verify hard problems. To check whether this is the case, we analyze the performance of ThinkPRM vs. DiscPRM as a function of problem difficulty on MATH-500 and GPQA-Physics (how we estimate difficulty for GPQA-Physics is explained in Section E.9), shown in Fig. 39. The generators here are Qwen-2.5-14B for MATH-500 and Qwen-2.5-32B-Instruct for GPQA-Physics. Notably, ThinkPRM improves reasoning on the difficult problems (levels 3, 4, and 5 in MATH-500 and 2, 3, and 4 in GPQA-Physics) substantially more than DiscPRM.
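The per-difficulty breakdown is a simple grouped accuracy. A minimal sketch, assuming each evaluation record carries a difficulty level and a correctness flag:

```python
from collections import defaultdict

def accuracy_by_difficulty(records):
    """records: iterable of (difficulty_level, is_correct) pairs.
    Returns {level: accuracy}, useful for comparing verifiers across
    easy and hard problems as in Fig. 39."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for level, correct in records:
        totals[level] += 1
        hits[level] += int(correct)
    return {level: hits[level] / totals[level] for level in totals}
```

Running this once per verifier and plotting the two dictionaries side by side reproduces the kind of difficulty-stratified comparison described above.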
5.7 Filtering based on process vs. outcome labels
In Section 3.2, we describe our process-based filtering strategy, which selects verification CoTs based on agreement between the generated step-level decisions and gold process labels. To validate its effectiveness, we compare it to outcome-based filtering, as in GenRM (Zhang et al., 2024a), which retains chains solely based on final-answer correctness: a CoT is kept if its final answer is correct and its final step is \boxed{correct}, or if the answer is incorrect and the final step is \boxed{incorrect}, ignoring intermediate step labels. We obtain 65K and 128K CoTs using process- and outcome-based filtering, respectively. Fig. 17 shows that finetuning R1-Distill-Qwen-1.5B on process-filtered data yields significantly better verification performance despite using fewer examples, which reflects the importance of our process-based filtering in training strong PRMs.
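The two filtering rules can be stated as predicates. A sketch, with step labels encoded as strings:

```python
def outcome_filter(final_verdict: str, answer_is_correct: bool) -> bool:
    """GenRM-style outcome filtering: keep a verification CoT iff its
    final boxed verdict agrees with final-answer correctness."""
    return (final_verdict == "correct") == answer_is_correct

def process_filter(predicted_step_labels, gold_step_labels) -> bool:
    """Process-based filtering (Section 3.2): keep a CoT only if it labels
    every step and every label matches the gold PRM800K annotation."""
    return (len(predicted_step_labels) == len(gold_step_labels) and
            all(p == g for p, g in zip(predicted_step_labels,
                                       gold_step_labels)))
```

Note that `outcome_filter` can keep chains whose final verdict is right for the wrong reasons, i.e., with incorrect intermediate step judgments, which `process_filter` rejects; this is one plausible explanation for the gap in Fig. 17.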
5.8 Limitations of Generative PRMs
While generative PRMs are more powerful and data-efficient than their discriminative counterparts, they come with limitations that we highlight as avenues for future work. First, overconfidence is a known issue in LLMs (Liu et al., 2023a; Stechly et al., 2023; Zhou et al., 2024) and, in the case of PRMs, it can cause predicted scores to cluster near the extremes of 0 and 1. One reason is that we derive scores from the probabilities of specific tokens such as "yes" or "no", which tend to be either very high or very low. Future work should explore more reliable techniques for extracting calibrated scores from generative reward models. A second limitation stems from the autoregressive nature of LLMs, which can lead them to commit prematurely to an earlier judgment. For example, we observe a phenomenon we term step label interference, where verification errors on earlier steps affect the verification of later steps: once the PRM judges a step as incorrect, it becomes more likely to label subsequent steps as incorrect even when they are not. Lastly, generating a verification CoT introduces extra inference overhead compared to discriminative PRMs, but we argue that the performance gains offered by generative PRMs justify this cost.
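The clustering effect can be seen in one common recipe for turning a generative verifier's judgment tokens into a score, sketched below: the probabilities of the positive and negative judgment tokens are renormalized against each other, so whichever token dominates pushes the score toward 0 or 1.

```python
import math

def step_score(logp_yes: float, logp_no: float) -> float:
    """Map the verifier's log-probabilities for its positive/negative
    judgment tokens to a score in (0, 1) by renormalizing over the two
    labels. Because one token usually dominates, scores cluster near the
    extremes, illustrating the overconfidence issue discussed above."""
    p_yes = math.exp(logp_yes)
    p_no = math.exp(logp_no)
    return p_yes / (p_yes + p_no)
```

For instance, a verifier that puts 0.9 probability on "yes" and 0.1 on "no" yields a score of 0.9; equal log-probabilities yield exactly 0.5, but in practice the inputs are rarely this balanced.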
Conclusion
We introduced ThinkPRM, a generative process reward model trained with minimal synthetic supervision for scalable step-by-step verification. With just 8K process labels, ThinkPRM significantly outperforms LLM-as-a-judge and even surpasses discriminative PRMs trained on orders of magnitude more data. These results highlight the benefits of generative PRMs in interpretability, scalability, and data efficiency, and demonstrate their potential to scale verification compute for complex reasoning tasks in math and science.
References
- Akyürek et al. (2024) Ekin Akyürek, Mehul Damani, Linlu Qiu, Han Guo, Yoon Kim, and Jacob Andreas. The surprising effectiveness of test-time training for abstract reasoning. arXiv preprint arXiv:2411.07279, 2024.
- Ankner et al. (2024) Zachary Ankner, Mansheej Paul, Brandon Cui, Jonathan D Chang, and Prithviraj Ammanabrolu. Critique-out-loud reward models. arXiv preprint arXiv:2408.11791, 2024.
- Bavaresco et al. (2024) Anna Bavaresco, Raffaella Bernardi, Leonardo Bertolazzi, Desmond Elliott, Raquel FernƔndez, Albert Gatt, Esam Ghaleb, Mario Giulianelli, Michael Hanna, Alexander Koller, et al. Llms instead of human judges? a large scale empirical study across 20 nlp evaluation tasks. arXiv preprint arXiv:2406.18403, 2024.
- Beeching et al. Edward Beeching, Lewis Tunstall, and Sasha Rush. Scaling test-time compute with open models. URL https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute.
- Brown et al. (2024) Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher RĆ©, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787, 2024.
- Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877ā1901, 2020.
- Charniak & Johnson (2005) Eugene Charniak and Mark Johnson. Coarse-to-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACLā05), pp. 173ā180, 2005.
- Chen et al. (2024a) Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Alphamath almost zero: process supervision without process. arXiv preprint arXiv:2405.03553, 2024a.
- Chen et al. (2024b) Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. Do not think that much for 2+3=? on the overthinking of o1-like llms. arXiv preprint arXiv:2412.21187, 2024b.
- Chen et al. (2024c) Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, and Huan Sun. When is tree search useful for llm planning? it depends on the discriminator. arXiv preprint arXiv:2402.10890, 2024c.
- Cobbe et al. (2021) Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
- Cuadron et al. (2025) Alejandro Cuadron, Dacheng Li, Wenjie Ma, Xingyao Wang, Yichuan Wang, Siyuan Zhuang, Shu Liu, Luis Gaspar Schroeder, Tian Xia, Huanzhi Mao, et al. The danger of overthinking: Examining the reasoning-action dilemma in agentic tasks. arXiv preprint arXiv:2502.08235, 2025.
- Dong et al. (2023) Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023.
- Gao et al. (2024) Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, et al. Omni-math: A universal olympiad level mathematic benchmark for large language models. arXiv preprint arXiv:2410.07985, 2024.
- Guo et al. (2025) Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
- He et al. (2024) Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.
- Hendrycks et al. (2021) Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
- Hu et al. (2022) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022.
- Huang et al. (2023) Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. Large language models cannot self-correct reasoning yet. arXiv preprint arXiv:2310.01798, 2023.
- Hui et al. (2024) Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Kai Dang, et al. Qwen2.5-Coder technical report. arXiv preprint arXiv:2409.12186, 2024.
- Jaech et al. (2024) Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024.
- Jain et al. (2024) Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.
- Kalra & Tang (2025) Nimit Kalra and Leonard Tang. Verdict: A library for scaling judge-time compute. arXiv preprint arXiv:2502.18018, 2025.
- Kang et al. (2024) Yu Kang, Xianghui Sun, Liangyu Chen, and Wei Zou. C3ot: Generating shorter chain-of-thought without compromising effectiveness. arXiv preprint arXiv:2412.11664, 2024.
- Khalifa et al. (2023) Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, and Lu Wang. GRACE: discriminator-guided chain-of-thought reasoning. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pp. 15299ā15328. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-EMNLP.1022. URL https://doi.org/10.18653/v1/2023.findings-emnlp.1022.
- Kim et al. (2023) Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. Prometheus: Inducing fine-grained evaluation capability in language models. In The Twelfth International Conference on Learning Representations, 2023.
- Kumar et al. (2024) Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, et al. Training language models to self-correct via reinforcement learning. arXiv preprint arXiv:2409.12917, 2024.
- Lambert et al. (2024) Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, et al. Rewardbench: Evaluating reward models for language modeling. arXiv preprint arXiv:2403.13787, 2024.
- Li et al. (2023) Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 5315ā5333, 2023.
- Lightman et al. (2023) Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.
- Liu et al. (2023a) Xin Liu, Muhammad Khalifa, and Lu Wang. Litcab: Lightweight language model calibration over short-and long-form responses. arXiv preprint arXiv:2310.19208, 2023a.
- Liu et al. (2023b) Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023b.
- Luo et al. (2024) Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592, 2, 2024.
- Mahan et al. (2024) Dakota Mahan, Duy Van Phung, Rafael Rafailov, Chase Blagden, Nathan Lile, Louis Castricato, Jan-Philipp FrƤnken, Chelsea Finn, and Alon Albalak. Generative reward models. arXiv preprint arXiv:2410.12832, 2024.
- Muennighoff et al. (2025) Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel CandĆØs, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.
- Rein et al. (2024) David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024.
- Singh et al. (2023) Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, et al. Beyond human data: Scaling self-training for problem-solving with language models. arXiv preprint arXiv:2312.06585, 2023.
- Singhi et al. (2025) Nishad Singhi, Hritik Bansal, Arian Hosseini, Aditya Grover, Kai-Wei Chang, Marcus Rohrbach, and Anna Rohrbach. When to solve, when to verify: Compute-optimal problem solving and generative verification for llm reasoning. arXiv preprint arXiv:2504.01005, 2025.
- Snell et al. (2024) Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.
- Stechly et al. (2023) Kaya Stechly, Matthew Marquez, and Subbarao Kambhampati. Gpt-4 doesnāt know itās wrong: An analysis of iterative prompting for reasoning problems. arXiv preprint arXiv:2310.12397, 2023.
- Sun et al. (2024) Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang Gan. Easy-to-hard generalization: Scalable alignment beyond human supervision. arXiv preprint arXiv:2403.09472, 2024.
- Team (2024) Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown, November 2024. URL https://qwenlm.github.io/blog/qwq-32b-preview/.
- Uesato et al. (2022) Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.
- Wan et al. (2024) Ziyu Wan, Xidong Feng, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, and Jun Wang. Alphazero-like tree-search can guide large language model decoding and training. In Forty-first International Conference on Machine Learning, 2024.
- Wang et al. (2023a) Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048, 2023a.
- Wang et al. (2023b) Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Y Wu, and Zhifang Sui. Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning. arXiv preprint arXiv:2312.08935, 2023b.
- Wang et al. (2024) Tianlu Wang, Ilia Kulikov, Olga Golovneva, Ping Yu, Weizhe Yuan, Jane Dwivedi-Yu, Richard Yuanzhe Pang, Maryam Fazel-Zarandi, Jason Weston, and Xian Li. Self-taught evaluators. arXiv preprint arXiv:2408.02666, 2024.
- Wang et al. (2022) Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
- Wu et al. (2024) Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv preprint arXiv:2408.00724, 2024.
- Xiong et al. (2024) Wei Xiong, Hanning Zhang, Nan Jiang, and Tong Zhang. An implementation of generative prm. https://github.com/RLHFlow/RLHF-Reward-Modeling, 2024.
- Xiong et al. (2025) Wei Xiong, Hanning Zhang, Chenlu Ye, Lichang Chen, Nan Jiang, and Tong Zhang. Self-rewarding correction for mathematical reasoning. arXiv preprint arXiv:2502.19613, 2025.
- Yang et al. (2025) An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, et al. Qwen3 technical report. arXiv preprint arXiv:2505.09388, 2025.
- Yao et al. (2023) Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in neural information processing systems, 36:11809ā11822, 2023.
- Ye et al. (2024) Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.2, how to learn from mistakes on grade-school math problems. arXiv preprint arXiv:2408.16293, 2024.
- Yu et al. (2024) Ping Yu, Jing Xu, Jason Weston, and Ilia Kulikov. Distilling system 2 into system 1. arXiv preprint arXiv:2407.06023, 2024.
- Yuan et al. (2024) Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kaiyan Zhang, Bowen Zhou, Zhiyuan Liu, and Hao Peng. Free process rewards without process labels. arXiv preprint arXiv:2412.01981, 2024.
- Zelikman et al. (2022) Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476ā15488, 2022.
- Zhang et al. (2024a) Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint arXiv:2408.15240, 2024a.
- Zhang et al. (2024b) Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, and Lu Wang. Small language models need strong verifiers to self-correct reasoning. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pp. 15637ā15653. Association for Computational Linguistics, 2024b. doi: 10.18653/V1/2024.FINDINGS-ACL.924. URL https://doi.org/10.18653/v1/2024.findings-acl.924.
- Zhang et al. (2025) Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. The lessons of developing process reward models in mathematical reasoning. arXiv preprint arXiv:2501.07301, 2025.
- Zheng et al. (2024) Chujie Zheng, Zhenru Zhang, Beichen Zhang, Runji Lin, Keming Lu, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. Processbench: Identifying process errors in mathematical reasoning. arXiv preprint arXiv:2412.06559, 2024.
- Zheng et al. (2023) Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.
- Zhou et al. (2024) Kaitlyn Zhou, Jena D Hwang, Xiang Ren, and Maarten Sap. Relying on the unreliable: The impact of language models' reluctance to express uncertainty. arXiv preprint arXiv:2401.06730, 2024.
- Zhu et al. (2023) Lianghui Zhu, Xinggang Wang, and Xinlong Wang. Judgelm: Fine-tuned large language models are scalable judges. arXiv preprint arXiv:2310.17631, 2023.
Appendix A Training data
A.1 Sampling
Fig. 21 shows the prompt used with QwQ-32B-Preview to sample verification chains for training. We use the problem-prefix pairs from the PRM800K train split (Lightman et al., 2023), which is based on MATH (Hendrycks et al., 2021). We sample 4 verification chains for each prefix with temperature $T=0.1$ until we obtain 1K verification CoTs that (1) follow the desired format, i.e., the model generates an extractable label for every step in the prefix, (2) have verbalized step labels that match the gold labels in the PRM800K annotations, (3) are at most 4096 tokens long, and (4) are roughly balanced with respect to final-answer correctness.
Roughly 20% of the initial samples satisfied criteria (1) and (2) in our case, meaning that about 5K raw samples in total were needed to obtain the 1K filtered samples. Table 2 shows the statistics of the final dataset. We also show an example from the 1K training CoTs in Fig. 20.
Synthetic training sample
Problem: Find the area in square feet of a square with a perimeter of 32ft. Solution: Step 1: A square has four equal sides, so the perimeter is four times the length of one side. Step 2: I can write an equation to relate the perimeter and the side length: 4s = 32, where s is the side length in feet. Step 3: To solve for s, I can divide both sides by 4: s = 32/4 = 8. Step 4: Now I know the side length of the square is 8 feet. Step 5: The area of a square is the side length squared, so the area is s² = 8² = 64. Step 6: The area is in square feet, so the units are ft². Step 7: The answer is 64 ft² <think> Alright, let's verify step by step the solution provided for finding the area of a square with a perimeter of 32 feet. Step 1: "A square has four equal sides, so the perimeter is four times the length of one side." Critique: This statement is accurate. A square indeed has four sides of equal length, and the perimeter is the sum of all sides. Therefore, perimeter (P) equals 4 times the side length (s), or P = 4s. The step is \\boxed{correct} Step 2: "I can write an equation to relate the perimeter and the side length: 4s = 32, where s is the side length in feet." Critique: Given that the perimeter is 32 feet, setting up the equation 4s = 32 is correct. It properly represents the relationship between the perimeter and the side length of the square. The step is \\boxed{correct} Step 3: "To solve for s, I can divide both sides by 4: s = 32/4 = 8." Critique: This step correctly solves for s by dividing both sides of the equation by 4. The calculation 32 divided by 4 equals 8 is accurate. The step is \\boxed{correct} Step 4: "Now I know the side length of the square is 8 feet." Critique: Based on the previous step, s = 8 feet, which is correctly stated here. The step is \\boxed{correct} Step 5: "The area of a square is the side length squared, so the area is s² = 8² = 64." Critique: The formula for the area of a square is indeed s squared.
Calculating 8 squared equals 64 is correct. The step is \\boxed{correct} Step 6: "The area is in square feet, so the units are ft²." Critique: Since the side length is in feet, the area will be in square feet (ft²). This is the correct unit for area. The step is \\boxed{correct} Step 7: "The answer is 64 ft²" Critique: This is the final answer, correctly stating the area as 64 square feet. The step is \\boxed{correct} </think> Is the solution correct? Yes
Figure 20: Training example from our sampled 1K synthetic chains.
A.2 Preprocessing
From the sampled verification CoTs, we extract boxed predictions (e.g., \boxed{correct} or \boxed{incorrect}) for each reasoning step. Then we filter the data as follows:
- CoTs with missing or malformed verification decisions are discarded.
- CoTs where verification decisions don't match the ground-truth labels are filtered out. Precisely, we require all step-by-step verification decisions to match the ground-truth labels.
Then we clean the data as follows:
- We standardize boxed notation formats.
- Content after the last verification decision is removed.
- Special tokens (e.g., <think> and </think>) are added to designate verification reasoning.
Table 2: Statistics of the 1K finetuning dataset.
| Correctness Distribution | |
| --- | --- |
| # correct | 486 (48.6%) |
| # incorrect | 514 (51.4%) |
| Step label distribution | |
| # correct | 7474 (92.3%) |
| # incorrect | 625 (7.7%) |
| Prefix statistics | |
| # unique questions | 869 |
| average steps per prefix | 9.5 |
| min # steps | 1 |
| max # steps | 53 |
| Verification Chain Statistics | |
| Average verification chain length (tokens) | 1037.0 |
| Min verification chain length (tokens) | 207 |
| Max verification chain length (tokens) | 3669 |
Data generation prompt
You are given a math problem and a proposed multiple-step solution (with a step on each line): [Math Problem] {problem} [Solution] {solution} Review and critique the proposed solution steps and determine whether each step is correct. If the solution is incomplete, only critique the steps that are provided. Your output must be in the following format: Let's verify step by step: Step 1: <critique>…The step is \boxed{correct/incorrect} Step 2: <critique>…The step is \boxed{correct/incorrect} … Step n: <critique>…The step is \boxed{correct/incorrect} Once you find an incorrect step, you should stop since you don't need to analyze the remaining steps.
Figure 21: Prompt used to generate synthetic verification chains as described in Section 3.2.
Appendix B LLM-as-a-Judge Limitations
We show examples of failures from QwQ-32B-Preview and Deepseek-R1-Distill-Qwen-14B when used for LLM-as-a-judge verification in Figs. 22, 23, and 25.
QwQ-32B-Preview stuck in a loop
Okay, so I need to analyze each step of the proposed solution to the math problem. The problem is: If $f(x)=\frac{3x-2}{x-2}$, what is the value of $f(-2)+f(-1)+f(0)$? The proposed solution is given, and I have to check if each step is correct. First, let me read through the problem and the solution carefully. The function is $f(x)=\frac{3x-2}{x-2}$. They want the sum of $f(-2)+f(-1)+f(0)$. Looking at the solution steps: Step 0: They plug in $x=-2$ into the function and calculate $\frac{3(-2)-2}{-2-2}=\frac{-8}{-4}=2$. Step 1: They mention that plugging $x=-2$ into the denominator gives zero, which would be division by zero. Step 2: They generalize that we should avoid plugging in values that make the denominator zero. Step 3: Similarly, plugging $x=-1$ into the denominator also gives zero. Step 4: Plugging $x=0$ into the denominator also results in division by zero. … Wait no! The original function is given as $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ No! The problem says: If $f(x)=\frac{\text{numerator}}{\text{denominator}}=?$ …
Figure 22: Degeneration behavior from QwQ-32B-Preview with LLM-as-judge verification. Problem and solution are omitted for brevity.
Deepseek-R1-Distill-Qwen-14B stuck in a loop
Okay, so I need to review and critique the solution step by step for the given math problem. The problem is to find the exact value of the continued fraction expression: 1 + 1/(1 + 2/(1 + 1/(1 + 2/(1 + …)))) The solution provided is split into 16 steps, each enclosed in a tag. I need to go through each step, analyze it, and see if there are any errors or if the logic holds. After that, I'll determine if the solution is correct by outputting "yes" or "no" in a box. Let me start by understanding the problem. It's a continued fraction that alternates between 1 and 2 in the numerators. The expression is: x = 1 + 1/(1 + 2/(1 + 1/(1 + 2/(1 + …)))) So, the pattern repeats every two steps: 1, then 2, then 1, then 2, etc. The solution starts by denoting the expression as x and then tries to find a relationship by substituting the inner expression back into x. Wait, the original expression is x = 1 + 1/(1 + 2/(1 + 1/(1 + 2/(1 + …)))). So, the part after the first 1 is 1 + 2/(1 + 1/(1 + 2/(1 + …))). But that part isn't x, because x starts with 1 + 1/(…). So, maybe the substitution isn't directly x. Let me think. Wait, perhaps the inner part is similar but not exactly x. Let me denote the inner part as y. So, y = 1 + 2/(1 + 1/(1 + 2/(1 + …))). But then, the expression for y would involve another similar structure. Hmm, maybe the solution is oversimplifying here. … Wait, actually, when you multiply both sides by (1 + 2/x), the left side is x*(1 + 2/x) = x + 2. The right side is [1 + 1/(1 + 2/x)]*(1 + 2/x) = 1*(1 + 2/x) + [1/(1 + 2/x)]*(1 + 2/x) = (1 + 2/x) + 1 = 2 + 2/x. So, the correct equation after multiplying both sides by (1 + 2/x) is: x + 2 = 2 + 2/x. But in step 6, they have: x + 2 = 1 + 1/(1 + 2/x) + 2/x + 4/x² Which seems to be incorrect. They seem to have distributed the multiplication incorrectly. Wait, let me see.
If you have x*(1 + 2/x) = [1 + 1/(1 + 2/x)]*(1 + 2/x) + [2/x*(1 + 2/x)] But that doesn't make sense because when you multiply both sides by (1 + 2/x), you should only have: Left side: x*(1 + 2/x) Right side: [1 + 1/(1 + 2/x)]*(1 + 2/x) Which simplifies to x + 2 = 2 + 2/x. But in the solution, step 6 says: x*(1 + 2/x) = 1 + 1/(1 + 2/x) + 2/x*(1 + 2/x) Which seems to be adding an extra term, 2/x*(1 + 2/x), which is not correct. So, step 6 is incorrect. Therefore, the solution has an error in step 6. Instead of correctly multiplying both sides, they incorrectly expanded the right-hand side, leading to an incorrect equation. Because of this mistake, the subsequent steps are likely incorrect as well. So, the entire solution is flawed from step 6 onwards. … Wait, let me compute the continued fraction numerically to check. Let me compute the continued fraction up to a certain depth. Let me compute it step by step: Start with the innermost part and work outwards. … So, it seems to converge to approximately 2.414, which is 1 + sqrt(2) $\approx$ 2.4142. Wait, but according to my earlier solving, x should be sqrt(2) $\approx$ 1.414, but the numerical computation suggests it converges to approximately 2.414, which is 1 + sqrt(2). So, which one is correct? Wait, perhaps I made a mistake in setting up the equation. Wait, let's re-examine the original expression: … But when I tried to solve it earlier, I set x = 1 + 1/(1 + 2/x), which led to x = sqrt(2). But the numerical computation suggests it converges to 1 + sqrt(2). So, perhaps my initial substitution was incorrect. Wait, perhaps I need to define it differently. Let me try to define the continued fraction properly. Wait, let's solve the system: From x = 1 + 1/y and y = 1 + 2/x. Substitute y = 1 + 2/x into x = 1 + 1/y: x = 1 + 1/(1 + 2/x) Multiply both sides by (1 + 2/x): … Alternatively, perhaps I need to define y differently.
Let me try to define y as the expression starting with 1 + 1/(…): y = 1 + 1/(1 + 2/(1 + 1/(1 + 2/(1 + …)))) But then, y = x, which brings us back to the original substitution. … Which is the same equation as before, leading to x = sqrt(2). But the numerical computation suggests it's approaching 1 + sqrt(2). So, perhaps my initial substitution is incorrect. Alternatively, perhaps the continued fraction is structured such that after the first 1 + 1/(…), the denominator is another continued fraction similar to x but shifted. …
Figure 23: Degeneration behavior from Deepseek-R1-Distill-Qwen-14B with LLM-as-judge verification. Problem and solution are omitted for brevity.
Problem and solution from ProcessBench
Problem: Let $a,b$ be positive reals with $a>b>\frac{1}{2}a$. Place two squares of side lengths $a,b$ next to each other, such that the larger square has lower left corner at $(0,0)$ and the smaller square has lower left corner at $(a,0)$. Draw the line passing through $(0,a)$ and $(a+b,0)$. The region in the two squares lying above the line has area 2013. If $(a,b)$ is the unique pair maximizing $a+b$, compute $\frac{a}{b}$. Solution: We begin by finding the equation of the line through $(0,a)$ and $(a+b,0)$. The slope is: $m=\frac{0-a}{(a+b)-0}=-\frac{a}{a+b}$ Using point-slope form from $(0,a)$: $y=a-\frac{a}{a+b}x$ Area above the line in the larger square: At $x=a$, the height is: $y=a-\frac{a^{2}}{a+b}=\frac{ab}{a+b}$ So the area is a trapezoid: $\text{Area}_{1}=\frac{1}{2}\left(a+\frac{ab}{a+b}\right)a=\frac{a^{3}+2a^{2}b}{2(a+b)}$ Area above the line in the smaller square: This is a triangle with height $\frac{ab}{a+b}$ and base $b$: $\text{Area}_{2}=\frac{1}{2}\cdot\frac{ab}{a+b}\cdot b=\frac{ab^{2}}{2(a+b)}$ Total Area: $\text{Total}=\text{Area}_{1}+\text{Area}_{2}=\frac{a^{3}+2a^{2}b+ab^{2}}{2(a+b)}=\frac{a(a^{2}+2ab+b^{2})}{2(a+b)}=\frac{a(a+b)^{2}}{2(a+b)}=\frac{a(a+b)}{2}$ We are given: $\frac{a(a+b)}{2}=2013\Rightarrow a(a+b)=4026$ Express $b$ in terms of $a$: $b=\frac{4026}{a}-a$ Apply the constraints $a>b>\frac{1}{2}a$:
- $a>b\Rightarrow a>\frac{4026}{a}-a\Rightarrow a^{2}>2013\Rightarrow a>\sqrt{2013}$
- $b>\frac{1}{2}a\Rightarrow\frac{4026}{a}-a>\frac{1}{2}a\Rightarrow a^{2}<2684\Rightarrow a<\sqrt{2684}$ So $\sqrt{2013}<a<\sqrt{2684}$. To maximize $a+b=\frac{4026}{a}$, we minimize $a$, so let $a=\sqrt{2013}$. Then $b=\sqrt{2013}$, satisfying the constraints. $\frac{a}{b}=\boxed{2}$
Figure 24: An example of problem and solution from OmniMath subset of ProcessBench.
Incorrect verification
Okay, so I need to review and critique the given solution step by step. Let me go through each step carefully. Step 0: The solution starts by identifying key points and properties. It mentions that $J$ is the excenter opposite to $A$, which is correct. $A_{1}$ is where the excircle touches $BC$, and $B_{1},C_{1}$ are where it touches the extensions of $AC$ and $AB$. Then, it says $A_{1}B_{1}$ is perpendicular to $AB$ and intersects at $D$. $E$ is the foot of the perpendicular from $C_{1}$ to $DJ$. Hmm, this seems okay, but I wonder if they've correctly identified all the properties. Maybe they should have mentioned more about the excenter's properties, like the fact that it's the intersection of external angle bisectors. But maybe that's assumed knowledge. Step 1: They note that since $A_{1}B_{1}$ is perpendicular to $AB$, $D$ is the projection of $A_{1}$ onto $AB$. So, $D$ lies on $AB$ and $A_{1}D$ is perpendicular to $AB$. Wait, is that necessarily true? If $A_{1}B_{1}$ is perpendicular to $AB$, and they intersect at $D$, then yes, $D$ is the foot of the perpendicular from $A_{1}$ to $AB$. That seems correct. Step 2: They examine the excircle and the triangle, mentioning that the excircle is tangent to $BC$ at $A_{1}$, and the tangents from $A_{1}$ to $B_{1}$ and $C_{1}$ form right angles with the radii. Then, $DJ$ is a line through the excenter and the intersection point of the perpendiculars from $A_{1}$ and $B_{1}$ to $AB$. Hmm, I'm not sure about this part. The line $DJ$ is through the excenter $J$ and point $D$, which is the foot from $A_{1}$. But does it involve perpendiculars from $A_{1}$ and $B_{1}$? I think the solution might be conflating different properties here. The excenter is equidistant from all sides, but $DJ$ is just a line from $D$ to $J$, not necessarily involving perpendiculars from both $A_{1}$ and $B_{1}$. Maybe this is a misstatement.
Step 3: They state that since $E$ is the foot of the perpendicular from $C_{1}$ to $DJ$, angle $C_{1}ED$ is 90 degrees. Then, they argue that angles $BEA_{1}$ and $AEB_{1}$ are equal due to symmetry and properties of the cyclic quadrilateral. Wait, why is $E$ symmetrically placed? The solution says it's because of the properties of the excircle and right angles, but I'm not entirely convinced. Just because $E$ is a foot of a perpendicular doesn't automatically make it symmetric with respect to $A_{1}B_{1}$. Maybe there's a better reason, like cyclic quadrilaterals or similar triangles, but this step feels a bit hand-wavy. Step 4: They conclude both angles are 45 degrees because of symmetry and right angles, forming a cyclic quadrilateral. But I'm not sure if the cyclic quadrilateral part is clearly established. Also, 45 degrees seems specific: does the problem necessarily lead to that? Wait, maybe I should draw a diagram to visualize. If $A_{1}B_{1}$ is perpendicular to $AB$, and $E$ is the foot from $C_{1}$ to $DJ$, which is connected to the excenter. Hmm, maybe the angles are indeed 45 degrees because of the right angles and the properties of the excentral triangle, but I'm not entirely certain. Overall Thoughts: The solution seems to have some correct steps but might be making assumptions without sufficient justification. For example, in Step 3, the symmetry argument isn't fully explained. Also, in Step 2, the description of $DJ$ might not be accurate. The conclusion of 45 degrees might be correct, but the reasoning feels a bit rushed. Final Answer: After considering the steps, I think the solution might have some gaps in justification but arrives at the correct answer. So, I'll say yes, but with some reservations. yes
Figure 25: Incorrect verification by Deepseek-R1-Distill-Qwen-14B for the problem-solution pair in Fig. 24.
Appendix C Training details
C.1 Training ThinkPRM
We train our models on the filtered 1K chains using LoRA (Hu et al., 2022) with rank $r=32$ and $\alpha=16$. Specifically, we finetune Qwen-14B-Instruct and R1-Distill-Qwen-14B for 3 epochs, and QwQ-32B-Preview for 5 epochs. We use an effective batch size of 16 and a fixed learning rate of $4\times 10^{-4}$ without warmup. Training took 1.5 hours for the 14B models and 4.5 hours for QwQ-32B-Preview on a single A100 80GB GPU. We found QwQ particularly hard to train with LoRA: it still generates a relatively high percentage of invalid judgments after training. Full finetuning of the model would likely resolve these issues, but that would require more compute than we have.
The R1-Distill-Qwen-{1.5B,7B} models are trained with full finetuning. We train the 1.5B model for 3 epochs with an effective batch size of 32, using a constant learning rate of $6\times 10^{-5}$ without decay or warmup. Both models are trained on four RTX A6000 48GB GPUs using data parallelism. Training on the 1K chains took about half an hour for the 1.5B model and about two hours for the 7B model.
C.2 Training Discriminative Verifiers
We train R1-Qwen-14B for 1 epoch over the entire PRM800K dataset using two A100 80GB GPUs with a batch size of 8 and a learning rate of $6\times 10^{-5}$. We use a constant learning-rate schedule with no warmup. Following prior work (Wang et al., 2023b; Zhang et al., 2025), we train the model with binary cross-entropy to maximize the probability of the tokens "+" and "-" for correct and incorrect steps, respectively. The R1-Qwen-1.5B model is trained with the same infrastructure with a batch size of 64 and a learning rate of $1\times 10^{-4}$ with a warmup ratio of 10%.
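For concreteness, the per-step objective amounts to a softmax over the two label tokens followed by cross-entropy against the step label. The sketch below is a simplified, framework-free stand-in for the actual token-level loss, with hypothetical inputs (pairs of logits for "+" and "-"):

```python
import math

def step_label_loss(step_logits, step_labels):
    """Mean binary cross-entropy over per-step ('+', '-') token logits.

    step_logits: list of (logit_plus, logit_minus) pairs, one per step
    step_labels: 1 if the step is correct ('+'), 0 if incorrect ('-')
    """
    total = 0.0
    for (lp, lm), y in zip(step_logits, step_labels):
        # Softmax over the two label tokens gives P('+') for this step.
        p_plus = math.exp(lp) / (math.exp(lp) + math.exp(lm))
        total += -(y * math.log(p_plus) + (1 - y) * math.log(1 - p_plus))
    return total / len(step_labels)

# One correct step whose '+' logit is 2.0 and '-' logit is 0.0.
print(step_label_loss([(2.0, 0.0)], [1]))
```

In the real setup these logits come from the language-model head at the step-delimiter positions; everything else in the sketch follows the standard cross-entropy definition.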
Appendix D Results on ProcessBench before and after finetuning
Table 3 shows the performance numbers of LLM-as-a-Judge and ThinkPRM on ProcessBench.
Table 3: Average F1-score on OlympiadBench and OmniMath subsets of ProcessBench (Zheng et al., 2024) comparing LLM-as-a-Judge to ThinkPRM finetuned on 1K examples. Random baseline for OlympiadBench is 39.1% and for OmniMath is 32.7%. Percentages of bad outputs (repetitions, invalid label formatting, overthinking, etc.) are shown in red. LLM-as-a-judge with reasoning models suffers from issues that limit their utility as generative verifiers.
| Model | LLM-as-a-Judge | | ThinkPRM | |
| --- | --- | --- | --- | --- |
| | OlympiadBench | OmniMath | OlympiadBench | OmniMath |
| Random baseline | 39.1 | 32.7 | 39.1 | 32.7 |
| R1-Qwen-1.5B | 5.0 (51.4%) | 5.4 (55.1%) | 76.3 (1.4%) | 75.7 (2.4%) |
| R1-Qwen-7B | 44.8 (18.2%) | 45.7 (20.9%) | 73.4 (1.1%) | 74.0 (1.4%) |
| R1-Qwen-14B | 72.8 (13.3%) | 67.8 (18.6%) | 87.3 (2.3%) | 85.7 (2.3%) |
| QwQ-32B-preview | 50.6 (7.9%) | 55.5 (10.9%) | 73.1 (15.1%) | 73.2 (7.9%) |
Appendix E Evaluation details
This section includes exact details on the test-time scaling experiments shown in Section 4.2.
E.1 Predicting verification labels
Following prior work (Snell et al., 2024; Beeching et al.), we aggregate scores from DiscPRM by using the score of the last step. For ThinkPRM, we first prompt the model to generate the verification chain up to a maximum of 8192 tokens, then we force-decode the string "Is the solution correct?" and use $\frac{P(\text{yes})}{P(\text{yes})+P(\text{no})}$ as the solution score.
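The scoring rule above can be written out directly. A minimal sketch, assuming we can read the log-probabilities that the model assigns to the "yes" and "no" tokens after the force-decoded question:

```python
import math

def solution_score(logprob_yes: float, logprob_no: float) -> float:
    """Renormalize P(yes) against P(no) to score a solution, as in
    P(yes) / (P(yes) + P(no))."""
    p_yes, p_no = math.exp(logprob_yes), math.exp(logprob_no)
    return p_yes / (p_yes + p_no)

print(solution_score(math.log(0.6), math.log(0.2)))  # approx. 0.75
```

Renormalizing over only the two answer tokens makes the score insensitive to probability mass the model places on unrelated tokens.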
E.2 Best-of-N selection
We sample solutions using a temperature of $T=0.8$ for Llama-3.2-3B-Instruct and $T=0.4$ for Qwen-2.5-14B. We instruct all models to think step by step and put the final answer in \boxed{}. All our Best-of-N experiments use weighted majority voting, which scores final answers based on the sum of the verifier scores of their solutions (Uesato et al., 2022; Wu et al., 2024; Sun et al., 2024), except for our experiments on AIME '24, where we use the verifier score directly to rank the solutions, as we found this to perform better for all verifiers.
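Weighted majority voting can be sketched in a few lines: each sampled solution contributes its verifier score to the total of its final answer, and the answer with the largest total wins. The function name is ours:

```python
from collections import defaultdict

def weighted_majority_vote(answers, scores):
    """Pick the final answer whose solutions have the largest summed
    verifier score (weighted majority voting)."""
    totals = defaultdict(float)
    for ans, score in zip(answers, scores):
        totals[ans] += score
    return max(totals, key=totals.get)

# Three solutions reach '42' and one reaches '41'; scores weight the vote.
print(weighted_majority_vote(["42", "41", "42", "42"], [0.3, 0.9, 0.4, 0.35]))  # -> 42
```

The AIME '24 variant mentioned above would instead return the answer of the single highest-scoring solution, i.e. `answers[scores.index(max(scores))]`.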
E.3 Verifier-guided beam search
Under verifier-guided beam search, we sample candidate next steps and score them with the process verifier, then select the top-$K$ candidates to expand further, and so on. Our implementation is based on (Snell et al., 2024; Beeching et al.), which maintains $N$ beams in total and samples $M$ candidate next steps per beam. We set $M=4$ for all experiments and run search for a maximum of 20 steps per beam. To sample next steps, we use $T=0.6$ and use double newlines as the step delimiter.
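The search loop can be sketched as follows. This is a simplified illustration of the procedure described above, not the released implementation; `sample_step` and `score_prefix` are hypothetical callbacks standing in for the generator (sampled at $T=0.6$) and the process verifier:

```python
def verifier_guided_beam_search(problem, sample_step, score_prefix,
                                n_beams=4, m_candidates=4, max_steps=20):
    """Expand each beam with M candidate next steps, score every prefix
    with the process verifier, and keep the top-N beams."""
    beams = [""]  # each beam is a partial solution; steps joined by "\n\n"
    for _ in range(max_steps):
        candidates = []
        for prefix in beams:
            for _ in range(m_candidates):
                step = sample_step(problem, prefix)
                candidates.append(prefix + step + "\n\n")
        # Keep the N highest-scoring prefixes.
        candidates.sort(key=lambda p: score_prefix(problem, p), reverse=True)
        beams = candidates[:n_beams]
        if all("\\boxed" in b for b in beams):  # all beams reached an answer
            break
    return beams[0]

# Toy stand-ins: the generator always finishes, the verifier prefers longer prefixes.
best = verifier_guided_beam_search(
    "toy problem",
    sample_step=lambda problem, prefix: "\\boxed{1}",
    score_prefix=lambda problem, prefix: len(prefix),
)
print(best)
```

The double-newline delimiter mirrors the step boundary used when sampling next steps.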
E.4 Sequential scaling of verifier compute
We achieve budget forcing (Muennighoff et al., 2025) by triggering the model to think again for $R$ rounds, where each round uses a unique trigger phrase that incites the model to revisit or double-check its earlier verification. We use a different trigger phrase for each round, since we found that reusing the same phrase causes the model to repeat what it did in the previous round.
We run a maximum of $R=4$ thinking rounds, using the phrases "Let me double check", "Let's verify again", and "Did I miss something?" for rounds 2, 3, and 4, respectively. We did not investigate optimizing these trigger phrases, but we note that performance may depend on them; we use the same phrases for both ThinkPRM and LLM-as-a-judge to ensure a fair comparison.
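A minimal sketch of this budget-forcing loop, with a hypothetical `generate` callback standing in for the verifier model (the trigger phrases are the ones listed above):

```python
TRIGGERS = ["Let me double check", "Let's verify again", "Did I miss something?"]

def sequential_verify(generate, prompt, max_rounds=4):
    """After the first verification CoT, strip the closing think tag and
    append a unique trigger phrase to force another round of thinking."""
    cot = generate(prompt)  # round 1: ordinary verification
    for r in range(2, max_rounds + 1):
        cot = cot.replace("</think>", "").rstrip() + "\n" + TRIGGERS[r - 2] + "\n"
        cot += generate(prompt + cot)  # model continues the extended chain
    return cot

# Toy stand-in that always produces the same short verification.
result = sequential_verify(lambda ctx: "Steps look fine. </think>", "Verify:")
print(result)
```

Removing the closing think tag before appending the trigger is what keeps the model inside its reasoning phase for the next round.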
E.5 MATH-500 test examples
Since running on all 500 examples from MATH-500 would require substantial compute, we run all our experiments on a randomly sampled 100-problem subset of MATH-500 (Hendrycks et al., 2021). We pick the 100 problems such that they cover different difficulty levels, as shown in Fig. 28.
<details>
<summary>x14.png Details</summary>

### Visual Description
## Bar Chart: Difficulty Levels in MATH-500
### Overview
This is a bar chart illustrating the distribution of problem difficulty levels within the MATH-500 dataset split. The chart displays the count of problems for each difficulty level, ranging from 1 to 5.
### Components/Axes
* **Title:** "Difficulty levels in the MATH-500 split we use" (positioned at the top-center)
* **X-axis:** "Problem Level" (ranging from 1 to 5, with evenly spaced markers)
* **Y-axis:** "Count" (ranging from 0 to 25, with evenly spaced markers)
* **Bars:** Represent the count of problems for each difficulty level. All bars are the same orange color.
### Detailed Analysis
The chart shows the following counts for each problem level:
* **Problem Level 1:** Approximately 11-12 problems. The bar reaches slightly above the '10' mark on the Y-axis.
* **Problem Level 2:** Approximately 25 problems. The bar reaches the '25' mark on the Y-axis.
* **Problem Level 3:** Approximately 19 problems. The bar reaches slightly below the '20' mark on the Y-axis.
* **Problem Level 4:** Approximately 22 problems. The bar reaches slightly above the '20' mark on the Y-axis.
* **Problem Level 5:** Approximately 23 problems. The bar reaches slightly below the '25' mark on the Y-axis.
The bars generally increase in height from level 1 to level 2, then fluctuate between levels 3, 4, and 5.
### Key Observations
* Problem Level 2 has the highest count of problems (approximately 25).
* Problem Level 1 has the lowest count of problems (approximately 11-12).
* Levels 3, 4, and 5 have relatively similar counts, ranging from approximately 19 to 23.
* The distribution is not uniform, with a clear peak at difficulty level 2.
### Interpretation
The data suggests that the MATH-500 dataset split is not evenly distributed across difficulty levels. There is a concentration of problems at difficulty level 2, indicating that this level is well-represented in the dataset. The relatively lower number of problems at level 1 suggests that easier problems are less common in this split. The similar counts for levels 3, 4, and 5 indicate a more balanced representation of intermediate to difficult problems. This distribution could influence the performance of any models trained on this dataset, potentially leading to biases towards problems of difficulty level 2. The choice of this split may be intentional, perhaps to focus on a specific range of problem difficulties.
</details>
Figure 28: Histogram of difficulty levels in our 100-problem subset from MATH-500.
<details>
<summary>x15.png Details</summary>

### Visual Description
## Line Chart: Parallel Scaling of Verifier Compute - MATH-500
### Overview
This line chart illustrates the relationship between the number of solutions generated and the accuracy achieved by different verification methods on the MATH-500 dataset. The x-axis represents the number of solutions (on a logarithmic scale), and the y-axis represents the accuracy in percentage. The chart compares the performance of several models: ThinkPRM-14B, ThinkPRM-14B@4, ThinkPRM-14B@8, DiscPRM-14B, and Majority voting.
### Components/Axes
* **Title:** Parallel scaling of verifier compute: MATH-500
* **X-axis Label:** Number of solutions
* **X-axis Scale:** Logarithmic scale, with markers at 2⁰, 2¹, 2², 2³, 2⁴, and 2⁵.
* **Y-axis Label:** Accuracy (%)
* **Y-axis Scale:** Linear scale, ranging from 50% to 85%.
* **Legend:** Located at the bottom of the chart.
* ThinkPRM-14B (Orange)
* ThinkPRM-14B@4 (Light Blue, dashed)
* ThinkPRM-14B@8 (Yellow)
* DiscPRM-14B (Teal)
* Majority (Brown)
### Detailed Analysis
The chart displays five distinct lines, each representing a different verification method.
* **ThinkPRM-14B (Orange):** This line starts at approximately 52% accuracy at 2⁰ solutions, steadily increases to around 78% at 2³ solutions, then continues to rise to approximately 83% at 2⁵ solutions.
* **ThinkPRM-14B@4 (Light Blue, dashed):** This line begins at roughly 52% accuracy at 2⁰ solutions, rapidly increases to approximately 81% at 2² solutions, plateaus around 82% at 2³ and 2⁴ solutions, and then slightly decreases to around 81% at 2⁵ solutions.
* **ThinkPRM-14B@8 (Yellow):** This line starts at approximately 52% accuracy at 2⁰ solutions, increases to around 78% at 2³ solutions, and continues to rise to approximately 84% at 2⁵ solutions.
* **DiscPRM-14B (Teal):** This line begins at approximately 52% accuracy at 2⁰ solutions, increases to around 72% at 2³ solutions, and remains relatively stable at around 73% at 2⁴ and 2⁵ solutions.
* **Majority (Brown):** This line starts at approximately 52% accuracy at 2⁰ solutions, sharply increases to around 68% at 2² solutions, then rises to approximately 73% at 2³ solutions, and decreases to around 71% at 2⁵ solutions.
### Key Observations
* **Performance Improvement with More Solutions:** All methods demonstrate an increase in accuracy as the number of solutions increases, indicating that generating more potential solutions improves verification performance.
* **ThinkPRM-14B@4 Outperforms:** The ThinkPRM-14B@4 model consistently achieves the highest accuracy across most of the solution range, peaking at approximately 82%.
* **Diminishing Returns:** The rate of accuracy improvement appears to diminish as the number of solutions increases, particularly for ThinkPRM-14B@4.
* **Majority Voting is Lowest:** The Majority voting method consistently exhibits the lowest accuracy among the tested models.
* **ThinkPRM-14B and ThinkPRM-14B@8 are similar:** These two lines are very close to each other.
### Interpretation
The data suggests that parallelizing the verification process (as demonstrated by ThinkPRM-14B@4 and ThinkPRM-14B@8) can significantly improve accuracy, especially when a moderate number of solutions are considered. The ThinkPRM-14B@4 model appears to strike a balance between computational cost and accuracy, achieving high performance without requiring a large number of solutions. The diminishing returns observed at higher solution counts suggest that there may be a point where the computational cost of generating additional solutions outweighs the marginal gains in accuracy. The lower performance of the Majority voting method indicates that a more sophisticated verification strategy is necessary for achieving high accuracy on the MATH-500 dataset. The logarithmic scale on the x-axis highlights the importance of scaling the number of solutions to achieve substantial accuracy improvements. The consistent starting point of all lines at approximately 52% suggests a baseline accuracy level inherent to the problem or the initial solution generation process.
</details>
Figure 31: Scaling of verifier compute by parallel sampling of multiple verification CoTs and averaging their scores. Parallel scaling (ThinkPRM-14B@4 and ThinkPRM-14B@8) further lifts the performance curve compared to scoring based on a single CoT (ThinkPRM-14B).
E.6 Additional results on scaling verifier compute
Fig. 31 shows results of parallel scaling of ThinkPRM-14B's verifier compute by sampling $K=4$ and $K=8$ CoTs with temperature $T=0.6$ and aggregating their scores. Parallel scaling indeed lifts the accuracy curve of ThinkPRM-14B compared to the standard $K=1$ setting with greedy decoding. However, performance plateaus rather quickly, and $K=8$ remains comparable to $K=4$, while slightly better at smaller sampling budgets. Fig. 32 compares parallel to sequential scaling under the same token budget. While there is no clear winner, parallel scaling seems to perform slightly better at best-of-8.
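The parallel-scaling aggregation is simply an average over the $K$ sampled chains' yes-probabilities for a given solution; a one-line sketch (function name is ours):

```python
def parallel_verifier_score(cot_scores):
    """Aggregate K independently sampled verification CoTs for one
    solution by averaging their yes-probabilities."""
    return sum(cot_scores) / len(cot_scores)

# e.g. K=4 chains sampled at T=0.6 for the same solution
print(parallel_verifier_score([0.9, 0.7, 0.8, 0.6]))
```

The averaged score is then used exactly like a single-chain score in best-of-N selection.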
<details>
<summary>x16.png Details</summary>

### Visual Description
## Line Chart: Parallel vs. Sequential Scaling: MATH-500
### Overview
This line chart compares the accuracy of three different models (ThinkPRM-14B, ThinkPRM-14B with 2 thinking rounds, and ThinkPRM-14B@2) on the MATH-500 dataset as the number of solutions increases. The x-axis represents the number of solutions (on a logarithmic scale), and the y-axis represents the accuracy in percentage.
### Components/Axes
* **Title:** Parallel vs. Sequential Scaling: MATH-500 (top-center)
* **X-axis Label:** Number of solutions (bottom-center)
* **X-axis Markers:** 2⁰, 2¹, 2², 2³, 2⁴
* **Y-axis Label:** Accuracy (%) (left-center)
* **Y-axis Scale:** 50%, 55%, 60%, 65%, 70%, 75%, 80%
* **Legend:** Located at the bottom-center of the chart.
* **ThinkPRM-14B:** Orange solid line with star marker.
* **ThinkPRM-14B (2 thinking rounds):** Gray dashed line with triangle marker.
* **ThinkPRM-14B@2:** Blue solid line with circle marker.
### Detailed Analysis
* **ThinkPRM-14B (Orange):** The line slopes upward consistently.
* At 2⁰ (1 solution): Approximately 51% accuracy.
* At 2¹ (2 solutions): Approximately 62% accuracy.
* At 2² (4 solutions): Approximately 69% accuracy.
* At 2³ (8 solutions): Approximately 77% accuracy.
* At 2⁴ (16 solutions): Approximately 79% accuracy.
* **ThinkPRM-14B (2 thinking rounds) (Gray):** The line also slopes upward, but is generally above the orange line.
* At 2⁰ (1 solution): Approximately 54% accuracy.
* At 2¹ (2 solutions): Approximately 64% accuracy.
* At 2² (4 solutions): Approximately 70% accuracy.
* At 2³ (8 solutions): Approximately 81% accuracy.
* At 2⁴ (16 solutions): Approximately 81% accuracy.
* **ThinkPRM-14B@2 (Blue):** The line slopes upward and is generally above the gray line.
* At 2⁰ (1 solution): Approximately 52% accuracy.
* At 2¹ (2 solutions): Approximately 63% accuracy.
* At 2² (4 solutions): Approximately 71% accuracy.
* At 2³ (8 solutions): Approximately 81% accuracy.
* At 2⁴ (16 solutions): Approximately 81% accuracy.
### Key Observations
* All three models show increasing accuracy as the number of solutions increases.
* ThinkPRM-14B@2 and ThinkPRM-14B (2 thinking rounds) achieve the highest accuracy, reaching approximately 81% at 2³ and 2⁴ solutions.
* ThinkPRM-14B has the lowest accuracy across all solution numbers.
* The performance gap between the models appears to narrow at higher numbers of solutions (2³ and 2⁴).
### Interpretation
The data suggests that increasing the number of solutions improves the accuracy of all three models on the MATH-500 dataset. The models that utilize multiple thinking rounds (ThinkPRM-14B (2 thinking rounds) and ThinkPRM-14B@2) demonstrate superior performance compared to the base model (ThinkPRM-14B). This indicates that iterative reasoning or parallel processing of solutions is beneficial for solving mathematical problems. The plateauing of accuracy at higher solution numbers (2³ and 2⁴) might suggest a diminishing return on investment in further increasing the number of solutions, or that the models are approaching their maximum achievable accuracy on this dataset. The difference between ThinkPRM-14B@2 and ThinkPRM-14B (2 thinking rounds) is minimal, suggesting that the benefit of the "@2" configuration is not substantial.
</details>
<details>
<summary>x17.png Details</summary>

### Visual Description
## Line Chart: Parallel vs. Sequential Scaling: MATH-500
### Overview
This line chart compares the accuracy of different models (ThinkPRM-14B and ThinkPRM-14B@4) on the MATH-500 dataset as the number of solutions increases. The x-axis represents the number of solutions (on a logarithmic scale), and the y-axis represents the accuracy in percentage. The chart shows how accuracy changes with the number of solutions for each model, with and without multiple "thinking rounds".
### Components/Axes
* **Title:** Parallel vs. Sequential Scaling: MATH-500
* **X-axis Label:** Number of solutions
* **X-axis Scale:** Logarithmic scale, with markers at 2⁰, 2¹, 2², 2³, and 2⁴.
* **Y-axis Label:** Accuracy (%)
* **Y-axis Scale:** Linear scale, ranging from approximately 50% to 82%.
* **Legend:** Located at the bottom-center of the chart.
* ThinkPRM-14B (Orange line with star marker)
* ThinkPRM-14B (4 thinking rounds) (Gray dashed line with triangle marker)
* ThinkPRM-14B@4 (Blue line with circle marker)
### Detailed Analysis
The chart displays three lines representing the accuracy of different models as the number of solutions increases.
* **ThinkPRM-14B (Orange):** This line starts at approximately 51% accuracy at 2⁰ solutions. It increases steadily, reaching approximately 77% accuracy at 2⁴ solutions. The trend is generally upward, but the slope decreases as the number of solutions increases.
* 2⁰: ~51%
* 2¹: ~62%
* 2²: ~68%
* 2³: ~79%
* 2⁴: ~78%
* **ThinkPRM-14B (4 thinking rounds) (Gray):** This line begins at approximately 51% accuracy at 2⁰ solutions. It rises more rapidly than the orange line, reaching approximately 81% accuracy at 2³ solutions, and plateaus at approximately 81% at 2⁴ solutions.
* 2⁰: ~51%
* 2¹: ~64%
* 2²: ~71%
* 2³: ~81%
* 2⁴: ~81%
* **ThinkPRM-14B@4 (Blue):** This line starts at approximately 51% accuracy at 2⁰ solutions. It increases rapidly, surpassing the other two lines, and reaches approximately 82% accuracy at 2³ solutions. It plateaus at approximately 81% at 2⁴ solutions.
* 2⁰: ~51%
* 2¹: ~64%
* 2²: ~72%
* 2³: ~82%
* 2⁴: ~81%
### Key Observations
* All three models start with similar accuracy at 2⁰ solutions.
* The models with "4 thinking rounds" (gray and blue lines) consistently outperform the base model (orange line) as the number of solutions increases.
* The ThinkPRM-14B@4 model achieves the highest accuracy, particularly at lower numbers of solutions (2⁰ to 2³).
* Accuracy plateaus for all models at higher numbers of solutions (2⁴).
### Interpretation
The data suggests that increasing the number of "thinking rounds" significantly improves the accuracy of the ThinkPRM-14B model on the MATH-500 dataset. The ThinkPRM-14B@4 model demonstrates the most substantial improvement, indicating that parallel scaling (represented by "@4") combined with multiple thinking rounds is highly effective. The plateau in accuracy at higher numbers of solutions suggests that there is a diminishing return from adding more solutions beyond a certain point. This could be due to the inherent limitations of the model or the dataset itself. The logarithmic scale on the x-axis emphasizes the rapid gains in accuracy achieved with a relatively small increase in the number of solutions, especially in the early stages. The fact that all lines start at the same point suggests that the initial performance is similar across all configurations, and the differences emerge as the models are given more opportunities to refine their solutions.
</details>
Figure 32: Parallel vs. sequential scaling of ThinkPRM compute under the same generation budget with the Qwen-2.5-14B generator. Parallel scaling (model@K) is done by independently sampling $K$ verification CoTs and aggregating their scores. Sequential scaling is done by prompting the model $K$ times to revise its own verification over $K$ thinking rounds. Both setups generate up to 8192 tokens per generation. We do not observe a clear winner, although parallel scaling seems slightly better, especially at larger sampling budgets.
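The sequential setup can be sketched as below, assuming a hypothetical `generate(prompt, max_tokens)` callable that stands in for the verifier and returns the verification text, its token count, and a score; this is an illustration of the revision loop, not the paper's exact implementation.

```python
def verify_sequential(problem, solution, generate, rounds=2, budget=8192):
    """Sequential scaling: prompt the verifier to revise its own
    verification for `rounds` thinking rounds under a shared token budget."""
    prompt = f"Problem: {problem}\nSolution: {solution}\nVerify step by step."
    used, score = 0, None
    for _ in range(rounds):
        text, n_tokens, score = generate(prompt, max_tokens=budget - used)
        used += n_tokens
        if used >= budget:
            break
        # Feed the previous verification back and ask for a revision.
        prompt += f"\n{text}\nRe-examine the verification above and revise it."
    return score

# Hypothetical stand-in generator, for illustration only.
def stub_generate(prompt, max_tokens):
    return ("...revised verification...", 1000, 0.8)

print(verify_sequential("p", "s", stub_generate, rounds=2))  # 0.8
```

The parallel counterpart spends the same budget on independent samples instead of revisions, which is why the two can be compared at equal token cost.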
E.7 Scaling training data of ThinkPRM
Here, we show results when training ThinkPRM-14B and ThinkPRM-1.5B using synthetic data from all of PRM800K. The goal is to show that ThinkPRM can still benefit from training on more synthetic data. We train both R1-Distill-Qwen-1.5B and R1-Distill-Qwen-14B on a total of 65K verification CoTs obtained by sampling and filtering as explained in Section 3.2. Figs. 35 and 38 show best-of-N performance with ThinkPRM-1.5B and ThinkPRM-14B, respectively, when trained on 65K examples, compared to training on 1K examples. Interestingly, ThinkPRM benefits from the additional training and can further improve the accuracy curve on MATH-500 compared to the 1K-trained version. We note, however, that while training on more math data boosts performance on MATH-500, we observe some performance drop on out-of-domain tasks due to the distribution shift.
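The sampling-and-filtering step can be sketched as follows; this is a simplified reading of Section 3.2 in which a sampled CoT is kept only if its per-step judgments all agree with the gold step labels (the function and regex are illustrative, not the paper's exact implementation).

```python
import re

LABEL_RE = re.compile(r"boxed\{(correct|incorrect)\}")

def filter_cots(sampled_cots, gold_step_labels):
    """Keep verification CoTs whose step judgments exactly match the gold
    labels; gold_step_labels is a list of booleans, one per solution step."""
    kept = []
    for cot in sampled_cots:
        judgments = [m == "correct" for m in LABEL_RE.findall(cot)]
        if judgments == gold_step_labels:
            kept.append(cot)
    return kept

cots = [
    r"Step 1 is \boxed{correct}. Step 2 is \boxed{incorrect}.",
    r"Step 1 is \boxed{correct}. Step 2 is \boxed{correct}.",
]
print(len(filter_cots(cots, [True, False])))  # 1
```

Exact-match filtering like this trades data quantity for label fidelity, which is why scaling from 1K to 65K kept CoTs requires sampling many more raw chains.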
<details>
<summary>x18.png Details</summary>

### Visual Description
## Line Chart: Scaling Training Data - MATH-500
### Overview
This line chart illustrates the relationship between the number of solutions used for training and the resulting accuracy on the MATH-500 dataset. Four different models are compared: ThinkPRM-1.5B@4 (1K), ThinkPRM-1.5B@4 (65K), DiscPRM-1.5B, and RLHFflow-8B-Deepseek. The generator used for all models is Llama 3.2-3B-Instruct.
### Components/Axes
* **Title:** Scaling training data: MATH-500
* **X-axis:** Number of solutions. Scale is logarithmic, with markers at 2<sup>1</sup>, 2<sup>3</sup>, 2<sup>5</sup>, and 2<sup>7</sup>.
* **Y-axis:** Accuracy (%). Scale ranges from 45% to 70%.
* **Legend:** Located at the bottom of the chart.
* ThinkPRM-1.5B@4 (1K) - Orange dashed line with triangle markers.
* ThinkPRM-1.5B@4 (65K) - Purple dashed line with triangle markers.
* DiscPRM-1.5B - Teal solid line with circle markers.
* RLHFflow-8B-Deepseek - Yellow solid line with circle markers.
### Detailed Analysis
* **ThinkPRM-1.5B@4 (1K):** The line starts at approximately 48% accuracy at 2<sup>1</sup> solutions, increases to around 58% at 2<sup>3</sup> solutions, then rises to approximately 65% at 2<sup>5</sup> solutions, and finally reaches about 68% at 2<sup>7</sup> solutions. The trend is generally upward, with diminishing returns as the number of solutions increases.
* **ThinkPRM-1.5B@4 (65K):** This line exhibits the steepest upward trend. It begins at approximately 50% accuracy at 2<sup>1</sup> solutions, jumps to around 63% at 2<sup>3</sup> solutions, continues to approximately 69% at 2<sup>5</sup> solutions, and reaches a peak of around 70% at 2<sup>7</sup> solutions.
* **DiscPRM-1.5B:** The line starts at approximately 52% accuracy at 2<sup>1</sup> solutions, increases to around 60% at 2<sup>3</sup> solutions, then rises to approximately 65% at 2<sup>5</sup> solutions, and plateaus at around 65% at 2<sup>7</sup> solutions.
* **RLHFflow-8B-Deepseek:** This line shows a moderate upward trend. It begins at approximately 47% accuracy at 2<sup>1</sup> solutions, increases to around 57% at 2<sup>3</sup> solutions, then rises to approximately 61% at 2<sup>5</sup> solutions, and finally reaches about 63% at 2<sup>7</sup> solutions.
### Key Observations
* The model ThinkPRM-1.5B@4 (65K) consistently outperforms the other models across all training data sizes.
* Increasing the training data size generally leads to improved accuracy for all models, but the rate of improvement diminishes as the number of solutions increases.
* DiscPRM-1.5B shows a plateau in accuracy after 2<sup>5</sup> solutions.
* RLHFflow-8B-Deepseek consistently has the lowest accuracy among the four models.
### Interpretation
The data suggests that scaling the training data size significantly improves the accuracy of these models on the MATH-500 dataset. The model ThinkPRM-1.5B@4 (65K) demonstrates the most substantial gains from increased training data, indicating that it benefits the most from a larger dataset. The logarithmic scale on the x-axis highlights the diminishing returns of adding more data; while initial increases in data size lead to large accuracy improvements, the gains become smaller as the dataset grows. The plateau observed in DiscPRM-1.5B suggests that this model may have reached its capacity to learn from the MATH-500 dataset, or that further improvements require architectural changes rather than simply more data. The consistent lower performance of RLHFflow-8B-Deepseek could be due to its architecture, training methodology, or other factors. Overall, the chart emphasizes the importance of data scaling in improving the performance of language models on mathematical reasoning tasks.
</details>
Figure 35: Best-of-N results with ThinkPRM-1.5B comparing the version trained on 1K examples (used throughout the paper) and a version trained on 65K examples. ThinkPRM benefits from training on more synthetic data as the performance can further improve with more training.
<details>
<summary>x19.png Details</summary>

### Visual Description
## Line Chart: Scaling Training Data - MATH-500
### Overview
This line chart illustrates the relationship between the number of solutions used for training and the resulting accuracy on the MATH-500 dataset. The chart compares the performance of different models: ThinkPRM-14B (trained with 1K solutions), ThinkPRM-14B (trained with 65K solutions), DiscPRM-14B, and a "Majority" model. The generator used for all models is Qwen2.5-14B.
### Components/Axes
* **Title:** Scaling training data: MATH-500
* **X-axis:** Number of solutions. Scale is logarithmic, with markers at 2⁰, 2¹, 2², 2³, 2⁴, and 2⁵.
* **Y-axis:** Accuracy (%). Scale ranges from 50% to 85%.
* **Legend:** Located at the bottom-right of the chart.
* ThinkPRM-14B (1K) - Orange line with star markers.
* ThinkPRM-14B (65K) - Purple line with circle markers.
* DiscPRM-14B - Teal line with diamond markers.
* Majority - Brown line with plus markers.
* **Generator:** Qwen2.5-14B (located at the top-left of the chart)
### Detailed Analysis
Here's a breakdown of each data series and their trends:
* **ThinkPRM-14B (1K) - Orange:** Starts at approximately 51% accuracy at 2⁰ solutions. The line slopes upward, reaching approximately 72% at 2² solutions, 78% at 2³ solutions, 81% at 2⁴ solutions, and 83% at 2⁵ solutions.
* **ThinkPRM-14B (65K) - Purple:** Begins at approximately 51% accuracy at 2⁰ solutions. The line increases steadily, reaching approximately 73% at 2² solutions, 79% at 2³ solutions, 83% at 2⁴ solutions, and 85% at 2⁵ solutions.
* **DiscPRM-14B - Teal:** Starts at approximately 51% accuracy at 2⁰ solutions. The line rises to approximately 68% at 2² solutions, 73% at 2³ solutions, 74% at 2⁴ solutions, and then decreases slightly to approximately 72% at 2⁵ solutions.
* **Majority - Brown:** Starts at approximately 51% accuracy at 2⁰ solutions. The line increases rapidly, reaching approximately 66% at 2¹ solutions, 70% at 2² solutions, 77% at 2³ solutions, 80% at 2⁴ solutions, and 82% at 2⁵ solutions.
### Key Observations
* All models start with similar accuracy (around 51%) at the lowest number of solutions (2⁰).
* ThinkPRM-14B (65K) consistently achieves the highest accuracy across all numbers of solutions.
* DiscPRM-14B shows a plateau and slight decrease in accuracy at higher numbers of solutions (2⁴ and 2⁵).
* ThinkPRM-14B (1K) and Majority models exhibit similar trends, with the Majority model slightly outperforming ThinkPRM-14B (1K) at 2⁴ and 2⁵.
### Interpretation
The data suggests that increasing the number of training solutions generally improves accuracy for these models on the MATH-500 dataset. The ThinkPRM-14B model, when trained with 65K solutions, demonstrates the best performance, indicating that a larger training dataset is beneficial for this model. The DiscPRM-14B model's performance plateaus and slightly declines with more solutions, which could indicate overfitting or diminishing returns from additional training data. The Majority model performs well, suggesting that a simple majority voting approach can be effective, especially with a sufficient number of solutions. The consistent starting point for all models suggests that the initial performance is likely determined by the generator (Qwen2.5-14B) rather than the specific training data size. The logarithmic scale on the x-axis emphasizes the impact of increasing the number of solutions, particularly at higher values.
</details>
Figure 38: Best-of-N results with ThinkPRM-14B comparing the version trained on 1K examples (used throughout the paper) and a version trained on 65K examples. ThinkPRM benefits from training on more synthetic data as the performance can further improve with more training.
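For reference, the best-of-N selection and majority-voting baselines behind these curves reduce to a few lines (a sketch; scores are assumed to come from the verifier, and final answers from the generator):

```python
from collections import Counter

def best_of_n(candidates, scores):
    """Best-of-N: pick the candidate solution with the highest verifier score."""
    best_idx = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best_idx]

def majority_vote(final_answers):
    """Majority baseline: most frequent final answer among N samples."""
    return Counter(final_answers).most_common(1)[0][0]

print(best_of_n(["ans A", "ans B"], [0.31, 0.92]))  # ans B
print(majority_vote(["40", "36", "40"]))            # 40
```

Best-of-N improves with N only insofar as the verifier ranks correct solutions above incorrect ones, which is what the accuracy curves above measure.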
E.8 Results with automatic labels
Table 4 shows performance when filtering training data based on manual labels (PRM800K) vs. automatic labels (Math-Shepherd) (Wang et al., 2023b). ThinkPRM still performs well even with automatic labels, and comparably to manual labels.
| | | |
| --- | --- | --- |
| ThinkPRM-1.5B (PRM800K) | 76.3 | 75.7 |
| ThinkPRM-1.5B (Math-Shepherd) | 75.8 | 76.5 |
Table 4: Comparison of ThinkPRM-1.5B trained on PRM800K vs. Math-Shepherd step labels.
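Math-Shepherd's automatic step labels come from Monte Carlo completions rather than human annotation; a hard-estimation sketch (the function and variable names are ours, for illustration):

```python
def shepherd_step_label(rollout_final_answers, gold_answer):
    """A step prefix is labeled correct if at least one completion sampled
    from it reaches the gold final answer (hard estimation); the soft
    variant would instead return the fraction of successful rollouts."""
    return any(ans == gold_answer for ans in rollout_final_answers)

print(shepherd_step_label(["36", "40", "42"], "40"))  # True
print(shepherd_step_label(["36", "42"], "40"))        # False
```

Because such labels need no human annotators, they make the filtering step cheap to scale, at the cost of some label noise relative to PRM800K.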
E.9 Verifier performance in terms of problem difficulty
<details>
<summary>x20.png Details</summary>

### Visual Description
## Bar Charts: Model Accuracy on Math and Physics Problems
### Overview
The image presents two side-by-side bar charts comparing the accuracy of two models, ThinkPRM-14B (orange) and DiscPRM-14B (teal), on math and physics problems. The x-axis represents problems binned by difficulty (1 to 5), and the y-axis represents accuracy in percentage (%). The left chart focuses on Math-500 problems, while the right chart focuses on GPQA-Physics problems.
### Components/Axes
* **X-axis Label (Both Charts):** "Problems binned by difficulty"
* **Y-axis Label (Both Charts):** "Accuracy (%)"
* **Left Chart Title:** "Best-of-32: Math-500"
* **Right Chart Title:** "Best-of-32: GPQA-Physics"
* **Legend (Bottom Center):**
* Orange: "ThinkPRM-14B"
* Teal: "DiscPRM-14B"
* **X-axis Markers (Both Charts):** 1, 2, 3, 4, 5 (representing difficulty levels)
* **Y-axis Markers (Both Charts):** 0, 20, 40, 60, 80, 100
### Detailed Analysis or Content Details
**Left Chart (Math-500):**
* **DiscPRM-14B (Teal):**
* Difficulty 1: Approximately 95% accuracy.
* Difficulty 2: Approximately 80% accuracy.
* Difficulty 3: Approximately 85% accuracy.
* Difficulty 4: Approximately 70% accuracy.
* Difficulty 5: Approximately 45% accuracy.
* Trend: The teal bars generally decrease in height from difficulty 1 to 5, indicating decreasing accuracy with increasing difficulty.
* **ThinkPRM-14B (Orange):**
* Difficulty 1: Approximately 98% accuracy.
* Difficulty 2: Approximately 90% accuracy.
* Difficulty 3: Approximately 95% accuracy.
* Difficulty 4: Approximately 70% accuracy.
* Difficulty 5: Approximately 40% accuracy.
* Trend: The orange bars also generally decrease in height from difficulty 1 to 5, mirroring the teal bars.
**Right Chart (GPQA-Physics):**
* **DiscPRM-14B (Teal):**
* Difficulty 1: Approximately 100% accuracy.
* Difficulty 2: Approximately 80% accuracy.
* Difficulty 3: Approximately 60% accuracy.
* Difficulty 4: Approximately 10% accuracy.
* Trend: The teal bars show a significant decrease in height from difficulty 1 to 4.
* **ThinkPRM-14B (Orange):**
* Difficulty 1: Approximately 100% accuracy.
* Difficulty 2: Approximately 95% accuracy.
* Difficulty 3: Approximately 70% accuracy.
* Difficulty 4: Approximately 15% accuracy.
* Trend: The orange bars also show a decrease in height from difficulty 1 to 4, but the decrease is less pronounced than for the teal bars.
### Key Observations
* In both charts, both models perform best on the easiest problems (difficulty 1) and their performance degrades as the difficulty increases.
* For Math-500, ThinkPRM-14B consistently outperforms DiscPRM-14B across all difficulty levels, though the difference is not substantial.
* For GPQA-Physics, ThinkPRM-14B also generally outperforms DiscPRM-14B, especially at higher difficulty levels. The performance drop for DiscPRM-14B is more dramatic on the physics problems.
* The accuracy of both models on the most difficult problems (difficulty 5 for Math-500 and difficulty 4 for GPQA-Physics) is significantly lower than on easier problems.
### Interpretation
The data suggests that both ThinkPRM-14B and DiscPRM-14B are capable of solving math and physics problems, but their performance is highly sensitive to the difficulty of the problems. ThinkPRM-14B appears to be slightly more robust to increasing difficulty, particularly in the GPQA-Physics domain. The substantial drop in accuracy for both models on the most difficult problems indicates a limitation in their ability to handle complex reasoning or knowledge requirements. The difference in performance between the two models on the physics problems could be due to differences in their training data or architectures, potentially making ThinkPRM-14B better suited for the specific challenges posed by physics questions. The charts provide a comparative performance assessment of the two models across different problem difficulties, highlighting their strengths and weaknesses. The consistent trend of decreasing accuracy with increasing difficulty is expected, as harder problems inherently require more sophisticated problem-solving skills.
</details>
Figure 39: ThinkPRM helps with challenging reasoning problems compared to DiscPRM. The generator model here is Qwen-2.5-14B for MATH-500 and Qwen-2.5-32B-Instruct for GPQA.
We do not estimate the difficulty of MATH problems, since each problem in MATH is already annotated with one of five difficulty levels. For GPQA-Physics problems, we first compute the pass@1 rate of Qwen2.5-32B-Instruct for every problem by sampling 32 solutions per problem. Then, we bin the pass@1 rates into four bins and use the bin assignment of each problem as its difficulty level. As discussed in Section 5.6, Fig. 39 shows how ThinkPRM helps more with verifying hard problems compared to DiscPRM.
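The binning procedure can be sketched as follows; equal-width bins over pass@1 are an assumption on our part, since the exact bin edges are not specified.

```python
def pass_at_1(correct_flags):
    """pass@1 estimated as the fraction of sampled solutions (32 in the
    setup above) that are correct."""
    return sum(correct_flags) / len(correct_flags)

def difficulty_bin(p, n_bins=4):
    """Map a pass@1 rate in [0, 1] to a difficulty level in {1, ..., n_bins}:
    high pass@1 -> easy (bin 1), low pass@1 -> hard (bin n_bins)."""
    return min(int((1.0 - p) * n_bins), n_bins - 1) + 1

print(difficulty_bin(1.0))  # 1  (always solved -> easiest)
print(difficulty_bin(0.0))  # 4  (never solved -> hardest)
```

Any monotone binning of pass@1 would serve here; the point is only to group problems by how often the generator solves them.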
| Model | Prompt | OlympiadBench | OmniMath |
| --- | --- | --- | --- |
| QwQ-32B | Single Yes/No | 65.4 (18.3 %) | 40.1 (6.4 %) |
| | Bad-step-index | 50.6 (7.9 %) | 69.0 (16.8 %) |
| | Decision-for-each-step | 36.6 (9.2 %) | 55.5 (3.2 %) |
| R1-Qwen-14B | Single Yes/No | 72.8 (13.3 %) | 67.8 (18.6 %) |
| | Bad-step-index | 77.7 (17.0 %) | 76.6 (17.9 %) |
| | Decision-for-each-step | 73.3 (14.6 %) | 71.4 (14.7 %) |
Table 5: F1-score on OlympiadBench and OmniMath subsets from ProcessBench with different prompts. The ratio of invalid judgments is shown in parentheses.
Example problem-solution pair from ProcessBench
Problem:
In quadrilateral $ABCD$ , $\mathrm{m}\angle B+\mathrm{m}\angle D=270^{\circ}$ . The circumcircle of $\triangle ABD$ intersects $\overline{CD}$ at point $E$ , distinct from $D$ . Given that $BC=4$ , $CE=5$ , and $DE=7$ , compute the diameter of the circumcircle of $\triangle ABD$ .
Solution:
Step 0: To solve this problem, we will use several properties of geometry, particularly related to circles and quadrilaterals. Let's break down the steps: First, we understand the given information. We have a quadrilateral $ABCD$ where $\angle B+\angle D=270^{\circ}$ . The circumcircle of $\triangle ABD$ intersects $\overline{CD}$ at point $E$ , distinct from $D$ . Additionally, $BC=4$ , $CE=5$ , and $DE=7$ .
Step 1: Next, we apply properties of cyclic quadrilaterals and circles. Since $E$ lies on the circumcircle of $\triangle ABD$ , $E$ is also part of the circumcircle of $\square ABCD$ (assuming $ABCD$ is cyclic, which will be justified later). In a cyclic quadrilateral, opposite angles sum up to $180^{\circ}$ . Given $\angle B+\angle D=270^{\circ}$ , it suggests an error in our initial understanding or misinterpretation, since angles cannot sum to more than $360^{\circ}$ in a quadrilateral. However, if we consider the context, let's assume a correction was intended for the sake of explanation: Correct interpretation: If we adjust for logical consistency, perhaps the intention was to imply that one of the angles at $B$ or $D$ is $90^{\circ}$ (since this would fit the property of cyclic quadrilaterals better, aligning with the geometry principles involved). For this explanation, let's proceed with an assumption that either $\angle B$ or $\angle D$ is $90^{\circ}$ , aiming for a coherent solution path.
Step 2: We then use the Power of a Point Theorem. The Power of a Point Theorem states that for a point $P$ outside a circle, the product of the lengths of the two segments from $P$ to the circle equals the product of the lengths of the other two segments when considering tangents or secants from that point. However, in this case, $E$ is not outside but on the circumference of the circle, and we apply it differently: Given $BC=4$ and $CE=5$ , $DE=7$ , and knowing $E$ is part of the circumcircle, we can apply the property of intersecting chords in a circle, which states that the product of the lengths of the segments of one chord is equal to the product of the lengths of the segments of the other chord. Thus, in this scenario, we consider $BC$ and $CD$ as intersecting chords, where $CD=BC+CE=4+5=9$ .
$$
BC\cdot CD=CE\cdot DE
$$
Substituting the known values:
$$
4\cdot 9=5\cdot DE
$$
$$
36=5\cdot DE
$$
Thus,
$$
DE=\frac{36}{5}=7.2.
$$
Step 3: This discrepancy suggests a reevaluation of the given values or a misunderstanding in their application within the context provided. However, for the purpose of demonstrating the method:
Step 4: Finally, we find the diameter of the circumcircle of $\triangle ABD$ . The key to finding the diameter involves recognizing that the circumcircle's radius or diameter is crucial for calculating distances involving points $A$ , $B$ , and $D$ . However, without specific details about how $E$ relates to these points geometrically beyond being on the circle, or additional information such as the length of another segment or angle measures, directly calculating the diameter becomes complex.
Step 5: The question's focus on $E$ and its relation to $BC$ , $CE$ , and $DE$ might suggest looking at the properties of chords in a circle or applying trigonometric relationships based on the angles given initially. However, due to the discrepancy noted and the complexity introduced by the misinterpretation of the angle condition, a direct calculation for the diameter of the circumcircle of $\triangle ABD$ isn't straightforward with the information provided.
Step 6: Given the complexities and discrepancies in the original problem statement, particularly with the angles summing to $270^{\circ}$ in a quadrilateral, the solution path becomes less clear. The explanation provided aimed to creatively interpret and apply geometric principles under the constraints of the given information. However, the direct calculation for the diameter of the circumcircle of $\triangle ABD$ requires additional, precise information about the positions and relationships of the points $A$ , $B$ , $C$ , $D$ , and $E$ within the quadrilateral and its circumcircle.
Figure 40: An example problem and solution from the OlympiadBench (He et al., 2024) in ProcessBench (Zheng et al., 2024).
Example from QwQ-32B-Preview
Let's verify step by step: Step 0: The initial step seems to correctly identify the given information in the problem. It mentions the quadrilateral $ABCD$ with angles at $B$ and $D$ summing to $270^{\circ}$ , and the circumcircle of $\triangle ABD$ intersecting $\overline{CD}$ at $E$ , distinct from $D$ . The lengths $BC=4$ , $CE=5$ , and $DE=7$ are also noted. This step is accurate in summarizing the problem statement.
Step 1: This step attempts to apply properties of cyclic quadrilaterals but seems to get confused about the sum of angles in quadrilateral $ABCD$ . It is mentioned that in a cyclic quadrilateral, opposite angles sum to $180^{\circ}$ , but the given sum of angles $\angle B$ and $\angle D$ is $270^{\circ}$ , which indeed seems inconsistent. However, the step then suggests assuming that one of the angles is $90^{\circ}$ for the sake of explanation, which is an adjustment to proceed with the solution. This part is a bit problematic because it alters the given information, but perhaps there was a misinterpretation in the problem statement.
Step 2: Here, the Power of a Point Theorem is applied to points related to the circle. However, there is confusion in calculating the length of $\overline{CD}$ . It states that $CD=BC+CE=4+5=9$ , but since $E$ is a point on $\overline{CD}$ distinct from $D$ , and $DE$ is given as $7$ , this suggests that $CD=CE+DE=5+7=12$ , not $9$ . This is an error in calculation. Furthermore, the application of the Power of a Point Theorem seems misapplied because $E$ is on the circle, not outside it, so the theorem should be applied differently, possibly considering intersecting chords.
Step 3: Recognizing the discrepancy in the previous step, the solution acknowledges the need for reevaluation, which is a good practice. However, without correcting the earlier mistake in calculating $\overline{CD}$ , the subsequent steps may be built on incorrect foundations.
Step 4: This step aims to find the diameter of the circumcircle of $\triangle ABD$ , which is the main goal of the problem. It notes that the diameter is related to the circumradius, but without specific details about the triangle's sides or angles, it's challenging to compute directly. The solution hints at the need for more information, which is accurate.
Step 5: The insight here is that the relationship between $E$ and the other points might provide the necessary information through chord properties or trigonometric relationships. However, due to the earlier discrepancies, this step does not lead to a concrete calculation.
Step 6: The conclusion reflects on the complexities and discrepancies in the problem and the solution approach. It acknowledges that the direct calculation for the diameter is not straightforward with the given information, which is a fair assessment.
Overall, while the solution attempt shows an understanding of relevant geometric principles, there are significant errors in calculation and application of theorems, particularly in Step 2. These errors propagate through the subsequent steps, leading to an inconclusive solution. Therefore, the solution is not correct.
\boxed {no}
Figure 41: Example verification chain generated by QwQ-32B-Preview in response to the problem-solution pair in Fig. 40.
Appendix F More details on LLM-as-judge verification
The instruction used for LLM-as-judge verification in Section 3.1 is shown in Fig. 42, which we refer to as Single-label. Table 5 shows the performance of QwQ-32B-Preview and Qwen-Deepseek-R1-14B with two more instructions. Bad-step-index refers to instructing the model to find the index of the first incorrect step and then generate a single yes/no label for the solution's correctness. Decision-for-each-step instructs the model to critique and produce a boxed label for each step. This instruction is shown in Fig. 21. We observe high variance in verification performance and invalid-judgment ratio depending on the prompt.
LLM-as-a-judge prompt
The following is a math problem and a solution (split into steps, enclosed with tags and indexed from 0): [Math Problem] {problem} [Solution] {prefix} Your task is to review and critique the solution step by step. Your output must be in the following format: Step 0: <critique> Step 1: <critique> … Step n: <critique> After your analysis, return yes/no depending on whether the solution is correct. Please put your final answer (i.e., yes/no) in \boxed{}
Figure 42: Prompt used for LLM-as-a-judge verification in Section 3.1
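Scoring a judge under this prompt requires extracting the final boxed yes/no from its output; below is a sketch of the parsing we assume, where outputs with no parsable label count toward the invalid-judgment ratio reported in Table 5.

```python
import re

VERDICT_RE = re.compile(r"\\boxed\s*\{\s*(yes|no)\s*\}", re.IGNORECASE)

def parse_verdict(judge_output):
    """Return 'yes'/'no' from the last boxed verdict in the output, or
    None if no parsable label is present (an invalid judgment)."""
    matches = VERDICT_RE.findall(judge_output)
    return matches[-1].lower() if matches else None

def invalid_ratio(outputs):
    """Fraction of judge outputs without a parsable verdict."""
    return sum(parse_verdict(o) is None for o in outputs) / len(outputs)

outs = [r"...the solution is wrong. \boxed{no}", "I cannot decide."]
print(parse_verdict(outs[0]))  # no
print(invalid_ratio(outs))     # 0.5
```

Taking the last match guards against models that mention \boxed{} earlier while reasoning before committing to a final answer.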
We also show an example from ProcessBench in Fig. 40 and the verification chain generated by QwQ-32B-Preview in Fig. 41.
Appendix G Examples from ThinkPRM
Fig. 44 shows an example of a verification CoT from ThinkPRM-14B on a problem and solution from the OmniMath subset in ProcessBench, shown in Fig. 43.
Figure 43: Problem and solution from the OmniMath subset of ProcessBench. The verification CoT output by ThinkPRM is shown in Fig. 44
Example verification chain-of-thought
Problem: A solid rectangular prism has dimensions 4 by 2 by 2. A 1 by 1 by 1 cube is cut out of the corner creating the new solid shown. What is the surface area of the new solid? Solution: Step 1: To determine the surface area of the new solid after cutting a 1 by 1 by 1 cube from one corner of a 4 by 2 by 2 rectangular prism, we will follow these steps: First, calculate the surface area of the original rectangular prism. The original rectangular prism has dimensions 4 by 2 by 2. The surface area $A_{\text{original}}$ of a rectangular prism is given by: $A_{\text{original}}=2(lw+lh+wh)$ where $l=4$ , $w=2$ , and $h=2$ . Substituting these values in, we get: $A_{\text{original}}=2(4\cdot 2+4\cdot 2+2\cdot 2)=2(8+8+4)=2\cdot 20=40$ Step 2: Second, determine the effect of cutting out the 1 by 1 by 1 cube. When we cut out a 1 by 1 by 1 cube from one corner, we remove three faces of the cube (one face for each dimension) but add three new faces of the same size. Each face of the cube is 1 by 1, so the area of each face is 1 square unit. Step 3: Third, calculate the change in surface area. Removing the three faces of the cube reduces the surface area by $3\times 1=3$ square units. Adding the three new faces increases the surface area by $3\times 1=3$ square units. Therefore, the net change in surface area is: $3-3=0$ Step 4: Finally, calculate the total surface area of the new solid. Since the net change in surface area is zero, the surface area of the new solid remains the same as the original surface area: $A_{\text{new}}=A_{\text{original}}=40$ Thus, the surface area of the new solid is $\boxed{40}$ .
Example verification chain-of-thought from ThinkPRM-14B
Verification: <think> Alright, I've got this math problem about finding the surface area of a solid after cutting out a small cube from a larger rectangular prism. The original prism has dimensions 4 by 2 by 2, and a 1 by 1 by 1 cube is removed from one corner. The provided solution has four steps, and I need to review each one to see if it's correct. Let's start with Step 1. Step 1: The solution calculates the surface area of the original rectangular prism. The formula used is $A_{\text{original}}=2(lw+lh+wh)$ , which is correct for a rectangular prism. Plugging in the dimensions $l=4$ , $w=2$ , and $h=2$ , the calculation goes like this: $A_{\text{original}}=2(4\cdot 2+4\cdot 2+2\cdot 2)=2(8+8+4)=2\cdot 20=40$ This seems right. So, Step 1 is \boxed{correct}. Step 2: Here, the solution talks about the effect of cutting out the 1 by 1 by 1 cube from one corner. It says that removing the cube takes away three faces but adds three new faces. Each face is 1 by 1, so each has an area of 1 square unit. Wait a minute, is this accurate? When you cut out a cube from a corner, you are indeed removing material, but how does that affect the surface area? Let me visualize this. Imagine a rectangular prism and cutting out a small cube from one corner. The cube has three faces exposed to the outside of the prism. When you remove the cube, those three faces are no longer part of the surface area. However, the removal also creates three new internal faces where the cube was taken out, which are now part of the surface area. So, the solution says that removing three faces reduces the surface area by 3 square units and adding three faces increases it by 3 square units, leading to no net change. That seems logical at first glance. But is this actually correct? Let me think deeper. When you remove the cube, you are taking away three 1x1 faces from the original prism, which indeed reduces the surface area by 3 square units. 
However, the new faces created are also 1x1, and there are three of them, so that adds 3 square units back. So, the total surface area remains the same. But I'm not entirely sure. Maybe I should consider if the new faces are actually exposed or if they are somehow internal. Wait, no. When you cut out the cube from the corner, the three new faces are now part of the exterior of the new solid, so they should be included in the surface area. Therefore, the net change is indeed zero, and the surface area remains 40. Hmm, but I recall that sometimes when you make a cut, the surface area can change in unexpected ways. Maybe I should double-check this. Let me consider a simpler example. Suppose I have a cube and I cut out a smaller cube from one corner. How does the surface area change? Original cube: surface area is $6s^{2}$ , where $s$ is the side length. After cutting out a smaller cube from one corner, I remove three faces of the smaller cube and expose three new faces. So, the change in surface area is $-3s^{2}+3s^{2}=0$ , similar to what the solution says. Alternatively, perhaps I can think about the new solid's dimensions. But I think the initial approach is correct. Therefore, Step 2 is \boxed{correct}. Step 3: This step calculates the net change in surface area as zero, which follows from Step 2. Since the net change is zero, the surface area remains the same as the original. So, Step 3 is \boxed{correct}. Step 4: Concludes that the surface area of the new solid is 40, the same as the original. Therefore, Step 4 is \boxed{correct}. Wait, but I recall that sometimes when you make a cut, especially in certain ways, the surface area can change. But in this case, it seems to hold up.
Figure 44: Example output by ThinkPRM-14B, where it verifies and labels every step in the provided prefix via a long verification CoT. The problem and solution are in Fig. 43.
Alternatively, maybe I should consider if the cube is cut from a corner where only two faces are on the exterior. Wait, no. In a rectangular prism, each corner is where three faces meet, so cutting out a cube from a corner would remove three faces and expose three new ones. But perhaps I should consider if the cube is cut from a corner where one of the original faces is larger than 1x1. Wait, the original prism is 4x2x2, and the cube being removed is 1x1x1, so the faces being removed are indeed 1x1. So, the calculation seems accurate. Alternatively, maybe I can think about it in terms of the total surface area. Alternatively, perhaps I can think about the surface area of the new solid directly. Let me try that. The new solid has the original surface area minus the three 1x1 faces that were removed, plus the three new 1x1 faces that are now exposed. So, 40 - 3 + 3 = 40. Alternatively, perhaps I can think about the surface area as the sum of all exterior faces. But I think the initial approach is correct. Therefore, the solution is correct. But the instruction says to review and critique each step, and determine if the whole solution is correct. So, in this case, all steps seem correct, leading to the conclusion that the surface area remains 40. </think>
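The surface-area argument in the example above can be double-checked numerically. The sketch below (our own illustrative check, not part of the ThinkPRM pipeline) represents a solid as a set of unit cubes and counts exposed unit faces, confirming that removing a corner cube leaves the surface area unchanged at 40:

```python
from itertools import product

def surface_area(cubes):
    """Surface area of a solid made of unit cubes: count faces with no neighbor."""
    cubes = set(cubes)
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return sum((x + dx, y + dy, z + dz) not in cubes
               for (x, y, z) in cubes for (dx, dy, dz) in dirs)

prism = set(product(range(4), range(2), range(2)))  # 4 x 2 x 2 prism
print(surface_area(prism))                          # 40 = 2(8 + 8 + 4)
notched = prism - {(0, 0, 0)}                       # cut a 1x1x1 cube from a corner
print(surface_area(notched))                        # 40: three faces lost, three gained
```

This matches the CoT's reasoning: the three removed faces are exactly offset by the three newly exposed ones.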
Verification of long CoT solution
Problem: Convert the point $(0,3)$ in rectangular coordinates to polar coordinates. Enter your answer in the form $(r,\theta)$ , where $r>0$ and $0\le\theta<2\pi$ . Solution: Okay, so I need to convert the rectangular coordinates $(0,3)$ to polar coordinates. Let me recall how to do this. From what I remember, polar coordinates are given as $(r,\theta)$ , where $r$ is the distance from the origin to the point, and $\theta$ is the angle made with the positive $x$ -axis. The formulas to convert between rectangular $(x,y)$ and polar $(r,\theta)$ coordinates are: $x=r\cos\theta,\quad y=r\sin\theta$ To find $r$ , we use: $r=\sqrt{x^{2}+y^{2}}$ To find $\theta$ , we use: $\theta=\arctan\left(\frac{y}{x}\right)$ But I need to be careful with the quadrant since $\arctan$ may be undefined or ambiguous when $x=0$ . Given $x=0$ and $y=3$ , compute: $r=\sqrt{0^{2}+3^{2}}=\sqrt{9}=3$ So $r=3$ . For $\theta$ , since $x=0$ , the expression $\arctan(y/x)$ is undefined due to division by zero. However, because the point lies on the positive $y$ -axis, $\theta=\frac{\pi}{2}$ . This is the standard angle for points directly above the origin. Therefore, the polar coordinates are \boxed{(3, \frac{\pi}{2})}. Wait, let me check if there's another way to get this. Since the point is $(0,3)$ , which is on the positive $y$ -axis, the angle is indeed $\frac{\pi}{2}$ . Could $\theta$ be $\frac{3\pi}{2}$ ? No, $\frac{3\pi}{2}$ is the negative $y$ -axis, which is the opposite direction. Since the point is on the positive $y$ -axis, $\theta=\frac{\pi}{2}$ . Let me recall again that when converting from rectangular to polar, $\theta$ is calculated as $\arctan(y/x)$ , but when $x=0$ , $\theta$ is either $\frac{\pi}{2}$ or $\frac{3\pi}{2}$ depending on the sign of $y$ . Here $y>0$ , so the correct angle is $\frac{\pi}{2}$ . 
… ThinkPRM-1.5B: <think> Step 1: The student correctly recalls the formulas for converting from rectangular to polar coordinates: $x=r\cos\theta,\quad y=r\sin\theta$ and $r=\sqrt{x^{2}+y^{2}},\quad\theta=\arctan\left(\frac{y}{x}\right)$ This step is \boxed{correct}. Step 2: The student computes: $r=\sqrt{0^{2}+3^{2}}=3$ This is \boxed{correct}. Step 3: The student identifies that $\theta=\arctan(3/0)$ is undefined but correctly infers that the point lies on the positive $y$ -axis, hence $\theta=\frac{\pi}{2}$ . This reasoning is also \boxed{correct}. … </think>
Figure 45: Example of verification of a long CoT solution from Qwen3-1.7B-thinking using ThinkPRM-1.5B. ThinkPRM extracts and verifies individual steps embedded in the long CoT.
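The coordinate conversion verified in Fig. 45 is easy to reproduce programmatically. A minimal sketch (our own illustration) using Python's `math.atan2`, which handles the $x=0$ case that makes $\arctan(y/x)$ undefined:

```python
import math

def to_polar(x, y):
    """Rectangular (x, y) -> polar (r, theta), with theta normalized to [0, 2*pi)."""
    r = math.hypot(x, y)                       # sqrt(x^2 + y^2)
    theta = math.atan2(y, x) % (2 * math.pi)   # atan2 avoids division by zero at x == 0
    return r, theta

r, theta = to_polar(0, 3)
print(r, theta)            # 3.0, pi/2 (the positive y-axis)
print(to_polar(0, -3)[1])  # 3*pi/2 (the negative y-axis, as the CoT discusses)
```

Using `atan2` rather than `arctan(y/x)` sidesteps exactly the quadrant ambiguity the solution reasons through by hand.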