# RPM-MCTS: Knowledge-Retrieval as Process Reward Model with Monte Carlo Tree Search for Code Generation
**Authors**: Yuanyuan Lin, Xiangyu Ouyang, Teng Zhang, Kaixin Sui
## Abstract
Tree search-based methods have made significant progress in enhancing the code generation capabilities of large language models. However, due to the difficulty in effectively evaluating intermediate algorithmic steps and the inability to locate and promptly correct erroneous steps, these methods often generate incorrect code and incur increased computational costs. To tackle these problems, we propose RPM-MCTS, an effective method that utilizes Knowledge-Retrieval as a Process Reward Model based on Monte Carlo Tree Search to evaluate intermediate algorithmic steps. By utilizing knowledge base retrieval, RPM-MCTS avoids the complex training of process reward models. During the expansion phase, similarity filtering is employed to remove redundant nodes, ensuring diversity in reasoning paths. Furthermore, our method utilizes sandbox execution feedback to locate erroneous algorithmic steps during generation, enabling timely and targeted corrections. Extensive experiments on four public code generation benchmarks demonstrate that RPM-MCTS outperforms current state-of-the-art methods while reducing token consumption by approximately 15%. Moreover, full fine-tuning of the base model on data constructed by RPM-MCTS significantly enhances its code capabilities.
## Introduction
Code generation aims to understand problem descriptions in natural language and generate the corresponding code snippets. In recent years, large language models (LLMs) have demonstrated remarkable performance on code generation tasks (zhang2024unifying). Early methods divide code planning and synthesis into two phases using chain-of-thought or tree structures (wei2022chain; jiang2024self; zelikman2023parsel). wang2024planning have demonstrated that providing LLMs with a correct solution can significantly enhance model performance, even when these solutions consist of incomplete plans; that is, for solutions, correctness is preferred over completeness, and the key to enhancing the code generation capability of LLMs lies in generating correct plans.
Programming languages possess their own inherent logical structures and tightly interconnected knowledge, which makes it essential not to overlook long-range dependencies within the code. Previous work has shown that rotary position embedding does not always lead to attention weights decaying with relative distance (barbero2024round). Concurrently, through exploring the attention distribution between tokens, we have experimentally demonstrated that selecting algorithmic steps as the basic units is a superior choice. Therefore, our objective focuses on how to accurately generate intermediate algorithmic steps.
However, a limitation of previous methods lies in the lack of an evaluation and correction mechanism for intermediate algorithmic steps, which fails to guarantee the correctness of these steps (lu2025lsr; li2025structured). One way to tackle this issue is to use a value function or reward model to verify reasoning traces for correctness, which then serves as a learning signal for self-training (lightman2023let; wang2023math). However, training a reliable reward model to verify every step in a reasoning trace generally depends on dense human-generated annotations per reasoning step (lightman2023let), which does not scale well.
Unlike other reasoning tasks, code generation benefits from the homogeneity of algorithmic workflows across different problem categories. This allows us to leverage historical experience from a knowledge base containing numerous correct algorithmic steps to evaluate the process reward of expansion steps. Additionally, code generation typically benefits from detailed feedback provided by compilers. Consequently, in this paper, we propose RPM-MCTS, which optimizes the Monte Carlo Tree Search (MCTS) algorithm using external information feedback. Our method utilizes the knowledge base for intermediate algorithmic step-level evaluation and employs sandbox feedback for result-level assessment of complete code. Specifically, the root node of the Monte Carlo tree represents the coding problem, while all other nodes represent individual algorithmic steps. During each iteration, multiple distinct potential next steps are generated based on the current reasoning path. Node selection is guided by historical experience from the knowledge base, enabling faster discovery of high-value search paths. In the simulation phase, complete code is generated and evaluated using sandbox and model feedback to update node values. Notably, during simulation, we localize erroneous steps within the full algorithmic workflow and incorporate newly generated correct steps into the tree, thereby reducing token consumption. After multiple iterations, the highest-scoring path from root to leaf is selected, ultimately yielding a complete solution alongside its corresponding code. The contributions are summarized as follows:
- We propose RPM-MCTS, which leverages knowledge base retrieval scores to evaluate intermediate algorithmic steps, steering LLMs to explore high-value reasoning paths more effectively.
- We leverage sandbox feedback during the simulation phase to evaluate code generated from reasoning steps, localize errors, and truncate simulations, thereby reducing computational costs.
- We conduct extensive experiments and show that RPM-MCTS is superior to state-of-the-art methods. Moreover, we verify that base models fine-tuned with data generated by RPM-MCTS enjoy greater code capabilities.
## Related Work
#### Monte Carlo Tree Search.
As the extension of Chain-of-Thought (CoT) (wei2022chain), Tree-of-Thought (ToT) (yao2023tree) enhances the reasoning and planning capabilities of LLMs by exploring different thought paths within a tree structure. Subsequently, Monte Carlo Tree Search has served as a search algorithm to more effectively guide LLMs in exploring intermediate sub-steps (zhao2023large; hao2023reasoning; zhou2023language; ding2023everything). ReST-MCTS* (zhang2024rest) combines process reward guidance with Monte Carlo Tree Search to collect high-quality reasoning trajectories and step-by-step values for training strategy and reward models. SRA-MCTS (xu2024sra) further extends this to the field of code generation, using Monte Carlo Tree Search to generate intermediate reasoning steps and conducting iterative self-evaluation to synthesize training data for supervised fine-tuning. However, relying solely on model self-evaluation introduces biases and hallucinations, and small-scale LLMs exhibit limited instruction-following capabilities. RethinkMCTS (li2025rethinkmcts) is another prior work that also uses execution feedback but employs a patching strategy. If this patch fails, the search may proceed on an incorrect path, making it less suitable for generating high-quality SFT data.
#### Process Evaluation.
In heuristic search, a robust reasoning process needs to have self-evaluation capabilities, and the evaluation results are further used to guide the search. Early work mainly focused on outcome-level evaluation (cobbe2021training), that is, evaluating the complete solution after the reasoning is completed. Outcome-level evaluation is simple to implement but often requires more detailed assessment. Step-level evaluation (lightman2023let; wang2023math; gao2024llm) emphasizes the assessment of individual reasoning steps. In tree search algorithms, process evaluation is widely used to guide search trajectories. Logic-RL (xie2025logic) optimizes path selection by implementing state scoring in beam search. Furthermore, step-level evaluation has proven its effectiveness in both error correction and the summarization of reasoning steps. zheng2024makes developed a method capable of accurately locating inaccuracies in specific reasoning steps, thereby providing more precise and actionable feedback for comprehensive evaluation.
## Method
In this section, we elaborate on the proposed modified MCTS that incorporates the knowledge base as a process reward model. The methodology comprises three key components: knowledge base construction, RPM-MCTS, and code generation. First, knowledge base retrieval scores circumvent random selection during node expansion. Then, in the expansion phase, nodes are filtered based on similarity metrics to eliminate redundant candidates. Finally, during the simulation phase, the algorithm performs error reflection and retains nodes with verified correct reasoning. These collective strategies enable faster exploration of higher-quality algorithmic steps.
### Knowledge Base Construction
In this section, we introduce the construction of a retrievable global knowledge base designed to mitigate hallucination during the planning process. Due to the homogeneity of algorithms within the same category, where fundamental principles and methods are relatively similar, we utilize a knowledge base containing numerous correct algorithms across diverse categories. This serves as the evaluation model for intermediate algorithmic steps in RPM-MCTS, eliminating the need to train a separate process reward model.
We use the training set data from APPS (hendrycks2021measuring) and CodeContests (li2022competition), which contain coding problems paired with correctly implemented solutions. We utilize Claude Sonnet 3.7 to generate the correct algorithmic steps corresponding to each correct solution and decompose them step by step. We then roll out each problem along its algorithmic steps, sequentially concatenating the problem with successive step prefixes. Specifically, for problem $p_{i}$ with $n_{i}$ algorithmic steps, where $a_{i}^{(j)}$ denotes the $j$-th step, we have
$$
\displaystyle\mathcal{K}_{i}=\{\mathrm{concat}(p_{i},a_{i}^{(1)},\ldots,a_{i}^{(j)}),~j=1,2,\ldots,n_{i}\}, \tag{1}
$$
and $\mathcal{K}=\uplus_{i=1}^{n}\mathcal{K}_{i}$ is the knowledge base aggregated over all $n$ problems.
To enhance retrieval efficiency and improve retrieval precision by distinguishing between problems with similar descriptions but different algorithmic solutions, we organize the knowledge base into 14 distinct algorithm categories and store them in a vector database using the BGE (xiao2024c) embedding model.
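The prefix construction of Eqn. (1) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: `embed` is a toy stand-in for the BGE embedding model, and entries are kept in a plain list rather than a real vector database.

```python
# Knowledge-base construction sketch (Eqn. 1): every prefix of a problem's
# algorithmic steps becomes one retrievable entry.
from dataclasses import dataclass, field


@dataclass
class KBEntry:
    category: str  # one of the 14 algorithm categories
    text: str      # concat(p_i, a_i^(1), ..., a_i^(j))
    vector: list = field(default_factory=list)


def embed(text: str) -> list:
    # Placeholder for the BGE embedding model; returns a deterministic toy vector.
    return [float(len(text) % 7), float(len(text) % 11)]


def build_entries(problem: str, steps: list[str], category: str) -> list[KBEntry]:
    """Create one knowledge-base entry per step prefix, as in Eqn. (1)."""
    entries = []
    for j in range(1, len(steps) + 1):
        text = "\n".join([problem] + steps[:j])
        entries.append(KBEntry(category, text, embed(text)))
    return entries


kb = build_entries(
    "Use Python to calculate 5! and output the result",
    ["n = 5", "p = 1", "multiply p by each i in 1..n", "print(p)"],
    category="math",
)
```

A problem with $n_i$ steps thus contributes exactly $n_i$ entries, each embedding the problem statement together with a progressively longer step prefix.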
### RPM-MCTS
We propose an enhanced MCTS method, named RPM-MCTS. In this method, the root node represents the problem, while every other node represents an algorithmic step. Specifically, the method comprises four distinct phases: Selection, Expansion, Evaluation and Reflection, and Backpropagation, as shown in Figure 1. These phases are performed on a search tree composed of tree nodes and are iterated multiple times, with each iteration generating a concrete algorithmic step.
Figure 1: Overview of RPM-MCTS. (a) Selection: Select a leaf node according to Eqn. (2). (b) Expansion: After selecting a node, expand multiple child nodes, and use knowledge base retrieval scores and LLM evaluation to select nodes for simulation. The node color represents similarity magnitude. (c) Evaluation: Generate complete reasoning steps for the selected node, generate code strictly in accordance with these reasoning steps, and use a sandbox for information feedback. (d) Backpropagation: Propagate the reward scores backward. The yellow root node represents the problem, and the remaining nodes represent each reasoning step.
#### Selection.
In the selection phase, a leaf node is selected from the current tree for further expansion according to the selection score, which is defined as a weighted combination of the Upper Confidence Bound (UCB) (silver2017mastering) and the knowledge base retrieval score:
$$
\displaystyle\mathrm{SelectionScore}(s,a)=\mathrm{UCB}(s,a)+\alpha K(s,a), \tag{2}
$$
where $(s,a)$ denotes a state-action pair with $s$ containing the description of the problem and previously generated algorithmic steps and $a$ representing the new step at the current node. The parameter $\alpha$ is for balancing the two terms.
UCB is a classical multi-armed bandit algorithm that performs well in addressing the exploration-exploitation trade-off. UCB selects actions by computing an upper confidence estimate of each action’s potential reward:
$$
\displaystyle\mathrm{UCB}(s,a)=Q(s,a)+\beta\sqrt{\frac{\log N(s)}{1+N(s,a)}}, \tag{3}
$$
where $Q(s,a)$ represents the empirical mean cumulative reward after taking action $a$ from state $s$ , $N(s)$ is the number of times state $s$ has been explored in the current context, and $N(s,a)$ is the number of times action $a$ has been taken in state $s$ . The parameter $\beta$ is for trading off the exploitation (the former term) and exploration (the latter term).
The knowledge base retrieval score $K(s,a)$ is obtained by retrieving the concatenated $(s,a)$ pair from the knowledge base. Specifically, let $f$ denote the embedding model that maps $(s,a)$ to a vector with the same dimension as the knowledge base. Given the preceding reasoning path, the knowledge base retrieval score for the current node is calculated as follows:
$$
\displaystyle K(s,a)=\max\left(0,\max_{k\in\mathcal{K}}\frac{f((s,a))\cdot k}{\|f((s,a))\|\cdot\|k\|}\right). \tag{4}
$$
The knowledge base similarity score $K(s,a)$ enables acquisition of step-wise assessments prior to the evaluation phase. In other words, when newly generated nodes remain unexplored, we prioritize leveraging historically validated solutions through knowledge base retrieval scores to identify higher-value nodes.
Starting from the root node, we recursively select the child node with the maximum $\mathrm{SelectionScore}$ value at each branching point. Selection ties are resolved stochastically. Each iteration advances to the highest-scoring child node until reaching a leaf node.
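Putting Eqns. (2)-(4) together, the selection score can be sketched in a few lines. This is a minimal illustration, not the paper's code: the hyperparameter values mirror those reported in the implementation details ($\alpha=\beta=0.5$), and the node and knowledge-base vectors are assumed to be pre-computed embeddings.

```python
import math


def ucb(q, n_state, n_action, beta=0.5):
    """Eqn. (3): empirical mean reward plus an exploration bonus."""
    return q + beta * math.sqrt(math.log(n_state) / (1 + n_action))


def retrieval_score(vec, kb_vectors):
    """Eqn. (4): maximum cosine similarity to the knowledge base, floored at 0."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    return max(0.0, max(cos(vec, k) for k in kb_vectors))


def selection_score(q, n_state, n_action, vec, kb_vectors, alpha=0.5, beta=0.5):
    """Eqn. (2): weighted combination of UCB and the retrieval score."""
    return ucb(q, n_state, n_action, beta) + alpha * retrieval_score(vec, kb_vectors)
```

Because `retrieval_score` needs no visit statistics, it supplies a meaningful prior for nodes whose $Q(s,a)$ and $N(s,a)$ are still zero, which is exactly the role the knowledge base plays before the evaluation phase.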
Figure 2: (a) Token-level attention heatmap for code corresponding to the programming problem. (b) Algorithmic steps and corresponding code for the programming problem. (c) Attention sink phenomenon.
#### Expansion.
Upon selecting a leaf node during the selection phase, the expansion phase generates the remaining child nodes, thereby expanding the search scope of the entire tree. Since attention weights between tokens do not always decay with relative distance (barbero2024round), we conduct an in-depth study of the attention mechanism between tokens in LLMs during code generation to reveal the influencing factors among tokens. As shown in Figure 2, certain tokens have a profound impact on subsequent code generation. It can thus be inferred that these key tokens summarize and interpret the information of previously generated tokens and have higher reference value for subsequent token generation. Meanwhile, related studies (barbero2025llms; xiao2023efficient) have shown that modern LLMs exhibit the phenomenon of “attention sink”: numerous attention heads allocate a disproportionate share of weights, sometimes exceeding 30% or even 80%, to the beginning-of-sequence token ⟨bos⟩, despite its primary function as a sequence delimiter with minimal semantic content. Therefore, to facilitate our examination of inter-token dependencies in code generation tasks, we selectively visualize token attention at designated layers. Figure 2 (c) shows that attention not only sinks to ⟨bos⟩ but also peaks at algorithmic step boundaries, justifying that algorithmic step blocks are more effective basic processing units in code generation tasks. We therefore select algorithmic steps as the basic units for expansion.
To ensure diversity in generated steps during the expansion phase, we implement a sampling decoding strategy that sequentially generates each child node. Specifically, to prevent repetitive generation by the LLM, we iteratively provide all previously generated steps as context when producing each new step. The input for the LLM is
$$
\displaystyle\mathrm{concat}(s,a_{1},\ldots,a_{i},g),~i=1,2,\ldots,b \tag{5}
$$
where $g$ represents the reflection in the simulation phase, and $b$ denotes the maximum number of branches each node can expand.
After expanding $b$ nodes, we apply cosine-similarity filtering to reduce computational costs by avoiding simulations on redundant nodes. Specifically, we map the reasoning steps of the $b$ nodes to vectors using the embedding model $f$ and calculate the pairwise cosine similarities between these $b$ nodes. When the similarity exceeds a predetermined threshold, the node is identified as redundant and filtered out. This method effectively reduces the search space and enhances algorithmic efficiency while maintaining diversity.
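The filtering step can be sketched as a greedy pairwise deduplication, assuming the step embeddings are already computed; the 0.85 threshold follows the implementation details reported later, and the greedy keep-first policy is an illustrative assumption.

```python
import math


def _cos(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def filter_redundant(steps, vectors, threshold=0.85):
    """Keep an expanded step only if its cosine similarity to every
    already-kept step stays below the threshold; later near-duplicates
    are discarded as redundant nodes."""
    kept_idx = []
    for i in range(len(steps)):
        if all(_cos(vectors[i], vectors[j]) < threshold for j in kept_idx):
            kept_idx.append(i)
    return [steps[i] for i in kept_idx]
```

With $b=3$ expanded children, at most three similarity comparisons are needed, so the filtering cost is negligible next to a single avoided simulation.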
#### Evaluation and Reflection.
During the evaluation phase, simulation and evaluation are performed for the selected leaf nodes. We provide the LLM with the algorithmic steps $s$ already generated for the node and its ancestor nodes, instructing the LLM to strictly follow these steps and continue simulating until all remaining steps are complete. In other words, we search over thoughts but evaluate with the code generated by following those thoughts.
The generated code undergoes sandbox evaluation using public test cases. However, since public test cases only cover a subset of possible scenarios, the code may fail on unseen cases, such as boundary conditions or performance issues. We therefore employ the LLM to analyze the complete algorithmic steps based on sandbox feedback.
We assess the steps generated during the expansion phase through two components, which are the pass rate on public test cases and LLM evaluation. The final evaluation score is obtained by weighted summation of these two scores. The formula is as follows:
$$
\displaystyle Q(s,a)=\gamma\cdot r_{\text{exec}}+(1-\gamma)\cdot r_{\text{LLM}} \tag{6}
$$
where $r_{\text{exec}}$ denotes the pass rate on public test cases, $r_{\text{LLM}}$ represents the score from LLM evaluation based on the sandbox feedback results and complete steps provided to the LLM, and $\gamma$ indicates the weight controlling these two parts of the scores.
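Eqn. (6) reduces to a one-line weighted sum. The sketch below assumes both scores are normalized to $[0,1]$ and uses an illustrative $\gamma=0.5$; the paper does not report its exact value for $\gamma$.

```python
def node_reward(passed: int, total: int, llm_score: float, gamma: float = 0.5) -> float:
    """Eqn. (6): Q(s,a) = gamma * r_exec + (1 - gamma) * r_LLM.

    passed/total -- public test cases passed in the sandbox (r_exec).
    llm_score    -- LLM evaluation score, assumed normalized to [0, 1].
    gamma        -- illustrative weight; not reported in the paper.
    """
    r_exec = passed / total if total else 0.0
    return gamma * r_exec + (1 - gamma) * llm_score
```

For example, code passing 3 of 4 public tests with an LLM score of 0.8 receives $0.5\cdot0.75+0.5\cdot0.8=0.775$.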
For code that fails on public test cases, we isolate erroneous algorithmic steps by decomposing the code into blocks and sequentially debugging each block via LLM analysis with public test inputs (zhong2024debug). We retain all correct steps generated during the simulation phase, truncating the trajectory just before the first erroneous step. These validated steps are then incorporated into the MCTS tree as expanded nodes.
The entire RPM-MCTS process is terminated when the solution passes all public test cases and achieves a high LLM evaluation score. Otherwise, node updates and reflection are performed, and the RPM-MCTS process proceeds until the maximum iteration count is reached.
#### Backpropagation.
The objective of backpropagation is to update the reward values of nodes upon completion of state value evaluation. We propagate reward values backward from leaf nodes to the root node, updating the state estimates of all nodes along the path. For newly generated nodes during the expansion phase, they collectively update their parent node. As the number of simulations increases, these value estimates become increasingly accurate. This process repeats until the preset maximum simulation count is reached, ultimately resulting in a search tree that records the state value and visit count for each node.
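The backup itself is a standard MCTS update; since the paper does not spell out the exact update rule, the running-mean form below is an assumption, as is the minimal `Node` structure.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    value: float = 0.0              # running-mean reward estimate Q
    visits: int = 0                 # visit count N
    parent: "Node | None" = None
    children: list = field(default_factory=list)


def backpropagate(leaf: Node, reward: float) -> None:
    """Propagate the evaluation reward from a leaf to the root, updating
    the value estimate and visit count of every node on the path.
    The running-mean update is a standard MCTS choice, not stated
    explicitly in the paper."""
    node = leaf
    while node is not None:
        node.visits += 1
        node.value += (reward - node.value) / node.visits
        node = node.parent
```

Each backup touches only the nodes on one root-to-leaf path, so its cost is linear in tree depth, and repeated simulations drive each node's `value` toward its empirical mean reward.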
### Generate Code
Termination of the RPM-MCTS process occurs under two conditions: 1) If all public test cases are passed and LLM analysis confirms robustness to unseen edge cases before reaching maximum iterations, the code generated during the simulation phase is retained. 2) When maximum iterations are reached without meeting termination criteria, the leaf node with the highest state value is selected, its ancestral path is traced, and the LLM is instructed to generate code by rigorously adhering to the algorithmic steps assembled from this path.
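Condition 2) amounts to finding the highest-value leaf and tracing it back to the root. A sketch under an assumed dict-based node structure (`step`, `value`, `children`, `parent` are illustrative field names, not the paper's):

```python
def best_path(root: dict) -> list:
    """Return the algorithmic steps along the ancestral path of the
    highest-value leaf; the root holds the problem itself and
    contributes no step."""
    # Collect all leaves by depth-first traversal.
    leaves, stack = [], [root]
    while stack:
        node = stack.pop()
        if node["children"]:
            stack.extend(node["children"])
        else:
            leaves.append(node)
    best = max(leaves, key=lambda n: n["value"])
    # Walk back to the root, collecting steps in reverse.
    path = []
    while best is not None and best is not root:
        path.append(best["step"])
        best = best["parent"]
    return list(reversed(path))
```

The resulting step sequence is then assembled into the prompt from which the LLM generates the final code.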
## Experiments
### Experimental Settings
#### Datasets.
For the construction of the knowledge base, we use the training splits of APPS (hendrycks2021measuring) and CodeContests (li2022competition) as data sources. After validation and filtering, we obtained 11,038 samples with a total of 82,923 steps. For benchmarking, we use the test splits of APPS and CodeContests, as well as HumanEval+ (liu2023your) and MBPP+ (liu2023your). The APPS dataset contains three difficulty levels: introductory, interview, and competition; we selected 150 validated samples from each. The CodeContests dataset consists of competitive programming problems collected from contest websites such as Codeforces. Additionally, HumanEval (chen2021evaluating) and MBPP (austin2021program) are widely recognized benchmarks in the code generation domain, while HumanEval+ and MBPP+ introduce a larger number of test cases to enable more accurate evaluation. We utilized Claude Sonnet 3.7 to convert all datasets into a unified format comprising the problem statement, public test cases, private test cases, and a standard solution. To facilitate sandbox execution, we transformed datasets with standard input-output problems into function definitions with docstrings. For datasets without public test cases, we used the first two private test cases as public test cases.
#### Baselines.
We selected the following methods as baselines for comparison. Base LLM refers to directly prompting the LLM to output solution code using the problem statement and public test cases as input. LDB (zhong2024debug) leverages the LLM to track intermediate variables during code execution to iteratively improve the code. ToT (yao2023tree) performs a search of thought steps using DFS or BFS before generating the final code. SRA-MCTS (xu2024sra) combines LLM with MCTS to explore intermediate reasoning steps. The complete steps obtained by SRA-MCTS are used as input to prompt the LLM to directly infer and output the solution code for evaluation.
#### Implementation Details.
We use two large-parameter backbone models, Qwen3-235B-A22B (yang2025qwen3) and Claude Sonnet 3.7, alongside a smaller-parameter model, Qwen3-8B. In the code generation domain, pass@k (chen2024survey) is a widely used metric, and we adopted pass@1 as the evaluation metric. The rollout, i.e., maximum number of iterations, was set to 5 for all methods. The branching factor $b$ for tree-based methods was set to 3. The exploration constant $\beta$ for UCB was set to 0.5. In RPM-MCTS, the weight of the knowledge base retrieval score $\alpha$ was set to 0.5, and the similarity filtering threshold was set to 0.85.
### Main Results
| Method | APPS-Intro. | APPS-Interv. | APPS-Comp. | CodeContests | HumanEval+ | MBPP+ | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Qwen3-8B | | | | | | | |
| Base LLM | 56.7 | 35.3 | 29.3 | 10.7 | 75.6 | 72.2 | 52.1 |
| LDB | 64.0 (+7.3) | 42.0 (+6.7) | 28.0 (-1.3) | 11.3 (+0.7) | 78.1 (+2.4) | 70.1 (-2.1) | 53.5 (+1.4) |
| ToT | 69.3 (+12.7) | 54.0 (+18.7) | 41.3 (+12.0) | 17.3 (+6.7) | 82.3 (+6.7) | 70.4 (-1.9) | 59.0 (+6.9) |
| SRA-MCTS | 67.3 (+10.7) | 42.7 (+7.3) | 29.3 (+0.0) | 16.0 (+5.3) | 73.8 (-1.8) | 65.9 (-6.4) | 52.8 (+0.7) |
| Ours w/o KB | 76.7 (+20.0) | 56.7 (+21.3) | 40.7 (+11.3) | 22.3 (+11.6) | 82.3 (+6.7) | 78.3 (+6.1) | 63.5 (+11.4) |
| Ours | 77.3 (+20.7) | 60.0 (+24.7) | 43.6 (+14.3) | 23.0 (+12.3) | 83.5 (+7.9) | 76.2 (+4.0) | 64.0 (+11.9) |
| Qwen3-235B-A22B | | | | | | | |
| Base LLM | 78.0 | 54.7 | 42.7 | 24.0 | 86.0 | 78.8 | 64.6 |
| LDB | 78.7 (+0.7) | 61.3 (+6.7) | 47.3 (+4.7) | 25.3 (+1.3) | 86.0 (+0.0) | 78.8 (+0.0) | 66.4 (+1.8) |
| ToT | 84.7 (+6.7) | 62.7 (+8.0) | 57.3 (+14.7) | 27.3 (+3.3) | 85.4 (-0.6) | 75.4 (-3.4) | 67.7 (+3.1) |
| SRA-MCTS | 76.0 (-2.0) | 52.7 (-2.0) | 44.0 (+1.3) | 24.7 (+0.7) | 85.4 (-0.6) | 70.9 (-7.9) | 61.7 (-3.0) |
| Ours w/o KB | 88.0 (+10.0) | 72.0 (+17.3) | 52.0 (+9.3) | 34.7 (+10.7) | 86.6 (+0.6) | 79.9 (+1.1) | 71.3 (+6.7) |
| Ours | 86.7 (+8.7) | 67.3 (+12.7) | 59.3 (+16.7) | 36.7 (+12.7) | 87.8 (+1.8) | 81.2 (+2.4) | 72.3 (+7.7) |
| Claude Sonnet 3.7 | | | | | | | |
| Base LLM | 78.7 | 56.0 | 59.3 | 31.3 | 82.9 | 77.8 | 67.3 |
| LDB | 82.0 (+3.3) | 64.7 (+8.7) | 73.3 (+14.0) | 33.3 (+2.0) | 88.4 (+5.5) | 77.0 (-0.8) | 71.5 (+4.2) |
| ToT | 84.0 (+5.3) | 68.0 (+12.0) | 66.0 (+6.7) | 39.3 (+8.0) | 86.0 (+3.1) | 74.6 (-3.2) | 70.8 (+3.6) |
| SRA-MCTS | 83.3 (+4.7) | 63.3 (+7.3) | 62.0 (+2.7) | 36.0 (+4.7) | 81.1 (-1.8) | 74.3 (-3.4) | 68.4 (+1.1) |
| Ours w/o KB | 92.0 (+13.3) | 73.3 (+17.3) | 78.0 (+18.7) | 42.7 (+11.3) | 86.6 (+3.7) | 79.1 (+1.3) | 76.2 (+8.9) |
| Ours | 92.0 (+13.3) | 74.0 (+18.0) | 81.3 (+22.0) | 46.0 (+14.7) | 89.0 (+6.1) | 81.0 (+3.2) | 78.1 (+10.9) |
Table 1: Performance comparison of all methods across different backbone models on code generation benchmarks. Values in parentheses indicate the improvement over the base LLM.
Our method achieves the most significant improvements across different backbone models and datasets. As shown in Table 1, Qwen3-8B achieves an average improvement of 11.90%, Qwen3-235B-A22B of 7.71%, and Claude Sonnet 3.7 of 10.86%. Since the base Qwen3-8B performs worse than the two larger base LLMs across all datasets, especially the simpler ones, Qwen3-8B shows the most significant improvement when using RPM-MCTS. On the two more challenging datasets, APPS-competition and CodeContests, Qwen3-8B achieves an average improvement of 13.3%, Qwen3-235B-A22B of 14.67%, and Claude Sonnet 3.7 of 18.34%. Qwen3-8B has weaker evaluation scoring capabilities, while larger LLMs have relatively stronger ones and thus see greater gains on these harder tasks. This demonstrates that the more difficult the task, the more accurate the evaluation of intermediate algorithmic steps needs to be.
LDB achieves greater improvements on simpler datasets than on more challenging ones. We found that for more difficult problems, across multiple rounds of execution feedback, LLMs often merely patch conditions so the code passes the public test cases rather than revising the actual logic of the code. SRA-MCTS improves performance on more challenging datasets but declines on simpler ones: for simple problems, LLM evaluation scores are always perfect or near-perfect, prematurely ending the step search and yielding incomplete or lower-quality reasoning steps.
Comparing the results across three different difficulty levels in the APPS dataset, it can be observed that for the two larger LLMs, as the difficulty increases, our method brings more significant performance improvements. The higher the difficulty of the problem, the more guidance the LLM needs to avoid getting lost in complex reasoning chains. This demonstrates the effectiveness of our method in evaluating intermediate steps, helping LLMs enhance their evaluation capabilities and further unlocking the vast potential code knowledge and reasoning abilities inherent in LLMs.
For fair comparison, even without using knowledge base retrieval scores as rewards, our method outperforms other baselines. Experimental results show that overall, especially on the two most challenging datasets, incorporating the knowledge base further stabilizes and improves performance. The reason is that LLM evaluation of intermediate steps in complex problems is unreliable, and random exploration struggles to find the correct solution path. Therefore, leveraging the knowledge base to use the reasoning patterns of historically similar problems as guidance helps direct the search. This demonstrates the effectiveness of using knowledge base retrieval scores as rewards for intermediate process evaluation.
On a few simpler datasets, performance slightly improves when knowledge base retrieval scores are not used. Our analysis suggests that in simple tasks, LLMs can already accurately evaluate the quality of generated paths. In this case, knowledge base rewards, while intended to provide additional prior information, may retrieve historical cases that are textually similar but logically different in their solutions, introducing noise into MCTS node selection. In contrast, for complex tasks, LLM evaluation of intermediate steps is limited, the search space is vast, and solutions are sparse, so the structured priors provided by the knowledge base effectively guide the search direction and significantly improve success rates. This phenomenon indicates that the effectiveness of knowledge base rewards depends on the balance between task difficulty and LLM evaluation confidence.
### Ablation Study
We conduct ablation experiments using Qwen3-235B-A22B as the backbone model to evaluate performance, and the results are shown in Figure 3.
w/o KB indicates that only LLM evaluation is used in selection, without knowledge base retrieval. Compared to the complete method, the overall performance slightly decreased, with an average drop of 1.05%. The decline was most significant on the two more challenging datasets, with an average drop of 4.67%. This indicates that large models still face challenges with complex problems. By introducing a knowledge base to compare the generated reasoning steps with the correct reasoning steps of similar problems in the knowledge base, the self-assessment capability for complex problems can be improved.
w/o ER means that the execution rewards of public test cases in the sandbox are not used during the simulation phase. This resulted in the largest overall performance drop, highlighting that the core of RPM-MCTS reflection lies in the detailed feedback provided by the code execution environment. In fact, previous research (huang2023large) has already pointed out that without external feedback, LLMs lack the ability to self-correct their reasoning processes.
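A minimal sketch of such an execution reward: run the candidate code against the public test cases and return the fraction that pass. The subprocess "sandbox" and stdout-matching scheme are simplifying assumptions; a real sandbox would add isolation and resource limits.

```python
import subprocess
import sys
import tempfile

def execution_reward(code: str, tests: list, timeout: float = 5.0) -> float:
    """Run candidate code in a subprocess against public (stdin, expected_stdout)
    test cases; the reward is the fraction of cases whose output matches."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    passed = 0
    for stdin_data, expected in tests:
        try:
            result = subprocess.run([sys.executable, path], input=stdin_data,
                                    capture_output=True, text=True,
                                    timeout=timeout)
            passed += result.stdout.strip() == expected.strip()
        except subprocess.TimeoutExpired:
            pass  # treat timeouts as failed cases
    return passed / len(tests) if tests else 0.0
```

Unlike a scalar LLM self-score, this reward is grounded in observable behavior, which is why removing it causes the largest drop.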
<details>
<summary>x3.png Details</summary>

### Visual Description
## Bar Chart: Performance (%) Across Evaluation Categories
### Overview
The chart compares the performance of five different methods ("Ours", "w/o KB", "w/o ER", "w/o SF", "w/o LDB") across six evaluation categories: APPS-Intro., APPS-Interv., APPS-Comp., CodeContests, HumanEval+, and MBPP+. Performance is measured in percentage (%) on a y-axis from 20 to 100.
### Components/Axes
- **X-axis (Categories)**:
- APPS-Intro.
- APPS-Interv.
- APPS-Comp.
- CodeContests
- HumanEval+
- MBPP+
- **Y-axis (Performance)**:
- Scale: 20–100% (increments of 10)
- Labels: "Performance (%)"
- **Legend**:
- Colors:
- Blue: "Ours"
- Dark Blue: "w/o KB"
- Orange: "w/o ER"
- Yellow: "w/o SF"
- Red: "w/o LDB"
- Position: Top-right corner
### Detailed Analysis
#### APPS-Intro.
- **Ours**: 86.7% (blue)
- **w/o KB**: 88.0% (dark blue)
- **w/o ER**: 78.0% (orange)
- **w/o SF**: 86.0% (yellow)
- **w/o LDB**: 88.0% (red)
#### APPS-Interv.
- **Ours**: 67.3% (blue)
- **w/o KB**: 72.0% (dark blue)
- **w/o ER**: 63.3% (orange)
- **w/o SF**: 63.3% (yellow)
- **w/o LDB**: 70.7% (red)
#### APPS-Comp.
- **Ours**: 59.3% (blue)
- **w/o KB**: 52.0% (dark blue)
- **w/o ER**: 46.0% (orange)
- **w/o SF**: 56.7% (yellow)
- **w/o LDB**: 60.7% (red)
#### CodeContests
- **Ours**: 36.7% (blue)
- **w/o KB**: 34.7% (dark blue)
- **w/o ER**: 28.9% (orange)
- **w/o SF**: 31.1% (yellow)
- **w/o LDB**: 30.4% (red)
#### HumanEval+
- **Ours**: 87.8% (blue)
- **w/o KB**: 86.6% (dark blue)
- **w/o ER**: 86.4% (orange)
- **w/o SF**: 87.8% (yellow)
- **w/o LDB**: 87.2% (red)
#### MBPP+
- **Ours**: 81.2% (blue)
- **w/o KB**: 79.9% (dark blue)
- **w/o ER**: 77.2% (orange)
- **w/o SF**: 77.5% (yellow)
- **w/o LDB**: 79.6% (red)
### Key Observations
1. **Highest Performance**:
- "Ours" achieves the highest scores in **CodeContests** (36.7%), **HumanEval+** (87.8%, tied with "w/o SF"), and **MBPP+** (81.2%).
- "w/o KB" and "w/o LDB" tie for the highest in **APPS-Intro.** (88.0%).
2. **Lowest Performance**:
- **CodeContests** is the weakest category overall, with all methods scoring below 40%.
3. **Component Impact**:
- Removing **KB** improves performance in **APPS-Interv.** (72.0%) and **MBPP+** (79.9%).
- Removing **LDB** boosts results in **APPS-Comp.** (60.7%) and **MBPP+** (79.6%).
- Removing **SF** matches "Ours" in **HumanEval+** (87.8%).
4. **Outliers**:
- **CodeContests** shows drastic drops when components are removed (e.g., "w/o ER" at 28.9%).
### Interpretation
The data suggests that the "Ours" method generally performs robustly across categories
</details>
Figure 3: Ablation study results on different benchmarks.
w/o SF refers to the removal of similarity filtering, i.e., not discarding similar child nodes during the expansion phase. The results show that filtering out repeated intermediate algorithmic steps by similarity allows resources to be allocated to exploring new steps, improving performance while reducing computational costs.
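The filtering step can be sketched as a greedy deduplication: keep a candidate step only if it is sufficiently different from every step already kept. `difflib.SequenceMatcher` and the 0.8 threshold are stand-ins for the paper's actual similarity measure.

```python
from difflib import SequenceMatcher

def filter_similar_steps(steps: list, threshold: float = 0.8) -> list:
    """Greedily keep a step only if its similarity to every already-kept
    step is below the threshold, preserving diversity among child nodes."""
    kept = []
    for step in steps:
        if all(SequenceMatcher(None, step, k).ratio() < threshold for k in kept):
            kept.append(step)
    return kept
```

Near-duplicate steps are dropped before node expansion, so the search budget is not spent re-exploring the same reasoning twice.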
w/o LDB denotes not using LDB to locate erroneous steps in our method. The average performance drop was minimal, indicating that removing LDB has little impact on our method. With execution feedback, LLMs are already capable of accurately locating errors. However, in a few cases, LDB still helps in pinpointing erroneous steps.
### Performance vs. Rollout
We explore different values of the rollout hyperparameter on Qwen3-235B-A22B, as shown in Figure 4. Since SRA-MCTS is prone to premature termination due to self-overestimation by the model, we set its end gate value to exceed the maximum possible score, allowing it to reach the maximum number of iterations whenever possible. We denote this variant as SRA-MCTS_no_eg. The results show that in the early stages, all methods exhibit significant performance improvements as the rollout increases, after which performance gradually stabilizes. Notably, RPM-MCTS performs better even with a rollout of 1, because it enjoys two advantages in its first rollout: proactive guidance via its knowledge base during the selection phase, and wrong-step truncation with rethink-based regeneration during the simulation phase. This allows it to perform at least one round of verification and reflection and generate complete code. Moreover, for simpler problems, RPM-MCTS can often arrive at the correct answer with only a single simulation, whereas traditional tree search methods tend to require multiple unnecessary expansions even for straightforward tasks.
<details>
<summary>x4.png Details</summary>

### Visual Description
## Line Graph: Performance (%) vs Max Rollout
### Overview
The image is a line graph comparing the performance of five different methods across varying maximum rollout values (1-10). Performance is measured in percentage, with the y-axis ranging from 30% to 70%. The graph includes five distinct data series represented by colored markers and lines, with a legend in the top-left corner.
### Components/Axes
- **X-axis (Max Rollout)**: Integer values from 1 to 10, labeled "Max Rollout".
- **Y-axis (Performance %)**: Percentage values from 30% to 70%, labeled "Performance (%)".
- **Legend**: Located in the top-left corner, mapping colors to methods:
- Green circles: "Ours"
- Red triangles: "SRA-MCTS"
- Purple stars: "SRA-MCTS_no_eg"
- Blue squares: "LDB"
- Orange diamonds: "ToT"
### Detailed Analysis
#### Data Series Trends
1. **Ours (Green Circles)**:
- Starts at ~54% (Max Rollout 1), peaks at ~63% (Max Rollout 7), dips to ~61% (Max Rollout 9), and rises to ~63% (Max Rollout 10).
- Shows a generally upward trend with minor fluctuations.
2. **SRA-MCTS (Red Triangles)**:
- Begins at ~37% (Max Rollout 1), peaks at ~46% (Max Rollout 6), then declines to ~41% (Max Rollout 10).
- Exhibits moderate volatility with a peak at mid-range rollout.
3. **SRA-MCTS_no_eg (Purple Stars)**:
- Starts at ~42% (Max Rollout 1), peaks at ~46% (Max Rollout 6), dips to ~41% (Max Rollout 8), and stabilizes at ~45% (Max Rollout 10).
- Slightly outperforms SRA-MCTS but remains below "Ours".
4. **LDB (Blue Squares)**:
- Begins at ~40% (Max Rollout 1), peaks at ~50% (Max Rollout 6), then stabilizes around ~50% (Max Rollout 10).
- Shows a steady increase followed by plateauing.
5. **ToT (Orange Diamonds)**:
- Starts at ~41% (Max Rollout 1), peaks at ~57% (Max Rollout 5), fluctuates between ~53% and ~55% (Max Rollout 8-10).
- Highest peak among non-"Ours" methods but declines after Max Rollout 5.
### Key Observations
- **Ours** consistently outperforms all other methods, especially at higher Max Rollout values (7-10).
- **SRA-MCTS_no_eg** (purple) and **SRA-MCTS** (red) show similar trends but lag behind "Ours" by ~15-20%.
- **LDB** and **ToT** achieve moderate performance, with ToT peaking earlier (Max Rollout 5) and LDB maintaining higher values later.
- No method surpasses "Ours" in performance across all Max Rollout values.
### Interpretation
The data suggests that the "Ours" method is the most effective across all tested Max Rollout values, demonstrating superior scalability and stability. The SRA-MCTS variants (with and without "eg") underperform, potentially due to architectural limitations or missing components. LDB and ToT show promise but fail to match "Ours" in later stages. The graph highlights the importance of method design in handling increased complexity (Max Rollout), with "Ours" maintaining a clear advantage. Outliers like ToT's early peak may indicate overfitting or sensitivity to specific rollout thresholds.
</details>
Figure 4: Performance comparison across different maximum rollout values.
### Token Efficiency Analysis
Figure 5 shows the average token usage of different methods across all benchmark datasets. Our method reduces token consumption by approximately 15% compared to the previous MCTS method on both Qwen3-235B-A22B and Claude Sonnet 3.7. This improvement is attributed to: 1) The knowledge base retrieval scoring prioritizes more correct nodes, avoiding exploration of invalid branches. 2) Similarity filtering eliminates duplicate intermediate reasoning steps, enabling dynamic pruning of the Monte Carlo tree and reducing redundant path generation. 3) The simulation phase leverages sandbox feedback to pinpoint erroneous steps, while retaining the verified correct ones. Overall, RPM-MCTS achieves enhanced search efficiency and generation quality through knowledge base guidance and execution feedback.
<details>
<summary>x5.png Details</summary>

### Visual Description
## Scatter Plot: Performance vs. Average Token Usage
### Overview
The image is a scatter plot comparing the performance (y-axis, %) of different methods against their average token usage (x-axis, tokens). Three methods are compared: ToT (blue), SRA-MCTS (green), and "Ours" (orange). Three base models are represented: qwen3-8b (circle), qwen3-235b-a22b (triangle), and claude37_sonnet (square). Data points are labeled with their method-base model combinations.
### Components/Axes
- **X-axis**: "Average Token Usage" (0 to 20,000 tokens, increments of 4,000)
- **Y-axis**: "Performance (%)" (50% to 90%, increments of 10%)
- **Legend**:
- Top-left corner, labeled "Methods":
- Blue: ToT
- Green: SRA-MCTS
- Orange: Ours
- **Base Models**:
- qwen3-8b: Gray circle
- qwen3-235b-a22b: Gray triangle
- claude37_sonnet: Gray square
### Detailed Analysis
1. **SRA-MCTS(qwen3-8b)**:
- Position: (3,000 tokens, 53%)
- Marker: Green circle
- Label: "SRA-MCTS(qwen3-8b)"
2. **ToT(qwen3-8b)**:
- Position: (9,000 tokens, 58%)
- Marker: Blue circle
- Label: "ToT(qwen3-8b)"
3. **Ours(qwen3-8b)**:
- Position: (7,500 tokens, 64%)
- Marker: Orange circle
- Label: "Ours(qwen3-8b)"
4. **SRA-MCTS(qwen3-235b-a22b)**:
- Position: (11,000 tokens, 62%)
- Marker: Green triangle
- Label: "SRA-MCTS(qwen3-235b-a22b)"
5. **ToT(qwen3-235b-a22b)**:
- Position: (13,000 tokens, 67%)
- Marker: Blue triangle
- Label: "ToT(qwen3-235b-a22b)"
6. **Ours(qwen3-235b-a22b)**:
- Position: (8,500 tokens, 72%)
- Marker: Orange triangle
- Label: "Ours(qwen3-235b-a22b)"
7. **SRA-MCTS(claude37_sonnet)**:
- Position: (12,000 tokens, 68%)
- Marker: Green square
- Label: "SRA-MCTS(claude37_sonnet)"
8. **ToT(claude37_sonnet)**:
- Position: (19,000 tokens, 71%)
- Marker: Blue square
- Label: "ToT(claude37_sonnet)"
9. **Ours(claude37_sonnet)**:
- Position: (8,000 tokens, 76%)
- Marker: Orange square
- Label: "Ours(claude37_sonnet)"
### Key Observations
- **Performance Trends**:
- "Ours" method consistently achieves higher performance (64–76%) across all base models.
- SRA-MCTS shows moderate performance (53–68%) but requires higher token usage (3,000–12,000 tokens).
- ToT exhibits variable performance (58–71%) with the highest token usage (9,000–19,000 tokens).
- **Token Efficiency**:
- "Ours" achieves the best performance-to-token ratio, especially with claude37_sonnet (76% at 8,000 tokens).
- ToT requires the most tokens for comparable performance (e.g., 19,000 tokens for 71% vs. 8,000 tokens for 76% with "Ours").
- **Outliers**:
- SRA-MCTS(qwen3-8b) is the lowest-performing point (53% at 3,000 tokens).
- ToT(claude37_sonnet) uses the most tokens (19,000) for only 71% performance.
### Interpretation
The data suggests that the "Ours" method outperforms both ToT and SRA-MCTS in terms of performance while maintaining lower token usage. This indicates superior efficiency, particularly when paired with the claude37_sonnet base model. SRA-MCTS appears less efficient, requiring more tokens for similar or lower performance gains. ToT's performance scales with token usage but remains less efficient than "Ours." The results highlight a trade-off between computational resource consumption and output quality, with "Ours" offering the most favorable balance.
</details>
Figure 5: Comparison of token consumption and performance across different methods and models.
### Reasoning Steps for Data Distillation
Our method is training-free, yet the synthesized algorithmic reasoning steps it produces can also be used for supervised fine-tuning. Based on the Doubao-1.5-pro-32K model, we utilize RPM-MCTS for data distillation and construct a dataset of 2.4k code generation samples with reasoning steps. This distilled data is then combined with a foundational dataset of 170k samples to perform full fine-tuning on Doubao-1.5-pro-32K. Benchmark results in Table 2 demonstrate that the training data generated using RPM-MCTS significantly enhances the code capabilities of the base model.
| Benchmark | Base | Fine-tuned | Δ |
| --- | --- | --- | --- |
| SWE-Bench (jimenez2024swebench) | 37.6 | 38.5 | +0.9 |
| MBPP+ (liu2023your) | 75.4 | 76.7 | +1.3 |
| LiveCodeBench (jain2024livecodebench) | 46.2 | 50.5 | +4.3 |
| Aider (aider2024polyglot) | 17.3 | 22.2 | +4.9 |
| McEval (mceval) | 57.5 | 61.2 | +3.7 |
Table 2: Supervised fine-tuning results on Doubao-1.5-pro-32K model using RPM-MCTS synthesized data.
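The distillation step described above can be sketched as packaging each solved search result into an instruction-tuning record. The `to_sft_records` helper and the JSONL `prompt`/`response` fields are hypothetical illustrations, not the paper's actual data format.

```python
import json

def to_sft_records(solved: list) -> list:
    """Turn solved search results into JSONL instruction-tuning records:
    the prompt is the problem statement, the response is the numbered
    reasoning steps followed by the final code."""
    records = []
    for item in solved:
        steps = "\n".join(f"Step {i + 1}: {s}"
                          for i, s in enumerate(item["steps"]))
        records.append(json.dumps({
            "prompt": item["problem"],
            "response": steps + "\n\nCode:\n" + item["code"],
        }))
    return records
```

Because each record carries the verified intermediate steps rather than code alone, fine-tuning on it teaches the base model the reasoning process as well as the solution.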
## Conclusion
In this paper, we propose RPM-MCTS, which leverages a knowledge base and external sandbox feedback to directly obtain accurate reward values without requiring additional training of a process reward model. During the search process, errors are identified and promptly corrected. Experimental results demonstrate that RPM-MCTS outperforms current state-of-the-art methods under more constrained search budgets. Additionally, we construct training data using RPM-MCTS and perform full fine-tuning on the base model, significantly enhancing its code capabilities.
A limitation of RPM-MCTS is that code solvable in a single line may be divided into multiple lines due to the step-by-step approach, without impacting correctness. In the future, during the evaluation phase of MCTS, we can dynamically adjust the weights of external rewards from the knowledge base and sandbox, based on the uncertainty of LLMs.
## Acknowledgments
This work was supported by the National Science and Technology Major Project (2022ZD0114803), the Natural Science Foundation of Wuhan (2023010201020229), the Fundamental Research Funds for the Central Universities (NO.NJ2023032), and the Major Program (JD) of Hubei Province (2023BAA024).
## Appendix A Topic Categories
Table 3 presents the classification of algorithms in our knowledge base, which is divided into 14 categories. These categories were derived from the most common algorithmic tags on popular programming websites. The final knowledge base comprises a total of 82,923 items.
| Category | Count |
| --- | --- |
| Data Structures | 750 |
| Algorithm Strategies | 1218 |
| String Processing | 1676 |
| Sorting and Searching | 542 |
| Graph Theory | 977 |
| Bit Manipulation | 403 |
| Mathematics and Number Theory | 1658 |
| Computational Geometry | 611 |
| Optimization Problems | 1310 |
| Two-Pointer Techniques | 213 |
| Dynamic Programming | 836 |
| Recursion and Backtracking | 226 |
| Hashing Techniques | 316 |
| Other | 302 |
Table 3: Knowledge base data categories and statistics.
## Appendix B Dataset Details
As detailed in Table 4, our knowledge base was built from the CodeContests-Train and APPS-Train datasets, which together provide 11,038 training samples. The model’s performance was then benchmarked against a test set consisting of six standard benchmarks: the APPS test splits (by difficulty), the CodeContests test split, HumanEval+, and MBPP+.
| | Dataset | Samples |
| --- | --- | --- |
| Knowledge Base | CodeContests-Train | 7368 |
| | APPS-Train | 3670 |
| Test Set | APPS-Test-Introductory | 150 |
| | APPS-Test-Interview | 150 |
| | APPS-Test-Competition | 150 |
| | CodeContests-Test | 150 |
| | HumanEval+ | 164 |
| | MBPP+ | 378 |
Table 4: Dataset statistics.
## Appendix C Prompts
Figures 6-9 show the key prompts used in our method. The prompt in Figure 6 generates the next step. The prompt in Figure 7 completes the full steps. The prompt in Figure 8 analyzes execution errors to locate erroneous steps. The prompt in Figure 9 translates the full steps into code.
## Appendix D Algorithm Details
Algorithm 1 provides the detailed pseudocode for our proposed RPM-MCTS. The algorithm follows the canonical MCTS structure, iteratively performing four phases consisting of Selection, Expansion, Evaluation, and Backpropagation. A key component is the EVALUATE_NODE function. Instead of a traditional random rollout, this function generates a complete code solution from the current path, executes it in a sandboxed environment, and assesses its correctness. If the execution fails, the algorithm activates a reflection mechanism that identifies the erroneous step, prunes the incorrect sub-path, and updates its understanding to guide future searches.
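Assuming a standard UCT formulation for SELECT_BEST_CHILD_UCB (the exact exploration constant is not specified in the text), the Selection phase can be sketched as follows.

```python
import math

def ucb_score(total_value: float, visits: int,
              parent_visits: int, c: float = 1.414) -> float:
    """Standard UCT: exploit the average value, explore rarely visited children."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_best_child_ucb(children: list, parent_visits: int) -> int:
    """children: list of (total_value, visit_count) pairs; returns the index
    of the child maximizing the UCB score."""
    scores = [ucb_score(v, n, parent_visits) for v, n in children]
    return max(range(len(children)), key=scores.__getitem__)
```

In RPM-MCTS the initial node values fed into this rule come from the knowledge base and LLM scoring (line 19 of Algorithm 1), so retrieval guidance shapes the search from the very first descent.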
## Appendix E Case Study
We present a case study of our method in Figure 10. This figure provides a visualization of the final state of the Monte Carlo Tree after the search process concludes. In the tree, the root node represents the problem statement, while subsequent nodes correspond to individual reasoning steps. Furthermore, each node is annotated with its access sequence, visit count, and value. After four search iterations, our method successfully identifies the correct reasoning path to the solution, which corresponds to the leftmost path in the figure.
Prompt for Generating Next Step
Your task is to provide the correct next step based on the previous incorrect code used to solve the problem and a reflection, for a given programming problem and its existing solution steps (which are incomplete). Let's think step by step. But you only generate one step at a time. We aim to decompose complex problems into a series of simpler subproblems and sequentially generate the corresponding steps to solve each subproblem. All the substeps should be combined in a way that avoids contradictions, forming a coherent solution to the original complex problem.
Input format (n steps):
- Problem: {problem}
- Existing steps: {existing_steps}
- Analysis: {reflection}
- History: {history}
The historical content is the solution proposed earlier. To ensure the diversity of solutions, please do not generate ideas identical to those in the historical content.
Guidelines:
- The steps you generate will be passed to a code generation model, so they should be structured in a way that is easy for the model to understand.
- Keep each step concise and focused, avoiding the inclusion of too much information at once. Ensure clear organization and logical progression in your reasoning.
- Important: You can use very little code as detailed explanations in your answers, but you cannot just write code. If your answer includes code, it will cause unforeseen losses!
- Your answer should be based on the given analysis. Only if the analysis is wrong can you answer it in your own way.
- If no existing steps are provided, you should output the first step based on the given analysis.
- If there are existing steps, output the next step (Step n+1) that logically follows the provided analysis and the previous steps.
Output format:
- Next step: ...
Your response should only generate solutions to the problem, without any extra words.
Figure 6: Prompt for Generating Next Step.
Prompt for Generating Full Steps
Your task is to take a programming problem and incomplete solution steps (not a full answer), then continue from the provided steps to complete all remaining steps and generate the complete final solution. Let's think step by step. We aim to decompose complex problems into a series of simpler subproblems and sequentially generate the corresponding steps to solve each subproblem. All the substeps should be combined in a way that avoids contradictions, forming a coherent solution to the original complex problem. Note: Do not modify the existing solution steps.
Input format (n steps):
- Problem: {problem}
- Existing steps: {existing_steps}
Guidelines:
- If n is equal to 0, you need to start from scratch and analyze the solution idea briefly, and then output the complete answer.
- Otherwise, you need to output the complete answer that you think is correct following the train of thought of the existing steps.
- Each step generated should be concise and focused, addressing only a small part of the solution. Avoid making the steps too complex or combining multiple ideas into one.
- The complete solution should consist of at least three steps, so don't skip any essential steps.
- Your output should be clear and systematic, with each step described one at a time to ensure logical progression.
- Note: You are only allowed to describe the reasoning steps in natural language. Do not output any code.
Output format:
- Step 1: ...
- Step 2: ...
- ...
- Step n: ...
- Step n + 1: ...
- ...
Among them, Step 1 to Step n are consistent with the existing steps. Continue to generate based on the existing steps to obtain a complete answer. The following is the input. Please output according to the specified output format, do not output unnecessary information, and do not repeat the question. Note: Your output should start from Step 1 and include all the steps, not just the next step.
Figure 7: Prompt for Generating Full Steps.
Prompt for Code Debugging and Analysis
The following is a Python code problem, which includes the thoughts and code for solving the problem, as well as the return results of debugging for a failed test case.
Input:
- Python code problem: {problem}
- Thoughts: {solution}
- Code: {code}
- Test case debug information: {exec_result}
The debugging process first splits the code into block-level code according to the AST. If the block-level code is correct after debugging analysis, the "correct" field is True; otherwise it is False. The "explanation" field is the analysis of the block-level code debugging.
Guidelines: Your task is to determine which specific step is written incorrectly based on the debug return results and conduct an analysis and summary. The correctly generated code and corresponding thought processes will be retained, while the incorrect code and corresponding thought processes will be discarded. You need to analyze and summarize the points to note so that subsequent thought processes can be generated based on the correct thought processes to correct the previous errors.
Output format: Your output consists of two parts:
- 1. Which specific step went wrong. Wrap it with the <step_n>x</step_n> XML tag, where x represents the specific number of the first erroneous step. If there are multiple erroneous steps in the thought process, only output the number of the first erroneous step. Do not output any extra content.
- 2. Analyze and summarize the points to note. The final output should look like this: <step_n>x</step_n>..., where ... represents the generated analysis.
Figure 8: Prompt for analyzing debugging results and identifying errors.
Prompt for Code Implementation
You will play the role of a code implementer, writing complete code based on the given problem and the step-by-step analysis of the problem. Your code must strictly follow the analysis steps provided and should not include your own opinions.
Rules:
- Import function libraries (like: import math) and output function code only, without a main function, so that I can call your generated functions directly.
- The output code should be wrapped with code blocks (like ```python). Example: ```python\ndef add(a, b):\n return a + b\n```.
Input:
- question: {question}
- analysis: {analysis}
Figure 9: Prompt for generating code based on steps.
Algorithm 1 The RPM-MCTS Algorithm for Code Generation
Input: Problem description $P$ , total iterations $I$ , branching factor $B$ , success threshold $\theta_{succ}$ Output: The best generated code solution $C_{best}$
1: $v_{root}\leftarrow\text{CREATE\_NODE}(P)$
2: for $i\leftarrow 1$ to $I$ do
3: // 1. Selection
4: $v_{l}\leftarrow v_{root}$
5: while $v_{l}$ is fully expanded do
6: $v_{l}\leftarrow\text{SELECT\_BEST\_CHILD\_UCB}(v_{l})$
7: end while
8:
9: // 2. Expansion
10: if $v_{l}$ is not a terminal node then
11: $S_{gen}\leftarrow\emptyset,H_{hist}\leftarrow\emptyset$
12: for $j\leftarrow 1$ to $B$ do
13: $s_{new}\leftarrow\text{GENERATE\_NEXT\_STEP}(P,\text{path}(v_{l}),H_{hist})$ {Generate diverse steps}
14: Add $s_{new}$ to $S_{gen}$ and $H_{hist}$
15: end for
16: $S_{unique}\leftarrow\text{FILTER\_SIMILAR\_STEPS}(S_{gen})$ {Filter semantic duplicates}
17: for each step $s$ in $S_{unique}$ do
18: $v_{c}\leftarrow\text{ADD\_CHILD}(v_{l},s)$
19: $v_{c}.value\leftarrow\text{GET\_INITIAL\_VALUE}(\text{path}(v_{c}))$ {Score from knowledge base & LLM}
20: end for
21: Mark $v_{l}$ as fully expanded
22: end if
23:
24: // 3. Evaluation (Simulation)
25: $v_{r}\leftarrow\text{SELECT\_BEST\_CHILD\_UCB}(v_{l})$
26: if $v_{r}$ is not NULL then
27: $is\_solved,Q_{final}\leftarrow\text{EVALUATE\_NODE}(v_{r},P,\theta_{succ})$
28: if $is\_solved$ then
29: return $\text{GET\_SOLUTION}(v_{r})$ {Optimal solution found, terminate early}
30: end if
31: else
32: $Q_{final}\leftarrow v_{l}.value$ {Use parent value if no children to evaluate}
33: end if
34:
35: // 4. Backpropagation
36: $\text{BACKPROPAGATE}(v_{r},Q_{final})$
37: end for
38: return $\text{GET\_BEST\_SOLUTION}(v_{root})$ {Return best solution after all iterations}
39:
40: Function EVALUATE_NODE( $v,P,\theta_{succ}$ )
41: $\pi_{s}\leftarrow\text{path}(v)$
42: $S_{full},C\leftarrow\text{GENERATE\_FULL\_SOLUTION}(\pi_{s})$
43: $r_{exec},res_{sb}\leftarrow\text{EXECUTE\_CODE}(C)$ {Evaluate in sandbox}
44: $r_{llm},f\leftarrow\text{EVALUATE\_WITH\_LLM}(S_{full},C,res_{sb})$ { $f$ is reflection}
45: $Q_{comb}\leftarrow\gamma\cdot r_{exec}+(1-\gamma)\cdot r_{llm}$ {Weighted combined value}
46: if $r_{exec}$ is SUCCESS and $r_{llm}\geq\theta_{succ}$ then
47: $\text{ADD\_SOLUTION\_TO\_TREE}(v,S_{full})$
48: return true, $Q_{comb}$
49: end if
50: if $r_{exec}$ is FAILURE then
51: $idx_{err}\leftarrow\text{LOCATE\_ERROR\_STEP}(S_{full},res_{sb},f)$
52: $S_{pruned}\leftarrow\text{PRUNE\_STEPS}(S_{full},idx_{err})$
53: $\text{ADD\_SOLUTION\_TO\_TREE}(v,S_{pruned})$ {Add correct partial path}
54: $\text{UPDATE\_REFLECTION\_IN\_TREE}(v,f)$
55: end if
56: return false, $Q_{comb}$ {Return failure if not solved}
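As a minimal executable sketch of the EVALUATE_NODE routine above: the helper callables are injected stubs standing in for the full-solution generator, the sandbox, the LLM evaluator, and the error locator, and the dict-based node is an illustrative simplification.

```python
def evaluate_node(node: dict, theta_succ: float, gamma: float,
                  generate_full_solution, execute_code,
                  evaluate_with_llm, locate_error_step):
    """Complete the partial path, execute the code in a sandbox, blend the
    execution and LLM rewards, and on failure keep only the steps before
    the first erroneous one together with the reflection."""
    steps, code = generate_full_solution(node["path"])
    exec_ok, sandbox_result = execute_code(code)
    llm_score, reflection = evaluate_with_llm(steps, code, sandbox_result)
    q = gamma * (1.0 if exec_ok else 0.0) + (1 - gamma) * llm_score
    if exec_ok and llm_score >= theta_succ:
        node["solution"] = steps
        return True, q
    if not exec_ok:
        err = locate_error_step(steps, sandbox_result, reflection)
        node["solution"] = steps[:err]   # retain the verified prefix
        node["reflection"] = reflection  # guide future regeneration
    return False, q
```

Pruning back to the verified prefix is what lets the next iteration resume from the last correct step instead of regenerating the whole path.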
<details>
<summary>x6.png Details</summary>

### Visual Description
## Flowchart: Maximum Value of (j - i) * g(i, j) for Bounded Array
### Overview
The flowchart outlines an algorithm to compute the maximum value of **(j - i) * g(i, j)** for a bounded array, where **g(i, j) = j² - i²** and **g(i, j) > 0**. The process involves decision points, iterative steps, and conditional logic to optimize the solution.
---
### Components/Axes
- **Decision Points**:
- **Step 1**: Check if the array is empty.
- **Step 2**: Check if the array has only one element.
- **Step 3**: Check if the array is sorted in non-decreasing order.
- **Step 4**: Check for adjacent pairs with **g(i, j) > 0**.
- **Step 5**: Check if the array is sorted in non-increasing order.
- **Step 6**: Check for adjacent pairs with **g(i, j) > 0**.
- **Step 7**: Check if the array is sorted in non-decreasing order.
- **Steps 8-45**: Repeated checks for adjacent pairs with **g(i, j) > 0**.
- **Steps**:
- **Step 1**: Initialize variables and check array length.
- **Step 2**: Brute-force calculation of **(j - i) * g(i, j)** for all **i < j**.
- **Step 3**: Use a stack/deque to track potential candidates for maximum value.
- **Step 4**: Calculate maximum value using adjacent pairs.
- **Step 5**: Use a stack/deque for non-increasing arrays.
- **Step 6**: Calculate maximum value using adjacent pairs.
- **Step 7**: Use a stack/deque for non-decreasing arrays.
---
### Detailed Analysis
1. **Initial Checks**:
- If the array is empty, return **0**.
- If the array has only one element, return **0**.
2. **Brute-Force Approach (Step 2)**:
- Iterate over all pairs **(i, j)** where **i < j**.
- Compute **(j - i) * (j² - i²)** and track the maximum value.
- Time complexity: **O(n²)**.
3. **Optimized Approaches**:
- **Stack/Deque Method**:
- Track potential candidates for **i** and **j** using a stack or deque.
- For non-decreasing arrays, use a stack to maintain indices where **g(i, j) > 0**.
- For non-increasing arrays, use a deque to track indices where **g(i, j) > 0**.
- **Adjacent Pairs**:
- Check adjacent pairs **(i, i+1)** for **g(i, j) > 0**.
- Compute **(j - i) * g(i, j)** and track the maximum.
4. **Edge Cases**:
- If the array is sorted in non-decreasing or non-increasing order, use specialized algorithms to reduce time complexity.
- If no valid pairs exist (e.g., all **g(i, j) ≤ 0**), return **0**.
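The brute-force path described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; it assumes **g(i, j)** is computed from the array values **nums[i]**, **nums[j]** (the figure's **j² - i²** read as values rather than raw indices), and returns **0** for the edge cases (empty array, single element, no pair with **g(i, j) > 0**):

```python
def max_value_bruteforce(nums):
    # Brute force over all index pairs i < j; O(n^2) time.
    # Assumption: g(i, j) = nums[j]**2 - nums[i]**2, and only pairs
    # with g(i, j) > 0 contribute; otherwise the answer is 0.
    best = 0
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):
            g = nums[j] ** 2 - nums[i] ** 2
            if g > 0:
                best = max(best, (j - i) * g)
    return best
```

The empty-array and single-element checks from the flowchart fall out naturally here: with fewer than two elements the loops never execute and `best` stays `0`.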
---
### Key Observations
- The flowchart prioritizes **O(n)** or **O(n log n)** solutions over brute-force **O(n²)** methods.
- **Stack/Deque** structures are critical for efficiently tracking candidates in sorted arrays.
- Adjacent pairs are a fallback for unsorted arrays, ensuring the solution works for all cases.
- The problem reduces to maximizing **(j - i)² * (j + i)**, which is equivalent to **(j - i) * (j² - i²)**.
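The equivalence noted in the last observation follows directly from factoring the difference of squares:

```latex
(j - i)\,(j^2 - i^2) = (j - i)\,(j - i)(j + i) = (j - i)^2 (j + i)
```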
---
### Interpretation
The flowchart demonstrates a systematic approach to solving the problem by:
1. **Reducing complexity** through conditional checks (e.g., sorted arrays).
2. **Leveraging data structures** (stacks/deques) to optimize candidate tracking.
3. **Fallback mechanisms** (adjacent pairs) for unsorted arrays.
The algorithm ensures correctness by covering all edge cases and optimizing for performance. The use of **g(i, j) = j² - i²** simplifies the problem to maximizing **(j - i)² * (j + i)**, which is computationally tractable with the described methods.
</details>
Figure 10: Case study.