# RPM-MCTS: Knowledge-Retrieval as Process Reward Model with Monte Carlo Tree Search for Code Generation
**Authors**: Yuanyuan Lin*, Xiangyu Ouyang*, Teng Zhang, Kaixin Sui (*equal contribution)
Abstract
Tree search-based methods have made significant progress in enhancing the code generation capabilities of large language models. However, because intermediate algorithmic steps are difficult to evaluate effectively and erroneous steps cannot be located and corrected in time, these methods often generate incorrect code and incur increased computational costs. To tackle these problems, we propose RPM-MCTS, an effective method that utilizes Knowledge-Retrieval as a Process Reward Model based on Monte Carlo Tree Search to evaluate intermediate algorithmic steps. By relying on knowledge base retrieval, RPM-MCTS avoids the complex training of process reward models. During the expansion phase, similarity filtering removes redundant nodes, ensuring diversity in reasoning paths. Our method further utilizes sandbox execution feedback to locate erroneous algorithmic steps during generation, enabling timely and targeted corrections. Extensive experiments on four public code generation benchmarks demonstrate that RPM-MCTS outperforms current state-of-the-art methods while reducing token consumption by approximately 15%. Moreover, full fine-tuning of the base model on data constructed by RPM-MCTS significantly enhances its code capabilities.
Introduction
Code generation aims to understand problem descriptions in natural language and generate the corresponding code snippets. In recent years, large language models (LLMs) have demonstrated remarkable performance on code generation tasks (zhang2024unifying). Early methods divide code planning and synthesis into two phases using chain-of-thought prompting or tree structures (wei2022chain; jiang2024self; zelikman2023parsel). wang2024planning demonstrated that providing LLMs with a correct solution can significantly enhance model performance, even when these solutions consist of incomplete plans; that is, for solutions, correctness is preferred over completeness, and the key to enhancing the code generation capability of LLMs lies in generating correct plans.
Programming languages possess their own inherent logical structures and tightly interconnected knowledge, which makes it essential not to overlook long-range dependencies within the code. Previous work has shown that rotary position embedding does not always cause attention weights to decay with relative distance (barbero2024round). By exploring the attention distribution between tokens, we experimentally demonstrate that selecting algorithmic steps as the basic units is a superior choice. Our objective therefore focuses on how to accurately generate intermediate algorithmic steps.
However, a limitation of previous methods lies in the lack of an evaluation and correction mechanism for intermediate algorithmic steps, which fails to guarantee the correctness of these steps (lu2025lsr; li2025structured). One way to tackle this issue is to use a value function or reward model to verify reasoning traces for correctness, which then serves as a learning signal for self-training (lightman2023let; wang2023math). However, training a reliable reward model to verify every step in a reasoning trace generally depends on dense human-generated annotations per reasoning step (lightman2023let), which does not scale well.
Unlike other reasoning tasks, code generation benefits from the homogeneity of algorithmic workflows across different problem categories. This allows us to leverage historical experience from a knowledge base containing numerous correct algorithmic steps to evaluate the process reward of expansion steps. Additionally, code generation typically benefits from detailed feedback provided by compilers. Consequently, in this paper, we propose RPM-MCTS, which optimizes the Monte Carlo Tree Search (MCTS) algorithm using external information feedback. Our method utilizes the knowledge base for intermediate algorithmic step-level evaluation and employs sandbox feedback for result-level assessment of complete code. Specifically, the root node of the Monte Carlo tree represents the coding problem, while all other nodes represent individual algorithmic steps. During each iteration, multiple distinct potential next steps are generated based on the current reasoning path. Node selection is guided by historical experience from the knowledge base, enabling faster discovery of high-value search paths. In the simulation phase, complete code is generated and evaluated using sandbox and model feedback to update node values. Notably, during simulation, we localize erroneous steps within the full algorithmic workflow and incorporate newly generated correct steps into the tree, thereby reducing token consumption. After multiple iterations, the highest-scoring path from root to leaf is selected, ultimately yielding a complete solution alongside its corresponding code. The contributions are summarized as follows:
- We propose RPM-MCTS, which leverages knowledge base retrieval scores to evaluate intermediate algorithmic steps, steering LLMs to explore high-value reasoning paths more effectively.
- We leverage sandbox feedback during the simulation phase to evaluate code generated from reasoning steps, localize errors, and truncate simulations, thereby reducing computational costs.
- We conduct extensive experiments and show that RPM-MCTS is superior to state-of-the-art methods. Moreover, we verify that base models fine-tuned with data generated by RPM-MCTS enjoy greater code capabilities.
Related Work
Monte Carlo Tree Search.
As the extension of Chain-of-Thought (CoT) (wei2022chain), Tree-of-Thought (ToT) (yao2023tree) enhances the reasoning and planning capabilities of LLMs by exploring different thought paths within a tree structure. Subsequently, Monte Carlo Tree Search has served as a search algorithm to more effectively guide LLMs in exploring intermediate sub-steps (zhao2023large; hao2023reasoning; zhou2023language; ding2023everything). ReST-MCTS* (zhang2024rest) combines process reward guidance with Monte Carlo Tree Search to collect high-quality reasoning trajectories and step-by-step values for training strategy and reward models. SRA-MCTS (xu2024sra) further extends this to the field of code generation, using Monte Carlo Tree Search to generate intermediate reasoning steps and conducting iterative self-evaluation to synthesize training data for supervised fine-tuning. However, relying solely on model self-evaluation introduces biases and hallucinations, and small-scale LLMs exhibit limited instruction-following capabilities. RethinkMCTS (li2025rethinkmcts) is another prior work that also uses execution feedback but employs a patching strategy. If this patch fails, the search may proceed on an incorrect path, making it less suitable for generating high-quality SFT data.
Process Evaluation.
In heuristic search, a robust reasoning process needs self-evaluation capabilities, and the evaluation results are further used to guide the search. Early work mainly focused on outcome-level evaluation (cobbe2021training), that is, evaluating the complete solution after reasoning is finished. Outcome-level evaluation is simple to implement but provides only coarse feedback. Step-level evaluation (lightman2023let; wang2023math; gao2024llm) emphasizes the assessment of individual reasoning steps. In tree search algorithms, process evaluation is widely used to guide search trajectories. Logic-RL (xie2025logic) optimizes path selection by implementing state scoring in beam search. Furthermore, step-level evaluation has proven effective in both error correction and the summarization of reasoning steps. zheng2024makes developed a method capable of accurately locating inaccuracies in specific reasoning steps, thereby providing more precise and actionable feedback for comprehensive evaluation.
Method
In this section, we elaborate on the proposed modified MCTS that incorporates the knowledge base as a process reward model. The methodology comprises three key components: knowledge base construction, RPM-MCTS, and code generation. First, knowledge base retrieval scores circumvent random selection during node expansion. Then, in the expansion phase, nodes are filtered based on similarity metrics to eliminate redundant candidates. Finally, during the simulation phase, the algorithm performs error reflection and retains nodes with verified correct reasoning. These collective strategies enable faster exploration of higher-quality algorithmic steps.
Knowledge Base Construction
In this section, we introduce the construction of a retrievable global knowledge base designed to mitigate hallucination during the planning process. Due to the homogeneity of algorithms within the same category, where fundamental principles and methods are relatively similar, we utilize a knowledge base containing numerous correct algorithms across diverse categories. This serves as the evaluation model for intermediate algorithmic steps in RPM-MCTS, eliminating the need to train a separate process reward model.
We use the training set data from APPS (hendrycks2021measuring) and CodeContests (li2022competition), which contain coding problems paired with correctly implemented solutions. We utilize Claude Sonnet 3.7 to generate the correct algorithmic steps corresponding to each correct solution and decompose them step by step. We then roll each problem out along its algorithmic steps, sequentially concatenating the problem with every step prefix. Specifically, for problem $p_{i}$ with $n_{i}$ algorithmic steps and $a_{i}^{(j)}$ denoting the $j$-th step, we have
$$
\displaystyle\mathcal{K}_{i}=\{\mathrm{concat}(p_{i},a_{i}^{(1)},\ldots,a_{i}^{(j)}),~j=1,2,\ldots,n_{i}\}, \tag{1}
$$
and $\mathcal{K}=\uplus_{i=1}^{n}\mathcal{K}_{i}$ is the knowledge base built from all $n$ problems.
To enhance retrieval efficiency and improve retrieval precision by distinguishing between problems with similar descriptions but different algorithmic solutions, we organize the knowledge base into 14 distinct algorithm categories and store them in a vector database using the BGE (xiao2024c) embedding model.
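As a concrete illustration of Eqn. (1), the sketch below builds the textual entries of the knowledge base: every step-wise prefix of a problem's solution becomes one retrievable entry. The function name and input format are illustrative, and the BGE embedding and vector-database storage steps are omitted.

```python
def build_knowledge_base(problems):
    """Build K per Eqn. (1).

    problems: list of (problem_text, [step_1, ..., step_n]) pairs.
    Returns one entry per prefix concat(p_i, a_i^(1), ..., a_i^(j)).
    """
    kb = []
    for problem, steps in problems:
        prefix = problem
        for step in steps:                 # roll out step by step
            prefix = prefix + "\n" + step
            kb.append(prefix)              # one entry per step prefix
    return kb

example = [("Compute 5!", ["n = 5", "p = 1",
                           "multiply p by each i in 1..n", "print p"])]
entries = build_knowledge_base(example)
print(len(entries))  # one entry per algorithmic step
```

In the full pipeline each entry would be embedded with BGE and inserted into the vector database under its algorithm category.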
RPM-MCTS
We propose an enhanced MCTS method, named RPM-MCTS. In this method, the root node represents the problem, while every other node represents an algorithmic step. Specifically, the method comprises four distinct phases: Selection, Expansion, Evaluation and Reflection, and Backpropagation, as shown in Figure 1. These phases are performed on a search tree composed of tree nodes and are iterated multiple times, with each iteration generating a concrete algorithmic step.
Figure 1: Overview of RPM-MCTS. (a) Selection: Select a leaf node according to Eqn. (2). (b) Expansion: After selecting a node, expand multiple child nodes, and use knowledge base retrieval scores and LLM evaluation to select nodes for simulation. The node color represents similarity magnitude. (c) Evaluation: Generate complete reasoning steps for the selected node, generate code strictly in accordance with these reasoning steps, and use a sandbox for information feedback. (d) Backpropagation: Propagate the reward scores backward. The yellow root node represents the problem, and the remaining nodes represent each reasoning step.
Selection.
In the selection phase, a leaf node is selected from the current tree for further expansion according to the selection score, which is defined as a weighted combination of the Upper Confidence Bound (UCB) (silver2017mastering) and the knowledge base retrieval score:
$$
\displaystyle\mathrm{SelectionScore}(s,a)=\mathrm{UCB}(s,a)+\alpha K(s,a), \tag{2}
$$
where $(s,a)$ denotes a state-action pair with $s$ containing the description of the problem and previously generated algorithmic steps and $a$ representing the new step at the current node. The parameter $\alpha$ is for balancing the two terms.
UCB is a classical multi-armed bandit algorithm that performs well in addressing the exploration-exploitation trade-off. It selects actions by computing an upper confidence estimate of each action's potential reward:
$$
\displaystyle\mathrm{UCB}(s,a)=Q(s,a)+\beta\sqrt{\frac{\log N(s)}{1+N(s,a)}}, \tag{3}
$$
where $Q(s,a)$ represents the empirical mean cumulative reward after taking action $a$ from state $s$ , $N(s)$ is the number of times state $s$ has been explored in the current context, and $N(s,a)$ is the number of times action $a$ has been taken in state $s$ . The parameter $\beta$ is for trading off the exploitation (the former term) and exploration (the latter term).
The knowledge base retrieval score $K(s,a)$ is obtained by retrieving the concatenation of the $(s,a)$ pair from the knowledge base. Specifically, let $f$ denote the embedding model that maps $(s,a)$ to a vector with the same dimension as the knowledge base embeddings. Given the preceding reasoning path, the knowledge base retrieval score for the current node is calculated as follows:
$$
\displaystyle K(s,a)=\max\left(0,\max_{k\in\mathcal{K}}\frac{f((s,a))\cdot k}{\|f((s,a))\|\cdot\|k\|}\right). \tag{4}
$$
The knowledge base similarity score $K(s,a)$ enables acquisition of step-wise assessments prior to the evaluation phase. In other words, when newly generated nodes remain unexplored, we prioritize leveraging historically validated solutions through knowledge base retrieval scores to identify higher-value nodes.
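A minimal sketch of Eqn. (4), operating directly on pre-computed vectors; the real system would obtain `query_vec` by embedding $\mathrm{concat}(s,a)$ with BGE and search the vector database rather than a Python list.

```python
import math

def retrieval_score(query_vec, kb_vecs):
    # Eqn. (4): cosine similarity between the embedded (s, a) pair and the
    # closest knowledge-base entry, floored at zero.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    return max(0.0, max(cos(query_vec, k) for k in kb_vecs))
```

The outer `max(0, ...)` clips negative similarities so $K(s,a)$ always adds a non-negative bonus to the selection score.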
Starting from the root node, we recursively select the child node with the maximum $\mathrm{SelectionScore}$ value at each branching point. Selection ties are resolved stochastically. Each iteration advances to the highest-scoring child node until reaching a leaf node.
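The selection walk can be sketched as follows. The dict-based node structure (keys `Q`, `N`, `parent`, `children`) is an illustrative assumption, not the paper's implementation, and `retrieval_score` stands in for the knowledge base lookup of Eqn. (4).

```python
import math
import random

def ucb(node, beta=0.5):
    # Eqn. (3): empirical mean reward Q plus an exploration bonus that
    # shrinks as the action is visited more often. Assumes the parent
    # has been visited at least once (log of its visit count).
    return node["Q"] + beta * math.sqrt(
        math.log(node["parent"]["N"]) / (1 + node["N"]))

def select_leaf(root, alpha=0.5, retrieval_score=lambda node: 0.0):
    # Eqn. (2): descend from the root, at each level picking the child
    # maximising UCB(s, a) + alpha * K(s, a); ties are broken randomly
    # via the second element of the comparison key.
    node = root
    while node["children"]:
        node = max(node["children"],
                   key=lambda c: (ucb(c) + alpha * retrieval_score(c),
                                  random.random()))
    return node
```

With `alpha = 0` this reduces to plain UCB descent; the retrieval term shifts the walk toward steps resembling historically validated solutions before any simulation evidence exists.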
Figure 2: (a) Token-level attention heatmap for code corresponding to the programming problem. (b) Algorithmic steps and corresponding code for the programming problem. (c) Attention sink phenomenon.
Expansion.
Upon selecting a leaf node during the selection phase, the expansion phase generates the remaining child nodes, thereby expanding the search scope of the entire tree. Since attention weights between tokens do not always decay with relative distance (barbero2024round), we conduct an in-depth study of the attention mechanism between tokens in LLMs during code generation to reveal the influencing factors among tokens. As shown in Figure 2, certain tokens have a profound impact on subsequent code generation. It can thus be inferred that these key tokens summarize and interpret the information of previously generated tokens and have higher reference value for subsequent token generation. Meanwhile, related studies (barbero2025llms; xiao2023efficient) have shown that modern LLMs exhibit the phenomenon of “attention sink”: numerous attention heads allocate a disproportionate share of weights, sometimes exceeding 30% or even 80%, to the beginning-of-sequence token ⟨bos⟩, despite its primary function as a sequence delimiter with minimal semantic content. To examine inter-token dependencies in code generation tasks, we therefore selectively visualize token attention at designated layers. Figure 2 (c) shows that attention not only sinks to ⟨bos⟩ but also peaks at algorithmic step boundaries, justifying that algorithmic step blocks are more effective basic processing units in code generation tasks. We therefore select algorithmic steps as the basic units for expansion.
To ensure diversity in generated steps during the expansion phase, we implement a sampling decoding strategy that sequentially generates each child node. Specifically, to prevent repetitive generation by the LLM, we iteratively provide all previously generated steps as context when producing each new step. The input for the LLM is
$$
\displaystyle\mathrm{concat}(s,a_{1},\ldots,a_{i},g),~i=1,2,\ldots,b \tag{5}
$$
where $g$ represents the reflection generated in the simulation phase, and $b$ denotes the maximum number of branches each node can expand.
After expanding $b$ nodes, we apply cosine-similarity filtering to reduce computational costs by avoiding simulations on redundant nodes. Specifically, we map the reasoning steps of the $b$ nodes to vectors using the embedding model $f$ and calculate the pairwise cosine similarities among these $b$ nodes. When the similarity exceeds a predetermined threshold, the node is identified as redundant and filtered out. This method effectively reduces the search space and enhances algorithmic efficiency while maintaining diversity.
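The filtering step can be sketched as a greedy pass over the $b$ candidates: a step survives only if it is sufficiently dissimilar from every step already kept. The `embed` callable is a stand-in for the embedding model; the threshold 0.85 matches our experimental setting.

```python
import math

def filter_redundant(steps, embed, threshold=0.85):
    # Keep a candidate step only if its cosine similarity to every
    # already-kept step stays at or below the threshold.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    kept, kept_vecs = [], []
    for step in steps:
        v = embed(step)
        if all(cos(v, w) <= threshold for w in kept_vecs):
            kept.append(step)
            kept_vecs.append(v)
    return kept
```

For example, with toy two-dimensional embeddings, two near-parallel step vectors collapse into one surviving candidate while an orthogonal one is retained.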
Evaluation and Reflection.
During the evaluation phase, simulation and evaluation are performed for the selected leaf nodes. We provide the LLM with the algorithmic steps $s$ already generated for the node and its ancestors, instructing the LLM to strictly follow these steps and continue simulating until all remaining steps are complete. We search over thoughts but evaluate using the code generated by following them.
The generated code undergoes sandbox evaluation using public test cases. However, since public test cases only cover a subset of possible scenarios, the code may fail on unseen cases, such as boundary conditions or performance issues. We therefore employ the LLM to analyze the complete algorithmic steps based on sandbox feedback.
We assess the steps generated during the expansion phase through two components, which are the pass rate on public test cases and LLM evaluation. The final evaluation score is obtained by weighted summation of these two scores. The formula is as follows:
$$
\displaystyle Q(s,a)=\gamma\cdot r_{\text{exec}}+(1-\gamma)\cdot r_{\text{LLM}} \tag{6}
$$
where $r_{\text{exec}}$ denotes the pass rate on public test cases, $r_{\text{LLM}}$ represents the score from LLM evaluation based on the sandbox feedback results and complete steps provided to the LLM, and $\gamma$ indicates the weight controlling these two parts of the scores.
For code that fails the public test cases, we isolate erroneous algorithmic steps by decomposing the code into blocks and sequentially debugging each block via LLM analysis with public test inputs (zhong2024debug). We retain all correct steps generated during the simulation phase, truncated before the first erroneous step. These validated steps are then incorporated into the MCTS tree as expanded nodes.
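The scoring of Eqn. (6) and the truncation rule can be sketched as below. The `gamma` default and the `step_is_correct` oracle (standing in for the block-wise LLM debugging) are illustrative assumptions.

```python
def evaluation_score(r_exec, r_llm, gamma=0.5):
    # Eqn. (6): weighted mix of sandbox pass rate and LLM judgment.
    # gamma = 0.5 here is an illustrative value, not the paper's setting.
    return gamma * r_exec + (1 - gamma) * r_llm

def keep_verified_prefix(steps, step_is_correct):
    # Retain simulated steps up to (excluding) the first erroneous one;
    # these verified steps are re-inserted into the tree as nodes.
    kept = []
    for step in steps:
        if not step_is_correct(step):
            break
        kept.append(step)
    return kept
```

Truncating at the first error is what saves tokens: later steps conditioned on a wrong step need not be regenerated wholesale, only the suffix after the verified prefix.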
The entire RPM-MCTS process is terminated when the solution passes all public test cases and achieves a high LLM evaluation score. Otherwise, node updates and reflection are performed, and the RPM-MCTS process proceeds until the maximum iteration count is reached.
Backpropagation.
The objective of backpropagation is to update the reward values of nodes upon completion of state value evaluation. We propagate reward values backward from leaf nodes to the root node, updating the state estimates of all nodes along the path. For newly generated nodes during the expansion phase, they collectively update their parent node. As the number of simulations increases, these value estimates become increasingly accurate. This process repeats until the preset maximum simulation count is reached, ultimately resulting in a search tree that records the state value and visit count for each node.
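The update along the path can be sketched as an incremental running mean; the dict node layout (`Q`, `N`, `parent`) is an illustrative assumption.

```python
def backpropagate(leaf, reward):
    # Walk from the evaluated leaf back to the root, incrementing visit
    # counts and updating each node's running-mean value estimate Q.
    node = leaf
    while node is not None:
        node["N"] += 1
        node["Q"] += (reward - node["Q"]) / node["N"]  # incremental mean
        node = node["parent"]
```

After two rollouts with rewards 1.0 and 0.0 through the same leaf, both the leaf and its ancestors hold $Q = 0.5$ with $N = 2$, matching the empirical mean used by the UCB term.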
Code Generation
Termination of the RPM-MCTS process occurs under two conditions: 1) If all public test cases are passed and LLM analysis confirms robustness to unseen edge cases before reaching maximum iterations, the code generated during the simulation phase is retained. 2) When maximum iterations are reached without meeting termination criteria, the leaf node with the highest state value is selected, its ancestral path is traced, and the LLM is instructed to generate code by rigorously adhering to the algorithmic steps assembled from this path.
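The second termination condition can be sketched as follows: enumerate the leaves, pick the one with the highest state value, and read off the algorithmic steps along its ancestry. The node layout (`Q`, `children`, `parent`, `step`) is illustrative.

```python
def best_path(root):
    # Fallback at the iteration limit: pick the leaf with the highest
    # state value and return the algorithmic steps along its ancestry.
    def leaves(node):
        if not node["children"]:
            yield node
        for child in node["children"]:
            yield from leaves(child)
    node = max(leaves(root), key=lambda n: n["Q"])
    path = []
    while node["parent"] is not None:   # the root holds the problem, not a step
        path.append(node["step"])
        node = node["parent"]
    return list(reversed(path))
```

The returned step sequence is then handed to the LLM as the plan it must rigorously follow when emitting the final code.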
Experiments
Experimental Settings
Datasets.
For the construction of the knowledge base, we use the train set splits of APPS (hendrycks2021measuring) and CodeContests (li2022competition) as data sources. After validation and filtering, we obtained 11,038 samples with a total of 82,923 steps. For benchmarking, we used the test set splits of APPS and CodeContests, as well as HumanEval+ (liu2023your) and MBPP+ (liu2023your). The APPS dataset contains three difficulty levels: introductory, interview, and competition. We selected 150 validated samples from each difficulty level. The CodeContests dataset consists of competitive programming problems collected from contest websites such as Codeforces. Additionally, HumanEval (chen2021evaluating) and MBPP (austin2021program) are widely recognized benchmarks in the code generation domain, while HumanEval+ and MBPP+ introduce a larger number of test cases to enable more accurate evaluations. We utilized Claude Sonnet 3.7 to convert all datasets into a unified format, which primarily includes the problem statement, public test cases, private test cases, and standard solution. To facilitate sandbox execution, we transformed datasets with standard input-output problems into function definitions with docstrings. For datasets without public test cases, we selected the first two private test cases as public test cases.
Baselines.
We selected the following methods as baselines for comparison. Base LLM refers to directly prompting the LLM to output solution code using the problem statement and public test cases as input. LDB (zhong2024debug) leverages the LLM to track intermediate variables during code execution to iteratively improve the code. ToT (yao2023tree) performs a search of thought steps using DFS or BFS before generating the final code. SRA-MCTS (xu2024sra) combines LLM with MCTS to explore intermediate reasoning steps. The complete steps obtained by SRA-MCTS are used as input to prompt the LLM to directly infer and output the solution code for evaluation.
Implementation Details.
We use two large-parameter backbone models, Qwen3-235B-A22B (yang2025qwen3) and Claude Sonnet 3.7, alongside a smaller-parameter model, Qwen3-8B. In the code generation domain, pass@k (chen2024survey) is a widely used metric, and we adopted pass@1 as the evaluation metric. The rollout, i.e., maximum number of iterations, was set to 5 for all methods. The branching factor $b$ for tree-based methods was set to 3. The exploration constant $\beta$ for UCB was set to 0.5. In RPM-MCTS, the weight of the knowledge base retrieval score $\alpha$ was set to 0.5, and the similarity filtering threshold was set to 0.85.
Main Results
| Method | APPS-Intro. | APPS-Interv. | APPS-Comp. | CodeContests | HumanEval+ | MBPP+ | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Qwen3-8B | | | | | | | |
| Base LLM | 56.7 | 35.3 | 29.3 | 10.7 | 75.6 | 72.2 | 52.1 |
| LDB | 64.0 (+7.3) | 42.0 (+6.7) | 28.0 (-1.3) | 11.3 (+0.7) | 78.1 (+2.4) | 70.1 (-2.1) | 53.5 (+1.4) |
| ToT | 69.3 (+12.7) | 54.0 (+18.7) | 41.3 (+12.0) | 17.3 (+6.7) | 82.3 (+6.7) | 70.4 (-1.9) | 59.0 (+6.9) |
| SRA-MCTS | 67.3 (+10.7) | 42.7 (+7.3) | 29.3 (+0.0) | 16.0 (+5.3) | 73.8 (-1.8) | 65.9 (-6.4) | 52.8 (+0.7) |
| Ours w/o KB | 76.7 (+20.0) | 56.7 (+21.3) | 40.7 (+11.3) | 22.3 (+11.6) | 82.3 (+6.7) | 78.3 (+6.1) | 63.5 (+11.4) |
| Ours | 77.3 (+20.7) | 60.0 (+24.7) | 43.6 (+14.3) | 23.0 (+12.3) | 83.5 (+7.9) | 76.2 (+4.0) | 64.0 (+11.9) |
| Qwen3-235B-A22B | | | | | | | |
| Base LLM | 78.0 | 54.7 | 42.7 | 24.0 | 86.0 | 78.8 | 64.6 |
| LDB | 78.7 (+0.7) | 61.3 (+6.7) | 47.3 (+4.7) | 25.3 (+1.3) | 86.0 (+0.0) | 78.8 (+0.0) | 66.4 (+1.8) |
| ToT | 84.7 (+6.7) | 62.7 (+8.0) | 57.3 (+14.7) | 27.3 (+3.3) | 85.4 (-0.6) | 75.4 (-3.4) | 67.7 (+3.1) |
| SRA-MCTS | 76.0 (-2.0) | 52.7 (-2.0) | 44.0 (+1.3) | 24.7 (+0.7) | 85.4 (-0.6) | 70.9 (-7.9) | 61.7 (-3.0) |
| Ours w/o KB | 88.0 (+10.0) | 72.0 (+17.3) | 52.0 (+9.3) | 34.7 (+10.7) | 86.6 (+0.6) | 79.9 (+1.1) | 71.3 (+6.7) |
| Ours | 86.7 (+8.7) | 67.3 (+12.7) | 59.3 (+16.7) | 36.7 (+12.7) | 87.8 (+1.8) | 81.2 (+2.4) | 72.3 (+7.7) |
| Claude Sonnet 3.7 | | | | | | | |
| Base LLM | 78.7 | 56.0 | 59.3 | 31.3 | 82.9 | 77.8 | 67.3 |
| LDB | 82.0 (+3.3) | 64.7 (+8.7) | 73.3 (+14.0) | 33.3 (+2.0) | 88.4 (+5.5) | 77.0 (-0.8) | 71.5 (+4.2) |
| ToT | 84.0 (+5.3) | 68.0 (+12.0) | 66.0 (+6.7) | 39.3 (+8.0) | 86.0 (+3.1) | 74.6 (-3.2) | 70.8 (+3.6) |
| SRA-MCTS | 83.3 (+4.7) | 63.3 (+7.3) | 62.0 (+2.7) | 36.0 (+4.7) | 81.1 (-1.8) | 74.3 (-3.4) | 68.4 (+1.1) |
| Ours w/o KB | 92.0 (+13.3) | 73.3 (+17.3) | 78.0 (+18.7) | 42.7 (+11.3) | 86.6 (+3.7) | 79.1 (+1.3) | 76.2 (+8.9) |
| Ours | 92.0 (+13.3) | 74.0 (+18.0) | 81.3 (+22.0) | 46.0 (+14.7) | 89.0 (+6.1) | 81.0 (+3.2) | 78.1 (+10.9) |
Table 1: Performance comparison of all methods across different backbone models on code generation benchmarks. Values in parentheses indicate the improvement over the base LLM.
Our method achieves the most significant improvements across different backbone models and datasets. As shown in Table 1, RPM-MCTS yields average improvements of 11.9% with Qwen3-8B, 7.7% with Qwen3-235B-A22B, and 10.9% with Claude Sonnet 3.7. Since the base Qwen3-8B performs worse than the two larger base LLMs across all datasets, especially the simpler ones, it shows the largest improvement under RPM-MCTS. On the two more challenging datasets, APPS-Competition and CodeContests, the average improvements are 13.3% for Qwen3-8B, 14.7% for Qwen3-235B-A22B, and 18.3% for Claude Sonnet 3.7: the larger models have relatively stronger evaluation capabilities and therefore benefit more on hard problems, whereas Qwen3-8B's weaker evaluation scoring limits its gains there. This suggests that the more difficult the task, the more accurate the evaluation of intermediate algorithmic steps must be.
LDB achieves greater improvements on simpler datasets than on more challenging ones. We found that for harder problems, across multiple rounds of execution feedback, the LLM often only tweaks conditions in the code to pass the public test cases rather than revising the code's underlying logic. SRA-MCTS improves performance on the more challenging datasets but declines on the simpler ones: for simple problems, the LLM's evaluation scores are always perfect or near-perfect, which prematurely ends the step search and yields incomplete or lower-quality reasoning steps.
Comparing the results across three different difficulty levels in the APPS dataset, it can be observed that for the two larger LLMs, as the difficulty increases, our method brings more significant performance improvements. The higher the difficulty of the problem, the more guidance the LLM needs to avoid getting lost in complex reasoning chains. This demonstrates the effectiveness of our method in evaluating intermediate steps, helping LLMs enhance their evaluation capabilities and further unlocking the vast potential code knowledge and reasoning abilities inherent in LLMs.
For a fair comparison, note that our method outperforms the other baselines even without using knowledge base retrieval scores as rewards. The results show that, overall, and especially on the two most challenging datasets, incorporating the knowledge base further stabilizes and improves performance. The reason is that LLM evaluation of intermediate steps in complex problems is unreliable, and random exploration struggles to find the correct solution path; leveraging the reasoning patterns of historically similar problems from the knowledge base helps direct the search. This demonstrates the effectiveness of using knowledge base retrieval scores as rewards for intermediate-process evaluation.
On a few simpler datasets, performance slightly improves when knowledge base retrieval scores are not used. We analyze that this is because, in simple tasks, LLMs can already accurately evaluate the quality of generated paths. In this case, introducing knowledge base rewards, while aiming to provide additional prior information, may retrieve historical cases that are textually similar but logically different in their solutions, introducing noise into MCTS node selection. In contrast, for complex tasks, LLM evaluation capabilities for intermediate steps are limited, the search space is vast, and solutions are sparse. The structured priors provided by the knowledge base effectively guide the search direction, significantly improving success rates. This phenomenon indicates that the effectiveness of knowledge base rewards depends on the balance between task difficulty and LLM evaluation confidence.
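The retrieval-as-reward idea can be illustrated with a minimal sketch. Here `SequenceMatcher` is a crude lexical stand-in for the paper's actual retrieval and similarity model, and the pairing of a candidate step with reference steps is simplified to a best-match score:

```python
from difflib import SequenceMatcher


def text_sim(a: str, b: str) -> float:
    """Crude lexical similarity stand-in for an embedding model."""
    return SequenceMatcher(None, a, b).ratio()


def kb_reward(problem: str, step: str, knowledge_base: list) -> float:
    """Score a candidate step against the reference steps of the most
    similar solved problem in the knowledge base.

    knowledge_base: list of (problem_text, reference_steps) pairs.
    """
    if not knowledge_base:
        return 0.0
    # Retrieve the historically most similar problem.
    _, ref_steps = max(knowledge_base, key=lambda item: text_sim(problem, item[0]))
    # Reward = best match between the candidate step and any reference step.
    return max(text_sim(step, s) for s in ref_steps)
```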
Ablation Study
We conduct ablation experiments using Qwen3-235B-A22B as the backbone model, with results shown in Figure 3.
w/o KB indicates that only LLM evaluation is used in selection, without knowledge base retrieval. Compared to the complete method, the overall performance slightly decreased, with an average drop of 1.05%. The decline was most significant on the two more challenging datasets, with an average drop of 4.67%. This indicates that large models still face challenges with complex problems. By introducing a knowledge base to compare the generated reasoning steps with the correct reasoning steps of similar problems in the knowledge base, the self-assessment capability for complex problems can be improved.
w/o ER means that the execution rewards of public test cases in the sandbox are not used during the simulation phase. This resulted in the largest overall performance drop, highlighting that the core of RPM-MCTS reflection lies in the detailed feedback provided by the code execution environment. In fact, previous research (huang2023large) has already pointed out that without external feedback, LLMs lack the ability to self-correct their reasoning processes.
Figure 3: Ablation study results on different benchmarks.
w/o SF refers to the removal of similarity filtering, i.e., not discarding similar child nodes during the expansion phase. The results show that filtering out repeated intermediate algorithmic steps based on similarity allows resources to be better allocated to exploring new steps, thereby improving performance while reducing computational costs.
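A minimal sketch of the similarity filter, assuming lexical similarity via `difflib` as a stand-in for the paper's similarity measure and using the 0.85 threshold from the implementation details:

```python
from difflib import SequenceMatcher


def filter_similar_steps(steps, threshold=0.85):
    """Keep a candidate step only if it is not too similar to any step
    already kept; near-duplicate expansions are discarded."""
    kept = []
    for step in steps:
        if all(SequenceMatcher(None, step, k).ratio() < threshold for k in kept):
            kept.append(step)
    return kept
```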
w/o LDB denotes not using LDB to locate erroneous steps in our method. The average performance drop was minimal, indicating that removing LDB has little impact on our method. With execution feedback, LLMs are already capable of accurately locating errors. However, in a few cases, LDB still helps in pinpointing erroneous steps.
Performance vs. Rollout
We explore different values of the rollout hyperparameter on Qwen3-235B-A22B, as shown in Figure 4. Since SRA-MCTS is prone to premature termination due to self-overestimation by the model, we set its end-gate value above the maximum possible score so that it reaches the maximum number of iterations whenever possible; we denote this variant as SRA-MCTS_no_eg. The results show that in the early stages, all methods improve significantly as the rollout increases, after which performance gradually stabilizes. Notably, RPM-MCTS performs well even with a rollout of 1, because it enjoys two advantages in its first rollout: proactive guidance from the knowledge base during the selection phase, and wrong-step truncation with rethink-based regeneration during the simulation phase. This allows it to perform at least one round of verification and reflection and to generate complete code. Moreover, for simpler problems, RPM-MCTS can often reach the correct answer with a single simulation, whereas traditional tree search methods tend to make multiple unnecessary expansions even for straightforward tasks.
Figure 4: Performance comparison across different maximum rollout values.
Token Efficiency Analysis
Figure 5 shows the average token usage of different methods across all benchmark datasets. Our method reduces token consumption by approximately 15% compared to the previous MCTS method on both Qwen3-235B-A22B and Claude Sonnet 3.7. This improvement is attributed to: 1) knowledge base retrieval scoring prioritizes more promising nodes, avoiding exploration of invalid branches; 2) similarity filtering eliminates duplicate intermediate reasoning steps, enabling dynamic pruning of the Monte Carlo tree and reducing redundant path generation; and 3) the simulation phase leverages sandbox feedback to pinpoint erroneous steps while retaining the verified correct ones. Overall, RPM-MCTS achieves enhanced search efficiency and generation quality through knowledge base guidance and execution feedback.
Figure 5: Comparison of token consumption and performance across different methods and models.
Reasoning Steps for Data Distillation
Our method is training-free, yet the synthesized reasoning steps it produces can also be used for supervised fine-tuning. Using the Doubao-1.5-pro-32K model, we apply RPM-MCTS for data distillation and construct a dataset of 2.4k code generation samples with reasoning steps. This distilled data is then combined with a foundational dataset of 170k samples to perform full fine-tuning of Doubao-1.5-pro-32K. Benchmark results in Table 2 demonstrate that the training data generated with RPM-MCTS significantly enhances the code capabilities of the base model.
| Benchmark | Base | Fine-tuned | Δ |
| --- | --- | --- | --- |
| SWE-Bench (jimenez2024swebench) | 37.6 | 38.5 | +0.9 |
| MBPP+ (liu2023your) | 75.4 | 76.7 | +1.3 |
| LiveCodeBench (jain2024livecodebench) | 46.2 | 50.5 | +4.3 |
| Aider (aider2024polyglot) | 17.3 | 22.2 | +4.9 |
| McEval (mceval) | 57.5 | 61.2 | +3.7 |
Table 2: Supervised fine-tuning results on Doubao-1.5-pro-32K model using RPM-MCTS synthesized data.
Conclusion
In this paper, we propose RPM-MCTS, which leverages a knowledge base and external sandbox feedback to obtain accurate reward values directly, without requiring additional training of a process reward model. During the search process, errors are identified and promptly corrected. Experimental results demonstrate that RPM-MCTS outperforms current state-of-the-art methods under more constrained search budgets. Additionally, we construct training data using RPM-MCTS and perform full fine-tuning of the base model, which significantly enhances its code capabilities.
A limitation of RPM-MCTS is that code solvable in a single line may be divided into multiple lines due to the step-by-step approach, without impacting correctness. In the future, during the evaluation phase of MCTS, we can dynamically adjust the weights of external rewards from the knowledge base and sandbox, based on the uncertainty of LLMs.
Acknowledgments
This work was supported by the National Science and Technology Major Project (2022ZD0114803), the Natural Science Foundation of Wuhan (2023010201020229), the Fundamental Research Funds for the Central Universities (NO.NJ2023032), and the Major Program (JD) of Hubei Province (2023BAA024).
Appendix A Topic Categories
Table 3 presents the classification of algorithms in our knowledge base, which is divided into 14 categories derived from the most common algorithmic tags on popular programming websites. The per-category counts are at the sample level and sum to 11,038 samples; since each sample contributes multiple steps, the final knowledge base comprises 82,923 step-level items in total.
| Category | Samples |
| --- | --- |
| Data Structures | 750 |
| Algorithm Strategies | 1218 |
| String Processing | 1676 |
| Sorting and Searching | 542 |
| Graph Theory | 977 |
| Bit Manipulation | 403 |
| Mathematics and Number Theory | 1658 |
| Computational Geometry | 611 |
| Optimization Problems | 1310 |
| Two-Pointer Techniques | 213 |
| Dynamic Programming | 836 |
| Recursion and Backtracking | 226 |
| Hashing Techniques | 316 |
| Other | 302 |
Table 3: Knowledge base data categories and statistics.
Appendix B Dataset Details
As detailed in Table 4, our knowledge base was built from the CodeContests-Train and APPS-Train datasets, which together provide 11,038 training samples. The model’s performance was then benchmarked against a test set consisting of six standard benchmarks: the APPS test splits (by difficulty), the CodeContests test split, HumanEval+, and MBPP+.
| | Dataset | Samples |
| --- | --- | --- |
| Knowledge Base | CodeContests-Train | 7368 |
| | APPS-Train | 3670 |
| Test Set | APPS-Test-Introductory | 150 |
| | APPS-Test-Interview | 150 |
| | APPS-Test-Competition | 150 |
| | CodeContests-Test | 150 |
| | HumanEval+ | 164 |
| | MBPP+ | 378 |
Table 4: Dataset statistics.
Appendix C Prompts
Figures 6–9 show the key prompts used in our method. The prompt in Figure 6 generates the next step; the prompt in Figure 7 completes the full steps; the prompt in Figure 8 analyzes execution errors to locate the erroneous step; and the prompt in Figure 9 translates the full steps into code.
Appendix D Algorithm Details
Algorithm 1 provides the detailed pseudocode for our proposed RPM-MCTS. The algorithm follows the canonical MCTS structure, iteratively performing four phases consisting of Selection, Expansion, Evaluation, and Backpropagation. A key component is the EVALUATE_NODE function. Instead of a traditional random rollout, this function generates a complete code solution from the current path, executes it in a sandboxed environment, and assesses its correctness. If the execution fails, the algorithm activates a reflection mechanism that identifies the erroneous step, prunes the incorrect sub-path, and updates its understanding to guide future searches.
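The sandboxed execution inside EVALUATE_NODE can be sketched with a subprocess and a timeout. This is a simplified stand-in for the paper's sandbox environment, not its actual implementation:

```python
import os
import subprocess
import sys
import tempfile


def run_in_sandbox(code: str, test_snippet: str, timeout: float = 5.0):
    """Execute candidate code plus one public test case in a child
    Python process. Returns (passed, stderr); stderr carries the
    execution feedback used to locate erroneous steps."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n" + test_snippet + "\n")
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout)
        return proc.returncode == 0, proc.stderr
    except subprocess.TimeoutExpired:
        return False, "timeout"
    finally:
        os.unlink(path)
```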
Appendix E Case Study
We present a case study of our method in Figure 10. This figure provides a visualization of the final state of the Monte Carlo Tree after the search process concludes. In the tree, the root node represents the problem statement, while subsequent nodes correspond to individual reasoning steps. Furthermore, each node is annotated with its access sequence, visit count, and value. After four search iterations, our method successfully identifies the correct reasoning path to the solution, which corresponds to the leftmost path in the figure.
Prompt for Generating Next Step
Your task is to provide the correct next step based on the previous incorrect code used to solve the problem and a reflection, for a given programming problem and its existing solution steps (which are incomplete). Let's think step by step. But you only generate one step at a time. We aim to decompose complex problems into a series of simpler subproblems and sequentially generate the corresponding steps to solve each subproblem. All the substeps should be combined in a way that avoids contradictions, forming a coherent solution to the original complex problem.

Input format (n steps):
- Problem: {problem}
- Existing steps: {existing_steps}
- Analysis: {reflection}
- History: {history}

The historical content is the solution proposed earlier. To ensure the diversity of solutions, please do not generate ideas identical to those in the historical content.

Guidelines:
- The steps you generate will be passed to a code generation model, so they should be structured in a way that is easy for the model to understand.
- Keep each step concise and focused, avoiding the inclusion of too much information at once. Ensure clear organization and logical progression in your reasoning.
- Important: You can use very little code as detailed explanations in your answers, but you cannot just write code.
- If your answer includes code, it will cause unforeseen losses!
- Your answer should be based on the given analysis. Only if the analysis is wrong can you answer it in your own way.
- If no existing steps are provided, you should output the first step based on the given analysis.
- If there are existing steps, output the next step (Step n+1) that logically follows the provided analysis and the previous steps.

Output format:
- Next step: ...

Your response should only generate solutions to the problem, without any extra words.
Figure 6: Prompt for Generating Next Step.
Prompt for Generating Full Steps
Your task is to take a programming problem and incomplete solution steps (not a full answer), then continue from the provided steps to complete all remaining steps and generate the complete final solution. Let's think step by step. We aim to decompose complex problems into a series of simpler subproblems and sequentially generate the corresponding steps to solve each subproblem. All the substeps should be combined in a way that avoids contradictions, forming a coherent solution to the original complex problem. Note: Do not modify the existing solution steps.

Input format (n steps):
- Problem: {problem}
- Existing steps: {existing_steps}

Guidelines:
- If n is equal to 0, you need to start from scratch, analyze the solution idea briefly, and then output the complete answer.
- Otherwise, you need to output the complete answer that you think is correct, following the train of thought of the existing steps.
- Each step generated should be concise and focused, addressing only a small part of the solution. Avoid making the steps too complex or combining multiple ideas into one.
- The complete solution should consist of at least three steps, so don't skip any essential steps.
- Your output should be clear and systematic, with each step described one at a time to ensure logical progression.
- Note: You are only allowed to describe the reasoning steps in natural language. Do not output any code.

Output format:
- Step 1: ...
- Step 2: ...
- ...
- Step n: ...
- Step n + 1: ...
- ...

Among them, Step 1 to Step n are consistent with the existing steps. Continue to generate based on the existing steps to obtain a complete answer. The following is the input. Please output according to the specified output format, do not output unnecessary information, and do not repeat the question. Note: Your output should start from Step 1 and include all the steps, not just the next step.
Figure 7: Prompt for Generating Full Steps.
Prompt for Code Debugging and Analysis
The following is a Python code problem, which includes the thoughts and code for solving the problem, as well as the return results of debugging for a failed test case.

Input:
- Python code problem: {problem}
- Thoughts: {solution}
- Code: {code}
- Test case debug information: {exec_result}

The debugging process first splits the code into block-level code according to the AST. If the block-level code is correct after debugging analysis, the "correct" field is True; otherwise it is False. The "explanation" field is the analysis of the block-level code debugging.

Guidelines: Your task is to determine which specific step is written incorrectly based on the debug return results, and to conduct an analysis and summary. The correctly generated code and corresponding thought processes will be retained, while the incorrect code and corresponding thought processes will be discarded. You need to analyze and summarize the points to note so that subsequent thought processes can be generated based on the correct thought processes to correct the previous errors.

Output format: Your output consists of two parts:
1. Which specific step went wrong. Wrap it with the <step_n>x</step_n> XML tag, where x represents the number of the first erroneous step. If there are multiple erroneous steps in the thought process, only output the number of the first erroneous step. Do not output any extra content.
2. Analyze and summarize the points to note.

The final output should look like this: <step_n>x</step_n>..., where ... represents the generated analysis.
Figure 8: Prompt for analyzing debugging results and identifying errors.
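The block-level AST split referenced in the debugging prompt above can be sketched with Python's standard `ast` module. The helper below is illustrative only (its name and return shape are assumptions, not the authors' implementation): it yields each top-level statement of a function body together with its line range, which is the granularity at which sandbox feedback would be attributed to code blocks.

```python
import ast

def split_into_blocks(source: str):
    """Split each function body into top-level statement blocks.

    Hypothetical helper: returns a list of (start_line, end_line, code)
    tuples, one per top-level statement inside a function definition,
    approximating the block-level units described in the prompt.
    """
    tree = ast.parse(source)
    blocks = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for stmt in node.body:
                blocks.append(
                    (stmt.lineno, stmt.end_lineno,
                     ast.get_source_segment(source, stmt))
                )
    return blocks
```

Each tuple can then be checked against the sandbox trace to set the "correct"/"explanation" fields per block; `ast.get_source_segment` and `end_lineno` require Python 3.8+.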
Prompt for Code Implementation
You will play the role of a code implementer, writing complete code based on the given problem and the step-by-step analysis of the problem. Your code must strictly follow the analysis steps provided and should not include your own opinions.

Rules:
- Import function libraries (like: import math) and output function code only, without a main function, so that I can call your generated functions directly.
- The output code should be wrapped in code blocks (like ```python). Example: ```python\ndef add(a, b):\n    return a + b\n```.

Input:
- question: {question}
- analysis: {analysis}
Figure 9: Prompt for generating code based on steps.
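Before sandbox execution, the fenced code block must be recovered from the model's response to this prompt. A minimal post-processing sketch, assuming the fence format requested above (the helper name is hypothetical):

```python
import re

def extract_code(response: str) -> str:
    """Extract the first ```python fenced block from a model response.

    Hypothetical post-processing helper; falls back to the raw response
    when no fenced block is present.
    """
    match = re.search(r"```python\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()
```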
Algorithm 1 The RPM-MCTS Algorithm for Code Generation
Input: Problem description $P$ , total iterations $I$ , branching factor $B$ , success threshold $\theta_{succ}$
Output: The best generated code solution $C_{best}$
1: $v_{root}←\text{CREATE\_NODE}(P)$
2: for $i← 1$ to $I$ do
3: // 1. Selection
4: $v_{l}← v_{root}$
5: while $v_{l}$ is fully expanded do
6: $v_{l}←\text{SELECT\_BEST\_CHILD\_UCB}(v_{l})$
7: end while
8:
9: // 2. Expansion
10: if $v_{l}$ is not a terminal node then
11: $S_{gen}←\emptyset,H_{hist}←\emptyset$
12: for $j← 1$ to $B$ do
13: $s_{new}←\text{GENERATE\_NEXT\_STEP}(P,\text{path}(v_{l}),H_{hist})$ {Generate diverse steps}
14: Add $s_{new}$ to $S_{gen}$ and $H_{hist}$
15: end for
16: $S_{unique}←\text{FILTER\_SIMILAR\_STEPS}(S_{gen})$ {Filter semantic duplicates}
17: for each step $s$ in $S_{unique}$ do
18: $v_{c}←\text{ADD\_CHILD}(v_{l},s)$
19: $v_{c}.value←\text{GET\_INITIAL\_VALUE}(\text{path}(v_{c}))$ {Score from knowledge base & LLM}
20: end for
21: Mark $v_{l}$ as fully expanded
22: end if
23:
24: // 3. Evaluation (Simulation)
25: $v_{r}←\text{SELECT\_BEST\_CHILD\_UCB}(v_{l})$
26: if $v_{r}$ is not NULL then
27: $is\_solved,Q_{final}←\text{EVALUATE\_NODE}(v_{r},P,\theta_{succ})$
28: if $is\_solved$ then
29: return $\text{GET\_SOLUTION}(v_{r})$ {Optimal solution found, terminate early}
30: end if
31: else
32: $Q_{final}← v_{l}.value$ {Use parent value if no children to evaluate}
33: end if
34:
35: // 4. Backpropagation
36: $\text{BACKPROPAGATE}(v_{r},Q_{final})$
37: end for
38: return $\text{GET\_BEST\_SOLUTION}(v_{root})$ {Return best solution after all iterations}
39:
40: Function EVALUATE_NODE( $v,P,\theta_{succ}$ )
41: $\pi_{s}←\text{path}(v)$
42: $S_{full},C←\text{GENERATE\_FULL\_SOLUTION}(\pi_{s})$
43: $r_{exec},res_{sb}←\text{EXECUTE\_CODE}(C)$ {Evaluate in sandbox}
44: $r_{llm},f←\text{EVALUATE\_WITH\_LLM}(S_{full},C,res_{sb})$ { $f$ is reflection}
45: $Q_{comb}←\gamma· r_{exec}+(1-\gamma)· r_{llm}$ {Weighted combined value}
46: if $r_{exec}$ is SUCCESS and $r_{llm}≥\theta_{succ}$ then
47: $\text{ADD\_SOLUTION\_TO\_TREE}(v,S_{full})$
48: return true, $Q_{comb}$
49: end if
50: if $r_{exec}$ is FAILURE then
51: $idx_{err}←\text{LOCATE\_ERROR\_STEP}(S_{full},res_{sb},f)$
52: $S_{pruned}←\text{PRUNE\_STEPS}(S_{full},idx_{err})$
53: $\text{ADD\_SOLUTION\_TO\_TREE}(v,S_{pruned})$ {Add correct partial path}
54: $\text{UPDATE\_REFLECTION\_IN\_TREE}(v,f)$
55: end if
56: return false, $Q_{comb}$ {Return failure if not solved}
Figure 10: Case study.