# Kimi-Dev: Agentless Training as Skill Prior for SWE-Agents
> † Indicates equal contribution. ‡ Joint leads.
Abstract
Large Language Models (LLMs) are increasingly applied to software engineering (SWE), with SWE-bench as a key benchmark. Solutions are split into SWE-Agent frameworks with multi-turn interactions and workflow-based Agentless methods with single-turn verifiable steps. We argue these paradigms are not mutually exclusive: reasoning-intensive Agentless training induces skill priors, including localization, code edit, and self-reflection, that enable efficient and effective SWE-Agent adaptation. In this work, we first curate the Agentless training recipe and present Kimi-Dev, an open-source SWE LLM achieving 60.4% on SWE-bench Verified, the best among workflow approaches. With additional SFT adaptation on 5k publicly available trajectories, Kimi-Dev powers SWE-Agents to 48.6% pass@1, on par with that of Claude 3.5 Sonnet (the 20241022 version). These results show that structured skill priors from Agentless training can bridge workflow and agentic frameworks for transferable coding agents.
1 Introduction
Recent years have witnessed the rapid development of Large Language Models (LLMs) automating Software-Engineering (SWE) tasks (jimenez2023swe; yang2024swe; xia2024agentless; anthropic_claude_3.5_sonnet_20241022; pan2024training; wang2024openhands; wei2025swe; yang2025qwen3; team2025kimi_k2; openai_gpt5_system_card_2025). Among the benchmarks that track the progress of LLM coding agents in SWE scenarios, SWE-bench (jimenez2023swe) stands out as one of the most representative: given an issue that reports a bug in a real-world GitHub repository, a model is required to produce a patch that fixes the bug, whose correctness is judged by whether the corresponding unit tests pass after its application. The difficulty of the task (as of the date the benchmark was proposed), the existence of an outcome reward via the provided auto-eval harness, and the real-world economic value it reflects have made SWE-bench a focal point of the field.
Two lines of solutions have emerged for the SWE-bench task. Agent-based solutions like SWE-Agent (yang2024swe) and OpenHands (wang2024openhands) take an interactionist approach: instructed with the necessary task description, a predefined set of available tools, and the specific problem statement, the agent interacts with an executable environment over multiple turns, makes changes to the source code, and determines when to stop autonomously. In contrast, workflow-based solutions like Agentless (xia2024agentless) pre-define the solving process as a pipeline, which consists of steps like localization, bug repair, and test composition. Such task decomposition transforms the agentic task into generating correct responses for a chain of single-turn problems with verifiable rewards (guo2025deepseek; wei2025swe; SWESwiss2025).
The two paradigms have been widely viewed as mutually exclusive. On the one hand, SWE-Agents offer a higher performance ceiling and better adaptability, thanks to the greater freedom of multi-turn interaction without fixed routines. However, such frameworks have also proved more difficult to train due to their end-to-end nature (deepswe2025; cao2025skyrl). On the other hand, Agentless methods offer better modularity and ease of training with Reinforcement Learning with Verifiable Rewards (RLVR) techniques, but a more limited exploration space and flexibility, as well as difficulty in behavior monitoring, since erroneous patterns appear only within single-turn long reasoning content (pan2024training). We challenge this dichotomy from the perspective of the training recipe: we argue that Agentless training should not be viewed as the ultimate deliverable, but rather as a way to induce skill priors, i.e., atomic capabilities such as localizing buggy implementations and updating erroneous code snippets, as well as self-reflection and verification, all of which help scaffold the efficient adaptation of more capable and generalizable SWE-agents.
Guided by this perspective, we introduce Kimi-Dev, an open-source code LLM for SWE tasks. Specifically, we first develop an Agentless training recipe, which includes mid-training, cold-start, reinforcement learning, and test-time self-play. This results in 60.4% accuracy on SWE-bench Verified, the SoTA performance among workflow-based solutions. Building on this, we show that Agentless training induces skill priors: a minimal SFT cold-start from Kimi-Dev with 5k publicly available trajectories enables efficient SWE-agent adaptation and reaches a 48.6% pass@1 score, similar to that of Claude 3.5 Sonnet (the 20241022 version; anthropic_claude_3.5_sonnet_20241022). We demonstrate that these induced skills transfer from non-agentic workflows to agentic frameworks, and that the self-reflection in long Chain-of-Thoughts baked in through Agentless training further enables the agentic model to leverage more turns and succeed over a longer horizon. Finally, we also show that the skills from Agentless training generalize beyond SWE-bench Verified to broader benchmarks like SWE-bench-live (zhang2025swe) and SWE-bench Multilingual (yang2025swesmith). Together, these results reframe the relationship between Agentless and agentic frameworks: not mutually exclusive, but complementary stages in building transferable coding LLMs. This shift offers a principled view that training with structured skill priors can scaffold autonomous agentic interaction.
The remainder of this paper is organized as follows. Section 2 reviews the background of the framework dichotomy and outlines the challenges of training SWE-Agents. Section 3 presents our Agentless training recipe and the experimental results. Section 4 demonstrates how these Agentless-induced skill priors enable efficient SWE-Agent adaptation, and evaluates the skill transfer and generalization beyond SWE-bench Verified.
2 Background
In this section, we first review the two dominant frameworks for SWE tasks and their dichotomy in Section 2.1. We then summarize the progress and challenges of training SWE-Agents in Section 2.2. The background introduction sets the stage for reinterpreting Agentless training as skill priors for SWE-Agents, a central theme developed throughout the later sections.
2.1 Framework Dichotomy
Two paradigms currently dominate solutions for automating software engineering tasks. Agentless approaches decompose SWE tasks into modular workflows (xia2024agentless; wei2025swe; ma2024lingma; ma2025alibaba; swe-fixer). Typical workflows consist of bug localization, bug repair, and test generation. This design provides modularity and stability: each step can be optimized separately as a single-turn problem with verifiable rewards (wei2025swe; SWESwiss2025). However, such rigidity comes at the cost of flexibility: when a scenario requires multiple rounds of incremental updates, Agentless approaches struggle to adapt.
By contrast, SWE-agents adopt an end-to-end, multi-turn reasoning paradigm (yang2024swe; wang2024openhands). Rather than following a fixed workflow, they iteratively plan, act, and reflect, resembling how human developers debug complex issues. This design enables greater adaptability but introduces significant difficulties: trajectories often extend over tens or even hundreds of steps, the LLM's context window must span the entire interaction history, and the model must handle exploration, reasoning, and tool use simultaneously.
The dichotomy between fixed workflows (e.g., Agentless) and agentic frameworks (e.g., SWE-Agent) has shaped much of the community's perspective. The two paradigms are often regarded as mutually exclusive: one trades off flexibility and performance ceiling for modularity and stability, whereas the other makes the reverse compromise. Our work challenges this dichotomy, as we demonstrate that Agentless training induces skill priors that make further SWE-agent training both more stable and more efficient.
2.2 Training SWE-agents
Training SWE-agents relies on acquiring high-quality trajectories through interactions with executable environments. Constructing such large-scale environments and collecting reliable trajectories, however, requires substantial human labor as well as costly calls to frontier models, making data collection slow and resource-demanding (pan2024training; badertdinov2024sweextra). Recent studies also attempt to scale environment construction by synthesizing bugs for the reverse construction of executable runtime (jain2025r2e; yang2025swesmith).
However, credit assignment across long horizons remains challenging, as outcome rewards are sparse and often only available when a final patch passes its tests. Reinforcement learning techniques have been proposed, but they frequently suffer from instability or collapse when trajectories exceed dozens of steps (deepswe2025; cao2025skyrl). SWE-agent training is also highly sensitive to initialization: starting from a generic pre-trained model often leads to brittle behaviors, such as failing to use tools effectively or getting stuck in infinite loops of specific action patterns (pan2024training; yang2025swesmith).
These limitations motivate our central hypothesis: instead of training SWE-agents entirely from scratch, one can first induce skill priors through agentless training, enhancing the atomic capabilities like localization, repair, test composition, and self-reflection. These priors lay a foundation that makes subsequent agentic training both more efficient and more generalizable.
3 Agentless Training Recipe
Instead of training SWE-agents from scratch, we leverage Agentless training to induce skill priors. The skills enhanced by Agentless training include, but are not limited to, bug localization, patch generation, and self-reflection and verification, which lay the foundation for end-to-end agentic interaction. In this section, we elaborate on our Agentless training recipe: the duo framework design of BugFixer and TestWriter, mid-training and cold-start, reinforcement learning, and test-time self-play. Sections 3.1–3.4 detail these ingredients, and Section 3.5 presents the experimental results for each of them. This training recipe results in Kimi-Dev, an open-source 72B model that achieves 60.4% on SWE-bench Verified, the SoTA performance among workflow-based solutions.
Figure 1: Agentless framework for Kimi-Dev: the duo of BugFixer and TestWriter.
3.1 Framework: the Duo of BugFixer and TestWriter
In GitHub issue resolution, we conceptualize the process as the collaboration between two important roles: the BugFixer, who produces patches that correctly address software bugs, and the TestWriter, who creates reproducible unit tests that capture the reported bug. A resolution is considered successful when the BugFixer's patch passes the tests provided for the issue, while a high-quality test from the TestWriter should fail on the pre-fix version of the code and pass once the fix is applied.
Each role relies on two core skills: (i) file localization, the ability to identify the specific files relevant to the bug or test, and (ii) code edit, the ability to implement the necessary modifications. For BugFixer, effective code edits repair the defective program logic, whereas for TestWriter, they add precise unit-test functions that reproduce the issue to the test files. As illustrated in Figure 1, these two skills constitute the fundamental abilities underlying GitHub issue resolution. We therefore enhance them through the following training stages: mid-training, cold-start, and RL.
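The TestWriter's success criterion can be expressed as a small predicate. The sketch below is illustrative, not the paper's implementation: `run_tests`, `test_patch`, and `fix_patch` are hypothetical stand-ins for an execution harness and diff patches.

```python
def is_valid_reproduction_test(run_tests, test_patch, fix_patch):
    """A test patch is valid iff it fails on the pre-fix code and passes post-fix.

    run_tests(patches) is a hypothetical harness: it applies the given patches
    to a clean checkout, runs the test suite, and returns True iff all tests pass.
    """
    fails_before_fix = not run_tests([test_patch])
    passes_after_fix = run_tests([test_patch, fix_patch])
    return fails_before_fix and passes_after_fix
```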
3.2 Mid-Training & Cold Start
To enhance the model's prior as both a BugFixer and a TestWriter, we perform mid-training with $\sim$150B tokens of high-quality, real-world data. With the Qwen 2.5-72B-Base (qwen2025qwen25technicalreport) model as a starting point, we collect millions of GitHub issues and PR commits to form its mid-training dataset, which consists of (i) $\sim$50B tokens in the Agentless format derived from natural diff patches, (ii) $\sim$20B tokens of curated PR commit packs, and (iii) $\sim$20B tokens of synthetic data with reasoning and agentic interaction patterns (upsampled by a factor of 4 during training). The data recipe is carefully constructed to enable the model to learn how human developers reason about GitHub issues, implement code fixes, and develop unit tests. We also performed strict data decontamination to exclude any repository in the SWE-bench Verified test set. Mid-training substantially enhances the model's knowledge of practical bug fixes and unit tests, making it a better starting point for later stages. The details of the recipe are covered in Appendix A.
To activate the model's long Chain-of-Thought (CoT) capability, we also construct a cold-start dataset of reasoning trajectories based on the SWE-Gym (pan2024training) and SWE-bench-extra (badertdinov2024scaling) datasets, generated by the DeepSeek R1 model (guo2025deepseek, the 20250120 version). In this setup, R1 plays the roles of BugFixer and TestWriter, producing outputs such as file localizations and code edits. Through supervised finetuning on this dataset as a cold start, we enable the model to acquire essential reasoning skills, including problem analysis, method sketching, self-refinement, and exploration of alternative solutions.
3.3 Reinforcement Learning
After mid-training and cold-start, the model demonstrates strong performance in localization. Therefore, reinforcement learning (RL) focuses solely on the code edit stage. We construct a training set specifically for this stage, where each prompt is equipped with an executable environment. We further employ multiple localization rollouts from the initial model to generate varied file location predictions, which diversifies the prompts used in code-edit RL.
For the RL algorithm, we adopt the policy optimization method proposed by Kimi k1.5 (team2025kimi_k15), which has shown promising results on reasoning tasks in both math and coding. Kimi k1.5 adopts a simple policy-gradient approach based on the REINFORCE algorithm (williams1992simple). Similar to GRPO (shao2024deepseekmath), we use the average reward of multiple rollouts as the baseline to normalize the returns. When adapting the algorithm to our SWE-bench setting, we highlight the following three key desiderata:
1. Outcome-based reward only: We rely solely on the final execution outcome from the environment as the raw reward (0 or 1), without incorporating any format- or process-based signals. For BugFixer, a positive reward is given if the generated patch passes all ground-truth unit tests. For TestWriter, a positive reward is assigned when (i) the predicted test raises a failure in the repo without the ground-truth bugfix patch applied, AND (ii) the failure is resolved once the ground-truth bugfix patch is applied.
1. Adaptive prompt selection: Prompts with pass@16 = 0 are initially discarded, as they contribute no gradient signal to the batch loss. This yields an initial prompt set of 1,200 problems and enlarges the effective batch size. A curriculum learning scheme is then applied: once the success rate on the current set exceeds a threshold, 500 previously excluded prompts (initial pass@16 = 0, but potentially solvable by the improved policy) are reintroduced every 100 RL steps to gradually raise task difficulty.
1. Positive example reinforcement: As performance improvements begin to plateau in later stages of training, we incorporate positive samples from recent RL iterations into the training batch of the current iteration. This reinforces the model's reliance on successful patterns, thereby accelerating convergence in the final phase.
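The group-mean baseline and the pass@16 prompt filter above can be sketched in a few lines. This is a minimal illustration under our reading of the recipe, not the actual Kimi k1.5 objective, which includes further components; the function names are ours.

```python
import numpy as np

def group_advantages(rewards):
    """REINFORCE-style advantages: subtract the mean reward of the rollout
    group from each rollout's 0/1 outcome reward (GRPO-like normalization)."""
    r = np.asarray(rewards, dtype=float)
    return r - r.mean()

def filter_prompts(prompt_rewards, k=16):
    """Keep only prompts with pass@k > 0, so every retained prompt can
    contribute a non-degenerate learning signal to the batch loss."""
    return [p for p, rs in prompt_rewards.items() if any(rs[:k])]
```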
Robust sandbox infrastructure. We construct Docker environments orchestrated with Kubernetes (kubernetes), which provide secure, scalable sandbox infrastructure for efficient training and rollouts. The infrastructure supports over 10,000 concurrent instances with robust performance, making it well suited to competitive programming and software engineering tasks (see Appendix D for details).
3.4 Test-Time Self-Play
After RL, the model masters the roles of both a BugFixer and a TestWriter. During test time, it adopts a self-play mechanism to coordinate its bug-fixing and test-writing abilities.
Following Agentless (xia2024agentless), we leverage the model to generate 40 candidate patches and 40 tests for each instance. Each patch generation involves an independent run of localization and code edit from BugFixer, where the first run uses greedy decoding (temperature 0) and the remaining 39 use temperature 1 to ensure diversity. Similarly, 40 tests are generated independently from TestWriter. To guarantee validity, we first filter out test patch candidates that fail to raise a failure in the original repo before any BugFixer patch is applied.
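The sampling scheme above (one greedy run plus temperature-1 samples) can be sketched as follows; `sample_patch` is a hypothetical wrapper around one full localization-plus-code-edit run of the model.

```python
def generate_candidates(sample_patch, n=40):
    """Draw n candidate patches: one greedy run (temperature 0) followed by
    n - 1 samples at temperature 1.0 for diversity."""
    candidates = [sample_patch(temperature=0.0)]
    candidates += [sample_patch(temperature=1.0) for _ in range(n - 1)]
    return candidates
```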
Denote the remaining TestWriter patches as the set $\mathcal{T}$, and the BugFixer patches as the set $\mathcal{B}$. For each $b_{i}\in\mathcal{B}$ and $t_{j}\in\mathcal{T}$, we execute the test suite over the test file modified by $t_{j}$ twice: first without $b_{i}$, and then with $b_{i}$ applied. From the execution log of the first run, we obtain the counts of failed and passed tests from $t_{j}$, denoted ${\rm F}(j)$ and ${\rm P}(j)$. Comparing the execution logs of the two runs, we obtain the counts of fail-to-pass and pass-to-pass tests, denoted ${\rm FP}(i,j)$ and ${\rm PP}(i,j)$, respectively. We then calculate the score for each $b_{i}$ as
$$
S_{i}=\frac{\sum_{j}{\rm FP}(i,j)}{\sum_{j}{\rm F}(j)}+\frac{\sum_{j}{\rm PP}(i,j)}{\sum_{j}{\rm P}(j)}, \tag{1}
$$
where the first part reflects the performance of $b_{i}$ under reproduction tests, and the second part could be viewed as the characterization of $b_{i}$ under regression tests (xia2024agentless). We select the BugFixer patch $b_{i}$ with the highest $S_{i}$ score as the ultimate answer.
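Equation 1 can be computed directly from the execution-log counts. A sketch, assuming `FP` and `PP` are per-(patch, test) count matrices and `F` and `P` are the per-test counts from the first run:

```python
def select_best_patch(FP, PP, F, P):
    """Score each BugFixer patch i via Eq. 1:
        S_i = sum_j FP[i][j] / sum_j F(j) + sum_j PP[i][j] / sum_j P(j)
    FP[i][j] / PP[i][j]: fail-to-pass / pass-to-pass counts for patch i
    under test patch j; F[j] / P[j]: failed / passed counts without any fix.
    Returns the index of the highest-scoring patch and all scores."""
    total_f, total_p = sum(F), sum(P)
    scores = [sum(fp_row) / total_f + sum(pp_row) / total_p
              for fp_row, pp_row in zip(FP, PP)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores
```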
Table 1: Performance comparison of models on SWE-bench Verified under Agentless-like frameworks. All performances are obtained under the standard 40-patch, 40-test setting (xia2024agentless), except that Llama3-SWE-RL uses 500 patches and 30 tests.
3.5 Experiments
3.5.1 Main Results
<details>
<summary>figs/sec3_mid_training/mid-train_perf.png Details</summary>

Bar chart: SWE-bench Verified pass rate (%) versus mid-training token budget. Pass rates: 28.6% at 50B tokens, 32.6% at 100B, and 36.6% at 150B, a consistent gain of 4 points per additional 50B tokens.

</details>
Figure 2: The performance on SWE-bench Verified after mid-training with different training token budgets.
Table 1 shows the performance of Kimi-Dev on SWE-bench Verified (jimenez2023swe). Instead of the text-similarity rewards used in SWE-RL (wei2025swe), we adopt execution-based signals for a more reliable assessment of fix quality. Our two-stage TestWriter also improves over prior Agentless systems (xia2024agentless; guo2025deepseek; SWESwiss2025), which rely on a single root-level test, by better capturing repository context and mirroring real developer workflows (OpenAI-Codex-2025). Kimi-Dev attains state-of-the-art performance among open-source models, resolving 60.4% of issues.
3.5.2 Mid-Training
In this section, we evaluate the relationship between the amount of data used during mid-training and model performance. Specifically, we finetuned Qwen 2.5-72B-Base on subsets of the mid-training data containing 50B, 100B, and approximately 150B tokens, and then lightly activated each mid-trained model using the same set of 2,000 BugFixer input-output pairs for the SFT cold start. We report only BugFixer pass@1 here for simplicity of evaluation. Figure 2 shows that increasing the number of mid-training tokens consistently improves model performance, highlighting the effectiveness of this stage.
3.5.3 Reinforcement Learning
<details>
<summary>figs/sec3_rl_scaling/quick_plot_twin_bf_final.png Details</summary>

Line chart: BugFixer token length (left axis, roughly 3,900 to 7,400 tokens) and pass rate (right axis, roughly 34.5% to 46%) versus RL training steps (0 to 500). Both curves trend upward, with the pass rate fluctuating more than the token length.

</details>
(a) 72B Joint RL, BugFixer
<details>
<summary>figs/sec3_rl_scaling/quick_plot_twin_tw_final.png Details</summary>

Line chart: TestWriter token length (left axis, roughly 3,050 to 6,400 tokens) and reproduced rate (right axis, roughly 20% to 34%, peaking near step 400) versus RL training steps (0 to 500). Token length rises steadily; the reproduced rate rises with notable volatility.

</details>
(b) 72B Joint RL, TestWriter
Figure 3: Joint code-edit RL experiments on the model after mid-training and cold-start. The pass rate for BugFixer and the reproduced rate for TestWriter are reported as pass@1 at temperature 1.0. Performance improves consistently as outputs grow longer.
Experimental setup
We set the training step per RL iteration as 5 and sample 10 rollouts for each of the 1,024 problems from the union of SWE-gym (pan2024training) and SWE-bench-extra (badertdinov2024sweextra). We dynamically adjust the prompt set every 20 iterations to gradually increase task difficulty. We fix the maximum training context length as 64k tokens, since the prompt input contains the contents of the entire files localized by the initial model in advance.
Results
Figure 3 shows the performance and response-length curves on the test set during RL training. The pass rate and the reproduced rate are computed as pass@1 at temperature 1.0. We observe that both model performance and response length steadily increase, reflecting the expected benefits of RL scaling. Similar RL scaling curves are observed in our ablation experiments on Qwen2.5-14B-Instruct models, demonstrating the effectiveness of the RL training recipe across models of different sizes. The experimental details, as well as the ablation studies on positive example reinforcement in Section 3.3, are listed in Appendix C.2. The lengthy outputs consist of in-depth problem analysis and self-reflection patterns, similar to those in math and code reasoning tasks (team2025kimi_k15; guo2025deepseek). We also observe that for TestWriter, occasional false positives occur during RL training due to a lack of reproduction coverage. We leave the case studies to Appendix E and further improvement for future work.
<details>
<summary>figs/sec3_sp_scaling/selfplay_figure_v2.png Details</summary>

### Visual Description
## Line Charts: Pass Rate vs. Number of Patches
### Overview
The image contains two line charts comparing the pass rate (%) against the number of patches (BF x TW) for different methods. The left chart compares "Self-play" and "Majority Voting", while the right chart compares "Self-play" and "Pass@N". The x-axis represents the number of patches, and the y-axis represents the pass rate in percentage.
### Components/Axes
**Left Chart:**
* **Title:** Implicitly, Pass Rate vs. Number of Patches for Self-play and Majority Voting
* **X-axis:**
* Label: "Number of patches: BF x TW"
* Scale: Categorical, with values "1x1", "3x3", "5x5", "10x10", "20x20", "40x40"
* **Y-axis:**
* Label: "Pass Rate (%)"
* Scale: Numerical, ranging from 45.0 to 62.5, with increments of 2.5.
* **Legend:** Located in the top-left corner.
* "Self-play" (blue line with circle markers)
* "Majority Voting" (green line with triangle markers)
**Right Chart:**
* **Title:** Implicitly, Pass Rate vs. Number of Patches for Self-play and Pass@N
* **X-axis:**
* Label: "Number of patches: BF x TW"
* Scale: Categorical, with values "1x1", "3x3", "5x5", "10x10", "20x20", "40x40"
* **Y-axis:**
* Label: "Pass Rate (%)"
* Scale: Numerical, ranging from 45 to 75, with increments of 5.
* **Legend:** Located in the top-left corner.
* "Self-play" (blue line with circle markers)
* "Pass@N" (orange line with diamond markers)
### Detailed Analysis
**Left Chart:**
* **Self-play (blue line):** The pass rate generally increases as the number of patches increases, then plateaus.
* 1x1: 48.0%
* 3x3: 52.6%
* 5x5: 55.4%
* 10x10: 58.8%
* 20x20: 60.4%
* 40x40: 60.4%
* **Majority Voting (green line):** The pass rate increases slightly and then remains relatively stable.
* 1x1: 48.8%
* 3x3: 50.0%
* 5x5: 51.0%
* 10x10: 51.4%
* 20x20: 51.2%
**Right Chart:**
* **Self-play (blue line):** The pass rate increases as the number of patches increases, then plateaus.
* 1x1: 48.0%
* 3x3: 52.6%
* 5x5: 55.4%
* 10x10: 58.8%
* 20x20: 60.4%
* 40x40: 60.4%
* **Pass@N (orange line):** The pass rate consistently increases as the number of patches increases.
* 1x1: 60.4%
* 3x3: 64.0%
* 5x5: 67.4%
* 10x10: 71.6%
* 20x20: 74.8%
### Key Observations
* In both charts, the "Self-play" method shows an increasing pass rate initially, but it plateaus after a certain number of patches (20x20).
* "Pass@N" consistently outperforms "Self-play" and "Majority Voting" as the number of patches increases.
* "Majority Voting" shows a relatively flat pass rate across different numbers of patches.
### Interpretation
The data suggests that increasing the number of patches (BF x TW) generally improves the pass rate for all methods, up to a point. "Pass@N" demonstrates the most significant improvement with increasing patches, indicating it may be more effective at leveraging information from larger patch sizes. "Self-play" benefits from increased patch sizes initially, but its performance plateaus, suggesting a diminishing return. "Majority Voting" appears less sensitive to the number of patches, maintaining a relatively stable pass rate. The choice of method and patch size should be considered based on the desired performance and computational cost.
</details>
Figure 4: Test-time self-play on SWE-bench Verified. Performance improves with more generated patches and tests. Left: Execution-based self-play consistently surpasses BugFixer majority voting. Right: Self-play performance remains below pass@N, where the ground-truth test patch is used, suggesting that room remains for TestWriter to improve.
3.5.4 Test-time Self-Play
Following Section 3.4, we evaluate how the final performance on SWE-bench Verified scales with the number of patches and tests generated. The temperature is fixed at 0 for the initial rollout, and set to 1.0 for the subsequent 39 rollouts. As shown on the left of Figure 4, the final performance improves from 48.0% to 60.4% as the number of patch-test pairs increases from $1\times 1$ to $40\times 40$, and consistently surpasses the results obtained from majority voting over the BugFixer patches alone.
Specifically, the self-play result obtained from 3 patches and 3 tests per instance already surpasses the performance of majority voting over 40 BugFixer patches. This demonstrates the effectiveness of the additional information from test-time execution. Room for TestWriter improvement, though, still exists for more powerful self-play: as shown in Figure 4, self-play performance remains below pass@N, where ground-truth test cases serve as the criterion for issue resolution. This finding aligns with anthropic_claude_3.5_sonnet_20241022, which introduced a final edge-case checking phase to generate a more diverse set of test cases, thereby strengthening the role of the "TestWriter" in their SWE-Agent framework. We also report preliminary observations of a potential parallel scaling phenomenon, which requires no additional training and may enable scalable performance improvements. The details of the phenomenon and our analyses are covered in Appendix F.
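A minimal sketch of the execution-based selection underlying this self-play, assuming the aggregation rule is "pick the patch that passes the most generated tests, breaking ties by majority over identical patches"; the exact rule in Section 3.4 may differ, and `run_test` stands in for the execution sandbox.

```python
from collections import Counter

def select_patch(patches, tests, run_test):
    """Cross-execute BugFixer patches against TestWriter tests and pick
    the patch that passes the most tests.

    patches:  candidate patches from BugFixer rollouts
    tests:    reproduction tests from TestWriter rollouts
    run_test: callable(patch, test) -> bool; True if the test passes
              after the patch is applied (execution-sandbox stub)
    """
    scores = {i: sum(run_test(p, t) for t in tests)
              for i, p in enumerate(patches)}
    best = max(scores.values())
    finalists = [i for i, s in scores.items() if s == best]
    # Tie-break with majority voting over identical patch texts, which
    # degrades to plain BugFixer voting when the tests are uninformative.
    return Counter(patches[i] for i in finalists).most_common(1)[0][0]
```

When every test fails on every patch, the scores tie and the rule reduces to BugFixer-only majority voting, which matches the left panel of Figure 4 as a lower bound.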
4 Initializing SWE-Agents from Agentless Training
End-to-end multi-turn frameworks, such as SWE-Agent (yang2024swe; anthropic_claude_3.5_sonnet_20241022) and OpenHands (wang2024openhands), enable agents to leverage tools and interact with environments. Specifically, the system prompt employed in the SWE-Agent framework (anthropic_claude_3.5_sonnet_20241022) outlines a five-stage workflow: (i) repo exploration, (ii) error reproduction via a test script, (iii) code edit for bug repair, (iv) test re-execution for validation, and (v) edge-case generation and checks. Unlike Agentless, the SWE-Agent framework doesn't enforce a strict stage-wise workflow; the agent can reflect, transition, and redo freely until it deems the task complete and submits.
The performance potential is therefore higher without a fixed routine; however, training for SWE-Agents is more challenging because of the sparsity of the outcome reward for long-horizon credit assignment. Meanwhile, our Kimi-Dev model has undergone Agentless training, with its localization and code-edit skills for BugFixer and TestWriter deliberately strengthened. In this section, we investigate whether it can serve as an effective prior for multi-turn SWE-Agent scenarios.
Table 2: Single-attempt performance of different models on SWE-bench Verified under end-to-end agentic frameworks, categorized by proprietary or open-weight models, and by size over or under 100B (as of 2025.09). "Internal" denotes results achieved with their in-house agentic frameworks.
4.1 Performance after SWE-Agent Fine-tuning
<details>
<summary>figs/sec4_main/v-sweeping-new-FINAL.png Details</summary>

### Visual Description
## Line Chart: Pass Rate vs. SWE-Agent SFT Tokens
### Overview
The image is a line chart comparing the pass rates of different models (RL, SFT, MT, and Base) at different "Pass" levels (@1, @2, @3) as the number of SWE-Agent SFT tokens increases. The x-axis represents the number of tokens, and the y-axis represents the pass rate in percentage.
### Components/Axes
* **Title:** There is no explicit title on the chart.
* **X-axis:**
* Label: "# SWE-Agent SFT tokens"
* Scale: The x-axis is logarithmic, with values at: 0, 2<sup>21</sup>, 2<sup>23</sup>, 2<sup>24</sup>, 1.1 x 2<sup>25</sup>, 1.1 x 2<sup>26</sup>, 1.1 x 2<sup>27</sup>, 1.5 x 2<sup>28</sup>
* **Y-axis:**
* Label: "Pass Rate (%)"
* Scale: Linear, ranging from 0 to 60, with increments of 10.
* **Legend:** Located on the right side of the chart. It maps colors and shapes to different models and pass levels:
* Red circle: RL Pass@1
* Red square: RL Pass@2
* Red triangle: RL Pass@3
* Orange circle: SFT Pass@1
* Orange square: SFT Pass@2
* Orange triangle: SFT Pass@3
* Purple circle: MT Pass@1
* Purple square: MT Pass@2
* Purple triangle: MT Pass@3
* Blue circle: Base Pass@1
* Blue square: Base Pass@2
* Blue triangle: Base Pass@3
### Detailed Analysis
Here's a breakdown of each data series and their trends:
* **RL Pass@1 (Red Circle):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~8%
* 2<sup>21</sup> tokens: ~23%
* 2<sup>23</sup> tokens: ~34%
* 2<sup>24</sup> tokens: ~34%
* 1.1 x 2<sup>25</sup> tokens: ~46%
* 1.1 x 2<sup>26</sup> tokens: ~51%
* 1.1 x 2<sup>27</sup> tokens: ~58%
* 1.5 x 2<sup>28</sup> tokens: ~62%
* **RL Pass@2 (Red Square):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~9%
* 2<sup>21</sup> tokens: ~23%
* 2<sup>23</sup> tokens: ~43%
* 2<sup>24</sup> tokens: ~48%
* 1.1 x 2<sup>25</sup> tokens: ~46%
* 1.1 x 2<sup>26</sup> tokens: ~57%
* 1.1 x 2<sup>27</sup> tokens: ~58%
* 1.5 x 2<sup>28</sup> tokens: ~64%
* **RL Pass@3 (Red Triangle):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~11%
* 2<sup>21</sup> tokens: ~38%
* 2<sup>23</sup> tokens: ~44%
* 2<sup>24</sup> tokens: ~48%
* 1.1 x 2<sup>25</sup> tokens: ~54%
* 1.1 x 2<sup>26</sup> tokens: ~57%
* 1.1 x 2<sup>27</sup> tokens: ~61%
* 1.5 x 2<sup>28</sup> tokens: ~66%
* **SFT Pass@1 (Orange Circle):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~13%
* 2<sup>21</sup> tokens: ~20%
* 2<sup>23</sup> tokens: ~20%
* 2<sup>24</sup> tokens: ~30%
* 1.1 x 2<sup>25</sup> tokens: ~48%
* 1.1 x 2<sup>26</sup> tokens: ~50%
* 1.1 x 2<sup>27</sup> tokens: ~48%
* 1.5 x 2<sup>28</sup> tokens: ~48%
* **SFT Pass@2 (Orange Square):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~8%
* 2<sup>21</sup> tokens: ~15%
* 2<sup>23</sup> tokens: ~31%
* 2<sup>24</sup> tokens: ~31%
* 1.1 x 2<sup>25</sup> tokens: ~51%
* 1.1 x 2<sup>26</sup> tokens: ~51%
* 1.1 x 2<sup>27</sup> tokens: ~58%
* 1.5 x 2<sup>28</sup> tokens: ~58%
* **SFT Pass@3 (Orange Triangle):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~12%
* 2<sup>21</sup> tokens: ~16%
* 2<sup>23</sup> tokens: ~40%
* 2<sup>24</sup> tokens: ~36%
* 1.1 x 2<sup>25</sup> tokens: ~56%
* 1.1 x 2<sup>26</sup> tokens: ~57%
* 1.1 x 2<sup>27</sup> tokens: ~60%
* 1.5 x 2<sup>28</sup> tokens: ~60%
* **MT Pass@1 (Purple Circle):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~0%
* 2<sup>21</sup> tokens: ~1%
* 2<sup>23</sup> tokens: ~6%
* 2<sup>24</sup> tokens: ~29%
* 1.1 x 2<sup>25</sup> tokens: ~45%
* 1.1 x 2<sup>26</sup> tokens: ~45%
* 1.1 x 2<sup>27</sup> tokens: ~46%
* 1.5 x 2<sup>28</sup> tokens: ~59%
* **MT Pass@2 (Purple Square):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~0%
* 2<sup>21</sup> tokens: ~2%
* 2<sup>23</sup> tokens: ~35%
* 2<sup>24</sup> tokens: ~42%
* 1.1 x 2<sup>25</sup> tokens: ~46%
* 1.1 x 2<sup>26</sup> tokens: ~57%
* 1.1 x 2<sup>27</sup> tokens: ~57%
* 1.5 x 2<sup>28</sup> tokens: ~61%
* **MT Pass@3 (Purple Triangle):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~0%
* 2<sup>21</sup> tokens: ~2%
* 2<sup>23</sup> tokens: ~40%
* 2<sup>24</sup> tokens: ~43%
* 1.1 x 2<sup>25</sup> tokens: ~53%
* 1.1 x 2<sup>26</sup> tokens: ~57%
* 1.1 x 2<sup>27</sup> tokens: ~57%
* 1.5 x 2<sup>28</sup> tokens: ~63%
* **Base Pass@1 (Blue Circle):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~0%
* 2<sup>21</sup> tokens: ~0%
* 2<sup>23</sup> tokens: ~13%
* 2<sup>24</sup> tokens: ~12%
* 1.1 x 2<sup>25</sup> tokens: ~12%
* 1.1 x 2<sup>26</sup> tokens: ~45%
* 1.1 x 2<sup>27</sup> tokens: ~48%
* 1.5 x 2<sup>28</sup> tokens: ~53%
* **Base Pass@2 (Blue Square):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~0%
* 2<sup>21</sup> tokens: ~2%
* 2<sup>23</sup> tokens: ~2%
* 2<sup>24</sup> tokens: ~22%
* 1.1 x 2<sup>25</sup> tokens: ~22%
* 1.1 x 2<sup>26</sup> tokens: ~22%
* 1.1 x 2<sup>27</sup> tokens: ~36%
* 1.5 x 2<sup>28</sup> tokens: ~57%
* **Base Pass@3 (Blue Triangle):** The pass rate generally increases with the number of tokens.
* 0 tokens: ~0%
* 2<sup>21</sup> tokens: ~3%
* 2<sup>23</sup> tokens: ~3%
* 2<sup>24</sup> tokens: ~27%
* 1.1 x 2<sup>25</sup> tokens: ~27%
* 1.1 x 2<sup>26</sup> tokens: ~27%
* 1.1 x 2<sup>27</sup> tokens: ~45%
* 1.5 x 2<sup>28</sup> tokens: ~58%
### Key Observations
* The RL models generally have the highest pass rates across all token counts.
* The Base models generally have the lowest pass rates across all token counts, especially at lower token counts.
* The pass rates for all models tend to increase as the number of tokens increases, but the rate of increase varies.
* There are plateaus in some of the lines, where increasing the number of tokens does not immediately result in a higher pass rate.
* The MT models start with very low pass rates at 0 tokens, but their performance improves significantly as the token count increases.
### Interpretation
The chart demonstrates the impact of the number of SWE-Agent SFT tokens on the pass rates of different models (RL, SFT, MT, and Base) at different pass levels. The RL models appear to be the most effective, achieving the highest pass rates overall. The Base models, on the other hand, struggle at lower token counts but show significant improvement as the token count increases. The MT models exhibit a similar trend, starting with very low pass rates but catching up as the token count grows. The SFT models show a more moderate improvement with increasing token counts.
The plateaus in some of the lines suggest that there may be a point of diminishing returns for increasing the number of tokens. It's possible that other factors, such as model architecture or training data, become more important beyond a certain token count.
The data suggests that increasing the number of SWE-Agent SFT tokens can improve the performance of these models, but the extent of the improvement varies depending on the model and pass level.
</details>
Figure 5: Comparing the quality of the raw Base model, the Agentless mid-trained model (MT), the Agentless mid-trained model with reasoning-intensive cold-start (SFT), and the Kimi-Dev model after RL (RL) as priors for SWE-Agent adaptation. The token counts of the SWE-Agent SFT trajectories are swept over different scales, and SWE-Agent performance is reported up to pass@3 on SWE-bench Verified.
We use publicly available SWE-Agent trajectories to finetune Kimi-Dev. The finetuning dataset, released by SWE-smith (yang2025swe), consists of 5,016 SWE-Agent trajectories collected with Claude 3.7 Sonnet (Anthropic-Claude3.7Sonnet-2025) in synthetic environments. We perform supervised fine-tuning over Kimi-Dev, setting the maximum context length to 64K tokens during training, and allowing up to 128K tokens and 100 turns during inference.
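The fine-tuning setup can be summarized as a configuration sketch; only the numeric limits and the data source come from the text above, while the field names are our own.

```python
# Illustrative configuration for the SWE-Agent SFT stage; field names
# are hypothetical, the values follow the setup described in the text.
SWE_AGENT_SFT_CONFIG = {
    "sft_data": "SWE-smith trajectories",  # collected with Claude 3.7 Sonnet
    "num_trajectories": 5016,
    "train_max_context": 64 * 1024,        # 64K tokens during training
    "infer_max_context": 128 * 1024,       # 128K tokens during inference
    "infer_max_turns": 100,                # turn limit during inference
}
```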
As shown in Table 2, without collecting additional trajectory data in realistic environments or conducting extra multi-turn agentic RL, our finetuned model achieves a pass@1 score of 48.6% on SWE-bench Verified under the agentic framework setup, without additional test-time scaling. Using the same SFT data, our finetuned Kimi-Dev model outperforms SWE-agent-LM (yang2025swesmith), with performance comparable to that of Claude 3.5 Sonnet (49% for the 241022 version). The pass@10 of our SWE-Agent adapted model is 74.0%, surpassing the pass@30 of our model under Agentless (73.8%) and demonstrating the higher potential of the SWE-Agent framework.
4.2 Skill Transfer and Generalization
The results shown in Section 4.1 demonstrate that Kimi-Dev, a model with extensive Agentless training, can be adapted to end-to-end SWE-Agents with lightweight supervised finetuning. As the Agentless training recipe consists of mid-training, cold-start (SFT), and RL, we explore the contribution of each part of the recipe to the SWE-Agent capability after adaptation.
To figure this out, we perform SWE-Agent SFT on the original Qwen2.5-72B (Base), the mid-trained model (MT), the model further activated with Agentless-formatted long CoT data (SFT), and the (Kimi-Dev) model after RL training (RL). We treat the four models as priors for SWE-Agents (we slightly abuse the term "prior" to refer to a model to be finetuned with SWE-Agent trajectories in the following analysis). Since a good prior demonstrates fast adaptation from a few shots (finn2017model; brown2020language), we also sweep the amount of SWE-Agent SFT data to measure the adaptation efficiency of each prior.
Specifically, we randomly shuffle the 5,016 SWE-Agent trajectories and construct nested subsets of sizes 100, 200, 500, 1,000, and 2,000, where each smaller subset is contained within the larger ones. In addition, we prepend two extreme baselines: (i) zero-shot, where the prior model is directly evaluated under the SWE-Agent framework without finetuning, and (ii) one-step gradient descent, where the model is updated with a single gradient step using the 100-trajectory subset. This yields a range of SFT token budgets spanning $\{0, 2^{21}, 2^{23}, 2^{24}, 1.1\times 2^{25}, 1.1\times 2^{26}, 1.1\times 2^{27}, 1.5\times 2^{28}\}$. After these lightweight SFT experiments, we evaluate performance in terms of pass@{1,2,3} under the SWE-Agent framework, with pass@1 evaluated at temperature 0 and pass@2 and pass@3 at temperature 1.0.
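The nested-subset construction can be sketched by shuffling once and taking prefixes, which guarantees the containment property; the seed and trajectory representation here are illustrative.

```python
import random

def nested_subsets(trajectories, sizes, seed=0):
    """Shuffle once, then take prefixes: each smaller subset is then
    automatically contained within every larger one."""
    order = list(trajectories)
    random.Random(seed).shuffle(order)
    return {k: order[:k] for k in sorted(sizes)}
```

Because every subset is a prefix of the same shuffled list, `nested_subsets(range(5016), [100, 200, 500, 1000, 2000])` yields subsets satisfying the 100 ⊂ 200 ⊂ ... ⊂ 2,000 nesting by construction.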
Figure 5 presents the SWE-Agent performance of each prior (Base, MT, SFT, RL) after fine-tuning with different amounts of agentic trajectories. We make the following observations:
1. The RL prior outperforms all other models in nearly all SWE-Agent SFT settings. This demonstrates that the Agentless training recipe indeed strengthens the prior for SWE-Agent adaptation. For example, to match the best pass@1 performance of the Base prior, the RL prior needs only $2^{23}$ SWE-Agent SFT tokens, whereas the Base prior consumes $1.5\times 2^{28}$ tokens.
1. The MT prior lags behind the SFT and RL priors in extremely data-scarce settings (zero-shot ($0$ tokens) and one-step gradient descent ($2^{21}$ tokens)), but quickly becomes on par with them once 200 trajectories ($2^{24}$ tokens) are available for finetuning. This indicates that adaptation efficiency remains comparable after the prior is strengthened through Agentless mid-training.
1. The performance of the SFT prior is mostly similar to that of the RL prior, except in two cases: (i) the SFT prior outperforms the RL prior under the zero-shot setting. This is reasonable, as the RL prior might overfit to the Agentless input-output format, while the SFT prior suffers less from this. (ii) The SFT prior exhibits a significant degradation with 200 SWE-Agent trajectories ($2^{24}$ tokens). A potential reason is that the 200 trajectories collapse onto a single data mode, leading the SFT prior to overfit through memorization (chu2025sft); the RL prior instead embeds stronger transferable skills and thus generalizes better.
<details>
<summary>figs/sec4_long_cot_to_multi_turn/hist_steps_6x4.png Details</summary>

### Visual Description
## Chart: Number of Instances Resolved per Bin of Turns
### Overview
The image is a step chart comparing the number of instances resolved across different numbers of turns for four different priors: RL (after reinforcement learning), SFT (after cold-start supervised fine-tuning), MT (mid-trained), and Base. The x-axis represents the number of turns, grouped into bins of 10, and the y-axis represents the number of instances resolved.
### Components/Axes
* **Title:** "Number of instances resolved (per bin of turns)"
* **X-axis:**
* Label: "#Turns"
* Scale: 0 to 100, with markers at 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100. Each bin represents a range of 10 turns (e.g., 0-10, 10-20, etc.).
* **Y-axis:**
* Label: "#Instances resolved"
* Scale: 0 to 160, with markers at 0, 40, 80, 120, and 160.
* **Legend:** Located in the top-right corner of the chart.
* RL: Solid red line
* SFT: Dash-dot orange line
* MT: Dotted purple line
* Base: Dashed blue line
### Detailed Analysis
**RL (Solid Red Line):**
* Trend: Starts at approximately 40 instances resolved for 0-10 turns, jumps to approximately 155 instances resolved for 10-20 turns, and then decreases gradually to approximately 5 instances resolved for 90-100 turns.
* Data Points:
* 0-10 turns: ~40
* 10-20 turns: ~155
* 20-30 turns: ~70
* 30-40 turns: ~30
* 40-50 turns: ~20
* 50-60 turns: ~10
* 60-70 turns: ~10
* 70-80 turns: ~5
* 80-90 turns: ~5
* 90-100 turns: ~5
**SFT (Dash-Dot Orange Line):**
* Trend: Starts at approximately 40 instances resolved for 0-10 turns, jumps to approximately 140 instances resolved for 10-20 turns, and then decreases gradually to approximately 5 instances resolved for 90-100 turns.
* Data Points:
* 0-10 turns: ~40
* 10-20 turns: ~140
* 20-30 turns: ~70
* 30-40 turns: ~30
* 40-50 turns: ~15
* 50-60 turns: ~10
* 60-70 turns: ~5
* 70-80 turns: ~5
* 80-90 turns: ~5
* 90-100 turns: ~5
**MT (Dotted Purple Line):**
* Trend: Starts at approximately 60 instances resolved for 0-10 turns, jumps to approximately 140 instances resolved for 10-20 turns, and then decreases gradually to approximately 5 instances resolved for 90-100 turns.
* Data Points:
* 0-10 turns: ~60
* 10-20 turns: ~140
* 20-30 turns: ~55
* 30-40 turns: ~20
* 40-50 turns: ~15
* 50-60 turns: ~10
* 60-70 turns: ~5
* 70-80 turns: ~5
* 80-90 turns: ~5
* 90-100 turns: ~5
**Base (Dashed Blue Line):**
* Trend: Starts at approximately 30 instances resolved for 0-10 turns, jumps to approximately 140 instances resolved for 10-20 turns, and then decreases gradually to approximately 5 instances resolved for 90-100 turns.
* Data Points:
* 0-10 turns: ~30
* 10-20 turns: ~140
* 20-30 turns: ~60
* 30-40 turns: ~15
* 40-50 turns: ~10
* 50-60 turns: ~5
* 60-70 turns: ~5
* 70-80 turns: ~5
* 80-90 turns: ~5
* 90-100 turns: ~5
### Key Observations
* All four methods show a similar trend: a high number of instances resolved within the first 20 turns, followed by a gradual decrease as the number of turns increases.
* RL resolves the most instances in the 10-20 turn bin.
* MT resolves the most instances in the 0-10 turn bin.
* The number of instances resolved is very low for all methods after 60 turns.
### Interpretation
The chart suggests that all four methods are most effective at resolving instances within a relatively small number of turns (0-20). As the number of turns increases, the effectiveness of all methods decreases significantly. The RL method appears to be slightly more effective than the other methods in the 10-20 turn range, while MT is more effective in the 0-10 turn range. The similarity in the trends suggests that the underlying problem being addressed may have inherent limitations that make it difficult to resolve instances with a large number of turns.
</details>
<details>
<summary>figs/skill_analysis_figure.png Details</summary>

### Visual Description
## Bar Chart: Number of Resolved Cases by Model
### Overview
The image is a bar chart comparing the number of resolved cases for different models: Base, MT, SFT, and RL. Each model has two stacked bars representing "Bugfixer cutoff" and "Reflection". The y-axis represents the "Number of Resolved Cases," and the x-axis represents the "Models."
### Components/Axes
* **Y-axis:** "Number of Resolved Cases," ranging from 0 to 800, with gridlines at intervals of 100.
* **X-axis:** "Models," with four categories: Base, MT, SFT, and RL.
* **Legend (Top-Left):**
* Blue: "Bugfixer cutoff"
* Blue with diagonal lines: "Reflection"
### Detailed Analysis
The chart presents the number of resolved cases for each model, split into "Bugfixer cutoff" and "Reflection" components.
* **Base:**
* Bugfixer cutoff: 484
* Reflection: 94
* Total: 578 (+94)
* **MT:**
* Bugfixer cutoff: 542
* Reflection: 100
* Total: 642 (+100)
* **SFT:**
* Bugfixer cutoff: 584
* Reflection: 109
* Total: 693 (+109)
* **RL:**
* Bugfixer cutoff: 605
* Reflection: 113
* Total: 718 (+113)
### Key Observations
* The "Bugfixer cutoff" component consistently forms the larger portion of the resolved cases for each model.
* The "Reflection" component is smaller but shows a slight increase from Base to RL.
* The total number of resolved cases increases from Base to RL.
### Interpretation
The chart demonstrates the effectiveness of different models in resolving cases, broken down by "Bugfixer cutoff" and "Reflection" components. The RL model resolves the highest number of cases, followed by SFT, MT, and Base. The "Reflection" component contributes a smaller but noticeable portion to the total resolved cases, and its contribution increases slightly across the models. This suggests that the RL model is the most effective in resolving cases overall, and the "Reflection" component plays a role in improving the performance of each model.
</details>
Figure 6: Left: Performance of the four priors under turn limits after SWE-Agent adaptation. Right: Characterization of the BugFixer and reflection skills for each prior, obtained by counting the resolved cases of the 3 runs at the Stage-3 cutoff moment and comparing them with the final success cases.
From long CoT to extended multi-turn interactions.
We hypothesize that reflective behaviors cultivated through long chain-of-thought reasoning may transfer to settings requiring extended multi-turn interactions. To examine this, we finetune the four priors (Base, MT, SFT, and RL) on the 5,016 trajectories and test them on SWE-bench Verified under varying turn limits, with pass@3 as the metric (Figure 6, left). The distinct interaction-length profiles provide supporting evidence: the RL prior, after finetuning, continues to make progress beyond 70 turns, while the SFT, mid-trained, and raw models show diminishing returns around 70, 60, and 50 turns, respectively.
We further evaluate the efficacy of the Agentless skill priors (BugFixer and reflection) in the SWE-Agent adapted model. For BugFixer, given that the SWE-Agent may autonomously reflect between the five stages, we examine the moment in each trajectory when the bug fix of the third stage is first completed but the test rerun of the fourth stage has not yet been entered. Heuristically, when the SWE-Agent has just completed the third stage, it has not yet obtained execution feedback from the fourth stage, and thus has not further reflected on the execution information or refined the bug fix. We therefore calculate the success rate of direct submission at this cutoff moment, which reflects the capability of the BugFixer skill. For reflection, we compare the performance at the cutoff point with the performance after full completion for each problem; the increment in the number of successful problems reflects the capability of the reflection skill.
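Given per-turn stage annotations, the two skill measurements described above reduce to simple counting; the trajectory fields below are our own names for the two evaluation outcomes.

```python
def skill_counts(trajectories):
    """Split each trajectory's outcome into a BugFixer component (direct
    submission at the Stage-3 cutoff already resolves the issue) and a
    reflection component (resolved only after the later test-rerun and
    reflection turns).

    Each trajectory is a dict with two booleans:
      stage3_cutoff_resolved: patch at the Stage-3 cutoff passes the tests
      final_resolved:         the finally submitted patch passes the tests
    """
    bugfixer = sum(t["stage3_cutoff_resolved"] for t in trajectories)
    reflection = sum(t["final_resolved"] and not t["stage3_cutoff_resolved"]
                     for t in trajectories)
    return bugfixer, reflection
```

Under this accounting, Figure 6 (right) corresponds to reporting `bugfixer` as the Stage-3 cutoff performance and `reflection` as the increment from cutoff to completion.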
We use kimi-k2-0711-preview (team2025kimi_k2) to annotate the SWE-Agent trajectories, identifying the stage to which each turn belongs. Figure 6 (right) demonstrates that both skills are strengthened through each stage of the Agentless training recipe: For the BugFixer skill, the cutoff performance at Stage-3 within the SWE-Agent interaction trajectories of the four adapted models shows consistent improvement, ranging from 484 cases resolved by the Base prior to 605 cases by the RL prior, as measured by the number of successful resolutions within three passes. For the reflection skill, examining the performance gains from Stage-3 to the end of the trajectories reveals a similar trend, with improvements increasing from +94 under the Base prior to +113 under the RL prior. Taken together, the adapted model from the RL prior achieves the strongest overall performance across both skills. It should be noted that our analysis of the reflection skill remains coarse-grained, since the measured performance gains between the two checkpoints capture not only agentic reflection and redo behaviors, but also the intermediate test-writing process performed by the SWE-Agent. A more fine-grained evaluation that isolates the TestWriter skill prior is left for future work. The prompt for SWE-Agent stage annotation, extended qualitative studies, as well as additional discussions for skill transfer and generalization, are covered in Appendix G.
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass1.png Details</summary>

### Visual Description
## Line Chart: Pass Rate vs. Steps
### Overview
The image is a line chart comparing the pass rate (%) of two different methods, SFT prior and RL prior, over a range of steps (0 to 300). Each method has two lines representing the average pass rate (Avg@5) and the range between the minimum and maximum pass rates (Min@5-Max@5).
### Components/Axes
* **X-axis:** Steps, ranging from 0 to 300 in increments of 50.
* **Y-axis:** Pass Rate (%), ranging from 25.0 to 42.5 in increments of 2.5.
* **Legend (bottom-right):**
* Red line with dots: SFT prior, Avg@5
* Light red shaded area: SFT prior, Min@5-Max@5
* Blue line with dots: RL prior, Avg@5
* Light blue shaded area: RL prior, Min@5-Max@5
### Detailed Analysis
* **SFT prior, Avg@5 (Red Line):**
* Trend: Generally increases from approximately 26% at step 0 to around 37% at step 300, with fluctuations.
* Data Points:
* Step 0: ~26%
* Step 50: ~34%
* Step 100: ~36%
* Step 150: ~35%
* Step 200: ~37%
* Step 250: ~37%
* Step 300: ~37%
* **SFT prior, Min@5-Max@5 (Light Red Shaded Area):**
* Represents the range of pass rates for the SFT prior method. The shaded area widens and narrows, indicating the variability in pass rates at different steps.
* **RL prior, Avg@5 (Blue Line):**
* Trend: Generally increases from approximately 31% at step 0 to around 40% at step 300, with fluctuations.
* Data Points:
* Step 0: ~31%
* Step 50: ~37%
* Step 100: ~37%
* Step 150: ~39%
* Step 200: ~40%
* Step 250: ~40%
* Step 300: ~39%
* **RL prior, Min@5-Max@5 (Light Blue Shaded Area):**
* Represents the range of pass rates for the RL prior method. The shaded area widens and narrows, indicating the variability in pass rates at different steps.
### Key Observations
* The RL prior method generally outperforms the SFT prior method in terms of average pass rate.
* Both methods show an initial increase in pass rate, followed by fluctuations.
* The variability in pass rates (as indicated by the shaded areas) appears to be higher in the initial steps for both methods.
### Interpretation
The chart suggests that the RL prior method is more effective than the SFT prior method in achieving higher pass rates. The fluctuations in pass rates indicate that the performance of both methods is not consistent across all steps. The wider shaded areas in the initial steps suggest that the methods are less stable during the early stages of training. The "Avg@5" and "Min@5-Max@5" likely refer to the average pass rate and the min-to-max range over 5 evaluation runs at each step, respectively, which helps visualize the overall trend and run-to-run variability.
</details>
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass3.png Details</summary>

Line chart of Pass@3 pass rate (%) versus RL training steps (0-300) for the SFT prior (red) and the RL prior (blue). The SFT prior rises from roughly 42% to about 49% by step 100 and then fluctuates between 47% and 52%; the RL prior rises from roughly 45% to about 54% by step 150 and then fluctuates between 51% and 56%, staying above the SFT prior for most of training.
</details>
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass5.png Details</summary>

Line chart of Pass@5 pass rate (%) versus RL training steps (0-300) for the SFT prior (red) and the RL prior (blue). The SFT prior rises from roughly 47.5% to about 58% at step 300 with fluctuations; the RL prior rises from roughly 50.5% to about 60% and remains consistently above the SFT prior.
</details>
Figure 7: Comparison between the SFT Prior and the RL Prior when further applied with end-to-end SWE-Agent RL. Left: Pass@1 averaged from 5 runs. Middle: Pass@3. Right: Pass@5. The two priors are activated with the same $2^{21}$ SWE-Agent SFT tokens (the second column in Figure 5). After end-to-end RL, the RL prior slightly outperforms the SFT prior in all the Pass@1, Pass@3, and Pass@5 settings, which agrees with their SWE-Agent SFT performance comparison in Figure 5.
End-to-end SWE-Agent RL for prior comparison. To further validate the effectiveness of the priors instilled by the Agentless training recipes, we employ end-to-end SWE-Agent RL (deepswe2025) with the cold-started priors as the initial models. To maximally alleviate the effect of the patterns of proprietary models within the SWE-Smith trajectories, we use the setting with $2^{21}$ SWE-Agent SFT tokens (the second column in Figure 5), where a single gradient descent step is taken on top of each prior. Under this minimal cold-start setup, end-to-end RL reveals the potential of each prior beyond taking the shortcut of imitation (gudibande2024the; chu2025sft).
To run the end-to-end RL training for prior comparison, we use the SWE-Gym (pan2024training) and the SWE-bench-extra (badertdinov2024scaling) subsets as the training set. As in the Agentless RL recipe, we first use each initial model to filter out the problems with Pass@8 = 0. For the model with the MT prior, 260 out of 6,202 problems remain; for the models with the SFT prior and the RL prior, a total of 2,062 of the 6,202 problems are kept. In all end-to-end RL runs, we use the outcome reward only, optimizing with the same policy gradient algorithm as in Sec. 3.3 without KL or entropy regularization, with a batch size of 256. The results are as follows:
For the model with the MT prior, the pass@1 performance quickly deteriorates to less than 2% after 10 end-to-end RL steps. A potential reason is the lack of available problems to train on, reflecting the inferiority of this prior. For the models with the SFT prior and the RL prior, the RL runs last for 300 steps, and we plot the performance comparison in Figure 7. According to Figure 7, the model with the RL prior demonstrates slightly higher Pass@1, Pass@3, and Pass@5 scores than the model with the SFT prior. While this agrees with the performance comparison under SWE-Agent SFT shown in Figure 5, we observe that the interaction-trajectory patterns incentivized by end-to-end SWE-Agent RL differ significantly from the patterns of the proprietary models (detailed in Appendix G.3). These results show that the Agentless training recipe curates strong priors for end-to-end learning under SWE-Agent frameworks with minimal supervision from proprietary end-to-end trajectories. We leave the exploration of more advanced agentic RL techniques for further improvement as future work.
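The optimization used in these runs (outcome reward only, no KL or entropy regularization) can be sketched as a REINFORCE-style update. This is a minimal illustration, not the exact algorithm of Sec. 3.3; in particular, the batch-mean baseline below is an assumption:

```python
def outcome_reward_pg_loss(logprobs, rewards):
    """REINFORCE-style loss from binary outcome rewards only (no KL, no entropy).

    logprobs: per-trajectory summed token log-probabilities under the policy.
    rewards:  1.0 if the generated patch passes the unit tests, else 0.0.
    A batch-mean baseline centers the advantages (an illustrative choice).
    """
    baseline = sum(rewards) / len(rewards)
    advantages = [r - baseline for r in rewards]
    # Maximize E[A * log pi(trajectory)] -> minimize the negative.
    return -sum(a * lp for a, lp in zip(advantages, logprobs)) / len(logprobs)

# Toy batch: 4 sampled trajectories, 2 of which pass the tests.
loss = outcome_reward_pg_loss([-10.0, -12.0, -8.0, -11.0], [1.0, 0.0, 1.0, 0.0])
```

Passing trajectories with high log-probability lower the loss; failing ones raise it, pushing probability mass toward test-passing patches.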
5 Conclusion and Future Work
In this work, we reframed the Agentless and agentic paradigms for automated software engineering as complementary rather than competing. By introducing Kimi-Dev, we demonstrated that structured Agentless training can induce transferable skill priors, including bug localization, code repair, and self-reflection. As a result, Kimi-Dev not only achieves SoTA results on SWE-bench Verified among workflow-based approaches, but also enables efficient SWE-Agent adaptation. These findings establish a novel path toward building more generalizable coding agents through staged training.
Acknowledgements
We thank Yuzhi Wang, Xinyu Zhou, Guokun Lai, Yulun Du, Fang Li, Hao Ding, Dehao Zhang, Enming Yuan, Dikang Du, and Jiacheng You for their valuable suggestions. We also appreciate the members of the infrastructure team at Moonshot AI for their timely support during the project.
Ethics and Reproducibility Statements
This work adheres to the Code of Ethics required by the ICLR conference. The study does not involve human subjects or animal experimentation. Personally identifiable information is excluded from the raw data for privacy considerations (see the mid-training data recipe detailed in Appendix A). Beyond the scope of this work, we strongly advocate for the community to advance systematic research on agent safety, thereby ensuring responsible progress in this area.
For all of the experiments, we cover the detailed setups and discussions in the appendices: mid-training for Agentless in Appendix A, details of the Docker environments in Appendix B, Agentless RL in Appendix C, agent infrastructure in Appendix D, case studies under Agentless in Appendix E, preliminary findings about emergent test-time parallel scaling in Appendix F, and extended analysis for SWE-Agents in Appendix G.
Appendix
Appendix A Details of Mid-training
We curate a mid-training data recipe with a focus on enhancing SWE capabilities. Central to this effort is the collection of pull request (PR) data from GitHub, which provides extensive coverage of real-world bug fixes, feature requests, and code enhancements. To ensure data quality, we apply two filters: (i) we only retain repositories that have accumulated at least five GitHub stars, thereby excluding sparsely maintained projects with limited community engagement; and (ii) we remove any repositories overlapping with the SWE-bench benchmark (jimenez2023swe) to prevent potential data leakage. For each candidate repository, we query the GitHub API for all PRs with the state MERGED, while discarding those abandoned, superseded, or left under review. To preserve more context information, we also snapshot the entire codebase at the base commit before the first code change in the PR.
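The repository- and PR-level filters above can be sketched as simple predicates over already-fetched metadata. The dictionary fields mirror GitHub API responses, and the `SWE_BENCH_REPOS` excerpt is illustrative (the full exclusion list is not reproduced here):

```python
# Excerpt only; the real exclusion list covers all SWE-bench repositories.
SWE_BENCH_REPOS = {"django/django", "sympy/sympy"}

def keep_repo(repo):
    """Repository-level filters: at least 5 stars and no benchmark overlap."""
    return repo["stargazers_count"] >= 5 and repo["full_name"] not in SWE_BENCH_REPOS

def keep_pr(pr):
    """PR-level filter: retain only merged PRs; abandoned, superseded,
    or in-review PRs are dropped."""
    return pr["state"] == "MERGED"

repos = [
    {"full_name": "psf/requests", "stargazers_count": 50000},
    {"full_name": "django/django", "stargazers_count": 70000},  # benchmark overlap
    {"full_name": "tiny/tool", "stargazers_count": 2},          # too few stars
]
kept = [r["full_name"] for r in repos if keep_repo(r)]
```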
After data crawling, we incorporate two complementary forms of natural code change data: (i) natural diff patches and (ii) PR commit packs. A natural diff patch consolidates all commits in a PR into the final code difference, typically expressed as SEARCH/REPLACE blocks. This format aligns with the Agentless paradigm, in which the model must directly output the final patch. In contrast, a commit pack captures the sequence of human-authored commits within a PR, where each commit message (textual reasoning) is paired with the corresponding code modification (action). This structure closely parallels the SWE-Agent setting, where intermediate reasoning steps are interleaved with actions. However, the distinction between the utilities of the two data types is not absolute: commit messages in a PR commit pack can still inform the model's knowledge and indirectly strengthen its reasoning ability in the Agentless setting.
Natural diff patches. The natural diff patches used in the mid-training data recipe are processed with the following rules:
- Incorporate the Agentless prompt templates (see Prompts 1, 2, 3, 4; these four templates are also used in the later stages, including cold-start, RL, and test-time self-play), and apply a loss mask to the prompt part. For the localization prompt, the response is set as the files modified in the ground-truth diff patch.
- If a related issue to the PR exists, use its content; otherwise, use the PR title as a surrogate for the issue content.
- If a related issue to the PR exists, prepend the issue discussion to the beginning of the code edit response. This aims to strengthen the model's code edit reasoning by leveraging the discussion content.
- Discard PRs that include modifications to files other than {.py, .md, .rst}.
- For PRs containing {.md, .rst} file modifications, retain only the Python diffs and rewrite them into SEARCH/REPLACE blocks.
- Remove PRs involving file additions or deletions.
- For code edits with only line insertions or deletions, preserve the original Git diff hunks as the SEARCH content in the SEARCH/REPLACE blocks.
- Ensure that no more than three Python files are modified per PR.
- Apply a filtering script to exclude PRs with non-{.py, .md, .rst} modifications, or PRs modifying more than three Python files.
- Further exclude PRs containing more than five SEARCH/REPLACE blocks.
A total of $\sim$50B tokens of natural diff patches are obtained after applying these filtering rules.
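The mechanical parts of these filters can be sketched as below (the addition/deletion and hunk-preservation rules are omitted for brevity); the helper name is hypothetical:

```python
import re

ALLOWED_EXTS = (".py", ".md", ".rst")

def keep_natural_diff_pr(changed_files, patch_text, max_py_files=3, max_blocks=5):
    """Sketch of the filtering rules for natural diff patches."""
    # Only {.py, .md, .rst} modifications are allowed.
    if any(not f.endswith(ALLOWED_EXTS) for f in changed_files):
        return False
    # At most three modified Python files per PR.
    if sum(f.endswith(".py") for f in changed_files) > max_py_files:
        return False
    # At most five SEARCH/REPLACE blocks in the rewritten patch.
    if len(re.findall(r"^<<<<<<< SEARCH$", patch_text, re.M)) > max_blocks:
        return False
    return True
```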
Please look through the following GitHub problem description and Repository structure and provide a list of files that one would need to edit to fix the problem.
### GitHub Problem Description ###
{related issue / PR title content}
###
### Repository Structure ###
{file structure induced by the repo snapshot}
###
Please only provide the full path and return at most 5 files.
The returned files should be separated by new lines ordered by most to least important and wrapped with ```
For example:
```
file1.py
file2.py
```
Listing 1: Agentless prompt template: Localization for BugFixer.
Please look through the following GitHub problem description and Repository structure and provide a list of test files that should be run after applying the patch to fix the issue.
### GitHub Problem Description ###
{related issue / PR title content}
###
### Repository Structure ###
{file structure induced by the repo snapshot}
###
Please only provide the full path and return at most 5 files.
The returned files should be separated by new lines ordered by most to least important and wrapped with ```
For example:
```
file1.py
file2.py
```
Listing 2: Agentless prompt template: Localization for TestWriter.
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{related issue / PR title content}
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILE ---
```
### {filename1}
{content of filename1}
### {filename2}
{content of filename2}
{...}
```
--- END FILE ---
Please first localize the bug based on the issue statement, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
Listing 3: Agentless prompt template: Code edit for BugFixer.
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{related issue / PR title content}
--- END ISSUE ---
Below are some code segments, each from a relevant test file. One or more of these files may have new tests added which can reproduce the issue.
--- BEGIN FILE ---
```
### {filename1}
{content of filename1}
### {filename2}
{content of filename2}
{...}
```
--- END FILE ---
Please first localize some possible locations in those test files within the repo, and then generate *SEARCH/REPLACE* edit updates to the **test** files in the repo, so that the erroneous scenario described in the problem is reproduced.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
def test__rules__std_L060_raised() -> None:
    try:
        sql = "SELECT IFNULL(NULL, 100), NVL(NULL, 100);"
        result = lint(sql, rules=["L060"])
        assert len(result) == 2
    except:
        print("Other issues")
        return
    try:
        assert result[0]["description"] == "Use 'COALESCE' instead of 'IFNULL'."
        assert result[1]["description"] == "Use 'COALESCE' instead of 'NVL'."
        print("Issue resolved")
    except AssertionError:
        print("Issue reproduced")
        return
    return
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
Listing 4: Agentless prompt template: Code edit for TestWriter.
PR commit packs. The PR commit packs used in the mid-training data recipe are processed with the following rules:
- Discard PRs that include modifications to files other than {.py, .md, .rst}.
- For {.md, .rst} file modifications, retain the `diff --git` signature but remove the actual content changes.
- Ensure that each PR modifies at most five Python files (with at least one required). PRs exceeding this limit are discarded.
- Apply a filtering script to exclude PRs containing non-{.py, .md, .rst} file modifications or those modifying more than five Python files.
- Filter out all of the developer signatures and GitHub IDs for ethics considerations.
A total of $\sim$20B tokens of PR commit packs are obtained after applying these filtering rules.
In addition, we incorporate synthetic data to further enhance both the reasoning and agentic capabilities of the model. A key observation is that the ground-truth reward for the localization stage in the Agentless setting can be derived directly from the diff patch, since the set of files requiring modification is explicitly indicated.
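A sketch of how this localization reward can be derived: the gold file set is read off the `diff --git` headers of the ground-truth patch and compared with the model's predicted list under an exact-match criterion. Helper names are illustrative:

```python
def gold_files(diff_patch):
    """Extract the set of modified files from a unified diff patch."""
    files = set()
    for line in diff_patch.splitlines():
        if line.startswith("diff --git a/"):
            # Header form: "diff --git a/<path> b/<path>"
            files.add(line.split(" b/")[-1])
    return files

def localization_reward(predicted_files, diff_patch):
    """1.0 iff the predicted file list exactly matches the gold set."""
    return 1.0 if set(predicted_files) == gold_files(diff_patch) else 0.0
```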
Synthetic reasoning data. To improve reasoning quality, we perform a lightweight SFT of the Qwen-2.5-72B-Instruct model on 2,000 R1 trajectories. The resulting model is then used to generate large-scale rollouts for the localization stage of both BugFixer and TestWriter. We retain only the rollouts that achieve exactly correct file localizations. This procedure yields $\sim$10B tokens of reasoning-intensive data dedicated to Agentless localization in the mid-training recipe.
Synthetic agentic interactions. To strengthen agentic capabilities, we simulate agent-environment interactions with a custom tool set designed to mimic file-system operations without execution. This design is motivated by practical constraints: while repository snapshots from GitHub are available, not all snapshots are equipped with an executable Docker environment. As a result, shell commands are disabled. Instead, we introduce synthetic tools that allow the agent to view file contents and perform keyword-based search for localization, which effectively reproduces the first stage of Agentless but in an agentic manner. The specification of this tool set is covered in the system prompt, which is then used to elicit agentic interaction rollouts from the Qwen-2.5-72B-Instruct model. The complete system prompt is provided in Prompt 5. We apply a loss mask only to the system prompt, and enable the model to simultaneously learn both actions and observations along the trajectory, inspired by yang2024react. This approach integrates both policy and world modeling into mid-training.
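The masking scheme in the last sentence can be sketched at the token level: only system prompt tokens are excluded from the loss, while both action and observation tokens along the trajectory are trained.

```python
def build_loss_mask(system_tokens, trajectory_tokens):
    """Loss mask over a tokenized rollout: the system prompt is masked out (0),
    while every trajectory token (actions AND observations) is trained (1)."""
    return [0] * len(system_tokens) + [1] * len(trajectory_tokens)

mask = build_loss_mask(system_tokens=list(range(4)), trajectory_tokens=list(range(6)))
```

Training on observations as well as actions is what folds world modeling into the next-token-prediction objective.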
Your job is to look through the given GitHub problem description and Repository structure, and edit updates to the files in the repo to resolve the problem.
The job is divided into two stages:
+ In Stage 1, you should localize the files that you would need to edit to fix the problem.
+ In Stage 2, you should edit the updates to the repo.
Let's begin from Stage 1 to localize the bugs:
In Stage 1, besides reading the provided Repository structure, you can use the following skills for exploration. The skills are to be called in an environment wrapped by <execute> and </execute>, listed in the form of python functions as below:
open_file(path: str, is_all: bool | None = False, line_number: int | None = 1, context_lines: int | None = 100) -> None:
    Opens the file at the given path in the editor for exploration.
    By default, only the first 100 lines of the file are displayed. To open the entire file, set 'is_all' to 'True'.
    The 'context_lines' parameter determines the maximum number of lines to be displayed, with a cap of 100 lines. Use 'scroll_up' and 'scroll_down' to view more content up or down.
    If a 'line_number' is provided, the window will be moved to include that line.
    Note: When 'is_all' is set to 'True', the 'line_number' and 'context_lines' parameters will not take effect, as the entire file will be opened and displayed without any line-specific focus or context limitation.
    Args:
        path: str: The path to the file to open. The full path of the filename should be provided.
        is_all: bool | None = False: If set to 'True', the entire file will be opened. Defaults to 'False'.
        line_number: int | None = 1: The line number to move to. Defaults to 1.
        context_lines: int | None = 100: Only shows this number of lines in the context window (usually from line 1), with line_number as the center (if possible). Defaults to 100.
goto_line(line_number: int) -> None:
    Moves the window to show the specified line number.
    Args:
        line_number: int: The line number to move to.
goto_class_or_func(class_or_func_name: str) -> None:
    Moves the window to show the specified class or function in the current open file.
    Args:
        class_or_func_name: str: The name of the given class, function, or method in a class to move to.
scroll_down() -> None:
    Moves the window down by 100 lines.
    Args:
        None
scroll_up() -> None:
    Moves the window up by 100 lines.
    Args:
        None
search_dir(search_term: str, dir_path: str | None) -> None:
    Searches for search_term in all files in dir. If dir is not provided, searches in the entire repository. Filename, fine-grained line number, and the relative class or function it is located in (if applicable) will be shown for each found position.
    Args:
        search_term: str: The term to search for.
        dir_path: str: The path to the directory to search. Should be a full path.
search_file(search_term: str, file_path: str | None = None) -> None:
    Searches for search_term in file. If file is not provided, searches in the current open file. Filename, fine-grained line number, and the relative class or function it is located in (if applicable) will be shown for each found position.
    Args:
        search_term: str: The term to search for.
        file_path: str | None: The path to the file to search. Should be a full path filename if provided.
find_file(file_name: str, dir_path: str | None) -> None:
    Finds all files with the given name in the specified directory. If dir is not provided, find in the entire repository.
    Args:
        file_name: str: The name of the file to find.
        dir_path: str: The path to the directory to search.
str_replace(path: str, old_str, new_str)
    old_str=[the old content to be replaced]
    new_str=[the new content after replacement]
    -> None:
    Replace the old content (old_str) in the file at the given path with the new content (new_str). This is the skill that you will be using to edit the updates.
    Args:
        path: str: The path to the file to be updated. The full path of the filename should be provided.
        old_str: str: The old content to be replaced. Note that this argument should be written in a new line starting with "old_str=", and the string content should not be quoted.
        new_str: str: The new content after replacement. Note that this argument should be written in a new line starting with "new_str=", and the string content should not be quoted.
    Example:
        Assuming a call is shown as follows:
        ```
        str_replace("filename.py", old_str, new_str)
        old_str=a
        new_str=b
        c
        ```
        Then it will function as replacing the 'a\n' string with the 'b\nc' string in the 'filename.py' file.
insert(path: str, insert_line: int, new_str)
    new_str=[the new content to be inserted]
    -> None:
    Insert the new content (new_str) in the file at the given path. When you want to add an entirely new class/function to the file, it would be better to use this method.
    Args:
        path: str: The path to the file to be updated. The full path of the filename should be provided.
        insert_line: int: The line number below which the new content is to be added. This line number should be within the range of lines of the file: [0, Lines_of_the_File]. Specifically, when insert_line = 0, the added content starts from the top of the file.
        new_str: str: The new content to be inserted. Note that this argument should be written in a new line starting with "new_str=", and the string content should not be quoted.
    Example:
        Assuming a call is shown as follows:
        ```
        insert("test_filename.py", 5, new_str)
        new_str=def test_add():
            assert add(1, 2) == 3
        ```
        Then it will function as inserting the string 'def test_add():\n    assert add(1, 2) == 3' below Line 5 of the 'test_filename.py' file.
stop() -> None:
    Terminate the editing process.
    Args:
        None
NOTE:
Responses should be concise.
When exploring, you should attempt fewer things at a time: Include ONLY ONE <execute> per response, and use a SINGLE skill listed above within the <execute> environment. DO NOT use other python functions, as the environment does not support them.
You should first reason in the verbal form, then use a skill with <execute> and </execute>.
You should avoid apologies and thanks in the responses.
When you finish exploring and analyzing with the provided skills, please return at most 3 files with the full path only. Each full path should be placed in a single line, INSTEAD OF BROKEN WITH MULTIPLE LINES.
The returned files should be separated by new lines ordered by most to least important, wrapped with ``` and NOTHING ELSE.
An example for a full output:
```
full_path_to_file1.py
full_path_to_file2.py
```
Now let's start!
### GitHub Problem Description ###
{issue content}
### Repository Structure ###
{file structure}
###
Listing 5: A non-execution set of tools empowering the simulation of agentic interaction trajectories.
After completing the initial localization stage, the agent is guided into the code-editing phase through a follow-up instruction: "Now let's move on to Stage 2 and edit the updates. Remember, you can still decide at any point whether a file actually requires modification." We retain partial rollouts from Stage 1, provided that the localization results include at least one correct file.
In Stage 2, we first simulate the agent's interaction by allowing it to open incorrectly localized files, and we artificially inject agentic reasoning patterns such as "I realize that I do not need to modify this file" after inspecting the file content. This procedure is designed to strengthen the self-reflection ability of the agent by exposing it to false-positive contexts regarding the issue to be solved.
Subsequently, we transcribe the ground-truth PR commit pack into trajectory form: each commit message is treated as the agent's reasoning step, and each code update is represented as the corresponding action, expressed through the `str_replace` or `insert` tools. These interactions are appended to the trajectory, followed by a terminating `stop` call. Due to storage constraints on repository snapshots, this trajectory simulation is applied to only a subset of PRs. Overall, this process contributes $\sim$10B tokens of agentic interaction data to the mid-training recipe. Future directions for scaling this component of the data recipe include leveraging the idea of environment scaling (yang2025swesmith).
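A minimal sketch of this transcription, assuming a simplified commit-dict schema (the field names are illustrative; the real pipeline also emits `insert` actions for newly added classes or functions):

```python
def transcribe_commit_pack(commits):
    """Turn a PR commit pack into (reasoning, action) trajectory steps:
    each commit message becomes a reasoning step and each code update a
    `str_replace` action, closed by a terminating `stop()` call."""
    steps = []
    for c in commits:
        action = (
            f'str_replace("{c["path"]}", old_str, new_str)\n'
            f'old_str={c["old"]}\n'
            f'new_str={c["new"]}'
        )
        steps.append({"reasoning": c["message"], "action": action})
    steps.append({"reasoning": "All commits applied.", "action": "stop()"})
    return steps
```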
Training. We perform mid-training using a standard next-token prediction approach, initialized from the Qwen2.5-72B-Base (qwen2025qwen25technicalreport) model. We upsample the synthetic part of the data by a factor of 4 during mid-training, inspired by the practice in grattafiori2024llama; qwen2025qwen25technicalreport; gu2025data. A global batch size of 256 with a maximum sequence length of 32K tokens is used, optimizing for the long-context capabilities necessary for real-world software engineering tasks. The learning rate is set to 2e-5, with a cosine decay schedule and a minimum learning rate of 2e-6. The warm-up phase covers approximately 3 billion tokens, followed by learning rate decay until approximately 150 billion tokens are processed.
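The schedule can be written as a function of tokens processed; the linear warmup shape below is an assumption, since the text only states the warmup budget:

```python
import math

def lr_at(tokens_seen, peak=2e-5, floor=2e-6, warmup=3e9, total=150e9):
    """Cosine-decay schedule in token space: linear warmup over ~3B tokens
    (shape assumed), then cosine decay from the peak rate 2e-5 down to the
    minimum rate 2e-6 by ~150B tokens."""
    if tokens_seen < warmup:
        return peak * tokens_seen / warmup
    progress = min(1.0, (tokens_seen - warmup) / (total - warmup))
    return floor + 0.5 * (peak - floor) * (1.0 + math.cos(math.pi * progress))
```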
Appendix B Docker environments
Table 3: The sources of the docker environments used in the development of Kimi-Dev.
| Dataset | URL | # Instances |
| --- | --- | --- |
| SWE-Gym (pan2024training) | https://huggingface.co/datasets/SWE-Gym/SWE-Gym/ | 2,356 |
| SWE-bench-extra (badertdinov2024scaling) | https://huggingface.co/datasets/nebius/SWE-bench-extra/ | 3,846 |
| R2E-Gym-Lite (jain2025r2e) | https://huggingface.co/datasets/R2E-Gym/R2E-Gym-Lite | 3,671 |
Docker environment construction. To validate non-ground-truth patches generated by model rollouts and expand our dataset, we required executable Docker environments. We combined publicly available datasets with custom-configured Docker environments (see Table 3). Among them, SWE-Gym and R2E-Gym-Lite open-source their Docker images, which we can use directly. For the dataset lacking Docker support (SWE-bench-extra), we implemented an automated configuration method:
1. Initialize a Docker environment with fixed dependencies.
2. Select the Python version based on the commit year.
3. Install dependencies via requirements.txt and `pip install -e .`.
4. Resolve ModuleNotFound errors during test execution.
5. Validate success if a FAIL_TO_PASS test transitions from failing (without gt_patch) to passing (with gt_patch).
Out of 6.38k SWE-bench-extra instances, 3,846 environments are successfully constructed and subsequently used for cold-start and RL training.
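Steps 1-3 of the automated configuration might be sketched as below; the year-to-version mapping and the conda-based setup are assumptions for illustration, as the text does not specify them:

```python
def python_version_for(commit_year):
    """Heuristic commit-year -> Python version mapping (an assumption; the
    paper only states the version is selected based on the commit year)."""
    if commit_year <= 2017:
        return "3.6"
    if commit_year <= 2019:
        return "3.7"
    if commit_year <= 2021:
        return "3.8"
    return "3.9"

def setup_commands(commit_year):
    """Shell commands for steps 1-3 of the automated configuration."""
    py = python_version_for(commit_year)
    return [
        f"conda create -n env python={py} -y",  # environment with pinned Python
        "pip install -r requirements.txt",      # declared dependencies
        "pip install -e .",                     # editable install of the repo
    ]
```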
Appendix C More Details of RL training
C.1 Prompt set selection
<details>
<summary>figs/sec3_rl_scaling/RL_bugfix_ablation_figure.png Details</summary>

Line chart of pass rate (%) versus RL training steps (0-500) for "Bugfixer RL" (red) and "w/o Positive Reinforce" (blue). Both curves trend upward with fluctuations; after roughly step 400, Bugfixer RL maintains a consistently higher pass rate, reaching about 46% at step 500 versus about 43% without positive-example reinforcement.
</details>
Figure 8: Ablation of positive example reinforcement during 72B Bugfixer RL.
In the main text, we introduce the adaptive prompt selection method for RL training. Specifically, we construct an initial prompt set of 1,200 problems by selecting those with pass@16 > 0 from SWE-Gym (pan2024training), SWE-bench-extra (badertdinov2025swerebenchautomatedpipelinetask), and R2E-gym (jain2025r2e). Then, every 100 training steps, we expand the prompt set with 500 new problems, randomly sampled from the pool of problems on which the current model has pass@16 = 0, thereby progressively increasing the difficulty and forming a curriculum.
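The selection rule above can be sketched as follows (a minimal sketch; the function and variable names are ours, not from the training code):

```python
import random

def initial_prompt_set(problems, pass_at_16, size=1200):
    """Seed the RL prompt set with problems the model can already
    sometimes solve (pass@16 > 0)."""
    solvable = [p for p in problems if pass_at_16[p] > 0]
    return solvable[:size]

def expand_prompt_set(prompt_set, problems, pass_at_16, n_new=500, seed=0):
    """Every 100 steps, mix in currently unsolved problems (pass@16 == 0)
    to progressively raise difficulty, forming a curriculum."""
    rng = random.Random(seed)
    current = set(prompt_set)
    unsolved = [p for p in problems if pass_at_16[p] == 0 and p not in current]
    rng.shuffle(unsolved)
    return prompt_set + unsolved[:n_new]
```

In this sketch the pass@16 statistics are assumed to be re-estimated with the current policy before each expansion, matching the adaptive nature of the selection.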
C.2 RL experiment ablation
Figure 9 shows the performance of the Qwen2.5-14B model in RL experiments, where both the BugFixer and the TestWriter exhibit clear scaling behavior with training steps.
<details>
<summary>figs/sec3_rl_scaling/quick_plot_M3_bf.png Details</summary>

Line chart with dual y-axes plotting "Token Length" (blue, left axis, roughly 6500 to 8500) and "Pass Rate (%)" (red, right axis, roughly 24 to 34) against RL training steps (0 to 200). Both metrics fluctuate but trend upward: the pass rate climbs from about 24.5% to the low 30s, and the token length grows from about 6500 to about 8400, with the steepest increases after step 150.
</details>
(a) 14B BugFixer
<details>
<summary>figs/sec3_rl_scaling/quick_plot_M3_tw.png Details</summary>

Line chart with dual y-axes plotting "Token Length" (blue, left axis, roughly 3000 to 5500) and "Reproduced Rate (%)" (red, right axis, roughly 18 to 26) against RL training steps (0 to 200). Token length increases steadily from about 3150 to about 5700, while the reproduced rate is more volatile, dipping around step 150 before rising from about 18% at step 0 to about 26% by step 200.
</details>
(b) 14B TestWriter
Figure 9: RL scaling experiments on Qwen2.5-14B model.
Furthermore, Figure 8 illustrates the effect of the proposed positive example reinforcement. We incorporated this technique in the later stage of training and found that BugFixer RL achieved better performance than without it. The main reason is that positive examples reinforce correct reasoning patterns, enabling the model to converge more quickly, especially as its exploration ability diminishes in the later stages. We note that a similar technique was applied in the development of the Seed1.5-thinking model (seed2025seed1).
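One way such positive example reinforcement can be realized is to replay previously successful rollouts into later training batches. The paper does not specify the exact mechanics, so the sketch below is our own assumption for illustration; all names are hypothetical:

```python
import random

def update_positive_pool(positive_pool, rollouts, rewards, max_size=1000):
    """Store rollouts that earned positive reward for later replay,
    bounding memory with a simple FIFO cap."""
    pool = positive_pool + [r for r, rew in zip(rollouts, rewards) if rew > 0]
    return pool[-max_size:]

def build_training_batch(fresh_rollouts, positive_pool, k_positive=2, seed=0):
    """Mix a few stored successful (positive) rollouts into each batch so
    correct reasoning patterns keep receiving training signal even when
    late-stage exploration rarely rediscovers them."""
    rng = random.Random(seed)
    replayed = rng.sample(positive_pool, min(k_positive, len(positive_pool)))
    return fresh_rollouts + replayed
```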
Appendix D Agent Infrastructure
We present a unified sandbox architecture designed for hybrid cloud cluster environments, leveraging Kubernetes orchestration to provide researchers with a flexible, secure, and highly scalable code execution platform. The architecture is accessible through a Python SDK and implements an ephemeral, use-and-destroy paradigm that ensures resource efficiency and security isolation. The system demonstrates exceptional scalability, supporting over 10,000 concurrent sandbox instances while maintaining performance stability. This infrastructure serves as a foundational platform for diverse computational scenarios, including automated code evaluation and RL experiments.
The system employs a sidecar container pattern to efficiently orchestrate and manage a heterogeneous collection of over 25,000 Docker images sourced from multiple datasets. This architectural choice enables optimal resource utilization while maintaining isolation between different execution environments. Additionally, we have developed a proprietary Software Engineering (SWE) image-building pipeline that continuously expands the repository of supported images, thereby enhancing the system's adaptability to diverse computational workloads and research requirements.
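The ephemeral, use-and-destroy paradigm can be illustrated with a minimal local sketch (our own simplification: a temporary directory plus a subprocess stands in for the Kubernetes-orchestrated sandbox pods and Python SDK described above):

```python
import os
import shutil
import subprocess
import sys
import tempfile
from contextlib import contextmanager

@contextmanager
def ephemeral_sandbox():
    """Create an isolated working directory, hand it to the caller, and
    destroy it on exit, whether the run succeeded or failed."""
    workdir = tempfile.mkdtemp(prefix="sandbox-")
    try:
        yield workdir
    finally:
        shutil.rmtree(workdir, ignore_errors=True)  # use-and-destroy

def run_in_sandbox(code: str, timeout: int = 30) -> str:
    """Write a snippet into a fresh sandbox, execute it there, and return
    stdout; nothing persists after the call."""
    with ephemeral_sandbox() as workdir:
        script = os.path.join(workdir, "main.py")
        with open(script, "w") as f:
            f.write(code)
        proc = subprocess.run([sys.executable, script], cwd=workdir,
                              capture_output=True, text=True, timeout=timeout)
        return proc.stdout
```

The production system additionally provides network and filesystem isolation per instance, which a temporary directory alone does not give.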
Appendix E Case Studies under Agentless
E.1 Bugfixer Traces
We provide a Bugfixer's response trace during the code repair stage (see Listing 6). From it, we can observe that the original buggy code was `return self[i, j - pos - other.cols]`, which the model quickly identified and corrected to `return self[i, j - other.cols]`. Although some repetitive self-reflection followed, the increased response length did improve answer accuracy and robustness in our experiments.
◁think▷
Okay, let's tackle this problem. The user reported an issue where the Matrix.col_insert() method isn't working correctly. The example given shows that after inserting a matrix V into M at position 3, the resulting matrix has the inserted columns shifted incorrectly. The expected result would have the inserted columns at positions 3 and 4, but the actual output shows them starting at position 3 while the original matrix's columns after the insert are shifted upwards instead of remaining in their original positions.
First, I need to understand how col_insert works. Looking at the code provided, especially the MatrixShaping class in sympy/matrices/common.py, the _eval_col_insert method is responsible for inserting columns. The current implementation shifts the existing columns to the right by the number of columns inserted. However, the bug might be in how the original matrix's columns are being shifted.
Let me look at the existing code for _eval_col_insert:
def _eval_col_insert(self, pos, other):
    cols = self.cols
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - pos - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
Wait, the problem here is in the last case. When j is greater than or equal to pos + other.cols, the code subtracts (pos + other.cols) from j. But that's incorrect because the original columns after the inserted ones should start at j = pos + other.cols. So for the original matrix's columns that come after the inserted ones, their new positions should be j + other.cols. But in the current code, when j is in the original matrix's columns (after the inserted ones), the code is subtracting (pos + other.cols), which would incorrectly shift them.
Wait, let's think again. The original code for the entry function:
if j < pos:
    return self[i, j]  # original columns before pos
elif pos <= j < pos + other.cols:
    return other[i, j - pos]  # inserted columns
return self[i, j - pos - other.cols]  # original columns after the inserted ones
Wait, no. Let's say the original matrix has cols = 6. We insert 2 columns at pos = 3. The new total cols is 6 + 2 = 8. The original columns after pos 3 (columns 3, 4, 5) should now be at positions 5, 6, 7. But according to the code, when j >= 3 + 2 = 5 (since pos + other.cols = 3 + 2 = 5), the code returns self[i, j - 3 - 2] = self[i, j - 5]. Wait, that's not right. For example, if j = 5 (the new column 5), then j - pos - other.cols = 5 - 3 - 2 = 0. So the original column 0 would be mapped to the new column 5. That's definitely wrong.
Ah, here's the mistake. The code is subtracting (pos + other.cols) from j, but that's incorrect. The original columns after the inserted ones should be shifted to the right by other.cols. So the correct calculation should be j - other.cols. For example, original column 3 becomes 3 + 2 = 5, original column 4 becomes 6, etc.
So the line in the code:
return self[i, j - pos - other.cols]
should be:
return self[i, j - other.cols]
Because the inserted columns have taken up 'other.cols' positions starting at 'pos', the original columns after 'pos' are shifted by 'other.cols' to the right. Therefore, to get the correct original column index, we subtract 'other.cols' from the current j.
So the fix is to change the subtraction from (pos + other.cols) to other.cols. Let me check:
Original code:
return self[i, j - pos - other.cols]
Corrected code:
return self[i, j - other.cols]
Wait, no. Wait, the original columns after the inserted block (which starts at pos and has other.cols columns) would start at j = pos + other.cols. So for those j values, the original column index would be j - (pos + other.cols). Wait, no. Let's think with numbers.
Original matrix has 6 columns. Insert 2 columns at position 3. The new matrix has 8 columns. The original columns 0, 1, 2 remain in positions 0, 1, 2. The inserted 2 columns are at positions 3 and 4. The original columns 3, 4, 5 now become 5, 6, 7.
So for j in 5, 6, 7 (original columns 3, 4, 5), the mapping is:
5 → 3
6 → 4
7 → 5
So the formula should be original_column = j - inserted_columns. In this case, inserted_columns is other.cols. So the code should subtract other.cols from j when accessing the original matrix's columns after the inserted block.
Therefore, the line:
return self[i, j - pos - other.cols]
should be:
return self[i, j - other.cols]
Because pos is where the insertion starts, but the original columns after the inserted block are shifted by other.cols. Let's verify with the example:
pos = 3, other.cols = 2. For j = 5 (new column 5), the original column index is 5 - 2 = 3, which is correct. For j = 6, 6 - 2 = 4; for j = 7, 7 - 2 = 5. That's correct.
So the error in the original code was subtracting (pos + other.cols) instead of other.cols. The pos is the starting position of the insertion, but after inserting other.cols columns, the original columns after the inserted block are shifted by other.cols. Therefore, the correct subtraction is other.cols.
Thus, the fix is to change the line in the _eval_col_insert method from:
return self[i, j - pos - other.cols]
to:
return self[i, j - other.cols]
Now, looking back at the code in the _eval_col_insert function:
The code for the else clause is:
return self[i, j - pos - other.cols]
which is incorrect. Changing that to j - other.cols should fix the problem.
So the SEARCH block would look for the lines:
def _eval_col_insert(self, pos, other):
    cols = self.cols
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - pos - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
And the REPLACE block would change the last part to:
    elif pos <= j < pos + other.cols:
        return other[i, j - pos]
    return self[i, j - other.cols]
So the correction is in the return statement of the entry function. The original code subtracted (pos + other.cols) from j, but it should subtract other.cols instead.
Therefore, the correct SEARCH/REPLACE block would be:
### sympy/matrices/common.py
<<<<<<< SEARCH
def _eval_col_insert(self, pos, other):
    cols = self.cols
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - pos - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
=======
def _eval_col_insert(self, pos, other):
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
>>>>>>> REPLACE
This change corrects the column shifting calculation for the original matrix's columns after the inserted block. Instead of subtracting (pos + other.cols), we subtract other.cols, which properly shifts the original columns to the right by the number of inserted columns.
◁/think▷
```python
### sympy/matrices/common.py
<<<<<<< SEARCH
def _eval_col_insert(self, pos, other):
    cols = self.cols
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - pos - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
=======
def _eval_col_insert(self, pos, other):
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
>>>>>>> REPLACE
```
Listing 6: Reasoning trajectory example for BugFixer.
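The indexing fix traced in Listing 6 can be checked independently with a small standalone sketch (plain lists of lists instead of SymPy matrices; the function names are ours):

```python
def col_insert_buggy(mat, pos, other):
    """Reproduces the original _eval_col_insert indexing: the last case
    subtracts (pos + other_cols), mapping trailing columns to the wrong
    source columns."""
    rows, cols, ocols = len(mat), len(mat[0]), len(other[0])
    def entry(i, j):
        if j < pos:
            return mat[i][j]
        elif pos <= j < pos + ocols:
            return other[i][j - pos]
        return mat[i][j - pos - ocols]  # bug: should be j - ocols
    return [[entry(i, j) for j in range(cols + ocols)] for i in range(rows)]

def col_insert_fixed(mat, pos, other):
    """Corrected version: original columns after the inserted block are
    shifted right by exactly the number of inserted columns."""
    rows, cols, ocols = len(mat), len(mat[0]), len(other[0])
    def entry(i, j):
        if j < pos:
            return mat[i][j]
        elif pos <= j < pos + ocols:
            return other[i][j - pos]
        return mat[i][j - ocols]  # fixed indexing
    return [[entry(i, j) for j in range(cols + ocols)] for i in range(rows)]
```

With a 1x6 row and two columns inserted at position 3, the buggy version repeats the leading columns at the tail, while the fixed version preserves them.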
E.2 Testwriter False Positive Cases
During the RL training process, we observed that the TestWriter component exhibits false positives: the generated tests can contain coverage gaps, which may cause model-generated test patches to be mistakenly judged as correct. Consequently, the measured improvement of the TestWriter does not accurately reflect its actual performance. In this section, we provide two intuitive examples to illustrate this phenomenon, and leave further enhancement of the TestWriter as future work.
Listing 7 shows the ground-truth test patch for instance django__django-16877, and Listing 8 shows the test patch generated by the TestWriter. A detailed comparison reveals that the model-generated test cases are not comprehensive: two locations should have been tested, but only one was checked, and the effect of mark_safe was not considered. In practice, this allowed the model-predicted patch to pass the TestWriter reward check, resulting in a false positive.
The second example instance is sympy__sympy-13974, with Listings 9 and 10 showing the ground-truth and model-generated test patches. In this case, the coverage of the generated test is also insufficient: it does not account for the need to satisfy the associativity property, e.g., assert tensor_product_simp(TP(A, B)**x) == TP(A**x, B**x). As a consequence, it fails to serve as a reliable judge for the BugFixer.
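The false-positive mechanism can be distilled into a toy sketch (entirely our own illustration, not taken from the paper's instances): a weak generated test rewards an incomplete fix that a more comprehensive ground-truth test would reject:

```python
def buggy_fix(x):
    """A hypothetical incomplete patch: escapes '&' but forgets '<'."""
    return x.replace("&", "&amp;")

def correct_fix(x):
    """A complete patch covering both cases."""
    return x.replace("&", "&amp;").replace("<", "&lt;")

def weak_test(fix):
    """Analogous to the model-generated test patch: checks one case only."""
    return fix("a&b") == "a&amp;b"

def strong_test(fix):
    """Analogous to the ground-truth test patch: also covers the gap."""
    return fix("a&b") == "a&amp;b" and fix("<p>") == "&lt;p>"
```

If the weak test is used as the reward signal, buggy_fix passes and is counted as a solve, which is exactly the false positive described above.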
diff --git a/tests/template_tests/filter_tests/test_escapeseq.py b/tests/template_tests/filter_tests/test_escapeseq.py
new file mode 100644
--- /dev/null
+++ b/tests/template_tests/filter_tests/test_escapeseq.py
@@ -0,0 +1,59 @@
+from django.test import SimpleTestCase
+from django.utils.safestring import mark_safe
+
+from ..utils import setup
+
+
+class EscapeseqTests(SimpleTestCase):
+    """
+    The "escapeseq" filter works the same whether autoescape is on or off,
+    and has no effect on strings already marked as safe.
+    """
+
+    @setup(
+        {
+            "escapeseq_basic": (
+                '{{ a|escapeseq|join:", " }} -- {{ b|escapeseq|join:", " }}'
+            ),
+        }
+    )
+    def test_basic(self):
+        output = self.engine.render_to_string(
+            "escapeseq_basic",
+            {"a": ["x&y", "<p>"], "b": [mark_safe("x&y"), mark_safe("<p>")]},
+        )
+        self.assertEqual(output, "x&amp;y, &lt;p&gt; -- x&y, <p>")
+
+    @setup(
+        {
+            "escapeseq_autoescape_off": (
+                '{% autoescape off %}{{ a|escapeseq|join:", " }}'
+                " -- "
+                '{{ b|escapeseq|join:", "}}{% endautoescape %}'
+            )
+        }
+    )
+    def test_autoescape_off(self):
+        output = self.engine.render_to_string(
+            "escapeseq_autoescape_off",
+            {"a": ["x&y", "<p>"], "b": [mark_safe("x&y"), mark_safe("<p>")]},
+        )
+        self.assertEqual(output, "x&amp;y, &lt;p&gt; -- x&y, <p>")
+
+    @setup({"escapeseq_join": '{{ a|escapeseq|join:"<br/>" }}'})
+    def test_chain_join(self):
+        output = self.engine.render_to_string("escapeseq_join", {"a": ["x&y", "<p>"]})
+        self.assertEqual(output, "x&amp;y<br/>&lt;p&gt;")
+
+    @setup(
+        {
+            "escapeseq_join_autoescape_off": (
+                '{% autoescape off %}{{ a|escapeseq|join:"<br/>" }}{% endautoescape %}'
+            ),
+        }
+    )
+    def test_chain_join_autoescape_off(self):
+        output = self.engine.render_to_string(
+            "escapeseq_join_autoescape_off", {"a": ["x&y", "<p>"]}
+        )
+        self.assertEqual(output, "x&amp;y<br/>&lt;p&gt;")
Listing 7: GT test patch of django__django-16877.
diff --git a/tests/template_tests/filter_tests/test_safeseq.py b/tests/template_tests/filter_tests/test_safeseq.py
--- a/tests/template_tests/filter_tests/test_safeseq.py
+++ b/tests/template_tests/filter_tests/test_safeseq.py
@@ -20,3 +20,20 @@
     def test_safeseq02(self):
         output = self.engine.render_to_string("safeseq02", {"a": ["&", "<"]})
         self.assertEqual(output, "&, < -- &, <")
+
+    # New tests for escapeseq
+    @setup({"escapeseq01": '{{ a|escapeseq|join:", " }}'})
+    def test_escapeseq01(self):
+        output = self.engine.render_to_string("escapeseq01", {"a": ["&", "<"]})
+        self.assertEqual(output, "&amp;, &lt;")
+
+    @setup(
+        {
+            "escapeseq02": (
+                '{% autoescape off %}{{ a|escapeseq|join:", " }}{% endautoescape %}'
+            )
+        }
+    )
+    def test_escapeseq02(self):
+        output = self.engine.render_to_string("escapeseq02", {"a": ["&", "<"]})
+        self.assertEqual(output, "&amp;, &lt;")
Listing 8: Model predicted test patch of django__django-16877.
diff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py
--- a/sympy/physics/quantum/tests/test_tensorproduct.py
+++ b/sympy/physics/quantum/tests/test_tensorproduct.py
@@ -10,7 +10,7 @@
 from sympy.physics.quantum.density import Density
 from sympy.core.trace import Tr
-A, B, C = symbols('A,B,C', commutative=False)
+A, B, C, D = symbols('A,B,C,D', commutative=False)
 x = symbols('x')
 mat1 = Matrix([[1, 2*I], [1 + I, 3]])
@@ -47,6 +47,11 @@ def test_tensor_product_commutator():
 def test_tensor_product_simp():
     assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C)
+    # tests for Pow-expressions
+    assert tensor_product_simp(TP(A, B)**x) == TP(A**x, B**x)
+    assert tensor_product_simp(x*TP(A, B)**2) == x*TP(A**2, B**2)
+    assert tensor_product_simp(x*(TP(A, B)**2)*TP(C, D)) == x*TP(A**2*C, B**2*D)
+    assert tensor_product_simp(TP(A, B)-TP(C, D)**x) == TP(A, B)-TP(C**x, D**x)
 def test_issue_5923():
diff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py
--- a/sympy/physics/quantum/tests/test_tensorproduct.py
+++ b/sympy/physics/quantum/tests/test_tensorproduct.py
@@ -47,6 +47,27 @@
 def test_tensor_product_simp():
     assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C)
+
+    # Test tensor product powers
+    a = symbols('a', commutative=False)
+    t1 = TP(1, 1)*TP(1, 1)
+    t1_simp = tensor_product_simp(t1)
+    assert t1_simp == TP(1, 1), f"Expected 1x1, got {t1_simp}"
+
+    # Test with Pauli matrices
+    from sympy.physics.paulialgebra import Pauli
+    t2 = TP(1, Pauli(3))*TP(1, Pauli(3))
+    t2_simp = tensor_product_simp(t2)
+    assert t2_simp == TP(1, 1), f"Expected 1x1, got {t2_simp}"
+
+    # Test substitution cases from issue
+    expr1 = TP(1, 1)*TP(1, a)
+    result1 = expr1.subs(a, 1)
+    assert tensor_product_simp(result1) == TP(1, 1), f"Substitution 1 failed: {result1}"
+
+    expr2 = TP(1, Pauli(3))*TP(1, a)
+    result2 = expr2.subs(a, Pauli(3))
+    assert tensor_product_simp(result2) == TP(1, 1), f"Substitution 2 failed: {result2}"
 def test_issue_5923():
Listing 10: Model predicted test patch of sympy__sympy-13974.
Appendix F Emergent Parallel Scaling in Kimi-Dev
<details>
<summary>figs/sec3_sp_scaling/parellel_scaling_figure.png Details</summary>

Line chart of pass rate (%) versus the number of patch candidates aggregated. The pass rate rises from 48.0% with 1 patch to 49.2% with 5, 50.2% with 10, 51.2% with 20, and 51.6% with 40 patches, showing diminishing returns at larger candidate counts.
</details>
Figure 10: Emergent parallel scaling results on SWE-bench Verified.
Compared to self-play methods that leverage execution at test time, parallel scaling (chen2025parallel; pan2025learning) is a recently introduced test-time scaling method that aggregates multiple independent rollouts for additional improvement. While previous works incentivize parallel scaling ability through training, or train a separate reward model (pan2024training; jain2025r2e) for patch selection, we observe that the parallel scaling effect emerges in our model without any specific training. Specifically, patch candidates from parallel rollouts of BugFixer are used to prompt the model to summarize their differences and compose a new bugfix patch. Listing 11 shows the prompt template for parallel scaling. In the template, problem_statement is the GitHub issue, and trajs_content contains the multiple patch candidates.
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{problem_statement}
--- END ISSUE ---
Below are some thinking trajectories, each from an LLM reasoning model. Only one trajectory is right.
--- BEGIN FILE ---
```
{trajs_content}
```
--- END FILE ---
Please first summarize and analyze the key differences between the trajectories, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line 'print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
The summary of the key differences between the trajectories should be in the thinking part.
Listing 11: The prompt template for parallel scaling.
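Assembling this prompt from parallel rollouts amounts to numbering the candidates and splicing them into trajs_content; a minimal sketch (our own helper, with an abbreviated version of the Listing 11 template):

```python
TEMPLATE = (
    "We are currently solving the following issue within our repository. "
    "Here is the issue text:\n"
    "--- BEGIN ISSUE ---\n{problem_statement}\n--- END ISSUE ---\n"
    "Below are some thinking trajectories, each from an LLM reasoning model. "
    "Only one trajectory is right.\n"
    "--- BEGIN FILE ---\n{trajs_content}\n--- END FILE ---\n"
    "Please first summarize and analyze the key differences between the "
    "trajectories, and then generate *SEARCH/REPLACE* edits to fix the issue.\n"
)  # abbreviated: the full template also specifies the edit format

def build_parallel_scaling_prompt(problem_statement, patch_candidates):
    """Number each candidate patch and fill both template slots."""
    trajs_content = "\n\n".join(
        f"Answer {i}:\n{patch}" for i, patch in enumerate(patch_candidates, 1)
    )
    return TEMPLATE.format(problem_statement=problem_statement,
                           trajs_content=trajs_content)
```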
The results in Figure 10 show that the performance of parallel aggregation improves as the number of patch candidates in the prompt increases. The advantage of this scaling paradigm over majority voting lies in its ability to leverage the model's own capacity to analyze multiple candidate patches, rather than merely weighting answers by their frequency of occurrence. Listing 12 gives a full prompt example with 14 different input patch candidates. Note: as Listing 12 is long, readers may skip ahead to Listing 13 to observe the model behavior directly.
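For contrast, the majority-voting baseline discussed above selects purely by frequency; a minimal sketch:

```python
from collections import Counter

def majority_vote(patch_candidates):
    """Pick the most frequent candidate patch, comparing after whitespace
    normalization so trivially different formattings count as one answer."""
    normalized = [" ".join(p.split()) for p in patch_candidates]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner
```

Unlike the aggregation prompt, this baseline cannot prefer a patch that appears only once, even if it is the only correct one.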
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Data <@filename isn't relative to the YAML file
The [docs say](https://gabbi.readthedocs.io/en/latest/format.html#data):
> If the value is a string that begins with <@ then the rest of the string is treated as the name of a file to be loaded from the same directory as the YAML file.
But I haven't found it works like this unless I cd into the directory containing the yaml file.
`_load_data_file` says:
```python
path = os.path.join(self.test_directory, os.path.basename(filename))
```
this does a few things:
- makes the path begin with `self.test_directory` (defaults to `.`, the current working directory)
- discards any folders in the <@ `filename` path
- appends `filename`
This means, if I'm in `/`, and I have a test that says `data: <@cake.jpg`, I cannot run my tests as advised in the docs:
`gabbi-run -- /my/test.yaml /my/other.yaml`
`FileNotFoundError: [Errno 2] No such file or directory: './cake.jpg'`
So in our test running script, we have to start with `cd tests/`.
My preference would be to make the path relative to the location of the yaml file. That way, I can run my tests from whatever directory.
Obviously this isn't possible when redirecting with `<` as the filename is never seen by gabbi. So I'm happy to discuss other ideas.
---
Furthermore I can't keep my test data in a child directory, unless I cd into that directory like:
```shell
cd yaml_tests/test_data
gabbi-run -- ../test_things.yaml
```
So for that reason, I'd like to allow directories like `<@test_data/filename.txt` to be included in the path.
---
Happy to write the patch, if we come up with an approach here.
--- END ISSUE ---
Below are some thinking trajectories, each from an LLM reasoning model.
--- BEGIN FILE ---
```
Answer 1:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 2:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
                test_directory = os.path.dirname(input_file) or '.'
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast, test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 3:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
=======
        for input_file in input_files:
            test_directory = os.path.dirname(input_file)
            with open(input_file, 'r') as fh:
                success = run_suite(
                    fh, handler_objects, host, port, prefix, force_ssl,
                    failfast, test_directory=test_directory)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
>>>>>>> REPLACE
```
Answer 4:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            # Determine test directory based on input file's location
            test_directory = os.path.dirname(os.path.abspath(input_file)) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 5:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import sys
import unittest
=======
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 6:
```python
### gabbi/driver.py
<<<<<<< SEARCH
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, path, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
=======
        test_file_dir = os.path.dirname(test_file)
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, test_file_dir, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
                test_directory = os.path.dirname(input_file)
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast, test_directory)
>>>>>>> REPLACE
```
Answer 7:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
=======
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast, test_directory=test_dir)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
>>>>>>> REPLACE
```
Answer 8:
```python
### gabbi/runner.py
<<<<<<< SEARCH
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    # Determine test directory based on input handle
    if hasattr(handle, 'name'):
        test_directory = os.path.dirname(os.path.abspath(handle.name))
    else:
        test_directory = '.'
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
Answer 9:
```python
### gabbi/driver.py
<<<<<<< SEARCH
    for test_file in glob.iglob('%s/*.yaml' % path):
        if '_' in os.path.basename(test_file):
            warnings.warn(exception.GabbiSyntaxWarning(
                "'_' in test filename %s. This can break suite grouping."
                % test_file))
        if intercept:
            host = str(uuid.uuid4())
        suite_dict = utils.load_yaml(yaml_file=test_file)
        test_base_name = os.path.splitext(os.path.basename(test_file))[0]
        if all_test_base_name:
            test_base_name = '%s_%s' % (all_test_base_name, test_base_name)
        if require_ssl:
            if 'defaults' in suite_dict:
                suite_dict['defaults']['ssl'] = True
            else:
                suite_dict['defaults'] = {'ssl': True}
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, path, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
=======
    for test_file in glob.iglob('%s/*.yaml' % path):
        if '_' in os.path.basename(test_file):
            warnings.warn(exception.GabbiSyntaxWarning(
                "'_' in test filename %s. This can break suite grouping."
                % test_file))
        if intercept:
            host = str(uuid.uuid4())
        suite_dict = utils.load_yaml(yaml_file=test_file)
        test_base_name = os.path.splitext(os.path.basename(test_file))[0]
        if all_test_base_name:
            test_base_name = '%s_%s' % (all_test_base_name, test_base_name)
        if require_ssl:
            if 'defaults' in suite_dict:
                suite_dict['defaults']['ssl'] = True
            else:
                suite_dict['defaults'] = {'ssl': True}
        # Use absolute path of the YAML file's directory
        test_file_dir = os.path.dirname(os.path.abspath(test_file))
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, test_file_dir, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
>>>>>>> REPLACE
```
Answer 10:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        force_ssl=force_ssl, failfast=failfast,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, force_ssl=force_ssl, failfast=failfast,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
                test_dir = os.path.dirname(input_file)
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
```python
### gabbi/case.py
<<<<<<< SEARCH
        path = os.path.join(self.test_directory, os.path.basename(filename))
=======
        path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
```
Answer 11:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port,
        None, None, prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
                test_dir = os.path.dirname(input_file)
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
Answer 12:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
Answer 13:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
Answer 14:
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(input_file, fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_directory = '.'
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(input_file, handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_directory = os.path.dirname(input_file) if input_file else '.'
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    if not input_files:
        success = run_suite(sys.stdin, handler_objects, host, port,
                            prefix, force_ssl, failfast)
=======
    if not input_files:
        success = run_suite(None, sys.stdin, handler_objects, host, port,
                            prefix, force_ssl, failfast)
>>>>>>> REPLACE
```
```
--- END FILE ---
Please first summarize and analyze the key differences between the trajectories, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line 'print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
The summary of the key differences between the trajectories should be in the thinking part.
Listing 12: Example of emergent parallel scaling (cdent__gabbi-191 in SWE-bench-extra), the prompt input part.
Listing 13 shows how the model successfully recognizes the correct answer candidate in Listing 12, and generates a novel patch with additional reasoning to solve the issue:
<think>
Okay, let's tackle this problem. The user is having an issue where when they specify a data file in their YAML test using the <@ syntax, the path isn't resolved relative to the YAML file's location. Instead, it's using the current working directory. They want the path to be relative to the YAML file so they can run tests from any directory.
First, I need to understand how the current code is handling the data file paths. The user mentioned that in _load_data_file, the code uses os.path.join(self.test_directory, os.path.basename(filename)). This takes the test_directory (which defaults to '.'), discards any directories in the filename, and appends the basename. That's why if the YAML is in /my/tests/test.yaml and the data is <@data.txt, it looks for ./data.txt instead of /my/tests/data.txt.
The key here is to make sure that the test_directory is set to the directory where the YAML file is located. Then, when joining with the filename (without basename), it would correctly resolve the relative path.
Looking at the provided code snippets, the main changes are in runner.py where the test_directory is determined. The user's proposed solutions involve modifying the run_suite function to accept a test_directory parameter, which is derived from the input file's directory.
In the original code, when running a suite from an input file, the test_directory is hardcoded to '.'. The various answers modify this by passing the directory of the input file as test_directory. For example, in Answer 10, they add the test_directory parameter to run_suite and set it to os.path.dirname(input_file). Then, in case.py, they change the path joining to use the filename directly instead of basename.
Wait, the critical part is in case.py's _load_data_file method. The original code uses os.path.basename(filename), which strips any directories from the filename. So if the YAML has <@test_data/file.txt, the basename is file.txt, and it's joined with test_directory. But if test_directory is the YAML's directory, then it should be YAML_DIR/test_data/file.txt. However, the current code would do test_directory + file.txt, which would be YAML_DIR/file.txt. That's incorrect. So the problem is that the code is taking the basename, thus ignoring any subdirectories in the filename.
Ah, so the fix here is to not take the basename. Instead, the filename after <@ should be treated as a relative path from the YAML's directory. So in case.py's _load_data_file, instead of using os.path.basename(filename), we should use the filename as-is. Then, os.path.join(self.test_directory, filename) would correctly resolve the path relative to the YAML's directory.
But wait, the filename could be an absolute path. For example, if the YAML has <@/absolute/path/file.txt, then os.path.join would just use the absolute path. However, the documentation says that the filename is relative to the YAML file. So absolute paths might not be intended. But the code should handle that.
So combining two changes: in runner.py, set test_directory to the YAML file's directory, and in case.py, stop using basename and use the full filename.
Looking at Answer 10, there's a change in case.py from os.path.basename(filename) to filename. That's exactly the fix needed. Then, in runner.py, when running the suite, pass the test_directory as the directory of the input file.
So the key files to modify are:
1. gabbi/runner.py: When calling run_suite, pass test_directory as the directory of the input file. This involves modifying the run_suite function to accept test_directory as a parameter and then passing it when the input file is processed.
2. gabbi/case.py: Change the path construction from using os.path.basename(filename) to using the filename directly. This allows subdirectories in the filename to be included in the path.
Looking through the provided code snippets, the one that does both is Answer 10. Let me check:
In Answer 10, the runner.py changes add the test_directory parameter and set it to os.path.dirname(input_file). Then, in case.py, the path is changed to os.path.join(self.test_directory, filename). That would correctly resolve the path relative to the YAML's directory, including any subdirectories in the filename.
But wait, the original code in case.py uses os.path.basename(filename). So changing that to filename means that if the input is 'test_data/file.txt', it's joined as test_directory/test_data/file.txt. If test_directory is the YAML's directory, that's correct.
... [intermediate reasoning omitted for display brevity] ...
But perhaps it is better to use os.path.abspath and then dirname to handle relative paths correctly. But maybe the existing code already works. For example, if the input_file is 'tests/test.yaml', then os.path.dirname would return 'tests', which when joined with os.getcwd() would be correct. Wait, no. os.path.dirname just gives the directory part of the path. So if input_file is a relative path, like 'tests/test.yaml', then test_directory would be 'tests', and when os.path.join is called, it's relative to the current working directory. But the code in suitemaker.test_suite_from_dict might expect an absolute path. However, the original code uses '.' as the test_directory, which is relative. So perhaps using the directory of the input file as test_directory is sufficient.
Another thing to consider: when the YAML is loaded from stdin (like via redirect), the test_directory remains '.'. That's acceptable, as the documentation mentions that when using <, the filename isn't seen by Gabbi, so it's not possible to resolve the path relative to the YAML's original location. So in that case, the test_directory defaults to '.'.
Now, in case.py:
Original line:
path = os.path.join(self.test_directory, os.path.basename(filename))
Modified line:
path = os.path.join(self.test_directory, filename)
So the SEARCH/REPLACE block for case.py:
### gabbi/case.py
<<<<<<< SEARCH
path = os.path.join(self.test_directory, os.path.basename(filename))
=======
path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
Putting all these together, the final edits are:
1. Add import os in runner.py.
2. Modify run_suite to accept test_directory and pass it to test_suite_from_dict.
3. In the loop over input_files, compute test_directory as the directory of input_file and pass it to run_suite.
4. Modify case.py's path construction.
So the final answer would include these four SEARCH/REPLACE blocks.
</think>
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
```python
### gabbi/case.py
<<<<<<< SEARCH
        path = os.path.join(self.test_directory, os.path.basename(filename))
=======
        path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
```
These changes:
1. Add the `os` import needed for path handling
2. Modify run_suite to accept a test_directory parameter
3. Use the input file's directory as the test_directory when processing files
4. Update data file path resolution to preserve directory structure from YAML relative paths
5. Remove basename handling that was stripping directory information from data file paths
The net effect is that data files specified with `<@filename` in YAML tests will now be resolved relative to their YAML file's directory, supporting both flat and nested directory structures for test data. When tests are run from stdin (using `<`), the default directory remains the current working directory as before.
Listing 13: Example of emergent parallel scaling (cdent__gabbi-191 in SWE-bench-extra), the model response part.
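The path-resolution behavior the model's edits aim for can be sketched in isolation. The helper below is illustrative only (it is not gabbi's API; `resolve_data_path` is a hypothetical name): it mirrors how the fixed code derives `test_directory` from the YAML file's location and joins the full relative reference instead of its basename.

```python
import os

def resolve_data_path(yaml_file, data_ref):
    """Resolve a data file referenced by a YAML test (after '<@') relative
    to the YAML file's own directory, preserving any sub-directory part."""
    # Matches the runner.py edit: fall back to '.' for bare filenames/stdin.
    test_directory = os.path.dirname(yaml_file) or '.'
    # Matches the case.py edit: join the full relative path, not basename().
    return os.path.join(test_directory, data_ref)

print(resolve_data_path('tests/gabbits/post.yaml', 'subdir/data.json'))
# tests/gabbits/subdir/data.json
print(resolve_data_path('post.yaml', 'data.json'))
# ./data.json
```

With the pre-fix `os.path.basename` behavior, the first call would instead have produced `tests/gabbits/data.json`, losing the `subdir/` component.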
Appendix G Analysis for SWE-Agents
G.1 Stage Annotation for SWE-Agent Trajectories
In this section, we present how we use a frontier LLM to annotate the SWE-Agent stage to which each interaction turn within the trajectory rollout belongs. While we have briefly introduced the five stages suggested in the SWE-Agent prompt in Section 4, we attach the excerpt in Listing 14 for greater clarity:
...
Follow these steps to resolve the issue:
1. As a first step, it might be a good idea to find and read code relevant to the <pr_description>
2. Create a script to reproduce the error and execute it with `python <filename.py>` using the bash tool, to confirm the error
3. Edit the source code of the repo to resolve the issue
4. Rerun your reproduce script and confirm that the error is fixed!
5. Think about edgecases and make sure your fix handles them as well
...
Listing 14: The excerpt of the five-stage declaration in the SWE-Agent prompt.
It should be noted that the agent can flexibly transition across the five stages during its working process. For example, after Stage 4, when the agent reruns the test script, erroneous information may remain, and this is when the agent goes back to Stage 3 to refine its code repair with reflection. Similar backtracking behavior can be observed from Stage 5 to Stage 3 as well, where the initial code repair has proven correct under the initial test script the agent composes in Stage 2, but fails on some edge testcase the agent proposes in Stage 5.
To further analyze the BugFixer and the reflection skill prior, we need to identify the stage to which each turn along the SWE-Agent trajectory belongs. As no strict boundaries or special prompt notes are set between consecutive stages, we leverage an LLM for annotation. The annotation system prompt we use with kimi-k2-0711-preview is shown in Listing 15:
You are a professional inspector that can analyze the provided agentic interaction trajectory.
The trajectory you are going to analyze is made by an agent that interacts with a computer to solve tasks. This agent has access to the following functions:
---- BEGIN FUNCTION #1: bash ----
Description: Execute a bash command in the terminal.
Parameters:
(1) command (string, required): The bash command to execute. Can be empty to view additional logs when previous exit code is `-1`. Can be `ctrl+c` to interrupt the currently running process.
---- END FUNCTION #1 ----
---- BEGIN FUNCTION #2: submit ----
Description: Finish the interaction when the task is complete OR if the assistant cannot proceed further with the task.
No parameters are required for this function.
---- END FUNCTION #2 ----
---- BEGIN FUNCTION #3: str_replace_editor ----
Description: Custom editing tool for viewing, creating and editing files
* State is persistent across command calls and discussions with the user
* If `path` is a file, `view` displays the result of applying `cat -n`. If `path` is a directory, `view` lists non-hidden files and directories up to 2 levels deep
* The `create` command cannot be used if the specified `path` already exists as a file
* If a `command` generates a long output, it will be truncated and marked with `<response clipped>`
* The `undo_edit` command will revert the last edit made to the file at `path`
Notes for using the `str_replace` command:
* The `old_str` parameter should match EXACTLY one or more consecutive lines from the original file. Be mindful of whitespaces!
* If the `old_str` parameter is not unique in the file, the replacement will not be performed. Make sure to include enough context in `old_str` to make it unique
* The `new_str` parameter should contain the edited lines that should replace the `old_str`
Parameters:
(1) command (string, required): The commands to run. Allowed options are: `view`, `create`, `str_replace`, `insert`, `undo_edit`.
Allowed values: [`view`, `create`, `str_replace`, `insert`, `undo_edit`]
(2) path (string, required): Absolute path to file or directory, e.g. `/repo/file.py` or `/repo`.
(3) file_text (string, optional): Required parameter of `create` command, with the content of the file to be created.
(4) old_str (string, optional): Required parameter of `str_replace` command containing the string in `path` to replace.
(5) new_str (string, optional): Optional parameter of `str_replace` command containing the new string (if not given, no string will be added). Required parameter of `insert` command containing the string to insert.
(6) insert_line (integer, optional): Required parameter of `insert` command. The `new_str` will be inserted AFTER the line `insert_line` of `path`.
(7) view_range (array, optional): Optional parameter of `view` command when `path` points to a file. If none is given, the full file is shown. If provided, the file will be shown in the indicated line number range, e.g. [11, 12] will show lines 11 and 12. Indexing at 1 to start. Setting `[start_line, -1]` shows all lines from `start_line` to the end of the file.
---- END FUNCTION #3 ----
The agent was instructed with the following:
* A python code repository has been uploaded in the directory /testbed.
* Implement the necessary changes to the repository so that the requirements specified in the <pr_description> are met.
* All changes to any of the test files described in the <pr_description> have already been taken care of. This means no need to modify the testing logic or any of the tests in any way.
* Make the minimal changes to non-tests files in the /testbed directory to ensure the <pr_description> is satisfied.
The agent was suggested to follow the following steps to resolve the issue:
1. As a first step, it might be a good idea to find and read code relevant to the <pr_description>
2. Create a script to reproduce the error and execute it with `python <filename.py>` using the bash tool, to confirm the error
3. Edit the source code of the repo to resolve the issue
4. Rerun your reproduce script and confirm that the error is fixed!
5. Think about edgecases and make sure your fix handles them as well
The agent was encouraged to think thoroughly, and it's fine if it's very long.
You are going to inspect this agent's interaction trajectory with a computer to solve the given task in the <pr_description>. One turn of interaction contains a pair of OBSERVATION and ACTION, where the OBSERVATION comes from the computer, and the ACTION is taken by the agent.
For each turn of interaction, determine which step (of the aforementioned five) this turn belongs to. Output a single number (1~5) ONLY in a separate line as your classification (DO NOT OUTPUT ANY OTHER WORDS THAN THE DIGIT).
You can think before making the inspection. When thinking, wrap your thought with <think> and </think>. Don't forget to output your final inspection after thinking.
Listing 15: The annotation prompt for SWE-Agent stages.
To provide a clearer understanding of the trajectory, we incorporate most of the tool descriptions and instructions from the SWE-Agent system prompt into the annotation system prompt. The annotation is conducted in a multi-round manner, leveraging the agent's previous actions and observations, as well as the stage classifications of earlier turns, to better exploit contextual information. At the $i$-th round of annotation, the observation-action pair from turn $i$ of the SWE-Agent trajectory is appended as input, and the annotator is expected to output the corresponding stage classification.
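The multi-round annotation loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `call_annotator` stands in for a chat-completion call to the annotator LLM (e.g. kimi-k2-0711-preview) and is a hypothetical callback, and the message format is an assumption.

```python
import re

def annotate_stages(system_prompt, turns, call_annotator):
    """Label each (observation, action) turn with a stage in 1..5."""
    messages = [{"role": "system", "content": system_prompt}]
    stages = []
    for obs, act in turns:
        # Append the next observation-action pair as the new user turn.
        messages.append({"role": "user",
                         "content": f"OBSERVATION:\n{obs}\nACTION:\n{act}"})
        reply = call_annotator(messages)  # hypothetical LLM call
        # Strip the optional <think>...</think> block, keep the final digit.
        visible = re.sub(r"<think>.*?</think>", "", reply, flags=re.DOTALL)
        match = re.search(r"\b([1-5])\b", visible)
        stage = int(match.group(1)) if match else 0  # 0 marks a parse failure
        stages.append(stage)
        # Feed the classification back so later rounds see earlier labels.
        messages.append({"role": "assistant", "content": str(stage)})
    return stages
```

Keeping earlier classifications in the growing message list is what lets the annotator exploit context, e.g. preferring Stage 4 over Stage 2 for a rerun of an already-seen script.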
G.2 Comparative Study
Based on the automatic stage annotation in the above section, we present a comparative study by inspecting the performance on sympy__sympy-20590 of Kimi-Dev under Agentless and of each of the Base, MT, SFT, and RL priors with SWE-Agent adaptation.
The problem statement of sympy__sympy-20590 is listed in Listing 16:
Symbol instances have __dict__ since 1.7?
In version 1.6.2 Symbol instances had no `__dict__` attribute
```python
>>> sympy.Symbol('s').__dict__
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-e2060d5eec73> in <module>
----> 1 sympy.Symbol('s').__dict__
AttributeError: 'Symbol' object has no attribute '__dict__'
>>> sympy.Symbol('s').__slots__
('name',)
```
This changes in 1.7 where `sympy.Symbol('s').__dict__` now exists (and returns an empty dict)
I may misinterpret this, but given the purpose of `__slots__`, I assume this is a bug, introduced because some parent class accidentally stopped defining `__slots__`.
Listing 16: The problem statement of sympy__sympy-20590.
It is observed that the main difficulty in resolving the issue lies in identifying the "some parent class" referenced in the problem. In fact, the hints text of this problem, which reflects the discussion of the developers under the original issue, reveals a much more in-depth investigation into the issue (Listing 17):
It seems that Basic now inherits `DefaultPrinting` which I guess doesn't have slots. I'm not sure if it's a good idea to add `__slots__` to that class as it would then affect all subclasses.
...
Using slots can break multiple inheritance but only if the slots are non-empty I guess. Maybe this means that any mixin should always declare empty slots or it won't work properly with subclasses that have slots...
I see that `EvalfMixin` has `__slots__ = ()`.
I guess we should add empty slots to DefaultPrinting then.
Listing 17: The excerpted hints text of sympy__sympy-20590.
According to the discussion, it is clear that the code repair would be to "add empty slots to DefaultPrinting", which naturally leads to navigation towards the file related to the implementation of the printer (sympy/core/_print_helpers.py, which is also the file updated by the ground-truth patch). However, the hints_text information in the test set is not allowed to be used in the problem-solving process, which challenges the reasoner or the agent to figure out "the parent class that stopped defining `__slots__`" autonomously.
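The failure mode the developers describe is a general Python semantics pitfall, sketched below with hypothetical class names (this is not sympy code): if any base class in the MRO lacks `__slots__`, instances of every subclass silently regain a `__dict__`, defeating the subclass's own slots; declaring empty slots on the mixin restores the intended behavior.

```python
class MixinNoSlots:            # analogous to the pre-fix Printable mixin
    pass

class MixinEmptySlots:         # analogous to Printable after the fix
    __slots__ = ()

class WithLeakyMixin(MixinNoSlots):
    __slots__ = ('name',)      # slots declared, but the mixin leaks __dict__

class WithStrictMixin(MixinEmptySlots):
    __slots__ = ('name',)      # every base is slotted, so no __dict__

print(hasattr(WithLeakyMixin(), '__dict__'))   # True: the bug's mechanism
print(hasattr(WithStrictMixin(), '__dict__'))  # False: the fix's effect
```

This is exactly why the hints text concludes that "any mixin should always declare empty slots".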
We first examine Kimi-Dev under Agentless. None of the 40 runs succeeded in producing the correct file localization. In most cases, the updates are made to sympy/core/symbol.py, which is a plausible choice since the reported problem is triggered by sympy.Symbol('s'), and symbol.py should contain the definition of the Symbol class. This line of reasoning, however, diverges from the correct solution. A small number of runs attempt alternative localizations, such as sympy/core/{basic.py, expr.py}, but these are also incorrect.
The limitation of the Agentless framework becomes more obvious when the focus is shifted to the inspection of the SWE-Agent trajectories. For all of the Base, MT, SFT, and RL priors with SWE-Agent adaptation, the resulting trajectories always contain a test-reproducing phase (marked as Stage "2" by the annotator), with a generated script that reads like:
```python
from sympy import Symbol

# Try to access __dict__ on a Symbol instance
s = Symbol('s')
print("Symbol.__slots__:", s.__slots__)
try:
    print("Symbol.__dict__:", s.__dict__)
except AttributeError as e:
    print("Error:", e)
```
Listing 18: An example reproducing script in Stage 2 within the SWE-Agent trajectories on sympy__sympy-20590.
It is trivial for an agent to generate a reproducing script of this form, as it directly transcribes the case described in the problem statement. Executing the script provides an unambiguous signal of whether the issue is resolved: the attribute s.__dict__ should no longer exist, and the expected Error: message should appear in the output. Indeed, observing the initial completion of Stage 3 across all trajectories reveals that none of the agents succeeded on their first attempt. However, the incomplete status is consistently exposed in Stage 4, when the script is rerun and s.__dict__ still persists. This progression highlights the advantage of the SWE-Agent framework over the Agentless paradigm, as the agent is afforded additional opportunities for iterative refinement rather than being limited to a single attempt.
We continue to analyze the behavior of the agents adapted from the Base, MT, and SFT priors. Due to the difficulty of the issue, it is non-trivial to correctly localize the error through their agentic reasoning: the interactions show that the agents switched among several candidate files but failed to reach the correct one. The only exception is the agent adapted from the RL prior, which marks its unique success on this instance through accurate agentic reasoning. We attach the intermediate process from the initial Stage-3 moment to the Stage-3 revisit (refining the initial code repair) within the successful trajectory, tagged by the stage annotator:
...
OBSERVATION:
Symbol.__slots__: ('name',)
Symbol.__dict__: {}
ACTION:
Hmm, it seems like the issue is still there. Let's check if there are any other classes in the hierarchy that might be missing `__slots__`:
<function=bash>
<parameter=command>python -c "from sympy import Symbol; s = Symbol('s'); for cls in type(s).__mro__: print(cls.__name__, 'has __slots__:', hasattr(cls, '__slots__'))"</parameter>
</function>
OBSERVATION:
File "<string>", line 1
    from sympy import Symbol; s = Symbol('s'); for cls in type(s).__mro__: print(cls.__name__, 'has __slots__:', hasattr(cls, '__slots__'))
                                               ^
SyntaxError: invalid syntax
ACTION:
Let's try a different approach:
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/check_slots.py</parameter>
<parameter=file_text>
from sympy import Symbol

s = Symbol('s')
for cls in type(s).__mro__:
    print(f"{cls.__name__:15} has __slots__: {hasattr(cls, '__slots__')}")
    if hasattr(cls, '__slots__'):
        print(f"{' '*16} __slots__ value: {cls.__slots__}")
</parameter>
</function>
OBSERVATION:
File created successfully at: /testbed/check_slots.py
ACTION:
<function=bash>
<parameter=command>python /testbed/check_slots.py</parameter>
</function>
OBSERVATION:
Symbol          has __slots__: True
                 __slots__ value: ('name',)
AtomicExpr      has __slots__: True
                 __slots__ value: ()
Atom            has __slots__: True
                 __slots__ value: ()
Expr            has __slots__: True
                 __slots__ value: ()
Boolean         has __slots__: True
                 __slots__ value: ()
Basic           has __slots__: True
                 __slots__ value: ('_mhash', '_args', '_assumptions')
Printable       has __slots__: False
EvalfMixin      has __slots__: True
                 __slots__ value: ()
object          has __slots__: False
ACTION:
Now I see the issue! `Printable` doesn't have `__slots__` defined. Let's check the `Printable` class:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/sympy/core/_print_helpers.py</parameter>
<parameter=view_range>[1, 30]</parameter>
</function>
OBSERVATION:
Here's the result of running `cat -n` on /testbed/sympy/core/_print_helpers.py:
     1  """
     2  Base class to provide str and repr hooks that `init_printing` can overwrite.
     3
     4  This is exposed publicly in the `printing.defaults` module,
     5  but cannot be defined there without causing circular imports.
     6  """
     7
     8  class Printable:
     9      """
    10      The default implementation of printing for SymPy classes.
    11
    12      This implements a hack that allows us to print elements of built-in
    13      Python containers in a readable way. Natively Python uses ``repr()``
    14      even if ``str()`` was explicitly requested. Mix in this trait into
    15      a class to get proper default printing.
    16
    17      This also adds support for LaTeX printing in jupyter notebooks.
    18      """
    19
    20      # Note, we always use the default ordering (lex) in __str__ and __repr__,
    21      # regardless of the global setting. See issue 5487.
    22      def __str__(self):
    23          from sympy.printing.str import sstr
    24          return sstr(self, order=None)
    25
    26      __repr__ = __str__
    27
    28      def _repr_disabled(self):
    29          """
    30          No-op repr function used to disable jupyter display hooks.
ACTION:
We need to add `__slots__ = ()` to the `Printable` class as well:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/sympy/core/_print_helpers.py</parameter>
<parameter=old_str>
class Printable:
    """
    The default implementation of printing for SymPy classes.

    This implements a hack that allows us to print elements of built-in
    Python containers in a readable way. Natively Python uses ``repr()``
    even if ``str()`` was explicitly requested. Mix in this trait into
    a class to get proper default printing.

    This also adds support for LaTeX printing in jupyter notebooks.
    """
</parameter>
<parameter=new_str>
class Printable:
    """
    The default implementation of printing for SymPy classes.

    This implements a hack that allows us to print elements of built-in
    Python containers in a readable way. Natively Python uses ``repr()``
    even if ``str()`` was explicitly requested. Mix in this trait into
    a class to get proper default printing.

    This also adds support for LaTeX printing in jupyter notebooks.
    """
    __slots__ = ()
</parameter>
</function>
...
Listing 19: The intermediate reflection process between the two Stage-3 moments of the successful trajectory on sympy__sympy-20590.
Listing 19 shows an advanced reflection capability: by checking slots along the entire class hierarchy, the agent successfully arrives at the observation that Printable is the erroneous class. The agent's subsequent behavior includes the routine test rerun and the observation that the test script (Listing 18) passes. This chain of actions recovers and resonates with the developer discussion in hints_text (Listing 17), showing the superior reflection skill of the RL prior over the other priors.
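The MRO walk the agent improvised generalizes to a small diagnostic, sketched below with hypothetical class names (this is not code from the trajectory). Note one refinement over the agent's `hasattr` check: `hasattr(cls, '__slots__')` also sees slots inherited from other bases, so testing each class's own `__dict__` (via `vars`) pinpoints exactly which base forgot to declare slots.

```python
def slotless_bases(cls):
    """Return the names of classes in cls's MRO (excluding object) that do
    not define __slots__ of their own."""
    return [c.__name__ for c in cls.__mro__
            if c is not object and '__slots__' not in vars(c)]

class Mixin:                 # forgets __slots__, like the pre-fix Printable
    pass

class Leaf(Mixin):
    __slots__ = ('name',)

print(slotless_bases(Leaf))  # ['Mixin']
```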
G.3 Pattern Comparison between SFT with Expert Data and End-to-End RL
In this section, we compare the patterns of the interaction trajectories of the RL prior that undergoes (i) $1.5\times 2^{28}$ SWE-Agent expert SFT tokens (referred to below as "heavy SFT") and (ii) $2^{21}$ SWE-Agent expert SFT tokens plus end-to-end SWE-Agent RL (referred to below as "light SFT + end-to-end RL") as a qualitative study for Figure 7.
We select an instance solved by both methods, matplotlib__matplotlib-13989, as a case study, and attach the actions along the two interaction trajectories for comparison.
The action patterns of the heavy SFT model read as Listing 20:
ACTION 0:
I'll help you implement the necessary changes to fix the issue where `hist()` doesn't respect `range=...` when `density=True`. Let's follow the steps:
1. First, let's find and read the relevant code:
<function=bash>
<parameter=command>find /testbed -type f -name "*.py" | grep -v "test_"</parameter>
</function>
ACTION 1:
Let's look for the histogram implementation since that's what we need to fix. The PR mentions that the bug is in the `hist()` function when `density=True` and `range=...` is specified.
<function=bash>
<parameter=command>grep -r "def hist" --include="*.py" /testbed | grep -v "test_"</parameter>
</function>
ACTION 2:
The main histogram implementation appears to be in `/testbed/lib/matplotlib/axes/_axes.py`. Let's look at that file:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
</function>
ACTION 3:
Let me look at the histogram implementation specifically:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6365, 6899]</parameter>
</function>
ACTION 4:
Let me also look at the `histogram_bin_edges` function since it's used in the code:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[40, 60]</parameter>
</function>
ACTION 5:
2. Let's create a script to reproduce the error:
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/reproduce.py</parameter>
...
ACTION 6:
Let's run it to confirm the issue:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 7:
The issue is confirmed: when `density=True` is specified, the `range=(0, 1)` parameter is not respected. The bins should start at 0 and end at 1, but they don't.
3. Let's fix the issue. Looking at the code, the problem is in the `hist` method and specifically around line 6689-6690:
```python
if density and not stacked:
    hist_kwargs = dict(density=density)
```
The issue is that when `density=True`, the `range` parameter gets lost because we're not including it in the `hist_kwargs`. Let's modify the code to fix this:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
...
ACTION 8:
4. Let's run the reproduce script again to confirm the fix:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 9:
Great! The fix works. Now the bins start at 0 and end at 1 as expected.
5. Let's think about edge cases:
1. When `range=None` - This should use the min/max of the data, which is already handled correctly by the existing code
2. When `range` is specified but `density=False` - The range should be respected, which it already is
3. When `range` is specified and `density=True` - The range should be respected, which our fix now ensures
4. When `bins` is a sequence - The range should be ignored, which is already handled by the code
5. When `range` includes values outside the data range - The histogram should include empty bins for the range outside the data, which is already handled by numpy's histogram function
Let me test some of these edge cases to be sure:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/reproduce.py</parameter>
...
ACTION 10:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 11:
All test cases look good:
1. When `range=(0, 1)` and `density=True`, the bins start at 0 and end at 1
2. When `range=None` and `density=True`, the bins use the min/max of the data
3. When `range=(0, 1)` and `density=False`, the bins start at 0 and end at 1
4. When `bins` is a sequence, the range parameter is ignored and the sequence is used directly
5. When `range=(-0.5, 1.5)`, the bins start at -0.5 and end at 1.5
The fix looks good and handles all edge cases correctly. Let me submit the changes:
<function=submit>
</function>
Listing 20: The interaction made by the RL prior with $1.5\times 2^{28}$ SWE-Agent SFT tokens on matplotlib__matplotlib-13989.
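The root cause both trajectories converge on can be distilled into a few lines. The sketch below is illustrative, not matplotlib source: `buggy_hist` and `fixed_hist` are hypothetical helpers reproducing, respectively, the overwriting assignment `hist_kwargs = dict(density=density)` that discards the stored `range`, and an in-place update that preserves it.

```python
import numpy as np

def buggy_hist(x, bins=10, bin_range=None, density=False):
    hist_kwargs = {}
    if bin_range is not None:
        hist_kwargs['range'] = bin_range
    if density:
        hist_kwargs = dict(density=density)   # BUG: 'range' is lost here
    return np.histogram(x, bins, **hist_kwargs)

def fixed_hist(x, bins=10, bin_range=None, density=False):
    hist_kwargs = {}
    if bin_range is not None:
        hist_kwargs['range'] = bin_range
    if density:
        hist_kwargs['density'] = density      # update in place, keep 'range'
    return np.histogram(x, bins, **hist_kwargs)

data = np.random.default_rng(0).random(10)    # values strictly inside (0, 1)
_, edges_buggy = buggy_hist(data, bins=5, bin_range=(0, 1), density=True)
_, edges_fixed = fixed_hist(data, bins=5, bin_range=(0, 1), density=True)
print(edges_buggy[0], edges_buggy[-1])  # data min/max: range was dropped
print(edges_fixed[0], edges_fixed[-1])  # 0.0 1.0: range respected
```

Replacing the rebinding with an in-place key update is the one-line shape of the repair both models apply to `_axes.py`.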
The action patterns of the light SFT + end-to-end RL model read as Listing 21:
ACTION 0:
Understood. Let's tackle this problem step by step.
First, I need to find the relevant code in the /testbed directory. The user mentioned that the code is in /testbed, so I'll start by listing the files there.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed</parameter>
</function>
ACTION 1:
Hmm, the /testbed directory has a lot of files, but I need to focus on the code related to the hist function. The hist function is part of matplotlib.pyplot, so I should look for the pyplot module in the /testbed/lib/matplotlib directory.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/pyplot.py</parameter>
</function>
ACTION 2:
Okay, I'll look at the hist function in pyplot.py.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/pyplot.py</parameter>
<parameter=view_range>[2610, 2625]</parameter>
</function>
ACTION 3:
The hist function in pyplot.py simply calls the hist method of the current axes. So the actual implementation is in the Axes class.
I'll look for the Axes.hist method in the /testbed/lib/matplotlib/axes directory.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes</parameter>
</function>
ACTION 4:
The Axes.hist method is likely in _axes.py, which is where most of the plotting methods are defined.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
</function>
ACTION 5:
I'll look at the hist method in _axes.py.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6365, 6400]</parameter>
</function>
ACTION 6:
I'll look at more of the hist method to see where the range is handled.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6400, 6450]</parameter>
</function>
ACTION 7:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6450, 6500]</parameter>
</function>
ACTION 8:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6500, 6550]</parameter>
</function>
ACTION 9:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6550, 6600]</parameter>
</function>
ACTION 10:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6600, 6650]</parameter>
</function>
ACTION 11:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6650, 6700]</parameter>
</function>
ACTION 12:
Ah, here's the key part. The code calculates the bins using np.histogram, and if the range is specified, it's passed to histogram_bin_edges. But when density is True, the code may be overriding the range.
Wait, in the code:
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
But when density is True, the code may be using the default range of the data, not the specified range.
Wait, no. The code for calculating the bins is:
    if bins is None:
        bins = rcParams['hist.bins']
    if bin_range is not None:
        bin_range = self.convert_xunits(bin_range)
    if bin_range is None:
        xmin = np.inf
        xmax = -np.inf
        for xi in x:
            if len(xi):
                xmin = min(xmin, np.nanmin(xi))
                xmax = max(xmax, np.nanmax(xi))
        if not np.isnan([xmin, xmax]).any() and not (xmin > xmax):
            bin_range = (xmin, xmax)
    # If bins are not specified either explicitly or via range,
    # we need to figure out the range required for all datasets,
    # and supply that to np.histogram.
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
    # List to store all the top coordinates of the histograms
    tops = []
    mlast = None
    # Loop through datasets
    for i in range(nx):
        # this will automatically overwrite bins,
        # so that each histogram uses the same bins
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
        m = m.astype(float)  # causes problems later if it's an int
        if mlast is None:
But when density is True, the code passes density=True to np.histogram. However, when density is True, np.histogram normalizes the counts, but the bin edges are still determined by the range.
Wait, but the problem is that when density=True, the range is not respected. The user's code is:
    _, bins, _ = plt.hist(np.random.rand(10), "auto", range=(0, 1), density=True)
    print(bins)
and the output is:
    [0.00331535 0.18930174 0.37528813 0.56127453 0.74726092 0.93324731]
which is not from 0 to 1. But when density=False, the range is respected.
This suggests that when density=True, the code is not using the specified range for the bins.
But why?
Let me look at the code again.
When the code calculates the bins:
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
But when len(x) is 1 (as in the user's code), the code enters the else clause and sets hist_kwargs['range'] = bin_range.
Then, in the loop:
    for i in range(nx):
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
But wait, if bins is None, then np.histogram will use the range. But if bins is a string (like 'auto'), then np.histogram will calculate the bins based on the data and the range.
Wait, in the user's code, bins is 'auto' and range is (0, 1). When density=True, the code should calculate the bins based on the range (0, 1).
But in the code, when len(x) is 1, the code sets hist_kwargs['range'] = bin_range. Then, in the loop, the code calls np.histogram(x[i], bins, weights=w[i], **hist_kwargs).
But if bins is a string (like 'auto'), then np.histogram will calculate the bins based on the data and the range. So the range should be respected.
But in the user's code, the bins are not from 0 to 1. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is puzzling. Let me try to reproduce the error.
I'll create a script in the /testbed directory.
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/reproduce_error.py</parameter>
...
ACTION 13:
I β ll run the script.
< function = bash >
< parameter = command > python / testbed / reproduce_error. py </ parameter >
</ function >
ACTION 14:
Hmm, the output is not from 0 to 1. This confirms the error.
Now, I need to find where in the code the range is not being passed when density=True.
Let me look at the code again.
When len(x) is 1, the code sets hist_kwargs['range'] = bin_range. Then, in the loop:
for i in range(nx):
    m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
But if bins is a string (like 'auto'), then np.histogram will calculate the bins based on the data and the range. So the range should be respected.
But in the user's code, the bins are not from 0 to 1. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is confusing. Let me look at the code for np.histogram.
The np.histogram function has a range parameter. If range is specified, the bin edges are calculated based on that range. If the data is outside the range, it is ignored.
In the user's code, the data is between 0 and 1, so the range (0, 1) should be respected.
But when density=True, the code is not respecting the range. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is puzzling. Let me try to modify the code in _axes.py.
I'll look for the line where hist_kwargs is set.
In the code:
density = bool(density) or bool(normed)
if density and not stacked:
    hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range.
Ah, here's the problem. If density is True and stacked is False, the code sets hist_kwargs to dict(density=density), which overwrites any previous hist_kwargs, including the range.
This is the root of the problem. The code is overwriting hist_kwargs, which includes the range, when density is True and stacked is False.
So, in the code:
density = bool(density) or bool(normed)
if density and not stacked:
    hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range. So the range is not passed to np.histogram.
This is the cause of the problem. The solution is to add the range to hist_kwargs in this case.
So, the code should be:
density = bool(density) or bool(normed)
if density and not stacked:
    hist_kwargs.setdefault('density', density)
But wait, hist_kwargs may already have the range. So, the code should not overwrite hist_kwargs, but add to it.
The code is:
density = bool(density) or bool(normed)
if density and not stacked:
    hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range. So the range is lost.
The solution is to modify this code to add to hist_kwargs, not overwrite it.
So, the code should be:
density = bool(density) or bool(normed)
if density and not stacked:
    hist_kwargs.setdefault('density', density)
But wait, hist_kwargs may not exist. Let me look at the code.
The code for hist_kwargs is:
hist_kwargs = dict()

# if the bin_range is not given, compute without nan numpy
# does not do this for us when guessing the range (but will
# happily ignore nans when computing the histogram).
if bin_range is None:
    xmin = np.inf
    xmax = -np.inf
    for xi in x:
        if len(xi):
            xmin = min(xmin, np.nanmin(xi))
            xmax = max(xmax, np.nanmax(xi))
    # make sure we have seen at least one non-nan and finite
    # value before we reset the bin range
    if not np.isnan([xmin, xmax]).any() and not (xmin > xmax):
        bin_range = (xmin, xmax)

# If bins are not specified either explicitly or via range,
# we need to figure out the range required for all datasets,
# and supply that to np.histogram.
if not input_empty and len(x) > 1:
    if weights is not None:
        _w = np.concatenate(w)
    else:
        _w = None
    bins = histogram_bin_edges(np.concatenate(x),
                               bins, bin_range, _w)
else:
    hist_kwargs['range'] = bin_range

density = bool(density) or bool(normed)
if density and not stacked:
    hist_kwargs = dict(density=density)

# List to store all the top coordinates of the histograms
tops = []
mlast = None
# Loop through datasets
for i in range(nx):
    # this will automatically overwrite bins,
    # so that each histogram uses the same bins
    m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
    m = m.astype(float)  # causes problems later if it's an int
    if mlast is None:
Ah, here's the problem. The code sets hist_kwargs = dict(density=density), which overwrites any previous hist_kwargs, including the range.
So, if density is True and stacked is False, the code overwrites hist_kwargs with dict(density=density), which does not include the range. So the range is not passed to np.histogram.
This is the cause of the problem. The solution is to modify this code to add to hist_kwargs, not overwrite it.
So, the code should be:
density = bool(density) or bool(normed)
if density and not stacked:
    hist_kwargs.setdefault('density', density)
But hist_kwargs may not exist. Wait, hist_kwargs is initialized as:
hist_kwargs = dict()
So, the code can be:
density = bool(density) or bool(normed)
if density and not stacked:
    hist_kwargs['density'] = density
This way, hist_kwargs includes the density and the range.
This would fix the problem. Let me test this.
I'll modify the code in _axes.py.
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
...
ACTION 14:
I'll run the reproduce_error.py script again.
<function=bash>
<parameter=command>python /testbed/reproduce_error.py</parameter>
</function>
ACTION 15:
Great, the fix works. The bins are now from 0 to 1.
Now, I can submit the changes.
<function=submit>
</function>
Listing 21: The interaction made by the RL prior with $2^{21}$ SWE-Agent SFT tokens and end-to-end SWE-Agent RL on matplotlib__matplotlib-13989.
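For reference, the root cause the trajectory converges on, rebinding hist_kwargs and thereby discarding its 'range' entry, can be reproduced outside matplotlib with a minimal NumPy sketch (the variable names here are illustrative, not matplotlib's own):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(10)  # values strictly inside (0, 1)

# Buggy path: hist_kwargs is rebound, so the 'range' entry is lost.
hist_kwargs = {"range": (0, 1)}
hist_kwargs = dict(density=True)  # overwrites the dict, dropping 'range'
_, edges_buggy = np.histogram(data, bins="auto", **hist_kwargs)

# Fixed path: add 'density' without discarding 'range'.
hist_kwargs = {"range": (0, 1)}
hist_kwargs["density"] = True
_, edges_fixed = np.histogram(data, bins="auto", **hist_kwargs)

print(edges_buggy[0], edges_buggy[-1])  # spans only min(data)..max(data)
print(edges_fixed[0], edges_fixed[-1])  # 0.0 1.0
```

With `range` present, np.histogram anchors the first and last bin edges at 0 and 1; without it, the "auto" estimator derives the edges from the data extremes, which is exactly the symptom reported in the issue.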
It can be seen that the two types of patterns are significantly different from each other:
- The patterns of the heavy SFT model appear to be more regulated, with explicitly labeled steps 1 to 5 along the intermediate reasoning process. The reasoning content is generally short as well. This reflects the behavior of the model shaped by the heavy SFT data from the proprietary models.
- The patterns of the light SFT + end-to-end RL model, in contrast, are much less regulated. More turns of interaction are spent on repo exploration, and a significantly larger number of tokens are generated at the code edit steps before the action is made. This reflects the behavior shaped by end-to-end RL (and is less impacted by the expert data): the reasoning-intensive skill priors are retained in the agentic scenario.
G.4 Generalization to Other Tasks
In this section, we study whether the skills incentivized by Agentless training and SWE-Agent adaptation through SWE-Smith trajectories could transfer to other SWE tasks. We use SWE-bench-Live and SWE-bench Multilingual as our testbeds.
SWE-bench-Live (zhang2025swe) is a benchmark for real-world issue resolution, evaluating AI systems on software engineering tasks. Using an automated curation pipeline, it is updated monthly to ensure fresh tasks and rigorous, contamination-free evaluation. For our experiments, we selected the default set of 300 tasks, with data collected between October 2024 and March 2025. Compared to SWE-bench Verified, SWE-bench-Live exhibits a higher degree of distributional shift.
SWE-bench Multilingual (yang2025swesmith) introduces 300 curated tasks from 42 GitHub repositories across 9 programming languages, including Rust, Java, PHP, Ruby, JavaScript/TypeScript, Go, and C/C++, covering domains such as web frameworks, data tools, core utilities, and libraries. Compared to SWE-bench Verified, which focuses exclusively on Python, SWE-bench Multilingual exhibits greater linguistic and domain diversity, posing additional challenges in cross-language generalization and transferability of software engineering capabilities.
Similar to previous experiments, we evaluated four model stages as the priors: the original Qwen2.5-72B (Base), the mid-trained model (MT), the model activated with reasoning data through supervised finetuning (SFT), and the model after RL training (RL). We still use the open-source SWE-Smith trajectories to activate the agentic capabilities of each prior.
Figures 11 and 12 show the performance of the four priors on SWE-bench-Live and SWE-bench Multilingual under varied amounts of agentic trajectories for adaptation ($2^{21}$ tokens as one-step gradient descent; $2^{23}$, $1.1\times 2^{27}$, and $1.5\times 2^{28}$ tokens as 100, 2,000, and 5,016 training trajectories). Each SWE-Agent adaptation experiment is conducted through lightweight supervised finetuning, whose training time ranges from several minutes to two hours at most.
Compared to the Base prior, those specifically enhanced with Agentless skills (SFT and RL) demonstrate stronger task generalization, especially under the data-scarce settings. However, when more SWE-Smith trajectories are used for adaptation, the performances of the Base and the MT priors become closer to those of the SFT and the RL priors. This could be attributed to the gaps between the different SWE tasks. The exploration for recipes that enable stronger out-of-distribution and task-agnostic generalization is left for future work.
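The pass@k curves in Figures 11 and 12 can be estimated with the standard unbiased estimator of Chen et al. (2021); the paper does not state whether it uses this estimator or a plain empirical average over rollouts, so the sketch below is a reference formula rather than the authors' exact evaluation code:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, solves the task."""
    if n - c < k:
        return 1.0  # every size-k draw必contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 3 rollouts on one task, 1 of them correct.
print([round(pass_at_k(3, 1, k), 3) for k in (1, 2, 3)])  # [0.333, 0.667, 1.0]
```

Averaging this quantity over all benchmark tasks yields the reported pass@k percentage; with n = k = 3 rollouts it reduces to the simple fraction of tasks solved at least once.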
<details>
<summary>figs/sec4_generalization_plots/l-100-s1.png Details</summary>

### Visual Description
## Line Chart: Pass@k vs. k
### Overview
The image is a line chart comparing the performance of four different methods (RL, SFT, MT, and Base) based on the "Pass@k" metric for varying values of 'k' (1, 2, and 3). The chart displays the percentage of successful attempts (Pass@k) on the y-axis against the value of 'k' on the x-axis.
### Components/Axes
* **X-axis:** Labeled "k" with values 1, 2, and 3.
* **Y-axis:** Labeled "Pass@k (%)" with values ranging from 0.0 to 12.5, incrementing by 2.5.
* **Legend:** Located in the top-left corner, associating colors with methods:
* Red: RL
* Orange: SFT
* Purple: MT
* Blue: Base
### Detailed Analysis
* **RL (Red):** The line slopes upward.
* k=1: Approximately 4.8%
* k=2: Approximately 7.2%
* k=3: Approximately 9.4%
* **SFT (Orange):** The line slopes upward.
* k=1: Approximately 4.5%
* k=2: Approximately 8.0%
* k=3: Approximately 11.0%
* **MT (Purple):** The line slopes slightly upward.
* k=1: Approximately 1.8%
* k=2: Approximately 2.0%
* k=3: Approximately 2.4%
* **Base (Blue):** The line slopes slightly upward.
* k=1: Approximately 1.1%
* k=2: Approximately 2.0%
* k=3: Approximately 2.6%
### Key Observations
* SFT outperforms RL at k=2 and k=3, while the two are nearly tied at k=1.
* MT and Base have significantly lower Pass@k values compared to RL and SFT.
* All methods show an increase in Pass@k as k increases from 1 to 3.
* The performance gap between SFT/RL and MT/Base is substantial.
### Interpretation
The chart suggests that SFT and RL are significantly more effective methods than MT and Base, as indicated by their higher Pass@k values. The increasing trend of Pass@k for all methods as 'k' increases implies that allowing more attempts ('k') leads to a higher success rate. The large performance difference between SFT/RL and MT/Base could be due to differences in their underlying algorithms or training methodologies.
</details>
(a) #SFT $=2^{21}$ .
<details>
<summary>figs/sec4_generalization_plots/l-100.png Details</summary>

### Visual Description
## Line Chart: Pass@k vs k
### Overview
The image is a line chart comparing the performance of four different methods (RL, SFT, MT, and Base) based on the "Pass@k" metric for varying values of 'k' (1, 2, and 3). The chart plots "Pass@k (%)" on the y-axis and 'k' on the x-axis. Each method is represented by a distinct colored line.
### Components/Axes
* **X-axis:** 'k' with values 1, 2, and 3.
* **Y-axis:** "Pass@k (%)" ranging from 0.0 to 12.5, with increments of 2.5.
* **Legend:** Located on the right side of the chart, associating colors with methods:
* Red: RL
* Orange: SFT
* Purple: MT
* Blue: Base
### Detailed Analysis
* **RL (Red):** The red line represents the RL method. It starts at approximately 9% at k=1, increases to about 10.7% at k=2, and reaches approximately 12.3% at k=3.
* **SFT (Orange):** The orange line represents the SFT method. It starts at approximately 5.5% at k=1, increases to about 7.7% at k=2, and reaches approximately 9.8% at k=3.
* **MT (Purple):** The purple line represents the MT method. It starts at approximately 6.2% at k=1, increases to about 9% at k=2, and reaches approximately 12.5% at k=3.
* **Base (Blue):** The blue line represents the Base method. It starts at approximately 2.7% at k=1, increases to about 5.1% at k=2, and reaches approximately 8% at k=3.
### Key Observations
* All methods show an increasing trend in "Pass@k (%)" as 'k' increases.
* The MT method shows the highest increase in "Pass@k (%)" between k=1 and k=3.
* The Base method consistently shows the lowest "Pass@k (%)" values across all 'k' values.
* The RL method starts with a high "Pass@k (%)" at k=1, but its increase is less steep compared to MT.
### Interpretation
The chart suggests that increasing the value of 'k' generally improves the "Pass@k" performance for all tested methods. The MT method appears to benefit the most from increasing 'k', as it shows the steepest increase in "Pass@k (%)". The Base method consistently underperforms compared to the other methods. The RL method, while starting with a relatively high "Pass@k (%)" at k=1, does not increase as much as the MT method as 'k' increases. This indicates that the MT method may be more effective at leveraging higher values of 'k' to improve performance.
</details>
(b) #SFT $=2^{23}$ .
<details>
<summary>figs/sec4_generalization_plots/l-2000.png Details</summary>

### Visual Description
## Line Chart: Pass@k (%) vs. k
### Overview
The image is a line chart comparing the performance of four different models (RL, SFT, MT, and Base) based on the "Pass@k (%)" metric for varying values of 'k' (1, 2, and 3). The chart displays how the performance of each model changes as 'k' increases.
### Components/Axes
* **X-axis:** 'k' with values 1, 2, and 3.
* **Y-axis:** "Pass@k (%)" ranging from 5.0 to 17.5, with increments of 2.5.
* **Legend:** Located on the right side of the chart, associating colors with model names:
* Red: RL
* Orange: SFT
* Purple: MT
* Blue: Base
### Detailed Analysis
* **RL (Red):** The red line represents the RL model.
* At k=1, Pass@k (%) is approximately 10.5%.
* At k=2, Pass@k (%) is approximately 13.5%.
* At k=3, Pass@k (%) is approximately 15.0%.
* Trend: The RL model's performance increases as 'k' increases.
* **SFT (Orange):** The orange line represents the SFT model.
* At k=1, Pass@k (%) is approximately 11.0%.
* At k=2, Pass@k (%) is approximately 12.5%.
* At k=3, Pass@k (%) is approximately 14.8%.
* Trend: The SFT model's performance increases as 'k' increases.
* **MT (Purple):** The purple line represents the MT model.
* At k=1, Pass@k (%) is approximately 10.5%.
* At k=2, Pass@k (%) is approximately 12.3%.
* At k=3, Pass@k (%) is approximately 14.5%.
* Trend: The MT model's performance increases as 'k' increases.
* **Base (Blue):** The blue line represents the Base model.
* At k=1, Pass@k (%) is approximately 6.3%.
* At k=2, Pass@k (%) is approximately 11.8%.
* At k=3, Pass@k (%) is approximately 14.0%.
* Trend: The Base model's performance increases as 'k' increases.
### Key Observations
* All four models show an increase in "Pass@k (%)" as 'k' increases from 1 to 3.
* The Base model starts with the lowest "Pass@k (%)" at k=1 but shows a significant increase, almost catching up with the other models at k=3.
* The RL model has the highest "Pass@k (%)" at k=3.
* The SFT and MT models have similar performance across all values of 'k'.
### Interpretation
The chart illustrates the impact of increasing 'k' on the "Pass@k (%)" metric for different models. The upward trend for all models suggests that increasing 'k' generally improves performance. The Base model's substantial improvement indicates that it benefits more from increasing 'k' compared to the other models. The RL model's consistently high performance suggests it is the most effective among the four, especially at higher values of 'k'. The similar performance of SFT and MT models implies that they have comparable effectiveness in this context.
</details>
(c) #SFT $=1.1\times 2^{27}$.
<details>
<summary>figs/sec4_generalization_plots/l-5000.png Details</summary>

### Visual Description
## Line Chart: Pass@k (%) vs. k
### Overview
The image is a line chart comparing the performance of four different models (RL, SFT, MT, and Base) based on the "Pass@k (%)" metric for varying values of 'k' (1, 2, and 3). The chart displays how the performance of each model changes as 'k' increases.
### Components/Axes
* **X-axis:** 'k' (values: 1, 2, 3)
* **Y-axis:** Pass@k (%) (values: 5.0, 7.5, 10.0, 12.5, 15.0, 17.5)
* **Legend:** Located on the right side of the chart.
* Red line: RL
* Orange line: SFT
* Purple line: MT
* Blue line: Base
### Detailed Analysis
* **RL (Red):** The red line represents the RL model. It starts at approximately 11.7% at k=1, increases to about 13.3% at k=2, and reaches approximately 14.3% at k=3. The trend is upward, indicating improved performance with increasing 'k'.
* **SFT (Orange):** The orange line represents the SFT model. It starts at approximately 12.3% at k=1, increases to about 14.0% at k=2, and reaches approximately 15.7% at k=3. The trend is upward, indicating improved performance with increasing 'k'.
* **MT (Purple):** The purple line represents the MT model. It starts at approximately 12.7% at k=1, increases to about 15.3% at k=2, and reaches approximately 16.7% at k=3. The trend is upward, indicating improved performance with increasing 'k'.
* **Base (Blue):** The blue line represents the Base model. It starts at approximately 8.7% at k=1, increases to about 12.0% at k=2, and reaches approximately 12.3% at k=3. The trend is upward, indicating improved performance with increasing 'k'.
### Key Observations
* The MT model consistently outperforms the other models across all values of 'k'.
* The Base model has the lowest performance compared to the other models.
* All models show an increase in "Pass@k (%)" as 'k' increases from 1 to 3.
* The performance gap between the Base model and the other models narrows slightly as 'k' increases.
### Interpretation
The chart demonstrates that increasing the value of 'k' generally improves the "Pass@k (%)" metric for all models tested. The MT model exhibits the best performance, suggesting it is the most effective among the four models. The Base model's lower performance indicates it may benefit from further optimization or a different approach. The upward trends for all models suggest that allowing for more attempts ('k') leads to a higher probability of passing the evaluation criteria.
</details>
(d) #SFT $=1.5\times 2^{28}$.
Figure 11: Generalization analysis on SWE-bench-Live.
<details>
<summary>figs/sec4_generalization_plots/m-100-s1.png Details</summary>

### Visual Description
## Line Chart: Pass@k vs. k
### Overview
The image is a line chart comparing the performance of four different methods (RL, SFT, MT, and Base) based on the "Pass@k" metric for varying values of 'k' (1, 2, and 3). The chart displays the relationship between 'k' and the percentage of "Pass@k" for each method.
### Components/Axes
* **X-axis:** 'k' with values 1, 2, and 3.
* **Y-axis:** "Pass@k (%)" with a scale from 0 to 20, incrementing by 5.
* **Legend:** Located on the right side of the chart.
* RL (Red line with circle markers)
* SFT (Orange line with circle markers)
* MT (Purple line with circle markers)
* Base (Blue line with circle markers)
### Detailed Analysis
* **RL (Red):** The red line represents the RL method. It starts at approximately 8.5% at k=1, increases to about 16.8% at k=2, and reaches approximately 19.7% at k=3. The trend is upward.
* **SFT (Orange):** The orange line represents the SFT method. It starts at approximately 12% at k=1, increases to about 17.8% at k=2, and reaches approximately 21.5% at k=3. The trend is upward.
* **MT (Purple):** The purple line represents the MT method. It starts at approximately 2% at k=1, remains relatively constant at approximately 2% at k=2, and remains relatively constant at approximately 2.4% at k=3. The trend is relatively flat.
* **Base (Blue):** The blue line represents the Base method. It starts at approximately 1% at k=1, increases to about 1.8% at k=2, and reaches approximately 4% at k=3. The trend is upward.
### Key Observations
* SFT consistently outperforms the other methods across all values of 'k'.
* RL performs second best, showing a significant improvement as 'k' increases.
* MT shows almost no change in performance as 'k' increases.
* Base performs the worst, but shows some improvement as 'k' increases.
### Interpretation
The chart suggests that the SFT method is the most effective in terms of the "Pass@k" metric, followed by RL. The MT method appears to be largely unaffected by changes in 'k', while the Base method shows some improvement but remains the least effective. The increasing trend of RL, SFT, and Base suggests that increasing 'k' generally improves performance for these methods, while MT is insensitive to 'k'.
</details>
(a) #SFT $=2^{21}$ .
<details>
<summary>figs/sec4_generalization_plots/m-100.png Details</summary>

### Visual Description
## Line Chart: Pass@k vs. k for Different Models
### Overview
The image is a line chart comparing the performance of four different models (RL, SFT, MT, and Base) based on the "Pass@k" metric for varying values of 'k'. The chart displays the relationship between 'k' (x-axis) and "Pass@k (%)" (y-axis) for each model.
### Components/Axes
* **X-axis:** 'k' with values 1, 2, and 3.
* **Y-axis:** "Pass@k (%)" ranging from 0 to 20, with tick marks at intervals of 5 (0, 5, 10, 15, 20).
* **Legend:** Located in the top-right corner, associating each model with a specific color:
* RL: Red
* SFT: Orange
* MT: Purple
* Base: Blue
* **Gridlines:** Light gray, dashed gridlines are present in the background.
### Detailed Analysis
* **RL (Red):** The red line represents the RL model. It shows an upward trend.
* k=1: Pass@k β 8% (marked with an 'x')
* k=2: Pass@k β 16% (marked with a circle)
* k=3: Pass@k β 21% (marked with a circle)
* **SFT (Orange):** The orange line represents the SFT model. It shows an upward trend.
* k=1: Pass@k β 6% (marked with an 'x')
* k=2: Pass@k β 9% (marked with a circle)
* k=3: Pass@k β 13% (marked with a circle)
* **MT (Purple):** The purple line represents the MT model. It shows an upward trend.
* k=1: Pass@k β 7% (marked with an 'x')
* k=2: Pass@k β 12% (marked with a circle)
* k=3: Pass@k β 17% (marked with a circle)
* **Base (Blue):** The blue line represents the Base model. It shows an upward trend.
* k=1: Pass@k β 3% (marked with an 'x')
* k=2: Pass@k β 8% (marked with a circle)
* k=3: Pass@k β 12% (marked with a circle)
### Key Observations
* The RL model consistently outperforms the other models across all values of 'k'.
* The Base model consistently shows the lowest performance.
* All models exhibit an increase in "Pass@k" as 'k' increases.
* The gap between the RL model and the other models appears to widen as 'k' increases.
### Interpretation
The chart demonstrates that the RL model has the highest "Pass@k" performance compared to SFT, MT, and Base models for the given values of 'k'. The upward trend for all models suggests that increasing 'k' generally improves the "Pass@k" metric, indicating a higher probability of finding a correct solution within the top 'k' attempts. The RL model's superior performance suggests it is more effective at generating correct solutions compared to the other models. The widening gap between RL and the other models as 'k' increases implies that RL's advantage becomes more pronounced with more attempts allowed.
</details>
(b) #SFT $=2^{23}$ .
<details>
<summary>figs/sec4_generalization_plots/m-2000.png Details</summary>

### Visual Description
## Line Chart: Pass@k vs. k
### Overview
The image is a line chart comparing the performance of four different models (RL, SFT, MT, and Base) based on the "Pass@k" metric for varying values of 'k' (1, 2, and 3). The chart displays how the Pass@k percentage changes as 'k' increases for each model.
### Components/Axes
* **X-axis:** 'k' with values 1, 2, and 3.
* **Y-axis:** "Pass@k (%)" with a scale from 5 to 35, incrementing by 5.
* **Legend:** Located on the right side of the chart.
* RL (Red line)
* SFT (Orange line)
* MT (Purple line)
* Base (Blue line)
### Detailed Analysis
* **RL (Red):** The red line represents the RL model. It starts at approximately 12% at k=1, increases to about 24% at k=2, and reaches approximately 29% at k=3.
* **SFT (Orange):** The orange line represents the SFT model. It starts at approximately 13% at k=1, increases to about 26% at k=2, and reaches approximately 31% at k=3.
* **MT (Purple):** The purple line represents the MT model. It starts at approximately 12% at k=1, increases to about 22% at k=2, and reaches approximately 27% at k=3.
* **Base (Blue):** The blue line represents the Base model. It starts at approximately 9% at k=1, increases to about 21% at k=2, and reaches approximately 29% at k=3.
### Key Observations
* All models show an increasing trend in Pass@k as 'k' increases.
* SFT consistently outperforms the other models across all values of 'k'.
* The Base model starts with the lowest Pass@k at k=1 but shows a significant increase, almost matching RL at k=3.
* The performance gap between SFT and the other models appears to narrow slightly as 'k' increases from 2 to 3.
### Interpretation
The chart suggests that increasing the value of 'k' generally improves the performance of all models, as measured by the Pass@k metric. The SFT model demonstrates the best performance overall, indicating it may be the most effective approach among those compared. The Base model's improvement with increasing 'k' suggests it benefits more from larger values of 'k' compared to the other models. The relatively close performance of RL, MT, and Base at k=3 indicates that their effectiveness converges as 'k' increases.
</details>
(c) #SFT $=1.1\times 2^{27}$.
<details>
<summary>figs/sec4_generalization_plots/m-5000.png Details</summary>

### Visual Description
## Line Chart: Pass@k Performance Comparison
### Overview
The image is a line chart comparing the performance of four different models (RL, SFT, MT, and Base) based on the "Pass@k" metric. The x-axis represents the value of 'k' (1, 2, and 3), and the y-axis represents the Pass@k score in percentage. The chart displays how the Pass@k score changes for each model as 'k' increases.
### Components/Axes
* **X-axis:**
* Label: "k"
* Scale: Categorical values 1, 2, and 3.
* **Y-axis:**
* Label: "Pass@k (%)"
* Scale: Ranges from 5 to 35, with tick marks at intervals of 5 (5, 10, 15, 20, 25, 30, 35).
* **Legend:** Located on the right side of the chart.
* RL (Red line)
* SFT (Orange line)
* MT (Purple line)
* Base (Blue line)
### Detailed Analysis
* **RL (Red):** The red line represents the RL model.
* k=1: Pass@k β 17%
* k=2: Pass@k β 27%
* k=3: Pass@k β 31%
* Trend: The Pass@k score increases as 'k' increases.
* **SFT (Orange):** The orange line represents the SFT model.
* k=1: Pass@k β 17%
* k=2: Pass@k β 26%
* k=3: Pass@k β 34%
* Trend: The Pass@k score increases as 'k' increases.
* **MT (Purple):** The purple line represents the MT model.
* k=1: Pass@k β 16%
* k=2: Pass@k β 25%
* k=3: Pass@k β 30%
* Trend: The Pass@k score increases as 'k' increases.
* **Base (Blue):** The blue line represents the Base model.
* k=1: Pass@k β 18%
* k=2: Pass@k β 26%
* k=3: Pass@k β 30%
* Trend: The Pass@k score increases as 'k' increases.
### Key Observations
* All four models show an increase in Pass@k score as 'k' increases from 1 to 3.
* The SFT model (orange line) generally has the highest Pass@k score across all values of 'k'.
* The MT model (purple line) generally has the lowest Pass@k score across all values of 'k'.
* The Base model (blue line) and RL model (red line) perform similarly.
### Interpretation
The chart demonstrates the performance of different models (RL, SFT, MT, and Base) in terms of Pass@k. The increasing trend of all lines indicates that as 'k' increases, the models are more likely to pass the evaluation criteria. The SFT model consistently outperforms the other models, suggesting it is the most effective among the four in this context. The MT model shows the weakest performance. The Base and RL models are comparable. The data suggests that increasing 'k' generally improves the Pass@k score for all models, but the extent of improvement varies depending on the model architecture or training method.
</details>
(d) #SFT $=1.5\times 2^{28}$.
Figure 12: Generalization analysis on SWE-bench Multilingual.
Appendix H Use of Large Language Models
The initial draft of this paper was written entirely by the authors. A large language model (gpt-5) was used only to aid with polishing the language (e.g., grammar and clarity). All conceptual contributions, experimental designs, analyses, and conclusions are the work of the authors.