# Kimi-Dev: Agentless Training as Skill Prior for SWE-Agents
> † Indicates equal contribution. ‡ Joint leads.
Abstract
Large Language Models (LLMs) are increasingly applied to software engineering (SWE), with SWE-bench as a key benchmark. Existing solutions split into SWE-Agent frameworks built on multi-turn interaction and workflow-based Agentless methods built on single-turn verifiable steps. We argue these paradigms are not mutually exclusive: reasoning-intensive Agentless training induces skill priors, including localization, code edit, and self-reflection, that enable efficient and effective SWE-Agent adaptation. In this work, we first curate an Agentless training recipe and present Kimi-Dev, an open-source SWE LLM achieving 60.4% on SWE-bench Verified, the best among workflow-based approaches. With additional SFT adaptation on 5k publicly available trajectories, Kimi-Dev powers SWE-Agents to 48.6% pass@1, on par with Claude 3.5 Sonnet (the 20241022 version). These results show that structured skill priors from Agentless training can bridge workflow-based and agentic frameworks for transferable coding agents.
1 Introduction
Recent years have witnessed the rapid development of Large Language Models (LLMs) for automating Software Engineering (SWE) tasks (jimenez2023swe; yang2024swe; xia2024agentless; anthropic_claude_3.5_sonnet_20241022; pan2024training; wang2024openhands; wei2025swe; yang2025qwen3; team2025kimi_k2; openai_gpt5_system_card_2025). Among the benchmarks that track the progress of LLM coding agents in SWE scenarios, SWE-bench (jimenez2023swe) stands out as one of the most representative: given an issue that reports a bug in a real-world GitHub repository, a model must produce a patch that fixes the bug, whose correctness is judged by whether the corresponding unit tests pass after the patch is applied. The difficulty of the task (as of the date the benchmark was proposed), the availability of an outcome reward through the provided auto-eval harness, and the real-world economic value it reflects have made SWE-bench a focal point of the field.
Two lines of solutions have emerged for the SWE-bench task. Agent-based solutions like SWE-Agent (yang2024swe) and OpenHands (wang2024openhands) take an interactionist approach: instructed with the necessary task description, a predefined set of available tools, and the specific problem statement, the agent interacts with an executable environment over multiple turns, makes changes to the source code, and decides autonomously when to stop. In contrast, workflow-based solutions like Agentless (xia2024agentless) pre-define the solving process as a pipeline consisting of steps like localization, bug repair, and test composition. Such task decomposition transforms the agentic task into generating correct responses for a chain of single-turn problems with verifiable rewards (guo2025deepseek; wei2025swe; SWESwiss2025).
The two paradigms have been widely viewed as mutually exclusive. On the one hand, SWE-Agents offer higher potential and better adaptability, thanks to the greater freedom of multi-turn interaction without fixed routines. However, they have also proved more difficult to train due to their end-to-end nature (deepswe2025; cao2025skyrl). On the other hand, Agentless methods offer better modularity and ease of training with Reinforcement Learning with Verifiable Rewards (RLVR) techniques, but a more limited exploration space, less flexibility, and difficulty in behavior monitoring, as erroneous patterns appear only within single-turn long reasoning content (pan2024training). We challenge this dichotomy from the perspective of the training recipe: we argue that Agentless training should not be viewed as the ultimate deliverable, but rather as a way to induce skill priors, i.e., atomic capabilities such as localizing buggy implementations, updating erroneous code snippets, self-reflection, and verification, all of which scaffold the efficient adaptation of more capable and generalizable SWE-Agents.
Guided by this perspective, we introduce Kimi-Dev, an open-source code LLM for SWE tasks. Specifically, we first develop an Agentless training recipe that includes mid-training, cold-start, reinforcement learning, and test-time self-play. This results in 60.4% accuracy on SWE-bench Verified, the SoTA performance among workflow-based solutions. Building on this, we show that Agentless training induces skill priors: a minimal SFT cold-start from Kimi-Dev with 5k publicly available trajectories enables efficient SWE-Agent adaptation and reaches a 48.6% pass@1 score, similar to that of Claude 3.5 Sonnet (the 20241022 version, anthropic_claude_3.5_sonnet_20241022). We demonstrate that these induced skills transfer from non-agentic workflows to agentic frameworks, and that the self-reflection in long Chain-of-Thoughts baked in through Agentless training further enables the agentic model to leverage more turns and succeed over longer horizons. Finally, we show that the skills from Agentless training generalize beyond SWE-bench Verified to broader benchmarks like SWE-bench-live (zhang2025swe) and SWE-bench Multilingual (yang2025swesmith). Together, these results reframe the relationship between Agentless and agentic frameworks: not mutually exclusive, but complementary stages in building transferable coding LLMs. This shift offers a principled view that training with structured skill priors can scaffold autonomous agentic interaction.
The remainder of this paper is organized as follows. Section 2 reviews the background of the framework dichotomy and outlines the challenges of training SWE-Agents. Section 3 presents our Agentless training recipe and the experimental results. Section 4 demonstrates how these Agentless-induced skill priors enable efficient SWE-Agent adaptation, and evaluates the skill transfer and generalization beyond SWE-bench Verified.
2 Background
In this section, we first review the two dominant frameworks for SWE tasks and their dichotomy in Section 2.1. We then summarize the progress and challenges of training SWE-Agents in Section 2.2. The background introduction sets the stage for reinterpreting Agentless training as skill priors for SWE-Agents, a central theme developed throughout the later sections.
2.1 Framework Dichotomy
Two paradigms currently dominate solutions for automating software engineering tasks. Agentless approaches decompose SWE tasks into modular workflows (xia2024agentless; wei2025swe; ma2024lingma; ma2025alibaba; swe-fixer). Typical workflows consist of bug localization, bug repair, and test generation. This design provides modularity and stability: each step can be optimized separately as a single-turn problem with verifiable rewards (wei2025swe; SWESwiss2025). However, this modularity comes at the cost of flexibility: when a scenario requires multiple rounds of incremental updates, Agentless approaches struggle to adapt.
By contrast, SWE-agents adopt an end-to-end, multi-turn reasoning paradigm (yang2024swe; wang2024openhands). Rather than following a fixed workflow, they iteratively plan, act, and reflect, resembling how human developers debug complex issues. This design enables greater adaptability but introduces significant difficulties: trajectories often extend over tens or even hundreds of steps, the LLM's context window must span the entire interaction history, and the model must handle exploration, reasoning, and tool use simultaneously.
The dichotomy between fixed workflows (e.g., Agentless) and agentic frameworks (e.g., SWE-Agent) has shaped much of the community's perspective. The two paradigms are often regarded as mutually exclusive: one trades off flexibility and performance ceiling for modularity and stability, whereas the other makes the reverse compromise. Our work challenges this dichotomy, as we demonstrate that Agentless training induces skill priors that make further SWE-agent training both more stable and more efficient.
2.2 Training SWE-agents
Training SWE-agents relies on acquiring high-quality trajectories through interaction with executable environments. Constructing such large-scale environments and collecting reliable trajectories, however, requires substantial human labor as well as costly calls to frontier models, making data collection slow and resource-demanding (pan2024training; badertdinov2024sweextra). Recent studies also attempt to scale environment construction by synthesizing bugs and reversely constructing executable runtimes (jain2025r2e; yang2025swesmith).
However, credit assignment across long horizons remains challenging, as outcome rewards are sparse and often only available when a final patch passes its tests. Reinforcement learning techniques have been proposed, but they frequently suffer from instability or collapse when trajectories exceed dozens of steps (deepswe2025; cao2025skyrl). SWE-agent training is also highly sensitive to initialization: starting from a generic pre-trained model often leads to brittle behaviors, such as failing to use tools effectively or getting stuck in infinite loops of specific action patterns (pan2024training; yang2025swesmith).
These limitations motivate our central hypothesis: instead of training SWE-agents entirely from scratch, one can first induce skill priors through Agentless training, enhancing atomic capabilities such as localization, repair, test composition, and self-reflection. These priors lay a foundation that makes subsequent agentic training both more efficient and more generalizable.
3 Agentless Training Recipe
Instead of training SWE-agents from scratch, we leverage Agentless training to induce skill priors. The skill priors enhanced by Agentless training include, but are not limited to, bug localization, patch generation, self-reflection, and verification, which lay the foundation for end-to-end agentic interaction. In this section, we detail our Agentless training recipe: the duo framework design of BugFixer and TestWriter, mid-training and cold-start, reinforcement learning, and test-time self-play. Sections 3.1–3.4 detail these ingredients, and Section 3.5 presents the experimental results for each. This training recipe results in Kimi-Dev, an open-source 72B model that achieves 60.4% on SWE-bench Verified, the SoTA performance among workflow-based solutions.
(Figure content: the LLM plays both the BugFixer and the TestWriter roles; each role performs file localization and code edit, and the two roles are connected by "Fix Bugs" and "Generate Test Case" arrows.)
Figure 1: Agentless framework for Kimi-Dev: the duo of BugFixer and TestWriter.
3.1 Framework: the Duo of BugFixer and TestWriter
In GitHub issue resolution, we conceptualize the process as the collaboration between two important roles: the BugFixer, who produces patches that correctly address software bugs, and the TestWriter, who creates reproducible unit tests that capture the reported bug. A resolution is considered successful when the BugFixer's patch passes the tests provided for the issue, while a high-quality test from the TestWriter should fail on the pre-fix version of the code and pass once the fix is applied.
Each role relies on two core skills: (i) file localization, the ability to identify the specific files relevant to the bug or test, and (ii) code edit, the ability to implement the necessary modifications. For BugFixer, effective code edits repair the defective program logic, whereas for TestWriter, they insert precise unit test functions that reproduce the issue into the test files. As illustrated in Figure 1, these two skills constitute the fundamental abilities underlying GitHub issue resolution. We therefore enhance them through the following training recipe, spanning mid-training, cold-start, and RL.
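To make this shared structure concrete, the sketch below shows how both roles reduce to the same localization-then-edit procedure. The `llm.complete` interface and the prompts are hypothetical placeholders, not the actual Kimi-Dev implementation:

```python
# Minimal sketch of the shared two-skill structure. The `llm.complete`
# interface and all prompts are hypothetical, not the actual Kimi-Dev code.
from dataclasses import dataclass


@dataclass
class RoleOutput:
    files: list[str]  # result of skill (i), file localization
    patch: str        # unified diff from skill (ii), code edit


def run_role(llm, role: str, issue: str, repo_files: dict[str, str]) -> RoleOutput:
    """Both BugFixer and TestWriter follow the same two-step structure."""
    # Skill (i): localize the files relevant to the bug (or the test).
    loc_prompt = (f"[{role}] Issue:\n{issue}\n"
                  "Repository files:\n" + "\n".join(repo_files))
    files = [f for f in llm.complete(loc_prompt).splitlines() if f in repo_files]
    # Skill (ii): edit only the localized files.
    context = "\n\n".join(f"# {f}\n{repo_files[f]}" for f in files)
    edit_prompt = (f"[{role}] Issue:\n{issue}\n"
                   f"Relevant code:\n{context}\nReturn a unified diff.")
    return RoleOutput(files=files, patch=llm.complete(edit_prompt))
```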
3.2 Mid-Training & Cold Start
To enhance the model's prior as both a BugFixer and a TestWriter, we perform mid-training with $\sim$150B tokens of high-quality, real-world data. Starting from the Qwen 2.5-72B-Base model (qwen2025qwen25technicalreport), we collect millions of GitHub issues and PR commits to form the mid-training dataset, which consists of (i) $\sim$50B tokens in the Agentless format, derived from natural diff patches, (ii) $\sim$20B tokens of curated PR commit packs, and (iii) $\sim$20B tokens of synthetic data with reasoning and agentic interaction patterns (upsampled by a factor of 4 during training). The data recipe is carefully constructed so that the model learns how human developers reason about GitHub issues, implement code fixes, and develop unit tests. We also performed strict data decontamination to exclude every repository in the SWE-bench Verified test set. Mid-training substantially enhances the model's knowledge of practical bug fixes and unit tests, making it a better starting point for later stages. The details of the recipe are covered in Appendix A.
To activate the model's long Chain-of-Thought (CoT) capability, we also construct a cold-start dataset of reasoning trajectories based on the SWE-Gym (pan2024training) and SWE-bench-extra (badertdinov2024sweextra) datasets, generated by the DeepSeek R1 model (guo2025deepseek, the 20250120 version). In this setup, R1 plays the roles of BugFixer and TestWriter, producing outputs such as file localizations and code edits. Through supervised finetuning on this dataset as a cold start, the model acquires essential reasoning skills, including problem analysis, method sketching, self-refinement, and exploration of alternative solutions.
3.3 Reinforcement Learning
After mid-training and cold-start, the model demonstrates strong performance in localization. Reinforcement learning (RL) therefore focuses solely on the code-edit stage. We construct a training set specifically for this stage, where each prompt is equipped with an executable environment. We further employ multiple localization rollouts from the initial model to generate varied file-location predictions, which diversifies the prompts used in code-edit RL.
For the RL algorithm, we adopt the policy optimization method proposed in Kimi k1.5 (team2025kimi_k15), which has shown promising results on reasoning tasks in both math and coding. Kimi k1.5 adopts a simple policy-gradient approach based on the REINFORCE algorithm (williams1992simple). Similar to GRPO (shao2024deepseekmath), we use the average reward of multiple rollouts as the baseline to normalize the returns. When adapting the algorithm to our SWE-bench setting, we highlight the following three key design choices (see the sketch after the list below):
1. Outcome-based reward only: We rely solely on the final execution outcome from the environment as the raw reward (0 or 1), without incorporating any format- or process-based signals. For BugFixer, a positive reward is given if the generated patch passes all ground-truth unit tests. For TestWriter, a positive reward is assigned when (i) the predicted test raises a failure in the repo without the ground-truth bugfix patch applied, AND (ii) the failure is resolved once the ground-truth bugfix patch is applied.
1. Adaptive prompt selection: Prompts with pass@16 = 0 are initially discarded, as they do not contribute to the batch loss. This yields an initial prompt set of 1,200 problems and enlarges the effective batch size. A curriculum learning scheme is then applied: once the success rate on the current set exceeds a threshold, 500 previously excluded prompts (with initial pass@16 = 0 but improved under RL) are reintroduced every 100 RL steps to gradually raise task difficulty.
1. Positive example reinforcement: As performance improvements begin to plateau in later stages of training, we incorporate positive samples from recent RL iterations into the training batch of the current iteration. This reinforces the model's reliance on successful patterns, thereby accelerating convergence in the final phase.
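As a rough illustration of these choices, the sketch below implements the binary outcome reward, the group-mean baseline, and the adaptive prompt filter. It is a simplified rendering under the stated assumptions (the function names and signatures are ours), not the exact Kimi k1.5 optimizer:

```python
import numpy as np


def outcome_reward(role: str, passes_gt_tests: bool = False,
                   fails_pre_fix: bool = False, passes_post_fix: bool = False) -> float:
    """Binary outcome reward with no format- or process-based signals."""
    if role == "BugFixer":
        # Positive iff the generated patch passes all ground-truth unit tests.
        return 1.0 if passes_gt_tests else 0.0
    # TestWriter: the test must fail before, and pass after, the ground-truth fix.
    return 1.0 if (fails_pre_fix and passes_post_fix) else 0.0


def group_advantages(rewards: list[float]) -> np.ndarray:
    """REINFORCE-style advantages with the group-mean reward as the baseline."""
    r = np.asarray(rewards, dtype=float)
    return r - r.mean()


def keep_prompt(rollout_rewards: list[float]) -> bool:
    """Adaptive prompt selection: a prompt with pass@16 = 0 yields all-zero
    rewards, hence zero advantages and no gradient, so it is discarded."""
    return any(r > 0 for r in rollout_rewards)
```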
Robust sandbox infrastructure. We construct Docker environments orchestrated with Kubernetes (kubernetes), which provides a secure, scalable sandbox infrastructure for efficient training and rollouts. The infrastructure supports over 10,000 concurrent instances with robust performance, making it well suited to competitive programming and software engineering tasks (see Appendix D for details).
3.4 Test-Time Self-Play
After RL, the model masters the roles of both a BugFixer and a TestWriter. During test time, it adopts a self-play mechanism to coordinate its bug-fixing and test-writing abilities.
Following Agentless (xia2024agentless), we use the model to generate 40 candidate patches and 40 tests for each instance. Each patch generation involves independent runs of localization and code edit from BugFixer, where the first run uses greedy decoding (temperature 0) and the remaining 39 use temperature 1 to ensure diversity. Similarly, 40 tests are generated independently from TestWriter. To guarantee the validity of the test patch candidates, we first filter out those that fail to raise a failure in the original repo before any BugFixer patch is applied.
Denote the remaining TestWriter patches as the set $\mathcal{T}$, and the BugFixer patches as the set $\mathcal{B}$. For each $b_{i}\in\mathcal{B}$ and $t_{j}\in\mathcal{T}$, we execute the test suite over the test file modified by $t_{j}$ twice: first without $b_{i}$, and then with $b_{i}$ applied. From the execution log of the first run, we obtain the counts of failed and passed tests from $t_{j}$, denoted ${\rm F}(j)$ and ${\rm P}(j)$. Comparing the execution logs of the two runs, we obtain the counts of fail-to-pass and pass-to-pass tests, denoted ${\rm FP}(i,j)$ and ${\rm PP}(i,j)$, respectively. We then calculate the score for each $b_{i}$ as
$$
S_{i}=\frac{\sum_{j}{\rm FP}(i,j)}{\sum_{j}{\rm F}(j)}+\frac{\sum_{j}{\rm PP}(i,j)}{\sum_{j}{\rm P}(j)}, \tag{1}
$$
where the first term reflects the performance of $b_{i}$ under reproduction tests, and the second term can be viewed as characterizing $b_{i}$ under regression tests (xia2024agentless). We select the BugFixer patch $b_{i}$ with the highest score $S_{i}$ as the final answer.
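A small sketch of the selection rule in Eq. (1), assuming the per-pair counts ${\rm FP}$, ${\rm PP}$ and the per-test counts ${\rm F}$, ${\rm P}$ have already been parsed from the execution logs:

```python
import numpy as np


def select_patch(FP: np.ndarray, PP: np.ndarray,
                 F: np.ndarray, P: np.ndarray) -> int:
    """FP, PP: |B| x |T| matrices of fail-to-pass / pass-to-pass counts
    for each (BugFixer patch, TestWriter test) pair.
    F, P: length-|T| vectors of failed / passed counts for each test
    when run without any BugFixer patch applied.
    Returns the index of the patch with the highest score S_i."""
    # Guard the denominators so empty test sets do not divide by zero.
    s = FP.sum(axis=1) / max(F.sum(), 1) + PP.sum(axis=1) / max(P.sum(), 1)
    return int(np.argmax(s))
```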
Table 1: Performance comparison of models on SWE-bench Verified under Agentless-like frameworks. All results are obtained under the standard 40-patch, 40-test setting (xia2024agentless), except that Llama3-SWE-RL uses 500 patches and 30 tests.
3.5 Experiments
3.5.1 Main Results
(Figure content: bar chart of SWE-bench Verified pass rate vs. mid-training token budget: roughly 28.6% at 50B, 32.6% at 100B, and 36.6% at 150B tokens.)
Figure 2: The performance on SWE-bench Verified after mid-training with different training token budgets.
Table 1 shows the performance of Kimi-Dev on SWE-bench Verified (jimenez2023swe). Instead of the text-similarity rewards used in SWE-RL (wei2025swe), we adopt execution-based signals for a more reliable assessment of fix quality. Our two-stage TestWriter also improves over prior Agentless systems (xia2024agentless; guo2025deepseek; SWESwiss2025), which rely on a single root-level test, by better capturing repository context and mirroring real developer workflows (OpenAI-Codex-2025). Kimi-Dev attains state-of-the-art performance among open-source models, resolving 60.4% of issues.
3.5.2 Mid-Training
In this section, we evaluate the relationship between the amount of mid-training data and model performance. Specifically, we finetuned Qwen 2.5-72B-Base on subsets of the mid-training data containing 50B, 100B, and approximately 150B tokens, then lightly activated each mid-trained model with the same set of 2,000 BugFixer input-output pairs for the SFT cold start. We report only BugFixer pass@1 here for simplicity of evaluation. Figure 2 shows that increasing the number of mid-training tokens consistently improves model performance, highlighting the effectiveness of this stage.
3.5.3 Reinforcement Learning
(Figure content: BugFixer pass@1 and response token length vs. RL training steps over 500 steps; token length grows from roughly 4k to about 7k tokens, while the pass rate climbs from about 35% to the mid-40s with fluctuations.)
(a) 72B Joint RL, BugFixer
(Figure content: TestWriter reproduced rate and response token length vs. RL training steps over 500 steps; token length grows from roughly 3k to about 6k tokens, while the reproduced rate fluctuates between roughly 22% and 35% with an overall upward trend.)
(b) 72B Joint RL, TestWriter
Figure 3: Joint code-edit RL experiments on the model after mid-training and cold-start. The pass rate for BugFixer and the reproduced rate for TestWriter are reported as pass@1 with temperature 1.0. Performance improves consistently as outputs grow longer.
Experimental setup
We set the number of training steps per RL iteration to 5 and sample 10 rollouts for each of the 1,024 problems drawn from the union of SWE-Gym (pan2024training) and SWE-bench-extra (badertdinov2024sweextra). We dynamically adjust the prompt set every 20 iterations to gradually increase task difficulty. We fix the maximum training context length at 64k tokens, since the prompt input contains the full contents of the files localized by the initial model in advance.
Results
Figure 3 shows the performance and response-length curves on the test set during RL training. The pass rate and the reproduced rate are computed as pass@1 at temperature 1.0. We observe that both model performance and response length steadily increase, reflecting the expected benefits of RL scaling. Similar RL scaling curves are observed in ablation experiments on Qwen2.5-14B-Instruct models, demonstrating the effectiveness of the RL training recipe across model sizes. The experimental details, as well as the ablation studies on positive example reinforcement from Section 3.3, are given in Appendix C.2. The lengthy outputs consist of in-depth problem analysis and self-reflection patterns, similar to those in math and code reasoning tasks (team2025kimi_k15; guo2025deepseek). We also observe that for TestWriter, occasional false positives arise during RL training due to limited reproduction coverage. We leave the case studies to Appendix E and further improvement to future work.
(Figure content: pass rate vs. number of patches and tests (BF x TW) from 1x1 to 40x40. Left: self-play rises from about 48.0% to 60.4% and plateaus, while BugFixer majority voting plateaus around 51%. Right: pass@N with ground-truth tests keeps climbing, reaching about 74.8% at 40x40, above the self-play curve.)
Figure 4: Test-time self-play on SWE-bench Verified. Performance improves with more generated patches and tests. Left: Execution-based self-play consistently surpasses BugFixer majority voting. Right: Self-play performance remains below pass@N, where the ground-truth test patch is used, suggesting that room remains for TestWriter to improve.
3.5.4 Test-time Self-Play
Following Section 3.4, we evaluate how the final performance on SWE-bench Verified scales with the number of generated patches and tests. The temperature is fixed at 0 for the initial rollout and set to 1.0 for the subsequent 39 rollouts. As shown on the left of Figure 4, the final performance improves from 48.0% to 60.4% as the number of patch-test pairs increases from 1$\times$1 to 40$\times$40, and consistently surpasses the results obtained from majority voting over the BugFixer patches alone.
Notably, the self-play result obtained with 3 patches and 3 tests per instance already surpasses majority voting over 40 BugFixer patches, demonstrating the value of the additional information from test-time execution. Room for improvement in TestWriter still remains for more powerful self-play: as shown on the right of Figure 4, self-play performance remains below pass@N, where ground-truth test cases serve as the criterion for issue resolution. This finding aligns with anthropic_claude_3.5_sonnet_20241022, which introduced a final edge-case checking phase to generate a more diverse set of test cases, thereby strengthening the role of the "TestWriter" in their SWE-Agent framework. We also report preliminary observations of a potential parallel scaling phenomenon, which requires no additional training and may enable scalable performance improvements. The details and analyses are covered in Appendix F.
4 Initializing SWE-Agents from Agentless Training
End-to-end multi-turn frameworks, such as SWE-Agent (yang2024swe; anthropic_claude_3.5_sonnet_20241022) and OpenHands (wang2024openhands), enable agents to leverage tools and interact with environments. Specifically, the system prompt employed in the SWE-Agent framework (anthropic_claude_3.5_sonnet_20241022) outlines a five-stage workflow: (i) repo exploration, (ii) error reproduction via a test script, (iii) code edit for bug repair, (iv) test re-execution for validation, and (v) edge-case generation and checks. Unlike Agentless, the SWE-Agent framework does not enforce a strict stage-wise workflow; the agent can reflect, transition, and redo freely until it deems the task complete and submits.
Without a fixed routine, the performance potential is therefore higher; however, training a SWE-Agent is more challenging because the outcome reward is sparse for long-horizon credit assignment. Meanwhile, our Kimi-Dev model has undergone Agentless training, with its localization and code-edit skills for BugFixer and TestWriter deliberately strengthened. In this section, we investigate whether it can serve as an effective prior for multi-turn SWE-Agent scenarios.
Table 2: Single-attempt performance of different models on SWE-bench Verified under end-to-end agentic frameworks, categorized by proprietary or open-weight models, and by size over or under 100B (as of 2025.09). "Internal" denotes results achieved with their in-house agentic frameworks.
4.1 Performance after SWE-Agent Fine-tuning
(Figure content: pass@{1,2,3} on SWE-bench Verified vs. SWE-Agent SFT token budget, from 0 to $1.5\times 2^{28}$ tokens on a log scale, for the Base, MT, SFT, and RL priors.)
Figure 5: Comparing the quality of the raw Base, the Agentless mid-trained (MT), the Agentless mid-trained with reasoning-intensive cold-start (SFT), and the Kimi-Dev model after RL as the prior for SWE-Agent adaptation. The tokens of the SWE-Agent SFT trajectories are swept over different scales, and the SWE-Agent performances are reported up to pass@3 on SWE-bench Verified.
We use publicly available SWE-Agent trajectories to finetune Kimi-Dev. The finetuning dataset, released with SWE-smith (yang2025swesmith), consists of 5,016 SWE-Agent trajectories collected with Claude 3.7 Sonnet (Anthropic-Claude3.7Sonnet-2025) in synthetic environments. We perform supervised fine-tuning over Kimi-Dev, setting the maximum context length to 64K tokens during training and allowing up to 128K tokens and 100 turns during inference.
As shown in Table 2, without collecting more trajectory data in realistic environments or conducting additional multi-turn agentic RL, our finetuned model achieves a pass@1 score of 48.6% on SWE-bench Verified under the agentic framework, without additional test-time scaling. Using the same SFT data, our finetuned Kimi-Dev outperforms SWE-agent-LM (yang2025swesmith), with performance comparable to Claude 3.5 Sonnet (49% for the 20241022 version). The pass@10 of our SWE-Agent adapted model is 74.0%, surpassing the pass@30 of our model under Agentless (73.8%) and demonstrating the higher potential of the SWE-Agent framework.
4.2 Skill Transfer and Generalization
The results in Section 4.1 demonstrate that Kimi-Dev, a model with extensive Agentless training, can be adapted into an end-to-end SWE-Agent with lightweight supervised finetuning. As the Agentless training recipe consists of mid-training, cold-start (SFT), and RL, we explore the contribution of each part of the recipe to the SWE-Agent capability after adaptation.
To this end, we perform SWE-Agent SFT on the original Qwen2.5-72B (Base), the mid-trained model (MT), the model further activated with Agentless-formatted long CoT data (SFT), and the Kimi-Dev model after RL training (RL). We treat these four models as priors for SWE-Agents (slightly abusing the term "prior" to refer to a model to be finetuned with SWE-Agent trajectories in the following analysis). Since a good prior demonstrates fast adaptation from a few shots (finn2017model; brown2020language), we also sweep the amount of SWE-Agent SFT data to measure the adaptation efficiency of each prior.
Specifically, we randomly shuffle the 5,016 SWE-Agent trajectories and construct nested subsets of sizes 100, 200, 500, 1,000, and 2,000, where each smaller subset is contained within the larger ones. In addition, we prepend two extreme baselines: (i) zero-shot, where the prior model is directly evaluated under the SWE-Agent framework without finetuning, and (ii) one-step gradient descent, where the model is updated with a single gradient step using the 100-trajectory subset. This yields a range of SFT token budgets spanning $\{0, 2^{21}, 2^{23}, 2^{24}, 1.1\times 2^{25}, 1.1\times 2^{26}, 1.1\times 2^{27}, 1.5\times 2^{28}\}$. After these lightweight SFT experiments, we evaluate performance in terms of pass@{1,2,3} under the SWE-Agent framework, with pass@1 evaluated at temperature 0 and pass@2 and pass@3 at temperature 1.0.
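A minimal sketch of this nested-subset construction (the fixed seed is an illustrative assumption):

```python
import random


def nested_subsets(trajectories: list,
                   sizes=(100, 200, 500, 1000, 2000), seed: int = 0):
    """Shuffle once, then take prefixes, so that each smaller subset
    is contained within every larger one."""
    shuffled = trajectories[:]
    random.Random(seed).shuffle(shuffled)
    return {n: shuffled[:n] for n in sizes}
```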
Figure 5 presents the SWE-Agent performances of each prior (Base, MT, SFT, RL) after being fine-tuned with different amounts of agentic trajectories. We have the following observations:
1. The RL prior outperforms all other models in nearly every SWE-Agent SFT setting. This demonstrates that the Agentless training recipe indeed strengthens the prior for SWE-Agent adaptation. For example, to match the best pass@1 performance of the Base prior, the RL prior needs only $2^{23}$ SWE-Agent SFT tokens, whereas the Base prior consumes $1.5\times 2^{28}$ tokens.
1. The MT prior lags behind the SFT and RL priors in extremely data-scarce settings (zero-shot ($0$) and one-step gradient descent ($2^{21}$)), but quickly becomes on par with them once 200 trajectories ($2^{24}$) are available for finetuning. This indicates that, once the prior is strengthened through Agentless mid-training, its adaptation efficiency remains comparable to the more heavily trained priors.
1. The SFT prior performs mostly on par with the RL prior, except in two cases: (i) the SFT prior outperforms the RL prior in the zero-shot setting, which is reasonable, as the RL prior may overfit to the Agentless input-output format, while the SFT prior suffers less from this; (ii) the SFT prior exhibits a significant degradation with 200 SWE-Agent trajectories ($2^{24}$). A potential reason is that the 200 trajectories collapse onto a single data mode, leading the SFT prior to overfit through memorization (chu2025sft); the RL prior instead embeds stronger transferable skills and thus generalizes better.
(Figure content, left panel: number of instances resolved per bin of turns, from 0 to 100 turns, for the four adapted priors (RL, SFT, MT, Base).)
(Figure content, right panel: stacked bars of resolved cases across 3 runs for each prior, split into a BugFixer-cutoff part and a reflection increment: Base 578 total (94 from reflection), MT 642 (100), SFT 693 (109), RL 718 (113).)
Figure 6: Left: Performance of the four priors under turn limits after SWE-Agent adaptation. Right: Characterization of the BugFixer and reflection skills for each prior, obtained by counting the cases resolved across 3 runs at the Stage-3 cutoff moment and comparing them with the final success cases.
From long CoT to extended multi-turn interactions.
We hypothesize that reflective behaviors cultivated through long chain-of-thought reasoning may transfer to settings requiring extended multi-turn interactions. To examine this, we evaluate the four priors (Base, MT, SFT, and RL) by finetuning each on the 5,016 trajectories and testing on SWE-bench Verified under varying turn limits, with pass@3 as the metric (Figure 6, left). The distinct interaction-length profiles provide supporting evidence: the RL prior, after finetuning, continues to make progress beyond 70 turns, while the SFT, mid-trained, and raw models show diminishing returns around 70, 60, and 50 turns, respectively.
We further evaluate the efficacy of the Agentless skill priors (BugFixer and reflection) in the SWE-Agent adapted model. For BugFixer, given that the SWE-Agent may autonomously reflect between the five stages, we examine the moment in each trajectory when the Stage-3 bug fix is first completed but the Stage-4 test rerun has not yet begun. Heuristically, at this point the SWE-Agent has not yet obtained execution feedback from the fourth stage, and thus has not reflected on the execution information or refined the bug fix. We therefore calculate the success rate of direct submission at this cutoff moment, which reflects the capability of the BugFixer skill. Regarding reflection, we compare the performance at the cutoff point with the performance after full completion for each problem; the increment in the number of successfully solved problems reflects the capability of the reflection skill.
We use kimi-k2-0711-preview (team2025kimi_k2) to annotate the SWE-Agent trajectories, identifying the stage to which each turn belongs. Figure 6 (right) demonstrates that both skills are strengthened through each stage of the Agentless training recipe: For the BugFixer skill, the cutoff performance at Stage-3 within the SWE-Agent interaction trajectories of the four adapted models shows consistent improvement, ranging from 484 cases resolved by the Base prior to 605 cases by the RL prior, as measured by the number of successful resolutions within three passes. For the reflection skill, examining the performance gains from Stage-3 to the end of the trajectories reveals a similar trend, with improvements increasing from +94 under the Base prior to +113 under the RL prior. Taken together, the adapted model from the RL prior achieves the strongest overall performance across both skills. It should be noted that our analysis of the reflection skill remains coarse-grained, since the measured performance gains between the two checkpoints capture not only agentic reflection and redo behaviors, but also the intermediate test-writing process performed by the SWE-Agent. A more fine-grained evaluation that isolates the TestWriter skill prior is left for future work. The prompt for SWE-Agent stage annotation, extended qualitative studies, as well as additional discussions for skill transfer and generalization, are covered in Appendix G.
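For concreteness, the two skill measurements can be expressed as a short script over stage-annotated trajectories. The sketch below is a per-trajectory simplification (the paper aggregates over 3 runs per problem), and the record fields are illustrative rather than the actual annotation schema:

```python
from dataclasses import dataclass

@dataclass
class AnnotatedTrajectory:
    # Illustrative fields; the real stage labels come from the
    # kimi-k2-0711-preview annotation described above.
    resolved_at_stage3_cutoff: bool  # patch correct if submitted right after Stage 3
    resolved_at_completion: bool     # final patch correct after test rerun/reflection

def skill_scores(trajs: list[AnnotatedTrajectory]) -> tuple[int, int]:
    """Return (bugfixer_cases, reflection_gain) for a set of trajectories."""
    bugfixer = sum(t.resolved_at_stage3_cutoff for t in trajs)
    # Reflection gain: problems solved only by continuing past the Stage-3
    # cutoff (test rerun, reflection, and patch refinement).
    reflection = sum(
        t.resolved_at_completion and not t.resolved_at_stage3_cutoff
        for t in trajs
    )
    return bugfixer, reflection
```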
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass1.png Details</summary>

### Visual Description
Pass@1 (Avg@5, with Min@5-Max@5 shading) versus end-to-end RL training steps (0-300) for the SFT prior (red) and the RL prior (blue). Both climb from roughly 32-33% with fluctuations; the RL prior ends near 39.5%, slightly above the SFT prior at roughly 38%.
</details>
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass3.png Details</summary>

### Visual Description
Pass@3 versus end-to-end RL training steps (0-300) for the SFT prior (red) and the RL prior (blue). The SFT prior rises from about 42% and fluctuates between 47% and 52%; the RL prior rises from about 45% and fluctuates between 52% and 55%, staying above the SFT prior throughout.
</details>
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass5.png Details</summary>

### Visual Description
Pass@5 versus end-to-end RL training steps (0-300) for the SFT prior (red) and the RL prior (blue). The SFT prior climbs from about 47% and fluctuates between 52% and 59%; the RL prior starts near 55% and fluctuates between 54% and 60%, remaining consistently higher.
</details>
Figure 7: Comparison between the SFT Prior and the RL Prior when further applied with end-to-end SWE-Agent RL. Left: Pass@1 averaged from 5 runs. Middle: Pass@3. Right: Pass@5. The two priors are activated with the same $2^{21}$ SWE-Agent SFT tokens (the second column in Figure 5). After end-to-end RL, the RL prior slightly outperforms the SFT prior in all the Pass@1, Pass@3, and Pass@5 settings, which agrees with their SWE-Agent SFT performance comparison in Figure 5.
End-to-end SWE-Agent RL for prior comparison. To further validate the effectiveness of the priors instilled by the Agentless training recipe, we employ end-to-end SWE-Agent RL (deepswe2025) with the cold-started priors as the initial models. To minimize the influence of the patterns of proprietary models within the SWE-Smith trajectories, we adopt the setting with $2^{21}$ SWE-Agent SFT tokens, the second column in Figure 5, where a single step of gradient descent takes place on top of each prior. Under this minimal cold-start setup, end-to-end RL reveals the potential of each prior beyond taking the shortcut of imitation (gudibande2024the; chu2025sft).
To run the end-to-end RL training for prior comparison, we use the SWE-Gym (pan2024training) and SWE-bench-extra (badertdinov2024scaling) subsets as the training set. As in the Agentless RL recipe, we first use each initial model to filter out the problems with Pass@8 = 0. For the model with the MT prior, 260 out of 6,202 problems remain; for the models with the SFT prior and the RL prior, a total of 2,062 of the 6,202 problems are kept. In all end-to-end RL runs, we use the outcome reward only and the same policy gradient algorithm as in Sec. 3.3, without KL or entropy regularization, with a batch size of 256. The results are shown as follows:
For the model with the MT prior, pass@1 performance quickly deteriorates to less than 2% after 10 end-to-end RL steps. A likely reason is the scarcity of trainable problems, itself a reflection of the weaker prior. For the models with the SFT prior and the RL prior, the RL runs last for 300 steps, and we plot the performance comparison in Figure 7. The model with the RL prior demonstrates slightly higher Pass@1, Pass@3, and Pass@5 scores than the model with the SFT prior. While this agrees with the performance comparison under SWE-Agent SFT shown in Figure 5, we observe that the interaction patterns incentivized by end-to-end SWE-Agent RL differ significantly from the patterns of the proprietary models (detailed in Appendix G.3). These results show that the Agentless training recipe curates strong priors for end-to-end learning under SWE-Agent frameworks with minimal supervision from proprietary end-to-end trajectories. We leave the exploration of more advanced agentic RL techniques for further improvement as future work.
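As a concrete illustration of the Pass@8 pre-filtering step described above, the selection reduces to a rejection pass over candidate problems. A minimal sketch, assuming a `rollout` function that runs the initial model once against a problem's environment and returns whether the resulting patch resolves it (name and signature are ours, for illustration):

```python
def filter_trainable(problems, rollout, k: int = 8):
    """Keep only problems the initial model solves at least once in k attempts.

    Problems with Pass@k = 0 yield no positive reward signal under
    outcome-only policy-gradient training, so they are dropped.
    """
    kept = []
    for problem in problems:
        if any(rollout(problem) for _ in range(k)):
            kept.append(problem)
    return kept
```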
5 Conclusion and Future Work
In this work, we reframed Agentless and agentic paradigms for automated software engineering as complementary rather than competing. By introducing Kimi-Dev, we demonstrated that structured Agentless training can induce transferable skill priors, including bug localization, code repair, and self-reflection. As a result, Kimi-Dev not only achieves SoTA results on SWE-bench Verified among the workflow-based approaches, but enables efficient SWE-Agent adaptation as well. These findings establish a novel path toward building more generalizable coding agents through staged training.
Acknowledgements
We thank Yuzhi Wang, Xinyu Zhou, Guokun Lai, Yulun Du, Fang Li, Hao Ding, Dehao Zhang, Enming Yuan, Dikang Du, and Jiacheng You for their valuable suggestions. We also appreciate the members of the infrastructure team at Moonshot AI for their timely support during the project.
Ethics and Reproducibility Statements
This work adheres to the Code of Ethics required by the ICLR conference. The study does not involve human subjects or animal experimentation. Personally identifiable information is excluded from the raw data for privacy considerations (see the mid-training data recipe detailed in Appendix A). Beyond the scope of this work, we strongly advocate for the community to advance systematic research on agent safety, thereby ensuring responsible progress in this area.
For all of the experiments, we have covered the detailed setups and discussions in the appendices: mid-training for Agentless in Appendix A, details of the used dockers in Appendix B, Agentless RL in Appendix C, agent infrastructure in Appendix D, case studies under Agentless in Appendix E, preliminary findings about emergent test-time parallel scaling in Appendix F, and extended analysis for SWE-Agents in Appendix G.
Appendix
Appendix A Details of Mid-training
We curate a mid-training data recipe with a focus on enhancing SWE capabilities. Central to this effort is the collection of pull request (PR) data from GitHub, which provides extensive coverage of real-world bug fixes, feature requests, and code enhancements. To ensure data quality, we apply two filters: (i) we only retain repositories that have accumulated at least five GitHub stars, thereby excluding sparsely maintained projects with limited community engagement; and (ii) we remove any repositories overlapping with the SWE-bench benchmark (jimenez2023swe) to prevent potential data leakage. For each candidate repository, we query the GitHub API for all PRs with the state MERGED, while discarding those abandoned, superseded, or left under review. To preserve more context information, we also snapshot the entire codebase at the base commit before the first code change in the PR.
After data crawling, we incorporate two complementary forms of natural code-change data: (i) natural diff patches and (ii) PR commit packs. A natural diff patch consolidates all commits in a PR into the final code difference, typically expressed as SEARCH/REPLACE blocks. This format aligns with the Agentless paradigm, in which the model must directly output the final patch. In contrast, a commit pack captures the sequence of human-authored commits within a PR, where each commit message (textual reasoning) is paired with the corresponding code modification (action). This structure closely parallels the SWE-Agent setting, where intermediate reasoning steps are interleaved with actions. However, the distinction between the utilities of the two data types is not absolute: commit messages in a PR commit pack can still inform the model's knowledge and indirectly strengthen its reasoning ability in the Agentless setting.
Natural diff patches. The natural diff patches used in the mid-training data recipe are processed with the following rules:
- Incorporate the Agentless prompt templates (see Listings 1, 2, 3, 4; these four prompt templates are also used in the later stages, including cold-start, RL, and test-time self-play), and apply a loss mask to the prompt part. For the localization prompt, the response is set as the files modified in the ground-truth diff patch.
- If an issue related to the PR exists, use the issue content; otherwise, use the PR title as a surrogate for the issue content.
- If an issue related to the PR exists, prepend the issue discussion to the code-edit response. We aim to strengthen the model's code-edit reasoning by leveraging the discussion content.
- Discard PRs that include modifications to files other than {.py, .md, .rst}.
- For PRs containing {.md, .rst} file modifications, retain only the Python diffs and rewrite them into SEARCH/REPLACE blocks.
- Remove PRs involving file additions or deletions.
- For the code edits with only line insertions or deletions, preserve the original Git diff hunks as the SEARCH content in the SEARCH/REPLACE blocks.
- Ensure that no more than three Python files are modified per PR.
- Apply a filtering script to exclude PRs with non-{.py, .md, .rst} modifications or PRs modifying more than three Python files (a minimal sketch of such a script follows this list).
- Further exclude PRs containing more than five SEARCH/REPLACE blocks.
A total of $\sim$50B tokens of natural diff patches is obtained after applying these filtering rules.
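A minimal sketch of the filtering script referenced in the list above, operating on per-PR metadata; the inputs (`changed_files`, the SEARCH/REPLACE block count, and the add/delete flag) are assumed to be extracted upstream from the consolidated diff:

```python
ALLOWED_EXTS = (".py", ".md", ".rst")

def keep_pr(changed_files: list[str], num_blocks: int,
            adds_or_deletes_files: bool) -> bool:
    """Apply the natural-diff-patch filtering rules to one PR."""
    if any(not f.endswith(ALLOWED_EXTS) for f in changed_files):
        return False  # touches a non-{.py, .md, .rst} file
    if adds_or_deletes_files:
        return False  # file additions and deletions are excluded
    if sum(f.endswith(".py") for f in changed_files) > 3:
        return False  # more than three modified Python files
    if num_blocks > 5:
        return False  # more than five SEARCH/REPLACE blocks
    return True
```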
Please look through the following GitHub problem description and Repository structure and provide a list of files that one would need to edit to fix the problem.
### GitHub Problem Description ###
{related issue / PR title content}
###
### Repository Structure ###
{file structure induced by the repo snapshot}
###
Please only provide the full path and return at most 5 files.
The returned files should be separated by new lines ordered by most to least important and wrapped with ```
For example:
```
file1.py
file2.py
```
Listing 1: Agentless prompt template: Localization for BugFixer.
Please look through the following GitHub problem description and Repository structure and provide a list of test files that should be run after applying the patch to fix the issue.
### GitHub Problem Description ###
{related issue / PR title content}
###
### Repository Structure ###
{file structure induced by the repo snapshot}
###
Please only provide the full path and return at most 5 files.
The returned files should be separated by new lines ordered by most to least important and wrapped with ```
For example:
```
file1.py
file2.py
```
Listing 2: Agentless prompt template: Localization for TestWriter.
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{related issue / PR title content}
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILE ---
```
### {filename1}
{content of filename1}
### {filename2}
{content of filename2}
{...}
```
--- END FILE ---
Please first localize the bug based on the issue statement, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '    print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
Listing 3: Agentless prompt template: Code edit for BugFixer.
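For reference, applying one SEARCH/REPLACE block from this format reduces to an exact substring substitution on the target file. A minimal sketch (our own helper, not the evaluation harness):

```python
def apply_search_replace(source: str, search: str, replace: str) -> str:
    """Apply a single SEARCH/REPLACE edit to a file's contents.

    The SEARCH chunk must match a contiguous region of the file verbatim,
    including indentation, which is why the prompt insists on proper
    indentation inside the blocks.
    """
    if search not in source:
        raise ValueError("SEARCH block not found verbatim in the source file")
    # Replace only the first occurrence, mirroring a single edit hunk.
    return source.replace(search, replace, 1)
```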
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{related issue / PR title content}
--- END ISSUE ---
Below are some code segments, each from a relevant test file. One or more of these files may need new tests added that can reproduce the issue.
--- BEGIN FILE ---
```
### {filename1}
{content of filename1}
### {filename2}
{content of filename2}
{...}
```
--- END FILE ---
Please first localize some possible locations in those test files within the repo, and then generate *SEARCH/REPLACE* edit updates to the **test** files in the repo, so that the erroneous scenario described in the problem is reproduced.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
def test__rules__std_L060_raised() -> None:
    try:
        sql = "SELECT IFNULL(NULL, 100), NVL(NULL, 100);"
        result = lint(sql, rules=["L060"])
        assert len(result) == 2
    except:
        print("Other issues")
        return
    try:
        assert result[0]["description"] == "Use 'COALESCE' instead of 'IFNULL'."
        assert result[1]["description"] == "Use 'COALESCE' instead of 'NVL'."
        print("Issue resolved")
    except AssertionError:
        print("Issue reproduced")
        return
    return
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '    print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
Listing 4: Agentless prompt template: Code edit for TestWriter.
PR commit packs. The PR commit packs used in the mid-training data recipe are processed with the following rules:
- Discard PRs that include modifications to files other than {.py, .md, .rst}.
- For {.md, .rst} file modifications, retain the `diff --git` signature but remove the actual content changes.
- Ensure that each PR modifies at most five Python files (with at least one required). PRs exceeding this limit are discarded.
- Apply a filtering script to exclude PRs containing non-{.py, .md, .rst} file modifications or those modifying more than five Python files.
- Filter out all of the developer signatures and GitHub IDs for ethics considerations.
A total of $\sim$ 20B tokens for PR commit packs are obtained after applying these filtering rules.
In addition, we incorporate synthetic data to further enhance both the reasoning and agentic capabilities of the model. A key observation is that the ground-truth reward for the localization stage in the Agentless setting can be derived directly from the diff patch, since the set of files requiring modification is explicitly indicated.
Synthetic reasoning data. To improve reasoning quality, we perform a lightweight SFT of the Qwen-2.5-72B-Instruct model on 2,000 R1 trajectories. The resulting model is then used to generate large-scale rollouts for the localization stage of both BugFixer and TestWriter. We retain only the rollouts that achieve exactly correct file localizations. This procedure yields $\sim$10B tokens of reasoning-intensive data dedicated to Agentless localization in the mid-training recipe.
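The exact-localization check used for rollout retention can be derived directly from the ground-truth patch, since a unified diff names every modified file. A minimal sketch (helper names are ours):

```python
import re

def files_in_patch(diff_text: str) -> set[str]:
    """Extract modified file paths from a unified diff's `+++ b/<path>` lines."""
    return set(re.findall(r"^\+\+\+ b/(\S+)", diff_text, flags=re.M))

def localization_exactly_correct(predicted: list[str], diff_text: str) -> bool:
    """Retain a rollout only if its predicted files match the patch exactly."""
    return set(predicted) == files_in_patch(diff_text)
```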
Synthetic agentic interactions. To strengthen agentic capabilities, we simulate agent-environment interactions with a custom tool set designed to mimic file-system operations without execution. This design is motivated by practical constraints: while repository snapshots from GitHub are available, not all snapshots are equipped with an executable Docker environment. As a result, shell commands are disabled. Instead, we introduce synthetic tools that allow the agent to view file contents and perform keyword-based search for localization, which effectively reproduces the first stage of Agentless but in an agentic manner. The specification of this tool set is covered in the system prompt, which is then used to elicit agentic interaction rollouts from the Qwen-2.5-72B-Instruct model. The complete system prompt is provided in Listing 5. We apply a loss mask only to the system prompt, and enable the model to simultaneously learn both actions and observations along the trajectory, inspired by yang2024react. This approach integrates both policy and world modeling into mid-training.
Your job is to look through the given GitHub problem description and Repository structure, and edit updates to the files in the repo to resolve the problem.
The job is divided into two stages:
+ In Stage 1, you should localize the files that you would need to edit to fix the problem.
+ In Stage 2, you should edit the updates to the repo.
Let's begin from Stage 1 to localize the bugs:
In Stage 1, besides reading the provided Repository structure, you can use the following skills for exploration. The skills are to be called in an environment wrapped by <execute> and </execute>, listed in the form of python functions as below:
open_file(path: str, is_all: bool | None = False, line_number: int | None = 1, context_lines: int | None = 100) -> None:
Opens the file at the given path in the editor for exploration.
By default, only the first 100 lines of the file are displayed. To open the entire file, set `is_all` to `True`.
The `context_lines` parameter determines the maximum number of lines to be displayed, with a cap of 100 lines. Use `scroll_up` and `scroll_down` to view more content up or down.
If a `line_number` is provided, the window will be moved to include that line.
Note: When `is_all` is set to `True`, the `line_number` and `context_lines` parameters will not take effect, as the entire file will be opened and displayed without any line-specific focus or context limitation.
Args:
path: str: The path to the file to open. The full path of the filename should be provided.
is_all: bool | None = False: If set to `True`, the entire file will be opened. Defaults to `False`.
line_number: int | None = 1: The line number to move to. Defaults to 1.
context_lines: int | None = 100: Only shows this number of lines in the context window (usually from line 1), with line_number as the center (if possible). Defaults to 100.
goto_line(line_number: int) -> None:
Moves the window to show the specified line number.
Args:
line_number: int: The line number to move to.
goto_class_or_func(class_or_func_name: str) -> None:
Moves the window to show the specified class or function in the current open file.
Args:
class_or_func_name: str: The name of the given class, function, or method in a class to move to.
scroll_down() -> None:
Moves the window down by 100 lines.
Args:
None
scroll_up() -> None:
Moves the window up by 100 lines.
Args:
None
search_dir(search_term: str, dir_path: str | None) -> None:
Searches for search_term in all files in dir. If dir is not provided, searches in the entire repository. Filename, fine-grained line number, and the relative class or function it is located in (if applicable) will be shown for each found position.
Args:
search_term: str: The term to search for.
dir_path: str: The path to the directory to search. Should be full path filename.
search_file(search_term: str, file_path: str | None = None) -> None:
Searches for search_term in file. If file is not provided, searches in the current open file. Filename, fine-grained line number, and the relative class or function it is located in (if applicable) will be shown for each found position.
Args:
search_term: str: The term to search for.
file_path: str | None: The path to the file to search. Should be full path filename if provided.
find_file(file_name: str, dir_path: str | None) -> None:
Finds all files with the given name in the specified directory. If dir is not provided, find in the entire repository.
Args:
file_name: str: The name of the file to find.
dir_path: str: The path to the directory to search.
str_replace(path: str, old_str, new_str)
old_str=[the old content to be replaced]
new_str=[the new content after replacement]
-> None:
Replace the old content (old_str) in the file at the given path with the new content (new_str). This is the skill that you will be using to edit the updates.
Args:
path: str: The path to the file to be updated. The full path of the filename should be provided.
old_str: str: The old content to be replaced. Note that this argument should be written in a new line starting with "old_str=", and the string content should not be quoted.
new_str: str: The new content after replacement. Note that this argument should be written in a new line starting with "new_str=", and the string content should not be quoted.
Example:
Assuming a call is shown as follows:
```
str_replace("filename.py", old_str, new_str)
old_str=a
new_str=b
c
```
Then it will function as replacing the `a\n` string with the `b\nc` string in the `filename.py` file.
insert(path: str, insert_line: int, new_str)
new_str=[the new content to be inserted]
-> None:
Insert the new content (new_str) in the file at the given path. When you want to add an entirely new class/function to the file, it would be better to use this method.
Args:
path: str: The path to the file to be updated. The full path of the filename should be provided.
insert_line: int: The line number below which the new content is to be added. This line number should be within the range of lines of the file: [0, Lines_of_the_File]. Specifically, when insert_line = 0, the added content starts from the top of the file.
new_str: str: The new content to be inserted. Note that this argument should be written in a new line starting with "new_str=", and the string content should not be quoted.
Example:
Assuming a call is shown as follows:
```
insert("test_filename.py", 5, new_str)
new_str=def test_add():
    assert add(1, 2) == 3
```
Then it will function as inserting the string `def test_add():\n    assert add(1, 2) == 3` below Line 5 of the `test_filename.py` file.
stop() -> None:
Terminate the editing process.
Args:
None
NOTE:
Responses should be concise.
When exploring, you should attempt fewer things at a time: Include ONLY ONE <execute> per response, and use a SINGLE skill listed above within the <execute> environment. DO NOT use other python functions, as the environment does not support them.
You should first reason in the verbal form, then use a skill with < execute > and </ execute >.
You should avoid apologies and thanks in the responses.
When you finish exploring and analyzing with the provided skills, please return at most 3 files with the full path only. Each full path should be placed in a single line, INSTEAD OF BROKEN WITH MULTIPLE LINES.
The returned files should be separated by new lines ordered by most to least important, wrapped with ``` and NOTHING ELSE.
An example for a full output:
```
full_path_to_file1.py
full_path_to_file2.py
```
Now let's start!
### GitHub Problem Description ###
{issue content}
### Repository Structure ###
{file structure}
###
Listing 5: A non-execution set of tools empowering the simulation of agentic interaction trajectories.
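To make the two editing skills concrete, a minimal sketch of the `str_replace` and `insert` semantics described in the system prompt (our own simplified implementation; the production tool set additionally handles windowed viewing and search):

```python
from pathlib import Path

def str_replace(path: str, old_str: str, new_str: str) -> None:
    """Replace the first occurrence of old_str with new_str in the file."""
    text = Path(path).read_text()
    if old_str not in text:
        raise ValueError(f"old_str not found in {path}")
    Path(path).write_text(text.replace(old_str, new_str, 1))

def insert(path: str, insert_line: int, new_str: str) -> None:
    """Insert new_str below line `insert_line` (0 inserts at the top)."""
    lines = Path(path).read_text().splitlines(keepends=True)
    if not 0 <= insert_line <= len(lines):
        raise ValueError("insert_line out of the file's line range")
    if not new_str.endswith("\n"):
        new_str += "\n"
    lines.insert(insert_line, new_str)
    Path(path).write_text("".join(lines))
```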
After completing the initial localization stage, the agent is guided into the code-editing phase through a follow-up instruction: "Now let's move on to Stage 2 and edit the updates. Remember, you can still decide at any point whether a file actually requires modification." We retain partial rollouts from Stage 1, provided that the localization results include at least one correct file.
In Stage 2, we first simulate the agent's interaction by allowing it to open incorrectly localized files, and we artificially inject agentic reasoning patterns such as "I realize that I do not need to modify this file" after inspecting the file content. This procedure is designed to strengthen the self-reflection ability of the agent by exposing it to false-positive contexts regarding the issue to be solved.
Subsequently, we transcribe the ground-truth PR commit pack into trajectory form: each commit message is treated as the agent's reasoning step, and each code update is represented as the corresponding action, expressed through the `str_replace` or `insert` tools. These interactions are appended to the trajectory, followed by a terminating `stop` call. Due to storage constraints on repository snapshots, this trajectory simulation is applied to only a subset of PRs. Overall, this process contributes $\sim$10B tokens of agentic interaction data to the mid-training recipe. Future directions for scaling this component of the data recipe include leveraging the idea of environment scaling (yang2025swesmith).
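The transcription step can be sketched as pairing each commit message with its recovered edits, rendered in the tool-call syntax of Listing 5; the input shape is an assumption for illustration:

```python
def commits_to_trajectory(commits):
    """Turn a PR commit pack into (reasoning, action) turns plus a stop call.

    `commits` is assumed to be a list of (message, edits) pairs, where each
    edit is a (path, old_str, new_str) triple recovered from the commit diff.
    """
    turns = []
    for message, edits in commits:
        actions = [
            f'str_replace("{path}", old_str, new_str)\n'
            f"old_str={old}\nnew_str={new}"
            for path, old, new in edits
        ]
        turns.append({"reasoning": message, "actions": actions})
    turns.append({"reasoning": "All edits applied.", "actions": ["stop()"]})
    return turns
```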
Training. We perform mid-training using a standard next-token-prediction objective, initialized from the Qwen2.5-72B-Base (qwen2025qwen25technicalreport) model. We upsample the synthetic part of the data by a factor of 4 during mid-training, inspired by the practice in grattafiori2024llama; qwen2025qwen25technicalreport; gu2025data. A global batch size of 256 with a maximum sequence length of 32K tokens is used, optimizing for the long-context capabilities necessary for real-world software engineering tasks. The learning rate is set to 2e-5, with a cosine decay schedule and a minimum learning rate of 2e-6. The warm-up phase covers approximately 3 billion tokens, followed by learning-rate decay until approximately 150 billion tokens are processed.
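The stated learning-rate schedule corresponds to a standard linear-warmup, cosine-decay curve parameterized by tokens seen; a sketch of the schedule shape (not the training code) under those numbers:

```python
import math

def lr_at(tokens: float, peak: float = 2e-5, floor: float = 2e-6,
          warmup: float = 3e9, total: float = 150e9) -> float:
    """Learning rate after `tokens` tokens: linear warmup, then cosine decay."""
    if tokens < warmup:
        return peak * tokens / warmup  # warm up over ~3B tokens
    progress = min(1.0, (tokens - warmup) / (total - warmup))
    return floor + 0.5 * (peak - floor) * (1.0 + math.cos(math.pi * progress))
```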
Appendix B Docker environments
Table 3: The sources of the Docker environments used in the development of Kimi-Dev.
| Dataset | Source | #Instances |
| --- | --- | --- |
| SWE-Gym (pan2024training) | https://huggingface.co/datasets/SWE-Gym/SWE-Gym/ | 2,356 |
| SWE-bench-extra (badertdinov2024scaling) | https://huggingface.co/datasets/nebius/SWE-bench-extra/ | 3,846 |
| R2E-Gym-Lite (jain2025r2e) | https://huggingface.co/datasets/R2E-Gym/R2E-Gym-Lite | 3,671 |
Docker environment construction. To validate non-ground-truth patches generated by model rollouts and to expand our dataset, we required executable Docker environments. We combined publicly available datasets with custom-configured Docker environments (see Table 3). Among them, SWE-Gym and R2E-Gym-Lite provide open-source Docker images that we use directly. For the dataset lacking Docker support (SWE-bench-extra), we implemented an automated configuration method:
1. Initialize a Docker environment with fixed dependencies.
2. Select the Python version based on the commit year.
3. Install dependencies via requirements.txt and `pip install -e .`.
4. Resolve ModuleNotFound errors during test execution.
5. Validate success if a FAIL_TO_PASS test transitions from failing (without gt_patch) to passing (with gt_patch).
Out of 6.38k SWE-bench-extra instances, 3,846 environments are successfully constructed and subsequently used for cold-start and RL training.
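The validation criterion in the final step amounts to a differential test run. A minimal sketch, assuming a `run_tests(env, patch)` helper that executes the FAIL_TO_PASS tests inside the candidate environment and reports whether they pass (the helper is ours, for illustration):

```python
def environment_is_valid(env, gt_patch, run_tests) -> bool:
    """Accept an auto-configured environment only if the FAIL_TO_PASS tests
    fail without the ground-truth patch and pass once it is applied."""
    fails_without_patch = not run_tests(env, patch=None)
    passes_with_patch = run_tests(env, patch=gt_patch)
    return fails_without_patch and passes_with_patch
```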
Appendix C More Details of RL training
C.1 Prompt set selection
<details>
<summary>figs/sec3_rl_scaling/RL_bugfix_ablation_figure.png Details</summary>

### Visual Description
Pass rate versus RL training steps (0-500) for the 72B BugFixer, with positive example reinforcement ("Bugfixer RL", red) and without it (blue). Both start near 32% and climb with fluctuations; the run with positive reinforcement reaches about 46% around step 450, slightly above the ablation's peak of roughly 45% at step 350.
</details>
Figure 8: Ablation of positive example reinforcement during 72B Bugfixer RL.
In the main text, we introduce the adaptive prompt selection method for RL training. Specifically, we construct an initial prompt set of 1,200 problems by selecting those with pass@16 $>$ 0 from SWE-Gym (pan2024training), SWE-bench-extra (badertdinov2025swerebenchautomatedpipelinetask), and R2E-gym (jain2025r2e). Then, every 100 training steps, we expand the prompt set by adding 500 new problems. These additional problems are randomly sampled and filtered from the pool of problems for which the current model has pass@16 = 0, thereby progressively increasing the difficulty and forming a proper curriculum.
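A minimal sketch of this adaptive expansion, with a caller-supplied `pass_at_k` evaluator (names are ours, for illustration):

```python
import random

def expand_prompt_set(prompt_set, pool, model, step, pass_at_k,
                      every: int = 100, add: int = 500, k: int = 16):
    """Every `every` steps, add `add` problems the current model cannot solve.

    Sampling from the pass@16 = 0 pool progressively raises difficulty,
    forming a curriculum as the model improves.
    """
    if step == 0 or step % every != 0:
        return prompt_set
    unsolved = [p for p in pool if pass_at_k(model, p, k) == 0]
    return prompt_set + random.sample(unsolved, min(add, len(unsolved)))
```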
C.2 RL experiment ablation
Figure 9 shows the performance of the Qwen2.5-14B model in RL experiments, where both the BugFixer and the TestWriter exhibit clear scaling behavior over training steps.
<details>
<summary>figs/sec3_rl_scaling/quick_plot_M3_bf.png Details</summary>

### Visual Description
14B BugFixer RL: response token length (left axis, fluctuating around 6,500-7,300) and pass rate (right axis) over 200 RL training steps. The pass rate rises overall from about 26% to around 32%, peaking near 34% at step 150, while token length drifts modestly upward.
</details>
(a) 14B BugFixer
<details>
<summary>figs/sec3_rl_scaling/quick_plot_M3_tw.png Details</summary>

### Visual Description
14B TestWriter RL: response token length (left axis) grows steadily from about 3,000 to 5,400 over 200 RL training steps, while the reproduced rate (right axis) trends upward with noise from roughly 19% to 25%.
</details>
(b) 14B TestWriter
Figure 9: RL scaling experiments on Qwen2.5-14B model.
Furthermore, Figure 8 illustrates the effect of our proposed positive example reinforcement. We incorporated this improvement in the later stage of training and found that BugFixer RL achieved better performance with it than without. The main reason is that positive examples reinforce correct reasoning patterns, enabling the model to converge more quickly, especially when its exploration ability diminishes in the later stages. We note that a similar technique was also applied in the development of the Seed1.5-thinking model (seed2025seed1).
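One simple way to realize positive example reinforcement, sketched under our own reading of the trick (not the exact implementation): cache previously successful rollouts per problem and mix a fraction of them back into later policy-update batches, so correct reasoning patterns keep receiving positive gradient even after exploration narrows.

```python
from collections import defaultdict
import random

positive_cache: dict[str, list] = defaultdict(list)

def build_batch(fresh_rollouts, mix_ratio: float = 0.25):
    """Mix fresh rollouts with cached positive examples for the policy update."""
    for r in fresh_rollouts:
        if r["reward"] > 0:
            positive_cache[r["problem_id"]].append(r)
    cached = [r for rs in positive_cache.values() for r in rs]
    n_mix = min(int(len(fresh_rollouts) * mix_ratio), len(cached))
    return fresh_rollouts + random.sample(cached, n_mix)
```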
Appendix D Agent Infrastructure
We present a unified sandbox architecture designed for hybrid cloud cluster environments, leveraging Kubernetes orchestration to provide researchers with a flexible, secure, and highly scalable code execution platform. The architecture is accessible through a Python SDK and implements an ephemeral, use-and-destroy paradigm that ensures resource efficiency and security isolation. The system demonstrates exceptional scalability, supporting over 10,000 concurrent sandbox instances while maintaining performance stability. This infrastructure serves as a foundational platform for diverse computational scenarios, including automated code evaluation and RL experiments.
The system employs a sidecar container pattern to efficiently orchestrate and manage a heterogeneous collection of over 25,000 Docker images sourced from multiple datasets. This architectural choice enables optimal resource utilization while maintaining isolation between different execution environments. Additionally, we have developed a proprietary Software Engineering (SWE) image-building pipeline that continuously expands the repository of supported images, thereby enhancing the system's adaptability to diverse computational workloads and research requirements.
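As an illustration of the ephemeral, use-and-destroy paradigm, a hypothetical client-side sketch follows; every name here is invented for exposition and does not reflect the actual SDK surface:

```python
from contextlib import contextmanager

class _Sandbox:
    # Stand-in for the real SDK object; methods are placeholders only.
    def __init__(self, image: str):
        self.image = image  # assumed: provisioning a pod on the cluster

    def run(self, cmd: str) -> str:
        return f"[{self.image}] ran: {cmd}"  # no real execution in this sketch

    def destroy(self) -> None:
        pass  # assumed: tears the instance down, freeing cluster resources

@contextmanager
def sandbox(image: str):
    """Ephemeral lifecycle: create a sandbox, use it once, always destroy it."""
    box = _Sandbox(image)
    try:
        yield box
    finally:
        box.destroy()

with sandbox("swe-env/sympy:py39") as box:
    print(box.run("pytest -x"))
```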
Appendix E Case Studies under Agentless
E.1 BugFixer Traces
We provide a BugFixer response trace from the code repair stage (see Listing 6). The buggy line in the repository was return self[i, j - pos - other.cols]; the model quickly identified the faulty offset and corrected it to return self[i, j - other.cols]. Although some repetitive self-reflection followed, the increased response length did improve answer accuracy and robustness in our experiments.
⊲think⊳
Okay, let's tackle this problem. The user reported an issue where the Matrix.col_insert() method isn't working correctly. The example given shows that after inserting a matrix V into M at position 3, the resulting matrix has the inserted columns shifted incorrectly. The expected result would have the inserted columns at positions 3 and 4, but the actual output shows them starting at position 3, but the original matrix's columns after the insert are shifted upwards instead of remaining in their original positions.
First, I need to understand how col_insert works. Looking at the code provided, especially in the MatrixShaping class in sympy/matrices/common.py, the _eval_col_insert method is responsible for inserting columns. The current implementation shifts the existing columns to the right by the number of columns inserted. However, the bug might be in how the original matrix's columns are being shifted.
Let me look at the existing code for _eval_col_insert:
    def _eval_col_insert(self, pos, other):
        cols = self.cols

        def entry(i, j):
            if j < pos:
                return self[i, j]
            elif pos <= j < pos + other.cols:
                return other[i, j - pos]
            return self[i, j - pos - other.cols]

        return self._new(self.rows, self.cols + other.cols,
                         lambda i, j: entry(i, j))
Wait, the problem here is in the last case. When j is greater than or equal to pos + other.cols, the code subtracts (pos + other.cols) from j. But that's incorrect because the original columns after the inserted ones should start at j = pos + other.cols. So for the original matrix's columns that come after the inserted ones, their new positions should be j + other.cols. But in the current code, when j is in the original matrix's columns (after the inserted ones), the code is subtracting (pos + other.cols), which would incorrectly shift them.
Wait, let's think again. The original code for the entry function:
    if j < pos:
        return self[i, j]  # original columns before pos
    elif pos <= j < pos + other.cols:
        return other[i, j - pos]  # inserted columns
    return self[i, j - pos - other.cols]  # original columns after the inserted ones
Wait, no. Let's say the original matrix has cols = 6. We insert 2 columns at pos = 3. The new total cols is 6 + 2 = 8. The original columns after pos 3 (columns 3, 4, 5) should now be at positions 5, 6, 7. But according to the code, when j is >= 3 + 2 = 5 (since pos + other.cols = 3 + 2 = 5), then the code returns self[i, j - 5 - 2] = self[i, j - 7]. Wait, that's not right. For example, if j = 5 (the new column 5), then j - pos - other.cols = 5 - 3 - 2 = 0. So the original column 0 would be mapped to the new column 5. That's definitely wrong.
Ah, here's the mistake. The code is subtracting (pos + other.cols) from j, but that's incorrect. The original columns after the inserted ones should be shifted to the right by other.cols. So, the correct calculation should be j - other.cols. For example, original column 3 becomes 3 + 2 = 5, original column 4 becomes 6, etc.
So the line in the code:
    return self[i, j - pos - other.cols]
should be:
    return self[i, j - other.cols]
Because the inserted columns have taken up other.cols positions starting at pos, so the original columns after pos are shifted by other.cols to the right. Therefore, to get the correct original column index, we subtract other.cols from the current j.
So the fix is to change the subtraction from (pos + other.cols) to other.cols. Let me check:
Original code:
    return self[i, j - pos - other.cols]
Corrected code:
    return self[i, j - other.cols]
Wait, no. Wait, the original columns after the inserted block (which starts at pos and has other.cols columns) would start at j = pos + other.cols. So for those j values, the original column index would be j - (pos + other.cols). Wait, no. Let's think with numbers.
Original matrix has 6 columns. Insert 2 columns at position 3. The new matrix has 8 columns. The original columns 0, 1, 2 remain in positions 0, 1, 2. The inserted 2 columns are at positions 3 and 4. The original columns 3, 4, 5 now become 5, 6, 7.
So for j in 5, 6, 7 (original columns 3, 4, 5), the mapping is:
5 → 3
6 → 4
7 → 5
So the formula should be original_column = j - inserted_columns. In this case, inserted_columns is other.cols. So the code should subtract other.cols from j when accessing the original matrix's columns after the inserted block.
Therefore, the line:
    return self[i, j - pos - other.cols]
should be:
    return self[i, j - other.cols]
Because pos is where the insertion starts, but the original columns after the inserted block are shifted by other.cols. Let's verify with the example:
pos = 3, other.cols = 2. For j = 5 (new column 5), the original column index is 5 - 2 = 3, which is correct. For j = 6, 6 - 2 = 4; j = 7, 7 - 2 = 5. That's correct.
So the error in the original code was subtracting (pos + other.cols) instead of other.cols. The pos is the starting position of the insertion, but after inserting other.cols columns, the original columns after the inserted block are shifted by other.cols. Therefore, the correct subtraction is other.cols.
Thus, the fix is to change the line in the _eval_col_insert method from:
    return self[i, j - pos - other.cols]
to:
    return self[i, j - other.cols]
Now, looking back at the code in the _eval_col_insert function:
The code for the else clause is:
    return self[i, j - pos - other.cols]
Which is incorrect. Changing that to j - other.cols should fix the problem.
So the SEARCH block would look for the lines:
    def _eval_col_insert(self, pos, other):
        cols = self.cols

        def entry(i, j):
            if j < pos:
                return self[i, j]
            elif pos <= j < pos + other.cols:
                return other[i, j - pos]
            return self[i, j - pos - other.cols]

        return self._new(self.rows, self.cols + other.cols,
                         lambda i, j: entry(i, j))
And the REPLACE block would change the last part to:
            elif pos <= j < pos + other.cols:
                return other[i, j - pos]
            return self[i, j - other.cols]
So the correction is in the return statement of the entry function. The original code subtracted (pos + other.cols) from j, but it should subtract other.cols instead.
Therefore, the correct SEARCH/REPLACE block would be:
### sympy/matrices/common.py
<<<<<<< SEARCH
    def _eval_col_insert(self, pos, other):
        cols = self.cols

        def entry(i, j):
            if j < pos:
                return self[i, j]
            elif pos <= j < pos + other.cols:
                return other[i, j - pos]
            return self[i, j - pos - other.cols]

        return self._new(self.rows, self.cols + other.cols,
                         lambda i, j: entry(i, j))
=======
    def _eval_col_insert(self, pos, other):
        def entry(i, j):
            if j < pos:
                return self[i, j]
            elif pos <= j < pos + other.cols:
                return other[i, j - pos]
            return self[i, j - other.cols]

        return self._new(self.rows, self.cols + other.cols,
                         lambda i, j: entry(i, j))
>>>>>>> REPLACE
This change corrects the column shifting calculation for the original matrix's columns after the inserted block. Instead of subtracting (pos + other.cols), we subtract other.cols, which properly shifts the original columns to the right by the number of inserted columns.
⊲/think⊳
```python
### sympy/matrices/common.py
<<<<<<< SEARCH
    def _eval_col_insert(self, pos, other):
        cols = self.cols

        def entry(i, j):
            if j < pos:
                return self[i, j]
            elif pos <= j < pos + other.cols:
                return other[i, j - pos]
            return self[i, j - pos - other.cols]

        return self._new(self.rows, self.cols + other.cols,
                         lambda i, j: entry(i, j))
=======
    def _eval_col_insert(self, pos, other):
        def entry(i, j):
            if j < pos:
                return self[i, j]
            elif pos <= j < pos + other.cols:
                return other[i, j - pos]
            return self[i, j - other.cols]

        return self._new(self.rows, self.cols + other.cols,
                         lambda i, j: entry(i, j))
>>>>>>> REPLACE
```
Listing 6: Reasoning trajectory example for BugFixer.
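As a quick sanity check of the offset arithmetic the trace converges on (our own illustration, not part of the model output), the snippet below evaluates both formulas on the worked example of a 6-column matrix with 2 columns inserted at position 3:
```python
# Compare the buggy and fixed offset formulas from the trace above.
# Original matrix: 6 columns; insert other_cols = 2 columns at pos = 3;
# the new width is 8 and trailing columns occupy positions 5..7.
pos, other_cols, old_cols = 3, 2, 6

for j in range(pos + other_cols, old_cols + other_cols):
    buggy = j - pos - other_cols   # what the repository code computed
    fixed = j - other_cols         # what the model's patch computes
    print(f"new column {j}: buggy -> old column {buggy}, fixed -> old column {fixed}")

# new column 5: buggy -> old column 0, fixed -> old column 3
# new column 6: buggy -> old column 1, fixed -> old column 4
# new column 7: buggy -> old column 2, fixed -> old column 5
```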
E.2 TestWriter False Positive Cases
During the RL training process, we observed that the TestWriter component produces some false positives: the generated tests may contain coverage gaps, so that test patches generated by the model are mistakenly judged correct. Consequently, the measured improvement of TestWriter does not fully reflect its actual performance. In this section, we provide two illustrative examples of this phenomenon, and leave further enhancement of TestWriter as future work.
Listing 7 shows the ground-truth test patch for instance django__django-16877, with Listing 8 showing the test patch generated by the TestWriter. A detailed comparison reveals that the model-generated test cases are not comprehensive: two locations should have been tested, but only one was checked, and the effect of mark_safe was not considered. In practice, this allowed the model-predicted patch to pass the TestWriter reward check, resulting in a false positive.
The second example instance is sympy__sympy-13974, with Listings 9 and 10 showing the ground-truth and model-generated test patches. In this case, the coverage of the generated test is also insufficient. The generated test does not account for the need to satisfy the associativity property, e.g., assert tensor_product_simp(TP(A, B)**x) == TP(A**x, B**x). As a consequence, it fails to serve as a reliable judge for BugFixer.
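To make this failure mode concrete, the sketch below mimics the reward check with a toy stand-in for the escapeseq behavior (our illustration; the actual harness runs the generated test patch in the sandbox). A weak test that only covers plain strings accepts a patch that mishandles mark_safe, while the fuller ground-truth-style test rejects it:
```python
# Toy reward check illustrating a TestWriter false positive
# (illustrative only, not the actual harness).
SAFE = object()  # stand-in for Django's mark_safe wrapper

def mark_safe(s):
    return (SAFE, s)

def escapeseq_patch(items):
    # A model-predicted "fix" that forgets the mark_safe case:
    # it escapes everything, including strings already marked safe.
    out = []
    for item in items:
        s = item[1] if isinstance(item, tuple) and item[0] is SAFE else item
        out.append(s.replace("&", "&amp;").replace("<", "&lt;"))
    return out

def weak_generated_test(fn):
    # Like Listing 8: only plain strings are checked.
    return fn(["&", "<"]) == ["&amp;", "&lt;"]

def ground_truth_test(fn):
    # Like Listing 7: safe strings must pass through unescaped.
    plain_ok = fn(["&", "<"]) == ["&amp;", "&lt;"]
    safe_ok = fn([mark_safe("&")]) == ["&"]
    return plain_ok and safe_ok

print(weak_generated_test(escapeseq_patch))   # True  -> reward granted
print(ground_truth_test(escapeseq_patch))     # False -> the gap is exposed
```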
```diff
diff --git a/tests/template_tests/filter_tests/test_escapeseq.py b/tests/template_tests/filter_tests/test_escapeseq.py
new file mode 100644
--- /dev/null
+++ b/tests/template_tests/filter_tests/test_escapeseq.py
@@ -0,0 +1,59 @@
+from django.test import SimpleTestCase
+from django.utils.safestring import mark_safe
+
+from ..utils import setup
+
+
+class EscapeseqTests(SimpleTestCase):
+    """
+    The "escapeseq" filter works the same whether autoescape is on or off,
+    and has no effect on strings already marked as safe.
+    """
+
+    @setup(
+        {
+            "escapeseq_basic": (
+                '{{ a|escapeseq|join:", " }} -- {{ b|escapeseq|join:", " }}'
+            ),
+        }
+    )
+    def test_basic(self):
+        output = self.engine.render_to_string(
+            "escapeseq_basic",
+            {"a": ["x&y", "<p>"], "b": [mark_safe("x&y"), mark_safe("<p>")]},
+        )
+        self.assertEqual(output, "x&amp;y, &lt;p&gt; -- x&y, <p>")
+
+    @setup(
+        {
+            "escapeseq_autoescape_off": (
+                '{% autoescape off %}{{ a|escapeseq|join:", " }}'
+                " -- "
+                '{{ b|escapeseq|join:", "}}{% endautoescape %}'
+            )
+        }
+    )
+    def test_autoescape_off(self):
+        output = self.engine.render_to_string(
+            "escapeseq_autoescape_off",
+            {"a": ["x&y", "<p>"], "b": [mark_safe("x&y"), mark_safe("<p>")]},
+        )
+        self.assertEqual(output, "x&amp;y, &lt;p&gt; -- x&y, <p>")
+
+    @setup({"escapeseq_join": '{{ a|escapeseq|join:"<br/>" }}'})
+    def test_chain_join(self):
+        output = self.engine.render_to_string("escapeseq_join", {"a": ["x&y", "<p>"]})
+        self.assertEqual(output, "x&amp;y<br/>&lt;p&gt;")
+
+    @setup(
+        {
+            "escapeseq_join_autoescape_off": (
+                '{% autoescape off %}{{ a|escapeseq|join:"<br/>" }}{% endautoescape %}'
+            ),
+        }
+    )
+    def test_chain_join_autoescape_off(self):
+        output = self.engine.render_to_string(
+            "escapeseq_join_autoescape_off", {"a": ["x&y", "<p>"]}
+        )
+        self.assertEqual(output, "x&amp;y<br/>&lt;p&gt;")
```
Listing 7: GT test patch of django__django-16877.
```diff
diff --git a/tests/template_tests/filter_tests/test_safeseq.py b/tests/template_tests/filter_tests/test_safeseq.py
--- a/tests/template_tests/filter_tests/test_safeseq.py
+++ b/tests/template_tests/filter_tests/test_safeseq.py
@@ -20,3 +20,20 @@
     def test_safeseq02(self):
         output = self.engine.render_to_string("safeseq02", {"a": ["&", "<"]})
         self.assertEqual(output, "&, < -- &, <")
+
+    # New tests for escapeseq
+    @setup({"escapeseq01": '{{ a|escapeseq|join:", " }}'})
+    def test_escapeseq01(self):
+        output = self.engine.render_to_string("escapeseq01", {"a": ["&", "<"]})
+        self.assertEqual(output, "&amp;, &lt;")
+
+    @setup(
+        {
+            "escapeseq02": (
+                '{% autoescape off %}{{ a|escapeseq|join:", " }}{% endautoescape %}'
+            )
+        }
+    )
+    def test_escapeseq02(self):
+        output = self.engine.render_to_string("escapeseq02", {"a": ["&", "<"]})
+        self.assertEqual(output, "&amp;, &lt;")
```
Listing 8: Model-predicted test patch of django__django-16877.
```diff
diff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py
--- a/sympy/physics/quantum/tests/test_tensorproduct.py
+++ b/sympy/physics/quantum/tests/test_tensorproduct.py
@@ -10,7 +10,7 @@
 from sympy.physics.quantum.density import Density
 from sympy.core.trace import Tr

-A, B, C = symbols('A,B,C', commutative=False)
+A, B, C, D = symbols('A,B,C,D', commutative=False)
 x = symbols('x')

 mat1 = Matrix([[1, 2*I], [1 + I, 3]])
@@ -47,6 +47,11 @@ def test_tensor_product_commutator():

 def test_tensor_product_simp():
     assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C)
+    # tests for Pow-expressions
+    assert tensor_product_simp(TP(A, B)**x) == TP(A**x, B**x)
+    assert tensor_product_simp(x*TP(A, B)**2) == x*TP(A**2, B**2)
+    assert tensor_product_simp(x*(TP(A, B)**2)*TP(C, D)) == x*TP(A**2*C, B**2*D)
+    assert tensor_product_simp(TP(A, B)-TP(C, D)**x) == TP(A, B)-TP(C**x, D**x)

 def test_issue_5923():
```
Listing 9: GT test patch for sympy__sympy-13974.
```diff
diff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py
--- a/sympy/physics/quantum/tests/test_tensorproduct.py
+++ b/sympy/physics/quantum/tests/test_tensorproduct.py
@@ -47,6 +47,27 @@
 def test_tensor_product_simp():
     assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C)
+
+    # Test tensor product powers
+    a = symbols('a', commutative=False)
+    t1 = TP(1, 1)*TP(1, 1)
+    t1_simp = tensor_product_simp(t1)
+    assert t1_simp == TP(1, 1), f"Expected 1x1, got {t1_simp}"
+
+    # Test with Pauli matrices
+    from sympy.physics.paulialgebra import Pauli
+    t2 = TP(1, Pauli(3))*TP(1, Pauli(3))
+    t2_simp = tensor_product_simp(t2)
+    assert t2_simp == TP(1, 1), f"Expected 1x1, got {t2_simp}"
+
+    # Test substitution cases from issue
+    expr1 = TP(1, 1)*TP(1, a)
+    result1 = expr1.subs(a, 1)
+    assert tensor_product_simp(result1) == TP(1, 1), f"Substitution 1 failed: {result1}"
+
+    expr2 = TP(1, Pauli(3))*TP(1, a)
+    result2 = expr2.subs(a, Pauli(3))
+    assert tensor_product_simp(result2) == TP(1, 1), f"Substitution 2 failed: {result2}"

 def test_issue_5923():
```
Listing 10: Model-predicted test patch of sympy__sympy-13974.
Appendix F Emergent Parallel Scaling in Kimi-Dev
[Image description condensed: Figure 10 plots pass rate (%) on SWE-bench Verified against the number of aggregated patch candidates. The pass rate rises from 48.0% (1 patch) to 49.2% (5), 50.2% (10), 51.2% (20), and 51.6% (40), with clearly diminishing returns at higher candidate counts.]
Figure 10: Emergent parallel scaling results on SWE-bench Verified.
Compared to self-play methods that leverage execution at test time, parallel scaling (chen2025parallel; pan2025learning) is a recently introduced test-time scaling method that aggregates multiple independent rollouts for additional improvement. While previous works incentivize parallel scaling ability through training, or train a separate reward model (pan2024training; jain2025r2e) for patch selection, we observe that the parallel scaling effect emerges in our model without any specific training. Specifically, patch candidates from parallel rollouts of BugFixer are used to prompt the model to summarize their differences and compose a new bugfix patch. Listing 11 shows the prompt template for parallel scaling. In the template, problem_statement is the GitHub issue, and trajs_content is the concatenated content of the patch candidates.
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{problem_statement}
--- END ISSUE ---
Below are some thinking trajectories, each from an LLM reasoning model. Only one trajectory is right.
--- BEGIN FILE ---
```
{trajs_content}
```
--- END FILE ---
Please first summarize and analyze the key differences between the trajectories, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
The summary of the key differences between the trajectories should be in the thinking part.
The results in Figure 10 show that the performance of parallel aggregation improves as the number of patch candidates in the prompt increases. The advantage of this scaling paradigm over majority voting lies in leveraging the model's own capacity to analyze the candidate patches, rather than merely weighting answers by their frequency of occurrence. Listing 12 gives a full prompt example with 14 input patch candidates. As Listing 12 is long, readers may skip ahead to Listing 13 to observe the model behavior directly.
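For concreteness, the following minimal sketch (our illustration, not the exact pipeline; model.generate stands in for an LLM call, and the template is abbreviated from Listing 11) shows how such an aggregation prompt can be assembled from k parallel BugFixer rollouts:
```python
# Minimal sketch of the aggregation step (illustrative only). k independent
# rollouts are drawn for the same issue, then the same model is prompted,
# via the Listing 11 template, to compare them and compose a final patch.

PROMPT_TEMPLATE = """\
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{problem_statement}
--- END ISSUE ---
Below are some thinking trajectories, each from an LLM reasoning model. Only one trajectory is right.
--- BEGIN FILE ---
{trajs_content}
--- END FILE ---
Please first summarize and analyze the key differences between the trajectories,
and then generate *SEARCH/REPLACE* edits to fix the issue.
"""  # abbreviated; see Listing 11 for the full template

def aggregate(model, problem_statement, k=14):
    # Parallel rollouts; in practice these are sampled independently.
    candidates = [model.generate(problem_statement) for _ in range(k)]
    trajs_content = "\n".join(
        f"Answer {i + 1}:\n{patch}" for i, patch in enumerate(candidates))
    prompt = PROMPT_TEMPLATE.format(
        problem_statement=problem_statement, trajs_content=trajs_content)
    return model.generate(prompt)  # model-composed final patch

class EchoModel:  # trivial stand-in so the sketch runs end-to-end
    def generate(self, prompt):
        return f"<patch for: {prompt[:40]}...>"

print(aggregate(EchoModel(), "Matrix.col_insert() shifts trailing columns", k=3))
```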
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Data <@filename isn't relative to the YAML file
The [docs say](https://gabbi.readthedocs.io/en/latest/format.html#data):
> If the value is a string that begins with <@ then the rest of the string is treated as the name of a file to be loaded from the same directory as the YAML file.
But I haven't found it works like this unless I cd into the directory containing the yaml file.
`_load_data_file` says:
```python
path = os.path.join(self.test_directory, os.path.basename(filename))
```
this does a few things:
- makes the path begin with `self.test_directory` (defaults to `.`, the current working directory)
- disguards any folders in the <@ `filename` path
- appends `filename`
This means, if I'm in `/`, and I have a test that says `data: <@cake.jpg`, I cannot run my tests as advised in the docs:
`gabbi-run -- /my/test.yaml /my/other.yaml`
`FileNotFoundError: [Errno 2] No such file or directory: './cake.jpg'`
So in our test running script, we have to start with `cd tests/`.
My preference, would be to make the path relative to the location of the yaml file. That way, I can run my tests from whatever directory.
Obviously this isn't possible when redirecting with `<` as the filename is never seen by gabbi. So I'm happy to discuss other ideas.
---
Furthermore I can't keep my test data in a child directory, unless I cd into that directory like:
```shell
cd yaml_tests/test_data
gabbi-run -- ../test_things.yaml
```
So for that reason, I'd like to allow directories like `<@test_data/filename.txt` to be included in the path.
---
Happy to write the patch, if we come up with an approach here.
--- END ISSUE ---
Below are some thinking trajectories, each from an LLM reasoning model.
--- BEGIN FILE ---
```
Answer 1:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 2:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
            test_directory = os.path.dirname(input_file) or '.'
            success = run_suite(fh, handler_objects, host, port,
                                prefix, force_ssl, failfast, test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 3:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
=======
        for input_file in input_files:
            test_directory = os.path.dirname(input_file)
            with open(input_file, 'r') as fh:
                success = run_suite(
                    fh, handler_objects, host, port, prefix, force_ssl,
                    failfast, test_directory=test_directory)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
>>>>>>> REPLACE
```
Answer 4:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            # Determine test directory based on input file's location
            test_directory = os.path.dirname(os.path.abspath(input_file)) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 5:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import sys
import unittest
=======
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 6:
```python
### gabbi/driver.py
<<<<<<< SEARCH
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, path, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
=======
        test_file_dir = os.path.dirname(test_file)
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, test_file_dir, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
            test_directory = os.path.dirname(input_file)
            success = run_suite(fh, handler_objects, host, port,
                                prefix, force_ssl, failfast, test_directory)
>>>>>>> REPLACE
```
Answer 7:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
=======
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast, test_directory=test_dir)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
>>>>>>> REPLACE
```
Answer 8:
```python
### gabbi/runner.py
<<<<<<< SEARCH
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    # Determine test directory based on input handle
    if hasattr(handle, 'name'):
        test_directory = os.path.dirname(os.path.abspath(handle.name))
    else:
        test_directory = '.'
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
Answer 9:
```python
### gabbi/driver.py
<<<<<<< SEARCH
    for test_file in glob.iglob('%s/*.yaml' % path):
        if '_' in os.path.basename(test_file):
            warnings.warn(exception.GabbiSyntaxWarning(
                "'_' in test filename %s. This can break suite grouping."
                % test_file))
        if intercept:
            host = str(uuid.uuid4())
        suite_dict = utils.load_yaml(yaml_file=test_file)
        test_base_name = os.path.splitext(os.path.basename(test_file))[0]
        if all_test_base_name:
            test_base_name = '%s_%s' % (all_test_base_name, test_base_name)
        if require_ssl:
            if 'defaults' in suite_dict:
                suite_dict['defaults']['ssl'] = True
            else:
                suite_dict['defaults'] = {'ssl': True}
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, path, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
=======
    for test_file in glob.iglob('%s/*.yaml' % path):
        if '_' in os.path.basename(test_file):
            warnings.warn(exception.GabbiSyntaxWarning(
                "'_' in test filename %s. This can break suite grouping."
                % test_file))
        if intercept:
            host = str(uuid.uuid4())
        suite_dict = utils.load_yaml(yaml_file=test_file)
        test_base_name = os.path.splitext(os.path.basename(test_file))[0]
        if all_test_base_name:
            test_base_name = '%s_%s' % (all_test_base_name, test_base_name)
        if require_ssl:
            if 'defaults' in suite_dict:
                suite_dict['defaults']['ssl'] = True
            else:
                suite_dict['defaults'] = {'ssl': True}
        # Use absolute path of the YAML file's directory
        test_file_dir = os.path.dirname(os.path.abspath(test_file))
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, test_file_dir, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
>>>>>>> REPLACE
```
Answer 10:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        force_ssl=force_ssl, failfast=failfast,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, force_ssl=force_ssl, failfast=failfast,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
            test_dir = os.path.dirname(input_file)
            success = run_suite(fh, handler_objects, host, port,
                                prefix, force_ssl, failfast,
                                test_directory=test_dir)
>>>>>>> REPLACE
```
```python
### gabbi/case.py
<<<<<<< SEARCH
        path = os.path.join(self.test_directory, os.path.basename(filename))
=======
        path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
```
Answer 11:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port,
        None, None, prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
            test_dir = os.path.dirname(input_file)
            success = run_suite(fh, handler_objects, host, port,
                                prefix, force_ssl, failfast,
                                test_directory=test_dir)
>>>>>>> REPLACE
```
Answer 12:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
Answer 13:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
Answer 14:
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(input_file, fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_directory = '.'
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(input_file, handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_directory = os.path.dirname(input_file) if input_file else '.'
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    if not input_files:
        success = run_suite(sys.stdin, handler_objects, host, port,
                            prefix, force_ssl, failfast)
=======
    if not input_files:
        success = run_suite(None, sys.stdin, handler_objects, host, port,
                            prefix, force_ssl, failfast)
>>>>>>> REPLACE
```
```
--- END FILE ---
Please first summarize and analyze the key differences between the trajectories, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
The summary of the key differences between the trajectories should be in the thinking part.
Listing 12: Example of emergent parallel scaling (cdent__gabbi-191 in SWE-bench-extra), the prompt input part.
Listing 13 shows how the model successfully recognizes the correct answer candidate in Listing 12 and generates a novel patch, with additional reasoning, to solve the issue:
⊲think⊳
Okay, let's tackle this problem. The user is having an issue where when they specify a data file in their YAML test using the <@ syntax, the path isn't resolved relative to the YAML file's location. Instead, it's using the current working directory. They want the path to be relative to the YAML file so they can run tests from any directory.
First, I need to understand how the current code is handling the data file paths. The user mentioned that in _load_data_file, the code uses os.path.join(self.test_directory, os.path.basename(filename)). This takes the test_directory (which defaults to '.'), discards any directories in the filename, and appends the basename. That's why if the YAML is in /my/tests/test.yaml and the data is <@data.txt, it looks for ./data.txt instead of /my/tests/data.txt.
The key here is to make sure that the test_directory is set to the directory where the YAML file is located. Then, when joining with the filename (without basename), it would correctly resolve the relative path.
Looking at the provided code snippets, the main changes are in runner.py where the test_directory is determined. The user's proposed solutions involve modifying the run_suite function to accept a test_directory parameter, which is derived from the input file's directory.
In the original code, when running a suite from an input file, the test_directory is hardcoded to '.'. The various answers modify this by passing the directory of the input file as test_directory. For example, in Answer 10, they add the test_directory parameter to run_suite and set it to os.path.dirname(input_file). Then, in case.py, they change the path joining to use the filename directly instead of basename.
Wait, the critical part is in case.py's _load_data_file method. The original code uses os.path.basename(filename), which strips any directories from the filename. So if the YAML has <@test_data/file.txt, the basename is file.txt, and it's joined with test_directory. But if test_directory is the YAML's directory, then it should be YAML_DIR/test_data/file.txt. However, the current code would do test_directory + file.txt, which would be YAML_DIR/file.txt. That's incorrect. So the problem is that the code is taking the basename, thus ignoring any subdirectories in the filename.
Ah, so the fix here is to not take the basename. Instead, the filename after <@ should be treated as a relative path from the YAML's directory. So in case.py's _load_data_file, instead of using os.path.basename(filename), we should use the filename as-is. Then, os.path.join(self.test_directory, filename) would correctly resolve the path relative to the YAML's directory.
But wait, the filename could be an absolute path. For example, if the YAML has <@/absolute/path/file.txt, then os.path.join would just use the absolute path. However, the documentation says that the filename is relative to the YAML file. So absolute paths might not be intended. But the code should handle that.
So combining two changes: in runner.py, set test_directory to the YAML file's directory, and in case.py, stop using basename and use the full filename.
Looking at Answer 10, there's a change in case.py from os.path.basename(filename) to filename. That's exactly the fix needed. Then, in runner.py, when running the suite, pass the test_directory as the directory of the input file.
So the key files to modify are:
1. gabbi/runner.py: When calling run_suite, pass test_directory as the directory of the input file. This involves modifying the run_suite function to accept test_directory as a parameter and then passing it when the input file is processed.
2. gabbi/case.py: Change the path construction from using os.path.basename(filename) to using the filename directly. This allows subdirectories in the filename to be included in the path.
Looking through the provided code snippets, the one that does both is Answer 10. Let me check:
In Answer 10, the runner.py changes add the test_directory parameter and set it to os.path.dirname(input_file). Then, in case.py, the path is changed to os.path.join(self.test_directory, filename). That would correctly resolve the path relative to the YAML's directory, including any subdirectories in the filename.
But wait, the original code in case.py uses os.path.basename(filename). So changing that to filename means that if the input is 'test_data/file.txt', it's joined as test_directory/test_data/file.txt. If test_directory is the YAML's directory, that's correct.
... [intermediate reasoning omitted for display brevity] ...
But perhaps better to use os.path.abspath and then dirname to handle relative paths correctly. But maybe the existing code already works. For example, if the input_file is 'tests/test.yaml', then os.path.dirname would return 'tests', which when joined with os.getcwd() would be correct. Wait, no. os.path.dirname just gives the directory part of the path. So if input_file is a relative path, like 'tests/test.yaml', then test_directory would be 'tests', and when os.path.join is called, it's relative to the current working directory. But the code in suitemaker.test_suite_from_dict might expect an absolute path. However, the original code uses '.' as the test_directory, which is relative. So perhaps using the directory of the input file as test_directory is sufficient.
Another thing to consider: when the YAML is loaded from stdin (like via redirect), the test_directory remains '.'. That's acceptable, as the documentation mentions that when using <, the filename isn't seen by Gabbi, so it's not possible to resolve the path relative to the YAML's original location. So in that case, the test_directory defaults to '.'.
Now, in case.py:
Original line:
path = os.path.join(self.test_directory, os.path.basename(filename))
Modified line:
path = os.path.join(self.test_directory, filename)
So the SEARCH/REPLACE block for case.py:
### gabbi/case.py
<<<<<<< SEARCH
        path = os.path.join(self.test_directory, os.path.basename(filename))
=======
        path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
Putting all these together, the final edits are:
1. Add import os in runner.py.
2. Modify run_suite to accept test_directory and pass it to test_suite_from_dict.
3. In the loop over input_files, compute test_directory as the directory of input_file and pass it to run_suite.
4. Modify case.py's path construction.
So the final answer would include these four SEARCH/REPLACE blocks.
⊲/think⊳
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast, test_directory=test_directory)
>>>>>>> REPLACE
```
```python
### gabbi/case.py
<<<<<<< SEARCH
        path = os.path.join(self.test_directory, os.path.basename(filename))
=======
        path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
```
These changes:
1. Add 'os' import needed for path handling
2. Modify run_suite to accept a test_directory parameter
3. Use the input file's directory as the test_directory when processing files
4. Update data file path resolution to preserve directory structure from YAML relative paths
5. Remove basename handling that was stripping directory information from data file paths
The net effect is that data files specified with '<@filename' in YAML tests will now be resolved relative to their YAML file's directory, supporting both flat and nested directory structures for test data. When tests are run from stdin (using '<'), the default directory remains the current working directory as before.
Listing 13: Example of emergent parallel scaling (cdent__gabbi-191 in SWE-bench-extra), the model response part.
Appendix G Analysis for SWE-Agents
G.1 Stage Annotation for SWE-Agent Trajectories
In this section, we present how we use a frontier LLM to annotate the SWE-Agent stage to which each interaction turn within a trajectory rollout belongs. While we briefly introduced the five stages suggested in the SWE-Agent system prompt in Section 4, we attach the relevant excerpt in Listing 14 for clarity:
...
Follow these steps to resolve the issue:
1. As a first step, it might be a good idea to find and read code relevant to the <pr_description>
2. Create a script to reproduce the error and execute it with `python <filename.py>` using the bash tool, to confirm the error
3. Edit the source code of the repo to resolve the issue
4. Rerun your reproduce script and confirm that the error is fixed!
5. Think about edgecases and make sure your fix handles them as well
...
Listing 14: The excerpt of the five-stage declaration in the SWE-Agent prompt.
It should be noted that the agent can flexibly transition across the five stages during its working process. For example, after Stage 4, when the agent reruns the reproduction script, error messages may remain, and the agent then returns to Stage 3 to refine its code repair with reflection. Similar backtracking can be observed from Stage 5 to Stage 3, where the initial code repair passes the initial test script composed in Stage 2 but fails an edge case the agent proposes in Stage 5; a small sketch for counting such backtracking events is given below.
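Such backtracking can be quantified directly from per-turn stage labels once the annotation described next is available. The snippet below is our own minimal illustration, with stages encoded as the integers 1-5:
```python
# Count backtracking events in an annotated trajectory (our illustration):
# a backtrack is any transition from a later stage to an earlier one,
# e.g., 4 -> 3 after a failed rerun, or 5 -> 3 after an edge case fails.
from collections import Counter

def backtrack_events(stages):
    events = Counter()
    for prev, cur in zip(stages, stages[1:]):
        if cur < prev:
            events[(prev, cur)] += 1
    return events

# Example: locate -> reproduce -> edit -> rerun -> edit again (4 -> 3)
# -> rerun -> edge cases -> edit again (5 -> 3) -> rerun -> edge cases.
print(backtrack_events([1, 2, 3, 4, 3, 4, 5, 3, 4, 5]))
# Counter({(4, 3): 1, (5, 3): 1})
```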
To further analyze the BugFixer and the reflection skill prior, we need to identify the stage to which each turn along the SWE-Agent trajectory belongs. As no strict boundaries or special prompt notes are set between consecutive stages, we leverage an LLM for annotation. The annotation system prompt we use with kimi-k2-0711-preview is shown in Listing 15:
```
You are a professional inspector that can analyze the provided agentic interaction trajectory.
The trajectory you are going to analyze is made by an agent that interacts with a computer to solve tasks. This agent has access to the following functions:
---- BEGIN FUNCTION #1: bash ----
Description: Execute a bash command in the terminal.
Parameters:
(1) command (string, required): The bash command to execute. Can be empty to view additional logs when previous exit code is `-1`. Can be `ctrl+c` to interrupt the currently running process.
---- END FUNCTION #1 ----
---- BEGIN FUNCTION #2: submit ----
Description: Finish the interaction when the task is complete OR if the assistant cannot proceed further with the task.
No parameters are required for this function.
---- END FUNCTION #2 ----
---- BEGIN FUNCTION #3: str_replace_editor ----
Description: Custom editing tool for viewing, creating and editing files
* State is persistent across command calls and discussions with the user
* If `path` is a file, `view` displays the result of applying `cat -n`. If `path` is a directory, `view` lists non-hidden files and directories up to 2 levels deep
* The `create` command cannot be used if the specified `path` already exists as a file
* If a `command` generates a long output, it will be truncated and marked with `<response clipped>`
* The `undo_edit` command will revert the last edit made to the file at `path`
Notes for using the `str_replace` command:
* The `old_str` parameter should match EXACTLY one or more consecutive lines from the original file. Be mindful of whitespaces!
* If the `old_str` parameter is not unique in the file, the replacement will not be performed. Make sure to include enough context in `old_str` to make it unique
* The `new_str` parameter should contain the edited lines that should replace the `old_str`
Parameters:
(1) command (string, required): The commands to run. Allowed options are: `view`, `create`, `str_replace`, `insert`, `undo_edit`.
Allowed values: [`view`, `create`, `str_replace`, `insert`, `undo_edit`]
(2) path (string, required): Absolute path to file or directory, e.g. `/repo/file.py` or `/repo`.
(3) file_text (string, optional): Required parameter of `create` command, with the content of the file to be created.
(4) old_str (string, optional): Required parameter of `str_replace` command containing the string in `path` to replace.
(5) new_str (string, optional): Optional parameter of `str_replace` command containing the new string (if not given, no string will be added). Required parameter of `insert` command containing the string to insert.
(6) insert_line (integer, optional): Required parameter of `insert` command. The `new_str` will be inserted AFTER the line `insert_line` of `path`.
(7) view_range (array, optional): Optional parameter of `view` command when `path` points to a file. If none is given, the full file is shown. If provided, the file will be shown in the indicated line number range, e.g. [11, 12] will show lines 11 and 12. Indexing at 1 to start. Setting `[start_line, -1]` shows all lines from `start_line` to the end of the file.
---- END FUNCTION #3 ----
The agent was instructed with the following:
* A python code repository has been uploaded in the directory /testbed.
* Implement the necessary changes to the repository so that the requirements specified in the <pr_description> are met.
* All changes to any of the test files described in the <pr_description> have already been taken care of. This means no need to modify the testing logic or any of the tests in any way.
* Make the minimal changes to non-tests files in the /testbed directory to ensure the <pr_description> is satisfied.
The agent was suggested to follow the following steps to resolve the issue:
1. As a first step, it might be a good idea to find and read code relevant to the <pr_description>
2. Create a script to reproduce the error and execute it with `python <filename.py>` using the bash tool, to confirm the error
3. Edit the source code of the repo to resolve the issue
4. Rerun your reproduce script and confirm that the error is fixed!
5. Think about edgecases and make sure your fix handles them as well
The agent was encouraged to think thoroughly, and it's fine if it's very long.
You are going to inspect this agent's interaction trajectory with a computer to solve the given task in the <pr_description>. One turn of interaction contains a pair of OBSERVATION and ACTION, where the OBSERVATION comes from the computer, and the ACTION is taken by the agent.
For each turn of interaction, determine which step (of the aforementioned five) this turn belongs to. Output a single number (1~5) ONLY in a separate line as your classification (DO NOT OUTPUT ANY OTHER WORDS THAN THE DIGIT).
You can think before making the inspection. When thinking, wrap your thought with <think> and </think>. Don't forget to output your final inspection after thinking.
```
Listing 15: The annotation prompt for SWE-Agent stages.
To provide a clearer understanding of the trajectory, we incorporate most of the tool descriptions and instructions from the SWE-Agent system prompt into the annotation system prompt. The annotation is conducted in a multi-round manner, leveraging the agent's previous actions and observations, as well as the stage classifications of earlier turns, to better exploit contextual information. At the $i$-th round of annotation, the observation-action pair from turn $i$ of the SWE-Agent trajectory is appended as input, and the annotator is expected to output the corresponding stage classification.
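For concreteness, the sketch below shows how such a multi-round annotation loop can be implemented. It is a minimal illustration under our own assumptions: `annotate_stage` stands in for any chat-completion call to the annotator model, and the digit-extraction heuristic is ours, not the exact production code.
```python
# Minimal sketch of the multi-round stage annotation (assumptions noted above).
ANNOTATION_SYSTEM_PROMPT = "..."  # the full prompt from Listing 15

def annotate_trajectory(turns, annotate_stage):
    """Label each (observation, action) turn with a stage in 1..5.

    Earlier turns and their labels remain in the running message list,
    so the annotator can exploit context when classifying turn i.
    """
    messages = [{"role": "system", "content": ANNOTATION_SYSTEM_PROMPT}]
    stages = []
    for observation, action in turns:
        messages.append({
            "role": "user",
            "content": f"OBSERVATION:\n{observation}\n\nACTION:\n{action}",
        })
        reply = annotate_stage(messages)  # hypothetical LLM call, returns text
        # The prompt asks for a lone digit after optional <think> content,
        # so take the last digit in the reply as the classification.
        digit = next(ch for ch in reversed(reply) if ch in "12345")
        stages.append(int(digit))
        messages.append({"role": "assistant", "content": reply})
    return stages
```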
G.2 Comparative Study
Based on the automatic stage annotation in the above section, we present a comparative study on sympy__sympy-20590, inspecting Kimi-Dev under the Agentless framework alongside each of the Base, MT, SFT, and RL priors with SWE-Agent adaptation.
The problem statement of sympy__sympy-20590 is listed in Listing 16:
````
Symbol instances have __dict__ since 1.7?
In version 1.6.2 Symbol instances had no `__dict__` attribute
```python
>>> sympy.Symbol('s').__dict__
------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-e2060d5eec73> in <module>
----> 1 sympy.Symbol('s').__dict__
AttributeError: 'Symbol' object has no attribute '__dict__'
>>> sympy.Symbol('s').__slots__
('name',)
```
This changes in 1.7 where `sympy.Symbol('s').__dict__` now exists (and returns an empty dict)
I may misinterpret this, but given the purpose of `__slots__`, I assume this is a bug, introduced because some parent class accidentally stopped defining `__slots__`.
````
Listing 16: The problem statement of sympy__sympy-20590.
It is observed that the main difficulty in resolving the issue lies in identifying the "some parent class" referenced in the problem. In fact, the hints text of this problem, which reflects the discussion among developers under the original issue, reveals a much more in-depth investigation (Listing 17):
```
It seems that Basic now inherits `DefaultPrinting` which I guess doesn't have slots. I'm not sure if it's a good idea to add `__slots__` to that class as it would then affect all subclasses.
...
Using slots can break multiple inheritance but only if the slots are non-empty I guess. Maybe this means that any mixin should always declare empty slots or it won't work properly with subclasses that have slots...
I see that `EvalfMixin` has `__slots__ = ()`.
I guess we should add empty slots to DefaultPrinting then.
```
Listing 17: The excerpted hints text of sympy__sympy-20590.
According to the discussion, it is clear that the code repair should "add empty slots to DefaultPrinting", which naturally leads to the file implementing the printer (sympy/core/_print_helpers.py, which is also the file updated by the ground-truth patch). However, the hints_text information in the test set is not allowed to be used during problem solving, which challenges the reasoner or the agent to figure out "the parent class that stopped defining `__slots__`" autonomously.
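The underlying Python mechanism is easy to demonstrate in isolation: an instance lacks `__dict__` only if every class in its MRO defines `__slots__`, so a single slot-less mixin silently reintroduces it. A minimal, self-contained illustration with toy classes (not the sympy code itself):
```python
# Toy illustration of the bug mechanism: a mixin without __slots__
# silently gives every subclass instance a __dict__ again.
class SlotlessMixin:            # forgot __slots__ = ()
    pass

class EmptySlotsMixin:
    __slots__ = ()

class Broken(SlotlessMixin):
    __slots__ = ('name',)

class Fixed(EmptySlotsMixin):
    __slots__ = ('name',)

print(hasattr(Broken(), '__dict__'))  # True: the mixin reintroduced __dict__
print(hasattr(Fixed(), '__dict__'))   # False: every base declares __slots__
```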
We first examine Kimi-Dev under Agentless. None of the 40 runs succeeded in producing the correct file localization. In most cases, the updates are made to sympy/core/symbol.py, which is a plausible choice since the reported problem is triggered by sympy.Symbol('s'), and symbol.py should contain the definition of the Symbol class. This line of reasoning, however, diverges from the correct solution. A small number of runs attempt alternative localizations, such as sympy/core/{basic.py, expr.py}, but these are also incorrect.
The limitation of the Agentless framework becomes more obvious when we shift focus to the SWE-Agent trajectories. For all of the Base, MT, SFT, and RL priors with SWE-Agent adaptation, the resulting trajectories always contain a test-reproducing phase (marked as Stage 2 by the annotator), with a generated script that reads like:
```python
from sympy import Symbol
# Try to access __dict__ on a Symbol instance
s = Symbol('s')
print("Symbol.__slots__:", s.__slots__)
try:
    print("Symbol.__dict__:", s.__dict__)
except AttributeError as e:
    print("Error:", e)
```
Listing 18: An example reproducing script in Stage 2 within the SWE-Agent trajectories for sympy__sympy-20590.
It is trivial for an agent to generate a reproducing script of this form, as it directly transcribes the case described in the problem statement. Executing the script provides an unambiguous signal of whether the issue is resolved: the attribute s.__dict__ should no longer exist, and the expected `Error:` message should appear in the script output. Indeed, the initial completion of Stage 3 across all trajectories shows that none of the agents succeeded on the first attempt; the incomplete fix is then consistently exposed in Stage 4, when the script is rerun and s.__dict__ still persists. This progression highlights the advantage of the SWE-Agent framework over the Agentless paradigm, as the agent is afforded additional opportunities for iterative refinement rather than being limited to a single attempt.
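This check is also trivially automatable. As a hedged sketch (the script path /testbed/reproduce.py is our assumption; each agent picks its own file name), the rerun-and-verify step could look like:
```python
import subprocess

# Rerun the Stage-2 reproducing script (Listing 18) and inspect its output.
result = subprocess.run(
    ["python", "/testbed/reproduce.py"],  # hypothetical path
    capture_output=True, text=True,
)
# Listing 18 prints "Error: ..." once __dict__ is absent again, so its
# presence in stdout is the unambiguous signal that the issue is resolved.
print("issue resolved:", "Error:" in result.stdout)
```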
We continue with the agents adapted from the Base, MT, and SFT priors. Due to the difficulty of the issue, it is non-trivial to localize the error through agentic reasoning: the interactions show that these agents switched among several candidate files but failed to reach the correct one. The only exception is the agent adapted from the RL prior, which achieves the sole success on this instance through accurate agentic reasoning. We attach the intermediate process from the initial Stage-3 moment to the Stage-3 revisit (refining the initial code repair) within the successful trajectory, tagged by the stage annotator:
```
...
OBSERVATION:
Symbol.__slots__: ('name',)
Symbol.__dict__: {}
ACTION:
Hmm, it seems like the issue is still there. Let's check if there are any other classes in the hierarchy that might be missing `__slots__`:
<function=bash>
<parameter=command>python -c "from sympy import Symbol; s = Symbol('s'); for cls in type(s).__mro__: print(cls.__name__, 'has __slots__:', hasattr(cls, '__slots__'))"</parameter>
</function>
OBSERVATION:
  File "<string>", line 1
    from sympy import Symbol; s = Symbol('s'); for cls in type(s).__mro__: print(cls.__name__, 'has __slots__:', hasattr(cls, '__slots__'))
                                               ^
SyntaxError: invalid syntax
ACTION:
Let's try a different approach:
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/check_slots.py</parameter>
<parameter=file_text>
from sympy import Symbol
s = Symbol('s')
for cls in type(s).__mro__:
    print(f"{cls.__name__:15} has __slots__: {hasattr(cls, '__slots__')}")
    if hasattr(cls, '__slots__'):
        print(f"{' '*16} __slots__ value: {cls.__slots__}")
</parameter>
</function>
OBSERVATION:
File created successfully at: /testbed/check_slots.py
ACTION:
<function=bash>
<parameter=command>python /testbed/check_slots.py</parameter>
</function>
OBSERVATION:
Symbol          has __slots__: True
                 __slots__ value: ('name',)
AtomicExpr      has __slots__: True
                 __slots__ value: ()
Atom            has __slots__: True
                 __slots__ value: ()
Expr            has __slots__: True
                 __slots__ value: ()
Boolean         has __slots__: True
                 __slots__ value: ()
Basic           has __slots__: True
                 __slots__ value: ('_mhash', '_args', '_assumptions')
Printable       has __slots__: False
EvalfMixin      has __slots__: True
                 __slots__ value: ()
object          has __slots__: False
ACTION:
Now I see the issue! `Printable` doesn't have `__slots__` defined. Let's check the `Printable` class:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/sympy/core/_print_helpers.py</parameter>
<parameter=view_range>[1, 30]</parameter>
</function>
OBSERVATION:
Here's the result of running `cat -n` on /testbed/sympy/core/_print_helpers.py:
     1  """
     2  Base class to provide str and repr hooks that `init_printing` can overwrite.
     3
     4  This is exposed publicly in the `printing.defaults` module,
     5  but cannot be defined there without causing circular imports.
     6  """
     7
     8  class Printable:
     9      """
    10      The default implementation of printing for SymPy classes.
    11
    12      This implements a hack that allows us to print elements of built-in
    13      Python containers in a readable way. Natively Python uses ``repr()``
    14      even if ``str()`` was explicitly requested. Mix in this trait into
    15      a class to get proper default printing.
    16
    17      This also adds support for LaTeX printing in jupyter notebooks.
    18      """
    19
    20      # Note, we always use the default ordering (lex) in __str__ and __repr__,
    21      # regardless of the global setting. See issue 5487.
    22      def __str__(self):
    23          from sympy.printing.str import sstr
    24          return sstr(self, order=None)
    25
    26      __repr__ = __str__
    27
    28      def _repr_disabled(self):
    29          """
    30          No-op repr function used to disable jupyter display hooks.
ACTION:
We need to add `__slots__ = ()` to the `Printable` class as well:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/sympy/core/_print_helpers.py</parameter>
<parameter=old_str>
class Printable:
    """
    The default implementation of printing for SymPy classes.

    This implements a hack that allows us to print elements of built-in
    Python containers in a readable way. Natively Python uses ``repr()``
    even if ``str()`` was explicitly requested. Mix in this trait into
    a class to get proper default printing.

    This also adds support for LaTeX printing in jupyter notebooks.
    """
</parameter>
<parameter=new_str>
class Printable:
    """
    The default implementation of printing for SymPy classes.

    This implements a hack that allows us to print elements of built-in
    Python containers in a readable way. Natively Python uses ``repr()``
    even if ``str()`` was explicitly requested. Mix in this trait into
    a class to get proper default printing.

    This also adds support for LaTeX printing in jupyter notebooks.
    """
    __slots__ = ()
</parameter>
</function>
...
```
Listing 19: The intermediate reflection process between the two Stage-3 moments of the successful trajectory in sympy__sympy-20590.
Listing 19 shows the advanced reflection capability: checking `__slots__` along the entire class hierarchy successfully leads to the observation of the erroneous Printable. The agent then follows the routine of rerunning the test and observes that the test script (Listing 18) passes. This chain of actions recovers and resonates with the developer discussion in hints_text (Listing 17), showing the superior reflection skill of the RL prior over the other priors.
G.3 Pattern Comparison between SFT with Expert Data and End-to-End RL
In this section, we compare the interaction-trajectory patterns of the RL prior after (i) $1.5\times 2^{28}$ SWE-Agent expert SFT tokens (referred to below as "heavy SFT") and (ii) $2^{21}$ SWE-Agent expert SFT tokens plus end-to-end SWE-Agent RL (referred to below as "light SFT + end-to-end RL"), as a qualitative study for Figure 7.
We select an instance solved by both methods, matplotlib__matplotlib-13989, as a case study, and attach the actions along the two interaction trajectories for comparison.
The action patterns of the heavy SFT model are shown in Listing 20:
````
ACTION 0:
I'll help you implement the necessary changes to fix the issue where `hist()` doesn't respect `range=...` when `density=True`. Let's follow the steps:
1. First, let's find and read the relevant code:
<function=bash>
<parameter=command>find /testbed -type f -name "*.py" | grep -v "test_"</parameter>
</function>
ACTION 1:
Let's look for the histogram implementation since that's what we need to fix. The PR mentions that the bug is in the `hist()` function when `density=True` and `range=...` is specified.
<function=bash>
<parameter=command>grep -r "def hist" --include="*.py" /testbed | grep -v "test_"</parameter>
</function>
ACTION 2:
The main histogram implementation appears to be in `/testbed/lib/matplotlib/axes/_axes.py`. Let's look at that file:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
</function>
ACTION 3:
Let me look at the histogram implementation specifically:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6365, 6899]</parameter>
</function>
ACTION 4:
Let me also look at the `histogram_bin_edges` function since it's used in the code:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[40, 60]</parameter>
</function>
ACTION 5:
2. Let's create a script to reproduce the error:
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/reproduce.py</parameter>
...
ACTION 6:
Let's run it to confirm the issue:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 7:
The issue is confirmed: when `density=True` is specified, the `range=(0, 1)` parameter is not respected. The bins should start at 0 and end at 1, but they don't.
3. Let's fix the issue. Looking at the code, the problem is in the `hist` method and specifically around line 6689-6690:
```python
if density and not stacked:
    hist_kwargs = dict(density=density)
```
The issue is that when `density=True`, the `range` parameter gets lost because we're not including it in the `hist_kwargs`. Let's modify the code to fix this:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
...
ACTION 8:
4. Let's run the reproduce script again to confirm the fix:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 9:
Great! The fix works. Now the bins start at 0 and end at 1 as expected.
5. Let's think about edge cases:
1. When `range=None` - This should use the min/max of the data, which is already handled correctly by the existing code
2. When `range` is specified but `density=False` - The range should be respected, which it already is
3. When `range` is specified and `density=True` - The range should be respected, which our fix now ensures
4. When `bins` is a sequence - The range should be ignored, which is already handled by the code
5. When `range` includes values outside the data range - The histogram should include empty bins for the range outside the data, which is already handled by numpy's histogram function
Let me test some of these edge cases to be sure:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/reproduce.py</parameter>
...
ACTION 10:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 11:
All test cases look good:
1. When `range=(0, 1)` and `density=True`, the bins start at 0 and end at 1
2. When `range=None` and `density=True`, the bins use the min/max of the data
3. When `range=(0, 1)` and `density=False`, the bins start at 0 and end at 1
4. When `bins` is a sequence, the range parameter is ignored and the sequence is used directly
5. When `range=(-0.5, 1.5)`, the bins start at -0.5 and end at 1.5
The fix looks good and handles all edge cases correctly. Let me submit the changes:
<function=submit>
</function>
````
Listing 20: The interaction made by the RL prior with $1.5\times 2^{28}$ SWE-Agent SFT tokens on matplotlib__matplotlib-13989.
The action patterns of the light SFT + end-to-end RL model are shown in Listing 21:
```
ACTION 0:
Understood. Let's tackle this problem step by step.
First, I need to find the relevant code in the /testbed directory. The user mentioned that the code is in /testbed, so I'll start by listing the files there.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed</parameter>
</function>
ACTION 1:
Hmm, the /testbed directory has a lot of files, but I need to focus on the code related to the hist function. The hist function is part of matplotlib.pyplot, so I should look for the pyplot module in the /testbed/lib/matplotlib directory.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/pyplot.py</parameter>
</function>
ACTION 2:
Okay, I'll look at the hist function in pyplot.py.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/pyplot.py</parameter>
<parameter=view_range>[2610, 2625]</parameter>
</function>
ACTION 3:
The hist function in pyplot.py simply calls the hist method of the current axes. So the actual implementation is in the Axes class.
I'll look for the Axes.hist method in the /testbed/lib/matplotlib/axes directory.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes</parameter>
</function>
ACTION 4:
The Axes.hist method is likely in _axes.py, which is where most of the plotting methods are defined.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
</function>
ACTION 5:
I'll look at the hist method in _axes.py.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6365, 6400]</parameter>
</function>
ACTION 6:
I'll look at more of the hist method to see where the range is handled.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6400, 6450]</parameter>
</function>
ACTION 7:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6450, 6500]</parameter>
</function>
ACTION 8:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6500, 6550]</parameter>
</function>
ACTION 9:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6550, 6600]</parameter>
</function>
ACTION 10:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6600, 6650]</parameter>
</function>
ACTION 11:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6650, 6700]</parameter>
</function>
ACTION 12:
Ah, here's the key part. The code calculates the bins using np.histogram, and if the range is specified, it's passed to histogram_bin_edges. But when density is True, the code may be overriding the range.
Wait, in the code:
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
But when density is True, the code may be using the default range of the data, not the specified range.
Wait, no. The code for calculating the bins is:
    if bins is None:
        bins = rcParams['hist.bins']
    if bin_range is not None:
        bin_range = self.convert_xunits(bin_range)
    if bin_range is None:
        xmin = np.inf
        xmax = -np.inf
        for xi in x:
            if len(xi):
                xmin = min(xmin, np.nanmin(xi))
                xmax = max(xmax, np.nanmax(xi))
        if not np.isnan([xmin, xmax]).any() and not (xmin > xmax):
            bin_range = (xmin, xmax)
    # If bins are not specified either explicitly or via range,
    # we need to figure out the range required for all datasets,
    # and supply that to np.histogram.
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
    # List to store all the top coordinates of the histograms
    tops = []
    mlast = None
    # Loop through datasets
    for i in range(nx):
        # this will automatically overwrite bins,
        # so that each histogram uses the same bins
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
        m = m.astype(float)  # causes problems later if it's an int
        if mlast is None:
But when density is True, the code passes density=True to np.histogram. However, when density is True, np.histogram normalizes the counts, but the bin edges are still determined by the range.
Wait, but the problem is that when density=True, the range is not respected. The user's code is:
    _, bins, _ = plt.hist(np.random.rand(10), "auto", range=(0, 1), density=True)
    print(bins)
and the output is:
    [0.00331535 0.18930174 0.37528813 0.56127453 0.74726092 0.93324731]
which is not from 0 to 1. But when density=False, the range is respected.
This suggests that when density=True, the code is not using the specified range for the bins.
But why?
Let me look at the code again.
When the code calculates the bins:
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
But when len(x) is 1 (as in the user's code), the code enters the else clause and sets hist_kwargs['range'] = bin_range.
Then, in the loop:
    for i in range(nx):
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
But wait, if bins is None, then np.histogram will use the range. But if bins is a string (like 'auto'), then np.histogram will calculate the bins based on the data and the range.
Wait, in the user's code, bins is 'auto' and range is (0, 1). When density=True, the code should calculate the bins based on the range (0, 1).
But in the code, when len(x) is 1, the code sets hist_kwargs['range'] = bin_range. Then, in the loop, the code calls np.histogram(x[i], bins, weights=w[i], **hist_kwargs).
But if bins is a string (like 'auto'), then np.histogram will calculate the bins based on the data and the range. So the range should be respected.
But in the user's code, the bins are not from 0 to 1. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is puzzling. Let me try to reproduce the error.
I'll create a script in the /testbed directory.
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/reproduce_error.py</parameter>
...
ACTION 13:
I'll run the script.
<function=bash>
<parameter=command>python /testbed/reproduce_error.py</parameter>
</function>
ACTION 14:
Hmm, the output is not from 0 to 1. This confirms the error.
Now, I need to find where in the code the range is not being passed when density=True.
Let me look at the code again.
When len(x) is 1, the code sets hist_kwargs['range'] = bin_range. Then, in the loop:
    for i in range(nx):
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
But if bins is a string (like 'auto'), then np.histogram will calculate the bins based on the data and the range. So the range should be respected.
But in the user's code, the bins are not from 0 to 1. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is confusing. Let me look at the code for np.histogram.
The np.histogram function has a range parameter. If range is specified, the bin edges are calculated based on that range. If the data is outside the range, it is ignored.
In the user's code, the data is between 0 and 1, so the range (0, 1) should be respected.
But when density=True, the code is not respecting the range. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is puzzling. Let me try to modify the code in _axes.py.
I'll look for the line where hist_kwargs is set.
In the code:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range.
Ah, here's the problem. If density is True and stacked is False, the code sets hist_kwargs to dict(density=density), which overwrites any previous hist_kwargs, including the range.
This is the root of the problem. The code is overwriting hist_kwargs, which includes the range, when density is True and stacked is False.
So, in the code:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range. So the range is not passed to np.histogram.
This is the cause of the problem. The solution is to add the range to hist_kwargs in this case.
So, the code should be:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs.setdefault('density', density)
But wait, hist_kwargs may already have the range. So, the code should not overwrite hist_kwargs, but add to it.
The code is:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range. So the range is lost.
The solution is to modify this code to add to hist_kwargs, not overwrite it.
So, the code should be:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs.setdefault('density', density)
But wait, hist_kwargs may not exist. Let me look at the code.
The code for hist_kwargs is:
    hist_kwargs = dict()
    # if the bin_range is not given, compute without nan numpy
    # does not do this for us when guessing the range (but will
    # happily ignore nans when computing the histogram).
    if bin_range is None:
        xmin = np.inf
        xmax = -np.inf
        for xi in x:
            if len(xi):
                xmin = min(xmin, np.nanmin(xi))
                xmax = max(xmax, np.nanmax(xi))
        # make sure we have seen at least one non-nan and finite
        # value before we reset the bin range
        if not np.isnan([xmin, xmax]).any() and not (xmin > xmax):
            bin_range = (xmin, xmax)
    # If bins are not specified either explicitly or via range,
    # we need to figure out the range required for all datasets,
    # and supply that to np.histogram.
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
    # List to store all the top coordinates of the histograms
    tops = []
    mlast = None
    # Loop through datasets
    for i in range(nx):
        # this will automatically overwrite bins,
        # so that each histogram uses the same bins
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
        m = m.astype(float)  # causes problems later if it's an int
        if mlast is None:
Ah, here's the problem. The code sets hist_kwargs = dict(density=density), which overwrites any previous hist_kwargs, including the range.
So, if density is True and stacked is False, the code overwrites hist_kwargs with dict(density=density), which does not include the range. So the range is not passed to np.histogram.
This is the cause of the problem. The solution is to modify this code to add to hist_kwargs, not overwrite it.
So, the code should be:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs.setdefault('density', density)
But hist_kwargs may not exist. Wait, hist_kwargs is initialized as:
    hist_kwargs = dict()
So, the code can be:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs['density'] = density
This way, hist_kwargs includes the density and the range.
This would fix the problem. Let me test this.
I'll modify the code in _axes.py.
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
...
ACTION 15:
I'll run the reproduce_error.py script again.
<function=bash>
<parameter=command>python /testbed/reproduce_error.py</parameter>
</function>
ACTION 16:
Great, the fix works. The bins are now from 0 to 1.
Now, I can submit the changes.
<function=submit>
</function>
```
Listing 21: The interaction made by the RL prior with $2^{21}$ SWE-Agent SFT tokens and end-to-end SWE-Agent RL on matplotlib__matplotlib-13989.
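Both trajectories converge on the same root cause: rebinding `hist_kwargs` to a fresh dict discards the previously stored `'range'` entry, while updating the existing dict preserves it. The sketch below reproduces the difference in isolation; it is a simplification of the matplotlib code quoted above, not the actual patch.
```python
import numpy as np

x = np.random.rand(10)
bin_range = (0, 1)

# Buggy pattern: rebinding the dict drops the stored range.
hist_kwargs = {'range': bin_range}
hist_kwargs = dict(density=True)           # 'range' is lost here
_, edges = np.histogram(x, 'auto', **hist_kwargs)
print(edges[0], edges[-1])                 # edges follow the data, not (0, 1)

# Fixed pattern: update the existing dict instead of replacing it.
hist_kwargs = {'range': bin_range}
hist_kwargs['density'] = True              # 'range' is preserved
_, edges = np.histogram(x, 'auto', **hist_kwargs)
print(edges[0], edges[-1])                 # exactly 0.0 and 1.0
```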
It can be seen that the two types of patterns are significantly different from each other:
- The patterns of the heavy SFT model are more regulated, with explicitly labeled steps 1 to 5 along the intermediate reasoning process, and the reasoning content is generally short. This reflects behavior shaped by the heavy SFT data from proprietary models.
- The patterns of the light SFT + end-to-end RL model, in contrast, are much less regulated. More interaction turns are spent on repository exploration, and a significantly larger number of tokens is generated at the code-edit steps before each action is taken (see the bookkeeping sketch after this list). This is the reflective behavior shaped by end-to-end RL (and less influenced by the expert data), as the signature of reasoning-intensive skill priors is retained in the agentic scenario.
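These qualitative differences can be quantified with the stage annotations from Appendix G.1. A minimal sketch of the bookkeeping, assuming each annotated turn is stored as a (stage, action_text) pair (the data layout is our assumption):
```python
from collections import Counter

def stage_statistics(annotated_turns):
    """Count turns and approximate action lengths per stage (1..5)."""
    turns_per_stage = Counter()
    tokens_per_stage = Counter()
    for stage, action in annotated_turns:
        turns_per_stage[stage] += 1
        # Whitespace tokenization as a crude proxy for token counts.
        tokens_per_stage[stage] += len(action.split())
    return turns_per_stage, tokens_per_stage
```
Under this bookkeeping, the heavy SFT trajectories would show fewer Stage-1 turns and shorter Stage-3 actions than the light SFT + end-to-end RL trajectories.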
G.4 Generalization to Other Tasks
In this section, we study whether the skills incentivized by Agentless training and SWE-Agent adaptation through SWE-Smith trajectories can transfer to other SWE tasks. We use SWE-bench-Live and SWE-bench Multilingual as our testbeds.
SWE-bench-Live (zhang2025swe) is a benchmark for real-world issue resolution, evaluating AI systems on software engineering tasks. Using an automated curation pipeline, it is updated monthly to ensure fresh tasks and rigorous, contamination-free evaluation. For our experiments, we select the default set of 300 tasks, with data collected between October 2024 and March 2025. Compared to SWE-bench Verified, SWE-bench-Live exhibits a higher degree of distributional shift.
SWE-bench Multilingual (yang2025swesmith) introduces 300 curated tasks from 42 GitHub repositories across 9 programming languages, including Rust, Java, PHP, Ruby, JavaScript/TypeScript, Go, and C/C++, covering domains such as web frameworks, data tools, core utilities, and libraries. Compared to SWE-bench Verified, which focuses exclusively on Python, SWE-bench Multilingual exhibits greater linguistic and domain diversity, posing additional challenges in cross-language generalization and transferability of software engineering capabilities.
Similar to previous experiments, we evaluate four model stages as the priors: the original Qwen2.5-72B (Base), the mid-trained model (MT), the model activated with reasoning data through supervised finetuning (SFT), and the model after RL training (RL). We again use the open-source SWE-Smith trajectories to activate the agentic capabilities of each prior.
Figures 11 and 12 show the performance of the four priors on SWE-bench-Live and SWE-bench Multilingual under varied amounts of agentic trajectories for adaptation ($2^{21}$ tokens as one-step gradient descent; $2^{23}$, $1.1\times 2^{27}$, and $1.5\times 2^{28}$ tokens as 100, 2,000, and 5,016 training trajectories). Each SWE-Agent adaptation experiment is conducted through lightweight supervised finetuning, whose training time ranges from several minutes to at most two hours.
Compared to the Base prior, the priors specifically enhanced with Agentless skills (SFT and RL) demonstrate stronger task generalization, especially in the data-scarce settings. However, when more SWE-Smith trajectories are used for adaptation, the performance of the Base and MT priors approaches that of the SFT and RL priors. This could be attributed to the gaps between the different SWE tasks. The exploration of recipes that enable stronger out-of-distribution and task-agnostic generalization is left for future work.
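For reference, a Pass@k value is typically estimated from $n$ sampled rollouts per instance with $c$ successes via the standard unbiased estimator, averaged over instances; a small sketch (the helper name is ours):
```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one instance: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failures than draws: a success is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# E.g., 3 rollouts with 1 success gives pass@2 = 2/3.
print(pass_at_k(3, 1, 2))
```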
![Pass@k (%) vs. k for the RL, SFT, MT, and Base priors](figs/sec4_generalization_plots/l-100-s1.png)
(a) #SFT $=2^{21}$ .
![Pass@k (%) vs. k for the RL, SFT, MT, and Base priors](figs/sec4_generalization_plots/l-100.png)
(b) #SFT $=2^{23}$ .
![Pass@k (%) vs. k for the RL, SFT, MT, and Base priors](figs/sec4_generalization_plots/l-2000.png)
(c) #SFT $=1.1Γ 2^{27}$ .
![Pass@k (%) vs. k for the RL, SFT, MT, and Base priors](figs/sec4_generalization_plots/l-5000.png)
(d) #SFT $=1.5Γ 2^{28}$ .
Figure 11: Generalization analysis on SWE-bench-Live.
![Pass@k (%) vs. k for the RL, SFT, MT, and Base priors](figs/sec4_generalization_plots/m-100-s1.png)
(a) #SFT $=2^{21}$ .
![Pass@k (%) vs. k for the RL, SFT, MT, and Base priors](figs/sec4_generalization_plots/m-100.png)
(b) #SFT $=2^{23}$ .
<details>
<summary>figs/sec4_generalization_plots/m-2000.png Details</summary>

*Line chart: Pass@k (%) vs. k ∈ {1, 2, 3} for the RL, SFT, MT, and Base models.*
</details>
(c) #SFT $=1.1\times 2^{27}$.
<details>
<summary>figs/sec4_generalization_plots/m-5000.png Details</summary>

*Line chart: Pass@k (%) vs. k ∈ {1, 2, 3} for the RL, SFT, MT, and Base models.*
</details>
(d) #SFT $=1.5\times 2^{28}$.
Figure 12: Generalization analysis on SWE-bench Multilingual.
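For reference, Pass@k denotes the probability that at least one of k sampled patches resolves the issue. The standard unbiased estimator from the Codex evaluation (Chen et al., 2021) is $1 - \binom{n-c}{k} / \binom{n}{k}$ for n samples with c correct; whether Figure 12 uses this exact estimator is not stated, so the minimal sketch below is illustrative only, and the function name and sample counts are assumptions.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n - c, k) / C(n, k), i.e. the chance
    that at least one of k draws (without replacement) from n
    sampled solutions is correct, given c correct samples."""
    if n - c < k:  # every size-k subset must contain a correct sample
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Illustrative usage: 8 samples per issue, 2 of them resolve it.
for k in (1, 2, 3):
    print(f"pass@{k} = {pass_at_k(8, 2, k):.3f}")
```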
Appendix H Use of Large Language Models
The initial draft of this paper was written entirely by the authors. A large language model (gpt-5) was used only to polish the language (e.g., for grammar and clarity). All conceptual contributions, experimental designs, analyses, and conclusions are the work of the authors.