# Kimi-Dev: Agentless Training as Skill Prior for SWE-Agents
† Indicates equal contribution. ‡ Joint leads.
Abstract
Large Language Models (LLMs) are increasingly applied to software engineering (SWE), with SWE-bench as a key benchmark. Solutions are split into SWE-Agent frameworks with multi-turn interactions and workflow-based Agentless methods with single-turn verifiable steps. We argue these paradigms are not mutually exclusive: reasoning-intensive Agentless training induces skill priors, including localization, code editing, and self-reflection, that enable efficient and effective SWE-Agent adaptation. In this work, we first curate the Agentless training recipe and present Kimi-Dev, an open-source SWE LLM achieving 60.4% on SWE-bench Verified, the best among workflow approaches. With additional SFT adaptation on 5k publicly available trajectories, Kimi-Dev powers SWE-Agents to 48.6% pass@1, on par with Claude 3.5 Sonnet (the 20241022 version). These results show that structured skill priors from Agentless training can bridge workflow and agentic frameworks for transferable coding agents.
1 Introduction
Recent years have witnessed the rapid development of Large Language Models (LLMs) for automating Software Engineering (SWE) tasks (jimenez2023swe; yang2024swe; xia2024agentless; anthropic_claude_3.5_sonnet_20241022; pan2024training; wang2024openhands; wei2025swe; yang2025qwen3; team2025kimi_k2; openai_gpt5_system_card_2025). Among the benchmarks that track the progress of LLM coding agents in SWE scenarios, SWE-bench (jimenez2023swe) stands out as one of the most representative: given an issue that reports a bug in a real-world GitHub repository, a model is required to produce a patch that fixes the bug, whose correctness is judged by whether the corresponding unit tests pass after its application. The difficulty of the task (as of the date the benchmark was proposed), the existence of an outcome reward via the provided auto-eval harness, and the real-world economic value it reflects have made SWE-bench a focal point of the field.
Two lines of solutions have emerged for the SWE-bench task. Agent-based solutions like SWE-Agent (yang2024swe) and OpenHands (wang2024openhands) take an interactionist approach: instructed with the necessary task description, a predefined set of available tools, and the specific problem statement, the agent interacts with an executable environment over multiple turns, makes changes to the source code, and decides autonomously when to stop. In contrast, workflow-based solutions like Agentless (xia2024agentless) pre-define the solving process as a pipeline consisting of steps like localization, bug repair, and test composition. Such task decomposition transforms the agentic task into generating correct responses for a chain of single-turn problems with verifiable rewards (guo2025deepseek; wei2025swe; SWESwiss2025).
The two paradigms have been widely viewed as mutually exclusive. On the one hand, SWE-Agents offer higher potential and better adaptability, thanks to the greater degree of freedom of multi-turn interaction without fixed routines. However, such frameworks have also proved more difficult to train due to their end-to-end nature (deepswe2025; cao2025skyrl). On the other hand, Agentless methods offer better modularity and ease of training with Reinforcement Learning with Verifiable Rewards (RLVR) techniques, but a more limited exploration space and flexibility, and difficulty in behavior monitoring, as erroneous patterns appear only inside single-turn long reasoning content (pan2024training). We challenge this dichotomy from the perspective of the training recipe: we argue that Agentless training should not be viewed as the ultimate deliverable, but rather as a way to induce skill priors, i.e., atomic capabilities such as localizing buggy implementations, updating erroneous code snippets, and self-reflection and verification, all of which help scaffold the efficient adaptation of more capable and generalizable SWE-Agents.
Guided by this perspective, we introduce Kimi-Dev, an open-source code LLM for SWE tasks. Specifically, we first develop an Agentless training recipe comprising mid-training, cold-start, reinforcement learning, and test-time self-play. This results in 60.4% accuracy on SWE-bench Verified, the SoTA performance among workflow-based solutions. Building on this, we show that Agentless training induces skill priors: a minimal SFT cold-start from Kimi-Dev with 5k publicly available trajectories enables efficient SWE-Agent adaptation and reaches a 48.6% pass@1 score, similar to that of Claude 3.5 Sonnet (the 20241022 version, anthropic_claude_3.5_sonnet_20241022). We demonstrate that these induced skills transfer from non-agentic workflows to agentic frameworks, and that the self-reflection in long Chain-of-Thoughts instilled by Agentless training further enables the agentic model to leverage more turns and succeed over a longer horizon. Finally, we show that the skills from Agentless training generalize beyond SWE-bench Verified to broader benchmarks like SWE-bench-live (zhang2025swe) and SWE-bench Multilingual (yang2025swesmith). Together, these results reframe the relationship between Agentless and agentic frameworks: not mutually exclusive, but complementary stages in building transferable coding LLMs. This shift offers a principled view that training with structured skill priors can scaffold autonomous agentic interaction.
The remainder of this paper is organized as follows. Section 2 reviews the background of the framework dichotomy and outlines the challenges of training SWE-Agents. Section 3 presents our Agentless training recipe and the experimental results. Section 4 demonstrates how these Agentless-induced skill priors enable efficient SWE-Agent adaptation, and evaluates the skill transfer and generalization beyond SWE-bench Verified.
2 Background
In this section, we first review the two dominant frameworks for SWE tasks and their dichotomy in Section 2.1. We then summarize the progress and challenges of training SWE-Agents in Section 2.2. The background introduction sets the stage for reinterpreting Agentless training as skill priors for SWE-Agents, a central theme developed throughout the later sections.
2.1 Framework Dichotomy
Two paradigms currently dominate the solutions for automating software engineering tasks. Agentless approaches decompose SWE tasks into modular workflows (xia2024agentless; wei2025swe; ma2024lingma; ma2025alibaba; swe-fixer). Typical workflows consist of bug localization, bug repair, and test generation. This design provides modularity and stability: each step can be optimized separately as a single-turn problem with verifiable rewards (wei2025swe; SWESwiss2025). However, such rigidity comes at the cost of flexibility: when a scenario requires multiple rounds of incremental updates, Agentless approaches struggle to adapt.
By contrast, SWE-Agents adopt an end-to-end, multi-turn reasoning paradigm (yang2024swe; wang2024openhands). Rather than following a fixed workflow, they iteratively plan, act, and reflect, resembling how human developers debug complex issues. This design enables greater adaptability, but introduces significant difficulties: trajectories often extend over tens or even hundreds of steps, the LLM's context window must span the entire interaction history, and the model must handle exploration, reasoning, and tool use simultaneously.
The dichotomy between fixed workflows (e.g., Agentless) and agentic frameworks (e.g., SWE-Agent) has shaped much of the community's perspective. The two paradigms are often regarded as mutually exclusive: one trades off flexibility and performance ceiling for modularity and stability, whereas the other makes the reverse compromise. Our work challenges this dichotomy, as we demonstrate that Agentless training induces skill priors that make further SWE-Agent training both more stable and more efficient.
2.2 Training SWE-agents
Training SWE-agents relies on acquiring high-quality trajectories through interactions with executable environments. Constructing such large-scale environments and collecting reliable trajectories, however, requires substantial human labor as well as costly calls to frontier models, making data collection slow and resource-intensive (pan2024training; badertdinov2024sweextra). Recent studies attempt to scale environment construction by synthesizing bugs to reverse-construct executable runtimes (jain2025r2e; yang2025swesmith).
However, credit assignment across long horizons remains challenging, as outcome rewards are sparse and often available only when a final patch passes its tests. Reinforcement learning techniques have been proposed, but they frequently suffer from instability or collapse when trajectories exceed dozens of steps (deepswe2025; cao2025skyrl). SWE-agent training is also highly sensitive to initialization: starting from a generic pre-trained model often leads to brittle behaviors, such as failing to use tools effectively or getting stuck in infinite loops of specific action patterns (pan2024training; yang2025swesmith).
These limitations motivate our central hypothesis: instead of training SWE-agents entirely from scratch, one can first induce skill priors through agentless training, enhancing the atomic capabilities like localization, repair, test composition, and self-reflection. These priors lay a foundation that makes subsequent agentic training both more efficient and more generalizable.
3 Agentless Training Recipe
Instead of training SWE-agents from scratch, we leverage Agentless training to induce skill priors. Skill priors enhanced by Agentless training include, but are not limited to, bug localization, patch generation, and self-reflection and verification, which lay the foundation for end-to-end agentic interaction. In this section, we elaborate on our Agentless training recipe: the duo framework design of BugFixer and TestWriter, mid-training and cold-start, reinforcement learning, and test-time self-play. Sections 3.1 – 3.4 detail these ingredients, and Section 3.5 presents the experimental results for each of them. This training recipe results in Kimi-Dev, an open-source 72B model that achieves 60.4% on SWE-bench Verified, the SoTA performance among workflow-based solutions.
[Figure: workflow diagram. BugFixer and TestWriter each perform file localization and code edit; the central LLM fixes bugs for BugFixer and generates test cases for TestWriter in an iterative loop.]
Figure 1: Agentless framework for Kimi-Dev: the duo of BugFixer and TestWriter.
3.1 Framework: the Duo of BugFixer and TestWriter
In GitHub issue resolution, we conceptualize the process as the collaboration between two important roles: the BugFixer, who produces patches that correctly address software bugs, and the TestWriter, who creates reproducible unit tests that capture the reported bug. A resolution is considered successful when the BugFixerβs patch passes the tests provided for the issue, while a high-quality test from the TestWriter should fail on the pre-fix version of the code and pass once the fix is applied.
Each role relies on two core skills: (i) file localization, the ability to identify the specific files relevant to the bug or test, and (ii) code edit, the ability to implement the necessary modifications. For BugFixer, effective code edits repair the defective program logic, whereas for TestWriter, they add precise unit test functions that reproduce the issue to the test files. As illustrated in Figure 1, these two skills constitute the fundamental abilities underlying GitHub issue resolution. Thus, we enhance these skills through the following training recipes, including mid-training, cold-start, and RL.
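The two success criteria above (a patch succeeds iff the issue's tests pass with it applied; a test is valid iff it fails pre-fix and passes post-fix) can be sketched as predicate checks. This is a minimal illustration, assuming a hypothetical `run_tests` helper that applies patches and returns the set of failing test identifiers; it is not the actual evaluation harness.

```python
# Minimal sketch; `run_tests` is a hypothetical helper that applies the given
# patches to the repo and returns the set of failing test identifiers.

def is_valid_reproduction_test(run_tests, test_patch, fix_patch):
    """A TestWriter test is valid iff it fails before the fix is applied
    and passes afterwards (fail-to-pass)."""
    fails_before = run_tests(test_patch=test_patch, fix_patch=None)
    fails_after = run_tests(test_patch=test_patch, fix_patch=fix_patch)
    return len(fails_before) > 0 and len(fails_after) == 0

def resolves_issue(run_tests, fix_patch, issue_tests):
    """A BugFixer patch succeeds iff the issue's tests all pass with it applied."""
    return len(run_tests(test_patch=issue_tests, fix_patch=fix_patch)) == 0
```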
3.2 Mid-Training & Cold Start
To enhance the model's prior as both a BugFixer and a TestWriter, we perform mid-training with $\sim$150B tokens of high-quality, real-world data. Starting from the Qwen 2.5-72B-Base model (qwen2025qwen25technicalreport), we collect millions of GitHub issues and PR commits to form the mid-training dataset, which consists of (i) $\sim$50B tokens in the Agentless format derived from natural diff patches, (ii) $\sim$20B tokens of curated PR commit packs, and (iii) $\sim$20B tokens of synthetic data with reasoning and agentic interaction patterns (upsampled by a factor of 4 during training). The data recipe is carefully constructed so that the model learns how human developers reason about GitHub issues, implement code fixes, and develop unit tests. We also performed strict data decontamination to exclude every repository in the SWE-bench Verified test set. Mid-training substantially enhances the model's knowledge of practical bug fixes and unit tests, making it a better starting point for later stages. The details of the recipe are covered in Appendix A.
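As a sanity check on the budget, one consistent reading of the mixture above is that the synthetic slice's 4x upsampling brings the unique 50B + 20B + 20B tokens up to the stated $\sim$150B of training tokens:

```python
# Hypothetical bookkeeping of the mid-training mixture described above.
unique_tokens_B = {"agentless_format": 50, "pr_commit_packs": 20, "synthetic": 20}
upsample_factor = {"agentless_format": 1, "pr_commit_packs": 1, "synthetic": 4}

effective_tokens_B = sum(
    unique_tokens_B[name] * upsample_factor[name] for name in unique_tokens_B
)
# 50*1 + 20*1 + 20*4 = 150, matching the ~150B total
```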
To activate the model's long Chain-of-Thought (CoT) capability, we also construct a cold-start dataset of reasoning trajectories based on the SWE-Gym (pan2024training) and SWE-bench-extra (badertdinov2024sweextra) datasets, generated by the DeepSeek R1 model (guo2025deepseek, the 20250120 version). In this setup, R1 plays the roles of BugFixer and TestWriter, producing outputs such as file localizations and code edits. Through supervised finetuning as a cold start on this dataset, we enable the model to acquire essential reasoning skills, including problem analysis, method sketching, self-refinement, and exploration of alternative solutions.
3.3 Reinforcement Learning
After mid-training and cold-start, the model demonstrates strong performance in localization. Therefore, reinforcement learning (RL) focuses solely on the code edit stage. We construct a training set specifically for this stage, where each prompt is equipped with an executable environment. We further employ multiple localization rollouts from the initial model to generate varied file location predictions, which diversifies the prompts used in code-edit RL.
For the RL algorithm, we adopt the policy optimization method proposed by Kimi k1.5 (team2025kimi_k15), which has shown promising results on reasoning tasks in both math and coding. Kimi k1.5 adopts a simple policy gradient approach based on the REINFORCE algorithm (williams1992simple). Similar to GRPO (shao2024deepseekmath), we use the average reward of multiple rollouts as the baseline to normalize the returns. When adapting the algorithm to our SWE-bench setting, we highlight the following three key desiderata:
1. Outcome-based reward only: We rely solely on the final execution outcome from the environment as the raw reward (0 or 1), without incorporating any format- or process-based signals. For BugFixer, a positive reward is given if the generated patch passes all ground-truth unit tests. For TestWriter, a positive reward is assigned when (i) the predicted test raises a failure in the repo without the ground-truth bugfix patch applied, AND (ii) the failure is resolved once the ground-truth bugfix patch is applied.
1. Adaptive prompt selection: Prompts with pass@16 = 0 are initially discarded, as they contribute nothing to the batch loss. This yields an initial prompt set of 1,200 problems and enlarges the effective batch size. A curriculum learning scheme is then applied: once the success rate on the current set exceeds a threshold, 500 previously excluded prompts (initial pass@16 = 0, but improved under RL) are reintroduced every 100 RL steps to gradually raise task difficulty.
1. Positive example reinforcement: As performance improvements begin to plateau in later stages of training, we incorporate positive samples from recent RL iterations into the training batch of the current iteration. This reinforces the model's reliance on successful patterns, thereby accelerating convergence in the final phase.
Robust sandbox infrastructure. We construct Docker environments orchestrated with Kubernetes (kubernetes), which provides a secure and scalable sandbox infrastructure and enables efficient training and rollouts. The infrastructure supports over 10,000 concurrent instances with robust performance, making it well suited for competitive programming and software engineering tasks (see Appendix D for details).
3.4 Test-Time Self-Play
After RL, the model masters the roles of both a BugFixer and a TestWriter. During test time, it adopts a self-play mechanism to coordinate its bug-fixing and test-writing abilities.
Following Agentless (xia2024agentless), we use the model to generate 40 candidate patches and 40 candidate tests for each instance. Each patch generation involves independent runs of localization and code edit from BugFixer, where the first run uses greedy decoding (temperature 0) and the remaining 39 use temperature 1 to ensure diversity. Similarly, 40 tests are generated independently from TestWriter. To guarantee validity, we first filter out test patch candidates that fail to raise a failure in the original repo without any BugFixer patch applied.
Denote the remaining TestWriter patches as the set $\mathcal{T}$, and the BugFixer patches as the set $\mathcal{B}$. For each $b_{i}\in\mathcal{B}$ and $t_{j}\in\mathcal{T}$, we execute the test suite over the test file modified by $t_{j}$ twice: first without $b_{i}$, and then with $b_{i}$ applied. From the execution log of the first run, we obtain the counts of failed and passed tests from $t_{j}$, denoted ${\rm F}(j)$ and ${\rm P}(j)$. Comparing the execution logs of the two runs, we obtain the counts of fail-to-pass and pass-to-pass tests, denoted ${\rm FP}(i,j)$ and ${\rm PP}(i,j)$, respectively. We then calculate the score for each $b_{i}$ as
$$
S_{i}=\frac{\sum_{j}{\rm FP}(i,j)}{\sum_{j}{\rm F}(j)}+\frac{\sum_{j}{\rm PP}(i,j)}{\sum_{j}{\rm P}(j)}, \tag{1}
$$
where the first term reflects the performance of $b_{i}$ under reproduction tests, and the second can be viewed as characterizing $b_{i}$ under regression tests (xia2024agentless). We select the BugFixer patch $b_{i}$ with the highest score $S_{i}$ as the final answer.
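Equation (1) and the final selection step can be sketched directly from the count tables; the inputs here are toy count matrices, not real execution logs.

```python
# Toy sketch of Eq. (1). FP[i][j], PP[i][j] are fail-to-pass / pass-to-pass
# counts for BugFixer patch b_i under test patch t_j; F[j], P[j] are the
# failed / passed counts when t_j runs without any fix applied.

def selfplay_score(FP, PP, F, P, i):
    repro = sum(FP[i]) / sum(F) if sum(F) else 0.0    # reproduction term
    regress = sum(PP[i]) / sum(P) if sum(P) else 0.0  # regression term
    return repro + regress

def select_patch(FP, PP, F, P):
    # Return the index of the BugFixer patch with the highest S_i.
    scores = [selfplay_score(FP, PP, F, P, i) for i in range(len(FP))]
    return max(range(len(scores)), key=scores.__getitem__)
```

A patch that converts every reproduction failure to a pass and preserves every passing test reaches the maximum score of 2.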
Table 1: Performance comparison of models on SWE-bench Verified under Agentless-like frameworks. All results are obtained under the standard 40-patch, 40-test setting (xia2024agentless), except that Llama3-SWE-RL uses 500 patches and 30 tests.
3.5 Experiments
3.5.1 Main Results
[Figure: bar chart of SWE-bench Verified pass rate vs. mid-training tokens: 28.6% at 50B, 32.6% at 100B, 36.6% at 150B.]
Figure 2: The performance on SWE-bench Verified after mid-training with different training token budgets.
Table 1 shows the performance of Kimi-Dev on SWE-bench Verified (jimenez2023swe). Instead of the text-similarity rewards used in SWE-RL (wei2025swe), we adopt execution-based signals, which provide a more reliable measure of fix quality. Our two-stage TestWriter also improves over prior Agentless systems (xia2024agentless; guo2025deepseek; SWESwiss2025), which rely on a single root-level test, by better capturing repository context and mirroring real developer workflows (OpenAI-Codex-2025). Kimi-Dev attains state-of-the-art performance among open-source models, resolving 60.4% of issues.
3.5.2 Mid-Training
In this section, we evaluate the relationship between the amount of mid-training data and model performance. Specifically, we finetuned Qwen 2.5-72B-Base on subsets of the mid-training data of 50B, 100B, and approximately 150B tokens, and then lightly activated each mid-trained model with the same set of 2,000 BugFixer input-output pairs for SFT cold start. We report only BugFixer pass@1 here for simplicity of evaluation. Figure 2 shows that increasing the number of mid-training tokens consistently improves model performance, highlighting the effectiveness of this stage.
3.5.3 Reinforcement Learning
[Figure: dual-axis line chart over 500 RL training steps; BugFixer token length grows from ~3,900 to ~7,800 while the pass rate rises from 34% to 46%.]
(a) 72B Joint RL, BugFixer
[Figure: dual-axis line chart over 500 RL training steps; TestWriter token length grows from ~3,000 to ~6,400 while the reproduced rate rises from 20% to the low 30s.]
(b) 72B Joint RL, TestWriter
Figure 3: Joint code-edit RL experiments on the model after mid-training and cold-start. The pass rate for BugFixer and the reproduced rate for TestWriter are reported as pass@1 with temperature=1.0. The performance improves consistently as the output becomes increasingly longer.
Experimental setup
We set the number of training steps per RL iteration to 5 and sample 10 rollouts for each of 1,024 problems drawn from the union of SWE-Gym (pan2024training) and SWE-bench-extra (badertdinov2024sweextra). We dynamically adjust the prompt set every 20 iterations to gradually increase task difficulty. We fix the maximum training context length at 64k tokens, since the prompt input contains the full contents of the files localized in advance by the initial model.
Results
Figure 3 shows the performance and response-length curves on the test set during RL training. The pass rate and reproduced rate are computed as pass@1 with temperature = 1.0. We observe that both model performance and response length steadily increase, reflecting the expected benefits of RL scaling. Similar RL scaling curves are observed in our ablation experiments on Qwen2.5-14B-Instruct, demonstrating the effectiveness of the RL training recipe across model sizes. The experimental details, as well as the ablation studies on positive example reinforcement from Section 3.3, are given in Appendix C.2. The lengthy outputs consist of in-depth problem analysis and self-reflection patterns, similar to those seen in math and code reasoning tasks (team2025kimi_k15; guo2025deepseek). We also observe that for TestWriter, occasional false-positive examples occur during RL training due to limited reproduction coverage. We leave the case studies to Appendix E and further improvement for future work.
<details>
<summary>figs/sec3_sp_scaling/selfplay_figure_v2.png Details</summary>

Two-panel line chart: Pass Rate (%) vs. number of patches BF×TW (1×1 to 40×40). Left: self-play rises from 48.0% to 60.4%, while majority voting plateaus near 51%. Right: self-play vs. pass@N, which reaches 74.8% at 40×40.
</details>
Figure 4: Test-time self-play on SWE-bench Verified. Performance improves with more generated patches and tests. Left: Execution-based self-play consistently surpasses BugFixer majority voting. Right: Self-play performance remains below pass@N, where the ground-truth test patch is used, suggesting that room remains for TestWriter to improve.
3.5.4 Test-time Self-Play
Following Section 3.4, we evaluate how the final performance on SWE-bench Verified scales with the number of patches and tests generated. The temperature is fixed at 0 for the initial rollout and set to 1.0 for the subsequent 39 rollouts. As shown on the left of Figure 4, the final performance improves from 48.0% to 60.4% as the number of patch-test pairs increases from $1\times 1$ to $40\times 40$, and consistently surpasses the results obtained from majority voting over the BugFixer patches alone.
Specifically, the self-play result obtained from 3 patches and 3 tests per instance already surpasses majority voting over 40 BugFixer patches, demonstrating the value of the additional information from test-time execution. Room for improvement in TestWriter nevertheless remains for more powerful self-play: as shown in Figure 4, self-play performance stays below pass@N, where ground-truth test cases serve as the criterion for issue resolution. This finding aligns with anthropic_claude_3.5_sonnet_20241022, which introduced a final edge-case checking phase to generate a more diverse set of test cases, thereby strengthening the role of the "TestWriter" in their SWE-Agent framework. We also report preliminary observations of a potential parallel scaling phenomenon, which requires no additional training and may enable scalable performance improvements; the details and analyses are covered in Appendix F.
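The execution-based selection rule underlying test-time self-play can be sketched as follows. `run_test` is a hypothetical stand-in for our sandboxed executor, and the tie-breaking is illustrative:

```python
from collections import Counter

def select_patch_self_play(patches, tests, run_test):
    """Score each candidate patch by how many generated tests it
    passes after application; return the highest-scoring patch
    (ties broken by generation order)."""
    scores = [sum(run_test(p, t) for t in tests) for p in patches]
    return patches[max(range(len(patches)), key=scores.__getitem__)]

def select_patch_majority(patches):
    """BugFixer-only baseline: majority vote over candidate patches."""
    return Counter(patches).most_common(1)[0][0]
```

Execution feedback lets self-play pick a minority patch that actually passes the generated tests, which majority voting over patches alone cannot do.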
4 Initializing SWE-Agents from Agentless Training
End-to-end multi-turn frameworks, such as SWE-Agent (yang2024swe; anthropic_claude_3.5_sonnet_20241022) and OpenHands (wang2024openhands), enable agents to leverage tools and interact with environments. Specifically, the system prompt employed in the SWE-Agent framework (anthropic_claude_3.5_sonnet_20241022) outlines a five-stage workflow: (i) repo exploration, (ii) error reproduction via a test script, (iii) code edit for bug repair, (iv) test re-execution for validation, and (v) edge-case generation and checks. Unlike Agentless, the SWE-Agent framework doesn't enforce a strict stage-wise workflow; the agent can reflect, transition, and redo freely until it deems the task complete and submits.
The performance potential is therefore higher without a fixed routine; however, training for SWE-Agents is more challenging because the outcome reward is sparse, making long-horizon credit assignment difficult. Meanwhile, our Kimi-Dev model has undergone Agentless training, with its localization and code-edit skills for BugFixer and TestWriter deliberately strengthened. In this section, we investigate whether it can serve as an effective prior for multi-turn SWE-Agent scenarios.
Table 2: Single-attempt performance of different models on SWE-bench Verified under end-to-end agentic frameworks, categorized by proprietary or open-weight models, and size over or under 100B (as of 2025.09). "Internal" denotes results achieved with their in-house agentic frameworks.
4.1 Performance after SWE-Agent Fine-tuning
<details>
<summary>figs/sec4_main/v-sweeping-new-FINAL.png Details</summary>

Line chart: Pass Rate (%) on SWE-bench Verified vs. # SWE-Agent SFT tokens (log scale), with pass@{1,2,3} curves for the Base, MT, SFT, and RL priors. All priors improve as the SFT token budget grows, with the RL prior leading in nearly all settings.
</details>
Figure 5: Comparing the quality of the raw Base, the Agentless mid-trained (MT), the Agentless mid-trained with reasoning-intensive cold-start (SFT), and the Kimi-Dev model after RL as the prior for SWE-Agent adaptation. The tokens of the SWE-Agent SFT trajectories are swept over different scales, and the SWE-Agent performances are reported up to pass@3 on SWE-bench Verified.
We use the publicly available SWE-Agent trajectories to finetune Kimi-Dev. The finetuning dataset we used is released by SWE-smith (yang2025swe), consisting of 5,016 SWE-Agent trajectories collected with Claude 3.7 Sonnet (Anthropic-Claude3.7Sonnet-2025) in the synthetic environments. We perform supervised fine-tuning over Kimi-Dev, setting the maximum context length as 64K tokens during training, and allowing up to 128K tokens and 100 turns during inference.
As shown in Table 2, without collecting additional trajectory data in realistic environments or conducting additional multi-turn agentic RL, our finetuned model achieves a pass@1 score of 48.6% on SWE-bench Verified under the agentic framework, without test-time scaling. Using the same SFT data, our finetuned Kimi-Dev outperforms SWE-agent-LM (yang2025swesmith), with performance comparable to that of Claude 3.5 Sonnet (49% for the 241022 version). The pass@10 of our SWE-Agent adapted model is 74.0%, surpassing the pass@30 of our model under Agentless (73.8%) and demonstrating the higher potential of the SWE-Agent framework.
4.2 Skill Transfer and Generalization
The results in Section 4.1 demonstrate that Kimi-Dev, a model with extensive Agentless training, can be adapted to end-to-end SWE-Agents with lightweight supervised finetuning. As the Agentless training recipe consists of mid-training, cold-start (SFT), and RL, we explore the contribution of each stage to the SWE-Agent capability after adaptation.
To figure this out, we perform SWE-Agent SFT on the original Qwen2.5-72B (Base), the mid-trained model (MT), the model further activated with Agentless-formatted long CoT data (SFT), and the (Kimi-Dev) model after RL training (RL). We treat the four models as priors for SWE-Agents (we slightly abuse the term "prior" to refer to a model to be finetuned with SWE-Agent trajectories in the following analysis). Since a good prior demonstrates fast adaptation from a few shots (finn2017model; brown2020language), we also sweep the amount of SWE-Agent SFT data to measure the adaptation efficiency of each prior.
Specifically, we randomly shuffle the 5,016 SWE-Agent trajectories and construct nested subsets of sizes 100, 200, 500, 1,000, and 2,000, where each smaller subset is contained within the larger ones. In addition, we prepend two extreme baselines: (i) zero-shot, where the prior model is directly evaluated under the SWE-Agent framework without finetuning, and (ii) one-step gradient descent, where the model is updated with a single gradient step using the 100-trajectory subset. This yields a range of SFT token budgets spanning { $0$, $2^{21}$, $2^{23}$, $2^{24}$, $1.1\times 2^{25}$, $1.1\times 2^{26}$, $1.1\times 2^{27}$, $1.5\times 2^{28}$ }. After these lightweight SFT experiments, we evaluate performance in terms of pass@{1,2,3} under the SWE-Agent framework, with evaluations for pass@1 conducted at temperature 0, and those for pass@2 and pass@3 at temperature 1.0.
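The nested-subset construction can be sketched in a few lines (the seed is illustrative):

```python
import random

def nested_subsets(trajectories, sizes=(100, 200, 500, 1000, 2000), seed=0):
    """Shuffle once with a fixed seed, then take prefixes, so every
    smaller subset is contained within all larger ones."""
    pool = list(trajectories)
    random.Random(seed).shuffle(pool)
    return {k: pool[:k] for k in sorted(sizes)}
```

Shuffling once and slicing prefixes, rather than sampling each size independently, is what guarantees the nesting property used in the sweep.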
Figure 5 presents the SWE-Agent performances of each prior (Base, MT, SFT, RL) after being fine-tuned with different amounts of agentic trajectories. We have the following observations:
1. The RL prior outperforms all the other models in nearly all the SWE-Agent SFT settings. This demonstrates that the Agentless training recipe indeed strengthens the prior in terms of SWE-Agent adaptation. For example, to achieve the top pass@1 performance of the Base prior, the RL prior needs only $2^{23}$ SWE-Agent SFT tokens, whereas the Base prior consumes $1.5\times 2^{28}$ tokens.
1. The MT prior lags behind the SFT and RL priors in the extremely data-scarce settings (zero-shot ($0$) and one-step gradient descent ($2^{21}$)), but quickly becomes on par with them once 200 trajectories ($2^{24}$) are available for finetuning. This indicates that adaptation efficiency remains comparable after the prior is strengthened through Agentless mid-training.
1. The performance of the SFT prior is mostly similar to the RL one except for two cases: (i) The SFT prior outperforms the RL one under the zero-shot setting. This is reasonable, as the RL prior might overfit to the Agentless input-output format, while the SFT prior suffers less from this. (ii) The SFT prior exhibits a significant degradation with 200 SWE-Agent trajectories ( $2^{24}$ ). A potential reason could be that the 200 trajectories collapse onto a single data mode, leading the SFT prior to overfit through memorization (chu2025sft); the RL prior instead embeds stronger transferable skills and thus generalizes better.
<details>
<summary>figs/sec4_long_cot_to_multi_turn/hist_steps_6x4.png Details</summary>

Histogram of instances resolved per bin of turns (x-axis: #Turns, 0 to 100; y-axis: #Instances resolved) for the RL, SFT, MT, and Base priors after SWE-Agent adaptation.
</details>
<details>
<summary>figs/skill_analysis_figure.png Details</summary>

Stacked bar chart of resolved cases for the Base, MT, SFT, and RL priors, split into a BugFixer-cutoff segment and a reflection segment: BugFixer cutoff 484 / 542 / 584 / 605 and reflection gain +94 / +100 / +109 / +113, for totals of 578 / 642 / 693 / 718.
</details>
Figure 6: Left: Performance of the four priors under turn limits after SWE-Agent adaptation. Right: Characterization of the BugFixer and reflection skills for each prior, obtained by counting the resolved cases of the 3 runs at the Stage-3 cutoff moment and comparing them with the final success cases.
From long CoT to extended multi-turn interactions.
We hypothesize that reflective behaviors cultivated through long chain-of-thought reasoning may transfer to settings requiring extended multi-turn interactions. To examine this, we evaluate the four priors (Base, MT, SFT, and RL) by finetuning on the 5,016 trajectories and testing on SWE-bench Verified under varying turn limits, with pass@3 as the metric (Figure 6, left). The distinct interaction-length profiles provide supporting evidence: the RL prior, after finetuning, continues to make progress beyond 70 turns, while the SFT, mid-trained, and raw models show diminishing returns around 70, 60, and 50 turns, respectively.
We further evaluate the efficacy of the Agentless skill priors (BugFixer and reflection) in the SWE-Agent adapted model. For BugFixer, given that the SWE-Agent may autonomously reflect between the five stages, we examine the moment in each trajectory when the bug fix of the third stage is initially completed, and the test rerun of the fourth stage has not yet been entered. Heuristically, when the SWE-Agent just completes the third stage, it has not yet obtained the execution feedback from the fourth stage, and thus has not further reflected based on the execution information or refined the bug fix. We therefore calculate the success rate of direct submission at this cutoff moment, which reflects the capability of the BugFixer skill. Regarding reflection, we further compare the performance at the cutoff point with the performance after full completion for each problem. The increment in the number of successful problems is used to reflect the capability of the reflection skill.
We use kimi-k2-0711-preview (team2025kimi_k2) to annotate the SWE-Agent trajectories, identifying the stage to which each turn belongs. Figure 6 (right) demonstrates that both skills are strengthened through each stage of the Agentless training recipe: For the BugFixer skill, the cutoff performance at Stage-3 within the SWE-Agent interaction trajectories of the four adapted models shows consistent improvement, ranging from 484 cases resolved by the Base prior to 605 cases by the RL prior, as measured by the number of successful resolutions within three passes. For the reflection skill, examining the performance gains from Stage-3 to the end of the trajectories reveals a similar trend, with improvements increasing from +94 under the Base prior to +113 under the RL prior. Taken together, the adapted model from the RL prior achieves the strongest overall performance across both skills. It should be noted that our analysis of the reflection skill remains coarse-grained, since the measured performance gains between the two checkpoints capture not only agentic reflection and redo behaviors, but also the intermediate test-writing process performed by the SWE-Agent. A more fine-grained evaluation that isolates the TestWriter skill prior is left for future work. The prompt for SWE-Agent stage annotation, extended qualitative studies, as well as additional discussions for skill transfer and generalization, are covered in Appendix G.
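The Stage-3 cutoff used in this analysis can be sketched as follows, assuming each turn of a trajectory carries a stage label in 1..5 (the annotation itself comes from kimi-k2-0711-preview):

```python
def stage3_cutoff_index(stages):
    """Index of the turn at which the Stage-3 bug fix is first complete
    but the Stage-4 test rerun has not yet begun: the last Stage-3 turn
    preceding the first subsequent Stage-4 turn (or the last Stage-3
    turn overall if Stage 4 never follows). Returns None if Stage 3
    never occurs. The patch at this turn reflects the raw BugFixer
    skill; resolutions gained after it proxy the reflection skill."""
    cut = None
    for i, stage in enumerate(stages):
        if stage == 3:
            cut = i
        elif stage == 4 and cut is not None:
            return cut
    return cut
```

Submitting the patch at this index and comparing against the final submission yields the two counts plotted in Figure 6 (right).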
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass1.png Details</summary>

Line chart of Pass@1 (%) vs. RL steps (0 to 300) for the SFT and RL priors, each shown as Avg@5 with a Min@5-Max@5 band; both trend upward, with the RL prior tracking slightly above the SFT prior.
</details>
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass3.png Details</summary>

Line chart of Pass@3 (%) vs. RL steps (0 to 300) for the SFT and RL priors; both trend upward, with the RL prior generally above the SFT prior.
</details>
<details>
<summary>figs/sec4_swe_agent_rl/rebuttal_cmp_prior_pass5.png Details</summary>

Line chart of Pass@5 (%) vs. RL steps (0 to 300) for the SFT and RL priors; both trend upward, with the RL prior reaching a higher final pass rate.
</details>
Figure 7: Comparison between the SFT Prior and the RL Prior when further applied with end-to-end SWE-Agent RL. Left: Pass@1 averaged from 5 runs. Middle: Pass@3. Right: Pass@5. The two priors are activated with the same $2^{21}$ SWE-Agent SFT tokens (the second column in Figure 5). After end-to-end RL, the RL prior slightly outperforms the SFT prior in all the Pass@1, Pass@3, and Pass@5 settings, which agrees with their SWE-Agent SFT performance comparison in Figure 5.
End-to-end SWE-Agent RL for prior comparison. To further validate the effectiveness of the priors instilled by the Agentless training recipe, we run end-to-end SWE-Agent RL (deepswe2025) with the cold-started priors as the initial models. To maximally alleviate the effect of the patterns of proprietary models within the SWE-Smith trajectories, we adopt the setting with $2^{21}$ SWE-Agent SFT tokens, the second column in Figure 5, where a single step of gradient descent is taken on top of each prior. Under this minimal cold-start setup, end-to-end RL reveals the potential of each prior beyond taking the shortcut of imitation (gudibande2024the; chu2025sft).
To run the end-to-end RL training for prior comparison, we use the SWE-Gym (pan2024training) and SWE-bench-extra (badertdinov2024scaling) subsets as the training set. Similar to the Agentless RL recipe, we first use each initial model to filter out problems with Pass@8 = 0. For the model with the MT prior, 260 out of 6,202 problems remain; for the models with the SFT and RL priors, a total of 2,062 of the 6,202 problems are kept. In all end-to-end RL runs, we use the outcome reward only and the same policy-gradient algorithm as in Sec. 3.3, without KL or entropy regularization, with a batch size of 256. The results are as follows:
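The Pass@8 filtering step can be sketched as follows; `rollout_succeeds` is a hypothetical stand-in for running one agent rollout and checking the unit tests:

```python
def filter_solvable(problems, rollout_succeeds, k=8):
    """Keep only problems the initial model solves at least once in
    k rollouts (Pass@k > 0); unsolvable problems yield no positive
    outcome reward and are dropped from the RL training set."""
    return [p for p in problems
            if any(rollout_succeeds(p) for _ in range(k))]
```

Because `any` short-circuits, each problem is rolled out only until its first success, which keeps the filtering pass cheap.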
For the model with the MT prior, the Pass@1 performance quickly deteriorates to less than 2% after 10 end-to-end RL steps. A likely reason is the lack of available problems to train on, reflecting the weakness of this prior. For the models with the SFT prior and the RL prior, the RL runs last for 300 steps, and we plot the performance comparison in Figure 7. According to Figure 7, the model with the RL prior achieves slightly higher Pass@1, Pass@3, and Pass@5 scores than the model with the SFT prior. While this agrees with the performance comparison under SWE-Agent SFT shown in Figure 5, we observe that the interaction-trajectory patterns incentivized by end-to-end SWE-Agent RL differ significantly from the patterns of the proprietary models (detailed in Appendix G.3). These results indicate that the Agentless training recipe curates strong priors for end-to-end learning under SWE-Agent frameworks with minimal supervision from proprietary end-to-end trajectories. We leave the exploration of more advanced agentic RL techniques for further improvement as future work.
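The Pass@k-based filtering of training problems described above can be sketched in a few lines (a minimal illustration with our own function names and a toy stand-in for the model; the released pipeline may differ):

```python
import random

def pass_at_k_keep(problems, solve_fn, k=8):
    """Keep only problems the initial model solves at least once in k rollouts;
    problems with Pass@k = 0 are dropped from the RL training set."""
    kept = []
    for pid in problems:
        if any(solve_fn(pid) for _ in range(k)):
            kept.append(pid)
    return kept

# Toy stand-in for a model: solves even-numbered problems with probability 0.5,
# never solves odd-numbered ones.
rng = random.Random(0)

def toy_solve(pid):
    return pid % 2 == 0 and rng.random() < 0.5

kept = pass_at_k_keep(range(10), toy_solve, k=8)
```

Only the problems the current policy can occasionally solve survive; rerunning the filter as the policy improves is what enables the adaptive prompt-set expansion described in Appendix C.1.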
5 Conclusion and Future Work
In this work, we reframed Agentless and agentic paradigms for automated software engineering as complementary rather than competing. By introducing Kimi-Dev, we demonstrated that structured Agentless training can induce transferable skill priors, including bug localization, code repair, and self-reflection. As a result, Kimi-Dev not only achieves SoTA results on SWE-bench Verified among workflow-based approaches, but also enables efficient SWE-Agent adaptation. These findings establish a novel path toward building more generalizable coding agents through staged training.
Acknowledgements
We thank Yuzhi Wang, Xinyu Zhou, Guokun Lai, Yulun Du, Fang Li, Hao Ding, Dehao Zhang, Enming Yuan, Dikang Du, and Jiacheng You for their valuable suggestions. We also appreciate the members of the infrastructure team at Moonshot AI for their timely support during the project.
Ethics and Reproducibility Statements
This work adheres to the Code of Ethics of the ICLR conference. The study does not involve human subjects or animal experimentation. Personally identifiable information is excluded from the raw data for privacy considerations (see the mid-training data recipe detailed in Appendix A). Beyond the scope of this work, we strongly advocate for the community to advance systematic research on agent safety, thereby ensuring responsible progress in this area.
For all of the experiments, we have covered the detailed setups and discussions in the appendices: mid-training for Agentless in Appendix A, details of the used dockers in Appendix B, Agentless RL in Appendix C, agent infrastructure in Appendix D, case studies under Agentless in Appendix E, preliminary findings about emergent test-time parallel scaling in Appendix F, and extended analysis for SWE-Agents in Appendix G.
Appendix
Appendix A Details of Mid-training
We curate a mid-training data recipe with a focus on enhancing SWE capabilities. Central to this effort is the collection of pull request (PR) data from GitHub, which provides extensive coverage of real-world bug fixes, feature requests, and code enhancements. To ensure data quality, we apply two filters: (i) we only retain repositories that have accumulated at least five GitHub stars, thereby excluding sparsely maintained projects with limited community engagement; and (ii) we remove any repositories overlapping with the SWE-bench benchmark (jimenez2023swe) to prevent potential data leakage. For each candidate repository, we query the GitHub API for all PRs with the state MERGED, discarding those that were abandoned, superseded, or left under review. To preserve more context information, we also snapshot the entire codebase at the base commit before the first code change in the PR.
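The repository- and PR-level filters above can be expressed as simple predicates (a sketch; the dictionary fields are hypothetical stand-ins for GitHub API response fields, not the paper's actual crawler code):

```python
def keep_repo(repo, swe_bench_repos):
    """Repository-level filters: at least five GitHub stars, and no overlap
    with repositories used by SWE-bench (to prevent data leakage)."""
    return repo["stars"] >= 5 and repo["full_name"] not in swe_bench_repos

def keep_pr(pr):
    """PR-level filter: retain merged PRs only; abandoned, superseded,
    or still-under-review PRs are discarded."""
    return pr["state"] == "MERGED"

swe_bench_repos = {"astropy/astropy"}
repos = [
    {"full_name": "octocat/active", "stars": 12},
    {"full_name": "octocat/tiny", "stars": 2},        # too few stars
    {"full_name": "astropy/astropy", "stars": 4000},  # overlaps SWE-bench
]
kept = [r["full_name"] for r in repos if keep_repo(r, swe_bench_repos)]
```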
After data crawling, we incorporate two complementary forms of the natural code-change data: (i) natural diff patches and (ii) PR commit packs. A natural diff patch consolidates all commits in a PR into the final code difference, typically expressed as SEARCH/REPLACE blocks. This format aligns with the Agentless paradigm, in which the model must directly output the final patch. In contrast, a commit pack captures the sequence of human-authored commits within a PR, where each commit message (textual reasoning) is paired with the corresponding code modification (action). This structure closely parallels the SWE-Agent setting, where intermediate reasoning steps are interleaved with actions. However, the distinction between the utilities of the two data types is not absolute: commit messages in a PR commit pack can still inform the model's knowledge and indirectly strengthen its reasoning ability in the Agentless setting.
Natural diff patches. The natural diff patches used in the mid-training data recipe are processed with the following rules:
- Incorporate the Agentless prompt templates (see Prompts 1, 2, 3, 4; these four templates are also used in the later stages, including cold-start, RL, and test-time self-play), and apply a loss mask to the prompt part. For the localization prompt, the response is set as the files modified in the ground-truth diff patch.
- If an issue related to the PR exists, use the issue content; otherwise, use the PR title as a surrogate for the issue content.
- If an issue related to the PR exists, prepend the issue discussion at the beginning of the output in the code edit response. We aim to strengthen the model's capability of code edit reasoning by leveraging the discussion contents.
- Discard PRs that include modifications to files other than {.py, .md, .rst}.
- For PRs containing {.md, .rst} file modifications, retain only the Python diffs and rewrite them into SEARCH/REPLACE blocks.
- Remove PRs involving file additions or deletions.
- For code edits with only line insertions or deletions, preserve the original Git diff hunks as the SEARCH content in the SEARCH/REPLACE blocks.
- Ensure that no more than three Python files are modified per PR.
- Apply a filtering script to exclude PRs with non-{.py, .md, .rst} modifications, or PRs modifying more than three Python files.
- Further exclude PRs containing more than five SEARCH/REPLACE blocks.
A total of $\sim$50B tokens of natural diff patches are obtained after applying these filtering rules.
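The PR-level filtering rules above can be approximated by a short script (a sketch; the function name, its arguments, and the rule ordering are our own, not the paper's actual filtering code):

```python
ALLOWED_EXTS = (".py", ".md", ".rst")

def keep_diff_patch_pr(changed_files, num_search_replace_blocks):
    """Apply the PR-level rules: only {.py, .md, .rst} files may be touched,
    at most three modified Python files, and at most five SEARCH/REPLACE blocks."""
    # Rule: no files outside the allowed extensions.
    if any(not f.endswith(ALLOWED_EXTS) for f in changed_files):
        return False
    # Rule: at most three Python files modified.
    if sum(f.endswith(".py") for f in changed_files) > 3:
        return False
    # Rule: at most five SEARCH/REPLACE blocks.
    return num_search_replace_blocks <= 5
```

For instance, a PR touching `pkg/core.py` and `README.md` with two blocks passes, while one touching four Python files, a `setup.cfg`, or carrying six blocks is excluded.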
````
Please look through the following GitHub problem description and Repository structure and provide a list of files that one would need to edit to fix the problem.
### GitHub Problem Description ###
{related issue / PR title content}
###
### Repository Structure ###
{file structure induced by the repo snapshot}
###
Please only provide the full path and return at most 5 files.
The returned files should be separated by new lines ordered by most to least important and wrapped with ```
For example:
```
file1.py
file2.py
```
````
Listing 1: Agentless prompt template: Localization for BugFixer.
````
Please look through the following GitHub problem description and Repository structure and provide a list of test files that should be run after applying the patch to fix the issue.
### GitHub Problem Description ###
{related issue / PR title content}
###
### Repository Structure ###
{file structure induced by the repo snapshot}
###
Please only provide the full path and return at most 5 files.
The returned files should be separated by new lines ordered by most to least important and wrapped with ```
For example:
```
file1.py
file2.py
```
````
Listing 2: Agentless prompt template: Localization for TestWriter.
````
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{related issue / PR title content}
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILE ---
```
### {filename1}
{content of filename1}
### {filename2}
{content of filename2}
{...}
```
--- END FILE ---
Please first localize the bug based on the issue statement, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
````
Listing 3: Agentless prompt template: Code edit for BugFixer.
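As a concrete illustration of what applying one such SEARCH/REPLACE edit entails, a verbatim substring substitution suffices (a minimal sketch, not the evaluation harness used in the paper):

```python
def apply_search_replace(source: str, search: str, replace: str) -> str:
    """Apply one SEARCH/REPLACE edit: the SEARCH chunk must occur verbatim
    (indentation included) in the file; only its first occurrence is rewritten."""
    if search not in source:
        raise ValueError("SEARCH block not found in source file")
    return source.replace(search, replace, 1)

# The example edit from the prompt template above:
original = "from flask import Flask\napp = Flask(__name__)\n"
patched = apply_search_replace(
    original,
    "from flask import Flask",
    "import math\nfrom flask import Flask",
)
```

The requirement for proper indentation in the prompt follows directly from this exact-match semantics: a SEARCH chunk with wrong leading whitespace simply fails to apply.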
````
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{related issue / PR title content}
--- END ISSUE ---
Below are some code segments, each from a relevant test file. One or more of these files may be added some new tests which can reproduce the issue.
--- BEGIN FILE ---
```
### {filename1}
{content of filename1}
### {filename2}
{content of filename2}
{...}
```
--- END FILE ---
Please first localize some possible locations in those test files within the repo, and then generate *SEARCH/REPLACE* edit updates to the **test** files in the repo, so that the erroneous scenario described in the problem is reproduced.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
def test__rules__std_L060_raised() -> None:
    try:
        sql = "SELECT IFNULL(NULL, 100), NVL(NULL, 100);"
        result = lint(sql, rules=["L060"])
        assert len(result) == 2
    except:
        print("Other issues")
        return
    try:
        assert result[0]["description"] == "Use 'COALESCE' instead of 'IFNULL'."
        assert result[1]["description"] == "Use 'COALESCE' instead of 'NVL'."
        print("Issue resolved")
    except AssertionError:
        print("Issue reproduced")
        return
    return
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
````
Listing 4: Agentless prompt template: Code edit for TestWriter.
PR commit packs. The PR commit packs used in the mid-training data recipe are processed with the following rules:
- Discard PRs that include modifications to files other than {.py, .md, .rst}.
- For {.md, .rst} file modifications, retain the "diff --git" signature but remove the actual content changes.
- Ensure that each PR modifies at most five Python files (with at least one required). PRs exceeding this limit are discarded.
- Apply a filtering script to exclude PRs containing non-{.py, .md, .rst} file modifications or those modifying more than five Python files.
- Filter out all of the developer signatures and GitHub IDs for ethics considerations.
A total of $\sim$20B tokens of PR commit packs are obtained after applying these filtering rules.
In addition, we incorporate synthetic data to further enhance both the reasoning and agentic capabilities of the model. A key observation is that the ground-truth reward for the localization stage in the Agentless setting can be derived directly from the diff patch, since the set of files requiring modification is explicitly indicated.
Synthetic reasoning data. To improve reasoning quality, we perform a lightweight SFT of the Qwen-2.5-72B-Instruct model on 2,000 R1 trajectories. The resulting model is then used to generate large-scale rollouts for the localization stage of both BugFixer and TestWriter. We retain only the rollouts that achieve exactly correct file localizations. This procedure yields $\sim$10B tokens of reasoning-intensive data dedicated to Agentless localization in the mid-training recipe.
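The exact-match retention criterion is derivable directly from the ground-truth diff and can be written as a one-line reward (our own sketch of the described check; the file path is a hypothetical example):

```python
def localization_reward(predicted_files, gt_patch_files):
    """Binary localization reward, derived from the ground-truth diff patch:
    1 only when the predicted file set exactly matches the files the patch
    touches (order-insensitive); partial or superset predictions score 0."""
    return int(set(predicted_files) == set(gt_patch_files))

# Hypothetical example: the ground-truth patch touches a single file.
gt = ["sklearn/linear_model/_ridge.py"]
reward = localization_reward(["sklearn/linear_model/_ridge.py"], gt)
```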
Synthetic agentic interactions. To strengthen agentic capabilities, we simulate agent-environment interactions with a custom tool set designed to mimic file-system operations without execution. This design is motivated by practical constraints: while repository snapshots from GitHub are available, not all snapshots are equipped with an executable Docker environment. As a result, shell commands are disabled. Instead, we introduce synthetic tools that allow the agent to view file contents and perform keyword-based search for localization, which effectively reproduces the first stage of Agentless but in an agentic manner. The specification of this tool set is covered in the system prompt, which is then used to elicit agentic interaction rollouts from the Qwen-2.5-72B-Instruct model. The complete system prompt is provided in Prompt 5. We apply a loss mask only to the system prompt, and enable the model to simultaneously learn both actions and observations along the trajectory, inspired by yang2024react. This approach integrates both policy and world modeling into mid-training.
````
Your job is to look through the given GitHub problem description and Repository structure, and edit updates to the files in the repo to resolve the problem.
The job is divided into two stages:
+ In Stage 1, you should localize the files that you would need to edit to fix the problem.
+ In Stage 2, you should edit the updates to the repo.
Let's begin from Stage 1 to localize the bugs:
In Stage 1, besides reading the provided Repository structure, you can use the following skills for exploration. The skills are to be called in an environment wrapped by <execute> and </execute>, listed in the form of python functions as below:
open_file(path: str, is_all: bool | None = False, line_number: int | None = 1, context_lines: int | None = 100) -> None:
    Opens the file at the given path in the editor for exploration.
    By default, only the first 100 lines of the file are displayed. To open the entire file, set 'is_all' to 'True'.
    The 'context_lines' parameter determines the maximum number of lines to be displayed, with a cap of 100 lines. Use 'scroll_up' and 'scroll_down' to view more content up or down.
    If a 'line_number' is provided, the window will be moved to include that line.
    Note: When 'is_all' is set to 'True', the 'line_number' and 'context_lines' parameters will not take effect, as the entire file will be opened and displayed without any line-specific focus or context limitation.
    Args:
        path: str: The path to the file to open. The full path of the filename should be provided.
        is_all: bool | None = False: If set to 'True', the entire file will be opened. Defaults to 'False'.
        line_number: int | None = 1: The line number to move to. Defaults to 1.
        context_lines: int | None = 100: Only shows this number of lines in the context window (usually from line 1), with line_number as the center (if possible). Defaults to 100.
goto_line(line_number: int) -> None:
    Moves the window to show the specified line number.
    Args:
        line_number: int: The line number to move to.
goto_class_or_func(class_or_func_name: str) -> None:
    Moves the window to show the specified class or function in the current open file.
    Args:
        class_or_func_name: str: The name of the given class, function, or method in a class to move to.
scroll_down() -> None:
    Moves the window down by 100 lines.
    Args:
        None
scroll_up() -> None:
    Moves the window up by 100 lines.
    Args:
        None
search_dir(search_term: str, dir_path: str | None) -> None:
    Searches for search_term in all files in dir. If dir is not provided, searches in the entire repository. Filename, fine-grained line number, and the relative class or function it is located in (if applied) will be shown for each found position.
    Args:
        search_term: str: The term to search for.
        dir_path: str: The path to the directory to search. Should be full path filename.
search_file(search_term: str, file_path: str | None = None) -> None:
    Searches for search_term in file. If file is not provided, searches in the current open file. Filename, fine-grained line number, and the relative class or function it is located in (if applied) will be shown for each found position.
    Args:
        search_term: str: The term to search for.
        file_path: str | None: The path to the file to search. Should be full path filename if provided.
find_file(file_name: str, dir_path: str | None) -> None:
    Finds all files with the given name in the specified directory. If dir is not provided, find in the entire repository.
    Args:
        file_name: str: The name of the file to find.
        dir_path: str: The path to the directory to search.
str_replace(path: str, old_str, new_str)
    old_str=[the old content to be replaced]
    new_str=[the new content after replacement]
    -> None:
    Replace the old content (old_str) in the file at the given path with the new content (new_str). This is the skill that you will be using to edit the updates.
    Args:
        path: str: The path to the file to be updated. The full path of the filename should be provided.
        old_str: str: The old content to be replaced. Note that this argument should be written in a new line starting with "old_str=", and the string content should not be quoted.
        new_str: str: The new content after replacement. Note that this argument should be written in a new line starting with "new_str=", and the string content should not be quoted.
    Example:
        Assuming a call is shown as follows:
        ```
        str_replace("filename.py", old_str, new_str)
        old_str=a
        new_str=b
        c
        ```
        Then it will function as replacing the "a\n" string with the "b\nc" string in the "filename.py" file.
insert(path: str, insert_line: int, new_str)
    new_str=[the new content to be inserted]
    -> None:
    Insert the new content (new_str) in the file at the given path. When you want to add an entirely new class/function to the file, it would be better to use this method.
    Args:
        path: str: The path to the file to be updated. The full path of the filename should be provided.
        insert_line: int: The line number below which the new content is to be added. This line number should be within the range of lines of the file: [0, Lines_of_the_File]. Specifically, when insert_line = 0, the added content starts from the top of the file.
        new_str: str: The new content to be inserted. Note that this argument should be written in a new line starting with "new_str=", and the string content should not be quoted.
    Example:
        Assuming a call is shown as follows:
        ```
        insert("test_filename.py", 5, new_str)
        new_str=def test_add():
            assert add(1, 2) == 3
        ```
        Then it will function as inserting the string "def test_add():\n    assert add(1, 2) == 3" below Line 5 of the "test_filename.py" file.
stop() -> None:
    Terminate the editing process.
    Args:
        None
NOTE:
Responses should be concise.
When exploring, you should attempt fewer things at a time: Include ONLY ONE <execute> per response, and use a SINGLE skill listed above within the <execute> environment. DO NOT use other python functions, as the environment does not support them.
You should first reason in the verbal form, then use a skill with <execute> and </execute>.
You should avoid apologies and thanks in the responses.
When you finish exploring and analyzing with the provided skills, please return at most 3 files with the full path only. Each full path should be placed in a single line, INSTEAD OF BROKEN WITH MULTIPLE LINES.
The returned files should be separated by new lines ordered by most to least important, wrapped with ``` and NOTHING ELSE.
An example for a full output:
```
full_path_to_file1.py
full_path_to_file2.py
```
Now let's start!
### GitHub Problem Description ###
{issue content}
### Repository Structure ###
{file structure}
###
````
Listing 5: A non-execution set of tools empowering the simulation of agentic interaction trajectories.
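The two editing skills in the listing reduce to a few lines of string manipulation; the following is our own minimal re-implementation for illustration, not the simulator's actual code:

```python
def str_replace(text: str, old_str: str, new_str: str) -> str:
    """Replace the first occurrence of old_str with new_str,
    as the str_replace skill describes."""
    assert old_str in text, "old content must exist in the file"
    return text.replace(old_str, new_str, 1)

def insert(text: str, insert_line: int, new_str: str) -> str:
    """Insert new_str below insert_line (0 = top of file),
    as the insert skill describes."""
    lines = text.splitlines(keepends=True)
    assert 0 <= insert_line <= len(lines), "insert_line out of range"
    if not new_str.endswith("\n"):
        new_str += "\n"
    lines.insert(insert_line, new_str)
    return "".join(lines)

edited = str_replace("a\n", "a", "b\nc")       # replace "a" with "b\nc"
grown = insert("line1\nline2\n", 0, "header")  # insert at the top of the file
```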
After completing the initial localization stage, the agent is guided into the code-editing phase through a follow-up instruction: "Now let's move on to Stage 2 and edit the updates. Remember, you can still decide at any point whether a file actually requires modification." We retain partial rollouts from Stage 1, provided that the localization results include at least one correct file.
In Stage 2, we first simulate the agent's interaction by allowing it to open incorrectly localized files, and we artificially inject agentic reasoning patterns such as "I realize that I do not need to modify this file" after inspecting the file content. This procedure is designed to strengthen the self-reflection ability of the agent by exposing it to false-positive contexts regarding the issue to be solved.
Subsequently, we transcribe the ground-truth PR commit pack into trajectory form: each commit message is treated as the agent's reasoning step, and each code update is represented as the corresponding action, expressed through the "str_replace" or "insert" tools. These interactions are appended to the trajectory, followed by a terminating "stop" call. Due to storage constraints on repository snapshots, this trajectory simulation is applied to only a subset of PRs. Overall, this process contributes $\sim$10B tokens of agentic interaction data to the mid-training recipe. Future directions for scaling this component in the data recipe include leveraging the idea of environment scaling (yang2025swesmith).
Training. We perform mid-training using a standard next-token prediction approach, initialized from the Qwen2.5-72B-Base (qwen2025qwen25technicalreport) model. We upsample the synthetic part of the data by a factor of 4 during mid-training, inspired by the practice in grattafiori2024llama; qwen2025qwen25technicalreport; gu2025data. A global batch size of 256 with a maximum sequence length of 32K tokens is used, optimizing for the long-context capabilities necessary for real-world software engineering tasks. The learning rate is set to 2e-5, with a cosine decay schedule and a minimum learning rate of 2e-6. The warm-up phase covers approximately 3 billion tokens, after which the learning rate decays until approximately 150 billion tokens are processed.
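The described schedule (2e-5 peak, cosine decay to a 2e-6 floor, roughly 3B warm-up tokens over a roughly 150B-token horizon) corresponds to a standard warm-up-plus-cosine curve; the sketch below is our own, as the exact training code is not released:

```python
import math

PEAK_LR, MIN_LR = 2e-5, 2e-6
WARMUP_TOKENS, TOTAL_TOKENS = 3e9, 150e9

def learning_rate(tokens_seen: float) -> float:
    """Linear warm-up to PEAK_LR, then cosine decay down to MIN_LR."""
    if tokens_seen < WARMUP_TOKENS:
        return PEAK_LR * tokens_seen / WARMUP_TOKENS
    progress = min((tokens_seen - WARMUP_TOKENS) / (TOTAL_TOKENS - WARMUP_TOKENS), 1.0)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1 + math.cos(math.pi * progress))
```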
Appendix B Docker environments
Table 3: The sources of the docker environments used in the development of Kimi-Dev.
| Dataset | Source | # Instances |
| --- | --- | --- |
| SWE-Gym (pan2024training) | https://huggingface.co/datasets/SWE-Gym/SWE-Gym/ | 2,356 |
| SWE-bench-extra (badertdinov2024scaling) | https://huggingface.co/datasets/nebius/SWE-bench-extra/ | 3,846 |
| R2E-Gym-Lite (jain2025r2e) | https://huggingface.co/datasets/R2E-Gym/R2E-Gym-Lite | 3,671 |
Docker environment construction. To validate non-ground-truth patches generated by model rollouts and to expand our dataset, we required executable Docker environments. We combined publicly available datasets with custom-configured Docker environments (see Table 3). Among them, SWE-Gym and R2E-Gym-Lite open-source their dockers, which we use directly. For datasets lacking Docker support (SWE-bench-extra), we implemented an automated configuration method:
1. Initialize a Docker environment with fixed dependencies.
2. Select the Python version based on the commit year.
3. Install dependencies via requirements.txt and `pip install -e .`.
4. Resolve ModuleNotFound errors during test execution.
5. Validate success if a FAIL_TO_PASS test transitions from failing (without gt_patch) to passing (with gt_patch).
Out of 6.38k SWE-bench-extra instances, 3,846 environments are successfully constructed and subsequently used for cold-start and RL training.
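The validation criterion in step 5 amounts to a before/after check on the FAIL_TO_PASS tests (a sketch; `run_tests` is a hypothetical stand-in for executing the suite inside the container):

```python
def environment_is_valid(run_tests, fail_to_pass_tests):
    """An environment is kept iff every FAIL_TO_PASS test fails without the
    ground-truth patch and passes with it applied.

    run_tests(with_gt_patch: bool) -> dict mapping test id -> bool (passed).
    """
    before = run_tests(with_gt_patch=False)
    after = run_tests(with_gt_patch=True)
    return all(not before[t] and after[t] for t in fail_to_pass_tests)

# Toy stand-in for container execution: the bug test passes only once patched.
def fake_run(with_gt_patch):
    return {"test_bug": with_gt_patch, "test_ok": True}

valid = environment_is_valid(fake_run, ["test_bug"])
```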
Appendix C More Details of RL training
C.1 Prompt set selection
<details>
<summary>figs/sec3_rl_scaling/RL_bugfix_ablation_figure.png Details</summary>

### Visual Description
The chart plots Pass Rate (%) (y-axis, 34% to 46%) against RL Training Steps (x-axis, 0 to 500) for two runs: "Bugfixer RL" (red) and "w/o Positive Reinforce" (blue). Both start near 34% and dip slightly around step 50; Bugfixer RL then climbs with some volatility to roughly 46% by steps 450 to 500, while the ablation without positive example reinforcement rises more smoothly but ends lower, at roughly 43.5%.
</details>
Figure 8: Ablation of positive example reinforcement during 72B Bugfixer RL.
In the main text, we introduce the adaptive prompt selection method for RL training. Specifically, we construct an initial prompt set of 1,200 problems by selecting those with pass@16 $>$ 0 from SWE-Gym (pan2024training), SWE-bench-extra (badertdinov2025swerebenchautomatedpipelinetask), and R2E-Gym (jain2025r2e). Then, every 100 training steps, we expand the prompt set by adding 500 new problems. These additional problems are randomly sampled and filtered from the pool of problems for which the current model has pass@16 = 0, thereby progressively increasing the difficulty and forming a proper curriculum.
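The adaptive expansion can be sketched as follows (our own function names and structure; the pass@16 statistics are assumed to be precomputed for the current model):

```python
import random

def expand_prompt_set(prompt_set, pool, current_pass16, step, rng,
                      every=100, add=500):
    """Every `every` steps, add `add` problems sampled from those the current
    model cannot yet solve (pass@16 = 0), forming a curriculum of increasing
    difficulty."""
    if step == 0 or step % every != 0:
        return prompt_set
    unsolved = [p for p in pool
                if current_pass16[p] == 0 and p not in prompt_set]
    new = rng.sample(unsolved, min(add, len(unsolved)))
    return prompt_set | set(new)

rng = random.Random(0)
pool = list(range(2000))
pass16 = {p: (1 if p < 1200 else 0) for p in pool}  # first 1200 solvable
initial = set(range(1200))                          # pass@16 > 0 problems
expanded = expand_prompt_set(initial, pool, pass16, step=100, rng=rng)
```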
C.2 RL experiment ablation
Figure 9 shows the performance of the Qwen2.5-14B model in RL experiments, where both the BugFixer and the TestWriter exhibit clear scaling law behavior.
<details>
<summary>figs/sec3_rl_scaling/quick_plot_M3_bf.png Details</summary>

### Visual Description
A dual-axis line chart for the 14B BugFixer over 200 RL training steps. Response token length (left axis, blue squares) fluctuates between roughly 6,500 and 7,500 before rising to about 8,400 by step 200, while Pass Rate (%) (right axis, red circles) climbs from about 24% to a peak of 34% near step 175 and settles around 32%.
</details>
(a) 14B BugFixer
<details>
<summary>figs/sec3_rl_scaling/quick_plot_M3_tw.png Details</summary>

### Visual Description
A dual-axis line chart for the 14B TestWriter over 200 RL training steps. Token length (left axis, blue squares) grows steadily from about 3,100 to 5,600, while the reproduced rate (right axis, red circles) rises from 18% to 26% with fluctuations in between.
</details>
(b) 14B TestWriter
Figure 9: RL scaling experiments on Qwen2.5-14B model.
Furthermore, Figure 8 illustrates the effect of our proposed positive example reinforcement. We incorporated this improvement in the later stage of training and found that the RL of BugFixer achieved better performance than without it. The main reason is that positive examples reinforce correct reasoning patterns, enabling the model to converge more quickly, especially when its exploration ability diminishes in the later stages. We note that a similar trick was also applied in the development of the Seed1.5-thinking model (seed2025seed1).
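As an illustration, positive example reinforcement can be sketched as replaying stored reward-1 rollouts into later RL batches. This is a minimal sketch with hypothetical names (`build_batch`, `replay_ratio`); the actual training code is not shown in this report:

```python
import random

def build_batch(fresh_rollouts, positive_pool, replay_ratio=0.5):
    """Mix stored positive (reward = 1) rollouts into a fresh RL batch so
    correct reasoning patterns keep receiving gradient signal even when
    late-stage exploration rarely finds new successes."""
    n_replay = int(len(fresh_rollouts) * replay_ratio)
    replayed = random.sample(positive_pool, min(n_replay, len(positive_pool)))
    # Bank this batch's successes for future replay.
    positive_pool.extend(r for r in fresh_rollouts if r["reward"] == 1)
    return fresh_rollouts + replayed

pool = [{"traj": "earlier success", "reward": 1}]
fresh = [{"traj": "new success", "reward": 1}, {"traj": "failed attempt", "reward": 0}]
batch = build_batch(fresh, pool)
print(len(batch), len(pool))  # 3 2
```

The replayed fraction is a design knob: too much replay crowds out fresh exploration, too little loses the convergence benefit described above.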
Appendix D Agent Infrastructure
We present a unified sandbox architecture designed for hybrid cloud cluster environments, leveraging Kubernetes orchestration to provide researchers with a flexible, secure, and highly scalable code execution platform. The architecture is accessible through a Python SDK and implements an ephemeral, use-and-destroy paradigm that ensures resource efficiency and security isolation. The system demonstrates exceptional scalability, supporting over 10,000 concurrent sandbox instances while maintaining performance stability. This infrastructure serves as a foundational platform for diverse computational scenarios, including automated code evaluation and RL experiments.
The system employs a sidecar container pattern to efficiently orchestrate and manage a heterogeneous collection of over 25,000 Docker images sourced from multiple datasets. This architectural choice enables optimal resource utilization while maintaining isolation between different execution environments. Additionally, we have developed a proprietary Software Engineering (SWE) image-building pipeline that continuously expands the repository of supported images, thereby enhancing the system's adaptability to diverse computational workloads and research requirements.
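The ephemeral, use-and-destroy paradigm can be sketched as follows. `EphemeralSandbox` is an illustrative name, assuming a context-managed lifecycle; the actual SDK is not public:

```python
import shutil
import subprocess
import sys
import tempfile

class EphemeralSandbox:
    """Use-and-destroy sandbox sketch: each instance gets a fresh working
    directory that is torn down on exit, so no state leaks across runs."""

    def __enter__(self):
        self.workdir = tempfile.mkdtemp(prefix="sandbox-")
        return self

    def run(self, command, timeout=60):
        # Execute a command confined to the sandbox working directory.
        return subprocess.run(command, cwd=self.workdir,
                              capture_output=True, text=True, timeout=timeout)

    def __exit__(self, exc_type, exc, tb):
        # Destroy the sandbox unconditionally, even on error.
        shutil.rmtree(self.workdir, ignore_errors=True)
        return False

with EphemeralSandbox() as sb:
    result = sb.run([sys.executable, "-c", "print(2 + 2)"])
print(result.stdout.strip())  # 4
```

In the production system the working directory would instead be a Kubernetes pod with a sidecar container, but the lifecycle contract (create fresh, execute, destroy) is the same.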
Appendix E Case Studies under Agentless
E.1 BugFixer Traces
We provide a BugFixer response trace from the code repair stage (see Listing 6). From it, we can observe that the initially proposed code was `return self[i, j - pos - other.cols]`, but the model quickly realized its mistake and corrected it to `return self[i, j - other.cols]`. Although some repetitive self-reflections followed, the increased response length did improve answer accuracy and robustness in our experiments.
◁think▷
Okay, let's tackle this problem. The user reported an issue where the Matrix.col_insert() method isn't working correctly. The example given shows that after inserting a matrix V into M at position 3, the resulting matrix has the inserted columns shifted incorrectly. The expected result would have the inserted columns at positions 3 and 4, but in the actual output the original matrix's columns after the insert are shifted upwards instead of remaining in their original positions.
First, I need to understand how col_insert works. Looking at the code provided, especially the MatrixShaping class in sympy/matrices/common.py, the _eval_col_insert method is responsible for inserting columns. The current implementation shifts the existing columns to the right by the number of columns inserted. However, the bug might be in how the original matrix's columns are being shifted.
Let me look at the existing code for _eval_col_insert:
def _eval_col_insert(self, pos, other):
    cols = self.cols
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - pos - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
Wait, the problem here is in the last case. When j is greater than or equal to pos + other.cols, the code subtracts (pos + other.cols) from j. But that's incorrect because the original columns after the inserted ones should start at j = pos + other.cols. So for the original matrix's columns that come after the inserted ones, their new positions should be j + other.cols. But in the current code, when j is in the original matrix's columns (after the inserted ones), the code is subtracting (pos + other.cols), which would incorrectly shift them.
Wait, let's think again. The original code for the entry function:
if j < pos:
    return self[i, j]  # original columns before pos
elif pos <= j < pos + other.cols:
    return other[i, j - pos]  # inserted columns
return self[i, j - pos - other.cols]  # original columns after the inserted ones
Wait, no. Let's say the original matrix has cols = 6. We insert 2 columns at pos = 3. The new total cols is 6 + 2 = 8. The original columns after pos 3 (columns 3, 4, 5) should now be at positions 5, 6, 7. But according to the code, when j >= 3 + 2 = 5 (since pos + other.cols = 3 + 2 = 5), the code returns self[i, j - 5 - 2] = self[i, j - 7]. Wait, that's not right. For example, if j = 5 (the new column 5), then j - pos - other.cols = 5 - 3 - 2 = 0. So the original column 0 would be mapped to the new column 5. That's definitely wrong.
Ah, here's the mistake. The code is subtracting (pos + other.cols) from j, but that's incorrect. The original columns after the inserted ones should be shifted to the right by other.cols. So, the correct calculation should be j - other.cols. For example, original column 3 becomes 3 + 2 = 5, original column 4 becomes 6, etc.
So the line in the code:
return self[i, j - pos - other.cols]
should be:
return self[i, j - other.cols]
Because the inserted columns have taken up 'other.cols' positions starting at 'pos', the original columns after 'pos' are shifted by 'other.cols' to the right. Therefore, to get the correct original column index, we subtract 'other.cols' from the current j.
So the fix is to change the subtraction from (pos + other.cols) to other.cols. Let me check:
Original code:
return self[i, j - pos - other.cols]
Corrected code:
return self[i, j - other.cols]
Wait, no. Wait, the original columns after the inserted block (which starts at pos and has other.cols columns) would start at j = pos + other.cols. So for those j values, the original column index would be j - (pos + other.cols). Wait, no. Let's think with numbers.
Original matrix has 6 columns. Insert 2 columns at position 3. The new matrix has 8 columns. The original columns 0, 1, 2 remain in positions 0, 1, 2. The inserted 2 columns are at positions 3 and 4. The original columns 3, 4, 5 now become 5, 6, 7.
So for j in 5, 6, 7 (original columns 3, 4, 5), the mapping is:
5 → 3
6 → 4
7 → 5
So the formula should be original_column = j - inserted_columns. In this case, inserted_columns is other.cols. So the code should subtract other.cols from j when accessing the original matrix's columns after the inserted block.
Therefore, the line:
return self[i, j - pos - other.cols]
should be:
return self[i, j - other.cols]
Because pos is where the insertion starts, but the original columns after the inserted block are shifted by other.cols. Let's verify with the example:
pos = 3, other.cols = 2. For j = 5 (new column 5), the original column index is 5 - 2 = 3, which is correct. For j = 6, 6 - 2 = 4; for j = 7, 7 - 2 = 5. That's correct.
So the error in the original code was subtracting (pos + other.cols) instead of other.cols. The pos is the starting position of the insertion, but after inserting other.cols columns, the original columns after the inserted block are shifted by other.cols. Therefore, the correct subtraction is other.cols.
Thus, the fix is to change the line in the _eval_col_insert method from:
return self[i, j - pos - other.cols]
to:
return self[i, j - other.cols]
Now, looking back at the code in the _eval_col_insert function:
The code for the else clause is:
return self[i, j - pos - other.cols]
which is incorrect. Changing that to j - other.cols should fix the problem.
So the SEARCH block would look for the lines:
def _eval_col_insert(self, pos, other):
    cols = self.cols
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - pos - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
And the REPLACE block would change the last part to:
elif pos <= j < pos + other.cols:
    return other[i, j - pos]
return self[i, j - other.cols]
So the correction is in the return statement of the entry function. The original code subtracted (pos + other.cols) from j, but it should subtract other.cols instead.
Therefore, the correct SEARCH/REPLACE block would be:
### sympy/matrices/common.py
<<<<<<< SEARCH
def _eval_col_insert(self, pos, other):
    cols = self.cols
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - pos - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
=======
def _eval_col_insert(self, pos, other):
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - other.cols]
    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
>>>>>>> REPLACE
This change corrects the column shifting calculation for the original matrix's columns after the inserted block. Instead of subtracting (pos + other.cols), we subtract other.cols, which properly shifts the original columns to the right by the number of inserted columns.
◁/think▷
```python
### sympy/matrices/common.py
<<<<<<< SEARCH
def _eval_col_insert(self, pos, other):
    cols = self.cols

    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - pos - other.cols]

    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
=======
def _eval_col_insert(self, pos, other):
    def entry(i, j):
        if j < pos:
            return self[i, j]
        elif pos <= j < pos + other.cols:
            return other[i, j - pos]
        return self[i, j - other.cols]

    return self._new(self.rows, self.cols + other.cols,
                     lambda i, j: entry(i, j))
>>>>>>> REPLACE
```
Listing 6: Reasoning trajectory example for BugFixer.
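The corrected index arithmetic from the trace can be checked in isolation. The following standalone sketch (independent of SymPy; `col_insert_entry` is our illustrative name) mirrors the fixed entry mapping:

```python
def col_insert_entry(i, j, pos, inserted_cols, orig, ins):
    """Corrected entry mapping for inserting `inserted_cols` columns at
    `pos`, mirroring the fixed _eval_col_insert logic in Listing 6."""
    if j < pos:
        return orig[i][j]              # columns left of the insertion
    elif pos <= j < pos + inserted_cols:
        return ins[i][j - pos]         # the inserted block
    return orig[i][j - inserted_cols]  # shifted right by the block width only

orig = [[0, 1, 2, 3, 4, 5]]  # one row, six columns
ins = [["a", "b"]]           # insert two columns at position 3
row = [col_insert_entry(0, j, 3, 2, orig, ins) for j in range(8)]
print(row)  # [0, 1, 2, 'a', 'b', 3, 4, 5]
```

Original columns 3, 4, 5 land at positions 5, 6, 7, exactly the mapping the trace derives; substituting the buggy `j - pos - inserted_cols` offset would wrap them back to columns 0, 1, 2.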
E.2 TestWriter False Positive Cases
During RL training, we observed that the TestWriter component produces some false positives: the tests it writes can contain coverage gaps, which may cause model-generated test patches to be mistakenly judged correct. Consequently, the measured performance improvement of TestWriter does not accurately reflect its actual performance. In this section, we provide two intuitive examples to illustrate this phenomenon, and leave further enhancement of TestWriter as future work.
Listing 7 shows the ground-truth test patch for instance django__django-16877, and Listing 8 shows the test patch generated by the TestWriter. A detailed comparison reveals that the model-generated test cases are not comprehensive: two locations should have been tested, but only one was checked, and the effect of mark_safe was not considered. In practice, this allowed the model-predicted patch to pass the TestWriter reward check, resulting in a false positive.
The second example is instance sympy__sympy-13974, with Listings 9 and 10 showing the ground-truth and model-generated test patches. Here, too, the coverage of the generated test is insufficient: it does not account for the need to satisfy the associativity property, e.g., `assert tensor_product_simp(TP(A, B)**x) == TP(A**x, B**x)`. As a consequence, it fails to serve as a reliable judge for BugFixer.
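The failure mode generalizes beyond these instances: an under-specified test accepts a buggy patch that a more thorough test rejects. A toy illustration (not taken from the benchmark; all names are ours):

```python
def buggy_escape(items):
    # A flawed "fix" that escapes & but forgets angle brackets
    return [s.replace("&", "&amp;") for s in items]

def weak_check(fn):
    # Under-specified test: only exercises the & case -> false positive
    return fn(["a&b"]) == ["a&amp;b"]

def strong_check(fn):
    # Also exercises angle brackets, exposing the coverage gap
    return fn(["a&b", "<p>"]) == ["a&amp;b", "&lt;p&gt;"]

print(weak_check(buggy_escape), strong_check(buggy_escape))  # True False
```

A TestWriter that emits only `weak_check`-style tests rewards `buggy_escape`, which is precisely the false-positive dynamic discussed above.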
```diff
diff --git a/tests/template_tests/filter_tests/test_escapeseq.py b/tests/template_tests/filter_tests/test_escapeseq.py
new file mode 100644
--- /dev/null
+++ b/tests/template_tests/filter_tests/test_escapeseq.py
@@ -0,0 +1,59 @@
+from django.test import SimpleTestCase
+from django.utils.safestring import mark_safe
+
+from ..utils import setup
+
+
+class EscapeseqTests(SimpleTestCase):
+    """
+    The "escapeseq" filter works the same whether autoescape is on or off,
+    and has no effect on strings already marked as safe.
+    """
+
+    @setup(
+        {
+            "escapeseq_basic": (
+                '{{ a|escapeseq|join:", " }} -- {{ b|escapeseq|join:", " }}'
+            ),
+        }
+    )
+    def test_basic(self):
+        output = self.engine.render_to_string(
+            "escapeseq_basic",
+            {"a": ["x&y", "<p>"], "b": [mark_safe("x&y"), mark_safe("<p>")]},
+        )
+        self.assertEqual(output, "x&amp;y, &lt;p&gt; -- x&y, <p>")
+
+    @setup(
+        {
+            "escapeseq_autoescape_off": (
+                '{% autoescape off %}{{ a|escapeseq|join:", " }}'
+                " -- "
+                '{{ b|escapeseq|join:", "}}{% endautoescape %}'
+            )
+        }
+    )
+    def test_autoescape_off(self):
+        output = self.engine.render_to_string(
+            "escapeseq_autoescape_off",
+            {"a": ["x&y", "<p>"], "b": [mark_safe("x&y"), mark_safe("<p>")]},
+        )
+        self.assertEqual(output, "x&amp;y, &lt;p&gt; -- x&y, <p>")
+
+    @setup({"escapeseq_join": '{{ a|escapeseq|join:"<br/>" }}'})
+    def test_chain_join(self):
+        output = self.engine.render_to_string("escapeseq_join", {"a": ["x&y", "<p>"]})
+        self.assertEqual(output, "x&amp;y<br/>&lt;p&gt;")
+
+    @setup(
+        {
+            "escapeseq_join_autoescape_off": (
+                '{% autoescape off %}{{ a|escapeseq|join:"<br/>" }}{% endautoescape %}'
+            ),
+        }
+    )
+    def test_chain_join_autoescape_off(self):
+        output = self.engine.render_to_string(
+            "escapeseq_join_autoescape_off", {"a": ["x&y", "<p>"]}
+        )
+        self.assertEqual(output, "x&amp;y<br/>&lt;p&gt;")
```
Listing 7: GT test patch of django__django-16877.
```diff
diff --git a/tests/template_tests/filter_tests/test_safeseq.py b/tests/template_tests/filter_tests/test_safeseq.py
--- a/tests/template_tests/filter_tests/test_safeseq.py
+++ b/tests/template_tests/filter_tests/test_safeseq.py
@@ -20,3 +20,20 @@
     def test_safeseq02(self):
         output = self.engine.render_to_string("safeseq02", {"a": ["&", "<"]})
         self.assertEqual(output, "&, < -- &, <")
+
+    # New tests for escapeseq
+    @setup({"escapeseq01": '{{ a|escapeseq|join:", " }}'})
+    def test_escapeseq01(self):
+        output = self.engine.render_to_string("escapeseq01", {"a": ["&", "<"]})
+        self.assertEqual(output, "&amp;, &lt;")
+
+    @setup(
+        {
+            "escapeseq02": (
+                '{% autoescape off %}{{ a|escapeseq|join:", " }}{% endautoescape %}'
+            )
+        }
+    )
+    def test_escapeseq02(self):
+        output = self.engine.render_to_string("escapeseq02", {"a": ["&", "<"]})
+        self.assertEqual(output, "&amp;, &lt;")
```
Listing 8: Model predicted test patch of django__django-16877.
```diff
diff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py
--- a/sympy/physics/quantum/tests/test_tensorproduct.py
+++ b/sympy/physics/quantum/tests/test_tensorproduct.py
@@ -10,7 +10,7 @@
 from sympy.physics.quantum.density import Density
 from sympy.core.trace import Tr

-A, B, C = symbols('A,B,C', commutative=False)
+A, B, C, D = symbols('A,B,C,D', commutative=False)
 x = symbols('x')

 mat1 = Matrix([[1, 2*I], [1 + I, 3]])
@@ -47,6 +47,11 @@ def test_tensor_product_commutator():

 def test_tensor_product_simp():
     assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C)
+    # tests for Pow-expressions
+    assert tensor_product_simp(TP(A, B)**x) == TP(A**x, B**x)
+    assert tensor_product_simp(x*TP(A, B)**2) == x*TP(A**2, B**2)
+    assert tensor_product_simp(x*(TP(A, B)**2)*TP(C, D)) == x*TP(A**2*C, B**2*D)
+    assert tensor_product_simp(TP(A, B)-TP(C, D)**x) == TP(A, B)-TP(C**x, D**x)


 def test_issue_5923():
```
Listing 9: GT test patch for sympy__sympy-13974.
```diff
diff --git a/sympy/physics/quantum/tests/test_tensorproduct.py b/sympy/physics/quantum/tests/test_tensorproduct.py
--- a/sympy/physics/quantum/tests/test_tensorproduct.py
+++ b/sympy/physics/quantum/tests/test_tensorproduct.py
@@ -47,6 +47,27 @@
 def test_tensor_product_simp():
     assert tensor_product_simp(TP(A, B)*TP(B, C)) == TP(A*B, B*C)
+
+    # Test tensor product powers
+    a = symbols('a', commutative=False)
+    t1 = TP(1, 1)*TP(1, 1)
+    t1_simp = tensor_product_simp(t1)
+    assert t1_simp == TP(1, 1), f"Expected 1x1, got {t1_simp}"
+
+    # Test with Pauli matrices
+    from sympy.physics.paulialgebra import Pauli
+    t2 = TP(1, Pauli(3))*TP(1, Pauli(3))
+    t2_simp = tensor_product_simp(t2)
+    assert t2_simp == TP(1, 1), f"Expected 1x1, got {t2_simp}"
+
+    # Test substitution cases from issue
+    expr1 = TP(1, 1)*TP(1, a)
+    result1 = expr1.subs(a, 1)
+    assert tensor_product_simp(result1) == TP(1, 1), f"Substitution 1 failed: {result1}"
+
+    expr2 = TP(1, Pauli(3))*TP(1, a)
+    result2 = expr2.subs(a, Pauli(3))
+    assert tensor_product_simp(result2) == TP(1, 1), f"Substitution 2 failed: {result2}"


 def test_issue_5923():
```
Listing 10: Model predicted test patch of sympy__sympy-13974.
Appendix F Emergent Parallel Scaling in Kimi-Dev
<details>
<summary>figs/sec3_sp_scaling/parellel_scaling_figure.png Details</summary>

Line chart of Pass Rate (%) versus Number of Patches: 48.0% at 1 patch, 49.2% at 5, 50.2% at 10, 51.2% at 20, and 51.6% at 40, showing diminishing returns beyond 10 patches.
</details>
Figure 10: Emergent parallel scaling results on SWE-bench Verified.
Compared to self-play methods that leverage execution at test time, parallel scaling (chen2025parallel; pan2025learning) is a recently introduced test-time scaling method that aggregates multiple independent rollouts for additional improvement. While previous works incentivize the ability of parallel scaling through training, or train a separate reward model (pan2024training; jain2025r2e) for patch selection, we observe that the parallel scaling effect emerges in our model without any specific training. Specifically, patch candidates from parallel rollouts of BugFixer are used to prompt the model to summarize their differences and compose a new bugfix patch. Listing 11 shows the prompt template for parallel scaling, where problem_statement is the GitHub issue and trajs_content holds the contents of the multiple patch candidates.
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
{problem_statement}
--- END ISSUE ---
Below are some thinking trajectories, each from an LLM reasoning model. Only one trajectory is right.
--- BEGIN FILE ---
```
{trajs_content}
```
--- END FILE ---
Please first summarize and analyze the key differences between the trajectories, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line 'print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
The summary of the key differences between the trajectories should be in the thinking part.
Listing 11: The prompt template for parallel scaling.
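Filling this template programmatically can be sketched as follows; `build_parallel_prompt` is an illustrative helper of ours, not part of the released code:

```python
def build_parallel_prompt(template, problem_statement, patches):
    """Number the candidate patches and substitute them, together with the
    issue text, into a parallel-scaling prompt template."""
    trajs_content = "\n".join(
        f"Answer {i}:\n{patch}" for i, patch in enumerate(patches, start=1)
    )
    return template.format(problem_statement=problem_statement,
                           trajs_content=trajs_content)

# Toy template standing in for Listing 11
template = "--- BEGIN ISSUE ---\n{problem_statement}\n--- END ISSUE ---\n{trajs_content}"
prompt = build_parallel_prompt(template, "col_insert shifts columns", ["patch A", "patch B"])
print("Answer 2:" in prompt)  # True
```

The aggregation model then sees all candidates side by side, which is what lets it reason over their differences rather than merely count them.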
The results in Figure 10 show that the performance of parallel aggregation improves as the number of patch candidates in the prompt increases. The advantage of this scaling paradigm over majority voting lies in its ability to leverage the model's own capacity to analyze multiple candidate patches, rather than weighting answers merely by their frequency of occurrence. Listing 12 gives a full prompt example with 14 different input patch candidates. Since Listing 12 is long, readers may skip ahead to Listing 13 to observe the model's behavior directly.
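For contrast, the majority-voting baseline weights candidates only by frequency, as a minimal sketch makes clear (our illustrative code, not from the paper's pipeline):

```python
from collections import Counter

def majority_vote(patches):
    """Return the most frequent candidate patch after whitespace
    normalization; patch content is never analyzed, only counted."""
    normalized = [p.strip() for p in patches]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

print(majority_vote(["fix A", "fix B", "fix A ", "fix C"]))  # fix A
```

A correct but rarely sampled patch can never win under this rule, whereas the prompt-based aggregation above can recognize it from its content.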
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Data <@filename isn't relative to the YAML file
The [docs say](https://gabbi.readthedocs.io/en/latest/format.html#data):
> If the value is a string that begins with <@ then the rest of the string is treated as the name of a file to be loaded from the same directory as the YAML file.
But I haven't found it works like this unless I cd into the directory containing the yaml file.
`_load_data_file` says:
```python
path = os.path.join(self.test_directory, os.path.basename(filename))
```
this does a few things:
- makes the path begin with `self.test_directory` (defaults to `.`, the current working directory)
- discards any folders in the <@ `filename` path
- appends `filename`
This means, if I'm in `/`, and I have a test that says `data: <@cake.jpg`, I cannot run my tests as advised in the docs:
`gabbi-run -- /my/test.yaml /my/other.yaml`
`FileNotFoundError: [Errno 2] No such file or directory: './cake.jpg'`
So in our test running script, we have to start with `cd tests/`.
My preference would be to make the path relative to the location of the yaml file. That way, I can run my tests from whatever directory.
Obviously this isn't possible when redirecting with `<` as the filename is never seen by gabbi. So I'm happy to discuss other ideas.
---
Furthermore I can't keep my test data in a child directory, unless I cd into that directory like:
```shell
cd yaml_tests/test_data
gabbi-run -- ../test_things.yaml
```
So for that reason, I'd like to allow directories like `<@test_data/filename.txt` to be included in the path.
---
Happy to write the patch, if we come up with an approach here.
--- END ISSUE ---
Below are some thinking trajectories, each from an LLM reasoning model.
--- BEGIN FILE ---
```
Answer 1:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 2:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
                test_directory = os.path.dirname(input_file) or '.'
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast, test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 3:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
    result = ConciseTestRunner(
        verbosity=2, failfast=failfast).run(test_suite)
    return result.wasSuccessful()
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
=======
        for input_file in input_files:
            test_directory = os.path.dirname(input_file)
            with open(input_file, 'r') as fh:
                success = run_suite(
                    fh, handler_objects, host, port, prefix, force_ssl,
                    failfast, test_directory=test_directory)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
>>>>>>> REPLACE
```
Answer 4:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            # Determine test directory based on input file's location
            test_directory = os.path.dirname(os.path.abspath(input_file)) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 5:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import sys
import unittest
=======
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
Answer 6:
```python
### gabbi/driver.py
<<<<<<< SEARCH
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, path, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
=======
        test_file_dir = os.path.dirname(test_file)
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, test_file_dir, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
                test_directory = os.path.dirname(input_file)
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast, test_directory)
>>>>>>> REPLACE
```
Answer 7:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
=======
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast, test_directory=test_dir)
            if not failure:  # once failed, this is considered immutable
                failure = not success
            if failure and failfast:
                break
>>>>>>> REPLACE
```
Answer 8:
```python
### gabbi/runner.py
<<<<<<< SEARCH
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    # Determine test directory based on input handle
    if hasattr(handle, 'name'):
        test_directory = os.path.dirname(os.path.abspath(handle.name))
    else:
        test_directory = '.'
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
Answer 9:
```python
### gabbi/driver.py
<<<<<<< SEARCH
    for test_file in glob.iglob('%s/*.yaml' % path):
        if '_' in os.path.basename(test_file):
            warnings.warn(exception.GabbiSyntaxWarning(
                "'_' in test filename %s. This can break suite grouping."
                % test_file))
        if intercept:
            host = str(uuid.uuid4())
        suite_dict = utils.load_yaml(yaml_file=test_file)
        test_base_name = os.path.splitext(os.path.basename(test_file))[0]
        if all_test_base_name:
            test_base_name = '%s_%s' % (all_test_base_name, test_base_name)
        if require_ssl:
            if 'defaults' in suite_dict:
                suite_dict['defaults']['ssl'] = True
            else:
                suite_dict['defaults'] = {'ssl': True}
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, path, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
=======
    for test_file in glob.iglob('%s/*.yaml' % path):
        if '_' in os.path.basename(test_file):
            warnings.warn(exception.GabbiSyntaxWarning(
                "'_' in test filename %s. This can break suite grouping."
                % test_file))
        if intercept:
            host = str(uuid.uuid4())
        suite_dict = utils.load_yaml(yaml_file=test_file)
        test_base_name = os.path.splitext(os.path.basename(test_file))[0]
        if all_test_base_name:
            test_base_name = '%s_%s' % (all_test_base_name, test_base_name)
        if require_ssl:
            if 'defaults' in suite_dict:
                suite_dict['defaults']['ssl'] = True
            else:
                suite_dict['defaults'] = {'ssl': True}
        # Use absolute path of the YAML file's directory
        test_file_dir = os.path.dirname(os.path.abspath(test_file))
        file_suite = suitemaker.test_suite_from_dict(
            loader, test_base_name, suite_dict, test_file_dir, host, port,
            fixture_module, intercept, prefix=prefix,
            test_loader_name=test_loader_name, handlers=handler_objects,
            inner_fixtures=inner_fixtures)
>>>>>>> REPLACE
```
Answer 10:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        force_ssl=force_ssl, failfast=failfast,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, force_ssl=force_ssl, failfast=failfast,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
                test_dir = os.path.dirname(input_file)
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
```python
### gabbi/case.py
<<<<<<< SEARCH
        path = os.path.join(self.test_directory, os.path.basename(filename))
=======
        path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
```
Answer 11:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port,
        None, None, prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
                test_dir = os.path.dirname(input_file)
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
Answer 12:
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
Answer 13:
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            test_dir = os.path.dirname(os.path.abspath(input_file))
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_dir)
>>>>>>> REPLACE
```
Answer 14:
```python
### gabbi/runner.py
<<<<<<< SEARCH
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(input_file, fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_directory = '.'
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
def run_suite(input_file, handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
    """Run the tests from the YAML in handle."""
    data = utils.load_yaml(handle)
    if force_ssl:
        if 'defaults' in data:
            data['defaults']['ssl'] = True
        else:
            data['defaults'] = {'ssl': True}
    loader = unittest.defaultTestLoader
    test_directory = os.path.dirname(input_file) if input_file else '.'
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None, prefix=prefix,
        handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    if not input_files:
        success = run_suite(sys.stdin, handler_objects, host, port,
                            prefix, force_ssl, failfast)
=======
    if not input_files:
        success = run_suite(None, sys.stdin, handler_objects, host, port,
                            prefix, force_ssl, failfast)
>>>>>>> REPLACE
```
```
--- END FILE ---
Please first summary and analyze the key differences between the trajectories, and then generate *SEARCH/REPLACE* edits to fix the issue.
Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
Here is an example:
```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```
Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. If you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code!
Wrap the *SEARCH/REPLACE* edit in blocks ```python...```.
The summary of the key differences between the trajectories should be in the thinking part.
```
Listing 12: Example of emergent parallel scaling (cdent__gabbi-191 in SWE-bench-extra), the prompt input part.
Listing 13 shows how the model successfully recognizes the correct answer candidate in Listing 12, and generates a novel patch with additional reasoning to solve the issue:
```
◁think▷
Okay, let's tackle this problem. The user is having an issue where when they specify a data file in their YAML test using the <@ syntax, the path isn't resolved relative to the YAML file's location. Instead, it's using the current working directory. They want the path to be relative to the YAML file so they can run tests from any directory.
First, I need to understand how the current code is handling the data file paths. The user mentioned that in _load_data_file, the code uses os.path.join(self.test_directory, os.path.basename(filename)). This takes the test_directory (which defaults to '.'), discards any directories in the filename, and appends the basename. That's why if the YAML is in /my/tests/test.yaml and the data is <@data.txt, it looks for ./data.txt instead of /my/tests/data.txt.
The key here is to make sure that the test_directory is set to the directory where the YAML file is located. Then, when joining with the filename (without basename), it would correctly resolve the relative path.
Looking at the provided code snippets, the main changes are in runner.py where the test_directory is determined. The user's proposed solutions involve modifying the run_suite function to accept a test_directory parameter, which is derived from the input file's directory.
In the original code, when running a suite from an input file, the test_directory is hardcoded to '.'. The various answers modify this by passing the directory of the input file as test_directory. For example, in Answer 10, they add the test_directory parameter to run_suite and set it to os.path.dirname(input_file). Then, in case.py, they change the path joining to use the filename directly instead of basename.
Wait, the critical part is in case.py's _load_data_file method. The original code uses os.path.basename(filename), which strips any directories from the filename. So if the YAML has <@test_data/file.txt, the basename is file.txt, and it's joined with test_directory. But if test_directory is the YAML's directory, then it should be YAML_DIR/test_data/file.txt. However, the current code would do test_directory + file.txt, which would be YAML_DIR/file.txt. That's incorrect. So the problem is that the code is taking the basename, thus ignoring any subdirectories in the filename.
Ah, so the fix here is to not take the basename. Instead, the filename after <@ should be treated as a relative path from the YAML's directory. So in case.py's _load_data_file, instead of using os.path.basename(filename), we should use the filename as-is. Then, os.path.join(self.test_directory, filename) would correctly resolve the path relative to the YAML's directory.
But wait, the filename could be an absolute path. For example, if the YAML has <@/absolute/path/file.txt, then os.path.join would just use the absolute path. However, the documentation says that the filename is relative to the YAML file. So absolute paths might not be intended. But the code should handle that.
So combining two changes: in runner.py, set test_directory to the YAML file's directory, and in case.py, stop using basename and use the full filename.
Looking at Answer 10, there's a change in case.py from os.path.basename(filename) to filename. That's exactly the fix needed. Then, in runner.py, when running the suite, pass the test_directory as the directory of the input file.
So the key files to modify are:
1. gabbi/runner.py: When calling run_suite, pass test_directory as the directory of the input file. This involves modifying the run_suite function to accept test_directory as a parameter and then passing it when the input file is processed.
2. gabbi/case.py: Change the path construction from using os.path.basename(filename) to using the filename directly. This allows subdirectories in the filename to be included in the path.
Looking through the provided code snippets, the one that does both is Answer 10. Let me check:
In Answer 10, the runner.py changes add the test_directory parameter and set it to os.path.dirname(input_file). Then, in case.py, the path is changed to os.path.join(self.test_directory, filename). That would correctly resolve the path relative to the YAML's directory, including any subdirectories in the filename.
But wait, the original code in case.py uses os.path.basename(filename). So changing that to filename means that if the input is 'test_data/file.txt', it's joined as test_directory/test_data/file.txt. If test_directory is the YAML's directory, that's correct.
... [intermediate reasoning omitted for display brevity] ...
But perhaps better to use os.path.abspath and then dirname to handle relative paths correctly. But maybe the existing code already works. For example, if the input_file is 'tests/test.yaml', then os.path.dirname would return 'tests', which when joined with os.getcwd() would be correct. Wait, no. os.path.dirname just gives the directory part of the path. So if input_file is a relative path, like 'tests/test.yaml', then test_directory would be 'tests', and when os.path.join is called, it's relative to the current working directory. But the code in suitemaker.test_suite_from_dict might expect an absolute path. However, the original code uses '.' as the test_directory, which is relative. So perhaps using the directory of the input file as test_directory is sufficient.
Another thing to consider: when the YAML is loaded from stdin (like via redirect), the test_directory remains '.'. That's acceptable, as the documentation mentions that when using <, the filename isn't seen by Gabbi, so it's not possible to resolve the path relative to the YAML's original location. So in that case, the test_directory defaults to '.'.
Now, in case.py:
Original line:
path = os.path.join(self.test_directory, os.path.basename(filename))
Modified line:
path = os.path.join(self.test_directory, filename)
So the SEARCH/REPLACE block for case.py:
### gabbi/case.py
<<<<<<< SEARCH
        path = os.path.join(self.test_directory, os.path.basename(filename))
=======
        path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
Putting all these together, the final edits are:
1. Add import os in runner.py.
2. Modify run_suite to accept test_directory and pass it to test_suite_from_dict.
3. In the loop over input_files, compute test_directory as the directory of input_file and pass it to run_suite.
4. Modify case.py's path construction.
So the final answer would include these four SEARCH/REPLACE blocks.
◁/think▷
```python
### gabbi/runner.py
<<<<<<< SEARCH
import argparse
from importlib import import_module
import sys
import unittest
=======
import argparse
from importlib import import_module
import os
import sys
import unittest
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False):
=======
def run_suite(handle, handler_objects, host, port, prefix, force_ssl=False,
              failfast=False, test_directory='.'):
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, '.', host, port, None, None, prefix=prefix,
        handlers=handler_objects)
=======
    test_suite = suitemaker.test_suite_from_dict(
        loader, 'input', data, test_directory, host, port, None, None,
        prefix=prefix, handlers=handler_objects)
>>>>>>> REPLACE
```
```python
### gabbi/runner.py
<<<<<<< SEARCH
    else:
        for input_file in input_files:
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast)
=======
    else:
        for input_file in input_files:
            test_directory = os.path.dirname(input_file) or '.'
            with open(input_file, 'r') as fh:
                success = run_suite(fh, handler_objects, host, port,
                                    prefix, force_ssl, failfast,
                                    test_directory=test_directory)
>>>>>>> REPLACE
```
```python
### gabbi/case.py
<<<<<<< SEARCH
        path = os.path.join(self.test_directory, os.path.basename(filename))
=======
        path = os.path.join(self.test_directory, filename)
>>>>>>> REPLACE
```
These changes:
1. Add 'os' import needed for path handling
2. Modify run_suite to accept a test_directory parameter
3. Use the input file's directory as the test_directory when processing files
4. Update data file path resolution to preserve directory structure from YAML relative paths
5. Remove basename handling that was stripping directory information from data file paths
The net effect is that data files specified with '<@filename' in YAML tests will now be resolved relative to their YAML file's directory, supporting both flat and nested directory structures for test data. When tests are run from stdin (using '<'), the default directory remains the current working directory as before.
```
Listing 13: Example of emergent parallel scaling (cdent__gabbi-191 in SWE-bench-extra), the model response part.
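To make the effect of the accepted fix concrete, the two path-resolution behaviors can be contrasted in a small illustrative sketch. This does not use gabbi itself, and the directory and file names are hypothetical:

```python
import os

def resolve_old(test_directory, filename):
    # Pre-fix behavior: basename() strips any subdirectories from the
    # <@filename reference in the YAML, and test_directory is the cwd.
    return os.path.join(test_directory, os.path.basename(filename))

def resolve_new(test_directory, filename):
    # Post-fix behavior: the relative path is kept intact and joined
    # with the YAML file's own directory.
    return os.path.join(test_directory, filename)

yaml_dir = "/my/tests"           # hypothetical directory of the YAML file
data_ref = "test_data/data.txt"  # hypothetical value after <@ in the YAML

print(resolve_old(".", data_ref))       # subdirectory lost, resolved under cwd
print(resolve_new(yaml_dir, data_ref))  # /my/tests/test_data/data.txt
```

The sketch shows why both changes are needed: fixing only runner.py would still drop the `test_data/` subdirectory, and fixing only case.py would still resolve against the working directory.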
Appendix G Analysis for SWE-Agents
G.1 Stage Annotation for SWE-Agent Trajectories
In this section, we describe how we use a frontier LLM to annotate the SWE-Agent stage to which each interaction turn within a trajectory rollout belongs. While we briefly introduced the five stages suggested in the SWE-Agent prompt in Section 4, we attach the excerpt in Listing 14 for greater clarity:
```
...
Follow these steps to resolve the issue:
1. As a first step, it might be a good idea to find and read code relevant to the <pr_description>
2. Create a script to reproduce the error and execute it with `python <filename.py>` using the bash tool, to confirm the error
3. Edit the source code of the repo to resolve the issue
4. Rerun your reproduce script and confirm that the error is fixed!
5. Think about edgecases and make sure your fix handles them as well
...
```
Listing 14: The excerpt of the five-stage declaration in the SWE-Agent prompt.
It should be noted that the agent can flexibly transition across the five stages during its working process. For example, after Stage 4, when the agent reruns the test script, error messages may persist, at which point the agent returns to Stage 3 to refine its code repair with reflection. Similar backtracking can be observed from Stage 5 to Stage 3, where the initial repair passes the test script the agent composed in Stage 2 but fails an edge-case test the agent proposes in Stage 5.
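As an illustration only, the transitions described above can be sketched as a small transition map. This map is our simplification for exposition and is not a component of any pipeline:

```python
# Stages: 1 localization, 2 reproduction, 3 edit, 4 rerun, 5 edge cases.
FORWARD = {1: 2, 2: 3, 3: 4, 4: 5}  # nominal forward progression
BACKTRACK = {4: 3, 5: 3}            # failed rerun or edge case -> refine edit

def next_stage(stage, success):
    """Return the next stage, or None once Stage 5 succeeds."""
    if success:
        return FORWARD.get(stage)
    # On failure, take a backtracking edge if one exists,
    # otherwise retry within the same stage.
    return BACKTRACK.get(stage, stage)
```

For instance, a failure at Stage 4 or Stage 5 routes the agent back to Stage 3, matching the backtracking behavior observed in the trajectories.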
To further analyze the BugFixer and reflection skill priors, we need to identify the stage to which each turn of the SWE-Agent trajectory belongs. As no strict boundaries or special prompt markers separate consecutive stages, we leverage an LLM for annotation. The annotation system prompt we use with kimi-k2-0711-preview is shown in Listing 15:
```
You are a professional inspector that can analyze the provided agentic interaction trajectory.
The trajectory you are going to analyze is made by an agent that interacts with a computer to solve tasks. This agent has access to the following functions:
---- BEGIN FUNCTION #1: bash ----
Description: Execute a bash command in the terminal.
Parameters:
(1) command (string, required): The bash command to execute. Can be empty to view additional logs when previous exit code is `-1`. Can be `ctrl+c` to interrupt the currently running process.
---- END FUNCTION #1 ----
---- BEGIN FUNCTION #2: submit ----
Description: Finish the interaction when the task is complete OR if the assistant cannot proceed further with the task.
No parameters are required for this function.
---- END FUNCTION #2 ----
---- BEGIN FUNCTION #3: str_replace_editor ----
Description: Custom editing tool for viewing, creating and editing files
* State is persistent across command calls and discussions with the user
* If `path` is a file, `view` displays the result of applying `cat -n`. If `path` is a directory, `view` lists non-hidden files and directories up to 2 levels deep
* The `create` command cannot be used if the specified `path` already exists as a file
* If a `command` generates a long output, it will be truncated and marked with `<response clipped>`
* The `undo_edit` command will revert the last edit made to the file at `path`
Notes for using the `str_replace` command:
* The `old_str` parameter should match EXACTLY one or more consecutive lines from the original file. Be mindful of whitespaces!
* If the `old_str` parameter is not unique in the file, the replacement will not be performed. Make sure to include enough context in `old_str` to make it unique
* The `new_str` parameter should contain the edited lines that should replace the `old_str`
Parameters:
(1) command (string, required): The commands to run. Allowed options are: `view`, `create`, `str_replace`, `insert`, `undo_edit`.
Allowed values: [`view`, `create`, `str_replace`, `insert`, `undo_edit`]
(2) path (string, required): Absolute path to file or directory, e.g. `/repo/file.py` or `/repo`.
(3) file_text (string, optional): Required parameter of `create` command, with the content of the file to be created.
(4) old_str (string, optional): Required parameter of `str_replace` command containing the string in `path` to replace.
(5) new_str (string, optional): Optional parameter of `str_replace` command containing the new string (if not given, no string will be added). Required parameter of `insert` command containing the string to insert.
(6) insert_line (integer, optional): Required parameter of `insert` command. The `new_str` will be inserted AFTER the line `insert_line` of `path`.
(7) view_range (array, optional): Optional parameter of `view` command when `path` points to a file. If none is given, the full file is shown. If provided, the file will be shown in the indicated line number range, e.g. [11, 12] will show lines 11 and 12. Indexing at 1 to start. Setting `[start_line, -1]` shows all lines from `start_line` to the end of the file.
---- END FUNCTION #3 ----
The agent was instructed with the following:
* A python code repository has been uploaded in the directory /testbed.
* Implement the necessary changes to the repository so that the requirements specified in the <pr_description> are met.
* All changes to any of the test files described in the <pr_description> have already been taken care of. This means no need to modify the testing logic or any of the tests in any way.
* Make the minimal changes to non-tests files in the /testbed directory to ensure the <pr_description> is satisfied.
The agent was suggested to follow the following steps to resolve the issue:
1. As a first step, it might be a good idea to find and read code relevant to the <pr_description>
2. Create a script to reproduce the error and execute it with `python <filename.py>` using the bash tool, to confirm the error
3. Edit the source code of the repo to resolve the issue
4. Rerun your reproduce script and confirm that the error is fixed!
5. Think about edgecases and make sure your fix handles them as well
The agent was encouraged to think thoroughly, and it's fine if it's very long.
You are going to inspect this agent's interaction trajectory with a computer to solve the given task in the <pr_description>. One turn of interaction contains a pair of OBSERVATION and ACTION, where the OBSERVATION comes from the computer, and the ACTION is taken by the agent.
For each turn of interaction, determine which step (of the aforementioned five) this turn belongs to. Output a single number (1~5) ONLY in a separate line as your classification (DO NOT OUTPUT ANY OTHER WORDS THAN THE DIGIT).
You can think before making the inspection. When thinking, wrap your thought with <think> and </think>. Don't forget to output your final inspection after thinking.
```
Listing 15: The annotation prompt for SWE-Agent stages.
To provide a clearer understanding of the trajectory, we incorporate most of the tool descriptions and instructions from the SWE-Agent system prompt into the annotation system prompt. The annotation is conducted in a multi-round manner, leveraging the agent's previous actions and observations, as well as the stage classifications of earlier turns, to better exploit contextual information. At the $i$-th round of annotation, the observation-action pair from turn $i$ of the SWE-Agent trajectory is appended as input, and the annotator is expected to output the corresponding stage classification.
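The multi-round procedure above can be sketched as follows (a minimal illustration; `annotate` stands in for the LLM annotator call, and all names are ours, not part of the paper's tooling):

```python
def annotate_stages(trajectory, annotate):
    """Label each turn of an (observation, action) trajectory with a
    stage in 1-5, feeding earlier turns and their labels back as context."""
    context, stages = [], []
    for obs, act in trajectory:
        # Append the current observation-action pair to the running context.
        context.append(f"OBSERVATION:\n{obs}\nACTION:\n{act}")
        # The annotator sees all turns so far plus the earlier stage labels.
        stage = annotate(context, stages)  # LLM call returning a digit 1-5
        stages.append(stage)
    return stages
```

One stage label is produced per round, so the annotator can exploit both the raw history and its own earlier classifications.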
G.2 Comparative Study
Based on the automatic stage annotation described above, we present a comparative study by inspecting performance on sympy__sympy-20590 for Kimi-Dev under Agentless and for each of the Base, MT, SFT, and RL priors with SWE-Agent adaptation.
The problem statement of sympy__sympy-20590 is listed in Listing 16:
Symbol instances have __dict__ since 1.7?
In version 1.6.2 Symbol instances had no `__dict__` attribute
```python
>>> sympy.Symbol('s').__dict__
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-e2060d5eec73> in <module>
----> 1 sympy.Symbol('s').__dict__
AttributeError: 'Symbol' object has no attribute '__dict__'
>>> sympy.Symbol('s').__slots__
('name',)
```
This changed in 1.7 where `sympy.Symbol('s').__dict__` now exists (and returns an empty dict)
I may misinterpret this, but given the purpose of `__slots__`, I assume this is a bug, introduced because some parent class accidentally stopped defining `__slots__`.
Listing 16: The problem statement of sympy__sympy-20590.
It is observed that the main difficulty in resolving the issue lies in identifying the "some parent class" referenced in the problem. In fact, the hints text of this problem, which reflects the developers' discussion under the original issue, reveals a much more in-depth investigation (Listing 17):
It seems that Basic now inherits `DefaultPrinting` which I guess doesn't have slots. I'm not sure if it's a good idea to add `__slots__` to that class as it would then affect all subclasses.
...
Using slots can break multiple inheritance but only if the slots are non-empty I guess. Maybe this means that any mixin should always declare empty slots or it won't work properly with subclasses that have slots...
I see that `EvalfMixin` has `__slots__ = ()`.
I guess we should add empty slots to DefaultPrinting then.
Listing 17: The excerpted hints text of sympy__sympy-20590.
According to the discussion, it is clear that the code repair would be to "add empty slots to DefaultPrinting", which naturally leads to navigation towards the file implementing the printer (sympy/core/_print_helpers.py, which is also the file updated by the ground-truth patch). However, the hints_text information in the test set is not allowed to be used during problem solving, which challenges the reasoner or the agent to figure out "the parent class that stopped defining `__slots__`" autonomously.
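The underlying Python semantics can be illustrated with a minimal, self-contained sketch (ours, not SymPy code): a single slotless mixin anywhere in the hierarchy reintroduces `__dict__`, even when every other class declares `__slots__`:

```python
class SlotlessMixin:          # analogous to the buggy mixin: no __slots__
    pass

class Leaky(SlotlessMixin):
    __slots__ = ('name',)     # declaring slots here does not help

class EmptySlotsMixin:
    __slots__ = ()            # the one-line fix: declare empty slots

class Tight(EmptySlotsMixin):
    __slots__ = ('name',)

print(hasattr(Leaky(), '__dict__'))   # True: instances still get a dict
print(hasattr(Tight(), '__dict__'))   # False: slots are effective again
```

This is exactly why the developers conclude that any mixin should declare `__slots__ = ()`.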
We first examine Kimi-Dev under Agentless. None of the 40 runs succeeded in producing the correct file localization. In most cases, the updates are made to sympy/core/symbol.py, which is a plausible choice since the reported problem is triggered by sympy.Symbol('s'), and symbol.py should contain the definition of the Symbol class. This line of reasoning, however, diverges from the correct solution. A small number of runs attempt alternative localizations, such as sympy/core/{basic.py, expr.py}, but these are also incorrect.
The limitation of the Agentless framework becomes more obvious when the focus shifts to inspection of the SWE-Agent trajectories. For all of the Base, MT, SFT, and RL priors with SWE-Agent adaptation, the resulting trajectories always contain a test-reproducing phase (marked as Stage "2" by the annotator), with a generated script that reads like:
from sympy import Symbol

# Try to access __dict__ on a Symbol instance
s = Symbol('s')
print("Symbol.__slots__:", s.__slots__)
try:
    print("Symbol.__dict__:", s.__dict__)
except AttributeError as e:
    print("Error:", e)
Listing 18: An example reproducing script in Stage 2 of the SWE-Agent trajectories on sympy__sympy-20590.
It is trivial for an agent to generate a reproducing script of this form, as it directly transcribes the case described in the problem statement. Executing the script provides an unambiguous signal of whether the issue is resolved: the attribute s.__dict__ should no longer exist, and the expected Error: message should appear in the output. Indeed, the initial completion of Stage 3 across all trajectories reveals that none of the agents succeeded on their first attempt. However, the incomplete status is consistently exposed in Stage 4, when the script is rerun and s.__dict__ still persists. This progression highlights the advantage of the SWE-Agent framework over the Agentless paradigm, as the agent is afforded additional opportunities for iterative refinement rather than being limited to a single attempt.
We continue to analyze the behavior of the agents adapted from the Base, MT, and SFT priors. Due to the difficulty of the issue, it is non-trivial to correctly localize the error through their agentic reasoning: the interactions show that the agents kept switching among several candidate files but failed to reach the correct one. The only exception is the agent adapted from the RL prior, which marks its unique success on this instance through accurate agentic reasoning. We attach the intermediate process from the initial Stage-3 moment to the Stage-3 revisit (refining the initial code repair) within the successful trajectory, as tagged by the stage annotator:
...
OBSERVATION:
Symbol.__slots__: ('name',)
Symbol.__dict__: {}
ACTION:
Hmm, it seems like the issue is still there. Let's check if there are any other classes in the hierarchy that might be missing `__slots__`:
<function=bash>
<parameter=command>python -c "from sympy import Symbol; s = Symbol('s'); for cls in type(s).__mro__: print(cls.__name__, 'has __slots__:', hasattr(cls, '__slots__'))"</parameter>
</function>
OBSERVATION:
  File "<string>", line 1
    from sympy import Symbol; s = Symbol('s'); for cls in type(s).__mro__: print(cls.__name__, 'has __slots__:', hasattr(cls, '__slots__'))
                                               ^
SyntaxError: invalid syntax
ACTION:
Let's try a different approach:
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/check_slots.py</parameter>
<parameter=file_text>
from sympy import Symbol

s = Symbol('s')
for cls in type(s).__mro__:
    print(f"{cls.__name__:15} has __slots__: {hasattr(cls, '__slots__')}")
    if hasattr(cls, '__slots__'):
        print(f"{' '*16} __slots__ value: {cls.__slots__}")
</parameter>
</function>
OBSERVATION:
File created successfully at: /testbed/check_slots.py
ACTION:
<function=bash>
<parameter=command>python /testbed/check_slots.py</parameter>
</function>
OBSERVATION:
Symbol          has __slots__: True
                 __slots__ value: ('name',)
AtomicExpr      has __slots__: True
                 __slots__ value: ()
Atom            has __slots__: True
                 __slots__ value: ()
Expr            has __slots__: True
                 __slots__ value: ()
Boolean         has __slots__: True
                 __slots__ value: ()
Basic           has __slots__: True
                 __slots__ value: ('_mhash', '_args', '_assumptions')
Printable       has __slots__: False
EvalfMixin      has __slots__: True
                 __slots__ value: ()
object          has __slots__: False
ACTION:
Now I see the issue! `Printable` doesn't have `__slots__` defined. Let's check the `Printable` class:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/sympy/core/_print_helpers.py</parameter>
<parameter=view_range>[1, 30]</parameter>
</function>
OBSERVATION:
Here's the result of running `cat -n` on /testbed/sympy/core/_print_helpers.py:
     1  """
     2  Base class to provide str and repr hooks that `init_printing` can overwrite.
     3
     4  This is exposed publicly in the `printing.defaults` module,
     5  but cannot be defined there without causing circular imports.
     6  """
     7
     8  class Printable:
     9      """
    10      The default implementation of printing for SymPy classes.
    11
    12      This implements a hack that allows us to print elements of built-in
    13      Python containers in a readable way. Natively Python uses ``repr()``
    14      even if ``str()`` was explicitly requested. Mix in this trait into
    15      a class to get proper default printing.
    16
    17      This also adds support for LaTeX printing in jupyter notebooks.
    18      """
    19
    20      # Note, we always use the default ordering (lex) in __str__ and __repr__,
    21      # regardless of the global setting. See issue 5487.
    22      def __str__(self):
    23          from sympy.printing.str import sstr
    24          return sstr(self, order=None)
    25
    26      __repr__ = __str__
    27
    28      def _repr_disabled(self):
    29          """
    30          No-op repr function used to disable jupyter display hooks.
ACTION:
We need to add `__slots__ = ()` to the `Printable` class as well:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/sympy/core/_print_helpers.py</parameter>
<parameter=old_str>
class Printable:
    """
    The default implementation of printing for SymPy classes.

    This implements a hack that allows us to print elements of built-in
    Python containers in a readable way. Natively Python uses ``repr()``
    even if ``str()`` was explicitly requested. Mix in this trait into
    a class to get proper default printing.

    This also adds support for LaTeX printing in jupyter notebooks.
    """
</parameter>
<parameter=new_str>
class Printable:
    """
    The default implementation of printing for SymPy classes.

    This implements a hack that allows us to print elements of built-in
    Python containers in a readable way. Natively Python uses ``repr()``
    even if ``str()`` was explicitly requested. Mix in this trait into
    a class to get proper default printing.

    This also adds support for LaTeX printing in jupyter notebooks.
    """

    __slots__ = ()
</parameter>
</function>
...
Listing 19: The intermediate reflection process between the two Stage-3 moments of the successful trajectory on sympy__sympy-20590.
Listing 19 shows the advanced reflection capability through slot checking along the entire class hierarchy, which successfully leads to the discovery of the erroneous Printable. The agent's subsequent behavior includes the routine test rerun and the observation that the test script (Listing 18) passes. This chain of actions recovers and resonates with the developer discussion in hints_text (Listing 17), showing the superior reflection skill of the RL prior over the other priors.
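The diagnostic in Listing 19 can be distilled into a reusable helper (our sketch, not part of the trajectory); note that checking `'__slots__' in cls.__dict__` is slightly more robust than the trajectory's `hasattr` check, since `hasattr` can also be satisfied by an ancestor's `__slots__`:

```python
def find_slotless_bases(obj):
    """Return the classes in obj's MRO (except object) that do not
    define __slots__ in their own class namespace."""
    return [cls.__name__
            for cls in type(obj).__mro__
            if cls is not object and '__slots__' not in cls.__dict__]
```

On sympy__sympy-20590 this should single out Printable as the class to patch.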
G.3 Pattern Comparison between SFT with Expert Data and End-to-End RL
In this section, we compare the interaction-trajectory patterns of the RL prior that undergoes (i) $1.5\times 2^{28}$ SWE-Agent expert SFT tokens (referred to as "heavy SFT") and (ii) $2^{21}$ SWE-Agent expert SFT tokens plus end-to-end SWE-Agent RL (referred to as "light SFT + end-to-end RL") as a qualitative study for Figure 7.
We select an instance solved by both methods, matplotlib__matplotlib-13989, as a case study, and attach the actions along the two interaction trajectories for comparison.
The action patterns of the heavy SFT model are shown in Listing 20:
ACTION 0:
I'll help you implement the necessary changes to fix the issue where `hist()` doesn't respect `range=...` when `density=True`. Let's follow the steps:
1. First, let's find and read the relevant code:
<function=bash>
<parameter=command>find /testbed -type f -name "*.py" | grep -v "test_"</parameter>
</function>
ACTION 1:
Let's look for the histogram implementation since that's what we need to fix. The PR mentions that the bug is in the `hist()` function when `density=True` and `range=...` is specified.
<function=bash>
<parameter=command>grep -r "def hist" --include="*.py" /testbed | grep -v "test_"</parameter>
</function>
ACTION 2:
The main histogram implementation appears to be in `/testbed/lib/matplotlib/axes/_axes.py`. Let's look at that file:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
</function>
ACTION 3:
Let me look at the histogram implementation specifically:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6365, 6899]</parameter>
</function>
ACTION 4:
Let me also look at the `histogram_bin_edges` function since it's used in the code:
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[40, 60]</parameter>
</function>
ACTION 5:
2. Let's create a script to reproduce the error:
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/reproduce.py</parameter>
...
ACTION 6:
Let's run it to confirm the issue:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 7:
The issue is confirmed: when `density=True` is specified, the `range=(0, 1)` parameter is not respected. The bins should start at 0 and end at 1, but they don't.
3. Let's fix the issue. Looking at the code, the problem is in the `hist` method, specifically around lines 6689-6690:
```python
if density and not stacked:
    hist_kwargs = dict(density=density)
```
The issue is that when `density=True`, the `range` parameter gets lost because we're not including it in the `hist_kwargs`. Let's modify the code to fix this:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
...
ACTION 8:
4. Let's run the reproduce script again to confirm the fix:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 9:
Great! The fix works. Now the bins start at 0 and end at 1 as expected.
5. Let's think about edge cases:
1. When `range=None` - This should use the min/max of the data, which is already handled correctly by the existing code
2. When `range` is specified but `density=False` - The range should be respected, which it already is
3. When `range` is specified and `density=True` - The range should be respected, which our fix now ensures
4. When `bins` is a sequence - The range should be ignored, which is already handled by the code
5. When `range` includes values outside the data range - The histogram should include empty bins for the range outside the data, which is already handled by numpy's histogram function
Let me test some of these edge cases to be sure:
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/reproduce.py</parameter>
...
ACTION 10:
<function=bash>
<parameter=command>python /testbed/reproduce.py</parameter>
</function>
ACTION 11:
All test cases look good:
1. When `range=(0, 1)` and `density=True`, the bins start at 0 and end at 1
2. When `range=None` and `density=True`, the bins use the min/max of the data
3. When `range=(0, 1)` and `density=False`, the bins start at 0 and end at 1
4. When `bins` is a sequence, the range parameter is ignored and the sequence is used directly
5. When `range=(-0.5, 1.5)`, the bins start at -0.5 and end at 1.5
The fix looks good and handles all edge cases correctly. Let me submit the changes:
<function=submit>
</function>
Listing 20: The interaction made by the RL prior with $1.5\times 2^{28}$ SWE-Agent SFT tokens on matplotlib__matplotlib-13989.
The action patterns of the light SFT + end-to-end RL model are shown in Listing 21:
ACTION 0:
Understood. Let's tackle this problem step by step.
First, I need to find the relevant code in the /testbed directory. The user mentioned that the code is in /testbed, so I'll start by listing the files there.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed</parameter>
</function>
ACTION 1:
Hmm, the /testbed directory has a lot of files, but I need to focus on the code related to the hist function. The hist function is part of matplotlib.pyplot, so I should look for the pyplot module in the /testbed/lib/matplotlib directory.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/pyplot.py</parameter>
</function>
ACTION 2:
Okay, I'll look at the hist function in pyplot.py.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/pyplot.py</parameter>
<parameter=view_range>[2610, 2625]</parameter>
</function>
ACTION 3:
The hist function in pyplot.py simply calls the hist method of the current axes. So the actual implementation is in the Axes class.
I'll look for the Axes.hist method in the /testbed/lib/matplotlib/axes directory.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes</parameter>
</function>
ACTION 4:
The Axes.hist method is likely in _axes.py, which is where most of the plotting methods are defined.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
</function>
ACTION 5:
I'll look at the hist method in _axes.py.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6365, 6400]</parameter>
</function>
ACTION 6:
I'll look at more of the hist method to see where the range is handled.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6400, 6450]</parameter>
</function>
ACTION 7:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6450, 6500]</parameter>
</function>
ACTION 8:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6500, 6550]</parameter>
</function>
ACTION 9:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6550, 6600]</parameter>
</function>
ACTION 10:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6600, 6650]</parameter>
</function>
ACTION 11:
I'll look at the code where the bins are calculated.
<function=str_replace_editor>
<parameter=command>view</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
<parameter=view_range>[6650, 6700]</parameter>
</function>
ACTION 12:
Ah, here's the key part. The code calculates the bins using np.histogram, and if the range is specified, it's passed to histogram_bin_edges. But when density is True, the code may be overriding the range.
Wait, in the code:
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
But when density is True, the code may be using the default range of the data, not the specified range.
Wait, no. The code for calculating the bins is:
    if bins is None:
        bins = rcParams['hist.bins']
    if bin_range is not None:
        bin_range = self.convert_xunits(bin_range)
    if bin_range is None:
        xmin = np.inf
        xmax = -np.inf
        for xi in x:
            if len(xi):
                xmin = min(xmin, np.nanmin(xi))
                xmax = max(xmax, np.nanmax(xi))
        if not np.isnan([xmin, xmax]).any() and not (xmin > xmax):
            bin_range = (xmin, xmax)
    # If bins are not specified either explicitly or via range,
    # we need to figure out the range required for all datasets,
    # and supply that to np.histogram.
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range

    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)

    # List to store all the top coordinates of the histograms
    tops = []
    mlast = None
    # Loop through datasets
    for i in range(nx):
        # this will automatically overwrite bins,
        # so that each histogram uses the same bins
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
        m = m.astype(float)  # causes problems later if it's an int
        if mlast is None:
But when density is True, the code passes density=True to np.histogram. However, when density is True, np.histogram normalizes the counts, but the bin edges are still determined by the range.
Wait, but the problem is that when density=True, the range is not respected. The user's code is:
    _, bins, _ = plt.hist(np.random.rand(10), "auto", range=(0, 1), density=True)
    print(bins)
and the output is:
    [0.00331535 0.18930174 0.37528813 0.56127453 0.74726092 0.93324731]
which is not from 0 to 1. But when density=False, the range is respected.
This suggests that when density=True, the code is not using the specified range for the bins.
But why?
Let me look at the code again.
When the code calculates the bins:
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range
But when len(x) is 1 (as in the user's code), the code enters the else clause and sets hist_kwargs['range'] = bin_range.
Then, in the loop:
    for i in range(nx):
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
But wait, if bins is None, then np.histogram will use the range. But if bins is a string (like "auto"), then np.histogram will calculate the bins based on the data and the range.
Wait, in the user's code, bins is "auto" and range is (0, 1). When density=True, the code should calculate the bins based on the range (0, 1).
But in the code, when len(x) is 1, the code sets hist_kwargs['range'] = bin_range. Then, in the loop, the code calls np.histogram(x[i], bins, weights=w[i], **hist_kwargs).
But if bins is a string (like "auto"), then np.histogram will calculate the bins based on the data and the range. So the range should be respected.
But in the user's code, the bins are not from 0 to 1. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is puzzling. Let me try to reproduce the error.
I'll create a script in the /testbed directory.
<function=str_replace_editor>
<parameter=command>create</parameter>
<parameter=path>/testbed/reproduce_error.py</parameter>
...
ACTION 13:
I'll run the script.
<function=bash>
<parameter=command>python /testbed/reproduce_error.py</parameter>
</function>
ACTION 14:
Hmm, the output is not from 0 to 1. This confirms the error.
Now, I need to find where in the code the range is not being passed when density=True.
Let me look at the code again.
When len(x) is 1, the code sets hist_kwargs['range'] = bin_range. Then, in the loop:
    for i in range(nx):
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
But if bins is a string (like "auto"), then np.histogram will calculate the bins based on the data and the range. So the range should be respected.
But in the user's code, the bins are not from 0 to 1. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is confusing. Let me look at the code for np.histogram.
The np.histogram function has a range parameter. If range is specified, the bin edges are calculated based on that range. If the data is outside the range, it is ignored.
In the user's code, the data is between 0 and 1, so the range (0, 1) should be respected.
But when density=True, the code is not respecting the range. This suggests that the code is not passing the range to np.histogram when density=True.
But why?
This is puzzling. Let me try to modify the code in _axes.py.
I'll look for the line where hist_kwargs is set.
In the code:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range.
Ah, here's the problem. If density is True and stacked is False, the code sets hist_kwargs to dict(density=density), which overwrites any previous hist_kwargs, including the range.
This is the root of the problem. The code is overwriting hist_kwargs, which includes the range, when density is True and stacked is False.
So, in the code:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range. So the range is not passed to np.histogram.
This is the cause of the problem. The solution is to add the range to hist_kwargs in this case.
So, the code should be:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs.setdefault('density', density)
But wait, hist_kwargs may already have the range. So, the code should not overwrite hist_kwargs, but add to it.
The code is:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)
This line is overwriting hist_kwargs, which may include the range. So the range is lost.
The solution is to modify this code to add to hist_kwargs, not overwrite it.
So, the code should be:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs.setdefault('density', density)
But wait, hist_kwargs may not exist. Let me look at the code.
The code for hist_kwargs is:
    hist_kwargs = dict()

    # if the bin_range is not given, compute without nan numpy
    # does not do this for us when guessing the range (but will
    # happily ignore nans when computing the histogram).
    if bin_range is None:
        xmin = np.inf
        xmax = -np.inf
        for xi in x:
            if len(xi):
                xmin = min(xmin, np.nanmin(xi))
                xmax = max(xmax, np.nanmax(xi))
        # make sure we have seen at least one non-nan and finite
        # value before we reset the bin range
        if not np.isnan([xmin, xmax]).any() and not (xmin > xmax):
            bin_range = (xmin, xmax)

    # If bins are not specified either explicitly or via range,
    # we need to figure out the range required for all datasets,
    # and supply that to np.histogram.
    if not input_empty and len(x) > 1:
        if weights is not None:
            _w = np.concatenate(w)
        else:
            _w = None
        bins = histogram_bin_edges(np.concatenate(x),
                                   bins, bin_range, _w)
    else:
        hist_kwargs['range'] = bin_range

    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs = dict(density=density)

    # List to store all the top coordinates of the histograms
    tops = []
    mlast = None
    # Loop through datasets
    for i in range(nx):
        # this will automatically overwrite bins,
        # so that each histogram uses the same bins
        m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
        m = m.astype(float)  # causes problems later if it's an int
        if mlast is None:
Ah, here's the problem. The code sets hist_kwargs = dict(density=density), which overwrites any previous hist_kwargs, including the range.
So, if density is True and stacked is False, the code overwrites hist_kwargs with dict(density=density), which does not include the range. So the range is not passed to np.histogram.
This is the cause of the problem. The solution is to modify this code to add to hist_kwargs, not overwrite it.
So, the code should be:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs.setdefault('density', density)
But hist_kwargs may not exist. Wait, hist_kwargs is initialized as:
    hist_kwargs = dict()
So, the code can be:
    density = bool(density) or bool(normed)
    if density and not stacked:
        hist_kwargs['density'] = density
This way, hist_kwargs includes the density and the range.
This would fix the problem. Let me test this.
I'll modify the code in _axes.py.
<function=str_replace_editor>
<parameter=command>str_replace</parameter>
<parameter=path>/testbed/lib/matplotlib/axes/_axes.py</parameter>
...
ACTION 15:
I'll run the reproduce_error.py script again.
<function=bash>
<parameter=command>python /testbed/reproduce_error.py</parameter>
</function>
ACTION 16:
Great, the fix works. The bins are now from 0 to 1.
Now, I can submit the changes.
<function=submit>
</function>
Listing 21: The interaction made by the RL prior with $2^{21}$ SWE-Agent SFT tokens and end-to-end SWE-Agent RL on matplotlib__matplotlib-13989.
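The root cause that both trajectories converge on reduces to a one-line dict-rebinding bug, which can be isolated as follows (an illustrative sketch, not matplotlib's actual code):

```python
hist_kwargs = {}
bin_range = (0, 1)
hist_kwargs['range'] = bin_range   # range recorded for np.histogram

density, stacked = True, False
# Buggy version: rebinding the name to a fresh dict drops the stored 'range'.
buggy_kwargs = dict(density=density)
# Fixed version: update the existing dict in place, keeping 'range'.
hist_kwargs['density'] = density

print('range' in buggy_kwargs)   # False: range silently lost
print('range' in hist_kwargs)    # True: range preserved alongside density
```

The fix both models arrive at is exactly this switch from rebinding to an in-place update.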
It can be seen that the two types of patterns are significantly different from each other:
- The patterns of the heavy SFT model appear more regulated, with explicitly labeled steps 1 to 5 along the intermediate reasoning process. The reasoning content is also generally short. This reflects model behavior shaped by the heavy SFT data from proprietary models.
- The patterns of the light SFT + end-to-end RL model, in contrast, are far less regulated. More turns of interaction are spent on repository exploration, and significantly more tokens are generated at the code-edit steps before each action is taken. This is the reflection behavior shaped by end-to-end RL (and less impacted by the expert data), as the sign of reasoning-intensive skill priors is retained in the agentic scenario.
G.4 Generalization to Other Tasks
In this section, we study whether the skills incentivized by Agentless training and SWE-Agent adaptation through SWE-Smith trajectories can transfer to other SWE tasks. We use SWE-bench-Live and SWE-bench Multilingual as our testbeds.
SWE-bench-Live (zhang2025swe) is a benchmark that evaluates AI systems on real-world issue resolution. Using an automated curation pipeline, it is updated monthly to ensure fresh tasks and rigorous, contamination-free evaluation. For our experiments, we selected the default set of 300 tasks, collected between October 2024 and March 2025. Compared to SWE-bench Verified, SWE-bench-Live exhibits a higher degree of distributional shift.
SWE-bench Multilingual (yang2025swesmith) introduces 300 curated tasks from 42 GitHub repositories across 9 programming languages, including Rust, Java, PHP, Ruby, JavaScript/TypeScript, Go, and C/C++, covering domains such as web frameworks, data tools, core utilities, and libraries. Compared to SWE-bench Verified, which focuses exclusively on Python, SWE-bench Multilingual exhibits greater linguistic and domain diversity, posing additional challenges in cross-language generalization and transferability of software engineering capabilities.
Similar to previous experiments, we evaluated four model stages as the priors: the original Qwen2.5-72B (Base), the mid-trained model (MT), the model activated with reasoning data through supervised finetuning (SFT), and the model after RL training (RL). We again use the open-source SWE-Smith trajectories to activate the agentic capabilities of each prior.
Figures 11 and 12 show the performance of the four priors on SWE-bench-Live and SWE-bench Multilingual under varying amounts of agentic trajectories for adaptation ($2^{21}$ tokens as one-step gradient descent; $2^{23}$, $1.1\times 2^{27}$, and $1.5\times 2^{28}$ tokens as 100, 2,000, and 5,016 training trajectories). Each SWE-Agent adaptation experiment is conducted through lightweight supervised finetuning, with training times ranging from several minutes to at most two hours.
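As a quick sanity check on these budgets (our own arithmetic, not reported in the paper), the quoted token counts work out as follows:

```python
# Token budgets for the four SWE-Agent adaptation settings,
# expanded from the powers-of-two notation used in the text.
budgets = {
    "one-step": 2**21,          # ~2.1M tokens
    "100 traj": 2**23,          # ~8.4M tokens
    "2,000 traj": 1.1 * 2**27,  # ~147.6M tokens
    "5,016 traj": 1.5 * 2**28,  # ~402.7M tokens
}
for name, tokens in budgets.items():
    print(f"{name}: {tokens / 1e6:.1f}M tokens")
```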
Compared to the Base prior, the priors specifically enhanced with Agentless skills (SFT and RL) demonstrate stronger task generalization, especially in the data-scarce settings. However, as more SWE-Smith trajectories are used for adaptation, the performance of the Base and MT priors approaches that of the SFT and RL priors. This could be attributed to the distributional gaps between the different SWE tasks. The exploration of recipes that enable stronger out-of-distribution and task-agnostic generalization is left for future work.
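The Pass@k values plotted below can be computed with the standard unbiased pass@k estimator commonly used for code benchmarks; the sketch below is our illustration, not necessarily the exact evaluation code used in these experiments.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n total (c of them correct), passes."""
    if n - c < k:
        return 1.0  # fewer failures than k: some draw must be correct
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. a task with 3 samples, 1 of which resolves the issue:
print(round(pass_at_k(3, 1, 1), 3))  # 0.333
print(round(pass_at_k(3, 1, 2), 3))  # 0.667
print(pass_at_k(3, 1, 3))            # 1.0
```

Averaging this quantity over all tasks gives the curves reported at k = 1, 2, 3.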
<details>
<summary>figs/sec4_generalization_plots/l-100-s1.png Details</summary>

### Visual Description
Line chart of Pass@k (%) (y-axis, 0.0 to 12.5) versus k (x-axis, 1 to 3) for the four priors; legend in the top-left corner (RL red, SFT orange, MT purple, Base blue). Approximate values read from the plot:

| k | RL (%) | SFT (%) | MT (%) | Base (%) |
|---|--------|---------|--------|----------|
| 1 | 4.8 | 4.5 | 1.8 | 1.0 |
| 2 | 7.2 | 8.0 | 2.0 | 1.5 |
| 3 | 9.5 | 11.0 | 2.5 | 2.5 |

SFT overtakes RL between k=1 and k=2 and is the highest-performing prior by k=3 (~11%), while MT and Base remain nearly flat below 3%.
</details>
(a) #SFT $=2^{21}$.
<details>
<summary>figs/sec4_generalization_plots/l-100.png Details</summary>

### Visual Description
Line chart of Pass@k (%) (y-axis, 0.0 to 12.5) versus k (x-axis, 1 to 3); legend on the right side (RL red, SFT orange, MT purple, Base blue). Approximate values read from the plot:

| k | RL (%) | SFT (%) | MT (%) | Base (%) |
|---|--------|---------|--------|----------|
| 1 | 9.0 | 5.2 | 6.0 | 2.8 |
| 2 | 10.5 | 7.8 | 10.5 | 5.0 |
| 3 | 12.3 | 9.7 | 12.7 | 8.0 |

RL leads at k=1 (9.0%); MT accelerates, matching RL at k=2 and edging past it at k=3 (12.7% vs. 12.3%). Base remains the weakest across all k.
</details>
(b) #SFT $=2^{23}$.
<details>
<summary>figs/sec4_generalization_plots/l-2000.png Details</summary>

### Visual Description
Line chart of Pass@k (%) (y-axis, 5.0 to 17.5) versus k (x-axis, 1 to 3); legend in the lower-right quadrant (RL red, SFT orange, MT purple, Base blue), cross-shaped markers. Approximate values read from the plot:

| k | RL (%) | SFT (%) | MT (%) | Base (%) |
|---|--------|---------|--------|----------|
| 1 | 10.2 | 10.8 | 10.1 | 6.5 |
| 2 | 13.2 | 12.8 | 12.1 | 11.8 |
| 3 | 15.0 | 14.8 | 14.9 | 14.0 |

RL reaches the highest value at k=3 (~15.0%); Base starts well below the others at k=1 but largely closes the gap by k=3.
</details>
(c) #SFT $=1.1\times 2^{27}$.
<details>
<summary>figs/sec4_generalization_plots/l-5000.png Details</summary>

### Visual Description
Line chart of Pass@k (%) (y-axis, 5.0 to 17.5) versus k (x-axis, 1 to 3); legend in the bottom-right corner (RL red, SFT orange, MT purple, Base blue). Approximate values read from the plot:

| k | RL (%) | SFT (%) | MT (%) | Base (%) |
|---|--------|---------|--------|----------|
| 1 | 11.5 | 12.0 | 12.5 | 8.5 |
| 2 | 13.0 | 13.5 | 15.0 | 12.0 |
| 3 | 14.5 | 15.5 | 16.5 | 12.5 |

MT shows the sharpest slope and is highest at k=3 (~16.5%); SFT and RL grow at similar rates with SFT slightly ahead, while Base plateaus after k=2.
</details>
(d) #SFT $=1.5\times 2^{28}$.
Figure 11: Generalization analysis on SWE-bench-Live.
<details>
<summary>figs/sec4_generalization_plots/m-100-s1.png Details</summary>

### Visual Description
Line chart of Pass@k (%) (y-axis, 0 to 20) versus k (x-axis, 1 to 3); legend in the top-right corner (RL red, SFT orange, MT purple, Base blue). Approximate values read from the plot:

| k | RL (%) | SFT (%) | MT (%) | Base (%) |
|---|--------|---------|--------|----------|
| 1 | 9 | 12 | 2 | 1 |
| 2 | 17 | 17.5 | 2 | 1.5 |
| 3 | 20 | 21.5 | 2 | 4 |

SFT leads RL at every k; MT stays flat at ~2%, and Base rises only slightly (from ~1% to ~4%).
</details>
(a) #SFT $=2^{21}$.
<details>
<summary>figs/sec4_generalization_plots/m-100.png Details</summary>

### Visual Description
Line chart of Pass@k (%) (y-axis, 0 to 20) versus k (x-axis, 1 to 3); legend in the top-left corner (RL red, SFT orange, MT purple, Base blue). Approximate values read from the plot:

| k | RL (%) | SFT (%) | MT (%) | Base (%) |
|---|--------|---------|--------|----------|
| 1 | 8 | 5.5 | 7 | 3 |
| 2 | 16 | 9.5 | 12.5 | 8 |
| 3 | 20.5 | 13 | 17.5 | 12 |

RL outperforms the other priors at every k (8% to 20.5%), followed by MT (7% to 17.5%); Base remains lowest throughout.
</details>
(b) #SFT $=2^{23}$.
<details>
<summary>figs/sec4_generalization_plots/m-2000.png Details</summary>

### Visual Description
Line chart of Pass@k (%) (y-axis, 5 to 35) versus k (x-axis, 1 to 3); legend in the bottom-right quadrant (RL red, SFT orange, MT purple, Base blue). Approximate values read from the plot:

| k | RL (%) | SFT (%) | MT (%) | Base (%) |
|---|--------|---------|--------|----------|
| 1 | 12 | 13 | 11 | 9 |
| 2 | 24 | 26 | 22 | 21 |
| 3 | 28 | 30 | 27 | 29 |

SFT is highest at every k (30% at k=3); Base grows slowest at first but accelerates, overtaking both RL and MT at k=3 (29%).
</details>
(c) #SFT $=1.1\times 2^{27}$.
<details>
<summary>figs/sec4_generalization_plots/m-5000.png Details</summary>

### Visual Description
Line chart of Pass@k (%) (y-axis, 5 to 35) versus k (x-axis, 1 to 3); legend in the lower-right quadrant (RL red, SFT orange, MT purple, Base blue). Approximate values read from the plot:

| k | RL (%) | SFT (%) | MT (%) | Base (%) |
|---|--------|---------|--------|----------|
| 1 | 17 | 16 | 15.5 | 18.5 |
| 2 | 27 | 26 | 25.5 | 26 |
| 3 | 31 | 34.5 | 30 | 30.5 |

The four priors are close at k=1 and k=2; SFT shows the steepest growth and leads at k=3 (~34.5%, about 3.5 points above RL and 4 points above Base).
</details>
(d) #SFT $=1.5\times 2^{28}$.
Figure 12: Generalization analysis on SWE-bench Multilingual.
Appendix H Use of Large Language Models
The initial draft of this paper was written entirely by the authors. A large language model (gpt-5) was used only to aid with polishing the language (e.g., grammar and clarity). All conceptual contributions, experimental designs, analyses, and conclusions are the work of the authors.