2602.02276
# Kimi K2.5: Visual Agentic Intelligence
**Authors**: Kimi Team
## Abstract
We introduce Kimi K2.5, an open-source multimodal agentic model designed to advance general agentic intelligence. K2.5 emphasizes the joint optimization of text and vision so that the two modalities enhance each other. This is achieved through a series of techniques, including joint text-vision pre-training, zero-vision SFT, and joint text-vision reinforcement learning. Building on this multimodal foundation, K2.5 introduces Agent Swarm, a self-directed parallel agent orchestration framework that dynamically decomposes complex tasks into heterogeneous sub-problems and executes them concurrently. Extensive evaluations show that Kimi K2.5 achieves state-of-the-art results across various domains including coding, vision, reasoning, and agentic tasks. Agent Swarm also reduces latency by up to $4.5\times$ over single-agent baselines. We release the post-trained Kimi K2.5 model checkpoint at https://huggingface.co/moonshotai/Kimi-K2.5 to facilitate future research and real-world applications of agentic intelligence.
<details>
<summary>figures/k25-main-result.png Details</summary>

### Visual Description
## Grouped Bar Chart: AI Model Performance Across Diverse Benchmarks
### Overview
This image is a grouped bar chart comparing the performance of four AI models, **Kimi K2.5** (blue bars), **GPT-5.2 (xhigh)** (light gray bars), **Claude Opus 4.5** (light gray bars with orange star icon), and **Gemini 3 Pro** (light gray bars with blue star icon), across 10 benchmarks grouped into four categories: Agents, Coding, Image, and Video. Performance is measured in percentiles (%), with higher scores indicating better performance. A footnote clarifies the score calculation for one benchmark.
### Components/Axes
- **Legend (Top)**: Four models with distinct visual identifiers:
- Kimi K2.5: Blue bar + black square icon with white "K"
- GPT-5.2 (xhigh): Light gray bar + black spiral icon
- Claude Opus 4.5: Light gray bar + orange star icon
- Gemini 3 Pro: Light gray bar + blue star icon
- **Benchmark Categories (X-axis Groupings)**:
- Agents: Humanity's Last Exam (Full), BrowseComp, DeepSearchQA
- Coding: SWE-bench Verified, SWE-bench Multilingual
- Image: MMMU Pro, MathVision, OmniDocBench 1.5*
- Video: VideoMMMU, LongVideoBench
- **Score Metric**: Percentiles (%) (implied by the footnote and numerical labels above bars)
- **Footnote (Bottom)**: "* OmniDocBench Score is computed as (1 − normalized Levenshtein distance) × 100, where a higher score denotes superior accuracy."
### Detailed Analysis
Below are the percentile scores for each model across all benchmarks (scores labeled above bars):
#### Agents Category
| Benchmark | Kimi K2.5 | GPT-5.2 (xhigh) | Claude Opus 4.5 | Gemini 3 Pro |
|----------------------------|-----------|-----------------|-----------------|--------------|
| Humanity's Last Exam (Full)| 50.2 | 45.5 | 43.2 | 45.8 |
| BrowseComp | 74.9 | 65.8 | 57.8 | 59.2 |
| DeepSearchQA | 77.1 | 71.3 | 76.1 | 63.2 |
#### Coding Category
| Benchmark | Kimi K2.5 | GPT-5.2 (xhigh) | Claude Opus 4.5 | Gemini 3 Pro |
|----------------------------|-----------|-----------------|-----------------|--------------|
| SWE-bench Verified | 76.8 | 80.0 | 80.9 | 76.2 |
| SWE-bench Multilingual | 73.0 | 72.0 | 77.5 | 65.0 |
#### Image Category
| Benchmark | Kimi K2.5 | GPT-5.2 (xhigh) | Claude Opus 4.5 | Gemini 3 Pro |
|----------------------------|-----------|-----------------|-----------------|--------------|
| MMMU Pro | 78.5 | 79.5 | 74.0 | 81.0 |
| MathVision | 84.2 | 83.0 | 77.1 | 86.1 |
| OmniDocBench 1.5* | 88.8 | 85.7 | 87.7 | 88.5 |
#### Video Category
| Benchmark | Kimi K2.5 | GPT-5.2 (xhigh) | Claude Opus 4.5 | Gemini 3 Pro |
|----------------------------|-----------|-----------------|-----------------|--------------|
| VideoMMMU | 86.6 | 85.9 | 84.4 | 87.6 |
| LongVideoBench | 79.8 | 76.5 | 67.2 | 77.7 |
### Key Observations
1. **Agents Benchmarks**: Kimi K2.5 leads in all three agent-focused tasks (50.2–77.1), with a significant margin in BrowseComp (74.9 vs. next-highest 65.8).
2. **Coding Benchmarks**: Claude Opus 4.5 outperforms others in both SWE-bench tasks (80.9 in Verified, 77.5 in Multilingual), while GPT-5.2 is competitive in SWE-bench Verified (80.0).
3. **Image Benchmarks**: Gemini 3 Pro leads in MMMU Pro (81.0) and MathVision (86.1), while Kimi K2.5 narrowly leads in OmniDocBench 1.5 (88.8 vs. Gemini's 88.5).
4. **Video Benchmarks**: Gemini 3 Pro leads in VideoMMMU (87.6), and Kimi K2.5 leads in LongVideoBench (79.8).
5. **Outlier**: Claude Opus 4.5 has a notably low score in LongVideoBench (67.2), far below the other three models (76.5–79.8).
6. **Consistency**: Kimi K2.5 is top or near-top in 8 of 10 benchmarks, showing strong cross-category performance.
### Interpretation
This chart illustrates the competitive landscape of leading AI models across specialized tasks. The percentile scores reflect relative performance within each benchmark, so higher values indicate better capability for that specific task. Key takeaways:
- **Kimi K2.5** excels in agent-oriented tasks (e.g., browsing, deep search) and document understanding (OmniDocBench 1.5), suggesting strong reasoning and information retrieval abilities.
- **Claude Opus 4.5** dominates coding benchmarks, indicating superior software engineering and code generation capabilities.
- **Gemini 3 Pro** performs best in image understanding tasks (MMMU Pro, MathVision), highlighting strengths in visual reasoning.
- The OmniDocBench 1.5 footnote clarifies that its score measures document accuracy via Levenshtein distance, making Kimi's lead here meaningful for document processing use cases.
Overall, the data shows no single model dominates all tasks; each has niche strengths, which is critical for users selecting models for specific applications (e.g., coding vs. image analysis).
</details>
Figure 1: Kimi K2.5 main results.
## 1 Introduction
Large Language Models (LLMs) are rapidly evolving toward agentic intelligence. Recent advances, such as GPT-5.2 [41], Claude Opus 4.5 [4], Gemini 3 Pro [19], and Kimi K2-Thinking [1], demonstrate substantial progress in agentic capabilities, particularly in tool calling and reasoning. These models increasingly exhibit the ability to decompose complex problems into multi-step plans and to execute long sequences of interleaved reasoning and actions.
In this report, we introduce the training methods and evaluation results of Kimi K2.5. Concretely, we improve the training of K2.5 over previous models in the following two key aspects.
Joint Optimization of Text and Vision. A key insight from the development of K2.5 is that joint optimization of text and vision enhances both modalities and avoids conflict between them. Specifically, we devise a set of techniques for this purpose. During pre-training, in contrast to conventional approaches that add visual tokens to a text backbone at a late stage [7, 20], we find that early vision fusion with a lower vision ratio tends to yield better results given a fixed total vision-text token budget. Therefore, K2.5 mixes text and vision tokens at a constant ratio throughout the entire training process.
Architecturally, Kimi K2.5 employs MoonViT-3D, a native-resolution vision encoder incorporating the NaViT packing strategy [14], enabling variable-resolution image inputs. For video understanding, we introduce a lightweight 3D ViT compression mechanism: consecutive frames are grouped in fours, processed through the shared MoonViT encoder, and temporally averaged at the patch level. This design allows Kimi K2.5 to process videos up to $4\times$ longer within the same context window while maintaining complete weight sharing between image and video encoders.
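The grouping-and-averaging step above can be sketched as follows. This is an illustrative NumPy sketch, not the actual MoonViT-3D implementation; `encode_patches` stands in for the shared ViT encoder, and the function name is ours.

```python
import numpy as np

def compress_video_frames(frames, encode_patches, group_size=4):
    """Group consecutive frames (in fours by default), encode them with
    a shared ViT, and average the patch embeddings over the temporal axis.

    frames: (T, C, H, W) array; encode_patches maps (N, C, H, W) ->
    (N, P, D) patch embeddings. Both names are illustrative stand-ins.
    """
    T = frames.shape[0]
    pad = (-T) % group_size
    if pad:  # repeat the last frame so T divides evenly into groups
        frames = np.concatenate([frames, np.repeat(frames[-1:], pad, axis=0)], axis=0)
    patch_emb = encode_patches(frames)                  # (T', P, D)
    Tp, P, D = patch_emb.shape
    grouped = patch_emb.reshape(Tp // group_size, group_size, P, D)
    return grouped.mean(axis=1)                         # (T'/4, P, D): 4x fewer video tokens
```

Because the averaging happens after the shared encoder, the same weights serve both images (group size 1) and videos, consistent with the weight-sharing property described above.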
During post-training, we introduce zero-vision SFT: text-only SFT alone activates visual reasoning and tool use. We find that adding human-designed visual trajectories at this stage hurts generalization. In contrast, text-only SFT performs better, likely because joint pretraining already establishes strong vision-text alignment, enabling capabilities to generalize naturally across modalities. We then apply joint RL on both text and vision tasks. Crucially, we find that visual RL enhances textual performance rather than degrading it, with improvements on MMLU-Pro and GPQA-Diamond. This bidirectional enhancement (text bootstraps vision, vision refines text) reflects the strong cross-modal alignment achieved by joint training.
Agent Swarm: Parallel Agent Orchestration. Most existing agentic models rely on sequential execution of tool calls. Even systems capable of hundreds of reasoning steps, such as Kimi K2-Thinking [1], suffer from linear scaling of inference time, leading to unacceptable latency and limiting task complexity. As agentic workloads grow in scope and heterogeneity (e.g., building a complex project that involves massive-scale research, design, and development), the sequential paradigm becomes increasingly inefficient.
To overcome the latency and scalability limits of sequential agent execution, Kimi K2.5 introduces Agent Swarm, a dynamic framework for parallel agent orchestration. We propose a Parallel-Agent Reinforcement Learning (PARL) paradigm that departs from traditional agentic RL [2]. In addition to optimizing tool execution via verifiable rewards, the model is equipped with interfaces for sub-agent creation and task delegation. During training, sub-agents are frozen and their execution trajectories are excluded from the optimization objective; only the orchestrator is updated via reinforcement learning. This decoupling circumvents two challenges of end-to-end co-optimization: credit assignment ambiguity and training instability. Agent Swarm enables complex tasks to be decomposed into heterogeneous sub-problems executed concurrently by domain-specialized agents, transforming task complexity from linear scaling to parallel processing. In wide-search scenarios, Agent Swarm reduces inference latency by up to $4.5\times$ while improving item-level F1 from 72.8% to 79.0% compared to single-agent baselines.
Kimi K2.5 represents a unified architecture for general-purpose agentic intelligence, integrating vision and language, thinking and instant modes, chats and agents. It achieves strong performance across a broad range of agentic and frontier benchmarks, including state-of-the-art results in visual-to-code generation (image/video-to-code) and real-world software engineering in our internal evaluations, while scaling both the diversity of specialized agents and the degree of parallelism. To accelerate community progress toward General Agentic Intelligence, we open-source our post-trained checkpoints of Kimi K2.5, enabling researchers and developers to explore, refine, and deploy scalable agentic intelligence.
## 2 Joint Optimization of Text and Vision
Kimi K2.5 is a native multimodal model built upon Kimi K2 through large-scale joint pre-training on approximately 15 trillion mixed visual and text tokens. Unlike vision-adapted models that compromise either linguistic or visual capabilities, our joint pre-training paradigm enhances both modalities simultaneously. This section describes the multimodal joint optimization methodology that extends Kimi K2 to Kimi K2.5.
### 2.1 Native Multimodal Pre-Training
Table 1: Performance comparison across different vision-text joint-training strategies. Early fusion with a lower vision ratio yields better results given a fixed total vision-text token budget.
| | Vision Injection Timing | Vision-Text Ratio | Vision Knowledge | Vision Reasoning | OCR | Text Knowledge | Text Reasoning | Code |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Early | 0% | 10%:90% | 25.8 | 43.8 | 65.7 | 45.5 | 58.5 | 24.8 |
| Mid | 50% | 20%:80% | 25.0 | 40.7 | 64.1 | 43.9 | 58.6 | 24.0 |
| Late | 80% | 50%:50% | 24.2 | 39.0 | 61.5 | 43.1 | 57.8 | 24.0 |
A key design question for multimodal pre-training is: given a fixed vision-text token budget, what is the optimal vision-text joint-training strategy? Conventional wisdom [7, 20] suggests that introducing vision tokens predominantly in the later stages of LLM training at high ratios (e.g., 50% or higher) accelerates multimodal capability acquisition, treating multimodal capability as a post-hoc add-on to linguistic competence.
However, our experiments (as shown in Table 1 and Figure 9) reveal a different story. We conducted ablation studies varying the vision ratio and vision injection timing while keeping the total vision and text token budgets fixed. To strictly meet the targets for different ratios, we pre-trained the model with text-only tokens for a specifically calculated number of tokens before introducing vision data. Surprisingly, we found that the vision ratio has minimal impact on final multimodal performance. In fact, early fusion with a lower vision ratio yields better results given a fixed total vision-text token budget. This motivates our native multimodal pre-training strategy: rather than aggressive vision-heavy training concentrated at the end, we adopt a moderate vision ratio integrated early in the training process, allowing the model to naturally develop balanced multimodal representations while benefiting from extended co-optimization of both modalities.
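The "specifically calculated number of tokens" follows from simple budget arithmetic, which can be sketched as below. This is a back-of-the-envelope helper, not the training code, and the function name is ours: fixing the text and vision token budgets, a target vision share for the mixed phase determines how long the text-only warmup must be.

```python
def vision_injection_plan(total_text, total_vision, vision_ratio):
    """Given fixed text and vision token budgets and a target vision
    share for the mixed phase, compute how many text-only tokens to
    train on before injecting vision data, and where that injection
    point falls as a fraction of total training.
    """
    # The mixed phase must contain all vision tokens at the target share,
    # so it spans total_vision / vision_ratio tokens in all.
    mixed_phase = total_vision / vision_ratio
    text_in_mix = mixed_phase - total_vision
    warmup = total_text - text_in_mix        # text-only tokens before injection
    if warmup < 0:
        raise ValueError("vision_ratio too low for the given text budget")
    return warmup, warmup / (total_text + total_vision)
```

With a 9:1 text:vision budget, a 10% mixed ratio gives an injection point of 0%, a 20% ratio gives 50%, and a 50% ratio gives 80%, matching the Early, Mid, and Late rows of Table 1.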
### 2.2 Zero-Vision SFT
Pretrained VLMs do not naturally perform vision-based tool-calling, which poses a cold-start problem for multimodal RL. Conventional approaches address this issue through manually annotated or prompt-engineered chain-of-thought (CoT) data [7], but such methods are limited in diversity, often restricting visual reasoning to simple diagrams and primitive tool manipulations (crop, rotate, flip).
A key observation is that high-quality text SFT data are relatively abundant and diverse. We propose a novel approach, zero-vision SFT, that uses only text SFT data to activate visual, agentic capabilities during post-training. In this approach, all image manipulations are proxied through programmatic operations in IPython, effectively serving as a generalization of traditional vision tool-use. This "zero-vision" activation enables diverse reasoning behaviors, including pixel-level operations such as object size estimation via binarization and counting, and generalizes to visually grounded tasks such as object localization, counting, and OCR.
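The pixel-level operations mentioned above can be illustrated with a toy sketch. This is not the model's actual tool code (which runs as ad hoc programs inside an IPython sandbox); it merely shows the kind of binarize-and-count reasoning a programmatic proxy enables, with function names of our choosing.

```python
import numpy as np

def estimate_object_fraction(gray, threshold=128):
    """Binarize a grayscale image and estimate the foreground object's
    size as the fraction of pixels it covers (dark pixels = object)."""
    mask = gray < threshold
    return mask.mean()

def count_blobs(mask):
    """Count connected components in a boolean mask via flood fill
    (4-connectivity), a simple stand-in for object counting."""
    mask = mask.copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count
```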
Figure 2 illustrates the RL training curves, where the starting points are obtained from zero-vision SFT. The results show that zero-vision SFT is sufficient for activating vision capabilities while ensuring generalization across modalities. This phenomenon is likely due to the joint pretraining of text and vision data as described in Section 2.1. Compared to zero-vision SFT, our preliminary experiments show that text-vision SFT yields much worse performance on visual, agentic tasks, possibly because of the lack of high-quality vision data.
### 2.3 Joint Multimodal Reinforcement Learning (RL)
In this section, we describe the methodology implemented in K2.5 that enables effective multimodal RL, from outcome-based visual RL to emergent cross-modal transfer that enhances textual performance.
<details>
<summary>x2.png Details</summary>

### Visual Description
## Line Charts: Model Accuracy vs. RL Flops
### Overview
The image displays four separate line charts arranged in a 2x2 grid. Each chart plots the "Accuracy" of a different model or benchmark against "RL flops" (Reinforcement Learning floating-point operations). The charts illustrate how performance changes with increased computational training effort. All charts share the same x-axis label but have different y-axis scales and data trends.
### Components/Axes
* **Common Elements:**
* **X-axis (All Charts):** Labeled "RL flops". The axis represents a progression of increasing computational effort, though specific numerical markers are not visible.
* **Y-axis (All Charts):** Labeled "Accuracy". The scale and range differ for each chart.
* **Chart Type:** Line chart with data points marked by small circles. Each chart uses a distinct color for its line and a light shaded area beneath it.
* **Individual Chart Details (Spatial Grounding):**
1. **Top-Left Chart: "MMMU Pro"**
* **Title:** "MMMU Pro" (centered at top).
* **Y-axis Scale:** Ranges from approximately 0.71 to 0.76. Major gridlines are visible at 0.71, 0.72, 0.73, 0.74, 0.75, 0.76.
* **Line Color:** Red.
2. **Top-Right Chart: "MathVision"**
* **Title:** "MathVision" (centered at top).
* **Y-axis Scale:** Ranges from approximately 0.68 to 0.78. Major gridlines are visible at 0.68, 0.70, 0.72, 0.74, 0.76, 0.78.
* **Line Color:** Green.
3. **Bottom-Left Chart: "CharXiv(RQ)"**
* **Title:** "CharXiv(RQ)" (centered at top).
* **Y-axis Scale:** Ranges from approximately 0.62 to 0.78. Major gridlines are visible at 0.62, 0.64, 0.66, 0.68, 0.70, 0.72, 0.74, 0.76, 0.78.
* **Line Color:** Blue.
4. **Bottom-Right Chart: "OCRBench"**
* **Title:** "OCRBench" (centered at top).
* **Y-axis Scale:** Ranges from approximately 0.78 to 0.92. Major gridlines are visible at 0.78, 0.80, 0.82, 0.84, 0.86, 0.88, 0.90, 0.92.
* **Line Color:** Orange.
### Detailed Analysis
* **MMMU Pro (Red Line - Top-Left):**
* **Trend Verification:** The line shows a general upward trend with significant volatility. It starts low, rises sharply, then enters a phase of high-frequency oscillation within a band.
* **Data Points (Approximate):** Begins near 0.715. Shows a steep climb to ~0.745. The majority of subsequent data points fluctuate between ~0.735 and ~0.755, with a final point near the top of the range at ~0.76.
* **MathVision (Green Line - Top-Right):**
* **Trend Verification:** Shows a strong, consistent upward trend that begins to plateau in the latter half.
* **Data Points (Approximate):** Starts at the lowest point on its chart, ~0.69. Climbs steadily, crossing 0.74 and 0.76. The trend flattens in the upper region, with most later points oscillating between ~0.77 and ~0.78.
* **CharXiv(RQ) (Blue Line - Bottom-Left):**
* **Trend Verification:** Exhibits a very clear and steady upward trend with moderate noise.
* **Data Points (Approximate):** Begins at the lowest value across all charts, ~0.62. Shows a consistent rise, passing through 0.66, 0.70, and 0.74. The final data points are near ~0.77.
* **OCRBench (Orange Line - Bottom-Right):**
* **Trend Verification:** Demonstrates a rapid initial ascent followed by a stable plateau with minor fluctuations.
* **Data Points (Approximate):** Starts at ~0.79. Increases very quickly to ~0.88. The line then stabilizes, with the majority of points oscillating in a narrow band between ~0.89 and ~0.91. The final point is slightly lower, near ~0.90.
### Key Observations
1. **Universal Improvement:** All four benchmarks show a positive correlation between "RL flops" and "Accuracy," indicating that increased computational training generally improves model performance on these tasks.
2. **Performance Ceiling & Volatility:** Each model appears to approach a performance ceiling. MMMU Pro shows the most volatile plateau, while OCRBench shows the most stable one.
3. **Benchmark Difficulty:** The starting and ending accuracy values suggest varying difficulty. CharXiv(RQ) starts lowest (~0.62), implying it may be the most challenging task initially. OCRBench reaches the highest absolute accuracy (~0.91), suggesting models achieve higher proficiency on this task relative to the others.
4. **Learning Rate:** The slope of the initial ascent varies. OCRBench and MathVision show very steep initial learning curves, while CharXiv(RQ) has a more gradual, sustained climb.
### Interpretation
This composite figure likely comes from a research paper or technical report evaluating the scaling laws of a vision-language model or a reinforcement learning process. The data suggests that:
* **Investment Pays Off:** Allocating more computational resources (RL flops) during training yields measurable accuracy gains across diverse multimodal benchmarks (visual reasoning, math, chart understanding, OCR).
* **Task-Specific Scaling:** The model's learning dynamics are task-dependent. Some tasks (like OCRBench) are mastered quickly and then refined, while others (like CharXiv(RQ)) show continuous, steady improvement, indicating they may require more data or complexity to master.
* **Stability vs. Volatility:** The stability of the plateau (e.g., OCRBench vs. MMMU Pro) may reflect the nature of the task. Noisy tasks with less clear-cut answers might lead to more volatile performance metrics even after extensive training.
* **Practical Implication:** For a practitioner, this chart helps decide the optimal training budget. For example, training beyond a certain point for OCRBench yields diminishing returns, whereas for CharXiv(RQ), further investment might still be beneficial. The charts provide a visual cost-benefit analysis for scaling training compute.
</details>
Figure 2: Vision RL training curves on vision benchmarks starting from minimal zero-vision SFT. By scaling vision RL FLOPs, the performance continues to improve, demonstrating that zero-vision activation paired with long-running RL is sufficient for acquiring robust visual capabilities.
Outcome-Based Visual RL
Following the zero-vision SFT, the model requires further refinement to reliably incorporate visual inputs into reasoning. Text-initiated activation alone exhibits notable failure modes: visual inputs are sometimes ignored, and images may not be attended to when necessary. We employ outcome-based RL on tasks that explicitly require visual comprehension for correct solutions. We categorize these tasks into three domains:
- Visual grounding and counting: Accurate localization and enumeration of objects within images;
- Chart and document understanding: Interpretation of structured visual information and text extraction;
- Vision-critical STEM problems: Mathematical and scientific questions filtered to require visual inputs.
Outcome-based RL on these tasks improves both basic visual capabilities and more complex agentic behaviors. Extracting these trajectories for rejection-sampling fine-tuning (RFT) enables a self-improving data pipeline, allowing subsequent joint RL stages to leverage richer multimodal reasoning traces.
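The RFT step described above can be sketched as a filtering loop over RL rollouts. This is an illustrative sketch under assumed record fields (`task_id`, `answer`, `reference`, `steps`) and an assumed `verifier` callable, not the actual data pipeline.

```python
def rejection_sample_for_rft(trajectories, verifier, keep_top=1):
    """Keep only trajectories whose final answer the verifier accepts,
    then return them as supervised fine-tuning data. Among passing
    candidates for a task, prefer the shortest reasoning traces."""
    by_task = {}
    for traj in trajectories:
        by_task.setdefault(traj["task_id"], []).append(traj)
    sft_data = []
    for task_id, cands in by_task.items():
        passed = [t for t in cands if verifier(t["answer"], t["reference"])]
        passed.sort(key=lambda t: len(t["steps"]))  # concise traces first
        sft_data.extend(passed[:keep_top])
    return sft_data
```

Feeding the kept trajectories back into SFT, then running RL again, yields the self-improving loop the text describes: each round's policy generates the next round's training traces.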
Visual RL Improves Text Performance
Table 2: Cross-Modal Transfer: Vision RL Improves Textual Knowledge
| Benchmark | Before Vision-RL | After Vision-RL | Improvement |
| --- | --- | --- | --- |
| MMLU-Pro | 84.7 | 86.4 | +1.7 |
| GPQA-Diamond | 84.3 | 86.4 | +2.1 |
| LongBench v2 | 56.7 | 58.9 | +2.2 |
To investigate potential trade-offs between visual and textual performance, we evaluated text-only benchmarks before and after visual RL. Surprisingly, outcome-based visual RL produced measurable improvements in textual tasks, including MMLU-Pro (84.7% → 86.4%), GPQA-Diamond (84.3% → 86.4%), and LongBench v2 (56.7% → 58.9%) (Table 2). Analysis suggests that visual RL enhances calibration in areas requiring structured information extraction, reducing uncertainty on queries that resemble visually grounded reasoning (e.g., counting, OCR). These findings indicate that visual RL can contribute to cross-modal generalization, improving textual reasoning without observable degradation of language capabilities.
Joint Multimodal RL
Motivated by the finding that robust visual capabilities can emerge from zero-vision SFT paired with vision RL, which in turn enhances general text abilities, we adopt a joint multimodal RL paradigm during Kimi K2.5's post-training. Departing from conventional modality-specific expert divisions, we organize RL domains not by input modality but by ability: knowledge, reasoning, coding, agentic, and so on. These domain experts jointly learn from both pure-text and multimodal queries, while the Generative Reward Model (GRM) similarly optimizes across heterogeneous traces without modality barriers. This paradigm ensures that capability improvements acquired through either textual or visual inputs inherently generalize to related abilities in the other modality, thereby maximizing cross-modal capability transfer.
## 3 Agent Swarm
The primary challenge of existing agent-based systems lies in their reliance on sequential execution of reasoning and tool-calling steps. While this structure may be effective for simpler, short-horizon tasks, it becomes inadequate as the complexity of the task increases and the accumulated context grows. As tasks evolve to contain broad information gathering and intricate, multi-branch reasoning, sequential systems often encounter significant bottlenecks [6, 4, 5]. The limited capacity of a single agent working through each step one by one can lead to the exhaustion of practical reasoning depth and tool-call budgets, ultimately hindering the system's ability to handle more complex scenarios.
To address this, we introduce Agent Swarm and Parallel Agent Reinforcement Learning (PARL). Instead of executing a task as a reasoning chain or relying on pre-specified parallelization heuristics, K2.5 initiates an Agent Swarm through dynamic task decomposition, subagent instantiation, and parallel subtask scheduling. Importantly, parallelism is not presumed to be inherently advantageous; decisions regarding whether, when, and how to parallelize are explicitly learned through environmental feedback and RL-driven exploration. As shown in Figure 4, the progression of performance demonstrates this adaptive capability, with the cumulative reward increasing smoothly as the orchestrator optimizes its parallelization strategy throughout training.
<details>
<summary>figures/multi-agent-rl-system.png Details</summary>

### Visual Description
## System Architecture Diagram: Multi-Agent Orchestration Workflow
### Overview
The image is a technical diagram illustrating a hierarchical multi-agent system orchestrated by a central "Orchestrator." The system demonstrates a workflow where the Orchestrator creates specialized subagents, assigns tasks to them in parallel, collects results, and produces a final output. The diagram is structured in three horizontal tiers, showing the flow from agent creation to task execution and result aggregation.
### Components/Axes
**Primary Component (Left Column):**
* **Orchestrator:** A large, light-blue vertical rectangle on the left side. It acts as the central controller.
* **Tools List:** Located within the Orchestrator box, listing its capabilities: `create_subagent`, `assign_task`, `search`, `browser, ...`.
**Top Tier (Agent Creation):**
* **Action:** An arrow labeled `create subagents` points from the Orchestrator to a row of subagent boxes.
* **Subagent Types (Top Row, Left to Right):**
* `AI Researcher`
* `Physics Researcher`
* `Life Sciences Researcher`
* `Anthropology Researcher`
* `...` (ellipsis indicating more types)
* `Fact Checker`
* `Web Developer`
* **Subagent Icons:** Each subagent box contains three small icons: a magnifying glass (search), a Python logo, and a browser window.
* **Feedback:** A return arrow labeled `success` points back to the Orchestrator.
**Middle Tier (Research Task Assignment):**
* **Action:** An arrow labeled `Assign Tasks` points from the Orchestrator to a large container box.
* **Task Grid:** Inside the container, white rounded rectangles represent individual tasks assigned to specific subagents.
* **Row 1:** `AI Researcher` (Task 1), `AI Researcher` (Task 2), `AI Researcher` (Task 3), `AI Researcher` (Task 4), `Physics Researcher` (Task 5).
* **Ellipsis:** `...` (indicating tasks 6-95 are not shown).
* **Row 2:** `Life Sciences Researcher` (Task 96), `Life Sciences Researcher` (Task 97), `Life Sciences Researcher` (Task 98), `Life Sciences Researcher` (Task 99), `Anthropology Researcher` (Task 100).
* **Result Flow:** Multiple return arrows point back to the Orchestrator, labeled sequentially: `task 1 result`, `task 2 result`, `task 3 result`, `task 4 result`, `...`, `task 100 result`.
**Bottom Tier (Utility Task Assignment):**
* **Action:** A second arrow labeled `Assign Tasks` points from the Orchestrator to another container box.
* **Task Grid:** Shows tasks for different agent types.
* `Fact Checker` (Task 1), `Fact Checker` (Task 2), `File Downloader` (Task 3), `...`, `Web Developer` (Task 25).
* **Result Flow:** Return arrows labeled `task 1 result`, `...`, `task 25 result`.
**Final Output:**
* A single arrow points from the bottom of the Orchestrator to the text `Final Results` at the bottom center of the diagram.
### Detailed Analysis
**Workflow Sequence:**
1. **Initialization:** The Orchestrator uses its `create_subagent` tool to instantiate a pool of specialized agents (Researchers, Fact Checker, Web Developer).
2. **Parallel Task Execution (Research):** The Orchestrator uses `assign_task` to distribute 100 discrete tasks (Task 1 to Task 100) across the researcher agents. The diagram shows a clear mapping: AI Researchers handle the initial batch (Tasks 1-4), a Physics Researcher handles Task 5, Life Sciences Researchers handle a later batch (Tasks 96-99), and an Anthropology Researcher handles Task 100. This implies task specialization or load distribution.
3. **Parallel Task Execution (Utilities):** A separate batch of 25 tasks (Task 1 to Task 25) is assigned to utility agents like Fact Checkers, a File Downloader, and a Web Developer.
4. **Result Aggregation:** All task results (100 from researchers, 25 from utilities) are sent back to the Orchestrator.
5. **Synthesis:** The Orchestrator processes the aggregated results and outputs the `Final Results`.
**Spatial & Visual Relationships:**
* The Orchestrator is the persistent, central entity on the left.
* Subagent creation is a one-to-many relationship shown at the top.
* Task assignment is a one-to-many relationship shown in two distinct parallel batches (middle and bottom).
* The use of ellipses (`...`) in both the agent list and task lists indicates the system is scalable and can handle more agent types and tasks than are explicitly drawn.
### Key Observations
* **Scalability:** The diagram emphasizes scalability through the use of ellipses and numbered tasks (up to 100 and 25), suggesting the system can manage a large volume of concurrent operations.
* **Specialization:** Agents are domain-specific (AI, Physics, Life Sciences, Anthropology) or role-specific (Fact Checker, Web Developer, File Downloader), indicating a design for complex, multi-disciplinary problems.
* **Tool Integration:** Each subagent is equipped with a standard set of tools (search, coding, browsing), enabling them to perform autonomous research and development tasks.
* **Centralized Control:** All coordination, task assignment, and result collection flows through the single Orchestrator, highlighting a hub-and-spoke control model.
### Interpretation
This diagram models a **scalable, multi-agent AI system for complex research and development projects**. The Orchestrator functions as a project manager or "conductor," breaking down a large, overarching problem into discrete sub-tasks. These tasks are then delegated to a fleet of specialized AI agents that can work in parallel, leveraging their domain expertise and tool access.
The separation into two task assignment batches (100 research tasks, 25 utility tasks) suggests a possible two-phase workflow: a primary research phase followed by a validation/implementation phase (fact-checking, file handling, web development). The system's strength lies in its ability to **parallelize work across many specialized agents**, dramatically speeding up processes that would be sequential for a single AI. The final output is a synthesized product of all these distributed efforts. This architecture is indicative of advanced AI frameworks designed for autonomous scientific discovery, large-scale data analysis, or complex software development projects.
</details>
Figure 3: An agent swarm has a trainable orchestrator that dynamically creates specialized frozen subagents and decomposes complex tasks into parallelizable subtasks for efficient distributed execution.
Architecture and Learning Setup
The PARL framework adopts a decoupled architecture comprising a trainable orchestrator and frozen subagents instantiated from fixed intermediate policy checkpoints. This design deliberately avoids end-to-end co-optimization to circumvent two fundamental challenges: credit-assignment ambiguity and training instability. In this multi-agent setting, outcome-based rewards are inherently sparse and noisy; a correct final answer does not guarantee flawless subagent execution, just as a failure does not imply universal subagent error. By freezing the subagents and treating their outputs as environmental observations rather than differentiable decision points, we disentangle high-level coordination logic from low-level execution proficiency, leading to more robust convergence. To improve efficiency, we first train the orchestrator with smaller subagents before transitioning to larger models. Our RL framework also supports dynamically adjusting the ratio of inference instances between subagents and the orchestrator, thereby maximizing resource usage across the cluster.
PARL Reward
Training a reliable parallel orchestrator is challenging due to the delayed, sparse, and non-stationary feedback inherent in independent subagent execution. To address this, we define the PARL reward as:
| | $\displaystyle r_{\text{PARL}}(x,y)=\lambda_1\cdot\underbrace{r_{\text{parallel}}}_{\text{instantiation reward}}+\lambda_2\cdot\underbrace{r_{\text{finish}}}_{\text{sub-agent finish rate}}+\underbrace{r_{\text{perf}}(x,y)}_{\text{task-level outcome}} .$ | |
| --- | --- | --- |
The performance reward $r_{\text{perf}}$ evaluates the overall success and quality of the solution $y$ for a given task $x$. This is augmented by two auxiliary rewards, each addressing a distinct challenge in learning parallel orchestration. The reward $r_{\text{parallel}}$ is introduced to mitigate serial collapse, a local optimum in which the orchestrator defaults to single-agent execution. By incentivizing subagent instantiation, this term encourages exploration of concurrent scheduling spaces. The $r_{\text{finish}}$ reward focuses on the successful completion of assigned subtasks. It is used to prevent spurious parallelism, a reward-hacking behavior in which the orchestrator inflates parallelism metrics by spawning many subagents without meaningful task decomposition. By rewarding completed subtasks, $r_{\text{finish}}$ enforces feasibility and guides the policy toward valid and effective decompositions.
To ensure the final policy optimizes for the primary objective, the hyperparameters $λ_1$ and $λ_2$ are annealed to zero over the course of training.
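As a minimal sketch, the annealed combination of the three reward terms could look like the following (the linear decay schedule and the initial weights of 0.1 are illustrative assumptions; the paper only states that $λ_1$ and $λ_2$ are annealed to zero):

```python
def parl_reward(r_perf, r_parallel, r_finish, step, total_steps):
    """Combine the PARL reward terms with annealed auxiliary weights.

    The linear decay and the initial weights (0.1) are illustrative
    assumptions, not values reported in the paper.
    """
    anneal = max(0.0, 1.0 - step / total_steps)  # decays 1 -> 0 over training
    lam1 = 0.1 * anneal  # weight on the instantiation reward
    lam2 = 0.1 * anneal  # weight on the sub-agent finish rate
    return lam1 * r_parallel + lam2 * r_finish + r_perf
```

At `step == total_steps` the auxiliary terms vanish, so only the task-level outcome shapes the final policy.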
Critical Steps as Resource Constraint
To measure computational time cost in a parallel-agent setting, we define critical steps by analogy to the critical path in a computation graph. We model an episode as a sequence of execution stages indexed by $t=1,\dots,T$. In each stage, the main agent executes an action, which corresponds to either direct tool invocation or the instantiation of a group of subagents running in parallel. Let $S_{\text{main}}^{(t)}$ denote the number of steps taken by the main agent in stage $t$ (typically $S_{\text{main}}^{(t)}=1$), and $S_{\text{sub},i}^{(t)}$ denote the number of steps taken by the $i$-th subagent in that parallel group. The duration of stage $t$ is governed by the longest-running subagent within that cohort. Consequently, the total critical steps for an episode are defined as
| | $\displaystyle \text{CriticalSteps}=\sum_{t=1}^{T}\left(S_{\text{main}}^{(t)}+\max_i S_{\text{sub},i}^{(t)}\right) .$ | |
| --- | --- | --- |
By constraining training and evaluation using critical steps rather than total steps, the framework explicitly incentivizes effective parallelization. Excessive subtask creation that does not reduce the maximum execution time of parallel groups yields little benefit under this metric, while well-balanced task decomposition that shortens the longest parallel branch directly reduces critical steps. As a result, the orchestrator is encouraged to allocate work across subagents in a way that minimizes end-to-end latency, rather than merely maximizing concurrency or total work performed.
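The definition above translates directly into code; this sketch assumes an episode log given as a list of `(s_main, sub_steps)` pairs, one per stage:

```python
def critical_steps(stages):
    """Total critical steps of an episode.

    `stages` is a list of (s_main, sub_steps) tuples, where sub_steps is a
    (possibly empty) list of step counts for the subagents launched in that
    stage; each stage costs its main-agent steps plus the longest subagent.
    """
    return sum(s_main + (max(subs) if subs else 0) for s_main, subs in stages)

# Three subagents taking 5, 9, and 3 steps in parallel cost only 9 steps on
# the critical path, versus 17 if executed sequentially.
episode = [(1, [5, 9, 3]), (1, []), (1, [4, 4])]
```

Well-balanced decompositions shorten the longest branch of each parallel group and therefore reduce this metric directly.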
Prompt Construction for Parallel-agent Capability Induction
To incentivize the orchestrator to leverage the advantages of parallelization, we construct a suite of synthetic prompts designed to stress the limits of sequential agentic execution. These prompts emphasize either wide search, requiring simultaneous exploration of many independent information sources, or deep search, requiring multiple reasoning branches with delayed aggregation. We additionally include tasks inspired by real-world workloads, such as long-context document analysis and large-scale file downloading. When executed sequentially, these tasks are difficult to complete within fixed reasoning-step and tool-call budgets. By construction, they encourage the orchestrator to allocate subtasks in parallel, enabling completion within fewer critical steps than would be feasible for a single sequential agent. Importantly, the prompts do not explicitly instruct the model to parallelize. Instead, they shape the task distribution such that parallel decomposition and scheduling strategies are naturally favored.
<details>
<summary>x3.png Details</summary>

### Visual Description
## [Chart Type]: Dual-Panel Line Charts with Scatter Points
### Overview
The image displays two side-by-side charts that plot different performance metrics against a common computational cost metric ("RL flops"). Both charts use a scatter plot of individual data points (blue dots) overlaid with a red smoothed trend line. The charts appear to analyze the training progression of a machine learning model, likely in a Reinforcement Learning (RL) context.
### Components/Axes
**Common Elements:**
* **X-Axis (Both Charts):** Label: "RL flops". This axis represents the computational cost or training steps, measured in floating-point operations (flops) for a Reinforcement Learning process. The scale is linear but unlabeled with specific numerical markers.
* **Legend (Both Charts):** Positioned in the top-left corner of each chart's plot area.
* Left Chart: "Training Accuracy" (blue dots), "Smoothed Curve" (red line).
* Right Chart: "Average Parallelism" (blue dots), "Smoothed Curve" (red line).
**Left Chart: "Training Accuracy vs Steps"**
* **Y-Axis:** Label: "Training Accuracy". Scale: Linear, ranging from 30.0% to 70.0% with major gridlines at 5.0% intervals (30.0%, 35.0%, 40.0%, 45.0%, 50.0%, 55.0%, 60.0%, 65.0%, 70.0%).
**Right Chart: "Average parallelism vs Steps"**
* **Y-Axis:** Label: "Average Parallelism". Scale: Linear, ranging from 7 to 14 with major gridlines at integer intervals (7, 8, 9, 10, 11, 12, 13, 14).
### Detailed Analysis
**Left Chart - Training Accuracy:**
* **Trend Verification:** The data series shows a clear, consistent upward trend. The blue dots and the red smoothed curve both slope upward from left to right.
* **Data Points & Values:**
* **Start (Low RL flops):** Training accuracy begins at approximately 35-37%.
* **Mid-Range:** Accuracy crosses the 50% threshold at a mid-point on the x-axis. The data shows moderate scatter around the trend line.
* **End (High RL flops):** The final data points cluster between approximately 62% and 66%. The smoothed curve ends at roughly 63-64%.
* **Distribution:** The scatter of blue dots around the red line is relatively uniform, suggesting consistent variance in accuracy measurements throughout training.
**Right Chart - Average Parallelism:**
* **Trend Verification:** The trend is non-linear. It begins relatively flat, shows a slight dip, then rises gradually before a sharp, accelerating increase at the far right.
* **Data Points & Values:**
* **Start (Low RL flops):** Average parallelism starts around 8.0-8.5.
* **Mid-Range (Dip & Plateau):** There is a noticeable dip where values fall to approximately 7.5-8.0. Following this, the metric recovers and plateaus in the 8.0-9.0 range for a significant portion of the x-axis.
* **End (High RL flops):** A sharp, near-exponential increase occurs. The final data points reach values between 13.0 and 14.0, with the smoothed curve ending at approximately 14.0.
* **Distribution:** The scatter is tighter during the initial flat/dip phase and increases significantly during the final sharp rise, indicating greater variability in parallelism at higher computational scales.
### Key Observations
1. **Positive Correlation:** Both training accuracy and average parallelism show a positive correlation with increased RL flops (training steps/computation).
2. **Divergent Growth Patterns:** While accuracy grows in a roughly linear fashion, parallelism exhibits a "hockey stick" or phase-change growth pattern, with a dramatic acceleration after a long period of modest change.
3. **Initial Parallelism Dip:** The right chart shows a distinct, temporary decrease in average parallelism early in training before it begins its sustained increase.
4. **Increased Variance at Scale:** The scatter (variance) of the "Average Parallelism" data points increases markedly during its final growth phase, unlike the more consistent scatter in the accuracy chart.
### Interpretation
These charts together suggest a narrative about the training dynamics of this RL system:
* **Performance Improves with Compute:** The left chart confirms the expected outcome: investing more computational resources (RL flops) leads to a steady improvement in the model's task performance (accuracy).
* **System Behavior Changes with Scale:** The right chart reveals a more complex underlying system behavior. The "Average Parallelism" likely measures how the computational workload is distributed (e.g., across multiple processors or threads). The initial dip and plateau suggest an initial phase where the system's parallelization strategy is stable or even slightly hindered. The final sharp rise indicates a **critical scaling point** where the system's architecture or the nature of the task allows for a massive increase in parallel execution efficiency.
* **Implication:** The most significant gains in computational efficiency (parallelism) are unlocked only after a substantial amount of training has already occurred. This could imply that the model's structure or the problem's state space evolves to become more amenable to parallel processing later in training. The increased variance at high parallelism might reflect instability or sensitivity in the system when operating at this high-efficiency frontier.
**Language Declaration:** All text in the image is in English.
</details>
Figure 4: In our parallel-agent reinforcement learning environment, the training accuracy increases smoothly as training progresses. At the same time, the level of parallelism during training also gradually increases.
## 4 Method Overview
### 4.1 Foundation: Kimi K2 Base Model
The foundation of Kimi K2.5 is Kimi K2 [53], a trillion-parameter mixture-of-experts (MoE) transformer [59] model pre-trained on 15 trillion high-quality text tokens. Kimi K2 employs the token-efficient MuonClip optimizer [29, 33] with QK-Clip for training stability. The model comprises 1.04 trillion total parameters with 32 billion activated parameters, utilizing 384 experts with 8 activated per token (sparsity of 48). For detailed descriptions of MuonClip, architecture design, and training infrastructure, we refer to the Kimi K2 technical report [53].
### 4.2 Model Architecture
The multimodal architecture of Kimi K2.5 consists of three components: a three-dimensional native-resolution vision encoder (MoonViT-3D), an MLP projector, and the Kimi K2 MoE language model, following the design principles established in Kimi-VL [54].
MoonViT-3D: Shared Embedding Space for Images and Videos
In Kimi-VL, we employ MoonViT to natively process images at their original resolutions, eliminating the need for complex sub-image splitting and splicing operations. Initialized from SigLIP-SO-400M [77], MoonViT incorporates the patch packing strategy from NaViT [14], where single images are divided into patches, flattened, and sequentially concatenated into 1D sequences, thereby enabling efficient simultaneous training on images at varying resolutions.
To maximize the transfer of image understanding capabilities to video, we introduce MoonViT-3D with a unified architecture, fully shared parameters, and a consistent embedding space. By generalizing the "patch n' pack" philosophy to the temporal dimension, up to four consecutive frames are treated as a spatiotemporal volume: 2D patches from these frames are jointly flattened and packed into a single 1D sequence, allowing the identical attention mechanism to operate seamlessly across both space and time. While the extra temporal attention improves understanding of high-speed motion and visual effects, the parameter sharing maximizes knowledge generalization from static images to dynamic videos, achieving strong video understanding performance (see Tab. 4) without requiring specialized video modules or architectural bifurcation. Prior to the MLP projector, lightweight temporal pooling aggregates patches within each temporal chunk, yielding $4\times$ temporal compression to significantly extend feasible video length. The result is a unified pipeline in which knowledge and abilities obtained from image pretraining transfer holistically to videos through one shared parameter space and feature representation.
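To make the compression concrete, here is a back-of-the-envelope visual token count under the packing scheme described above (the patch size of 14 and the exact pooling arithmetic are illustrative assumptions, not specified values):

```python
def packed_vision_tokens(frames, height, width, patch=14, temporal_chunk=4):
    """Approximate visual token count after spatiotemporal packing.

    Frames are grouped into chunks of up to `temporal_chunk` frames; after
    temporal pooling each chunk contributes one frame's worth of patches,
    giving up to 4x temporal compression. Patch size 14 follows SigLIP-style
    ViTs; both values are assumptions for illustration.
    """
    patches_per_frame = (height // patch) * (width // patch)
    chunks = -(-frames // temporal_chunk)  # ceiling division
    return chunks * patches_per_frame

# A 16-frame 448x448 clip packs into 4 chunks of 1024 patches = 4096 tokens,
# versus 16384 without temporal pooling.
```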
### 4.3 Pre-training Pipeline
As illustrated in Table 3, Kimi K2.5's pre-training builds upon the Kimi K2 language model checkpoint and processes approximately 15T tokens across three stages: first, standalone ViT training to establish a robust native-resolution visual encoder; second, joint pre-training to simultaneously enhance language and multimodal capabilities; and third, mid-training on high-quality data and long-context activation to refine capabilities and extend context windows.
Table 3: Overview of training stages: data composition, token volumes, sequence lengths, and trainable components.
| Stages | ViT Training | Joint Pre-training | Joint Long-context Mid-training |
| --- | --- | --- | --- |
| Data | Alt text, Synthetic Captions, Grounding, OCR, Video | + Text, Knowledge Interleaving, Video, OS Screenshots | + High-quality Text & Multimodal, Long Text, Long Video, Reasoning, Long-CoT |
| Sequence length | 4096 | 4096 | 32768 $\to$ 262144 |
| Tokens | 1T | 15T | 500B $\to$ 200B |
| Training | ViT | ViT & LLM | ViT & LLM |
ViT Training Stage
MoonViT-3D is continually pre-trained from SigLIP [77] on image-text and video-text pairs, where the text components consist of a variety of targets: image alt texts, synthetic captions of images and videos, grounding bounding boxes, and OCR texts. Unlike the implementation in Kimi-VL [54], this continual pre-training does not include a contrastive loss, but incorporates solely a cross-entropy loss $\mathcal{L}_{\text{caption}}$ for caption generation conditioned on input images and videos. We adopt a two-stage alignment strategy. In the first stage, we update MoonViT-3D to align it with Moonlight-16B-A3B [33] via the caption loss, consuming about 1T tokens with very few training FLOPs. This stage allows MoonViT-3D to primarily understand high-resolution images and videos. A very short second stage follows, updating only the MLP projector to bridge the ViT with the 1T LLM for smoother joint pre-training.
Joint Training Stages
The joint pre-training stage continues from a near-final Kimi K2 checkpoint over an additional 15T vision-text tokens at 4K sequence length. The data recipe extends Kimi K2's pre-training distribution by introducing unique tokens, adjusting data proportions with increased weight on coding-related content, and controlling the maximum number of epochs per data source. The third stage performs long-context activation with integrated higher-quality mid-training data, sequentially extending context length via YaRN [44] interpolation. This yields significant generalization improvements in long-context text understanding and long video comprehension.
### 4.4 Post-Training
#### 4.4.1 Supervised Fine-Tuning
Following the SFT pipeline established by Kimi K2 [53], we developed K2.5 by synthesizing high-quality candidate responses from K2, K2 Thinking and a suite of proprietary in-house expert models. Our data generation strategy employs specialized pipelines tailored to specific domains, integrating human annotation with advanced prompt engineering and multi-stage verification. This methodology produced a large-scale instruction-tuning dataset featuring diverse prompts and intricate reasoning trajectories, ultimately training the model to prioritize interactive reasoning and precise tool-calling for complex, real-world applications.
#### 4.4.2 Reinforcement Learning
Reinforcement learning constitutes a crucial phase of our post-training. To facilitate joint optimization across text and vision modalities, as well as to enable PARL for agent swarm, we develop a Unified Agentic Reinforcement Learning Environment (Appendix D) and optimize the RL algorithms. Both text-vision joint RL and PARL are built upon the algorithms described in this section.
Policy Optimization
For each problem $x$ sampled from a dataset $D$ , $K$ responses $\{y_1,\dots,y_K\}$ are generated using the previous policy $Ï_old$ . We optimize the model $Ï_Ξ$ with respect to the following objective:
$$
\displaystyle L_{\text{RL}}(\theta)=\mathbb{E}_{x\sim D}\left[\frac{1}{N}\sum_{j=1}^{K}\sum_{i=1}^{|y_j|}\mathrm{Clip}\left(\frac{\pi_\theta(y_j^i\mid x,y_j^{0:i})}{\pi_{\text{old}}(y_j^i\mid x,y_j^{0:i})},\alpha,\beta\right)\left(r(x,y_j)-\bar{r}(x)\right)-\tau\left(\log\frac{\pi_\theta(y_j^i\mid x,y_j^{0:i})}{\pi_{\text{old}}(y_j^i\mid x,y_j^{0:i})}\right)^2\right] . \tag{1}
$$
Here $\alpha,\beta,\tau>0$ are hyperparameters, $y_j^{0:i}$ is the prefix up to the $i$-th token of the $j$-th response, $N=\sum_{j=1}^{K}|y_j|$ is the total number of generated tokens in a batch, and $\bar{r}(x)=\frac{1}{K}\sum_{j=1}^{K}r(x,y_j)$ is the mean reward of all generated responses.
This loss function departs from the policy optimization algorithm used in K1.5 [30] by introducing a token-level clipping mechanism designed to mitigate the off-policy divergence amplified by discrepancies between training and inference frameworks. The mechanism functions as a simple gradient masking scheme: policy gradients are computed normally for tokens with log-ratios within the interval $[\alpha,\beta]$, while gradients for tokens falling outside this range are zeroed out. Notably, a key distinction from standard PPO clipping [50] is that our method relies strictly on the log-ratio to explicitly bound off-policy drift, regardless of the sign of the advantages. This approach aligns with recent strategies proposed to stabilize large-scale RL training [74, 78]. Empirically, we find this mechanism essential for maintaining training stability in complex domains requiring long-horizon, multi-step tool-use reasoning. We employ the MuonClip optimizer [29, 33] to minimize this objective.
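A numerical sketch of this objective for a single prompt, with the clip realized as the token-level masking scheme described above (the hyperparameter values are illustrative, not the paper's):

```python
import numpy as np

def rl_objective(logp_new, logp_old, rewards, alpha=-0.5, beta=0.5, tau=0.01):
    """Monte-Carlo estimate of the Eq. (1) objective for one prompt.

    logp_new / logp_old: per-token log-probability arrays, one per response.
    Tokens whose log-ratio falls outside [alpha, beta] are masked out of the
    surrogate term, mirroring the zeroed-gradient behavior; whether the
    quadratic penalty is also masked is an implementation choice here.
    """
    r_bar = float(np.mean(rewards))
    n_tokens = sum(len(lp) for lp in logp_new)
    total = 0.0
    for lp_new, lp_old, r in zip(logp_new, logp_old, rewards):
        log_ratio = lp_new - lp_old
        mask = (log_ratio >= alpha) & (log_ratio <= beta)  # token-level clip
        total += float(np.sum(mask * np.exp(log_ratio) * (r - r_bar)))
        total -= tau * float(np.sum(log_ratio ** 2))  # penalizes off-policy drift
    return total / n_tokens
```

For fully on-policy tokens (identical log-probabilities) the ratio is one and the quadratic penalty vanishes, so only the centered advantages remain.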
Reward Function
We apply a rule-based outcome reward for tasks with verifiable solutions, such as reasoning and agentic tasks. To optimize resource consumption, we also incorporate a budget-control reward aimed at enhancing token efficiency. For general-purpose tasks, we employ Generative Reward Models (GRMs) that provide granular evaluations aligned with Kimi's internal value criteria. In addition, for visual tasks, we design task-specific reward functions to provide fine-grained supervision. For visual grounding and point localization tasks, we employ an F1-based reward with soft matching: grounding tasks derive soft matches from Intersection over Union (IoU), and point tasks derive soft matches from Gaussian-weighted distances under optimal matching. For polygon segmentation tasks, we rasterize the predicted polygon into a binary mask and compute the segmentation IoU against the ground-truth mask to assign the reward. For OCR tasks, we adopt normalized edit distance to quantify character-level alignment between predictions and ground-truth. For counting tasks, rewards are assigned based on the absolute difference between predictions and ground-truth. Furthermore, we synthesize complex visual puzzle problems and utilize an LLM verifier (Kimi K2) to provide feedback.
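For instance, the IoU-based soft-match F1 for grounding can be sketched as follows (greedy rather than optimal matching, and `(x1, y1, x2, y2)` box tuples, are simplifying assumptions for illustration):

```python
def box_iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def soft_f1_reward(pred_boxes, gt_boxes):
    """F1-style grounding reward with IoU soft matches (greedy matching)."""
    if not pred_boxes or not gt_boxes:
        return 0.0
    soft_tp, used = 0.0, set()
    for p in pred_boxes:
        candidates = [(box_iou(p, g), j) for j, g in enumerate(gt_boxes)
                      if j not in used]
        if candidates:
            best, j = max(candidates)
            if best > 0:
                soft_tp += best  # soft true positive: partial credit by IoU
                used.add(j)
    precision = soft_tp / len(pred_boxes)
    recall = soft_tp / len(gt_boxes)
    return 2 * precision * recall / (precision + recall) if soft_tp > 0 else 0.0
```

A perfect prediction scores 1.0, while a box covering a quarter of the target receives proportionally reduced credit rather than a hard zero.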
Generative Reward Models
Kimi K2 leverages a self-critique rubric reward for open-ended generation [53], and K2.5 extends this line of work by systematically deploying Generative Reward Models (GRMs) across a broad range of agentic behaviors and multimodal trajectories. Rather than limiting reward modeling to conversational outputs, we apply GRMs on top of verified reward signals in diverse environments, including chat assistants, coding agents, search agents, and artifact-generating agents. Notably, GRMs function not as binary adjudicators, but as fine-grained evaluators aligned with Kimi's values that are critical to user experiences, such as helpfulness, response readiness, contextual relevance, appropriate level of detail, aesthetic quality of generated artifacts, and strict instruction following. This design allows the reward signal to capture nuanced preference gradients that are difficult to encode with purely rule-based or task-specific verifiers. To mitigate reward hacking and overfitting to a single preference signal, we employ multiple alternative GRM rubrics tailored to different task contexts.
Token Efficient Reinforcement Learning
Token efficiency is central to LLMs with test-time scaling. While test-time scaling inherently trades computation for reasoning quality, practical gains require algorithmic innovations that actively navigate this trade-off. Our previous findings indicate that imposing a problem-dependent budget effectively constrains inference-time compute, incentivizing the model to generate more concise chain of thought reasoning patterns without unnecessary token expansion [30, 53]. However, we also observe a length-overfitting phenomenon: models trained under rigid budget constraints often fail to generalize to higher compute scales. Consequently, they cannot effectively leverage additional inference-time tokens to solve complex problems, instead defaulting to truncated reasoning patterns.
To this end, we propose Toggle, a training heuristic that alternates between inference-time scaling and budget-constrained optimization: for learning iteration $t$ , the reward function is defined by
| | $\displaystyle\tilde{r}(x,y)=\begin{cases}r(x,y)\cdot\mathbb{I}\left\{\frac{1}{K}\sum_{i=1}^{K}r(x,y_i)<\lambda\ \text{or}\ |y|\le\text{budget}(x)\right\}&\text{if }\lfloor t/m\rfloor\bmod 2=0\ (\text{Phase 0})\\ r(x,y)&\text{if }\lfloor t/m\rfloor\bmod 2=1\ (\text{Phase 1})\end{cases} .$ | |
| --- | --- | --- |
where $λ$ and $m$ are hyper-parameters of the algorithm and $K$ is the number of rollouts per problem. Specifically, the algorithm alternates between two optimization phases every $m$ iterations:
- Phase0 (budget-limited phase): The model is trained to solve the problem within a task-dependent token budget. To prevent a premature sacrifice of quality for efficiency, this constraint is conditionally applied: it is only enforced when the model's mean accuracy for a given problem exceeds the threshold $λ$.
- Phase1 (standard scaling phase): The model generates responses up to the maximum token limit, encouraging the model to leverage computation for better inference-time scaling.
The problem-dependent budget is estimated from the $\rho$-th percentile of token lengths among the subset of correct responses:
$$
\text{budget}(x)=\text{Percentile}\left(\{|y_i|\mid r(x,y_i)=1,\ i=1,\dots,K\},\rho\right) . \tag{2}
$$
This budget is estimated once at the beginning of training and remains fixed thereafter. Notably, Toggle functions as a stochastic alternating optimization for a bi-objective problem. It is specifically designed to reconcile reasoning capabilities with computational efficiency.
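The budget estimation of Eq. (2) and the two-phase reward can be sketched as follows ($λ=0.8$, $m=10$, and the median percentile are illustrative values, not the paper's):

```python
import numpy as np

def estimate_budget(lengths, correct, rho=50):
    """Eq. (2): rho-th percentile of token lengths over correct responses."""
    ok = [l for l, c in zip(lengths, correct) if c]
    return float(np.percentile(ok, rho)) if ok else float("inf")

def toggle_reward(r, length, budget, mean_acc, t, m=10, lam=0.8):
    """Phase-dependent Toggle reward for one response at iteration t.

    Phase 0 zeroes the reward for over-budget responses, but only once the
    problem is mostly solved (mean accuracy >= lam); Phase 1 rewards
    correctness regardless of length. Hyperparameters here are assumptions.
    """
    if (t // m) % 2 == 0:  # Phase 0: budget-limited
        return r if (mean_acc < lam or length <= budget) else 0.0
    return r               # Phase 1: standard scaling
```

The alternation means the model never trains exclusively under the rigid budget, which is what mitigates the length-overfitting failure mode described above.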
<details>
<summary>figures/te-k2-thinking-radar.png Details</summary>

### Visual Description
## [Dual Radar Chart]: Token Efficiency before and after Toggle across Benchmarks
### Overview
The image contains two side-by-side radar (spider) charts comparing system performance (left, in %) and token usage (right, in numerical values) **before** (gray squares) and **after** (blue/orange circles) a "Toggle" (configuration/model change) across seven benchmarks: *HMMT25_Feb, GPQADIAMOND, AIME2025, Overall, LiveCodeBenchV6, MMLUPro, HMMT25_Nov*.
### Components/Axes
#### Left Chart: Performance (%)
- **Axes**: Radial axes with benchmarks (HMMT25_Feb, GPQADIAMOND, AIME2025, Overall, LiveCodeBenchV6, MMLUPro, HMMT25_Nov) around the perimeter.
- **Legend**: "Data Points" (gray square = *Before Toggle*, blue circle = *After Toggle*).
- **Performance Ranges** (blue boxes):
- HMMT25_Feb: [85â95%]
- GPQADIAMOND: [80â90%]
- AIME2025: [90â100%]
- Overall: [80â90%]
- LiveCodeBenchV6: [80â90%]
- MMLUPro: [80â90%]
- HMMT25_Nov: [85â95%]
- **Change Indicators**: Green boxes (+X% = improvement) or red boxes (-X% = degradation) between *Before* and *After* points.
#### Right Chart: Token Usage
- **Axes**: Same benchmarks as the left chart.
- **Legend**: "Data Points" (gray square = *Before Toggle*, orange circle = *After Toggle*).
- **Token Ranges** (orange boxes):
- HMMT25_Feb: [20Kâ40K]
- GPQADIAMOND: [5Kâ15K]
- AIME2025: [20Kâ35K]
- Overall: [15Kâ25K]
- LiveCodeBenchV6: [20Kâ30K]
- MMLUPro: [1Kâ4K]
- HMMT25_Nov: [20Kâ35K]
- **Change Indicators**: Green boxes (-X = token reduction) between *Before* and *After* points.
#### Bottom Legends
- Left (Performance): *Improved: 5 | Degraded: 2* (blue box).
- Right (Token Usage): *Reduced: 7 | Increased: 0* (orange box).
- Overall: Green = *Improvement*, Red = *Degradation*.
### Detailed Analysis (Performance Chart)
| Benchmark | Before → After (Change) | Performance Range |
|-----------------|-------------------------|-------------------|
| HMMT25_Feb | +0.6% (improvement) | [85â95%] |
| GPQADIAMOND | -1.0% (degradation) | [80â90%] |
| AIME2025 | +1.1% (improvement) | [90â100%] |
| Overall | +0.3% (improvement) | [80â90%] |
| LiveCodeBenchV6 | +2.2% (improvement) | [80â90%] |
| MMLUPro | -2.0% (degradation) | [80â90%] |
| HMMT25_Nov | +0.8% (improvement) | [85â95%] |
### Detailed Analysis (Token Usage Chart)
| Benchmark | Before → After (Change) | Token Range |
|-----------------|-------------------------|-------------------|
| HMMT25_Feb | -7967 (reduction) | [20Kâ40K] |
| GPQADIAMOND | -4912 (reduction) | [5Kâ15K] |
| AIME2025 | -6179 (reduction) | [20Kâ35K] |
| Overall | -4791 (reduction) | [15Kâ25K] |
| LiveCodeBenchV6 | -745 (reduction) | [20Kâ30K] |
| MMLUPro | -817 (reduction) | [1Kâ4K] |
| HMMT25_Nov | -8127 (reduction) | [20Kâ35K] |
### Key Observations
- **Performance**: 5/7 benchmarks improved (green), 2 degraded (red: *GPQADIAMOND, MMLUPro*). Largest improvement: *LiveCodeBenchV6* (+2.2%); largest degradation: *MMLUPro* (-2.0%).
- **Token Usage**: All 7 benchmarks reduced tokens (green). Largest reduction: *HMMT25_Nov* (-8127); smallest: *LiveCodeBenchV6* (-745).
- **Correlation**: Most performance improvements align with token reductions, except *GPQADIAMOND* (performance degraded, tokens reduced) and *MMLUPro* (performance degraded, tokens reduced).
### Interpretation
The "Toggle" (configuration/model change) **improves efficiency** by reducing token usage across all benchmarks (7/7) and enhancing performance in most cases (5/7). The two performance degradations (*GPQADIAMOND, MMLUPro*) suggest the Toggle may not be optimal for all tasks, but token efficiency is consistently improved. This implies the Toggle optimizes resource usage (tokens) while maintaining or enhancing performance in most scenarios, making it a beneficial change for overall efficiency, though task-specific tuning may be needed for the two degraded benchmarks.
</details>
Figure 5: Comparison of model performance and token usage for Kimi K2 Thinking following token-efficient RL.
We evaluate the effectiveness of Toggle on K2 Thinking [1]. As shown in Figure 5, we observe a consistent reduction in output length across nearly all benchmarks. On average, Toggle decreases output tokens by 25–30% with a negligible impact on performance. We also observe that redundant patterns in the chain-of-thought, such as repeated verifications and mechanical calculations, decrease substantially. Furthermore, Toggle shows strong domain generalization: for example, when trained exclusively on mathematics and programming tasks, the model still achieves consistent token reductions on GPQA and MMLU-Pro with only marginal degradation in performance (Figure 5).
### 4.5 Training Infrastructure
Kimi K2.5 inherits the training infrastructure from Kimi K2 [53] with minimal modifications. For multimodal training, we propose Decoupled Encoder Process, where the vision encoder is incorporated into the existing pipeline with negligible additional overhead.
#### 4.5.1 Decoupled Encoder Process (DEP)
In a typical multimodal training paradigm utilizing Pipeline Parallelism (PP), the vision encoder and text embedding are co-located in the first stage of the pipeline (Stage-0). However, due to the inherent variation of multimodal input sizes (e.g., image counts and resolutions), Stage-0 suffers from drastic fluctuations in both computational load and memory usage. This forces existing solutions to adopt custom PP configurations for vision-language models: for instance, [54] manually adjusts the number of text decoder layers in Stage-0 to reserve memory. While this compromise alleviates memory pressure, it does not fundamentally resolve the load imbalance caused by multimodal input sizes. More critically, it precludes the direct reuse of parallel strategies that have been highly optimized for text-only training.
Leveraging the unique topological position of the visual encoder within the computation graph (specifically, its role as the start of the forward pass and the end of the backward pass), our training uses the Decoupled Encoder Process (DEP), which is composed of three stages in each training step:
- Balanced Vision Forward: We first execute the forward pass for all visual data in the global batch. Because the vision encoder is small, we replicate it on all GPUs regardless of other parallelism strategies. During this phase, the forward computational workload is evenly distributed across all GPUs based on load metrics (e.g., image or patch counts). This eliminates load-imbalance caused by PP and visual token counts. To minimize peak memory usage, we discard all intermediate activations, retaining only the final output activations. The results are gathered back to PP Stage-0;
- Backbone Training: This phase performs the forward and backward passes for the main transformer backbone. By discarding intermediate activations in the preceding phase, we can now fully leverage any efficient parallel strategies validated in pure text training. After this phase, gradients are accumulated at the visual encoder output;
- Vision Recomputation & Backward: We re-compute the vision encoder forward pass, followed by a backward pass to compute gradients for parameters in the vision encoder;
DEP not only achieves load-balance, but also decouples the optimization strategy of the vision encoder and the main backbone. K2.5 seamlessly inherits the parallel strategy of K2, achieving a multimodal training efficiency of 90% relative to text-only training. We note a concurrent work, LongCat-Flash-Omni [55], shares a similar design philosophy.
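A minimal sketch of the load balancing in the Balanced Vision Forward phase, assuming per-image patch counts as the load metric and a greedy longest-processing-time scheduler (the paper does not specify the scheduler at this level of detail):

```python
import heapq

def balance_vision_load(patch_counts, num_gpus):
    """Assign images to vision-encoder replicas, balancing total patch load.

    Greedy longest-processing-time heuristic: place the heaviest remaining
    image on the currently least-loaded GPU. Returns one list of image
    indices per GPU.
    """
    heap = [(0, g) for g in range(num_gpus)]  # (current load, gpu id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_gpus)]
    for idx in sorted(range(len(patch_counts)), key=lambda i: -patch_counts[i]):
        load, g = heapq.heappop(heap)
        assignment[g].append(idx)
        heapq.heappush(heap, (load + patch_counts[idx], g))
    return assignment
```

Because the encoder is replicated on every GPU, any such assignment is valid; the heuristic merely keeps the per-GPU patch totals close, so no replica stalls the gather back to PP Stage-0.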
## 5 Evaluations
### 5.1 Main Results
#### 5.1.1 Evaluation Settings
Benchmarks
We evaluate Kimi K2.5 on a comprehensive benchmark suite spanning text-based reasoning, competitive and agentic coding, multimodal understanding (image and video), autonomous agentic execution, and computer use. Our benchmark taxonomy is organized along the following capability axes:
- Reasoning & General: Humanityâs Last Exam (HLE) [46], AIME 2025 [40], HMMT 2025 (Feb) [58], IMO-AnswerBench [36], GPQA-Diamond [47], MMLU-Pro [64], SimpleQA Verified [21], AdvancedIF [22], and LongBench v2 [8].
- Coding: SWE-Bench Verified [28], SWE-Bench Pro (public) [15], SWE-Bench Multilingual [28], Terminal Bench 2.0 [38], PaperBench (CodeDev) [52], CyberGym [66], SciCode [56], OJBench (cpp) [65], and LiveCodeBench (v6) [27].
- Agentic Capabilities: BrowseComp [68], WideSearch [69], DeepSearchQA [60], FinSearchComp (T2&T3) [25], Seal-0 [45], and GDPVal [43].
- Image Understanding: (math & reasoning) MMMU-Pro [76], MMMU (val) [75], CharXiv (RQ) [67], MathVision [61], and MathVista (mini) [35]; (vision knowledge) SimpleVQA [12] and WorldVQA (https://github.com/MoonshotAI/WorldVQA); (perception) ZeroBench (w/ and w/o tools) [48], BabyVision [11], BLINK [17], and MMVP [57]; (OCR & document) OCRBench [34], OmniDocBench 1.5 [42], and InfoVQA [37].
- Video Understanding: VideoMMMU [24], MMVU [79], MotionBench [23], Video-MME [16] (with subtitles), LongVideoBench [70], and LVBench [62].
- Computer Use: OSWorld-Verified [72, 73], and WebArena [80].
Table 4: Performance comparison of Kimi K2.5 against open-source and proprietary models. Claude Opus 4.5, GPT-5.2 (xhigh), and Gemini 3 Pro are proprietary; DeepSeek-V3.2 and Qwen3-VL-235B-A22B are open-source. Bold denotes the global SOTA; data points marked with * are taken from our internal evaluations; † denotes scores on the text-only subset.
| Benchmark | Kimi K2.5 | Claude Opus 4.5 | GPT-5.2 (xhigh) | Gemini 3 Pro | DeepSeek-V3.2 | Qwen3-VL-235B-A22B |
| --- | --- | --- | --- | --- | --- | --- |
| Reasoning & General | | | | | | |
| HLE-Full | 30.1 | 30.8 | 34.5 | 37.5 | 25.1† | - |
| HLE-Full w/ tools | 50.2 | 43.2 | 45.5 | 45.8 | 40.8† | - |
| AIME 2025 | 96.1 | 92.8 | 100 | 95.0 | 93.1 | - |
| HMMT 2025 (Feb) | 95.4 | 92.9* | 99.4 | 97.3* | 92.5 | - |
| IMO-AnswerBench | 81.8 | 78.5* | 86.3 | 83.1* | 78.3 | - |
| GPQA-Diamond | 87.6 | 87.0 | 92.4 | 91.9 | 82.4 | - |
| MMLU-Pro | 87.1 | 89.3* | 86.7* | 90.1 | 85.0 | - |
| SimpleQA Verified | 36.9 | 44.1 | 38.9 | 72.1 | 27.5 | - |
| AdvancedIF | 75.6 | 63.1 | 81.1 | 74.7 | 58.8 | - |
| LongBench v2 | 61.0 | 64.4* | 54.5* | 68.2* | 59.8* | - |
| Coding | | | | | | |
| SWE-Bench Verified | 76.8 | 80.9 | 80.0 | 76.2 | 73.1 | - |
| SWE-Bench Pro (public) | 50.7 | 55.4* | 55.6 | - | - | - |
| SWE-Bench Multilingual | 73.0 | 77.5 | 72.0 | 65.0 | 70.2 | - |
| Terminal Bench 2.0 | 50.8 | 59.3 | 54.0 | 54.2 | 46.4 | - |
| PaperBench (CodeDev) | 63.5 | 72.9* | 63.7* | - | 47.1 | - |
| CyberGym | 41.3 | 50.6 | - | 39.9* | 17.3* | - |
| SciCode | 48.7 | 49.5 | 52.1 | 56.1 | 38.9 | - |
| OJBench (cpp) | 57.4 | 54.6* | - | 68.5* | 54.7* | - |
| LiveCodeBench (v6) | 85.0 | 82.2* | - | 87.4* | 83.3 | - |
| Agentic | | | | | | |
| BrowseComp | 60.6 | 37.0 | 65.8 | 37.8 | 51.4 | - |
| BrowseComp (w/ ctx manage) | 74.9 | 57.8 | - | 59.2 | 67.6 | - |
| BrowseComp (Agent Swarm) | 78.4 | - | - | - | - | - |
| WideSearch | 72.7 | 76.2* | - | 57.0 | 32.5* | - |
| WideSearch (Agent Swarm) | 79.0 | - | - | - | - | - |
| DeepSearchQA | 77.1 | 76.1* | 71.3* | 63.2* | 60.9* | - |
| FinSearchComp (T2&T3) | 67.8 | 66.2* | - | 49.9 | 59.1* | - |
| Seal-0 | 57.4 | 47.7* | 45.0 | 45.5* | 49.5* | - |
| GDPVal-AA | 41.0 | 45.0 | 48.0 | 35.0 | 34.0 | - |
| Image | | | | | | |
| MMMU-Pro | 78.5 | 74.0 | 79.5* | 81.0 | - | 69.3 |
| MMMU (val) | 84.3 | 80.7 | 86.7* | 87.5* | - | 80.6 |
| CharXiv (RQ) | 77.5 | 67.2* | 82.1 | 81.4 | - | 66.1 |
| MathVision | 84.2 | 77.1* | 83.0 | 86.1* | - | 74.6 |
| MathVista (mini) | 90.1 | 80.2* | 82.8* | 89.8* | - | 85.8 |
| SimpleVQA | 71.2 | 69.7* | 55.8* | 69.7* | - | 56.8* |
| WorldVQA | 46.3 | 36.8 | 28.0 | 47.4 | - | 23.5 |
| ZeroBench | 9 | 3* | 9* | 8* | - | 4* |
| ZeroBench w/ tools | 11 | 9* | 7* | 12* | - | 3* |
| BabyVision | 36.5 | 14.2 | 34.4 | 49.7 | - | 22.2 |
| BLINK | 78.9 | 68.8* | - | 78.7* | - | 68.9 |
| MMVP | 87.0 | 80.0* | 83.0* | 90.0* | - | 84.3 |
| OmniDocBench 1.5 | 88.8 | 87.7* | 85.7 | 88.5 | - | 82.0* |
| OCRBench | 92.3 | 86.5* | 80.7* | 90.3* | - | 87.5 |
| InfoVQA (test) | 92.6 | 76.9* | 84* | 57.2* | - | 89.5 |
| Video | | | | | | |
| VideoMMMU | 86.6 | 84.4* | 85.9 | 87.6 | - | 80.0 |
| MMVU | 80.4 | 77.3* | 80.8* | 77.5* | - | 71.1 |
| MotionBench | 70.4 | 60.3* | 64.8* | 70.3 | - | - |
| Video-MME | 87.4 | 77.6* | 86.0* | 88.4* | - | 79.0 |
| LongVideoBench | 79.8 | 67.2* | 76.5* | 77.7* | - | 65.6* |
| LVBench | 75.9 | 57.3 | - | 73.5* | - | 63.6 |
| Computer Use | | | | | | |
| OSWorld-Verified | 63.3 | 66.3 | 8.6* | 20.7* | - | 38.1 |
| WebArena | 58.9 | 63.4* | - | - | - | 26.4* |
Table 5: Performance and token efficiency of some reasoning models. Average output token counts (in thousands) are shown in parentheses.
| Benchmark | Kimi K2.5 | Kimi K2 Thinking | Gemini 3 Pro | DeepSeek-V3.2 Thinking |
| --- | --- | --- | --- | --- |
| AIME 2025 | 96.1 (25k) | 94.5 (30k) | 95.0 (15k) | 93.1 (16k) |
| HMMT Feb 2025 | 95.4 (27k) | 89.4 (35k) | 97.3 (16k) | 92.5 (19k) |
| HMMT Nov 2025 | 91.1 (24k) | 89.2 (32k) | 94.5 (15k) | 90.2 (18k) |
| IMO-AnswerBench | 81.8 (36k) | 78.6 (37k) | 83.1 (18k) | 78.3 (27k) |
| LiveCodeBench | 85.0 (18k) | 82.6 (25k) | 87.4 (13k) | 83.3 (16k) |
| GPQA Diamond | 87.6 (14k) | 84.5 (13k) | 91.9 (8k) | 82.4 (7k) |
| HLE-Text | 31.5 (24k) | 23.9 (29k) | 38.4 (13k) | 25.1 (21k) |
Baselines
We benchmark against state-of-the-art proprietary and open-source models. For proprietary models, we compare against Claude Opus 4.5 (with extended thinking) [4], GPT-5.2 (with xhigh reasoning effort) [41], and Gemini 3 Pro (with high reasoning-level) [19]. For open-source models, we include DeepSeek-V3.2 (with thinking mode enabled) [13] for text benchmarks, while vision benchmarks report Qwen3-VL-235B-A22B-Thinking [7] instead.
Evaluation Configurations
Unless otherwise specified, all Kimi K2.5 evaluations use temperature = 1.0, top-p = 0.95, and a context length of 256k tokens. Benchmarks without publicly available scores were re-evaluated under identical conditions and marked with an asterisk (*). The full evaluation settings can be found in appendix E.
#### 5.1.2 Evaluation Results
Comprehensive results comparing Kimi K2.5 against proprietary and open-source baselines are presented in Table 4. We highlight key observations across core capability domains:
Reasoning and General
Kimi K2.5 achieves competitive performance with top-tier proprietary models on rigorous STEM benchmarks. On AIME 2025, K2.5 scores 96.1%, approaching GPT-5.2's perfect score while outperforming Claude Opus 4.5 (92.8%) and Gemini 3 Pro (95.0%). This strong performance extends to HMMT 2025 (95.4%) and IMO-AnswerBench (81.8%), demonstrating K2.5's reasoning depth. Kimi K2.5 also exhibits strong knowledge and scientific reasoning capabilities, scoring 36.9% on SimpleQA Verified, 87.1% on MMLU-Pro, and 87.6% on GPQA-Diamond. Notably, on HLE without the use of tools, K2.5 achieves an HLE-Full score of 30.1%, with component-wise scores of 31.5% on the text subset and 21.3% on the image subset. When tool use is enabled, K2.5's HLE-Full score rises to 50.2%, with 51.8% (text) and 39.8% (image), significantly outperforming Gemini 3 Pro (45.8%) and GPT-5.2 (45.5%). Beyond reasoning and knowledge, K2.5 shows strong instruction-following performance (75.6% on AdvancedIF) and competitive long-context ability relative to both proprietary and open-source models, achieving 61.0% on LongBench v2.
Complex Coding and Software Engineering
Kimi K2.5 exhibits strong software engineering capabilities, especially on realistic coding and maintenance tasks. It achieves 76.8% on SWE-Bench Verified and 73.0% on SWE-Bench Multilingual, outperforming Gemini 3 Pro while remaining competitive with Claude Opus 4.5 and GPT-5.2. On LiveCodeBench v6, Kimi K2.5 reaches 85.0%, surpassing DeepSeek-V3.2 (83.3%) and Claude Opus 4.5 (82.2%), highlighting its robustness on live, continuously updated coding challenges. On Terminal Bench 2.0, PaperBench, and SciCode, it scores 50.8%, 63.5%, and 48.7% respectively, demonstrating stable competition-level performance in automated software engineering and problem solving across diverse domains. In addition, K2.5 attains 41.3% on CyberGym, which tasks models with finding previously discovered vulnerabilities in real open-source software projects given only a high-level description of the weakness, further underscoring its effectiveness in security-oriented software analysis.
Agentic Capabilities
Kimi K2.5 establishes new state-of-the-art performance on complex agentic search and browsing tasks. On BrowseComp, K2.5 achieves 60.6% without context management techniques and 74.9% with Discard-all context management [13], substantially outperforming GPT-5.2's reported 65.8%, Claude Opus 4.5 (37.0%), and Gemini 3 Pro (37.8%). Similarly, K2.5 reaches 72.7% Item-F1 on WideSearch. On DeepSearchQA (77.1%), FinSearchComp T2&T3 (67.8%), and Seal-0 (57.4%), K2.5 leads all evaluated models, demonstrating superior capacity for agentic deep research, information synthesis, and multi-step tool orchestration.
Vision Reasoning, Knowledge and Perception
Kimi K2.5 demonstrates strong visual reasoning and world knowledge capabilities. It scores 78.5% on MMMU-Pro, spanning multi-disciplinary multimodal tasks. For world-knowledge question answering, K2.5 achieves 71.2% on SimpleVQA and 46.3% on WorldVQA. For visual reasoning, it achieves 84.2% on MathVision, 90.1% on MathVista (mini), and 36.5% on BabyVision. For OCR and document understanding, K2.5 delivers outstanding results with 77.5% on CharXiv (RQ), 92.3% on OCRBench, 88.8% on OmniDocBench 1.5, and 92.6% on InfoVQA (test). On the challenging ZeroBench, Kimi K2.5 achieves 9% without tools and 11% with tool augmentation, ahead of most competing models. On the basic visual perception benchmarks BLINK (78.9%) and MMVP (87.0%), Kimi K2.5 is likewise competitive, demonstrating robust real-world visual perception.
Video Understanding
Kimi K2.5 achieves state-of-the-art performance across diverse video understanding tasks. It attains 86.6% on VideoMMMU and 80.4% on MMVU, rivaling frontier models. With the context-compression and dense temporal understanding abilities of MoonViT-3D, Kimi K2.5 also establishes new global SOTA records in long-video comprehension, reaching 75.9% on LVBench and 79.8% on LongVideoBench by feeding over 2,000 frames, while demonstrating robust dense-motion understanding at 70.4% on MotionBench.
Computer-Use Capability
Kimi K2.5 demonstrates state-of-the-art computer-use capability on real-world tasks. On the computer-use benchmark OSWorld-Verified [72, 73], it achieves a 63.3% success rate relying solely on GUI actions without external tools. This substantially outperforms open-source models such as Qwen3-VL-235B-A22B (38.1%) and OpenAI's computer-use agent framework Operator (o3-based) (42.9%), while remaining competitive with the current leading CUA model, Claude Opus 4.5 (66.3%). On WebArena [80], an established benchmark for GUI-based web browsing, Kimi K2.5 achieves a 58.9% success rate, surpassing OpenAI's Operator (58.1%) and approaching the performance of Claude Opus 4.5 (63.4%).
### 5.2 Agent Swarm Results
Benchmarks
To rigorously evaluate the effectiveness of the agent swarm framework, we select three representative benchmarks that collectively cover deep reasoning, large-scale retrieval, and real-world complexity:
- BrowseComp: A challenging deep-research benchmark that requires multi-step reasoning and complex information synthesis.
- WideSearch: A benchmark designed to evaluate the ability to perform broad, multi-step information seeking and reasoning across diverse sources.
- In-house Swarm Bench: An internally developed Swarm benchmark, designed to evaluate the agent swarm performance under real-world, high-complexity conditions. It covers four domains: WildSearch (unconstrained, real-world information retrieval over the open web), Batch Download (large-scale acquisition of diverse resources), WideRead (large-scale document comprehension involving more than 100 input documents), and Long-Form Writing (coherent generation of extensive content exceeding 100k words). This benchmark incorporates extreme-scale scenarios that stress-test the orchestration, scalability, and coordination capabilities of agent-based systems.
Table 6: Performance comparison of Kimi K2.5 Agent Swarm against single-agent and proprietary baselines on agentic search benchmarks. Bold denotes the best result per benchmark.
| Benchmark | K2.5 Agent Swarm | Kimi K2.5 | Claude Opus 4.5 | GPT-5.2 | GPT-5.2 Pro |
| --- | --- | --- | --- | --- | --- |
| BrowseComp | 78.4 | 60.6 | 37.0 | 65.8 | 77.9 |
| WideSearch | 79.0 | 72.7 | 76.2 | - | - |
| In-house Swarm Bench | 58.3 | 41.6 | 45.8 | - | - |
Performance
Table 6 presents the performance of Kimi K2.5 Agent Swarm against single-agent configurations and proprietary baselines. The results demonstrate substantial performance improvements from multi-agent orchestration. On BrowseComp, Agent Swarm achieves 78.4%, a 17.8-point absolute gain over the single-agent K2.5 (60.6%) that surpasses even GPT-5.2 Pro (77.9%). Similarly, WideSearch Item-F1 improves by 6.3 points (72.7% → 79.0%), enabling K2.5 Agent Swarm to outperform Claude Opus 4.5 (76.2%) and establish a new state-of-the-art. The gains are most pronounced on the In-house Swarm Bench (+16.7 points), where tasks are explicitly designed to reward parallel decomposition. These consistent improvements across benchmarks validate that Agent Swarm effectively converts computational parallelism into qualitative capability gains, particularly for problems requiring broad exploration, multi-source verification, or simultaneous handling of independent sub-tasks.
<details>
<summary>x4.png Details</summary>

### Visual Description
## Word Cloud: Researcher and Specialist Roles
### Overview
The image is a word cloud composed of numerous job titles, roles, and specializations related to research, verification, and analysis. The terms are presented in varying font sizes and shades of blue against a white background, with the size of each term likely indicating its relative importance, frequency, or prominence within the dataset used to generate the cloud.
### Components/Axes
* **Type:** Word Cloud (Text Visualization)
* **Primary Language:** English
* **Visual Structure:** Terms are arranged in a dense, overlapping cluster without a formal axis or legend. The spatial arrangement is organic, with larger terms generally occupying the central and more prominent positions.
* **Color Scheme:** All text is in shades of blue, ranging from dark navy to light sky blue. The color does not appear to encode a separate data dimension but contributes to the visual hierarchy alongside font size.
### Detailed Analysis
The word cloud contains a comprehensive list of specialized research roles. Below is a transcription of the visible terms, grouped by approximate visual prominence (from largest to smallest).
**Most Prominent (Largest Font Size):**
* Biography Researcher (Dark blue, center)
* Verification Specialist (Dark blue, center-right)
* Historical Researcher (Dark blue, upper-center)
* Verification Researcher (Dark blue, center)
* Timeline Researcher (Dark blue, right)
* Cross Reference Analyst (Dark blue, lower-center)
* University Researcher (Dark blue, lower-center)
* Book Researcher (Dark blue, lower-right)
* Article Researcher (Dark blue, bottom-center)
**High Prominence (Medium-Large Font):**
* Award Researcher (Blue, upper-center)
* Publication Researcher (Blue, left)
* Thesis Researcher (Blue, left)
* Academic Researcher (Blue, center)
* Biography Investigator (Blue, lower-left)
* Article Finder (Blue, bottom)
* Cross Reference Investigator (Blue, right)
* Location Researcher (Blue, left)
* Data Verifier (Blue, left)
**Moderate/Lower Prominence (Smaller Font Sizes - Partial List):**
* Cross Reference Specialist
* Timeline Investigator
* Biographical Researcher
* Education Researcher
* Timeline Analyst
* Biography Analyst
* Verification Agent
* Film Researcher
* Author Investigator
* Literary Researcher
* University Investigator
* Publication Investigator
* Music Biography Researcher
* Media Researcher
* Country Identifier
* Location Specialist
* Writer Identifier
* Article Searcher
* Comprehensive Researcher
* Game Identifier
* Event Researcher
* Company Identifier
* Award Investigator
* Director Researcher
* Paper Searcher
* Blog Researcher
* Sports Investigator
* Relation Identifier
* Statistics Manager
* Interview Researcher
* Paper Searcher 2
* Verifier 1, Verifier 2, Verifier 3
* Writer Sec01, Writer Sec02, Writer Sec03, Writer Sec04
* Researcher Sec01, Researcher Sec02, Researcher Sec03, Researcher Sec04
### Key Observations
1. **Thematic Clustering:** The cloud is heavily dominated by terms combining "Researcher," "Analyst," "Investigator," "Verifier," or "Specialist" with a specific domain (e.g., Biography, Historical, Timeline, Cross Reference, Article, University).
2. **Core Focus Areas:** The largest terms suggest a primary focus on **verification**, **biography**, **historical context**, and **cross-referencing**. "Biography Researcher" and "Verification Specialist" are the two most visually dominant elements.
3. **Hierarchy of Information:** The visual hierarchy (size) implies a ranking where generalist or highly critical roles (like Verification Specialist) are more prominent than very niche identifiers (like "Country Identifier" or "Game Identifier").
4. **Redundancy and Specificity:** There is significant redundancy (e.g., "Biography Researcher," "Biography Investigator," "Biographical Researcher") and a high degree of specificity in the roles listed, indicating a complex ecosystem of research tasks.
### Interpretation
This word cloud visually represents the multifaceted and specialized nature of modern research, particularly in fields requiring high levels of accuracy and source validation. The prominence of "Verification" and "Cross Reference" roles underscores the critical importance of fact-checking and data integrity in information processing. The central role of "Biography Researcher" suggests that compiling and validating personal histories is a core activity within this context.
The cloud does not provide quantitative data but offers a qualitative map of the professional landscape. It suggests that a comprehensive research project might involve a team with these diverse, specialized roles, moving from broad "Historical Research" and "Article Research" to specific "Verification" and "Cross Reference Analysis" to ensure the final output is accurate and well-sourced. The inclusion of very specific identifiers (e.g., "Writer Sec01") hints at a structured, possibly large-scale or institutional, research workflow where tasks are finely divided.
</details>
Figure 6: The word cloud visualizes heterogeneous K2.5-based sub-agents dynamically instantiated by the Orchestrator across tests.
<details>
<summary>x5.png Details</summary>

### Visual Description
## Scatter Plot: Performance vs Step
### Overview
The image is a scatter plot comparing the performance of two different methods, "Discard-all Context Management" and "Agent Swarm," as a function of steps plotted on a logarithmic scale. The chart demonstrates how the performance metric (in percentage) evolves for each method as the number of steps increases from 100 (10²) to 1000 (10³).
### Components/Axes
* **Title:** "Performance vs Step"
* **Y-Axis:**
* **Label:** "Performance"
* **Scale:** Linear, ranging from 40.0% to 80.0%.
* **Major Tick Marks:** 40.0%, 45.0%, 50.0%, 55.0%, 60.0%, 65.0%, 70.0%, 75.0%, 80.0%.
* **X-Axis:**
* **Label:** "log(steps)"
* **Scale:** Logarithmic (base 10).
* **Major Tick Mark Labels:** 10² (100) and 10³ (1000).
* **Legend:**
* **Position:** Bottom-right corner of the plot area.
* **Entry 1:** Blue circle marker, labeled "Discard-all Context Management".
* **Entry 2:** Red circle marker, labeled "Agent Swarm".
* **Plot Area:** Contains a grid of light gray dashed lines aligned with the major y-axis ticks.
### Detailed Analysis
The plot displays two distinct data series, each represented by a set of colored circles. The trend for each series is described before listing approximate data points.
**1. Discard-all Context Management (Blue Series)**
* **Trend:** The blue data points show a steady, monotonic upward trend that begins to plateau at higher step counts. The rate of improvement slows as steps increase.
* **Approximate Data Points (Step, Performance):**
* (100, ~60.0%)
* (~125, ~61.5%)
* (~160, ~62.5%)
* (~200, ~63.5%)
* (~250, ~64.5%)
* (~315, ~66.0%)
* (~400, ~67.0%)
* (~500, ~68.0%)
* (~630, ~69.0%)
* (~795, ~70.0%)
* (~1000, ~75.0%) - *Note: The final point shows a significant jump, potentially indicating a measurement at exactly 1000 steps.*
**2. Agent Swarm (Red Series)**
* **Trend:** The red data points show a steep, monotonic upward trend that surpasses the blue series after an initial lower starting point. It also shows signs of plateauing but at a higher performance level.
* **Approximate Data Points (Step, Performance):**
* (100, ~47.5%) - *Notable outlier, significantly lower than the blue series at the same step.*
* (~125, ~58.0%)
* (~160, ~65.5%)
* (~200, ~70.0%)
* (~250, ~73.0%)
* (~315, ~74.0%)
* (~400, ~75.0%)
* (~500, ~76.0%)
* (~630, ~77.0%)
* (~795, ~78.0%)
* (~1000, ~78.5%)
### Key Observations
1. **Performance Crossover:** The "Agent Swarm" method starts with lower performance than "Discard-all Context Management" at 100 steps but overtakes it before 200 steps and maintains a consistent lead thereafter.
2. **Diminishing Returns:** Both methods exhibit diminishing returns; the performance gain per unit of step (on a log scale) decreases as the total number of steps increases.
3. **Final Performance Gap:** At 1000 steps, "Agent Swarm" achieves a performance of approximately 78.5%, while "Discard-all Context Management" reaches approximately 75.0%, resulting in a ~3.5 percentage point advantage for Agent Swarm.
4. **Initial Anomaly:** The first data point for "Agent Swarm" is a clear outlier, suggesting a possible warm-up period or different initial conditions before rapid improvement begins.
### Interpretation
This chart provides a comparative analysis of learning efficiency and final capability between two context management strategies in an AI or machine learning system.
* **What the data suggests:** The "Agent Swarm" approach demonstrates superior long-term learning efficiency. While it may have a slower or less effective start (as seen at 100 steps), its rate of improvement is greater, allowing it to quickly surpass the baseline "Discard-all" method and achieve a higher asymptotic performance. This suggests that the "Agent Swarm" strategy is more effective at leveraging additional training steps to refine its performance.
* **How elements relate:** The logarithmic x-axis is crucial for interpreting the relationship. It compresses the later stages of training, highlighting that the most significant performance gains for both methods occur in the earlier phases (between 100 and ~400 steps). The consistent vertical separation between the red and blue dots after the crossover point visually quantifies the persistent advantage of the Agent Swarm method.
* **Notable implications:** The plateauing of both curves indicates that simply adding more steps beyond 1000 may yield only marginal improvements for these specific configurations. The initial low performance of Agent Swarm could be a critical factor for applications with strict early-stage performance requirements, whereas its higher final performance makes it preferable for scenarios where maximum capability is the goal and longer training is feasible. The data argues that "Agent Swarm" is a more scalable and ultimately more powerful strategy, albeit with a potential cost in early training efficiency.
</details>
Figure 7: Comparison of Kimi K2.5 performance under Agent Swarm and Discard-all context management in BrowseComp.
<details>
<summary>x6.png Details</summary>

### Visual Description
## Scatter Plot: Execution Time to Achieve a Target Item-F1
### Overview
This is a scatter plot comparing the execution time required for two different methods ("Agent Swarm" and "Single Agent") to achieve various target performance levels, measured by "Target Item-F1". The chart demonstrates a significant performance advantage for the "Agent Swarm" method, especially at higher target performance levels.
### Components/Axes
* **Chart Title:** "Execution Time to Achieve a Target Item-F1"
* **X-Axis:** "Target Item-F1". Scale is linear, marked from 30.0% to 70.0% in increments of 5.0%.
* **Y-Axis:** "Execution Time". Scale is linear, marked from 0x to 8.0x in increments of 1.0x. The unit "x" likely represents a multiple of some baseline time.
* **Legend:** Located in the top-left corner of the plot area.
* Blue circle (●): "Agent Swarm"
* Red square (■): "Single Agent"
* **Annotations:** Vertical dashed lines with text annotations ("save x3.0", "save x3.2", etc.) connect specific data points between the two series, highlighting the time savings.
### Detailed Analysis
**Data Series & Trends:**
1. **Single Agent (Red Squares):** The data points show a strong, approximately linear upward trend. As the Target Item-F1 increases, the execution time increases steeply.
* **Trend Verification:** The red squares form a line that slopes sharply upward from left to right.
* **Data Points (Approximate):**
* (30.0%, ~1.8x)
* (32.5%, ~2.0x)
* (35.0%, ~2.0x)
* (37.5%, ~2.1x)
* (40.0%, ~2.4x)
* (42.5%, ~2.4x)
* (45.0%, ~2.8x)
* (47.5%, ~3.0x)
* (50.0%, ~3.2x)
* (52.5%, ~3.4x)
* (55.0%, ~3.8x)
* (57.5%, ~4.0x)
* (60.0%, ~4.4x)
* (62.5%, ~4.6x)
* (65.0%, ~5.2x)
* (67.5%, ~6.4x)
* (70.0%, ~7.2x)
2. **Agent Swarm (Blue Circles):** The data points show a much gentler, slightly upward trend. Execution time increases only modestly as the target performance increases.
* **Trend Verification:** The blue circles form a line that slopes gently upward from left to right.
* **Data Points (Approximate):**
* (30.0%, ~0.6x)
* (32.5%, ~0.6x)
* (35.0%, ~0.8x)
* (37.5%, ~0.8x)
* (40.0%, ~0.8x)
* (42.5%, ~0.8x)
* (45.0%, ~0.8x)
* (47.5%, ~1.0x)
* (50.0%, ~1.0x)
* (52.5%, ~1.0x)
* (55.0%, ~1.0x)
* (57.5%, ~1.2x)
* (60.0%, ~1.2x)
* (62.5%, ~1.4x)
* (65.0%, ~1.4x)
* (67.5%, ~1.4x)
* (70.0%, ~1.6x)
**Annotations (Time Savings):**
The chart explicitly calculates the performance gap at several points:
* At ~30% Target Item-F1: "save x3.0" (Single Agent ~1.8x vs. Agent Swarm ~0.6x)
* At 40% Target Item-F1: "save x3.0" (Single Agent ~2.4x vs. Agent Swarm ~0.8x)
* At 50% Target Item-F1: "save x3.2" (Single Agent ~3.2x vs. Agent Swarm ~1.0x)
* At 60% Target Item-F1: "save x3.7" (Single Agent ~4.4x vs. Agent Swarm ~1.2x)
* At 70% Target Item-F1: "save x4.5" (Single Agent ~7.2x vs. Agent Swarm ~1.6x)
### Key Observations
1. **Diverging Performance:** The performance gap between the two methods widens dramatically as the target difficulty (Target Item-F1) increases. The "save" factor grows from x3.0 to x4.5.
2. **Scalability:** The "Agent Swarm" method exhibits far better scalability. Its execution time grows slowly and sub-linearly with the target, while the "Single Agent" time grows steeply and linearly.
3. **Consistency:** The "Agent Swarm" data points are tightly clustered along a smooth curve, suggesting predictable performance. The "Single Agent" points are also consistent along their steeper trend line.
4. **Outliers:** There are no apparent outliers; all data points follow their respective trends closely.
### Interpretation
This chart provides strong empirical evidence for the superior efficiency of a multi-agent ("Agent Swarm") approach over a single-agent approach for the task of achieving a target Item-F1 score. The "Item-F1" metric is common in information retrieval and natural language processing, suggesting this could be a machine learning or search task.
The key takeaway is not just that the Agent Swarm is faster, but that its **relative advantage increases with the task's difficulty**. Achieving a high-performance target (e.g., 70% F1) is prohibitively expensive in time for a single agent (7.2x baseline), while a swarm can accomplish it in a fraction of that time (1.6x baseline). This suggests that for complex, high-precision tasks, parallelizing the work across multiple agents is a highly effective strategy. The "save" annotations serve as a direct, compelling metric for this advantage, making the chart an effective tool for advocating for the Agent Swarm methodology.
</details>
Figure 8: Agent Swarm achieves 3× to 4.5× faster execution time compared to single-agent baselines as the target Item-F1 increases from 30% to 70% in WideSearch testing.
Execution Time Savings via Parallelism
Beyond improved task performance, Agent Swarm achieves substantial wall-clock time reductions through parallel subagent execution. On the WideSearch benchmark, it reduces the execution time required to reach target performance by 3× to 4.5× compared to a single-agent baseline. As shown in Figure 8, this efficiency gain scales with task complexity: as the target Item-F1 increases from 30% to 70%, the single agent's execution time grows from approximately 1.8× to over 7.0× the baseline, whereas Agent Swarm maintains near-constant low latency in the range of 0.6× to 1.6×. These results indicate that Agent Swarm effectively transforms sequential tool invocations into parallel operations, preventing the near-linear growth in completion time typically observed as task difficulty increases.
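The latency argument above can be illustrated with a toy sketch (subtask timings are invented for the example): sequential execution pays the sum of subtask durations, while parallel execution pays roughly the maximum.

```python
# Toy illustration of parallel vs. sequential subagent execution.
# Real subtasks would be tool-call trajectories; here they are sleeps.
from concurrent.futures import ThreadPoolExecutor
import time

def run_subtask(duration):
    time.sleep(duration)   # stands in for browsing / tool calls
    return duration

durations = [0.05, 0.05, 0.05, 0.05]  # four independent subtasks

start = time.perf_counter()
for d in durations:                   # single agent: strictly sequential
    run_subtask(d)
sequential = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(durations)) as pool:
    list(pool.map(run_subtask, durations))  # swarm: all subtasks at once
parallel = time.perf_counter() - start
```

The speedup grows with the number of independent subtasks, matching the observation that the swarm's advantage widens as tasks demand broader exploration.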
Dynamic Subagent Creation and Scheduling
Within an agent swarm, subagents are dynamically instantiated rather than pre-defined. Through PARL, the orchestrator learns adaptive policies to create and schedule self-hosted subagents in response to evolving task structures and problem states. Unlike static decomposition approaches, this learned policy enables the Orchestrator to reason about the requisite number, timing, and specialization of subagents based on the query. Consequently, a heterogeneous agent group emerges organically from this adaptive allocation strategy (Figure 6).
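A minimal sketch of dynamic subagent instantiation follows. The roles and the keyword heuristic are illustrative stand-ins for the learned PARL policy; only the role names echo those in Figure 6.

```python
# Hypothetical sketch: an orchestrator inspects the query and spawns
# role-specialized subagents on the fly, rather than using a fixed team.
from dataclasses import dataclass

@dataclass
class Subagent:
    role: str      # e.g. "Biography Researcher", "Verification Specialist"
    subtask: str

def plan_swarm(query):
    """Return dynamically chosen subagents; a trained orchestrator would
    decide number, timing, and specialization with a learned policy."""
    agents = [Subagent("Researcher", f"gather sources for: {query}")]
    q = query.lower()
    if "who" in q or "biography" in q:
        agents.append(Subagent("Biography Researcher",
                               f"profile the subject in: {query}"))
    if "when" in q or "timeline" in q:
        agents.append(Subagent("Timeline Researcher",
                               f"order the events in: {query}"))
    # A verifier is always spawned last to cross-check the findings.
    agents.append(Subagent("Verification Specialist",
                           "cross-check all findings"))
    return agents

swarm = plan_swarm("Who founded the lab, and when did it publish its first paper?")
roles = [a.role for a in swarm]
```

In the actual system this decision is made mid-trajectory as the problem state evolves, not once up front as in this static sketch.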
Agent Swarm as Proactive Context Management
Beyond better performance and runtime acceleration, an agent swarm constitutes a form of proactive, intelligent context management enabled by the multi-agent architecture [6]. This approach differs from test-time context truncation strategies such as Hide-Tool-Result [2], Summary [71], or Discard-all [13], which react to context overflow by compressing or discarding accumulated histories. While effective at reducing token usage, these methods are inherently reactive and often sacrifice structural information or intermediate reasoning.
In contrast, Agent Swarm enables proactive context control through explicit orchestration. Long-horizon tasks are decomposed into parallel, semantically isolated subtasks, each executed by a specialized subagent with a bounded local context. Crucially, these subagents maintain independent working memories and perform local reasoning without directly mutating or contaminating the global context of the central orchestrator. Only task-relevant outputs, rather than full interaction traces, are selectively routed back to the orchestrator. This design induces context sharding rather than context truncation, allowing the system to scale effective context length along an additional architectural dimension while preserving modularity, information locality, and reasoning integrity.
As shown in Figure 7, this proactive strategy outperforms Discard-all in both efficiency and accuracy on BrowseComp. By preserving task-level coherence at the orchestrator level while keeping subagent contexts tightly bounded, Agent Swarm enables parallel execution with selective context persistence, retaining only high-level coordination signals or essential intermediate results. Consequently, Agent Swarm operates as an active, structured context manager, achieving higher accuracy with substantially fewer critical steps than uniform context truncation.
## 6 Conclusions
Kimi K2.5 shows that scalable and general agentic intelligence can be achieved through joint optimization of text and vision together with parallel agent execution. By unifying language and vision across pre-training and reinforcement learning, the model achieves strong cross-modal alignment and visual-text reasoning. Agent Swarm enables concurrent execution of heterogeneous sub-tasks, reducing inference latency while improving performance on complex agentic workloads. Grounded in vision-text intelligence and agent swarms, Kimi K2.5 demonstrates strong performance on benchmarks and real-world tasks. By open-sourcing the post-trained checkpoints, we aim to support the open-source community in building scalable and general-purpose agentic systems and to accelerate progress toward General Agentic Intelligence.
## References
- [1] Moonshot AI (2025) Introducing kimi k2 thinking. External Links: Link Cited by: §1, §1, §4.4.2.
- [2] Moonshot AI (2025) Kimi-researcher: end-to-end rl training for emerging agentic capabilities. External Links: Link Cited by: §1, §5.2.
- [3] Amazon Web Services (2023) Amazon simple storage service (amazon s3). Note: Available at: https://aws.amazon.com/s3/ External Links: Link Cited by: §C.1.
- [4] Anthropic (2025) Claude opus 4.5 system card. External Links: Link Cited by: §E.7, §1, §3, §5.1.1.
- [5] Anthropic (2025) How we built our multi-agent research system. External Links: Link Cited by: §3.
- [6] Anthropic (2026) Building multi-agent systems: when and how to use them. External Links: Link Cited by: §3, §5.2.
- [7] S. Bai, Y. Cai, R. Chen, K. Chen, X. Chen, et al. (2025) Qwen3-vl technical report. External Links: 2511.21631, Link Cited by: §1, §2.1, §2.2, §5.1.1.
- [8] Y. Bai, S. Tu, J. Zhang, H. Peng, X. Wang, X. Lv, S. Cao, J. Xu, L. Hou, Y. Dong, J. Tang, and J. Li (2025) LongBench v2: towards deeper understanding and reasoning on realistic long-context multitasks. External Links: 2412.15204, Link Cited by: §E.3, 1st item.
- [9] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba (2016) OpenAI gym. External Links: 1606.01540, Link Cited by: Appendix D.
- [10] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei (2020) Language models are few-shot learners. External Links: 2005.14165, Link Cited by: §B.3.
- [11] L. Chen, W. Xie, Y. Liang, H. He, H. Zhao, Z. Yang, Z. Huang, H. Wu, H. Lu, Y. charles, Y. Bao, Y. Fan, G. Li, H. Shen, X. Chen, W. Xu, S. Si, Z. Cai, W. Chai, Z. Huang, F. Liu, T. Liu, B. Chang, X. Hu, K. Chen, Y. Ren, Y. Liu, Y. Gong, and K. Li (2026) BabyVision: visual reasoning beyond language. External Links: 2601.06521, Link Cited by: 4th item.
- [12] X. Cheng, W. Zhang, S. Zhang, J. Yang, X. Guan, X. Wu, X. Li, G. Zhang, J. Liu, Y. Mai, Y. Zeng, Z. Wen, K. Jin, B. Wang, W. Zhou, Y. Lu, T. Li, W. Huang, and Z. Li (2025) SimpleVQA: multimodal factuality evaluation for multimodal large language models. External Links: 2502.13059, Link Cited by: 4th item.
- [13] DeepSeek-AI, A. Liu, A. Mei, B. Lin, B. Xue, B. Wang, et al. (2025) DeepSeek-v3.2: pushing the frontier of open large language models. External Links: 2512.02556, Link Cited by: §5.1.1, §5.1.2, §5.2.
- [14] M. Dehghani, B. Mustafa, J. Djolonga, J. Heek, M. Minderer, M. Caron, A. Steiner, J. Puigcerver, R. Geirhos, I. Alabdulmohsin, A. Oliver, P. Padlewski, A. Gritsenko, M. Lučić, and N. Houlsby (2023) Patch n' pack: navit, a vision transformer for any aspect ratio and resolution. External Links: 2307.06304, Link Cited by: §1, §4.2.
- [15] X. Deng, J. Da, E. Pan, Y. Y. He, C. Ide, K. Garg, N. Lauffer, A. Park, N. Pasari, C. Rane, et al. (2025) SWE-bench pro: can ai agents solve long-horizon software engineering tasks?. arXiv preprint arXiv:2509.16941. Cited by: 2nd item.
- [16] C. Fu, Y. Dai, Y. Luo, L. Li, S. Ren, R. Zhang, Z. Wang, C. Zhou, Y. Shen, M. Zhang, P. Chen, Y. Li, S. Lin, S. Zhao, K. Li, T. Xu, X. Zheng, E. Chen, C. Shan, R. He, and X. Sun (2025) Video-mme: the first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. External Links: 2405.21075, Link Cited by: 5th item.
- [17] X. Fu, Y. Hu, B. Li, Y. Feng, H. Wang, X. Lin, D. Roth, N. A. Smith, W. Ma, and R. Krishna (2024) BLINK: multimodal large language models can see but not perceive. External Links: 2404.12390, Link Cited by: 4th item.
- [18] S. Y. Gadre, G. Ilharco, A. Fang, J. Hayase, G. Smyrnis, T. Nguyen, R. Marten, M. Wortsman, D. Ghosh, J. Zhang, et al. (2024) Datacomp: in search of the next generation of multimodal datasets. Advances in Neural Information Processing Systems 36. Cited by: §B.3.
- [19] Google (2025) Gemini 3 pro. External Links: Link Cited by: §1, §5.1.1.
- [20] D. Guo, F. Wu, F. Zhu, F. Leng, G. Shi, et al. (2025) Seed1.5-vl technical report. External Links: 2505.07062, Link Cited by: §1, §2.1.
- [21] L. Haas, G. Yona, G. D'Antonio, S. Goldshtein, and D. Das (2025) SimpleQA verified: a reliable factuality benchmark to measure parametric knowledge. External Links: 2509.07968, Link Cited by: 1st item.
- [22] Y. He, W. Li, H. Zhang, S. Li, K. Mandyam, S. Khosla, Y. Xiong, N. Wang, X. Peng, B. Li, S. Bi, S. G. Patil, Q. Qi, S. Feng, J. Katz-Samuels, R. Y. Pang, S. Gonugondla, H. Lang, Y. Yu, Y. Qian, M. Fazel-Zarandi, L. Yu, A. Benhalloum, H. Awadalla, and M. Faruqui (2025) AdvancedIF: rubric-based benchmarking and reinforcement learning for advancing llm instruction following. External Links: 2511.10507, Link Cited by: 1st item.
- [23] W. Hong, Y. Cheng, Z. Yang, W. Wang, L. Wang, X. Gu, S. Huang, Y. Dong, and J. Tang (2025) MotionBench: benchmarking and improving fine-grained video motion understanding for vision language models. External Links: 2501.02955, Link Cited by: 5th item.
- [24] K. Hu, P. Wu, F. Pu, W. Xiao, Y. Zhang, X. Yue, B. Li, and Z. Liu (2025) Video-mmmu: evaluating knowledge acquisition from multi-discipline professional videos. External Links: 2501.13826, Link Cited by: 5th item.
- [25] L. Hu, J. Jiao, J. Liu, Y. Ren, Z. Wen, K. Zhang, X. Zhang, X. Gao, T. He, F. Hu, Y. Liao, Z. Wang, C. Yang, Q. Yang, M. Yin, Z. Zeng, G. Zhang, X. Zhang, X. Zhao, Z. Zhu, H. Namkoong, W. Huang, and Y. Tang (2025) FinSearchComp: towards a realistic, expert-level evaluation of financial search and reasoning. External Links: 2509.13160, Link Cited by: 3rd item.
- [26] Y. Huang, Y. Cheng, A. Bapna, O. Firat, M. X. Chen, D. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, and Z. Chen (2019) GPipe: efficient training of giant neural networks using pipeline parallelism. External Links: 1811.06965, Link Cited by: Appendix C.
- [27] N. Jain, K. Han, A. Gu, W. Li, F. Yan, T. Zhang, S. Wang, A. Solar-Lezama, K. Sen, and I. Stoica (2024) Livecodebench: holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974. Cited by: 2nd item.
- [28] C. E. Jimenez, J. Yang, A. Wettig, S. Yao, K. Pei, O. Press, and K. Narasimhan (2023) Swe-bench: can language models resolve real-world github issues?. arXiv preprint arXiv:2310.06770. Cited by: 2nd item.
- [29] K. Jordan, Y. Jin, V. Boza, J. You, F. Cesista, L. Newhouse, and J. Bernstein (2024) Muon: an optimizer for hidden layers in neural networks. External Links: Link Cited by: §4.1, §4.4.2.
- [30] Kimi Team (2025) Kimi k1. 5: scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599. Cited by: Appendix D, §4.4.2, §4.4.2.
- [31] H. Laurençon, L. Saulnier, L. Tronchon, S. Bekman, A. Singh, A. Lozhkov, T. Wang, S. Karamcheti, A. Rush, D. Kiela, et al. (2024) Obelics: an open web-scale filtered dataset of interleaved image-text documents. Advances in Neural Information Processing Systems 36. Cited by: §B.3.
- [32] D. Lepikhin, H. Lee, Y. Xu, D. Chen, O. Firat, Y. Huang, M. Krikun, N. Shazeer, and Z. Chen (2020) Gshard: scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668. Cited by: Appendix C.
- [33] J. Liu, J. Su, X. Yao, Z. Jiang, G. Lai, Y. Du, Y. Qin, W. Xu, E. Lu, J. Yan, et al. (2025) Muon is scalable for llm training. arXiv preprint arXiv:2502.16982. Cited by: §4.1, §4.3, §4.4.2.
- [34] Y. Liu, Z. Li, M. Huang, B. Yang, W. Yu, C. Li, X. Yin, C. Liu, L. Jin, and X. Bai (2024-12) OCRBench: on the hidden mystery of ocr in large multimodal models. Science China Information Sciences 67 (12). External Links: ISSN 1869-1919, Link, Document Cited by: 4th item.
- [35] P. Lu, H. Bansal, T. Xia, J. Liu, C. Li, H. Hajishirzi, H. Cheng, K. Chang, M. Galley, and J. Gao (2024) MathVista: evaluating mathematical reasoning of foundation models in visual contexts. External Links: 2310.02255, Link Cited by: 4th item.
- [36] T. Luong, D. Hwang, H. H. Nguyen, G. Ghiasi, Y. Chervonyi, I. Seo, J. Kim, G. Bingham, J. Lee, S. Mishra, A. Zhai, H. Hu, H. Michalewski, J. Kim, J. Ahn, J. Bae, X. Song, T. H. Trinh, Q. V. Le, and J. Jung (2025-11) Towards robust mathematical reasoning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, C. Christodoulopoulos, T. Chakraborty, C. Rose, and V. Peng (Eds.), Suzhou, China, pp. 35418–35442. External Links: Link, Document, ISBN 979-8-89176-332-6 Cited by: 1st item.
- [37] M. Mathew, V. Bagal, R. P. Tito, D. Karatzas, E. Valveny, and C. V. Jawahar (2021) InfographicVQA. External Links: 2104.12756, Link Cited by: 4th item.
- [38] M. A. Merrill, A. G. Shaw, N. Carlini, B. Li, H. Raj, I. Bercovich, L. Shi, J. Y. Shin, T. Walshe, E. K. Buchanan, et al. (2026) Terminal-bench: benchmarking agents on hard, realistic tasks in command line interfaces. arXiv preprint arXiv:2601.11868. Cited by: 2nd item.
- [39] D. Narayanan, M. Shoeybi, J. Casper, P. LeGresley, M. Patwary, V. A. Korthikanti, D. Vainbrand, P. Kashinkunti, J. Bernauer, B. Catanzaro, A. Phanishayee, and M. Zaharia (2021) Efficient large-scale language model training on gpu clusters using megatron-lm. External Links: 2104.04473, Link Cited by: Appendix C.
- [40] Mathematical Association of America (2025) 2025 american invitational mathematics examination i. Note: Held on February 6, 2025 External Links: Link Cited by: 1st item.
- [41] OpenAI (2025) Introducing gpt 5.2. External Links: Link Cited by: §1, §5.1.1.
- [42] L. Ouyang, Y. Qu, H. Zhou, J. Zhu, R. Zhang, Q. Lin, B. Wang, Z. Zhao, M. Jiang, X. Zhao, J. Shi, F. Wu, P. Chu, M. Liu, Z. Li, C. Xu, B. Zhang, B. Shi, Z. Tu, and C. He (2025) OmniDocBench: benchmarking diverse pdf document parsing with comprehensive annotations. External Links: 2412.07626, Link Cited by: 4th item.
- [43] T. Patwardhan, R. Dias, E. Proehl, G. Kim, M. Wang, O. Watkins, S. P. Fishman, M. Aljubeh, P. Thacker, L. Fauconnet, N. S. Kim, P. Chao, S. Miserendino, G. Chabot, D. Li, M. Sharman, A. Barr, A. Glaese, and J. Tworek (2025) GDPval: evaluating AI model performance on real-world economically valuable tasks. External Links: 2510.04374, Link Cited by: 3rd item.
- [44] B. Peng, J. Quesnelle, H. Fan, and E. Shippole (2023) Yarn: efficient context window extension of large language models. arXiv preprint arXiv:2309.00071. Cited by: §4.3.
- [45] T. Pham, N. Nguyen, P. Zunjare, W. Chen, Y. Tseng, and T. Vu (2025) SealQA: raising the bar for reasoning in search-augmented language models. Note: Seal-0 is the main subset of this benchmark External Links: 2506.01062, Link Cited by: 3rd item.
- [46] L. Phan, A. Gatti, Z. Han, N. Li, J. Hu, H. Zhang, et al. (2025) Humanity's last exam. External Links: 2501.14249, Link Cited by: 1st item.
- [47] D. Rein, B. L. Hou, A. C. Stickland, J. Petty, R. Y. Pang, J. Dirani, J. Michael, and S. R. Bowman (2024) Gpqa: a graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, Cited by: 1st item.
- [48] J. Roberts, M. R. Taesiri, A. Sharma, A. Gupta, S. Roberts, I. Croitoru, S. Bogolin, J. Tang, F. Langer, V. Raina, V. Raina, H. Xiong, V. Udandarao, J. Lu, S. Chen, S. Purkis, T. Yan, W. Lin, G. Shin, Q. Yang, A. T. Nguyen, D. I. Atkinson, A. Baranwal, A. Coca, M. Dang, S. Dziadzio, J. D. Kunz, K. Liang, A. Lo, B. Pulfer, S. Walton, C. Yang, K. Han, and S. Albanie (2025) ZeroBench: an impossible visual benchmark for contemporary large multimodal models. External Links: 2502.09696, Link Cited by: 4th item.
- [49] C. Schuhmann, R. Beaumont, R. Vencu, C. Gordon, R. Wightman, M. Cherti, T. Coombes, A. Katta, C. Mullis, M. Wortsman, et al. (2022) Laion-5b: an open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems 35, pp. 25278–25294. Cited by: §B.3.
- [50] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. External Links: Link Cited by: §4.4.2.
- [51] T. Song, H. Lu, H. Yang, L. Sui, H. Wu, Z. Zhou, Z. Huang, Y. Bao, Y. Charles, X. Zhou, and L. Wang (2026) Towards pixel-level vlm perception via simple points prediction. External Links: 2601.19228, Link Cited by: §B.3.
- [52] G. Starace, O. Jaffe, D. Sherburn, J. Aung, J. S. Chan, L. Maksin, R. Dias, E. Mays, B. Kinsella, W. Thompson, et al. (2025) PaperBench: evaluating AI's ability to replicate ai research. arXiv preprint arXiv:2504.01848. Cited by: 2nd item.
- [53] Kimi Team, Y. Bai, Y. Bao, G. Chen, J. Chen, N. Chen, R. Chen, Y. Chen, Y. Chen, Y. Chen, et al. (2025) Kimi k2: open agentic intelligence. arXiv preprint arXiv:2507.20534. Cited by: §B.2, §4.1, §4.4.1, §4.4.2, §4.4.2, §4.5.
- [54] Kimi Team, A. Du, B. Yin, B. Xing, B. Qu, B. Wang, C. Chen, C. Zhang, C. Du, C. Wei, et al. (2025) Kimi-vl technical report. arXiv preprint arXiv:2504.07491. Cited by: §4.2, §4.3, §4.5.1.
- [55] M. L. Team, B. Wang, B. Xiao, B. Zhang, B. Rong, B. Chen, C. Wan, C. Zhang, C. Huang, C. Chen, et al. (2025) Longcat-flash-omni technical report. arXiv preprint arXiv:2511.00279. Cited by: §4.5.1.
- [56] M. Tian, L. Gao, S. Zhang, X. Chen, C. Fan, X. Guo, R. Haas, P. Ji, K. Krongchon, Y. Li, et al. (2024) Scicode: a research coding benchmark curated by scientists. Advances in Neural Information Processing Systems 37, pp. 30624–30650. Cited by: 2nd item.
- [57] S. Tong, Z. Liu, Y. Zhai, Y. Ma, Y. LeCun, and S. Xie (2024) Eyes wide shut? exploring the visual shortcomings of multimodal llms. External Links: 2401.06209, Link Cited by: 4th item.
- [58] Harvard-MIT Mathematics Tournament (2025) Harvard-mit mathematics tournament, february 2025. Note: Held on February 15, 2025 External Links: Link Cited by: 1st item.
- [59] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30. External Links: Link Cited by: §4.1.
- [60] N. Vedula, M. Collins, E. Agichtein, and O. Rokhlenko (2025) DeepSearchQA: bridging the comprehensiveness gap for deep research agents. Google DeepMind, Google Search, Kaggle, and Google Research. External Links: Link Cited by: 3rd item.
- [61] K. Wang, J. Pan, W. Shi, Z. Lu, M. Zhan, and H. Li (2024) Measuring multimodal mathematical reasoning with math-vision dataset. External Links: 2402.14804, Link Cited by: 4th item.
- [62] W. Wang, Z. He, W. Hong, Y. Cheng, X. Zhang, J. Qi, X. Gu, S. Huang, B. Xu, Y. Dong, M. Ding, and J. Tang (2025) LVBench: an extreme long video understanding benchmark. External Links: 2406.08035, Link Cited by: 5th item.
- [63] X. Wang, B. Wang, D. Lu, J. Yang, T. Xie, J. Wang, J. Deng, X. Guo, Y. Xu, C. H. Wu, Z. Shen, Z. Li, R. Li, X. Li, J. Chen, B. Zheng, P. Li, F. Lei, R. Cao, Y. Fu, D. Shin, M. Shin, J. Hu, Y. Wang, J. Chen, Y. Ye, D. Zhang, D. Du, H. Hu, H. Chen, Z. Zhou, H. Yao, Z. Chen, Q. Gu, Y. Wang, H. Wang, D. Yang, V. Zhong, F. Sung, Y. Charles, Z. Yang, and T. Yu (2025) OpenCUA: open foundations for computer-use agents. External Links: 2508.09123, Link Cited by: §E.7.
- [64] Y. Wang, X. Ma, G. Zhang, Y. Ni, A. Chandra, S. Guo, W. Ren, A. Arulraj, X. He, Z. Jiang, T. Li, M. Ku, K. Wang, A. Zhuang, R. Fan, X. Yue, and W. Chen (2024) MMLU-pro: a more robust and challenging multi-task language understanding benchmark. External Links: 2406.01574, Link Cited by: 1st item.
- [65] Z. Wang, Y. Liu, Y. Wang, W. He, B. Gao, M. Diao, Y. Chen, K. Fu, F. Sung, Z. Yang, et al. (2025) OJBench: a competition level code benchmark for large language models. arXiv preprint arXiv:2506.16395. Cited by: 2nd item.
- [66] Z. Wang, T. Shi, J. He, M. Cai, J. Zhang, and D. Song (2025) CyberGym: evaluating ai agents' cybersecurity capabilities with real-world vulnerabilities at scale. arXiv preprint arXiv:2506.02548. Cited by: 2nd item.
- [67] Z. Wang, M. Xia, L. He, H. Chen, Y. Liu, R. Zhu, K. Liang, X. Wu, H. Liu, S. Malladi, A. Chevalier, S. Arora, and D. Chen (2024) CharXiv: charting gaps in realistic chart understanding in multimodal llms. External Links: 2406.18521, Link Cited by: 4th item.
- [68] J. Wei, Z. Sun, S. Papay, S. McKinney, J. Han, I. Fulford, H. W. Chung, A. T. Passos, W. Fedus, and A. Glaese (2025) BrowseComp: a simple yet challenging benchmark for browsing agents. External Links: 2504.12516, Link Cited by: 3rd item.
- [69] R. Wong, J. Wang, J. Zhao, L. Chen, Y. Gao, L. Zhang, X. Zhou, Z. Wang, K. Xiang, G. Zhang, W. Huang, Y. Wang, and K. Wang (2025) WideSearch: benchmarking agentic broad info-seeking. External Links: 2508.07999, Link Cited by: 3rd item.
- [70] H. Wu, D. Li, B. Chen, and J. Li (2024) LongVideoBench: a benchmark for long-context interleaved video-language understanding. External Links: 2407.15754, Link Cited by: 5th item.
- [71] X. Wu, K. Li, Y. Zhao, L. Zhang, L. Ou, H. Yin, Z. Zhang, X. Yu, D. Zhang, Y. Jiang, P. Xie, F. Huang, M. Cheng, S. Wang, H. Cheng, and J. Zhou (2025) ReSum: unlocking long-horizon search intelligence via context summarization. External Links: 2509.13313, Link Cited by: §5.2.
- [72] T. Xie, M. Yuan, D. Zhang, X. Xiong, Z. Shen, Z. Zhou, X. Wang, Y. Chen, J. Deng, J. Chen, B. Wang, H. Wu, J. Chen, J. Wang, D. Lu, H. Hu, and T. Yu (2025-07) Introducing osworld-verified. xlang.ai. External Links: Link Cited by: 6th item, §5.1.2.
- [73] T. Xie, D. Zhang, J. Chen, X. Li, S. Zhao, R. Cao, T. J. Hua, Z. Cheng, D. Shin, F. Lei, Y. Liu, Y. Xu, S. Zhou, S. Savarese, C. Xiong, V. Zhong, and T. Yu (2024) OSWorld: benchmarking multimodal agents for open-ended tasks in real computer environments. External Links: 2404.07972 Cited by: 6th item, §5.1.2.
- [74] F. Yao, L. Liu, D. Zhang, C. Dong, J. Shang, and J. Gao (2025-08) Your efficient rl framework secretly brings you off-policy rl training. External Links: Link Cited by: §4.4.2.
- [75] X. Yue, Y. Ni, K. Zhang, T. Zheng, R. Liu, G. Zhang, S. Stevens, D. Jiang, W. Ren, Y. Sun, C. Wei, B. Yu, R. Yuan, R. Sun, M. Yin, B. Zheng, Z. Yang, Y. Liu, W. Huang, H. Sun, Y. Su, and W. Chen (2024) MMMU: a massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of CVPR, Cited by: 4th item.
- [76] X. Yue, T. Zheng, Y. Ni, Y. Wang, K. Zhang, S. Tong, Y. Sun, B. Yu, G. Zhang, H. Sun, Y. Su, W. Chen, and G. Neubig (2025) MMMU-pro: a more robust multi-discipline multimodal understanding benchmark. External Links: 2409.02813, Link Cited by: 4th item.
- [77] X. Zhai, B. Mustafa, A. Kolesnikov, and L. Beyer (2023) Sigmoid loss for language image pre-training. External Links: 2303.15343, Link Cited by: §4.2, §4.3.
- [78] X. Zhao, Y. Liu, K. Xu, J. Guo, Z. Wang, Y. Sun, X. Kong, Q. Cao, L. Jiang, Z. Wen, Z. Zhang, and J. Zhou (2025-09) Small leak can sink a great shipâboost rl training on moe with icepop!. External Links: Link Cited by: §4.4.2.
- [79] Y. Zhao, L. Xie, H. Zhang, G. Gan, Y. Long, Z. Hu, T. Hu, W. Chen, C. Li, J. Song, Z. Xu, C. Wang, W. Pan, Z. Shangguan, X. Tang, Z. Liang, Y. Liu, C. Zhao, and A. Cohan (2025) MMVU: measuring expert-level multi-discipline video understanding. External Links: 2501.12380, Link Cited by: 5th item.
- [80] S. Zhou, F. F. Xu, H. Zhu, X. Zhou, R. Lo, A. Sridhar, X. Cheng, T. Ou, Y. Bisk, D. Fried, U. Alon, and G. Neubig (2023) WebArena: a realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854. External Links: Link Cited by: 6th item, §5.1.2.
- [81] W. Zhu, J. Hessel, A. Awadalla, S. Y. Gadre, J. Dodge, A. Fang, Y. Yu, L. Schmidt, W. Y. Wang, and Y. Choi (2024) Multimodal c4: an open, billion-scale corpus of images interleaved with text. Advances in Neural Information Processing Systems 36. Cited by: §B.3.
## Appendix A Contributors
Tongtong Bai Yifan Bai Yiping Bao S.H. Cai Yuan Cao Y. Charles H.S. Che Cheng Chen Guanduo Chen Huarong Chen Jia Chen Jiahao Chen Jianlong Chen Jun Chen Kefan Chen Liang Chen Ruijue Chen Xinhao Chen Yanru Chen Yanxu Chen Yicun Chen Yimin Chen Yingjiang Chen Yuankun Chen Yujie Chen Yutian Chen Zhirong Chen Ziwei Chen Dazhi Cheng Minghan Chu Jialei Cui Jiaqi Deng Muxi Diao Hao Ding Mengfan Dong Mengnan Dong Yuxin Dong Yuhao Dong Ang'ang Du Chenzhuang Du Dikang Du Lingxiao Du Yulun Du Yu Fan Shengjun Fang Qiulin Feng Yichen Feng Garimugai Fu Kelin Fu Hongcheng Gao Tong Gao Yuyao Ge Shangyi Geng Chengyang Gong Xiaochen Gong Zhuoma Gongque Qizheng Gu Xinran Gu Yicheng Gu Longyu Guan Yuanying Guo Xiaoru Hao Weiran He Wenyang He Yunjia He Chao Hong Hao Hu Jiaxi Hu Yangyang Hu Zhenxing Hu Ke Huang Ruiyuan Huang Weixiao Huang Zhiqi Huang Tao Jiang Zhejun Jiang Xinyi Jin Yu Jing Guokun Lai Aidi Li C. Li Cheng Li Fang Li Guanghe Li Guanyu Li Haitao Li Haoyang Li Jia Li Jingwei Li Junxiong Li Lincan Li Mo Li Weihong Li Wentao Li Xinhang Li Xinhao Li Yang Li Yanhao Li Yiwei Li Yuxiao Li Zhaowei Li Zheming Li Weilong Liao Jiawei Lin Xiaohan Lin Zhishan Lin Zichao Lin Cheng Liu Chenyu Liu Hongzhang Liu Liang Liu Shaowei Liu Shudong Liu Shuran Liu Tianwei Liu Tianyu Liu Weizhou Liu Xiangyan Liu Yangyang Liu Yanming Liu Yibo Liu Yuanxin Liu Yue Liu Zhengying Liu Zhongnuo Liu Enzhe Lu Haoyu Lu Zhiyuan Lu Junyu Luo Tongxu Luo Yashuo Luo Long Ma Yingwei Ma Shaoguang Mao Yuan Mei Xin Men Fanqing Meng Zhiyong Meng Yibo Miao Minqing Ni Kun Ouyang Siyuan Pan Bo Pang Yuchao Qian Ruoyu Qin Zeyu Qin Jiezhong Qiu Bowen Qu Zeyu Shang Youbo Shao Tianxiao Shen Zhennan Shen Juanfeng Shi Lidong Shi Shengyuan Shi Feifan Song Pengwei Song Tianhui Song Xiaoxi Song Hongjin Su Jianlin Su Zhaochen Su Lin Sui Jinsong Sun Junyao Sun Tongyu Sun Flood Sung Yunpeng Tai Chuning Tang Heyi Tang Xiaojuan Tang Zhengyang Tang Jiawen Tao Shiyuan Teng Chaoran Tian Pengfei Tian Ao Wang Bowen Wang Chensi Wang Chuang
Wang Congcong Wang Dingkun Wang Dinglu Wang Dongliang Wang Feng Wang Hailong Wang Haiming Wang Hengzhi Wang Huaqing Wang Hui Wang Jiahao Wang Jinhong Wang Jiuzheng Wang Kaixin Wang Linian Wang Qibin Wang Shengjie Wang Shuyi Wang Si Wang Wei Wang Xiaochen Wang Xinyuan Wang Yao Wang Yejie Wang Yipu Wang Yiqin Wang Yucheng Wang Yuzhi Wang Zhaoji Wang Zhaowei Wang Zhengtao Wang Zhexu Wang Zihan Wang Zizhe Wang Chu Wei Ming Wei Chuan Wen Zichen Wen Chengjie Wu Haoning Wu Junyan Wu Rucong Wu Wenhao Wu Yuefeng Wu Yuhao Wu Yuxin Wu Zijian Wu Chenjun Xiao Jin Xie Xiaotong Xie Yuchong Xie Yifei Xin Bowei Xing Boyu Xu Jianfan Xu Jing Xu Jinjing Xu L.H. Xu Lin Xu Suting Xu Weixin Xu Xinbo Xu Xinran Xu Yangchuan Xu Yichang Xu Yuemeng Xu Zelai Xu Ziyao Xu Junjie Yan Yuzi Yan Guangyao Yang Hao Yang Junwei Yang Kai Yang Ningyuan Yang Ruihan Yang Xiaofei Yang Xinlong Yang Ying Yang Yi (ćŒ) Yang Yi (çż) Yang Zhen Yang Zhilin Yang Zonghan Yang Haotian Yao Dan Ye Wenjie Ye Zhuorui Ye Bohong Yin Chengzhen Yu Longhui Yu Tao Yu† Tianxiang Yu Enming Yuan Mengjie Yuan Xiaokun Yuan Yang Yue Weihao Zeng Dunyuan Zha Haobing Zhan Dehao Zhang Hao Zhang Jin Zhang Puqi Zhang Qiao Zhang Rui Zhang Xiaobin Zhang Y. Zhang Yadong Zhang Yangkun Zhang Yichi Zhang Yizhi Zhang Yongting Zhang Yu Zhang Yushun Zhang Yutao Zhang Yutong Zhang Zheng Zhang Chenguang Zhao Feifan Zhao Jinxiang Zhao Shuai Zhao Xiangyu Zhao Yikai Zhao Zijia Zhao Huabin Zheng Ruihan Zheng Shaojie Zheng Tengyang Zheng Junfeng Zhong Longguang Zhong Weiming Zhong M. Zhou Runjie Zhou Xinyu Zhou Zaida Zhou Jinguo Zhu Liya Zhu Xinhao Zhu Yuxuan Zhu Zhen Zhu Jingze Zhuang Weiyu Zhuang Ying Zou Xinxing Zu Kimi K2 Kimi K2.5 footnotetext: The listing of authors is in alphabetical order based on their last names. footnotetext: † The University of Hong Kong
## Appendix B Pre-training
<details>
<summary>x7.png Details</summary>

### Visual Description
## Line Charts: Multi-Task Performance by Vision-Text Data Ratio
### Overview
The image displays a 2x3 grid of six line charts. Each chart plots the "Score" (y-axis) against "Steps" (x-axis) for a specific task, comparing three different training data mixtures defined by the ratio of Vision to Text data. The charts collectively analyze how varying the proportion of vision and text data in training affects model performance across different cognitive tasks.
### Components/Axes
* **Titles (Top of each chart):**
* Top Row (Left to Right): `Vision Knowledge`, `Vision General Reasoning`, `OCR`
* Bottom Row (Left to Right): `Text Knowledge`, `Text General Reasoning`, `Coding`
* **Axes:**
* **X-axis (All charts):** Labeled `Steps`. The axis has tick marks but no numerical labels, indicating a progression of training iterations.
* **Y-axis (All charts):** Labeled `Score`. The axis has tick marks but no numerical labels, indicating a performance metric (likely accuracy or a similar score) on a scale from low to high.
* **Legend (Present in the bottom-right corner of each chart):**
* A red line with a square marker: `Vision:Text = 10%:90%`
* A green line with a square marker: `Vision:Text = 20%:80%`
* A blue line with a square marker: `Vision:Text = 50%:50%`
* **Background Shading:** Each chart features vertical shaded regions corresponding to the colors of the three data series (red, green, blue from left to right). This likely indicates distinct phases of training where one data mixture was predominantly used.
### Detailed Analysis
**Chart 1: Vision Knowledge**
* **Trend Verification:**
* **Red Line (10%:90%):** Starts at a low score and shows a steady, noisy upward trend throughout all steps.
* **Green Line (20%:80%):** Begins at a later step (within the green shaded region) and exhibits a very steep upward slope, eventually surpassing the red line.
* **Blue Line (50%:50%):** Begins at the latest step (within the blue shaded region) and also shows a steep upward slope, converging near the top with the green line.
* **Key Data Points (Approximate):** The final scores for the green and blue lines are the highest, with the red line slightly lower. The green line shows the most dramatic improvement rate.
**Chart 2: Vision General Reasoning**
* **Trend Verification:**
* **Red Line (10%:90%):** Shows a noisy but generally upward trend from the start.
* **Green Line (20%:80%):** Starts later and rises sharply, crossing above the red line.
* **Blue Line (50%:50%):** Starts last and rises steeply, ending at a similar high level as the green line.
* **Key Data Points (Approximate):** Similar pattern to Vision Knowledge, with the 20% and 50% vision mixtures achieving higher final scores than the 10% mixture.
**Chart 3: OCR (Optical Character Recognition)**
* **Trend Verification:**
* **Red Line (10%:90%):** Rises quickly early on and then plateaus with minor fluctuations.
* **Green Line (20%:80%):** Starts later and climbs steadily, approaching the red line's plateau.
* **Blue Line (50%:50%):** Starts last and climbs, also approaching the plateau level.
* **Key Data Points (Approximate):** The red line establishes a high score early. The green and blue lines, starting from zero at later steps, show strong learning curves but do not clearly surpass the initial red line's performance within the displayed steps.
**Chart 4: Text Knowledge**
* **Trend Verification:**
* **Red Line (10%:90%):** Starts at a moderate score and shows a steady, gradual upward trend.
* **Green Line (20%:80%):** Begins at the same step as the red line but at a slightly lower initial score, following a parallel upward trend.
* **Blue Line (50%:50%):** Also begins at the same step, starting lower than both red and green, and follows a similar upward slope.
* **Key Data Points (Approximate):** The lines are ordered: Red (highest) > Green > Blue (lowest) throughout the entire training process, maintaining a consistent gap.
**Chart 5: Text General Reasoning**
* **Trend Verification:**
* **Red Line (10%:90%):** Shows a steady upward trend from the start.
* **Green Line (20%:80%):** Begins at the same step, follows a very similar trajectory to the red line but slightly below it.
* **Blue Line (50%:50%):** Begins at the same step, follows a similar trajectory but is the lowest of the three.
* **Key Data Points (Approximate):** The performance hierarchy (Red > Green > Blue) is maintained, but the gaps between the lines are smaller than in Text Knowledge.
**Chart 6: Coding**
* **Trend Verification:**
* **Red Line (10%:90%):** Exhibits a noisy but clear upward trend.
* **Green Line (20%:80%):** Starts at the same step, follows a similar noisy upward path, closely tracking the red line.
* **Blue Line (50%:50%):** Starts at the same step, also follows a similar noisy upward path, generally the lowest but intertwined with the green line.
* **Key Data Points (Approximate):** The three lines are closely clustered, showing similar performance and learning trends. The 10% vision mixture (red) has a slight edge for most of the training.
### Key Observations
1. **Task-Dependent Data Sensitivity:** Vision-centric tasks (Vision Knowledge, Vision General Reasoning, OCR) show a dramatic performance boost when the training data mixture includes a higher proportion of vision data (20% or 50%), especially when training starts from scratch with that mixture (indicated by the later starting points of green and blue lines).
2. **Text Task Robustness:** Text-centric tasks (Text Knowledge, Text General Reasoning, Coding) are less sensitive to the vision-text ratio. Performance is best with the highest text proportion (10%:90%), and increasing vision data leads to a gradual, consistent decrease in score.
3. **Staged Training Implication:** The shaded backgrounds and the delayed start of the green and blue lines in the top-row charts suggest a potential training curriculum: a model might be first trained on a text-heavy mixture (red phase), then fine-tuned on mixtures with more vision data (green and blue phases).
4. **Learning Efficiency:** For vision tasks, models trained with more vision data from the start (green/blue lines) learn much faster (steeper slope) once they begin, compared to the model trained throughout on text-heavy data (red line).
### Interpretation
The data demonstrates a clear trade-off in multi-modal model training: **specialization vs. generalization**. Allocating more data to a modality (vision) significantly boosts performance on tasks requiring that modality, but comes at a slight cost to performance on tasks dominated by the other modality (text).
The charts suggest that for optimal performance across a diverse benchmark, a balanced approach or a curriculum strategy (starting with text-heavy data for foundational knowledge, then incorporating more vision data) might be more effective than a single fixed ratio. The "OCR" chart is particularly insightful; while more vision data helps, the text-heavy model's early lead indicates that strong text understanding is a crucial foundation for character recognition, which is then refined with visual training.
The close clustering in the "Coding" chart implies that coding proficiency, as measured here, may be more dependent on textual (code) data and reasoning patterns than on visual information, making it robust to changes in the vision-text data mix. This analysis provides empirical guidance for designing data sampling strategies when training large multi-modal models.
</details>
Figure 9: Learning curves comparing vision-to-text ratios (10:90, 20:80, 50:50) under a fixed vision-text token budget across vision and language tasks. Early fusion with lower vision ratios tends to yield better results.
### B.1 Joint-Training
We further provide the full training curves for all configurations in Figure 9. Notably, we observe a "dip-and-recover" pattern in text performance during mid-fusion and late-fusion stages: when vision data is first introduced, text capability initially degrades before gradually recovering. We attribute this to the modality domain shift: the sudden introduction of vision tokens disrupts the established linguistic representation space, forcing the model to temporarily sacrifice text-specific competence for cross-modal alignment.
In contrast, early fusion maintains a healthier and more stable text performance curve throughout training. By co-optimizing vision and language from the outset, the model naturally evolves unified multimodal representations without the shock of late-stage domain migration. This suggests that early exposure not only prevents the representation collapse observed in late fusion but also facilitates smoother gradient landscapes for both modalities. Collectively, these findings reinforce our proposal of native multimodal pre-training: moderate vision ratios combined with early fusion yield superior convergence properties and more robust bi-modal competence under fixed token budgets.
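The fixed-token-budget mixtures compared above can be sketched as a sampler that spends a set fraction of the budget on each modality. This is an illustrative sketch, not the actual data pipeline; the budget and sequence length below are made-up numbers, and only the ratios (e.g. 20%:80%) come from the experiment.

```python
import random

def sample_mixture(vision_ratio: float, total_tokens: int, seq_len: int, seed: int = 0):
    """Draw a token-budgeted schedule of 'vision'/'text' sequences.

    vision_ratio is the fraction of the token budget spent on vision data,
    e.g. 0.2 for the 20%:80% mixture in Figure 9.
    """
    rng = random.Random(seed)
    schedule, used = [], 0
    while used + seq_len <= total_tokens:
        modality = "vision" if rng.random() < vision_ratio else "text"
        schedule.append(modality)
        used += seq_len  # both modalities draw from the same fixed budget
    return schedule

schedule = sample_mixture(vision_ratio=0.2, total_tokens=1_000_000, seq_len=1_000)
frac = schedule.count("vision") / len(schedule)  # empirical vision fraction, ~0.2
```

Holding `total_tokens` constant while varying `vision_ratio` is what makes the comparison in Figure 9 a trade-off study rather than an ablation of extra data.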
### B.2 Text data
The Kimi K2.5 pre-training text corpus comprises curated, high-quality data spanning four primary domains: Web Text, Code, Mathematics, and Knowledge. Most data processing pipelines follow the methodologies outlined in Kimi K2 [53]. For each domain, we performed rigorous correctness and quality validation and designed targeted data experiments to ensure the curated dataset achieved both high diversity and effectiveness.
Enhanced Code Intelligence. We upweighted code-centric data, significantly expanding (1) repository-level code supporting cross-file reasoning and architectural understanding, (2) issues, code reviews, and commit histories from the internet capturing real-world development patterns, and (3) code-related documents retrieved from PDF and webtext corpora. These efforts strengthen repository-level comprehension for complex coding tasks, improve performance on agentic coding subtasks such as patch generation and unit test writing, and enhance code-related knowledge capabilities.
### B.3 Vision data
Our multimodal pre-training corpus includes seven categories: caption, interleaving, OCR, knowledge, perception, video, and agent data. Caption data [49, 18] provides fundamental modality alignment, with strict limits on synthetic captions to mitigate hallucination. Image-text interleaving data from books, web pages, and tutorials [81, 31] enables multi-image comprehension and longer context learning. OCR data spans multilingual text, dense layouts, and multi-page documents. Knowledge data incorporates academic materials processed via layout parsers to develop visual reasoning capabilities.
Furthermore, we curate a specialized multimodal problem-solving corpus to bolster reasoning within Science, Technology, Engineering, and Mathematics domains. This data is aggregated through targeted retrieval and web crawling; for informational content lacking explicit query formats, we employ in-context learning [10] to automatically reformulate raw materials into structured academic problems spanning K-12 to university levels. To bridge the modality gap between visual layouts and code data, we incorporate extensive image-code paired data. This includes a diverse array of code formats, such as HTML, React, and SVG, among others, paired with their corresponding rendered screenshots, enabling the model to align abstract structural logic with concrete visual geometry.
For agentic and temporal understanding, we collect GUI screenshots and action trajectories across desktop, mobile, and web environments, including human-annotated demonstrations. Video data from diverse sources enables both hour-long video comprehension and fine-grained spatio-temporal perception. Additionally, we incorporate grounding data to enhance fine-grained visual localization, including perception annotations (bounding boxes) and point-based references. We also introduce a new contour-level segmentation task [51] for pixel-level perception learning. All data undergoes rigorous filtering, deduplication, and quality control to ensure high diversity and effectiveness.
## Appendix C Infra
Kimi K2.5 is trained on NVIDIA H800 GPU clusters with 8×400 Gbps RoCE interconnects across nodes. We employ a flexible parallelism strategy combining 16-way Pipeline Parallelism (PP) with virtual stages [26, 39], 16-way Expert Parallelism (EP) [32], and ZeRO-1 Data Parallelism, enabling training on any number of nodes that is a multiple of 32. EP all-to-all communication is overlapped with computation under interleaved 1F1B scheduling. To fit activations within GPU memory constraints, we apply selective recomputation for LayerNorm, SwiGLU, and MLA up-projections, compress insensitive activations to FP8-E4M3, and offload remaining activations to CPU with overlapped streaming.
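One way to read the stated layout is as simple GPU arithmetic; the sketch below assumes 8 GPUs per H800 node (standard HGX configuration, not stated in the text) and treats the PP×EP product as the model-parallel footprint that the data-parallel dimension replicates. It is a back-of-the-envelope check, not the framework's actual topology code.

```python
# Illustrative arithmetic for the parallelism layout described above.
GPUS_PER_NODE = 8   # assumption: 8 GPUs per H800 node
PP = 16             # pipeline-parallel stages
EP = 16             # expert-parallel groups

def data_parallel_size(num_nodes: int) -> int:
    """ZeRO-1 data-parallel replicas for a given node count.

    The PP * EP footprint is 256 GPUs, so a multiple of 32 nodes
    (32 * 8 = 256 GPUs) yields a whole number of replicas.
    """
    if num_nodes % 32 != 0:
        raise ValueError("node count must be a multiple of 32")
    total_gpus = num_nodes * GPUS_PER_NODE
    return total_gpus // (PP * EP)

dp_32 = data_parallel_size(32)   # smallest allowed cluster: one replica
dp_64 = data_parallel_size(64)   # doubling the nodes doubles the replicas
```

Under these assumptions, the "multiple of 32 nodes" constraint is exactly what keeps the data-parallel size integral.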
### C.1 Data Storage and Loading
We employ S3 [3] compatible object storage solutions from cloud providers to house our VLM datasets. To bridge the gap between data preparation and model training, we retain visual data in its native format and have engineered a highly efficient and adaptable data loading infrastructure. This infrastructure offers several critical advantages:
- Flexibility: Facilitates dynamic data shuffling, blending, tokenization, loss masking, and sequence packing throughout the training process, enabling adjustable data ratios as requirements evolve;
- Augmentation: Allows for stochastic augmentation of both visual and textual modalities, while maintaining the integrity of 2D spatial coordinates and orientation metadata during geometric transformations;
- Determinism: Guarantees fully deterministic training through meticulous management of random seeds and worker states, ensuring that any training interruption can be resumed seamlessly: the data sequence after resumption remains identical to that of an uninterrupted run;
- Scalability: Achieves superior data loading throughput via tiered caching mechanisms, robustly scaling to large distributed clusters while regulating request frequency to object storage within acceptable bounds.
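The determinism guarantee above reduces to two ingredients: per-worker seeding and a checkpointable consumption cursor. The following is a minimal sketch under those assumptions; the class and method names are illustrative, not the loader's actual API.

```python
import random

class DeterministicLoader:
    """Minimal sketch of resumable, deterministic data loading.

    Each worker derives its RNG from (base_seed, worker_id), and the
    consumed-sample cursor is checkpointed so a resumed run replays
    exactly the same order as an uninterrupted one.
    """

    def __init__(self, dataset, base_seed: int, worker_id: int):
        self.dataset = list(dataset)
        rng = random.Random(base_seed + 31 * worker_id)  # worker-specific seed
        self.order = rng.sample(range(len(self.dataset)), len(self.dataset))
        self.cursor = 0

    def next(self):
        item = self.dataset[self.order[self.cursor]]
        self.cursor += 1
        return item

    def state_dict(self):
        return {"cursor": self.cursor}

    def load_state_dict(self, state):
        self.cursor = state["cursor"]

# Interrupt after 3 samples, checkpoint, resume in a fresh process:
a = DeterministicLoader(range(10), base_seed=42, worker_id=0)
first = [a.next() for _ in range(3)]
ckpt = a.state_dict()

b = DeterministicLoader(range(10), base_seed=42, worker_id=0)
b.load_state_dict(ckpt)
resumed = first + [b.next() for _ in range(7)]

c = DeterministicLoader(range(10), base_seed=42, worker_id=0)
full = [c.next() for _ in range(10)]  # uninterrupted reference run
```

The resumed sequence equals the uninterrupted one because the shuffle order is a pure function of the seed and only the cursor is stateful.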
Furthermore, to uphold uniform dataset quality standards, we have built a unified platform overseeing data registration, visualization, statistical analysis, cross-cloud synchronization, and lifecycle governance.
## Appendix D Unified Agentic Reinforcement Learning Environment
<details>
<summary>x8.png Details</summary>

### Visual Description
## System Architecture Diagram: Single Agent Task Framework
### Overview
This image is a technical system architecture diagram illustrating the components and data flow of a "Single Agent Task" framework. The diagram depicts a modular system designed for agent-based tasks, involving pluggable components, core processing loops, multiple environment types, and external services for inference and training. The overall flow suggests a system for developing, testing, and refining AI agents.
### Components/Axes
The diagram is organized into several interconnected blocks and services:
1. **Rollout Manager** (Far left, blue box with a crown icon): The entry point or orchestrator that initiates the process.
2. **Single Agent Task** (Large central container): The main processing unit, containing:
* **Pluggable Components** (Left sub-box, blue): A module containing:
* `Toolset`
* `Judge`
* `Prompt & Instruction Enhancement`
* **Core Agent Loop** (Top-center sub-box, blue with gear icon): The central processing engine.
* **Environment Layer** (Bottom-center sub-box, blue):
* `Black-Box Env` (Left)
* `White-Box Env` (Right)
* `LLM Gateway` (Below Black-Box Env)
* `Env Pool` (Database icon, below White-Box Env)
3. **External Services** (Right side):
* `Inference Engine Service` (Top blue box)
* `Training Engine Service` (Bottom blue box)
**Labels and Text Flow:**
* Arrows indicate data/control flow. Key labels on arrows include:
* `Obs` (Observation) and `Act` (Action) between the Core Agent Loop and both environments.
* `Token-in` and `Token-out` between the Core Agent Loop and the Inference Engine Service.
* `Mismatch Correction` from the Inference Engine Service to the Training Engine Service.
* `Recursive Call` looping back from the Core Agent Loop to itself.
* The `LLM Gateway` has a bidirectional arrow connecting it to the `Black-Box Env`.
* The `Env Pool` is connected to the `White-Box Env`.
### Detailed Analysis
The system operates through a defined sequence of interactions:
1. **Initiation:** The `Rollout Manager` sends a task to the `Single Agent Task` unit.
2. **Agent Core Processing:** The `Core Agent Loop` is the central hub. It:
* Receives configuration and tools from the `Pluggable Components`.
* Engages in a **recursive call** loop with itself.
* Exchanges `Token-in` and `Token-out` with the external `Inference Engine Service`.
* Sends `Act` (actions) to and receives `Obs` (observations) from two types of environments.
3. **Environment Interaction:**
* The `Black-Box Env` interacts with an external `LLM Gateway` (likely for API-based model calls).
* The `White-Box Env` draws from an `Env Pool` (suggesting a repository of accessible, internal environments).
4. **Learning & Correction:** The `Inference Engine Service` detects a "Mismatch" and sends a `Mismatch Correction` signal to the `Training Engine Service`. The `Training Engine Service` also receives a direct input from the `Single Agent Task` unit, indicating a feedback loop for model improvement.
### Key Observations
* **Modularity:** The system is highly modular, with clear separation between the agent's core logic (`Core Agent Loop`), its configurable tools (`Pluggable Components`), and the environments it operates in.
* **Dual Environment Strategy:** The explicit separation of `Black-Box` and `White-Box` environments is a key architectural choice. This suggests the system is designed to handle scenarios where the agent has limited visibility (black-box) versus full access (white-box) to the environment's internal state.
* **Closed-Loop Learning:** The connection from `Inference Engine Service` to `Training Engine Service` via `Mismatch Correction` creates a closed feedback loop, enabling the system to learn from its errors.
* **Central Orchestration:** The `Core Agent Loop` acts as the central nervous system, coordinating between tools, environments, and external services.
### Interpretation
This diagram represents a sophisticated framework for developing and training AI agents in a controlled, iterative manner. The architecture is designed for **Peircean investigative** reasoning: the agent forms hypotheses (Acts), tests them in environments (both opaque and transparent), observes outcomes (Obs), and the system uses discrepancies (mismatches) to correct and improve the underlying models.
The **"reading between the lines"** suggests this is a research or production framework for:
1. **Benchmarking & Evaluation:** The `Judge` component and dual environments allow for rigorous testing of agent performance under different conditions.
2. **Iterative Refinement:** The recursive call and training feedback loop enable continuous agent improvement without full retraining.
3. **Hybrid Model Deployment:** The use of both an `LLM Gateway` (for black-box, possibly commercial APIs) and an `Env Pool` (for white-box, custom environments) indicates a flexible approach to leveraging different types of models and simulation environments.
The system's goal is likely to create more robust, reliable, and self-improving AI agents by systematically exposing them to varied challenges and using the resulting data to correct inference errors.
</details>
Figure 10: Overview of our agentic RL framework.
Environment
To support unified Agentic RL, our RL framework features a standardized Gym-like [9] interface to streamline the implementation of diverse environments. This design empowers users to implement and customize environments with minimal overhead. Our design prioritizes compositional modularity by integrating a suite of pluggable components, such as a Toolset module for supporting various tools with sandboxes, a Judge module for multi-faceted reward signals, and specialized modules for prompt diversification and instruction-following enhancement. These components can be dynamically composed with core agent loops, offering high flexibility and enhancing model generalization.
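A Gym-like environment composed from pluggable parts might look like the following sketch. All names (`AgentEnv`, `step`, the toy tools) are hypothetical illustrations of the pattern, not the framework's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentEnv:
    """Hypothetical Gym-like agent environment with pluggable components."""
    toolset: dict                      # tool name -> sandboxed callable
    judge: Callable[[list], float]     # trajectory -> scalar reward
    history: list = field(default_factory=list)

    def reset(self, task: str) -> str:
        self.history = [("task", task)]
        return task                    # initial observation

    def step(self, action: dict):
        tool = self.toolset[action["tool"]]
        obs = tool(action["args"])
        self.history.append((action["tool"], obs))
        done = action["tool"] == "submit"
        reward = self.judge(self.history) if done else 0.0
        return obs, reward, done

# Compose an environment from a Toolset and a Judge.
env = AgentEnv(
    toolset={"search": lambda q: f"results for {q}", "submit": lambda a: a},
    judge=lambda hist: 1.0 if hist[-1][1] == "42" else 0.0,
)
env.reset("What is 6 * 7?")
obs, reward, done = env.step({"tool": "submit", "args": "42"})
```

Because the toolset and judge are plain values passed at construction, swapping in a different sandbox or reward model changes no agent-loop code, which is the modularity the paragraph describes.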
At the execution level, our RL framework treats every agent task as an independent asynchronous coroutine. Each task can recursively trigger sub-task rollouts, simplifying the implementation of complex multi-agent paradigms such as Parallel-Agent RL and Agent-as-Judge. As shown in Figure 10, a dedicated Rollout Manager orchestrates up to 100,000 concurrent agent tasks during the RL process, providing fine-grained control to enable features like partial rollout [30]. Upon activation, each task acquires an environment instance from a managed pool, equipped with a sandbox and specialized tools.
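The coroutine-per-task model with recursive sub-rollouts can be sketched with `asyncio`; the task names and depth limit below are illustrative stand-ins for real model and tool calls.

```python
import asyncio

async def rollout(task: str, depth: int = 0) -> str:
    """One agent task as a coroutine; may recursively spawn sub-rollouts."""
    if depth < 1 and task == "plan":
        # A task recursively triggers concurrent sub-task rollouts,
        # as in parallel-agent paradigms.
        subs = await asyncio.gather(
            rollout("sub-a", depth + 1),
            rollout("sub-b", depth + 1),
        )
        return " + ".join(subs)
    await asyncio.sleep(0)  # stand-in for model inference / tool execution
    return f"done:{task}"

async def rollout_manager(tasks):
    # The manager simply runs every agent task concurrently.
    return await asyncio.gather(*(rollout(t) for t in tasks))

results = asyncio.run(rollout_manager(["plan", "solve"]))
```

Since sub-rollouts reuse the same coroutine entry point, multi-agent patterns need no special scheduler support beyond ordinary `await`.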
Inference Engine Co-design
Our framework strictly follows a Token-in-Token-out paradigm. We also record log probabilities for all inference engine outputs to perform train-inference mismatch correction, ensuring stable RL training. Co-designing the inference engine around RL requirements allows us to expose these features through custom inference APIs.
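Recording the inference engine's per-token log probabilities makes a mismatch correction possible between the policy that generated the rollout and the policy the trainer recomputes. One common form is a truncated importance ratio; the text does not specify the exact correction used, so the sketch below is a generic illustration, with the clip value chosen arbitrarily.

```python
import math

def corrected_weights(train_logps, infer_logps, clip=2.0):
    """Per-token truncated importance ratios exp(logp_train - logp_infer).

    A generic train-inference mismatch correction: tokens where the
    trainer and the inference engine disagree are re-weighted, with
    large ratios clipped for stability.
    """
    ratios = [math.exp(t - i) for t, i in zip(train_logps, infer_logps)]
    return [min(r, clip) for r in ratios]

# Matching policies give ratio 1; drifted tokens are down-/up-weighted.
w = corrected_weights(
    train_logps=[-1.0, -2.0, -0.5],
    infer_logps=[-1.0, -1.5, -3.0],
)
```

Without the recorded inference log probabilities, the ratio's denominator would be unavailable and the rollout would have to be treated as exactly on-policy.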
Besides a comprehensive suite of built-in white-box environments, there are also black-box environments that can only run under the standard LLM API protocol and therefore cannot use the advanced features offered by our custom API protocol. To enable model optimization in black-box environments, we developed the LLM Gateway, a proxy service that keeps detailed records of rollout requests and responses under our custom protocol.
Monitoring and debugging
Optimizing the performance of a highly parallel asynchronous execution system while ensuring correctness is challenging. We developed a series of tools for performance monitoring, profiling, data visualization, and data verification, and found them instrumental in debugging and in ensuring both the efficiency and correctness of our Agentic RL.
## Appendix E Evaluation Settings
This section provides comprehensive configuration details and testing protocols for all benchmarks reported in Table 4.
### E.1 General Evaluation Protocol
Unless explicitly stated otherwise, all experiments for Kimi K2.5 adhere to the following hyperparameter configuration:
- Temperature: $1.0$
- Top-p: $0.95$
- Context Length: $256k$ tokens
### E.2 Baselines
For baseline models, we report results under their respective high-performance reasoning configurations:
- Claude Opus 4.5: Extended thinking mode
- GPT-5.2: Maximum reasoning effort (xhigh)
- Gemini 3 Pro: High thinking level
- DeepSeek-V3.2: Thinking mode enabled (for text-only benchmarks)
- Qwen3-VL-235B-A22B: Thinking mode (for vision benchmarks only)
For vision and multimodal benchmarks, GPT-5.2-xhigh exhibited an approximate 10% failure rate (i.e., no output generated despite three retry attempts) during vision evaluations. These failures were treated as incorrect predictions, meaning that the reported scores may be conservative lower bounds of the model's true capability.
In addition, because we were unable to consistently access a stable GPT-5.2 API, we skipped some benchmarks with high evaluation costs, such as WideSearch.
### E.3 Text Benchmarks
Reasoning Benchmarks.
For high-complexity reasoning benchmarks, including HLE-Full, AIME 2025, HMMT 2025, GPQA-Diamond, and IMO-AnswerBench, we enforce a maximum completion budget of $96k$ tokens to ensure sufficient reasoning depth. To reduce variance arising from stochastic reasoning paths, results on AIME 2025 and HMMT 2025 (Feb) are averaged over 64 independent runs (Avg@64), while GPQA-Diamond is averaged over 8 runs (Avg@8).
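The Avg@k metric used above is simply the mean accuracy over k independent sampled runs; averaging many runs shrinks the variance introduced by stochastic reasoning paths. A minimal sketch (the run scores below are invented for illustration):

```python
def avg_at_k(run_scores) -> float:
    """Avg@k: mean score over k independent runs of the same benchmark,
    as used above with k = 64 (AIME/HMMT) and k = 8 (GPQA-Diamond)."""
    return sum(run_scores) / len(run_scores)

# Eight hypothetical runs, each run's accuracy in [0, 1]:
score = avg_at_k([0.8, 0.75, 0.85, 0.8, 0.9, 0.7, 0.8, 0.8])
```

With k runs, the standard error of the estimate falls roughly as 1/sqrt(k), which is why the noisiest benchmarks (AIME, HMMT) use the largest k.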
LongBench v2.
For a fair comparison, we standardize all input contexts to approximately $128k$ tokens using the same truncation strategy as in [8]. We observe that GPT-5.2-xhigh frequently produces free-form question-answer responses rather than the required multiple-choice format. Therefore, we report results using GPT-5.2-high, which consistently adheres to the expected output format.
### E.4 Image and Video Benchmarks
All image and video understanding evaluations utilize the following configuration:
- Maximum Tokens: $64k$
- Sampling: Averaged over 3 independent runs (Avg@3)
ZeroBench (w/ tools).
Multi-step reasoning evaluations use constrained step-wise generation:
- Max Tokens per Step: $24k$
- Maximum Steps: $30$
MMMU-Pro.
We adhere strictly to the official evaluation protocol: input order is preserved for all modalities, with images prepended to text sequences as specified in the benchmark guidelines.
Sampling Strategies for Video Benchmarks.
For short video benchmarks (VideoMMMU, MMVU & MotionBench), we sample 128 uniform input frames with a maximum spatial resolution of 896; for long video benchmarks (Video-MME, LongVideoBench & LVBench), we sample 2048 uniform frames at a spatial resolution of 448.
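A minimal sketch of uniform frame sampling, assuming frames are taken at the centers of equal-length temporal bins (the exact binning scheme is not specified in the text):

```python
def uniform_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Pick frame indices spread uniformly over the video: one frame at the
    center of each of `num_samples` equal-length temporal bins
    (128 frames for short videos, 2048 for long videos)."""
    if total_frames <= num_samples:
        return list(range(total_frames))
    return [int((i + 0.5) * total_frames / num_samples)
            for i in range(num_samples)]
```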
Specialized Metrics.
- OmniDocBench 1.5: Scores are computed as $(1 - \text{normalized Levenshtein distance}) \times 100$, where higher values indicate superior OCR and document understanding accuracy.
- WorldVQA: Access available at https://github.com/MoonshotAI/WorldVQA. This benchmark evaluates atomic, vision-centric world knowledge requiring fine-grained visual recognition and geographic understanding.
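The OmniDocBench metric above can be sketched directly; normalizing by the longer string's length is an assumption, as the exact normalization is not specified here:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def omnidoc_score(pred: str, ref: str) -> float:
    """(1 - normalized Levenshtein distance) * 100; higher is better."""
    if not pred and not ref:
        return 100.0
    return (1 - levenshtein(pred, ref) / max(len(pred), len(ref))) * 100
```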
### E.5 Coding and Software Engineering
Terminal Bench 2.0.
All scores are obtained using the default Terminus-2 agent framework with the provided JSON parser. Notably, we evaluate under non-thinking mode because our current context management implementation for thinking mode is incompatible with Terminus-2's conversation state handling.
SWE-Bench Series.
We employ an internally developed evaluation framework featuring a minimal tool set: bash, create_file, insert, view, str_replace, and submit. System prompts are specifically tailored for repository-level code manipulation. Peak performance is achieved under non-thinking mode across all SWE-Bench variants (Verified, Multilingual, and Pro).
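As a hedged illustration of the minimal tool set, a `str_replace`-style edit tool is commonly implemented by requiring the target snippet to be unique, so that edits are unambiguous. This is a generic sketch of that convention, not the team's internal implementation:

```python
def str_replace(text: str, old: str, new: str) -> str:
    """Replace `old` with `new`, requiring `old` to occur exactly once
    so the edit is unambiguous (an assumed convention)."""
    count = text.count(old)
    if count == 0:
        raise ValueError("snippet not found in file")
    if count > 1:
        raise ValueError(f"snippet occurs {count} times; provide more context")
    return text.replace(old, new)
```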
CyberGym.
Claude Opus 4.5 results for this benchmark are reported under non-thinking settings, as specified in its technical documentation. We report scores at difficulty level 1 (the primary setting).
PaperBench.
We report the scores under the CodeDev setting.
Sampling.
All coding task results are averaged over 5 independent runs (Avg@5) to ensure stability across environment initialization and non-deterministic test case ordering.
### E.6 Agentic Evaluation
Tool Setting.
Kimi-K2.5 is equipped with a web search tool, a code interpreter (Python execution environment), and web browsing tools for all agentic evaluations, including HLE with tools and the agentic search benchmarks (BrowseComp, WideSearch, DeepSearchQA, FinSearchComp T2&T3, and Seal-0).
Context Management Strategies.
To handle the extended trajectory lengths inherent in complex agentic tasks, we implement domain-specific context management protocols for selected benchmarks. Unless otherwise specified below, no context management is applied: tasks exceeding the model's supported context window are counted as failures rather than truncated.
- Humanity's Last Exam (HLE). For the HLE tool-augmented setting, we employ a Hide-Tool-Result context management strategy: when the context length exceeds predefined thresholds, only the most recent round of tool messages (observations and return values) is retained, while the reasoning chain and thinking processes from all previous steps are preserved in full.
- BrowseComp. For BrowseComp, we evaluate both with and without context management. Under the context management setting, we adopt the discard-all strategy proposed by DeepSeek, where all history is truncated once token thresholds are exceeded.
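The Hide-Tool-Result strategy can be sketched as follows. The message schema and token estimator are assumptions, and for simplicity only the single most recent tool message is kept (the paper retains the most recent round):

```python
def hide_tool_results(messages: list[dict], max_tokens: int,
                      count_tokens=lambda m: len(m["content"]) // 4) -> list[dict]:
    """Once the running context exceeds `max_tokens`, blank out the content
    of every tool message except the most recent one, while preserving all
    reasoning/thinking messages in full."""
    if sum(count_tokens(m) for m in messages) <= max_tokens:
        return messages
    last_tool = max((i for i, m in enumerate(messages) if m["role"] == "tool"),
                    default=None)
    return [{**m, "content": "[tool result hidden]"}
            if m["role"] == "tool" and i != last_tool else m
            for i, m in enumerate(messages)]
```

The discard-all variant would instead drop all history once the threshold is crossed.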
System Prompt.
All agentic search and HLE evaluations utilize the following unified system prompt, where DATE is dynamically set to the current timestamp:
You are Kimi, today's date: DATE. Your task is to help the user with their questions by using various tools, thinking deeply, and ultimately answering the user's questions. Please follow the following principles strictly during the deep research: 1. Always focus on the user's original question during the research process, avoiding deviating from the topic. 2. When facing uncertain information, use search tools to confirm. 3. When searching, filter high-trust sources (such as authoritative websites, academic databases, and professional media) and maintain a critical mindset towards low-trust sources. 4. When performing numerical calculations, prioritize using programming tools to ensure accuracy. 5. Please use the format [^index^] to cite any information you use. 6. This is a **Very Difficult** problem; do not underestimate it. You must use tools to help your reasoning and then solve the problem. 7. Before you finally give your answer, please recall what the question is asking for.
Sampling Protocol.
To account for the inherent stochasticity in search engine result rankings and dynamic web content availability, results for Seal-0 and WideSearch are averaged over 4 independent runs (Avg@4). All other agentic benchmarks are evaluated under single-run protocols unless explicitly stated otherwise.
### E.7 Computer-Use Evaluation
Hyperparameter Settings.
We set $\texttt{max\_steps\_per\_episode}=100$ for all experiments, with $\texttt{temperature}=0$ for OSWorld-Verified and $\texttt{temperature}=0.1$ for WebArena. Due to resource constraints, all models are evaluated in a one-shot setting. Adhering to the OpenCUA configuration [63], the agent context includes the last 3 history images, the complete thought history, and the task instruction. For WebArena, we manually corrected errors in the evaluation scripts and employed GPT-4o as the judge model for the fuzzy_match function. To ensure fair comparison, Claude Opus 4.5 is evaluated solely with computer-use tools (excluding browser tools), a departure from the System Card configuration [4].
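A sketch of the OpenCUA-style context assembly described above, keeping the full thought history but only the last three screenshots; the step and message field names are assumptions:

```python
def build_cua_context(history: list[dict], instruction: str,
                      max_images: int = 3) -> list[dict]:
    """Assemble the agent context: task instruction, the complete thought
    history, and only the last `max_images` screenshots."""
    n_images = sum(1 for step in history if step.get("screenshot"))
    keep_from = n_images - max_images  # first image-bearing step to keep
    context, seen = [{"role": "system", "content": instruction}], 0
    for step in history:
        context.append({"role": "assistant", "content": step["thought"]})
        if step.get("screenshot"):
            if seen >= keep_from:
                context.append({"role": "user", "image": step["screenshot"]})
            seen += 1
    return context
```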
System Prompt.
We utilize a unified system prompt for all computer use tasks: You are a GUI agent. You are given an instruction, a screenshot of the screen and your previous interactions with the computer. You need to perform a series of actions to complete the task. The password of the computer is {password}. For each step, provide your response in this format: {thought} ## Action: {action} ## Code: {code} In the code section, the code should be either pyautogui code or one of the following functions wrapped in the code block: - {"name": "computer.wait", "description": "Make the computer wait for 20 seconds for installation, running code, etc.", "parameters": {"type": "object", "properties": {}, "required": []}} - {"name": "computer.terminate", "description": "Terminate the current task and report its completion status", "parameters": {"type": "object", "properties": {"status": {"type": "string", "enum": ["success", "failure"], "description": "The status of the task"}, "answer": {"type": "string", "description": "The answer of the task"}}, "required": ["status"]}}
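The response format prescribed by this prompt can be parsed with a small regular expression; this is a minimal sketch, and the production parser may be stricter:

```python
import re

def parse_cua_response(text: str) -> dict:
    """Split a response of the form `{thought} ## Action: {action} ## Code: {code}`."""
    match = re.match(r"(?s)(.*?)##\s*Action:\s*(.*?)##\s*Code:\s*(.*)", text)
    if match is None:
        raise ValueError("response does not follow the expected format")
    thought, action, code = (part.strip() for part in match.groups())
    return {"thought": thought, "action": action, "code": code}
```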
### E.8 Agent Swarm Configuration
Tool Setting.
In addition to the core toolset described in Appendix E.6 (web search, code interpreter, and web browsing), the orchestrator is equipped with two specialized tools for sub-agent creation and scheduling:
- create_subagent: Instantiates a specialized sub-agent with a custom system prompt and identifier for reuse across tasks.
- assign_task: Dispatches assignments to created sub-agents.
The tool schemas are provided below:
```json
{
  "name": "create_subagent",
  "description": "Create a custom subagent with specific system prompt and name for reuse.",
  "parameters": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "Unique name for this agent configuration"
      },
      "system_prompt": {
        "type": "string",
        "description": "System prompt defining the agent's role, capabilities, and boundaries"
      }
    },
    "required": ["name", "system_prompt"]
  }
}
```

```json
{
  "name": "assign_task",
  "description": "Launch a new agent.\nUsage notes:\n  1. You can launch multiple agents concurrently whenever possible, to maximize performance;\n  2. When the agent is done, it will return a single message back to you.",
  "parameters": {
    "type": "object",
    "properties": {
      "agent": {
        "type": "string",
        "description": "Specify which created agent to use."
      },
      "prompt": {
        "type": "string",
        "description": "The task for the agent to perform"
      }
    },
    "required": ["agent", "prompt"]
  }
}
```
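Assuming some callable `assign_task(agent, prompt) -> str` implements the second tool, the usage note that agents may be launched concurrently can be sketched with a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def run_swarm(assign_task, jobs: list[tuple[str, str]]) -> list[str]:
    """Dispatch (agent, prompt) assignments concurrently; each sub-agent
    returns a single message, collected here in submission order."""
    with ThreadPoolExecutor(max_workers=max(1, len(jobs))) as pool:
        futures = [pool.submit(assign_task, agent, prompt)
                   for agent, prompt in jobs]
        return [future.result() for future in futures]
```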
Step Limits.
When operating in Agent Swarm mode, we set computational budgets for the orchestrator and sub-agents. Step limits apply to the aggregate count of tool invocations and environment interactions.
- BrowseComp: The orchestrator is constrained to a maximum of 15 steps. Each spawned sub-agent operates under a limit of 100 steps (i.e., up to 100 tool calls per sub-agent).
- WideSearch: Both the orchestrator and each sub-agent are allocated a maximum budget of 100 steps.
- In-house Bench: The orchestrator is constrained to a maximum of 100 steps. Each spawned sub-agent operates under a limit of 50 steps.
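A minimal sketch of such a budget, counting tool invocations and environment interactions against a shared limit (the accounting details are assumptions):

```python
class StepBudget:
    """Aggregate step counter: every tool invocation or environment
    interaction consumes one step from a fixed budget (e.g., 15 for the
    BrowseComp orchestrator, 100 per sub-agent)."""
    def __init__(self, max_steps: int):
        self.max_steps, self.used = max_steps, 0

    def consume(self) -> bool:
        """Record one step; return False once the budget is exhausted."""
        if self.used >= self.max_steps:
            return False
        self.used += 1
        return True
```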
System Prompt.
You are Kimi, a professional and meticulous expert in information collection and organization. You fully understand user needs, skillfully use various tools, and complete tasks with the highest efficiency. # Task Description After receiving users' questions, you need to fully understand their needs and think about and plan how to complete the tasks efficiently and quickly. # Available Tools To help you complete tasks better and faster, I have provided you with the following tools: 1. Search tool: You can use the search engine to retrieve information, supporting multiple queries in parallel. 2. Browser tools: You can visit web links (web pages, PDFs, etc.), get page content, and perform interactions such as clicking, inputting, finding, and scrolling. 3. Sub Agent tools: - `create_subagent`: Create a new sub-agent with a unique name and clear, specific system prompt. - `assign_task`: Delegate tasks to created sub-agents. Sub-agents can also use search and browser tools. 4. Other tools: Including code execution (IPython, Shell).
### E.9 GDPVal
We cite the GDPVal-AA evaluation by Artificial Analysis, and the scores reported in Table 4 reflect the official leaderboard metrics as of January 28, 2026.
<details>
<summary>x9.png Details</summary>

### Visual Description
## [Diagram/Workflow]: Black Myth: Wukong Video Analysis & Web Showcase Workflow
### Overview
This image is a detailed workflow diagram illustrating a parallel processing strategy for analyzing 32 large gameplay video files (~40GB total) of the game "Black Myth: Wukong" and compiling the results into an interactive HTML web page. The diagram shows a hierarchical agent-based system where a main agent orchestrates multiple sub-agents to perform tasks in parallel. The visual style is a flowchart with text boxes, arrows, and embedded screenshots.
### Components/Axes
The diagram is organized into three main spatial regions:
1. **Header (Top):** Contains the project title, instructions, and a row of video thumbnails.
2. **Main Chart (Center):** A large flowchart split into two columns.
* **Left Column:** Represents the "MAIN AGENT's" high-level task sequence (THINK, SUMMARY, TOOL).
* **Right Column:** Represents the detailed actions of spawned "SUB AGENT" instances (SUB AGENT 1, 2, 3... 32).
3. **Footer (Bottom):** Contains the final summary and output instructions.
**Key Textual Elements & Labels:**
* **Title/Instruction (Top Center):** "As a video web master, please analyze all these files (in ./video/wukong/) and create a cool HTML page: 1. Clarify the chronological timeline of video events 2. Identify the timestamps when player levels are upgraded 3. demonstrate your K2.5 agent's powerful long-video understanding and web-page design capability. Output results to ./video/wukong-result-video-web-master/"
* **Project Stats (Top Right, Red Text):** "32Parts, 24Hours, 40G"
* **Agent Labels:** "MAIN AGENT", "SUB AGENT 1", "SUB AGENT 2", "SUB AGENT 3", "SUB AGENT 32".
* **Process Step Labels:** "THINK", "SUMMARY", "TOOL".
* **Tool/Command Labels:** `ls -la ./video/wukong/`, `<Create Subagent x 32>`, `<Read File x 32>`, `<Create Subagent x 9>`, `<Build Frontend>`, `<Analyze Video> & <Extract Frames with FFMPEG> & <Read Media File> x N`, `<Extract Video Clip with FFMPEG>`.
* **File Paths & Data:**
* `./video/wukong/01-25805508462_Proleague.mp4`
* `./video/wukong-result-video-web-master/analysis_01.json`
* `./video/wukong-result-video-web-master/frames_01/`
* `./video/wukong-result-video-web-master/clips/01_prologue_battle.mp4`
* **Embedded Chinese Text (with English Translation):**
* **Location:** Within the "THINK" box in the right column's "SUB AGENT 1" section.
* **Transcription (original in Chinese; translated):** "The user wants me to analyze a Black Myth: Wukong gameplay video."
* **Data Points & Values:**
* Number of videos: 32
* Total size: ~40GB
* Individual file size range: 700MB - 2GB
* Sub-agent threshold: Files >25MB
* Extracted frames for Video #01: 82 frames
* Clip extraction time range: 00:05:40 to 00:05:50 (10-second clip)
* Clip encoding settings: CRF 23, preset fast
### Detailed Analysis
The workflow proceeds as follows:
1. **Initialization (Main Agent - Left Column):**
* **THINK:** Acknowledges the task: analyze 32 video files to create an HTML page with a timeline, level-up timestamps, and demonstrate capability.
* **TOOL:** Lists the directory `./video/wukong/` to see the files.
* **THINK:** Discovers 32 large video files (~40GB total). Decides on a parallel strategy using sub-agents because each file is >25MB. The plan is for each sub-agent to analyze one video for timeline, level-ups, and boss fights, then aggregate results.
* **TOOL:** Spawns 32 sub-agents (`<Create Subagent x 32>`).
2. **Parallel Analysis (Sub Agents - Right Column):**
* The diagram zooms into the process for "SUB AGENT 1" as an example.
* **MAIN AGENT (to Sub Agent):** Assigns the task to analyze `./video/wukong/01-25805508462_Proleague.mp4` (Video #01 - Prologue, 320MB, ~5-8 minutes).
* **SUB AGENT 1 - THINK:** Plans to explore the file and extract frames.
* **SUB AGENT 1 - TOOL:** Uses tools to analyze the video, extract frames with FFMPEG, and read the media file. Shows a screenshot of extracted frames.
* **SUB AGENT 1 - SUMMARY:** Reports completion, creating `analysis_01.json` and saving 82 frames to `./video/wukong-result-video-web-master/frames_01/`.
3. **Clip Extraction (Main Agent & Sub Agent):**
* **Main Agent (Left Column):** After initial analysis, it reads the 32 analysis files to identify key moments (e.g., Prologue battle 00:05:40-00:06:10, Chapter 1 boss 02:27:00-02:27:30). It then spawns 9 more sub-agents to extract these clips in parallel.
* **Sub Agent (Right Column - Lower Section):** Receives a command to extract a 10-second clip from the Prologue video (00:05:40 to 00:05:50) showing the battle with Erlang Shen. It uses `ffmpeg` with specific parameters (`-ss 00:05:40 -i ... -t 10 -c:v libx264 -crf 23 -preset fast`) and saves the output to `./video/wukong-result-video-web-master/clips/01_prologue_battle.mp4`.
4. **Compilation & Output (Main Agent - Left Column):**
* **TOOL:** Gathers key analysis files and builds the frontend (`<Build Frontend>`).
* **SUMMARY (Footer):** Declares "Analysis Complete! Black Myth: Wukong Journey Showcase". Instructs the user to open `./video/wukong-result-video-web-master/index.html` in a browser to view the interactive showcase with embedded clips, animated charts, and a chronological timeline.
### Key Observations
* **Hierarchical Parallelism:** The core strategy is using a main agent to manage and spawn numerous sub-agents (up to 32 initially, then 9 more) to handle the large workload concurrently.
* **Tool Integration:** The workflow heavily relies on external tools like `ls`, `ffmpeg` (for frame extraction and clip cutting), and file reading/writing operations.
* **Structured Output:** The process generates structured data (`analysis_01.json`), media assets (frames, clips), and a final compiled HTML page.
* **Visual Documentation:** The diagram itself uses embedded screenshots (video thumbnails, extracted frames) to provide concrete examples of the data being processed.
* **Scale Indication:** The red text "32Parts, 24Hours, 40G" at the top right emphasizes the significant scale of the input data.
### Interpretation
This diagram is a process map of a complex, automated data processing pipeline. It demonstrates a scalable solution to a big data problem in multimedia analysis.
* **What it Suggests:** The system is designed for efficiency and scalability. By breaking down a monolithic task (analyzing 40GB of video) into independent, parallel sub-tasks (one per video file), it drastically reduces total processing time. The use of a main agent for orchestration and sub-agents for execution mimics a distributed computing model.
* **How Elements Relate:** The left column (Main Agent) acts as the control plane, making strategic decisions and delegating work. The right column (Sub Agents) represents the data plane, performing the repetitive, intensive work. The arrows show the flow of commands and data between these layers. The final HTML page is the synthesized product, transforming raw video data into an interactive, human-readable format.
* **Notable Anomalies/Patterns:** The workflow shows a two-phase parallel approach: first, broad analysis of all videos; second, targeted extraction of key clips based on the initial analysis. This is an intelligent, adaptive pattern. The explicit mention of file size thresholds (>25MB) for triggering sub-agent use indicates a built-in heuristic for resource management. The entire process is a practical example of "long-video understanding" as mentioned in the initial instructions, moving from raw pixels to structured metadata (timestamps, events) and finally to a curated presentation.
</details>
Figure 11: Qualitative example of Kimi K2.5 analyzing a complete playthrough of Black Myth: Wukong (24 hours of continuous gameplay across 32 videos at 1080p) using parallel visual agents. See generated webpage and source videos (all rights reserved by source authors).
<details>
<summary>x10.png Details</summary>

### Visual Description
## [Multi-Panel Technical Analysis Image]: Maze Solving, Pie Chart Analysis, and Spot-the-Difference Game
### Overview
The image is a composite of three vertical columns, each documenting a separate image analysis task performed by an AI assistant. Each column follows a similar structure: a user prompt, a "THINK" section detailing the reasoning process, a "TOOL" section showing code execution or visual output, and a final "SUMMARY" section. The tasks are: 1) Solving a maze, 2) Analyzing a pie chart's color percentages, and 3) Finding differences between two similar images.
### Column 1: Maze Pathfinding Analysis
#### Components/Axes
* **User Prompt:** "Please help the ant walk through the maze. Do not cross walls."
* **Visual Elements:** A black-and-white line drawing of a maze. A starting point is marked with a hill icon on the left. An ending point is marked with an ant icon on the right. A red line traces the solution path.
* **Algorithm Mentioned:** Breadth-First Search (BFS) for finding the shortest path.
* **Key Coordinates (Approximate):** Start (hill): (100, 200). End (ant): (600, 450).
#### Detailed Analysis
1. **Task Definition:** The AI interprets the request as finding a path from the hill (start) to the ant (end) without crossing maze walls.
2. **Process:**
* **Step 1 - Segmentation:** The maze image is processed to identify walls (black) and paths (white).
* **Step 2 - Coordinate Identification:** The start and end points are manually identified from the image.
* **Step 3 - Pathfinding:** A Breadth-First Search (BFS) algorithm is used to find the shortest path through the maze's white corridors.
* **Step 4 - Path Visualization:** The found path is drawn as a red line on the original maze image.
3. **Result:** The path is successfully found and visualized. The final path consists of **3,288 steps** (pixels).
#### Key Observations
* The maze is complex with many dead ends and winding corridors.
* The solution path (red line) is clearly visible, snaking from the left start to the right end.
* The process involved both automated algorithms (BFS) and manual coordinate input.
#### Interpretation
This panel demonstrates a classic computer vision and robotics problem: autonomous navigation in a constrained environment. The successful application of BFS shows how graph search algorithms can solve spatial puzzles. The manual step of identifying start/end points highlights a common human-in-the-loop requirement in such systems. The final red path serves as a clear, verifiable output of the algorithm's correctness.
---
### Column 2: Pie Chart Color Percentage Analysis
#### Components/Axes
* **User Question:** "What is the total percentage of the green and blue areas?"
* **Chart Type:** Pie chart.
* **Legend/Sections:**
* **Section 1 (Black):** Top-left segment.
* **Section 2 (Blue):** Top-right segment.
* **Section 3 (Green):** Bottom segment.
* **Color Detection Results (from tool output):**
* Total colored pixels: 61,693
* Red/Pink pixels: 28,934
* Blue pixels: 14,811
* Green pixels: 17,948
#### Detailed Analysis
1. **Methodology:** The AI uses Python image processing to detect colors in the pie chart. It converts the image to HSV color space and creates masks for red/pink, blue, and green ranges.
2. **Calculated Percentages (from "THINK" section):**
* **Section 1 (Black):** 45.9%
* **Section 2 (Blue):** 23.4%
* **Section 3 (Green):** 30.7%
3. **Answer to User:** The total percentage of green and blue areas is **54.1%** (23.4% + 30.7%).
4. **Visual Verification:** The pie chart visually confirms that the green and blue sections together make up slightly more than half of the circle.
#### Key Observations
* There is a discrepancy between the initial pixel counts (which include "Red/Pink") and the final section labels (Black, Blue, Green). The AI reconciles this by interpreting the "Red/Pink" pixels as representing the "Black" section, likely due to anti-aliasing or image compression artifacts.
* The final percentages are provided with one decimal place of precision.
#### Interpretation
This task showcases practical image analysis for data extraction. The key challenge was accurate color segmentation, where real-world image artifacts (like anti-aliasing) complicated the detection of pure "black." The AI's reasoning to map detected "Red/Pink" pixels to the "Black" section demonstrates adaptive problem-solving. The final answer directly addresses the user's query by summing the relevant percentages, providing a clear, quantitative result.
---
### Column 3: Spot-the-Difference Game Analysis
#### Components/Axes
* **User Request:** "Find the differences in this picture."
* **Visual Elements:** Two nearly identical cartoon images placed side-by-side (top and bottom). The scene depicts a fox in a green coat and hat in a forest setting with trees, a stump, and a bird.
* **Analysis Output:** A "difference map" (a black image with white contours highlighting areas of change) and a final image with the 10 major differences circled in red and numbered.
#### Detailed Analysis
1. **Process:**
* **Step 1 - Automated Detection:** The AI performs a pixel-wise comparison between the two images, generating a difference map. It initially detects **27 regions** of change.
* **Step 2 - Manual Refinement:** Recognizing that automated detection may include noise or minor artifacts, the AI manually refines the list to identify the **10 most prominent, intentional differences**.
2. **List of 10 Major Differences (from SUMMARY):**
1. Bird vs. UFO
2. Tree branch shape
3. Fox's mouth expression (Closed vs. Open/Teeth)
4. Fox's hat band color (Red vs. Yellow)
5. Fox's third button
6. Tree knot position
7. Acorn vs. Paper airplane
8. Mushroom spot pattern
9. Sign text change ("YOU SHAN'T ENTER!" vs. "I'LL GET IT.")
10. Folder file position/shape
#### Key Observations
* The differences are a mix of object substitutions (Bird/UFO, Acorn/Airplane), attribute changes (color, expression), and positional/shape alterations.
* The final annotated image provides clear spatial grounding for each difference, numbered for easy reference.
* The process moved from broad, automated detection to focused, intelligent curation of meaningful changes.
#### Interpretation
This panel illustrates a common computer vision task: change detection. The two-stage approach (automated detection followed by human-like curation) is highly effective. The automated system casts a wide net, ensuring no potential difference is missed. The subsequent curation step applies contextual understanding to filter out noise and identify the semantically significant changes that a human would consider "the differences." This mimics human perceptual grouping and attention. The final list is not just a set of pixel changes but a catalog of narrative alterations in the scene.
---
### Overall Interpretation
This composite image serves as a demonstration of an AI's multimodal capabilities in technical document and image analysis. It showcases three distinct problem-solving paradigms:
1. **Spatial Navigation & Algorithm Application** (Maze): Using graph theory to solve a physical constraint problem.
2. **Quantitative Data Extraction** (Pie Chart): Applying image processing to derive precise numerical data from a visual representation.
3. **Perceptual Comparison & Reasoning** (Spot-the-Difference): Combining low-level pixel comparison with high-level scene understanding to identify meaningful changes.
The consistent structure across panels (THINK, TOOL, SUMMARY) highlights a transparent, step-by-step reasoning process, making the AI's methodology auditable. The tasks progress from a well-defined algorithmic problem (maze) to a more interpretive one (spotting intentional differences), demonstrating a range of analytical depth.
</details>
Figure 12: Qualitative examples of Kimi K2.5 solving visual reasoning tasks via tool use.
## Appendix F Visualization
Figure 11 demonstrates our Agent Swarm tackling a challenging long-form video understanding task: analyzing a complete playthrough of Black Myth: Wukong (24 hours of continuous gameplay across 32 videos, totaling 40GB). The system employs a hierarchical multi-agent architecture where a Main Agent orchestrates parallel Sub Agents to process individual video segments independently. Each Sub Agent performs frame extraction, temporal event analysis, and key moment identification (e.g., boss fights, level-ups). The Main Agent subsequently aggregates these distributed analyses to synthesize a comprehensive HTML showcase featuring chronological timelines, embedded video clips, and interactive visualizations. This example demonstrates the system's ability to handle massive-scale multimodal content through parallelization while maintaining coherent long-context understanding.
Figure 12 presents qualitative examples of Kimi K2.5 solving diverse visual reasoning tasks via tool-augmented reasoning. The model demonstrates: (1) Maze Solvingâprocessing binary image segmentation and implementing pathfinding algorithms (BFS) to navigate complex mazes; (2) Pie Chart Analysisâperforming pixel-level color segmentation and geometric calculations to determine precise area proportions; and (3) Spot-the-Differenceâemploying computer vision techniques to detect pixel-level discrepancies between image pairs. These examples highlight the modelâs capability to decompose complex visual problems into executable code, iteratively refine strategies based on intermediate results, and synthesize precise answers through quantitative visual analysis.
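The BFS pathfinding used in the maze example can be sketched on a binary occupancy grid (0 = free, 1 = wall); this is a generic illustration rather than the model's generated code:

```python
from collections import deque

def bfs_shortest_path(grid, start, goal):
    """Shortest 4-connected path on a binary grid (0 = free, 1 = wall);
    returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk parents back to start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None
```

In the maze example, the grid would come from thresholding the maze image into walls and corridors before running the search.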